**Special Functions with Applications to Mathematical Physics**

Editor

**Francesco Mainardi**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Francesco Mainardi University of Bologna Bologna, Italy

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: https://www.mdpi.com/journal/mathematics/special_issues/Special_Functions_Mathematical_Physics).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-6990-1 (Hbk) ISBN 978-3-0365-6991-8 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**



## **Preface to "Special Functions with Applications to Mathematical Physics"**

This MDPI booklet lists the articles published in the Special Issue of the journal Mathematics devoted to special functions with applications in mathematical physics in the years 2020–2021.

The call for papers considered theories and applications of higher transcendental functions, including topics found mainly in the list of keywords:



However, the Special Issue was not limited to the above list, for example, when the content of a paper was clearly related to some higher transcendental functions and their applications.

Special attention was reserved for functions of particular relevance in the theory and applications of fractional calculus, and for their visualization through illuminating plots.

Both research and survey articles were included in this booklet, according to the content list.

**Francesco Mainardi** *Editor*

## *Article* **Asymptotic Expansion of the Modified Exponential Integral Involving the Mittag-Leffler Function**

#### **Richard Paris**

Division of Computing and Mathematics, Abertay University, Dundee DD1 1HG, UK; r.paris@abertay.ac.uk

Received: 21 February 2020; Accepted: 10 March 2020; Published: 16 March 2020

**Abstract:** We consider the asymptotic expansion of the generalised exponential integral involving the Mittag-Leffler function introduced recently by Mainardi and Masina [*Fract. Calc. Appl. Anal.* **21** (2018) 1156–1169]. We extend the definition of this function using the two-parameter Mittag-Leffler function. The expansions of the similarly extended sine and cosine integrals are also discussed. Numerical examples are presented to illustrate the accuracy of each type of expansion obtained.

**Keywords:** asymptotic expansions; exponential integral; Mittag-Leffler function; sine and cosine integrals

**MSC:** 26A33; 33E12; 34A08; 34C26

#### **1. Introduction**

The complementary exponential integral Ein(*z*) is defined by

$$\operatorname{Ein}(z) = \int\_0^z \frac{1 - e^{-t}}{t} \, dt = \sum\_{n=1}^\infty \frac{(-)^{n-1} z^n}{nn!} \quad (z \in \mathbb{C}) \tag{1}$$

and is an entire function. Its connection with the classical exponential integral $E\_1(z) = \int\_z^\infty t^{-1} e^{-t}\, dt$, valid in the cut plane $|\arg z| < \pi$, is [1], p. 150,

$$\text{Ein}(z) = \log z + \gamma + E\_1(z),\tag{2}$$

where *γ* = 0.5772156 . . . is the Euler-Mascheroni constant.

In a recent paper, Mainardi and Masina [2] proposed an extension of Ein(*z*) by replacing the exponential function in (1) by the one-parameter Mittag-Leffler function

$$E\_{\alpha}(z) = \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma(\alpha n+1)} \qquad (z \in \mathbb{C},\ \alpha > 0),$$

which generalises the exponential function *ez*. They introduced the function for any *α* > 0 in the cut plane | arg *z*| < *π*

$$\operatorname{Ein}\_{\alpha}(z) = \int\_0^z \frac{1 - E\_{\alpha}(-t^{\alpha})}{t^{\alpha}}\, dt = \sum\_{n=0}^\infty \frac{(-)^n z^{\alpha n+1}}{(\alpha n+1)\Gamma(\alpha n+\alpha+1)},\tag{3}$$

which when *α* = 1 reduces to the function Ein(*z*). A physical application of this function for 0 ≤ *α* ≤ 1 arises in the study of the creep features of a linear viscoelastic model; see Reference [3] for details. An analogous extension of the generalised sine and cosine integrals was also considered in Reference [2]. Plots of all these functions for *α* ∈ [0, 1] were given.
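As a quick numerical illustration (an informal Python sketch; the helper names `ml`, `ein_alpha` and `ein_classical` are ours, not the paper's), the series (1) and (3) can be summed directly in floating point; for *α* = 1 the two definitions agree term by term:

```python
import math

def ml(alpha, z, terms=80):
    # One-parameter Mittag-Leffler function E_alpha(z) by its defining series.
    return sum(z**n / math.gamma(alpha * n + 1) for n in range(terms))

def ein_alpha(alpha, z, terms=80):
    # Ein_alpha(z) from the series in (3).
    return sum((-1)**n * z**(alpha * n + 1)
               / ((alpha * n + 1) * math.gamma(alpha * n + alpha + 1))
               for n in range(terms))

def ein_classical(z, terms=80):
    # The classical Ein(z) from the series in (1).
    return sum((-1)**(n - 1) * z**n / (n * math.factorial(n))
               for n in range(1, terms))

# alpha = 1 recovers the classical complementary exponential integral:
print(ein_alpha(1, 2.0), ein_classical(2.0))
```

For *α* = 1 the *n*-th term of (3) is (−)^*n* *z*^(*n*+1)/((*n*+1)(*n*+1)!), which is exactly the (*n*+1)-th term of (1), so the two sums coincide to rounding error.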

Here we consider a slightly more general version of (3) based on the two-parameter Mittag-Leffler function given by

$$E\_{\alpha,\beta}(z) = \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma(\alpha n + \beta)} \qquad (z \in \mathbb{C},\ \alpha > 0),$$

where *β* will be taken to be real. Then the extended complementary exponential integral we shall consider is

$$\operatorname{Ein}\_{\alpha,\beta}(z) = \int\_0^z \frac{\Gamma(\beta)^{-1} - E\_{\alpha,\beta}(-t^{\alpha})}{t^{\alpha}}\, dt = \sum\_{n=1}^\infty \frac{(-)^{n-1}}{\Gamma(\alpha n + \beta)} \int\_0^z t^{\alpha(n-1)}\, dt$$

$$= z \sum\_{n=0}^\infty \frac{(-)^n z^{\alpha n}}{(\alpha n + 1)\Gamma(\alpha n + \alpha + \beta)} \qquad\qquad (4)$$

upon replacement of *n* − 1 by *n* in the last summation. When *β* = 1 this reduces to (3) so that Ein*α*,1(*z*) = Ein*α*(*z*).
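The defining integral and the series in (4) can be cross-checked numerically. In the sketch below (illustrative only; the Simpson grid and truncation lengths are ad hoc choices) the numerator of the integrand is taken as 1/Γ(β), which reduces to 1 when β = 1 and is the normalisation under which the integral reproduces the series on the right of (4):

```python
import math

def ml2(alpha, beta, z, terms=80):
    # Two-parameter Mittag-Leffler series E_{alpha,beta}(z).
    return sum(z**n / math.gamma(alpha * n + beta) for n in range(terms))

def ein_ab_series(alpha, beta, z, terms=80):
    # Ein_{alpha,beta}(z) from the series in (4).
    return z * sum((-1)**n * z**(alpha * n)
                   / ((alpha * n + 1) * math.gamma(alpha * n + alpha + beta))
                   for n in range(terms))

def ein_ab_quad(alpha, beta, z, n=2000):
    # Composite Simpson's rule on the defining integral.  The integrand
    # (1/Gamma(beta) - E_{alpha,beta}(-t^alpha)) / t^alpha is finite at t = 0,
    # where it tends to 1/Gamma(alpha + beta).
    def f(t):
        if t == 0.0:
            return 1.0 / math.gamma(alpha + beta)
        return (1.0 / math.gamma(beta) - ml2(alpha, beta, -t**alpha)) / t**alpha
    h = z / n
    s = f(0.0) + f(z)
    s += 4 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

print(ein_ab_series(1.0, 4/3, 1.5), ein_ab_quad(1.0, 4/3, 1.5))
```

For non-integer *α* the integrand behaves like *t*^*α* near the origin, so the quadrature converges more slowly there; the comparison is correspondingly looser.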

The asymptotic expansion of this function will be obtained for large complex *z* with the parameters *α*, *β* held fixed. We achieve this by considering the asymptotics of a related function using the theory developed for integral functions of hypergeometric type as discussed, for example, in Reference [4], §2.3. An interesting feature of the expansion of Ein*α*,*β*(*x*) for *x* → +∞ when *α* ∈ (0, 1] is the appearance of a logarithmic term whenever *α* = 1, 1/2, 1/3, .... Similar expansions are obtained for the extended sine and cosine integrals in Section 4. The paper concludes with the presentation of some numerical results that demonstrate the accuracy of the different expansions obtained.

#### **2. The Asymptotic Expansion of a Related Function for** *|z| →* **∞**

To determine the asymptotic expansion of Ein*α*,*β*(*z*) for large complex *z* with the parameters *α* and *β* held fixed, we shall find it convenient to consider the related function defined by

$$F(\chi) := \sum\_{n=0}^{\infty} \frac{\chi^n}{(\alpha n+\gamma)\Gamma(\alpha n+\alpha+\beta)} = \sum\_{n=0}^{\infty} g(n) \frac{\chi^n}{n!} \qquad (\chi \in \mathbb{C}), \tag{5}$$

where

$$g(n) = \frac{\Gamma(n+1)}{(\alpha n+\gamma)\Gamma(\alpha n+\alpha+\beta)} = \frac{\Gamma(\alpha n+\gamma)\,\Gamma(n+1)}{\Gamma(\alpha n+\gamma+1)\Gamma(\alpha n+\alpha+\beta)}\,.$$

It is readily seen that, when *γ* = 1,

$$\operatorname{Ein}\_{\alpha,\beta}(z) = z \, F(-z^{\alpha})\,.$$

The parameter *γ* is positive but otherwise free; it will be assigned two specific values in Sections 3 and 4, namely *γ* = 1 and *γ* = 1 + *α*. It will be shown that the asymptotic expansion of *F*(*χ*) consists of an algebraic and an exponential expansion valid in different sectors of the complex *χ*-plane.

The function *F*(*χ*) in (5) is a case of the Fox-Wright function

$${}\_p\Psi\_q(\chi) = \sum\_{n=0}^{\infty} \frac{\prod\_{r=1}^p \Gamma(\alpha\_r n + a\_r)}{\prod\_{r=1}^q \Gamma(\beta\_r n + b\_r)} \frac{\chi^n}{n!},\tag{6}$$

corresponding to *p* = *q* = 2. In (6) the parameters *α<sup>r</sup>* and *β<sup>r</sup>* are real and positive and *ar* and *br* are arbitrary complex numbers. We also assume that the *α<sup>r</sup>* and *ar* are subject to the restriction

$$\alpha\_r n + a\_r \neq 0, -1, -2, \ldots \qquad (n = 0, 1, 2, \ldots\ ;\ 1 \le r \le p),$$

so that no gamma function in the numerator of (6) is singular. We introduce the following parameters associated with *<sup>p</sup>*Ψ*q*(*χ*), which play a key role in the analysis of its asymptotic behaviour (empty sums and products are interpreted as zero and unity, respectively):

$$\kappa = 1 + \sum\_{r=1}^{q} \beta\_r - \sum\_{r=1}^{p} \alpha\_r\,, \qquad h = \prod\_{r=1}^{p} \alpha\_r^{\alpha\_r} \prod\_{r=1}^{q} \beta\_r^{-\beta\_r}\,,$$

$$\vartheta = \sum\_{r=1}^{p} a\_r - \sum\_{r=1}^{q} b\_r + \frac{1}{2}(q - p)\,, \qquad \vartheta' = 1 - \vartheta\,. \tag{7}$$

The asymptotic expansion of *F*(*χ*) is discussed in detail in Reference [5], Section 12, and is summarised in References [4,6]. The algebraic expansion of *F*(*χ*) is obtained from the Mellin-Barnes integral representation ([4], p. 56)

$$F(\chi) = \frac{1}{2\pi i} \int\_{c-\infty i}^{c+\infty i} \frac{\Gamma(-s)\Gamma(1+s)\,(\chi e^{\mp \pi i})^s}{(\alpha s+\gamma)\Gamma(\alpha s+\alpha+\beta)}\, ds, \qquad |\arg(-\chi)| < \pi(1-\tfrac{1}{2}\alpha),$$

where, with −*γ*/*α* < *c* < 0, the integration path lies to the left of the poles of Γ(−*s*) at *s* = 0, 1, 2, ... but to the right of the poles at *s* = −*γ*/*α* and *s* = −*k* − 1, *k* = 0, 1, 2, ... . The upper or lower sign is taken according as arg *χ* > 0 or arg *χ* < 0, respectively. It is seen that when *α* = *γ*/*m*, *m* = 1, 2, ... the pole at *s* = −*m* is double and its residue must be evaluated accordingly. Displacement of the integration path to the left when 0 < *α* < 2 and evaluation of the residues then produces the algebraic expansion *H*(*χe*∓*π<sup>i</sup>* ), where

$$H(\chi) = \begin{cases} \frac{\pi/\alpha}{\sin(\gamma \pi/\alpha)} \frac{\chi^{-\gamma/\alpha}}{\Gamma(\alpha+\beta-\gamma)} + \sum\_{k=0}^{\infty} \frac{(-)^{k} \chi^{-k-1}}{(\gamma - \alpha(k+1))\, \Gamma(\beta - \alpha k)} & (\alpha \neq \frac{\gamma}{m})\\\\ \frac{(-)^{m-1} \chi^{-m}}{\Gamma(\alpha+\beta-\gamma)} \left\{ \frac{m}{\gamma} \log \chi - \psi(\alpha+\beta-\gamma) \right\} + \sum\_{\substack{k=0\\k\neq m-1}}^{\infty} \frac{(-)^{k} \chi^{-k-1}}{(\gamma - \alpha(k+1))\, \Gamma(\beta - \alpha k)} & (\alpha = \frac{\gamma}{m}), \end{cases} \tag{8}$$

and *ψ* denotes the logarithmic derivative of the gamma function.

The exponential expansion associated with *<sup>p</sup>*Ψ*q*(*χ*) is given by ([6], p. 299; [4], p. 57)

$$\mathcal{E}(\chi) := X^{\vartheta} e^{X} \sum\_{j=0}^{\infty} A\_{j} X^{-j}, \qquad X = \kappa (h\chi)^{1/\kappa}, \tag{9}$$

where the coefficients *Aj* are those appearing in the inverse factorial expansion

$$\frac{1}{\Gamma(1+s)} \frac{\prod\_{r=1}^{p} \Gamma(\alpha\_r s + a\_r)}{\prod\_{r=1}^{q} \Gamma(\beta\_r s + b\_r)} = \kappa A\_0 (h \kappa^{\kappa})^s \left\{ \sum\_{j=0}^{M-1} \frac{c\_j}{\Gamma(\kappa s + \vartheta' + j)} + \frac{\rho\_M(s)}{\Gamma(\kappa s + \vartheta' + M)} \right\} \tag{10}$$

with *c*<sup>0</sup> = 1. Here *M* is a positive integer and *ρM*(*s*) = *O*(1) for |*s*| → ∞ in | arg *s*| < *π*. The constant *A*<sup>0</sup> is specified by

$$A\_0 = (2\pi)^{\frac{1}{2}(p-q)}\,\kappa^{-\frac{1}{2}-\vartheta} \prod\_{r=1}^p \alpha\_r^{a\_r-\frac{1}{2}} \prod\_{r=1}^q \beta\_r^{\frac{1}{2}-b\_r}.$$

The coefficients *cj* are independent of *s* and depend only on the parameters *p*, *q*, *αr*, *βr*, *ar* and *br*.

For the function *F*(*χ*), we have

$$\kappa = \alpha, \quad h = \alpha^{-\alpha}, \quad \vartheta = -\alpha - \beta, \quad A\_0 = \alpha^{-1}.$$

We are in the fortunate position that the normalised coefficients *cj* can in this case be determined explicitly as *cj* = (*α* + *β* − *γ*)*j*. This follows from the well-known (convergent) expansion given in References [4] (p. 41) and [7],

$$\frac{1}{(\alpha s+\gamma)\,\Gamma(\alpha s+\alpha+\beta)} = \sum\_{j=0}^{\infty} \frac{(\alpha+\beta-\gamma)\_j}{\Gamma(\alpha s+\vartheta'+j)} \qquad (\Re(s)>-\gamma/\alpha), \tag{11}$$

to which, in the case of *F*(*χ*), the ratio of gamma functions appearing on the left-hand side of (10) reduces. Then, with *X* = *χ*^(1/*α*), we have from (9) the exponential expansion associated with *F*(*χ*) given by

$$\mathcal{E}(\chi) = \frac{1}{\alpha}\, \chi^{\vartheta/\alpha} \exp\left[\chi^{1/\alpha}\right] \sum\_{j=0}^{\infty} (\alpha + \beta - \gamma)\_j\, \chi^{-j/\alpha}\,. \tag{12}$$

From Reference [4] pp. 57–58, we then obtain the asymptotic expansion for |*χ*| → ∞ when 0 < *α* < 2

$$F(\chi) \sim \begin{cases} \mathcal{E}(\chi) + H(\chi e^{\mp \pi i}) & |\arg \chi| < \frac{1}{2}\pi \alpha \\\\ H(\chi e^{\mp \pi i}) & |\arg(-\chi)| < \pi(1 - \frac{1}{2}\alpha) \end{cases} \tag{13}$$

and, when *α* = 2,

$$F(\chi) \sim \mathcal{E}(\chi) + \mathcal{E}(\chi e^{\mp 2\pi i}) + H(\chi e^{\mp \pi i}) \qquad |\arg \chi| \le \pi. \tag{14}$$

The upper and lower signs are chosen according as arg *χ* > 0 or arg *χ* < 0, respectively. It may be noted that the expansions <sup>E</sup>(*χe*∓2*π<sup>i</sup>* ) in (14) only become significant in the neighbourhood of arg *χ* = ±*π*. When *α* > 2, the expansion of *F*(*χ*) is exponentially large for all values of arg *χ* (see Reference [4], p. 58) and accordingly we omit this case as it is unlikely to be of physical interest.

**Remark 1.** *The exponential expansion* E(*χ*) *in (13) continues to hold beyond the sector* | arg *χ*| < *πα*/2*, where it becomes exponentially small in the sectors πα*/2 ≤ | arg *χ*| < *πα when* 0 < *α* ≤ 1*. The rays* arg *χ* = ±*πα are Stokes lines, where* E(*χ*) *is maximally subdominant relative to the algebraic expansion H*(*χe*∓*π<sup>i</sup>* )*. On these rays,* E(*χ*) *undergoes a Stokes phenomenon, whereby the exponentially small expansion "switches off" in a smooth manner as* | arg *χ*| *increases [1], §2.11(iv), with its value on the rays given to leading order by* E(*χ*)/2*; see Reference [8] for a more detailed discussion of this point in the context of the confluent hypergeometric functions. We do not consider exponentially small contributions to F*(*χ*) *here, except to briefly mention in Section 3 the situation pertaining to the case α* = 1*.*

#### **3. The Asymptotic Expansion of Ein***α***,***β***(***z***) for** *|z| →* **∞**

The asymptotic expansion of Ein*α*,*β*(*z*) defined in (4) can now be constructed from that of *F*(*χ*) with the parameter *γ* = 1. It is sufficient, for real *α*, *β*, to consider 0 ≤ arg *z* ≤ *π*, since the expansion when arg *z* < 0 is given by the conjugate value. With *χ* = −*z<sup>α</sup>* = *e*<sup>−*πi*</sup> *z<sup>α</sup>*, the exponentially large sector | arg *χ*| < *πα*/2 becomes | − *π* + *α* arg *z*| < *πα*/2; that is,

$$\theta\_0 < \arg z < \theta\_0 + \pi, \qquad \theta\_0 := \frac{\pi}{2\alpha}(2 - \alpha). \tag{15}$$

On the boundaries of this sector the exponential expansion is of an oscillatory character. When 0 < *α* < 2/3, we note that the exponentially large sector (15) lies outside the sector of interest 0 ≤ arg *z* ≤ *π*.

We define the algebraic and exponential asymptotic expansions

$$H\_{\alpha,\beta}(z) = \begin{cases} \frac{\pi/\alpha}{\sin(\pi/\alpha)\,\Gamma(\alpha+\beta-1)} + \sum\_{k=0}^{\infty} \frac{(-)^k z^{1-\alpha(k+1)}}{(1-\alpha(k+1))\Gamma(\beta-\alpha k)} & (\alpha \neq m^{-1}) \\\\ \frac{(-)^{m-1}}{\Gamma(\alpha+\beta-1)} \{\log z - \psi(\alpha+\beta-1)\} + \sum\_{\substack{k=0 \\ k \neq m-1}}^{\infty} \frac{(-)^k z^{1-\alpha(k+1)}}{(1-\alpha(k+1))\Gamma(\beta-\alpha k)} & (\alpha = m^{-1}), \end{cases} \tag{16}$$

where *m* = 1, 2, . . . , and

$$\mathcal{E}\_{\alpha,\beta}(z) = \frac{(e^{-\pi i/\alpha}z)^{\vartheta}}{\alpha} \exp\left[e^{-\pi i/\alpha}z\right] \sum\_{j=0}^{\infty} (\alpha+\beta-1)\_j\, (e^{-\pi i/\alpha}z)^{-j}, \tag{17}$$

where we recall that *ϑ* = −*α* − *β*. Then the following result holds:

**Theorem 1.** *Let m be a positive integer, with α* > 0 *and β real and θ*<sup>0</sup> = *π*(2 − *α*)/(2*α*)*. Then the following expansions hold for* |*z*| → ∞

$$\operatorname{Ein}\_{\alpha,\beta}(z) \sim H\_{\alpha,\beta}(z) \qquad (0 \le \arg z \le \pi) \tag{18}$$

*when* 0 < *α* < 2/3*, and*

$$\operatorname{Ein}\_{\alpha,\beta}(z) \sim \begin{cases} H\_{\alpha,\beta}(z) & (0 \le \arg z < \theta\_0) \\\\ z\, \mathcal{E}\_{\alpha,\beta}(z) + H\_{\alpha,\beta}(z) & (\theta\_0 \le \arg z \le \pi) \end{cases} \tag{19}$$

*when* 2/3 ≤ *α* < 2*. Finally, when α* = 2 *we have* Ein2,*β*(−*z*) = −Ein2,*β*(*z*) *and it is therefore sufficient to consider* 0 ≤ arg *z* ≤ *π*/2*. Then, from (14), we obtain the expansion when α* = 2

$$\operatorname{Ein}\_{2,\beta}(z) \sim z\{\mathcal{E}\_{2,\beta}(z) + \mathcal{E}\_{2,\beta}(z e^{\pi i})\} + H\_{2,\beta}(z) \qquad (0 \le \arg z \le \tfrac{1}{2}\pi). \tag{20}$$

We note from Theorem 1 that when *z* → −∞ the value of Ein*α*,*β*(*z*) is, in general, complex-valued.

In the case of main physical interest, when *z* = *x* > 0 is a real variable, we have the following expansion:

**Theorem 2.** *When z* = *x* (> 0) *we have from Theorem 1 the expansions*

$$\operatorname{Ein}\_{\alpha,\beta}(x) \sim H\_{\alpha,\beta}(x) \tag{21}$$

*for* 0 < *α* < 2*, and from (17) and (20) when α* = 2

$$\operatorname{Ein}\_{2,\beta}(x) \sim H\_{2,\beta}(x) - x^{-1-\beta} \sum\_{j=0}^{\infty} \frac{(1+\beta)\_j}{x^j} \cos\left[x - \tfrac{1}{2}\pi(\beta+j)\right] \tag{22}$$

*as x* → +∞*.*

It is worth noting that a logarithmic term is present in the asymptotic expansion of Ein*α*,*β*(*x*) whenever *α* = 1, 1/2, 1/3, ....

*The Case α* = 1

The special case *α* = 1 deserves further consideration. From (16) and (21) we obtain the expansion

$$\operatorname{Ein}\_{1,\beta}(x) \sim \frac{1}{\Gamma(\beta)} \{ \log x - \psi(\beta) \} - \sum\_{k=1}^{\infty} \frac{(-x)^{-k}}{k\, \Gamma(\beta - k)} \qquad (x \to +\infty). \tag{23}$$

If *β* = 1, the asymptotic sum in (23) vanishes and

$$\operatorname{Ein}\_{1,1}(x) \sim \log x + \gamma \tag{24}$$

for large *x*. But we have the exact evaluation (compare (2))

$$\operatorname{Ein}\_{1,1}(x) = x \sum\_{n=0}^{\infty} \frac{(-x)^n}{(n+1)^2\, n!} = \log x + \gamma + E\_1(x)$$

$$\sim \log x + \gamma + \frac{e^{-x}}{x} \sum\_{j=0}^{\infty} \frac{(-)^j j!}{x^j} \qquad (x \to +\infty) \tag{25}$$

by Reference [1], (6.12.1). The additional asymptotic sum appearing in (25) is exponentially small as *x* → +∞ and is consequently not accounted for in the result (24).
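The exponentially small term is easy to observe numerically: the difference between the convergent series for Ein1,1(*x*) and log *x* + *γ* at moderate *x* matches the first few terms of the asymptotic sum in (25). A small sketch (illustrative only, with ad hoc truncation lengths):

```python
import math

def ein_11(x, terms=150):
    # Ein_{1,1}(x) = x * sum_{n>=0} (-x)^n / ((n+1)^2 n!), the exact series in (25).
    return x * sum((-x)**n / ((n + 1)**2 * math.factorial(n))
                   for n in range(terms))

x = 10.0
euler_gamma = 0.5772156649015329  # Euler-Mascheroni constant
residual = ein_11(x) - (math.log(x) + euler_gamma)
# First three terms of the exponentially small series in (25):
approx = math.exp(-x) / x * (1 - 1/x + 2/x**2)
print(residual, approx)  # both ~4.2e-6
```

The residual here is simply E1(10), which the leading terms of the exponentially small series reproduce to a few parts in a thousand.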

From Remark 1, it is seen that there are Stokes lines at arg *z* = ±*π*(1 − *α*), which coalesce on the positive real axis when *α* = 1. In the sense of increasing arg *z* in the neighbourhood of the positive real axis, the exponential expansion E1,*β*(*z*) is in the process of *switching on* across arg *z* = *π*(1 − *α*) and E1,*β*(*z*) (where the bar denotes the complex conjugate) is in the process of *switching off* across arg *z* = −*π*(1 − *α*). When *α* = 1, this produces the exponential contribution

$$\tfrac{1}{2}x\{\mathcal{E}\_{1,\beta}(x) + \overline{\mathcal{E}\_{1,\beta}(x)}\} = \frac{e^{-x}}{x^{\beta}}\cos\pi\beta\sum\_{j=0}^{\infty}\frac{(-)^{j+1}(\beta)\_{j}}{x^{j}}$$

for large *x*. Thus, the more accurate version of (23) should read

$$\operatorname{Ein}\_{1,\beta}(x) \sim \frac{1}{\Gamma(\beta)} \{ \log x - \psi(\beta) \} - \sum\_{k=1}^{\infty} \frac{(-x)^{-k}}{k\, \Gamma(\beta - k)} - \frac{e^{-x}}{x^{\beta}} \cos \pi \beta \sum\_{j=0}^{\infty} \frac{(-)^{j}(\beta)\_{j}}{x^{j}} \tag{26}$$

as *x* → +∞. When *β* = 1, this correctly reduces to (25).
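A numerical check of (26) can be sketched as follows (illustrative only; *β* = 1/3 is an arbitrary non-integer choice, since the truncated algebraic sum as coded would hit poles of Γ(*β* − *k*) at integer *β*, and *ψ* is approximated by a central difference of log Γ):

```python
import math

def rising(a, j):
    # Pochhammer symbol (a)_j = a (a+1) ... (a+j-1).
    out = 1.0
    for i in range(j):
        out *= a + i
    return out

def digamma(x, h=1e-5):
    # Crude central-difference approximation to psi(x) via log-gamma;
    # adequate for the ~6-digit comparison below.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def ein_1b_series(beta, x, terms=150):
    # Ein_{1,beta}(x) from the convergent series in (4) with alpha = 1.
    return x * sum((-x)**n / ((n + 1) * math.gamma(n + 1 + beta))
                   for n in range(terms))

def ein_1b_asym(beta, x, kmax=14, jmax=6):
    # Right-hand side of (26): logarithmic and algebraic terms plus the
    # exponentially small correction, truncated near optimally for x ~ 15.
    s = (math.log(x) - digamma(beta)) / math.gamma(beta)
    for k in range(1, kmax + 1):
        s -= (-x)**(-k) / (k * math.gamma(beta - k))
    exp_small = sum((-1)**j * rising(beta, j) / x**j for j in range(jmax + 1))
    return s - math.exp(-x) / x**beta * math.cos(math.pi * beta) * exp_small

print(ein_1b_series(1/3, 15.0), ein_1b_asym(1/3, 15.0))
```

At *x* = 15 the optimally truncated expansion agrees with the convergent series to roughly single-precision accuracy.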

When *β* = 2, we have [9]

$$\operatorname{Ein}\_{1,2}(x) = x \sum\_{n=0}^{\infty} \frac{(-x)^n}{(n+1)^2 (n+2)\, n!} = \log x - \psi(2) + \frac{1}{x} + E\_1(x) - \frac{e^{-x}}{x}$$

$$\sim \log x - \psi(2) + \frac{1}{x} + \frac{e^{-x}}{x} \sum\_{j=1}^{\infty} \frac{(-)^j j!}{x^j} \qquad (x \to +\infty).$$

This can be seen also to agree with (26) after a little rearrangement.

#### **4. The Generalised Sine and Cosine Integrals**

The sine and cosine integrals are defined by [1], §6.2,

$$\operatorname{Si}(z) = \int\_0^z \frac{\sin t}{t}\, dt, \qquad \operatorname{Cin}(z) = \int\_0^z \frac{1 - \cos t}{t}\, dt.$$

Mainardi and Masina [2] generalised these definitions by replacing the trigonometric functions by

$$\sin\_{\alpha}(t) = t^{\alpha} E\_{2\alpha, \alpha + \beta}(-t^{2\alpha}) = \sum\_{n=0}^{\infty} \frac{(-)^{n} t^{(2n+1)\alpha}}{\Gamma(2n\alpha + \alpha + \beta)}, \quad \cos\_{\alpha}(t) = E\_{2\alpha, \beta}(-t^{2\alpha}) = \sum\_{n=0}^{\infty} \frac{(-)^{n} t^{2n\alpha}}{\Gamma(2n\alpha + \beta)}$$

with *β* = 1 to produce

$$\begin{cases} \operatorname{Sin}\_{\alpha}(z) = \int\_{0}^{z} \frac{\sin\_{\alpha}(t)}{t^{\alpha}}\, dt = \sum\_{n=0}^{\infty} \frac{(-)^{n} z^{2n\alpha+1}}{(2n\alpha+1)\Gamma(2n\alpha+\alpha+1)} \\\\ \operatorname{Cin}\_{\alpha}(z) = \int\_{0}^{z} \frac{1-\cos\_{\alpha}(t)}{t^{\alpha}}\, dt = \sum\_{n=0}^{\infty} \frac{(-)^{n} z^{2n\alpha+\alpha+1}}{(2n\alpha+\alpha+1)\Gamma(2n\alpha+2\alpha+1)} \end{cases} \tag{27}$$

Here we extend the definitions (27) by including the additional parameter *β* ∈ **R** in the Mittag-Leffler functions and consider the functions

$$\begin{cases} \operatorname{Sin}\_{\alpha,\beta}(z) = z \sum\_{n=0}^{\infty} \frac{(-)^{n} z^{2n\alpha}}{(2n\alpha+1)\Gamma(2n\alpha+\alpha+\beta)} \\\\ \operatorname{Cin}\_{\alpha,\beta}(z) = z^{1+\alpha} \sum\_{n=0}^{\infty} \frac{(-)^{n} z^{2n\alpha}}{(2n\alpha+\alpha+1)\Gamma(2n\alpha+2\alpha+\beta)}\,. \end{cases} \tag{28}$$

The asymptotics of Sin*α*,*β*(*z*) and Cin*α*,*β*(*z*) can be deduced from the results in Section 2. However, here we restrict ourselves to determining the asymptotic expansion of these functions for large |*z*| in a sector enclosing the positive real *z*-axis, where for 0 < *α* < 1 they only have an algebraic-type expansion. We observe in passing that

$$\operatorname{Sin}\_{\alpha,\beta}(z) = \operatorname{Ein}\_{2\alpha,\beta-\alpha}(z). \tag{29}$$

Comparison of the series expansion for Sin*α*,*β*(*z*) with *F*(*χ*) in Section 2, with the substitutions *α* → 2*α*, *β* → *β* − *α* and *γ* = 1 (or from the above identity combined with Theorems 1 and 2), produces the following expansion:

**Theorem 3.** *For m* = 1, 2, . . . *and* 0 < *α* < 1 *we have the algebraic expansions*

$$\operatorname{Sin}\_{\alpha,\beta}(z)$$

$$\sim \begin{cases} \frac{\pi/(2\alpha)}{\sin(\pi/(2\alpha))\,\Gamma(\alpha+\beta-1)} + \sum\_{k=0}^{\infty} \frac{(-)^{k} z^{1-2\alpha(k+1)}}{(1-2\alpha(k+1))\Gamma(\beta-(2k+1)\alpha)} & (\alpha \neq (2m)^{-1}) \\\\ \frac{(-)^{m-1}}{\Gamma(\alpha+\beta-1)} \{\log z - \psi(\alpha+\beta-1)\} + \sum\_{\substack{k=0 \\ k\neq m-1}}^{\infty} \frac{(-)^{k} z^{1-2\alpha(k+1)}}{(1-2\alpha(k+1))\Gamma(\beta-(2k+1)\alpha)} & (\alpha = (2m)^{-1}) \end{cases} \tag{30}$$

*as* |*z*| → ∞ *in the sector* | arg *z*| < *π*(1 − *α*)/(2*α*)*.*
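The identity (29) can be verified directly from the series in (4) and (28); in the sketch below (illustrative only; the parameter values and truncation length are arbitrary choices) both sides are summed in floating point:

```python
import math

def sin_ab(alpha, beta, z, terms=60):
    # Sin_{alpha,beta}(z) from the series in (28).
    return z * sum((-1)**n * z**(2 * alpha * n)
                   / ((2 * alpha * n + 1) * math.gamma(2 * alpha * n + alpha + beta))
                   for n in range(terms))

def ein_ab(alpha, beta, z, terms=60):
    # Ein_{alpha,beta}(z) from the series in (4).
    return z * sum((-1)**n * z**(alpha * n)
                   / ((alpha * n + 1) * math.gamma(alpha * n + alpha + beta))
                   for n in range(terms))

# Identity (29): Sin_{alpha,beta}(z) = Ein_{2 alpha, beta - alpha}(z)
a, b, z = 0.6, 1.25, 2.0
print(sin_ab(a, b, z), ein_ab(2 * a, b - a, z))
```

The two sums are term-by-term identical after the substitution *α* → 2*α*, *β* → *β* − *α*, so they agree to rounding error.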

A similar treatment for Cin*α*,*β*(*z*) shows that with the substitutions *α* → 2*α*, *β* → *β* and *γ* = 1 + *α* we obtain the following expansion:

**Theorem 4.** *For m* = 1, 2, . . . *and* 0 < *α* < 1 *we have the algebraic expansions*

Cin*α*,*β*(*z*)

$$\sim \begin{cases} \frac{\pi/(2\alpha)}{\cos(\pi/(2\alpha))\,\Gamma(\alpha+\beta-1)} + \sum\_{k=0}^{\infty} \frac{(-)^{k} z^{1-(2k+1)\alpha}}{(1-(2k+1)\alpha)\Gamma(\beta-2\alpha k)} & (\alpha \neq (2m-1)^{-1})\\\\ \frac{(-)^{m-1}}{\Gamma(\alpha+\beta-1)} \{\log z - \psi(\alpha+\beta-1)\} + \sum\_{\substack{k=0 \\ k\neq m-1}}^{\infty} \frac{(-)^{k} z^{1-(2k+1)\alpha}}{(1-(2k+1)\alpha)\Gamma(\beta-2\alpha k)} & (\alpha = (2m-1)^{-1}) \end{cases} \tag{31}$$

*as* |*z*| → ∞ *in the sector* | arg *z*| < *π*(1 − *α*)/(2*α*)*.*

The expansions of Sin*α*,*β*(*x*) and Cin*α*,*β*(*x*) as *x* → +∞ when 0 < *α* < 1 follow immediately from Theorems 3 and 4.

As *x* → +∞ when *α* = 1, the exponentially oscillatory contribution to Sin1,*β*(*x*) can be obtained directly from (22) together with (29). In the case of Cin1,*β*(*x*), we obtain from (9) with *κ* = 2, *h* = 1/4, *ϑ* = −2 − *β*, *X* = *χ*^(1/2) and *A*<sup>0</sup> = 1/2 the exponential expansion

$$\mathcal{E}(\chi) = \frac{1}{2}\, \chi^{\vartheta/2} \exp\left[\chi^{1/2}\right] \sum\_{j=0}^{\infty} c\_j\, \chi^{-j/2}, \qquad \chi = e^{-\pi i} x^2,$$

with the coefficients *cj* = (*β*)*j*. Then the exponential contribution to Cin1,*β*(*x*) is

$$x^2\{\mathcal{E}(\chi) + \mathcal{E}(\chi e^{2\pi i})\} = -x^{-\beta} \sum\_{j=0}^{\infty} \frac{(\beta)\_j}{x^j} \cos\left[x - \tfrac{1}{2}\pi(\beta + j)\right] \qquad (x \to +\infty).$$

Collecting together these results we finally obtain the following theorem.

**Theorem 5.** *When α* = 1 *and β is real the following expansions hold:*

$$\operatorname{Sin}\_{1,\beta}(x) \sim \frac{\pi}{2\Gamma(\beta)} - \sum\_{k=0}^{\infty} \frac{(-)^{k} x^{-2k-1}}{(2k+1)\Gamma(\beta - 1 - 2k)} + x^{-\beta} \sum\_{j=0}^{\infty} \frac{(\beta)\_{j}}{x^{j}} \sin\left[x - \tfrac{1}{2}\pi(\beta + j)\right] \tag{32}$$

and

$$\operatorname{Cin}\_{1,\beta}(x) \sim \frac{1}{\Gamma(\beta)} \left\{ \log x - \psi(\beta) \right\} - \sum\_{k=1}^{\infty} \frac{(-)^{k} x^{-2k}}{2k\, \Gamma(\beta - 2k)} - x^{-\beta} \sum\_{j=0}^{\infty} \frac{(\beta)\_{j}}{x^{j}} \cos\left[x - \tfrac{1}{2}\pi(\beta + j)\right] \tag{33}$$

*as x* → +∞*.*

When *β* > 0, it is seen that Sin1,*β*(*x*) approaches the constant value *π*/(2Γ(*β*)) whereas Cin1,*β*(*x*) grows logarithmically like log (*x*)/Γ(*β*) as *x* → +∞.
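For *β* = 1 the function Sin1,1(*x*) is the classical sine integral Si(*x*), and the leading terms of (32) reduce to the familiar expansion Si(*x*) ~ *π*/2 − cos *x*/*x* − sin *x*/*x*² + 2 cos *x*/*x*³ + .... A brief numerical sketch (illustrative only; truncation lengths are ad hoc):

```python
import math

def sin_11(x, terms=60):
    # Sin_{1,1}(x) = Si(x), from (28): sum (-)^n x^(2n+1) / ((2n+1)(2n+1)!).
    return sum((-1)**n * x**(2 * n + 1)
               / ((2 * n + 1) * math.factorial(2 * n + 1))
               for n in range(terms))

x = 20.0
# Leading terms of (32) with beta = 1, where (1)_j = j!; the factors
# sin[x - pi(1+j)/2] produce -cos x/x, -sin x/x^2, +2 cos x/x^3 in turn.
asym = math.pi/2 - math.cos(x)/x - math.sin(x)/x**2 + 2*math.cos(x)/x**3
print(sin_11(x), asym)
```

At *x* = 20 the truncated expansion reproduces the convergent series to a few parts in 10⁵, and the approach to the limiting value *π*/2 is visible.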

#### **5. Numerical Results**

In this section we present numerical results confirming the accuracy of the various expansions obtained in this paper. In all cases we have employed optimal truncation (that is truncation at, or near, the least term in modulus) of the algebraic and (when appropriate) the exponential expansions. The numerical values of Ein*α*,*β*(*x*) were computed from (4) using high-precision evaluation of the terms in the suitably truncated sum.

We first present results in the physically interesting case of 0 < *α* ≤ 1 and *β* = 1 considered in Reference [2]. Table 1 shows the values (in the tables we write the values as *x*(*y*) instead of *x* × 10*<sup>y</sup>*) of the absolute relative error in the computation of Ein*α*,1(*x*) from the asymptotic expansions in Theorem 2 for several values of *x* and different *α* in the extended range 0 < *α* ≤ 2. The expansion for 0 < *α* < 2 is given by the algebraic expansion in (21); this contains a logarithmic term for the values *α* = 1/4, 1/2, 1. The progressive loss of accuracy when *α* > 1 can be attributed to the presence of the approaching exponentially large sector, whose lower boundary is, from (15), given by *θ*<sup>0</sup> = *π*(2 − *α*)/(2*α*). In the final case *α* = 2, the accuracy is seen to increase considerably. This is due to the inclusion of the (oscillatory) exponential contribution, which, from (22), takes the form

$$\operatorname{Ein}\_{2,1}(x) \sim \frac{1}{2}\pi - \frac{1}{x} - \sum\_{j=1}^{\infty} \frac{j!}{x^{j+1}} \cos(x - \tfrac{1}{2}\pi j) \qquad (x \to +\infty).$$
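This expansion is straightforward to reproduce (an informal sketch; the truncation point `jmax = 9` is an ad hoc near-optimal choice at *x* = 10, and the function names are ours):

```python
import math

def ein_21(x, terms=60):
    # Ein_{2,1}(x) from (4): x * sum (-)^n x^(2n) / ((2n+1) Gamma(2n+3)).
    return x * sum((-1)**n * x**(2 * n)
                   / ((2 * n + 1) * math.gamma(2 * n + 3))
                   for n in range(terms))

def ein_21_asym(x, jmax=9):
    # pi/2 - 1/x - sum_{j=1..jmax} j! x^{-j-1} cos(x - pi j / 2).
    s = math.pi / 2 - 1.0 / x
    for j in range(1, jmax + 1):
        s -= math.factorial(j) / x**(j + 1) * math.cos(x - math.pi * j / 2)
    return s

print(ein_21(10.0), ein_21_asym(10.0))
```

With optimal truncation near *j* ≈ *x* the smallest neglected term is of order *j*!/*x*^(*j*+1), which sets the achievable accuracy.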

In Figure 1 we show some plots of Ein*α*,1(*x*) for values of *α* in the range 0 < *α* ≤ 1. In Figure 2 the asymptotic approximations for two values of *α* are shown compared with the corresponding curves of Ein*α*,1(*x*).

**Figure 1.** Plots of Ein*α*,1(*x*) for different values of *α*.

**Table 1.** The absolute relative error in the computation of Ein*α*,*β*(*x*) from Theorem 2 for different values of *α* and *x* when *β* = 1.


**Figure 2.** Plots of Ein*α*,1(*x*) (solid curves) and the leading asymptotic approximation (dashed curves) for (**a**) *α* = 0.75 and (**b**) *α* = 1.

Table 2 shows the values of the absolute relative error in the computation of Ein*α*,*β*(*z*) from the asymptotic expansions in Theorem 1 for complex *z* for values of *α* in the range 0 < *α* ≤ 2. It will be noticed that there is a sudden reduction in the error when *α* = 1 and *θ* = *π*/4. In this case, the value of *θ*<sup>0</sup> = *π*/2 and a more accurate treatment would include the exponentially small contribution *z*E*α*,*β*(*z*). When this term is included, we find the absolute relative error equal to 6.935 × 10<sup>−11</sup>.

**Table 2.** The absolute relative error in the computation of Ein*α*,*β*(*z*) from Theorem 1 for different *α* and *θ* when *z* = 20*ei<sup>θ</sup>* and *β* = 1/3.


Finally, in Table 3 we present the error associated with the expansions of the generalised sine and cosine integrals Sin*α*,*β*(*x*) and Cin*α*,*β*(*x*) as *x* → +∞ given in Theorems 3–5. For Sin*α*,*β*(*x*), the logarithmic expansion in (30) arises for *α* = 1/4 and *α* = 1/2; for Cin*α*,*β*(*x*), the logarithmic expansion in (31) arises for *α* = 1/3. In Figure 3 are shown plots of Sin*α*,1(*x*) and Cin*α*,1(*x*) for different *α* (we remark that the plot of Cin*α*,1(*x*) in Figure 3b differs from that shown in Figure 4 of Reference [2]), and in Figure 4 the leading asymptotic approximations from the expansions in Theorem 5 are compared with the corresponding plots of these functions.

In conclusion, it is worth mentioning that the function Ein*α*,*β*(*z*), and also the generalised sine and cosine integrals, can be extended by using the three-parameter Mittag-Leffler function (or Prabhakar function) defined by

$$E\_{\alpha,\beta}^{\rho}(z) = \sum\_{n=0}^{\infty} \frac{(\rho)\_n}{\Gamma(\alpha n + \beta)} \frac{z^n}{n!}.$$

A comprehensive discussion of this function and its applications can be found in Reference [10]; see also Reference [6] Section 5.1, for details of its large-*z* asymptotic expansion.


**Table 3.** The absolute relative error in the computation of Sin*α*,*β*(*x*) and Cin*α*,*β*(*x*) from Theorems 3–5 for different *α* and *x* when *β* = 4/3.

**Figure 3.** Plots of the generalised sine and cosine integrals (**a**) Sin*α*,1(*x*) and (**b**) Cin*α*,1(*x*) for *α* = 0.25, 0.50, 0.75, 1.


**Figure 4.** Plots of the generalised sine and cosine integrals (solid curves) and their leading asymptotic approximations (dashed curves) from Theorems 3, 4 and 5: (**a**) Sin*α*,1(*x*) when *α* = 0.25, 0.75, (**b**) Sin*α*,1(*x*) when *α* = 1, (**c**) Cin*α*,1(*x*) when *α* = 0.25, 0.75 and (**d**) Cin*α*,1(*x*) when *α* = 1.

#### **6. Conclusions**

The large-*z* asymptotic expansions of the modified exponential integral Ein*α*,*β*(*z*) involving the two-parameter Mittag-Leffler function have been determined by exploiting the known asymptotic theory developed for integral functions of hypergeometric type, namely the Fox-Wright function. The appearance of logarithmic terms in the expansion of Ein*α*,*β*(*x*) for *x* → +∞ for certain values of *α* ∈ (0, 1] is emphasised. Similar expansions have been obtained for the extended sine and cosine integrals.

**Funding:** This research received no external funding.

**Acknowledgments:** I would like to thank Francesco Mainardi for the invitation to contribute to this special edition.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Mathematical Aspects of Krätzel Integral and Krätzel Transform**

#### **Arak M. Mathai <sup>1</sup> and Hans J. Haubold <sup>2,</sup>\***


Received: 11 February 2020; Accepted: 22 March 2020; Published: 3 April 2020

**Abstract:** A real scalar variable integral is known in the literature by different names in different disciplines. It is basically a Bessel integral, called specifically the Krätzel integral. An integral transform with this Krätzel function as kernel is known as the Krätzel transform. This article examines some mathematical properties of the Krätzel integral, its connection to Mellin convolutions and statistical distributions, its computable representations, and its extensions to multivariate and matrix-variate cases, in both the real and complex domains. An extension in the pathway family of functions is also explored.

**Keywords:** Mellin convolutions; Krätzel integrals; reaction-rate probability integral; continuous mixtures; Bayesian structures; fractional integrals; statistical distribution of products and ratios; multivariate and matrix-variate cases; real and complex domains

**MSC:** 26B12; 26A33; 60E10; 62E15; 33C60

#### **1. Introduction**

In this paper, real scalar mathematical or random variables are denoted by small letters *x*, *y*, *z*, ... and the corresponding vector/matrix variables are denoted by capital letters *X*, *Y*, .... Variables in the complex domain are denoted with a tilde such as *x̃*, *ỹ*, *X̃*, *Ỹ*, .... Constant vectors/matrices are denoted by capital letters *A*, *B*, ..., whether in the real or complex domain. Scalar constants are denoted by *a*, *b*, .... If *X* = (*x<sub>ij</sub>*) is a *p* × *q* matrix where the *x<sub>ij</sub>* are distinct real scalar variables, then the wedge product of the differentials is denoted by d*X* = ∧<sub>*i*=1</sub><sup>*p*</sup> ∧<sub>*j*=1</sub><sup>*q*</sup> d*x<sub>ij</sub>*. If *x* and *y* are real scalar variables, then the wedge product of their differentials is defined as d*x* ∧ d*y* = −d*y* ∧ d*x*, so that d*x* ∧ d*x* = 0 and d*y* ∧ d*y* = 0. If *X̃* is in the complex domain, then *X̃* = *X*<sub>1</sub> + *iX*<sub>2</sub>, where *X*<sub>1</sub>, *X*<sub>2</sub> are real and *i* = √(−1). Then, d*X̃* = d*X*<sub>1</sub> ∧ d*X*<sub>2</sub>. The determinant of a *p* × *p* real matrix *X* is denoted by |*X*| or det(*X*), and when in the complex domain the absolute value of the determinant is denoted by |det(*X*)|. The trace of a square matrix *A* is denoted by tr(*A*). The integral

$$\int\_{A}^{B} f(X) \mathrm{d}X = \int\_{O < A < X < B} f(X) \mathrm{d}X$$

means a real-valued scalar function *f*(*X*) of the *p* × *p* real positive definite matrix *X* is integrated out over *X* > *O* (positive definite), *X* − *A* > *O*, *B* − *X* > *O*, *A* > *O*, *B* > *O*, where *A* and *B* are *p* × *p* constant positive definite matrices. The corresponding integral in the complex domain is denoted as ∫<sub>*A*</sub><sup>*B*</sup> *f*(*X̃*)d*X̃*.

#### *1.1. Krätzel Integral*

Let *x* be a real scalar variable. Consider the following integrals:

$$K\_1 = \int\_0^\infty x^{\gamma - 1} e^{-ax - \frac{b}{x}} dx, a > 0, b > 0, \gamma > 0 \tag{1}$$

$$K_2 = \int_0^\infty x^{\gamma - 1} \mathrm{e}^{-ax^{\delta} - bx^{-\rho}}\,\mathrm{d}x,\; a > 0,\, b > 0,\, \gamma > 0,\, \delta > 0,\, \rho > 0. \tag{2}$$

This *K*<sub>2</sub> in Equation (2) is known as the generalized Krätzel integral and Equation (1) as the basic Krätzel integral. When *δ* = 1 in Equation (2), we have the Laplace transform of *x*<sup>*γ*−1</sup>e<sup>−*bx*<sup>−*ρ*</sup></sup> with Laplace parameter *a*. For *δ* = 1, *ρ* = 1/2 in Equation (2), we have the basic reaction-rate probability integral in nuclear and solar neutrino astrophysics (see [1,2]). When *δ* = 1, *ρ* = 1, the integrand in Equation (1) is the inverse Gaussian density for appropriate values of *a*, *b*, *γ*, multiplied by a normalizing constant. In addition, Equation (2) is a generalized situation of the same, and Equation (1) provides the moment expression for the inverse Gaussian density, multiplied by a normalizing constant. The Krätzel transform is associated with Equation (1) (see [3]). Some authors call Equation (2) the generalized gamma, ultra gamma, Bessel integral, etc. In [4], it is shown that in the simple poles case it is a Bessel series, and hence it is more appropriate to call it a generalized Bessel integral.
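The Bessel character of the basic integral can be checked numerically: for *δ* = *ρ* = 1 the classical result ∫₀^∞ *x*^{γ−1}e^{−ax−b/x}d*x* = 2(*b*/*a*)^{γ/2}*K*_γ(2√(*ab*)) holds, and both sides can be computed with only the Python standard library, using the integral representation *K*_ν(*z*) = ∫₀^∞ e^{−*z* cosh *t*} cosh(ν*t*)d*t*. This is a hedged sketch; the function names, quadrature settings, and parameter values are illustrative assumptions, not from the paper.

```python
import math

def kratzel_basic(g, a, b, n=4000, lo=-30.0, hi=30.0):
    """K_1 of Equation (1) by trapezoidal quadrature after x = e^t."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = h if 0 < i < n else h / 2.0
        total += w * math.exp(g * t - a * math.exp(t) - b * math.exp(-t))
    return total

def bessel_k(nu, z, n=2000, hi=12.0):
    """Modified Bessel K_nu(z) via K_nu(z) = int_0^inf e^{-z cosh t} cosh(nu t) dt."""
    h = hi / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = h if 0 < i < n else h / 2.0
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return total
```

The substitution *x* = e^*t* makes the integrand decay double-exponentially at both ends, so a plain trapezoidal rule is already very accurate.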

The highlight of the present discussion is to point out the importance and usefulness of the Krätzel function in various topics in widely different areas and to consider its extensions of various types. Krätzel integrals appear in the Mellin convolution of a product of two functions; in statistical distribution theory as the density of a product of two independently distributed generalized gamma random variables; in Bayesian analysis when the conditional and marginal densities belong to generalized gamma densities; in model building, especially in the pathway models, where the limiting forms end up in Krätzel functions; in nuclear reaction-rate theory; and in inverse Gaussian models in stochastic processes, to mention a few topics. The Krätzel function is also associated with generalized gamma and ultra gamma integrals, Kobayashi integrals and generalized special functions such as G- and H-functions. In the present discussion, we also consider extensions of the Krätzel function to multivariate cases involving many scalar variables, matrix-variate cases in the real and complex domains and extensions involving multiple integrals.

#### *1.2. Evaluation of the Integral in Equation (2)*

One can evaluate Equation (2) by using different approaches. One can interpret Equation (2) as the Mellin convolution of a product and then take the inverse Mellin transform to evaluate the integral. One can draw a parallel to the statistical density of a product of two positive real scalar random variables and then evaluate the density to obtain the value of Equation (2). One can treat Equation (2) as a function *g*(*b*) of *b*. Then, the Mellin transform of *g*(*b*) with Mellin parameter *s* is the following for *γ* > 0, *δ* > 0, *a* > 0, *b* > 0, *ρ* > 0:

$$\begin{split} M_{g}(s) &= \int_0^\infty b^{s-1}\Big\{\int_0^\infty x^{\gamma-1} \mathrm{e}^{-ax^{\delta} - bx^{-\rho}}\,\mathrm{d}x\Big\}\mathrm{d}b \\ &= \int_0^\infty \int_0^\infty b^{s-1} x^{\gamma-1} \mathrm{e}^{-ax^{\delta} - bx^{-\rho}}\,\mathrm{d}x \wedge \mathrm{d}b. \end{split}$$

Integrating out *b* first and then *x*, we have the following:

$$\begin{aligned} \int_0^\infty b^{s-1} \mathrm{e}^{-bx^{-\rho}}\,\mathrm{d}b &= \Gamma(s)\, x^{\rho s},\; \Re(s) > 0\\ \int_0^\infty x^{\gamma+\rho s-1} \mathrm{e}^{-ax^{\delta}}\,\mathrm{d}x &= \frac{1}{\delta}\,\Gamma\Big(\frac{\gamma+\rho s}{\delta}\Big)\, a^{-(\frac{\gamma+\rho s}{\delta})},\; \Re(\gamma+\rho s) > 0 \end{aligned}$$

where ℜ(·) means the real part of (·). That is,

$$M_{g}(s) = \frac{1}{\delta a^{\frac{\gamma}{\delta}}}\,\Gamma(s)\,\Gamma\Big(\frac{\gamma + \rho s}{\delta}\Big)\, a^{-\frac{\rho}{\delta}s}.\tag{3}$$

Taking the inverse Mellin transform of Equation (3) we have *g*(*b*) or the integral in Equation (2) as the following:

$$K\_2 = \frac{1}{\delta a^{\frac{\gamma}{\delta}}} \frac{1}{2\pi i} \int\_{c-i\infty}^{c+i\infty} \Gamma(s) \Gamma(\frac{\gamma}{\delta} + \frac{\rho}{\delta}s) (ba^{\frac{\rho}{\delta}})^{-s} \mathrm{d}s, i = \sqrt{(-1)}\tag{4}$$

where the *c* in the contour is > 0. Note that Equation (4) can be written as an H-function.

$$K_2 = \frac{1}{\delta a^{\frac{\gamma}{\delta}}}\, H_{0,2}^{2,0}\left[ b a^{\frac{\rho}{\delta}} \Big|_{(0,1),\,(\frac{\gamma}{\delta},\frac{\rho}{\delta})} \right]. \tag{5}$$

For the theory and applications of the H-function, see [5]. When *ρ* = *δ*, Equation (5) reduces to a Meijer G-function as the following:

$$K\_2 = \frac{1}{\delta a^{\frac{\gamma}{\delta}}} G\_{0,2}^{2,0} \left[ a b \big|\_{0,\frac{\gamma}{\delta}} \right]. \tag{6}$$

For the theory and applications of the G-function, see [6].

#### *1.3. Computable Series form for Equation (2)*

Consider the Mellin–Barnes integral representation in Equation (4). This integral can be evaluated as the sum of the residues at the poles of the gammas Γ(*s*) and Γ(*γ*/*δ* + (*ρ*/*δ*)*s*). The poles of Γ(*s*) are at *s* = 0, −1, −2, .... When the poles of the integrand are simple, then the sum of the residues at the poles of Γ(*s*) is the following:

$$(A)\quad (\delta a^{\frac{\gamma}{\delta}})^{-1} \sum_{\nu=0}^{\infty} \frac{(-1)^{\nu}}{\nu!}\,\Gamma\Big(\frac{\gamma}{\delta} - \frac{\rho}{\delta}\nu\Big)(ba^{\frac{\rho}{\delta}})^{\nu}.$$

The poles of Γ(*γ*/*δ* + (*ρ*/*δ*)*s*) are at *γ*/*δ* + (*ρ*/*δ*)*s* = −*ν*, *ν* = 0, 1, 2, ..., that is, at *s* = −*γ*/*ρ* − (*δ*/*ρ*)*ν*, and in the simple poles case the sum of the residues is the following:

$$(B) \quad \frac{b^{\frac{\gamma}{\rho}}}{\delta} \sum\_{\nu=0}^{\infty} \frac{(-1)^{\nu}}{\nu!} \Gamma(-\frac{\gamma}{\rho} - \frac{\delta}{\rho}\nu)(ab^{\frac{\delta}{\rho}})^{\nu}.$$

Hence, the sum of residues from (*A*) and (*B*) in the simple poles case is the following:

$$K\_{2} = (\delta a^{\frac{\gamma}{\delta}})^{-1} \sum\_{\nu=0}^{\infty} \frac{(-1)^{\nu}}{\nu!} \Gamma(\frac{\gamma}{\delta} - \frac{\rho}{\delta}\nu)(ba^{\frac{\rho}{\delta}})^{\nu}$$

$$+ \frac{b^{\frac{\gamma}{\rho}}}{\delta} \sum\_{\nu=0}^{\infty} \frac{(-1)^{\nu}}{\nu!} \Gamma(-\frac{\gamma}{\rho} - \frac{\delta}{\rho}\nu)(ab^{\frac{\delta}{\rho}})^{\nu}.\tag{7}$$

#### *1.4. G-function in the Simple Poles Case*

Let *ρ* = *δ*, so that the H-function in Equation (5) becomes the G-function in Equation (6); when *γ*/*δ* is not an integer, the G-function has simple poles. Consider this case; it is available from Equation (7) by putting *δ* = *ρ*. Then, the gammas reduce to the following:

$$\Gamma(\frac{\gamma}{\rho} - \nu) = \frac{\Gamma(\frac{\gamma}{\rho})}{(-1)^{\nu}(-\frac{\gamma}{\rho} + 1)\_{\nu}} \text{ and } \Gamma(-\frac{\gamma}{\rho} - \nu) = \frac{\Gamma(-\frac{\gamma}{\rho})}{(-1)^{\nu}(\frac{\gamma}{\rho} + 1)\_{\nu}}.$$

where, in general, the notation (*a*)<sub>*m*</sub> = *a*(*a* + 1)···(*a* + *m* − 1), *a* ≠ 0, (*a*)<sub>0</sub> = 1, is the Pochhammer symbol. Hence, *K*<sub>2</sub> in Equation (2) for this simple poles case and for *δ* = *ρ* is the following:

$$K_{2} = \frac{\Gamma(\frac{\gamma}{\rho})}{\rho a^{\frac{\gamma}{\rho}}} \sum_{\nu=0}^{\infty} \frac{1}{(-\frac{\gamma}{\rho} + 1)_{\nu}\, \nu!}\,(ab)^{\nu} + \frac{\Gamma(-\frac{\gamma}{\rho})\, b^{\frac{\gamma}{\rho}}}{\rho} \sum_{\nu=0}^{\infty} \frac{1}{(\frac{\gamma}{\rho} + 1)_{\nu}\, \nu!}\,(ab)^{\nu}$$

$$= \frac{\Gamma(\frac{\gamma}{\rho})}{\rho a^{\frac{\gamma}{\rho}}}\, {}_{0}F_{1}\Big(;\, -\frac{\gamma}{\rho} + 1;\, ab\Big) + \frac{\Gamma(-\frac{\gamma}{\rho})\, b^{\frac{\gamma}{\rho}}}{\rho}\, {}_{0}F_{1}\Big(;\, \frac{\gamma}{\rho} + 1;\, ab\Big), \tag{8}$$

where <sub>0</sub>*F*<sub>1</sub> is a hypergeometric series with no upper parameter and one lower parameter. Observe that, in this simple poles case, Equation (2) or *K*<sub>2</sub> of Equation (8) is a linear function of Bessel series, and hence it is appropriate to call Equation (1) a Bessel integral and Equation (2) the generalized Bessel integral, rather than calling them the ultra gamma integral, the generalized gamma integral or anything connected with a gamma integral.
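Equation (8) can likewise be verified numerically. The sketch below (illustrative names, parameters, and quadrature settings, standard library only) evaluates the <sub>0</sub>*F*<sub>1</sub> series and compares against a quadrature evaluation of Equation (2) with *δ* = *ρ*.

```python
import math

def hyp0f1(c, z, terms=80):
    """0F1(; c; z) by its defining series; c must not be a non-positive integer."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= z / ((c + n) * (n + 1))   # ratio of consecutive series terms
    return total

def kratzel_delta_eq_rho(g, r, a, b, n=4000, lo=-30.0, hi=30.0):
    """Equation (2) with delta = rho, by quadrature after x = e^t."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = h if 0 < i < n else h / 2.0
        s += w * math.exp(g * t - a * math.exp(r * t) - b * math.exp(-r * t))
    return s

def kratzel_eq8(g, r, a, b):
    """Right-hand side of Equation (8); gamma/rho must not be an integer."""
    nu = g / r
    return (math.gamma(nu) / (r * a**nu) * hyp0f1(1.0 - nu, a * b)
            + math.gamma(-nu) * b**nu / r * hyp0f1(1.0 + nu, a * b))
```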

#### *1.5. Poles of Order Two, ρ* = *δ*, *γ*/*δ* = *m*, *m* = 1, 2, ...

In this case, the poles at *s* = 0, −1, −2, ..., −(*m* − 1) are simple and the poles at *s* = −*m*, −*m* − 1, ... are of order two each. We may then write Equation (2) as the following:

$$K_2 = \frac{1}{\rho a^{\frac{\gamma}{\rho}}}\,\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(s)\,\Gamma(m+s)\,(ab)^{-s}\,\mathrm{d}s. \tag{9}$$

The sum of the residues at the poles *s* = 0, −1, ..., −(*m* − 1), coming from Equation (9), is the following:

$$(C)\quad \frac{1}{\rho a^{\frac{\gamma}{\rho}}} \sum_{\nu=0}^{m-1} \frac{(-1)^{\nu}}{\nu!}\,\Gamma(m-\nu)\,(ab)^{\nu}.$$

For *s* = −*m* − *ν*, *ν* = 0, 1, ..., or *s* = −*ν*, *ν* = *m*, *m* + 1, ..., the poles are of order two and the residue, denoted by *R<sub>ν</sub>*, is the following. Let *h*(*s*) = Γ(*s*)Γ(*m* + *s*)(*ab*)<sup>−*s*</sup>. Then,

$$\begin{split} R_{\nu} &= \lim_{s \to -\nu} \frac{\mathrm{d}}{\mathrm{d}s}\big[(s+\nu)^2\, \Gamma(s)\Gamma(m+s)(ab)^{-s}\big] \\ &= \lim_{s \to -\nu} \frac{\mathrm{d}}{\mathrm{d}s}\Big[\frac{\Gamma^2(s+\nu+1)}{(s+\nu-1)^2 \cdots (s+m)^2\, (s+m-1)\cdots s}\,(ab)^{-s}\Big], \end{split}$$

on writing Γ(*s*) and Γ(*m* + *s*) in terms of Γ(*s* + *ν* + 1). Observe that d*h*(*s*)/d*s* = *h*(*s*) d ln *h*(*s*)/d*s* and (*ab*)<sup>−*s*</sup> = e<sup>−*s* ln(*ab*)</sup>. Note that

$$\lim\_{s \to -\nu} h(s) = \frac{(-1)^m (ab)^\nu}{\nu! (\nu - m)!}, \nu = m, m+1, \dots$$

$$\begin{split} \lim_{s \to -\nu} \frac{\mathrm{d}}{\mathrm{d}s} \ln h(s) &= \lim_{s \to -\nu} \Big[2\psi(s+\nu+1) - \frac{2}{s+\nu-1} - \dots - \frac{2}{s+m} \\ &\qquad - \frac{1}{s+m-1} - \dots - \frac{1}{s} - \ln(ab)\Big] \\ &= 2\psi(1) + 2\Big[1 + \frac{1}{2} + \dots + \frac{1}{\nu-m}\Big] + \Big[\frac{1}{\nu-m+1} + \dots + \frac{1}{\nu}\Big] - \ln(ab) \\ &= \psi(\nu+1) + \psi(\nu-m+1) - \ln(ab). \end{split}$$

Therefore,

$$R\_{\nu} = [\psi(\nu+1) + \psi(\nu-m+1) - \ln(ab)][\frac{(-1)^m (ab)^{\nu}}{\nu! (\nu-m)!}], \nu = m, m+1, \dots$$

Then, in this case, Equation (2) reduces to the following:

$$K_2 = \frac{1}{\rho a^{\frac{\gamma}{\rho}}}\Big\{\sum_{\nu=0}^{m-1} \frac{(-1)^{\nu}}{\nu!}\,\Gamma(m-\nu)(ab)^{\nu} + \sum_{\nu=m}^{\infty}\big[\psi(\nu+1) + \psi(\nu-m+1) - \ln(ab)\big]\,\frac{(-1)^{m}}{\nu!\,(\nu-m)!}\,(ab)^{\nu}\Big\}$$

where *ψ*(·) is the psi function, or the logarithmic derivative of the gamma function, *ψ*(*z*) = (d/d*z*) ln Γ(*z*).
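A sketch of the double-pole case, assuming *δ* = *ρ* and *γ*/*ρ* = *m*. Since the psi function is needed only at positive integers, *ψ*(*n*) = −*γ*<sub>E</sub> + *H*<sub>*n*−1</sub> suffices; note that the prefactor 1/(*ρa*<sup>*γ*/*ρ*</sup>) coming from Equation (9) multiplies both residue sums. All function names and parameter values below are illustrative assumptions.

```python
import math

EULER = 0.5772156649015329   # Euler-Mascheroni constant

def psi_int(n):
    """Digamma at a positive integer: psi(n) = -EULER + H_{n-1}."""
    return -EULER + sum(1.0 / k for k in range(1, n))

def kratzel_quad(g, d, r, a, b, n=4000, lo=-30.0, hi=30.0):
    """Equation (2) by quadrature after x = e^t, for comparison."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = h if 0 < i < n else h / 2.0
        s += w * math.exp(g * t - a * math.exp(d * t) - b * math.exp(-r * t))
    return s

def kratzel_double_pole(m, r, a, b, terms=40):
    """Logarithmic residue series for delta = rho, gamma/rho = m (integer);
    the prefactor 1/(rho a^m) multiplies both sums."""
    s = sum((-1)**v / math.factorial(v) * math.gamma(m - v) * (a * b)**v
            for v in range(m))
    s += sum((psi_int(v + 1) + psi_int(v - m + 1) - math.log(a * b))
             * (-1)**m * (a * b)**v
             / (math.factorial(v) * math.factorial(v - m))
             for v in range(m, terms))
    return s / (r * a**m)
```

With *γ* = *mρ* the power *a*<sup>*γ*/*ρ*</sup> is simply *a*<sup>*m*</sup>, which is what the last line uses.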

The most general case is to consider Γ(*s*)Γ(*γ*/*δ* + (*ρ*/*δ*)*s*) having some poles of order one and the remaining of order two. After writing this situation in a convenient way, one can use the procedure in Section 1.5 to obtain the final result. Since the expressions would take up too much space, this case is not discussed here.

#### **2. Krätzel Integral from Mellin Convolution**

Let *x*<sup>1</sup> > 0 and *x*<sup>2</sup> > 0 be real scalar variables. Let *f*1(*x*1) and *f*2(*x*2) be real-valued scalar functions associated with *x*<sup>1</sup> and *x*2, respectively. Then, the Mellin transforms of *f*<sup>1</sup> and *f*2, with Mellin parameter *s*, are the following, whenever they exist:

$$M_{f_1}(s) = \int_0^\infty x_1^{s-1} f_1(x_1)\,\mathrm{d}x_1,\qquad M_{f_2}(s) = \int_0^\infty x_2^{s-1} f_2(x_2)\,\mathrm{d}x_2. \tag{10}$$

Then,

$$\begin{aligned} M_{f_1}(s)M_{f_2}(s) &= \int_0^\infty \int_0^\infty x_1^{s-1} x_2^{s-1} f_1(x_1) f_2(x_2)\,\mathrm{d}x_1 \wedge \mathrm{d}x_2 \\ &= \int_0^\infty \int_0^\infty u^{s-1} f_1(v) f_2\Big(\frac{u}{v}\Big)\frac{1}{v}\,\mathrm{d}u \wedge \mathrm{d}v,\quad u = x_1 x_2,\; v = x_1 \\ &= \int_0^\infty u^{s-1} g(u)\,\mathrm{d}u \end{aligned}$$

where

$$\begin{split} g(u) &= \int\_0^\infty \frac{1}{v} f\_1(v) f\_2(\frac{u}{v}) \mathrm{d}v \\ &= \int\_0^\infty \frac{1}{v} f\_1(\frac{u}{v}) f\_2(v) \mathrm{d}v. \end{split} \tag{11}$$

That is,

$$M_{g}(s) = M_{f_1}(s)\, M_{f_2}(s).\tag{12}$$

This Equation (12) is the Mellin convolution of the product involving two functions and Equation (11) is the corresponding integral representation. Let *f*<sup>1</sup> and *f*<sup>2</sup> be generalized exponential functions of the following types:

$$(D)\ \ f\_{\dot{j}}(\mathbf{x}\_{\dot{j}}) = \mathbf{x}\_{\dot{j}}^{\gamma\_{\dot{j}} - 1} \mathbf{e}^{-a\_{\dot{j}} \mathbf{x}\_{\dot{j}}^{\delta\_{\dot{j}}}}, a\_{\dot{j}} > 0, \delta\_{\dot{j}} > 0, \gamma\_{\dot{j}} > 0, j = 1, 2.$$

Then, Equation (11) becomes the following:

$$\begin{aligned}(E) \; \mathcal{g}(\mu) &= \mu^{\gamma\_2 - 1} \int\_0^\infty \upsilon^{\gamma\_1 - \gamma\_2 - 1} \mathbf{e}^{-a\_1 \upsilon^{\delta\_1} - a\_2 (\frac{\mu}{\upsilon})^{\delta\_2}} \mathbf{d}\upsilon \\ (F) \; &= \mu^{\gamma\_1 - 1} \int\_0^\infty \upsilon^{\gamma\_2 - \gamma\_1 - 1} \mathbf{e}^{-a\_1 (\frac{\mu}{\upsilon})^{\delta\_1} - a\_2 \upsilon^{\delta\_2}} \mathbf{d}\upsilon.\end{aligned}$$

Here, (*E*) and (*F*) provide equivalent representations for *g*(*u*). In (*E*), if *δ*<sub>1</sub> = *δ*, *a*<sub>1</sub> = *a*, *δ*<sub>2</sub> = *ρ*, *a*<sub>2</sub>*u*<sup>*δ*<sub>2</sub></sup> = *b*, *γ*<sub>1</sub> − *γ*<sub>2</sub> = *γ*, then the integral becomes the Krätzel integral of Equation (2) in Section 1. Hence, the Krätzel integral is also available as a Mellin convolution of a product involving two functions; see [7].
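The product property in Equation (12) can be illustrated numerically: for *f*<sub>1</sub>(*x*) = *f*<sub>2</sub>(*x*) = e<sup>−*x*</sup>, both Mellin transforms equal Γ(*s*), so the Mellin transform of the convolution *g*(*u*) should equal Γ(*s*)². The sketch below makes these assumptions explicit; the function names and quadrature settings are illustrative, not from the paper.

```python
import math

def mellin(f, s, n=600, lo=-30.0, hi=30.0):
    """Mellin transform int_0^inf x^{s-1} f(x) dx via x = e^t."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = h if 0 < i < n else h / 2.0
        total += w * math.exp(s * t) * f(math.exp(t))
    return total

def conv(f1, f2, u, n=600, lo=-30.0, hi=30.0):
    """g(u) = int_0^inf (1/v) f1(v) f2(u/v) dv, via v = e^t (Equation (11))."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = h if 0 < i < n else h / 2.0
        total += w * f1(math.exp(t)) * f2(u * math.exp(-t))
    return total
```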

Instead of taking *f<sub>j</sub>*(*x<sub>j</sub>*) of the form in (*D*), if we take *f*<sub>1</sub>(*x*<sub>1</sub>) = (1/Γ(*α*)) *x*<sub>1</sub><sup>*γ*</sup>(1 − *x*<sub>1</sub>)<sup>*α*−1</sup> for ℜ(*γ*) > −1, ℜ(*α*) > 0, or *α* > 0, *γ* > −1 when real, and *f*<sub>2</sub>(*x*<sub>2</sub>) = *f*(*x*<sub>2</sub>) where *f*(*x*<sub>2</sub>) is an arbitrary function, then Equation (11) becomes the following:

$$g(u) = \int_{v} \frac{1}{v} f_1\Big(\frac{u}{v}\Big) f_2(v)\,\mathrm{d}v = \frac{1}{\Gamma(\alpha)} \int_{v \ge u} \frac{1}{v}\Big(\frac{u}{v}\Big)^{\gamma}\Big(1 - \frac{u}{v}\Big)^{\alpha-1} f(v)\,\mathrm{d}v,\quad \Re(\alpha) > 0,\, \Re(\gamma) > -1$$

$$= \frac{u^{\gamma}}{\Gamma(\alpha)} \int_{v \ge u} v^{-\gamma-\alpha}(v-u)^{\alpha-1} f(v)\,\mathrm{d}v = K_{2,\gamma}^{-\alpha} f \tag{13}$$

where *K*<sub>2,*γ*</sub><sup>−*α*</sup>*f* in Equation (13) is the Erdélyi–Kober fractional integral of the second kind of order *α* and parameter *γ*; see [8]. Thus, the Mellin convolution of a product is also associated with a fractional integral of the second kind. A general definition of all versions of fractional integrals in terms of Mellin convolutions of products and ratios is given in [8].

#### **3. Krätzel Integral as the Density of a Product**

Let *x*<sub>1</sub> > 0 and *x*<sub>2</sub> > 0 be two real scalar positive random variables, independently distributed with density functions *f*<sub>1</sub>(*x*<sub>1</sub>) and *f*<sub>2</sub>(*x*<sub>2</sub>), respectively. Due to statistical independence, their joint density, denoted by *f*(*x*<sub>1</sub>, *x*<sub>2</sub>), is the product *f*(*x*<sub>1</sub>, *x*<sub>2</sub>) = *f*<sub>1</sub>(*x*<sub>1</sub>)*f*<sub>2</sub>(*x*<sub>2</sub>). Let *u* = *x*<sub>1</sub>*x*<sub>2</sub> be the product and let *x*<sub>1</sub> = *v* or *x*<sub>2</sub> = *v*. Then, d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> = (1/*v*) d*u* ∧ d*v*. Let *g*(*u*, *v*) be the joint density of *u* and *v*. Then,

$$g(u,v) = \frac{1}{v}f\_1(v)f\_2(\frac{u}{v}) = \frac{1}{v}f\_1(\frac{u}{v})f\_2(v)$$

and the marginal density of *u*, denoted by *g*1(*u*) is the following:

$$\begin{split} g\_1(\boldsymbol{u}) &= \int\_{\boldsymbol{v}} \frac{1}{\boldsymbol{v}} f\_1(\boldsymbol{v}) f\_2(\frac{\boldsymbol{u}}{\boldsymbol{v}}) \mathrm{d}v \\ &= \int\_{\boldsymbol{v}} \frac{1}{\boldsymbol{v}} f\_1(\frac{\boldsymbol{u}}{\boldsymbol{v}}) f\_2(\boldsymbol{v}) \mathrm{d}v. \end{split} \tag{14}$$

Let *fj*(*xj*) be a generalized gamma density of the form

$$f\_j(\mathbf{x}\_j) = \mathbf{c}\_{\dot{\jmath}} \mathbf{x}\_{\dot{\jmath}}^{\gamma\_j - 1} \mathbf{e}^{-a\_j \mathbf{x}\_{\dot{\jmath}}^{\delta\_j}}, \\ a\_j > 0, \gamma\_j > 0, \delta\_j > 0, j = 1, 2 \tag{15}$$

where *cj* is the normalizing constant. For the *fj*(*xj*) in Equation (15), we have Equation (14) as the following:

$$\begin{split} g\_1(u) &= c\_1 c\_2 u^{\gamma\_2 - 1} \int\_0^\infty \upsilon^{\gamma\_1 - \gamma\_2 - 1} \mathbf{e}^{-a\_1 \upsilon^{\delta\_1} - a\_2 (\frac{u}{\upsilon})^{\delta\_2}} \mathbf{d}\upsilon \\ &= c\_1 c\_2 u^{\gamma\_1 - 1} \int\_0^\infty \upsilon^{\gamma\_2 - \gamma\_1 - 1} \mathbf{e}^{-a\_1 (\frac{u}{\upsilon})^{\delta\_1} - a\_2 \upsilon^{\delta\_2}} \mathbf{d}\upsilon. \end{split} \tag{16}$$

Observe that the two expressions for *g*<sub>1</sub>(*u*) in Equation (16) are not only generalized Krätzel integrals but also statistical densities of a product. We can evaluate the explicit form of the density by using arbitrary moments and then inverting the expression. Consider the (*s* − 1)th moments of *x*<sub>1</sub> and *x*<sub>2</sub>. Then, *E*[(*x*<sub>1</sub>*x*<sub>2</sub>)<sup>*s*−1</sup>] = *E*[*x*<sub>1</sub><sup>*s*−1</sup>]*E*[*x*<sub>2</sub><sup>*s*−1</sup>] due to statistical independence, where *E*[·] denotes the expected value of [·]. That is,

$$E[x_j^{s-1}] = \int_0^\infty x_j^{s-1} f_j(x_j)\,\mathrm{d}x_j = M_{f_j}(s),\quad j = 1, 2$$

whenever the expected values exist, where *M*<sub>*f<sub>j</sub>*</sub>(*s*) is the Mellin transform of the density *f<sub>j</sub>*, with Mellin parameter *s*, when this Mellin transform exists. Evaluating *E*[*x<sub>j</sub>*<sup>*s*−1</sup>] for the density in Equation (15), we have the following:

$$E[x\_j^{s-1}] = \frac{a\_j^{-\frac{(s-1)}{\delta\_j}} \Gamma(\frac{\gamma\_j + s - 1}{\delta\_j})}{\Gamma(\frac{\gamma\_j}{\delta\_j})}, \Re(\gamma\_j + s - 1) > 0, j = 1, 2. \tag{17}$$

Observe that in Equation (17) the explicit form of the normalizing constant *c<sub>j</sub>* is used; *c<sub>j</sub>* is such that *E*[*x<sub>j</sub>*<sup>*s*−1</sup>] = 1 when *s* = 1. Then, taking the product,

$$E[u^{s-1}] = \Big\{\prod_{j=1}^{2} \frac{a_j^{\frac{1}{\delta_j}}}{\Gamma(\frac{\gamma_j}{\delta_j})}\Big\}\Big\{\prod_{j=1}^{2} \Gamma\Big(\frac{\gamma_j - 1}{\delta_j} + \frac{s}{\delta_j}\Big)\, a_j^{-\frac{s}{\delta_j}}\Big\},\tag{18}$$

for ℜ(*γ<sub>j</sub>* + *s* − 1) > 0, *j* = 1, 2. Then, the density *g*<sub>1</sub>(*u*) is available from the inverse Mellin transform, or by inverting Equation (18). That is,

$$\begin{split} g_1(u) &= C\,\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Big\{\prod_{j=1}^2 \Gamma\Big(\frac{\gamma_j - 1}{\delta_j} + \frac{s}{\delta_j}\Big)\Big\}\big(a_1^{\frac{1}{\delta_1}} a_2^{\frac{1}{\delta_2}}\, u\big)^{-s}\,\mathrm{d}s \\ &= C\, H_{0,2}^{2,0}\left[a_1^{\frac{1}{\delta_1}} a_2^{\frac{1}{\delta_2}}\, u \,\Big|_{\left(\frac{\gamma_1-1}{\delta_1},\frac{1}{\delta_1}\right),\left(\frac{\gamma_2-1}{\delta_2},\frac{1}{\delta_2}\right)}\right],\quad C = \prod_{j=1}^2 \frac{a_j^{\frac{1}{\delta_j}}}{\Gamma(\frac{\gamma_j}{\delta_j})}. \end{split} \tag{19}$$

Note that Equation (19) is the explicit form of the Krätzel integral as well as the statistical density *g*<sub>1</sub>(*u*). Instead of a generalized gamma density for *f<sub>j</sub>*(*x<sub>j</sub>*), suppose that the density of *x*<sub>1</sub> is a type-1 beta density with the parameters (*γ* + 1, *α*) and *f*<sub>2</sub>(*x*<sub>2</sub>) is an arbitrary density; then *f*<sub>1</sub> is of the form

$$f\_1(\mathbf{x}\_1) = \frac{\Gamma(\alpha + \gamma + 1)}{\Gamma(\gamma + 1)\Gamma(\alpha)} \mathbf{x}\_1^{\gamma} (1 - \mathbf{x}\_1)^{\alpha - 1}, 0 \le \mathbf{x}\_1 \le 1, \alpha > 0, \gamma > -1.$$

Usually, the parameters in a statistical density are real. Then, *g*1(*u*) becomes the following:

$$\begin{split} g_1(u) &= \int_{v} \frac{1}{v} f_1\Big(\frac{u}{v}\Big) f_2(v)\,\mathrm{d}v \\ &= \frac{\Gamma(\alpha+\gamma+1)}{\Gamma(\gamma+1)\Gamma(\alpha)} \int_{v \ge u} \frac{1}{v}\Big(\frac{u}{v}\Big)^{\gamma}\Big(1-\frac{u}{v}\Big)^{\alpha-1} f(v)\,\mathrm{d}v \\ &= \frac{\Gamma(\gamma+\alpha+1)}{\Gamma(\gamma+1)}\,\frac{u^{\gamma}}{\Gamma(\alpha)} \int_{v \ge u} v^{-\gamma-\alpha}(v-u)^{\alpha-1} f(v)\,\mathrm{d}v \\ &= \frac{\Gamma(\alpha+\gamma+1)}{\Gamma(\gamma+1)}\, K_{2,\gamma}^{-\alpha} f,\quad u > 0,\, \gamma > -1 \end{split} \tag{20}$$

where *K*<sub>2,*γ*</sub><sup>−*α*</sup>*f* is the Erdélyi–Kober fractional integral of the second kind of order *α* and parameter *γ*. From Equation (20), note that this fractional integral is also a constant multiple of a statistical density of a product of positive random variables. For generalizations of this result to the matrix-variate case, in the real and complex domains, see [8]. By taking the density of a ratio of real scalar positive random variables, where the variables are independently distributed, with *x*<sub>1</sub> having a type-1 beta density with the parameters (*γ*, *α*) and *x*<sub>2</sub> having an arbitrary density, we can show that the density of the ratio *u* = *x*<sub>2</sub>/*x*<sub>1</sub> will produce a constant multiple of the Erdélyi–Kober fractional integral of the first kind of order *α* and parameter *γ*; details and generalizations of this result may be seen in [8].

#### **4. Krätzel Integral and Bayesian Structures**

In a simple Bayesian structure in Bayesian statistical analysis, we have a conditional density of a random variable *x*, conditioned on a parameter *θ*, written as *f*<sub>1</sub>(*x*|*θ*), the density of *x* given *θ*. Then, *θ* has its own marginal density, denoted by *f*<sub>2</sub>(*θ*). The joint density of *x* and *θ* is then *f*<sub>1</sub>(*x*|*θ*)*f*<sub>2</sub>(*θ*). When both *x* and *θ* are continuous variables, we call this situation a continuous mixture. When one variable is discrete and the other continuous, we call it simply a mixture density. Then, the unconditional density of *x*, denoted by *f*(*x*), is given by

$$f(\mathbf{x}) = \int\_{\theta} f\_1(\mathbf{x}|\theta) f\_2(\theta) \mathrm{d}\theta. \tag{21}$$

A general format of the structure in Equation (21) is of the following type:

$$f(\mathbf{x}\_1) = \int\_{\mathcal{X}\_2} \dots \int\_{\mathcal{X}\_k} f\_1(\mathbf{x}\_1|\mathbf{x}\_2, \dots, \mathbf{x}\_k) f\_2(\mathbf{x}\_2|\mathbf{x}\_3, \dots, \mathbf{x}\_k) \dots f\_{k-1}(\mathbf{x}\_{k-1}|\mathbf{x}\_k) f\_k(\mathbf{x}\_k) d\mathbf{x}\_2 \wedge \dots \wedge d\mathbf{x}\_k. \tag{22}$$

For an application of this type of unconditional density for *k* = 3, see [9]. When all the densities involved in Equations (21) and (22) are continuous, we also call Equations (21) and (22) as continuous mixtures. Consider Equation (21), where

$$f\_1(\mathbf{x}|\theta) = \frac{\theta^{\gamma\delta}}{\Gamma(\gamma)} \mathbf{x}^{\gamma - 1} \mathbf{e}^{-\theta^{\delta}x}, \mathbf{x} \ge 0, \theta > 0, \delta > 0, \gamma > 0$$

and

$$f\_2(\theta) = \frac{\rho b^{\frac{a}{\rho}}}{\Gamma(\frac{a}{\rho})} \theta^{-a-1} \mathbf{e}^{-b\theta^{-\rho}}, b > 0, a > 0, \rho > 0, \theta > 0$$

so that

$$f\_1(\mathbf{x}|\boldsymbol{\theta})f\_2(\boldsymbol{\theta}) = \frac{\rho b^{\frac{\mathbf{a}}{\rho}}}{\Gamma(\gamma)\Gamma(\frac{\mathbf{a}}{\rho})} \mathbf{x}^{\gamma - 1} \theta^{\gamma \delta - \mathbf{a} - 1} \mathbf{e}^{-\mathbf{x}\theta^{\delta} - \mathbf{b}\theta^{-\rho}}.$$

Then, the unconditional density is the following, writing *θ* = *v* in the integral and denoting the unconditional density of *x* again by *f*(*x*):

$$f(\mathbf{x}) = \mathbb{C}\_1 \int\_{v=0}^{\infty} v^{\gamma \delta - a - 1} \mathbf{e}^{-\mathbf{x}v^{\delta} - bv^{-\rho}} \mathbf{d}v \tag{23}$$

where

$$C_1 = \frac{\rho\, b^{\frac{\alpha}{\rho}}}{\Gamma(\gamma)\Gamma(\frac{\alpha}{\rho})}\, x^{\gamma - 1},\quad \alpha > 0,\, \rho > 0,\, \delta > 0,\, \gamma > 0,\, b > 0,\, x > 0.$$

Observe that Equation (23) has the same structure as the Krätzel integral of Equation (2) of Section 1. Note that, if we use the general structure in Equation (22) and consider all densities as generalized gamma densities, then we obtain a generalization and extension of the Krätzel integral to a multivariate situation. Such generalizations are considered below in this paper.
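As a numerical illustration of this Bayesian structure, the sketch below builds *f*<sub>1</sub>(*x*|*θ*) and *f*<sub>2</sub>(*θ*) for one illustrative parameter choice, forms the unconditional density *f*(*x*) by quadrature, and checks that it integrates to 1. All function names and parameter values are assumptions made for the demonstration, not taken from the paper.

```python
import math

# Illustrative parameter choices for the demonstration (assumptions)
GAMMA, DELTA, RHO, ALPHA, B = 2.0, 1.0, 1.0, 2.0, 0.5

def f1_cond(x, th):
    """Conditional gamma-type density f1(x | theta)."""
    return (th**(GAMMA * DELTA) / math.gamma(GAMMA)
            * x**(GAMMA - 1.0) * math.exp(-(th**DELTA) * x))

def f2_marg(th):
    """Marginal density f2(theta), an inverse-gamma-type density."""
    c = RHO * B**(ALPHA / RHO) / math.gamma(ALPHA / RHO)
    return c * th**(-ALPHA - 1.0) * math.exp(-B * th**(-RHO))

def unconditional(x, n=800, lo=-20.0, hi=20.0):
    """f(x) = int_0^inf f1(x|theta) f2(theta) dtheta, via theta = e^t."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        t = lo + i * h
        th = math.exp(t)
        w = h if 0 < i < n else h / 2.0
        s += w * th * f1_cond(x, th) * f2_marg(th)
    return s

def total_mass(n=400, lo=-20.0, hi=20.0):
    """Integral of the unconditional density over x; should be 1."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        t = lo + i * h
        x = math.exp(t)
        w = h if 0 < i < n else h / 2.0
        s += w * x * unconditional(x)
    return s
```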

#### **5. Pathway Extension of Krätzel Integral**

The author of [10] introduced a pathway model for the rectangular matrix-variate case. By using a pathway parameter there, one can move among three different families of functions. When a model is fitted to given data, one member of the pathway family is sure to fit the data if the data fall into one of the three wide families of functions or into the transitional stages of going from one family to another. The pathway model for the real positive scalar variable situation is the following:

$$f_3(x) = c_3\, x^{\gamma - 1}\big[1 + a(\alpha - 1)x^{\delta}\big]^{-\frac{\eta}{\alpha - 1}},\quad x > 0,\, \alpha > 1,\, \eta > 0,\, \delta > 0,\, a > 0. \tag{24}$$

When *α* < 1, then we can write *α* − 1 = −(1 − *α*) so that the model in (24) switches to the model

$$f_4(x) = c_4\, x^{\gamma - 1}\big[1 - a(1 - \alpha)x^{\delta}\big]^{\frac{\eta}{1 - \alpha}},\quad \alpha < 1,\, \eta > 0,\, a > 0,\, \delta > 0 \tag{25}$$

and, further, 1 − *a*(1 − *α*)*x*<sup>*δ*</sup> > 0 in order to create a statistical density out of *f*<sub>4</sub>(*x*). Its support is finite, or it is a finite-range density, whereas in Equation (24) the support is of infinite range and *x* > 0 there. When *α* → 1, both Equations (24) and (25) go to the model

$$f_5(x) = c_5\, x^{\gamma - 1}\mathrm{e}^{-a\eta x^{\delta}},\quad a > 0,\, x > 0,\, \delta > 0,\, \eta > 0. \tag{26}$$

Thus, through the pathway parameter *α* one can move among the three families of functions *f<sub>j</sub>*(*x*), *j* = 3, 4, 5. Both Equations (24) and (25) can be taken as extensions of Equation (26). If Equation (26) is the ideal or stable situation in a physical system, then the unstable neighborhoods are given by Equations (24) and (25). The movement of *α* also describes the transitional stages. For the properties, generalizations and extensions of the pathway model, see [11]. The model in Equation (25) for *γ* = 1, *a* = 1, *η* = 1 and for *α* < 1, *α* > 1, *α* → 1 is Tsallis' statistics in non-extensive statistical mechanics [12]. For some properties and other aspects of the pathway model, see [11,13]. The model in Equation (24) for *a* = 1, *η* = 1, *α* > 1, *α* → 1 is superstatistics (see [14]). Superstatistics considerations come from the unconditional density described in Section 4 when the conditional and marginal densities belong to the exponential and gamma families of densities. Consider the model in Equation (24) with different parameters, take *f*<sub>1</sub> and *f*<sub>2</sub> of Section 1, and consider Mellin convolutions. Let *f*<sub>31</sub> and *f*<sub>32</sub> be two densities belonging to Equation (24) with different parameters. That is, let

$$f_{3j}(x_j) = c_{3j}\, x_j^{\gamma_j - 1}\big[1 + a_j(\alpha_j - 1)x_j^{\delta_j}\big]^{-\frac{\eta_j}{\alpha_j - 1}},\quad x_j > 0,\, \alpha_j > 1,\, a_j > 0,\, \gamma_j > 0,\, \delta_j > 0 \tag{27}$$

for *j* = 1, 2. Let *u* = *x*<sub>1</sub>*x*<sub>2</sub> and *v* = *x*<sub>1</sub>. Consider the Mellin convolution of a product; that is, let *x*<sub>*j*</sub> > 0, *j* = 1, 2 be independently distributed real scalar positive random variables with the densities *f*<sub>31</sub> and *f*<sub>32</sub> of Equation (27), respectively. Then, the density of *u* = *x*<sub>1</sub>*x*<sub>2</sub>, denoted by *g*<sub>*p*</sub>(*u*), where *p* stands for the pathway model, is the following:

$$\begin{split} g\_p(u) &= \int\_{v} \frac{1}{v} f\_{31}(v) f\_{32}\left(\frac{u}{v}\right) \mathrm{d}v \\ (G)\quad &= c\_{31} c\_{32}\, u^{\gamma\_2 - 1} \int\_{v=0}^{\infty} v^{\gamma\_1 - \gamma\_2 - 1} \left[1 + a\_1(\alpha\_1 - 1)v^{\delta\_1}\right]^{-\frac{\eta\_1}{\alpha\_1 - 1}} \\ &\quad \times \left[1 + a\_2(\alpha\_2 - 1)\left(\frac{u}{v}\right)^{\delta\_2}\right]^{-\frac{\eta\_2}{\alpha\_2 - 1}} \mathrm{d}v \end{split} \tag{28}$$

for *α<sup>j</sup>* > 1, *aj* > 0, *δ<sup>j</sup>* > 0, *η<sup>j</sup>* > 0, *j* = 1, 2. See also the versatile integral discussed in [15]. Various types of extensions of Krätzel integrals are involved in Equation (28). When *α*<sup>1</sup> → 1, the first factor or the density in (*G*) goes to the exponential form whereas the second part in Equation (28) remains in the type-2 beta family form. This is one extension. In addition, when *α*<sup>2</sup> → 1, the second part density in Equation (28) goes to the exponential form whereas the first part remains in the type-2 beta family of functions. When *α*<sup>1</sup> → 1 and *α*<sup>2</sup> → 1, Equation (28) goes to the format of the Krätzel integral in Equation (2) of Section 1. A model of the form in Equation (28) for the cases *α<sup>j</sup>* < 1, *α<sup>j</sup>* > 1, *α<sup>j</sup>* → 1, individually, is studied in detail in [15].
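The limiting transitions just described all rest on the pathway limit lim<sub>*α*→1</sub> [1 + *a*(*α* − 1)*x*<sup>*δ*</sup>]<sup>−*η*/(*α*−1)</sup> = e<sup>−*aηx*<sup>*δ*</sup></sup>. The following minimal numerical sketch of this limit is illustrative only; the parameter values are our own choices, not from the paper:

```python
import math

def pathway_factor(x, alpha, a=1.0, delta=1.0, eta=1.0):
    # Type-2 pathway factor [1 + a(alpha - 1) x^delta]^(-eta/(alpha - 1)), alpha > 1
    return (1.0 + a*(alpha - 1.0)*x**delta)**(-eta/(alpha - 1.0))

x = 1.7
limit = math.exp(-1.0*1.0*x**1.0)  # limiting exponential form e^{-a eta x^delta}
for alpha in (1.5, 1.1, 1.001):
    # the error shrinks as alpha -> 1
    print(alpha, abs(pathway_factor(x, alpha) - limit))
```

The same computation, applied factor by factor, shows how Equation (28) collapses to the exponential forms as *α*<sub>1</sub> → 1 or *α*<sub>2</sub> → 1.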

#### *Connection to Kobayashi Integrals*

In Equation (28), let *α*<sub>1</sub> → 1 and let *α*<sub>2</sub> remain the same. Then, Equation (28) reduces to the following form:

$$g\_p(u) = c\_{31} c\_{32}\, u^{\gamma\_2 - 1} \int\_{v=0}^{\infty} v^{\gamma\_1 - \gamma\_2 - 1} \mathrm{e}^{-a\_1\eta\_1 v^{\delta\_1}}$$

$$\times \left[1 + a\_2(\alpha\_2 - 1)\left(\frac{u}{v}\right)^{\delta\_2}\right]^{-\frac{\eta\_2}{\alpha\_2 - 1}} \mathrm{d}v. \tag{29}$$

Observe that Equation (29) is a more general form of the ultra gamma integral and the Kobayashi integral. The Kobayashi form is available from the Mellin convolution of a ratio. Let *u*<sub>1</sub> = *x*<sub>2</sub>/*x*<sub>1</sub> with *x*<sub>1</sub> = *v*, and let *x*<sub>1</sub> and *x*<sub>2</sub> be independently distributed pathway random variables as described in Section 5. Then, *x*<sub>1</sub> = *v*, *x*<sub>2</sub> = *u*<sub>1</sub>*v* and d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> = *v* d*u*<sub>1</sub> ∧ d*v*. Then, the pathway density of *u*<sub>1</sub>, denoted by *g*<sub>*p*1</sub>(*u*<sub>1</sub>), is the following for *α*<sub>1</sub> → 1:

$$g\_{p1}(u\_1) = c\_{31} c\_{32}\, u\_1^{\gamma\_2 - 1} \int\_{v=0}^{\infty} v^{\gamma\_1 + \gamma\_2 - 1} \mathrm{e}^{-a\_1\eta\_1 v^{\delta\_1}}$$

$$\times \left[1 + a\_2(\alpha\_2 - 1)(u\_1 v)^{\delta\_2}\right]^{-\frac{\eta\_2}{\alpha\_2 - 1}} \mathrm{d}v \tag{30}$$

for *a*<sub>*j*</sub> > 0, *γ*<sub>*j*</sub> > 0, *δ*<sub>*j*</sub> > 0, *η*<sub>*j*</sub> > 0, *j* = 1, 2, *α*<sub>2</sub> > 1. The Kobayashi integral is obtained from Equation (30) by putting *a*<sub>2</sub>(*α*<sub>2</sub> − 1)*u*<sub>1</sub><sup>*δ*<sub>2</sub></sup> = *λ* and *η*<sub>2</sub>/(*α*<sub>2</sub> − 1) = *η* (see [16,17]). Some authors call the Kobayashi form the ultra gamma integral. Observe that Equation (30) is a much more general and flexible format, and for varying *α*<sub>2</sub> we have three families of functions in Equation (30), including the Kobayashi format. The Mellin transform of *g*<sub>*p*1</sub>(*u*<sub>1</sub>), with Mellin parameter *s*, is available from the form *u*<sub>1</sub> = *x*<sub>2</sub>/*x*<sub>1</sub>, namely

$$M\_{g\_{p1}}(s) = M\_{f\_1}(2 - s)\, M\_{f\_2}(s) \quad \text{or} \quad E[u\_1^{s-1}] = E[x\_1^{-s+1}]\, E[x\_2^{s-1}]$$

and these moments are available from the pathway densities of *x*<sub>1</sub> and *x*<sub>2</sub> with *α*<sub>1</sub> → 1.
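The moment relation *E*[*u*<sub>1</sub><sup>*s*−1</sup>] = *E*[*x*<sub>1</sub><sup>−*s*+1</sup>]*E*[*x*<sub>2</sub><sup>*s*−1</sup>] for the ratio of independent positive random variables can be illustrated by simulation. In the sketch below, ordinary gamma variables stand in for the pathway variables; this is an illustrative assumption of ours, since any positive densities with the required moments would do:

```python
import random

random.seed(7)

# Monte Carlo check of E[u1^(s-1)] = E[x1^(1-s)] * E[x2^(s-1)] for u1 = x2/x1,
# with x1 ~ Gamma(shape 3), x2 ~ Gamma(shape 2) standing in for pathway densities
n = 200_000
x1 = [random.gammavariate(3.0, 1.0) for _ in range(n)]
x2 = [random.gammavariate(2.0, 1.0) for _ in range(n)]

s = 2.0
lhs = sum((b/a)**(s - 1.0) for a, b in zip(x1, x2))/n            # E[u1^(s-1)]
rhs = (sum(a**(1.0 - s) for a in x1)/n)*(sum(b**(s - 1.0) for b in x2)/n)
print(lhs, rhs)  # both estimate E[1/x1]*E[x2] = (1/2)*2 = 1
```

Here the exact value 1 follows from *E*[1/*x*<sub>1</sub>] = 1/2 and *E*[*x*<sub>2</sub>] = 2 for the chosen gamma shapes.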

#### **6. Multivariate Extensions of Krätzel Integrals**

Let us start with the case of three variables. Let *x*<sub>*j*</sub> > 0, *j* = 1, 2, 3 be three real scalar variables and let the associated functions be *f*<sub>*j*</sub>(*x*<sub>*j*</sub>), *j* = 1, 2, 3, respectively. If *x*<sub>*j*</sub> > 0, *j* = 1, 2, 3 are real scalar random variables, independently distributed, then *f*<sub>*j*</sub>(*x*<sub>*j*</sub>), *j* = 1, 2, 3 may be the corresponding densities. Let *u* = *x*<sub>1</sub>*x*<sub>2</sub>*x*<sub>3</sub> be the product and let *v* = *x*<sub>2</sub>*x*<sub>3</sub>, *w* = *x*<sub>3</sub>. Then, d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> ∧ d*x*<sub>3</sub> = (1/(*vw*)) d*u* ∧ d*v* ∧ d*w*. The Mellin convolution of a product involving three real scalar variables is considered in [18]. Let

$$f\_j(\mathbf{x}\_j) = c\_j \mathbf{x}\_j^{\gamma\_j - 1} \mathbf{e}^{-a\_j \mathbf{x}\_j^{\delta\_j}}, a\_j > 0, \delta\_j > 0, \gamma\_j > 0, j = 1, 2, 3 \tag{31}$$

where *c*<sub>*j*</sub> is a constant, which may be a normalizing constant if *f*<sub>*j*</sub> in Equation (31) is a density. Then, the density of *u*, or the Mellin convolution of the product, again denoted by *g*(*u*), is the following:

$$\begin{split} g(u) &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1\left(\frac{u}{v}\right) f\_2\left(\frac{v}{w}\right) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w \qquad (32) \\ &= c\_1 c\_2 c\_3 \int\_{v} \int\_{w} \frac{1}{vw} \left(\frac{u}{v}\right)^{\gamma\_1 - 1} \left(\frac{v}{w}\right)^{\gamma\_2 - 1} w^{\gamma\_3 - 1} \\ &\quad \times \mathrm{e}^{-a\_1(\frac{u}{v})^{\delta\_1} - a\_2(\frac{v}{w})^{\delta\_2} - a\_3 w^{\delta\_3}}\, \mathrm{d}v \wedge \mathrm{d}w \end{split} \tag{33}$$

where Equation (32) gives the general structure, whatever the *f*<sub>*j*</sub>'s are, and Equation (33) is the case when the *f*<sub>*j*</sub>'s belong to Equation (31). Then, Equation (33) can be taken as a bivariate version of the Krätzel integral. Observe that, in the exponent, we have *v* and *w* with positive and negative exponents. If we take *u* = *x*<sub>1</sub>*x*<sub>2</sub>*x*<sub>3</sub>, *v* = *x*<sub>2</sub>, *w* = *x*<sub>3</sub>, then the exponential part in *g*(*u*) is of the following form:

$$\mathrm{e}^{-a\_1\left(\frac{u}{vw}\right)^{\delta\_1} - a\_2 v^{\delta\_2} - a\_3 w^{\delta\_3}}.$$

In the format of Equation (33), we can take *v* = *x*<sub>1</sub>*x*<sub>2</sub>, *w* = *x*<sub>2</sub> or *v* = *x*<sub>2</sub>*x*<sub>3</sub>, *w* = *x*<sub>1</sub>. These produce two more different forms corresponding to Equation (33). We can also take *u* = *x*<sub>1</sub>*x*<sub>2</sub>*x*<sub>3</sub> = *u*<sub>12</sub>*x*<sub>3</sub>, *u*<sub>12</sub> = *x*<sub>1</sub>*x*<sub>2</sub>. We can get the density of *u*<sub>12</sub> first by using *f*<sub>1</sub> and *f*<sub>2</sub>. Let the density of *u*<sub>12</sub> be denoted by *g*<sub>12</sub>(*u*<sub>12</sub>). Then, by using *g*<sub>12</sub> and *f*<sub>3</sub>, we can get the density of *u*. This produces another bivariate extension of the Krätzel integral. Follow the same procedure by taking *u* = *u*<sub>23</sub>*x*<sub>1</sub> or *u* = *u*<sub>13</sub>*x*<sub>2</sub>, where *u*<sub>23</sub> = *x*<sub>2</sub>*x*<sub>3</sub>, *u*<sub>13</sub> = *x*<sub>1</sub>*x*<sub>3</sub>. In these cases, obtain the densities of *u*<sub>13</sub> and *u*<sub>23</sub> first and then proceed. These produce other different bivariate extensions of Krätzel integrals. For example, let *u* = *x*<sub>1</sub>*x*<sub>2</sub>*x*<sub>3</sub> = *u*<sub>12</sub>*x*<sub>3</sub>, *u*<sub>12</sub> = *x*<sub>1</sub>*x*<sub>2</sub>. Let the density of *u*<sub>12</sub> be *g*<sub>12</sub>(*u*<sub>12</sub>). Then, from the two-variables case,

$$(H)\quad g\_{12}(u\_{12}) = \int\_{v} \frac{1}{v} f\_1\left(\frac{u\_{12}}{v}\right) f\_2(v)\, \mathrm{d}v.$$

Let the density of *u* be *g*(*u*). Then,

$$\begin{aligned} (I)\quad g(u) &= \int\_{w} \frac{1}{w}\, g\_{12}\left(\frac{u}{w}\right) f\_3(w)\, \mathrm{d}w \\ &= \int\_{w} \frac{1}{w} \left[\int\_{v} \frac{1}{v} f\_1\left(\frac{u}{vw}\right) f\_2(v)\, \mathrm{d}v\right] f\_3(w)\, \mathrm{d}w \\ &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1\left(\frac{u}{vw}\right) f\_2(v) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w. \end{aligned}$$

However, we also have

$$(J)\quad g\_{12}(u\_{12}) = \int\_{v} \frac{1}{v} f\_1(v) f\_2\left(\frac{u\_{12}}{v}\right) \mathrm{d}v.$$

Substituting for *g*<sub>12</sub> from (*J*) into (*I*), we have the following, and other forms follow from the symmetry as well:

$$\begin{split} g(u) &= \int\_{w} \frac{1}{w} \left[\int\_{v} \frac{1}{v} f\_1(v) f\_2\left(\frac{u}{vw}\right) \mathrm{d}v\right] f\_3(w)\, \mathrm{d}w \\ (K)\quad &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1(v) f\_2\left(\frac{u}{vw}\right) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w \\ &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1(v) f\_2(w) f\_3\left(\frac{u}{vw}\right) \mathrm{d}v \wedge \mathrm{d}w \\ &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1\left(\frac{u}{vw}\right) f\_2(w) f\_3(v)\, \mathrm{d}v \wedge \mathrm{d}w \\ &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1(w) f\_2(v) f\_3\left(\frac{u}{vw}\right) \mathrm{d}v \wedge \mathrm{d}w \\ &= \int\_{v} \int\_{w} \frac{1}{vw}\, f\_1(w) f\_2\left(\frac{u}{vw}\right) f\_3(v)\, \mathrm{d}v \wedge \mathrm{d}w. \end{split}$$

A few such forms, as in (*K*), are described in [7] and hence are not repeated here. From products of four or more variables *x*<sub>*j*</sub> > 0, *j* = 1, 2, ..., *k*, *k* ≥ 4, we can have several different extensions of the Krätzel integral in the bivariate, trivariate and general multivariate cases. The method is similar to what is explained above, and hence further discussion is omitted. Even though hundreds of different integral representations are available for the density of *u* = *x*<sub>1</sub>...*x*<sub>*k*</sub>, the explicit evaluation of the density *g*(*u*) of *u* is possible by inverting the corresponding Mellin transform, namely

$$M\_g(s) = \prod\_{j=1}^{k} M\_{f\_j}(s)$$

and then taking the inverse Mellin transform of ∏<sup>*k*</sup><sub>*j*=1</sub> *M*<sub>*f*<sub>*j*</sub></sub>(*s*) to obtain the density *g* of *u* = *x*<sub>1</sub>*x*<sub>2</sub>...*x*<sub>*k*</sub>.
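As a concrete instance of the Mellin moments entering ∏ *M*<sub>*f*<sub>*j*</sub></sub>(*s*), the generalized gamma density of Equation (31) has *E*[*x*<sup>*s*−1</sup>] = *a*<sup>−(*s*−1)/*δ*</sup> Γ((*γ* + *s* − 1)/*δ*)/Γ(*γ*/*δ*). This closed form is a standard result that we derive ourselves here, not a formula quoted from the paper; the sketch below checks it against brute-force quadrature:

```python
import math

def mellin_moment(gamma_, a, delta, s):
    # E[x^(s-1)] for the density f(x) = c x^(gamma-1) exp(-a x^delta)
    # (standard generalized-gamma result, used when inverting prod M_{f_j}(s))
    return a**(-(s - 1.0)/delta)*math.gamma((gamma_ + s - 1.0)/delta)/math.gamma(gamma_/delta)

def numeric_moment(gamma_, a, delta, s, upper=50.0, n=100_000):
    # brute-force trapezoidal quadrature of x^(s-1) f(x) over (0, upper)
    c = delta*a**(gamma_/delta)/math.gamma(gamma_/delta)  # normalizing constant
    h = upper/n
    return sum((i*h)**(s - 1.0)*c*(i*h)**(gamma_ - 1.0)*math.exp(-a*(i*h)**delta)
               for i in range(1, n))*h

analytic = mellin_moment(2.0, 1.5, 1.0, 2.5)
numeric = numeric_moment(2.0, 1.5, 1.0, 2.5)
print(analytic, numeric)
```

With such moments in hand, the product formula above gives *M*<sub>*g*</sub>(*s*) in closed form for any number of independent factors.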

#### *Connections to Fractional Integrals*

Let *x*<sub>*j*</sub> > 0, *j* = 1, 2, 3 be real scalar random variables, independently distributed with densities *f*<sub>*j*</sub>(*x*<sub>*j*</sub>), *j* = 1, 2, 3, respectively. Let *u* = *x*<sub>1</sub>*x*<sub>2</sub>*x*<sub>3</sub>, *v* = *x*<sub>2</sub>, *w* = *x*<sub>3</sub>. Then, d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> ∧ d*x*<sub>3</sub> = (1/(*vw*)) d*u* ∧ d*v* ∧ d*w*. Let *f*<sub>1</sub> be a real scalar type-1 beta density with the parameters (*γ* + 1, *α*), that is, with the density:

$$f\_1(\mathbf{x}\_1) = \frac{\Gamma(\gamma + 1 + \alpha)}{\Gamma(\gamma + 1)\Gamma(\alpha)} x\_1^{\gamma} (1 - x\_1)^{\alpha - 1}, 0 \le x\_1 \le 1, \alpha > 0, \gamma > -1.$$

Let *f*<sup>2</sup> and *f*<sup>3</sup> be arbitrary densities. Then,

$$f\_1(x\_1) = f\_1\left(\frac{u}{vw}\right) = \frac{\Gamma(\gamma + 1 + \alpha)}{\Gamma(\gamma + 1)\Gamma(\alpha)} \left(\frac{u}{vw}\right)^{\gamma} \left(1 - \frac{u}{vw}\right)^{\alpha - 1}. \tag{34}$$

Then, the density of *u* from (34), *f*<sup>2</sup> and *f*3, denoted again by *g*(*u*), is the following:

$$\begin{split} g(u) &= \frac{\Gamma(\gamma + 1 + \alpha)}{\Gamma(\gamma + 1)} \frac{u^{\gamma}}{\Gamma(\alpha)} \int\_{v} \int\_{w} (vw)^{-\gamma - \alpha} (vw - u)^{\alpha - 1} f\_2(v) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w \\ &= \frac{\Gamma(\gamma + 1 + \alpha)}{\Gamma(\gamma + 1)} K\_{2,\gamma}^{-\alpha}(f\_2, f\_3). \end{split} \tag{35}$$

If *f*<sub>3</sub> and the corresponding *w* are absent, then *K*<sup>−*α*</sup><sub>2,*γ*</sub>(*f*<sub>2</sub>, *f*<sub>3</sub>) = *K*<sup>−*α*</sup><sub>2,*γ*</sub> *f*<sub>2</sub>, which is the Erdélyi–Kober fractional integral of the second kind of order *α* and parameter *γ*, where the arbitrary function is *f*<sub>2</sub>. Similarly, when *f*<sub>2</sub> and *v* are absent, we get the Erdélyi–Kober fractional integral of the second kind of order *α* and parameter *γ* with the arbitrary function *f*<sub>3</sub>. Hence, Equation (35) is a bivariate generalization of the Erdélyi–Kober fractional integral of the second kind. The generalization in Equation (35) is different from the multivariate case of Mathai [8] and the multi-index case of Kiryakova [19]. Other extensions of fractional integrals to the bivariate case are available from the various representations in (*K*) of Section 6 by taking one or two of the three functions there as real scalar type-1 beta densities.

Let *u*<sub>1</sub> = *x*<sub>1</sub>/*x*<sub>2</sub> with *x*<sub>1</sub> = *v*, so that *x*<sub>2</sub> = *v*/*u*<sub>1</sub> and d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> = −(*v*/*u*<sub>1</sub><sup>2</sup>) d*u*<sub>1</sub> ∧ d*v*. Then, the density of *u*<sub>1</sub>, denoted by *g*<sub>1</sub>(*u*<sub>1</sub>), is the following:

$$g\_1(\boldsymbol{u}\_1) = \int\_{\boldsymbol{v}} \frac{\boldsymbol{v}}{\boldsymbol{u}\_1^2} f\_1(\boldsymbol{v}) f\_2(\frac{\boldsymbol{v}}{\boldsymbol{u}\_1}) \mathrm{d}\boldsymbol{v}.\tag{36}$$

Let *f*<sub>1</sub>(*v*) = *f*(*v*) be an arbitrary density and let *f*<sub>2</sub>(*x*<sub>2</sub>) be a real scalar type-1 beta density with the parameters (*γ*, *α*). Then, from Equation (36),

$$\begin{split} g\_1(u\_1) &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)\Gamma(\alpha)} \int\_{v} \frac{v}{u\_1^2} f(v) \left(\frac{v}{u\_1}\right)^{\gamma - 1} \left(1 - \frac{v}{u\_1}\right)^{\alpha - 1} \mathrm{d}v \\ &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)} \frac{u\_1^{-\alpha - \gamma}}{\Gamma(\alpha)} \int\_{v \le u\_1} v^{\gamma} (u\_1 - v)^{\alpha - 1} f(v)\, \mathrm{d}v \\ &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)} K\_{1,\gamma}^{-\alpha} f \end{split} \tag{37}$$

where *K*<sup>−*α*</sup><sub>1,*γ*</sub> *f* is the Erdélyi–Kober fractional integral of the first kind of order *α* and parameter *γ*. Consider the generalization to three variables. Let *u*<sub>1</sub> = *x*<sub>2</sub>*x*<sub>3</sub>/*x*<sub>1</sub>, *x*<sub>2</sub> = *v*, *x*<sub>3</sub> = *w* ⇒ *x*<sub>1</sub> = *vw*/*u*<sub>1</sub>. Then, d*x*<sub>1</sub> ∧ d*x*<sub>2</sub> ∧ d*x*<sub>3</sub> = −(*vw*/*u*<sub>1</sub><sup>2</sup>) d*u*<sub>1</sub> ∧ d*v* ∧ d*w*, and the marginal density of *u*<sub>1</sub>, again denoted by *g*<sub>1</sub>(*u*<sub>1</sub>), is the following:

$$\begin{split} g\_1(u\_1) &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)\Gamma(\alpha)} \int\_{v} \int\_{w} \frac{vw}{u\_1^2} \left(\frac{vw}{u\_1}\right)^{\gamma - 1} \left(1 - \frac{vw}{u\_1}\right)^{\alpha - 1} f\_2(v) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w \\ &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)} \frac{u\_1^{-\gamma - \alpha}}{\Gamma(\alpha)} \int\_{v} \int\_{w} (vw)^{\gamma} (u\_1 - vw)^{\alpha - 1} f\_2(v) f\_3(w)\, \mathrm{d}v \wedge \mathrm{d}w \\ &= \frac{\Gamma(\gamma + \alpha)}{\Gamma(\gamma)} K\_{1,\gamma}^{-\alpha}(f\_2, f\_3) \end{split} \tag{38}$$

where *K*<sup>−*α*</sup><sub>1,*γ*</sub>(*f*<sub>2</sub>, *f*<sub>3</sub>) of Equation (38) may be called the Erdélyi–Kober fractional integral of the first kind of order *α* and parameter *γ* in the bivariate case, or with two arbitrary functions. Here, the integrals are over 0 ≤ *v* ≤ 1, 0 ≤ *w* ≤ 1, 0 ≤ *vw* ≤ *u*<sub>1</sub>. This type of generalization is different from the ones available in the literature. Various definitions of fractional integrals, fractional derivatives, and fractional differential equations and their properties may be seen in [20–22].
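A standard property of the Erdélyi–Kober operator of the first kind in Equation (37) is that it maps a power *v*<sup>*β*</sup> to [Γ(*γ* + *β* + 1)/Γ(*γ* + *α* + *β* + 1)] *u*<sup>*β*</sup>. The sketch below is an illustrative check of this classical eigenrelation against direct quadrature, with arbitrary parameter values of our own choosing:

```python
import math

def ek_first_kind(f, u, alpha, gamma_, n=20_000):
    # K^{-alpha}_{1,gamma} f (u) = u^(-alpha-gamma)/Gamma(alpha) *
    #     integral_0^u v^gamma (u - v)^(alpha - 1) f(v) dv, via midpoint rule
    h = u/n
    total = 0.0
    for i in range(n):
        v = (i + 0.5)*h
        total += v**gamma_*(u - v)**(alpha - 1.0)*f(v)
    return u**(-alpha - gamma_)*total*h/math.gamma(alpha)

# power-function test: K^{-alpha}_{1,gamma} v^beta
#   = Gamma(gamma+beta+1)/Gamma(gamma+alpha+beta+1) * u^beta
alpha, gamma_, beta, u = 1.5, 2.0, 1.0, 2.0
exact = math.gamma(gamma_ + beta + 1.0)/math.gamma(gamma_ + alpha + beta + 1.0)*u**beta
approx = ek_first_kind(lambda v: v**beta, u, alpha, gamma_)
print(approx, exact)
```

The same quadrature routine, nested over *v* and *w*, could be used to evaluate the bivariate form in Equation (38) for specific *f*<sub>2</sub>, *f*<sub>3</sub>.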

#### **7. Krätzel Integral in the Real Matrix-variate Case**

It is easier to interpret the Krätzel integral in terms of statistical distributions. Let *X*<sub>1</sub> and *X*<sub>2</sub> be two *p* × *p* real positive definite matrix random variables with the densities *f*<sub>1</sub>(*X*<sub>1</sub>) and *f*<sub>2</sub>(*X*<sub>2</sub>), respectively. Density here means a real-valued scalar function *f*(*X*) of the positive definite matrix *X* > *O* such that *f*(*X*) ≥ 0 for all *X* > *O* and ∫<sub>*X*>*O*</sub> *f*(*X*) d*X* = 1. That is, for *X*<sub>*j*</sub> > *O*, *j* = 1, 2 (positive definite), *f*<sub>*j*</sub>(*X*<sub>*j*</sub>) ≥ 0 for all *X*<sub>*j*</sub> > *O* and ∫<sub>*X*<sub>*j*</sub>>*O*</sub> *f*<sub>*j*</sub>(*X*<sub>*j*</sub>) d*X*<sub>*j*</sub> = 1, *j* = 1, 2. Let *X*<sub>*j*</sub> > *O* have a real matrix-variate gamma density. That is,

$$f\_j(X\_j) = \frac{|A\_j|^{\gamma\_j}}{\Gamma\_p(\gamma\_j)} |X\_j|^{\gamma\_j - \frac{p+1}{2}} \mathrm{e}^{-\mathrm{tr}(A\_j X\_j)},\ X\_j > O,\ A\_j > O,\ \Re(\gamma\_j) > \frac{p-1}{2},\ j = 1, 2 \tag{39}$$

where, in Equation (39), *A*<sub>*j*</sub> > *O* is a *p* × *p* real positive definite constant matrix for *j* = 1, 2. When *p* = 1, we have the corresponding scalar variable gamma density. The real matrix-variate gamma function Γ<sub>*p*</sub>(*γ*<sub>*j*</sub>) is explained below. In the scalar case, we have taken exponents *δ*<sub>*j*</sub> > 0, *j* = 1, 2, but if we take exponents in the matrix-variate case, then the transformations will not produce nice forms for further derivations (see the types of difficulties in [23]); hence, we have taken *δ*<sub>1</sub> = *δ*<sub>2</sub> = 1 in the matrix-variate case. Let us consider the symmetric product *U* = *X*<sub>2</sub><sup>1/2</sup>*X*<sub>1</sub>*X*<sub>2</sub><sup>1/2</sup>, where *X*<sub>2</sub><sup>1/2</sup> > *O* is the positive definite square root of the positive definite matrix *X*<sub>2</sub> > *O*. We have taken the symmetric product because the transformations are on symmetric formats. Let *V* = *X*<sub>2</sub>. Then, from Mathai [23], we can derive d*X*<sub>1</sub> ∧ d*X*<sub>2</sub> = |*V*|<sup>−(*p*+1)/2</sup> d*U* ∧ d*V*, and then, proceeding as in the scalar variable case, the density of *U*, denoted again by *g*(*U*), is given by the following:

$$g(U) = \int\_{V} |V|^{-\frac{p+1}{2}} f\_1(V^{-\frac{1}{2}} U V^{-\frac{1}{2}}) f\_2(V)\, \mathrm{d}V \tag{40}$$

where *f*<sup>1</sup> and *f*<sup>2</sup> in Equation (40) are some general densities. Consider the case when *fj*(*Xj*) is a real matrix-variate gamma density given by the following:

$$f\_j(X\_j) = \frac{|A\_j|^{\gamma\_j}}{\Gamma\_p(\gamma\_j)} |X\_j|^{\gamma\_j - \frac{p+1}{2}} \mathrm{e}^{-\mathrm{tr}(A\_j X\_j)}, \tag{41}$$

for *A*<sub>*j*</sub> > *O*, *X*<sub>*j*</sub> > *O*, ℜ(*γ*<sub>*j*</sub>) > (*p* − 1)/2, *j* = 1, 2, where Γ<sub>*p*</sub>(*γ*<sub>*j*</sub>) is the real matrix-variate gamma function given by

$$
\Gamma\_p(a) = \pi^{\frac{p(p-1)}{4}} \Gamma(a) \Gamma(a-\frac{1}{2}) \dots \Gamma(a-\frac{p-1}{2}), \Re(a) > \frac{p-1}{2}.\tag{42}
$$
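Equation (42) translates directly into code. A small sketch of our own, with the sanity checks Γ<sub>1</sub>(*a*) = Γ(*a*) and Γ<sub>2</sub>(*a*) = √π Γ(*a*)Γ(*a* − 1/2):

```python
import math

def gamma_p(a, p):
    # Real matrix-variate gamma function (Equation (42)):
    # Gamma_p(a) = pi^(p(p-1)/4) * prod_{j=0}^{p-1} Gamma(a - j/2), Re(a) > (p-1)/2
    val = math.pi**(p*(p - 1)/4.0)
    for j in range(p):
        val *= math.gamma(a - j/2.0)
    return val

print(gamma_p(3.0, 1), math.gamma(3.0))                            # p = 1: ordinary gamma
print(gamma_p(3.0, 2), math.sqrt(math.pi)*math.gamma(3.0)*math.gamma(2.5))
```

This function supplies the normalizing constants in Equations (39), (41) and (43) for any matrix dimension *p*.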

For the densities in Equation (41), with Γ*p*(*γj*) defined in Equation (42), the density of *U* is given by the following:

$$g(U) = C|U|^{\gamma\_1 - \frac{p+1}{2}} \int\_{V>O} |V|^{\gamma\_2 - \gamma\_1 - \frac{p+1}{2}} \mathrm{e}^{-\mathrm{tr}(V^{-\frac{1}{2}}A\_1V^{-\frac{1}{2}}U) - \mathrm{tr}(A\_2V)}\, \mathrm{d}V \tag{43}$$

for *A*<sub>*j*</sub> > *O*, *V* > *O*, *U* > *O*, ℜ(*γ*<sub>*j*</sub>) > (*p* − 1)/2, *j* = 1, 2, where

$$\mathbb{C} = \prod\_{j=1}^{2} \frac{|A\_j|^{\gamma\_j}}{\Gamma\_p(\gamma\_j)}.$$

This Equation (43) is the Krätzel integral in the real matrix-variate case. Note that, if *A*<sub>1</sub> is a positive scalar quantity, then it can be taken out, and *V*<sup>−1</sup> will be obtained, corresponding to the real scalar case.

The model in Equation (41) is also connected to the Maxwell–Boltzmann and Rayleigh densities in physics. Their matrix-variate, multivariate and rectangular matrix-variate extensions and some applications in reliability analysis are given in [24]. Their complex matrix-variate analogs can be worked out, but they do not seem to be available in the literature yet.

#### **8. Krätzel Integral in the Complex Matrix-variate Case**

Here, we consider *p* × *p* Hermitian positive definite matrices *X̃*<sub>*j*</sub> > *O*, *j* = 1, 2 and the Hermitian positive definite square root *X̃*<sub>2</sub><sup>1/2</sup>. Consider the symmetric product *Ũ* = *X̃*<sub>2</sub><sup>1/2</sup>*X̃*<sub>1</sub>*X̃*<sub>2</sub><sup>1/2</sup>, *Ṽ* = *X̃*<sub>2</sub>. Then, from [23], we have d*X̃*<sub>1</sub> ∧ d*X̃*<sub>2</sub> = |det(*Ṽ*)|<sup>−*p*</sup> d*Ũ* ∧ d*Ṽ*. Let the density of *Ũ* be denoted by *g̃*(*Ũ*) when *X̃*<sub>*j*</sub>, *j* = 1, 2 are independently distributed with the complex matrix-variate gamma densities given by

$$\tilde{f}\_j(\tilde{X}\_j) = \frac{|\det(A\_j)|^{\gamma\_j}}{\tilde{\Gamma}\_p(\gamma\_j)} |\det(\tilde{X}\_j)|^{\gamma\_j - p}\, \mathrm{e}^{-\mathrm{tr}(A\_j \tilde{X}\_j)},\ \tilde{X}\_j > O,\ \Re(\gamma\_j) > p - 1,\ j = 1, 2 \tag{44}$$

where Γ̃<sub>*p*</sub>(*α*) is the complex matrix-variate gamma function given by the following:

$$\tilde{\Gamma}\_p(\alpha) = \pi^{\frac{p(p-1)}{2}} \Gamma(\alpha)\Gamma(\alpha - 1)\cdots\Gamma(\alpha - p + 1),\ \Re(\alpha) > p - 1. \tag{45}$$

Then, from Equations (44) and (45), proceeding as in the real matrix-variate case the density of *U*˜ , denoted by *g*˜(*U*˜ ), is the following:

$$\tilde{g}(\tilde{U}) = \tilde{C}\, |\det(\tilde{U})|^{\gamma\_1 - p} \int\_{\tilde{V}>O} |\det(\tilde{V})|^{\gamma\_2 - \gamma\_1 - p}\, \mathrm{e}^{-\mathrm{tr}(\tilde{V}^{-\frac{1}{2}} A\_1 \tilde{V}^{-\frac{1}{2}} \tilde{U}) - \mathrm{tr}(A\_2 \tilde{V})}\, \mathrm{d}\tilde{V}$$

for ℜ(*γ*<sub>*j*</sub>) > *p* − 1, *A*<sub>*j*</sub> > *O*, *Ṽ* > *O*, *Ũ* > *O*, *j* = 1, 2, where

$$\tilde{C} = \prod\_{j=1}^{2} \frac{|\det(A\_j)|^{\gamma\_j}}{\tilde{\Gamma}\_p(\gamma\_j)}.$$

#### **9. Extension to Rectangular Matrix-variate Case**

Let *X* = (*x*<sub>*ij*</sub>) be a *p* × *q*, *q* ≥ *p*, matrix of full rank *p*, where the elements *x*<sub>*ij*</sub> are distinct real scalar variables. Let *A* > *O* be a *p* × *p* and *B* > *O* a *q* × *q* constant real positive definite matrix. Let a prime denote the transpose, let tr(·) be the trace of (·), and let, for example, *A*<sup>1/2</sup> be the positive definite square root of the positive definite matrix *A* > *O*. Consider the model

$$f(X) = C|A^{\frac{1}{2}}XBX'A^{\frac{1}{2}}|^{\gamma}\, |I + a\_1(q\_1 - 1)(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})|^{-\frac{1}{q\_1 - 1}}$$

$$\times\, |I + a\_2(q\_2 - 1)(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})^{-1}|^{-\frac{1}{q\_2 - 1}} \tag{46}$$

for *a*<sub>*j*</sub> > 0, *q*<sub>*j*</sub> > 1, *j* = 1, 2, *γ* > −*q*/2 + (*p* − 1)/2. Observe that

$$\lim\_{q\_j \to 1} |I + a\_j(q\_j - 1)(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})|^{-\frac{1}{q\_j - 1}} = \mathrm{e}^{-a\_j \mathrm{tr}(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})} \tag{47}$$

for *j* = 1, 2. Let

$$f\_1(X) = \lim\_{q\_1 \to 1} f(X), \\ f\_2(X) = \lim\_{q\_2 \to 1} f(X), \\ f\_3(X) = \lim\_{q\_1 \to 1, q\_2 \to 1} f(X).$$

Then,

$$f\_1(X) = C\_1|A^{\frac{1}{2}}XBX'A^{\frac{1}{2}}|^{\gamma}\, \mathrm{e}^{-a\_1\mathrm{tr}(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})}$$

$$\times\, |I + a\_2(q\_2 - 1)(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})^{-1}|^{-\frac{1}{q\_2 - 1}}. \tag{48}$$

$$f\_2(X) = \mathbb{C}\_2|A^{\frac{1}{2}}XBX^\prime A^{\frac{1}{2}}|^\gamma |I + a\_1(q\_1 - 1)(A^{\frac{1}{2}}XBX^\prime A^{\frac{1}{2}})|^{-\frac{1}{q\_1 - 1}}$$

$$\times e^{-a\_2 \text{tr}(A^{\frac{1}{2}}XBX^\prime A^{\frac{1}{2}})^{-1}}.\tag{49}$$

$$f\_3(X) = C\_3|A^{\frac{1}{2}}XBX'A^{\frac{1}{2}}|^{\gamma}\, \mathrm{e}^{-a\_1\mathrm{tr}(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}}) - a\_2\mathrm{tr}(A^{\frac{1}{2}}XBX'A^{\frac{1}{2}})^{-1}}. \tag{50}$$
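The determinant-to-trace limit in Equation (47), which produces Equations (48)–(50), can be checked numerically for a small matrix. The sketch below uses an arbitrary 2 × 2 positive definite matrix of our own choosing:

```python
import math

def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

# arbitrary 2x2 symmetric positive definite test matrix
M = [[2.0, 0.3], [0.3, 1.0]]
a = 0.7
target = math.exp(-a*(M[0][0] + M[1][1]))  # e^{-a tr(M)}

def det_power(q):
    # |I + a(q - 1)M|^(-1/(q - 1)), which should tend to e^{-a tr(M)} as q -> 1
    eps = q - 1.0
    m = [[1.0 + a*eps*M[0][0], a*eps*M[0][1]],
         [a*eps*M[1][0], 1.0 + a*eps*M[1][1]]]
    return det2(m)**(-1.0/eps)

print(det_power(1.01), det_power(1.0001), target)
```

The error behaves like O(*q* − 1), mirroring the scalar pathway limit of Section 5.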

Then, *f*3(*X*), coming from Equations (46) and (47), is the real rectangular matrix-variate version of Krätzel integral. In a physical model building situation, if Equation (50) is the stable or ideal situation, then Equations (46), (48) and (49) describe the unstable neighborhoods. From the discussion in Sections 2 and 3, we can see that the model in Equations (46) and (48)–(50) can also be generated by M-convolution of product or density of a product in the real matrix-variate case. In Equation (50), for simplicity, we have taken the coefficient parameters as scalar quantities. We can evaluate the normalizing constants *C*, *C*1, *C*2, *C*<sup>3</sup> by using the following steps: Let

$$(L)\quad \mathcal{Y} = A^{\frac{1}{2}} X B^{\frac{1}{2}} \Rightarrow \mathbf{d}X = |A|^{-\frac{p}{2}} |B|^{-\frac{q}{2}} \mathbf{d}Y$$

from the general linear transformation (see [23] for the Jacobian in (*L*) and other Jacobians to follow). Let the corresponding function *f*(*X*) be denoted by *f*01(*Y*). Then,

$$f\_{01}(Y) = C|A|^{-\frac{p}{2}}|B|^{-\frac{q}{2}}|YY'|^{\gamma}|I + a\_1(q\_1 - 1)(YY')|^{-\frac{1}{q\_1 - 1}}$$

$$\times |I + a\_2(q\_2 - 1)(YY')^{-1}|^{-\frac{1}{q\_2 - 1}}.\tag{51}$$

Let the corresponding functions *f*<sub>1</sub>(*X*), *f*<sub>2</sub>(*X*), *f*<sub>3</sub>(*X*) be denoted by *f*<sub>11</sub>(*Y*), *f*<sub>21</sub>(*Y*), *f*<sub>31</sub>(*Y*), respectively. Note that *Y* has *pq* real scalar variables, whereas *S* = *YY*′, which is a *p* × *p* real positive definite matrix, has only *p*(*p* + 1)/2 distinct elements. However, we can obtain a relationship between d*Y* and d*S* (see [23]). It is the following:

$$(M)\quad \mathrm{d}Y = \frac{\pi^{\frac{pq}{2}}}{\Gamma\_p(\frac{q}{2})}|S|^{\frac{q}{2} - \frac{p+1}{2}}\, \mathrm{d}S,$$

where *Y* in (*M*) is *p* × *q*, whereas *S* is *p* × *p*. Let the corresponding functions of *S* be denoted by *f*02(*S*), *f*12(*S*), *f*22(*S*), *f*32(*S*), respectively. Then, for example, *f*02(*S*) is the following:

$$\begin{split} f\_{02}(S) &= C|A|^{-\frac{p}{2}}|B|^{-\frac{q}{2}}|S|^{\gamma + \frac{q}{2} - \frac{p+1}{2}}\, |I + a\_1(q\_1 - 1)S|^{-\frac{1}{q\_1 - 1}} \\ &\quad \times |I + a\_2(q\_2 - 1)S^{-1}|^{-\frac{1}{q\_2 - 1}}. \end{split}$$

#### *9.1. Multivariate Situation*

In Equation (46) and Equations (48)–(50), let *p* = 1 and *q* > 1; then, *Y* is 1 × *q* and of the form *Y* = (*y*<sub>1</sub>, ..., *y*<sub>*q*</sub>), so that *YY*′ = *y*<sub>1</sub><sup>2</sup> + ... + *y*<sub>*q*</sub><sup>2</sup>. For *p* = 1, the constant matrix *A* is 1 × 1; let it be *a*<sub>3</sub> > 0. Then, from Equation (51),

$$\begin{split} f\_{01} &= \text{Ca}\_3^{-\frac{1}{2}} |B|^{-\frac{q}{2}} (y\_1^2 + \ldots + y\_q^2)^\gamma [1 + a\_1(q\_1 - 1)(y\_1^2 + \ldots + y\_q^2)]^{-\frac{1}{q\_1 - 1}} \\ &\times [1 + a\_2(q\_2 - 1)(y\_1^2 + \ldots + y\_q^2)^{-1}]^{-\frac{1}{q\_2 - 1}}. \end{split}$$

Then, *f*<sup>31</sup> becomes the following:

$$f\_{31}(Y) = C\_3 a\_3^{-\frac{1}{2}} |B|^{-\frac{q}{2}} [(y\_1^2 + \ldots + y\_q^2)]^\gamma$$

$$\times e^{-a\_1(y\_1^2 + \ldots + y\_q^2) - a\_2(y\_1^2 + \ldots + y\_q^2)^{-1}}\tag{52}$$

for −∞ < *y*<sub>*j*</sub> < ∞, *j* = 1, ..., *q*. We may call Equation (52) the multivariate version of the basic Krätzel integral, and *f*<sub>01</sub> for *p* = 1 the pathway-extended form of *f*<sub>31</sub> in Equation (52).

Note that, for a general *p* > 1, we do not take exponents for (*A*<sup>1/2</sup>*XBX*′*A*<sup>1/2</sup>) because, in the general case, matrix transformations create problems while computing the Jacobians. The types of problems are described in [23]. However, for the scalar cases in *f*<sub>02</sub>, *f*<sub>12</sub>, *f*<sub>22</sub>, *f*<sub>32</sub>, we can take arbitrary exponents. Hence, we have the general Krätzel integrals in the multivariate case as the following:

$$f\_{33}(Y) = C\_3 a\_3^{-\frac{1}{2}} |B|^{-\frac{q}{2}} [(y\_1^2 + \ldots + y\_q^2)]^\gamma$$

$$\times e^{-a\_1(y\_1^2 + \ldots + y\_q^2)^\delta - a\_2(y\_1^2 + \ldots + y\_q^2)^{-\rho}}\tag{53}$$

for *δ* > 0, *ρ* > 0. Corresponding exponents can be included in *f*<sub>03</sub>, *f*<sub>13</sub>, *f*<sub>23</sub> as well. For evaluating the normalizing constant, we can proceed as follows. Make use of the transformation and Jacobian in (*M*) for *p* = 1, so that *S* = *s* is a scalar variable. Then, for *p* = 1, Equation (53) becomes the following:

$$f\_{34}(s) = a\_3^{-\frac{1}{2}} |B|^{-\frac{q}{2}} \frac{\pi^{\frac{q}{2}}}{\Gamma(\frac{q}{2})} s^{\gamma + \frac{q}{2} - 1} e^{-a\_1 s^\delta - a\_2 s^{-\rho}}.$$

Since *s* is a real scalar variable here, one can use the scalar version of the Mellin convolution of a product, or the density of a product, from Sections 2 and 3, and go to the Mellin transforms to evaluate the normalizing constant. The same procedure works for all the models *f*<sub>04</sub>, *f*<sub>14</sub>, *f*<sub>24</sub> also.

#### *9.2. Evaluation of the Normalizing Constant*

Let

$$\int\_{s=0}^{\infty} s^{\gamma + \frac{q}{2} - 1} \mathrm{e}^{-as^{\delta} - bs^{-\rho}}\, \mathrm{d}s = g(b),\ \text{say}.$$

Let *Mg*(*t*) be the Mellin transform of *g*(*b*) with Mellin parameter *t*. Then,

$$M\_g(t) = \int\_0^{\infty} b^{t-1} \left\{\int\_{s=0}^{\infty} s^{\gamma + \frac{q}{2} - 1} \mathrm{e}^{-as^{\delta} - bs^{-\rho}}\, \mathrm{d}s\right\} \mathrm{d}b.$$

Evaluating the *b*-integral we have the following:

$$\int\_0^{\infty} b^{t-1} \mathrm{e}^{-b s^{-\rho}}\, \mathrm{d}b = \Gamma(t)\, s^{\rho t},\ \text{for } \Re(t) > 0.$$

Now, evaluating the *s*-integral, we have the following:

$$\int\_0^\infty s^{\gamma + \frac{q}{2} + \rho t - 1} e^{-as^\delta} ds = \frac{\Gamma(\frac{\gamma + \rho t + q/2}{\delta})}{\delta a^{\frac{\gamma + \rho t + q/2}{\delta}}} , \Re(\gamma + \rho t + q/2) > 0.$$

That is,

$$M\_g(t) = \frac{1}{\delta a^{\frac{\gamma + q/2}{\delta}}}\, \Gamma(t)\, \Gamma\left(\frac{\gamma + q/2}{\delta} + \frac{\rho}{\delta}t\right) a^{-\frac{\rho}{\delta}t}.$$

By taking the inverse Mellin transform, we have *g*(*b*) as the following:

$$\begin{split} g(b) &= \frac{1}{\delta a^{\frac{\gamma + q/2}{\delta}}} \frac{1}{2\pi i} \int\_{c - i\infty}^{c + i\infty} \Gamma(t)\, \Gamma\left(\frac{\gamma + q/2}{\delta} + \frac{\rho}{\delta}t\right) (b a^{\frac{\rho}{\delta}})^{-t}\, \mathrm{d}t \\ &= \frac{1}{\delta a^{\frac{\gamma + q/2}{\delta}}} H\_{0,2}^{2,0}\left[b a^{\frac{\rho}{\delta}} \Big|\_{(0,1),\, (\frac{\gamma + q/2}{\delta}, \frac{\rho}{\delta})}\right] \end{split}$$

where *H*(·) is the H-function, see [5]. Then, the normalizing constant is the following:

$$C = a\_3^{\frac{1}{2}} |B|^{\frac{q}{2}} \frac{\Gamma(\frac{q}{2})}{\pi^{\frac{q}{2}}} \frac{\delta a^{\frac{\gamma + q/2}{\delta}}}{H\_{0,2}^{2,0}\left[b a^{\frac{\rho}{\delta}} \Big|\_{(0,1),\, (\frac{\gamma + q/2}{\delta}, \frac{\rho}{\delta})}\right]}.$$

Note that, when *ρ* = *δ*, the H-function reduces to a G-function of the form $G\_{0,2}^{2,0}\left[ab \,\middle|\, 0, \frac{\gamma + q/2}{\delta}\right]$; then, replace the H-function by the G-function. Observe that, when *p* = 1, *A* is 1 × 1; let it be *a*<sub>3</sub> > 0. This is the *a*<sub>3</sub> appearing above.
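When *δ* = *ρ* = 1, the integral defining *g*(*b*) is the classical Krätzel (Bessel) integral ∫<sub>0</sub><sup>∞</sup> *s*<sup>*ν*−1</sup> e<sup>−*as*−*b*/*s*</sup> d*s* = 2(*b*/*a*)<sup>*ν*/2</sup> *K*<sub>*ν*</sub>(2√(*ab*)), which gives an independent check of the H-function evaluation above. The sketch below is our own pure-Python verification, with *K*<sub>*ν*</sub> computed from its cosh integral representation:

```python
import math

def kratzel(nu, a, b, n=200_000, upper=80.0):
    # left side: integral_0^inf s^(nu-1) exp(-a s - b/s) ds, midpoint rule
    h = upper/n
    total = 0.0
    for i in range(n):
        s = (i + 0.5)*h
        total += s**(nu - 1.0)*math.exp(-a*s - b/s)
    return total*h

def bessel_k(nu, z, n=20_000, upper=20.0):
    # modified Bessel function: K_nu(z) = integral_0^inf exp(-z cosh t) cosh(nu t) dt
    h = upper/n
    total = 0.0
    for i in range(n):
        t = (i + 0.5)*h
        total += math.exp(-z*math.cosh(t))*math.cosh(nu*t)
    return total*h

nu, a, b = 1.5, 1.0, 2.0
lhs = kratzel(nu, a, b)
rhs = 2.0*(b/a)**(nu/2.0)*bessel_k(nu, 2.0*math.sqrt(a*b))
print(lhs, rhs)
```

Matching *ν* = *γ* + *q*/2 recovers the special case of the normalizing-constant integral in this subsection.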

**Author Contributions:** Conceptualization, A.M.M. and H.J.H.; methodology, A.M.M. and H.J.H.; validation, A.M.M. and H.J.H.; formal analysis, A.M.M. and H.J.H.; All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors would like to thank Francesco Mainardi for inviting us to contribute to this Special Issue. The authors would like to thank the reviewers for making valuable comments, which helped to improve the presentation in this paper.

**Conflicts of Interest:** The authors declare no conflict of interest in this paper.

#### **References**

1. Mathai, A.M.; Haubold, H.J. *Erdélyi-Kober Fractional Calculus: From a Statistical Perspective, Inspired by Solar Neutrino Physics*; Springer Briefs in Mathematical Physics: Singapore, 2018.


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Differentiation of the Mittag-Leffler Functions with Respect to Parameters in the Laplace Transform Approach**

#### **Alexander Apelblat**

Department of Chemical Engineering, Ben Gurion University of the Negev, Beer Sheva 84105, Israel; apelblat@bgu.ac.il

Received: 5 March 2020; Accepted: 21 April 2020; Published: 26 April 2020

**Abstract:** In this work, properties of one- or two-parameter Mittag-Leffler functions are derived using the Laplace transform approach. It is demonstrated that manipulations with the direct–inverse transform pair make it far easier than previous methods to derive known and new properties of the Mittag-Leffler functions. Moreover, it is shown that sums of infinite series of the Mittag-Leffler functions can be expressed as convolution integrals, while the derivatives of the Mittag-Leffler functions with respect to their parameters are expressible as double convolution integrals. The derivatives can also be obtained from integral representations of the Mittag-Leffler functions. On the other hand, direct differentiation of the Mittag-Leffler functions with respect to parameters produces an infinite power series, whose coefficients are quotients of the digamma and gamma functions. Closed forms of these series can be derived when the parameters are set to be integers.

**Keywords:** derivatives with respect to parameters; Mittag-Leffler functions; Laplace transform approach; infinite power series; integral representations; convolution integrals; quotients of digamma and gamma functions

#### **1. Introduction**

At the beginning of the previous century, the exponential function was generalized by the Swedish mathematician G.M. Mittag-Leffler, who introduced a new power series that is named after him today [1]. Quite unexpectedly, enormous interest has developed regarding the Mittag-Leffler functions over the last four decades because of their ability to describe diverse physical phenomena far more easily than other approaches in a host of scientific and engineering disciplines. Consequently, the Mittag-Leffler functions have become one of the most important special functions in mathematics. Examples where they appear include kinetics of chemical reactions, time and space fractional diffusion, nonlinear waves, viscoelastic systems, neural networks, electric field relaxations, and statistical distributions [2–8]. In mathematics, the Mittag-Leffler functions play an important role in fractional calculus and in the solution of systems of fractional differential and integral equations [9,10]. As a result of all this activity, there is now extensive literature on their properties and history [11–13]. A number of reviews have been produced [14–16], and of these, the monograph by Gorenflo, Kilbas, Mainardi, and Rogosin [17] occupies a special place.

The one-parameter, classical Mittag-Leffler function *E*α(*z*) is defined in the whole complex plane by the following power series:

$$E\_{\alpha}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}\,,\tag{1}$$

where Reα > 0.

Later, Wiman [18] introduced the two-parameter Mittag-Leffler function *E*α*,*β(*z*), which is given by

$$E\_{\alpha,\beta}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}\,,\tag{2}$$

where Reα > 0 and Reβ > 0. Only these two functions, and not later generalizations, will be studied here.

There are two main aims in this work. The first is to show that many well-known and new functional relations can be easily derived via the Laplace transform theory and the second is to consider differentiation with respect to the parameters α and β. Throughout this paper, all mathematical operations or manipulations with functions, series, integrals, integral representations, and transforms will be formal. There will be no proofs of validity of given expressions, though they are, without doubt, correct. The following sections present many results that have been derived independently by other methods, while the new results are verified by two different numerical procedures. Thus, in the framework of applied operational calculus, the reported results are only valid for real positive values of arguments and parameters.

My previous involvement with the Mittag-Leffler functions has been limited only to establishing their connections to the Volterra functions. In my monograph devoted to the Volterra functions [19], I presented in Appendix A some representations of the Mittag-Leffler functions in terms of other special functions. They can also be derived directly using the Laplace transform technique when applied to the $E\_\alpha(\pm t^\alpha)$ functions. Evidently, this restricts the transform–inverse pair only to the positive real axis. New results, together with some from [19], are presented below.

According to the definitions of the Mittag-Leffler functions, there is a clear distinction between the argument, *z*, and the parameters, α and β, as the latter appear in the coefficients. Nevertheless, $E\_\alpha(z) = f(\alpha, z)$ and $E\_{\alpha,\beta}(z) = f(\alpha, \beta, z)$ can be regarded as bivariate and trivariate functions, respectively.

As this is the first investigation dealing with mathematical operations with respect to the variables α and β, its scope is limited to derivatives of the Mittag-Leffler functions. The special forms of the Laplace transforms of the $E\_\alpha(\pm t^\alpha)$ and $E\_{\alpha,\beta}(\pm t^\alpha)$ functions will be studied extensively to establish known properties of the Mittag-Leffler functions and to derive new functional relations. As will be demonstrated, the differentiation operations lead to power series with coefficients that are quotients of psi and gamma functions. In some cases, these series can be evaluated in closed form, i.e., in terms of elementary and special functions. The computation methods used in this investigation to obtain the Mittag-Leffler functions and their derivatives with respect to α differ from those reported in the literature. This results from the fact that the Mittag-Leffler functions are available as built-in functions in the MATHEMATICA program.

#### **2. Properties of the Mittag-Le**ffl**er Functions in the Laplace Transform Approach**

The Laplace transform of the Mittag-Leffler function $E\_\alpha(t^\rho)$ is given by

$$L\{E\_{\alpha}(t^{\rho})\} = \frac{1}{s} \sum\_{k=0}^{\infty} \frac{\Gamma(\rho k + 1)}{\Gamma(\alpha k + 1)} \left(\frac{1}{s^{\rho}}\right)^{k},\tag{3}$$

which is not valid for all values of ρ and α, as discussed in [17].

For ρ = α, (3) becomes

$$L\{E\_{\alpha}(t^{\alpha})\} = \frac{s^{\alpha - 1}}{s^{\alpha} - 1} \, , \tag{4}$$

where Reα > 0 and Re*s* > 1, while for negative $t^\alpha$ it is

$$L\{E\_{\alpha}(-t^{\alpha})\} = \frac{s^{\alpha - 1}}{s^{\alpha} + 1} \,. \tag{5}$$

In a similar manner, the Laplace transforms of the two-parameter Mittag-Leffler functions, $t^{\beta-1}E\_{\alpha,\beta}(\pm\lambda t^\alpha)$, in [17] are found to be

$$L\left\{t^{\beta-1}E\_{\alpha,\beta}(\pm\lambda t^{\alpha})\right\} = \frac{s^{\alpha-\beta}}{s^{\alpha}\mp\lambda}\,, \tag{6}$$

where Reα > 0, Reβ > 0, and Re*s* > $|\lambda|^{1/\alpha}$.

Not only are the inverse transforms simple to derive from these results, but one is also able to identify the functions for particular values of α and β. Carrying this out will require algebraic manipulations, the similarity properties of the Laplace transformation, the Heaviside expansion theorem, the convolution (product) theorem, some substitution formulas, and other techniques and rules of the operational calculus.

In the first application of the Laplace transform theory, we consider positive integer values of α from 1 to 4. Then, the Mittag-Leffler functions reduce to elementary or special functions due to the simple inverse transforms.

For α = 1, one finds that

$$E\_1(t) = L^{-1} L \{ E\_1(t) \} = L^{-1} \left\{ \frac{1}{s - 1} \right\} = e^t \,. \tag{7}$$

For α = 2, one obtains

$$E\_2(t^2) = L^{-1} L \left\{ E\_2(t^2) \right\} = L^{-1} \left\{ \frac{s}{s^2 - 1} \right\} = L^{-1} \left\{ \frac{s}{(s-1)(s+1)} \right\} = \cosh t\,, \tag{8}$$

where the denominator has been decomposed into partial fractions. However, the more expedient method is to evaluate the contributions from the residues at *s* = ±1.

Carrying out this procedure for $-t^2$ yields

$$E\_2(-t^2) = L^{-1} L\left\{E\_2(-t^2)\right\} = L^{-1}\left\{\frac{s}{s^2+1}\right\} = L^{-1}\left\{\frac{s}{(s-i)(s+i)}\right\} = \frac{e^{it}}{2} + \frac{e^{-it}}{2} = \cos t\,. \tag{9}$$

For α = 3, one finds that

$$\begin{split} E\_3(t^3) &= L^{-1}L\left\{E\_3(t^3)\right\} = L^{-1}\left\{\frac{s^2}{s^3 - 1}\right\} = L^{-1}\left\{\frac{s^2}{(s-1)(s^2 + s + 1)}\right\} \\ &= L^{-1}\left\{\frac{s^2}{(s-1)\left(s+\frac{1+i\sqrt{3}}{2}\right)\left(s+\frac{1-i\sqrt{3}}{2}\right)}\right\} \\ &= \frac{s^2e^t}{\left(s+\frac{1+i\sqrt{3}}{2}\right)\left(s+\frac{1-i\sqrt{3}}{2}\right)}\Bigg|\_{s=1} + \frac{s^2e^{-t(1+i\sqrt{3})/2}}{(s-1)\left(s+\frac{1-i\sqrt{3}}{2}\right)}\Bigg|\_{s=-\frac{1+i\sqrt{3}}{2}} \\ &\quad +\frac{s^2e^{-t(1-i\sqrt{3})/2}}{(s-1)\left(s+\frac{1+i\sqrt{3}}{2}\right)}\Bigg|\_{s=-\frac{1-i\sqrt{3}}{2}} = \frac{1}{3}\left[e^t + 2e^{-t/2}\cos\left(\frac{\sqrt{3}}{2}t\right)\right]. \end{split} \tag{10}$$

Similarly, for negative $t^\alpha$, one arrives at

$$E\_3(-t^3) = L^{-1} L\left\{E\_3(-t^3)\right\} = L^{-1} \left\{\frac{s^2}{s^3+1}\right\} = \frac{1}{3}\left[e^{-t} + 2e^{t/2}\cos\left(\frac{\sqrt{3}}{2}t\right)\right]. \tag{11}$$

The calculations become more tedious as α increases. However, for α = *n*, an integer, we obtain in the general case

$$E\_n(\pm t^n) = L^{-1} L \{ E\_n(\pm t^n) \} = L^{-1} \left\{ \frac{s^{n-1}}{s^n \mp 1} \right\}.\tag{12}$$

It is obvious that for integer values of α, the Mittag-Leffler functions can be expressed in terms of elementary functions, such as combination of exponential, hyperbolic, and trigonometric functions.
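These elementary reductions can be verified numerically from the defining series (1) alone. The following Python sketch is purely illustrative: the helper `mittag_leffler` and its truncation limits are ad hoc choices, not the MATHEMATICA routines used in this paper.

```python
# Numerical sanity check of the closed forms (7)-(11), using only the
# defining power series (1): E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
import math

def mittag_leffler(alpha, z, kmax=200, tol=1e-17):
    """Truncated series (1); adequate for moderate real |z|."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + 1.0)
        total += term
        if abs(term) < tol:
            break
    return total

t = 1.3
assert math.isclose(mittag_leffler(1, t), math.exp(t), rel_tol=1e-12)      # (7)
assert math.isclose(mittag_leffler(2, t**2), math.cosh(t), rel_tol=1e-12)  # (8)
assert math.isclose(mittag_leffler(2, -t**2), math.cos(t), rel_tol=1e-12)  # (9)
assert math.isclose(mittag_leffler(3, t**3),                               # (10)
                    (math.exp(t) + 2*math.exp(-t/2)*math.cos(math.sqrt(3)*t/2)) / 3,
                    rel_tol=1e-12)
assert math.isclose(mittag_leffler(3, -t**3),                              # (11)
                    (math.exp(-t) + 2*math.exp(t/2)*math.cos(math.sqrt(3)*t/2)) / 3,
                    rel_tol=1e-12)
```

The series converges factorially fast for the moderate arguments used here, so the crude truncation suffices.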

When α is not an integer, special functions are involved. Then, one must use a combination of tables of inverse Laplace transforms, substitution formulas, the convolution theorem, and other rules. For example, from the table of inverse transforms [20], we have

$$\begin{array}{l} L^{-1}\left\{\frac{1}{\sqrt{s}}\right\} = \frac{1}{\sqrt{\pi t}}\,, \\ L^{-1}\left\{\frac{1}{\sqrt{s}\mp 1}\right\} = \frac{1}{\sqrt{\pi t}} \pm e^t\,\mathrm{erfc}(\mp\sqrt{t})\,, \\ \mathrm{erfc}(-t^{1/2}) = 2 - \mathrm{erfc}(t^{1/2}) = \mathrm{erf}(t^{1/2}) + 1\,. \end{array} \tag{13}$$

Hence, we find that

$$E\_{1/2}(\pm\sqrt{t}) = L^{-1}L\left\{E\_{1/2}(\pm\sqrt{t})\right\} = L^{-1}\left\{\frac{1}{\sqrt{s}(\sqrt{s}\mp 1)}\right\} = \pm L^{-1}\left\{\frac{1}{\sqrt{s}\mp 1} - \frac{1}{\sqrt{s}}\right\} = e^t\left[1 \pm \mathrm{erf}(\sqrt{t})\right]. \tag{14}$$

The cases with α = 1/4 and arguments $\pm t^{1/4}$ are more complex. Therefore, only the final result from [19] is presented here. This is

$$\begin{split} E\_{1/4}(\pm t^{1/4}) &= L^{-1}L\left\{E\_{1/4}(\pm t^{1/4})\right\} = L^{-1}\left\{\frac{1}{s^{3/4}(s^{1/4}\mp 1)}\right\} \\ &= L^{-1}\left\{\frac{1}{\sqrt{s}(\sqrt{s}-1)} \pm \frac{1}{s^{1/4}(s-1)} \pm \frac{1}{s^{3/4}(s-1)}\right\} \\ &= e^{t}\left[1+\mathrm{erf}(\sqrt{t}) \pm \frac{\gamma\left(\frac{1}{4},t\right)}{\Gamma\left(\frac{1}{4}\right)} \pm \frac{\gamma\left(\frac{3}{4},t\right)}{\Gamma\left(\frac{3}{4}\right)}\right], \\ \gamma(a,t) &= \Gamma(a) - \Gamma(a,t) = \int\_{0}^{t} x^{a-1} e^{-x}\, dx\,, \end{split} \tag{15}$$

where the last equation in (15) is the integral representation for the incomplete gamma function.
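Relation (14) can be checked numerically the same way against `math.erf`; again, an illustrative sketch with an ad hoc series truncation:

```python
# Check E_{1/2}(+-sqrt(t)) = e^t [1 +- erf(sqrt(t))], i.e., (14),
# against the defining series (1).
import math

def mittag_leffler(alpha, z, kmax=400, tol=1e-17):
    """Truncated series (1); adequate for moderate real |z|."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + 1.0)
        total += term
        if abs(term) < tol:
            break
    return total

t = 0.9
for sign in (1.0, -1.0):
    lhs = mittag_leffler(0.5, sign * math.sqrt(t))
    rhs = math.exp(t) * (1.0 + sign * math.erf(math.sqrt(t)))
    assert math.isclose(lhs, rhs, rel_tol=1e-10)
```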

We can also determine relations between the Mittag-Leffler functions using the Laplace transformation. Putting β = α + 1 in (6) yields

$$L\left\{t^{\alpha}E\_{\alpha,\alpha+1}(t^{\alpha})\right\}=\frac{1}{s(s^{\alpha}-1)}\,. \tag{16}$$

However, noting that

$$L\{E\_{\alpha}(t^{\alpha}) - 1\} = \frac{s^{\alpha-1}}{s^{\alpha} - 1} - \frac{1}{s} = \frac{1}{s(s^{\alpha} - 1)}\,,\tag{17}$$

we can derive the well-known relation for the Mittag-Leffler functions

$$E\_{\alpha}(t^{\alpha}) - 1 = t^{\alpha} E\_{\alpha, \alpha+1}(t^{\alpha})\,. \tag{18}$$

A similar result for the two-parameter Mittag-Leffler function can be derived from

$$L\left\{t^{\alpha+\beta-1}E\_{\alpha,\alpha+\beta}(t^{\alpha})\right\} = \frac{1}{s^{\beta}(s^{\alpha}-1)}\,,\tag{19}$$

and

$$L\left\{t^{\beta-1}E\_{\alpha,\beta}(t^{\alpha}) - \frac{t^{\beta-1}}{\Gamma(\beta)}\right\} = \frac{s^{\alpha-\beta}}{s^{\alpha}-1} - \frac{1}{s^{\beta}} = \frac{1}{s^{\beta}(s^{\alpha}-1)}\,. \tag{20}$$

Hence, we arrive at

$$E\_{\alpha,\beta}(t^{\alpha}) = \frac{1}{\Gamma(\beta)} + t^{\alpha} E\_{\alpha,\alpha+\beta}(t^{\alpha})\,. \tag{21}$$

For α and β integers, (21) can be written as

$$\begin{array}{l} E\_{1,\beta}(t) = \frac{1}{\Gamma(\beta)} + t \, E\_{1,\beta+1}(t)\,, \\ E\_{n,\beta}(t^n) = \frac{1}{\Gamma(\beta)} + t^n \, E\_{n,n+\beta}(t^n)\,, \\ E\_{n,n}(t^n) = \frac{1}{(n-1)!} + t^n \, E\_{n,2n}(t^n)\,, \\ E\_{n,m}(t^n) = \frac{1}{(m-1)!} + t^n \, E\_{n,m+n}(t^n)\,. \end{array} \tag{22}$$
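The recurrence (21) behind these special cases is easy to confirm numerically from the series (2); the helper `ml` below is an ad hoc truncated-series evaluator, not a library routine:

```python
# Check E_{a,b}(t^a) = 1/Gamma(b) + t^a E_{a,a+b}(t^a), i.e., (21).
import math

def ml(alpha, beta, z, kmax=300, tol=1e-17):
    """Two-parameter series (2), truncated."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:
            break
    return total

t = 1.1
for alpha, beta in [(0.5, 1.0), (1.0, 2.0), (1.5, 0.7), (2.0, 3.0)]:
    z = t**alpha
    lhs = ml(alpha, beta, z)
    rhs = 1.0 / math.gamma(beta) + z * ml(alpha, alpha + beta, z)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```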

Of the many substitution formulas in the Laplace transform theory, only three will be employed here. From [21] we have

$$\begin{aligned} L\{f(t)\} &= F(s)\,, \\ L^{-1} \left\{ \frac{1}{\sqrt{s}} F(\sqrt{s}) \right\} &= \frac{1}{\sqrt{\pi t}} \int\_0^\infty e^{-u^2/4t} f(u) \, du\,. \end{aligned} \tag{23}$$

By writing the Laplace transform of $E\_\alpha(t^\alpha)$ as

$$L\{E\_{\alpha}(t^{\alpha})\} = \frac{s^{\alpha-1}}{s^{\alpha}-1} = \frac{1}{\sqrt{s}} \frac{\left(\sqrt{s}\right)^{2\alpha-1}}{\left[\left(\sqrt{s}\right)^{2\alpha}-1\right]}\,,\tag{24}$$

we find that the Mittag-Leffler function can be represented by

$$E\_{\alpha}(t^{\alpha}) = \frac{1}{\sqrt{\pi t}} \int\_{0}^{\infty} e^{-u^{2}/4t} E\_{2\alpha}(u^{2\alpha}) \, du \, . \tag{25}$$
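As a concrete instance of (25), take α = 1/2: then $E\_{2\alpha}(u^{2\alpha}) = E\_1(u) = e^u$, and the right-hand side becomes a Gaussian-type integral that can be evaluated by quadrature. The sketch below (Simpson's rule, with an ad hoc truncation of the infinite range) checks it against the closed form $e^t[1+\mathrm{erf}(\sqrt{t})]$ from (14):

```python
# Check (25) for alpha = 1/2:
# E_{1/2}(sqrt(t)) = (1/sqrt(pi*t)) * integral_0^inf exp(-u^2/4t) e^u du.
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

t = 0.9
# The integrand decays like exp(-(u - 2t)^2 / 4t); truncate well past the peak.
upper = 2 * t + 14 * math.sqrt(t)
integral = simpson(lambda u: math.exp(-u * u / (4 * t)) * math.exp(u), 0.0, upper)
lhs = integral / math.sqrt(math.pi * t)
rhs = math.exp(t) * (1.0 + math.erf(math.sqrt(t)))
assert math.isclose(lhs, rhs, rel_tol=1e-8)
```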

The operational rule for the Macdonald function $K\_{1/3}(z)$ is

$$L^{-1}\left\{\frac{1}{s^{2/3}}F(s^{1/3})\right\} = \frac{1}{\pi}\int\_0^\infty \sqrt{\frac{u}{t}}\,K\_{1/3}\left(\frac{2u^{3/2}}{\sqrt{27t}}\right)f(u)\,du\,.\tag{26}$$

Writing the Laplace transform of $E\_\alpha(t^\alpha)$ as

$$L\{E\_{\alpha}(t^{\alpha})\} = \frac{s^{\alpha-1}}{s^{\alpha} - 1} = \frac{\left(s^{1/3}\right)^{3\alpha-1}}{s^{2/3}\left[\left(s^{1/3}\right)^{3\alpha} - 1\right]}\,,\tag{27}$$

gives

$$E\_{\alpha}(t^{\alpha}) = \frac{1}{\pi} \int\_0^\infty \sqrt{\frac{u}{t}} \, K\_{1/3}\left(\frac{2u^{3/2}}{\sqrt{27t}}\right) E\_{3\alpha}(u^{3\alpha}) \, du \, . \tag{28}$$

For specific values of α, the Mittag-Leffler functions in the integrands of (25) and (28) can be expressed as elementary or special functions. Then, the Mittag-Leffler functions on the left-hand side are represented by definite integrals over the semi-infinite interval.

The third substitution formula is

$$L^{-1}\left\{\frac{1}{s^2}F(\frac{1}{s})\right\} = \int\_0^\infty \sqrt{\frac{t}{u}} \, J\_1(2\sqrt{tu}) \, f(u) \, du \, \,\tag{29}$$

where $J\_1(z)$ is the Bessel function of the first kind and of the first order.

From

$$L\{1 - E\_{\alpha}(t^{\alpha})\} = \frac{1}{s} - \frac{s^{\alpha - 1}}{s^{\alpha} - 1} = \frac{1}{s^2} \frac{\left(\frac{1}{s}\right)^{\alpha - 1}}{\left[\left(\frac{1}{s}\right)^{\alpha} - 1\right]},\tag{30}$$

it follows that

$$\frac{E\_{\alpha}(t^{\alpha}) - 1}{\sqrt{t}} = \int\_0^\infty J\_1(2\sqrt{tu}) \, E\_{\alpha}(u^{\alpha}) \, \frac{du}{\sqrt{u}} \,. \tag{31}$$

Many properties and functional relations for the Mittag-Leffler functions can be obtained from the convolution theorem. These are found by expressing the Laplace transforms of $E\_\alpha(t^\alpha)$ in various forms and then evaluating the inverses via convolution integrals. For example, using

$$L\{E\_{\alpha}(t^{\alpha})\} = \frac{s^{\alpha-1}}{s^{\alpha} - 1} = \frac{s^{2\alpha-1}}{s^{2\alpha} - 1} + \frac{s^{2\alpha-1}}{s^{2\alpha} - 1} \cdot \frac{1}{s^{\alpha}} \,, \tag{32}$$

immediately yields

$$E\_{\alpha}(t^{\alpha}) = E\_{2\alpha}(t^{2\alpha}) + E\_{2\alpha}(t^{2\alpha}) * \frac{t^{\alpha-1}}{\Gamma(\alpha)} = E\_{2\alpha}(t^{2\alpha}) + \int\_{0}^{t} E\_{2\alpha}(u^{2\alpha}) \frac{(t-u)^{\alpha-1}}{\Gamma(\alpha)} \, du \, . \tag{33}$$

All convolution integrals can be transformed into finite trigonometric integrals by a suitable change of variable. Therefore, putting $u = t(\cos\theta)^2$ in (33) yields

$$\frac{1}{\Gamma(\alpha)} \int\_0^t E\_{2\alpha}(u^{2\alpha})(t-u)^{\alpha-1} \, du = \frac{t^{\alpha}}{\Gamma(\alpha)} \int\_0^{\pi/2} \sin(2\theta) \left[ (\sin \theta)^2 \right]^{\alpha-1} E\_{2\alpha}\left[t^{2\alpha} (\cos \theta)^{4\alpha}\right] d\theta\,. \tag{34}$$

Similarly, from

$$L\{t^{\beta-1}E\_{\alpha,\beta}(t^{\alpha})\} = \frac{s^{\alpha-\beta}}{s^{\alpha}-1} = \frac{s^{2\alpha-\beta}}{s^{2\alpha}-1} + \frac{s^{2\alpha-\beta}}{s^{2\alpha}-1} \cdot \frac{1}{s^{\alpha}}\,, \tag{35}$$

it follows that

$$E\_{\alpha,\beta}(t^{\alpha}) = E\_{2\alpha,\beta}(t^{2\alpha}) + \int\_0^t \left(\frac{u}{t}\right)^{\beta-1} E\_{2\alpha,\beta}(u^{2\alpha}) \frac{(t-u)^{\alpha-1}}{\Gamma(\alpha)} \, du \,. \tag{36}$$

A different convolution integral can be derived from

$$\frac{1}{s^{\beta+1}} = \frac{s^{\alpha-\beta}}{s^{\alpha}-1} \cdot \left[\frac{1}{s} - \frac{1}{s^{\alpha+1}}\right],\tag{37}$$

whose inverse Laplace transform is

$$\frac{t^{\beta}}{\Gamma(\beta+1)} = \int\_{0}^{t} u^{\beta-1} E\_{\alpha,\beta}(u^{\alpha}) \left[ 1 - \frac{(t-u)^{\alpha}}{\Gamma(\alpha+1)} \right] du \,. \tag{38}$$

Introducing the Laplace transform of *E*α,β(±*t* <sup>α</sup>) in the form

$$L\left\{t^{\beta-1}E\_{\alpha,\beta}(\pm t^{\alpha})\right\} = \frac{s^{\alpha-\beta}}{s^{\alpha}\mp 1} = \frac{s^{\alpha-1}}{s^{\alpha}\mp 1} \cdot \frac{1}{s^{\beta-1}}\,, \tag{39}$$

gives

$$t^{\beta-1} E\_{\alpha,\beta}(\pm t^{\alpha}) = E\_{\alpha}(\pm t^{\alpha}) * \frac{t^{\beta-2}}{\Gamma(\beta-1)} = \int\_0^t E\_{\alpha}(\pm u^{\alpha}) \frac{(t-u)^{\beta-2}}{\Gamma(\beta-1)} \, du \,. \tag{40}$$

For β = α, this becomes

$$t^{\alpha-1} E\_{\alpha,\alpha}(\pm t^{\alpha}) = E\_{\alpha}(\pm t^{\alpha}) * \frac{t^{\alpha-2}}{\Gamma(\alpha-1)} = \int\_0^t E\_{\alpha}(\pm u^{\alpha}) \frac{(t-u)^{\alpha-2}}{\Gamma(\alpha-1)} \, du \,. \tag{41}$$

For α and β positive integers, (40) reduces to

$$t^{m-1}E\_{n,m}(\pm t^n) = \int\_0^t E\_n(\pm u^n) \frac{(t-u)^{m-2}}{(m-2)!} \, du\,, \tag{42}$$

where *n* = 1, 2, 3, ... and *m* = 2, 3, 4, ... .

These convolution integrals are easily evaluated because the Mittag-Leffler functions reduce to elementary functions. For example, for *n* = 1 and *m* = 2 and 3, noting that $E\_1(t) = e^t$, it follows that

$$\begin{aligned} t \, E\_{1,2}(t) &= \int\_0^t e^{u} \, du = e^t - 1\,, \\ t^2 \, E\_{1,3}(t) &= \int\_0^t e^{u} \left( t - u \right) \, du = e^t - t - 1\,. \end{aligned} \tag{43}$$

The Mittag-Leffler functions for *n* = 1 to 4 and *m* = 2 to 4 are presented in [19].
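Both results in (43) are immediate to confirm numerically from the series (2); an illustrative sketch, where `ml` is an ad hoc truncated-series evaluator:

```python
# Check t*E_{1,2}(t) = e^t - 1 and t^2*E_{1,3}(t) = e^t - t - 1, i.e., (43).
import math

def ml(alpha, beta, z, kmax=200, tol=1e-17):
    """Two-parameter series (2), truncated."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:
            break
    return total

t = 0.8
assert math.isclose(t * ml(1.0, 2.0, t), math.exp(t) - 1.0, rel_tol=1e-12)
assert math.isclose(t * t * ml(1.0, 3.0, t), math.exp(t) - t - 1.0, rel_tol=1e-12)
```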

The operational rules of the Laplace transformation enable us to obtain representations for derivatives of the Mittag-Leffler functions $t^{\beta-1}E\_{\alpha,\beta}(t^\alpha)$. It is obvious from (2) that the derivatives of any order are zero at the origin. In this case, differentiation of the Mittag-Leffler function is equivalent to multiplying the Laplace transform by powers of *s*. Because

$$\begin{array}{l} L\left\{ f^{(n)}(t) \right\} = s^n F(s) \,, \\ f(0) = f'(0) = f''(0) = \dots = f^{(n)}(0) = 0\,, \quad n = 1, 2, 3, \dots \end{array} \tag{44}$$

we find that for Reα > 0, Reβ ≥ *n* + 1 and Re*s* > 1

$$L\left\{ \frac{d^n}{dt^n} \left[ t^{\beta-1} E\_{\alpha,\beta}(t^{\alpha}) \right] \right\} = s^n \left( \frac{s^{\alpha-\beta}}{s^{\alpha} - 1} \right) = \frac{s^{\alpha-(\beta-n)}}{s^{\alpha} - 1}\,. \tag{45}$$

Hence, the Laplace inverse transform becomes

$$\frac{d^n}{dt^n} \left[ t^{\beta - 1} E\_{\alpha, \beta}(t^{\alpha}) \right] = t^{\beta - n - 1} E\_{\alpha, \beta - n}(t^{\alpha}) \ . \tag{46}$$

In the case of the $E\_\alpha(t^\alpha)$ function, its value is unity at the origin. Only the first derivative has a simple Laplace transform, which is

$$L\left\{\frac{d}{dt}\left[E\_{\alpha}(t^{\alpha})\right]\right\} = s\left(\frac{s^{\alpha-1}}{s^{\alpha}-1}\right) - 1 = \frac{1}{s^{\alpha}-1} = \left(\frac{s^{\alpha-1}}{s^{\alpha}-1}\right) \cdot \frac{1}{s^{\alpha-1}}\,;\tag{47}$$

the inverse transform of (47) is

$$\frac{d}{dt}\left[E\_{\alpha}(t^{\alpha})\right] = E\_{\alpha}(t^{\alpha}) * \frac{t^{\alpha-2}}{\Gamma(\alpha-1)}\,. \tag{48}$$

However, according to (41), this convolution integral is also given by

$$\frac{d}{dt}\left[E\_{\alpha}(t^{\alpha})\right] = t^{\alpha-1}E\_{\alpha,\alpha}(t^{\alpha})\,. \tag{49}$$
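The derivative relation $\frac{d}{dt}E\_\alpha(t^\alpha) = t^{\alpha-1}E\_{\alpha,\alpha}(t^\alpha)$ can be checked against a central finite difference of the series (1); the parameter values and step size below are arbitrary illustrative choices:

```python
# Check d/dt E_a(t^a) = t^(a-1) * E_{a,a}(t^a) by a central finite difference.
import math

def ml(alpha, beta, z, kmax=300, tol=1e-17):
    """Two-parameter series (2), truncated; ml(alpha, 1.0, z) is (1)."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:
            break
    return total

alpha, t, h = 1.5, 0.9, 1e-5
f = lambda x: ml(alpha, 1.0, x**alpha)
numerical = (f(t + h) - f(t - h)) / (2 * h)   # O(h^2) central difference
exact = t**(alpha - 1) * ml(alpha, alpha, t**alpha)
assert math.isclose(numerical, exact, rel_tol=1e-7)
```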

The *n*-dimensional integrals of the Mittag-Leffler functions are easily evaluated because this is equivalent to dividing the Laplace transform, *F*(*s*), by $s^n$:

$$L\left\{\int\_0^t\int\_0^{u\_{n-1}}\cdots\int\_0^{u\_1} f(u\_1) \, du\_1 \, du\_2 \cdots du\_n \right\} = \frac{1}{s^n} F(s)\,. \tag{50}$$

Then, we obtain

$$L\left\{\int\_0^t\int\_0^{u\_{n-1}}\cdots\int\_0^{u\_1} u\_1^{\beta-1} E\_{\alpha,\beta}(u\_1^{\alpha}) \, du\_1 \, du\_2 \cdots du\_n \right\} = \frac{1}{s^n}\left(\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\right) = \frac{s^{\alpha-(\beta+n)}}{s^{\alpha}-1}\,. \tag{51}$$

The inverse transform of (51) is

$$\int\_0^t\int\_0^{u\_{n-1}}\cdots\int\_0^{u\_1} u\_1^{\beta-1} E\_{\alpha,\beta}(u\_1^{\alpha}) \, du\_1 \, du\_2 \cdots du\_n = t^{\beta+n-1} E\_{\alpha,\beta+n}(t^{\alpha})\,. \tag{52}$$

For *n* = 1, and in particular for β = 1,

$$\begin{array}{l} \int\_0^t u^{\beta-1} E\_{\alpha,\beta}(u^{\alpha}) \, du = t^{\beta} \, E\_{\alpha,\beta+1}(t^{\alpha})\,, \\ \int\_0^t E\_{\alpha}(u^{\alpha}) \, du = t \, E\_{\alpha,2}(t^{\alpha})\,. \end{array} \tag{53}$$
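The second line of (53) can be verified by quadrature; a Simpson-rule sketch over a few values of α (illustrative only; `ml` and `simpson` are ad hoc helpers):

```python
# Check integral_0^t E_a(u^a) du = t * E_{a,2}(t^a), the second line of (53).
import math

def ml(alpha, beta, z, kmax=300, tol=1e-17):
    """Two-parameter series (2), truncated."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:
            break
    return total

def simpson(f, a, b, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

t = 1.2
for alpha in (1.0, 1.5, 2.0):
    integral = simpson(lambda u: ml(alpha, 1.0, u**alpha), 0.0, t)
    assert math.isclose(integral, t * ml(alpha, 2.0, t**alpha), rel_tol=1e-6)
```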

Together with the linearity property of the Laplace transformation, operational calculus is able to determine the sums of the Mittag-Leffler functions as power series. Consider the infinite and finite geometrical series, namely,

$$\begin{aligned} 1 + x + x^2 + \dots + x^k + \dots &= \frac{1}{1-x}\,, \\ 1 + x + x^2 + \dots + x^{n-1} + x^n &= \frac{x^{n+1}-1}{x-1}\,, \end{aligned} \tag{54}$$

where 0 < *x* < 1.

By taking the Laplace transforms of all the terms in the power series of the corresponding Mittag-Leffler function, one obtains for *s* > 1,

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-2}}{s^{\alpha}-1} + \frac{s^{\alpha-3}}{s^{\alpha}-1} + \dots + \frac{s^{\alpha-k}}{s^{\alpha}-1} + \dots\,. \tag{55}$$

The inverse transform of *F*(*s*) is given by the following series of the Mittag-Leffler functions:

$$L^{-1}\{F(s)\} = E\_{\alpha}(t^{\alpha}) + t\,E\_{\alpha,2}(t^{\alpha}) + t^2 E\_{\alpha,3}(t^{\alpha}) + \dots + t^k E\_{\alpha,k+1}(t^{\alpha}) + \dots = \sum\_{k=1}^{\infty} t^{k-1} E\_{\alpha,k}(t^{\alpha})\,. \tag{56}$$

In order to invert *F*(*s*), one must express (55) as

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left\{\frac{1}{s} + \frac{1}{s^2} + \frac{1}{s^3} + \dots + \frac{1}{s^k} + \dots\right\}.\tag{57}$$

The series inside the brackets is merely the geometric series. Using (54) one finds that

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\frac{1}{1-(1/s)} - 1\right] = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{1}{s-1}\,. \tag{58}$$

Finally, inverting *F*(*s*) yields

$$\sum\_{k=1}^{\infty} t^{k-1} E\_{\alpha,k}(t^{\alpha}) = E\_{\alpha}(t^{\alpha}) + E\_{\alpha}(t^{\alpha}) * e^t = E\_{\alpha}(t^{\alpha}) + \int\_0^t e^{(t-u)} E\_{\alpha}(u^{\alpha})\,du\,. \tag{59}$$

For the case of a finite series of the Mittag-Leffler functions, one requires the second result in (54) to determine the Laplace transform *F*(*s*), which is given by

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\frac{(1/s)^n - 1}{(1/s) - 1} - 1\right] = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \left\{\frac{s^{\alpha-1}}{s^{\alpha}-1} - \frac{s^{\alpha-(n+1)}}{s^{\alpha}-1}\right\}\cdot\frac{1}{s-1}\,. \tag{60}$$

According to the convolution theorem, the inverse transform of this finite sum is

$$\begin{aligned} \sum\_{k=1}^{n+1} t^{k-1} E\_{\alpha,k}(t^{\alpha}) &= E\_{\alpha}(t^{\alpha}) + e^t * \left\{E\_{\alpha}(t^{\alpha}) - t^n E\_{\alpha,n+1}(t^{\alpha})\right\} \\ &= E\_{\alpha}(t^{\alpha}) + \int\_0^t e^{(t-u)}\left\{E\_{\alpha}(u^{\alpha}) - u^n E\_{\alpha,n+1}(u^{\alpha})\right\} du\,. \end{aligned} \tag{61}$$

Similarly, we can use (54) for negative values of *x*:

$$1 - x + x^2 - \dots + (-1)^k x^k + \dots = \frac{1}{1+x}\,. \tag{62}$$

Then, the corresponding Laplace transform becomes

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left\{-\frac{1}{s} + \frac{1}{s^2} - \frac{1}{s^3} + \dots + \frac{(-1)^k}{s^k} + \dots\right\} = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left(\frac{s}{s+1} - 1\right) = \frac{s^{\alpha-1}}{s^{\alpha}-1} - \frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{1}{s+1}\,. \tag{63}$$

Inversion of this result yields

$$\sum\_{k=1}^{\infty} (-1)^{k-1} t^{k-1} E\_{\alpha,k}(t^{\alpha}) = E\_{\alpha}(t^{\alpha}) - E\_{\alpha}(t^{\alpha}) * e^{-t} = E\_{\alpha}(t^{\alpha}) - \int\_0^t e^{-(t-u)} E\_{\alpha}(u^{\alpha})\,du\,. \tag{64}$$
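Identity (64) equates an infinite alternating series of Mittag-Leffler functions with a convolution integral, and both sides are computable. The sketch below takes α = 2, where the functions involved are smooth, and truncates the series at 40 terms (ad hoc choices; `ml` and `simpson` are illustrative helpers):

```python
# Check sum_k (-1)^(k-1) t^(k-1) E_{a,k}(t^a)
#       = E_a(t^a) - integral_0^t exp(-(t-u)) E_a(u^a) du, i.e., (64), for a = 2.
import math

def ml(alpha, beta, z, kmax=300, tol=1e-17):
    """Two-parameter series (2), truncated."""
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha * k + beta)
        total += term
        if abs(term) < tol:
            break
    return total

def simpson(f, a, b, n=1000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

alpha, t = 2.0, 0.8
lhs = sum((-1)**(k - 1) * t**(k - 1) * ml(alpha, float(k), t**alpha)
          for k in range(1, 40))
conv = simpson(lambda u: math.exp(-(t - u)) * ml(alpha, 1.0, u**alpha), 0.0, t)
rhs = ml(alpha, 1.0, t**alpha) - conv
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```

The series terms decay factorially (roughly like $t^{k-1}/\Gamma(k)$), so 40 terms are far more than enough at this argument.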

According to the binomial theorem, for |*x*| < 1, we have

$$1 - 2x + 3x^2 - 4x^3 + \dots = \sum\_{k=1}^{\infty} (-1)^{k-1} k \, x^{k-1} = \frac{1}{(1+x)^2}\,. \tag{65}$$

The Laplace transform corresponding to this series is

$$F(s) = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left\{-\frac{2}{s} + \frac{3}{s^2} - \frac{4}{s^3} + \dots\right\} = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\left(\frac{s}{s+1}\right)^2 - 1\right] = \frac{s^{\alpha-1}}{s^{\alpha}-1} - \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\frac{1}{(s+1)^2} + \frac{2s}{(s+1)^2}\right]. \tag{66}$$

The inverse transform of the second term in (66) is

$$L^{-1}\left\{-\frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\frac{1}{(s+1)^2} + \frac{2s}{(s+1)^2}\right]\right\} = -E\_{\alpha}(t^{\alpha}) * \left[t\,e^{-t} + 2e^{-t}(1-t)\right] = E\_{\alpha}(t^{\alpha}) * \left[(t-2)\,e^{-t}\right]. \tag{67}$$

Thus, the infinite series of the Mittag-Leffler functions in (65) and (67) is

$$E\_{\alpha,1}(t^{\alpha}) - 2t\,E\_{\alpha,2}(t^{\alpha}) + 3t^2\,E\_{\alpha,3}(t^{\alpha}) - 4t^3\,E\_{\alpha,4}(t^{\alpha}) + \dots = \sum\_{k=1}^{\infty} (-1)^{k-1} k\, t^{k-1} E\_{\alpha,k}(t^{\alpha}) = E\_{\alpha}(t^{\alpha}) + \int\_0^t \left[(t-u-2)\,e^{-(t-u)}\right] E\_{\alpha}(u^{\alpha})\,du\,. \tag{68}$$

From the preceding examples, it is obvious that if the function *f*(*x*) is expanded into the Taylor series,

$$f(x) = \sum\_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} x^k\,, \tag{69}$$

then the sum of the corresponding series of the Mittag-Leffler functions can be expressed in terms of convolution integrals. This is only possible if the inverse Laplace transform $L^{-1}\{f(1/s) - 1\}$ is known.

Now, consider the binomial series with the power 1/2. In this case, some derivatives of the function *f*(*t*) are equal to zero at the origin:

$$f(x) = \sqrt{1 + x^2} = 1 + \frac{x^2}{2} - \frac{x^4}{8} + \frac{x^6}{16} - \frac{5x^8}{128} + \dots \, . \tag{70}$$

The corresponding series of the Mittag-Leffler functions is

$$S(t^{\alpha}) = E\_{\alpha,1}(t^{\alpha}) + \frac{t^2}{2} E\_{\alpha,3}(t^{\alpha}) - \frac{t^4}{8} E\_{\alpha,5}(t^{\alpha}) + \frac{t^6}{16} E\_{\alpha,7}(t^{\alpha}) - \frac{5t^8}{128} E\_{\alpha,9}(t^{\alpha}) + \dots\,, \tag{71}$$

while the Laplace transform of $S(t^\alpha)$, after a few manipulations, is given by

$$\begin{split} F(s) &= \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left\{\frac{1}{2s^2} - \frac{1}{8s^4} + \frac{1}{16s^6} - \frac{5}{128s^8} + \dots\right\} \\ &= \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\sqrt{1+\left(\frac{1}{s}\right)^2} - 1\right] = \frac{s^{\alpha-1}}{s^{\alpha}-1} + \frac{s^{\alpha-1}}{s^{\alpha}-1}\left[\frac{s}{\sqrt{s^2+1}} - 1\right] \\ &= \frac{s^{\alpha-1}}{s^{\alpha}-1} - \frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{1}{\sqrt{s^2+1}\left[s+\sqrt{s^2+1}\right]}\,. \end{split} \tag{72}$$

Noting that the inverse Laplace transform of the Bessel function of the first kind and of the first order is

$$L^{-1}\left\{\frac{1}{\sqrt{s^2+1}\left[s+\sqrt{s^2+1}\right]}\right\} = J\_1(t)\,,\tag{73}$$

one finds that the series of the Mittag-Leffler functions in (71) can be expressed as

$$\begin{split} &E\_{\alpha,1}(t^{\alpha}) + \frac{t^2}{2}E\_{\alpha,3}(t^{\alpha}) - \frac{t^4}{8}E\_{\alpha,5}(t^{\alpha}) + \frac{t^6}{16}E\_{\alpha,7}(t^{\alpha}) - \frac{5t^8}{128}E\_{\alpha,9}(t^{\alpha}) + \dots \\ &= E\_{\alpha,1}(t^{\alpha}) - E\_{\alpha,1}(t^{\alpha}) * J\_1(t) = E\_{\alpha,1}(t^{\alpha}) - \int\_0^t E\_{\alpha,1}(u^{\alpha})\, J\_1(t-u)\,du \\ &= E\_{\alpha}(t^{\alpha}) - \int\_0^{\pi/2} t\sin(2\theta)\, E\_{\alpha}[t^{\alpha}(\cos\theta)^{2\alpha}]\, J\_1[t(\sin\theta)^2]\,d\theta\,. \end{split} \tag{74}$$

#### **3. Differentiation and Integration of the Mittag-Leffler Functions with Respect to Parameters in the Laplace Transform Approach**

The operational rules of the Laplace transformation are also appropriate in the evaluation of derivatives of the Mittag-Leffler functions with respect to parameters. Differentiation under the integral transform sign is permissible if the function *f*(*t*,α) is continuous with respect to the variable *t* and the parameter α. Then, we have

$$\begin{array}{l} L\{f(t,\alpha)\} = F(s,\alpha)\,, \\ L\left\{\frac{\partial f(t,\alpha)}{\partial \alpha}\right\} = \frac{\partial F(s,\alpha)}{\partial \alpha} = G(s,\alpha)\,, \\ L^{-1}\left\{\frac{\partial F(s,\alpha)}{\partial \alpha}\right\} = L^{-1}\{G(s,\alpha)\} = \frac{\partial f(t,\alpha)}{\partial \alpha}\,. \end{array} \tag{75}$$

The Laplace transform *G*(*s*,α) of the derivative of the Mittag-Leffler function $E\_\alpha(t^\alpha)$ is

$$G(s,\alpha) = L\left\{\frac{\partial E\_{\alpha}(t^{\alpha})}{\partial \alpha}\right\} = \frac{\partial}{\partial \alpha}\left(\frac{s^{\alpha-1}}{s^{\alpha}-1}\right) = \frac{s^{\alpha-1}\ln s}{s^{\alpha}-1} - \frac{s^{2\alpha-1}\ln s}{(s^{\alpha}-1)^2} = -\frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha}-1} = -\frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha-1}}\,. \tag{76}$$

In order to avoid evaluating a complex integral in the inversion process, *G*(*s*,α) is expressed as the product of three Laplace transforms. The convolution theorem can be applied to *G*(*s*,α) because the inverse of the third factor in (76) is given, for Reλ > 0, in [20] as

$$L^{-1}\left\{\frac{\ln s}{s^{\lambda}}\right\} = \frac{t^{\lambda - 1}}{\Gamma(\lambda)}[\psi(\lambda) - \ln t] \; \; \; \tag{77}$$

From (76) and (77) it follows that

$$\frac{\partial E_{\alpha}(t^{\alpha})}{\partial \alpha} = E_{\alpha}(t^{\alpha}) * E_{\alpha}(t^{\alpha}) * \left\{\frac{t^{\alpha-2}}{\Gamma(\alpha-1)}\left[\ln t - \psi(\alpha-1)\right]\right\} \tag{78}$$

where α > 1.

Thus, due to two convolutions, the derivative with respect to α is expressed by a double convolution integral. If the Laplace transform in (76) is written as

$$G(s,\alpha) = -\frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{s^{\alpha-\lambda}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha-\lambda}}\,, \tag{79}$$

the inverse transform of (79) becomes

$$\frac{\partial E_{\alpha}(t^{\alpha})}{\partial \alpha} = E_{\alpha}(t^{\alpha}) * \left[t^{\lambda-1}E_{\alpha,\lambda}(t^{\alpha})\right] * \left\{\frac{t^{\alpha-\lambda-1}}{\Gamma(\alpha-\lambda)}\left[\ln t - \psi(\alpha-\lambda)\right]\right\} \tag{80}$$

where 0 < λ < α < 1.

The case α = 1 will be considered in the next section.

In a similar manner, the Laplace transform of the derivative of the Mittag-Leffler function *t*<sup>β−1</sup>*E*α,β(*t*<sup>α</sup>) with respect to α is

$$\begin{split} G(s,\alpha,\beta) &= L\left\{\frac{\partial\left[t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right]}{\partial \alpha}\right\} = \frac{\partial}{\partial \alpha}\left(\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\right) \\ &= -\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha}-1} = -\frac{s^{\alpha-1}}{s^{\alpha}-1}\cdot\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha-1}} \end{split} \tag{81}$$

This gives

$$\frac{\partial\left[t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right]}{\partial \alpha} = E_{\alpha}(t^{\alpha}) * \left\{t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right\} * \left\{\frac{t^{\alpha-2}}{\Gamma(\alpha-1)}\left[\ln t - \psi(\alpha-1)\right]\right\} \tag{82}$$

where α > 1.

As expected, for β = 1, (82) reduces to (78).

For 0 < α < 1, from (79), it follows that

$$G(s,\alpha,\beta) = -\frac{s^{\alpha-\lambda}}{s^{\alpha}-1}\cdot\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\alpha-\lambda}}\,, \tag{83}$$

and

$$\frac{\partial\left[t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right]}{\partial \alpha} = \left\{t^{\lambda-1}E_{\alpha,\lambda}(t^{\alpha})\right\} * \left\{t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right\} * \left\{\frac{t^{\alpha-\lambda-1}}{\Gamma(\alpha-\lambda)}\left[\ln t - \psi(\alpha-\lambda)\right]\right\} \tag{84}$$

where 0 < λ < α < 1.

For a variable β, the Laplace transform of the derivative of *t*<sup>β−1</sup>*E*α,β(*t*<sup>α</sup>) is

$$\begin{split} H(s,\alpha,\beta) &= L\left\{\frac{\partial\left[t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right]}{\partial \beta}\right\} = \frac{\partial}{\partial \beta}\left(\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\right) \\ &= -\frac{s^{\alpha-\beta}\ln s}{s^{\alpha}-1} = -\frac{s^{\alpha-(\beta-\lambda)}}{s^{\alpha}-1}\cdot\frac{\ln s}{s^{\lambda}}\,, \end{split} \tag{85}$$

and the inverse transform is

$$\begin{split} \frac{\partial\left[t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\right]}{\partial \beta} &= t^{\beta-1}\ln t\, E_{\alpha,\beta}(t^{\alpha}) + t^{\beta-1}\frac{\partial E_{\alpha,\beta}(t^{\alpha})}{\partial \beta} \\ &= \left\{t^{\beta-\lambda-1}E_{\alpha,\beta-\lambda}(t^{\alpha})\right\} * \left\{\frac{t^{\lambda-1}}{\Gamma(\lambda)}\left[\ln t - \psi(\lambda)\right]\right\}, \end{split} \tag{86}$$

where β > λ > 0.

As with the differential operations, the Laplace transformation provides rules for the evaluation of integrals. The Laplace transform of the Mittag-Leffler function *t*<sup>β−1</sup>*E*α,β(*t*<sup>α</sup>) enables one to derive the following integral

$$I(t,\lambda) = \int_0^{\lambda} t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\, d\beta\,. \tag{87}$$

The Laplace transform of (87) can be determined by changing the order of integration as follows:

$$\begin{split} \int_0^{\infty} e^{-st}\left\{\int_0^{\lambda} t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\, d\beta\right\} dt &= \int_0^{\lambda}\left\{\int_0^{\infty} e^{-st}\, t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\, dt\right\} d\beta \\ &= \int_0^{\lambda}\frac{s^{\alpha-\beta}}{s^{\alpha}-1}\, d\beta = \frac{s^{\alpha}}{s^{\alpha}-1}\cdot\frac{1}{\ln s} - \frac{s^{\alpha-\lambda}}{s^{\alpha}-1}\cdot\frac{1}{\ln s}\,. \end{split} \tag{88}$$

The inverse of (ln *s*)<sup>−1</sup> is closely related to a Volterra function [19] as

$$L^{-1}\left\{\frac{1}{\ln s}\right\} = \int_0^{\infty} \frac{t^{u-1}}{\Gamma(u)}\, du\,. \tag{89}$$

It follows from (47) that

$$L^{-1}\left\{\frac{s^{\alpha}}{s^{\alpha}-1}\right\} = \delta(t) + \frac{d}{dt}\left[E_{\alpha}(t^{\alpha})\right], \tag{90}$$

whereas (49) gives

$$\frac{d}{dt}\left[E_{\alpha}(t^{\alpha})\right] = t^{\alpha-1}E_{\alpha,\alpha}(t^{\alpha})\,. \tag{91}$$

The final result in terms of convolution integrals is

$$I(t,\lambda) = \left[\delta(t) + t^{\alpha-1}E_{\alpha,\alpha}(t^{\alpha}) - t^{\lambda-1}E_{\alpha,\lambda}(t^{\alpha})\right] * \int_0^{\infty} \frac{t^{u-1}}{\Gamma(u)}\, du\,. \tag{92}$$

The limits of integration in (87) can be altered to

$$\begin{array}{l} \displaystyle\int_0^{\infty} t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\, d\beta = \left[\delta(t) + t^{\alpha-1}E_{\alpha,\alpha}(t^{\alpha})\right] * \int_0^{\infty} \frac{t^{u-1}}{\Gamma(u)}\, du \\[2ex] \displaystyle\int_{\lambda}^{\infty} t^{\beta-1}E_{\alpha,\beta}(t^{\alpha})\, d\beta = \left[t^{\lambda-1}E_{\alpha,\lambda}(t^{\alpha})\right] * \int_0^{\infty} \frac{t^{u-1}}{\Gamma(u)}\, du \end{array} \tag{93}$$

The second term on the right-hand side of (88), written in a different form, is inverted by means of a Volterra function as follows

$$\begin{split} L^{-1}\left\{\frac{s^{\alpha-\lambda}}{s^{\alpha}-1}\cdot\frac{1}{\ln s}\right\} &= L^{-1}\left\{\frac{s^{\alpha-(\lambda-1)}}{s^{\alpha}-1}\cdot\frac{1}{s\ln s}\right\} = t^{\lambda-2}E_{\alpha,\lambda-1}(t^{\alpha}) * \nu(t)\,, \\ \nu(t) &= \int_0^{\infty} \frac{t^{u}}{\Gamma(u+1)}\, du\,. \end{split} \tag{94}$$

The connection between the Mittag-Leffler functions and the Volterra functions in the Laplace transformation is discussed in detail in [19].

#### **4. Derivatives of the Mittag-Leffler Functions with Respect to Parameters α and β Expressed as Power Series**

As shown in the previous section, differentiation of the Mittag-Leffler functions with respect to parameters can be represented formally, in closed form, in terms of double convolution integrals. Unfortunately, these convolution integrals are not amenable to numerical computations. Hence, an alternative approach is required. Differentiating (1) and (2) with respect to α and β yields

$$\begin{split} \frac{\partial E_{\alpha}(t)}{\partial \alpha} &= G(\alpha,t) = -\sum_{k=1}^{\infty}\left(\frac{\psi(\alpha k+1)}{\Gamma(\alpha k+1)}\right) k\, t^{k}, \\ \frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha} &= -\sum_{k=1}^{\infty}\left(\frac{\psi(\alpha k+\beta)}{\Gamma(\alpha k+\beta)}\right) k\, t^{k} \end{split} \tag{95}$$

and

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta} = -\sum_{k=0}^{\infty}\left(\frac{\psi(\alpha k+\beta)}{\Gamma(\alpha k+\beta)}\right) t^{k}. \tag{96}$$

The second derivatives are

$$\frac{\partial^2 E_{\alpha}(t)}{\partial \alpha^2} = G'(\alpha,t) = \sum_{k=1}^{\infty}\left\{\frac{\left[\psi(\alpha k+1)\right]^2 - \psi^{(1)}(\alpha k+1)}{\Gamma(\alpha k+1)}\right\} k^2 t^{k}, \tag{97}$$

$$\begin{split} \frac{\partial^2 E_{\alpha,\beta}(t)}{\partial \beta^2} &= \sum_{k=0}^{\infty}\left\{\frac{\left[\psi(\alpha k+\beta)\right]^2 - \psi^{(1)}(\alpha k+\beta)}{\Gamma(\alpha k+\beta)}\right\} t^{k} \\ \frac{\partial^2 E_{\alpha,\beta}(t)}{\partial \alpha\,\partial \beta} &= \sum_{k=1}^{\infty}\left\{\frac{\left[\psi(\alpha k+\beta)\right]^2 - \psi^{(1)}(\alpha k+\beta)}{\Gamma(\alpha k+\beta)}\right\} k\, t^{k} \end{split} \tag{98}$$

Higher derivatives with respect to α and β yield similar summands, differing only in the powers of *k*. Infinite series with the digamma functions in their summands do not appear often in mathematical investigations [22,23]. This changed in 2008 with the huge collection of results in the book by Brychkov [24]. Nevertheless, in their general form, infinite series with quotients of the digamma and gamma functions in their summands are still unsolved. However, for specific values of α and β, MATHEMATICA is able to determine closed forms for them, although they are rather cumbersome, with a mixture of elementary and special functions. Their validity was checked by carrying out numerical calculations with (95) and (96). Only a limited number of results will appear in this section, with the remainder appearing in Tables 1 and 2.
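The direct summation used for these checks takes only a few lines. The following Python sketch (our own, using mpmath rather than the author's MATHEMATICA module) sums the series (95) and (96); the quotient ψ(α*k*+β)/Γ(α*k*+β) is evaluated term by term at high working precision:

```python
# Direct summation of the series (95) and (96) for the parameter
# derivatives of the Mittag-Leffler function (a sketch, not the
# author's MATHEMATICA module); mpmath supplies psi and Gamma.
from mpmath import mp, mpf, digamma, gamma

mp.dps = 30  # working precision in decimal digits


def dE_dalpha(alpha, beta, t, kmax=200):
    """Series (95): dE_{alpha,beta}(t)/dalpha = -sum_{k>=1} psi(ak+b)/Gamma(ak+b) k t^k."""
    s = mpf(0)
    for k in range(1, kmax + 1):
        x = mpf(alpha) * k + beta
        s -= digamma(x) / gamma(x) * k * mpf(t) ** k
    return s


def dE_dbeta(alpha, beta, t, kmax=200):
    """Series (96): dE_{alpha,beta}(t)/dbeta = -sum_{k>=0} psi(ak+b)/Gamma(ak+b) t^k."""
    s = mpf(0)
    for k in range(kmax + 1):
        x = mpf(alpha) * k + beta
        s -= digamma(x) / gamma(x) * mpf(t) ** k
    return s
```

Since 1/Γ(α*k*+β) decays factorially for α > 0, a few hundred terms suffice at these precisions.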


**Table 1.** First derivatives of the Mittag-Leffler functions with respect to the parameter α*.*



**Table 2.** First derivatives of the Mittag-Leffler functions with respect to the parameter β*.*


Convergence conditions for the power series reported in this section were not established, and therefore *t* values are in some cases restricted (e.g., in (99) and (100) for |*t*| < 1). These summands were obtained from MATHEMATICA, but the validity was numerically checked for only some of them.

The simplest cases occur when α and β equal zero or unity. Then, we find that

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=0,\beta=1} = -\frac{\psi(1)}{\Gamma(1)}\sum_{k=1}^{\infty} k t^{k} = \frac{\gamma\, t}{(t-1)^2}\,, \tag{99}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=0} = -\frac{\psi(\beta)}{\Gamma(\beta)}\sum_{k=1}^{\infty} k t^{k} = -\frac{\psi(\beta)}{\Gamma(\beta)}\frac{t}{(t-1)^2}\,, \tag{100}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=1,\beta=0} = -\sum_{k=1}^{\infty}\left(\frac{\psi(k)}{\Gamma(k)}\right) k t^{k} = t\left\{e^t(1+t)\left[\text{Chi}(t) - \text{Shi}(t) - \ln t\right] + 1 - e^t\right\} \tag{101}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=1,\beta=1} = -\sum_{k=1}^{\infty}\left(\frac{\psi(k+1)}{\Gamma(k+1)}\right) k t^{k} = 1 - e^t\left\{t\left[\ln t + \Gamma(0,t)\right] + 1\right\}, \qquad \Gamma(0,t) = -Ei(-t)\,, \tag{102}$$

and

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta}\Big|_{\alpha=1,\beta=1} = -\sum_{k=0}^{\infty}\left(\frac{\psi(k+1)}{\Gamma(k+1)}\right) t^{k} = -e^t\left[\ln t + \Gamma(0,t)\right], \tag{103}$$

where Γ(0,*t*) = −*Ei*(−*t*), and the hyperbolic sine and cosine integrals and the exponential integral are defined by

$$\begin{array}{l} \displaystyle \text{Shi}(t) = \int_0^t \frac{\sinh u}{u}\, du \\[2ex] \displaystyle \text{Chi}(t) = \gamma + \ln t + \int_0^t \frac{\cosh u - 1}{u}\, du \\[2ex] \displaystyle -Ei(-t) = \int_t^{\infty} \frac{e^{-u}}{u}\, du \end{array} \tag{104}$$

γ represents Euler's constant.
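Identities such as (103) are straightforward to validate numerically. The following Python sketch (our own check; mpmath's `e1` supplies Γ(0,*t*) = E<sub>1</sub>(*t*)) compares the digamma series with the closed form:

```python
# Numerical check of (103): -sum_{k>=0} psi(k+1)/Gamma(k+1) t^k
# equals -e^t [ln t + Gamma(0,t)], where Gamma(0,t) = E1(t) = -Ei(-t).
from mpmath import mp, mpf, digamma, gamma, exp, log, e1

mp.dps = 30


def series_103(t, kmax=100):
    """Left-hand side of (103), truncated after kmax terms."""
    t = mpf(t)
    return -sum(digamma(k + 1) / gamma(k + 1) * t ** k for k in range(kmax))


def closed_103(t):
    """Right-hand side of (103)."""
    t = mpf(t)
    return -exp(t) * (log(t) + e1(t))
```

The terms decay like *t*<sup>k</sup>/*k*!, so the truncated series agrees with the closed form essentially to working precision.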

For α*,* β = 0, 1, and 2, the following sums of infinite series are known:

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=1,\beta=2} = -\sum_{k=1}^{\infty}\left(\frac{\psi(k+2)}{\Gamma(k+2)}\right) k t^{k} = \frac{1 + \gamma + e^t\left[(t-1)\,\text{Chi}(t) + \text{Shi}(t) - t\left(\text{Shi}(t) + \ln t\right) + \ln t - 1\right]}{t} \tag{105}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=2,\beta=0} = -\sum_{k=1}^{\infty}\left(\frac{\psi(2k)}{\Gamma(2k)}\right) k t^{k} = \frac{\sqrt{t}\left[2\,\text{Chi}(\sqrt{t}) - \ln t\right]\left[\sinh(\sqrt{t}) + \sqrt{t}\cosh(\sqrt{t})\right] - 2\sqrt{t}\,\text{Shi}(\sqrt{t})\left[\sqrt{t}\sinh(\sqrt{t}) + \cosh(\sqrt{t})\right] - 2\sqrt{t}\sinh(\sqrt{t})}{4}\,, \tag{106}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=2,\beta=1} = -\sum_{k=1}^{\infty}\left(\frac{\psi(2k+1)}{\Gamma(2k+1)}\right) k t^{k} = \frac{\sqrt{t}\sinh(\sqrt{t})\left[2\,\text{Chi}(\sqrt{t}) - \ln t\right] - 2\cosh(\sqrt{t})\left[\sqrt{t}\,\text{Shi}(\sqrt{t}) + 1\right] + 2}{4}\,, \tag{107}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \alpha}\Big|_{\alpha=2,\beta=2} = -\sum_{k=1}^{\infty}\left(\frac{\psi(2k+2)}{\Gamma(2k+2)}\right) k t^{k} = \frac{\left[2\,\text{Chi}(\sqrt{t}) - \ln t\right]\left[\sqrt{t}\cosh(\sqrt{t}) - \sinh(\sqrt{t})\right] - 2\sinh(\sqrt{t})}{4\sqrt{t}} + \frac{2\,\text{Shi}(\sqrt{t})\left[\cosh(\sqrt{t}) - \sqrt{t}\sinh(\sqrt{t})\right]}{4\sqrt{t}} \tag{108}$$

and

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta}\Big|_{\alpha=1,\beta=2} = -\sum_{k=0}^{\infty}\left(\frac{\psi(k+2)}{\Gamma(k+2)}\right) t^{k} = -\frac{\gamma + e^t\left[\text{Shi}(t) - \text{Chi}(t) + \ln t\right]}{t} \tag{109}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta}\Big|_{\alpha=2,\beta=0} = -\sum_{k=0}^{\infty}\left(\frac{\psi(2k)}{\Gamma(2k)}\right) t^{k} = 1 + \frac{\sqrt{t}\left[\sinh(\sqrt{t})\left(2\,\text{Chi}(\sqrt{t}) - \ln t\right) - 2\cosh(\sqrt{t})\,\text{Shi}(\sqrt{t})\right]}{2}\,, \tag{110}$$

$$\frac{\partial E_{\alpha,\beta}(-t)}{\partial \beta}\Big|_{\alpha=2,\beta=0} = -\sum_{k=0}^{\infty}\left(\frac{\psi(2k)}{\Gamma(2k)}\right)(-t)^{k} = 1 - \frac{\sqrt{t}\left[\sin(\sqrt{t})\left(2\,\text{Ci}(\sqrt{t}) - \ln t\right) - 2\cos(\sqrt{t})\,\text{Si}(\sqrt{t})\right]}{2}\,, \tag{111}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta}\Big|_{\alpha=2,\beta=1} = -\sum_{k=0}^{\infty}\left(\frac{\psi(2k+1)}{\Gamma(2k+1)}\right) t^{k} = -\sinh(\sqrt{t})\,\text{Shi}(\sqrt{t}) + \frac{\cosh(\sqrt{t})\left[2\,\text{Chi}(\sqrt{t}) - \ln t\right]}{2} \tag{112}$$

$$\frac{\partial E_{\alpha,\beta}(t)}{\partial \beta}\Big|_{\alpha=2,\beta=2} = -\sum_{k=0}^{\infty}\left(\frac{\psi(2k+2)}{\Gamma(2k+2)}\right) t^{k} = \frac{\sinh(\sqrt{t})\left[2\,\text{Chi}(\sqrt{t}) - \ln t\right] - 2\cosh(\sqrt{t})\,\text{Shi}(\sqrt{t})}{2\sqrt{t}} \tag{113}$$

and

$$\frac{\partial E_{\alpha,\beta}(-t)}{\partial \beta}\Big|_{\alpha=2,\beta=2} = -\sum_{k=0}^{\infty}\left(\frac{\psi(2k+2)}{\Gamma(2k+2)}\right)(-t)^{k} = \frac{\sin(\sqrt{t})\left[2\,\text{Ci}(\sqrt{t}) - \ln t\right] - 2\cos(\sqrt{t})\,\text{Si}(\sqrt{t})}{2\sqrt{t}} \tag{114}$$

where the sine and cosine integrals are defined by

$$\begin{aligned} \text{Si}(t) &= \int_0^t \frac{\sin u}{u}\, du \\ \text{Ci}(t) &= -\int_t^{\infty} \frac{\cos u}{u}\, du \end{aligned} \tag{115}$$

A number of numerical methods for evaluating the Mittag-Leffler functions and their derivatives with respect to the argument *z* are given in the literature [25–27]. Fortunately, the Mittag-Leffler functions are available in MATHEMATICA, which means that the first and the second derivatives with respect to α can also be evaluated. The results for 0.05 < α < 5.0 and 0 < *t* < 2.25 can be obtained from the author on request. Two numerical methods were used to verify the results. In the first method, direct summation of the infinite series (95) and (96) was performed in a MATHEMATICA module, while in the second method, the calculations were carried out by applying central differences to *O*(*h*<sup>4</sup>) with *h* = 0.001:

$$\frac{\partial E_{\alpha}(t)}{\partial \alpha} = \frac{-E_{\alpha+2h}(t) + 8E_{\alpha+h}(t) - 8E_{\alpha-h}(t) + E_{\alpha-2h}(t)}{12h} \tag{116}$$

and

$$\frac{\partial^2 E_{\alpha}(t)}{\partial \alpha^2} = \frac{-E_{\alpha+2h}(t) + 16E_{\alpha+h}(t) - 30E_{\alpha}(t) + 16E_{\alpha-h}(t) - E_{\alpha-2h}(t)}{12h^2} \tag{117}$$

The Mittag-Leffler functions appearing in the above formulas were evaluated in MATHEMATICA.
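The same verification procedure can be reproduced outside MATHEMATICA. The sketch below (our own Python version, with mpmath) sums the defining series for *E*α(*t*) and applies the five-point stencils (116) and (117):

```python
# Fourth-order central differences (116)-(117) for the derivatives of
# E_alpha(t) with respect to alpha, with E_alpha(t) summed from its
# defining series (a Python sketch of the verification, not the
# author's MATHEMATICA code).
from mpmath import mp, mpf, gamma

mp.dps = 30


def E(alpha, t, kmax=300):
    """E_alpha(t) = sum_{k>=0} t^k / Gamma(alpha k + 1)."""
    return sum(mpf(t) ** k / gamma(mpf(alpha) * k + 1) for k in range(kmax))


def dE_central(alpha, t, h=mpf('0.001')):
    """Stencil (116), error O(h^4)."""
    return (-E(alpha + 2 * h, t) + 8 * E(alpha + h, t)
            - 8 * E(alpha - h, t) + E(alpha - 2 * h, t)) / (12 * h)


def d2E_central(alpha, t, h=mpf('0.001')):
    """Stencil (117), error O(h^4)."""
    return (-E(alpha + 2 * h, t) + 16 * E(alpha + h, t) - 30 * E(alpha, t)
            + 16 * E(alpha - h, t) - E(alpha - 2 * h, t)) / (12 * h ** 2)
```

With *h* = 0.001 the stencil error is far below plotting accuracy, so the output can be compared directly against the series (95) and (97).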

The Mittag-Leffler functions, *f*(α,*t*) = *E*α(*t*), as a function of α for constant *t*, are plotted in Figure 1. The rapid exponential behavior of these functions means that only narrow intervals of the functions can be plotted. As can be seen, they are always positive and become more divergent as *t* increases. For 0 < α < 1, they possess a maximum, which moves as *t* is increased. For large values of α and *t*, they tend to zero.

**Figure 1.** The Mittag-Leffler functions *E*α(*t*) as a function of α at constant values of argument *t*. 1—0.25; 2—0.50; 3—0.75; 4—0.85; 5—1.0; 6—1.5; 7—2.0.

The first derivatives of the Mittag-Leffler functions with respect to α, *G*(α,*t*) = ∂*E*α(*t*)/∂α, are plotted in Figure 2. Their behavior mirrors that of *E*α(*t*), except that they are inverted, as they are always negative.

**Figure 2.** *G*(α,*t*)—First derivatives of the Mittag-Leffler functions with respect to α plotted at constant values of *t*. 1—0.25; 2—0.50; 3—0.75; 4—0.85; 5—1.0; 6—1.5; 7—2.0.

The second derivatives with respect to α, *G'*(α,*t*) = ∂<sup>2</sup>*E*α(*t*)/∂α<sup>2</sup>, are presented in Figure 3. Their behavior resembles that of the Mittag-Leffler functions (Figure 1). However, for small values of *t*, they move from negative to positive values. The divergent behavior of *G*'(α,*t*) also applies for large values of *t*, but for increasing values of α and *t*, they tend to zero.

**Figure 3.** *G'*(α,*t*)—Second derivatives of the Mittag-Leffler functions with respect to α plotted at constant values of *t*. 1—0.25; 2—0.50; 3—0.75; 4—0.85; 5—1.0; 6—1.5; 7—2.0.

#### **5. Derivatives of the Mittag-Leffler Functions with Respect to Parameters α and β from Integral Representations**

Derivatives with respect to α and β can be determined by direct differentiation of the integrands in integral representations of the Mittag-Leffler functions. Because no general expression exists for integral representations [25,27–33], it is possible to use only those that are valid for real positive and negative values of *t* and for restricted values of α and β.

For 0 < α < 1 and *t* > 0, these are

$$E_{\alpha}(t^{\alpha}) = \frac{e^t}{\alpha} - \frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,, \tag{118}$$

$$E_{\alpha}(-t^{\alpha}) = \frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,, \tag{119}$$

and

$$E_{\alpha,\beta}(t^{\alpha}) = \frac{e^t}{\alpha} - \frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,, \tag{120}$$

$$E_{\alpha,\beta}(-t^{\alpha}) = \frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,. \tag{121}$$

In (120) and (121), 0 < β < α + 1.
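Representations of this kind are convenient for computation because the integrands are non-oscillatory and exponentially damped. As an illustration, the well-established case (119) can be checked against the defining power series; the Python sketch below is our own (we assume mpmath, whose `quad` over a split interval copes with the integrable *u*<sup>α−1</sup> singularity at the origin):

```python
# Check of the integral representation (119) for E_alpha(-t^alpha),
# valid for 0 < alpha < 1 and t > 0, against the defining power
# series (an illustrative sketch).
from mpmath import mp, mpf, pi, sin, cos, exp, gamma, quad, inf

mp.dps = 25


def E_series(alpha, t, kmax=200):
    """E_alpha(-t^alpha) from the power series definition."""
    z = -mpf(t) ** alpha
    return sum(z ** k / gamma(mpf(alpha) * k + 1) for k in range(kmax))


def E_integral(alpha, t):
    """E_alpha(-t^alpha) from (119)."""
    a, t = mpf(alpha), mpf(t)
    f = lambda u: exp(-t * u) * u ** (a - 1) / (
        u ** (2 * a) + 2 * u ** a * cos(pi * a) + 1)
    return sin(pi * a) / pi * quad(f, [0, 1, inf])
```

For α = 1/2 both routines reproduce the classical value *E*<sub>1/2</sub>(−√*t*) = e<sup>*t*</sup> erfc(√*t*).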

Direct differentiation of (118) and (119) with respect to α gives

$$\begin{split} \frac{\partial E_{\alpha}(t^{\alpha})}{\partial \alpha} &= -\frac{e^t}{\alpha^2} - \cos(\pi\alpha)\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad - \frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}\left[(1-u^{2\alpha})\ln u - 2\pi u^{\alpha}\sin(\pi\alpha)\right]}{\left[u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \end{split} \tag{122}$$

$$\begin{split} \frac{\partial E_{\alpha}(-t^{\alpha})}{\partial \alpha} &= \cos(\pi\alpha)\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad + \frac{\sin(\pi\alpha)}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-1}\left[(1-u^{2\alpha})\ln u + 2\pi u^{\alpha}\sin(\pi\alpha)\right]}{\left[u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \end{split} \tag{123}$$

where the first integrals in (122) and (123) can be written in terms of the Mittag-Leffler functions using (118) and (119).

In the same manner, one can obtain derivatives of the Mittag-Leffler functions *E*α,β(±*t*<sup>α</sup>) with respect to α and β. Thus, we find that

$$\begin{split} \frac{\partial E_{\alpha,\beta}(t^{\alpha})}{\partial \alpha} &= -\frac{e^t}{\alpha^2} - \int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\cos[\pi(\alpha-\beta)]}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad - \frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\ln u\left\{2u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad + \frac{2}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{2\alpha-\beta}\ln u\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}\left[u^{\alpha} - \cos(\pi\alpha)\right]}{\left[u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \\ &\quad + 2\int_0^{\infty}\frac{e^{-tu}\, u^{2\alpha-\beta}\sin(\pi\alpha)\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{\left[u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \end{split} \tag{124}$$

and

$$\begin{split} \frac{\partial E_{\alpha,\beta}(t^{\alpha})}{\partial \beta} &= \frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\ln u\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad - \int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\left\{u^{\alpha}\cos(\pi\beta) - \cos[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} - 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,. \end{split} \tag{125}$$

For the negative real axis, one obtains

$$\begin{split} \frac{\partial E_{\alpha,\beta}(-t^{\alpha})}{\partial \alpha} &= \int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\cos[\pi(\alpha-\beta)]}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad + \frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\ln u\left\{2u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad - \frac{2}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{2\alpha-\beta}\ln u\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}\left[u^{\alpha} + \cos(\pi\alpha)\right]}{\left[u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \\ &\quad + 2\int_0^{\infty}\frac{e^{-tu}\, u^{2\alpha-\beta}\sin(\pi\alpha)\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{\left[u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1\right]^2}\, du \end{split} \tag{126}$$

and

$$\begin{split} t^{\beta-1}\ln t\, E_{\alpha,\beta}(-t^{\alpha}) + t^{\beta-1}\frac{\partial E_{\alpha,\beta}(-t^{\alpha})}{\partial \beta} &= -\frac{1}{\pi}\int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\ln u\left\{u^{\alpha}\sin(\pi\beta) + \sin[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du \\ &\quad + \int_0^{\infty}\frac{e^{-tu}\, u^{\alpha-\beta}\left\{u^{\alpha}\cos(\pi\beta) - \cos[\pi(\alpha-\beta)]\right\}}{u^{2\alpha} + 2u^{\alpha}\cos(\pi\alpha) + 1}\, du\,. \end{split} \tag{127}$$

The infinite integrals in (122) to (127) are valid for restricted values of α and β. As can be expected, they represent Laplace transforms and are similar to the convolution integrals in Section 3.

#### **6. Conclusions**

For the first time, the parameters of the Mittag-Leffler functions in (1) and (2) have been treated as variables, and derivatives with respect to them have consequently been determined and discussed. Thus, it has been shown that operational calculus is a powerful tool for determining the properties of the Mittag-Leffler functions. Using the Laplace transform theory, new functional relations, together with infinite and finite series of the Mittag-Leffler functions, have also been calculated. Moreover, derivatives with respect to α and β have been found to be expressible in terms of convolution integrals. Direct differentiation of (1) and (2) yields infinite power series with quotients of digamma and gamma functions in their coefficients. For small integer values of α and β, closed forms are derived in terms of elementary and special functions. The Mittag-Leffler functions, together with their first and second derivatives, are graphed as functions of α and *t*. On a final note, it should be mentioned that Biyajima et al. [30,31] have used (102) in their new blackbody radiation law, but not the closed form given here.

**Funding:** This research received no external funding.

**Acknowledgments:** I wish to express my profound gratitude to Francesco Mainardi, Department of Physics and Astronomy, Bologna University, Bologna, Italy, for his help, advice, and kind encouragement over the years. He was the first person to show me the importance of the Mittag-Leffler functions. I am also grateful to Yuri A. Brychkov from the Computing Center of the Russian Academy of Sciences, Moscow, Russia, and Victor Adamchik from the Computer Science Department, University of Southern California, Los Angeles, USA, for simplifying and verifying several series from MATHEMATICA in this work. I thank Juan Luis Gonzales-Santander Martinez, Department of Mathematics, Universidad de Oviedo, Oviedo, Spain, for explaining MATHEMATICA and producing more elegant and efficient programs from my original programs. Finally, I thank the referees for their constructive comments, which helped improve this work considerably.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **The Wright Functions of the Second Kind in Mathematical Physics**

#### **Francesco Mainardi 1,\* and Armando Consiglio <sup>2</sup>**


Academic Editor: Francesco Mainardi Received: 18 April 2020; Accepted: 19 May 2020; Published: 1 June 2020

**Abstract:** In this review paper, we stress the importance of the higher transcendental Wright functions of the second kind in the framework of Mathematical Physics. We first start with the analytical properties of the classical Wright functions of which we distinguish two kinds. We then justify the relevance of the Wright functions of the second kind as fundamental solutions of the time-fractional diffusion-wave equations. Indeed, we think that this approach is the most accessible point of view for describing non-Gaussian stochastic processes and the transition from sub-diffusion processes to wave propagation. Through the sections of the text and suitable appendices, we plan to address the reader in this pathway towards the applications of the Wright functions of the second kind.

**Keywords:** fractional calculus; Wright functions; Green's functions; diffusion-wave equation; Laplace transform

**MSC:** 26A33; 33E12; 34A08; 34C26

#### **1. Introduction**

The special functions play a fundamental role in all fields of Applied Mathematics and Mathematical Physics because analytical results are usually expressed in terms of some of these functions. Even if the topic of special functions can appear boring, with their properties mainly treated in handbooks, we would like to promote the relevance of some of them that are not yet so well known. We devote our attention to the Wright functions, in particular to the class of the second kind. These functions, as we will see hereafter, are fundamental to deal with some non-standard deterministic and stochastic processes. Indeed, the Gaussian function (known as the normal probability distribution) must be generalized in a suitable way in the framework of partial differential equations of non-integer order for describing the anomalous diffusion and the transition from fractional diffusion to wave propagation.

Furthermore, their usefulness and meaningfulness also extend to other topics. For example, these functions and their Laplace transforms can be applied in electromagnetic problems; see the 1958 paper by Ragab [1] (where the Wright functions were used without knowing their existence) and the recent 2020 paper by Stefański and Gulgowski [2]. Recently, the Wright functions have been used in the theory of coherent states by Garra, Giraldi, and Mainardi [3].

This survey article aims to discuss the relevance of the Wright Functions and also to focus on the not well-known *Four Sisters Functions* and their importance in time-fractional diffusion-wave equations.

The plan of the paper is organized as follows. In Section 2, we introduce the Wright functions, entire in the complex plane, which we distinguish into two kinds in relation to the value range of the two parameters on which they depend. In particular, we devote our attention to two Wright functions of the second kind introduced by Mainardi with the term of auxiliary functions. One of them, known as the M-Wright function, generalizes the Gaussian function, so it is expected to play a fundamental role in non-Gaussian stochastic processes.

Indeed, in Section 3, we show how the Wright functions of the second kind are relevant in the analysis of time-fractional diffusion and diffusion-wave equations, being related to their fundamental solutions. This analysis generalizes the known results for the standard diffusion equation in the one-dimensional case, recalled in Appendix A, by means of auxiliary functions that are particular cases of the Wright functions of the second kind, known as M-Wright or Mainardi functions. For the readers' convenience, in Appendix B, we also provide an introduction to the time derivative of fractional order in the Caputo sense. We remind the reader that nowadays, as usual, by fractional order we mean a non-integer order, so that the term "*fractional*" is a misnomer kept only for historical reasons.

In Section 4, we consider again the Mainardi auxiliary functions for their role in probability theory and, in particular, in the framework of Lévy stable distributions, whose general theory is recalled in Appendix C.

In Section 5, we show how the auxiliary functions turn out to be included in a class that we denote *the four sister functions*. In their turn, these four functions, depending on a real parameter *ν* ∈ (0, 1), are the natural generalization of *the three sisters functions* introduced in Appendix A devoted to the standard diffusion equation. The attribute of sisters was coined by one of us (F. M.) in his lecture notes on Mathematical Physics because of their inter-relations, so this is only a personal reason that we hope to be shared by the readers.

Finally, in Section 6, we provide some concluding remarks, paying attention to work to be done in the near future.

We point out that we have equipped our theoretical analysis with several plots hoping they will be considered illuminating for the interested readers. We also note that we have limited our review to the simplest boundary values problems of equations in one space dimension referring the readers to suitable references for more general treatments in Section 3.1.

#### **2. The Wright Functions of the Second Kind and the Mainardi Auxiliary Functions**

The classical *Wright function*, which we denote by *Wλ*,*μ*(*z*), is defined by the series representation, convergent in the whole complex plane,

$$\mathcal{W}\_{\lambda,\mu}(z) := \sum\_{n=0}^{\infty} \frac{z^n}{n!\Gamma(\lambda n + \mu)}, \quad \lambda > -1, \quad \mu \in \mathbb{C}, \tag{1}$$

The *integral representation* reads as:

$$\mathcal{W}\_{\lambda,\mu}(z) = \frac{1}{2\pi i} \int\_{Ha\_{-}} e^{\sigma + z\sigma^{-\lambda}} \frac{d\sigma}{\sigma^{\mu}}, \quad \lambda > -1, \quad \mu \in \mathbb{C}, \tag{2}$$

where *Ha*<sup>−</sup> denotes the Hankel path: a loop that starts from −∞ along the lower side of the negative real axis, encircles the origin with a small circle, and ends at −∞ along the upper side of the negative real axis.

*Wλ*,*μ*(*z*) is then an *entire function* for all *λ* ∈ (−1, +∞). Originally, Wright assumed *λ* ≥ 0 in connection with his investigations on the asymptotic theory of partitions [4,5], and only in 1940 did he consider −1 < *λ* < 0 [6]. We note that, in Vol. 3, Chapter 18 of the handbook of the Bateman Project [7], presumably because of a misprint, the parameter *λ* is restricted to be non-negative, whereas the Wright functions remained practically ignored in other handbooks. In 1993, Mainardi, being aware only of the Bateman handbook, proved that the Wright function is entire also for −1 < *λ* < 0 in his approaches to the time-fractional diffusion equation that will be dealt with in the next section.

In view of the asymptotic representation in the complex domain and of the Laplace transform for positive argument *z* = *r* > 0 (*r* can be the time variable *t* or the space variable *x*), the Wright functions are distinguished into functions of the *first kind* (*λ* ≥ 0) and of the *second kind* (−1 < *λ* < 0), as outlined in Appendix F of the book by Mainardi [8]. In particular, for the asymptotic behavior, we refer the interested reader to the two papers by Wong and Zhao [9,10], and to the surveys by Luchko and by Paris in the Handbook of Fractional Calculus and Applications, see, respectively, [11,12], and references therein.

We note that the Wright functions are entire functions of order 1/(1 + *λ*); hence, only the functions of the first kind (*λ* ≥ 0) are of exponential order, whereas those of the second kind (−1 < *λ* < 0) are not. The case *λ* = 0 is trivial since *W*0,*μ*(*z*) = e*z*/Γ(*μ*). As a consequence of the difference in the orders, we must point out the different Laplace transforms proved, e.g., in [8,13]; see also the recent survey on Wright functions by Luchko [11]. We have:

• for the first kind, when *λ* ≥ 0

$$\mathcal{W}\_{\lambda,\mu}(\pm r) \, \div \, \frac{1}{s} E\_{\lambda,\mu} \left( \pm \frac{1}{s} \right) \, ; \tag{3}$$

• for the second kind, when −1 < *λ* < 0 and putting for convenience *ν* = −*λ* so 0 < *ν* < 1

$$\mathcal{W}\_{-\nu,\mu}(-r) \div E\_{\nu,\mu+\nu}(-s)\,. \tag{4}$$

Above, we have introduced the Mittag–Leffler function in two parameters *α* > 0, *β* ∈ C, defined by its series representation, convergent for all *z* ∈ C:

$$E\_{\alpha,\beta}(z) := \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma(\alpha n + \beta)}. \tag{5}$$

For more details on the special functions of the Mittag–Leffler type, we refer the interested readers to the treatise by Gorenflo et al. [14], where, in the forthcoming 2nd edition, the Wright functions are also treated in some detail.

In particular, two Wright functions of the second kind, originally introduced by Mainardi and denoted by *Fν*(*z*) and *Mν*(*z*) (0 < *ν* < 1), are called *auxiliary functions* by virtue of their role in the time-fractional diffusion equations considered in the next section. These functions are indeed special cases of the Wright function of the second kind *Wλ*,*μ*(*z*), obtained by setting *λ* = −*ν* and, respectively, *μ* = 0 or *μ* = 1 − *ν*. Hence, we have:

$$F\_{\nu}(z) := \mathcal{W}\_{-\nu,0}(-z), \quad 0 < \nu < 1, \tag{6}$$

and

$$M\_{\nu}(z) := \mathcal{W}\_{-\nu,1-\nu}(-z), \quad 0 < \nu < 1. \tag{7}$$

Those functions are interrelated through the following relation:

$$F\_{\nu}(z) = \nu z\, M\_{\nu}(z), \tag{8}$$

which reminds us of the second relation in (A9), seen for the standard diffusion equation.

The series representations of the auxiliary functions are derived from those of *Wλ*,*μ*(*z*). Then:

$$F\_{\nu}(z) := \sum\_{n=1}^{\infty} \frac{(-z)^n}{n!\,\Gamma(-\nu n)} = -\frac{1}{\pi} \sum\_{n=1}^{\infty} \frac{(-z)^{n}}{n!}\, \Gamma(\nu n + 1) \sin\left(\pi \nu n\right) \tag{9}$$

and

$$M\_{\nu}(z) := \sum\_{n=0}^{\infty} \frac{(-z)^{n}}{n! \Gamma[-\nu n + (1-\nu)]} = \frac{1}{\pi} \sum\_{n=1}^{\infty} \frac{(-z)^{n-1}}{(n-1)!} \Gamma(\nu n) \sin \left(\pi \nu n\right),\tag{10}$$

where, in both cases, the *reflection formula* for the Gamma function, Equation (11), has been used between the first and the second steps of Equations (9) and (10):

$$
\Gamma(\zeta)\Gamma(1-\zeta) = \pi/\sin\pi\zeta.\tag{11}
$$
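The equivalence of the two series forms in Equation (10) rests only on the reflection formula (11); a small numerical sketch (with our own helper functions, taking ν = 1/2 as a test value) confirms that both representations agree, and that they match the Gaussian closed form recalled later in Equation (14):

```python
import math

def m_wright_gamma(nu, z, terms=80):
    # First form in Eq. (10): sum over 1/Gamma(-nu*n + 1 - nu); poles contribute 0
    s = 0.0
    for n in range(terms):
        try:
            g = math.gamma(-nu * n + 1.0 - nu)
        except ValueError:  # 1/Gamma vanishes at the poles of Gamma
            continue
        s += (-z) ** n / (math.factorial(n) * g)
    return s

def m_wright_sine(nu, z, terms=80):
    # Second form in Eq. (10), obtained through the reflection formula (11)
    s = 0.0
    for n in range(1, terms):
        s += ((-z) ** (n - 1) / math.factorial(n - 1)) \
             * math.gamma(nu * n) * math.sin(math.pi * nu * n)
    return s / math.pi

vals = [(m_wright_gamma(0.5, z), m_wright_sine(0.5, z)) for z in (0.5, 1.0, 2.0)]
# For nu = 1/2 the result must also match the Gaussian of Eq. (14)
gauss_check = abs(m_wright_gamma(0.5, 1.0) - math.exp(-0.25) / math.sqrt(math.pi))
```

For large arguments the alternating series suffers from cancellation in floating point, so such direct summation is only a sketch for moderate |*z*|.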

In addition, the integral representations of the auxiliary functions are derived from those of *Wλ*,*μ*(*z*). Then:

$$F\_{\nu}(z) := \frac{1}{2\pi i} \int\_{Ha\_{-}} e^{\sigma - z\sigma^{\nu}}\, d\sigma, \quad z \in \mathbb{C}, \quad 0 < \nu < 1 \tag{12}$$

and

$$M\_{\nu}(z) := \frac{1}{2\pi i} \int\_{Ha\_{-}} e^{\sigma - z\sigma^{\nu}}\, \frac{d\sigma}{\sigma^{1-\nu}}, \quad z \in \mathbb{C}, \quad 0 < \nu < 1. \tag{13}$$

Explicit expressions of *Fν*(*z*) and *Mν*(*z*) in terms of known functions are available for some particular values of *ν*, as shown and recalled by Mainardi in the early 1990s in a series of papers [15–18], that is,

$$M\_{1/2}(z) = \frac{1}{\sqrt{\pi}} e^{-z^2/4},\tag{14}$$

$$M\_{1/3}(z) = 3^{2/3} \text{Ai}(z/3^{1/3}).\tag{15}$$

Liemert and Kienle [19] have added the following expression for *ν* = 2/3:

$$M\_{2/3}(z) = 3^{-2/3} \left[ 3^{1/3} z \operatorname{Ai} \left( z^2 / 3^{4/3} \right) - 3 \operatorname{Ai}' \left( z^2 / 3^{4/3} \right) \right] \operatorname{e}^{-2z^3/27},\tag{16}$$

where Ai and Ai′ denote the *Airy function* and its first derivative. Furthermore, they have suggested, on the positive real axis IR<sup>+</sup>, the following remarkable integral representation:

$$M\_{\nu}(x) = \frac{1}{\pi}\, \frac{x^{\nu/(1-\nu)}}{1-\nu} \int\_{0}^{\pi} C\_{\nu}(\phi)\, \exp\left(-C\_{\nu}(\phi)\, x^{1/(1-\nu)}\right) d\phi, \tag{17}$$

where

$$C\_{\nu}(\phi) = \frac{\sin((1-\nu)\phi)}{\sin\phi} \left(\frac{\sin\nu\phi}{\sin\phi}\right)^{\nu/(1-\nu)}, \tag{18}$$

corresponding to Equation (7) of the article by Saa and Venegeroles [20].
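The integral representation (17) and (18) is convenient for numerical evaluation since the integrand is non-oscillatory and decays rapidly near φ = π. A midpoint-rule sketch (our own code, checked against the Gaussian case of Equation (14) for ν = 1/2):

```python
import math

def m_wright_integral(nu, x, n=100000):
    """Midpoint rule for the representation (17)-(18); illustrative only, x > 0, 0 < nu < 1."""
    a = nu / (1.0 - nu)
    h = math.pi / n
    total = 0.0
    for k in range(n):
        phi = (k + 0.5) * h   # midpoints avoid the endpoint singularity of C_nu at phi = pi
        c = (math.sin((1.0 - nu) * phi) / math.sin(phi)) \
            * (math.sin(nu * phi) / math.sin(phi)) ** a
        total += c * math.exp(-c * x ** (1.0 / (1.0 - nu)))
    return (x ** a / (math.pi * (1.0 - nu))) * total * h

# For nu = 1/2 the representation must reproduce the Gaussian of Eq. (14)
approx = m_wright_integral(0.5, 1.0)
exact = math.exp(-0.25) / math.sqrt(math.pi)
```

Although *Cν*(φ) diverges as φ → π, the factor exp(−*Cν*(φ)*x*^(1/(1−ν))) forces the integrand to zero there for *x* > 0, so the plain midpoint rule behaves well.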

The Wright functions of both kinds, and in particular the Mainardi auxiliary functions considered in this paper, turn out to be particular cases of more general transcendental functions, such as the Fox *H* functions, the Fox–Wright functions and the multi-index Mittag–Leffler functions. The relations with the classical Mittag–Leffler functions with two parameters have already been pointed out; for more parameters, we refer the interested reader, e.g., to the papers by Kiryakova [21], Kilbas, Koroleva and Rogosin [22], and references therein.

We point out that, for more Laplace transform pairs involving the Wright and the Mittag–Leffler functions, the reader is referred to Ansari and Refahi Sheikhani [23] and to the tutorial survey by Mainardi [24].

#### **3. The Wright Functions of the Second Kind and the Time-Fractional Diffusion Wave Equation**

As we will see, the Wright functions of the second kind are relevant in the analysis of the Time-Fractional Diffusion-Wave Equation (TFDWE).

We find it convenient to show the plots of the *M*-Wright functions on a symmetric interval of IR in Figures 1 and 2, corresponding to the cases 0 ≤ *ν* ≤ 1/2 and 1/2 ≤ *ν* ≤ 1, respectively.

From these figures, we recognize the non-negativity of the *M*-Wright function on IR for 1/2 ≤ *ν* ≤ 1, consistent with the analysis of the distribution of zeros and the asymptotics of Wright functions carried out by Luchko, see [11,25], and by Luchko and Kiryakova [26].

**Figure 1.** Plots of the *M*-Wright function as a function of the *x* variable, for 0 ≤ *ν* ≤ 1/2.

**Figure 2.** Plots of the *M*-Wright function as a function of the *x* variable, for 1/2 ≤ *ν* ≤ 1.

We now introduce the TFDWE as a generalization of the standard diffusion equation and show how the two Mainardi auxiliary functions come into play. The TFDWE is obtained from the standard diffusion equation (or from the D'Alembert wave equation) by replacing the first-order (or the second-order) time derivative with a fractional derivative of order 0 < *β* ≤ 2 in the Caputo sense, obtaining the following fractional PDE:

$$\frac{\partial^{\beta}u}{\partial t^{\beta}} = D \frac{\partial^{2}u}{\partial x^{2}} \qquad 0 < \beta \le 2, \quad D > 0,\tag{19}$$

where *D* is a positive constant whose dimensions are *L*<sup>2</sup>*T*<sup>−*β*</sup> and *u* = *u*(*x*, *t*; *β*) is the field variable, which is assumed again to be a causal function of time. The Caputo fractional derivative is recalled in Appendix B, so that in explicit form the TFDWE (19) splits into the following integro-differential equations:

$$\frac{1}{\Gamma(1-\beta)} \int\_0^t (t-\tau)^{-\beta} \left(\frac{\partial u}{\partial \tau}\right) d\tau = D \frac{\partial^2 u}{\partial x^2}, \quad 0 < \beta \le 1;\tag{20}$$

$$\frac{1}{\Gamma(2-\beta)} \int\_0^t (t-\tau)^{1-\beta} \left(\frac{\partial^2 u}{\partial \tau^2}\right) d\tau = D \frac{\partial^2 u}{\partial x^2}, \quad 1 < \beta \le 2. \tag{21}$$

In view of our analysis, we find it convenient to put:

$$\nu = \frac{\beta}{2}, \quad 0 < \nu \le 1. \tag{22}$$

We can then formulate the basic problems for the Time Fractional Diffusion-Wave Equation using a correspondence with the two problems for the standard diffusion equation.

Denoting by *f*(*x*) and *g*(*t*) two given, sufficiently well-behaved functions, we define:

#### (a) Cauchy problem

$$\begin{cases} u(\mathbf{x},0^+;\nu) = f(\mathbf{x}), & -\infty < \mathbf{x} < +\infty; \\ u(\pm\infty, t; \nu) = 0, & t > 0 \end{cases} \tag{23}$$

#### (b) Signalling problem

$$\begin{cases} u(\mathbf{x},0^+;\nu) = 0, & 0 \le \mathbf{x} < +\infty; \\ u(0^+,t;\nu) = \mathbf{g}(t), \; u(+\infty,t;\nu) = 0, & t > 0 \end{cases} \tag{24}$$

If 1/2 < *ν* ≤ 1, corresponding to 1 < *β* ≤ 2, we must also consider the initial value of the first time derivative of the field variable, *ut*(*x*, 0+; *ν*), since, in this case, Equation (19) turns out to be akin to the wave equation, and consequently two linearly independent solutions are to be determined. However, to ensure the continuous dependence of the solutions of our basic problems on the parameter *ν* in the transition from *ν* = (1/2)<sup>−</sup> to *ν* = (1/2)<sup>+</sup>, we agree to assume *ut*(*x*, 0+; *ν*) = 0.

For the Cauchy and Signalling problems, following the approaches by Mainardi, see, e.g., [15] and related papers, we introduce now the Green functions G*c*(*x*, *t*; *ν*) and G*s*(*x*, *t*; *ν*), which for both problems can be determined by the *LT* technique, so extending the results known from the ordinary diffusion equation. We recall that the Green functions are also referred to as the fundamental solutions, corresponding, respectively, to *f*(*x*) = *δ*(*x*) and *g*(*t*) = *δ*(*t*), where *δ*(·) denotes the Dirac delta generalized function.

The expressions for the Laplace Transforms of the two Green's functions are:

$$\widetilde{\mathcal{G}}\_{c}(x, s; \nu) = \frac{1}{2\sqrt{D}\, s^{1-\nu}}\, \mathrm{e}^{-(|x|/\sqrt{D})\, s^{\nu}} \tag{25}$$

and

$$\widetilde{\mathcal{G}}\_{\mathbf{s}}(\mathbf{x}, \mathbf{s}; \nu) = \mathbf{e}^{-(\mathbf{x}/\sqrt{D})\mathbf{s}^{\nu}} \tag{26}$$

Now, we can easily recognize the following relation:

$$\frac{d}{ds}\widetilde{\mathcal{G}}\_{s} = -2\nu x\, \widetilde{\mathcal{G}}\_{c}, \quad x > 0, \tag{27}$$

which implies, for the original Green functions, the following *reciprocity relation* for *x* > 0, *t* > 0 and 0 < *ν* < 1:

$$2\nu x\, \mathcal{G}\_{c}(x, t; \nu) = t\, \mathcal{G}\_{s}(x, t; \nu) = F\_{\nu}(z) = \nu z\, M\_{\nu}(z), \quad z = \frac{x}{\sqrt{D}\, t^{\nu}}, \tag{28}$$

where *z* is the *similarity variable* and *Fν*(*z*) and *Mν*(*z*) are the Mainardi auxiliary functions introduced in the previous section. Indeed, Equation (28) is the generalization, due to the introduction of the time-fractional derivative, of Equation (A8) that we have seen for the standard diffusion equation.

Then, the two Green functions of the Cauchy and Signalling problems turn out to be expressed in terms of the two auxiliary functions as follows.

For the Cauchy problem, we have

$$\mathcal{G}\_{c}(x, t; \nu) = \frac{t^{-\nu}}{2\sqrt{D}}\, M\_{\nu}\left(\frac{|x|}{\sqrt{D}\, t^{\nu}}\right), \quad -\infty < x < +\infty, \quad t \ge 0, \tag{29}$$

that generalizes Equation (A5).

For the Signalling problem, we have:

$$\mathcal{G}\_{s}(x, t; \nu) = \frac{\nu x\, t^{-\nu-1}}{\sqrt{D}}\, M\_{\nu}\left(\frac{x}{\sqrt{D}\, t^{\nu}}\right), \quad x \ge 0, \quad t \ge 0, \tag{30}$$

that generalizes Equation (A7).
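For ν = 1/2 (β = 1), Equation (29) must collapse to the Gaussian Green function (A5) of standard diffusion. A small numerical sketch of this consistency check (our own helpers; D = 1 and the evaluation point are arbitrary test values):

```python
import math

def m_wright(nu, z, terms=80):
    # Series form of M_nu from Eq. (10); our own illustrative helper
    s = 0.0
    for n in range(1, terms):
        s += ((-z) ** (n - 1) / math.factorial(n - 1)) \
             * math.gamma(nu * n) * math.sin(math.pi * nu * n)
    return s / math.pi

def green_cauchy(x, t, nu, D=1.0):
    # Eq. (29): G_c(x, t; nu) = t^{-nu} / (2 sqrt(D)) * M_nu(|x| / (sqrt(D) t^nu))
    z = abs(x) / (math.sqrt(D) * t ** nu)
    return t ** (-nu) / (2.0 * math.sqrt(D)) * m_wright(nu, z)

# For nu = 1/2 this must equal the Gaussian (A5) with D = 1
g_frac = green_cauchy(1.0, 2.0, 0.5)
g_gauss = 1.0 / (2.0 * math.sqrt(math.pi * 2.0)) * math.exp(-1.0 / (4.0 * 2.0))
```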

#### *3.1. Complements to the Time-Fractional Diffusion-Wave Equations*

The use of the Wright functions of the second kind in time fractional diffusion-wave equations has appeared in several papers for a variety of different purposes, see, e.g., Bazhlekova [27], D'Ovidio [28], Gorenflo, Luchko and Mainardi [29], Mentrelli and Pagnini [30], Mosley and Ansari [31], Pagnini [32], Povstenko [33], and references therein.

The boundary-value problems dealt with previously can be considered with data functions *f*(*x*) and *g*(*t*) different from the Dirac generalized functions, in particular with box-type functions, as has been done recently by us, see [34].

An interesting generalization of the TFDWE is obtained by considering time-fractional derivatives of distributed order. In this respect, we cite, e.g., the papers by Kochubei [35], Li, Luchko and Yamamoto [36], Mainardi, Pagnini and Gorenflo [37], and Mainardi et al. [38].

The TFDWE can also be generalized to two and three space dimensions, where the Wright functions again play a fundamental role. However, we prefer to refer the interested reader to the literature, in particular to the papers by Luchko and collaborators [11,25,39–43], by Hanyga [44], and to the recent analysis by Kemppainen [45]. All of them originate in some way from the seminal paper by Schneider and Wyss [46]. In some of these papers, the authors have also considered fractional differentiation both in time and in space, so generalizing to more than one dimension the former analysis by Mainardi, Luchko, and Pagnini [47] on the space-time fractional diffusion-wave equations.

#### **4. The** *M***-Wright Functions in Probability Theory and the Stable Distributions**

We recognize that the Wright *M*-function with support in IR<sup>+</sup> can be interpreted as a probability density function (*pdf*) because it is non-negative and satisfies the normalization condition:

$$\int\_0^\infty M\_\nu(x) \, dx = 1\,\,. \tag{31}$$

We now provide more details on these densities in the framework of the theory of probability.

**Theorem 1.** *Let Mν*(*x*) *be the M-Wright function in* IR<sup>+</sup>*, with* 0 ≤ *ν* < 1 *and δ* > −1*. Then, the (finite)* absolute moments *of order δ are given by:*

$$\int\_0^\infty x^\delta\, M\_{\nu}(x)\, dx = \frac{\Gamma(\delta+1)}{\Gamma(\nu\delta+1)}. \tag{32}$$

**Proof.** The proof is based on the integral representation of the *M*-Wright function:

$$\begin{split} \int\_{0}^{\infty} x^{\delta} M\_{\nu}(x)\, dx &= \int\_{0}^{\infty} x^{\delta} \left[ \frac{1}{2\pi i} \int\_{Ha\_{-}} e^{\sigma - x\sigma^{\nu}}\, \frac{d\sigma}{\sigma^{1-\nu}} \right] dx \\ &= \frac{1}{2\pi i} \int\_{Ha\_{-}} e^{\sigma} \left[ \int\_{0}^{\infty} e^{-x\sigma^{\nu}}\, x^{\delta}\, dx \right] \frac{d\sigma}{\sigma^{1-\nu}} \\ &= \frac{\Gamma(\delta+1)}{2\pi i} \int\_{Ha\_{-}} \frac{e^{\sigma}}{\sigma^{\nu\delta+1}}\, d\sigma = \frac{\Gamma(\delta+1)}{\Gamma(\nu\delta+1)}. \end{split} \tag{33}$$

The exchange of the order of the two integrals and the following identity lead to the final result in Equation (33):

$$\int\_0^\infty \mathbf{e}^{-\mathbf{x}\sigma^\nu} \mathbf{x}^\delta d\mathbf{x} = \frac{\Gamma(\delta + 1)}{(\sigma^\nu)^{\delta + 1}}.\tag{34}$$

In particular, for *δ* = *n* ∈ IN, the above formula provides the moments of integer order. Indeed, recalling the Mittag–Leffler function introduced in Equation (5) with *α* = *ν* and *β* = 1:

$$E\_{\nu}(z) := \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma(\nu n + 1)}, \quad \nu > 0, \quad z \in \mathbb{C}, \tag{35}$$

the moments of integer order can also be computed from the Laplace transform pair

$$M\_{\nu}(x) \div E\_{\nu}(-s), \tag{36}$$

proved in the Appendix F of [8] as follows:

$$\int\_0^{+\infty} \mathbf{x}^n \, M\_{\nu}(\mathbf{x}) \, d\mathbf{x} = \lim\_{s \to 0} (-1)^n \, \frac{d^n}{ds^n} \, E\_{\nu}(-s) = \frac{\Gamma(n+1)}{\Gamma(\nu n + 1)}.\tag{37}$$
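The moment formula can be checked numerically. In the sketch below (our own helpers), we take ν = 1/2 and use the Gaussian closed form (14) for numerical stability rather than the alternating series, comparing the midpoint-rule integrals against Γ(δ + 1)/Γ(νδ + 1):

```python
import math

def m_half(x):
    # Closed form (14): M_{1/2}(x) = exp(-x^2/4) / sqrt(pi), used for stability
    return math.exp(-x * x / 4.0) / math.sqrt(math.pi)

def moment(delta, upper=20.0, n=200000):
    # Midpoint rule for int_0^infinity x^delta M_{1/2}(x) dx; tail beyond `upper` is negligible
    h = upper / n
    return sum(((k + 0.5) * h) ** delta * m_half((k + 0.5) * h) for k in range(n)) * h

m0 = moment(0.0)                            # normalization, Eq. (31)
m1 = moment(1.0)                            # first absolute moment
pred1 = math.gamma(2.0) / math.gamma(1.5)   # Gamma(delta+1)/Gamma(nu*delta+1), delta=1, nu=1/2
```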

#### *4.1. The Auxiliary Functions versus Extremal Stable Densities*

We find it worthwhile to recall the relations between the Mainardi auxiliary functions and the extremal Lévy stable densities as proven in the 1997 paper by Mainardi and Tomirotti [48]. For readers' convenience, we refer to Appendix C for an essential account of the general Lévy stable distributions in probability. Indeed, from a comparison between the series expansions of stable densities in (A41) and (A42) and of the auxiliary functions in Equations (9) and (10), we recognize that the auxiliary functions are related to the extremal stable densities as follows:

$$L\_{\alpha}^{-\alpha}(x) = \frac{1}{x}\, F\_{\alpha}(x^{-\alpha}) = \frac{\alpha}{x^{\alpha+1}}\, M\_{\alpha}(x^{-\alpha}), \quad 0 < \alpha < 1, \quad x \ge 0, \tag{38}$$

$$L\_{\alpha}^{\alpha-2}(x) = \frac{1}{x}\, F\_{1/\alpha}(x) = \frac{1}{\alpha}\, M\_{1/\alpha}(x), \quad 1 < \alpha \le 2, \quad -\infty < x < +\infty\,. \tag{39}$$

In the above equations, for *α* = 1, the skewness parameter turns out to be *θ* = −1, so we get the singular limit

$$L\_{1}^{-1}(x) = M\_{1}(x) = \delta(x-1)\,. \tag{40}$$
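As a numerical sketch of Equation (38) (with our own helper functions), we can compare the case α = 1/2 against the classical Lévy–Smirnov density, whose well-known expression is (1/(2√π)) *x*^(−3/2) exp(−1/(4*x*)):

```python
import math

def m_wright(nu, z, terms=80):
    # Series form of M_nu from Eq. (10); our own illustrative helper
    s = 0.0
    for n in range(1, terms):
        s += ((-z) ** (n - 1) / math.factorial(n - 1)) \
             * math.gamma(nu * n) * math.sin(math.pi * nu * n)
    return s / math.pi

def extremal_stable(alpha, x):
    # Eq. (38): L_alpha^{-alpha}(x) = (alpha / x^{alpha+1}) M_alpha(x^{-alpha}), 0 < alpha < 1
    return alpha / x ** (alpha + 1.0) * m_wright(alpha, x ** (-alpha))

# alpha = 1/2: extremal density vs. the Levy-Smirnov closed form
x0 = 2.0
lhs = extremal_stable(0.5, x0)
rhs = x0 ** (-1.5) * math.exp(-1.0 / (4.0 * x0)) / (2.0 * math.sqrt(math.pi))
```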

Hereafter, we show in Figures 3 and 4 the plots of the extremal stable densities according to their expressions in terms of the *M*-Wright functions, see Equations (38) and (39), for *α* = 1/2 and *α* = 3/2, respectively.

**Figure 4.** Plot of the bilateral extremal stable pdf for *α* = 3/2.

We recognize that the above plots are consistent with the corresponding ones shown by Mainardi et al. [47] for the stable pdf's derived as fundamental solutions of a suitable space-fractional diffusion equation.

#### *4.2. The Symmetric M-Wright Function*

We easily recognize that, extending the function *Mν*(*x*) in a symmetric way to all of IR (that is, putting *x* = |*x*|) and dividing by 2, we obtain a *symmetric pdf* with support in all of IR.

As the parameter *ν* changes between 0 and 1, the *pdf* goes from the Laplace *pdf* to two half discrete delta *pdf*s, passing through the Gaussian *pdf* for *ν* = 1/2.

To develop a visual intuition, also in view of the subsequent applications, we show in Figures 5 and 6 the plots of the symmetric *M*-Wright function on the real axis at *t* = 1 for some rational values of the parameter *ν* ∈ [0, 1].

*Mathematics* **2020**, *8*, 884

**Figure 5.** Plot of the symmetric *M*-Wright function *Mν*(|*x*|) for 0 ≤ *ν* ≤ 1/2. Note that the *M*-Wright function becomes a Gaussian density for *ν* = 1/2.

**Figure 6.** Plot of the symmetric *M*-Wright type function *Mν*(|*x*|) for 1/2 ≤ *ν* ≤ 1. Note that the *M*-Wright function becomes a sum of two delta functions centered at *x* = ±1 for *ν* = 1.

The readers are invited to watch the YouTube video by Consiglio entitled "Simulation of the *M*-Wright function", in which the author shows the evolution of this function as the parameter *ν* changes between 0 and 0.85 in a finite interval of IR centered at *x* = 0.

**Theorem 2.** *Let Mν*(|*x*|) *be the symmetric M-Wright function pdf. Then, its characteristic function is:*

$$\mathcal{F}\left[\frac{1}{2}M\_{\nu}(|x|)\right] = E\_{2\nu}(-\kappa^2)\,. \tag{41}$$

**Proof.** The proof is based on the series development of the cosine function and on Equation (33):

$$\begin{split} \mathcal{F}\left[\frac{1}{2}M\_{\nu}(|x|)\right] &:= \frac{1}{2}\int\_{-\infty}^{+\infty} e^{+i\kappa x}\, M\_{\nu}(|x|)\, dx \\ &= \int\_{0}^{\infty} \cos(\kappa x)\, M\_{\nu}(x)\, dx \\ &= \sum\_{n=0}^{\infty} (-1)^{n} \frac{\kappa^{2n}}{(2n)!} \int\_{0}^{\infty} x^{2n}\, M\_{\nu}(x)\, dx \\ &= \sum\_{n=0}^{\infty} (-1)^{n} \frac{\kappa^{2n}}{\Gamma(2\nu n + 1)} = E\_{2\nu}(-\kappa^{2}). \end{split} \tag{42}$$
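Theorem 2 can be checked numerically for ν = 1/2, where *E*1(−κ²) = exp(−κ²) and the Gaussian form (14) of *M*1/2 is available. A midpoint-rule sketch (our own code; the value of κ is an arbitrary test point):

```python
import math

def cf_half(kappa, upper=20.0, n=200000):
    # Midpoint rule for int_0^inf cos(kappa x) M_{1/2}(x) dx, using the Gaussian form (14)
    h = upper / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += math.cos(kappa * x) * math.exp(-x * x / 4.0)
    return s * h / math.sqrt(math.pi)

kappa = 1.3
cf = cf_half(kappa)                  # numerical left-hand side of (41) for nu = 1/2
pred = math.exp(-kappa * kappa)      # E_1(-kappa^2) = exp(-kappa^2)
```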

#### *4.3. The Wright* M*-Function in Two Variables*

In view of time-fractional diffusion processes related to time-fractional diffusion equations, it is worthwhile to introduce the function in two variables

$$\mathcal{M}\_{\nu}(x,t) := t^{-\nu}\, M\_{\nu}(x t^{-\nu}), \quad 0 < \nu < 1, \quad x, t \in \mathbb{R}^{+}, \tag{43}$$

which defines a spatial probability density in *x* evolving in time *t* with self-similarity exponent *H* = *ν*. Of course, for *x* ∈ IR, we have to consider the symmetric version of the *M*-Wright function. Hereafter, we provide a list of the main properties of this function, which can be derived from the Laplace and Fourier transforms for the corresponding Wright *M*-function in one variable.

From Equations (39) and (43), we derive the Laplace transform of <sup>M</sup>*ν*(*x*, *<sup>t</sup>*) with respect to *<sup>t</sup>* <sup>∈</sup> IR+,

$$\mathcal{L}\left\{\mathcal{M}\_{\nu}(x,t); t \to s\right\} = s^{\nu-1}\, e^{-x s^{\nu}}. \tag{44}$$

From Equation (18), we derive the Laplace transform of <sup>M</sup>*ν*(*x*, *<sup>t</sup>*) with respect to *<sup>x</sup>* <sup>∈</sup> IR+,

$$\mathcal{L}\left\{\mathcal{M}\_{\nu}(x,t); x \to s\right\} = E\_{\nu}(-s t^{\nu}). \tag{45}$$

From Equation (55), we derive the Fourier transform of M*ν*(|*x*|, *t*) with respect to *x* ∈ IR,

$$\mathcal{F}\left\{\mathcal{M}\_{\nu}(|x|,t); x \to \kappa\right\} = 2E\_{2\nu}\left(-\kappa^{2} t^{2\nu}\right). \tag{46}$$

Using the Mellin transforms, Mainardi et al. [49] derived the following interesting integral formula of composition,

$$\mathcal{M}\_{\nu}(x,t) = \int\_{0}^{\infty} \mathcal{M}\_{\lambda}(x,r)\, \mathcal{M}\_{\mu}(r,t)\, dr, \quad \nu = \lambda\mu\,. \tag{47}$$
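The composition formula can be sketched numerically (our own helpers) by taking λ = μ = 1/2, hence ν = 1/4: the right-hand side then involves only the Gaussian form of *M*1/2, while the left-hand side is evaluated from the series of *M*1/4:

```python
import math

def m_wright_series(nu, z, terms=80):
    # Series form of M_nu from Eq. (10); our own illustrative helper
    s = 0.0
    for n in range(1, terms):
        s += ((-z) ** (n - 1) / math.factorial(n - 1)) \
             * math.gamma(nu * n) * math.sin(math.pi * nu * n)
    return s / math.pi

def M2_half(x, t):
    # Two-variable function (43) for nu = 1/2, in the Gaussian closed form for stability
    return t ** (-0.5) * math.exp(-x * x / (4.0 * t)) / math.sqrt(math.pi)

# Composition (47) with lambda = mu = 1/2, hence nu = 1/4, evaluated at x = t = 1
x, t = 1.0, 1.0
n, upper = 100000, 30.0
h = upper / n
rhs = sum(M2_half(x, (k + 0.5) * h) * M2_half((k + 0.5) * h, t) for k in range(n)) * h
lhs = t ** (-0.25) * m_wright_series(0.25, x * t ** (-0.25))
```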

Special cases of the Wright M-function are simply derived for *ν* = 1/2 and *ν* = 1/3 from the corresponding ones in the complex domain, see Equations (28) and (29). We devote particular attention to the case *ν* = 1/2 for which we get the Gaussian density in IR,

$$\mathcal{M}\_{1/2}(|x|,t) = \frac{1}{2\sqrt{\pi}\, t^{1/2}}\, \mathrm{e}^{-x^2/(4t)}. \tag{48}$$

For the limiting case *ν* = 1, we obtain

$$\mathcal{M}\_1(|x|,t) = \frac{1}{2}\left[\delta(x-t) + \delta(x+t)\right]. \tag{49}$$

We conclude this section pointing out that the *M*-Wright functions have been applied by several authors in the theory of probability and stochastic processes, see, e.g., Beghin and Orsingher [50], Cahoy [51,52], Garra, Orsingher and Polito [53], Le Chen [54], Consiglio, Luchko and Mainardi [55], Gorenflo and Mainardi [56], Mainardi, Mura and Pagnini [57], Pagnini [58], Scalas and Viles [59], and references therein. Furthermore, these functions have been found in the first passage problem for Lévy flights dealt with by the group of Prof. Metzler, see, e.g., [60,61].

#### **5. The Four Sisters**

In this section, we show how some Wright functions of the second kind can provide an interesting generalization of the three sisters discussed in Appendix A. The starting point is a (not well-known) paper published in 1970 by Stankovic [62], where (in our notation) the following Laplace transform pair is proved rigorously:

$$t^{\mu-1}\, \mathcal{W}\_{-\nu,\mu}(-x t^{-\nu}) \,\div\, s^{-\mu}\, \mathrm{e}^{-x s^{\nu}}, \quad 0 < \nu < 1, \quad \mu \ge 0, \tag{50}$$

where *x* and *t* are positive. We note that the Stankovic formula can be derived in a formal way by developing the exponential function in positive powers of *s* and inverting term by term, as described in Appendix F of the book by Mainardi [8].

We recognize that the Laplace transforms of the Three Sisters functions φ̃(*x*,*s*), ψ̃(*x*,*s*) and χ̃(*x*,*s*) are particular cases of Equation (50) for *ν* = 1/2, that is, of

$$t^{\mu-1}\, \mathcal{W}\_{-1/2,\mu}(-x t^{-1/2}) \,\div\, s^{-\mu}\, \mathrm{e}^{-x\sqrt{s}}\,, \tag{51}$$

according to the following scheme:

$$\widetilde{\phi}(x,s)\ \text{with}\ \mu = 1; \quad \widetilde{\psi}(x,s)\ \text{with}\ \mu = 0; \quad \widetilde{\chi}(x,s)\ \text{with}\ \mu = 1/2.$$

If *ν* is no longer restricted to *ν* = 1/2, we define the *Four Sisters functions* as follows:

$$\begin{aligned} \mu &= 0: \quad \mathrm{e}^{-x s^{\nu}} \div t^{-1}\, \mathcal{W}\_{-\nu,0}(-x t^{-\nu}),\\ \mu &= 1-\nu: \quad \frac{\mathrm{e}^{-x s^{\nu}}}{s^{1-\nu}} \div t^{-\nu}\, \mathcal{W}\_{-\nu,1-\nu}(-x t^{-\nu}),\\ \mu &= \nu: \quad \frac{\mathrm{e}^{-x s^{\nu}}}{s^{\nu}} \div t^{\nu-1}\, \mathcal{W}\_{-\nu,\nu}(-x t^{-\nu}),\\ \mu &= 1: \quad \frac{\mathrm{e}^{-x s^{\nu}}}{s} \div \mathcal{W}\_{-\nu,1}(-x t^{-\nu}). \end{aligned} \tag{52}$$

Hereafter, in Figures 7–9, we show some plots of these functions, both in the *t* and in the *x* domain for some values of *ν* (*ν* = 1/4, 1/2, 3/4).

Note that for *ν* = 1/2 we only find three distinct functions, since the cases *μ* = *ν* and *μ* = 1 − *ν* coincide; these are the Three Sisters functions of Appendix A.
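For ν = 1/2 and μ = 1, the Stankovic pair (51) identifies the fourth sister with the step response φ(*x*, *t*) = erfc(*x*/(2√*t*)) of Equation (A11). A sketch with our own series helper checks this identity numerically:

```python
import math

def wright_second_kind(nu, mu, z, terms=100):
    # Series (1) with lambda = -nu; terms hitting a pole of Gamma contribute 0
    s = 0.0
    for n in range(terms):
        try:
            g = math.gamma(-nu * n + mu)
        except ValueError:  # 1/Gamma vanishes at the poles
            continue
        s += z ** n / (math.factorial(n) * g)
    return s

# mu = 1, nu = 1/2 in (52): e^{-x sqrt(s)}/s corresponds to W_{-1/2,1}(-x t^{-1/2}),
# i.e., the step response phi(x, t) = erfc(x / (2 sqrt(t)))
x, t = 1.0, 1.0
sister = wright_second_kind(0.5, 1.0, -x / math.sqrt(t))
phi = math.erfc(x / (2.0 * math.sqrt(t)))
```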

**Figure 7.** Plots of the four sisters functions in linear scale with *ν* = 1/4; top: versus *t* (*x* = 1), bottom: versus *x* (*t* = 1).

**Figure 8.** Plots of the three sisters functions in linear scale with *ν* = 1/2; top: versus *t* (*x* = 1), bottom: versus *x* (*t* = 1).

**Figure 9.** Plots of the four sisters functions in linear scale with *ν* = 3/4; top: versus *t* (*x* = 1), bottom: versus *x* (*t* = 1).

#### **6. Conclusions**

In our survey on the Wright functions, we have distinguished two kinds, pointing out the particular class of the second kind. Indeed, these functions have been shown to play key roles in several non-Gaussian processes, including sub-diffusion, the transition to wave propagation, and the Lévy stable distributions. Furthermore, we have devoted our attention to four functions of this class that we have agreed to call *the Four Sisters functions*. All these items justify the relevance of the Wright functions of the second kind in Mathematical Physics.

**Author Contributions:** F.M.: Conceptualization of the work; A.C.: Plots. Formal analysis, review and editing have been performed by F.M. and A.C. in equal parts. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The research activity of both the authors has been carried out (without funding) in the framework of the activities of the National Group of Mathematical Physics (GNFM, INdAM). All graphs in the figures of the present paper have been drawn using MATLAB. They have been realized mainly referring to the power series definition of the related functions by adopting a sufficiently large number of terms. The authors would like to thank the anonymous reviewers for their helpful and constructive comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. The Standard Diffusion Equation and the Three Sisters**

In this Appendix, let us recall the Diffusion Equation in the one-dimensional case

$$\frac{\partial u}{\partial t} = D\, \frac{\partial^2 u}{\partial x^2}, \tag{A1}$$

where *u* is the field variable, the constant *D* > 0 is the diffusion coefficient, whose dimensions are *L*<sup>2</sup>*T*<sup>−1</sup>, and *x*, *t* denote the space and time coordinates, respectively.

Two basic problems for Equation (A1) are the *Cauchy* and *Signalling* ones, introduced hereafter. In these problems, some initial values and boundary conditions are set; specifying the values attained by the field variable and/or by some of its derivatives on the boundary of the space-time domain is an essential step to guarantee the existence, the uniqueness, and the determination of a solution of physical interest to the problem, not only for the Diffusion Equation.

Two *data functions f*(*x*) and *g*(*t*) are then introduced to write these conditions formally; some regularity is required of *f*(*x*) and *g*(*t*); in particular, *f*(*x*) must admit the Fourier transform, or the Fourier series expansion if the support is finite, while *g*(*t*) must admit the Laplace transform. We also require, without loss of generality, that the field variable *u*(*x*, *t*) vanishes for *t* < 0 for every *x* in the spatial domain. Given these premises, we can specify the two aforementioned problems.

In the *Cauchy problem*, the medium is supposed to be unlimited (−∞ < *x* < +∞) and to be subjected at *t* = 0 to a known disturbance provided by the data function *f*(*x*). Formally:

$$\begin{cases} \lim\_{t \to 0^+} u(x,t) = f(x), & -\infty < x < +\infty; \\ \lim\_{x \to \pm\infty} u(x,t) = 0, & t > 0. \end{cases} \tag{A2}$$

This is a pure *initial-value problem* (IVP) as the values are specified along the boundary *t* = 0.

In the *Signalling problem*, the medium is supposed to be semi-infinite (0 ≤ *x* < +∞) and initially undisturbed. At *x* = 0 (the accessible end) and for *t* > 0, the medium is then subjected to a known disturbance provided by the causal function *g*(*t*). Formally:

$$\begin{cases} \lim\_{t \to 0^+} u(\mathbf{x}, t) = 0, & 0 \le \mathbf{x} < +\infty; \\ \lim\_{\mathbf{x} \to 0^+} u(\mathbf{x}, t) = \mathcal{g}(t), \lim\_{\mathbf{x} \to +\infty} u(\mathbf{x}, t) = 0 & t > 0. \end{cases} \tag{A3}$$

This problem is referred to as an *initial boundary value problem* (IBVP) in the quadrant {*x*, *t*} > 0. For each problem, the solutions turn out to be expressed by a proper convolution between the data functions and the *Green functions* G that are the fundamental solutions of the problems.

For the Cauchy problem, we have:

$$u(\mathbf{x},t) = \int\_{-\infty}^{+\infty} \mathcal{G}\_{\mathbb{C}}(\xi,t) f(\mathbf{x} - \xi) d\xi = \mathcal{G}\_{\mathbb{C}}(\mathbf{x},t) \* f(\mathbf{x}) \tag{A4}$$

with

$$\mathcal{G}\_{\mathbb{C}}(\mathbf{x},t) = \frac{1}{2\sqrt{\pi Dt}} \mathbf{e}^{-\mathbf{x}^2/(4Dt)}. \tag{A5}$$

For the Signalling problem, we have:

$$u(x,t) = \int\_0^t \mathcal{G}\_{S}(x,\tau)\, g(t-\tau)\, d\tau = \mathcal{G}\_{S}(x,t) \* g(t), \quad x \ge 0, \quad t \ge 0, \tag{A6}$$

with

$$\mathcal{G}\_S(x,t) = \frac{x}{2\sqrt{\pi D t^3}}\, \mathrm{e}^{-x^2/(4Dt)}, \quad x \ge 0, \quad t \ge 0\,. \tag{A7}$$

Following the lecture notes in Mathematical Physics by Mainardi [63], we note that the following relevant property is valid for {*x*, *t*} > 0:

$$\mathbf{x}\mathcal{G}\_{\mathbb{C}}(\mathbf{x},t) = t\mathcal{G}\_{\mathbb{S}}(\mathbf{x},t) = F(z) \tag{A8}$$

where

$$z = \frac{\mathbf{x}}{\sqrt{Dt}}, \quad F(z) = \frac{z}{2}M(z), \quad M(z) = \frac{1}{\sqrt{\pi}}\mathbf{e}^{-z^2/4}.\tag{A9}$$

According to Mainardi's notation, Equation (A8) is known as the *reciprocity relation*, *F*(*z*) and *M*(*z*) are called *auxiliary functions*, and *z* is the *similarity variable*.

A particular case of the Signalling problem is obtained when *g*(*t*) = *H*(*t*) (the Heaviside unit step function) and the solution *u*(*x*, *t*) turns out to be expressed in terms of the *complementary error function*:

$$u(x,t) = \mathcal{H}\_S(x,t) = \int\_0^t \mathcal{G}\_S(x,\tau)\, d\tau = \mathrm{erfc}\left(\frac{x}{2\sqrt{Dt}}\right), \quad x \ge 0, \quad t \ge 0. \tag{A10}$$

As is well known, the three above fundamental solutions can be obtained via the Fourier and Laplace transform methods. Introducing the parameter *a* = |*x*|/√*D*, the Laplace transforms of these functions turn out to be simply related in the Laplace domain Re(*s*) > 0, as follows:

$$\phi(a,t) := \text{erfc}\left(\frac{a}{2\sqrt{t}}\right) \div \frac{e^{-as^{1/2}}}{s} := \widetilde{\phi}(a,s),\tag{A11}$$

$$\psi(a,t) := \frac{a}{2\sqrt{\pi}}\, t^{-3/2}\, e^{-a^2/(4t)} \div e^{-as^{1/2}} := \widetilde{\psi}(a,s),\tag{A12}$$

$$\chi(a,t) := \frac{1}{\sqrt{\pi}}\, t^{-1/2}\, e^{-a^2/(4t)} \div \frac{e^{-as^{1/2}}}{s^{1/2}} := \widetilde{\chi}(a,s) \tag{A13}$$

where the sign ÷ is used for the juxtaposition of a function with its Laplace transform. We easily note that Equation (A11) is related to the Step-Response problem, Equation (A12) to the Signalling problem, and Equation (A13) to the Cauchy problem. Following the lecture notes by Mainardi [63], we agree to call the above functions *the three sisters functions* for their role in the standard diffusion equation. They will be discussed in detail hereafter.
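The three Laplace pairs (A11)–(A13) can also be checked by brute-force numerical integration; the sketch below uses a crude trapezoid rule whose grid parameters ($T$, $n$) and the values $a = 1$, $s = 2$ are ad hoc choices, not taken from [63]:

```python
import math

def laplace(f, s, T=25.0, n=150000):
    # crude composite trapezoid for \int_0^T e^{-st} f(t) dt;
    # valid here because each "sister" integrand vanishes at t = 0
    h = T / n
    total = 0.5 * math.exp(-s*T) * f(T)
    for k in range(1, n):
        t = k * h
        total += math.exp(-s*t) * f(t)
    return total * h

a, s = 1.0, 2.0
phi = lambda t: math.erfc(a / (2*math.sqrt(t)))                          # (A11)
psi = lambda t: a / (2*math.sqrt(math.pi)) * t**-1.5 * math.exp(-a*a/(4*t))  # (A12)
chi = lambda t: t**-0.5 / math.sqrt(math.pi) * math.exp(-a*a/(4*t))          # (A13)

lp_phi = laplace(phi, s)
lp_psi = laplace(psi, s)
lp_chi = laplace(chi, s)
r = math.exp(-a * math.sqrt(s))
print(lp_phi, r/s)              # e^{-a sqrt(s)} / s
print(lp_psi, r)                # e^{-a sqrt(s)}
print(lp_chi, r/math.sqrt(s))   # e^{-a sqrt(s)} / sqrt(s)
```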

Everything that we have said above will be found again as a special case of the *Time Fractional Diffusion Equation*, where the time derivative of the first order is replaced by a suitable time derivative of non-integer order.

It is easy to demonstrate that each of the *three sisters* can be expressed in terms of either of the other two (Table A1).


**Table A1.** Relations among the *three sisters* in the Laplace domain.

| $\widetilde{\phi}(a,s)$ | $\widetilde{\psi}(a,s)$ | $\widetilde{\chi}(a,s)$ |
|---|---|---|
| $\widetilde{\psi}/s = \widetilde{\chi}/s^{1/2}$ | $s\,\widetilde{\phi} = s^{1/2}\,\widetilde{\chi}$ | $s^{1/2}\,\widetilde{\phi} = \widetilde{\psi}/s^{1/2}$ |

The *three sisters* in the *t* domain may all be directly calculated by making use of the *Bromwich formula*, taking into account the contribution of the branch cut of $\sqrt{s}$ and of the pole of $1/s$. We obtain:

$$\begin{aligned} \widetilde{\phi}(a,s) &\div \phi(a,t) = 1 - \frac{1}{\pi} \int_0^\infty e^{-rt} \sin(a\sqrt{r})\, \frac{dr}{r}, \\ \widetilde{\psi}(a,s) &\div \psi(a,t) = \frac{1}{\pi} \int_0^\infty e^{-rt} \sin(a\sqrt{r})\, dr, \\ \widetilde{\chi}(a,s) &\div \chi(a,t) = \frac{1}{\pi} \int_0^\infty e^{-rt} \cos(a\sqrt{r})\, \frac{dr}{\sqrt{r}}. \end{aligned}$$

Then, through the substitution $\rho = \sqrt{r}$, we arrive at the Gaussian integral and, consequently, we recover the previous explicit expressions of the *three sisters*, that is:

$$\phi(a,t) = \text{erfc}\left(\frac{a}{2\sqrt{t}}\right) = 1 - \frac{2}{\sqrt{\pi}} \int_0^{a/(2\sqrt{t})} e^{-u^2}\, du\,,$$

$$\psi(a,t) = \frac{a}{2\sqrt{\pi}}\, t^{-3/2}\, e^{-a^2/(4t)}\,,$$

$$\chi(a,t) = \frac{1}{\sqrt{\pi}}\, t^{-1/2}\, e^{-a^2/(4t)}\,,$$

reminding us of the definition of the complementary error function.

Alternatively, we can compute the *three sisters* in the *t* domain by using the relations among the *three sisters* in the Laplace domain listed in Table A1. However, in this case, one of the *three sisters* in the *t* domain must already be known. Assuming that $\phi(a,t)$ is known from Equation (A11), we get:


$$s\, \widetilde{\phi}(a, s) \div \frac{\partial}{\partial t}\, \phi(a, t)$$

Since $\phi(a, 0^+) = 0$, we can obtain (A12), namely

$$\psi(a,t) = \frac{\partial}{\partial t}\,\phi(a,t) = \frac{a}{2\sqrt{\pi}}\, t^{-3/2}\, e^{-a^2/(4t)}\,;$$


similarly, we get:

$$\chi(a,t) = -\frac{\partial}{\partial a}\,\phi(a,t) = \frac{1}{\sqrt{\pi}}\,t^{-1/2}\,e^{-a^2/(4t)}.$$
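Both differentiation rules, $\psi = \partial\phi/\partial t$ and $\chi = -\partial\phi/\partial a$, can be confirmed by central finite differences; a minimal Python sketch (the point $(a, t) = (1, 0.8)$ and the step $h$ are arbitrary choices):

```python
import math

def phi(a, t):
    return math.erfc(a / (2*math.sqrt(t)))                        # (A11)

def psi(a, t):
    return a/(2*math.sqrt(math.pi)) * t**-1.5 * math.exp(-a*a/(4*t))  # (A12)

def chi(a, t):
    return t**-0.5 / math.sqrt(math.pi) * math.exp(-a*a/(4*t))        # (A13)

a, t, h = 1.0, 0.8, 1e-6
dphi_dt = (phi(a, t+h) - phi(a, t-h)) / (2*h)   # central difference in t
dphi_da = (phi(a+h, t) - phi(a-h, t)) / (2*h)   # central difference in a
print(dphi_dt, psi(a, t))    # psi = d(phi)/dt
print(-dphi_da, chi(a, t))   # chi = -d(phi)/da
```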

For more details, we refer the reader again to [63].

#### **Appendix B. Essentials of Fractional Calculus**

Fractional calculus is the field of mathematical analysis which deals with the investigation and applications of integrals and derivatives of arbitrary order. The term *fractional* is a misnomer, but it is retained for historical reasons, following the prevailing use.

This appendix is based on the 1997 surveys by Gorenflo and Mainardi [64] and by Mainardi [65]. For more details on the classical treatment of fractional calculus, the reader is referred to the nice and rigorous book by Diethelm [66] published in 2010 by Springer in the series Lecture Notes in Mathematics.

According to the Riemann–Liouville approach to fractional calculus, the notion of fractional integral of order *α* (*α* > 0) is a natural consequence of the well known formula (usually attributed to Cauchy) that reduces the calculation of the *n*−fold primitive of a function *f*(*t*) to a single integral of convolution type. In our notation, the Cauchy formula reads

$$J^n f(t) := f_n(t) = \frac{1}{(n-1)!} \int_0^t (t-\tau)^{n-1}\, f(\tau)\, d\tau \quad t > 0 \quad n \in \mathbb{N} \tag{A14}$$

where $\mathbb{N}$ is the set of positive integers. From this definition, we note that $f_n(t)$ vanishes at $t = 0$, together with its derivatives of order $1, 2, \dots, n-1$. By convention, we require that $f(t)$, and hence $f_n(t)$, is a *causal* function, i.e., identically vanishing for $t < 0$.

In a natural way, one is led to extend the above formula from positive integer values of the index to any positive real values by using the Gamma function. Indeed, noting that (*n* − 1)! = Γ(*n*) and introducing the arbitrary *positive* real number *α*, one defines the *Fractional Integral of order α* > 0 :

$$J^\alpha f(t) := \frac{1}{\Gamma(\alpha)} \int_0^t (t - \tau)^{\alpha-1}\, f(\tau)\, d\tau \quad t > 0 \quad \alpha \in \mathbb{R}^+ \tag{A15}$$

where $\mathbb{R}^+$ is the set of positive real numbers. For complementation, we define $J^0 := I$ (Identity operator), i.e., we mean $J^0 f(t) = f(t)$. Furthermore, by $J^\alpha f(0^+)$, we mean the limit (if it exists) of $J^\alpha f(t)$ for $t \to 0^+$; this limit may be infinite.

We note the *semigroup property* $J^\alpha J^\beta = J^{\alpha+\beta}$, $\alpha, \beta \ge 0$, which implies the *commutative property* $J^\beta J^\alpha = J^\alpha J^\beta$, and the effect of the operators $J^\alpha$ on the power functions:

$$J^\alpha t^{\gamma} = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1+\alpha)}\, t^{\gamma+\alpha} \quad \alpha \ge 0 \quad \gamma > -1 \quad t > 0. \tag{A16}$$

These properties are of course a natural generalization of those known when the order is a positive integer.
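The power-function rule (A16) also serves as a convenient test case for a direct quadrature of (A15). In the sketch below, the substitution $(t - \tau) = w^{1/\alpha}$ (an implementation device, not part of the original text) removes the weak endpoint singularity of the kernel; the values $\alpha = 1/2$, $\gamma = 3/2$, $t = 2$ are arbitrary:

```python
import math

def frac_integral(alpha, f, t, n=100000):
    # Riemann-Liouville fractional integral (A15) by composite trapezoid;
    # substituting (t - tau) = w**(1/alpha) makes the integrand smooth
    W = t**alpha
    h = W / n
    g = lambda w: f(max(t - w**(1.0/alpha), 0.0))  # max() guards round-off
    total = 0.5 * (g(0.0) + g(W)) + sum(g(k*h) for k in range(1, n))
    return total * h / (alpha * math.gamma(alpha))

alpha, gamma_, t = 0.5, 1.5, 2.0
num = frac_integral(alpha, lambda tau: tau**gamma_, t)
exact = math.gamma(gamma_ + 1) / math.gamma(gamma_ + 1 + alpha) * t**(gamma_ + alpha)  # (A16)
print(num, exact)
```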

Introducing the Laplace transform by the notation $\mathcal{L}\{f(t)\} := \int_0^\infty e^{-st} f(t)\, dt = \widetilde{f}(s)$, $s \in \mathbb{C}$, and using the sign ÷ to denote a Laplace transform pair, i.e., $f(t) \div \widetilde{f}(s)$, we point out the following rule for the Laplace transform of the fractional integral,

$$J^\alpha f(t) \div \frac{\widetilde{f}(s)}{s^\alpha} \quad \alpha \ge 0 \tag{A17}$$

which is the generalization of the case with an *n*-fold repeated integral.

After the notion of fractional integral, that of fractional derivative of order *α* (*α* > 0) becomes a natural requirement, and one is tempted to substitute *α* with −*α* in the above formulas. However, this generalization requires some care in order to guarantee the convergence of the integrals and to preserve the well-known properties of the ordinary derivative of integer order.

Denoting by $D^n$, with $n \in \mathbb{N}$, the operator of the derivative of order $n$, we first note that $D^n J^n = I$ but $J^n D^n \neq I$, $n \in \mathbb{N}$, i.e., $D^n$ is a left-inverse (and not a right-inverse) of the corresponding integral operator $J^n$. In fact, we easily recognize from Equation (A14) that

$$J^n D^n f(t) = f(t) - \sum_{k=0}^{n-1} f^{(k)}(0^{+})\,\frac{t^{k}}{k!} \quad t > 0\,. \tag{A18}$$

As a consequence, we expect that $D^\alpha$ is defined as the left-inverse of $J^\alpha$. For this purpose, introducing the positive integer $m$ such that $m-1 < \alpha \le m$, one defines the *Fractional Derivative of order* $\alpha > 0$ as $D^\alpha f(t) := D^m J^{m-\alpha} f(t)$, i.e.,

$$D^\alpha f(t) := \begin{cases} \frac{d^m}{dt^m} \left[ \frac{1}{\Gamma(m-\alpha)} \int_0^t \frac{f(\tau)}{(t-\tau)^{\alpha+1-m}}\, d\tau \right], & m-1 < \alpha < m, \\ \frac{d^m}{dt^m} f(t), & \alpha = m\,. \end{cases} \tag{A19}$$

Defining for complementation $D^0 = J^0 = I$, we easily recognize that $D^\alpha J^\alpha = I$, $\alpha \ge 0$, and

$$D^\alpha t^\gamma = \frac{\Gamma(\gamma + 1)}{\Gamma(\gamma + 1 - \alpha)}\, t^{\gamma - \alpha} \quad \alpha \ge 0 \quad \gamma > -1 \quad t > 0\,. \tag{A20}$$

Of course, these properties are a natural generalization of those known when the order is a positive integer.

Note the remarkable fact that the fractional derivative $D^\alpha f$ is not zero for the constant function $f(t) \equiv 1$ if $\alpha \notin \mathbb{N}$. In fact, (A20) with $\gamma = 0$ teaches us that

$$D^\alpha 1 = \frac{t^{-\alpha}}{\Gamma(1-\alpha)} \quad \alpha \ge 0 \quad t > 0\,. \tag{A21}$$

This, of course, is $\equiv 0$ for $\alpha \in \mathbb{N}$, due to the poles of the Gamma function at the points $0, -1, -2, \dots$. We now observe that an alternative definition of fractional derivative was introduced by Caputo in 1967 [67] in a geophysical journal and in 1969 [68] in a book in Italian. The Caputo definition was then adopted in 1971 by Caputo and Mainardi [69,70] in the framework of the theory of *Linear Viscoelasticity*. Nowadays, it is usually referred to as the *Caputo fractional derivative* and reads $D_*^\alpha f(t) := J^{m-\alpha} D^m f(t)$, with $m-1 < \alpha \le m$, $m \in \mathbb{N}$, i.e.,

$$D_*^\alpha f(t) := \begin{cases} \frac{1}{\Gamma(m-\alpha)} \int_0^t \frac{f^{(m)}(\tau)}{(t-\tau)^{\alpha+1-m}}\, d\tau, & m-1 < \alpha < m, \\ \frac{d^m}{dt^m} f(t), & \alpha = m\,. \end{cases} \tag{A22}$$

We recall that there are a number of discussions on the priority of this definition, which surely was formerly considered by Liouville, as stated by Butzer and Westphal [71]. However, Liouville did not recognize the relevance of this representation, derived by a trivial integration by parts, whereas Caputo, even if unaware of the Riemann–Liouville representation, promoted his definition in several papers for all the applications where the Laplace transform plays a fundamental role. We agree to denote Equation (A22) as the *Caputo fractional derivative* to distinguish it from the standard Riemann–Liouville fractional derivative (A19).

The Caputo definition (A22) is of course more restrictive than the Riemann–Liouville definition (A19), in that it requires the absolute integrability of the derivative of order $m$. Whenever we use the operator $D_*^\alpha$, we (tacitly) assume that this condition is met. We easily recognize that, in general,

$$D^\alpha f(t) := D^m J^{m-\alpha} f(t) \neq J^{m-\alpha} D^m f(t) := D_*^\alpha f(t) \tag{A23}$$

unless the function $f(t)$, along with its first $m-1$ derivatives, vanishes at $t = 0^+$. In fact, assuming that the passage of the $m$-th derivative under the integral is legitimate, one recognizes that, for $m-1 < \alpha < m$ and $t > 0$,

$$D^\alpha f(t) = D_*^\alpha f(t) + \sum_{k=0}^{m-1} \frac{t^{k-\alpha}}{\Gamma(k-\alpha+1)}\, f^{(k)}(0^{+}) \tag{A24}$$

and therefore, recalling the fractional derivative of the power functions (A20),

$$D^\alpha \left(f(t) - \sum_{k=0}^{m-1} \frac{t^{k}}{k!}\, f^{(k)}(0^{+})\right) = D_*^\alpha f(t)\,. \tag{A25}$$

The alternative definition (A22) for the fractional derivative thus incorporates the initial values of the function and of its integer derivatives of lower order. The subtraction of the Taylor polynomial of degree $m-1$ at $t = 0^+$ from $f(t)$ amounts to a sort of regularization of the Riemann–Liouville fractional derivative. In particular, for $0 < \alpha < 1$, we get

$$D^\alpha \left(f(t) - f(0^+)\right) = D_*^\alpha f(t)\,.$$
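The relation (A24) between the two derivatives can be probed numerically. The sketch below (with the ad hoc choices $f(t) = e^t$, $\alpha = 0.4$, $t = 1.5$, so $m = 1$) computes the Riemann–Liouville derivative as $D^1 J^{1-\alpha} f$ and the Caputo derivative as $J^{1-\alpha} D^1 f$, and checks that their difference equals $t^{-\alpha} f(0^+)/\Gamma(1-\alpha)$:

```python
import math

def J(alpha, f, t, n=50000):
    # Riemann-Liouville fractional integral (A15) by trapezoid quadrature,
    # with (t - tau) = w**(1/alpha) to tame the endpoint singularity
    W = t**alpha
    h = W / n
    g = lambda w: f(max(t - w**(1.0/alpha), 0.0))
    total = 0.5 * (g(0.0) + g(W)) + sum(g(k*h) for k in range(1, n))
    return total * h / (alpha * math.gamma(alpha))

alpha, t, h = 0.4, 1.5, 1e-3   # 0 < alpha < 1, hence m = 1
f = math.exp                   # f(t) = e^t, with f(0+) = 1

caputo = J(1 - alpha, f, t)    # (A22): J^{1-alpha} D^1 f, and D^1 e^t = e^t
rl = (J(1 - alpha, f, t + h) - J(1 - alpha, f, t - h)) / (2*h)  # (A19): D^1 J^{1-alpha} f
print(rl - caputo, t**-alpha / math.gamma(1 - alpha))           # both sides of (A24)
```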

According to the Caputo definition, the relevant property that the fractional derivative of a constant is still zero can be easily recognized, i.e.,

$$D_*^\alpha 1 \equiv 0 \quad \alpha > 0\,. \tag{A26}$$
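The contrast between (A21) and (A26) is immediate for $0 < \alpha < 1$: the Caputo derivative gives $J^{1-\alpha}\, 0 \equiv 0$, while the Riemann–Liouville derivative $D^1 J^{1-\alpha} 1$ is nonzero. A minimal numerical sketch (the values $\alpha = 1/2$, $t = 2$ are arbitrary):

```python
import math

alpha, t, h = 0.5, 2.0, 1e-6

def J_one(t):
    # J^{1-alpha} 1 = t^{1-alpha} / Gamma(2-alpha), from (A16) with gamma = 0
    return t**(1 - alpha) / math.gamma(2 - alpha)

# Riemann-Liouville: D^alpha 1 = D^1 J^{1-alpha} 1, by a central difference
rl = (J_one(t + h) - J_one(t - h)) / (2*h)
print(rl, t**-alpha / math.gamma(1 - alpha))  # agrees with (A21): nonzero

# Caputo: D_*^alpha 1 = J^{1-alpha} (d/dt 1) = J^{1-alpha} 0 = 0, as in (A26)
```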

We now explore the most relevant differences between the two fractional derivatives (A19) and (A22). We observe, again by looking at (A20), that $D^\alpha t^{\alpha-1} \equiv 0$, $\alpha > 0$, $t > 0$. From the above, we thus recognize the following statements about functions which, for $t > 0$, admit the same fractional derivative of order $\alpha$, with $m-1 < \alpha \le m$, $m \in \mathbb{N}$:

$$D^\alpha f(t) = D^\alpha g(t) \iff f(t) = g(t) + \sum_{j=1}^{m} c_j\, t^{\alpha-j} \tag{A27}$$

$$D_*^\alpha f(t) = D_*^\alpha g(t) \iff f(t) = g(t) + \sum_{j=1}^m c_j\, t^{m-j}\,.\tag{A28}$$

In these formulas, the coefficients *cj* are arbitrary constants.

For the two definitions, we also point out a difference with respect to the *formal* limit as $\alpha \to (m-1)^+$. From (A19) and (A22), we obtain, respectively:

$$\alpha \to (m-1)^{+} \implies D^\alpha f(t) \to D^{m} J\, f(t) = D^{m-1} f(t)\,;\tag{A29}$$

$$\alpha \to (m-1)^{+} \implies D_*^\alpha f(t) \to J\, D^{m} f(t) = D^{m-1} f(t) - f^{(m-1)}(0^{+})\,. \tag{A30}$$

We now consider the *Laplace transform* of the two fractional derivatives. For the standard fractional derivative $D^\alpha$, the Laplace transform, assumed to exist, requires the knowledge of the (bounded) initial values of the fractional integral $J^{m-\alpha} f$ and of its integer derivatives of order $k = 1, 2, \dots, m-1$. The corresponding rule reads, in our notation,

$$D^\alpha f(t) \div s^\alpha \widetilde{f}(s) - \sum_{k=0}^{m-1} D^{k} J^{m-\alpha} f(0^{+})\, s^{m-1-k} \quad m-1 < \alpha \le m\,. \tag{A31}$$

The *Caputo fractional derivative* appears more suitable for treatment by the Laplace transform technique, in that it requires the knowledge of the (bounded) initial values of the function and of its integer derivatives of order $k = 1, 2, \dots, m-1$, in analogy with the case $\alpha = m$. In fact, by using Equation (A17) and noting that

$$J^\alpha D_*^\alpha f(t) = J^\alpha J^{m-\alpha} D^{m} f(t) = J^{m} D^{m} f(t) = f(t) - \sum_{k=0}^{m-1} f^{(k)}(0^{+})\,\frac{t^{k}}{k!}\,,\tag{A32}$$

we easily prove the following rule for the Laplace transform,

$$D_*^\alpha f(t) \div s^\alpha \widetilde{f}(s) - \sum_{k=0}^{m-1} f^{(k)}(0^{+})\, s^{\alpha-1-k} \quad m-1 < \alpha \le m\,. \tag{A33}$$

Indeed, the result (A33), first stated by Caputo by using the Fubini–Tonelli theorem, appears as the most "natural" generalization of the corresponding result well known for *α* = *m* .

In particular, Gorenflo and Mainardi have pointed out the major utility of the Caputo fractional derivative in the treatment of differential equations of fractional order for *physical applications*. In fact, in physical problems, the initial conditions are usually expressed in terms of a given number of bounded values assumed by the field variable and its derivatives of integer order, no matter if the governing evolution equation may be a generic integro-differential equation and therefore, in particular, a fractional differential equation.

#### **Appendix C. The Lévy Stable Distributions**

We now introduce the so-called *Lévy Stable Distributions*. The term *stable* was assigned by the French mathematician Paul Lévy, who, in the 1920s, started systematic research in order to generalize the celebrated *Central Limit Theorem* to probability distributions with infinite variance. For stable distributions, we can assume the following DEFINITION: *If two independent real random variables with the same shape or type of distribution are combined linearly and the distribution of the resulting random variable has the same shape, the common distribution (or its type, more precisely) is said to be stable*.

The restrictive condition of stability enabled Lévy (and then other authors) to derive the *canonic form* for the characteristic function of the densities of these distributions. Here, we follow the parameterization by Feller [72,73], revisited by Gorenflo & Mainardi in [74]; see also [47]. Denoting by $L_\alpha^\theta(x)$ a generic stable density in $\mathbb{R}$, where $\alpha$ is the *index of stability* and $\theta$ the asymmetry parameter, improperly called *skewness*, its characteristic function reads:

$$L_\alpha^{\theta}(x) \div \widehat{L}_\alpha^{\theta}(\kappa) = \exp\left[-\psi_\alpha^{\theta}(\kappa)\right], \quad \psi_\alpha^{\theta}(\kappa) = |\kappa|^{\alpha}\, e^{i(\operatorname{sign} \kappa)\,\theta\pi/2}, \tag{A34}$$

$$0 < \alpha \le 2\,, \quad |\theta| \le \min\{\alpha,\, 2 - \alpha\}\,.$$

We note that the allowed region for the parameters *α* and *θ* turns out to be a diamond in the plane {*α*, *θ*} with vertices in the points (0, 0) (1, 1) (1, −1) (2, 0), which we call the *Feller–Takayasu diamond*, see Figure A1. For values of *θ* on the border of the diamond (that is *θ* = ±*α* if 0 < *α* < 1, and *θ* = ±(2 − *α*) if 1 < *α* < 2), we obtain the so-called *extremal stable densities*.

We also note the *symmetry relation* $L_\alpha^{\theta}(-x) = L_\alpha^{-\theta}(x)$, so that a stable density with $\theta = 0$ is symmetric.

**Figure A1.** The Feller–Takayasu diamond for Lévy stable densities.

Stable distributions have noteworthy properties of which the interested reader can be informed from the relevant existing literature. Hereafter, we recall two peculiar PROPERTIES: *self-similarity* and *infinite divisibility*. These properties derive from the canonic form (A34) through the scaling property of the Fourier transform.

*Self-similarity* means

$$L_\alpha^{\theta}(x, t) \div \exp\left[-t\,\psi_\alpha^{\theta}(\kappa)\right] \iff L_\alpha^{\theta}(x, t) = t^{-1/\alpha}\, L_\alpha^{\theta}\!\left(x/t^{1/\alpha}\right) \tag{A35}$$

where $t$ is a positive parameter. If $t$ is time, then $L_\alpha^\theta(x, t)$ is a spatial density evolving in time with self-similarity.

*Infinite divisibility* means that, for every positive integer $n$, the characteristic function can be expressed as the $n$-th power of some characteristic function, so that any stable distribution can be expressed as the $n$-fold convolution of a stable distribution of the same type. Indeed, taking $\theta = 0$ in (A34), without loss of generality, we have

$$e^{-t|\kappa|^{\alpha}} = \left[e^{-(t/n)|\kappa|^{\alpha}}\right]^{n} \iff L_\alpha^{0}(x, t) = \left[L_\alpha^{0}(x, t/n)\right]^{*n} \tag{A36}$$

where

$$\left[L_\alpha^0(x, t/n)\right]^{*n} := L_\alpha^0(x, t/n) * L_\alpha^0(x, t/n) * \cdots * L_\alpha^0(x, t/n)$$

is the multiple Fourier convolution in $\mathbb{R}$ with $n$ identical terms.

Only for a few particular cases, the inversion of the Fourier transform in (A34) can be carried out using standard tables, and well-known probability distributions are obtained.

For $\alpha = 2$ (so $\theta = 0$), we recover the *Gaussian pdf*, which turns out to be the only stable density with finite variance and, more generally, with finite moments of any order $\delta \ge 0$. In fact,

$$L_2^0(x) = \frac{1}{2\sqrt{\pi}}\, e^{-x^2/4}. \tag{A37}$$

All the other stable densities have finite absolute moments of order *δ* ∈ [−1, *α*) as we will later show.

For *α* = 1 and |*θ*| < 1, we get

$$L_1^{\theta}(x) = \frac{1}{\pi}\, \frac{\cos(\theta \pi/2)}{[x + \sin(\theta \pi/2)]^2 + [\cos(\theta \pi/2)]^2} \tag{A38}$$

which for *θ* = 0 includes the *Cauchy-Lorentz pdf*:

$$L_1^0(x) = \frac{1}{\pi}\, \frac{1}{1 + x^2}\,. \tag{A39}$$
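Both (A37) and (A39) can be recovered by a numerical Fourier inversion of the canonic form (A34) in the symmetric case $\theta = 0$; the grid parameters ($K$, $n$) and the point $x = 0.7$ in the sketch below are ad hoc choices:

```python
import math

def stable_symmetric(x, alpha, K=50.0, n=120000):
    # inverse Fourier transform of exp(-|kappa|^alpha), the theta = 0
    # case of (A34); by symmetry the pdf is a cosine integral over kappa > 0
    h = K / n
    total = 0.5 * (1.0 + math.cos(K*x) * math.exp(-K**alpha))
    for k in range(1, n):
        kap = k * h
        total += math.cos(kap*x) * math.exp(-kap**alpha)
    return total * h / math.pi

x = 0.7
gauss = stable_symmetric(x, 2.0)
cauchy = stable_symmetric(x, 1.0)
print(gauss, math.exp(-x*x/4) / (2*math.sqrt(math.pi)))  # Gaussian pdf (A37)
print(cauchy, 1.0 / (math.pi*(1.0 + x*x)))               # Cauchy-Lorentz pdf (A39)
```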

In the limiting cases *θ* = ±1 for *α* = 1, we obtain the *singular Dirac pdf's*

$$L_1^{\pm 1}(x) = \delta(x \pm 1). \tag{A40}$$

In general, we must recall the power series expansions provided in [73]. We restrict our attention to $x > 0$, since the evaluations for $x < 0$ can be obtained using the symmetry relation. The convergent expansions of $L_\alpha^\theta(x)$ ($x > 0$) turn out to be:

for $0 < \alpha < 1$, $|\theta| \le \alpha$:

$$L_\alpha^{\theta}(x) = \frac{1}{\pi x} \sum_{n=1}^{\infty} (-x^{-\alpha})^n\, \frac{\Gamma(1+n\alpha)}{n!}\, \sin\left[\frac{n\pi}{2}(\theta - \alpha)\right];\tag{A41}$$

for $1 < \alpha \le 2$, $|\theta| \le 2 - \alpha$:

$$L_\alpha^{\theta}(x) = \frac{1}{\pi x} \sum_{n=1}^{\infty} (-x)^{n}\, \frac{\Gamma(1 + n/\alpha)}{n!}\, \sin\left[\frac{n\pi}{2\alpha}(\theta - \alpha)\right]. \tag{A42}$$

From the series in (A41) and the symmetry relation, we note that *the extremal stable densities for* $0 < \alpha < 1$ *are unilateral*: they vanish for $x > 0$ if $\theta = \alpha$, and vanish for $x < 0$ if $\theta = -\alpha$. In particular, the unilateral extremal densities $L_\alpha^{-\alpha}(x)$ with $0 < \alpha < 1$ have support in $\mathbb{R}^+$ and Laplace transform $\exp(-s^\alpha)$. For $\alpha = 1/2$, we obtain the so-called *Lévy–Smirnov pdf*:

$$L_{1/2}^{-1/2}(x) = \frac{x^{-3/2}}{2\sqrt{\pi}}\, e^{-1/(4x)} \quad x \ge 0\,. \tag{A43}$$
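The closed form (A43) provides a convenient check of the series (A41); a sketch evaluating the series at $\alpha = 1/2$, $\theta = -1/2$ (the truncation depth $N$ and the point $x = 1$ are ad hoc choices):

```python
import math

def levy_series(x, alpha, theta, N=50):
    # convergent series representation (A41), valid for 0 < alpha < 1, x > 0
    s = 0.0
    for n in range(1, N + 1):
        s += (-x**-alpha)**n * math.gamma(1 + n*alpha) / math.factorial(n) \
             * math.sin(n * math.pi / 2 * (theta - alpha))
    return s / (math.pi * x)

x = 1.0
smirnov = x**-1.5 / (2*math.sqrt(math.pi)) * math.exp(-1/(4*x))  # (A43)
print(levy_series(x, 0.5, -0.5), smirnov)
```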

As a consequence of the convergence of the series in (A41) and (A42) and of the symmetry relation, we recognize that the stable *pdf*'s with $1 < \alpha \le 2$ are entire functions, whereas those with $0 < \alpha < 1$ have the form:

$$L_\alpha^{\theta}(x) = \begin{cases} (1/x)\,\Phi_1(x^{-\alpha}) & \text{for } x > 0\,,\\ (1/|x|)\,\Phi_2(|x|^{-\alpha}) & \text{for } x < 0\,, \end{cases} \tag{A44}$$

where $\Phi_1(z)$ and $\Phi_2(z)$ are distinct entire functions. The case $\alpha = 1$ ($|\theta| < 1$) must be considered in the limit $\alpha \to 1$ of (A41) and (A42), because the corresponding series reduce to power series akin to geometric series in $1/x$ and $x$, respectively, with a finite radius of convergence. The corresponding stable *pdf*'s are no longer represented by entire functions, as can be noted directly from their explicit expressions (A38) and (A39).

We omit the asymptotic representations of the stable densities, referring the interested reader to Mainardi et al. (2001) [47]. However, based on those asymptotic representations, we can state the following: for $0 < \alpha < 2$, the stable *pdf*'s exhibit *fat tails*, so that their absolute moment of order $\delta$ is finite only if $-1 < \delta < \alpha$. More precisely, one can show that, for non-Gaussian, non-extremal stable densities, the asymptotic decay of the tails is

$$L_\alpha^\theta(x) = O\left(|x|^{-(\alpha+1)}\right) \quad x \to \pm \infty. \tag{A45}$$

For the extremal densities, this is valid only for one tail, the other being of exponential order. For $1 < \alpha < 2$, the extremal *pdf*'s are two-sided and exhibit an exponential left tail (as $x \to -\infty$) if $\theta = +(2-\alpha)$, or an exponential right tail (as $x \to +\infty$) if $\theta = -(2-\alpha)$. Consequently, the Gaussian *pdf* is the unique stable density with finite variance. Furthermore, when $0 < \alpha \le 1$, the first absolute moment is infinite, so we should use the median instead of the non-existent expected value in order to characterize the corresponding *pdf*.

Let us also recall a relevant identity between stable densities with index *α* and 1/*α* (a sort of reciprocity relation) pointed out in [73], that is, assuming *x* > 0,

$$\frac{1}{x^{\alpha+1}}\, L_{1/\alpha}^{\theta}\!\left(x^{-\alpha}\right) = L_\alpha^{\theta^*}(x)\,, \quad 1/2 \le \alpha \le 1\,, \quad \theta^* = \alpha(\theta + 1) - 1\,. \tag{A46}$$

The condition 1/2 ≤ *α* ≤ 1 implies 1 ≤ 1/*α* ≤ 2. A check shows that *θ*<sup>∗</sup> falls within the prescribed range |*θ*∗| ≤ *α* if |*θ*| ≤ 2 − 1/*α*.

We leave as an exercise for the interested reader the verification of this reciprocity relation in the limiting cases *α* = 1/2 and *α* = 1.
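For the case $\alpha = 1/2$, a numerical sketch of such a verification: taking $\theta = 0$ (so that $L_{1/\alpha}$ is the Gaussian $L_2^0$ of (A37)) gives $\theta^* = -1/2$, and (A46) reproduces the Lévy–Smirnov density (A43):

```python
import math

def L2_0(x):
    # Gaussian stable density (A37)
    return math.exp(-x*x/4) / (2*math.sqrt(math.pi))

def L_smirnov(x):
    # Levy-Smirnov density L_{1/2}^{-1/2}, Eq. (A43)
    return x**-1.5 / (2*math.sqrt(math.pi)) * math.exp(-1/(4*x))

alpha = 0.5   # so 1/alpha = 2 and theta* = alpha*(0 + 1) - 1 = -1/2
for x in (0.3, 1.0, 2.5):
    lhs = x**-(alpha + 1) * L2_0(x**-alpha)   # left-hand side of (A46), theta = 0
    print(lhs, L_smirnov(x))                  # matches L_{1/2}^{-1/2}(x)
```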

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **The Four-Parameters Wright Function of the Second Kind and Its Applications in FC**

#### **Yuri Luchko**

Department of Mathematics, Physics, and Chemistry, Beuth Technical University of Applied Sciences Berlin, Luxemburger Str. 10, 13353 Berlin, Germany; luchko@beuth-hochschule.de

Received: 20 May 2020; Accepted: 10 June 2020; Published: 12 June 2020

**Abstract:** In this survey paper, we present both some basic properties of the four-parameters Wright function and its applications in Fractional Calculus. For applications in Fractional Calculus, the four-parameters Wright function of the second kind is especially important. In the paper, three case studies illustrating a wide spectrum of its applications are presented. The first case study deals with the scale-invariant solutions to a one-dimensional time-fractional diffusion-wave equation that can be represented in terms of the Wright function of the second kind and the four-parameters Wright function of the second kind. In the second case study, we consider a subordination formula for the solutions to a multi-dimensional space-time-fractional diffusion equation with different orders of the fractional derivatives. The kernel of the subordination integral is a special case of the four-parameters Wright function of the second kind. Finally, in the third case study, we shortly present an application of an operational calculus for a composed Erdélyi-Kober fractional operator for solving some initial-value problems for the fractional differential equations with the left- and right-hand sided Erdélyi-Kober fractional derivatives. In particular, we present an example with an explicit solution in terms of the four-parameters Wright function of the second kind.

**Keywords:** four-parameters Wright function of the second kind; one-dimensional time-fractional diffusion-wave equation; scale-invariant solutions; multi-dimensional space-time-fractional diffusion equation; subordination formula; left- and right-hand sided Erdélyi-Kober fractional derivatives

**MSC:** 26A33; 33E20; 30C15; 30D15; 45J05; 45K05; 44A20

#### **1. Introduction**

In calculus, differential equations, and mathematical physics both elementary and most of the special functions can be expressed in terms of the so-called generalized hypergeometric function *pFq* that is defined as the following series (in the case it converges):

$$\,\_pF\_q\left(a\_1, \ldots, a\_p; b\_1, \ldots, b\_q; \, z\right) := \sum\_{k=0}^\infty \frac{\prod\_{n=1}^p (a\_n)\_k}{\prod\_{n=1}^q (b\_n)\_k} \frac{z^k}{k!} \tag{1}$$

with the Pochhammer symbol (*z*)*k*, *k* ∈ N given by the formula

$$(z)_k = \frac{\Gamma(z+k)}{\Gamma(z)} = \prod_{n=0}^{k-1} (z+n).$$

In particular, all elementary functions can be represented in terms of the famous Gauss hypergeometric function ${}_2F_1$. Other particular cases and properties of the generalized hypergeometric function can be found in [1].

If *p* ≤ *q*, the series at the right-hand side of the formula (1) is absolutely convergent for all values of *z* ∈ C. For *p* = *q* + 1, the series converges for |*z*| < 1 and for |*z*| = 1 under some additional conditions. If *p* > *q* + 1, the series is divergent.
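A truncated evaluation of the series (1) makes these statements concrete; the sketch below (truncation depth is an ad hoc choice) checks the classical identity ${}_2F_1(1,1;2;z) = -\ln(1-z)/z$, valid for $|z| < 1$:

```python
import math

def poch(z, k):
    # Pochhammer symbol (z)_k = z (z+1) ... (z+k-1), with (z)_0 = 1
    out = 1.0
    for n in range(k):
        out *= z + n
    return out

def pFq(a, b, z, N=80):
    # truncated generalized hypergeometric series (1)
    return sum(math.prod(poch(ai, k) for ai in a)
               / math.prod(poch(bi, k) for bi in b)
               * z**k / math.factorial(k)
               for k in range(N))

z = 0.5
approx = pFq([1.0, 1.0], [2.0], z)
print(approx, -math.log(1 - z) / z)  # Gauss 2F1(1, 1; 2; z)
```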

To overcome this restriction and to somehow define the function *pFq* in the case *p* > *q* + 1, in [2] Meijer introduced a very general special function presently known in the literature as the *G*-function. For definition, properties, and particular cases of the *G*-function we refer the readers to [1].

However, it turned out that the special functions of Fractional Calculus (FC) belong in general neither to particular cases of the generalized hypergeometric function *pFq* nor to particular cases of the Meijer *G*-function. They are particular cases of the more general generalized Wright or Fox-Wright functions or the Fox *H*-function ([1,3–7]).

Probably the most used and important special functions of FC are the Mittag–Leffler function with its generalizations and the Wright function with its generalizations. For the theory of the Mittag–Leffler type functions and their applications, we refer the readers to the book [3] and the recent survey [8] (see also the numerous references therein). As for the Wright function and its generalizations, parts of their theory and some applications were presented in [5,6,9–29].

In this paper, the focus is on the four-parameters Wright function and its applications in FC. Depending on the signs of the parameters, we distinguish between the four-parameters Wright function of the first kind and of the second kind. The four-parameters Wright function of the first kind was first considered by Fox in [30] and by Wright in [28] (more precisely, this function was a particular case of the generalized Fox-Wright function that satisfies some conditions). In [12], the four-parameters Wright function of the first kind was employed as a kernel of an integral transform. It is the first application of this function known to the author. Another useful application of the four-parameters Wright function of the first kind was presented in [31], where the authors developed an operational calculus for an integral operator with the Gauss hypergeometric function as the kernel. This operational calculus was then used for derivation of the exact solutions of some integral equations of Volterra-type with the Gauss hypergeometric function in the kernel in terms of the four-parameters Wright function of the first kind.

As to the four-parameters Wright function of the second kind, it was first introduced in Luchko and Gorenflo [18]. Luchko and Gorenflo also provided some important properties of this function including its integral representation via the Mittag–Leffler function and its asymptotic behavior. Moreover, they applied the four-parameters Wright function of the second kind for derivation of the explicit analytical scale-invariant solutions to a one-dimensional space-time fractional diffusion equation. In this paper, some important results from [18] and the subsequent publications [5,15,32,33] will be revisited.

The rest of the paper is organized as follows: In the 2nd Section, we introduce the Wright function, the four-parameters Wright function, and the generalized Wright or Fox-Wright function and provide some of their important properties with the special focus on the four-parameters Wright function of the second kind. In the 3rd Section, three examples of applications of the four-parameters Wright function of the second kind in FC are presented. The first example deals with analysis of the scale-invariant solutions to a one-dimensional time-fractional diffusion-wave equation ([14,15]). It turns out that they can be represented in terms of the Wright function of the second kind and the four-parameters Wright function of the second kind. The second example is devoted to a subordination formula for the solutions to a multi-dimensional space-time-fractional diffusion equation with different orders of the fractional derivatives ([33]). The kernel of the subordination integral is a special case of the four-parameters Wright function of the second kind that is non-negative and can be interpreted as a probability density function. In the third example, we present an application of the operational method suggested in [32] for derivation of solution to an initial-value problem for a fractional differential equation with the left- and right-hand sided Erdélyi-Kober fractional derivatives in terms of the four-parameters Wright function of the second kind.

#### **2. The Four-Parameters Wright Function**

The generalized hypergeometric function *<sup>p</sup>*Ψ*<sup>q</sup>*, presently known as the generalized Wright or Fox-Wright function, was introduced and investigated by Fox in [30] and by Wright in [28]. It is defined by the convergent series

$${}\_{p}\Psi\_{q}\left[\begin{matrix}(a\_{1},A\_{1}),\ldots,(a\_{p},A\_{p})\\(b\_{1},B\_{1}),\ldots,(b\_{q},B\_{q})\end{matrix};z\right]:=\sum\_{k=0}^{\infty}\frac{\prod\_{i=1}^{p}\Gamma(a\_{i}+A\_{i}k)}{\prod\_{i=1}^{q}\Gamma(b\_{i}+B\_{i}k)}\frac{z^{k}}{k!}, \quad z\in\mathbb{C}\tag{2}$$

with *ai* ∈ R, *Ai* > 0, *i* = 1, ... , *p*, *bi* ∈ R, *Bi* > 0, *i* = 1, ... , *q*. In the case *Ai* = 1, *i* = 1, ... , *p*, *Bi* = 1, *i* = 1, ... , *q*, the generalized Wright function coincides with the generalized hypergeometric function (1) up to a constant factor. Moreover, in the case of positive rational parameters *Ai* ∈ Q, *i* = 1, ... , *p*, *Bi* ∈ Q, *i* = 1, ... , *q*, the generalized Wright function can be represented as a finite sum of generalized hypergeometric functions with power-function weights. For example, in the case *p* = 0, *q* = 1, and *B*<sup>1</sup> = *n*/*m* ∈ Q, *n*, *m* > 0, we have the following representation ([14]):

$${}\_{0}\Psi\_{1}\left[\begin{matrix}-\\\left(\beta,\frac{n}{m}\right)\end{matrix};z\right]=\sum\_{p=0}^{m-1}\frac{z^{p}}{p!\,\Gamma\left(\beta+\frac{n}{m}p\right)}\,{}\_{0}F\_{n+m-1}\left(-;\Delta\left(n,\frac{\beta}{n}+\frac{p}{m}\right),\Delta^{\*}\left(m,\frac{p+1}{m}\right);\frac{z^{m}}{m^{m}n^{n}}\right),\tag{3}$$

where Δ(*k*, *a*) and Δ∗(*k*, *a*) are defined by

$$
\Delta(k, a) = \{a, a + \frac{1}{k}, \dots, a + \frac{k-1}{k}\}, \ \Delta^\*(k, a) = \Delta(k, a) \setminus \{1\}.
$$

In the case of the formula (3), the set Δ∗(*k*, *a*) is correctly defined since 1 is an element of any set Δ(*m*, (*p* + 1)/*m*), 0 ≤ *p* ≤ *m* − 1. The method employed in [14] for the derivation of the formula (3) can also be applied to obtain similar, albeit even more complicated, representations of the function *<sup>p</sup>*Ψ*<sup>q</sup>* with positive rational parameters *Ai* ∈ Q, *i* = 1, ... , *p*, *Bi* ∈ Q, *i* = 1, ... , *q* in terms of the generalized hypergeometric function (1).

It is worth mentioning that both in [28,30], the parameters *Ai* and *Bi* were supposed to be positive real numbers. However, in [29], Wright considered a particular case of the function *<sup>p</sup>*Ψ*<sup>q</sup>* with *p* = 0 and *q* = 1 and the coefficient *B*<sup>1</sup> being any real number greater than -1. Presently this function is called the Wright function. Following Wright, it is denoted by *φ*(*ρ*, *β*; *z*):

$$\phi(\rho,\beta;z) := \,\_0\Psi\_1\left[ \begin{matrix} - \\ (\beta,\rho) \end{matrix}; z \right] = \sum\_{k=0}^{\infty} \frac{z^k}{k!\Gamma(\beta+\rho k)}, \; z \in \mathbb{C}, \rho > -1, \; \beta \in \mathbb{C}. \tag{4}$$

For *ρ* > −1, the series at the right-hand side of the formula (4) is convergent for all *z* ∈ C. It is also convergent for *ρ* = −1 and |*z*| < 1 as well as for *ρ* = −1 and |*z*| = 1 under the condition ℜ(*β*) > 1. However, the Wright function is an entire function only in the case *ρ* > −1 and thus this condition is usually included in its definition.

In [3,19], the function (4) with the positive parameter *ρ* was called the Wright function of the first kind, whereas in the case of the negative parameter *ρ* (−1 < *ρ* < 0) it was called the Wright function of the second kind. In the case *ρ* = 0, the Wright function is reduced to the exponential function:

$$\phi(0,\beta;z) = \sum\_{k=0}^{\infty} \frac{z^k}{k!\,\Gamma(\beta)} = \frac{e^z}{\Gamma(\beta)}.\tag{5}$$
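The reduction (5) also provides a quick numerical sanity check for the truncated series (4). The sketch below is our own illustration (not from the paper): it sums the truncated series, setting reciprocal Gamma values to zero at the poles of the Gamma-function.

```python
import math

def rgamma(x):
    # 1/Gamma(x); equal to 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def wright_phi(rho, beta, z, terms=80):
    # Truncated series (4): sum_k z^k / (k! Gamma(beta + rho k)), rho > -1
    return sum(z**k / math.factorial(k) * rgamma(beta + rho * k)
               for k in range(terms))

# Reduction (5): phi(0, beta; z) = exp(z)/Gamma(beta); here beta = 2, Gamma(2) = 1
print(wright_phi(0.0, 2.0, 1.3), math.exp(1.3))
```

The two printed values agree to machine precision, since for *ρ* = 0 the series collapses to the exponential series.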

In analogy to the Wright function (4), the generalized Wright function (2) can also be considered in the case where some or even all of the parameters *Ai* and *Bi* are negative numbers. The well-known asymptotic behavior of the Euler Gamma-function allows determination of the radius of convergence of the series at the right-hand side of (2): it is absolutely convergent for all *z* ∈ C under the condition

Δ > −1, where Δ is determined by the parameters of the generalized Wright function as follows (see, e.g., [3,16,34]):

$$\Delta = \sum\_{i=1}^{q} B\_i - \sum\_{i=1}^{p} A\_i, \quad \delta = \prod\_{i=1}^{p} |A\_i|^{-A\_i} \prod\_{i=1}^{q} |B\_i|^{B\_i}, \quad \mu = \sum\_{i=1}^{q} b\_i - \sum\_{i=1}^{p} a\_i + \frac{p-q}{2}. \tag{6}$$

In the case Δ > −1, the function (2) is an entire function. However, in the case Δ = −1, the series at the right-hand side of (2) is still absolutely convergent for |*z*| < *δ* and for |*z*| = *δ* under the condition ℜ(*μ*) > 1/2 (see [34] for details).

In this paper, we mainly deal with another important particular case of the generalized Wright function (2), specifically with the so-called four-parameters Wright function:

$$\mathcal{W}\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) := \,\_1\Psi\_2\left[ \begin{matrix} (1,1) \\ (\beta\_1,\rho\_1),\ (\beta\_2,\rho\_2) \end{matrix}; z \right]. \tag{7}$$

According to the definition of the generalized Wright function, the series representation of the four-parameters Wright function is as follows:

$$\mathcal{W}\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta\_1 + \rho\_1 k)\Gamma(\beta\_2 + \rho\_2 k)}, \ \rho\_1, \rho\_2 \in \mathbb{R}, \ \beta\_1, \ \beta\_2 \in \mathbb{C}, z \in \mathbb{C}.\tag{8}$$

For *ρ*<sup>1</sup> + *ρ*<sup>2</sup> > 0, the series at the right-hand side of (8) is absolutely convergent for all *z* ∈ C. For *ρ*<sup>1</sup> + *ρ*<sup>2</sup> = 0, the series is absolutely convergent for |*z*| < 1 and for |*z*| = 1 under the condition ℜ(*β*<sup>1</sup> + *β*<sup>2</sup>) > 2. Finally, the series is divergent for any *z* ≠ 0 in the case *ρ*<sup>1</sup> + *ρ*<sup>2</sup> < 0.
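A direct numerical implementation of the series (8) only requires guarding the reciprocal Gamma function at its poles. The sketch below is our own illustration (the function names are not from the paper):

```python
import math

def rgamma(x):
    # 1/Gamma(x); equal to 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def wright4(rho1, beta1, rho2, beta2, z, terms=120):
    # Truncated series (8); the function is entire when rho1 + rho2 > 0
    if rho1 + rho2 < 0:
        raise ValueError("the series diverges for rho1 + rho2 < 0 and z != 0")
    return sum(z**k * rgamma(beta1 + rho1 * k) * rgamma(beta2 + rho2 * k)
               for k in range(terms))

# rho2 = 0, beta2 = 1 makes the second Gamma factor constant, leaving a
# Mittag-Leffler series; for rho1 = beta1 = 1 this is just exp(z)
print(wright4(1.0, 1.0, 0.0, 1.0, 1.7), math.exp(1.7))
```

The exponential special case serves here as a simple correctness check of the truncation.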

Without any loss of generality, in what follows we always suppose that the condition *ρ*<sup>1</sup> ≥ *ρ*<sup>2</sup> holds true in the definition of the four-parameters Wright function. This assumption will lead to simpler formulations of some results concerning the four-parameters Wright function. Moreover, we will distinguish between the four-parameters Wright function of the first kind (*ρ*<sup>2</sup> > 0) and of the second kind (*ρ*<sup>2</sup> < 0). The properties and applications of the four-parameters Wright function of the second kind are very different from those of the function of the first kind. Thus, we found it appropriate to introduce a separate notation for the four-parameters Wright function of the second kind:

$$\Phi\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) := \mathcal{W}\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z),\ \rho\_2 < 0. \tag{9}$$

The notation *W*(*ρ*1,*β*1),(*ρ*2,*β*2) is kept for the four-parameters Wright function (including the cases of the functions of the first and of the second kinds).

In what follows, we always suppose that the condition *ρ*<sup>1</sup> + *ρ*<sup>2</sup> > 0 is satisfied. This condition along with the inequality *ρ*<sup>1</sup> ≥ *ρ*<sup>2</sup> leads to the inequality *ρ*<sup>1</sup> > 0. Thus, the parameter *ρ*<sup>1</sup> of the four-parameters Wright function is always positive, whereas the parameter *ρ*<sup>2</sup> is positive in the case of the function of the first kind and negative in the case of the function of the second kind. In the case *ρ*<sup>2</sup> = 0, the four-parameters Wright function is reduced to the two-parameters Mittag–Leffler function:

$$\mathcal{W}\_{\left(\rho\_1,\beta\_1\right),\left(0,\beta\_2\right)}\left(z\right) = \frac{1}{\Gamma(\beta\_2)} E\_{\rho\_1,\beta\_1}(z) = \frac{1}{\Gamma(\beta\_2)} \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta\_1 + \rho\_1 k)}.\tag{10}$$

For the theory and applications of the two-parameters Mittag–Leffler function we refer to the book [3]; in this paper we do not consider this function. Please note that in [3] the function (7) is called the generalized Mittag–Leffler function or the four-parametric Mittag–Leffler function.

Another important particular case of the four-parameters Wright function (7) is the Wright function (4):

$$\mathcal{W}\_{(1,1),(\rho,\beta)}(z) = \phi(\rho,\beta; z). \tag{11}$$

For the properties and applications of the Wright function we refer to the recent survey [5], see also the references therein.

As already mentioned, the four-parameters Wright function is an entire function provided the condition *ρ*<sup>1</sup> + *ρ*<sup>2</sup> > 0 holds true.

**Theorem 1.** *Let the condition ρ*<sup>1</sup> + *ρ*<sup>2</sup> > 0 *be satisfied. Then the four-parameters Wright function is an entire function of the variable z. Its order p and type σ are given by the relations*

$$p = \frac{1}{\rho\_1 + \rho\_2}, \; \sigma = \frac{\rho\_1 + \rho\_2}{\left(\rho\_1^{\rho\_1} |\rho\_2|^{\rho\_2}\right)^{\frac{1}{\rho\_1 + \rho\_2}}}.\tag{12}$$

The proof of the theorem is based on the Stirling formula for the asymptotics of the Gamma-function and can be found in [3,12,16].

Since it is a function of the hypergeometric type, the four-parameters Wright function possesses a very useful Mellin–Barnes integral representation ([35]):

$$\mathcal{W}\_{(\rho\_{1},\beta\_{1}),(\rho\_{2},\beta\_{2})}(z) = \frac{1}{2\pi i} \int\_{\mathcal{L}\_{-\infty}} \frac{\Gamma(s)\Gamma(1-s)}{\Gamma(\beta\_{1}-\rho\_{1}s)\Gamma(\beta\_{2}-\rho\_{2}s)} (-z)^{-s} \, ds,\tag{13}$$

where *L*−<sup>∞</sup> is a left loop located in a horizontal strip. It goes from the point −∞ + *iy*<sup>1</sup> to the point −∞ + *iy*<sup>2</sup> with *y*<sup>1</sup> < 0 < *y*<sup>2</sup> and separates the poles of the Gamma-function Γ(*s*) (the points *sk* = 0, −1, −2, . . . ) from the poles of the Gamma-function Γ(1 − *s*) (the points *sl* = 1, 2, 3, . . . ).

The formula (13) can be easily proved by evaluating the Mellin–Barnes integral taking into account the Jordan lemma, the formula

$$\text{res}\_{s=-k} \Gamma(s) = \frac{(-1)^k}{k!}, \; k = 0, 1, 2, \dots \tag{14}$$

the known asymptotic of the Gamma-function, and the Cauchy residue theorem.
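The residue formula (14) can be checked numerically: near the pole *s* = −*k* one has Γ(−*k* + *ε*) ≈ (−1)<sup>k</sup>/(*k*! *ε*), so *ε* Γ(−*k* + *ε*) tends to the residue as *ε* → 0. A small stdlib-only check (our own illustration):

```python
import math

# Formula (14): res_{s=-k} Gamma(s) = (-1)^k / k!.  Near the pole,
# Gamma(-k + eps) ~ (-1)^k / (k! eps), so eps * Gamma(-k + eps) -> residue.
eps = 1e-7
for k in range(5):
    approx = eps * math.gamma(-k + eps)
    exact = (-1)**k / math.factorial(k)
    print(k, approx, exact)
```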

Depending on the sign of the parameter *ρ*<sup>2</sup> (the parameter *ρ*<sup>1</sup> is always positive), the right-hand side of the representation (13) can be interpreted as the Fox *H*-function:

$$\mathcal{W}\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) = H\_{1,3}^{1,1}\left( \begin{matrix} (0,1) \\ (0,1),\ (1-\beta\_1,\rho\_1),\ (1-\beta\_2,\rho\_2) \end{matrix} \ \Big|\ -z \right), \ \rho\_2 > 0,\tag{15}$$

$$\Phi\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) = H\_{2,2}^{1,1}\left( \begin{array}{c} (0,1), \ (\beta\_2,-\rho\_2) \\ (0,1), \ (1-\beta\_1,\rho\_1) \end{array} \Big| -z \right), \ \rho\_2 < 0. \tag{16}$$

It is worth mentioning that both the Mellin–Barnes integral representation (13) and the Fox *H*-function representations (15) and (16) can be used for derivation of several useful properties of the four-parameters Wright function including its particular cases for the rational values of the parameters ([1,3,5]) or its asymptotic behavior ([6,7]).

Because the focus of this paper is on applications of the four-parameters Wright function of the second kind in FC, in the rest of this section we mainly restrict ourselves to a short discussion of its important properties. For the proofs, we refer the interested readers to [18].

A very useful integral representation of the four-parameters Wright function is given in the following theorem:

**Theorem 2** ([18])**.** *The four-parameters Wright function possesses the following integral representation in terms of the two-parameters Mittag–Leffler function* (10)*:*

$$\mathcal{W}\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) = \frac{1}{2\pi i} \int\_{\gamma(\varepsilon;\phi)} e^{\zeta} \zeta^{-\beta\_2} E\_{\rho\_1,\beta\_1}\left(z\zeta^{-\rho\_2}\right) d\zeta,\tag{17}$$

*where γ*(*ε*; *ϕ*) (*ε* > 0, *π*/2 < *ϕ* ≤ *π*) *is a contour in the complex plane with the nondecreasing* arg *ζ that consists of the ray* arg *ζ* = −*ϕ*, |*ζ*| ≥ *ε, the arc* −*ϕ* ≤ arg *ζ* ≤ *ϕ of the circle* |*ζ*| = *ε, and the ray* arg *ζ* = *ϕ*, |*ζ*| ≥ *ε.*

In the case of the four-parameters Wright function of the second kind, the integration contour *γ*(*ε*; *ϕ*) in Theorem 2 can be replaced by a simpler one:

**Theorem 3** ([18])**.** *For any k*<sup>0</sup> ∈ N *satisfying the condition k*<sup>0</sup> > max{−1, (1 − *β*<sup>2</sup>)/(−*ρ*<sup>2</sup>)}*, the four-parameters Wright function of the second kind can be represented as follows:*

$$\begin{split} \Phi\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(z) &= \sum\_{k=0}^{k\_0} \frac{z^k}{\Gamma(\beta\_1+\rho\_1 k)\Gamma(\beta\_2+\rho\_2 k)} + \\ &\frac{1}{2\pi i} \int\_{L\_-} e^{\zeta} \zeta^{-\beta\_2} \left( E\_{\rho\_1,\beta\_1}(z\zeta^{-\rho\_2}) - \sum\_{k=0}^{k\_0} \frac{(z\zeta^{-\rho\_2})^k}{\Gamma(\beta\_1+\rho\_1 k)} \right) d\zeta, \end{split} \tag{18}$$

*where the integration contour L*<sup>−</sup> *runs along the cut on the negative real semi-axis of the complex ζ-plane.*

**Remark 1.** *As already mentioned, for β*<sup>1</sup> = *ρ*<sup>1</sup> = 1*, the four-parameters Wright function is reduced to the Wright function* (4) *and the integral representation* (17) *with ρ*<sup>2</sup> = *ρ* > −1 *and β*<sup>2</sup> = *β* ∈ R *takes the well-known form*

$$\phi(\rho,\beta;z) = \frac{1}{2\pi i} \int\_{\gamma(\varepsilon;\phi)} \exp\{\zeta + z\zeta^{-\rho}\} \zeta^{-\beta} \,d\zeta. \tag{19}$$

*This integral representation was obtained by Wright in [27,29] and then used for derivation of the asymptotic behavior of the Wright function. In particular, he showed that the Wright function of the second kind has an algebraic asymptotic expansion on the positive real semi-axis provided the condition* 1/3 < −*ρ* < 1 *holds true (K* = 0, 1, 2, . . . *):*

$$\phi(\rho,\beta;x) = \sum\_{k=0}^{K-1} \frac{x^{(\beta-1-k)/(-\rho)}}{(-\rho)\Gamma(k+1)\Gamma(1+(\beta-1-k)/(-\rho))} + O\left(x^{(\beta-1-K)/(-\rho)}\right), \; x \to +\infty. \tag{20}$$

For the four-parameters Wright function of the second kind, a similar result was obtained in [18].

**Theorem 4** ([18])**.** *Under the condition ρ*1/3 < −*ρ*<sup>2</sup> < *ρ*<sup>1</sup> ≤ 2*, the four-parameters Wright function of the second kind has the following asymptotic on the positive real semi-axis:*

$$\Phi\_{(\rho\_1,\beta\_1),(\rho\_2,\beta\_2)}(x) = \sum\_{k=0}^{K-1} \frac{x^{(\beta\_2-1-k)/(-\rho\_2)}}{(-\rho\_2)\Gamma(k+1)\Gamma(\beta\_1+\rho\_1(\beta\_2-1-k)/(-\rho\_2))} - \tag{21}$$

$$\sum\_{p=1}^{P} \frac{x^{-p}}{\Gamma(\beta\_1-\rho\_1 p)\Gamma(\beta\_2-\rho\_2 p)} + O\left(x^{(\beta\_2-1-K)/(-\rho\_2)}\right) + O\left(x^{-1-P}\right), \quad x \to +\infty$$

*for any K* = 0, 1, 2, . . . , *and P* = 0, 1, 2, . . . *.*

For geometric properties of the four-parameters Wright function we refer the interested readers to the very recent paper [11].

#### **3. Applications of the Four-Parameters Wright Function of the Second Kind**

In this section, we consider three examples of applications of the four-parameters Wright function of the second kind in FC.

The first example concerns the well-studied one-dimensional time-fractional diffusion-wave equation with the Caputo derivative. For the analytical treatment of this equation, the Wright functions of the second kind play a fundamental role ([10,14,15,19,36]). For example, the fundamental solution to this equation can be expressed in terms of some special cases of the Wright function of the second kind (the so-called Mainardi auxiliary functions). However, it turns out that the formulas for the scale-invariant solutions to the one-dimensional diffusion-wave equation involve both the Wright function of the second kind and the four-parameters Wright function of the second kind.
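For instance, the Mainardi auxiliary function M*ν*(*z*) = *φ*(−*ν*, 1 − *ν*; −*z*) admits the classical closed form M<sub>1/2</sub>(*z*) = exp(−*z*²/4)/√π, which a truncated series reproduces. The sketch below is our own illustration (the function names are not from the paper):

```python
import math

def rgamma(x):
    # 1/Gamma(x); equal to 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def mainardi(nu, z, terms=60):
    # M_nu(z) = phi(-nu, 1 - nu; -z), a Wright function of the second kind
    return sum((-z)**k / math.factorial(k) * rgamma(1 - nu - nu * k)
               for k in range(terms))

# Classical special case: M_{1/2}(z) = exp(-z^2/4)/sqrt(pi)
z = 0.8
print(mainardi(0.5, z), math.exp(-z * z / 4.0) / math.sqrt(math.pi))
```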

In the second example, we deal with a subordination formula for solutions to a multi-dimensional space-time-fractional diffusion equation ([33]). This equation is obtained from the diffusion equation by replacing the first order time derivative by the Caputo fractional derivative and the Laplace operator by the fractional Laplacian. This time, it is the four-parameters Wright function of the second kind that is of importance for this equation. In particular, a special case of the four-parameters Wright function of the second kind appears in the kernel of a subordination formula that connects the solution operators of this equation with different orders of the fractional derivatives to the classical solution of the conventional diffusion equation. Moreover, this kernel function is non-negative and can be interpreted as a probability density function.

The third example deals with the ordinary fractional differential equations that contain both the left- and the right-hand sided fractional derivatives. In [32], an operational method for the so-called composed Erdélyi-Kober fractional derivatives was suggested and applied for derivation of the analytical solutions to the initial-value problems for a special class of such equations. In this section, we present an equation of this sort with an explicit solution expressed in terms of the four-parameters Wright function of the second kind.

#### *3.1. Scale-Invariant Solutions to the One-Dimensional Time-Fractional Diffusion-Wave Equation*

In this subsection, we deal with the fractional diffusion-wave equation, which is obtained from the conventional diffusion or wave equation by replacing the first- or second-order time derivative, respectively, by the Caputo fractional derivative:

$$\frac{\partial^{\alpha}u(x,t)}{\partial t^{\alpha}} = \frac{\partial^{2}u(x,t)}{\partial x^{2}},\ \ 1 < \alpha < 2,\ t > 0,\ x > 0.\tag{22}$$

The Caputo fractional derivative of order *α*, 1 < *α* < 2, is defined as follows:

$$\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} = \frac{1}{\Gamma(2-\alpha)} \int\_0^t (t-\tau)^{1-\alpha} \frac{\partial^2 u(x,\tau)}{\partial \tau^2} d\tau. \tag{23}$$
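The definition (23) can be checked on the power function *u*(*t*) = *t*², for which the second derivative is identically 2 and the Caputo derivative equals 2*t*<sup>2−α</sup>/Γ(3 − α). The sketch below (our own illustration) removes the integrable kernel singularity at *τ* = *t* with the substitution *v* = (*t* − *τ*)<sup>2−α</sup> and applies the midpoint rule:

```python
import math

def caputo(u_dd, alpha, t, n=20000):
    # Caputo derivative (23) for 1 < alpha < 2, given u'' = u_dd.
    # The substitution v = (t - tau)^(2 - alpha) removes the integrable
    # singularity of the kernel at tau = t; midpoint rule in v.
    p = 2.0 - alpha
    V = t**p
    h = V / n
    s = 0.0
    for i in range(n):
        v = (i + 0.5) * h
        w = v**(1.0 / p)          # w = t - tau
        s += u_dd(t - w)
    return s * h / (p * math.gamma(2.0 - alpha))

# u(t) = t^2 has u'' = 2, and exactly D^alpha t^2 = 2 t^(2-alpha)/Gamma(3-alpha)
alpha, t = 1.5, 2.0
num = caputo(lambda tau: 2.0, alpha, t)
exact = 2.0 * t**(2.0 - alpha) / math.gamma(3.0 - alpha)
print(num, exact)
```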

In particular, we are interested in the scale-invariant solutions to this equation. First, we introduce some basic notions concerning the similarity method for the general equation

$$F(u) = 0, \ u = u(\mathbf{x}, t). \tag{24}$$

A one-parameter family *T<sup>λ</sup>* of scaling transformations is a family of transformations of the (*x*, *t*, *u*)-space of the form

$$\bar{x} = \lambda^a x, \quad \bar{t} = \lambda^b t, \quad \bar{u} = \lambda^c u, \tag{25}$$

where *a*, *b*, and *c* are some constants and *λ* is a real parameter restricted to an open interval *I* containing the value *λ* = 1.

The general Equation (24) is called invariant under the one-parameter family *T<sup>λ</sup>* of scaling transformations (25) if and only if *T<sup>λ</sup>* maps any solution *u* of (24) to a solution *ū* of the same equation:

$$F(\bar{u}) = 0 \quad \text{if} \quad \bar{u} = T\_{\lambda}u.\tag{26}$$

A real-valued function *η*(*x*, *t*, *u*) is called an invariant of the one-parameter family *T<sup>λ</sup>* of scaling transformations if it is unaffected by the transformations from *Tλ*:

$$\eta(T\_{\lambda}(x,t,u)) = \eta(x,t,u) \quad \text{for all} \quad \lambda \in I.$$

The general theory ([37]) says that on the half-space {(*x*, *t*, *u*) : *x* > 0, *t* > 0}, the invariants of the scaling transformations (25) are provided by the functions

$$
\eta\_1(x, t, u) = x t^{-a/b}, \quad \eta\_2(x, t, u) = t^{-c/b} u. \tag{27}
$$

For example, let the Equation (24) be a second-order partial differential equation

$$G(x, t, u, u\_x, u\_t, u\_{xx}, u\_{xt}, u\_{tt}) = 0. \tag{28}$$

If this equation is invariant under the family *T<sup>λ</sup>* of scaling transformations (25), then the substitution

$$u(x,t) = t^{c/b}v(z), \quad z = x t^{-a/b} \tag{29}$$

reduces the Equation (28) to a second-order ordinary differential equation

$$g(z, v, v', v'') = 0.\tag{30}$$

In [9,14,15,18], the scale-invariant solutions for the equation of type (22) with the fractional derivatives in the Caputo and Riemann–Liouville sense and for the more general time- and space-fractional partial differential equations were obtained. In all cases, these solutions were expressed in terms of the Wright function of the second kind and the four-parameters Wright function of the second kind. In what follows, we present some of these results for the Equation (22).

The group of scaling transformations for the fractional diffusion-wave Equation (22) can be determined in explicit form.

**Theorem 5** ([9])**.** *The group of scaling transformations of the Equation (22) has the form*

$$T\_{\lambda} \circ (x, t, u) = (\lambda x, \lambda^{\frac{2}{\alpha}} t, \lambda^{c} u)$$

*with an arbitrary constant c* ∈ R *and its invariants are given by the formulas*

$$
\eta\_1(x, t) = x t^{-\alpha/2}, \ \eta\_2(x, t, u) = t^{-c\alpha/2} u. \tag{31}
$$

In what follows, for the sake of convenience, we use the notation *γ* = *c α*/2.

The general theory of the Lie groups ([37]) and Theorem 5 ensure that the scale-invariant solutions of the Equation (22) have the form

$$u(x,t) = t^{\gamma}v(y), \ y = x t^{-\alpha/2}, \ \gamma = c\,\alpha/2. \tag{32}$$

Substitution of the function *u* from the formula (32) into the partial fractional differential Equation (22) transforms it into an ordinary fractional differential equation with an unknown function *v*(*y*). More precisely, the following result holds true:

**Theorem 6** ([9])**.** *The scale-invariant solutions of the Equation* (22) *in the form* (32) *satisfy the equation*

$$(\,\_\*P\_{2/\alpha}^{\gamma-1,\alpha}v)(y) = v''(y), \; y > 0,\tag{33}$$

*where the operator* ${}\_\*P\_{2/\alpha}^{\gamma-1,\alpha}$ *is the Caputo-type modification of the right-hand sided Erdélyi-Kober fractional derivative defined by*

$$(\,\_\*P\_\delta^{\tau,\alpha}g)(y) := (K\_\delta^{\tau,n-\alpha} \prod\_{j=0}^{n-1} (\tau + j - \frac{1}{\delta} y \frac{d}{dy})g)(y), \; y > 0, \; \delta > 0, \; n - 1 < \alpha \le n \in \mathbb{N}.\tag{34}$$

*The operator* $K\_{\delta}^{\tau,\alpha}$*, α* > 0*, is the right-hand sided Erdélyi-Kober fractional integral defined by*

$$(K\_{\delta}^{\tau,\alpha}g)(y) := \frac{1}{\Gamma(\alpha)} \int\_{1}^{\infty} (u-1)^{\alpha-1} u^{-(\tau+\alpha)} g(yu^{1/\delta}) \, du. \tag{35}$$
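Applied to a power function *g*(*x*) = *x*<sup>pδ</sup> with *τ* > *p*, the definition (35) together with the Beta integral gives (K<sub>δ</sub><sup>τ,α</sup> g)(*y*) = (Γ(*τ* − *p*)/Γ(*τ* + *α* − *p*)) *y*<sup>pδ</sup>, which can be checked by quadrature. The sketch below (our own illustration) maps the infinite interval onto (0, 1) via *u* = 1/(1 − *s*):

```python
import math

def ek_integral(g, y, tau, alpha, delta, n=4000):
    # Right-hand sided Erdelyi-Kober fractional integral (35); the
    # substitution u = 1/(1 - s) maps (1, infinity) onto (0, 1).
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        s = (i + 0.5) * h
        u = 1.0 / (1.0 - s)
        total += ((u - 1.0)**(alpha - 1.0) * u**(-(tau + alpha))
                  * g(y * u**(1.0 / delta)) * u * u)   # du = u^2 ds
    return total * h / math.gamma(alpha)

# Power function g(x) = x^(p delta):
# (K g)(y) = Gamma(tau - p)/Gamma(tau + alpha - p) * y^(p delta) for tau > p
tau, alpha, delta, p, y = 3.0, 1.5, 2.0, 1.0, 1.2
num = ek_integral(lambda x: x**(p * delta), y, tau, alpha, delta)
exact = math.gamma(tau - p) / math.gamma(tau + alpha - p) * y**(p * delta)
print(num, exact)
```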

For *α* = 1 and *α* = 2, the fractional diffusion-wave Equation (22) is reduced to the conventional one-dimensional diffusion or wave equation, respectively. In these cases, the Equation (33) for the scale-invariant solutions of (22) is an ordinary differential equation, not a fractional one. In the case *α* = 1 (the diffusion equation), we have the representation

$$(\,\_\*P\_2^{\gamma,1}v)(y) = \left(\gamma - \frac{1}{2}y\frac{d}{dy}\right)v(y)$$

and the Equation (33) takes the well-known form

$$
v''(y) + \frac{1}{2}yv'(y) - \gamma v(y) = 0.
$$
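For *γ* = −1/2 this ODE is solved by *v*(*y*) = exp(−*y*²/4), which corresponds to the fundamental solution *u*(*x*, *t*) = *t*<sup>−1/2</sup> exp(−*x*²/(4*t*)) of the diffusion equation. A quick check with exact derivatives (our own illustration):

```python
import math

def residual(y, gamma=-0.5):
    # v(y) = exp(-y^2/4):  v' = -(y/2) v,  v'' = (y^2/4 - 1/2) v
    v = math.exp(-y * y / 4.0)
    v1 = -(y / 2.0) * v
    v2 = (y * y / 4.0 - 0.5) * v
    return v2 + 0.5 * y * v1 - gamma * v

# the residual vanishes identically for gamma = -1/2
print(max(abs(residual(0.1 * i)) for i in range(50)))
```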

In the case *α* = 2 (the wave equation) we get the formula

$$(\,\_\*P\_1^{\gamma-1,2}v)(y) = \left(\gamma-1-y\frac{d}{dy}\right)\left(\gamma-y\frac{d}{dy}\right)v(y) = y^2v''(y) - 2(\gamma-1)yv'(y) + \gamma(\gamma-1)v(y)$$

and the Equation (33) is transformed into the following ODE:

$$(y^2 - 1)v''(y) - 2(\gamma - 1)yv'(y) + \gamma(\gamma - 1)v(y) = 0.$$
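In the case *α* = 2, the reduction can be verified on *v*(*y*) = (1 + *y*)<sup>γ</sup>, which corresponds to the travelling-wave-type solution *u*(*x*, *t*) = (*x* + *t*)<sup>γ</sup> of the wave equation: for this *v*, the expression *y*²*v*″ − 2(*γ* − 1)*y v*′ + *γ*(*γ* − 1)*v* coincides with *v*″ identically. A quick check (our own illustration):

```python
def check(y, gamma):
    # v(y) = (1 + y)^gamma, with exact first and second derivatives
    v = (1.0 + y)**gamma
    v1 = gamma * (1.0 + y)**(gamma - 1.0)
    v2 = gamma * (gamma - 1.0) * (1.0 + y)**(gamma - 2.0)
    lhs = y * y * v2 - 2.0 * (gamma - 1.0) * y * v1 + gamma * (gamma - 1.0) * v
    return lhs - v2   # zero when the alpha = 2 reduction of (33) holds

print(max(abs(check(0.1 * i, 0.7)) for i in range(30)))
```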

Both of these cases are discussed in detail in [37].

It turns out that the Equation (33) can be solved in explicit form in terms of the Wright function of the second kind and the four-parameters Wright function of the second kind.

**Theorem 7** ([14])**.** *The scale-invariant solutions of the fractional diffusion-wave Equation* (22) *are given by the formulas*

$$u(x,t) = C\_1 t^{\gamma} \phi\left(-\frac{\alpha}{2}, 1+\gamma; -y\right) + C\_2 t^{\gamma} \left(\frac{1}{2} \phi\left(-\frac{\alpha}{2}, 1+\gamma; y\right) - y^{2 + 2\frac{\gamma - 1}{\alpha}}\, \Phi\_{(2,\, 3 + 2\frac{\gamma - 1}{\alpha}),\, (-\alpha,\, 2-\alpha)}(y^2)\right) \tag{36}$$

*in the case* 1 − *α* < *γ* < 1, *γ* ≠ 1 − *α*/2, *γ* ≠ 0*, and*

$$u(x,t) = C\_1 \phi\left(-\frac{\alpha}{2}, 1; -y\right) + C\_2 \left(\frac{1}{2}\phi\left(-\frac{\alpha}{2}, 1; y\right) - y^{2 - \frac{2}{\alpha}}\, \Phi\_{(2,\, 3 - \frac{2}{\alpha}),\, (-\alpha,\, 2 - \alpha)}(y^2)\right) + C\_3 \tag{37}$$

*in the case γ* = 0*, where* $y = xt^{-\alpha/2}$ *is the first scale-invariant* (31)*, φ is the Wright function of the second kind defined by* (4)*,* Φ *is the four-parameters Wright function of the second kind defined by* (9)*, and C*1, *C*2, *C*<sup>3</sup> *are arbitrary constants.*

For further results regarding the scale-invariant solutions to the fractional diffusion-wave equations we refer to [9,14,15,18].

#### *3.2. Subordination Formula for the Multi-Dimensional Space-Time-Fractional Diffusion Equations*

The object of analysis in this subsection is the multi-dimensional space-time-fractional diffusion equation

$$D\_t^{\beta}u(\mathbf{x},t) = -(-\Delta)^{\frac{\alpha}{2}}u(\mathbf{x},t), \quad \mathbf{x} \in \mathbb{R}^n, \ t > 0, \ 0 < \alpha \le 2, \ 0 < \beta \le 1. \tag{38}$$

In the Equation (38), the time-fractional derivative $D\_t^{\beta}$ is defined in the Caputo sense:

$$D\_t^\beta u(\mathbf{x}, t) = \left( I\_t^{n - \beta} \frac{\partial^n u}{\partial t^n} \right)(t), \quad n - 1 < \beta \le n, \ n \in \mathbb{N} \tag{39}$$

with $I\_t^{\gamma}$ being the Riemann–Liouville fractional integral:

$$(I\_t^\gamma u)(\mathbf{x},t) = \begin{cases} \frac{1}{\Gamma(\gamma)} \int\_0^t (t-\tau)^{\gamma-1} u(\mathbf{x}, \tau) \,d\tau & \text{for } \gamma > 0, \\ u(\mathbf{x}, t) & \text{for } \gamma = 0. \end{cases}$$

The fractional Laplacian $-(-\Delta)^{\frac{\alpha}{2}}$ is understood as a pseudo-differential operator with the symbol −|*κ*|*<sup>α</sup>* ([38,39]):

$$\left(\mathcal{F}\left(-(-\Delta)^{\frac{\alpha}{2}}u\right)\right)(\kappa) = -|\kappa|^{\alpha}\,(\mathcal{F}u)(\kappa)\,,\tag{40}$$

where $(\mathcal{F}u)(\kappa)$ is the Fourier transform of the function *u* at the point *κ* ∈ R*<sup>n</sup>* defined by

$$(\mathcal{F}u)(\kappa) = \hat{u}(\kappa) = \int\_{\mathbb{R}^n} e^{i\kappa\cdot\mathbf{x}} u(\mathbf{x}) \, d\mathbf{x} \,. \tag{41}$$

The fractional Laplacian can be also represented as a hypersingular integral ([39]):

$$-(-\Delta)^{\frac{\alpha}{2}}u(\mathbf{x}) = -\frac{1}{d\_{n,m}(\alpha)} \int\_{\mathbb{R}^n} \frac{\left(\Delta\_{\mathbf{h}}^m u\right)(\mathbf{x})}{|\mathbf{h}|^{n+\alpha}} \, d\mathbf{h}, \ 0 < \alpha < m, \ m \in \mathbb{N}, \ \mathbf{x} \in \mathbb{R}^n \tag{42}$$

with a suitably defined finite differences operator $(\Delta\_{\mathbf{h}}^m u)(\mathbf{x})$ and a normalization constant *dn*,*m*(*α*).

The representation (42) of the fractional Laplacian in the form of the hypersingular integral does not depend on *m*, *m* ∈ N, provided *α* < *m* ([39]). For other representations of the fractional Laplacian we refer the reader to [40].

In what follows, we consider the Cauchy problem for the space-time-fractional diffusion Equation (38) with the Dirichlet initial condition:

$$u(\mathbf{x},0) = f(\mathbf{x}) \,, \quad \mathbf{x} \in \mathbb{R}^n. \tag{43}$$

Because the initial-value problem (38), (43) is linear, its solution can be represented in the form

$$u(\mathbf{x},t) = \int\_{\mathbb{R}^n} G\_{\alpha,\beta,n}(\zeta,t) \, f(\mathbf{x} - \zeta) \, d\zeta. \tag{44}$$

In (44), the function *f* is the initial condition and *Gα*,*β*,*<sup>n</sup>* is the first fundamental solution of (38), i.e., its solution with the initial condition

$$u(\mathbf{x},0) = \prod\_{i=1}^{n} \delta(x\_i) \,, \quad \mathbf{x} = (x\_1, x\_2, \dots, x\_n) \in \mathbb{R}^n,$$

where *δ* is the Dirac delta function.

In the case of the conventional diffusion equation (*α* = 2 and *β* = 1 in the Equation (38)), the fundamental solution is well-known:

$$G\_{2,1,n}(\mathbf{x},t) = \frac{1}{(\sqrt{4\pi t})^n} \exp\left(-\frac{|\mathbf{x}|^2}{4t}\right). \tag{45}$$

It turns out that the fundamental solution *Gα*,*β*,*<sup>n</sup>* to the multi-dimensional space-time-fractional diffusion Equation (38) can be represented in terms of the fundamental solution *G*2,1,*<sup>n</sup>* of the conventional diffusion equation. The result, first obtained in [33], is given in the following theorem:

**Theorem 8** ([33])**.** *For the fundamental solution Gα*,*β*,*n*(x, *t*) *to the multi-dimensional space-time-fractional diffusion-wave Equation* (38) *with* 0 < *β* ≤ 1*,* 0 < *α* ≤ 2*, and* 2*β* + *α* < 4 *the following subordination formula is valid:*

$$G\_{\alpha,\beta,n}(\mathbf{x},t) = \int\_0^\infty t^{-\frac{2\beta}{\alpha}}\, \Psi\_{\alpha,\beta}\left(st^{-\frac{2\beta}{\alpha}}\right) G\_{2,1,n}(\mathbf{x},s) \, ds,\tag{46}$$

*where the fundamental solution G*2,1,*n*(x,*s*) *to the conventional diffusion equation is given by the formula* (45) *and the kernel function* Ψ*α*,*<sup>β</sup> is a probability density function in s*, *s* ∈ R<sup>+</sup> *for each value of t*, *t* > 0 *defined as follows:*

$$\Psi\_{\alpha,\beta}(\tau) = \begin{cases} \tau^{\frac{\alpha}{2}-1}\, \Phi\_{\left(\frac{\alpha}{2},\frac{\alpha}{2}\right), \left(-\beta,1-\beta\right)}\left(-\tau^{\frac{\alpha}{2}}\right) & \text{if } \frac{\beta}{\alpha} < \frac{1}{2}, \\\\ -\tau^{-1-\frac{\alpha}{2}}\, \Phi\_{\left(\beta,1+\beta\right), \left(-\frac{\alpha}{2},-\frac{\alpha}{2}\right)}\left(-\tau^{-\frac{\alpha}{2}}\right) & \text{if } \frac{\beta}{\alpha} > \frac{1}{2}, \\\\ \frac{\tau^{\frac{\alpha}{2}-1}}{\pi} \sum\_{k=0}^{\infty} \sin\left(\frac{\pi\alpha}{2}(k+1)\right) \left(-\tau^{\frac{\alpha}{2}}\right)^{k} & \text{if } \frac{\beta}{\alpha} = \frac{1}{2},\ 0 < \tau < 1, \\\\ -\frac{\tau^{-1}}{\pi} \sum\_{k=0}^{\infty} \sin\left(\frac{\pi\alpha}{2}k\right) \left(-\tau^{-\frac{\alpha}{2}}\right)^{k} & \text{if } \frac{\beta}{\alpha} = \frac{1}{2},\ \tau > 1. \end{cases} \tag{47}$$

*In the formula* (47)*, the function* Φ *is the four-parameters Wright function of the second kind defined by* (9)*.*

It is worth mentioning that even if the subordination formula (46) concerns just the fundamental solution, it can be extended to the solution operator for the initial-value problem (38), (43). Indeed, let us suppose that a more general subordination formula for the fundamental solution *Gα*,*β*,*<sup>n</sup>* is valid:

$$G\_{\alpha,\beta,n}(\mathbf{x},t) = \int\_0^\infty \Psi(\alpha,\beta,s,t)\, G\_{\tilde{\alpha},\tilde{\beta},n}(\mathbf{x},s) \, ds \,,\tag{48}$$

where the kernel function Ψ = Ψ(*α*, *β*,*s*, *t*) can be interpreted as a probability density function in *s*, *s* ∈ R<sup>+</sup> for each value of *t*, *t* > 0 (the formula (46) is a particular case of the formula (48)). Then we have the following chain of relations:

$$S\_{\alpha,\beta,n}(t)f = \int\_{\mathbb{R}^{n}} G\_{\alpha,\beta,n}(\zeta,t) \, f(\mathbf{x}-\zeta) \, d\zeta = \int\_{\mathbb{R}^{n}} \int\_{0}^{\infty} \Psi(\alpha,\beta,s,t)\, G\_{\tilde{\alpha},\tilde{\beta},n}(\zeta,s) \, ds \, f(\mathbf{x}-\zeta) \, d\zeta = \int\_{0}^{\infty} \Psi(\alpha,\beta,s,t) \int\_{\mathbb{R}^{n}} G\_{\tilde{\alpha},\tilde{\beta},n}(\zeta,s)\, f(\mathbf{x}-\zeta) \, d\zeta \, ds = \int\_{0}^{\infty} \Psi(\alpha,\beta,s,t)\, S\_{\tilde{\alpha},\tilde{\beta},n}(s) \, f \, ds.$$

Thus, the subordination formula

$$S_{\alpha,\beta,n}(t)\, f = \int_0^\infty \Psi(\alpha,\beta,s,t)\, S_{2,1,n}(s)\, f\, ds \tag{49}$$

holds true for the solution operator $S_{\alpha,\beta,n}$. Vice versa, any subordination formula for the solution operator $S_{\alpha,\beta,n}$ to the initial-value problem (38), (43) in the form (49) induces a subordination formula of the type (48) for the fundamental solution $\mathcal{G}_{\alpha,\beta,n}$ just by setting $f$ to be the Dirac $\delta$-function.

In the rest of this subsection, we provide some important remarks concerning the kernel $\Psi_{\alpha,\beta}$ of the subordination formula (46).

In [33], the kernel function $\Psi_{\alpha,\beta}$ given by the formula (47) was first deduced in the form of the following Mellin–Barnes integral:

$$\Psi_{\alpha,\beta}(\tau) = \frac{2}{\alpha}\, \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} \frac{\Gamma\left(\frac{2}{\alpha}-\frac{2}{\alpha}s\right)\, \Gamma\left(1-\frac{2}{\alpha}+\frac{2}{\alpha}s\right)}{\Gamma\left(1-\frac{2\beta}{\alpha}+\frac{2\beta}{\alpha}s\right)\, \Gamma(1-s)}\, \tau^{-s}\, ds. \tag{50}$$

The series representation (47) was derived by evaluating the Mellin–Barnes integral (50) taking into account the Jordan lemma, the formula (14) for the residue of the Gamma-function Γ(*s*) at the point *s* = −*k*, the asymptotic behavior of the Gamma-function, and the Cauchy residue theorem.

The kernel function $\Psi_{\alpha,\beta}$ can also be interpreted as the inverse Laplace transform of the Mittag–Leffler function $E_{\beta}(-\lambda^{\alpha/2})$:

$$E_{\beta}\left(-\lambda^{\frac{\alpha}{2}}\right) = \int_0^\infty \Psi_{\alpha,\beta}(\tau)\, e^{-\lambda\tau}\, d\tau, \tag{51}$$

where the Mittag–Leffler function *E<sup>β</sup>* is defined as follows:

$$E\_{\beta}(z) = E\_{\beta,1}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(1+\beta k)}, \text{ } \beta > 0, \ z \in \mathbb{C}. \tag{52}$$
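As a quick numerical illustration of (52), the series can be summed directly for moderate arguments. The snippet below is a sketch (the function and parameter names are mine, not from the text) that checks the classical special cases $E_1(z)=e^z$ and $E_2(z)=\cosh\sqrt{z}$:

```python
import math

def mittag_leffler(beta, z, terms=80):
    """Partial sum of the Mittag-Leffler series (52); adequate for moderate |z|."""
    return sum(z ** k / math.gamma(1.0 + beta * k) for k in range(terms))

# classical special cases: E_1(z) = exp(z) and E_2(z) = cosh(sqrt(z)) for z >= 0
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
assert abs(mittag_leffler(2.0, 2.0) - math.cosh(math.sqrt(2.0))) < 1e-12
```

For β close to 0 or large |z| this naive summation is inadequate; it is only meant to make the definition (52) concrete.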

For the time-fractional diffusion equation (*α* = 2, 0 < *β* ≤ 1 in the Equation (38)) the subordination formula (46) with the kernel function $\Psi_{\alpha,\beta}$ given by the 1st line of (47) is valid. In this case, the four-parameter Wright function of the second kind is reduced to the Wright function of the second kind and we arrive at the known formula ([41,42])

$$\mathcal{G}_{2,\beta,n}(\mathbf{x},t) = \int_0^\infty t^{-\beta}\, \phi(-\beta, 1-\beta; -s\, t^{-\beta})\, \mathcal{G}_{2,1,n}(\mathbf{x}, s)\, ds, \ 0 < \beta < 1. \tag{53}$$
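For β = 1/2 the kernel in (53) is known to reduce to a Gaussian: φ(−1/2, 1/2; −*x*) = e^{−x²/4}/√π. The sketch below (helper names are mine; the series φ(*a*, *b*; *z*) = Σ *z*^k/(k! Γ(*ak* + *b*)) is assumed from the earlier part of the paper) verifies this special case and checks numerically that the kernel integrates to one in *s*, as a probability density should:

```python
import math

def rgamma(x):
    """Reciprocal Gamma function; returns 0 at the poles x = 0, -1, -2, ..."""
    if x <= 0 and x == round(x):
        return 0.0
    return 1.0 / math.gamma(x)

def wright_phi(a, b, z, terms=120):
    """Partial sum of the Wright series phi(a,b;z) = sum_k z^k / (k! Gamma(a k + b))."""
    return sum(z ** k / math.factorial(k) * rgamma(a * k + b) for k in range(terms))

# special case of the kernel in (53) for beta = 1/2:
for x in (0.3, 1.0, 2.5):
    assert abs(wright_phi(-0.5, 0.5, -x)
               - math.exp(-x * x / 4.0) / math.sqrt(math.pi)) < 1e-10

# the kernel is a probability density: trapezoid rule for its integral on [0, 8]
grid = [0.01 * i for i in range(801)]
vals = [wright_phi(-0.5, 0.5, -s) for s in grid]
integral = sum(0.005 * (vals[i] + vals[i + 1]) for i in range(800))
assert abs(integral - 1.0) < 1e-3
```

The tail beyond *s* = 8 is of order erfc(4) ≈ 10⁻⁸, so truncating the integration range there is harmless at this tolerance.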

For the space-fractional diffusion equation (*β* = 1, 0 < *α* ≤ 2 in the Equation (38)), the subordination formula (46) with the kernel function $\Psi_{\alpha,\beta}$ given by the 2nd line of (47) is valid. It is easy to verify that the kernel function can be rewritten in the following form:

$$-\tau^{-1-\frac{\alpha}{2}}\, \Phi_{\left(1,\,2\right),\left(-\frac{\alpha}{2},\,-\frac{\alpha}{2}\right)}\left(-\tau^{-\frac{\alpha}{2}}\right) = \tau^{-1}\, \Phi_{\left(1,\,1\right),\left(-\frac{\alpha}{2},\,0\right)}\left(-\tau^{-\frac{\alpha}{2}}\right).$$

Thus, also in the case of the space-fractional diffusion equation, the four-parameter Wright function of the second kind from the formula (47) is reduced to the Wright function of the second kind and we arrive at the subordination formula in the form

$$\mathcal{G}_{\alpha,1,n}(\mathbf{x},t) = \int_0^\infty s^{-1}\, \phi\left(-\frac{\alpha}{2},\, 0;\, -s^{-\frac{\alpha}{2}}\, t\right)\, \mathcal{G}_{2,1,n}(\mathbf{x}, s)\, ds, \ 0 < \alpha < 2. \tag{54}$$
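For *α* = 1 (still with *β* = 1) the fundamental solution is the Cauchy kernel, and the subordination above becomes the classical representation of the Cauchy density as a Gaussian scale mixture with a one-sided Lévy mixing density. The sketch below checks this particular case numerically; the explicit kernels used (heat kernel e^{−x²/4s}/√(4πs) for *u_t* = *u_xx* and Lévy density *t* e^{−t²/4s}/(2√π s^{3/2})) are standard, but their normalization relative to the paper's conventions is my assumption:

```python
import math

def gauss(x, s):
    # fundamental solution of u_t = u_xx (note the 4s, matching diffusivity 1)
    return math.exp(-x * x / (4.0 * s)) / math.sqrt(4.0 * math.pi * s)

def levy(s, t):
    # one-sided Levy density in s: the subordination kernel for alpha = 1, beta = 1
    return t * math.exp(-t * t / (4.0 * s)) / (2.0 * math.sqrt(math.pi) * s ** 1.5)

def cauchy(x, t):
    return t / (math.pi * (t * t + x * x))

# midpoint rule for  int_0^inf levy(s,t) gauss(x,s) ds  on (0, 400]
x, t, h = 0.7, 1.3, 0.002
total = sum(h * levy((i + 0.5) * h, t) * gauss(x, (i + 0.5) * h)
            for i in range(200000))
assert abs(total - cauchy(x, t)) < 1e-3
```

The residual tolerance is dominated by the truncated integration tail, which decays only like 1/*s*.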

#### *3.3. FDEs with the Left- and Right-Hand Sided Erdélyi-Kober Fractional Derivatives*

In this part of the section, we consider an initial-value problem for an ordinary fractional differential equation with the left- and right-hand sided Erdélyi-Kober fractional derivatives defined on the positive semi-axis. The equations of this type appear in the fractional calculus of variations as the Euler-Lagrange equations. However, to the best of the author's knowledge, the only method for analytical treatment of these equations defined on an infinite interval, say, on the positive real semi-axis, is the operational method recently suggested in [32]. Here we present an example of application of this method to the following sample equation (*a* > *b* > 0, *n* − 1 < *aμ* ≤ *n*, *n* ∈ N):

$$({}_*D_{1/a}^{-\alpha-a\mu,\,a\mu}\, y)(x) + \rho\, x^{\mu}\, ({}_*P_{1/b}^{\beta-b\mu,\,b\mu}\, y)(x) = f(x), \ x > 0, \ \rho > 0 \tag{55}$$

subject to the initial conditions (*k* = 0, . . . , *n* − 1)

$$\lim_{x\to 0} x^{\frac{1}{a}(1-a-a\mu+k)} \prod_{i=k+1}^{n-1}\left(1-a-a\mu+i+a\,x\,\frac{d}{dx}\right) y(x) = c_k. \tag{56}$$

In the Equation (55), the operator ${}_*P_{1/b}^{\beta-b\mu,\,b\mu}$ is the Caputo type modification of the right-hand sided Erdélyi-Kober fractional derivative given by the formula (34). The operator ${}_*D_{1/a}^{-\alpha-a\mu,\,a\mu}$ is the Caputo type modification of the left-hand sided Erdélyi-Kober fractional derivative defined as follows:

$$({}_*D_{\beta}^{\gamma,\delta} f)(x) = \left(I_{\beta}^{\gamma+\delta,\,n-\delta} \prod_{k=0}^{n-1} \left(1+\gamma+k+\frac{1}{\beta}\, t\, \frac{d}{dt}\right) f\right)(x), \tag{57}$$

where $I_{\beta}^{\gamma,\delta}$ stands for the left-hand sided Erdélyi-Kober fractional integral of order *δ*:

$$(I_{\beta}^{\gamma,\delta} f)(x) = \frac{1}{\Gamma(\delta)} \int_0^1 (1-t)^{\delta-1}\, t^{\gamma}\, f\left(x\, t^{\frac{1}{\beta}}\right) dt, \ \delta, \beta > 0, \ \gamma \in \mathbb{R}. \tag{58}$$
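A direct way to get a feel for (58) is to apply it to a power function, where the operator acts as multiplication by a ratio of Gamma functions: $I_{\beta}^{\gamma,\delta}\, x^{p} = \frac{\Gamma(\gamma+1+p/\beta)}{\Gamma(\gamma+1+\delta+p/\beta)}\, x^{p}$. A small numerical sketch (helper names are mine, not from the text):

```python
import math

def ek_integral(f, x, gamma_, delta, beta, n=20000):
    """Left-hand sided Erdelyi-Kober fractional integral (58) by the midpoint rule."""
    h, acc = 1.0 / n, 0.0
    for i in range(n):
        t = (i + 0.5) * h
        acc += h * (1 - t) ** (delta - 1) * t ** gamma_ * f(x * t ** (1.0 / beta))
    return acc / math.gamma(delta)

# On a power function f(x) = x^p the operator is multiplication by a Gamma ratio:
p, gamma_, delta, beta, x = 2.0, 0.5, 1.5, 2.0, 1.7
num = ek_integral(lambda u: u ** p, x, gamma_, delta, beta)
exact = math.gamma(gamma_ + 1 + p / beta) / math.gamma(gamma_ + 1 + delta + p / beta) * x ** p
assert abs(num - exact) < 1e-4
```

The closed form follows from the Beta-integral $\int_0^1 (1-t)^{\delta-1} t^{\gamma+p/\beta}\,dt = B(\gamma+1+p/\beta,\delta)$.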

It is worth mentioning that the initial conditions in the form (56) are determined by the projector operator of the left-hand sided Erdélyi-Kober fractional integral $I_{1/a}^{-\alpha-a\mu,\,a\mu}$:

$$(P y)(x) = y(x) - \left(I_{1/a}^{-\alpha-a\mu,\,a\mu}\; {}_*D_{1/a}^{-\alpha-a\mu,\,a\mu}\, y\right)(x) = \sum_{k=0}^{n-1} c_k\, x^{-\frac{1}{a}(1-a-a\mu+k)}, \tag{59}$$

$$c_k = \lim_{x \to 0} x^{\frac{1}{a}(1-a-a\mu+k)} \prod_{i=k+1}^{n-1} \left(1-a-a\mu+i+a\,x\,\frac{d}{dx}\right) y(x) \tag{60}$$

and thus, they are quite natural for the Equation (55).

In this paper, we do not repeat the derivation of the exact solution to the initial-value problem (55), (56) presented in [32] and restrict ourselves to formulation of the final result.

**Theorem 9.** *Let a* > *b* > 0*, n* − 1 < *aμ* ≤ *n, n* ∈ N*, f* ∈ O*, and the condition*

$$\frac{a-1}{a} < \frac{\beta}{b} \tag{61}$$

*be satisfied. Then the initial-value problem* (55)*,* (56) *possesses a unique solution on the space* O *in the form*

$$y(\mathbf{x}) = \sum\_{k=0}^{n-1} c\_k y\_k(\mathbf{x}) + y\_f(\mathbf{x}),\tag{62}$$

*where the functions yk*, *k* = 0, . . . , *n* − 1 *are defined by*

$$y_k(x) = \Gamma(a\mu-k)\, \Gamma\left(\beta+\frac{b}{a}(1-a-a\mu+k)\right) x^{\mu-\frac{1}{a}(1-a+k)}\, \Phi_{\left(a\mu,\,a\mu-k\right),\left(-b\mu,\,\beta+\frac{b}{a}(1-a-a\mu+k)\right)}\left(-\rho\, x^{\mu}\right), \tag{63}$$

*and the function yf is given by the formula*

$$y\_f(\mathbf{x}) = g(\mathbf{x}) + (g \stackrel{\lambda}{\*} y\_\Phi)(\mathbf{x}),\tag{64}$$

*with*

$$g(x) = \left(I_{1/a}^{-\alpha-a\mu,\,a\mu} f\right)(x), \quad y_{\Phi}(x) = \rho\, x^{\mu-\lambda}\, \Phi_{\left(a\mu,\,1-a+a(\mu-\lambda)\right),\left(-b\mu,\,\beta-b(\mu-\lambda)\right)}\left(-\rho\, x^{\mu}\right)$$

*and the convolution* $\overset{\lambda}{*}$ *is defined as follows:*

$$(f \overset{\lambda}{*} g)(x) = \left(I_{1/a}^{1-2a-a\lambda,\,a+a\lambda-1}\; {}_*P_{1/b}^{\delta,\,\beta+b\lambda}\, f \circ g\right)(x) \tag{65}$$

*with*

$$(f \circ g)(x) = x^{\lambda} \int_{0}^{1} \int_{0}^{1} \tau_{1}^{-\alpha}\, (1-\tau_{1})^{-\alpha}\, \tau_{2}^{\beta-1}\, (1-\tau_{2})^{\beta-1}\, f\left(\frac{x\, \tau_{1}^{a}}{\tau_{2}^{b}}\right) g\left(\frac{x\, (1-\tau_{1})^{a}}{(1-\tau_{2})^{b}}\right) d\tau_{1}\, d\tau_{2}. \tag{66}$$

*The function yf satisfies the inhomogeneous Equation* (55) *and homogeneous initial conditions, whereas the functions yk*, *k* = 0, ... , *n* − 1 *satisfy the homogeneous Equation* (55) *(f*(*x*) ≡ 0, *x* > 0*) and the initial conditions (k* = 0, . . . , *n* − 1, *j* = 0, . . . , *n* − 1*)*

$$\lim_{x \to 0} x^{\frac{1}{a}(1-a-a\mu+j)} \prod_{i=j+1}^{n-1} \left(1-a-a\mu+i+a\,x\,\frac{d}{dx}\right) y_k(x) = \begin{cases} 1, & j = k, \\ 0, & j \neq k. \end{cases} \tag{67}$$

In the formulation of the theorem, the space of functions denoted by O consists of the functions that are continuous on the semi-axis ]0, ∞[ and can be represented as convergent power series with power-function weights in some neighborhoods $U_1(0)$ and $U_2(+\infty)$ of the points *x* = 0 and *x* = +∞, respectively, i.e., in the form

$$f(x) = x^{\alpha} \sum_{k=0}^{\infty} a_k (x^{\rho})^k, \ \rho > 0, \ x \in U_1(0), \tag{68}$$

and

$$f(x) = x^{s} \sum_{k=0}^{\infty} b_k (x^{-\sigma})^k, \ \sigma > 0, \ x \in U_2(+\infty). \tag{69}$$

The functions from O have a power-law asymptotic behavior at the points 0 and +∞, which appears to be an appropriate asymptotics for solutions of fractional differential equations that contain both the left- and right-hand sided Erdélyi-Kober fractional derivatives.

Finally, we mention that the results formulated in Theorem 9 remain valid also in the case of the Equation (55) with a negative parameter *ρ* under the additional condition *a*/3 < *b*. This can be proved by the operational method presented in [32] by employing the asymptotic behavior of the four-parameter Wright function of the second kind on the positive semi-axis given in Theorem 4.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **Exact Values of the Gamma Function from Stirling's Formula**

#### **Victor Kowalenko**

School of Mathematics and Statistics, The University of Melbourne, Parkville, VIC 3010, Australia; vkowa@unimelb.edu.au

Received: 6 May 2020; Accepted: 19 June 2020; Published: 1 July 2020

**Abstract:** In this work the complete version of Stirling's formula, which is composed of the standard terms and an infinite asymptotic series, is used to obtain exact values of the logarithm of the gamma function over all branches of the complex plane. Exact values can only be obtained by regularization. Two methods are introduced: Borel summation and Mellin–Barnes (MB) regularization. The Borel-summed remainder is composed of an infinite convergent sum of exponential integrals and discontinuous logarithmic terms that emerge in specific sectors and on lines known as Stokes sectors and lines, while the MB-regularized remainders reduce to one complex MB integral with similar logarithmic terms. Because the domains of convergence overlap, two MB-regularized asymptotic forms can often be used to evaluate the logarithm of the gamma function. Though the Borel-summed remainder has to be truncated, it is found that both remainders when summed with (1) the truncated asymptotic series, (2) Stirling's formula and (3) the logarithmic terms arising from the higher branches of the complex plane yield identical values for the logarithm of the gamma function. Where possible, they also agree with results from Mathematica.

**Keywords:** asymptotic series; asymptotic form; Borel summation; complete asymptotic expansion; divergent series; domain of convergence; gamma function; Mellin–Barnes regularization; regularization; remainder; Stokes discontinuity; Stokes line/sector; Stokes phenomenon; Stirling's formula

**MSC:** 30B10; 30B30; 30E15; 30E20; 34E05; 34E15; 40A05; 40G10; 40G99; 41A60

#### **1. Introduction**

Discovered in the 1730s [1], Stirling's formula is a well-known result for determining approximate values of the gamma function, Γ(*z*), which is so important in the definition of Mittag–Leffler functions. It has remained a mystery whether it is indeed possible to obtain exact values of the gamma function from the complete version of the formula as opposed to its more famous truncated form. Moreover, due to the function's rapid exponentiation, its logarithm, ln Γ(*z*), is studied more often. This, however, introduces multivaluedness, which makes the asymptotic analysis of the function more formidable. Consequently, no one has ever been able to obtain exact values of either function via the entire formula.

In its entirety, Stirling's formula is an asymptotic expansion and is, therefore, divergent. Here exact values of ln Γ(*z*) are determined for all values of arg *z* from the complete asymptotic expansion of the formula. This process, known as exactification, represents the ultimate goal of hyperasymptotics, whose primary aim is to obtain far more accurate values from asymptotic expansions than standard Poincaré asymptotics [2]. In such studies one not only includes all the terms in a dominant asymptotic series, but also subdominant exponential terms, which are said to lie beyond all orders [3]. To observe their effect, hyperasymptotic calculations are generally carried out to more than 20 decimal places.

Since a complete asymptotic expansion is composed of divergent series, exactification involves obtaining meaningful values from them. This is achieved by the process of regularization, which is defined here as the removal of the infinity in the remainder of an asymptotic series so as to make the series summable. It was first demonstrated in [4] that the infinity in the remainder of an asymptotic series arises from an impropriety in the asymptotic method used to derive it. Hence regularization represents the method of correcting asymptotic methods.

Two very different techniques will be used to regularize the divergent series in this work. As discussed in [5,6], the most common method of regularizing a divergent series is Borel summation, but often, it produces results that are not amenable to fast and accurate computation. To overcome this drawback, the numerical technique of Mellin–Barnes regularization was developed in [7]. In this method, divergent series are expressed in terms of Mellin–Barnes integrals and divergent arc-contour integrals. Regularization removes the latter resulting in the Mellin–Barnes integrals yielding finite values, similar to the Hadamard finite part of a divergent integral [8]. Amazingly, the finite values obtained from applying the technique to an asymptotic expansion yield exact values of the original function with the main difference being that instead of dealing with Stokes sectors and lines, one now deals with overlapping domains of convergence over which the Mellin–Barnes integrals are valid.

#### **2. Stirling's Formula**

Stirling's formula [1] for the factorial function is often written for large integers, *n*, as

$$
\ln n! = \ln \Gamma(n+1) = n \ln n - n + \frac{1}{2} \ln(2\pi n) + \dotsb. \tag{1}
$$

As this is accurate to within 1% for *n* > 5, it represents a good approximation in standard (Poincaré) asymptotics [2], but not so in hyperasymptotics. Moreover, our aim is to consider complex values, not large integers. Thus, we replace the factorial function by the more general gamma function, Γ(*z* + 1). The terms in (1), denoted here by *F*(*z*), then become the leading terms of the complete asymptotic expansion for ln Γ(*z*). They will be treated as a separate contribution in all calculations of ln Γ(*z*), so that the reader will be able to observe just how inadequate standard asymptotics is compared with hyperasymptotics.
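The 1% claim is easy to confirm numerically; a quick sketch (my own helper name) comparing the leading terms of (1) with the library value of ln Γ:

```python
import math

def stirling_leading(n):
    # n ln n - n + (1/2) ln(2 pi n): the leading terms in (1)
    return n * math.log(n) - n + 0.5 * math.log(2.0 * math.pi * n)

for n in range(6, 40):
    exact = math.lgamma(n + 1)            # ln n!
    assert abs(stirling_leading(n) - exact) / exact < 0.01
```

At *n* = 6 the relative error is already about 0.2%, and it shrinks as *n* grows.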

Occasionally, a problem arises where there is an interest in the missing terms in (1). Then Stirling's formula is expressed differently. For example, according to No. 6.1.41 in [9], for *z* → ∞ and |arg *z*| < *π*, ln Γ(*z*) is given by

$$\ln \Gamma(z) \sim F(z) + \frac{1}{12z} - \frac{1}{360z^3} + \frac{1}{1260z^5} - \frac{1}{1680z^7} + \cdots, \tag{2}$$

where *F*(*z*) represents all the terms in Stirling's formula, namely,

$$F(z) = \left(z - \frac{1}{2}\right) \ln z - z + \frac{1}{2} \ln(2\pi). \tag{3}$$

Hence the leading terms are identical to those in (1). In other texts the dots in (2) are replaced by the Landau gauge symbol, which would be *O*(*z*−9) here since it is the next highest order term. In [10] the power series after ln(2*π*) is truncated with the coefficients expressed in terms of the Bernoulli numbers, while the remainder term, *RN*(*z*) in No. 8.344, is given as

$$|R_N(z)| = \left| \sum_{k=N}^{\infty} \frac{B_{2k}}{2k(2k-1)z^{2k-1}} \right| < \frac{|B_{2N}|}{2N(2N-1)|z|^{2N-1}\cos^{2N-1}\left((\arg z)/2\right)}. \tag{4}$$
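The divergence of this remainder series past the optimal truncation point can be observed directly. In the sketch below the Bernoulli numbers are generated from ζ(2*k*) via the standard identity $B_{2k} = (-1)^{k+1}\, 2\, (2k)!\, \zeta(2k)/(2\pi)^{2k}$ (this identity is my addition, not from the text):

```python
import math

def zeta(s, n_max=20000):
    # crude partial sum of the Riemann zeta function, ample for s >= 2
    return sum(n ** -s for n in range(1, n_max))

def bernoulli_2k(k):
    # B_{2k} = (-1)^{k+1} 2 (2k)! zeta(2k) / (2 pi)^{2k}
    return ((-1) ** (k + 1) * 2.0 * math.factorial(2 * k) * zeta(2 * k)
            / (2.0 * math.pi) ** (2 * k))

z = 3.0
mags = [abs(bernoulli_2k(k) / (2 * k * (2 * k - 1) * z ** (2 * k - 1)))
        for k in range(1, 26)]
k_min = mags.index(min(mags)) + 1        # optimal truncation index
assert 8 <= k_min <= 11                  # close to pi |z| ~ 9.4 for |z| = 3
assert mags[-1] > mags[k_min - 1]        # the terms grow again past the optimum
```

For *z* = 3 the smallest term is of order 10⁻⁹, which bounds the best accuracy achievable by plain truncation.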

Although the remainder is dependent upon *z* and *N*, for *z* > 0, the series diverges once *N* passes the optimal point of truncation, *NOP*. Moreover, the above result is even more vague than (2) because the expansion is only valid for "large" values of |*z*| without indicating what large means. Here, we shall evaluate exact values of ln Γ(*z*) from the complete version of Stirling's formula by following the concepts and theory in [6], but before this can be done, the following lemma is required.

**Lemma 1.** *Via regularization, the power/Taylor series expansion for* arctan *u, namely,* $\sum_{k=0}^{\infty} (-1)^k u^{2k+1}/(2k+1)$*, can be expressed as*

$$\sum_{k=0}^{\infty} \frac{(-1)^k u^{2k+1}}{(2k+1)} \begin{cases} = \arctan u, & -1 < \Re(iu) < 1, \\ \equiv \arctan u, & \Re(iu) \le -1 \ \text{or}\ \Re(iu) \ge 1. \end{cases} \tag{5}$$

**Proof.** For brevity, the proof is not given here, but appears in [11].

It should be noted that an equivalence symbol appears in one of the results, indicating that one side possesses a divergent series, while the other side represents a finite regularized value. That is, arctan *u* is defined for all values of *u*, while the series representation for the function is divergent when *u* does not lie within $-1 < \Re(iu) < 1$. Since the equivalence symbol is less stringent than an equals sign, we can re-write the lemma as

$$\sum\_{k=0}^{\infty} \frac{(-1)^k u^{2k+1}}{(2k+1)} \equiv \arctan u, \quad \forall u. \tag{6}$$

Therefore, if the series appears in a problem, then it can be replaced by the right-hand side (rhs). Though equivalence statements will appear throughout this paper, it does not necessarily mean that a power series is divergent for all values of the variable.

Now we derive the complete form of Stirling's formula. This will not be original, but we need to establish that it is complete. Binet's second expression for ln Γ(*z*) in [2] is

$$
\ln \Gamma(z) = F(z) + 2 \int\_0^\infty dt \, \frac{\arctan(t/z)}{e^{2\pi t} - 1}. \tag{7}
$$

By making a change of variable, *y* = 2*πt*, and noting that *z* is complex, we can then introduce (5). Replacing *k* by *k* + 1 yields

$$\ln \Gamma(z) - F(z) \equiv \frac{1}{\pi} \sum\_{k=1}^{\infty} \frac{(-1)^{k+1}}{(2k-1)} \left(\frac{1}{2\pi z}\right)^{2k-1} \int\_0^{\infty} dy \, \frac{y^{2k-1}}{e^y - 1}. \tag{8}$$

The left-hand side (lhs) of (8) is finite (convergent), while the rhs can be either divergent or convergent. From No. 3.411(1) in [10], the integral in the above equivalence is equal to Γ(2*k*)*ζ*(2*k*), where *ζ*(*z*) represents the Riemann zeta function. Thus, the above result becomes

$$\ln \Gamma(z) - F(z) \equiv 2z \sum\_{k=1}^{\infty} \frac{(-1)^{k+1}}{(2k-1)} \frac{\Gamma(2k) \zeta(2k)}{(2\pi z)^{2k}}.\tag{9}$$

From here on, *S*(*z*) denotes the series on the rhs. On the other hand, Paris and Kaminski [12,13] replace the terms on the lhs by Ω(*z*). With the aid of the reflection formula for the gamma function, the following continuation formula can be derived:

$$
\Omega(z) + \Omega\left(z e^{\pm i\pi}\right) = -\ln\left(1 - e^{\mp 2i\pi z}\right) \,\, . \tag{10}
$$

This enables one to obtain values of ln Γ(*z*) whenever *z* is situated in the left-hand complex plane via the corresponding values in the right-hand complex plane. Furthermore, the rhs will play an important role when the Stokes phenomenon is discussed later.
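Before proceeding, it is worth confirming numerically that the optimally truncated series in (9) reproduces ln Γ(*z*) − *F*(*z*) to high accuracy for a moderate real argument (a sketch; the tail-corrected ζ summation is my own shortcut, not from the text):

```python
import math

def F(z):
    # Stirling's formula (3)
    return (z - 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi)

def zeta(s, n_max=20000):
    # partial sum plus an Euler-Maclaurin tail correction
    tail = n_max ** (1 - s) / (s - 1) + 0.5 * n_max ** -s
    return sum(n ** -s for n in range(1, n_max)) + tail

def S_trunc(z, n_trunc):
    # partial sum of the asymptotic series (9)
    return 2.0 * z * sum((-1) ** (k + 1) / (2 * k - 1)
                         * math.gamma(2 * k) * zeta(2 * k)
                         / (2.0 * math.pi * z) ** (2 * k)
                         for k in range(1, n_trunc))

z = 5.0
assert abs(F(z) + S_trunc(z, 8) - math.lgamma(z)) < 1e-9
```

Seven terms suffice here because *z* = 5 lies well inside the regime where the terms of (9) are still decreasing.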

In order to continue with this study, the following definitions are required:

**Definition 1.** *An asymptotic (power) series is defined here as an infinite power series with zero radius of absolute convergence.*

**Definition 2.** *An asymptotic form is composed of: (1) a complete asymptotic expansion, which not only possesses all terms in a dominant asymptotic power series, e.g., S*(*z*) *above, but also all the terms in each subdominant asymptotic series, should they exist, and (2) the common sector or ray in the complex plane over which the argument of the variable in each series is valid.*

By truncating *S*(*z*) at *N* terms, we arrive at

$$S(z) = z \sum\_{k=1}^{N-1} \frac{(-1)^k}{(2z)^{2k}} \Gamma(2k - 1) \, c\_k(1) - 2z \sum\_{n=1}^{\infty} \sum\_{k=N}^{\infty} \frac{(-1)^k}{(2\pi n z)^{2k}} \, \Gamma(2k - 1),\tag{11}$$

where the first term will be denoted as $TS_N(z)$, *N* is the truncation parameter and $c_k(1)$ represents a specific value of the cosecant polynomials [14], given by

$$c\_k(1) = -2\zeta(2k)/\pi^{2k}.\tag{12}$$

The infinite series over *k* in the second term is known as a generalized Type I terminant [6]. Terminants were first introduced by Dingle [15] because he found that special functions often possess asymptotic series whose late coefficients exhibit gamma function growth, viz. Γ(*k* + *α*). A Type II terminant differs in that the coefficients possess an extra phase factor of (−1)*k*.
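The particular values of $c_k(1)$ quoted later in the text ($c_1(1) = -1/3$, $c_2(1) = -1/45$, $c_3(1) = -2/945$) follow directly from (12) and the even zeta values; a quick check (sketch, with a naive ζ partial sum):

```python
import math

def zeta(s, n_max=200000):
    return sum(n ** -s for n in range(1, n_max))   # naive partial sum

def c_k1(k):
    return -2.0 * zeta(2 * k) / math.pi ** (2 * k)   # Eq. (12)

assert abs(c_k1(1) + 1.0 / 3.0) < 1e-5     # c_1(1) = -1/3, since zeta(2) = pi^2/6
assert abs(c_k1(2) + 1.0 / 45.0) < 1e-9    # c_2(1) = -1/45
assert abs(c_k1(3) + 2.0 / 945.0) < 1e-12  # c_3(1) = -2/945
```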

The notation $S^{I}_{p,q}(N, z^{\beta})$ was introduced in [6] to denote the generalization of Dingle's Type I terminants, which is defined as

$$S_{p,q}^{I}\left(N, z^{\beta}\right) \doteq \sum_{k=N}^{\infty} (-1)^k\, \Gamma(pk+q)\, z^{\beta k}. \tag{13}$$

Alternatively, (11) can be expressed as

$$S(z) = z \sum\_{k=1}^{N-1} \frac{(-1)^k}{(2z)^{2k}} \Gamma(2k - 1) \, c\_k(1) - 2z \sum\_{n=1}^{\infty} S\_{2, -1}^{I} \left( N, \left( 1/2n \pi z \right)^2 \right). \tag{14}$$

Thus, *β* and *z* in (13) are equal to 2 and 1/2*nπz* in (14). Although [6] states that both *p* and *q* have to be positive and real, it is *N* + *q*/*p* which appears in the regularized value of a generalized terminant. Therefore, provided (*N* + *q*/*p*) > 0, the regularized value of the series still exists. Alternatively, *k* can be replaced by *k* + 1 in the infinite series in (14), in which case *q* equals unity. Since $S^{I}_{2,-1}(N, z^2) = -z^2\, S^{I}_{2,1}(N-1, z^2)$, we can apply the result in [6] to $S^{I}_{2,1}(N-1, z^2)$ instead.

According to Rule A in ([15], Chapter 1), Stokes lines occur whenever the arguments or phases of the variable result in the terms of an asymptotic series becoming homogeneous in phase and having the same sign. In the case of the generalized terminant in (13), this means that Stokes lines occur whenever $\arg(-z^{\beta}) = 2l\pi$, for *l*, an integer. Then the terms in either $S^{I}_{2,-1}(N, 1/z^2)$ or $S^{I}_{2,1}(N, 1/z^2)$ are all positively real. Because *l* is arbitrary, we can replace −1 by exp(−*iπ*). Thus, we find that the Stokes lines for *S*(*z*) occur whenever arg *z* = −(*l* + 1/2)*π*, i.e., at half-integer multiples of *π*.

The concept of a primary Stokes sector/line was introduced in [6] to indicate the first Stokes sector/line over which an asymptotic expansion is derived. It was also necessary to define asymptotic forms since two functions can have the same complete asymptotic expansion, but will still be different if the expansion applies over different primary Stokes sectors or lines. For example, in solving a problem for positive real values of the variable, one may obtain a generalized Type I terminant as the asymptotic solution. However, as the variable moves off the real axis, it will acquire subdominant semi-residue contributions of opposing signs in either direction as a result of the Stokes phenomenon. If, instead, the same asymptotic solution is obtained for positive imaginary values of the variable, then as the variable hits the positive and negative real axes, the asymptotic solution will acquire a semi-residue contribution. When the variable moves into the lower half of the complex plane, the asymptotic solution will acquire a full residue contribution. Clearly, both cases are different and will yield different values even though the same generalized Type I terminant was derived. Hence the original functions or solutions for these cases are different. In the first case the positive real axis becomes the primary Stokes line for the generalized Type I terminant, while in the second case, the upper half of the complex plane represents the primary Stokes sector. Then as more secondary Stokes sectors/lines are encountered either in a clockwise or anti-clockwise direction from the primary Stokes sector/line, more Stokes discontinuities arise at the boundaries. Although the choice of a primary Stokes sector/line is arbitrary, it will be taken here to be the Stokes sector/line situated in the principal branch of the complex plane, since most asymptotic expansions are derived under the condition that the variable lies initially in the principal branch of the complex plane.

Before we can regularize the asymptotic series, *S*(*z*), we require the following lemma:

**Lemma 2.** *Regularization of the Taylor series for the logarithmic function yields*

$$\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\, z^k \begin{cases} \equiv \ln(1+z), & \Re z \le -1, \\ = \ln(1+z), & \Re z > -1. \end{cases} \tag{15}$$

**Proof.** There is no need for the proof to appear here as it can be found in [16].

As in the first lemma, we can replace the equals sign by the less stringent equivalence symbol, which reduces the result to

$$\sum\_{k=1}^{\infty} \frac{(-z)^k}{k} \equiv -\ln(1+z), \quad \forall z. \tag{16}$$

With this result we can now regularize *S*(*z*), which will enable the asymptotic forms for ln Γ(*z*) to be derived.

**Theorem 1.** *As a result of the regularization of its asymptotic power series, the logarithm of the gamma function possesses the following asymptotic forms:*

$$\ln \Gamma(z) = F(z) + z \sum_{k=1}^{N-1} \frac{(-1)^k}{(2z)^{2k}}\, \Gamma(2k-1)\, c_k(1) + R_N^{SS}(z) + SD_M^{SS}(z), \tag{17}$$

*where the remainder* $R_N^{SS}(z)$ *is given by*

$$R\_N^{SS}(z) = \frac{2\left(-1\right)^{N+1}z}{(2\pi z)^{2N}} \int\_0^\infty dy \, y^{2N-2} e^{-y} \sum\_{n=1}^\infty \frac{1}{n^{2N-2}\left((y/2\pi z)^2 + n^2\right)}\tag{18}$$

*and the Stokes discontinuity term* $SD_M^{SS}(z)$ *is given by*

$$SD\_M^{SS}(z) = -\lfloor M/2 \rfloor \ln\left(-e^{\pm 2i\pi z}\right) - \frac{\left(1 - (-1)^M\right)}{2} \ln\left(1 - e^{\pm 2i\pi z}\right). \tag{19}$$

*The remainder is valid for either* (*M* − 1/2)*π* < *θ* = arg *z* < (*M* + 1/2)*π* *or* −(*M* + 1/2)*π* < *θ* < −(*M* − 1/2)*π*, *where M is a non-negative integer. However, the Stokes discontinuity term possesses two forms that are complex conjugates. The upper-signed version of (19) applies to* (*M* − 1/2)*π* < *θ* < (*M* + 1/2)*π*, *while the lower-signed version is valid over* −(*M* + 1/2)*π* < *θ* < −(*M* − 1/2)*π*. *For z lying on the Stokes lines, i.e., for θ* = ±(*M* + 1/2)*π*, $R_N^{SS}(z)$ *and* $SD_M^{SS}(z)$ *are replaced by* $R_N^{SL}(z)$ *and* $SD_M^{SL}(z)$*, respectively. Then the remainder is given by*

$$R_N^{SL}(z) = \frac{2z}{(2\pi|z|)^{2N-2}}\, P \int_0^\infty dy\, y^{2N-2}\, e^{-y} \sum_{n=1}^\infty \frac{1}{n^{2N-2}\left(y^2 - 4n^2\pi^2|z|^2\right)}, \tag{20}$$

*while the Stokes discontinuity term becomes*

$$SD\_M^{SL}(z) = (-1)^M \left( \lfloor M/2 \rfloor + \frac{1 - (-1)^M}{2} \right) 2\pi |z| - \frac{1}{2} \ln \left( 1 - e^{-2\pi |z|} \right). \tag{21}$$

In (20), *P* denotes the Cauchy principal value.

**Proof.** For brevity, the proof is not presented here as it can be found in [11].

The remainder in Theorem 1 is conceptually different from the remainder term in standard Poincaré asymptotics, which is expressed in terms of the Landau gauge symbol, O(·), or simply as trailing dots. In fact, (17) would typically be written as

$$\ln \Gamma(z) = F(z) - \frac{c\_1(1)}{4z} + \frac{c\_2(1)}{8z^3} - \frac{3c\_3(1)}{8z^5} + \mathcal{O}\left(\frac{1}{z^7}\right). \tag{22}$$

Moreover, by introducing *c*1(1) = −1/3, *c*2(1) = −1/45, *c*3(1) = −2/945 and *c*4(1) = −1/4725, into the above result, we obtain (2). For real values of *z*, (22) is referred to as a large *z* or *z* → ∞ expansion with the limit point at infinity. For *z* complex, it becomes a large |*z*| expansion. In other cases, where the Landau gauge symbol is omitted, a tilde often replaces the equals sign. Nevertheless, in all these representations it means that the later terms in the truncated power series have been neglected despite their eventual divergence past the optimal point of truncation.
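These four rational values tie (22) back to the Stirling series (2) exactly; with exact arithmetic (the cosecant-polynomial values are taken from the text, the bookkeeping is mine):

```python
from fractions import Fraction

# the cosecant-polynomial values quoted in the text
c1, c2, c3, c4 = (Fraction(-1, 3), Fraction(-1, 45),
                  Fraction(-2, 945), Fraction(-1, 4725))

# the coefficients of 1/z, 1/z^3, 1/z^5, 1/z^7 in (22) reproduce (2)
assert -c1 / 4 == Fraction(1, 12)
assert c2 / 8 == Fraction(-1, 360)
assert -3 * c3 / 8 == Fraction(1, 1260)
assert Fraction(45, 16) * c4 == Fraction(-1, 1680)   # next term: z Gamma(7) c_4(1) / (2z)^8
```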

#### **3. Numerical Analysis**

In the previous section the asymptotic forms for ln Γ(*z*) were derived via Borel summation. However, we still need to verify that these results yield exact values of the special function. This section aims to present such a numerical analysis. For the analysis to be effective, a large number of values of |*z*| is not required. This is because the results change across Stokes sectors or rays, but within each sector or on each line, they behave uniformly with respect to *z*. Thus, a few values of |*z*| are necessary for testing the validity of the asymptotic forms. In fact, only two values of |*z*| are necessary: a relatively large one, where the asymptotic series in (17) can be truncated, and a small one, where truncation breaks down completely. Then a range of values for both *N* and arg *z* or *θ* needs to be considered across the Stokes sectors and lines. Note also that selecting extremely large/small values of |*z*| may result in overflow or underflow problems in the numerical calculations. This would then give the misleading impression that the asymptotic forms are incorrect rather than implying a deficiency in the computing system. Since the variable in the asymptotic series is 1/(2*nπz*)<sup>2</sup> with *n* ranging from unity to infinity, |*z*| = 3 is deemed to be sufficiently large, while for |*z*| = 1/10, there is no optimal point of truncation. The second value is, therefore, sufficiently small to demonstrate the breakdown of standard Poincaré asymptotics.

Before undertaking the numerical analysis, let us present plots of ln Γ(*z*) to help the reader understand the nature of the function. Figure 1 displays graphs of the real part of the function for several fixed values of |*z*| used in this paper as a function of *θ* over (0, *π*). There we see that for the larger values of |*z*| the real part of ln Γ(*z*) dips to a minimum before it begins to grow dramatically, which is the rapid exponentiation mentioned in the introduction. The smaller values of |*z*| do not vary as much, although both are similar to the larger values of |*z*| in that they dip to a minimum and rise afterwards. Unlike the other graphs, the graph for |*z*| = 1/2 has a positive minimum and increases rather slowly.

**Figure 1.** The real part of ln Γ(*z*) as a function of *θ* between 0 and *π* for fixed values of |*z*|.

Figure 2 displays graphs of the imaginary part of ln Γ(*z*) for the same fixed values of |*z*| as a function of *θ* over (0, *π*). Here we see that the large values of |*z*| rise to a positive maximum before rapidly decreasing into the negative right quadrant. The plot for |*z*| = 9/10 does not attain a positive maximum, but decreases relatively slowly from the origin into the negative right quadrant. The graph for |*z*| = 1/2 follows that for |*z*| = 9/10 until about *θ* = *π*/2. Then it decreases faster than the |*z*| = 9/10 graph, but when *θ* is close to *π*, it rises until it meets the |*z*| = 9/10 graph at *θ* = *π*.

**Figure 2.** The imaginary part of ln Γ(*z*) as a function of *θ* between 0 and *π* for fixed values of |*z*|.

The optimal point of truncation, *NOP*, is determined by calculating the first value of the truncation parameter, *N*, at which successive terms in an asymptotic series begin to dominate the preceding terms. That is, it occurs at the first value of *k* where the (*k* + 1)-th term is greater than the *k*-th term in *S*(*z*), namely,

$$\left|\frac{2k(2k-1)}{(2z)^2}\frac{c\_{k+1}(1)}{c\_k(1)}\right| = \left|\frac{2k(2k-1)}{(2\pi z)^2}\frac{\zeta(2k+2)}{\zeta(2k)}\right| \approx 1.\tag{23}$$

Since the ratio of the Riemann zeta functions is close to unity, we observe that *NOP* occurs around *π*|*z*|. Therefore, for |*z*|=3, *NOP* will be close to 10, while for |*z*|=1/10, it does not exist, meaning that *NOP* = 0. In the latter case the first or leading term of the asymptotic series will yield the "nearest" value to ln Γ(*z*), but it will not be accurate. On the other hand, the larger *NOP* is, the more accurate truncation of the asymptotic series becomes.
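The criterion in (23) is easy to test numerically. Below is a minimal Python sketch, assuming a simple tail-corrected partial sum for the real zeta function; `optimal_truncation` returns the first *k* at which the ratio reaches unity, with 0 signalling that no optimal point exists.

```python
import math

def zeta(s, terms=2000):
    """Riemann zeta for real s > 1: partial sum plus an integral tail estimate."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def optimal_truncation(z, kmax=100):
    """First k at which the ratio test (23) reaches unity; 0 means no optimal point."""
    for k in range(1, kmax):
        ratio = (2 * k * (2 * k - 1)) / (2 * math.pi * z) ** 2 \
                * zeta(2 * k + 2) / zeta(2 * k)
        if ratio >= 1:
            return 0 if k == 1 else k
    return kmax
```

This rough criterion lands near *π*|*z*|: for |*z*| = 3 it gives a truncation point of 10, while for |*z*| = 1/10 the very first ratio already exceeds unity, i.e., *NOP* = 0.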

Typically, when a software package such as Mathematica [17] determines values of a special function, it only does so over the principal branch of the complex plane. Hence, the numerical analysis will be confined to arg *z* over (−*π*, *π*], which means in turn that the numerical analysis of (17) will only be conducted over the three Stokes sectors, −3*π*/2 < *θ* < −*π*/2, −*π*/2 < *θ* < *π*/2 and *π*/2 < *θ* < 3*π*/2, and the two Stokes lines at *θ* = ±*π*/2. In other words, only the *M* = 0 and *M* = ±1 results in Theorem 1 will be tested for the time being. By denoting the truncated sum in (17) by *TSN*(*z*), i.e.,

$$TS\_N(z) = z \sum\_{k=1}^{N-1} \frac{(-1)^k}{(2z)^{2k}} \Gamma(2k-1) \, c\_k(1),\tag{24}$$

we need to verify the following results:

$$\ln \Gamma(z) = \begin{cases} F(z) + TS\_N(z) + R\_N^{SS}(z) + SD\_1^{SS,U}(z), & \pi/2 < \theta \le \pi, \\ F(z) + TS\_N(z) + R\_N^{SL}(z) + SD\_0^{SL}(z), & \theta = \pi/2, \\ F(z) + TS\_N(z) + R\_N^{SS}(z), & -\pi/2 < \theta < \pi/2, \\ F(z) + TS\_N(z) + R\_N^{SL}(z) + SD\_0^{SL}(z), & \theta = -\pi/2, \\ F(z) + TS\_N(z) + R\_N^{SS}(z) + SD\_1^{SS,L}(z), & -\pi < \theta < -\pi/2. \end{cases} \tag{25}$$

In the above, the superscripts U and L have been introduced into the Stokes discontinuity terms in the Stokes sectors to indicate the upper- and lower-signed versions of (21). Although equal to zero, the Stokes discontinuity term for the third asymptotic form will be denoted as $SD_0^{SS}(z)$.

If we put *N* = 4 in the third result of (25) and neglect the final term or remainder, then we arrive at (2). However, the remaining terms in this result are now expressed as

$$R\_N^{\rm SS}(z) = \frac{2\left(-1\right)^{N+1}z}{(2\pi z)^{2N-2}} \sum\_{n=1}^{\infty} \frac{1}{n^{2N-2}} \int\_0^{\infty} dy \, \frac{y^{2N-2}e^{-y}}{\left(y^2 + 4\pi^2 n^2 z^2\right)}\,\tag{26}$$

and

$$R\_N^{SL}(z) = \frac{2z}{(2\pi|z|)^{2N-2}} \sum\_{n=1}^{\infty} \frac{1}{n^{2N-2}} \, P \int\_0^{\infty} dy \, \frac{y^{2N-2} e^{-y}}{y^2 - 4\pi^2 n^2 |z|^2}, \tag{27}$$

while the Stokes discontinuity terms are given by

$$SD\_1^{SS}(z) = -\ln\left(1 - e^{\pm 2\pi z i}\right),\tag{28}$$

and

$$SD\_0^{SL}(z) = -\frac{1}{2} \ln \left( 1 - e^{-2\pi|z|} \right). \tag{29}$$

Note the connection with Ω(*z*) mentioned below (9).

For the numerical analysis we need to consider the results over the Stokes sectors separately from those at the Stokes lines, since the latter require the evaluation of the Cauchy principal value and their Stokes discontinuity terms carry a factor of 1/2, compared with zero for |*θ*| < *π*/2 and unity for |*θ*| > *π*/2. Thus, ln Γ(*z*) will be evaluated via two different Mathematica modules: one involving the standard numerical integration routine called NIntegrate, and another where NIntegrate is adapted to evaluate only the Cauchy principal value.

When *θ* > 0, the Stokes discontinuity terms can be combined into one expression, denoted by *SD*+(*z*). This is given by

$$SD^{+}(z) = -S^{+} \ln\left(1 - e^{2\pi iz}\right) \,, \tag{30}$$

where the Stokes multiplier, *S*+, is written as

$$S^{+} = \begin{cases} 1, & \pi/2 < \theta \le \pi, \\ 1/2, & \theta = \pi/2, \\ 0, & -\pi/2 < \theta < \pi/2. \end{cases} \tag{31}$$

Similarly, the Stokes discontinuity terms in the lower half of the principal branch, *SD*−(*z*), can be written in terms of another Stokes multiplier, *S*−, as follows:

$$SD^{-}\left(z\right) = S^{-}\ln\left(1 - e^{-2\pi iz}\right),\tag{32}$$

where *S*− is given by

$$S^{-} = \begin{cases} 0, & -\pi/2 < \theta < \pi/2, \\ 1/2, & \theta = -\pi/2, \\ 1, & -\pi < \theta < -\pi/2. \end{cases} \tag{33}$$

From the above, we see that the Stokes multipliers are discontinuous, which is known as the conventional view of the Stokes phenomenon. However, an alternative view of the Stokes phenomenon arose in the late 1980s where they were no longer regarded as step-functions. Instead, it was proposed that they undergo a smooth, but rapid, transition from zero to unity, equalling 1/2 at the Stokes line [18]. Today, this is known as Stokes smoothing, despite the fact that Stokes never regarded the multipliers as being smooth [19]. According to this approach, first put forward by Berry and then made more "rigorous" by Olver [20], the Stokes multiplier reduces to the error function, erf(*z*). Later, Berry [21] and Paris and Wood [22] found an approximate form for the Stokes multipliers of ln Γ(*z*), which were given as

$$S^{\pm}(z) \sim \frac{1}{2} \pm \frac{1}{2} \operatorname{erf} \Big( (\theta \mp \pi/2) \sqrt{\pi |z|} \Big). \tag{34}$$

A graph of (34) for |*z*| = 3 versus *θ* is displayed in Figure 3 together with the conventional view, (31). For *θ* < 1, (34) is virtually zero, while for *θ* > 2, it is almost equal to unity. In between, however, the rapid smoothing occurs, with the greatest deviation from the step-function occurring in the vicinity of the Stokes line, where both views possess a common (green) point at (*π*/2, 1/2). If smoothing occurs, then Theorem 1 cannot possibly yield exact values of ln Γ(*z*), especially for *θ* between 13*π*/32 and 19*π*/32 excluding *π*/2.
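The two views of the multiplier can be sketched as follows. This is a minimal Python rendering of (31) and of the upper-signed form of (34) with the transition centred on *θ* = *π*/2; it is illustrative only and is not the code used in the paper.

```python
import math

def stokes_multiplier_smoothed(theta, mod_z):
    """Berry-type smoothed Stokes multiplier S+ from (34), upper-signed form."""
    return 0.5 + 0.5 * math.erf((theta - math.pi / 2) * math.sqrt(math.pi * mod_z))

def stokes_multiplier_conventional(theta):
    """Step-function multiplier S+ of (31) on the principal branch."""
    if theta > math.pi / 2:
        return 1.0
    if theta == math.pi / 2:
        return 0.5
    return 0.0
```

Both versions give 1/2 exactly at *θ* = *π*/2, while at *θ* = 13*π*/32 and 19*π*/32 the smoothed multiplier still deviates appreciably from 0 and 1, which is the region where the two views can be distinguished.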

We can establish the correct view by calculating ln Γ(*z*) for *θ* between 13*π*/32 and 19*π*/32 using (30), since smoothing implies that (30) cannot possibly yield exact values of ln Γ(*z*). However, if we do obtain exact values of ln Γ(*z*), then we know that the conventional view holds and smoothing is a fallacy. The problem with testing (34) directly is that it applies to much larger values of |*z*| than 3; the proponents of smoothing have not provided the form for smaller values of |*z*|. For very large values of |*z*|, truncating the asymptotic expansion at a few terms will yield very accurate values of ln Γ(*z*), which can obscure both views unless an extremely high-precision and time-consuming analysis is undertaken. Hence much smaller values of |*z*| will be considered in (30), so that the Stokes discontinuity term can no longer be neglected.

**Figure 3.** The conventional Stokes multiplier *<sup>S</sup>*<sup>+</sup> (blue) vs. the smoothed version (red) for <sup>|</sup>*z*<sup>|</sup> <sup>=</sup> 3 as a function of *θ*.

Before Stokes smoothing can be investigated, we must show that (24) behaves as a typical asymptotic expansion. That is, we must show that for large values of |*z*|, the remainder can be neglected to yield accurate, but nevertheless approximate, values of ln Γ(*z*) up to and not very far from the optimal point of truncation, while for small values of |*z*|, it is simply invalid to neglect the remainder. For this demonstration we do not require the Stokes discontinuity terms. Thus, we shall study the asymptotic series for |*θ*| < *π*/2, in particular *θ* = 0, because it does not require complex arithmetic.
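As an illustration of this typical behaviour, the following Python sketch evaluates *F*(*z*) + *TS*<sub>*N*</sub>(*z*) with the remainder neglected. It assumes the identification *c*<sub>*k*</sub>(1) = −2ζ(2*k*)/*π*<sup>2*k*</sup>, which reproduces both the ratio in (23) and the leading 1/(12*z*) term of Stirling's series; the simple tail-corrected zeta sum is likewise an assumption of the sketch.

```python
import math

def zeta(s, terms=2000):
    """Riemann zeta for real s > 1: partial sum plus an integral tail estimate."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def c(k):
    """Assumed closed form for c_k(1); consistent with the ratio in (23)."""
    return -2.0 * zeta(2 * k) / math.pi ** (2 * k)

def truncated_ln_gamma(z, N):
    """Stirling's formula F(z) plus the truncated sum TS_N(z) of (24), remainder neglected."""
    F = (z - 0.5) * math.log(z) - z + 0.5 * math.log(2 * math.pi)
    TS = sum(z * (-1) ** k / (2 * z) ** (2 * k) * math.gamma(2 * k - 1) * c(k)
             for k in range(1, N))
    return F + TS
```

Near the optimal point of truncation (*N* = 10 for *z* = 3), this reproduces ln Γ(3) = ln 2 to roughly nine decimal places, whereas for *z* = 1/10 the truncated series misses ln Γ(1/10) by many orders of magnitude, which is exactly the breakdown described above.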

From (26) we see that the evaluation of the remainder involves two computationally intensive tasks. The first is the infinite sum over *n*, which arose due to an infinite number of singularities lying on each Stokes line. The second issue is the numerical integration of the exponential integral. The latter can be avoided by decomposing the denominator into partial fractions and using No. 3.383(10) from [10]. For |*θ*| < *π*/2, one then obtains

$$R\_N^{SS}(z) = \frac{\Gamma(2N-1)}{2\pi i} \sum\_{n=1}^{\infty} \frac{1}{n} \left( e^{-2\pi nzi} \, \Gamma(2-2N, -2\pi nzi) - e^{2\pi nzi} \, \Gamma(2-2N, 2\pi nzi) \right). \tag{35}$$

The above result can also be obtained by combining (4.3), (4.10) and (4.11) in [22].

A module was written to evaluate ln Γ(*z*) in Mathematica with the remainder given by (35) and *n* set to an upper limit of 10<sup>5</sup> to ensure 50-figure accuracy. Table 1 displays a small sample of the results obtained from the code. For more details about the code including its performance and listing as well as other results, the reader should consult [11]. Note that all the results are real, which is to be expected since ln Γ(3) = ln 2. In actual fact, Mathematica printed out a tiny imaginary part with each value, but it was often zero to the first 50+ decimal places and thus was discarded. The appearance of these tiny imaginary values indicates the size of the numerical error. The few cases where the errors were less than 50 decimal places will be discussed shortly.


**Table 1.** ln Γ(3) via (35) for various values of the truncation parameter, *N*.

The first column displays the values of the truncation parameter, *N*, for each calculation. The second row in the table gives the value of Stirling's formula for *z* = 3, which only agrees with the actual value of ln Γ(3) at the bottom of the table to the first decimal place. For each value of *N* there are three rows. The first row labelled TS displays the value of the truncated sum in (24), while the row labelled $R_N^{SS}(3)$ presents the value of the remainder given by (35) with the upper limit set to 10<sup>5</sup>. The third row labelled Sum is the sum of Stirling's formula, the truncated sum and the remainder. It yields the same value of ln 2 as at the bottom of the table except for *N* = 2.

For *N* = 2, the truncated sum and remainder equal 0.027777··· and −9.985209···× 10<sup>−5</sup>, respectively. When they are summed with *F*(3), they yield a value that agrees with ln Γ(3) to 19 decimal places, which is well below the 50-decimal-figure accuracy mentioned above and nowhere near as accurate as other results such as *N* = 50. The reason this has occurred is that the factor of *n*<sup>2*N*−2</sup> in the denominator of (26) affects the calculation of the remainder for small values of *N* such as 1 or 2. In these cases the upper limit of 10<sup>5</sup> needs to be increased substantially to improve the accuracy, which does not apply for higher values of *N*.
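The *N* = 2 entries can be reproduced independently of Mathematica. The sketch below evaluates the remainder (26) at *θ* = 0 by composite Simpson quadrature, truncating the *n*-sum at 200 terms; both the quadrature rule and the truncation point are choices made here, sufficient because the summand decays like *n*<sup>−4</sup> when *N* = 2.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def remainder_ss(z, N, nmax=200):
    """Regularized remainder (26) for real z > 0 (theta = 0), n-sum truncated at nmax."""
    pref = 2 * (-1) ** (N + 1) * z / (2 * math.pi * z) ** (2 * N - 2)
    total = 0.0
    for n in range(1, nmax + 1):
        a2 = (2 * math.pi * n * z) ** 2
        integrand = lambda y: y ** (2 * N - 2) * math.exp(-y) / (y ** 2 + a2)
        total += simpson(integrand, 0.0, 50.0) / n ** (2 * N - 2)
    return pref * total

F3 = 2.5 * math.log(3) - 3 + 0.5 * math.log(2 * math.pi)  # Stirling's formula F(3)
TS2 = 1.0 / 36.0                                          # truncated sum TS_2(3)
R2 = remainder_ss(3, 2)
```

The computed remainder comes out near −9.985 × 10<sup>−5</sup>, and F3 + TS2 + R2 then agrees with ln Γ(3) = ln 2 to the accuracy of the quadrature.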

The remainder is smallest in magnitude when *N* = 11, which agrees with our estimate below (23) for the optimal point of truncation, *NOP*. For *N* = *NOP*, the sum of the values only differs from the actual value of ln Γ(3) at the fifty-third decimal place. Moreover, for *N* close to *NOP*, there is little deterioration in the accuracy, but for *N* = 30 and 50, well past *NOP*, the remainder dominates, whereas in the other calculations, it is small. This is consistent with standard Poincaré asymptotics, where the remainder is neglected. Therefore, for all but the last two calculations, Stirling's formula yields the main contribution to ln Γ(3). For the last two values of *N*, the truncated sum and remainder dominate, but their divergence is cancelled out. For example, when *N* = 50, the remainder and truncated sum are O(10<sup>25</sup>). Hence the first 26 decimal places of both quantities cancel each other, thereby enabling Stirling's formula to become the main contribution. Unfortunately, losing these decimal places produces an imaginary term that is zero to a reduced number of decimal places, 23 instead of 50+ as mentioned above.
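The quoted order of magnitude is easy to check. The sketch below, again assuming the identification *c*<sub>*k*</sub>(1) = −2ζ(2*k*)/*π*<sup>2*k*</sup>, evaluates the size of the last term retained in (24) when *N* = 50 and *z* = 3.

```python
import math

def zeta(s, terms=2000):
    """Riemann zeta for real s > 1: partial sum plus an integral tail estimate."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def series_term(z, k):
    """Magnitude of the k-th term of (24), assuming c_k(1) = -2*zeta(2k)/pi**(2k)."""
    c_k = 2.0 * zeta(2 * k) / math.pi ** (2 * k)
    return z * math.gamma(2 * k - 1) * c_k / (2 * z) ** (2 * k)

last_term = series_term(3, 49)  # last term kept when N = 50
```

Its magnitude comes out of order 10<sup>25</sup>, consistent with the massive cancellation between the truncated sum and the remainder described above.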

Now consider *z* = 1/10, which is unheard of in standard Poincaré asymptotics and also in the hyperasymptotic calculations of [12,18,21,23,24]. Furthermore, Paris [13] has specifically carried out a hyperasymptotic calculation of ln Γ(*z*) using Hadamard expansions for Ω(*z*). Depending on the number of chosen levels, his results are accurate at best to 10<sup>−45</sup> for *z* > 8 (*NOP* > 25). Hence Table 1 displays far more accurate results, but with *z* = 3.

Table 2 presents a sample of results for *z* = 1/10 in the third asymptotic form of (25) with $R_N^{SS}(z)$ given by (35). In this case Stirling's formula is nowhere near as accurate as in Table 1. Except for *N* = 2, adding the truncated series to Stirling's formula worsens the accuracy. This has arisen because there is no optimal point of truncation. Therefore, the remainder must be evaluated. Because the remainder diverges far more rapidly in this case, there is a greater cancellation of decimal places than in Table 1. Thus, the total values in Table 2 are generally not as accurate, the exception being very low values of *N*. Despite this, these results could not have been achieved without regularization.


**Table 2.** ln Γ(1/10) via (35) for various values of the truncation parameter, *N*.

Now we assume that the routine, Gamma[*N*, *z*], does not exist in Mathematica. Then a new program implementing the first, third and fifth asymptotic forms in (25) with the remainder given by (26) is required. As before, the upper limit in the sum will be set to 10<sup>5</sup>. To calculate each term in the remainder, the program, which appears as the second program in the appendix of [11], employs NIntegrate inside a Do loop. Since it is a different approach for calculating ln Γ(*z*), it can be used to check the results in Table 1. The version in [11] has the precision and accuracy goals set to 30 for thirty-figure accuracy, which means, in turn, that the working precision must be set to a much higher level, e.g., 60. Higher values for these options can be set, but this comes at the expense of computing time. The integrand employed in NIntegrate is called Intgrd and is basically the integrand in (26). Due to lack of space, the calculated quantities are displayed here to 25 decimal places, although they were frequently far more accurate. In addition, unlike the previous calculations, we consider complex values of *z*, i.e., *θ* takes on values within the principal branch of the complex plane except at ±*π*/2. For brevity, only |*z*| = 1/10 is presented here. The results for |*z*| = 3 appear in Table 3 of [11].

Table 3 presents a very small sample of the results obtained by running the second program in the appendix of [11] with |*z*|=1/10. Although positive values of *θ* were considered, only negative values are displayed here. In the table, there are six results for each value of *N* and *θ*. Stirling's formula is represented by the first value. The second value, denoted by TS, represents the value of the truncated sum in (24), while the third value is the regularized remainder, (26), as evaluated via NIntegrate. The fourth value for each calculation of ln Γ(*z*) is the Stokes discontinuity term, which according to (32) is zero for |*θ*| < *π*/2 and is purely logarithmic for *θ* over (−*π*, −*π*/2). The fifth value, denoted by Total, represents the sum of the four preceding values, while the final value is the actual value of ln Γ(*z*) using LogGamma[*z*] in Mathematica.

Since there is no optimal point of truncation, the results in Table 3 for *N* > 3 are mainly dominated by the truncated sum and its regularized remainder. In fact, both values dominate so much that many decimal places are cancelled as observed for *N* = 30 and 50 in Table 1. Once again, pressure is being put on the accuracy of the final total. For example, for *N* = 9 and *θ* = −6*π*/13, both the truncated sum and the regularized remainder are O(10<sup>13</sup>), which means a loss of thirteen decimal places when they are summed. Since the accuracy and precision goals were set to 30, this implies that the sum of the truncated series and the regularized remainder should only be accurate to 17 decimal places. Fortunately, the total value agrees with the value of ln Γ(*z*) to 28 decimal places because the working precision was set much higher (to 60) than the precision and accuracy goals.


**Table 3.** ln Γ(*z*) via (25) with |*z*|=1/10 for various values of the truncation parameter, *N*, and arg*z*.

Although *F*(exp(*iθ*)/10) provides a substantial contribution to ln Γ(exp(*iθ*)/10), it is no longer accurate. The truncated sum is capable of improving the accuracy slightly for small values of the truncation parameter. For example, when the truncated sum is added to *F*(*z*) for *N* = 3 and *θ* = −*π*/6, the real part is closer to the real part of ln Γ(exp(−*iπ*/6)/10), but not so the imaginary part. In fact, all the results are dominated by the truncated sum and its regularized remainder, but since they act against each other, their sum is not as large as Stirling's formula. Nevertheless, one cannot neglect the remainder as in standard Poincaré asymptotics. In order to obtain the exact value of ln Γ(exp(−*iπ*/6)/10) via (25), the remainder must counterbalance the truncated sum, which will only occur if the regularization has been performed correctly. When the regularized remainder is included in the total, exact values of ln Γ(exp(*iθ*)/10) are obtained. For *θ* < −*π*/2, however, the Stokes discontinuity term must be included. In fact, $SD_1^{SS,L}(z)$ is greater than the sum of the truncated series and the regularized remainder, which highlights its importance outside the primary Stokes sector.

So far, we have managed to verify the asymptotic forms in (25) connected with Stokes sectors. Now we consider the asymptotic forms for the two Stokes lines. As *θ* is fixed in both asymptotic forms, the Stokes discontinuity term will only depend upon |*z*|. In other words, it is solely real. Furthermore, since $TS_N(z)$ depends only on odd powers of *z* in (24), $TS_N(z)$ and $R_N^{SL}(z)$ must be imaginary along both Stokes lines. This is consistent with Rule D in ([15], Chapter 1), which states that an asymptotic series crossing a Stokes line generates a discontinuity that is *π*/2 out of phase with the series on the line.
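This parity claim can be confirmed in a few lines of Python. With *c*<sub>*k*</sub>(1) identified as before (an assumption of the sketch), every term of (24) at *z* = 3 exp(*iπ*/2) = 3*i* is an odd power of a purely imaginary number, so the truncated sum has no real part up to rounding.

```python
import math

def zeta(s, terms=2000):
    """Riemann zeta for real s > 1: partial sum plus an integral tail estimate."""
    return sum(n ** -s for n in range(1, terms + 1)) + terms ** (1 - s) / (s - 1)

def ts(z, N):
    """Truncated sum TS_N(z) of (24), assuming c_k(1) = -2*zeta(2k)/pi**(2k)."""
    total = 0j
    for k in range(1, N):
        c_k = -2.0 * zeta(2 * k) / math.pi ** (2 * k)
        total += z * (-1) ** k / (2 * z) ** (2 * k) * math.gamma(2 * k - 1) * c_k
    return total

value = ts(3j, 8)  # z = 3 exp(i*pi/2), on the upper Stokes line
```

The real part of `value` vanishes while its imaginary part is of order 1/36, the leading Stirling term at |*z*| = 3.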

The third code in the appendix of [11] implements the second and fourth asymptotic forms of (25) in Mathematica. This program is very different from the previous program because it includes a Which statement in the Do loop. This is necessary because the singularity in the Cauchy principal value integral in (27) alters with each value of *n*. Moreover, the integral has been divided into several smaller intervals in order to achieve the best possible accuracy. The interval in which the singularity is situated is then determined via the Which statement. This interval is, in turn, divided into two intervals to avoid the singularity in accordance with the definition of the principal value. To ensure that the principal value is evaluated without encountering convergence problems, the option Method -> PrincipalValue must also be introduced into NIntegrate. Finally, in order to achieve the same accuracy as in Table 3, WorkingPrecision has been increased to 80. Hence the program takes much longer to execute.

Table 4 presents a sample of the results generated by running the third program in [11] with the variable modz set equal to 3. A similar set of calculations was performed for modz equal to 1/10, whose results appear in Table 6 of [11], but for brevity, they are not presented here. Although both Stokes lines were considered by putting the variable theta in the program equal to ±Pi/2, only the results for positive values of theta are presented here, again for the sake of brevity. The calculations took much longer for larger values of the truncation parameter, ranging from 26 hrs for *N* = 1 to 47.5 hrs for *N* = 50. Because the values of *F*(3 exp(*iπ*/2)) and $SD_0^{SL}(3\exp(i\pi/2))$ are independent of the truncation parameter, they only appear once at the top of the table, while their sum appears immediately below them in the row labelled Combined. As stated above, the Stokes discontinuity term is purely real, whereas the truncated sum and regularized value of the remainder are purely imaginary. Therefore, the real part of the value in the Combined row represents the real part of ln Γ(3 exp(*iπ*/2)), which can be checked by comparing it with the real part of ln Γ(3 exp(*iπ*/2)) at the bottom of the table. Thus, the Stokes discontinuity term only corrects the real part of Stirling's formula on a Stokes line. On the other hand, the imaginary part of ln Γ(3 exp(*iπ*/2)) can only be calculated exactly if the regularization of (25) has been performed correctly. The last decimal figure of the imaginary part of ln Γ(3 exp(*iπ*/2)) was printed out as a 6 instead of a 5, because the accuracy was set to 25 decimal places in the output stage. Since more than 25 figures appear in the table, this statement should have been modified to consider a higher level of accuracy. Therefore, we should only be worried when the results agree for less than 25 decimal places. The redundant places have been introduced here to indicate that the results in the Total column have been computed via a different approach from the LogGamma routine in Mathematica at the bottom of the table. That is, we should expect differences to occur at some stage, but only outside the specified level of accuracy.
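The purely real correction supplied by $SD_0^{SL}$ on the Stokes line can be verified without any complex-gamma software: for *z* = *iy*, the classical reflection identity |Γ(*iy*)|² = *π*/(*y* sinh *πy*) gives Re ln Γ(*iy*) in closed form, and it matches Re *F*(*iy*) + $SD_0^{SL}$ exactly. A Python sketch for *y* = 3:

```python
import cmath
import math

y = 3.0
z = complex(0.0, y)  # z = 3*exp(i*pi/2), on the upper Stokes line

# Stirling's formula F(z) = (z - 1/2) ln z - z + (1/2) ln(2*pi)
F = (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2 * math.pi)

# Stokes discontinuity term (29) on the Stokes line
SD = -0.5 * math.log(1.0 - math.exp(-2 * math.pi * y))

# Exact Re ln Gamma(iy) from the identity |Gamma(iy)|^2 = pi/(y*sinh(pi*y))
exact_re = 0.5 * math.log(math.pi / (y * math.sinh(math.pi * y)))
```

Since the truncated sum and the remainder are purely imaginary on the line, the real part closes exactly, in agreement with the discussion of the Combined row of Table 4.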

In the table we see that the regularized value of the remainder decreases steadily until the truncation parameter reaches *NOP* around 11, before it begins to diverge. Note that the imaginary part of the Total value for *N* = 1 is only accurate to 6 decimal places compared with the imaginary part of ln Γ(3 exp(*iπ*/2)). As discussed previously, this arises because the power of *n* in the denominator of $R_1^{SL}(z)$ is zero when *N* is equal to unity. Though not displayed in the table, the remainder at the optimal point of truncation, $R_{11}(3\exp(i\pi/2))$, has a minimum magnitude of O(10<sup>−11</sup>). Beyond this point, the magnitude of the regularized value of the remainder increases so that it is O(10<sup>−6</sup>) for *N* = 20. By the time *N* = 30, both the truncated series and regularized value of the remainder dominate the calculation, but since they act against each other, they combine to yield the extra imaginary value enabling the imaginary part in the Combined row to agree with ln Γ(3 exp(*iπ*/2)). In fact, the most surprising result in the table is the last result for *N* = 50 because at least 25 decimal places cancel before we obtain the regularized value for the entire asymptotic series. As mentioned previously, the cancellation of these decimal places puts pressure on the accuracy and precision goals, which have been set to 30, as stated above. Fortunately, because WorkingPrecision was set to 80, it appears that the neglected terms in setting a limit of 10<sup>5</sup> in the summation are negligible. Thus, the remainder has been evaluated to a much greater accuracy than specified by the accuracy and precision goals in the program. Consequently, the Total value for *N* = 50 agrees with the actual value of ln Γ(3 exp(*iπ*/2)).


So far, we have not seen any evidence of Stokes smoothing as espoused by Berry [18], Olver [20] and Paris, Kaminski and Wood [12,13,25]. As indicated earlier, smoothing implies that there is no discontinuity in the vicinity of a Stokes line, whereas we have been able to obtain exact values of ln Γ(*z*) near Stokes lines assuming the existence of a discontinuity. Because such smoothing occurs rapidly in the vicinity of Stokes lines, it could perhaps be argued that we have not investigated the asymptotic behaviour of ln Γ(*z*) sufficiently close to the Stokes lines. If a rapid transition does occur, then it means that we have still not exactified the Stokes approximation in the vicinity of the Stokes lines. From Figure 3, which represents the situation for |*z*| = 3, Stokes smoothing is expected to be most pronounced for *θ* lying between 13*π*/32 and 19*π*/32. Alternatively, the Stokes multiplier is expected to be quite close to 1/2 for small values of *δ*, where *θ* = *π*(1/2 + *δ*) and |*δ*| < 3/32. On the other hand, if the conventional view of the Stokes phenomenon is valid, then the Stokes multiplier *S*<sup>+</sup> will equal unity for 0 < *δ* < 1 and zero for −1 < *δ* < 0 according to (31). Thus, a narrow region of positive and negative values of *δ* exists, where one of the views can be disproved. In summary, introducing very small values of *δ* into the respective asymptotic forms of (25) should not yield exact values of ln Γ(*z*) if smoothing occurs since the Stokes multiplier should be close to 1/2 and not toggle between zero and unity according to the sign of *δ*.

Table 5 presents a small sample of the results obtained by running the second program in [11] for |*z*| = 3 and various values of *δ*, where *θ* = (1/2 + *δ*)*π*. The code was run for different values of *N* except those close to unity for the reason given above. For each positive value of *δ*, there are three rows of values, while for each negative value there are only two rows because the Stokes discontinuity term is zero. The first row for each value of *δ*, labelled LogGamma[*z*] in the Method column, represents the value obtained from the LogGamma routine in Mathematica. Depending upon the sign of *δ*, the second row displays the Stokes discontinuity term. In general, this term was found to possess real and imaginary parts of O(10<sup>−8</sup>) or even a couple of orders lower. The next value for each value of *δ* is labelled either 1st AF or 3rd AF in the Method column according to whether the first or third asymptotic form in (25) was used to calculate the value of ln Γ(*z*). For brevity, the values of the truncated sum, the regularized value of the remainder and Stirling's formula do not appear in the table.


It should be noted that when |*δ*| is extremely small, e.g., O(10<sup>−5</sup>), NIntegrate experiences convergence problems because the integration is now too close to the singularities on the Stokes line. For example, when *δ* = 10<sup>−5</sup>, the program printed out a value that agreed with the actual value to 25 decimal places for the real part, but the imaginary part only agreed to 18 decimal places. Although this calculation is not presented in the table, it does represent a degree of success since the imaginary part of the Stokes discontinuity term is O(10<sup>−12</sup>). That is, the Stokes discontinuity term still had to be correct to the first six decimal places for the agreement to occur at the 18th decimal place.

For *δ* > 0 in the table, the first asymptotic form in (25) yields the exact value of ln Γ(*z*) even though the Stokes discontinuity term is very small. Nevertheless, in the case of Stokes smoothing, this term should be almost half the values appearing in the table. For *δ* < 0, if smoothing occurs, then the third asymptotic form in (25) should also not yield exact values of ln Γ(*z*) because it is missing almost half the Stokes discontinuity term. Yet, we observe the opposite; the third asymptotic form yields exact values of ln Γ(*z*) for *δ* < 0. Therefore, Stokes smoothing does not occur. These results are discussed in more detail in [26].

An explanation why Stokes smoothing is fallacious appears in ([6], Section 6.1), where it is shown that the form for the Stokes multiplier given by Berry and Olver is based on applying standard asymptotic techniques. Olver's "rigorous proof" [20] involves truncating an asymptotic series via Laplace's method. Since only the lowest order terms are retained in this approach, Olver arrives at the error function result for the Stokes multiplier. The neglected higher order terms are not only divergent, but are also extremely difficult to regularize. If they could be regularized, then they would produce the necessary corrections to turn the smooth function in Figure 3 into a step-function, thereby confirming the conventional view of the Stokes phenomenon.

#### **4. Mellin–Barnes Regularization**

In the preceding section we were able to exactify Stirling's formula by carrying out hyperasymptotic calculations of the asymptotic forms in (25). However, there were two drawbacks with the numerical analysis. The first is that an upper limit was applied to the infinite sums appearing in the expressions for the regularized value of the remainder. Despite this, the regularized values were extremely accurate for an upper limit of 10<sup>5</sup> in (26) and (27). This results in the second drawback, the considerable effort required to calculate the remainder. Ideally, we do not want to truncate any result here so that we can dispel any doubt that we are evaluating an approximation. If the infinite sum over *n* can be replaced by a single result, then there will be a huge reduction in the execution time since there would be only one call to NIntegrate. Such an expression emerges when we consider Mellin–Barnes regularization of ln Γ(*z*) in the following theorem.

**Theorem 2.** *Via the Mellin–Barnes (MB) regularization of the asymptotic series S*(*z*) *given by either (9) or (11), the logarithm of the gamma function can be expressed as*

$$\ln \Gamma(z) = \left(z - \frac{1}{2}\right) \ln z - z + \frac{1}{2} \ln(2\pi) + z \sum\_{k=1}^{N-1} \frac{(-1)^k}{(2z)^{2k}} \Gamma(2k - 1) c\_k(1)$$

$$-2z \int\_{c-i\infty}^{c+i\infty} ds \, \left(\frac{1}{2\pi z}\right)^{2s} \frac{e^{\pm 2Mi\pi s}}{e^{-i\pi s} - e^{i\pi s}} \, \zeta(2s) \, \Gamma(2s - 1) + S\_{MB}(M, z), \qquad c > \operatorname{Max}[N-1, 1/2], \tag{36}$$

*where*

$$S\_{MB}(M, z) = \pm \left\lfloor M/2 \right\rfloor \ln \left( -e^{-2i\pi z} \right) - \left( \frac{1 - (-1)^M}{2} \right) \ln \left( 1 - e^{\pm 2i\pi z} \right),\tag{37}$$

*for* (±*M* − 1)*π* < *θ* = arg *z* < (±*M* + 1)*π, and M* ≥ 0*, but excluding θ equal to half-integer values of π. The strips involving θ represent domains of convergence for the MB integral in (36) with the upper-signed forms applying to positive θ and the lower-signed ones to negative θ. For θ* = ±(*M* − 1/2)*π, SMB*(*M*, *z*) *reduces to*

$$S\_{MB}(M, z) = \left[ \left( \frac{(-1)^M - 1}{2} \right) \ln \left( 1 - e^{-2\pi|z|} \right) + 2(-1)^{M+1} \left\lfloor \frac{M}{2} \right\rfloor \pi \left| z \right| \right],\tag{38}$$

*while for θ* = ±(*M* + 1/2)*π, it is given by*

$$S\_{MB}(M, z) = \left[ \left( \frac{(-1)^M - 1}{2} \right) \ln \left( 1 - e^{-2\pi|z|} \right) + 2\pi |z| \left( (-1)^M \left\lfloor \frac{M}{2} \right\rfloor + \frac{(-1)^M - 1}{2} \right) \right].\tag{39}$$

**Proof.** For the sake of brevity, the proof is omitted as it appears in ([11], Section 4).

Comparing the above results with those in Theorem 1, we see that not only is the remainder of the asymptotic series in (11) expressed as an MB integral, but there are also no discontinuities from crossing Stokes lines. Instead, the MB integral is valid over a strip or domain of convergence with the Stokes lines situated inside the domains of convergence. Although (38) and (39) apply at half integer values of *π*, they no longer represent Stokes lines as in Theorem 1. They have been isolated here as a result of the MB regularization of *S*(*z*) since ln Γ(*z*) itself possesses jump discontinuities at *θ* = (*l* + 1/2)*π*, for *l*, an integer not equal to 0 or −1. Thus, MB regularization produces a totally different representation of the original function from its asymptotic forms, and relies on the continuity of the function. If the original function possesses discontinuities as ln Γ(*z*) does, then the MB-regularized value will not yield the value of the original function unless the analysis is adapted as explained in the proof.

Since Stokes multipliers do not appear in the MB regularization of ln Γ(*z*) for *θ* = ±*π*/2, this implies that the Stokes discontinuities obtained by Borel summation can be fictitious. That is, although we observed jumps in the Stokes multipliers at *θ* = ±*π*/2, it does not mean that ln Γ(*z*) is necessarily discontinuous there. In fact, discontinuities will only occur at Stokes lines if the original function possesses singularities on them. In the case of ln Γ(*z*) singularities only occur when *θ* = ±(*l* + 1/2)*π* and *l* > 0.

Another feature of the above results is that the sum over *n* has vanished. It has effectively been replaced by the Riemann zeta function. As a consequence, we now only have one integral to evaluate the remainder in (11). This will save much computational effort provided that the software package is able to evaluate the zeta function extremely accurately. Fortunately, this is accomplished using the Zeta routine in Mathematica [17].

Although the results in Theorem 2 have been proven, as in the case of Theorem 1, we cannot be certain that they are indeed valid because we have observed in the case of "Stokes smoothing" that proofs in asymptotics are not reliable unless they are verified by numerical analysis. Since the results in Theorem 1 have already been validated, we can use them to establish the validity of the MB-regularized forms in Theorem 2. Therefore, the next section presents a numerical analysis where the MB-regularized forms for ln Γ(*z*3) are matched with the corresponding Borel-summed forms in Theorem 1.

#### **5. Further Numerical Analysis**

According to the definition of the regularized value [4–7], it must be invariant irrespective of how it is obtained. Therefore, we need to demonstrate that the MB-regularized forms in Theorem 2 yield identical values to the Borel-summed forms in Theorem 1, especially for the higher Stokes sectors and lines not studied previously. To access the higher/lower sectors or lines, higher powers of the variable *z* need to be considered, such as *z*<sup>3</sup> in ln Γ(*z*). This is tantamount to finding an asymptotic solution to a problem which happens to yield the asymptotic forms of ln Γ(*z*3). In this case the principal branch is still (−*π*, *π*], but Mathematica is only able to evaluate ln Γ(*z*3) for *θ* over (−*π*/3, *π*/3].

From Theorem 2, two different representations exist for the regularized value of ln Γ(*z*) since replacing *M* by either *M* − 1 or *M* + 1 in (36) produces a different asymptotic form, each valid over one half of the domain of convergence for the original value of *M*. For example, the upper-signed version of (36) is valid for *π* < *θ* < 3*π* when *M* = 2, while for *M* = 1 and *M* = 3, it is only valid over 0 < *θ* < 2*π* and 2*π* < *θ* < 4*π*, respectively. Thus, the *M* = 1 result is valid for the bottom half of the domain of convergence for *M* = 2, while the *M* = 3 result applies to the top half of the domain of convergence for *M* = 2. This means that we are not only able to evaluate ln Γ(*z*) for higher/lower values of *θ* or arg *z*, but we can check the results against the asymptotic forms from overlapping domains of convergence. In addition, the *M* = 0 results can be checked with the values of ln Γ(*z*3) evaluated by Mathematica. Finally, we can check to see whether the MB-regularized forms of ln Γ(*z*3) yield identical values to the corresponding Borel-summed asymptotic forms in Theorem 1. Previously, we had no method of checking whether the Borel-summed asymptotic forms for ln Γ(*z*) outside the principal branch were correct. Now this problem can be tackled by comparing the resulting Borel-summed asymptotic forms, when *z* is replaced by a power of itself, with the corresponding MB-regularized forms.

If *z* is replaced by *z*<sup>3</sup>, then for *M* = 0, or −*π*/3 < *θ* < *π*/3, (36) becomes

$$
\ln \Gamma \left( z^3 \right) = F \left( z^3 \right) + T \mathcal{S}\_N \left( z^3 \right) + \Delta \ln \Gamma \left( z^3 \right), \tag{40}
$$

where

$$
\Delta \ln \Gamma \left( z^3 \right) = -2z^3 I\_{U,L}(M=0) = -2z^3 I(0), \tag{41}
$$

and

$$I\_{U,L}(M) = \int\_{c-i\infty}^{c+i\infty} ds \, \frac{(1/2\pi z^3)^{2s} e^{\pm 2i\pi Ms}}{e^{-i\pi s} - e^{i\pi s}} \, \zeta(2s) \, \Gamma(2s-1), \qquad c > \text{Max}[N-1, 1/2]. \tag{42}$$

In (40), *TSN*(*z*3) represents the asymptotic series *S*(*z*3) truncated at *N*, as in (25), while the subscript U or L in (42) denotes whether the upper-signed or lower-signed version has been used. For *M* = 0, the subscript is dropped. Thus, ln Γ(*z*3) is composed of Stirling's formula, the truncated series and an MB integral as the regularized value of the remainder. On the other hand, for *M* = 1, the upper-signed version of (36) yields
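The *M* = 0 case of (40)–(42) can be checked independently of Mathematica. The following is a minimal sketch in Python with mpmath (an illustration written for this discussion, not the paper's code), taking *N* = 1 so that the truncated series is empty and placing the contour at *c* = 3/4:

```python
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

def F(z):
    # Stirling's formula: (z - 1/2) ln z - z + (1/2) ln(2 pi)
    return (z - mp.mpf(1)/2) * mp.log(z) - z + mp.log(2 * mp.pi) / 2

def mb_remainder(z3, c=mp.mpf(3)/4):
    # -2 z^3 I(0): the regularized remainder of (41), with I(0) the MB
    # integral (42) for M = 0 taken along Re(s) = c with 1/2 < c < 1,
    # i.e. N = 1, so that the truncated series TS_1 contributes nothing.
    def integrand(t):
        s = mp.mpc(c, t)
        return ((1 / (2 * mp.pi * z3))**(2 * s) * mp.zeta(2 * s)
                * mp.gamma(2 * s - 1)
                / (mp.exp(-1j * mp.pi * s) - mp.exp(1j * mp.pi * s)))
    I0 = mp.mpc(0, 1) * mp.quad(integrand, [-mp.inf, 0, mp.inf])  # ds = i dt
    return -2 * z3 * I0

z3 = mp.mpf(2)  # a sample point with theta = 0, inside -pi/3 < theta < pi/3
delta = mb_remainder(z3)
exact = mp.loggamma(z3) - F(z3)  # what the remainder must reproduce for N = 1
print(mp.nstr(delta.real, 15), mp.nstr(exact, 15))
```

For real *z*3 the integrand is conjugate-antisymmetric in *t*, so the remainder comes out real, and Stirling's formula plus the single MB integral reproduces ln Γ(*z*3) to essentially the working precision, with the sum over *n* absorbed into the zeta evaluation.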

$$
\Delta \ln \Gamma \left( z^3 \right) = -2z^3 I\_{U}(1) - \ln \left( 1 - e^{2i\pi z^3} \right) . \tag{43}
$$

The domain of convergence for this integral is 0 < *θ* < 2*π*/3, but it is not valid when *θ* = *π*/2 since *SMB*(*M*, *z*3) is discontinuous whenever *θ* = ±(*M* ± 1/2)*π*/3, excluding *M* = 0. For *θ* = *π*/6, (38) can be used, but all that happens is that the logarithmic term on the right-hand side of (43) is replaced by ln(1 − *e*<sup>−2*π*|*z*|3</sup>), which indicates that there is no discontinuity in ln Γ(*z*3) at *θ* = *π*/6.

For *M* = 1, when *θ* = ±(*M* + 1/2)*π*/3, *θ* = ±*π*/2. The upper value of *θ* lies in the domain of convergence for (43). In (36), we substitute (39) with *z* equal to *z*<sup>3</sup> for *SMB*(*M*, *z*). Then we arrive at

$$
\Delta \ln \Gamma \left( z^3 \right) = -2z^3 I\_{U}(1) - 2\pi |z|^3 - \ln \left( 1 - e^{-2\pi |z|^3} \right). \tag{44}
$$

As a result of the penultimate term, we expect a discontinuity when (44) is evaluated later. In addition, we can replace *F*(*z*3) and *TSN*(*z*3) in (40) by *F*(−*i*|*z*|3) and *TSN*(−*i*|*z*|3), respectively, while *z*3 in (44) can be replaced by −*i*|*z*|3.

When compared with (40), we see that (43) and (44) possess extra terms, which are similar to the Stokes discontinuity term in the Borel-summed asymptotic forms of Theorem 1. The difference here is that the lines of discontinuity are located in the domains of convergence. Thus, the asymptotic form is only different on the lines, whereas with Stokes lines, the regularized value is different before, on and after them. Moreover, we expect both forms for ln Γ(*z*3) to yield identical values when the domains of convergence overlap, i.e., over (0, *π*/3). This does not occur with the Stokes phenomenon, indicating again that MB regularization is different from Borel summation.
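This overlap can be illustrated numerically. In the following mpmath sketch (again an illustration, not the paper's Mathematica programs), the *M* = 0 form (40)–(41) and the *M* = 1 form (43) are evaluated with *N* = 1 at *θ* = *π*/8, where their domains of convergence overlap, and compared with mpmath's loggamma:

```python
import mpmath as mp

mp.mp.dps = 30

def F(z):
    # Stirling's formula
    return (z - mp.mpf(1)/2) * mp.log(z) - z + mp.log(2 * mp.pi) / 2

def I_mb(z3, M, c=mp.mpf(3)/4):
    # Upper-signed MB integral (42) along Re(s) = c, with N = 1
    def integrand(t):
        s = mp.mpc(c, t)
        return ((1 / (2 * mp.pi * z3))**(2 * s) * mp.exp(2j * mp.pi * M * s)
                * mp.zeta(2 * s) * mp.gamma(2 * s - 1)
                / (mp.exp(-1j * mp.pi * s) - mp.exp(1j * mp.pi * s)))
    return mp.mpc(0, 1) * mp.quad(integrand, [-mp.inf, 0, mp.inf])

z3 = 2 * mp.exp(3j * mp.pi / 8)   # arg z^3 = 3 pi/8, i.e. theta = pi/8
lg = mp.loggamma(z3)

# M = 0 form, Eqs. (40)-(41): no logarithmic term
total0 = F(z3) - 2 * z3 * I_mb(z3, 0)
# M = 1 form, Eq. (43): extra logarithmic term
total1 = F(z3) - 2 * z3 * I_mb(z3, 1) - mp.log(1 - mp.exp(2j * mp.pi * z3))
print(mp.nstr(total0, 15), mp.nstr(total1, 15), mp.nstr(lg, 15))
```

Both totals match loggamma to well within the quadrature tolerance, which is precisely the kind of cross-check between overlapping domains of convergence described above.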

For *M* = 2 and 3, the upper-signed version of (36) with *z* replaced by *z*<sup>3</sup> yields

$$
\Delta \ln \Gamma \left( z^3 \right) = \begin{cases}
-2z^3 I\_{U}(2) - \ln \left( -e^{2i\pi z^3} \right), & M = 2, \\
-2z^3 I\_{U}(3) - \ln \left( -e^{2i\pi z^3} \right) - \ln \left( 1 - e^{2i\pi z^3} \right), & M = 3,
\end{cases} \tag{45}$$

These results, which are similar to (43) except for the logarithmic terms, are not valid for *θ* =*π*/2 and *θ* =5*π*/6.

For *M* = 1 and *θ* = ±(*M* + 1/2)*π*/3, (39) was used to derive the asymptotic form of ln Γ(*z*3). However, when *θ* = ±(*M* − 1/2)*π*/3, *θ* can also equal *π*/2, but now the upper-signed version of (42) with *M* = 2 applies. Moreover, *SMB*(*M*, *z*) is determined by putting *M* = 2 and replacing *z* by *z*<sup>3</sup> in (38). Hence for *M* = 2 and *θ* = *π*/2, we find that

$$\ln \Gamma \left( z^3 \right) = F \left( z^3 \right) + TS\_N \left( z^3 \right) - 2z^3 I\_{U}(2) - 2\pi |z^3|. \tag{46}$$

For *θ* = 5*π*/6, we have either *M*=3 when *θ* = (*M* − 1/2)*π*/3 or *M*=2 when *θ* = (*M* + 1/2)*π*/3. In the first case *SMB*(*M*, *z*) is given by (38) with *z*=*z*<sup>3</sup> and *M*=3. In the second case (39) applies with *z* = *z*<sup>3</sup> and *M* = 2. Thus, we obtain

$$
\Delta \ln \Gamma \left( z^3 \right) = \begin{cases}
-2z^3 I\_{U}(3) + 2\pi |z^3| - \ln \left( 1 - e^{-2\pi |z^3|} \right), & M = 3, \\
-2z^3 I\_{U}(2) + 2\pi |z^3|, & M = 2,
\end{cases} \tag{47}
$$

The corresponding lower-signed results from (36) with *z* replaced by *z*<sup>3</sup> are simply complex conjugates of the above results. For brevity, they are not presented here. However, the interested reader will find them in ([11], Section 5).

Two separate numerical analyses will be presented here: the first aims to show the agreement between the MB-regularized asymptotic forms for ln Γ(*z*3) and their Borel-summed counterparts, and the second deals with the behaviour of ln Γ(*z*3) at the Stokes lines/rays. The first one includes an explanation of how to evaluate ln Γ(*z*3) from the MB-regularized asymptotic forms. Then the results are compared with the Borel-summed asymptotic forms in Section 3 with *z* replaced by *z*3. We shall observe that although both MB-regularized asymptotic forms are defined at each Stokes line, they give incorrect values of ln Γ(*z*3) with the difference being discontinuous jumps of 2*πi*. The second study at the Stokes lines/rays will be concerned with obtaining the correct values of ln Γ(*z*3) via both the Borel-summed and MB-regularized asymptotic forms by applying the Zwaan–Dingle principle [6,15], which states that an initially real function cannot suddenly become imaginary.

Since there are no Stokes lines of discontinuity in the above results, there are always two MB-regularized asymptotic forms that yield the values of ln Γ(*z*3) for all values of *θ* or arg *z*, except when *θ* = *kπ*/3 and *k* is an integer. Thus, the values from two different asymptotic forms for the regularized value of ln Γ(*z*3) can be checked against each other, which is simply not possible with Borel-summed results.

Because it represents a value where standard Poincaré asymptotics breaks down, we shall carry out the numerical study of the above results with |*z*| set equal to 1/10 as before. Note that the actual variable in the above asymptotic forms is 2*πz*3. Therefore, we are dealing with a very small value, which means that both the truncated series, *TSN*(*z*3), and the MB integral in the above results begin to diverge very rapidly for relatively small values of the truncation parameter, e.g., *N* = 4. Consequently, a cancellation of many decimal places will occur when adding *TSN*(*z*3) to the MB integral. Despite the accuracy and precision goals being set to 30, one may not necessarily obtain a final value that is accurate at this level even though WorkingPrecision is now set higher to 80, not 60 as in Section 3. As stated earlier, the problem can be overcome by specifying much larger values of AccuracyGoal, PrecisionGoal and WorkingPrecision in NIntegrate, but this comes at the expense of computing time.
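The scale of this cancellation can be seen from the terms of the Stirling series. Series (11) itself is not reproduced in this excerpt, so the standard form *B*2*k*/(2*k*(2*k* − 1)*z*2*k*−1), with *z* replaced by *z*3, is assumed in the following sketch:

```python
import mpmath as mp

mp.mp.dps = 30
z3 = mp.mpf('0.001')  # z^3 for |z| = 1/10 on the positive real axis

# Terms of the (divergent) Stirling series B_{2k} / (2k(2k-1) (z^3)^{2k-1});
# this standard form is an assumption, as series (11) is not shown here.
terms = [mp.bernoulli(2*k) / (2*k * (2*k - 1) * z3**(2*k - 1))
         for k in range(1, 6)]
for k, t in enumerate(terms, start=1):
    print(k, mp.nstr(t, 5))
```

The term magnitudes grow roughly like O(10<sup>6</sup>), O(10<sup>17</sup>) and O(10<sup>23</sup>) by the second, fourth and fifth terms, consistent with the sizes quoted for the truncated sums at *N* = 3, 5 and 6 in the calculations discussed in this section, so six or more decimal figures must cancel against the remainder.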

Table 6 presents a very small sample of the results from the fourth program in the appendix of [11] for various values of *N* and *θ* or arg *z*. There are five sets of results, four with *θ* positive, and one where it is negative. The first row of each calculation gives the value of Stirling's formula, while the next row displays the value of *TSN*(*z*3). Then the remainder, denoted by MB Int., appears. As mentioned earlier, because the domains of convergence of the MB integrals overlap one another, two different MB integrals are computed for the remainder. The first MB integral is represented by M1, while the second is represented by M2. The second MB integral is not evaluated when *θ* = *lπ*/3, where *l* is an integer, as demonstrated by the third calculation. The values of *N* and *θ* appear together with the value of the first MB integral in each set. The values of *SMB*(*M*, *z*3) are displayed in the rows immediately after the MB integrals. Then the results for the entire asymptotic form appear, which can be compared with the value of LogGamma from Mathematica.


**Table 6.** Values of ln Γ(*z*3) with |*z*| = 1/10 and varying *N* and *θ* in the Mellin–Barnes (MB)-regularized forms.

The first calculation in Table 6 lists the results for *θ* = −*π*/12 and *N* = 3. Then (40) and the complex conjugate of (43), corresponding to M1 = 0 and M2 = −1, respectively, yield the value of ln Γ(exp(−*iπ*/4)/1000). Stirling's formula on the first row is substantial, but not accurate, compared with the actual value from the LogGamma routine in the bottom row of the calculation. The second row of the calculation displays the value of *TS*3(exp(−*iπ*/4)/1000), which is O(10<sup>6</sup>). Thus, at least six decimal figures need to be cancelled by the remainder or MB integral, which occurs when the value on the next row is included in (40). The value of *SMB*(0, exp(−*iπ*/4)/1000) (zero since M1 = 0) appears on the fourth row of the calculation, while the sum of all the preceding quantities appears in the fifth row, labelled as 'Total via M1'. The total value agrees with the actual value of ln Γ(exp(−*iπ*/4)/1000) to 30 decimal places, well within the accuracy and precision limits despite the cancellation of six decimal figures. The sixth row of the first calculation displays the value of the MB integral for M2 = −1. As expected, it agrees with the first six decimal figures of the values for both the truncated sum and the MB integral in (40). However, *SMB*(1, *z*3), which is now non-vanishing, appears on the seventh row. There it can be seen that the real and imaginary parts of this value are much greater in magnitude than those from Stirling's formula. If this value is summed only with Stirling's formula, then the resulting value deviates from the value of ln Γ(exp(−*iπ*/4)/1000) far more than either value on its own, but when it is summed with the truncated sum and MB-regularized remainder, it yields ln Γ(exp(−*iπ*/4)/1000) to 29 decimal places despite the cancellation of six decimal figures.

The other calculations in Table 6 are similar to the first set of results, except that the MB integrals and *SMB*(*M*, *z*3) are evaluated according to the relevant domain of convergence. The third calculation presents fewer results because, as already stated, there is only one applicable MB-regularized form, viz. (44). Nevertheless, the final result agrees with the value obtained from Mathematica. An interesting result in this calculation is that ℑ(ln Γ(*z*3)) = −*π* for *θ* = *π*/3 because the asymptotic series is composed of purely real terms when *θ* = *kπ*/3, where *k* is an integer. Hence the imaginary part of *TSN*(*z*3) vanishes for all these values. In addition, the imaginary part of the MB integral can be shown to vanish by splitting the integral into two integrals and making the substitutions *s* = *c* + *it* in the upper half of the complex plane and *s* = *c* − *it* in the lower half. Then all the terms become complex conjugates of each other. Expanding out all the terms, one is left with a real integral, while the imaginary part reduces to

$$\Im \ln \Gamma\left( |z|^{3} \exp(i\pi) \right) = \Im F\left( |z|^{3} e^{i\pi} \right) - \Im \ln\left( 1 - e^{-2i\pi|z|^{3}} \right).\tag{48}$$

From Stirling's formula we obtain

$$\Im F\left( |z|^3 e^{i\pi} \right) = -\left( |z|^3 + 1/2 \right) \pi, \tag{49}$$

while the second term in (48) becomes

$$\Im \ln\left( 1 - e^{-2i\pi|z|^3} \right) = \Im \ln\left( e^{-i\pi|z|^3} \right) + \Im \ln\left( 2i \sin\left( \pi |z|^3 \right) \right).\tag{50}$$

Introducing these results into (48) yields

$$\Im \ln \Gamma \left( z^3 \right) \Big|\_{\theta = \pi/3} = -\pi. \tag{51}$$
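The steps (48)–(51) can be checked by direct computation. A small sketch with mpmath, using the principal-branch logarithm (which assigns arg(*z*3) = +*π* on the negative real axis) and |*z*| = 1/10:

```python
import mpmath as mp

mp.mp.dps = 30
az3 = mp.mpf(1) / 1000            # |z|^3 with |z| = 1/10
z3 = mp.mpc(-az3, 0)              # z^3 = |z|^3 e^{i pi}, i.e. theta = pi/3

# Stirling's formula F(z^3); mpmath's principal log gives arg(z^3) = +pi here
F = (z3 - mp.mpf(1)/2) * mp.log(z3) - z3 + mp.log(2 * mp.pi) / 2
im_F = mp.im(F)                                        # Eq. (49)
im_log = mp.im(mp.log(1 - mp.exp(-2j * mp.pi * az3)))  # Eq. (50)
print(mp.nstr(im_F, 10), mp.nstr(im_log, 10), mp.nstr(im_F - im_log, 10))
```

The first value is −(|*z*|3 + 1/2)*π* as in (49), the second is *π*/2 − *π*|*z*|3 as implied by (50), and their difference reproduces the value −*π* of (51).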

In the last two calculations of Table 6, *θ* > *π*/3, which means that the LogGamma routine can no longer be used. We are now on our own, a new frontier in mathematics, where only the totals via the M1 and M2 asymptotic forms can yield the value of ln Γ(*z*3). Moreover, when *θ* equals 2*π*/3 or *π*, there will only be one MB-regularized form that yields the regularized value. For these cases we require the Borel-summed regularized values as a check.

In the fifth calculation, *N* is set equal to 5, which yields a value of O(10<sup>17</sup>) for the truncated sum. Hence at least 16 decimal figures need to be cancelled in order to obtain ln Γ(*e*<sup>12*iπ*/7</sup>/1000). Since *θ* = 4*π*/7, the domains of convergence are (0, 2*π*/3) and (*π*/3, *π*), corresponding to M1 = 1 and M2 = 2. Thus, (43) and (45) apply, which is interesting because *SMB*(*M*, *z*3) is very different in these asymptotic forms, particularly the imaginary parts. As expected, the MB integrals for both asymptotic forms yield the 17 decimal figures in the real parts needed to cancel the real part of the truncated sum, *TS*5(*e*<sup>12*iπ*/7</sup>/1000). On the other hand, only 11 decimal figures are cancelled in the imaginary parts. As a result of the cancellation, the real parts in the totals only agree to 17 decimal figures. The same applies to the imaginary parts, which is surprising since fewer figures were cancelled.

Because 4*π*/7 is closer to the upper limit of 2*π*/3 of the domain of convergence for (43), one expects the total obtained via M1 in the table to be the less accurate of the two forms. In actual fact, it turns out that this value is more accurate than the total via (45) by a few extra decimal places. Nevertheless, if WorkingPrecision is set to 100 and AccuracyGoal and PrecisionGoal to 40, then both totals are found to agree to 32 decimal places, although the computation time is more than doubled. Another method of avoiding long computation times is to keep *N* as low as possible.

The final calculation in Table 6 is similar to the previous one except that (45) is introduced into (40) to yield the MB-regularized asymptotic forms for *θ* = 8*π*/9. For *N* = 6, the truncated sum is O(10<sup>23</sup>). Since the highest degree of cancellation between the truncated sum and remainder occurs here, we find that this calculation yields the least accurate results of all those in the table. Despite this fact, the final results still agree with each other to 10 decimal places. Hence the results in Table 6 confirm the validity of the MB-regularized asymptotic forms for ln Γ(*z*3).

We now consider the MB-regularized asymptotic forms near Stokes lines. Although the code should not be run when *θ* corresponds directly to a Stokes line, one can do so since the MB integrals are defined there. Table 7 displays some of the results obtained by running the fourth program in [11] near the Stokes lines at *θ* = *π*/2, *θ* = 5*π*/6 and *θ* = −*π*/6 with |*z*| = 1/10 and *N* = 5. When *θ* = *π*/2, the code evaluates ln Γ(*z*3) via (43) and (45) with M1 = 1 and M2 = 2, respectively. The first two results in the table display the values of (43) and (45) near the discontinuity at *π*/2 with *θ* = 19*π*/40. As expected, both forms of ln Γ(*z*3) yield identical values. At *θ* = *π*/2, however, both forms yield different results, but only for the imaginary parts. In fact, there is a jump discontinuity of 2*πi* between the results, with the first form yielding −*iπ*/2 and the second, 3*iπ*/2. Note, however, that the discontinuities arise only from taking the logarithm of the gamma function. The gamma function itself is not discontinuous. As expected, neither result for *θ* = *π*/2 is correct. The correct result lies midway between −*π*/2 and 3*π*/2. That is, ℑ ln Γ(*z*3)|*θ*=*π*/2 = *π*/2.
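That the jumps originate purely in the logarithm can be seen without any asymptotics: the analytically continued log-gamma function and the principal-branch logarithm of Γ differ by an integer multiple of 2*πi* wherever ℑ ln Γ leaves (−*π*, *π*], while Γ itself remains continuous. A sketch with mpmath at an arbitrarily chosen sample point *z* = 10*i* (not one of the values used in the paper):

```python
import mpmath as mp

mp.mp.dps = 30
z = mp.mpc(0, 10)  # an arbitrary sample point where Im loggamma(z) exceeds pi

lg_cont = mp.loggamma(z)          # continuous (analytically continued) branch
lg_prin = mp.log(mp.gamma(z))     # principal-branch log of the gamma function
k = (lg_cont - lg_prin) / (2j * mp.pi)
print(mp.nstr(k, 10))  # the two logs differ by an integer multiple of 2 pi i
```

Here *k* comes out as the integer 2: the principal-branch logarithm has wrapped twice, even though Γ(*z*) itself is perfectly smooth along any path from the positive real axis to *z*.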

**Table 7.** ln Γ(*z*3) via the MB-regularized forms in the vicinity of the lines of discontinuity given by *θ* = −*π*/6, *θ* = *π*/2 and *θ* = 5*π*/6 with |*z*| = 1/10 and *N* = 5.


The next set of six results displays the case where *θ* is very close to 5*π*/6, viz. ±*π*/150 away. In this case both forms in (45) are used to calculate ln Γ(*z*3). Once again, both forms yield identical results below and above *θ* = 5*π*/6. However, for *θ* = 5*π*/6, they yield identical values for both the real and imaginary parts. In fact, although the imaginary parts have the same value of *π*/2, it is incorrect because ln Γ(*z*3) experiences a jump discontinuity of −2*πi* there. Mathematica has simply chosen the wrong value for the logarithmic terms in (45), as explained on p. 564 of [17]. Noting that there is a jump of −2*π* means that ℑ ln Γ(*z*3)|*θ*=5*π*/6 = −*π*/2, which corresponds to the midway point for the results before and after the Stokes line.

The final set of results in Table 7 has been obtained for *θ* very close to −*π*/6. Because |*θ*| < *π*/3, we can evaluate ln Γ(*z*3) via the LogGamma[*z*] routine in Mathematica. Hence there are more results for this calculation. This calculation employs (40) (M1 = 0) and the complex conjugate of (43) (M2 = 1). As before, the two versions of ln Γ(*z*3) give identical values above and below the Stokes line at *θ* = −*π*/6. Moreover, they agree with the values obtained via LogGamma[*z*]. The interesting point about this calculation, however, is that all three values at the Stokes line *θ* = −*π*/6 also agree. This is expected since there is no extra logarithmic term in (42), while in the other asymptotic form the logarithmic term is purely real for *θ* = ±*π*/6. Thus, there is no discontinuity in ln Γ(*z*3) at *θ* = ±*π*/6, which shows that Stokes discontinuities in Borel-summed regularized values can be fictitious.

The final calculation is the verification of the Borel-summed asymptotic forms for ln Γ(*z*3). It was not possible to check these results previously because their regions of validity do not overlap. Now we use the MB-regularized asymptotic forms to verify their Borel-summed counterparts. In addition, we can confirm that the MB-regularized asymptotic forms for *θ* =*kπ*/3, where *k* equals ±1, ±2 and 3, are correct, since only one MB-regularized asymptotic form is valid for these values.

To carry out the first task, we need to replace *z* by *z*<sup>3</sup> in Theorem 1. Hence the Borel-summed regularized values for ln Γ(*z*3) become

$$\ln \Gamma \left( z^3 \right) = F \left( z^3 \right) + T S\_N \left( z^3 \right) + R\_N^{\pm} \left( z^3 \right) + S D\_M^{\pm} \left( z^3 \right), \tag{52}$$

where, as before, *TSN*(*z*3) is the truncated form of *SN*(*z*3) at *N*,

$$R\_N^+ \left( z^3 \right) = \frac{2(-1)^{N+1} z^3}{(2\pi z^3)^{2N-2}} \int\_0^\infty dy \, y^{2N-2} \, e^{-y} \sum\_{n=1}^\infty \frac{1}{n^{2N-2} \left( y^2 + (2n\pi z^3)^2 \right)} \tag{53}$$

$$R\_N^- \left( z^3 \right) = \frac{2 \, z^3}{(2 \pi |z^3|)^{2N-2}} \, P \int\_0^\infty dy \, y^{2N-2} e^{-y} \sum\_{n=1}^\infty \frac{1}{n^{2N-2} \left( y^2 - 4n^2 \pi^2 |z^3|^2 \right)}, \tag{54}$$

$$SD\_M^{\pm}\left(z^3\right) = -\lfloor M/2 \rfloor \ln\left(-e^{\pm 2i\pi z^3}\right) - \frac{\left(1 - (-1)^M\right)}{2} \ln\left(1 - e^{\pm 2i\pi z^3}\right),\tag{55}$$

and

$$SD\_M^{-}\left(z^3\right) = (-1)^M \left( \lfloor M/2 \rfloor + \frac{1 - (-1)^M}{2} \right) 2\pi |z^3| - \frac{1}{2} \ln\left(1 - e^{-2\pi|z^3|}\right). \tag{56}$$

The upper- and lower-signed versions of (55) are valid for (*M* − 1/2)*π*/3 < *θ* < (*M* + 1/2)*π*/3 and −(*M* +1/2)*π*/3 < *θ* < −(*M* −1/2)*π*/3, respectively, while (56) is only valid for *θ* = ±(*M* +1/2)*π*/3. Therefore, for −*π* < *θ* ≤ *π* or the principal branch for *z*, Stokes lines occur at ±*π*/6, ±*π*/2, and ±5*π*/6. We shall investigate these cases after we have considered the results for the Stokes sectors first. The Borel-summed asymptotic forms that are valid for the Stokes sectors can be expressed as

$$\ln \Gamma \left( z^3 \right) = \begin{cases} F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( -e^{2i\pi z^3} \right) - \ln \left( 1 - e^{2i\pi z^3} \right), & 5\pi/6 < \theta \le \pi, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( -e^{2i\pi z^3} \right), & \pi/2 < \theta < 5\pi/6, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( 1 - e^{2i\pi z^3} \right), & \pi/6 < \theta < \pi/2, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right), & -\pi/6 < \theta < \pi/6, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( 1 - e^{-2i\pi z^3} \right), & -\pi/2 < \theta < -\pi/6, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( -e^{-2i\pi z^3} \right), & -5\pi/6 < \theta < -\pi/2, \\ F\left( z^3 \right) + TS\_N \left( z^3 \right) + R\_N^+ \left( z^3 \right) - \ln \left( -e^{-2i\pi z^3} \right) - \ln \left( 1 - e^{-2i\pi z^3} \right), & -\pi < \theta < -5\pi/6. \end{cases} \tag{57}$$

The main difference between these results and the earlier MB-regularized values is that though they possess similar logarithmic terms, they emerge in different sectors within the principal branch.
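As a consistency sketch of the fourth case of (57) (the sector −*π*/6 < *θ* < *π*/6), the Borel-summed remainder (53) can be evaluated for real *z*3 with *N* = 1, so that *TS*1 is empty. To keep the sketch fast, the sum over *n* is evaluated in closed form via the Mittag–Leffler expansion of the hyperbolic cotangent rather than by direct truncation as in the paper's programs; this closed form is an assumption introduced here:

```python
import mpmath as mp

mp.mp.dps = 40
z3 = mp.mpf(2)  # z^3 real and positive, inside the sector -pi/6 < theta < pi/6

def F(z):
    # Stirling's formula
    return (z - mp.mpf(1)/2) * mp.log(z) - z + mp.log(2 * mp.pi) / 2

# R_1^+(z^3) from (53) with N = 1 (so TS_1 is empty).  The sum over n is
# evaluated in closed form via the Mittag-Leffler expansion (an assumption,
# not taken from the paper):
#   sum_{n>=1} 1/(y^2 + (2 n pi z^3)^2) = (u coth u - 1) / (8 (z^3)^2 u^2),
# with u = y / (2 z^3); here the variable z3 stores the value of z^3.
def g(u):
    if u < mp.mpf('0.02'):  # Taylor branch avoids cancellation at small u
        return mp.mpf(1)/3 - u**2/45 + 2*u**4/945 - u**6/4725
    return (u * mp.coth(u) - 1) / u**2

R1 = 2 * z3 * mp.quad(lambda y: mp.exp(-y) * g(y / (2 * z3)) / (8 * z3**2),
                      [0, 1, mp.inf])
print(mp.nstr(F(z3) + R1, 20), mp.nstr(mp.loggamma(z3), 20))
```

Stirling's formula plus the Borel-summed remainder reproduces ln Γ(*z*3) to the working precision, mirroring the agreement with the MB-regularized forms reported in Table 8.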

Table 8 presents a very small sample of the results obtained by running the fifth program in the appendix of [11] with |*z*| = 5/2 and the upper limit in *R*<sup>+</sup>*N*(*z*3) and *R*<sup>−</sup>*N*(*z*3) set to 10<sup>5</sup> as in Section 3. The first calculation displays the results obtained for *θ* = −*π*/7 and *N* = 4. Since *NOP* = 8 according to (23), Stirling's formula or *F*(*z*3) yields a reasonable approximation to the actual value of ln Γ((5/2)<sup>3</sup>*e*<sup>−3*iπ*/7</sup>), which appears at the bottom of the calculation. Consequently, the truncated sum is small, only contributing at the third decimal place. Two MB-regularized asymptotic forms apply: (1) (40), denoted by M1 = 0, and (2) the complex conjugate of (43), denoted by M2 = −1. The MB integrals in the remainder are O(10<sup>−12</sup>). For M1 = 0, there is no logarithmic term, while for M2 = −1, there is a contribution, but it is almost negligible, O(10<sup>−42</sup>). That is, the M1 = 0 and M2 = −1 calculations are virtually identical to one another, well within the accuracy and precision goals set in NIntegrate. Therefore, the totals, representing the sum of *F*(*z*3), the MB integrals and the logarithmic terms, are not only identical to one another, but they also agree with the value obtained from the LogGamma routine in Mathematica.

The result labelled Borel Rem represents the Borel-summed remainder, or *R*<sup>+</sup>*N*(*z*3), in the fourth asymptotic form of (57), where the upper limit in the sum has been set to 10<sup>5</sup>. Despite this truncation, it is identical to the values obtained from the MB-regularized asymptotic forms. In actual fact, the Borel Rem value is identical to the first 34 decimal figures of the MB integrals, well within the accuracy and precision goals. Bearing in mind that the remainder is very small, this means that only the first 13 or so decimal figures of each remainder calculation will contribute to the totals. That is, the remainder is truly subdominant.

The second calculation displays the results for *θ* = 2*π*/3 and *N* = 2, which has only one valid MB-regularized asymptotic form. In addition, Stirling's formula, the truncated sum and the MB integral are all real, while the logarithmic term yields the imaginary contribution of −*π*/4. Hence we see that ℑ ln Γ(|*z*|<sup>3</sup>*e*<sup>2*iπ*</sup>) = −*π*/4. As expected, the value of the MB integral is very small, O(10<sup>−7</sup>), while Stirling's formula provides an accurate value for ln Γ(125 exp(2*iπ*)/8). Appearing below the Total is the remainder of the Borel-summed version for ln Γ(|*z*|<sup>3</sup> exp(2*iπ*)), or *R*<sup>+</sup>*N*(*z*3) in the second result of (57), which in turn has the same logarithmic term as (45). Hence both calculations are expected to be identical. However, a closer inspection reveals that they agree to 23 decimal figures, but not the expected 30 specified by the accuracy and precision goals. This discrepancy, which arises from the truncation of the Borel-summed remainder, is an example where the upper limit of the sum over *n* has to be set much higher in order to achieve the desired accuracy.

The final set of values has been obtained by setting *θ* equal to 11*π*/12 and *N* to 4. Once again, there are two MB-regularized asymptotic forms, both obtained from (45). The Borel-summed asymptotic form in this case is given by the first form in (57). All the remainder terms are tiny, O(10<sup>−12</sup>). If only the logarithmic term for all three forms is added to Stirling's formula, then a good approximation is obtained. In this instance the logarithmic term for the Borel-summed form is identical to the second form in (45), but by comparing it with the value obtained via the M1 = 2 form, the extra term is very small indeed, only differing at the 20th decimal place. Nevertheless, all three totals agree with each other as in the other cases in the table.


**Table 8.** ln Γ(*z*3) for |*z*| = 5/2 and various values of *θ* and *N*.

Before we can be assured that there is complete agreement between both sets of asymptotic forms, we need to carry out a final numerical analysis at the Stokes lines. The Borel-summed asymptotic forms at these lines are given by (52), but now with *R*<sup>−</sup>*N*(*z*3), or (54), and *SD*<sup>−</sup>*M*(*z*3), or (56). Putting *M* equal to 0, 1 and 2 yields the specific forms at the Stokes lines, which are

$$\ln \Gamma \left( z^3 \right) = \begin{cases} F\left( z^3 \right) + T \text{S}\_N \left( z^3 \right) + R\_N^- \left( z^3 \right) - \frac{1}{2} \ln \left( 1 - e^{-2\pi \left| z^3 \right|} \right), & \theta = \pm \pi/6, \\ F\left( z^3 \right) + T \text{S}\_N \left( z^3 \right) + R\_N^- \left( z^3 \right) - \frac{1}{2} \ln \left( 1 - e^{-2\pi \left| z^3 \right|} \right) \\ -2\pi |z^3|, & \theta = \pm \pi/2, \\ F\left( z^3 \right) + T \text{S}\_N \left( z^3 \right) + R\_N^- \left( z^3 \right) - \frac{1}{2} \ln \left( 1 - e^{-2\pi \left| z^3 \right|} \right) \\ + 2\pi |z^3|, & \theta = \pm 5\pi/6. \end{cases} \tag{58}$$
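Substituting *M* = 0, 1 and 2 into (56), the factor (−1)*M*(⌊*M*/2⌋ + (1 − (−1)*M*)/2) equals 0, −1 and 1, respectively, which makes the origin of the three discontinuity terms in (58) explicit:

$$SD\_0^-\left(z^3\right) = -\tfrac{1}{2}\ln\left(1 - e^{-2\pi|z^3|}\right), \qquad SD\_1^-\left(z^3\right) = -2\pi|z^3| - \tfrac{1}{2}\ln\left(1 - e^{-2\pi|z^3|}\right),$$

$$SD\_2^-\left(z^3\right) = 2\pi|z^3| - \tfrac{1}{2}\ln\left(1 - e^{-2\pi|z^3|}\right).$$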

Note the similarity of the Stokes discontinuity terms to the corresponding terms, or *SMB*(*M*, *z*3), in the MB-regularized asymptotic forms. The major difference occurs with the logarithmic term, which is represented by either a zero or full residue contribution in the MB-regularized asymptotic forms, while it is always represented by a semi-residue, or half the contribution, in the Borel-summed asymptotic forms.

Table 9 presents a small sample of the results obtained by running the final program in the appendix of [11]. Since the MB integrals yielded values of O(10<sup>−3</sup>), there was no significant cancellation of decimal figures as in Table 8. The first column of Table 9 displays the value of *θ* for the respective Stokes line. Here they are presented for the Stokes lines at: (1) *θ* = *π*/6, (2) *θ* = −*π*/2 and (3) *θ* = 5*π*/6. As stated before, ln Γ(*z*3) cannot be evaluated by Mathematica for the last two cases. Thus, LogGamma[*z*] appears as an extra result only for *θ* = *π*/6. The second column of Table 9 displays the equation that was used to calculate the value of ln Γ(*z*3). The label 'c.c.' denotes that the complex conjugate of the equation was used, which applies here because *θ* is negative. The third column displays the actual values to 27 decimal places. We see that not only do the two different MB-regularized asymptotic forms agree with one another at each Stokes line, they also agree with the results obtained from the Borel-summed asymptotic forms in (58) and, where possible, with the LogGamma routine in Mathematica.


**Table 9.** ln Γ(*z*3) for |*z*| = 9/10 at the Stokes lines within the principal branch.

#### **6. Conclusions**

In [16] it was stated that a fully-fledged theory of divergent series could only be realized if more complicated problems were studied than those presented in [6]. Amongst these was the extension of the asymptotics of the gamma function to the entire complex plane since the Stokes lines possess an infinite number of singularities rather than one as studied in [6]. This has been achieved here, which leaves the development of the complete asymptotic expansion for the confluent hypergeometric function over the entire complex plane as the next problem. In this instance it will be necessary to develop and regularize infinite subdominant series throughout the complex plane.

**Funding:** This research received no external funding.

**Acknowledgments:** The author thanks Professor Mainardi for the invitation to contribute to this special issue.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Transformations of the Hypergeometric <sup>4</sup>***F***<sup>3</sup> with One Unit Shift: A Group Theoretic Study**

**Dmitrii Karp 1,***<sup>∗</sup>* **and Elena Prilepkina 2,3**


Received: 2 October 2020; Accepted: 2 November 2020; Published: 5 November 2020

**Abstract:** We study the group of transformations of <sup>4</sup>*F*<sup>3</sup> hypergeometric functions evaluated at unity with one unit shift in parameters. We reveal the general form of this family of transformations and its group property. Next, we use explicitly known transformations to generate a subgroup whose structure is then thoroughly studied. Using some known results for <sup>3</sup>*F*<sup>2</sup> transformation groups, we show that this subgroup is isomorphic to the direct product of the symmetric group of degree 5 and the 5-dimensional integer lattice. We investigate the relation between two-term <sup>4</sup>*F*<sup>3</sup> transformations from our group and three-term <sup>3</sup>*F*<sup>2</sup> transformations and present a method for computing the coefficients of the contiguous relations for <sup>3</sup>*F*<sup>2</sup> functions evaluated at unity. We further furnish a class of summation formulas associated with the elements of our group. In the appendix to this paper, we give a collection of *Wolfram Mathematica*® routines facilitating the group calculations.

**Keywords:** generalized hypergeometric function; hypergeometric transformations; transformation groups; symmetric group

#### **1. Introduction and Preliminaries**

Groups comprising transformations of the generalized hypergeometric functions that preserve their value at unity can be traced back to Kummer's formula ([1], Corollary 3.3.5), see (2) below. These groups play an important role in mathematical physics. In particular, the group theoretic properties of hypergeometric transformations constitute the key ingredient of a succinct description of the symmetries of the Clebsch–Gordan and Wigner 3−*j*, 6−*j* and 9−*j* coefficients from angular momentum theory [2–5]. The Karlsson–Minton summation formula for the generalized hypergeometric function with integral parameter differences (IPD) was largely motivated by the computation of a Feynman path integral. Furthermore, IPD hypergeometric functions appear in the calculation of several integrals in high energy field theories and statistical physics [6]. See also the introduction and references in [7] for further applications in mathematical physics and the relation to Coxeter groups.

The generalized hypergeometric function ([1], Section 2.1.2), ([8], Chapter 16) is defined by the series

$${}\_{p+1}F\_p \left( \begin{matrix} a\_1, \dots, a\_{p+1} \\ b\_1, \dots, b\_p \end{matrix} \bigg| z \right) = \sum\_{n=0}^{\infty} \frac{(a\_1)\_n \cdot \dots \cdot (a\_{p+1})\_n}{n!(b\_1)\_n \cdot \dots \cdot (b\_p)\_n} z^n \tag{1}$$

*Mathematics* **2020**, *8*, 1966; doi:10.3390/math8111966 www.mdpi.com/journal/mathematics

whenever it converges. When evaluated at the unit argument, *z* = 1, it represents a function of 2*p* + 1 complex parameters with obvious symmetry with respect to separate permutations of the *p* + 1 top and the *p* bottom parameters. As the above series diverges at *z* = 1 if the parametric excess satisfies ∑<sub>*k*=1</sub><sup>*p*</sup>(*b<sub>k</sub>* − *a<sub>k</sub>*) − *a*<sub>*p*+1</sub> < 0, the first problem that arises is to construct an analytic continuation to the values of the parameters in this domain. For the <sup>3</sup>*F*<sup>2</sup> function this problem is partially solved by the transformation ([1], Corollary 3.3.5)

$${}\_{3}F\_{2}\left(\begin{matrix}a,b,c\\d,e\end{matrix}\right)=\frac{\Gamma(e)\Gamma(d+e-a-b-c)}{\Gamma(e-c)\Gamma(d+e-b-a)}{}\_{3}F\_{2}\left(\begin{matrix}d-b,d-a,c\\d,d+e-b-a\end{matrix}\right)\tag{2}$$

discovered by Kummer in 1836. In the above formula we have omitted the argument 1 from the notation of the hypergeometric series, and this convention will be adopted throughout the paper. The series on the right hand side of (2) converges when ℜ(*e* − *c*) > 0, so that we get the analytic continuation to this domain. An important aspect of the above formula is that it can be applied to itself directly or after permuting some of the top and/or bottom parameters. This leads to a family of transformations which can be studied by group theoretic methods. A notable member of this family is Thomae's (1879) transformation ([1], Corollary 3.3.6)
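Kummer's transformation (2) is easy to probe numerically by summing the partial sums of series (1) term by term at unit argument. The sketch below is an illustration in Python (the paper's own routines, in the appendix, are written in *Wolfram Mathematica*®); the parameter values are arbitrary and chosen only so that both series converge reasonably fast.

```python
from math import gamma, log

def hyp(top, bottom, z=1.0, n_terms=20000):
    """Partial sum of the generalized hypergeometric series (1),
    built from the ratio of consecutive terms."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        ratio = z / (n + 1.0)
        for a in top:
            ratio *= a + n
        for b in bottom:
            ratio /= b + n
        term *= ratio
        total += term
    return total

# Quick sanity check: 2F1(1, 1; 2; 1/2) = -ln(1/2)/(1/2) = 2 ln 2.
assert abs(hyp([1.0, 1.0], [2.0], z=0.5) - 2 * log(2)) < 1e-9

# Kummer's transformation (2) at unit argument, illustrative parameters.
a, b, c, d, e = 0.5, 0.5, 0.5, 1.5, 4.5
lhs = hyp([a, b, c], [d, e])
coeff = gamma(e) * gamma(d + e - a - b - c) / (gamma(e - c) * gamma(d + e - b - a))
rhs = coeff * hyp([d - b, d - a, c], [d, d + e - b - a])
print(lhs, rhs)  # the two sides agree to truncation accuracy
```

Since the terms of a <sup>3</sup>*F*<sup>2</sup> at unity decay like *n*<sup>−*s*−1</sup>, where *s* is the parametric excess, the truncation error here is far below the comparison tolerance.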

$${}\_{3}F\_{2}\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) = \frac{\Gamma(d)\Gamma(e)\Gamma(s)}{\Gamma(c)\Gamma(s+b)\Gamma(s+a)} {}\_{3}F\_{2}\left( \begin{matrix} d-c,e-c,s \\ s+a,s+b \end{matrix} \right),\tag{3}$$

where *s* = *d* + *e* − *a* − *b* − *c*, which gave the name to the whole family of <sup>3</sup>*F*<sup>2</sup> transformations generated by the algorithm described above. In an important work [9] the authors undertook a detailed group theoretic study of Thomae's transformations as well as transformations for the terminating <sup>4</sup>*F*<sup>3</sup> series and Bailey's three-term relations for <sup>3</sup>*F*2. In particular, they have shown ([9], Theorem 3.2) that the function

$$f(x, y, z, u, v) = \frac{{}\_3F\_2\left(\begin{matrix} x+u+v,\; y+u+v,\; z+u+v \\ x+y+z+2u+v,\; x+y+z+u+2v \end{matrix}\right)}{\Gamma(x+y+z+2u+v)\Gamma(x+y+z+u+2v)\Gamma(x+y+z)},\tag{4}$$

is invariant with respect to the entire symmetric group *P*<sup>5</sup> acting on its 5 arguments (note that another, simpler version of this symmetry is given by ([2], Equation (7))). This symmetry was, in fact, first observed by Hardy in his 1940 lectures ([10], Notes on Lecture VII). The work [9] initiated the whole stream of papers on group-theoretic interpretations of hypergeometric and *q*-hypergeometric transformations. See, for instance, Refs. [2,4,7,11–14] and references therein.
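The permutation symmetry of (4) can likewise be checked numerically. In the sketch below (a Python illustration with arbitrary sample arguments, not part of the paper's Mathematica code), the first swap, x ↔ y, is a manifest top-parameter symmetry of the <sup>3</sup>*F*<sup>2</sup>, while the second, x ↔ u, is a genuine Thomae-type transformation:

```python
from math import gamma

def f32(top, bottom, n_terms=100000):
    """Partial sum of 3F2(top; bottom) at unit argument."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        ratio = 1.0 / (n + 1.0)
        for a in top:
            ratio *= a + n
        for b in bottom:
            ratio /= b + n
        term *= ratio
        total += term
    return total

def f(x, y, z, u, v):
    """The function of (4), invariant under all permutations of its arguments."""
    d = x + y + z + 2 * u + v
    e = x + y + z + u + 2 * v
    num = f32([x + u + v, y + u + v, z + u + v], [d, e])
    return num / (gamma(d) * gamma(e) * gamma(x + y + z))

base = f(0.4, 0.5, 0.6, 0.3, 0.2)
# x <-> y is a manifest symmetry; x <-> u is a Thomae-type transformation.
print(f(0.5, 0.4, 0.6, 0.3, 0.2), f(0.3, 0.5, 0.6, 0.4, 0.2), base)
```

The series converges only algebraically (the excess is x + y + z), so a generous number of terms is taken and the agreement is checked to the resulting truncation accuracy.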

We note in passing that the analytic continuation problem for general *p* was solved by Nørlund [15] and Olsson [16] with later rediscovery by Bühring [17] without resorting to group-theoretic methods. More recently, Kim, Rathie and Paris derived ([18], p. 116) the following transformation

$${}\_4F\_3 \left( \begin{matrix} a, b, c, f+1 \\ d, e, f \end{matrix} \right) = \frac{\Gamma(e)\Gamma(\psi)}{\Gamma(e-c)\Gamma(\psi+c)} {}\_4F\_3 \left( \begin{matrix} d-a-1, d-b-1, c, \eta+1 \\ d, d+e-a-b-1, \eta \end{matrix} \right),\tag{5}$$

with *ψ* = *d* + *e* − *a* − *b* − *c* − 1 and

$$\eta = \frac{(d-a-1)(d-b-1)f}{ab+(d-a-b-1)f}.$$

This transformation can be iterated, but it is not immediately obvious what is the general form of the transformations obtained by such iterations. In our recent paper ([19], p. 14, above Theorem 2) we found another identity of a similar flavor which can be viewed as a generalization of (2):

$${}\_4F\_3\left( \begin{matrix} a,b,c,f+1\\d,e,f \end{matrix} \right) = \frac{(\psi f - c(d-a-b))\Gamma(e)\Gamma(\psi)}{f\Gamma(e+d-a-b)\Gamma(e-c)} {}\_4F\_3\left( \begin{matrix} d-a,d-b,c,\xi+1\\d,e+d-a-b,\xi \end{matrix} \right) \tag{6}$$

where *ξ* = *f* + (*d* − *a* − *b*)(*f* − *c*)/(*e* − *c* − 1). The main purpose of this paper is to present a general form of the family of transformations of which the above two identities are particular cases, demonstrate that this family forms a group and analyze the structure of the subgroup generated by explicitly known transformations (5)–(8). Before we delve into this analysis let us now record two more transformations generating this subgroup. A proof will be given in Section 6.

**Lemma 1.** *The following identities hold*

$${}\_4F\_3\left( \begin{matrix} a,b,c,f+1\\d,e,f \end{matrix} \right) = \frac{(f\psi+bc)\Gamma(\psi)\Gamma(d)\Gamma(e)}{f\Gamma(a)\Gamma(\psi+b+1)\Gamma(\psi+c+1)} {}\_4F\_3\left( \begin{matrix} \psi,d-a,e-a,\zeta+1\\d+e-a-c,d+e-a-b,\zeta \end{matrix} \right),\tag{7}$$

*where ζ* = *ψ* + *bc*/ *f , ψ* = *d* + *e* − *a* − *b* − *c* − 1*; and*

$${}\_{4}F\_{3}\left( \begin{matrix} a,b,c,f+1\\d,e,f \end{matrix} \right) = \frac{(abc+fd\psi)\Gamma(\psi)\Gamma(e)}{f d \Gamma(e-a)\Gamma(\psi+a+1)} {}\_{4}F\_{3}\left( \begin{matrix} a,d-b,d-c,\nu+1\\d+1,\psi+a+1,\nu \end{matrix} \right),\tag{8}$$

*where ν* = (*abc* + *f dψ*)/(*bc* + *fψ*)*.*

Please note that each <sup>4</sup>*F*<sup>3</sup> function containing a parameter pair *f* + 1, *f* can be decomposed into a sum of two <sup>3</sup>*F*<sup>2</sup> functions (and we will demonstrate that there are numerous different decompositions of this type). Hence, each of the identities (5)–(8) can be written as a four-term relation for <sup>3</sup>*F*2. However, it will be seen from the subsequent considerations that, in fact, all such relations reduce to three or even two terms, and, moreover, the structure seems to be more transparent if we keep the <sup>4</sup>*F*<sup>3</sup> function as the basic building block of our analysis. It will be revealed that the group structure of our transformations is closely related to that of the Thomae group generated by the two-term transformations (2) and (3) and to the contiguous three-term relations for <sup>3</sup>*F*2. We believe that our subgroup generated by (5)–(8) covers all possible two-term transformations for <sup>4</sup>*F*<sup>3</sup> with one unit shift (more precisely, all transformations of the form (10) below), but we were unable to prove this claim and leave it as a conjecture.

The paper is organized as follows. In the following section we give a general form of the transformations exemplified above and prove that they form a group. We further demonstrate that this group is isomorphic to a subgroup of SL(6,Z) (integer matrices with unit determinant). In Section 3, we give a comprehensive analysis of the structure of the subgroup generated by the transformations (5)–(8) by showing that it is isomorphic to a direct product of the symmetric group *P*<sup>5</sup> and the integer lattice Z5. In Section 4 we explore the relation between our transformations and three-term relations for the <sup>3</sup>*F*<sup>2</sup> hypergeometric function. In particular, we show that the contiguous relations for <sup>3</sup>*F*<sup>2</sup> functions studied recently in [20] can also be computed from the elements of our group. Section 5 provides a method of deducing summation formulas for <sup>4</sup>*F*<sup>3</sup> with non-linearly restricted parameters, while Section 6 contains the proof of Lemma 1. Finally, Appendix A contains explicit forms of some key elements of our subgroup and several *Wolfram Mathematica*® routines facilitating the group calculations.

#### **2. The Group Structure of the Unit Shift <sup>4</sup>***F***<sup>3</sup> Transformations**

Inspecting the <sup>4</sup>*F*<sup>3</sup> transformations presented in Section 1 we see that they share a common structure that we will present below. To this end, let **r** = (*a*, *b*, *c*, *d*,*e*, 1)*<sup>T</sup>* be the column vector and define

$$F(\mathbf{r}, f) = {}\_4F\_3 \left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right). \tag{9}$$

All transformations found in Section 1 have the following general form

$$F(\mathbf{r}, f) = C(\mathbf{r}, f) F(D\mathbf{r}, \eta), \tag{10}$$

where *D* is a unit determinant 6 × 6 matrix with integer entries and the bottom row (0, 0, 0, 0, 0, 1);

$$\eta = \frac{\varepsilon f + \lambda(\mathbf{r})}{\alpha(\mathbf{r})f + \beta(\mathbf{r})},\tag{11}$$

where *ε* ∈ {0, 1}, *λ*(**r**), *α*(**r**) and *β*(**r**) are rational functions of the arguments *a*, *b*, *c*, *d*,*e* (some of them may vanish identically, but *λ* = 1 if *ε* = 0). The coefficient *C*(**r**, *f*) has the form

$$C(\mathbf{r},f) = \frac{N(\mathbf{r})f + P(\mathbf{r})}{K(\mathbf{r})f + L(\mathbf{r})} \tag{12}$$

where *N*(**r**), *P*(**r**), *K*(**r**), *L*(**r**) are (possibly vanishing) functions of Γ-type, by which we mean ratios of products of gamma functions whose arguments are integer linear combinations of the components of (*a*, *b*, *c*, *d*,*e*, 1). When *N*(**r**) ≠ 0 we will additionally require that the ratio *P*(**r**)/*N*(**r**) be a rational function of the parameters. In fact, this last requirement is redundant, but in order to avoid it the following claim is needed: the ratio *F*2(**r**)/*F*1(**r**), with *Fi*, *i* = 1, 2, defined in (14), is not a function of gamma type for general parameters. We were unable to find a proof of this claim in the literature, although it seems to be generally accepted to be true.

Formula (10) defines a transformation *T* characterized by the matrix *D* and the functions *C*(**r**, *f*), *η* = *η*(**r**, *f*). Two such transformations *T*1, *T*<sup>2</sup> will be considered equal if *D*<sup>1</sup> = *D*2, *C*1(**r**, *f*) ≡ *C*2(**r**, *f*) and *η*1(**r**, *f*) ≡ *η*2(**r**, *f*).

According to the elementary relation (*f* + 1)*<sup>n</sup>* = (*f*)*n*(1 + *n*/ *f*), we have

$$F(\mathbf{r}, f) = F\_1(\mathbf{r}) + \frac{1}{f} F\_2(\mathbf{r}),\tag{13}$$

where

$$F\_1(\mathbf{r}) = {}\_3F\_2\left( \begin{matrix} a, b, c \\ d, e \end{matrix} \right), \quad F\_2(\mathbf{r}) = \frac{abc}{de} {}\_3F\_2\left( \begin{matrix} a+1, b+1, c+1 \\ d+1, e+1 \end{matrix} \right). \tag{14}$$
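The decomposition (13)–(14) is elementary to verify numerically by direct summation; a short Python sketch with arbitrary illustrative parameter values (not part of the paper's Mathematica code):

```python
def hyp(top, bottom, n_terms=20000):
    """Partial sum of the hypergeometric series at unit argument."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        ratio = 1.0 / (n + 1.0)
        for a in top:
            ratio *= a + n
        for b in bottom:
            ratio /= b + n
        term *= ratio
        total += term
    return total

# F(r, f) = F1(r) + F2(r)/f, with arbitrary illustrative parameters.
a, b, c, d, e, f = 0.5, 0.6, 0.7, 2.0, 4.0, 1.3
F  = hyp([a, b, c, f + 1], [d, e, f])
F1 = hyp([a, b, c], [d, e])
F2 = (a * b * c) / (d * e) * hyp([a + 1, b + 1, c + 1], [d + 1, e + 1])
print(F, F1 + F2 / f)  # equal up to truncation error
```

The identity is exact: each term of the unit-shifted <sup>4</sup>*F*<sup>3</sup> equals the corresponding <sup>3</sup>*F*<sup>2</sup> term multiplied by 1 + *n*/ *f*, and shifting the summation index in the *n*/ *f* part produces exactly *F*2(**r**)/ *f*.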

It is not immediately obvious if the composition of two transformations (10) with *η* and *C* having the forms (11) and (12), respectively, should have the same form. The following theorem shows that it is indeed the case and these transformations form a group.

**Theorem 1.** *Each transformation* (10) *necessarily has the form*

$$F(\mathbf{r}, f) = M(\mathbf{r}) \frac{\varepsilon f + \lambda(\mathbf{r})}{f} F(D\mathbf{r}, \eta), \text{ where } \eta = \frac{\varepsilon f + \lambda(\mathbf{r})}{\alpha(\mathbf{r})f + \beta(\mathbf{r})},\tag{15}$$

*M*(**r**) *is a function of* Γ*-type, ε* ∈ {0, 1}*, λ*(**r**)*, α*(**r**)*, β*(**r**) *are rational functions of the arguments a*, *b*, *c*, *d*,*e* (*possibly vanishing but with λ* = 1 *if ε* = 0)*.*

*The collection* T *of transformations* (15) *forms a group with respect to composition. More explicitly, if T*1, *T*<sup>2</sup> ∈ T *with parameters indexed correspondingly, then T* = *T*<sup>2</sup> ◦ *T*<sup>1</sup> *is given by*

$$\begin{aligned} \text{(I) If } \varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r}) \neq 0, \text{ then } \varepsilon = 1, \ M(\mathbf{r}) = M\_1(\mathbf{r}) M\_2(D\_1 \mathbf{r}) (\varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})),\\ \lambda(\mathbf{r}) = \frac{\varepsilon\_2 \lambda\_1(\mathbf{r}) + \lambda\_2(D\_1 \mathbf{r}) \beta\_1(\mathbf{r})}{\varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})}, \quad \alpha(\mathbf{r}) = \frac{\varepsilon\_1 \alpha\_2(D\_1 \mathbf{r}) + \alpha\_1(\mathbf{r}) \beta\_2(D\_1 \mathbf{r})}{\varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})},\\ \beta(\mathbf{r}) = \frac{\lambda\_1(\mathbf{r}) \alpha\_2(D\_1 \mathbf{r}) + \beta\_1(\mathbf{r}) \beta\_2(D\_1 \mathbf{r})}{\varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})}, \quad D = D\_2 D\_1. \end{aligned}$$

*(II) If ε*1*ε*<sup>2</sup> + *α*1(**r**)*λ*2(*D*1**r**) = 0*, then ε* = 0*, M*(**r**) = *M*1(**r**)*M*2(*D*1**r**)(*ε*2*λ*1(**r**) + *λ*2(*D*1**r**)*β*1(**r**))*,*

$$
\lambda(\mathbf{r}) = 1, \quad \alpha(\mathbf{r}) = \frac{\varepsilon\_1 \alpha\_2(D\_1 \mathbf{r}) + \alpha\_1(\mathbf{r}) \beta\_2(D\_1 \mathbf{r})}{\varepsilon\_2 \lambda\_1(\mathbf{r}) + \lambda\_2(D\_1 \mathbf{r}) \beta\_1(\mathbf{r})},
$$

$$
\beta(\mathbf{r}) = \frac{\lambda\_1(\mathbf{r}) \alpha\_2(D\_1 \mathbf{r}) + \beta\_1(\mathbf{r}) \beta\_2(D\_1 \mathbf{r})}{\varepsilon\_2 \lambda\_1(\mathbf{r}) + \lambda\_2(D\_1 \mathbf{r}) \beta\_1(\mathbf{r})}, \quad D = D\_2 D\_1.
$$

*Each T* ∈ T *of the form* (15) *has an inverse T*<sup>−1</sup> *determined by the parameters ε̂, M̂*(**r**)*, λ̂*(**r**)*, α̂*(**r**)*, β̂*(**r**)*, D̂ given by:*

*(III) If β*(**r**) ≠ 0*, then ε̂* = 1 *and*

$$
\hat{M}(\mathbf{r}) = \frac{\beta(D^{-1}\mathbf{r})}{M(D^{-1}\mathbf{r})(\varepsilon\beta(D^{-1}\mathbf{r}) - \alpha(D^{-1}\mathbf{r})\lambda(D^{-1}\mathbf{r}))}, \quad \hat{\lambda}(\mathbf{r}) = -\frac{\lambda(D^{-1}\mathbf{r})}{\beta(D^{-1}\mathbf{r})},
$$

$$
\hat{\alpha}(\mathbf{r}) = -\frac{\alpha(D^{-1}\mathbf{r})}{\beta(D^{-1}\mathbf{r})}, \quad \hat{\beta}(\mathbf{r}) = \frac{\varepsilon}{\beta(D^{-1}\mathbf{r})}, \quad \hat{D} = D^{-1}.
$$

*(IV) If β*(**r**) = 0*, then ε*ˆ = 0 *and*

$$\hat{M}(\mathbf{r}) = \frac{1}{M(D^{-1}\mathbf{r})\alpha(D^{-1}\mathbf{r})}, \quad \hat{\lambda}(\mathbf{r}) = 1, \quad \hat{\alpha}(\mathbf{r}) = \frac{\alpha(D^{-1}\mathbf{r})}{\lambda(D^{-1}\mathbf{r})}, \quad \hat{\beta}(\mathbf{r}) = -\frac{\varepsilon}{\lambda(D^{-1}\mathbf{r})}, \quad \hat{D} = D^{-1}.$$

**Proof of Theorem 1.** We start by showing that the form of the coefficient *C*(**r**, *f*)=(*N f* + *P*)/(*K f* + *L*) defined in (12) is restricted to

$$C(\mathbf{r}, f) = M + W/f,\tag{16}$$

where *M* = *M*(**r**), *W* = *W*(**r**) are some functions of Γ-type, possibly one of them vanishing. It follows from (12) and (13) that transformation (10) is equivalent to

$$\frac{F\_1(\mathbf{r})f + F\_2(\mathbf{r})}{f} = \frac{(Nf + P)(F\_1(D\mathbf{r})\eta + F\_2(D\mathbf{r}))}{(Kf + L)\eta},\tag{17}$$

where *N* = *N*(**r**), *P* = *P*(**r**), *K* = *K*(**r**), *L* = *L*(**r**). Solving this equation we get

$$\eta = \frac{f(fN+P)F\_2(D\mathbf{r})}{LF\_2(\mathbf{r}) + fKF\_2(\mathbf{r}) - f^2NF\_1(D\mathbf{r}) - fPF\_1(D\mathbf{r}) + f^2KF\_1(\mathbf{r}) + fLF\_1(\mathbf{r})}.$$

In order for *η* to have the form (11), the following identity must hold

$$\begin{aligned} f(fN+P)F\_2(D\mathbf{r})(\alpha f+\beta) \\ &= (\varepsilon f + \lambda)(LF\_2(\mathbf{r}) + fKF\_2(\mathbf{r}) - f^2NF\_1(D\mathbf{r}) - fPF\_1(D\mathbf{r}) + f^2KF\_1(\mathbf{r}) + fLF\_1(\mathbf{r})). \end{aligned} \tag{18}$$

The free term of the cubic on the right hand side equals *λLF*2(**r**) while it vanishes on the left hand side, so that *λL* = 0. If *L* = 0 we obtain (16). Otherwise, if *λ* = 0 identity (18) takes the form

$$(fN+P)F\_2(D\mathbf{r})(\alpha f+\beta) = LF\_2(\mathbf{r}) + fKF\_2(\mathbf{r}) - f^2NF\_1(D\mathbf{r}) - fPF\_1(D\mathbf{r}) + f^2KF\_1(\mathbf{r}) + fLF\_1(\mathbf{r}). \tag{19}$$

If *N* = 0, then *K* = 0 and we again arrive at (16). If *N* ≠ 0, the value *f* = −*P*/*N* must be a root of the quadratic on the right hand side of (19). In other words, we must have

$$LF\_2(\mathbf{r}) - \frac{P}{N}KF\_2(\mathbf{r}) - \frac{P^2}{N^2}NF\_1(D\mathbf{r}) + \frac{P}{N}PF\_1(D\mathbf{r}) + \frac{P^2}{N^2}KF\_1(\mathbf{r}) - \frac{P}{N}LF\_1(\mathbf{r}) = 0$$
or
$$\left(L - \frac{P}{N}K\right)\left(F\_2(\mathbf{r}) - \frac{P}{N}F\_1(\mathbf{r})\right) = 0.$$

Equality *L* = *PK*/*N* again leads to (16). The equality *F*2(**r**) = *PF*1(**r**)/*N* is impossible for rational *P*/*N*, as demonstrated by Ebisu and Iwasaki in ([20], Theorem 1.1) which proves our claim (16). If *P*/*N* is a function of gamma type then so is *F*2(**r**)/*F*1(**r**) which would contradict the claim made before the theorem, but as we could not find a proof of this claim we explicitly prohibit this situation in the definition of *C*(**r**, *f*).

Replacing (*N f* + *P*)/(*K f* + *L*) with *M* + *W*/ *f* in (17), we can now express *η* as follows:

$$\eta = -\frac{(Mf + W)F\_2(D\mathbf{r})}{(MF\_1(D\mathbf{r}) - F\_1(\mathbf{r}))f + WF\_1(D\mathbf{r}) - F\_2(\mathbf{r})}.\tag{20}$$

Next suppose *M* ≠ 0. Then *C*(**r**, *f*) = *M*(*εf* + *W*/*M*)/ *f* with *ε* = 1. Comparison of (20) with (11) yields *W*/*M* = *λ*, which proves that the transformation (10) must have the form (15). Moreover,

$$\alpha = -\frac{M\varepsilon F\_1(D\mathbf{r}) - F\_1(\mathbf{r})}{M F\_2(D\mathbf{r})}, \ \beta = -\frac{F\_1(D\mathbf{r})\lambda M - F\_2(\mathbf{r})}{M F\_2(D\mathbf{r})}.$$

These equalities can be rewritten as the system

$$\begin{cases} \quad F\_1(\mathbf{r}) = M(\varepsilon F\_1(D\mathbf{r}) + \alpha F\_2(D\mathbf{r})),\\ \quad F\_2(\mathbf{r}) = M(\lambda F\_1(D\mathbf{r}) + \beta F\_2(D\mathbf{r})). \end{cases} \tag{21}$$

Suppose now that *M* = 0, *W* ≠ 0. Then *C*(**r**, *f*) = *W*(*εf* + *λ*)/ *f* with *ε* = 0, *λ* = 1. From (20) we have

$$\eta = \frac{\varepsilon f + \lambda}{\alpha f + \beta}.$$

where again *ε* = 0, *λ* = 1, and *α* = *F*1(**r**)/(*WF*2(*D***r**)), *β* = −(*WF*1(*D***r**) − *F*2(**r**))/(*WF*2(*D***r**)) or

$$\begin{cases} \quad F\_1(\mathbf{r}) = W\alpha F\_2(D\mathbf{r}) = W(\varepsilon F\_1(D\mathbf{r}) + \alpha F\_2(D\mathbf{r})),\\ \quad F\_2(\mathbf{r}) = W(F\_1(D\mathbf{r}) + \beta F\_2(D\mathbf{r})) = W(\lambda F\_1(D\mathbf{r}) + \beta F\_2(D\mathbf{r})). \end{cases} \tag{22}$$

Renaming *W* into *M* we have thus proved that the transformation again has the form (15) and the system (21) is satisfied.

The computation of composition is straightforward:

$$\begin{split} T &= T\_2 \circ T\_1 \Longleftrightarrow T : F(\mathbf{r}, f) = M\_1(\mathbf{r}) \frac{\varepsilon\_1 f + \lambda\_1(\mathbf{r})}{f} F(D\_1 \mathbf{r}, \eta\_1) \\ &= M\_1(\mathbf{r}) M\_2(D\_1 \mathbf{r}) \frac{\varepsilon\_1 f + \lambda\_1(\mathbf{r})}{f} \frac{\varepsilon\_2 (\varepsilon\_1 f + \lambda\_1(\mathbf{r})) / (\alpha\_1(\mathbf{r}) f + \beta\_1(\mathbf{r})) + \lambda\_2(D\_1 \mathbf{r})}{(\varepsilon\_1 f + \lambda\_1(\mathbf{r})) / (\alpha\_1(\mathbf{r}) f + \beta\_1(\mathbf{r}))} F(D\_2 D\_1 \mathbf{r}, \eta\_2) \\ &= M\_1(\mathbf{r}) M\_2(D\_1 \mathbf{r}) \frac{\varepsilon\_2 (\varepsilon\_1 f + \lambda\_1(\mathbf{r})) + \lambda\_2(D\_1 \mathbf{r}) (\alpha\_1(\mathbf{r}) f + \beta\_1(\mathbf{r}))}{f} F(D\_2 D\_1 \mathbf{r}, \eta\_2) \\ &= M\_1(\mathbf{r}) M\_2(D\_1 \mathbf{r}) \frac{[\varepsilon\_1 \varepsilon\_2 + \alpha\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})] f + \varepsilon\_2 \lambda\_1(\mathbf{r}) + \beta\_1(\mathbf{r}) \lambda\_2(D\_1 \mathbf{r})}{f} F(D\_2 D\_1 \mathbf{r}, \eta\_2). \end{split}$$

If *ε*1*ε*2 + *α*1(**r**)*λ*2(*D*1**r**) ≠ 0, we can divide by this quantity, leading to case (I). If it vanishes we get case (II). Given a transformation *T* ∈ T of the form (15), it is rather straightforward to compute its inverse. We omit the details.
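The composition rule in cases (I) and (II) is simply composition of Möbius maps: recording *f* ↦ *η* = (*εf* + *λ*)/(*αf* + *β*) by the 2 × 2 matrix with rows (*ε*, *λ*) and (*α*, *β*), the composed transformation corresponds to the matrix product (with the entries of the second matrix evaluated at *D*1**r**), normalized so that the top-left entry becomes *ε* ∈ {0, 1}. A Python sketch with numeric stand-ins for the rational functions (an illustration, not the paper's code):

```python
from fractions import Fraction as Q

def mobius(m, f):
    """Apply f -> (m[0][0]*f + m[0][1]) / (m[1][0]*f + m[1][1])."""
    return (m[0][0] * f + m[0][1]) / (m[1][0] * f + m[1][1])

def compose(m2, m1):
    """Matrix of the composed map f -> mobius(m2, mobius(m1, f)),
    normalized so the top-left entry is 1 when it is nonzero (case (I))."""
    p = [[m2[0][0] * m1[0][0] + m2[0][1] * m1[1][0], m2[0][0] * m1[0][1] + m2[0][1] * m1[1][1]],
         [m2[1][0] * m1[0][0] + m2[1][1] * m1[1][0], m2[1][0] * m1[0][1] + m2[1][1] * m1[1][1]]]
    top = p[0][0]
    if top != 0:  # case (I): epsilon = 1 after division
        return [[Q(1), p[0][1] / top], [p[1][0] / top, p[1][1] / top]]
    return p      # case (II): epsilon = 0; Mobius maps are scale-invariant

# numeric stand-ins for (eps, lambda, alpha, beta); the entries of the
# second matrix are understood as already evaluated at D1*r
m1 = [[Q(1), Q(2, 3)], [Q(1, 5), Q(0)]]
m2 = [[Q(1), Q(-1, 2)], [Q(0), Q(3, 7)]]
m = compose(m2, m1)
f = Q(4)
print(mobius(m, f) == mobius(m2, mobius(m1, f)))  # True
```

The entries of the normalized product reproduce exactly the expressions for *λ*(**r**), *α*(**r**) and *β*(**r**) in case (I) of Theorem 1.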

**Remark 1.** *Theorem 1 implies that each transformation T* ∈ T *is uniquely characterized by the collection* {*ε*, *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**), *D*}*, where ε* ∈ {0, 1}*, M*(**r**) *is a function of gamma type, λ*(**r**)*, α*(**r**) *and β*(**r**) *are rational functions of the parameters a*, *b*, *c*, *d*,*e and D is a* 6 × 6 *unit determinant integer matrix with bottom row* (0, ... , 0, 1)*. We will express this fact by writing T* ∼ {*ε*, *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**), *D*}*. Occasionally, we will omit the dependence on* **r** *in the notation of the functions M*(**r**)*, λ*(**r**)*, α*(**r**)*, β*(**r**) *for brevity.*

Please note that for *ε* = 1 and non-vanishing *α*, *β* and *λ* the system (21) takes the form of <sup>4</sup>*F*<sup>3</sup> → <sup>3</sup>*F*<sup>2</sup> reduction formulas

$$\begin{cases} \ F(D\mathbf{r}, \alpha(\mathbf{r})^{-1}) = M(\mathbf{r})^{-1} F\_1(\mathbf{r}),\\ \ F(D\mathbf{r}, \lambda(\mathbf{r})/\beta(\mathbf{r})) = (M(\mathbf{r})\lambda(\mathbf{r}))^{-1} F\_2(\mathbf{r}). \end{cases}$$

Next, we clarify the structure of the group T further. The composition rule involves all the parameters *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**) and *D*. The following theorem implies that the matrix *D* determines all other parameters uniquely. Denote by SL∗(*n*,Z) the subgroup of the special linear group SL(*n*,Z) of *n* × *n* integer matrices with unit determinant comprising matrices whose last row has the form (0, . . . , 0, 1).

**Theorem 2.** *The mapping T* ∼ {*ε*, *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**), *DT*} → *DT is an isomorphism, so that the group* (T , ◦) *is isomorphic to a subgroup of* SL∗(6,Z) *which we denote by* (DT , ·)*.*

**Proof of Theorem 2.** One direction is clear: each transformation *T* ∈ T by construction defines a matrix *DT* ∈ SL (6,Z) and the composition rule (I), (II) in Theorem 1 involves the product of matrices. Hence, to establish our claim it remains to prove that the kernel of the homomorphism T → *DT* is trivial. Assume the opposite: there exists a transformation *T* ∈ T with the identity matrix *D* = *I* and non-trivial parameters *ε*, *M*, *λ*, *α*, *β*. The system (21) then takes the form

$$\begin{cases} (1 - M\varepsilon)F\_1(\mathbf{r}) = M\alpha F\_2(\mathbf{r}), \\\ M\lambda F\_1(\mathbf{r}) = (1 - M\beta)F\_2(\mathbf{r}). \end{cases} \tag{23}$$

If *α* = *λ* = 0 we get *M* = *ε* = 1 from the first equation and *β* = 1 from the second equation, which amounts to the trivial identity transformation. We will show that all other cases are impossible. Indeed, Ebisu and Iwasaki demonstrated in ([20], Theorem 1.1) that the functions *F*1(**r**) and *F*2(**r**) are linearly independent

over the field of rational functions of the parameters. If *α* = 0 and *λ* ≠ 0, then *M* = *ε* = 1 from the first equation and *F*1(**r**)/*F*2(**r**)=(1 − *β*)/*λ* from the second equation, contradicting linear independence. Similarly, if *α* ≠ 0 and *λ* = 0, then *M* = 1/*β* from the second equation, so that *F*2(**r**)/*F*1(**r**)=(1 − *ε*/*β*)/(*α*/*β*) is rational from the first equation, leading again to a contradiction. Finally, if both *α* ≠ 0 and *λ* ≠ 0 we arrive at the identities

$$\frac{F\_2(\mathbf{r})}{F\_1(\mathbf{r})} = \frac{1 - M\varepsilon}{\alpha M} = \frac{\lambda M}{1 - M\beta} \quad \Rightarrow \quad (1 - M\beta)(1 - M\varepsilon) = \alpha \lambda M^2.$$

Linear independence of the functions *F*1(**r**), *F*2(**r**) over rational functions implies that the function *M* = *M*(**r**) must be a ratio of products of gamma functions irreducible to a rational function. On the other hand, by the last equality *M*(**r**) solves a quadratic equation with rational coefficients:

$$M = M(\mathbf{r}) = \mu(\mathbf{r}) \pm \sqrt{\nu(\mathbf{r})}$$

with rational *μ*, *ν*. It is easy to see that this is not possible, as Γ is meromorphic with an infinite number of poles and no branch points, while *μ*(**r**) ± √*ν*(**r**) may only have a finite number of poles and zeros and has branch points.

#### **3. The Subgroup of** *T* **Generated by Known Transformations**

We can now rewrite the transformations (5)–(8) in the standard form (15). Denote by *ψ* = *d* + *e* − *a* − *b* − *c* − 1 the parametric excess of the function on the left hand side of (15). Identity (7) is determined by the following set of parameters

$$M\_1 = \frac{\Gamma(\psi + 1)\Gamma(d)\Gamma(e)}{\Gamma(a)\Gamma(d + e - a - c)\Gamma(d + e - a - b)}, \quad \varepsilon\_1 = 1, \quad \lambda\_1 = \frac{bc}{\psi},\tag{24a}$$

$$D\_1 = \begin{bmatrix} -1 & -1 & -1 & 1 & 1 & -1 \\ -1 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & -1 & 1 & 1 & 0 \\ -1 & -1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \alpha\_1 = \frac{1}{\psi}, \quad \beta\_1 = 0. \tag{24b}$$

We will call this transformation *T*1.

The standard form (15) of identity (6) is characterized by the following parameters:

$$M\_2 = \frac{\Gamma(e)\Gamma(\psi + 1)}{\Gamma(e + d - a - b)\Gamma(e - c)}, \quad \varepsilon\_2 = 1, \quad \lambda\_2 = \frac{c(-d + a + b)}{\psi}, \tag{25a}$$

$$D\_2 = \begin{bmatrix} -1 & 0 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -1 & -1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \alpha\_2 = 0, \quad \beta\_2 = \frac{e - c - 1}{\psi}. \tag{25b}$$

We will call this transformation *T*2.

The standard parameters of transformation (8) are given by

$$M\_3 = \frac{\Gamma(\psi + 1)\Gamma(e)}{\Gamma(e - a)\Gamma(e + d - b - c)}, \quad \varepsilon\_3 = 1, \quad \lambda\_3 = \frac{abc}{d\psi}, \tag{26a}$$

$$D\_3 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & -1 & -1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \alpha\_3 = \frac{1}{d}, \quad \beta\_3 = \frac{bc}{d\psi}. \tag{26b}$$

We will call this transformation *T*3.

Finally, transformation (5) in the standard form (15) is parameterized by

$$M\_4 = \frac{\Gamma(e)\Gamma(\psi)}{\Gamma(e-c)\Gamma(\psi+c)}, \quad \varepsilon\_4 = 1, \quad \lambda\_4 = 0, \quad \alpha\_4 = \frac{d-a-b-1}{(d-a-1)(d-b-1)},\tag{27a}$$

$$D\_4 = \begin{bmatrix} -1 & 0 & 0 & 1 & 0 & -1 \\ 0 & -1 & 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -1 & -1 & 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \beta\_4 = \frac{ab}{(d-a-1)(d-b-1)}.\tag{27b}$$

We will call this transformation *T*4. It is easy to see that it is of order 2, i.e., *T*4 ◦ *T*4 = *I*.
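The matrices (24b)–(27b) can be checked mechanically. The sketch below (plain Python rather than the paper's Mathematica routines) verifies that each generator is a unit determinant integer matrix with the required bottom row, and that *D*4 squares to the identity, matching the order-2 property of *T*4:

```python
from fractions import Fraction as Q

def det(m):
    """Determinant via Gaussian elimination over exact rationals."""
    n = len(m)
    a = [[Q(x) for x in row] for row in m]
    d = Q(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if a[r][i] != 0), None)
        if piv is None:
            return Q(0)
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            k = a[r][i] / a[i][i]
            for col in range(i, n):
                a[r][col] -= k * a[i][col]
    return d

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(6)) for j in range(6)] for i in range(6)]

D1 = [[-1,-1,-1,1,1,-1],[-1,0,0,1,0,0],[-1,0,0,0,1,0],[-1,0,-1,1,1,0],[-1,-1,0,1,1,0],[0,0,0,0,0,1]]
D2 = [[-1,0,0,1,0,0],[0,-1,0,1,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],[-1,-1,0,1,1,0],[0,0,0,0,0,1]]
D3 = [[1,0,0,0,0,0],[0,-1,0,1,0,0],[0,0,-1,1,0,0],[0,0,0,1,0,1],[0,-1,-1,1,1,0],[0,0,0,0,0,1]]
D4 = [[-1,0,0,1,0,-1],[0,-1,0,1,0,-1],[0,0,1,0,0,0],[0,0,0,1,0,0],[-1,-1,0,1,1,-1],[0,0,0,0,0,1]]

I6 = [[int(i == j) for j in range(6)] for i in range(6)]
print([det(D) for D in (D1, D2, D3, D4)])  # all equal 1
print(matmul(D4, D4) == I6)                # True: T4 has order 2
```

Exact rational arithmetic is used so that the determinant values are not subject to floating-point error.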

The four transformations *T*1, *T*2, *T*3, *T*4 (or, equivalently, (5)–(8)) combined with permutations of the upper and lower parameters generate a subgroup of T which we will call T̂. The isomorphism established in Theorem 2 induces an isomorphism between T̂ and a subgroup of SL∗(6,Z) which we denote by DT̂.

A complete characterization of T̂ and DT̂ will follow. Before we turn to it, we remark that, to our belief, the complete group T contains no elements other than those in T̂. We were unable, however, to prove this claim. Let us thus state it as a conjecture.

**Conjecture.** The subgroup T̂ generated by the transformations (24)–(27) coincides with the entire group T of all transformations of the form (10) or, equivalently, of the form (15).

Denote by *Sj*, *j* = 1, ... , 5, the transformation shifting the *j*-th component of the parameter vector **r** by +1, i.e., *Sj* is characterized by the matrix *Ŝj* such that *Ŝ*1**r** = (*a* + 1, *b*, *c*, *d*,*e*, 1), *Ŝ*2**r** = (*a*, *b* + 1, *c*, *d*,*e*, 1), etc. It is not *a priori* obvious that such transformations should exist among the elements of T̂. The following theorem shows that it is indeed the case.

**Theorem 3.** *The group* T̂ *contains the transformations S*<sub>j</sub>, *j* = 1, . . . , 5*.*

**Proof of Theorem 3.** Due to permutation symmetry it is clearly sufficient to display the transformations *S*<sup>1</sup> and *S*4. We will need the inverse of the transformation *T*<sup>1</sup> defined in (24). Using Theorem 1 we calculate

$${}\_{4}F\_{3}\left( \begin{matrix} a,b,c,f+1\\d,e,f \end{matrix} \right) = \frac{\hat{M}\_{1}}{f}\, {}\_{4}F\_{3}\left( \begin{matrix} d+e-a-b-c-1,d-a-1,e-a-1,\hat{\eta}\_{1}+1\\d+e-a-c-1,d+e-a-b-1,\hat{\eta}\_{1} \end{matrix} \right),\tag{28}$$

where

$$
\hat{M}\_1 = \frac{\Gamma(d)\Gamma(e)\Gamma(\psi)}{\Gamma(a)\Gamma(\psi+b)\Gamma(\psi+c)}, \quad \hat{\varepsilon}\_1 = 0, \quad \hat{\lambda}\_1 = 1, \quad \hat{\alpha}\_1 = \frac{1}{(d-a-1)(e-a-1)},
$$

$$
\hat{\beta}\_1 = \frac{-a}{(d-a-1)(e-a-1)}, \quad \text{so that } \hat{\eta}\_1 = \frac{(d-a-1)(e-a-1)}{f-a}.
$$

Next, exchanging the roles of *d* + *e* − *a* − *b* − 1 and *d* and the roles of *d* − *a* − 1 and *c* in (5) or, equivalently, post-composing *T*<sub>4</sub> with the permutation (13)(45), we obtain a transformation that we call *T̂*<sub>4</sub>. Then *T̂*<sub>4</sub> ∘ *T̂*<sub>4</sub> takes the form

$${}\_4F\_3 \left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right) = \frac{\Gamma(c) \Gamma(d) \Gamma(\psi)}{\Gamma(b + \psi) \Gamma(c + \psi) \Gamma(a + 1)}\, {}\_4F\_3\left( \begin{matrix} \psi - 1, e - a - 1, d - a - 1, \tilde{\eta}\_4 + 1 \\ c + \psi, b + \psi, \tilde{\eta}\_4 \end{matrix} \right) \tag{29}$$

with

$$
\tilde{\eta}\_4 = \frac{(\psi - 1)(e - a - 1)(d - a - 1)f}{abc + (1 + 2a + a^2 - bc - d - ad - e - ae + de)f}.
$$

Applying *T*<sub>1</sub><sup>−1</sup> to the right hand side of (29) we obtain the transformation *S*<sub>1</sub>:

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=M\frac{\varepsilon f+\lambda}{f}\,{}\_{4}F\_{3}\left(\begin{matrix}a+1,b,c,\eta+1\\d,e,\eta\end{matrix}\right),\tag{30}$$


where *ε* = 1, and

$$M = 1 - \frac{bc}{(d - a - 1)(e - a - 1)}, \quad \lambda = \frac{abc}{a^2 - bc + (d - 1)(e - 1) - a(d + e - 2)},$$

$$\alpha = \frac{d + e - a - b - c - 2}{a^2 - bc + (d - 1)(e - 1) - a(d + e - 2)}, \quad \beta = -\frac{a(d + e - a - b - c - 2)}{a^2 - bc + (d - 1)(e - 1) - a(d + e - 2)}.$$

According to (15) we thus obtain the following expression for *η*:

$$\eta = \frac{abc + (1 + 2a + a^2 - bc - d - ad - e - ae + de)f}{a(2 + a + b + c - d - e) - (2 + a + b + c - d - e)f}.$$

Application of the transformation *T*<sup>3</sup> given by (26) to itself yields *T*<sup>3</sup> ◦ *T*<sup>3</sup> in the form:

$${}\_4F\_3\left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right) = \frac{a(d-b)(d-c)(bc + f\psi) + (d+1)(e-a)(abc + fd\psi)}{fd(d+1)e\psi}\, {}\_4F\_3\left( \begin{matrix} a, b+1, c+1, \tilde{\eta}\_3 + 1 \\ d+2, e+1, \tilde{\eta}\_3 \end{matrix} \right),$$

where

$$
\tilde{\eta}\_{3} = \frac{a(d-b)(d-c)(bc+f\psi) + (d+1)(e-a)(abc+fd\psi)}{(d-b)(d-c)(bc+f\psi) + (e-a)(abc+fd\psi)}.
$$

On the other hand, using (28) we compute *T*<sub>1</sub><sup>−2</sup> as follows:

$${}\_4F\_3 \binom{a,b,c,f+1}{d,e,f} = \frac{(d-1)(e-1)(f-a)}{f(d-a-1)(e-a-1)} {}\_4F\_3 \binom{a,b-1,c-1,\eta'\_1+1}{d-1,e-1,\eta'\_1}$$

with

$$
\eta\_1' = \frac{(b-1)(c-1)(f-a)}{(d-a-1)(e-a-1) - \psi(f-a)}.
$$

Comparing these formulas we see that the composition *T*<sub>1</sub><sup>−2</sup> ∘ *T*<sub>3</sub><sup>2</sup> gives the transformation *S*<sub>4</sub> shifting *d* → *d* + 1 while *a*, *b*, *c*, *e* remain intact:

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=\frac{f+\lambda}{f}{}\_{4}F\_{3}\left(\begin{matrix}a,b,c,\eta+1\\d+1,e,\eta\end{matrix}\right),\tag{31}$$

so that *ε* = 1, *M* = 1,

$$\lambda = \frac{abc}{d(d+e-a-b-c-1)}, \quad \alpha = \frac{1}{d}, \quad \beta = \frac{(b-d)(c-d) + a(b+c-d)}{d(d+e-a-b-c-1)}, \quad \eta = \frac{\varepsilon f + \lambda}{\alpha f + \beta}.$$

Each transformation *S*<sub>j</sub>, *j* = 1, ... , 5, obviously generates a subgroup of T̂ isomorphic to ℤ, the additive group of integers. Hence, in the parlance of group theory, the above theorem can be restated and strengthened as follows.

**Corollary 1.** *The group* T̂ *contains a subgroup* S *isomorphic to the* 5*-dimensional integer lattice* ℤ<sup>5</sup>*. Furthermore, this subgroup is normal.*

**Proof of Corollary 1.** By the previous theorem we only need to prove normality. Denote by S the subgroup of the matrix group DT̂ generated by the shift matrices *Ŝ*<sub>j</sub>, *j* = 1, ... , 5. Clearly, S comprises 6 × 6 matrices whose principal 5 × 5 sub-matrix equals the identity matrix *I*<sub>5</sub>, whose 6-th row is (0, ... , 0, 1) and whose 6-th column is (*k*<sub>1</sub>, ... , *k*<sub>5</sub>, 1)<sup>T</sup> for some *k*<sub>i</sub> ∈ ℤ. As all elements of DT̂ have integer entries and the bottom row (0, ... , 0, 1), it is easy to see that for any shift matrix *S* ∈ S and any matrix *D* ∈ DT̂ both products *DS* and *SD* have the principal 5 × 5 sub-matrix equal to that of *D* and the last column of the form (*k*<sub>1</sub>, ... , *k*<sub>5</sub>, 1)<sup>T</sup> for some *k*<sub>i</sub> ∈ ℤ. Running over all elements of S while keeping *D* fixed we see that the left and right cosets of the element *D* with respect to S coincide.
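The conjugation argument can be illustrated in a few lines of Python (a sketch, using the matrix *D*<sub>4</sub> from (27b) as a sample element of DT̂; since *T*<sub>4</sub> is an involution, *D*<sub>4</sub><sup>−1</sup> = *D*<sub>4</sub>):

```python
# Conjugating a shift matrix S in the subgroup S by an element D of DT^
# yields another shift matrix: identity 5x5 principal block, bottom row
# (0,...,0,1), integer last column.  Here D = D_4 of (27b), D^{-1} = D.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(6)) for j in range(6)]
            for i in range(6)]

D4 = [[-1,  0, 0, 1, 0, -1],
      [ 0, -1, 0, 1, 0, -1],
      [ 0,  0, 1, 0, 0,  0],
      [ 0,  0, 0, 1, 0,  0],
      [-1, -1, 0, 1, 1, -1],
      [ 0,  0, 0, 0, 0,  1]]

def shift(k):
    """Element of S: identity matrix except for the 6-th column (k1,...,k5,1)."""
    S = [[int(i == j) for j in range(6)] for i in range(6)]
    for i in range(5):
        S[i][5] = k[i]
    return S

C = matmul(matmul(D4, shift([2, -1, 0, 3, 5])), D4)   # D S D^{-1}
assert all(C[i][j] == int(i == j) for i in range(5) for j in range(5))
assert C[5] == [0, 0, 0, 0, 0, 1]                     # C is again a shift matrix
```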

The above corollary implies that we can form the factor group DT̂/S. Each element of DT̂/S is a coset containing a representative with the last column (0, ... , 0, 1)<sup>T</sup>. Next, we note that the principal 5 × 5 sub-matrix of the matrix *D*<sub>2</sub> from (25b) of the transformation (6) is equal to that of Kummer's transformation (2). This transformation, together with the permutation group *P*<sub>3</sub> × *P*<sub>2</sub> representing the obvious invariance with respect to separate permutations of the upper and lower parameters, generates the entire group of Thomae transformations [9]. Next, comparing the principal 5 × 5 sub-matrices of the further generators *D*<sub>1</sub>, *D*<sub>3</sub>, *D*<sub>4</sub> with the matrices of the Thomae transformations found, for instance, in ([4], Appendix 1), we see that all of them occur among the elements of the group of Thomae transformations. Hence, it remains to apply Theorem 3.2 from [9] asserting that the group of Thomae transformations is isomorphic to the 120-element symmetric group *P*<sub>5</sub> of permutations of five symbols. The isomorphism is given by the linear change of variables seen in (4). Hence, our final result is the following theorem.

**Theorem 4.** *The group* T̂ *is isomorphic to P*<sub>5</sub> × ℤ<sup>5</sup>*.*

As the entire group of Thomae transformations for <sup>3</sup>*F*<sup>2</sup> can be generated by the identity (2) and the permutation group *P*<sub>3</sub> × *P*<sub>2</sub>, the above theorem implies that our entire group T̂ can be generated by the identity (6) (transformation *T*<sub>2</sub>), the top parameter shift transformation *S*<sub>1</sub>, and the obvious symmetries *P*<sub>3</sub> × *P*<sub>2</sub>. For example, the bottom parameter shift transformation can be obtained as follows:

$$\begin{aligned} (d - c, e - c, \psi, \psi + a, \psi + b) &\stackrel{T\_2^2}{\longrightarrow} (a, b, c, d, e) \stackrel{S\_1 S\_3^{-1}}{\longrightarrow} (a + 1, b, c - 1, d, e) \\ &\stackrel{T\_2^2}{\longrightarrow} (d - c + 1, e - c + 1, \psi, \psi + a + 1, \psi + b) \stackrel{S\_1^{-1} S\_2^{-1}}{\longrightarrow} (d - c, e - c, \psi, \psi + a + 1, \psi + b). \end{aligned}$$

Comparing the first and last terms in this chain we see that we have obtained the bottom parameter shift transformation *S*<sub>4</sub> using only *T*<sub>2</sub> and the top shift transformations *S*<sub>1</sub>, *S*<sub>2</sub>, *S*<sub>3</sub> obtained from *S*<sub>1</sub> by permuting the top parameters.

Theorem 4 further implies that there is a straightforward algorithm for computing any transformation from the group T̂. Details are given in Appendix A to this paper.

#### **4. Related <sup>3</sup>***F***<sup>2</sup> Transformation**

The proof of Theorem 1 shows that each transformation *T* ∈ T is associated with the system (21) of two <sup>3</sup>*F*<sup>2</sup> transformations. This system leads immediately to the following proposition.

**Proposition 1.** *Each transformation T* ∼ {*ε*, *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**)}∈T *induces a transformation for the ratio*

$$\Psi(\mathbf{r}) := \frac{F\_2(\mathbf{r})}{F\_1(\mathbf{r})} = \frac{abc}{de}\, \frac{\,{}\_3F\_2\left(\begin{matrix} a+1, b+1, c+1\\ d+1, e+1 \end{matrix}\right)}{\,{}\_3F\_2\left(\begin{matrix} a, b, c\\ d, e \end{matrix}\right)} = \frac{d}{dx} \log {}\_3F\_2\left(\begin{matrix} a, b, c\\ d, e \end{matrix}\; \middle|\; x\right)\Bigg|\_{x=1}$$

*of the form*

$$
\Psi(\mathbf{r}) = \frac{\beta(\mathbf{r})\Psi(D\mathbf{r}) + \lambda(\mathbf{r})}{\alpha(\mathbf{r})\Psi(D\mathbf{r}) + \varepsilon}.
$$

Next, we observe that any two elements of T generate a three-term relation for <sup>3</sup>*F*2.

**Proposition 2.** *For any two transformations from the group* T*: T*<sub>1</sub> ∼ {*ε*<sub>1</sub>, *M*<sub>1</sub>(**r**)*, λ*<sub>1</sub>(**r**)*, α*<sub>1</sub>(**r**)*, β*<sub>1</sub>(**r**)*, D*<sub>1</sub>} *and T*<sub>2</sub> ∼ {*ε*<sub>2</sub>, *M*<sub>2</sub>(**r**), *λ*<sub>2</sub>(**r**), *α*<sub>2</sub>(**r**), *β*<sub>2</sub>(**r**), *D*<sub>2</sub>} *satisfying the condition α*<sub>2</sub>*β*<sub>1</sub> − *α*<sub>1</sub>*β*<sub>2</sub> ≠ 0*, the following identities hold*

$$F\_1(\mathbf{r}) = M\_1 \frac{\alpha\_2 \beta\_1 \varepsilon\_1 - \alpha\_1 \alpha\_2 \lambda\_1}{\alpha\_2 \beta\_1 - \alpha\_1 \beta\_2} F\_1(D\_1 \mathbf{r}) + M\_2 \frac{\alpha\_1 \alpha\_2 \lambda\_2 - \alpha\_1 \beta\_2 \varepsilon\_2}{\alpha\_2 \beta\_1 - \alpha\_1 \beta\_2} F\_1(D\_2 \mathbf{r})\tag{32}$$

(*the dependence on* **r** *is omitted for brevity*) *and*

$$F\_{2}(\mathbf{r}) = M\_{1} \frac{\beta\_{1} \beta\_{2} \varepsilon\_{1} - \alpha\_{1} \beta\_{2} \lambda\_{1}}{\alpha\_{2} \beta\_{1} - \alpha\_{1} \beta\_{2}} F\_{1}(D\_{1} \mathbf{r}) + M\_{2} \frac{\alpha\_{2} \beta\_{1} \lambda\_{2} - \beta\_{1} \beta\_{2} \varepsilon\_{2}}{\alpha\_{2} \beta\_{1} - \alpha\_{1} \beta\_{2}} F\_{1}(D\_{2} \mathbf{r}),\tag{33}$$
*where, as before,* $F\_{1}(\mathbf{r}) = {}\_{3}F\_{2}\left(\begin{matrix} a, b, c\\ d, e \end{matrix}\right)$, $F\_{2}(\mathbf{r}) = \frac{abc}{de}\,{}\_{3}F\_{2}\left(\begin{matrix} a+1, b+1, c+1\\ d+1, e+1 \end{matrix}\right)$.

**Proof of Proposition 2.** Solving (21) for each transformation we, in particular, get the system of equations:

$$\begin{cases} F\_{1}(D\_{1}\mathbf{r}) = \left(\beta\_{1}F\_{1}(\mathbf{r}) - \alpha\_{1}F\_{2}(\mathbf{r})\right) / \left(M\_{1}(\beta\_{1}\varepsilon\_{1} - \alpha\_{1}\lambda\_{1})\right), \\ F\_{1}(D\_{2}\mathbf{r}) = \left(\beta\_{2}F\_{1}(\mathbf{r}) - \alpha\_{2}F\_{2}(\mathbf{r})\right) / \left(M\_{2}(\beta\_{2}\varepsilon\_{2} - \alpha\_{2}\lambda\_{2})\right). \end{cases}$$

Solving the above system for *F*1(**r**), *F*2(**r**) we arrive at (32) and (33).
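This elimination can be checked exactly on arbitrary rational data; a Python sketch (the numbers below are arbitrary test values, not parameters of an actual transformation):

```python
from fractions import Fraction as Fr

# Given the 2x2 system from the proof, the combinations in (32)-(33)
# recover F1(r) and F2(r) exactly.
e1, M1, l1, a1, b1 = Fr(1), Fr(3, 2), Fr(2, 5), Fr(1, 3), Fr(4, 7)
e2, M2, l2, a2, b2 = Fr(1), Fr(5, 4), Fr(1, 6), Fr(2, 9), Fr(3, 5)
F1, F2 = Fr(7, 3), Fr(11, 8)

X = (b1 * F1 - a1 * F2) / (M1 * (b1 * e1 - a1 * l1))   # plays F1(D1 r)
Y = (b2 * F1 - a2 * F2) / (M2 * (b2 * e2 - a2 * l2))   # plays F1(D2 r)

det = a2 * b1 - a1 * b2                                 # nonzero by assumption
rhs32 = M1 * (a2 * b1 * e1 - a1 * a2 * l1) / det * X \
      + M2 * (a1 * a2 * l2 - a1 * b2 * e2) / det * Y
rhs33 = M1 * (b1 * b2 * e1 - a1 * b2 * l1) / det * X \
      + M2 * (a2 * b1 * l2 - b1 * b2 * e2) / det * Y
assert rhs32 == F1 and rhs33 == F2
```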

If the matrices *D*1, *D*<sup>2</sup> contain no shifts (i.e., the last column is (0, 0, 0, 0, 0, 1)*T*), then they correspond to Thomae's relations, so that *F*1(*D*1**r**), *F*1(*D*2**r**) are equal to each other up to a factor of gamma type. In this case, identities (32) and (33) become two-term transformations. However, for non-zero shifts Proposition 2 generates genuine three-term relations for <sup>3</sup>*F*2(*a*, *b*, *c*; *d*,*e*). For example, we obtain

$${}\_{3}F\_{2}\left(\begin{matrix}a,b,c\\d,e\end{matrix}\right)=\frac{\Gamma(d+1)\Gamma(c)\Gamma(d+e-a-b-c)}{\Gamma(a+1)\Gamma(d+e-a-b)\Gamma(d+e-a-c)}{}\_{3}F\_{2}\left(\begin{matrix}d+e-a-b-c-1,d-a,e-a\\d+e-a-c,e+d-a-b\end{matrix}\right)$$

$$+\frac{(a-d)(d-b)(d-c)}{d(1+d)e}\,{}\_{3}F\_{2}\left(\begin{matrix}a+1,b+1,c+1\\d+2,e+1\end{matrix}\right).\tag{34}$$

An important subclass of these transformations are pure shifts (the principal 5 × 5 submatrices of *D*1, *D*<sup>2</sup> are identity matrices). This subclass comprises the so-called contiguous relations, studied recently in detail in [20]. In particular, Theorem 1.1 from [20] claims the existence of the unique rational functions *u*(**r**), *v*(**r**) such that

$${}\_{3}F\_{2}\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) = u(\mathbf{r})\,{}\_{3}F\_{2}\left( \begin{matrix} a+k\_{1},b+k\_{2},c+k\_{3} \\ d+k\_{4},e+k\_{5} \end{matrix} \right) + v(\mathbf{r})\,{}\_{3}F\_{2}\left( \begin{matrix} a+m\_{1},b+m\_{2},c+m\_{3} \\ d+m\_{4},e+m\_{5} \end{matrix} \right) \tag{35}$$

for any two distinct non-zero integer vectors (*k*<sub>1</sub>, *k*<sub>2</sub>, *k*<sub>3</sub>, *k*<sub>4</sub>, *k*<sub>5</sub>) and (*m*<sub>1</sub>, *m*<sub>2</sub>, *m*<sub>3</sub>, *m*<sub>4</sub>, *m*<sub>5</sub>). Furthermore, Ebisu and Iwasaki presented a rather explicit algorithm in [20] for computing the functions *u*(**r**), *v*(**r**) for given shifts. Proposition 2 furnishes an alternative method for computing these functions. For its realization we provide a collection of *Mathematica* routines in Appendix A to this paper. Our algorithm works as follows: the first step is to calculate the transformations *T*<sub>1</sub>, *T*<sub>2</sub> ∈ T̂ associated with the matrices

$$D\_1 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & k\_1 \\ 0 & 1 & 0 & 0 & 0 & k\_2 \\ 0 & 0 & 1 & 0 & 0 & k\_3 \\ 0 & 0 & 0 & 1 & 0 & k\_4 \\ 0 & 0 & 0 & 0 & 1 & k\_5 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \\ D\_2 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & m\_1 \\ 0 & 1 & 0 & 0 & 0 & m\_2 \\ 0 & 0 & 1 & 0 & 0 & m\_3 \\ 0 & 0 & 0 & 1 & 0 & m\_4 \\ 0 & 0 & 0 & 0 & 1 & m\_5 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

To this end we simply iterate the transformations *S*<sub>±</sub>, *S*<sup>±</sup> realizing the shifts by ±1 of the first and fourth parameters, respectively, combining them with the necessary permutations of the upper and lower parameters. To calculate the resulting *λ*, *α* and *β* the composition rule from Theorem 1 is applied with the help of a *Mathematica* routine. Then it remains to apply formula (32). For example, we get:

$$\begin{split} \,{}\_3F\_2\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) &= \frac{d+e-a-b-c-1}{e}\, {}\_3F\_2\left( \begin{matrix} a+1,b+1,c+1 \\ d+1,e+1 \end{matrix} \right) \\ &+ \frac{(a-d)(d-b)(d-c)}{d(d+1)e}\, {}\_3F\_2\left( \begin{matrix} a+1,b+1,c+1 \\ d+2,e+1 \end{matrix} \right). \end{split} \tag{36}$$

Please note that identity (34) is obtained from (36) by an application of a Thomae relation to the first term on the right hand side. In a similar fashion, contiguous relations and Thomae transformations generate all three-term relations from Proposition 2 induced by the elements of the group T̂. We note that the relations covered by Proposition 2 are different from the three-term relations for <sup>3</sup>*F*<sup>2</sup> summarized by Bailey in ([21], Section 3.7) and studied from a group-theoretic viewpoint in ([9], Section IV). This can be seen, for example, by comparing the matrices ([9], Equation (2.6c)) with the matrices *D* associated with T̂.
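Relations of the shape (35) are easy to test numerically. The sketch below checks a classical contiguous relation of exactly this shape, with (*k*) = (0, 0, 0, 1, 0), (*m*) = (1, 1, 1, 2, 1), *u* = 1 and *v* = *abc*/(*d*(*d* + 1)*e*); it is quoted here as an independent sanity check (it is not one of the relations generated above) and holds term by term, so matched partial sums agree exactly:

```python
from fractions import Fraction as Fr

def pfq_partial(tops, bots, N):
    """Partial sum (N terms) of pFq(tops; bots; 1)."""
    s, t = Fr(0), Fr(1)
    for n in range(N):
        s += t
        num, den = Fr(1), Fr(n + 1)
        for p in tops:
            num *= p + n
        for q in bots:
            den *= q + n
        t *= num / den
    return s

# 3F2(a,b,c;d,e;1) = 3F2(a,b,c;d+1,e;1)
#                    + abc/(d(d+1)e) * 3F2(a+1,b+1,c+1;d+2,e+1;1)
a, b, c, d, e = Fr(1, 2), Fr(2, 3), Fr(3, 4), Fr(7, 5), Fr(9, 4)
N = 30
lhs  = pfq_partial([a, b, c], [d, e], N)
rhs1 = pfq_partial([a, b, c], [d + 1, e], N)
rhs2 = pfq_partial([a + 1, b + 1, c + 1], [d + 2, e + 1], N - 1)
assert lhs == rhs1 + a * b * c / (d * (d + 1) * e) * rhs2
```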

The system (21) follows from the representation (13) of <sup>4</sup>*F*<sup>3</sup> with one unit shift as a linear combination of two <sup>3</sup>*F*<sup>2</sup> functions. However, Formula (13) is just one example of such a decomposition. The two propositions below give many more ways to expand the <sup>4</sup>*F*<sup>3</sup> with unit shift into a linear combination of <sup>3</sup>*F*<sup>2</sup> functions. Proposition 3 is proved directly in terms of hypergeometric series manipulations, as its results will be used in Section 6 to prove Lemma 1, which was used to generate the group T̂.

**Proposition 3.** *The following identities hold true:*

$${}\_{3}F\_{2}\left(\begin{matrix}a,b,c\\d,e\end{matrix}\right)+\gamma\_{3}F\_{2}\left(\begin{matrix}a-1,b,c\\d,e\end{matrix}\right)=(\gamma+1){}\_{4}F\_{3}\left(\begin{matrix}a-1,b,c,\xi+1\\d,e,\xi\end{matrix}\right),\tag{37}$$

*where ξ* = (*γ* + 1)(*a* − 1)*;*

$${}\_{3}F\_{2}\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) + \gamma\_{3}F\_{2}\left( \begin{matrix} a+1,b,c \\ d+1,e \end{matrix} \right) = (\gamma+1){}\_{4}F\_{3}\left( \begin{matrix} a,b,c,\nu+1 \\ d+1,e,\nu \end{matrix} \right),\tag{38}$$

*where ν* = (*γ* + 1)*ad*/(*γd* + *a*)*; and*

$${}\_{3}F\_{2}\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) + \gamma\_{3}F\_{2}\left( \begin{matrix} a,b+1,c+1 \\ d+1,e+1 \end{matrix} \right) = {}\_{4}F\_{3}\left( \begin{matrix} a-1,b,c,\lambda+1 \\ d,e,\lambda \end{matrix} \right),\tag{39}$$

*where λ* = (*a* − 1)*bc*/(*bc* + *γde*).

**Proof of Proposition 3.** We have

$$\begin{aligned} \,\_3F\_2\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) + \gamma\_3 F\_2\left( \begin{matrix} a-1,b,c \\ d,e \end{matrix} \right) &= 1 + \gamma + \sum\_{n=1}^{\infty} \frac{(a)\_n (b)\_n (c)\_n + \gamma (a-1)\_n (b)\_n (c)\_n}{(d)\_n (e)\_n n!} \\ &= (1+\gamma) \left( 1 + \sum\_{n=1}^{\infty} \frac{(a-1)\_n (b)\_n (c)\_n}{(d)\_n (e)\_n n!} \left( 1 + \frac{n}{(a-1)(\gamma+1)} \right) \right) \\ &= (\gamma+1) \,\_4F\_3\left( \begin{matrix} a-1,b,c,\xi+1 \\ d,e,\xi \end{matrix} \right), \end{aligned}$$

where *ξ* = (*γ* + 1)(*a* − 1) and we used (*a*)<sub>n</sub> = (*a* − 1)<sub>n</sub>(1 + *n*/(*a* − 1)). Next,

$$\begin{aligned} \,{}\_3F\_2\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) + \gamma\,{}\_3F\_2\left( \begin{matrix} a+1,b,c \\ d+1,e \end{matrix} \right) &= 1 + \gamma + \sum\_{n=1}^\infty \frac{(a)\_n (b)\_n (c)\_n}{(d+1)\_n (e)\_n n!} \left( 1 + \frac{n}{d} + \gamma + \frac{\gamma n}{a} \right) \\ &= (\gamma+1)\,{}\_4F\_3\left( \begin{matrix} a,b,c,\nu+1 \\ d+1,e,\nu \end{matrix} \right), \end{aligned}$$

where *ν* = (*γ* + 1)*ad*/(*γd* + *a*) and we used (*a* + 1)<sub>n</sub> = (*a*)<sub>n</sub>(1 + *n*/*a*).

Finally, using the obvious identities (*b*)<sub>n</sub> = *b*(*b* + 1)<sub>n−1</sub> and (*a*)<sub>n</sub> = (*a* − 1)<sub>n+1</sub>/(*a* − 1) we get

$$\begin{aligned} {}\_3F\_2\left( \begin{matrix} a,b,c \\ d,e \end{matrix} \right) + \gamma\,{}\_3F\_2\left( \begin{matrix} a,b+1,c+1 \\ d+1,e+1 \end{matrix} \right) &= 1 + \sum\_{n=1}^{\infty} \frac{bc(a)\_{n-1}(b+1)\_{n-1}(c+1)\_{n-1}(a+n-1)}{de(d+1)\_{n-1}(e+1)\_{n-1}n!} + \gamma\,{}\_3F\_2\left( \begin{matrix} a,b+1,c+1 \\ d+1,e+1 \end{matrix} \right) \\ &= 1 + \sum\_{n=0}^{\infty} \frac{(a)\_n(b+1)\_n(c+1)\_n}{(d+1)\_n(e+1)\_n n!}\left( \frac{bc(a+n)}{de(n+1)} + \gamma \right) \\ &= 1 + \sum\_{n=0}^{\infty} \frac{(a-1)\_{n+1}(b)\_{n+1}(c)\_{n+1}}{(d)\_{n+1}(e)\_{n+1}(n+1)!}\,\frac{de}{bc(a-1)}\left( \frac{bc(a+n)}{de} + \gamma(n+1) \right) \\ &= 1 + \sum\_{n=1}^{\infty} \frac{(a-1)\_n(b)\_n(c)\_n}{(d)\_n(e)\_n n!}\left( 1 + \frac{n(bc+\gamma de)}{(a-1)bc} \right) = {}\_4F\_3\left( \begin{matrix} a-1,b,c,\lambda+1 \\ d,e,\lambda \end{matrix} \right), \end{aligned}$$

where *λ* = (*a* − 1)*bc*/(*bc* + *γde*).
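Since each of (37)–(39) holds term by term, matched partial sums of the two sides agree exactly; a quick Python check of (37) with arbitrary rational test values:

```python
from fractions import Fraction as Fr

def pfq_partial(tops, bots, N):
    """Partial sum (N terms) of pFq(tops; bots; 1)."""
    s, t = Fr(0), Fr(1)
    for n in range(N):
        s += t
        num, den = Fr(1), Fr(n + 1)
        for p in tops:
            num *= p + n
        for q in bots:
            den *= q + n
        t *= num / den
    return s

# Identity (37): 3F2 + gamma * 3F2 = (gamma+1) * 4F3 with xi = (gamma+1)(a-1)
a, b, c, d, e, g = Fr(5, 4), Fr(1, 3), Fr(2, 7), Fr(8, 5), Fr(9, 2), Fr(3, 8)
xi = (g + 1) * (a - 1)
lhs = pfq_partial([a, b, c], [d, e], 25) + g * pfq_partial([a - 1, b, c], [d, e], 25)
rhs = (g + 1) * pfq_partial([a - 1, b, c, xi + 1], [d, e, xi], 25)
assert lhs == rhs
```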

Other ways to represent <sup>4</sup>*F*<sup>3</sup> with one unit shift as a linear combination of <sup>3</sup>*F*<sup>2</sup> are found by substituting (32) and (33) into (13). This is done in the following proposition.

**Proposition 4.** *Any two transformations from the group* T*: T*<sub>1</sub> ∼ {*ε*<sub>1</sub>, *M*<sub>1</sub>(**r**)*, λ*<sub>1</sub>(**r**), *α*<sub>1</sub>(**r**), *β*<sub>1</sub>(**r**), *D*<sub>1</sub>} *and T*<sub>2</sub> ∼ {*ε*<sub>2</sub>, *M*<sub>2</sub>(**r**)*, λ*<sub>2</sub>(**r**), *α*<sub>2</sub>(**r**), *β*<sub>2</sub>(**r**), *D*<sub>2</sub>} *satisfying the condition α*<sub>2</sub>*β*<sub>1</sub> − *α*<sub>1</sub>*β*<sub>2</sub> ≠ 0 (*for brevity we omit the dependence on* **r** *in the parameters*) *induce the decomposition*

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=M\_{1}\frac{\beta\_{1}\varepsilon\_{1}-\alpha\_{1}\lambda\_{1}}{\alpha\_{2}\beta\_{1}-\alpha\_{1}\beta\_{2}}\left(\alpha\_{2}+\frac{\beta\_{2}}{f}\right)F\_{1}(D\_{1}\mathbf{r})+M\_{2}\frac{\alpha\_{2}\lambda\_{2}-\beta\_{2}\varepsilon\_{2}}{\alpha\_{2}\beta\_{1}-\alpha\_{1}\beta\_{2}}\left(\alpha\_{1}+\frac{\beta\_{1}}{f}\right)F\_{1}(D\_{2}\mathbf{r}),\quad(40)$$
*where* $F\_{1}(\mathbf{r})={}\_{3}F\_{2}\left(\begin{matrix}a,b,c\\d,e\end{matrix}\right)$.

Let us exemplify (40) with the following two decompositions:

$$\begin{aligned} {}\_4F\_3 \left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right) &= \left( \frac{d + e - a - b - c - 1}{e} + \frac{abc}{def} \right) {}\_3F\_2 \left( \begin{matrix} a + 1, b + 1, c + 1 \\ d + 1, e + 1 \end{matrix} \right) \\ &+ \frac{(a - d)(d - b)(d - c)}{ed(1 + d)} {}\_3F\_2 \left( \begin{matrix} a + 1, b + 1, c + 1 \\ d + 2, e + 1 \end{matrix} \right) \end{aligned}$$

and

$${}\_4F\_3 \left( \begin{matrix} a,b,c,f+1 \\ d,e,f \end{matrix} \right) = A\_3F\_2 \left( \begin{matrix} a+1,b,c \\ d,e \end{matrix} \right) + B\_3F\_2 \left( \begin{matrix} a+1,b+1,c+1 \\ d+2,e+1 \end{matrix} \right),$$

where

$$A = 1 + \frac{bc(f - a)}{f(b(d - c) - d(d + e - a - c - 1))}, \quad B = \frac{bc(a - d)(b - d)(c - d)(f - a)}{def(1 + d)(b(c - d) + d(d + e - a - c - 1))}.$$

#### **5. Summation Formulas**

In ([22], Equation (45)) we established the following summation formula

$${}\_{4}F\_{3}\left(\begin{array}{c} a,b,c,f+1\\d,e,f\end{array}\right)=\frac{\Gamma(d)\Gamma(e)}{\Gamma(a+1)\Gamma(b+1)\Gamma(c+1)},\tag{41a}$$

valid if

$$
e\_1(d,e) - e\_1(a,b,c) = 2 \quad \text{and} \quad f = \frac{e\_3(a,b,c)}{e\_2(a,b,c) - e\_2(1-d,1-e)},
\tag{41b}
$$

where *ek*(·) denotes the *k*-th elementary symmetric polynomial. Now, if we apply any transformation of the form (15) and impose the above restrictions on the parameters on the right hand side, we obtain

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right) = M(\mathbf{r})\frac{\varepsilon f + \lambda(\mathbf{r})}{f}F(\mathbf{q},\eta) = \frac{M(\mathbf{r})(\varepsilon f + \lambda(\mathbf{r}))\Gamma(q\_{4})\Gamma(q\_{5})}{f\Gamma(q\_{1}+1)\Gamma(q\_{2}+1)\Gamma(q\_{3}+1)},\tag{42a}$$

where (*q*1, *q*2, *q*3, *q*4, *q*5, 1) = *D***r**, and the conditions *e*1(*q*4, *q*5) − *e*1(*q*1, *q*2, *q*3) = 2 and

$$\eta = \frac{\varepsilon f + \lambda(\mathbf{r})}{\alpha(\mathbf{r})f + \beta(\mathbf{r})} = \frac{e\_3(q\_1, q\_2, q\_3)}{e\_2(q\_1, q\_2, q\_3) - e\_2(1 - q\_4, 1 - q\_5)}$$

must hold. Expressing *f*, these conditions are equivalent to

$$e\_1(q\_4, q\_5) - e\_1(q\_1, q\_2, q\_3) = 2 \quad \text{and} \quad f = \frac{\lambda(\mathbf{r})(e\_2(q\_1, q\_2, q\_3) - e\_2(1 - q\_4, 1 - q\_5)) - \beta(\mathbf{r})e\_3(q\_1, q\_2, q\_3)}{\alpha(\mathbf{r})e\_3(q\_1, q\_2, q\_3) - \varepsilon(e\_2(q\_1, q\_2, q\_3) - e\_2(1 - q\_4, 1 - q\_5))}.\tag{42b}$$

As *qi* = *qi*(*a*, *b*, *c*, *d*,*e*), *i* = 1, . . . , 5, are linear functions we arrive at the following proposition:

**Proposition 5.** *Each transformation T* ∈ T *as characterized by the collection* {*ε*, *M*(**r**), *λ*(**r**), *α*(**r**), *β*(**r**), *D*} *corresponds to a summation formula* (42a) *valid under restrictions* (42b) *with* (*q*1,..., *q*5, 1) = *D***r***.*
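The base summation formula (41a) under the constraints (41b) can itself be checked numerically before any transformation is applied; a Python sketch with the concrete choice *a* = *b* = *c* = 1 (and *d*, *e* chosen so that the first condition holds; the truncated series converges like 1/*N* here):

```python
import math

# Check (41a) under (41b) with a = b = c = 1, d + e = a + b + c + 2.
a, b, c = 1.0, 1.0, 1.0
d, e = 2.0, 3.0
e2_abc = a * b + a * c + b * c
f = (a * b * c) / (e2_abc - (1 - d) * (1 - e))   # second condition in (41b)

total, t = 0.0, 1.0                               # truncated 4F3(a,b,c,f+1; d,e,f; 1)
N = 200000
for n in range(N):
    total += t
    t *= (a + n) * (b + n) * (c + n) * (f + 1 + n) / ((d + n) * (e + n) * (f + n) * (n + 1))

rhs = math.gamma(d) * math.gamma(e) / (math.gamma(a + 1) * math.gamma(b + 1) * math.gamma(c + 1))
assert abs(total - rhs) < 1e-4   # here rhs = 2 and the truncation tail is 2/(N+1)
```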

We will illustrate Proposition 5 by applying it to transformation (25). The first condition in (42b) becomes *e* = *c* + 2. In view of this condition, formula (42a) takes the form

$${}\_4F\_3 \left( \begin{matrix} a,b,c,f+1 \\ d,c+2,f \end{matrix} \right) = \frac{(c+1)\Gamma(d)\Gamma(d-a-b+2)(f\psi+c(a+b-d))}{\Gamma(d-a+1)\Gamma(d-b+1)f\psi},$$

where *ψ* = *d* − *a* − *b* + 1 and, by the second condition in (42b),

$$f = -\frac{c(a+b-d)}{\psi} + \frac{(d-a)(d-b)c}{\psi((d-a)(d-b+c) + (d-b)c + (d-1)(a+b-d-c-1))}.$$

Further examples will be given in [23].

#### **6. Proof of Lemma 1**

Write identity (13) in expanded form

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)={}\_{3}F\_{2}\left(\begin{matrix}a,b,c\\d,e\end{matrix}\right)+\frac{abc}{fde}\,{}\_{3}F\_{2}\left(\begin{matrix}a+1,b+1,c+1\\d+1,e+1\end{matrix}\right).\tag{43}$$
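Identity (43) holds term by term, so matched truncations of both sides agree exactly; a quick Python check with arbitrary rational parameters:

```python
from fractions import Fraction as Fr

def pfq_partial(tops, bots, N):
    """Partial sum (N terms) of pFq(tops; bots; 1)."""
    s, t = Fr(0), Fr(1)
    for n in range(N):
        s += t
        num, den = Fr(1), Fr(n + 1)
        for p in tops:
            num *= p + n
        for q in bots:
            den *= q + n
        t *= num / den
    return s

a, b, c, d, e, f = Fr(1, 2), Fr(2, 3), Fr(3, 5), Fr(7, 4), Fr(9, 5), Fr(5, 6)
N = 30
lhs = pfq_partial([a, b, c, f + 1], [d, e, f], N)
rhs = pfq_partial([a, b, c], [d, e], N) \
    + a * b * c / (f * d * e) * pfq_partial([a + 1, b + 1, c + 1], [d + 1, e + 1], N - 1)
assert lhs == rhs
```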

Applying Thomae's transformation (3) to both <sup>3</sup>*F*<sup>2</sup> functions on the right hand side, we get (*ψ* = *d* + *e* − *a* − *b* − *c* − 1):

$$\begin{aligned} {}\_4F\_3 \left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right) &= \frac{\Gamma(\psi + 1)\Gamma(d)\Gamma(e)}{\Gamma(a)\Gamma(\psi + b + 1)\Gamma(\psi + c + 1)} \times \\ &\qquad \left[ {}\_3F\_2 \left( \begin{matrix} \psi + 1, d - a, e - a \\ \psi + b + 1, \psi + c + 1 \end{matrix} \right) + \frac{bc}{f\psi}\, {}\_3F\_2 \left( \begin{matrix} \psi, d - a, e - a \\ \psi + b + 1, \psi + c + 1 \end{matrix} \right) \right]. \end{aligned}$$

Now we employ Proposition 3. Application of Formula (37) to the linear combination in brackets yields

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=\frac{(f\psi+bc)\Gamma(\psi)\Gamma(d)\Gamma(e)}{f\Gamma(a)\Gamma(\psi+b+1)\Gamma(\psi+c+1)}\,{}\_{4}F\_{3}\left(\begin{matrix}\psi,d-a,e-a,\eta+1\\\psi+b+1,\psi+c+1,\eta\end{matrix}\right),$$

where *η* = *ψ* + *bc*/*f*. This proves the transformation given by (7).
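The transformation just proved can also be verified numerically; a Python sketch with arbitrary test parameters chosen so that both series converge quickly:

```python
import math

def pfq(tops, bots, N=4000):
    """Truncated pFq(tops; bots; 1)."""
    s, t = 0.0, 1.0
    for n in range(N):
        s += t
        num, den = 1.0, n + 1.0
        for p in tops:
            num *= p + n
        for q in bots:
            den *= q + n
        t *= num / den
    return s

a, b, c, d, e, f = 3.5, 0.6, 0.7, 4.0, 5.0, 1.3
psi = d + e - a - b - c - 1
eta = psi + b * c / f
lhs = pfq([a, b, c, f + 1], [d, e, f])
pref = (f * psi + b * c) / f * math.gamma(psi) * math.gamma(d) * math.gamma(e) \
     / (math.gamma(a) * math.gamma(psi + b + 1) * math.gamma(psi + c + 1))
rhs = pref * pfq([psi, d - a, e - a, eta + 1], [psi + b + 1, psi + c + 1, eta])
assert abs(lhs - rhs) < 1e-6 * abs(lhs)
```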

In a similar fashion, if we apply the Kummer transformation (2) to <sup>3</sup>*F*<sup>2</sup> on the right hand side of (43) we get:

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=\frac{\Gamma(\psi+1)\Gamma(d)}{\Gamma(d-a)\Gamma(\psi+a+1)}\left[{}\_{3}F\_{2}\left(\begin{matrix}a,e-b,e-c\\e,\psi+a+1\end{matrix}\right)+\frac{abc}{fe\psi}\,{}\_{3}F\_{2}\left(\begin{matrix}a+1,e-b,e-c\\e+1,\psi+a+1\end{matrix}\right)\right].$$

Applying the relation (38) to the linear combination in brackets we then obtain

$${}\_4F\_3 \left( \begin{matrix} a, b, c, f+1 \\ d, e, f \end{matrix} \right) = \frac{(abc+fe\psi) \Gamma(\psi) \Gamma(d)}{fe\, \Gamma(d-a) \Gamma(\psi+a+1)}\, {}\_4F\_3 \left( \begin{matrix} a, e-b, e-c, \lambda+1 \\ e+1, \psi+a+1, \lambda \end{matrix} \right),$$

,

where

$$
\lambda = \frac{abc + fe\psi}{bc + f\psi}.
$$

This proves transformation (8).

**Author Contributions:** The authors contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.

**Funding:** The second author was funded by the Ministry of Science and Higher Education of the Russian Federation (supplementary agreement No. 075-02-2020-1482-1 of 21 April 2020).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

In this appendix we display the explicit form of the main building blocks needed for calculating the elements of the group T̂. Just as for Thomae's transformations ([4], Appendix 1), we have ten different identities with zero shifts. They are obtained as follows: permuting *a* ↔ *b* and *a* ↔ *c* in Formula (7) we get three transformations, while *a* ↔ *b*, *a* ↔ *c* and *d* ↔ *e* in (6) lead to six more. Adding the identity transformation we arrive at ten "Thomae-like" zero-shift transformations for <sup>4</sup>*F*<sup>3</sup> containing the parameter pair *f* + 1, *f*. The entire 120-element subgroup of "Thomae-like" zero-shift transformations is obtained by the obvious 12 permutations of the three top and two bottom parameters on the right hand side of each of the ten transformations described above.

All further transformations are obtained by consecutive application of the four shifting transformations *S*<sub>±</sub>, *S*<sup>±</sup> and permutations of top and bottom parameters to the 120 transformations described above. The transformation *S*<sub>+</sub> shifting the top parameter *a* by +1 (denoted by *S*<sub>1</sub> in Section 3) is given by (30). Combining parameters, it can be written as:

$${}\_4F\_3\left( \begin{matrix} a, b, c, f+1\\ d, e, f \end{matrix} \right) = \left( 1 - \frac{bc}{(d-a-1)(e-a-1)} \right) \left( 1 + \frac{\lambda}{f} \right) {}\_4F\_3\left( \begin{matrix} a+1, b, c, \eta+1\\ d, e, \eta \end{matrix} \right), \tag{A1}$$

where

$$\lambda = \frac{abc}{a(2+a-d-e)-bc+(d-1)(e-1)}, \quad \eta = \frac{abc+((a+1)(a+1-d-e)-bc+de)f}{(a-f)(2+a+b+c-d-e)}.$$
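The compact *η* in (A1) agrees identically with *η* = (*εf* + *λ*)/(*αf* + *β*) built from the parameters of (30); an exact rational-arithmetic check (the values below are arbitrary test values):

```python
from fractions import Fraction as Fr

# eta from (30): eps = 1, lambda, alpha, beta all share the denominator Q.
a, b, c, d, e, f = Fr(5, 4), Fr(1, 3), Fr(2, 7), Fr(8, 3), Fr(7, 2), Fr(9, 5)
Q = a * a - b * c + (d - 1) * (e - 1) - a * (d + e - 2)
lam = a * b * c / Q
alpha = (d + e - a - b - c - 2) / Q
beta = -a * (d + e - a - b - c - 2) / Q
eta_30 = (f + lam) / (alpha * f + beta)

# Compact form from (A1):
eta_A1 = (a * b * c + ((a + 1) * (a + 1 - d - e) - b * c + d * e) * f) \
       / ((a - f) * (2 + a + b + c - d - e))
assert eta_30 == eta_A1
```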

Its inverse *S*− is given by:

$${}\_4F\_3 \left( \begin{matrix} a, b, c, f + 1 \\ d, e, f \end{matrix} \right) = \left( 1 + \frac{bc}{\psi f} \right) {}\_4F\_3 \left( \begin{matrix} a - 1, b, c, \eta + 1 \\ d, e, \eta \end{matrix} \right), \tag{A2}$$


where

$$
\eta = \frac{(a-1)(bc+\psi f)}{a(d+e-a)+bc-de+\psi f}.
$$

The transformation *S*<sup>+</sup> shifting the bottom parameter *d* by +1 (denoted by *S*<sup>4</sup> in Section 3) is given by (31). It can be written more compactly as

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=\frac{abc+\psi df}{\psi df}{}\_{4}F\_{3}\left(\begin{matrix}a,b,c,\eta+1\\d+1,e,\eta\end{matrix}\right),\tag{A3}$$


where *ψ* = *e* + *d* − *a* − *b* − *c* − 1 and

$$\eta = \frac{abc + \psi df}{d(d - a - b - c) + ab + ac + bc + \psi f}.$$
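As with (A1), the compact *η* in (A3) agrees identically with *η* = (*f* + *λ*)/(*αf* + *β*) built from the parameters of (31); an exact check with arbitrary rational test values:

```python
from fractions import Fraction as Fr

# eta from (31): eps = 1, M = 1, lambda = abc/(d*psi), alpha = 1/d,
# beta = ((b-d)(c-d) + a(b+c-d))/(d*psi).
a, b, c, d, e, f = Fr(1, 2), Fr(2, 3), Fr(3, 5), Fr(11, 4), Fr(13, 3), Fr(7, 6)
psi = d + e - a - b - c - 1
lam = a * b * c / (d * psi)
alpha = Fr(1) / d
beta = ((b - d) * (c - d) + a * (b + c - d)) / (d * psi)
eta_31 = (f + lam) / (alpha * f + beta)

# Compact form from (A3):
eta_A3 = (a * b * c + psi * d * f) / (d * (d - a - b - c) + a * b + a * c + b * c + psi * f)
assert eta_31 == eta_A3
```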

Finally, its inverse transformation *S*<sup>−</sup> shifting a bottom parameter by −1 has the form

$${}\_{4}F\_{3}\left(\begin{matrix}a,b,c,f+1\\d,e,f\end{matrix}\right)=\frac{[((d-b-1)(d-c-1)-a(d-b-c-1))f-abc](d-1)}{(d-a-1)(d-b-1)(d-c-1)f}{}\_{4}F\_{3}\left(\begin{matrix}a,b,c,\eta+1\\d-1,e,\eta\end{matrix}\right),\tag{A4}$$

where

$$\eta = \frac{abc + [(1-d)(d-a-b-c-1) - ab - ac - bc]f}{(d+e-a-b-c-2)(f-d+1)}.$$

In the remaining part of the Appendix we present several *Wolfram Mathematica*® routines intended for dealing with the group T, together with an example of their use. Listing A1 contains the function CMPS[*T*1, *T*2] that takes as input two transformations *T*1, *T*2 and computes their composition *T*2 ◦ *T*1. The form in which the parameters *εi*, *Mi*, *λi*, *αi*, *βi* and *Di*, *i* = 1, 2, should be supplied can be seen from the example in Listing A5. Similarly, Listing A2 contains the function INV[*T*] that computes the inverse of a given transformation *T*. The output provided by CMPS and INV can be printed in an easily readable form using the function PRN[*T*] given in Listing A3. The same Listing A3 contains the function INPT[*T*] that converts the output form of the functions CMPS and INV into the input form of the same functions, so that further compositions or inverses can be computed from such output. For numerical verification of the outputs of CMPS and INV, the function RHS[*T*] presented in Listing A4 converts these outputs into an expression that can be evaluated by the *Mathematica* function N[...] after the parameters have been assigned numerical values; see the example at the end of Listing A5.

**Listing A1.** Composition.


**Listing A2.** Inversion.

INV[TT\_]:=Module[{eps=TT[[1]], M=TT[[2]], lam=TT[[3]], alpha=TT[[4]], beta=TT[[5]], D=TT[[6]],


**Listing A3.** Conversion into input form and printing.


**Listing A4.** Conversion into computable form.


**Listing A5.** Example of use.

```
(* Definition of the first transformation *)
eps1=1; M1[a_,b_,c_,d_,e_]:=Gamma[d+e-a-b-c]*Gamma[d]*Gamma[e]/Gamma[a]/Gamma[d+e-a-c]/Gamma[d+e-a-b];
lam1[a_,b_,c_,d_,e_]:=b*c/(d+e-a-b-c-1); alpha1[a_,b_,c_,d_,e_]:=1/(d+e-a-b-c-1);
beta1[a_,b_,c_,d_,e_]:=0; D1={{-1,-1,-1,1,1,-1}, {-1,0,0,1,0,0}, {-1,0,0,0,1,0}, {-1,0,-1,1,1,0},
{-1,-1,0,1,1,0}, {0,0,0,0,0,1}};

(* Definition of the second transformation *)
eps2=1; M2[a_,b_,c_,d_,e_]:=Gamma[d+e-a-b-c]*Gamma[e]/Gamma[d+e-a-b]/Gamma[e-c];
lam2[a_,b_,c_,d_,e_]:=(a+b-d)*c/(d+e-a-b-c-1); alpha2[a_,b_,c_,d_,e_]:=0;
beta2[a_,b_,c_,d_,e_]:=(e-c-1)/(d+e-a-b-c-1); D2={{-1,0,0,1,0,0}, {0,-1,0,1,0,0}, {0,0,1,0,0,0},
{0,0,0,1,0,0}, {-1,-1,0,1,1,0}, {0,0,0,0,0,1}};

(* Composition T2T1 *)
T1T2=CMPS[{eps1, M1, lam1, alpha1, beta1, D1}, {eps2, M2, lam2, alpha2, beta2, D2}];

(* Inverse of T1 *)
T1INV = INV[{eps1, M1, lam1, alpha1, beta1, D1}];

(* Printing the parameters of T2T1 *)
PRN[T1T2]
epsilon=1
M=-(((a c+(1+b-d) e) Gamma[d] Gamma[-1-a-b-c+d+e])/(e Gamma[-b+d] Gamma[-a-c+d+e]))
Lambda=-((a b c)/(a c+(1+b-d) e))
alpha=(1+b-d)/(a c+e+b e-d e)
Beta=0
Parameters={1+b,-c+e,-a+e,-a-c+d+e,1+e}
eta=(-a b c+(a c+e+b e-d e) f)/((1+b-d) f)

(* Computing the composition of the results of the previous operations *)
NEW=CMPS[INPT[T1T2], INPT[T1INV]];

(* Numerical verification of the transformation NEW using RHS[...] *)
a=1+2/3; b=-13/17+2; c=3/7; d=5/11; e=5+44/17; f=12/13;
In[51]:= N[HypergeometricPFQ[{a, b, c, f + 1}, {d, e, f}, 1], 15]
Out[51]= 2.22268615827388
In[52]:= N[RHS[NEW], 15]
Out[52]= 2.22268615827388
```
#### **References**



**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Laguerre-Type Exponentials, Laguerre Derivatives and Applications. A Survey**

#### **Paolo Emilio Ricci**

Department of Mathematics, International Telematic University UniNettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy; paoloemilioricci@gmail.com or p.ricci@uninettunouniversity.net

Received: 19 October 2020; Accepted: 13 November 2020; Published: 18 November 2020

**Abstract:** Laguerre-type derivatives and their related eigenfunctions are presented; they allow the construction of new special functions, determined by the action of a differential isomorphism within the space of analytic functions. Such an isomorphism can be iterated indefinitely, so that the resulting construction can be repeated endlessly in a cyclic way. Some applications of this theory are made in the field of population dynamics and in the solution of Cauchy problems for particular linear dynamical systems.

**Keywords:** Laguerre-type derivative; Laguerre-type exponentials; Laguerre-type special functions; multivariable and multi-index Laguerre polynomials; population dynamics models; Laguerre-type linear dynamical systems

**MSC:** 33C45; 33C99; 30D05; 33B10; 33C10; 92D25; 34A30

#### **1. Introduction**

This survey article is dedicated to a topic that has received little attention in the past and therefore seems not to be well known to the mathematical community.

Recently, the role of the Laguerre derivative has been considered in a few papers.

In [1], the authors introduce an interesting application of the Wright functions of the first kind to the solution of fractional ordinary differential equations with variable coefficients, generalizing Bessel-type equations.

In [2], the authors use the same tool in combinatorics, a completely different area, while in [3] an operational approach to the subject is examined in the framework of Clifford algebras.

In the past, the Laguerre-type exponentials and the related Laguerre derivative were introduced and studied in several articles (see [4–15]), and applications to special functions were obtained. In particular, Laguerre-type versions of Bessel, Appell, Bell and multivariate functions were defined.

The operator $DxD = D + xD^2$ determines a linear differential isomorphism acting on the space of analytic functions of the variable $x$. By using this isomorphism, a sort of parallel structure is created within this space, in such a way that the differentiation properties have their counterparts, which can be immediately derived.

Furthermore, iterations of the Laguerre derivative can be defined, so that this parallelism with the space of analytic functions can be iterated too, in an endless way.

Therefore, a cyclic construction is created within the space that repeats the same structure at a higher level of differentiation order. It is one of the great cycles that sometimes occur within mathematical theories: for example, in number theory the Fibonacci numbers $F\_n$ with Fibonacci indices constitute a higher sequence of Fibonacci numbers which still satisfies the same recursion, i.e., $F\_{F\_{n+2}} = F\_{F\_{n+1}} + F\_{F\_n}$, and this property can be iterated indefinitely.

However, the operator $D\_L = DxD$ and its iterates $D\_{nL} = DxDxDx \cdots DxD$ are not completely new, since they can be considered particular cases of the hyper-Bessel differential operators with $\alpha\_0 = \alpha\_1 = \cdots = \alpha\_n = 1$ (the special case considered in operational calculus by Ditkin and Prudnikov [16]). In general, the *Bessel-type differential operators of arbitrary order n* were introduced by Dimovski in 1966 [17] and later called *hyper-Bessel operators* by Kiryakova, because they are closely related to their eigenfunctions, called hyper-Bessel functions by Delerue [18] in 1953. These operators were studied in 1994 by Kiryakova in her book [19] (Ch. 3).

Since the Laguerre-type exponentials are convex increasing functions on the positive semi-axis, with a growth slower than that of the ordinary exponential, a natural application is made in Section 7 in the context of population dynamics.

Laguerre-type linear dynamical systems were also considered in Section 8.

#### **2. The Laguerre Derivative and the Relevant Exponentials**

The *Laguerre derivative* is defined by

$$D\_L := D \mathbf{x} D = D + \mathbf{x} D^2,\tag{1}$$

where $D = D\_x = d/dx$.

It is an interesting operator. In fact, just as the exponential function $e^{ax}$ ($a$ constant) is an eigenfunction of the derivative operator $D = D\_x$, i.e.,

$$De^{ax} = ae^{ax},\tag{2}$$

equally the function

$$e\_1(\mathbf{x}) := \sum\_{k=0}^{\infty} \frac{\mathbf{x}^k}{\left(k!\right)^2} = C\_0(-\mathbf{x}),\tag{3}$$

where $C\_0(x)$ is the Tricomi function of order zero, is an eigenfunction of the Laguerre derivative $D\_L$, since:

$$D\_L \ e\_1(ax) = a e\_1(ax) \,. \tag{4}$$

The proof easily follows, by noting that:

$$D\_L \ e\_1(a\mathbf{x}) = \left(D + \mathbf{x}D^2\right) \sum\_{k=0}^{\infty} a^k \frac{\mathbf{x}^k}{(k!)^2} =$$

$$= \sum\_{k=1}^{\infty} \left(k + k(k-1)\right) a^k \frac{\mathbf{x}^{k-1}}{(k!)^2} = \sum\_{k=1}^{\infty} k^2 a^k \frac{\mathbf{x}^{k-1}}{(k!)^2} = \tag{5}$$

$$= a \sum\_{k=0}^{\infty} a^k \frac{\mathbf{x}^k}{(k!)^2} = a e\_1(a\mathbf{x}).$$

For this reason, *the function e*1(*x*) *is called the Laguerre-type exponential* (of order 1).
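The eigenfunction property (4) can also be checked numerically on the truncated series, since $D\_L = D + xD^2$ sends $x^k$ to $k^2 x^{k-1}$. The following minimal Python sketch (the value $a = 0.7$ and the truncation order are arbitrary choices, not taken from the paper) applies $D\_L$ coefficient-wise:

```python
from math import factorial

def e1_coeffs(a, K=30):
    """Coefficients of the truncated series e_1(a x) = sum_k a^k x^k / (k!)^2."""
    return [a**k / factorial(k)**2 for k in range(K)]

def laguerre_derivative(c):
    """Apply D_L = D + x D^2 to a coefficient list: x^k -> k^2 x^(k-1)."""
    return [(k + 1)**2 * c[k + 1] for k in range(len(c) - 1)]

a = 0.7  # arbitrary eigenvalue
lhs = laguerre_derivative(e1_coeffs(a))
rhs = [a * ck for ck in e1_coeffs(a)]
max_err = max(abs(u - v) for u, v in zip(lhs, rhs))
print(max_err)  # essentially zero: D_L e_1(ax) = a e_1(ax) term by term
```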

In preceding articles, the role of the Laguerre derivative, in connection with the *monomiality principle*—an important technique introduced by G. Dattoli [20]—and its application to the multidimensional Hermite (Hermite-Kampé de Fériet or Gould-Hopper polynomials, see [21–23]) or Laguerre polynomials [14,24], has been shown.

The above technique can be iterated, producing Laguerre classes of exponential-type functions, of higher order, called *L-exponentials*, and the relevant *L-circular*, *L-hyperbolic*, *L-Gaussian functions* (see [4]).

Similar generalized hypergeometric functions, called trigonometric/Bessel-type, exponential/confluent-type and Gauss/Beta-distribution, can be found in the book by Kiryakova [19] and also in [25]. Before going on, we notice that the Laguerre derivative satisfies [26]:

$$(D\mathbf{x}D)^{\mathfrak{n}} = D^{\mathfrak{n}}\mathbf{x}^{\mathfrak{n}}D^{\mathfrak{n}},\tag{6}$$

an equation which can be easily proven by induction.
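Equation (6) can also be verified numerically by representing polynomials as coefficient lists and composing the operators $D$ and multiplication by $x$; the sample polynomial below is an arbitrary choice:

```python
def D(c):
    """Differentiate a polynomial given as a coefficient list [c0, c1, ...]."""
    return [k * c[k] for k in range(1, len(c))]

def X(c):
    """Multiply a polynomial by x (shift coefficients up one degree)."""
    return [0.0] + c

p = [3.0, 2.0, -1.0, 5.0, 1.0]  # arbitrary sample polynomial

n = 3
lhs = p
for _ in range(n):          # (DxD)^n
    lhs = D(X(D(lhs)))

rhs = p
for _ in range(n):          # apply D^n first (rightmost factor) ...
    rhs = D(rhs)
for _ in range(n):          # ... then multiply by x^n ...
    rhs = X(rhs)
for _ in range(n):          # ... then apply D^n again
    rhs = D(rhs)

print(lhs == rhs)  # True: (DxD)^3 = D^3 x^3 D^3 on this polynomial
```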

#### *2.1. L-Exponentials of Higher Order*

We consider the operator:

$$D\_{2L} := D \mathbf{x} D \mathbf{x} D = D \left( \mathbf{x} D + \mathbf{x}^2 D^2 \right) = D + 3 \mathbf{x} D^2 + \mathbf{x}^2 D^3,\tag{7}$$

and the function:

$$e\_2(\mathbf{x}) := \sum\_{k=0}^{\infty} \frac{\mathbf{x}^k}{\left(k!\right)^3}. \tag{8}$$

The following theorem holds:

**Theorem 1.** *The function e*2(*ax*) *is an eigenfunction of the operator D*2*L, i.e.,*

$$D\_{2L} \ e\_2(ax) = a e\_2(ax) \tag{9}$$

The proof (see [4]) depends on the identity $k + 3k(k-1) + k(k-1)(k-2) = k^3$, so that the coefficients of the combination in Equation (7) can be recognized as the *Stirling numbers of the second kind* $S(3,1)$, $S(3,2)$, $S(3,3)$ (see [27] and [28] (p. 835, for an extended table)).
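Both this identity and the general form behind Theorem 2 below (namely $\sum\_{j} S(n+1,j)\, k(k-1)\cdots(k-j+1) = k^{n+1}$) are easy to check by machine; the following sketch, not part of the original paper, computes the Stirling numbers by their standard recurrence:

```python
def stirling2(n, k):
    """Stirling numbers of the second kind, S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(k, j):
    """Falling factorial k(k-1)...(k-j+1), the action of x^j D^j on x^k."""
    out = 1
    for i in range(j):
        out *= k - i
    return out

# Identity used in the proof of Theorem 1: k + 3k(k-1) + k(k-1)(k-2) = k^3
ok3 = all(k + 3*k*(k-1) + k*(k-1)*(k-2) == k**3 for k in range(50))

# General form: sum_j S(n+1,j) * falling(k,j) = k^(n+1)
ok = all(sum(stirling2(n + 1, j) * falling(k, j) for j in range(1, n + 2)) == k**(n + 1)
         for n in range(5) for k in range(1, 20))
print(ok3, ok)  # True True
```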

In general, we can state the following theorem:

**Theorem 2.** *The function*

$$e\_n(\mathbf{x}) := \sum\_{k=0}^{\infty} \frac{\mathbf{x}^k}{(k!)^{n+1}}\tag{10}$$

*is an eigenfunction of the operator*

$$\begin{split} D\_{nL} &:= D\mathbf{x} \cdots D\mathbf{x} D\mathbf{x} D = D\left(S(n,1)\,\mathbf{x} D + S(n,2)\,\mathbf{x}^2 D^2 + \cdots + S(n,n)\,\mathbf{x}^n D^n\right) = \\ &= S(n+1,1)D + S(n+1,2)\mathbf{x} D^2 + \cdots + S(n+1,n+1)\mathbf{x}^n D^{n+1} \end{split} \tag{11}$$

*i.e., for every constant a it results:*

$$D\_{nL} \ e\_n(a\mathbf{x}) = a e\_n(a\mathbf{x}) \,. \tag{12}$$

**Remark 1.** *The above results show that, for every positive integer n, we can define a Laguerre-exponential function, satisfying an eigenfunction property, which is an analog of the elementary property* (2) *of the exponential. The function en*(*x*) *reduces to the exponential function when n* = 0*, so that we put by definition:*

$$e\_0(x) := e^x, \qquad D\_{0L} := D.$$

*Obviously,* $D\_{1L} := D\_L$.

Examples of the *L*-exponential functions are given in Figure 1.

**Figure 1.** $e\_1(x)$ (green) and $e\_2(x)$ (red).

#### *2.2. L-Circular and L-Hyperbolic Functions*

Starting from the equation

$$e\_1(i\mathbf{x}) = \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h}}{((2h)!)^2} + i \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h+1}}{((2h+1)!)^2}\,,\tag{13}$$

we can define the 1*L*-*circular functions* as follows

$$\cos\_1(\mathbf{x}) := \Re \left( e\_1(i\mathbf{x}) \right) = \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h}}{((2h)!)^{2}} \tag{14}$$

$$\sin\_1(\mathbf{x}) := \Im \left( e\_1(i\mathbf{x}) \right) = \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h+1}}{((2h+1)!)^{2}} \tag{15}$$

so that we find the Euler-type formulas

$$\cos\_1(\mathbf{x}) = \frac{e\_1(i\mathbf{x}) + e\_1(-i\mathbf{x})}{2}, \qquad \sin\_1(\mathbf{x}) = \frac{e\_1(i\mathbf{x}) - e\_1(-i\mathbf{x})}{2i}. \tag{16}$$
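Formulas (14)–(16) can be checked numerically, since the series for $e\_1$ converges for complex arguments as well; in the sketch below the sample point $x = 1.3$ and the truncation order are arbitrary choices:

```python
from math import factorial

def e1(z, K=40):
    """Truncated series of the L-exponential e_1(z); valid for complex z too."""
    return sum(z**k / factorial(k)**2 for k in range(K))

def cos1(x, K=40):
    return sum((-1)**h * x**(2*h) / factorial(2*h)**2 for h in range(K))

def sin1(x, K=40):
    return sum((-1)**h * x**(2*h + 1) / factorial(2*h + 1)**2 for h in range(K))

x = 1.3  # arbitrary sample point
cos_euler = (e1(1j*x) + e1(-1j*x)) / 2      # Euler-type formula for cos_1
sin_euler = (e1(1j*x) - e1(-1j*x)) / (2j)   # Euler-type formula for sin_1
print(abs(cos_euler - cos1(x)), abs(sin_euler - sin1(x)))  # both ~ 0
```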

Recalling Equation (6), we find the result:

**Theorem 3.** *The* 1*L-circular functions* (14) *and* (15) *are solutions of the differential equation*

$$D\_L^2 v + v = \left(D^2 \mathbf{x}^2 D^2\right) v + v = 0. \tag{17}$$

The above results hold even for the generalized case. Write the *nL*-exponential in the form:

$$e\_n(i\mathbf{x}) = \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h}}{((2h)!)^{n+1}} + i \sum\_{h=0}^{\infty} (-1)^h \frac{\mathbf{x}^{2h+1}}{((2h+1)!)^{n+1}}.\tag{18}$$

Then we can define the *nL*-*circular functions* by putting

#### **Definition 1.**

$$\cos\_n(x) := \Re\left(e\_n(ix)\right) = \sum\_{h=0}^{\infty} (-1)^h \frac{x^{2h}}{((2h)!)^{n+1}}\,,\tag{19}$$

$$\sin\_n(x) := \Im \left( e\_n(ix) \right) = \sum\_{h=0}^{\infty} (-1)^h \frac{x^{2h+1}}{((2h+1)!)^{n+1}}.\tag{20}$$

and we find again the Euler-type formulas:

$$\cos\_n(\mathbf{x}) = \frac{e\_n(i\mathbf{x}) + e\_n(-i\mathbf{x})}{2}, \qquad \sin\_n(\mathbf{x}) = \frac{e\_n(i\mathbf{x}) - e\_n(-i\mathbf{x})}{2i}.\tag{21}$$

Theorem 3 becomes, in general:

**Theorem 4.** *The nL-circular functions* (19) *and* (20) *are solutions of the differential equation*

$$D\_{nL}^2 v + v = 0.$$

*and satisfy the conditions:*

$$
\cos\_n(0) = 1, \qquad \sin\_n(0) = 0.
$$

Furthermore, we find:

**Theorem 5.** *The nL-circular functions satisfy*

$$D\_{nL}\cos\_n(\mathbf{x}) = -\sin\_n(\mathbf{x}), \qquad D\_{nL}\sin\_n(\mathbf{x}) = \cos\_n(\mathbf{x})\,. \tag{22}$$

Examples of the *L*-circular functions are given in Figures 2 and 3.

**Figure 2.** $\cos\_1(x)$ (green) and $\sin\_1(x)$ (red).

**Figure 3.** $\cos\_2(x)$ (green) and $\sin\_2(x)$ (red).

In a similar way we can define the *nL*-hyperbolic functions, putting

$$\begin{aligned} \cosh\_n(\mathbf{x}) &:= \sum\_{h=0}^{\infty} \frac{\mathbf{x}^{2h}}{((2h)!)^{n+1}}, \\\\ \sinh\_n(\mathbf{x}) &:= \sum\_{h=0}^{\infty} \frac{\mathbf{x}^{2h+1}}{((2h+1)!)^{n+1}}. \end{aligned}$$

and the analogues of the formulas for the circular functions are easily derived (see [4]).

All the eigenfunctions $e\_1(x), e\_2(x), \ldots, e\_n(x)$ can be expressed as generalized hypergeometric functions ${}\_pF\_q$ [29], namely $e\_1(x) = {}\_0F\_1(;1;x)$, $e\_2(x) = {}\_0F\_2(;1,1;x)$, ..., $e\_n(x) = {}\_0F\_n(;1,\ldots,1;x)$. In practice, starting from the Bessel-type function $e\_1(x)$, all these eigenfunctions are special cases of the hyper-Bessel functions of Delerue [18], which are shown to be eigenfunctions of the Dimovski operators mentioned above.

Naturally, the $\cos\_n$, $\sin\_n$ functions in Equations (19) and (20), and their hyperbolic variants, are special cases of the *trigonometric-type generalized hypergeometric functions* considered in the Kiryakova book [19].

#### **3. The Isomorphism** T*<sup>x</sup>* **and Its Iterations**

It was previously noted (see e.g., [14]) that, in the space $\mathcal{A}\_x$ of analytic functions, it is possible to define an isomorphism $\mathcal{T}\_x$ that preserves the differentiation properties, by means of the correspondence:

$$D \to D\_L, \qquad \mathbf{x} \to D\_{\mathbf{x}}^{-1}, \tag{23}$$

where

$$D\_{\mathbf{x}}^{-1}f(\mathbf{x}) = \int\_{0}^{\mathbf{x}} f(\boldsymbol{\xi}) \, d\boldsymbol{\xi}, \qquad D\_{\mathbf{x}}^{-n}f(\mathbf{x}) = \frac{1}{(n-1)!} \int\_{0}^{\mathbf{x}} (\mathbf{x} - \boldsymbol{\xi})^{n-1} f(\boldsymbol{\xi}) \, d\boldsymbol{\xi}, \tag{24}$$

so that

$$\mathcal{T}\_{\mathbf{x}}(\mathbf{x}^{\mathbf{n}}) = D\_{\mathbf{x}}^{-n}(1) = \frac{1}{(n-1)!} \int\_{0}^{\mathbf{x}} (\mathbf{x} - \boldsymbol{\xi})^{n-1} \, d\boldsymbol{\xi} = \frac{\mathbf{x}^{n}}{n!}. \tag{25}$$
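The action (25) of $\mathcal{T}\_x$ on a power series is simply the division of the $n$th coefficient by $n!$; a minimal sketch (truncation order arbitrary, not from the paper) confirms that $\mathcal{T}\_x(e^x) = e\_1(x)$ and that iterating once more gives $e\_2(x)$:

```python
from math import factorial

def T(coeffs):
    """The isomorphism T_x on power-series coefficients: a_n -> a_n/n! (x^n -> x^n/n!)."""
    return [a / factorial(n) for n, a in enumerate(coeffs)]

K = 30
exp_coeffs = [1 / factorial(k) for k in range(K)]   # e^x, truncated

e1_coeffs = T(exp_coeffs)   # T_x(e^x) = e_1(x): coefficients 1/(k!)^2
e2_coeffs = T(e1_coeffs)    # T_x^2(e^x) = e_2(x): coefficients 1/(k!)^3

x = 2.0  # arbitrary sample point
e1_series = sum(x**k / factorial(k)**2 for k in range(K))
e1_via_T = sum(a * x**n for n, a in enumerate(e1_coeffs))
print(e1_via_T, e1_series)  # identical values
```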

It is worth noting that this kind of isomorphism is widely used in operational calculus and differential equations also under the name of *Transmutation or Similarity operator*, since it transforms one operator into another, and eigenfunctions into each other.

In fact, in such an isomorphism we have the correspondences:

• The exponential function is transformed into the function *e*1(*x*), since

$$\mathcal{T}\_{x}(e^{x}) = \sum\_{k=0}^{\infty} \frac{\mathcal{T}\_{x}(x^{k})}{k!} = \sum\_{k=0}^{\infty} \frac{x^{k}}{(k!)^{2}} = e\_{1}(x).$$

• The Hermite polynomial $H\_n^{(1)}(x, y) := (x - y)^n$ becomes the Laguerre polynomial

$$\mathcal{L}\_n(x, y) := n! \sum\_{r=0}^n \frac{(-1)^r y^{n-r} x^r}{(n-r)! (r!)^2}$$

and by using the *monomiality principle* we can prove that all the relations valid in the polynomial space still hold after the substitutions stated in Equation (23).

Furthermore, an iterative application of Equation (23) gives in sequence the functions $e\_1(x), e\_2(x), e\_3(x), \ldots$.

We have, for example:

$$\mathcal{T}\_{x}^{2}(e^{x}) = \sum\_{k=0}^{\infty} \frac{\mathcal{T}\_{x}(x^{k})}{(k!)^{2}} = \sum\_{k=0}^{\infty} \frac{x^{k}}{(k!)^{3}} = e\_{2}(x)\,,$$

and so on.

We already noticed that the isomorphism connected with the Laguerre derivative can be iterated as many times as we wish.

Correspondingly, the derivative operator is transformed into

$$D\_L = D\mathbf{x}D, \qquad D\_{2L} = D\_L D\_{\mathbf{x}}^{-1} D\_L = D\mathbf{x}D\mathbf{x}D, \qquad D\_{3L} = D\_L D\_{\mathbf{x}}^{-1} D\_L D\_{\mathbf{x}}^{-1} D\_L = D\mathbf{x}D\mathbf{x}D\mathbf{x}D, \ldots, \tag{26}$$

and so on.

We can conclude that the *L*-exponentials (and the relevant *L*-circular and *L*-hyperbolic functions) are determined by an iterative application of the considered differential isomorphism.

#### **4. Examples of Laguerre-Type Problems**

#### *4.1. L-Diffusion Equations*

**Theorem 6.** *For any fixed integer n, consider the problem* (see [4] (Theorem 5.1)):

$$\begin{cases} \begin{array}{ll} D\_{nL} \ S(\mathbf{x}, t) = \frac{\partial}{\partial t} \ S(\mathbf{x}, t), & \text{in the half plane } t > 0, \\ S(0, t) = s(t), \end{array} \end{cases} \tag{27}$$

*with analytic boundary condition s*(*t*)*.*

*The operational solution of problem* (27) *is given by:*

$$S(\mathbf{x}, t) = e\_n \left( \mathbf{x} \frac{\partial}{\partial t} \right) s(t) = \sum\_{k=0}^{\infty} \frac{\mathbf{x}^k}{(k!)^{n+1}} \frac{d^k}{dt^k} s(t) \tag{28}$$

Representing $s(t) = \sum\_{k=0}^{\infty} a\_k t^k$, from Equation (28) we find, in particular:

$$S(\mathbf{x},0) = \sum\_{k=0}^{\infty} a\_k \frac{\mathbf{x}^k}{(k!)^{\mathrm{n}}}.\tag{29}$$

Please note that the operational solution becomes an effective solution whenever the series in Equation (28) is convergent. The validity of this condition depends on the growth of the coefficients *ak* of the boundary data *s*(*t*), but it is usually satisfied in physical problems.
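As an illustration of this remark (not taken from the paper), consider the hypothetical boundary data $s(t) = e^{ct}$, for which $d^k s/dt^k = c^k s(t)$ and the series (28) converges for every $x$; a finite-difference check then confirms that the operational solution satisfies $D\_L S = \partial S/\partial t$ in the case $n = 1$:

```python
from math import factorial, exp

c, n, K = 0.5, 1, 30   # hypothetical boundary data s(t) = e^{c t}; order n = 1

def S(x, t):
    """Operational solution (28): sum_k x^k/(k!)^{n+1} * d^k s/dt^k, with s(t) = e^{ct}."""
    return sum(x**k / factorial(k)**(n + 1) * c**k * exp(c * t) for k in range(K))

# Check D_L S = dS/dt at a sample point via central finite differences
x0, t0, h = 0.8, 1.0, 1e-4
Sx  = (S(x0 + h, t0) - S(x0 - h, t0)) / (2 * h)
Sxx = (S(x0 + h, t0) - 2 * S(x0, t0) + S(x0 - h, t0)) / h**2
St  = (S(x0, t0 + h) - S(x0, t0 - h)) / (2 * h)
residual = (Sx + x0 * Sxx) - St   # D_L S - dS/dt, should vanish
print(residual)
```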

More general problems are shown in [4,10], where evolution problems related to an operator of the type

$$\left(D^{p\_1} \mathbf{x}^{q\_1} D^{p\_2} \mathbf{x}^{q\_2} \cdots D^{p\_r} \mathbf{x}^{q\_r} D^{p\_{r+1}}\right),\tag{30}$$

where $p\_1, p\_2, \ldots, p\_{r+1}$; $q\_1, q\_2, \ldots, q\_r$ are fixed integers, have been considered.

An operational solution of the problem

$$D^{p\_1} \mathbf{x}^{q\_1} D^{p\_2} \mathbf{x}^{q\_2} \cdots D^{p\_r} \mathbf{x}^{q\_r} D^{p\_{r+1}} S(\mathbf{x}, t) = D\_t S(\mathbf{x}, t), \quad \text{in the half plane } t > 0,$$

with suitable initial conditions, has been determined in terms of the eigenfunctions of the same operator.

**Remark 2.** *Please note that the above operators generalize the subsequent Laguerre-type derivatives, since they are written as:*

$$D\_{nL}^{\,r} = \Big(\underbrace{D\mathbf{x}D\mathbf{x} \cdots D\mathbf{x}D}\_{(n+1)\ \text{factors } D}\Big)^{r} = D^{r}\mathbf{x}^{r}D^{r}\mathbf{x}^{r} \cdots D^{r}\mathbf{x}^{r}D^{r}\,,\tag{31}$$

*which is an equation extending* (6)*.*

The operator (30) closely recalls the general case of hyper-Bessel *B* operators, in [17], since integers *q*1, *q*2, ... *qr* could be replaced by arbitrary real numbers, as are parameters *α*0, *α*1, ... , *αn*, considered in [17,30]. The solutions of the general differential equation *By*(*x*) + *λy*(*x*) = *f*(*x*) are given by Kiryakova et al. in [31].

#### *4.2. L-Hyperbolic-Type Problems*

**Theorem 7.** *Let $\hat{\Omega}\_x$ be a 2nd-order differential operator with respect to the $x$ variable, $D\_{nL} := (D\_{nL})\_t$ the nL-derivative with respect to the $t$ variable, and denote by $\psi(t)$ and $\chi(t)$ two functions such that:*

$$\begin{aligned} D\_{nL}\,\psi(t) &= \chi(t), \qquad D\_{nL}\,\chi(t) = \psi(t) \\ \psi(0) &= 1, \qquad \chi(0) = 0 \end{aligned} \tag{32}$$

*then the abstract L-hyperbolic-type problem:*

$$\begin{cases} \, \, \hat{\Omega}\_x^2 \, S(\mathbf{x}, t) = D\_{nL}^2 \, S(\mathbf{x}, t), \quad \text{in the half plane } t > 0, \\\ S(\mathbf{x}, 0) = q(\mathbf{x}), \\\ \, \, D\_{nL} \, S(\mathbf{x}, t)|\_{t=0} = v(\mathbf{x}) \end{cases} \tag{33}$$

*with analytic initial conditions $q(x)$, $v(x)$, admits the operational solution* (see [4], Theorem 5.3):

$$S(\mathbf{x},t) = \psi\left(t\hat{\Omega}\_{\mathbf{x}}\right)q(\mathbf{x}) + \chi\left(t\hat{\Omega}\_{\mathbf{x}}\right)w(\mathbf{x}),\tag{34}$$

where $w(x) := \hat{\Omega}\_x^{-1} v(x)$.

Please note that conditions in (32) are satisfied, for any fixed integer *n*, assuming:

$$\psi(x) := \cosh\_{nL}(x), \qquad \chi(x) := \sinh\_{nL}(x).$$

*4.3. L-Elliptic-Type Problems*

**Theorem 8.** *Let $\hat{\Omega}\_x$ be a 2nd-order differential operator with respect to the $x$ variable, $D\_{nL} := (D\_{nL})\_y$ the nL-derivative with respect to the $y$ variable, and denote by $\varphi(y)$ and $\tau(y)$ two functions such that:*

$$\begin{array}{ll} D\_{nL} \; \varphi(y) = -\tau(y), & \quad D\_{nL} \; \tau(y) = \varphi(y) \\ \varphi(0) = 1, & \quad \tau(0) = 0 \end{array} \tag{35}$$

*then the abstract L-elliptic-type problem:*

$$\begin{cases} \hat{\Omega}\_{x}^{2}S(x,y) + D\_{nL}^{2}S(x,y) = 0, \quad \text{in the half plane } y > 0, \\ S(x,0) = q(x), \end{cases} \tag{36}$$

*with analytic boundary condition q*(*x*)*, admits the operational solution* (see [4], Theorem 5.4):

$$S(\mathbf{x}, y) = \varphi\left(y\hat{\Omega}\_{\mathbf{x}}\right)q(\mathbf{x}).\tag{37}$$

Please note that conditions in (35) are satisfied, for any fixed integer *n*, assuming:

$$\varphi(x) := \cos\_{nL}(x), \qquad \tau(x) := \sin\_{nL}(x).$$

Further examples of PDE problems involving the Laguerre derivatives can be found in [10,11].

#### **5. Laguerre-Type Special Functions**

#### *5.1. Laguerre-Type Bessel Functions*

The Laguerre-type Bessel functions of order 1 (shortly, *L-Bessel functions*), denoted by ${}\_L J\_n(x)$, are obtained by substituting the exponential with the $L$-exponential $e\_1(x)$ in the classical generating function, i.e., by putting

$$e\_1 \left[ \frac{x}{2} \left( t - \frac{1}{t} \right) \right] = \sum\_{n = -\infty}^{+\infty} {}\_L J\_n \left( x \right) t^n.$$

We can derive the explicit expression by applying the isomorphism $\mathcal{T}\_x$ to both sides of the explicit expression of the Bessel functions, so that we find:

$${}\_L J\_n(x) := \sum\_{h=0}^{\infty} \frac{(-1)^h\, x^{n+2h}}{2^{n+2h}\, h!\,(n+h)!\,(n+2h)!}\,.$$

We have proved the following results:

**Theorem 9.** *The L-Bessel functions <sup>L</sup> Jn*(*x*) *satisfy the recurrence relation* (see [8], Theorem 2.3):

$$\begin{cases} D\_x^{-1}\left[{}\_LJ\_{n-1}(x) + {}\_LJ\_{n+1}(x)\right] = 2n\, {}\_LJ\_n(x),\\ {}\_LJ\_{n-1}(x) - {}\_LJ\_{n+1}(x) = 2 D\_L\, {}\_LJ\_n(x). \end{cases}$$

**Theorem 10.** *The differential equation satisfied by the L-Bessel functions <sup>L</sup> Jn*(*x*) *is* (see [8], Theorem 2.5):

$$\left(D\_L^2 + D\_x D\_L - n^2 D\_x^2 + \hat{I}\right) {}\_L J\_n(\mathbf{x}) = 0\,,$$

*where* <sup>ˆ</sup>*<sup>I</sup> denotes the identity operator. This equation can be derived by applying the isomorphism* <sup>T</sup>*<sup>x</sup> to both sides of the differential equation of the ordinary first kind Bessel functions.*

#### *5.2. Laguerre-Type Hypergeometric Functions*

By using the isomorphism technique it is possible to define in general Laguerre-type special functions, and in particular, the 1st order Laguerre-type hypergeometric functions.

In fact, starting from the Gauss' hypergeometric equation:

$$x(1-x)y'' + [c - (a+b+1)x]y' - aby = 0$$

and applying the isomorphism T*x*, we find the equation

$$\mathbf{x}(1-\mathbf{x})D\_{L}^{2}y + [c - (a+b+1)\mathbf{x}]D\_{L}y - aby = \mathbf{0},\tag{38}$$

that is:

$$\mathbf{x}(1-\mathbf{x})\left(\mathbf{x}^2 y^{iv}+4\mathbf{x}y^{\prime\prime\prime}+2y^{\prime\prime}\right)+\left[c-(a+b+1)\mathbf{x}\right](y^{\prime}+\mathbf{x}y^{\prime\prime})-aby=0.\tag{39}$$

The solution of Equation (38), corresponding to the Gauss hypergeometric function $F(a, b, c; x)$, is given by

$$\,\_LF(a,b,c;x) = 1 + \sum\_{n=1}^{\infty} \frac{a^{(n)}b^{(n)}}{c^{(n)}} \, \frac{x^n}{(n!)^2} \,. \tag{40}$$

where the symbol $a^{(n)}$ denotes the rising factorial.
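Equation (40) can also be obtained by dividing the $n$th Taylor coefficient of the Gauss series by $n!$, i.e., by applying $\mathcal{T}\_x$; the following sketch (the parameter values are arbitrary sample choices) compares the two computations:

```python
from math import factorial

def rising(a, n):
    """Rising factorial a^(n) = a(a+1)...(a+n-1)."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

a, b, c, x, K = 0.5, 1.5, 2.0, 0.4, 40   # arbitrary sample values, |x| < 1

# Taylor coefficients of the Gauss function 2F1(a,b;c;x) ...
gauss = [rising(a, n) * rising(b, n) / (rising(c, n) * factorial(n)) for n in range(K)]
# ... divided by n! once more (the action of T_x) give the L-hypergeometric (40)
LF_via_T = sum(g / factorial(n) * x**n for n, g in enumerate(gauss))
LF_direct = sum(rising(a, n) * rising(b, n) / rising(c, n) * x**n / factorial(n)**2
                for n in range(K))
print(LF_via_T, LF_direct)  # identical values
```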

Of course the *r*th order Laguerre-type hypergeometric functions are obtained by applying to both sides of the hypergeometric equation the iterated isomorphism of order *r*, but the corresponding differential equation becomes more and more complicated as *r* increases.

The generalized hypergeometric functions have their 1st order Laguerre-type counterpart, which are given by:

$${}\_{1L}\,{}\_{p}F\_{q}(a\_{1},\ldots,a\_{p};b\_{1},\ldots,b\_{q};\mathbf{x}) = \sum\_{n=0}^{\infty} \frac{a\_{1}^{(n)} \cdots a\_{p}^{(n)}}{b\_{1}^{(n)} \cdots b\_{q}^{(n)}} \frac{\mathbf{x}^{n}}{(n!)^{2}}\,,\tag{41}$$

and those of higher order immediately follow.

Please note that the function in (41) can be viewed as a generalized hypergeometric function of the form ${}\_pF\_{q+1}$, by moving one of the $n!$ factors under the sum into the denominator and considering $\Gamma(n+1)/\Gamma(1) = 1^{(n)} = n! = b\_{q+1}^{(n)}$ as the $(q+1)$-th term.

#### *5.3. Laguerre-Type Bell Polynomials*

We first note that for the Laguerre derivative, the chain rule

$$\frac{d}{dt} = \frac{d}{dx}\frac{dx}{dt}$$

becomes:

$$\frac{d}{dt}\,t\,\frac{d}{dt} = \frac{d}{dx}\,\frac{d}{dt}\,t\,\frac{dx}{dt}\,, \qquad \text{that is,} \qquad (D\_L)\_t = \frac{d}{dx}\,(D\_L)\_t\, x\,, \tag{42}$$

and in general:

$$(D\_{nL})\_t = \frac{d}{dx} \left( D\_{nL} \right)\_t\, x\,.$$

The problem of constructing Bell polynomials can be extended in the natural way to the case of the Laguerre-type derivatives.

To this aim, we introduce the definition:

**Definition 2.** *The* n*th Laguerre-type Bell polynomial, denoted by ${}\_{rL}Y\_n(x; [f,g]\_n)$, represents the nth rL-type derivative of the composite function $f(g(t))$.*

In [12] we showed that ${}\_{rL}Y\_n$ can be expressed as a polynomial in the independent variable $x$, depending on $f\_1, g\_1; f\_2, g\_2; \ldots; f\_n, g\_n$, in terms of the classical Bell polynomials.

According to Equation (6), the Leibniz rule gives:

$$\begin{split} (D\mathbf{x}D)^{n} &= D^{n} \left( \mathbf{x}^{n} D^{n} \right) = \sum\_{k=0}^{n} \binom{n}{k} \left(D^{n-k} \mathbf{x}^{n}\right) D^{n+k} = \\ &= \sum\_{k=0}^{n} \left[ \binom{n}{k} \right]^{2} (n-k)!\, \mathbf{x}^{k} D^{n+k} = \sum\_{k=0}^{n} \frac{n!}{k!} \binom{n}{k} \mathbf{x}^{k} D^{n+k} . \end{split} \tag{43}$$

Therefore, the following representation formula for the Laguerre-type Bell polynomials, denoted by *LYn*, holds:

**Theorem 11.** *The LYn polynomials are expressed in terms of the ordinary Bell polynomials according to the equation* (see [12], Theorem 4.1):

$${}\_{L}Y\_{n}\left(x;[f,g]\_{n}\right) = \sum\_{k=0}^{n} \frac{n!}{k!} \binom{n}{k} \, x^{k} \, Y\_{n+k}\left([f,g]\_{n+k}\right) \,. \tag{44}$$

The above results can be easily generalized, since

$$\begin{split} (D\_{2L})^n &= (D \mathbf{x} D \mathbf{x} D)^n = D^n \left( \mathbf{x}^n D^n \mathbf{x}^n D^n \right) = \\ &= \sum\_{k\_1=0}^n \sum\_{k\_2=0}^n \frac{n!}{k\_1!} \frac{(n+k\_1)!}{(k\_1+k\_2)!} \binom{n}{k\_1} \binom{n}{k\_2} \mathbf{x}^{k\_1+k\_2} D^{n+k\_1+k\_2} \,. \end{split} \tag{45}$$

In [12], the general case of the Bell polynomials ${}\_{rL}Y\_n$ is also considered, but we do not report the corresponding equation here, since it is a little more complicated.
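The operator expansion (43), on which Theorem 11 rests, can be verified numerically by applying both sides to a sample polynomial (an arbitrary choice, not from the paper), represented as a zero-padded coefficient list:

```python
from math import comb, factorial

N = 12  # work with zero-padded coefficient lists of fixed length N

def D(c):
    """Differentiate, keeping the list length fixed."""
    return [k * c[k] for k in range(1, len(c))] + [0.0]

def X(c):
    """Multiply by x; the (zero) top coefficient is dropped to keep the length."""
    return ([0.0] + c)[:len(c)]

def lhs(c, n):
    """(DxD)^n applied to c."""
    for _ in range(n):
        c = D(X(D(c)))
    return c

def rhs(c, n):
    """sum_k (n!/k!) C(n,k) x^k D^{n+k} applied to c -- the right side of (43)."""
    total = [0.0] * len(c)
    for k in range(n + 1):
        term = c
        for _ in range(n + k):   # D^{n+k} acts first ...
            term = D(term)
        for _ in range(k):       # ... then multiplication by x^k
            term = X(term)
        w = factorial(n) // factorial(k) * comb(n, k)
        total = [t + w * v for t, v in zip(total, term)]
    return total

p = [1.0, -2.0, 0.0, 4.0, 3.0, -1.0] + [0.0] * 6  # arbitrary degree-5 polynomial
print(all(lhs(p, n) == rhs(p, n) for n in (1, 2, 3)))  # True
```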

#### **6. The Multivariate Case**

#### *6.1. Laguerre-Type Appell Polynomials*

In a preceding article [32] multivariate extensions of the Appell polynomials (including the Bernoulli and Euler cases) have been introduced, by means of the generating function [23]:

$$A(t)\exp(xt+yt^j) = \sum\_{n=0}^{\infty} R\_n^{(j)}(x,y)\frac{t^n}{n!},$$

where *j* is a fixed integer.

The application of the isomorphism $\mathcal{T}\_x$ and its iterations allows us to define new classes of multivariate special polynomials, the Laguerre-type Appell polynomials, and to derive their main properties (recurrence relations, shift operators, differential equations, etc.) in an easy and uniform way.

This has been achieved in [6] starting from generating functions of the type

$$A(t)e\_s(xt)e\_{\sigma}(yt^j) = \sum\_{n=0}^{\infty} R\_n^{(j)}(\mathcal{T}\_x^s(x), \mathcal{T}\_y^{\sigma}(y)) \frac{t^n}{n!}$$

where $e\_s(\cdot)$ and $e\_{\sigma}(\cdot)$ are Laguerre-type exponentials. Many properties of these functions have been derived, including recursions and differential equations.

The results obtained in this case are easily extended to the functions of *r* variables, since the technique works regardless of the number *r*.

#### *6.2. Laguerre-Type Appell Series*

We limit ourselves to the case of series in two variables, but the equations trivially extend to the general case. For |*x*| < 1, |*y*| < 1 the double series

$${}\_{L}F\_{1}(a,b\_{1},b\_{2};c;\mathbf{x},\mathbf{y}) = \sum\_{m,n=0}^{\infty} \frac{a^{(m+n)}b\_{1}^{(m)}b\_{2}^{(n)}}{c^{(m+n)}} \frac{\mathbf{x}^{m}}{(m!)^{2}} \frac{\mathbf{y}^{n}}{(n!)^{2}} \tag{46}$$

is the Laguerre-type Appell series, obtained from the classical one by acting on it with the two isomorphisms $\mathcal{T}\_x$ and $\mathcal{T}\_y$.

We avoid considering further extensions to the case of multivariate functions with several parameters, since they are obtained trivially.

#### **7. Applications to Population Dynamics**

#### *7.1. Exponential and L-Exponential Models*

In this section a possible application of the Laguerre derivative is recalled [9,13]. Since the *L*-exponentials are, for every $x \ge 0$, convex increasing functions whose graph lies below that of $\exp(x)$, it is possible to use these functions in the framework of population dynamics, as it seems that in some cases the growth of the exponential is too fast.

Consider the number $N(t)$ of individuals of a population at time $t$ and let $N(0) = N\_0$ be the initial number at time $t = 0$.

In the Malthus model, the variation is assumed to be proportional to *N*(*t*), i.e.,

$$\begin{cases} \frac{d}{dt}N(t) = rN(t), \\\\ N(0) = N\_0, \end{cases}$$

where the *growth rate r* is a suitable constant.

The solution is given by the exponential function

$$N(t) = N\_0 e^{rt}.$$

Using the Laguerre derivative, the Laguerre-type Malthus model reads:

$$\frac{d}{dt}t\frac{dN}{dt} = rN(t) \qquad \text{i.e.,} \qquad \frac{dN}{dt} + t\frac{d^2N}{dt^2} = rN(t)\,,$$

where *r* is a positive constant. Assuming the initial conditions

$$\begin{cases} N(0) = N\_0, \\ N'(0) = N\_1 = N\_0\, r, \end{cases}$$

we find the solution

$$N(t) \;=\; N\_0 e\_1(rt)\;=\; N\_0 \sum\_{k=0}^{+\infty} r^k \frac{t^k}{(k!)^2} \;.$$

In this case the population grows according to the Laguerre exponential function $e\_1(x)$, so that the growth is slower than in the classical Malthus model.

In [9] it has been shown, with tables of data taken from real population dynamics, that the Laguerre-type Malthus model produces data closer to real population growth.
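The comparison between the two models is immediate to reproduce; in the sketch below the values of $N\_0$ and $r$ are arbitrary choices, not the data used in [9]:

```python
from math import factorial, exp

def e1(x, K=60):
    """Truncated series of the Laguerre-type exponential e_1."""
    return sum(x**k / factorial(k)**2 for k in range(K))

N0, r = 100.0, 0.05  # arbitrary initial population and growth rate
for t in (0, 20, 40, 60):
    print(t, N0 * exp(r * t), N0 * e1(r * t))
# the L-Malthus values grow, but much more slowly than the exponential ones
```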

#### *7.2. Logistic vs. L-logistic Model*

Taking into account that the growth rate cannot be constant, since it depends on the environmental resources, Pierre Verhulst considered the so-called *logistic model*

$$\begin{cases} \frac{dN}{dt} = r \left[ 1 - \frac{1}{K} N(t) \right] N(t), \\\\ N(0) = N\_0, \end{cases}$$

where *r* is called the *intrinsic growth rate*, and *K* denotes the *environmental capacity*.

The exact solution of this problem is given by

$$N(t) = \frac{N\_0 K}{N\_0 + (K - N\_0)e^{-rt}}\,,$$

so that, if *N*<sup>0</sup> < *K* the solution is a function monotonically increasing to *K*, whereas, if *N*<sup>0</sup> > *K*, the solution is monotonically decreasing to *K*. In any case,

$$\lim\_{t \to \infty} N(t) = K,$$

and the value *N*(*t*) = *K* is a stable equilibrium point for the logistic equation.

The Laguerre-logistic (shortly *L*-logistic) model is expressed by

$$\begin{cases} N'(t) + tN''(t) = rN(t) \left( 1 - \frac{N(t)}{K} \right), \\\\ N(0) = N\_0, \\\\ N'(0) = N\_1. \end{cases} \tag{47}$$

Please note that if in the above equation *N* is small with respect to *K*, then *N*/*K* is close to 0 and consequently $D\_t t D\_t N \approx rN(t)$.

If *N* → *K*, then *N*/*K* → 1, and $D\_t t D\_t N \to 0$.

The *L*-logistic equation cannot be solved explicitly, but it can be integrated numerically, for example with a Runge–Kutta method. The behavior of the approximate solutions of the *L*-logistic model is shown in Figure 4. It is worth noting that the solution tends to the environmental capacity *K* with an oscillating behavior.

**Figure 4.** Solutions to the *L*-logistic model with *N*(0) < *K* (on the left) and *N*(0) > *K* (on the right); *K* = 64, *r* = 0.8, *T* = 100, Δ*t* = 0.1.

This is the main difference with respect to the ordinary logistic model, where the solution is monotonically increasing or decreasing to *K*.
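A minimal numerical sketch of such an integration (our own assumption about the scheme, not necessarily the one used for Figure 4): rewrite the equation as *N*″ = (*rN*(1 − *N*/*K*) − *N*′)/*t*, start at *t*<sub>0</sub> = Δ*t* to avoid the singularity at *t* = 0 with the consistent slope *N*<sub>1</sub> = *rN*<sub>0</sub>(1 − *N*<sub>0</sub>/*K*), and apply a classical fourth-order Runge–Kutta step with the parameters quoted in the caption of Figure 4.

```python
def rk4_step(f, t, y, dt):
    """One classical RK4 step for a 2-component first-order system."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = f(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = f(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def l_logistic(N0, K=64.0, r=0.8, T=100.0, dt=0.1):
    """Integrate N'(t) + t N''(t) = r N (1 - N/K) as a first-order system.

    The equation is singular at t = 0, so integration starts at t0 = dt
    with N(t0) ~ N0 and N'(t0) ~ N1 = r N0 (1 - N0/K).
    """
    N1 = r * N0 * (1.0 - N0 / K)
    f = lambda t, y: [y[1], (r * y[0] * (1.0 - y[0] / K) - y[1]) / t]
    t, y, traj = dt, [N0, N1], [N0]
    while t < T:
        y = rk4_step(f, t, y, dt)
        t += dt
        traj.append(y[0])
    return traj

# Starting below K, the trajectory overshoots K and oscillates back:
# the approach to the equilibrium is not monotone.
traj = l_logistic(10.0)
```

Near *N* = *K* the linearized equation *tn*″ + *n*′ + *rn* = 0 has Bessel-type oscillating solutions, which explains the decaying oscillations seen in the computed trajectory.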

Similar results could be obtained by using the *nL*-derivatives, introducing suitable initial conditions which can be easily derived from the initial observations data.

Please note that, as the order *n* increases, for *x* > 0 the Laguerre exponential attenuates its growth, and for *n* → ∞ it tends to the linear function 1 + *x*; it can therefore be used to model a growth as slow as needed.
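To make the last remark concrete, here is a small sketch (our own illustration) of the higher-order Laguerre exponentials $e\_n(x) = \sum\_k x^k/(k!)^{n+1}$, showing the attenuation toward 1 + *x* as *n* grows.

```python
import math

def laguerre_exp_n(n, x, terms=40):
    """n-th Laguerre-type exponential e_n(x) = sum_k x^k / (k!)^(n+1);
    e_0 is the ordinary exponential."""
    return sum(x**k / math.factorial(k)**(n + 1) for k in range(terms))

# For fixed x > 0 the values decrease with n and approach 1 + x,
# since every term with k >= 2 is damped by (k!)^(n+1).
x = 3.0
vals = [laguerre_exp_n(n, x) for n in range(6)]
```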

**Remark 3.** *We recall that the oscillating asymptotic trend of solutions occurs in reality. For example, the classical experiment of G. F. Gause on the protozoan Paramecium shows such a typical behavior, represented in Figure 5. In this figure the true values, represented by a dotted line, are compared with the exponential trend of Malthus and with the logistic curve.*

**Figure 5.** The behavior of growth in the Gause experiment.

#### *7.3. Modified L-Logistic Models*

Many different models modifying the basic logistic model appeared in the literature: the Bernoulli, the modified logistic, the Gompertz, the Allee, and the Beverton–Holt models.

In [13] we considered the Laguerre-type version of all of them, showing that in all cases the oscillating asymptotic behavior of solutions takes the place of the monotonic one.

Instead, it was found that the Lotka–Volterra model is invariant under the action of the isomorphism T*x*, since the Laguerre derivative again satisfies the chain rule, according to Equation (42).

#### **8. Laguerre-Type Linear Dynamical Systems**

Let A be an *r* × *r* matrix and denote by *u*<sub>*k*</sub> (*k* = 1, 2, . . . , *r*) the invariants of A, i.e., the sums of the principal minors of order *k* (the elementary symmetric functions of the eigenvalues). The invariants of the matrix *t*A are given by *u*<sub>*k*</sub>(*t*) = *t*<sup>*k*</sup>*u*<sub>*k*</sub> (*k* = 1, 2, . . . , *r*).

Consider the vectors

$$\begin{cases} Z(t) = (Z\_1(t), \dots, Z\_r(t))^T \\ Z\_0 = (Z\_1(0), \dots, Z\_r(0))^T \end{cases}$$

Then the solution of the linear dynamical system

$$\begin{cases} Z'(t) = \mathcal{A}Z(t) \\ Z(0) = Z\_0 \end{cases}$$

reads [33]:

$$Z(t) = e^{t\mathcal{A}} Z\_0 = \sum\_{h=0}^{r-1} \left[ \frac{1}{2\pi i} \sum\_{j=0}^{r-h-1} (-1)^j u\_j(t) \oint\_{\gamma} \frac{e^{\lambda} \lambda^{r-h-j-1}}{P(\lambda, t)} \, d\lambda \right] \cdot t^h Z\_0^h \,.$$

where *P*(*λ*, *t*) is the characteristic polynomial of the matrix *t*A and *γ* denotes a simple Jordan curve encircling all the eigenvalues of A. The choice of *γ*, without computing the eigenvalues, can be done by using the Gershgorin theorem.
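The ingredients of this representation are easy to check numerically. The sketch below (our own check, not from [33], using NumPy's characteristic-polynomial routine and SciPy's `expm` on an arbitrary 2 × 2 matrix) verifies the scaling $u\_k(t) = t^k u\_k$ and that $Z(t) = e^{t\mathcal{A}} Z\_0$ solves the system.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # an arbitrary 2x2 example
Z0 = np.array([1.0, 0.0])

# Invariants u_k of A (elementary symmetric functions of the eigenvalues),
# read off det(lam*I - A) = lam^2 - u1*lam + u2.
c = np.poly(A)                             # coefficients [1, -u1, u2]
u1, u2 = -c[1], c[2]

t = 0.7
ct = np.poly(t * A)                        # characteristic poly of tA
# Check the scaling u_k(t) = t^k u_k.
assert np.allclose([-ct[1], ct[2]], [t * u1, t**2 * u2])

# Z(t) = e^{tA} Z0 solves Z' = A Z: check with a centered finite difference.
h = 1e-5
dZ = (expm((t + h) * A) @ Z0 - expm((t - h) * A) @ Z0) / (2 * h)
assert np.allclose(dZ, A @ (expm(t * A) @ Z0), atol=1e-6)
```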

In [15], a Laguerre-type version of the above classic result has been shown. Consider the above *r* × *r* matrix, and the vectors

$$\begin{cases} Z(t) = (Z\_1(t), \dots, Z\_r(t))^T \\ Z\_0 = (Z\_1(0), \dots, Z\_r(0))^T \\ Z\_0' = (Z\_1'(0), \dots, Z\_r'(0))^T = \mathcal{A} \cdot Z\_0 \\ \vdots \\ Z\_0^{r-1} = (Z\_1^{r-1}(0), \dots, Z\_r^{r-1}(0))^T = \mathcal{A} \cdot Z\_0^{r-2}. \end{cases}$$

The following result holds (see [15] (Theorem 10)):

**Theorem 12.** *The Laguerre-type Cauchy problem for a homogeneous linear differential system*

$$\begin{cases} D\_L Z(t) = Z'(t) + tZ''(t) = \mathcal{A} \cdot Z(t), \\ Z(0) = Z\_0 \\ Z\_0' = \mathcal{A} \cdot Z\_0 \end{cases} $$

*has the solution:*

$$Z(t) = \varepsilon\_1(t\mathcal{A})\ Z\_0 = \sum\_{h=0}^{r-1} \left[ \frac{1}{2\pi i} \sum\_{j=0}^{r-h-1} (-1)^j u\_j(t) \oint\_{\gamma} \frac{\varepsilon\_1(\lambda)\,\lambda^{r-h-j-1}}{P(\lambda, t)} \,d\lambda \right] \cdot t^h Z\_0^h \,,$$

where *P*(*λ*, *t*) and *γ* have been defined above.

The proof of this result is a straightforward application of the isomorphism T*t*. In [15] worked examples are reported.

#### **9. Conclusions**

The Laguerre derivative and the relevant Laguerre-type exponentials make it possible to associate, with any given integer *n*, a new class of special functions. This is obtained by exploiting the properties of an isomorphism of the space of analytic functions which preserves the differentiation properties. The successive iterations of this isomorphism produce a cyclic construction within the space that repeats the same structure at a higher order of derivation.

Infinitely many special functions can be defined in this way. A few of them have been presented explicitly, and the general technique for producing the others has been indicated.

This survey has also shown possible applications of the Laguerre derivative in the context of population dynamics and in the solution of Cauchy problems for particular linear dynamical systems.

**Funding:** This research received no external funding.

**Acknowledgments:** The author is grateful to the referees, especially to the one who pointed out the connections between the topics covered in the article and the hyper-Bessel operators, helping to expand the bibliography.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Some Relationships for the Generalized Integral Transform on Function Space**

#### **Hyun Soo Chung**

Department of Mathematics, Dankook University, Cheonan 3116, Korea; hschung@dankook.ac.kr Received: 25 November 2020; Accepted: 17 December 2020; Published: 19 December 2020

**Abstract:** In this paper, we recall a more generalized integral transform, a generalized convolution product and a generalized first variation on function space. The Gaussian process and the bounded linear operators on function space are used to define them. We then establish the existence and various relationships between the generalized integral transform and the generalized convolution product. Furthermore, we obtain some relationships between the generalized integral transform and the generalized first variation with the generalized Cameron–Storvick theorem. Finally, some applications are demonstrated as examples.

**Keywords:** generalized integral transform; generalized convolution product; bounded linear operator; Gaussian process; Cameron–Storvick theorem; translation theorem

**MSC:** 47A60; 60J65; 28C20

#### **1. Introduction**

For *T* > 0, let *C*0[0, *T*] be the one-parameter Wiener space and let M denote the class of all Wiener measurable subsets of *C*0[0, *T*]. Let *m* denote Wiener measure. Then, the space (*C*0[0, *T*],M, *m*) is complete, and we denote the Wiener integral of a Wiener integrable functional *F* by

$$\int\_{C\_0[0,T]} F(x)\,dm(x)\,.$$

Let *K* ≡ *K*0[0, *T*] be the space of all complex-valued continuous functions defined on [0, *T*] which vanish at *t* = 0 and whose real and imaginary parts are elements of *C*0[0, *T*].

In [1], Lee studied an integral transform of analytic functionals on abstract Wiener spaces

$$\mathcal{F}\_{\gamma,\beta}(F)(y) = \int\_{C\_0[0,T]} F(\gamma x + \beta y) dm(x), \qquad y \in \mathcal{K}. \tag{1}$$

For suitable parameters *γ* and *β* and for certain classes of functionals, the Fourier–Wiener transform, the modified Fourier–Wiener transform, the analytic Fourier–Feynman transform and the Gauss transform are popular examples of the integral transform defined by (1) above (see [1–12]). Researchers have studied various theories of the integral transform for functionals on function space, and recently the integral transform has been generalized in several ways. One of them uses the concept of a Gaussian process instead of the ordinary process. For a function *h* on [0, *T*], the Gaussian process is defined by the formula

$$Z\_h(x, t) = \int\_0^t h(s)\, \tilde{d}x(s),$$

where $\int\_0^t h(s)\,\tilde{d}x(s)$ denotes the Paley–Wiener–Zygmund (PWZ) stochastic integral. Many mathematicians use this process to generalize the integral transform. As representative examples, the generalized integral transforms

$$\mathcal{F}^{h}\_{\gamma,\beta}(F)(y) = \int\_{C\_0[0,T]} F(\gamma Z\_{h}(x, \cdot) + \beta y) dm(x) \tag{2}$$

and

$$\mathcal{F}^{h\_1, h\_2}\_{\gamma, \beta}(F)(y) = \int\_{C\_0[0, T]} F(\gamma Z\_{h\_1}(x, \cdot) + \beta Z\_{h\_2}(y, \cdot)) dm(x) \tag{3}$$

are studied in [13–15]. In fact, if *h*, *h*<sup>1</sup> and *h*<sup>2</sup> are identically 1 on [0, *T*], then Equations (2) and (3) reduce to Equation (1).

Another method uses operators on *K*. Let *S* and *R* be bounded linear operators on *K*. In [6,16], the authors used these operators to generalize the integral transforms. A more generalized form is given by

$$\mathcal{G}\_{S,R}(F)(y) = \int\_{C\_0[0,T]} F(Sx + Ry) dm(x).\tag{4}$$

If *R* is a constant operator and *Sx* = *Z*<sub>*h*</sub>(*x*, ·) for some function *h*, then Equation (4) reduces to Equation (2), and hence to Equation (1) again. In previous studies, many relationships among the integral transform, the convolution and the first variation have been obtained; however, most of the results involve fixed parameters.

In this paper, we use both concepts, the Gaussian process and the operator, to define a more generalized integral transform, a generalized convolution product and a generalized first variation of functionals on function space. We then give necessary and sufficient conditions for some relationships to hold between the generalized integral transforms and the generalized convolution products, and between the generalized integral transforms and the generalized first variations. In addition, some examples are given to illustrate the usefulness of our formulas and results. By choosing the kernel functions and operators appropriately, all results and formulas in previous papers become corollaries of the results and formulas in this paper.

#### **2. Definitions and Preliminaries**

We first list some definitions and properties needed to understand this paper.

A subset *B* of *C*0[0, *T*] is called scale-invariant measurable if *ρB* is M-measurable for all *ρ* > 0, and a scale-invariant measurable set *N* is called a scale-invariant null set provided *m*(*ρN*) = 0 for all *ρ* > 0. A property that holds except on a scale-invariant null set is said to hold scale-invariant almost everywhere (s-a.e.) [17]. For *v* ∈ *L*2[0, *T*] and *x* ∈ *C*0[0, *T*], let ⟨*v*, *x*⟩ denote the Paley–Wiener–Zygmund (PWZ) stochastic integral. Then, we have the following assertions.


For a more detailed study of the PWZ stochastic integral, see [4,5,7–9,11–15,18]. Let

$$\mathcal{C}\_0' \equiv \mathcal{C}\_0'[0, T] = \left\{ \boldsymbol{\upsilon} \in \mathcal{C}\_0[0, T] : \boldsymbol{\upsilon}(t) = \int\_0^t z\_{\boldsymbol{\upsilon}}(s) ds, \, z\_{\boldsymbol{\upsilon}} \in L\_2[0, T] \right\}.$$

Then, $C\_0'$ is a Hilbert space with the inner product

$$(v\_1, v\_2)\_{C\_0'} = \int\_0^T z\_{v\_1}(t) z\_{v\_2}(t) dt,$$

where $v\_j(t) = \int\_0^t z\_{v\_j}(s)\,ds$ for $j = 1, 2$. Furthermore, we note that $C\_0'[0,T] \subset C\_0[0,T]$ and $(C\_0'[0,T], C\_0[0,T], m)$ is one example of an abstract Wiener space [1,16,19,20]. For $x \in C\_0[0,T]$ and $v \in C\_0'[0,T]$ with $v(t) = \int\_0^t z\_v(s)\,ds$, $z\_v \in L\_2[0,T]$, $(v,x)^\sim \equiv \langle z\_v, x\rangle$ is a well-defined Gaussian random variable with mean 0 and variance $\|v\|\_{C\_0'}^2 = \|z\_v\|\_2^2$, where $(\cdot,\cdot)^\sim$ is the complex bilinear form on $K^* \times K$.

The following well-known integration formula is used several times in this paper. For each $v \in C\_0'$ with $v(t) = \int\_0^t z\_v(s)\,ds$,

$$\int\_{C\_0[0,T]} \exp\{ (v, x)^{\sim} \} dm(x) = \exp\left\{ \frac{1}{2} \|v\|\_{\mathcal{C}\_0'}^2 \right\} = \exp\left\{ \frac{1}{2} \|z\_v\|\_2^2 \right\}.\tag{5}$$
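Formula (5) lends itself to a Monte Carlo check on a discretized Wiener path; the sketch below (an illustration of ours, with an arbitrary constant $z\_v \equiv 0.5$ on $[0,1]$) approximates the PWZ integral $\langle z\_v, x\rangle$ by a sum against independent Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 50, 50_000
dt = T / n

z_v = 0.5 * np.ones(n)                    # z_v(s) = 0.5, so v(t) = 0.5 t
# Brownian increments: i.i.d. N(0, dt) samples, one row per sample path.
dx = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
pwz = dx @ z_v                            # discretized <z_v, x> per path

lhs = np.exp(pwz).mean()                  # Wiener integral of exp{(v, x)~}
rhs = np.exp(0.5 * np.sum(z_v**2) * dt)   # exp{ ||z_v||_2^2 / 2 }
assert abs(lhs - rhs) < 0.02
```

Since the discretized $\langle z\_v, x\rangle$ is exactly Gaussian with mean 0 and variance $\sum z\_v^2\,\Delta t$, the only error here is the Monte Carlo one.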

For each $v \in C\_0'[0, T]$, let

$$\Phi\_{\mathbf{v}}(\mathbf{x}) = \exp\{ (\mathbf{v}, \mathbf{x})^{\sim} \}. \tag{6}$$

These functionals are called the exponential functionals on *C*0[0, *T*]. It is a well-known fact that the class

$$\mathcal{A} \equiv \{ \Phi\_{v} : v \in C\_0'[0, T] \}\tag{7}$$

is a fundamental set in $L\_2(C\_0[0,T])$. Thus, there is a countable subset $\mathcal{S}(C\_0[0,T]) = \{\Phi\_{v\_n}\}\_{n=1}^{\infty} \equiv \{\Phi\_n\}\_{n=1}^{\infty}$ of $\mathcal{A}$ which is dense in $L\_2(C\_0[0,T])$. Thus, we have that, for each $F \in L\_2(C\_0[0,T])$,

$$F(\mathbf{x}) = \lim\_{n \to \infty} \sum\_{j=1}^{n} a\_j \Phi\_{v\_j}(\mathbf{x})$$

in the $L\_2$-sense, where $\{a\_j\}\_{j=1}^{\infty}$ is a sequence of constants.

Let $\mathcal{L} \equiv \mathcal{L}(K)$ be the class of all bounded linear operators on $K$. Then, for each $v \in C\_0'[0,T]$ and $S \in \mathcal{L}$,

$$(v, \mathcal{S}x)^{\sim} = (\mathcal{S}^\*v, x)^{\sim}$$

where $S^\*$ is the adjoint operator of $S$; see [16,19,21]. We state the conditions on the function $h$ needed to obtain mathematical consistency as follows:

(i) For each *h* ∈ *L*∞[0, *T*] ⊂ *L*2[0, *T*],

$$
\langle z\_{\upsilon}, Z\_h(\mathbf{x}, \cdot) \rangle = \langle z\_{\upsilon} h, \mathbf{x} \rangle.
$$

where $v(t) = \int\_0^t z\_v(s)\,ds$ for some $z\_v \in L\_2[0,T]$. The boundedness of $h$ is needed because, although $z\_v \in L\_2[0,T]$, the product $z\_v h$ may not be an element of $L\_2[0,T]$ when $h$ is merely in $L\_2[0,T]$.

(ii) Let

$$h(t) = \begin{cases} 0, & 0 \le t < T/2 \\ t + 2, & T/2 \le t \le T \end{cases}.$$

Then, $h$ is in $L\_\infty[0,T]$ (and hence $h \in L\_2[0,T]$). However, $Z\_h(x,t)$ may not be a Gaussian process, so a condition on $h$ is needed. Let $h$ be an element of $L\_\infty[0,T]$ such that $m\_L(\operatorname{supp}(h)) = m\_L(\{t \in [0,T] : h(t) \neq 0\}) = T$, where $m\_L$ is the Lebesgue measure. Then, we have $\|h\|\_2 > 0$ and $Z\_h(x,t)$ is a Gaussian process.

(iii) For each $h \in L\_\infty[0,T]$ and $x \in C\_0[0,T]$, $Z\_h(x,t)$ is stochastically continuous but need not be continuous, i.e., $Z\_h(x,\cdot)$ may not be an element of $C\_0[0,T]$. However, if $h$ is a function of bounded variation on $[0,T]$, the Gaussian process $Z\_h(x,t)$ is continuous and hence $SZ\_h(x,\cdot)$ is well defined for all $S \in \mathcal{L}$. Since $(v,x)^\sim = \langle z\_v, x\rangle$ for $v \in C\_0'$ with $v(t) = \int\_0^t z\_v(s)\,ds$, we have that

$$(v, SZ\_{h}(x, \cdot))^\sim = (S^\* v, Z\_{h}(x, \cdot))^\sim = \langle z\_{S^\*v}, Z\_{h}(x, \cdot) \rangle = \langle h z\_{S^\*v}, x \rangle. \tag{8}$$

(iv) Let $\mathcal{H} = \{h : [0,T] \to \mathbb{R} : h \in BV[0,T],\ m\_L(\operatorname{supp}(h)) = T\}$.
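Condition (ii) can be visualized numerically: since $Z\_h(x,t)$ has variance $\int\_0^t h^2(s)\,ds$, an $h$ vanishing on $[0, T/2)$ makes the process degenerate on that interval, which is exactly what the full-support condition rules out. A small sketch (our own, with an arbitrary admissible comparison function):

```python
import numpy as np

T, n = 1.0, 1000
dt = T / n
s = np.linspace(dt, T, n)

# h from example (ii): vanishes on [0, T/2), so Z_h(x, t) is degenerate there.
h_bad = np.where(s < T / 2, 0.0, s + 2.0)
# An h in the admissible class H: bounded variation, full support.
h_good = 1.0 + 0.5 * s

def var_Zh(h, t):
    """Var Z_h(x, t) = int_0^t h(s)^2 ds (right-endpoint discretization)."""
    mask = s <= t
    return float(np.sum(h[mask]**2) * dt)

assert var_Zh(h_bad, 0.25 * T) == 0.0     # no randomness before T/2
assert var_Zh(h_good, 0.25 * T) > 0.0
```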

#### **3. Generalization of the Integral Transform with Related Topics**

We start this section by giving definition of generalized integral transform, generalized convolution product and the generalized first variation of functionals on *K*.

**Definition 1.** *Let $h, h\_1, h\_2$ be elements of $\mathcal{H}$, let $F$ and $G$ be functionals on $K$, and let $S, R, A, B, C, D, S\_1, S\_2 \in \mathcal{L}$. Then, the generalized integral transform $\mathcal{T}^h\_{S,R}(F)$ of $F$, a generalized convolution product $(F \ast G)^{h\_1,h\_2}\_{A,B,C,D}$ of $F$ and $G$, and a generalized first variation $\delta^{h\_1,h\_2}\_{S\_1,S\_2}F$ of $F$ with respect to $h\_1, h\_2, S\_1$ and $S\_2$ are defined by the formulas*

$$\mathcal{T}\_{\mathbb{S},\mathbb{R}}^h(F)(y) = \int\_{\mathbb{C}\_0[0,T]} F(\mathcal{S}Z\_h(\mathbf{x}, \cdot) + \mathcal{R}y) dm(\mathbf{x}),\tag{9}$$

$$(F \ast G)\_{A,B,C,D}^{h\_1,h\_2}(y) = \int\_{C\_0[0,T]} F(AZ\_{h\_1}(x,\cdot) + By)G(CZ\_{h\_2}(x,\cdot) + Dy)dm(x)\tag{10}$$

*and*

$$\begin{split} \delta\_{S\_1, S\_2}^{h\_1, h\_2} F(x|u) & \equiv \delta F(S\_1 Z\_{h\_1}(x, \cdot) | S\_2 Z\_{h\_2}(u, \cdot)) \\ & = \left. \frac{\partial}{\partial a} F(S\_1 Z\_{h\_1}(x, \cdot) + a S\_2 Z\_{h\_2}(u, \cdot)) \right|\_{a=0} \end{split} \tag{11}$$

*for x*, *u*, *y* ∈ *K if they exist.*

#### **Remark 1.**


We next state some notations used in this paper. For *v* ∈ *L*2[0, *T*], *h*1, *h*2, ··· , *hn* ∈ H and *R*1, ··· , *Rn* ∈ L, let

$$M(R\_1, \cdots, R\_n : h\_1, \cdots, h\_n : v) \equiv \exp\left\{\frac{1}{2} \sum\_{j=1}^{n} \|h\_j z\_{R\_j^\* v}\|\_2^2\right\},\tag{12}$$

where $R\_j^\* v(t) = \int\_0^t z\_{R\_j^\* v}(s)\,ds$ for each $j = 1, 2, \cdots, n$. Furthermore, we have the symmetry property for $M(\cdot : \cdot : v)$.

In Theorem 1, we obtain the existence of generalized integral transform, generalized convolution product and generalized first variation of functionals in S(*C*0[0, *T*]). In addition, we show that they are elements of S(*C*0[0, *T*]).

**Theorem 1.** *Let $h, h\_1, h\_2$ be elements of $\mathcal{H}$ and let $S, R, A, B, C, D, S\_1, S\_2 \in \mathcal{L}$. Let $\Phi\_v$ and $\Phi\_w$ be elements of $\mathcal{S}(C\_0[0,T])$ and let $u(t) = \int\_0^t z\_u(s)\,ds \in C\_0'$. In addition, let $k\_{h\_j}(t) = \int\_0^t h\_j(s)\,ds$ for $j = 1, 2$. Then, the generalized integral transform $\mathcal{T}^h\_{S,R}(\Phi\_v)$ of $\Phi\_v$, the generalized convolution product $(\Phi\_v \ast \Phi\_w)^{h\_1,h\_2}\_{A,B,C,D}$ of $\Phi\_v$ and $\Phi\_w$ and the generalized first variation $\delta^{h\_1,h\_2}\_{S\_1,S\_2}\Phi\_v(x|u)$ with respect to $h\_1, h\_2, S\_1$ and $S\_2$ exist, belong to $\mathcal{S}(C\_0[0,T])$ and are given by the formulas*

$$\mathcal{T}\_{S,R}^h(\Phi\_v)(y) = M(S:h:v)\Phi\_{R^\*v}(y),\tag{13}$$

$$\begin{aligned} \left(\Phi\_{v} \ast \Phi\_{w}\right)\_{A,B,C,D}^{h\_{1},h\_{2}}(y) \\ = M(A:h\_{1}:v)M(C:h\_{2}:w)\exp\{(h\_{1}z\_{A^{\ast}v},h\_{2}z\_{C^{\ast}w})\_{2}\}\Phi\_{B^{\ast}v+D^{\ast}w}(y) \end{aligned} \tag{14}$$

$$\delta\_{S\_1, S\_2}^{h\_1, h\_2} \Phi\_v(x|u) = (h\_2 z\_{S\_2^\* v}, z\_u)\_2\, \Phi\_{h\_1 z\_{S\_1^\* v}}(x)\tag{15}$$

*for x*, *y*, *u* ∈ *K.*

**Proof.** First, using Equations (5), (8) and (9), it follows that, for all *y* ∈ *K*, we have

$$\begin{split} \mathcal{T}\_{S,R}^{h}(\Phi\_{v})(y) &= \int\_{C\_{0}[0,T]} \exp\left\{ (v, SZ\_{h}(x, \cdot))^{\sim} + (v, Ry)^{\sim} \right\} dm(x) \\ &= \int\_{C\_{0}[0,T]} \exp\left\{ \langle h z\_{S^{\*}v}, x \rangle + (R^{\*}v, y)^{\sim} \right\} dm(x) \\ &= \exp\left\{ \frac{1}{2} \|h z\_{S^{\*}v}\|\_{2}^{2} \right\} \Phi\_{R^{\*}v}(y) . \end{split}$$

Finally, by using Equation (12), Equation (13) is obtained. We next use Equations (5), (8) and (10) to obtain the following calculation:

$$\begin{split} (\Phi\_{v} \ast \Phi\_{w})^{h\_{1},h\_{2}}\_{A,B,C,D}(y) &= \int\_{C\_{0}[0,T]} \Phi\_{v}(AZ\_{h\_{1}}(x,\cdot) + By) \Phi\_{w}(CZ\_{h\_{2}}(x,\cdot) + Dy) dm(x) \\ &= \int\_{C\_{0}[0,T]} \exp\left\{ \langle h\_{1}z\_{A^{\*}v} + h\_{2}z\_{C^{\*}w}, x \rangle + (B^{\*}v + D^{\*}w, y)^{\sim} \right\} dm(x) \\ &= \exp\left\{ \frac{1}{2} \|h\_{1}z\_{A^{\*}v} + h\_{2}z\_{C^{\*}w}\|\_{2}^{2} \right\} \Phi\_{B^{\*}v}(y) \Phi\_{D^{\*}w}(y) . \end{split}$$

Since $\|h\|\_2^2 = (h, h)\_2$ for all $h \in L\_2[0,T]$, we now note that

$$\frac{1}{2}||h\_1z\_{A^\*v} + h\_2z\_{C^\*w}||\_2^2 = \frac{1}{2} \left( h\_1z\_{A^\*v} + h\_2z\_{C^\*w}, h\_1z\_{A^\*v} + h\_2z\_{C^\*w} \right)\_2$$

$$= \frac{1}{2} \left[ (h\_1z\_{A^\*v}, h\_1z\_{A^\*v})\_2 + (h\_2z\_{C^\*w}, h\_2z\_{C^\*w})\_2 + 2(h\_1z\_{A^\*v}, h\_2z\_{C^\*w})\_2 \right]$$

and $\Phi\_v(y)\Phi\_w(y) = \Phi\_{v+w}(y)$ for all $v, w \in C\_0'[0,T]$. Hence, we can obtain Equation (14) as desired. Finally, we use Equations (8) and (11) to establish Equation (15) as follows:

$$\begin{split} \delta^{h\_{1},h\_{2}}\_{S\_{1},S\_{2}}\Phi\_{v}(x|u) &= \frac{\partial}{\partial a} \Big[\Phi\_{v}(S\_{1}Z\_{h\_{1}}(x,\cdot) + aS\_{2}Z\_{h\_{2}}(u,\cdot))\Big]\Big|\_{a=0} \\ &= \frac{\partial}{\partial a} \Big[\exp\{\langle h\_{1}z\_{S\_{1}^{\*}v}, x\rangle + a\langle h\_{2}z\_{S\_{2}^{\*}v}, u\rangle\}\Big]\Big|\_{a=0} \\ &= \langle h\_{2}z\_{S\_{2}^{\*}v}, u\rangle\exp\{\langle h\_{1}z\_{S\_{1}^{\*}v}, x\rangle\}. \end{split}$$

We now note that

$$\langle h\_2 z\_{S\_2^\* v}, u \rangle = \int\_0^T h\_2(t) z\_{S\_2^\* v}(t) z\_u(t) dt = (h\_2 z\_{S\_2^\* v}, z\_u)\_{2},$$

which establishes Equation (15) as desired.

#### **4. Some Relationships with the Generalized Convolution Products**

In this section, we obtain some relationships between the generalized integral transform and the generalized convolution product of functionals in S(*C*0[0, *T*]). In the first theorem of this section, we give a formula for the generalized integral transforms of functionals in S(*C*0[0, *T*]). To establish these relationships, the following lemma is needed.

**Lemma 1.** *Let $h\_1, h\_2 \in \mathcal{H}$ and let $S\_1, S\_2, R \in \mathcal{L}$. Then, for each $v \in C\_0'$,*

$$M(\mathcal{S}\_1:h\_1:\mathbb{R}^\*v)M(\mathcal{S}\_2:h\_2:v) = M(\mathcal{RS}\_1,\mathcal{S}\_2:h\_1,h\_2:v). \tag{16}$$

**Proof.** Using the fact that $S\_1^\* R^\* = (RS\_1)^\*$ and Equation (12) repeatedly, we have

$$\begin{split} M(S\_1:h\_1:R^\*v)M(S\_2:h\_2:v) &= \exp\left\{\frac{1}{2}||h\_1z\_{S\_1^\*R^\*v}||\_2^2\right\} \exp\left\{\frac{1}{2}||h\_2z\_{S\_2^\*v}||\_2^2\right\} \\ &= \exp\left\{\frac{1}{2}||h\_1z\_{S\_1^\*R^\*v}||\_2^2 + \frac{1}{2}||h\_2z\_{S\_2^\*v}||\_2^2\right\} \\ &= \exp\left\{\frac{1}{2}||h\_1z\_{(RS\_1)^\*v}||\_2^2 + \frac{1}{2}||h\_2z\_{S\_2^\*v}||\_2^2\right\} \\ &= M(RS\_1, S\_2:h\_1, h\_2:v), \end{split}$$

which completes the proof of Lemma 1.

**Theorem 2.** *Let S*1, *S*2, *R*<sup>1</sup> *and R*<sup>2</sup> *be elements of* L *and let h*<sup>1</sup> *and h*<sup>2</sup> *be elements of* H*. In addition, let* Φ*<sup>v</sup> be an element of* S(*C*0[0, *T*])*. Then,*

$$\mathcal{T}\_{S\_1, R\_1}^{h\_1}(\mathcal{T}\_{S\_2, R\_2}^{h\_2}(\Phi\_v))(y) = M(R\_2 S\_1, S\_2 : h\_1, h\_2 : v) \Phi\_{(R\_1 R\_2)^\* v}(y) \tag{17}$$

*for y* ∈ *K.*

**Proof.** From Theorem 1, we have

$$\mathcal{T}\_{S\_2, R\_2}^{h\_2}(\Phi\_v)(y) = M(S\_2 : h\_2 : v) \Phi\_{R\_2^\* v}(y) .$$

Applying Theorem 1 once more,

$$\mathcal{T}\_{S\_1,R\_1}^{h\_1}(\mathcal{T}\_{S\_2,R\_2}^{h\_2}(\Phi\_{v}))(y) = M(S\_2 : h\_2 : v)M(S\_1 : h\_1 : R\_2^\* v)\Phi\_{(R\_1 R\_2)^\* v}(y).$$

Finally, using Equation (16) in Lemma 1, we complete the proof of Theorem 2 as desired.

Equation (18) in Theorem 3 expresses the commutativity of the generalized integral transforms, while Equation (19) is a Fubini-type theorem for the generalized integral transform.

**Theorem 3.** *Let S*1, *S*2, *R*<sup>1</sup> *and R*<sup>2</sup> *be elements of* L *and let h*<sup>1</sup> *and h*<sup>2</sup> *be elements of* H*. In addition, let* Φ*<sup>v</sup> be an element of* S(*C*0[0, *T*])*. Then,*

$$\mathcal{T}\_{S\_1, R\_1}^{h\_1}(\mathcal{T}\_{S\_2, R\_2}^{h\_2}(\Phi\_v))(y) = \mathcal{T}\_{S\_2, R\_2}^{h\_2}(\mathcal{T}\_{S\_1, R\_1}^{h\_1}(\Phi\_v))(y) \tag{18}$$

*if and only if*

$$R\_1 R\_2 = R\_2 R\_1, \quad \text{and} \quad M(R\_2 S\_1, S\_2: h\_1, h\_2: v) = M(S\_1, R\_1 S\_2: h\_1, h\_2: v).$$

*Furthermore,*

$$\mathcal{T}\_{S\_1, R\_1}^{h\_1}(\mathcal{T}\_{S\_2, R\_2}^{h\_2}(\Phi\_{v}))(y) = \mathcal{T}\_{S\_3, R\_3}^{h\_3}(\Phi\_{v})(y) \tag{19}$$

*if and only if*

$$R\_1 R\_2 = R\_3 \quad \text{and} \quad M(R\_2 S\_1, S\_2 : h\_1, h\_2 : \upsilon) = M(S\_3 : h\_3 : \upsilon).$$

**Proof.** Using Equation (17) twice, we have

$$\mathcal{T}\_{S\_1,R\_1}^{h\_1}(\mathcal{T}\_{S\_2,R\_2}^{h\_2}(\Phi\_v))(y) = M(R\_2 S\_1, S\_2 : h\_1, h\_2 : v)\Phi\_{(R\_1 R\_2)^\* v}(y)$$

$$\mathcal{T}\_{S\_2,R\_2}^{h\_2}(\mathcal{T}\_{S\_1,R\_1}^{h\_1}(\Phi\_{v}))(y) = M(S\_1, R\_1 S\_2 : h\_1, h\_2 : v)\Phi\_{(R\_2 R\_1)^\* v}(y).$$

Using these facts and Equation (13), we can establish Equations (18) and (19).

From Theorems 2 and 3, we can establish the *n*-dimensional version for the generalized integral transform.

**Corollary 1.** *Let $S\_1, \cdots, S\_n$ and $R\_1, \cdots, R\_n$ be elements of $\mathcal{L}$ and let $h\_j$ be an element of $\mathcal{H}$, $j = 1, 2, \cdots, n$. In addition, let $\Phi\_v$ be an element of $\mathcal{S}(C\_0[0,T])$. Then,*

$$\begin{aligned} &\mathcal{T}\_{S\_{n},R\_{n}}^{h\_{n}}(\cdots(\mathcal{T}\_{S\_{1},R\_{1}}^{h\_{1}}(\Phi\_{v}))\cdots)(y) \\ &= M(S\_{1}, R\_{1}S\_{2}, R\_{1}R\_{2}S\_{3}, \cdots, R\_{1}R\_{2}\cdots R\_{n-1}S\_{n} : h\_{1}, \cdots, h\_{n} : v)\,\Phi\_{(R\_{1}\cdots R\_{n})^{\*}v}(y). \end{aligned}$$

In our next theorem, we show that our generalized convolution product is commutative.

**Theorem 4.** *Let A*, *B*, *C and D be elements of* L *and let h*1, *h*<sup>2</sup> ∈ H*. Let* Φ*<sup>v</sup> and* Φ*<sup>w</sup> be elements of* S(*C*0[0, *T*])*. Then,*

$$(\Phi\_{v} \ast \Phi\_{w})^{h\_1, h\_2}\_{A, B, C, D}(y) = (\Phi\_{w} \ast \Phi\_{v})^{h\_1, h\_2}\_{A, B, C, D}(y) \tag{20}$$

*if and only if*

$$M(A:h\_1:v) = M(\mathbb{C}:h\_2:v) \text{ and } M(A:h\_1:w) = M(\mathbb{C}:h\_2:w).$$

**Proof.** The proof of Theorem 4 is a straightforward application of Theorem 1.

In Theorem 5, we give a necessary and sufficient condition for holding a relationship between the generalized integral transform and the generalized convolution product.

**Theorem 5.** *For $j = 1, 2, 3$, let $S\_j, R\_j \in \mathcal{L}$, and, for $i = 1, 2$, let $A\_i, B\_i, C\_i, D\_i \in \mathcal{L}$. In addition, for $k = 1, 2, \cdots, 7$, let $h\_k \in \mathcal{H}$. Then,*

$$\mathcal{T}\_{\mathbb{S}\_1,\mathbb{R}\_1}^{h\_1}(\Phi\_{\boldsymbol{\upsilon}}\*\Phi\_{\boldsymbol{w}})\_{A\_1,B\_1,\mathbb{C}\_1,D\_1}^{h\_2,h\_3}(\boldsymbol{y}) = \left(\mathcal{T}\_{\mathbb{S}\_2,\mathbb{R}\_2}^{h\_4}\Phi\_{\boldsymbol{\upsilon}}\*\mathcal{T}\_{\mathbb{S}\_3,\mathbb{R}\_3}^{h\_5}\Phi\_{\boldsymbol{w}}\right)\_{A\_2,B\_2,\mathbb{C}\_2,D\_2}^{h\_6,h\_7}(\boldsymbol{y})\tag{21}$$


*if and only if the following equations hold*

$$\begin{cases} B\_1 R\_1 = R\_2 B\_2 \text{ and } D\_1 R\_1 = R\_3 D\_2 \\ M(B\_1 S\_1, A\_1 : h\_1, h\_2 : v) = M(S\_2, A\_2 : h\_4, h\_6 : v) \\ M(D\_1 S\_1, C\_1 : h\_1, h\_3 : w) = M(S\_3, C\_2 : h\_5, h\_7 : w) \\ (h\_2 z\_{A\_1^\* v}, h\_3 z\_{C\_1^\* w})\_2 = (h\_6 z\_{A\_2^\* v}, h\_7 z\_{C\_2^\* w})\_2 \end{cases}$$

**Proof.** To complete the proof of Theorem 5, we first calculate the left hand side of Equation (21). From Equation (14) in Theorem 1, we have

$$\begin{split} \left(\Phi\_{v} \ast \Phi\_{w}\right)^{h\_2, h\_3}\_{A\_1, B\_1, C\_1, D\_1}(y) \\ = M(A\_1 : h\_2 : v) M(C\_1 : h\_3 : w) \exp\{ (h\_2 z\_{A\_1^\* v}, h\_3 z\_{C\_1^\* w})\_2 \} \Phi\_{B\_1^\* v + D\_1^\* w}(y). \end{split} \tag{22}$$

Using Equations (13), (12), (16) and (22), we have

$$\begin{split} \mathcal{T}\_{S\_1,R\_1}^{h\_1}(\Phi\_{v} \ast \Phi\_{w})^{h\_2,h\_3}\_{A\_1,B\_1,C\_1,D\_1}(y) &= M(B\_1 S\_1, A\_1 : h\_1, h\_2 : v) M(D\_1 S\_1, C\_1 : h\_1, h\_3 : w) \\ &\quad \cdot \exp\left\{(h\_2 z\_{A\_1^\* v}, h\_3 z\_{C\_1^\* w})\_2\right\}\Phi\_{R\_1^\* B\_1^\* v + R\_1^\* D\_1^\* w}(y). \end{split}$$

We next calculate the right hand side of Equation (21). Applying Equations (12) and (13) twice, we have

$$\mathcal{T}^{h\_4}\_{\mathbb{S}\_2, \mathbb{R}\_2}(\Phi\_v)(y) = M(\mathbb{S}\_2 : h\_4 : v)\Phi\_{R\_2^\*v}(y) \tag{23}$$

and

$$\mathcal{T}\_{S\_3, R\_3}^{h\_5}(\Phi\_w)(y) = M(S\_3 : h\_5 : w) \Phi\_{R\_3^\* w}(y). \tag{24}$$


We now use Equations (14), (16), (23) and (24) repeatedly to obtain the following calculation

$$\begin{split} \left(\mathcal{T}\_{S\_2,R\_2}^{h\_4} \Phi\_{v} \ast \mathcal{T}\_{S\_3,R\_3}^{h\_5} \Phi\_{w} \right)\_{A\_2,B\_2,C\_2,D\_2}^{h\_6,h\_7}(y) &= M(S\_2, A\_2 : h\_4, h\_6 : v) M(S\_3, C\_2 : h\_5, h\_7 : w) \\ &\quad \cdot \exp\{ (h\_6 z\_{A\_2^\* v}, h\_7 z\_{C\_2^\* w})\_2 \} \Phi\_{B\_2^\* R\_2^\* v + D\_2^\* R\_3^\* w}(y). \end{split}$$

Hence, we complete the proof of Theorem 5 as desired.

**Corollary 2.** *The following formulas follow easily from Theorem 5.*

*(1) Let $S$ and $R$ be elements of $L$, and, for $i = 1, 2$, let $A_i$, $B_i$, $C_i$, $D_i \in L$. In addition, for $k = 1, 2, \cdots, 5$, let $h_k \in H$. Then,*

$$\mathcal{T}^{h\_1}\_{S,R}(\Phi\_v \* \Phi\_w)^{h\_2,h\_3}\_{A\_1,B\_1,C\_1,D\_1}(y) = (\mathcal{T}^{h\_1}\_{S,R}\Phi\_v \* \mathcal{T}^{h\_1}\_{S,R}\Phi\_w)^{h\_4,h\_5}\_{A\_2,B\_2,C\_2,D\_2}(y)$$

*if and only if the following equations hold*

$$\begin{cases} B\_1 R = R B\_2 \text{ and } D\_1 R = R D\_2 \\ M(B\_1 S, A\_1 : h\_1, h\_2 : v) = M(S, A\_2 : h\_1, h\_4 : v) \\ M(D\_1 S, C\_1 : h\_1, h\_3 : w) = M(S, C\_2 : h\_1, h\_5 : w) \\ (h\_2 A\_1^\* v, h\_3 C\_1^\* w)\_2 = (h\_4 A\_2^\* v, h\_5 C\_2^\* w)\_2 \end{cases}$$

*(2) For j* = 1, 2, 3*, let Sj*, *Rj* ∈ L *and A*, *B*, *C*, *D* ∈ L*. In addition, for k* = 1, 2, ··· , 7*, let hk* ∈ H*. Then,*

$$\mathcal{T}\_{S\_1,R\_1}^{h\_1}(\Phi\_{\mathcal{V}}\*\Phi\_w)\_{A,B,\mathcal{C},D}^{h\_2,h\_3}(y) = (\mathcal{T}\_{S\_2,R\_2}^{h\_4}\Phi\_{\mathcal{V}}\*\mathcal{T}\_{S\_3,R\_3}^{h\_5}\Phi\_w)\_{A,B,\mathcal{C},D}^{h\_2,h\_3}(y)$$

*if and only if the following equations hold*

$$\begin{cases} BR\_1 = R\_2 B \text{ and } DR\_1 = R\_3 D, \\ M(BS\_1, A : h\_1, h\_2 : v) = M(S\_2, A : h\_4, h\_2 : v), \\ M(DS\_1, C : h\_1, h\_3 : w) = M(S\_3, C : h\_5, h\_3 : w), \\ (h\_2 A^\* v, h\_3 C^\* w)\_2 = (h\_6 A^\* v, h\_3 C^\* w)\_2. \end{cases}$$

#### **5. Some Relationships with the Generalized First Variations**

In this section, we establish some formulas involving the generalized first variation. We then obtain a generalized Cameron–Storvick theorem for the generalized first variation and apply it to the generalized integral transform.

**Theorem 6.** *Let $h_1, h_2, h_3 \in H$ and $S_1, S_2, S_3, R \in L$. Let $u \in C_0'$ with $u(t) = \int_0^t z_u(s)ds$. Then,*

$$T\_{S\_1,R}^{h\_1}(\delta\_{S\_2,S\_3}^{h\_2,h\_3}\Phi\_\mathcal{v}(\cdot|u))(y) = \delta\_{S\_2,S\_3}^{h\_2,h\_3}T\_{S\_1,R}^{h\_1}(\Phi\_\mathcal{v})(y|u) \tag{25}$$

*if and only if $R = I$ and $M(S_1 : h_1 : v_{S_2,h_2}) = M(S_1 : h_1 : v)$, where $v_{S_2,h_2} = h_2 z_{S_2^* v}$.*

**Proof.** First, using Equations (5), (12), (13) and (29), we have

$$\begin{split} \mathcal{T}^{h\_1}\_{S\_1,R}(\delta^{h\_2,h\_3}\_{S\_2,S\_3}\Phi\_v(\cdot|u))(y) &= (h\_3 z\_{S\_3^\* v}, z\_u)\_2 \int\_{C\_0[0,T]} \Phi\_{h\_2 z\_{S\_2^\* v}}(S\_1 Z\_{h\_1}(x,\cdot) + Ry)\, dm(x) \\ &= (h\_3 z\_{S\_3^\* v}, z\_u)\_2 \int\_{C\_0[0,T]} \exp\{(v\_{S\_2,h\_2}, S\_1 Z\_{h\_1}(x,\cdot))^\sim + (v\_{S\_2,h\_2}, Ry)^\sim\}\, dm(x) \\ &= (h\_3 z\_{S\_3^\* v}, z\_u)\_2 \int\_{C\_0[0,T]} \exp\{\langle h\_1 z\_{S\_1^\* v\_{S\_2,h\_2}}, x\rangle + (R^\* v\_{S\_2,h\_2}, y)^\sim\}\, dm(x) \\ &= (h\_3 z\_{S\_3^\* v}, z\_u)\_2\, M(S\_1 : h\_1 : v\_{S\_2,h\_2}) \exp\{(R^\* v\_{S\_2,h\_2}, y)^\sim\} \\ &= (h\_3 z\_{S\_3^\* v}, z\_u)\_2\, M(S\_1 : h\_1 : v\_{S\_2,h\_2})\, \Phi\_{R^\* v\_{S\_2,h\_2}}(y). \end{split}$$

On the other hand, using Equations (11)–(13), we have

$$\begin{split} \delta^{h\_2,h\_3}\_{S\_2,S\_3}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u) &= \frac{\partial}{\partial\alpha}\Big[\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(S\_2 Z\_{h\_2}(y,\cdot) + \alpha S\_3 Z\_{h\_3}(u,\cdot))\Big]\Big|\_{\alpha=0} \\ &= \frac{\partial}{\partial\alpha}\Big[\exp\Big\{\frac{1}{2}\|h\_1 z\_{S\_1^\* v}\|\_2^2\Big\}\Phi\_{R^\* v}(S\_2 Z\_{h\_2}(y,\cdot) + \alpha S\_3 Z\_{h\_3}(u,\cdot))\Big]\Big|\_{\alpha=0} \\ &= \exp\Big\{\frac{1}{2}\|h\_1 z\_{S\_1^\* v}\|\_2^2\Big\}\frac{\partial}{\partial\alpha}\Big[\exp\{(R^\* v, S\_2 Z\_{h\_2}(y,\cdot))^\sim + \alpha(R^\* v, S\_3 Z\_{h\_3}(u,\cdot))^\sim\}\Big]\Big|\_{\alpha=0} \\ &= \exp\Big\{\frac{1}{2}\|h\_1 z\_{S\_1^\* v}\|\_2^2\Big\}\frac{\partial}{\partial\alpha}\Big[\exp\{\langle h\_2 z\_{S\_2^\* R^\* v}, y\rangle + \alpha\langle h\_3 z\_{S\_3^\* R^\* v}, u\rangle\}\Big]\Big|\_{\alpha=0} \\ &= (h\_3 z\_{S\_3^\* R^\* v}, z\_u)\_2\, M(S\_1 : h\_1 : v)\exp\{\langle h\_2 z\_{S\_2^\* R^\* v}, y\rangle\} \\ &= (h\_3 z\_{S\_3^\* R^\* v}, z\_u)\_2\, M(S\_1 : h\_1 : v)\, \Phi\_{h\_2 z\_{S\_2^\* R^\* v}}(y). \end{split}$$

Hence, Equation (25) holds if and only if *R* = *I* and

$$M(S\_1 : h\_1 : v\_{S\_2,h\_2}) = M(S\_1 : h\_1 : v).$$

To establish a generalized Cameron–Storvick theorem for the generalized first variation, we need two lemmas concerning the translation theorem on Wiener space.

**Lemma 2** (Translation Theorem 1)**.** *Let $F$ be an integrable functional on $C_0[0,T]$ and let $x_0 \in C_0'$. Then,*

$$\int\_{C\_0[0,T]} F(x + x\_0)\, dm(x) = \exp\Big\{-\frac{1}{2}\|x\_0\|^2\_{C\_0'}\Big\} \int\_{C\_0[0,T]} F(x)\exp\{(x\_0, x)^\sim\}\, dm(x). \tag{26}$$
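As a sanity check, Equation (26) can be tested by Monte Carlo simulation for a simple cylinder functional. The sketch below is an illustration (not part of the original development): it takes $F(x) = \cos(x(T))$ and the drift $x_0(t) = t$, for which $\|x_0\|^2_{C_0'} = T$ and $(x_0, x)^\sim = x(T)$, so both sides reduce to expectations of functions of the Gaussian variable $x(T) \sim N(0, T)$.

```python
import math
import random

# Monte Carlo check of the translation theorem (26) for the cylinder
# functional F(x) = cos(x(T)) and the drift x0(t) = t, for which
# ||x0||^2_{C0'} = T and (x0, x)~ = x(T); both sides then depend only on
# the Gaussian variable x(T) ~ N(0, T).
random.seed(0)
T, N = 1.0, 200_000
lhs = rhs = 0.0
for _ in range(N):
    w = random.gauss(0.0, math.sqrt(T))   # sample x(T)
    lhs += math.cos(w + T)                # F(x + x0)
    rhs += math.cos(w) * math.exp(w)      # F(x) exp{(x0, x)~}
lhs /= N
rhs *= math.exp(-T / 2) / N
print(lhs, rhs)
```

Both estimates converge to $e^{-T/2}\cos T$ as the sample size grows, so their difference should be small up to Monte Carlo error.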

In [23], the authors used Equation (26) to establish Equation (28), which is a generalized translation theorem. The main key in their proof is the change of kernel for the Gaussian process, i.e.

$$\begin{split} Z\_{h\_1}(\theta\_0, t) &= \int\_0^t h\_1(s) d\left( \int\_0^s h\_2(\tau) z\_{x\_0}(\tau) d\tau \right) \\ &= \int\_0^t h\_1(s) h\_2(s) z\_{x\_0}(s) ds \\ &= \int\_0^t h\_2(s) d\left( \int\_0^s h\_1(\tau) z\_{x\_0}(\tau) d\tau \right) = Z\_{h\_2}(u, t) \end{split} \tag{27}$$

where $\theta_0(t) = \int_0^t h_2(s)z_{x_0}(s)ds$ and $u(t) = \int_0^t h_1(s)z_{x_0}(s)ds$ for given $x_0 \in C_0'$.

The following lemma is the translation theorem via the Gaussian process on Wiener space.

**Lemma 3** (Translation Theorem 2)**.** *Let $h_1, h_2 \in H$. Let $x_0(t) = \int_0^t z_{x_0}(s)ds$ and let $F(Z_{h_1}(x,\cdot))$ be an integrable functional on $C_0[0,T]$. Let*

$$\theta\_0(t) = \int\_0^t h\_1(s) z\_{x\_0}(s) ds.$$

*Then,*

$$\begin{split} &\int\_{C\_0[0,T]} F(Z\_{h\_1}(x,\cdot) + Z\_{h\_2}(\theta\_0,\cdot))\, dm(x) \\ &= \exp\Big\{-\frac{1}{2}\|z\_{x\_0}h\_2\|\_2^2\Big\} \int\_{C\_0[0,T]} F(Z\_{h\_1}(x,\cdot))\exp\{(\theta\_0, Z\_{h\_2}(x,\cdot))^\sim\}\, dm(x). \end{split} \tag{28}$$

In our next theorem, we establish the generalized Cameron–Storvick theorem for the generalized first variation.

**Theorem 7.** *Let $x_0 \in C_0'$ be given. Let $h_1, h_2 \in H$ and $S \in L$. In addition, let $u(t) = \int_0^t h_1(s)z_{x_0}(s)ds$ and $\theta_0(t) = \int_0^t h_2(s)z_{x_0}(s)ds$. Then,*

$$\int\_{C\_0[0,T]} \delta^{h\_1,h\_2}\_{S,S}\Phi\_v(x|u)\, dm(x) = \int\_{C\_0[0,T]} (x\_0, Z\_{h\_2}(x,\cdot))^\sim\, \Phi\_v(SZ\_{h\_1}(x,\cdot))\, dm(x). \tag{29}$$

**Proof.** First, by using Equation (11) and the dominated convergence theorem, we have

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_1,h\_2}\_{S,S}\Phi\_v(x|u)\, dm(x) \\ &= \frac{\partial}{\partial\alpha}\Big[\int\_{C\_0[0,T]} \Phi\_v(SZ\_{h\_1}(x,\cdot) + \alpha SZ\_{h\_2}(u,\cdot))\, dm(x)\Big]\Big|\_{\alpha=0} \\ &= \frac{\partial}{\partial\alpha}\Big[\int\_{C\_0[0,T]} \Phi\_v(SZ\_{h\_1}(x,\cdot) + SZ\_{h\_2}(\alpha u,\cdot))\, dm(x)\Big]\Big|\_{\alpha=0}. \end{split}$$

Now, let $F_S^h(x) = \Phi_v(SZ_h(x,\cdot))$. Using the key identity (27) from [23], we have

$$F\_S^{h\_1}(x + \alpha\theta\_0) = \Phi\_v(SZ\_{h\_1}(x,\cdot) + SZ\_{h\_2}(\alpha u,\cdot)),$$

where $\theta_0(t) = \int_0^t h_2(s)z_{x_0}(s)ds$ and $u(t) = \int_0^t h_1(s)z_{x_0}(s)ds$. This means that

$$\int\_{C\_0[0,T]} \delta^{h\_1,h\_2}\_{S,S}\Phi\_v(x|u)\, dm(x) = \frac{\partial}{\partial\alpha}\Big[\int\_{C\_0[0,T]} F\_S^{h\_1}(x + \alpha\theta\_0)\, dm(x)\Big]\Big|\_{\alpha=0}.$$

We next apply the translation theorem in Lemma 2 to the functional $F_S^{h_1}$ in place of $F$ to obtain

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_1,h\_2}\_{S,S}\Phi\_v(x|u)\, dm(x) \\ &= \frac{\partial}{\partial\alpha}\Big[\exp\Big\{-\frac{1}{2}\|\alpha\theta\_0\|^2\_{C\_0'}\Big\} \int\_{C\_0[0,T]} F\_S^{h\_1}(x)\exp\{(\alpha\theta\_0, x)^\sim\}\, dm(x)\Big]\Big|\_{\alpha=0} \\ &= \frac{\partial}{\partial\alpha}\Big[\exp\Big\{-\frac{\alpha^2}{2}\|z\_{x\_0}h\_2\|\_2^2\Big\} \int\_{C\_0[0,T]} F\_S^{h\_1}(x)\exp\{\alpha\langle z\_{x\_0}h\_2, x\rangle\}\, dm(x)\Big]\Big|\_{\alpha=0} \\ &= \int\_{C\_0[0,T]} \langle z\_{x\_0}h\_2, x\rangle\, \Phi\_v(SZ\_{h\_1}(x,\cdot))\, dm(x). \end{split}$$

Since $(\theta_0, x)^\sim = \langle z_{x_0}h_2, x\rangle = (x_0, Z_{h_2}(x,\cdot))^\sim$, we complete the proof of Theorem 7 as desired.

In the last theorem of this paper, we use Equation (29) to give an integration formula involving the generalized first variation and the generalized integral transform. This formula tells us that we can calculate the Wiener integral of the generalized first variation of a generalized integral transform directly, without computing either of them separately.

**Theorem 8.** *Let $h_1, h_2, h_3 \in H$ and let $S_1, S_2, R \in L$. In addition, let $u$, $x_0$, $\theta_0$ be as in Theorem 7. Then,*

$$\int\_{C\_0[0,T]} \delta^{h\_2,h\_3}\_{S\_2,S\_2}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u)\, dm(y) = M(S\_1, RS\_2 : h\_1, h\_2 : v)(h\_3 z\_{x\_0}, h\_2 z\_{S\_2^\*R^\*v})\_2. \tag{30}$$

**Proof.** Applying Equation (29) to the functional <sup>T</sup> *<sup>h</sup>*<sup>1</sup> *<sup>S</sup>*1,*R*(Φ*v*) instead of <sup>Φ</sup>*v*, we have

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_2,h\_3}\_{S\_2,S\_2}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u)\, dm(y) \\ &= \int\_{C\_0[0,T]} (x\_0, Z\_{h\_3}(y,\cdot))^\sim\, \mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(S\_2 Z\_{h\_2}(y,\cdot))\, dm(y). \end{split}$$

Now, using Equations (8) and (13), it follows that

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_2,h\_3}\_{S\_2,S\_2}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u)\, dm(y) \\ &= M(S\_1 : h\_1 : v) \int\_{C\_0[0,T]} (x\_0, Z\_{h\_3}(y,\cdot))^\sim \exp\{(R^\*v, S\_2 Z\_{h\_2}(y,\cdot))^\sim\}\, dm(y) \\ &= M(S\_1 : h\_1 : v) \int\_{C\_0[0,T]} \langle h\_3 z\_{x\_0}, y\rangle \exp\{\langle h\_2 z\_{S\_2^\*R^\*v}, y\rangle\}\, dm(y). \end{split}$$

The following integration formula

$$\int\_{C\_0[0,T]} \langle w, x\rangle \exp\{\langle p, x\rangle\}\, dm(x) = (w, p)\_2 \exp\Big\{\frac{1}{2}\|p\|\_2^2\Big\}, \qquad w, p \in L\_2[0,T]$$

and Equation (12) yield that

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_2,h\_3}\_{S\_2,S\_2}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u)\, dm(y) \\ &= M(S\_1 : h\_1 : v) \int\_{C\_0[0,T]} \langle h\_3 z\_{x\_0}, y\rangle \exp\{\langle h\_2 z\_{S\_2^\*R^\*v}, y\rangle\}\, dm(y) \\ &= M(S\_1 : h\_1 : v)(h\_3 z\_{x\_0}, h\_2 z\_{S\_2^\*R^\*v})\_2 \exp\Big\{\frac{1}{2}\|h\_2 z\_{S\_2^\*R^\*v}\|\_2^2\Big\} \\ &= M(S\_1 : h\_1 : v)\, M(RS\_2 : h\_2 : v)(h\_3 z\_{x\_0}, h\_2 z\_{S\_2^\*R^\*v})\_2. \end{split}$$

Finally, by using Equation (16) in Lemma 1, we establish Equation (30) as desired.
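The integration formula used in this proof can be checked numerically on discretized Brownian paths. The sketch below is an illustration only: it uses the sample kernels $w \equiv 1$ and $p(t) = t$ on $[0,T]$ (arbitrary choices), for which $(w,p)_2 = T^2/2$ and $\|p\|_2^2 = T^3/3$.

```python
import math
import random

# Monte Carlo check of
#   ∫ <w,x> exp{<p,x>} dm(x) = (w,p)_2 exp{||p||_2^2 / 2}
# on discretized Brownian paths, with w ≡ 1 and p(t) = t on [0, T].
random.seed(1)
T, n, N = 1.0, 64, 40_000
dt = T / n
acc = 0.0
for _ in range(N):
    X = Y = 0.0
    for k in range(n):
        dW = random.gauss(0.0, math.sqrt(dt))
        X += dW                        # <w, x> with w ≡ 1
        Y += (k + 0.5) * dt * dW       # <p, x> with p(t) = t (midpoint)
    acc += X * math.exp(Y)
mc = acc / N
exact = (T**2 / 2) * math.exp((T**3 / 3) / 2)
print(mc, exact)
```

The Monte Carlo estimate agrees with the closed form up to sampling and discretization error.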

#### **6. Application**

We finish this paper by giving some examples to illustrate the usefulness of our results and formulas.

We first give a simple example involving an integration operator used in signal processing. For $x \in C_0[0,T]$, let $K_s(x)(t) = \int_0^t x(s)ds$. Then, the adjoint is given by the formula $K_s^*(x)(t) = \int_t^T x(s)ds$.

**Example 1.** *Let $S = K_s$ and let $v(t) = -t + \frac{T}{2}$ and $h(t) = t^2$ on $[0,T]$. Then, $h \in H$. In addition, we have*

$$S^\*v(t) = \int\_t^T v(s)ds = \frac{1}{2}t^2 - \frac{t}{2}T = \int\_0^t \Big(s - \frac{1}{2}T\Big)ds.$$

*This means that $z_{S^*v}(t) = t - \frac{1}{2}T$ on $[0,T]$ and hence $\|hz_{S^*v}\|_2^2 = \frac{1}{12}T^4$. Thus, we obtain that*

$$\mathcal{T}\_{S,R}^h(\Phi\_v)(y) = \exp\Big\{\frac{1}{24}T^4\Big\} \Phi\_{R^\*v}(y).$$
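The adjoint computation in Example 1 can be verified symbolically. The sketch below (using SymPy) checks only the calculus, i.e., the closed form of $(K_s^* v)(t)$ and its kernel, not the transform itself:

```python
import sympy as sp

# Symbolic check of Example 1: for v(s) = -s + T/2, the adjoint
# (K_s^* v)(t) = ∫_t^T v(s) ds equals t^2/2 - tT/2, with kernel t - T/2.
s, t, T = sp.symbols('s t T', positive=True)
v = -s + T / 2
Sstar_v = sp.integrate(v, (s, t, T))
assert sp.expand(Sstar_v - (t**2 / 2 - t * T / 2)) == 0
z = sp.diff(Sstar_v, t)                 # z_{S^*v}
assert sp.expand(z - (t - T / 2)) == 0
```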

We next give two examples related to quantum mechanics. To do this, we consider operators that are useful in quantum mechanics. We treat two cases; various other cases can be handled by similar methods.

#### **Case 1 : Multiplication operator.**

In the next examples, we consider the multiplication operator $T_m$, which plays a role in physics (quantum theories) (see [21]). Before doing so, we introduce some observations needed to obtain the examples. Let $R \in L$ be such that

$$R(xy) = xR(y) \tag{31}$$

for all $x, y \in C_0[0,T]$. In addition, for $t \in [0,T]$, we define a multiplication operator $T_m$ on $C_0[0,T]$ by

$$(T\_{\mathfrak{m}}(\mathbf{x}))(t) \equiv T\_{\mathfrak{m}}(\mathbf{x}(t)) = t\mathbf{x}(t). \tag{32}$$

Then, we have $T_m(xy) = tx(t)y(t)$ and $xT_m(y) = x(t)\,ty(t)$. Hence, Equation (31) holds. In addition, one can easily check that $T_m^* v(t) = tv(t)$ for all $v \in C_0'$. Note that the expected value, or corresponding mean value, is

$$E(\mathbf{x}) \equiv \int\_0^T t|\mathbf{x}(t)|^2 dt = \int\_0^T T\_{\mathbf{m}}(|\mathbf{x}|^2)(t) dt.$$

where $x$ is the state function of a particle in quantum mechanics and $\int_0^T |x(t)|^2 dt$ is the probability that the particle will be found in $[0,T]$.

In the next two examples, we give formulas involving the multiplication operator $T_m$.

**Example 2.** *Let $S = T_m$ and let $v(t) = \frac{1}{2}t^2$ and $h(t) = t^2$ on $[0,T]$. Then, $h \in H$. In addition, we have*

$$v(t) = \frac{1}{2}t^2 = \int\_0^t s\, ds$$

*and*

$$S^\*v(t) = \frac{1}{2}t^3 = \int\_0^t \frac{3}{2}s^2 ds.$$

*This means that $z_v(t) = t$ and $z_{S^*v}(t) = \frac{3}{2}t^2$ on $[0,T]$ and hence $\|hz_{S^*v}\|_2^2 = \frac{3}{10}T^5$. Thus, we obtain that*

$$\mathcal{T}\_{S,R}^h(\Phi\_v)(y) = \exp\Big\{\frac{3}{20}T^5\Big\} \Phi\_{R^\*v}(y).$$

**Example 3.** *Let $S = T_m$ and let $v(t) = e^t - 1$ and $h(t) = t$ on $[0,T]$. Then, $h \in H$. In addition, we have*

$$v(t) = e^t - 1 = \int\_0^t e^s ds$$

*and*

$$S^\*v(t) = te^t - t = \int\_0^t (se^s + e^s - 1)ds.$$

*This means that $z_v(t) = e^t$ and $z_{S^*v}(t) = te^t + e^t - 1$ on $[0,T]$ and hence*

$$\|hz\_{S^\*v}\|\_2^2 = \frac{1}{4}e^{2T}(2T^4 + 2T^2 - 2T + 1) - 2e^T(T^3 - 2T^2 + 4T - 4) + \frac{1}{3}T^3 - \frac{33}{4}.$$

*Thus, we obtain that*

$$\mathcal{T}\_{S,R}^h(\Phi\_v)(y) = \exp\Big\{\frac{1}{8}e^{2T}(2T^4 + 2T^2 - 2T + 1) - e^T(T^3 - 2T^2 + 4T - 4) + \frac{1}{6}T^3 - \frac{33}{8}\Big\} \Phi\_{R^\*v}(y).$$
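The norm in Example 3 reduces to the elementary integral $\int_0^T t^2(te^t + e^t - 1)^2\,dt$, which can be verified symbolically. In the sketch below (using SymPy), the direct computation yields the $e^T$ coefficient $T^3 - 2T^2 + 4T - 4$:

```python
import sympy as sp

# Symbolic check of the norm in Example 3: with h(t) = t and
# z_{S^*v}(t) = t e^t + e^t - 1, compute ||h z_{S^*v}||_2^2 directly.
t, T = sp.symbols('t T', positive=True)
z = t * sp.exp(t) + sp.exp(t) - 1
norm2 = sp.integrate((t * z)**2, (t, 0, T))
claimed = (sp.Rational(1, 4) * sp.exp(2 * T) * (2 * T**4 + 2 * T**2 - 2 * T + 1)
           - 2 * sp.exp(T) * (T**3 - 2 * T**2 + 4 * T - 4)
           + T**3 / 3 - sp.Rational(33, 4))
assert sp.simplify(norm2 - claimed) == 0
```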

#### **Case 2 : Quantum mechanics operators.**

In the next examples, we consider some linear operators which are used to describe solutions of the diffusion equation and the Schrödinger equation (see [24]).

Let $S : C_0'[0,T] \to C_0'[0,T]$ be the linear operator defined by

$$Sw(t) = \int\_0^t w(s)ds.\tag{33}$$

Then, the adjoint operator *S*∗ of *S* is given by the formula

$$S^\*w(t) = w(T)t - \int\_0^t w(s)ds = \int\_0^t [w(T) - w(s)]ds$$

and the linear operator *A* = *S*∗*S* is given by the formula

$$Aw(t) = \int\_0^T \min\{s, t\}w(s)ds.$$
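This min-kernel representation of $A = S^*S$ can be spot-checked symbolically on a sample function. The sketch below (using SymPy) takes the arbitrary choice $w(s) = s^2$ and splits the kernel integral at $s = t$ for $0 \leq t \leq T$:

```python
import sympy as sp

# Check on w(s) = s^2 that (S^* S w)(t) equals the min-kernel form
# ∫_0^T min(s, t) w(s) ds, using S w(t) = ∫_0^t w and
# S^* u(t) = u(T) t - ∫_0^t u.
s, t, T = sp.symbols('s t T', positive=True)
w = s**2
Sw = sp.integrate(w, (s, 0, t))                               # (Sw)(t)
SstarSw = Sw.subs(t, T) * t - sp.integrate(Sw.subs(t, s), (s, 0, t))
kernel = sp.integrate(s * w, (s, 0, t)) + t * sp.integrate(w, (s, t, T))
assert sp.expand(SstarSw - kernel) == 0
```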

In addition, $A$ is self-adjoint on $C_0'[0,T]$ and so

$$(w\_1, Aw\_2)\_{\mathbb{C}\_0'} = (Sw\_1, Sw\_2)\_{\mathbb{C}\_0'} = \int\_0^T w\_1(s) w\_2(s) ds$$

for all $w_1, w_2 \in C_0'[0,T]$. Hence, $A$ is a positive definite operator, i.e., $(w, Aw)_{C_0'} \geq 0$ for all $w \in C_0'[0,T]$. This means that the orthonormal eigenfunctions $\{e_m\}$ of $A$ are given by

$$e\_m(t) = \frac{\sqrt{2T}}{(m - \frac{1}{2})\pi} \sin\left(\frac{(m - \frac{1}{2})\pi}{T}t\right) \equiv \int\_0^t \alpha\_m(s)\, ds$$

with corresponding eigenvalues {*βm*} given by

$$
\beta\_m = \left(\frac{T}{(m - \frac{1}{2})\pi}\right)^2.
$$

Furthermore, it can be shown that $\{e_m\}$ is a basis of $C_0'[0,T]$ and so $\{\alpha_m\}$ is a basis of $L_2[0,T]$, and that $A$ is a trace class operator, so $S$ is a Hilbert–Schmidt operator on $C_0'[0,T]$. In fact, the trace of $A$ is given by $\mathrm{Tr}\,A = \frac{1}{2}T^2 = \int_0^T t\,dt$. By using the concept of $m$-lifting on abstract Wiener space, the operators $S$ and $A$ can be extended to $C_0[0,T]$ (see [19,25]).
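Two of these facts are easy to check numerically: the eigenrelation $(Ae_m)(t) = \beta_m e_m(t)$ and $\mathrm{Tr}\,A = \sum_m \beta_m = \frac{1}{2}T^2$ (since $\sum_m ((m-\frac{1}{2})\pi)^{-2} = \frac{1}{2}$). The sketch below uses the sample values $T = 2$ and $m = 3$ (arbitrary choices for illustration):

```python
import math

# Numerical check of (A e_m)(t) = beta_m e_m(t) and Tr A = Σ beta_m = T^2/2.
T, m = 2.0, 3
lam = (m - 0.5) * math.pi / T
beta_m = (T / ((m - 0.5) * math.pi))**2

def e(t):   # eigenfunction e_m
    return math.sqrt(2 * T) / ((m - 0.5) * math.pi) * math.sin(lam * t)

# (A e_m)(t0) by midpoint quadrature of ∫_0^T min(s, t0) e_m(s) ds.
n, t0 = 20_000, 0.7 * T
h = T / n
Ae = sum(min((k + 0.5) * h, t0) * e((k + 0.5) * h) for k in range(n)) * h
assert abs(Ae - beta_m * e(t0)) < 1e-6

# Partial sums of Σ beta_m converge (slowly, like 1/M) to Tr A = T^2/2.
tr = sum((T / ((k - 0.5) * math.pi))**2 for k in range(1, 200_001))
print(tr, T**2 / 2)
```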

We now give formulas with respect to the operators *S* and *A*, respectively.

**Example 4.** *Let $S$ be given by Equation* (33) *and let $v(t) = \frac{1}{2}t^2$ and $h(t) = t$ on $[0,T]$. Then, $h \in H$. In addition, we have*

$$\begin{aligned} v(t) &= \frac{1}{2}t^2 = \int\_0^t s\, ds \\ Sv(t) &= \int\_0^t \frac{1}{2}s^2 ds = \frac{1}{6}t^3 \end{aligned}$$

*and*

$$S^\*v(t) = tv(T) - Sv(t) = \frac{1}{2}tT^2 - \frac{1}{6}t^3 = \int\_0^t \Big[\frac{1}{2}T^2 - \frac{1}{2}s^2\Big]ds.$$

*This means that $z_v(t) = t$ and $z_{S^*v}(t) = \frac{1}{2}T^2 - \frac{1}{2}t^2$ on $[0,T]$ and hence $\|hz_{S^*v}\|_2^2 = \frac{1}{40}T^7$. Thus, we obtain that*

$$\mathcal{T}\_{S,R}^h(\Phi\_v)(y) = \exp\Big\{\frac{1}{80}T^7\Big\} \Phi\_{R^\*v}(y).$$

**Example 5.** *Let $S = A$ and let $v(t) = \frac{1}{2}t^2$ and $h(t) = t$ on $[0,T]$. Then, $h \in H$. In addition, we have*

$$v(t) = \frac{1}{2}t^2 = \int\_0^t s\, ds,$$

$$\begin{split} Av(t) = SS^\*v(t) &= \int\_0^t S^\*v(s)ds = \int\_0^t \int\_0^s [v(T) - v(u)]\, du\, ds \\ &= \int\_0^t \Big[ sv(T) - \int\_0^s \frac{1}{2}u^2 du \Big] ds = \int\_0^t \Big[ \frac{1}{2}sT^2 - \frac{1}{6}s^3 \Big] ds \\ &= \frac{1}{4}T^2t^2 - \frac{1}{24}t^4 \end{split}$$

*and*

$$A^\*v(t) = Av(t) = \int\_0^t \Big[\frac{1}{2}sT^2 - \frac{1}{6}s^3\Big]ds.$$

*This means that $z_v(t) = t$ and $z_{A^*v}(t) = \frac{1}{2}tT^2 - \frac{1}{6}t^3$ on $[0,T]$ and hence $\|hz_{A^*v}\|_2^2 = \frac{83}{2835}T^9$. Thus, we obtain that*

$$\mathcal{T}\_{A,R}^h(\Phi\_v)(y) = \exp\Big\{\frac{83}{5670}T^9\Big\}\Phi\_{R^\*v}(y).$$
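The quantities in Example 5 can be re-derived symbolically. In the sketch below (using SymPy), the direct computation gives $\frac{83}{2835}T^9$ for the squared norm $\|hz_{A^*v}\|_2^2$ with $h(t) = t$:

```python
import sympy as sp

# Symbolic re-derivation of Example 5: Av = S S^* v for v(t) = t^2/2,
# then the norm ||h z_{A^*v}||_2^2 with h(t) = t.
s, t, T, u = sp.symbols('s t T u', positive=True)
v = lambda x: x**2 / 2
Sstar_v = sp.integrate(v(T) - v(u), (u, 0, s))       # S^*v(s)
Av = sp.integrate(Sstar_v, (s, 0, t))                # (S S^* v)(t)
assert sp.expand(Av - (T**2 * t**2 / 4 - t**4 / 24)) == 0
z = sp.diff(Av, t)                                   # z_{A^*v}(t)
norm2 = sp.integrate((t * z)**2, (t, 0, T))
assert sp.expand(norm2 - sp.Rational(83, 2835) * T**9) == 0
```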

We now give an example with respect to Theorem 8.

**Example 6.** *Let $S_1 = T_m$ and $S_2 = S$, as used in the examples above. Let $R = I$ and let $h_1(t) = h_2(t) = t$, $h_3(t) = t^2$ on $[0,T]$. Furthermore, let $v(t) = \frac{1}{2}t^2$ on $[0,T]$ and let $x_0(t) = t = \int_0^t 1\,ds \in C_0'$. Then, we have $z_v(t) = t$, $z_{S_1^*v}(t) = \frac{3}{2}t^2$, $z_{S_2^*v}(t) = \frac{1}{2}T - \frac{1}{2}t^2$ and $z_{x_0}(t) = 1$ on $[0,T]$. Furthermore, we have*

$$M(S\_1, RS\_2 : h\_1, h\_2 : v) = \exp\Big\{\frac{5}{28}T^7 + \frac{1}{24}T^5 - \frac{1}{20}T^6\Big\}$$

*and*

$$(h\_3 z\_{x\_0}, h\_2 z\_{S\_2^\*R^\*v})\_2 = \frac{1}{8}T^5 - \frac{1}{12}T^6.$$

*Hence, by using Equation* (30) *in Theorem 8, we can conclude that*

$$\begin{split} &\int\_{C\_0[0,T]} \delta^{h\_2,h\_3}\_{S\_2,S\_2}\mathcal{T}^{h\_1}\_{S\_1,R}(\Phi\_v)(y|u)\, dm(y) \\ &= \exp\Big\{\frac{5}{28}T^7 + \frac{1}{24}T^5 - \frac{1}{20}T^6\Big\}\Big(\frac{1}{8}T^5 - \frac{1}{12}T^6\Big). \end{split} \tag{34}$$

#### **7. Conclusions**

In Sections 3 and 4, we established some fundamental formulas for the generalized integral transform, the generalized convolution product and the generalized first variation involving the generalized Cameron–Storvick theorem. As shown in Examples 2, 4 and 6, various applications can be obtained by choosing the kernel functions and operators. The results and formulas are more general than those in previous papers. From these, we conclude that various further examples can also be handled very easily.

**Funding:** This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2017R1E1A1A03070041).

**Acknowledgments:** The author would like to express gratitude to the referees for their valuable comments and suggestions, which have improved the original paper.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **A Guide to Special Functions in Fractional Calculus**

**Virginia Kiryakova**

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria; virginia@diogenes.bg

**Abstract:** *Dedicated to the memory of Professor Richard Askey (1933–2019) and to pay tribute to the Bateman Project*. Harry Bateman planned his "shoe-boxes" project (accomplished after his death as *Higher Transcendental Functions*, Vols. 1–3, 1953–1955, under the editorship of A. Erdélyi) as a "*Guide to the Functions*". This inspired the author to use the modified title of the present survey. Most of the standard (classical) Special Functions are representable in terms of the Meijer *G*-function and, specially, of the generalized hypergeometric functions *pFq*. These appeared as solutions of differential equations in mathematical physics and other applied sciences that are of integer order, usually of second order. However, recently, mathematical models of fractional order are preferred because they reflect more adequately natural phenomena and various social events, and these needs attracted attention to *"new" classes* of special functions as their solutions, the so-called *Special Functions of Fractional Calculus (SF of FC)*. Generally, under this notion, we have in mind the Fox *H*-functions, their most widely used cases of the Wright generalized hypergeometric functions *<sup>p</sup>*Ψ*<sup>q</sup>* and, in particular, the Mittag–Leffler type functions, among them the "Queen function of fractional calculus", the Mittag–Leffler function. These extensions of the classical special functions to fractional indices/parameters became an unavoidable tool when fractalized models of phenomena and events are treated. Here, we try to review some of the basic results on the theory of the SF of FC, obtained in the author's works for more than 30 years, and illustrate the wide use and important role of these functions by several examples.

**Keywords:** special functions; generalized hypergeometric functions; fractional calculus operators; integral transforms

**MSC:** 33C60; 33E12; 26A33; 44A20

#### **1. Historical Introduction**

Special functions are particular mathematical functions that have more or less established names and notations due to their importance in mathematical analysis, functional analysis, geometry, physics, astronomy, statistics or other applications (Wikipedia: Special Functions [1]). It was perhaps Euler who, starting around 1720, first discussed many of the standard special functions. He defined the Gamma-function as a continuation of the factorial, introduced the Bessel functions and studied the elliptic functions. Several (theoretical and applied) scientists started to use such functions, introduced their notations and named them after famous contributors. Thus, notions such as the Bessel and cylindrical functions; the Gauss, Kummer, Tricomi, confluent and generalized hypergeometric functions; the classical orthogonal polynomials (Laguerre, Jacobi, Gegenbauer, Legendre, Tchebisheff, Hermite, etc.); the incomplete Gamma- and Beta-functions; the Error functions; and the Airy, Whittaker, etc. functions appeared, and a long list of handbooks on the so-called "*Special Functions of Mathematical Physics*" or "*Named Functions*" (we also call them "*Classical Special Functions*") was published. We mention only some of them in this survey.

As Richard Askey (*to whose memory we dedicate this survey*) confessed in his lectures [2] on orthogonal polynomials and special functions: "Now, there are relatively large number of people who know a fair amount about this topic. Nevertheless, . . .most of the

**Citation:** Kiryakova, V. A Guide to Special Functions in Fractional Calculus. *Mathematics* **2021**, *9*, 106. https://doi.org/10.3390/math9010106

Received: 15 December 2020 Accepted: 24 December 2020 Published: 5 January 2021


**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

mathematicians are totally unaware of the power of the special functions. They react to a paper which contains a Bessel function or Legendre polynomial by turning immediately to the next paper", and continued: "Hopefully these lectures will show ... how useful hypergeometric functions can be. Very few facts about them are known, but these few facts can be very useful in many different contexts. So, my advice is to learn something about hypergeometric functions: or, if this seems too hard or dull a task, get to know someone who knows something about them. If you already know something about these functions, share your knowledge with a colleague or two, or a group of students. Every large university and research laboratory should have a person who not only finds things in the Bateman Project (i.e., [3]), but can fill in a few holes in this set of books ... In any case, I hope my point has been made; special functions are useful and those who need them and those who know them should start to talk to each other ... The mathematical community at large needs the education on the usefulness of special functions more than most other people who could use them . . . ".

As a participant in the NATO International Conference on Special Functions and Applications 2000 (Arizona State University), the author had the chance to witness the long late-night discussions (mainly between Richard Askey and Oleg Marichev) about the merits and competition of the two great projects on Special Functions, on which the Computer Algebra Systems packages *Mathematica* and *Maple* are based: the *Bateman Project* [3] and the *NIST Project* [4], based on the Abramowitz–Stegun handbook [5] and on a more recent one, edited by Olver–Lozier–Boisvert–Clark [6].

The author of this survey was tempted to start paying attention to Special Functions by having the handbook [3] on her desk, while working on a M.Sc. thesis. We cite some texts from the Preface of this encyclopedia book, known as the *Bateman Project*: ". . . During his last years he (Professor Harry Bateman) had embarked upon a project whose successful completion, he believed, would prove of great value to scientists in all fields. He planned an extensive compilation of "special functions"—solutions of a wide class of mathematically and physically relevant functional equations. He intended to investigate and to tabulate properties of such functions, inter-relations between such functions, their representations in various forms, their macro- and micro-scopic behavior, and to construct tables of important definite integrals involving such functions . . . anyone who has been faced with the task of handling and discussing and understanding in detail the solution to an applied problem which is described by a differential equation is painfully familiar with the disproportionately large amount of scattered research on special functions one must wade through in the hope of extracting the desired information . . . ". In the time of Bateman's death (1946) his notes amounted to a veritable mountain of paper. His card-catalogue alone filled several dozen cardboard boxes (the famous *"shoe-boxes"*). . . . "Bateman planned his Project as a *'Guide to the Functions'* on a gigantic scale . . . the great importance of such a work hardly needs emphasis . . . (this) would have made this book as a kind of 'Greater Oxford Dictionary of Special Functions' (from the Introduction to [3])". This project resulted in publication of five important reference volumes ([3,7]), under the editorship of Arthur Erdélyi, in association with W. Magnus, F. Oberhettinger and F.G. Tricomi.

In 2007, the *Askey–Bateman Project* was announced by Mourad Ismail as a five- or six-volume encyclopedic book series on special functions, based on the works of both Harry Bateman and Richard Askey. Starting in 2020, Cambridge University Press began publishing Volumes 1 and 2 of this Encyclopedia of Special Functions with series editors Mourad Ismail and Walter Van Assche [8].

#### **2. Preliminaries—Basic Definitions and Facts**

We give here only a short background on the considered *Special Functions of Fractional Calculus (SF of FC)*. Like the standard special functions, most of the SF of FC are entire functions of the complex variable *z* or analytic ones in disks in C. We skip the details on defining single-valued branches of the considered functions, the functional spaces, and the operators' properties there (see our previous works, e.g., ref. [9] (§5.5.i)). In addition, we limit ourselves to the Fox *H*-functions of one complex variable, as a level of generality sufficient to expose our approach and results.

Among the long list of handbooks and surveys dedicated not only to the classical SF but also to the SF of FC, we mention only a few of them *from the last few decades*: Mathai–Saxena [10], 1973; Marichev [11], 1978; Srivastava–Gupta–Goyal [12], 1982; Srivastava–Kashyap [13], 1982; Prudnikov–Brychkov–Marichev [14], 1992; Kiryakova [9], 1994; Yakubovich–Luchko [15], 1994; Podlubny [16], 1999; Kilbas–Saigo [17], 2004; Kilbas–Srivastava–Trujillo [18], 2006; Mathai–Haubold [19], 2008; Mathai–Saxena–Haubold [20], 2010; Mainardi [21], 2010; Gorenflo–Kilbas–Mainardi–Rogosin [22], 2014–2020; and recent ones such as Cohl–Ismail [23], 2020; Assche–Ismail [8], 2020; Mainardi [24], 2020; etc. (for more sources, see also the survey paper Machado–Kiryakova [25]). The basic classes of SF considered here are briefly discussed below.

#### *Definitions of the Basic Special Functions*

We refer to the survey by Mainardi–Pagnini [26], which points out the pioneering role of Salvatore Pincherle in developing the generalized hypergeometric functions (and, thus, the later-appearing *G*-functions) by means of Mellin–Barnes integrals, and where a historical note from the Bateman Project [3] (Vol. 1, p. 49) is cited: "... Of all integrals which contain Gamma functions in their integrands the most important ones are the so-called *Mellin-Barnes integrals*. Such integrals were first introduced by S. Pincherle, in 1888; their theory has been developed in 1910 by H. Mellin ... and they were used for a complete integration of the hypergeometric differential equation by E.W. Barnes, 1908."

**Definition 1.** (Ch. Fox [27], 1961; see books such as [9,12,14,18], among other earlier and later ones) *The Fox H-function is a generalized hypergeometric function, defined by means of the Mellin–Barnes type contour integral*

$$H\_{p,q}^{m,n}\left[z\,\middle|\,\begin{matrix}(a\_i,A\_i)\_1^p\\(b\_j,B\_j)\_1^q\end{matrix}\right] = \frac{1}{2\pi i}\int\_{\mathcal{L}}\mathcal{H}\_{p,q}^{m,n}(s)\,z^{-s}\,ds,\ \text{ with }\ \mathcal{H}\_{p,q}^{m,n}(s) = \frac{\prod\_{j=1}^{m}\Gamma(b\_j+B\_j s)\prod\_{i=1}^{n}\Gamma(1-a\_i-A\_i s)}{\prod\_{j=m+1}^{q}\Gamma(1-b\_j-B\_j s)\prod\_{i=n+1}^{p}\Gamma(a\_i+A\_i s)},\tag{1}$$

*with complex variable* $z \neq 0$ *and a contour* $\mathcal{L}$ *in the complex plane; the orders* (*m*, *n*, *p*, *q*) *are nonnegative integers such that* 0 ≤ *m* ≤ *q,* 0 ≤ *n* ≤ *p; the parameters* $A\_i > 0$, $B\_j > 0$ *are positive, and* $a\_i$, $b\_j$, *i* = 1, ... , *p*; *j* = 1, ... , *q, are arbitrary complex numbers such that* $A\_i(b\_j + l) \neq B\_j(a\_i - l' - 1)$, $l, l' = 0, 1, 2, \dots$; $i = 1,\dots,n$; $j = 1,\dots,m$. *Note that the integrand* $\mathcal{H}\_{p,q}^{m,n}(s)$ *with* $s \to -s$ *is the Mellin transform of the H-function* (1)*.*

The details on the properties of the Fox *H*-function and types of contour L can be found in many contemporary handbooks on SF as [12,14,18], where its behavior is described in terms of the following parameters:

$$\rho = \prod\_{i=1}^{p} A\_i^{-A\_i}\prod\_{j=1}^{q} B\_j^{B\_j};\quad \Delta = \sum\_{j=1}^{q} B\_j - \sum\_{i=1}^{p} A\_i;\quad \gamma = \lim\_{s\to\infty,\ s\in\mathcal{L}\_{i\infty}}\operatorname{Re} s;$$
$$\mu = \sum\_{j=1}^{q} b\_j - \sum\_{i=1}^{p} a\_i + \frac{p-q}{2};\qquad a^{\*} = \sum\_{i=1}^{n} A\_i - \sum\_{i=n+1}^{p} A\_i + \sum\_{j=1}^{m} B\_j - \sum\_{j=m+1}^{q} B\_j.\tag{2}$$

Depending on the values in (2), the *H*-function is an analytic function of *z* in disks |*z*| < *ρ* or outside them, in some sectors, or in the whole complex plane. In particular, the integral (1) converges (see [14] (§8.3)) if one of the following conditions is satisfied: (1) $\mathcal{L} = \mathcal{L}\_{i\infty}$: $a^\* > 0$, $|\arg z| < a^\*\pi/2$; (2) $\mathcal{L} = \mathcal{L}\_{i\infty}$: $a^\* \geq 0$, $|\arg z| = a^\*\pi/2$, $\gamma\Delta < -1 - \operatorname{Re}\mu$; (3) $\mathcal{L} = \mathcal{L}\_{-i\infty}$: $\Delta > 0$, $0 < |z| < \infty$, or $\Delta = 0$, $0 < |z| < \rho$, or $\Delta = 0$, $a^\* \geq 0$, $|z| = \rho$, $\operatorname{Re}\mu < 0$; or (4) $\mathcal{L} = \mathcal{L}\_{+i\infty}$: $\Delta < 0$, $0 < |z| < \infty$, or $\Delta = 0$, $|z| > \rho$, or $\Delta = 0$, $a^\* \geq 0$, $|z| = \rho$, $\operatorname{Re}\mu < 0$. The contour $\mathcal{L}\_{-i\infty}$ (respectively, $\mathcal{L}\_{+i\infty}$) is a left (respectively, right) loop in some horizontal strip that begins at the point $-\infty + i\varphi\_1$ (respectively, $+\infty + i\varphi\_1$), keeps all poles of the functions $\Gamma(b\_j + B\_j s)$, $j = 1, 2, \dots, m$, on its left side and those of $\Gamma(1 - a\_i - A\_i s)$, $i = 1, 2, \dots, n$, on its right side, and ends at the point $-\infty + i\varphi\_2$ (respectively, $+\infty + i\varphi\_2$), where $\varphi\_1 < \varphi\_2$. The contour $\mathcal{L}\_{i\infty}$ starts at the point $\gamma - i\infty$ and ends at $\gamma + i\infty$ in a way that separates the mentioned poles, as for $\mathcal{L}\_{\pm i\infty}$.

For studies on the behavior of the *H*-function around the singular points, one can see also the work of Karp [28], commenting and revisiting the results of Braaksma [29].

If all $A\_i = B\_j = 1$, $i = 1,\dots,p$; $j = 1,\dots,q$, the *H*-function $H\_{p,q}^{m,n}\left[z\,\middle|\,\begin{smallmatrix}(a\_i,1)\_1^p\\(b\_j,1)\_1^q\end{smallmatrix}\right]$ reduces to the *Meijer G-function* (C.S. Meijer [30], 1936–1941; see details in [3] (Vol. 1) and all the above-mentioned books):

$$G\_{p,q}^{m,n}\left[z\,\middle|\,\begin{matrix}(a\_i)\_1^p\\(b\_j)\_1^q\end{matrix}\right] = \frac{1}{2\pi i}\int\_{\mathcal{L}}\mathcal{G}\_{p,q}^{m,n}(s)\,z^{-s}\,ds = \frac{1}{2\pi i}\int\_{\mathcal{L}}\frac{\prod\_{j=1}^{m}\Gamma(b\_j+s)\prod\_{i=1}^{n}\Gamma(1-a\_i-s)}{\prod\_{j=m+1}^{q}\Gamma(1-b\_j-s)\prod\_{i=n+1}^{p}\Gamma(a\_i+s)}\,z^{-s}\,ds,\quad z\neq 0.\tag{3}$$

In this case, the behavior of the function (3) depends on the conditions (2) with $\rho = 1$, $\Delta = q - p$, $\delta = m + n - \frac{p+q}{2}$. Although simpler than (1), the *G*-function is still general enough, as it incorporates most of the Classical SF (known also as Named SF) and many elementary functions (see the lists of examples in [3] (Vol. 1), [9] (Appendix C)).

The basic SF of FC that are Fox *H*-functions but do *not* reduce to Meijer *G*-functions in the general case (of *irrational Aj*, *Bk*) are the following generalized hypergeometric functions, extending the more popular *pFq*-functions.

**Definition 2.** (see [9,14,22]) *The Wright generalized hypergeometric function* $\_p\Psi\_q(z)$*, also called the Fox–Wright function (abbreviated as F-W g.h.f. or Wright g.h.f.), is defined as:*

$$\,\_p\Psi\_q \left[ \begin{array}{c} (a\_1, A\_1), \dots, (a\_p, A\_p) \\ (b\_1, B\_1), \dots, (b\_q, B\_q) \end{array} \bigg| z \right] = \sum\_{k=0}^\infty \frac{\Gamma(a\_1 + kA\_1) \dots \Gamma(a\_p + kA\_p)}{\Gamma(b\_1 + kB\_1) \dots \Gamma(b\_q + kB\_q)} \frac{z^k}{k!} \tag{4}$$

$$= H\_{p,q+1}^{1,p}\left[-z\,\middle|\,\begin{matrix}(1-a\_1,A\_1),\dots,(1-a\_p,A\_p)\\(0,1),(1-b\_1,B\_1),\dots,(1-b\_q,B\_q)\end{matrix}\right].\tag{5}$$

*It was introduced and studied by Sir Edward Maitland (E.-M.) Wright in a series of his works during 1933–1940 (e.g., [31,32]). In the notation of* (2)*, the power series* (4) *defines an entire function of z if* Δ > −1*; it is absolutely convergent in the disk* {|*z*|<*ρ*} *for* Δ = −1*; and the same holds on* |*z*|=*ρ if Re*(*μ*)>1/2 *(see details, for example, in [33]).*

When all $A\_1 = \dots = A\_p = 1$, $B\_1 = \dots = B\_q = 1$, the Wright g.h.f. reduces to the *generalized hypergeometric* $\_pF\_q$-*function*, which is itself a case of the *G*-function (3) (for early details, see [3] (Vol. 1)):

$${}\_{p}\Psi\_{q}\left[\begin{matrix}(a\_1,1),\dots,(a\_p,1)\\(b\_1,1),\dots,(b\_q,1)\end{matrix}\,\middle|\,z\right] = c\;{}\_{p}F\_{q}(a\_1,\dots,a\_p;b\_1,\dots,b\_q;z) = c\sum\_{k=0}^{\infty}\frac{(a\_1)\_k\dots(a\_p)\_k}{(b\_1)\_k\dots(b\_q)\_k}\frac{z^k}{k!}\tag{6}$$

$$=G\_{p,q+1}^{1,p}\left[-z\,\middle|\,\begin{matrix}1-a\_1,\dots,1-a\_p\\0,1-b\_1,\dots,1-b\_q\end{matrix}\right];$$

where

$$c = \prod\_{i=1}^{p}\Gamma(a\_i)\Big/\prod\_{j=1}^{q}\Gamma(b\_j),\qquad (a)\_k := \Gamma(a+k)/\Gamma(a).$$

In general (that is, except for certain integer values of the parameters, when the series terminates or fails to make sense), $\_pF\_q$ converges for all finite *z* if *p* ≤ *q*, converges for |*z*| < 1 if *p* = *q* + 1, and diverges for all *z* ≠ 0 if *p* > *q* + 1. The simplest particular cases are the Gauss hypergeometric function $\_2F\_1$, the Kummer (confluent hypergeometric) function $\_1F\_1$ and the Bessel type functions $\_0F\_1$.
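As a numerical sanity check of relation (6), one can sum the truncated Wright series (4) directly and compare it with $c\cdot{}\_pF\_q$. The following sketch uses the Python library mpmath; the truncation length `N` and the sample parameter values are arbitrary choices, not taken from the text:

```python
from mpmath import mp, gamma, factorial, hyper

mp.dps = 30  # working precision in decimal digits

def wright_psi(aA, bB, z, N=80):
    # Truncated power series (4) for the Fox-Wright function pPsi_q;
    # aA = [(a_i, A_i)], bB = [(b_j, B_j)]; N is an ad hoc truncation.
    s = mp.mpf(0)
    for k in range(N):
        num = mp.mpf(1)
        for a, A in aA:
            num *= gamma(a + k*A)
        den = mp.mpf(1)
        for b, B in bB:
            den *= gamma(b + k*B)
        s += num/den * mp.mpf(z)**k / factorial(k)
    return s

# Relation (6): with all A_i = B_j = 1, pPsi_q = c * pFq,
# where c = prod Gamma(a_i) / prod Gamma(b_j)
a, b, z = mp.mpf('1.5'), mp.mpf('2.25'), mp.mpf('0.7')
lhs = wright_psi([(a, 1)], [(b, 1)], z)
rhs = gamma(a)/gamma(b) * hyper([a], [b], z)   # c * 1F1(a; b; z)
assert abs(lhs - rhs) < mp.mpf(10)**(-20)
print("relation (6) verified for 1Psi1 vs 1F1")
```

For these parameter values the series converges rapidly, so the truncation error is far below the asserted tolerance.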

A very special and important case of the SF of FC, both as an *H*-function and as a Wright $\_p\Psi\_q$-function, is the *"Queen" function of FC* (see [34]), namely the *Mittag–Leffler (M-L) function*, which has recently enjoyed many extensions (along with many basic elementary and known SF as its particular cases) and wide applications in solutions of fractional order models. This is the topic of Sections 4 and 5.

#### **3. On the Use of Some** *G***- and** *H***-Functions in Theory of Integral Transforms and Special Functions**

The Meijer *G*-function includes most of the elementary and classical special functions as particular cases; lists of these can be found, say, in [3] (Vol. 1), [9] (Appendix C), [11,14]. Naturally, the more general Fox *H*-function incorporates all cases of the *G*-functions, and much more of the SF of FC. Here, we draw the readers' attention to the use of *two basic classes of G- and H-functions* with specific orders: $G\_{0,m}^{m,0}$, respectively $H\_{0,m}^{m,0}$, with $m = q$, $n = p = 0$; and $G\_{m,m}^{m,0}$, respectively $H\_{m,m}^{m,0}$, with $m = p = q$, $n = 0$.

*3.1. Use of G- and H-Functions as Kernels of Laplace Type Integral Transforms*

The Laplace transform

$$\mathcal{L}\{f(t);s\} = \int\_0^\infty \exp(-st)f(t)dt,\quad \text{Re}\,s > \mu,\tag{7}$$

is usually considered for functions *f*(*t*) of the form

$$\left\{ f(t) = t^{p}\tilde{f}(t),\ p > -1,\ \tilde{f}\in C[0,\infty);\quad f(t) = O(\exp(\mu t)),\ t\to\infty,\ \mu\in\mathbb{R}\right\}.$$

**Definition 3.** *The G- and H-transforms (see, for example [35], also [15,36]) of the form*

$$\mathcal{G}\{f(t);s\} = \int\_0^\infty G\_{p,q}^{m,n}\left[st\,\middle|\,\begin{matrix}(a\_j)\_1^p\\(b\_k)\_1^q\end{matrix}\right] f(t)\,dt,\quad\text{resp.}\quad \mathcal{H}\{f(t);s\} = \int\_0^\infty H\_{p,q}^{m,n}\left[st\,\middle|\,\begin{matrix}(a\_j,A\_j)\_1^p\\(b\_k,B\_k)\_1^q\end{matrix}\right] f(t)\,dt,$$

*are said to be generalized integral transforms of Laplace type when*

$$\delta = m + n - \frac{p + q}{2} > 0, \quad \text{resp. } a^\* = \sum\_{j=1}^n A\_j - \sum\_{j=n+1}^p A\_j + \sum\_{k=1}^m B\_k - \sum\_{k=m+1}^q B\_k > 0,$$

*and are considered in suitable functional spaces of "transformable" functions.*

In 1958, Obrechkoff [37] introduced a far-reaching generalization of the Laplace and Meijer transforms, particular cases of which were studied by many authors in later years, mainly for the purposes of operational calculi for different classes of differential operators. His aim was to extend the theorem of S. Bernstein on absolutely monotonic functions representable by means of Laplace–Stieltjes transforms, with the conditions on the *n*th derivatives replaced by similar ones involving more general differential operators. The *Obrechkoff transform* was defined as

$$\mathcal{F}(s) = \int\_0^\infty \Phi(st) f(t) dt$$

with a kernel Φ(*s*) given by the integral representation

$$\Phi(s) = \int\_0^\infty\cdots\int\_0^\infty u\_1^{\beta\_1}\cdots u\_p^{\beta\_p}\,\exp\left(-u\_1-\cdots-u\_p-\frac{s}{u\_1\cdots u\_p}\right)du\_1\cdots du\_p.\tag{8}$$
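For $p = 1$, the kernel (8) reduces to a Macdonald function via the classical integral $\int\_0^\infty u^{\nu-1}e^{-u-s/u}\,du = 2 s^{\nu/2}K\_\nu(2\sqrt{s})$, here with $\nu = \beta\_1 + 1$. A small numerical check, sketched with the Python library mpmath (the sample values of $\beta\_1$ and $s$ are arbitrary):

```python
from mpmath import mp, quad, exp, besselk, sqrt, inf

mp.dps = 25

beta1, s = mp.mpf('0.7'), mp.mpf('2.3')   # arbitrary sample parameters

# Kernel (8) with p = 1: Phi(s) = int_0^inf u^beta1 * exp(-u - s/u) du
phi = quad(lambda u: u**beta1 * exp(-u - s/u), [0, 1, inf])

# Closed form via the Macdonald function, with nu = beta1 + 1
closed = 2 * s**((beta1 + 1)/2) * besselk(beta1 + 1, 2*sqrt(s))

assert abs(phi - closed) < mp.mpf(10)**(-15)
print("kernel (8) with p = 1 matches 2 s^(nu/2) K_nu(2 sqrt(s))")
```

The integrand vanishes exponentially at both endpoints, so the split quadrature over [0, 1] and [1, ∞) converges quickly.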

Later, in 1966, Dimovski [38] introduced a class of differential operators of Bessel type and of arbitrary integer order *m* > 1, called (as, for example, in [9]) *hyper-Bessel differential operators*. They have the alternative representations

$$\begin{split} Bf(t) &= t^{\alpha\_0}\frac{d}{dt}t^{\alpha\_1}\frac{d}{dt}\cdots t^{\alpha\_{m-1}}\frac{d}{dt}t^{\alpha\_m}f(t)\\ &= t^{-\beta}P\_m\left(t\frac{d}{dt}\right)f(t) = t^{-\beta}\prod\_{k=1}^{m}\left(t\frac{d}{dt}+\beta\gamma\_k\right)f(t),\quad t>0,\end{split}\tag{9}$$

with arbitrary parameters $\alpha\_0, \alpha\_1, \dots, \alpha\_m$; $\beta := m - (\alpha\_0+\alpha\_1+\dots+\alpha\_m) > 0$; $\gamma\_k := \frac{1}{\beta}(\alpha\_k+\alpha\_{k+1}+\dots+\alpha\_m)$, $k = 1,\dots,m$; and $P\_m$ a polynomial of degree *m*. Evidently, for $m = \beta = 2$, $\gamma\_{1,2} = \pm\frac{\nu}{2}$, one has the second-order Bessel differential operator $B\_\nu$ with the Bessel function $y(t) = J\_\nu(t)$ satisfying $B\_\nu y(t) = -y(t)$. For other choices of parameters, many other particular differential operators appear in equations of mathematical physics, operational calculus and applied analysis. To combine the Mikusinski-type algebraic approach to operational calculus for (9) with a transform method, Dimovski used a *modified Obrechkoff transform* (which we also call, for short, the Obrechkoff transform), defined as

$$\mathcal{O}\{f(t);s\} = \beta\int\_0^\infty t^{\beta(\gamma\_m+1)-1}\,K\left[(st)^{\beta}\right]f(t)\,dt = \beta\int\_0^\infty \lambda(t,s)\,f(t)\,dt,$$

with the kernel-function

$$K(s) = \int\_0^\infty\dots\int\_0^\infty \exp\left(-u\_1-\dots-u\_{m-1}-\frac{s}{u\_1\dots u\_{m-1}}\right)\prod\_{k=1}^{m-1}u\_k^{\gamma\_m-\gamma\_k-1}\,du\_1\dots du\_{m-1}.\tag{10}$$

In [9] (Ch.3), and also in other works such as [39], we proved that the kernel-functions (8) and (10) of the Obrechkoff transforms are representable as Meijer $G\_{0,m}^{m,0}$-functions, namely (for a proof see, e.g., Lemma 1 of [39]):

$$\Phi(s) = G\_{0,p+1}^{p+1,0}\left[s\,\middle|\,\begin{matrix}--\\(\beta\_k+1)\_1^p,\ 0\end{matrix}\right],\qquad \lambda(t,s) = s^{-\beta(\gamma\_m+1)+1}\,G\_{0,m}^{m,0}\left[(st)^{\beta}\,\middle|\,\begin{matrix}--\\(\gamma\_k+1-\frac{1}{\beta})\_1^m\end{matrix}\right].\tag{11}$$

Therefore, *the Obrechkoff transform appears to be a G-transform of Laplace type* (because *δ* = *m*/2 > 0), and its theory has been further developed in full detail (convolution theorems, real and complex inversion formulas, images, examples, etc.) more easily by using the tools of the *G*-functions (see, for example, [9] (Ch.3), [39]).

Another previously unnoticed fact is that functions of the form of the kernels (8) and (10) of the Obrechkoff transform had already been studied in 1937 by Erdélyi [40]. He may have been the first to derive a relation between the $\_0F\_{m-1}$-functions (mentioned below as hyper-Bessel functions) and these kernel-functions (Formula (7.4) in [40]). However, at the time of Erdélyi's work [40], 1937, the next step, the introduction of the *G*-functions, had not yet been taken by Meijer [30], 1946. Obrechkoff himself made no attempt to identify the kernel-function Φ(*s*) with some known special functions and studied its properties "ad hoc". Thus, the $G\_{0,m}^{m,0}$-functions seem to have first come into use for the hyper-Bessel operators and related integral transforms in the author's works since 1980 (see [9] (Ch.3), [41]).

Next, the *generalized Obrechkoff transform* (a *fractionalized* analog) was introduced and studied by Kiryakova [9] (Ch.5), Al-Mussalam–Kiryakova–Tuan [42] and Yakubovich–Luchko [15], with the *Fox* $H\_{0,m}^{m,0}$-*function as kernel*:

$$\mathcal{B}(s) = \mathcal{B}\_{(\rho\_i),(\mu\_i)}\{f(t); s\} = \int\_0^\infty H\_{0,m}^{m,0} \left[ st \, \middle| \begin{array}{c} -- \\ (\mu\_i - \frac{1}{\rho\_i}, \frac{1}{\rho\_i})\_1^m \end{array} \right] f(t) dt. \tag{12}$$

We call it the *multi-index Borel–Dzrbashjan transform*, because for *m* = 1 it reduces to the *Borel transform*

$$\mathcal{B}\_{(\rho),(\mu)}\left\{f(t);s\right\} = \int\_0^\infty \exp\left(-s^\rho t^\rho\right) t^{\mu\rho - 1} f(t) dt \tag{13}$$

whose kernel appears to be an $H\_{0,1}^{1,0}$-function. This integral transform was shown by Dzrbashjan [43,44] to have an inversion formula involving the Mittag–Leffler function $E\_{1/\rho,\mu}$. The generalized Obrechkoff transform (12) is a tool in operational calculus for *fractional multi-order analogs of the hyper-Bessel differential operators* (9), formally of the kind

$$D\_{(\rho\_i),(\mu\_i)}f(t) = t^{-1} \prod\_{i=1}^{m} \left( t^{1 + (1 - \mu\_i)\rho\_i} D\_{t^{\rho\_i}}^{1/\rho\_i} \, t^{(\mu\_i - 1)\rho\_i} \right) f(t), \tag{14}$$


in the same way as the Laplace transform, the Obrechkoff transform and its particular cases serve for the classical differentiation, respectively for the hyper-Bessel operators (9).

In the studies on these Laplace-type *G*- and *H*-integral transforms, we used essentially the theory of the *G*- and *H*-functions, mainly in the cases of orders (*m*, 0; 0, *m*). Note that, for example, $G\_{0,m}^{m,0}(s)$ is an analytic function in the sector $|\arg s| < (m/2)\,\pi$ (where in this case *δ* = *m*/2 > 0). Some additional necessary results on these *G*- and *H*-kernel functions were derived by Kiryakova [9] (Appendix), such as Lemmas B.1–B.4, Corollaries B.5–B.7, Formula (E.21), etc.

From the known representations of some elementary and special functions in *G*- and *H*-terms, one observes many particular cases of simpler Laplace-type integral transforms. Namely, the Laplace and Borel–Dzrbashjan transforms (7) and (13) are the Obrechkoff transform (10) and the multi-index Borel–Dzrbashjan transform (12), respectively, for *m* = 1, since

$$\exp(-s) = G\_{0,1}^{1,0}\left[s\,\middle|\,\begin{matrix}--\\0\end{matrix}\right],\qquad t^{\mu\rho-1}\exp(-s^{\rho}t^{\rho}) = \frac{1}{\rho}\,s^{1-\mu\rho}\,H\_{0,1}^{1,0}\left[st\,\middle|\,\begin{matrix}--\\\left(\mu-\frac{1}{\rho},\frac{1}{\rho}\right)\end{matrix}\right].$$
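The first of these identities can be checked directly with the `meijerg` routine of the Python library mpmath (a sketch; mpmath's parameter grouping corresponds to the classical Meijer G-function of (3), up to the substitution $s \to -s$ in the integrand):

```python
from mpmath import mp, meijerg, exp

mp.dps = 25

# exp(-s) = G^{1,0}_{0,1}[ s | -- ; 0 ]:
# no upper parameters, a single lower parameter b_1 = 0.
for s in [mp.mpf('0.3'), mp.mpf(1), mp.mpf('4.5')]:
    g = meijerg([[], []], [[0], []], s)
    assert abs(g - exp(-s)) < mp.mpf(10)**(-20)
print("exp(-s) = G^{1,0}_{0,1} verified")
```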

For *m* = 2, we have the *classical Meijer transform* as a case of the Obrechkoff transform, related to the Bessel differential operator $B\_\nu = \frac{d}{dt}\,t^{1-\nu}\,\frac{d}{dt}\,t^{\nu}$:

$$\mathcal{K}\_{\nu}\{f(t);s\} = \int\_0^\infty \sqrt{st}\,K\_{\nu}(st)\,f(t)\,dt,\ \text{ since}\quad K\_{\nu}(s) = \frac{1}{2}\,G\_{0,2}^{2,0}\left[\frac{s^2}{4}\,\middle|\,\begin{matrix}--\\\frac{\nu}{2},\ \frac{-\nu}{2}\end{matrix}\right],\tag{15}$$

i.e., the kernel *Macdonald function* $K\_\nu(s)$ has such a *G*-function representation.
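The *G*-representation of the Macdonald function in (15) can likewise be verified numerically; a sketch with the Python library mpmath (the sample values of ν and s are arbitrary):

```python
from mpmath import mp, meijerg, besselk

mp.dps = 25

nu, s = mp.mpf('0.6'), mp.mpf('1.8')   # arbitrary sample values

# K_nu(s) = (1/2) G^{2,0}_{0,2}[ s^2/4 | -- ; nu/2, -nu/2 ]
g = mp.mpf('0.5') * meijerg([[], []], [[nu/2, -nu/2], []], s**2/4)
assert abs(g - besselk(nu, s)) < mp.mpf(10)**(-20)
print("Macdonald G-representation (15) verified")
```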

In a series of papers [45,46], *Krätzel* introduced a generalization of the Meijer transform (again with *m* = 2), and then a more general one of Obrechkoff-transform type for arbitrary integer *m* > 1,

$$\mathcal{L}\_{\nu}^{(m)}\{f(t);s\} = \int\_{0}^{\infty} \lambda\_{\nu}^{(m)}[m(st)^{1/m}]\,f(t)dt := \int\_{0}^{\infty} \Lambda(s,t)\,f(t)dt.\tag{16}$$

He used the transformation (16) in an operational calculus for the following (hyper-Bessel type) operator of order *m* > 1:

$$B\_{\nu}^{(m)} = \frac{d}{dt} t^{\frac{1}{m} - \nu} \left( t^{1 - \frac{1}{m}} \frac{d}{dt} \right)^{m-1} t^{\nu + 1 - \frac{2}{m}} \quad \text{with} \quad \beta = 1, \gamma\_1 = 0, \gamma\_k = \nu + \frac{k-2}{m}, k = 2, \dots, m. \tag{17}$$

As expected, we can represent the Krätzel kernel in terms of the *G*-function corresponding to (11):

$$\Lambda(s,t) = \int\_0^\infty \cdots \int\_0^\infty \left[\prod\_{k=1}^{m-1} u\_k^{\nu-1+\frac{k-1}{m}}\right] \exp\left(-u\_1 - \ldots - u\_{m-1} - \frac{st}{u\_1 \ldots u\_{m-1}}\right) du\_1 \ldots du\_{m-1}$$

$$= s^{-\nu-1+\frac{2}{m}}\,G\_{0,m}^{m,0}\left[st\,\middle|\,\begin{matrix}--\\0,\ \left(\nu+\frac{k-2}{m}\right)\_2^m\end{matrix}\right].\tag{18}$$

Krätzel started from the simple case *m* = 2 with a kernel of the form $\int\_0^\infty u^{\gamma-1}\exp(-u-st/u)\,du$ (with some variations such as $t \to t^{\rho}$, $\rho > 0$), close to the Macdonald function (15), which is often called the *Krätzel function*. Then many other authors continued to study it and established its relations to hypergeometric functions; we can refer to such works by Kilbas–Saxena–Trujillo [47], Mathai–Haubold [48], etc. In a paper by Glaeske–Kilbas–Saigo [49], a fractionalized analog of the Krätzel transform was introduced, where instead of the integer *m* > 1 in the transformation (16) they took a (fractional) parameter *ρ* > 0. Then, naturally, its kernel is represented as an *H*-function (due to some variations in the definition, it appears as $H\_{1,2}^{2,0}$ instead of $H\_{0,2}^{2,0}$). Relations with operators of fractional calculus are considered there, but one should mention that such an integral transform is an analog of the generalized Obrechkoff transform (12) for a fractional-order differential operator of the form (14). In all these mentioned cases, the kernel functions have the form of (8), as also studied earlier by Erdélyi [40]. We conclude this list of cases of the Obrechkoff transform with emphasis on the works of *Ditkin–Prudnikov* (such as [50]) on operational calculi for (hyper-Bessel) operators of the form

$$B\_1 = \frac{d}{dt}t\frac{d}{dt}, \quad \text{and more generally,} \quad B\_m = \frac{d}{dt}t\frac{d}{dt}t\frac{d}{dt}\cdots\frac{d}{dt} = t^{-1}\left(t\frac{d}{dt}\right)^m, \ m \ge 2. \tag{19}$$

For *m* = 2, the corresponding integral transform is a variant of the Meijer transform (with *ν* = 0), and in the general case *m* > 1, Ditkin and Prudnikov [50] made use of an integral transform of the form

$$\mathcal{B}\{f(t);s\} = 2\int\_0^\infty E\_m(st)\,f(t)\,dt,\ \text{ where we can represent the kernel } E\_m \text{ as a } G\_{0,m}^{m,0}\text{-function.}$$

For more details on the Obrechkoff-type transforms with kernels $G\_{0,m}^{m,0}$ and $H\_{0,m}^{m,0}$, their properties, images and special cases, see Kiryakova [9] (Ch.3, Ch.5), [39].

#### *3.2. Use of G- and H-Functions as Kernels in Generalized Fractional Calculus*

For the basic background on Fractional Calculus (FC) as the theory of operators of integration and differentiation of arbitrary (fractional) order, and on its closely related topics such as special functions (SF) and integral transforms, we refer to the books by Samko–Kilbas–Marichev [51], Podlubny [16], Kilbas–Srivastava–Trujillo [18], and Yakubovich–Luchko [15], as well as the one by the author [9], among many others. For a wider list, see, for example, Machado–Kiryakova [25]. In our works, and mainly for the needs of the SF theory, we consider the Riemann–Liouville (R-L) type integrals and their corresponding derivatives of R-L and Caputo type, respectively, and their generalizations involving *G*- and *H*-functions in the kernels. Note that we concentrate on the left-hand sided variants and skip the details (in most cases similar) for the Weyl-type, right-hand sided operators.

The basic tools in our studies are the fractional integration operators of the form $\widetilde{I}f(z) = z^{\delta\_0}\,I\_{\beta}^{\gamma,\delta}f(z)$, $\delta\_0 \geq 0$, to which we refer as "*classical fractional integrals*", where

$$I\_{\beta}^{\gamma,\delta}f(z) = \frac{1}{\Gamma(\delta)}\int\_0^1 (1-\sigma)^{\delta-1}\sigma^{\gamma}\,f(z\sigma^{\frac{1}{\beta}})\,d\sigma = \frac{z^{-\beta(\gamma+\delta)}}{\Gamma(\delta)}\int\_0^z (z^{\beta}-\xi^{\beta})^{\delta-1}\xi^{\beta\gamma}f(\xi)\,d(\xi^{\beta}),\tag{20}$$

is the *Erdélyi–Kober* (E-K) *operator of integration* of order *δ* ≥ 0, depending on two additional parameters *γ* ∈ R, *β* > 0. In this general form, it was introduced in Sneddon's works [52] and is considered in some books (for example, [9] (Ch.2), [15,18,51]). The earlier versions with *β* = 1, *β* = 2 are due to Kober and Erdélyi. The R-L operator of integration $R^{\delta}$ appears as a case with one parameter only, for *γ* = 0, *β* = 1, $\delta\_0 = \delta \geq 0$,

$$R\_{0+,z}^{\delta}f(z) := R^{\delta}f(z) = z^{\delta}\,I\_{1}^{0,\delta}f(z);\quad\text{conversely,}\quad I\_{1}^{\gamma,\delta}f(z) = z^{-\gamma-\delta}\,R^{\delta}\,z^{\gamma}f(z).\tag{21}$$
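The action of the E-K integral (20) on power functions is an instructive special case: by the Beta integral, $I\_\beta^{\gamma,\delta}\{z^p\} = \frac{\Gamma(\gamma+1+p/\beta)}{\Gamma(\gamma+\delta+1+p/\beta)}\,z^p$. A numerical sketch with the Python library mpmath (all sample parameter values are arbitrary):

```python
from mpmath import mp, quad, gamma

mp.dps = 25

beta, gam, delta, p = mp.mpf(2), mp.mpf('0.5'), mp.mpf('0.75'), mp.mpf('1.3')
z = mp.mpf('1.7')

# Erdelyi-Kober integral (20) applied to f(z) = z^p, by direct quadrature over [0, 1]
ek = quad(lambda sig: (1 - sig)**(delta - 1) * sig**gam * (z * sig**(1/beta))**p,
          [0, 1]) / gamma(delta)

# Expected image: the Beta integral gives
# Gamma(gam + 1 + p/beta) / Gamma(gam + delta + 1 + p/beta) * z^p
expected = gamma(gam + 1 + p/beta) / gamma(gam + delta + 1 + p/beta) * z**p
assert abs(ek - expected) < mp.mpf(10)**(-15)
print("E-K image of a power function verified")
```

mpmath's tanh-sinh quadrature handles the integrable endpoint singularity at σ = 1 without special treatment.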

The *E-K fractional derivative* $D\_{\beta}^{\gamma,\delta}$, corresponding to (20), was defined explicitly almost simultaneously in the works of Kiryakova [9] (Ch.2) and Yakubovich–Luchko [15] (Ch.3). It serves as an interpretation of the formal inversion formula $\left(I\_{\beta}^{\gamma,\delta}\right)^{-1} = I\_{\beta}^{\gamma+\delta,-\delta}$, namely:

$$D\_{\beta}^{\gamma,\delta}f(z) = D\_n\,I\_{\beta}^{\gamma+\delta,\,n-\delta}f(z) = \prod\_{j=1}^{n}\left(\frac{1}{\beta}\,z\frac{d}{dz}+\gamma+j\right)I\_{\beta}^{\gamma+\delta,\,n-\delta}f(z),\quad n-1 < \delta \leq n,\ n\in\mathbb{N}.\tag{22}$$

Here, the simplest integer-order derivative $(d/dz)^{n}$ in the definition of the R-L fractional derivative $D^{\delta}f(z) := \left(\frac{d}{dz}\right)^{n}R^{n-\delta}f(z)$ is replaced by an auxiliary differential operator $D\_n$ of integer order, a polynomial of $(z\,d/dz)$. The *Caputo-type R-L and E-K fractional derivatives* are defined in the same way, but with the order of the nonnegative-order integration and the integer-order differentiation exchanged (see, e.g., [53]).

The notion for *generalized operators of fractional integration* was introduced by Kalla in his 1969–1979 works (see the survey [54]), who suggested their common form

$$\widetilde{I}f(z) = \int\_0^1 \Phi(\sigma)\,\sigma^{\gamma}\,f(z\sigma)\,d\sigma = z^{-\gamma-1}\int\_0^z \Phi\left(\frac{\xi}{z}\right)\xi^{\gamma}f(\xi)\,d\xi,$$

where Φ(*σ*) can be an arbitrary continuous (analytic) function for which the integral makes sense. The idea of such a generalized fractional calculus is *to replace the elementary function in the kernel* of the R-L and E-K operators (20) (and, say, the logarithmic kernel in the Hadamard integral) *by some special function*. Variants with the Gauss, Bessel, Whittaker, and arbitrary *G*- and *H*-functions appeared in papers by several authors (see historical details and references in [54,55]). If such a special function Φ is taken to be too general or too specific, only some formal operational rules for the corresponding fractional calculus can be derived. The lucky hint in our studies was to choose the *kernel-functions* Φ suitably, *to be of the form of* $G\_{m,m}^{m,0}$- *and* $H\_{m,m}^{m,0}$-*functions*. Then the operators of the generalized fractional calculus happen to also be commutative products of classical operators of FC, namely of a finite number of Erdélyi–Kober operators. Thus, the tools of the special functions and the wide use of the classical FC are combined into a *Generalized Fractional Calculus (GFC)* in Kiryakova [9], with a fully developed theory and many illustrated applications in different areas of analysis, differential equations, special functions and integral transforms. Below, we briefly review the basic definitions and a few results of this GFC.

**Definition 4.** (Kiryakova, [9] (Ch.5)) *We define the multiple E-K integral (of multiplicity m* > 1*) by means of the real parameters' sets* $(\delta\_1 \geq 0, \dots, \delta\_m \geq 0)$ *(multi-order of integration),* $(\gamma\_1, \dots, \gamma\_m)$ *(multi-weight), and* $(\beta\_1 > 0, \dots, \beta\_m > 0)$ *(additional multi-parameter), as:*

$$H\_{\left(\beta\_{k}\right),m}^{\left(\gamma\_{k}\right),\left(\delta\_{k}\right)}f(z) := \int\_{0}^{1}H\_{m,m}^{m,0}\left[\sigma\begin{array}{c} (\gamma\_{k}+\delta\_{k}+1-\frac{1}{\tilde{\beta}\_{k}^{\prime}},\frac{1}{\tilde{\beta}\_{k}^{\prime}})\_{1}^{m} \\ (\gamma\_{k}+1-\frac{1}{\tilde{\beta}\_{k}^{\prime}},\frac{1}{\tilde{\beta}\_{k}^{\prime}})\_{1}^{m} \end{array}\right]f(z\sigma)d\sigma,\tag{23}$$

*if* $\sum\_{k=1}^{m}\delta\_k > 0$*; and as the identity operator:* $I\_{(\beta\_k),m}^{(\gamma\_k),(0,\dots,0)}f(z) = f(z)$*, if* $\delta\_1 = \delta\_2 = \dots = \delta\_m = 0$*.*

It is important to mention that, under the corresponding conditions in (2), the above kernel $H\_{m,m}^{m,0}$-function is an analytic function in the unit disk, and $H\_{m,m}^{m,0}(\sigma) \equiv 0$ for $|\sigma| > 1$ (Kiryakova, ref. [9]).

In the case of all equal β's: $\beta\_1 = \beta\_2 = \dots = \beta\_m =: \beta > 0$, the integral (23) has a simpler form with a *Meijer* $G\_{m,m}^{m,0}$-*function* ([9] (Ch.1)), which is also analytic in the unit disk, with $G\_{m,m}^{m,0}(\sigma) \equiv 0$ for $|\sigma| > 1$:

$$I\_{(\beta,\dots,\beta),m}^{(\gamma\_k),(\delta\_k)}f(z) := I\_{\beta,m}^{(\gamma\_k),(\delta\_k)}f(z) = \int\_0^1 G\_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{matrix}(\gamma\_k+\delta\_k)\_1^m\\(\gamma\_k)\_1^m\end{matrix}\right]f(z\sigma^{1/\beta})\,d\sigma = \left[\prod\_{k=1}^{m}I\_{\beta}^{\gamma\_k,\delta\_k}\right]f(z).\tag{24}$$

In both cases of (23) and (24), the operators of the form

$$\widetilde{I}f(z) = z^{\delta\_0}\,I\_{(\beta\_k),m}^{(\gamma\_k),(\delta\_k)}f(z),\quad \widetilde{I}f(z) = z^{\delta\_0}\,I\_{\beta,m}^{(\gamma\_k),(\delta\_k)}f(z),\quad\text{with}\quad \delta\_0 \geq 0,\tag{25}$$

are called *generalized fractional integrals* of *multi-order* (*δ*1, ..., *δm*).

The important decomposition property (for the proof, see, for example, [9] (Th.1.2.10, Th.5.2.1)) says that the same GFC integrals (23) and (24) can be represented, instead of using the kernel *H*- and *G*-functions, by *repeated integral representations for the commutative product of classical E-K operators* (20):

$$I\_{\left(\beta\_k\right),m}^{\left(\gamma\_k\right),\left(\delta\_k\right)}f(z) := \left[\prod\_{k=1}^m I\_{\beta\_k}^{\gamma\_k,\delta\_k}\right]f(z)$$

$$=\int\_0^1 \cdots \int\_0^1 \left[\prod\_{k=1}^m \frac{(1-\sigma\_k)^{\delta\_k-1}\sigma\_k^{\gamma\_k}}{\Gamma(\delta\_k)}\right] f\left(z\sigma\_1^{1/\beta\_1}\ldots\sigma\_m^{1/\beta\_m}\right) d\sigma\_1 \ldots d\sigma\_m.\tag{26}$$
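For *m* = 2, the decomposition (26) can be tested on a power function $f(z) = z^p$, where each E-K factor contributes a ratio of Gamma functions (as noted for (20) above); a sketch with the Python library mpmath (all sample parameter values are arbitrary):

```python
from mpmath import mp, quad, gamma

mp.dps = 25

# arbitrary sample multi-parameters for m = 2
b1, b2 = mp.mpf(2), mp.mpf(3)           # beta_1, beta_2
g1, g2 = mp.mpf('0.4'), mp.mpf('1.1')   # gamma_1, gamma_2
d1, d2 = mp.mpf('0.6'), mp.mpf('0.9')   # delta_1, delta_2
p, z = mp.mpf('0.8'), mp.mpf('1.5')

# Right-hand side of (26): the repeated integral over the unit square
rhs = quad(lambda s1, s2: (1 - s1)**(d1 - 1) * s1**g1
           * (1 - s2)**(d2 - 1) * s2**g2
           * (z * s1**(1/b1) * s2**(1/b2))**p,
           [0, 1], [0, 1]) / (gamma(d1) * gamma(d2))

# Left-hand side: the product of the two E-K images of z^p (each a Beta integral)
lhs = (gamma(g1 + 1 + p/b1) / gamma(g1 + d1 + 1 + p/b1)
       * gamma(g2 + 1 + p/b2) / gamma(g2 + d2 + 1 + p/b2) * z**p)

assert abs(lhs - rhs) < mp.mpf(10)**(-12)
print("decomposition (26) verified for m = 2")
```

The double quadrature converges because the integrand factorizes, so the check is really two one-dimensional Beta integrals in disguise.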

In the book [9] and subsequent papers, we provided a full set of operational properties of the operators (23) and (24) that justify their names as operators of GFC, such as the *semigroup property*, the *formal inversion formula*, and reduction to the identity or to conventional integration operators for special choices of the parameters.

Analogously to the R-L and E-K fractional derivatives, we define the corresponding *generalized fractional derivatives*. The auxiliary differential operator $D_\eta$ is chosen on the basis of the *specific differential relations for the kernel function*, derived for the *G*-functions, and especially for $G_{m,m}^{m,0}$, by Kiryakova [9] (App., Lemmas B.3, B.4, Cor. B.6), and for $H_{m,m}^{m,0}$ by Kiryakova [9] (Ch.5, Lemma 5.1.7) and Kiryakova–Luchko [53] (Lemma 18).

$$\begin{aligned} \textbf{Definition 5. (Kiryakova [9]) } & \textit{Let } D_{\eta} \textit{ be the following polynomial of } z\frac{d}{dz} \textit{ of degree } \eta_1+\ldots+\eta_m{:} \\ D_{\eta} &= \left[\prod_{r=1}^{m}\prod_{j=1}^{\eta_r}\left(\frac{1}{\beta_r}\, z\frac{d}{dz} + \gamma_r + j\right)\right], \text{ with }\quad \eta_k := \begin{cases}[\delta_k]+1, & \text{for noninteger } \delta_k,\\ \delta_k, & \text{for integer } \delta_k. \end{cases}\end{aligned} \tag{27}$$

*The multiple (m-tuple) Erdélyi–Kober fractional derivative of R-L type of multi-order* $(\delta_1 \ge 0,\ldots,\delta_m \ge 0)$ *is defined by means of the differ-integral operator:*

$$D_{(\beta_k),m}^{(\gamma_k),(\delta_k)}f(z) := D_{\eta}\, I_{(\beta_k),m}^{(\gamma_k+\delta_k),(\eta_k-\delta_k)}f(z) = D_{\eta}\int_0^1 H_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{array}{c}(\gamma_k+\eta_k+1-\frac{1}{\beta_k},\frac{1}{\beta_k})_1^m\\ (\gamma_k+\delta_k+1-\frac{1}{\beta_k},\frac{1}{\beta_k})_1^m\end{array}\right] f(z\sigma)\, d\sigma. \tag{28}$$

*Similarly, the Caputo-type generalized fractional derivative was introduced by Kiryakova and Luchko [53], as*

$${}^{*}D_{(\beta_k),m}^{(\gamma_k),(\delta_k)}f(z) = I_{(\beta_k),m}^{(\gamma_k+\delta_k),(\eta_k-\delta_k)}\, D_{\eta}\, f(z). \tag{29}$$

In the case $\beta_1 = \ldots = \beta_m =: \beta > 0$, simpler representations involving the Meijer *G*-function hold for the R-L and Caputo-type "derivatives" that correspond to the generalized fractional integral (24):

$$D_{\beta,m}^{(\gamma_k),(\delta_k)}f(z) = D_\eta\, I_{\beta,m}^{(\gamma_k+\delta_k),(\eta_k-\delta_k)}f(z) = \left[\prod_{r=1}^m \prod_{j=1}^{\eta_r}\left(\frac{1}{\beta}z\frac{d}{dz}+\gamma_r+j\right)\right] I_{\beta,m}^{(\gamma_k+\delta_k),(\eta_k-\delta_k)}f(z),$$

$${}^\*D\_{\beta,m}^{(\gamma\_k),(\delta\_k)}f(z) = I\_{\beta,m}^{(\gamma\_k+\delta\_k),(\eta\_k-\delta\_k)}D\_\eta f(z). \tag{30}$$

More generally, the differ-integral/integro-differential operators of the form

$$\widetilde{D}f(z) = D_{(\beta_k),m}^{(\gamma_k),(\delta_k)}\, z^{-\delta_0} f(z) = z^{-\delta_0}\, D_{(\beta_k),m}^{(\gamma_k-\frac{\delta_0}{\beta_k}),(\delta_k)} f(z), \quad \text{and}$$

$${}^{*}\widetilde{D}f(z) = {}^{*}D_{(\beta_k),m}^{(\gamma_k),(\delta_k)}\, z^{-\delta_0} f(z) \quad \text{with} \quad \delta_0 \ge 0, \tag{31}$$

are all called *generalized (multiple, multi-order) fractional derivatives* (of R-L or Caputo type).

Next, in Section 8, we often also use the notion of (generalized) fractional *differintegrals*. By this, we mean either (generalized) fractional integrals or derivatives, or compositions of some E-K fractional integrals with some E-K fractional derivatives. Such operators arise as interpretations of (26) when some of the orders $\delta_k$ are non-negative and the others are negative.

For the functional spaces (here, we mainly restrict ourselves to weighted spaces of analytic functions of complex $z$), mapping properties, a long list of operational properties, images, etc., we refer, for example, to the work of Kiryakova [9,53,56].

We also use a further extension of the generalized fractional integrals (23), based on the so-called *Wright–Erdélyi–Kober (W-E-K) operator of fractional integration* (see [57]), with parameters as in the E-K integral: $\delta \ge 0$, $\gamma$ real, $\beta > 0$, and an additional parameter $\lambda > 0$, where a Wright–Bessel (Bessel–Maitland) function of the form $J_\nu^{\mu}$ (see (57) in Section 5) is used in the kernel:

$$W_{\beta,\lambda}^{\gamma,\delta} f(z) := I_{\beta,\lambda,1}^{\gamma,\delta} f(z) = \lambda \int_0^1 \sigma^{\lambda(\gamma+1)-1}\, J_{\gamma+\delta-\lambda(\gamma+1)/\beta}^{-\lambda/\beta}\left(\sigma^{\lambda}\right) f(z\sigma)\, d\sigma. \tag{32}$$

One can show that, for $\lambda = \beta$, the above kernel-function reduces to the kernel of the E-K operator, and therefore the W-E-K integration becomes the E-K one. Using *compositions of the W-E-K operators* (32), Kalla and Galue [57] made a next step towards a generalized fractional calculus with $H_{m,m}^{m,0}$ kernel-functions that have the same structure but different parameters $\beta_k$ and $\lambda_k$ in the upper and lower rows. Some revisions and properties of these operators were further provided by Kiryakova [58–60].

**Definition 6.** *For integer $m \ge 1$ and real parameters $\delta_k \ge 0$, $\gamma_k$, $\beta_k > 0$, $\lambda_k > 0$, $\beta_k \ge \lambda_k$, $k = 1,\ldots,m$, we define the multiple Wright–Erdélyi–Kober (W-E-K) fractional integrals as follows:*

$$\overline{I}f(z) = I_{(\beta_k),(\lambda_k),m}^{(\gamma_k),(\delta_k)} f(z) := \int_0^1 H_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{array}{c}(\gamma_i+\delta_i+1-\frac{1}{\beta_i},\frac{1}{\beta_i})_1^m\\ (\gamma_i+1-\frac{1}{\lambda_i},\frac{1}{\lambda_i})_1^m\end{array}\right] f(z\sigma)\, d\sigma = \left[\prod_{k=1}^m W_{\beta_k,\lambda_k}^{\gamma_k,\delta_k}\right] f(z), \tag{33}$$

*if $\sum_{i=1}^{m}\delta_i > 0$; and as the identity operator, $\overline{I}f(z) = f(z)$, when $\delta_1 = \delta_2 = \ldots = \delta_m = 0$ and $\lambda_k = \beta_k$, $k = 1,\ldots,m$. For $\gamma_k > -1$, $k = 1,\ldots,m$, and the above-mentioned conditions on the other parameters, the operators (33) are shown to preserve the space of analytic functions in disks or in starlike complex domains.*

If $\beta_k = \lambda_k$, $k = 1,\ldots,m$, the "new" operators of GFC (33) coincide with the operators (23). The corresponding generalized fractional derivatives $D_{(\beta_k),(\lambda_k),m}^{(\gamma_k),(\delta_k)}$ are defined by means of differ-integral operators similar to those for (28).

Here, we mention a few of the numerous *special cases of the above-defined GFC operators*, to emphasize the particular elementary and special functions appearing in their kernels, and thus as *cases of the kernel $H_{m,m}^{m,0}$- and $G_{m,m}^{m,0}$-functions*.

For *m* = 1, we have the kernel-functions:

$$H_{1,1}^{1,0}\left[\sigma\,\middle|\,\begin{array}{c}(\gamma+\delta+1-\frac{1}{\beta},\frac{1}{\beta})\\ (\gamma+1-\frac{1}{\beta},\frac{1}{\beta})\end{array}\right] = \beta\,\sigma^{\beta-1}\, G_{1,1}^{1,0}\left[\sigma^{\beta}\,\middle|\,\begin{array}{c}\gamma+\delta\\ \gamma\end{array}\right] = \beta\,\frac{\sigma^{\beta\gamma+\beta-1}\left(1-\sigma^{\beta}\right)^{\delta-1}}{\Gamma(\delta)}, \tag{34}$$

thus, the generalized fractional integrals and derivatives (23) and (28) reduce to the corresponding E-K operators (20) and (22) and to the R-L operators (21): $I_{\beta,1}^{\gamma,\delta} = I_{\beta}^{\gamma,\delta}$, $D_{\beta,1}^{\gamma,\delta} = D_{\beta}^{\gamma,\delta}$, $R^{\delta}$ and $D^{\delta}$. Many other integration and differentiation operators, introduced and used by different authors, appear as their special cases.

In the case $m = 2$, the kernel functions $H_{2,2}^{2,0}$ and $G_{2,2}^{2,0}$ reduce to a *Gauss hypergeometric function* or its variations, for example:

$$H_{2,2}^{2,0}\left[\sigma\,\middle|\,\begin{array}{c}\left(\gamma_1+\delta_1+1-\frac{1}{\beta},\frac{1}{\beta}\right),\left(\gamma_2+\delta_2+1-\frac{1}{\beta},\frac{1}{\beta}\right)\\ \left(\gamma_1+1-\frac{1}{\beta},\frac{1}{\beta}\right),\left(\gamma_2+1-\frac{1}{\beta},\frac{1}{\beta}\right)\end{array}\right] = \beta\,\sigma^{\beta-1}\, G_{2,2}^{2,0}\left[\sigma^{\beta}\,\middle|\,\begin{array}{c}\gamma_1+\delta_1,\ \gamma_2+\delta_2\\ \gamma_1,\ \gamma_2\end{array}\right]$$

$$= \beta\,\sigma^{\beta-1}\,\frac{\sigma^{\beta\gamma_2}\left(1-\sigma^{\beta}\right)^{\delta_1+\delta_2-1}}{\Gamma(\delta_1+\delta_2)}\,{}_2F_1\!\left(\gamma_2+\delta_2-\gamma_1,\ \delta_1;\ \delta_1+\delta_2;\ 1-\sigma^{\beta}\right). \tag{35}$$

Therefore, the generalized fractional integrals in this case are known as *hypergeometric fractional integrals*; some of them were introduced and studied by, e.g., Love, Saxena, Saigo, and Hohlov (see [54]).

For $m = 3$, we have as a special case the *Marichev–Saigo–Maeda (M-S-M) operators* of FC, the integration operators introduced and studied by Marichev (1974) and Saigo et al. (1996, 1998) (see [55]). This is because their kernel-function, the *Appell $F_3$ function* (a *Horn function*)

$$F_3(a,a',b,b',c;z,\xi) = \sum_{m,n=0}^{\infty}\frac{(a)_m (a')_n (b)_m (b')_n}{(c)_{m+n}}\,\frac{z^m \xi^n}{m!\, n!}, \quad |z|<1,\ |\xi|<1 \ \text{(see, e.g., [3,14])},$$

is a case of the GFC kernel-functions $H_{3,3}^{3,0}$ and $G_{3,3}^{3,0}$ (see, for example, [14], §8.4.51, (2)):

$$\begin{aligned} \frac{(1-\sigma)^{c-1}}{\Gamma(c)}\ & F_3\left(a,a',b,b',c;\,1-\frac{1}{\sigma},\,1-\sigma\right) \\ &= G_{3,3}^{3,0}\left[\sigma\,\middle|\,\begin{array}{c}a+b,\ c-a',\ c-b'\\ a,\ b,\ c-a'-b'\end{array}\right] = H_{3,3}^{3,0}\left[\sigma\,\middle|\,\begin{array}{c}(a+b,1),(c-a',1),(c-b',1)\\ (a,1),(b,1),(c-a'-b',1)\end{array}\right],\ \operatorname{Re} c > 0. \tag{36}\end{aligned}$$

Let $m \ge 1$ be an arbitrary integer, but let all $\delta$s be equal integers, say $\delta_1 = \ldots = \delta_m = 1$. Then, from (24), we obtain the *hyper-Bessel integral operators* $L$ (we denote their kernel below by $\mathcal{G}_1$) that correspond to the hyper-Bessel differential operators (9) of arbitrary (higher) integer order $m > 1$. In practice, these are operators of integer multi-order $(1,1,\ldots,1)$, but their fractional powers $L^{\lambda}$, $\lambda > 0$, have been represented (Kiryakova [9,41]) as GFC integrals of multi-order $(\lambda,\lambda,\ldots,\lambda)$ with kernels $\mathcal{G}_{\lambda}$, where

$$\mathcal{G}_1(\sigma) = G_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{array}{c}(\gamma_k+1)_1^m\\ (\gamma_k)_1^m\end{array}\right],\quad \mathcal{G}_{\lambda}(\sigma) = G_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{array}{c}(\gamma_k+\lambda)_1^m\\ (\gamma_k)_1^m\end{array}\right].$$

The kernel of $L^{\lambda}$ in the form $\mathcal{G}_{\lambda}$ appeared also in the work of McBride [61]. These expressions gave us the hint how to introduce our GFC, replacing $(\lambda,\lambda,\ldots,\lambda)$ by an arbitrary fractional multi-order $(\delta_1,\delta_2,\ldots,\delta_m)$; explanations are in [41]. We can also mention the Gelfond–Leontiev [62] operators (47), generated by the *multi-index M-L functions* (see the next section and the works by Kiryakova [63,64]), as a more general example of operators of fractional multi-order where the Fox $H_{m,m}^{m,0}$-functions serve as kernels.

The *H*-functions of the form $H_{p,q}^{q,0}$, of which the kernel functions of (23) are cases with $p = q = m$, were studied in a series of papers by Karp. In [28], he revisited the Braaksma results [29] on the *H*-function's behavior in the neighborhood of its singular points and on its analytical continuation. There, he also commented on works on applications of *H*-functions not only in fractional calculus but also widely in statistics, including the book by Mathai–Saxena–Haubold [20].

In relation to the *use of the $G_{m,m}^{m,0}$-functions* (the kernel-functions of the GFC integrals (24)) in applications to statistics, it is interesting to note that, in 1958, *Kabe* [65] explored them *in statistics, as density functions of a random variable*. He also distinguished the cases $m = 1$ and $m = 2$ (mentioned above), related to the kernel-functions of the E-K and of the hypergeometric fractional integrals, (34) and (35), respectively. Studies on the closely related $G_{m+1,m+1}^{m,1}$-functions, as R-L integrals of $G_{m,m}^{m,0}$, can be found in the work by Karp [66].

#### **4. Mittag–Leffler Functions and Their Extensions**

The *Mittag–Leffler (M-L) function* $E_{\alpha}(z)$ was introduced by G. Mittag-Leffler ([67], 1902–1905), extended to two parameters as $E_{\alpha,\beta}(z)$ by A. Wiman [68], and studied later by P. Humbert and R.P. Agarwal [69]. It was presented in the Bateman Project [3], Vol. 3 (1954), in a chapter on "Miscellaneous Functions". However, for a long time, it was ignored in the other handbooks on special functions because applied scientists suffered from the lack of tables of its Laplace transforms. Although it arose from Mittag-Leffler's studies on a problem not related to fractional calculus, namely the analytical continuation of power series to a maximal starlike domain (the Mittag-Leffler star), nowadays the M-L function is the most popular and most exploited SF of FC. It was titled the *"Queen" function of FC* by Gorenflo and Mainardi in 1997 (see also the very recent survey by Mainardi [34]). The basic theory and more details can be found, for example, in [22,43] (see also, e.g., [9,16,70,71]).

**Definition 7.** *The Mittag–Leffler (M-L) functions $E_{\alpha}$ and $E_{\alpha,\beta}$ are entire functions of order $\rho = 1/\alpha$ and type 1, defined by the power series*

$$E_{\alpha}(z) = \sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha k+1)},\quad E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha k+\beta)},\quad \alpha>0,\ \beta>0. \tag{37}$$

As *"fractional index"* ($\alpha > 0$) *analogs* of the exponential and trigonometric functions that satisfy ODEs of first and second order ($\alpha = 1, 2$), the M-L functions serve as solutions of *fractional order differential and integral equations*. An example is the Rabotnov function, also called the "fractional exponent", $y(z) = z^{\alpha-1}E_{\alpha,\alpha}(z^{\alpha})$, which solves the simplest fractional order differential equation $D^{\alpha}y(z) = y(z)$. Let us refer also to the pioneering work by Hille–Tamarkin [72], where the solution of the Abel integral equation of the second kind was provided in terms of an M-L function. As for the Laplace transform images, one can find these for the M-L type functions and their $k$th derivatives in the work of Podlubny ([16] (S.1.2.2)):

$$\mathcal{L}\left\{z^{\alpha k+\beta-1}\, E_{\alpha,\beta}^{(k)}(\pm\lambda z^{\alpha});\, s\right\} = \frac{k!\, s^{\alpha-\beta}}{\left(s^{\alpha}\mp\lambda\right)^{k+1}},\quad \operatorname{Re} s > |\lambda|^{1/\alpha}.$$

A Mittag–Leffler type function with three indices, known as the *Prabhakar function* [73], is also often studied and used (for details, see [22,70,71,74,75] and other contemporary books and surveys on M-L type functions):

$$E_{\alpha,\beta}^{\gamma}(z) = \sum_{k=0}^{\infty}\frac{(\gamma)_k}{\Gamma(\alpha k+\beta)}\,\frac{z^k}{k!},\quad \alpha,\beta,\gamma\in\mathbb{C},\ \operatorname{Re}\alpha>0; \tag{38}$$

where $(\gamma)_0 = 1$, $(\gamma)_k = \Gamma(\gamma+k)/\Gamma(\gamma)$ denotes the Pochhammer symbol. Its Laplace transform has the form

$$
\mathcal{L}\left\{z^{\beta-1}\, E_{\alpha,\beta}^{\gamma}(\lambda z^{\alpha});\, s\right\} = \frac{s^{-\beta}}{(1-\lambda s^{-\alpha})^{\gamma}}.
$$

For $\gamma = 1$, we get the M-L function $E_{\alpha,\beta}$, and, if additionally $\beta = 1$, then it is $E_{\alpha}$.
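As a quick numerical illustration (our own sketch, with illustrative parameter values, not code from the cited references), the Prabhakar series (38) can be summed with the Pochhammer symbol built up recursively, and the reduction to $E_{\alpha,\beta}$ at $\gamma = 1$ checked directly:

```python
from math import gamma

def prabhakar(z, alpha, beta, gam, kmax=60):
    """Truncated series for the Prabhakar function E^gamma_{alpha,beta}(z), eq. (38)."""
    total, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(kmax):
        total += poch * zk / (gamma(alpha*k + beta) * fact)
        poch *= gam + k      # (gam)_{k+1} = (gam)_k * (gam + k)
        fact *= k + 1
        zk *= z
    return total

def ml2(z, alpha, beta, kmax=60):
    """Two-parameter M-L function (37), for comparison."""
    return sum(z**k / gamma(alpha*k + beta) for k in range(kmax))

# gamma = 1 recovers E_{alpha,beta}
print(abs(prabhakar(0.8, 0.9, 1.2, 1.0) - ml2(0.8, 0.9, 1.2)))
```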

These M-L type functions are simple cases of the Wright g.h.f. and of the *H*-function, namely:

$$E_{\alpha,\beta}(z) = {}_1\Psi_1\left[\begin{array}{c}(1,1)\\ (\beta,\alpha)\end{array}\middle|\, z\right] = H_{1,2}^{1,1}\left[-z\,\middle|\,\begin{array}{c}(0,1)\\ (0,1),(1-\beta,\alpha)\end{array}\right],$$

$$E_{\alpha,\beta}^{\gamma}(z) = \frac{1}{\Gamma(\gamma)}\,{}_1\Psi_1\left[\begin{array}{c}(\gamma,1)\\ (\beta,\alpha)\end{array}\middle|\, z\right] = H_{1,2}^{1,1}\left[-z\,\middle|\,\begin{array}{c}(1-\gamma,1)\\ (0,1),(1-\beta,\alpha)\end{array}\right].$$

Another generalization of the M-L function (37), with additional parameters, for example, $l\in\mathbb{C}$, $\mu\in\mathbb{R}$, was considered by Gorenflo–Kilbas–Rogosin [76], together with its relations to FC operators:

$$E\_{a,\mu,l}(z) = \sum\_{k=0}^{\infty} c\_k z^k, \quad \text{with} \quad c\_k = \prod\_{j=0}^{k-1} \frac{\Gamma[a(j\mu + l) + 1]}{\Gamma(a(j\mu + l + 1) + 1)}.$$

A *vector index extension* of (37) appeared in the works by Luchko et al. (e.g., [15,77,78]) on operational calculus methods for some fractional order PDEs and multi-term FO differential equations. Under the name *multi-index (multiple) M-L function*, it was introduced by Kiryakova [63,79] using a different approach, as the generating function of the Gelfond–Leontiev generalized integration and differentiation operators (47) (see Definition 9), inspired by the paper of Dzrbashjan [44] on an M-L type function with $2\times 2$ indices. Further, this class of functions was studied in detail by Kiryakova [59,80], Kilbas–Koroleva–Rogosin [81], Paneva–Konovska [74], and many other followers. Luchko et al. also considered multivariate analogs of the so-called vector index M-L functions [78].

**Definition 8.** (Kiryakova [59,80]) *Let m* > 1 *be an integer,* (*α*<sup>1</sup> > 0, *α*<sup>2</sup> > 0, ... , *α<sup>m</sup>* > 0) *and* (*β*1, *β*2, ... , *βm*) *be arbitrary real parameters. By means of these sets of "multi-indices", the multi-index Mittag–Leffler function (abbrev. as multi-M-L f.) is defined as:*

$$E_{(\alpha_i),(\beta_i)}(z) := E_{(\alpha_i),(\beta_i)}^{(m)}(z) = \sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha_1 k+\beta_1)\cdots\Gamma(\alpha_m k+\beta_m)}. \tag{39}$$

*Under weakened restrictions on the $\alpha$s (or their real parts), which are not obligatorily all non-negative, the study was extended by Kilbas et al. [81].*
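A direct way to experiment with (39) is truncated summation of the series; the sketch below (our own naming, not from the cited literature) reduces to the classical exponential for $m = 1$, $\alpha_1 = \beta_1 = 1$:

```python
from math import gamma, prod, exp

def multi_ml(z, alphas, betas, kmax=60):
    """Truncated series for the multi-index Mittag-Leffler function
    E_{(alpha_i),(beta_i)}(z), eq. (39)."""
    return sum(z**k / prod(gamma(a*k + b) for a, b in zip(alphas, betas))
               for k in range(kmax))

# m = 1 with alpha = beta = 1 recovers the exponential function
print(multi_ml(0.5, [1.0], [1.0]))   # close to exp(0.5)
```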

As a further extension of both the Prabhakar function (38) and the (2*m*)-parameter multi-index M-L functions (39), Paneva–Konovska [74,82] introduced and studied the so-called (3*m*)-*parametric (multi-index) Mittag–Leffler functions*, similar to (39) but with an additional set of parameters $(\gamma_1,\ldots,\gamma_m)$:

$$E_{(\alpha_i),(\beta_i)}^{(\gamma_i),m}(z) = \sum_{k=0}^{\infty}\frac{(\gamma_1)_k\cdots(\gamma_m)_k}{\Gamma(\alpha_1 k+\beta_1)\cdots\Gamma(\alpha_m k+\beta_m)}\,\frac{z^k}{(k!)^m}. \tag{40}$$

For $m = 1$, one has the Prabhakar function, and, for $\gamma_1 = \ldots = \gamma_m = 1$, these are (39). The Mellin transforms of (39), (40), and their particular cases can be found in [83].

The so-called *Le Roy type function* has been the object of several recent studies, e.g., by Gerhold [84], Garra–Polito [85], Garrappa–Rogosin–Mainardi [86], and Garrappa–Orsingher–Polito [87], as a new special function

$$F_{\alpha,\beta}^{(\gamma)}(z) = \sum_{k=0}^{\infty}\frac{z^k}{[\Gamma(\alpha k+\beta)]^{\gamma}}, \tag{41}$$

which is an entire function of $z\in\mathbb{C}$ for parameters $\operatorname{Re}(\alpha) > 0$, $\beta\in\mathbb{R}$, and $\gamma > 0$. It resembles the M-L function (for $\gamma = 1$) and the multi-index M-L function (39) (for integer $\gamma = m$, all $\alpha_i = \alpha$, $\beta_i = \beta$, $i = 1,\ldots,m$). The function (41) appeared as an extension of the function $R_{\gamma}(z) = \sum_{k=0}^{\infty} z^k/[(k+1)!]^{\gamma}$, introduced by É. Le Roy [88] (1899) with purposes similar to those of G. Mittag–Leffler [67] (1903), namely to study analytical continuations of sums of power series; it seems they were working in competition on such ideas. Similar to the M-L type functions, (41) is involved in solutions of various problems, including the Conway–Maxwell–Poisson distribution for different degrees of over- and under-dispersion.
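For integer $\gamma = m$, the Le Roy type series (41) can be cross-checked numerically against (39) with equal indices; a small sketch (our own helpers, with illustrative parameter values):

```python
from math import gamma

def le_roy(z, alpha, beta, gam, kmax=60):
    """Truncated series for the Le Roy type function F^(gamma)_{alpha,beta}(z), eq. (41)."""
    return sum(z**k / gamma(alpha*k + beta)**gam for k in range(kmax))

def multi_ml(z, alphas, betas, kmax=60):
    """Multi-index M-L function (39), for the comparison at integer gamma = m."""
    total = 0.0
    for k in range(kmax):
        g = 1.0
        for a, b in zip(alphas, betas):
            g *= gamma(a*k + b)
        total += z**k / g
    return total

# gamma = m = 2 with alpha_i = 0.8, beta_i = 1.1: the two series agree
print(abs(le_roy(0.6, 0.8, 1.1, 2) - multi_ml(0.6, [0.8, 0.8], [1.1, 1.1])))
```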

#### *Some Basic Properties of the Multi-Index Mittag-Leffler Functions*

The basic properties of and results for the functions (39), together with long lists of their examples, all having wide applications in solutions of integer- and fractional-order models, are provided in our previous papers (e.g., [59,60,79,80]). Some of them are recalled here.

**Theorem 1.** *The multi-index M-L functions* (39) *are entire functions with the following order ρ and type σ:*

$$\frac{1}{\rho} = \alpha_1+\cdots+\alpha_m,\quad \frac{1}{\sigma} = (\rho\alpha_1)^{\rho\alpha_1}\times\cdots\times(\rho\alpha_m)^{\rho\alpha_m}, \tag{42}$$

*respectively, with the $\alpha_i$s replaced by $\operatorname{Re}(\alpha_i)$s in the complex case. Note that the type is $\sigma > 1$ for $m > 1$, and only for $m = 1$ (the classical case (37)) is $\sigma = 1$. The following asymptotic estimate holds:*

$$|E_{(\alpha_i),(\beta_i)}(z)| \le C|z|^{\rho\left(\frac{1}{2}+\mu-\frac{m}{2}\right)}\exp\left(\sigma|z|^{\rho}\right),\quad \mu := \beta_1+\cdots+\beta_m,\ \text{ for } |z|\to\infty.$$

The (3*m*)-parameter M-L type functions (40) are also entire functions, with the same order and type as in (42); see [74,82].
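The order and type in (42) are straightforward to compute; a small sketch (our own helper name) confirming the remark that $\sigma = 1$ only in the classical case $m = 1$:

```python
def order_and_type(alphas):
    """Order rho and type sigma of the entire function (39), per eq. (42)."""
    rho = 1.0 / sum(alphas)
    inv_sigma = 1.0
    for a in alphas:
        inv_sigma *= (rho * a) ** (rho * a)   # 1/sigma = prod (rho a_i)^(rho a_i)
    return rho, 1.0 / inv_sigma

print(order_and_type([2.0]))        # m = 1: rho = 1/2, sigma = 1
print(order_and_type([1.0, 1.0]))   # m = 2: rho = 1/2, sigma = 2 > 1
```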

**Lemma 1.** *The multi-index M-L functions (39) are important examples of the Wright generalized hypergeometric functions <sup>p</sup>*Ψ*<sup>q</sup> and of the Fox H-functions:*

$$E_{(\alpha_i),(\beta_i)}(z) = E_{(\alpha_i),(\beta_i)}^{(m)}(z) = {}_1\Psi_m\left[\begin{array}{c}(1,1)\\ (\beta_i,\alpha_i)_1^m\end{array}\middle|\, z\right] = H_{1,m+1}^{1,1}\left[-z\,\middle|\,\begin{array}{c}(0,1)\\ (0,1),(1-\beta_i,\alpha_i)_1^m\end{array}\right]. \tag{43}$$

*Thus, the following Mellin–Barnes type integral representation holds (cf. with* (1)*):*

$$E_{(\alpha_i),(\beta_i)}(z) = \frac{1}{2\pi i}\int_{\mathcal{L}}\frac{\Gamma(s)\Gamma(1-s)}{\prod_{i=1}^{m}\Gamma(\beta_i-s\alpha_i)}\,(-z)^{-s}\, ds,\quad z\neq 0,$$

*based on the Mellin transform (see [59,83]; also [18] (p. 48)):*

$$\mathcal{M}\left\{E_{(\alpha_i),(\beta_i)}(-z);\, s\right\} = \frac{\Gamma(s)\Gamma(1-s)}{\prod_{i=1}^{m}\Gamma(\beta_i-s\alpha_i)},\quad 0 < \operatorname{Re}(s) < 1. \tag{44}$$

Additionally, as shown by Paneva–Konovska [74,82], the (3*m*)-parametric functions (40) can be represented as

$$E_{(\alpha_i),(\beta_i)}^{(\gamma_i),m}(z) = A\ {}_m\Psi_{2m-1}\left[\begin{array}{c}(\gamma_1,1),\ldots,(\gamma_m,1)\\ (\beta_1,\alpha_1),\ldots,(\beta_m,\alpha_m),(1,1),\ldots,(1,1)\end{array}\middle|\, z\right]$$

$$= A\, H_{m,2m}^{1,m}\left[-z\,\middle|\,\begin{array}{c}(1-\gamma_1,1),\ldots,(1-\gamma_m,1)\\ \left[(0,1),(1-\beta_i,\alpha_i)\right]_1^m\end{array}\right],\ \text{with}\ A = \left[\prod_{i=1}^{m}\Gamma(\gamma_i)\right]^{-1}, \tag{45}$$

which is in agreement with (43) for *γ*<sup>1</sup> = ... = *γ<sup>m</sup>* = 1.

As an analog of the Laplace transform relationship between the classical M-L function (37) and the classical Wright function, $\mathcal{L}\{\phi(\alpha,\beta;z);s\} = \frac{1}{s}E_{\alpha,\beta}\left(\frac{1}{s}\right)$ (see the books [16,18]), we derive the following *new relation*.

**Lemma 2.**

$$\mathcal{L}\left\{{}_0\Psi_m\left[\begin{array}{c}-\\ (\beta_1,\alpha_1),\ldots,(\beta_m,\alpha_m)\end{array}\middle|\, z\right];\, s\right\} = \frac{1}{s}\, E_{(\alpha_i),(\beta_i)}\!\left(\frac{1}{s}\right),\quad \operatorname{Re}(s) > 0. \tag{46}$$

Note that we can consider the ${}_0\Psi_m$-functions on the left-hand side as *"fractional indices" analogs of the ${}_0F_m$-functions*, that is, of the hyper-Bessel functions $J_{\nu_1,\ldots,\nu_m}^{(m)}$ of Delerue [89], which are related to the hyper-Bessel operators (9) as their eigenfunctions and are discussed further as special cases of (39). For details on these special functions, see Kiryakova [9] (Ch.3).

Various relations for the multi-M-L functions in terms of the operators of classical FC and GFC have been derived in our previous works (e.g., [59,80]). First, let us consider the so-called *Gelfond–Leontiev (G-L) operators of generalized integration and differentiation*, generated by the coefficients of an entire function $\varphi(\sigma)$. For the theory of the G-L operators in general, see Gelfond and Leontiev's paper [62] of 1951; for details in the case when the mentioned entire function is taken to be the M-L function or the multi-index Mittag–Leffler function, we refer to Kiryakova [9] (Ch.1), [59,63,79]. Here, we only recall the definition of the G-L operators related to $\varphi(\sigma) = E_{(\alpha_i),(\beta_i)}(\sigma) = \sum_{k=0}^{\infty} b_k\sigma^k$, whose coefficients $b_k = 1/\left(\Gamma(\alpha_1 k+\beta_1)\cdots\Gamma(\alpha_m k+\beta_m)\right)$ are taken as the multipliers' sequence below.

**Definition 9.** (Kiryakova [63,64]) *For functions $f(z) = \sum_{k=0}^{\infty} a_k z^k$ analytic in a disk $\{|z| < R\}$, we consider the operators*

$$\widetilde{D}f(z) := D_{(\alpha_i),(\beta_i)}f(z) = \sum_{k=1}^{\infty} a_k\,\frac{b_{k-1}}{b_k}\, z^{k-1},\quad \widetilde{L}f(z) := L_{(\alpha_i),(\beta_i)}f(z) = \sum_{k=0}^{\infty} a_k\,\frac{b_{k+1}}{b_k}\, z^{k+1}, \tag{47}$$

*and call them multiple (multi-index) Dzrbashjan–Gelfond–Leontiev (D-G-L) differentiations and integrations, respectively. These are generated by the multi-index M-L functions, and the name of Dzrbashjan is used in addition to Gelfond–Leontiev to honor his contribution to one of the first deep studies on M-L type functions, the book [43].*

Evidently, $D_{(\alpha_i),(\beta_i)}L_{(\alpha_i),(\beta_i)}f(z) = f(z)$, and it is proven that the radii of convergence (and analyticity) of the resulting analytic functions in (47) are the same $R$ as for $f(z)$. According to Theorem 3 in [79], the operators (47) can be analytically extended outside the disks to starlike domains and represented as operators of the GFC, as follows:

$$\widetilde{D}f(z) = z^{-1}\, D_{(1/\alpha_i),m}^{(\gamma_i-1-\alpha_i),(\alpha_i)} f(z) - \left[\prod_{i=1}^{m}\frac{\Gamma(\gamma_i)}{\Gamma(\gamma_i-\alpha_i)}\right]\frac{f(0)}{z},\quad \widetilde{L}f(z) = z\, I_{(1/\alpha_i),m}^{(\gamma_i-1),(\alpha_i)}f(z). \tag{48}$$

To start with the classical FC operators for the multi-index M-L functions, we state the following

**Lemma 3.** (Kiryakova [80] (Lemma 3.2)) *For any fixed l*, 1 ≤ *l* ≤ *m and integration order δ<sup>l</sup>* > 0*, we have for the E-K fractional integral the relation*

$$I_{1/\alpha_l}^{\gamma_l-1,\delta_l}\, E_{(\alpha_i),(\gamma_1,\ldots,\gamma_l,\ldots,\gamma_m)}(\lambda z) = E_{(\alpha_i),(\gamma_1,\ldots,\gamma_l+\delta_l,\ldots,\gamma_m)}(\lambda z),\quad \lambda\neq 0,$$

*that is, a fractional integration transforms a multi-M-L function into another one with the same $\alpha_i$s and with the corresponding parameter $\gamma_l$ increased by the order of integration to $\gamma_l+\delta_l$.*

Applying E-K fractional integrals of the form $I_{1/\alpha_i}^{\gamma_i-1,\delta_i}$ successively $m$ times ($i = 1,\ldots,m$) to (39) and using the composition (decomposition) property (26), we obtain for the generalized fractional integrals (23) the image:

$$I_{(1/\alpha_i),m}^{(\gamma_i-1),(\delta_i)}\, E_{(\alpha_i),(\gamma_i)}(\lambda z) = E_{(\alpha_i),(\gamma_i+\delta_i)}(\lambda z). \tag{49}$$

Then, for $\delta_i := \alpha_i$, $i = 1,\ldots,m$, and applying the operational rules for the operators $I_{(\beta_i),m}^{(\gamma_i),(\delta_i)}$ and $D_{(\beta_i),m}^{(\gamma_i),(\delta_i)}$ of the GFC, the following generalized fractional integration and differentiation relations follow:

$$(\lambda z)\, I_{(1/\alpha_i),m}^{(\gamma_i-1),(\alpha_i)}\, E_{(\alpha_i),(\gamma_i)}(\lambda z) = E_{(\alpha_i),(\gamma_i)}(\lambda z) - \frac{1}{\Gamma(\gamma_1)\cdots\Gamma(\gamma_m)},$$

$$D_{(1/\alpha_i),m}^{(\gamma_i-1-\alpha_i),(\alpha_i)}\, E_{(\alpha_i),(\gamma_i)}(\lambda z) = (\lambda z)\, E_{(\alpha_i),(\gamma_i)}(\lambda z) + \frac{1}{\Gamma(\gamma_1-\alpha_1)\cdots\Gamma(\gamma_m-\alpha_m)}, \tag{50}$$

as analogs of the classical relation $z^{\alpha}D^{\alpha}E_{\alpha}(\lambda z^{\alpha}) = \lambda z^{\alpha}E_{\alpha}(\lambda z^{\alpha}) + \frac{1}{\Gamma(1-\alpha)}$ for the R-L derivative $D^{\alpha} = z^{-\alpha}D_1^{-\alpha,\alpha}$.

It remains to combine the results (48) and (50) to verify that *the multi-index M-L functions that generate the G-L operators (47) appear as their eigenfunctions*:

**Theorem 2.** *The multi-index Mittag–Leffler function* (39) *satisfies the differential equation of fractional multi-order* $(\alpha_1,\ldots,\alpha_m)$*:*

$$\widetilde{D}\, E_{(\alpha_i),(\beta_i)}(\lambda z) = D_{(\alpha_i),(\beta_i)}\, E_{(\alpha_i),(\beta_i)}(\lambda z) = \lambda\, E_{(\alpha_i),(\beta_i)}(\lambda z),\quad \lambda\neq 0. \tag{51}$$

The classical *Poisson integral formula*, representing the Bessel function via the cosine function ([3] (Vol. 2)), can be written in terms of an E-K fractional integral, as

$$J_{\nu}(z) = \frac{2}{\sqrt{\pi}\,\Gamma(\nu+1/2)}\left(\frac{z}{2}\right)^{\nu}\int_0^1 (1-t^2)^{\nu-1/2}\cos(zt)\, dt = \frac{1}{\sqrt{\pi}}\left(\frac{z}{2}\right)^{\nu}\, I_{2}^{-1/2,\nu+1/2}\{\cos z\}. \tag{52}$$

This representation has been extended in our works [9] (Ch.4), [90] to the *hyper-Bessel functions* (58), $m \ge 2$, that is, to the ${}_0F_{m-1}$-functions, via generalized fractional integrals (24) of the function $\cos_m$. The details follow in Section 8. For the multi-index M-L functions, a Poisson-type integral representation of the kind of (52) has to employ the more general fractional calculus operators from Definition 6. This is part of the general results discussed in Section 8, but we expose it here so as to close (at least partly) the topic with some properties of the multi-index Mittag–Leffler functions.

**Theorem 3.** (Kiryakova [59]) *Let $0 < \alpha_k \le 1$, $\beta_k \ge \frac{k}{m}$, $k = 1,\ldots,m$. Then, we have the following Poisson-type integral representation of the multi-index M-L functions by means of the multiple W-E-K fractional integrals* (33) *of the cosine function* (54) *of order $m$ (from the next section):*

$$E_{(\alpha_k),(\beta_k)}(-z) = c^{*}\, I_{(1/\alpha_k),(1),m}^{\left(\frac{k}{m}-1\right),\left(\beta_k-\frac{k}{m}\right)}\left\{\cos_m\left(mz^{1/m}\right)\right\}$$

$$= c^{*}\int_0^1 H_{m,m}^{m,0}\left[\sigma\,\middle|\,\begin{array}{c}(\beta_k-\alpha_k,\alpha_k)_1^m\\ (k/m-1,1)_1^m\end{array}\right]\cos_m\left(m(z\sigma)^{1/m}\right) d\sigma,\quad\text{with}\quad c^{*} := \sqrt{m/(2\pi)^{m-1}}. \tag{53}$$

**Remark 1.** *The above result is parallel to* (52) *for the Bessel functions. If we take $\alpha_k = 1$, $\beta_k = \frac{k}{m}$, the above GFC operator, the multiple W-E-K fractional integral, has multi-order $(0,\ldots,0)$, and since also $\lambda_k = \beta_k$, it turns into the identity. Then, the $E_{(\alpha_k),(\beta_k)}$-function reduces to the $\cos_m(z)$-function. This is similar, in the simplest case, to the Bessel function with index $\nu = -1/2$: $J_{-1/2}(z) = \sqrt{\frac{2}{\pi z}}\cos z$. More generally, it is also known that the Bessel functions of "semi-integer" indices (called also* "spherical functions" *for their use in the theory of spherical waves) are reducible to trigonometric functions or to integer-order operators of them: $J_{-n-1/2}(z) = \frac{(2z)^{n+1/2}}{\sqrt{\pi}}\,\frac{d^n}{(dz^2)^n}\left(\frac{\cos z}{z}\right)$, $n = 0,1,2,\ldots$. In the case of the multi-index M-L functions* (39)*, we can call multi-indices of the form $\alpha_k = 1$, $\beta_k := \nu_k$ with $\nu_k-\frac{k}{m} = 0,1,2,\ldots$, for $k = 1,\ldots,m$,* "semi-integer multi-indices"*. A corollary of Theorem 3 tells that, for such multi-indices, the functions $E_{(\alpha_k),(\beta_k)}$ reduce either directly to generalized trigonometric functions or to integer-order integral or differential operators of them.*

The results for the *images of the multi-index Mittag–Leffler functions* (39) and (40) *under GFC integrals and derivatives*, or under their particular cases, the R-L, E-K, Saigo, Marichev–Saigo–Maeda operators, etc., can be written down from the general results in Section 7, according to the definition via the Wright g.h.f. ${}\_1\Psi\_m$.

*Series in systems of special functions*, in the general cases of the 2*m*- and 3*m*-parameter M-L functions and their particular cases (mentioned in the next section) such as the M-L function, the Prabhakar function, and the multi-index and fractional analogs of the Bessel and hyper-Bessel functions, were studied recently in detail by Paneva–Konovska in a series of papers and in the book [74], especially with respect to their convergence in the complex domain, including Cauchy–Hadamard, Abel, Tauber type, Hardy–Littlewood and Ostrowski type theorems.

#### **5. Examples of M-L Type and Multi-Index M-L Functions**

**5.1.** For *m* = 1, this is the *classical M-L function Eα*,*β*(*z*) with all its special cases:

• $\beta = 1$: $E\_{0,1}(z) = \frac{1}{1-z}$ ($|z| < 1$); $E\_{1,1}(z) = \exp(z)$; $E\_{2,1}(z^2) = \cosh(z)$, $E\_{2,1}(-z^2) = \cos(z)$; $E\_{1/2,1}(z^{1/2}) = \exp(z)\left[1 + \operatorname{erf}(z^{1/2})\right] = \exp(z)\operatorname{erfc}(-z^{1/2}) = \exp(z)\left[1 + \frac{1}{\sqrt{\pi}}\,\gamma(\frac{1}{2}, z)\right]$ (*the error functions*, or *incomplete gamma functions*);
• $\beta = 2$: $E\_{1,2}(z) = \frac{e^z - 1}{z}$; $E\_{2,2}(z) = \frac{\sinh\sqrt{z}}{\sqrt{z}}$; the *Miller–Ross function* $z^{\nu}E\_{1,\nu+1}(az)$; etc.;
• $\beta = \alpha$: the *α-exponential (Rabotnov) function* $y\_{\alpha}(z) = z^{\alpha-1}E\_{\alpha,\alpha}(z^{\alpha})$.
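These classical reductions are easy to confirm numerically by truncating the defining series (a minimal sketch, not from the source; `ml` is a hypothetical helper summing the series of $E\_{\alpha,\beta}$ with `math.gamma`):

```python
import math

def ml(z, alpha, beta, terms=80):
    """Truncated series of the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

z = 0.7
# E_{1,1}(z) = exp(z)
assert abs(ml(z, 1, 1) - math.exp(z)) < 1e-12
# E_{2,1}(-z^2) = cos(z)
assert abs(ml(-z**2, 2, 1) - math.cos(z)) < 1e-12
# E_{1,2}(z) = (e^z - 1)/z
assert abs(ml(z, 1, 2) - (math.exp(z) - 1) / z) < 1e-12
print("Mittag-Leffler special cases check out")
```

Truncation at 80 terms is far more than enough here, since the general term decays factorially for any fixed $z$.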

• The *trigonometric functions of order m*, and, respectively the *hyperbolic functions of order m*:

$$\cos\_m(z) = \sum\_{j=0}^{\infty} \frac{(-1)^j z^{mj}}{(mj)!} = E\_{m,1}(-z^m),\tag{54}$$

$y(z) = \cos\_m(z)$ is the solution of the IVP $y^{(m)}(z) = -y(z)$, $y(0) = 1$, $y^{(j)}(0) = 0$, $j = 1, \dots, m-1$;

$$k\_r(z,m) = \sum\_{j=0}^{\infty} \frac{(-1)^j z^{mj+r-1}}{(mj+r-1)!} = z^{r-1} E\_{m,r}(-z^m), \; r = 1,2,\dots; \; k\_1(z,m) := \cos\_m(z) = E\_{m,1}(-z^m),$$

$$h\_r(z,m) = \sum\_{j=0}^{\infty} \frac{z^{mj+r-1}}{(mj+r-1)!} = z^{r-1} E\_{m,r}(z^m), \; r = 1,2,\dots; \; h\_1(z,m) := \cosh\_m(z) = E\_{m,1}(z^m),$$

can also be expressed in terms of the M-L function (see [3] (Vol. 3) and [16] (Ch.1)); the same holds for their *fractionalized versions*, introduced by Plotnikov [91] and Tseytlin [92]:

$$\operatorname{Sc}\_{\alpha}(z) = \sum\_{k=0}^{\infty} \frac{(-1)^k z^{(2-\alpha)k+1}}{\Gamma((2-\alpha)k+2)} = z\, E\_{2-\alpha,2}(-z^{2-\alpha}),$$

$$\operatorname{Cs}\_{\alpha}(z) = \sum\_{k=0}^{\infty} \frac{(-1)^k z^{(2-\alpha)k}}{\Gamma((2-\alpha)k+1)} = E\_{2-\alpha,1}(-z^{2-\alpha}),$$

and by Luchko–Srivastava [77]:

$$\sin\_{\lambda,\mu}(z) = \sum\_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{\Gamma(2\mu k + 2\mu - \lambda + 1)} = z \, E\_{2\mu, 2\mu - \lambda + 1}(-z^2),$$

$$\cos\_{\lambda,\mu}(z) = \sum\_{k=0}^{\infty} \frac{(-1)^k z^{2k}}{\Gamma(2\mu k + \mu - \lambda + 1)} = E\_{2\mu, \mu - \lambda + 1}(-z^2),$$

(see details again in Podlubny [16] (Ch.1)).
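The identity $\cos\_m(z) = E\_{m,1}(-z^m)$ from (54) lends itself to the same kind of numerical check (a sketch, not from the source; `ml` and `cos_m` are hypothetical helpers, with Gamma arguments capped to avoid float overflow):

```python
import math

def ml(z, alpha, beta):
    # truncated defining series of E_{alpha,beta}; Gamma argument kept below 160
    n = int((160 - beta) / alpha)
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(n))

def cos_m(z, m):
    # trigonometric function of order m: sum_j (-1)^j z^(mj) / (mj)!
    return sum((-1)**j * z**(m * j) / math.gamma(m * j + 1) for j in range(150 // m))

z = 1.3
assert abs(cos_m(z, 2) - math.cos(z)) < 1e-12        # cos_2 is the ordinary cosine
for m in (2, 3, 4):
    assert abs(cos_m(z, m) - ml(-z**m, m, 1)) < 1e-12  # cos_m(z) = E_{m,1}(-z^m)
print("cos_m identities verified")
```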

• Here, we also mention the so-called *Lorenzo–Hartley functions* [93], the *F*-function and its generalization, the *R*-function, shown to be solutions of some linear fractional differential equations. We can represent them in terms of the M-L function, namely, for *z* > 0, *c* = 0, *q* ≥ 0, *ν* ≤ *q*:

$$F\_q(a, z) = \sum\_{k=0}^{\infty} \frac{a^k z^{(k+1)q - 1}}{\Gamma((k+1)q)} = z^{q-1} E\_{q, q}(az^q),$$

$$R\_{q, \nu}(a, 0, z) = \sum\_{k=0}^{\infty} \frac{a^k z^{(k+1)q - 1 - \nu}}{\Gamma((k+1)q - \nu)} = z^{q-\nu-1} E\_{q, q - \nu}(az^q).$$
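The series defining the *F*-function can be sanity-checked against elementary closed forms (a sketch; `F` is a hypothetical helper; for $q = 1$ the series sums to $\exp(az)$, and for $q = 2$ to $\sinh(\sqrt{a}\,z)/\sqrt{a}$ when $a > 0$):

```python
import math

def F(q, a, z, terms=60):
    # Lorenzo-Hartley F-function: sum_k a^k z^((k+1)q - 1) / Gamma((k+1)q)
    return sum(a**k * z**((k + 1) * q - 1) / math.gamma((k + 1) * q)
               for k in range(terms))

# q = 1: the series is just exp(a z) ...
assert abs(F(1, 0.5, 2.0) - math.exp(1.0)) < 1e-12
# ... and q = 2 sums to sinh(sqrt(a) z)/sqrt(a)
a, z = 0.5, 2.0
assert abs(F(2, a, z) - math.sinh(math.sqrt(a) * z) / math.sqrt(a)) < 1e-12
print("Lorenzo-Hartley F-function checks passed")
```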

**5.2.** For *m* = 2: We start with the insufficiently popular *M-L type function of Dzrbashjan* [44], with 2 × 2 indices, which he denoted alternatively by (we need to set $1/\rho\_i := \alpha\_i$, $\mu\_i := \beta\_i$, $i = 1, 2$):

$$\Phi\_{\rho\_1,\rho\_2}(z;\mu\_1,\mu\_2) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\mu\_1 + \frac{k}{\rho\_1})\Gamma(\mu\_2 + \frac{k}{\rho\_2})} := E\_{(\frac{1}{\rho\_1}, \frac{1}{\rho\_2}), (\mu\_1,\mu\_2)}(z) = E\_{(\alpha\_1,\alpha\_2), (\beta\_1,\beta\_2)}(z). \tag{55}$$

Dzrbashjan found the order and type of this entire function, commented on a few simple particular cases, and considered some integral relations between (55) and Mellin transforms on a set of axes. Then, he developed a theory of integral transforms in the class $L\_2$ involving kernels close to the functions (55) and, further, proposed approximations of entire functions in $L\_2$ for an arbitrary finite system of axes in the complex plane starting from the origin.

The 2 × 2-indices M-L type functions (55) were also studied in detail by Luchko in the recent paper [94]. He allowed the parameters $\rho\_1, \rho\_2$ to also be negative or zero, called them "*4-parameters Wright functions of the second kind*", and separated the cases $\rho\_1 + \rho\_2 > 0$, $\rho\_1 + \rho\_2 = 0$ and $\rho\_1 + \rho\_2 < 0$.

Some of the simple cases of (55), as mentioned and denoted in Dzrbashjan [44], are:

• The *Lommel and Struve functions*:

$$s\_{\mu,\nu}(z) = \frac{\Gamma(\frac{\mu-\nu+1}{2})\,\Gamma(\frac{\mu+\nu+1}{2})}{4}\, z^{\mu+1}\, E\_{(1,1),(\frac{3-\nu+\mu}{2},\frac{3+\nu+\mu}{2})}\left(-\frac{z^2}{4}\right),\quad H\_{\nu}(z) = \frac{1}{\pi 2^{\nu-1} (1/2)\_{\nu}}\, s\_{\nu,\nu}(z).$$

• The *"classical" Wright function* that arose in the studies of Fox ([95], 1928), Wright ([31], 1933) and Humbert and Agarwal ([69], 1953) and was also referred to in Erdélyi et al. [3] (Vol. 3). Initially, Wright [31] defined this function only for *α* > 0, then extended its definition for *α* > −1 [32]. Now, we see this is a case of multi-index M-L function with *m* = 2:

$$\phi(\alpha,\beta;z) := \mathcal{W}\_{\alpha,\beta}(z) = \sum\_{k=0}^{\infty} \frac{1}{\Gamma(\alpha k+\beta)} \frac{z^k}{k!} = {}\_0\Psi\_1\left[ \begin{array}{c} - \\ (\beta,\alpha) \end{array} \bigg| z \right] = E\_{(\alpha,1),(\beta,1)}^{(2)}(z),\tag{56}$$

which is an entire function of order 1/(1 + *α*). The survey papers by Gorenflo–Luchko–Mainardi [96] and Mainardi–Consiglio [97] reflect in detail its analytical properties and applications; see also the book [22] and the related literature. In the case *α* ≥ 0, the Wright function is said to be of first kind, and for −1 < *α* < 0 of second kind. The latter survey [97] concentrates on the Wright function of second kind. It is noted that the Wright function of first kind is of exponential order, while that of second kind is not, and naturally they have different asymptotic behaviors, Laplace transforms, etc. (see also Luchko [94]). The function (56) plays an important role in the solutions of linear partial fractional differential equations such as the *fractional diffusion-wave equation*, studied by Nigmatullin (1984–1986, to describe the diffusion process in media with fractal geometry, 0 < *α* < 1) and by Mainardi et al. (since 1994, for propagation of mechanical diffusive waves in viscoelastic media, 1 < *α* < 2). In the form $M(z; \beta) = \phi(-\beta, 1-\beta; -z)$, $\beta := \alpha/2$, this function has recently been called the *Mainardi function* (see [16] (Ch.1)). In our denotations, it is a multi-index M-L function with *m* = 2 and a Dzrbashjan function (55): $M(z;\beta) = E^{(2)}\_{(-\beta,1),(1-\beta,1)}(-z)$, and it has its own particular cases, such as $M(z;1/2) = \frac{1}{\sqrt{\pi}}\exp(-z^2/4)$ and the *Airy function*, $M(z;1/3) = 3^{2/3}\, Ai(z/3^{1/3})$. Note also that, for *α* = 0, the Wright function (56) reduces to the exponential function, since $\phi(0,\beta;z) = \sum\_{k=0}^{\infty} z^k/(k!\,\Gamma(\beta)) = (1/\Gamma(\beta))\exp(z)$.
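The reduction $M(z;1/2) = \frac{1}{\sqrt{\pi}}\exp(-z^2/4)$ and the collapse of (56) to the exponential at $\alpha = 0$ can be verified from the series (a sketch, not from the source; `wright` and `rgamma` are hypothetical helpers, with the reciprocal Gamma set to 0 at the poles):

```python
import math

def rgamma(x):
    # reciprocal Gamma function, extended by 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and x == int(x):
        return 0.0
    return 1.0 / math.gamma(x)

def wright(alpha, beta, z, terms=120):
    # "classical" Wright function phi(alpha, beta; z) = sum_k z^k/(k! Gamma(alpha k + beta))
    return sum(z**k / math.factorial(k) * rgamma(alpha * k + beta)
               for k in range(terms))

z = 0.8
# Mainardi function M(z; beta) = phi(-beta, 1 - beta; -z); beta = 1/2:
M = wright(-0.5, 0.5, -z)
assert abs(M - math.exp(-z**2 / 4) / math.sqrt(math.pi)) < 1e-12
# for alpha = 0 the Wright function collapses to exp(z)/Gamma(beta)
assert abs(wright(0.0, 1.5, z) - math.exp(z) / math.gamma(1.5)) < 1e-12
print("Wright/Mainardi checks passed")
```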

In alternative form and denotation, the Wright function (56) is known as the *Wright– Bessel function* or is misnamed as the *Bessel–Maitland function*:

$$J\_{\nu}^{\mu}(z) = \phi(\mu, \nu + 1; -z) = \,\_0\Psi\_1 \left[ \begin{array}{c} - \\ (\nu + 1, \mu) \end{array} \; \middle| \; -z \right] = \sum\_{k=0}^{\infty} \frac{(-z)^k}{k!\,\Gamma(\nu + k\mu + 1)} \equiv E\_{(\mu, 1), (\nu + 1, 1)}^{(2)}(-z) \,, \tag{57}$$

again as an example of the Dzrbashjan function. It is an obvious *"fractional index" analog of the classical Bessel function* (and was introduced as such by Sir E. Maitland Wright [32]) $J\_\nu(z) = \frac{(z/2)^\nu}{\Gamma(\nu+1)}\, {}\_0F\_1\left(-; \nu+1; -\frac{z^2}{4}\right)$; more exactly, of the Bessel–Clifford function $C\_\nu(z)$.

Several further *"fractional-indices" generalizations* of $J\_\nu(z)$ and $J\_\nu^\mu(z)$ are found in the studies of other authors (details are in [59]), and we can represent all of them as multi-index M-L functions. One of them is the so-called *generalized Wright–Bessel(–Lommel) function*, introduced by Pathak ([98], 1966),

$$\begin{aligned} J\_{\nu,\lambda}^{\mu}(z) &= (z/2)^{\nu+2\lambda} \sum\_{k=0}^{\infty} \frac{(-1)^k (z/2)^{2k}}{\Gamma(\nu+k\mu+\lambda+1)\,\Gamma(\lambda+k+1)} \\ &= (z/2)^{\nu+2\lambda} E\_{(\mu,1),(\nu+\lambda+1,\lambda+1)}^{(2)} \left( -(z/2)^2 \right), \ \mu > 0. \end{aligned}$$

For $\mu = 1$, it includes the mentioned Lommel and Struve functions, e.g., $J\_{\nu,\lambda}^{1}(z) = \mathrm{const}\; S\_{2\lambda+\nu-1,\nu}(z)$. The next example is the *generalized Lommel–Wright function with four indices*, introduced by de Oteiza, Kalla and Conde ([99], 1986), with $r > 0$, $n \in \mathbb{N}$, $\nu, \lambda \in \mathbb{C}$:

$$\begin{split} J\_{\nu,\lambda}^{r,n}(z) &= (z/2)^{\nu+2\lambda} \sum\_{k=0}^{\infty} \frac{(-1)^{k}(z/2)^{2k}}{\Gamma(\nu+kr+\lambda+1)\,\Gamma(\lambda+k+1)^{n}} \\ &= (z/2)^{\nu+2\lambda} \, E\_{(r,1,\dots,1),(\nu+\lambda+1,\lambda+1,\dots,\lambda+1)}^{(n+1)} \left( -(z/2)^{2} \right) . \end{split}$$

The above is an interesting example of a multi-index M-L function with *m* = *n* + 1.

**5.3.** Other particular cases of multi-index (2*m*-parameters) M-L functions with greater multiplicity *m* ≥ 2 are:

• For arbitrary *m* ≥ 2: let ∀*α<sup>i</sup>* = 0 and ∀*β<sup>i</sup>* = 1, *i* = 1, ..., *m*. Then, from definition (39), we get again the *geometric series*

$$E\_{(0,0,\ldots,0),(1,1,\ldots,1)}^{(m)}(z) = \sum\_{k=0}^{\infty} z^k = \frac{1}{1-z}, \quad |z| < 1.$$

• Consider the case *m* ≥ 2, ∀*α<sup>i</sup>* = 1, *i* = 1, . . . , *m*. Then, the function

$$E\_{(1,1,\dots,1),(\beta\_1,\dots,\beta\_m)}^{(m)}(z) = {}\_1\Psi\_m \left[ \begin{array}{c} (1,1) \\ (\beta\_i,1)\_1^m \end{array} \bigg| z \right] = \left( \prod\_{i=1}^m \Gamma(\beta\_i) \right)^{-1} {}\_1F\_m(1; \beta\_1, \beta\_2, \dots, \beta\_m; z)$$

reduces to a ${}\_1F\_m$-*function* and also to a *Meijer's* $G\_{1,m+1}^{1,1}$-*function*. Denote $\beta\_i = \gamma\_i + 1$, $i = 1, \dots, m$, and let additionally one of the $\beta\_i$ be 1, e.g., $\beta\_m = 1$, i.e., $\gamma\_m = 0$. Then, the multi-index M-L function becomes a ${}\_0F\_{m-1}$-function, that is, a *hyper-Bessel function* in the sense of Delerue [89] (see also [9] (Ch.3)):

$$J\_{\gamma\_1,\dots,\gamma\_{m-1}}^{(m-1)}(z) = \left(\frac{z}{m}\right)^{\sum\_{i=1}^{m-1}\gamma\_i} E\_{(1,1,\dots,1),(\gamma\_1+1,\gamma\_2+1,\dots,\gamma\_{m-1}+1,1)}^{(m)}\left(-\left(\frac{z}{m}\right)^m\right) \tag{58}$$

$$=\left[\prod\_{i=1}^{m-1}\Gamma(\gamma\_{i}+1)\right]^{-1}\left(\frac{z}{m}\right)^{\sum\_{i=1}^{m-1}\gamma\_{i}}{}\_{0}F\_{m-1}\left(\gamma\_{1}+1,\gamma\_{2}+1,\dots,\gamma\_{m-1}+1;-(\frac{z}{m})^{m}\right)$$

$$:=\left[\prod\_{i=1}^{m-1}\Gamma(\gamma\_{i}+1)\right]^{-1}\left(\frac{z}{m}\right)^{\sum\_{i=1}^{m-1}\gamma\_{i}}\, j\_{\gamma\_1,\dots,\gamma\_{m-1}}^{(m-1)}(-z),\tag{59}$$

where $j\_{\gamma\_1,\dots,\gamma\_{m-1}}^{(m-1)}$ is called the *normalized hyper-Bessel function*.

This representation suggests that the multi-index M-L functions (39) with arbitrary $(\alpha\_1, \dots, \alpha\_m) \neq (1, \dots, 1)$ can be interpreted as *fractional-indices analogs of the hyper-Bessel functions* (58) and (59), which are themselves *multi-index* (but integer) *analogs* of the Bessel function. Functions (58) and (59) are closely related to the *hyper-Bessel differential operators* (9) (see Section 3.1), and form a fundamental system of solutions of differential equations of the form $By(z) = \lambda y(z)$; the details are found in Kiryakova [9] (Ch.3, Th.3.4.3). For example, if the hyper-Bessel operator (9) is with $\beta = m$, $\gamma\_1 < \gamma\_2 < \dots < \gamma\_m = 0 < \gamma\_1 + 1$, the solution of the Cauchy problem $By(z) = -y(z)$, $y(0) = 1$, $y^{(j)}(0) = 0$, $j = 1, \dots, m-1$, is given by the *normalized hyper-Bessel function* (59): $y(z) = j\_{\gamma\_1,\dots,\gamma\_{m-1}}^{(m-1)}(-z)$. Closely related functions are also the *Bessel–Clifford functions of order m*:

$$C\_{\nu\_1,\dots,\nu\_m}(z) = \sum\_{k=0}^{\infty} \frac{(-1)^k z^k}{k!\,\Gamma(\nu\_1 + k + 1)\dots\Gamma(\nu\_m + k + 1)} = E\_{(1,\dots,1),(\nu\_1+1,\dots,\nu\_m+1,1)}^{(m+1)}(-z).$$

Let us mention the special functions appearing in a very recent paper by Ricci [100]. He considered the so-called *Laguerre derivative* $D\_L = \frac{d}{dz}z\frac{d}{dz}$ and its iterates $D\_{mL} = \frac{d}{dz}z\frac{d}{dz}z\cdots\frac{d}{dz}z\frac{d}{dz}$ (with $m+1$ differentiations), the same as the particular hyper-Bessel differential operators (19) considered in operational calculus by Ditkin and Prudnikov [50], as mentioned in Section 3.1. Then, the *L-exponentials* $e\_1(z), e\_2(z), \dots, e\_m(z), \dots$, which are eigenfunctions of $D\_{mL}$, that is, $D\_{mL}\, e\_m(\lambda z) = \lambda\, e\_m(\lambda z)$, are shown in [100] to have the form

$$e\_{m}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{(k!)^{m+1}} = {}\_0F\_{m}(-;1,1,\dots,1;z) = {}\_1\Psi\_{m+1} \left[ \begin{array}{c} (1,1) \\ (1,1), (1,1),\dots, (1,1) \end{array} \bigg| z \right]. \tag{60}$$

Then, these are examples of the hyper-Bessel functions (58) and of the multi-index Mittag–Leffler functions $E\_{(1,\dots,1),(1,\dots,1)}^{(m+1)}(z)$ as well. In [100], applications of these SF, and of the related Laguerre-type Bell polynomials and Laguerre-type generalized hypergeometric functions, to population dynamics and to solutions of linear dynamical systems are discussed.
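The eigenfunction property $D\_{mL}\, e\_m = e\_m$ can be checked exactly on truncated series with rational arithmetic (a sketch, assuming the operator reading $\frac{d}{dz}z\frac{d}{dz}z\cdots\frac{d}{dz}$ with $m+1$ differentiations, under which $D\_{mL}\, z^k = k^{m+1} z^{k-1}$; helper names are hypothetical):

```python
from fractions import Fraction
from math import factorial

def deriv(p):
    # formal derivative of a polynomial given as a coefficient list
    return [k * c for k, c in enumerate(p)][1:]

def laguerre_D(p, m):
    # iterated Laguerre derivative: differentiate, then m times (shift by z, differentiate);
    # on monomials this sends z^k to k^(m+1) z^(k-1)
    p = deriv(p)
    for _ in range(m):
        p = deriv([Fraction(0)] + p)  # multiply by z, then differentiate
    return p

N = 12
for m in (1, 2, 3):
    # coefficients of the L-exponential e_m(z) = sum_k z^k / (k!)^(m+1)
    e_m = [Fraction(1, factorial(k) ** (m + 1)) for k in range(N)]
    # D_mL e_m = e_m (eigenvalue 1), exactly, up to the truncation order
    assert laguerre_D(e_m, m) == e_m[:N - 1]
print("L-exponentials are eigenfunctions of the Laguerre derivatives")
```

Using `Fraction` makes the comparison exact, so the check does not depend on floating-point tolerances.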

• The *Rabotnov function* (the *α*-exponential function), presented in **5.1.**, appeared in Rabotnov's works on application of fractional order operators in mechanics of solids. It is interesting to consider its *multi-index analog*, that is the case with all *β<sup>i</sup>* = *α<sup>i</sup>* = *α* > 0, *i* = 1, ..., *m*. This is the function

$$y\_\alpha^{(m)}(z) = z^{\alpha-1} E\_{(\alpha,\dots,\alpha), (\alpha,\dots,\alpha)}^{(m)}(z^\alpha) = z^{\alpha-1} \sum\_{k=0}^{\infty} \frac{z^{\alpha k}}{[\Gamma(\alpha+\alpha k)]^m}.\tag{61}$$

Observe that, for $\alpha = 1$, we get the Ricci function (60), namely $e\_{m-1}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[k!]^m}$, and also a case of the *original Le Roy function* with $\gamma = m$.

• In general, for rational values of all $\alpha\_i$, $i = 1, \dots, m$, the functions (39) are reducible to *generalized hypergeometric functions* ${}\_1F\_m$ and to *Meijer's G-functions* $G\_{1,m+1}^{1,1}$, that is, to classical special functions.

**Remark 2.** *Note that all the results we derived for the multi-index M-L functions can be applied for their particular cases mentioned above.*

#### **6. Other Special Cases of the Wright Generalized Hypergeometric Functions** *p***Ψ***q*

**6.1.** *Virchenko and Ricci generalized hypergeometric functions.* In [101] and some other papers, Virchenko studied some generalized hypergeometric functions denoted by ${}\_2R\_1^{\tau}(z)$ and ${}\_1\Phi\_1^{\tau}(z)$, as well as their integral representations, relations and applications to the generalized Legendre functions $P\_k^{m,n}(z)$, $Q\_k^{m,n}(z)$, gamma functions, Laguerre's functions, etc.

$$\,\_2\mathcal{R}\_1^{\omega,\mu}(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)} \sum\_{k=0}^\infty \frac{\Gamma(a+k)\Gamma(b+\frac{\omega}{\mu}k)}{\Gamma(c+\frac{\omega}{\mu}k)} \cdot \frac{z^k}{k!}$$


For $\frac{\omega}{\mu} := \tau > 0$, and $a, b, c$ complex, with $a + k \neq 0, -1, -2, \dots$; $b + \tau k \neq 0, -1, -2, \dots$, $k = 0, 1, 2, \dots$; $|z| < 1$, it is rewritten as

$$\,\_2R\_1^{\tau}(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)} \sum\_{k=0}^{\infty} \frac{\Gamma(a+k)\Gamma(b+\tau k)}{\Gamma(c+\tau k)} \cdot \frac{z^k}{k!},$$

which is nothing but the Wright g.h.f. $\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\; {}\_2\Psi\_1 \left[ \begin{array}{c} (a, 1),(b, \tau) \\ (c, \tau) \end{array} \bigg|\, z \right]$. Virchenko also proposed some examples of elementary functions for these special functions, e.g., $(\ln (1 + z))^{\tau}$ and $(\arcsin z)^{\tau}$; some generalized incomplete *B*-function; the Gauss function ${}\_2F\_1$; etc.
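That ${}\_2R\_1^{\tau}$ reduces to the Gauss ${}\_2F\_1$ for $\tau = 1$ can be illustrated numerically via the elementary case ${}\_2F\_1(1,1;2;z) = -\ln(1-z)/z$ (a sketch; `virchenko_2R1` is a hypothetical helper, with the Gamma ratios grouped to avoid float overflow):

```python
import math

def virchenko_2R1(a, b, c, tau, z, terms=100):
    # truncated series for Virchenko's 2R1^tau (|z| < 1):
    # Gamma(c)/(Gamma(a)Gamma(b)) * sum_k Gamma(a+k) Gamma(b+tau k)/Gamma(c+tau k) z^k/k!
    pref = math.gamma(c) / (math.gamma(a) * math.gamma(b))
    s = sum((math.gamma(a + k) / math.factorial(k))
            * (math.gamma(b + tau * k) / math.gamma(c + tau * k)) * z**k
            for k in range(terms))
    return pref * s

# tau = 1 gives the Gauss 2F1; e.g. 2F1(1, 1; 2; z) = -ln(1 - z)/z
z = 0.4
assert abs(virchenko_2R1(1, 1, 2, 1, z) - (-math.log(1 - z) / z)) < 1e-12
print("2R1^1 reduces to 2F1 as expected")
```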

$$\_1\Phi\_1^\tau(a;c;z) = \frac{\Gamma(c)}{\Gamma(a)}\sum\_{k=0}^\infty \frac{\Gamma(a+\tau k)}{\Gamma(c+\tau k)} \cdot \frac{z^k}{k!},$$

and, in Virchenko [101], generalizations of the gamma function, the incomplete gamma function, probability integrals and Laguerre's functions are introduced by means of ${}\_1\Phi\_1^{\tau}(z)$, which is a Wright g.h.f. of the form $\frac{\Gamma(c)}{\Gamma(a)}\; {}\_1\Psi\_1 \left[ \begin{array}{c} (a, \tau) \\ (c, \tau) \end{array} \bigg|\, z \right]$ and, according to our classification in Section 8, a confluent type g.h.f.


• In **5.3.**, the recent paper by Ricci [100] is mentioned for the Laguerre-type derivatives and related special functions. Along with the functions (60), there he also considered the Laguerre-type (*L*-) Bessel functions, *L*-type Gauss hypergeometric functions and the *Laguerre-type generalized hypergeometric functions LpFq*. They can be shown to be representable by *pFq*+1, thus also as *<sup>p</sup>*Ψ*q*+1, namely:

$${}\_{Lp}F\_q(a\_1, \ldots, a\_p; b\_1, \ldots, b\_q; z) = \sum\_{k=0}^{\infty} \frac{a\_1^{(k)} \ldots a\_p^{(k)}}{b\_1^{(k)} \ldots b\_q^{(k)}} \cdot \frac{z^k}{(k!)^2}$$

$$= \sum\_{k=0}^{\infty} \frac{a\_1^{(k)} \ldots a\_p^{(k)}}{b\_1^{(k)} \ldots b\_q^{(k)} (1)^{(k)}} \cdot \frac{z^k}{k!} = {}\_pF\_{q+1}(a\_1, \ldots, a\_p; b\_1, \ldots, b\_q; 1; z). \tag{62}$$

**6.2.** *Mainardi-Masina and Paris generalized exponential integrals*. In [102], Mainardi and Masina introduced a generalized exponential integral Ein*α*(*z*) by replacing the exponential function in the complementary exponential integral Ein(*z*) by the Mittag–Leffler function *Eα*(*z*) and mentioned the physical applications for 0 < *α* < 1 in the studies of the creep features of linear viscoelastic models. In the recent paper [103], Paris made the next step to involve the two-parameter M-L function, namely to consider the generalized exponential integral

$$\operatorname{Ein}\_{\alpha,\beta}(z) = z \sum\_{k=0}^{\infty} \frac{(-1)^k z^{\alpha k}}{(\alpha k+1)\,\Gamma(\alpha k+\alpha+\beta)}, \quad \text{which for } \beta = 1 \text{ gives } \operatorname{Ein}\_{\alpha}(z). \tag{63}$$

As observed, this function can be seen as a case of the Wright g.h.f. with *p* = *q* = 2, namely

$$\mathrm{Ein}\_{a,\beta}(z) = z \sum\_{k=0}^{\infty} \frac{\Gamma(ak+1)\Gamma(k+1)}{\Gamma(ak+2)\Gamma(ak+a+\beta)} \frac{(-z^a)^k}{k!} = z \,\_2\Psi\_2 \left[ \begin{array}{c} (1,a),(1,1) \\ (2,a),(a+\beta,a) \end{array} \; \bigg| \; -z^a \right].$$

Paris studied in detail the asymptotic expansion of (63) for $|z| \to \infty$. In [102,103], generalized Sine and Cosine integrals are also considered (of the kind mentioned in **5.1.**), for example $\operatorname{Sin}\_{\alpha,\beta}(z) = \operatorname{Ein}\_{2\alpha,\beta-\alpha}(z)$, with their asymptotics and plots for different values of the parameters.

**6.3.** *The so-called k-analogs of special functions.* Claims of inventing and studying "new" classes of special functions in several recent papers have been based on the extended notion of the *k-Gamma function*, *k* > 0. However, in all such works, its representation in terms of the classical Gamma-function is explicitly written there, and then ignored:

$$\Gamma\_k(s) = \int\_0^\infty \exp(-\frac{t^k}{k}) \, t^{s-1} dt = k^{\frac{s}{k}-1} \Gamma(\frac{s}{k}), \quad s \in \mathbb{C}, \text{Re}\,(s) > 0,\tag{64}$$

where Γ(.) is the classical Gamma-function.
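The closed form in (64) makes the basic *k*-Gamma identities one-liners to verify (a sketch; `gamma_k` is a hypothetical helper): $\Gamma\_1 = \Gamma$, $\Gamma\_k(k) = 1$, and the *k*-analog of the recurrence, $\Gamma\_k(s + k) = s\, \Gamma\_k(s)$, which follows directly from $\Gamma(x+1) = x\Gamma(x)$.

```python
import math

def gamma_k(s, k):
    # k-Gamma via its closed form: Gamma_k(s) = k^(s/k - 1) * Gamma(s/k)
    return k ** (s / k - 1) * math.gamma(s / k)

# Gamma_1 is the classical Gamma
assert abs(gamma_k(3.7, 1.0) - math.gamma(3.7)) < 1e-12
# Gamma_k(k) = 1 and Gamma_k(s + k) = s * Gamma_k(s)
for k in (0.5, 1.0, 2.5):
    assert abs(gamma_k(k, k) - 1.0) < 1e-12
    s = 1.3
    assert abs(gamma_k(s + k, k) - s * gamma_k(s, k)) < 1e-12
print("k-Gamma identities verified")
```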

In addition, the *k-Pochhammer symbol* is used in the denotations below:

$$(\lambda)\_{\nu,\kappa} := \Gamma\_k(\lambda + \nu\kappa)/\Gamma\_k(\lambda), \quad \lambda \in \mathbb{C} \setminus \{0\},\ \nu \in \mathbb{C}, \text{ with } \Gamma\_k \text{ as in (64)}. \tag{65}$$

In [104], using the above two definitions, we showed that *most of these "new" functions are in fact some known special functions, namely Wright g.h.f. and its cases*. For the details of establishing the mentioned relations, see Kiryakova [104]. In addition, in the references lists of [104,105], one can find the particular authors/sources mentioned below.

• *A generalized k-Bessel function* was introduced by Gehlot ([106], 2014), and studied by Mondal ([107], 2016) and Shaktawat et al. ([108], 2017). It is defined by

$$\mathcal{W}\_{\nu,c}^{k}(z) = \sum\_{n=0}^{\infty} \frac{(-c)^{n}}{\Gamma\_{k}(nk+\nu+k)} \cdot \frac{(z/2)^{2n+\frac{\nu}{k}}}{n!}, \quad z \in \mathbb{C}, \ k > 0, \ \operatorname{Re}(\nu) > -1, \ c \in \mathbb{C}. \tag{66}$$

However, after a simple exercise, the function (66) can be represented as a Wright g.h.f. ${}\_0\Psi\_1$, and even as the simpler g.h.f. ${}\_0F\_1$ of the same type as the classical Bessel function:

$$\mathcal{W}\_{\nu,c}^{k}(z) = (z/2)^{\frac{\nu}{k}} \sum\_{n=0}^{\infty} \frac{[-c(\frac{z}{2})^2]^n}{k^{n+1+\frac{\nu}{k}}\, \Gamma(n+1+\frac{\nu}{k})\, \Gamma(n+1)}$$

$$= \frac{(\frac{z}{2})^{\frac{\nu}{k}}}{k^{1+\frac{\nu}{k}}} \,\_1\Psi\_2\left[ \begin{array}{c} (1,1) \\ (1+\frac{\nu}{k},1), (1,1) \end{array} \; \bigg| \; -\frac{c}{k} \left(\frac{z}{2}\right)^2 \right] = \frac{(z/2)^{\nu/k}}{k^{1+(\nu/k)}} \,\_0\Psi\_1\left[ \begin{array}{c} -- \\ (1+\frac{\nu}{k},1) \end{array} \; \bigg| \; -\frac{c}{k} \left(\frac{z}{2}\right)^2 \right]$$

$$= \frac{(\frac{z}{2})^{\frac{\nu}{k}}}{k^{1+\frac{\nu}{k}}\,\Gamma(1+\frac{\nu}{k})} \,\_0F\_1\left( -;\, 1+\frac{\nu}{k};\, -\frac{c}{k}\frac{z^2}{4} \right). \tag{67}$$

Indeed, if we take $k = 1$ and $c = 1$, this function reduces to the classical Bessel function: $\mathcal{W}\_{\nu,1}^{1}(z) = \frac{(z/2)^{\nu}}{\Gamma(1+\nu)}\, {}\_0F\_1\left(-;\, 1+\nu;\, -\frac{z^2}{4}\right) = J\_\nu(z)$. For $k > 0$ and $c = 1$, Gehlot [106] used (66) as a solution of a *k*-Bessel differential equation. Mondal [107] studied some properties of (66) for arbitrary $c \in \mathbb{C}$. Shaktawat et al. [108] evaluated the *Marichev–Saigo–Maeda (M-S-M) operators* of FC

$$I^{a,a',b,b',c}f(z) = z^{c-a-a'} \int\_0^1 \frac{(1-\sigma)^{c-1}}{\Gamma(c)} \sigma^{-a'} F\_3(a,a',b,b';c;1-\sigma,1-\frac{1}{\sigma}) f(z\sigma) d\sigma \tag{68}$$

of this function. Since its kernel, the Appell $F\_3$-function, is an *H*-function (36) with $m = 3$, in view of the author's result from Corollary 3 in Section 7, it is to be expected that the result appears in terms of a ${}\_3\Psi\_4$-function (because the indices of ${}\_0\Psi\_1$ are increased by 3 under the 3-tuple FC integral).

• *Generalized k-Mittag–Leffler function*. It was studied by many authors, for example in its simplest case by Gupta and Parihar ([109], 2014) in the form

$$E\_{k, \alpha, \beta}(z) = \sum\_{n=0}^{\infty} \frac{z^n}{\Gamma\_k(\alpha n + \beta)}.$$
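Combining this series with the closed form (64) for $\Gamma\_k$, the reduction of $E\_{k,\alpha,\beta}$ to the classical two-parameter M-L function at $k = 1$ is immediate to check (a sketch; the helper names are hypothetical):

```python
import math

def gamma_k(s, k):
    # Gamma_k(s) = k^(s/k - 1) * Gamma(s/k)
    return k ** (s / k - 1) * math.gamma(s / k)

def ml_k(k, alpha, beta, z, terms=80):
    # generalized k-Mittag-Leffler: E_{k,alpha,beta}(z) = sum_n z^n / Gamma_k(alpha n + beta)
    return sum(z**n / gamma_k(alpha * n + beta, k) for n in range(terms))

z = 0.9
# k = 1, alpha = beta = 1: the series is exp(z)
assert abs(ml_k(1.0, 1.0, 1.0, z) - math.exp(z)) < 1e-12
# k = 1, alpha = 2, beta = 1: E_{2,1}(z^2) = cosh(z)
assert abs(ml_k(1.0, 2.0, 1.0, z**2) - math.cosh(z)) < 1e-12
print("k-Mittag-Leffler reduction checks passed")
```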

This function has various further extensions, such as the generalized *k*-Mittag–Leffler function by Nisar–Eata–Dhaifalla–Choi ([110], 2016):

$$E\_{\kappa,\alpha,\beta}^{\eta,\delta,p,q}(z) = \sum\_{n=0}^{\infty} \frac{(\eta)\_{qn,\kappa}}{\Gamma\_k(\alpha n+\beta)\,(\delta)\_{pn,\kappa}}\, z^n, \text{ with } \kappa, p, q \in \mathbb{R}\_+; \ \alpha, \beta, \eta, \delta \in \mathbb{C},\tag{69}$$

and min{Re (*α*), Re (*β*), Re (*η*), Re (*δ*)} > 0; *q* ≤ Re (*α*) + *p*.

Again, by using (64) and (65), it can be transformed into a Wright g.h.f. (see [104], Case 5.2), namely:

$$E\_{\kappa,\alpha,\beta}^{\eta,\delta,p,q}(z) = k^{1-\frac{\beta}{k}}\, \frac{\Gamma(\delta/k)}{\Gamma(\eta/k)} \,\_2\Psi\_2 \left[ \begin{array}{c} (\frac{\eta}{k}, \frac{q\kappa}{k}), (1,1) \\ (\frac{\delta}{k}, \frac{p\kappa}{k}), (\frac{\beta}{k}, \frac{\alpha}{k}) \end{array} \; \bigg|\; k^{\frac{(q-p)\kappa-\alpha}{k}} z \right].$$

Nisar–Eata–Dhaifalla–Choi [110] put efforts into evaluating FC operators' images of (69) by the standard techniques and, as expected in view of the general results in the next Section 7 (Theorem 5, Corollaries 1–3), these appear in terms of ${}\_5\Psi\_5$-functions (for the M-S-M operators (68)), in particular, as ${}\_4\Psi\_4$-functions (for the Saigo operators (78)) and ${}\_3\Psi\_3$-functions (for the R-L and E-K operators). In addition, the pathway integrals (that are related to E-K integrals) are calculated there.

• *The generalized multi-index Bessel function*. In a series of papers, Nisar et al. ([111], 2017, 2019) introduced and studied the function

$$J\_{(\beta\_j)\_m,\kappa,b}^{(\alpha\_j)\_m,\gamma,c}(z) = \sum\_{k=0}^{\infty} \frac{c^k \,(\gamma)\_{\kappa k}}{\prod\_{j=1}^m \Gamma(\alpha\_j k + \beta\_j + \frac{b+1}{2})} \frac{z^k}{k!}, \quad m = 1, 2, 3, \dots \tag{70}$$

with the Pochhammer symbol denotation (65) for $(\gamma)\_{\kappa k}$; and for $\alpha\_j, \beta\_j, \gamma, b, c \in \mathbb{C}$, $j = 1, 2, \dots, m$; $\sum\_{j=1}^{m} \operatorname{Re}(\alpha\_j) > \max\{0, \operatorname{Re}(\kappa) - 1\}$; $\kappa > 0$, $\operatorname{Re}(\beta\_j) > 0$, $\operatorname{Re}(\gamma) > 0$. As shown in Kiryakova [104], this is *only a very special case of the Wright generalized hypergeometric function* ${}\_1\Psi\_m$, namely:

$$J\_{(\beta\_j)\_m,\kappa,b}^{(\alpha\_j)\_m,\gamma,c}(z) = \frac{1}{\Gamma(\gamma)} \sum\_{n=0}^{\infty} \frac{\Gamma(\kappa n + \gamma)}{\prod\_{j=1}^m \Gamma\left(\alpha\_j n + (\beta\_j + \frac{b+1}{2})\right)} \frac{(cz)^n}{n!} \tag{71}$$

$$= \frac{1}{\Gamma(\gamma)} \,\_1\Psi\_m \left[ \begin{array}{c} (\gamma,\varkappa) \\ (\beta\_j + \frac{b+1}{2},\alpha\_j)\_{j=1}^m \end{array}; cz \right]$$

$$= \frac{1}{\Gamma(\gamma)}\, H\_{1,m+1}^{1,1}\left[-cz \;\middle|\; \begin{array}{c} (1 - \gamma, \kappa) \\ (0, 1), \left(1 - \beta\_j - \frac{b+1}{2}, \alpha\_j\right)\_{j=1}^m \end{array}\right],$$

that is, it is also a Fox *H*-function.

Then, the R-L fractional integral (21) can be evaluated as part of Kiryakova's general results in the next Section 7 (Theorem 5, in particular Corollary 1 for $m = 1$, $\gamma = \beta = 1$), or directly from Kilbas' Theorem 2 in [33], which is a variant of Lemma 1 in Kiryakova [112]. Taking there $p = 1$, $q = m$, $c\_1 = \gamma$, $C\_1 = \kappa$, $d\_j = \beta\_j + \frac{b+1}{2}$, $D\_j = \alpha\_j$ and $\mu = 1$, one obtains the following R-L image of the multi-index Bessel function (70):

$$I^{\lambda} \left\{ t^{\delta-1} J\_{(\beta\_j)\_m,\kappa,b}^{(\alpha\_j)\_m,\gamma,c}(t) \right\}(z) = \frac{1}{\Gamma(\gamma)}\, z^{\delta+\lambda-1}\, {}\_2\Psi\_{m+1} \left[ \begin{array}{c} (\gamma,\kappa),(\delta,1) \\ (\beta\_j+\frac{b+1}{2},\alpha\_j)\_{1}^{m},(\lambda+\delta,1) \end{array};\; cz \right].$$

This was to be the result in Theorem 1, Equation (2.4) of *arXiv:1706.08039* [111] (its v1: 2017), but it was *written wrongly there*: similarly looking, but involving a ${}\_2\Psi\_2$-function. The evidently true result involves the Wright function ${}\_2\Psi\_{m+1}$ (see Kiryakova [104] (5.3.)), as later corrected in v2: 2019 of [111].

• A special case of (70) appears as a kind of *generalized multi-index Mittag–Leffler function*. It was introduced by Saxena and Nishimoto ([113], 2010). As mentioned by Agarwal– Rogosin–Trujillo ([114], 2015), it is representable also as a Wright g.h.f. <sup>1</sup>Ψ*m*, namely:

$$E\_{\left((\alpha\_j,\beta\_j)\_1^m\right)}^{\gamma,\kappa}(z) = \sum\_{n=0}^{\infty} \frac{(\gamma)\_{\kappa n}}{\prod\_{j=1}^{m} \Gamma\left(\beta\_{j} + \alpha\_{j}n\right)} \cdot \frac{z^{n}}{n!} = \frac{1}{\Gamma(\gamma)} \,\_{1}\Psi\_{m}\left[\begin{array}{c} (\gamma,\kappa) \\ (\beta\_{j},\alpha\_{j})\_{1}^{m} \end{array}\bigg|\, z\right].\tag{72}$$

Therefore, all the GFC operators (say the R-L, E-K, Saigo, M-S-M operators) of this special function can be evaluated by means of the general results in Section 7 (Corollaries 1–3 there). For $m = 1$, $b = -1$, this is the SF considered by Srivastava and Tomovski ([115], 2009): $E\_{\alpha,\beta}^{\gamma,\kappa}(z) = \sum\_{n=0}^{\infty} \frac{(\gamma)\_{\kappa n}}{\Gamma(\alpha n + \beta)} \cdot \frac{z^n}{n!}$.

• Similar, but simpler, is the case of the *generalized Lommel–Wright function* from the paper by Agarwal–Jain–Agarwal–Baleanu ([116], 2018), commented on in Kiryakova [117]. It has a representation as a Wright g.h.f. as follows:

$$J\_{\omega,\theta}^{\varrho,m}(z) = (\frac{z}{2})^{\omega+2\theta} \sum\_{k=0}^{\infty} \frac{(-1)^k (\frac{z}{2})^{2k}}{(\Gamma(\theta+k+1))^m \Gamma(\omega+k\varrho+\theta+1)} \tag{73}$$

$$= \left(\frac{z}{2}\right)^{\omega+2\theta} \,\_1\Psi\_{m+1} \left[ (1,1); (\theta+1,1), \dots, (\theta+1,1), (\omega+\theta+1,\varrho); -z^2/4 \right], \ \varrho > 0.$$

Note, additionally, that (73) is an example of the multi-index Mittag–Leffler function (39), namely: $J\_{\omega,\theta}^{\varrho,m}(z) = \left(\frac{z}{2}\right)^{\omega+2\theta} E\_{(1,\dots,1,\varrho),(\theta+1,\dots,\theta+1,\omega+\theta+1)}^{(m+1)}\left(-\left(\frac{z}{2}\right)^2\right)$. Then, all the FC images of (73) evaluated in the commented paper follow at once from our general results (see details in [117]).

**6.4.** *The S-function*. It was introduced by Saxena-Daiya ([118], 2015) as a "new" special function extending the M-L function (*p* = *q* = 0, *k* = 1), the Prabhakar function (38), the *M*-series (76) of Sharma and Jain ([119], 2009) with *γ* = 1, *k* = 1, etc., as follows:

$$S[z] := S\_{(p, q)}^{\alpha, \beta, \gamma, \tau, k}(a\_1, \dots, a\_p; b\_1, \dots, b\_q; z) = \sum\_{n=0}^{\infty} \frac{(a\_1)\_{n} \dots (a\_p)\_{n} \cdot (\gamma)\_{n\tau, k}}{(b\_1)\_{n} \dots (b\_q)\_{n} \cdot \Gamma\_k(n\alpha + \beta)} \frac{z^n}{n!},\tag{74}$$

$$\text{with} \quad k \in \mathbb{R}; \ \alpha, \beta, \gamma, \tau \in \mathbb{C}; \ \operatorname{Re}(\alpha) > 0; \ \operatorname{Re}(\alpha) > k \operatorname{Re}(\tau), \ p < q + 1.$$

For $p = q = 0$, it reduces to the generalized *k*-Mittag–Leffler function $E\_{k,\alpha,\beta}^{\gamma,\tau}(z)$, a variant of (69). However, it can easily be seen to be a special case of the generalized hypergeometric function of Wright of the form ${}\_{p+1}\Psi\_{q+1}$. Unfortunately, this fact has been observed neither by the authors introducing (74) nor by their numerous followers. Namely, one can write (74) as follows (see details in [104]):

$$S[z] = k^{1 - \frac{\beta}{k}}\, \frac{\Gamma(b\_1)\dots\Gamma(b\_q)}{\Gamma(a\_1)\dots\Gamma(a\_p) \cdot \Gamma(\frac{\gamma}{k})}\; {}\_{p+1}\Psi\_{q+1}\left[ \begin{array}{c} (a\_1,1),\dots,(a\_p,1),(\frac{\gamma}{k},\tau) \\ (b\_1,1),\dots,(b\_q,1),(\frac{\beta}{k},\frac{\alpha}{k}) \end{array}\bigg|\; z\,k^{\tau - \frac{\alpha}{k}} \right].$$

That is, the "new" special function $S[z]$ is nothing *but a case of the Wright function* ${}\_{p+1}\Psi\_{q+1}\left(z\,k^{\tau-\frac{\alpha}{k}}\right)$. Then, all results for its images under FC operators, such as the R-L, E-K, Saigo and M-S-M operators and the Euler transform, follow from the statements in Section 7.
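A minimal numerical illustration of the reductions just mentioned (a sketch assuming real parameters; the *k*-Gamma and *k*-Pochhammer formulas $\Gamma\_k(x) = k^{x/k-1}\Gamma(x/k)$ and $(\gamma)\_{r,k} = \Gamma\_k(\gamma+rk)/\Gamma\_k(\gamma)$ are used as assumptions here): for $p = q = 0$ and $k = \tau = \gamma = 1$, the S-function (74) becomes the M-L function $E\_{\alpha,\beta}(z)$, so $\alpha = \beta = 1$ must give $\exp z$, and $\alpha = 2$, $\beta = 1$ must give $\cosh\sqrt{z}$.

```python
import math

def gamma_k(x, k):
    # k-Gamma function: Gamma_k(x) = k^(x/k - 1) * Gamma(x/k)
    return k**(x/k - 1) * math.gamma(x/k)

def poch_k(g, r, k):
    # k-Pochhammer symbol: (g)_{r,k} = Gamma_k(g + r*k) / Gamma_k(g)
    return gamma_k(g + r*k, k) / gamma_k(g, k)

def S(z, alpha, beta, gamma, tau, k, a=(), b=(), terms=40):
    """Partial sum of the S-function series (74); real parameters assumed."""
    total = 0.0
    for n in range(terms):
        num = math.prod(math.gamma(ai+n)/math.gamma(ai) for ai in a) * poch_k(gamma, n*tau, k)
        den = math.prod(math.gamma(bj+n)/math.gamma(bj) for bj in b) * gamma_k(n*alpha + beta, k)
        total += num/den * z**n / math.factorial(n)
    return total

# p = q = 0, k = tau = gamma = 1 reduces to E_{alpha,beta}(z):
assert abs(S(0.7, 1, 1, 1, 1, 1) - math.exp(0.7)) < 1e-12            # E_{1,1}(z) = e^z
assert abs(S(0.7, 2, 1, 1, 1, 1) - math.cosh(math.sqrt(0.7))) < 1e-12  # E_{2,1}(z) = cosh(sqrt z)
```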

• Special cases of the *S*-function in **6.4.** are the *generalized K-series* and the *M-series*. Recently, (K.) Sharma ([120], 2012) introduced an extension of both the g.h.f. $\_pF\_q(z)$ and the Prabhakar function $E\_{\alpha,\beta}^{\gamma}(z)$:

$${}\_{p}^{\alpha,\beta;\gamma}K\_{q}\left(a\_1,\dots,a\_p; b\_1,\dots,b\_q; z\right) = \sum\_{n=0}^{\infty} \frac{(a\_1)\_n \dots (a\_p)\_n}{(b\_1)\_n \dots (b\_q)\_n}\, \frac{(\gamma)\_n}{\Gamma(\alpha n + \beta)}\, \frac{z^n}{n!}, \quad z, \alpha, \beta, \gamma \in \mathbb{C},\ \operatorname{Re}(\alpha) > 0,\tag{75}$$

with integers $p \leq q$ (and the additional requirement $|z| < R = \alpha^{\alpha}$ if $p = q+1$). For $\gamma = 1$, this gives the *M*-series (76) of (M.) Sharma and Jain ([119], 2009):

$${}\_{p}^{\alpha,\beta}M\_{q}\left(a\_1,\dots,a\_p; b\_1,\dots,b\_q; z\right) := {}\_{p}^{\alpha,\beta}M\_{q}(z) = \sum\_{n=0}^{\infty} \frac{(a\_1)\_n \dots (a\_p)\_n}{(b\_1)\_n \dots (b\_q)\_n}\, \frac{z^n}{\Gamma(\alpha n + \beta)}$$

$$= \kappa\; {}\_{p+1}\Psi\_{q+1}\left[\begin{array}{c} (a\_1,1),\dots,(a\_p,1),(1,1) \\ (b\_1,1),\dots,(b\_q,1),(\beta,\alpha) \end{array}\bigg|\, z\right], \quad \text{where}\ \kappa = \prod\_{j=1}^{q}\Gamma(b\_j) \Big/ \prod\_{i=1}^{p}\Gamma(a\_i).\tag{76}$$

We can mention its particular cases, for example: (1) for $\beta = 1$, the (simpler) *M*-series introduced by M. Sharma (2008); (2) for $p = q = 0$ (no upper and lower parameters), the M-L function $E\_{\alpha,\beta}(z)$; (3) for $p = 0$, $q = 1$, $b\_1 = 1$, the Wright function $\phi(\alpha,\beta;z)$, or the generalized Bessel–Maitland function (57); (4) for $p = q = 1$, $a\_1 = \gamma$, $b\_1 = 1$, the Prabhakar type function (38); and (5) for $\alpha = \beta = 1$, the g.h.f. $\_pF\_q(a\_1,\dots,a\_p; b\_1,\dots,b\_q; z)$.
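Case (5) can be checked numerically in a few lines (a sketch, assuming real parameters): for $\alpha = \beta = 1$ we have $\Gamma(n+1) = n!$, so the M-series (76) collapses to the ordinary $\_pF\_q$ series.

```python
import math

def M_series(z, alpha, beta, a=(), b=(), terms=60):
    """Partial sum of the Sharma-Jain M-series (76); real parameters assumed."""
    s = 0.0
    for n in range(terms):
        num = math.prod(math.gamma(ai+n)/math.gamma(ai) for ai in a)
        den = math.prod(math.gamma(bj+n)/math.gamma(bj) for bj in b)
        s += num/den * z**n / math.gamma(alpha*n + beta)
    return s

def pFq(z, a=(), b=(), terms=60):
    # naive partial sum of the generalized hypergeometric series
    s = 0.0
    for n in range(terms):
        num = math.prod(math.gamma(ai+n)/math.gamma(ai) for ai in a)
        den = math.prod(math.gamma(bj+n)/math.gamma(bj) for bj in b)
        s += num/den * z**n / math.factorial(n)
    return s

# Case (5): alpha = beta = 1 turns Gamma(n+1) into n!, giving 1F1 here:
assert abs(M_series(0.5, 1, 1, a=(2.0,), b=(3.0,)) - pFq(0.5, a=(2.0,), b=(3.0,))) < 1e-12
# with no parameters at all, it is just exp(z):
assert abs(M_series(0.5, 1, 1) - math.exp(0.5)) < 1e-12
```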

In the recent *arXiv* preprint [121], Lavault represented (75) as a Wright g.h.f.:

$${}\_p^{\alpha,\beta;\gamma}K\_q(a\_1,\dots,a\_p;b\_1,\dots,b\_q;z) := {}\_p^{\alpha,\beta;\gamma}K\_q(z) = \frac{\prod\_{j=1}^q \Gamma(b\_j)}{\Gamma(\gamma)\prod\_{i=1}^p \Gamma(a\_i)}\; {}\_{p+2}\Psi\_{q+2}\left[ \begin{array}{c} (a\_1,1),\dots,(a\_p,1),(\gamma,1),(1,1) \\ (b\_1,1),\dots,(b\_q,1),(1,1),(\beta,\alpha) \end{array} \bigg|\, z \right],\tag{77}$$

although this can also be reduced to:

$$= \frac{\prod\_{j=1}^{q}\Gamma(b\_j)}{\Gamma(\gamma)\prod\_{i=1}^{p}\Gamma(a\_i)}\; {}\_{p+1}\Psi\_{q+1}\left[\begin{array}{c} (a\_1,1),\dots,(a\_p,1),(\gamma,1) \\ (b\_1,1),\dots,(b\_q,1),(\beta,\alpha) \end{array}\bigg|\, z\right],$$

since the two pairs $(1,1)$ of parameters in the upper and lower rows cancel each other. In [121], some FC operators of this *K*-series are calculated, such as the R-L, Saigo and M-S-M operators. Naturally, a R-L integral transforms a ${}\_p^{\alpha,\beta;\gamma}K\_q$-function into a ${}\_{p+1}^{\alpha,\beta;\gamma}K\_{q+1}$-function (Theorem 4.1 there), similarly to our Example 11 in [112] for the *M*-series. Next, in Theorem 4.2 of [121] for the *M*-series and Corollary 4.3 for the *K*-series, the image under the *Saigo operator* (78) (with the Gauss hypergeometric function (35) in the kernel; GFC with $m = 2$) is derived,

$$I^{\alpha,\beta,\eta} f(z) = \frac{z^{-\alpha-\beta}}{\Gamma(\alpha)} \int\_0^z (z - \xi)^{\alpha-1}\, {}\_2F\_1\Big(\alpha + \beta, -\eta; \alpha; 1 - \frac{\xi}{z}\Big)\, f(\xi)\, d\xi$$

$$= \frac{z^{-\beta}}{\Gamma(\alpha)} \int\_0^1 (1 - \sigma)^{\alpha-1}\, {}\_2F\_1(\alpha + \beta, -\eta; \alpha; 1 - \sigma)\, f(z\sigma)\, d\sigma. \tag{78}$$
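The results below rest on the classical Saigo image of a power function, $I^{\alpha,\beta,\eta}\{t^{\rho-1}\}(z) = \frac{\Gamma(\rho)\Gamma(\rho+\eta-\beta)}{\Gamma(\rho-\beta)\Gamma(\rho+\alpha+\eta)}\, z^{\rho-\beta-1}$, applied termwise. This can be checked against the definition (78) by direct quadrature (a sketch assuming real parameters; the power formula is quoted from the standard Saigo literature, not from this text). Taking $\eta = 1$ keeps the $\_2F\_1$ kernel polynomial, $\_2F\_1(\alpha+\beta,-1;\alpha;x) = 1 - \frac{\alpha+\beta}{\alpha}x$, and the substitution $t = (1-\sigma)^{\alpha}$ removes the endpoint singularity.

```python
import math

def saigo(f, z, alpha, beta, eta, n=20000):
    """Saigo fractional integral (78), second form, by composite Simpson.
    Restricted to eta = 1, where the 2F1 kernel terminates to a linear polynomial."""
    assert eta == 1
    c = (alpha + beta)/alpha
    def g(t):
        # t = (1 - sigma)^alpha removes the (1-sigma)^(alpha-1) singularity
        sigma = 1 - t**(1/alpha)
        return (1 - c*(1 - sigma)) * f(z*sigma)
    h = 1.0/n
    s = g(0) + g(1) + sum((4 if i % 2 else 2)*g(i*h) for i in range(1, n))
    integral = (1/alpha) * (h/3) * s
    return z**(-beta) / math.gamma(alpha) * integral

# Compare with the Saigo power-function image for f(t) = t^(rho-1):
alpha, beta, eta, rho, z = 0.6, -0.2, 1, 2.0, 0.9
lhs = saigo(lambda t: t**(rho-1), z, alpha, beta, eta)
rhs = (math.gamma(rho)*math.gamma(rho+eta-beta)
       / (math.gamma(rho-beta)*math.gamma(rho+alpha+eta))) * z**(rho-beta-1)
assert abs(lhs - rhs) < 1e-6
```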

Since the *K*-series (75) is a ${}\_{p+1}\Psi\_{q+1}$-function, from our results (Corollary 3 in [112]; see also Corollary 2 in the next section) the result is expected to be a ${}\_{p+3}\Psi\_{q+3}$-function (the indices are increased by 2), which is indeed the result (4.10) in [121]:

$$I^{\alpha,\beta,\gamma}\left\{ t^{\sigma-1}\, {}\_p^{\xi,\eta;\nu}K\_q\left(c\,t^{\mu}\right)\right\}(z) = \frac{\prod\_{1}^{q}\Gamma(b\_j)}{\prod\_{1}^{p}\Gamma(a\_i)}\, \frac{z^{\sigma-\beta-1}}{\Gamma(\nu)}\; {}\_{p+3}\Psi\_{q+3}\left[\begin{array}{c} (a\_i,1)\_1^p,\ (\sigma,\mu),\ (\sigma+\gamma-\beta,\mu),\ (\nu,1) \\ (b\_j,1)\_1^q,\ (\sigma-\beta,\mu),\ (\sigma+\alpha+\gamma,\mu),\ (\eta,\xi) \end{array}\bigg|\; c\,z^{\mu}\right].$$

Similarly, the M-S-M images (68) follow as ${}\_{p+4}\Psi\_{q+4}$-functions, according to Corollary 3 in the next section.

**6.5.** *k-Wright generalized hypergeometric function* ${}\_p\Psi\_q^k$. Purohit and Badguzer ([122], 2018) introduced the *generalized k-Wright function* as a *k*-extension ($k > 0$) of the Wright g.h.f. (4), by

$${}\_{p}\Psi\_{q}^{k}(z) = {}\_{p}\Psi\_{q}^{k}\left[\begin{array}{c} (a\_{1},A\_{1}),\ldots,(a\_{p},A\_{p})\\(b\_{1},B\_{1}),\ldots,(b\_{q},B\_{q}) \end{array}\bigg|z\right] = \sum\_{n=0}^{\infty} \frac{\Gamma\_{k}(a\_{1}+nA\_{1})\ldots\Gamma\_{k}(a\_{p}+nA\_{p})}{\Gamma\_{k}(b\_{1}+nB\_{1})\ldots\Gamma\_{k}(b\_{q}+nB\_{q})}\frac{z^{n}}{n!}.\tag{79}$$

Replacing the *k*-Gamma function by the classical Gamma function according to (64), it is seen that the "new" function is *again a Wright generalized hypergeometric function*, of the form

$$\mathrm{const}\cdot\, {}\_{p}\Psi\_{q}\left[ \begin{array}{c} \left( a\_{i}/k, A\_{i}/k \right)\_{i=1}^{p} \\ \left( b\_{j}/k, B\_{j}/k \right)\_{j=1}^{q} \end{array} \bigg|\; k^{\left( A\_{1} + \dots + A\_{p} - B\_{1} - \dots - B\_{q} \right)/k} \cdot z \right]. \tag{80}$$
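The reduction (80) can be verified termwise from the $\Gamma\_k$ identity (a sketch, assuming real parameters; the value of the constant, $k^{(a\_1+\dots+a\_p-b\_1-\dots-b\_q)/k + q - p}$, is our own elaboration of the unspecified "const"):

```python
import math

def gamma_k(x, k):
    # Gamma_k(x) = k^(x/k - 1) * Gamma(x/k)
    return k**(x/k - 1) * math.gamma(x/k)

def k_wright(z, ab, bb, k, terms=40):
    """Partial sum of the k-Wright series (79); ab, bb are lists of (a, A) pairs."""
    s = 0.0
    for n in range(terms):
        num = math.prod(gamma_k(a + n*A, k) for a, A in ab)
        den = math.prod(gamma_k(b + n*B, k) for b, B in bb)
        s += num/den * z**n / math.factorial(n)
    return s

def wright(z, ab, bb, terms=40):
    # classical Wright g.h.f. partial sum
    s = 0.0
    for n in range(terms):
        num = math.prod(math.gamma(a + n*A) for a, A in ab)
        den = math.prod(math.gamma(b + n*B) for b, B in bb)
        s += num/den * z**n / math.factorial(n)
    return s

k, z = 2.0, 0.6
ab, bb = [(1.2, 0.8)], [(2.5, 1.1)]
const = k**((1.2 - 2.5)/k)        # k^((sum a - sum b)/k + q - p), here q = p
arg = z * k**((0.8 - 1.1)/k)      # argument rescaling in (80)
lhs = k_wright(z, ab, bb, k)
rhs = const * wright(arg, [(1.2/k, 0.8/k)], [(2.5/k, 1.1/k)])
assert abs(lhs - rhs) < 1e-9
```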

#### **7. Results for the FC and GFC Images of SF of FC**

Recently, too many papers have appeared that deal with the evaluation of FC and GFC operators of various special functions. They all use the same standard technique: replace the particular function by its power series, then interchange the order of (fractional order) integration and summation, etc. Usually, only the special functions are varied, together with the FC operators, which become more and more general (but all of these happen to be cases of our GFC operators). The *great number of combinations "special function* + *particular operator" explains the dramatically increasing production of such works*.

Based on our older results on GFC of SF, beginning with the book [9], in the papers [64,104,112,117,123], and in a recent survey [105] in this same journal, we proposed a unified approach by which this job can be done at once, for all SF of FC (meaning the *H*- and *G*-functions, and in particular the Wright g.h.f., the multi-index M-L functions and all their particular cases) and for all operators of GFC (meaning the generalized fractional integrals and derivatives of the forms (25) and (28), thus including the R-L, E-K, Saigo and Marichev–Saigo–Maeda operators, etc.). For the initiating idea, we must pay tribute to the classical results of the 20th century in the Bateman Project on Integral Transforms [7] and in the works of Askey [2], Lavoie–Osler–Tremblay [124], etc., for the R-L images of many elementary functions and of the simplest $\_pF\_q$-functions, such as $\_0F\_1$, $\_1F\_1$ and $\_1F\_2$. We combined these with the composition/decomposition rule (26), presenting the GFC operators as compositions of weighted R-L/E-K operators. As a recent survey on FC images of elementary functions, we mention also the work of Garrappa–Kaslik–Popolizio [125].

Below, we recall only the statements of the main results from the author's papers mentioned above, as surveyed in [105] in this same journal.

**Theorem 4.** *The $I\_{(\beta\_k),m}^{(\gamma\_k),(\delta\_k)}$-image* (23) *of an H-function is also an H-function whose last three components of the order are increased by m (the multiplicity of the GFC operator), with additional parameters depending on those of the generalized fractional integral. Namely,*

$$I\_{(\beta\_k),m}^{(\gamma\_k),(\delta\_k)}\left\{ H\_{u,v}^{s,t}\left[\lambda z\,\bigg|\begin{array}{c} (c\_i,C\_i)\_1^u \\ (d\_j,D\_j)\_1^v \end{array}\right]\right\} = H\_{u+m,v+m}^{s,t+m}\left[\lambda z\,\bigg|\begin{array}{c} (c\_i,C\_i)\_1^t,\ (-\gamma\_k,\frac{1}{\beta\_k})\_1^m,\ (c\_i,C\_i)\_{t+1}^u \\ (d\_j,D\_j)\_1^s,\ (-\gamma\_k-\delta\_k,\frac{1}{\beta\_k})\_1^m,\ (d\_j,D\_j)\_{s+1}^v \end{array}\right]. \tag{81}$$

Then, the GFC images of almost all SF of FC can be evaluated from (81). This result is based on a formula for the integral of a product of two arbitrary *H*-functions, that is, for the Mellin transform of such a product ([9] (App., (E.21)), [12] (§5.1, (5.1.1)), [14] (§2.25, (1))). A similar formula presents the GFC operators (with $G\_{m,m}^{m,0}$-kernel) of an arbitrary *G-function* in terms of another *G*-function with increased orders and additional parameters (Lemma 1.2.2 in [9] and Corollary 1 in [105]).

Since *most of the considered SF of FC are Wright g.h.f.*, the main and most useful result is as follows.

**Theorem 5.** *The image of a Wright g.h.f. $\_p\Psi\_q(z)$ under a generalized fractional integral* (23) (*a multiple, m-tuple Erdélyi–Kober integral*)*, provided $\delta\_k \geq 0$, $\gamma\_k > -1$, $k = 1,\dots,m$, $c > -1$, $\mu > 0$, $\lambda \neq 0$, is another Wright g.h.f. with indices p and q increased by the multiplicity m and with additional parameters related to those of the GFC integral:*

$$I\_{(\beta\_k)\_1^m,m}^{(\gamma\_k)\_1^m,(\delta\_k)\_1^m}\left\{ z^c\, {}\_p\Psi\_q\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,\lambda z^{\mu}\right]\right\} = z^c\, {}\_{p+m}\Psi\_{q+m}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (\gamma\_k+1+\frac{c}{\beta\_k},\frac{\mu}{\beta\_k})\_1^m \\ (b\_j,B\_j)\_1^q,\ (\gamma\_k+\delta\_k+1+\frac{c}{\beta\_k},\frac{\mu}{\beta\_k})\_1^m \end{array}\bigg|\,\lambda z^{\mu}\right]. \tag{82}$$

*In particular, for $c = 0$, $\mu = 1$, this result simplifies to $\_p\Psi\_q(\lambda z) \longrightarrow {}\_{p+m}\Psi\_{q+m}(\lambda z)$, as above.*

Similarly (Theorem 4.2 in [104]; Theorem 4 in [105]),

$$D\_{(\beta\_k)\_1^m,m}^{(\gamma\_k)\_1^m,(\delta\_k)\_1^m}\left\{ z^c\, {}\_p\Psi\_q\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,\lambda z^{\mu}\right]\right\}$$

$$= z^c\, {}\_{p+m}\Psi\_{q+m}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (\gamma\_k+\delta\_k+1+\frac{c}{\beta\_k},\frac{\mu}{\beta\_k})\_1^m \\ (b\_j,B\_j)\_1^q,\ (\gamma\_k+1+\frac{c}{\beta\_k},\frac{\mu}{\beta\_k})\_1^m \end{array}\bigg|\,\lambda z^{\mu}\right]. \tag{83}$$

The *simpler results for the $\_pF\_q$-functions* read by analogy (Corollaries 4.1 and 4.2 in [104]); for example, with all $\beta\_k = 1$:

$$I\_{1,m}^{(\gamma\_k)\_1^m,(\delta\_k)\_1^m}\left\{ z^c\, {}\_pF\_q(a\_1,\dots,a\_p;b\_1,\dots,b\_q;\lambda z)\right\}$$

$$= \left[\prod\_{k=1}^{m}\frac{\Gamma(\gamma\_k+c+1)}{\Gamma(\gamma\_k+\delta\_k+c+1)}\right] z^c\, {}\_{p+m}F\_{q+m}\left(a\_1,\dots,a\_p,\{\gamma\_k+c+1\}\_1^m;\ b\_1,\dots,b\_q,\{\gamma\_k+\delta\_k+c+1\}\_1^m;\ \lambda z\right). \tag{84}$$

We also describe the corollaries of the results (82) and (83) *for the particular cases of the most frequently used FC operators*, on which other authors have exercised their evaluations, namely for $m=1$ (R-L and E-K operators), $m=2$ (Saigo operators) and $m=3$ (M-S-M operators). These results for an arbitrary Wright g.h.f. are given below.

**Corollary 1.** *For the Riemann–Liouville (R-L) integrals and derivatives, the simplest results are parts of Lemmas 1 and 2 in Kiryakova [105]:*

$$R^{\delta}\left\{ z^c\, {}\_p\Psi\_q\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,\lambda z^{\mu}\right]\right\} = z^{c+\delta}\, {}\_{p+1}\Psi\_{q+1}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (c+1,\mu) \\ (b\_j,B\_j)\_1^q,\ (c+\delta+1,\mu) \end{array}\bigg|\,\lambda z^{\mu}\right], \tag{85}$$

$$D^{\delta}\left\{ z^c\, {}\_p\Psi\_q\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,\lambda z^{\mu}\right]\right\} = z^{c-\delta}\, {}\_{p+1}\Psi\_{q+1}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (c+1,\mu) \\ (b\_j,B\_j)\_1^q,\ (c+1-\delta,\mu) \end{array}\bigg|\,\lambda z^{\mu}\right]. \tag{86}$$

*The results for the E-K operators have the same expressions as in* (82) *and* (83) *with* $m = 1$*.*
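The simplest instance of (85) is easy to check by quadrature (a sketch assuming real parameters): with $p = q = 0$ (so that the Wright function is $\exp z$), $c = 0$, $\mu = \lambda = 1$, it states $R^{\delta}\{e^z\} = z^{\delta}\, {}\_1\Psi\_1[(1,1);(1+\delta,1)\,|\,z] = z^{\delta}\sum\_n z^n/\Gamma(n+1+\delta)$.

```python
import math

def rl_exp(z, delta, n=20000):
    # Riemann-Liouville integral R^delta{exp}(z) by composite Simpson;
    # the substitution u = (z - t)^delta removes the endpoint singularity:
    # int_0^z (z-t)^(delta-1) e^t dt = (1/delta) int_0^{z^delta} exp(z - u^(1/delta)) du
    U = z**delta
    g = lambda u: math.exp(z - u**(1/delta))
    h = U/n
    s = g(0) + g(U) + sum((4 if i % 2 else 2)*g(i*h) for i in range(1, n))
    return (1/delta)*(h/3)*s / math.gamma(delta)

def rhs_85(z, delta, terms=60):
    # z^delta * 1Psi1[(1,1); (1+delta,1) | z] = z^delta * sum z^n / Gamma(n+1+delta)
    return z**delta * sum(z**n / math.gamma(n+1+delta) for n in range(terms))

z, delta = 0.8, 0.4
assert abs(rl_exp(z, delta) - rhs_85(z, delta)) < 1e-8
```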

**Corollary 2.** *The images of the Wright g.h.f. $\_p\Psi\_q$ and, in particular, of the g.h.f. $\_pF\_q$ under the Saigo operators* (78) *are given by the formulas:*

$$I^{\alpha,\beta,\eta}\left\{ z^c\, {}\_p\Psi\_q\left[\begin{array}{c} (a\_i,A\_i)\_1^p \\ (b\_j,B\_j)\_1^q \end{array}\bigg|\,\lambda z^{\mu}\right]\right\} = z^{c-\beta}\, {}\_{p+2}\Psi\_{q+2}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (\eta-\beta+1+c,\mu),\ (1+c,\mu) \\ (b\_j,B\_j)\_1^q,\ (-\beta+1+c,\mu),\ (\alpha+\eta+1+c,\mu) \end{array}\bigg|\,\lambda z^{\mu}\right],\tag{87}$$

*(for c* = 0*, μ* = 1*, this is Corollary 3 in [112]) and*

$$I^{\alpha,\beta,\eta}\left\{ {}\_pF\_q\left(a\_1,\dots,a\_p;b\_1,\dots,b\_q;\lambda z\right)\right\} = \frac{\Gamma(\eta-\beta+1)}{\Gamma(1-\beta)\Gamma(\alpha+\eta+1)}\, z^{-\beta}\, {}\_{p+2}F\_{q+2}\left(a\_1,\dots,a\_p,\eta-\beta+1,1;\ b\_1,\dots,b\_q,1-\beta,\alpha+\eta+1;\ \lambda z\right). \tag{88}$$

**Corollary 3.** *The Marichev–Saigo–Maeda (M-S-M) operators* (68) *transform a Wright g.h.f. into the same kind of special function, with indices increased by 3:*

$$I^{\alpha,\alpha',\beta,\beta',\gamma}\left\{ {}\_p\Psi\_q\left[\begin{array}{c} (a\_i,A\_i)\_1^p \\ (b\_j,B\_j)\_1^q \end{array}\bigg|\,\lambda z^{\mu}\right]\right\} = z^{\gamma-\alpha-\alpha'}\, {}\_{p+3}\Psi\_{q+3}\left[\begin{array}{c} (a\_i,A\_i)\_1^p,\ (1,\mu),\ (\gamma-\alpha-\alpha'-\beta+1,\mu),\ (\beta'-\alpha'+1,\mu) \\ (b\_j,B\_j)\_1^q,\ (\beta'+1,\mu),\ (\gamma-\alpha-\alpha'+1,\mu),\ (\gamma-\alpha'-\beta+1,\mu) \end{array}\bigg|\,\lambda z^{\mu}\right]. \tag{89}$$

We state here also the more general result for images of arbitrary Wright generalized hypergeometric function *in the case of multiple Wright–Erdélyi–Kober operators* (33).

**Theorem 6.** (Kiryakova, [60], Theorem 9) *The image of a Wright generalized hypergeometric function $\_p\Psi\_q(z)$ under a multiple W-E-K operator* (33) *has the form*

$$I\_{(\beta\_k),(\lambda\_k),m}^{(\gamma\_k),(\delta\_k)}\left\{ {}\_p\Psi\_q\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,z\right]\right\} = {}\_{p+m}\Psi\_{q+m}\left[\begin{array}{c} (a\_i,A\_i)\_1^p;\ (\gamma\_k+1,1/\lambda\_k)\_1^m \\ (b\_j,B\_j)\_1^q;\ (\gamma\_k+\delta\_k+1,1/\beta\_k)\_1^m \end{array}\bigg|\,z\right]. \tag{90}$$

*Conversely, stated alternatively: each ${}\_{p+m}\Psi\_{q+m}$-function can be represented as a multiple (m-tuple) GFC operator $\widetilde{I}$ applied to a $\_p\Psi\_q$-function, the orders of which are reduced by m:*

$${}\_{p+m}\Psi\_{q+m}\left[\begin{array}{c} (a\_i,A\_i)\_{i=1}^{p};\ (a\_{p+i},A\_{p+i})\_{i=1}^{m} \\ (b\_j,B\_j)\_{j=1}^{q};\ (b\_{q+i},B\_{q+i})\_{i=1}^{m} \end{array}\bigg|\,z\right] = \widetilde{I}\left\{ {}\_p\Psi\_q\left[\begin{array}{c} (a\_i,A\_i)\_{i=1}^{p} \\ (b\_j,B\_j)\_{j=1}^{q} \end{array}\bigg|\,z\right]\right\},\tag{91}$$

*with*

$$\widetilde{I}\,f(z) = I\_{(1/B\_{q+i}),(1/A\_{p+i}),m}^{(a\_{p+i}-1)\_{i=1}^m,\ (b\_{q+i}-a\_{p+i})\_{i=1}^m}\, f(z) \quad \text{of the form (33)}.$$

*A long list of examples* of how these general results work at once for any of the SF of FC mentioned in the previous sections is provided in the author's works [104,105,112,117,123], including some of the particular cases of the Wright g.h.f. and of the multi-index M-L functions mentioned in Sections 5 and 6. There, we also provide full details on the reference items for the authors cited here only with years.

#### **8. Theory of SF of FC in View of GFC Operators**

Usually, the special functions of mathematical physics are defined by means of power series representations. However, some alternative representations can be used as their definitions. Let us mention the well-known *Poisson integral* (52) for the Bessel function and the analytical continuation of the Gauss hypergeometric function via the *Euler integral formula*. The *Rodrigues differential formulas*, involving repeated or fractional differentiation, are also used as definitions of the classical orthogonal polynomials and their generalizations. As to the other special functions (most of them being $\_pF\_q$- and $\_p\Psi\_q$-functions), such representations have been less popular and even unknown in the general case. There exist various integral and differential formulas, but they are, unfortunately, specific to each corresponding special function and scattered in the literature without a common idea relating them.

In our works since 1985 (e.g., [9] (Ch.4), [58,60]), we showed that all the classical SF and the SF of FC (in the sense of the generalized hypergeometric functions $\_pF\_q$ and $\_p\Psi\_q$) can be represented by means of generalized fractional integrals or derivatives of three basic elementary functions. On this basis, these special functions have been classified into three specific classes, and several new integral and differential representations have been proposed under a unified idea. Moreover, for these three classes of SF, *we provide analogs of the mentioned Poisson and Euler integral formulas and of the Rodrigues differential formulas*, which can also be used as *alternative definitions of these special functions, for their analytical extension, or in numerical algorithms*.

The idea is briefly explained as follows: (i) most of the classical SF (the SF of mathematical physics) and the SF of FC are nothing but modifications of the g.h.f. $\_pF\_q$ or $\_p\Psi\_q$; (ii) each $\_pF\_q$-function or $\_p\Psi\_q$-function can be represented as an E-K fractional *differintegral* (i.e., integral or derivative) of a $\_{p-1}F\_{q-1}$-function or $\_{p-1}\Psi\_{q-1}$-function, respectively; (iii) a finite number of steps (ii) leads to one of the basic g.h.f.: $\_0F\_{q-p}$ (for $q-p=1$: the Bessel function); $\_1F\_1$ (confluent h.f.) and $\_0F\_0$ (the exponential); or $\_2F\_1$ (Gauss h.f.) and $\_1F\_0$ (beta-distribution); in the Wright case, to the simplest functions $\_0\Psi\_{q-p}$, $\_1\Psi\_1$, $\_1\Psi\_0$, respectively; (iv) these three basic g.h.f. can themselves be considered as fractional differintegrals of three elementary functions, depending on whether $p < q$, $p = q$, or $p = q+1$; and (v) the compositions of the E-K operators arising in Step (iii) give generalized ($q$-tuple) fractional integrals or derivatives.

Thus, for the *simpler case of pFq-functions*, we have the following general proposition.

**Theorem 7.** (Kiryakova [58]) *All the generalized hypergeometric functions $\_pF\_q$ can be considered as generalized (q-tuple) fractional differintegrals* (24)*,* (30) (*with $G\_{m,m}^{m,0}$-kernels*) *of one of the elementary functions*

$$\cos\_{q-p+1}(z)\ \ (\text{if } p < q), \qquad z^{\alpha}\exp z\ \ (\text{if } p = q), \qquad z^{\alpha}(1-z)^{\beta}\ \ (\text{if } p = q+1), \tag{92}$$

*depending on whether $p < q$, $p = q$, or $p = q + 1$.*

It is based on a known auxiliary result going back to the Bateman Project on integral transforms [7], Askey [2] and Lavoie–Osler–Tremblay [124] for the R-L derivatives, which we have *paraphrased in terms of E-K operators* (e.g., Equation (4.2.2) in [9] and Lemma 3.2 in [58]) as follows:

$$\frac{\Gamma(a\_p)}{\Gamma(b\_q)}\, {}\_pF\_q(a\_1,\dots,a\_p;b\_1,\dots,b\_q;z) = \begin{cases} I\_{1,1}^{a\_p-1,\,b\_q-a\_p}\left\{ {}\_{p-1}F\_{q-1}(a\_1,\dots,a\_{p-1};b\_1,\dots,b\_{q-1};z)\right\}, & \text{if } b\_q > a\_p, \\[6pt] D\_{1,1}^{b\_q-1,\,a\_p-b\_q}\left\{ {}\_{p-1}F\_{q-1}(a\_1,\dots,a\_{p-1};b\_1,\dots,b\_{q-1};z)\right\}, & \text{if } b\_q < a\_p, \end{cases}\tag{93}$$

for all complex $z$; if $p = q+1$, we require additionally $|z| < 1$. This basic fact is then used repeatedly and combined with the composition/decomposition property (26) of the operators of GFC. In each of the three separate cases, we reach one of the basic functions (92) with the smallest possible first index $p$, namely: $\_0F\_{q-p}(z) = \cos\_{q-p+1}(z)$; $\_1F\_1(z)$ and then $\_0F\_0(z) = \exp z$; or $\_2F\_1(z)$ and then $\_1F\_0(\beta;-;z) = (1-z)^{-\beta}$.
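One step of (93) in the case $b\_q > a\_p$ is just the Euler integral for $\_1F\_1$ and can be verified numerically (a sketch, assuming real parameters and the E-K integral in the form $I\_{1,1}^{\gamma,\delta}f(z) = \frac{1}{\Gamma(\delta)}\int\_0^1 (1-\sigma)^{\delta-1}\sigma^{\gamma} f(z\sigma)\,d\sigma$):

```python
import math

def ek_integral(f, z, gam, delta, n=50000):
    """Erdelyi-Kober integral I_{1,1}^{gam,delta} f(z)
    = 1/Gamma(delta) * int_0^1 (1-s)^(delta-1) s^gam f(z s) ds  (composite Simpson)."""
    def g(s):
        return (1-s)**(delta-1) * s**gam * f(z*s)
    h = 1.0/n
    total = g(0) + g(1) + sum((4 if i % 2 else 2)*g(i*h) for i in range(1, n))
    return (h/3)*total / math.gamma(delta)

def hyp1f1(a, b, z, terms=60):
    # naive 1F1 partial sum
    return sum(math.gamma(a+n)/math.gamma(a) * math.gamma(b)/math.gamma(b+n)
               * z**n / math.factorial(n) for n in range(terms))

# Gamma(a)/Gamma(b) * 1F1(a;b;z) = I_{1,1}^{a-1, b-a} {exp},  for b > a > 0:
a, b, z = 1.5, 3.0, 0.7
lhs = math.gamma(a)/math.gamma(b) * hyp1f1(a, b, z)
rhs = ek_integral(math.exp, z, a-1, b-a)
assert abs(lhs - rhs) < 1e-5
```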

For the *Wright generalized hypergeometric functions* (4), this proposition reads almost the same; only the third basic function (for $p = q+1$) is more general, namely $\_1\Psi\_0 = H\_{1,1}^{1,1}$, and the GFC operators have as kernel the $H\_{m,m}^{m,0}$-function, with different parameters $\beta\_k$ and $\lambda\_k$ in the upper and lower rows.

**Theorem 8.** (Kiryakova [60] (Theorem 14)) *All the Wright generalized hypergeometric functions <sup>p</sup>*Ψ*<sup>q</sup> can be represented as multiple (q-tuple) W-E-K fractional integrals* (33)*, or their corresponding fractional derivatives, of one of the following three basic functions:*

$$\cos\_{q-p+1}(z) \text{ (if } p < q \text{) }, \text{ } \exp z \text{ (if } p = q \text{) }, \text{ } \,\_1\Psi\_0[(a, A) \, | \, z \text{] (if } p = q+1). \tag{94}$$

In this case, the basic used result is Theorem 6, following similar Steps (i)–(v) as described above.

The three cases, for both Theorems 7 and 8, are considered in detail, in separate statements.

(1) $p < q$. The Poisson integral representation (52) is extended in [9] (Ch.4) and [90] to the *hyper-Bessel functions* (58), $m \geq 2$, that is, to the $\_0F\_{m-1}$*-functions*, via generalized fractional integrals (24) of the function $\cos\_m$ (54), as follows:

$$f\_{\nu\_1,\ldots,\nu\_{m-1}}^{(m-1)}(z) = \sqrt{\frac{m}{(2\pi)^{m-1}}} \left(\frac{z}{m}\right)^{\nu\_1 + \ldots + \nu\_{m-1}} I\_{\frac{1}{m}, m-1}^{(\frac{k}{m}-1), (\nu\_k - \frac{k}{m} + 1)}\{\cos\_m(z)\}.\tag{95}$$

By analogy with the hyper-Bessel functions (58), we consider what we call *the Wright hyper-Bessel functions*:

$${}\_{0}\Psi\_{m}\left[\begin{array}{c} -\\(b\_{1},B\_{1}),\dots,(b\_{m},B\_{m}) \end{array}\;\middle|\;-z\right] = H\_{0,m+1}^{1,0}\left[z\,\bigg|\begin{array}{c} -\\(0,1),(1-b\_{1},B\_{1}),\dots,(1-b\_{m},B\_{m}) \end{array}\right]$$

$$=\sum\_{k=0}^{\infty}\frac{(-z)^{k}}{k!\;\Gamma(b\_{1}+kB\_{1})\dots\Gamma(b\_{m}+kB\_{m})} := J\_{b\_{1}-1,\dots,b\_{m}-1}^{B\_{1},\dots,B\_{m}}(z). \tag{96}$$

The latter notation is meant to remind of the analogy with the hyper-Bessel functions (58), to which (96) reduces when all $B\_k = 1$. It is easy to observe that (96) is a special case of the multi-index Mittag–Leffler functions (39), namely: $J\_{b\_1-1,\dots,b\_m-1}^{B\_1,\dots,B\_m}(z) = E\_{(1,B\_1,\dots,B\_m),(1,b\_1,\dots,b\_m)}^{(m+1)}(-z)$.
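A small numerical check of the series in (96) (a sketch, assuming real parameters): for $m = 1$, $b\_1 = B\_1 = 1$ it equals $\sum\_k (-z)^k/(k!)^2 = J\_0(2\sqrt{z})$, which we compare against the independent integral representation $J\_0(x) = \frac{1}{\pi}\int\_0^{\pi}\cos(x\sin t)\,dt$.

```python
import math

def wright_hyper_bessel(z, b, B, terms=60):
    """Series (96): sum_k (-z)^k / (k! * prod Gamma(b_i + k B_i))."""
    s = 0.0
    for k in range(terms):
        s += (-z)**k / (math.factorial(k)
                        * math.prod(math.gamma(bi + k*Bi) for bi, Bi in zip(b, B)))
    return s

# m = 1, b_1 = B_1 = 1: the series is J_0(2 sqrt(z)); compare with the
# integral representation J_0(x) = (1/pi) int_0^pi cos(x sin t) dt (midpoint rule).
z = 0.9
x = 2*math.sqrt(z)
n = 20000
j0 = sum(math.cos(x*math.sin((i+0.5)*math.pi/n)) for i in range(n)) * (math.pi/n) / math.pi
assert abs(wright_hyper_bessel(z, [1.0], [1.0]) - j0) < 1e-7
```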

We then have a result analogous to (95), and more general than (53) for the multi-index M-L functions: *each Wright hyper-Bessel function* $\_0\Psi\_{q-p}$*, $p < q$, can be represented by means of a Poisson type integral of the* $\cos\_{q-p+1}$*-function*, written in the form

$${}\_0\Psi\_{q-p}\left[\begin{array}{c} - \\ (b\_1,B\_1),\dots,(b\_{q-p},B\_{q-p}) \end{array}\bigg|\,-z\right] = J\_{b\_1-1,\dots,b\_{q-p}-1}^{B\_1,\dots,B\_{q-p}}(z) = I\_{(\frac{1}{B\_k}),(1),q-p}^{(\frac{k}{q-p+1}-1),\,(b\_k-\frac{k}{q-p+1})}\left\{\cos\_{q-p+1}\left((q-p+1)\,z^{\frac{1}{q-p+1}}\right)\right\}.\tag{97}$$

Let us now apply to the function $\_0\Psi\_{q-p}$ above, $p$ times, the results (90) and (91) (Theorem 6) with $m=1$, combined with the composition rule for the W-E-K integrals (33). Then, we obtain the following:

**Theorem 9.** (Kiryakova [60] (Theorem 15)) *Each <sup>p</sup>*Ψ*q-function with p* < *q is a generalized q-tuple W-E-K fractional (differ-)integral operator of* cos*q*−*p*+1(*z*)*,*

$${}\_{p}\Psi\_{q}\left[\begin{array}{c}(a\_{1},A\_{1}),\ldots,(a\_{p},A\_{p})\\(b\_{1},B\_{1}),\ldots,(b\_{q},B\_{q})\end{array}\;\middle|\;-z\right]=I^{(\gamma\_{k}),(\delta\_{k})}\_{(\frac{1}{B\_{k}}),(\lambda\_{k}),q}\left\{\cos\_{q-p+1}((q-p+1)z^{\frac{1}{q-p+1}})\right\},\tag{98}$$

*with the following parameters:*

$$\gamma\_k = \begin{cases} \frac{k}{q-p+1} - 1, \\\ a\_{k-q+p} - 1, \end{cases}; \quad \delta\_k = \begin{cases} b\_k - \frac{k}{q-p+1}, \\\ b\_k - a\_{k-q+p}, \end{cases}; \quad \lambda\_k = \begin{cases} 1, & k = 1, \dots, q-p \\\ \frac{1}{A\_{k-q+p}}, & k = q-p+1, \dots, q. \end{cases}$$

*If the following conditions are satisfied:*

$$b\_k > \frac{k}{q-p+1},\ k=1,\dots,q-p; \qquad b\_k > a\_{k-q+p} > 0,\ k=q-p+1,\dots,q,$$

$$B\_k \geq 1,\ k=1,\dots,q-p; \qquad B\_k \geq A\_{k-q+p},\ k=q-p+1,\dots,q,$$

*then relation* (98) *gives a Poisson type integral representation; otherwise, the operator in the R.H.S. should be interpreted as a multiple W-E-K derivative (see, e.g., Definition 7 in [60]), and then* (98) *turns into a new Rodrigues type differential formula, or a mixed differ-integral representation.*

The particular case of the Poisson type representation (53) for the multi-index M-L function has already been stated as Theorem 3 in Section 4.

In the other two cases, *p* = *q* and *p* = *q* + 1, the starting results for *<sup>p</sup>*Ψ*<sup>q</sup>* were formulated as Lemmas 11 and 12 in Kiryakova [60]:

$${}\_{1}\Psi\_{1}\left[\begin{array}{c} (a\_{1},A\_{1})\\(b\_{1},B\_{1}) \end{array}\bigg|z\right] = \mathcal{W}\_{1/B\_{1},1/A\_{1}}^{a\_{1}-1,b\_{1}-a\_{1}}\{\exp z\}, \text{ if } A\_{1} \ge B\_{1}, b\_{1} \ge a\_{1}, \text{ for } |z| < \infty; \tag{99}$$

$${}\_2\Psi\_1\left[\begin{array}{c} (a\_1,A\_1),(a\_2,A\_2) \\ (b\_1,B\_1) \end{array}\bigg|\,z\right] = W\_{1/B\_1,1/A\_1}^{a\_1-1,\,b\_1-a\_1}\left\{ {}\_1\Psi\_0\left[\begin{array}{c} (a\_2,A\_2) \\ - \end{array}\bigg|\,z\right]\right\} = W\_{1/B\_1,1/A\_1}^{a\_1-1,\,b\_1-a\_1}\left\{ H\_{1,1}^{1,1}\left[-z\,\bigg|\begin{array}{c} (1-a\_2,A\_2) \\ (0,1) \end{array}\right]\right\},\tag{100}$$

if *A*<sup>1</sup> ≥ *B*1, *b*<sup>1</sup> ≥ *a*1; and if *A*<sup>2</sup> < 1, for |*z*| < ∞; or if *A*<sup>2</sup> = 1, for |*z*| < 1.

After additional $(p-1)$ steps, passing from $\_p\Psi\_q$ via $\_1\Psi\_1$ to $\_0\Psi\_0$, respectively to $\_1\Psi\_0$, the following explicit results for the statement in Theorem 8 are provided in [60].

(2) *p* = *q*.

**Theorem 10.** *If $p = q$, each g.h.f. $\_p\Psi\_p(z)$ is a p-tuple W-E-K fractional integral of the exponential function; namely, if $B\_k \geq A\_k > 0$, $b\_k > a\_k > 0$, $k = 1,\dots,p$:*

$${}\_p\Psi\_p\left[\begin{array}{c} (a\_1,A\_1),\dots,(a\_p,A\_p) \\ (b\_1,B\_1),\dots,(b\_p,B\_p) \end{array}\bigg|\,z\right] = I\_{(\frac{1}{B\_k}),(\frac{1}{A\_k}),p}^{(a\_k-1),(b\_k-a\_k)}\left\{\exp z\right\}, \quad \text{for } |z| < \infty. \tag{101}$$

*If, for some indices k, the above inequalities for the parameters are not satisfied, the representation* (101) *turns into a differ-integral one or, in special cases, into a purely differential one.*

Theorem 10 suggests separating the g.h.f.-s $\_p\Psi\_p$ with $p = q$ into a *class of so-called Wright g.h.f. of confluent type*, involving the *confluent hypergeometric function* $\_1F\_1(a;b;z) = \Phi(a;b;z)$ and $\exp z$ as the simplest cases.

(3) $p = q + 1$. Analogously, we call the ${}\_{q+1}\Psi\_q$-functions with $p = q+1$ *Wright g.h.f. of Gauss type*, since the simplest case of such a special function is the Gauss function. We have the following specific result.

**Theorem 11.** *Each Wright g.h.f. of Gauss type $\_p\Psi\_q$, that is, with $p = q + 1$, is a q-tuple Wright–Erdélyi–Kober fractional integral (or differ-integral) of the $\_1\Psi\_0$-function. Namely, for $0 < A\_0 \leq 1$ and $b\_k > a\_k > 0$, $k = 1,\dots,q$:*

$${}\_{q+1}\Psi\_q\left[\begin{array}{c} (a\_0,A\_0),(a\_1,A\_1),\dots,(a\_q,A\_q) \\ (b\_1,B\_1),\dots,(b\_q,B\_q) \end{array}\bigg|\,z\right] = I\_{(\frac{1}{B\_k}),(\frac{1}{A\_k}),q}^{(a\_k-1),(b\_k-a\_k)}\left\{ {}\_1\Psi\_0\left[\begin{array}{c} (a\_0,A\_0) \\ - \end{array}\bigg|\,z\right]\right\}\tag{102}$$

$$= I\_{(\frac{1}{B\_k}),(\frac{1}{A\_k}),q}^{(a\_k-1),(b\_k-a\_k)}\left\{ H\_{1,1}^{1,1}\left[-z\,\bigg|\begin{array}{c} (1-a\_0,A\_0) \\ (0,1) \end{array}\right]\right\},\ \text{if } A\_0 < 1, \text{ for } |z| < \infty;\ \text{or if } A\_0 = 1, \text{ for } |z| < 1.$$

*For other arrangements between bk and ak, the operator in* (102) *is a generalized fractional derivative.*

For particular choices of the parameters *bk* and *ak* not satisfying the conditions *bk* > *ak* > 0, some *integer order differentiations* appear in place of the fractional integrals or derivatives and lead to *Rodrigues type differential formulas, analogous to those for the classical orthogonal polynomials*.

Note that the integral representation (102) generalizes the *Euler integral formula for the Gauss hypergeometric functions* that serves for its analytical extension outside |*z*| < 1 to the domain |arg(1 − *z*)| < *π*:

$$\,\_2F\_1(a\_1, a\_2; b\_1; z) = \frac{\Gamma(b\_1)}{\Gamma(a\_2)\Gamma(b\_1 - a\_2)} \int\_0^1 \frac{(1 - \sigma)^{b\_1 - a\_2 - 1} \sigma^{a\_2 - 1}}{(1 - z\sigma)^{a\_1}} d\sigma, \quad b\_1 > a\_2 > 0. \tag{103}$$

This gave us the reason to name *<sup>q</sup>*+1Ψ*<sup>q</sup>*, with *p* = *q* + 1, a Gauss type g.h.f.

In particular, for *A*<sup>0</sup> = 1, the basic function in the case *p* = *q* + 1 reduces to the geometric series:

$${}_{1}\Psi_{0}\left[\begin{matrix}(a_{0},1)\\ -\end{matrix}\,\middle|\,z\right] = H^{1,1}_{1,1}\left[-z\,\middle|\,\begin{matrix}(1-a_{0},1)\\(0,1)\end{matrix}\right] = G^{1,1}_{1,1}\left[-z\,\middle|\,\begin{matrix}1-a_{0}\\0\end{matrix}\right] = {}_{1}F_{0}(a_{0};-;z) = (1-z)^{-a_{0}}.$$
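The rightmost reduction is just the binomial series, which a few lines of code confirm numerically for |*z*| < 1; a minimal sketch (the function name `f10` is ours, not from the paper):

```python
def f10(a, z, kmax=200):
    # partial sum of the binomial series 1F0(a; -; z) = sum_k (a)_k z^k / k!,
    # convergent for |z| < 1
    s, term = 1.0, 1.0
    for k in range(kmax):
        term *= (a + k) * z / (k + 1)   # ratio (a)_{k+1} z^{k+1}/(k+1)! over previous term
        s += term
    return s

print(f10(1.5, 0.3), (1 - 0.3) ** -1.5)  # the two values agree
```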

Therefore, based on the statements in Theorems 7–11, we suggest a *classification of the classical SF and of the SF of FC into three classes*, namely "Bessel", "confluent" and "Gauss" types, depending on whether *p* < *q*, *p* = *q* or *p* = *q* + 1. *This approach can help applied scientists and engineers*, without requiring deep knowledge of SF, to think of them in a very general view as similar to a cosine (Bessel) function, an exponential, or a geometric series, because the fractional integrations preserve, in some sense, the asymptotic and general behavior.

The results from Theorems 7–11 for *<sup>p</sup>*Ψ*q*, and their specifications for the *pFq*-functions, yield also *several new integral and differential formulas for them*, with possible *hints for computational procedures*.

Below, we mention a few of them, in the *simpler cases of the pFq-functions*.

The case *p* = *q*: For the g.h.f. *pFp*, the integral representation can be written not only by means of $G^{p,0}_{p,p}$-functions in the kernel, but also avoiding SF, due to the decomposition property (26). Thus, we have an integral formula, as follows:

$$\begin{split} {}_pF_p(a_1, \ldots, a_p; b_1, \ldots, b_p; z) &= B\, z^{1-a_1}\, I^{(a_k-a_1),(b_k-a_k)}_{1,p}\left\{ z^{a_1-1} \exp z \right\} \\ &= B \int_0^1\!\!\cdots\!\!\int_0^1 \prod_{k=1}^p \left[ \frac{(1-\sigma_k)^{b_k-a_k-1}\, \sigma_k^{a_k-1}}{\Gamma(b_k-a_k)} \right] \exp(z\, \sigma_1 \ldots \sigma_p)\, d\sigma_1 \ldots d\sigma_p, \quad B := \prod_{j=1}^p \frac{\Gamma(b_j)}{\Gamma(a_j)}, \end{split} \tag{104}$$

under the conditions *bk* > *ak* > 0, *k* = 1, ..., *p*. If the parameters do not satisfy them, the GFC operator above is interpreted as a generalized fractional derivative of the form (30).
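For *p* = 1 and *b* > *a* > 0, (104) reduces to the classical Euler-type integral ${}_1F_1(a;b;z) = \frac{\Gamma(b)}{\Gamma(a)\Gamma(b-a)}\int_0^1 (1-\sigma)^{b-a-1}\sigma^{a-1} e^{z\sigma}\, d\sigma$, which can be cross-checked against the power series; a minimal sketch (all function names are ours):

```python
import math

def hyp1f1_series(a, b, z, kmax=120):
    # power series of the confluent hypergeometric function 1F1(a; b; z)
    s, term = 1.0, 1.0
    for k in range(kmax):
        term *= (a + k) / (b + k) * z / (k + 1)
        s += term
    return s

def hyp1f1_euler(a, b, z, m=20000):
    # midpoint-rule quadrature of the Euler-type integral (case p = 1 of (104)),
    # valid for b > a > 0; the midpoint rule avoids the endpoint singularities
    c = math.gamma(b) / (math.gamma(a) * math.gamma(b - a))
    h = 1.0 / m
    total = sum((1 - (i + 0.5) * h) ** (b - a - 1) * ((i + 0.5) * h) ** (a - 1)
                * math.exp(z * (i + 0.5) * h) for i in range(m))
    return c * total * h

print(hyp1f1_series(1.5, 3.0, 0.7), hyp1f1_euler(1.5, 3.0, 0.7))
```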

In particular, let all the differences *ak* − *bk* = *ηk*, *k* = 1, ..., *p*, be *non-negative integers*. In this case, we call the *pFp*-functions *"spherical" g.h.f. of confluent type*, by analogy with the spherical Bessel, hyper-Bessel functions and spherical multi-M-L functions *E*(*αi*),(*βi*)(*z*) with "semi-integer" indices, mentioned in Remark 1. Then, the operator in (104) turns into a differential operator *D<sup>η</sup>* of integer order *η* = *η*<sup>1</sup> + ... + *η<sup>p</sup>* ≥ 0 of the form (27), and we obtain a differential formula of the form

$$\begin{aligned} {}_pF_p(a_1,\ldots,a_p;b_1,\ldots,b_p;z) &= {}_pF_p(b_1+\eta_1,\ldots,b_p+\eta_p;b_1,\ldots,b_p;z) \\ &= \left[\prod_{j=1}^p \frac{\Gamma(b_j)}{\Gamma(b_j+\eta_j)}\right] \left[\prod_{k=1}^p \prod_{j=1}^{\eta_k} \left(z\frac{d}{dz}+b_k+j-1\right)\right] \{\exp z\} = Q_p(z)\{\exp z\}. \end{aligned} \tag{105}$$

The representation (105) gives an example of how the differential formulas for the "spherical" g.h.f. introduced by Kiryakova [9] can be used for their explicit calculation, especially in the case *p* = *q*, in the form *Qp*(*z*){exp *z*}, where *Qp*(*z*) is a *p*-degree polynomial. A special case of (105) with *bk* = *ηk* = 1, *k* = 1, ..., *p*, and $Q_p(z) = \left(\frac{d}{dz}\, z\right)^p$ was presented by Prudnikov–Brychkov–Marichev [14] (p. 593).
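As a quick check of (105) in the simplest case *p* = 1 with *b*<sup>1</sup> = *η*<sup>1</sup> = 1 (so *a*<sup>1</sup> = 2): the formula reduces to ${}_1F_1(2;1;z) = (z\frac{d}{dz}+1)\{e^z\} = (1+z)e^z$, which agrees with the power series; a minimal sketch (names ours):

```python
import math

def hyp1f1(a, b, z, kmax=80):
    # power series of the confluent hypergeometric function 1F1(a; b; z)
    s, term = 1.0, 1.0
    for k in range(kmax):
        term *= (a + k) / (b + k) * z / (k + 1)
        s += term
    return s

z = 0.8
print(hyp1f1(2.0, 1.0, z), (1.0 + z) * math.exp(z))  # the two values coincide
```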

The case *p* = *q* + 1: For the Gauss type g.h.f. *<sup>q</sup>*+1*Fq*, we have in the unit disk |*z*| < 1 an integral representation (written as a repeated integral, with no use of the kernel $G^{q,0}_{q,q}$-function), for $b_k > a_{k+1}$, $k = 1, \ldots, q$:

$$\begin{aligned} {}_{q+1}F_q(a_1,\ldots,a_{q+1};b_1,\ldots,b_q;z) &= \left[\prod_{j=1}^{q} \frac{\Gamma(b_{j})}{\Gamma(a_{j+1})\Gamma(b_{j}-a_{j+1})}\right] z^{1-a_{2}}\, I^{(a_{k+1}-a_{2}),(b_{k}-a_{k+1})}_{1,q} \left\{ z^{a_{2}-1} (1-z)^{-a_{1}} \right\} \\ &= \left[\prod_{j=1}^{q} \frac{\Gamma(b_{j})}{\Gamma(a_{j+1})\Gamma(b_{j}-a_{j+1})}\right] \int_{0}^{1}\!\!\cdots\!\!\int_{0}^{1} \prod_{k=1}^{q} \left[(1-\sigma_{k})^{b_{k}-a_{k+1}-1} \sigma_{k}^{a_{k+1}-1}\right] (1-z\,\sigma_{1}\ldots\sigma_{q})^{-a_{1}}\, d\sigma_{1}\ldots d\sigma_{q}. \end{aligned} \tag{106}$$

In this form, (106) can also be found in [14] (p. 438). In the case *q* = 1, this is exactly the *Euler integral formula* (103) for the Gauss hypergeometric function. Similarly, (106) provides a way for an analytical continuation of the functions *<sup>q</sup>*+1*Fq*(*z*) outside the unit disk to the domain | arg(1 − *z*)| < *π*.
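The continuation can be observed numerically: quadrature of the Euler integral (103) at *z* = −2, outside the unit disk where the hypergeometric series diverges, reproduces the closed form ${}_2F_1(1,1;2;z) = -\ln(1-z)/z$. A minimal sketch (names ours):

```python
import math

def gauss_2f1_euler(a1, a2, b1, z, m=20000):
    # midpoint-rule quadrature of the Euler integral (103),
    # valid for b1 > a2 > 0 and |arg(1 - z)| < pi, hence usable for |z| >= 1
    c = math.gamma(b1) / (math.gamma(a2) * math.gamma(b1 - a2))
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        s = (i + 0.5) * h
        total += (1 - s) ** (b1 - a2 - 1) * s ** (a2 - 1) * (1 - z * s) ** (-a1)
    return c * total * h

# 2F1(1, 1; 2; z) = -ln(1 - z)/z, evaluated here at z = -2:
print(gauss_2f1_euler(1, 1, 2, -2.0), math.log(3) / 2)
```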

In the case when the *ak*'s and *bk*'s do not satisfy the above conditions, the operator in (106) turns into a generalized fractional derivative, and this also provides useful corollaries. By analogy with the previous two cases (*p* < *q* and *p* = *q*), we introduce the notion of *spherical g.h.f. of Gauss type* when all the differences *ak*+<sup>1</sup> − *bk* = *ηk*, *k* = 1, ..., *q*, are non-negative integers. Then, *<sup>q</sup>*+1*Fq*(*z*) is representable by a purely differential operator acting on the function $(1-z)^{-a_1}$, and a special case of such a differential formula is presented in [14] (p. 572).

Another interesting case concerns the so-called *hypergeometric polynomials*

$${}_{q+1}F_q(-n, a_1, \ldots, a_q; b_1, \ldots, b_q; z) = \sum_{k=0}^{n} \frac{(-n)_k\, (a_1)_k \ldots (a_q)_k}{(b_1)_k \ldots (b_q)_k}\, \frac{z^k}{k!}. \tag{107}$$

Taking *aq*+<sup>1</sup> = −*n* with integer *n* ≥ 0 and *ak* > *bk* > 0, *k* = 1, ..., *q*, the fractional derivative form of the operator in (106) provides the *Rodrigues type formula* ([9] (Ch.4)):

$$\begin{split} \left[\prod_{j=1}^{q} \frac{\Gamma(a_j)}{\Gamma(b_j)}\right] {}_{q+1}F_q(-n, a_1, \ldots, a_q; b_1, \ldots, b_q; z) &= D^{(b_k-1),(a_k-b_k)}_{1,q}\{(1-z)^n\} \\ &= z^{1-a_1}\, D^{(b_k-a_1),(a_k-b_k)}_{1,q}\left\{z^{a_1-1}(1-z)^n\right\} = z^{1-b_q}\, D^{a_q-b_q}\, z^{a_q-b_{q-1}}\, D^{a_{q-1}-b_{q-1}} \\ &\quad\times \cdots\, z^{a_3-b_2}\, D^{a_2-b_2}\, z^{a_2-b_1}\, D^{a_1-b_1}\left\{z^{a_1-1}(1-z)^n\right\}. \end{split} \tag{108}$$

Special cases of (108) yield some *classical Rodrigues formulas*. For example, *p* = *q* = 1 with *a*<sup>1</sup> = *n* + 1, *b*<sup>1</sup> = 1 and *z* → (1 − *z*)/2 gives the *Rodrigues formula for the Legendre polynomials*:

$$P_n(z) = {}_2F_1\!\left(-n, n+1; 1; \frac{1-z}{2}\right) = \frac{(-1)^n}{2^n\, n!}\frac{d^n}{dz^n}\left[(1-z)^n (1+z)^n\right] = \frac{1}{2^n\, n!}\frac{d^n}{dz^n}\left\{(z^2-1)^n\right\},$$

and *p* = *q* = 2 with *a*<sup>1</sup> = *n* + 1, *b*<sup>1</sup> = 1, *a*<sup>2</sup> = *ζ*, *b*<sup>2</sup> = *p* (*ζ* > *p* > 0) gives the *Rodrigues formula for the Rice polynomials*, namely

$$R_n(z) = {}_3F_2(-n, n+1, \zeta; 1, p; z) = \frac{\Gamma(p)}{n!\,\Gamma(\zeta)}\left[\frac{d^n}{dz^n}\, z^{1-p}\left(\frac{d}{dz}\right)^{\zeta-p}\right]\{z^n(1-z)^n\}.$$
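The Legendre case can be verified numerically by comparing the terminating hypergeometric series with the *n*-fold derivative of $(z^2-1)^n$, computed exactly on polynomial coefficients; a minimal sketch (names ours):

```python
from math import comb, factorial

def legendre_rodrigues(n, z):
    # P_n(z) = 1/(2^n n!) d^n/dz^n (z^2 - 1)^n, via the binomial expansion
    # (z^2 - 1)^n = sum_k C(n,k) (-1)^{n-k} z^{2k}
    val = 0.0
    for k in range(n + 1):
        p = 2 * k
        if p >= n:   # lower-degree terms vanish after n differentiations
            val += comb(n, k) * (-1) ** (n - k) * factorial(p) // factorial(p - n) * z ** (p - n)
    return val / (2 ** n * factorial(n))

def legendre_2f1(n, z):
    # terminating series 2F1(-n, n+1; 1; (1-z)/2)
    x = (1 - z) / 2
    s, term = 1.0, 1.0
    for k in range(n):
        term *= (k - n) * (n + 1 + k) / ((1 + k) * (k + 1)) * x
        s += term
    return s

print(legendre_rodrigues(3, 0.4), legendre_2f1(3, 0.4))   # both give P_3(0.4) = -0.44
```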

#### **9. Numerical Aspects of SF of FC**

In the days before electronic computers, the necessary complement to a special function was the computation, *by hand*, of extended *tables* of its values, intended to make the function available to users, similarly to the familiar logarithm tables. After *mechanical calculators* appeared and became more widespread, several huge special-function-table projects were started. Let us mention as examples the handbooks Gradshteyn–Ryzhik [126] and Magnus–Oberhettinger [127], both initiated in 1943.

R. Askey (at the Conf. "SF 2000", ASU): ". . . The advent of *fast computing machines* was thought to have made special functions a subject of the past. The reality has been different. Continued development of older functions and *the introduction of new special functions has been the reality* ... and still remains to be discovered ... The classical handbooks as mentioned, although useful as references, may no longer be the primary means of accessing the special functions of mathematical physics. A number of high-level programs appeared that are better suited for this challenging purpose, such as *Mathematica, Maple, Matlab, Mathcad,* ..."

We would like to add a quotation from Stephen Wolfram [128] (Wolfram *Mathematica*), "... and special functions became a big business. Table making had become a major activity for the governments, and was thought strategically important. Particularly for things like navigation, nuclear physics, military reasons, H-bomb, etc. And there were lots of tables . . . Two aspects of the theory mattered then: – for numerical analysis, discovery of infinite series or other analytical expressions allowing rapid calculation; and – reduction of as many functions as possible to the given (better known) functions . . . " (*Author's comment*: compare with the approach applied in works of Kiryakova such as [9] (Ch.4), [58,60], discussed in Section 8). (S. Wolfram:): ". . . There gradually started to appear systematic reference works on the properties of special functions. Each one based on a lot of work . . . ", ". . . I guess integrals are timeless. *They don't really bear the marks of the human creators*. So we have the tables, but we really don't quite know where they came from ...". (*Author's comment*: However, it seems Marichev knew, and we refer to his book [11]).

In the 1960s and 1970s, much effort went into developing numerical algorithms for computers. Evaluation of special functions became a favorite area. S. Wolfram: "Well, a few years passed. And in 1986, I started designing *Mathematica*. I wanted to be sure to do a definitive job, and to have good numerics for all functions, for all values of parameters, anywhere in the complex plane, to any precision . . . And I remember very distinctly a phone call I had with someone at a government lab. And there was a *silence*. And then he said: "Look, you have to understand that *by the end of the 1990s* we hope to have the *integer-order* Bessel functions done to quad precision." ... (S.W., cont'd): "You know, it's actually quite a difficult thing to put a special function into *Mathematica*. You don't just have to do the numerics... So what makes a special function good? Well, we can start thinking about that question empirically. Asking what's true about the special functions we normally use. And of course, from what we have in *Mathematica* and in our *Wolfram Functions Site* [129], we should be in the best position to answer this."

Let us note that the standard SF—the hypergeometric functions (Gauss, *pFq*-), the Meijer *G*-function, etc.—are well presented there ([129]), but (it seems) *none of the M-L type, Wright and H-functions, that is, cases of SF of FC, are available yet*. Meanwhile, the fractional nature of the world needs better reflection by fractional order (FO) models, in whose solutions the so-called SF of FC appear. *Thus, this remains a challenging direction to be developed*.

Here, we provide only *short information on some recent numerical work done* with respect to the M-L function, the classical Wright function and only a few of their extensions.

For numerical algorithms and results in the case of *the Mittag–Leffler functions* (one- and two-parameter and the matrix analog), we start with a reference to Caputo–Mainardi [130] (1971). We note that this is one of the first works to propose a plot of the M-L function. At that time, without the possibility of taking advantage of software packages such as *Mathematica*, *Maple* and *Matlab*, this task was difficult, as it required managing series expansions convergent in the mathematical sense but not in the numerical sense. Further, some other authors worked on similar problems, either simultaneously but independently, or in the years afterward: Gorenflo–Loutchko–Luchko [131], 2002; Diethelm–Ford–Freed–Luchko [132], 2005; Podlubny [133], 2005–2009–2012, (v 1.2.0.0) 2021; Hilfer–Seybold and Seybold–Hilfer [134,135], 2006–2008; Garrappa [136], 2015 and Garrappa–Popolizio [137], 2018; etc.
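To make the numerical issue concrete, here is a naive sketch (ours, not from the cited works) of truncated-series evaluation of the two-parameter Mittag–Leffler function; it is reliable only for moderate |*z*|, precisely because for large |*z*| huge terms must cancel before the partial sums settle:

```python
import math

def mittag_leffler(alpha, beta, z, kmax=50, tol=1e-15):
    # E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
    # The series converges everywhere in the mathematical sense, but for
    # large |z| catastrophic cancellation ruins it numerically.
    s = 0.0
    for k in range(kmax):
        try:
            term = z ** k / math.gamma(alpha * k + beta)
        except OverflowError:        # Gamma overflow: the term is negligible
            break
        s += term
        if abs(term) < tol * max(1.0, abs(s)):
            break
    return s

# classical special cases: E_{1,1}(z) = e^z and E_{2,1}(-z^2) = cos z
print(mittag_leffler(1.0, 1.0, 1.0), math.e)
print(mittag_leffler(2.0, 1.0, -1.0), math.cos(1.0))
```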

Numerical algorithms and results on the *(classical) Wright function* (56) *and its special cases, including the Mainardi function*, can be found in works by Luchko [138], 2008; Luchko– Trujillo–Velasco [139], 2010; Consiglio [140], 2019; Mainardi–Consiglio [97], 2020; etc.
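For the Mainardi function $M_\nu(z) = \sum_{k\ge 0} (-z)^k/(k!\,\Gamma(-\nu k + 1 - \nu))$, a similarly naive series evaluation (sketch ours) must only take care of the poles of Γ, where the reciprocal vanishes; the known special cases $M_0(z) = e^{-z}$ and $M_{1/2}(z) = e^{-z^2/4}/\sqrt{\pi}$ serve as checks:

```python
import math

def mainardi(nu, z, kmax=60):
    # truncated series of the Mainardi function M_nu(z); terms where Gamma
    # is evaluated at a pole (non-positive integer) vanish, since 1/Gamma
    # is entire, so they are simply skipped
    s = 0.0
    for k in range(kmax):
        arg = -nu * k + 1 - nu
        if arg <= 0 and abs(arg - round(arg)) < 1e-12:
            continue
        s += (-z) ** k / (math.factorial(k) * math.gamma(arg))
    return s

print(mainardi(0.0, 1.0), math.exp(-1.0))
print(mainardi(0.5, 1.0), math.exp(-0.25) / math.sqrt(math.pi))
```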

Concerning the *Prabhakar (three-parameter) M-L type function* (38), see Garrappa [136], 2015; etc.

For the generalized exponential integrals as (63) and related generalized trigonometric functions involving M-L functions (in the sense of **6.2.**), and shown to be Wright g.h.f. <sup>2</sup>Ψ2, one can find some tables and plots for physically interesting parameters and related models, proposed by Mainardi and Masina [102] (2018) and Paris [103] (2020).

This list can surely be extended with more information.

We would like to attract readers' attention to the *challenging Open Problem*: What about the possibilities for numerical and graphical interpretations, plots and tables, and implementing software packages for some more general Special Functions of Fractional Calculus, such as the multi-index Mittag–Leffler functions? At least, to treat illustrative examples for a few typical sets of multi-indices?

#### **10. Conclusions**

In this survey, under the notion of *Special Functions of Fractional Calculus (SF of FC)*, we have in mind the Fox *H*-function and the Wright generalized hypergeometric functions *<sup>p</sup>*Ψ*<sup>q</sup>*, including the Mittag–Leffler function, its multi-index extensions and all their particular cases. The standard (classical) special functions (SF) naturally come as part of this scheme, as cases of the Meijer *G*-function and of the *pFq*-functions, including the many named SF and orthogonal polynomials. Here, we review some of the basic results on the theory of the SF of FC, obtained in the author's works over more than 30 years, and illustrate the wide spread and important role of these functions by several examples.

The short outline of the contents is as follows:

In Section 1, we start with a historical introduction to pay tribute to the older projects that gave life to the contemporary development of the topic. Some short definitions and facts on the considered basic special functions are given in Section 2. In Section 3, we pay attention to the use of the *H*- and *G*-functions, especially of orders (*m*, 0; 0, *m*) and (*m*, 0; *m*, *m*), as kernel-functions of generalized integral transforms of Laplace type and of the operators of the so-called generalized fractional calculus (GFC). In Section 4, we introduce the Mittag–Leffler functions and the multi-index Mittag–Leffler functions, with short information on their properties derived in the author's works. Sections 5 and 6 contain long lists of examples of SF that appear as cases of the multi-index Mittag–Leffler functions and, in a more general setting, of the Wright generalized hypergeometric functions *<sup>p</sup>*Ψ*<sup>q</sup>*. These also include citations to many other authors who introduced and applied such functions in their works. The author's unified approach to evaluating images of classical SF and of SF of FC under operators of FC and GFC is briefly described in Section 7, because the details are presented in another survey paper in the same journal [105]. In Section 8, we collect some of our basic propositions on the representations of the SF and of the SF of FC as operators of GFC of three basic and simplest elementary functions and propose a classification of the SF based on the cases *p* < *q*, *p* = *q* and *p* = *q* + 1. Thus, a new view of the theory of SF is proposed. Since the computational aspects related to the considered SF are of important interest for their applications, in Section 9 we provide some short information on the state of affairs and some recent works in this direction by other authors. A provoking challenge in this respect is mentioned.

**Author Contributions:** The ideas and results in this paper survey and reflect the author's sole contributions, resulting from more than 30 years research on the topic. The author has read and agreed to the published version of the manuscript.

**Funding:** This research received no financial funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** This paper is done under the working programs on bilateral collaboration contracts of Bulgarian Academy of Sciences with Serbian and Macedonian Academies of Sciences and Arts, and under the COST program, COST Action CA15225 'Fractional'.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Approximation of CDF of Non-Central Chi-Square Distribution by Mean-Value Theorems for Integrals**

**Árpád Baricz 1,2, Dragana Jankov Maširević <sup>3</sup> and Tibor K. Pogány 2,4,\***


**Abstract:** The cumulative distribution function of the non-central chi-square distribution *χ*<sup>2</sup> *<sup>n</sup>* (*λ*) of *n* degrees of freedom possesses an integral representation. Here we rewrite this integral in terms of a lower incomplete gamma function, applying two of the second mean-value theorems for definite integrals: one of Bonnet type and Okamura's variant of the du Bois–Reymond theorem. Related results are presented concerning the small-argument cases of the cumulative distribution function (CDF) and their asymptotic behavior near the origin.

**Keywords:** non-central *χ*<sup>2</sup> distribution; second mean-value theorem for definite integrals; modified Bessel function of the first kind; Marcum *Q*–function; lower incomplete gamma function

**MSC:** Primary: 26A24, 62E17; Secondary: 33C10, 60E99

#### **1. Introduction with Historical Notes and Motivation**

The non-central *<sup>χ</sup>*<sup>2</sup> distribution with *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> degrees of freedom (in general, *<sup>n</sup>* can be a non-negative real number, see [1] (p. 436), [2]) and non-centrality parameter *λ* > 0 is usually denoted by *χ* <sup>2</sup> *<sup>n</sup>* (*λ*) (see, e.g., [1] (p. 433)), and it is one of the most applied distributions: it is important in calculating the power function of some statistical tests [3], more precisely in approximating the power of *χ*2-tests applied to contingency tables (goodness of fit tests) ([1] (p. 467)); it frequently occurs in finance, estimation and decision theory and time series analysis [4,5], and it can also be regarded as a generalized Rayleigh distribution ([1] (p. 435)), in which case it is used in mathematical physics; when it is used in communication theory, the appropriate complementary cumulative distribution function is called the generalized Marcum *Q*-function, and the non-centrality parameter is interpreted as a signal-to-noise ratio [1].

The beginnings of the research that led up to the model and finally resulted in the *χ*<sup>2</sup> *n* distribution, which is the zero non-centrality parameter case of the non-central *χ*<sup>2</sup> *<sup>n</sup>* (*λ*), that is *χ*<sup>2</sup> *<sup>n</sup>* <sup>≡</sup> *<sup>χ</sup>*<sup>2</sup> *<sup>n</sup>* (0), can be located around the middle of the 19th century. More precisely, two main opinions have been exposed. Firstly, the influential work by Lancaster [6], who attributed certain preliminary results to Bienaymé in ([7] (p. 58)) (never mentioning the normal distribution), which are in fact the same as what Karl Pearson did to earn his tables [8]. Bienaymé's interest in the sum of random squares and the related distribution of errors is not surprising; namely, we should have in mind his celebrated result on the linearity of variance of a sum of independent random variables, called the Bienaymé formula. Lancaster proceeded then to Helmert, who in [9,10] derived what we understand in modern notation as the *χ*<sup>2</sup> *<sup>n</sup>* probability density function (PDF). Finally, Lancaster joined Kruskal [11] in suggesting to call the distribution by Helmert's name.

However, Sheynin [12], Plackett [13] and especially Kendall [14] have mentioned the contribution of the applied mathematician and physicist Ernst Abbe, who published

**Citation:** Baricz, Á.; Jankov Maširević, D.; Pogány, T.K. Approximation of CDF of Non-Central Chi-Square Distribution by Mean-Value Theorems for Integrals. *Mathematics* **2021**, *9*, 129. https://doi.org/10.3390/math9020129

Received: 9 December 2020 Accepted: 4 January 2021 Published: 8 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

<sup>1</sup> Department of Economics, Babe¸s-Bolyai University, 400591 Cluj-Napoca, Romania; arpad.baricz@econ.ubbcluj.ro

in his *venia docendi* thesis [15] in Jena, 1863, the *χ*<sup>2</sup> distribution's PDF ([12] (p. 1004)). It is also worth mentioning that Helmert himself never explicitly mentioned this distribution as Abbe's result in this manner, but several times quoted the "modified Abbe's criterion" in the geodetic literature ([12] (p. 1004)). Kendall emphasizes Abbe's priority (agreeing with Sheynin) and wrote a *laudatio* to his work regarding the derivation of the PDF of the *χ*<sup>2</sup> distribution (in a contemporary notation) ([14] (p. 311, Equation (11))), preceding Helmert by at least twelve years.

A random variable (rv) *ξ* possesses the non-central *χ*<sup>2</sup> distribution, which we signify by *<sup>ξ</sup>* <sup>∼</sup> *<sup>χ</sup>*<sup>2</sup> *<sup>n</sup>* (*λ*), if the associated probability density function is ([4] (p. 396, Equation (1.7)))

$$f\_{n, \lambda}(\mathbf{x}) = \frac{1}{2} \mathbf{e}^{-(\mathbf{x} + \lambda)/2} \left(\frac{\mathbf{x}}{\lambda}\right)^{(n-2)/4} I\_{n/2 - 1}(\sqrt{\lambda \cdot \mathbf{x}}), \qquad \lambda > 0, \mathbf{x} > 0; n \in \mathbb{N}, \tag{1}$$

where *I<sup>ν</sup>* stands for the modified Bessel function of the first kind of order *ν* ([16] (p. 77)), which has the power series representation ([17] (p. 375, Equation (9.6.10)))

$$I_{\nu}(z) = \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma(\nu + k + 1)} \left(\frac{z}{2}\right)^{2k+\nu}; \qquad \Re(\nu) > -1,\ z \in \mathbb{C}\,. \tag{2}$$
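The series (2) is easy to evaluate for moderate arguments; a minimal sketch (ours), checked against the elementary case $I_{1/2}(z) = \sqrt{2/(\pi z)}\,\sinh z$:

```python
import math

def besseli(nu, z, kmax=40):
    # truncated power series (2) for the modified Bessel function I_nu(z)
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(nu + k + 1))
               for k in range(kmax))

z = 1.3
print(besseli(0.5, z))
print(math.sqrt(2 / (math.pi * z)) * math.sinh(z))   # closed form for nu = 1/2
```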

As for the historical background of the related PDF and the associated cumulative distribution function (CDF), we consult the monographs [1,18]. In accordance with ([18] (Chapter 1, §5)), the PDF of *<sup>ξ</sup>* <sup>∼</sup> *<sup>χ</sup>*<sup>2</sup> *<sup>n</sup>* (*λ*) was pioneered in 1928 by Fisher [19] by a limiting process, while the explicit derivation belongs to Tang [20] ten years later (we also draw the interested reader's attention to [1] (Chapter 29, pp. 435 *et seq.*)). In 1949 Patnaik [21], then, among others, Pearson [22], Sankaran [23] and Temme [24] studied the *χ*<sup>2</sup> *<sup>n</sup>* (*λ*) distribution; Temme claimed certain computational advantages for his formulae:

$$F\_{n,\lambda}(\mathbf{x}) = \begin{cases} 1 - \frac{1}{2} \left(\frac{\mathbf{x}}{\lambda}\right)^{n/4} \left[ T\_{n/2 - 1}(\sqrt{\lambda \mathbf{x}}, \omega) - \sqrt{\frac{\lambda}{\mathbf{x}}} T\_{n/2}(\sqrt{\lambda \mathbf{x}}, \omega) \right], & \mathbf{x} > \lambda \\\frac{1}{2} \left(\frac{\mathbf{x}}{\lambda}\right)^{n/4} \left[ \sqrt{\frac{\lambda}{\mathbf{x}}} T\_{n/2}(\sqrt{\lambda \mathbf{x}}, \omega) - T\_{n/2 - 1}(\sqrt{\lambda \mathbf{x}}, \omega) \right], & \mathbf{x} < \lambda \end{cases}$$

where $\omega = \frac{1}{2}\left(\sqrt{x} - \sqrt{\lambda}\right)^2\!/\sqrt{\lambda x}$ and

$$T\_{\nu}(\alpha, \omega) = \int\_{\alpha}^{\infty} \mathbf{e}^{-(\omega + 1)t} I\_{\nu}(t) \, \mathbf{d}t.$$

Here we are interested in the CDF used in communication theory ([25] (p. 66, Equation (1.1)))

$$F_{n,\lambda}(x) = 1 - Q_{n/2}(\sqrt{\lambda}, \sqrt{x}\,), \qquad x > 0, \tag{3}$$

where [26]

$$Q_{\nu}(a,b) = \frac{1}{a^{\nu-1}} \int_{b}^{\infty} t^{\nu}\, \mathrm{e}^{-(t^2+a^2)/2}\, I_{\nu-1}(at)\, \mathrm{d}t, \qquad a, \nu > 0;\ b \ge 0, \tag{4}$$

denotes the generalized Marcum *Q*-function of the order *ν*.

Finally, it is worth mentioning that Brychkov recently published a closed expression for the generalized Marcum *Q*-function ([27] (p. 178, Equation (7))) in terms of the complementary error function *z* → erfc(*z*) ([28] (p. 160, Equation (7.2.2))), which immediately implies a new formula for the CDF (3) in the case when *n* ∈ N is odd. In turn, in the case of an even number of degrees of freedom, Jankov Maširević derived the following expression for the appropriate CDF for all *λ* > 0, *x* > 0 [25]

$$\begin{aligned} F_{2n,\lambda}(x) = 1 &- \frac{\sqrt{\lambda x}}{2}\, I_1(\sqrt{\lambda x}) \left[ K_0(\sqrt{\lambda x}) - K_0\!\left(\sqrt{\lambda x}, \ln \sqrt{\frac{x}{\lambda}}\right) \right] \\ &+ \lambda\, I_0(\sqrt{\lambda x})\, \frac{\partial}{\partial \lambda} \left[ K_0(\sqrt{\lambda x}) - K_0\!\left(\sqrt{\lambda x}, \ln \sqrt{\frac{x}{\lambda}}\right) \right] \\ &- \mathrm{e}^{-\frac{\lambda + x}{2}} \sqrt{\frac{\lambda}{x}} \sum_{m=1}^n \left( \sqrt{\frac{x}{\lambda}} \right)^m I_{m-1}(\sqrt{\lambda x})\,. \end{aligned} \tag{5}$$

Here *Kν* stands for the modified Bessel function of the second kind and

$$K_{\nu}(z, w) = \frac{\sqrt{\pi}}{\Gamma\left(\nu + \frac{1}{2}\right)} \left(\frac{z}{2}\right)^{\nu} \int_{0}^{w} \mathrm{e}^{-z \cosh t} \sinh^{2\nu} t\, \mathrm{d}t, \qquad \Re(\nu) > -\frac{1}{2},$$

is its incomplete variant ([29] (p. 26, Equation (1.30))), while

$$\lim_{w \to \infty} K_{\nu}(z, w) = K_{\nu}(z), \qquad \Re(z) > 0,$$

in the pointwise sense. Jankov Maširević established the computational efficiency of Expression (5) versus the formulae derived by Patnaik, and those by Temme for even *n* ∈ N, concluding that her approach is more efficient; compare ([25] (Section 3)).

The main aim of this paper is to present new results for the CDF (3) concerning approximation formulae obtained by two variants of the second mean-value theorems for definite integrals. Throughout, the non-centrality parameter *λ* > 0 and the variable *x* > 0.

#### **2. Preliminaries and Auxiliary Results**

Combining the integral form of the Marcum *Q*-function (4) and the integral ([30] (p. 306, Equation (2.15.5.4)))

$$\int_0^\infty t^{\nu+1}\, \mathrm{e}^{-pt^2}\, I_\nu(ct)\, \mathrm{d}t = \frac{c^\nu\, \mathrm{e}^{c^2/(4p)}}{(2p)^{\nu+1}}, \qquad \Re(p) > 0,\ \Re(\nu) > -1,\ |\arg(c)| < \pi,$$

we express the CDF (3) for all *λ* > 0 and *x* > 0 as

$$F_{n,\lambda}(x) = \frac{\mathrm{e}^{-\lambda/2}}{\lambda^{n/4 - 1/2}} \int_0^{\sqrt{x}} t^{n/2}\, \mathrm{e}^{-t^2/2}\, I_{n/2 - 1}(\sqrt{\lambda}\, t)\, \mathrm{d}t. \tag{6}$$

This formula is the starting point for our main results, which concern the approximate calculation of the involved integral using two different types of mean-value theorems.
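Representation (6) can be cross-checked against direct integration of the PDF (1); a minimal sketch (ours; the parameter values *n* = 3, *λ* = 1.5, *x* = 2 are arbitrary illustrative choices):

```python
import math

def besseli(nu, z, kmax=30):
    # truncated power series (2) for the modified Bessel function I_nu(z)
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(nu + k + 1))
               for k in range(kmax))

def cdf_via_pdf(n, lam, x, m=3000):
    # midpoint-rule integral of the PDF (1) over (0, x)
    h = x / m
    total = 0.0
    for i in range(m):
        t = (i + 0.5) * h
        total += 0.5 * math.exp(-(t + lam) / 2) * (t / lam) ** ((n - 2) / 4) \
                 * besseli(n / 2 - 1, math.sqrt(lam * t))
    return total * h

def cdf_via_6(n, lam, x, m=3000):
    # midpoint-rule evaluation of representation (6)
    b = math.sqrt(x)
    h = b / m
    total = 0.0
    for i in range(m):
        t = (i + 0.5) * h
        total += t ** (n / 2) * math.exp(-t ** 2 / 2) * besseli(n / 2 - 1, math.sqrt(lam) * t)
    return math.exp(-lam / 2) / lam ** (n / 4 - 1 / 2) * total * h

print(cdf_via_pdf(3, 1.5, 2.0), cdf_via_6(3, 1.5, 2.0))  # the two values agree
```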

Our next main tools are two mean-value theorems for integrals, whose integrands contain products of two suitable functions *f*, *g*, say. Both theorems belong to the so-called second mean-value theorems for definite integrals. The ancestor result of the first version belongs to Bonnet ([31] (p. 14)); for the second one, we refer to the memoir by du Bois–Reymond ([32] (p. 83)) or to Hobson's article [31]. The case in which at least one of the input functions *f*, *g* is a constant (the first mean-value theorem) we skip in our present considerations. Now, we recall the Bonnet variant of the second mean-value theorem, in the form given by Schwind–Ji–Koditschek.

**Theorem 1.** ([33] (p. 559, Theorem 2))*. Suppose f* ∈ C(*a*, *b*] *and g* ≥ 0 *is integrable on* (*a*, *b*)*. Let x* ∈ (*a*, *b*] *be fixed. If both* $\lim_{t\to a}(f(t)-K)/(t-a)^r$ *and* $\lim_{t\to a} g(t)/(t-a)^s$ *exist and differ from zero for some constant K, a non-zero r and some s* > −1 *with r* + *s* > −1*, then:*

*1. There exists cx* ∈ (*a*, *x*] *such that*

$$\int\_{a}^{x} f(t)g(t) \,\mathrm{d}t = f(c\_{x}) \int\_{a}^{x} g(t) \,\mathrm{d}t.$$

*2. Moreover, for any such choice of cx there holds*

$$\lim_{x \to a} \frac{c_x - a}{x - a} = \left(\frac{s+1}{r+s+1}\right)^{\frac{1}{r}}. \tag{7}$$

**Remark 1.** *We notice that often a good choice for K in Theorem 1 is to take K* = lim*t*→*a*<sup>+</sup> *f*(*t*) *if the limit exists or K* = 0 *otherwise, consult* ([33] (p. 561))*, also see* [34–36] *for ancestry of* (7)*, which describes the asymptotic behavior of cx.*
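The limit (7) is easy to observe numerically on a toy example of ours: take *a* = 0, *f*(*t*) = cos *t* (so *K* = 1, *r* = 2) and *g*(*t*) = *t* (so *s* = 1); then *cx* has a closed form and $c_x/x \to \left(\frac{s+1}{r+s+1}\right)^{1/r} = 1/\sqrt{2} \approx 0.7071$:

```python
import math

def c_x(x):
    # solve  ∫_0^x t·cos t dt = cos(c)·∫_0^x t dt  for c in (0, x]
    lhs = math.cos(x) + x * math.sin(x) - 1.0    # antiderivative of t·cos t at x
    return math.acos(lhs / (x * x / 2.0))

for x in (1.0, 0.1, 0.01):
    print(x, c_x(x) / x)    # tends to (1/2)^{1/2} as x -> 0, illustrating (7)
```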

Another approach to approximating the CDF of *χ*<sup>2</sup> *<sup>n</sup>* (*λ*) is based on the use of Okamura's version of du Bois–Reymond's second mean-value theorem for definite integrals [37,38].

**Theorem 2.** ([39] (Equation (14)))*. Let f* : [*a*, *b*] → R *be monotone and g* : [*a*, *b*] → R *integrable. Then there exists a c* ∈ [*a*, *b*] *such that*

$$\int_{a}^{b} f(t)\, g(t)\, \mathrm{d}t = f(a+) \int_{a}^{c} g(t)\, \mathrm{d}t + f(b-) \int_{c}^{b} g(t)\, \mathrm{d}t.$$

We point out that both Theorems 1 and 2 hold for Riemann integrable input functions. However, stronger second mean-value theorem results for definite integrals for Lebesgue integrable functions have been presented by Wituła–Hetmaniok–Słota in ([40] (p. 1614, Theorem 3)).

#### **3. Approximating CDF of** *χ***<sup>2</sup>** *<sup>n</sup>* **(***λ***) Distribution**

In this section we will state and prove our main results, derived from the formula (6) and the mean-value Theorems 1 and 2.

**Theorem 3.** *Let n* ∈ N, *λ* > 0 *and x* > 0*.*

*1. Then, there exists cx* <sup>∈</sup> (0, <sup>√</sup>*<sup>x</sup>* ] *such that*

$$F_{n,\lambda}(x) = \left(\frac{x}{\lambda}\right)^{n/4} \mathrm{e}^{-\frac{\lambda + c_x^2}{2}}\, I_{n/2}(\sqrt{\lambda x}). \tag{8}$$

*2. For $c_x$ there holds*

$$\lim_{x \to 0}\frac{c_x^2}{x} = \frac{n}{n+2}, \qquad n \in \mathbb{N},\tag{9}$$

*while*

$$F_{n,\lambda}(x) = \frac{\mathrm{e}^{-\lambda/2}}{\Gamma(n/2+1)}\left(\frac{x}{2}\right)^{n/2}\left(1 + \mathcal{O}(x)\right), \qquad x \to 0.\tag{10}$$

**Proof.** Consider the form of the CDF given in (6). We apply Theorem 1 with $f(t) = \mathrm{e}^{-t^2/2} \in C(\mathbb{R}_+)$, which implies $K = \lim_{t\to 0} f(t) = 1$; by L'Hospital's rule, with $r = 2$ it follows that

$$\lim\_{t \to 0} \frac{f(t) - K}{t^r} = \lim\_{t \to 0} \frac{\mathbf{e}^{-t^2/2} - 1}{t^2} = -\frac{1}{2} \neq 0;$$

then, choosing $g(t) = t^{n/2}\, I_{n/2-1}(t\sqrt{\lambda}\,)$ and $s = n-1$, we have

$$\lim\_{t \to 0} \frac{g(t)}{t^s} = \lim\_{t \to 0} \frac{I\_{n/2 - 1} \left( t\sqrt{\lambda} \right)}{t^{n/2 - 1}} = \frac{\lambda^{(n - 2)/4}}{2^{n/2 - 1} \Gamma(n/2)} \neq 0,$$

bearing in mind the asymptotics of the modified Bessel function $I_\nu$ as $z \to 0$, which is a consequence of (2):

$$I\_{\nu}(z) = \frac{1}{\Gamma(\nu+1)} \left(\frac{z}{2}\right)^{\nu} \left(1 + \mathcal{O}(z^2)\right), \qquad -\nu \notin \mathbb{N}.\tag{11}$$

Hence, for fixed $x > 0$, according to part 1 of Theorem 1, there exists a $c_x \in (0, \sqrt{x}\,]$ for which

$$F_{n,\lambda}(x) = \frac{\mathrm{e}^{-\frac{\lambda + c_x^2}{2}}}{\lambda^{n/4-1/2}} \int_0^{\sqrt{x}} t^{n/2}\, I_{n/2-1}\left(t\sqrt{\lambda}\right)\mathrm{d}t = \left(\frac{x}{\lambda}\right)^{n/4} \mathrm{e}^{-\frac{\lambda + c_x^2}{2}}\, I_{n/2}(\sqrt{\lambda x}\,),$$

where in the last equality the formula ([41] (p. 676, Equation (6.561.7)))

$$\int\_0^1 t^{\nu+1} \ I\_\nu(at) \, \mathrm{d}t = a^{-1} I\_{\nu+1}(a), \qquad \Re(\nu) > -1,$$

was taken.

By the second part of Theorem 1, bearing in mind that $c_x \in (0, \sqrt{x}\,]$ and setting $r = 2$, $s = n-1$, $a = 0$, we have

$$\lim_{x \to 0} \frac{c_x^2}{(\sqrt{x}\,)^2} = \lim_{x \to 0} \frac{c_x^2}{x} = \frac{n}{n+2}, \qquad n \in \mathbb{N},$$

that is, (9). Now, the asymptotic behavior (11) of the modified Bessel function yields Relation (10).
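Representation (8) and the limit (9) can be probed numerically; the sketch below (SciPy assumed; the parameters $n = 3$, $\lambda = 0.8$ are arbitrary) inverts (8) for $c_x^2$ using the library CDF of $\chi^2_n(\lambda)$ and checks that $c_x^2/x$ approaches $n/(n+2)$:

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv

n, lam = 3, 0.8   # arbitrary degrees of freedom and non-centrality

def cx2(x):
    """Invert representation (8) for c_x^2, taking scipy's CDF of chi^2_n(lambda) as exact."""
    F = ncx2.cdf(x, df=n, nc=lam)
    return -2.0 * np.log(F / ((x / lam) ** (n / 4) * iv(n / 2, np.sqrt(lam * x)))) - lam

for x in (1.0, 0.1, 1e-3):
    print(x, cx2(x) / x)   # by (9), the ratio tends to n/(n+2) = 0.6
```

The printed ratios drift toward $0.6$ as $x \to 0$, and $c_x^2$ stays inside $(0, x]$, as Theorem 3 asserts.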

**Corollary 1.** *Let the situation be the same as in the preamble of Theorem 3. Then there exists $c = c_x \in (0,1]$ such that*

$$F_{n,\lambda}(x) = \left(\frac{x}{\lambda}\right)^{n/4} \mathrm{e}^{-\frac{\lambda + x c^2}{2}}\, I_{n/2}\left(\sqrt{\lambda x}\right).\tag{12}$$

**Proof.** Using the substitution $u = t/\sqrt{x}$, from (6) we obtain, *mutatis mutandis*,

$$F\_{n,\lambda}(\mathbf{x}) = \sqrt{\lambda \mathbf{x}} \,\mathrm{e}^{-\lambda/2} \left(\frac{\mathbf{x}}{\lambda}\right)^{n/4} \int\_0^1 \boldsymbol{u}^{n/2} \,\mathrm{e}^{-\mathbf{x}\mathbf{u}^2/2} \, I\_{n/2-1} \left(\boldsymbol{u}\sqrt{\lambda \mathbf{x}}\right) \, \mathrm{d}\boldsymbol{u},\tag{13}$$

and then applying Theorem 1, repeating the above procedure with $f(u) = \mathrm{e}^{-xu^2/2}$, $r = 2$, $g(u) = u^{n/2}\, I_{n/2-1}(u\sqrt{\lambda x}\,)$ and $s = n-1$, we readily conclude Formula (12).

In what follows we propose some numerical approximations for the real number $c_x$ given in part 2 of Theorem 3, for small values of the non-centrality parameter $\lambda > 0$.

**Corollary 2.** *Let n* ∈ N *and x* > 0*. When λ* → 0*, in* (8) *we have*

$$c\_x^2 = -2\log\left[\left(\frac{2}{\mathbf{x}}\right)^{n/2}\gamma\left(\frac{n}{2} + 1, \frac{\mathbf{x}}{2}\right) + \mathbf{e}^{-\mathbf{x}/2}\right].\tag{14}$$

**Proof.** Combining Formula (3) with the limit ([26] (p. 70))

$$\lim_{a \to 0} Q_\nu(a,b) = \frac{1}{\Gamma(\nu)}\,\Gamma\left(\nu,\, b^2/2\right),$$

where Γ(·, ·) denotes the upper incomplete gamma function ([28] (p. 174, Equation (8.2.2)))

$$
\Gamma(a, z) = \int\_z^{\infty} t^{a-1} \, \mathbf{e}^{-t} \, \mathbf{d}t, \qquad \Re(a) > 0,
$$

we obtain

$$\lim\_{\lambda \to 0} F\_{n,\lambda}(\mathbf{x}) = 1 - \frac{\Gamma(n/2, \mathbf{x}/2)}{\Gamma(n/2)}.\tag{15}$$

Now, from (11) we observe

$$\lim_{\lambda \to 0} \frac{I_{n/2}(\sqrt{\lambda x}\,)}{\lambda^{n/4}} = \frac{x^{n/4}}{2^{n/2}\,\Gamma(n/2+1)},$$

which, in conjunction with (8), implies that

$$\lim\_{\lambda \to 0} F\_{n,\lambda}(x) = \left(\frac{x}{2}\right)^{n/2} \frac{\mathbf{e}^{-c\_x^2/2}}{\Gamma(n/2 + 1)}, \qquad c\_x \in (0, \sqrt{x}].\tag{16}$$

Equating the right-hand-side expressions in (15) and (16) we arrive at

$$
\Gamma(n/2+1) - \frac{n}{2}\Gamma(n/2, \mathbf{x}/2) = \left(\frac{\mathbf{x}}{2}\right)^{n/2} \mathbf{e}^{-c\_x^2/2}.\tag{17}
$$

The identities ([28] (p. 178, Equation (8.8.2–3)))

$$
\Gamma(a+1, z) = a\Gamma(a, z) + z^a \mathrm{e}^{-z}; \qquad \gamma(a, z) + \Gamma(a, z) = \Gamma(a),
$$

where *γ*(·, ·) is the lower incomplete gamma function, defined by ([28] (p. 174, Equation (8.2.1)))

$$\gamma(a, z) = \int\_0^z t^{a-1} \mathbf{e}^{-t} \, \mathbf{d}t, \qquad \Re(a) > 0,$$

transform (17) into

$$\left(\frac{2}{x}\right)^{n/2}\gamma\left(n/2+1,\, x/2\right) + \mathrm{e}^{-x/2} = \mathrm{e}^{-c_x^2/2}.$$

Now, obvious steps lead to the final form of $c_x^2$.
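The limit formula (14) can be compared with the value of $c_x^2$ obtained by inverting (8) at a very small non-centrality parameter; a hedged numerical sketch (SciPy assumed; $n = 3$, $x = 0.5$ are arbitrary choices):

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv, gammainc, gamma

n, x = 3, 0.5    # arbitrary parameters; lambda is sent toward 0

def cx2_from_cdf(lam):
    """c_x^2 obtained by inverting (8), taking scipy's CDF as exact."""
    F = ncx2.cdf(x, df=n, nc=lam)
    return -2*np.log(F / ((x/lam)**(n/4) * iv(n/2, np.sqrt(lam*x)))) - lam

# limit formula (14); gammainc is the REGULARIZED lower incomplete gamma,
# so multiply by Gamma(a) to get the unregularized gamma(a, z)
low = gammainc(n/2 + 1, x/2) * gamma(n/2 + 1)
cx2_limit = -2*np.log((2/x)**(n/2) * low + np.exp(-x/2))

print(cx2_from_cdf(1e-8), cx2_limit)
```

The two values agree to several digits, and the limit value stays inside $(0, x]$ as required of $c_x^2$.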

**Corollary 3.** *For small enough values of the non-centrality parameter $\lambda$ and of the argument $x$, the magnitude of the approximation satisfies the relation*

$$\frac{c_x^2}{x} - \frac{n}{n+2} = -\frac{n\,x}{4(n+4)} + o(x), \qquad x \to 0.\tag{18}$$

**Proof.** Recalling the asymptotics of the lower incomplete gamma function, which we deduce from its hypergeometric form ([17] (p. 262, Equation (6.5.12))) and write in Landau's notation,

$$\gamma(\alpha, z) = \frac{z^{\alpha}}{\alpha}\left(1 - \frac{\alpha\, z}{\alpha+1} + o(z)\right), \qquad z \to 0,$$

after asymptotic expansion of both expressions inside square brackets in (14), we get

$$\begin{split} \frac{c_x^2}{x} &= -\frac{2}{x}\log\left[\frac{x}{n+2}\left(1 - \frac{(n+2)x}{2(n+4)} + o(x)\right) + 1 - \frac{x}{2} + \frac{x^2}{8} + o(x^2)\right] \\ &= -\frac{2}{x}\log\left[1 - \frac{nx}{2(n+2)} + \frac{nx^2}{8(n+4)} + o(x^2)\right]. \end{split}$$

When $n$ is fixed and $x$ is small enough, it is legitimate to express the logarithm via its asymptotic expansion $\log(1+h) = h + o(h)$, $|h| < 1$, which yields (18).

**Remark 2.** *The associated limit result* (18) *enables the approximation*

$$F\_{n,\lambda}(\mathbf{x}) \simeq \left(\frac{\mathbf{x}}{\lambda}\right)^{n/4} \exp\left\{-\frac{\lambda}{2} - \frac{n\mathbf{x}}{2(n+2)} + \frac{n\mathbf{x}^2}{8(n+4)}\right\} I\_{n/2}(\sqrt{\lambda\mathbf{x}}) .$$

*This estimate can readily be used in the numerical calculation of the* CDF *for the purpose of comparison with other representations, such as Patnaik's and Temme's.*
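A rough numerical illustration of this estimate (SciPy assumed; the small values $n = 4$, $\lambda = 0.1$ are arbitrary, and `ncx2.cdf` serves as the reference):

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv

def F_remark2(x, n, lam):
    """Approximation of Remark 2, expected to be sharp for small x and small lambda."""
    expo = -lam/2 - n*x/(2*(n + 2)) + n*x**2/(8*(n + 4))
    return (x/lam)**(n/4) * np.exp(expo) * iv(n/2, np.sqrt(lam*x))

n, lam = 4, 0.1
for x in (0.05, 0.2, 1.0):
    approx, exact = F_remark2(x, n, lam), ncx2.cdf(x, df=n, nc=lam)
    print(x, approx, exact, abs(approx/exact - 1))
```

The printed relative discrepancy shrinks rapidly as $x$ decreases, in line with the small-$x$ character of (18).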

**Corollary 4.** *For all λ* > 0, *x* > 0 *we have*

$$F_{1,\lambda}(x) = \sqrt{\frac{2}{\pi\lambda}}\,\sinh(\sqrt{\lambda x}\,)\,\mathrm{e}^{-\frac{\lambda + c_x^2}{2}},\tag{19}$$

*where*

$$c_x^2 = x + 2\log\left[1 + \frac{1}{\lambda} - \sqrt{\frac{x}{\lambda}}\,\frac{\cosh(\sqrt{\lambda x}\,)}{\sinh(\sqrt{\lambda x}\,)}\right].$$

*Moreover,*

$$F_{2,\lambda}(x) = \sqrt{\frac{x}{\lambda}}\,\mathrm{e}^{-\frac{\lambda + c_x^2}{2}}\, I_1(\sqrt{\lambda x}\,),\tag{20}$$

*where*

$$c_x^2 = x + 2\log\left[1 - \sqrt{\frac{x}{\lambda}}\,\frac{I_2(\sqrt{\lambda x}\,)}{I_1(\sqrt{\lambda x}\,)}\right].$$

**Proof.** Having in mind that for non-negative integer *m* there holds ([27] (p. 178, Equation (7)))

$$\begin{aligned} Q\_{m+1/2}(a,b) &= \frac{1}{2} \Big[ \text{erfc} \left( \frac{b-a}{\sqrt{2}} \right) + \text{erfc} \left( \frac{b+a}{\sqrt{2}} \right) \Big] \\ &+ \mathbf{e}^{-(a^2+b^2)/2} \sum\_{k=1}^m \left( \frac{b}{a} \right)^{k-1/2} I\_{k-1/2}(ab), \end{aligned}$$

the Formula (3) for *n* = 1 becomes

$$F\_{1,\lambda}(x) = \frac{1}{2} \left[ \text{erf}\left(\frac{\sqrt{x} - \sqrt{\lambda}}{\sqrt{2}}\right) + \text{erf}\left(\frac{\sqrt{x} + \sqrt{\lambda}}{\sqrt{2}}\right) \right]. \tag{21}$$

As $(\operatorname{erf}(z))' = 2\mathrm{e}^{-z^2}/\sqrt{\pi}$, equating (21) with Formula (8) and then differentiating the resulting equality with respect to $\lambda$, we get

$$\mathbf{e}^{-\mathbf{x}/2} \left( \mathbf{e}^{-\sqrt{\lambda x}} - \mathbf{e}^{\sqrt{\lambda x}} \right) = \frac{2}{\lambda} \mathbf{e}^{-c\_x^2/2} \left( \sqrt{\lambda x} \cosh(\sqrt{\lambda x}) - (1 + \lambda) \sinh(\sqrt{\lambda x}) \right).$$

Finally, the definition of hyperbolic sine implies (19).

The Formula (3) for $n = 2$ becomes $F_{2,\lambda}(x) = 1 - Q(\sqrt{\lambda}, \sqrt{x}\,)$, where $Q_1(a,b) \equiv Q(a,b)$ is the Marcum Q-function. Now, knowing that ([42] (p. 1221, Equation (5)))

$$\frac{\partial Q(a,b)}{\partial a} = b \, I\_1(ab) \, \mathbf{e}^{-(a^2+b^2)/2} \,,$$

the first derivative of (8) with respect to $\lambda$ becomes

$$-\frac{\sqrt{x}}{2\sqrt{\lambda}}\,\mathrm{e}^{-(\lambda+x)/2}\, I_1(\sqrt{\lambda x}\,) = \sqrt{x}\,\mathrm{e}^{-(\lambda + c_x^2)/2}\left[\frac{\sqrt{x}}{2\lambda}\, I_2(\sqrt{\lambda x}\,) - \frac{I_1(\sqrt{\lambda x}\,)}{2\sqrt{\lambda}}\right],$$

that is

$$\mathrm{e}^{-x/2} = \mathrm{e}^{-c_x^2/2}\left[1 - \sqrt{\frac{x}{\lambda}}\,\frac{I_2(\sqrt{\lambda x}\,)}{I_1(\sqrt{\lambda x}\,)}\right],$$

giving (20).
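As an independent sanity check of the representation (21) that this proof starts from, one can compare it with a library evaluation of the CDF (SciPy assumed; the test points are arbitrary):

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import erf

def F1(x, lam):
    """Representation (21): CDF of chi^2_1(lambda) through the error function."""
    return 0.5*(erf((np.sqrt(x) - np.sqrt(lam))/np.sqrt(2))
                + erf((np.sqrt(x) + np.sqrt(lam))/np.sqrt(2)))

for x, lam in [(0.5, 0.3), (2.0, 1.0), (5.0, 4.0)]:
    print(x, lam, F1(x, lam), ncx2.cdf(x, df=1, nc=lam))
```

The two columns coincide to machine-level accuracy, confirming the erf form of the $n = 1$ CDF.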

The second approach to approximating the CDF of $\chi^2_n(\lambda)$ is to apply Theorem 2.

**Theorem 4.** *Let $\lambda > 0$, $x > 0$ and $R_\rho(n) = [(2/n-1)_+,\, 2/n+1)$, where $(a)_+ = \max\{0, a\}$. Then for all $\rho \in R_\rho(1) = [1,3)$ there exists some $c \in [0,1]$ for which*

$$F_{1,\lambda}(x) = \frac{\mathrm{e}^{-\lambda/2}}{\sqrt{\pi}}\left(\frac{x}{2}\right)^{(\rho-1)/4}\cosh(\sqrt{\lambda x}\,)\left[\gamma\left(\frac{3-\rho}{4}, \frac{x}{2}\right) - \gamma\left(\frac{3-\rho}{4}, \frac{x c^2}{2}\right)\right].\tag{22}$$

*When $\rho \in R_\rho(2) = [0,2)$ there exists a certain $c \in [0,1]$ such that*

$$F_{2,\lambda}(x) = \mathrm{e}^{-\lambda/2}\left(\frac{x}{2}\right)^{\rho/2}\Big\{ I_0(\sqrt{\lambda x}\,)\,\gamma\left(1-\rho/2,\, x/2\right)$$

$$+ \left[\delta_{\rho 0} - I_0(\sqrt{\lambda x}\,)\right]\gamma\left(1-\rho/2,\, x c^2/2\right)\Big\},\tag{23}$$

*where δab stands for the Kronecker delta.*

*Moreover, for all n* ∈ N<sup>3</sup> = {3, 4, . . . } *and ρ* ∈ *Rρ*(*n*) *there exists c* ∈ [0, 1] *such that*

$$F_{n,\lambda}(x) = \frac{\lambda^{(2-n)/4}}{\sqrt{2}}\,\mathrm{e}^{-\lambda/2}\left(\frac{x^{\rho}}{2^{\rho-1}}\right)^{n/4} I_{n/2-1}\left(\sqrt{\lambda x}\right)$$

$$\times \left[\gamma\left(\frac{(1-\rho)n+2}{4}, \frac{\mathbf{x}}{2}\right) - \gamma\left(\frac{(1-\rho)n+2}{4}, \frac{\mathbf{x}c^{2}}{2}\right)\right].\tag{24}$$

*We remark that the value of c is not necessarily the same throughout.*

**Proof.** Consider the CDF's integral representation (13), in which the integration domain is the unit interval $[0,1]$. Our intention is to specify the appropriate input functions $f$, $g$ in a simple way and, by scaling only the exponent of the power term (the integrand contains a product of three functions), to prepare it for the use of Okamura's Theorem 2. Precisely, consider for some real $\rho$ (whose range will be established later):

$$f_{n,\rho}(t) = t^{\rho n/2}\, I_{n/2-1}(t\sqrt{\lambda x}\,); \qquad g_{n,\rho}(t) = t^{(1-\rho)n/2}\, \mathrm{e}^{-x t^2/2}.$$

From Formula (2) we can conclude that the function $I_\nu(x)$ increases monotonically for $\nu > 0$, $x > 0$. Therefore, $f_{n,\rho}(t)$, as a product of monotonically increasing functions, also increases monotonically. However, to establish the interconnection between the scaling parameter $\rho$ and the degrees of freedom $n$, we are forced to employ a more sophisticated approach. Namely, investigating the monotone behavior of $f_{n,\rho}(t)$, $t \in (0,1]$, we start with

$$f_{n,\rho}'(t) = t^{\rho n/2-1}\left\{\left[(\rho+1)n/2 - 1\right] I_{n/2-1}\left(t\sqrt{\lambda x}\right) + t\sqrt{\lambda x}\, I_{n/2}\left(t\sqrt{\lambda x}\right)\right\}.\tag{25}$$

The function *I<sup>ν</sup>* is monotone decreasing with respect to the order, *viz.* ([43] (p. 220, Equation (2)))

$$I_{\nu}(x) > I_{\mu}(x), \qquad \mu > \nu \ge 0,\ x > 0,$$

also consult [44–46] regarding this question. So, evaluating (25) we get

$$\begin{aligned} f_{n,\rho}'(t) &\ge t^{\rho n/2-1}\left[(\rho+1)n/2 - 1 + t\sqrt{\lambda x}\right] I_{n/2}(t\sqrt{\lambda x}\,), \\ &\ge t^{\rho n/2-1}\left[(\rho+1)n/2 - 1\right] I_{n/2}(t\sqrt{\lambda x}\,), \end{aligned}$$

which is sufficient to see that $f_{n,\rho}'(t) > 0$ for $\rho > 2/n - 1$; also, $f_{n,2/n-1}'(t) > 0$ follows directly from (25). On the other side we have

$$\int_0^1 g_{n,\rho}(t)\,\mathrm{d}t = \frac{1}{2}\left(\frac{x}{2}\right)^{[(\rho-1)n-2]/4}\gamma\left([(1-\rho)n+2]/4,\; x/2\right);\tag{26}$$

this expression makes sense for *ρ* < 1 + 2/*n*. Thus, having in mind the finiteness of *fn*,*ρ*(0+) and collecting all these constraints we infer that the range of the scaling parameter *ρ* is the interval *Rρ*(*n*) = [(2/*n* − 1)+, 2/*n* + 1).

Firstly, consider *ρ* ∈ *Rρ*(1)=[1, 3) with the associated input functions

$$f_{1,\rho}(t) = t^{\rho/2}\, I_{-1/2}\left(t\sqrt{\lambda x}\right) = \frac{\sqrt{2/\pi}}{\sqrt[4]{\lambda x}}\, t^{(\rho-1)/2}\cosh(t\sqrt{\lambda x}\,); \qquad g_{1,\rho}(t) = t^{(1-\rho)/2}\,\mathrm{e}^{-x t^2/2}.\tag{27}$$

Since $\rho \ge 1$, the input limits are

$$f_{1,\rho}(0+) = 0; \qquad f_{1,\rho}(1) = \frac{\sqrt{2/\pi}}{\sqrt[4]{\lambda x}}\cosh(\sqrt{\lambda x}\,).$$

From (13) and Okamura's Theorem 2, (22) follows.

The case *n* = 2, *ρ* ∈ *Rρ*(2)=[0, 2) works since *I*0(0) = 1. *Ergo*, we have two different solutions: when *ρ* = 0 and, respectively, *ρ* ∈ *Rρ*(2) \ {0} ≡ (0, 2). Indeed, since

$$\begin{aligned} f\_{2,0}'(t) &= \sqrt{\lambda x} \, I\_1\left(t\sqrt{\lambda x}\right) > 0, \\ f\_{2,\rho>0}'(t) &= t^{\rho-1} \left[\rho \, I\_0\left(t\sqrt{\lambda x}\right) + t\sqrt{\lambda x} \, I\_1\left(t\sqrt{\lambda x}\right)\right] > 0, \qquad t \in (0,1], \end{aligned}$$

both $f_{2,0}(t)$ and $f_{2,\rho>0}(t)$ increase monotonically for $t \in (0,1]$. The associated limits read

$$f_{2,\rho}(0+) = \delta_{\rho 0}; \qquad f_{2,\rho}(1) = I_0(\sqrt{\lambda x}\,); \qquad \rho \in R_\rho(2),$$

which leads to the master Formula (23) for the CDF *F*2,*λ*(*x*).

It remains to consider $n \in \mathbb{N}_3$, $\rho \in R_\rho(n)$. Knowing that $I_\nu(0) = 0$ for $\Re(\nu) > 0$, we have the vanishing limit $f_{n,\rho}(0+) = 0$ for $\rho \ge 0$, and $f_{n,\rho}(1) = I_{n/2-1}(\sqrt{\lambda x}\,)$. By the monotonicity of $f_{n,\rho}(t)$ and the integration result (26) for $g_{n,\rho}(t)$ we get

$$\begin{split} F_{n,\lambda}(x) &= \sqrt{\frac{\lambda}{2}}\,\mathrm{e}^{-\lambda/2}\left(\frac{2}{\lambda}\right)^{n/4}\left(\frac{x}{2}\right)^{\rho n/4} I_{n/2-1}\left(\sqrt{\lambda x}\right) \\ &\quad\times\left[\gamma\left(\frac{(1-\rho)n+2}{4}, \frac{x}{2}\right) - \gamma\left(\frac{(1-\rho)n+2}{4}, \frac{x c^{2}}{2}\right)\right]. \end{split}$$

The rest is obvious. This completes the proof of the expression (24).
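The master formula (24) can be probed numerically: evaluate the exact CDF, solve (24) for the intermediate point $c$, and confirm $c \in [0,1]$. In the sketch below (SciPy assumed) the values $n = 3$, $\lambda = 1$, $x = 2$, $\rho = 0$ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv, gammainc, gamma
from scipy.optimize import brentq

n, lam, x, rho = 3, 1.0, 2.0, 0.0    # arbitrary; rho lies in R_rho(3) = [0, 5/3)
q = ((1 - rho)*n + 2)/4

def lgamma_inc(a, z):
    """Unregularized lower incomplete gamma(a, z)."""
    return gammainc(a, z)*gamma(a)

# prefactor of (24) and the reference CDF value
C = lam**((2 - n)/4)/np.sqrt(2)*np.exp(-lam/2) \
    * (x**rho/2**(rho - 1))**(n/4)*iv(n/2 - 1, np.sqrt(lam*x))
F = ncx2.cdf(x, df=n, nc=lam)

# solve (24) for c in [0, 1]
target = lgamma_inc(q, x/2) - F/C
c = brentq(lambda u: lgamma_inc(q, x*u**2/2) - target, 0.0, 1.0)
F_rec = C*(lgamma_inc(q, x/2) - lgamma_inc(q, x*c**2/2))
print(c, F_rec, F)
```

The solver returns an interior point $c \in (0,1)$, and plugging it back into (24) reproduces the CDF value.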

**Remark 3.** *Let $\xi_1$, $\xi_2$ be independent random variables defined on a standard probability space $(\Omega, \mathcal{A}, \mathsf{P})$ having $\chi^2_{n_1}(\lambda_1)$, $\chi^2_{n_2}(\lambda_2)$ distributions, respectively. Then the rv $\xi_1 + \xi_2 \sim \chi^2_n(\lambda)$, where $n = n_1 + n_2$ and $\lambda = \lambda_1 + \lambda_2$; see, e.g., ([18] (p. 33, Teorema 27)). According to this relation we can consider $F_{2,\lambda}(x)$ as the* CDF *of the sum of two $\chi^2_1(\lambda_j)$, $j = 1,2$ distributed random variables, where the linear combination $\lambda = \theta\lambda_1 + (1-\theta)\lambda_2$, $\theta \in [0,1]$ occurs between their non-centrality parameters.*

*Moreover, the values $\theta = 0, 1$ correspond to the problem of obtaining the* CDF *using the property $\chi^2_n(\lambda) = \chi^2_1(\lambda) + \chi^2_{n-1}(0) \equiv \chi^2_1(\lambda) + \chi^2_{n-1}$, where the non-central and the central rvs on the right are mutually independent; consult ([1] (p. 436)) and the related quotations therein.*

**Corollary 5.** *Let λ* > <sup>0</sup>*, x* > <sup>0</sup>*. Then for all n* ∈ N<sup>2</sup> = {2, 3, 4, ... } *there exists certain c* ∈ [0, 1] *such that*

$$F\_{n,\lambda}(\mathbf{x}) = \sqrt{\frac{\pi\lambda}{2}} \mathbf{e}^{-\lambda/2} \left(\frac{\mathbf{x}}{\lambda}\right)^{n/4} I\_{n/2-1}\left(\sqrt{\lambda\mathbf{x}}\right) \left[\text{erf}\left(\sqrt{\mathbf{x}/2}\right) - \text{erf}\left(\mathbf{c}\sqrt{\mathbf{x}/2}\right)\right].\tag{28}$$

*Also, for all n* ∈ N<sup>3</sup> = {3, 4, . . . } *there exists some c* ∈ [0, 1] *for which*

$$F\_{n,\lambda}(\mathbf{x}) = \mathbf{e}^{-\lambda/2} \left(\frac{2}{\lambda}\right)^{(n-2)/4} I\_{n/2-1}\left(\sqrt{\lambda\mathbf{x}}\right) \left[\gamma\left(\frac{n+2}{4}, \frac{\mathbf{x}}{2}\right) - \gamma\left(\frac{n+2}{4}, \frac{\mathbf{x}c^2}{2}\right)\right].\tag{29}$$

**Proof.** The first case occurs when *ρ* = 1 in Theorem 4. From (27) we have

$$f\_{1,1}(t) = \frac{\sqrt{2/\pi}}{\sqrt[4]{\lambda x}} \cosh(t\sqrt{\lambda x})$$

which results in $f_{1,1}(0+) = \sqrt{2/\pi}\big/\sqrt[4]{\lambda x}$. Hence, we consider $n \in \mathbb{N}_2$, in which case $f_{n,1}(0+) = 0$ and $f_{n,1}(1) = I_{n/2-1}(\sqrt{\lambda x}\,)$. Additionally, from (25) it follows for all $n \in \mathbb{N}_2$, $t \ge 0$, that

$$f_{n,1}'(t) = t^{n/2-1}\, I_{n/2-1}(t\sqrt{\lambda x}\,)(n-1) + \sqrt{\lambda x}\, t^{n/2}\, I_{n/2}(t\sqrt{\lambda x}\,) \ge 0.$$

So, $f_{n,1}(t)$ increases monotonically on $[0,1]$. Therefore

$$\begin{split} F_{n,\lambda}(x) &= \sqrt{\lambda x}\,\mathrm{e}^{-\lambda/2}\left(\frac{x}{\lambda}\right)^{n/4} I_{n/2-1}(\sqrt{\lambda x}\,)\int_c^1 \mathrm{e}^{-x t^2/2}\,\mathrm{d}t \\ &= \sqrt{\frac{\pi\lambda}{2}}\,\mathrm{e}^{-\lambda/2}\left(\frac{x}{\lambda}\right)^{n/4} I_{n/2-1}(\sqrt{\lambda x}\,)\left[\operatorname{erf}(\sqrt{x/2}) - \operatorname{erf}(c\sqrt{x/2})\right]. \end{split}$$

Here, the notation of the error function (or probability integral)

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z \mathrm{e}^{-t^2}\,\mathrm{d}t$$

has been used.

Taking $\rho = 0$ in Theorem 4, from (27) we get $f_{1,0}(t) = \sqrt{2/(\pi t)}\,(\lambda x)^{-1/4}\cosh(t\sqrt{\lambda x}\,)$, for which no right limit exists at zero; hence *a fortiori* $n > 1$. Having in mind the observations stated in the proof of Theorem 4 for $n \in \mathbb{N}_3$, the Formula (29) follows immediately from (24) by setting $\rho = 0$.

**Remark 4.** *Recalling the relation ([28] (p. 176, Equation (8.4.1)))*

$$
\gamma(1/2, \mathbf{x}) = \sqrt{\pi} \operatorname{erf}(\sqrt{\mathbf{x}})
$$

*the representation Formula* (28) *becomes*

$$F_{n,\lambda}(x) = \sqrt{\frac{\lambda}{2}}\,\mathrm{e}^{-\lambda/2}\left(\frac{x}{\lambda}\right)^{n/4} I_{n/2-1}\left(\sqrt{\lambda x}\right)\left[\gamma\left(1/2,\, x/2\right) - \gamma\left(1/2,\, x c^2/2\right)\right].$$
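The same kind of check applies to (28), where the intermediate point $c$ can be extracted with the inverse error function (SciPy assumed; $n = 2$, $\lambda = 1$, $x = 2$ are arbitrary):

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv, erf, erfinv

n, lam, x = 2, 1.0, 2.0    # arbitrary values with n >= 2

# prefactor of (28) and the reference CDF value
D = np.sqrt(np.pi*lam/2)*np.exp(-lam/2)*(x/lam)**(n/4)*iv(n/2 - 1, np.sqrt(lam*x))
F = ncx2.cdf(x, df=n, nc=lam)

# solve (28): erf(c*sqrt(x/2)) = erf(sqrt(x/2)) - F/D
c = erfinv(erf(np.sqrt(x/2)) - F/D)/np.sqrt(x/2)
F_rec = D*(erf(np.sqrt(x/2)) - erf(c*np.sqrt(x/2)))
print(c, F_rec, F)
```

Again $c$ lands inside $[0,1]$, and the reconstructed value matches the CDF.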

**Author Contributions:** The authors contributed equally to the manuscript and typed, read and approved the final version. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors are grateful to the referees for their careful reading of the first version of the manuscript and for helpful comments that improved the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Some Applications of the Wright Function in Continuum Physics: A Survey**

**Yuriy Povstenko**

Department of Mathematics and Computer Science, Faculty of Science and Technology, Jan Dlugosz University in Czestochowa, al. Armii Krajowej 13/15, 42-200 Czestochowa, Poland; j.povstenko@ajd.czest.pl

**Abstract:** The Wright function is a generalization of the exponential function and the Bessel functions. Integral relations between the Mittag–Leffler functions and the Wright function are presented. The applications of the Wright function and the Mainardi function to description of diffusion, heat conduction, thermal and diffusive stresses, and nonlocal elasticity in the framework of fractional calculus are discussed.

**Keywords:** fractional calculus; Caputo derivative; Mittag–Leffler functions; Wright function; Mainardi function; Laplace transform; Fourier transform; nonperfect thermal contact; nonlocal elasticity; fractional nonlocal elasticity

**MSC:** 26A33; 33E12; 35Q74; 74S40

#### **1. Introduction**

The fractional calculus (the theory of integrals and derivatives of non-integer order) has attracted considerable interest of researchers and has many applications in physics, chemistry, rheology, geology, hydrology, medicine, engineering, finance, etc. (see, for example, West–Bologna–Grigolini [1], Magin [2], Povstenko [3], Tarasov [4], Povstenko [5], Uchaikin [6], Atanacković–Pilipović–Stanković–Zorica [7], Herrmann [8], Povstenko [9], Datsko–Gafiychuk–Podlubny [10], West [11], Skiadas [12], Tarasov [13], Kumar–Singh [14], Su [15] and references therein). The Mittag–Leffler functions and the Wright function appear in solutions of various types of equations with fractional operators. The Mittag–Leffler function in one parameter $E_\alpha(z)$ was introduced in [16,17]. The generalized Mittag–Leffler function in two parameters $E_{\alpha,\beta}(z)$ was considered in [18,19]. A comprehensive treatment of properties of the Mittag–Leffler functions can be found in Erdélyi–Magnus–Oberhettinger–Tricomi [20], Gorenflo–Mainardi [21], Podlubny [22], Kilbas–Srivastava–Trujillo [23], Gorenflo–Kilbas–Mainardi–Rogosin [24]. Numerical algorithms for calculation of the Mittag–Leffler functions were proposed in [25] and implemented in [26]. The Wright function was presented in [27,28] and later on discussed by Erdélyi–Magnus–Oberhettinger–Tricomi [20], Gorenflo–Mainardi [21], Podlubny [22], Kilbas–Srivastava–Trujillo [23], Gorenflo–Kilbas–Mainardi–Rogosin [24], Luchko [29], among others. Numerical algorithms for calculating the Wright function were suggested in [30].

In 1996, Mainardi [31,32] solved the diffusion-wave equation with the Caputo fractional derivative of the order *α*

$$\frac{\partial^{\alpha}T}{\partial t^{\alpha}} = a\,\frac{\partial^{2}T}{\partial x^{2}}, \qquad 0 < \alpha \le 2,\tag{1}$$

on a real line (the Cauchy problem) and on a half-line (the signaling problem). The solutions were obtained in terms of the Mainardi function $M\left(z; \frac{\alpha}{2}\right)$ [33], where

$$z = \frac{|x|}{\sqrt{a}\, t^{\alpha/2}}\tag{2}$$

**Citation:** Povstenko, Y. Some Applications of the Wright Function in Continuum Physics: A Survey. *Mathematics* **2021**, *9*, 198. https:// doi.org/10.3390/math9020198

Received: 23 December 2020 Accepted: 17 January 2021 Published: 19 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

is the similarity variable, and $a$ can be treated as the generalized thermal diffusivity coefficient.

Equation (1) in the limiting case *α* → 0 corresponds to the Helmholtz equation (localized diffusion); the subdiffusion regime is characterized by the values 0 < *α* < 1. For 1 < *α* < 2, the diffusion-wave Equation (1) interpolates between the diffusion equation (*α* = 1) and the wave equation (*α* = 2).

Applications of fractional calculus to viscoelasticity have been studied by many authors. The historical notes and the extensive bibliography on this subject can be found in the book of Mainardi [34]. According to the Scott–Blair stress-strain law, the dependence between the stress $\sigma(x,t)$ and the strain $\varepsilon(x,t)$ can be written as [34,35]

$$
\sigma(\mathbf{x}, t) = \rho a \frac{\partial^{\nu} \varepsilon(\mathbf{x}, t)}{\partial t^{\nu}}, \qquad 0 \le \nu \le 1. \tag{3}
$$

The constitutive Equation (3) characterizes a viscoelastic material intermediate between a perfectly elastic solid (the Hooke law for the value *ν* = 0) and a perfectly viscous fluid (the Newton law when *ν* = 1) with the corresponding interpretations of the coefficient *a* in terms of the elasticity constant or the kinematic viscosity. The relation (3) leads to the evolution Equation (1) with *α* = 2 − *ν*.

The book [36] presents a picture of the state of the art for solutions of the diffusion-wave equation with one, two, and three space variables in Cartesian, cylindrical, and spherical coordinates under different kinds of boundary conditions.

In the present survey article, we briefly discuss the properties of the Mittag–Leffler functions and the Wright function and present the integral relations between the Mittag–Leffler functions and the Wright function. The applications of the Wright function and the Mainardi function to the description of diffusion, heat conduction, thermal and diffusive stresses, and nonlocal elasticity in the framework of fractional calculus are reviewed.

#### **2. Mathematical Preliminaries**

*2.1. Integrals and Derivatives of Fractional Order*

The Riemann–Liouville integral of fractional order *α* is defined as [21–23]:

$$I^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1} f(\tau)\,\mathrm{d}\tau, \qquad \alpha > 0,\tag{4}$$

where Γ(*α*) is the gamma function.

The Riemann–Liouville derivative of fractional order *α* has the form

$$D_{RL}^{\alpha} f(t) = \frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\left[\frac{1}{\Gamma(n-\alpha)}\int_0^t (t-\tau)^{n-\alpha-1} f(\tau)\,\mathrm{d}\tau\right], \qquad n-1 < \alpha < n,\tag{5}$$

whereas the Caputo fractional derivative is written as

$$D_{C}^{\alpha} f(t) \equiv \frac{\mathrm{d}^{\alpha} f(t)}{\mathrm{d}t^{\alpha}} = \frac{1}{\Gamma(n-\alpha)}\int_0^t (t-\tau)^{n-\alpha-1}\,\frac{\mathrm{d}^{n} f(\tau)}{\mathrm{d}\tau^{n}}\,\mathrm{d}\tau, \qquad n-1 < \alpha < n.\tag{6}$$

The fractional operators have the following Laplace transform rules:

$$
\mathcal{L}\{I^{\alpha} f(t)\} = \frac{1}{s^{\alpha}} f^{\*}(s),
\tag{7}
$$

$$\mathcal{L}\{D_{RL}^{\alpha} f(t)\} = s^{\alpha} f^{\*}(s) - \sum_{k=0}^{n-1} D^{k} I^{n-\alpha} f(0^{+})\, s^{n-1-k}, \qquad n-1 < \alpha < n,\tag{8}$$

$$\mathcal{L}\left\{\frac{\mathrm{d}^{\alpha} f}{\mathrm{d}t^{\alpha}}\right\} = s^{\alpha} f^{\*}(s) - \sum_{k=0}^{n-1} f^{(k)}(0^{+})\, s^{\alpha-1-k}, \qquad n-1 < \alpha < n.\tag{9}$$

Here, the asterisk denotes the transform, and *s* is the Laplace transform variable.
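A small quadrature sketch may clarify definition (6) for $0 < \alpha < 1$: the weak endpoint singularity $(t-\tau)^{-\alpha}$ is delegated to `quad`'s algebraic weight, and the result is checked against the classical value $D_C^{\alpha} t^2 = 2t^{2-\alpha}/\Gamma(3-\alpha)$ (SciPy assumed; the point $t = 1.3$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo(df1, t, alpha):
    """Caputo derivative (6) for 0 < alpha < 1 (n = 1), via weighted quadrature.
    df1 is the first derivative of f; the kernel (t-tau)^(-alpha) is supplied
    exactly through quad's algebraic weight w(tau) = tau^0 * (t - tau)^(-alpha)."""
    val, _ = quad(df1, 0, t, weight='alg', wvar=(0.0, -alpha))
    return val/gamma(1 - alpha)

alpha, t = 0.5, 1.3
num = caputo(lambda tau: 2*tau, t, alpha)        # f(t) = t^2, so f'(tau) = 2*tau
exact = 2*t**(2 - alpha)/gamma(3 - alpha)
print(num, exact)
```

The numerical and analytic values coincide to quadrature accuracy, which also makes the Laplace rule (9) easy to verify symbolically for this $f$.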

#### *2.2. Mittag–Leffler Functions*

The Mittag–Leffler function in one parameter *α*

$$E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k+1)}, \qquad \alpha > 0,\ z \in \mathbb{C},\tag{10}$$

can be considered as the extension of the exponential function e*<sup>z</sup>* = *E*1(*z*), whereas the generalized Mittag–Leffler function in two parameters *α* and *β* is defined by the series representation

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k+\beta)}, \qquad \alpha > 0,\ \beta > 0,\ z \in \mathbb{C}.\tag{11}$$

In the general case, the parameters *α* and *β* can be treated as complex numbers with some limitations on their real parts [24], but we restrict ourselves to positive values of *α* and *β*.

The following recurrence relations [20,24]

$$E_{\alpha,\beta}(z) = \frac{1}{\Gamma(\beta)} + z E_{\alpha,\alpha+\beta}(z),\tag{12}$$

$$E_{\alpha,\beta}(z) = \beta E_{\alpha,\beta+1}(z) + \alpha z\,\frac{\mathrm{d}E_{\alpha,\beta+1}(z)}{\mathrm{d}z}\tag{13}$$

are valid for the Mittag–Leffler functions.
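The series (10)–(11) and the recurrence (12) are easy to probe with a truncated partial sum (a minimal sketch, SciPy assumed; the truncation length is an ad hoc choice adequate for moderate $|z|$):

```python
import numpy as np
from scipy.special import gamma

def ml(alpha, beta, z, kmax=120):
    """Truncated partial sum of the series (11) for E_{alpha,beta}(z)."""
    k = np.arange(kmax)
    return float(np.sum(z**k/gamma(alpha*k + beta)))

# sanity check: E_1(z) = exp(z)
print(ml(1.0, 1.0, 1.0), np.exp(1.0))

# recurrence (12): E_{a,b}(z) = 1/Gamma(b) + z E_{a,a+b}(z)
a, b, z = 0.7, 1.3, -0.8
print(ml(a, b, z), 1/gamma(b) + z*ml(a, a + b, z))
```

Both printed pairs agree to near machine precision; for large negative arguments the partial sum becomes ill-conditioned, which is exactly where the asymptotics (14)–(17) take over.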

For the investigation of the convergence of integrals containing the Mittag–Leffler functions, their asymptotic representations for large negative values of the argument are useful. For $x \to \infty$, we have

$$E_{\alpha}(-x) \sim \frac{1}{\Gamma(1-\alpha)\,x},\tag{14}$$

$$E_{\alpha,2}(-x) \sim \frac{1}{\Gamma(2-\alpha)\,x},\tag{15}$$

$$E_{\alpha,\alpha}(-x) \sim -\frac{1}{\Gamma(-\alpha)\,x^{2}},\tag{16}$$

$$E_{\alpha,\beta}(-x) \sim \frac{1}{\Gamma(\beta-\alpha)\,x}.\tag{17}$$

The essential role of the Mittag–Leffler functions in fractional calculus is connected with the formula for the inverse Laplace transform (see Gorenflo–Mainardi [21], Podlubny [22], Kilbas–Srivastava–Trujillo [23], Gorenflo–Kilbas–Mainardi–Rogosin [24]):

$$\mathcal{L}^{-1}\left\{\frac{s^{\alpha-\beta}}{s^{\alpha}+b}\right\} = t^{\beta-1} E_{\alpha,\beta}(-b t^{\alpha}).\tag{18}$$

#### *2.3. Wright Function and Mainardi Function*

The Wright function is a generalization of the exponential function and the Bessel functions and is defined as [27,28] (see also refs. [20–24,31,32,37–39])

$$\mathcal{W}(\alpha,\beta;z) = \sum_{k=0}^{\infty} \frac{z^k}{k!\,\Gamma(\alpha k+\beta)}, \qquad \alpha > -1, \quad \beta \in \mathbb{C}, \quad z \in \mathbb{C}.\tag{19}$$

The Wright function satisfies the recurrence equations [20]

$$\alpha z\,\mathcal{W}(\alpha,\alpha+\beta;z) = \mathcal{W}(\alpha,\beta-1;z) + (1-\beta)\,\mathcal{W}(\alpha,\beta;z),\tag{20}$$

$$\frac{\mathrm{d}\mathcal{W}(\alpha,\beta;z)}{\mathrm{d}z} = \mathcal{W}(\alpha,\alpha+\beta;z).\tag{21}$$

The Mainardi function *M*(*α*; *z*) [22,31–33] is a particular case of the Wright function

$$M(\alpha;z) = \mathcal{W}(-\alpha, 1-\alpha; -z) = \sum_{k=0}^{\infty} \frac{(-1)^k z^k}{k!\,\Gamma[-\alpha k + (1-\alpha)]}, \qquad 0 < \alpha < 1, \quad z \in \mathbb{C}.\tag{22}$$
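A direct partial-sum evaluation of (22) is convenient with the reciprocal gamma function, which vanishes at the poles of $\Gamma$; the sketch below (SciPy assumed) also checks the classical special case $M(1/2;z) = \mathrm{e}^{-z^2/4}/\sqrt{\pi}$:

```python
import numpy as np
from scipy.special import rgamma, gamma

def mainardi(alpha, z, kmax=100):
    """Truncated partial sum of the series (22); rgamma = 1/Gamma is zero at the
    non-positive integer poles, so no special-casing of terms is needed."""
    k = np.arange(kmax)
    return float(np.sum((-z)**k * rgamma(-alpha*k + 1 - alpha)/gamma(k + 1)))

# known special case: M(1/2; z) = exp(-z^2/4)/sqrt(pi)
for z in (0.3, 1.2, 2.5):
    print(z, mainardi(0.5, z), np.exp(-z**2/4)/np.sqrt(np.pi))
```

The Gaussian special case underlies the classical diffusion limit $\alpha = 1$ of Equation (1).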

The Wright function and the Mainardi function appear in formulae for the inverse Laplace transform (see Mainardi [31,32], Stanković [40], Gajić–Stanković [41]):

$$\mathcal{L}^{-1}\{\exp(-\lambda s^{\alpha})\} = \frac{\alpha\lambda}{t^{\alpha+1}}\, M(\alpha; \lambda t^{-\alpha}), \quad 0 < \alpha < 1,\ \lambda > 0,\tag{23}$$

$$\mathcal{L}^{-1}\left\{s^{\alpha-1}\exp(-\lambda s^{\alpha})\right\} = \frac{1}{t^{\alpha}}\, M(\alpha; \lambda t^{-\alpha}), \quad 0 < \alpha < 1,\ \lambda > 0,\tag{24}$$

$$\mathcal{L}^{-1}\left\{s^{-\beta}\exp(-\lambda s^{\alpha})\right\} = t^{\beta-1}\,\mathcal{W}(-\alpha,\beta;-\lambda t^{-\alpha}), \quad 0 < \alpha < 1,\ \lambda > 0.\tag{25}$$

#### *2.4. The Integral Transform Relations between the Mittag–Leffler Function and Wright Function*

The Laplace transform of the Wright function is expressed in terms of the Mittag–Leffler function [20,22,23]

$$\mathcal{L}\{\mathcal{W}(\alpha,\beta;t)\} = \frac{1}{s}\,E\_{\alpha,\beta}\left(\frac{1}{s}\right), \quad \alpha > 0, \ \beta > 0,\tag{26}$$

and [37]

$$\mathcal{L}\{\mathcal{W}(\alpha,\beta;-t)\} = E\_{-\alpha,\beta-\alpha}(-s), \quad -1 < \alpha < 0, \ \beta > 0,\tag{27}$$

whereas, for the Mainardi function, the corresponding relation takes the form

$$
\mathcal{L}\{M(\alpha;t)\} = E\_{\alpha}(-s), \quad 0 < \alpha < 1. \tag{28}
$$
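The pair (28) can be verified numerically at $\alpha = 1/2$, where both sides have elementary closed forms: $M(1/2;t) = \exp(-t^2/4)/\sqrt{\pi}$ and $E\_{1/2}(-s) = \exp(s^2)\,\mathrm{erfc}(s)$. A sketch using composite Simpson quadrature (the truncation point $t = 40$ and the step size are ad hoc numerical choices):

```python
import math

def simpson(f, lo, hi, n=8000):
    """Composite Simpson rule on [lo, hi] with an even number n of subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Equation (28) at alpha = 1/2:
# integral_0^inf exp(-s*t) * M(1/2; t) dt = E_{1/2}(-s) = exp(s**2)*erfc(s)
for s in (0.5, 1.0, 2.0):
    lhs = simpson(lambda t: math.exp(-s * t - t * t / 4) / math.sqrt(math.pi),
                  0.0, 40.0)
    rhs = math.exp(s * s) * math.erfc(s)
    assert abs(lhs - rhs) < 1e-6
```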

The Mittag–Leffler functions and the Wright function are related by the Fourier cosine transform (Povstenko [36,42]):

$$\int\_0^\infty E\_{\alpha}(-\xi^2) \cos(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,M\left(\frac{\alpha}{2}; x\right), \qquad 0 < \alpha < 2, \ x > 0,\tag{29}$$

$$\int\_0^\infty E\_{\alpha,2}(-\xi^2) \cos(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, 2-\frac{\alpha}{2}; -x\right), \quad 0 < \alpha < 2, \ x > 0,\tag{30}$$

$$\int\_0^\infty E\_{\alpha,\alpha}(-\xi^2) \cos(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, \frac{\alpha}{2}; -x\right), \quad 0 < \alpha < 2, \ x > 0,\tag{31}$$

$$\int\_0^\infty E\_{\alpha,\beta}(-\xi^2) \cos(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, \beta-\frac{\alpha}{2}; -x\right), \quad 0 < \alpha < 2, \ \beta > 0, \ x > 0,\tag{32}$$

as well as by the Fourier sine transform

$$\int\_0^\infty \xi\,E\_{\alpha}(-\xi^2) \sin(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1-\alpha; -x\right), \qquad 0 < \alpha < 2, \ x > 0,\tag{33}$$

$$\int\_0^\infty \xi\,E\_{\alpha,2}(-\xi^2) \sin(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, 2-\alpha; -x\right), \quad 0 < \alpha < 2, \ x > 0,\tag{34}$$

$$\int\_0^\infty \xi\,E\_{\alpha,\alpha}(-\xi^2) \sin(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, 0; -x\right) = \frac{\alpha\pi}{4}\,x\,M\left(\frac{\alpha}{2}; x\right), \quad 0 < \alpha < 2, \ x > 0,\tag{35}$$

$$\int\_0^\infty \xi\,E\_{\alpha,\beta}(-\xi^2) \sin(x\xi)\,\mathrm{d}\xi = \frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, \beta-\alpha; -x\right), \quad 0 < \alpha < 2, \ \beta > 0, \ x > 0. \tag{36}$$

Due to (16), for $E\_{\alpha,\alpha}\left(-\xi^2\right)$ we can also obtain

$$\int\_0^\infty \xi^2\,E\_{\alpha,\alpha}(-\xi^2) \cos(x\xi)\,\mathrm{d}\xi = -\frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, -\frac{\alpha}{2}; -x\right), \quad 0 < \alpha < 2, \ x > 0,\tag{37}$$

$$\int\_0^\infty \xi^3\,E\_{\alpha,\alpha}\left(-\xi^2\right) \sin(x\xi)\,\mathrm{d}\xi = -\frac{\pi}{2}\,\mathcal{W}\left(-\frac{\alpha}{2}, -\alpha; -x\right), \quad 0 < \alpha < 2, \ x > 0. \tag{38}$$
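As a sanity check of (29), at $\alpha = 1$ the left-hand side reduces to $\int\_0^\infty e^{-\xi^2}\cos(x\xi)\,\mathrm{d}\xi$ and the right-hand side to $(\sqrt{\pi}/2)\,e^{-x^2/4}$, since $M(1/2;x) = e^{-x^2/4}/\sqrt{\pi}$. A short numerical sketch (the quadrature settings are ad hoc choices):

```python
import math

def simpson(f, lo, hi, n=8000):
    """Composite Simpson rule on [lo, hi] with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Equation (29) at alpha = 1: E_1(-xi**2) = exp(-xi**2) and
# (pi/2)*M(1/2; x) = (sqrt(pi)/2)*exp(-x**2/4)
for x in (0.5, 1.0, 2.0):
    lhs = simpson(lambda xi: math.exp(-xi * xi) * math.cos(x * xi), 0.0, 10.0)
    rhs = 0.5 * math.sqrt(math.pi) * math.exp(-x * x / 4)
    assert abs(lhs - rhs) < 1e-9
```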

The equations presented above allow us to obtain additional integral relations between the Mittag–Leffler functions and the Wright function, which can be helpful when solving problems in polar or cylindrical coordinates using the Hankel transform of order zero. Taking into account the integral representations of the Bessel function $J\_0(x)$ (Watson [43], Abramowitz–Stegun [44])

$$J\_0(x) = \frac{1}{\pi}\int\_0^\pi \cos(x\sin\theta)\,\mathrm{d}\theta,\tag{39}$$

$$J\_0(x) = \frac{2}{\pi}\int\_0^\infty \sin(x\cosh t)\,\mathrm{d}t, \quad x > 0,\tag{40}$$

$$J\_0(x) = \frac{2}{\pi}\int\_1^\infty \frac{\sin(xt)}{\sqrt{t^2-1}}\,\mathrm{d}t, \quad x > 0,\tag{41}$$

we get

$$\int\_0^\infty E\_{\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\mathrm{d}\xi = \frac{1}{2}\int\_0^\pi M\left(\frac{\alpha}{2}; r\sin\theta\right)\mathrm{d}\theta, \quad 0 < \alpha \le 2, \ r > 0,\tag{42}$$

$$\int\_0^\infty E\_{\alpha,2}\left(-\xi^2\right) J\_0(r\xi)\,\mathrm{d}\xi = \frac{1}{2}\int\_0^\pi \mathcal{W}\left(-\frac{\alpha}{2}, 2-\frac{\alpha}{2}; -r\sin\theta\right)\mathrm{d}\theta, \quad 0 < \alpha \le 2, \ r > 0,\tag{43}$$

$$\int\_0^\infty E\_{\alpha,\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\mathrm{d}\xi = \frac{1}{2}\int\_0^\pi \mathcal{W}\left(-\frac{\alpha}{2}, \frac{\alpha}{2}; -r\sin\theta\right)\mathrm{d}\theta, \quad 0 < \alpha \le 2, \ r > 0,\tag{44}$$

$$\int\_0^\infty E\_{\alpha,\beta}\left(-\xi^2\right) J\_0(r\xi)\,\mathrm{d}\xi = \frac{1}{2}\int\_0^\pi \mathcal{W}\left(-\frac{\alpha}{2}, \beta-\frac{\alpha}{2}; -r\sin\theta\right)\mathrm{d}\theta, \quad 0 < \alpha \le 2, \ r > 0. \tag{45}$$

Similarly,

$$\int\_0^\infty E\_{\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_0^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 1-\alpha; -r\cosh t\right)\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{46}$$

$$\int\_0^\infty E\_{\alpha,2}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_0^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 2-\alpha; -r\cosh t\right)\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{47}$$

$$\int\_0^\infty E\_{\alpha,\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_0^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 0; -r\cosh t\right)\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{48}$$

$$\int\_0^\infty E\_{\alpha,\beta}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_0^\infty \mathcal{W}\left(-\frac{\alpha}{2}, \beta-\alpha; -r\cosh t\right)\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{49}$$
 
and

$$\int\_0^\infty E\_{\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_1^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 1-\alpha; -rt\right)\frac{1}{\sqrt{t^2-1}}\,\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{50}$$

$$\int\_0^\infty E\_{\alpha,2}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_1^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 2-\alpha; -rt\right)\frac{1}{\sqrt{t^2-1}}\,\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0,\tag{51}$$

$$\begin{split} \int\_0^\infty E\_{\alpha,\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi &= \int\_1^\infty \mathcal{W}\left(-\frac{\alpha}{2}, 0; -rt\right)\frac{1}{\sqrt{t^2-1}}\,\mathrm{d}t \\ &= \frac{\alpha r}{2}\int\_0^\infty M\left(\frac{\alpha}{2}; r\sqrt{1+u^2}\right)\mathrm{d}u, \quad 0 < \alpha \le 2, \ r > 0, \end{split}\tag{52}$$

$$\int\_0^\infty E\_{\alpha,\beta}\left(-\xi^2\right) J\_0(r\xi)\,\xi\,\mathrm{d}\xi = \int\_1^\infty \mathcal{W}\left(-\frac{\alpha}{2}, \beta-\alpha; -rt\right)\frac{1}{\sqrt{t^2-1}}\,\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0. \tag{53}$$

In addition,

$$\int\_0^\infty E\_{\alpha,\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi^2\,\mathrm{d}\xi = -\frac{1}{2}\int\_0^\pi \mathcal{W}\left(-\frac{\alpha}{2}, -\frac{\alpha}{2}; -r\sin\theta\right)\mathrm{d}\theta, \quad 0 < \alpha \le 2, \ r > 0,\tag{54}$$

$$\begin{split} \int\_0^\infty E\_{\alpha,\alpha}\left(-\xi^2\right) J\_0(r\xi)\,\xi^3\,\mathrm{d}\xi &= -\int\_0^\infty \mathcal{W}\left(-\frac{\alpha}{2}, -\alpha; -r\cosh t\right)\mathrm{d}t \\ &= -\int\_1^\infty \mathcal{W}\left(-\frac{\alpha}{2}, -\alpha; -rt\right)\frac{1}{\sqrt{t^2-1}}\,\mathrm{d}t, \quad 0 < \alpha \le 2, \ r > 0. \end{split}\tag{55}$$

#### **3. Applications of the Wright Function**

#### *3.1. Fractional Heat Conduction in Nonhomogeneous Media under Perfect Thermal Contact*

Time-fractional heat conduction in two joined half-lines was considered by Povstenko [36,45,46]. In the general case, the heat conduction equation with the Caputo derivative of order 0 < *α* ≤ 2 in one half-line

$$\frac{\partial^{\alpha}T\_{1}}{\partial t^{\alpha}} = a\_{1}\,\frac{\partial^{2}T\_{1}}{\partial x^{2}}, \quad x > 0,\tag{56}$$

and the corresponding equation with the Caputo derivative of order 0 < *β* ≤ 2 in the other half-line

$$\frac{\partial^{\beta}T\_{2}}{\partial t^{\beta}} = a\_{2}\,\frac{\partial^{2}T\_{2}}{\partial x^{2}}, \quad x < 0,\tag{57}$$

were treated under the boundary conditions of perfect thermal contact, which state that the two bodies must have the same temperature at the contact point and that the heat fluxes through the contact point must be equal:

$$T\_1(x, t)\Big|\_{x=0^+} = T\_2(x, t)\Big|\_{x=0^-},\tag{58}$$

$$k\_1 D\_{RL}^{1-\alpha}\left.\frac{\partial T\_1(x, t)}{\partial x}\right|\_{x=0^+} = k\_2 D\_{RL}^{1-\beta}\left.\frac{\partial T\_2(x, t)}{\partial x}\right|\_{x=0^-}, \quad 0 < \alpha \le 2, \quad 0 < \beta \le 2. \tag{59}$$

In condition (59), $k\_1$ and $k\_2$ are the generalized thermal conductivities of the two bodies; the Riemann–Liouville fractional derivative of negative order $D\_{RL}^{-\alpha}f(t)$ is understood as the Riemann–Liouville fractional integral $I^{\alpha}f(t)$.

Here, we present the fundamental solution to the first Cauchy problem with the initial condition

$$t = 0: \ T\_1 = p\_0\,\delta(x - \varrho), \quad x > 0, \quad \varrho > 0,\tag{60}$$

for the case *α* = *β* (for details see Povstenko [46]):

$$T\_1(x, t) = \frac{p\_0}{2\sqrt{a\_1}\,t^{\alpha/2}}\left[M\left(\frac{\alpha}{2}; \frac{|x-\varrho|}{\sqrt{a\_1}\,t^{\alpha/2}}\right) + \frac{\varepsilon-1}{\varepsilon+1}\,M\left(\frac{\alpha}{2}; \frac{x+\varrho}{\sqrt{a\_1}\,t^{\alpha/2}}\right)\right], \quad x \ge 0, \tag{61}$$

$$T\_2(x, t) = \frac{\varepsilon p\_0}{(\varepsilon+1)\sqrt{a\_1}\,t^{\alpha/2}}\,M\left(\frac{\alpha}{2}; \frac{|x|}{\sqrt{a\_2}\,t^{\alpha/2}} + \frac{\varrho}{\sqrt{a\_1}\,t^{\alpha/2}}\right), \quad x \le 0,\tag{62}$$

where

$$
\varepsilon = \frac{k\_1\sqrt{a\_2}}{k\_2\sqrt{a\_1}}.\tag{63}
$$

For the corresponding problem with uniform initial temperature $T\_0$ in one of the half-lines [45], in the particular case *α* = *β*, we have:

$$T\_1 = T\_0 - \frac{T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{x}{\sqrt{a\_1}\,t^{\alpha/2}}\right), \qquad x > 0,\tag{64}$$

$$T\_2 = \frac{\varepsilon T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{|x|}{\sqrt{a\_2}\,t^{\alpha/2}}\right), \qquad x < 0. \tag{65}$$
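At $\alpha = \beta = 1$, the Wright functions in (64), (65) reduce to the complementary error function via the known identity $\mathcal{W}(-1/2, 1; -u) = \mathrm{erfc}(u/2)$, recovering the classical contact solution. The sketch below (with illustrative, arbitrarily chosen material parameters) evaluates the profiles and checks the perfect-contact condition $T\_1(0^+) = T\_2(0^-) = \varepsilon T\_0/(1+\varepsilon)$:

```python
import math

# Illustrative parameters: half-line x > 0 initially at T0, half-line x < 0 at 0
T0, a1, a2, k1, k2, t = 1.0, 1.0, 4.0, 1.0, 2.0, 0.3
eps = k1 * math.sqrt(a2) / (k2 * math.sqrt(a1))   # Equation (63)

def T1(x):
    """Equation (64) at alpha = 1, using W(-1/2, 1; -u) = erfc(u/2)."""
    return T0 - T0 / (1 + eps) * math.erfc(x / (2 * math.sqrt(a1 * t)))

def T2(x):
    """Equation (65) at alpha = 1."""
    return eps * T0 / (1 + eps) * math.erfc(abs(x) / (2 * math.sqrt(a2 * t)))

# Temperature is continuous across the contact point x = 0 ...
assert abs(T1(0.0) - T2(0.0)) < 1e-12
assert abs(T1(0.0) - eps * T0 / (1 + eps)) < 1e-12
# ... and approaches the initial values far from the interface
assert abs(T1(10.0) - T0) < 1e-6
assert abs(T2(-10.0)) < 1e-6
```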

The time-fractional heat conduction equations with the Caputo derivatives in a semi-infinite medium composed of a layer 0 < *x* < *L* and a region *L* < *x* < ∞ were investigated in [47] under the boundary conditions of perfect thermal contact at *x* = *L*, the insulated boundary condition at *x* = 0, and uniform initial temperature in the layer. The approximate solution of the considered problem for small values of time is obtained on the basis of Tauberian theorems for the Laplace transform. For *α* = *β*, this solution reads

$$T\_1 \simeq T\_0 - \frac{T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{L-x}{\sqrt{a\_1}\,t^{\alpha/2}}\right), \quad 0 \le x \le L,\tag{66}$$

$$T\_2 \simeq \frac{\varepsilon T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{x-L}{\sqrt{a\_2}\,t^{\alpha/2}}\right), \quad L \le x < \infty. \tag{67}$$

Fractional heat conduction in an infinite medium with a spherical inclusion when a sphere 0 ≤ *r* < *R* is at the initial uniform temperature *T*<sup>0</sup> and a matrix *R* < *r* < ∞ is at a zero initial temperature was considered by Povstenko [36,48]. In the case of perfect thermal contact at the boundary *r* = *R*,

$$
r = R: \quad T\_1(r, t) = T\_2(r, t), \tag{68}
$$

$$k\_1 D\_{RL}^{1-\alpha}\,\frac{\partial T\_1(r, t)}{\partial r} = k\_2 D\_{RL}^{1-\beta}\,\frac{\partial T\_2(r, t)}{\partial r}, \quad 0 < \alpha \le 2, \quad 0 < \beta \le 2,\tag{69}$$

the approximate solution for small values of time has the following form (we present only the solution for *α* = *β*):

$$\begin{split} T\_1(r, t) &\simeq T\_0 - \frac{RT\_0 k\_2}{(k\_2-k\_1)r}\left[\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{R-r}{\sqrt{a\_1}\,t^{\alpha/2}}\right) - \mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{R+r}{\sqrt{a\_1}\,t^{\alpha/2}}\right)\right] \\ &\quad + \frac{cRT\_0}{r}\int\_0^t \frac{(t-\tau)^{\alpha/2-1}}{\tau^{\alpha/2}}\left[M\left(\frac{\alpha}{2}; \frac{R-r}{\sqrt{a\_1}\,\tau^{\alpha/2}}\right) - M\left(\frac{\alpha}{2}; \frac{R+r}{\sqrt{a\_1}\,\tau^{\alpha/2}}\right)\right] \\ &\quad \times E\_{\alpha/2,\,\alpha/2}\left[-b(t-\tau)^{\alpha/2}\right]\mathrm{d}\tau, \end{split}\tag{70}$$

$$\begin{split} T\_2(r, t) &\simeq -\frac{RT\_0 k\_1}{(k\_2-k\_1)r}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{r-R}{\sqrt{a\_2}\,t^{\alpha/2}}\right) + \frac{cRT\_0}{r}\int\_0^t \frac{(t-\tau)^{\alpha/2-1}}{\tau^{\alpha/2}} \\ &\quad \times M\left(\frac{\alpha}{2}; \frac{r-R}{\sqrt{a\_2}\,\tau^{\alpha/2}}\right) E\_{\alpha/2,\,\alpha/2}\left[-b(t-\tau)^{\alpha/2}\right]\mathrm{d}\tau, \end{split}\tag{71}$$

where

$$b = \frac{(k\_2 - k\_1)\sqrt{a\_1 a\_2}}{R(k\_1 \sqrt{a\_1} + k\_2 \sqrt{a\_2})}, \qquad c = \frac{k\_1 k\_2 (\sqrt{a\_1} + \sqrt{a\_2})}{(k\_2 - k\_1)\left(k\_1 \sqrt{a\_1} + k\_2 \sqrt{a\_2}\right)}.\tag{72}$$

It should be mentioned that, for classical heat conduction, the method of analysis of the solution for small values of time was described by Luikov [49] and Özışık [50]. In the case of the fractional diffusion equation, the decay rate at large values of time was analyzed by Sakamoto–Yamamoto [51].

#### *3.2. Fractional Heat Conduction in Nonhomogeneous Media under Nonperfect Thermal Contact*

Near the interface between two solids, a transition region arises whose state differs from the state of the contacting media owing to different conditions of material–particle interaction. The transition region has its own physical, mechanical, and chemical properties, and the processes occurring in it differ from those in the bulk. The small thickness of the intermediate region between two solids allows us to reduce a three-dimensional problem to a two-dimensional one for the median surface endowed with equivalent physical properties. There are several approaches to reducing three-dimensional equations to the corresponding two-dimensional equations for the median surface. For example, introducing the mixed coordinate system (*ξ*, *η*, *z*), where *ξ* and *η* are the curvilinear coordinates in the median surface and *z* is the normal coordinate, a linear or polynomial dependence of the considered functions on the normal coordinate can be assumed. This assumption is often used in the theory of elastic shells.

For the classical heat conduction equation, which is based on the conventional Fourier law, the reduction of the three-dimensional problem to the simplified two-dimensional one was pioneered by Marguerre [52,53] and later on developed by many authors. In this case, the assumption of linear or polynomial dependence of temperature on the normal coordinate or more general operator method were used. An extensive literature on this subject can be found, for example, in [9]. For time-fractional heat conduction, the reduction of the three-dimensional equation to the two-dimensional one was carried out by Povstenko [9,54,55].

A solution to the problem (56), (57) with uniform initial temperature in one of the half-lines under conditions of nonperfect thermal contact was obtained in [56]. In the particular case *α* = *β*, the solution reads

$$\begin{split} T\_1 &= T\_0 - \frac{T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{x}{\sqrt{a\_1}\,t^{\alpha/2}}\right) + \frac{T\_0(1-\varepsilon)}{2(1+\varepsilon)}\int\_0^t \frac{(t-\tau)^{\alpha/2-1}}{\tau^{\alpha/2}} \\ &\quad \times M\left(\frac{\alpha}{2}; \frac{x}{\sqrt{a\_1}\,\tau^{\alpha/2}}\right) E\_{\alpha/2,\,\alpha/2}\left[-b\_{\Sigma}(t-\tau)^{\alpha/2}\right]\mathrm{d}\tau, \qquad x > 0, \end{split}\tag{73}$$

$$\begin{split} T\_2 &= \frac{\varepsilon T\_0}{1+\varepsilon}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1; -\frac{|x|}{\sqrt{a\_2}\,t^{\alpha/2}}\right) + \frac{T\_0(1-\varepsilon)}{2(1+\varepsilon)}\int\_0^t \frac{(t-\tau)^{\alpha/2-1}}{\tau^{\alpha/2}} \\ &\quad \times M\left(\frac{\alpha}{2}; \frac{|x|}{\sqrt{a\_2}\,\tau^{\alpha/2}}\right) E\_{\alpha/2,\,\alpha/2}\left[-b\_{\Sigma}(t-\tau)^{\alpha/2}\right]\mathrm{d}\tau, \qquad x < 0, \end{split}\tag{74}$$

where *ε* is defined by (63),

$$b\_{\Sigma} = \frac{k\_1\sqrt{a\_2} + k\_2\sqrt{a\_1}}{C\_{\Sigma}\sqrt{a\_1 a\_2}},\tag{75}$$

$C\_{\Sigma}$ is the reduced heat capacity of the median surface of the transition region. When $C\_{\Sigma} \to 0$, the solutions (73), (74) coincide with the solutions (64), (65).

#### *3.3. Fractional Heat Conduction under Time-Harmonic Impact*

Ångström [57] was the first to investigate the standard parabolic heat conduction equation under time-harmonic impact. An extensive review of literature in this field in the case of classical diffusion equation can be found in the book by Mandelis [58].

Fractional heat conduction with a source varying harmonically in time was studied by Povstenko [59]. Equation (1) with a source term

$$\frac{\partial^{\alpha} T}{\partial t^{\alpha}} = a\,\frac{\partial^2 T}{\partial x^2} + Q\_0\,\delta(x)\,e^{i\omega t}, \quad 0 < \alpha \le 2,\tag{76}$$

was solved in the domain −∞ < *x* < ∞ under zero initial conditions. Temperature is expressed as

$$T(x, t) = \frac{Q\_0}{2\sqrt{a}}\int\_0^t \tau^{\alpha/2-1}\,\mathcal{W}\left(-\frac{\alpha}{2}, \frac{\alpha}{2}; -\frac{|x|}{\sqrt{a}\,\tau^{\alpha/2}}\right)e^{i\omega(t-\tau)}\,\mathrm{d}\tau. \tag{77}$$

The corresponding problem in the central symmetric case

$$\frac{\partial^{\alpha} T}{\partial t^{\alpha}} = a\left(\frac{\partial^2 T}{\partial r^2} + \frac{2}{r}\frac{\partial T}{\partial r}\right) + Q\_0\,\frac{\delta(r)}{4\pi r^2}\,e^{i\omega t}, \quad 0 < r < \infty, \quad 0 < \alpha \le 2,\tag{78}$$

has the solution

$$T(r, t) = \frac{\alpha Q\_0}{8\pi a^{3/2}}\int\_0^t \frac{1}{\tau^{1+\alpha/2}}\,M\left(\frac{\alpha}{2}; \frac{r}{\sqrt{a}\,\tau^{\alpha/2}}\right)e^{i\omega(t-\tau)}\,\mathrm{d}\tau. \tag{79}$$

#### *3.4. Fractional Nonlocal Elasticity*

Nonlocal continuum physics assumes integral constitutive equations. In the nonlocal theory of continuum mechanics, the stresses at the reference point **x** of an elastic solid at time *t* depend not only on the strains at this point at this time, but also on the strains at all points **x**′ of the body and at all times prior to and at time *t*:

$$\mathbf{t}(\mathbf{x}, t, \epsilon\_L, \epsilon\_T) = \int\_0^t\int\_V \gamma\left(|\mathbf{x}-\mathbf{x}'|, t-t', \epsilon\_L, \epsilon\_T\right)\sigma\left(\mathbf{x}', t'\right)\mathrm{d}v(\mathbf{x}')\,\mathrm{d}t',\tag{80}$$

$$
\sigma(\mathbf{x}',t') = 2\mu \,\mathbf{e}(\mathbf{x}',t') + \lambda \,\mathrm{tr}\,\mathbf{e}\left(\mathbf{x}',t'\right)\mathbf{I},\tag{81}
$$

where **t** and *σ* are the nonlocal and classical stress tensors, **x** and **x**′ are the reference and running points, **e** is the linear strain tensor, *λ* and *μ* are the Lamé constants, and **I** stands for the unit tensor. The volume integral in (80) is over the region occupied by the solid. The time-non-locality describes memory effects, distributed lag (distributed time delay), and frequency dispersion; the space-non-locality deals with long-range interaction. The weight function (the non-locality kernel) $\gamma(|\mathbf{x}-\mathbf{x}'|, t-t', \epsilon\_L, \epsilon\_T)$ depends on two basic non-locality parameters (see Eringen [60]): the characteristic length ratio

$$\epsilon\_{L} = \frac{\text{Internal characteristic length}}{\text{External characteristic length}}$$

and the characteristic time ratio

$$
\epsilon\_T = \frac{\text{Internal characteristic time}}{\text{External characteristic time}}.
$$

When $\epsilon\_T \to 0$, the memory effects are eliminated; for $\epsilon\_L \to 0$, the space-non-locality disappears.

In the pioneering works by Podstrigach [61,62], a new nontraditional thermodynamic pair (the chemical potential tensor *ϕ* and the concentration tensor **c**) was introduced (see also [63,64]). The tensor character of the chemical potential means that, for solids, the work of bringing the substance into a point in a body depends on the direction. In this case, the diffusion equation, split into the mean and deviatoric parts, has the form

$$
\rho\,\frac{\partial (\operatorname{tr}\mathbf{c})}{\partial t} = 3a\,\Delta\,(\operatorname{tr}\boldsymbol{\phi}), \tag{82}
$$

$$
\rho\,\frac{\partial (\operatorname{dev}\mathbf{c})}{\partial t} = 2a\_1\,\Delta\,(\operatorname{dev}\boldsymbol{\phi}), \tag{83}
$$

where *ρ* is the mass density, and *a* and *a*<sup>1</sup> are the corresponding diffusion coefficients.

Starting from interrelated equations describing elasticity and diffusion, Podstrigach [65] eliminated the chemical potential tensor from the constitutive equation for the stress tensor and obtained the stress–strain relation containing spatial and time derivatives. In the infinite medium, this relation can be integrated using the Fourier and Laplace integral transforms, and the final result, written for the mean and deviatoric parts, has the nonlocal integral form:

$$\begin{split} \operatorname{tr}\boldsymbol{\sigma} &= 3K\_c\operatorname{tr}\mathbf{e} + 3\,\frac{K\_{\phi}-K\_c}{p}\int\_0^t\int\_{-\infty}^{\infty}\int\_{-\infty}^{\infty}\int\_{-\infty}^{\infty}\gamma\_{(p)}\left(x-x', y-y', z-z', t-t'\right) \\ &\quad \times \operatorname{tr}\mathbf{e}\left(x', y', z', t'\right)\mathrm{d}x'\,\mathrm{d}y'\,\mathrm{d}z'\,\mathrm{d}t', \end{split}\tag{84}$$

$$\begin{split} \operatorname{dev}\boldsymbol{\sigma} &= 2\mu\_c\operatorname{dev}\mathbf{e} + 2\,\frac{\mu\_{\phi}-\mu\_c}{q}\int\_0^t\int\_{-\infty}^{\infty}\int\_{-\infty}^{\infty}\int\_{-\infty}^{\infty}\gamma\_{(q)}\left(x-x', y-y', z-z', t-t'\right) \\ &\quad \times \operatorname{dev}\mathbf{e}\left(x', y', z', t'\right)\mathrm{d}x'\,\mathrm{d}y'\,\mathrm{d}z'\,\mathrm{d}t'. \end{split}\tag{85}$$

Here, $K\_c$, $K\_{\phi}$, $\mu\_c$, $\mu\_{\phi}$, *p*, and *q* are material constants (for details, see [42,65]). The kernel $\gamma\_{(p)}(x, y, z, t)$ has the following form:

$$\gamma\_{(p)}(x, y, z, t) = \left(\frac{p}{2t}\right)^{5/2}\left(3 - \frac{p\,r^2}{2t}\right)\exp\left(-\frac{p\,r^2}{4t}\right),\tag{86}$$

where $r = \sqrt{x^2 + y^2 + z^2}$; the kernel $\gamma\_{(q)}(x, y, z, t)$ is obtained from the kernel $\gamma\_{(p)}(x, y, z, t)$ by substituting *p* with *q*.

The results of Podstrigach [65] were generalized by Povstenko [42] for the case of fractional diffusion equations

$$
\rho\,\frac{\partial^{\alpha}(\operatorname{tr}\mathbf{c})}{\partial t^{\alpha}} = 3a\,\Delta\,(\operatorname{tr}\boldsymbol{\phi}),
\tag{87}
$$

$$
\rho\,\frac{\partial^{\alpha}(\operatorname{dev}\mathbf{c})}{\partial t^{\alpha}} = 2a\_1\,\Delta\,(\operatorname{dev}\boldsymbol{\phi}). \tag{88}
$$

The kernel *γ*(*p*)(*x*, *y*, *z*, *t*) in the fractional generalization of the constitutive Equation (84) for the mean part of the stress tensor is expressed in terms of the Wright function:

$$\gamma\_{(p)}(x, y, z, t) = -\frac{\sqrt{\pi}\,p^2}{\sqrt{2}}\,\mathcal{W}\left(-\frac{\alpha}{2}, -\alpha; -\sqrt{p}\,\frac{r}{t^{\alpha/2}}\right). \tag{89}$$

The kernel *γ*(*q*)(*x*, *y*, *z*, *t*) in the fractional generalization of the constitutive Equation (85) for the deviatoric part of the stress tensor is obtained by substituting *p* with *q*.

In the case of only space-non-locality, the constitutive equation for the stress tensor reads

$$\mathbf{t}(\mathbf{x}, \epsilon\_L) = \int\_V \gamma\left(|\mathbf{x}-\mathbf{x}'|, \epsilon\_L\right)\sigma\left(\mathbf{x}'\right)\mathrm{d}v\left(\mathbf{x}'\right). \tag{90}$$

The space-nonlocal elasticity reduces to the classical theory of elasticity in the long-wavelength limit and to the atomic lattice theory in the short-wavelength limit. Several versions of nonlocal elasticity based on various assumptions were proposed by different authors (see, for example, Podstrigach [65], Eringen [66,67], Kunin [68,69] and references therein).

In the case of the space-nonlocal constitutive Equation (90), the nonlocal kernel $\gamma(|\mathbf{x}-\mathbf{x}'|, \epsilon\_L)$ is a delta sequence and, in the classical elasticity limit $\epsilon\_L \to 0$, becomes the Dirac delta function. For example, slightly changing the notation, the nonlocal kernel $\gamma(|\mathbf{x}-\mathbf{x}'|, \tau)$ can be considered as the Green function of the Cauchy problem for the diffusion operator (see Eringen [67,70]):

$$\frac{\partial \gamma(\mathbf{x}, \tau)}{\partial \tau} - a\Delta \,\gamma(\mathbf{x}, \tau) = 0,\tag{91}$$

$$
\tau = 0: \quad \gamma(\mathbf{x}, \tau) = \delta(\mathbf{x}), \tag{92}
$$

which results in the kernel

$$\gamma(\mathbf{x}, \tau) = \frac{1}{(2\sqrt{\pi a \tau})^n} \exp\left(-\frac{|\mathbf{x}|^2}{4a\tau}\right) \tag{93}$$

for *n* = 1, 2, 3 space variables. In this case, the nonlocal stress tensor is a solution of the corresponding Cauchy problem:

$$\frac{\partial \mathbf{t}(\mathbf{x}, \tau)}{\partial \tau} - a \Delta \mathbf{t}(\mathbf{x}, \tau) = 0,\tag{94}$$

$$
\tau = 0: \quad \mathbf{t}(\mathbf{x}, \tau) = \boldsymbol{\sigma}(\mathbf{x}).\tag{95}
$$
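The delta-sequence behavior of the kernel (93) is easy to confirm numerically in the one-dimensional case $n = 1$: the kernel carries unit mass for every $\tau > 0$ and concentrates at the origin as $\tau \to 0$. A minimal sketch (quadrature settings are ad hoc):

```python
import math

def gamma_kernel(x, tau, a=1.0):
    """One-dimensional kernel (93), i.e., the heat kernel for n = 1."""
    return math.exp(-x * x / (4 * a * tau)) / (2 * math.sqrt(math.pi * a * tau))

def simpson(f, lo, hi, n=20000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Unit mass for every value of the non-locality parameter tau ...
for tau in (1.0, 0.1, 0.01):
    mass = simpson(lambda x: gamma_kernel(x, tau), -20.0, 20.0)
    assert abs(mass - 1.0) < 1e-6
# ... while the peak value grows as tau -> 0 (delta sequence)
assert gamma_kernel(0.0, 0.001) > gamma_kernel(0.0, 0.01) > gamma_kernel(0.0, 0.1)
```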

It should be emphasized that, in the formal sense, *τ* in the initial-value problems (91), (92) and (94), (95) looks like time, but in fact *τ* is a non-locality parameter related to the space-non-locality characteristic ratio $\epsilon\_L$.

In the paper [71], the nonlocal kernel $\gamma(|\mathbf{x}-\mathbf{x}'|, \tau)$ was considered as the Green function of the Cauchy problem for the fractional diffusion operator

$$\frac{\partial^{\alpha}\gamma(\mathbf{x}, \tau)}{\partial \tau^{\alpha}} - a\Delta\,\gamma(\mathbf{x}, \tau) = 0, \quad 0 < \alpha \le 1,\tag{96}$$

$$
\tau = 0: \quad \gamma(\mathbf{x}, \tau) = \delta(\mathbf{x}).\tag{97}
$$

In the framework of this approach, instead of the Cauchy problem (94)–(95), we obtain

$$\frac{\partial^{\alpha}\mathbf{t}(\mathbf{x}, \tau)}{\partial \tau^{\alpha}} - a\Delta\,\mathbf{t}(\mathbf{x}, \tau) = 0, \quad 0 < \alpha \le 1,\tag{98}$$

$$
\tau = 0: \quad \mathbf{t}(\mathbf{x}, \tau) = \boldsymbol{\sigma}(\mathbf{x}).\tag{99}
$$

In the case of one spatial coordinate, the nonlocal kernel takes the form

$$\gamma(x, \tau) = \frac{1}{2\sqrt{a}\,\tau^{\alpha/2}}\,M\left(\frac{\alpha}{2}; \frac{|x|}{\sqrt{a}\,\tau^{\alpha/2}}\right), \qquad 0 < \alpha \le 1,\tag{100}$$

and in the central symmetric case

$$\gamma(r, \tau) = \frac{1}{4\pi a\,\tau^{\alpha}\,r}\,\mathcal{W}\left(-\frac{\alpha}{2}, 1-\alpha; -\frac{r}{\sqrt{a}\,\tau^{\alpha/2}}\right), \qquad 0 < \alpha \le 1. \tag{101}$$

#### **4. Conclusions**

In this survey, we have reviewed the main applications of the Wright function and the Mainardi function in continuum physics based essentially on the author's works. We have presented the integral relations between the Mittag–Leffler functions and the Wright function, which can be useful when solving fractional differential equations. We have restricted ourselves to the standard Mittag–Leffler functions and Wright function. The interested reader is referred to publications on further generalizations of the Mittag–Leffler functions [24,72–75] and of the Wright function [24,29,76–78].

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The author would like to thank the anonymous reviewers for their helpful comments.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Some Properties of the Kilbas-Saigo Function**

**Lotfi Boudabsa <sup>1</sup> and Thomas Simon 2,\***


**Abstract:** We characterize the complete monotonicity of the Kilbas-Saigo function on the negative half-line. We also provide the exact asymptotics at −∞, and uniform hyperbolic bounds are derived. The same questions are addressed for the classical Le Roy function. The main ingredient for the proof is a probabilistic representation of these functions in terms of the stable subordinator.

**Keywords:** complete monotonicity; convex ordering; double Gamma function; fractional extreme distribution; Kilbas-Saigo function; Le Roy function; Mittag–Leffler function; stable subordinator

#### **1. Introduction**

The Kilbas-Saigo function is a three-parameter entire function with the convergent series representation

$$E\_{\alpha,m,l}(z) = 1 + \sum\_{n \ge 1}\left(\prod\_{k=1}^{n}\frac{\Gamma(1+\alpha((k-1)m+l))}{\Gamma(1+\alpha((k-1)m+l+1))}\right)z^n, \qquad z \in \mathbb{C},$$

where the parameters are such that *α*, *m* > 0 and *l* > −1/*α*. It can be viewed as a generalization of the one- or two-parameter Mittag–Leffler function since, with standard notations,

$$E\_{\alpha,1,0}(z) := \sum\_{n \ge 0}\frac{z^n}{\Gamma(1+\alpha n)} = E\_{\alpha}(z)$$

and

$$E\_{\alpha,1,\frac{\beta-1}{\alpha}}(z) := \Gamma(\beta)\sum\_{n \ge 0}\frac{z^n}{\Gamma(\beta+\alpha n)} = \Gamma(\beta)\,E\_{\alpha,\beta}(z)$$

for every *α*, *β* > 0 and $z \in \mathbb{C}$. This function was introduced in [1] as the solution to some integro-differential equation with Abelian kernel on the half-line, and we refer to Chapter 5.2 in [2] for a more recent account, including an extension to complex values of the parameter *l*. In our previous paper [3], written in collaboration with P. Vallois, it was shown that certain Kilbas-Saigo functions are moment generating functions of Riemann integrals of the stable subordinator. This observation made it possible to define rigorously some Weibull and Fréchet distributions of fractional type via an independent exponential random variable and the stable subordinator—see [3] for details. In the present paper, we go the other way round and use the probabilistic connection to deduce some non-trivial analytical properties of the Kilbas-Saigo function.
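The two reductions above can be checked numerically from the defining series. A small Python sketch (written for this reprint); at $\alpha = m = 1$, the coefficient products collapse to $1/n!$ for $l = 0$ and to $1/(n+1)!$ for $l = 1$, so that $E\_{1,1,0}(z) = e^z$ and $E\_{1,1,1}(z) = (e^z-1)/z$:

```python
import math

def kilbas_saigo(alpha, m, l, z, terms=60):
    """Truncated defining series for E_{alpha,m,l}(z), alpha, m > 0, l > -1/alpha."""
    total, coeff = 1.0, 1.0
    for n in range(1, terms):
        k = n - 1
        coeff *= (math.gamma(1 + alpha * (k * m + l)) /
                  math.gamma(1 + alpha * (k * m + l + 1)))
        total += coeff * z**n
    return total

# Reduction to the one-parameter Mittag-Leffler function: E_{1,1,0}(z) = exp(z)
assert abs(kilbas_saigo(1.0, 1.0, 0.0, 0.5) - math.exp(0.5)) < 1e-12

# Reduction to the two-parameter case with alpha = 1, beta = 2:
# E_{1,1,1}(z) = Gamma(2)*E_{1,2}(z) = (exp(z) - 1)/z
z = 0.7
assert abs(kilbas_saigo(1.0, 1.0, 1.0, z) - (math.exp(z) - 1) / z) < 1e-12
```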

In Section 2, we tackle the problem of complete monotonicity on the negative half-line. This problem dates back to Pollard in 1948 for the one-parameter Mittag–Leffler function—see e.g., Section 3.7.2 in [2] for details and references. It was shown in [3] that, for every *m* > 0 and *α* ∈ (0, 1], the function *x* → *Eα*,*m*,*m*−1(−*x*) is completely monotone, extending Pollard's result and solving an open problem stated in [4]. In Theorem 1 below, we characterize the complete monotonicity of *x* → *Eα*,*m*,*l*(−*x*) by *α* ∈ (0, 1] and *l* ≥ *m* − 1/*α*.

**Citation:** Boudabsa, L.; Simon, T. Some Properties of the Kilbas-Saigo Function. *Mathematics* **2021**, *9*, 217. https://doi.org/10.3390/ math9030217

Received: 11 December 2020 Accepted: 16 January 2021 Published: 22 January 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

We also give an explicit representation, albeit complicated in general, of the underlying positive random variable. Along the way, we study an interesting family of Mellin transforms given as the quotient of four double Gamma functions.

In Section 3, we establish uniform hyperbolic bounds on the negative half-line for two families of completely monotonic Kilbas-Saigo functions, extending the bounds obtained in [5] for the classical Mittag–Leffler function. The argument in [5] relied on stochastic and convex orderings and was rather lengthy. We use here the same kind of arguments, but the proof is shorter and more transparent thanks to the connection with the stable subordinator, which also enables us to derive some monotonicity properties of *m* → *Eα*,*m*,*l*(*x*) for every *x* ∈ R—see Proposition 1 below.

In Section 4, we address the question of the asymptotic behavior at −∞ in the completely monotonic case *α* ∈ (0, 1] and *l* ≥ *m* − 1/*α*. It is shown in Theorem 5.5 of [2] that in the general case *α*, *m* > 0 and *l* > *m* − 1/*α*, the entire function *Eα*,*m*,*l*(*z*) has order *ρ* = 1/*α* and type *σ* = 1/*m*. However, precise asymptotics along given directions of the complex plane do not seem to have been investigated as yet, as is the case for the classical Mittag–Leffler function—see e.g., Proposition 3.6 in [2]. For the negative half-line and *α* ∈ (0, 1], the asymptotics are different depending on whether *l* = *m* − 1/*α* or *l* > *m* − 1/*α*. In the former case, the behavior is in *c*<sub>*α*,*m*</sub> *x*<sup>−(1+1/*m*)</sup> with a non-trivial constant *c*<sub>*α*,*m*</sub> obtained from the connection with the fractional Fréchet distribution and given in terms of the double Gamma function—see Proposition 7 and Remark 8 (c) below. In the latter case, the behavior is in *c*<sub>*α*,*m*,*l*</sub> *x*<sup>−1</sup> with a uniform speed and a simple constant *c*<sub>*α*,*m*,*l*</sub> given in terms of the standard Gamma function—see Proposition 6 below. The method for the case *l* > *m* − 1/*α* relies on the computation of the Mellin transform of the positive function *Eα*,*m*,*l*(−*x*), which is obtained from the proof of its complete monotonicity, and is interesting in its own right—see Remark 2 (c) below. Along the way, we provide the exact asymptotics of the fractional Weibull and Fréchet densities at both ends of their support and we give a series of probabilistic factorizations. The latter enhance the position of the fractional Fréchet distribution, which is in one-to-one correspondence with the boundary Kilbas-Saigo function *E*<sub>*α*,*m*,*m*−1/*α*</sub>(*x*), as an irreducible factor—see Remark 8 (a) below.

In the last Section 5, we pay attention to the so-called Le Roy function with parameter *α* > 0. This is a simple generalization of the exponential function defined by

$$\mathcal{L}\_{\alpha}(z) := \sum\_{n \ge 0} \frac{z^n}{(n!)^{\alpha}}, \qquad z \in \mathbb{C}.$$

Introduced in [6] in the context of analytic continuation, a couple of years before the Mittag–Leffler function, the Le Roy function has been much less studied. It was shown in [3] that this function encodes for *α* ∈ [0, 1] a Gumbel distribution of fractional type, as the moment generating function of the perpetuity of the *α*−stable subordinator. This fact is recalled in Proposition 9 below, together with a characterization of the moment generating property. The exact asymptotic behavior at −∞ is also derived for *α* ∈ [0, 1], completing the original result of Le Roy. Finally, the non-increasing character of *α* → L*α*(*x*) on [0, 1] for every *x* ∈ R is established by convex ordering. It is worth mentioning that this property is an open problem—see Conjecture 5 below—for the Mittag–Leffler function.
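As a purely illustrative numerical sketch (ours, not from the paper; the helper name `le_roy` and the truncation level are assumptions), the series defining the Le Roy function can be summed directly. It recovers the elementary special cases L<sub>1</sub> = exp and L<sub>0</sub>(*z*) = 1/(1 − *z*) for |*z*| < 1, and illustrates the non-increasing character of *α* → L*α*(−*x*):

```python
import math

def le_roy(alpha, z, terms=120):
    """Truncated series for the Le Roy function: sum of z^n / (n!)^alpha."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= z / (n + 1) ** alpha  # updates (n!)^alpha to ((n+1)!)^alpha
    return total

print(le_roy(1.0, 2.0) - math.exp(2))        # ~ 0, since L_1 = exp
print(le_roy(0.0, 0.5))                      # geometric series, 1/(1 - 1/2)
print(le_roy(0.3, -1.0), le_roy(0.7, -1.0))  # non-increasing in alpha at x = -1
```

The recurrence multiplies the running term by *z*/(*n* + 1)<sup>*α*</sup>, which avoids forming huge factorials explicitly.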

As in [3], an important role is played throughout the paper by Barnes' double Gamma function *G*(*z*; *δ*), which is the unique solution to the functional equation *G*(*z* + 1; *δ*) = Γ(*zδ*<sup>−1</sup>)*G*(*z*; *δ*) with normalization *G*(1; *δ*) = 1, and by its associated Pochhammer type symbol

$$[a; \delta]\_s \;=\; \frac{G(a+s; \delta)}{G(a; \delta)}\cdot$$

We have gathered in Appendix A all the needed facts and formulæ on this double Gamma function, whose connection with the Kilbas-Saigo function probably has a broader scope than the content of the present paper (we leave this topic open to further research).

#### **2. Complete Monotonicity on the Negative Half-Line**

In this section, we wish to characterize the property that the function *x* → *Eα*,*m*,*l*(−*x*) is completely monotone (CM) on (0, ∞). We begin with the following result on the above generalized Pochhammer symbols, which is reminiscent of Proposition 5.1 and Theorem 6.2 in [7] and is of independent interest.

**Lemma 1.** *Let a*, *b*, *c*, *d and δ be positive parameters. There exists a positive random variable Z* = **Z**[*a*, *c*; *b*, *d*; *δ*] *such that*

$$\mathbb{E}[Z^s] = \frac{[a; \delta]\_s [c; \delta]\_s}{[b; \delta]\_s [d; \delta]\_s} \tag{1}$$

*for every s* > 0, *if and only if b* + *d* ≤ *a* + *c and* inf{*b*, *d*} ≤ inf{*a*, *c*}. *This random variable is absolutely continuous on* (0, ∞)*, except in the degenerate case a* = *b* = *c* = *d*. *Its support is* [0, 1] *if b* + *d* = *a* + *c and* [0, ∞) *if b* + *d* < *a* + *c*.

**Proof of Lemma 1.** We discard the degenerate case *a* = *b* = *c* = *d*, which is obvious with *Z* = 1. By (A2) and some rearrangements—see also (2.15) in [8], we first rewrite

$$\log \left( \frac{[a;\delta]\_s [c;\delta]\_s}{[b;\delta]\_s [d;\delta]\_s} \right) = \kappa s + \int\_{-\infty}^0 (e^{sx} - 1 - sx) \left( \frac{e^{-b|x|} + e^{-d|x|} - e^{-a|x|} - e^{-c|x|}}{|x| (1 - e^{-|x|}) (1 - e^{-\delta|x|})} \right) dx$$

for every *s* > 0, where *κ* is some real constant. By convexity, it is easy to see that if *b* + *d* ≤ *a* + *c* and inf{*b*, *d*} ≤ inf{*a*, *c*}, then the function *z* → *z*<sup>*b*</sup> + *z*<sup>*d*</sup> − *z*<sup>*a*</sup> − *z*<sup>*c*</sup> is positive on (0, 1). This implies that the function

$$x \mapsto \frac{e^{-b|x|} + e^{-d|x|} - e^{-a|x|} - e^{-c|x|}}{|x|(1 - e^{-|x|})(1 - e^{-\delta|x|})}$$

is positive on (−∞, 0) and that it can be viewed as the density of some Lévy measure on (−∞, 0), since it integrates 1 ∧ *x*<sup>2</sup>. By the Lévy–Khintchine formula, there exists a real infinitely divisible random variable *Y* such that

$$\mathbb{E}[e^{sY}] = \frac{[a; \delta]\_s [c; \delta]\_s}{[b; \delta]\_s [d; \delta]\_s}$$

for every *s* > 0, and the positive random variable *Z* = *e<sup>Y</sup>* satisfies (1). Since we have excluded the degenerate case, the Lévy measure of *Y* is clearly infinite and it follows from Theorem 27.7 in [9] that *Y* has a density and the same is true for *Z*.

Assuming first *b* + *d* = *a* + *c*, a Taylor expansion at zero shows that the density of the Lévy measure of *Y* integrates 1 ∧ |*x*| and we deduce from (A2) the simpler formula

$$\log \mathbb{E}[e^{sY}] = \log \left( \frac{[a; \delta]\_s [c; \delta]\_s}{[b; \delta]\_s [d; \delta]\_s} \right) = - \int\_0^\infty (1 - e^{-sx}) \left( \frac{e^{-bx} + e^{-dx} - e^{-ax} - e^{-cx}}{x(1 - e^{-x})(1 - e^{-\delta x})} \right) dx \ \le\ 0.$$

By the Lévy–Khintchine formula, this shows that the ID random variable *Y* is negative. Moreover, its support is (−∞, 0] since its Lévy measure has full support and its drift coefficient is zero—see Theorem 24.10 (iii) in [9], so that the support of *Z* is [0, 1].

Assuming second *b* + *d* < *a* + *c*, the same Taylor expansion as above shows that the density of the Lévy measure of *Y* does not integrate 1 ∧ |*x*| and the real Lévy process associated with *Y* is thus of type C using the terminology of [9]—see Definition 11.9 therein. By Theorem 24.10 (i) in [9], this implies that *Y* has full support on R, and so does *Z* on R+.

It remains to prove the only if part of the Lemma. Assuming *a* ≤ *c* and *b* ≤ *d* without loss of generality, we first observe that if *a* < *b* then the function

$$s \mapsto \frac{[a; \delta]\_s [c; \delta]\_s}{[b; \delta]\_s [d; \delta]\_s}$$

is real-analytic on (−*b*, ∞) and vanishes at *s* = −*a* > −*b*, an impossible property for the Mellin transform of a positive random variable. The necessity of *b* + *d* ≤ *a* + *c* is slightly more subtle and hinges again upon infinite divisibility. First, setting *ϕ*(*z*) = *z*<sup>*b*</sup> + *z*<sup>*d*</sup> − *z*<sup>*a*</sup> − *z*<sup>*c*</sup> and *z*<sub>∗</sub> = inf{*z* > 0, *ϕ*(*z*) < 0}, it is easy to see by convexity and a Taylor expansion at 1 that if *b* + *d* > *a* + *c*, then *z*<sub>∗</sub> < 1 and *ϕ*(*z*) < 0 on (*z*<sub>∗</sub>, 1) with *ϕ*(*z*) ∼ (*b* + *d* − *a* − *c*)(*z* − 1) as *z* → 1. Introducing next the ID random variable *V* with Laplace exponent

$$\log \mathbb{E}[e^{sV}] = -\kappa s + \int\_{\log z\_\*}^{0} (e^{sx} - 1 - sx) \left( \frac{e^{-a|x|} + e^{-c|x|} - e^{-b|x|} - e^{-d|x|}}{|x|(1 - e^{-|x|})(1 - e^{-\delta|x|})} \right) dx,$$

we obtain the decomposition

$$\log\left(\frac{[a;\delta]\_s[c;\delta]\_s}{[b;\delta]\_s[d;\delta]\_s}\right) + \log\mathbb{E}[e^{sV}] = \int\_{-\infty}^{\log z\_\*} (e^{sx} - 1 - sx) \left(\frac{e^{-b|x|} + e^{-d|x|} - e^{-a|x|} - e^{-c|x|}}{|x|(1 - e^{-|x|})(1 - e^{-\delta|x|})}\right) dx,$$

whose right-hand side is the Laplace exponent of some ID random variable *U* with an atom, because its Lévy measure, whose support is bounded away from zero, is finite—see Theorem 27.4 in [9]. On the other hand, the random variable *V* has an absolutely continuous and infinite Lévy measure and hence it also has a density. If there existed *Z* such that (1) holds, then the independent decomposition *U* =<sup>*d*</sup> *V* + log *Z* would imply by convolution that *U* has a density as well. This contradiction finishes the proof of the Lemma.

**Remark 1.** *(a) By the Mellin inversion formula, the density of* **Z**[*a*, *c*; *b*, *d*; *δ*] *is expressed as*

$$f(\mathbf{x}) = \frac{1}{2\mathbf{i}\pi\mathbf{x}} \int\_{s\_0 - i\infty}^{s\_0 + i\infty} \mathbf{x}^{-s} \left(\frac{[a; \delta]\_s [c; \delta]\_s}{[b; \delta]\_s [d; \delta]\_s}\right) d\mathbf{s}$$

*over* (0, ∞) *for any s*<sup>0</sup> > − inf{*b*, *d*}. *From this expression, it is possible to prove that this density is real-analytic over the interior of the support. We omit details. Let us also mention by Remark 28.8 in [9] that this density is positive over the interior of its support.*

*(b) With the standard notation for the Pochhammer symbol, the aforementioned Proposition 5.1 and Theorem 6.2 in [7] show that*

$$s \quad \mapsto \ \frac{(a)\_s (c)\_s}{(b)\_s (d)\_s}$$

*is the Mellin transform of a positive random variable if and only if b* + *d* ≥ *a* + *c and* inf{*b*, *d*} ≥ inf{*a*, *c*}. *This fact can be proved exactly as above, in writing*

$$\log\left(\frac{(a)\_s(c)\_s}{(b)\_s(d)\_s}\right) = -\int\_0^\infty (1 - e^{-sx}) \left(\frac{e^{-ax} + e^{-cx} - e^{-bx} - e^{-dx}}{x(1 - e^{-x})}\right) dx.$$

*This expression also shows that the underlying random variable has support* [0, 1] *and that it is absolutely continuous, save for a* + *c* = *b* + *d where it has an atom at one. We refer to [7] for an exact expression of the density on* (0, 1) *in terms of the classical hypergeometric function.*

We can now characterize the CM property for *Eα*,*m*,*l*(−*x*) on (0, ∞).

**Theorem 1.** *Let α*, *m* > 0 *and l* > −1/*α*. *The Kilbas-Saigo function*

$$x \mapsto E\_{\alpha,m,l}(-x)$$

*is* CM *on* (0, ∞) *if and only if α* ≤ 1 *and l* ≥ *m* − 1/*α*. *Its Bernstein representation is*

$$E\_{\alpha,m,l}(-x) = \mathbb{E}\left[\exp - x \left\{ \mathbf{X}\_{\alpha,m,l} \times \int\_0^\infty \left(1 + \sigma\_t^{(\alpha)}\right)^{-\alpha(m+1)} dt \right\} \right] \tag{2}$$

*with δ* = 1/*αm and* **X***α*,*m*,*<sup>l</sup>* = **Z**[1 + 1/*m*,(*αl* + 1)*δ*; 1, 1/*m* + (*αl* + 1)*δ*; *δ*].

**Proof of Theorem 1.** Assume first *α* ≤ 1 and *l* ≥ *m* − 1/*α* and let

$$\mathbf{Y}\_{\alpha,m,l} = \mathbf{X}\_{\alpha,m,l} \times \int\_0^\infty \left(1 + \sigma\_t^{(\alpha)}\right)^{-\alpha(m+1)} dt.$$

By Proposition 2.4 in [8] and Lemma 1, its Mellin transform is

$$\begin{aligned} \mathbb{E}[(\mathbf{Y}\_{\alpha,m,l})^s] &= \delta^s\, \frac{[1+\delta;\delta]\_s\,[(\alpha l+1)\delta;\delta]\_s}{[1;\delta]\_s\,[1/m+(\alpha l+1)\delta;\delta]\_s} \\ &= \Gamma(1+s) \times \frac{[(\alpha l+1)\delta;\delta]\_s}{[1/m+(\alpha l+1)\delta;\delta]\_s} \end{aligned}$$

where in the second equality we have used (A9). By Fubini's theorem, the moment generating function of **Y***α*,*m*,*<sup>l</sup>* reads

$$\begin{aligned} \mathbb{E}[e^{z\mathbf{Y}\_{\alpha,m,l}}] &= \sum\_{n\geq 0} \mathbb{E}[(\mathbf{Y}\_{\alpha,m,l})^n] \frac{z^n}{n!} \\ &= \sum\_{n\geq 0} \left( \frac{[(\alpha l+1)\delta;\delta]\_n}{[1/m+(\alpha l+1)\delta;\delta]\_n} \right) z^n \\ &= \sum\_{n\geq 0} \left( \prod\_{j=0}^{n-1} \frac{\Gamma(\alpha(jm+l)+1)}{\Gamma(\alpha(jm+l+1)+1)} \right) z^n =: E\_{\alpha,m,l}(z) \end{aligned}$$

for every *z* ≥ 0, where in the third equality we have used (A1) repeatedly. The latter identity is extended analytically to the whole complex plane and we get, in particular,

$$E\_{\alpha,m,l}(-x) = \mathbb{E}[e^{-x \mathbf{Y}\_{\alpha,m,l}}], \qquad x \ge 0.$$

This shows that *Eα*,*m*,*l*(−*x*) is CM with the required Bernstein representation.

We now prove the only if part. If *Eα*,*m*,*l*(−*x*) is CM, then we see by analytic continuation that *Eα*,*m*,*l*(*z*) is the moment generating function on C of the underlying random variable *X*, whose positive integer moments read

$$\mathbb{E}[X^n] = n! \times \left( \prod\_{j=0}^{n-1} \frac{\Gamma(\alpha(jm+l)+1)}{\Gamma(\alpha(jm+l+1)+1)} \right), \quad n \ge 0.$$

If *α* > 1, Stirling's formula implies E[*X*<sup>*n*</sup>]<sup>1/*n*</sup> → 0 as *n* → ∞ so that *X* ≡ 0, a contradiction because *Eα*,*m*,*l* is not a constant. If *α* = 1 and *l* + 1 < *m*, then

$$\mathbb{E}[X^n] = \frac{n!}{(c)\_n\, m^n} \ \sim\ \Gamma(c)\, \frac{n^{1-c}}{m^n} \quad \text{as } n \to \infty,$$

with *c* = (*l* + 1)/*m* ∈ (0, 1). In particular, the Mellin transform *s* → E[*X*<sup>*s*</sup>] is analytic on {ℜ(*s*) ≥ 0}, bounded on {ℜ(*s*) = 0}, and has at most exponential growth on {ℜ(*s*) > 0} because

$$|\mathbb{E}[X^s]| \le \mathbb{E}\left[X^{\Re(s)}\right] \le \left(\mathbb{E}\left[X^{\lceil\Re(s)\rceil+1}\right]\right)^{\frac{\Re(s)}{\lceil\Re(s)\rceil+1}}$$

by Hölder's inequality. On the other hand, the Stirling type Formula (A4) implies, after some simplifications,

$$\delta^{-s}\, \frac{[1+\delta; \delta]\_s\, [c; \delta]\_s}{[1; \delta]\_s\, [c+\delta; \delta]\_s} = \delta^{-s} s^{1-c}\, (1+o(1)) \quad \text{as } |s| \to \infty \text{ with } |\arg s| < \pi$$

and this shows that the function on the left-hand side, which is analytic on {ℜ(*s*) ≥ 0}, has at most linear growth on {ℜ(*s*) = 0} and at most exponential growth on {ℜ(*s*) > 0}. Moreover, the above analysis clearly shows that

$$\mathbb{E}[X^{n}] = \delta^{-n} \frac{[1+\delta;\delta]\_{n}[c;\delta]\_{n}}{[1;\delta]\_{n}[c+\delta;\delta]\_{n}}$$

for all *n* ≥ 0 and by Carlson's theorem—see e.g., Section 5.81 in [10], we must have

$$\mathbb{E}[X^{s}] = \delta^{-s} \frac{[1+\delta;\delta]\_{s}[c;\delta]\_{s}}{[1;\delta]\_{s}[c+\delta;\delta]\_{s}}$$

for every *s* > 0, a contradiction since Lemma 1 shows that the right-hand side cannot be the Mellin transform of a positive random variable if *c* < 1. The case *α* < 1 and *l* + 1/*α* < *m* is analogous. It consists of identifying the bounded sequence

$$\frac{1}{n!} \times \left( \prod\_{j=0}^{n-1} \frac{\Gamma(\alpha(jm+l+1)+1)}{\Gamma(\alpha(jm+l)+1)} \right)$$

as the values at non-negative integer points of the function

$$\delta^{-s} \times \frac{[1; \delta]\_s\, [1/m + (\alpha l+1)\delta; \delta]\_s}{[1+\delta; \delta]\_s\, [(\alpha l+1)\delta; \delta]\_s} = \delta^{-s}\, e^{-(1-\alpha)s\ln(s) + \kappa s + O(1)} \quad \text{as } |s| \to \infty \text{ with } |\arg s| < \pi,$$

where the constant *κ*, whose precise value is irrelevant here, can be evaluated from (A4). On {ℜ(*s*) ≥ 0}, we see that this function has growth at most *e*<sup>*π*(1−*α*)|*s*|/2</sup> and we can again apply Carlson's theorem. We leave the details to the interested reader.

**Remark 2.** *(a) When m* = 1, *applying (A1) we see that the random variable* **X**<sub>*α*,1,*l*</sub> *has Mellin transform*

$$\mathbb{E}[(\mathbf{X}\_{\alpha,1,l})^{s}] = \frac{[2;\delta]\_{s}\,[l+1/\alpha;\delta]\_{s}}{[1;\delta]\_{s}\,[1+l+1/\alpha;\delta]\_{s}} = \frac{(\alpha)\_{\alpha s}}{(\beta)\_{\alpha s}}$$

*with β* = 1 + *αl* ≥ *α*. *This shows* **X**<sub>*α*,1,*l*</sub> =<sup>*d*</sup> (**B**<sub>*α*,*β*−*α*</sub>)<sup>*α*</sup>, *where* **B**<sub>*a*,*b*</sub> *denotes, here and throughout, a standard Beta random variable with parameters a*, *b* > 0. *We hence recover the Bernstein representation of the CM function* Γ(*β*)*Eα*,*β*(−*x*) *which was discussed in Remark 3.3 (c) in [3]. Notice also the very simple expression of the Mellin transform*

$$\mathbb{E}[(\mathbf{Y}\_{\alpha,1,l})^s] = \frac{\Gamma(1+\alpha l)\,\Gamma(1+s)}{\Gamma(1+\alpha(l+s))}\cdot$$
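For *m* = 1 the Gamma product defining the integer moments telescopes, which is how the simple expression above arises. A small numerical sketch (ours; the helper names `moment_product` and `mellin_closed` are hypothetical) checks the agreement at integer points:

```python
import math

def moment_product(alpha, l, n):
    """n-th moment of Y_{alpha,1,l}: n! times the Gamma product (case m = 1)."""
    p = 1.0
    for j in range(n):
        p *= math.gamma(alpha * (j + l) + 1) / math.gamma(alpha * (j + l + 1) + 1)
    return math.factorial(n) * p

def mellin_closed(alpha, l, s):
    """Closed form of the Mellin transform above, evaluated at s."""
    return math.gamma(1 + alpha * l) * math.gamma(1 + s) / math.gamma(1 + alpha * (l + s))

# The product telescopes to Gamma(1+alpha*l)/Gamma(1+alpha*(l+n)),
# so both expressions agree for every integer n >= 0.
```

The telescoping uses only the shift *j* → *j* + 1 in the product, which is special to *m* = 1.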

*(b) Another simplification occurs when l* + 1/*α* = *km for some integer k* ≥ 1. *One finds*

$$\mathbb{E}[(\mathbf{X}\_{\alpha,m,km-1/\alpha})^{s}] = \frac{[k;\delta]\_{s}\,[1+1/m;\delta]\_{s}}{[1;\delta]\_{s}\,[k+1/m;\delta]\_{s}} = \prod\_{j=1}^{k-1} \frac{(\alpha jm)\_{u}}{(\alpha(jm+1))\_{u}}$$

*for u* = *αms* ≥ 0, *which implies*

$$\mathbf{X}\_{\alpha,m,km-1/\alpha} \stackrel{d}{=} \left( \mathbf{B}\_{\alpha m,\alpha} \times \cdots \times \mathbf{B}\_{\alpha m(k-1),\alpha} \right)^{\alpha m}.$$

*In general, the law of the absolutely continuous random variable* **X**<sub>*α*,*m*,*l*</sub>, *valued in* [0, 1], *seems to have a complicated expression.*

*(c) As seen during the proof, the random variable* **Y**<sub>*α*,*m*,*l*</sub> *defined by the Bernstein representation*

$$E\_{\alpha,m,l}(-x) = \mathbb{E}[e^{-x\mathbf{Y}\_{\alpha,m,l}}],$$

*has Mellin transform*

$$\mathbb{E}[(\mathbf{Y}\_{\alpha,m,l})^s] = \Gamma(1+s) \times \frac{[(\alpha l+1)\delta; \delta]\_s}{[1/m + (\alpha l+1)\delta; \delta]\_s} \tag{3}$$

*with δ* = 1/*αm*, *for every s* > −1. *By Fubini's theorem, this implies the following exact computation, which seems unnoticed in the literature on the Kilbas-Saigo function.*

$$\int\_0^\infty E\_{\alpha,m,l}(-x)\, x^{s-1}\, dx = \Gamma(s)\, \mathbb{E}[\mathbf{Y}\_{\alpha,m,l}^{-s}] = \Gamma(s)\Gamma(1-s) \times \frac{[(\alpha l+1)\delta; \delta]\_{-s}}{[1/m + (\alpha l+1)\delta; \delta]\_{-s}} \tag{4}$$

*for every s* ∈ (0, 1). *For m* = 1, *we recover from (A1) the formula*

$$\int\_0^\infty E\_{\alpha,\beta}(-x)\, x^{s-1}\, dx = \frac{1}{\Gamma(\beta)} \int\_0^\infty E\_{\alpha,1,\frac{\beta-1}{\alpha}}(-x)\, x^{s-1}\, dx = \frac{\Gamma(s)\Gamma(1-s)}{\Gamma(\beta-\alpha s)}$$

*which is given in (4.10.3) of [2], as a consequence of the Mellin-Barnes representation of Eα*,*β*(*z*). *Notice that there is no such Mellin-Barnes representation for Eα*,*m*,*l*(*z*) *in general.*

#### **3. Uniform Hyperbolic Bounds**

In Theorem 4 of [5], the following uniform hyperbolic bounds are obtained for the classical Mittag–Leffler function:

$$\frac{1}{1+\Gamma(1-\alpha)x} \le E\_{\alpha}(-x) \le \frac{1}{1+\frac{x}{\Gamma(1+\alpha)}} \tag{5}$$

for every *α* ∈ [0, 1] and *x* ≥ 0. The constants in these inequalities are optimal because of the asymptotic behaviors

$$E\_{\alpha}(-x) \sim \frac{1}{\Gamma(1-\alpha)x} \quad \text{as } x \to \infty \qquad \text{and} \qquad 1 - E\_{\alpha}(-x) \sim \frac{x}{\Gamma(1+\alpha)} \quad \text{as } x \to 0.$$
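These bounds can be probed numerically by truncating the entire series of *E*<sub>*α*</sub>. The sketch below is ours, not from the paper (the function name `mittag_leffler_neg` and the truncation level are assumptions); it evaluates the terms in log space to avoid Gamma overflow and checks (5) on a small grid:

```python
import math

def mittag_leffler_neg(alpha, x, terms=400):
    """Truncated series for E_alpha(-x), x > 0, with terms computed in log space."""
    total = 1.0  # n = 0 term
    for n in range(1, terms):
        term = math.exp(n * math.log(x) - math.lgamma(1 + alpha * n))
        total += -term if n % 2 else term
    return total

# Check the hyperbolic bounds (5) on a small grid of (alpha, x).
for alpha in (0.3, 0.5, 0.8):
    for x in (0.1, 0.5, 1.0, 2.0):
        val = mittag_leffler_neg(alpha, x)
        assert 1 / (1 + math.gamma(1 - alpha) * x) <= val <= 1 / (1 + x / math.gamma(1 + alpha))
```

For *α* = 1/2 one can cross-check against the classical closed form *E*<sub>1/2</sub>(−*x*) = *e*<sup>*x*²</sup> erfc(*x*).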

See [11] and the references therein for some motivations on these hyperbolic bounds. In this section, we shall obtain analogous bounds for *E*<sub>*α*,*m*,*m*−1</sub>(−*x*) and *E*<sub>*α*,*m*,*m*−1/*α*</sub>(−*x*) with *α* ∈ [0, 1], *m* > 0. Those peculiar functions are associated with the fractional Weibull and Fréchet distributions defined in [3]. Specifically, we will use the following representations as a moment generating function, obtained respectively in (3.1) and (3.4) therein:

$$E\_{\alpha,m,m-1}(z) = \mathbb{E}\left[ \exp \left\{ z \int\_0^\infty \left( 1 - \sigma\_t^{(\alpha)} \right)\_+^{\alpha(m-1)} dt \right\} \right] \tag{6}$$

and

$$E\_{\alpha,m,m-\frac{1}{\alpha}}(z) = \mathbb{E}\left[\exp\left\{z\int\_0^\infty \left(1+\sigma\_t^{(\alpha)}\right)^{-\alpha(m+1)}dt\right\}\right] \tag{7}$$

for every *z* ∈ C, where {*σ*<sub>*t*</sub><sup>(*α*)</sup>, *t* ≥ 0} is the *α*−stable subordinator normalized such that

$$\mathbb{E}[e^{-\lambda \sigma\_t^{(\alpha)}}] = e^{-t\lambda^{\alpha}}, \qquad \lambda, t \ge 0.$$

Observe that these two formulæ specify the general Bernstein representation (2) in terms of the *α*−stable subordinator only. We begin with the following monotonicity properties, of independent interest.

**Proposition 1.** *Fix α* ∈ (0, 1] *and x* ∈ R. *The functions*

*m* → *E*<sub>*α*,*m*,*m*−1</sub>(*x*) *and m* → *E*<sub>*α*,*m*,*m*−1/*α*</sub>(*x*)

*are decreasing on* (0, ∞) *if x* > 0 *and increasing on* (0, ∞) *if x* < 0.

**Proof of Proposition 1.** This follows from (6) resp. (7), and the fact that *σ*<sub>*t*</sub><sup>(*α*)</sup> > 0 for every *t* > 0.

**Remark 3.** *It would be interesting to know if the same property holds for <sup>m</sup>* → *<sup>E</sup>α*,*m*,*m*−*l*(*x*) *and any l* ≤ 1/*α*. *In the case l* ∈ {1, 1/*α*}, *this would require from (2) a monotonicity analysis of the mapping m* → **X***α*,*m*,*m*−*l*, *which does not seem easy at first sight.*

As in [5], our analysis to obtain the uniform bounds will use some notions of stochastic ordering. Recall that if *X*,*Y* are real random variables such that E[*ϕ*(*X*)] ≤ E[*ϕ*(*Y*)] for every *ϕ* : R → R convex, then *Y* is said to dominate *X* for the convex order, a property which we denote by *X* ≺*cx Y*. Another ingredient in the proof is the following infinite independent product

$$\mathbf{T}(a,b,c) := \prod\_{n \ge 0} \left( \frac{a+nb+c}{a+nb} \right) \mathbf{B}\_{a+nb,c}.$$

We refer to Section 2.1 in [8] for more details on this infinite product, including the fact that it is a.s. convergent for every *a*, *b*, *c* > 0. We also mention from Proposition 2 in [8] that its Mellin transform is

$$\mathbb{E}[\mathbb{T}(a,b,c)^{s}] = \left(\frac{\Gamma(ab^{-1})}{\Gamma((a+c)b^{-1})}\right)^{s} \times \frac{[a+c;b]\_{s}}{[a;b]\_{s}}$$

for every *s* > −*a*. The following simple result on convex orderings for the above infinite independent products is of independent interest.

**Lemma 2.** *For every a*, *b*, *c* > 0 *and d* ≥ *c*, *one has*

$$\mathbf{T}(a,b,c) \prec\_{cx} \mathbf{T}(a,b,d).$$

**Proof of Lemma 2.** By the definition of **T**(*a*, *b*, *c*) and the stability of the convex order by mixtures—see Corollary 3.A.22 in [12], it is enough to show

$$(a+b)\mathbf{B}\_{a,b} \prec\_{cx} (a+c)\mathbf{B}\_{a,c}$$

for every *a*, *b* > 0 and *c* ≥ *b*. Using again Corollary 3.A.22 in [12] and the standard identity **B**<sub>*a*,*c*</sub> =<sup>*d*</sup> **B**<sub>*a*,*b*</sub> × **B**<sub>*a*+*b*,*c*−*b*</sub>, we are reduced to show

$$\frac{a+b}{a+c} = \mathbb{E}[\mathbf{B}\_{a+b,c-b}] \ \prec\_{cx}\ \mathbf{B}\_{a+b,c-b}$$

which is a consequence of Jensen's inequality.

The following result is a generalization of the inequalities (5), which deal with the case *m* = 1 only, to all Kilbas-Saigo functions *Eα*,*m*,*m*−1(−*x*). The argument is considerably simpler than in the original proof of (5).

**Theorem 2.** *For every α* ∈ [0, 1], *m* > 0 *and x* ≥ 0, *one has*

$$\frac{1}{1+\Gamma(1-\alpha)x} \le E\_{\alpha,m,m-1}(-x) \le \frac{1}{1+\frac{\Gamma(1+\alpha(m-1))}{\Gamma(1+\alpha m)}x}\cdot$$

**Proof of Theorem 2.** The first inequality is a consequence of Proposition 1, which implies, letting *m* → 0,

$$\begin{aligned} E\_{\alpha,m,m-1}(-x) &\ge \mathbb{E}\left[\exp\left\{-x\int\_{0}^{\infty} \left(1-\sigma\_{t}^{(\alpha)}\right)\_{+}^{-\alpha} dt\right\}\right] \\ &= \mathbb{E}\left[e^{-x\,\Gamma(1-\alpha)\,\mathbf{L}}\right] = \frac{1}{1+\Gamma(1-\alpha)\,x} \end{aligned}$$

for *x* ≥ 0, where the first equality follows from Theorem 1.2 (b) (ii) in [8]. For the second inequality, we come back to the infinite product representation

$$\int\_0^\infty \left(1 - \sigma\_t^{(\alpha)}\right)\_+^{\rho-\alpha} dt \ \stackrel{d}{=}\ \frac{\Gamma(\rho+1-\alpha)}{\Gamma(\rho+1)}\ \mathbf{T}(1, \rho^{-1}, (1-\alpha)\rho^{-1}),$$

which follows from Theorem 1.2 (b) (i) in [8], exactly as in the proof of Theorem 1.1 in [3]. Lemma 2 implies then

$$\int\_0^\infty \left(1 - \sigma\_t^{(\alpha)}\right)\_+^{\rho-\alpha} dt \ \prec\_{cx}\ \frac{\Gamma(\rho+1-\alpha)}{\Gamma(\rho+1)}\ \mathbf{T}(1, \rho^{-1}, \rho^{-1}) \stackrel{d}{=} \frac{\Gamma(\rho+1-\alpha)}{\Gamma(\rho+1)}\ \mathbf{L}$$

where the identity in law follows from (2.7) in [8]. Using (6) with *ρ* = *αm* and the convexity of *t* → *e*<sup>−*xt*</sup>, we obtain the required

$$E\_{\alpha,m,m-1}(-x) \le \frac{1}{1 + \frac{\Gamma(1+\alpha(m-1))}{\Gamma(1+\alpha m)}x}\cdot$$
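Theorem 2 lends itself to a direct numerical verification from the series defining *E*<sub>*α*,*m*,*m*−1</sub>. The helper below is our own sketch (the function name, truncation level, and parameter grid are assumptions), computing the coefficients through the Gamma-product formula in log space:

```python
import math

def kilbas_saigo_neg(alpha, m, l, x, terms=200):
    """Truncated series for E_{alpha,m,l}(-x), x > 0; coefficients via log-Gamma products."""
    total, log_c = 0.0, 0.0
    for n in range(terms):
        total += (-1) ** n * math.exp(log_c + n * math.log(x))
        log_c += math.lgamma(alpha * (n * m + l) + 1) - math.lgamma(alpha * (n * m + l + 1) + 1)
    return total

# Check the bounds of Theorem 2 for l = m - 1 on a small grid.
for alpha, m in ((0.3, 1.0), (0.5, 2.0), (0.7, 0.5)):
    for x in (0.25, 1.0, 2.0):
        val = kilbas_saigo_neg(alpha, m, m - 1, x)
        lower = 1 / (1 + math.gamma(1 - alpha) * x)
        upper = 1 / (1 + math.gamma(1 + alpha * (m - 1)) / math.gamma(1 + alpha * m) * x)
        assert lower <= val <= upper
```

For *m* = 1 and *l* = 0 the coefficients telescope to 1/Γ(1 + *αn*), so the routine reduces to the classical Mittag–Leffler series.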

**Remark 4.** *(a) As for the classical case m* = 1*, these bounds are optimal because of the asymptotic behaviors*

$$1 - E\_{\alpha,m,m-1}(-x) \sim \frac{\Gamma(1 + \alpha(m-1))}{\Gamma(1 + \alpha m)}\, x \quad \text{as } x \to 0$$

*and*

$$E\_{\alpha,m,m-1}(-x) \sim \frac{1}{\Gamma(1-\alpha)x} \quad \text{as } x \to \infty.$$

*The behavior at zero is plain from the definition, whereas the behavior at infinity will be given after Remark 6 below.*

*(b) It is easy to check that the above proof also yields the upper bound*

$$E\_{\alpha,m,m-1}(x) \le \frac{1}{(1 - \Gamma(1-\alpha)x)\_+}$$

*for every α* ∈ [0, 1], *m* > 0 *and x* ≥ 0, *which seems unnoticed even in the classical case m* = 1.

Our next result is a uniform hyperbolic upper bound for the Kilbas-Saigo function *E*<sub>*α*,*m*,*m*−1/*α*</sub>(−*x*), with a power exponent which will be shown to be optimal in Remark 8 (c) below, and also an optimal constant because

$$1 - E\_{\alpha,m,m-\frac{1}{\alpha}}(-x) \sim \left(1 + \frac{1}{m}\right) \times \frac{\Gamma(1+\alpha m)}{\Gamma(1+\alpha(m+1))}\, x \quad \text{as } x \to 0.$$

**Proposition 2.** *For every α* ∈ (0, 1], *m* > 0 *and x* ≥ 0, *one has*

$$E\_{\alpha,m,m-\frac{1}{\alpha}}(-x) \le \frac{1}{\left(1 + \frac{\Gamma(1+\alpha m)}{\Gamma(1+\alpha(m+1))}x\right)^{1+\frac{1}{m}}}\cdot$$

**Proof of Proposition 2.** The inequality is derived by convex ordering as in Theorem 2: setting, here and throughout, **Γ***<sup>a</sup>* for a Gamma random variable with parameter *a* > 0, one has

$$\begin{aligned} \int\_{0}^{\infty} \left(1 + \sigma\_{t}^{(\alpha)}\right)^{-\rho-\alpha} dt &\stackrel{d}{=} \frac{\Gamma(\rho)}{\Gamma(\rho+\alpha)}\, \mathbf{T}(1+\alpha\rho^{-1}, \rho^{-1}, (1-\alpha)\rho^{-1}) \\ &\prec\_{cx} \frac{\Gamma(\rho)}{\Gamma(\rho+\alpha)}\, \mathbf{T}(1+\alpha\rho^{-1}, \rho^{-1}, \rho^{-1}) \stackrel{d}{=} \frac{\Gamma(\rho+1)}{\Gamma(\rho+1+\alpha)}\, \mathbf{\Gamma}\_{1+\frac{\alpha}{\rho}} \end{aligned}$$

where the first identity follows from Corollary 3 in [8] as in the proof of Theorem 1.1 in [3], the convex ordering from Lemma 2 and the second identity from (2.7) in [8]. Then, using (7) with *ρ* = *αm*, we get the required inequality.

As in Theorem 2, we believe that there is also a uniform lower bound, with a more complicated optimal constant which can be read off from the asymptotic behavior of the density at zero obtained in Proposition 7 below:

**Conjecture 3.** *For every α* ∈ (0, 1], *m* > 0 *and x* ≥ 0, *one has*

$$E\_{\alpha,m,m-\frac{1}{\alpha}}(-x) \ge \frac{1}{\left(1+(\alpha m)^{-\frac{\alpha}{m+1}}(\Gamma(1+\alpha)\, G(1-\alpha;\alpha m)\, G(1+\alpha;\alpha m))^{-\frac{m}{m+1}}x\right)^{1+\frac{1}{m}}}\cdot \tag{8}$$

Unfortunately, the proof of this general inequality still eludes us. The monotonicity property observed in Proposition 1 does not help here, giving only the trivial lower bound zero. The discrete factorizations which are used in [5] are also more difficult to handle in this context, because the Mellin transform underlying *E*<sub>*α*,*m*,*m*−1/*α*</sub> is expressed in terms of generalized Pochhammer symbols. In the case *m* = 1, we could however get a proof of (8). The argument relies on the following representation, observed in Remarks 3.1 (d) and 3.3 (c) of [3]:

$$E\_{\alpha,1,1-\frac{1}{\alpha}}(z) = \Gamma(\alpha)E\_{\alpha,\alpha}(z) = \Gamma(1+\alpha)E\_{\alpha}'(z) = \Gamma(1+\alpha)\,\mathbb{E}\left[T\_{\alpha}\, e^{zT\_{\alpha}}\right] = \mathbb{E}\left[e^{zT\_{\alpha}^{(1)}}\right] \tag{9}$$

for every *z* ∈ C, where *T*<sub>*α*</sub> = inf{*t* > 0, *σ*<sub>*t*</sub><sup>(*α*)</sup> > 1} is the first-passage time above one of the *α*−stable subordinator and *T*<sub>*α*</sub><sup>(1)</sup> its usual size-bias of order one.

**Proposition 3.** *For every α* ∈ (0, 1) *and x* ≥ 0, *one has*

$$E\_{\alpha,1,1-\frac{1}{\alpha}}(-x) \ge \frac{1}{\left(1+\sqrt{\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)}}\,x\right)^2}\cdot$$

**Proof of Proposition 3.** By (9) and since

$$\mathbb{E}\left[e^{-x\,\mathbf{\Gamma}\_2}\right] = \frac{1}{(1+x)^2}$$

for every *x* ≥ 0, it is enough to show, reasoning exactly as in the proof of Theorem 4 in [5], that

$$T\_{\alpha}^{(1)} \prec\_{st} \sqrt{\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)}}\ \mathbf{\Gamma}\_{2}, \tag{10}$$

where ≺*st* stands for the usual stochastic order between two real random variables. Recall that *X* ≺*st Y* means P[*X* ≥ *x*] ≤ P[*Y* ≥ *x*] for every *x* ∈ R. Since *T*<sub>1/2</sub> =<sup>*d*</sup> 2 **Γ**<sub>1/2</sub><sup>1/2</sup>, the case *α* = 1/2 is explicit and the stochastic ordering can be obtained directly. More precisely, the densities of both random variables in (10) are respectively given by

$$\frac{x}{2}\ e^{-x^2/4} \qquad \text{and} \qquad \frac{x}{2}\ e^{-x/\sqrt{2}}$$

on (0, ∞), where they cross only once at *x* = 2√2. It is a well-known and an easy result that this single intersection property yields (10)—see Theorem 1.A.12 in [12].

The argument for the case *α* ≠ 1/2 is somewhat analogous, but the details are more elaborate because the density of *T*<sub>*α*</sub><sup>(1)</sup> is not explicit anymore. We proceed as in Theorem C of [5] and first consider the case where *α* is rational. Setting *α* = *p*/*n* with *n* > *p* positive integers and *X*<sub>*α*</sub> = *T*<sub>*α*</sub><sup>(1)</sup> we have, on the one hand,

$$\begin{aligned} \mathbb{E}[(X\_{\alpha})^{ns}] &= \frac{\mathbb{E}[(T\_{\alpha})^{1+ns}]}{\mathbb{E}[T\_{\alpha}]} \\ &= \frac{\Gamma(2+ns)\,\Gamma(1+pn^{-1})}{\Gamma(1+pn^{-1}+ps)} \\ &= \frac{n^{ns}}{p^{ps}} \times \mathbb{E}\left[\left(\mathbf{B}\_{\frac{2}{n},\frac{1}{p}-\frac{1}{n}}\right)^{s}\right] \times \frac{\prod\_{i=3}^{n+1}(in^{-1})\_{s}}{\prod\_{j=2}^{p}(jp^{-1}+n^{-1})\_{s}} \end{aligned}$$

for every $s > -2n^{-1}$, where we have used the well-known identity $T_{\alpha} \stackrel{d}{=} (\sigma^{(\alpha)}_1)^{-\alpha}$ in the second equality, whereas in the third equality we have used repeatedly the Legendre–Gauss multiplication formula for the Gamma function—see e.g., Theorem 1.5.2 in [13]. The same formula implies, on the other hand,

$$\begin{aligned} \mathbb{E}\left[\left(\sqrt{\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)}}\,\mathbf{\Gamma}_2\right)^{ns}\right] &= \frac{n^{ns}\kappa_{\alpha}^{s}}{p^{ps}} \times \mathbb{E}\left[\left(\mathbf{\Gamma}_{\frac{2}{n}}\right)^{s}\right] \times \left(\prod_{i=3}^{n+1}(in^{-1})_{s}\right) \\ &= \frac{n^{ns}}{p^{ps}} \times \mathbb{E}\left[\left(\kappa_{\alpha} \times \mathbf{\Gamma}_{\frac{2}{n}} \times \prod_{j=2}^{p}\mathbf{\Gamma}_{\frac{j}{p}+\frac{1}{n}}\right)^{s}\right] \times \frac{\prod_{i=3}^{n+1}(in^{-1})_{s}}{\prod_{j=2}^{p}(jp^{-1}+n^{-1})_{s}} \end{aligned}$$

for every $s > -2n^{-1}$, with the notation

$$\kappa_{\alpha} = \left(\prod_{i=1}^{p} \frac{\Gamma(ip^{-1}-n^{-1})}{\Gamma(ip^{-1}+n^{-1})}\right)^{\frac{n}{2}}.$$

Since

$$\frac{\prod_{i=3}^{n+1}(in^{-1})_{s}}{\prod_{j=2}^{p}(jp^{-1}+n^{-1})_{s}} = \mathbb{E}\left[\left(\prod_{i=2}^{p}\mathbf{B}_{\frac{i+1}{n},\frac{i}{p}-\frac{i}{n}} \times \prod_{j=p+1}^{n}\mathbf{\Gamma}_{\frac{j+1}{n}}\right)^{s}\right]$$

for every $s > -3n^{-1}$, by factorization and Theorem 1.A.3 (d) in [12], we are finally reduced to showing

$$\mathbf{B}_{\frac{2}{n},\frac{1}{p}-\frac{1}{n}} \ \prec_{st}\ \left(\prod_{i=1}^{p} \frac{\Gamma(ip^{-1}-n^{-1})}{\Gamma(ip^{-1}+n^{-1})}\right)^{\frac{n}{2}} \times \mathbf{\Gamma}_{\frac{2}{n}} \times \prod_{j=2}^{p}\mathbf{\Gamma}_{\frac{j}{p}+\frac{1}{n}}$$

for every *n* > *p* positive integers. The above inequality is equivalent to

$$\left(\mathbf{B}_{\frac{2}{n},\frac{1}{p}-\frac{1}{n}}\right)^{\frac{2}{n}} \ \prec_{st}\ \left(\prod_{i=1}^{p} \frac{\Gamma(ip^{-1}-n^{-1})}{\Gamma(ip^{-1}+n^{-1})}\right) \times \left(\mathbf{\Gamma}_{\frac{2}{n}} \times \prod_{j=2}^{p}\mathbf{\Gamma}_{\frac{j}{p}+\frac{1}{n}}\right)^{\frac{2}{n}}$$

and this is proved via the single intersection property exactly as for (5.1) in [5]: the random variable on the left-hand side has an increasing density on (0, 1), whereas the random variable on the right-hand side has a decreasing density on (0, ∞), both densities having the same positive finite value at zero. We omit the details. This completes the proof of (10) when *α* is rational. The case when *α* is irrational then follows by a density argument.

**Remark 5.** *It is easy to check from (A5) and (A6) that*

$$\frac{\Gamma(1+\alpha)}{\Gamma(1-\alpha)} = \alpha^{\alpha}\,\Gamma(1+\alpha)\,G(1-\alpha;\alpha)\,G(1+\alpha;\alpha),$$

*so that Proposition 3 leads to (8) for m* = 1, *in accordance with the estimate (13). In general, however, the absence of a tractable complement formula for the product G*(1 − *α*; *δ*) *G*(1 + *α*; *δ*) *makes the constant in (8) more difficult to handle.*

Our last result in this section gives optimal uniform hyperbolic bounds for the generalized Mittag–Leffler functions *Eα*,*β*(−*x*) whenever they are completely monotone, that is for *β* ≥ *α*—see the above Remark 2 (a). This can be viewed as another generalization of (5).

**Proposition 4.** *For every α* ∈ (0, 1], *β* ≥ *α and x* ≥ 0, *one has*

$$\frac{1}{\left(1+\sqrt{\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)}}\,x\right)^2} \ \le\ \Gamma(\alpha)\,E_{\alpha,\alpha}(-x) \ \le\ \frac{1}{\left(1+\frac{\Gamma(1+\alpha)}{\Gamma(1+2\alpha)}\,x\right)^2}$$

*and*

$$\frac{1}{1+\frac{\Gamma(\beta-\alpha)}{\Gamma(\beta)}\,x} \ \le\ \Gamma(\beta)\,E_{\alpha,\beta}(-x) \ \le\ \frac{1}{1+\frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}\,x}\cdot$$

**Proof of Proposition 4.** The bounds for $E_{\alpha,\alpha}(-x)$ are a direct consequence of (9), Proposition 2 and Proposition 3. Notice that letting $\alpha \to 1$ leads to the trivial bound $0 \le e^{-x} \le (2/(2+x))^2$. To handle the bounds for *β* > *α*, we first recall from Remark 2 (a) that

$$\Gamma(\beta)\,E_{\alpha,\beta}(-x) \ = \ E_{\alpha,1,\frac{\beta-1}{\alpha}}(-x) \ = \ \mathbb{E}\left[e^{-x\,\mathbf{Y}_{\alpha,1,l}}\right],$$

with $l = (\beta-1)/\alpha > 1 - 1/\alpha$ and $\mathbf{Y}_{\alpha,1,l} \stackrel{d}{=} \mathbf{B}^{\alpha}_{\alpha,\beta-\alpha} \times T^{(1)}_{\alpha}$. Moreover, one has

$$\mathbb{E}[(\mathbf{Y}_{\alpha,1,l})^{s}] = \frac{\Gamma(1+s)\,\Gamma(\beta)}{\Gamma(\beta+\alpha s)} \tag{11}$$


for every $s > -1$, which implies the factorization $\mathbf{L} \stackrel{d}{=} \mathbf{Y}_{\alpha,1,l} \times (\mathbf{\Gamma}_{\beta})^{\alpha}$. Since, by Jensen's inequality,

$$\frac{\Gamma(\beta+\alpha)}{\Gamma(\beta)} = \mathbb{E}\left[(\mathbf{\Gamma}_{\beta})^{\alpha}\right] \ \prec_{cx}\ (\mathbf{\Gamma}_{\beta})^{\alpha},$$

we deduce from Corollary 3.A.22 in [12] the convex ordering

$$\mathbf{Y}_{\alpha,1,l} \ \prec_{cx}\ \frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}\;\mathbf{L}$$

which, as above, implies

$$\Gamma(\beta)\,E_{\alpha,\beta}(-x) \ \le\ \frac{1}{1+\frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}\,x}$$

for every *x* ≥ 0.

The argument for the other inequality is analogous to that of Proposition 3. By density, we only need to consider the case *α* = *p*/*n* and *β* = (*p* + *q*)/*n* with *p* < *n* and *q* positive integers. By (11) and the Legendre-Gauss multiplication formula, we obtain

$$\mathbb{E}[(\mathbf{Y}_{\alpha,1,l})^{ns}] = \frac{n^{ns}}{p^{ps}} \times \mathbb{E}\left[\left(\mathbf{B}_{\frac{1}{n},\frac{q}{np}}\right)^{s}\right] \times \frac{\prod_{i=2}^{n}(in^{-1})_{s}}{\prod_{j=1}^{p-1}(jp^{-1}+(p+q)(np)^{-1})_{s}}$$

for every $s > -n^{-1}$. On the other hand, one has

$$\mathbb{E}\left[\left(\frac{\Gamma(\beta-\alpha)}{\Gamma(\beta)}\,\mathbf{L}\right)^{ns}\right] = \frac{n^{ns}}{p^{ps}}\,\mathbb{E}\left[\left(\kappa_{\alpha,\beta}\times\mathbf{\Gamma}_{\frac{1}{n}}\times\prod_{j=1}^{p-1}\mathbf{\Gamma}_{\frac{j}{p}+\frac{p+q}{np}}\right)^{s}\right]\times\frac{\prod_{i=2}^{n}(in^{-1})_{s}}{\prod_{j=1}^{p-1}(jp^{-1}+(p+q)(np)^{-1})_{s}}$$

with

$$\kappa_{\alpha,\beta} = p^{p}\left(\frac{\Gamma(qn^{-1})}{\Gamma((p+q)n^{-1})}\right)^{n}.$$

Comparing these two formulæ, we are reduced to showing

$$\left(\mathbf{B}_{\frac{1}{n},\frac{q}{np}}\right)^{\frac{1}{n}} \ \prec_{st}\ p^{\frac{p}{n}}\left(\frac{\Gamma(qn^{-1})}{\Gamma((p+q)n^{-1})}\right) \times \left(\mathbf{\Gamma}_{\frac{1}{n}} \times \prod_{j=1}^{p-1}\mathbf{\Gamma}_{\frac{j}{p}+\frac{p+q}{np}}\right)^{\frac{1}{n}}$$

for every *p* < *n* and *q* positive integers. This is obtained in the same way as above via the single intersection property. We leave the details to the reader.
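The bounds of Proposition 4 can be probed numerically from the defining series of the two-parameter Mittag–Leffler function. The sketch below sums that series directly (only reliable in double precision for moderate *x*); the parameter pairs and the grid are arbitrary test choices.

```python
import math

def ml(alpha, beta, x, nmax=400):
    """Two-parameter Mittag-Leffler E_{alpha,beta}(-x) by direct summation,
    adequate for the moderate x used below."""
    s = 0.0
    for n in range(nmax):
        a = alpha * n + beta
        if a > 170:      # math.gamma overflows past ~171; remaining terms are negligible here
            break
        s += (-1) ** n * x ** n / math.gamma(a)
    return s

def check_bounds(alpha, beta):
    g = math.gamma
    for k in range(31):
        x = 0.1 * k
        v = g(beta) * ml(alpha, beta, x)
        if beta == alpha:    # first display: squared hyperbolic bounds
            lo = 1 / (1 + math.sqrt(g(1 - alpha) / g(1 + alpha)) * x) ** 2
            hi = 1 / (1 + g(1 + alpha) / g(1 + 2 * alpha) * x) ** 2
        else:                # second display: hyperbolic bounds for beta > alpha
            lo = 1 / (1 + g(beta - alpha) / g(beta) * x)
            hi = 1 / (1 + g(beta) / g(beta + alpha) * x)
        assert lo - 1e-9 <= v <= hi + 1e-9, (alpha, beta, x)

for a, b in [(0.5, 0.5), (0.5, 1.0), (0.7, 0.7), (0.7, 1.5)]:
    check_bounds(a, b)
print("Proposition 4 bounds hold on the sampled grid")
```

For (α, β) = (1/2, 1), the quantity tested is $e^{x^2}\mathrm{erfc}(x)$, so the check also exercises the classical hyperbolic bounds for the complementary error function.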

#### **4. Asymptotic Behavior of Fractional Extreme Densities**

In this section, which is a complement to [3], we study the behavior of the density functions of the fractional Weibull and Fréchet distributions at both ends of their support. To this end, we also evaluate their Mellin transforms in terms of Barnes' double Gamma function. Along the way, we give the exact asymptotics of *x* → *Eα*,*m*,*l*(*x*) on the negative half-line, in the completely monotonic case *α* ∈ [0, 1] and *l* ≥ *m* − 1/*α*.

#### *4.1. The Fractional Weibull Case*

In [3], a fractional Weibull distribution function with parameters *α* ∈ [0, 1] and *λ*, *ρ* > 0 is defined as the unique distribution function $F^{\mathbf{W}}_{\alpha,\lambda,\rho}$ on (0, ∞) solving the fractional differential equation

$$\mathbf{D}^{\alpha}_{0+}F(x) \ = \ \lambda\,x^{\rho-\alpha}\bar{F}(x),$$

where $\bar{F} = 1-F$ denotes the associated survival function and $\mathbf{D}^{\alpha}_{0+}$ a progressive Liouville fractional derivative on (0, ∞). The case *α* = 1 corresponds to the standard Weibull distribution. In [3], it is shown that this distribution function exists and is given by

$$F^{\mathbf{W}}_{\alpha,\lambda,\rho}(x) \ = \ 1 \,-\, E_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}(-\lambda x^{\rho})$$

for every *x* ≥ 0—see the formula following (3.1) in [3]. In particular, the density $f^{\mathbf{W}}_{\alpha,\lambda,\rho}$ is real-analytic on (0, ∞) and has the following asymptotic behavior at zero:

$$f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x) \sim \left(\frac{\lambda\,\Gamma(\rho+1-\alpha)}{\Gamma(\rho)}\right)x^{\rho-1} \quad\text{as } x\to 0.$$

The behavior of $f^{\mathbf{W}}_{\alpha,\lambda,\rho}$ at infinity is however less immediate, and to this aim we will need an exact expression for the Mellin transform of the random variable $\mathbf{W}_{\alpha,\lambda,\rho}$ with distribution function $F^{\mathbf{W}}_{\alpha,\lambda,\rho}$, which has an interest in itself.

**Proposition 5.** *The Mellin transform of* $\mathbf{W}_{\alpha,\lambda,\rho}$ *is*

$$\mathbb{E}\left[\mathbf{W}^{s}_{\alpha,\lambda,\rho}\right] = \left(\frac{\rho^{\alpha}}{\lambda}\right)^{\frac{s}{\rho}}\Gamma(1+s\rho^{-1}) \times \frac{[\rho+(1-\alpha);\rho]_{-s}}{[\rho;\rho]_{-s}}$$

*for every s* ∈ (−*ρ*, *ρ*). *Consequently, one has*

$$f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x) \sim \left(\frac{\rho}{\lambda\,\Gamma(1-\alpha)}\right)x^{-\rho-1} \quad\text{as } x\to\infty.$$

**Proof of Proposition 5.** We start with a more concise expression of (3) for *l* = *m* − 1, which is a direct consequence of (A9):

$$\mathbb{E}[(\mathbf{Y}_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1})^{s}] = \rho^{-s} \times \frac{[1+(1-\alpha)\rho^{-1};\rho^{-1}]_{s}}{[1;\rho^{-1}]_{s}}\cdot$$

By Theorem 1.1 in [3] and using the notations therein, we deduce

$$\begin{aligned} \mathbb{E}\left[\mathbf{W}^{s}_{\alpha,\lambda,\rho}\right] &= \mathbb{E}\left[\left(\frac{\mathbf{L}}{\lambda\,\mathbf{Y}_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}}\right)^{\frac{s}{\rho}}\right] \\ &= \left(\frac{\rho}{\lambda}\right)^{\frac{s}{\rho}}\Gamma(1+s\rho^{-1}) \times \frac{[1+(1-\alpha)\rho^{-1};\rho^{-1}]_{-s\rho^{-1}}}{[1;\rho^{-1}]_{-s\rho^{-1}}} \\ &= \left(\frac{\rho^{\alpha}}{\lambda}\right)^{\frac{s}{\rho}}\Gamma(1+s\rho^{-1}) \times \frac{[\rho+(1-\alpha);\rho]_{-s}}{[\rho;\rho]_{-s}} \end{aligned}$$

for every *s* ∈ (−*ρ*, *ρ*) as required, where the third equality comes from (A8). The asymptotic behavior of the density at infinity is then a standard consequence of Mellin inversion. First, we observe from the above formula and (A10) that the first positive pole of $s \mapsto \mathbb{E}\left[\mathbf{W}^{s}_{\alpha,\lambda,\rho}\right]$ is simple and isolated in the complex plane at *s* = *ρ*, with

$$\begin{aligned} \mathbb{E}\left[\mathbf{W}^{s}_{\alpha,\lambda,\rho}\right] &\sim \left(\frac{\rho^{\alpha}}{\lambda}\right)\times\frac{[\rho+(1-\alpha);\rho]_{-\rho}}{[\rho;\rho]_{-s}} \\ &\sim \left(\frac{\rho^{\rho+\alpha}}{\lambda}\right)\times\frac{[\rho+(1-\alpha);\rho]_{-\rho}}{[2\rho;\rho]_{-\rho}}\times(\rho)_{-s} \ = \ \frac{\rho\,\Gamma(\rho-s)}{\lambda\,\Gamma(1-\alpha)} \ \sim\ \frac{\rho}{\lambda\,\Gamma(1-\alpha)\,(\rho-s)} \end{aligned}$$

as *s* ↑ *ρ*, where the second asymptotics comes from (A9) and the equality from (A5). Therefore, applying Theorem 4 (ii) in [14]—beware the correction $(\log x)^{k} \to (\log x)^{k-1}$ to be made in the expansion of *f*(*x*) therein—we obtain

$$f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x) \sim \left(\frac{\rho}{\lambda\,\Gamma(1-\alpha)}\right)x^{-\rho-1} \quad\text{as } x\to\infty,$$
as required.

**Remark 6.** *(a) Another proof of the asymptotic behavior at infinity can be obtained from that of the so-called generalized stable densities. More precisely, using the identity in law on top of p. 12 in [3] and the notation therein, we see by multiplicative convolution, having set* $f^{\mathcal{G}}_{\alpha,\rho}$ *for the density of the generalized stable random variable* $\mathcal{G}(\rho+1-\alpha, 1-\alpha)$, *that*

$$\begin{aligned} f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x) &= \lambda\,x^{\rho-1}\int_{0}^{\infty} f^{\mathcal{G}}_{\alpha,\rho}(y)\,y^{-\rho}\,e^{-\frac{\lambda}{\rho}\left(\frac{x}{y}\right)^{\rho}}\,dy \\ &= \left(\frac{\lambda}{\rho}\right)^{\frac{1}{\rho}}\int_{0}^{\infty} f^{\mathcal{G}}_{\alpha,\rho}\left(x(\rho\lambda^{-1}t)^{-\frac{1}{\rho}}\right)t^{-\frac{1}{\rho}}\,e^{-t}\,dt \\ &\sim \left(\frac{\rho}{\lambda\,\Gamma(1-\alpha)}\int_{0}^{\infty} t\,e^{-t}\,dt\right)x^{-\rho-1} \ = \ \left(\frac{\rho}{\lambda\,\Gamma(1-\alpha)}\right)x^{-\rho-1} \end{aligned}$$

*as x* → ∞, *where for the asymptotics we have used the Proposition in [15] and a direct integration. This argument does not make use of Mellin inversion and is overall simpler than the above. However, it does not carry over to the fractional Fréchet case.*

*(b) The Mellin transform simplifies for α* = 0 *and α* = 1*: using (A1) and (A6) we recover*

$$\mathbb{E}[\mathbf{W}^{s}_{0,\lambda,\rho}] = \lambda^{-\frac{s}{\rho}}\,\Gamma(1+s\rho^{-1})\,\Gamma(1-s\rho^{-1}) \qquad\text{and}\qquad \mathbb{E}[\mathbf{W}^{s}_{1,\lambda,\rho}] = \left(\frac{\rho}{\lambda}\right)^{\frac{s}{\rho}}\Gamma(1+s\rho^{-1})$$

*in accordance with the scaling property* $\mathbf{W}_{\alpha,\lambda,\rho} \stackrel{d}{=} \lambda^{-1/\rho}\,\mathbf{W}_{\alpha,1,\rho}$ *and the identities given at the bottom of p. 3 in [3]. The Mellin transform takes a simpler form in two other situations.*

• *For ρ* = *α*, *we obtain from (3), (A1) and (A5)*

$$\mathbb{E}[(\mathbf{Y}_{\alpha,1,0})^{s}] = \frac{\Gamma(1+s)}{\Gamma(1+\alpha s)} = \mathbb{E}[\mathbf{Z}_{\alpha}^{-\alpha s}],$$

*in accordance with Remark 3.1 (d) in [3]. This yields*

$$\mathbf{W}_{\alpha,\lambda,\alpha} \stackrel{d}{=} \left(\frac{\mathbf{L}}{\lambda\,\mathbf{Y}_{\alpha,1,0}}\right)^{\frac{1}{\alpha}} \stackrel{d}{=} \lambda^{-\frac{1}{\alpha}}\,\mathbf{Z}_{\alpha}\times\mathbf{L}^{\frac{1}{\alpha}},$$

*an identity which was already discussed for λ* = 1 *in the introduction of [3] as the solution to (1.3) therein. The Mellin transform reads*

$$\mathbb{E}[\mathbf{W}^{s}_{\alpha,\lambda,\alpha}] = \lambda^{-\frac{s}{\alpha}}\,\frac{\Gamma(1+s\alpha^{-1})\,\Gamma(1-s\alpha^{-1})}{\Gamma(1-s)}\cdot$$

• *For ρ* = 1 − *α*, *we obtain from (A5)*

$$\mathbb{E}[\mathbf{W}^{s}_{1-\rho,\lambda,\rho}] = \left(\frac{\rho}{\lambda}\right)^{\frac{s}{\rho}}\frac{\Gamma(1+s\rho^{-1})\,\Gamma(\rho-s)}{\Gamma(\rho)} \quad\text{and}\quad \mathbf{W}_{1-\rho,\lambda,\rho} \stackrel{d}{=} \left(\frac{\rho}{\lambda}\right)^{\frac{1}{\rho}}\mathbf{L}^{\frac{1}{\rho}}\times\mathbf{T}_{\rho}^{-1}.$$
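As a quick cross-check of the α = 1 formula in (b) above, one can compare a numerical Mellin transform of the standard Weibull density (for α = 1 the equation gives the survival function exp(−λx^ρ/ρ)) with the closed form; a sketch with arbitrary test parameters:

```python
import math

# alpha = 1: standard Weibull with survival function exp(-lam * x**rho / rho)
lam, rho, s = 2.0, 1.5, 0.7
pdf = lambda x: lam * x ** (rho - 1) * math.exp(-lam * x ** rho / rho)

# Mellin transform E[W^s] by trapezoidal quadrature on [0, 20]
h, N = 1e-3, 20000
mellin = h * (sum(pdf(k * h) * (k * h) ** s for k in range(1, N))
              + pdf(N * h) * (N * h) ** s / 2)
closed = (rho / lam) ** (s / rho) * math.gamma(1 + s / rho)
print(abs(mellin - closed) < 1e-4)  # True
```

The same substitution $u = \lambda x^{\rho}/\rho$ that makes the quadrature and the closed form agree is the one behind the identity $\mathbf{W}_{1,\lambda,\rho} \stackrel{d}{=} ((\rho/\lambda)\mathbf{L})^{1/\rho}$.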

*(c) The two cases ρ* = *α and ρ* = 1 − *α have a Mellin transform expressed as the quotient of a finite number of Gamma functions. This makes it possible to use a Mellin-Barnes representation of*

*the density to get its full asymptotic expansion at infinity. Using the standard notation of Definition C.1.1 in [13], one obtains*

$$f^{\mathbf{W}}_{\alpha,\lambda,\alpha}(x) \sim \sum_{n\geq 1}\frac{(-1)^{n+1}\,n\alpha\;x^{-1-n\alpha}}{\lambda^{n}\,\Gamma(1-n\alpha)} \qquad\text{and}\qquad f^{\mathbf{W}}_{1-\rho,\lambda,\rho}(x) \sim \frac{\rho\,x^{-\rho-1}}{\lambda\,\Gamma(\rho)}\sum_{n\geq 0}(-1)^{n}\,\frac{\Gamma\left(\frac{n}{\rho}+2\right)}{n!}\left(\frac{\lambda}{\rho}\right)^{-\frac{n}{\rho}}x^{-n}$$

*which are everywhere divergent. The first expansion can also be obtained from (1.8.28) in [16] using*

$$f^{\mathbf{W}}_{\alpha,\lambda,\alpha}(x) \ = \ \lambda\,x^{\alpha-1}E_{\alpha,\alpha}(-\lambda x^{\alpha}).$$

*Unfortunately, the Mellin transform of* $\mathbf{W}_{\alpha,\lambda,\rho}$ *might have poles of variable order, and it seems difficult to obtain a general formula for the full asymptotic expansion at infinity of* $f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x)$.

Writing

$$E_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}(-\lambda x^{\rho}) = \mathbb{P}[\mathbf{W}_{\alpha,\lambda,\rho} > x] \ = \int_{x}^{\infty} f^{\mathbf{W}}_{\alpha,\lambda,\rho}(y)\,dy,$$

we obtain by integration the following asymptotic behavior at infinity, which is valid for any *α* ∈ (0, 1] and *m* > 0:

$$E_{\alpha,m,m-1}(-x) \sim \frac{1}{\Gamma(1-\alpha)\,x} \qquad\text{as } x\to\infty.$$

This behavior, which turns out to be the same as that of the classical Mittag–Leffler function *Eα*(−*x*)—see e.g., (3.4.15) in [2]—explains why the constant in the lower bound of Theorem 2 is optimal—see the above Remark 4 (a). It is actually possible to get the exact behavior of *Eα*,*m*,*l*(−*x*) at infinity for any *α* ∈ (0, 1], *m* > 0 and *l* > *m* − 1/*α*. We include this result here since it seems to have passed unnoticed in the literature on Kilbas-Saigo functions.

**Proposition 6.** *For any α* ∈ [0, 1], *m* > 0 *and l* > *m* − 1/*α*, *one has*

$$E_{\alpha,m,l}(-x) \sim \frac{\Gamma(1+\alpha(l+1-m))}{\Gamma(1+\alpha(l-m))\,x} \qquad\text{as } x\to\infty.$$

**Proof of Proposition 6.** The case *α* = 0 is obvious since *E*0,*m*,*l*(*x*) = 1/(1 − *x*). For *α* ∈ (0, 1], setting *δ* = 1/*αm*, recall from (4) that for every *s* ∈ (0, 1) one has

$$\begin{aligned} \int_0^\infty E_{\alpha,m,l}(-x)\,x^{s-1}\,dx &= \Gamma(s)\Gamma(1-s) \times \frac{[(\alpha l+1)\delta;\delta]_{-s}}{[1/m+(\alpha l+1)\delta;\delta]_{-s}} \\ &\sim\ \frac{[(\alpha l+1)\delta;\delta]_{-1}}{[1/m+(\alpha l+1)\delta;\delta]_{-1}\,(1-s)} \ = \ \frac{\Gamma(1+\alpha(l+1-m))}{\Gamma(1+\alpha(l-m))\,(1-s)} \end{aligned}$$

as *s* ↑ 1, where in the equality we have used the concatenation formula (A1). The asymptotic behavior then follows by Mellin inversion as in the proof of Proposition 5.
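The *m* = 1 case can be checked mechanically. Assuming the standard series definition of the Kilbas-Saigo function, with coefficients $c_0 = 1$ and $c_{n+1} = c_n\,\Gamma(\alpha(nm+l)+1)/\Gamma(\alpha(nm+l+1)+1)$, the product telescopes for *m* = 1 to $c_n = \Gamma(\beta)/\Gamma(\alpha n+\beta)$ with $\beta = 1+\alpha l$, recovering $E_{\alpha,1,l} = \Gamma(\beta)E_{\alpha,\beta}$ and hence the constant of Proposition 6 as $\Gamma(\beta)/\Gamma(\beta-\alpha)$:

```python
import math

def ks_coeffs(alpha, m, l, N):
    # Kilbas-Saigo coefficients, assuming the standard recursion
    # c_0 = 1,  c_{n+1} = c_n * Gamma(alpha(nm+l)+1) / Gamma(alpha(nm+l+1)+1)
    c, out = 1.0, [1.0]
    for n in range(N):
        c *= math.gamma(alpha * (n * m + l) + 1) / math.gamma(alpha * (n * m + l + 1) + 1)
        out.append(c)
    return out

# for m = 1 and l = (beta-1)/alpha the product telescopes to
# c_n = Gamma(beta)/Gamma(alpha*n + beta)
alpha, beta = 0.6, 1.8
coeffs = ks_coeffs(alpha, 1.0, (beta - 1) / alpha, 15)
closed = [math.gamma(beta) / math.gamma(alpha * n + beta) for n in range(16)]
print(max(abs(a - b) for a, b in zip(coeffs, closed)))  # ~0
```

The parameter values are arbitrary; any α ∈ (0, 1] and β > 0 telescope in the same way.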

**Remark 7.** *In the boundary case l* = *m* − 1/*α*, *the behavior of* $E_{\alpha,m,m-1/\alpha}(-x)$ *at infinity, which has a different speed and a more complicated constant, will be obtained with the help of the fractional Fréchet distribution—see Remark 8 (c) below.*

We end this paragraph with the following conjecture, which is natural in view of Proposition 6. We know by Theorem 2 and Proposition 4, respectively, that this conjecture is true in the cases *l* = *m* − 1 and *m* = 1.

**Conjecture 4.** *For every α* ∈ (0, 1], *m* > 0, *l* > *m* − 1/*α and x* ≥ 0, *one has*

$$\frac{1}{1+\frac{\Gamma(1+\alpha(l-m))}{\Gamma(1+\alpha(l-m+1))}\,x} \ \le\ E_{\alpha,m,l}(-x) \ \le\ \frac{1}{1+\frac{\Gamma(1+\alpha l)}{\Gamma(1+\alpha(1+l))}\,x}\cdot$$
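The conjecture is easy to probe numerically at moderate arguments, again assuming the standard coefficient recursion for $E_{\alpha,m,l}$; the parameter triples below are arbitrary admissible choices (*l* > *m* − 1/*α*), and the grid is illustrative:

```python
import math

def ks(alpha, m, l, x, N=120):
    # Kilbas-Saigo E_{alpha,m,l}(-x), truncated series (standard coefficient
    # recursion assumed); adequate for the small x probed here
    c, s = 1.0, 1.0
    for n in range(N):
        c *= math.gamma(alpha * (n * m + l) + 1) / math.gamma(alpha * (n * m + l + 1) + 1)
        s += c * (-x) ** (n + 1)
    return s

g = math.gamma
for alpha, m, l in [(0.6, 2.0, 1.7), (0.4, 1.5, 0.5), (0.9, 0.7, 0.3)]:
    lo_c = g(1 + alpha * (l - m)) / g(1 + alpha * (l - m + 1))
    hi_c = g(1 + alpha * l) / g(1 + alpha * (l + 1))
    for k in range(31):
        x = 0.1 * k
        v = ks(alpha, m, l, x)
        assert 1 / (1 + lo_c * x) - 1e-9 <= v <= 1 / (1 + hi_c * x) + 1e-9
print("conjectured bounds hold at all sampled points")
```

Note that the conjectured lower constant reproduces the exact asymptotic constant of Proposition 6, so the lower bound would be optimal at infinity.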

*4.2. The Fréchet Case*

In [3], a fractional Fréchet distribution function with parameters *α* ∈ [0, 1] and *λ*, *ρ* > 0 is defined as the unique distribution function $F^{\mathbf{F}}_{\alpha,\lambda,\rho}$ on (0, ∞) solving the fractional differential equation

$$\mathbf{D}^{\alpha}_{-}F(x) \ = \ \lambda\,x^{-\rho-\alpha}F(x),$$

where $\mathbf{D}^{\alpha}_{-}$ denotes a regressive Liouville fractional derivative on (0, ∞). The case *α* = 1 corresponds to the standard Fréchet distribution. In [3], it is shown that this distribution function exists and is given by

$$F^{\mathbf{F}}_{\alpha,\lambda,\rho}(x) \ = \ E_{\alpha,\frac{\rho}{\alpha},\frac{\rho-1}{\alpha}}(-\lambda x^{-\rho}),$$

for every *x* ≥ 0—see the formula following (3.4) in [3]. In particular, the density $f^{\mathbf{F}}_{\alpha,\lambda,\rho}$ is real-analytic on (0, ∞) and has the following asymptotic behavior at infinity:

$$f^{\mathbf{F}}_{\alpha,\lambda,\rho}(x) \sim \left(\frac{\lambda\,\Gamma(\rho+1)}{\Gamma(\rho+\alpha)}\right)x^{-\rho-1} \quad\text{as } x\to\infty.$$

The behavior of the density at zero is less immediate and we will need, as in the above paragraph, the exact expression of the Mellin transform of the random variable $\mathbf{F}_{\alpha,\lambda,\rho}$ with distribution function $F^{\mathbf{F}}_{\alpha,\lambda,\rho}$, whose strip of analyticity is larger than that of $\mathbf{W}_{\alpha,\lambda,\rho}$.

**Proposition 7.** *The Mellin transform of* $\mathbf{F}_{\alpha,\lambda,\rho}$ *is*

$$\mathbb{E}\left[\mathbf{F}^{s}_{\alpha,\lambda,\rho}\right] = \left(\frac{\rho^{\alpha}}{\lambda}\right)^{-\frac{s}{\rho}}\Gamma(1-s\rho^{-1}) \times \frac{[\rho+1;\rho]_{s}}{[\rho+\alpha;\rho]_{s}}$$

*for every s* ∈ (−*ρ* − *α*, *ρ*). *Consequently, one has*

$$f^{\mathbf{F}}_{\alpha,\lambda,\rho}(x) \sim \left(\frac{\rho^{\frac{\alpha^2}{\rho}}(\rho+\alpha)}{\lambda^{1+\frac{\alpha}{\rho}}}\,\Gamma(1+\alpha)\,G(1-\alpha;\rho)\,G(1+\alpha;\rho)\right)x^{\rho+\alpha-1} \quad\text{as } x\to 0.$$

**Proof of Proposition 7.** The evaluation of the Mellin transform is done as for the fractional Weibull distribution, starting from the expression

$$\mathbb{E}[(\mathbf{Y}_{\alpha,\frac{\rho}{\alpha},\frac{\rho-1}{\alpha}})^{s}] = \rho^{-s} \times \frac{[1+\rho^{-1};\rho^{-1}]_{s}}{[1+\alpha\rho^{-1};\rho^{-1}]_{s}}$$

which is a consequence of (3) and (A5). By Theorem 1.2 in [3] and (A8), we obtain the required formula

$$\mathbb{E}\left[\mathbf{F}^{s}_{\alpha,\lambda,\rho}\right] = \mathbb{E}\left[\left(\frac{\mathbf{L}}{\lambda\,\mathbf{Y}_{\alpha,\frac{\rho}{\alpha},\frac{\rho-1}{\alpha}}}\right)^{-\frac{s}{\rho}}\right] = \left(\frac{\rho^{\alpha}}{\lambda}\right)^{-\frac{s}{\rho}}\Gamma(1-s\rho^{-1}) \times \frac{[\rho+1;\rho]_{s}}{[\rho+\alpha;\rho]_{s}}.$$

Then the asymptotic behavior of $f^{\mathbf{F}}_{\alpha,\lambda,\rho}(x)$ at zero follows as that of $f^{\mathbf{W}}_{\alpha,\lambda,\rho}(x)$ at infinity, by considering the residue at the first negative pole *s* = −(*ρ* + *α*), which is simple and isolated in the complex plane, applying Theorem 4 (i) in [14] with the same correction as above, and making various simplifications. We omit the details.

**Remark 8.** *(a) Comparing the Mellin transforms, Propositions 5 and 7 imply the factorization*

$$\mathbf{W}^{-1}_{\alpha,\lambda,\rho} \stackrel{d}{=} \mathbf{F}_{\alpha,\lambda,\rho} \times \mathbf{Z}(\rho+1-\alpha,\rho+\alpha;\rho,\rho+1;\rho). \tag{12}$$

*In general, it follows from Theorem 1 that for every α* ∈ (0, 1], *m*, *λ* > 0 *and l* > *m* − 1/*α*, *there exists a positive random variable with distribution function* $E_{\alpha,m,l}(-\lambda x^{-\alpha m})$, *which is given by (3), (2) and Theorem 1.2 in [3] as the independent product*

$$\mathbf{F}_{\alpha,\lambda,\alpha m} \times (\mathbf{X}_{\alpha,m,l})^{\frac{1}{\alpha m}} \stackrel{d}{=} \mathbf{F}_{\alpha,\lambda,\alpha m} \times \mathbf{Z}(\alpha l+1,\alpha(m+1);\alpha m,\alpha(l+1)+1;\alpha m),$$

*where the identity in law follows from (A8). In this respect, the fractional Fréchet distributions can be viewed as the "ground state" distributions associated with the Kilbas-Saigo functions Eα*,*m*,*l*, *in the boundary case l* = *m* − 1/*α*.

*(b) As above, the Mellin transform simplifies for α* = 0, 1*: we get*

$$\mathbb{E}[\mathbf{F}^{s}_{0,\lambda,\rho}] = \lambda^{\frac{s}{\rho}}\,\Gamma(1+s\rho^{-1})\,\Gamma(1-s\rho^{-1}) \qquad\text{and}\qquad \mathbb{E}[\mathbf{F}^{s}_{1,\lambda,\rho}] = \left(\frac{\lambda}{\rho}\right)^{\frac{s}{\rho}}\Gamma(1-s\rho^{-1}),$$

*in accordance with the scaling property* $\mathbf{F}_{\alpha,\lambda,\rho} \stackrel{d}{=} \lambda^{1/\rho}\,\mathbf{F}_{\alpha,1,\rho}$ *and the identities given after the statement of Theorem 1.2 in [3]. The Mellin transform also takes a simpler form in the same other situations as above.*

• *For ρ* = *α*, *with*

$$\mathbb{E}[\mathbf{F}^{s}_{\alpha,\lambda,\alpha}] = \lambda^{\frac{s}{\alpha}}\,\frac{\Gamma(\alpha)\,\Gamma(1+s\alpha^{-1})\,\Gamma(1-s\alpha^{-1})}{\Gamma(\alpha+s)}\cdot$$

*This yields the identity* $\mathbf{F}_{\alpha,\lambda,\alpha} \stackrel{d}{=} \lambda^{\frac{1}{\alpha}}\,(\mathbf{Z}^{-1}_{\alpha})^{(\alpha)} \times \mathbf{L}^{-\frac{1}{\alpha}}$, *which was discussed for λ* = 1 *in the introduction of [3] as the solution to (1.4) therein. This is also in accordance with Remark 3.3 (c) in [3], since*

$$(T^{(1)}_{\alpha})^{\frac{1}{\alpha}} \stackrel{d}{=} ((\mathbf{Z}^{-\alpha}_{\alpha})^{(1)})^{\frac{1}{\alpha}} \stackrel{d}{=} (\mathbf{Z}^{-1}_{\alpha})^{(\alpha)}.$$

*Notice that the constant appearing in the asymptotic behavior of the density at zero is also simpler: one finds*

$$f^{\mathbf{F}}_{\alpha,\lambda,\alpha}(x) \sim \left(\frac{2\alpha\,\Gamma(1+\alpha)}{\lambda^{2}\,\Gamma(1-\alpha)}\right)x^{2\alpha-1} \quad\text{as } x\to 0. \tag{13}$$


• *For ρ* = 1 − *α*, *with*

$$\mathbb{E}[\mathbf{F}^{s}_{1-\rho,\lambda,\rho}] = \left(\frac{\lambda}{\rho}\right)^{\frac{s}{\rho}}\Gamma(1-s\rho^{-1})\,\Gamma(1+s) \quad\text{and}\quad \mathbf{F}_{1-\rho,\lambda,\rho} \stackrel{d}{=} \left(\frac{\lambda}{\rho}\right)^{\frac{1}{\rho}}\mathbf{L}^{-\frac{1}{\rho}}\times\mathbf{L}.$$

*Here, the density converges at zero to a simple constant: one finds*

$$f^{\mathbf{F}}_{1-\rho,\lambda,\rho}(x) \to \left(\frac{\rho}{\lambda}\right)^{\frac{1}{\rho}}\Gamma(1+\rho^{-1}) \qquad\text{as } x\to 0.$$

*(c) Integrating the density and using* $\mathbb{P}[\mathbf{F}_{\alpha,\lambda,\rho} \le x] = E_{\alpha,\frac{\rho}{\alpha},\frac{\rho-1}{\alpha}}(-\lambda x^{-\rho})$, *we obtain the following asymptotic behavior at infinity for any α* ∈ (0, 1] *and m* > 0, *which is more involved than that of Proposition 6:*

$$E_{\alpha,m,m-\frac{1}{\alpha}}(-x) \sim (\alpha m)^{\frac{\alpha}{m}}\,\Gamma(1+\alpha)\,G(1-\alpha;\alpha m)\,G(1+\alpha;\alpha m)\;x^{-1-\frac{1}{m}} \qquad\text{as } x\to\infty.$$

*For m* = 1, *this behavior matches the first term in the full asymptotic expansion*

$$E_{\alpha,1,1-\frac{1}{\alpha}}(-x) \ = \ \Gamma(\alpha)\,E_{\alpha,\alpha}(-x) \ \sim\ \Gamma(\alpha)\sum_{n\geq 1}\frac{(-1)^{n}\,x^{-n-1}}{\Gamma(-\alpha n)}\cdot$$

*As for* $E_{\alpha,m,m-1}(-x)$, *a full asymptotic expansion of* $E_{\alpha,m,m-\frac{1}{\alpha}}(-x)$ *at infinity seems difficult to obtain for all values of m*.

#### **5. Some Complements on the Le Roy Function**

In this section, we show some miscellaneous results on the Le Roy function

$$\mathcal{L}_{\alpha}(x) \ := \ \sum_{n\geq 0}\frac{x^{n}}{(n!)^{\alpha}}\,, \qquad \alpha>0\,,\ x\in\mathbb{R}.$$

In [3], this function played a role in the construction of a fractional Gumbel distribution—see Theorem 1.3 therein. The Le Roy function, which has been much less studied than the classical Mittag–Leffler function, can be viewed as an alternative generalization of the exponential function. See also the recent paper [17] for a further generalization related to the Mittag–Leffler function. Throughout, we discard the explicit case $\mathcal{L}_1(x) = E_1(x) = e^x$.
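A minimal numerical sketch of $\mathcal{L}_{\alpha}$ is useful for the sanity checks of this section; the truncation level is an illustrative choice. As a consistency test, α = 1 recovers the exponential, and for α = 2 one has $\mathcal{L}_2(x) = I_0(2\sqrt{x})$, whose classical leading asymptotics $e^{2\sqrt{x}}/(2\sqrt{\pi}\,x^{1/4})$ coincide with Le Roy's general formula recalled below evaluated at α = 2:

```python
import math

def le_roy(alpha, x, N=400):
    # truncated series for L_alpha(x); the recursion term *= x/(n+1)^alpha
    # avoids computing factorials explicitly
    s, term = 0.0, 1.0
    for n in range(N):
        s += term
        term *= x / (n + 1) ** alpha
    return s

# alpha = 1 recovers the exponential function
print(abs(le_roy(1.0, 5.0) - math.exp(5.0)))   # ~0

# alpha = 2: compare with the Bessel asymptotics of I_0(2*sqrt(x))
x = 400.0
asym = math.exp(2 * math.sqrt(x)) / (2 * math.sqrt(math.pi) * x ** 0.25)
print(le_roy(2.0, x) / asym)                    # close to 1 (off by O(x^{-1/2}))
```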

We begin with the asymptotic behavior at infinity. Le Roy's original result—see [6], p. 263—reads

$$\mathcal{L}_{\alpha}(x) \sim \frac{(2\pi)^{\frac{1-\alpha}{2}}}{\sqrt{\alpha}}\,x^{\frac{1-\alpha}{2\alpha}}\,e^{\alpha x^{\frac{1}{\alpha}}} \qquad\text{as } x\to\infty,$$

and is obtained by a variation of Laplace's method. An extension of this asymptotic behavior has been given in [18] for the so-called Mittag–Leffler functions of Le Roy type. Laplace's method can also be used to solve Exercise 8.8.4 in [19], which states

$$\mathcal{L}_{\alpha}(-x) = \frac{2(2\pi)^{\frac{1-\alpha}{2}}}{\sqrt{\alpha}}\,x^{\frac{1-\alpha}{2\alpha}}\,e^{\alpha\cos(\pi/\alpha)\,x^{\frac{1}{\alpha}}}\left(\sin\left(\pi/2\alpha+\alpha\sin(\pi/\alpha)\,x^{\frac{1}{\alpha}}\right)+O\!\left(x^{-\frac{1}{\alpha}}\right)\right) \tag{14}$$

for *α* ≥ 2 and

$$\mathcal{L}_{\alpha}(-x) \sim \frac{1}{\alpha^{\alpha}\,\Gamma(1-\alpha)\,x\,(\log x)^{\alpha}} \tag{15}$$

for *α* ∈ (1, 2), as *x* → ∞. The following estimate, which seems to have passed unnoticed in the literature, completes the picture.

**Proposition 8.** *For every α* ∈ (0, 1), *one has*

$$\mathcal{L}_{\alpha}(-x) \sim \frac{1}{\Gamma(1-\alpha)\,x\,(\log x)^{\alpha}} \qquad\text{as } x\to\infty.$$

**Proof of Proposition 8.** In the proof of Theorem 1.3 in [3] it is shown that

$$\mathcal{L}_{\alpha}(-x) \ = \ \mathbb{P}[\mathbf{L} > x\,\mathbf{L}_{\alpha}] \ = \ \int_0^\infty e^{-xt}f_{\alpha}(t)\,dt,$$

where

$$\mathbf{L}_{\alpha} \stackrel{d}{=} \int_0^\infty e^{-\sigma^{(\alpha)}_t}\,dt$$

has density $f_{\alpha}$ on (0, ∞) and Mellin transform

$$\mathbb{E}[\mathbf{L}^{s}_{\alpha}] \ = \ \Gamma(1+s)^{1-\alpha}, \qquad s > -1.$$

In particular, using the notation in [20], we have $f_{\alpha} = e_{1-\alpha}$, and Theorem 2.4 therein implies

$$f_{\alpha}(x) \sim \frac{1}{\Gamma(1-\alpha)\,(-\log x)^{\alpha}} \qquad\text{as } x\to 0. \tag{16}$$

Plugging this estimate into the above expression of $\mathcal{L}_{\alpha}(-x)$, we conclude the proof by a direct integration.

**Remark 9.** *(a) The estimate (16) also gives the asymptotic behavior, at the right end of the support, of the density of the fractional Gumbel random variable* $\mathbf{G}_{\alpha,\lambda}$, *which is defined in Theorem 1.3 of [3]. Indeed, by the definition and multiplicative convolution, the density of* $e^{\lambda\mathbf{G}_{\alpha,\lambda}}$ *on* (0, ∞) *writes*

$$\int_0^\infty e^{-xy}\,y\,f_{\alpha}(y)\,dy \ \sim\ \frac{1}{\Gamma(1-\alpha)\,x^{2}\,(\log x)^{\alpha}} \qquad\text{as } x\to\infty,$$

*where the estimate follows from (16) as in the proof of Proposition 8. A change of variable implies then*

$$f^{\mathbf{G}}_{\alpha,\lambda}(x) \sim \left(\frac{\lambda^{1-\alpha}}{\Gamma(1-\alpha)}\right)x^{-\alpha}\,e^{-\lambda x} \qquad\text{as } x\to\infty.$$

*Notice that at the left end of the support, there is a convergent series representation which is given by Corollary 3.6 in [3].*

*(b) In the case α* = 2, *one has* $\mathcal{L}_2(x) = I_0(2\sqrt{x})$ *and* $\mathcal{L}_2(-x) = J_0(2\sqrt{x})$ *for all x* ≥ 0, *where* $I_0$ *and* $J_0$ *are the classical Bessel functions with index 0. In particular, a full asymptotic expansion for* $\mathcal{L}_2$ *at both ends of the support is available, to be deduced e.g., from (4.8.5) and (4.12.7) in [13]. These expansions also exist when α is an integer, since* $\mathcal{L}_{\alpha}$ *is then a generalized Wright function—see Chapter F.2.3 in [2] and the original articles by Wright quoted therein. The case when α is not an integer does not seem to have been investigated, and might be technical in the absence of a true Mellin-Barnes representation.*
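The Bessel identification in (b) can be checked directly on the series: the sign change of $\mathcal{L}_2(-x)$ should bracket the first zero of $J_0$, which sits at the known value $z \approx 2.404826$, i.e. at $x = (z/2)^2 \approx 1.4458$:

```python
import math

def le_roy(alpha, x, N=200):
    # truncated series for L_alpha(x)
    s, term = 0.0, 1.0
    for n in range(N):
        s += term
        term *= x / (n + 1) ** alpha
    return s

# L_2(-x) = J_0(2*sqrt(x)), so the first sign change of x -> L_2(-x)
# should occur near x = (2.404826 / 2)**2 ~ 1.44580
print(le_roy(2.0, -1.44) > 0 > le_roy(2.0, -1.45))  # True
```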

Our next result characterizes the connection between the entire function L*α*(*z*) and random variables. Recall that a function *f* : C → C which is holomorphic in a neighborhood Ω of the origin is a moment generating function (MGF) if there exists a real random variable *X* such that

$$f(z) \;=\; \mathbb{E}\left[e^{zX}\right], \qquad z \in \Omega.$$

In particular, it is clear that L<sub>0</sub> is the MGF of the exponential law **L** and L<sub>1</sub> that of the constant variable **1**. The following provides a characterization.

**Proposition 9.** *The function* L*α*(*z*) *is the* MGF *of a real random variable if and only if α* ≤ 1. *In this case, one has*

$$\mathcal{L}_{\alpha}(z) \;=\; \mathbb{E}\left[e^{z\mathbf{L}_{\alpha}}\right], \qquad z \in \mathbb{C}.$$

**Proof of Proposition 9.** The if part is a direct consequence of the proof of Proposition 8. On the other hand, the estimates (14) and (15) show that L*α*(*z*) takes negative values on R<sub>−</sub>, so that it cannot be the moment generating function of a real random variable when *α* > 1. This completes the proof.
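The two boundary cases can be checked numerically from the series definition used earlier in the paper, L*α*(*z*) = Σ<sub>*n*≥0</sub> *z*<sup>*n*</sup>/(*n*!)<sup>*α*</sup>. The sketch below is not part of the paper's argument; the helper name `le_roy` is ours:

```python
import math

def le_roy(alpha, z, terms=120):
    """Truncated series for L_alpha(z) = sum z^n / (n!)^alpha.
    (n!)^alpha is computed as exp(alpha * lgamma(n+1)) to avoid overflow."""
    return sum(z**n * math.exp(-alpha * math.lgamma(n + 1)) for n in range(terms))

# L_0 is the MGF of the exponential law: L_0(z) = 1/(1-z) for |z| < 1,
# and L_1 is the MGF of the constant 1: L_1(z) = e^z.
z = 0.4
err_exponential = abs(le_roy(0.0, z) - 1.0 / (1.0 - z))
err_constant = abs(le_roy(1.0, z) - math.exp(z))
```

Both errors are at the level of the series truncation, i.e. negligible for |*z*| < 1.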

Observe that since **L**<sub>*α*</sub> is non-negative, the above result also shows that L*α*(−*x*) is CM on (0, ∞) if and only if *α* ≤ 1, echoing Pollard's aforementioned classical result for the Mittag–Leffler function *E<sub>α</sub>*(−*x*). One can ask whether there are further complete monotonicity properties of L<sub>*α*</sub>, as in [21] for *E<sub>α</sub>*. Our last result for the Le Roy function is a monotonicity property which is akin to Proposition 1.

**Proposition 10.** *The mapping α* → L*α*(*x*) *is non-increasing on* [0, 1] *for every x* ∈ R.

**Proof of Proposition 10.** The fact that *α* → L*α*(*x*) is non-increasing is obvious for *x* ≥ 0, by the definition of L*α*. To show the property on [0, 1] for *x* < 0, we will use a convex ordering argument. More precisely, the Malmsten Formula (A3) and the Lévy–Khintchine formula show that for every *t* ∈ [0, 1], the random variable **G**<sub>1−*t*</sub> = log **L**<sub>1−*t*</sub> is the marginal at time *t* of a real Lévy process, since E[*e*<sup>i*z***G**<sub>1−*t*</sub></sup>] = Γ(1 + i*z*)<sup>*t*</sup> = *e*<sup>*tψ*(*z*)</sup> for every *z* ∈ R, with

$$\psi(z) \;=\; -\gamma i z + \int_{-\infty}^{0} \left(e^{izx} - 1 - izx\right) \frac{dx}{|x|\left(e^{|x|} - 1\right)}.$$

This is actually well known—see Example E in [22]. By independence and stationarity of the increments of a Lévy process, we deduce that there exists a multiplicative martingale {*M<sub>t</sub>*, *t* ∈ [0, 1]} such that *M<sub>t</sub>* =<sup>*d*</sup> **L**<sub>1−*t*</sub> for every *t* ∈ [0, 1]. Jensen's inequality implies

$$\mathbf{L}_{\beta} \;\prec_{cx}\; \mathbf{L}_{\alpha}$$

for every 0 ≤ *α* ≤ *β* ≤ 1. Applying the definition of the convex ordering to the convex function *ϕ*(*u*) = *e*<sup>*xu*</sup>, we get L*β*(*x*) ≤ L*α*(*x*) for every *x* < 0 and 0 ≤ *α* ≤ *β* ≤ 1, as required.

**Remark 10.** *(a) In the terminology of [23], the family* {**L**1−*α*, *α* ∈ [0, 1]} *is a peacock, whose associated multiplicative martingale is completely explicit. We refer to [23] for numerous examples of explicit peacocks related to exponential functionals of Lévy processes. Observe from Lemma 2 that the family* {**T**(*a*, *b*, *t*), *t* > 0} *is also a peacock.*

*(b) Letting α* → 0 *and α* → 1 *in Proposition 10 leads to the bounds*

$$e^{x} \;\le\; \mathcal{L}_{\beta}(x) \;\le\; \mathcal{L}_{\alpha}(x) \;\le\; \frac{1}{(1-x)_{+}}$$

*for every x* ∈ R *and* 0 < *α* < *β* < 1. *The hyperbolic upper bound is optimal as in Theorem 2 and Proposition 2, because* L*α*(*x*) − 1 ∼ *x as x* → 0. *The exponential lower bound is thinner than the order given in Proposition 8. On the other hand, it does not seem that stochastic ordering arguments can help for a uniform estimate involving a logarithmic term.*
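These bounds are easy to probe numerically from the truncated series Σ *z*<sup>*n*</sup>/(*n*!)<sup>*α*</sup>; the quick sketch below (helper name ours, restricted to |*x*| ≤ 1 where the alternating series is numerically harmless) is only a sanity check, not part of the argument:

```python
import math

def le_roy(alpha, z, terms=200):
    """Truncated series sum z^n / (n!)^alpha, via exp(alpha * lgamma(n+1))."""
    return sum(z**n * math.exp(-alpha * math.lgamma(n + 1)) for n in range(terms))

x = -0.5
lower, upper = math.exp(x), 1.0 / (1.0 - x)          # e^x and 1/(1-x)_+ for x < 1
mid_beta, mid_alpha = le_roy(0.7, x), le_roy(0.3, x)  # alpha = 0.3 < beta = 0.7
# expected chain: lower <= mid_beta <= mid_alpha <= upper
```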

It is natural to ask if the statement of Proposition 10 is also true for the classical Mittag–Leffler function, and this problem seems still open.

**Conjecture 5.** *The mapping α* → *Eα*(*x*) *is non-increasing on* [0, 1] *for every x* ∈ R.

Numerical simulations suggest a positive answer. It is clear by the definition that *α* → *E<sub>α</sub>*(*x*) is non-increasing for every *x* ≥ 0 on [*α*<sub>0</sub>, ∞), where 1 + *α*<sub>0</sub> = 1.46163... is the location of the minimum of the Gamma function on (0, ∞). A direct consequence of Theorem B in [5] is also that

$$\alpha \;\longmapsto\; E_{\alpha}\left(\Gamma(1+\alpha)\, x\right)$$

is non-increasing on [1/2, 1] for every *x* ∈ R. The constant Γ(1 + *α*) appears above because of the convex ordering argument used in [5]. It seems that other kinds of arguments are necessary to study the monotonicity of *α* → *E<sub>α</sub>*(*x*) on [0, 1].

We would like to finish this paper with the following related monotonicity result, which relies on a stochastic ordering argument, for the generalized Mittag–Leffler function.

**Proposition 11.** *For every α* ∈ [0, 1] *and x* ∈ R, *the mapping*

$$
\beta \;\longmapsto\; \Gamma(\beta)\, E_{\alpha,\beta}(x)
$$

*is non-increasing on* (*α*, ∞) *if x* > 0 *and non-decreasing on* (*α*, ∞) *if x* < 0.

**Proof of Proposition 11.** By Remark 3.3 (c) in [3], we have the probabilistic representation

$$\Gamma(\beta)\, E_{\alpha,\beta}(x) \;=\; \mathbb{E}\left[e^{x\,\mathbf{B}^{\alpha}_{\alpha,\beta-\alpha}\times \mathbf{T}^{(1)}_{\alpha}}\right]$$

for every *α* ∈ [0, 1], *β* > *α* and *x* ∈ R. Reasoning as in Proposition 3, we see by factorization that it suffices to show that

$$
\beta \quad \longmapsto \quad \mathsf{B}^{\alpha}\_{\alpha,\beta-\alpha}
$$

is non-increasing on (*α*, ∞) for the usual stochastic order. On the other hand, the density function of the random variable **B**<sup>*α*</sup><sub>*α*,*β*−*α*</sub> is

$$\frac{\Gamma(\beta)}{\Gamma(\alpha+1)\Gamma(\beta-\alpha)}\left(1-x^{\frac{1}{\alpha}}\right)^{\beta-\alpha-1}$$

on [0, 1), and its value at zero is, by the log-convexity of the Gamma function, an increasing function of *β*. Moreover, the density functions of **B**<sup>*α*</sup><sub>*α*,*β*−*α*</sub> and **B**<sup>*α*</sup><sub>*α*,*β*′−*α*</sub> cross only once for *β* ≠ *β*′, at

$$\left(1 - \left(\frac{\Gamma(\beta)\,\Gamma(\beta'-\alpha)}{\Gamma(\beta')\,\Gamma(\beta-\alpha)}\right)^{\frac{1}{\beta'-\beta}}\right)^{\alpha} \in (0,1).$$

The single intersection property then finishes the argument, as for Proposition 3.
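As a numerical check of the density displayed in the proof (our own sketch, with illustrative parameter values chosen so that the exponent *β* − *α* − 1 is non-negative and the integrand stays bounded near 1):

```python
import math

def density(x, a, b):
    """Density of B^a_{a, b-a} on [0, 1), as displayed in the proof above."""
    return (math.gamma(b) / (math.gamma(a + 1) * math.gamma(b - a))
            * (1.0 - x ** (1.0 / a)) ** (b - a - 1.0))

def simpson(f, lo, hi, n=20000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return s * h / 3.0

# alpha = 0.5, beta = 2: the density should integrate to 1 over [0, 1)
total = simpson(lambda x: density(x, 0.5, 2.0), 0.0, 1.0 - 1e-9)
```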

**Author Contributions:** Writing—original draft, L.B. and T.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

In this Appendix we recall some properties of Barnes' double Gamma function *G*(*z*; *δ*), which are used throughout the paper. For every *δ* > 0, this function is defined as the unique solution to the functional equation

$$G(z+1; \delta) \;=\; \Gamma(z\delta^{-1})G(z; \delta) \tag{A1}$$

with normalization *G*(1; *δ*) = 1. The function is holomorphic on C and admits the following Malmsten type representation

$$G(z; \delta) = \exp \int_0^\infty \left( \frac{1 - e^{-zx}}{(1 - e^{-x})(1 - e^{-\delta x})} - \frac{z\,e^{-\delta x}}{1 - e^{-\delta x}} + (z - 1)\left(\frac{z}{2\delta} - 1\right)e^{-\delta x} - 1 \right) \frac{dx}{x} \tag{A2}$$

which is valid for ℜ(*z*) > 0—see (5.1) in [24]. Putting (A1) and (A2) together and making some simplifications, we recover the standard Malmsten formula for the Gamma function

$$\Gamma(1+z) = \exp\left\{-\gamma z + \int\_{-\infty}^{0} (e^{zx} - 1 - zx) \frac{dx}{|x|(e^{|x|}-1)}\right\} \tag{A3}$$

for every *z* > −1, where *γ* is Euler's constant. The following Stirling type asymptotic behavior

$$\log G(z;\delta) \;-\; \frac{1}{2\delta}\left( z^2 \log z - \left( \frac{3}{2} + \log \delta \right) z^2 - (1+\delta)\, z \log z \right) \;-\; Az \;-\; B \log z \;\to\; C \tag{A4}$$

is valid for |*z*| → ∞ with | arg(*z*)| < *π*, for some real constants *A*, *B* and *C* which are given in (4.5) of [24]. There is a second concatenation formula

$$G(z+\delta;\delta) = (2\pi)^{(\delta-1)/2} \delta^{1/2-z} \Gamma(z) G(z;\delta) \tag{A5}$$

which is valid for all *z* ∈ C, the right-hand side being understood as an analytic extension when *z* is a non-positive integer—see (4.6) in [25] and the references therein. Observe that (A1) and (A5) lead readily to the closed formula

$$G(\delta; \delta) \;=\; G(1+\delta; \delta)\;=\; (2\pi)^{(\delta-1)/2}\delta^{-1/2}.\tag{A6}$$

In this paper, we make extensive use of the following Pochhammer type symbol

$$[a; \delta]\_s = \frac{G(a+s; \delta)}{G(a; \delta)}\tag{A7}$$

which is well-defined for every *a*, *δ* > 0 and *s* > −*a*. The following formula

$$[a\delta^{-1};\delta^{-1}]\_{s\delta^{-1}} = (2\pi)^{s(1/\delta-1)/2} \,\delta^{s^2/2\delta-s(1+(1-2a)/\delta)/2} \, [a;\delta]\_s \tag{A8}$$

can be deduced from (4.10) in [25]; beware the different normalization for *G*(1; *δ*) therein, which becomes irrelevant when considering the Pochhammer type symbol. Notice also that (A5) yields

$$\delta^s \left[ a + \delta; \delta \right]\_s = \left( a \right)\_s \left[ a; \delta \right]\_s \tag{A9}$$

with the standard notation

$$(a)\_s = \frac{\Gamma(a+s)}{\Gamma(a)}$$

for the usual Pochhammer symbol. Finally, we observe from the double product representation of *G*(*z*; *δ*)—see e.g., (4.4) in [25]—that for every *a*, *δ* > 0 one has

$$\inf\{s>0,\ [a;\delta]\_{-s}=0\}\ =\ a\tag{A10}$$

and that this zero is simple and isolated in the complex plane.
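As a small numerical illustration of the standard Pochhammer notation used above (our own sketch), (*a*)<sub>*s*</sub> = Γ(*a* + *s*)/Γ(*a*) reduces, for integer *s*, to the rising product *a*(*a* + 1)⋯(*a* + *s* − 1):

```python
import math

def pochhammer(a, s):
    # (a)_s = Gamma(a+s) / Gamma(a)
    return math.gamma(a + s) / math.gamma(a)

# integer s: rising product, e.g. (2.5)_3 = 2.5 * 3.5 * 4.5
value = pochhammer(2.5, 3)
```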

#### **References**


## *Article* **On the Multistage Differential Transformation Method for Analyzing Damping Duffing Oscillator and Its Applications to Plasma Physics**

**Noufe H. Aljahdaly 1,\*,† and S. A. El-Tantawy 2,3,†**


**Abstract:** The multistage differential transformation method (MSDTM) is used to find an approximate solution to the forced damping Duffing equation (FDDE). In this paper, we show that the MSDTM can predict the solution over a longer domain than the differential transformation method (DTM) and more accurately than the modified differential transformation method (MDTM). In addition, the maximum residual errors for DTM and its modifications (MSDTM and MDTM) are estimated. As a real application of the obtained solution, we investigate the oscillations in a complex unmagnetized plasma. To do that, the governing fluid equations of the plasma species are reduced to the modified Korteweg–de Vries–Burgers (mKdVB) equation. After that, using a suitable transformation, the mKdVB equation is transformed into the forced damping Duffing equation.

**Keywords:** multistage differential transformation method; Duffing equation; nonlinear damping oscillations

#### **1. Introduction**

Mathematical techniques are very important tools in mathematics. Mathematicians have developed many methods to solve linear or nonlinear differential equations, which describe many important phenomena and applications in science [1–7]. These techniques are classified as algebraic, semi-approximate, general analytical, approximate analytical, numerical, or qualitative. The basic concept of approximate analytical techniques such as the Adomian decomposition method (ADM), the Laplace decomposition method (LDM), or the differential transformation method (DTM) is to assume that the solution is described by a Taylor series. Indeed, some solutions of equations have well-known Taylor expansions, such as the exponential or hyperbolic functions. In this case, it is easy to determine the exact solution from a few terms of the Taylor series. Otherwise, an approximate solution is obtained in the form of a few terms of the Taylor series. Since a Taylor expansion converges locally about the initial condition, the method can approximate the solution only in a neighborhood of the initial point. Thus, the solution is obtained on a very short domain. This feature of ADM, LDM, and DTM has been noted by several researchers [8–12]. DTM has been improved by dividing the domain into subdomains and updating the initial point in each subdomain. Another modification uses the Laplace transformation and the Padé approximant. In Section 2, we describe these modifications in detail. However, it is very important to determine the optimal modification of DTM in order to obtain fast and accurate techniques.

**Citation:** Aljahdaly, N.H.; El-Tantawy, S.A. On the Multistage Differential Transformation Method for Analyzing Damping Duffing Oscillator and Its Applications to Plasma Physics. *Mathematics* **2021**, *9*, 432. https://doi.org/10.3390/math9040432

Academic Editor: Francesco Mainardi

Received: 27 December 2020 Accepted: 19 February 2021 Published: 22 February 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Some of the most important and famous differential equations whose solutions are related to many natural phenomena, physical concepts, and engineering phenomena are the Duffing equation (including conservative and non-conservative cases), the Helmholtz equation (including conservative and non-conservative cases), and their families [13–24]. Given the importance of the family of Duffing equations, a great effort has been made by many researchers to solve this equation and its family, with a numerical, analytical, or semi-analytical solution according to the type of Duffing equation. Examples of these approximate methods for solving the conservative Duffing equation (*u*″ + *βu*(*t*) + *γu*<sup>3</sup> = 0) include the homotopy perturbation method [25], harmonic balance method [26], energy balance method [27], modified variational approach [28], and coupled homotopy–variational approach [29]. On the other hand, many researchers have tried to find a solution to the damping Duffing equation (DDE), *u*″ + *αu*′ + *βu*(*t*) + *γu*<sup>3</sup> = 0 [30–35], since it is more closely related to reality than the undamped Duffing equation, which is correct only for idealized isolated systems, i.e., systems in which frictional force and viscosity are absent. One of the most important approximate methods that has been used and developed to solve many differential equations is DTM, which has been used in solving the DDE [11]. Nourazar et al. [11] used the modified DTM to get an approximate solution to the DDE. The authors compared their solution with both the fourth-order Runge–Kutta (RK4) numerical solution and the DTM solution. They found that the DTM solution is suitable only for small time intervals, while the MDTM solution is suitable for the whole time domain. In our study, we solve the forced damping Duffing equation (FDDE) *u*″ + *αu*′ + *βu*(*t*) + *γu*<sup>3</sup> = *F* using the multistage differential transformation method (MSDTM) for arbitrary initial conditions.
Moreover, we compare the approximate solutions of DTM and MDTM, as well as the numerical solution using RK4, in order to determine the optimal technique. Furthermore, the oscillations in complex unmagnetized plasmas are investigated by reducing the governing fluid equations of the plasma species to an evolution equation and then transforming this equation into a Duffing-type equation using a suitable transformation.

#### **2. Methodology**

This section is devoted to briefly describing DTM and its modifications. Assume the following ordinary differential equation (ODE)

$$P(u, u', u'', \dots) = 0,\tag{1}$$

where *u*(*t*) is the solution of this ODE on the domain [*t*<sub>0</sub>, *t<sub>N</sub>*], *P* is a polynomial in terms of *u* and its derivatives, and *u*(*t*<sub>0</sub>) = *c*.

#### *2.1. Differential Transformation Method (DTM)*

Assume that the goal is finding the approximate solution of Equation (1). The main concept of DTM is based on applying the differential transformation *u*(*t*) =⇒ *U*(*k*) at *t* = *t*<sub>0</sub> as follows:

$$
U(k) = \frac{1}{k!}\left[\frac{d^k u(t)}{dt^k}\right]_{t=t_0}. \tag{2}
$$

The differential inverse transformation *U*(*k*) =⇒ *u*(*t*) is defined as

$$u(t) = \sum_{k=0}^{\infty} U(k)\,(t - t_0)^k.\tag{3}$$

Inserting Equation (2) into Equation (3), *u*(*t*) can be approximated by a finite series as follows

$$u_N(t) = \sum_{k=0}^{N} \frac{(t - t_0)^k}{k!}\left[\frac{d^k u(t)}{dt^k}\right]_{t=t_0} = g_N(t). \tag{4}$$

Some differential transformation rules are introduced in Table 1.


**Table 1.** Differential transformation rules.
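The derivative rule from the standard transformation table, *u*′(*t*) ↔ (*k* + 1)*U*(*k* + 1), already suffices for a tiny worked example (our own illustration): applying DTM to *u*′ = *u*, *u*(0) = 1 gives the recursion (*k* + 1)*U*(*k* + 1) = *U*(*k*), i.e., *U*(*k*) = 1/*k*!, whose inverse transform (3) is *e*<sup>*t*</sup>:

```python
import math

# DTM for u' = u, u(0) = 1: (k+1) U(k+1) = U(k), so U(k) = 1/k!
N = 15
U = [1.0]
for k in range(N):
    U.append(U[k] / (k + 1))

t = 0.5
u_approx = sum(U[k] * t**k for k in range(N + 1))  # inverse transform (3)-(4)
# u_approx should be extremely close to exp(0.5)
```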

It is well known that, since DTM is based on the Taylor expansion, the approximate solution (when *u* is locally analytic) converges to the exact solution with the following error bound

$$|u(t) - g_N(t)| \le \frac{M}{(N+1)!}\,|t - t_0|^{N+1},$$

where *M* bounds the (*N* + 1)-th derivative of *u* on the interval.

It is obvious that the error increases when |*t* − *t*<sub>0</sub>| increases for fixed *N*. Note that DTM gives accurate results only in a small domain around the initial point. Therefore, to obtain good results, some modifications of this method must be introduced. There have been several attempts to improve the method, such as the modified differential transformation method (MDTM) and the multistage differential transformation method (MSDTM).

#### *2.2. Modified Differential Transformation Method (MDTM)*

MDTM is presented in [11]. The idea is simply described: apply the Laplace transformation to Equation (3), i.e., compute L[*u*(*t*)], which yields a polynomial in 1/*s*. Next, use a Padé approximant, [3/3] or [4/4], and then apply the inverse Laplace transform. The resulting method is able to approximate the solution on a long domain.

**Definition 1.** *We say the function g*(*t*) *is the Padé approximant of order* [*m*/*n*] *of the function u*(*t*) *if*

$$g(t) = \frac{a_0 + a_1 t + a_2 t^2 + \dots + a_m t^m}{1 + b_1 t + b_2 t^2 + \dots + b_n t^n},$$

*where u*(0) = *g*(0), *u*′(0) = *g*′(0), *u*″(0) = *g*″(0), …, *u*<sup>(*m*+*n*)</sup>(0) = *g*<sup>(*m*+*n*)</sup>(0)*. The constants a<sub>i</sub>*, *i* = 0, 1, ..., *m*, *and b<sub>j</sub>*, *j* = 1, 2, ..., *n*, *are uniquely determined. The Padé approximant is unique for given n and m.*
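The matching conditions of Definition 1 translate into a linear system for the denominator coefficients; one standard way to compute them from Taylor coefficients is sketched below (our own implementation, naive elimination without pivoting, so only for well-conditioned small systems):

```python
def pade(c, m, n):
    """Pade [m/n] coefficients (a, b) from Taylor coefficients c[0..m+n],
    normalized with b_0 = 1 as in Definition 1."""
    # Denominator: solve sum_{j=1}^n b_j * c[m+i-j] = -c[m+i] for i = 1..n.
    A = [[c[m + i - j] if m + i - j >= 0 else 0.0 for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    rhs = [-c[m + i] for i in range(1, n + 1)]
    for col in range(n):                       # forward elimination
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= f * A[col][j]
            rhs[row] -= f * rhs[col]
    b = [0.0] * n
    for row in range(n - 1, -1, -1):           # back substitution
        b[row] = (rhs[row] - sum(A[row][j] * b[j]
                                 for j in range(row + 1, n))) / A[row][row]
    b = [1.0] + b
    # Numerator: a_i = sum_{j=0}^{min(i,n)} b_j * c[i-j].
    a = [sum(b[j] * c[i - j] for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b

# exp(t) with Taylor coefficients 1, 1, 1/2 gives the classical [1/1]
# approximant (1 + t/2) / (1 - t/2).
a, b = pade([1.0, 1.0, 0.5], 1, 1)
```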

#### *2.3. Multistage Differential Transformation Method (MSDTM)*

The other modification is MSDTM. The main concept is dividing the domain into subdomains [*t<sub>i</sub>*, *t*<sub>*i*+1</sub>] = *D<sub>i</sub>* and applying DTM in each subdomain with the initial condition at *t<sub>i</sub>* to approximate *u*(*t*) on the subdomain *D<sub>i</sub>*.

#### *2.4. Example*

In this section, we apply DTM and its modifications to one of the most famous equations in dynamic systems, which is called the Duffing oscillator or the Duffing equation. It is known that the Duffing equation has many forms, and, in this paper, we restrict our attention to investigating the forced damping Duffing equation (FDDE) *u*″ + *αu*′ + *βu*(*t*) + *γu*<sup>3</sup> = *F*. This equation is non-integrable and does not have an exact solution except under certain conditions on its coefficients (*α*, *β*, *γ*). Therefore, the approximate solution to the following FDDE for arbitrary values of its coefficients (*α*, *β*, *γ*) and for arbitrary initial conditions is obtained:

$$\begin{cases} u'' + \alpha u' + \beta u(t) + \gamma u^3 = F, \\ u(0) = u_0, \quad u'(0) = u_0'. \end{cases} \tag{5}$$

In the following analysis, we give some numerical examples of solving the initial value problem (i.v.p.) (5) using the aforementioned methods and examine their accuracy by calculating the residual error for each method compared to RK4.

#### 2.4.1. MDTM

Firstly, let us use the same values (*α*, *β*, *γ*, *F*) = (0.5, 25, 25, 0) as mentioned in [11], with the initial conditions *u*(0) = 0.1 and *u*′(0) = 0. Note that the solution of the i.v.p. (5) in the unforced case (*F* = 0) using MDTM is introduced in detail in [11]. In the case of using the Padé approximant of order [3/3], we have

$$u(t) = 0.00194 + 0.000238\,e^{-0.25t}\left(411\cos(5.068t) + \cdots\right), \tag{6}$$

In the case of using the Padé approximant of order [4/4], the solution is approximated as

$$\begin{split} u(t) &= A e^{(-0.60107 - 15.0816i)t} + B e^{(-0.60107 + 15.0816i)t} \\ &+ C e^{(-0.24894 - 5.0125i)t} + D e^{(-0.24894 + 5.0125i)t} \end{split} \tag{7}$$

with

$$\begin{aligned} A &= 1.6932 \times 10^{-5} - 1.3567 \times 10^{-4}i, \\ B &= 1.6932 \times 10^{-5} + 1.3567 \times 10^{-4}i, \\ C &= 4.9983 \times 10^{-2} - 2.5633 \times 10^{-3}i, \\ D &= 4.9983 \times 10^{-2} + 2.5633 \times 10^{-3}i. \end{aligned}$$

In the second example, we use the values (*α*, *β*, *γ*, *F*) = (1, 20, 2, 0) with the initial conditions *u*(0) = −0.2 and *u*′(0) = 2 and apply the Padé approximants of orders [3/3] and [4/4]. The solution in the case of [3/3] reads

$$\begin{aligned} u(t) &= 0.003101 \exp(-6.3493t) \\ &+ \exp(-0.52098t)(0.434516 \sin(4.4046t) \\ &- 0.203101 \cos(4.4046t)), \end{aligned} \tag{8}$$

and for [4/4] reads

$$u(t) = Ae^{(-2.0169 + 12.6572i)t} + Be^{(-2.0169 - 12.6572i)t}$$

$$+ Ce^{(-0.4965 + 4.4826i)t} + De^{(-0.4965 - 4.4826i)t} \tag{9}$$

with

$$\begin{aligned} A &= 2.265 \times 10^{-4} - 4.807 \times 10^{-5}\,i, \\ B &= 2.265 \times 10^{-4} + 4.807 \times 10^{-5}\,i, \\ C &= -0.100226 - 0.21195\,i, \\ D &= -0.100226 + 0.21195\,i. \end{aligned}$$

#### 2.4.2. MSDTM

In this work, we focus our attention on solving the i.v.p. (5) for arbitrary initial conditions using MSDTM by dividing the domain [0, 20] into subdomains with time step 10<sup>−2</sup> and applying DTM with *k* = 3 to find *u<sup>i</sup>* as follows:

$$
u_{k+1}^i = \frac{k!}{(k+1)!}\, y_k^i, \tag{10}
$$

$$y_{k+1}^i = \frac{k!}{(k+1)!} \left[ -\beta u_k^i - \alpha y_k^i - \gamma \sum_{r=0}^k \left( \sum_{l=0}^r u_l^i u_{r-l}^i \right) u_{k-r}^i + F \right],\tag{11}$$

where *y* = *u*′.
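For readers who want to experiment, here is a minimal sketch of the MSDTM stepping scheme (10)–(11) against an RK4 reference, for the first test case of Section 2.4.1. All function names are our own; we also make the assumption that the constant force *F* only enters the *k* = 0 coefficient, as the differential transform of a constant (immaterial here since *F* = 0):

```python
def msdtm_step(u0, y0, h, alpha, beta, gamma, F, K=3):
    """One MSDTM subdomain step for u'' + alpha*u' + beta*u + gamma*u^3 = F,
    using recursions (10)-(11) with y = u'."""
    u, y = [u0], [y0]
    for k in range(K):
        # differential transform of u^3: sum_r (sum_l u_l u_{r-l}) u_{k-r}
        cube = sum(sum(u[l] * u[r - l] for l in range(r + 1)) * u[k - r]
                   for r in range(k + 1))
        u.append(y[k] / (k + 1))
        y.append((-beta * u[k] - alpha * y[k] - gamma * cube
                  + (F if k == 0 else 0.0)) / (k + 1))
    return (sum(u[k] * h**k for k in range(K + 1)),
            sum(y[k] * h**k for k in range(K + 1)))

def rk4_step(u, y, h, alpha, beta, gamma, F):
    """Classical RK4 step for the equivalent first-order system."""
    f = lambda u, y: (y, F - alpha * y - beta * u - gamma * u**3)
    k1 = f(u, y)
    k2 = f(u + h/2 * k1[0], y + h/2 * k1[1])
    k3 = f(u + h/2 * k2[0], y + h/2 * k2[1])
    k4 = f(u + h * k3[0], y + h * k3[1])
    return (u + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# alpha = 0.5, beta = gamma = 25, F = 0, u(0) = 0.1, u'(0) = 0, on [0, 1]
p = (0.5, 25.0, 25.0, 0.0)
h, steps = 1e-2, 100
(u1, y1), (u2, y2) = (0.1, 0.0), (0.1, 0.0)
err = 0.0
for _ in range(steps):
    u1, y1 = msdtm_step(u1, y1, h, *p)
    u2, y2 = rk4_step(u2, y2, h, *p)
    err = max(err, abs(u1 - u2))
```

With this step size the two trajectories agree to well below the plotting resolution, consistent with the comparison reported below.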

To check the accuracy of the aforementioned methods against the RK4 solution, we use the following formula for the maximum residual error

$$L_D(\text{method}) = \max_{t_0 \le t \le t_N} |u_{\text{RK4}}(t) - u(t)|.$$

Figures 1 and 2 demonstrate the approximate solutions of the i.v.p. (5) for different values of the coefficients (*α*, *β*, *γ*, *F*). The results show that MDTM4 and MSDTM are better approximations than MDTM3. Moreover, the comparison of the maximum residual errors for the approximate solutions shown in Table 2 proves that the most accurate method is MSDTM. Aljahdaly [10] proved that the MSDTM and RK4 techniques have the same accuracy, but MSDTM is faster than RK4. Thus, we conclude that MSDTM is a fast, accurate, and reliable method for many differential equations in physics and in different branches of science. In the next section, a new application of the damping Duffing equation in plasma physics is introduced.

**Table 2.** The error *L<sub>D</sub>*(method) estimated for different values of the coefficients *α*, *β*, *γ*, *u*<sub>0</sub>, *u*′<sub>0</sub>.


**Figure 1.** The solution *u*(*t*) for *α* = 0.5, *β* = *γ* = 25, *F* = 0, *u*(0) = 0.1, *u*′(0) = 0: (**a**) comparison of the RK method and MDTM3; (**c**) comparison of the RK method and MSDTM.

**Figure 2.** The solution *u*(*t*) for *α* = 1, *β* = 20, *γ* = 2, *F* = 0, *u*(0) = −0.2, *u*′(0) = 2: (**a**) comparison of the RK method and MDTM3; (**c**) comparison of the RK method and MSDTM.

#### **3. Application in Plasma Physics**

Let us consider the propagation of nonlinear structures in a complex unmagnetized plasma composed of inertial positive ions (with subscript "*i*") and two different types of electrons (with subscripts "*l*" and "*h*" for the lower and higher electron temperatures, respectively) that follow the kappa distribution, in addition to static dust grains with negative charge (with subscript "*d*") [36]. Accordingly, the neutrality condition reads *n*<sub>*l*</sub><sup>(0)</sup> + *n*<sub>*h*</sub><sup>(0)</sup> + *z<sub>d</sub>n*<sub>*d*</sub><sup>(0)</sup> = *n*<sub>*i*</sub><sup>(0)</sup>, where *n*<sub>*j*</sub><sup>(0)</sup> represents the unperturbed number density of species *j* (*j* ≡ *l*, *h*, *d*, *i*) and *z<sub>d</sub>* gives the number of electrons residing on the surface of the dust grains. The dynamics of the nonlinear structures whose phase speed is much larger than the ion thermal speed but smaller than the electron thermal speed are governed by the following dimensionless fluid continuity, momentum, and Poisson's equations, respectively,

$$\begin{cases} \partial_t n_i + \partial_x (n_i u_i) = 0, \\ \partial_t u_i + u_i \partial_x u_i + \partial_x \phi = \eta\, \partial_x^2 u_i, \\ \partial_x^2 \phi - n_e + n_i - \mu_d = 0, \end{cases} \tag{12}$$

where the number density of the electrons following the kappa distribution is given by

$$\begin{split} n\_{\varepsilon} &= n\_{l} + n\_{h} = \mu\_{l} \left( 1 - \frac{\sigma\_{l} \phi}{R\_{l}} \right)^{S\_{l}} + \mu\_{h} \left( 1 - \frac{\sigma\_{h} \phi}{R\_{h}} \right)^{S\_{h}} \\ &\equiv \Gamma\_{0} + \Gamma\_{1} \phi + \Gamma\_{2} \phi^{2} + \Gamma\_{3} \phi^{3} + \cdots \end{split} \tag{13}$$

with

$$\begin{aligned} \Gamma_0 &= \mu_l + \mu_h, \\ \Gamma_1 &= -\left[\frac{S_l \mu_l \sigma_l}{R_l} + \frac{S_h \mu_h \sigma_h}{R_h}\right], \\ \Gamma_2 &= \left[\frac{S_l \mu_l \sigma_l^2 (S_l - 1)}{2R_l^2} + \frac{S_h \mu_h \sigma_h^2 (S_h - 1)}{2R_h^2}\right], \\ \Gamma_3 &= -\left[\frac{S_l \mu_l \sigma_l^3 (S_l - 1)(S_l - 2)}{6R_l^3} + \frac{S_h \mu_h \sigma_h^3 (S_h - 1)(S_h - 2)}{6R_h^3}\right], \\ S_l &= -\kappa_l + \frac{1}{2}, \quad S_h = -\kappa_h + \frac{1}{2}, \\ R_l &= \kappa_l - \frac{3}{2}, \quad R_h = \kappa_h - \frac{3}{2}. \end{aligned}$$

where *n<sub>i</sub>*/*n<sub>l</sub>*/*n<sub>h</sub>* is the normalized number density of the positive ions/low-temperature electrons/high-temperature electrons, *u<sub>i</sub>* refers to the normalized velocity of the positive ions, *φ* is the normalized electrostatic potential, *η* represents the normalized coefficient of ionic kinematic viscosity, *σ<sub>l,h</sub>* = *T*<sub>eff</sub>/*T<sub>l,h</sub>* is the electron temperature ratio, the effective electron temperature is *T*<sub>eff</sub> = *n<sub>e</sub>*<sup>(0)</sup>*T<sub>l</sub>T<sub>h</sub>*/(*n<sub>l</sub>*<sup>(0)</sup>*T<sub>h</sub>* + *n<sub>h</sub>*<sup>(0)</sup>*T<sub>l</sub>*), *n<sub>e</sub>*<sup>(0)</sup> ≡ *n<sub>l</sub>*<sup>(0)</sup> + *n<sub>h</sub>*<sup>(0)</sup> is the total unperturbed electron density, *μ<sub>d</sub>* = *z<sub>d</sub>n<sub>d</sub>*<sup>(0)</sup>/*n<sub>i</sub>*<sup>(0)</sup> is the dust concentration, *μ<sub>l</sub>* = *n<sub>l</sub>*<sup>(0)</sup>/*n<sub>i</sub>*<sup>(0)</sup> is the concentration of low-temperature electrons, *μ<sub>h</sub>* = *n<sub>h</sub>*<sup>(0)</sup>/*n<sub>i</sub>*<sup>(0)</sup> is the concentration of high-temperature electrons, and *κ<sub>l,h</sub>*(> 3/2) is the kappa index parameter [36].

To model and analyze the nonlinear structures that can propagate in the present plasma model, the reductive perturbation method (RPM) [37,38] is used to reduce the basic set of fluid Equations (12) and (13) to an evolution equation. According to this method, the independent variables (*x*, *t*, *η*) can be stretched as follows:

$$X = \varepsilon\left(x - v_{ph}\, t\right), \quad T = \varepsilon^3 t, \quad \eta = \varepsilon\, \tilde{\eta},\tag{14}$$

where *ε* is a real and small parameter (*ε* ≪ 1) that measures the strength of the nonlinearity and *v<sub>ph</sub>* represents the normalized phase velocity, which is scaled by *C<sub>i</sub>*. In addition, the dependent quantities Π(*x*, *t*) ≡ (*n<sub>i</sub>*, *u<sub>i</sub>*, *φ*) are expanded as follows:

$$
\Pi(\mathbf{x}, t) = \Pi^{(0)} + \sum\_{s=1}^{\infty} \varepsilon^s \Pi^{(s)}(\mathbf{X}, T), \tag{15}
$$

where Π<sup>(0)</sup> ≡ [1, 0, 0]<sup>T</sup>, Π<sup>(*s*)</sup>(*X*, *T*) ≡ [*n<sub>i</sub>*<sup>(*s*)</sup>, *u<sub>i</sub>*<sup>(*s*)</sup>, *φ*<sup>(*s*)</sup>]<sup>T</sup>, and the superscript T denotes the matrix transpose.

Inserting both the stretching (14) and the expansion (15) into the basic set of fluid Equations (12) and (13), we get a system of reduced equations in different powers of *ε*. From the lowest order in *ε*, i.e., *O*(*ε*), the values of the first-order quantities *n<sub>i</sub>*<sup>(1)</sup>, *u<sub>i</sub>*<sup>(1)</sup> and the phase velocity *v<sub>ph</sub>* can be obtained as

$$u\_i^{(1)} = v\_{p\text{h}} n\_i^{(1)} = \frac{1}{v\_{p\text{h}}} \phi^{(1)}\,,$$

$$v\_{p\text{h}} = \frac{1}{\sqrt{\Gamma\_1}}.\tag{16}$$

The next order in *ε*, i.e., *O*(*ε*<sup>2</sup>), gives the values of the second-order quantities *n<sub>i</sub>*<sup>(2)</sup> and *u<sub>i</sub>*<sup>(2)</sup>

$$\begin{aligned} n_i^{(2)} &= \frac{1}{v_{ph}^4} \left( \frac{3}{2} \phi^{(1)2} + v_{ph}^2 \phi^{(2)} \right), \\ u_i^{(2)} &= \frac{1}{v_{ph}^3} \left( \frac{1}{2} \phi^{(1)2} + v_{ph}^2 \phi^{(2)} \right), \end{aligned} \tag{17}$$

and the Poisson's equation gives

$$A\phi^{(1)2} + A\_c \phi^{(2)} = 0,\tag{18}$$

where *A* = 3/(2*v<sub>ph</sub>*<sup>4</sup>) − Γ<sub>2</sub> = 0 at the critical value of the low electron temperature concentration *μ<sub>l</sub>* = *μ<sub>lc</sub>*, and the coefficient *A<sub>c</sub>* = 1/*v<sub>ph</sub>*<sup>2</sup> − Γ<sub>1</sub> represents the compatibility condition, i.e., *A<sub>c</sub>* = 0.

From the next order in *ε*, i.e., *O*(*ε*<sup>3</sup>), we get

$$
\partial\_T n\_i^{(1)} + \partial\_X \left( n\_i^{(1)} u\_i^{(2)} \right) + \partial\_X \left( n\_i^{(2)} u\_i^{(1)} \right) - \upsilon\_{ph} \partial\_X n\_i^{(3)} + \partial\_X u\_i^{(3)} = 0,\tag{19}
$$

$$
\partial_T u_i^{(1)} + \partial_X \left( u_i^{(1)} u_i^{(2)} \right) - v_{ph}\, \partial_X u_i^{(3)} + \partial_X \phi^{(3)} - \tilde{\eta}\, \partial_X^2 u_i^{(1)} = 0,\tag{20}
$$

and the Poisson's equation gives

$$
\partial\_X \left( n\_i^{(3)} - \Gamma\_3 \phi^{(1)3} - 2\Gamma\_2 \phi^{(1)} \phi^{(2)} - \Gamma\_1 \phi^{(3)} + \partial\_X^2 \phi^{(1)} \right) = 0. \tag{21}
$$

By solving Equations (19)–(21) with the help of Equations (16) and (17), we finally get the mKdVB equation

$$
\partial_T \phi + P_1 \phi^2\, \partial_X \phi + P_2\, \partial_X^3 \phi = P_3\, \partial_X^2 \phi,\tag{22}
$$

with

$$\begin{aligned} P_1 &= \left(15 - 6\Gamma_3 v_{ph}^6\right) / \left(4 v_{ph}^3\right), \\ P_2 &= \frac{v_{ph}^3}{2}, \quad P_3 = \frac{\tilde{\eta}}{2}, \end{aligned}$$

where *<sup>ϕ</sup>* <sup>≡</sup> *<sup>φ</sup>*(1).

It is known that Equation (22) supports shock solutions due to the presence of the ion kinematic viscosity. However, in this paper, we want to investigate the damping oscillations in the present model. Accordingly, the transformation *ϕ*(*X*, *T*) = Φ(*ξ*), where *ξ* = *X* + *v<sub>f</sub>T*, is used to transform Equation (22) into the FDDE as follows:

$$
\Phi'' + \alpha \Phi' + \beta \Phi + \gamma \Phi^3 = F,\tag{23}
$$

where *α* = −*P*<sub>3</sub>/*P*<sub>2</sub>, *β* = *v<sub>f</sub>*/*P*<sub>2</sub>, *γ* = *P*<sub>1</sub>/(3*P*<sub>2</sub>), and *F* is the constant of integration.
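The chain from plasma parameters to FDDE coefficients can be collected in a few lines; the sketch below follows Equations (13), (16), (22) and (23) above, with purely illustrative parameter values (in particular, the critical concentration *μ<sub>lc</sub>* is not reproduced here, and all names are our own):

```python
import math

def fdde_coefficients(kappa_l, kappa_h, sigma_l, sigma_h, mu_l, mu_h,
                      eta_tilde, v_f):
    """FDDE coefficients (alpha, beta, gamma) of Equation (23) from the
    plasma parameters; only Gamma_1 and Gamma_3 are needed."""
    S_l, S_h = -kappa_l + 0.5, -kappa_h + 0.5
    R_l, R_h = kappa_l - 1.5, kappa_h - 1.5
    G1 = -(S_l * mu_l * sigma_l / R_l + S_h * mu_h * sigma_h / R_h)
    G3 = -(S_l * mu_l * sigma_l**3 * (S_l - 1) * (S_l - 2) / (6 * R_l**3)
           + S_h * mu_h * sigma_h**3 * (S_h - 1) * (S_h - 2) / (6 * R_h**3))
    v_ph = 1.0 / math.sqrt(G1)                     # Equation (16)
    P1 = (15 - 6 * G3 * v_ph**6) / (4 * v_ph**3)   # mKdVB coefficients
    P2 = v_ph**3 / 2
    P3 = eta_tilde / 2
    return -P3 / P2, v_f / P2, P1 / (3 * P2)       # alpha, beta, gamma

# illustrative values: kappa_l = kappa_h = 3, sigma_l = 2.5, sigma_h = 0.1,
# mu_l = mu_h = 0.4, eta_tilde = 0.3, v_f = 0.1
alpha, beta, gamma = fdde_coefficients(3.0, 3.0, 2.5, 0.1, 0.4, 0.4, 0.3, 0.1)
```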

Let us now investigate the effect of typical complex plasma parameters, namely (*κ<sub>l</sub>*, *σ<sub>h</sub>*, *μ<sub>l</sub>*, *μ<sub>h</sub>*) = (3, 0.1, *μ<sub>c</sub>*, 0.4), and different values of (*σ<sub>l</sub>*, *κ<sub>h</sub>*, *η*˜) on the profile of the plasma oscillations. Some plasma data are used as an example for investigating the MSDTM solution, as shown in Figure 3. It is clear from Figure 3 that increasing the viscosity parameter *η*˜ increases the number of oscillations and decreases the period. Note that *σ<sub>l</sub>* affects the oscillation profile in the same way as *η*˜, while *κ<sub>h</sub>* has the opposite effect: the number of oscillations decreases and the period increases as *κ<sub>h</sub>* is enhanced.

**Figure 3.** Plot of the initial solution *u*(*t*) for *η<sup>h</sup>* = 0.4; *κ<sup>l</sup>* = 3; *κ<sup>h</sup>* = 3; *σ<sup>l</sup>* = 2.5; *σ<sup>h</sup>* = 0.1; *uf* = 0.1; *η* = 0.3. The plot shows the effects of: *η* (**a**); *σ<sup>l</sup>* (**b**); and *κ<sup>h</sup>* (**c**).

#### **4. Conclusions**

The forced damped Duffing equation *ϕ*′′ + *αϕ*′ + *βϕ* + *γϕ*<sup>3</sup> = *F* with arbitrary initial conditions is investigated numerically via the highly accurate MSDTM. The approximate solutions obtained with the MDTM and MSDTM are compared with the RK4 numerical solution. Moreover, the maximum residual error of each approximate solution with respect to the RK4 solution is estimated. It is observed that the MSDTM approximation is highly accurate and better than both the DTM and MDTM. Furthermore, the application of the FDDE to a practical plasma model is investigated in order to study the dynamics of the nonlinear oscillations that occur in a complex unmagnetized plasma. This solution might help researchers studying related problems in various fields of science such as plasma physics and optical fibers.

Future work: in this work, the MSDTM is devoted to solving the FDDE with a constant force, but sometimes the perturbation force is not constant but periodic in time, *ϕ*′′ + *αϕ*′ + *βϕ* + *γϕ*<sup>3</sup> = *f*(*t*); this is an important and vital problem, but it is beyond the present scope.

**Author Contributions:** N.H.A., conceptualization of the mathematics, methodology, software, computation, mathematical analysis, writing and editing; and S.A.E.-T., conceptualization of the physics, introducing the application, computation, physics analysis and writing. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. (D-1441-445-662).

**Institutional Review Board Statement:** The study did not involve humans or animals.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The study did not report any data.

**Acknowledgments:** This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. (D-1441-445-662). The authors, therefore, acknowledge with thanks DSR technical and financial support.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **The Bateman Functions Revisited after 90 Years—A Survey of Old and New Results**

**Alexander Apelblat 1, Armando Consiglio <sup>2</sup> and Francesco Mainardi 3,\***


**Abstract:** The Bateman functions and the allied Havelock functions were introduced as solutions of some problems in hydrodynamics about ninety years ago, but after a period of one or two decades they were practically neglected. In handbooks, the Bateman function is only mentioned as a particular case of the confluent hypergeometric function. In order to revive our knowledge of these functions, their basic properties (recurrence, functional and differential relations, series, integrals and the Laplace transforms) are presented. Some new results are also included. Special attention is directed to the Bateman and Havelock functions with integer orders, to generalizations of these functions and to the Bateman-integral function known in the literature.

**Keywords:** Bateman functions; Havelock functions; Bateman-integral functions; confluent hypergeometric functions

**Citation:** Apelblat, A.; Consiglio, A.; Mainardi, F. The Bateman Functions Revisited after 90 Years—A Survey of Old and New Results. *Mathematics* **2021**, *9*, 1273. https://doi.org/10.3390/math9111273

Academic Editors: J. Tenreiro Machado, Manuel de León, Carsten Schneider and Abdelmejid Bayad

Received: 19 April 2021 Accepted: 26 May 2021 Published: 1 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

Harry Bateman (1882–1946) was a renowned Anglo-American applied mathematician who made outstanding contributions to mathematical physics, namely to aero- and fluid dynamics, to electro-magnetic and optical phenomena, to thermodynamics and geophysics, and to many other fields [1,2]. His main interests in mathematics were analytical solutions of partial differential and integral equations. His book published in 1932, *Partial Differential Equations of Mathematical Physics* [3], is even today a basic textbook on this subject. Born in Manchester and educated at Trinity College, Cambridge, with a continuation in Paris and Göttingen, Bateman emigrated to the USA in 1910, and starting in 1917, for nearly three decades, he was Professor of Aeronautical Research and Mathematical Physics at the California Institute of Technology (Caltech). During these years he solved a number of applied problems and simultaneously compiled from the mathematical literature a vast amount of information on special functions and their properties.

From the enormous scientific legacy that Bateman left behind him, it is important to mention three items which are named after him. The first is the so-called *Bateman equation*, which is applied in the solution of pharmacokinetics problems (modeling of effective therapeutic management of drugs). As usual with Bateman, the origin of this equation came from an interaction with other scientists, in this case with Ernest Rutherford. It involves the solution of a set of ordinary differential equations describing the radioactive decay process. Mathematically, this process is similar to the behaviour of drugs in the human body, and it is therefore frequently used in pharmacokinetic models (see for example [4], and for prediction of the spread of COVID-19 see [5]).

In mathematics, the Bateman name is mostly associated with the five red books published in the fifties of the previous century, which constitute the so-called *Bateman Manuscript Project*. Three volumes are devoted to the properties of special functions [1] and two volumes to tables of integral transforms [6]. This enormous collection of functions, series and integrals, together with the description of their properties, is based on the material compiled largely by Bateman and prepared for publication by four editors: A. Erdélyi, W. Magnus, F. Oberhettinger and F.G. Tricomi. Even today, these five books are indispensable for mathematicians, scientists and engineers involved in the study and use of special functions and integral transforms. They served as a precursor and model for the various compilations of mathematical reference data that appeared later in print or in modern on-line form (for the most important, see for example [7–19]).

In 1931 Bateman published a paper entitled *The k-function, a particular case of the confluent hypergeometric function*, where he presented the definite trigonometric integral (1) and derived many of its properties [20]

$$k\_n(\mathbf{x}) = \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta - n\theta) \, d\theta, \quad n = 0, 1, 2, 3, \dots \tag{1}$$

This integral represents the solution of the ordinary differential equation which appeared in Theodore von Kármán's theory of turbulent flows

$$
\mathbf{x} \frac{d^2 u(\mathbf{x})}{d\mathbf{x}^2} = (\mathbf{x} - \boldsymbol{n}) u(\mathbf{x}) \,. \tag{2}
$$

Bateman named the integral in (1) the *k*-function in tribute to the outstanding contributions of von Kármán in the field of fluid dynamics. Nowadays, denoted in the mathematical literature by a small or capital *k*, this function, in the more general form below, is called the Bateman function of argument *x* and order (parameter) *ν*:

$$k\_{\nu}(\mathbf{x}) = \frac{2}{\pi} \int\_{0}^{\pi/2} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta \,. \tag{3}$$
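For readers who wish to evaluate definition (3) directly, the substitution *t* = tan *θ* turns it into an integral over the half-line, which can then be truncated and treated by elementary quadrature. The following stand-alone sketch is ours (the authors of the survey used MATLAB); the truncation point and grid size are illustrative choices.

```python
import math

def bateman_k(nu, x, T=500.0, n=100_000):
    """Approximate the Bateman function k_nu(x) for x > 0.

    Substituting t = tan(theta) in definition (3) gives
        k_nu(x) = (2/pi) * int_0^inf cos(x*t - nu*arctan(t)) / (1 + t^2) dt,
    which is truncated at t = T and evaluated with the composite
    Simpson rule (n even subintervals).
    """
    h = T / n
    def f(t):
        return math.cos(x * t - nu * math.atan(t)) / (1.0 + t * t)
    s = f(0.0) + f(T)
    s += 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) for i in range(1, n // 2))
    return (2.0 / math.pi) * s * h / 3.0
```

For example, `bateman_k(0, 1.0)` reproduces *k*<sub>0</sub>(1) = *e*<sup>−1</sup> to a few digits, consistent with the closed forms quoted later in the text; the oscillatory tail beyond *t* = *T* contributes only *O*(1/*T*<sup>2</sup>).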

The reason that Bateman used integer orders only came from the fact that the *k<sub>n</sub>*(*x*) functions can then be expressed by Rodrigues-type formulas and are associated with the Laguerre polynomials. This also made it possible to express sums of them in closed form and to link the Bateman functions with the confluent hypergeometric and Whittaker functions. In 1935, some new results were derived by Shastri [21], who showed that methods of operational calculus can be applied to this function.

Unfortunately, the Bateman functions later received rather limited attention in the mathematical literature. Only a few topics associated with them were considered, mainly by Indian mathematicians [22–35]. They included the generalized Bateman functions; dual, triple and multi-series equations for these functions; some integral equations; and recurrence relations. It is also worth mentioning that in mathematical textbooks and tables, the Bateman function is not treated as a minor special function in its own right, but only indicated as a particular case of the confluent hypergeometric function. Besides, no plots or tabulations of the Bateman functions are known in the literature.

One of the first attempts to enlarge the knowledge of the properties of the Bateman functions was evidently to introduce a new function, by replacing the cosine in the integrand of (1) with a sine:

$$T\_{n}(\mathbf{x}) = \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta - n\theta) \, d\theta, \quad n = 0, 1, 2, 3, \dots \tag{4}$$

In 1950 H.M. Srivastava [25] and in 1966 K.N. Srivastava [29] suggested denoting this new function by *T<sub>n</sub>*(*x*), where the capital letter *T* was adopted to honor Walter Tollmien, whose pioneering work concerned the transition region between fully established laminar and turbulent flows. However, an unquestionably historical fact is that both trigonometric integrals as defined in (1) and (4) were already considered six years earlier, in 1925, by Havelock, who investigated some problems associated with surface waves [36]. In the case of a circular cylinder immersed in a uniform flow, he needed to evaluate the following integrals, which are written here in their original notation for *k* > 0

$$L\_r = \int\_0^{\pi/2} \cos(2r\phi - k\tan\phi) \,d\phi, \quad M\_r = \int\_0^{\pi/2} \sin(2r\phi - k\tan\phi) \,d\phi \,. \tag{5}$$

Thus, in view of the identifications 2*r* = *n* and *k* = *x*, these integrals differ from (1) and (4) only by the normalization factor 2/*π* and the minus sign in the second integral. What is even more important, Havelock was able to present the first six integrals in closed form. It is also of interest to mention that Bateman knew about the Havelock paper and the related integrals investigated by him. These integrals are included in the manuscript (later edited and published by Erdélyi) which was found among his papers [37]. Taking these facts into account, it is more fair and consistent to name the sine integral the *Havelock function* and to use a notation similar to that in (3):

$$h\_{\nu}(\mathbf{x}) = \frac{2}{\pi} \int\_{0}^{\pi/2} \sin(\mathbf{x} \tan \theta - \nu \theta) \, d\theta \,. \tag{6}$$

In the next step, further generalizations of the Bateman function were proposed by including powers of trigonometric functions in integrands for *m*, *n* = 0, 1, 2, 3, . . . ,

$$\begin{aligned} k\_{\nu}^{m}(\mathbf{x}) &= \frac{2}{\pi} \int\_{0}^{\pi/2} (\cos \theta)^{m} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta \,, \\ k\_{\nu}^{m,n}(\mathbf{x}) &= \frac{2}{\pi} \int\_{0}^{\pi/2} (\sin \theta)^{m} (\cos \theta)^{n} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta \,. \end{aligned} \tag{7}$$

However, by reviewing the papers dealing with these so-called generalized Bateman functions, Erdélyi pointed out that the integrals in (7) are particular cases of confluent hypergeometric functions and the derived mathematical expressions are not new because they follow directly from manipulations with known properties of the Kummer confluent hypergeometric functions.

Probably the most noteworthy of the generalized Bateman functions is the one proposed by Chaudhuri [38]. In analogy with the integral Bessel functions, he introduced the *Bateman-integral function*

$$k i\_n(\mathbf{x}) = -\int\_{\mathbf{x}}^{\infty} \frac{k\_{2n}(\mathbf{u})}{\mathbf{u}} \, d\mathbf{u} \quad ; \quad \mathbf{x} > \mathbf{0}, \tag{8}$$

and discussed its properties.

As already mentioned above, in the last decades the interest in the Bateman functions was very limited, and only the investigations of Koepf and Schmersau [39–41], dealing with recurrence and other relations of the *F<sub>n</sub>*(*x*) functions defined by

$$\begin{aligned} e^{-\mathbf{x}(1+t)/(1-t)} &= \sum\_{n=0}^{\infty} t^n F\_{n}(\mathbf{x}), \\ F\_n(\mathbf{x}) &= (-1)^n k\_{2n}(\mathbf{x}) = (-1)^n \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta - 2n\theta) \, d\theta, \end{aligned} \tag{9}$$

should be mentioned.

Considering that at the present time the Bateman functions are unjustly neglected and nearly entirely forgotten, we decided to prepare this survey in order to revive them and to promote them as independent functions. It seems that the Bateman functions should be treated separately, rather than as particular cases of the confluent hypergeometric functions or the Whittaker functions. Bearing in mind that the literature on the subject is rather old and practically unknown, after the Introduction, in the second section of this survey we collect the most important properties of the Bateman functions with integer orders *k<sub>n</sub>*(*x*). In the next section we present known results associated with the Havelock functions with integer orders *h<sub>n</sub>*(*x*). In the fourth section the generalized Bateman and Havelock functions are discussed. More general aspects related to the Bateman and Havelock functions of arbitrary order are considered in the fifth section. In these sections some new results derived by us are also included. The sixth section is dedicated to properties of the Bateman-integral functions. Concluding remarks are included in the last section.

In Appendix A we report various finite and infinite integrals of functions associated with functions considered in this survey. Differential equations and trigonometric integrals associated with the Kummer confluent hypergeometric function are discussed in Appendix B. We refer the readers to Appendix C where they can find the integral representations of known special functions recalled in the text because of their relations with the Bateman and Havelock functions.

It is expected that all the results presented here in analytical and graphical form will stimulate new research devoted to the Bateman and Havelock functions, and that these functions will find a desirable and proper place in the mathematical literature.

#### **2. The Bateman Functions with Integer Orders**

The Bateman functions with integer order *n* and with real argument *x*, are defined by

$$k\_n(\mathbf{x}) = \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta - n\theta) \, d\theta, \quad n = 0, 1, 2, 3, \dots \tag{10}$$

For this integral Bateman showed that [20]

$$\begin{aligned} k\_{n}(0) &= \frac{2}{\pi n} \sin \left( \frac{\pi n}{2} \right), & k\_{2n}(0) &= 0, \\ \lim\_{x \to \infty} k\_{n}(x) &= \lim\_{x \to \infty} k'\_{n}(x) = 0, \end{aligned} \tag{11}$$

and

$$\begin{aligned} |k\_{\boldsymbol{n}}(\boldsymbol{x})| &\le 1 \\ |k\_{\boldsymbol{n}}(\boldsymbol{x})| &\le \left|\frac{n}{\boldsymbol{x}}\right|; \quad |k\_{\boldsymbol{n}}(\boldsymbol{x})| \le \left|\frac{n^2 + 2}{\boldsymbol{x}^2}\right|; \quad n > 2, \\ |k\_{2\boldsymbol{n}}(\boldsymbol{x})| &\le \left|\frac{2n}{\boldsymbol{x}}\right|; \quad \boldsymbol{x} > 1, \\ |k\_{\boldsymbol{n}}'(\boldsymbol{x})| &\le \left|\frac{n}{2\boldsymbol{x}}\right|. \end{aligned} \tag{12}$$

In the case of even integers they are associated with the Havelock integrals (5) and with *Fn*(*x*) functions (9) in the following way [36,39–41]

$$k\_{2n}(\mathbf{x}) = \frac{2}{\pi}L\_n(\mathbf{x}), \quad k\_{2n}(\mathbf{x}) = (-1)^n F\_n(\mathbf{x}), \quad h\_{2n}(\mathbf{x}) = -\frac{2}{\pi}M\_n(\mathbf{x}).\tag{13}$$

The first six Bateman functions were tabulated by Havelock [36] for *x* > 0,

$$\begin{aligned} k\_0(\mathbf{x}) &= e^{-\mathbf{x}}, \\ k\_2(\mathbf{x}) &= 2\mathbf{x}e^{-\mathbf{x}}, \\ k\_4(\mathbf{x}) &= 2\mathbf{x}(\mathbf{x}-1)e^{-\mathbf{x}}, \\ k\_6(\mathbf{x}) &= \frac{2}{3}\mathbf{x}(2\mathbf{x}^2 - 6\mathbf{x} + 3)e^{-\mathbf{x}}, \\ k\_8(\mathbf{x}) &= \frac{2}{3}\mathbf{x}(\mathbf{x}^3 - 6\mathbf{x}^2 + 9\mathbf{x} - 3)e^{-\mathbf{x}}, \\ k\_{10}(\mathbf{x}) &= \frac{2}{15}\mathbf{x}(2\mathbf{x}^4 - 20\mathbf{x}^3 + 60\mathbf{x}^2 - 60\mathbf{x} + 15)e^{-\mathbf{x}}, \\ k\_{12}(\mathbf{x}) &= \frac{2}{45}\mathbf{x}(2\mathbf{x}^5 - 30\mathbf{x}^4 + 150\mathbf{x}^3 - 300\mathbf{x}^2 + 225\mathbf{x} - 45)e^{-\mathbf{x}}. \end{aligned} \tag{14}$$
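As a quick consistency check (our own sketch, not part of the survey), each closed form above should satisfy von Kármán's equation (2), *x* *u*′′(*x*) = (*x* − *n*) *u*(*x*), with *n* equal to the order; a finite-difference test for *k*<sub>6</sub> follows.

```python
import math

# Spot-check that the tabulated closed form of k_6 from (14) satisfies
# von Karman's equation (2), x*u''(x) = (x - n)*u(x) with n = 6,
# using second-order central differences.
def k6(x):
    return (2.0/3.0) * x * (2*x**2 - 6*x + 3) * math.exp(-x)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0*f(x) + f(x - h)) / (h*h)

for x in (0.5, 1.0, 2.5):
    lhs = x * second_derivative(k6, x)
    rhs = (x - 6.0) * k6(x)
    assert abs(lhs - rhs) < 1e-5
```

The same loop with the other polynomials in (14) (and the corresponding order *n*) passes as well, which is a convenient way to catch transcription errors in the table.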

In the general case these polynomials can be derived from the Rodrigues-type formula

$$k\_{2n}(\mathbf{x}) = \frac{(-1)^n \mathbf{x} e^{\mathbf{x}}}{n!} \frac{d^n}{d\mathbf{x}^n} \left[ \mathbf{x}^{n-1} e^{-2\mathbf{x}} \right], \tag{15}$$

which is similar to that of the generalized Laguerre polynomials *L*<sub>*n*</sub><sup>(*α*)</sup>(*x*):

$$L\_n^{(\alpha)}(\mathbf{x}) = \frac{\mathbf{x}^{-\alpha} e^{\mathbf{x}}}{n!} \frac{d^n}{d\mathbf{x}^n} \left[\mathbf{x}^{n+\alpha} e^{-\mathbf{x}}\right].\tag{16}$$

Bateman showed that for his functions with even integer orders we have [20]

$$k\_{2n}(\mathbf{x}) = (-1)^n e^{-\mathbf{x}} [L\_n(2\mathbf{x}) - L\_{n-1}(2\mathbf{x})],\tag{17}$$

where *Lk*(*z*) are the Laguerre polynomials.
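Relation (17) is easy to spot-check numerically; the sketch below (ours) builds the Laguerre polynomials from their standard three-term recurrence and compares the right-hand side with the closed forms of *k*<sub>2</sub> and *k*<sub>4</sub> from (14).

```python
import math

# Check relation (17), k_{2n}(x) = (-1)^n e^{-x} [L_n(2x) - L_{n-1}(2x)],
# against the closed forms (14), using the standard three-term
# recurrence (k+1) L_{k+1} = (2k+1-z) L_k - k L_{k-1}.
def laguerre(n, z):
    if n == 0:
        return 1.0
    lm, l = 1.0, 1.0 - z
    for k in range(1, n):
        lm, l = l, ((2*k + 1 - z)*l - k*lm) / (k + 1)
    return l

def k2n_closed(n, x):  # n = 1, 2 from table (14)
    return {1: 2*x, 2: 2*x*(x - 1)}[n] * math.exp(-x)

for n in (1, 2):
    for x in (0.3, 1.0, 2.2):
        rhs = (-1)**n * math.exp(-x) * (laguerre(n, 2*x) - laguerre(n - 1, 2*x))
        assert abs(k2n_closed(n, x) - rhs) < 1e-12
```

For *n* = 1, for instance, the right-hand side reduces symbolically to −*e*<sup>−*x*</sup>[(1 − 2*x*) − 1] = 2*x* *e*<sup>−*x*</sup>, which is exactly *k*<sub>2</sub>(*x*).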

It is more difficult to express the Bateman functions with odd orders in terms of other known functions. For *n* = 1, Bateman introduced a new integration variable *t* = tan *θ* and obtained [20]

$$\begin{split} k\_1(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta - \theta) \, d\theta = \\ \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta) \cos \theta \, d\theta + \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta) \sin \theta \, d\theta &= \\ \frac{2}{\pi} \int\_0^{\infty} \frac{\cos(\mathbf{x} t)}{(1 + t^2)^{3/2}} \, dt + \frac{2}{\pi} \int\_0^{\infty} \frac{t \sin(\mathbf{x} t)}{(1 + t^2)^{3/2}} \, dt &= \\ \frac{2}{\pi} \int\_0^{\infty} \frac{\cos(\mathbf{x} t)}{(1 + t^2)^{3/2}} \, dt + \frac{2\mathbf{x}}{\pi} \int\_0^{\infty} \frac{\cos(\mathbf{x} t)}{(1 + t^2)^{1/2}} \, dt. \end{split} \tag{18}$$

The last two integrals are the integral representations of the modified Bessel functions of the second kind of the first and zero orders [7]

$$\begin{aligned} k\_1(\mathbf{x}) &= \frac{2\mathbf{x}}{\pi} \left[ K\_1(\mathbf{x}) + K\_0(\mathbf{x}) \right]; \quad \mathbf{x} > 0,\\ k\_1(\mathbf{x}) &= -\frac{2\mathbf{x}}{\pi} \left[ K\_1(-\mathbf{x}) - K\_0(-\mathbf{x}) \right]; \quad \mathbf{x} < 0. \end{aligned} \tag{19}$$

The Bateman functions with other even and odd integer orders can also be derived by applying the recurrence relations which are in the form of difference equations and differential-difference equations

$$\begin{aligned} \left(2\mathbf{x} - 2n\right)k\_{2n}(\mathbf{x}) &= \left(n - 1\right)k\_{2n - 2}(\mathbf{x}) + \left(n + 1\right)k\_{2n + 2}(\mathbf{x}),\\ 4\mathbf{x}k\_{n}'(\mathbf{x}) &= \left(n - 2\right)k\_{n - 2}(\mathbf{x}) - \left(n + 2\right)k\_{n + 2}(\mathbf{x}),\\ k\_{n}'(\mathbf{x}) + k\_{n + 2}'(\mathbf{x}) &= k\_{n}(\mathbf{x}) - k\_{n + 2}(\mathbf{x}),\\ \mathbf{x}k\_{n}''(\mathbf{x}) &= \left(\mathbf{x} - n\right)k\_{n}(\mathbf{x}). \end{aligned} \tag{20}$$
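The recurrence relations above can likewise be spot-checked against the closed forms (14); a minimal sketch (ours) for *n* = 2:

```python
import math

# Closed forms of the first even-order Bateman functions (Equation (14)),
# used to spot-check the recurrence relations (20) at sample points.
def k2(x): return 2*x*math.exp(-x)
def k4(x): return 2*x*(x - 1)*math.exp(-x)
def k6(x): return (2.0/3.0)*x*(2*x**2 - 6*x + 3)*math.exp(-x)

def dk2(x):  # derivative of k2
    return 2*math.exp(-x)*(1 - x)

for x in (0.5, 1.0, 2.0, 3.7):
    # (2x - 2n) k_{2n} = (n - 1) k_{2n-2} + (n + 1) k_{2n+2},  n = 2
    assert abs((2*x - 4)*k4(x) - (k2(x) + 3*k6(x))) < 1e-12
    # 4x k_n' = (n - 2) k_{n-2} - (n + 2) k_{n+2},  n = 2
    # (the k_0 term drops out because of the factor n - 2 = 0)
    assert abs(4*x*dk2(x) + 4*k4(x)) < 1e-12
```

Checks of this kind are a cheap safeguard when the relations are reimplemented for generating higher-order Bateman functions.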

For example, using the second equation in (20) for *n* = 1, we have

$$\begin{aligned} k\_3(\mathbf{x}) &= -\frac{1}{3} \left[ 4\mathbf{x} \frac{dk\_1(\mathbf{x})}{d\mathbf{x}} + k\_{-1}(\mathbf{x}) \right], \\ \frac{dk\_1(\mathbf{x})}{d\mathbf{x}} &= \frac{2}{\pi} [K\_1(\mathbf{x}) + K\_0(\mathbf{x})] + \frac{2\mathbf{x}}{\pi} \left[ \frac{dK\_1(\mathbf{x})}{d\mathbf{x}} + \frac{dK\_0(\mathbf{x})}{d\mathbf{x}} \right], \\ \frac{dK\_1(\mathbf{x})}{d\mathbf{x}} &= -\frac{K\_2(\mathbf{x}) + K\_0(\mathbf{x})}{2}, \\ \frac{dK\_0(\mathbf{x})}{d\mathbf{x}} &= -K\_1(\mathbf{x}), \end{aligned} \tag{21}$$

and *k*−1(*x*) can be expressed by using integrals from (18)

$$\begin{aligned} k\_{-1}(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta + \theta) \, d\theta = \\ \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta) \cos \theta \, d\theta - \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta) \sin \theta \, d\theta. \end{aligned} \tag{22}$$

It is also possible to obtain the Bateman functions with odd orders by a different, new procedure, for example for *k*<sub>3</sub>(*x*):

$$\begin{aligned} k\_3(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta - 3\theta) \, d\theta = \\ \frac{2}{\pi} \int\_0^{\pi/2} \cos(\mathbf{x} \tan \theta) \cos(3\theta) \, d\theta &+ \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta) \sin(3\theta) \, d\theta, \end{aligned} \tag{23}$$

but with *t* = tan *θ*

$$\begin{aligned} \sin(3\theta) &= 3\sin\theta - 4(\sin\theta)^3 = \sin\theta \frac{3 - (\tan\theta)^2}{1 + (\tan\theta)^2} = \frac{t(3 - t^2)}{(1 + t^2)^{3/2}} \\ \cos(3\theta) &= -3\cos\theta + 4(\cos\theta)^3 = \cos\theta \frac{1 - 3(\tan\theta)^2}{1 + (\tan\theta)^2} = \frac{(1 - 3t^2)}{(1 + t^2)^{3/2}} \end{aligned} \tag{24}$$

and therefore (23) becomes

$$k\_3(\mathbf{x}) = \frac{2}{\pi} \int\_0^\infty \frac{(1 - 3t^2)\cos(\mathbf{x}t)}{(1 + t^2)^{5/2}} \, dt + \frac{2}{\pi} \int\_0^\infty \frac{t(3 - t^2)\sin(\mathbf{x}t)}{(1 + t^2)^{5/2}} \, dt. \tag{25}$$

However, integrals of this type can be evaluated by differentiating the modified Bessel functions of the second kind [14]

$$\begin{split} \int\_{0}^{\infty} \frac{t^{2n+1} \sin(\mathbf{x}t)}{(1+t^2)^a}\,dt &= (-1)^{n+1} \frac{2^{1/2-a} \sqrt{\pi}}{\Gamma(a)} \frac{\partial^{2n+1}}{\partial \mathbf{x}^{2n+1}} \left[ \mathbf{x}^{a-1/2} K\_{a-1/2}(\mathbf{x}) \right], \quad a > n+1/2, \\ \int\_{0}^{\infty} \frac{t^{2n} \cos(\mathbf{x}t)}{(1+t^2)^a}\,dt &= (-1)^n \frac{2^{1/2-a} \sqrt{\pi}}{\Gamma(a)} \frac{\partial^{2n}}{\partial \mathbf{x}^{2n}} \left[ \mathbf{x}^{a-1/2} K\_{a-1/2}(\mathbf{x}) \right], \quad a > n. \end{split} \tag{26}$$

Using known expressions for sin(*α* + 2*θ*) and cos(*α* + 2*θ*) functions with *α* = 2*n* + 1, and taking into account that [7] with *t* = tan *θ*

$$\begin{aligned} \sin(2\theta) &= \frac{2\tan\theta}{1+(\tan\theta)^2} = \frac{2t}{1+t^2},\\ \cos(2\theta) &= \frac{1-(\tan\theta)^2}{1+(\tan\theta)^2} = \frac{1-t^2}{1+t^2}, \end{aligned} \tag{27}$$

the above-described procedure can be extended to the Bateman functions with higher odd orders. Integrals of the type presented in (26) can also be used when derivatives with respect to the argument are considered, with *m* = 0, 1, 2, 3, . . .

$$\begin{split} \frac{\partial^{2m}k\_{n}(\mathbf{x})}{\partial \mathbf{x}^{2m}} &= (-1)^{m} \frac{2}{\pi} \int\_{0}^{\pi/2} (\tan \theta)^{2m} \cos(\mathbf{x} \tan \theta - n\theta) \, d\theta, \\\frac{\partial^{2m+1}k\_{n}(\mathbf{x})}{\partial \mathbf{x}^{2m+1}} &= (-1)^{m} \frac{2}{\pi} \int\_{0}^{\pi/2} (\tan \theta)^{2m+1} \sin(\mathbf{x} \tan \theta - n\theta) \, d\theta. \end{split} \tag{28}$$

In order to illustrate the behaviour of the Bateman functions as a function of argument and order, they were numerically evaluated using the MATLAB program; they are presented in Figure 1 for positive integer orders and in Figure 2 for negative integer orders. As can be observed by comparing both figures, the curves are shifted with the symmetry predicted by Bateman [20]

$$k\_{-n}(\mathbf{x}) = k\_n(-\mathbf{x})\,. \tag{29}$$

**Figure 1.** Bateman functions with positive integer orders as a function of argument *x*.

**Figure 2.** Bateman functions with negative integer orders as a function of argument *x*.

Considering the similarity with the generalized Laguerre polynomials, Bateman was able to show the existence of the following expansions associated with his functions with even orders [20]

$$\begin{aligned} \sum\_{n=0}^{\infty}(-1)^{n}t^{n}k\_{2n}(\mathbf{x}) &= (1-t)^{a+1}e^{-\mathbf{x}}\sum\_{n=0}^{\infty}t^{n}L\_{n}^{(a)}(2\mathbf{x}),\\ \sum\_{n=0}^{\infty}\frac{t^{n}}{2^{n}n!}k\_{2n+2}(\mathbf{x}) &= 2e^{-(\mathbf{x}+t/2)}\sqrt{\frac{\mathbf{x}}{t}}I\_{1}(2\sqrt{\mathbf{x}t}),\\ \sum\_{n=0}^{\infty}(-1)^{n}k\_{4n+2}(\mathbf{x}) &= \sin\mathbf{x}, \quad \sum\_{n=0}^{\infty}(-1)^{n}k\_{4n}(\mathbf{x}) = \cos\mathbf{x}.\end{aligned} \tag{30}$$

where *I*<sub>1</sub> denotes the modified Bessel function of order 1, see (C.8) and [7]. Shabde [22] demonstrated that

$$\begin{split} &\sum\_{n=0}^{\infty} (n+1)t^{n}k\_{2n+2}(\mathbf{x}) = \frac{2xe^{-x+[2\text{x}t/(1+t)]}}{(1+t)^{2}},\\ &\sum\_{n=0}^{\infty} (-1)^{n}(2n+1)t^{2n}k\_{2n+2}(\mathbf{x}) = \\ &\frac{2xe^{-x+2\text{x}t^{2}/(1+t^{2})}}{(1+t^{2})^{2}} \left[ (1-t^{2})\cos\left(\frac{2\text{x}t}{1+t^{2}}\right) + 2t\sin\left(\frac{2\text{x}t}{1+t^{2}}\right) \right],\\ &\sum\_{n=0}^{\infty} (-1)^{n}(2n+2)t^{2n+1}k\_{4n+4}(\mathbf{x}) =\\ &\frac{2xe^{-x+2\text{x}t^{2}/(1+t^{2})}}{(1+t^{2})^{2}} \left[ (1-t^{2})\sin\left(\frac{2\text{x}t}{1+t^{2}}\right) - 2t\cos\left(\frac{2\text{x}t}{1+t^{2}}\right) \right], \end{split} \tag{31}$$

$$\begin{aligned} \sum\_{n=0}^{\infty} \frac{(-1)^n t^n}{n!} k\_{2n+2}(\mathbf{x}) &= \sqrt{\frac{2\mathbf{x}}{t}}\, e^{-(\mathbf{x}-t)} J\_1(2^{3/2}\sqrt{\mathbf{x}t}),\\ \sum\_{n=0}^{\infty} \frac{(-1)^n t^{2n}}{(2n)!} k\_{4n+2}(\mathbf{x}) &= \sqrt{\frac{2\mathbf{x}}{t}}\, e^{-\mathbf{x}} \left[ -\sin t \, \mathrm{ber}'(2^{3/2}\sqrt{\mathbf{x}t}) + \cos t \, \mathrm{bei}'(2^{3/2}\sqrt{\mathbf{x}t}) \right],\\ \sum\_{n=0}^{\infty} \frac{(-1)^{n+1} t^{2n+1}}{(2n+1)!} k\_{4n+4}(\mathbf{x}) &= \sqrt{\frac{2\mathbf{x}}{t}}\, e^{-\mathbf{x}} \left[ \cos t \, \mathrm{ber}'(2^{3/2}\sqrt{\mathbf{x}t}) + \sin t \, \mathrm{bei}'(2^{3/2}\sqrt{\mathbf{x}t}) \right],\end{aligned} \tag{32}$$

where *J*<sub>1</sub>(*z*) is the Bessel function of the first kind and ber′(*z*) and bei′(*z*) are the derivatives of the Kelvin functions.

Additional sums of series expansions were reported by Shastri [24]

$$\begin{cases} \sum\_{n=0}^{\infty} (-1)^n t^{2n+1} k\_{4n+2}(\mathbf{x}) = e^{\mathbf{x}(t^2-1)/(1+t^2)} \sin\left(\frac{2\mathbf{x}t}{1+t^2}\right); \quad |t| < 1, \\\sum\_{n=0}^{\infty} (-1)^n t^{2n} k\_{4n}(\mathbf{x}) = e^{\mathbf{x}(t^2-1)/(1+t^2)} \cos\left(\frac{2\mathbf{x}t}{1+t^2}\right); \quad |t| < 1, \\\sum\_{n=0}^{\infty} (-1)^n k\_{4n+2}(\mathbf{x}) = \sin \mathbf{x}, \quad \sum\_{n=0}^{\infty} (-1)^n k\_{4n}(\mathbf{x}) = \cos \mathbf{x}, \end{cases} \tag{33}$$

and

$$\begin{aligned} \sum\_{n=0}^{\infty} k\_{2n}(\mathbf{x}) \sin(2n\theta) &= \sin(\mathbf{x} \tan \theta), \\ \sum\_{n=0}^{\infty} k\_{2n}(\mathbf{x}) \cos(2n\theta) &= \cos(\mathbf{x} \tan \theta), \\ \sum\_{n=0}^{\infty} k\_{2n}(\mathbf{x}) &= 1. \end{aligned} \tag{34}$$

The orthogonal relations were established by Bateman [20]

$$\begin{aligned} \int\_0^\infty [k\_{2n}(x)]^2 \, dx &= \begin{cases} 1; & n > 0,\\ 1/2; & n = 0, \end{cases} \\\\ \int\_0^\infty k\_{2n}(x)\, k\_{2n+2k}(x) \, dx &= \begin{cases} 0; & k > 1,\\ 1/2; & k = 1, \end{cases} \end{aligned} \tag{35}$$

$$\int\_0^\infty \frac{k\_n(x) k\_{2k}(x)}{x} \, dx = \frac{4 \sin\left[\frac{\pi}{2}(2k - n)\right]}{\pi n (2k - n)}; \quad k > 0,$$

and over the entire integration interval

$$\begin{aligned} \int\_{-\infty}^{+\infty} k\_{2k}(\mathbf{x}) k\_{2m}(\mathbf{x}) \, d\mathbf{x} &= \frac{\sin[\pi(m-k)]}{\pi(k-m+1)\left(k-m\right)\left(k-m-1\right)},\\ PV \int\_{-\infty}^{\infty} k\_{2k+1}(\mathbf{x}) k\_{2m+1}(\mathbf{x}) \, \frac{d\mathbf{x}}{\mathbf{x}} &= \begin{cases} 0; & k \neq m, \\\ \frac{2}{\pi(2k+1)}; & k = m. \end{cases} \end{aligned} \tag{36}$$
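The orthogonality relations (35) are easy to confirm numerically from the closed forms (14); a small sketch (ours) using the composite trapezoidal rule on a truncated interval:

```python
import math

# Numerical spot-check of the orthogonality relations (35) using the
# closed forms (14); the integrands decay like e^{-2x}, so truncation
# at x = 60 is far below the quadrature tolerance.
def k2(x): return 2*x*math.exp(-x)
def k4(x): return 2*x*(x - 1)*math.exp(-x)
def k6(x): return (2.0/3.0)*x*(2*x**2 - 6*x + 3)*math.exp(-x)

def integral(f, a=0.0, b=60.0, n=100_000):
    h = (b - a) / n
    s = 0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return s*h

assert abs(integral(lambda x: k2(x)**2) - 1.0) < 1e-4      # n > 0
assert abs(integral(lambda x: k2(x)*k4(x)) - 0.5) < 1e-4   # k = 1
assert abs(integral(lambda x: k2(x)*k6(x)) - 0.0) < 1e-4   # k > 1
```

All three integrals can also be done in closed form via Euler's Gamma integral, which confirms the values 1, 1/2 and 0 exactly.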

In the literature there are a number of infinite integrals where the Bateman functions appear in the integrands or in the final results of integration. These integrals are collected in Appendix A; here only the Laplace transforms of the Bateman functions are presented [6,9]:

$$\begin{aligned} \int\_0^\infty e^{-st} k\_0(t) \, dt &= \frac{1}{s+1}; \quad \mathrm{Re}(s+1) > 0, \\ \int\_0^\infty e^{-st} k\_{2n+2}(t) \, dt &= \frac{2(1-s)^n}{(s+1)^{n+2}}; \quad n = 0, 1, 2, \dots, \\ \int\_0^\infty e^{-st} k\_{2\nu}(t) \, dt &= \frac{\sin(\pi\nu)}{2\pi\nu(1-\nu)} \, \_2F\_1\left(1, 2; 2-\nu; \frac{1-s}{2}\right); \quad \mathrm{Re}\,s > 0, \end{aligned} \tag{37}$$

$$\begin{split} \int\_{0}^{\infty} e^{-st}e^{-t^{2}}k\_{2n}(t^{2})\,dt &= \frac{(-1)^{n-1}s^{n-3/2}e^{s^{2}/16}}{2^{3n/2+1/4}}\,W\_{-n/2-1/4,\,-n/2-1/4}\left(\frac{s^{2}}{8}\right), \\ \int\_{0}^{\infty} e^{-st}k\_{2m+2}\left(\frac{t}{2}\right)k\_{2n+2}\left(\frac{t}{2}\right)\frac{dt}{t} &= \frac{(-1)^{m+n}s^{m+n}}{(s+1)^{m+n+2}}\,\_{2}F\_{1}\left(-m,-n;2;\frac{1}{s^{2}}\right); \quad \mathrm{Re}\,s > -1, \\ \int\_{0}^{\infty} e^{-st}\frac{e^{(\alpha+\beta)t/2}}{\alpha\beta}\,k\_{2m+2}\left(\frac{\alpha t}{2}\right)k\_{2n+2}\left(\frac{\beta t}{2}\right)\frac{dt}{t} &= \\ \frac{(-1)^{m+n}(m+n+1)!\,(s-\alpha)^{m}(s-\beta)^{n}}{(m+1)!\,(n+1)!\,s^{m+n+2}}\;&\_{2}F\_{1}\left(-m,-n;-m-n-1;\frac{s(s-\alpha-\beta)}{(s-\alpha)(s-\beta)}\right), \\ m,n = 0,1,2,\ldots; \quad \mathrm{Re}\,s &> 0, \end{split} \tag{38}$$

where *W<sub>κ,μ</sub>*(*z*) is the Whittaker function. The formulas in (37) and (38) are accessible in much more general forms by applying the basic properties of the Laplace transformation:

$$\begin{aligned} L\{f(t)\} &= \int\_0^\infty e^{-st} f(t) \, dt = F(s), \\ L\{f(at)\} &= \frac{1}{a} F\left(\frac{s}{a}\right); \quad a > 0, \\ L\{e^{\pm at} f(t)\} &= F(s \mp a), \\ L\{t^n f(t)\} &= (-1)^n \frac{d^n F(s)}{ds^n}. \end{aligned} \tag{39}$$

For example in the simple case of the function *k*2(*t*) we have from (39)

$$\begin{aligned} L\{k\_2(t)\} &= \frac{2}{(s+1)^2}, \\ L\{k\_2(at)\} &= \frac{2a}{(s+a)^2}, \\ L\{e^{\pm at}k\_2(t)\} &= \frac{2}{(s\mp a+1)^2}, \\ L\{tk\_2(t)\} &= \frac{4}{(s+1)^3}. \end{aligned} \tag{40}$$
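These transform pairs can be confirmed by direct numerical integration; a small sketch (ours), truncating the Laplace integral at *t* = 60, where the integrands are already negligible:

```python
import math

# Numerical spot-check of the transform pairs for k_2(t) = 2 t e^{-t}
# (Equation (40)), using the trapezoidal rule on [0, 60].
def laplace(f, s, b=60.0, n=100_000):
    h = b / n
    g = lambda t: math.exp(-s*t) * f(t)
    return (0.5*(g(0.0) + g(b)) + sum(g(i*h) for i in range(1, n))) * h

k2 = lambda t: 2*t*math.exp(-t)
a, s = 2.0, 1.5
assert abs(laplace(k2, s) - 2/(s + 1)**2) < 1e-4                     # L{k2(t)}
assert abs(laplace(lambda t: k2(a*t), s) - 2*a/(s + a)**2) < 1e-4    # scaling
assert abs(laplace(lambda t: t*k2(t), s) - 4/(s + 1)**3) < 1e-4      # t*f(t)
```

The shift rule for *e*<sup>±*at*</sup>*k*<sub>2</sub>(*t*) can be checked the same way, provided Re *s* > *a* − 1 so that the truncated integral still converges.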

The initial and final values of the Bateman functions with even integer orders (see Figure 1) as presented in (11), can also be derived from the rules of the operational calculus

$$\begin{split} k\_0(t \to +0) &= \lim\_{s \to \infty} [sF(s)] = \lim\_{s \to \infty} \left[ \frac{s}{s+1} \right] = 1 \\ k\_0(t \to \infty) &= \lim\_{s \to 0} [sF(s)] = \lim\_{s \to 0} \left[ \frac{s}{s+1} \right] = 0 \\ k\_{2n+2}(t \to +0) &= \lim\_{s \to \infty} [sF(s)] = \lim\_{s \to \infty} \left[ \frac{2s(1-s)^n}{(s+1)^{n+2}} \right] = 0 \\ k\_{2n+2}(t \to \infty) &= \lim\_{s \to 0} [sF(s)] = \lim\_{s \to 0} \left[ \frac{2s(1-s)^n}{(s+1)^{n+2}} \right] = 0. \end{split} \tag{41}$$

Since the Bateman function is a particular case of the Whittaker function

$$k\_{2\nu}\left(\frac{t}{2}\right) = \frac{1}{\Gamma(\nu+1)}\mathcal{W}\_{\nu,1/2}(t) \tag{42}$$

it is possible to enlarge the number of Laplace transforms by using transforms of the Whittaker functions *W*1/2,1/2(*t*) and *Wν*,1/2(*t*)

$$\begin{aligned} L\left\{t^{1/2}e^{1/(2t)}k\_1\left(\frac{2}{t}\right)\right\} &= \frac{\sqrt{\pi}}{s}\left[H\_1(2\sqrt{s}) - Y\_1(2\sqrt{s})\right] \\ L\left\{t\,e^{1/(2t)}k\_1\left(\frac{2}{t}\right)\right\} &= \frac{1}{2s}H\_1^{(1)}(\sqrt{s})\,H\_1^{(2)}(\sqrt{s}) \\ L\left\{\frac{1}{t}e^{-1/(2t)}k\_1\left(\frac{2}{t}\right)\right\} &= \frac{2^{5/2}\sqrt{s}}{\pi}K\_0(\sqrt{s})\,K\_1(\sqrt{s}) \\ L\left\{\frac{1}{t^2}e^{-1/(2t)}k\_1\left(\frac{2}{t}\right)\right\} &= \frac{4}{\pi s}\left[K\_1(\sqrt{s})\right]^2 \end{aligned} \tag{43}$$

$$\begin{aligned} L\left\{t^{a-1}k\_{2\nu}\left(\frac{t}{2}\right)\right\} &= \frac{\Gamma(a)}{\Gamma(\nu+1)\Gamma(a-\nu+1)}\left(\frac{2}{2s+1}\right)^{a+1} {}\_2F\_1\left(a+1,-\nu;a-\nu+1;\frac{2s-1}{2s+1}\right); \quad \operatorname{Re}s > -\frac{1}{2} \\ L\left\{t^{\nu}e^{1/(2t)}k\_{2\nu}\left(\frac{2}{t}\right)\right\} &= \frac{2^{1-2\nu}}{\Gamma(\nu+1)\,s^{\nu+1/2}}\,S\_{2\nu,1}(2\sqrt{s}); \quad \operatorname{Re}\left(\nu \pm \frac{1}{2}\right) > -\frac{1}{2} \\ L\left\{\frac{1}{t^{\nu}}e^{-1/(2t)}k\_{2\nu}\left(\frac{2}{t}\right)\right\} &= \frac{2s^{\nu-1/2}}{\Gamma(\nu+1)}K\_{1}(2\sqrt{s}); \quad \operatorname{Re}s > 0, \end{aligned} \tag{44}$$

where $H\_\mu(t)$, $Y\_\mu(t)$, $H\_\mu^{(1)}(t)$, $H\_\mu^{(2)}(t)$ and $S\_{\mu,\nu}(t)$ are the Struve, Bessel, Hankel and Lommel functions, respectively.

#### **3. The Havelock Functions with Integer Orders**

As pointed out above, Havelock, in solving the surface wave problem [36], encountered the following trigonometric integrals with even integer values of the order (parameter) *n*

$$h\_{n}(x) = \frac{2}{\pi} \int\_{0}^{\pi/2} \sin(x \tan \theta - n\theta) \, d\theta. \tag{45}$$

These functions, for positive and negative values of the order, were computed numerically in MATLAB and are plotted in Figures 3 and 4. Comparing the two figures, it is evident that the curves are related by

$$h\_{-n}(\mathbf{x}) = -h\_n(-\mathbf{x}).\tag{46}$$
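The numerical evaluation described above can be sketched as follows, assuming SciPy is available (the function name `havelock` is ours). Substituting $t = \tan\theta$ in (45) gives a decaying oscillatory integral over $[0,\infty)$, which QUADPACK's Fourier weights handle well; as a check, $h_0$ is compared with the standard closed form $\int_0^\infty \sin(xt)/(1+t^2)\,dt = \tfrac12[e^{-x}\mathrm{Ei}(x) - e^{x}\mathrm{Ei}(-x)]$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

def havelock(n, x):
    """h_n(x) = (2/pi) * integral over [0, inf) of
    sin(x t - n*arctan t)/(1 + t^2) dt   (t = tan(theta) in (45)).
    The sin(xt) and cos(xt) pieces are split so QUADPACK's
    oscillatory weights can be used."""
    c = lambda t: np.cos(n * np.arctan(t)) / (1.0 + t * t)
    s = lambda t: np.sin(n * np.arctan(t)) / (1.0 + t * t)
    i1 = quad(c, 0.0, np.inf, weight='sin', wvar=x)[0]  # ∫ sin(xt) c(t) dt
    i2 = quad(s, 0.0, np.inf, weight='cos', wvar=x)[0]  # ∫ cos(xt) s(t) dt
    return (2.0 / np.pi) * (i1 - i2)

# check against the closed form h_0(x) = (e^{-x}Ei(x) - e^{x}Ei(-x))/pi
for x in (0.5, 1.0):
    print(havelock(0, x), (np.exp(-x) * expi(x) - np.exp(x) * expi(-x)) / np.pi)
```

The symmetry (46) follows at the level of the integrand from $\sin(-xt + n\arctan t) = -\sin(xt - n\arctan t)$, so the plotted curves in Figures 3 and 4 are mirror images as stated.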

**Figure 3.** Havelock functions with positive integer orders as a function of argument *x*.

**Figure 4.** Havelock functions with negative integer orders as a function of argument *x*.

Havelock was able to present the first six integrals in terms of polynomials and the logarithmic integrals [36]

$$\begin{aligned} h\_0(x) &= \frac{1}{2} \left[ e^{x}\,\mathrm{li}(e^{-x}) - e^{-x}\,\mathrm{li}(e^{x}) \right] \\ h\_2(x) &= x\,e^{-x}\,\mathrm{li}(e^{x}) - 1 \\ h\_4(x) &= x(x - 1)\,e^{-x}\,\mathrm{li}(e^{x}) - x \\ h\_6(x) &= \frac{x(2x^2 - 6x + 3)\,e^{-x}\,\mathrm{li}(e^{x}) - (2x^2 - 4x + 1)}{3} \end{aligned} \tag{47}$$

and

$$\begin{aligned} h\_{8}(x) &= \frac{x(x^{3} - 6x^{2} + 9x - 3)\,e^{-x}\,\mathrm{li}(e^{x}) - x(x^{2} - 5x + 5)}{3} \\ h\_{10}(x) &= \frac{x(2x^{4} - 20x^{3} + 60x^{2} - 60x + 15)\,e^{-x}\,\mathrm{li}(e^{x}) - (2x^{4} - 18x^{3} + 44x^{2} - 28x + 3)}{15} \\ h\_{12}(x) &= \frac{x(2x^{5} - 30x^{4} + 150x^{3} - 300x^{2} + 225x - 45)\,e^{-x}\,\mathrm{li}(e^{x}) - x(2x^{4} - 28x^{3} + 124x^{2} - 198x + 93)}{45} \end{aligned} \tag{48}$$

where

$$\mathrm{li}(z) = \int\_0^z \frac{dt}{\ln t} = \gamma + \ln(\ln z) + \sum\_{n=1}^\infty \frac{(\ln z)^n}{n!\,n}, \quad z = e^x. \tag{49}$$

In the same way as in the Bateman paper from 1931, the properties of the Havelock functions with integer orders were studied by Srivastava in 1950 [25]. He found that

$$\begin{aligned} |h\_n(x)| &\le 1 \\ h\_n(0) &= \frac{2}{\pi n} \left[ \cos \left( \frac{\pi n}{2} \right) - 1 \right] \\ h\_{2n}(0) &= \frac{(-1)^n - 1}{\pi n} \\ h\_{4n}(0) &= 0 \end{aligned} \tag{50}$$
 
$$\lim\_{\mathbf{x} \to \infty} h\_n(\mathbf{x}) = \lim\_{\mathbf{x} \to \infty} h\_n^{'}(\mathbf{x}) = 0,$$

$$\begin{aligned} h\_0(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta) \, d\theta = \frac{2}{\pi} \int\_0^{\infty} \frac{\sin(\mathbf{x}t)}{1 + t^2} \, dt \\\ h\_1(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta - \theta) \, d\theta = \\\ &\frac{2}{\pi} \int\_0^{\pi/2} \left[ \sin(\mathbf{x} \tan \theta) \cos \theta - \cos(\mathbf{x} \tan \theta) \cdot \sin \theta \right] \, d\theta = \\\ &\frac{2}{\pi} \int\_0^{\infty} \frac{\left[ \sin(\mathbf{x}t) - t \cos(\mathbf{x}t) \right]}{(1 + t^2)^{3/2}} \, dt .\end{aligned} \tag{51}$$

These integrals are of the type presented in (26). In 1950 Srivastava [25] showed that the infinite integral in (51) can be expressed in terms of the modified Bessel function of the first kind of zero order and the Struve function of zero order and their derivatives.

The Havelock functions satisfy the following recurrence and differential relations [25,37]

$$\begin{aligned} (2n - 4x)h\_{n}(x) + (n - 2)h\_{n-2}(x) + (n + 2)h\_{n+2}(x) &= -\frac{8}{\pi} \\ 4x\,h\_{n}'(x) &= (n - 2)h\_{n-2}(x) - (n + 2)h\_{n+2}(x) \\ h\_{n-1}'(x) + h\_{n+1}'(x) &= h\_{n-1}(x) - h\_{n+1}(x) \\ x\,h\_{n}''(x) &= (x - n)h\_{n}(x) - \frac{2}{\pi} \,. \end{aligned} \tag{52}$$

The Laplace transform of the function *h*0(*x*) can be obtained in the following way

$$\begin{split} L\{h\_{0}(x)\} &= \frac{2}{\pi} \int\_{0}^{\infty} e^{-sx} \left( \int\_{0}^{\pi/2} \sin(x\tan\theta) \,d\theta \right) dx = \\ &\frac{2}{\pi} \int\_{0}^{\pi/2} \left( \int\_{0}^{\infty} e^{-sx} \sin(x\tan\theta) \,dx \right) d\theta = \frac{2}{\pi} \int\_{0}^{\pi/2} \frac{\tan\theta}{s^{2} + (\tan\theta)^{2}} \,d\theta = \\ &\frac{2}{\pi} \int\_{0}^{\infty} \frac{t}{(s^{2} + t^{2})(1 + t^{2})} \,dt = \frac{2\ln(s)}{\pi(s^{2} - 1)}. \end{split} \tag{53}$$
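The last step of (53) is easy to confirm numerically; this minimal sketch (assuming SciPy) evaluates the *t*-integral by quadrature and compares it with the closed form $2\ln s/[\pi(s^2-1)]$, which the partial-fraction decomposition of the integrand yields.

```python
import numpy as np
from scipy.integrate import quad

def F(s):
    """(2/pi) * integral over [0, inf) of t / ((s^2 + t^2)(1 + t^2)) dt, from (53)."""
    val = quad(lambda t: t / ((s * s + t * t) * (1.0 + t * t)), 0.0, np.inf)[0]
    return 2.0 * val / np.pi

# compare with the closed form 2 ln(s) / (pi (s^2 - 1)) away from s = 1
for s in (0.5, 2.0, 5.0):
    print(F(s), 2.0 * np.log(s) / (np.pi * (s * s - 1.0)))
```

The apparent singularity at s = 1 is removable, since $\ln s \sim s-1$ there.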

For the function *h*1(*x*) we have

$$\begin{split} \mathcal{L}\{h\_{1}(\mathbf{x})\} &= \frac{2}{\pi} \int\_{0}^{\pi/2} \left( \int\_{0}^{\infty} e^{-sx} [\sin(\mathbf{x}\tan\theta)\cos\theta - \cos(\mathbf{x}\tan\theta)\sin\theta] d\mathbf{x} \right) d\theta = \\ &= \frac{2}{\pi} \int\_{0}^{\pi/2} \frac{[\tan\theta\cos\theta - \sin\theta]}{s^{2} + (\tan\theta)^{2}} d\theta = \frac{2(1-s)}{\pi} \int\_{0}^{\infty} \frac{t}{(s^{2} + t^{2})(1+t^{2})^{3/2}} dt = \\ &\quad \frac{2}{\pi(s+1)} \left[ \frac{\sec^{-1}(s)}{\sqrt{s^{2}-1}} - 1 \right]. \end{split} \tag{54}$$

The Laplace transforms of the functions *h*0(*x*) and *h*1(*x*) were also derived by Srivastava [25] in 1950, but in his final expressions the factor 2/*π* is missing.

The Havelock function *h*2(*x*) is expressed by

$$\begin{aligned} h\_2(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} \sin(\mathbf{x} \tan \theta - 2\theta) \, d\theta = \\ \frac{2}{\pi} \int\_0^{\pi/2} \left[ \sin(\mathbf{x} \tan \theta) \cos(2\theta) - \cos(\mathbf{x} \tan \theta) \sin(2\theta) \right] d\theta &= \\ \frac{2}{\pi} \int\_0^{\pi/2} \frac{\sin(\mathbf{x} \tan \theta) \left[ 1 - (\tan \theta)^2 \right] - 2 \tan \theta \cos(\mathbf{x} \tan \theta)}{1 + (\tan \theta)^2} d\theta \\ &= \frac{2}{\pi} \int\_0^{\infty} \frac{(1 - t^2) \sin(\mathbf{x} t) - 2t \cos(\mathbf{x} t)}{(1 + t^2)^2} dt \end{aligned} \tag{55}$$

and its Laplace transform is therefore

$$\begin{split} L\{h\_2(\mathbf{x})\} &= \frac{2}{\pi} \int\_0^\infty \left[ \int\_0^\infty e^{-s\mathbf{x}} \frac{(1-t^2)\sin(\mathbf{x}t) - 2t\cos(\mathbf{x}t)}{(1+t^2)^2} d\mathbf{x} \right] dt = \\ &- \frac{2[s+1+\ln(s)]}{\pi(s+1)^2} \end{split} \tag{56}$$

where the infinite integrals in (53), (54) and (56) were verified with MATHEMATICA. The derived Laplace transforms allow us to obtain the initial and final values of the Havelock functions; for example, for the function *h*0(*x*) we have

$$\begin{aligned} h\_0(x \to +0) &= \lim\_{s \to \infty} [sF(s)] = \lim\_{s \to \infty} \left[ \frac{2s \ln(s)}{\pi(s^2 - 1)} \right] = 0 \\ h\_0(x \to \infty) &= \lim\_{s \to 0} [sF(s)] = \lim\_{s \to 0} \left[ \frac{2s \ln(s)}{\pi(s^2 - 1)} \right] = 0 \end{aligned} \tag{57}$$

as it is observed in Figure 3.

There are a number of recurrence and differential expressions that involve both the Bateman and the Havelock functions. They were reported by Srivastava [25], and three of them are presented here

$$\begin{aligned} (n-2)\left[ k\_n(x) h\_{n-2}(x) - k\_{n-2}(x) h\_n(x) \right] &+ (n+2)\left[ k\_n(x) h\_{n+2}(x) - k\_{n+2}(x) h\_n(x) \right] = -\frac{8}{\pi} k\_n(x) \\ 4x\left[ k\_n(x) h\_n'(x) + k\_n'(x) h\_n(x) \right] &= (n-2)\left[ k\_n(x) h\_{n-2}(x) + k\_{n-2}(x) h\_n(x) \right] \\ &\quad - (n+2)\left[ k\_n(x) h\_{n+2}(x) + k\_{n+2}(x) h\_n(x) \right] \\ k\_n(x) h\_n''(x) - k\_n''(x) h\_n(x) &= -\frac{2}{\pi x} k\_n(x), \end{aligned} \tag{58}$$

where *n* is an even integer.

If we consider the Havelock function in the special case

$$h\_{n}(nx) = \frac{2}{\pi} \int\_0^{\pi/2} \sin[n(x \tan \theta - \theta)] \, d\theta = \frac{2}{\pi} \int\_0^{\pi/2} \sin(n\alpha) \, d\theta, \quad \alpha = x\tan\theta - \theta,\tag{59}$$

then we recognize that the sums of series of the Havelock functions can be expressed by finite trigonometric integrals.

For example from [42]

$$\frac{2}{\pi} \sum\_{n=1}^{\infty} t^{n} \sin(n\alpha) = \frac{2}{\pi} \left[ \frac{t \sin \alpha}{1 - 2t \cos \alpha + t^2} \right]; \quad t^2 < 1 \tag{60}$$

and integrating (60) over *θ*, interchanging the order of summation and integration, we have

$$\sum\_{n=1}^{\infty} t^n h\_n(n\mathbf{x}) = \frac{2}{\pi} \int\_0^{\pi/2} \frac{t \sin(\mathbf{x} \tan \theta - \theta)}{1 - 2t \cos(\mathbf{x} \tan \theta - \theta) + t^2} d\theta ; \quad t^2 < 1. \tag{61}$$

In a similar way it is possible to obtain for series of the Bateman functions

$$\sum\_{n=0}^{\infty} t^n k\_n(nx) = \frac{2}{\pi} \int\_0^{\pi/2} \frac{1 - t \cos(x \tan \theta - \theta)}{1 - 2t \cos(x \tan \theta - \theta) + t^2} d\theta ; \quad t^2 < 1. \tag{62}$$

By this procedure, using various finite and infinite trigonometric series from [42], many sums of the Bateman *kn*(*nx*) and Havelock *hn*(*nx*) series with different coefficients can be expressed by corresponding integrals.
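The trigonometric sum (60) that underlies these identities is a plain geometric-series computation and can be sanity-checked directly; a minimal sketch (pure standard library; the names `lhs`/`rhs` are ours):

```python
import math

def lhs(t, alpha, terms=200):
    """Partial sum of (2/pi) * sum t^n sin(n*alpha), n = 1..terms."""
    return (2.0 / math.pi) * sum(t ** n * math.sin(n * alpha)
                                 for n in range(1, terms + 1))

def rhs(t, alpha):
    """Closed form of (60): (2/pi) * t sin(alpha) / (1 - 2 t cos(alpha) + t^2)."""
    return (2.0 / math.pi) * t * math.sin(alpha) / (1.0 - 2.0 * t * math.cos(alpha) + t * t)

print(lhs(0.7, 1.3), rhs(0.7, 1.3))   # agree once the tail t^n is negligible
```

For |t| < 1 the truncation error decays like t^n, so 200 terms are far more than enough here.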

#### **4. The Generalized Bateman and Havelock Functions with Integer Orders**

In order to solve dual, triple or multi-series equations, a number of generalized Bateman and Havelock functions were introduced [25,26,29–35]. Of these, only the two considered in 1972 by Srivastava [31] are presented here. There is no agreed uniform notation for the generalized Bateman and Havelock functions; they are defined with different letters and with upper and lower indexes. Here these functions are written with an additional lower index, with *k* > −1, as

$$\begin{aligned} k\_{n,k}(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^k \cos(\mathbf{x} \tan \theta - n\theta) \, d\theta, \\\ h\_{n,k}(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^k \sin(\mathbf{x} \tan \theta - n\theta) \, d\theta. \end{aligned} \tag{63}$$

It is suggested that, if powers of the sine function also appear in the integrands of (63), a third lower index *m* be included

$$\begin{aligned} k\_{n,k,m}(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^k (\sin \theta)^m \cos(\mathbf{x} \tan \theta - n\theta) \, d\theta, \\\ h\_{n,k,m}(\mathbf{x}) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^k (\sin \theta)^m \sin(\mathbf{x} \tan \theta - n\theta) \, d\theta, \end{aligned} \tag{64}$$

where this notation differs from that used in (7).

The values of three such integrals, with *n* = 0 and powers *k*, *m* = 0, 1, 2, are known

$$\begin{aligned} \int\_0^{\pi/2} (\cos \theta)^2 \cos(x \tan \theta) \, d\theta &= \frac{\pi(1+x)e^{-x}}{4} = \frac{\pi}{2} \, k\_{0,2}(x)\\ \int\_0^{\pi/2} (\sin \theta)^2 \cos(x \tan \theta) \, d\theta &= \frac{\pi(1-x)e^{-x}}{4} = \frac{\pi}{2} \, k\_{0,0,2}(x)\\ \int\_0^{\pi/2} \cos \theta \sin \theta \sin(x \tan \theta) \, d\theta &= \frac{\pi x e^{-x}}{4} = \frac{\pi}{2} \, h\_{0,1,1}(x). \end{aligned} \tag{65}$$

The recurrence and differential expressions for the generalized Havelock functions are [31]

$$\begin{aligned} (n-k-2) h\_{n-2,k}(x) + (n+k+2) h\_{n+2,k}(x) + (2n - 4x) h\_{n,k}(x) &= -\frac{8}{\pi} \\ 4x\, h\_{n,k}'(x) &= (n-k-2) h\_{n-2,k}(x) - (n+k+2) h\_{n+2,k}(x) + 2k \, h\_{n,k}(x) \\ 2x\, h\_{n,k}'(x) - \frac{4}{\pi} &= (n-k-2) h\_{n-2,k}(x) + (n+k-2x) h\_{n,k}(x) \\ 2 h\_{0,2k}'(x) &= 2 h\_{0,2k+2}(x) - h\_{0,2k}(x) - h\_{2,2k+2}(x) \\ x\, h\_{n,k}''(x) - k\, h\_{n,k}'(x) + (n-x) h\_{n,k}(x) &= -\frac{2}{\pi} \end{aligned} \tag{66}$$

and for the generalized Bateman functions

$$2k\_{0,2k}'(\mathbf{x}) = \left[2k\_{0,2k+2}(\mathbf{x}) - k\_{0,2k}(\mathbf{x}) - k\_{2,2k+2}(\mathbf{x})\right].\tag{67}$$

In 1972 Srivastava [31] was able to show that

$$\begin{aligned} k\_{0,2k}(x) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^{2k} \cos(x \tan \theta) \, d\theta = \frac{2}{\sqrt{\pi}\, \Gamma(k+1)} \left(\frac{x}{2}\right)^{k+1/2} K\_{k+1/2}(x) \\ h\_{0,2k}(x) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^{2k} \sin(x \tan \theta) \, d\theta = \frac{2\Gamma(-k)}{\sqrt{\pi}} \left(\frac{x}{2}\right)^{k+1/2} \left[I\_{k+1/2}(x) - L\_{-k-1/2}(x)\right] \end{aligned} \tag{68}$$

and in the explicit form for the generalized Havelock function

$$h\_{2n,2k}(x) = \frac{1}{\pi} \left[k\_{2n}(x)\, \mathrm{li}(e^{x}) - 2S\_{n-k-1,k}(x)\right]; \quad n \ge k+1,\tag{69}$$

where he determined the following polynomials for the expression in (69)

$$\begin{aligned} S\_{2,1}(x) &= \frac{1}{6} \left( 2 + x + x^2 \right) \\ S\_{3,1}(x) &= \frac{1}{12} \left( 2 - x^2 + x^3 \right) \\ S\_{4,1}(x) &= \frac{1}{30} \left( 4 + x + 2x^2 - 4x^3 + x^4 \right) \\ S\_{5,1}(x) &= \frac{1}{180} \left( 18 - 9x^2 + 31x^3 - 16x^4 + 2x^5 \right) \end{aligned} \tag{70}$$

and

$$\begin{aligned} S\_{3,2}(\mathbf{x}) &= \frac{1}{48} \Big( 16 + 7\mathbf{x} + 3\mathbf{x}^2 + \mathbf{x}^3 \Big) \\ S\_{4,2}(\mathbf{x}) &= \frac{1}{120} \Big( 24 + 6\mathbf{x} + 2\mathbf{x}^2 + \mathbf{x}^3 + \mathbf{x}^4 \Big) \\ S\_{5,2}(\mathbf{x}) &= \frac{1}{360} \Big( 48 + 6\mathbf{x} - \mathbf{x}^3 - 2\mathbf{x}^4 + \mathbf{x}^5 \Big) \\ S\_{6,2}(\mathbf{x}) &= \frac{1}{2720} \Big( 268 + 30\mathbf{x} + 6\mathbf{x}^2 + 5\mathbf{x}^3 + 11\mathbf{x}^4 - 44\mathbf{x}^5 + 2\mathbf{x}^6 \Big). \end{aligned} \tag{71}$$

Besides, in 1972 H.M. Srivastava [31] evaluated four Laplace transforms of the generalized Bateman and Havelock functions. Two are presented here; the long and complex expressions for the functions *k*2,2*k*(*x*) and *h*2,2*k*(*x*) are omitted:

$$\begin{split} L\{k\_{0,2k}(\mathbf{x})\} &= \left[ \frac{(1-s)}{(1-s^2)^{k+1}} - \frac{s}{\sqrt{\pi}} \sum\_{m=1}^{k} \frac{\Gamma(k-m+3/2)}{\Gamma(k-m+2)\left(1-s^2\right)^m} \right] \\ L\{h\_{0,2k}(\mathbf{x})\} &= \frac{1}{\pi} \left[ \frac{2\ln(s)}{(1-s^2)^{k+1}} + \sum\_{m=1}^{k} \frac{1}{(k-m+1)\left(1-s^2\right)^m} \right]. \end{split} \tag{72}$$

For the solution of pairs of dual equations, other researchers also named Srivastava [28,29] reported a few more properties of the generalized Bateman functions, though with slightly modified definitions.

#### **5. The Bateman and Havelock Functions with Unrestricted Orders**

The general case of the Bateman and Havelock functions with arbitrary order

$$\begin{aligned} k\_{\nu}(\mathbf{x}) &= \frac{2}{\pi} \int\_{0}^{\pi/2} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta\\ h\_{\nu}(\mathbf{x}) &= \frac{2}{\pi} \int\_{0}^{\pi/2} \sin(\mathbf{x} \tan \theta - \nu \theta) \, d\theta \end{aligned} \tag{73}$$

is practically unknown in the literature, with only one exception: the definition of the Bateman function in terms of the Whittaker function *Wk*,*μ*(*z*) or the Tricomi function *U*(*a*, *b*, *z*) (particular cases of the confluent hypergeometric function) [7]

$$\begin{split} k\_{2\nu}(x) &= \frac{1}{\Gamma(\nu+1)} \, W\_{\nu,1/2}(2x) = \frac{e^{-x}}{\Gamma(\nu+1)} \, U(-\nu,0;2x) \\ U(-\nu,0;2x) &= 2x \, U(1-\nu,2;2x) \\ k\_{2n+2}(x) &= (-1)^n\, 2x\, e^{-x} \, {}\_1F\_1(-n;2;2x) ; \quad n = 0,1,2,3,\ldots \end{split} \tag{74}$$
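As a numerical cross-check (a sketch assuming SciPy; the function name `bateman` is ours), the trigonometric definition in (73), rewritten with $t = \tan\theta$ as in (76), can be compared with the confluent-hypergeometric form. Note that the alternating sign $(-1)^n$ is needed to match the polynomial forms obtained by inverting the transforms in (41), e.g. $k_2(x) = 2xe^{-x}$ and $k_4(x) = 2x e^{-x}(x-1)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1

def bateman(nu, x):
    """k_nu(x) = (2/pi) * integral over [0, inf) of
    cos(x t - nu*arctan t)/(1 + t^2) dt  (t = tan(theta) in (73))."""
    c = lambda t: np.cos(nu * np.arctan(t)) / (1.0 + t * t)
    s = lambda t: np.sin(nu * np.arctan(t)) / (1.0 + t * t)
    i1 = quad(c, 0.0, np.inf, weight='cos', wvar=x)[0]
    i2 = quad(s, 0.0, np.inf, weight='sin', wvar=x)[0]
    return (2.0 / np.pi) * (i1 + i2)

x, n = 1.5, 1                       # k_{2n+2} = k_4
closed = (-1.0) ** n * 2.0 * x * np.exp(-x) * hyp1f1(-n, 2.0, 2.0 * x)
print(bateman(2, x), 2.0 * x * np.exp(-x))           # k_2 from the integral vs 2x e^{-x}
print(bateman(4, x), closed, 2.0 * x * np.exp(-x) * (x - 1.0))
```

All three values for k_4 agree, and the k_2 case reproduces the elementary integrals listed in (65).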

Evidently, the corresponding generalized functions are

$$\begin{aligned} k\_{\nu,\alpha,\beta}(x) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^{\alpha} (\sin \theta)^{\beta} \cos(x \tan \theta - \nu \theta) \, d\theta\\ h\_{\nu,\alpha,\beta}(x) &= \frac{2}{\pi} \int\_0^{\pi/2} (\cos \theta)^{\alpha} (\sin \theta)^{\beta} \sin(x \tan \theta - \nu \theta) \, d\theta, \end{aligned} \tag{75}$$

where *α*, *β* and *ν* may take any real value. By changing the integration variable in (73) and (75) to *t* = tan *θ*, these functions can be expressed by infinite integrals

$$\begin{cases} k\_{\nu}(x) = \dfrac{2}{\pi} \displaystyle\int\_{0}^{\infty} \frac{\cos(xt)\cos[\nu\tan^{-1}(t)] + \sin(xt)\sin[\nu\tan^{-1}(t)]}{1 + t^{2}} \, dt \\ k\_{\nu,\alpha,\beta}(x) = \dfrac{2}{\pi} \displaystyle\int\_{0}^{\infty} \frac{t^{\beta} \left[\cos(xt)\cos[\nu\tan^{-1}(t)] + \sin(xt)\sin[\nu\tan^{-1}(t)]\right]}{(1 + t^{2})^{\alpha/2 + \beta/2 + 1}} \, dt \\ h\_{\nu}(x) = \dfrac{2}{\pi} \displaystyle\int\_{0}^{\infty} \frac{\sin(xt)\cos[\nu\tan^{-1}(t)] - \cos(xt)\sin[\nu\tan^{-1}(t)]}{1 + t^{2}} \, dt \\ h\_{\nu,\alpha,\beta}(x) = \dfrac{2}{\pi} \displaystyle\int\_{0}^{\infty} \frac{t^{\beta} \left[\sin(xt)\cos[\nu\tan^{-1}(t)] - \cos(xt)\sin[\nu\tan^{-1}(t)]\right]}{(1 + t^{2})^{\alpha/2 + \beta/2 + 1}} \, dt. \end{cases} \tag{76}$$

In Figures 5 and 6 we illustrate the behavior and the symmetries, with respect to the order, of the Bateman functions with fractional positive and negative orders *ν* = *n* + 1/2 and *ν* = −(*n* + 1/2), with *n* = 0, 1, 2, 3, 4, 5. The same is demonstrated in Figures 7 and 8 for the Havelock functions. Similarly to (28), differentiation of the Bateman functions with respect to the argument *x* gives, for *k* = 0, 1, 2, 3, . . .

$$\begin{aligned} \frac{\partial^{2k}k\_{\nu}(\mathbf{x})}{\partial \mathbf{x}^{2k}} &= (-1)^{k} \frac{2}{\pi} \int\_{0}^{\pi/2} (\tan \theta)^{2k} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta\\ \frac{\partial^{2k+1}k\_{\nu}(\mathbf{x})}{\partial \mathbf{x}^{2k+1}} &= (-1)^{k} \frac{2}{\pi} \int\_{0}^{\pi/2} (\tan \theta)^{2k+1} \sin(\mathbf{x} \tan \theta - \nu \theta) \, d\theta. \end{aligned} \tag{77}$$

and in the case of the Havelock functions

$$\begin{cases} \dfrac{\partial^{2k}h\_{\nu}(x)}{\partial x^{2k}} = (-1)^{k} \dfrac{2}{\pi} \displaystyle\int\_{0}^{\pi/2} (\tan \theta)^{2k} \sin(x \tan \theta - \nu \theta) \, d\theta\\ \dfrac{\partial^{2k+1}h\_{\nu}(x)}{\partial x^{2k+1}} = (-1)^{k} \dfrac{2}{\pi} \displaystyle\int\_{0}^{\pi/2} (\tan \theta)^{2k+1} \cos(x \tan \theta - \nu \theta) \, d\theta\\ k = 0, 1, 2, 3, \dots \end{cases} \tag{78}$$

**Figure 5.** Bateman functions with positive *n* + 1/2 orders as a function of argument *x*.

**Figure 6.** Bateman functions with negative *n* + 1/2 orders as a function of argument *x*.

**Figure 7.** Havelock functions with positive *n* + 1/2 orders as a function of argument *x*.

**Figure 8.** Havelock functions with negative *n* + 1/2 orders as a function of argument *x*.

Using the definition of these functions from (73), it is possible to consider the Bateman and Havelock functions as functions of two variables, *x* and *ν*. Thus, it is also possible to perform differentiation with respect to *ν*

$$\begin{aligned} \frac{\partial^{2k}k\_{\nu}(\mathbf{x})}{\partial\nu^{2k}} &= (-1)^{k}\frac{2}{\pi} \int\_{0}^{\pi/2} \theta^{2k} \cos(\mathbf{x}\tan\theta - \nu\theta) \,d\theta\\ \frac{\partial^{2k+1}k\_{\nu}(\mathbf{x})}{\partial\nu^{2k+1}} &= (-1)^{k}\frac{2}{\pi} \int\_{0}^{\pi/2} \theta^{2k+1} \sin(\mathbf{x}\tan\theta - \nu\theta) \,d\theta\\ k &= 0, 1, 2, 3, \dots \end{aligned} \tag{79}$$

$$\begin{cases} \frac{\partial^{2k}h\_{\nu}(\mathbf{x})}{\partial \nu^{2k}} = (-1)^{k} \frac{2}{\pi} \int\_{0}^{\pi/2} \theta^{2k} \sin(\mathbf{x} \tan \theta - \nu \theta) \, d\theta\\ \frac{\partial^{2k+1}h\_{\nu}(\mathbf{x})}{\partial \nu^{2k+1}} = (-1)^{k} \frac{2}{\pi} \int\_{0}^{\pi/2} \theta^{2k+1} \cos(\mathbf{x} \tan \theta - \nu \theta) \, d\theta\\ k = 0, 1, 2, 3, \dots \end{cases} \tag{80}$$

The first derivatives with respect to the order at fixed positive and negative values of argument *x* of the Bateman functions are plotted in Figures 9 and 10, and the same for the Havelock functions in Figures 11 and 12. As can be observed, these functions are symmetrical in both cases.

If the orders are purely imaginary, *ν* = *iα*, then the Bateman and Havelock functions become complex functions, expressed by integrals whose integrands contain products of trigonometric and hyperbolic functions.

As pointed out above, the Bateman and Havelock functions were introduced to the mathematical literature as solutions of particular problems in fluid mechanics [20,36].

**Figure 9.** First derivatives of the Bateman functions with respect to the order at fixed positive values of argument *x*.

**Figure 10.** First derivatives of the Bateman functions with respect to the order at fixed negative values of argument *x*.

**Figure 11.** First derivatives of the Havelock functions with respect to the order at fixed positive values of argument *x*.

**Figure 12.** First derivatives of the Havelock functions with respect to the order at fixed negative values of argument *x*.

Years later, these functions were generalized to the form given in (64) and (75) [25,26,29–35]. It should be mentioned, however, that historically these proposed generalizations are not new: they were already discussed much earlier, by Giuliani in 1888 [43] and by Bateman in 1931 [44]. They also introduced similar trigonometric integrals, but in the context of particular cases of the Kummer confluent hypergeometric functions. It is rather strange that in the later investigations [25,26,29–35], when the generalized Bateman and Havelock functions were proposed, these previous studies were completely ignored. Considering that the trigonometric integrals, and the differential equations associated with them, presented in the Giuliani and Bateman papers are of particular importance and interest, it was decided to summarize their results separately, in Appendix B.

#### **6. The Bateman-Integral Functions**

Analogous to the sine-integral, cosine-integral and Bessel-integral functions

$$\begin{aligned} \mathrm{si}(x) &= -\int\_{x}^{\infty} \frac{\sin t}{t} \, dt\\ \mathrm{Ci}(x) &= -\int\_{x}^{\infty} \frac{\cos t}{t} \, dt\\ \mathrm{ji}\_{\nu}(x) &= -\int\_{x}^{\infty} \frac{J\_{\nu}(t)}{t} \, dt, \end{aligned} \tag{81}$$

Chaudhuri [26] has introduced the Bateman-integral function

$$ki\_{2n}(x) = -\int\_{x}^{\infty} \frac{k\_{2n}(t)}{t} \, dt; \quad x > 0,\tag{82}$$

and, mainly using operational calculus, he discussed its properties:

$$\begin{aligned} ki\_{2n}(\mathbf{x}) &= \int\_0^\mathbf{x} \frac{k\_{2n}(t)}{t} \, dt + ki\_{2n}(0) \\\ ki\_{2n}(0) &= 0 \quad ; \quad n = 2k ; \quad k = 0, 1, 2, 3, \dots \\\ ki\_{2n}(0) &= -\frac{2}{n} ; \quad n = 2k + 1 \end{aligned} \tag{83}$$

Using similarity with the Laguerre polynomials, Chaudhuri [26] derived the following series expressions for the Bateman-integral functions

$$\begin{aligned} ki\_{2n}(x) &= \frac{e^{-x}}{n} \sum\_{k=1}^{n} (-2)^{k} \binom{n}{k} L\_{k-1}(x) \\ ki\_{2n}(x) &= \frac{1}{n\pi} \left[ n \, k\_{2n}(x) - 2 \sum\_{m=1}^{n} (-1)^{m} \binom{n}{m} \left[ m \, k\_{2m}(2x) + (m+1) k\_{2m+2}(2x) - 2k\_0(2x) \right] \right] \\ ki\_{2n}(x) &= \frac{(-1)^{n-1} x^{n}}{2^{n+1}} \left[ \sum\_{m=1}^{n} m \, k\_{2m}(x) \right] \\ L\_{n-1}(x) &= \frac{e^{x}}{2^{n}} \sum\_{m=1}^{n} (-1)^{m} \binom{n}{m} \, m \, ki\_{2m}(x) \\ ki\_{2}(x) &= -2k\_0(x), \end{aligned} \tag{84}$$

and the recurrence and differential expressions

$$\begin{aligned} k\_{2n}(x) &= \frac{(n-1)ki\_{2n-2}(x) - (n+1)ki\_{2n+2}(x)}{2} \\ n\,ki\_{2n}(x) + (n+1)ki\_{2n+2}(x) &= -2\sum\_{k=0}^{n} k\_{2k}(x) \\ x \, ki\_{2n}'(x) &= \frac{(n-1)ki\_{2n-2}(x) - (n+1)ki\_{2n+2}(x)}{2} \\ x \, ki\_{2n}'(x) &= k\_{2n}(x). \end{aligned} \tag{85}$$
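These relations are easy to probe numerically from the definition (82); the sketch below (assuming SciPy, with our names `k` and `ki`) uses the closed forms of $k_0$, $k_2$, $k_4$ obtained by inverting the transforms in (41), checks $ki_2 = -2k_0$ from the last line of (84), and checks the n = 1 case of the second relation in (85), whose right-hand side involves the Bateman functions $k_{2k}$ themselves.

```python
import numpy as np
from scipy.integrate import quad

# k_{2n}(t) for n = 0, 1, 2, from inverting the Laplace transforms in (41)
k = {0: lambda t: np.exp(-t),
     2: lambda t: 2.0 * t * np.exp(-t),
     4: lambda t: 2.0 * t * (t - 1.0) * np.exp(-t)}

def ki(two_n, x):
    """Bateman-integral function, definition (82)."""
    return -quad(lambda t: k[two_n](t) / t, x, np.inf)[0]

x = 0.8
print(ki(2, x), -2.0 * k[0](x))        # last line of (84): ki_2(x) = -2 k_0(x)
# n = 1 case of the second relation in (85): 1*ki_2 + 2*ki_4 = -2 [k_0 + k_2]
print(ki(2, x) + 2.0 * ki(4, x), -2.0 * (k[0](x) + k[2](x)))
```

Both pairs agree to quadrature accuracy.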

He was also able to relate the Bateman-integral functions with the Bessel and the Bessel integral functions

$$\begin{aligned} (n+1)\left[ ji\_{n+1}(x)\, ki\_{2n-2}(x) - ji\_{n-1}(x)\, ki\_{2n+2}(x) \right] &= 2x\, ji\_{n-1}(x)\, ki\_{2n}'(x) - 2n\, ji\_n'(x)\, ki\_{2n-2}(x) \\ \sum\_{m=1}^{\infty} (-1)^m m \, ki\_{2m}(x) \, ki\_{2m}(y) &= J\_0(2\sqrt{xy}) . \end{aligned} \tag{86}$$

The Laplace transforms obtained from the integral expressions are presented here, while the indefinite, definite and infinite integrals related to the Bateman-integral functions are given in Appendix A:

$$\begin{aligned} L\{ki\_{2n}(x)\} &= \frac{1}{ns} \left[ \left( \frac{1-s}{s+1} \right)^{n} - 1 \right] = \frac{1}{ns} \sum\_{k=1}^{n} (-1)^{k} \binom{n}{k} \left( \frac{2s}{s+1} \right)^{k} \\ L\{ki\_{2n}(2x)\} &= \frac{1}{ns} \left[ \left( \frac{2-s}{s+2} \right)^{n} - 1 \right] \\ L\{ki\_0(x)\} &= -\frac{\ln(1+s)}{s} \\ L\{ki\_{2}(x)\} &= -\frac{2}{s+1} . \end{aligned} \tag{87}$$
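Since $k_0(t) = e^{-t}$, the definition (82) gives $ki_0(x) = -\int_x^\infty e^{-t}/t\,dt = -E_1(x)$, whose Laplace transform is $-\ln(1+s)/s$. A minimal numerical check (assuming SciPy; the helper name `L_ki0` is ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

# ki_0(x) = -E_1(x); its Laplace transform should equal -ln(1+s)/s
def L_ki0(s):
    return quad(lambda x: np.exp(-s * x) * (-exp1(x)), 0.0, np.inf)[0]

for s in (0.5, 1.0, 3.0):
    print(L_ki0(s), -np.log(1.0 + s) / s)
```

The logarithmic singularity of $E_1$ at the origin is integrable, so adaptive quadrature handles the transform without special treatment.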

It is also worthwhile to mention that Srivastava [25] expressed the Bateman-integral function in the following way

$$\begin{aligned} ki\_{2n}(x) &= \frac{\pi}{2} \left[ k'\_{2n}(x) h\_{2n}(x) - h'\_{2n}(x) k\_{2n}(x) \right] = \\ &\frac{\pi}{8x}\Big[(2n+2) \left[ k\_{2n}(x) h\_{2n+2}(x) - k\_{2n+2}(x) h\_{2n}(x) \right] - (2n-2) \left[ k\_{2n}(x) h\_{2n-2}(x) - k\_{2n-2}(x) h\_{2n}(x) \right] \Big] \end{aligned} \tag{88}$$

by including products of the Bateman and Havelock functions.

#### **7. Conclusions**

As solutions of fluid mechanics problems, more than ninety years ago, Havelock in 1925 and Bateman in 1931 introduced new functions expressed in terms of finite trigonometric integrals and discussed their properties. Initially, these functions attracted the attention of a number of mathematicians, who developed the subject further and proposed some generalizations. Unfortunately, after a rather short period, the Havelock and Bateman functions were practically abandoned. Today, only the Bateman function is listed in mathematical handbooks, as a particular case of the confluent hypergeometric function and thus as a minor special function. However, as is clearly shown in this survey, these functions have interesting properties, and a rather large body of mathematical material is associated with them. This leads to the conclusion that they should be treated as independent special functions. Since the information about these functions available in present reference books is very limited, we decided to prepare this survey, where the basic properties of the Havelock and Bateman functions are presented. For the reader's convenience, we have added two Appendices: Appendix A is devoted to integrals associated with the Bateman and Bateman-integral functions, whereas Appendix B is devoted to trigonometric integrals and differential equations associated with the Kummer confluent hypergeometric functions, following the almost unknown papers by Giuliani [43] and by Bateman himself [44].

In Appendix C we have added the integral representations of the special functions used in this survey.

It is worth noting that the Bateman Manuscript is currently under revision under the name *Encyclopedia of Special Functions: the Askey-Bateman Project*; see [45]. However, the volume dealing with the confluent hypergeometric functions is not yet available.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The research of F.M. and A.C. has been carried out in the framework of the activities of the National Group of Mathematical Physics (GNFM, INdAM). All the authors would like to thank the librarians of the Department of Physics and Astronomy of the University of Bologna for finding the PDFs of several articles cited in the bibliography. The reader is kindly requested to accept the authors' somewhat informal style and to contact the corresponding author to point out possible misprints and mathematical errors.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Integrals Associated with the Bateman and Bateman-Integral Functions**

The integrals presented here are compiled from the literature and have a definite form. Their number can be enlarged by applying the interconnections between the Bateman, Bateman-integral and other special functions and by using operational calculus. Besides, there are many integrals that are expressed in terms of infinite series, but they are omitted from this tabulation.

$$\int\_0^1 (1-t)^{\beta-1} \, e^{at} \, k\_{2n}(at) \, dt = \frac{(-1)^{n-1} (n-1)! \, \Gamma(\beta)}{\Gamma(\beta+n+1)} \, L\_{n-1}^{(\beta+1)}(2a); \quad \beta > 0 \tag{A1}$$

$$\int\_0^x k\_{2m}(t) k\_{2n}(x - t) \, dt = \int\_0^x k\_{2n}(t) k\_{2m}(x - t) \, dt = \frac{1}{2} \left[ k\_{2m + 2n - 2}(x) + 2k\_{2m + 2n}(x) + k\_{2m + 2n + 2}(x) \right] \tag{A2}$$

$$\begin{aligned} \int\_0^x \frac{J\_0(t) - k\_0(t)}{t}\, dt &= ji\_0(x) - ki\_0(x) + \ln 2\\ \int\_0^x \frac{J\_n(t) - k\_{2n}(t)}{t}\, dt &= ji\_n(x) - ki\_{2n}(x) + \frac{(-1)^n}{n} \end{aligned} \tag{A3}$$

$$\int\_0^\infty J\_0(2\sqrt{at}) \, k\_{2n}(t) \, dt = \frac{(-1)^{n-1}}{2} \left[(n-1)\,ki\_{2n-2}(a) - 2n\, ki\_{2n}(a) + (n+1)\,ki\_{2n+2}(a)\right] \tag{A4}$$

$$\int\_0^\infty J\_0(2\sqrt{at}) \, k\_{2n}(t) \, \frac{dt}{t} = (-1)^n\, ki\_{2n}(a) \tag{A5}$$

$$\int\_0^\infty e^{-t} \, J\_1(2^{3/2}\sqrt{xt}) \, k\_{2n}(t) \, \frac{dt}{t} = \frac{(-1)^{n-1} x^{n-1/2} e^{-x}}{\sqrt{2}\, n!} \tag{A6}$$

$$\int\_0^\infty e^{-a\,t} \, t^{n+1/2}\, J\_1(2\sqrt{xt}) \, dt = \frac{(-1)^n \Gamma(n+2)}{a^{n+1} \sqrt{x}}\, k\_{2n+2} \left(\frac{x}{2a}\right); \quad a > 0 \tag{A7}$$

$$\int\_0^x \frac{I\_n(t) - k\_{2n}(t)}{t}\, dt = ii\_n(x) - ki\_{2n}(x) + \frac{(-1)^n}{n} \tag{A8}$$

$$\int\_0^\infty t^{n/2-1}\, e^{-t} \, J\_{n}(4\sqrt{xt}) \, k\_{2n}(t) \, \frac{dt}{t} = \frac{x^{n/2-1} e^{-x}}{2}\, k\_{2n}(x) \tag{A9}$$

$$\int\_{0}^{\infty} \frac{e^{-bt^{2}}\, J\_{\lambda}(a\sqrt{t^{2}+x^{2}}) \, J\_{\nu}(a\sqrt{t^{2}+x^{2}})}{t\left(t^{2}+x^{2}\right)^{(\lambda+\nu)/2}}\, k\_{2\nu+2}(bt^{2}) \, dt = \frac{(-1)^{\nu} J\_{\lambda}(ax) \, J\_{\nu}(ax)}{(2\nu+2) \, x^{\lambda+\nu}}; \quad \mathrm{Re}(\lambda+\nu) > -3/2 \tag{A10}$$

$$\begin{aligned} \int\_0^\pi H\_{2n}(\sqrt{x}\cos\theta) \, (\sin\theta)^2 \, d\theta &= \frac{\pi\, (2n)! \, e^{x/2}}{2x\, n!}\, k\_{2n+2}\left(\frac{x}{2}\right), \\ H\_n(x) &= (-1)^n \, e^{x^2} \frac{d^n}{dx^n} \left\{ e^{-x^2} \right\} \end{aligned} \tag{A11}$$

$$\int\_0^x \sin(x - t) \, ki\_2(t) \, dt = \cos x - \sin x - e^{-x} \tag{A12}$$

$$\int\_0^x \cos(x - t) \, ki\_2(t) \, dt = e^{-x} - \sin x - \cos x \tag{A13}$$

$$\int\_0^x \sinh(x - t) \, ki\_2(t) \, dt = e^{-x} (1 + x) - \cosh x \tag{A14}$$

$$\int\_0^x \cosh(x - t) \, ki\_2(t) \, dt = -x e^{-x} - \sinh x \tag{A15}$$

$$\int\_0^x e^{x - t} \, ki\_2(t) \, dt = -2 \sinh x \tag{A16}$$

$$\int\_0^x (x - t) \, e^{x - t} \, ki\_4(t) \, dt = \sinh x - x \cosh x \tag{A17}$$

$$\int\_0^\infty e^{-at} \, k i\_0(bt) \, dt = \frac{1}{a} \ln \left( \frac{b}{a+b} \right); \quad a, b > 0 \tag{A18}$$

$$\int\_0^\infty \frac{k i\_2(at) - k i\_2(bt)}{t} \, dt = 2 \ln\left(\frac{a}{b}\right); \quad a, b > 0 \tag{A19}$$

$$\int\_0^\infty J\_0(2\sqrt{at}) \, ki\_{2n}(t) \, dt = (-1)^n\, \frac{k\_{2n}(a)}{a} \tag{A20}$$

#### **Appendix B. Trigonometric Integrals and Differential Equations Associated with the Kummer Confluent Hypergeometric Functions**

In a paper devoted to the note of Kummer, in which the confluent hypergeometric function, defined by the following power series, was introduced into mathematics,

$$\,\_1F\_1(a;b;x) = M(a,b,x) = 1 + \frac{a}{b} \frac{x}{1!} + \frac{a(a+1)}{b(b+1)} \frac{x^2}{2!} + \frac{a(a+1)(a+2)}{b(b+1)(b+2)} \frac{x^3}{3!} + \dots, \quad b \neq 0, -1, -2, \ldots \tag{B1}$$
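As a quick numerical aside (not part of the original survey), the series (B1) can be summed directly in a few lines; the helper name, the truncation at 120 terms and the parameter values below are illustrative assumptions.

```python
import math

def hyp1f1(a, b, x, terms=120):
    """Partial sum of the Kummer series (B1): 1F1(a;b;x)."""
    s, term = 1.0, 1.0
    for k in range(terms):
        # ratio of consecutive series terms: (a+k) / ((b+k)(k+1)) * x
        term *= (a + k) / (b + k) * x / (k + 1)
        s += term
    return s

# M(a,a,x) collapses to the exponential series:
print(hyp1f1(2.0, 2.0, 1.0))   # close to e = 2.71828...

# Kummer's transformation M(a,b,x) = e^x M(b-a,b,-x):
lhs = hyp1f1(0.5, 1.5, 2.0)
rhs = math.exp(2.0) * hyp1f1(1.0, 1.5, -2.0)
print(abs(lhs - rhs))          # agreement up to rounding
```

Such direct summation is adequate for moderate `x`; for large arguments the transformation above is also the standard way to avoid cancellation.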

The Italian mathematician Giulio Giuliani [43] in 1888 considered the trigonometric integral (the original notation is replaced here by that used in this survey)

$$I(x) = \int\_0^{\pi/2} (\cos \theta)^{a-1} \cos\left(\frac{x}{2} \tan \theta + n\theta\right) d\theta = \frac{\pi}{2}\, k\_{-n,\, a-1}\left(\frac{x}{2}\right), \quad a > 1. \tag{B2}$$

We note that this integral is a particular solution of the following differential equation

$$4x\frac{d^2I(x)}{dx^2} - 4(a-1)\frac{dI(x)}{dx} - (x + 2n)I(x) = 0. \tag{B3}$$

Moreover, Giuliani introduced two integrals derived from (B2):

$$\begin{aligned} U\_n(a, x) &= \int\_0^{\pi/2} (\cos \theta)^{a - 1} \cos\left(\frac{x}{2} \tan \theta\right) \cos(n\theta) \, d\theta, \\ V\_n(a, x) &= \int\_0^{\pi/2} (\cos \theta)^{a - 1} \sin\left(\frac{x}{2} \tan \theta\right) \sin(n\theta) \, d\theta, \end{aligned} \tag{B4}$$

so that

$$\int\_0^{\pi/2} (\cos \theta)^{a-1} \cos\left(\frac{x}{2} \tan \theta + n\theta\right) d\theta = U\_n(a, x) - V\_n(a, x).\tag{B5}$$

He showed that these integrals are solutions of the following system of first-order differential equations

$$\begin{aligned} 2(a-1)\frac{dU\_n(a,x)}{dx} + \frac{x}{2}\,U\_n(a-2,x) - n\,V\_n(a,x) &= 0, \\ 2(a-1)\frac{dV\_n(a,x)}{dx} + \frac{x}{2}\,V\_n(a-2,x) - n\,U\_n(a,x) &= 0, \end{aligned} \tag{B6}$$

and of the second order

$$\begin{aligned} 2x \frac{d^2 U\_n(a, x)}{dx^2} - 2(a - 1) \frac{d U\_n(a, x)}{dx} - \frac{x}{2}\, U\_n(a, x) + n\,V\_n(a, x) &= 0, \\ 2x \frac{d^2 V\_n(a, x)}{dx^2} - 2(a - 1) \frac{d V\_n(a, x)}{dx} - \frac{x}{2}\, V\_n(a, x) + n\, U\_n(a, x) &= 0. \end{aligned} \tag{B7}$$

From (B6) and (B7) it is possible to obtain a differential equation of the fourth order

$$\begin{aligned} &4x^{2}\frac{d^{4}U\_{n}(a,x)}{dx^{4}}-8(a-2)\,x\frac{d^{3}U\_{n}(a,x)}{dx^{3}}-2\left[x^{2}-2(a-1)(a-2)\right]\frac{d^{2}U\_{n}(a,x)}{dx^{2}}\\ &\quad+2x(a-2)\frac{dU\_{n}(a,x)}{dx}-\left(\frac{x^{2}}{4}+n^{2}+1-a\right)U\_{n}(a,x)=0,\\ &V\_{n}(a,x)=\frac{1}{n}\left(-2x\frac{d^{2}U\_{n}(a,x)}{dx^{2}}+2(a-1)\frac{dU\_{n}(a,x)}{dx}+\frac{x}{2}\,U\_{n}(a,x)\right). \end{aligned} \tag{B8}$$

In terms of the Kummer confluent hypergeometric functions, Giuliani was able to obtain

$$\begin{aligned} \int\_0^{\pi/2} (\cos \theta)^{a-1} \cos\left(\frac{x}{2} \tan \theta + n\theta\right) d\theta &= U\_n(a, x) - V\_n(a, x) \\ &= \frac{\pi \, \Gamma(a-1)}{2^a \, \Gamma\left(\frac{a-n+1}{2}\right) \Gamma\left(\frac{a+n+1}{2}\right)} \,\_1F\_1\left(\frac{a-n+1}{2}; 1-a; x\right) \\ &\quad- \frac{\pi^2 \cos\left(\frac{\pi(a-n)}{2}\right) x^a \, e^{-x/2}}{2^a \, \sin(\pi a) \, \Gamma(a)} \,\_1F\_1\left(\frac{a+n+1}{2}; a+1; x\right), \end{aligned} \tag{B9}$$

and

$$\begin{aligned} U\_{n}(a,x) + V\_{n}(a,x) &= \frac{\pi\, \Gamma(a-1)}{2^{a}\, \Gamma\left(\frac{a+n+1}{2}\right) \Gamma\left(\frac{a-n+1}{2}\right)} \,\_1F\_1\left(\frac{1-a-n}{2}; 1-a; x\right) \\ &\quad- \frac{\pi^{2} \cos\left(\frac{\pi(a+n)}{2}\right) x^{a}\, e^{-x/2}}{2^{a} \sin(\pi a) \, \Gamma(a)} \,\_1F\_1\left(\frac{a-n+1}{2}; a+1; x\right). \end{aligned} \tag{B10}$$

These expressions can be presented in terms of the generalized Bateman functions defined in (75)

$$\begin{aligned} k\_{-\nu,\alpha,0}(x) &= \frac{\Gamma(\alpha)}{2^{\alpha}\, \Gamma\left(\frac{\alpha-\nu}{2} + 1\right) \Gamma\left(\frac{\alpha+\nu}{2}\right)} \,\_1F\_1\left(\frac{\alpha-\nu}{2} + 1; -\alpha; 2x\right) \\ &\quad- \frac{\pi \cos\left(\frac{\pi(\alpha-\nu+1)}{2}\right) x^{\alpha+1}\, e^{-x}}{2^{\alpha} \sin\left[\pi(\alpha+1)\right] \Gamma(\alpha+1)} \,\_1F\_1\left(\frac{\alpha+\nu}{2} + 1; \alpha + 2; 2x\right), \end{aligned} \tag{B11}$$

and

$$\begin{aligned} k\_{\nu,\alpha,0}(x) &= \frac{\Gamma(\alpha)\, e^{-x}}{2^{\alpha}\, \Gamma\left(\frac{\alpha+\nu}{2}+1\right) \Gamma\left(\frac{\alpha-\nu}{2}+1\right)} \,\_1F\_1\left(\frac{-\alpha-\nu}{2}; -\alpha; 2x\right) \\ &\quad- \frac{\pi \cos\left(\frac{\pi(\alpha+\nu+1)}{2}\right) x^{\alpha+1} e^{-x}}{2^{\alpha} \sin\left[\pi(\alpha+1)\right] \Gamma(\alpha+1)} \,\_1F\_1\left(\frac{\alpha-\nu}{2}+1; \alpha+2; 2x\right). \end{aligned} \tag{B12}$$

As shown by Giuliani, by changing the integration variable the finite trigonometric integrals can be converted into infinite integrals, for example

$$\int\_0^{\pi/2} (\cos \theta)^a \cos\left(\frac{x}{2} \tan \theta\right) d\theta = \int\_0^{\infty} \frac{\cos\left(\frac{xt}{2}\right)}{(1+t^2)^{a/2+1}} \, dt \,. \tag{B13}$$
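The equality (B13) is easy to check numerically. The sketch below (an illustration, with arbitrarily chosen values a = 4, x = 1, assuming SciPy is available) evaluates both sides by adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad

a, x = 4.0, 1.0   # illustrative values; any a > 0 works

# left-hand side of (B13): the finite trigonometric integral
lhs, _ = quad(lambda th: np.cos(th) ** a * np.cos(0.5 * x * np.tan(th)),
              0.0, np.pi / 2, limit=800)

# right-hand side: the same integral after the substitution t = tan(theta);
# the integrand decays like t^(-a-2), so truncating at t = 200 is harmless here
rhs, _ = quad(lambda t: np.cos(0.5 * x * t) / (1.0 + t * t) ** (a / 2 + 1),
              0.0, 200.0, limit=600)

print(lhs, rhs)   # the two values agree to quadrature accuracy
```

The left-hand integrand oscillates ever faster as θ approaches π/2, which is why a generous subdivision limit is passed to `quad`.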

Considering the case *a* = 1 in (B2), Bateman [44] in 1931 noted the link between the integral investigated by Giuliani and the *k*-Bateman function of negative order. He also found that the solution of the following third-order differential equation

$$
x \frac{d^3 I(x)}{dx^3} - (a - 1) \frac{d^2 I(x)}{dx^2} - (x + n) \frac{d I(x)}{dx} - \beta I(x) = 0,\tag{B14}
$$

is given by the following trigonometric integral

$$I(x) = \int\_0^{\pi/2} (\cos \theta)^a (\sin \theta)^{\beta - 1} \cos(x \tan \theta + n\theta) \, d\theta = \frac{\pi}{2}\, k\_{n,\, a,\, \beta - 1}(x) \,. \tag{B15}$$

Besides, Bateman showed that for *x* > 0:

$$\begin{aligned} \int\_{0}^{\pi/2} (\cos \theta)^{m} \cos[x \tan \theta + (m+2n)\theta] \, d\theta &= \frac{e^{x} \sin(\pi n)}{2^{m+1}} \int\_{0}^{1} t^{m} (1-t)^{n-1} \, e^{-2x/t} \, dt, \\ \int\_{0}^{\pi/2} \cos[x \tan \theta + 2n\theta] \, d\theta &= \frac{e^{x} \sin(\pi n)}{2} \int\_{0}^{1} (1-t)^{n-1} \, e^{-2x/t} \, dt = \frac{\pi}{2} \, k\_{-2n}(x) \,. \end{aligned} \tag{B16}$$

As can be observed, the material included here from the 1888 paper by Giuliani and from the 1931 paper by Bateman is important from both the historical and the mathematical points of view.

#### **Appendix C. Integral Representations of Special Functions Used in This Survey**

Hypergeometric Function

$$\,\_2F\_1(a,b;c;x) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c-b)} \int\_0^1 \frac{t^{b-1} \left(1-t\right)^{c-b-1}}{(1-xt)^a}\, dt, \quad \text{Re}(c) > \text{Re}(b) > 0. \tag{C1}$$
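A hedged numerical check of the Euler integral (C1), assuming SciPy is available; the parameter values are illustrative and satisfy Re(c) > Re(b) > 0.

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b, c, x = 0.5, 1.5, 3.0, 0.3   # illustrative parameters

# weighted Euler integral from (C1)
prefac = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
integral, _ = quad(lambda t: t ** (b - 1) * (1 - t) ** (c - b - 1)
                   * (1 - x * t) ** (-a), 0.0, 1.0)

print(prefac * integral, hyp2f1(a, b, c, x))   # the two values coincide
```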

Kummer Confluent Hypergeometric Function

$$\,\_1F\_1(a;b;x) = M(a,b,x) = \frac{\Gamma(b)}{\Gamma(a)\,\Gamma(b-a)} \int\_0^1 t^{a-1} \, e^{xt} \, (1-t)^{b-a-1} \, dt, \quad \text{Re}(b) > \text{Re}(a) > 0. \tag{C2}$$

Tricomi Confluent Hypergeometric Function

$$U(a,b,x) = \frac{1}{\Gamma(a)} \int\_0^\infty t^{a-1} \, e^{-xt} \left(1+t\right)^{b-a-1} dt, \quad \mathrm{Re}(a) > 0, \ x > 0. \tag{C3}$$

Whittaker Functions

$$\begin{aligned} M\_{\kappa,\mu}(x) &= \frac{\Gamma(1+2\mu)\, x^{\mu+1/2}\, e^{-x/2}}{\Gamma(\mu+\kappa+1/2)\,\Gamma(\mu-\kappa+1/2)} \int\_{0}^{1} t^{\mu-\kappa-1/2}\, e^{xt} \left(1-t\right)^{\mu+\kappa-1/2} dt, \\ M\_{\kappa,\mu}(x) &= x^{\mu+1/2}\, e^{-x/2}\, M(\mu-\kappa+1/2,\, 1+2\mu,\, x), \\ &\quad \text{Re}(\mu \pm \kappa + 1/2) > 0. \end{aligned} \tag{C4}$$

$$\begin{aligned} W\_{\kappa,\mu}(x) &= \frac{x^{\mu+1/2}\, e^{-x/2}}{\Gamma(\mu-\kappa+1/2)} \int\_{0}^{\infty} t^{\mu-\kappa-1/2}\, e^{-xt} \left(1+t\right)^{\mu+\kappa-1/2} dt, \\ W\_{\kappa,\mu}(x) &= x^{\mu+1/2}\, e^{-x/2}\, U(\mu-\kappa+1/2,\, 1+2\mu,\, x), \\ &\quad \text{Re}(\mu-\kappa+1/2) > 0. \end{aligned} \tag{C5}$$

Bessel Functions

$$J\_{\nu}(x) = \frac{1}{\pi} \int\_{0}^{\pi} \cos(x \sin \theta - \nu \theta) \, d\theta - \frac{\sin(\pi \nu)}{\pi} \int\_{0}^{\infty} e^{-x \sinh t - \nu t} \, dt. \tag{C6}$$

$$Y\_{\nu}(x) = \frac{1}{\pi} \int\_{0}^{\pi} \sin(x \sin \theta - \nu \theta) \, d\theta - \frac{1}{\pi} \int\_{0}^{\infty} e^{-x \sinh t} \left[ e^{\nu t} + e^{-\nu t} \cos(\pi \nu) \right] dt. \tag{C7}$$

$$I\_{\nu}(x) = \frac{1}{\pi} \int\_0^\pi e^{x\cos\theta} \cos(\nu\theta) \,d\theta - \frac{\sin(\pi\nu)}{\pi} \int\_0^\infty e^{-x\cosh t - \nu t} \,dt. \tag{C8}$$

$$K\_{\nu}(x) = \frac{\Gamma(\nu + 1/2)}{\sqrt{\pi}} \left(\frac{2}{x}\right)^{\nu} \int\_{0}^{\infty} \frac{\cos(xt)}{(1 + t^2)^{\nu + 1/2}}\, dt = \int\_{0}^{\infty} e^{-x\cosh t} \cosh(\nu t) \, dt. \tag{C9}$$
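The Bessel representations above lend themselves to quick numerical spot checks against a library implementation. The sketch below (illustrative order and argument, assuming SciPy) evaluates (C6) and the second, Laplace-type form in (C9).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, kv

nu, x = 0.7, 1.3   # illustrative values (non-integer order, x > 0)

# (C6): Schlaefli-type representation of J_nu; the tail integrand decays
# double-exponentially, so t = 30 is an effectively infinite upper limit
osc, _ = quad(lambda th: np.cos(x * np.sin(th) - nu * th), 0.0, np.pi)
tail, _ = quad(lambda t: np.exp(-x * np.sinh(t) - nu * t), 0.0, 30.0)
j_val = osc / np.pi - np.sin(np.pi * nu) / np.pi * tail
print(j_val, jv(nu, x))   # the two values coincide

# (C9), second form: K_nu as a Laplace-type integral over cosh t
k_val, _ = quad(lambda t: np.exp(-x * np.cosh(t)) * np.cosh(nu * t), 0.0, 30.0)
print(k_val, kv(nu, x))   # the two values coincide
```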

Struve Functions

$$H\_{\nu}(x) = \frac{2(x/2)^{\nu}}{\Gamma(\nu + 1/2)\sqrt{\pi}} \int\_{0}^{1} (1 - t^{2})^{\nu - 1/2} \sin(xt) \, dt, \quad \text{Re}(\nu) > -1/2. \tag{C10}$$

$$L\_{\nu}(x) = \frac{2(x/2)^{\nu}}{\Gamma(\nu + 1/2)\sqrt{\pi}} \int\_{0}^{\pi/2} (\sin t)^{2\nu} \sinh(x\cos t) \, dt, \quad \text{Re}(\nu) > -1/2. \tag{C11}$$
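A similar hedged check applies to the Struve representation (C10); the order and argument below are illustrative choices, and SciPy is assumed.

```python
import math
from scipy.integrate import quad
from scipy.special import struve

nu, x = 0.7, 2.0   # illustrative values with Re(nu) > -1/2

prefac = 2.0 * (x / 2.0) ** nu / (math.gamma(nu + 0.5) * math.sqrt(math.pi))
integral, _ = quad(lambda t: (1.0 - t * t) ** (nu - 0.5) * math.sin(x * t),
                   0.0, 1.0)

print(prefac * integral, struve(nu, x))   # the two values coincide
```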

Lommel Functions

$$S\_{\mu, \nu}(x) = x^{\mu} \int\_0^{\infty} e^{-xt} \,\_2F\_1\left(\frac{1-\mu+\nu}{2}, \frac{1-\mu-\nu}{2}; \frac{1}{2}; -t^2\right) dt, \quad \mathrm{Re}(x) > 0. \tag{C12}$$

#### **References**


## *Review* **Series in Le Roy Type Functions: A Set of Results in the Complex Plane—A Survey**

**Jordanka Paneva-Konovska**

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria; jpanevakonovska@gmail.com

**Abstract:** This study is based on a part of the results obtained in the author's publications. An enumerable family of the Le Roy type functions is considered herein. The asymptotic formula for these special functions in the cases of 'large' values of indices, that has been previously obtained, is provided. Further, series defined by means of the Le Roy type functions are considered. These series are studied in the complex plane. Their domains of convergence are given and their behaviour is investigated 'near' the boundaries of the domains of convergence. The discussed asymptotic formula is used in the proofs of the convergence theorems for the considered series. A theorem of the Cauchy–Hadamard type is provided. Results of Abel, Tauber and Littlewood type, which are analogues to the corresponding theorems for the classical power series, are also proved. At last, various interesting particular cases of the discussed special functions are considered.

**Keywords:** Le Roy functions and series in them; inequalities; asymptotic formula; convergence of power and functional series in complex plane; Cauchy–Hadamard, Abel, Tauber and Littlewood type theorems

**MSC:** 30D20; 33E12; 30A10; 40E10; 30D15; 40A30; 40G10; 40E05

#### **1. Introduction**

In two recent papers, S. Gerhold [1] and, independently, R. Garra and F. Polito [2] introduced the new special function

$$F\_{a,\beta}^{(\gamma)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(ak+\beta)]^{\gamma}}, \quad z \in \mathbb{C},\tag{1}$$

for complex values of the variable *z* and values of parameters *α* > 0, *β* > 0, *γ* > 0. At a later stage, its definition was extended by R. Garrappa, S. Rogosin and F. Mainardi [3] under more general conditions on the parameters. However, to ensure that the coefficients [Γ(*αk* + *β*)]<sup>−</sup>*<sup>γ</sup>* in the expansion (1) exist, the values of the parameters have to be restricted. A natural restriction in this direction would be the following:

$$
\alpha, \ \beta \in \mathbb{C}, \quad \gamma > 0. \tag{2}
$$

As is established in [3], this function turns out to be an entire function of the complex variable *z* for all values of the parameters such that

$$\Re(\alpha) > 0, \ \beta \in \mathbb{C}, \ \gamma > 0. \tag{3}$$

Actually, this function has been recently considered in [1–6] from various points of view. Some of its important properties can be seen therein. For example, different asymptotic formulae can be found in S. Gerhold [1] and R. Garrappa, S. Rogosin, F. Mainardi [3]; for complete monotonicity, see K. Gorska, A. Horzela, R. Garrappa [4] and T. Simon [5]. For its properties in relation to some integro-differential operators involving Hadamard fractional derivatives or hyper-Bessel-type operators, see Garra and Polito [2]; different integral representations can be seen in [3] and Pogány [6].

**Citation:** Paneva-Konovska, J. Series in Le Roy Type Functions: A Set of Results in the Complex Plane—A Survey. *Mathematics* **2021**, *9*, 1361. https://doi.org/10.3390/math9121361

Academic Editor: Francesco Mainardi

Received: 26 May 2021 Accepted: 9 June 2021 Published: 12 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The function (1) is a natural generalization of the so-called Le Roy function

$$F^{(\gamma)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(k+1)]^{\gamma}} = \sum\_{k=0}^{\infty} \frac{z^k}{[k!]^{\gamma}}, \quad z \in \mathbb{C}, \ \gamma \in \mathbb{C}, \tag{4}$$

which was named after the great French mathematician Édouard Louis Emmanuel Julien Le Roy (1870–1954); probably for that reason, the authors of [3] use the name Le Roy type function for the function $F\_{\alpha,\beta}^{(\gamma)}$.
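As a small numerical aside (not part of the survey itself), the Le Roy series (4) can be summed directly; the helper name and the truncation length below are illustrative assumptions.

```python
import math

def le_roy(z, gamma, terms=60):
    """Partial sum of the Le Roy series (4): sum_k z^k / (k!)^gamma."""
    return sum(z ** k / math.factorial(k) ** gamma for k in range(terms))

# gamma = 1 reduces the Le Roy function to the exponential:
print(le_roy(1.5, 1.0))   # close to exp(1.5)

# larger gamma damps the coefficients, so the value shrinks towards 1 + z:
print(le_roy(1.5, 3.0))
```

Raising γ strengthens the factorial damping of the coefficients, which is what makes (1) and (4) entire functions for every γ > 0.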

In keeping with this, and for the sake of brevity, we often use in this paper the name Le Roy type function for the function $F\_{\alpha,\beta}^{(\gamma)}$ defined by (1). Considering the Le Roy type functions (1), we first discuss various earlier results which are needed here. These are results related to inequalities in the complex plane $\mathbb{C}$ and on its compact subsets, and an asymptotic formula for 'large' values of the indices of the functions (1). Further, considering series in this kind of functions, we provide results on their domains of convergence and investigate their behaviour 'near' the boundaries of these domains.

In the series of papers [7–10], as well as in the recent book [11], we studied series in systems of some representatives of the special functions of fractional calculus, which are fractional index analogues of the Bessel functions and also multi-index Mittag–Leffler functions (in the sense of [12–15]), and we have proved various results connected with their convergence in the complex domains.

#### **2. Inequalities and an Asymptotic Formula**

For our purpose we consider the family

$$F\_{\alpha,n}^{(\gamma)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(\alpha k+n)]^{\gamma}}, \quad z \in \mathbb{C}; \ n \in \mathbb{N}, \ \alpha > 0, \ \gamma > 0,\tag{5}$$

where N means the set of positive integers.

We are going to deal with some analytical transformations of the function (5) for each value of the parameter *n*. The following results hold true (for the formulation and proof see Paneva-Konovska [16]).

**Lemma 1.** *Let $z \in \mathbb{C}$, $\alpha > 0$, $\gamma > 0$, $n \in \mathbb{N}$, and let $K \subset \mathbb{C}$ be a nonempty compact set. Then there exists an entire function $\vartheta\_{\alpha,n}^{(\gamma)}$ such that*

$$F\_{\alpha,n}^{(\gamma)}(z) = \frac{1}{\left[\Gamma(n)\right]^{\gamma}} \left(1 + \vartheta\_{\alpha,n}^{(\gamma)}(z)\right). \tag{6}$$

*The entire function $\vartheta\_{\alpha,n}^{(\gamma)}$ satisfies the following inequality*

$$\left|\vartheta\_{\alpha,n}^{(\gamma)}(z)\right| \le \frac{[\Gamma(\alpha+1)]^{\gamma} \, [\Gamma(n)]^{\gamma}}{[\Gamma(\alpha+n)]^{\gamma}} \left(F\_{\alpha,1}^{(\gamma)}(|z|) - 1\right), \quad z \in \mathbb{C}.\tag{7}$$

*Moreover, there exists a positive constant C* = *C*(*K*) *such that*

$$\max\_{z \in K} \left| \vartheta\_{\alpha,n}^{(\gamma)}(z) \right| \le C\, \frac{[\Gamma(n)]^\gamma}{[\Gamma(\alpha+n)]^{\gamma}} \tag{8}$$

*for all the positive integers n.*

**Theorem 1.** *Let $z \in \mathbb{C}$; $n \in \mathbb{N}$, $\alpha > 0$, $\gamma > 0$. Then the Le Roy type functions $F\_{\alpha,n}^{(\gamma)}$ satisfy the following asymptotic formula*

$$F\_{\alpha,n}^{(\gamma)}(z) = \frac{1}{\left[\Gamma(n)\right]^{\gamma}} \left(1 + \vartheta\_{\alpha,n}^{(\gamma)}(z)\right), \quad \vartheta\_{\alpha,n}^{(\gamma)}(z) \to 0 \ \text{ as } \ n \to \infty. \tag{9}$$

*The convergence is uniform in the nonempty compact subsets of the complex plane.*
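The asymptotics of Theorem 1 are easy to visualise numerically. The sketch below (an illustration with arbitrarily chosen α, γ, z, not taken from the paper) evaluates the scaled functions via log-gamma ratios to avoid overflow.

```python
import math

def scaled_le_roy(alpha, n, gamma, z, terms=80):
    """[Gamma(n)]^gamma * F^(gamma)_{alpha,n}(z), via log-gamma ratios for stability."""
    total = 0.0
    for k in range(terms):
        # [Gamma(n) / Gamma(alpha*k + n)]^gamma computed in log space
        log_ratio = gamma * (math.lgamma(n) - math.lgamma(alpha * k + n))
        total += z ** k * math.exp(log_ratio)
    return total

# the scaled functions tend to 1 as the index n grows (Theorem 1):
for n in (5, 20, 80):
    print(n, scaled_le_roy(1.0, n, 2.0, 3.0))
```

The k = 0 term is exactly 1, and every other term is crushed by the ratio of Gamma functions as n grows, which is the content of the asymptotic formula (9).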

The results above allow us to write the next two remarks.

**Remark 1.** *According to the asymptotic Formula (9), there exists a natural number M such that the functions $[\Gamma(n)]^{\gamma} F\_{\alpha,n}^{(\gamma)}(z)$ do not vanish for any n great enough (say n > M).*

**Remark 2.** *Note that each function $F\_{\alpha,n}^{(\gamma)}(z)$ ($n \in \mathbb{N}$), being an entire function not identically zero, has at most a finite number of zeros in the closed and bounded set $|z| \le R$ ([17], p. 305). Moreover, because of Remark 1, at most a finite number of these functions have any zeros.*

#### **3. Series in Le Roy Type Functions**

For the sake of simplicity, we introduce an auxiliary family of functions related to the Le Roy type functions, adding $\widetilde{F}\_{\alpha,0}^{(\gamma)}(z)$ just for completeness, namely:

$$\widetilde{F}\_{\alpha,0}^{(\gamma)}(z) = 1,\ \ \widetilde{F}\_{\alpha,n}^{(\gamma)}(z) = z^n \, [\Gamma(n)]^\gamma \, F\_{\alpha,n}^{(\gamma)}(z),\ \ n \in \mathbb{N};\ \alpha > 0,\ \gamma > 0,\tag{10}$$

and we study the series with complex coefficients $a\_n$ ($n \in \mathbb{N}\_0$, i.e., $n = 0, 1, 2, \dots$) in these functions for $z \in \mathbb{C}$, namely:

$$\sum\_{n=0}^{\infty} \ a\_n \, \widetilde{F}\_{\alpha,n}^{(\gamma)}(z) \,. \tag{11}$$

Our major goal is to study the convergence of the series (11) in the complex plane. We give results corresponding to the classical Cauchy–Hadamard theorem and the Abel lemma for power series, as well as more precise results describing the behaviour of the series 'near' the boundary of the domain of convergence. Such results may be useful for studying the solutions of some fractional-order differential and integral equations expressed in terms of series (or series of integrals) in special functions of the type (10) and their special cases (as, for example, in Kiryakova et al., in [18] for the Mittag–Leffler functions; in [19] for the hyper-Bessel functions; in [14,20] for the multi-index Mittag–Leffler functions). Convergence theorems have also been obtained for series in other special functions, for example, series in Laguerre and Hermite polynomials (the results are obtained in a number of publications and can be seen in Rusev [21]), and, respectively, by the author for series in Bessel type and Mittag–Leffler type functions in the previous papers [7–10] and the book [11].

#### **4. Cauchy–Hadamard Type Theorem and Corollaries**

Let us denote by *D*(0; *R*) the open disk with the radius *R* and centred at the origin, and let the circle *C*(0; *R*) be its boundary, i.e.

$$D(0;R) = \{ z : \ |z| < R \} \quad \text{and} \quad C(0;R) = \{ z : \ |z| = R \} \quad (z \in \mathbb{C}).$$

In the beginning, we give a theorem of the Cauchy–Hadamard type for the series (11).

**Theorem 2** (of Cauchy–Hadamard type)**.** *Let z* ∈ C*, n* ∈ N0*, α* > <sup>0</sup>*, γ* > <sup>0</sup>*. Then the domain of convergence of the series (11) with complex coefficients an is the disk D*(0; *R*) *with a radius of convergence*

$$R = 1\Big/\limsup\_{n \to \infty} |a\_n|^{1/n}. \tag{12}$$

*The cases R* = 0 *and R* = ∞ *are included in the general case.*
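Formula (12) can be illustrated with a short numerical sketch (not from the paper; the coefficient sequence and the helper below are illustrative choices). Note that it is the limit superior, not the limit, that matters.

```python
# Estimate R = 1 / limsup |a_n|^(1/n) from the tail of the coefficient sequence.

def radius(coeffs, tail=50):
    """Crude radius estimate: largest n-th root over the last `tail` terms."""
    start = len(coeffs) - tail
    roots = [abs(a) ** (1.0 / n) for n, a in enumerate(coeffs)
             if n >= start and a != 0]
    return 1.0 / max(roots)

# |a_n|^(1/n) alternates between 4 (even n) and 2 (odd n),
# so the limsup is 4 and the radius of convergence is 1/4:
coeffs = [(3 + (-1) ** n) ** n for n in range(200)]
print(radius(coeffs))   # approximately 0.25
```

Here the subsequence of n-th roots along odd indices tends to 2, but the radius is governed by the larger cluster point 4, exactly as in the Cauchy–Hadamard formula.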

Let us note that the series (11) converges absolutely in the open disk *D*(0; *R*) with the radius *R* given by (12), and it diverges outside it (i.e., for *z* ∈ C with |*z*| > *R*), as in the classical theory of power series. These facts are established in the process of proving this basic theorem. Further, three corollaries are formulated. The first of them is analogous to the classical Abel lemma.

**Corollary 1.** *Let $z \in \mathbb{C}$, $n \in \mathbb{N}\_0$, $\alpha > 0$, $\gamma > 0$, and let the series (11) converge at a point $z\_0 \neq 0$. Then it is absolutely convergent in the disk $D(0; |z\_0|)$.*

Additionally, it turns out that the convergence of the discussed series is uniform inside the disk *D*(0; *R*), i.e., on each closed disk |*z*| ≤ *r* < *R*.

**Corollary 2.** *Let z* ∈ C*, n* ∈ N0*, α* > <sup>0</sup>*, γ* > <sup>0</sup>*. Then the convergence of the series (11) is uniform inside the disk D*(0; *R*)*, with R defined by (12), i.e., on each closed disk* [*D*(0;*r*)] = {*z* ∈ C : |*z*| ≤ *r* < *R*}*.*

The third corollary considers the behaviour of the series (11) outside the disk *D*(0; |*z*0|), described in Corollary 1.

**Corollary 3.** *Let $z \in \mathbb{C}$, $n \in \mathbb{N}\_0$, $\alpha > 0$, $\gamma > 0$, and let the series (11) diverge at a point $z\_0 \neq 0$. Then it is divergent for each z with $|z| > |z\_0|$.*

Theorem 2 and Corollaries 1 and 2 are formulated and proved in [16]. The formulation and proof of Corollary 3 can be found in the author's paper [22].

Thus, the series (11) converges absolutely in the open disk *D*(0; *R*) and diverges in the region {*z* ∈ C : |*z*| > *R*}. Inside the open disk *D*(0; *R*), i.e., on each closed disk |*z*| ≤ *r* which is a subset of *D*(0; *R*), the convergence of the discussed series is uniform. However, the disk of convergence itself is not necessarily a domain of uniform convergence, and divergence at points on its boundary cannot be excluded. More precise results, connected with the behaviour of the series (11) 'near' the boundary *C*(0; *R*), are obtained and discussed in the next sections.

#### **5. Abel Type Theorem**

Let *z*<sup>0</sup> ∈ C, 0 < *R* < ∞, |*z*0| = *R* and *g<sup>ϕ</sup>* be an arbitrary angular region with size 2*ϕ* < *π* and with a vertex at the point *z* = *z*0. Let additionally this region be symmetric with respect to the straight line passing through the points 0 and *z*<sup>0</sup> and *d<sup>ϕ</sup>* be its part, bounded by the arms of the angle *g<sup>ϕ</sup>* and the arc of the circle centred at the point 0 and touching the arms of *gϕ*. The following inequality can be verified for *z* ∈ *d<sup>ϕ</sup>* [11] (p. 21):

$$|z - z\_0| \cos \varphi < 2(|z\_0| - |z|). \tag{13}$$

The next theorem refers to the uniform convergence of the series (11) in the set *dϕ* and the existence of the limit of its sum at the point *z*0, provided *z* ∈ *D*(0; *R*) ∩ *gϕ*.

**Theorem 3** (of Abel type)**.** *Let $\{a\_n\}\_{n=0}^{\infty}$ be a sequence of complex numbers, R be the real number defined by (12), and $0 < R < \infty$. If $f(z; \alpha, \gamma)$ is the sum of the series (11) in the open disk D*(0; *R*)*, i.e.,*

$$f(z; \alpha, \gamma) = \sum\_{n=0}^{\infty} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z), \quad z \in D(0; R),$$

*and this series converges at the point z*<sup>0</sup> *of the boundary C*(0; *R*)*, then:*

(*i*) *The following relation holds*

$$\lim\_{z \to z\_0} f(z; \alpha, \gamma) = \sum\_{n=0}^{\infty} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0),\tag{14}$$

*provided z* ∈ *D*(0; *R*) ∩ *gϕ.*

(*ii*) *The series (11) is uniformly convergent in the region dϕ.*

**Proof.** The proofs of the two assertions (*i*) and (*ii*) are given separately. Denoting the partial sums of the series (11) at the points *z* and *z*<sub>0</sub> by


$$S\_k(z) = \sum\_{n=0}^k a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z), \quad S\_k(z\_0) = \sum\_{n=0}^k a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0), \quad \lim\_{k \to \infty} S\_k(z\_0) = s,\tag{15}$$

we obtain

$$S\_{k+p}(z) - S\_k(z) = \sum\_{n=0}^{k+p} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z) - \sum\_{n=0}^k a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z) = \sum\_{n=k+1}^{k+p} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z).$$

According to Remark 2, there exists a natural number $N\_0$ such that $\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0) \neq 0$ when $n > N\_0$. Let $k > N\_0$ and $p > 0$. Then, using the denotation

$$\gamma\_n(z; z\_0) = \widetilde{F}\_{\alpha,n}^{(\gamma)}(z) \Big/ \widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0),$$

the difference *Sk*<sup>+</sup>*p*(*z*) − *Sk*(*z*) can be written as follows:

$$S\_{k+p}(z) - S\_k(z) = \sum\_{n=k+1}^{k+p} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0) \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)} = \sum\_{n=k+1}^{k+p} a\_n \widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)\, \gamma\_n(z; z\_0).$$

Now, by the Abel transformation (see in [17]),

$$\sum\_{n=k+1}^{k+p} (\beta\_n - \beta\_{n-1})\gamma\_n = \beta\_{k+p}\gamma\_{k+p} - \beta\_k\gamma\_{k+1} - \sum\_{n=k+1}^{k+p-1} \beta\_n(\gamma\_{n+1} - \gamma\_n),$$

and additionally denoting *β<sup>n</sup>* = *Sn*(*z*0) − *s*, we obtain consecutively:

$$S\_{k+p}(z) - S\_k(z) = \sum\_{n=k+1}^{k+p} (\beta\_n - \beta\_{n-1})\gamma\_n(z; z\_0)$$

$$= \beta\_{k+p}\gamma\_{k+p}(z; z\_0) - \beta\_k\gamma\_{k+1}(z; z\_0) - \sum\_{n=k+1}^{k+p-1} \beta\_n(\gamma\_{n+1}(z; z\_0) - \gamma\_n(z; z\_0)),$$

and

$$|S\_{k+p}(z) - S\_k(z)| \le |S\_{k+p}(z\_0) - s||\gamma\_{k+p}(z; z\_0)| + |S\_k(z\_0) - s||\gamma\_{k+1}(z; z\_0)|$$

$$+ \sum\_{n=k+1}^{k+p-1} |S\_n(z\_0) - s| \times \left| \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)} - \frac{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z\_0)} \right|. \tag{16}$$

Then, using the inequality (16), we estimate the modulus of the difference $S\_{k+p}(z) - S\_k(z)$. Due to (8) and (9), along with the quotient property of the Γ-functions (see e.g., [11] (p. 101)) and the equalities $\lim\_{n\to\infty} n^{-\alpha\gamma} = 0$ and $\lim\_{n\to\infty}(1 + \vartheta\_n(z\_0))^{-1} = 1$, we conclude that there exist numbers $A > 0$ and $N\_1 > N\_0$ such that $|1 + \vartheta\_n(z)| \le A/2$ for all the positive integers $n$, and $|1 + \vartheta\_n(z\_0)|^{-1} < 2$ for $n > N\_1$, whence

$$|\gamma\_{\mathfrak{n}}(z; z\_0)| \le A \quad \text{for } n > N\_1. \tag{17}$$

Further, denoting

$$f\_n(z; z\_0) = \gamma\_n(z; z\_0) - \gamma\_{n+1}(z; z\_0),$$

which is the same as

$$f\_n(z; z\_0) = \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)} - \frac{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z\_0)},$$

and observing that $f\_n(z\_0; z\_0) = 0$, we apply the Schwarz lemma to the function $f\_n(z; z\_0)$. So, we obtain that there exists a positive constant *C* such that:

$$|f\_n(z; z\_0)| = \left| \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)} - \frac{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n+1}^{(\gamma)}(z\_0)} \right| \le C|z - z\_0|\,|z/z\_0|^n,$$

whence, and according to (13) as well, we have:

$$\sum\_{n=k+1}^{k+p+1} |f\_n(z; z\_0)| \le \sum\_{n=0}^{\infty} C |z - z\_0|\, |z/z\_0|^n = C |z\_0| \times \frac{|z - z\_0|}{|z\_0| - |z|} < \frac{2C|z\_0|}{\cos \varphi}.\tag{18}$$

Let *ε* be an arbitrary positive number. In view of the third relation in (15), we deduce that there exists a number $N\_2 > N\_0$ so large that

$$|S\_n(z\_0) - s| < \min\left(\frac{\varepsilon}{3A}, \frac{\varepsilon \cos \varphi}{6C|z\_0|}\right) \quad \text{for } n > N\_2. \tag{19}$$

Now, let us take *N* = *N*(*ε*) = max(*N*1, *N*2) and *k* > *N*. Therefore the inequalities (16)–(19) give

$$|S\_{k+p}(z) - S\_k(z)| < \frac{2\varepsilon}{3} + \frac{\varepsilon \cos \varphi}{6C|z\_0|} \sum\_{n=k+1}^{k+p+1} |f\_n(z; z\_0)| < \frac{2\varepsilon}{3} + \frac{\varepsilon \cos \varphi}{6C|z\_0|} \cdot \frac{2C|z\_0|}{\cos \varphi} = \varepsilon,$$

which completes the proof of (*ii*).

Thus, the theorem is completely proved.

#### **6. Tauber Type Theorem**

It was established in Section 5 that the convergence of the considered series in the Le Roy type functions at a point *z*<sub>0</sub> on the boundary of *D*(0; *R*) implies the existence of the limit of its sum as *z* tends to *z*<sub>0</sub>, provided *z* ∈ *D*(0; *R*) ∩ *g<sub>ϕ</sub>*. It turns out that under additional conditions on the coefficients of the considered series, the converse proposition is also valid.

Now, let *z*<sub>0</sub> ∈ C, |*z*<sub>0</sub>| = *R*, 0 < *R* < ∞, and let *F*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*<sub>0</sub>) ≠ 0 for *n* = 0, 1, 2, ... . Note that the last condition is fulfilled due to Remark 2, since each function *F*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*) (*n* ∈ N), being an entire function not identically zero, has at most a finite number of zeros in the closed and bounded set |*z*| ≤ *R*, and moreover, no more than a finite number of these functions have any zeros there at all.

For the sake of brevity, denote

$$F\_{n,\alpha,\gamma}^\*(z; z\_0) = \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)}.$$

Let the series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>F*<sup>∗</sup><sub>*n*,*α*,*γ*</sub>(*z*; *z*<sub>0</sub>), with *a<sub>n</sub>* ∈ C, be convergent for |*z*| < *R*, and let

$$F(z) = \sum\_{n=0}^{\infty} a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0), \quad |z| < R. \tag{20}$$

Then the following theorem can be formulated.

**Theorem 4** (of Tauber type)**.** *If* {*a<sub>n</sub>*}<sub>*n*=0</sub><sup>∞</sup> *is a sequence of complex numbers with*

$$\lim\_{n \to \infty} n a\_n = 0, \tag{21}$$

*and there exists*

$$\lim\_{z \to z\_0} F(z) = S \quad (|z| < R, z \to z\_0 \text{ radially}), \tag{22}$$

$$\text{then the numerical series } \sum\_{n=0}^{\infty} a\_n \text{ is convergent and } \sum\_{n=0}^{\infty} a\_n = S.$$

**Proof.** Let *z* belong to the segment [0, *z*0]. By using the asymptotic Formula (9) for the Le Roy type functions, we obtain:

$$a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0) = a\_n \left(\frac{z}{z\_0}\right)^n \frac{1 + \vartheta\_{\alpha,n}^{(\gamma)}(z)}{1 + \vartheta\_{\alpha,n}^{(\gamma)}(z\_0)} = a\_n \left(\frac{z}{z\_0}\right)^n \left(1 + \vartheta\_{\alpha,n}^{(\gamma)}(z; z\_0)\right), \tag{23}$$

where

$$\vartheta\_{\alpha,n}^{(\gamma)}(z; z\_0) = \frac{\vartheta\_{\alpha,n}^{(\gamma)}(z) - \vartheta\_{\alpha,n}^{(\gamma)}(z\_0)}{1 + \vartheta\_{\alpha,n}^{(\gamma)}(z\_0)}.$$

Then, due to (8) and the Γ-function quotient property, *ϑ*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*; *z*<sub>0</sub>) satisfies the following relation

$$
\vartheta\_{\alpha,n}^{(\gamma)}(z; z\_0) = O\left(\frac{1}{n^{\alpha\gamma}}\right). \tag{24}
$$

Writing the series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>F*<sup>∗</sup><sub>*n*,*α*,*γ*</sub>(*z*; *z*<sub>0</sub>) in the form

$$\sum\_{n=0}^{\infty} a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0) = \sum\_{n=0}^{\infty} a\_n \left(\frac{z}{z\_0}\right)^n \frac{1 + \vartheta\_{\alpha,n}^{(\gamma)}(z)}{1 + \vartheta\_{\alpha,n}^{(\gamma)}(z\_0)} \tag{25}$$

$$= \sum\_{n=0}^{\infty} a\_n \left(\frac{z}{z\_0}\right)^n \left(1 + \vartheta\_{\alpha,n}^{(\gamma)}(z; z\_0)\right),$$

$$\text{and denoting } w\_n(z) = a\_n \left(\frac{z}{z\_0}\right)^n \vartheta\_{\alpha,n}^{(\gamma)}(z; z\_0), \text{ we consider the series } \sum\_{n=0}^{\infty} w\_n(z).$$

According to condition (21), the numerical sequence {*na<sub>n</sub>*}<sub>*n*=0</sub><sup>∞</sup>, being convergent, is bounded. Then, since |*w<sub>n</sub>*(*z*)| ≤ |*a<sub>n</sub>*| |*ϑ*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*; *z*<sub>0</sub>)| and in view of (8), there exists a constant *C* such that |*w<sub>n</sub>*(*z*)| ≤ *C*/*n*<sup>1+*αγ*</sup> for all positive integers *n*. Since ∑<sub>*n*=1</sub><sup>∞</sup> 1/*n*<sup>1+*αγ*</sup> converges, the series ∑<sub>*n*=0</sub><sup>∞</sup> *w<sub>n</sub>*(*z*) also converges, absolutely and uniformly on the segment [0, *z*<sub>0</sub>]. Therefore, changing the order of the limit and summation, and in view of the equality lim<sub>*z*→*z*<sub>0</sub></sub> *w<sub>n</sub>*(*z*) = 0 (which holds since *ϑ*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*<sub>0</sub>; *z*<sub>0</sub>) = 0), we deduce that

$$\lim\_{z \to z\_0} \sum\_{n=0}^{\infty} w\_n(z) = \sum\_{n=0}^{\infty} \lim\_{z \to z\_0} w\_n(z) = 0. \tag{26}$$

Then, bearing in mind that (20) can be written in the form

$$F(z) = \sum\_{n=0}^{\infty} a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0) = \sum\_{n=0}^{\infty} a\_n \left(\frac{z}{z\_0}\right)^n + \sum\_{n=0}^{\infty} w\_n(z),$$

along with the assumption (22), we conclude that the limit

$$\lim\_{z \to z\_0} \sum\_{n=0}^{\infty} a\_n \left(\frac{z}{z\_0}\right)^n \tag{27}$$

also exists and, moreover, in view of (26),

$$\lim\_{z \to z\_0} F(z) = \lim\_{z \to z\_0} \sum\_{n=0}^{\infty} a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0) = S = \lim\_{z \to z\_0} \sum\_{n=0}^{\infty} a\_n \left(\frac{z}{z\_0}\right)^n. \tag{28}$$

Now, from (28) and the existence of the limit (27), by the classical Tauber theorem for power series, it follows that the series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>* converges and ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>* = *S*.

The conclusion of the above theorem is still valid even if the condition imposed on the coefficients *an* is weakened. Namely, the following theorem holds true.

**Theorem 5** (of Littlewood type)**.** *If* {*a<sub>n</sub>*}<sub>*n*=0</sub><sup>∞</sup> *is a sequence of complex numbers with*

$$a\_n = O(1/n),\tag{29}$$

*F*(*z*) *is the function defined by* (20)*, and if there exists*

$$\lim\_{z \to z\_0} F(z) = S \quad (|z| < R, z \to z\_0 \text{ radially}), \tag{30}$$

$$\text{then the numerical series } \sum\_{n=0}^{\infty} a\_n \text{ is convergent and } \sum\_{n=0}^{\infty} a\_n = S.$$

**Proof.** Let *z* belong to the segment [0, *z*<sub>0</sub>]. The proof goes in the same way as that of Theorem 4, using the same notations. The only difference lies in the estimate of |*w<sub>n</sub>*(*z*)|. More precisely, according to relation (24) and condition (29), there exists a constant *C* such that |*w<sub>n</sub>*(*z*)| ≤ *C*/*n*<sup>1+*αγ*</sup> for all positive integers *n*. Finally, the proof ends by applying, in the last step, Littlewood's classical theorem instead of Tauber's theorem. The details are omitted.

#### **7. (*F<sub>α,γ</sub>*, *z*<sub>0</sub>)—Summation and (*J*, *z*<sub>0</sub>)—Summation**

The theorems in the previous section can be formulated in alternative forms. For this purpose, two additional definitions are firstly given.

Let us consider the numerical series

$$\sum\_{n=0}^{\infty} a\_n, \quad a\_n \in \mathbb{C}, \quad n = 0, 1, 2, \dots \tag{31}$$

To define its Abel summability ([23], p. 20), we also consider the power series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>z<sup>n</sup>*.

**Definition 1.** *The series* (31) *is called A—summable if the series* ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>z<sup>n</sup>* *converges in the open unit disk D*(0; 1) *and, moreover, there exists*

$$\lim\_{z \to 1-0} \sum\_{n=0}^{\infty} a\_n z^n = S.$$

*The complex number S is called the A-sum of the series (31), and the usual notation for that is*

$$\sum\_{n=0}^{\infty} a\_n = S \quad (A).$$

**Remark 3.** *The A-summation is regular. It means that if the series (31) converges, then it is A-summable, and its A-sum is equal to its usual sum.*

**Remark 4.** *It is well known that in general, the A-summability of the series (31) does not imply its convergence. However, with additional conditions imposed on the growth of the general term of the series (31), the convergence can be provided.*
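To illustrate Remark 4 numerically, the following sketch (ours, not part of the original exposition) contrasts the divergent but A-summable Grandi series with the alternating harmonic series, whose coefficients satisfy the Littlewood-type condition *a<sub>n</sub>* = *O*(1/*n*):

```python
import math

def abel_partial(a, x, terms=200_000):
    """Truncated value of the power series sum a(n) * x^n, for |x| < 1."""
    return sum(a(n) * x**n for n in range(terms))

# Grandi's series 1 - 1 + 1 - ... diverges, yet its A-sum is 1/2:
grandi_at_x = abel_partial(lambda n: (-1)**n, 0.9999)

# Under the condition a_n = O(1/n), A-summability implies convergence:
# the alternating harmonic series has A-sum ln 2, equal to its ordinary sum.
alt_harm_at_x = abel_partial(lambda n: (-1)**n / (n + 1), 0.9999)

print(grandi_at_x, alt_harm_at_x, math.log(2))
```

The first value approaches 1/2 only in the Abel sense, while the second agrees with the ordinary sum ln 2.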

Let *z*<sub>0</sub> ∈ C, |*z*<sub>0</sub>| = *R*, 0 < *R* < ∞ and *F*<sup>(*γ*)</sup><sub>*α*,*n*</sub>(*z*<sub>0</sub>) ≠ 0 (note that the last condition is again fulfilled due to Remark 2). For the sake of convenience, denote

$$F\_{n,\alpha,\gamma}^\*(z; z\_0) = \frac{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z)}{\widetilde{F}\_{\alpha,n}^{(\gamma)}(z\_0)}. \tag{32}$$

Further, by analogy with the A-summability of the series (31), another definition is introduced, in which the power series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub>z<sup>n</sup>* is replaced by the series in the Le Roy type functions (32) with the same coefficients.

**Definition 2.** *The numerical series (31) is said to be* (*Fα*,*γ*, *z*0)*—summable if the series*

$$\sum\_{n=0}^{\infty} a\_n F\_{n, \alpha, \gamma}^\*(z; z\_0), \tag{33}$$

*converges in the open disk D*(0; *R*) *and, moreover, there exists the limit*

$$\lim\_{z \to z\_0} \sum\_{n=0}^{\infty} a\_n F\_{n,\alpha,\gamma}^\*(z; z\_0), \tag{34}$$

*provided z remains on the segment* [0, *z*0) *(i.e., z radially tends to z*0*).*

**Remark 5.** *The* (*Fα*,*γ*, *z*0)*—summation is regular, and this property is merely a particular case of Theorem 3.*

Taking into account the latest definitions and remarks, Theorems 4 and 5 can be formulated in the following alternative ways.

**Theorem 6** (of Tauber type)**.** *If the numerical series (31) is* (*Fα*,*γ*, *z*0)*—summable and*

$$\lim\_{n \to \infty} n a\_n = 0, \tag{35}$$

*then it is convergent.*

**Theorem 7** (of Littlewood type)**.** *If the numerical series (31) is* (*Fα*,*γ*, *z*0)*—summable and*

$$a\_n = O(1/n),\tag{36}$$

*then it is convergent.*

**Remark 6.** *We observe that all the functions of the family*

$$(F\_{\alpha,\gamma}; z\_0) = \{F\_{n,\alpha,\gamma}^\*(z; z\_0), \quad n = 0, 1, \dots\} \tag{37}$$

*are entire functions satisfying the condition F*<sup>∗</sup><sub>*n*,*α*,*γ*</sub>(*z*<sub>0</sub>; *z*<sub>0</sub>) = 1*.*

For convenience, in order to make Definition 2 more universal and usable for various considerations, we paraphrase it in the way given in [11] (p. 35). For this purpose, we first introduce one more notation.

Let *z*<sub>0</sub> ∈ C, *z*<sub>0</sub> ≠ 0, |*z*<sub>0</sub>| = *R*, 0 < *R* < ∞, and let (*J*; *z*<sub>0</sub>) be the following family of functions

$$(J; z\_0) := \{j\_n \colon j\_n \text{ an entire function}, \; j\_n(z\_0) = 1\}\_{n \in \mathbb{N}\_0}. \tag{38}$$

Now, considering the series given below

$$\sum\_{n=0}^{\infty} a\_n j\_n(z), \quad j\_n \in (J; z\_0), \tag{39}$$

Definition 2 can be expanded as follows.

**Definition 3.** *The numerical series (31) is said to be* (*J*, *z*0)*-summable, if the series (39) converges in the disk D*(0; *R*)*, and moreover, there exists the limit*

$$\lim\_{z \to z\_0} \sum\_{n=0}^{\infty} a\_n j\_n(z),\tag{40}$$

*provided z remains on the segment* [0, *z*0).

**Remark 7.** *Let us note that, when using this definition, one must necessarily take into account the regularity of the summation.*

To end this section, we make one more remark.

**Remark 8.** *Taking j<sub>n</sub>*(*z*) = *F*<sup>∗</sup><sub>*n*,*α*,*γ*</sub>(*z*; *z*<sub>0</sub>)*, the family (38) of entire functions reduces to the family (37). Therefore, in this case the* (*J*, *z*<sub>0</sub>)*—summation and the* (*F<sub>α,γ</sub>*, *z*<sub>0</sub>)*—summation are the same. Thus Theorems 6 and 7 can be written in equivalent ways, using the notion of* (*J*, *z*<sub>0</sub>)*—summation (with j<sub>n</sub>*(*z*) = *F*<sup>∗</sup><sub>*n*,*α*,*γ*</sub>(*z*; *z*<sub>0</sub>)*) instead of* (*F<sub>α,γ</sub>*, *z*<sub>0</sub>)*—summation. That means that the theorems of Tauber and Littlewood type are statements relating the* (*J*, *z*<sub>0</sub>)*—summability and the usual convergence of a numerical series by means of some assumptions imposed on the general term of the numerical series under consideration.*

#### **8. Special Cases**

In this section we consider some interesting special cases of the Le Roy type function *F*<sup>(*γ*)</sup><sub>*α*,*β*</sub>, given by (1), taking the parameters

$$
\alpha, \beta \in \mathbb{C}, \; \Re(\alpha) > 0 \text{ and } \gamma > 0,
$$

when (1) is an entire function.

Case 1. If *γ* is an arbitrary positive number, *α* = 1 and *β* = 1, then the function (1) coincides with the Le Roy function (confer with (4)), i.e.,

$$F^{(\gamma)}(z) = F\_{1,1}^{(\gamma)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\left[\Gamma(k+1)\right]^{\gamma}}, \quad z \in \mathbb{C}.\tag{41}$$

We have to note that Le Roy himself used this function in [24] when studying the asymptotics of the analytic continuation of the sum of a power series. This origin of (41) is somewhat similar to Mittag–Leffler's idea of introducing the function *E<sub>α</sub>*(*z*) for the aims of analytic continuation (it has to be noted that Mittag–Leffler and Le Roy were working on this idea in competition). The Le Roy function is involved in the solution of various problems; in particular, it has been recently used in the construction of a Conway–Maxwell–Poisson distribution [25], which is important due to its ability to model count data with different degrees of over- and under-dispersion [26,27].
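As a small numerical sanity check (our addition, with an arbitrary truncation level), the series (41) can be evaluated directly; for *γ* = 1 it must reproduce the exponential series:

```python
import math

def le_roy(z, gamma, terms=60):
    """Truncated series for the Le Roy function F^(gamma)(z) = sum_k z^k / (k!)^gamma."""
    return sum(z**k / math.factorial(k)**gamma for k in range(terms))

# gamma = 1 reduces (41) to the exponential series:
print(le_roy(1.3, 1.0), math.exp(1.3))
# gamma = 1/2 gives Kolokoltsov's function R(z) of Case 3:
print(le_roy(1.0, 0.5))
```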

Case 2. If *γ* = 1, then the function (1) gives the Mittag–Leffler function *Eα*,*β*, namely

$$E\_{a, \beta}(z) = F\_{a, \beta}^{(1)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(ak + \beta)}, \quad z \in \mathbb{C}. \tag{42}$$

In addition, when *β* = 1, the function (1) reduces to *Eα*, and to the exponential function, if *α* = *β* = 1, i.e.,

$$E\_a(z) = F\_{a,1}^{(1)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(ak+1)}, \text{ exp } z = F\_{1,1}^{(1)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{k!}; \quad z \in \mathbb{C}.\tag{43}$$

The functions (42) and (43) are named after the great Swedish mathematician Gösta Magnus Mittag–Leffler (1846–1927), who defined the 1-parametric function *E<sub>α</sub>*(*z*) by a power series (given by (43)) and studied its properties in 1902–1905 (a detailed description can be found in [28]). Actually, Mittag–Leffler introduced the function *E<sub>α</sub>*(*z*) for the purposes of his method for the summation of divergent series. Later, the function (43) was recognized as the 'Queen function of fractional calculus' [29–31], see also [11], for its basic role in the analytic solutions of fractional order integral and differential equations and systems. In recent decades, successful applications of the Mittag–Leffler function and its generalizations in problems of physics, biology, chemistry, engineering and other applied sciences have made it better known among scientists. A considerable literature is devoted to the investigation of the analytical properties of these functions; among the references of [11,28,32] are quoted several authors who, after Mittag–Leffler, investigated such functions from pure mathematical, applied and numerical points of view.
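A hedged computational sketch of (42) and (43) (our addition): the truncated series is cross-checked against the classical closed form *E*<sub>1/2</sub>(*z*) = e^(*z*²) erfc(−*z*), a standard identity not stated in the text:

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=120):
    """Truncated series for E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha k + beta)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# alpha = beta = 1 recovers exp, per (43):
print(mittag_leffler(1.3, 1.0), math.exp(1.3))

# classical closed form E_{1/2}(z) = exp(z^2) * erfc(-z) for real z:
z = 0.7
print(mittag_leffler(z, 0.5), math.exp(z * z) * math.erfc(-z))
```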

Case 3. If *γ* = 1/2 and *α* = *β* = 1, then the function (1) becomes the function *R*(*z*), given by the series (see Kolokoltsov [33] Formula (50))

$$R(z) = F\_{1,1}^{(1/2)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\sqrt{k!}}, \quad z \in \mathbb{C}.\tag{44}$$

The function (44) is used by Kolokoltsov in [33] to estimate the solution of initial stochastic differential equations. As he comments in his paper, the function *R*(*z*) plays the same role for stochastic equations as the exponential and the Mittag–Leffler functions for deterministic equations.

Case 4. If the parameter *γ* = 2 and *α* = *β* = 1, then the function (1) can be presented as Bessel function of the first kind and related to it, and as 2-parametric Bessel–Maitland function, as well. Namely, the function (1) can be written in the following alternative forms:

$$F\_{1,1}^{(2)}(z) = J\_0(2i\sqrt{z}) = I\_0(2\sqrt{z}) = C\_0(-z) = J\_0^1(-z) = \sum\_{k=0}^{\infty} \frac{z^k}{(k!)^2}, \quad z \in \mathbb{C}. \tag{45}$$

In this relation, *J*<sub>0</sub> and *I*<sub>0</sub> are, respectively, the classical Bessel function of the first kind *J<sub>ν</sub>* and its modified version *I<sub>ν</sub>* with index *ν* = 0, *C*<sub>0</sub> is the Bessel–Clifford function *C<sub>ν</sub>* with index *ν* = 0, and *J*<sup>1</sup><sub>0</sub> is its 2-parametric Bessel–Maitland generalization *J*<sup>*μ*</sup><sub>*ν*</sub> (named after Sir Edward Maitland Wright and also known as the Bessel–Wright function) with indices *ν* = 0 and *μ* = 1.
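The equality *F*<sup>(2)</sup><sub>1,1</sub>(*z*) = *I*<sub>0</sub>(2√*z*) from (45) can be confirmed numerically; in the sketch below (ours), *I*<sub>0</sub> is computed from its standard integral representation *I*<sub>0</sub>(*x*) = (1/*π*) ∫<sub>0</sub><sup>*π*</sup> e^(*x* cos *t*) d*t*, so that no external library is needed:

```python
import math

def F2(z, terms=60):
    """Truncated series for F^{(2)}_{1,1}(z) = sum_k z^k / (k!)^2, as in (45)."""
    return sum(z**k / math.factorial(k)**2 for k in range(terms))

def bessel_I0(x, n=2000):
    """I_0(x) from its integral representation, by the composite trapezoid rule."""
    h = math.pi / n
    s = 0.5 * (math.exp(x) + math.exp(-x))      # endpoints: cos 0 = 1, cos pi = -1
    s += sum(math.exp(x * math.cos(i * h)) for i in range(1, n))
    return s * h / math.pi

z = 2.25
print(F2(z), bessel_I0(2 * math.sqrt(z)))   # both approximate I_0(3)
```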

Case 5. If the number *m* is a positive integer, *γ* = *m* + 1, *β* = *λ* + 1 (*λ* ≠ 0), and *α* = 1, then the function (1) can be expressed through the 3-index generalization, as well as the 4-index generalization, of the Bessel function of the first kind. More especially, if *m* = 1, then the special function (1) turns, up to a power function, into the generalized Bessel–Maitland (or Wright's) function *J*<sup>*μ*</sup><sub>*ν*,*λ*</sub> (with *ν* = 0 and *μ* = 1) of the Bessel function *J<sub>ν</sub>*(*z*), introduced by Pathak (for details see [14]):

$$J\_{\nu,\lambda}^{\mu}(z) = (z/2)^{\nu+2\lambda} \, \widetilde{J}\_{\nu,\lambda}^{\mu}(z) = (z/2)^{\nu+2\lambda} \sum\_{k=0}^{\infty} \frac{(-1)^k (z/2)^{2k}}{\Gamma(k+\lambda+1)\Gamma(\mu k + \nu + \lambda + 1)}. \tag{46}$$

More precisely,

$$\widetilde{J}\_{0,\lambda}^{1}(2i\sqrt{z}) = F\_{1,\lambda+1}^{(2)}(z) = \sum\_{k=0}^{\infty} \frac{z^{k}}{\left[\Gamma(k+\lambda+1)\right]^{2}}. \tag{47}$$

The special case (for *m* ≥ 2) is expressed by the generalized Lommel–Wright function *J μ*, *m <sup>ν</sup>*, *<sup>λ</sup>* with 4 indices (with *ν* = 0 and *μ* = 1), introduced by de Oteiza, Kalla and Conde (for details see [14]):

$$J\_{\nu,\lambda}^{\mu,m}(z) = (z/2)^{\nu+2\lambda} \, \widetilde{J}\_{\nu,\lambda}^{\mu,m}(z) = (z/2)^{\nu+2\lambda} \sum\_{k=0}^{\infty} \frac{(-1)^k (z/2)^{2k}}{[\Gamma(k+\lambda+1)]^m \, \Gamma(\mu k + \nu + \lambda + 1)}. \tag{48}$$

Especially,

$$\widetilde{J}\_{0,\lambda}^{1,m}(2i\sqrt{z}) = F\_{1,\lambda+1}^{(m+1)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(k+\lambda+1)]^{m+1}}. \tag{49}$$

Just to mention that *J*<sup>*μ*,1</sup><sub>*ν*,*λ*</sub> = *J*<sup>*μ*</sup><sub>*ν*,*λ*</sub>, and likewise *J̃*<sup>*μ*,1</sup><sub>*ν*,*λ*</sub> = *J̃*<sup>*μ*</sup><sub>*ν*,*λ*</sub>.

Case 6. If the number *m* is a positive integer, *m* ≥ 2, then the function (1) can be presented via the multi-index extensions of (42) (with 2*m* and 3*m* parameters, *m* = 1, 2, ..., [11,13,34–36]), i.e., the so-called multi-index Mittag–Leffler functions. The first one was introduced by Yakubovich and Luchko [37] and studied in detail by Kiryakova [12,34]. It is defined by the formula

$$E\_{(\alpha\_i),(\beta\_i)}(z) = E\_{(\alpha\_i),(\beta\_i)}^{m}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha\_1 k + \beta\_1) \dots \Gamma(\alpha\_m k + \beta\_m)}, \tag{50}$$

for *z* ∈ C and *m* > 1. The parameters *α<sub>i</sub>*, *β<sub>i</sub>* are all complex for *i* = 1, 2, ..., *m*, with ℜ(*α<sub>i</sub>*) > 0. The second one has *m* additional complex parameters *γ<sub>i</sub>*. It was introduced and studied in detail by Paneva-Konovska (for its properties see e.g., [11]). It is defined by the formula

$$E\_{(a\_i),(\beta\_i)}^{(\gamma\_i),m}(z) = \sum\_{k=0}^{\infty} \frac{(\gamma\_1)\_k \dots (\gamma\_m)\_k}{\Gamma(\alpha\_1 k + \beta\_1) \dots \Gamma(\alpha\_m k + \beta\_m)} \frac{z^k}{(k!)^m} \tag{51}$$

where (*γ*)<sub>*k*</sub> is the Pochhammer symbol: (*γ*)<sub>*k*</sub> = *γ*(*γ* + 1) ... (*γ* + *k* − 1), *k* = 1, 2, ..., (*γ*)<sub>0</sub> = 1. More precisely, in this case the function (1) turns into the above multi-index Mittag–Leffler functions, with indices *α<sub>i</sub>* = *α*, *β<sub>i</sub>* = *β* and *γ<sub>i</sub>* = 1 (*i* = 1, 2, ..., *m*), namely

$$E\_{(\alpha),(\beta)}(z) = E\_{(\alpha),(\beta)}^{(1),m}(z) = F\_{\alpha,\beta}^{(m)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(\alpha k+\beta)]^m}. \tag{52}$$
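Since (1)<sub>*k*</sub> = *k*!, the 3*m*-parameter function (51) with all *γ<sub>i</sub>* = 1 reduces termwise to (50). A small check (ours, with an illustrative truncation level):

```python
import math

def pochhammer(g, k):
    """(g)_k = g (g+1) ... (g+k-1), with (g)_0 = 1."""
    r = 1.0
    for i in range(k):
        r *= g + i
    return r

def E_3m(z, alphas, betas, gammas, terms=60):
    """Truncated 3m-parameter multi-index Mittag-Leffler series, as in (51)."""
    m = len(alphas)
    s = 0.0
    for k in range(terms):
        num = 1.0
        for g in gammas:
            num *= pochhammer(g, k)
        den = 1.0
        for a, b in zip(alphas, betas):
            den *= math.gamma(a * k + b)
        s += num / den * z**k / math.factorial(k)**m
    return s

def E_2m(z, alphas, betas, terms=60):
    """Truncated 2m-parameter multi-index Mittag-Leffler series, as in (50)."""
    s = 0.0
    for k in range(terms):
        den = 1.0
        for a, b in zip(alphas, betas):
            den *= math.gamma(a * k + b)
        s += z**k / den
    return s

# With gamma_i = 1 the two definitions coincide, as in (52):
print(E_3m(0.8, [1, 1], [1, 1], [1, 1]), E_2m(0.8, [1, 1], [1, 1]))
```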

Case 7. If the number *m* is a positive integer *m* ≥ 2, *α* = 1 and *β* = 1 then the function (1) is the hyper-Bessel function

$$J\_{\nu\_1,\dots,\nu\_{m-1}}^{(m-1)}(z) = \left(\frac{z}{m}\right)^{\sum\_{i=1}^{m-1}\nu\_i} \sum\_{k=0}^{\infty} \frac{(-1)^k \left(\frac{z}{m}\right)^{km}}{\Gamma(k+\nu\_1+1)\dots\Gamma(k+\nu\_{m-1}+1)} \frac{1}{k!},$$

introduced by Delerue in 1953 [38]. It is a generalization of the Bessel function of the first kind *J<sub>ν</sub>* with a vector index *ν* = (*ν*<sub>1</sub>, *ν*<sub>2</sub>, ..., *ν*<sub>*m*−1</sub>). The hyper-Bessel function of Delerue is closely related to the hyper-Bessel differential operators of arbitrary order *m* > 1,

introduced by Dimovski [39]. The function (1) is represented as the hyper-Bessel function with parameters *ν<sup>i</sup>* = 0 (*i* = 1, 2, . . . *m* − 1), i.e.,

$$J\_{0,\ldots,0}^{(m-1)}\left(m(-z)^{1/m}\right) = F\_{1,1}^{(m)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{\left[\Gamma(k+1)\right]^m}. \tag{53}$$

At last, let us note that if *γ* = *m* is a positive integer, then the Le Roy type function *F*<sup>(*m*)</sup><sub>*α*,*β*</sub> is a Wright generalized hypergeometric function with 2 × *m* indices *α<sub>i</sub>* = *α*, *β<sub>i</sub>* = *β* (*i* = 1, ..., *m*), namely

$$F\_{\alpha,\beta}^{(m)}(z) = \sum\_{k=0}^{\infty} \frac{z^k}{[\Gamma(\alpha k+\beta)]^m} = {}\_1\Psi\_m \left[ \begin{array}{c} (1,1) \\ (\beta\_i, \alpha\_i)\_1^m \end{array} \bigg| z \right] = {}\_1\Psi\_m \left[ \begin{array}{c} (1,1) \\ (\beta, \alpha)\_1^m \end{array} \bigg| z \right],$$

and it is a particular case of the Wright generalized hypergeometric function with 2 × (*p* + *q*) indices *ai*, *Ai* (*i* = 1, . . . , *p*), and *bj*, *Bj* (*j* = 1, . . . , *q*), defined by the formula

$${}\_{p}\Psi\_{q}\left[\begin{matrix} (a\_{1},A\_{1})\dots (a\_{p},A\_{p}) \\ (b\_{1},B\_{1})\dots (b\_{q},B\_{q}) \end{matrix}\bigg| \sigma \right] = \sum\_{k=0}^{\infty} \frac{\Gamma(a\_{1}+kA\_{1})\dots\Gamma(a\_{p}+kA\_{p})}{\Gamma(b\_{1}+kB\_{1})\dots\Gamma(b\_{q}+kB\_{q})} \frac{\sigma^{k}}{k!}.$$
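A truncated evaluator for <sub>*p*</sub>Ψ<sub>*q*</sub> (our sketch) makes the identification of *F*<sup>(*m*)</sup><sub>*α*,*β*</sub> with <sub>1</sub>Ψ<sub>*m*</sub> testable; note that the top pair (1, 1) contributes Γ(1 + *k*) = *k*!, cancelling the 1/*k*! of the definition:

```python
import math

def wright_psi(top, bottom, z, terms=80):
    """Truncated Wright generalized hypergeometric series pPsi_q.
    top = [(a_1, A_1), ...], bottom = [(b_1, B_1), ...]."""
    s = 0.0
    for k in range(terms):
        term = z**k / math.factorial(k)
        for a, A in top:
            term *= math.gamma(a + k * A)
        for b, B in bottom:
            term /= math.gamma(b + k * B)
        s += term
    return s

# 1Psi1[(1,1); (1,1) | z] = exp(z):
print(wright_psi([(1, 1)], [(1, 1)], 0.5), math.exp(0.5))

# F^{(2)}_{1,1}(z) as 1Psi2 with bottom pairs (beta, alpha) = (1, 1) repeated:
print(wright_psi([(1, 1)], [(1, 1), (1, 1)], 2.25))
```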

#### **9. Conclusions**

Letting the parameter *β* in the condition (2) be a positive integer, we consider the family of Le Roy type functions (5) with parameters as follows:

$$
\alpha > 0, \ \gamma > 0, \text{ and } \beta = n \in \mathbb{N}.
$$

In Section 2 we provide an asymptotic formula for these functions for large values of the parameter *n* (Theorem 1). We also give upper estimates for the moduli of their remainder terms in the nonempty compact subsets of the complex plane and in the whole complex plane as well (Lemma 1). Further, in order to summarize the results obtained here, we consider the family of the type

$$\left\{ \widetilde{j}\_n(z) \right\}\_{n \in \mathbb{N}}, \tag{54}$$

with the functions *j̃<sub>n</sub>* as in (10), and the series

$$\sum\_{n=0}^{\infty} a\_n \, \widetilde{j}\_n(z), \tag{55}$$

in this case coinciding with the series (11) in Le Roy type functions with complex coefficients *an* (*n* = 0, 1, 2, . . .) and for *z* ∈ C.

It turns out that the series (55) converges absolutely in the open disk *D*(0; *R*) with the corresponding radius *R* given by Formula (12), and it diverges outside it, i.e., for *z* ∈ C with |*z*| > *R*. Moreover, inside the disk *D*(0; *R*), i.e., in each closed disk [*D*(0; *r*)] = {*z* : *z* ∈ C, |*z*| ≤ *r*} with *r* < *R*, the convergence is uniform. Near the boundary *C*(0; *R*) the series (55) satisfies Theorem 3 of Abel type. At last, the series fulfills the theorems of Tauber and Littlewood type, which are converses of the Abel type theorem.

Now, let us consider the functions from Section 8 with the same types of parameters. Since in this case they are of the type (5), all of them satisfy Lemma 1 and the inequalities therein. Further, noting that the functions (42), (52), (47) and (49) can be considered as representatives of different families of the type (5), the functions of each family discussed above have the asymptotic Formula (9) with the corresponding values of the parameters *α* and *γ*. Further, taking the family of the type (54) with the functions *j̃<sub>n</sub>* as follows:

$$\widetilde{j}\_n(z) = z^n \left[ \Gamma(n) \right]^m E\_{(\alpha),(n)}(z) = z^n \left[ \Gamma(n) \right]^m F\_{\alpha,n}^{(m)}(z), \quad m, n \in \mathbb{N}, \tag{56}$$

in the case (52) (in particular *m* = 1 in the case (42)), and respectively

$$\widetilde{j}\_n(z) = z^n \left[ \Gamma(n) \right]^{m+1} \widetilde{J}\_{0,n-1}^{1,m}(2i\sqrt{z}) = z^n \left[ \Gamma(n) \right]^{m+1} F\_{1,n}^{(m+1)}(z), \quad m, n \in \mathbb{N}, \tag{57}$$

in the case (49) (*m* = 1 in the case (47)), and adding, just for completeness, *j̃*<sub>0</sub>(*z*) = 1, we consider the corresponding series (55) with complex coefficients *a<sub>n</sub>* (*n* = 0, 1, 2, ...) for *z* ∈ C, i.e., the series ∑<sub>*n*=0</sub><sup>∞</sup> *a<sub>n</sub> j̃<sub>n</sub>*(*z*).

Taking into account that the series (55) is of the type (11) (however with special values of the parameters), it has the same behaviour. That means that the series (55) converges absolutely in the open disk *D*(0; *R*) with the corresponding radius *R*, and diverges outside it, i.e., for *z* ∈ C with |*z*| > *R*. Moreover, inside the disk *D*(0; *R*), i.e., in each closed disk [*D*(0; *r*)] with *r* < *R*, the convergence is uniform. Replacing the parameter *γ* with the corresponding value in Theorem 3, it reduces to the Abel type theorem for the series (55), referring to the behaviour of (55) near the boundary *C*(0; *R*). At last, the series (55) fulfills the theorems of Tauber and Littlewood type, which are converses of the Abel type theorem.

Thus, generally speaking, the described behaviour of the series (11) in the Le Roy type functions, as well as, in particular, the behaviour of the corresponding series (55) (in the functions of the families (56), respectively (57)), and that of the classical power series are the same. Moreover, the results discussed here are analogues of the Cauchy–Hadamard, Abel, Tauber and Littlewood theorems for the widely used power series.

**Author Contributions:** The ideas and results in this survey-paper reflect the author's own contributions, resulting from more than 25 years' research on the topic. The author has read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** This paper is done under the working programs on bilateral collaboration contracts of Bulgarian Academy of Sciences with Serbian and Macedonian Academies of Sciences and Arts, and under the COST program, COST Action CA15225 'Fractional'.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **The Asymptotic Expansion of a Function Introduced by L.L. Karasheva**

**Richard Paris**

Division of Computing and Mathematics, Abertay University, Dundee DD1 1HG, UK; r.paris@abertay.ac.uk

**Abstract:** The asymptotic expansion for *x* → ±∞ of the entire function *F*<sub>*n*,*σ*</sub>(*x*; *μ*) = ∑<sub>*k*=0</sub><sup>∞</sup> (sin(*nγ<sub>k</sub>*)/sin *γ<sub>k</sub>*) *x<sup>k</sup>*/(*k*! Γ(*μ* − *σk*)), with *γ<sub>k</sub>* = (*k* + 1)*π*/(2*n*), for *μ* > 0, 0 < *σ* < 1 and *n* = 1, 2, ... is considered. In the special case *σ* = *α*/(2*n*), with 0 < *α* < 1, this function was recently introduced by L.L. Karasheva (*J. Math. Sciences*, **250** (2020) 753–759) as a solution of a fractional-order partial differential equation. By expressing *F*<sub>*n*,*σ*</sub>(*x*; *μ*) as a finite sum of Wright functions, we employ the standard asymptotics of integral functions of hypergeometric type to determine its asymptotic expansion. This is found to depend critically on the parameter *σ* (and to a lesser extent on the integer *n*). Numerical results are presented to illustrate the accuracy of the different expansions obtained.

**Keywords:** Wright function; asymptotic expansions; Stokes phenomenon

**MSC:** 33C70; 34E05; 41A30; 41A60

#### **1. Introduction**

In a recent paper, L.L. Karasheva [1] introduced the entire function

$$\Theta\_{n,a}(\mathbf{x};\mu) := \sum\_{k=0}^{\infty} \frac{\sin\left(n\gamma\_k\right)}{\sin\gamma\_k} \frac{\mathbf{x}^k}{k!\Gamma\left(\mu - \frac{ak}{2n}\right)}, \qquad \gamma\_k := \frac{(k+1)\pi}{2n},\tag{1}$$

where *μ* > 0, 0 < *α* < 1 and *n* = 1, 2, ... and, throughout, *x* is a real variable. This function is of interest as it is involved in the fundamental solution of the differential equation

$$\frac{\partial^n u}{\partial t^n} + (-1)^n \frac{\partial^{2n} u}{\partial x^{2n}} = f(x, t)$$

for positive integer *n*, where the derivative with respect to *t* is the fractional derivative of order *α*. In the simplest case *n* = 1, we have Θ<sub>1,*α*</sub>(*x*; *μ*) = *φ*(−*σ*, *μ*; *x*), *σ* := *α*/(2*n*), where *φ*(−*σ*, *μ*; *x*) is the Wright function

$$\phi(-\sigma,\mu;x) := \sum\_{k=0}^{\infty} \frac{x^k}{k!\Gamma(\mu - \sigma k)} \qquad (\sigma < 1), \tag{2}$$

which finds application as a fundamental solution of the diffusion-wave equation [2]. Under the above assumptions on *n* and *α*, it follows that the parameter *σ* associated with (1) satisfies 0 < *σ* < 1/2.

In this study, however, we shall allow the parameter *σ* to satisfy 0 < *σ* < 1 and consider the function

$$F\_{\boldsymbol{n},\sigma}(\mathbf{x};\boldsymbol{\mu}) := \sum\_{k=0}^{\infty} \frac{\sin\left(\boldsymbol{n}\gamma\_{k}\right)}{\sin\gamma\_{k}} \frac{\mathbf{x}^{k}}{k!\Gamma(\boldsymbol{\mu}-\sigma k)} \qquad (0 < \sigma < 1),\tag{3}$$

**Citation:** Paris, R. The Asymptotic Expansion of a Function Introduced by L.L. Karasheva. *Mathematics* **2021**, *9*, 1454. https://doi.org/10.3390/ math9121454

Academic Editor: Francesco Mainardi

Received: 30 April 2021 Accepted: 17 June 2021 Published: 21 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

which coincides with Θ*n*,*α*(*x*; *μ*) when *σ* = *α*/(2*n*). From the well-known expansion

$$\frac{\sin\left(n\gamma\_k\right)}{\sin\gamma\_k} = \sum\_{r=0}^{n-1} e^{i\gamma\_k(2r-n+1)} = \sum\_{r=0}^{n-1} e^{-i(k+1)\omega\_r},$$

where

$$
\omega\_r := \frac{(n - 2r - 1)\pi}{2n} \qquad (0 \le r \le n - 1), \tag{4}
$$

it follows that (3) can be expressed as a finite sum of Wright functions defined in (2) with rotated arguments (compare [1], Equation (4))

$$F\_{n,\sigma}(x;\mu) = \sum\_{r=0}^{n-1} e^{-i\omega\_r} \, \phi(-\sigma,\mu; x e^{-i\omega\_r}). \tag{5}$$

We note that the extreme values of *ω<sub>r</sub>* satisfy *ω*<sub>0</sub> = −*ω*<sub>*n*−1</sub> = (*n* − 1)*π*/(2*n*), whence |*ω<sub>r</sub>*| < *π*/2 for 0 ≤ *r* ≤ *n* − 1.
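Representation (5) lends itself to a direct numerical check for real *x*. In the sketch below (ours, with illustrative parameter values), the ratio sin(*nγ<sub>k</sub>*)/sin *γ<sub>k</sub>* is evaluated through the exponential sum above, which avoids the 0/0 arising when *γ<sub>k</sub>* is a multiple of *π*:

```python
import cmath
import math

def wright_phi(z, sigma, mu, terms=80):
    """Truncated Wright series phi(-sigma, mu; z) from (2); z may be complex."""
    s = 0j
    for k in range(terms):
        g = mu - sigma * k
        if g <= 0 and abs(g - round(g)) < 1e-12:
            continue                      # 1/Gamma vanishes at the poles
        s += z**k / (math.factorial(k) * math.gamma(g))
    return s

def F_direct(x, n, sigma, mu, terms=80):
    """Series (3); sin(n g)/sin(g) computed via sum_r exp(i g (2r - n + 1))."""
    s = 0.0
    for k in range(terms):
        g = (k + 1) * math.pi / (2 * n)
        ratio = sum(cmath.exp(1j * g * (2 * r - n + 1)) for r in range(n)).real
        gm = mu - sigma * k
        if gm <= 0 and abs(gm - round(gm)) < 1e-12:
            continue
        s += ratio * x**k / (math.factorial(k) * math.gamma(gm))
    return s

def F_via_wright(x, n, sigma, mu):
    """Finite sum of rotated Wright functions, as in (5)."""
    total = 0j
    for r in range(n):
        w = (n - 2 * r - 1) * math.pi / (2 * n)
        total += cmath.exp(-1j * w) * wright_phi(x * cmath.exp(-1j * w), sigma, mu)
    return total

x, n, sigma, mu = 0.8, 2, 0.4, 0.7
print(F_direct(x, n, sigma, mu), F_via_wright(x, n, sigma, mu))
```

For real *x* the imaginary part of the finite Wright sum cancels, and the two values agree to rounding error.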

We use the representation in (5), with the values of *ω<sup>r</sup>* in (4), to determine the asymptotic expansion of *Fn*,*σ*(*x*; *μ*) for *x* → ±∞ by application of the asymptotic theory of the Wright function. A summary of the expansion of *φ*(−*σ*, *μ*; *z*) for large |*z*| is given in Section 3. The expansions of *Fn*,*σ*(*x*; *μ*) for *x* → ±∞ are given in Sections 4 and 5, where they are shown to depend critically on the parameter *σ* (and to a lesser extent on the integer *n*). A concluding section presents our numerical results confirming the accuracy of the different expansions obtained.

#### **2. An Alternative Representation of $F_{n,\sigma}(x;\mu)$**

The Wright function appearing in (2) can be written alternatively as

$$\phi(-\sigma,\mu;x) = \frac{1}{\pi}\sum_{k=0}^{\infty}\frac{x^k}{k!}\,\Gamma(1-\mu+\sigma k)\sin\pi(\mu-\sigma k)$$

$$= \frac{1}{2\pi}\left\{ e^{\pi i\vartheta}\,\Psi(x e^{\pi i\sigma}) + e^{-\pi i\vartheta}\,\Psi(x e^{-\pi i\sigma})\right\}$$

upon use of the reflection formula for the gamma function, where $\vartheta := \frac{1}{2} - \mu$. The associated Wright function $\Psi(z)$ is defined by

$$\Psi(z) := \sum\_{k=0}^{\infty} \frac{z^k}{k!} \Gamma(\sigma k + \delta) \qquad (0 < \sigma < 1, \ \delta = 1 - \mu), \tag{6}$$

which is valid for |*z*| < ∞. Hence, we obtain the representation
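As a numerical aside (not from the paper; the helper name and truncation index are our own choices), the series (6) can be summed directly in double precision for moderate $|z|$. For $\sigma = 1/2$, $\delta = 1$, the integral representation $\Gamma(k/2+1) = \int_0^\infty t^{k/2}e^{-t}\,dt$ collapses the series to the closed form $\Psi(z) = 1 + (\sqrt{\pi}\,z/2)\,e^{z^2/4}(1+\mathrm{erf}(z/2))$, which provides an independent check:

```python
import math

def wright_Psi(z, sigma, delta, kmax=120):
    # Direct summation of Eq. (6): Psi(z) = sum_{k>=0} z^k/k! * Gamma(sigma*k + delta).
    # The series converges for all finite z when 0 < sigma < 1, so a fixed
    # truncation kmax suffices for moderate |z|.
    return sum(z**k / math.gamma(k + 1) * math.gamma(sigma * k + delta)
               for k in range(kmax))

# Psi(0) = Gamma(delta); and for sigma = 1/2, delta = 1 the closed form applies:
z = 2.0
closed_form = 1 + math.sqrt(math.pi) * (z / 2) * math.exp(z**2 / 4) * (1 + math.erf(z / 2))
print(wright_Psi(z, 0.5, 1.0), closed_form)   # the two values agree
```

The same routine is reused below when comparing against the asymptotic expansions.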

$$F_{n,\sigma}(x;\mu) = \frac{1}{2\pi}\sum_{r=0}^{n-1} e^{-i\omega_r}\,\Upsilon_r(\sigma;x),$$

where

$$\Upsilon_r(\sigma;x) := e^{\pi i\vartheta}\,\Psi(x e^{\pi i\sigma - i\omega_r}) + e^{-\pi i\vartheta}\,\Psi(x e^{-\pi i\sigma - i\omega_r}).$$

If we now exploit the symmetry of the $\omega_r$ in (4) (and the fact that $x$ is a real variable), we observe that the values of $\omega_r$ for $0 \le r \le N-1$, where $N = \lfloor n/2\rfloor$, satisfy

$$\{\omega_0,\omega_1,\dots,\omega_{N-1}\} = \left\{\frac{(n-1)\pi}{2n},\frac{(n-3)\pi}{2n},\dots,\frac{\varepsilon_n\pi}{2n}\right\}, \qquad \varepsilon_n = \begin{cases} 1 & (n \text{ even})\\ 2 & (n \text{ odd}). \end{cases}\tag{7}$$

Then, we can write

$$F_{n,\sigma}(x;\mu) = \frac{1}{\pi}\Re\left\{\sum_{r=0}^{N-1} e^{-i\omega_r}\,\Upsilon_r(\sigma;x) + \Delta_n\, e^{\pi i\vartheta}\,\Psi(x e^{\pi i\sigma})\right\},\tag{8}$$

where

$$
\Delta\_n = \begin{cases} 0 & (n \text{ even}) \\ -1 & (n \text{ odd}) . \end{cases}
$$

The form (8) involves half the number of Wright functions Ψ(*z*) and will be used to determine the asymptotic expansion of *Fn*,*σ*(*x*; *μ*) as *x* → ±∞ in Sections 4 and 5.

#### **3. The Asymptotic Expansion of $\Psi(z)$ for $|z| \to \infty$**

We first present the large-|*z*| asymptotics of the function Ψ(*z*) in (6) based on the presentation described in ([3], Section 4); see also ([4], Section 4.2), ([5], §2.3). We introduce the following parameters:

$$\kappa = 1 - \sigma, \quad h = \sigma^{\sigma}, \quad \vartheta = \delta - \tfrac{1}{2}, \quad \delta = 1 - \mu,\tag{9}$$

together with the associated (formal) exponential and algebraic expansions

$$E(z) := Z^{\vartheta} e^{Z} \sum_{j=0}^{\infty} A_j(\sigma) Z^{-j}, \qquad H(z) := \frac{1}{\sigma}\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\,\Gamma\!\left(\frac{k+\delta}{\sigma}\right) z^{-(k+\delta)/\sigma}, \tag{10}$$

where (the dependence of the coefficients $A_j(\sigma)$ on the parameter $\delta$ is not indicated)

$$Z := \kappa (h z)^{1/\kappa}, \qquad A_0(\sigma) = (2\pi/\kappa)^{1/2}\,(\sigma/\kappa)^{\vartheta}. \tag{11}$$

Then, since 0 < *κ* < 1, we obtain from ([5], p. 57) the large-*z* expansion

$$\Psi(z) \sim \begin{cases} E(z) + H(ze^{\mp \pi i}) & (|\arg z| \le \frac{1}{2}\pi\kappa) \\\\ H(ze^{\mp \pi i}) & (\frac{1}{2}\pi\kappa < |\arg z| \le \pi), \end{cases} \tag{12}$$

where the upper or lower signs are chosen according as arg *z* > 0 or arg *z* < 0, respectively.

The expansion $E(z)$ is exponentially large as $|z| \to \infty$ in the sector $|\arg z| < \frac{1}{2}\pi\kappa$, and oscillatory (multiplied by the algebraic factor $z^{\vartheta/\kappa}$) on the anti-Stokes lines $\arg z = \pm\frac{1}{2}\pi\kappa$. In the adjacent sectors $\frac{1}{2}\pi\kappa < |\arg z| < \pi\kappa$, the expansion $E(z)$ *continues to be present*, but is exponentially small, reaching maximal subdominance relative to the algebraic expansion on the Stokes lines $\arg z = \pm\pi\kappa$ (on these rays, $E(z)$ undergoes a Stokes phenomenon where it switches off in a smooth manner; see [6], p. 67). In our treatment of $F_{n,\sigma}(x;\mu)$, we will not be concerned with exponentially small contributions, except in one special case when $x \to -\infty$ where the expansion of $F_{n,\sigma}(x;\mu)$ is exponentially small.
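The dominance of $E(z)$ on the positive real axis is easy to see numerically. The sketch below (ours, not the paper's; the sample point $x = 10$ and truncation are arbitrary choices) compares the leading term $A_0(\sigma)Z^{\vartheta}e^{Z}$ of (10)–(11) with direct summation of the series (6):

```python
import math

# Leading term A0 * Z^vartheta * e^Z of the exponential expansion E(z) in
# (10)-(11), compared with direct summation of the series (6) on the positive
# real axis, which lies inside the exponentially large sector |arg z| < pi*kappa/2.
sigma, delta = 1/3, 1.0
kappa, h, vartheta = 1 - sigma, sigma**sigma, delta - 0.5   # parameters (9)

def wright_Psi(z, kmax=160):
    return sum(z**k / math.gamma(k + 1) * math.gamma(sigma * k + delta)
               for k in range(kmax))

x = 10.0
Z = kappa * (h * x)**(1 / kappa)                            # Z from (11)
A0 = math.sqrt(2 * math.pi / kappa) * (sigma / kappa)**vartheta
leading = A0 * Z**vartheta * math.exp(Z)
print(wright_Psi(x) / leading)   # close to 1; the O(1/Z) correction is below 1% here
```

The subdominant algebraic contribution $H$ is of order $x^{-\delta/\sigma}$ and is entirely negligible against the exponentially large term at this value of $x$.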

The first few normalised coefficients $c_j = A_j(\sigma)/A_0(\sigma)$ are [3,4]:

$$c_0 = 1, \qquad c_1 = \frac{1}{24\sigma}\{2 + 7\sigma + 2\sigma^2 - 12\delta(1+\sigma) + 12\delta^2\},$$

$$c_2 = \frac{1}{1152\sigma^2}\{4 + 172\sigma + 417\sigma^2 + 172\sigma^3 + 4\sigma^4 - 24\delta(6 + 41\sigma + 41\sigma^2 + 6\sigma^3)$$

$$+\, 120\delta^2(4 + 11\sigma + 4\sigma^2) - 480\delta^3(1+\sigma) + 144\delta^4\},$$

$$c_3 = \frac{1}{414{,}720\,\sigma^3}\{(-1112 + 9636\sigma + 163{,}734\sigma^2 + 336{,}347\sigma^3 + 163{,}734\sigma^4 + 9636\sigma^5$$

$$-\, 1112\sigma^6) - \delta(3600 + 220{,}320\sigma + 929{,}700\sigma^2 + 929{,}700\sigma^3 + 220{,}320\sigma^4 + 3600\sigma^5)$$

$$+\, \delta^2(65{,}520 + 715{,}680\sigma + 1{,}440{,}180\sigma^2 + 715{,}680\sigma^3 + 65{,}520\sigma^4)$$

$$-\,\delta^3(161{,}280 + 816{,}480\sigma + 816{,}480\sigma^2 + 161{,}280\sigma^3)$$

$$+\,\delta^4(151{,}200 + 378{,}000\sigma + 151{,}200\sigma^2) - 60{,}480\,\delta^5(1+\sigma) + 8640\,\delta^6\}.\tag{13}$$

In addition to the Stokes lines $\arg z = \pm\pi\kappa$, where $E(z)$ is maximally subdominant relative to the algebraic expansion, the positive real axis is also a Stokes line. Here, the algebraic expansion is maximally subdominant relative to $E(z)$. As the positive real axis is crossed from the upper to the lower half plane, the factor $e^{-\pi i}$ appearing in $H(ze^{-\pi i})$ changes to $e^{\pi i}$, and vice versa. The details of this transition will not be considered here; see ([5], p. 248) for the case of the confluent hypergeometric function ${}_1F_1(a;b;z)$.

#### **4. The Asymptotic Expansion of $F_{n,\sigma}(x;\mu)$ for $x \to +\infty$**

*4.1. Asymptotic Character as a Function of σ*

Let us denote the arguments of the Ψ functions appearing in (8) by

$$z_r^{\pm} = x\exp\left[i\phi_r^{\pm}\right], \qquad \phi_r^{\pm} = \pm\pi\sigma - \omega_r.$$

The representation of the asymptotic structure of the functions $\Psi(z_r^{\pm})$ is illustrated in Figure 1 for different values of $\sigma$. The figures show the rays $\arg z = \pm\pi\sigma$ and the anti-Stokes lines (dashed lines) $\arg z = \pm\frac{1}{2}\pi\kappa$. In the case $\sigma = \frac{2}{3}$, the exponentially large sector is $|\arg z| < \frac{1}{6}\pi$, and it is seen from Figure 1a that the arguments $z_r^{\pm}$ for $0 \le r \le N-1$ and $xe^{\pm\pi i\sigma}$ all lie in the domain where $\Psi(z)$ has an algebraic expansion; this conclusion applies *a fortiori* when $\frac{2}{3} < \sigma < 1$. When $\sigma = \frac{1}{2}$, the exponentially large sector is $|\arg z| < \frac{1}{4}\pi$; when $n = 2$, we have $\omega_0 = \frac{1}{4}\pi$, so that $z_0^+$ is situated on the boundary of the exponentially large sector.

Other values of $n \ge 3$ will have some $z_r^+$ inside this sector, whereas the $z_r^-$ are in the algebraic sector for $n \ge 2$. Similarly, the case $\sigma = \frac{1}{3}$, where the rays $\arg z = \pm\pi\sigma$ and $\arg z = \pm\frac{1}{2}\pi\kappa$ coincide, has all the $z_r^+$ situated in the exponentially large sector, with the $z_r^-$ situated in the algebraic domain. Finally, when $\sigma = \frac{1}{6}$, the exponentially large sector $|\arg z| < \frac{5}{12}\pi$ encloses the rays $\arg z = \pm\pi\sigma$, with the result that all the $z_r^+$ lie in the exponentially large sector, whereas the $z_r^-$ lie in the algebraic domain (except when $n = 2$, when $z_0^-$ lies on the lower boundary of the exponentially large sector).


**Figure 1.** Diagrams representing the rays $\arg z = \pm\pi\sigma$ and the boundaries of the exponentially large sector (shown by dashed rays) $|\arg z| < \frac{1}{2}\pi\kappa$, $\kappa = 1-\sigma$, for (**a**) $\sigma = 2/3$, (**b**) $\sigma = 1/2$, (**c**) $\sigma = 1/3$, and (**d**) $\sigma = 1/6$. Outside the exponentially large sector, the expansion of $\Psi(z)$ is algebraic in character. The circular quadrants represent the range of the arguments $\arg z = \pm\pi\sigma - \omega_r$ for $0 \le r \le n/2 - 1$, with $n \ge 2$, and the arrow-head corresponds to $n = \infty$. When $\sigma = 1/3$, the rays $\arg z = \pm\pi\sigma$ and $\arg z = \pm\frac{1}{2}\pi\kappa$ coincide.

To summarise, we have the following asymptotic character of $F_{n,\sigma}(x;\mu)$ when $x \to +\infty$ as a function of the parameter $\sigma$:

$$\begin{array}{l} 0 < \sigma < \frac{1}{2} & \text{Exp. large} + \text{Algebraic (for } n \ge 2) \\\\ \frac{1}{2} \le \sigma < \frac{2}{3} & \text{Exp. large (dependent on } n) + \text{Algebraic} \\\\ \frac{2}{3} \le \sigma < 1 & \text{Algebraic (for } n \ge 2). \end{array} \tag{14}$$

#### *4.2. Asymptotic Expansion*

From (8) and (10), we have the algebraic expansion associated with $F_{n,\sigma}(x;\mu)$ given by

$$\mathbf{H}(x) = \frac{1}{\sigma}\sum_{k=0}^{\infty}\frac{x^{-K}}{k!\,\Gamma(1-K)}\,\theta_{n,k}, \qquad K := \frac{k+\delta}{\sigma}, \tag{15}$$

where, with appropriate choices of the factors *e*±*π<sup>i</sup>* in *H*(*z*),

$$\begin{aligned}
\theta_{n,k} &= \frac{(-1)^k}{\sin\pi K}\,\Re\left\{\sum_{r=0}^{N-1}\left[ e^{\pi i\vartheta - i\omega_r}\left(e^{\pi i\sigma - i\omega_r}\cdot e^{-\pi i}\right)^{-K} + e^{-\pi i\vartheta - i\omega_r}\left(e^{-\pi i\sigma - i\omega_r}\cdot e^{\pi i}\right)^{-K}\right]\right. \\
&\qquad\qquad \left. +\,\Delta_n\, e^{\pi i\vartheta}\left(e^{\pi i\sigma}\cdot e^{-\pi i}\right)^{-K}\right\} \\
&= \frac{(-1)^k}{\sin\pi K}\,\Re\left\{\sum_{r=0}^{N-1} e^{(K-1)i\omega_r}\left(e^{\pi i(\vartheta+\kappa K)} + e^{-\pi i(\vartheta+\kappa K)}\right) + \Delta_n\, e^{\pi i(\vartheta+\kappa K)}\right\} \\
&= \Re\left\{2\sum_{r=0}^{N-1} e^{(K-1)i\omega_r} + \Delta_n\right\}
\end{aligned}\tag{16}$$

as $\cos\pi(\vartheta+\kappa K) = \cos\pi(K - k - \frac{1}{2}) = (-1)^k\sin\pi K$. For the exponential component, we introduce the quantities
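The simplification just used, $\cos\pi(\vartheta+\kappa K) = (-1)^k\sin\pi K$, follows because $\vartheta + \kappa K = K - k - \frac{1}{2}$ exactly; a quick spot-check (the parameter values are our own, arbitrary choices):

```python
import math

# Spot-check of cos(pi*(vartheta + kappa*K)) = (-1)**k * sin(pi*K),
# with K = (k + delta)/sigma from (15) and kappa, vartheta from (9).
sigma, delta = 0.4, 0.25
kappa, vartheta = 1 - sigma, delta - 0.5
for k in range(8):
    K = (k + delta) / sigma
    lhs = math.cos(math.pi * (vartheta + kappa * K))
    rhs = (-1)**k * math.sin(math.pi * K)
    assert abs(lhs - rhs) < 1e-9
print("identity verified for k = 0..7")
```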

$$X = \kappa (hx)^{1/\kappa}, \qquad \Phi_r^{\pm} = \pm\frac{\pi\vartheta}{\kappa} - \omega_r\left(1 + \frac{\vartheta}{\kappa}\right) \tag{17}$$

and the formal asymptotic sum

$$S(Xe^{i\Omega}) := \sum_{j=0}^{\infty} A_j(\sigma)\,(Xe^{i\Omega/\kappa})^{-j}.\tag{18}$$

Then, from (8) and (10), we have the exponential expansion in the form

$$\mathbf{E}(x) = \frac{X^{\vartheta}}{\pi}\,\Re\left\{\sum_{r=0}^{N-1}\left(\exp\!\left[Xe^{i\phi_r^+/\kappa} + i\Phi_r^+\right]S\!\left(Xe^{i\phi_r^+}\right) + \exp\!\left[Xe^{i\phi_r^-/\kappa} + i\Phi_r^-\right]S\!\left(Xe^{i\phi_r^-}\right)\right)\right. \\
\left. +\,\Delta_n \exp\!\left[Xe^{\pi i\sigma/\kappa} + \pi i\vartheta/\kappa\right]S\!\left(Xe^{\pi i\sigma}\right)\right\}. \tag{19}$$

It is important to stress that only the exponential terms with $|\phi_r^{\pm}| \le \frac{1}{2}\pi\kappa$, that is, those with

$$\left| \pm\pi\sigma - \omega_r \right| \le \tfrac{1}{2}\pi\kappa,$$

are to be retained in **E**($x$) in (19). In addition, it is seen by inspection of Figure 1 that the second term involving $S(Xe^{i\phi_r^-})$ does not contribute to **E**($x$) when $\frac{1}{3} \le \sigma < 1$, since, for this range of $\sigma$, the ray $\arg z = -\pi\sigma$ lies outside (or, when $\sigma = \frac{1}{3}$, on the lower boundary of) the exponentially large sector $|\arg z| < \frac{1}{2}\pi\kappa$. Thus, when $\frac{1}{2} \le \sigma < \frac{2}{3}$, the exponential expansion is significant if $\pi\sigma - \omega_0 \le \frac{1}{2}\pi\kappa$; that is, if $n \ge n_0 = 1/(2-3\sigma)$.

In summary, we have the following theorem.

**Theorem 1.** *The following expansion holds for x* → +∞*:*

$$F_{n,\sigma}(x;\mu) \sim \begin{cases} \mathbf{E}(x) + \mathbf{H}(x) & (0 < \sigma < \frac{1}{2};\ n \ge 2) \\[4pt] \mathbf{E}(x) + \mathbf{H}(x) & \left(\frac{1}{2} \le \sigma < \frac{2}{3};\ n \ge n_0\right) \\[4pt] \mathbf{H}(x) & \left(\frac{1}{2} \le \sigma < \frac{2}{3};\ n < n_0\right) \\[4pt] \mathbf{H}(x) & \left(\frac{2}{3} \le \sigma < 1;\ n \ge 2\right) \end{cases}$$

*where $n_0 = 1/(2-3\sigma)$ and the exponential and algebraic expansions* **E**($x$) *and* **H**($x$) *are defined in (19) and (15), respectively.*

*4.3. Karasheva's Estimate for $|\Theta_{n,\alpha}(x;\mu)|$*

When $\sigma = \alpha/(2n) < \frac{1}{2}$, we see from Theorem 1 that the dominant exponential expansion as $x \to +\infty$ corresponds to $r = 0$, yielding

$$\Theta_{n,\alpha}(x;\mu) \sim \frac{A_0(\sigma)X^{\vartheta}}{\pi}\,\Re\,\exp\left[Xe^{i(\pi\sigma-\omega_0)/\kappa} + i\Phi_0^+\right]$$

$$= \frac{A_0(\sigma)X^{\vartheta}}{\pi}\exp\left[X\cos\frac{\pi\sigma-\omega_0}{\kappa}\right]\cos\left[X\sin\frac{\pi\sigma-\omega_0}{\kappa} + \Phi_0^+\right],$$

where

$$\frac{\pi\sigma-\omega_0}{\kappa} = \frac{2n\pi\sigma-(n-1)\pi}{2n-\alpha} = \frac{(\alpha+1-n)\pi}{2n-\alpha}.$$

Thus, we have the leading order estimate

$$\Theta_{n,\alpha}(x;\mu) \sim \frac{A_0(\sigma)X^{\vartheta}}{\pi}\exp\left[X\cos\left(\frac{(n-1-\alpha)\pi}{2n-\alpha}\right)\right]\cos\left[X\sin\left(\frac{(n-1-\alpha)\pi}{2n-\alpha}\right) - \Phi_0^+\right] \tag{20}$$

as $x \to +\infty$. When expressed in our notation, Karasheva's estimate for $|\Theta_{n,\alpha}(x;\mu)|$ in ([1], §8) agrees with (20) (when the second cosine term is replaced by 1), except that she did not give the value of the multiplicative constant $A_0(\sigma)/\pi$ given in (11). However, the presentation of her result as an upper bound is not evident due to the presence of possibly less dominant exponential expansions and also the subdominant algebraic expansion.

#### **5. The Expansion of $F_{n,\sigma}(x;\mu)$ for $x \to -\infty$**

To examine the case of negative $x$, we replace $x$ by $e^{\mp\pi i}x$, with $x > 0$, and use the fact that $\Psi(ze^{2\pi i}) = \Psi(z)$ to find, from (8), that

$$F_{n,\sigma}(-x;\mu) = \frac{1}{\pi}\Re\left\{\sum_{r=0}^{N-1} e^{-i\omega_r}\,\Upsilon_r(-\kappa;x) + \Delta_n\, e^{\pi i\vartheta}\,\Psi(x e^{-\pi i\kappa})\right\}.\tag{21}$$

The rays $\arg z = \pm\pi\sigma$ in Figure 1 are now replaced by the Stokes lines $\arg z = \pm\pi\kappa$. These and the anti-Stokes lines $\arg z = \pm\frac{1}{2}\pi\kappa$ are illustrated in Figure 2 when $0 < \sigma < \frac{1}{2}$ and $\frac{1}{2} < \sigma < 1$. In the sectors $\frac{1}{2}\pi\kappa < |\arg z| < \pi\kappa$, we recall that the exponential expansion $E(z)$ is still present but is exponentially small as $|z| \to \infty$.

**Figure 2.** Diagrams representing the rays $\arg z = \pm\pi\kappa$ and the boundaries of the exponentially large sector (shown by dashed rays) $|\arg z| < \frac{1}{2}\pi\kappa$, $\kappa = 1-\sigma$, for (**a**) $0 < \sigma < \frac{1}{2}$ and (**b**) $\frac{1}{2} < \sigma < 1$. The circular quadrants represent the range of the arguments $\arg z = \pm\pi\kappa - \omega_r$ for $0 \le r \le N-1$, with the arrow-head corresponding to $n = \infty$. The $\pm$ signs in (**b**) denote the signs to be chosen in $H(z)$ on either side of the Stokes line $\arg z = 0$.

For the algebraic component of the expansion, two cases arise when the argument $\pi\kappa - \omega_r$ of the second $\Psi$ function in $\Upsilon_r(-\kappa;x)$ is either (i) positive or (ii) negative. In case (i), the algebraic expansion $H(z)$ does not encounter a Stokes phenomenon as its argument does not cross $\arg z = 0$, whereas in case (ii), a Stokes phenomenon arises for those values of $r$ that make $\pi\kappa - \omega_r < 0$. In case (i), the algebraic component contains the factor inside the sum over $r$ in (21)

$$e^{\pi i\vartheta}\left(e^{-\pi i\kappa - i\omega_r}\cdot e^{\pi i}\right)^{-K} + e^{-\pi i\vartheta}\left(e^{\pi i\kappa - i\omega_r}\cdot e^{-\pi i}\right)^{-K}$$

$$= e^{i\omega_r K}\left(e^{\pi i(\vartheta-\sigma K)} + e^{-\pi i(\vartheta-\sigma K)}\right) = 2e^{i\omega_r K}\cos\pi(k+\tfrac{1}{2}) \equiv 0$$

upon recalling the definition of $K$ in (15) and noting that $\delta - \vartheta = \frac{1}{2}$. Similarly, the final term involves the factor $e^{\pi i\vartheta}(e^{-\pi i\kappa}\cdot e^{\pi i})^{-K}$, with real part $\cos\pi(\vartheta-\sigma K) = 0$. Thus, the algebraic contribution to $F_{n,\sigma}(-x;\mu)$ vanishes in case (i).

For case (ii) to apply, we require that $\pi\kappa - \omega_0 < 0$; that is, $n > n^* = 1/(2\sigma-1)$. Suppose that $\pi\kappa - \omega_r < 0$ for $0 \le r \le r_0$. Then, the algebraic component resulting from the terms with $r \le r_0$ becomes

$$\frac{1}{\pi\sigma}\,\Re\left\{\sum_{k=0}^{\infty}\frac{(-1)^k\,\Gamma(K)}{k!}\,x^{-K}\sum_{r=0}^{r_0} e^{(K-1)i\omega_r}\left(e^{\pi i\vartheta}(e^{-\pi i\kappa}\cdot e^{\pi i})^{-K} + e^{-\pi i\vartheta}(e^{\pi i\kappa}\cdot e^{\pi i})^{-K}\right)\right\}$$

$$= \frac{2}{\pi\sigma}\,\Re\left\{\sum_{k=0}^{\infty}\frac{(-1)^k\,\Gamma(K)}{k!}\,x^{-K}\sum_{r=0}^{r_0} e^{(K-1)i\omega_r - \pi iK}\cos\pi(\vartheta - \sigma K + K)\right\},$$

where, in the second term in round braces, we have taken account of the Stokes phenomenon (the first term and that multiplied by Δ*<sup>n</sup>* are unaffected). Some routine algebra then produces the algebraic contribution

$$\hat{\mathbf{H}}(x) := \frac{2}{\sigma}\sum_{k=0}^{\infty}\frac{x^{-K}}{k!\,\Gamma(1-K)}\,\hat{\theta}_{n,k}, \qquad \hat{\theta}_{n,k} := \sum_{r=0}^{r_0}\cos\{\pi K - (K-1)\omega_r\} \tag{22}$$

when *<sup>n</sup>* <sup>&</sup>gt; *<sup>n</sup>*<sup>∗</sup> and **Hˆ** (*x*) <sup>≡</sup> 0 when *<sup>n</sup>* <sup>&</sup>lt; *<sup>n</sup>*∗. (We avoid here consideration of the algebraic contribution when *πκ* − *ω<sup>r</sup>* = 0, that is, on the Stokes line arg *z* = 0.)

Reference to Figure 2 shows that there is no exponential contribution to $F_{n,\sigma}(-x;\mu)$ from the terms $\Psi(xe^{-\pi i\kappa})$ and $\Psi(xe^{-\pi i\kappa - i\omega_r})$. From (10) and (21), we find that the exponential expansion results from the terms $\Psi(xe^{\pi i\kappa - i\omega_r})$ and is given by

$$\hat{\mathbf{E}}(x) := \frac{X^{\vartheta}}{\pi}\,\Re\sum_{r=0}^{N-1}\exp\left[-Xe^{-i\omega_r/\kappa} - i\Phi\right]S(-Xe^{-i\omega_r/\kappa}),\tag{23}$$

where $X$ and the asymptotic sum $S$ are defined in (17) and (18), with $\Phi := \omega_r(1+\vartheta/\kappa)$. For $\sigma < \frac{1}{2}$ (when the algebraic expansion vanishes), the expansion of $F_{n,\sigma}(-x;\mu)$ will be exponentially small provided $\pi\kappa - \omega_0 > \frac{1}{2}\pi\kappa$; that is, when $n < 1/\sigma$. If $n = 1/\sigma$, there is an exponentially oscillatory contribution, and when $n > 1/\sigma$, the expansion is exponentially large.

To summarise, we have the theorem:

**Theorem 2.** *The following expansion holds for x* → +∞*:*

$$F_{n,\sigma}(-x;\mu) \sim \begin{cases} \hat{\mathbf{E}}(x) & (0 < \sigma \le \frac{1}{2}) \\[4pt] \hat{\mathbf{E}}(x) + \hat{\mathbf{H}}(x) & (\frac{1}{2} < \sigma < 1), \end{cases} \tag{24}$$

*where the exponential expansion* $\hat{\mathbf{E}}(x)$ *is defined in (23). This last expansion is exponentially small as* $x \to -\infty$ *when* $0 < \sigma < \frac{1}{2}$ *and* $n < 1/\sigma$*. The algebraic expansion* $\hat{\mathbf{H}}(x)$ *is given by*

$$\hat{\mathbf{H}}(x) := \frac{2}{\sigma}\sum_{k=0}^{\infty}\frac{x^{-K}}{k!\,\Gamma(1-K)}\,\hat{\theta}_{n,k}\quad (n > n^*), \qquad \hat{\mathbf{H}}(x) \equiv 0 \quad (n < n^*),$$

*where* $n^* = 1/(2\sigma-1)$ *and* $K$, $\hat{\theta}_{n,k}$ *are specified in (15) and (22).*

#### **6. Numerical Results**

In this section, we describe numerical calculations that support the expansions given in Theorems 1 and 2. The function $F_{n,\sigma}(x;\mu)$ was evaluated using the expression in terms of Wright functions (valid for real $x$)

$$F_{n,\sigma}(x;\mu) = 2\Re\sum_{r=0}^{N-1} e^{-i\omega_r}\,\phi(-\sigma,\mu;x e^{-i\omega_r}) + \Delta_n\,\phi(-\sigma,\mu;x), \qquad N = \lfloor n/2\rfloor,\tag{25}$$

which follows from (5) and the symmetry of *ωr*.
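A stdlib-only sketch of (25) is given below; it is our own illustration, not the paper's code. The Wright function is summed via the reflection-formula series from Section 2, so that only real gamma arguments occur even for the rotated complex arguments (we assume $1-\mu+\sigma k > 0$, i.e., $\mu < 1$), and we restrict to even $n$ so that the $\Delta_n$ term drops out. Naive summation is only trustworthy for moderate $|x|$ because of cancellation.

```python
import cmath
import math

def wright_phi(z, sigma, mu, kmax=100):
    # phi(-sigma, mu; z) via the reflection-formula series of Section 2:
    # (1/pi) * sum_k z^k/k! * Gamma(1 - mu + sigma*k) * sin(pi*(mu - sigma*k)).
    # Only real gamma arguments occur, so complex z costs nothing extra.
    s = 0j
    for k in range(kmax):
        s += (z**k / math.gamma(k + 1) * math.gamma(1 - mu + sigma * k)
              * math.sin(math.pi * (mu - sigma * k)))
    return s / math.pi

def F(x, n, sigma, mu):
    # Eq. (25), restricted to even n so that the Delta_n term drops out.
    assert n % 2 == 0, "odd n needs the extra Delta_n * phi(-sigma, mu; x) term"
    total = 0.0
    for r in range(n // 2):
        w = (n - 2 * r - 1) * math.pi / (2 * n)   # omega_r from (4)
        rot = cmath.exp(-1j * w)
        total += 2 * (rot * wright_phi(x * rot, sigma, mu)).real
    return total
```

Since $\phi(-\sigma,\mu;0) = 1/\Gamma(\mu)$, one has $F(0) = \sqrt{2}/\Gamma(\mu)$ for $n = 2$; the known special case $\phi(-\frac{1}{2},\frac{1}{2};-x) = e^{-x^2/4}/\sqrt{\pi}$ (the M-Wright function $M_{1/2}$) gives an independent check of `wright_phi`.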

In Table 1, we present the results of numerical calculations for $x \to +\infty$ compared with the expansions given in Theorem 1. We choose four representative values of $\sigma$ that focus on the different cases of Theorem 1 and $n = 2, 3$ and $4$. The numerical value of $F_{n,\sigma}(x;\mu)$ was obtained by high-precision evaluation of (25). The exponential expansion **E**($x$) was computed with the truncation index $j = 3$, and the algebraic expansion **H**($x$) was optimally truncated (that is, at or near its smallest term).

The first case, $\sigma = \frac{1}{3}$, has an exponentially large expansion with a subdominant algebraic contribution for all three values of $n$. The second case, $\sigma = \frac{1}{2}$, corresponds to $n_0 = 2$; when $n = 2$, **E**($x$) is oscillatory and makes a contribution similar to **H**($x$), whereas when $n = 3$ and $4$, **E**($x$) is exponentially large. The third case, $\sigma = \frac{5}{9}$, corresponds to $n_0 = 3$; when $n = 2$, there is no exponential contribution, whereas when $n = 3$, **E**($x$) is oscillatory and thus makes a contribution similar to **H**($x$); when $n = 4$, **E**($x$) is exponentially large. Finally, when $\sigma = \frac{2}{3}$, the expansion of $F_{n,\sigma}(x;\mu)$ is purely algebraic in character.



In Table 2, we present illustrative examples of Theorem 2 when $x \to -\infty$. The first case, $\sigma = \frac{1}{4}$ ($\kappa = \frac{3}{4}$), has an expansion that is exponential in character; for $n < 1/\sigma = 4$, $\hat{\mathbf{E}}(x)$ is exponentially small, whereas for $n = 4$, the argument $\pi\kappa - \omega_0 = \frac{3}{8}\pi$ lies on the upper boundary of the exponentially large sector $|\arg z| < \frac{3}{8}\pi$, and thus $\hat{\mathbf{E}}(x)$ is oscillatory. For $n \ge 5$, $\hat{\mathbf{E}}(x)$ becomes exponentially large as $x \to -\infty$. In the second case, $\sigma = \frac{2}{5}$ ($\kappa = \frac{3}{5}$), $\hat{\mathbf{E}}(x)$ is exponentially small for $n = 2$ and exponentially large for $n \ge 3$.

In the third case, $\sigma = \kappa = \frac{1}{2}$, $\hat{\mathbf{E}}(x)$ is oscillatory for $n = 2$ and exponentially large for $n \ge 3$. Finally, when $\sigma = \frac{3}{4}$ ($\kappa = \frac{1}{4}$), the function $F_{n,\sigma}(x;\mu)$ is exponentially large for $n = 2, 3$ and $n \ge 5$. However, for $n = 4$, the two values $\omega_0 = \frac{3}{8}\pi$ and $\omega_1 = \frac{1}{8}\pi$ yield arguments $\pi\kappa - \omega_r$ ($r = 0, 1$) situated on *both* boundaries of the exponentially large sector $|\arg z| < \frac{1}{8}\pi$. In this case, $\hat{\mathbf{E}}(x)$ is oscillatory and, since $n^* = 2$, there is, in addition, an algebraic contribution $\hat{\mathbf{H}}(x)$.
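The integer thresholds quoted in Theorems 1 and 2 and used to organise Tables 1 and 2 can be cross-checked by direct arithmetic (the helper names are ours):

```python
def n0(sigma):       # Theorem 1: E(x) significant for n >= n0 when 1/2 <= sigma < 2/3
    return 1 / (2 - 3 * sigma)

def n_star(sigma):   # Section 5: algebraic contribution present for n > n_star
    return 1 / (2 * sigma - 1)

# Values quoted in the text: n0 = 2 (sigma = 1/2), n0 = 3 (sigma = 5/9),
# n_star = 2 (sigma = 3/4), and the boundary 1/sigma = 4 (sigma = 1/4).
print(n0(1/2), round(n0(5/9)), n_star(3/4), 1 / (1/4))
```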

**Table 2.** The values of the exponential and algebraic expansions compared with $F_{n,\sigma}(x;\mu)$ for large $x < 0$ for different values of $\sigma$ and $n$ when $\mu = 3/4$ and $|x| = 8$ (for $\sigma = 1/4, 1/2, 2/5$), $|x| = 5$ (for $\sigma = 3/4$).


#### **7. Concluding Remarks**

We employed the standard asymptotics of the Wright function $\Psi(z)$ defined in (6) to determine the asymptotic expansion of $F_{n,\sigma}(x;\mu)$ for $x \to \pm\infty$. We found that this behaviour depends critically on the parameter $\sigma$. The numerical results presented in Tables 1 and 2 demonstrate that the asymptotic forms of $F_{n,\sigma}(x;\mu)$ stated in Theorems 1 and 2 agree well with the numerically computed values of $F_{n,\sigma}(\pm x;\mu)$. In particular, we showed that, when $\sigma < \frac{1}{2}$ and $n < 1/\sigma$, the expansion of $F_{n,\sigma}(x;\mu)$ decays exponentially as $x \to -\infty$.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Fractional Calculus in Russia at the End of XIX Century**

**Sergei Rogosin \* and Maryna Dubatovskaya**

Department of Economics, Belarusian State University, 4, Nezavisimosti Ave, 220030 Minsk, Belarus; dubatovska@bsu.by

**\*** Correspondence: rogosin@bsu.by; Tel.: +375-17-282-22-84

**Abstract:** In this survey paper, we analyze the development of Fractional Calculus in Russia at the end of the XIX century, in particular, the results by A. V. Letnikov, N. Ya. Sonine, and P. A. Nekrasov. Some of the discussed results are either unknown or inaccessible.

**Keywords:** fractional integrals and derivatives; Grünwald-Letnikov approach; Sonine kernel; Nekrasov fractional derivative

**MSC:** primary 26A33; secondary 34A08; 34K37; 35R11; 39A70

#### **1. Introduction**

The year 1695 is considered the birth year of Fractional Calculus: it was then that Leibniz discussed the possibility of introducing the derivative of an arbitrary order in his letters to Wallis and Bernoulli. Several attempts were subsequently made to give a precise meaning to this new notion. A comprehensive and detailed analysis of the history of Fractional Calculus is given in Reference [1]. One of the most productive periods in this history was the middle-to-end of the XIX century. Here, we can mention works by Legendre, Fourier, Peacock, Kelland, Tardi, Roberts, and others. The most advanced approach to the determination of the fractional derivative of an arbitrary order was proposed by Liouville. A deep analysis of the results on this subject was given in the article [2] by Letnikov. In particular, he recognized the basic role of Liouville's approach. Letnikov said ([2], p. 92): ". . . we give a survey of the results by Liouville, whom we ought to consider as the first scientist who paid serious attention to clarifying the question of the derivative of an arbitrary order. In 1832, he started to publish a series of articles devoted to the foundation and the application of his theory of general differentiation, which is the first complete discussion of this topic. Before his work, only a few very important but not completely clear remarks were made on this subject".

It should be noted that the works by A. V. Letnikov constitute the first rigorous and comprehensive construction of the theory of fractional integro-differentiation. An extended description of Letnikov's results is presented in the articles [3,4] and in the book [5], written in Russian.

In the middle-to-end of the XIX century, interest in Fractional Calculus grew significantly in Russia [6–10]. One of the reasons for this was the high standard of research in Real and Complex Analysis in Russia during this period. Russian universities took care of the level of education of young scientists. Many applicants for a professorship were given the opportunity to spend 1–2 years at leading research centers and to attend the lectures of well-known mathematicians.

**Citation:** Rogosin, S.; Dubatovskaya, M. Fractional Calculus in Russia at the End of XIX Century. *Mathematics* **2021**, *9*, 1736. https://doi.org/10.3390/math9151736

Academic Editor: Clemente Cesarano

Received: 28 June 2021; Accepted: 19 July 2021; Published: 22 July 2021

Letnikov's results attracted people to this branch of science, at least in Russia. Nevertheless, these works remained unknown abroad and, for a long time, were inaccessible. After Letnikov's contribution, the serious works on Fractional Calculus in Russia in the second part of the XIX century were published by N. Ya. Sonine and P. A. Nekrasov. They introduced the complex-analytic technique into the study and application of derivatives and integrals of an arbitrary order. It should be noted that Complex Analysis was a traditionally highly developed discipline in Russia, starting from Leonhard Euler, who worked there for long periods (1727–1741 and 1766–1783). This part of Mathematical Analysis was essentially developed in the XIX century by M. V. Ostrogradsky, V. Ya. Bunyakovsky, P. L. Chebyshev, A. M. Lyapunov, and many others. In particular, Sonine and Nekrasov found a fractional analog of the classical Cauchy integral formula for analytic functions.

In our article, we describe the contributions of Alexey Vasil'evich Letnikov (1837–1888), Nikolai Yakovlevich Sonine (1849–1915), and Pavel Alekseevich Nekrasov (1853–1924) to Fractional Calculus and the role of these results in modern Fractional Calculus and its applications.

#### **2. Liouville's Approach and Its Analysis by Letnikov**

As already noted, A. V. Letnikov considered (see, e.g., References [2,6,7]) that Liouville's theory constitutes the only complete treatment of differentiation of an arbitrary order. While recognizing the great importance of this theory, Letnikov saw that certain of its parts did not receive a proper justification and led to some misunderstanding in the works of Liouville's followers.

Let us present here Letnikov's description of the elements of Liouville's theory, following Reference [2]. Letnikov started his analysis with the definitions given by Liouville.

**Definition 1.** *Let the function y*(*x*) *be represented in the form of the following series of exponents:*

$$y(x) = A_1 e^{m_1 x} + A_2 e^{m_2 x} + \dots, \tag{1}$$

*which is denoted for shortness as* $\sum A_m e^{mx}$.

*The fractional derivative of order p is defined by multiplying each term of the series by the p-th power of the index m:*

$$\frac{d^p y}{dx^p} = \sum A_m m^p e^{mx}. \tag{2}$$

*If p is negative, then Formula (2) determines the fractional integral of order* −*p.*

Liouville denoted the fractional integral of order −*p* by $\int^{-p} y\, dx^{-p}$. He considered this definition as the only possible way to generalize the usual derivative. Evaluating its role, Letnikov stressed that Definition 1 contains a key idea, establishing a deep analogy with differences and powers, and thus could lead to a simpler construction.

Nevertheless, the above definition has a very important restriction: it cannot be applied to an arbitrary function, since not every function possesses a representation as a series of exponents. Liouville himself understood this difficulty and proposed a way to overcome it. By performing the change of variable *z* = *e<sup>x</sup>* for the function *y* = *F*(*x*), one can expand the composite function *y* = *F*(ln *z*) (with *x* = ln *z*) in a converging power series:

$$F(\ln z) = \sum A\_m z^m. \tag{3}$$

Thus, the initial function *y* = *F*(*x*) admits a representation via a series of exponents

$$F(\mathbf{x}) = \sum A\_m e^{mx}.\tag{4}$$

But the possibility to represent *y* = *F*(ln *z*) in form (3) meets several restrictions. For instance, if we want a representation of *y* = *F*(ln *z*) as a series in positive powers of *z*, then all derivatives of *y* = *F*(*x*) at *x* = −∞ should be equal to zero since

$$(F(\ln z))'_z = \frac{F'(\ln z)}{z}, \quad (F(\ln z))''_z = \frac{F''(\ln z) - F'(\ln z)}{z^2}, \quad \dots$$

A similar restriction appears if we want to represent *y* = *F*(ln *z*) as a series in negative powers of *z*, since in this case we deal with the function *y* = *F*(−ln *z*). Such conditions look fairly strong. Moreover, they are neither necessary nor sufficient for a representation of the type (4).

Liouville met such a restriction when trying to calculate the fractional-order derivative of the power function. He started with the Euler formula

$$\frac{1}{x^m} = \frac{\int_0^\infty e^{-xz} z^{m-1}\, dz}{\Gamma(m)}.$$

Liouville supposed that the above integral can be represented in the form of the exponential sum $\sum A_n e^{-nx}$, in which all coefficients should be infinitely small. Then, using his main definition, Liouville arrived at the formula for the derivative of this function:

$$\frac{d^p}{dx^p}\frac{1}{x^m} = \frac{\int\_0^\infty e^{-xz}(-z)^p z^{m-1} dz}{\Gamma(m)}.\tag{5}$$

Thus, by the definition of the Γ-function, we get, after substituting *xz* = *t*, the following formula:

$$\frac{d^p}{dx^p}\frac{1}{x^m} = \frac{(-1)^p \Gamma(m+p)}{\Gamma(m)x^{m+p}}.\tag{6}$$
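As a quick numerical sanity check of (6) (our illustrative sketch, not part of the original treatment; all function names are ours), one can compare its right-hand side with an ordinary finite-difference derivative for integer *p*:

```python
import math

def liouville_rhs(m: int, p: int, x: float) -> float:
    """Right-hand side of (6): (-1)^p * Gamma(m+p) / (Gamma(m) * x^(m+p))."""
    return (-1) ** p * math.gamma(m + p) / (math.gamma(m) * x ** (m + p))

def nth_derivative(f, x: float, n: int, h: float = 1e-3) -> float:
    """Crude nested central-difference approximation of the n-th derivative of f at x."""
    if n == 0:
        return f(x)
    return (nth_derivative(f, x + h, n - 1, h) - nth_derivative(f, x - h, n - 1, h)) / (2 * h)

m, x = 2, 1.5
for p in (1, 2, 3):
    expected = liouville_rhs(m, p, x)
    numeric = nth_derivative(lambda t: t ** (-m), x, p)
    assert abs(expected - numeric) < 1e-4 * max(1.0, abs(expected))
```

For integer orders, the Γ-ratio in (6) is just the falling product m(m+1)…(m+p−1) with alternating sign, so the agreement is exact up to discretization error.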

In his first articles, Liouville used only the definition of the Γ-function of a positive variable (later, he noted that he had not been familiar with the general definition of the Γ-function by Legendre and Gauss). Therefore, he supposed that, due to the assumptions *m* > 0, *m* + *p* > 0, one needs to use in the above definition so-called auxiliary functions (a more detailed discussion of the role of auxiliary functions is presented below in Section 4; in fact, this notion appeared in the works by Liouville because he used the indefinite integral for fractional integration). Although important, the use of auxiliary functions did not lead to a general definition of the fractional derivative. Liouville showed that, if one supposed the existence of auxiliary functions, then they necessarily had to be entire functions.

Letnikov claimed, and proved, that it follows from his analysis that Liouville's formulas are general enough to need no auxiliary function. Later on, several attempts to correct Liouville's approach were made; in particular, Letnikov analyzed in Reference [2] the works by Kelland, Tardi, and Roberts. But the really rigorous approach, which transformed Liouville's formulas into a general definition of the fractional derivative, was proposed by Letnikov. We have to note that Letnikov used a definite integral in his construction (see Section 3.1); for such a construction, the notion of auxiliary function becomes needless.

#### **3. Letnikov's Contribution to Fractional Calculus**

*3.1. Letnikov or Grünwald-Letnikov Derivative*

Starting his work on the determination of the derivative of an arbitrary order, Letnikov posed this problem [6,7] as an interpolation, in form, of the elements of two sequences: the sequence of successive derivatives of the function *f*(*x*)

$$(a) \quad f(x),\ f'(x),\ f''(x),\ \dots,\ f^{(n)}(x),\ \dots \tag{7}$$

and of successive *n*-fold integrals of this function:

$$(b) \quad f(x),\ \int f(x)\,dx,\ \int^2 f(x)\,dx^2,\ \dots,\ \int^n f(x)\,dx^n,\ \dots \tag{8}$$

In other words, he tried to find a formula for the derivative of an arbitrary order *α* which, for nonnegative integers *α* = 0, 1, 2, ..., coincides with the corresponding elements of sequence (*a*) and, for nonpositive integers *α* = 0, −1, −2, ..., coincides with the corresponding elements of sequence (*b*). Denoting this object by

$$D^{\alpha}f(x) \quad \text{or} \quad \frac{d^{\alpha}f(x)}{dx^{\alpha}}.$$

he expected this new object to have (whenever possible) the same properties as the elements of sequences (*a*) and (*b*) have when *α* is an integer.

The next idea by Letnikov was to restrict the generality of the above question and to consider, instead of the sequence (*b*) (of indefinite *n*-fold integrals), a sequence of definite integrals, supposing that *f*(*x*) is continuous on a certain interval [*a*, *x*], i.e., to interpolate, in form, the elements of the double sequence

$$(A) \quad \dots,\ \int_a^x\!\int_a^x f(x)\,dx^2,\ \int_a^x f(x)\,dx,\ f(x),\ f'(x),\ f''(x),\ \dots \tag{9}$$

in which any element is the derivative of the previous one.

The corresponding interpolating object he denoted as

$$[D^\alpha f(x)]_a^x.$$

In order to get such interpolation, Letnikov proposed to examine the following formula:

$$\frac{\sum_{k=0}^{n} (-1)^k \binom{\alpha}{k}\, y(x - kh)}{h^\alpha}, \tag{10}$$

where $h = \frac{x-a}{n}$, and $\binom{\alpha}{k}$ denotes the (generalized) binomial coefficient. This approach was used independently by Grünwald [11] and by Letnikov [6]. When Letnikov found the paper by Grünwald, he decided to decline publication of his work but later changed his mind. In Reference [6], Letnikov developed the theory of the derivative of an arbitrary order more rigorously than in Reference [11] and found its relationship with many results known in this area.

Elementary algebra yields that, for *α* = *m* a positive integer, the derivative of the corresponding order can be defined as the limit of the above expression:

$$f^{(m)}(x) = \lim_{\delta \to 0} \frac{f(x) - \binom{m}{1} f(x-\delta) + \binom{m}{2} f(x-2\delta) - \dots + (-1)^n \binom{m}{n} f(x-n\delta)}{\delta^m}. \tag{11}$$

Here, *δ* → 0 is equivalent to *n* → ∞, but the sum in the numerator remains finite since all binomial coefficients $\binom{m}{k}$ with *k* > *m* vanish. Thus, Formula (11) can be taken as the definition of the derivative of order *m* ∈ N.
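Formula (11) is easy to test numerically; the following sketch (our illustration, with function names of our choosing) approximates the first and second derivatives of sin *x* by the backward binomial difference:

```python
import math

def gl_diff(f, x: float, m: int, a: float, n: int) -> float:
    """Backward binomial difference quotient (11) with delta = (x - a)/n;
    only the first m+1 terms of the numerator are nonzero."""
    delta = (x - a) / n
    s = sum((-1) ** k * math.comb(m, k) * f(x - k * delta) for k in range(m + 1))
    return s / delta ** m

# As n grows (delta shrinks), the quotient tends to the classical m-th derivative.
assert abs(gl_diff(math.sin, 1.0, 1, 0.0, 100_000) - math.cos(1.0)) < 1e-4
assert abs(gl_diff(math.sin, 1.0, 2, 0.0, 100_000) + math.sin(1.0)) < 1e-3
```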

Vice versa, for *α* = −*m* a negative integer, the expression under the limit sign on the right-hand side of (11) equals

$$\frac{f(\mathbf{x}) + \binom{m}{1}f(\mathbf{x} - \delta) + \binom{m}{2}f(\mathbf{x} - 2\delta) + \dots + \binom{m}{n}f(\mathbf{x} - n\delta)}{\delta^{-m}}.\tag{12}$$

Letnikov showed [6] (pp. 5–12) that the limit of this expression as *δ* → 0, or equivalently as *n* → ∞, is equal to the multiple integral, i.e., (in his notation)

$$[D^{-m} f(x)]_a^x = \lim_{\delta \to 0} \frac{f(x) + \binom{m}{1} f(x-\delta) + \binom{m}{2} f(x-2\delta) + \dots + \binom{m}{n} f(x-n\delta)}{\delta^{-m}} = \int_a^x dx_1 \int_a^{x_1} dx_2 \dots \int_a^{x_{m-1}} f(x_m)\, dx_m. \tag{13}$$

This magnitude $[D^{-m} f(x)]_a^x$ satisfies certain properties. First of all, if we apply to it a similar operation of order −*p*, *p* > 0, then we have

$$\left[D^{-p}D^{-m}f(\mathfrak{x})\right]\_{a}^{\mathfrak{x}} = \left[D^{-m-p}f(\mathfrak{x})\right]\_{a}^{\mathfrak{x}}.$$

Next, if we take the derivative $\frac{d^p}{dx^p}$ of order *p* > 0, then we have

$$\frac{d^p}{dx^p} \left[ D^{-m} f(x) \right]_a^x = \left[ D^{-m+p} f(x) \right]_a^x, \quad \text{if} \quad m > p,$$

and

$$\frac{d^p}{d\mathfrak{x}^p} \left[ D^{-m} f(\mathfrak{x}) \right]\_a^x = \frac{d^{p-m} f(\mathfrak{x})}{d\mathfrak{x}^{p-m}}, \quad \text{if} \quad m < p.$$

Thus, in particular, the symbol $[D^{-m} f(x)]_a^x$ denotes an *m*-times differentiable function all of whose derivatives up to order *m* − 1 vanish at *x* = *a*.

Formulas (13) and (11) coincide with the corresponding elements of the double sequence (*A*). This led Letnikov to the conclusion that the limit

$$[D^\alpha f(x)]_a^x := \lim_{\delta \to 0} \frac{f(x) - \binom{\alpha}{1} f(x-\delta) + \binom{\alpha}{2} f(x-2\delta) - \dots + (-1)^n \binom{\alpha}{n} f(x-n\delta)}{\delta^\alpha} \tag{14}$$

is a good candidate to solve the interpolation problem for the sequence (*A*), i.e., to be the derivative of arbitrary order.
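For genuinely fractional orders, the limit (14) can be approximated by truncating the sum at *n* terms. The sketch below (our illustration; the recursion for the generalized binomial coefficients is the standard one) recovers the known half-derivative of *f*(*x*) = *x* on [0, 1], namely $x^{1/2}/\Gamma(3/2)$:

```python
import math

def gl_fractional(f, x: float, alpha: float, a: float, n: int) -> float:
    """Truncated Grunwald-Letnikov sum (14) with delta = (x - a)/n.
    The coefficients (-1)^k * C(alpha, k) are built by the recursion
    c_k = c_{k-1} * (k - 1 - alpha) / k, starting from c_0 = 1."""
    delta = (x - a) / n
    coef, s = 1.0, f(x)
    for k in range(1, n + 1):
        coef *= (k - 1 - alpha) / k
        s += coef * f(x - k * delta)
    return s / delta ** alpha

# Half-derivative of f(x) = x on [0, 1]: the closed form is x^(1/2)/Gamma(3/2).
exact = 1.0 / math.gamma(1.5)
assert abs(gl_fractional(lambda t: t, 1.0, 0.5, 0.0, 4000) - exact) < 1e-2
```

For integer *α* the recursion makes all coefficients beyond the *α*-th vanish, so the same routine reproduces the classical derivatives as a special case.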

The relations of this new object to known formulas for fractional derivatives (integrals) were described by Letnikov [6] (p. 15) using the following elementary lemma:

**Lemma 1.** *Let* $(\alpha_k)$ *be a sequence of (real or complex) numbers such that*

$$\lim_{k\to\infty} \alpha_k = 0 \quad \textit{and} \quad \lim_{k\to\infty} (\alpha_1 + \alpha_2 + \dots + \alpha_k) = C,$$

*and let* $(\beta_k)$ *be a sequence of (real or complex) numbers such that*

$$\lim\_{k \to \infty} \beta\_k = 1.$$

*Then, the sequence of sums of their products has the same limit C, i.e.,*

$$\lim_{k \to \infty} (\alpha_1 \beta_1 + \alpha_2 \beta_2 + \dots + \alpha_k \beta_k) = \lim_{k \to \infty} (\alpha_1 + \alpha_2 + \dots + \alpha_k) = C.$$

The above formula is valid for $[D^\alpha f(x)]_a^x$ with *α* < 0 (i.e., for the fractional integral of order −*α* in modern language).

To justify the corresponding formula for $[D^\alpha f(x)]_a^x$ with *α* > 0 (i.e., the representation of the fractional derivative), Letnikov additionally supposed that the function *f*(*x*) is (*n* + 1)-times continuously differentiable on the interval (*a*, *x*), where *n* is the largest integer smaller than *α*, i.e., *n* < *α* < *n* + 1. Then, using a rather cumbersome transformation of the binomial coefficients [6] (pp. 21–26), he obtained that the limit in (14) is equal to:

$$[D^\alpha f(x)]_a^x = \frac{f(a)(x-a)^{-\alpha}}{\Gamma(-\alpha+1)} + \frac{f'(a)(x-a)^{-\alpha+1}}{\Gamma(-\alpha+2)} + \dots + \frac{f^{(n)}(a)(x-a)^{-\alpha+n}}{\Gamma(-\alpha+n+1)} \tag{15}$$

$$+ \frac{1}{\Gamma(-\alpha+n+1)} \int_a^x (x-\tau)^{-\alpha+n} f^{(n+1)}(\tau)\, d\tau.$$

Note that the same result is true if *α* ∈ C, Re *α* > 0. Integration by parts showed that (15) can be taken as the definition of fractional derivative of an arbitrary order *α* > 0. A slightly more general form can be written for any *s* ∈ Z, *s* ≥ *n*, *n* < Re *α* < *n* + 1 (of course, under additional smoothness conditions):

$$[D^\alpha f(x)]_a^x = \sum_{k=0}^s \frac{f^{(k)}(a)(x-a)^{-\alpha+k}}{\Gamma(-\alpha+k+1)} + \frac{1}{\Gamma(-\alpha+s+1)} \int_a^x (x-\tau)^{-\alpha+s} f^{(s+1)}(\tau)\, d\tau. \tag{16}$$
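One way to see that (16) does not depend on the choice of *s* is to evaluate it for two values of *s* and compare. The following sketch (our illustration; names and the midpoint quadrature are ours) does this for *f*(*x*) = *x*², *α* = 1/2, *a* = 0:

```python
import math

def letnikov_16(derivs_at_a, f_s_plus_1, x, a, alpha, s, quad_n=20000):
    """Evaluate Formula (16): the truncated series plus the remainder integral,
    the latter computed by the midpoint rule. derivs_at_a[k] = f^(k)(a) for
    k = 0..s; f_s_plus_1 is the (s+1)-th derivative of f."""
    series = sum(derivs_at_a[k] * (x - a) ** (k - alpha) / math.gamma(k - alpha + 1)
                 for k in range(s + 1))
    h = (x - a) / quad_n
    integral = sum((x - (a + (i + 0.5) * h)) ** (s - alpha) * f_s_plus_1(a + (i + 0.5) * h)
                   for i in range(quad_n)) * h
    return series + integral / math.gamma(s - alpha + 1)

# f(x) = x^2, alpha = 1/2, a = 0: the exact value is Gamma(3)/Gamma(5/2) * x^(3/2).
x = 1.0
exact = math.gamma(3) / math.gamma(2.5) * x ** 1.5
v_s1 = letnikov_16([0.0, 0.0], lambda t: 2.0, x, 0.0, 0.5, 1)       # s = 1: uses f'' = 2
v_s2 = letnikov_16([0.0, 0.0, 2.0], lambda t: 0.0, x, 0.0, 0.5, 2)  # s = 2: f''' = 0
assert abs(v_s1 - exact) < 1e-4 and abs(v_s2 - exact) < 1e-12
```

For *s* = 2 the remainder integral vanishes and the series term alone gives the exact answer, which is the integration-by-parts mechanism behind the *s*-invariance.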

In Reference [6], Letnikov paid attention to the relationship of his formulas with known constructions. In particular, he showed that, if the function *f*(*x*) is defined and infinitely differentiable on [*x*, ∞) and vanishes together with all its derivatives as *x* tends to ∞, then the following formula holds for any *α* with Re *α* < 0:

$$\left[D^\alpha f(x)\right]_{+\infty}^x = \frac{1}{\Gamma(-\alpha)} \int_{+\infty}^x (x-\tau)^{-\alpha-1} f(\tau)\, d\tau = \frac{1}{(-1)^\alpha\, \Gamma(-\alpha)} \int_0^{+\infty} z^{-\alpha-1} f(x+z)\, dz,$$

i.e., it coincides with the corresponding integral defined by Liouville. Similarly, for any *α* with 0 ≤ *n* < Re *α* < *n* + 1, Letnikov discovered that

$$[D^\alpha f(x)]_{+\infty}^x = \frac{1}{\Gamma(-\alpha+n+1)} \int_{+\infty}^x \frac{f^{(n+1)}(\tau)\, d\tau}{(x-\tau)^{\alpha-n}} = \frac{1}{(-1)^{\alpha-n-1}\, \Gamma(-\alpha+n+1)} \int_0^{+\infty} \frac{f^{(n+1)}(x+z)\, dz}{z^{\alpha-n}}. \tag{17}$$

He also noted that the considered class of functions is not empty; it contains, in particular, all functions of the form $x^m e^{-x}$.

In Reference [6], Letnikov also presented a series of formulas for the values of his derivative of an arbitrary order of elementary functions, such as the power function $(x-a)^\beta$, the exponential function $e^{mx}$, the logarithmic function $\log x$, the exponential-trigonometric functions $e^{\beta x}\sin\gamma x$, $e^{\beta x}\cos\gamma x$, and rational functions $\frac{P(x)}{Q(x)}$. These formulas coincide with those known nowadays (see, e.g., Reference [1,12]). Composition formulas for fractional derivatives and integrals were found in Reference [6], too. The last result presented in Reference [6] was the so-called Leibniz rule for the fractional derivative/integral of a product of functions. Note that, after the death of A. V. Letnikov, a committee was created to examine his manuscripts [13]. A few results were then published, but not all manuscripts were found. In particular, the members of the committee reported that they did not find any results on Abel integrals, as had been expected by some researchers.

#### *3.2. Solution to Certain Differential Equations*

In Reference [14], Liouville laid the groundwork for the further development of Fractional Calculus. In order to show the importance of the new branch of science, he solved in Reference [15] a number of problems (mainly from geometry, classical mechanics, and mathematical physics) by using his constructions of the integral and derivative of an arbitrary order. Later, in Reference [16], he also discussed the tautochrone problem and the use of fractional derivatives in its solution.

In his master thesis, Letnikov carefully examined these results by Liouville and came to the conclusion that Liouville's solutions of the problems can also be obtained by more traditional methods. He also remarked that incorrect usage of Liouville's construction by his followers led to certain misunderstandings and even mistakes. Note that Letnikov's master thesis was recently reprinted in Russian in Reference [4,5].

Nevertheless, Letnikov believed that the newly created technique could find proper applications. One of these applications was presented in his article, Reference [17], devoted to the use of the fractional derivative in the solution of the differential equation

$$(a\_n + b\_n x) \frac{d^n y}{dx^n} + (a\_{n-1} + b\_{n-1} x) \frac{d^{n-1} y}{dx^{n-1}} + \dots + (a\_0 + b\_0 x) y = 0. \tag{18}$$

Letnikov presented these results at the meeting of the Mathematical Society on 16 April 1876 and at the meeting of the Warsaw Congress of Naturalists on 3 September 1876. They were reprinted by P. A. Nekrasov, who examined Letnikov's archive after his death.

Denoting

$$\varphi(\rho) := a\_n \rho^n + a\_{n-1} \rho^{n-1} + \dots + a\_1 \rho + a\_0,\ \psi(\rho) := b\_n \rho^n + b\_{n-1} \rho^{n-1} + \dots + b\_1 \rho + b\_0.$$

Equation (18) can be rewritten in the following symbolic form:

$$
\varphi \left( \frac{d}{d\mathbf{x}} \right) y + \mathbf{x} \psi \left( \frac{d}{d\mathbf{x}} \right) y = 0. \tag{19}
$$

Suppose that the equation

$$
\psi(\lambda) = 0 \tag{20}
$$

has distinct zeroes $\lambda_1, \lambda_2, \dots, \lambda_n$. Denoting, for each $j = 1, 2, \dots, n$,

$$y := e^{\lambda_j x}\, Y, \tag{21}$$

one can rewrite (19) as

$$
\varphi \left( \lambda\_j + \frac{d}{d\mathbf{x}} \right) Y + \mathbf{x} \psi \left( \lambda\_j + \frac{d}{d\mathbf{x}} \right) Y = 0. \tag{22}
$$

A crucial idea by Letnikov was to look for the solution to Equation (22) in the form:

$$Y = \left[D^p y_j\right]_a^x, \tag{23}$$

where $y_j$ is a new unknown function, and $[D^p \cdot]_a^x$ is the inter-limit (Letnikov-type) derivative whose order *p* is to be defined later.

Let there exist a function $y_j$, vanishing at *x* = *a* together with all its derivatives up to order *n* − 2 and satisfying the following equation:

$$
\varphi\_j \left( \frac{d}{d\mathbf{x}} \right) y\_j + \mathbf{x} \psi\_j \left( \frac{d}{d\mathbf{x}} \right) y\_j = 0,\tag{24}
$$

where

$$\psi_j(\rho) = \frac{\psi(\lambda_j + \rho)}{\rho}, \quad p + 1 = \frac{\varphi(\lambda_j)}{\psi_j(0)} = \frac{\varphi(\lambda_j)}{\psi'(\lambda_j)} = A_j, \quad \varphi_j(\rho) = \frac{\varphi(\lambda_j + \rho) - A_j \psi_j(\rho)}{\rho}.$$

Then there exists a solution to Equation (19), represented in the form:

$$y = e^{\lambda\_j x} \left[ D^{A\_j - 1} y\_j \right]\_a^x. \tag{25}$$

Thus, by this transformation, we reduce Equation (18) of order *n* to Equation (24) of order *n* − 1. Applying this method repeatedly, one can reduce the order of the equation to 1 and obtain a solution via successive application of the inter-limit derivative.

#### **4. Sonine-Letnikov Discussion**

In 1868, A. V. Letnikov published the main part of his master thesis as an article in Matematicheskii Sbornik [6], supplemented by a historical survey on the development of the theory of differentiation of an arbitrary order [2]. This article, as well as Grünwald's article [11], was criticized by N. Ya. Sonine in Reference [9], where Sonine also presented his own approach to determining derivatives of an arbitrary order. Sonine's article started with a discussion of Liouville's definition of the derivative of an arbitrary order (not necessarily positive). The latter definition is based on the derivative of an arbitrary order *p* ∈ R of the exponential function

$$\frac{d^p}{dx^p} e^{mx} = m^p e^{mx}$$

and on the possibility to expand a differentiable function into an exponential series (a Dirichlet series in modern language):

$$f(\mathbf{x}) = \sum\_{k=-\infty}^{+\infty} A\_k e^{m\_k x}.$$

Sonine made two important remarks concerning Liouville's definition. First, he showed that the derivative of negative order (i.e., the fractional integral) cannot be considered as an inverse to the fractional derivative. The second remark by Sonine is related to a problem discovered by Liouville himself: if one applies Liouville's definition of the derivative of an arbitrary order to the power function, then it immediately leads to a kind of contradiction. Liouville discovered this phenomenon using the following representation of *x*:

$$\mathfrak{x} = \lim\_{\beta \to 0} \frac{e^{\beta \mathfrak{x}} - e^{-\beta \mathfrak{x}}}{2\beta}.$$

If we suppose that the limit and the fractional derivative are interchangeable, then the half-derivative of *x* becomes infinite. Moreover, the derivative of an infinitesimal quantity could be finite. From these facts, Liouville concluded the existence of additional functions whose derivative of the given order is zero and which coincide with entire functions with arbitrary coefficients. Contrariwise, Sonine showed that such a contradiction follows from a not completely rigorous expansion of the function into a series of exponents. He also remarked that Liouville's proof of the existence of additional functions lacks proper rigor.

In the second part of his article [9], Sonine criticized the approach by Grünwald and Letnikov, in which the fractional derivative is defined by the following limit:

$$D^\alpha[f(x)]_{x=a}^{x=x} := \lim_{h \to 0} \frac{f(x) - \binom{\alpha}{1} f(x-h) + \binom{\alpha}{2} f(x-2h) - \dots + (-1)^n \binom{\alpha}{n} f(x-nh)}{h^\alpha}, \tag{26}$$

where *nh* = *x* − *a*.

Sonine had two main objections. First, he noted that the series in the numerator of (26) is convergent; hence, the fraction should become infinite. The second remark by Sonine was that, if we apply the fractional derivative $D^\beta$ to the fractional integral $D^{-\alpha}$ (*α*, *β* > 0), then, by the Leibniz rule, the result should coincide with $D^{\beta-\alpha}$. This leads to a contradiction, even for the function *f*(*x*) = 1, since the result should exist for any *β*, but this is so only for Re *α* > [Re *β*], where [·] denotes the integer part of the number.

Concerning the first remark, Letnikov noted in Reference [7] that, in the case Re *α* > 0, Formula (26) determines the fractional integral (coinciding with the *m*-times repeated integral if *α* = *m* ∈ N) if the series in the numerator of (26) converges and its sum is equal to zero. Moreover, Letnikov gave sufficient conditions for the existence of the limit in (26).

The second question by Sonine arose due to his incorrect application of the Leibniz rule. Letnikov noted that the fractional integral and the fractional derivative are defined by different formulas:

$$D^\alpha[f(x)]_{x=a}^{x=x} = \frac{1}{\Gamma(-\alpha)} \int_a^x (x-t)^{-\alpha-1} f(t)\, dt, \quad \alpha < 0, \tag{27}$$

$$D^\alpha[f(x)]_{x=a}^{x=x} = \sum_{k=0}^{m} \frac{f^{(k)}(a)(x-a)^{-\alpha+k}}{\Gamma(-\alpha+k+1)} + \frac{1}{\Gamma(-\alpha+m+1)} \int_a^x (x-t)^{-\alpha+m} f^{(m+1)}(t)\, dt, \tag{28}$$

$$\alpha > 0, \quad m = [\alpha].$$

Both formulas apply under certain conditions. Thus, successive application of these two formulas can lead to a contradiction if we do not take these conditions into account.

#### **5. Sonine's Contribution to Fractional Calculus**

*5.1. Sonine's Fractional Derivative and Integral*

In his polemical article [9], N. Ya. Sonine not only criticized the Grünwald-Letnikov approach but also proposed another form of a "general" fractional derivative. For his formula, Sonine used a generalization of the Cauchy integral (or, better to say, of the Cauchy-type integral, since the integral below is defined for any continuous function *f*):

$$\frac{d^\alpha f(x)}{dx^\alpha} = \frac{\Gamma(\alpha+1)}{2\pi i} \int_\gamma \frac{f(\tau)\, d\tau}{(\tau-x)^{\alpha+1}}, \tag{29}$$

where *γ* is a closed simple smooth curve in the complex plane surrounding the point *x* (without loss of generality, one can assume that *γ* is the circle of radius *r* centered at the point *x*). This formula is indeed a good candidate for a generalization of the usual derivative since, for *α* = *p* a positive integer, (29) gives the value of the *p*-th derivative at the point *x*, assuming that the function *f* is *p*-times differentiable at *x*.

This formula was analyzed by Letnikov in his answer [7] to the remarks by Sonine. Letnikov proved that, under the assumption that the function *f* is (*m* + 1) = ([*α*] + 1)-times continuously differentiable inside the circle *γ*, Formula (29) coincides with Letnikov's Formula (28). Note that Letnikov thus discussed Sonine's Formula (29) under stronger assumptions on the function *f*:

$$\frac{d^\alpha f(x)}{dx^\alpha} = \frac{(-r)^{-\alpha} f(x+r)}{\Gamma(-\alpha+1)} + \frac{(-r)^{-\alpha+1} f'(x+r)}{\Gamma(-\alpha+2)} + \dots + \frac{(-r)^{-\alpha+m} f^{(m)}(x+r)}{\Gamma(-\alpha+m+1)} \tag{30}$$

$$+ \frac{\Gamma(\alpha-m)}{2\pi r^{\alpha-m-1}} \int_0^{2\pi} f^{(m+1)}(x+re^{i\theta})\, e^{-(\alpha-m-1)i\theta}\, d\theta.$$

Since the last integral can be transformed to the form

$$\frac{\Gamma(\alpha-m)}{2\pi r^{\alpha-m-1}} \int_0^{2\pi} f^{(m+1)}(x+re^{i\theta})\, e^{-(\alpha-m-1)i\theta}\, d\theta = \frac{(-1)^{-\alpha+m}}{\Gamma(-\alpha+m+1)} \int_{x+r}^{x} f^{(m+1)}(\tau)\, (x-\tau)^{-\alpha+m}\, d\tau,$$

then Formula (29) coincides with the definition (28) of the fractional derivative given by Letnikov. In Reference [9], Sonine concluded that his formula cannot coincide with the Grünwald-Letnikov formula for *α* > 0 without adding an auxiliary function.

Sonine's definition of the fractional derivative of negative order (i.e., the fractional integral) was criticized by Letnikov in Reference [7]. Sonine used the Leibniz rule (composition formula) for fractional derivatives (which is generally not valid; see Reference [1]). The fractional integral is defined by Sonine as the inverse operation to the fractional derivative:

$$\frac{d^\alpha}{dx^\alpha} \frac{d^{-\alpha} f(x)}{dx^{-\alpha}} = f(x). \tag{31}$$

From this formula, Sonine concluded that there should exist a so-called auxiliary function *ψ*(*x*) satisfying the following relation:

$$\frac{d^\alpha \psi(x)}{dx^\alpha} = 0. \tag{32}$$

Sonine took the function *ψ*(*x*) in the form

$$\psi(x) = A_1(x-a)^{\alpha-1} + A_2(x-a)^{\alpha-2} + \dots + A_k(x-a)^{\alpha-k},$$

where $A_j$, $j = 1, 2, \dots, k$, are arbitrary constants and *k* = [*α*] + 1, but *a* is not defined by Sonine. If the Cauchy formula is taken as the definition of the derivative of an arbitrary order, then the auxiliary function has to satisfy the relation

$$\int\_{\gamma} \frac{\psi(\tau)d\tau}{(\tau - x)^{\alpha + 1}} = 0.$$

Since it was shown that, by integration by parts, Formula (29) reduces to the definition (28) of the fractional derivative without any auxiliary function, Letnikov concluded that the following alternative holds: either (1) there exists no auxiliary function of the above form, or (2) Formula (29) cannot be taken as the general definition of the fractional derivative of an arbitrary order. Instead, he noted that his own definition needs no auxiliary function (in spite of the fact that this definition reduces to different forms, (27) and (28), according as *α* is negative or positive).

#### *5.2. Sonine Kernel and Sonine Integral Equations*

In one of his pioneering articles [18], Abel presented the solution to the integral equation

$$\int_0^x \frac{\varphi(\tau)\, d\tau}{(x-\tau)^{1-\alpha}} = F(x), \quad 0 < \alpha < 1. \tag{33}$$

The main component of Abel's method was the following identity:

$$\int_0^x f(t)\, dt = \frac{\sin \pi\alpha}{\pi} \int_0^x \frac{dt}{(x-t)^{1-\alpha}} \int_0^t \frac{f(\tau)\, d\tau}{(t-\tau)^\alpha}, \tag{34}$$

where *f*(*x*) = *F*′(*x*).
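For *f* ≡ 1, both sides of (34) can be evaluated in closed form (the inner and outer integrals reduce to Beta-function values), which gives a quick check of the identity. The sketch below is our illustration, with names of our choosing:

```python
import math

def abel_rhs_for_constant_f(x: float, alpha: float) -> float:
    """RHS of (34) for f = 1: the inner integral is t^(1-alpha)/(1-alpha), and the
    outer one then equals B(alpha, 2-alpha) * x/(1-alpha) = Gamma(alpha)*Gamma(1-alpha) * x."""
    beta_val = math.gamma(alpha) * math.gamma(2 - alpha) / math.gamma(2)
    return math.sin(math.pi * alpha) / math.pi * beta_val * x / (1 - alpha)

# The LHS of (34) for f = 1 is simply int_0^x dt = x.
for alpha in (0.25, 0.5, 0.9):
    assert abs(abel_rhs_for_constant_f(2.0, alpha) - 2.0) < 1e-12
```

The cancellation works because Γ(α)Γ(1−α) = π/sin(πα), the reflection formula, which is exactly the factor sin(πα)/π is there to undo.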

Sonine tried to generalize Abel's identity (34) in order to solve a more general equation than integral Equation (33). He looked for a pair of functions *σ*(*x*), *ψ*(*x*) satisfying the identity

$$\int_0^x f(t)\, dt = \int_0^x \psi(x-t)\, dt \int_0^t f(\tau)\, \sigma(t-\tau)\, d\tau, \tag{35}$$

that is, a pair of functions generating an integral representation of unity:

$$1 = \int_0^x \psi(x-t)\, \sigma(t)\, dt. \tag{36}$$

Sonine described in Reference [19] a possible form of the functions *σ*(*x*), *ψ*(*x*):

$$\sigma(t) = \frac{t^{-p}}{\Gamma(1-p)} \sum_{k=0}^{\infty} a_k t^k, \quad \psi(t) = \frac{t^{-q}}{\Gamma(1-q)} \sum_{k=0}^{\infty} b_k t^k,$$

where *p* + *q* = 1, and the coefficients $a_k$, $b_k$ are defined by the following relations:

$$a_0 b_0 = 1, \qquad \sum_{k=0}^n \Gamma(k+p)\,\Gamma(q+n-k)\, a_{n-k} b_k = 0, \quad n = 1, 2, \dots$$

He also applied relation (36) to represent the solution of integral equations of the first kind with one of these functions as the kernel:

$$\int_0^x \sigma(x-\tau)\, \varphi(\tau)\, d\tau = f(x). \tag{37}$$

Both functions *σ*(*x*), *ψ*(*x*) are known as Sonine kernels, and integral Equation (37), generalizing the Abel integral Equation (33), is called a Sonine integral equation. In modern language (see, e.g., Reference [20]), a locally integrable function *σ*(*x*) is called a Sonine kernel if there exists another locally integrable function *ψ*(*x*) such that the following identity holds:

$$\int_0^x \sigma(x-\tau)\, \psi(\tau)\, d\tau = 1, \ x > 0. \tag{38}$$

In fact, the function *ψ*(*x*) is also called the Sonine kernel (sometimes, these functions are called the associated Sonine kernels).
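The classical example of such a pair is $\sigma(t) = t^{-\beta}/\Gamma(1-\beta)$, $\psi(t) = t^{\beta-1}/\Gamma(\beta)$ with 0 < *β* < 1, for which (38) reduces to the Beta-function identity $B(1-\beta, \beta) = \Gamma(1-\beta)\Gamma(\beta)$. A small numerical check of (38) for *β* = 1/2 (our illustration; the quadrature scheme is ours) reads:

```python
import math

def sonine_condition_value(beta: float, n: int = 200_000) -> float:
    """Midpoint-rule value of int_0^x sigma(x - tau) psi(tau) d tau for the pair
    sigma(t) = t^(-beta)/Gamma(1-beta), psi(t) = t^(beta-1)/Gamma(beta).
    With the substitution tau = x*u the powers of x cancel, so the value
    does not depend on x."""
    h = 1.0 / n
    s = sum((1 - (i + 0.5) * h) ** (-beta) * ((i + 0.5) * h) ** (beta - 1)
            for i in range(n))
    return s * h / (math.gamma(1 - beta) * math.gamma(beta))

# Exact value of (38) for this pair: B(1-beta, beta)/(Gamma(1-beta)*Gamma(beta)) = 1.
assert abs(sonine_condition_value(0.5) - 1.0) < 1e-2
```

The modest tolerance reflects the integrable endpoint singularities of the integrand, which the plain midpoint rule resolves only slowly.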

Several special examples of Sonine kernels are presented in Reference [21]. We can also mention Reference [20], in which the properties of Sonine kernels are discussed in a modern setting: several difficulties that one has to overcome in a formal application of Sonine's approach to the solution of the corresponding integral equations are discovered, and possible ways to overcome these difficulties are shown.

In Reference [22], general fractional integrals and derivatives of arbitrary order are introduced, along with a study of some of their basic properties and particular cases. First, a suitable generalization of the Sonine condition is presented, and some important classes of kernels satisfying this condition are introduced. In introducing the general fractional integrals and derivatives, the author follows a recent approach by Kochubei [23]. The general fractional integrals and derivatives with Sonine kernels are defined in the Riemann-Liouville form (see Reference [21,22] and the references therein):

$$(\mathbb{I}\_{\sigma}f)(\mathbf{x}) = \int\_{0}^{\mathbf{x}} \sigma(\mathbf{x} - t) f(t) dt, \; \mathbf{x} > 0,\tag{39}$$

$$(\mathbb{D}\_{\Psi}f)(\mathbf{x}) = \frac{d}{d\mathbf{x}} \int\_{0}^{\mathbf{x}} \psi(\mathbf{x} - t) f(t) dt, \; \mathbf{x} > 0,\tag{40}$$

where the functions *σ*(*x*), *ψ*(*x*) are associated Sonine kernels. Operators (39) and (40) are discussed in Reference [21–23] under different conditions on the Sonine kernels; the constructions are similar not only to Riemann-Liouville-type fractional integrals and derivatives but also to Dzhrbashian-Caputo-type and Marchaud-type ones.
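With the power kernels $\sigma(t) = t^{\beta-1}/\Gamma(\beta)$ and $\psi(t) = t^{-\beta}/\Gamma(1-\beta)$, the operators (39) and (40) become the Riemann-Liouville fractional integral and derivative of order *β*, and on power functions they act simply through ratios of Γ-values. The following sketch (our illustration; function names are ours) verifies $(\mathbb{D}_\psi \mathbb{I}_\sigma f)(x) = f(x)$ for *f*(*x*) = *x* and *β* = 1/2 by this Γ-arithmetic:

```python
import math

def rl_integral_power(mu: float, beta: float):
    """(I_sigma f)(x) for f(x) = x^mu with sigma(t) = t^(beta-1)/Gamma(beta):
    int_0^x (x-t)^(beta-1) t^mu dt / Gamma(beta) = Gamma(mu+1)/Gamma(mu+beta+1) * x^(mu+beta).
    Returns the pair (coefficient, exponent) of the resulting power function."""
    return math.gamma(mu + 1) / math.gamma(mu + beta + 1), mu + beta

def rl_derivative_power(coef: float, mu: float, beta: float):
    """(D_psi g)(x) for g(x) = coef * x^mu with psi(t) = t^(-beta)/Gamma(1-beta):
    convolve with psi (an integral of order 1-beta), then take d/dx of the power."""
    c1 = coef * math.gamma(mu + 1) / math.gamma(mu + 2 - beta)
    e1 = mu + 1 - beta
    return c1 * e1, e1 - 1

beta = 0.5
c, e = rl_integral_power(1.0, beta)        # I_sigma x = x^(3/2)/Gamma(5/2)
c2, e2 = rl_derivative_power(c, e, beta)   # D_psi (I_sigma x)
assert abs(c2 - 1.0) < 1e-12 and abs(e2 - 1.0) < 1e-12   # recovers f(x) = x
```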

#### *5.3. Higher Order Hypergeometric Functions*

The main research interest of Sonine was the study of the properties of several classes of special functions. His results served as an impetus for the development of the theory of cylindrical functions (or Bessel-type functions) in the second half of the XIX century. These results are based on the achievements of C. Neumann, O. Schlömilch, E. Lommel, H. Hankel, N. Nielsen, L. Schläfli, L. Gegenbauer, and others (see, e.g., Reference [24,25]). Sonine defined in Reference [26] the cylindrical functions $S_\nu(z)$ as partial solutions to the following system of functional-differential equations:

$$\begin{cases} S\_{\nu+1}(z) + 2S\_{\nu}'(z) - S\_{\nu-1}(z) = 0, \\ 2\nu S\_{\nu}(z) = z[S\_{\nu-1}(z) + S\_{\nu+1}(z)], \\ S\_1(z) = -S\_0'(z), \end{cases} \tag{41}$$

where *z* is the complex variable, and *ν* is an arbitrary complex parameter. Sonine proved that these partial solutions admit an integral representation:

$$S\_{\nu}(z) = \frac{1}{2\pi i} \int\_{a}^{b} \exp\left\{ \frac{z}{2} \left( t - \frac{1}{t} \right) \right\} \frac{dt}{t^{\nu+1}}.\tag{42}$$

He found four possible cases for the limits of integration $a$, $b$, namely: (1) $\infty \cdot \alpha$, $\infty \cdot \beta$; (2) $-0^{\alpha}$, $-0^{\beta}$; (3) $-0^{\alpha}$, $\infty \cdot \beta$; (4) $\mathrm{Im}(za) = \pm\infty$, $\mathrm{Im}(zb) = \pm\infty$, where, in cases (1)–(3), $\mathrm{Re}(za) < 0$, $\mathrm{Re}(zb) < 0$, while, in case (4), $\mathrm{Re}(\nu) > 0$. Sonine denoted the functions obtained in these four cases by $S\_{\nu}^{(k)}(z)$ and showed that

$$S\_{\nu}^{(1)}(z) = f\_{\nu}(z),\ S\_{\nu}^{(2)}(z) = e^{-\nu \pi i} f\_{-\nu}(z),\ S\_{\nu}^{(3)}(z) = \frac{1}{2} H\_{\nu}^{(1)}(z),\ S\_{\nu}^{(4)}(z) = f\_{\nu}(z).$$

The above integral representation (42) is called the Sonine integral representation. It is a source for obtaining new representations of cylindrical functions (see Reference [25]), as well as for the calculation of certain definite integrals. Among these integrals are those known as the first and the second Sonine integrals, respectively (or the classical Sonine formulas); see, e.g., Reference [27]:

$$J\_{\nu+\mu+1}(aq) = \frac{q^{\nu+1}}{2^{\nu}\Gamma(\nu+1)a^{\nu+\mu+1}} \int\_{0}^{a} J\_{\mu}(qx)(a^{2} - x^{2})^{\nu}x^{\mu+1}dx,\tag{43}$$

$$\int\_0^a J\_{\mu}(qx) J\_{\nu}[z\sqrt{a^2 - x^2}] (a^2 - x^2)^{\frac{\nu}{2}} x^{\mu + 1} dx = a^{\nu + \mu + 1} q^{\mu} z^{\nu} \frac{J\_{\nu + \mu + 1}(a\sqrt{q^2 + z^2})}{(\sqrt{q^2 + z^2})^{\nu + \mu + 1}},\tag{44}$$

where $\mathrm{Re}\,\nu, \mathrm{Re}\,\mu > -1$. The Sonine formulas are of interest in various questions of analysis (e.g., in Dunkl theory, as in Reference [28], or in the study of Lévy processes [29]).
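The classical Sonine formulas lend themselves to direct numerical verification. The sketch below (helper names are ours) implements $J\_{\nu}$ by its power series and evaluates the right-hand side of the first Sonine integral, using the standard form of the formula with the prefactor $q^{\nu+1}/(2^{\nu}\Gamma(\nu+1)a^{\nu+\mu+1})$; for sample parameters it should reproduce $J\_{\nu+\mu+1}(aq)$:

```python
from math import gamma

def bessel_j(nu, x, terms=40):
    # Power series of the Bessel function J_nu(x); adequate for moderate x
    return sum((-1) ** k * (x / 2) ** (2 * k + nu) / (gamma(k + 1) * gamma(k + nu + 1))
               for k in range(terms))

def first_sonine_rhs(mu, nu, a, q, n=4000):
    # q^(nu+1) / (2^nu Gamma(nu+1) a^(nu+mu+1)) *
    #   int_0^a J_mu(q x) (a^2 - x^2)^nu x^(mu+1) dx   (midpoint rule)
    h = a / n
    integral = sum(bessel_j(mu, q * x) * (a * a - x * x) ** nu * x ** (mu + 1)
                   for x in ((i + 0.5) * h for i in range(n))) * h
    return q ** (nu + 1) / (2 ** nu * gamma(nu + 1) * a ** (nu + mu + 1)) * integral
```

For instance, with $\mu = 0$, $\nu = 1/2$, $a = q = 1$, the computed value agrees with $J\_{3/2}(1)$ to several digits.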

There exist several multivariate extensions of the classical Sonine integral representation for Bessel functions of some index *μ* + *ν* in terms of such functions of lower index *μ* (see, e.g., Reference [30]). For Bessel functions on matrix cones, the Sonine formulas involve beta densities $\beta\_{\mu,\nu}$ on the cone and go back to Herz.

Several important results dealing with the properties of the Γ-function were obtained by Sonine during his career. They are based on the study of the solutions to the difference equation

$$F(x + 1) - F(x) = f(x).\tag{45}$$

In these works, Sonine followed the idea by Binet (1838), who examined the relation

$$
\log \Gamma(x + 1) - \log \Gamma(x) = \log x.
$$
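Binet's relation, i.e., Equation (45) with $f(x) = \log x$ and $F(x) = \log \Gamma(x)$, is immediate to confirm with the standard library (a trivial sketch of ours, not code from the paper):

```python
from math import lgamma, log

def gamma_log_difference(x):
    # log Gamma(x + 1) - log Gamma(x); by Gamma(x + 1) = x Gamma(x)
    # this equals log x for every x > 0
    return lgamma(x + 1) - lgamma(x)
```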

Sonine found [31], in particular, the form of the remainder factor in the product representation of the Γ-function

$$\Gamma(x+1) = \frac{n!(n+1)^x}{(x+1)(x+2)\cdots(x+n)} \frac{\left(1+\frac{x}{n+1}\right)^{x+n+\theta}}{\left(1+\frac{1}{n+1}\right)^{x(1+n+\theta)}}, \; x \in \mathbb{R}, \ 0 < \theta < 1. \tag{46}$$

In his article on Bernoulli polynomials, Sonine obtained one more representation related to the Γ-function (this formula was rediscovered by Ch. Hermite in 1895):

$$\log \frac{\Gamma(x+y)}{\Gamma(y)} = x \log y + \sum\_{k=2}^{n} \frac{(-1)^{k} \varphi\_{k}(x)}{(k-1)ky^{k-1}} + R\_{n}(x,y),\tag{47}$$

where $\varphi\_k(x)$ are the Bernoulli polynomials, defined by Sonine via the difference equation

$$
\varphi\_k(x+1) - \varphi\_k(x) = k x^{k-1}, \quad \varphi\_k(0) = 0, \; k = 1, 2, \dots
$$
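In modern notation, the polynomials fixed by this difference equation are $\varphi\_k(x) = B\_k(x) - B\_k(0)$, where $B\_k$ is the $k$-th Bernoulli polynomial. A short exact-arithmetic sketch (helper names are ours) verifies both defining properties:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0, ..., B_n from the recurrence sum_{j=0}^{k} C(k+1, j) B_j = 0
    B = [Fraction(1)]
    for k in range(1, n + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

def phi(k, x):
    # phi_k(x) = B_k(x) - B_k(0), so that phi_k(x + 1) - phi_k(x) = k x^(k-1)
    # and phi_k(0) = 0
    B = bernoulli_numbers(k)
    x = Fraction(x)
    return sum(comb(k, j) * B[j] * x ** (k - j) for j in range(k + 1)) - B[k]
```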

Reference [32] contains a number of the most important articles by N. Ya. Sonine, as well as a survey on his other research.

#### **6. Nekrasov's Contribution to Fractional Calculus**

In Reference [8], Nekrasov proposed a new definition of general differentiation. In fact, this definition includes Letnikov's definition as a special case. Nekrasov's main idea was to define the derivative by integration along a closed contour *L* passing through the point *x* and encircling a group of singular points of the differentiated function *f*(*x*). This definition gives, in fact, differentiation with respect to a doubly connected domain that is free of the singular points of *f*(*x*) and contains the aforesaid contour *L*. In this way, Nekrasov used the ideas of Sonine (to take into account the properties of the analytic continuation of a given function and to apply the properties of functions in complex domains). The main aim of Nekrasov's construction was to extend the class of functions to which general differentiation can be applied.

It should be noted that the construction proposed by Nekrasov is fairly complicated and requires the use of properties of functions on Riemann surfaces. This follows from the properties of the functions to which Nekrasov tried to apply his definition. The starting point of his construction is the notion of the classes (*q*, *μ*) of functions. Let *L* be a closed contour encircling a group of singular points of the function *f*(*z*). Let the function *f*(*z*) have the following property: if the point *z* makes a complete detour along *L* in the counterclockwise direction, then the function *f*(*z*), continuously changing, gains the multiplier $e^{2\pi q i}$. Then, this function is of class (*q*, 0). Thus, any function of the class (*q*, 0) can be represented in the form $f(z) = (z-a)^q \varphi(z)$, where *a* lies inside *L*, and $\varphi(z)$ is of the class (0, 0). A function of the form $f(z) = (z-a)^q \log^{\mu}(z-a)\, \varphi(z)$, with $\varphi(z)$ being of the class (0, 0), is said to belong to the class (*q*, *μ*) (with *q* being the power index and *μ* being the logarithmic index, which is supposed to be a nonnegative integer). It is clear that, if the function *f*(*z*) belongs to the class (*q*, *μ*), then it belongs to any class (*q* ± *m*, *μ*), *m* ∈ N. Clearly, this definition depends on the choice of the contour *L*.

A function *f*(*z*) that can be represented in the form of a sum of a finite number *n* of functions belonging to different classes with respect to the contour *L* is said to be reducible to *n* classes (or simply reducible).

Let the function *f*(*z*) be reducible to *n* classes with zero logarithmic indices, i.e.,

$$f(z) = f\_0(z) + f\_1(z) + \dots + f\_{n-1}(z),\tag{48}$$

where $f\_j(z)$ is of class $(q\_j, 0)$. Then, we have the following representation:

$$f(z) + \int\_{(L^k)}^{(z)} \frac{df(t)}{dt} dt = \alpha\_0^k f\_0(z) + \alpha\_1^k f\_1(z) + \dots + \alpha\_{n-1}^k f\_{n-1}(z),\tag{49}$$

where the integration is performed along the contour *L*, traversed *k* times in the counterclockwise direction starting from the point *z*, and $\alpha\_j = e^{2\pi q\_j i}$. By assigning the values 0, 1, ..., *n* − 1 to the parameter *k*, we obtain a system of equations sufficient for the determination of the functions $f\_0(z), f\_1(z), \dots, f\_{n-1}(z)$.

Let the function *f*(*z*) be reducible to *n* classes with non-zero logarithmic indices, i.e.,

$$\begin{aligned} f(z) &= \sum\_{s=0}^{n\_0-1} (z-a)^{q\_{s,0}} \varphi\_{s,0}(z) + \\ &\sum\_{s=0}^{n\_1-1} (z-a)^{q\_{s,1}} \log(z-a)\, \varphi\_{s,1}(z) + \dots + \sum\_{s=0}^{n\_{\mu}-1} (z-a)^{q\_{s,\mu}} \log^{\mu}(z-a)\, \varphi\_{s,\mu}(z), \end{aligned} \tag{50}$$

where $n\_0 + n\_1 + \dots + n\_{\mu} = n$, and all functions $\varphi\_{s,j}(z)$ are of class (0, 0). Then, we have the following representation:

$$\begin{aligned} f(z) &+ \int\_{(L^k)}^{(z)} \frac{df(t)}{dt} dt = \sum\_{s=0}^{n\_0 - 1} \alpha\_{s,0}^k (z - a)^{q\_{s,0}} \varphi\_{s,0}(z) + \\ &\sum\_{s=0}^{n\_1 - 1} \alpha\_{s,1}^k (z - a)^{q\_{s,1}} \{2k\pi i + \log(z - a)\} \varphi\_{s,1}(z) + \dots \\ &+ \sum\_{s=0}^{n\_{\mu} - 1} \alpha\_{s,\mu}^k (z - a)^{q\_{s,\mu}} \{2k\pi i + \log(z - a)\}^{\mu} \varphi\_{s,\mu}(z), \end{aligned} \tag{51}$$

where the integration is performed along the contour *L*, traversed *k* times in the counterclockwise direction starting from the point *z*, and $\alpha\_{s,j} = e^{2\pi q\_{s,j} i}$. By assigning the values 0, 1, ..., *n* − 1 to the parameter *k*, we obtain a system of equations sufficient for the determination of the functions $\varphi\_{s,0}(z), \varphi\_{s,1}(z), \dots, \varphi\_{s,n-1}(z)$.

Therefore, in both cases, we have a finite-sum representation of the function *f*(*z*) of the considered form. Now, the question is how to define the integral/derivative of arbitrary order of each component of the representation (48) or (50). Moreover, any function *f*(*z*) of the class (*q*, *μ*) can be determined as the following limit:

$$f(z) = \lim\_{h \to 0} \left[ (z - a)^q \left( \frac{(z - a)^h - 1}{h} \right)^{\mu} \right]. \tag{52}$$

Thus, we have the limit of a finite sum of functions belonging to the classes $(q, 0), (q+h, 0), \dots, (q+\mu h, 0)$. Therefore, the definition of the integral/derivative of arbitrary order of any reducible function is completely described once it is defined for a function of class (*q*, 0). Nekrasov noted that his construction of the derivative, generally speaking, cannot be rigorously defined in the case when *f*(*z*) is reducible to an infinite number of classes.
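The limit (52) is elementary to confirm numerically, since $((z-a)^h - 1)/h \to \log(z-a)$ as $h \to 0$ (a throwaway sketch; the function name is ours):

```python
from math import log

def class_function_approx(z, a, q, mu, h):
    # Finite-h approximation of (52); as h -> 0 it tends to
    # (z - a)^q * log(z - a)^mu
    w = z - a
    return w ** q * ((w ** h - 1) / h) ** mu
```

For example, with $z = 2$, $a = 0$, $q = 1/2$, $\mu = 2$, and $h = 10^{-6}$, the value agrees with $\sqrt{2}\,\log^2 2$ to several digits.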

For this definition, Nekrasov used Letnikov's formulas. The only difference is that the contour of integration is now a specially deformed curve on a Riemann surface (which depends on the order of the derivative and can be either finite-sheeted or infinite-sheeted).

In Reference [33], Nekrasov applied his construction to determine the solution of the following differential equation:

$$\sum\_{s=0}^{n} (a\_s + b\_s x) x^s \frac{d^s y}{d x^s} = 0,\tag{53}$$

which is closely related to the generalized hypergeometric function ${}\_pF\_q(z)$.

#### **7. Conclusions**

In this article, we analyzed some important results by Russian mathematicians of the end of the XIX century, namely A. V. Letnikov, N. Ya. Sonine, and P. A. Nekrasov. The main attention is paid to their contribution related to Fractional Calculus and Special Functions Theory. Some of these results are presented for the first time in English.

Our article serves to clarify the initial steps in the development of Fractional Calculus. We believe that it will be useful and interesting for members of the fractional calculus community.

#### *Short Biographies*

Letnikov Alexey Vasil'evich (1837–1888), Russian mathematician. A short biography.

A. V. Letnikov was born on 1 January 1837, in Moscow, Russia. When Alexey was 8 years old, his father died, and his mother tried to provide an education for Alexey and his sister. She sent Alexey to a grammar school in 1847. In spite of his evident abilities, he was not very successful there, and he was therefore moved to Konstantin's land-surveyors institute (a full-time provisional military-type institute), a second-rank educational establishment. Its director noticed Alexey's keen interest in mathematics, supported his growth in the subject, and decided to prepare him for a career as a teacher of mathematics in Konstantin's land-surveyors institute. To obtain the corresponding position, Letnikov was sent to Moscow University, where he studied mathematics for two years (1856–1858) as an external student.

After graduation, he was sent to Paris for two years in order to extend his knowledge in the best-known mathematical center of the time and to study the structure and content of technical education in France. In Paris, Letnikov attended the lectures of many well-known mathematicians (Liouville among them) at the École Polytechnique, the Collège de France, and the Sorbonne.

Returning from Paris in December 1860, he was appointed as a teacher in the engineering class of Konstantin's land-surveyors institute and started to teach Probability Theory. Letnikov actively participated in the mathematical life of Moscow. In particular, he was among the founders of the Moscow Mathematical Society in 1864. In 1863, a new Statute of Higher Education was approved. Among other regulations, it was supposed to enlarge the number of chairs at universities and to recruit new university teachers. To get a position at a university, one needed either to pass the gymnasium graduation exams or to receive a degree at a foreign university. Letnikov decided to use the second option. In 1867, he defended his PhD, "Über die Bedingungen der Integrabilität einiger Differential-Gleichungen", at Leipzig University. In 1868, he got a position at the recently reopened Imperial Technical College (now Bauman's Technical University). Letnikov worked at this College up to 1883, when he moved to Alexandrov's Commercial College, combining this job with part-time teaching at Konstantin's land-surveyors institute and the Imperial Technical College.

It was an active time for him: he was awarded the rank of state councillor, received the order of Saint Stanislav, and was appointed in 1884 a corresponding member of the St. Petersburg Academy of Sciences (on the recommendation of V. Imshenetsky, V. Bunyakovsky, and O. Backlund). At the end of the 1880s, Letnikov was due to receive a state pension and was supposed to leave teaching and concentrate on research. He was dreaming of getting a position at Moscow University. It was not to happen: at the opening ceremony of a new building of Alexandrov's Commercial College, he caught a cold. He had had no serious illness before and continued to deliver lectures. But, this time, the illness was severe, and he died on 27 February 1888.

Sonine Nikolai Yakovlevich (1849–1915), Russian mathematician. A short biography.

N. Ya. Sonine was born on 10 February 1849, in Tula, Russia.

He studied at the Physical-Mathematical Faculty of Moscow University (1865–1869). After graduation, he continued research at Moscow University for two years and, in 1871, defended his Master's Thesis, "On expansion of functions into infinite series". In June 1871, he became an Associate Professor (dozent) at Warsaw University.

In 1873, he was sent to Paris to continue his research studies. In Paris, he attended lectures by Liouville, Hermite, Bertrand, Serret, and Darboux. In September 1874, at Moscow University, N. Ya. Sonine defended his PhD Thesis, "On integration of partial differential equations of the second order". In 1877, he became an extraordinary Professor at Warsaw University and, in 1879, an ordinary Professor there. In 1891, he resigned from his position at Warsaw University, but he still continued his research. In 1891, N. Ya. Sonine was elected a corresponding member of the Academy of Sciences, and, in 1893, he became an academician of the St. Petersburg Academy of Sciences (on the recommendation of P. L. Chebyshev). In 1890, he was awarded the V. Ya. Bunyakovsky Prize for the Best Results in Mathematics.

Starting in 1899, N. Ya. Sonine occupied various administrative positions, mostly in education. He died on 18 February 1915, in St. Petersburg.

Nekrasov Pavel Alekseevich (1853–1924), Russian mathematician and philosopher. A short biography.

P. A. Nekrasov was born on 1 (13) February 1853, in the Ryazan region, Russia.

After graduating from the Ryazan Orthodox seminary in 1874, he entered the Physical-Mathematical faculty of Moscow University. In 1878, he graduated from Moscow University with the degree of candidate of sciences and remained at the department of pure mathematics to prepare for a professorship. From August 1879, P. A. Nekrasov combined his research with teaching mathematics at the private Voskresensky real school. In 1883, he defended his master's thesis, "Study of the equation $u^m - pu^n - q = 0$". For this work, he was awarded the V. Ya. Bunyakovsky Prize for the Best Results in Mathematics.

In 1885, P. A. Nekrasov became a Privatdozent at Moscow University (having defended his Russian PhD, "On Lagrange series", in 1886), and, in 1886, he got the position of an associate (extraordinary) professor at Moscow University. In 1890, he received a full professorship. In 1893, he became the rector of Moscow University. After his term as rector, he actually wanted to retire, but he was not allowed to. From 1885 to 1891, he also taught Probability Theory and Higher Mathematics at the Land-surveyors institute.

Starting in 1898, he performed only administrative duties for the Ministry of Education (he was curator of Moscow University and responsible for the schools in Moscow and the surrounding area) and moved in 1905 to Saint Petersburg as a member of the Council of the Ministry of Education. After the Russian Revolution, he tried to adapt himself to the new realities, dealt with mathematical economics (on which he lectured in 1918–1919), and studied Marxism. He died of pneumonia on 20 December 1924, in Moscow.

**Author Contributions:** Conceptualization, S.R. and M.D. Writing—review and editing, S.R. and M.D. The authors made equal contributions to this article. Both authors have read and agreed to the published version of the manuscript.

**Funding:** The work has been supported by Belarusian Fund for Fundamental Scientific Research through the grant F20R-083.

**Data Availability Statement:** The study did not report any data.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Special Functions of Fractional Calculus in the Form of Convolution Series and Their Applications**

**Yuri Luchko**

Department of Mathematics, Physics, and Chemistry, Beuth Technical University of Applied Sciences Berlin, Luxemburger Str. 10, 13353 Berlin, Germany; luchko@beuth-hochschule.de

**Abstract:** In this paper, we first discuss the convolution series that are generated by Sonine kernels from a class of functions continuous on the real positive semi-axis that have an integrable singularity of power function type at the point zero. These convolution series are closely related to the general fractional integrals and derivatives with Sonine kernels and represent a new class of special functions of fractional calculus. The Mittag-Leffler functions as solutions to the fractional differential equations with the fractional derivatives of both Riemann-Liouville and Caputo types are particular cases of the convolution series generated by the Sonine kernel $\kappa(t) = t^{\alpha-1}/\Gamma(\alpha)$, $0 < \alpha < 1$. The main result of the paper is the derivation of analytic solutions to the single- and multi-term fractional differential equations with the general fractional derivatives of the Riemann-Liouville type that have not yet been studied in the fractional calculus literature.

**Keywords:** Sonine kernel; Sonine condition; general fractional derivative; general fractional integral; convolution series; fundamental theorems of fractional calculus; fractional differential equations

**MSC:** 26A33; 26B30; 44A10; 45E10

#### **1. Introduction**

Special functions of mathematical physics are usually defined in the form of a power series, as solutions to some differential equations, or via integral representations. Of course, for a given function, these three (and possibly other) forms coincide for all arguments and parameter values for which they exist. However, the validity domains of different representations can be unequal. Very often, the series representations of the special functions are valid only on some restricted domains. To define the corresponding functions for other values of their arguments and parameters, an analytic continuation of the series, usually in the form of integral representations, is employed.

For special functions of fractional calculus (FC), the situation is very similar to the one described above. For instance, one of the most important FC special functions – the two-parameter Mittag-Leffler function – is usually defined in the form of a power series:

$$E\_{\alpha,\beta}(z) = \sum\_{k=0}^{+\infty} \frac{z^k}{\Gamma(\alpha k+\beta)}, \ \alpha > 0, \ \beta, z \in \mathbb{C}. \tag{1}$$
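Since the series (1) converges for every finite argument, direct truncation is often sufficient for moderate $|z|$. The following sketch (function name is ours, not from the paper) sums (1) with Python's standard library:

```python
from math import gamma

def mittag_leffler(alpha, beta, z, terms=80):
    # Truncated power series (1) for E_{alpha,beta}(z).
    # Reliable for moderate |z| only; large arguments call for the
    # integral representations discussed in the text.
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))
```

For $\alpha = \beta = 1$, definition (1) reduces to the exponential series, and for $\alpha = 2$, $\beta = 1$, to $\cosh \sqrt{z}$, which gives quick sanity checks.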

Because the series is convergent for all *z* ∈ C, this definition can be used for all *z* ∈ C without any analytic continuation. Still, the integral representations of the Mittag-Leffler function are very important, say, for the derivation of its asymptotic behavior [1] and for its numerical calculation [2]. For 0 < *α* < 2 and $\mathrm{Re}(\beta) > 0$, the following integral representations of the Mittag-Leffler function in terms of integrals over the Hankel-type contours were presented in [1]:

$$E\_{\alpha,\beta}(z) = \frac{1}{2\pi\alpha i} \int\_{\gamma(\epsilon;\delta)} \frac{e^{\zeta^{1/\alpha}} \zeta^{(1-\beta)/\alpha}}{\zeta - z} \, d\zeta, \; z \in G^{(-)}(\epsilon;\delta),$$

**Citation:** Luchko, Y. Special Functions of Fractional Calculus in the Form of Convolution Series and Their Applications. *Mathematics* **2021**, *9*, 2132. https://doi.org/10.3390/math9172132

Academic Editor: Manuel Manas

Received: 2 August 2021; Accepted: 30 August 2021; Published: 2 September 2021

$$E\_{\alpha,\beta}(z) = \frac{1}{\alpha} z^{(1-\beta)/\alpha} e^{z^{1/\alpha}} + \frac{1}{2\pi \alpha i} \int\_{\gamma(\epsilon;\delta)} \frac{e^{\zeta^{1/\alpha}} \zeta^{(1-\beta)/\alpha}}{\zeta - z} d\zeta, \; z \in G^{(+)}(\epsilon; \delta),$$

where the integration contour $\gamma(\epsilon; \delta)$ ($\epsilon > 0$, $0 < \delta \leq \pi$) with non-decreasing $\arg \zeta$ consists of the following parts: the ray $\arg \zeta = -\delta$, $|\zeta| \geq \epsilon$; the arc $-\delta \leq \arg \zeta \leq \delta$ of the circumference $|\zeta| = \epsilon$; and the ray $\arg \zeta = \delta$, $|\zeta| \geq \epsilon$.

For $0 < \delta < \pi$, the domain $G^{(-)}(\epsilon; \delta)$ is to the left of the contour $\gamma(\epsilon; \delta)$ and the domain $G^{(+)}(\epsilon; \delta)$ is to the right of this contour. If $\delta = \pi$, the contour $\gamma(\epsilon; \delta)$ consists of the circumference $|\zeta| = \epsilon$ and of the cut $-\infty < \zeta \leq -\epsilon$. In this case, the domain $G^{(-)}(\epsilon; \delta)$ is the circle $|\zeta| < \epsilon$ and $G^{(+)}(\epsilon; \delta) = \{\zeta : |\arg \zeta| < \pi, \, |\zeta| > \epsilon\}$.

For some parameter values, the Mittag-Leffler function can also be introduced in terms of solutions to the fractional differential equations with the Riemann-Liouville or Caputo fractional derivatives. For instance, for 0 < *α* ≤ 1, the equation

$$(D\_{0+}^{\alpha}y)(t) = \lambda \, y(t) \tag{2}$$

has the general solution [3]

$$y(t) = C \, t^{\alpha-1} E\_{\alpha,\alpha}(\lambda \, t^{\alpha}), \; C \in \mathbb{R}. \tag{3}$$

In Equation (2), the Riemann-Liouville fractional derivative $D\_{0+}^{\alpha}$ is defined by

$$(D\_{0+}^{\alpha}f)(t) = \frac{d}{dt}(I\_{0+}^{1-\alpha}f)(t),\ t>0,\tag{4}$$

where $I\_{0+}^{\alpha}$ is the Riemann-Liouville fractional integral of order *α* (*α* > 0):

$$(I\_{0+}^{\alpha}f)(t) = \frac{1}{\Gamma(\alpha)} \int\_0^t (t-\tau)^{\alpha-1} f(\tau) \,d\tau, \ t > 0. \tag{5}$$
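Definition (5) is straightforward to approximate numerically; the midpoint rule in the sketch below (helper name is ours, not code from the paper) avoids evaluating the integrable singularity at $\tau = t$. The known identity $(I\_{0+}^{\alpha}\,\tau)(t) = t^{1+\alpha}/\Gamma(2+\alpha)$ serves as a check:

```python
from math import gamma

def rl_integral(f, alpha, t, n=100000):
    # Riemann-Liouville fractional integral (5) by the midpoint rule;
    # the midpoints keep (t - tau)^(alpha - 1) away from tau = t
    h = t / n
    acc = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        acc += (t - tau) ** (alpha - 1) * f(tau)
    return acc * h / gamma(alpha)
```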

The general solution to the equation

$$(\,\_\*D\_{0+}^{\alpha} y)(t) = \lambda \, y(t) \tag{6}$$

with the Caputo fractional derivative

$$(\,\_\*D\_{0+}^{\alpha}f)(t) = (D\_{0+}^{\alpha}f)(t) - f(0)\frac{t^{-\alpha}}{\Gamma(1-\alpha)},\ t>0\tag{7}$$

has the form [4]

$$y(t) = C \, E\_{\alpha,1}(\lambda \, t^{\alpha}), \; C \in \mathbb{R}. \tag{8}$$

As we can see, the solutions to the fractional differential Equations (2) and (6) are expressed in terms of the Mittag-Leffler functions. However, the arguments of these functions are $\lambda t^{\alpha}$ and not just $\lambda t$. Thus, these solutions are represented in the form of power series with fractional rather than integer exponents. For more advanced properties and applications of the Mittag-Leffler type functions, see [1] and the recent book [5].

In [6], the single- and multi-term fractional differential equations with the general fractional derivatives of the Caputo type have been studied. By definition, their solutions belong to the class of the FC special functions (as functions represented in the form of solutions to fractional differential equations). Moreover, in [6], another representation of these new FC special functions was derived, namely, in terms of the convolution series generated by the Sonine kernels.

The convolution series are a far-reaching generalization of the conventional power series and of the power series with fractional exponents, including the Mittag-Leffler functions (3) and (8). They represent a new class of FC special functions worthy of investigation. In [7], the convolution series were employed for the derivation of two different forms of the generalized convolution Taylor formula, for representation of a function as a convolution polynomial with a remainder in the form of a composition of the *n*-fold general fractional integral and the *n*-fold general sequential fractional derivative of the Riemann-Liouville and the Caputo types, respectively. In [7], the generalized Taylor series in the form of convolution series were also discussed. In this paper, we employ the convolution series for the derivation of analytical solutions to the single- and multi-term fractional differential equations with the general fractional derivatives in the Riemann-Liouville sense. This type of fractional differential equation has not yet been studied in the FC literature.

One of the main reasons for this situation is that, until recently, it was not clear at all what type of initial conditions is required while dealing with fractional differential equations with general fractional derivatives of the Riemann-Liouville type. A solution to this problem was provided in a very recent publication [7], where an explicit form of the projector operator of the *n*-fold sequential general fractional derivative in the Riemann-Liouville sense was derived for the first time. Another challenge for the treatment of fractional differential equations with general fractional derivatives in the Riemann-Liouville sense is the absence of methods for the derivation of their analytical solutions. In [6], fractional differential equations with general fractional derivatives of the Caputo type were studied by means of an operational calculus developed for these derivatives. An operational calculus for general fractional derivatives of the Riemann-Liouville type has not yet been constructed. Thus, in this paper, we employ another method for the analytical treatment of fractional differential equations with general fractional derivatives of the Riemann-Liouville type, namely, the method of convolution series. This method is introduced and applied to fractional differential equations for the first time in the FC literature.

The rest of this paper is organized as follows. In the next section, we introduce general fractional derivatives of the Riemann-Liouville and Caputo types with Sonine kernels from a special class of functions and discuss some of their properties needed for further discussion. In the third section, we first provide some results regarding the convolution series generated by Sonine kernels. Then, the convolution series are applied for the derivation of analytical solutions to single- and multi-term fractional differential equations with general fractional derivatives in the Riemann-Liouville sense. For a treatment of single- and multi-term fractional differential equations with general fractional derivatives in the Caputo sense, we refer interested readers to [6].

#### **2. General Fractional Integrals and Derivatives**

General fractional derivatives (GFDs) with kernel *k* in the Riemann-Liouville and in the Caputo sense, respectively, are defined as follows [8–13]:

$$(\mathbb{D}\_{(k)}f)(t) = \frac{d}{dt}(k\*f)(t) = \frac{d}{dt}\int\_0^t k(t-\tau)f(\tau) \,d\tau,\tag{9}$$

$$(\,\_\*\mathbb{D}\_{(k)}f)(t) = (\mathbb{D}\_{(k)}f)(t) - f(0)k(t),\tag{10}$$

where ∗ denotes the Laplace convolution:

$$(f \ast g)(t) = \int\_0^t f(t - \tau)g(\tau) \, d\tau. \tag{11}$$

The Riemann-Liouville and the Caputo fractional derivatives of order *α*, 0 < *α* < 1, defined by (4) and (7), respectively, are particular cases of the GFDs (9) and (10) with the kernel

$$k(t) = h\_{1-\alpha}(t),\ 0 < \alpha < 1,\ h\_{\beta}(t) := \frac{t^{\beta - 1}}{\Gamma(\beta)},\ \beta > 0. \tag{12}$$

The multi-term fractional derivatives and fractional derivatives of distributed order are also particular cases of the GFDs (9) and (10) with the kernels

$$k(t) = \sum\_{k=1}^{n} a\_k \, h\_{1-\alpha\_k}(t), \; 0 < \alpha\_1 < \dots < \alpha\_n < 1, \; a\_k \in \mathbb{R}, \; k = 1, \dots, n,\tag{13}$$

$$k(t) = \int\_0^1 h\_{1-\alpha}(t) \, d\rho(\alpha),\tag{14}$$

respectively, where *ρ* is a Borel measure defined on the interval [0, 1].

Several useful properties of the Riemann-Liouville fractional integral and the Riemann-Liouville and Caputo fractional derivatives are based on the formula

$$(h\_{\alpha} \* h\_{\beta})(t) = h\_{\alpha+\beta}(t), \ \alpha, \beta > 0, \ t > 0 \tag{15}$$

that immediately follows from the well-known representation of the Euler Beta-function in terms of the Gamma-function:

$$B(\alpha, \beta) := \int\_0^1 (1 - \tau)^{\alpha - 1} \tau^{\beta - 1} d\tau = \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha + \beta)}, \; \alpha, \beta > 0.$$

In Formula (15) and in what follows, the power function *hα* is defined as in (12).
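Identity (15) is easy to confirm in code, because the Laplace convolution of two power functions is exactly the Beta-function integral above (a minimal sketch with our own helper names):

```python
from math import gamma

def h(beta, t):
    # power function h_beta(t) = t^(beta - 1) / Gamma(beta)
    return t ** (beta - 1) / gamma(beta)

def conv_h(alpha, beta, t):
    # (h_alpha * h_beta)(t) evaluated via the Beta-function identity:
    # int_0^t (t - s)^(alpha - 1) s^(beta - 1) ds = B(alpha, beta) t^(alpha + beta - 1)
    B = gamma(alpha) * gamma(beta) / gamma(alpha + beta)
    return B * t ** (alpha + beta - 1) / (gamma(alpha) * gamma(beta))
```

By construction, `conv_h(alpha, beta, t)` equals `h(alpha + beta, t)`, and with $\beta = 1 - \alpha$ it gives the constant 1, anticipating the Sonine condition below.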

In our discussions, we employ the integer order convolution powers that for a function *f* = *f*(*t*), *t* > 0 are defined by the expression

$$f^{<n>}(t) := \begin{cases} 1, & n=0, \\ f(t), & n=1, \\ \underbrace{(f\*\ldots\*f)}\_{n \text{ times}}(t), & n=2,3,\ldots. \end{cases} \tag{16}$$

For the kernel $\kappa(t) = h\_{\alpha}(t)$ of the Riemann-Liouville fractional integral, we apply Formula (15) and arrive at the important representation

$$h\_{\alpha}^{<n>}(t) = h\_{n\alpha}(t), \ n \in \mathbb{N}.\tag{17}$$

A well-known particular case of (17) is the formula

$$\{1\}^{<n>}(t) = h\_1^{<n>}(t) = h\_n(t) = \frac{t^{n-1}}{\Gamma(n)} = \frac{t^{n-1}}{(n-1)!}, \ n \in \mathbb{N},\tag{18}$$

where by {1} we denote the function that is identically equal to 1 for *t* > 0. Now let us write down Formula (15) for *β* = 1 − *α*, 0 < *α* < 1:

$$(h\_{\alpha} \* h\_{1-\alpha})(t) = h\_1(t) = \{1\}, \ 0 < \alpha < 1, \ t > 0. \tag{19}$$

In [14,15], Abel employed the relation (19) to derive an inversion formula for the operator that is presently referred to as the Caputo fractional derivative and obtained it in the form of the Riemann-Liouville fractional integral (the solution to the Abel model for the tautochrone problem).

In an attempt to extend Abel's solution method to more general integral equations of convolution type, Sonine introduced in [16] the relation

$$(\kappa \* k)(t) = \{1\}, \ t > 0\tag{20}$$

that is presently referred to as the Sonine condition. The functions that satisfy the Sonine condition are called Sonine kernels. For a Sonine kernel *κ*, the kernel *k* that satisfies the Sonine condition (20) is called an associated kernel to *κ*. Of course, *κ* is then an associated kernel to *k*. In what follows, we denote the set of the Sonine kernels by S.

In [16], Sonine introduced a class of Sonine kernels in the form

$$\kappa(t) = h\_{\alpha}(t) \cdot \kappa\_1(t), \; \kappa\_1(t) = \sum\_{k=0}^{+\infty} a\_k t^k, \; a\_0 \neq 0, \; 0 < \alpha < 1,\tag{21}$$

$$k(t) = h\_{1-\alpha}(t) \cdot k\_1(t), \; k\_1(t) = \sum\_{k=0}^{+\infty} b\_k t^k,\tag{22}$$

where $\kappa\_1(t)$ and $k\_1(t)$ are analytical functions and the coefficients $a\_k$, $b\_k$, $k \in \mathbb{N}\_0$ satisfy the following triangular system of linear equations:

$$a\_0 b\_0 = 1, \quad \sum\_{k=0}^n \Gamma(k+1-\alpha)\,\Gamma(\alpha+n-k)\,a\_{n-k}\,b\_k = 0, \ n \ge 1. \tag{23}$$

An important example of kernels from S in the form (21), (22) was derived in [16] in terms of the Bessel function *J<sub>ν</sub>* and the modified Bessel function *I<sub>ν</sub>*:

$$\kappa(t) = (\sqrt{t})^{\alpha-1} J\_{\alpha-1}(2\sqrt{t}), \; k(t) = (\sqrt{t})^{-\alpha} I\_{-\alpha}(2\sqrt{t}), \; 0 < \alpha < 1,\tag{24}$$

where

$$J\_{\nu}(t) = \sum\_{k=0}^{+\infty} \frac{(-1)^k (t/2)^{2k+\nu}}{k!\,\Gamma(k+\nu+1)}, \quad I\_{\nu}(t) = \sum\_{k=0}^{+\infty} \frac{(t/2)^{2k+\nu}}{k!\,\Gamma(k+\nu+1)}.$$

For other examples of Sonine kernels we refer readers to [8,12,13,17].
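The triangular system (23) can be solved recursively for the *b<sub>k</sub>* once the *a<sub>k</sub>* are given. As an illustration (a sketch in Python with `mpmath`; the truncation order `N` and variable names are ours), the following code takes the *a<sub>k</sub>* from the series expansion of Sonine's kernel (24) and recovers the known coefficients of the associated kernel:

```python
# Solve the triangular system (23) for b_k, given the a_k of the kernel
# kappa(t) = (sqrt t)^(alpha-1) J_{alpha-1}(2 sqrt t) from (24), and compare
# with the closed-form b_k of k(t) = (sqrt t)^(-alpha) I_{-alpha}(2 sqrt t).
from mpmath import mp, gamma, mpf

mp.dps = 30
al = mpf("0.3")                       # the alpha in (21)-(24)
N = 8                                 # number of coefficients to compute

# a_k from kappa = h_alpha * kappa_1, where kappa_1(t) = sum a_k t^k
a = [(-1)**k * gamma(al) / (gamma(k + 1) * gamma(k + al)) for k in range(N)]

b = [1 / a[0]]                        # first equation of (23): a_0 b_0 = 1
for n in range(1, N):                 # remaining equations, solved for b_n
    s = sum(gamma(k + 1 - al) * gamma(al + n - k) * a[n - k] * b[k]
            for k in range(n))
    b.append(-s / (gamma(n + 1 - al) * gamma(al) * a[0]))

# closed form from the series of (sqrt t)^(-alpha) I_{-alpha}(2 sqrt t)
b_exact = [gamma(1 - al) / (gamma(k + 1) * gamma(k + 1 - al)) for k in range(N)]
print(max(abs(x - y) for x, y in zip(b, b_exact)))   # vanishes to precision
```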

In this paper, we deal with general fractional integrals (GFIs) with kernels *κ* ∈ S defined by the formula

$$(\mathbb{I}\_{(\kappa)}f)(t) := (\kappa \ast f)(t) = \int\_0^t \kappa(t - \tau)\, f(\tau)\, d\tau, \ t > 0 \tag{25}$$

and with GFDs with associated Sonine kernels *k* in the Riemann-Liouville and Caputo senses defined by (9) and (10), respectively.

In our discussions, we restrict ourselves to a class of Sonine kernels from the space *C*<sub>−1,0</sub>(0, +∞), which is an important particular case of the following two-parameter family of spaces [6,12,13]:

$$C\_{\alpha,\beta}(0, +\infty) = \{ f \, : \, f(t) = t^p f\_1(t), \ t > 0, \ \alpha < p < \beta, \ f\_1 \in C[0, +\infty) \}. \tag{26}$$

By *C*<sub>−1</sub>(0, +∞) we mean the space *C*<sub>−1,+∞</sub>(0, +∞). The set of such Sonine kernels will be denoted by L<sub>1</sub> [13]:

$$(\kappa, k \in \mathcal{L}\_1) \iff (\kappa, k \in C\_{-1,0}(0, +\infty)) \land ((\kappa \ast k)(t) = \{1\}).\tag{27}$$

In the rest of this section, we present some important results for GFIs and GFDs with Sonine kernels from L<sub>1</sub> on the space *C*<sub>−1</sub>(0, +∞) and its subspaces.

The basic properties of the GFI (25) on space *C*−1(0, +∞) easily follow from the known properties of the Laplace convolution:

$$\mathbb{I}\_{(\kappa)} \colon \mathbb{C}\_{-1}(0, +\infty) \to \mathbb{C}\_{-1}(0, +\infty), \tag{28}$$

$$\mathbb{I}\_{(\kappa\_1)}\,\mathbb{I}\_{(\kappa\_2)} = \mathbb{I}\_{(\kappa\_2)}\,\mathbb{I}\_{(\kappa\_1)}, \ \kappa\_1, \kappa\_2 \in \mathcal{L}\_1,\tag{29}$$

$$\mathbb{I}\_{(\kappa\_1)}\,\mathbb{I}\_{(\kappa\_2)} = \mathbb{I}\_{(\kappa\_1 \ast \kappa\_2)}, \ \kappa\_1, \kappa\_2 \in \mathcal{L}\_1.\tag{30}$$

For functions *f* ∈ *C*<sup>1</sup><sub>−1</sub>(0, +∞) := { *f* : *f*′ ∈ *C*<sub>−1</sub>(0, +∞)}, GFDs of the Riemann-Liouville type can be represented as follows [12]:

$$(\mathbb{D}\_{(k)}f)(t) = (k \ast f')(t) + f(0)\,k(t), \ t > 0. \tag{31}$$

Thus, for *f* ∈ *C*<sup>1</sup><sub>−1</sub>(0, +∞), the GFD (10) of the Caputo type takes the form

$$({}\_\ast\mathbb{D}\_{(k)}f)(t) = (k \ast f')(t), \ t > 0. \tag{32}$$

It is worth mentioning that in FC publications, the Caputo fractional derivative (7) is often defined as in Formula (32):

$$({}\_\ast D\_{0+}^{\alpha}f)(t) = (h\_{1-\alpha} \ast f')(t) = (I\_{0+}^{1-\alpha}f')(t), \ t > 0. \tag{33}$$

Now, following [7,12], we define the *n*-fold GFI and the *n*-fold sequential GFDs in the Riemann-Liouville and Caputo senses.

**Definition 1** ([12])**.** *Let κ* ∈ L<sub>1</sub>*. The n-fold GFI (n* ∈ N*) is a composition of n GFIs with the kernel κ:*

$$(\mathbb{I}^{<n>}\_{(\kappa)}f)(t) := (\underbrace{\mathbb{I}\_{(\kappa)}\dots\mathbb{I}\_{(\kappa)}}\_{n \text{ times}}f)(t), \ t>0.\tag{34}$$

It is worth mentioning that the index law (30) leads to a representation of the *n*-fold GFI (34) in the form of GFI with kernel *κ*<*n*>:

$$(\mathbb{I}^{<n>}\_{(\kappa)}f)(t) = \left(\kappa^{<n>} \ast f\right)(t) = \left(\mathbb{I}\_{(\kappa^{<n>})}f\right)(t), \ t > 0. \tag{35}$$

The kernel *κ*<*n*>, *n* ∈ N, belongs to the space *C*<sub>−1</sub>(0, +∞), but it is not always a Sonine kernel.
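For the kernel *κ* = *h<sub>α</sub>*, the convolution power in (35) has the closed form (17), which can be confirmed by direct quadrature; a small sketch in Python with `mpmath` (helper names ours):

```python
# Check (17) for n = 2: (h_alpha * h_alpha)(t) = h_{2 alpha}(t).
from mpmath import mp, quad, gamma, mpf

mp.dps = 25
al = mpf("0.7")

def h(p, t):                     # h_p(t) = t^(p-1)/Gamma(p)
    return t**(p - 1) / gamma(p)

t = mpf(2)
lhs = quad(lambda tau: h(al, t - tau) * h(al, tau), [0, t])
print(float(lhs), float(h(2 * al, t)))   # the two values agree
```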

**Definition 2** ([7])**.** *Let κ* ∈ L<sub>1</sub> *and k be its associated Sonine kernel. The n-fold sequential GFDs in the Riemann-Liouville and in the Caputo sense, respectively, are defined as follows:*

$$(\mathbb{D}^{<n>}\_{(k)}f)(t) := (\underbrace{\mathbb{D}\_{(k)}\dots\mathbb{D}\_{(k)}}\_{n \text{ times}}f)(t), \ t > 0,\tag{36}$$

$$({}\_\ast\mathbb{D}^{<n>}\_{(k)}f)(t) := (\underbrace{{}\_\ast\mathbb{D}\_{(k)}\dots{}\_\ast\mathbb{D}\_{(k)}}\_{n\text{ times}}f)(t), \ t>0.\tag{37}$$

It is worth mentioning that in [6,12], the *n*-fold GFDs (*n* ∈ N) were defined in a different form:

$$(\mathbb{D}^n\_{(k)}f)(t) := \frac{d^n}{dt^n}\,(k^{<n>} \ast f)(t), \ t > 0,\tag{38}$$

$$({}\_\ast\mathbb{D}^n\_{(k)}f)(t) := (k^{<n>} \ast f^{(n)})(t), \ t > 0. \tag{39}$$

The *n*-fold sequential GFDs (36) and (37) are a far-reaching generalization of the Riemann-Liouville and the Caputo sequential fractional derivatives to the case of Sonine kernels from L1.

Some important connections between *n*-fold GFI (34) and *n*-fold sequential GFDs (36) and (37) in the Riemann-Liouville and Caputo senses are provided in the so-called first and second fundamental theorems of FC ([18]) formulated below.

**Theorem 1** ([7])**.** *Let κ* ∈ L<sub>1</sub> *and k be its associated Sonine kernel.*

*Then, the n-fold sequential GFD* (36) *in the Riemann-Liouville sense is a left inverse operator to the n-fold GFI* (34) *on the space C*−1(0, +∞)*:*

$$(\mathbb{D}^{<n>}\_{(k)}\,\mathbb{I}^{<n>}\_{(\kappa)}f)(t) = f(t), \ f \in C\_{-1}(0, +\infty), \ t > 0,\tag{40}$$

*and the n-fold sequential GFD* (37) *in the Caputo sense is a left inverse operator to the n-fold GFI* (34) *on the space C<sup>n</sup><sub>−1,(k)</sub>(0, +∞):*

$$({}\_\ast\mathbb{D}^{<n>}\_{(k)}\,\mathbb{I}^{<n>}\_{(\kappa)}f)(t) = f(t), \ f \in C^{n}\_{-1,(k)}(0, +\infty), \ t > 0,\tag{41}$$

*where C<sup>n</sup><sub>−1,(k)</sub>(0, +∞) := { f : f(t) = (I<*n*><sub>(κ)</sub> φ)(t), φ ∈ C<sub>−1</sub>(0, +∞)}.*

**Theorem 2** ([7])**.** *Let κ* ∈ L<sub>1</sub> *and k be its associated Sonine kernel.*

*For a function f ∈ C<sup>(n)</sup><sub>−1,(k)</sub>(0, +∞) = { f ∈ C<sub>−1</sub>(0, +∞) : (D<*j*><sub>(k)</sub> f) ∈ C<sub>−1</sub>(0, +∞), j = 1, . . . , n}, the formula*

$$(\mathbb{I}^{<n>}\_{(\kappa)}\,\mathbb{D}^{<n>}\_{(k)}f)(t) = f(t) - \sum\_{j=0}^{n-1}\left(k \ast \mathbb{D}^{<j>}\_{(k)}f\right)(0)\,\kappa^{<j+1>}(t) = \tag{42}$$

$$f(t) - \sum\_{j=0}^{n-1}\left(\mathbb{I}\_{(k)}\,\mathbb{D}^{<j>}\_{(k)}f\right)(0)\,\kappa^{<j+1>}(t), \ t > 0$$

*holds valid, where* I<*n*><sub>(κ)</sub> *is the n-fold GFI* (34) *and* D<*n*><sub>(k)</sub> *is the n-fold sequential GFD* (36) *in the Riemann-Liouville sense.*

*For a function f ∈ C<sup>n</sup><sub>−1</sub>(0, +∞) := { f : f<sup>(n)</sup> ∈ C<sub>−1</sub>(0, +∞)}, the formula*

$$(\mathbb{I}^{<n>}\_{(\kappa)}\,{}\_\ast\mathbb{D}^{<n>}\_{(k)}f)(t) = f(t) - f(0) - \sum\_{j=1}^{n-1}\left({}\_\ast\mathbb{D}^{<j>}\_{(k)}f\right)(0)\left(\{1\} \ast \kappa^{<j>}\right)(t)\tag{43}$$

*holds valid, where* I<*n*><sub>(κ)</sub> *is the n-fold GFI* (34) *and* <sub>∗</sub>D<*n*><sub>(k)</sub> *is the n-fold sequential GFD* (37)*.*

For proofs of Theorems 1 and 2 and their particular cases we refer interested readers to [7].
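In the classical case *κ* = *h<sub>α</sub>*, *k* = *h*<sub>1−*α*</sub>, *n* = 1, the first fundamental theorem (40) can be traced by hand for *f* = {1}: here (I<sub>(κ)</sub> *f*)(*t*) = *t<sup>α</sup>*/Γ(1+*α*), the convolution with *k* evaluates to *t*, and differentiation returns *f*. A numerical sketch (Python with `mpmath`; names ours):

```python
# Theorem 1, classical case: for f = {1}, the convolution
# (h_{1-alpha} * I^alpha f)(t) equals t, whose derivative is 1 = f(t).
from mpmath import mp, quad, gamma, mpf

mp.dps = 25
al = mpf("0.4")

If_ = lambda t: t**al / gamma(1 + al)    # (I_(kappa) {1})(t), closed form

def k_conv(t):                           # (h_{1-alpha} * I_(kappa) {1})(t)
    return quad(lambda tau: (t - tau)**(-al) / gamma(1 - al) * If_(tau),
                [0, t])

for t in [mpf("0.5"), mpf(1), mpf(2)]:
    print(float(k_conv(t)), float(t))    # the convolution equals t
```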

#### **3. Solutions to Fractional Differential Equations with GFDs in the Riemann-Liouville Sense in Terms of the Convolution Series**

First, we introduce the convolution series and treat some of their properties needed for the further discussion.

**Definition 3.** *For a function κ* ∈ *C*<sub>−1</sub>(0, +∞)*, a series of the form*

$$\Sigma\_{\kappa}(t) = \sum\_{j=0}^{+\infty} a\_j \,\kappa^{<j+1>}(t), \ a\_j \in \mathbb{R} \ (a\_j \in \mathbb{C}) \tag{44}$$

*is called a convolution series generated by κ.*

Convolution series generated by Sonine kernels *κ* ∈ L<sub>1</sub> were introduced in [13] for the analytical treatment of fractional differential equations with *n*-fold GFDs of the Caputo type by means of an operational calculus developed for these GFDs. In [7], some of the results presented in [13] were extended to convolution series in the form (44) generated by any function *κ* ∈ *C*<sub>−1</sub>(0, +∞) (i.e., not necessarily a Sonine kernel).

A very important question regarding convergence of the convolution series (44) was answered in [6,7].

**Theorem 3** ([7])**.** *Let a function κ* ∈ *C*−1(0, +∞) *be represented in the form*

$$\kappa(t) = h\_p(t)\kappa\_1(t), \ t>0, \ p>0, \ \kappa\_1 \in C[0, +\infty) \tag{45}$$

*and the convergence radius of the power series*

$$\Sigma(z) = \sum\_{j=0}^{+\infty} a\_j z^j, \ a\_j \in \mathbb{C}, \ z \in \mathbb{C} \tag{46}$$

*be non-zero. Then the convolution series* (44) *is convergent for all t* > 0 *and defines a function from the space C*−1(0, +∞)*. Moreover, the series*

$$t^{1-\alpha}\,\Sigma\_{\kappa}(t) = \sum\_{j=0}^{+\infty} a\_j \, t^{1-\alpha}\, \kappa^{<j+1>}(t), \ \alpha = \min\{p, 1\} \tag{47}$$

*is uniformly convergent for t* ∈ [0, *T*] *for any T* > 0*.*

In what follows, we always assume that the coefficients of the convolution series satisfy the condition that the convergence radius of the corresponding power series is non-zero and thus Theorem 3 is applicable for these convolution series.

As an example, let us consider the geometric series

$$\Sigma(z) = \sum\_{j=0}^{+\infty} \lambda^j z^j, \ \lambda \in \mathbb{C}, \ z \in \mathbb{C}. \tag{48}$$

For *λ* ≠ 0, the convergence radius *r* of this series is equal to 1/|*λ*|, and thus we can apply Theorem 3, which says that the convolution series generated by a function *κ* ∈ *C*<sub>−1</sub>(0, +∞) in the form

$$l\_{\kappa,\lambda}(t) = \sum\_{j=0}^{+\infty} \lambda^j\, \kappa^{<j+1>}(t), \ \lambda \in \mathbb{C} \tag{49}$$

is convergent for all *t* > 0 and defines a function from the space *C*−1(0, +∞).

The convolution series *lκ*,*<sup>λ</sup>* defined by (49) plays a very important role in the operational calculus for GFD of Caputo type developed in [6]. It provides a far-reaching generalization of both the exponential function and the two-parameter Mittag-Leffler function in form (3).

Indeed, let us consider the convolution series (49) in the case of the kernel function *<sup>κ</sup>* <sup>=</sup> {1}. Due to the formula *<sup>κ</sup>*<*j*+1>(*t*) = {1}<*j*+1>(*t*) = *hj*+1(*t*) (see (17)), the convolution series (49) is reduced to the power series for the exponential function:

$$l\_{\kappa,\lambda}(t) = \sum\_{j=0}^{+\infty} \lambda^j\, h\_{j+1}(t) = \sum\_{j=0}^{+\infty} \frac{(\lambda \, t)^j}{j!} = e^{\lambda \, t}. \tag{50}$$

For the kernel *κ*(*t*) = *h<sub>α</sub>*(*t*) of the Riemann-Liouville fractional integral, the formula *κ*<*j*+1>(*t*) = *h<sub>α</sub>*<*j*+1>(*t*) = *h*<sub>(*j*+1)*α*</sub>(*t*) (see (17)) holds valid. Thus, the convolution series (49) takes the form

$$l\_{\kappa,\lambda}(t) = \sum\_{j=0}^{+\infty} \lambda^j\, h\_{(j+1)\alpha}(t) = t^{\alpha-1} \sum\_{j=0}^{+\infty} \frac{\lambda^j\, t^{j\alpha}}{\Gamma(j\alpha + \alpha)} = t^{\alpha-1} E\_{\alpha,\alpha}(\lambda \, t^{\alpha}) \tag{51}$$

that is the same as the two-parameter Mittag-Leffler Function (3).
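Both reductions are easy to check numerically by truncating the convolution series; a sketch in Python with `mpmath` (the truncation order is ours) for the exponential case (50):

```python
# Check (50): for kappa = {1}, sum_j lambda^j h_{j+1}(t) = exp(lambda t).
from mpmath import mp, gamma, exp, mpf

mp.dps = 25
lam, t = mpf("0.7"), mpf(2)

h = lambda p, s: s**(p - 1) / gamma(p)   # h_p(t) = t^(p-1)/Gamma(p)

partial = sum(lam**j * h(j + 1, t) for j in range(60))
print(float(partial), float(exp(lam * t)))   # the two values agree
```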

For *κ* ∈ L<sub>1</sub>, another important convolution series was introduced in [6] as follows:

$$L\_{\kappa,\lambda}(t) = (k \ast l\_{\kappa,\lambda})(t) = 1 + \left(\{1\} \ast \sum\_{j=1}^{+\infty} \lambda^j\, \kappa^{<j>}(\cdot)\right)(t), \ \lambda \in \mathbb{C},\tag{52}$$

where *k* is the Sonine kernel associated with the kernel *κ*. It is easy to see that in the case *κ* = {1}, the convolution series (52) coincides with the exponential function:

$$L\_{\kappa,\lambda}(t) = 1 + \left(\{1\} \ast \sum\_{j=1}^{+\infty} \lambda^j\, h\_j(\cdot)\right)(t) = 1 + \sum\_{j=1}^{+\infty} \lambda^j\, h\_{j+1}(t) = e^{\lambda \, t}.\tag{53}$$

In the case of the kernel *κ*(*t*) = *hα*(*t*), *t* > 0, 0 < *α* < 1, the convolution series *Lκ*,*<sup>λ</sup>* is reduced to the two-parameter Mittag-Leffler Function (8):

$$L\_{\kappa,\lambda}(t) = 1 + \left(\{1\} \ast \sum\_{j=1}^{+\infty} \lambda^j\, h\_{j\alpha}(\cdot)\right)(t) = 1 + \sum\_{j=1}^{+\infty} \lambda^j\, h\_{j\alpha+1}(t) = E\_{\alpha,1}(\lambda \, t^{\alpha}).\tag{54}$$
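The two representations of *L<sub>κ,λ</sub>* for *κ* = *h<sub>α</sub>*, *k* = *h*<sub>1−*α*</sub> can be compared numerically: convolving a truncated *l<sub>κ,λ</sub>* from (51) with *k* reproduces the truncated series (54) term by term. A sketch (Python with `mpmath`; the truncation order `J` is ours):

```python
# Check L_{kappa,lambda} = (k * l_{kappa,lambda}) for kappa = h_alpha:
# quadrature of (52) against the series representation (54).
from mpmath import mp, quad, gamma, mpf

mp.dps = 25
al, lam, J = mpf("0.6"), mpf("0.8"), 40

h = lambda p, s: s**(p - 1) / gamma(p)
l = lambda s: sum(lam**j * h((j + 1) * al, s) for j in range(J))       # (51)
L_series = lambda s: 1 + sum(lam**j * h(j * al + 1, s) for j in range(1, J))

t = mpf("1.5")
L_conv = quad(lambda tau: h(1 - al, t - tau) * l(tau), [0, t])
print(float(L_conv), float(L_series(t)))    # the two values agree
```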

Analytical solutions to single- and multi-term fractional differential equations with *n*-fold GFDs of the Caputo type were presented in [6] in terms of the convolution series *lκ*,*<sup>λ</sup>* and *Lκ*,*λ*. In the rest of this section, we treat linear single- and multi-term fractional differential equations with *n*-fold GFDs in the Riemann-Liouville sense.

We start with the following auxiliary result:

**Theorem 4.** *Two convolution series generated by the same Sonine kernel κ* ∈ L<sub>1</sub> *coincide for all t* > 0*, i.e.,*

$$\sum\_{j=0}^{+\infty} b\_j \,\kappa^{<j+1>}(t) \equiv \sum\_{j=0}^{+\infty} c\_j \,\kappa^{<j+1>}(t), \ t>0\tag{55}$$

*if and only if the corresponding coefficients of these series are equal:*

$$b\_j = c\_j, \ j = 0, 1, 2, \ldots \tag{56}$$

**Proof.** If the corresponding coefficients of two convolution series generated by the same Sonine kernel *κ* ∈ L<sup>1</sup> are equal, then we have just one series and evidently the identity (55) holds valid.

The idea of the proof of the second part of this theorem is the same as the one for the proof of the analogous calculus result for the power series, i.e., under the condition that the identity (55) holds valid we first show that *b*<sup>0</sup> = *c*<sup>0</sup> and then apply the same arguments to prove that *b*<sup>1</sup> = *c*1, *b*<sup>2</sup> = *c*2, etc.

According to Theorem 3, the convolution series in the form (44) is uniformly convergent on any interval [0, *T*], and thus we can apply the GFI I<sub>(k)</sub> to this series term by term:

$$\left(\mathbb{I}\_{(k)}\sum\_{j=0}^{+\infty} a\_j\, \kappa^{<j+1>}(\cdot)\right)(t) = \sum\_{j=0}^{+\infty}\left(\mathbb{I}\_{(k)}\, a\_j\, \kappa^{<j+1>}(\cdot)\right)(t) = \sum\_{j=0}^{+\infty} a\_j\left(k(\cdot) \ast \kappa^{<j+1>}(\cdot)\right)(t) =$$

$$a\_0 + \sum\_{j=1}^{+\infty} a\_j\left(\{1\} \ast \kappa^{<j>}(\cdot)\right)(t) = a\_0 + \left(\{1\} \ast \sum\_{j=1}^{+\infty} a\_j\, \kappa^{<j>}(\cdot)\right)(t) = a\_0 + (\{1\} \ast f\_1)(t),$$

where *f*<sup>1</sup> is the following convolution series:

$$f\_1(t) = \sum\_{j=1}^{+\infty} a\_j \,\kappa^{<j>}(t) = \sum\_{j=0}^{+\infty} a\_{j+1} \,\kappa^{<j+1>}(t). \tag{57}$$

Summarizing the calculations from above, for the convolution series in the form (44), the formula

$$\left(\mathbb{I}\_{(k)}\sum\_{j=0}^{+\infty} a\_j\, \kappa^{<j+1>}(\cdot)\right)(t) = a\_0 + \left(\{1\} \ast \sum\_{j=0}^{+\infty} a\_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t)\tag{58}$$

holds valid.

Because the convergence radius of the power series Σ<sub>1</sub>(*z*) = ∑<sup>+∞</sup><sub>*j*=0</sub> *a*<sub>*j*+1</sub> *z<sup>j</sup>* is the same as the convergence radius of the power series Σ(*z*) = ∑<sup>+∞</sup><sub>*j*=0</sub> *a<sub>j</sub> z<sup>j</sup>*, Theorem 3 ensures the inclusion *f*<sub>1</sub> ∈ *C*<sub>−1</sub>(0, +∞), where *f*<sub>1</sub> is defined by Formula (57). As has been shown in [4], the definite integral of a function from *C*<sub>−1</sub>(0, +∞) is a continuous function on the whole interval [0, +∞) that takes the value zero at the point zero:

$$(\{1\} \ast f\_1)(t) = (I^{1}\_{0+} f\_1)(t) \in C[0, +\infty), \ (I^{1}\_{0+} f\_1)(0) = 0. \tag{59}$$

Now we act with the GFI I(*k*) on the equality (55) and apply Formula (58) to obtain the relationship

$$b\_0 + \left(\{1\} \ast \sum\_{j=0}^{+\infty} b\_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t) \equiv c\_0 + \left(\{1\} \ast \sum\_{j=0}^{+\infty} c\_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t), \ t > 0. \tag{60}$$

Substituting *t* = 0 into equality (60) and using Formula (59), we deduce that *b*<sub>0</sub> = *c*<sub>0</sub>. Now we differentiate equality (60) and obtain the following identity:

$$\sum\_{j=0}^{+\infty} b\_{j+1}\, \kappa^{<j+1>}(t) \equiv \sum\_{j=0}^{+\infty} c\_{j+1}\, \kappa^{<j+1>}(t), \ t > 0. \tag{61}$$

This identity has exactly the same structure as identity (55) from Theorem 4. Thus, we can apply the same arguments as above and derive the relationship *b*<sub>1</sub> = *c*<sub>1</sub>. Repeating this reasoning, we arrive at Formula (56), which we wanted to prove.

Now we are ready to apply the method of convolution series for derivation of solutions to the fractional differential equations with GFDs, and start with the fractional relaxation equation with the GFD of the Riemann-Liouville type:

$$(\mathbb{D}\_{(k)}y)(t) = \lambda y(t), \ \lambda \in \mathbb{R}, \ t > 0. \tag{62}$$

As in the case of the power series, we look for a general solution to this equation in the form of a convolution series generated by the Sonine kernel *κ* that is an associated kernel to the kernel *k* of the GFD from Equation (62):

$$y(t) = \sum\_{j=0}^{+\infty} b\_j \,\kappa^{<j+1>}(t), \ b\_j \in \mathbb{R}.\tag{63}$$

To proceed, let us first calculate the image of the convolution series (63) by action of the GFD D(*k*):

$$(\mathbb{D}\_{(k)}y)(t) = \left(\mathbb{D}\_{(k)}\sum\_{j=0}^{+\infty} b\_j\, \kappa^{<j+1>}(\cdot)\right)(t) = \frac{d}{dt}\left(\mathbb{I}\_{(k)}\sum\_{j=0}^{+\infty} b\_j\, \kappa^{<j+1>}(\cdot)\right)(t).$$

In the proof of Theorem 4 we already calculated the image of the convolution series (63) by action of the GFI I(*k*) (Formula (58)). Applying this formula, we arrive at the representation

$$(\mathbb{D}\_{(k)}y)(t) = \frac{d}{dt}\left(b\_0 + \left(\{1\} \ast \sum\_{j=0}^{+\infty} b\_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t)\right) = \sum\_{j=0}^{+\infty} b\_{j+1}\, \kappa^{<j+1>}(t). \tag{64}$$

In the next step, we substitute the right-hand side of (64) into Equation (62) and obtain an equality of two convolution series generated by the same kernel *κ*:

$$\sum\_{j=0}^{+\infty} b\_{j+1}\, \kappa^{<j+1>}(t) = \sum\_{j=0}^{+\infty} \lambda\, b\_j\, \kappa^{<j+1>}(t), \ t > 0.$$

Application of Theorem 4 to the above identity leads to the following relationships for the coefficients of the convolution series (63):

$$b\_{j+1} = \lambda \, b\_j, \, j = 0, 1, 2, \dots \, \tag{65}$$

The infinite system (65) of linear equations can be easily solved step by step and we arrive at the explicit solution in form

$$b\_j = b\_0\, \lambda^j, \ j = 1, 2, \ldots, \tag{66}$$

where *b*<sup>0</sup> ∈ R is an arbitrary constant. Summarizing the arguments presented above, we proved the following theorem:

**Theorem 5.** *The general solution to the fractional relaxation Equation* (62) *with GFD* (9) *in the Riemann-Liouville sense can be represented as follows:*

$$y(t) = \sum\_{j=0}^{+\infty} b\_0\, \lambda^j\, \kappa^{<j+1>}(t) = b\_0\, l\_{\kappa,\lambda}(t), \ b\_0 \in \mathbb{R},\tag{67}$$

*where lκ*,*<sup>λ</sup> is the convolution series* (49)*.*

**Remark 1.** *The constant b*<sup>0</sup> *in the general solution* (67) *to Equation* (62) *can be determined from a suitably posed initial condition. The form of this initial condition is prescribed by Theorem 2 (see also Formula* (58)*). Indeed, setting n* = 1 *in the relation* (42)*, we obtain the following representation of the projector operator of the GFD* (9) *in the Riemann-Liouville sense:*

$$(Pf)(t) = f(t) - (\mathbb{I}\_{(\kappa)}\,\mathbb{D}\_{(k)}f)(t) = \left(\mathbb{I}\_{(k)}f\right)(0)\, \kappa(t), \ f \in C^{(1)}\_{-1,(k)}(0, +\infty). \tag{68}$$

*Thus, the initial-value problem*

$$\begin{cases} (\mathbb{D}\_{(k)}y)(t) = \lambda y(t), \; \lambda \in \mathbb{R}, \; t > 0, \\ \left(\mathbb{I}\_{(k)}y\right)(0) = b\_0 \end{cases} \tag{69}$$

*has a unique solution given by Formula* (67)*.*

In the case of the Sonine kernel *k*(*t*) = *h*1−*α*(*t*), 0 < *α* < 1, the Equation (62) is reduced to Equation (2) with the Riemann-Liouville fractional derivative and its solution (67) is exactly the solution (3) of Equation (2) in terms of the two-parameter Mittag-Leffler function (see Formula (51)). The initial-value problem (69) takes the well-known form

$$\begin{cases} (D\_{0+}^{\alpha}y)(t) = \lambda y(t), \ \lambda \in \mathbb{R}, \ t > 0, \\ \left(I\_{0+}^{1-\alpha}y\right)(0) = b\_0. \end{cases} \tag{70}$$

Its unique solution is given by the formula *y*(*t*) = *b*<sub>0</sub> *t*<sup>*α*−1</sup>*E*<sub>*α*,*α*</sub>(*λ t*<sup>*α*</sup>). Now we proceed with the inhomogeneous equation of type (62):

$$(\mathbb{D}\_{(k)}y)(t) = \lambda y(t) + f(t), \ \lambda \in \mathbb{R}, \ t > 0,\tag{71}$$

where the source function *f* is represented in form of a convolution series

$$f(t) = \sum\_{j=0}^{+\infty} a\_j \,\kappa^{<j+1>}(t), \ a\_j \in \mathbb{R}.\tag{72}$$

Again, we look for solutions to Equation (71) in the form of the convolution series (63). Applying exactly the same reasoning as above, we arrive at the following infinite system of linear equations for the coefficients of the convolution series (63):

$$b\_{j+1} = \lambda\, b\_j + a\_j, \ j = 0, 1, 2, \ldots \tag{73}$$

The explicit form of solutions to this system of equations is as follows:

$$b\_{j} = b\_{0} \lambda^{j} + \sum\_{i=0}^{j-1} a\_{i} \lambda^{j-i-1}, \; j = 1, 2, \dots, \tag{74}$$

where *b*<sup>0</sup> ∈ R is an arbitrary constant. Then the general solution to Equation (71) can be written in form of the following convolution series:

$$y(t) = b\_0\, \kappa(t) + \sum\_{j=1}^{+\infty}\left(b\_0 \lambda^j + \sum\_{i=0}^{j-1} a\_i \lambda^{j-i-1}\right)\kappa^{<j+1>}(t) = b\_0 \sum\_{j=0}^{+\infty} \lambda^j \kappa^{<j+1>}(t) + \sum\_{j=1}^{+\infty}\sum\_{i=0}^{j-1} a\_i \lambda^{j-i-1} \kappa^{<j+1>}(t).$$

By direct calculation, we verify that the second sum in the last formula can be written in a more compact form:

$$\sum\_{j=1}^{+\infty}\sum\_{i=0}^{j-1} a\_i \lambda^{j-i-1} \kappa^{<j+1>}(t) = \sum\_{i=0}^{+\infty} a\_i \sum\_{j=1}^{+\infty} \lambda^{j-1} \kappa^{<j+i+1>}(t) = (f \ast l\_{\kappa,\lambda})(t),$$

where the convolution series *lκ*,*<sup>λ</sup>* is defined by (49). We thus have proved the following result:

**Theorem 6.** *The general solution to the inhomogeneous Equation* (71) *has the form*

$$y(t) = b\_0 \, l\_{\kappa,\lambda}(t) + (f \ast l\_{\kappa,\lambda})(t), \, b\_0 \in \mathbb{R},\tag{75}$$

*where the convolution series lκ*,*<sup>λ</sup> is defined by* (49)*.*

*The constant b*<sup>0</sup> *is uniquely determined by the initial condition*

$$\left(\mathbb{I}\_{(k)}y\right)(0) = b\_0. \tag{76}$$

Applying Theorem 6 to the case of the Riemann-Liouville fractional derivative (kernel *k*(*t*) = *h*1−*α*(*t*), 0 < *α* < 1), we obtain the well-known result ([3]):

The unique solution to the initial-value problem

$$\begin{cases} (D\_{0+}^{\alpha}y)(t) = \lambda y(t) + f(t), \; \lambda \in \mathbb{R}, \; t > 0, \\ \left(I\_{0+}^{1-\alpha}y\right)(0) = b\_0 \end{cases}$$

is given by the formula

$$y(t) = b\_0\, t^{\alpha-1} E\_{\alpha,\alpha}(\lambda \, t^{\alpha}) + \int\_0^t \tau^{\alpha-1} E\_{\alpha,\alpha}(\lambda \, \tau^{\alpha})\, f(t-\tau)\, d\tau.$$
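The closed form (74) can be checked directly against the recursion (73); a minimal sketch in Python with made-up sample coefficients:

```python
# Check that (74) solves the recursion (73): b_{j+1} = lambda*b_j + a_j.
lam, b0 = 0.5, 2.0
a = [1.0, -0.3, 0.7, 0.2, 1.5, -1.1]      # arbitrary sample source coefficients

b_rec = [b0]                              # build b_j step by step from (73)
for j in range(len(a)):
    b_rec.append(lam * b_rec[j] + a[j])

def b_closed(j):                          # the closed form (74)
    return b0 * lam**j + sum(a[i] * lam**(j - i - 1) for i in range(j))

print(all(abs(b_rec[j] - b_closed(j)) < 1e-12 for j in range(len(b_rec))))
# prints True
```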

**Remark 2.** *In [6], single- and multi-term fractional differential equations with general fractional derivatives of the Caputo type have been studied. In particular, the unique solution to the initial-value problem*

$$\begin{cases} (\_\*\mathbb{D}\_{(k)}y)(t) = \lambda y(t) + f(t), & \lambda \in \mathbb{R}, \ t > 0, \\ y(0) = b\_0, & b\_0 \in \mathbb{R} \end{cases} \tag{77}$$

*with the GFD of the Caputo type defined by* (10) *was derived in the form*

$$y(t) = (f \ast l\_{\kappa,\lambda})(t) + b\_0 L\_{\kappa,\lambda}(t),\tag{78}$$

*where κ* ∈ L<sup>1</sup> *is the Sonine kernel associated with the kernel k and lκ*,*λ, Lκ*,*<sup>λ</sup> are the convolution series* (49) *and* (52)*, respectively.*

*In the case of the homogeneous initial condition (y*(0) = *b*<sup>0</sup> = 0*), Formula* (10) *says that GFDs of the Riemann-Liouville and Caputo types coincide. As we see, the solutions to the initial-value problems with the homogeneous initial conditions for Equations* (71) *and* (77) *are also identical.*

Let us now consider a linear inhomogeneous multi-term fractional differential equation with the sequential GFDs (36) of the Riemann-Liouville type and with the constant coefficients:

$$\sum\_{i=0}^{n} \lambda\_i\,(\mathbb{D}^{<i>}\_{(k)}y)(t) = f(t), \ \lambda\_i \in \mathbb{R}, \ i = 0, 1, \ldots, n, \ \lambda\_n \neq 0, \ t > 0,\tag{79}$$

where the source function *f* is represented in form of the convolution series (72).

As in the case of the single-term Equation (71), we look for solutions to the multi-term Equation (79) in the form of the convolution series (63). First, we determine the images of the convolution series (63) under the sequential GFDs D<*i*><sub>(k)</sub>, *i* = 1, 2, . . . , *n*. For *i* = 1, the image is provided by Formula (64). For *i* = 2, . . . , *n*, Formula (64) is applied iteratively and we arrive at the following result:

$$(\mathbb{D}^{<i>}\_{(k)}y)(t) = \sum\_{j=0}^{+\infty} b\_{j+i}\, \kappa^{<j+1>}(t), \ i = 1, 2, \ldots, n. \tag{80}$$

Now we substitute the convolution series (63), its images under the sequential GFDs D<*i*><sub>(k)</sub>, *i* = 1, 2, . . . , *n*, provided by Formula (80), and the convolution series (72) for the source function into Equation (79), and arrive at the following identity:

$$\sum\_{i=0}^{n} \lambda\_i\left(\sum\_{j=0}^{+\infty} b\_{j+i}\, \kappa^{<j+1>}(t)\right) = \sum\_{j=0}^{+\infty} a\_j\, \kappa^{<j+1>}(t), \ t > 0.$$

Application of Theorem 4 to the above identity leads to the following infinite triangular system of linear equations for the coefficients of the convolution series (63):

$$\begin{cases} \lambda\_0 b\_0 + \lambda\_1 b\_1 + \dots + \lambda\_n b\_n = a\_0 \\ \lambda\_0 b\_1 + \lambda\_1 b\_2 + \dots + \lambda\_n b\_{n+1} = a\_1 \\ \dots \\ \lambda\_0 b\_n + \lambda\_1 b\_{n+1} + \dots + \lambda\_n b\_{2n} = a\_n \\ \lambda\_0 b\_{n+1} + \lambda\_1 b\_{n+2} + \dots + \lambda\_n b\_{2n+1} = a\_{n+1} \\ \dots \end{cases} \tag{81}$$

In this system, the first *n* coefficients (*b*0, *b*1, ... , *bn*−1) can be chosen arbitrarily and all other coefficients are determined step by step as solutions to the infinite triangular system (81) of linear equations:

$$b\_{n+l} = (a\_l - \lambda\_0 b\_l - \ldots - \lambda\_{n-1} b\_{n+l-1})/\lambda\_n, \ l = 0, 1, 2, \ldots \tag{82}$$

We thus proved the following theorem:

**Theorem 7.** *The general solution to the inhomogeneous multi-term fractional differential Equation* (79) *can be represented as the convolution series* (63)*, where the first n coefficients (b*0, *b*1, ... , *bn*−1*) are arbitrary real constants and other coefficients are calculated according to Formula* (82)*.*
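The step-by-step solution (82) is straightforward to implement; a sketch with made-up coefficients (*n* = 2 here), verifying each row of the system (81) afterwards:

```python
# Solve (81) step by step via (82) and verify the rows of the system.
lam = [0.5, -1.0, 2.0]           # lambda_0, ..., lambda_n with lambda_n != 0
a = [1.0, 0.2, -0.4, 0.9, 0.1]   # sample coefficients of the source series (72)
b = [1.0, -2.0]                  # b_0, ..., b_{n-1}: arbitrary free constants

n = len(lam) - 1
for l in range(len(a)):          # Formula (82)
    s = sum(lam[i] * b[l + i] for i in range(n))
    b.append((a[l] - s) / lam[n])

# row l of (81): lambda_0 b_l + ... + lambda_n b_{l+n} = a_l
ok = all(abs(sum(lam[i] * b[l + i] for i in range(n + 1)) - a[l]) < 1e-12
         for l in range(len(a)))
print(ok)                        # prints True
```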

The constants *b*<sub>0</sub>, *b*<sub>1</sub>, . . . , *b*<sub>*n*−1</sub> in the general solution to Equation (79) presented in Theorem 7 can be determined based on suitably posed initial conditions. The form of these initial conditions is prescribed by Theorem 2. Indeed, Formula (42) can be rewritten as follows:

$$(Pf)(t) = f(t) - (\mathbb{I}^{<n>}\_{(\kappa)}\,\mathbb{D}^{<n>}\_{(k)}f)(t) = \sum\_{j=0}^{n-1}(\mathbb{I}\_{(k)}\,\mathbb{D}^{<j>}\_{(k)}f)(0)\, \kappa^{<j+1>}(t), \ t > 0, \ f \in C^{(n)}\_{-1,(k)}(0, +\infty), \tag{83}$$

where *P* is the projector operator of the *n*-fold sequential GFD of the Riemann-Liouville type. Thus, to uniquely determine the constants *b*0, *b*1, ... , *bn*−<sup>1</sup> in the general solution, Equation (79) has to be equipped with the initial conditions in the form

$$\left(\mathbb{I}\_{(k)}\,\mathbb{D}^{<j>}\_{(k)}y\right)(0) = b\_j, \ j = 0, 1, \ldots, n-1. \tag{84}$$

Finally, we mention that the inhomogeneous multi-term fractional differential equation of type (79) with sequential Riemann-Liouville fractional derivatives (the case of the kernel *k*(*t*) = *h*<sub>1−*α*</sub>(*t*) in Equation (79)) was treated in [3,19] using an operational calculus of the Mikusiński type for the Riemann-Liouville fractional derivative.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Basic Fundamental Formulas for Wiener Transforms Associated with a Pair of Operators on Hilbert Space**

**Hyun Soo Chung**

Department of Mathematics, Dankook University, Cheonan 31116, Korea; hschung@dankook.ac.kr

**Abstract:** Segal introduced the Fourier–Wiener transform for the class of polynomial cylinder functions on Hilbert space, and Hida then developed this concept. Negrin, together with Hayker et al., defined the extended Wiener transform. In recent papers, Hayker et al. established the existence, the composition formula, the inversion formula, and the Parseval relation for the Wiener transform. However, they did not establish homomorphism properties for the Wiener transform. In this paper, the author establishes some basic fundamental formulas for the Wiener transform via some concepts and motivations introduced by Segal and used by Hayker et al. We then demonstrate the usefulness of these basic fundamental formulas through some applications.

**Keywords:** Hilbert space; convolution product; first variation; integration by parts formula; translation theorem

**MSC:** 60J65; 28C20

#### **1. Introduction**


Let *X* be a normed space and let *T* be an operator on *X*. In functional analysis and the theory of algebraic structures, the homomorphism properties

$$T(f \ast g) = T(f)\,T(g) \tag{1}$$

$$(T(f) \ast T(g)) = T(fg) \tag{2}$$

are very important subjects in various fields of mathematics for *f*, *g* ∈ *X*, where ∗ denotes a convolution product corresponding to *T*.
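A familiar model case of property (1) is the Fourier transform, which maps convolution to the pointwise product. A discrete sketch with NumPy's FFT (not part of the paper's Hilbert-space setting; purely an illustration):

```python
# Discrete analogue of (1): the DFT of a (zero-padded) linear convolution
# equals the pointwise product of the DFTs.
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.standard_normal(64), rng.standard_normal(64)

lhs = np.fft.fft(np.convolve(f, g))             # T(f * g), length 127
rhs = np.fft.fft(f, 127) * np.fft.fft(g, 127)   # T(f) T(g), zero-padded
print(np.allclose(lhs, rhs))                    # prints True
```

Zero-padding to the full convolution length 64 + 64 − 1 = 127 is what makes the circular (DFT) product agree with the linear convolution.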

In [1–3], Segal introduced the Fourier–Wiener transform for the class of polynomial cylinder functions on Hilbert space. Hida then developed this concept via Fourier analysis on the dual space of nuclear spaces [4,5]. In addition, Negrin obtained an explicit integral representation of the second quantization by use of an integral operator, and hence the Wiener transform [6] was extended. Later, Hayker et al. analyzed and studied some of these results and formulas via matrix expressions [7].

In [8,9], the authors established the existence, the composition formula, the inversion formula and the Parseval relation for the Wiener transform. However, they did not establish the homomorphism properties (1) and (2) for the Wiener transform.

In this paper, we shall establish homomorphism properties for the Wiener transform. In addition, we obtain an integration by parts formula, and give some applications of it with respect to the Wiener transform. Our integration by parts formula takes a different form than in the Euclidean space. The reason is that the measure used in this paper is a probability measure, unlike the Lebesgue measure.

#### **2. Definitions and Preliminaries**

In this section, we first state some definitions and notations to understand the paper.

**Citation:** Chung, H.S. Basic Fundamental Formulas for Wiener Transforms Associated with a Pair of Operators on Hilbert Space. *Mathematics* **2021**, *9*, 2738. https:// doi.org/10.3390/math9212738

Academic Editor: Francesco Mainardi

Received: 23 September 2021 Accepted: 25 October 2021 Published: 28 October 2021



Let **H** be a real Hilbert space and let **H**′ be the complexification of **H**. The inner product on **H**′ is given by the formula

$$
\langle \mathbf{x} + i\mathbf{y}, \mathbf{x}' + i\mathbf{y}' \rangle\_{\mathbf{H}'} = \langle \mathbf{x}, \mathbf{x}' \rangle\_{\mathbf{H}} + \langle \mathbf{y}, \mathbf{y}' \rangle\_{\mathbf{H}} + i \langle \mathbf{y}, \mathbf{x}' \rangle\_{\mathbf{H}} - i \langle \mathbf{x}, \mathbf{y}' \rangle\_{\mathbf{H}}.
$$

Let *A* and *B* be operators defined on **H**′ such that there exists an orthonormal basis $\mathcal{B} = \{e\_{\alpha}\}\_{\alpha \in \mathcal{A}}$ of **H**′ ($\mathcal{A}$ being some index set) consisting of elements of **H** with

$$Ae\_{\alpha} = \mu\_{\alpha}e\_{\alpha}, \qquad Be\_{\alpha} = \lambda\_{\alpha}e\_{\alpha} \tag{3}$$

for some complex numbers $\mu\_{\alpha}$ and $\lambda\_{\alpha}$. Then we note that for each *x* ∈ **H**′,

$$x = \sum\_{\alpha \in \mathcal{A}} \langle x, e\_{\alpha} \rangle\_{\mathbf{H}}\, e\_{\alpha}$$

and so

$$Ax = \sum\_{\alpha \in \mathcal{A}} \langle x, e\_{\alpha} \rangle\_{\mathbf{H}}\, \mu\_{\alpha} e\_{\alpha}$$

and

$$Bx = \sum\_{\alpha \in \mathcal{A}} \langle x, e\_{\alpha} \rangle\_{\mathbf{H}}\, \lambda\_{\alpha} e\_{\alpha}.$$

We now state a class of functions used in this paper.

**Definition 1.** *Let f be a polynomial function on* **H**′ *defined by the formula*

$$f(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{n\_1} \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^{n\_2} \cdots \langle x, e\_{\alpha\_r} \rangle\_{\mathbf{H}}^{n\_r} \tag{4}$$

*where* $n\_1, \cdots, n\_r \in \mathbb{N} \cup \{0\}$*. Let* $\mathcal{P}$ *be the space of all complex-valued polynomials on* **H**′*.*

We are now ready to state the definitions of the Wiener transform, the convolution product and the first variation for functions in $\mathcal{P}$.

**Definition 2.** *For each pair of operators A and B on* **H***, we define the Wiener transform* F*c*,*A*,*B*(*f*) *of f by the formula*

$$\mathcal{F}\_{c,A,B}(f)(y) = \int\_{\mathbf{H}'} f(Ax + By)\, dg\_c(x) \tag{5}$$

*where f is in* $\mathcal{P}$ *and the integration over* **H**′ *is performed with respect to the normalized Gaussian distribution $g\_c$ with variance parameter c* > 0*. In addition, we define the convolution product* $(f\_1 \ast f\_2)\_A$ *of $f\_1$ and $f\_2$ by the formula*

$$(f\_1 \ast f\_2)\_A(y) = \int\_{\mathbf{H}'} f\_1\left(\frac{y + Ax}{\sqrt{2}}\right) f\_2\left(\frac{y - Ax}{\sqrt{2}}\right) dg\_c(x) \tag{6}$$

*and the first variation δ<sup>B</sup> f of f is defined by the formula*

$$
\delta\_B f(x|u) = \frac{\partial}{\partial k} f(x + kBu)\Big|\_{k=0} \tag{7}
$$

*where f* , *f*1, *f*<sup>2</sup> ∈ P *if they exist.*
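To make Definition 2 concrete, the following sketch realizes the three objects in a single basis direction. It is an illustration only: it assumes, as the kernels $\exp\{-u\_j^2/2c\}$ appearing in the later proofs suggest, that $g\_c$ acts on each coordinate $\langle x, e\_{\alpha}\rangle\_{\mathbf{H}}$ as the centered Gaussian distribution with variance $c$; the eigenvalues `mu`, `lam` of *A* and *B* and all numerical inputs are illustrative choices, not values from the paper.

```python
import math

def gauss_integral(h, c=1.0, lo=-12.0, hi=12.0, n=48000):
    """Midpoint-rule integral of h(u) against the N(0, c) density."""
    dx = (hi - lo) / n
    z = 1.0 / math.sqrt(2 * math.pi * c)
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * dx
        total += h(u) * z * math.exp(-u * u / (2 * c)) * dx
    return total

# One basis direction with A e = mu*e and B e = lam*e (illustrative eigenvalues).
mu, lam, c = 0.7, 1.3, 1.0

def wiener_transform(f, y):
    # Equation (5): F_{c,A,B}(f)(y) = ∫ f(Ax + By) dg_c(x)
    return gauss_integral(lambda u: f(mu * u + lam * y), c)

def convolution(f1, f2, y):
    # Equation (6): (f1 * f2)_A(y) = ∫ f1((y + Ax)/√2) f2((y - Ax)/√2) dg_c(x)
    root2 = math.sqrt(2)
    return gauss_integral(lambda u: f1((y + mu * u) / root2)
                          * f2((y - mu * u) / root2), c)

def first_variation(f, x, u, k=1e-5):
    # Equation (7): central difference for (d/dk) f(x + k*B*u) at k = 0
    return (f(x + k * lam * u) - f(x - k * lam * u)) / (2 * k)

# For f(x) = <x, e>, the transform reduces to lam * <y, e>, since E[u] = 0.
print(wiener_transform(lambda t: t, 0.5))
```

The same three helpers are reused (in exact-quadrature form) in the later numerical checks.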

#### **3. Existence**

In this section, we establish the existence of the convolution product and the first variation for functions *f* of the form (4). Before doing so, we state a theorem collecting some formulas for the Wiener transform $\mathcal{F}\_{c,A,B}$ which were established by Hayker et al. [9].

**Theorem 1.** *Let A, B, A*′*, B*′*, A*″ *and B*″ *be operators on* **H**′ *given by*

$$A e\_{\alpha} = \mu\_{\alpha} e\_{\alpha}, \quad B e\_{\alpha} = \lambda\_{\alpha} e\_{\alpha}, \quad A' e\_{\alpha} = \mu'\_{\alpha} e\_{\alpha}, \quad B' e\_{\alpha} = \lambda'\_{\alpha} e\_{\alpha}, \quad A'' e\_{\alpha} = \mu''\_{\alpha} e\_{\alpha}, \quad B'' e\_{\alpha} = \lambda''\_{\alpha} e\_{\alpha}$$

*where* $\mu\_{\alpha}$, $\mu'\_{\alpha}$, $\mu''\_{\alpha}$, $\lambda\_{\alpha}$, $\lambda'\_{\alpha}$ *and* $\lambda''\_{\alpha}$ *are complex numbers. Then we have the following assertions.*

*(a) (Existence): for any* $f \in \mathcal{P}$*,*

$$\mathcal{F}\_{c,A,B}(f)(y) = \prod\_{j=1}^{r} \left( \sum\_{p=0}^{\lfloor n\_j/2 \rfloor} \binom{n\_j}{p} \mu\_{\alpha\_j}^{2p} \lambda\_{\alpha\_j}^{n\_j - 2p} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}}^{n\_j - 2p} \frac{(2p)!}{p!} \left( \frac{c}{2} \right)^p \right) \tag{8}$$

*and* $\mathcal{F}\_{c,A,B}(f) \in \mathcal{P}$*.*

*(b) (Composition formula [9], Theorem 1):*

$$\mathcal{F}\_{c,A',B'}(\mathcal{F}\_{c,A,B}(f))(y) = \mathcal{F}\_{c,A'',B''}(f)(y)$$

*if and only if*

$$
\mu\_{\alpha}^2 + (\mu'\_{\alpha} \lambda\_{\alpha})^2 = (\mu''\_{\alpha})^2 \quad \text{and} \quad \lambda\_{\alpha} \lambda'\_{\alpha} = \lambda''\_{\alpha}
$$


*(c) (Inversion formula):*

$$\mathcal{F}\_{c,A',B'}(\mathcal{F}\_{c,A,B}(f))(y) = f(y) \tag{9}$$

*if and only if*

$$
\mu\_{\alpha}^{2} + (\mu'\_{\alpha}\lambda\_{\alpha})^{2} = 0 \quad \text{and} \quad \lambda\_{\alpha}\lambda'\_{\alpha} = 1
$$

*for α* ∈ A*.*

*(d) (Parseval relation [9], Theorem 2):*

$$\int\_{\mathbf{H'}} \mathcal{F}\_{\mathfrak{c},A,\mathbf{B}}(f\_1)(y) f\_2(y) d\mathfrak{g}\_{\mathfrak{c}}(y) = \int\_{\mathbf{H'}} \mathcal{F}\_{\mathfrak{c},A,\mathbf{B}}(f\_2)(y) f\_1(y) d\mathfrak{g}\_{\mathfrak{c}}(y)$$

*if and only if*

$$
\mu\_{\alpha}^2 + \lambda\_{\alpha}^2 = 1
$$

*for α* ∈ $\mathcal{A}$*. Furthermore, they showed that this transform can be extended to a unitary extension.*
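The inversion condition in part (c) can be checked in a one-dimensional model. The sketch below is an illustration, not the paper's construction: it treats a single basis direction, represents a quadratic polynomial by its coefficient triple, and uses the closed-form moment $\mathbb{E}[u^2] = c$ of the assumed $N(0, c)$ coordinate distribution. Note that $\mu'\_{\alpha}$ must be complex for $\mu\_{\alpha}^2 + (\mu'\_{\alpha}\lambda\_{\alpha})^2 = 0$ to hold.

```python
c = 1.0

def transform(coeffs, mu, lam):
    """Scalar model of F_{c,A,B}: E[f(mu*u + lam*y)] for quadratic
    f(t) = a2*t^2 + a1*t + a0, using E[u] = 0 and E[u^2] = c."""
    a2, a1, a0 = coeffs
    return (a2 * lam ** 2, a1 * lam, a0 + a2 * mu ** 2 * c)

mu, lam = 0.8, 1.25
lam_p = 1 / lam        # enforces  lam * lam' = 1
mu_p = 1j * mu / lam   # enforces  mu^2 + (mu' * lam)^2 = 0

f = (2.0, -1.0, 0.5)   # f(t) = 2t^2 - t + 0.5
g = transform(transform(f, mu, lam), mu_p, lam_p)
print(g)               # recovers (2, -1, 0.5) up to rounding
```

Without the complex choice of `mu_p`, the constant coefficient picks up a Gaussian contribution and the double transform does not return `f`.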

We shall now obtain the existence of the convolution product and the first variation. To do this, we need the following observation.

**Remark 1.** *For any $f\_1$ and $f\_2$ in $\mathcal{P}$, we can always express $f\_1$ by Equation* (4) *and $f\_2$ by*

$$f\_2(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{m\_1} \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^{m\_2} \cdots \langle x, e\_{\alpha\_r} \rangle\_{\mathbf{H}}^{m\_r} \tag{10}$$

*using the same nonnegative integer r and the same $\alpha\_j$'s. Indeed, if $f\_1(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{n\_1} \langle x, e\_{\alpha\_3} \rangle\_{\mathbf{H}}^{n\_3}$ and $f\_2(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{m\_1} \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^{m\_2}$, then we can set*

$$f\_1(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{n\_1} \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^{0} \langle x, e\_{\alpha\_3} \rangle\_{\mathbf{H}}^{n\_3}$$

*and*

$$f\_2(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{m\_1} \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^{m\_2} \langle x, e\_{\alpha\_3} \rangle\_{\mathbf{H}}^{0}.$$

*In addition, if $f\_1(x) = \langle x, e\_{\alpha} \rangle\_{\mathbf{H}}^{n}$ and $f\_2(x) = \langle x, e\_{\beta} \rangle\_{\mathbf{H}}^{m}$ with $\alpha \neq \beta$, then we can set*

$$f\_1(x) = \langle x, e\_{\gamma\_1} \rangle\_{\mathbf{H}}^{n\_1} \langle x, e\_{\gamma\_2} \rangle\_{\mathbf{H}}^{n\_2}$$

*and*

$$f\_2(x) = \langle x, e\_{\gamma\_1} \rangle\_{\mathbf{H}}^{m\_1} \langle x, e\_{\gamma\_2} \rangle\_{\mathbf{H}}^{m\_2}$$

*where $\gamma\_1 = \alpha$, $\gamma\_2 = \beta$, $n\_1 = n$, $n\_2 = 0$, $m\_1 = 0$ and $m\_2 = m$.*

In Theorem 2, we obtain the existence of the convolution product and the first variation for functions in $\mathcal{P}$.

**Theorem 2.** *Let f*<sup>1</sup> *and f*<sup>2</sup> *be elements of* P *and A as in Theorem 1. Then the convolution product* (*f*<sup>1</sup> ∗ *f*2)*<sup>A</sup> of f*<sup>1</sup> *and f*<sup>2</sup> *exists, belongs to* P *and is given by the formula*

$$\begin{split} (f\_1 \ast f\_2)\_A(y) = \left(\frac{1}{2\pi c}\right)^{\frac{r}{2}} \prod\_{j=1}^r \Bigg[ \int\_{\mathbb{R}} &\left(\frac{1}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j \right)^{n\_j} \\ &\times \left(\frac{1}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} - \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j \right)^{m\_j} \exp\left\{-\frac{u\_j^2}{2c}\right\} du\_j \Bigg]. \end{split} \tag{11}$$

*Furthermore, the first variation δ<sup>A</sup> f of f exists, belongs to* P *and is given by the formula*

$$\delta\_A f(x|u) = \sum\_{j=1}^r n\_j \lambda\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, f\_j(x) \tag{12}$$

*where*

$$f\_j(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^{n\_1} \times \cdots \times \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}}^{n\_j - 1} \times \cdots \times \langle x, e\_{\alpha\_r} \rangle\_{\mathbf{H}}^{n\_r}. \tag{13}$$

**Proof.** Using Equations (5) and (6), we have

$$\begin{split} &(f\_1 \ast f\_2)\_A(y) \\ &= \int\_{\mathbf{H}'} \prod\_{j=1}^r \Big( \frac{1}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \Big)^{n\_j} \Big( \frac{1}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} - \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \Big)^{m\_j}\, dg\_c(x) \\ &= \left( \frac{1}{2 \pi c} \right)^{\frac{r}{2}} \prod\_{j=1}^r \Big[ \int\_{\mathbb{R}} \Big( \frac{1}{\sqrt{2}} v\_j + \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j \Big)^{n\_j} \Big( \frac{1}{\sqrt{2}} v\_j - \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j \Big)^{m\_j} \exp\Big\{-\frac{u\_j^2}{2c}\Big\}\, du\_j \Big] \end{split}$$

where $v\_j = \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}}$ for $j = 1, 2, \cdots, r$. The last integral always exists because

$$\int\_{\mathbb{R}} p(u) \exp\left\{-\frac{u^2}{2c}\right\} du < \infty$$

for any polynomial function *p*. In addition, it is a polynomial in the variables

$$
\langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}}, \cdots, \langle y, e\_{\alpha\_r} \rangle\_{\mathbf{H}}.
$$

We next establish Equation (12). From Equation (7), we have

$$\begin{split} \delta\_{A}f(x|u) &= \frac{\partial}{\partial k} \prod\_{j=1}^{r} (\langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + k \lambda\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}})^{n\_j} \Big|\_{k=0} \\ &= \sum\_{j=1}^{r} n\_j \lambda\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, f\_j(x). \end{split}$$

Finally, $\delta\_A f$ is in $\mathcal{P}$ since $f\_j \in \mathcal{P}$ for all $j = 1, 2, \cdots, r$.
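Formula (12) is easy to sanity-check numerically. The sketch below uses illustrative values; `a1`, `a2` play the role of the eigenvalues written $\lambda\_{\alpha\_j}$ in (12), and the closed form is compared against a central finite difference of Definition (7) in $r = 2$ coordinates.

```python
# f(x) = <x, e_1>^n1 * <x, e_2>^n2 in two coordinates; A acts diagonally.
n1, n2 = 3, 2
a1, a2 = 0.7, 1.4   # eigenvalues of A on e_1, e_2 (illustrative)

def f(x1, x2):
    return x1 ** n1 * x2 ** n2

def delta_A_numeric(x, u, k=1e-6):
    # Equation (7): (d/dk) f(x + k*A*u) at k = 0, central difference
    (x1, x2), (u1, u2) = x, u
    return (f(x1 + k * a1 * u1, x2 + k * a2 * u2)
            - f(x1 - k * a1 * u1, x2 - k * a2 * u2)) / (2 * k)

def delta_A_formula(x, u):
    # Equation (12): sum_j n_j * a_j * <u, e_j> * f_j(x), with f_j as in (13)
    (x1, x2), (u1, u2) = x, u
    return (n1 * a1 * u1 * x1 ** (n1 - 1) * x2 ** n2
            + n2 * a2 * u2 * x1 ** n1 * x2 ** (n2 - 1))

x, u = (0.9, -1.1), (0.5, 2.0)
print(delta_A_numeric(x, u), delta_A_formula(x, u))
```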

#### **4. Homomorphism Properties and Basic Relationships**

In this section, we establish some basic relationships among the Wiener transform, the convolution product and the first variation.

Theorem 3 tells us that the Wiener transform of the convolution product is the product of their Wiener transforms.

**Theorem 3.** *Let $f\_1$, $f\_2$, A, B and A*′ *be as in Theorem 1. Then*

$$\mathcal{F}\_{\mathfrak{c},A,\mathbb{B}}(f\_1\*f\_2)\_A(y) = \mathcal{F}\_{\mathfrak{c},A,\mathbb{B}}(f\_1)\left(\frac{y}{\sqrt{2}}\right)\mathcal{F}\_{\mathfrak{c},A,\mathbb{B}}(f\_2)\left(\frac{y}{\sqrt{2}}\right). \tag{14}$$

*Furthermore, under the hypothesis of Theorem 1, we have*

$$(\mathcal{F}\_{\mathbf{c},A,B}(f\_1) \* \mathcal{F}\_{\mathbf{c},A,B}(f\_2))\_{A'}(y) = \mathcal{F}\_{\mathbf{c},A,B}\left(f\_1(\frac{\cdot}{\sqrt{2}})f\_2(\frac{\cdot}{\sqrt{2}})\right)(y). \tag{15}$$

**Proof.** Using Equations (2), (6) and (11), we have

$$\begin{split} &\mathcal{F}\_{c,A,B}(f\_1 \ast f\_2)\_A(y) \\ &= \int\_{\mathbf{H}'} \int\_{\mathbf{H}'} f\_1\Big(\frac{Ax + By + Az}{\sqrt{2}}\Big) f\_2\Big(\frac{Ax + By - Az}{\sqrt{2}}\Big)\, dg\_c(x)\, dg\_c(z) \\ &= \int\_{\mathbf{H}'} \int\_{\mathbf{H}'} \prod\_{j=1}^{r} \Big( \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle z, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \Big)^{n\_j} \\ &\qquad \times \prod\_{j=1}^{r} \Big( \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} - \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} \langle z, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \Big)^{m\_j}\, dg\_c(x)\, dg\_c(z) \\ &= \left(\frac{1}{2\pi c}\right)^{r} \int\_{\mathbb{R}^r} \int\_{\mathbb{R}^r} \prod\_{j=1}^{r} \Big( \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j + \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} w\_j \Big)^{n\_j} \\ &\qquad \times \prod\_{j=1}^{r} \Big( \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} u\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j - \frac{\lambda\_{\alpha\_j}}{\sqrt{2}} w\_j \Big)^{m\_j} \exp\Big\{- \sum\_{j=1}^{r} \frac{u\_j^2 + w\_j^2}{2c}\Big\}\, d\vec{u}\, d\vec{w} \end{split}$$

where $v\_j = \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}}$ for $j = 1, 2, \cdots, r$. Now let $u'\_j = \frac{u\_j + w\_j}{\sqrt{2}}$ and $w'\_j = \frac{u\_j - w\_j}{\sqrt{2}}$ for $j = 1, 2, \cdots, r$. Then we have

$$\begin{split} &\mathcal{F}\_{c,A,B}(f\_1 \ast f\_2)\_A(y) \\ &= \left(\frac{1}{2\pi c}\right)^r \int\_{\mathbb{R}^r} \int\_{\mathbb{R}^r} \prod\_{j=1}^r \left(\lambda\_{\alpha\_j} u'\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j\right)^{n\_j} \\ &\qquad \times \prod\_{j=1}^r \left(\lambda\_{\alpha\_j} w'\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j\right)^{m\_j} \exp\left\{-\sum\_{j=1}^r \frac{(u'\_j)^2 + (w'\_j)^2}{2c}\right\} d\vec{u}'\, d\vec{w}' \\ &= \left(\frac{1}{2\pi c}\right)^{\frac{r}{2}} \int\_{\mathbb{R}^r} \prod\_{j=1}^r \left(\lambda\_{\alpha\_j} u'\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j\right)^{n\_j} \exp\left\{-\sum\_{j=1}^r \frac{(u'\_j)^2}{2c}\right\} d\vec{u}' \\ &\qquad \times \left(\frac{1}{2\pi c}\right)^{\frac{r}{2}} \int\_{\mathbb{R}^r} \prod\_{j=1}^r \left(\lambda\_{\alpha\_j} w'\_j + \frac{\mu\_{\alpha\_j}}{\sqrt{2}} v\_j\right)^{m\_j} \exp\left\{-\sum\_{j=1}^r \frac{(w'\_j)^2}{2c}\right\} d\vec{w}' \end{split}$$

where $v\_j = \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}}$ for $j = 1, 2, \cdots, r$. Hence, using Equation (8), we can conclude that

$$
\mathcal{F}\_{\mathfrak{c},A,B}(f\_1\*f\_2)\_A(y) = \mathcal{F}\_{\mathfrak{c},A,B}(f\_1)\left(\frac{y}{\sqrt{2}}\right)\mathcal{F}\_{\mathfrak{c},A,B}(f\_2)\left(\frac{y}{\sqrt{2}}\right).
$$

In addition, using Equation (9), we have

$$\begin{split} & \quad \mathcal{F}\_{\boldsymbol{c},A',B'}(\mathcal{F}\_{\boldsymbol{c},A,B}(f\_1) \ast \mathcal{F}\_{\boldsymbol{c},A,B}(f\_2))\_{A'}(\boldsymbol{y}) \\ &= \mathcal{F}\_{\boldsymbol{c},A',B'}(\mathcal{F}\_{\boldsymbol{c},A,B}(f\_1)) \left(\frac{\boldsymbol{y}}{\sqrt{2}}\right) \mathcal{F}\_{\boldsymbol{c},A',B'}(\mathcal{F}\_{\boldsymbol{c},A,B}(f\_2)) \left(\frac{\boldsymbol{y}}{\sqrt{2}}\right) \\ &= f\_1\left(\frac{\boldsymbol{y}}{\sqrt{2}}\right) f\_2\left(\frac{\boldsymbol{y}}{\sqrt{2}}\right), \end{split}$$

which yields Equation (15) as desired, where $\mathcal{F}\_{c,A',B'}$ is as in Theorem 1.
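The homomorphism (14) can be verified numerically in a scalar model. The sketch below again assumes each coordinate of $g\_c$ is $N(0, c)$ and uses a three-point Gauss–Hermite rule, which is exact for polynomial integrands of degree at most five; with degree-one $f\_1$, $f\_2$ both sides of (14) are then computed essentially to machine precision. All numerical values are illustrative.

```python
import math

c = 1.0
mu, lam = 0.7, 1.3   # eigenvalues of A and B on the chosen direction

def gauss_expect(h):
    # Three-point Gauss-Hermite rule for the N(0, c) weight:
    # exact for polynomials of degree <= 5.
    s = math.sqrt(3 * c)
    return (h(-s) + h(s)) / 6 + 2 * h(0) / 3

def wiener_transform(f, y):
    return gauss_expect(lambda u: f(mu * u + lam * y))        # Equation (5)

def convolution(f1, f2, y):
    r2 = math.sqrt(2)
    return gauss_expect(lambda u: f1((y + mu * u) / r2)
                        * f2((y - mu * u) / r2))               # Equation (6)

f1 = lambda t: t
f2 = lambda t: 2 * t
y = 0.8

lhs = wiener_transform(lambda z: convolution(f1, f2, z), y)    # F(f1 * f2)_A(y)
rhs = (wiener_transform(f1, y / math.sqrt(2))
       * wiener_transform(f2, y / math.sqrt(2)))
print(lhs, rhs)   # the two sides of Equation (14)
```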

In our next theorem, we show that the Wiener transform and the first variation commute.

**Theorem 4.** *Let f, A and B be as in Theorem 1. Let S be an operator on* **H**′ *with $Se\_{\alpha} = \gamma\_{\alpha} e\_{\alpha}$ for α* ∈ $\mathcal{A}$*. Then*

$$
\delta\_S \mathcal{F}\_{\mathfrak{c}, A, B}(f)(y|u) = \mathcal{F}\_{\mathfrak{c}, A, B}(\delta\_{BS} f(\cdot|u))(y). \tag{16}
$$

**Proof.** Using Equations (5) and (7), we have

$$\begin{split} &\delta\_{S} \mathcal{F}\_{c,A,B}(f)(y|u) \\ &= \frac{\partial}{\partial k} \mathcal{F}\_{c,A,B}(f)(y + kSu) \Big|\_{k=0} \\ &= \frac{\partial}{\partial k} \int\_{\mathbf{H}'} f(Ax + By + kBSu)\, dg\_c(x) \Big|\_{k=0} \\ &= \frac{\partial}{\partial k} \int\_{\mathbf{H}'} \prod\_{j=1}^{r} (\lambda\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \mu\_{\alpha\_j} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + k \mu\_{\alpha\_j} \gamma\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}})^{n\_j}\, dg\_c(x) \Big|\_{k=0} \\ &= \sum\_{j=1}^{r} n\_j \mu\_{\alpha\_j} \gamma\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, \mathcal{F}\_{c,A,B}(f\_j)(y) \end{split}$$

where *fj* is as in Equation (13). We next use Equations (5) and (7) again to get

$$\begin{split} &\mathcal{F}\_{c,A,B}(\delta\_{BS} f(\cdot|u))(y) \\ &= \int\_{\mathbf{H}'} \frac{\partial}{\partial k} f(Ax + By + kBSu) \Big|\_{k=0}\, dg\_c(x) \\ &= \frac{\partial}{\partial k} \int\_{\mathbf{H}'} \prod\_{j=1}^{r} (\lambda\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + \mu\_{\alpha\_j} \langle y, e\_{\alpha\_j} \rangle\_{\mathbf{H}} + k \mu\_{\alpha\_j} \gamma\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}})^{n\_j}\, dg\_c(x) \Big|\_{k=0} \\ &= \sum\_{j=1}^{r} n\_j \mu\_{\alpha\_j} \gamma\_{\alpha\_j} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, \mathcal{F}\_{c,A,B}(f\_j)(y) \end{split}$$

where $f\_j$ is as in Equation (13). Comparing the two expressions, we obtain Equation (16) as desired.

From Equations (14) and (16) in Theorems 3 and 4, we have the following basic relationships.

**Theorem 5.** *Let $f\_1$ and $f\_2$ be as in Theorem 3. Let A and B be as in Theorem 1 and let S be as in Theorem 4. Then we have*

$$\delta\_S(f\_1 \ast f\_2)\_A(y|u) = (\delta\_S f\_1(\cdot|u/\sqrt{2}) \ast f\_2)\_A(y) + (f\_1 \ast \delta\_S f\_2(\cdot|u/\sqrt{2}))\_A(y), \tag{17}$$

$$\begin{aligned} & \mathcal{F}\_{c,A,B}(\delta\_{BS}f\_1(\cdot|u) \ast \delta\_{BS}f\_2(\cdot|u))\_A(y) \\ &= \delta\_{S}\mathcal{F}\_{c,A,B}f\_1(y/\sqrt{2}|u)\, \delta\_{S}\mathcal{F}\_{c,A,B}f\_2(y/\sqrt{2}|u) \end{aligned} \tag{18}$$

$$\begin{aligned} &\mathcal{F}\_{\mathfrak{c},A,B}(\delta\_{BS}(f\_1\*f\_2)\_A(\cdot|u))(y) \\ &= \delta\_S(\mathcal{F}\_{\mathfrak{c},A,B}f\_1(\cdot/\sqrt{2})\mathcal{F}\_{\mathfrak{c},A,B}f\_2(\cdot/\sqrt{2}))(y|u) \\ &= \delta\_S\mathcal{F}\_{\mathfrak{c},A,B}(f\_1\*f\_2)\_A(y|u) \end{aligned} \tag{19}$$

$$(\mathcal{F}\_{c,A,B}\delta\_{BS}f\_1(\cdot|u) \ast \mathcal{F}\_{c,A,B}\delta\_{BS}f\_2(\cdot|u))\_A(y) = (\delta\_{S}\mathcal{F}\_{c,A,B}f\_1(\cdot|u) \ast \delta\_{S}\mathcal{F}\_{c,A,B}f\_2(\cdot|u))\_A(y). \tag{20}$$

**Proof.** We first note that Equation (17) follows directly from the definition of the first variation given by (7). Next we note that Equations (18) and (19) follow from Equations (14)–(16). Finally we note that Equation (20) follows immediately from Equations (14) and (16).

#### **5. Integration by Parts Formula with an Application**

In this section, we obtain an integration by parts formula and give an application with respect to the Wiener transform.

Since the Lebesgue measure $m\_L$ on $\mathbb{R}^r$ is translation invariant, we see that

$$\int\_{\mathbb{R}^r} h(\vec{u} + \vec{w})\, dm\_L(\vec{u}) = \int\_{\mathbb{R}^r} h(\vec{v})\, dm\_L(\vec{v})$$

by the substitution $v\_j = u\_j + w\_j$ for $j = 1, 2, \cdots, r$, provided the integrals exist. This is the translation theorem for Lebesgue integrals. However, the distribution $g\_c$ used in this paper is a Gaussian measure and hence, in general,

$$\int\_{\mathbf{H'}} h(x+y)d\mathbf{g}\_{\varepsilon}(x) \neq \int\_{\mathbf{H'}} h(z)d\mathbf{g}\_{\varepsilon}(z)$$

even if the integrals exist; see [10–14]. For this reason, the formula obtained in this paper takes a different form.
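A short numeric check makes the failure of translation invariance concrete, again under our running illustrative assumption that the coordinate distribution of $g\_c$ is $N(0, c)$ (the shift `v` and variance `c` are arbitrary choices):

```python
import math

# Illustration that the Gaussian distribution g_c is not translation invariant.
# A three-point Gauss-Hermite rule is exact for polynomials of degree <= 5
# against the N(0, c) weight.
c, v = 1.0, 0.7

def gauss_expect(h):
    s = math.sqrt(3 * c)
    return (h(-s) + h(s)) / 6 + 2 * h(0) / 3

h = lambda t: t ** 2
shifted = gauss_expect(lambda x: h(x + v))   # ∫ h(x + v) dg_c(x) = c + v^2
unshifted = gauss_expect(h)                  # ∫ h(z) dg_c(z)     = c
print(shifted, unshifted)                    # differ whenever v != 0
```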

**Lemma 1.** *Let s be a non-negative integer and let p be a function on* **H** *defined by the formula*

$$p(\mathbf{x}) = \langle \mathbf{x}, \mathbf{e}\_{\mathbf{a}} \rangle\_{\mathbf{H}}^{s} \tag{21}$$

*for some $e\_{\alpha} \in \mathcal{B}$. Then for all $x\_0 \in$* **H**′*,*

$$\begin{split} &\int\_{\mathbf{H}'} p(x + x\_0)\, dg\_c(x) \\ &\qquad = \exp\left\{ -\frac{1}{2c} \langle x\_0, e\_{\alpha} \rangle\_{\mathbf{H}}^{2} \right\} \int\_{\mathbf{H}'} p(x) \exp\left\{ \frac{1}{c} \langle x, e\_{\alpha} \rangle\_{\mathbf{H}} \langle x\_0, e\_{\alpha} \rangle\_{\mathbf{H}} \right\} dg\_c(x). \end{split} \tag{22}$$

**Proof.** We set $v = \langle x\_0, e\_{\alpha} \rangle\_{\mathbf{H}}$. Using Equations (8) and (21), we have

$$\begin{split} &\int\_{\mathbf{H'}} p(\mathbf{x} + \mathbf{x}\_{0}) d\mathbf{g}\_{c}(\mathbf{x}) \\ &= \left(\frac{1}{2\pi c}\right)^{\frac{1}{2}} \int\_{\mathbb{R}} (\mathbf{u} + \mathbf{v})^{s} \exp\left\{-\frac{\mathbf{u}^{2}}{2c}\right\} d\mathbf{u} \\ &= \left(\frac{1}{2\pi c}\right)^{\frac{1}{2}} \int\_{\mathbb{R}} w^{s} \exp\left\{-\frac{(\mathbf{w} - \mathbf{v})^{2}}{2c}\right\} dw \\ &= \exp\left\{-\frac{1}{2c} v^{2}\right\} \left(\frac{1}{2\pi c}\right)^{\frac{1}{2}} \int\_{\mathbb{R}} w^{s} \exp\left\{-\frac{w^{2}}{2c} + \frac{1}{c} vw\right\} dw \\ &= \exp\left\{-\frac{1}{2c} \langle \mathbf{x}\_{0}, e\_{a} \rangle\_{\mathbf{H}}^{2} \right\} \int\_{\mathbf{H'}} p(\mathbf{x}) \exp\left\{\frac{1}{c} \langle \mathbf{x}, e\_{a} \rangle\_{\mathbf{H}} \langle \mathbf{x}\_{0}, e\_{a} \rangle\_{\mathbf{H}}\right\} d\mathbf{g}\_{c}(\mathbf{x}). \end{split}$$

Hence, we have the desired result.
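Equation (22) can also be confirmed numerically in one coordinate. The sketch below assumes $g\_c = N(0, c)$ on the coordinate $\langle x, e\_{\alpha}\rangle\_{\mathbf{H}}$, takes $s = 3$ and an illustrative shift $v = \langle x\_0, e\_{\alpha}\rangle\_{\mathbf{H}} = 0.6$, and compares both sides of (22) by quadrature.

```python
import math

def gauss_integral(h, c=1.0, lo=-12.0, hi=12.0, n=60000):
    """Midpoint-rule integral of h(u) against the N(0, c) density."""
    dx = (hi - lo) / n
    z = 1.0 / math.sqrt(2 * math.pi * c)
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * dx
        total += h(u) * z * math.exp(-u * u / (2 * c)) * dx
    return total

c, s, v = 1.0, 3, 0.6
lhs = gauss_integral(lambda u: (u + v) ** s, c)                      # left side of (22)
rhs = (math.exp(-v * v / (2 * c))
       * gauss_integral(lambda u: u ** s * math.exp(u * v / c), c))  # right side of (22)
print(lhs, rhs)   # for s = 3 both reduce to v^3 + 3*v*c
```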

In Theorem 6, we obtain a translation theorem for **H**-integrals.

**Theorem 6** (Translation theorem for **H**-integrals)**.** *Let f be as in Equation* (4) *and let x*<sup>0</sup> ∈ **H** *. Then*

$$\begin{split} &\int\_{\mathbf{H}'} f(x + x\_0)\, dg\_c(x) \\ &= \exp\left\{ -\frac{1}{2c} \sum\_{j=1}^{r} \langle x\_0, e\_{\alpha\_j} \rangle\_{\mathbf{H}}^2 \right\} \int\_{\mathbf{H}'} f(x) \exp\left\{ \frac{1}{c} \sum\_{j=1}^{r} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle x\_0, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \right\} dg\_c(x). \end{split} \tag{23}$$

**Proof.** First, using the fact that

$$\int\_{\mathbf{H'}} f(\mathbf{x})d\mathbf{g}\_{\mathbf{c}}(\mathbf{x}) = \int\_{\mathbf{H'}} \langle \mathbf{x}, \mathfrak{e}\_{\mathfrak{a}\_1} \rangle\_{\mathbf{H}}^{n\_1} d\mathbf{g}\_{\mathbf{c}}(\mathbf{x}) \cdot \cdots \int\_{\mathbf{H'}} \langle \mathbf{x}, \mathfrak{e}\_{\mathfrak{a}\_r} \rangle\_{\mathbf{H}}^{n\_r} d\mathbf{g}\_{\mathbf{c}}(\mathbf{x}),$$

and Equation (22) in Lemma 1, we can establish Equation (23) as desired.

The following theorem is one of the main results of this paper.

**Theorem 7** (Integration by parts formula)**.** *Let f be as in Theorem 6 and let S be as in Theorem 4. Then*

$$\begin{split} &c \int\_{\mathbf{H}'} \delta\_{S} f(x|u)\, dg\_c(x) \\ &= c \int\_{\mathbf{H}'} f(x)\, dg\_c(x) + \int\_{\mathbf{H}'} f(x) \sum\_{j=1}^{r} \gamma\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, dg\_c(x). \end{split} \tag{24}$$

**Proof.** Using Equations (7) and (23), we have

$$\begin{split} &\int\_{\mathbf{H}'} \delta\_{S} f(x|u)\, dg\_c(x) \\ &= \frac{\partial}{\partial k} \int\_{\mathbf{H}'} f(x + kSu)\, dg\_c(x) \Big|\_{k=0} \\ &= \frac{\partial}{\partial k} \Big[ \exp\Big\{-\frac{k^{2}}{2c} \sum\_{j=1}^{r} \gamma\_{\alpha\_j}^{2} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}^{2} \Big\} \\ &\qquad \times \int\_{\mathbf{H}'} f(x) \exp\Big\{\frac{k}{c} \sum\_{j=1}^{r} \gamma\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \Big\}\, dg\_c(x) \Big] \Big|\_{k=0} \\ &= \int\_{\mathbf{H}'} f(x)\, dg\_c(x) + \frac{1}{c} \int\_{\mathbf{H}'} f(x) \sum\_{j=1}^{r} \gamma\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, dg\_c(x), \end{split}$$

which yields Equation (24) as desired.

Finally, we give an application of Theorem 7.

**Theorem 8** (Application of Theorem 7)**.** *Let f and S be as in Theorem 7. Let A and B be as in Theorem 5. Then*

$$\begin{split} &c\mathcal{F}\_{\mathfrak{c},A,B}(\delta\_{A}f(\cdot|\mu))(y) \\ &= c\mathcal{F}\_{\mathfrak{c},A,B}(f)(y) + \int\_{\mathbf{H}'} f(Ax+By) \sum\_{j=1}^{r} \gamma\_{a\_{j}} \langle \mathbf{x}, e\_{a\_{j}} \rangle\_{\mathbf{H}} \langle u, e\_{a\_{j}} \rangle\_{\mathbf{H}} d\mathfrak{g}\_{\mathfrak{c}}(\mathbf{x}). \end{split} \tag{25}$$

**Proof.** Using Equations (5) and (7), we have

$$\mathcal{F}\_{c,A,B}(\delta\_A f(\cdot|u))(y) = \frac{\partial}{\partial k} \int\_{\mathbf{H}'} f(Ax + By + kAu)\, dg\_c(x) \Big|\_{k=0}.$$

Now, let $f\_y(x) = f(x + y)$ and $f^A(x) = f(Ax)$. Then

$$f(Ax + By + kAu) = (f\_{By})^A(x + ku)$$

and, hence, using Equation (24) by replacing *f* with (*fBy*)*A*, we have

$$\begin{split} &\mathcal{F}\_{c,A,B}(\delta\_A f(\cdot|u))(y) \\ &= \frac{\partial}{\partial k} \int\_{\mathbf{H}'} (f\_{By})^A(x + ku)\, dg\_c(x) \Big|\_{k=0} \\ &= \int\_{\mathbf{H}'} (f\_{By})^A(x)\, dg\_c(x) + \frac{1}{c} \int\_{\mathbf{H}'} (f\_{By})^A(x) \sum\_{j=1}^{r} \gamma\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, dg\_c(x) \\ &= \mathcal{F}\_{c,A,B}(f)(y) + \frac{1}{c} \int\_{\mathbf{H}'} f(Ax + By) \sum\_{j=1}^{r} \gamma\_{\alpha\_j} \langle x, e\_{\alpha\_j} \rangle\_{\mathbf{H}} \langle u, e\_{\alpha\_j} \rangle\_{\mathbf{H}}\, dg\_c(x). \end{split}$$

Hence, we have the desired result.

#### **6. Applications**

In this section, we give some applications of the fundamental formulas obtained in the previous sections.

#### *6.1. Application of Theorem 3*

We first give an application to illustrate the usefulness of Equations (14) and (15) in Theorem 3.

**Example 1.** *Let r = 2, let $f\_1(x) = \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2$ and let $f\_2(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}$. Let A and B be as in Theorem 3. From Equation* (8) *we have*

$$\mathcal{F}\_{c,A,B}(f\_1)(y) = \lambda\_{\alpha\_2}^2 \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_2}^2$$

*and*

$$\mathcal{F}\_{\mathbf{c},\mathbf{A},\mathbf{B}}(f\_2)(y) = [\lambda\_{\mathfrak{a}\_1}^2 \langle y, \mathfrak{e}\_{\mathfrak{a}\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\mathfrak{a}\_1}^2] [\mu\_{\mathfrak{a}\_2} \langle y, \mathfrak{e}\_{\mathfrak{a}\_2} \rangle\_{\mathbf{H}}].$$

*Hence, using Equation* (14)*, we have*

$$\begin{split} &\mathcal{F}\_{c,A,B}(f\_1 \ast f\_2)\_A(y) \\ &= \left[\frac{\lambda\_{\alpha\_2}^2}{2} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_2}^2 \right] \left[\frac{\lambda\_{\alpha\_1}^2}{2} \langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_1}^2 \right] \left[\frac{\mu\_{\alpha\_2}}{\sqrt{2}} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}} \right]. \end{split} \tag{26}$$

*Furthermore, we note that*

$$f\_1(x)f\_2(x) = \langle x, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 \langle x, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^3$$

*and so*

$$\begin{aligned} &\mathcal{F}\_{c,A,B}\left(f\_1(\tfrac{\cdot}{\sqrt{2}})f\_2(\tfrac{\cdot}{\sqrt{2}})\right)(y) \\ &= \left[\frac{\lambda\_{\alpha\_1}^2}{2}\langle y, e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_1}^2\right]\left[\frac{\lambda\_{\alpha\_2}^3}{2}\langle y, e\_{\alpha\_2}\rangle\_{\mathbf{H}}^3 + \frac{3c(\mu\_{\alpha\_2}^2 + \lambda\_{\alpha\_2})}{\sqrt{2}}\langle y, e\_{\alpha\_2}\rangle\_{\mathbf{H}}\right]. \end{aligned}$$

*Hence, using Equation* (15)*, we have*

$$\begin{aligned} & \left( \mathcal{F}\_{\boldsymbol{c},A,\boldsymbol{B}}(f\_1) \ast \mathcal{F}\_{\boldsymbol{c},A,\boldsymbol{B}}(f\_2) \right)\_{A'}(\boldsymbol{y}) \\ &= \left[ \frac{\lambda\_{a\_1}^2}{2} \langle \boldsymbol{y}, \boldsymbol{c}\_{a\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{a\_1}^2 \right] \left[ \frac{\lambda\_{a\_2}^3}{2} \langle \boldsymbol{y}, \boldsymbol{c}\_{a\_2} \rangle\_{\mathbf{H}}^3 + \frac{3c(\mu\_{a\_2}^2 + \lambda\_{a\_2})}{\sqrt{2}} \langle \boldsymbol{y}, \boldsymbol{c}\_{a\_2} \rangle\_{\mathbf{H}} \right]. \end{aligned}$$

*These examples show that the Wiener transform of a convolution product, and the convolution product of Wiener transforms, can be calculated very easily without computing any convolution integral.*

#### *6.2. Application of Theorem 5*

We next give an application of Equation (19) in Theorem 5.

**Example 2.** *Let f*1, *f*2, *A and B be as in Example 1. Using Equation* (26)*, we have*

$$\begin{split} &\delta\_{S}\mathcal{F}\_{c,A,B}(f\_1 \ast f\_2)\_A(y|u) = \frac{\partial}{\partial k} \mathcal{F}\_{c,A,B}(f\_1 \ast f\_2)\_A(y + kSu) \Big|\_{k=0} \\ &= \frac{\partial}{\partial k} \left[\frac{\lambda\_{\alpha\_2}^2}{2} (\langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}} + k\gamma\_{\alpha\_2} \langle u, e\_{\alpha\_2} \rangle\_{\mathbf{H}})^2 + 2c\mu\_{\alpha\_2}^2 \right] \\ &\qquad \times \left[\frac{\lambda\_{\alpha\_1}^2}{2} (\langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}} + k\gamma\_{\alpha\_1} \langle u, e\_{\alpha\_1} \rangle\_{\mathbf{H}})^2 + 2c\mu\_{\alpha\_1}^2 \right] \\ &\qquad \times \left[\frac{\mu\_{\alpha\_2}}{\sqrt{2}} (\langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}} + k\gamma\_{\alpha\_2} \langle u, e\_{\alpha\_2} \rangle\_{\mathbf{H}}) \right] \Big|\_{k=0} \\ &= \lambda\_{\alpha\_2}^2 \langle u, e\_{\alpha\_2} \rangle\_{\mathbf{H}} \left[\frac{\lambda\_{\alpha\_2}^2}{2} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_2}^2\right] \left[\frac{\lambda\_{\alpha\_1}^2}{2} \langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_1}^2\right] \left[\frac{\mu\_{\alpha\_2}}{\sqrt{2}} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}\right] \\ &\quad + \lambda\_{\alpha\_1}^2 \langle u, e\_{\alpha\_1} \rangle\_{\mathbf{H}} \left[\frac{\lambda\_{\alpha\_2}^2}{2} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_2}^2\right] \left[\frac{\lambda\_{\alpha\_1}^2}{2} \langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_1}^2\right] \left[\frac{\mu\_{\alpha\_2}}{\sqrt{2}} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}\right] \\ &\quad + \frac{\mu\_{\alpha\_2}}{\sqrt{2}} \langle u, e\_{\alpha\_1} \rangle\_{\mathbf{H}} \left[\frac{\lambda\_{\alpha\_2}^2}{2} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_2}^2\right] \left[\frac{\lambda\_{\alpha\_1}^2}{2} \langle y, e\_{\alpha\_1} \rangle\_{\mathbf{H}}^2 + 2c\mu\_{\alpha\_1}^2\right] \left[\frac{\mu\_{\alpha\_2}}{\sqrt{2}} \langle y, e\_{\alpha\_2} \rangle\_{\mathbf{H}}\right]. \end{split}$$

*Using this, we obtain that*

$$\begin{split}
\mathcal{F}\_{c,A,B}\big(\delta\_B S(f\_1\ast f\_2)\_A(\cdot|u)\big)(y)
&=\Bigg[\frac{\lambda\_{\alpha\_2}^2\gamma\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}}{\frac{\lambda\_{\alpha\_2}^2}{2}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}^2+2c\mu\_{\alpha\_2}^2}
+\frac{\lambda\_{\alpha\_1}^2\gamma\_{\alpha\_1}\langle u,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}}{\frac{\lambda\_{\alpha\_1}^2}{2}\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2+2c\mu\_{\alpha\_1}^2}
+\frac{\gamma\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}}{\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}}\Bigg]\\
&\qquad\times\mathcal{F}\_{c,A,B}(f\_1\ast f\_2)\_A(y).
\end{split}$$

#### *6.3. Application of Theorem 7*

We finish this paper by giving an application of Equation (25) in Theorem 7. Equation (25) tells us that

$$\begin{split}
&\int\_{\mathbf{H}'} f(Ax+By)\sum\_{j=1}^{r}\gamma\_{\alpha\_j}\langle x,e\_{\alpha\_j}\rangle\_{\mathbf{H}}\langle u,e\_{\alpha\_j}\rangle\_{\mathbf{H}}\,dg\_c(x)\\
&\qquad=c\,\mathcal{F}\_{c,A,B}(\delta\_A f(\cdot|u))(y)-c\,\mathcal{F}\_{c,A,B}(f)(y).
\end{split}\tag{27}$$

The left-hand side of Equation (27) contains a polynomial weight, and so it is not easy to calculate directly. However, by using Equation (27), we can calculate it very easily via the Wiener transform and the first variation. We shall explain this with an example.

**Example 3.** *Let f*1, *f*2, *A and B be as in Example 1. Then we have*

$$\mathcal{F}\_{c,A,B}(\delta\_A f\_1(\cdot|u))(y)=2\mu\_{\alpha\_2}\lambda\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}$$

*and*

$$\mathcal{F}\_{c,A,B}(f\_1)(y)=\lambda\_{\alpha\_2}^2\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}^2+2c\mu\_{\alpha\_2}^2.$$

*Hence, using Equation* (27)*, we obtain that*

$$\begin{split}
&\int\_{\mathbf{H}'}\big[\mu\_{\alpha\_2}\langle x,e\_{\alpha\_2}\rangle\_{\mathbf{H}}+\lambda\_{\alpha\_2}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big]^2\,\mu\_{\alpha\_2}\langle x,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\,dg\_c(x)\\
&\qquad=2c\mu\_{\alpha\_2}\lambda\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}-c\lambda\_{\alpha\_2}^2\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}^2-2c^2\mu\_{\alpha\_2}^2.
\end{split}$$

*In addition, we have*

$$\begin{split}
\mathcal{F}\_{c,A,B}(\delta\_A f\_2(\cdot|u))(y)&=2c\mu\_{\alpha\_1}^3\lambda\_{\alpha\_2}\langle u,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}+2\mu\_{\alpha\_1}^3\lambda\_{\alpha\_1}^2\langle u,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\\
&\quad+\mu\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big(c\mu\_{\alpha\_1}^2+\lambda\_{\alpha\_1}^2\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2\big)
\end{split}$$

*and*

$$\mathcal{F}\_{c,A,B}(f\_2)(y)=\big[\lambda\_{\alpha\_1}^2\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2+2c\mu\_{\alpha\_1}^2\big]\big[\mu\_{\alpha\_2}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big].$$

*Thus, from Equation* (27) *we conclude that*

$$\begin{split}
&\int\_{\mathbf{H}'}\big[\mu\_{\alpha\_1}\langle x,e\_{\alpha\_1}\rangle\_{\mathbf{H}}+\lambda\_{\alpha\_1}\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\big]^2
\big[\mu\_{\alpha\_2}\langle x,e\_{\alpha\_2}\rangle\_{\mathbf{H}}+\lambda\_{\alpha\_2}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big]
\sum\_{j=1}^{2}\gamma\_{\alpha\_j}\langle x,e\_{\alpha\_j}\rangle\_{\mathbf{H}}\langle u,e\_{\alpha\_j}\rangle\_{\mathbf{H}}\,dg\_c(x)\\
&\qquad=2c^2\mu\_{\alpha\_1}^3\lambda\_{\alpha\_2}\langle u,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}+2c\mu\_{\alpha\_1}^3\lambda\_{\alpha\_1}^2\langle u,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}\\
&\qquad\quad+c\mu\_{\alpha\_2}\langle u,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big(c\mu\_{\alpha\_1}^2+\lambda\_{\alpha\_1}^2\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2\big)
-c\big[\lambda\_{\alpha\_1}^2\langle y,e\_{\alpha\_1}\rangle\_{\mathbf{H}}^2+2c\mu\_{\alpha\_1}^2\big]\big[\mu\_{\alpha\_2}\langle y,e\_{\alpha\_2}\rangle\_{\mathbf{H}}\big].
\end{split}$$

#### **7. Conclusions**

Comparing the results and formulas of previous papers [1–3,7–9,15] with the results and formulas established in Sections 3–5, we note that all of them can be explained via the eigenvalues of operators on Hilbert space. As the results of Sections 3–5 show, we are able to obtain various relationships that are not found in previous research. We also see in Section 6 that our results can be applied to various functions arising in applications across many fields. Therefore, the results of this paper are structured in a generalized form.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The study did not report any data.

**Acknowledgments:** The author would like to express gratitude to the referees for their valuable comments and suggestions, which have improved the original paper.

**Conflicts of Interest:** The author declares no conflict of interest regarding the publication of this article.

#### **References**


## *Article* **Mathematical Description and Laboratory Study of Electrophysical Methods of Localization of Geodeformational Changes during the Control of the Railway Roadbed**

**Artem Bykov 1,\*, Anastasia Grecheneva 2, Oleg Kuzichkin 3, Dmitry Surzhik 3, Gleb Vasilyev <sup>3</sup> and Yerbol Yerbayev <sup>4</sup>**


**Abstract:** Currently, the load on railway tracks is increasing due to the growth of freight traffic. Accordingly, increasingly serious requirements are being imposed on the reliability of the roadbed, which makes studies of methods for monitoring the integrity of the railway roadbed relevant. The article provides a mathematical substantiation of the possibility of using seismoelectric and phasemetric methods of geoelectric control of the roadbed of railway tracks in order to identify defects and deformations at an early stage of their occurrence. The methods of laboratory modeling of the natural–technical system "railway track" are considered in order to assess the prospects of using the presented methods. The results of laboratory studies are presented, which have shown their high efficiency in registering a weak useful electrical signal caused by seismoacoustic effects against the background of high-level external industrial and natural interference. In the course of laboratory modeling, it was found that, in the presence of an elastic harmonic action with a frequency of 70 Hz, a spectral component at this frequency appears in the amplitude spectra of the output electrical signals of the investigated geological medium probed by a harmonic electrical signal with a frequency of 40 Hz. In laboratory modeling, phase images were also obtained for the receiving line when simulating the process of sinking of the soil base of the railway bed, confirming the presence of a transient process that causes a shift of the initial signal phase Δϕ = 40◦ by ~45◦ (Δϕ' = 85◦), which allows detection of the initial stage of failure formation.

**Keywords:** railway transport; roadbed; geodynamic processes; seismoelectric method; stress–strain process; transfer functions; frequency characteristics; phasometric method; laboratory modeling

#### **1. Introduction**

The railway is one of the most important cargo transportation systems in the world due to the rapid development of this class of heavy transport, as well as its efficiency in comparison with other transport systems. At the same time, the increase in railway maintenance costs is directly related to the increase in the volume of cargo transportation [1]. Accordingly, the requirements for the reliability and safety of the functioning of railway tracks, and of railway infrastructure in general, are increasing significantly; meeting these requirements through technical monitoring is extremely important and, as a result, is the focus of plentiful scientific and applied research [2–4].

**Citation:** Bykov, A.; Grecheneva, A.; Kuzichkin, O.; Surzhik, D.; Vasilyev, G.; Yerbayev, Y. Mathematical Description and Laboratory Study of Electrophysical Methods of Localization of Geodeformational Changes during the Control of the Railway Roadbed. *Mathematics* **2021**, *9*, 3164. https://doi.org/10.3390/ math9243164

Academic Editor: Francesco Mainardi

Received: 19 October 2021 Accepted: 6 December 2021 Published: 8 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

This task is complicated by the fact that special requirements for the construction and operation of railways are imposed in areas of activation of geodynamic processes, where there is a significant risk of landslides and the formation of karst craters, which are associated with the intensive cyclic impact of passing trains on the ground base [4]. On the Richter scale, the effect of such transport vibration is equivalent to an earthquake of magnitude 3–6 [5]. Moreover, it is known that landslide deformations affect natural and artificial geological environments and can have a varied nature of development, and the activation of karst processes in the area of railway tracks can lead to a gradual destruction of the subgrade (Figure 1) [4]. In addition, the reasons for the deformation of the roadbed may be a mismatch between the strength of the track superstructure and the intense dynamic loads of railway transport, as well as the unfavorable effects of climatic and engineering–geological factors.

It should be noted that the main undesirable activation of near-surface geodynamics arises not at the stage of engineering surveys and construction works, but in the process of direct operation of railway tracks and in most cases is of a sudden, spontaneous nature.

**Figure 1.** Sudden destruction of the railway roadbed as a result of natural factors.

As a result, the indicated defects and deformations of the subgrade lead to the transition of the "railway track-subsoil" system into an unstable state, the moment of occurrence of which is not predicted by modern control systems. Geodeformational changes in the soil lead to a significant deterioration of the railway track; since the quality and condition of the track directly affect the safety of train movement [3] and the efficiency of its interaction with infrastructure [4], their negative impact on the track must be reduced. To detect the early stage of the development of deformation processes of the roadbed of a railway track and to predict the dynamics of their development in time, it is important to have current and model information about its state and possible undesirable changes.

Thus, the purpose of the research is to develop methods for monitoring geodynamic changes in the railway roadbed, to detect the appearance of various kinds of anomalies and inhomogeneities, indicating the development of destructive processes in the soil, to create and test a laboratory model of the natural–technical system "railway track" in order to assess the prospects of using the presented methods.

#### **2. Literature Review**

The most reliable method of monitoring the condition of the railway roadbed at the moment is surveying and geodetic observation at the reference points of profile lines [6]. However, due to the large length of railway tracks, the use of this method is very difficult, and control of the roadbed and the adjacent territory by engineering–geological methods is impractical. In this case, it is appropriate to involve geophysical methods, which are widely used in exploration and engineering geology.

Currently, ground-penetrating radar sounding [7], vibroseismic [8–11], electrometric [9–12] and some other methods are traditionally used to obtain information about the structure of the upper layers of various geological environments. However, in some cases, such as when karst cavities have a developed structure, analysis by groups of these methods is very difficult [13,14].

According to the results of numerous studies, it has been established that when organizing automated control of geodynamic objects, the most promising is the use of geoelectric methods of media sounding. They provide effective observations of geological objects, as well as assessment of the state and forecast of their development, which is determined by their high technology [8,10–12,15].

However, as practice shows, the application in geodynamic monitoring of any single method chosen from those considered above is not effective enough. As a result, it is necessary to choose the most suitable research method for each specific task [16]. For instance, dynamic prediction models have been studied for tunnels [17,18], railway tracks [19], oil sludge strata [20], various underground structures [21] and geotechnical applications in general [22]. However, the usual problem of such methods is the ambiguity of the assessment of geophysical data.

The joint use of geoelectric and seismic methods, that is, the use of the seismoelectric control method [23–29], allows increasing the efficiency of geological media studies by reducing the ambiguity of the assessment of geophysical data.

In the course of laboratory modeling, it is planned to study the amplitude spectra of the output electrical signals of the monitored geological environment in the presence of elastic harmonic action, which will allow using this effect to obtain more detailed information about the structure of the soil during sounding by electric fields. In this case, the registration of the phase component of the signal of the receiving lines will presumably increase the noise immunity of measurements. The presence of a transient process causing a shift in the initial phase of the signal of the receiving lines of the installation will indicate the initial stage of the dip formation.

In this regard, the purpose of the work is to substantiate and study an integrated approach to solving problems of monitoring the subsurface of railway tracks and the adjacent territory to identify the initial stage of defects and deformations in it based on the use of the seismoelectric method.

#### **3. Mathematical Description of the Seismoelectric Method of Monitoring the Ground Bed of Railway Tracks**

The seismoelectric control method is based on secondary seismic effects. It consists of the interpretation of signals of an electrical or seismic nature, recorded by the geodynamic monitoring system, which are received when vibrations of these types are simultaneously excited in the studied medium. The method is based on the assumption that the real geological environment is a porous polyphase complex structure in an energetically unstable state. The combined effect of physical fields of various nature (electromagnetic and elastic) on the studied geological environment can lead to a change in its physical properties. The seismoelectric effect of the first kind is the phenomenon of changing the electrical resistance of the geological medium under the influence of elastic vibrations; the second kind is the phenomenon of excitation of an electromagnetic field that occurs under similar conditions. These effects determine, first of all, the nature of the impact on the results of electrical measurements of vibrational seismic–acoustic noise caused by the movement of railway transport. In addition, the nature and degree of their manifestation depend on a number of additional factors, which include the mineral composition of the solid skeleton of the geological environment and its structure, porosity, permeability and structure of pore channels, composition and volume of mineral cement, composition and mineralization of the liquid saturating the pores, etc.

The most important advantage of the seismoelectric method in comparison with traditional methods of applied geophysics is the unambiguity of solving inverse problems. In addition, the role of this method increases significantly with increasing depth and resolution of studies.

In the literature [8,9] it has been demonstrated that, in the practical use of the seismoelectric method, the study of the recorded electrical signals is the most informative. This is due to two factors. Firstly, structural changes in various media primarily affect their average conductivity, which is characterized by the seismoelectric effect of the first kind. Secondly, there is always a double electric layer with a mobile diffuse part at the boundaries of solid media and pore fluid. Elastic action on such a medium often leads to a relative movement of the pore fluid, which, in turn, generates an external electric current that creates an electromagnetic field, i.e., the seismoelectric effect of the second kind. In addition, various rock-forming minerals have different types of conductivity: ionic, electronic and hole. The contact of minerals with different types of conductivity can also lead to the emergence of new electric fields. Thus, it is proposed to consider the electrical component as the observed component of the method under study.

The principal possibility of monitoring the roadbed with the use of natural or artificially created geophysical fields is determined by the fact that the selected objects (inhomogeneities) differ in properties from the host medium and as a result create anomalous geophysical fields. In our case, the source of elastic vibrations can be railway trains passing through the studied area (Figure 2). These effects have high energy, and the characteristics of this effect are a priori known [3,4,7,12,20,29]. Geodynamic variations of these fields are a consequence of the action of both natural factors and man-made impacts on the environment, and this makes it possible to distinguish them based on the processing of geodynamic data. At the same time, the seismic impact will highlight the characteristic features in the analyzed signal, adding auxiliary information about the structure of the medium to the controlled parameters, manifested at the combination frequencies of the impact.

**Figure 2.** The principle of application of seismoelectric control of the railway trackbed.

Since the ground base of a railway track can be represented as a base formed by solid particles and a liquid pore filler, the type of deformation processes developing in it is determined primarily by the physical causes and properties of these components and is described by the dependence of the stress–strain state of the soil on the applied load, which contains four phases (Figure 3).

**Figure 3.** Model of the stress–strain process of the ground base of the railway track.

According to this model, the elastic deformation phase (0) is up to 10% of the permissible load on the soil and is limited by the value of structural strength, assuming that no structural changes occur in the soil under these loads. The compaction phase (I) is determined by the linear relationship between the applied load and the total deformation of the soil base. The shift phase (II) characterizes the processes of destruction determined by significant shear deformations due to exceeding the limit. The extrusion phase (III) characterizes irreversible deformations, in which the soil is squeezed out from under the railway track.

In accordance with the seismoelectric effect of the first kind, the dependence of the electrical resistance of the ground base on the applied mechanical load (due to the movement of railway transport along the track) has a similar character and can be considered its amplitude characteristic, while the amplitude and phase spectra of the response to this seismic action are indicators of the development of deformation processes.

The advantage of this approach is that the railway transport generates an intense and long-lasting seismic acoustic signal, the parameters of which change slightly over time relative to the passing train, which eliminates the need to use additional sources of seismic signals. Moreover, it is known that the level of noise generated by a rolling stock consists of the following components: drive noise, aerodynamic noise and wheel rolling noise [30,31]. The first two types of noise can be considered stationary background noise, and the third one arises due to the contact of wheels with the rail and is associated with the high rolling pressure of steel on steel, characteristic of the "wheel-rail" system [32]. In the frequency domain, these noises are sources of low-frequency vibrations, the propagation speed of which coincides with the speed of the train movement. As a result, the spectrum of the resulting seismic signal can be divided into four frequency ranges from 3 to 80 Hz.

In accordance with the above, when implementing the seismoelectric method, as an informative parameter, it is necessary to use the complex transfer function of the investigated section of the geological environment [33]:

$$\dot{H}(j\omega,\Delta u)=\frac{\dot{E}(j\omega)}{\dot{I}(j\omega)}=\dot{Z}\_A(j\omega)+\dot{Z}\_B(j\omega)+\sum\_{i=1}^{N}\dot{Z}\_i(j\omega,\Delta u)\tag{1}$$

where $\dot{Z}\_A(j\omega)$, $\dot{Z}\_B(j\omega)$ are the complex grounding resistances; $\dot{E}(j\omega)$, $\dot{I}(j\omega)$ are the complex parameters of the electric field; *ω* is the frequency of the probing signal; and $\dot{Z}\_i(j\omega,\Delta u)$ are the complex resistances of the *i*-th elements of the studied section of the geological environment under the seismoacoustic influence Δ*u*.

The expression for the transfer function of the studied geological medium (1) is a geoelectric model of complex resistances connected in series. Such a representation allows us to use the pattern of an *N*-layer imperfect dielectric. This model contains *N* elements, where the *i*-th element has thickness *di* and the following electrical parameters: dielectric permittivity *εi* and electrical resistivity *ρi*. The transfer function of the

area under investigation can be expressed in the form of *RC* circuits connected in series, where each circuit has the following parameters [34]:

$$C\_i=\varepsilon\_i S(j\omega,\Delta u\_i)/d(\Delta u\_i),\qquad R\_i=\rho\_i\,d(\Delta u\_i)/S(j\omega,\Delta u\_i)\tag{2}$$

where the effective area *S*(*jω*) is determined by the skin effect.

If we omit the grounding parameters, then we can simply express the transfer function of the geoelectric section in terms of the dielectric parameters (2):

$$\dot{H}(j\omega,\Delta u)=\sum\_{i=1}^{N}\frac{R\_i}{1+x\_i^2}-j\sum\_{i=1}^{N}\frac{R\_i x\_i}{1+x\_i^2}\tag{3}$$

where *xi* = *ωRiCi* = *ωεiρi*.
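As a numerical illustration of Equations (2) and (3), the following sketch sums the series *RC* contributions of an *N*-layer section. The layer permittivities, resistivities, thicknesses and areas are purely illustrative assumptions, not values from the experiment.

```python
import math

def transfer_function(omega, layers):
    """Complex transfer function of an N-layer section, Equation (3).

    Each layer is (epsilon_i, rho_i, d_i, S_i); R_i and C_i follow
    Equation (2), and x_i = omega*R_i*C_i = omega*epsilon_i*rho_i.
    """
    re, im = 0.0, 0.0
    for eps, rho, d, S in layers:
        R = rho * d / S           # Equation (2): layer resistance
        x = omega * eps * rho     # d and S cancel in R_i*C_i
        re += R / (1.0 + x * x)
        im -= R * x / (1.0 + x * x)
    return complex(re, im)

# Hypothetical three-layer section (SI units, illustrative values only).
layers = [
    (8.85e-11, 200.0, 0.5, 1.0),
    (8.85e-11, 50.0, 1.0, 1.0),
    (8.85e-11, 500.0, 0.3, 1.0),
]
H = transfer_function(2.0 * math.pi * 90.0, layers)  # 90 Hz probing signal
print(abs(H), math.degrees(math.atan2(H.imag, H.real)))
```

At low frequencies each *xi* is small, so the module approaches the sum of the layer resistances and the phase tends to zero, which gives a quick sanity check on the implementation.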

At the same time, when an elastic seismic wave propagates in the geological environment, each of its *i*-th element undergoes a mechanical influence described by the deformation tensor Δ*u* = {Δ*ux*,Δ*uy*, Δ*uz*}.

The transfer function (3) can be written in exponential form as

$$H(p,\Delta u)=A(p,\Delta u)e^{j\varphi(p,\Delta u)}\tag{4}$$

where *p* is the Laplace operator, $A(p,\Delta u)=|H(p,\Delta u)|=\sqrt{\mathrm{Re}^2[H(p,\Delta u)]+\mathrm{Im}^2[H(p,\Delta u)]}$ is the module of the transfer function, Re and Im are the real and imaginary parts of a complex function, respectively, and $\varphi(p,\Delta u)=\arg[H(p,\Delta u)]=\operatorname{arctg}\frac{\mathrm{Im}[H(p,\Delta u)]}{\mathrm{Re}[H(p,\Delta u)]}$ is the argument of the transfer function.

The module and phase of this transfer function can be calculated based on the input– output model from the ratio

$$H(p,\Delta u)=\frac{Y(p,\Delta u)}{X(p,\Delta u)}\tag{5}$$

where *X*(*p*,Δ*u*) is the probing electrical signal in operator form, *Y*(*p*, Δ*u*) is the recorded electrical signal in operator form.

In this case, the ground base of the railway track can be represented as a dynamic link—Figure 4.

$$X(p,\Delta u)\;\longrightarrow\;\boxed{H(p,\Delta u)}\;\longrightarrow\;Y(p,\Delta u)$$

**Figure 4.** Representation of the ground base of the railway track in the form of a dynamic link.

By converting (5) taking into account (4), we obtain

$$H(p,\Delta u)=\frac{|Y(p,\Delta u)|\,e^{j\varphi\_Y(p,\Delta u)}}{|X(p,\Delta u)|\,e^{j\varphi\_X(p,\Delta u)}}=\frac{|Y(p,\Delta u)|}{|X(p,\Delta u)|}\,e^{j\Delta\varphi(p,\Delta u)}\tag{6}$$

where Δ*φ*(*p*, Δ*u*) = *φY*(*p*, Δ*u*) − *φX*(*p*, Δ*u*).

It follows from the last expression that the control of the ground base of the railway track can be carried out by tracking both the module and the argument (phase) of the recorded geoelectric signals (since the parameters of the probing signals are constant and are a priori known).
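The module and argument tracking described above can be sketched numerically: given sampled probing and recorded signals, the complex amplitude of each at the probing frequency is estimated by correlation with a complex exponential, and their ratio yields the module and phase difference as in Equations (5) and (6). The sampling rate, amplitudes and the 40-degree phase shift below are illustrative assumptions, not measured values.

```python
import math

def tone_phasor(signal, fs, f0):
    """Complex amplitude of the f0 component of `signal` sampled at fs,
    estimated by correlation with exp(-j*2*pi*f0*t) over a whole number
    of periods."""
    n = len(signal)
    acc = complex(0.0, 0.0)
    for k, s in enumerate(signal):
        ang = 2.0 * math.pi * f0 * k / fs
        acc += s * complex(math.cos(ang), -math.sin(ang))
    return 2.0 * acc / n

# Synthetic probing signal X and recorded signal Y (illustrative values).
fs, f0 = 10000.0, 90.0
n = 10000  # one second of data -> an integer number of 90 Hz periods
x = [0.2 * math.cos(2.0 * math.pi * f0 * k / fs) for k in range(n)]
y = [0.05 * math.cos(2.0 * math.pi * f0 * k / fs - math.radians(40.0))
     for k in range(n)]

X, Y = tone_phasor(x, fs, f0), tone_phasor(y, fs, f0)
ratio = Y / X
gain = abs(ratio)                                        # module of H at f0
dphi = math.degrees(math.atan2(ratio.imag, ratio.real))  # phase of H at f0
print(gain, dphi)
```

Because the parameters of the probing signal are known a priori, both outputs can be tracked continuously; a change in either one signals a change in the state of the ground base.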

#### **4. Methodology of Experimental Research on the Model of the Natural–Technical System "Railway Track"**

To assess the prospects of the seismoelectric method in the railway roadbed monitoring, we have done laboratory modeling of the railway track. The laboratory installation

is shown in Figure 5. It allows us to simulate the seismic influence of a passing train, as well as natural processes (changes in soil moisture, suffosion, karst, landslide processes, sinkholes).

The installation consists of the following parts:


**Figure 5.** Geodynamic object model.

Prior to the current study, the noise of a railway train was recorded by a microphone located in the ground base of the railway track. During direct modeling, the seismic impact was simulated by reproducing the recorded noise using a seismic signal source (a vibration loudspeaker).

During the study, we modeled the soil collapse while recording changes in the characteristics of seismic and electrical signals, thus obtaining information about the primary stage of the soil collapse. The proposed approach allows detection of the primary phase of destruction of the railway roadbed, as well as prevention of man-made catastrophes in the natural–technical system "railway track".

#### **5. Results and Discussion**

Figure 6 presents the amplitude spectra of the output electrical signals of the investigated geological environment in the absence (a) and presence (b) of elastic action (where applied, the elastic harmonic impact had a frequency of 70 Hz). At the same time, the probing harmonic electrical signal had a frequency of 40 Hz.

It can be seen from the obtained spectrograms that the presence of an elastic seismoacoustic impact source in the simulated ground base of the railway track leads to the formation of a spectral component at the frequency of the seismoacoustic impact in the spectrum of the recorded electrical signal (seismoelectric effect of the second kind). This indicates that, in this case, the geological environment has a normal specific resistivity (seismoelectric effect of the first kind) and there is no soil compaction deformation. Depending on the parameters of the external elastic influences applied to the investigated area, the electrical parameters of the medium change their characteristics accordingly. This makes it possible, during further processing of the received electrical signal, to determine the presence of heterogeneity in the medium, its depth and its deformation state. In this case, there is no need to place sensors along the entire monitored area.

**Figure 6.** Amplitude spectra of the output electrical signals of the investigated geological environment in the absence (**a**) and presence (**b**) of elastic action.
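A minimal sketch of this detection step, under the assumption that the recorded electrical signal can be modeled as the 40 Hz probing component plus a weaker 70 Hz component induced by the elastic action (the amplitudes and the detection threshold are illustrative, not measured):

```python
import math

def bin_magnitude(signal, fs, f0):
    """Magnitude of the f0 spectral component via a single-bin DFT."""
    n = len(signal)
    re = sum(s * math.cos(2.0 * math.pi * f0 * k / fs)
             for k, s in enumerate(signal))
    im = sum(-s * math.sin(2.0 * math.pi * f0 * k / fs)
             for k, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs, n = 1000.0, 1000  # one second of data
# Modeled electrical signal: 40 Hz probing component plus a weaker 70 Hz
# component attributed to the elastic action (illustrative amplitudes).
sig = [1.0 * math.cos(2.0 * math.pi * 40.0 * k / fs)
       + 0.3 * math.cos(2.0 * math.pi * 70.0 * k / fs) for k in range(n)]

m40 = bin_magnitude(sig, fs, 40.0)
m70 = bin_magnitude(sig, fs, 70.0)
elastic_present = m70 > 0.1 * m40  # assumed detection threshold
print(m40, m70, elastic_present)
```

Evaluating only the two bins of interest, rather than a full FFT, mirrors the fact that both the probing frequency and the seismoacoustic frequency are known in advance.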

Further studies have shown that the analysis of changes in the phase characteristics of the transfer function (6) has a number of significant advantages over the amplitude method; in particular, it is characterized by increased sensitivity and noise immunity [30], and it also allows detection and localization of geodynamic processes in geological environments [30–32]. As an example, Figure 7 shows the phase spectrum of the output electrical signals of the geological medium under study, measured while applying the 70 Hz elastic action, because at this frequency the phase shift of the seismoacoustic action is clearly traced.

**Figure 7.** Phase spectrum of the output electrical signals of the studied geological medium in the presence of elastic action.

Studies [25–27] describe the modified phase-measuring method of geoelectric control, namely the use of several current signal sources placed close to the controlled object and an array of vector sensors for measuring the electric field. At the same time, the registration of phase characteristics at a fixed position of the sources and the measuring basis, with the possibility of controlling the parameters of the probing signals, is based on the fact that the primary and secondary electric fields are vector quantities.

Figure 8 shows a diagram of the laboratory experiment to control the process of occurrence and subsequent growth of a cavity in the ground foundation of a railway track by the phasometric method. The research was carried out on a physical model of a railway track using a specially developed measuring phasometric system. In this case, the current sources designated as A and B (Figure 8) form quadrature harmonic signals with a phase shift of 90 degrees. Sources A and B generate an electric field signal at point O, described as follows:

$$\vec{E}\_{AX}=\vec{E}\_{AX}^{\,0}+\Delta E\_{AX},\qquad \vec{E}\_{BX}=\vec{E}\_{BX}^{\,0}+\Delta E\_{BX}\tag{7}$$

where $\vec{E}^{\,0}$ is the electric field signal recorded before the formation of deformation processes in the ground, and Δ*E* is the anomalous component of the electric field caused by the presence of deformation processes in the ground.

**Figure 8.** Diagram of the laboratory experiment.

When using multipolar geodynamic control systems at registration points (M1–M4, N1–N4), we have to deal with an elliptically polarized geoelectric field. Moreover, in this case, vector sensors of electric field measurement with the same indices form pairs; the signals from each pair are sent to the measuring system for processing. In this case, data processing of recorded geoelectric signals presupposes the formation of their difference signal (for filtering in-phase interference), its amplification, phase detection (in relation to the reference signal) and low-pass filtering. The principle of registration for the phase of the geoelectric field at an arbitrary receiving point is illustrated in Figure 9.
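The processing chain just described (difference signal, phase detection against a reference signal, low-pass filtering) is, in essence, digital lock-in detection. As a rough illustration only (this is not code from the study, and all signal parameters below are arbitrary test values), a minimal Python sketch might look like this:

```python
import math

def lockin_phase(signal, fs, f_ref):
    """Digital lock-in (synchronous) detection: form in-phase and quadrature
    products with a reference at f_ref and low-pass them by averaging over
    a whole number of reference periods; return (amplitude, phase)."""
    n = len(signal)
    periods = math.floor(n * f_ref / fs)      # whole reference periods available
    n_use = round(periods * fs / f_ref)       # samples spanning those periods
    i_sum = q_sum = 0.0
    for k in range(n_use):
        t = k / fs
        i_sum += signal[k] * math.cos(2.0 * math.pi * f_ref * t)
        q_sum += signal[k] * math.sin(2.0 * math.pi * f_ref * t)
    i_avg, q_avg = 2.0 * i_sum / n_use, 2.0 * q_sum / n_use
    # for s(t) = A cos(2 pi f t + phi): i_avg = A cos(phi), q_avg = -A sin(phi)
    return math.hypot(i_avg, q_avg), math.atan2(-q_avg, i_avg)
```

Averaging over a whole number of reference periods acts as the low-pass filter, rejecting components away from the reference frequency.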

**Figure 9.** The registration principle for the phase of the geoelectric field at an arbitrary receiving point.

Using this method, simulation of the process of sinkhole formation of the ground base of the railway track in the presence of train noise was carried out (the seismogram taken from the short-period seismic meter ZET 7156 is shown in Figure 10). The electrodes A and B were used as current sources, while the electrodes M and N with corresponding numbers were receivers. The following distances between electrodes were used: between A and B—80 cm, between M1 and N1—70 cm, between M2 and N2—60 cm, between M3 and N3—50 cm, between M4 and N4—40 cm. The frequency of probing electrical signals was 90 Hz, the amplitude of each probing electrical signal was 200 mV, and the analog-to-digital converter was set at a sampling rate of 10,101 Hz.

**Figure 10.** Seismogram of the railway track in the presence of train noise.

Figure 11 shows an example of a phase image obtained as a result of processing in the time domain for the receiving line M3N3.

**Figure 11.** The example of a phase image for the receiving line M3N3, obtained by simulating the process of sinkhole formation of the ground base of the railway track.

The analysis of this process in the time domain makes it possible, at an early stage, to unambiguously determine the time interval corresponding to the active development of the sinkhole formation process and the direction of its change in time, i.e., to localize the source of the deformation process and predict its future dynamics.

For a comparison, Figure 12 shows a variant of solving a similar problem by the classical seismoacoustic method using a highly sensitive seismometer.

A comparison of the two approaches to monitoring the underground base of the railway track shows that the traditional method allows only the registration of certain geodynamic events in the geological environment, without the possibility of their qualitative interpretation. This limitation is removed when the seismoelectric method is used and the phase characteristics of the recorded geoelectric signals are monitored.

Subsequently, features of soil areas with impaired integrity can be identified from the recorded signal [31] using Scale-Invariant Feature Transform (SIFT) and Support Vector Machine (SVM) methods [35–40]. In the presence of a high noise level, a backpropagation neural network can be used, which allows an accuracy of 97% to be achieved when processing monitoring data [36,37,39].

**Figure 12.** Seismogram of the process of sinkhole formation of the soil base of the railway roadbed.

#### **6. Conclusions**

The paper demonstrates that for the diagnostics of the railway roadbed, a number of features not found in traditional engineering geology should be taken into account. The article also theoretically and practically substantiates an integrated approach to solving problems of monitoring the roadbed of a railway on the basis of combining two geophysical methods for monitoring the natural environment: geoelectric and seismoacoustic methods.

From the presented research results, it has been established that the proposed methods for monitoring the railway trackbed have high sensitivity to the primary phase of the soil destruction.

During laboratory modeling, it was found that, in the amplitude spectra of the output electrical signals of the investigated geological environment subjected to an elastic harmonic action at a frequency of 70 Hz, a harmonic electrical signal component at 40 Hz was observed; this effect makes it possible to obtain more detailed information about the structure of the soil when sounding with electric fields. In addition, the phase images obtained for the receiving line of the installation when simulating the sinking of the soil base of the railway bed confirmed the presence of a transient process that shifts the initial signal phase Δϕ = 40° by ~45° (Δϕ′ = 85°) and precedes the initial stage of failure.

Moreover, the joint processing of geoelectric and seismoacoustic signals makes it possible to detect heterogeneity in the environment, as well as its depth and its geodynamic variations. The phasometric ground control method allows increasing the efficiency of the railway roadbed monitoring systems, in particular, through the use of vertical electrotomography methods with the possible localization of heterogeneity. In this case, the metrological stability of geodynamic measurements, the insensitivity of the technique to seismic interference, the simplicity of varying the installation size and the organization of three-dimensional soil monitoring is ensured.

The information obtained during the monitoring of the roadbed will improve the reliability and safety of the functioning of railway transport, especially in areas with intensive geodynamic processes.

**Author Contributions:** Data curation, A.B.; Formal analysis, D.S.; Investigation, A.G.; Supervision, O.K.; Writing—original draft, O.K.; Writing—review & editing, G.V. and Y.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Ministry of Science and Higher Education of the Russian Federation in accordance with agreement No. 075-15-2020-905, dated 16 November 2020, on providing a grant in the form of subsidies from the Federal budget of the Russian Federation. The grant was provided for state support for the establishment and development of a world-class Scientific Center, "Agrotechnologies for the Future".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** Anastasia Grecheneva has been supported by the Ministry of Science and Higher Education of the Russian Federation in accordance with agreement No. 075-15-2020-905, dated 16 November 2020, on providing a grant in the form of subsidies from the Federal budget of the Russian Federation. The grant was provided for state support for the establishment and development of a world-class Scientific Center, "Agrotechnologies for the Future".

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **The Integral Mittag-Leffler, Whittaker and Wright Functions**

**Alexander Apelblat <sup>1</sup> and Juan Luis González-Santander 2,\***


**\*** Correspondence: gonzalezmarjuan@uniovi.es

**Abstract:** Integral Mittag-Leffler, Whittaker and Wright functions, with integrands similar to those which already exist in the mathematical literature, are introduced for the first time. For particular values of the parameters, they can be presented in closed form. In most reported cases, these new integral functions are expressed as generalized hypergeometric functions, but also in terms of elementary and special functions. The behavior of some of the new integral functions is presented in graphical form. By using the MATHEMATICA program to sum the infinite series that define the Mittag-Leffler, Whittaker and Wright functions and their corresponding integral functions, these functions and many new Laplace transforms of them are also reported in the Appendices for integer and fractional values of the parameters.

**Keywords:** integral Mittag-Leffler functions; integral Whittaker functions; integral Wright functions; Laplace transforms

#### **1. Introduction**

The appearance of the special functions of mathematical physics was associated with the solutions of particular ordinary differential equations, while the integral special functions arrived much later in the mathematical literature, after the properties of these functions had been investigated. Integral special functions were introduced as new special functions which can be applied in many circumstances, especially in operational calculus, where they frequently serve as direct and inverse integral transforms. The form of the integrand is identical for all integral functions, but the limits of integration differ in order to ensure the convergence of the defining integrals. There are two types of integral special functions: those with elementary functions in their integrands and those with special functions. To the first group belong the exponential integral −Ei(−*x*), the sine and cosine integrals si(*x*), Si(*x*), ci(*x*) and Ci(*x*), and the corresponding integrals of hyperbolic functions, Shi(*x*) and Chi(*x*). These functions are defined in the following way [1–5]

$$\begin{aligned} \mathrm{E}\_1(x) &= -\mathrm{Ei}(-x) = \int\_x^{\infty} \frac{e^{-t}}{t}\,dt, \quad x > 0, \\ \mathrm{Si}(x) &= \int\_0^{x} \frac{\sin t}{t}\,dt, \\ \mathrm{si}(x) &= -\int\_x^{\infty} \frac{\sin t}{t}\,dt = \mathrm{Si}(x) - \frac{\pi}{2}, \\ \mathrm{Ci}(x) &= -\int\_x^{\infty} \frac{\cos t}{t}\,dt = \gamma + \ln x - \int\_0^{x} \frac{1-\cos t}{t}\,dt = -\mathrm{ci}(x), \\ \mathrm{Shi}(x) &= \int\_0^{x} \frac{\sinh t}{t}\,dt, \\ \mathrm{Chi}(x) &= \gamma + \ln x - \int\_0^{x} \frac{1-\cosh t}{t}\,dt, \end{aligned} \tag{1}$$

where *γ* is the Euler–Mascheroni constant. As can be observed in (1), the integral special functions have integrands in the form, *f*(*t*)/*t*, and the intervals of integrations are 0 < *t* < *x*

**Citation:** Apelblat, A.;

González-Santander, J.L. The Integral Mittag-Leffler, Whittaker and Wright Functions. *Mathematics* **2021**, *9*, 3255. https://doi.org/10.3390/math9243255

Academic Editors: Francesco Mainardi and Jaan Janno

Received: 24 November 2021 Accepted: 12 December 2021 Published: 15 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

or *x* < *t* < ∞. A few direct and inverse integral transforms are presented below to illustrate their applications; for example, for the Laplace transformation [6–8],

$$F(s) := \mathcal{L}[f(t)] := \int\_0^\infty e^{-st} f(t) dt,\tag{2}$$

we have

$$\begin{split} \mathcal{L}\left[\frac{1}{\sqrt{t}}\,\mathrm{Ei}(-t)\right] &= -2\sqrt{\frac{\pi}{s}}\,\ln\left(\sqrt{s}+\sqrt{s+1}\right), \quad \operatorname{Re} s > 0, \\ \mathcal{L}[\mathrm{Si}(t)] &= \frac{\cot^{-1} s}{s}, \quad \operatorname{Re} s > 0, \\ \mathcal{L}[\mathrm{si}(t)] &= -\frac{\tan^{-1} s}{s}, \\ \mathcal{L}[\mathrm{Ci}(t)] &= -\frac{\ln\left(1+s^2\right)}{2s}, \\ \mathcal{L}^{-1}\left[\frac{\ln(s+b)}{s+a}\right] &= e^{-at}\left[\ln(b-a) - \mathrm{Ei}((a-b)t)\right], \quad \operatorname{Re}\,(s-a) > 0, \\ \mathcal{L}^{-1}\left[\frac{\ln s}{s^2+1}\right] &= \cos t\,\mathrm{Si}(t) - \sin t\,\mathrm{Ci}(t), \quad \operatorname{Re} s > 0, \\ \mathcal{L}^{-1}\left[\frac{s\ln s}{s^2+1}\right] &= -\sin t\,\mathrm{Si}(t) - \cos t\,\mathrm{Ci}(t). \end{split} \tag{3}$$
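The transform pairs in (3) are easy to spot-check numerically. The following Python sketch (an illustration, not part of the original paper) evaluates Si(*t*) from its Maclaurin series and approximates its Laplace integral by the trapezoid rule, which can then be compared against cot⁻¹(*s*)/*s* = arctan(1/*s*)/*s*:

```python
import math

# precomputed odd factorials (2k+1)! for the Maclaurin series of Si
_FACT = [math.factorial(2 * k + 1) for k in range(40)]

def Si(x):
    """Sine integral via its Maclaurin series:
    Si(x) = sum_k (-1)^k x^(2k+1) / ((2k+1)(2k+1)!); accurate for |x| up to ~12."""
    return sum((-1)**k * x**(2 * k + 1) / ((2 * k + 1) * _FACT[k])
               for k in range(40))

def laplace_si(s, t_max=12.0, n=20000):
    """Trapezoid approximation of int_0^t_max e^(-s t) Si(t) dt;
    the truncated tail is about (pi/2) e^(-s t_max)/s, negligible here."""
    h = t_max / n
    vals = [math.exp(-s * k * h) * Si(k * h) for k in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:n]) + 0.5 * vals[n])
```

For moderate *s* the numerical integral reproduces cot⁻¹(*s*)/*s* to several digits, limited only by the quadrature step.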

Integrands in the second group of integral special functions include special functions, the most well-known and applied of which are the integral Bessel functions (see, e.g., [3,7,9–13])

$$\begin{aligned} \mathrm{Ji}\_{\nu}(x) &= -\int\_x^{\infty} \frac{J\_{\nu}(t)}{t}\,dt, \\ \mathrm{Yi}\_{\nu}(x) &= -\int\_x^{\infty} \frac{Y\_{\nu}(t)}{t}\,dt, \\ \mathrm{Ii}\_{\nu}(x) &= \int\_0^{x} \frac{I\_{\nu}(t)}{t}\,dt, \\ \mathrm{Ki}\_{\nu}(x) &= \int\_x^{\infty} \frac{K\_{\nu}(t)}{t}\,dt. \end{aligned} \tag{4}$$

Already in 1929, van der Pol [9] showed that it is possible to express the differentiation with respect to the order of the Bessel function of the first kind as a convolution integral, which includes the integral Bessel function of the zero-order:

$$\frac{\partial J\_{\nu}(t)}{\partial \nu} = \frac{1}{2} \int\_0^{t} \mathrm{Ji}\_0(t-x)\left[J\_{\nu-1}(x) - J\_{\nu+1}(x)\right] dx. \tag{5}$$

The integral Bessel functions of the zero-order are inverse transforms of the following Laplace transforms [7]

$$\begin{aligned} \mathcal{L}^{-1}\left[\frac{\sinh^{-1} s}{s}\right] &= -\mathrm{Ji}\_0(t), \\ \mathcal{L}^{-1}\left[\frac{\left(\sinh^{-1} s\right)^2}{\pi s}\right] &= \mathrm{Yi}\_0(t), \\ \mathcal{L}^{-1}\left[\frac{\left(\cosh^{-1} s\right)^2}{2s} + \frac{\pi^2}{8s}\right] &= \mathrm{Ki}\_0(t), \end{aligned} \tag{6}$$

where $\sinh^{-1} s = \ln\left(s+\sqrt{s^2+1}\right)$ and $\cosh^{-1} s = \ln\left(s+\sqrt{s^2-1}\right)$.

In analogy to the integral Bessel functions and with the possibility of extension to other special functions, this work introduces three new integral functions. Furthermore, these integral functions guide us toward the establishment of integrals and series. Section 2 explores the integral Mittag-Leffler functions. Sections 3 and 4 discuss the integral Whittaker and Wright functions, respectively. Section 5 contains concluding remarks.

In order to preserve the applied form of notation, the following two integral functions are introduced:

$$\text{Fi}(\mathbf{x}) = \int\_0^\mathbf{x} \frac{f(t) - f(0)}{t} dt,\tag{7}$$

and

$$\text{fi}(x) = \int\_{x}^{\infty} \frac{f(t)}{t} dt. \tag{8}$$

The forms of the integral functions Fi(*x*) and fi(*x*) in (7) and (8) are chosen to ensure the convergence of the integrals, which depends on the behavior of the integrand *f*(*t*)/*t* at the origin and at infinity. Since the explicit expressions for the functions *f*(*t*) are sometimes given in the form *f*(*t^α*) with *α* = ±1/2, ±1, 2, 3, ..., the corresponding change of integration variable in these equations is desirable.

In the case of Mittag-Leffler, Whittaker and Wright functions, for some values of parameters, by using the MATHEMATICA program, it was possible to obtain these integral functions in a closed-form. Derived integral functions are tabulated and also in some cases graphically presented (see [3]).

#### **2. The Integral Mittag-Leffler Functions**

The classical one-parameter and the two-parameter Mittag-Leffler functions are defined by [14]:

$$\begin{aligned} \mathrm{E}\_{\alpha}(x) &= \sum\_{k=0}^{\infty} \frac{x^k}{\Gamma(\alpha k+1)}, \quad \operatorname{Re}\alpha > 0, \\ \mathrm{E}\_{\alpha,\beta}(x) &= \sum\_{k=0}^{\infty} \frac{x^k}{\Gamma(\alpha k+\beta)}, \quad \operatorname{Re}\alpha > 0, \ \operatorname{Re}\beta > 0. \end{aligned} \tag{9}$$

In this investigation, they are only considered for positive real values of the argument, i.e., *x* > 0. In the particular case of positive rational *α* with *α* = *p*/*q* and *p* and *q* positive coprimes, Mittag-Leffler functions are given as a finite sum of generalized hypergeometric functions (see (A3) in Appendix A).
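Although the paper relies on the MATHEMATICA program, the defining series (9) can be summed directly in any language for moderate positive *x*; a minimal Python sketch (an illustration, not the authors' code):

```python
import math

def mittag_leffler(alpha, x, beta=1.0, terms=80):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(x) by direct
    summation of the defining series (9); adequate for moderate x > 0
    and alpha <= 2 (so that math.gamma does not overflow)."""
    return sum(x**k / math.gamma(alpha * k + beta) for k in range(terms))
```

Classical particular cases such as E₁(x) = eˣ, E₂(x) = cosh√x and E₁,₂(x) = (eˣ − 1)/x serve as convenient checks.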

The Laplace transforms of the Mittag-Leffler functions are derived directly from (2) and (9), and we have:

$$\begin{split} \mathcal{L}\left[\mathrm{E}\_{\alpha}(t)\right] &= \int\_0^{\infty} e^{-st} \left[\sum\_{k=0}^{\infty} \frac{t^k}{\Gamma(\alpha k+1)}\right] dt = \sum\_{k=0}^{\infty} \frac{k!}{\Gamma(\alpha k+1)} \left(\frac{1}{s}\right)^{k+1}, \\ \mathcal{L}\left[\mathrm{E}\_{\alpha,\beta}(t)\right] &= \int\_0^{\infty} e^{-st} \left[\sum\_{k=0}^{\infty} \frac{t^k}{\Gamma(\alpha k+\beta)}\right] dt = \sum\_{k=0}^{\infty} \frac{k!}{\Gamma(\alpha k+\beta)} \left(\frac{1}{s}\right)^{k+1}, \quad \operatorname{Re} s > 1. \end{split} \tag{10}$$

For particular values of the parameters *α* and *β*, the explicit form of the Mittag-Leffler functions can be obtained by applying the MATHEMATICA program to the sums of the infinite series in (9); these results are presented in Appendix A. Using Equation (10), many new Laplace transforms of the Mittag-Leffler functions were evaluated, and they are also reported in Appendix A. As in the case where *α* is positive rational, the Laplace transforms of the Mittag-Leffler functions can be expressed as a finite sum of products of generalized hypergeometric functions (see (A4) in Appendix A).

The integral Mittag-Leffler functions are introduced as follows, where the value of the integrand at the origin is subtracted so that the integrals converge for real, positive variable *x* (see Appendix A):

$$\begin{aligned} \mathrm{Ei}\_{\alpha}(x) &= \int\_0^{x} \frac{\mathrm{E}\_{\alpha}(t)-1}{t}\,dt, \\ \mathrm{Ei}\_{\alpha,\beta}(x) &= \int\_0^{x} \frac{\mathrm{E}\_{\alpha,\beta}(t)-1/\Gamma(\beta)}{t}\,dt. \end{aligned} \tag{11}$$

Formally, by introducing (9) into (11) we have

$$\begin{aligned} \mathrm{Ei}\_{\alpha}(x) &= \sum\_{k=1}^{\infty} \frac{x^k}{k\,\Gamma(\alpha k+1)}, \\ \mathrm{Ei}\_{\alpha,\beta}(x) &= \sum\_{k=1}^{\infty} \frac{x^k}{k\,\Gamma(\alpha k+\beta)}. \end{aligned} \tag{12}$$

For several values of parameters *α* and *β*, it is possible to derive the integral Mittag-Leffler functions in a closed-form by applying the MATHEMATICA program to the sums of infinite series in (12). These functions are presented in Tables 1 and 2. As it is observable, most of these integral functions are expressed as generalized hypergeometric series. Typical behavior of one-parameter and two-parameter integral Mittag-Leffler functions is illustrated in Figures 1 and 2.

Evidently, direct integration using (11) also leads to the integral Mittag-Leffler functions. For example, for E₂(*x*) = cosh √*x* we have, according to (1),

$$\mathrm{Ei}\_2(x) = \int\_0^{x} \frac{\cosh\sqrt{t} - 1}{t}\,dt = -2\gamma - \ln x + 2\,\mathrm{Chi}\sqrt{x}, \tag{13}$$

and as expected, this result is identical to that derived from (12) (see Tables 1 and 2).
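The agreement between the series (12) and the defining integral (11) can also be checked numerically; a small Python sketch for *α* = 2, *β* = 1 (illustrative only, using a simple trapezoid rule):

```python
import math

def Ei2_series(x, terms=60):
    """Integral Mittag-Leffler function Ei_2(x) from the series (12)
    with alpha = 2, beta = 1."""
    return sum(x**k / (k * math.gamma(2 * k + 1)) for k in range(1, terms))

def Ei2_integral(x, n=50000):
    """Ei_2(x) from the defining integral (11), using E_2(t) = cosh(sqrt(t)):
    int_0^x (cosh(sqrt(t)) - 1)/t dt by the trapezoid rule.
    The integrand tends to 1/2 as t -> 0 and is smooth."""
    h = x / n
    def f(t):
        return 0.5 if t == 0.0 else (math.cosh(math.sqrt(t)) - 1.0) / t
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(x))
```

Both routes agree to quadrature accuracy, as expected from (13).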

Applying the formulas (A1) and (A2) given in Appendix A, the integral Mittag-Leffler function for positive rational values of parameter *α* with *α* = *p*/*q* and *p*, *q* positive coprimes is

$$\mathrm{Ei}\_{p/q,\beta}(x) = \sum\_{k=1}^{q} \frac{x^k}{k\,\Gamma(pk/q+\beta)}\; {}\_2F\_{p+1}\left( \begin{array}{c} 1,\ k/q \\ b\_0, \dots, b\_{p-1},\ k/q+1 \end{array} \middle|\; \frac{x^q}{p^p} \right), \tag{14}$$

where

$$b\_j = \frac{k}{q} + \frac{\beta + j}{p}.$$
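The reduction of Ei*p*/*q*,*β* to a finite sum of generalized hypergeometric functions, as in (14), can be verified numerically for small *p* and *q*. In the Python sketch below (illustrative only, with a generic truncated pFq series), the direct sum of (12) is compared with the finite hypergeometric sum:

```python
import math

def pFq(num, den, z, terms=60):
    """Truncated generalized hypergeometric series pFq(num; den; z)."""
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= z / (m + 1)
        for a in num:
            term *= (a + m)
        for b in den:
            term /= (b + m)
    return total

def Ei_direct(alpha, beta, x, terms=80):
    """Ei_{alpha,beta}(x) by direct summation of (12)."""
    return sum(x**k / (k * math.gamma(alpha * k + beta)) for k in range(1, terms))

def Ei_reduced(p, q, beta, x):
    """Reduction (14) for alpha = p/q: a finite sum of 2F(p+1) functions
    with b_j = k/q + (beta + j)/p and argument x^q / p^p."""
    total = 0.0
    for k in range(1, q + 1):
        bs = [k / q + (beta + j) / p for j in range(p)]
        total += (x**k / (k * math.gamma(p * k / q + beta))
                  * pFq([1.0, k / q], bs + [k / q + 1.0], x**q / p**p))
    return total
```

The cases *α* = 1/2 (*p* = 1, *q* = 2) and *α* = 2 (*p* = 2, *q* = 1) are convenient sanity checks.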

In addition, using the sums in (12), it is possible to derive the Laplace transforms of the integral Mittag-Leffler functions:

$$\begin{split} \mathcal{L}[\mathrm{Ei}\_{\alpha}(t)] &= \int\_0^{\infty} e^{-st} \left[\sum\_{k=0}^{\infty} \frac{t^{k+1}}{(k+1)\,\Gamma(\alpha(k+1)+1)}\right] dt \\ &= \sum\_{k=0}^{\infty} \frac{(k+1)!}{(k+1)\,\Gamma(\alpha(k+1)+1)} \left(\frac{1}{s}\right)^{k+2}, \quad \operatorname{Re} s > 1, \\ \mathcal{L}\left[\mathrm{Ei}\_{\alpha,\beta}(t)\right] &= \int\_0^{\infty} e^{-st} \left[\sum\_{k=0}^{\infty} \frac{t^{k+1}}{(k+1)\,\Gamma(\alpha(k+1)+\beta)}\right] dt \\ &= \sum\_{k=0}^{\infty} \frac{(k+1)!}{(k+1)\,\Gamma(\alpha(k+1)+\beta)} \left(\frac{1}{s}\right)^{k+2}, \quad \operatorname{Re} s > 1. \end{split} \tag{15}$$

The evaluated Laplace transforms of the integral Mittag-Leffler functions are presented in Tables 3 and 4.


**Table 1.** The integral Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (12).

**Figure 1.** The integral one-parameter Mittag-Leffler function Ei*α*,1(*x*) as a function of variable *x* and parameters *α*.


**Table 2.** The integral Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (12).

The Laplace transforms of the integral Mittag-Leffler functions with positive rational parameter *α* with *α* = *p*/*q* and *p*, *q* positive coprimes can be evaluated from:

$$\mathcal{L}\left[\mathrm{Ei}\_{p/q,\beta}(t)\right] = \frac{1}{s^2} \sum\_{k=0}^{q-1} \frac{k!\, s^{-k}}{\Gamma\left(\frac{p}{q}(k+1)+\beta\right)}\; {}\_{q+1}F\_p\left( \begin{array}{c} 1,\ a\_0, \dots, a\_{q-1} \\ b\_0, \dots, b\_{p-1} \end{array} \middle|\; \frac{(q/s)^q}{p^p} \right), \tag{16}$$

where

$$a\_j = \frac{k+1+j}{q}, \qquad b\_j = \frac{k+1}{q} + \frac{\beta+j}{p}.$$

Furthermore, the following relation is satisfied:

**Figure 2.** The integral two-parameter Mittag-Leffler function Ei1,*β*(*x*) as a function of variable *x* and parameters *β*.

**Table 3.** The Laplace transforms of the integral Mittag-Leffler functions Ei*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (15).



**Table 3.** *Cont.*

**Table 4.** The Laplace transforms of the integral Mittag-Leffler functions Ei*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (15).


**Table 4.** *Cont.*


#### **3. The Integral Whittaker Functions**

In 1903, Whittaker [15] showed that it is possible to express some special functions such as Bessel functions, parabolic cylinder functions, error functions, incomplete gamma functions, and logarithm and cosine integrals in terms of a new function suggested by him, i.e., the Whittaker function. Two Whittaker functions are applied today, and they are defined by using the Kummer confluent hypergeometric function [3,4]:

$$\begin{split} \mathrm{M}\_{\kappa,\mu}(x) &= x^{\mu+1/2} e^{-x/2}\, {}\_1F\_1\left( \begin{array}{c} \mu-\kappa+\frac{1}{2} \\ 1+2\mu \end{array} \middle|\; x \right), \\ \mathrm{W}\_{\kappa,\mu}(x) &= \frac{\Gamma(-2\mu)}{\Gamma\left(\frac{1}{2}-\kappa-\mu\right)}\,\mathrm{M}\_{\kappa,\mu}(x) + \frac{\Gamma(2\mu)}{\Gamma\left(\frac{1}{2}-\kappa+\mu\right)}\,\mathrm{M}\_{\kappa,-\mu}(x). \end{split} \tag{18}$$

This permits us to introduce four integral Whittaker functions:

$$\begin{aligned} \mathrm{Mi}\_{\kappa,\mu}(x) &= \int\_0^{x} \frac{\mathrm{M}\_{\kappa,\mu}(t)}{t}\,dt, \\ \mathrm{mi}\_{\kappa,\mu}(x) &= \int\_x^{\infty} \frac{\mathrm{M}\_{\kappa,\mu}(t)}{t}\,dt, \end{aligned} \tag{19}$$

and

$$\begin{aligned} \mathrm{Wi}\_{\kappa,\mu}(x) &= \int\_0^{x} \frac{\mathrm{W}\_{\kappa,\mu}(t)}{t}\,dt, \\ \mathrm{wi}\_{\kappa,\mu}(x) &= \int\_x^{\infty} \frac{\mathrm{W}\_{\kappa,\mu}(t)}{t}\,dt. \end{aligned} \tag{20}$$

The integral Whittaker functions with particular values of parameters *κ* and *μ* can be expressed in terms of elementary and special functions. These cases, derived using the MATHEMATICA program, are presented in Tables 5–9. Several integral Whittaker functions Mi*κ*,*μ*(*x*), mi*κ*,*μ*(*x*), Wi*κ*,*μ*(*x*) and wi*κ*,*μ*(*x*) as a function of variable *x* at fixed values of parameters *κ* and *μ* are plotted in Figures 3–6. Similarly, a long list of the Whittaker functions M*κ*,*μ*(*x*) and W*κ*,*μ*(*x*) with integer and fractional parameters was prepared (see Appendix B). In some cases, it was possible to obtain for them their Laplace transforms, and they are also reported in Appendix B.

**Figure 3.** The integral Whittaker functions Mi*κ*,*μ*(*x*) as a function of variable *x* at fixed values of parameters *κ* and *μ*.

**Figure 4.** The integral Whittaker functions mi*κ*,*μ*(*x*) as a function of variable *x* at fixed values of parameters *κ* and *μ*.

**Figure 5.** The integral Whittaker functions Wi*κ*,*μ*(*x*) as a function of variable *x* at fixed values of parameters *κ* and *μ*.

**Figure 6.** The integral Whittaker functions wi*κ*,*μ*(*x*) as a function of variable *x* at fixed values of parameters *κ* and *μ*.


**Table 5.** The integral Whittaker functions Mi*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18) and (19).


**Table 6.** The integral Whittaker functions Mi*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18) and (19).

**Table 7.** The integral Whittaker functions mi*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18) and (19).



**Table 8.** The integral Whittaker functions Wi*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18) and (20).

**Table 9.** The integral Whittaker functions wi*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18) and (20).



**Table 9.** *Cont.*

There are a number of recurrence relations between the Whittaker functions, for example [3,4]

$$\begin{aligned} 2\mu\left[\mathrm{M}\_{\kappa-1/2,\mu-1/2}(t) - \mathrm{M}\_{\kappa+1/2,\mu-1/2}(t)\right] &= t^{1/2}\,\mathrm{M}\_{\kappa,\mu}(t), \\ (\kappa+\mu)\,\mathrm{W}\_{\kappa-1/2,\mu}(t) + \mathrm{W}\_{\kappa+1/2,\mu}(t) &= t^{1/2}\,\mathrm{W}\_{\kappa,\mu+1/2}(t), \end{aligned} \tag{21}$$

and this leads to integrals that are expressed in terms of the integral Whittaker functions

$$\begin{split} \int\_0^{x} \frac{\mathrm{M}\_{\kappa,\mu}(t)}{t^{1/2}}\,dt &= 2\mu\left[\mathrm{Mi}\_{\kappa-1/2,\mu-1/2}(x) - \mathrm{Mi}\_{\kappa+1/2,\mu-1/2}(x)\right], \\ \int\_0^{x} \frac{\mathrm{W}\_{\kappa,\mu+1/2}(t)}{t^{1/2}}\,dt &= (\kappa+\mu)\,\mathrm{Wi}\_{\kappa-1/2,\mu}(x) + \mathrm{Wi}\_{\kappa+1/2,\mu}(x). \end{split} \tag{22}$$

Using the following representation of the Whittaker functions [5]

$$\mathrm{M}\_{\kappa,\mu}(t) = t^{\mu+1/2} \sum\_{n=0}^{\infty} {}\_2F\_1\left( \begin{array}{c} -n,\ \mu-\kappa+\frac{1}{2} \\ 1+2\mu \end{array} \middle|\; 2 \right) \frac{(-t/2)^n}{n!}, \tag{23}$$

it is possible to obtain the integral Whittaker functions in terms of a rapidly convergent alternating series as follows:

$$\mathrm{Mi}\_{\kappa,\mu}(x) = x^{\mu+1/2} \sum\_{n=0}^{\infty} {}\_2F\_1\left( \begin{array}{c} -n,\ \mu-\kappa+\frac{1}{2} \\ 1+2\mu \end{array} \middle|\; 2 \right) \frac{(-x/2)^n}{n!\left(\frac{1}{2}+\mu+n\right)}. \tag{24}$$
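The alternating series (24) can be checked against direct numerical integration of M*κ*,*μ*(*t*)/*t*, with M computed from the Kummer series in (18); a Python sketch (illustrative only, assuming *μ* > 1/2 so the integrand is bounded at the origin):

```python
import math

def hyp1f1(a, c, x):
    """Kummer confluent hypergeometric function 1F1(a; c; x) by series."""
    total, term, m = 0.0, 1.0, 0
    while abs(term) > 1e-17 * (abs(total) + 1.0) and m < 200:
        total += term
        term *= (a + m) * x / ((c + m) * (m + 1))
        m += 1
    return total

def hyp2f1_terminating(n, a, c, z):
    """2F1(-n, a; c; z) as a finite sum (n a non-negative integer)."""
    total, term = 0.0, 1.0
    for m in range(n + 1):
        total += term
        term *= (-n + m) * (a + m) * z / ((c + m) * (m + 1))
    return total

def Mi_series(kappa, mu, x, terms=60):
    """Integral Whittaker function Mi_{kappa,mu}(x) from the series (24)."""
    a, c = mu - kappa + 0.5, 1.0 + 2.0 * mu
    return x**(mu + 0.5) * sum(
        hyp2f1_terminating(n, a, c, 2.0) * (-x / 2.0)**n
        / (math.factorial(n) * (mu + 0.5 + n))
        for n in range(terms))

def Mi_quad(kappa, mu, x, n=50000):
    """Mi_{kappa,mu}(x) = int_0^x M_{kappa,mu}(t)/t dt with M from (18)
    (trapezoid rule); integrand ~ t^(mu - 1/2), so mu > 1/2 is assumed."""
    h = x / n
    def f(t):
        if t == 0.0:
            return 0.0
        return t**(mu - 0.5) * math.exp(-t / 2.0) * hyp1f1(mu - kappa + 0.5,
                                                           1.0 + 2.0 * mu, t)
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(x))
```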

There are a number of particular cases where the integral Whittaker functions can be written in closed form; for example, from [5]

$$\mathrm{M}\_{\kappa,\kappa-1/2}(x) = x^{\kappa} e^{-x/2}, \tag{25}$$

we have

$$\mathrm{Mi}\_{\kappa,\kappa-1/2}(x) = 2^{\kappa}\,\gamma\left(\kappa,\frac{x}{2}\right), \tag{26}$$

but [5]

$$\mathrm{M}\_{\kappa,\kappa-1/2}(x) = \mathrm{W}\_{\kappa,\kappa-1/2}(x) = \mathrm{W}\_{\kappa,-\kappa+1/2}(x) = e^{-x/2}\,x^{\kappa}, \tag{27}$$

and therefore

$$\mathrm{Mi}\_{\kappa,\kappa-1/2}(x) = \mathrm{Wi}\_{\kappa,\kappa-1/2}(x) = \mathrm{Wi}\_{\kappa,-\kappa+1/2}(x) = 2^{\kappa}\,\gamma\left(\kappa,\frac{x}{2}\right). \tag{28}$$
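Relation (28) is straightforward to verify numerically, since the lower incomplete gamma function has a standard series representation; a Python sketch (illustrative only, with *κ* = 2 chosen so that the integrand vanishes at the origin):

```python
import math

def lower_incomplete_gamma(a, x, terms=200):
    """gamma(a, x) = x^a e^(-x) * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    s, c = 0.0, 1.0 / a
    for n in range(terms):
        s += c
        c *= x / (a + n + 1)
    return x**a * math.exp(-x) * s

def Mi_special(kappa, x, n=50000):
    """Mi_{kappa,kappa-1/2}(x) = int_0^x M_{kappa,kappa-1/2}(t)/t dt with
    M_{kappa,kappa-1/2}(t) = t^kappa e^(-t/2) from (25), by the trapezoid
    rule; the integrand t^(kappa-1) e^(-t/2) vanishes at 0 for kappa > 1."""
    h = x / n
    def f(t):
        return 0.0 if t == 0.0 else t**(kappa - 1.0) * math.exp(-t / 2.0)
    return h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(x))
```

For *κ* = 2 the closed form γ(2, *u*) = 1 − e^(−*u*)(1 + *u*) provides an independent check of the series routine.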

Furthermore, from [5]

$$\mathrm{M}\_{0,\mu}(x) = 2^{2\mu+1/2}\,\Gamma(\mu+1)\,\sqrt{\frac{x}{2}}\; I\_{\mu}\left(\frac{x}{2}\right), \tag{29}$$

follows that

$$\mathrm{Mi}\_{0,\mu}(x) = \frac{x^{\mu+1/2}}{\mu+1/2}\; {}\_1F\_2\left( \begin{array}{c} \frac{2\mu+1}{4} \\ \mu+1,\ \frac{2\mu+5}{4} \end{array} \middle|\; \frac{x^2}{16} \right). \tag{30}$$

Similarly, from

$$\mathrm{W}\_{0,\mu}(x) = \sqrt{\frac{x}{\pi}}\, K\_{\mu}\left(\frac{x}{2}\right), \tag{31}$$

we have

$$\mathrm{Wi}\_{0,\mu}(x) = \frac{\sqrt{\pi}}{2\sin\pi\mu}\left[\frac{4^{\mu}\,\mathrm{Mi}\_{0,-\mu}(x)}{\Gamma(1-\mu)} - \frac{4^{-\mu}\,\mathrm{Mi}\_{0,\mu}(x)}{\Gamma(1+\mu)}\right], \tag{32}$$

and in a general case

$$\mathrm{Wi}\_{\kappa,\mu}(x) = \frac{\Gamma(-2\mu)\,\mathrm{Mi}\_{\kappa,\mu}(x)}{\Gamma\left(\frac{1}{2}-\kappa-\mu\right)} + \frac{\Gamma(2\mu)\,\mathrm{Mi}\_{\kappa,-\mu}(x)}{\Gamma\left(\frac{1}{2}-\kappa+\mu\right)}. \tag{33}$$

For *κ* = ±1/2, it is possible to obtain

$$\mathrm{Wi}\_{\pm\frac{1}{2},\mu}(x) = F\_{\mu}^{\pm}(x) + F\_{-\mu}^{\pm}(x), \tag{34}$$

where we have set

$$\begin{split} F\_{\mu}^{\pm}(x) &= \frac{2x^{1/2+\mu}\,\Gamma(-2\mu)}{(1+2\mu)\,\Gamma\left(\frac{1}{2}\mp\frac{1}{2}-\mu\right)} \\ &\quad \times \left[ {}\_1F\_2\left( \begin{array}{c} \frac{1}{4}+\frac{\mu}{2} \\ \frac{1}{2}+\mu,\ \frac{3}{4}+\frac{\mu}{2} \end{array} \middle|\; \frac{x^2}{16} \right) \mp \frac{x/2}{3+2\mu}\; {}\_1F\_2\left( \begin{array}{c} \frac{3}{4}+\frac{\mu}{2} \\ \frac{3}{2}+\mu,\ \frac{7}{4}+\frac{\mu}{2} \end{array} \middle|\; \frac{x^2}{16} \right) \right]. \end{split} \tag{35}$$

Since [16]

$${}\_2F\_1\left( \begin{array}{c} -n,\ \lambda \\ 2\lambda+1 \end{array} \middle|\; 2 \right) = \frac{\Gamma\left(\lambda+\frac{1}{2}\right)}{\sqrt{\pi}}\left[\left(\frac{1+(-1)^n}{2}\right)\frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\lambda+\frac{n+1}{2}\right)} + \left(\frac{1-(-1)^n}{2}\right)\frac{\Gamma\left(\frac{n}{2}+1\right)}{\Gamma\left(\lambda+\frac{n}{2}+1\right)}\right], \tag{36}$$

and

$${}\_2F\_1\left( \begin{array}{c} -n,\ \lambda \\ 2\lambda-1 \end{array} \middle|\; 2 \right) = \frac{\Gamma\left(\lambda-\frac{1}{2}\right)}{\sqrt{\pi}}\left[\left(\frac{1+(-1)^n}{2}\right)\frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\lambda+\frac{n-1}{2}\right)} - \left(\frac{1-(-1)^n}{2}\right)\frac{\Gamma\left(\frac{n}{2}+1\right)}{\Gamma\left(\lambda+\frac{n}{2}\right)}\right], \tag{37}$$

introducing *λ* = *μ* and *λ* = *μ* + 1, after some steps, leads to

$$\mathrm{Mi}\_{\pm\frac{1}{2},\mu}(x) = \frac{x^{\mu+1/2}}{\mu+1/2}\left[ {}\_1F\_2\left( \begin{array}{c} \frac{\mu}{2}+\frac{1}{4} \\ \mu+\frac{1}{2},\ \frac{\mu}{2}+\frac{3}{4} \end{array} \middle|\; \frac{x^2}{16} \right) \mp \frac{x/2}{2\mu+3}\; {}\_1F\_2\left( \begin{array}{c} \frac{\mu}{2}+\frac{3}{4} \\ \mu+\frac{3}{2},\ \frac{\mu}{2}+\frac{7}{4} \end{array} \middle|\; \frac{x^2}{16} \right) \right]. \tag{38}$$
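Identities (36) and (37) involve terminating hypergeometric sums, so they can be verified directly in floating point; a Python sketch (illustrative only):

```python
import math

def hyp2f1_at_2(n, lam, c):
    """2F1(-n, lam; c; 2) as a terminating series (n a non-negative integer)."""
    total, term = 0.0, 1.0
    for m in range(n + 1):
        total += term
        term *= (-n + m) * (lam + m) * 2.0 / ((c + m) * (m + 1))
    return total

def closed_form_36(n, lam):
    """Right-hand side of (36), c = 2*lam + 1."""
    even = (1 + (-1)**n) / 2
    odd = (1 - (-1)**n) / 2
    return (math.gamma(lam + 0.5) / math.sqrt(math.pi)) * (
        even * math.gamma((n + 1) / 2) / math.gamma(lam + (n + 1) / 2)
        + odd * math.gamma(n / 2 + 1) / math.gamma(lam + n / 2 + 1))

def closed_form_37(n, lam):
    """Right-hand side of (37), c = 2*lam - 1."""
    even = (1 + (-1)**n) / 2
    odd = (1 - (-1)**n) / 2
    return (math.gamma(lam - 0.5) / math.sqrt(math.pi)) * (
        even * math.gamma((n + 1) / 2) / math.gamma(lam + (n - 1) / 2)
        - odd * math.gamma(n / 2 + 1) / math.gamma(lam + n / 2))
```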

#### **4. The Integral Wright Functions**

In 1933 [17] and in 1940 [18], Wright introduced new special functions that were considered as a kind of generalization of the Bessel functions. However, today they play a significant independent role in mathematics and in solutions of physical problems by modeling space diffusion, stochastic processes, probability distributions and other diverse natural phenomena [19,20]. The Wright functions are defined by the following series

$$W\_{\alpha,\beta}(x) = \sum\_{k=0}^{\infty} \frac{x^k}{k!\,\Gamma(\alpha k+\beta)}. \tag{39}$$

If the parameter *α* is a positive real number, they are called the Wright functions of the first kind, and when −1 < *α* < 0, the Wright functions of the second kind.

Furthermore, consider the following functions:

$$\begin{aligned} F\_{\alpha}(x) &= W\_{-\alpha,0}(-x), \quad 0 < \alpha < 1, \\ M\_{\alpha}(x) &= W\_{-\alpha,1-\alpha}(-x), \quad 0 < \alpha < 1, \\ F\_{\alpha}(x) &= \alpha x\, M\_{\alpha}(x). \end{aligned} \tag{40}$$

These functions, defined through the Wright functions of the second kind with negated argument and particular parameter values, are frequently called the Mainardi functions and are denoted by F*α*(*x*) and M*α*(*x*) [19,20].

Their explicit form is

$$\begin{split} \mathcal{F}\_{\alpha}(x) &= \sum\_{k=1}^{\infty} \frac{(-x)^{k}}{k!\,\Gamma(-\alpha k)} \\ &= -\frac{1}{\pi} \sum\_{k=1}^{\infty} \frac{(-x)^{k}}{k!}\, \Gamma(\alpha k+1) \sin(\pi \alpha k), \\ \mathcal{M}\_{\alpha}(x) &= \sum\_{k=0}^{\infty} \frac{(-x)^{k}}{k!\,\Gamma(-\alpha(k+1)+1)} \\ &= \frac{1}{\pi} \sum\_{k=0}^{\infty} \frac{(-x)^{k}}{k!}\, \Gamma(\alpha(k+1)) \sin(\pi \alpha(k+1)). \end{split} \tag{41}$$
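The explicit series (41) and the relation F*α*(*x*) = *αx* M*α*(*x*) from (40) can be spot-checked numerically; the sketch below uses mpmath, with illustrative parameter values and an arbitrary truncation order N.

```python
# Truncated Mainardi series from (41) and a check of F_alpha = alpha*x*M_alpha.
from mpmath import mp, mpf, gamma, factorial, sin, pi, exp, sqrt

mp.dps = 30

def F(alpha, x, N=60):
    """Second (reflection-formula) representation of F_alpha in (41)."""
    return -sum(mpf(-x)**k / factorial(k) * gamma(alpha*k + 1) * sin(pi*alpha*k)
                for k in range(1, N)) / pi

def M(alpha, x, N=60):
    """Second representation of M_alpha in (41)."""
    return sum(mpf(-x)**k / factorial(k) * gamma(alpha*(k + 1)) * sin(pi*alpha*(k + 1))
               for k in range(N)) / pi

# Relation from (40), and the classical case M_{1/2}(x) = exp(-x^2/4)/sqrt(pi)
print(F(mpf(1)/3, mpf('0.7')), mpf(1)/3 * mpf('0.7') * M(mpf(1)/3, mpf('0.7')))
print(M(mpf('0.5'), mpf('0.8')), exp(-mpf('0.8')**2 / 4) / sqrt(pi))
```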

For positive rational *α* = *p*/*q*, where *p* and *q* are positive coprime integers, we have obtained reduction formulas for F*p*/*q*(*x*) and M*p*/*q*(*x*) in Appendix C. Furthermore, by applying the MATHEMATICA program to the infinite series in (39), it is possible to obtain the Wright functions of the first and second kinds in an explicit form for particular values of the parameters *α* and *β* (Appendix C). The Laplace transforms of these functions are expressed in terms of the Mittag-Leffler functions, so they are omitted here [19–21].

The two-parameter Mittag-Leffler functions E*α*,*β*(*t*) defined in (9) differ from the Wright functions only by the absence of the factorial *k*! in the denominator. Therefore, the form of the series in (39) leads to the integral Wright function, defined similarly to the integral functions introduced in (11) and (12):

$$\mathrm{Wi}\_{\alpha,\beta}(x) = \int\_0^x \frac{\mathcal{W}\_{\alpha,\beta}(t) - 1/\Gamma(\beta)}{t}\, dt. \tag{42}$$

Unfortunately, this notation coincides with that of the integral Whittaker functions. In an explicit form, from (39) we have

$$\mathrm{Wi}\_{\alpha,\beta}(x) = \sum\_{k=1}^{\infty} \frac{x^k}{k \, k! \, \Gamma(\alpha k + \beta)}. \tag{43}$$
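Integrating the series (39) term by term (the constant 1/Γ(β) removes the k = 0 term) is exactly what produces the extra 1/k in (43); a hedged cross-check of (42) against (43) with mpmath, at illustrative parameter values, can be sketched as:

```python
# Integral Wright function: series (43) vs. direct quadrature of (42).
from mpmath import mp, mpf, gamma, factorial, quad

mp.dps = 25

def Wi_series(alpha, beta, x, N=60):          # Eq. (43)
    return sum(mpf(x)**k / (k * factorial(k) * gamma(alpha*k + beta))
               for k in range(1, N))

def Wi_quad(alpha, beta, x, N=60):            # Eq. (42)
    # (W_{alpha,beta}(t) - 1/Gamma(beta))/t written as the k >= 1 tail of (39),
    # which avoids cancellation near t = 0
    f = lambda t: sum(mpf(t)**(k - 1) / (factorial(k) * gamma(alpha*k + beta))
                      for k in range(1, N))
    return quad(f, [0, x])

print(Wi_series(mpf('0.5'), 1, 1), Wi_quad(mpf('0.5'), 1, 1))
```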

For *p* and *q* positive coprime integers, applying (A1) and (A2), the expression corresponding to (14) is

$$\begin{split} & \quad \mathrm{Wi}\_{p/q,\beta}(x) \\ &= \sum\_{k=1}^{q} \frac{x^{k}}{k \, k! \, \Gamma\left(\frac{p}{q}k + \beta\right)} \,\_2F\_{p+q+1}\left( \begin{array}{c} 1, \, k/q \\ k/q+1, \, b\_0, \dots, b\_{p-1}, c\_0, \dots, c\_{q-1} \end{array} \; \middle| \; \frac{x^{q}}{p^p q^q} \right), \\ & b\_j = \frac{k}{q} + \frac{\beta + j}{p}, \quad c\_j = \frac{k+1+j}{q}. \end{split} \tag{44}$$

In the case of the Mainardi functions, we have

$$\begin{split} \mathrm{Fi}\_{p/q}(x) &= -\frac{1}{\pi} \sum\_{k=1}^{q} \frac{(-x)^{k}}{k \, k!} \Gamma\left(\frac{p}{q}k + 1\right) \sin\left(\pi \frac{p}{q}k\right) S\_{k}(x),\\ \mathrm{Mi}\_{p/q}(x) &= \frac{1}{\pi} \sum\_{k=1}^{q} \frac{(-x)^{k}}{k \, k!} \sin\left(\pi \frac{p}{q}(k+1)\right) \Gamma\left(\frac{p}{q}(k+1)\right) S\_{k}(x), \end{split} \tag{45}$$

where

$$\begin{aligned} S\_k(x) &= \, \_{p+2}F\_{q+1} \left( \begin{array}{c} 1, \frac{k}{q}, a\_{0}, \dots, a\_{p-1} \\ \frac{k}{q} + 1, b\_{0}, \dots, b\_{q-1} \end{array} \bigg| \frac{(-1)^{p+q} x^{q} p^{p}}{q^{q}} \right), \\ a\_{j} &= \frac{k}{q} + \frac{j+1}{p}, \quad b\_{j} = \frac{k+1+j}{q}. \end{aligned} \tag{46}$$
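The reduction (45)–(46) for Fi*p*/*q* can be spot-checked against the term-by-term integral of the F-series in (41) (each *k*-th term divided by *k*); the mpmath sketch below uses the illustrative case p = 1, q = 2 and an arbitrary truncation order.

```python
# Numerical spot-check of Eqs. (45)-(46) for Fi_{p/q}.
from mpmath import mp, mpf, gamma, factorial, sin, pi, hyper

mp.dps = 30

def Fi_direct(alpha, x, N=80):
    """Term-by-term integral of the F-series in (41): k-th term divided by k."""
    return -sum(mpf(-x)**k / (k * factorial(k)) * gamma(alpha*k + 1) * sin(pi*alpha*k)
                for k in range(1, N)) / pi

def Fi_reduction(p, q, x):
    """Finite sum (45) with S_k from (46), via mpmath's generalized pFq."""
    z = mpf(-1)**(p + q) * mpf(x)**q * mpf(p)**p / mpf(q)**q
    total = mpf(0)
    for k in range(1, q + 1):
        a = [1, mpf(k)/q] + [mpf(k)/q + (j + 1)/mpf(p) for j in range(p)]
        b = [mpf(k)/q + 1] + [mpf(k + 1 + j)/q for j in range(q)]
        total += (mpf(-x)**k / (k * factorial(k)) * gamma(mpf(p)*k/q + 1)
                  * sin(pi*mpf(p)*k/q) * hyper(a, b, z))   # S_k(x)
    return -total / pi

print(Fi_direct(mpf('0.5'), mpf('0.9')), Fi_reduction(1, 2, mpf('0.9')))
```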

In Tables 10 and 11, the integral Wright functions derived with the help of the MATHEMATICA program for some values of the parameters *α* and *β* are presented. Many other expressions for these functions can be obtained with this program but, being long and complex, they were omitted. The integral Mainardi functions Fi*α*(*x*) and Mi*α*(*x*) for 0 < *α* < 1 are presented in Tables 12 and 13. As can be expected, most of these integral functions are expressed in terms of generalized hypergeometric functions.

**Table 10.** The integral Wright functions Wi*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (43).




**Table 11.** The integral Wright functions Wi*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (43).



**Table 12.** The integral Mainardi function Fi*α* derived for some values of parameter *α* by using (45).

**Table 13.** The integral Mainardi function Mi*α* derived for some values of parameter *α* by using (45).


#### **5. Conclusions**

Three new special functions are presented in this investigation for the first time: the integral Mittag-Leffler functions, the integral Whittaker functions, and the integral Wright functions. These functions are defined in the same manner as other elementary and special integral functions in the mathematical literature. Using the MATHEMATICA program, it is feasible to generate these functions in an explicit form for certain parameter values. These integral functions are often represented in terms of generalized hypergeometric functions, and the behavior of some of them is shown graphically. In the Appendices, a large number of Mittag-Leffler, Whittaker, and Wright functions with integral and fractional parameters, as well as their Laplace transforms, are presented in tabular form.

It may be observed that, in general, it is possible to construct integral functions such as (19) and (20) from the generalized hypergeometric functions *pFp*(*t*), because these converge in the whole complex *t*-plane and, in particular, for every real number *t*.

**Author Contributions:** Conceptualization, A.A. and J.L.G.-S.; Methodology, A.A. and J.L.G.-S.; Resources, A.A.; Writing—original draft, A.A. and J.L.G.-S.; Writing—review & editing, A.A. and J.L.G.-S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We are grateful to Armando Consiglio from Institut für Theoretische Physik und Astrophysik of University of Würzburg, Germany, who performed numerical evaluations of the integral functions presented in Figures 1 and 2, and to Francesco Mainardi from the Department of Physics and Astronomy, University of Bologna, Bologna, Italy, for his kind encouragement and interest in our work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Representations of the One- and Two-Parameter Mittag-Leffler Functions and Their Laplace Transforms**

The Mittag-Leffler functions are defined by the sums of the infinite series presented in (9), and their Laplace transforms in (10). For positive variable *x* and some values of the parameters *α* and *β*, these sums can be expressed in terms of elementary and special functions, especially in terms of generalized hypergeometric functions. They were derived by using the MATHEMATICA program and are presented in Tables A1 and A2 for the Mittag-Leffler functions, as well as in Tables A3 and A4 for the Laplace transforms. These results are mostly new and only partly known in the mathematical literature. Knowing that any infinite sum can be split as

$$\sum\_{k=0}^{\infty} a(k) = \sum\_{j=0}^{q-1} \sum\_{k=0}^{\infty} a(qk+j),\tag{A1}$$

and applying the multiplication formula of the gamma function ([5] (Eqn. 5.5.6)), for *nt* ≠ 0, −1, −2, . . .

$$\Gamma(nt) = (2\pi)^{(1-n)/2} n^{nt-1/2} \prod\_{j=0}^{n-1} \Gamma\left(t + \frac{j}{n}\right),\tag{A2}$$

it is possible to express from (9) the Mittag-Leffler function in the case of positive rational *α* = *p*/*q* with *p* and *q* positive coprimes,

$$\mathrm{E}\_{p/q,\beta}(x) = \sum\_{k=0}^{q-1} \frac{x^k}{\Gamma\left(\frac{p}{q}k + \beta\right)} \,\_1F\_p\left( \begin{array}{c} 1 \\ b\_0, \dots, b\_{p-1} \end{array} \bigg| \frac{x^q}{p^p} \right),\tag{A3}$$

where

$$b\_{j} = \frac{k}{q} + \frac{\beta + j}{p}.$$
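The reduction (A3) can be verified numerically with mpmath's generalized hypergeometric function; the sketch below checks the case p = 1, q = 2 against the classical closed form E₁/₂,₁(x) = exp(x²) erfc(−x), with illustrative argument values.

```python
# Numerical check of the rational-order Mittag-Leffler reduction (A3).
from mpmath import mp, mpf, gamma, hyper, exp, erfc

mp.dps = 30

def ml_A3(p, q, beta, x):
    """Sum of q hypergeometric 1F_p terms per Eq. (A3)."""
    total = mpf(0)
    for k in range(q):
        b = [mpf(k)/q + (beta + j)/mpf(p) for j in range(p)]
        total += (mpf(x)**k / gamma(mpf(p)*k/q + beta)
                  * hyper([1], b, mpf(x)**q / mpf(p)**p))
    return total

# Classical special case: E_{1/2,1}(x) = exp(x^2) * erfc(-x)
x = mpf('0.8')
print(ml_A3(1, 2, 1, x), exp(x**2) * erfc(-x))
```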

The corresponding Laplace transforms are

$$\begin{aligned} &\mathcal{L}\left[\mathrm{E}\_{p/q,\beta}(t)\right] \\ &=\sum\_{k=0}^{q-1} \frac{k!\, s^{-k-1}}{\Gamma\left(\frac{p}{q}k+\beta\right)} \,\_{q+1}F\_p\left( \begin{array}{c} 1,a\_0,\ldots,a\_{q-1} \\ b\_{0},\ldots,b\_{p-1} \end{array} \bigg| \frac{(q/s)^q}{p^p} \right), \end{aligned} \tag{A4}$$

where

$$\begin{array}{rcl} a\_j &=& \frac{k+1+j}{q}, \\ b\_j &=& \frac{k}{q} + \frac{\beta+j}{p}. \end{array}$$


**Table A1.** The Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (9).


**Table A2.** The Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (9).

**Table A3.** The Laplace transforms of the Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (10).




**Table A4.** The Laplace transforms of the Mittag-Leffler functions derived for some values of parameters *α* and *β* by using (10).




#### **Appendix B. Representations of the Whittaker Functions and Their Laplace Transforms**

The Whittaker functions M*κ*,*μ*(*x*) and W*κ*,*μ*(*x*) defined in (18) were derived by using the MATHEMATICA program, and they are presented in Tables A5, A6, A10 and A11. The corresponding Laplace transforms are given in Tables A7–A9 and A12. Most of the results reported in these tables are not found in the mathematical reference literature.


**Table A5.** The Whittaker functions M*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18).


**Table A6.** The Whittaker functions M*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18).




**Table A7.** The Laplace transforms of the Whittaker function M*κ*,*μ* derived for some values of parameters *κ* and *μ*.




**Table A8.** The Laplace transforms of the Whittaker functions M*κ*,*μ* derived for some values of parameters *κ* and *μ*.




**Table A9.** The Laplace transforms of the Whittaker functions M*κ*,*μ* derived for some values of parameters *κ* and *μ*.



**Table A10.** The Whittaker functions W*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18).

**Table A11.** The Whittaker functions W*κ*,*μ* derived for some values of parameters *κ* and *μ* by using (18).




**Table A12.** The Laplace transforms of the Whittaker function W*κ*,*μ* derived for some values of parameters *κ* and *μ*.




#### **Appendix C. Representations of the Wright Functions**

The Wright functions W*α*,*β*(*x*), defined in (39), and presented in Tables A13 and A14, as well as the Mainardi functions F*α*(*x*) and M*α*(*x*), defined in (40), and presented in Tables A15 and A16, were derived by using the MATHEMATICA program. Only a small part of these Wright functions is known in the mathematical reference literature.

In the case of positive rational *α* = *p*/*q* with *p* and *q* positive coprimes, applying (A1) and (A2), it is possible to express the Wright function by

$$\mathcal{W}\_{p/q,\beta}(x) = \sum\_{k=0}^{q-1} \frac{x^k}{k! \, \Gamma\left(\frac{p}{q}k + \beta\right)} \,\_0F\_{p+q-1}\left( \begin{array}{c} - \\ b\_0, \dots, b\_{p-1}, c\_0^\*, \dots, c\_{q-2}^\* \end{array} \bigg| \frac{x^q}{p^p q^q} \right),\tag{A5}$$

where

$$\begin{aligned} b\_j &= \frac{k}{q} + \frac{\beta + j}{p}, \\ c\_j &= \frac{k+1+j}{q}, \end{aligned} \tag{A6}$$

and the set of numbers $\left\{c\_j^\*\right\}\_{j=0}^{q-2} = \left\{c\_j\right\}\_{j=0}^{q-1} \backslash \{1\}$.
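The reduction above, with the entry equal to 1 removed from the c-set, can be spot-checked against the direct series (39); the mpmath sketch below uses the illustrative case p = 1, q = 2, β = 1 and an arbitrary truncation order for the direct sum.

```python
# Spot-check of the rational-order Wright reduction (A5)-(A6).
from mpmath import mp, mpf, gamma, factorial, hyper

mp.dps = 30

def wright_direct(alpha, beta, x, N=80):
    """Truncated series (39)."""
    return sum(mpf(x)**k / (factorial(k) * gamma(alpha*k + beta)) for k in range(N))

def wright_A5(p, q, beta, x):
    """Finite sum of 0F_{p+q-1} terms per (A5)-(A6)."""
    total = mpf(0)
    z = mpf(x)**q / (mpf(p)**p * mpf(q)**q)
    for k in range(q):
        b = [mpf(k)/q + (beta + j)/mpf(p) for j in range(p)]
        # c*-set: all c_j = (k+1+j)/q for j = 0..q-1 except the entry equal to 1
        cstar = [mpf(k + 1 + j)/q for j in range(q) if k + 1 + j != q]
        total += (mpf(x)**k / (factorial(k) * gamma(mpf(p)*k/q + beta))
                  * hyper([], b + cstar, z))
    return total

x = mpf('1.3')
print(wright_direct(mpf('0.5'), 1, x), wright_A5(1, 2, 1, x))
```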

For the Mainardi functions, we have the following reduction formulas for positive rational *α* = *p*/*q* with *p* and *q* positive coprimes:

$$\begin{aligned} & \quad \mathrm{F}\_{p/q}(x) \\ &= \quad -\frac{1}{\pi} \sum\_{k=1}^{q} \frac{(-x)^{k}}{k!} \Gamma\left(\frac{p}{q}k + 1\right) \sin\left(\pi \frac{p}{q}k\right) \\ & \quad \times \,\_pF\_{q-1}\left( \begin{array}{c} a\_{0}, \dots, a\_{p-1} \\ b\_{0}^{\*}, \dots, b\_{q-2}^{\*} \end{array} \bigg| \frac{(-1)^{p+q} x^{q} p^{p}}{q^{q}} \right), \end{aligned} \tag{A7}$$

and

$$\mathbf{M}\_{p/q}(\mathbf{x}) = \frac{q}{p\mathbf{x}} \mathbf{F}\_{p/q}(\mathbf{x}),\tag{A8}$$

where

$$\begin{array}{rcl} a\_j &=& \frac{k}{q} + \frac{j+1}{p}, \\ b\_j &=& \frac{k+1+j}{q}, \end{array}$$

and the set of numbers $\left\{b\_j^\*\right\}\_{j=0}^{q-2} = \left\{b\_j\right\}\_{j=0}^{q-1} \backslash \{1\}$.


**Table A13.** The Wright functions W*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (39).


**Table A14.** The Wright functions W*α*,*<sup>β</sup>* derived for some values of parameters *α* and *β* by using (39).

**Table A15.** The Mainardi function F*α* derived for some values of parameter *α* by using (A7).

**Table A16.** The Mainardi function M*α* derived for some values of parameter *α* by using (A8).


#### **References**

