*Article* **A Compound Poisson Perspective of Ewens–Pitman Sampling Model**

**Emanuele Dolera 1,2,3 and Stefano Favaro 2,3,4,\***


**Abstract:** The Ewens–Pitman sampling model (EP-SM) is a distribution for random partitions of the set {1, ... , *n*}, with *n* ∈ N, which is indexed by real parameters *α* and *θ* such that either *α* ∈ [0, 1) and *θ* > −*α*, or *α* < 0 and *θ* = −*mα* for some *m* ∈ N. For *α* = 0, the EP-SM reduces to the Ewens sampling model (E-SM), which admits a well-known compound Poisson perspective in terms of the log-series compound Poisson sampling model (LS-CPSM). In this paper, we consider a generalisation of the LS-CPSM, referred to as the negative binomial compound Poisson sampling model (NB-CPSM), and we show that it leads to an extension of the compound Poisson perspective of the E-SM to the more general EP-SM for either *α* ∈ (0, 1) or *α* < 0. The interplay between the NB-CPSM and the EP-SM is then applied to the study of the large *n* asymptotic behaviour of the number of blocks in the corresponding random partitions, leading to a new proof of Pitman's *α*-diversity. We discuss the proposed results and conjecture that analogous compound Poisson representations may hold for the class of *α*-stable Poisson–Kingman sampling models, of which the EP-SM is a noteworthy special case.

**Keywords:** Berry–Esseen type theorem; Ewens–Pitman sampling model; exchangeable random partitions; log-series compound Poisson sampling model; Mittag–Leffler distribution function; negative binomial compound Poisson sampling model; Pitman's *α*-diversity; Wright distribution function

## **1. Introduction**

The Pitman–Yor process is a discrete random probability measure indexed by real parameters $\alpha$ and $\theta$ such that either $\alpha \in [0,1)$ and $\theta > -\alpha$, or $\alpha < 0$ and $\theta = -m\alpha$ for some $m \in \mathbb{N}$; see, e.g., Perman et al. [1], Pitman [2] and Pitman and Yor [3]. Let $\{V_i\}_{i\geq 1}$ be independent random variables such that $V_i$ is distributed as a Beta distribution with parameter $(1-\alpha, \theta+i\alpha)$, for $i \geq 1$, with the convention for $\alpha < 0$ that $V_m = 1$ and $V_i$ is undefined for $i > m$. If $P_1 := V_1$ and $P_i := V_i \prod_{1\leq j\leq i-1}(1-V_j)$ for $i \geq 2$, so that almost surely $\sum_{i\geq 1} P_i = 1$, then the Pitman–Yor process is the random probability measure $\tilde{p}_{\alpha,\theta}$ on $(\mathbb{N}, 2^{\mathbb{N}})$ such that $\tilde{p}_{\alpha,\theta}(\{i\}) = P_i$ for $i \geq 1$. The Dirichlet process (Ferguson [4]) arises for $\alpha = 0$. Because of the discreteness of $\tilde{p}_{\alpha,\theta}$, a random sample $(X_1,\ldots,X_n)$ induces a random partition $\Pi_n$ of $\{1,\ldots,n\}$ by means of the equivalence $i \sim j \iff X_i = X_j$ (Pitman [5]). Let $K_n(\alpha,\theta) := K_n(X_1,\ldots,X_n) \leq n$ be the number of blocks of $\Pi_n$, and let $M_{r,n}(\alpha,\theta) := M_{r,n}(X_1,\ldots,X_n)$, for $r = 1,\ldots,n$, be the number of blocks with frequency $r$ of $\Pi_n$, so that $\sum_{1\leq r\leq n} M_{r,n} = K_n$ and $\sum_{1\leq r\leq n} r M_{r,n} = n$. Pitman [2] showed that:

$$\Pr[(M_{1,n}(\alpha,\theta),\ldots,M_{n,n}(\alpha,\theta))=(x_1,\ldots,x_n)]=n!\,\frac{\left(\frac{\theta}{\alpha}\right)_{\left(\sum_{i=1}^{n}x_i\right)}}{(\theta)_{(n)}}\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!},\tag{1}$$

**Citation:** Dolera, E.; Favaro, S. A Compound Poisson Perspective of Ewens–Pitman Sampling Model. *Mathematics* **2021**, *9*, 2820. https:// doi.org/10.3390/math9212820

Academic Editor: Francisco-José Vázquez-Polo

Received: 7 October 2021 Accepted: 5 November 2021 Published: 6 November 2021



with $(x)_{(n)}$ being the ascending factorial of $x$ of order $n$, i.e., $(x)_{(n)} := \prod_{0\leq i\leq n-1}(x+i)$. The distribution (1) is referred to as the Ewens–Pitman sampling model (EP-SM); for $\alpha = 0$, it reduces to the Ewens sampling model (E-SM) in Ewens [6]. The Pitman–Yor process plays a critical role in a variety of research areas, such as mathematical population genetics, Bayesian nonparametrics, machine learning, excursion theory, combinatorics and statistical physics. See Pitman [5] and Crane [7] for a comprehensive treatment of this subject.
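To make the sampling formula concrete, the following sketch (our own illustration; all function names are ours) implements the EP-SM probability (1) and checks numerically that it assigns total mass one to the partitions of a small $n$, in both parameter regimes.

```python
import math

def rising(x, n):
    """Ascending factorial (x)_(n) = x (x + 1) ... (x + n - 1)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def partitions(n, max_part=None):
    """Yield the integer partitions of n as lists of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest

def ep_sm_prob(x, alpha, theta):
    """Probability (1) of the multiplicity vector x = (x_1, ..., x_n)."""
    n, k = len(x), sum(x)
    p = math.factorial(n) / rising(theta, n)
    for i in range(k):
        # alpha^k (theta/alpha)_(k) written as theta (theta+alpha) ... (theta+(k-1)alpha),
        # which is also well defined at alpha = 0
        p *= theta + i * alpha
    for i in range(1, n + 1):
        w = rising(1 - alpha, i - 1) / math.factorial(i)
        p *= w ** x[i - 1] / math.factorial(x[i - 1])
    return p

def total_mass(n, alpha, theta):
    """Sum of (1) over the multiplicity vectors of all partitions of n."""
    tot = 0.0
    for parts in partitions(n):
        x = [0] * n
        for part in parts:
            x[part - 1] += 1
        tot += ep_sm_prob(x, alpha, theta)
    return tot
```

For instance, `total_mass(5, 0.5, 1.0)`, `total_mass(6, 0.0, 2.0)` (the E-SM case) and `total_mass(5, -0.5, 1.5)` (the case $\theta = -m\alpha$ with $m = 3$) are all numerically equal to 1.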

The E-SM admits a well-known compound Poisson perspective in terms of the log-series compound Poisson sampling model (LS-CPSM). See Charalambides [8] and the references therein for an overview of compound Poisson models. We consider a population of individuals with a random number $K$ of distinct types, and let $K$ be distributed as a Poisson distribution with parameter $\lambda = -z\log(1-q)$ for $q \in (0,1)$ and $z > 0$. For $i \in \mathbb{N}$, let $N_i$ denote the random number of individuals of type $i$ in the population, and let the $N_i$'s be independent of $K$ and independent of each other, with the common distribution:

$$\Pr[N\_1 = x] = -\frac{1}{x\log(1-q)}q^x \tag{2}$$

for $x \in \mathbb{N}$. Let $S = \sum_{1\leq i\leq K} N_i$ and let $M_r = \sum_{1\leq i\leq K} \mathbb{1}_{\{N_i = r\}}$ for $r = 1,\ldots,S$; that is, $M_r$ is the random number of $N_i$'s equal to $r$, so that $\sum_{r\geq 1} M_r = K$ and $\sum_{r\geq 1} r M_r = S$. If $(M_1(z,n),\ldots,M_n(z,n))$ denotes a random variable whose distribution coincides with the conditional distribution of $(M_1,\ldots,M_S)$ given $S = n$, then (Section 3, Charalambides [8]) it holds:

$$\Pr[(M_1(z,n),\ldots,M_n(z,n))=(x_1,\ldots,x_n)]=\frac{n!}{(z)_{(n)}}\prod_{i=1}^{n}\frac{\left(\frac{z}{i}\right)^{x_i}}{x_i!}.\tag{3}$$

The distribution (3) is referred to as the LS-CPSM, and it is equivalent to the E-SM; that is, the distribution (3) coincides with the distribution (1) with $\alpha = 0$ and with $z$ in the place of $\theta$. Therefore, the distributions of $K(z,n) = \sum_{1\leq r\leq n} M_r(z,n)$ and $M_r(z,n)$ coincide with the distributions of $K_n(0,z)$ and $M_{r,n}(0,z)$, respectively. Let $\stackrel{w}{\longrightarrow}$ denote weak convergence. From Korwar and Hollander [9], $K(z,n)/\log n \stackrel{w}{\longrightarrow} z$ as $n \to +\infty$, whereas from Ewens [6], it follows that $M_r(z,n) \stackrel{w}{\longrightarrow} P_{z/r}$ as $n \to +\infty$, where $P_\lambda$ denotes a Poisson random variable with parameter $\lambda$.
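The compound Poisson construction above can be simulated directly. The sketch below (a minimal illustration with our own function names; the choice $q = n/(n+z)$ is only a heuristic making the conditioning event $\{S = n\}$ reasonably likely) draws $K$ from the Poisson distribution, the $N_i$'s from the log-series distribution (2), and conditions on $S = n$ by plain rejection; the resulting multiplicities follow the LS-CPSM (3) regardless of $q$, since the conditional law given $S = n$ does not depend on $q$.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's product-of-uniforms method; fine for moderate lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_logseries(q, rng):
    """Inverse-CDF sampling from the log-series distribution (2)."""
    u, x, c = rng.random(), 1, 0.0
    norm = -math.log(1.0 - q)
    while True:
        c += q ** x / (x * norm)
        if u <= c or x >= 10_000:
            return x
        x += 1

def sample_ls_cpsm(z, n, rng, max_tries=200_000):
    """Draw (M_1(z, n), ..., M_n(z, n)) by rejection: keep the first
    compound Poisson draw whose total size S equals n."""
    q = n / (n + z)                      # heuristic; the conditional law does not depend on q
    lam = -z * math.log(1.0 - q)
    for _ in range(max_tries):
        K = sample_poisson(lam, rng)
        sizes = [sample_logseries(q, rng) for _ in range(K)]
        if K > 0 and sum(sizes) == n:
            m = [0] * n
            for s in sizes:
                m[s - 1] += 1
            return m
    raise RuntimeError("rejection sampling failed")
```

By construction, any returned vector satisfies $\sum_{1\leq r\leq n} r M_r = n$, and $\sum_{1\leq r\leq n} M_r$ is the number $K(z,n)$ of distinct types in the conditioned sample.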

In this paper, we consider a generalisation of the LS-CPSM, referred to as the negative binomial compound Poisson sampling model (NB-CPSM). The NB-CPSM is indexed by real parameters $\alpha$ and $z$ such that either $\alpha \in (0,1)$ and $z > 0$, or $\alpha < 0$ and $z < 0$; the LS-CPSM is recovered by letting $\alpha \to 0$ with $z > 0$. We show that the NB-CPSM extends the compound Poisson perspective of the E-SM to the more general EP-SM for either $\alpha \in (0,1)$ or $\alpha < 0$. That is, we show that: (i) for $\alpha \in (0,1)$, the EP-SM (1) admits a representation as a randomised NB-CPSM with $\alpha \in (0,1)$ and $z > 0$, where the randomisation acts on $z$ with respect to a scale mixture between a Gamma and a scaled Mittag–Leffler distribution (Pitman [5]); (ii) for $\alpha < 0$, the NB-CPSM admits a representation in terms of a randomised EP-SM with $\alpha < 0$ and $\theta = -m\alpha$ for some $m \in \mathbb{N}$, where the randomisation acts on $m$ with respect to a tilted Poisson distribution arising from the Wright function (Wright [10]). The interplay between the NB-CPSM and the EP-SM is then applied to the large $n$ asymptotic behaviour of the number of distinct blocks in the corresponding random partitions. In particular, by combining the randomised representation in (i) with the large $n$ asymptotic behaviour of the number of distinct blocks under the NB-CPSM, we present a new proof of Pitman's $\alpha$-diversity (Pitman [5]), namely the large $n$ asymptotic behaviour of $K_n(\alpha,\theta)$ under the EP-SM for $\alpha \in (0,1)$.

## **2. A Compound Poisson Perspective of EP-SM**

To introduce the NB-CPSM, we consider a population of individuals with a random number $K$ of types, and let $K$ be distributed as a Poisson distribution with parameter $\lambda = z[1-(1-q)^{\alpha}]$, such that either $q \in (0,1)$, $\alpha \in (0,1)$ and $z > 0$, or $q \in (0,1)$, $\alpha < 0$ and $z < 0$. For $i \in \mathbb{N}$, let $N_i$ be the random number of individuals of type $i$ in the population, and let the $N_i$'s be independent of $K$ and independent of each other, with the common distribution:

$$\Pr[N_1 = x] = -\frac{1}{1-(1-q)^{\alpha}}\binom{\alpha}{x}(-q)^{x}\tag{4}$$

for $x \in \mathbb{N}$. Let $S = \sum_{1\leq i\leq K} N_i$ and $M_r = \sum_{1\leq i\leq K} \mathbb{1}_{\{N_i = r\}}$ for $r = 1,\ldots,S$; that is, $M_r$ is the random number of $N_i$'s equal to $r$, so that $\sum_{r\geq 1} M_r = K$ and $\sum_{r\geq 1} r M_r = S$. If $(M_1(\alpha,z,n),\ldots,M_n(\alpha,z,n))$ is a random variable whose distribution coincides with the conditional distribution of $(M_1,\ldots,M_S)$ given $S = n$, then it holds (Section 3, Charalambides [8]):

$$\Pr[(M_1(\alpha,z,n),\ldots,M_n(\alpha,z,n))=(x_1,\ldots,x_n)]=\frac{n!}{\sum_{j=0}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\prod_{i=1}^{n}\frac{\left[z\,\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right]^{x_i}}{x_i!},\tag{5}$$

where $\mathcal{C}(n,j;\alpha) = \frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}(-1)^{i}(-i\alpha)_{(n)}$ is the generalised factorial coefficient (Charalambides [11]), with the proviso $\mathcal{C}(n,0;\alpha) = 0$ for all $n \in \mathbb{N}$, $\mathcal{C}(n,j;\alpha) = 0$ for all $j > n$, and $\mathcal{C}(0,0;\alpha) = 1$. The distribution (5) is referred to as the NB-CPSM. As $\alpha \to 0$, the distribution (4) reduces to the distribution (2), and hence the NB-CPSM (5) reduces to the LS-CPSM (3). The next theorem states the large $n$ asymptotic behaviour of the counting statistics $K(\alpha,z,n) = \sum_{1\leq r\leq n} M_r(\alpha,z,n)$ and $M_r(\alpha,z,n)$ arising from the NB-CPSM.
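As a sanity check on the definition of (5) and of $\mathcal{C}(n,j;\alpha)$, the following sketch (our own code) computes the generalised factorial coefficients from their defining sum and verifies that the NB-CPSM assigns total mass one to the partitions of a small $n$, in both parameter regimes.

```python
import math

def rising(x, n):
    """Ascending factorial (x)_(n)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def gen_fact_coeff(n, j, alpha):
    """C(n, j; alpha) = (1/j!) sum_i binom(j, i) (-1)^i (-i alpha)_(n)."""
    s = sum((-1) ** i * math.comb(j, i) * rising(-i * alpha, n)
            for i in range(j + 1))
    return s / math.factorial(j)

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest

def nb_cpsm_prob(x, alpha, z):
    """Probability (5) of the multiplicity vector x = (x_1, ..., x_n)."""
    n = len(x)
    denom = sum(gen_fact_coeff(n, j, alpha) * z ** j for j in range(n + 1))
    p = math.factorial(n) / denom
    for i in range(1, n + 1):
        w = z * alpha * rising(1 - alpha, i - 1) / math.factorial(i)
        p *= w ** x[i - 1] / math.factorial(x[i - 1])
    return p

def total_mass(n, alpha, z):
    tot = 0.0
    for parts in partitions(n):
        x = [0] * n
        for part in parts:
            x[part - 1] += 1
        tot += nb_cpsm_prob(x, alpha, z)
    return tot
```

Both `total_mass(5, 0.5, 2.0)` and `total_mass(5, -0.7, -1.3)` are numerically equal to 1, reflecting the fact that, for fixed $k$, the sum of $n!\prod_i(\alpha(1-\alpha)_{(i-1)}/i!)^{x_i}/x_i!$ over the multiplicity vectors with $k$ blocks equals $\mathcal{C}(n,k;\alpha)$.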

**Theorem 1.** *Let $P_\lambda$ denote a Poisson random variable with parameter $\lambda > 0$. As $n \to +\infty$: (i) for $\alpha \in (0,1)$ and $z > 0$:*

$$K(\alpha,z,n)\stackrel{w}{\longrightarrow}1+P_{z}\tag{6}$$

*and:*

$$M_{r}(\alpha,z,n)\stackrel{w}{\longrightarrow}P_{\frac{\alpha(1-\alpha)_{(r-1)}}{r!}z}\,;\tag{7}$$

*(ii) for $\alpha < 0$ and $z < 0$:*

$$\frac{K(\alpha, z, n)}{n^{\frac{-\alpha}{1-\alpha}}} \stackrel{w}{\longrightarrow} \frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha} \tag{8}$$

*and:*

$$M_{r}(\alpha,z,n)\stackrel{w}{\longrightarrow}P_{\frac{\alpha(1-\alpha)_{(r-1)}}{r!}z}\,.\tag{9}$$
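Before turning to the proof, statement (6) can be illustrated numerically: from (5), the law of $K(\alpha,z,n)$ is proportional to $\mathcal{C}(n,j;\alpha)z^{j}$. The sketch below (our own code) computes the coefficients via the triangular recurrence $\mathcal{C}(n+1,j;\alpha)=(n-j\alpha)\mathcal{C}(n,j;\alpha)+\alpha\mathcal{C}(n,j-1;\alpha)$, a standard recurrence for generalised factorial coefficients (see Charalambides [11]; it is cross-checked below against the defining sum), rescaling each row to avoid overflow, and compares the mean of $K(\alpha,z,n)$ with the limiting mean $1+z$.

```python
import math

def gfc_direct(n, j, alpha):
    """C(n, j; alpha) from the defining sum (for small n only)."""
    s = sum((-1) ** i * math.comb(j, i)
            * math.prod(-i * alpha + t for t in range(n))
            for i in range(j + 1))
    return s / math.factorial(j)

def k_distribution(n, alpha, z):
    """Exact law of K(alpha, z, n): Pr[K = j] proportional to C(n, j; alpha) z^j.
    Each row of the recurrence is rescaled; rescaling cancels in the ratio."""
    row = [1.0]                                  # row n = 0: C(0, 0; alpha) = 1
    for m in range(n):
        new = [0.0] * (m + 2)
        for j in range(1, m + 2):
            prev = row[j] if j < len(row) else 0.0
            new[j] = (m - j * alpha) * prev + alpha * row[j - 1]
        scale = max(abs(v) for v in new)
        row = [v / scale for v in new]
    weights = [row[j] * z ** j for j in range(n + 1)]
    tot = sum(weights)
    return [w / tot for w in weights]

probs = k_distribution(300, 0.5, 1.0)
mean_K = sum(j * p for j, p in enumerate(probs))
print(mean_K)   # by (6), this approaches 1 + z = 2 as n grows
```

For $\alpha = 0.5$ and $z = 1$, the computed mean at $n = 300$ is already very close to $1+z=2$.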

**Proof.** As regards the proof of (6), we start by recalling that the probability generating function $G(\cdot;\lambda)$ of $P_\lambda$ is $G(s;\lambda) = \exp\{\lambda(s-1)\}$ for any $s > 0$. Now, let $G(\cdot;\alpha,z,n)$ be the probability generating function of $K(\alpha,z,n)$. The distribution of $K(\alpha,z,n)$ follows by combining the NB-CPSM (5) with Theorem 2.15 of Charalambides [11]. In particular, it follows that:

$$G(s;\alpha,z,n)=\frac{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)(sz)^{j}}{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}}.$$

Hereafter, we show that $G(s;\alpha,z,n) \to s\exp\{z(s-1)\}$ as $n \to +\infty$, for any $s > 0$, which implies (6). In particular, by direct application of the definition of $\mathcal{C}(n,j;\alpha)$, we write:

$$\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}=\sum_{i=1}^{n}(-1)^{i}(-i\alpha)_{(n)}\sum_{k=i}^{n}\frac{1}{k!}\binom{k}{i}z^{k}=\sum_{i=1}^{n}(-1)^{i}(-i\alpha)_{(n)}\,e^{z}z^{i}\,\frac{\Gamma(n-i+1,z)}{i!\,\Gamma(n-i+1)},$$

where $\Gamma(a,x) := \int_{x}^{+\infty} t^{a-1}e^{-t}\,\mathrm{d}t$ denotes the (upper) incomplete Gamma function for $a, x > 0$, and $\Gamma(a) := \int_{0}^{+\infty} t^{a-1}e^{-t}\,\mathrm{d}t$ denotes the Gamma function for $a > 0$. Accordingly, we write the identity:

$$G(s;\alpha,z,n)=e^{z(s-1)}\,\frac{-zs\,\frac{\Gamma(n,zs)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}(zs)^{i}\frac{\Gamma(n-i+1,zs)}{i!\,\Gamma(n-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}z^{i}\frac{\Gamma(n-i+1,z)}{i!\,\Gamma(n-i+1)}}.$$

Since $\lim_{n\to+\infty}\frac{\Gamma(n,x)}{\Gamma(n)} = 1$ for any $x > 0$, the proof of (6) is completed by showing that, for any $t > 0$:

$$\lim\_{n \to +\infty} \sum\_{i=2}^{n} (-1)^{i} \frac{(-i\alpha)\_{(n)}}{(-\alpha)\_{(n)}} \frac{\Gamma(n - i + 1, t)}{\Gamma(n - i + 1)} \frac{t^{i}}{i!} = 0. \tag{10}$$


By the definition of ascending factorials and the reflection formula of the Gamma function, it holds:

$$\frac{(-i\alpha)\_{(n)}}{(-\alpha)\_{(n)}} = \frac{\Gamma(n - i\alpha)}{\Gamma(n - \alpha)} \frac{\sin i\pi\alpha}{\pi} \Gamma(i\alpha + 1)\Gamma(-\alpha).$$

In particular, by means of the monotonicity of the function $z \mapsto \Gamma(z)$ on $[1,+\infty)$, we can write:

$$\frac{1}{i!} \left| \frac{(-i\alpha)\_{(n)}}{(-\alpha)\_{(n)}} \right| \le \frac{|\Gamma(-\alpha)|}{\pi} \frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)} \frac{\Gamma(i\alpha+1)}{i!} \tag{11}$$

for any $n \in \mathbb{N}$ such that $n > 1/(1-\alpha)$ and $i \in \{2,\ldots,n\}$. Note that $\frac{\Gamma(n,x)}{\Gamma(n)} \leq 1$. Then, we apply (11) to obtain:

$$\begin{split}\left|\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\frac{\Gamma(n-i+1,t)}{\Gamma(n-i+1)}\frac{t^{i}}{i!}\right|&\leq\sum_{i=2}^{n}\frac{t^{i}}{i!}\left|\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\right|\\&\leq\frac{|\Gamma(-\alpha)|}{\pi}\frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)}\sum_{i\geq0}t^{i}\frac{\Gamma(i\alpha+1)}{i!}.\end{split}$$

Now, by means of the Stirling approximation, it holds that $\frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)} \sim n^{-\alpha}$ as $n \to +\infty$. Moreover, we have:

$$\sum\_{i\geq 0} t^i \frac{\Gamma(i\alpha + 1)}{i!} = \int\_0^{+\infty} e^{tz^\alpha - z} dz < +\infty$$

where the finiteness of the integral follows, for any fixed $t > 0$, from the fact that $tz^{\alpha} < \frac{1}{2}z$ whenever $z > (2t)^{\frac{1}{1-\alpha}}$. This completes the proof of (10), and hence the proof of (6). As regards the proof of (7), we make use of the falling factorial moments of $M_r(\alpha,z,n)$, which follow by combining the NB-CPSM (5) with Theorem 2.15 of Charalambides [11]. Let $(a)_{[n]}$ be the falling factorial of $a$ of order $n$, i.e., $(a)_{[n]} := \prod_{0\leq i\leq n-1}(a-i)$ for any $a \in \mathbb{R}_{+}$ and $n \in \mathbb{N}_0$, with the proviso $(a)_{[0]} = 1$. Then, we write:

$$\begin{split}\mathbb{E}[(M_{r}(\alpha,z,n))_{[s]}]&=(-1)^{rs}(n)_{[rs]}\binom{\alpha}{r}^{s}(-z)^{s}\,\frac{\sum_{j=0}^{n-rs}\mathcal{C}(n-rs,j;\alpha)z^{j}}{\sum_{j=0}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\\&=(-1)^{rs}(n)_{[rs]}\binom{\alpha}{r}^{s}(-z)^{s}\,\frac{(-\alpha)_{(n-rs)}}{(-\alpha)_{(n)}}\\&\quad\times\frac{-z\,\frac{\Gamma(n-rs,z)}{\Gamma(n-rs)}+\sum_{i=2}^{n-rs}(-1)^{i}\frac{(-i\alpha)_{(n-rs)}}{(-\alpha)_{(n-rs)}}z^{i}\frac{\Gamma(n-rs-i+1,z)}{i!\,\Gamma(n-rs-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}z^{i}\frac{\Gamma(n-i+1,z)}{i!\,\Gamma(n-i+1)}}.\end{split}$$

Now, by means of the same argument applied in the proof of statement (6), it holds true that:

$$\lim_{n\to+\infty}\frac{-z\,\frac{\Gamma(n-rs,z)}{\Gamma(n-rs)}+\sum_{i=2}^{n-rs}(-1)^{i}\frac{(-i\alpha)_{(n-rs)}}{(-\alpha)_{(n-rs)}}z^{i}\frac{\Gamma(n-rs-i+1,z)}{i!\,\Gamma(n-rs-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}z^{i}\frac{\Gamma(n-i+1,z)}{i!\,\Gamma(n-i+1)}}=1.$$

Then:

$$\lim_{n\to+\infty}\mathbb{E}[(M_{r}(\alpha,z,n))_{[s]}]=(-1)^{rs}\binom{\alpha}{r}^{s}(-z)^{s}=\left[\frac{\alpha(1-\alpha)_{(r-1)}}{r!}z\right]^{s}$$

follows from the fact that $(n)_{[rs]} \sim \frac{(-\alpha)_{(n)}}{(-\alpha)_{(n-rs)}}$ as $n \to +\infty$. The proof of the large $n$ asymptotics (7) is completed by recalling that the falling factorial moment of order $s$ of $P_\lambda$ is $\mathbb{E}[(P_\lambda)_{[s]}] = \lambda^{s}$.

As regards the proof of statement (8), let $\alpha = -\sigma$ with $\sigma > 0$ and let $z = -\zeta$ with $\zeta > 0$. Then, by direct application of Equation (2.27) of Charalambides [11], we write the following identity:

$$\sum_{j=0}^{n}\mathcal{C}(n,j;-\sigma)(-\zeta)^{j}=(-1)^{n}\sum_{v=0}^{n}s(n,v)(-\sigma)^{v}\sum_{j=0}^{v}\zeta^{j}S(v,j),$$

where $s(n,v)$ is the Stirling number of the first kind and $S(v,j)$ is the Stirling number of the second kind. Now, note that $\sum_{0\leq j\leq v}\zeta^{j}S(v,j)$ is the moment of order $v$ of a Poisson random variable with parameter $\zeta > 0$. Then, we write:

$$\sum_{j=0}^{n}\mathcal{C}(n,j;-\sigma)(-\zeta)^{j}=\sum_{v=0}^{n}|s(n,v)|\,\sigma^{v}\sum_{j\geq0}j^{v}e^{-\zeta}\frac{\zeta^{j}}{j!}=\sum_{j\geq0}e^{-\zeta}\frac{\zeta^{j}}{j!}\int_{0}^{+\infty}x^{n}f_{G_{j\sigma,1}}(x)\,\mathrm{d}x.\tag{12}$$

That is, writing $B_n(w) := \sum_{j=0}^{n}\mathcal{C}(n,j;-\sigma)(-w)^{j}$ for $w > 0$, the identity (12) states that:

$$B_{n}(w)=\mathbb{E}[(G_{\sigma P_{w},1})^{n}],\tag{13}$$

where $G_{a,1}$ and $P_w$ are independent random variables such that $G_{a,1}$ is a Gamma random variable with shape parameter $a > 0$ and scale parameter 1, and $P_w$ is a Poisson random variable with parameter $w$. Accordingly, the distribution of $G_{\sigma P_w,1}$, say $\mu_{\sigma,w}$, is the following:

$$\mu_{\sigma,w}(\mathrm{d}t)=e^{-w}\delta_{0}(\mathrm{d}t)+\left(\sum_{j\geq1}\frac{e^{-w}w^{j}}{j!}\frac{1}{\Gamma(j\sigma)}e^{-t}t^{j\sigma-1}\right)\mathrm{d}t$$

for $t > 0$. The discrete component of $\mu_{\sigma,w}$ does not contribute to the expectation (13), so we focus on the absolutely continuous component, whose density can be written as follows:

$$\sum_{j\geq1}\frac{e^{-w}w^{j}}{j!}\frac{1}{\Gamma(j\sigma)}e^{-t}t^{j\sigma-1}=\frac{e^{-(w+t)}}{t}\,W_{\sigma,0}(wt^{\sigma}),$$

where $W_{\sigma,\tau}(y) := \sum_{j\geq0}\frac{y^{j}}{j!\,\Gamma(j\sigma+\tau)}$ is the Wright function (Wright [10]). In particular, for $\tau = 0$:

$$B\_n(w) = \int\_0^{+\infty} t^n \frac{e^{-(w+t)}}{t} W\_{\sigma,0}(wt^{\sigma}) \,\mathrm{d}t \,\,. \tag{14}$$

If we split the integral in (14) as $\int_{0}^{M} + \int_{M}^{+\infty}$ for any $M > 0$, the contribution of the latter integral is overwhelming with respect to the contribution of the former. Then, $W_{\sigma,0}$ can be equivalently replaced by the asymptotic expansion $W_{\sigma,0}(y) \sim c(\sigma)\,y^{\frac{1}{2(1+\sigma)}}\exp\{\frac{\sigma+1}{\sigma}(\sigma y)^{\frac{1}{1+\sigma}}\}$, as $y \to +\infty$, for some constant $c(\sigma)$ depending solely on $\sigma$. See Theorem 2 in Wright [10]. Hence:

$$\begin{split} B\_n(w) &\sim c(\sigma) \int\_0^{+\infty} t^{n-1} e^{-(w+t)} (w t^{\sigma})^{\frac{1}{2(1+\sigma)}} \exp\left\{ \frac{\sigma+1}{\sigma} (\sigma w t^{\sigma})^{\frac{1}{1+\sigma}} \right\} dt \\ &= c(\sigma) e^{-w} w^{\frac{1}{2(1+\sigma)}} \int\_0^{+\infty} t^{n + \frac{\sigma}{2(1+\sigma)} - 1} \exp\{A(w, \sigma) t^{\frac{\sigma}{1+\sigma}} - t\} dt, \end{split}$$

where $A(w,\sigma) := \frac{\sigma+1}{\sigma}(\sigma w)^{\frac{1}{1+\sigma}}$. Then, the problem is reduced to an integral whose asymptotic behaviour is described in Berg [12]. From Equation (31) of Berg [12] and the Stirling approximation, we have:

$$B_{n}(w)\sim c(\sigma)\,e^{-w}w^{\frac{1}{2(1+\sigma)}}\,\Gamma(n)\exp\left\{A(w,\sigma)\,n^{\frac{\sigma}{1+\sigma}}\right\}.\tag{15}$$

This last asymptotic expansion leads directly to (8). Indeed, let *G*(·; −*σ*, −*ζ*, *n*) be the probability generating function of the random variable *K*(−*σ*, −*ζ*, *n*), which reads as *G*(*s*; −*σ*, −*ζ*, *n*) = *Bn*(*sζ*)/*Bn*(*ζ*) for *s* > 0. Then, by means of (15), for any fixed *s* > 0 we write:

$$G(s;-\sigma,-\zeta,n)\sim e^{-\zeta(s-1)}s^{\frac{1}{2(1+\sigma)}}\exp\left\{n^{\frac{\sigma}{1+\sigma}}\frac{\sigma+1}{\sigma}(\sigma\zeta)^{\frac{1}{1+\sigma}}\left[s^{\frac{1}{1+\sigma}}-1\right]\right\}.\tag{16}$$

Since (15) holds uniformly in $w$ on compact sets, we may consider the function $G(s;-\sigma,-\zeta,n)$ evaluated at some point $s_n$, and extend the validity of (16) with $s_n$ in the place of $s$, as long as $\{s_n\}_{n\geq1}$ varies in a compact subset of $[0,+\infty)$. Thus, we can choose $s_n = s^{\beta(n)}$ with $\beta(n) = \frac{1}{n^{\sigma/(1+\sigma)}}$, and notice that $\beta(n) \to 0$ as $n \to +\infty$. Thus, $s_n = 1 + \beta(n)\log s + o(\beta(n)) \to 1$, and we have:

$$n^{\frac{\sigma}{1+\sigma}}\,\frac{\sigma+1}{\sigma}(\sigma\zeta)^{\frac{1}{1+\sigma}}\left[s_{n}^{\frac{1}{1+\sigma}}-1\right]\to\frac{(\sigma\zeta)^{\frac{1}{1+\sigma}}}{\sigma}\log s,$$

which implies that $K(-\sigma,-\zeta,n)/n^{\frac{\sigma}{1+\sigma}} \stackrel{w}{\longrightarrow} \frac{(\sigma\zeta)^{\frac{1}{1+\sigma}}}{\sigma}$ as $n \to +\infty$. This completes the proof of (8). As regards the proof of (9), let $\alpha = -\sigma$ with $\sigma > 0$ and let $z = -\zeta$ with $\zeta > 0$. Similarly to the proof of (7), here we make use of the falling factorial moments of $M_r(-\sigma,-\zeta,n)$, that is:

$$\mathbb{E}[(M_{r}(-\sigma,-\zeta,n))_{[s]}]=(-1)^{rs}(n)_{[rs]}\binom{-\sigma}{r}^{s}\zeta^{s}\,\frac{\sum_{j=0}^{n-rs}\mathcal{C}(n-rs,j;-\sigma)(-\zeta)^{j}}{\sum_{j=0}^{n}\mathcal{C}(n,j;-\sigma)(-\zeta)^{j}}.$$

At this point, we make use of the same large *n* arguments applied in the proof of statement (7). In particular, by means of the large *n* asymptotic (15), as *n* → +∞, it holds true that:

$$\frac{\sum\_{j=0}^{n-rs} \mathcal{C}\left(n-rs,j;-\sigma\right)(-\zeta)^j}{\sum\_{j=0}^{n} \mathcal{C}\left(n,j;-\sigma\right)(-\zeta)^j} \sim n^{-rs}.$$

Then:

$$\lim_{n\to+\infty}\mathbb{E}\left[(M_{r}(-\sigma,-\zeta,n))_{[s]}\right]=(-1)^{rs}\binom{-\sigma}{r}^{s}\zeta^{s}=\left[\frac{-\sigma(1+\sigma)_{(r-1)}}{r!}(-\zeta)\right]^{s}$$

follows from the fact that $(n)_{[rs]} \sim n^{rs}$ as $n \to +\infty$. The proof of the large $n$ asymptotics (9) is completed by recalling that the falling factorial moment of order $s$ of $P_\lambda$ is $\mathbb{E}[(P_\lambda)_{[s]}] = \lambda^{s}$.
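The proof of (8) hinges on the asymptotic expansion of the Wright function from Theorem 2 of Wright [10]. The following sketch (our own code, computing the series in log-space to avoid overflow) checks that $\log W_{\sigma,0}(y)$ minus the logarithm of the claimed shape $y^{\frac{1}{2(1+\sigma)}}\exp\{\frac{\sigma+1}{\sigma}(\sigma y)^{\frac{1}{1+\sigma}}\}$ stabilises, namely approaches $\log c(\sigma)$, as $y$ grows.

```python
import math

def log_wright(sigma, y, terms=2000):
    """log W_{sigma,0}(y), with W_{sigma,0}(y) = sum_{j>=1} y^j / (j! Gamma(j sigma)),
    computed stably via the log-sum-exp trick."""
    logs = [j * math.log(y) - math.lgamma(j + 1) - math.lgamma(j * sigma)
            for j in range(1, terms)]
    m = max(logs)
    return m + math.log(sum(math.exp(v - m) for v in logs))

def log_shape(sigma, y):
    """Logarithm of the y-dependent factor in Wright's asymptotic expansion."""
    return (math.log(y) / (2 * (1 + sigma))
            + (sigma + 1) / sigma * (sigma * y) ** (1 / (1 + sigma)))

# the difference should be roughly constant (equal to log c(sigma)) for large y
d1 = log_wright(0.5, 200.0) - log_shape(0.5, 200.0)
d2 = log_wright(0.5, 400.0) - log_shape(0.5, 400.0)
print(d1, d2)
```

The two printed values agree to within a few percent, consistent with a relative correction of order $y^{-\frac{1}{1+\sigma}}$ in the expansion.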

In the rest of the section, we make use of the NB-CPSM (5) to introduce a compound Poisson perspective of the EP-SM. In particular, our result extends the well-known compound Poisson perspective of the E-SM to the EP-SM for either $\alpha \in (0,1)$ or $\alpha < 0$. For $\alpha \in (0,1)$, let $f_\alpha$ denote the density function of a positive $\alpha$-stable random variable $X_\alpha$, that is, $X_\alpha$ is a random variable for which $\mathbb{E}[\exp\{-tX_\alpha\}] = \exp\{-t^{\alpha}\}$ for any $t > 0$. For $\alpha \in (0,1)$ and $\theta > -\alpha$, let $S_{\alpha,\theta}$ be a positive random variable with the density function:

$$f_{S_{\alpha,\theta}}(s)=\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta/\alpha+1)}\,s^{\frac{\theta-1}{\alpha}-1}f_{\alpha}\left(s^{-\frac{1}{\alpha}}\right).$$

That is, $S_{\alpha,\theta}$ is a scaled Mittag–Leffler random variable (Chapter 1, Pitman [5]). Let $G_{a,b}$ be a Gamma random variable with shape parameter $a > 0$ and scale parameter $b > 0$, and let us assume that $G_{a,b}$ is independent of $S_{\alpha,\theta}$. Then, for $\alpha \in (0,1)$, $\theta > -\alpha$ and $n \in \mathbb{N}$, let:

$$\bar{X}_{\alpha,\theta,n}\stackrel{d}{=}G_{\theta+n,1}^{\alpha}\,S_{\alpha,\theta}.\tag{17}$$

Finally, for $\alpha < 0$, $z < 0$ and $n \in \mathbb{N}$, let $\tilde{X}_{\alpha,z,n}$ be a random variable on $\mathbb{N}$ whose distribution is a tilted Poisson distribution arising from the identity (12). Precisely, for any $x \in \mathbb{N}$:

$$\Pr[\tilde{X}_{\alpha,z,n}=x]=\frac{1}{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\,\frac{e^{z}(-z)^{x}\,\Gamma(-x\alpha+n)}{x!\,\Gamma(-x\alpha)}.\tag{18}$$
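Since $\Gamma(-x\alpha+n)/\Gamma(-x\alpha) = (-x\alpha)_{(n)}$, the distribution (18) can be evaluated without special functions. The sketch below (our own code) verifies numerically that (18) sums to one over $x \in \mathbb{N}$, i.e., that $\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}$ is indeed the right normalising constant.

```python
import math

def rising(x, n):
    """Ascending factorial (x)_(n)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def gen_fact_coeff(n, j, alpha):
    """C(n, j; alpha) from its defining sum."""
    s = sum((-1) ** i * math.comb(j, i) * rising(-i * alpha, n)
            for i in range(j + 1))
    return s / math.factorial(j)

def tilted_poisson_pmf(x, alpha, z, n):
    """Pr[X~_{alpha,z,n} = x] from (18), for alpha < 0 and z < 0."""
    norm = sum(gen_fact_coeff(n, j, alpha) * z ** j for j in range(1, n + 1))
    return (math.exp(z) * (-z) ** x * rising(-x * alpha, n)
            / (math.factorial(x) * norm))

# truncated sum over x; the tail is negligible for these parameter values
total = sum(tilted_poisson_pmf(x, -0.6, -0.9, 4) for x in range(1, 120))
```

Here `total` is numerically equal to 1, which is exactly the content of the identity (12) rewritten in the parameters $(\alpha, z)$.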

The next theorem makes use of $\bar{X}_{\alpha,\theta,n}$ and $\tilde{X}_{\alpha,z,n}$ to establish an interplay between the NB-CPSM (5) and the EP-SM (1), which extends the compound Poisson perspective of the E-SM.

**Theorem 2.** *Let $(M_{1,n}(\alpha,\theta),\ldots,M_{n,n}(\alpha,\theta))$ be distributed as the EP-SM (1), and let $\bar{X}_{\alpha,\theta,n}$ be the random variable defined in (17), which is independent of $(M_{1,n}(\alpha,\theta),\ldots,M_{n,n}(\alpha,\theta))$. Moreover, let $(M_1(\alpha,z,n),\ldots,M_n(\alpha,z,n))$ be distributed as the NB-CPSM (5), and let $\tilde{X}_{\alpha,z,n}$ be the random variable defined in (18), which is independent of $(M_1(\alpha,z,n),\ldots,M_n(\alpha,z,n))$. Then: (i) for $\alpha \in (0,1)$ and $\theta > -\alpha$:*

$$(M_{1,n}(\alpha,\theta),\ldots,M_{n,n}(\alpha,\theta))\stackrel{d}{=}(M_{1}(\alpha,\bar{X}_{\alpha,\theta,n},n),\ldots,M_{n}(\alpha,\bar{X}_{\alpha,\theta,n},n));$$

*(ii) for α* < 0 *and z* < 0*:*

$$(M_{1}(\alpha,z,n),\ldots,M_{n}(\alpha,z,n))\stackrel{d}{=}(M_{1,n}(\alpha,-\tilde{X}_{\alpha,z,n}\,\alpha),\ldots,M_{n,n}(\alpha,-\tilde{X}_{\alpha,z,n}\,\alpha)).$$

**Proof.** The proof of statement (i) relies on the classical integral representation of the Gamma function. That is, by applying the integral representation of $\Gamma(\theta/\alpha+k)$ to the EP-SM (1), for $x_1,\ldots,x_n \in \{0,\ldots,n\}$ with $\sum_{i=1}^{n}x_i = k$ and $\sum_{i=1}^{n}ix_i = n$, we can write:

$$\begin{split}&\Pr[(M_{1,n}(\alpha,\theta),\ldots,M_{n,n}(\alpha,\theta))=(x_1,\ldots,x_n)]\\&\quad=n!\,\frac{\alpha^{k}}{\Gamma(\theta+n)}\prod_{i=1}^{n}\frac{\left(\frac{(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\,\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta/\alpha+1)}\\&\qquad\times\int_{0}^{+\infty}z^{\theta/\alpha-1}e^{-z}\frac{z^{k}}{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\left(\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}\right)\mathrm{d}z.\end{split}$$

By Equation (13) of Favaro et al. [13]:

$$\begin{split}&=n!\,\frac{\alpha^{k}}{\Gamma(\theta+n)}\prod_{i=1}^{n}\frac{\left(\frac{(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\,\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta/\alpha+1)}\\&\qquad\times\int_{0}^{+\infty}z^{\theta/\alpha-1}e^{-z}\frac{z^{k}}{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\left(e^{z}z^{n/\alpha}\int_{0}^{+\infty}y^{n}e^{-yz^{1/\alpha}}f_{\alpha}(y)\,\mathrm{d}y\right)\mathrm{d}z\\&=\int_{0}^{+\infty}\frac{n!}{\sum_{j=0}^{n}\mathcal{C}(n,j;\alpha)z^{j}}\prod_{i=1}^{n}\frac{\left(z\,\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\\&\qquad\times\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta+n)\Gamma(\theta/\alpha+1)}\,z^{\theta/\alpha+n/\alpha-1}\int_{0}^{+\infty}y^{n}e^{-yz^{1/\alpha}}f_{\alpha}(y)\,\mathrm{d}y\,\mathrm{d}z\\&=\int_{0}^{+\infty}\Pr[(M_{1}(\alpha,z,n),\ldots,M_{n}(\alpha,z,n))=(x_1,\ldots,x_n)]\\&\qquad\times\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta+n)\Gamma(\theta/\alpha+1)}\,z^{\theta/\alpha+n/\alpha-1}\int_{0}^{+\infty}y^{n}e^{-yz^{1/\alpha}}f_{\alpha}(y)\,\mathrm{d}y\,\mathrm{d}z.\end{split}$$

By the distribution of $\bar{X}_{\alpha,\theta,n}$:

$$=\int_{0}^{+\infty}\Pr[(M_{1}(\alpha,z,n),\ldots,M_{n}(\alpha,z,n))=(x_1,\ldots,x_n)]\,f_{\bar{X}_{\alpha,\theta,n}}(z)\,\mathrm{d}z,$$

where $f_{\bar{X}_{\alpha,\theta,n}}$ is the density function of the random variable $\bar{X}_{\alpha,\theta,n}$. This completes the proof of (i).

As regards the proof of statement (ii), for any $\alpha < 0$, $m \in \mathbb{N}$, $k \leq m$ and $n \in \mathbb{N}$, we define the function $m \mapsto A(m;k,\alpha,n) := \frac{m!}{(m-k)!}\frac{\Gamma(-m\alpha)}{\Gamma(-m\alpha+n)}$, and then consider the following identity:

$$\frac{(-z)^{k}}{\sum_{j=1}^{n}\mathcal{C}(n,j;\alpha)z^{j}}=\sum_{m\geq k}A(m;k,\alpha,n)\Pr[\tilde{X}_{\alpha,z,n}=m].\tag{19}$$

By applying (19) to the NB-CPSM (5), for $x_1,\ldots,x_n \in \{0,\ldots,n\}$ with $\sum_{i=1}^{n}x_i = k$ and $\sum_{i=1}^{n}ix_i = n$, we write:

$$\begin{split}&\Pr[(M_{1}(\alpha,z,n),\ldots,M_{n}(\alpha,z,n))=(x_1,\ldots,x_n)]\\&\quad=\sum_{m\geq k}n!\,(-1)^{k}A(m;k,\alpha,n)\Pr[\tilde{X}_{\alpha,z,n}=m]\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\\&\quad=\sum_{m\geq k}n!\,(-1)^{k}\frac{m!}{(m-k)!}\frac{\Gamma(-m\alpha)}{\Gamma(-m\alpha+n)}\Pr[\tilde{X}_{\alpha,z,n}=m]\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\\&\quad=\sum_{m\geq k}n!\,\frac{(-m)_{(k)}}{(-m\alpha)_{(n)}}\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\Pr[\tilde{X}_{\alpha,z,n}=m]\\&\quad=\sum_{m\geq k}\Pr[(M_{1,n}(\alpha,-m\alpha),\ldots,M_{n,n}(\alpha,-m\alpha))=(x_1,\ldots,x_n)]\Pr[\tilde{X}_{\alpha,z,n}=m].\end{split}$$

This completes the proof of (ii).

Theorem 2 presents a compound Poisson perspective of the EP-SM in terms of the NB-CPSM, thus extending the well-known compound Poisson perspective of the E-SM in terms of the LS-CPSM. Statement (i) of Theorem 2 shows that for *α* ∈ (0, 1) and *θ* > −*α*, the EP-SM admits a representation in terms of the NB-CPSM with *α* ∈ (0, 1) and *z* > 0, where the randomisation acts on the parameter *z* with respect to the distribution (17). Precisely, this is a compound mixed Poisson sampling model. That is, a compound sampling model in which the distribution of the random number *K* of distinct types in the population is a mixture of Poisson distributions with respect to the law of *X*¯ *<sup>α</sup>*,*θ*,*n*. Statement (ii) of Theorem 2 shows that for *α* < 0 and *z* < 0, the NB-CPSM admits a representation in terms of a randomised EP-SM with *α* < 0 and *θ* = −*mα* for some *m* ∈ N, where the randomisation acts on the parameter *m* with respect to the distribution (17).

**Remark 1.** *The randomisation procedure introduced in Theorem 2 is somewhat reminiscent of a class of Gibbs-type sampling models introduced in Gnedin and Pitman [14]. This class is defined from the EP-SM with α* < 0 *and θ* = −*mα, for some m* ∈ N*, by assuming that the parameter m is distributed according to an arbitrary distribution on* N*; see, e.g., Theorem 12 of Gnedin and Pitman [14] and Gnedin [15]. However, in contrast to the definition of Gnedin and Pitman [14], in our context the distribution of m depends on the sample size n.*

For *α* ∈ (0, 1) and *θ* > −*α*, Pitman [5] first studied the large *n* asymptotic behaviour of $K_n(\alpha,\theta)$; see also Gnedin and Pitman [14] and the references therein. Let $\stackrel{a.s.}{\longrightarrow}$ denote almost sure convergence, and let $S_{\alpha,\theta}$ be the scaled Mittag–Leffler random variable defined above. Theorem 3.8 of Pitman [5] exploited a martingale convergence argument to show that:

$$\frac{K_n(\alpha,\theta)}{n^{\alpha}} \stackrel{a.s.}{\longrightarrow} S_{\alpha,\theta} \tag{20}$$

as *n* → +∞. The random variable *Sα*,*<sup>θ</sup>* is referred to as Pitman's *α*-diversity. For *α* < 0 and *θ* = −*mα* for some *m* ∈ N, the large *n* asymptotic behaviour of *Kn*(*α*, *θ*) is trivial, that is:

$$K_n(\alpha,\theta) \stackrel{w}{\longrightarrow} m \tag{21}$$

as *n* → +∞. We refer to Dolera and Favaro [16,17] for Berry–Esseen type refinements of (20), and to Favaro et al. [18,19] and Favaro and James [13] for generalisations of (20) with applications to Bayesian nonparametrics; see also Pitman [5] (Chapter 4) for a general treatment of (20). In light of Theorem 2, it is natural to ask whether there exists an interplay between Theorem 1 and the large *n* asymptotic behaviours (20) and (21). Hereafter, we show that: (i) (20), with the almost sure convergence replaced by convergence in distribution, arises by combining (6) with statement (i) of Theorem 2; (ii) (8) arises by combining (21) with statement (ii) of Theorem 2. This provides an alternative proof of Pitman's *α*-diversity.
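Although not used in the proofs below, the convergence (20) is easy to visualise by simulation: the EP-SM partition can be generated sequentially via the two-parameter Chinese restaurant process (Pitman [5]), in which, given a partition of {1, ... , *i*} into *k* blocks of sizes $n_1,\ldots,n_k$, item *i* + 1 opens a new block with probability $(\theta+k\alpha)/(i+\theta)$ and joins block *j* with probability $(n_j-\alpha)/(i+\theta)$. The following minimal Python sketch is our own illustration (the helper `crp_K` and all parameter values are ours, not part of the paper):

```python
import random

def crp_K(n, alpha, theta, rng):
    """Number of blocks K_n of an EP-SM partition of {1,...,n}, sampled via
    the two-parameter Chinese restaurant process (alpha in (0,1), theta > -alpha)."""
    sizes = [1]                          # the first item always opens the first block
    for i in range(1, n):                # seat items 2,...,n
        u = rng.random() * (i + theta)
        if u < theta + len(sizes) * alpha:
            sizes.append(1)              # new block: prob (theta + k*alpha)/(i + theta)
        else:                            # existing block j: prob (n_j - alpha)/(i + theta)
            u -= theta + len(sizes) * alpha
            for j, s in enumerate(sizes):
                u -= s - alpha
                if u < 0:
                    sizes[j] += 1
                    break
            else:
                sizes[-1] += 1           # guard against floating-point round-off
    return len(sizes)

rng = random.Random(2021)
alpha, theta, n = 0.5, 1.0, 1000
ratios = [crp_K(n, alpha, theta, rng) / n**alpha for _ in range(100)]
# sample mean of K_n / n^alpha: typically around 2.2 for n = 1000, while the
# limit S_{1/2,1} has mean Gamma(2) / ((1/2) * Gamma(3/2)) ~ 2.26
print(sum(ratios) / len(ratios))
```

The sample mean of $K_n/n^{\alpha}$ stabilises near the mean of Pitman's *α*-diversity as *n* grows, in agreement with (20).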

**Theorem 3.** *Let $K_n(\alpha,\theta)$ and $K(\alpha,z,n)$ be the number of blocks under the EP-SM and the NB-CPSM, respectively. As $n \to +\infty$:*

*(i) For α* ∈ (0, 1) *and θ* > −*α:*

$$\frac{K_n(\alpha,\theta)}{n^{\alpha}} \stackrel{w}{\longrightarrow} S_{\alpha,\theta}. \tag{22}$$

*(ii) For α* < 0 *and z* < 0*:*

$$\frac{K(\alpha, z, n)}{n^{\frac{-\alpha}{1-\alpha}}} \stackrel{w}{\longrightarrow} \frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha}. \tag{23}$$

**Proof.** We show that (22) arises by combining (6) with statement (i) of Theorem 2. For any pair of N-valued random variables *U* and *V*, let $\mathrm{d}_{TV}(U;V)$ be the total variation distance between the distribution of *U* and the distribution of *V*. Furthermore, let $P_c$ denote a Poisson random variable with parameter *c* > 0. For any *α* ∈ (0, 1) and *t* > 0, we show that, as *n* → +∞:

$$\mathrm{d}_{TV}(K(\alpha, tn^{\alpha}, n);\, 1 + P_{tn^{\alpha}}) \to 0. \tag{24}$$

This implies (22). The proof of (24) requires a careful analysis of the probability generating function of $K(\alpha, tn^{\alpha}, n)$. In particular, let us define $\omega(t;n,\alpha) := tn^{\alpha} + \frac{tM_\alpha'(t)}{M_\alpha(t)}$, where $M_\alpha(t) := \frac{1}{\pi}\sum_{m=1}^{\infty}\frac{(-t)^{m-1}}{(m-1)!}\,\Gamma(\alpha m)\sin(\pi\alpha m)$ is the Wright–Mainardi function (Mainardi et al. [20]). Then, we apply Corollary 2 of Dolera and Favaro [16] to conclude that $\mathrm{d}_{TV}(K(\alpha, tn^{\alpha}, n);\, 1 + P_{\omega(t;n,\alpha)}) \to 0$ as *n* → +∞. Finally, we apply inequality (2.2) in Adell and Jodrá [21] to obtain:
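The series defining $M_\alpha(t)$ converges for every *α* ∈ (0, 1) and *t* > 0 and is straightforward to evaluate numerically; for *α* = 1/2 it reduces to the classical closed form $M_{1/2}(t) = e^{-t^2/4}/\sqrt{\pi}$ (Mainardi et al. [20]), which provides a convenient sanity check. A minimal sketch (the function name and the truncation level 60 are our own illustrative choices):

```python
import math

def mainardi_M(alpha, t, terms=60):
    """Truncated sine series for the Wright--Mainardi function:
    M_alpha(t) = (1/pi) * sum_{m>=1} (-t)^(m-1)/(m-1)! * Gamma(alpha*m) * sin(pi*alpha*m)."""
    total = 0.0
    for m in range(1, terms + 1):
        total += ((-t) ** (m - 1) / math.factorial(m - 1)
                  * math.gamma(alpha * m) * math.sin(math.pi * alpha * m))
    return total / math.pi

# sanity check against the closed form M_{1/2}(t) = exp(-t^2/4)/sqrt(pi)
print(mainardi_M(0.5, 1.0), math.exp(-0.25) / math.sqrt(math.pi))
```

The two printed values agree to high precision, confirming the truncated series is an accurate proxy for $M_\alpha$ at moderate *t*.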

$$\mathrm{d}_{TV}(1+P_{tn^{\alpha}};\, 1+P_{\omega(t;n,\alpha)}) = \mathrm{d}_{TV}(P_{tn^{\alpha}};\, P_{\omega(t;n,\alpha)}) \leq \frac{tM_{\alpha}'(t)}{M_{\alpha}(t)} \min\left\{1, \frac{\sqrt{2/e}}{\sqrt{\omega(t;n,\alpha)} + \sqrt{tn^{\alpha}}}\right\},$$

so that $\mathrm{d}_{TV}(1+P_{tn^{\alpha}};\, 1+P_{\omega(t;n,\alpha)}) \to 0$ as *n* → +∞, and (24) follows. Now, keeping *α* and *t* fixed as above, we show that (24) entails (22). To this aim, we introduce the Kolmogorov distance $\mathrm{d}_K$ which, for any pair of R+-valued random variables *U* and *V*, is defined by $\mathrm{d}_K(U;V) := \sup_{x\geq 0}|\Pr[U\leq x] - \Pr[V\leq x]|$. The claim to be proven is equivalent to:

$$\mathbf{d}\_K(\mathcal{K}\_n(\alpha, \theta) / n^{\alpha}; \mathcal{S}\_{\alpha, \theta}) \to 0$$

as *n* → +∞. We exploit statement (i) of Theorem 2, which leads to the distributional identity $K_n(\alpha,\theta) \stackrel{d}{=} K(\alpha, \bar{X}_{\alpha,\theta,n}, n)$. Thus, in view of the basic properties of the Kolmogorov distance:

$$\begin{split} \mathrm{d}_{K}(K_{n}(\alpha,\theta)/n^{\alpha};\, S_{\alpha,\theta}) &\leq \mathrm{d}_{K}(K_{n}(\alpha,\theta);\, K(\alpha, n^{\alpha}S_{\alpha,\theta}, n)) \\ &\quad + \mathrm{d}_{K}(K(\alpha, n^{\alpha}S_{\alpha,\theta}, n);\, 1+P_{n^{\alpha}S_{\alpha,\theta}}) \\ &\quad + \mathrm{d}_{K}([1+P_{n^{\alpha}S_{\alpha,\theta}}]/n^{\alpha};\, S_{\alpha,\theta}), \end{split} \tag{25}$$

where $\{P_\lambda\}_{\lambda\geq 0}$ is here thought of as a homogeneous Poisson process with rate 1, independent of $S_{\alpha,\theta}$. The desired conclusion follows once we prove that all three summands on the right-hand side of (25) go to zero as *n* → +∞. Before proceeding, we recall that $\mathrm{d}_K(U;V) \leq \mathrm{d}_{TV}(U;V)$. Therefore, for the first of these terms, we write:

$$\begin{split} &\mathrm{d}_{K}(K_{n}(\alpha,\theta);\, K(\alpha, n^{\alpha}S_{\alpha,\theta}, n)) \\ &\quad \leq \frac{1}{2}\sum_{k=1}^{n}\left|\mathcal{C}(n,k;\alpha)\frac{\Gamma(k+\theta/\alpha)}{\alpha\Gamma(\theta/\alpha+1)}\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)} - \int_{0}^{+\infty}\frac{\mathcal{C}(n,k;\alpha)(tn^{\alpha})^{k}}{d_{n}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right| \end{split}$$

with $d_n(t) := \sum_{j=1}^n \mathcal{C}(n,j;\alpha)(tn^{\alpha})^j$. Now, let us define $d_n^*(t) := e^{tn^{\alpha}}(n-1)!\,t^{-1/\alpha} f_\alpha(t^{-1/\alpha})$. Accordingly, the above right-hand side can be bounded from above by the following quantity:

$$\begin{split} &\frac{1}{2}\sum_{k=1}^{n}\left|\mathcal{C}(n,k;\alpha)\frac{\Gamma(k+\theta/\alpha)}{\alpha\Gamma(\theta/\alpha+1)}\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)} - \int_{0}^{+\infty}\frac{\mathcal{C}(n,k;\alpha)(tn^{\alpha})^{k}}{d_{n}^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right| \\ &\quad + \frac{1}{2}\int_{0}^{+\infty}\frac{|d_{n}^{*}(t)-d_{n}(t)|}{d_{n}^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\,. \end{split}$$

Then, by exploiting the identity $\int_0^{+\infty}\frac{(tn^{\alpha})^k}{d_n^*(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t = \frac{1}{(n-1)!}\,\frac{\Gamma(k+\theta/\alpha)}{n^{\theta}}\,\frac{\Gamma(\theta+1)}{\alpha\Gamma(\theta/\alpha+1)}$, we can write:

$$\sum_{k=1}^{n}\left|\mathcal{C}(n,k;\alpha)\frac{\Gamma(k+\theta/\alpha)}{\alpha\Gamma(\theta/\alpha+1)}\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)} - \int_{0}^{+\infty}\frac{\mathcal{C}(n,k;\alpha)(tn^{\alpha})^{k}}{d_{n}^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right| = \left|1 - \frac{\Gamma(n+\theta)}{\Gamma(n)\,n^{\theta}}\right|,$$

which goes to zero as *n* → +∞ for any *θ* > −*α*, by Stirling's approximation. To show that the integral $\int_0^{+\infty}\frac{|d_n^*(t)-d_n(t)|}{d_n^*(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t$ also goes to zero as *n* → +∞, we resort to identities (13)–(14) of Dolera and Favaro [16], as well as Lemma 3 therein. In particular, let Δ : (0, +∞) → (0, +∞) denote a suitable continuous function, independent of *n*, such that $\Delta(z) = O(1)$ as $z \to 0$ and $\Delta(z)f_\alpha(1/z) = O(z^{-\infty})$ (i.e., decaying faster than any power of $1/z$) as $z \to +\infty$. Then, we write that:

$$\begin{split} &\int_{0}^{+\infty}\frac{|d_{n}^{*}(t)-d_{n}(t)|}{d_{n}^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t \\ &\quad \leq \left|\frac{(n/e)^{n}\sqrt{2\pi n}}{n!} - 1\right| + \left(\frac{(n/e)^{n}\sqrt{2\pi n}}{n!}\right)\frac{1}{n}\int_{0}^{+\infty}\Delta(t^{1/\alpha})\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\,. \end{split}$$

Since $\int_0^{+\infty}\Delta(t^{1/\alpha})\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t < +\infty$ by Lemma 3 of Dolera and Favaro [16], both summands on the above right-hand side go to zero as *n* → +∞, again by Stirling's approximation. Thus, the first summand on the right-hand side of (25) goes to zero as *n* → +∞. As for the second summand on the right-hand side of (25), it can be bounded by

$$\int_0^{+\infty} \mathrm{d}_{TV}(K(\alpha, tn^{\alpha}, n);\, 1 + P_{tn^{\alpha}})\, f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\,.$$

By a dominated convergence argument, this quantity goes to zero as *n* → +∞ as a consequence of (24). Finally, for the third summand on the right-hand side of (25), we resort to a conditioning argument in order to reduce the problem to a direct application of the law of large numbers for renewal processes (Section 10.2, Grimmett and Stirzaker [22]). In particular, this leads to $n^{-\alpha}P_{tn^{\alpha}} \stackrel{a.s.}{\longrightarrow} t$ for any *t* > 0, which entails that $n^{-\alpha}P_{n^{\alpha}S_{\alpha,\theta}} \stackrel{a.s.}{\longrightarrow} S_{\alpha,\theta}$ as *n* → +∞. Thus, this third term also goes to zero as *n* → +∞ and (22) follows.
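The renewal-type law of large numbers invoked here is elementary to check numerically: for a rate-1 Poisson process, $P_\lambda/\lambda \to 1$ as $\lambda \to +\infty$, so $n^{-\alpha}P_{tn^{\alpha}}$ concentrates at *t*. A quick sketch, illustrative only (the parameter values are arbitrary choices of ours):

```python
import numpy as np

# For a rate-1 Poisson process {P_lambda}, P_lambda / lambda -> 1 a.s. as
# lambda -> +infinity; hence n^{-alpha} * P_{t n^alpha} concentrates at t.
rng = np.random.default_rng(1)
alpha, t = 0.5, 2.0
for n in (10**2, 10**4, 10**6):
    p = rng.poisson(t * n**alpha)
    print(n, p / n**alpha)   # approaches t = 2.0 as n grows
```

The printed ratios fluctuate around *t* with shrinking relative error, consistent with $n^{-\alpha}P_{tn^{\alpha}} \to t$.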

Now, we consider (23), showing that it arises by combining (21) with statement (ii) of Theorem 2. In particular, by an obvious conditioning argument, we can write that as *n* → +∞:

$$\frac{K_n(\alpha, \tilde{X}_{\alpha,z,n}|\alpha|)}{\tilde{X}_{\alpha,z,n}} \stackrel{a.s.}{\longrightarrow} 1.$$

At this stage, we consider the probability generating function of $\tilde{X}_{\alpha,z,n}$, and we immediately obtain $\mathsf{E}[s^{\tilde{X}_{\alpha,z,n}}] = B_n(-sz)/B_n(-z)$ for *n* ∈ N and *s* ∈ [0, 1], with the same $B_n$ as in (13) and (14). Therefore, the asymptotic expansion already provided in (15) entails:

$$\frac{\tilde{X}_{\alpha,z,n}}{n^{\frac{-\alpha}{1-\alpha}}} \stackrel{w}{\longrightarrow} \frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha} \tag{26}$$

as *n* → +∞. In particular, (26) follows by applying exactly the same arguments used to prove (8). Now, since:

$$\frac{K_n(\alpha, \tilde{X}_{\alpha,z,n}|\alpha|)}{n^{\frac{-\alpha}{1-\alpha}}} \stackrel{d}{=} \frac{K_n(\alpha, \tilde{X}_{\alpha,z,n}|\alpha|)}{\tilde{X}_{\alpha,z,n}}\,\frac{\tilde{X}_{\alpha,z,n}}{n^{\frac{-\alpha}{1-\alpha}}},$$

the claim follows from a direct application of Slutsky's theorem. This completes the proof.

## **3. Discussion**

The NB-CPSM is a compound Poisson sampling model generalising the popular LS-CPSM. In this paper, we introduced a compound Poisson perspective of the EP-SM in terms of the NB-CPSM, thus extending the well-known compound Poisson perspective of the E-SM in terms of the LS-CPSM. We conjecture that an analogous perspective holds true for the class of *α*-stable Poisson–Kingman sampling models (Pitman [23] and Pitman [5]), of which the EP-SM is a noteworthy special case. That is, for *α* ∈ (0, 1), we conjecture that an *α*-stable Poisson–Kingman sampling model admits a representation as a randomised NB-CPSM with *α* ∈ (0, 1) and *z* > 0, where the randomisation acts on *z* with respect to a scale mixture between a Gamma distribution and a suitable transformation of the Mittag–Leffler distribution. We believe that such a compound Poisson representation would be critical in order to introduce Berry–Esseen type refinements of the large *n* asymptotic behaviour of *Kn* under *α*-stable Poisson–Kingman sampling models; see Section 6.1 of Pitman [23] and the references therein. Such a line of research aims to extend the preliminary works of Dolera and Favaro [16,17] on Berry–Esseen type theorems under the EP-SM. Work on this, and on the more general settings induced by normalised random measures (Regazzini et al. [24]) and Poisson–Kingman models (Pitman [23]), is ongoing.

**Author Contributions:** Formal analysis, E.D. and S.F.; writing—original draft preparation, E.D. and S.F.; writing—review and editing, E.D. and S.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 817257.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank the editor and two anonymous referees for all their comments and suggestions which remarkably improved the original version of the present paper. Emanuele Dolera and Stefano Favaro wish to express their enormous gratitude to Eugenio Regazzini, whose fundamental contributions to the theory of Bayesian statistics have always been a great source of inspiration, transmitting enthusiasm and method for the development of their own research. The authors gratefully acknowledge the financial support from the Italian Ministry of Education, University and Research (MIUR), "Dipartimenti di Eccellenza" grant 2018–2022.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**

