3.2.1. Properties

**Proposition 2** (Expected value of logarithmic duration in the GBS-ACD($r$,$s$) model)**.** *Assuming that the process* $\{X\_i \sim \mathrm{GBS}(\boldsymbol{\theta}\_i, g) : i = 1, 2, \ldots\}$ *is strictly stationary and that* $E[\varepsilon\_i] = \mu$*, where* $\varepsilon\_i$ *is given in* (17)*, we have*

$$E[\ln X\_i] = \frac{2\left[\alpha + \mu\left(1 + \sum\_{j=1}^{r} \beta\_j\right)\right] + (2 + u\_1 \kappa^2)\sum\_{j=1}^{s} \gamma\_j}{2\left(1 - \sum\_{j=1}^{r} \beta\_j\right)}, \quad \forall i,$$

*whenever* $\sum\_{j=1}^{r} \beta\_j \neq 1$. *The constant* $u\_1$*, which depends only on the kernel* $g$*, is given in* (7)*.*
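As a quick numerical illustration, the expectation above is straightforward to evaluate. The sketch below uses the classical normal kernel, for which we take $u\_1 = 1$ (the second moment of a standard normal variable), together with purely hypothetical parameter values for a GBS-ACD(1,1):

```python
# Hypothetical parameter values for a GBS-ACD(1,1); u1 = 1 corresponds to
# the classical normal kernel (second moment of a standard normal variable).
kappa, alpha, beta, gamma, mu, u1 = 0.8, 0.1, 0.6, 0.2, -0.05, 1.0

# Numerator and denominator of the expression in Proposition 2 with r = s = 1.
num = 2 * (alpha + mu * (1 + beta)) + (2 + u1 * kappa**2) * gamma
den = 2 * (1 - beta)
expected_log_duration = num / den
print(expected_log_duration)  # approximately 0.71
```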

**Proposition 3** (Moments of logarithmic duration in the GBS-ACD(1, 1) model)**.** *If the process* $\{X\_i \sim \mathrm{GBS}(\boldsymbol{\theta}\_i, g) : i = 1, 2, \ldots\}$ *is strictly stationary and* $E[\varepsilon\_i] = \mu$*, where* $\varepsilon\_i$ *is given in* (17)*, then*

$$\begin{aligned} \bullet \ E[\ln X\_i] &= \frac{2[\alpha + \mu(1+\beta)] + (2+u\_1\kappa^2)\gamma}{2(1-\beta)}, \quad \beta \neq 1, \\ \bullet \ E[(\ln X\_i)^2] &= \mu(2-\mu) + 2\mu \, E[\ln X\_i] \\ &\quad + \frac{\alpha^2 - 2\alpha\beta\mu + \frac{\gamma^2}{2}(u\_2\kappa^4 + 4u\_1\kappa^2 + 2) + \gamma(2+u\_1\kappa^2)(\alpha-\beta\mu) + \left[2\alpha\beta + \gamma\beta(2+u\_1\kappa^2)\right] E[\ln X\_i]}{1-\beta^2}, \; \beta \neq \pm 1, \end{aligned} \quad \forall i.$$
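As a quick sanity check on these expressions (reading the leading term of the second numerator as $\alpha^2$), set $\beta = \gamma = 0$, so that $\ln \sigma\_i = \alpha$ and, by the first expression, $E[\ln X\_i] = \alpha + \mu$; then

```latex
E[(\ln X_i)^2] = \mu(2-\mu) + 2\mu(\alpha+\mu) + \alpha^2
               = (\alpha+\mu)^2 + 2\mu
               = \bigl(E[\ln X_i]\bigr)^2 + 2\mu ,
```

so the two expressions are internally consistent and, in this degenerate case, jointly imply a log-innovation variance $\mathrm{Var}[\ln \varepsilon\_i] = 2\mu$.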

3.2.2. Estimation

Let $(X\_1, \ldots, X\_n)$ be a sample from $X\_i \sim \mathrm{GBS}(\boldsymbol{\theta}\_i, g)$ for $i = 1, \ldots, n$, and let $\boldsymbol{x} = (x\_1, \ldots, x\_n)$ be the observed durations. Then, the log-likelihood function for $\boldsymbol{\xi} = (\kappa, \alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s)$ is obtained as

$$\ell\_{\rm GBS}(\boldsymbol{\xi}) = \sum\_{i=1}^{n} \left[ \ln\left(\frac{c}{2}\right) - \ln \kappa - \ln \sigma\_i + \ln g\!\left(a^2(x\_i; \boldsymbol{\theta}\_i)\right) + \ln\left( \left(\frac{\sigma\_i}{x\_i}\right)^{1/2} + \left(\frac{\sigma\_i}{x\_i}\right)^{3/2} \right) \right], \tag{18}$$

where the time-varying conditional median $\sigma\_i$ is given as in (16). The maximum-likelihood (ML) estimates can be obtained by maximizing (18), that is, by equating the score vector $\dot{\ell}\_{\rm GBS}(\boldsymbol{\xi})$, which contains the first derivatives of $\ell\_{\rm GBS}(\boldsymbol{\xi})$, to zero, yielding the likelihood equations. These equations must be solved by an iterative procedure for non-linear optimization, such as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method. It can easily be seen that the first-order partial derivatives of $\ell\_{\rm GBS}(\boldsymbol{\xi})$ are

$$\frac{\partial \ell\_{\rm GBS}}{\partial u}(\boldsymbol{\xi}) = \sum\_{i=1}^{n} \left[ \frac{2a(x\_i;\boldsymbol{\theta}\_i)\, g'(a^2(x\_i;\boldsymbol{\theta}\_i))}{g(a^2(x\_i;\boldsymbol{\theta}\_i))} \, \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial u} + \frac{1}{a'(x\_i;\boldsymbol{\theta}\_i)} \, \frac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial u} \right],$$

for each $u \in \{\kappa, \alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$, where

$$\begin{array}{ll} \dfrac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa} = -\dfrac{a(x\_i;\boldsymbol{\theta}\_i)}{\kappa}, & \dfrac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial w} = \delta(x\_i;\boldsymbol{\theta}\_i) \, \dfrac{\partial \sigma\_i}{\partial w}, \\[2ex] \dfrac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa} = -\dfrac{a'(x\_i;\boldsymbol{\theta}\_i)}{\kappa}, & \dfrac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial w} = \Delta(x\_i;\boldsymbol{\theta}\_i) \, \dfrac{\partial \sigma\_i}{\partial w}, \end{array} \quad w \in \{\alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}, \tag{19}$$

with $\delta(x\_i;\boldsymbol{\theta}\_i) = -\sqrt{x\_i}\,(2\kappa\sqrt{\sigma\_i})^{-1}(\sigma\_i^{-1} - x\_i^{-1})$ and $\Delta(x\_i;\boldsymbol{\theta}\_i) = -(4\kappa\sqrt{x\_i\sigma\_i})^{-1}(\sigma\_i^{-1} + x\_i^{-1})$, for $i = 1, \ldots, n$. Here,

$$\begin{split} \frac{\partial \sigma\_i}{\partial \alpha} &= \left( \sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \alpha} - \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \frac{\partial \sigma\_{i-j}}{\partial \alpha} \right) \sigma\_i, \\ \frac{\partial \sigma\_i}{\partial \beta\_l} &= \left( \beta\_l \ln \sigma\_{i-l} + \sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \beta\_l} - \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \frac{\partial \sigma\_{i-j}}{\partial \beta\_l} \right) \sigma\_i, \quad l = 1, \ldots, r, \\ \frac{\partial \sigma\_i}{\partial \gamma\_m} &= \left( \sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \gamma\_m} + \gamma\_m \frac{x\_{i-m}}{\sigma\_{i-m}} - \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \frac{\partial \sigma\_{i-j}}{\partial \gamma\_m} \right) \sigma\_i, \quad m = 1, \ldots, s. \end{split} \tag{20}$$
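For the $(r, s) = (1, 1)$ case, the recursions in (20) are straightforward to implement. The sketch below evaluates $\partial \sigma\_i / \partial \gamma$ (the third line of (20)), assuming hypothetical input arrays `x` and `sigma` (durations and their fitted conditional medians) and the usual convention of zero pre-sample derivatives:

```python
import numpy as np

# Sketch: recursive evaluation of d sigma_i / d gamma -- third line of (20)
# with (r, s) = (1, 1).  The arrays `x` and `sigma` below are hypothetical
# inputs; pre-sample derivatives are initialized to zero (common convention).
def dsigma_dgamma(x, sigma, beta, gamma):
    d = np.zeros(len(x))
    for i in range(1, len(x)):
        d[i] = (beta / sigma[i - 1] * d[i - 1]
                + gamma * x[i - 1] / sigma[i - 1]
                - gamma * x[i - 1] / sigma[i - 1] ** 2 * d[i - 1]) * sigma[i]
    return d

# Toy usage with made-up durations and conditional medians.
x = np.array([1.0, 1.2, 0.9])
sigma = np.array([1.0, 1.1, 1.05])
print(dsigma_dgamma(x, sigma, beta=0.5, gamma=0.2))
```

The derivatives with respect to $\alpha$ and $\beta\_l$ follow the same pattern, with the corresponding forcing term from (20).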

The asymptotic distribution of the ML estimator $\widehat{\boldsymbol{\xi}}$ can be used to perform inference for $\boldsymbol{\xi}$. This estimator is consistent and asymptotically follows a multivariate normal distribution with mean $\boldsymbol{\xi}$ and covariance matrix $\boldsymbol{\Sigma}\_{\boldsymbol{\xi}}$, which may be obtained from the corresponding expected Fisher information matrix $\mathcal{I}(\boldsymbol{\xi})$. Then,

$$\sqrt{n}\left[\widehat{\boldsymbol{\xi}} - \boldsymbol{\xi}\right] \stackrel{\mathcal{D}}{\rightarrow} \mathrm{N}\_{2+r+s}\left(\mathbf{0}, \boldsymbol{\Sigma}\_{\boldsymbol{\xi}} = \mathcal{J}(\boldsymbol{\xi})^{-1}\right),$$

as $n \to \infty$, where $\stackrel{\mathcal{D}}{\rightarrow}$ means "convergence in distribution" and $\mathcal{J}(\boldsymbol{\xi}) = \lim\_{n\to\infty}(1/n)\,\mathcal{I}(\boldsymbol{\xi})$. Notice that $\mathcal{I}(\widehat{\boldsymbol{\xi}})^{-1}$ is a consistent estimator of the asymptotic variance–covariance matrix of $\widehat{\boldsymbol{\xi}}$. Here, we approximate the expected Fisher information matrix by its observed version, obtained from the Hessian matrix $\ddot{\ell}\_{\rm GBS}(\boldsymbol{\xi})$, which contains the second derivatives of $\ell\_{\rm GBS}(\boldsymbol{\xi})$.
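The estimation scheme described above can be sketched numerically. The code below fits a BS-ACD(1,1) model, i.e. the GBS-ACD model with the classical normal kernel ($c = 1/\sqrt{2\pi}$, $g(u) = e^{-u/2}$), by BFGS, and reads approximate standard errors off the inverse-Hessian approximation that BFGS accumulates. The conditional-median recursion used here, $\ln \sigma\_i = \alpha + \beta \ln \sigma\_{i-1} + \gamma x\_{i-1}/\sigma\_{i-1}$, is an assumed log-ACD specification standing in for (16); adapt it to the exact form of (16) as needed.

```python
import numpy as np
from scipy.optimize import minimize

def sigma_path(x, alpha, beta, gamma):
    """Recursively build the conditional medians sigma_i (assumed log-ACD form)."""
    sigma = np.empty_like(x)
    sigma[0] = np.median(x)                       # start-up convention
    for i in range(1, len(x)):
        sigma[i] = np.exp(alpha + beta * np.log(sigma[i - 1])
                          + gamma * x[i - 1] / sigma[i - 1])
    return sigma

def neg_loglik(params, x):
    """Negative of the log-likelihood (18) for the normal kernel."""
    kappa, alpha, beta, gamma = params
    if kappa <= 0:
        return np.inf
    sigma = sigma_path(x, alpha, beta, gamma)
    if not np.all(np.isfinite(sigma)):
        return np.inf
    a = (np.sqrt(x / sigma) - np.sqrt(sigma / x)) / kappa
    ll = (np.log(1.0 / (2.0 * np.sqrt(2.0 * np.pi))) - np.log(kappa)
          - np.log(sigma) - 0.5 * a ** 2
          + np.log((sigma / x) ** 0.5 + (sigma / x) ** 1.5))
    return -np.sum(ll)

# Usage on simulated i.i.d. BS durations (the beta = gamma = 0 special case),
# generated via the standard BS stochastic representation with kappa = 0.5.
rng = np.random.default_rng(1)
z = rng.standard_normal(2000)
kappa0 = 0.5
x = 1.0 + kappa0**2 * z**2 / 2 + kappa0 * z * np.sqrt(1 + kappa0**2 * z**2 / 4)
fit = minimize(neg_loglik, x0=[1.0, 0.0, 0.1, 0.1], args=(x,), method="BFGS")
se = np.sqrt(np.diag(fit.hess_inv))   # rough observed-information SEs
print(fit.x, se)                      # fitted (kappa, alpha, beta, gamma)
```

Note that `fit.hess_inv` is only the quasi-Newton approximation to the inverse observed information; for reporting, a finite-difference Hessian of `neg_loglik` evaluated at the optimum is usually preferred.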

The elements of the Hessian are expressed as follows:

$$\begin{split} \frac{\partial^2 \ell\_{\rm GBS}}{\partial u \, \partial v}(\boldsymbol{\xi}) = \sum\_{i=1}^{n} & \left[ \frac{\partial \Theta(x\_i;\boldsymbol{\theta}\_i)}{\partial v} \, g'(a^2(x\_i;\boldsymbol{\theta}\_i)) + 2\,\Theta(x\_i;\boldsymbol{\theta}\_i)\, a(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial v} \, g''(a^2(x\_i;\boldsymbol{\theta}\_i)) \right. \\ & \left. \; - \frac{1}{\left(a'(x\_i;\boldsymbol{\theta}\_i)\right)^2} \, \frac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial u} \, \frac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial v} + \frac{1}{a'(x\_i;\boldsymbol{\theta}\_i)} \, \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial u \, \partial v} \right], \end{split} \tag{21}$$

for each $u, v \in \{\kappa, \alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$, where

$$\begin{split} \Theta(x\_i;\boldsymbol{\theta}\_i) &= \frac{2a(x\_i;\boldsymbol{\theta}\_i)}{g(a^2(x\_i;\boldsymbol{\theta}\_i))} \, \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial u} \quad \text{and} \\[1ex] \frac{\partial \Theta(x\_i;\boldsymbol{\theta}\_i)}{\partial v} &= \frac{2}{g(a^2(x\_i;\boldsymbol{\theta}\_i))} \left[ \left(1 - \frac{2a^2(x\_i;\boldsymbol{\theta}\_i)\, g'(a^2(x\_i;\boldsymbol{\theta}\_i))}{g(a^2(x\_i;\boldsymbol{\theta}\_i))}\right) \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial u} \, \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial v} + a(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial u \, \partial v} \right]. \end{split}$$

The partial derivatives $\partial a(x\_i;\boldsymbol{\theta}\_i)/\partial u$ and $\partial a'(x\_i;\boldsymbol{\theta}\_i)/\partial u$ are given in (19). Furthermore, the second-order partial derivatives of $a(x\_i;\boldsymbol{\theta}\_i)$ and $a'(x\_i;\boldsymbol{\theta}\_i)$ appearing in (21) are given by

$$\begin{split} \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa^2} &= \frac{2a(x\_i;\boldsymbol{\theta}\_i)}{\kappa^2}, \qquad \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial w^2} = \frac{\sqrt{x\_i}}{4\kappa \sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} - \frac{1}{x\_i}\right) \left(\frac{\partial \sigma\_i}{\partial w}\right)^2 + \delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial w^2}, \\ \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa^2} &= \frac{2a'(x\_i;\boldsymbol{\theta}\_i)}{\kappa^2}, \qquad \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial w^2} = \frac{2\kappa}{\sqrt{x\_i}\,\sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} + \frac{1}{x\_i}\right) \left(\frac{\partial \sigma\_i}{\partial w}\right)^2 + \Delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial w^2}, \end{split}$$

for each $w \in \{\alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$, with $\delta(x\_i;\boldsymbol{\theta}\_i)$ and $\Delta(x\_i;\boldsymbol{\theta}\_i)$ as given above. Here,

$$\begin{split} \frac{\partial^2 \sigma\_i}{\partial \alpha^2} =&\; \left\{ -\sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \left[ \frac{1}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \alpha}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \alpha^2} \right] + \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \left[ \frac{2}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \alpha}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \alpha^2} \right] \right\} \sigma\_i + \frac{1}{\sigma\_i} \left(\frac{\partial \sigma\_i}{\partial \alpha}\right)^2, \\ \frac{\partial^2 \sigma\_i}{\partial \beta\_l^2} =&\; \left\{ \ln \sigma\_{i-l} + \frac{\beta\_l}{\sigma\_{i-l}} \frac{\partial \sigma\_{i-l}}{\partial \beta\_l} - \sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \left[ \frac{1}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \beta\_l}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \beta\_l^2} \right] + \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \left[ \frac{2}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \beta\_l}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \beta\_l^2} \right] \right\} \sigma\_i + \frac{1}{\sigma\_i} \left(\frac{\partial \sigma\_i}{\partial \beta\_l}\right)^2, \quad l = 1, \ldots, r, \\ \frac{\partial^2 \sigma\_i}{\partial \gamma\_m^2} =&\; \left\{ -\sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \left[ \frac{1}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \gamma\_m}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \gamma\_m^2} \right] + \frac{x\_{i-m}}{\sigma\_{i-m}} - \frac{x\_{i-m}}{\sigma\_{i-m}^2} \frac{\partial \sigma\_{i-m}}{\partial \gamma\_m} + \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \left[ \frac{2}{\sigma\_{i-j}} \left(\frac{\partial \sigma\_{i-j}}{\partial \gamma\_m}\right)^2 - \frac{\partial^2 \sigma\_{i-j}}{\partial \gamma\_m^2} \right] \right\} \sigma\_i + \frac{1}{\sigma\_i} \left(\frac{\partial \sigma\_i}{\partial \gamma\_m}\right)^2, \end{split}$$

for $m = 1, \ldots, s$ and $i = 1, \ldots, n$.

Note that the functions $a(x\_i;\boldsymbol{\theta}\_i)$ and $a'(x\_i;\boldsymbol{\theta}\_i)$ have continuous second-order partial derivatives at a given point $\boldsymbol{\theta}\_i \in \mathbb{R}^4$, $i = 1, \ldots, n$. Then, by Schwarz's theorem, the order of partial differentiation of these functions can be interchanged at that point, that is, $\partial^2 a(x\_i;\boldsymbol{\theta}\_i)/\partial u \, \partial v = \partial^2 a(x\_i;\boldsymbol{\theta}\_i)/\partial v \, \partial u$ and $\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)/\partial u \, \partial v = \partial^2 a'(x\_i;\boldsymbol{\theta}\_i)/\partial v \, \partial u$, for each $u \neq v \in \{\kappa, \alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$. With this in mind, the mixed partial derivatives of $a(x\_i;\boldsymbol{\theta}\_i)$ and $a'(x\_i;\boldsymbol{\theta}\_i)$ in (21) have the following form:

$$\begin{split} \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa \, \partial w\_1} &= -\frac{1}{\kappa} \frac{\partial a(x\_i;\boldsymbol{\theta}\_i)}{\partial w\_1}, \quad \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial \alpha \, \partial w\_2} = \frac{\sqrt{x\_i}}{4\kappa \sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} - \frac{1}{x\_i}\right) \frac{\partial \sigma\_i}{\partial \alpha} \frac{\partial \sigma\_i}{\partial w\_2} + \delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial \alpha \, \partial w\_2}, \\ \frac{\partial^2 a(x\_i;\boldsymbol{\theta}\_i)}{\partial \beta\_l \, \partial \gamma\_m} &= \frac{\sqrt{x\_i}}{4\kappa \sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} - \frac{1}{x\_i}\right) \frac{\partial \sigma\_i}{\partial \beta\_l} \frac{\partial \sigma\_i}{\partial \gamma\_m} + \delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial \beta\_l \, \partial \gamma\_m}, \\ \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial \kappa \, \partial w\_1} &= -\frac{1}{\kappa} \frac{\partial a'(x\_i;\boldsymbol{\theta}\_i)}{\partial w\_1}, \quad \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial \alpha \, \partial w\_2} = \frac{2\kappa}{\sqrt{x\_i}\,\sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} + \frac{1}{x\_i}\right) \frac{\partial \sigma\_i}{\partial \alpha} \frac{\partial \sigma\_i}{\partial w\_2} + \Delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial \alpha \, \partial w\_2}, \\ \frac{\partial^2 a'(x\_i;\boldsymbol{\theta}\_i)}{\partial \beta\_l \, \partial \gamma\_m} &= \frac{2\kappa}{\sqrt{x\_i}\,\sigma\_i^{3/2}} \left(\frac{1}{\sigma\_i} + \frac{1}{x\_i}\right) \frac{\partial \sigma\_i}{\partial \beta\_l} \frac{\partial \sigma\_i}{\partial \gamma\_m} + \Delta(x\_i;\boldsymbol{\theta}\_i) \, \frac{\partial^2 \sigma\_i}{\partial \beta\_l \, \partial \gamma\_m}, \end{split}$$

for each $w\_1 \in \{\alpha, \beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$ and $w\_2 \in \{\beta\_1, \ldots, \beta\_r, \gamma\_1, \ldots, \gamma\_s\}$, with $l = 1, \ldots, r$ and $m = 1, \ldots, s$, where $\delta(x\_i;\boldsymbol{\theta}\_i)$ and $\Delta(x\_i;\boldsymbol{\theta}\_i)$ are as before. In the above identities, the mixed partial derivatives $\partial^2 \sigma\_i / \partial \alpha \, \partial w\_2$ and $\partial^2 \sigma\_i / \partial \beta\_l \, \partial \gamma\_m$, respectively, are given by

$$\begin{split} \frac{\partial^2 \sigma\_i}{\partial \alpha \, \partial w\_2} =&\; \left\{ -\sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \left[ \frac{1}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \alpha} \frac{\partial \sigma\_{i-j}}{\partial w\_2} - \frac{\partial^2 \sigma\_{i-j}}{\partial \alpha \, \partial w\_2} \right] + \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \left[ \frac{2}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \alpha} \frac{\partial \sigma\_{i-j}}{\partial w\_2} - \frac{\partial^2 \sigma\_{i-j}}{\partial \alpha \, \partial w\_2} \right] \right\} \sigma\_i + \frac{1}{\sigma\_i} \frac{\partial \sigma\_i}{\partial \alpha} \frac{\partial \sigma\_i}{\partial w\_2}, \\ \frac{\partial^2 \sigma\_i}{\partial \beta\_l \, \partial \gamma\_m} =&\; \left\{ \frac{\beta\_l}{\sigma\_{i-l}} \frac{\partial \sigma\_{i-l}}{\partial \gamma\_m} - \sum\_{j=1}^{r} \frac{\beta\_j}{\sigma\_{i-j}} \left[ \frac{1}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \beta\_l} \frac{\partial \sigma\_{i-j}}{\partial \gamma\_m} - \frac{\partial^2 \sigma\_{i-j}}{\partial \beta\_l \, \partial \gamma\_m} \right] + \sum\_{j=1}^{s} \frac{\gamma\_j x\_{i-j}}{\sigma\_{i-j}^2} \left[ \frac{2}{\sigma\_{i-j}} \frac{\partial \sigma\_{i-j}}{\partial \beta\_l} \frac{\partial \sigma\_{i-j}}{\partial \gamma\_m} - \frac{\partial^2 \sigma\_{i-j}}{\partial \beta\_l \, \partial \gamma\_m} \right] \right\} \sigma\_i + \frac{1}{\sigma\_i} \frac{\partial \sigma\_i}{\partial \beta\_l} \frac{\partial \sigma\_i}{\partial \gamma\_m}, \end{split}$$

for $l = 1, \ldots, r$; $m = 1, \ldots, s$; and $i = 1, \ldots, n$.
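The symmetry invoked above via Schwarz's theorem is also easy to confirm numerically: a central finite-difference estimate of a mixed second-order partial derivative agrees with the analytic mixed partial. A toy illustration on a smooth stand-in function (not the model's $a(x\_i;\boldsymbol{\theta}\_i)$, just a hypothetical example):

```python
import math

# Numerical illustration of Schwarz's theorem on a smooth toy function:
# the central-difference estimate of the mixed second-order partial
# matches the analytic mixed partial d^2 f / (du dv).
f = lambda u, v: math.exp(u * v) * math.sin(v)
u0, v0, h = 0.3, 0.7, 1e-5

mixed_fd = (f(u0 + h, v0 + h) - f(u0 + h, v0 - h)
            - f(u0 - h, v0 + h) + f(u0 - h, v0 - h)) / (4 * h * h)
# Analytic mixed partial: d/dv [v * e^{uv} * sin(v)]
analytic = math.exp(u0 * v0) * (math.sin(v0) * (1 + u0 * v0) + v0 * math.cos(v0))
print(mixed_fd, analytic)
```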
