**2. Moments**

Let $X \sim \mathrm{APE}(\alpha, \beta)$ and $Y \sim \mathrm{APE}(\alpha, 1)$. Then, we have $X = Y/\beta$. Thus, we have $\mathrm{E}(X) = \frac{1}{\beta}\mathrm{E}(Y)$ and $\mathrm{E}(X^2) = \frac{1}{\beta^2}\mathrm{E}(Y^2)$.

**Lemma 1.** *We have*

$$\mathrm{E}(\mathsf{Y}) = \frac{\alpha}{\alpha - 1} \{ \log \log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma \},$$

*where* $\Gamma$ *is the upper incomplete gamma function defined by* $\Gamma(s, x) = \int_x^{\infty} t^{s-1} e^{-t}\,\mathrm{d}t$*, and* $\gamma$ *is the Euler–Mascheroni constant given by* $\gamma \approx 0.5772156649$.

**Proof.** First, we derive $\mathrm{E}(Y)$. The PDF of Y is given by

$$f(y) = \frac{\alpha \log(\alpha)}{\alpha - 1}\, e^{-y}\, \alpha^{-e^{-y}} = \frac{\alpha \log(\alpha)}{\alpha - 1}\, e^{-y}\, e^{-\log(\alpha) e^{-y}}.$$

Using the change of variable $t = e^{-y}\log\alpha$, we have

$$\mathrm{E}(Y) = \int_0^{\infty} y\, f(y)\,\mathrm{d}y = \frac{\alpha}{\alpha - 1}\int_0^{\log\alpha} (\log\log(\alpha) - \log(t))\, e^{-t}\,\mathrm{d}t \tag{5}$$

$$= \log\log(\alpha) - \frac{\alpha}{\alpha - 1}\int_0^{\log(\alpha)} \log(t)\, e^{-t}\,\mathrm{d}t.$$

Then, it suffices to obtain $\int_0^{\log(\alpha)} e^{-t}\log(t)\,\mathrm{d}t$. It should be noted that $\gamma = -\int_0^{\infty} e^{-t}\log(t)\,\mathrm{d}t$; for more details, refer to Identity (6) of Ref. [41]. Thus, we have

$$\int_0^{\log(\alpha)} e^{-t}\log(t)\,\mathrm{d}t = \int_0^{\infty} e^{-t}\log(t)\,\mathrm{d}t - \int_{\log\alpha}^{\infty} e^{-t}\log(t)\,\mathrm{d}t = -\gamma - \int_{\log\alpha}^{\infty} e^{-t}\log(t)\,\mathrm{d}t. \tag{6}$$

Using integration by parts, we have

$$\int_{\log(\alpha)}^{\infty} e^{-t}\log(t)\,\mathrm{d}t = \left[-\log(t)\, e^{-t}\right]_{\log(\alpha)}^{\infty} + \int_{\log(\alpha)}^{\infty} \frac{1}{t}\, e^{-t}\,\mathrm{d}t = \frac{\log\log(\alpha)}{\alpha} + \Gamma(0, \log(\alpha)). \tag{7}$$

Substituting (7) into (6), we have

$$\int_0^{\log(\alpha)} e^{-t}\log(t)\,\mathrm{d}t = -\gamma - \frac{\log\log(\alpha)}{\alpha} - \Gamma(0, \log(\alpha)). \tag{8}$$

Substituting (8) into (5), we have

$$\mathrm{E}(Y) = \int_0^{\infty} y\, f(y)\,\mathrm{d}y = \log\log(\alpha) + \frac{\alpha}{\alpha - 1}\left\{\gamma + \frac{\log\log(\alpha)}{\alpha} + \Gamma(0, \log(\alpha))\right\} = \frac{\alpha}{\alpha - 1}\{\log\log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma\},$$

which completes the proof. □

Thus, we have

$$\mathrm{E}(\mathsf{X}) = \frac{1}{\beta} \cdot \frac{\alpha}{\alpha - 1} \{ \log \log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma \}.$$
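As a sanity check on Lemma 1, the pure-Python sketch below (the helper names `gamma_inc_0`, `mean_Y`, `pdf_Y`, and `simpson` are ours, not from the paper; `gamma_inc_0` uses the standard alternating series for $E_1(x) = \Gamma(0, x)$) compares the closed form for $\mathrm{E}(Y)$ against direct numerical integration of $\int_0^{\infty} y f(y)\,\mathrm{d}y$:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gamma_inc_0(x, terms=200):
    """Upper incomplete gamma Gamma(0, x) = E1(x), computed from the series
    E1(x) = -gamma - log(x) + sum_{k>=1} (-1)^(k+1) x^k / (k * k!),
    which is adequate for the moderate x = log(alpha) used here."""
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= -x / k          # term = (-x)^k / k!
        s -= term / k           # adds (-1)^(k+1) x^k / (k * k!)
    return -EULER_GAMMA - math.log(x) + s

def mean_Y(alpha):
    """Closed form of Lemma 1 for E(Y), Y ~ APE(alpha, 1)."""
    la = math.log(alpha)
    return alpha / (alpha - 1) * (math.log(la) + gamma_inc_0(la) + EULER_GAMMA)

def pdf_Y(y, alpha):
    """PDF of Y ~ APE(alpha, 1), as given in the proof of Lemma 1."""
    la = math.log(alpha)
    return alpha * la / (alpha - 1) * math.exp(-y) * alpha ** (-math.exp(-y))

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with an even number n of panels."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

alpha = 5.0
numeric = simpson(lambda y: y * pdf_Y(y, alpha), 1e-10, 60.0)
closed = mean_Y(alpha)
print(numeric, closed)  # the two values agree to many decimal places
```

The integrand decays like $y e^{-y}$, so truncating the upper limit at 60 costs far less than the quadrature error.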

**Lemma 2.** *We have*

$$\mathrm{E}\left(\mathbf{Y}^2\right) = \frac{2\alpha \log(\alpha)}{\alpha - 1} \cdot\_3 F\_3(1, 1, 1; 2, 2, 2; -\log \alpha).$$

*where* ${}_pF_q(\cdot)$ *is the generalized hypergeometric function [42,43] defined as*

$${}_pF_q\left(a_1,\ldots,a_p; b_1,\ldots,b_q; z\right) = \sum_{n=0}^{\infty}\frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n}\frac{z^n}{n!},$$

*and* $(a)_n$ *is the Pochhammer symbol for the rising factorial, defined as* $(a)_0 = 1$ *and* $(a)_n = a(a+1)\cdots(a+n-1)$ *for* $n = 1, 2, \ldots$.

**Proof.** Using the change of variable $t = e^{-y}\log\alpha$, we have

$$\mathrm{E}\left(Y^2\right) = \int_0^{\infty} y^2 f(y)\,\mathrm{d}y = \frac{\alpha}{\alpha-1}\int_0^{\log\alpha} (\log\log\alpha - \log t)^2 e^{-t}\,\mathrm{d}t = \frac{\alpha}{\alpha-1}\left[(\log\log\alpha)^2\int_0^{\log\alpha} e^{-t}\,\mathrm{d}t - 2\log\log\alpha\int_0^{\log\alpha}\log t\; e^{-t}\,\mathrm{d}t + \int_0^{\log\alpha}(\log t)^2 e^{-t}\,\mathrm{d}t\right].$$

Since $\int_0^{\log(\alpha)} e^{-t}\,\mathrm{d}t = 1 - 1/\alpha$ and $\int_0^{\log(\alpha)} \log(t)\, e^{-t}\,\mathrm{d}t = -\gamma - \log\log(\alpha)/\alpha - \Gamma(0, \log(\alpha))$ from (8), we have

$$\begin{aligned}
\mathrm{E}\left(Y^2\right) &= \frac{\alpha}{\alpha-1}\left[(\log\log(\alpha))^2\left(1-\frac{1}{\alpha}\right) - 2\log\log(\alpha)\left\{-\gamma - \frac{1}{\alpha}\log\log(\alpha) - \Gamma(0, \log(\alpha))\right\} + \int_0^{\log\alpha}(\log(t))^2 e^{-t}\,\mathrm{d}t\right] \\
&= \frac{\alpha}{\alpha-1}\left[(\log\log(\alpha))^2\left(1+\frac{1}{\alpha}\right) + 2\log\log(\alpha)\{\gamma + \Gamma(0, \log(\alpha))\} + \int_0^{\log\alpha}(\log(t))^2 e^{-t}\,\mathrm{d}t\right].
\end{aligned}\tag{9}$$

Now, it suffices to evaluate $\int_0^{\log(\alpha)}(\log(t))^2 e^{-t}\,\mathrm{d}t$, the last term in (9). After tedious calculus and algebra, with the help of Mathematica [44], we have

$$\int (\log(t))^2 e^{-t}\,\mathrm{d}t = 2t\cdot{}_3F_3(1,1,1;2,2,2;-t) - \log(t)\left\{\left(1 + e^{-t}\right)\log(t) + 2\Gamma(0, t) + 2\gamma\right\},$$

along with $\int_0^1 (\log(t))^2 e^{-t}\,\mathrm{d}t = 2\,{}_3F_3(1,1,1;2,2,2;-1)$. Thus, we have

$$\begin{aligned}
\int_0^{\log(\alpha)} (\log(t))^2 e^{-t}\,\mathrm{d}t &= \int_0^1 (\log(t))^2 e^{-t}\,\mathrm{d}t + \int_1^{\log(\alpha)} (\log(t))^2 e^{-t}\,\mathrm{d}t \\
&= 2\log(\alpha)\cdot{}_3F_3(1,1,1;2,2,2;-\log(\alpha)) - \log\log(\alpha)\left\{\left(1+\frac{1}{\alpha}\right)\log\log(\alpha) + 2\Gamma(0, \log(\alpha)) + 2\gamma\right\} \\
&= 2\log(\alpha)\cdot{}_3F_3(1,1,1;2,2,2;-\log(\alpha)) - (\log\log(\alpha))^2\left(1+\frac{1}{\alpha}\right) - 2\log\log(\alpha)\{\Gamma(0, \log(\alpha)) + \gamma\}.
\end{aligned}$$

Substituting the above into (9), we have

$$\mathrm{E}\left(\mathbf{Y}^2\right) = \frac{2\alpha \log(\alpha)}{\alpha - 1} \cdot\_3 \mathrm{F}\_3(1, 1, 1; 2, 2, 2; -\log(\alpha))$$

which completes the proof. □
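Lemma 2 can be checked the same way. The particular ${}_3F_3$ here reduces to a rapidly convergent series, since $(1)_n = n!$ and $(2)_n = (n+1)!$ give ${}_3F_3(1,1,1;2,2,2;z) = \sum_{n\ge 0} z^n/((n+1)^3\, n!)$. A pure-Python sketch (helper names ours):

```python
import math

def hyp3f3_unit(z, terms=80):
    """3F3(1,1,1; 2,2,2; z) = sum_{n>=0} z^n / ((n+1)^3 * n!),
    since (1)_n = n! and (2)_n = (n+1)!."""
    s, term = 0.0, 1.0          # term = z^n / n!
    for n in range(terms):
        s += term / (n + 1) ** 3
        term *= z / (n + 1)
    return s

def second_moment_Y(alpha):
    """Closed form of Lemma 2 for E(Y^2), Y ~ APE(alpha, 1)."""
    la = math.log(alpha)
    return 2 * alpha * la / (alpha - 1) * hyp3f3_unit(-la)

def pdf_Y(y, alpha):
    """PDF of Y ~ APE(alpha, 1) from Section 2."""
    la = math.log(alpha)
    return alpha * la / (alpha - 1) * math.exp(-y) * alpha ** (-math.exp(-y))

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b]."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

alpha = 5.0
numeric = simpson(lambda y: y * y * pdf_Y(y, alpha), 1e-10, 60.0)
closed = second_moment_Y(alpha)
print(numeric, closed)  # agreement validates the 3F3 closed form numerically
```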

Then, using Lemmas 1 and 2, we have

$$\begin{aligned}
\mathrm{E}(X) &= \frac{1}{\beta}\cdot\frac{\alpha}{\alpha - 1}\{\log\log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma\}, \\
\mathrm{E}\left(X^2\right) &= \frac{2\alpha\log(\alpha)}{\beta^2(\alpha - 1)}\cdot{}_3F_3(1,1,1;2,2,2;-\log(\alpha)).
\end{aligned}\tag{10}$$

$$\mathrm{Var}(X) = \frac{1}{\beta^2}\left[\frac{2\alpha\log(\alpha)}{\alpha - 1}\cdot{}_3F_3(1,1,1;2,2,2;-\log(\alpha)) - \frac{\alpha^2}{(\alpha - 1)^2}\left\{\log\log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma\right\}^2\right].$$

Using $\mathrm{E}(X)$ and $\mathrm{E}(X^2)$ above, we can obtain the method-of-moments estimates as follows. We set

$$\frac{\mathrm{E}(X^2)}{\{\mathrm{E}(X)\}^2} = \frac{2(\alpha - 1)\log(\alpha)\cdot{}_3F_3(1,1,1;2,2,2;-\log(\alpha))}{\alpha\{\log\log(\alpha) + \Gamma(0, \log(\alpha)) + \gamma\}^2} = \frac{\frac{1}{n}\sum_{i=1}^{n} X_i^2}{\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)^2}.$$


Thus, by solving the above equation for $\alpha$, we can estimate $\alpha$; we denote this estimate by $\hat{\alpha}$. The estimate of $\beta$ can then be obtained explicitly by setting $\mathrm{E}(X) = \frac{1}{n}\sum_{i=1}^{n} X_i$ in (10), which gives

$$\hat{\beta} = \frac{1}{\frac{1}{n}\sum_{i=1}^{n} X_i}\cdot\frac{\hat{\alpha}}{\hat{\alpha} - 1}\{\log\log(\hat{\alpha}) + \Gamma(0, \log(\hat{\alpha})) + \gamma\}.$$
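The method-of-moments procedure can be sketched end to end in pure Python. In the sketch below all helper names are ours; it samples APE variates by inverting the CDF $F(x) = (\alpha^{1-e^{-\beta x}} - 1)/(\alpha - 1)$ implied by the PDF in Section 2, and it assumes the moment ratio is monotone (decreasing) in $\alpha$ on the search bracket, which holds numerically for the values tried here:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def gamma_inc_0(x, terms=200):
    # Gamma(0, x) = E1(x) via its alternating series (fine for moderate x)
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= -x / k
        s -= term / k
    return -EULER_GAMMA - math.log(x) + s

def hyp3f3_unit(z, terms=120):
    # 3F3(1,1,1; 2,2,2; z) = sum_n z^n / ((n+1)^3 * n!)
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term / (n + 1) ** 3
        term *= z / (n + 1)
    return s

def moment_ratio(alpha):
    # E(X^2) / {E(X)}^2, which is free of beta
    la = math.log(alpha)
    num = 2 * (alpha - 1) * la * hyp3f3_unit(-la)
    den = alpha * (math.log(la) + gamma_inc_0(la) + EULER_GAMMA) ** 2
    return num / den

def rand_ape(alpha, beta, rng):
    # Inverse-CDF sampling from F(x) = (alpha**(1 - e^{-beta x}) - 1)/(alpha - 1)
    u = rng.random()
    return -math.log(1 - math.log(1 + u * (alpha - 1)) / math.log(alpha)) / beta

rng = random.Random(1)
alpha_true, beta_true, n = 5.0, 2.0, 50000
xs = [rand_ape(alpha_true, beta_true, rng) for _ in range(n)]
m1 = sum(xs) / n
m2 = sum(x * x for x in xs) / n
target = m2 / m1 ** 2

# Solve moment_ratio(alpha) = target by bisection on a log scale, assuming
# the ratio decreases in alpha over the bracket.
lo, hi = 1.0 + 1e-6, 1.0e6
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if moment_ratio(mid) > target:
        lo = mid
    else:
        hi = mid
alpha_hat = math.sqrt(lo * hi)
la = math.log(alpha_hat)
beta_hat = (alpha_hat / (alpha_hat - 1)
            * (math.log(la) + gamma_inc_0(la) + EULER_GAMMA) / m1)
print(alpha_hat, beta_hat)  # estimates of (alpha, beta)
```

With 50,000 simulated observations the estimates should be close to the true $(5.0,\ 2.0)$, up to sampling noise.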

Figure 1 shows the skewness (SK) and kurtosis (KT), computed using quartile-based moment measures, for different values of the parameters. Table 1 reports the first quartile, median, and third quartile, as well as the SK and KT of the APE distribution, for different parameter values.

**Figure 1.** Three-dimensional plot of skewness and kurtosis with different values of parameters.


**Table 1.** Different measures of the moment by different values of parameters.

### **3. Estimation in the Classical Style**

In this section, classical point and interval estimation methods are discussed: maximum likelihood estimation for obtaining point estimates of R, and asymptotic, boot-p, and boot-t intervals for obtaining interval estimates of R.

### *Maximum Likelihood R Estimation*

Let $X \sim \mathrm{APE}(\beta_1, \alpha)$, $Y \sim \mathrm{APE}(\beta_2, \alpha)$, and $Z \sim \mathrm{APE}(\beta_3, \alpha)$ be independent random variables. Assuming that $\alpha$ is known, in this case we have

$$R = P(X < Y < Z) = \int_{-\infty}^{\infty} F_X(y)\,\mathrm{d}F_Y(y) - \int_{-\infty}^{\infty} F_X(y)\, F_Z(y)\,\mathrm{d}F_Y(y) \tag{11}$$

$$= \frac{\beta_1\left[2\beta_1^2 + 2\beta_2^2 + 0.5\beta_3^2 + 1.5\beta_1\beta_3 + 3\beta_1\beta_2\right]}{(\beta_1 + \beta_2)(\beta_1 + 2\beta_2)(\beta_1 + \beta_3)(2\beta_1 + \beta_3)}.$$
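Because the reduction from (11) to the closed form above is stated without intermediate steps, it is useful to be able to evaluate the defining integrals directly. The sketch below (helper names ours; the APE CDF used is the one implied by the PDF of Section 2) computes $R$ from (11) by Simpson's rule:

```python
import math

def ape_cdf(x, beta, alpha):
    # CDF implied by the APE PDF: F(x) = (alpha**(1 - e^{-beta x}) - 1)/(alpha - 1)
    return (alpha ** (1 - math.exp(-beta * x)) - 1) / (alpha - 1)

def ape_pdf(x, beta, alpha):
    # Matching density f(x) = F'(x)
    la = math.log(alpha)
    return (beta * la / (alpha - 1) * math.exp(-beta * x)
            * alpha ** (1 - math.exp(-beta * x)))

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule on [a, b]
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def reliability(beta1, beta2, beta3, alpha, upper=60.0):
    # R = P(X < Y < Z): the two integrals of Equation (11); the APE support
    # is (0, inf), so the lower limits become 0 and the tail is truncated.
    g1 = simpson(lambda y: ape_cdf(y, beta1, alpha) * ape_pdf(y, beta2, alpha),
                 0.0, upper)
    g2 = simpson(lambda y: ape_cdf(y, beta1, alpha) * ape_cdf(y, beta3, alpha)
                 * ape_pdf(y, beta2, alpha), 0.0, upper)
    return g1 - g2

R_num = reliability(1.0, 1.5, 2.0, 5.0)
print(R_num)  # a probability, so it lies in (0, 1)
```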

Figure 2 shows different plots of the multi-stress–strength reliability for different values of the parameters, illustrating the range of values it can take.

**Figure 2.** Three-dimensional plot of multi-stress–strength reliability with different values of parameters.

To obtain the MLE of $R$, we must first obtain the MLEs of $\beta_1$, $\beta_2$, and $\beta_3$. Let $X_{1;m_1,n_1,k_1},\ldots,X_{m_1;m_1,n_1,k_1}$, $Y_{1;m_2,n_2,k_2},\ldots,Y_{m_2;m_2,n_2,k_2}$, and $Z_{1;m_3,n_3,k_3},\ldots,Z_{m_3;m_3,n_3,k_3}$ be three progressively first-failure censored samples from the $\mathrm{APE}(\beta_i, \alpha)$ distribution with censoring schemes $R_x = (R_{x_1},\ldots,R_{x_{m_1}})$, $R_y = (R_{y_1},\ldots,R_{y_{m_2}})$, and $R_z = (R_{z_1},\ldots,R_{z_{m_3}})$. Using the formulas from (2) and (3), the likelihood function of $\beta_1$, $\beta_2$, and $\beta_3$ is given by

$$\begin{aligned}
L(\beta_1, \beta_2, \beta_3) \propto{}& \prod_{j=1}^{3}\left(\beta_j k_j\right)^{m_j}\left(\frac{\log\alpha}{\alpha - 1}\right)^{m_1 + m_2 + m_3}\prod_{i=1}^{m_1} e^{-\beta_1 x_i(k_1 R_{x_i} + 1)}\,\alpha^{1 - e^{-\beta_1 x_i(k_1 R_{x_i} + 1)}} \\
&\times\prod_{i=1}^{m_2} e^{-\beta_2 y_i(k_2 R_{y_i} + 1)}\,\alpha^{1 - e^{-\beta_2 y_i(k_2 R_{y_i} + 1)}}\prod_{i=1}^{m_3} e^{-\beta_3 z_i(k_3 R_{z_i} + 1)}\,\alpha^{1 - e^{-\beta_3 z_i(k_3 R_{z_i} + 1)}}.
\end{aligned}\tag{12}$$

We use $x_i$ instead of $X_{i;m_1,n_1,k_1}$ to simplify notation, and similarly $y_i$ and $z_i$. The log-likelihood function can now be written as follows:

$$\begin{aligned}
\ell(\beta_1, \beta_2, \beta_3) ={}& \sum_{j=1}^{3} m_j\left(\ln k_j + \ln\beta_j\right) + (m_1 + m_2 + m_3)\left[\ln(\log\alpha) - \ln(\alpha - 1)\right] \\
&- \sum_{i=1}^{m_1}\beta_1 x_i(k_1 R_{x_i} + 1) + \sum_{i=1}^{m_1}\left(1 - e^{-\beta_1 x_i(k_1 R_{x_i} + 1)}\right)\ln(\alpha) \\
&- \sum_{i=1}^{m_2}\beta_2 y_i(k_2 R_{y_i} + 1) + \sum_{i=1}^{m_2}\left(1 - e^{-\beta_2 y_i(k_2 R_{y_i} + 1)}\right)\ln(\alpha) \\
&- \sum_{i=1}^{m_3}\beta_3 z_i(k_3 R_{z_i} + 1) + \sum_{i=1}^{m_3}\left(1 - e^{-\beta_3 z_i(k_3 R_{z_i} + 1)}\right)\ln(\alpha).
\end{aligned}\tag{13}$$

Taking the derivative of (13) with respect to $\beta_1$, $\beta_2$, and $\beta_3$, we obtain

$$\frac{\partial\ell}{\partial\beta_1} = \frac{m_1}{\beta_1} - \sum_{i=1}^{m_1} x_i(k_1 R_{x_i} + 1) + \ln(\alpha)\sum_{i=1}^{m_1} x_i(k_1 R_{x_i} + 1)\, e^{-\beta_1 x_i(k_1 R_{x_i} + 1)},\tag{14}$$

$$\frac{\partial\ell}{\partial\beta_2} = \frac{m_2}{\beta_2} - \sum_{i=1}^{m_2} y_i(k_2 R_{y_i} + 1) + \ln(\alpha)\sum_{i=1}^{m_2} y_i(k_2 R_{y_i} + 1)\, e^{-\beta_2 y_i(k_2 R_{y_i} + 1)},\tag{15}$$

$$\frac{\partial\ell}{\partial\beta_3} = \frac{m_3}{\beta_3} - \sum_{i=1}^{m_3} z_i(k_3 R_{z_i} + 1) + \ln(\alpha)\sum_{i=1}^{m_3} z_i(k_3 R_{z_i} + 1)\, e^{-\beta_3 z_i(k_3 R_{z_i} + 1)}.\tag{16}$$

It is noted that the MLEs of $\beta_1$, $\beta_2$, and $\beta_3$ cannot be found in closed form. Numerical solutions to the nonlinear system of Equations (14)–(16) can instead be found using an iterative approach, such as Newton–Raphson, which yields the MLEs $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$. The MLE of $R$ is then obtained by replacing $\beta_1$, $\beta_2$, and $\beta_3$ in (11) with $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$, as follows:

$$\hat{R} = \frac{\hat{\beta}_1\left[2\hat{\beta}_1^2 + 2\hat{\beta}_2^2 + 0.5\hat{\beta}_3^2 + 1.5\hat{\beta}_1\hat{\beta}_3 + 3\hat{\beta}_1\hat{\beta}_2\right]}{\left(\hat{\beta}_1 + \hat{\beta}_2\right)\left(\hat{\beta}_1 + 2\hat{\beta}_2\right)\left(\hat{\beta}_1 + \hat{\beta}_3\right)\left(2\hat{\beta}_1 + \hat{\beta}_3\right)}.\tag{17}$$
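The score equations decouple: each $\beta_j$ solves a one-dimensional equation. A minimal numerical sketch of this step for $\hat{\beta}_1$ follows (helper names ours; bisection is used as a robust stand-in for Newton–Raphson, since for $\alpha > 1$ the score in (14) is strictly decreasing in $\beta_1$):

```python
import math

def score_beta1(beta, xs, Rx, k1, alpha):
    # d(log-likelihood)/d(beta_1), Equation (14)
    la = math.log(alpha)
    s = len(xs) / beta
    for x, r in zip(xs, Rx):
        c = x * (k1 * r + 1)
        s += -c + la * c * math.exp(-beta * c)
    return s

def mle_beta1(xs, Rx, k1, alpha, lo=1e-8, hi=1e8, iters=200):
    # For alpha > 1 the score decreases strictly in beta, going from +inf
    # (beta -> 0) to a negative limit, so a sign change on (lo, hi) brackets
    # the unique root; solve by bisection on a log scale.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if score_beta1(mid, xs, Rx, k1, alpha) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Toy complete-sample case: no progressive removals (R_i = 0) and k1 = 1.
xs = [0.3, 0.8, 1.1, 0.5, 2.0, 0.9]
Rx = [0] * len(xs)
beta1_hat = mle_beta1(xs, Rx, k1=1, alpha=5.0)
print(beta1_hat, score_beta1(beta1_hat, xs, Rx, 1, 5.0))  # score ~ 0 at the MLE
```

Solving the analogous equations for $\hat{\beta}_2$ and $\hat{\beta}_3$ and plugging the three estimates into (17) gives $\hat{R}$.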
