*Article* **Optimal Test Plan of Step Stress Partially Accelerated Life Testing for Alpha Power Inverse Weibull Distribution under Adaptive Progressive Hybrid Censored Data and Different Loss Functions**

**Refah Alotaibi <sup>1</sup>, Ehab M. Almetwally <sup>2,3</sup>, Qiuchen Hai <sup>4</sup> and Hoda Rezk <sup>5,</sup>\***


**Abstract:** Accelerated life tests are used to explore the lifetime of extremely reliable items by subjecting them to elevated levels of stressors, such as temperature, voltage, and pressure, to cause early failures. The alpha power inverse Weibull (APIW) distribution is of great practical significance due to its appealing characteristics, such as the flexibility of its probability density function and hazard rate function. We analyze the step-stress partially accelerated life testing model with samples from the APIW distribution under adaptive Type-II progressive hybrid censoring. We first obtain the maximum likelihood estimates and two types of approximate confidence intervals of the distributional parameters and then derive Bayes estimates of the unknown parameters under different loss functions. Furthermore, we analyze several optimality criteria for identifying the best censoring scheme. We conduct simulation studies to assess the finite-sample performance of the proposed methodology. Finally, we provide a real-data example to further illustrate the proposed technique.

**Keywords:** the alpha power inverse Weibull distribution; step stress partially accelerated life testing; adaptive progressive hybrid censored data; loss functions

**MSC:** 65C20; 60E05; 62P30; 62L15

#### **1. Introduction**

Product reliability has grown greatly in the present era of technological achievement due to ongoing efforts to improve manufacturing processes in various industries. Under the pressure of high competition to launch products within a short time period, direct use of traditional life-testing methodologies is an expensive and time-consuming way to evaluate the lifetime of a product and predict its failures. As a result, accelerated life tests (ALTs) are usually employed to explore the lifetime of extremely reliable products, as the products can be subjected to elevated levels of stressors, such as temperature, voltage (electric field), and pressure, to trigger early failures. Both constant-stress and step-stress models in the ALTs have been studied in life testing and reliability analyses; see, for example, [1–4].

**Citation:** Alotaibi, R.; Almetwally, E.M.; Hai, Q.; Rezk, H. Optimal Test Plan of Step Stress Partially Accelerated Life Testing for Alpha Power Inverse Weibull Distribution under Adaptive Progressive Hybrid Censored Data and Different Loss Functions. *Mathematics* **2022**, *10*, 4652. https://doi.org/10.3390/math10244652

Academic Editors: Alexandru Agapie, Denis Enachescu, Vlad Stefan Barbu and Bogdan Iftimie

Received: 4 October 2022; Accepted: 3 December 2022; Published: 8 December 2022

It is known that each product sample in the ALTs is typically analyzed under a constant-stress scenario, subjected to a continuous amount of constant stress until all units fail or the test is terminated for some reason, such as a censoring plan. However, the test conditions associated with step-stress models do not remain constant throughout the tests, since the stress on a sample of test units may be increased step by step at prescribed time points or when a fixed number of failures occurs. In addition, the ALTs often rely on a suitable physical model to extrapolate the collected failure information under accelerated settings, whereas it is difficult to select a proper physical model to describe the life–stress relationship in practical situations [5]. To overcome these drawbacks of ALTs, researchers may employ partially accelerated life testing (PALT), which is classified into two types: constant-stress loading and step-stress loading. In the constant-stress PALT (CSPALT), each sample of tested items is subjected to normal and accelerated levels of constant stress until all units fail or the test is terminated; see, for example, [6,7]. In the step-stress PALT (SSPALT), the items are initially tested under normal (use) conditions for a predetermined amount of time before being subjected to accelerated test conditions until the termination time; see, for example, [8–10].

In life-testing and reliability trials, data are commonly censored due to time and cost constraints. The hybrid censoring scheme [11], which includes Type-I and Type-II censoring as special cases, is commonly utilized in reliability analysis. We refer the interested reader to [12] for a nice overview of hybrid censoring. However, the hybrid censoring scheme lacks an option to remove units during the testing period due to time and cost constraints. To address this issue, progressive censoring schemes were developed, which allow the removal of experimental units at various time points throughout the test; see, for example, [13,14] for details. It is worth pointing out that in progressive Type-II hybrid censoring, the number of required failures and the numbers of items to be removed are determined in advance, whereas there is no time constraint on the experiment, which may lead to a very long test duration.

To address this issue, [15] proposed the Type-I progressive hybrid censoring scheme (T-I PHCS), which adds a joint time and failure constraint: the experiment runs until a predetermined time point or a predetermined number of failures, whichever comes first. However, since the effective sample size in the T-I PHCS is random, only a few or even no failures may occur before the pre-specified time limit, resulting in poor efficiency of the parameter estimation. The authors of [16] proposed an adaptive Type-II PHCS (AT-II PHCS), in which *n* units are placed on a life test with a predetermined number of failures *m* and a pre-fixed progressive censoring scheme *ε*1, *ε*2, ... , *εm*, but the experimenter is allowed to change some of the *εi*'s during the experiment depending on the situation. At the first failure time, *z*1:*m*:*n*, *ε*1 units are randomly selected from the remaining *n* − 1 surviving items and removed from the experiment. At the second failure time, *z*2:*m*:*n*, *ε*2 units of the remaining *n* − 2 − *ε*1 units are removed at random, and so on. If the *m*-th failure time, *zm*:*m*:*n*, occurs before the predetermined time *δ*, all the remaining $\varepsilon_m = n - m - \sum_{i=1}^{m-1}\varepsilon_i$ units are removed and the experiment terminates at time *zm*:*m*:*n*. The AT-II PHCS also allows the experiment to run past the time restriction *δ*. If *zm*:*m*:*n* > *δ*, the experiment is brought to an end as soon as possible by setting *εc*+1 = *εc*+2 = ... = *εm*−1 = 0. That is, if *zc*:*m*:*n* < *δ* < *zc*+1:*m*:*n* with *c* + 1 < *m*, where *zc*:*m*:*n* is the *c*-th failure time occurring before *δ*, no surviving item is removed from the experiment until the effective sample of *m* failures is attained, and the remaining $\varepsilon_m = n - m - \sum_{i=1}^{c}\varepsilon_i$ units are then removed.
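The adjustment rule above is straightforward to state in code. The following is a minimal sketch (the function name and interface are ours, not from the paper): removals follow the planned scheme while failures occur before *δ*, are set to zero afterwards, and all survivors are withdrawn at the *m*-th failure.

```python
def implemented_scheme(planned, failure_times, delta, n):
    """Effective removal scheme under AT-II PHCS (sketch).

    planned       : planned removals (eps_1, ..., eps_m)
    failure_times : the m observed ordered failure times
    delta         : the pre-fixed time threshold
    n             : total number of units on test
    """
    m = len(planned)
    eps = [0] * m
    # Removals at the first m-1 failures: follow the plan only before delta.
    for i, t in enumerate(failure_times[:-1]):
        if t < delta:
            eps[i] = planned[i]
        # after delta: eps[i] stays 0 (no withdrawals)
    # At the m-th failure, all survivors are removed.
    eps[m - 1] = n - m - sum(eps[:m - 1])
    return eps
```

For example, with *n* = 10, *m* = 5, a planned scheme (1, 1, 1, 1, 1), and *δ* between the third and fourth failure, the implemented scheme becomes (1, 1, 1, 0, 2), matching the description above.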

Due to the importance of the AT-II PHCS, numerous authors have investigated the problem of parameter estimation in different statistical models based on this censoring scheme; see, for example, Refs. [17–19] for the Weibull distribution, Ref. [20] for the lognormal distribution, Ref. [21] for the exponentiated Weibull distribution, Refs. [22,23] for the extended Weibull distribution, Ref. [24] for the Burr Type-XII distribution, Ref. [25] for the exponentiated Pareto distribution, Ref. [26] for the inverted NH distribution, Ref. [27] for the Weibull generalized exponential distribution, Ref. [28] for the exponentiated exponential distribution, Ref. [29] for the exponentiated power Lindley distribution, and the references cited therein. To the best of our knowledge, little research attention has been devoted to the alpha power inverse Weibull (APIW) distribution [30]. This observation motivates us to investigate statistical inference of the APIW distribution under the AT-II PHCS.

Due to flexibilities in its probability density function (PDF) and hazard rate function (HRF), the APIW distribution has become a useful model in the study of life testing and reliability analyses. The cumulative distribution function (CDF), PDF, survival function (SF), and HRF for an APIW random variable T are given by

$$F(t; \alpha, \beta, \theta) = \frac{\alpha^{e^{-\beta t^{-\theta}}} - 1}{\alpha - 1}, \alpha, \beta, \theta, t > 0,\tag{1}$$

$$f(t;\alpha,\beta,\theta) = \frac{\log(\alpha)\,\theta\beta\, e^{-\beta t^{-\theta}}\, t^{-\theta-1}\,\alpha^{e^{-\beta t^{-\theta}}}}{\alpha-1},\ \alpha,\beta,\theta,t>0,\tag{2}$$

$$S(t;\alpha,\beta,\theta) = \frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta t^{-\theta}}-1}\right),\ \alpha,\beta,\theta,t>0,\tag{3}$$

and

$$h(t;\alpha,\beta,\theta) = \frac{\log(\alpha)\,\theta\beta\, e^{-\beta t^{-\theta}}\, t^{-\theta-1}\,\alpha^{e^{-\beta t^{-\theta}}-1}}{1-\alpha^{e^{-\beta t^{-\theta}}-1}},\ \alpha,\beta,\theta,t>0,\tag{4}$$

respectively, where *α* > 0 and *θ* > 0 are the shape parameters and *β* > 0 is a scale parameter. This distribution includes many well-known distributions as special cases, such as the alpha power Fréchet, alpha power inverse Rayleigh, alpha power inverse exponential, inverse Weibull, Fréchet, inverse Rayleigh, and the inverse exponential distributions. In addition, it has closed-form expressions of the SF and HRF, which make the distribution a good alternative to commonly used distributions in life-testing analysis.
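For concreteness, the four functions (1)–(4) can be sketched in a few lines of Python (the function names are ours, not from the paper); note that the SF and CDF sum to one and the HRF is the ratio of the PDF to the SF:

```python
import math

def apiw_cdf(t, alpha, beta, theta):
    # Equation (1)
    return (alpha ** math.exp(-beta * t ** (-theta)) - 1) / (alpha - 1)

def apiw_pdf(t, alpha, beta, theta):
    # Equation (2)
    w = math.exp(-beta * t ** (-theta))
    return (math.log(alpha) * theta * beta * w * t ** (-theta - 1)
            * alpha ** w) / (alpha - 1)

def apiw_sf(t, alpha, beta, theta):
    # Equation (3): equals 1 - apiw_cdf(t, ...)
    return alpha / (alpha - 1) * (1 - alpha ** (math.exp(-beta * t ** (-theta)) - 1))

def apiw_hrf(t, alpha, beta, theta):
    # Equation (4): hazard = pdf / sf
    return apiw_pdf(t, alpha, beta, theta) / apiw_sf(t, alpha, beta, theta)
```

A quick numerical check, e.g. comparing the PDF against a finite difference of the CDF, confirms the internal consistency of (1)–(4).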

In this paper, we analyze the step-stress partially accelerated life testing model with samples from the APIW distribution under the AT-II PHCS. We first consider the maximum likelihood estimates (MLEs) and derive asymptotic and bootstrap confidence intervals of the model parameters. We then propose Bayes estimates of the unknown parameters with noninformative and informative priors under symmetric and asymmetric loss functions. In addition, we identify the progressive censoring scheme that provides the most information about the unknown parameters among all conceivable progressive censoring schemes. Numerical results from simulation studies and a real-data application show that the performance of the proposed technique is quite satisfactory for analyzing censored data under different sampling schemes.

The rest of this paper is organized as follows. Section 2 describes the lifetime model and the test assumptions. Section 3 derives the MLEs of the APIW parameters under the AT-II PHCS. Section 4 constructs the confidence intervals of the unknown parameters. Bayesian analysis of the unknown parameters is provided in Section 5. We carry out simulations in Section 6 to investigate the finite sample performance of the proposed model. In Section 7, a real-data example is provided for illustrative purposes. Finally, concluding remarks are provided in Section 8 with Fisher information of the model deferred to the Appendix A.

#### **2. Assumptions and Procedure for Testing**

Suppose that in a simple SSPALT, the test employs only two stress levels, *Su* (normal operating conditions) and *Sa* (accelerated conditions), such that *Su* < *Sa*. Under each stress level, at least one failure should occur. We assume that at both stress levels, the lifetimes of the test items follow the APIW distribution in (2). Then, the lifetime *Z* of a test item follows the tampered random variable (TRV) model given by

$$Z = \begin{cases} T, & \text{if } T \le \tau, \\ \tau + \frac{T-\tau}{\lambda}, & \text{if } T > \tau, \end{cases}$$

where *T* indicates the lifetime of an item under the use stress level, *τ* represents the time point at which the stress is switched from *Su* to *Sa*, and *λ* > 1 is the acceleration factor (AF). Then, under the TRV model, we obtain the PDF, CDF, and SF of *Z* at the use stress level as follows:

$$f_u(z;\alpha,\beta,\theta) = \frac{\log(\alpha)\,\theta\beta\, e^{-\beta z^{-\theta}}\, z^{-\theta-1}\,\alpha^{e^{-\beta z^{-\theta}}}}{\alpha-1},\tag{5}$$

$$F_u(z;\alpha,\beta,\theta) = \frac{\alpha^{e^{-\beta z^{-\theta}}}-1}{\alpha-1},\tag{6}$$

$$S_u(z;\alpha,\beta,\theta) = \frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta z^{-\theta}}-1}\right).\tag{7}$$

Now, at the accelerated stress level *Sa*, the PDF, CDF, and SF of *Z* are given as follows:

$$f_a(z;\alpha,\beta,\theta,\lambda) = \frac{\log(\alpha)\,\theta\beta\, e^{-\beta[\tau+\lambda(z-\tau)]^{-\theta}}\,[\tau+\lambda(z-\tau)]^{-\theta-1}\,\alpha^{e^{-\beta[\tau+\lambda(z-\tau)]^{-\theta}}}}{\alpha-1},\tag{8}$$

$$F_a(z;\alpha,\beta,\theta,\lambda) = \frac{\alpha^{e^{-\beta[\tau+\lambda(z-\tau)]^{-\theta}}}-1}{\alpha-1},\tag{9}$$

$$S_a(z;\alpha,\beta,\theta,\lambda) = \frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta[\tau+\lambda(z-\tau)]^{-\theta}}-1}\right),\tag{10}$$

where *α*, *β*, *θ*, *z* > 0 and *λ* > 1. We assume that a sample of *n* items is initially placed on test at the stress level *Su* according to the SSPALT with a known progressive censoring scheme *ε*1, *ε*2, ... , *εm*. The items that do not fail by the switching time *τ* are moved to the accelerated stress level *Sa*, and the test continues until the censoring rule is met. If the *m*-th failure does not occur before the censoring time *δ*, no item is removed from the test after *δ*; the testing continues until the *m*-th failure is registered, at which point all remaining items are withdrawn and the test terminates. As a result, the implemented scheme is *ε*1, *ε*2, ... , *εc*, 0, 0, ... , 0, *εm*. Thus, we obtain the observed sample

$$z_{1:m:n} < z_{2:m:n} < \ldots < z_{m_u:m:n} < \tau < z_{m_u+1:m:n} < \ldots < z_{c:m:n} < \delta < z_{c+1:m:n} < \ldots < z_{m:m:n},$$

illustrated in Figure 1.

**Figure 1.** Illustration of the AT-II PHCS scheme.

We observe from this figure that the AT-II PHCS reduces to the conventional Type-II censoring scheme as *δ* → 0 and to the classical progressive Type-II censoring scheme as *δ* → ∞.

#### **3. The Parameter Estimation**

The resulting likelihood function of the data under the AT-II PHCS is given by

$$L(z;\alpha,\beta,\theta,\lambda) \propto \prod_{i=1}^{m_u} f_u(z_i)\,[S_u(z_i)]^{\varepsilon_i}\, \prod_{i=m_u+1}^{m} f_a(z_i)\,[S_a(z_i)]^{\varepsilon_i}\,[S_a(z_m)]^{\varepsilon_m},\tag{11}$$

where $z_i = z_{i:m:n}$, $m_u$ denotes the number of failures observed before the switching time $\tau$, and $\varepsilon_m = n - m - \sum_{i=1}^{c}\varepsilon_i$. It then follows that

$$
\begin{aligned}
L(z;\alpha,\beta,\theta,\lambda) \propto{}& \prod_{i=1}^{m_u}\left\{\frac{\log(\alpha)\,\theta\beta\, e^{-\beta z_i^{-\theta}} z_i^{-\theta-1}\alpha^{e^{-\beta z_i^{-\theta}}}}{\alpha-1}\left[\frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta z_i^{-\theta}}-1}\right)\right]^{\varepsilon_i}\right\} \\
&\times\prod_{i=m_u+1}^{m}\left\{\frac{\log(\alpha)\,\theta\beta\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}[\tau+\lambda(z_i-\tau)]^{-\theta-1}\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}}}{\alpha-1}\left[\frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}\right)\right]^{\varepsilon_i}\right\} \\
&\times\left[\frac{\alpha}{\alpha-1}\left(1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}\right)\right]^{\varepsilon_m}.
\end{aligned}\tag{12}
$$

The MLE method is commonly used to estimate the unknown parameters, as it efficiently yields estimates with good statistical properties. Taking the natural logarithm of both sides of Equation (12), we obtain the log-likelihood function $\ell = \log L(z;\alpha,\beta,\theta,\lambda)$ as follows:

$$
\begin{aligned}
\ell ={}& m\log(\log(\alpha)) - m\log(\alpha-1) + m\log(\theta) + m\log(\beta) - \beta\sum_{i=1}^{m_u} z_i^{-\theta} - (\theta+1)\sum_{i=1}^{m_u}\log(z_i) + \log(\alpha)\sum_{i=1}^{m_u} e^{-\beta z_i^{-\theta}} \\
&- \beta\sum_{i=m_u+1}^{m}[\tau+\lambda(z_i-\tau)]^{-\theta} - (\theta+1)\sum_{i=m_u+1}^{m}\log[\tau+\lambda(z_i-\tau)] + \log(\alpha)\sum_{i=m_u+1}^{m} e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}} \\
&+ \left(\sum_{i=1}^{m}\varepsilon_i + \varepsilon_m\right)\left[\log(\alpha)-\log(\alpha-1)\right] + \sum_{i=1}^{m_u}\varepsilon_i\log\left(1-\alpha^{e^{-\beta z_i^{-\theta}}-1}\right) \\
&+ \sum_{i=m_u+1}^{m}\varepsilon_i\log\left(1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}\right) + \varepsilon_m\log\left(1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}\right).
\end{aligned}\tag{13}
$$

The MLEs of the parameters *α*, *θ*, *β*, and *λ* can be obtained by solving the following system of nonlinear equations:

$$
\begin{aligned}
\frac{\partial\ell}{\partial\alpha} ={}& \frac{m}{\alpha\log(\alpha)} - \frac{m}{\alpha-1} + \frac{1}{\alpha}\sum_{i=1}^{m_u} e^{-\beta z_i^{-\theta}} + \frac{1}{\alpha}\sum_{i=m_u+1}^{m} e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}} + \left(\frac{1}{\alpha}-\frac{1}{\alpha-1}\right)\left(\sum_{i=1}^{m}\varepsilon_i + \varepsilon_m\right) \\
&- \sum_{i=1}^{m_u}\varepsilon_i\frac{\left(e^{-\beta z_i^{-\theta}}-1\right)\alpha^{e^{-\beta z_i^{-\theta}}-2}}{1-\alpha^{e^{-\beta z_i^{-\theta}}-1}} - \sum_{i=m_u+1}^{m}\varepsilon_i\frac{\left(e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1\right)\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-2}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}} \\
&- \varepsilon_m\frac{\left(e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1\right)\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-2}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}},
\end{aligned}\tag{14}
$$

$$
\begin{aligned}
\frac{\partial\ell}{\partial\theta} ={}& \frac{m}{\theta} - \sum_{i=1}^{m_u}\log(z_i) - \sum_{i=m_u+1}^{m}\log[\tau+\lambda(z_i-\tau)] + \beta\sum_{i=1}^{m_u} z_i^{-\theta}\log(z_i)\left(1+\log(\alpha)\, e^{-\beta z_i^{-\theta}}\right) \\
&+ \beta\sum_{i=m_u+1}^{m}[\tau+\lambda(z_i-\tau)]^{-\theta}\log[\tau+\lambda(z_i-\tau)]\left(1+\log(\alpha)\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\right) \\
&- \beta\log(\alpha)\sum_{i=1}^{m_u}\varepsilon_i\frac{z_i^{-\theta}\log(z_i)\, e^{-\beta z_i^{-\theta}}\,\alpha^{e^{-\beta z_i^{-\theta}}-1}}{1-\alpha^{e^{-\beta z_i^{-\theta}}-1}} \\
&- \beta\log(\alpha)\sum_{i=m_u+1}^{m}\varepsilon_i\frac{[\tau+\lambda(z_i-\tau)]^{-\theta}\log[\tau+\lambda(z_i-\tau)]\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}} \\
&- \beta\log(\alpha)\,\varepsilon_m\frac{[\tau+\lambda(z_m-\tau)]^{-\theta}\log[\tau+\lambda(z_m-\tau)]\, e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}},
\end{aligned}\tag{15}
$$

$$
\begin{aligned}
\frac{\partial\ell}{\partial\beta} ={}& \frac{m}{\beta} - \sum_{i=1}^{m_u} z_i^{-\theta}\left(1+\log(\alpha)\, e^{-\beta z_i^{-\theta}}\right) - \sum_{i=m_u+1}^{m}[\tau+\lambda(z_i-\tau)]^{-\theta}\left(1+\log(\alpha)\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\right) \\
&+ \log(\alpha)\sum_{i=1}^{m_u}\varepsilon_i\frac{z_i^{-\theta}\, e^{-\beta z_i^{-\theta}}\,\alpha^{e^{-\beta z_i^{-\theta}}-1}}{1-\alpha^{e^{-\beta z_i^{-\theta}}-1}} + \log(\alpha)\sum_{i=m_u+1}^{m}\varepsilon_i\frac{[\tau+\lambda(z_i-\tau)]^{-\theta}\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}} \\
&+ \log(\alpha)\,\varepsilon_m\frac{[\tau+\lambda(z_m-\tau)]^{-\theta}\, e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}},
\end{aligned}\tag{16}
$$

$$
\begin{aligned}
\frac{\partial\ell}{\partial\lambda} ={}& \beta\theta\sum_{i=m_u+1}^{m}(z_i-\tau)[\tau+\lambda(z_i-\tau)]^{-\theta-1}\left(1+\log(\alpha)\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\right) - (\theta+1)\sum_{i=m_u+1}^{m}\frac{z_i-\tau}{\tau+\lambda(z_i-\tau)} \\
&- \beta\theta\log(\alpha)\sum_{i=m_u+1}^{m}\varepsilon_i\frac{(z_i-\tau)[\tau+\lambda(z_i-\tau)]^{-\theta-1}\, e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_i-\tau)]^{-\theta}}-1}} \\
&- \beta\theta\log(\alpha)\,\varepsilon_m\frac{(z_m-\tau)[\tau+\lambda(z_m-\tau)]^{-\theta-1}\, e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}\,\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}}{1-\alpha^{e^{-\beta[\tau+\lambda(z_m-\tau)]^{-\theta}}-1}}.
\end{aligned}\tag{17}
$$

We observe that it is difficult to get closed-form solutions of the parameters from the above nonlinear equations. As a result, we employ an iterative approach, such as Newton–Raphson, to find numerical solutions to the nonlinear systems.
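As an illustration of the Newton–Raphson step, the sketch below (our code, not the authors') solves the scalar score equation for the simpler special case *θ* = 1, *α* → 1 (the inverse exponential distribution) with complete data, where the closed-form MLE $\hat\beta = n/\sum_i t_i^{-1}$ is available as a check. In the full four-parameter censored problem, the scalar score and its derivative are replaced by the gradient and Hessian of (13).

```python
def newton_mle_beta(times, beta0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the score equation d l/d beta = n/beta - sum(1/t_i) = 0
    (inverse-exponential special case, complete sample)."""
    n = len(times)
    s = sum(1.0 / t for t in times)
    beta = beta0
    for _ in range(max_iter):
        score = n / beta - s          # first derivative of the log-likelihood
        hess = -n / beta ** 2         # second derivative
        step = score / hess
        beta -= step                  # Newton update
        if abs(step) < tol:
            break
    return beta
```

With data (0.5, 1.0, 2.0, 4.0), the iteration converges to the exact value 4/3.75 in a handful of steps.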

#### **4. Confidence Intervals**

A confidence interval (CI) is a range of values that serves as a plausible estimate of an unknown population characteristic (e.g., [31]). We consider two types of CIs for the unknown parameters as follows.

#### *4.1. Approximate Confidence Intervals*

According to large-sample theory, the MLEs are consistent and asymptotically normally distributed under certain regularity conditions. More specifically, the asymptotic distribution of the MLEs of *α*, *θ*, *β*, and *λ* is given by $\left(\hat\alpha-\alpha,\ \hat\theta-\theta,\ \hat\beta-\beta,\ \hat\lambda-\lambda\right) \sim N(\mathbf{0},\Sigma)$, where $\Sigma = (\sigma_{ij})$, $i,j = 1,2,3,4$, is the variance–covariance matrix of the unknown parameters. The inverse of the Fisher information matrix provides an estimate of the variance–covariance matrix. The estimated 100(1 − *π*)% two-sided CIs for the unknown parameters are given by

$$(\hat\omega_{iL},\ \hat\omega_{iU}) = \hat\omega_i \mp z_{1-\frac{\pi}{2}}\sqrt{\hat\sigma_{ii}},\ i = 1,2,3,4,$$

where $z_{1-\pi/2}$ is the $(1-\pi/2)$-th quantile of the standard normal distribution and $\hat\omega_1 = \hat\alpha$, $\hat\omega_2 = \hat\theta$, $\hat\omega_3 = \hat\beta$, $\hat\omega_4 = \hat\lambda$. However, the above asymptotic CIs may not perform well due to the asymmetry of the APIW distribution. To deal with this issue, we consider parametric bootstrap percentile intervals as an alternative [32].
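A Wald-type interval of this form can be computed directly from an MLE and the corresponding diagonal entry of the inverse observed information matrix. A minimal sketch (function name is ours):

```python
import math
from statistics import NormalDist

def wald_ci(estimate, var, pi=0.05):
    """100(1-pi)% two-sided asymptotic CI: estimate -/+ z_{1-pi/2} * sqrt(var)."""
    z = NormalDist().inv_cdf(1 - pi / 2)   # standard normal quantile z_{1-pi/2}
    half = z * math.sqrt(var)
    return estimate - half, estimate + half
```

For instance, an estimate of 2.0 with estimated variance 0.25 yields an interval of half-width roughly 1.96 × 0.5 at the 95% level.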

#### *4.2. Bootstrap Confidence Intervals*

We consider the parametric bootstrap sampling with percentile intervals, which can be implemented using Algorithm 1.

#### **Algorithm 1. Bootstrap**


1. Compute the MLEs $\hat\alpha$, $\hat\theta$, $\hat\beta$, and $\hat\lambda$ from the original AT-II PHCS sample.
2. Generate a bootstrap sample $z_1^*, \ldots, z_m^*$ from the APIW distribution with parameters $(\hat\alpha, \hat\theta, \hat\beta, \hat\lambda)$ under the same censoring scheme and switching time $\tau$.
3. Compute the bootstrap MLEs $\hat\alpha^*$, $\hat\theta^*$, $\hat\beta^*$, and $\hat\lambda^*$ from the bootstrap sample.
4. Repeat Steps 2 and 3 $G$ times.
5. Sort each sequence of bootstrap estimates in ascending order to obtain $\left(\hat\alpha^{*[1]}, \hat\alpha^{*[2]}, \ldots, \hat\alpha^{*[G]}\right)$, $\left(\hat\theta^{*[1]}, \hat\theta^{*[2]}, \ldots, \hat\theta^{*[G]}\right)$, $\left(\hat\beta^{*[1]}, \hat\beta^{*[2]}, \ldots, \hat\beta^{*[G]}\right)$, and $\left(\hat\lambda^{*[1]}, \hat\lambda^{*[2]}, \ldots, \hat\lambda^{*[G]}\right)$.

The 100(1 − *ω*)% percentile bootstrap CIs for the unknown parameter are computed as follows

$$(\hat\omega_{iL},\ \hat\omega_{iU}) = \left(\hat\omega_i^{*\left[\frac{\omega}{2}G\right]},\ \hat\omega_i^{*\left[\left(1-\frac{\omega}{2}\right)G\right]}\right),\ i = 1,2,3,4,$$

where $\hat\omega_1^* = \hat\alpha^*$, $\hat\omega_2^* = \hat\theta^*$, $\hat\omega_3^* = \hat\beta^*$, and $\hat\omega_4^* = \hat\lambda^*$.
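The percentile rule amounts to sorting the *G* bootstrap replicates and reading off the ω/2 and 1 − ω/2 empirical quantiles. A sketch (function name and the ceiling-based index convention are ours):

```python
import math

def percentile_ci(boot_estimates, omega=0.05):
    """100(1-omega)% percentile bootstrap CI from G bootstrap replicates."""
    g = len(boot_estimates)
    s = sorted(boot_estimates)
    lo = s[max(0, math.ceil(omega / 2 * g) - 1)]          # (omega/2)G-th order statistic
    hi = s[min(g - 1, math.ceil((1 - omega / 2) * g) - 1)]  # (1-omega/2)G-th order statistic
    return lo, hi
```

With G = 100 replicates equal to 1, ..., 100 and ω = 0.05, the rule returns the 3rd and 98th order statistics.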

#### **5. Bayesian Estimation**

In this section, we focus on Bayes estimation of the unknown parameters. Bayesian analysis begins with prior specifications for the unknown parameters. In this paper, we assume that the parameters *α*, *θ*, *β*, and *λ* are statistically independent and follow gamma prior distributions, denoted by *gamma*(*aj*, *bj*); *j* = 1, ... , 4, respectively. The joint prior for the APIW parameters can be written as

$$\varphi(\alpha,\theta,\beta,\lambda) \propto \alpha^{a_1-1}e^{-b_1\alpha}\,\theta^{a_2-1}e^{-b_2\theta}\,\beta^{a_3-1}e^{-b_3\beta}\,\lambda^{a_4-1}e^{-b_4\lambda},\tag{18}$$

where *aj* ≥ 0 and *bj* ≥ 0; *j* = 1, ... , 4 are pre-determined hyperparameters that reflect prior knowledge of the unknown parameters. The resulting joint posterior distribution of the unknown parameters is given by

$$\pi(\alpha,\theta,\beta,\lambda\,|\,\underline{z}) \propto \varphi(\alpha,\theta,\beta,\lambda)\,L(z;\alpha,\beta,\theta,\lambda),\tag{19}$$

which is not available in closed form. Thus, we employ Markov chain Monte Carlo (MCMC) methods to generate posterior samples of the unknown parameters for making posterior inference. In particular, the acquired samples are also used to approximate the Bayes estimates and to obtain the corresponding highest posterior density (HPD) credible intervals for the unknown parameters [33]. In this paper, we obtain the Bayes estimates of the unknown parameters under the symmetric (SLF) and asymmetric (ELF) loss functions. The SLF is given by

$$\ell(\alpha,\tilde\alpha) = (\tilde\alpha-\alpha)^2,\ \ell\left(\theta,\tilde\theta\right) = \left(\tilde\theta-\theta\right)^2,\ \ell\left(\beta,\tilde\beta\right) = \left(\tilde\beta-\beta\right)^2,\ \ell\left(\lambda,\tilde\lambda\right) = \left(\tilde\lambda-\lambda\right)^2,\tag{20}$$

where $\tilde\alpha$, $\tilde\theta$, $\tilde\beta$, and $\tilde\lambda$ denote the corresponding Bayes estimates, given by the posterior means of *α*, *θ*, *β*, and *λ*, respectively.
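Posterior sampling from (19) is typically carried out with a random-walk Metropolis step. Below is a minimal self-contained sketch (our code, not the authors'); for illustration, a toy standard-normal log-density stands in for the actual log of the posterior (19):

```python
import math
import random

def metropolis(log_post, start, n_draws, step=1.0, seed=1):
    """Random-walk Metropolis sampler for a scalar log-density (sketch)."""
    rng = random.Random(seed)
    draws, x = [], start
    lp = log_post(x)
    for _ in range(n_draws):
        prop = x + rng.gauss(0.0, step)              # symmetric random-walk proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        draws.append(x)
    return draws

# Toy target: standard normal log-density (in place of the log of (19)).
draws = metropolis(lambda v: -0.5 * v * v, 0.0, 20000)
```

In practice one runs one such chain per parameter (or a blocked multivariate version), discards a burn-in period, and forms the Bayes estimates and HPD intervals from the retained draws.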

The generalized entropy (GE) loss, an asymmetric loss function, is a simple generalization of the entropy loss, which corresponds to the shape parameter *q* equal to 1; it is given by

$$\ell(\omega,\tilde{\omega}) \propto \left(\frac{\tilde{\omega}}{\omega}\right)^q - q \ln\left(\frac{\tilde{\omega}}{\omega}\right) - 1, \ q \neq 1,\tag{21}$$

where $\tilde\omega$ denotes the corresponding estimate of *ω*, given by

$$
\tilde{\omega}\_{GE} = \left[ E\_{\omega} \left( \omega^{-q} \right) \right]^{\frac{-1}{q}}, \tag{22}
$$

assuming that $E_\omega(\omega^{-q})$ exists and is finite, where $E_\omega$ denotes the posterior expectation [34]. It should be emphasized that other loss functions can easily be substituted in the same way.
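Given posterior draws from MCMC, the GE estimate (22) is a plain Monte Carlo average; a sketch (function name is ours):

```python
def ge_estimate(draws, q):
    """Bayes estimate under GE loss: [E(omega^{-q})]^{-1/q}, approximated
    by the sample mean of omega^{-q} over the posterior draws (q != 0)."""
    mean_inv = sum(d ** (-q) for d in draws) / len(draws)
    return mean_inv ** (-1.0 / q)
```

For example, with draws (1, 2, 4) and *q* = 2, the estimate is $[(1 + 1/4 + 1/16)/3]^{-1/2} = 0.4375^{-1/2}$.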

#### **6. Optimization Criterion**

There has been a lot of interest in identifying the best censoring scheme in recent years; see [35–39]. For given values of *n* and *m* determined by the samples under test, the possible censoring schemes are all combinations of *R*1, ... , *Rm*. We are interested in selecting the best sampling technique, which entails identifying the progressive censoring scheme that provides the most information about the unknown parameters among all conceivable progressive censoring schemes. The first challenge is to determine a way to quantify the information about the unknown parameters carried by specific progressive censoring data, and the second is to compare two information measures based on two different progressive censoring schemes. Here, we provide some optimality criteria. We choose the censoring method that provides the most information about the unknown parameters. Table 1 lists a variety of commonly used measures, *Ci*, for selecting the proper progressive censoring strategy.

**Table 1.** Some practical censoring plan optimum criteria.


We are interested in maximizing the observed Fisher information $I_{4\times4}(\cdot)$ for *C*1. For criteria *C*2 and *C*3, we minimize the determinant and trace of $[I_{4\times4}(\cdot)]^{-1}$, respectively. Comparing multiple criteria is simple when dealing with a single-parameter distribution; however, comparing two Fisher information matrices becomes difficult for multiparameter distributions, because *C*2 and *C*3 are not scale invariant. As a result, we also consider the logarithm of the estimated *p*-th quantile $\hat t_p$ of the APIW distribution, given by

$$\log\left(\hat t_p\right) = \log\left\{\frac{-1}{\hat\beta}\log\left[\frac{\log(1+p(\hat\alpha-1))}{\log\hat\alpha}\right]\right\}^{\frac{-1}{\hat\theta}},\ 0<p<1.\tag{23}$$

We apply the delta method to (23) to obtain the approximate variance of $\log(\hat t_p)$ as

$$\operatorname{Var}\left(\log\left(\hat t_p\right)\right) = \left[\nabla\log\left(\hat t_p\right)\right]^T I_{4\times4}^{-1}\left(\hat\alpha,\hat\theta,\hat\beta,\hat\lambda\right)\left[\nabla\log\left(\hat t_p\right)\right],$$

where

$$\left[\nabla \log\left(\hat{t}_p\right)\right]^T = \left[\frac{\partial}{\partial \alpha} \log\left(\hat{t}_p\right), \frac{\partial}{\partial \theta} \log\left(\hat{t}_p\right), \frac{\partial}{\partial \beta} \log\left(\hat{t}_p\right), \frac{\partial}{\partial \lambda} \log\left(\hat{t}_p\right)\right]_{\left(\alpha=\hat{\alpha},\ \theta=\hat{\theta},\ \beta=\hat{\beta},\ \lambda=\hat{\lambda}\right)}.$$

The optimal progressive censoring scheme corresponds to the maximum value of *C*1 and the minimum values of *Ci*, *i* = 2, 3, 4.
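The criteria above can be evaluated programmatically for each candidate scheme. The sketch below (function and variable names are ours) assumes *C*1 is taken as the trace of the observed information, *C*2 and *C*3 as the determinant and trace of its inverse, and *C*4 as the delta-method variance of $\log \hat{t}_p$:

```python
import numpy as np

def censoring_criteria(info, grad_log_tp):
    """Optimality criteria for one candidate censoring scheme.

    info        : 4x4 observed Fisher information matrix I(alpha, theta, beta, lambda)
    grad_log_tp : gradient of log(t_p) evaluated at the MLEs (delta method)
    """
    cov = np.linalg.inv(info)                    # estimated variance-covariance matrix
    c1 = np.trace(info)                          # maximize: total observed information
    c2 = np.linalg.det(cov)                      # minimize: generalized variance
    c3 = np.trace(cov)                           # minimize: sum of the variances
    c4 = float(grad_log_tp @ cov @ grad_log_tp)  # minimize: Var(log t_p)
    return c1, c2, c3, c4
```

The scheme with the largest *C*1 (or the smallest *C*2, *C*3, *C*4) among all candidates is retained.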

#### **7. Simulation**

In this section, simulation experiments are carried out to evaluate the performance of the MLEs and the Bayesian estimators under the SLF and ELF in terms of their bias, mean square error (MSE), length of asymptotic CIs (LACI), and length of credible CIs (LCCI). The 95% CIs are generated using the asymptotic distribution of the MLEs, and two bootstrap confidence intervals for the MLEs are also obtained. The HPD method is used to calculate the 95% credible intervals. Two progressive censoring schemes are taken into consideration:

Scheme I: *R*1 = ... = *Rm*−1 = 0 and *Rm* = *n* − *m*.

Scheme II: *R*2 = ... = *Rm* = 0 and *R*1 = *n* − *m*.
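The two schemes can be written down directly as removal vectors; a minimal sketch (function names are ours):

```python
def scheme_I(n, m):
    # Scheme I: no removals until the last observed failure, then remove the rest
    return [0] * (m - 1) + [n - m]

def scheme_II(n, m):
    # Scheme II: remove all n - m surviving units at the first failure
    return [n - m] + [0] * (m - 1)
```

In both cases the removals sum to *n* − *m*, so exactly *m* failures are observed.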

For more information, see [40]. To choose the best scheme, we used various optimization criteria: maximization of the principal diagonal elements of the Fisher information matrix, minimization of the determinant and trace of the variance–covariance matrix, and minimization of the variance in the logarithmic MLE of the *p*-th quantile. The following algorithm is used to carry out the estimation procedure:


$$z_i = \begin{cases} \left\{\frac{-1}{\beta}\ln\left[\frac{\ln\left(1 + (\alpha - 1)U_i\right)}{\ln(\alpha)}\right]\right\}^{\frac{-1}{\theta}}, & z_i \leq \tau, \\ \tau + \frac{1}{\lambda}\left(\left\{\frac{-1}{\beta}\ln\left[\frac{\ln\left(1 + (\alpha - 1)U_i\right)}{\ln(\alpha)}\right]\right\}^{\frac{-1}{\theta}} - \tau\right), & z_i > \tau, \end{cases}$$

where *Ui* is a uniform(0, 1) random variate.

4. To generate the adaptive progressive hybrid censored data for given *n*, *m*, and *δ*, we use the model in (7). The data can be thought of as:

$$z_{1:m:n} < z_{2:m:n} < \cdots < z_{u:m:n} < \tau < z_{u+1:m:n} < \cdots < z_{c:m:n} < \delta < z_{c+1:m:n} < \cdots < z_{m:m:n}.$$
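The generation step can be sketched in Python under the tampered-random-variable reading of the SSPALT model, in which a use-condition lifetime beyond *τ* is compressed by the acceleration factor *λ* (function names are ours; the parameter values in the usage below are illustrative):

```python
import math
import random

def apiw_quantile(u, alpha, theta, beta):
    # Inverse CDF of the APIW distribution under use conditions
    g = math.log(1.0 + u * (alpha - 1.0)) / math.log(alpha)
    return (-math.log(g) / beta) ** (-1.0 / theta)

def sspalt_sample(n, alpha, theta, beta, lam, tau, rng):
    # Inverse-transform generation: lifetimes w <= tau are unchanged,
    # lifetimes beyond tau are accelerated, z = tau + (w - tau) / lam
    zs = []
    for _ in range(n):
        w = apiw_quantile(rng.random(), alpha, theta, beta)
        zs.append(w if w <= tau else tau + (w - tau) / lam)
    return sorted(zs)
```

With *λ* = 1 the sample reduces to ordinary APIW lifetimes; larger *λ* shortens the observed lifetimes beyond *τ*, as the acceleration is meant to do.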


Numerical simulation studies are provided in Tables 2–7 and Figures 2 and 3. Several conclusions can be drawn as follows.

• By increasing the censored sample size *m*, the bias, MSE, and CI lengths of the estimates under the two censoring schemes decrease for fixed values of *n* and *δ*.
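The bias and MSE reported in Tables 2–5 are computed from the replicated estimates in the usual Monte Carlo way; a minimal sketch (the function name is ours):

```python
def bias_mse(estimates, true_value):
    # Monte Carlo bias and mean square error of a point estimator
    r = len(estimates)
    bias = sum(estimates) / r - true_value
    mse = sum((e - true_value) ** 2 for e in estimates) / r
    return bias, mse
```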



**Table 2.** Bias, MSE, LACI, LBCI, and LCCI with scheme 1 in case 1.


**Table 3.** Bias, MSE, LACI, LBCI, and LCCI with scheme 2 in case 1.

**Table 4.** Bias, MSE, LACI, LBCI, and LCCI with scheme 1 in case 2.




**Table 5.** Bias, MSE, LACI, LBCI, and LCCI with scheme 2 in case 2.




**Table 6.** Optimization criterion with different schemes and cases.


**Table 7.** MLE, SE, and different measures of goodness of fit.


**Figure 2.** Heatmap for MSE with scheme 1 in case 1.

**Figure 3.** Heatmap for MSE with scheme 1 in case 2.

#### **8. A Real-Data Application**

We use examination data from [41] to illustrate the practical application of the proposed model. The following data represent the strength, measured in GPa, of single carbon fibers with gauge lengths of 10 mm and a sample size of 63: 1.901, 2.132, 2.203, 2.228, 2.257, 2.350, 2.361, 2.396, 2.397, 2.445, 2.454, 2.474, 2.518, 2.522, 2.525, 2.532, 2.575, 2.614, 2.616, 2.618, 2.624, 2.659, 2.675, 2.738, 2.740, 2.856, 2.917, 2.928, 2.937, 2.937, 2.977, 2.996, 3.030, 3.125, 3.139, 3.145, 3.220, 3.223, 3.235, 3.243, 3.264, 3.272, 3.294, 3.332, 3.346, 3.377, 3.408, 3.435, 3.493, 3.501, 3.537, 3.554, 3.562, 3.628, 3.852, 3.871, 3.886, 3.971, 4.024, 4.027, 4.225, 4.395, and 5.020. Here, we use the modified Kolmogorov–Smirnov statistic for the goodness-of-fit test, as follows:

The computational formula for the modified Kolmogorov–Smirnov statistic is then given by


$$D_{m:n} = \max\left(D_{m:n}^{+}, D_{m:n}^{-}\right),$$

where

$$D_{m:n}^{+} = \max_{i}\left(\omega_{i:m:n} - F\left(z_{i:m:n}; \hat{\alpha}, \hat{\theta}, \hat{\beta}, \hat{\lambda}\right)\right),$$

$$D_{m:n}^{-} = \max_{i}\left(F\left(z_{i:m:n}; \hat{\alpha}, \hat{\theta}, \hat{\beta}, \hat{\lambda}\right) - \omega_{i-1:m:n}\right),$$

$$\omega\_{i:m:n} = 1 - \prod\_{j=m-i+1}^{m} \frac{j + R\_{m-j+1} + \dots + R\_{m}}{j + 1 + R\_{m-j+1} + \dots + R\_{m}}.$$
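The modified KS computation above can be sketched directly (function names are ours; `u_sorted` holds the fitted CDF values $F(z_{i:m:n}; \hat{\alpha}, \hat{\theta}, \hat{\beta}, \hat{\lambda})$ at the ordered failures):

```python
def ks_omegas(R):
    # omega_{i:m:n} built from the progressive censoring scheme R = (R_1, ..., R_m)
    m = len(R)
    omegas = []
    for i in range(1, m + 1):
        prod = 1.0
        for j in range(m - i + 1, m + 1):
            s = j + sum(R[m - j:])      # j + R_{m-j+1} + ... + R_m (0-indexed slice)
            prod *= s / (s + 1.0)
        omegas.append(1.0 - prod)
    return omegas

def modified_ks(u_sorted, omegas):
    # D = max(D+, D-), with omega_{0:m:n} taken as 0
    d_plus = max(w - u for w, u in zip(omegas, u_sorted))
    d_minus = max(u - w for u, w in zip(u_sorted, [0.0] + omegas[:-1]))
    return max(d_plus, d_minus)
```

For a complete sample (all *Ri* = 0 with *m* = *n*), the weights reduce to the familiar plotting positions *i*/(*m* + 1), which gives a quick sanity check of the product formula.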

Ref. [42] proposed a general-purpose goodness-of-fit test that first estimates the unknown parameters of the hypothesized distribution, then transforms the data to normality, and finally tests the transformed data for normality. Along the lines of [42], the proposed test procedure is as follows:


For more information about the *p*-value of the KS test for SSPALT samples, see [42–46].

In Table 7, we provide the MLEs with their standard errors (SEs) for the APIW parameters and different measures of goodness of fit, such as the Akaike information criterion (AIC), Bayesian information criterion (BIC), corrected Akaike information criterion (CAIC), Hannan–Quinn information criterion (HQIC), Kolmogorov–Smirnov (KS) test and its *p*-value (PVKS), Anderson–Darling (AD), and Cramér–von Mises (CVM).
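Given the maximized log-likelihood, the information criteria in Table 7 follow from standard formulas; a minimal sketch (the function name is ours, and CAIC is taken here as the small-sample corrected AIC):

```python
import math

def info_criteria(loglik, k, n):
    # Standard information criteria from a maximized log-likelihood,
    # with k parameters and sample size n
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * math.log(n)
    caic = aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)   # corrected AIC
    hqic = -2.0 * loglik + 2.0 * k * math.log(math.log(n))
    return aic, bic, caic, hqic
```

The model with the smallest values of these criteria is preferred.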

The empirical CDF and the fitted CDF (left panel), together with the histogram of the data and the fitted density function (right panel), for the single carbon fiber data are displayed in the top panels of Figure 4. Furthermore, the PP plot (left panel) and QQ plot (right panel) of the APIW distribution are shown in the bottom panels of Figure 4.

**Figure 4.** Fitting plot of APIW distribution of single carbon fibers.

Figure 5 shows the profile log-likelihood function plots for the parameters of the APIW distribution. Figure 6 displays contour plots of the log-likelihood function for the APIW parameters, indicating that the MLEs can be uniquely estimated.
**Figure 5.** Profile log-likelihood plots for the parameters of the APIW model.

**Figure 6.** Contour plots of the log-likelihood function for the parameters for the APIW model.

Numerical results for the single carbon fiber study are provided in Table 8. Table 9 contains the estimates based on the censored data. For a given fixed scheme, we observe that the Bayes estimates of the unknown parameters are close to the MLEs. Table 10 discusses the estimation of *τ*, which is obtained by equating *F*1(*τ*) = *F*2(*τ*∗). Table 11 presents different optimality measures for the MLE based on different schemes, illustrating that the proposed technique is quite satisfactory.


**Table 8.** The single carbon fiber study data based on SSPALT when *τ* = 3 and *δ* = 3.8.

**Table 9.** The MLE and its SE and Bayesian and its SD with confidence intervals.


**Table 10.** Estimated *τ*.


**Table 11.** Optimality measures.


#### **9. Conclusions**

In life-testing and reliability experiments, data may exhibit different shapes and are often censored due to time and cost constraints. Accelerated life tests are thus commonly used to explore the lifetime of reliable items by subjecting them to elevated stress levels of stressors that could cause early failures. This observation motivated us to investigate the step stress partially accelerated life testing model with samples from the APIW distribution under adaptive type II progressively hybrid censoring. We considered statistical inference on the unknown model parameters of the APIW distribution from both likelihood and Bayesian perspectives. We first obtained the maximum likelihood estimates of the unknown model parameters and used them to construct two types of approximate confidence intervals for the distributional parameters. We then conducted Bayesian inference for the unknown parameters with non-informative and informative priors under symmetric and asymmetric loss functions. Moreover, we analyzed three probable optimum test techniques for the proposed model under different optimality criteria. Numerical results from both simulations and a real-data application illustrated that the performance of the proposed method is quite satisfactory for estimating the APIW parameters under different sampling schemes. We may, thus, conclude that the proposed model has great potential for analyzing censored data under the AT-II PHCS in life testing and reliability analyses.

**Author Contributions:** R.A., E.M.A., Q.H., and H.R. have contributed equally. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are included within the article.

**Acknowledgments:** Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R50), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Fisher Information Matrix**

The Fisher information matrix is a fundamental statistical construct that quantifies how much information the data carry about the unknown parameters. It can be used to obtain the variance of an estimator as well as the asymptotic behavior of maximum likelihood estimates. The inverse of the Fisher information matrix is an estimator of the asymptotic covariance matrix. The Fisher information matrix is computed by taking the expected values of the negative second partial and mixed partial derivatives of the log-likelihood function with respect to *α*, *θ*, *β*, and *λ*, as explained below.

$$I\_{4 \times 4} = -E \begin{bmatrix} a\_{11} & a\_{12} & a\_{13} & a\_{14} \\ a\_{21} & a\_{22} & a\_{23} & a\_{24} \\ a\_{31} & a\_{32} & a\_{33} & a\_{34} \\ a\_{41} & a\_{42} & a\_{43} & a\_{44} \end{bmatrix} \tag{A1}$$

where $\ell$ denotes the log-likelihood function and $a_{11} = E\left[\frac{\partial^2 \ell}{\partial \alpha^2}\right]$, $a_{12} = a_{21} = E\left[\frac{\partial^2 \ell}{\partial \alpha \partial \theta}\right]$, $a_{13} = a_{31} = E\left[\frac{\partial^2 \ell}{\partial \alpha \partial \beta}\right]$, $a_{14} = a_{41} = E\left[\frac{\partial^2 \ell}{\partial \alpha \partial \lambda}\right]$, $a_{22} = E\left[\frac{\partial^2 \ell}{\partial \theta^2}\right]$, $a_{33} = E\left[\frac{\partial^2 \ell}{\partial \beta^2}\right]$, $a_{23} = a_{32} = E\left[\frac{\partial^2 \ell}{\partial \theta \partial \beta}\right]$, $a_{44} = E\left[\frac{\partial^2 \ell}{\partial \lambda^2}\right]$, $a_{24} = a_{42} = E\left[\frac{\partial^2 \ell}{\partial \theta \partial \lambda}\right]$, and $a_{34} = a_{43} = E\left[\frac{\partial^2 \ell}{\partial \beta \partial \lambda}\right]$.
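In practice, the observed information is often evaluated numerically at the MLEs rather than from the closed-form derivatives. A finite-difference sketch (function names are ours; `neg_loglik` is assumed to be a callable returning the negative log-likelihood at a parameter vector):

```python
import numpy as np

def observed_information(neg_loglik, theta_hat, h=1e-4):
    # Observed information: Hessian of the negative log-likelihood at the MLE,
    # approximated entrywise by central finite differences.
    k = len(theta_hat)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (neg_loglik(theta_hat + ei + ej)
                       - neg_loglik(theta_hat + ei - ej)
                       - neg_loglik(theta_hat - ei + ej)
                       + neg_loglik(theta_hat - ei - ej)) / (4.0 * h * h)
    return H
```

Inverting the resulting matrix gives the estimated variance–covariance matrix of the MLEs.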

The elements of the matrix are computed from these second partial and mixed partial derivatives of the log-likelihood function, taken over the uncensored observations ($i = 1, \ldots, m_u$, involving $e^{-\beta z_i^{-\theta}}$) and the accelerated observations ($i = m_u + 1, \ldots, m$, involving $e^{-\beta[\tau + \lambda(z_i - \tau)]^{-\theta}}$). Evaluating these derivatives at the MLEs yields the observed information matrix, and its inverse provides the estimated variance–covariance matrix of the MLEs.

#### **References**

