1. Introduction
Censoring is a popular technique in reliability and life-testing investigations. When unit removal is scheduled in advance, prior to failure, the experimenter must have prior expertise with the various test constraints, including time, cost, or material limits. In reliability investigations, the most commonly employed censoring techniques are time censoring (Type-I) and failure censoring (Type-II). One of the primary flaws of these techniques is that items cannot be removed from the experiment at any point other than its end, so progressive Type-II (T2P) censoring has been suggested; for further details, see Balakrishnan and Cramer [1]. Although Type-I progressively hybrid censoring, proposed by Kundu and Joarder [2], ensures that the experiment stops at a predetermined time, the effective sample size collected may be too small, so the estimation approach may not be efficient. For this reason, Ng et al. [3] suggested adaptive progressive Type-II hybrid (T2APH) censoring. This life-test plan has become widely used in survival studies and is conducted as follows. Suppose $n$ (the total number of independent, identical items on test), $m$ (the number of failures to be observed), $R=(R_1,\dots,R_m)$ (the T2P censoring scheme), and $T$ (a threshold time) are preassigned. This mechanism allows $R_i$ to change accordingly during the examination and permits the experiment to run beyond $T$. If $X_{m:m:n}<T$, just as in the conventional T2P strategy, the test ends at $X_{m:m:n}$. Otherwise, if $X_{d:m:n}<T<X_{d+1:m:n}$, where $d$ denotes the total number of failures up to $T$, the practitioner stops removing items beyond $T$, i.e., sets $R_i=0$ for $i=d+1,\dots,m-1$, and ends the test at $X_{m:m:n}$. Hence, the number of surviving units removed at $X_{m:m:n}$ (say $R_m^{*}$) is $R_m$ when $X_{m:m:n}<T$ and $n-m-\sum_{i=1}^{d}R_i$ when $X_{m:m:n}>T$.
Let $\underline{x}=(x_1,\dots,x_m)$ be a T2APH sample obtained from a population having cumulative distribution function (CDF) $F(\cdot;\Theta)$ and probability density function (PDF) $f(\cdot;\Theta)$; then the joint likelihood function (LF) of T2APH, where $\Theta$ refers to the vector of parameters of interest, is
$$L(\Theta\mid\underline{x}) = C \prod_{i=1}^{m} f(x_i;\Theta) \prod_{i=1}^{d} \big[1-F(x_i;\Theta)\big]^{R_i} \big[1-F(x_m;\Theta)\big]^{R_m^{*}}, \qquad (1)$$
where $C$ is a constant that does not depend on $\Theta$ and $R_m^{*}=n-m-\sum_{i=1}^{d}R_i$. It should be noted that this strategy ensures that the life test ends once the required effective sample size is reached; see, for example, Elshahhat and Nassar [4,5].
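As a concrete illustration of (1), the following is a minimal Python sketch (not the authors' code; all names are ours) that evaluates the T2APH log-likelihood up to the additive constant $\log C$, with the model supplied through generic log-density and log-survival callables. The unit-rate exponential in the example is only a stand-in for $f$ and $F$.

```python
import numpy as np

def t2aph_loglik(x, R, d, n, logpdf, logsf):
    """Log of the T2APH likelihood (1), up to the additive constant log C.

    x      : ordered failure times x_1 <= ... <= x_m
    R      : progressive removal scheme (R_1, ..., R_m); only R_1..R_d applied
    d      : number of failures observed before the threshold time T
    logpdf : log f(x; Theta);  logsf : log[1 - F(x; Theta)]
    """
    x = np.asarray(x, dtype=float)
    m = len(x)
    r_star = n - m - sum(R[:d])                      # R_m^* in (1)
    ll = logpdf(x).sum()                             # sum_i log f(x_i)
    ll += sum(R[i] * logsf(x[i]) for i in range(d))  # removals before T
    ll += r_star * logsf(x[m - 1])                   # survivors removed at x_m
    return float(ll)

# Example: unit-rate exponential stand-in, so log f(t) = log[1 - F(t)] = -t
x = [0.11, 0.35, 0.62, 0.97, 1.40]
ll = t2aph_loglik(x, R=[1, 1, 0, 0, 0], d=2, n=8,
                  logpdf=lambda t: -np.asarray(t),
                  logsf=lambda t: -np.asarray(t))
```

Working on the log scale avoids underflow when the survival probabilities are raised to the removal counts.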
Besides (1), the PS methodology has also emerged as a strong competitor to the conventional likelihood. The PS method was investigated independently by Cheng and Amin [6] and Ranneby [7]. Following the same logic used to obtain the MLEs, maximum product of spacings estimators (PSEs) can be obtained by maximizing the PS function. For skewed distributions, Anatolyev and Kosenok [8] demonstrated that the PSEs are more efficient than the traditional MLEs. Following El-Sherpieny et al. [9], the PS function under T2APH censoring, $S(\Theta\mid\underline{x})$, can be defined as
$$S(\Theta\mid\underline{x}) = K \prod_{i=1}^{m+1} \big[F(x_i;\Theta)-F(x_{i-1};\Theta)\big] \prod_{i=1}^{d} \big[1-F(x_i;\Theta)\big]^{R_i} \big[1-F(x_m;\Theta)\big]^{R_m^{*}}, \qquad (2)$$
where $K$ is a constant, $F(x_0;\Theta)\equiv 0$, and $F(x_{m+1};\Theta)\equiv 1$.
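In the same hedged spirit, (2) can be sketched as a log-spacings function; the names below are ours, and the exponential CDF in the example is again only a stand-in.

```python
import numpy as np

def t2aph_logps(x, R, d, n, cdf):
    """Log of the T2APH product-of-spacings function (2), up to log K."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    F = cdf(x)
    # spacings F(x_1) - 0, F(x_2) - F(x_1), ..., 1 - F(x_m)
    spacings = np.diff(np.concatenate(([0.0], F, [1.0])))
    r_star = n - m - sum(R[:d])
    lps = np.log(spacings).sum()
    lps += sum(R[i] * np.log(1.0 - F[i]) for i in range(d))
    lps += r_star * np.log(1.0 - F[m - 1])
    return float(lps)

# Example with a unit-rate exponential CDF as a stand-in:
lps = t2aph_logps([0.5, 1.0], R=[1, 0], d=1, n=4,
                  cdf=lambda t: 1.0 - np.exp(-np.asarray(t)))
```

Maximizing this function over the parameters, e.g. with a numerical optimizer, yields the PSEs.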
The two-parameter weighted exponential (WE) distribution was suggested by Gupta and Kundu [10] by adding a new skewness parameter to the traditional exponential distribution. They also showed that the WE density is quite flexible in shape compared with other extended exponential lifetime models, including the gamma, Weibull, and generalized exponential distributions. Moreover, in many practical situations, the WE distribution has superior properties and may fit lifetime data better than other models in the statistical literature; thus, the WE distribution can be a good alternative for analyzing skewed data. Further, Dey et al. [11] studied several properties and derived various estimators of the WE parameters under complete sampling. Now, suppose $Y$ is a random lifetime of an item that follows $\mathrm{WE}(\alpha,\lambda)$, where $\alpha,\lambda>0$. Hence, the respective PDF and CDF of $Y$ are
$$f(y;\alpha,\lambda)=\frac{\alpha+1}{\alpha}\,\lambda e^{-\lambda y}\big(1-e^{-\alpha\lambda y}\big),\quad y>0, \qquad (3)$$
and
$$F(y;\alpha,\lambda)=1-\frac{1}{\alpha}\Big[(\alpha+1)e^{-\lambda y}-e^{-(\alpha+1)\lambda y}\Big],\quad y>0, \qquad (4)$$
where $\lambda>0$ and $\alpha>0$ denote the scale and shape parameters, respectively. Consequently, the reliability function (RF) (say $R(t)$) and hazard rate function (HRF) (say $h(t)$) of the WE model, at mission time $t>0$, are given by
$$R(t;\alpha,\lambda)=\frac{1}{\alpha}\Big[(\alpha+1)e^{-\lambda t}-e^{-(\alpha+1)\lambda t}\Big] \qquad (5)$$
and
$$h(t;\alpha,\lambda)=\frac{(\alpha+1)\lambda e^{-\lambda t}\big(1-e^{-\alpha\lambda t}\big)}{(\alpha+1)e^{-\lambda t}-e^{-(\alpha+1)\lambda t}}, \qquad (6)$$
respectively. Letting $\alpha\to\infty$ and $\alpha\to 0$ in (4), two sub-models can be obtained as limiting special cases, namely the exponential distribution with parameter $\lambda$ and the gamma distribution with shape parameter two and parameter $\lambda$, respectively.
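The four functions in (3)-(6) are straightforward to code; the sketch below (our own naming, not the authors' R code) implements them with the RF and HRF derived directly from the PDF and CDF, so that quick numeric sanity checks (F(0) = 0, unit total probability, increasing HRF) can be run.

```python
import numpy as np

def we_pdf(y, alpha, lam):
    """WE density (3)."""
    y = np.asarray(y, dtype=float)
    return (alpha + 1.0) / alpha * lam * np.exp(-lam * y) * (1.0 - np.exp(-alpha * lam * y))

def we_cdf(y, alpha, lam):
    """WE CDF (4)."""
    y = np.asarray(y, dtype=float)
    return 1.0 - ((alpha + 1.0) * np.exp(-lam * y) - np.exp(-(alpha + 1.0) * lam * y)) / alpha

def we_rf(t, alpha, lam):
    """Reliability function (5): R(t) = 1 - F(t)."""
    return 1.0 - we_cdf(t, alpha, lam)

def we_hrf(t, alpha, lam):
    """Hazard rate function (6): h(t) = f(t) / R(t)."""
    return we_pdf(t, alpha, lam) / we_rf(t, alpha, lam)
```

On a grid, the density's argmax agrees with the stated mode $\log(1+\alpha)/(\alpha\lambda)$, and the HRF rises monotonically from 0 toward $\lambda$.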
Using specified values in the ranges of the parameters $\alpha$ and $\lambda$, different shapes of the density and failure rate functions of the WE distribution are shown in Figure 1. It shows that the WE density is always log-concave and unimodal, with mode at $\log(1+\alpha)/(\alpha\lambda)$; in addition, the HRF is monotonically increasing for all positive values of $\alpha$ and $\lambda$ (see Gupta and Kundu [10]). Several works in the recent decade have explored inference problems for the WE parameters; for example, Farahani and Khorram [12] examined Bayesian inference for the WE parameters under Type-II censoring, and Tian and Gui [13] discussed the WE parameters in the presence of T2P competing-risks data.
To the best of our knowledge, no work has inferred the WE parameters and/or the reliability and hazard functions from T2APH data. To bridge this gap, the objectives of the current study are fourfold:
- Develop both point and interval estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ using T2APH samples, focusing on both frequentist and Bayesian inferential methods.
- Acquire the maximum likelihood and product of spacings estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$, and create the approximate confidence interval (ACI) bounds of the unknown quantities using the observed Fisher information obtained from both the LF and PS approaches.
- Explore the PS method as an alternative to the traditional LF method, and investigate both in two Bayesian estimation setups for the unknown parameters, reliability function, and hazard function. Use independent gamma density priors under the squared-error loss to develop the Bayes estimates, and approximate the Bayes estimates and their credible intervals via Markov chain Monte Carlo (MCMC) techniques.
- Compare the effectiveness of the offered approaches via Monte Carlo simulations based on several accuracy criteria, namely simulated bias, mean squared error, and confidence-interval length. Illustrate the suggested methodologies with a mechanical data set and highlight the WE distribution's superiority and flexibility over eight other lifetime models in the literature, namely the Weibull, gamma, Nadarajah–Haghighi, weighted Nadarajah–Haghighi, alpha-power exponential, Weibull-exponential, generalized gamma, and generalized beta distributions.
The remainder of the paper is structured as follows:
Section 2 provides the frequentist estimates. In
Section 3, Bayesian (point/interval) estimations are developed.
Section 4 presents the simulated outcomes. Real data results are highlighted in
Section 5. Finally,
Section 6 presents the conclusions and recommendations of the study.
4. Numerical Comparisons
To gauge the behavior of the offered estimators of the WE lifetime distribution discussed in the earlier sections, extensive Monte Carlo simulations based on adaptive progressive Type-II hybrid samples are carried out.
4.1. Simulation Design
This subsection presents the suggested scenarios of the proposed censoring plan and the outputs of the simulations. For different choices of $T$ (threshold time), $n$ (complete sample size), and $R$ (T2P pattern), 1000 T2APH samples are generated from the WE model. At a chosen mission time $t$, the offered estimates of $R(t)$ and $h(t)$ are evaluated; their plausible (true) values are 0.97456 and 0.47968, respectively. For each setting of $T$ and $n$, the level of $m$ is specified as a failure percentage (FP) of each $n$, i.e., $\frac{m}{n}$ (= 40, 80)%. Moreover, different removal patterns $R$ are considered, namely:
Scheme 1: ‘Left Censoring’, i.e., $R=\big(n-m,0^{*(m-1)}\big)$;
Scheme 2: ‘Middle Censoring’, i.e., $R=\big(0^{*(\frac{m}{2}-1)},n-m,0^{*(\frac{m}{2})}\big)$;
Scheme 3: ‘Right Censoring’, i.e., $R=\big(0^{*(m-1)},n-m\big)$,
where, for instance, $0^{*(m-1)}$ implies that 0 repeats $(m-1)$ times.
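These three removal patterns can be generated programmatically; the sketch below is ours (the exact middle position in Scheme 2 is an illustrative assumption), with all $n-m$ removals placed at a single failure stage.

```python
def removal_scheme(n, m, kind):
    """Removal pattern R = (R_1, ..., R_m): all n - m removals at one stage.

    The exact middle position for 'middle' is an illustrative assumption.
    """
    r = [0] * m
    if kind == "left":        # Scheme 1: removals at the first failure
        r[0] = n - m
    elif kind == "middle":    # Scheme 2: removals at a middle failure
        r[m // 2] = n - m
    elif kind == "right":     # Scheme 3: removals at the last failure
        r[-1] = n - m
    else:
        raise ValueError("unknown scheme: " + kind)
    return r
```

Each returned pattern sums to $n-m$, as the T2P design requires.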
To create a T2APH sample of size $m$ from the WE distribution, conduct the following procedure:
Step 1: Simulate a traditional T2P sample as follows:
- (a) Simulate $\omega_1,\omega_2,\dots,\omega_m$ from the standard uniform distribution.
- (b) Put $\nu_i=\omega_i^{1/(i+R_m+R_{m-1}+\cdots+R_{m-i+1})}$ for $i=1,2,\dots,m$.
- (c) Set $u_i=1-\nu_m\nu_{m-1}\cdots\nu_{m-i+1}$ for $i=1,2,\dots,m$.
- (d) Set $x_i=F^{-1}(u_i;\alpha,\lambda)$ for $i=1,2,\dots,m$; the T2P sample from the WE distribution is created.
Step 2: Find $d$ and eliminate $x_i$ for $i=d+2,\dots,m$.
Step 3: The distribution truncated at $x_{d+1}$, i.e., $f(x;\alpha,\lambda)/\big[1-F(x_{d+1};\alpha,\lambda)\big]$, is used to obtain the first $m-d-1$ order statistics out of a sample of size $n-d-1-\sum_{i=1}^{d}R_i$.
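Steps 1-3 above can be sketched end-to-end as follows. This is our own illustration, not the authors' code: Step 1 uses the standard uniform-transformation algorithm for T2P order statistics, and the WE quantile is computed numerically since (4) has no closed-form inverse.

```python
import numpy as np
from scipy.optimize import brentq

def we_cdf(y, alpha, lam):
    return 1.0 - ((alpha + 1.0) * np.exp(-lam * y) - np.exp(-(alpha + 1.0) * lam * y)) / alpha

def we_quantile(u, alpha, lam):
    # numerical inverse of the WE CDF (no closed form is available)
    hi = 1.0
    while we_cdf(hi, alpha, lam) < u:
        hi *= 2.0
    return brentq(lambda y: we_cdf(y, alpha, lam) - u, 0.0, hi)

def t2aph_sample(n, m, R, T, alpha, lam, rng):
    """Steps 1-3: simulate a T2APH sample of size m from WE(alpha, lam)."""
    # Step 1: conventional T2P sample via uniform transformations (a)-(d)
    w = rng.uniform(size=m)
    v = np.array([w[k] ** (1.0 / (k + 1 + sum(R[m - k - 1:]))) for k in range(m)])
    u = 1.0 - np.cumprod(v[::-1])                 # u_1 < u_2 < ... < u_m
    x = np.array([we_quantile(ui, alpha, lam) for ui in u])
    d = int(np.sum(x <= T))
    if d >= m - 1:                                # test finishes as ordinary T2P
        return x, list(R)
    # Step 2: keep x_1, ..., x_{d+1}; eliminate the rest
    kept = list(x[:d + 1])
    # Step 3: remaining m-d-1 failures are the smallest order statistics of
    # n-d-1-sum(R_1..R_d) units from the distribution left-truncated at x_{d+1}
    n_left = n - d - 1 - sum(R[:d])
    Fc = float(we_cdf(kept[-1], alpha, lam))
    uu = Fc + (1.0 - Fc) * rng.uniform(size=n_left)
    kept += sorted(we_quantile(ui, alpha, lam) for ui in uu)[: m - d - 1]
    r_adapt = list(R[:d]) + [0] * (m - d - 1) + [n - m - sum(R[:d])]
    return np.array(kept), r_adapt
```

The returned adapted scheme always sums to $n-m$, and the sample is ordered by construction.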
From each T2APH sample, from the frequentist viewpoint, the MLEs and PSEs (in addition to their 95% ACIs) of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ are evaluated by adopting the iterative Newton–Raphson (NR) method via the ‘maxLik’ package. For each parameter, via the MH sampling depicted in Algorithm 1, 12,000 Markovian iterations were made, and the first 2000 iterations were then discarded to remove the effect of the starting values. Thus, from the remaining 10,000 variates, the Bayes estimates (along with their 95% HPD intervals) of $\alpha$, $\lambda$, $R(t)$, and $h(t)$, using the likelihood and product of spacings approaches with the selected gamma prior hyperparameters, are developed.
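Algorithm 1 itself is not reproduced here, but the burn-in scheme just described can be illustrated with a generic random-walk Metropolis–Hastings sketch for the WE parameters under independent gamma priors. All names, the hyperparameter values, the proposal step size, and the use of a complete sample (rather than the T2APH likelihood of (1), which could be substituted) are our own simplifying assumptions.

```python
import numpy as np

def we_logpdf(y, alpha, lam):
    """Log of the WE density (3)."""
    return (np.log(alpha + 1.0) - np.log(alpha) + np.log(lam)
            - lam * y + np.log1p(-np.exp(-alpha * lam * y)))

def log_post(theta, y, a1=0.1, b1=0.1, a2=0.1, b2=0.1):
    """WE log-likelihood plus independent gamma(a, b) log-priors (up to constants).
    The hyperparameter values here are illustrative assumptions only."""
    alpha, lam = theta
    if alpha <= 0.0 or lam <= 0.0:
        return -np.inf
    lp = we_logpdf(y, alpha, lam).sum()
    lp += (a1 - 1.0) * np.log(alpha) - b1 * alpha
    lp += (a2 - 1.0) * np.log(lam) - b2 * lam
    return lp

def mh_sampler(y, init, n_iter=12000, burn=2000, step=0.15, seed=0):
    """Random-walk Metropolis-Hastings for (alpha, lambda); returns post-burn-in draws."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(init, dtype=float)
    cur = log_post(theta, y)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        prop = theta + step * rng.normal(size=2)
        cand = log_post(prop, y)
        if np.log(rng.uniform()) < cand - cur:   # accept/reject step
            theta, cur = prop, cand
        draws[t] = theta
    return draws[burn:]
```

With the defaults, 12,000 iterations are drawn and the first 2000 discarded, matching the simulation design; starting the chain at the MLE or PSE is a natural choice of `init`.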
The acquired point estimates of $\alpha$ (say $\hat{\alpha}$) are compared using their mean biases (MBs) and mean squared errors (MSEs), computed as
$$\mathrm{MB}(\hat{\alpha})=\frac{1}{1000}\sum_{j=1}^{1000}\big(\hat{\alpha}^{(j)}-\alpha\big)$$
and
$$\mathrm{MSE}(\hat{\alpha})=\frac{1}{1000}\sum_{j=1}^{1000}\big(\hat{\alpha}^{(j)}-\alpha\big)^{2},$$
respectively, where $\hat{\alpha}^{(j)}$ is the acquired estimate of $\alpha$ at the $j$th generated sample. In addition, the average confidence length (ACL) criterion is utilized to assess the acquired interval estimates and is computed as
$$\mathrm{ACL}(\alpha)=\frac{1}{1000}\sum_{j=1}^{1000}\big(\mathcal{U}_{\hat{\alpha}}^{(j)}-\mathcal{L}_{\hat{\alpha}}^{(j)}\big),$$
where $\mathcal{L}_{\hat{\alpha}}^{(j)}$ and $\mathcal{U}_{\hat{\alpha}}^{(j)}$ represent the lower and upper limits, respectively, of the ACI (or HPD) interval. In a similar pattern, the simulated MB, MSE, and ACL of $\lambda$, $R(t)$, and $h(t)$ can be easily evaluated.
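The three accuracy criteria reduce to simple averages over the replications; a minimal sketch (our own helper names):

```python
import numpy as np

def mb(est, true):
    """Mean bias of point estimates over replications."""
    return float(np.mean(np.asarray(est) - true))

def mse(est, true):
    """Mean squared error of point estimates over replications."""
    return float(np.mean((np.asarray(est) - true) ** 2))

def acl(lower, upper):
    """Average confidence length of interval estimates over replications."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))
```

Each helper takes the 1000 replication-wise values at once and returns the corresponding criterion.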
Via R 4.2.2 programming, utilizing two useful packages, namely (i) ‘maxLik’ (by Henningsen and Toomet [14]) and (ii) ‘coda’ (by Plummer et al. [20]), the proposed estimators are calculated. Using a heat-map tool (a data-visualization tool that illustrates the magnitude of a phenomenon in two dimensions, with colors representing values), the simulation results of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ are displayed in Figure 2, Figure 3, Figure 4, and Figure 5, respectively.
In each heat-map, the x-axis represents the proposed estimation procedures, while the y-axis represents the censoring inputs, each denoted by its corresponding ‘-Scheme’ label. The colors in each heat-map range from yellow to red; for example, in the MBs for $\alpha$ in Figure 2, yellow means the MB is low, while red indicates the MB is high. The numerical outcomes of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ are reported in the Supplementary Materials. For simplicity, some notations are used: (i) Bayes estimates from the likelihood function, denoted “BE-ML”; (ii) Bayes estimates from the product of spacings function, denoted “BE-PS”; (iii) HPD interval estimates from the likelihood function, denoted “HPD-ML”; and (iv) HPD interval estimates from the product of spacings function, denoted “HPD-PS”.
4.2. Simulation Discussions
In terms of the lowest MB, MSE, and ACL values, from Figure 2, Figure 3, Figure 4 and Figure 5, this subsection reports useful observations on the behavior of the suggested point and interval estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$:
- All proposed estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ perform satisfactorily.
- As $n$ increases, the offered estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ behave well. An identical result is noted when the total number of removed items is narrowed down.
As $T$ increases, we observed that
- The MBs, MSEs, and ACLs of all suggested estimates of $\alpha$, $\lambda$, and $R(t)$ decrease.
- The MBs and MSEs of all suggested estimates of $h(t)$ increase, while the ACLs of the same quantity decrease.
Comparing the suggested point/interval inferential techniques, it is clear that
- In evaluating $\lambda$ and $R(t)$, the PS method (and BE-PS method) provides more accurate results than the ML method.
- In evaluating $\alpha$ and $h(t)$, the ML method (and BE-ML method) provides more accurate results than the PS method.
Comparing the suggested censoring designs, it is clear that
- The acquired point estimates of $\alpha$ and $R(t)$ behaved well under right censoring, while those of $\lambda$ and $h(t)$ behaved well under left censoring.
- The acquired interval estimates of $\alpha$, $R(t)$, and $h(t)$ behaved well under right censoring, while those of $\lambda$ behaved well under left censoring.
In summary, for data created from the proposed adaptive progressive Type-II hybrid mechanism, the Bayes MH technique through the product of spacings approach is recommended for evaluating the scale and reliability parameters, while the Bayes MH technique through the likelihood function is recommended for estimating the shape and hazard parameters.
5. Mechanical Data Analysis
To exhibit the adaptation of the proposed approaches to a real-world phenomenon, a real-life engineering data set consisting of thirty failure times of repairable mechanical equipment (RME) items is examined. This application demonstrates that the offered model furnishes a better fit than eight other lifetime models in the literature and that the suggested inferential approaches are effective and simple to use. The RME data were originally reported by Murthy [21] and re-analyzed by Alotaibi et al. [22].
Before calculating the offered estimators, the WE's fit is compared against eight competing models, namely:
- Weibull (W) by Weibull [23];
- Gamma (G) by Johnson et al. [24];
- Nadarajah–Haghighi (NH) by Nadarajah and Haghighi [25];
- Weighted Nadarajah–Haghighi (WNH) by Khan et al. [26];
- Alpha-power exponential (APE) by Mahdavi and Kundu [27];
- Weibull-exponential (W-Ex) by Oguntunde et al. [28];
- Generalized gamma (GG) by Stacy [29];
- Generalized beta (GB) by McDonald and Xu [30].
For each considered model, the MLEs of its parameters, along with their standard errors (SEs), based on the complete RME data, are evaluated and listed in Table 1. The WE and its competing lifetime models are compared based on several criteria, namely: (i) Akaike (AIC); (ii) Bayesian (BIC); (iii) consistent Akaike (CAIC); (iv) Hannan–Quinn (HQIC); (v) Cramér–von Mises (CvM); (vi) Anderson–Darling (AD); (vii) estimated negative log-likelihood (ENL); and (viii) Kolmogorov–Smirnov (KS) distance (along with its p-value); see Table 1. It shows that the WE model has the lowest values of all the given statistics, except the highest p-value; thus, we conclude that it is the best choice among all the candidates. It should also be noted that the two most competitive lifetime models relative to the proposed WE model are the G and GG models. Several plots, namely the fitted PDFs, fitted RFs, and probability-probability (P-P) plots for the WE and the other competing distributions, are provided in Figure 6 and Figure 7. The plots shown in Figure 6 and Figure 7 support the same conclusions as Table 1.
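The information criteria and the KS distance used in Table 1 can be computed from the maximized log-likelihood and the fitted CDF; a sketch with our own helper names follows (note that the exact definition of CAIC varies across papers, so the small-sample corrected form used here is an assumption).

```python
import numpy as np

def fit_criteria(neg_loglik, k, n):
    """Comparison criteria from the minimized negative log-likelihood.

    k = number of fitted parameters, n = sample size.  The CAIC below is the
    small-sample corrected form; papers differ on its exact definition.
    """
    aic = 2.0 * k + 2.0 * neg_loglik
    bic = k * np.log(n) + 2.0 * neg_loglik
    caic = aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)
    hqic = 2.0 * k * np.log(np.log(n)) + 2.0 * neg_loglik
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "HQIC": hqic, "ENL": neg_loglik}

def ks_distance(sample, cdf):
    """Kolmogorov-Smirnov distance between the empirical and a fitted CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F = np.asarray(cdf(x), dtype=float)
    i = np.arange(1, n + 1)
    return float(max(np.max(i / n - F), np.max(F - (i - 1) / n)))
```

Lower values of every criterion, and a higher KS p-value, indicate a better fit, which is the basis for the ranking in Table 1.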
Taking various choices of the censoring settings $R$ and $T$ from Table 2, three artificial T2APH samples are created, as shown in Table 3. At a chosen mission time $t$, all estimators of $R(t)$ and $h(t)$ are calculated. Because we do not have any prior knowledge of $\alpha$ and $\lambda$, the Bayes estimates through both the LF and PS functions are developed using improper gamma priors. Following Algorithm 1, the first 10,000 of 50,000 MCMC iterations for each unknown parameter are discarded. For running the MCMC algorithm, the acquired MLE (or PSE) values of $\alpha$ and $\lambda$ are taken as initial guesses. Then, for each sample S$_i$, $i=1,2,3$, the Bayesian and frequentist estimates (with their SEs), as well as the 95% ACI/HPD (from the LF and PS approaches) estimates (with their widths), of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ are calculated, as shown in Table 4. It points out that the Bayes (and HPD interval) estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ performed superiorly compared with the conventional approaches.
To demonstrate the existence and uniqueness of the ML (and PS) estimates developed from the RME data, Figure 8 displays the profile plots of the log-LF and log-PS with respect to $\alpha$ and $\lambda$. It demonstrates that the acquired maximum likelihood and product of spacings estimates of $\alpha$ and $\lambda$ exist and are unique.
In Table 5, several properties of the posterior draws of $\alpha$, $\lambda$, $R(t)$, and $h(t)$, based on the remaining 40,000 MCMC iterations, are listed, namely the mean, mode, three quartiles $Q_1$, $Q_2$, and $Q_3$, standard deviation (SD), and skewness (Skew.).
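The summaries reported in Table 5, together with an HPD interval, can be computed from a chain of draws as follows; this is our own sketch (the histogram-based mode is a crude assumption, since continuous draws have no exact mode).

```python
import numpy as np

def posterior_summary(draws, bins=50):
    """Mean, mode, quartiles, SD, and skewness of one chain of MCMC draws."""
    d = np.asarray(draws, dtype=float)
    q1, q2, q3 = np.percentile(d, [25, 50, 75])
    sd = d.std(ddof=1)
    skew = float(np.mean(((d - d.mean()) / sd) ** 3))
    hist, edges = np.histogram(d, bins=bins)      # crude histogram-based mode
    j = int(np.argmax(hist))
    mode = 0.5 * (edges[j] + edges[j + 1])
    return {"mean": d.mean(), "mode": mode, "Q1": q1, "Q2": q2, "Q3": q3,
            "SD": sd, "Skew": skew}

def hpd_interval(draws, level=0.95):
    """Highest posterior density interval via the sorted-draws (Chen-Shao) method."""
    d = np.sort(np.asarray(draws, dtype=float))
    n = len(d)
    k = int(np.ceil(level * n))
    widths = d[k - 1:] - d[: n - k + 1]
    j = int(np.argmin(widths))
    return d[j], d[j + k - 1]
```

For the roughly symmetric posteriors observed here, the HPD interval essentially coincides with the equal-tail credible interval.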
To highlight the performance of the 40,000 MCMC draws (from the LF and PS) of $\alpha$, $\lambda$, $R(t)$, and $h(t)$, both trace and density (with Gaussian line) plots from S1 (as an example) are displayed in Figure 9. In both the trace and density plots, the Bayes estimate is marked by a solid (—) horizontal line, whereas the 95% HPD interval limits are marked by dashed (- - -) horizontal lines. For brevity, the density and trace diagrams for the other samples S$_i$, $i=2,3$, are provided in the supplementary file.
Figure 9 indicates that the proposed Bayes MH technique, in both the LF-based and PS-based approaches, converges adequately. It also shows that, for all given samples, the simulated marginal posterior density estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ behave in a fairly symmetric manner. As a result, using the LF and PS methodologies in the presence of the T2APH mechanism, the numerical outcomes of the offered estimates of $\alpha$, $\lambda$, $R(t)$, and $h(t)$ from the RME data set furnish a significant examination of the WE lifetime model.