Article

New and Efficient Estimators of Reliability Characteristics for a Family of Lifetime Distributions under Progressive Censoring

by Syed Ejaz Ahmed 1, Reza Arabi Belaghi 2, Abdulkadir Hussein 3,* and Alireza Safariyan 4
1 Department of Mathematics and Statistics, Brock University, St. Catharines, ON L2S 3A1, Canada
2 Unit of Applied Statistics and Mathematics, Department of Energy and Technology, Swedish University of Agricultural Sciences, 75007 Uppsala, Sweden
3 Department of Mathematics and Statistics, University of Windsor, Windsor, ON N9B 3P4, Canada
4 Department of Statistics, Jahrom University, Jahrom 74137-66171, Iran
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(10), 1599; https://doi.org/10.3390/math12101599
Submission received: 21 February 2024 / Revised: 9 May 2024 / Accepted: 13 May 2024 / Published: 20 May 2024
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics)

Abstract:
Estimation of reliability and stress–strength parameters is important in the manufacturing industry. In this paper, we develop shrinkage-type estimators for the reliability and stress–strength parameters based on progressively censored data from a rich class of distributions. These new estimators improve the performance of the commonly used Maximum Likelihood Estimators (MLEs) by reducing their mean squared errors. We provide analytical asymptotic and bootstrap confidence intervals for the targeted parameters. Through a detailed simulation study, we demonstrate that the new estimators have better performance than the MLEs. Finally, we illustrate the application of the new methods to two industrial data sets, showcasing their practical relevance and effectiveness.

1. Introduction

The reliability function, denoted by R(t), is defined as the probability of failure-free operation until time t. Thus, if the random variable X denotes the lifetime of an item or a system, then R(t) = P(X > t). Another measure of reliability, under a stress–strength setup, is the probability P = P(X > Y), which represents the reliability of an item or a system of random strength X subject to random stress Y in a bivariate setting. These two reliability measures are frequently used in many applications. There is a substantial body of literature on the estimation and testing of the parameters R(t) and P under progressive censoring. For instance, ref. [1] proposed shrinkage estimators of R(t) for the one-parameter exponential distribution. The authors of [2,3] estimated R(t) under type-I and type-II censoring, while for estimating P, they used the complete sample case. The benefits of such improved estimation strategies have attracted extensive academic research, as evidenced by studies such as [4,5,6,7,8,9,10,11].
There are many scenarios in life-testing and reliability experiments in which units are lost or removed from the experiment before failure. This removal may be unintentional, as in the case of an unexpected breakdown of an experimental unit, or it may be designed into the study. Unintentional loss may occur due to unforeseen circumstances such as lack of funding or lack of access to testing facilities. However, units are often deliberately removed from the test for reasons such as freeing up testing facilities for other experiments, saving time and money, or meeting ethical considerations in experiments involving human subjects. Among the various censoring schemes, the Type-II progressive censoring scheme has become very common in recent years because it allows the experimenter to remove active units during the experiment. The Maximum Likelihood Estimators (MLEs) are commonly used to estimate the reliability parameters under Type-II progressive censoring ([12]). However, the efficiency of MLEs can be enhanced by integrating non-sample information, often available in the form of prior hypotheses regarding the parameter under consideration. Shrinkage estimation is one approach to amalgamate this non-sample information with existing estimators, and a comprehensive exploration of shrinkage and similar estimation methodologies is provided by [13,14].
Motivated by the potential improvements in estimation accuracy, our study focuses on devising shrinkage estimators for reliability and stress–strength parameters within a specific family of lifetime distributions. These parameters play crucial roles in assessing the performance and reliability of various systems, ranging from mechanical components to medical devices. Our research goes beyond mere theoretical formulation; we introduce novel shrinkage-type estimators that incorporate out-of-sample information, thereby offering a practical and robust solution to estimation challenges encountered in real-world scenarios. By addressing limitations inherent in traditional models, our proposed estimators exhibit superior performance, as demonstrated through rigorous simulation studies. Furthermore, our study underscores the practical relevance of these estimators by showcasing their efficacy in industrial contexts by applying them to real-world data. Through real-world applications, we illustrate how our shrinkage estimators can facilitate more accurate and reliable assessments of system reliability and stress–strength parameters.
The rest of this manuscript is organized as follows. In Section 2, we give a brief introduction to existing results on the MLEs and then outline and develop the proposed improved estimation strategies for the reliability measures. The exact biases and efficiencies of the proposed estimators are also derived in this section. Section 3 contains an extensive simulation study to evaluate the performance of the proposed estimators. Section 4 is devoted to the application of the methods to real data sets. We give some concluding remarks in Section 5.

2. Methods

2.1. The MLEs under Progressive Censoring

In this paper, we consider progressive Type-II censoring schemes, where from a total of n units placed on a life-test, only m are completely observed until failure. At the time of the first failure, R_1 of the remaining n − 1 units are randomly withdrawn (or censored) from the life-testing experiment. Subsequently, at each subsequent failure, a predetermined number of units is censored: R_2 of the n − 2 − R_1 units remaining after the second failure, and so on. The process continues until the m-th failure, at which point all of the remaining R_m = n − m − R_1 − ⋯ − R_{m−1} units are censored. This censoring occurs progressively in m stages, encompassing special cases such as the complete sample situation (when m = n and R_i = 0 for i = 1, …, m) and the conventional Type-II right censoring situation (when R_i = 0 for i = 1, …, m − 1 and R_m = n − m).
In this scheme, the quantities R 1 , R 2 , , R m (and hence m) are predetermined. As a result, the censoring times (T’s) are random, but the number of units failing before each censoring time is fixed. For an in-depth discussion, see [15]. This censoring scheme has been extensively studied by researchers, including [16,17,18,19]; see also [20] for additional details.
Consider n independent units subjected to a life-test, with their failure times X 1 , X 2 , , X n being identically distributed with cumulative distribution function F ( x ) and probability density function f ( x ) . Assume that the prefixed number of failures to observe is m and that the progressive Type-II right censoring scheme is given by ( R 1 , R 2 , , R m ) . The m completely observed failure times are denoted by X i : m : n ( R 1 , R 2 , , R m ) for i = 1 , 2 , , m . For brevity, when the censoring scheme is clear from the context, we use the notation X i : m : n for i = 1 , 2 , , m to refer to these failure times, keeping in mind that they depend on the specific censoring scheme ( R 1 , R 2 , , R m ) .
Following [12], we assume that the random variable X follows a distribution with the pdf
$$f(x; a, \lambda, \theta) = \lambda\, G'(x; a, \theta)\,\exp\{-\lambda\, G(x; a, \theta)\}; \qquad x > a \ge 0,\ \lambda > 0.$$
Here, G(x; a, θ) is a nonnegative, differentiable, and increasing function of x, which may also depend on the known parameter a and on θ, the latter being a known vector of parameters; G′ denotes its derivative with respect to x.
We denote this family of lifetime distributions as CN and write X ∼ CN(a, λ, θ, G) to indicate that X is distributed according to the CN distribution. We avoid specifying the function G(·) and complete our treatment in this paper under this general form. However, specific choices of G lead to more than ten known classes of distributions, such as the exponential (obtained when G(x; a, θ) = x), the Lomax, the Pareto, the one- and two-parameter exponential, the Weibull, and the Burr distributions (see [2] for details).
The Maximum Likelihood Estimators (MLEs) of λ and R ( t ) are, respectively, given by
$$\hat{\lambda} = \frac{m}{S_m},$$
$$\hat{R}(t) = \exp\left\{-\frac{m}{S_m}\,G(t; a, \theta)\right\},$$
where $S_m = \sum_{i=1}^{m} G(X_i; a, \theta) + \sum_{i=1}^{m} R_i\, G(X_i; a, \theta)$.
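For concreteness, the following is a minimal R sketch (hypothetical helper code, not from the paper) of these closed-form MLEs; the observed failures x, the removal scheme R, the time point t, and the function G are assumed to be supplied by the user.

```r
# Sketch: MLEs of lambda and R(t) under progressive Type-II censoring for the
# CN family; x = observed failure times, R = removal scheme (R_1, ..., R_m),
# G = the known function G(.; a, theta), e.g. identity for the exponential case.
mle_reliability <- function(x, R, t, G = identity) {
  m  <- length(x)
  Sm <- sum(G(x)) + sum(R * G(x))       # S_m
  lambda_hat <- m / Sm                  # MLE of lambda
  R_hat <- exp(-(m / Sm) * G(t))        # MLE of R(t)
  list(lambda = lambda_hat, S_m = Sm, R_t = R_hat)
}
```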
For the estimation of P, let X ∼ CN(a, λ_1, θ_1, G) be independent of Y ∼ CN(a, λ_2, θ_2, G), so that
$$P = \frac{\lambda_1}{\lambda_1 + \lambda_2},$$
and hence the MLE of P is
$$\hat{P} = \frac{\hat{\lambda}_1}{\hat{\lambda}_1 + \hat{\lambda}_2},$$
where $\hat{\lambda}_1 = m_1/S_{m_1}$, $\hat{\lambda}_2 = m_2/T_{m_2}$, $S_{m_1} = \sum_{i=1}^{m_1} G(X_i; a, \theta_1) + \sum_{i=1}^{m_1} R_i\, G(X_i; a, \theta_1)$, and $T_{m_2} = \sum_{i=1}^{m_2} G(Y_i; a, \theta_2) + \sum_{i=1}^{m_2} R_i\, G(Y_i; a, \theta_2)$.
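A corresponding sketch for P (again a hypothetical helper, mle_P is not the authors' code) follows directly from the two censored samples.

```r
# Sketch: MLE of P = lambda1 / (lambda1 + lambda2) from two independent
# progressively censored samples (x, Rx) and (y, Ry) sharing the same G.
mle_P <- function(x, Rx, y, Ry, G = identity) {
  lambda1 <- length(x) / sum((1 + Rx) * G(x))   # m1 / S_m1
  lambda2 <- length(y) / sum((1 + Ry) * G(y))   # m2 / T_m2
  lambda1 / (lambda1 + lambda2)
}
```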

2.2. The Proposed Estimators of R ( t )

Sometimes, due to past knowledge or experience, the experimenter may be in a position to make an initial guess on some of the parameters of interest. This prior information can be expressed in the form of the following set of hypotheses:
$$H_0: R(t) = R_0(t) = R_0 \quad \text{versus} \quad H_a: R(t) \neq R_0(t) = R_0,$$
where $R_0 \in [0, 1]$ is a known constant and t is a fixed point in time. These hypotheses are equivalent to $H_0: \lambda = \lambda_0$ against $H_a: \lambda \neq \lambda_0$, where $\lambda_0 = \log(1/R_0)/G(t; a, \theta)$.
The prior information about R(t), which we have just formulated in terms of a null and an alternative hypothesis, may or may not be correct. One possible way of testing the hypotheses is to retain H_0 if the observed value of the test statistic falls in
$$A = \{L:\ C_1 \le L \le C_2\},$$
where $C_1 = \chi^2_{2m}(\alpha/2)$, $C_2 = \chi^2_{2m}(1-\alpha/2)$, $L = 2\lambda_0 S_m \sim \chi^2_{2m}$ under $H_0$, and α is the level of significance. This is justified by the fact that $S_m$ has a gamma distribution, so that $2\lambda S_m$ follows a chi-square distribution with 2m degrees of freedom.
Following [21], define the preliminary test (PT) estimator based on MLE as
$$\hat{R}_{\mathrm{PT}}(t) = \hat{R}(t) - \left(\hat{R}(t) - R_0\right) I(A),$$
where I ( A ) is the indicator function of the set A.
Note that the PT estimator depends on the level of significance α. Further, R̂_PT(t) is not smooth, since it jumps between the two extreme choices R̂(t) and R_0; hence it is not an appropriate estimator for lifetime and duration studies. Stein-type estimators, in contrast, produce all possible values between the unrestricted estimator R̂(t) and the restricted estimator R_0, depending on the observed value of the test statistic used for the preliminary test, thereby shrinking the unrestricted estimator toward the target value. Therefore, we define the Stein-type shrinkage (S) estimator as
$$\hat{R}_{\mathrm{S}}(t) = \hat{R}(t) - d\left(\hat{R}(t) - R_0\right) L^{-1} = \hat{R}(t) - \frac{d\left(\hat{R}(t) - R_0\right)}{2\lambda_0 S_m},$$
where d is an appropriate nonnegative bounded constant. For more details on Stein-type shrinkage estimators in general and in the context of reliability, see [21,22,23,24,25,26,27,28,29,30].
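The following R sketch (hypothetical code, with shrink_R and the default d = 0.5 being illustrative choices rather than the paper's) computes the MLE, PT, and S estimates of R(t) from one progressively censored sample, using the chi-square acceptance region described above.

```r
# Sketch of the PT and Stein-type (S) estimators of R(t) for a prior guess R0;
# d is a user-chosen nonnegative constant (the MSE-optimal value is given in
# Section 2.2.1), alpha is the significance level of the preliminary test.
shrink_R <- function(x, R, t, R0, alpha = 0.05, d = 0.5, G = identity) {
  m  <- length(x)
  Sm <- sum((1 + R) * G(x))
  R_hat   <- exp(-(m / Sm) * G(t))                    # unrestricted MLE
  lambda0 <- log(1 / R0) / G(t)                       # lambda under H0
  L <- 2 * lambda0 * Sm                               # ~ chi^2(2m) under H0
  inA <- L >= qchisq(alpha / 2, 2 * m) && L <= qchisq(1 - alpha / 2, 2 * m)
  c(MLE = R_hat,
    PT  = R_hat - (R_hat - R0) * inA,                 # preliminary-test estimator
    S   = R_hat - d * (R_hat - R0) / (2 * lambda0 * Sm))  # Stein-type shrinkage
}
```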

2.2.1. Bias and Mean Square Errors

In this section, we derive the bias, variance, and mean square errors (MSE) of the proposed estimators. For notational convenience, we define the following quantities (with $w = 2\lambda S_m$):
  • $\varphi_1 = E(\hat{R}(t)I(A)) = \int_{C_1/\beta}^{C_2/\beta}\exp\left\{-\left(\frac{2m\lambda G(t;a,\theta)}{w}+\frac{w}{2}\right)\right\}\frac{w^{m-1}}{2^m\Gamma(m)}\,dw$
  • $\varphi_2 = E(\hat{R}^2(t)I(A)) = \int_{C_1/\beta}^{C_2/\beta}\exp\left\{-\left(\frac{4m\lambda G(t;a,\theta)}{w}+\frac{w}{2}\right)\right\}\frac{w^{m-1}}{2^m\Gamma(m)}\,dw$
  • $\varphi_3 = E(I(A)) = H_{2m}(C_2/\beta) - H_{2m}(C_1/\beta)$
  • $\varphi_4 = E(\hat{R}(t)) = \frac{2\,(m\lambda G(t;a,\theta))^{m/2}}{\Gamma(m)}\,K_m\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right)$
  • $\varphi_5 = E(\hat{R}^2(t)) = \frac{2\,(2m\lambda G(t;a,\theta))^{m/2}}{\Gamma(m)}\,K_m\!\left(2\sqrt{2m\lambda G(t;a,\theta)}\right)$
  • $\varphi_6 = E(1/S_m^2) = \frac{\lambda^2}{(m-1)(m-2)}$
  • $\varphi_7 = E(\hat{R}(t)/S_m) = \frac{\lambda}{2^{m-2}\Gamma(m)}\left(4m\lambda G(t;a,\theta)\right)^{\frac{m-1}{2}}K_{m-1}\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right)$
  • $\varphi_8 = E(\hat{R}^2(t)/S_m) = \frac{\lambda}{2^{m-2}\Gamma(m)}\left(8m\lambda G(t;a,\theta)\right)^{\frac{m-1}{2}}K_{m-1}\!\left(2\sqrt{2m\lambda G(t;a,\theta)}\right)$
  • $\varphi_9 = E(\hat{R}(t)/S_m^2) = \frac{\lambda^2}{2^{m-3}\Gamma(m)}\left(4m\lambda G(t;a,\theta)\right)^{\frac{m-2}{2}}K_{m-2}\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right)$
  • $\varphi_{10} = E(\hat{R}^2(t)/S_m^2) = \frac{\lambda^2}{2^{m-3}\Gamma(m)}\left(8m\lambda G(t;a,\theta)\right)^{\frac{m-2}{2}}K_{m-2}\!\left(2\sqrt{2m\lambda G(t;a,\theta)}\right)$
  • $\varphi_{11} = E(1/S_m) = \frac{\lambda}{m-1}$
where $\beta = \lambda_0/\lambda$, $H_k(\cdot)$ denotes the cumulative distribution function (cdf) of the chi-square distribution with k degrees of freedom, and $K_r(\cdot)$ is the modified Bessel function of the second kind of order r (see [31]).
The following two theorems, whose proofs are relegated to Appendix A, summarize the bias and MSE expressions of the estimators of R(t):
 Theorem 1. 
The bias expressions for the unrestricted, PT, and S estimators are given by
(i)
$$\mathrm{Bias}(\hat{R}(t)) = \varphi_4 - R(t),$$
(ii)
$$\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t)) = \mathrm{Bias}(\hat{R}(t)) - \varphi_1 + R_0\varphi_3,$$
(iii)
$$\mathrm{Bias}(\hat{R}_{\mathrm{S}}(t)) = \mathrm{Bias}(\hat{R}(t)) - \frac{d}{2\lambda_0}\varphi_7 + \frac{d R_0}{2\lambda_0}\varphi_{11}.$$
 Theorem 2. 
The MSE expressions of the unrestricted, PT, and S estimators are given by
(i)
$$\mathrm{MSE}(\hat{R}(t)) = \varphi_5 - 2R(t)\varphi_4 + R^2(t),$$
(ii)
$$\mathrm{MSE}(\hat{R}_{\mathrm{PT}}(t)) = \varphi_5 - \varphi_4^2 - \varphi_2 - \varphi_1^2 + R_0^2\varphi_3(1-\varphi_3) + 2R_0\varphi_3(\varphi_1 - \varphi_4) + 2\varphi_1\varphi_4 + \left[\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t))\right]^2,$$
(iii)
$$\mathrm{MSE}(\hat{R}_{\mathrm{S}}(t)) = \mathrm{MSE}(\hat{R}(t)) + \frac{d^2}{4\lambda_0^2}\left[\varphi_{10} + R_0^2\varphi_6 - 2R_0\varphi_9\right] - \frac{d}{\lambda_0}\left[\varphi_8 - (R_0 + R(t))\varphi_7 + R_0 R(t)\varphi_{11}\right].$$
From the MSE expressions above, one can deduce the value of d that minimizes MSE(R̂_S(t)) as $d = \frac{2\lambda_0\left[\varphi_8 - (R_0 + R(t))\varphi_7 + R_0 R(t)\varphi_{11}\right]}{\varphi_{10} + R_0^2\varphi_6 - 2R_0\varphi_9}$. Furthermore, the relative efficiency (RE) of the PT and S estimators, relative to the unrestricted MLE, can also be deduced from the above theorems as
$$\mathrm{RE}(\hat{R}_{\mathrm{PT}}(t)) = \frac{\mathrm{MSE}(\hat{R}(t))}{\mathrm{MSE}(\hat{R}_{\mathrm{PT}}(t))} = \frac{\varphi_5 - 2R(t)\varphi_4 + R^2(t)}{\varphi_5 - \varphi_4^2 - \varphi_2 - \varphi_1^2 + R_0^2\varphi_3(1-\varphi_3) + 2R_0\varphi_3(\varphi_1-\varphi_4) + 2\varphi_1\varphi_4 + \left[\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t))\right]^2},$$
and
$$\mathrm{RE}(\hat{R}_{\mathrm{S}}(t)) = \frac{\mathrm{MSE}(\hat{R}(t))}{\mathrm{MSE}(\hat{R}_{\mathrm{S}}(t))} = \frac{\varphi_5 - 2R(t)\varphi_4 + R^2(t)}{\varphi_5 - 2R(t)\varphi_4 + R^2(t) + \frac{d^2}{4\lambda_0^2}\left[\varphi_{10} + R_0^2\varphi_6 - 2R_0\varphi_9\right] - \frac{d}{\lambda_0}\left[\varphi_8 - (R_0+R(t))\varphi_7 + R_0R(t)\varphi_{11}\right]}.$$
Figure 1 and Figure 2 display plots of the relative efficiency expressions above for the exponential model G(x; a, θ) = x with R_0 = 0.70. Based on these figures, the PT estimator has better performance near R(t) = R_0 = 0.70, while its performance deteriorates as we move away from R_0. Although the PT estimator eventually becomes worse than the MLE away from the point 0.7, the S estimator has a wider range over which it outperforms the MLE and, away from R_0 = 0.7, its relative efficiency converges to that of the MLE from above. This means that the S estimator dominates the MLE uniformly in terms of MSE.
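The Bessel-type expressions above are straightforward to evaluate numerically. The sketch below (hypothetical helper functions phi4, phi5, and mse_mle, not the authors' code) evaluates $\varphi_4$, $\varphi_5$, and the exact MSE of the MLE for the exponential setting used in Figures 1 and 2.

```r
# Sketch: exact E[R_hat(t)] (phi_4), E[R_hat^2(t)] (phi_5) and MSE of the MLE
# via the modified Bessel function of the second kind (besselK in base R),
# for the exponential case where G(t) = t (so Gt below is just t).
phi4 <- function(m, lambda, Gt)
  2 * (m * lambda * Gt)^(m / 2) / gamma(m) * besselK(2 * sqrt(m * lambda * Gt), m)
phi5 <- function(m, lambda, Gt)
  2 * (2 * m * lambda * Gt)^(m / 2) / gamma(m) * besselK(2 * sqrt(2 * m * lambda * Gt), m)
mse_mle <- function(m, lambda, Gt) {
  Rt <- exp(-lambda * Gt)                       # true R(t)
  phi5(m, lambda, Gt) - 2 * Rt * phi4(m, lambda, Gt) + Rt^2
}
mse_mle(m = 10, lambda = 0.07, Gt = 5)          # the setting of Figures 1 and 2
```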

2.2.2. Bootstrapped Confidence Intervals

In this section, we describe an algorithm based on bootstrap-t for building confidence intervals for the proposed estimators [32]. For this, we need variance formulae for the proposed estimators which can be easily deduced from the theorems in the previous section as follows:
 Corollary 1. 
The variance expressions of the unrestricted, PT, and S estimators are given by
(1)
$$\mathrm{Var}(\hat{R}(t)) = \varphi_5 - \varphi_4^2,$$
(2)
$$\mathrm{Var}(\hat{R}_{\mathrm{PT}}(t)) = \varphi_5 - \varphi_4^2 - \varphi_2 - \varphi_1^2 + R_0^2\varphi_3(1-\varphi_3) + 2R_0\varphi_1\varphi_3 + 2\varphi_1\varphi_4 - 2R_0\varphi_3\varphi_4,$$
(3)
$$\mathrm{Var}(\hat{R}_{\mathrm{S}}(t)) = \varphi_5 - 2R(t)\varphi_4 + R^2(t) + \frac{d^2}{4\lambda_0^2}\left[\varphi_{10} + R_0^2\varphi_6 - 2R_0\varphi_9\right] - \frac{d}{\lambda_0}\left[\varphi_8 - (R_0+R(t))\varphi_7 + R_0R(t)\varphi_{11}\right] - \left[\varphi_4 - R(t) - \frac{d}{2\lambda_0}\varphi_7 + \frac{dR_0}{2\lambda_0}\varphi_{11}\right]^2.$$
The following Algorithm 1 formalizes the procedure for computing the bootstrapped confidence intervals.
Algorithm 1 Bootstrap-t CI for R ( t ) Based on the Bootstrap Variance Estimate
Step 1.
Based on the observed progressively Type-II right censored sample $X_{i:m:n}$, $i = 1, \ldots, m$, obtained under the censoring scheme $(R_1, R_2, \ldots, R_m)$, compute $\hat{\lambda}$, $\hat{R}(t)$, $\hat{R}_{\mathrm{PT}}(t)$ and $\hat{R}_{\mathrm{S}}(t)$ from (2), (3), (7) and (8), respectively.
Step 2.
Generate a progressively censored bootstrap sample $X^*_{i:m:n} \sim \mathrm{Exp}(\hat{\lambda})$, $i = 1, \ldots, m$, under the same censoring scheme. Use it to obtain $\hat{\lambda}^*$, $\hat{R}^*(t)$, $\hat{R}^{*\mathrm{PT}}(t)$ and $\hat{R}^{*\mathrm{S}}(t)$.
Step 3.
Repeat Step 2 B times and obtain $\hat{R}^*_{(b)}(t)$, $\hat{R}^{*\mathrm{PT}}_{(b)}(t)$, and $\hat{R}^{*\mathrm{S}}_{(b)}(t)$, $b = 1, \ldots, B$.
Step 4.
For each iteration of Step 3, run another parametric bootstrap to estimate the standard deviation of $\hat{R}^*_{(b)}(t)$, say $\hat{\sigma}(\hat{R}^*_{(b)}(t))$. More precisely, repeat Step 2 B times with $\hat{\lambda}^*_{(b)}$ instead of $\hat{\lambda}$, and then calculate
$$\hat{\sigma}(\hat{R}^*_{(b)}(t)) = \sqrt{\frac{1}{B-1}\sum_{b'=1}^{B}\left(\hat{R}^{**}_{(b')}(t) - \bar{R}^{**}(t)\right)^2},$$
where $\bar{R}^{**}(t) = \frac{1}{B}\sum_{b'=1}^{B}\hat{R}^{**}_{(b')}(t)$.
Step 5.
Let $t^* = \left(t^*_{(1)}, \ldots, t^*_{(B)}\right)$, where $t^*_{(b)} = \frac{\hat{R}^*_{(b)}(t) - \hat{R}(t)}{\hat{\sigma}(\hat{R}^*_{(b)}(t))}$, $b = 1, \ldots, B$.
Step 6.
Compute the $100(1-\alpha)\%$ bootstrap-t CI for R(t) as $\left(\hat{R}(t) - t^*_{1-\alpha/2}\,\hat{\sigma}(\hat{R}(t)),\ \hat{R}(t) - t^*_{\alpha/2}\,\hat{\sigma}(\hat{R}(t))\right)$, where $t^*_{\gamma}$ is the $100\gamma\%$ percentile of $t^*$ from Step 5 and $\hat{\sigma}(\hat{R}(t)) = \sqrt{\widehat{\mathrm{Var}}(\hat{R}(t))}$ is given by (9). The same process is carried out for the other proposed estimators.
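A minimal R sketch of Algorithm 1 is given below for the exponential case G(x) = x. The names r_progressive and boot_t_ci_R are hypothetical; r_progressive implements the standard uniform-spacings generator of progressively Type-II censored samples described in [15], and, as a simplification, the outer standard error is estimated by an extra bootstrap rather than by the plug-in variance formula of Corollary 1.

```r
# Generator of a progressively Type-II censored Exp(rate) sample for removal
# scheme R = (R_1, ..., R_m), via the uniform-spacings algorithm of [15].
r_progressive <- function(R, rate = 1) {
  m <- length(R)
  W <- runif(m)
  e <- 1:m + cumsum(rev(R))           # i + R_m + ... + R_{m-i+1}
  V <- W^(1 / e)
  U <- 1 - cumprod(rev(V))            # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}
  qexp(U, rate = rate)                # ordered, progressively censored failures
}

# Sketch of Algorithm 1: double-bootstrap-t CI for R(t), exponential case.
boot_t_ci_R <- function(x, R, t, alpha = 0.05, B = 200) {
  m   <- length(x)
  est <- function(xx) exp(-(m / sum((1 + R) * xx)) * t)     # R_hat(t)
  lam <- m / sum((1 + R) * x)
  R_hat <- est(x)
  tstat <- numeric(B)
  for (b in 1:B) {                                          # Steps 2-5
    xb  <- r_progressive(R, rate = lam)
    lb  <- m / sum((1 + R) * xb)
    Rbb <- replicate(B, est(r_progressive(R, rate = lb)))   # inner bootstrap
    tstat[b] <- (est(xb) - R_hat) / sd(Rbb)
  }
  se <- sd(replicate(B, est(r_progressive(R, rate = lam)))) # outer std. error
  q  <- unname(quantile(tstat, c(1 - alpha / 2, alpha / 2)))
  c(lower = R_hat - q[1] * se, upper = R_hat - q[2] * se)   # Step 6
}
```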

2.2.3. Consistency of the Estimators

The consistency of an estimator refers to its behavior as the sample size increases indefinitely. It is analytically difficult to prove the consistency of the new estimators; therefore, we used a simulation experiment to investigate it. A consistent estimator is expected to converge to the true parameter value as the sample size grows. Therefore, as we increase the sample size, we assess how the absolute errors (the differences between the true parameter and its estimates) of the estimators change. Figure 3 illustrates that, for some values of the parameters, there is a decreasing trend in the absolute error with increasing sample size. This trend indicates that the estimators approach the true parameter value, demonstrating their consistency.
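The following is a small illustrative sketch of such a consistency check (hypothetical code reusing r_progressive and shrink_R from the earlier sketches; the censoring scheme here is a simple single-stage removal and is not the exact setup of Figure 3).

```r
# Mean absolute error of the MLE of R(t) as m (and n = 10m) grow; the error
# typically shrinks with m, in line with the trend shown in Figure 3.
mae <- sapply(c(10, 20, 40, 80), function(m) {
  n <- 10 * m
  R_scheme <- c(n - m, rep(0, m - 1))         # remove all extras at 1st failure
  lambda <- 0.07; t0 <- 5; trueR <- exp(-lambda * t0)
  err <- replicate(500,
    abs(shrink_R(r_progressive(R_scheme, rate = lambda),
                 R_scheme, t = t0, R0 = 0.7)["MLE"] - trueR))
  mean(err)
})
mae
```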

2.3. The Proposed Estimators of P

Similar to the constructions performed in the previous subsection, suppose we are provided with the following set of hypotheses:
$$H_0: P = P_0 \quad \text{versus} \quad H_a: P \neq P_0,$$
where $P_0 \in [0, 1]$ is a known constant. For $k = P_0/(1 - P_0)$, these hypotheses are equivalent to $H_0: \lambda_1 = k\lambda_2$ against the alternative $H_a: \lambda_1 \neq k\lambda_2$. Using the familiar likelihood ratio test, we retain $H_0$ if the observed sample falls in
$$A = \left\{(\underline{x}, \underline{y}):\ \frac{k\,m_1}{m_2}\,F_{\alpha/2}(2m_1, 2m_2) < L < \frac{k\,m_1}{m_2}\,F_{1-\alpha/2}(2m_1, 2m_2)\right\},$$
where $L = S_{m_1}/T_{m_2}$ follows the F-distribution with $(2m_1, 2m_2)$ degrees of freedom. Making the transformation
$$W = \left(1 + \frac{\lambda_2}{\lambda_1} F\right)^{-1},$$
the pdf of W turns out to be
$$f(w) = \frac{\left(\frac{m_2\lambda_2}{m_1\lambda_1}\right)^{m_2}}{B(m_1, m_2)}\cdot\frac{w^{m_2-1}(1-w)^{m_1-1}}{\left[1 + \left(\frac{m_2\lambda_2}{m_1\lambda_1} - 1\right)w\right]^{m_1+m_2}};\qquad 0 < w < 1,$$
and
$$\psi(l, q) = E\!\left(W^l (1-W)^{-q}\right) = \frac{\left(\frac{m_2\lambda_2}{m_1\lambda_1}\right)^{m_2}}{B(m_1, m_2)}\int_0^1 \frac{w^{m_2+l-1}(1-w)^{m_1-q-1}}{\left[1 + \left(\frac{m_2\lambda_2}{m_1\lambda_1} - 1\right)w\right]^{m_1+m_2}}\,dw = \frac{(-1)^{l+m_2-1}}{B(m_1, m_2)}\left(\frac{m_1\lambda_1}{m_2\lambda_2}\right)^{l}\left(1 - \frac{m_1\lambda_1}{m_2\lambda_2}\right)^{-(m_1+m_2+l-q-1)}\sum_{i=0}^{m_1-q-1}(-1)^i\binom{m_1-q-1}{i}\left(\frac{m_1\lambda_1}{m_2\lambda_2}\right)^{i}\sum_{j=0}^{m_2+l-1}(-1)^j\binom{m_2+l-1}{j}\, I\!\left(\frac{m_2\lambda_2}{m_1\lambda_1};\ j+i-m_1-m_2\right),$$
where
$$I(c; p) = \int_1^c t^p\,dt = \begin{cases}\dfrac{c^{p+1}-1}{p+1}, & p \neq -1,\\[4pt] \log c, & p = -1.\end{cases}$$
Now, define the PT estimator of P as
$$\hat{P}_{\mathrm{PT}} = \hat{P} - \left(\hat{P} - P_0\right) I(A).$$
The S estimator of P is given by
$$\hat{P}_{\mathrm{S}} = \hat{P} - d\left(\hat{P} - P_0\right) L^{-1} = \hat{P} - d\left(\hat{P} - P_0\right)\frac{T_{m_2}}{S_{m_1}},$$
where d is a proper nonnegative bounded constant.
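The corresponding R sketch for P (hypothetical shrink_P helper with an illustrative default d = 0.5; the acceptance region is the F-based one stated above) is as follows.

```r
# Sketch of the PT and Stein-type estimators of P for a prior guess P0;
# (x, Rx) and (y, Ry) are the two progressively censored samples.
shrink_P <- function(x, Rx, y, Ry, P0, alpha = 0.05, d = 0.5, G = identity) {
  m1 <- length(x); m2 <- length(y)
  Sm1 <- sum((1 + Rx) * G(x)); Tm2 <- sum((1 + Ry) * G(y))
  P_hat <- (m1 / Sm1) / (m1 / Sm1 + m2 / Tm2)         # unrestricted MLE of P
  k <- P0 / (1 - P0)
  L <- Sm1 / Tm2
  inA <- L > k * m1 / m2 * qf(alpha / 2, 2 * m1, 2 * m2) &&
         L < k * m1 / m2 * qf(1 - alpha / 2, 2 * m1, 2 * m2)
  c(MLE = P_hat,
    PT  = P_hat - (P_hat - P0) * inA,                 # preliminary-test estimator
    S   = P_hat - d * (P_hat - P0) * Tm2 / Sm1)       # Stein-type shrinkage
}
```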

2.3.1. Bias and Mean Square Errors

Again, we define the following symbols for ease of presentation,
  • $\psi_1 = E(\hat{P}) = E\!\left[\left(1 + \frac{\hat{\lambda}_2}{\hat{\lambda}_1}\right)^{-1}\right] = E\!\left[\left(1 + \frac{\lambda_2}{\lambda_1}F\right)^{-1}\right] = E(W)$,
  • $\psi_2 = E(\hat{P}^2) = E(W^2)$,
  • $\psi_3 = E(\hat{P}\,I(A)) = \int_{\left[1 + kF_{1-\alpha/2}(2m_1,2m_2)\right]^{-1}}^{\left[1 + kF_{\alpha/2}(2m_1,2m_2)\right]^{-1}} w\,f(w)\,dw$,
  • $\psi_4 = E(\hat{P}^2 I(A)) = \int_{\left[1 + kF_{1-\alpha/2}(2m_1,2m_2)\right]^{-1}}^{\left[1 + kF_{\alpha/2}(2m_1,2m_2)\right]^{-1}} w^2 f(w)\,dw$,
  • $\psi_5 = E(I(A)) = \int_{\left[1 + kF_{1-\alpha/2}(2m_1,2m_2)\right]^{-1}}^{\left[1 + kF_{\alpha/2}(2m_1,2m_2)\right]^{-1}} f(w)\,dw$.
Now, by using the above notations, we collect the biases and mean square errors of all the estimators of P in the following two theorems.
 Theorem 3. 
The bias expressions for the unrestricted, PT, and S estimators are given by
(i)
$$\mathrm{Bias}(\hat{P}) = \psi_1 - P,$$
(ii)
$$\mathrm{Bias}(\hat{P}_{\mathrm{PT}}) = \mathrm{Bias}(\hat{P}) - \psi_3 + P_0\psi_5,$$
(iii)
$$\mathrm{Bias}(\hat{P}_{\mathrm{S}}) = \mathrm{Bias}(\hat{P}) - d\,\frac{m_2}{m_1}\psi(2,1) + dP_0\,\frac{m_2}{m_1}\psi(1,1).$$
For the proof, refer to Appendix A.
 Theorem 4. 
The MSE expressions for the unrestricted, PT, and S estimators are given by
(i)
$$\mathrm{MSE}(\hat{P}) = \psi_2 - 2P\psi_1 + P^2,$$
(ii)
$$\mathrm{MSE}(\hat{P}_{\mathrm{PT}}) = \mathrm{MSE}(\hat{P}) - \psi_4 + 2P\psi_3 + P_0(P_0 - 2P)\psi_5,$$
(iii)
$$\mathrm{MSE}(\hat{P}_{\mathrm{S}}) = \mathrm{MSE}(\hat{P}) + d^2\left(\frac{m_2}{m_1}\right)^2\left[\psi(4,2) - 2P_0\psi(3,2) + P_0^2\psi(2,2)\right] - 2d\,\frac{m_2}{m_1}\left[\psi(3,1) - (P_0 + P)\psi(2,1) + PP_0\psi(1,1)\right].$$
The proof is similar to Theorem 2.
The value of d that minimizes MSE(P̂_S) is $d = \frac{m_1}{m_2}\cdot\frac{\psi(3,1) - (P_0 + P)\psi(2,1) + PP_0\psi(1,1)}{\psi(4,2) - 2P_0\psi(3,2) + P_0^2\psi(2,2)}$. The efficiencies of the PT and S estimators relative to the MLE are, respectively,
$$\mathrm{RE}(\hat{P}_{\mathrm{PT}}) = \frac{\mathrm{MSE}(\hat{P})}{\mathrm{MSE}(\hat{P}_{\mathrm{PT}})} = \frac{\psi_2 - 2P\psi_1 + P^2}{\psi_2 - 2P\psi_1 + P^2 - \psi_4 + 2P\psi_3 + P_0(P_0 - 2P)\psi_5},$$
and
$$\mathrm{RE}(\hat{P}_{\mathrm{S}}) = \frac{\mathrm{MSE}(\hat{P})}{\mathrm{MSE}(\hat{P}_{\mathrm{S}})} = \frac{\psi_2 - 2P\psi_1 + P^2}{\mathrm{MSE}(\hat{P}) + d^2\left(\frac{m_2}{m_1}\right)^2\left[\psi(4,2) - 2P_0\psi(3,2) + P_0^2\psi(2,2)\right] - 2d\,\frac{m_2}{m_1}\left[\psi(3,1) - (P_0+P)\psi(2,1) + PP_0\psi(1,1)\right]}.$$
Figure 4 and Figure 5 represent the relative efficiency of the PT and S estimators in the exponential model with P = 0.50. Again, we can see that the PT estimator performs better when the prior guess is close to the true value P = 0.50, and as we move away from that point, its performance deteriorates quickly. On the other hand, the Stein-type estimator has a much larger range over which it performs better than the unrestricted MLE.

2.3.2. Bootstrapped Confidence Intervals

In this section, we propose an algorithm based on bootstrap-t for building confidence intervals for the proposed estimators ([32]). For this, we need variance formulae for the proposed estimators which can be easily deduced from the theorems in the previous section as follows:
 Corollary 2. 
The variance expressions of the unrestricted, PT, and S estimators are given by
(1)
$$\mathrm{Var}(\hat{P}) = \psi_2 - \psi_1^2,$$
(2)
$$\mathrm{Var}(\hat{P}_{\mathrm{PT}}) = \psi_2 - \psi_4 - \psi_1^2 + P_0\psi_5\left(P_0 - 2\psi_1 + 2\psi_3 - P_0\psi_5\right) - \psi_3(\psi_3 - 2\psi_1),$$
(3)
$$\mathrm{Var}(\hat{P}_{\mathrm{S}}) = \psi_2 - 2d\,\frac{m_2}{m_1}\left[\psi(3,1) - P_0\psi(2,1)\right] + \left(d\,\frac{m_2}{m_1}\right)^2\left[\psi(4,2) - 2P_0\psi(3,2) + P_0^2\psi(2,2)\right] - \psi_1^2 - \left(d\,\frac{m_2}{m_1}\right)^2\psi^2(2,1) - \left(dP_0\,\frac{m_2}{m_1}\right)^2\psi^2(1,1) + 2d\,\frac{m_2}{m_1}\psi_1\psi(2,1) - 2dP_0\,\frac{m_2}{m_1}\psi_1\psi(1,1) + 2\left(d\,\frac{m_2}{m_1}\right)^2 P_0\,\psi(2,1)\psi(1,1).$$
The following Algorithm 2 formalizes the procedure for computing the bootstrapped confidence intervals for P.
Algorithm 2 Bootstrap-t CI for P Based on the Bootstrap Variance Estimate
Step 1.
Based on the independent observed progressively Type-II right censored samples $X_{i:m_1:n_1}$, $i = 1, \ldots, m_1$, with censoring scheme $(R_1, R_2, \ldots, R_{m_1})$ and $Y_{j:m_2:n_2}$, $j = 1, \ldots, m_2$, with censoring scheme $(R'_1, R'_2, \ldots, R'_{m_2})$, compute the estimators $\hat{\lambda}_1$, $\hat{\lambda}_2$, $\hat{P}$, $\hat{P}_{\mathrm{PT}}$ and $\hat{P}_{\mathrm{S}}$ from (2), (5), (13), and (14), respectively.
Step 2.
Generate bootstrap samples $X^*_{i:m_1:n_1} \sim \mathrm{Exp}(\hat{\lambda}_1)$, $i = 1, \ldots, m_1$, and $Y^*_{j:m_2:n_2} \sim \mathrm{Exp}(\hat{\lambda}_2)$, $j = 1, \ldots, m_2$, under the same censoring schemes. Use them to obtain $\hat{\lambda}^*_1$, $\hat{\lambda}^*_2$, $\hat{P}^*$, $\hat{P}^{*\mathrm{PT}}$, and $\hat{P}^{*\mathrm{S}}$.
Step 3.
Repeat Step 2 B times and obtain $\hat{P}^*_{(b)}$, $\hat{P}^{*\mathrm{PT}}_{(b)}$, and $\hat{P}^{*\mathrm{S}}_{(b)}$, $b = 1, \ldots, B$.
Step 4.
For each iteration of Step 3, run another parametric bootstrap to estimate the standard deviation of $\hat{P}^*_{(b)}$, say $\hat{\sigma}(\hat{P}^*_{(b)})$. More precisely, repeat Step 2 B times with $\hat{\lambda}^*_1$ and $\hat{\lambda}^*_2$ instead of $\hat{\lambda}_1$ and $\hat{\lambda}_2$, and then calculate
$$\hat{\sigma}(\hat{P}^*_{(b)}) = \sqrt{\frac{1}{B-1}\sum_{b'=1}^{B}\left(\hat{P}^{**}_{(b')} - \bar{P}^{**}\right)^2},$$
where $\bar{P}^{**} = \frac{1}{B}\sum_{b'=1}^{B}\hat{P}^{**}_{(b')}$.
Step 5.
Let $t^* = \left(t^*_{(1)}, \ldots, t^*_{(B)}\right)$, where $t^*_{(b)} = \frac{\hat{P}^*_{(b)} - \hat{P}}{\hat{\sigma}(\hat{P}^*_{(b)})}$, $b = 1, \ldots, B$.
Step 6.
Compute the $100(1-\alpha)\%$ bootstrap-t CI for P as $\left(\hat{P} - t^*_{1-\alpha/2}\,\hat{\sigma}(\hat{P}),\ \hat{P} - t^*_{\alpha/2}\,\hat{\sigma}(\hat{P})\right)$, where $t^*_{\gamma}$ is the $100\gamma\%$ percentile of $t^*$ from Step 5 and $\hat{\sigma}(\hat{P}) = \sqrt{\widehat{\mathrm{Var}}(\hat{P})}$ is given by (15). The same process is carried out for the other proposed estimators.
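A compact sketch of Algorithm 2 in the exponential case is given below (hypothetical boot_t_ci_P, reusing the r_progressive generator sketched after Algorithm 1; the outer standard error again comes from an extra bootstrap instead of the plug-in formula of Corollary 2).

```r
# Sketch of Algorithm 2: double-bootstrap-t CI for P, exponential case.
boot_t_ci_P <- function(x, Rx, y, Ry, alpha = 0.05, B = 200) {
  rate <- function(z, Rz) length(z) / sum((1 + Rz) * z)     # MLE of lambda
  estP <- function(l1, l2) l1 / (l1 + l2)
  draw <- function(a, b) estP(rate(r_progressive(Rx, a), Rx),
                              rate(r_progressive(Ry, b), Ry))
  l1 <- rate(x, Rx); l2 <- rate(y, Ry)
  P_hat <- estP(l1, l2)
  tstat <- numeric(B)
  for (b in 1:B) {
    xb <- r_progressive(Rx, l1); yb <- r_progressive(Ry, l2)
    l1b <- rate(xb, Rx); l2b <- rate(yb, Ry)
    Pbb <- replicate(B, draw(l1b, l2b))                     # inner bootstrap
    tstat[b] <- (estP(l1b, l2b) - P_hat) / sd(Pbb)
  }
  se <- sd(replicate(B, draw(l1, l2)))
  q  <- unname(quantile(tstat, c(1 - alpha / 2, alpha / 2)))
  c(lower = P_hat - q[1] * se, upper = P_hat - q[2] * se)
}
```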

3. Simulation Study

Here we conduct a Monte Carlo simulation study with a small sample size to assess the performance of the estimators proposed in this paper. The simulation setting and assumptions are as follows:
R ( t ) : The true value of reliability is taken to be 0.50 at t = 3 ;
m: number of observed failures is taken to be 10;
n: sample size was taken to be 100;
λ o : obtained from R 0 = { 0.35 , 0.40 , 0.45 , 0.50 , 0.55 , 0.60 , 0.65 } (the prior guesses of R ( t ) );
R = ( R 1 , , R m ) : progressive Type-II censoring schemes which are taken to be
Scheme 1 ( S 1 ): R = ( 90 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ; Scheme 2 ( S 2 ): R = ( 0 , 0 , 0 , 0 , 90 , 0 , 0 , 0 , 0 , 0 ) ; and Scheme 3 ( S 3 ): R = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 90 ) .
For each combination of R and R 0 , 500 Monte Carlo samples of size n = 100 were generated from the distribution given in (1), taking G ( x ; a ; θ ) = x . The proposed estimators for R ( t ) are calculated under progressive Type-II censoring and their CIs are computed according to Algorithms 1 and 2. Let ( L , U ) be a CI of R ( t ) and ( L i , U i ) , i = 1 , 2 , , 500 , observed values of lower and upper bounds of the proposed CI. The empirical expected length and coverage probability of the intervals are, respectively, computed as
$$EL = \frac{1}{500}\sum_{i=1}^{500}(U_i - L_i), \qquad CP = \frac{1}{500}\sum_{i=1}^{500} I\left(L_i \le R(t) \le U_i\right).$$
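As an illustration, the following sketch (hypothetical code, reusing the r_progressive and shrink_R helpers from Section 2) generates the Monte Carlo replicates for one configuration and shows how EL and CP are computed from a matrix of interval bounds; the CI construction itself (Algorithm 1) is omitted here for brevity.

```r
# Monte Carlo sketch for R(t) under Scheme S1 (n = 100, m = 10, R(3) = 0.50).
set.seed(1)
R_scheme <- c(90, rep(0, 9))
t0 <- 3; trueR <- 0.50; lambda <- -log(trueR) / t0
est <- replicate(500, shrink_R(r_progressive(R_scheme, rate = lambda),
                               R_scheme, t = t0, R0 = 0.55))
rowMeans(est)   # Monte Carlo averages of the MLE, PT, and S estimates
# Given a 500 x 2 matrix ci with rows (L_i, U_i), EL and CP are computed as:
EL <- function(ci) mean(ci[, 2] - ci[, 1])
CP <- function(ci, trueR) mean(ci[, 1] <= trueR & trueR <= ci[, 2])
```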
Table 1 presents the coverage probability (CP) and expected length (EL) of the intervals for R(t) under the three progressive Type-II censoring schemes (S_1, S_2, and S_3) from an exponential model. From this table, we can see that the proposed estimators have, in most cases, higher coverage probabilities and shorter expected lengths than the usual estimators. This is more evident near the prior information R_0, as the S and PT estimators have CPs closer to the target 0.95 with shorter ELs. In general, the asymptotic intervals are conservative while the intervals based on the bootstrap are liberal.
As for the estimation of P , the simulation setup was as follows:
P : the true value of P = P ( X > Y ) is taken to be P = 0.70 ;
P_0: the prior guesses of P are taken to be {0.60, 0.67, 0.68, 0.69, 0.70, 0.71, 0.72, 0.75, 0.80};
m_1: the number of observed failures for X is taken to be 10;
m_2: the number of observed failures for Y is taken to be 8;
R = ( R 1 , , R m 1 ) : progressive Type-II censoring schemes for X, assumed to be Scheme 1 ( S 1 ): R = ( 90 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ; Scheme 2 ( S 2 ): R = ( 0 , 0 , 0 , 0 , 90 , 0 , 0 , 0 , 0 , 0 ) ; and Scheme 3 ( S 3 ): R = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 90 ) ;
R = ( R 1 , , R m 2 ) : progressive Type-II censoring schemes of Y, assumed to be Scheme 1 ( S 1 ): R = ( 112 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) ; Scheme 2( S 2 ): R = ( 0 , 0 , 0 , 112 , 0 , 0 , 0 , 0 ) ; and Scheme 3( S 3 ): R = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 112 ) .
For each combination of P and P_0, 500 samples of size n_1 = 100 were generated for X from the distribution given in (1), taking λ_1 = 0.3 and G(x; a_1; θ_1) = x, and 500 samples of size n_2 = 120 were generated for Y from the same distribution, with λ_2 chosen so that P = P(X > Y) = 0.70 and G(y; a_2; θ_2) = y. The proposed estimators for P are calculated under progressive Type-II censoring and their CIs are computed by using Algorithm 2.
The results of these simulations are presented in Table 2.
The shrinkage-type estimators, PT and S, outperform the MLEs both in their CPs and ELs, and this is more pronounced than in the case of the estimation of R(t). The PT estimator has shorter expected lengths for the asymptotic CIs as compared with the bootstrap CIs. In general, for the S estimator, the asymptotic CIs are conservative while the bootstrap CIs are liberal. As expected, all confidence intervals have coverage close to their nominal level and are shortest near the true value of P.

4. Applications to Real Data Sets

In this section, we analyze the performance of the proposed estimators using two real data sets, separately for the case of R ( t ) and P .

4.1. Time to Breakdown of an Insulating Fluid between Electrodes

Here, we consider the real data set used in [33]. This data set consists of 19 measurements of time to breakdown (in minutes) of insulating fluid between electrodes at a voltage of 34 kV, listed as follows (see also [34]):
0.96 4.15 0.19 0.78 8.01 31.75 7.35 6.50 8.27 33.91 32.52 3.16 4.85 2.78 4.67 1.31 12.06 36.71 72.89 .
From these, the following progressively Type-II censored sample of nine observations was obtained under the scheme (R_1 = 2, R_2 = 2, R_3 = 0, R_4 = 0, R_5 = 0, R_6 = 0, R_7 = 1, R_8 = 1, R_9 = 4):
0.19 0.78 1.31 2.78 4.15 4.67 4.85 6.50 8.01
The authors of [35] applied the Kolmogorov–Smirnov (K-S) test as well as the Chi-square test to show that the Weibull distribution is a suitable model for the time to breakdown at each fixed voltage level. The MLEs of the parameters of the Weibull distribution are p̂ = 0.7708 and λ̂ = 6.8865; hence, R̂(t)|_{t=2} = 0.7488. On the other hand, ref. [35] estimated R̂(t)|_{t=2} = 0.7041 using upper record values. In the current analysis, we choose the latter value as the prior guess, i.e., R_0 = 0.7041. With this prior guess in hand, the resulting estimates are given in Table 3. Although we do not know the actual biases of the various estimation procedures, since the true value is unknown, we can see from the table that the confidence intervals based on the proposed improved estimators are shorter than those based on the MLE.

4.2. Stress–Strength of the Carbon Fibers

In this part, we analyze the data reported by [36]. These data represent the strength, measured in GPa, of single carbon fibers and impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 20 mm (Data Set 1) and 10 mm (Data Set 2), with sample sizes 69 and 63, respectively. These data have been used previously by [2,37,38,39,40]. After subtracting 0.75 from all the points of these data sets, [38] observed that Weibull distributions with equal shape parameters fit both data sets well. The MLEs of the parameters of the Weibull distribution fitted to Data Set 1 are λ̂_1 = 0.0046 and p̂_1 = 5.5049, respectively. Similarly, for Data Set 2, λ̂_2 = 0.0023 and p̂_2 = 5.0494. To facilitate comparisons, we used two progressively censored samples generated by [40] from the two data sets (given in Table 4 and Table 5) under two different censoring schemes. The generated data and the corresponding censoring schemes are presented in Table 6.
We used P_0 = 0.1788, obtained by [40], as prior information and computed the proposed estimators and their CIs, summarized in Table 7. From Table 7, we can see that the lengths of the CIs (both the asymptotic and the bootstrap confidence intervals) are shorter when using the improved estimators.

5. Discussion

In various contexts, quality engineers focus on estimating parameters such as reliability, denoted by R ( t ) , and stress–strength, denoted by P . These parameters are typically estimated using Maximum Likelihood Estimators (MLEs). The accuracy of MLEs can be enhanced by integrating the current data with available prior knowledge about the parameters. In this paper, we propose improved estimates for these parameters using shrinkage-type estimators, particularly when the sample data are progressively censored within a broad family of parametric lifetime distributions. We have developed both asymptotic and bootstrap confidence intervals for the reliability parameters based on these improved estimators. Numerical simulations indicate that our new estimators outperform traditional MLEs in terms of mean squared errors across almost the entire parameter space. The procedures for deriving our proposed estimates are straightforward, requiring only the MLEs, which can be computed using common open-source optimization software like the optim function in R software (v4.4.0) [41]. However, a notable limitation of our methodology is its substantial reliance on non-sample information. To address this issue, we suggest adopting a Bayesian approach, where the prior non-sample information can be effectively incorporated through Bayesian prior distributions.

Author Contributions

Conceptualization, S.E.A.; methodology, S.E.A., R.A.B., A.H. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

S.E.A. and A.H. were supported by the Natural Sciences and Engineering Research Council of Canada’s discovery grants.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data are reported in the body of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, we sketch the proofs of the theorems in the body of the manuscript.
 Proof of Theorem 1. 
$$\mathrm{Bias}(\hat{R}(t)) = E[\hat{R}(t)] - R(t) = E\!\left(e^{-\frac{m}{S_m}G(t;a,\theta)}\right) - R(t) = \int_0^\infty e^{-\frac{m}{s_m}G(t;a,\theta)}\,\frac{\lambda^m s_m^{m-1}e^{-\lambda s_m}}{\Gamma(m)}\,ds_m - R(t) = \frac{1}{2^m\Gamma(m)}\int_0^\infty w^{m-1}\exp\!\left\{-\left(\frac{2m\lambda G(t;a,\theta)}{w}+\frac{w}{2}\right)\right\}dw - R(t) = \frac{2}{\Gamma(m)}\{m\lambda G(t;a,\theta)\}^{m/2}K_m\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right) - R(t) = \varphi_4 - R(t),$$
and
$$\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t)) = E\!\left[\hat{R}(t) - (\hat{R}(t)-R_0)I(A)\right] - R(t) = \mathrm{Bias}(\hat{R}(t)) - E\!\left[\hat{R}(t)\,I(C_1 \le L \le C_2)\right] + R_0\,E\!\left[I(C_1 \le L \le C_2)\right],$$
where $C_1 = \chi^2_{2m}(\alpha/2)$, $C_2 = \chi^2_{2m}(1-\alpha/2)$ and $L = 2\lambda_0 S_m \sim \chi^2_{2m}$ under $H_0$. Then, setting $\beta = \lambda_0/\lambda$ and $w = 2\lambda S_m$, we have
$$\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t)) = \mathrm{Bias}(\hat{R}(t)) - E\!\left[e^{-2m\lambda G(t;a,\theta)/w}\,I(C_1/\beta \le w \le C_2/\beta)\right] + R_0\,P(C_1/\beta \le w \le C_2/\beta) = \mathrm{Bias}(\hat{R}(t)) - \int_{C_1/\beta}^{C_2/\beta}\exp\!\left\{-\left(\frac{2m\lambda G(t;a,\theta)}{w}+\frac{w}{2}\right)\right\}\frac{w^{m-1}}{2^m\Gamma(m)}\,dw + R_0\left[H_{2m}(C_2/\beta) - H_{2m}(C_1/\beta)\right] = \mathrm{Bias}(\hat{R}(t)) - \varphi_1 + R_0\varphi_3,$$
and
$$\mathrm{Bias}(\hat{R}_{\mathrm{S}}(t)) = E[\hat{R}_{\mathrm{S}}(t)] - R(t) = E\!\left[\hat{R}(t) - \frac{d(\hat{R}(t)-R_0)}{2\lambda_0 S_m}\right] - R(t) = \mathrm{Bias}(\hat{R}(t)) - \frac{d}{2\lambda_0}E\!\left[\frac{\hat{R}(t)}{S_m}\right] + \frac{R_0 d}{2\lambda_0}E\!\left[\frac{1}{S_m}\right] = \mathrm{Bias}(\hat{R}(t)) - \frac{d\lambda}{\lambda_0}\int_0^\infty \exp\!\left\{-\left(\frac{2m\lambda G(t;a,\theta)}{w}+\frac{w}{2}\right)\right\}\frac{w^{m-2}}{2^m\Gamma(m)}\,dw + \frac{R_0 d}{2\lambda_0}\cdot\frac{\lambda}{m-1} = \mathrm{Bias}(\hat{R}(t)) - \frac{d\lambda}{2^{m-1}\lambda_0\Gamma(m)}\left(4m\lambda G(t;a,\theta)\right)^{\frac{m-1}{2}}K_{m-1}\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right) + \frac{R_0 d}{2\lambda_0}\cdot\frac{\lambda}{m-1} = \mathrm{Bias}(\hat{R}(t)) - \frac{d}{2\lambda_0}\varphi_7 + \frac{R_0 d}{2\lambda_0}\varphi_{11}.$$
 Proof of Theorem 2. 
$$\mathrm{MSE}(\hat{R}(t)) = E\!\left[\hat{R}(t)-R(t)\right]^2 = E[\hat{R}^2(t)] - 2R(t)E[\hat{R}(t)] + R^2(t) = E\!\left[e^{-4m\lambda G(t;a,\theta)/w}\right] - 2R(t)E\!\left[e^{-2m\lambda G(t;a,\theta)/w}\right] + R^2(t) = \frac{2(2m\lambda G(t;a,\theta))^{m/2}}{\Gamma(m)}K_m\!\left(2\sqrt{2m\lambda G(t;a,\theta)}\right) - \frac{4R(t)(m\lambda G(t;a,\theta))^{m/2}}{\Gamma(m)}K_m\!\left(2\sqrt{m\lambda G(t;a,\theta)}\right) + R^2(t) = \varphi_5 - 2R(t)\varphi_4 + R^2(t).$$
For the PT estimator, $\mathrm{MSE}(\hat{R}_{\mathrm{PT}}(t)) = \mathrm{Var}(\hat{R}_{\mathrm{PT}}(t)) + \left[\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t))\right]^2$, and we first derive the variance of the PT estimator as follows:
$$\mathrm{Var}(\hat{R}_{\mathrm{PT}}(t)) = \mathrm{Var}\!\left(\hat{R}(t)-(\hat{R}(t)-R_0)I(A)\right) = E\!\left[\left(\hat{R}(t)-(\hat{R}(t)-R_0)I(A)\right)^2\right] - \left[E(\hat{R}(t)) - E(\hat{R}(t)I(A)) + R_0E(I(A))\right]^2 = E(\hat{R}^2(t)) - E(\hat{R}^2(t)I(A)) + R_0^2E(I(A)) - \left[E(\hat{R}(t)) - E(\hat{R}(t)I(A)) + R_0E(I(A))\right]^2 = \varphi_5 - \varphi_4^2 - \varphi_2 - \varphi_1^2 + R_0^2\varphi_3(1-\varphi_3) + 2R_0\varphi_3(\varphi_1-\varphi_4) + 2\varphi_1\varphi_4.$$
Then, we have
$$\mathrm{MSE}(\hat{R}_{\mathrm{PT}}(t)) = \varphi_5 - \varphi_4^2 - \varphi_2 - \varphi_1^2 + R_0^2\varphi_3(1-\varphi_3) + 2R_0\varphi_3(\varphi_1-\varphi_4) + 2\varphi_1\varphi_4 + \left[\mathrm{Bias}(\hat{R}_{\mathrm{PT}}(t))\right]^2.$$
Finally,
$$\mathrm{MSE}(\hat{R}_{\mathrm{S}}(t)) = E\!\left[\hat{R}_{\mathrm{S}}(t)-R(t)\right]^2 = E\!\left[\hat{R}(t) - \frac{d(\hat{R}(t)-R_0)}{2\lambda_0 S_m} - R(t)\right]^2 = \mathrm{MSE}(\hat{R}(t)) + \frac{d^2}{4\lambda_0^2}E\!\left[\frac{(\hat{R}(t)-R_0)^2}{S_m^2}\right] - \frac{d}{\lambda_0}E\!\left[\frac{(\hat{R}(t)-R(t))(\hat{R}(t)-R_0)}{S_m}\right] = \mathrm{MSE}(\hat{R}(t)) + \frac{d^2}{4\lambda_0^2}\left[E\!\left(\frac{\hat{R}^2(t)}{S_m^2}\right) + R_0^2E\!\left(\frac{1}{S_m^2}\right) - 2R_0E\!\left(\frac{\hat{R}(t)}{S_m^2}\right)\right] - \frac{d}{\lambda_0}\left[E\!\left(\frac{\hat{R}^2(t)}{S_m}\right) - (R_0+R(t))E\!\left(\frac{\hat{R}(t)}{S_m}\right) + R_0R(t)E\!\left(\frac{1}{S_m}\right)\right] = \mathrm{MSE}(\hat{R}(t)) + \frac{d^2}{4\lambda_0^2}\left[\varphi_{10} + R_0^2\varphi_6 - 2R_0\varphi_9\right] - \frac{d}{\lambda_0}\left[\varphi_8 - (R_0+R(t))\varphi_7 + R_0R(t)\varphi_{11}\right].$$
 Proof of Theorem 3. 
$$\mathrm{Bias}(\hat{P}) = E(\hat{P}) - P = E(W) - P = \psi_1 - P,$$
$$\mathrm{Bias}(\hat{P}_{\mathrm{PT}}) = E(\hat{P}_{\mathrm{PT}}) - P = E\!\left[\hat{P} - (\hat{P}-P_0)I(A)\right] - P = \mathrm{Bias}(\hat{P}) - E(\hat{P}I(A)) + P_0E(I(A)) = \mathrm{Bias}(\hat{P}) - \psi_3 + P_0\psi_5,$$
$$\mathrm{Bias}(\hat{P}_{\mathrm{S}}) = E(\hat{P}_{\mathrm{S}}) - P = E\!\left[\hat{P} - d(\hat{P}-P_0)L^{-1}\right] - P = \mathrm{Bias}(\hat{P}) - dE(\hat{P}L^{-1}) + dP_0E(L^{-1}).$$
Since $L^{-1} = \frac{m_2}{m_1}\cdot\frac{\hat{P}}{1-\hat{P}}$ and using (12), we have
$$\mathrm{Bias}(\hat{P}_{\mathrm{S}}) = \mathrm{Bias}(\hat{P}) - d\,\frac{m_2}{m_1}E\!\left[\frac{\hat{P}^2}{1-\hat{P}}\right] + dP_0\,\frac{m_2}{m_1}E\!\left[\frac{\hat{P}}{1-\hat{P}}\right] = \mathrm{Bias}(\hat{P}) - d\,\frac{m_2}{m_1}\psi(2,1) + dP_0\,\frac{m_2}{m_1}\psi(1,1).$$

References

  1. Baklizi, A. Shrinkage estimation of the exponential reliability with censored data. Focus Appl. Stat. 2003, 11, 195–204. [Google Scholar]
  2. Chaturvedi, A.; Nandchahal, S. Shrinkage estimators of the reliability characteristics of a family of lifetime distributions. Statistica 2016, 76, 57. [Google Scholar]
  3. Chaturvedi, A.; Vyas, S. Estimation and Testing procedures for the Reliability functions of Exponentiated distributions under censorings. Statistica 2017, 77, 13. [Google Scholar]
  4. Panahi, H.; Moradi, N. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive Type II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345. [Google Scholar] [CrossRef]
  5. Kohansal, A.; Shoaee, S. Bayesian and classical estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data. Stat. Pap. 2021, 62, 309–359. [Google Scholar] [CrossRef]
  6. Xiong, Z.; Gui, W. Classical and Bayesian inference of an exponentiated half-logistic distribution under adaptive type II progressive censoring. Entropy 2021, 23, 1558. [Google Scholar] [CrossRef]
  7. Okasha, H.; Lio, Y.; Albassam, M. On reliability estimation of lomax distribution under adaptive type-i progressive hybrid censoring scheme. Mathematics 2021, 9, 2903. [Google Scholar] [CrossRef]
  8. Du, Y.; Gui, W. Statistical inference of adaptive type II progressive hybrid censored data with dependent competing risks under bivariate exponential distribution. J. Appl. Stat. 2022, 49, 3120–3140. [Google Scholar] [CrossRef]
  9. Sarhan, A.M.; Manshi, T.; Smith, B. Statistical inference of reliability distributions based on progressively hybrid censoring data. Sci. Afr. 2023, 21, e01808. [Google Scholar] [CrossRef]
  10. Nassar, M.; Alotaibi, R.; Elshahhat, A. Reliability estimation of XLindley constant-stress partially accelerated life tests using progressively censored samples. Mathematics 2023, 11, 1331. [Google Scholar] [CrossRef]
  11. Kumari, R.; Tripathi, Y.M.; Wang, L.; Sinha, R.K. Reliability estimation for Kumaraswamy distribution under block progressive type-II censoring. Statistics 2024, 58, 1–34. [Google Scholar] [CrossRef]
  12. Safariyan, A.; Arabi Belaghi, R. Estimation for the Reliability Characteristics of a Family of Lifetime Distributions under Progressive Censoring. J. Data Sci. Model. 2021, 1, 71–86. [Google Scholar]
  13. Shah, M.K.A.; Lisawadi, S.; Ahmed, S.E. Combining reliability functions of a Weibull distribution. Lobachevskii J. Math. 2017, 38, 101–109. [Google Scholar] [CrossRef]
  14. Ahmed, S.E.; Ahmed, F.; Yüzbaşı, B. Post-Shrinkage Strategies in Statistical and Machine Learning for High Dimensional Data; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  15. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  16. Ng, H.; Chan, P.S.; Balakrishnan, N. Optimal progressive censoring plans for the Weibull distribution. Technometrics 2004, 46, 470–481. [Google Scholar] [CrossRef]
  17. Kundu, D. Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring. Technometrics 2008, 50, 144–154. [Google Scholar] [CrossRef]
  18. Cramer, E.; Schmiedt, A.B. Progressively Type-II censored competing risks data from Lomax distributions. Comput. Stat. Data Anal. 2011, 55, 1285–1303. [Google Scholar] [CrossRef]
  19. Dey, S.; Elshahhat, A.; Nassar, M. Analysis of progressive type-II censored gamma distribution. Comput. Stat. 2023, 38, 481–508. [Google Scholar] [CrossRef]
  20. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Statistics for Industry and Technology; Birkhäuser: New York, NY, USA, 2014; Volume 138. [Google Scholar]
  21. Saleh, A.M.E. Theory of Preliminary Test and Stein-Type Estimation with Applications; John Wiley & Sons: Hoboken, NJ, USA, 2006; Volume 517. [Google Scholar]
  22. Khan, S.; Saleh, A.M.E. On the comparison of the pre-test and shrinkage estimators for the univariate normal mean. Stat. Pap. 2001, 42, 451–473. [Google Scholar] [CrossRef]
  23. Aminnejad, M.; Gürünlü Alma, Ö.; Belaghi, R.A. Improved Estimators of Two parameter Exponential Distribution Based on Type Two Censored Data. Istat. J. Turk. Stat. Assoc. 2015, 7, 71–79. [Google Scholar]
  24. Roozbeh, M. Shrinkage ridge estimators in semiparametric regression models. J. Multivar. Anal. 2015, 136, 56–74. [Google Scholar] [CrossRef]
  25. Arashi, M.; Roozbeh, M. Shrinkage estimation in system regression model. Comput. Stat. 2015, 30, 359–376. [Google Scholar] [CrossRef]
  26. Roozbeh, M.; Arashi, M. Shrinkage ridge regression in partial linear models. Commun. Stat. Theory Methods 2016, 45, 6022–6044. [Google Scholar] [CrossRef]
  27. Safariyan, A.; Arashi, M.; Ahmed, S.E.; Belaghi, R.A. Reliability Analysis Using Ranked Set Sampling. In Proceedings of the International Conference on Management Science and Engineering Management, Melbourne, VIC, Australia, 1–4 August 2018; pp. 711–722. [Google Scholar]
  28. Safariyan, A.; Arashi, M.; Arabi Belaghi, R. Improved point and interval estimation of the stress–strength reliability based on ranked set sampling. Statistics 2019, 53, 101–125. [Google Scholar] [CrossRef]
  29. Safariyan, A.; Arashi, M.; Arabi Belaghi, R. Improved estimators for stress-strength reliability using record ranked set sampling scheme. Commun. Stat.-Simul. Comput. 2019, 48, 2708–2726. [Google Scholar] [CrossRef]
  30. Raheem, E.; Ahmed, S.E.; Liu, S. Stein-rule M-estimation in sparse partially linear models. Jpn. J. Stat. Data Sci. 2023, 1–29. [Google Scholar] [CrossRef]
  31. Watson, G.N. A Treatise on the Theory of Bessel Functions; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
  32. Efron, B. The Jackknife, the Bootstrap, and Other Resampling Plans; Siam: Philadelphia, PA, USA, 1982; Volume 38. [Google Scholar]
  33. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 1982; Volume 10, pp. 316–317. [Google Scholar]
  34. Nelson, R.R.; Winter, S.G. An Evolutionary Theory of Economic Change; Belknap Press, Harvard University Press: Cambridge, MA, USA, 1982. [Google Scholar]
  35. Chaturvedi, A.; Malhotra, A. Inference on the parameters and reliability characteristics of three parameter Burr distribution based on records. Appl. Math. Inf. Sci. 2017, 11, 1–13. [Google Scholar] [CrossRef]
  36. Bader, M.; Priest, A. Statistical aspects of fibre and bundle strength in hybrid composites. In Progress in Science and Engineering Composites, Proceedings of the ICCM-IV, Tokyo, Japan, 25–28 October 1982; Hayashi, T., Kawata, S., Umekawa, S., Eds.; The Japan Society for Composite Materials: Tokyo, Japan, 1982; pp. 1129–1136. [Google Scholar]
  37. Raqab, M.Z.; Kundu, D. Comparison of different estimators of P(Y<X) for a scaled Burr type X distribution. Commun. Stat. Simul. Comput. 2005, 34, 465–483. [Google Scholar]
  38. Kundu, D.; Gupta, R.D. Estimation of P(Y<X) for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280. [Google Scholar]
  39. Kundu, D.; Raqab, M.Z. Estimation of R=P(Y<X) for three-parameter Weibull distribution. Stat. Probab. Lett. 2009, 79, 1839–1846. [Google Scholar]
  40. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Stress-strength reliability of Weibull distribution based on progressively censored samples. SORT 2011, 35, 103–124. [Google Scholar]
  41. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2024. [Google Scholar]
Figure 1. The relative efficiency plot of the PT estimator based on R 0 = 0.7 with n = 100 , m = 10 , t = 5 , λ 0 = 0.07 , and progressive censoring scheme R = ( 25 , 10 , 7 , 5 , 3 , 10 , 9 , 5 , 7 , 9 ) .
Figure 2. The relative efficiency plot of the S estimator based on R 0 = 0.7 with n = 100 , m = 10 , t = 5 , λ 0 = 0.07 , progressive censoring scheme R = ( 25 , 10 , 7 , 5 , 3 , 10 , 9 , 5 , 7 , 9 ) and α = 0.05 .
Figure 3. Consistency analysis of the proposed estimators of R(t), R_0 = 0.7, for m = 10, t = 5, λ_0 = 0.07, α = 0.05, progressive censoring scheme R = (25, 10, 7, 5, 3, 10, 9, 5, 7, n − Σ_{i=1}^{m−1} R_i), and G(x; a, θ) = x.
Figure 4. The RE of the preliminary test estimator based on P = 0.5 .
Figure 5. The RE of the Stein-type estimator based on P = 0.5 .
Table 1. The CP and EL of R ( t ) = 0.5 with varying R 0 and the three schemes.
Each cell group gives, in order, the bootstrap CP, bootstrap EL, asymptotic CP, and asymptotic EL.
Scheme | R_0  | R̂(t)                   | R̂_PT(t)                | R̂_S(t)
S_1    | 0.35 | 0.89, 0.54, 0.85, 0.43 | 0.90, 0.51, 0.76, 0.61 | 1.00, 0.47, 0.85, 0.29
S_1    | 0.40 | 0.85, 0.54, 0.85, 0.43 | 0.89, 0.37, 0.89, 0.50 | 0.98, 0.36, 0.95, 0.21
S_1    | 0.45 | 0.86, 0.54, 0.84, 0.43 | 0.87, 0.30, 0.91, 0.40 | 0.91, 0.27, 0.97, 0.19
S_1    | 0.50 | 0.86, 0.55, 0.84, 0.43 | 0.90, 0.25, 0.93, 0.41 | 0.95, 0.24, 0.93, 0.18
S_1    | 0.55 | 0.86, 0.54, 0.84, 0.43 | 0.94, 0.33, 0.92, 0.29 | 0.93, 0.31, 0.88, 0.19
S_1    | 0.60 | 0.83, 0.54, 0.84, 0.43 | 0.94, 0.53, 0.93, 0.26 | 0.94, 0.43, 0.87, 0.20
S_1    | 0.65 | 0.83, 0.54, 0.85, 0.43 | 0.95, 0.74, 0.88, 0.28 | 0.99, 0.56, 0.86, 0.22
S_2    | 0.35 | 0.89, 0.54, 0.83, 0.43 | 0.92, 0.51, 0.74, 0.60 | 1.00, 0.47, 0.83, 0.29
S_2    | 0.40 | 0.86, 0.54, 0.85, 0.43 | 0.89, 0.38, 0.83, 0.51 | 0.99, 0.36, 0.96, 0.21
S_2    | 0.45 | 0.88, 0.54, 0.85, 0.43 | 0.88, 0.28, 0.89, 0.40 | 0.93, 0.27, 0.97, 0.19
S_2    | 0.50 | 0.87, 0.55, 0.84, 0.43 | 0.92, 0.24, 0.91, 0.42 | 0.97, 0.24, 0.95, 0.18
S_2    | 0.55 | 0.87, 0.54, 0.84, 0.43 | 0.92, 0.33, 0.92, 0.29 | 0.93, 0.31, 0.90, 0.19
S_2    | 0.60 | 0.86, 0.54, 0.85, 0.43 | 0.92, 0.52, 0.92, 0.27 | 0.96, 0.43, 0.88, 0.20
S_2    | 0.65 | 0.88, 0.55, 0.983, 0.43 | 0.98, 0.76, 0.90, 0.28 | 0.97, 0.56, 0.91, 0.22
S_3    | 0.35 | 0.88, 0.54, 0.84, 0.43 | 0.90, 0.50, 0.72, 0.60 | 1.00, 0.47, 0.82, 0.29
S_3    | 0.40 | 0.89, 0.54, 0.84, 0.43 | 0.87, 0.38, 0.83, 0.51 | 0.97, 0.36, 0.95, 0.21
S_3    | 0.45 | 0.87, 0.54, 0.84, 0.43 | 0.85, 0.28, 0.87, 0.40 | 0.91, 0.27, 0.97, 0.19
S_3    | 0.50 | 0.90, 0.55, 0.85, 0.43 | 0.93, 0.24, 0.91, 0.42 | 0.96, 0.24, 0.95, 0.18
S_3    | 0.55 | 0.90, 0.54, 0.85, 0.43 | 0.93, 0.34, 0.92, 0.30 | 0.95, 0.31, 0.90, 0.19
S_3    | 0.60 | 0.88, 0.54, 0.86, 0.43 | 0.95, 0.54, 0.88, 0.27 | 0.96, 0.43, 0.88, 0.20
S_3    | 0.65 | 0.89, 0.54, 0.86, 0.43 | 0.97, 0.75, 0.85, 0.29 | 0.99, 0.56, 0.88, 0.22
Table 2. The CP and EL of P with P = 0.70 .
Each cell group gives, in order, the bootstrap CP, bootstrap EL, asymptotic CP, and asymptotic EL.
Scheme | P_0  | P̂                      | P̂_PT                   | P̂_S
S_1    | 0.60 | 0.89, 0.47, 0.85, 0.39 | 0.76, 0.49, 0.95, 0.22 | 1.00, 0.39, 0.87, 0.26
S_1    | 0.67 | 0.89, 0.47, 0.87, 0.39 | 0.85, 0.50, 0.95, 0.19 | 1.00, 0.39, 0.91, 0.24
S_1    | 0.68 | 0.88, 0.47, 0.85, 0.39 | 0.91, 0.50, 0.95, 0.19 | 0.99, 0.39, 0.88, 0.23
S_1    | 0.69 | 0.89, 0.47, 0.86, 0.39 | 0.91, 0.49, 0.82, 0.19 | 0.98, 0.39, 0.91, 0.23
S_1    | 0.70 | 0.88, 0.47, 0.85, 0.39 | 0.93, 0.45, 0.90, 0.19 | 0.98, 0.39, 0.90, 0.23
S_1    | 0.71 | 0.89, 0.47, 0.83, 0.39 | 0.92, 0.47, 0.97, 0.20 | 0.95, 0.39, 0.90, 0.23
S_1    | 0.72 | 0.89, 0.47, 0.84, 0.39 | 0.92, 0.46, 0.63, 0.20 | 0.95, 0.39, 0.91, 0.23
S_1    | 0.75 | 0.88, 0.48, 0.86, 0.39 | 0.94, 0.44, 0.60, 0.21 | 0.97, 0.39, 0.95, 0.24
S_1    | 0.80 | 0.89, 0.47, 0.85, 0.39 | 0.88, 0.41, 0.70, 0.29 | 0.95, 0.39, 0.97, 0.29
S_2    | 0.60 | 0.89, 0.47, 0.85, 0.39 | 0.79, 0.49, 0.95, 0.21 | 1.00, 0.39, 0.89, 0.26
S_2    | 0.67 | 0.88, 0.47, 0.84, 0.39 | 0.80, 0.50, 0.95, 0.20 | 0.99, 0.39, 0.90, 0.24
S_2    | 0.68 | 0.88, 0.47, 0.85, 0.39 | 0.90, 0.50, 0.93, 0.20 | 0.99, 0.39, 0.89, 0.23
S_2    | 0.69 | 0.89, 0.48, 0.86, 0.39 | 0.92, 0.49, 0.95, 0.20 | 0.98, 0.39, 0.91, 0.23
S_2    | 0.70 | 0.88, 0.47, 0.85, 0.39 | 0.95, 0.48, 0.93, 0.19 | 0.97, 0.39, 0.90, 0.23
S_2    | 0.71 | 0.88, 0.47, 0.84, 0.39 | 0.93, 0.47, 0.97, 0.20 | 0.97, 0.39, 0.91, 0.23
S_2    | 0.72 | 0.88, 0.47, 0.84, 0.39 | 0.92, 0.46, 0.97, 0.20 | 0.96, 0.39, 0.92, 0.23
S_2    | 0.75 | 0.88, 0.47, 0.85, 0.39 | 0.85, 0.44, 0.89, 0.21 | 0.97, 0.39, 0.95, 0.24
S_2    | 0.80 | 0.88, 0.48, 0.84, 0.39 | 0.81, 0.41, 0.73, 0.30 | 0.95, 0.39, 0.96, 0.29
S_3    | 0.60 | 0.88, 0.47, 0.85, 0.39 | 0.78, 0.49, 0.94, 0.21 | 1.00, 0.39, 0.88, 0.26
S_3    | 0.67 | 0.89, 0.47, 0.85, 0.39 | 0.79, 0.50, 0.93, 0.20 | 1.00, 0.39, 0.89, 0.24
S_3    | 0.68 | 0.88, 0.48, 0.84, 0.39 | 0.85, 0.50, 0.94, 0.20 | 0.98, 0.39, 0.89, 0.23
S_3    | 0.69 | 0.89, 0.48, 0.85, 0.39 | 0.91, 0.49, 0.93, 0.20 | 0.98, 0.39, 0.89, 0.23
S_3    | 0.70 | 0.89, 0.47, 0.84, 0.39 | 0.91, 0.48, 0.93, 0.20 | 0.97, 0.39, 0.90, 0.23
S_3    | 0.71 | 0.89, 0.47, 0.86, 0.39 | 0.94, 0.47, 0.92, 0.19 | 0.97, 0.39, 0.93, 0.23
S_3    | 0.72 | 0.89, 0.47, 0.85, 0.39 | 0.92, 0.46, 0.95, 0.20 | 0.97, 0.39, 0.93, 0.23
S_3    | 0.75 | 0.88, 0.47, 0.85, 0.39 | 0.95, 0.44, 0.88, 0.22 | 0.96, 0.39, 0.95, 0.24
S_3    | 0.80 | 0.88, 0.47, 0.85, 0.39 | 0.89, 0.42, 0.78, 0.29 | 0.96, 0.39, 0.96, 0.29
Table 3. Estimators of R ( t ) in time to breakdown data under the assumption R ^ o | t = 2 = 0.7041 .
α = 0.05   | Estimated value | Variance | Bootstrap CI (B = 200) | Asymp. CI
R̂(t)      | 0.7041          | 0.0067   | (0.59490, 0.8605)      | (0.5884, 0.9092)
R̂_PT(t)   | 0.7041          | 0.0024   | (0.6879, 0.7284)       | (0.6074, 0.8007)
R̂_S(t)    | 0.7330          | 0.0015   | (0.6942, 0.9185)       | (0.6568, 0.8093)
Table 4. Data Set 1 (gauge length of 20 mm).
1.312 1.314 1.479 1.552 1.700 1.803 1.861 1.865 1.944 1.958
1.966 1.997 2.006 2.021 2.027 2.055 2.063 2.098 2.140 2.179
2.224 2.240 2.253 2.270 2.272 2.274 2.301 2.301 2.359 2.382
2.382 2.426 2.434 2.435 2.478 2.490 2.511 2.514 2.535 2.554
2.566 2.570 2.586 2.629 2.633 2.642 2.648 2.684 2.697 2.726
2.770 2.773 2.800 2.809 2.818 2.821 2.848 2.880 2.954 3.012
3.067 3.084 3.090 3.096 3.128 3.233 3.433 3.585 3.585
Table 5. Data Set 2 (gauge length of 10 mm).
1.901 2.132 2.203 2.228 2.257 2.350 2.361 2.396 2.397 2.445
2.454 2.474 2.518 2.522 2.525 2.532 2.575 2.614 2.616 2.618
2.624 2.659 2.675 2.738 2.740 2.856 2.917 2.928 2.937 2.937
2.977 2.996 3.030 3.125 3.139 3.145 3.220 3.223 3.235 3.243
3.264 3.272 3.294 3.332 3.346 3.377 3.408 3.435 3.493 3.501
3.537 3.554 3.562 3.628 3.852 3.871 3.886 3.971 4.024 4.027
4.225 4.395 5.020
Table 6. Data and the corresponding censored schemes.
i, j | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8     | 9     | 10
x_i  | 1.312 | 1.479 | 1.552 | 1.803 | 1.944 | 1.858 | 1.966 | 2.027 | 2.055 | 2.098
R_i  | 10120030150
y_j  | 1.901 | 2.132 | 2.257 | 2.361 | 2.396 | 2.445 | 2.373 | 2.525 | 2.532 | 2.575
R_j  | 02101120044
Table 7. Estimators of P for gauge data under prior information P o = 0.1788 .
α = 0.05 | Estimated value | Asymp. CI        | Boot. CI
P̂       | 0.1767          | (0.0156, 0.3377) | (0.1311, 0.2535)
P̂_PT    | 0.1788          | (0.0205, 0.3370) | (0.1779, 0.2471)
P̂_S     | 0.1777          | (0.1037, 0.2518) | (0.1652, 0.1941)
