Article

Multi Stress-Strength Reliability Based on Progressive First Failure for Kumaraswamy Model: Bayesian and Non-Bayesian Estimation

by Manal M. Yousef 1 and Ehab M. Almetwally 2,*
1 Department of Mathematics, Faculty of Science, New Valley University, El-Khargah 72511, Egypt
2 Department of Statistics, Faculty of Business Administration, Delta University of Science and Technology, Gamasa 11152, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2120; https://doi.org/10.3390/sym13112120
Submission received: 23 September 2021 / Revised: 26 October 2021 / Accepted: 5 November 2021 / Published: 8 November 2021
(This article belongs to the Section Mathematics)

Abstract:
It is common in many real-life settings for systems to fail in harsh operating environments. When systems reach their lower, upper, or both extreme operating conditions, they frequently fail to perform their intended duties, a situation that has received little attention from researchers. The purpose of this article is to derive inference for multi stress-strength reliability where the stress and strength variables follow unit Kumaraswamy distributions, based on progressive first-failure censoring. The article therefore deals with the problem of estimating the stress-strength function $R = P(X < Y < Z)$ when $X$, $Y$, and $Z$ come from three independent Kumaraswamy distributions. Classical methods, namely maximum likelihood for point estimation and the asymptotic, boot-p, and boot-t methods for interval estimation, are discussed, and Bayes methods are proposed based on progressive first-failure censored data. Lindley's approximation and the MCMC technique are used to compute the Bayes estimate of R under symmetric and asymmetric loss functions. We derive standard Bayes estimators of the reliability of the multi stress-strength Kumaraswamy distribution based on progressive first-failure censored samples using balanced and unbalanced loss functions. Different confidence intervals are obtained. The performance of the proposed estimators is evaluated and compared via Monte Carlo simulations and application examples with real data.

1. Introduction

In real-life scenarios, it is a widespread phenomenon for systems to cease functioning in their extreme working environments. Systems often fail to perform their intended functions whenever they cross their lower, upper, or both extreme working limits. In the literature, $R = P(X < Y)$, widely known as stress-strength reliability, has been studied extensively. A system working under such a stress-strength setup fails to function when the applied stress exceeds the strength of the system. Notable works in this direction include Weerahandi and Johnson [1], Surles and Padgett [2], Al-Mutairi et al. [3], Rao et al. [4], Singh et al. [5], Almetwally and Almongy [6], Alshenawy et al. [7], Alamri et al. [8], Sabry et al. [9], Abu El Azm et al. [10], Okabe and Otsuka [11], and many more.
Furthermore, the study of stress-strength models has been extended to systems with multiple components, widely known as multicomponent systems. Even though the multicomponent stress-strength model was introduced decades ago by Bhattacharyya and Johnson [12], it has received wide attention in recent years and has been studied by many researchers for complete as well as censored data. Recent articles include Kotb and Raqab [13], Maurya and Tripathi [14], Mahto et al. [15], Mahto and Tripathi [16], Wang et al. [17,18], Jha et al. [19], Rasekhi et al. [20], Alotaibi et al. [21], Maurya et al. [22], Kohansal and Shoaee [23], and Jana and Bera [24], among many others.
Many studies have been carried out for $R = P(X < Y)$ as a stress-strength model, and the study has also been extended to multicomponent systems. However, much less attention has been paid to an equally important practical scenario in which devices cease to function under both extreme lower and extreme upper working conditions. For example, some electrical equipment fails when operated below or above a specific power-supply range. In a similar manner, a person's systolic and diastolic blood pressures should not cross their respective limits. Such applications are not limited; this setting is quite basic and natural, reflecting sound relationships among various real-world phenomena. It is also a useful relationship in various subareas of genetics and psychology, where the strength Y should not only be larger than the stress X but also be lower than the stress Z. Many researchers have studied the estimation of the stress-strength parameter for many statistical models. Estimation of $R = P(X < Y < Z)$ based on independent samples was examined by Chandra and Owen [25], Hlawka [26], Singh [27], Dutta and Sriwastav [28], and Ivshin [29]. Estimation in the stress-strength model under the supposition that the strength of a component lies in an interval, that is, estimation of the probability $R = P(X_1 < Y < X_2)$, was obtained by Singh [27], where $X_1$ and $X_2$ were independent random stress variables and Y was a random strength variable. The estimation of $R = P[\max(Y_1, Y_2, \ldots, Y_k) < X]$ was considered by Chandra and Owen [25] when $(Y_1, Y_2, \ldots, Y_k)$ were normal random variables and X was another independent normal random variable. Hanagal [26] estimated the reliability of a component subjected to two different stresses that were independent of the strength of the component. Hanagal [30] estimated the system reliability in the multi-component series stress-strength model. Waegeman et al. [31] suggested a simple calculation algorithm for $P(X < Y < Z)$ and its variance using existing U-statistics. Chumchum et al. [32] studied the cascade system with $P(X < Y < Z)$. Guangming et al. [33] discussed nonparametric statistical inference for $P(X < Y < Z)$. Inference on $R = P(X < Y < Z)$ for an n-standby system via a Monte Carlo simulation approach was obtained by Patowary et al. [34].
Based on censored samples, many articles have appeared. Kohansal and Shoaee [23] discussed Bayesian and likelihood estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data for the Weibull distribution. Saini et al. [35] obtained the reliability of a multicomponent stress-strength system based on the Burr XII distribution using progressively first-failure censored samples. Kohansal et al. [36] introduced multicomponent stress-strength estimation for a system of non-identical-component strengths under adaptive hybrid progressive censoring. Hassan [37] estimated multicomponent stress-strength reliability with the generalized linear failure rate distribution based on progressive Type-II censored data.
Often, when dealing with reliability characteristics in statistical analysis, even knowing that there may be some loss of efficiency, different ways of early removal of live units, known as censoring schemes, are used to save time and cost. Many types of censoring schemes are well known, such as the type-II censoring scheme, the progressive type-II censoring scheme, and the progressive first-failure censoring scheme. Wu and Kus [38] proposed a new life-test plan, called the progressive first-failure censoring scheme, merging the progressive type-II and first-failure censoring schemes. The progressive first-failure censoring scheme can be characterized as follows: assume that n independent groups, with k items within each group, are placed on a life test. Once the first failure $Y_{1;m,n,k}$ has occurred, $R_1$ groups and the group in which the first failure is observed are randomly withdrawn from the experiment. At the time of the second failure $Y_{2;m,n,k}$, $R_2$ groups and the group in which the second failure is observed are randomly withdrawn from the remaining live $(n - R_1 - 2)$ groups. Finally, when the m-th failure $Y_{m;m,n,k}$ occurs, the remaining live groups $R_m$ $(m < n)$ are withdrawn from the test. The ordered observations $Y_{1;m,n,k} < \cdots < Y_{m;m,n,k}$ so obtained are called progressively first-failure censored with progressive censoring scheme $\underline{R} = (R_1, R_2, \ldots, R_m)$, where the m failures and all removals sum to n, that is, $n = m + \sum_{i=1}^{m} R_i$. Note that the special case $R_1 = R_2 = \cdots = R_m = 0$ reduces progressive first-failure censoring to the first-failure censoring scheme. Similarly, with $R_1 = R_2 = \cdots = R_{m-1} = 0$ and $R_m = n - m$, first-failure type-II censoring arises as a particular case of this censoring scheme. Under the assumption that each group contains exactly one unit, that is, $k = 1$, the progressive first-failure censoring scheme reduces to the progressive type-II censoring scheme. Thus, progressive first-failure censoring is a generalization of progressive censoring.
Let $Y_{1;m,n,k}, \ldots, Y_{m;m,n,k}$ denote a progressively first-failure type-II censored sample from a population with pdf $f_X(\cdot)$ and distribution function $F_X(\cdot)$, with progressive censoring scheme $\underline{R}$. Following Balakrishnan and Aggarwala [39] and Wu and Kus [38], the likelihood function based on the considered progressive first-failure censored sample is given as follows:
$$ f_{1,2,\ldots,m}\left(Y_{1;m,n,k}, Y_{2;m,n,k}, \ldots, Y_{m;m,n,k}\right) = A\, k^{m} \prod_{i=1}^{m} f\!\left(Y_{i;m,n,k}\right)\left[1 - F\!\left(Y_{i;m,n,k}\right)\right]^{k(R_i+1)-1}, \quad 0 < Y_{1;m,n,k} < \cdots < Y_{m;m,n,k} < \infty, \tag{1} $$
where
$$ A = n\,(n - R_1 - 1)\,(n - R_1 - R_2 - 2)\cdots\left(n - \sum_{i=1}^{m-1} R_i - m + 1\right). $$
Kumaraswamy [40] proposed a distribution with double-bounded support, describing its first application in the field of hydrology. The Kumaraswamy distribution (KuD) with parameters α and λ is described by the probability density function (PDF), cumulative distribution function (CDF), and hazard rate function given by:
$$ f(y) = \lambda \alpha\, y^{\alpha-1}\left(1 - y^{\alpha}\right)^{\lambda-1}, \quad 0 < y < 1,\ \alpha, \lambda > 0, \tag{2} $$
$$ F(y) = 1 - \left(1 - y^{\alpha}\right)^{\lambda}, \quad 0 < y < 1,\ \alpha, \lambda > 0, \qquad \text{and} \qquad H(y) = \frac{\lambda \alpha\, y^{\alpha-1}}{1 - y^{\alpha}}, \quad 0 < y < 1,\ \alpha, \lambda > 0, \tag{3} $$
respectively.
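As a quick reference, a minimal Python sketch of the PDF, CDF, hazard rate, and inverse-CDF sampling is given below; the function names are ours and not part of the paper.

```python
import numpy as np

def kud_pdf(y, alpha, lam):
    """KuD density f(y) = lam * alpha * y^(alpha-1) * (1 - y^alpha)^(lam-1), 0 < y < 1."""
    y = np.asarray(y, dtype=float)
    return lam * alpha * y**(alpha - 1) * (1 - y**alpha)**(lam - 1)

def kud_cdf(y, alpha, lam):
    """KuD distribution function F(y) = 1 - (1 - y^alpha)^lam."""
    y = np.asarray(y, dtype=float)
    return 1 - (1 - y**alpha)**lam

def kud_hazard(y, alpha, lam):
    """KuD hazard rate H(y) = lam * alpha * y^(alpha-1) / (1 - y^alpha)."""
    y = np.asarray(y, dtype=float)
    return lam * alpha * y**(alpha - 1) / (1 - y**alpha)

def kud_rvs(alpha, lam, size, seed=None):
    """Sample by inverting the CDF: y = (1 - (1 - u)^(1/lam))^(1/alpha)."""
    u = np.random.default_rng(seed).uniform(size=size)
    return (1 - (1 - u)**(1 / lam))**(1 / alpha)
```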
In terms of properties, the KuD resembles the beta distribution and shares many of its common properties, but in terms of tractability, the KuD has a more tractable form than the beta distribution. The densities of the Kumaraswamy distribution also share the shapes of the beta distribution and may be unimodal, increasing, decreasing, or constant depending on the values of its parameters. The KuD applies to many natural phenomena, such as the height of individuals, atmospheric temperatures, and scores obtained on a test. It can be used to approximate many well-known distributions, for instance, the uniform, the triangular, and many others. Jones [41] found that the KuD can be applied to model reliability data resulting from various life studies. Golizadeh et al. [42] used ungrouped data to analyze classical and Bayesian estimators for the shape parameter of the KuD and also considered the relationship between them. For more details, see Sindhu et al. [43], Sharaf EL-Deen et al. [44], Wang [45], Kumar et al. [46], and Fawzy [47].
Therefore, we introduce inference for multicomponent reliability where the stress and strength variables follow unit KuD based on progressive first-failure censoring. The problem of estimating the stress-strength function R, where X, Y, and Z come from three independent KuDs, is addressed in this paper. Likelihood estimation based on progressive first-failure censored data is used for point estimation, and the asymptotic confidence interval, bootstrap-p, and bootstrap-t methods are discussed for interval estimation. The Bayesian estimates based on progressive first-failure censored data are obtained using the Markov chain Monte Carlo (MCMC) and Lindley approximation techniques. Symmetric and asymmetric loss functions are used for Bayesian estimation, and balanced and unbalanced loss functions are used to estimate the reliability of the multi stress-strength Kumaraswamy distribution based on progressive first-failure censored samples. Monte Carlo simulations and application examples with real data are used to evaluate and compare the performance of the various proposed estimators.
The rest of the paper is organized as follows. Classical point estimation, namely maximum likelihood estimation, and interval estimation, namely the asymptotic, boot-p, and boot-t intervals, are considered in Section 2. In Section 3, Bayesian estimation techniques are considered, including the Lindley and MCMC techniques, and the Bayes estimate of R is provided. Extensive simulation studies are given in Section 4. The application example with real data is given in Section 5. Finally, we conclude the paper in Section 6.

2. Classical Estimation

In this section, classical point and interval estimation is considered, namely maximum likelihood estimation for obtaining point estimates of R, and the asymptotic, boot-p, and boot-t intervals for obtaining interval estimates of R.

2.1. Maximum Likelihood Estimation of R

Let $X \sim \mathrm{KuD}(\lambda, \alpha_1)$, $Y \sim \mathrm{KuD}(\lambda, \alpha_2)$, and $Z \sim \mathrm{KuD}(\lambda, \alpha_3)$ be independent. Assuming that $\lambda$ is known, we have
$$ R = P(X < Y < Z) = \int F_X(y)\, dF_Y(y) - \int F_X(y)\, F_Z(y)\, dF_Y(y) = \frac{\alpha_1 \alpha_2}{(\alpha_2 + \alpha_3)(\alpha_1 + \alpha_2 + \alpha_3)}. \tag{4} $$
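To illustrate (4), a short sketch with a Monte Carlo sanity check follows; the parameter values are arbitrary illustrative choices and the helper names are ours.

```python
import numpy as np

def multi_stress_strength_r(alpha1, alpha2, alpha3):
    """Closed-form R = P(X < Y < Z) from Equation (4)."""
    return alpha1 * alpha2 / ((alpha2 + alpha3) * (alpha1 + alpha2 + alpha3))

rng = np.random.default_rng(2021)
lam, a1, a2, a3 = 1.5, 1.6, 3.5, 0.8          # illustrative values only
# X ~ KuD(lam, a1) in the paper's parametrization, i.e. F(x) = 1 - (1 - x^lam)^a1.
draw = lambda a, n: (1 - (1 - rng.uniform(size=n))**(1 / a))**(1 / lam)
x, y, z = draw(a1, 200_000), draw(a2, 200_000), draw(a3, 200_000)
print(multi_stress_strength_r(a1, a2, a3), np.mean((x < y) & (y < z)))
```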
To derive the MLE of R, we first obtain the MLEs of $\alpha_1$, $\alpha_2$, and $\alpha_3$. Let $(X_{1;m_1,n_1,k_1}, \ldots, X_{m_1;m_1,n_1,k_1})$, $(Y_{1;m_2,n_2,k_2}, \ldots, Y_{m_2;m_2,n_2,k_2})$, and $(Z_{1;m_3,n_3,k_3}, \ldots, Z_{m_3;m_3,n_3,k_3})$ be three progressively first-failure censored samples from KuD($\lambda$, $\alpha_i$) distributions with censoring schemes $\underline{R}_x = (R_{x_1}, \ldots, R_{x_{m_1}})$, $\underline{R}_y = (R_{y_1}, \ldots, R_{y_{m_2}})$, and $\underline{R}_z = (R_{z_1}, \ldots, R_{z_{m_3}})$. Then, using the expressions from (2) and (3), the likelihood function of $\alpha_1$, $\alpha_2$, and $\alpha_3$ is given by
$$ l(\alpha_1, \alpha_2, \alpha_3) \propto \prod_{j=1}^{3}\left(k_j \alpha_j\right)^{m_j} \lambda^{m_1+m_2+m_3} \prod_{i=1}^{m_1} x_i^{\lambda-1}\left(1 - x_i^{\lambda}\right)^{\alpha_1 k_1 (R_{x_i}+1)-1} \times \prod_{i=1}^{m_2} y_i^{\lambda-1}\left(1 - y_i^{\lambda}\right)^{\alpha_2 k_2 (R_{y_i}+1)-1} \prod_{i=1}^{m_3} z_i^{\lambda-1}\left(1 - z_i^{\lambda}\right)^{\alpha_3 k_3 (R_{z_i}+1)-1}. \tag{5} $$
For simplicity of notation, we write $x_i$ instead of $X_{i;m_1,n_1,k_1}$, and similarly for $y_i$ and $z_i$.
The log-likelihood function may now be expressed as:
$$ L(\alpha_1, \alpha_2, \alpha_3) \propto \sum_{j=1}^{3} m_j\left(\ln k_j + \ln \alpha_j\right) + (m_1+m_2+m_3)\ln\lambda + (\lambda-1)\left(\sum_{i=1}^{m_1}\ln x_i + \sum_{i=1}^{m_2}\ln y_i + \sum_{i=1}^{m_3}\ln z_i\right) + \sum_{i=1}^{m_1}\left(\alpha_1 k_1 (R_{x_i}+1)-1\right)\ln\!\left(1 - x_i^{\lambda}\right) + \sum_{i=1}^{m_2}\left(\alpha_2 k_2 (R_{y_i}+1)-1\right)\ln\!\left(1 - y_i^{\lambda}\right) + \sum_{i=1}^{m_3}\left(\alpha_3 k_3 (R_{z_i}+1)-1\right)\ln\!\left(1 - z_i^{\lambda}\right). \tag{6} $$
Taking the derivative of (6) with respect to $\alpha_1$, $\alpha_2$, and $\alpha_3$, respectively, we have
$$ \frac{\partial L}{\partial \alpha_1} = \frac{m_1}{\alpha_1} + k_1 \sum_{i=1}^{m_1}(R_{x_i}+1)\ln\!\left(1 - x_i^{\lambda}\right), \quad \frac{\partial L}{\partial \alpha_2} = \frac{m_2}{\alpha_2} + k_2 \sum_{i=1}^{m_2}(R_{y_i}+1)\ln\!\left(1 - y_i^{\lambda}\right), \quad \frac{\partial L}{\partial \alpha_3} = \frac{m_3}{\alpha_3} + k_3 \sum_{i=1}^{m_3}(R_{z_i}+1)\ln\!\left(1 - z_i^{\lambda}\right). \tag{7} $$
The MLEs of $\alpha_1$, $\alpha_2$, and $\alpha_3$ are obtained, respectively, by equating the partial derivatives in (7) to zero and are written as:
$$ \hat{\alpha}_1 = \frac{-m_1}{k_1 \sum_{i=1}^{m_1}(R_{x_i}+1)\ln\!\left(1 - x_i^{\lambda}\right)}, \quad \hat{\alpha}_2 = \frac{-m_2}{k_2 \sum_{i=1}^{m_2}(R_{y_i}+1)\ln\!\left(1 - y_i^{\lambda}\right)}, \quad \hat{\alpha}_3 = \frac{-m_3}{k_3 \sum_{i=1}^{m_3}(R_{z_i}+1)\ln\!\left(1 - z_i^{\lambda}\right)}. \tag{8} $$
Replacing $\alpha_1$, $\alpha_2$, and $\alpha_3$ by $\hat{\alpha}_1$, $\hat{\alpha}_2$, and $\hat{\alpha}_3$, respectively, in (4), the MLE of R becomes
$$ \hat{R} = \frac{\hat{\alpha}_1 \hat{\alpha}_2}{(\hat{\alpha}_2 + \hat{\alpha}_3)(\hat{\alpha}_1 + \hat{\alpha}_2 + \hat{\alpha}_3)}. $$
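The closed-form MLEs in (8) and the plug-in estimator of R translate directly into code. A sketch under the paper's assumption of known $\lambda$ follows; the function and argument names (sample values, removal vector, group size k) are ours.

```python
import numpy as np

def alpha_mle(sample, removals, k, lam):
    """MLE of alpha from Equation (8): -m / (k * sum((R_i + 1) * ln(1 - x_i^lam)))."""
    x = np.asarray(sample, dtype=float)
    r = np.asarray(removals, dtype=float)
    return -len(x) / (k * np.sum((r + 1) * np.log(1 - x**lam)))

def r_mle(x, y, z, rx, ry, rz, k1, k2, k3, lam):
    """Plug the three MLEs into Equation (4) to obtain the MLE of R."""
    a1 = alpha_mle(x, rx, k1, lam)
    a2 = alpha_mle(y, ry, k2, lam)
    a3 = alpha_mle(z, rz, k3, lam)
    return a1 * a2 / ((a2 + a3) * (a1 + a2 + a3)), (a1, a2, a3)
```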

2.2. Asymptotic Confidence Interval

The Fisher information matrix of the 3-dimensional vector $\alpha = (\alpha_1, \alpha_2, \alpha_3)$ is written as
$$ I(\alpha_1, \alpha_2, \alpha_3) = \begin{pmatrix} -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_1^2}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_1 \partial \alpha_2}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_1 \partial \alpha_3}\right) \\ -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_2 \partial \alpha_1}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_2^2}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_2 \partial \alpha_3}\right) \\ -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_3 \partial \alpha_1}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_3 \partial \alpha_2}\right) & -E\!\left(\dfrac{\partial^2 l}{\partial \alpha_3^2}\right) \end{pmatrix}, $$
where $-E\left(\frac{\partial^2 l}{\partial \alpha_1^2}\right) = \frac{m_1}{\alpha_1^2}$, $-E\left(\frac{\partial^2 l}{\partial \alpha_2^2}\right) = \frac{m_2}{\alpha_2^2}$, and $-E\left(\frac{\partial^2 l}{\partial \alpha_3^2}\right) = \frac{m_3}{\alpha_3^2}$. Suppose the MLE of $\alpha$ is denoted by $\hat{\alpha}$. Then, as $m_1, m_2, m_3 \to \infty$,
$$ \sqrt{n}\,(\hat{\alpha} - \alpha) \xrightarrow{D} N\!\left(0, I^{-1}\right), $$
where $n = n_1 = n_2 = n_3$ and $I^{-1}$ is the inverse of the Fisher information matrix I. Here, we define
$$ B = \left(\frac{\partial R}{\partial \alpha_1}, \frac{\partial R}{\partial \alpha_2}, \frac{\partial R}{\partial \alpha_3}\right)^{T}, $$
where $\frac{\partial R}{\partial \alpha_1} = \frac{\alpha_2}{(\alpha_1+\alpha_2+\alpha_3)^2}$, $\frac{\partial R}{\partial \alpha_2} = \frac{\alpha_1\left(\alpha_3(\alpha_1+\alpha_3)-\alpha_2^2\right)}{(\alpha_2+\alpha_3)^2(\alpha_1+\alpha_2+\alpha_3)^2}$, and $\frac{\partial R}{\partial \alpha_3} = \frac{-\alpha_1\alpha_2\left(\alpha_1+2\alpha_2+2\alpha_3\right)}{(\alpha_2+\alpha_3)^2(\alpha_1+\alpha_2+\alpha_3)^2}$. Then, using the delta method (for more details, one may refer to Ferguson [48]), the asymptotic distribution of $\hat{R}$ is found to be
$$ \sqrt{n}\,(\hat{R} - R) \xrightarrow{D} N\!\left(0, \sigma_R^2\right), $$
where $\sigma_R^2 = B^{T} I^{-1} B$ is the asymptotic variance of $\hat{R}$. The approximate $100(1-\gamma)\%$ confidence interval for R can be expressed as $\left(\hat{R} - z_{\gamma/2}\,\hat{\sigma}_R,\ \hat{R} + z_{\gamma/2}\,\hat{\sigma}_R\right)$, where $z_{\gamma/2}$ is the upper $\gamma/2$ percentile of the standard normal distribution.
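A sketch of this interval, assuming the diagonal Fisher information $\mathrm{diag}(m_i/\alpha_i^2)$ and the gradient B given above; SciPy is used only for the standard normal quantile, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_ci(a1, a2, a3, m1, m2, m3, level=0.95):
    """Delta-method confidence interval for R based on the MLEs (a1, a2, a3)."""
    s, d23 = a1 + a2 + a3, a2 + a3
    # Gradient of R with respect to (alpha1, alpha2, alpha3).
    B = np.array([a2 / s**2,
                  a1 * (a3 * (a1 + a3) - a2**2) / (d23**2 * s**2),
                  -a1 * a2 * (a1 + 2 * a2 + 2 * a3) / (d23**2 * s**2)])
    I_inv = np.diag([a1**2 / m1, a2**2 / m2, a3**2 / m3])   # inverse Fisher information
    sigma_r = np.sqrt(B @ I_inv @ B)
    r_hat = a1 * a2 / (d23 * s)
    z = norm.ppf(1 - (1 - level) / 2)
    return r_hat - z * sigma_r, r_hat + z * sigma_r
```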

2.3. Bootstrap Confidence Interval

In this subsection, we propose two additional confidence intervals based on parametric bootstrap methods: (i) the percentile bootstrap method (Boot-p), based on the idea of Efron [49], and (ii) the bootstrap-t method (Boot-t), based on the idea of Hall [50]. Stepwise illustrations of the two methods for obtaining the bootstrap intervals for the reliability R are briefly presented below.
Boot-p method:
  • From the samples $\{X_{1;m_1,n_1,k_1}, \ldots, X_{m_1;m_1,n_1,k_1}\}$, $\{Y_{1;m_2,n_2,k_2}, \ldots, Y_{m_2;m_2,n_2,k_2}\}$, and $\{Z_{1;m_3,n_3,k_3}, \ldots, Z_{m_3;m_3,n_3,k_3}\}$, compute $\hat{\alpha}_1$, $\hat{\alpha}_2$, and $\hat{\alpha}_3$.
  • Generate a bootstrap progressive first-failure type-II censored sample $\{X^*_{1;m_1,n_1,k_1}, \ldots, X^*_{m_1;m_1,n_1,k_1}\}$ from KuD($\lambda$, $\hat{\alpha}_1$) under the censoring scheme $\underline{R}_x$; similarly, generate $\{Y^*_{1;m_2,n_2,k_2}, \ldots, Y^*_{m_2;m_2,n_2,k_2}\}$ from KuD($\lambda$, $\hat{\alpha}_2$) under $\underline{R}_y$, and $\{Z^*_{1;m_3,n_3,k_3}, \ldots, Z^*_{m_3;m_3,n_3,k_3}\}$ from KuD($\lambda$, $\hat{\alpha}_3$) under $\underline{R}_z$. Based on these bootstrap samples, compute the bootstrap estimate of R using (4), say $\hat{R}^*$.
  • Repeat step 2 $N_p$ times.
  • Let $G(x) = P(\hat{R}^* \le x)$ denote the cumulative distribution function of $\hat{R}^*$, and define $\hat{R}_{\mathrm{Boot\text{-}p}}(x) = G^{-1}(x)$ for a given x. The approximate $100(1-\gamma)\%$ confidence interval of R is given by $\left(\hat{R}_{\mathrm{Boot\text{-}p}}(\gamma/2),\ \hat{R}_{\mathrm{Boot\text{-}p}}(1-\gamma/2)\right)$.
Bootstrap-t method:
  • From the samples $\{X_{1;m_1,n_1,k_1}, \ldots, X_{m_1;m_1,n_1,k_1}\}$, $\{Y_{1;m_2,n_2,k_2}, \ldots, Y_{m_2;m_2,n_2,k_2}\}$, and $\{Z_{1;m_3,n_3,k_3}, \ldots, Z_{m_3;m_3,n_3,k_3}\}$, compute $\hat{\alpha}_1$, $\hat{\alpha}_2$, and $\hat{\alpha}_3$.
  • Use $\hat{\alpha}_1$ to generate a bootstrap sample $\{X^*_{1;m_1,n_1,k_1}, \ldots, X^*_{m_1;m_1,n_1,k_1}\}$, $\hat{\alpha}_2$ to generate $\{Y^*_{1;m_2,n_2,k_2}, \ldots, Y^*_{m_2;m_2,n_2,k_2}\}$, and similarly $\hat{\alpha}_3$ to generate $\{Z^*_{1;m_3,n_3,k_3}, \ldots, Z^*_{m_3;m_3,n_3,k_3}\}$, as before. Based on these bootstrap samples, compute the bootstrap estimate of R using Equation (4), say $\hat{R}^*$, and the statistic
    $$ T^* = \frac{\sqrt{m}\,(\hat{R}^* - \hat{R})}{\sqrt{V(\hat{R}^*)}}. $$
  • Repeat step 2 $N_p$ times.
  • Once $N_p$ values of $T^*$ have been obtained, the bounds of the $100(1-\gamma)\%$ confidence interval of R are determined as follows. Suppose $T^*$ has cumulative distribution function $H(x) = P(T^* \le x)$. For a given x, define
    $$ \hat{R}_{\mathrm{Boot\text{-}t}}(x) = \hat{R} + \sqrt{V(\hat{R})/m}\; H^{-1}(x). $$
    The $100(1-\gamma)\%$ boot-t confidence interval of R is obtained as $\left(\hat{R}_{\mathrm{Boot\text{-}t}}(\gamma/2),\ \hat{R}_{\mathrm{Boot\text{-}t}}(1-\gamma/2)\right)$.
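Putting the two bootstrap procedures together, a minimal sketch follows. It reuses r_mle from the earlier sketch, takes the resampling routine gen_sample(alpha, k, removals, rng) as an argument (an assumed interface), and, as a simplification, studentizes the boot-t statistic with the bootstrap standard deviation of $\hat{R}^*$.

```python
import numpy as np

def bootstrap_cis(x, y, z, rx, ry, rz, k1, k2, k3, lam, gen_sample,
                  n_boot=2000, gamma=0.05, seed=None):
    """Boot-p and Boot-t intervals for R from progressive first-failure censored data."""
    rng = np.random.default_rng(seed)
    r_hat, (a1, a2, a3) = r_mle(x, y, z, rx, ry, rz, k1, k2, k3, lam)
    m = len(x)
    r_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = gen_sample(a1, k1, rx, rng)
        yb = gen_sample(a2, k2, ry, rng)
        zb = gen_sample(a3, k3, rz, rng)
        r_star[b], _ = r_mle(xb, yb, zb, rx, ry, rz, k1, k2, k3, lam)
    # Boot-p: percentiles of the bootstrap distribution of R*.
    boot_p = tuple(np.quantile(r_star, [gamma / 2, 1 - gamma / 2]))
    # Boot-t: studentize with the bootstrap standard deviation as a stand-in for sqrt(V(R*)).
    se = r_star.std(ddof=1)
    t_star = np.sqrt(m) * (r_star - r_hat) / se
    q_lo, q_hi = np.quantile(t_star, [gamma / 2, 1 - gamma / 2])
    boot_t = (r_hat + se / np.sqrt(m) * q_lo, r_hat + se / np.sqrt(m) * q_hi)
    return boot_p, boot_t
```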
It is often useful to incorporate prior knowledge about the parameters, which may come in the form of prior data, expert opinion, or some other medium of knowledge, in order to obtain improved estimates of the parameters or of some function of the parameters. Incorporating such prior knowledge into the estimation process is done using a Bayesian approach. Therefore, we next discuss the Bayesian method of estimation in detail, where prior knowledge is incorporated in terms of prior distributions.

3. Bayes Estimation

In this section, we derive the Bayesian inference of R with respect to a symmetric loss function, the balanced squared error (BSE) loss, and an asymmetric loss function, the balanced LINEX (BLINEX) loss, considering the three parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ as random variables.
The balanced loss function (BLF), first introduced by Zellner [51], was further developed by Ahmadi et al. [52] and takes the form
$$ L^{*}(\theta, \delta) = \Omega\, \rho(\delta_o, \delta) + (1-\Omega)\, \rho(\theta, \delta), $$
where $\rho(\theta, \delta)$ is an arbitrary loss function, $\delta_o$ is a chosen target estimate, and the weight satisfies $0 \le \Omega \le 1$. By choosing $\rho(\theta, \delta) = (\delta - \theta)^2$, the BLF reduces to the BSE loss function, of the form
$$ L^{*}(\theta, \delta) = \Omega\,(\delta - \delta_o)^2 + (1-\Omega)\,(\delta - \theta)^2. $$
The associated Bayes estimate of the function G is expressed as
$$ \hat{G}_{BSE} = \Omega\, \hat{G} + (1-\Omega)\, E\!\left(G(\theta) \mid y\right), $$
where $\hat{G}$ is the MLE of G. Furthermore, by choosing $\rho(\theta, \delta) = \exp\left(c(\delta - \theta)\right) - c(\delta - \theta) - 1$, we get the BLINEX loss function, of the form
$$ L^{*}(\theta, \delta) = \Omega\left(\exp\!\left(c(\delta - \delta_o)\right) - c(\delta - \delta_o) - 1\right) + (1-\Omega)\left(\exp\!\left(c(\delta - \theta)\right) - c(\delta - \theta) - 1\right). $$
In this case, the Bayes estimate of G will be
$$ \hat{G}_{BL} = -\frac{1}{c}\ln\!\left(\Omega \exp\!\left(-c\,\delta_o\right) + (1-\Omega)\, E\!\left(\exp\!\left(-c\, G(\theta)\right) \mid y\right)\right), $$
where c, which is taken to be nonzero ($c \ne 0$), is the shape parameter of the BLINEX loss function.
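In code, given the MLE of G and the corresponding posterior expectations, the two balanced estimates are one-liners; this is a sketch with our own function names, taking the target estimate $\delta_o$ to be the MLE as in the expressions above.

```python
import numpy as np

def balanced_se_estimate(g_mle, post_mean_g, omega):
    """Balanced squared-error Bayes estimate: Omega * MLE + (1 - Omega) * E(G | data)."""
    return omega * g_mle + (1 - omega) * post_mean_g

def balanced_linex_estimate(g_mle, post_mean_exp_neg_cg, omega, c):
    """Balanced LINEX Bayes estimate with the MLE as the target estimate delta_o."""
    return -(1.0 / c) * np.log(omega * np.exp(-c * g_mle)
                               + (1 - omega) * post_mean_exp_neg_cg)
```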

3.1. Prior and Posterior Distributions

Prior knowledge is incorporated in terms of prior distributions; here we assume that the three parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ are random variables having independent gamma priors. The joint prior density is then written as
$$ \pi(\alpha_1, \alpha_2, \alpha_3) = \prod_{j=1}^{3} \frac{\alpha_j^{\mu_j-1}}{\Gamma(\mu_j)\,\nu_j^{\mu_j}}\; e^{-\sum_{j=1}^{3} \alpha_j/\nu_j}. \tag{9} $$
The joint posterior density function of α 1 , α 2 and α 3 can be written from (5) and (9) as
$$ \pi^{*}(\alpha_1, \alpha_2, \alpha_3) \propto l(\alpha_1, \alpha_2, \alpha_3)\,\pi(\alpha_1, \alpha_2, \alpha_3) \propto \prod_{j=1}^{3} \frac{k_j^{m_j}\,\alpha_j^{m_j+\mu_j-1}}{\Gamma(\mu_j)\,\nu_j^{\mu_j}}\; e^{-\sum_{j=1}^{3}\alpha_j/\nu_j}\; \lambda^{m_1+m_2+m_3} \prod_{i=1}^{m_1} x_i^{\lambda-1}\left(1 - x_i^{\lambda}\right)^{\alpha_1 k_1 (R_{x_i}+1)-1} \times \prod_{i=1}^{m_2} y_i^{\lambda-1}\left(1 - y_i^{\lambda}\right)^{\alpha_2 k_2 (R_{y_i}+1)-1} \prod_{i=1}^{m_3} z_i^{\lambda-1}\left(1 - z_i^{\lambda}\right)^{\alpha_3 k_3 (R_{z_i}+1)-1}. \tag{10} $$
Analytical computation of the Bayes estimates of R using (10) is difficult. Therefore, we must choose some approximation technique to approximate the corresponding Bayes estimates. First, we apply the Lindley approximation technique for this purpose; however, this technique is limited to point estimation only. Second, we also use the MCMC technique to obtain posterior samples for the parameters and then for R, in order to obtain point as well as interval estimates.

3.2. Lindley’s Approximation

Here, we apply Lindley's [53] approximation method to obtain the approximate Bayes estimates of R under the BSE and BLINEX loss functions, given, respectively, by
$$ \tilde{R}_{BSL} = \Omega\,\hat{R} + (1-\Omega)\Big(\hat{R} + U_1 a_1 + U_2 a_2 + U_3 a_3 + a_4 + a_5 + \tfrac{1}{2}\big(\psi_1(U_1\sigma_{11} + U_2\sigma_{12} + U_3\sigma_{13}) + \psi_2(U_1\sigma_{21} + U_2\sigma_{22} + U_3\sigma_{23}) + \psi_3(U_1\sigma_{31} + U_2\sigma_{32} + U_3\sigma_{33})\big)\Big), $$
$$ \tilde{R}_{BLL} = -\frac{1}{c}\ln\Big(\Omega\, e^{-c\hat{R}} + (1-\Omega)\Big(e^{-c\hat{R}} + U_1 a_1 + U_2 a_2 + U_3 a_3 + a_4 + a_5 + \tfrac{1}{2}\big(\psi_1(U_1\sigma_{11} + U_2\sigma_{12} + U_3\sigma_{13}) + \psi_2(U_1\sigma_{21} + U_2\sigma_{22} + U_3\sigma_{23}) + \psi_3(U_1\sigma_{31} + U_2\sigma_{32} + U_3\sigma_{33})\big)\Big)\Big). $$
For the detailed derivations, see Appendix A.

3.3. Markov Chain Monte Carlo

The Lindley approximation is limited to point estimation only. Therefore, to obtain interval estimates, we use the MCMC technique to generate samples from (10) and then, using the obtained samples, compute the Bayes estimates of R. The conditional posterior distributions of the three model parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ can be expressed, respectively, as
$$ \pi^{*}(\alpha_1 \mid \alpha_2, \alpha_3, x) \propto \alpha_1^{m_1+\mu_1-1}\, e^{-\alpha_1\left[\frac{1}{\nu_1} - k_1 \sum_{i=1}^{m_1}(R_{x_i}+1)\ln\left(1-x_i^{\lambda}\right)\right]}, $$
$$ \pi^{*}(\alpha_2 \mid \alpha_1, \alpha_3, y) \propto \alpha_2^{m_2+\mu_2-1}\, e^{-\alpha_2\left[\frac{1}{\nu_2} - k_2 \sum_{i=1}^{m_2}(R_{y_i}+1)\ln\left(1-y_i^{\lambda}\right)\right]}, $$
$$ \pi^{*}(\alpha_3 \mid \alpha_1, \alpha_2, z) \propto \alpha_3^{m_3+\mu_3-1}\, e^{-\alpha_3\left[\frac{1}{\nu_3} - k_3 \sum_{i=1}^{m_3}(R_{z_i}+1)\ln\left(1-z_i^{\lambda}\right)\right]}, $$
Here, $\pi^{*}(\alpha_1 \mid \alpha_2, \alpha_3, x)$ is a Gamma$\left(m_1+\mu_1,\ 1\Big/\left(\frac{1}{\nu_1} - k_1\sum_{i=1}^{m_1}(R_{x_i}+1)\ln\left(1-x_i^{\lambda}\right)\right)\right)$ density, so we use the Gibbs sampling technique to generate random samples of $\alpha_1$. Similarly, the conditional posterior pdfs of $\alpha_2$ and $\alpha_3$ are Gamma$\left(m_2+\mu_2,\ 1\Big/\left(\frac{1}{\nu_2} - k_2\sum_{i=1}^{m_2}(R_{y_i}+1)\ln\left(1-y_i^{\lambda}\right)\right)\right)$ and Gamma$\left(m_3+\mu_3,\ 1\Big/\left(\frac{1}{\nu_3} - k_3\sum_{i=1}^{m_3}(R_{z_i}+1)\ln\left(1-z_i^{\lambda}\right)\right)\right)$, respectively. Therefore, the Gibbs sampling procedure can be expressed as follows:
Step 1. Choose the MLEs $\hat{\alpha}_1$, $\hat{\alpha}_2$, and $\hat{\alpha}_3$ as the starting values $(\alpha_1^{(0)}, \alpha_2^{(0)}, \alpha_3^{(0)})$ of $\alpha_1$, $\alpha_2$, and $\alpha_3$.
Step 2. Set $i = 1$.
Step 3. Generate $\alpha_1^{(i)}$ from Gamma$\left(m_1+\mu_1,\ 1\Big/\left(\frac{1}{\nu_1} - k_1\sum_{i=1}^{m_1}(R_{x_i}+1)\ln\left(1-x_i^{\lambda}\right)\right)\right)$.
Step 4. Generate $\alpha_2^{(i)}$ from Gamma$\left(m_2+\mu_2,\ 1\Big/\left(\frac{1}{\nu_2} - k_2\sum_{i=1}^{m_2}(R_{y_i}+1)\ln\left(1-y_i^{\lambda}\right)\right)\right)$.
Step 5. Generate $\alpha_3^{(i)}$ from Gamma$\left(m_3+\mu_3,\ 1\Big/\left(\frac{1}{\nu_3} - k_3\sum_{i=1}^{m_3}(R_{z_i}+1)\ln\left(1-z_i^{\lambda}\right)\right)\right)$.
Step 6. Set $i = i + 1$.
Step 7. Compute $R^{(i)} = \dfrac{\alpha_1^{(i)}\alpha_2^{(i)}}{\left(\alpha_2^{(i)}+\alpha_3^{(i)}\right)\left(\alpha_1^{(i)}+\alpha_2^{(i)}+\alpha_3^{(i)}\right)}$.
Step 8. Repeat steps 3–7 N times.
Step 9. The approximate means of $R$ and $e^{-cR}$ are given, respectively, by
$$ \tilde{R}_{SM} = E(R \mid \text{data}) = \frac{1}{N-M}\sum_{i=M+1}^{N} R^{(i)}, \qquad \tilde{R}_{LM} = -\frac{1}{c}\ln E\!\left(e^{-cR} \mid \text{data}\right) = -\frac{1}{c}\ln\!\left[\frac{1}{N-M}\sum_{i=M+1}^{N} e^{-cR^{(i)}}\right], $$
where M is the burn-in period.
Therefore, the Bayes estimates of R based on the BSE and BLINEX loss functions are given, respectively, by
$$ \tilde{R}_{BSM} = \Omega\,\hat{R} + (1-\Omega)\, E(R \mid \text{data}), \qquad \tilde{R}_{BLM} = -\frac{1}{c}\ln\!\left[\Omega\, e^{-c\hat{R}} + (1-\Omega)\, E\!\left(e^{-cR} \mid \text{data}\right)\right]. $$
Using the posterior samples, we construct the $100(1-\gamma)\%$ HPD interval of the reliability R using the widely discussed technique of Chen and Shao [54].
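Since the three conditional posteriors are gamma distributions that do not depend on the other parameters, the Gibbs steps reduce to independent gamma draws. The sketch below (our own function and argument names; mu and nu are the 3-vectors of prior hyper-parameters) produces posterior draws of R and a Chen and Shao [54] style HPD interval; the balanced point estimates then follow as in the expressions above.

```python
import numpy as np

def posterior_draws_r(x, y, z, rx, ry, rz, k1, k2, k3, lam, mu, nu,
                      n_iter=11_000, burn_in=1_000, seed=None):
    """Posterior draws of R = P(X < Y < Z) from the gamma full conditionals of Section 3.3."""
    rng = np.random.default_rng(seed)
    parts = [(np.asarray(x), np.asarray(rx), k1),
             (np.asarray(y), np.asarray(ry), k2),
             (np.asarray(z), np.asarray(rz), k3)]
    shape = np.array([len(t) + mu[j] for j, (t, r, k) in enumerate(parts)])
    # Gamma rate: 1/nu_j - k_j * sum((R_i + 1) * ln(1 - t_i^lam)); scale is its reciprocal.
    rate = np.array([1 / nu[j] - k * np.sum((r + 1) * np.log(1 - t**lam))
                     for j, (t, r, k) in enumerate(parts)])
    alphas = rng.gamma(shape, 1 / rate, size=(n_iter, 3))[burn_in:]
    a1, a2, a3 = alphas.T
    return a1 * a2 / ((a2 + a3) * (a1 + a2 + a3))

def hpd_interval(draws, level=0.95):
    """Shortest interval containing `level` of the sorted draws (Chen and Shao, 1999)."""
    s = np.sort(draws)
    w = int(np.floor(level * len(s)))
    i = np.argmin(s[w:] - s[:len(s) - w])
    return s[i], s[i + w]

# Balanced point estimates from the draws (r_hat is the MLE of R; omega, c as in Section 3):
# r_bsm = omega * r_hat + (1 - omega) * draws.mean()
# r_blm = -(1 / c) * np.log(omega * np.exp(-c * r_hat) + (1 - omega) * np.mean(np.exp(-c * draws)))
```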

4. Simulation Study

In this section, a Monte Carlo simulation study is conducted to compare the performance of the different methods described in the preceding sections. We compare the ML and Bayes estimates, the latter under the squared error loss function (SELF) with gamma informative priors, in terms of mean squared error (MSE). For ease of simulation, we consider the same group size $k = k_1 = k_2 = k_3$, the same number of groups $n = n_1 = n_2 = n_3$, and the same number of failures $m = m_1 = m_2 = m_3$, with the same pre-fixed censoring schemes $\underline{R} = \underline{R}_x = \underline{R}_y = \underline{R}_z$. In the Bayes estimation, we consider the parameter values $\alpha_1 = 1.5662$, $\alpha_2 = 3.5418$, $\alpha_3 = 0.8117$, and $\lambda = 1.56121$, with corresponding hyper-parameters $\mu_1 = 0.2$, $\nu_1 = 0.4$, $\mu_2 = 0.7$, $\nu_2 = 0.9$, $\mu_3 = 0.9$, $\nu_3 = 0.7$. We consider varying choices of the sample size n and of the number of observed failures m, representing 60%, 80%, and 100% of the sample size. To show the behavior in different scenarios, we have considered three progressive first-failure censoring schemes, namely:
Scheme I: $R_1 = n - m$ and $R_i = 0$ for $i \neq 1$;
Scheme II: $R_m = n - m$ and $R_i = 0$ for $i \neq m$;
Scheme III:
$$ R_{\frac{m+1}{2}} = n - m,\ R_i = 0 \ \text{for}\ i \neq \tfrac{m+1}{2}, \ \text{if } m \text{ is odd}; \qquad R_{\frac{m}{2}} = n - m,\ R_i = 0 \ \text{for}\ i \neq \tfrac{m}{2}, \ \text{if } m \text{ is even}. $$
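For reference, the removal vectors for the three schemes can be constructed with a small helper of our own (positions are 1-indexed in the text and 0-indexed in the code); it is reused in the generation sketch further below.

```python
import numpy as np

def censoring_scheme(n, m, scheme):
    """Removal vector R of length m for Schemes I-III described above."""
    R = np.zeros(m, dtype=int)
    if scheme == "I":
        R[0] = n - m                                   # all removals at the first failure
    elif scheme == "II":
        R[-1] = n - m                                  # all removals at the last failure
    elif scheme == "III":
        mid = (m + 1) // 2 if m % 2 else m // 2        # middle position (1-indexed)
        R[mid - 1] = n - m
    return R
```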
We obtain the average MSEs of the MLE and Bayes estimators of R over 1000 progressively first-failure censored samples generated using the algorithm proposed by Balakrishnan and Sandhu [55] with distribution function $1 - (1 - F(x))^k$ from the KuD. The Bayes estimates are computed relative to both the BSE and BLINEX loss functions, with varying values of the shape parameter c of the LINEX loss and various values of Ω. We applied the MCMC technique with 11,000 repetitions, discarding the initial 1000 values as burn-in to avoid any dependence on the starting values. The results of the simulation study are reported in Table 1 and Table 2. Moreover, to observe the behavior of the different CIs for different sample sizes and parameter values, we obtained the expected length and the 95% coverage probability (CP) of the various CIs and Bayesian credible intervals, which are given in Table 3 and Table 4.
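A sketch of this generation step: the Balakrishnan and Sandhu [55] algorithm produces progressively censored Uniform(0,1) order statistics, which are then inverted through the first-failure distribution $1-(1-F(x))^k$ of the KuD. The function name and argument conventions are ours, and the example reuses censoring_scheme from the helper above.

```python
import numpy as np

def pffc_sample(alpha, lam, k, removals, rng):
    """One progressive first-failure censored sample from KuD(lam, alpha), i.e.
    F(x) = 1 - (1 - x^lam)^alpha, with group size k and removal vector `removals`."""
    R = np.asarray(removals, dtype=float)
    m = len(R)
    w = rng.uniform(size=m)
    # Balakrishnan-Sandhu (1995): exponents i + R_m + ... + R_{m-i+1}, i = 1, ..., m.
    e = np.arange(1, m + 1) + np.cumsum(R[::-1])
    v = w ** (1.0 / e)
    u = 1.0 - np.cumprod(v[::-1])                 # progressively censored Uniform(0,1) order stats
    # Invert the first-failure CDF 1 - (1 - x^lam)^(k * alpha).
    return (1.0 - (1.0 - u) ** (1.0 / (k * alpha))) ** (1.0 / lam)

# Example: one sample of m = 12 failures from n = 20 groups of size k = 1 under Scheme I.
rng = np.random.default_rng(0)
print(pffc_sample(alpha=1.5662, lam=1.56121, k=1, removals=censoring_scheme(20, 12, "I"), rng=rng))
```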
Concluding Remarks on the Simulation Results
Table 1, Table 2, Table 3 and Table 4 present the simulation results of the approaches considered in this paper for point and interval estimation of the reliability of the multi-component stress–strength KuD model based on progressive first-failure censored samples. We analyze the MSE, the length of the CIs, and the CP of the confidence intervals in order to compare the various estimation methods. The following conclusions can be drawn from these tables:
  • The MSEs of the reliability estimates for the multi stress–strength KuD based on progressive first-failure censored samples, for both ML and Bayes estimation, decrease as the number of groups n and the effective sample size m increase.
  • In most cases, the MSE decreases as k increases for a fixed censoring scheme.
  • The Bayes estimates, when compared with the ML estimates in terms of MSE, show better performance, with smaller MSE values in all the considered cases.
  • According to the MSE and the confidence intervals, Scheme I is the best scheme in the majority of situations.
  • For both the MCMC and Lindley methods, BLINEX estimation performs better than BSEL estimation.
  • For MCMC under BLINEX, the MSE decreases as c increases.
  • For Lindley's approximation under BLINEX, the MSE decreases as c decreases.
  • Boot-p performs better than boot-t.
  • The complete sample has the smallest MSE and CI length.
  • The Bayesian credible intervals have smaller lengths than the classical interval estimates.
  • The CPs of the asymptotic confidence intervals are somewhat below the nominal level, whereas the boot-p, boot-t, and Bayesian interval estimates show coverage probabilities above the nominal level.

5. Data Analysis and Application

We consider real data sets to illustrate the methods of inference discussed in this article. These strength data sets were analyzed previously by Kundu and Gupta [56] and Surles and Padgett [2]. The first data set is the inverse of the strength, measured in GPa, of carbon fibers tested under tension at a gauge length of 20 mm; the data are 0.762, 0.761, 0.676, 0.644, 0.588, 0.555, 0.537, 0.536, 0.514, 0.511, 0.509, 0.501, 0.499, 0.495, 0.493, 0.487, 0.485, 0.477, 0.467, 0.459, 0.450, 0.446, 0.444, 0.441, 0.440, 0.440, 0.435, 0.435, 0.424, 0.420, 0.420, 0.412, 0.411, 0.411, 0.404, 0.402, 0.398, 0.398, 0.394, 0.392, 0.390, 0.389, 0.387, 0.380, 0.380, 0.379, 0.378, 0.373, 0.371, 0.367, 0.361, 0.361, 0.357, 0.356, 0.355, 0.354, 0.351, 0.347, 0.339, 0.332, 0.326, 0.324, 0.324, 0.323, 0.320, 0.309, 0.291, 0.279, 0.279. The second data set is the inverse of the strength, measured in GPa, of carbon fibers tested under tension at a gauge length of 10 mm; the data are 0.526, 0.469, 0.454, 0.449, 0.443, 0.426, 0.424, 0.417, 0.417, 0.409, 0.407, 0.404, 0.397, 0.397, 0.396, 0.395, 0.388, 0.383, 0.382, 0.382, 0.381, 0.376, 0.374, 0.365, 0.365, 0.350, 0.343, 0.342, 0.340, 0.340, 0.336, 0.334, 0.330, 0.320, 0.319, 0.318, 0.311, 0.310, 0.309, 0.308, 0.306, 0.306, 0.304, 0.300, 0.299, 0.296, 0.293, 0.291, 0.286, 0.286, 0.283, 0.281, 0.281, 0.276, 0.260, 0.258, 0.257, 0.252, 0.249, 0.248, 0.237, 0.228, 0.199. Table 5 shows the ML estimates of the marginal KuD fits with the standard error (SE), Cramer–von Mises (CvM), Anderson–Darling (AD), Akaike information criterion (AIC), and Bayesian information criterion (BIC) statistics. The Kolmogorov–Smirnov (KS) distances and corresponding p-values in Table 5 show that the KuD with equal shape parameters fits the modified data sets reasonably well.
Figure 1 and Figure 2 give the estimated pdfs, CDFs, and PP plots for Data set 1 and Data set 2, respectively.
Table 5, Figure 1 and Figure 2 confirm the fit of the KuD to the data.
The ML and Bayesian estimates of the parameters of the stress-strength reliability model based on progressive first-failure censoring are given in Table 6, with $k = 3$. Scheme 1 is Type-II first-failure censoring with $R_1 = (0^{**17}, 5)$ and $R_2 = (0^{**17}, 3)$, where ** denotes replication of the censoring value.
x1 is 0.279, 0.309, 0.324, 0.332, 0.351, 0.356, 0.361, 0.373, 0.380, 0.389, 0.394, 0.402, 0.411, 0.420, 0.435, 0.441, 0.450, 0.477. x2 is 0.199, 0.248, 0.257, 0.276, 0.283, 0.291, 0.299, 0.306, 0.309, 0.318, 0.330, 0.340, 0.343, 0.365, 0.381, 0.383, 0.396, 0.404.
The scheme 2 is Progressive first failure where R 1 = ( 5 , 0 * * 17 ) and R 2 = ( 3 , 0 * * 17 ) . x1 is 0.279, 0.324, 0.332, 0.351, 0.373, 0.380, 0.389, 0.394, 0.402, 0.411, 0.420, 0.435, 0.441, 0.450, 0.477, 0.493, 0.501, 0.514. x2 is 0.199, 0.248, 0.257, 0.276, 0.283, 0.291, 0.299, 0.306, 0.309, 0.318, 0.330, 0.343, 0.365, 0.381, 0.383, 0.396, 0.417, 0.426.
Table 6 shows that Bayesian estimation is the best estimation method according to the SE and the reliability. Furthermore, Scheme 2 has a reliability of 0.7803 for ML and 0.8055 for the Bayesian method, making it a better scheme than the others. Figure 3 and Figure 4 show MCMC convergence plots for the parameter estimates of the KuD under the different schemes.

6. Conclusions

In this article, we have considered estimation of the reliability R when the observed data are progressive first-failure censored and come from the KuD. To compare the results obtained using the various methods, we computed the MSEs of the estimates of R. In the case of Bayesian estimation, given that the Lindley approximation is limited to point estimation, we also performed the approximation using the MCMC technique. For the different censoring schemes, the MSEs of the ML and Bayesian point estimates are listed in Table 1 and Table 2. For interval estimation, the comparison criteria are the interval length and the coverage probability, tabulated in Table 3 and Table 4. The performance of the estimators of the unknown parameters $\alpha_1$, $\lambda_1$, $\alpha_2$, and $\lambda_2$ and of the stress-strength reliability of the system in practical applications was demonstrated using two real data sets. The goodness of fit for each real data set was examined using the KS test, and the results were satisfactory. The balanced loss functions give efficient Bayesian estimators, as shown by the experimental findings. The Bayesian estimates compare favorably with the ML estimates. When the two approximation methods, Lindley and MCMC, are compared, their performance is quite close in terms of estimated MSEs.

Author Contributions

Methodology, and formal analysis, M.M.Y.; Application of real data, E.M.A.; Software coding, M.M.Y.; Writing—original draft, M.M.Y.; writing—review and editing, M.M.Y., E.M.A.; Mathematical analysis, M.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Academy of Scientific Research & Technology (ASRT), Egypt. Grant No. 6461 under the project Science Up. (ASRT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

This project was supported financially by the Academy of Scientific Research & Technology (ASRT), Egypt. Grant No. 6461 under the project Science Up. (ASRT) is the 2nd affiliation of this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In the three-parameter case $U = U(\alpha_1, \alpha_2, \alpha_3)$, Lindley's approximation reduces to
$$ E\!\left(U(\alpha_1, \alpha_2, \alpha_3) \mid t\right) = U + \left(U_1 a_1 + U_2 a_2 + U_3 a_3 + a_4 + a_5\right) + \frac{1}{2}\Big(\psi_1\left(U_1\sigma_{11} + U_2\sigma_{12} + U_3\sigma_{13}\right) + \psi_2\left(U_1\sigma_{21} + U_2\sigma_{22} + U_3\sigma_{23}\right) + \psi_3\left(U_1\sigma_{31} + U_2\sigma_{32} + U_3\sigma_{33}\right)\Big), \tag{A1} $$
where $U_i = \frac{\partial U}{\partial \xi_i}$, $\sigma_{ij}$ is the $(i, j)$-th element of the variance–covariance matrix $(-L_{ij})^{-1}$, $i, j = 1, 2, 3$, and
$$ a_i = \rho_1\sigma_{i1} + \rho_2\sigma_{i2} + \rho_3\sigma_{i3}, \quad i = 1, 2, 3, $$
$$ a_4 = U_{12}\sigma_{12} + U_{13}\sigma_{13} + U_{23}\sigma_{23}, $$
$$ a_5 = \tfrac{1}{2}\left(U_{11}\sigma_{11} + U_{22}\sigma_{22} + U_{33}\sigma_{33}\right), $$
$$ \psi_1 = \sigma_{11}L_{111} + 2\left(\sigma_{12}L_{121} + \sigma_{13}L_{131} + \sigma_{23}L_{231}\right) + \sigma_{22}L_{221} + \sigma_{33}L_{331}, $$
$$ \psi_2 = \sigma_{11}L_{112} + 2\left(\sigma_{12}L_{122} + \sigma_{13}L_{132} + \sigma_{23}L_{232}\right) + \sigma_{22}L_{222} + \sigma_{33}L_{332}, $$
$$ \psi_3 = \sigma_{11}L_{113} + 2\left(\sigma_{12}L_{123} + \sigma_{13}L_{133} + \sigma_{23}L_{233}\right) + \sigma_{22}L_{223} + \sigma_{33}L_{333}, $$
where $\rho_i = \frac{\partial \rho}{\partial \xi_i}$, $U_{ij} = \frac{\partial^2 U}{\partial \xi_i \partial \xi_j}$, and $L_{ijk} = \frac{\partial^3 L}{\partial \xi_i \partial \xi_j \partial \xi_k}$.
For Lindley approximation using (A1), we obtain some associated expressions as follows
$$ \rho(\alpha_1, \alpha_2, \alpha_3) = \ln \pi(\alpha_1, \alpha_2, \alpha_3) \propto (\mu_1-1)\ln\alpha_1 + (\mu_2-1)\ln\alpha_2 + (\mu_3-1)\ln\alpha_3 - \frac{\alpha_1}{\nu_1} - \frac{\alpha_2}{\nu_2} - \frac{\alpha_3}{\nu_3}, $$
therefore,
$$ \rho_1 = \frac{\partial \rho}{\partial \alpha_1} = \frac{\mu_1-1}{\alpha_1} - \frac{1}{\nu_1}, \quad \rho_2 = \frac{\partial \rho}{\partial \alpha_2} = \frac{\mu_2-1}{\alpha_2} - \frac{1}{\nu_2}, \quad \rho_3 = \frac{\partial \rho}{\partial \alpha_3} = \frac{\mu_3-1}{\alpha_3} - \frac{1}{\nu_3}, $$
and
$$ L_{111} = \frac{\partial^3 L}{\partial \alpha_1^3} = \frac{2m_1}{\alpha_1^3}, \quad L_{222} = \frac{\partial^3 L}{\partial \alpha_2^3} = \frac{2m_2}{\alpha_2^3}, \quad L_{333} = \frac{\partial^3 L}{\partial \alpha_3^3} = \frac{2m_3}{\alpha_3^3}, $$
$$ L_{112} = L_{113} = L_{123} = L_{221} = L_{223} = L_{331} = L_{332} = 0. $$
Furthermore,
$$ \psi_1 = \frac{2}{\alpha_1}, \quad \psi_2 = \frac{2}{\alpha_2}, \quad \psi_3 = \frac{2}{\alpha_3}. $$

References

  1. Weerahandi, S.; Johnson, R.A. Testing reliability in a stress-strength model when X and Y are normally distributed. Technometrics 1992, 34, 83–91. [Google Scholar] [CrossRef]
  2. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr Type X distribution. Lifetime Data Anal. 2001, 7, 187–200. [Google Scholar] [CrossRef]
  3. Al-Mutairi, D.K.; Ghitany, M.E.; Kundu, D. Inferences on stress-strength reliability from Lindley distributions. Commun. Stat.-Theory Methods 2013, 42, 1443–1463. [Google Scholar] [CrossRef]
  4. Rao, G.S.; Aslam, M.; Kundu, D. Burr-XII distribution parametric estimation and estimation of reliability of multicomponent stress-strength. Commun. Stat.-Theory Methods 2015, 44, 4953–4961. [Google Scholar] [CrossRef]
  5. Singh, S.K.; Singh, U.; Yaday, A.; Viswkarma, P.K. On the estimation of stress strength reliability parameter of inverted exponential distribution. Int. J. Sci. World 2015, 3, 98–112. [Google Scholar] [CrossRef] [Green Version]
  6. Almetwally, E.M.; Almongy, H.M. Parameter estimation and stress-strength model of Power Lomax distribution: Classical methods and Bayesian estimation. J. Data Sci. 2020, 18, 718–738. [Google Scholar] [CrossRef]
  7. Alshenawy, R.; Sabry, M.A.; Almetwally, E.M.; Almongy, H.M. Product Spacing of Stress–Strength under Progressive Hybrid Censored for Exponentiated-Gumbel Distribution. Comput. Mater. Contin. 2021, 66, 2973–2995. [Google Scholar] [CrossRef]
  8. Alamri, O.A.; Abd El-Raouf, M.M.; Ismail, E.A.; Almaspoor, Z.; Alsaedi, B.S.; Khosa, S.K.; Yusuf, M. Estimate stress-strength reliability model using Rayleigh and half-normal distribution. Comput. Intell. Neurosci. 2021, 2021. [Google Scholar] [CrossRef] [PubMed]
  9. Sabry, M.A.; Almetwally, E.M.; Alamri, O.A.; Yusuf, M.; Almongy, H.M.; Eldeeb, A.S. Inference of fuzzy reliability model for inverse Rayleigh distribution. AIMS Math. 2021, 6, 9770–9785. [Google Scholar] [CrossRef]
  10. Abu El Azm, W.S.; Almetwally, E.M.; Alghamdi, A.S.; Aljohani, H.M.; Muse, A.H.; Abo-Kasem, O.E. Stress-Strength Reliability for Exponentiated Inverted Weibull Distribution with Application on Breaking of Jute Fiber and Carbon Fibers. Comput. Intell. Neurosci. 2021, 2021. [Google Scholar] [CrossRef]
  11. Okabe, T.; Otsuka, Y. Proposal of a Validation Method of Failure Mode Analyses based on the Stress-Strength Model with a Support Vector Machine. Reliab. Eng. Syst. Saf. 2021, 205, 107247. [Google Scholar] [CrossRef]
  12. Bhattacharyya, G.K.; Johnson, R.A. Estimation of reliability in a multicomponent stress-strength model. J. Am. Stat. Assoc. 1974, 69, 966–970. [Google Scholar] [CrossRef]
  13. Kotb, M.S.; Raqab, M.Z. Estimation of reliability for multi-component stress–strength model based on modified Weibull distribution. Stat. Pap. 2020, 2020, 1–35. [Google Scholar] [CrossRef]
  14. Maurya, R.K.; Tripathi, Y.M. Reliability estimation in a multicomponent stress-strength model for Burr XII distribution under progressive censoring. Braz. J. Probab. Stat. 2020, 34, 345–369. [Google Scholar] [CrossRef]
  15. Mahto, A.K.; Tripathi, Y.M.; Kızılaslan, F. Estimation of Reliability in a Multicomponent Stress–Strength Model for a General Class of Inverted Exponentiated Distributions Under Progressive Censoring. J. Stat. Theory Pract. 2020, 14, 1–35. [Google Scholar] [CrossRef]
  16. Mahto, A.K.; Tripathi, Y.M. Estimation of reliability in a multicomponent stress-strength model for inverted exponentiated Rayleigh distribution under progressive censoring. OPSEARCH 2020, 57, 1043–1069. [Google Scholar] [CrossRef]
  17. Wang, L.; Dey, S.; Tripathi, Y.M.; Wu, S.J. Reliability inference for a multicomponent stress–strength model based on Kumaraswamy distribution. J. Comput. Appl. Math. 2020, 376, 112823. [Google Scholar] [CrossRef]
  18. Wang, L.; Wu, K.; Tripathi, Y.M.; Lodhi, C. Reliability analysis of multicomponent stress–strength reliability from a bathtub-shaped distribution. J. Appl. Stat. 2020, 1–21. [Google Scholar] [CrossRef]
  19. Jha, M.K.; Dey, S.; Alotaibi, R.M.; Tripathi, Y.M. Reliability estimation of a multicomponent stress-strength model for unit Gompertz distribution under progressive Type II censoring. Qual. Reliab. Eng. Int. 2020, 36, 965–987. [Google Scholar] [CrossRef]
  20. Rasekhi, M.; Saber, M.M.; Yousof, H.M. Bayesian and classical inference of reliability in multicomponent stress-strength under the generalized logistic model. Commun. Stat.-Theory Methods 2020, 1–12. [Google Scholar] [CrossRef]
  21. Alotaibi, R.M.; Tripathi, Y.M.; Dey, S.; Rezk, H.R. Bayesian and non-Bayesian reliability estimation of multicomponent stress–strength model for unit Weibull distribution. J. Taibah Univ. Sci. 2020, 14, 1164–1181. [Google Scholar] [CrossRef]
  22. Maurya, R.K.; Tripathi, Y.M.; Kayal, T. Reliability Estimation in a Multicomponent Stress-Strength Model Based on Inverse Weibull Distribution. Sankhya B 2021, 1–38. [Google Scholar] [CrossRef]
  23. Kohansal, A.; Shoaee, S. Bayesian and classical estimation of reliability in a multicomponent stress-strength model under adaptive hybrid progressive censored data. Stat. Pap. 2021, 62, 309–359. [Google Scholar] [CrossRef]
  24. Jana, N.; Bera, S. Interval estimation of multicomponent stress–strength reliability based on inverse Weibull distribution. Math. Comput. Simul. 2022, 191, 95–119. [Google Scholar] [CrossRef]
  25. Chandra, S.; Owen, D.B. On estimating the reliability of a component subject to several different stresses (strengths). Nav. Res. Logist. Quart. 1975, 22, 31–39. [Google Scholar] [CrossRef]
  26. Hlawka, P. Estimation of the Parameter p = P(X < Y < Z); No.11, Ser. Stud. i Materiaty No. 10 Problemy Rachunku Prawdopodobienstwa; Prace Nauk. Inst. Mat. Politechn.: Wroclaw, Poland, 1975; pp. 55–65. (In Polish) [Google Scholar]
  27. Singh, N. On the estimation of Pr(X1 < Y < X2). Commun. Statist. Theory Meth. 1980, 9, 1551–1561. [Google Scholar]
  28. Dutta, K.; Sriwastav, G.L. An n-standby system with P(X < Y < Z). IAPQR Trans. 1986, 12, 95–97. [Google Scholar]
  29. Ivshin, V.V. On the estimation of the probabilities of a double linear inequality in the case of uniform and two-parameter exponential distributions. J. Math. Sci. 1998, 88, 819–827. [Google Scholar] [CrossRef]
  30. Hanagal, D.D. Estimation of system reliability in multicomponent series stress—strength model. J. Indian Statist. Assoc. 2003, 41, 1–7. [Google Scholar]
  31. Waegeman, W.; De Baets, B.; Boullart, L. On the scalability of ordered multi-class ROC analysis. Comput. Statist. Data Anal. 2008, 52, 33–71. [Google Scholar] [CrossRef]
  32. Chumchum, D.; Munindra, B.; Jonali, G. Cascade System with Pr(X < Y < Z). J. Inform. Math. Sci. 2013, 5, 37–47. [Google Scholar]
  33. Pan, G.; Wang, X.; Zhou, W. Nonparametric statistical inference for P(X < Y < Z). Indian J. Stat. 2013, 75, 118–138. [Google Scholar]
  34. Patowary, A.N.; Sriwastav, G.L.; Hazarika, J. Inference of R = P(X < Y < Z) for n-Standby System: A Monte-Carlo Simulation Approach. J. Math. 2016, 12, 18–22. [Google Scholar]
  35. Saini, S.; Tomer, S.; Garg, R. On the reliability estimation of multicomponent stress–strength model for Burr XII distribution using progressively first-failure censored samples. J. Stat. Comput. Simul. 2021, 1–38. [Google Scholar] [CrossRef]
  36. Kohansal, A.; Fernández, A.J.; Pérez-González, C.J. Multi-component stress–strength parameter estimation of a non-identical-component strengths system under the adaptive hybrid progressive censoring samples. Statistics 2021, 1–38. [Google Scholar] [CrossRef]
  37. Hassan, M.K. On Estimating Standby Redundancy System in a MSS Model with GLFRD Based on Progressive Type II Censoring Data. Reliab. Theory Appl. 2021, 16, 206–219. [Google Scholar]
  38. Wu, S.J.; Kus, C. On estimation based on progressive first-failure-censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670. [Google Scholar] [CrossRef]
  39. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media Birkhauser Boston: Cambridge, MA, USA, 2000. [Google Scholar]
  40. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88. [Google Scholar] [CrossRef]
  41. Jones, M.C. Kumaraswamy’s distribution: A beta-type distribution with some tractability advantages. J. Statist. Methodol. 2009, 6, 70–81. [Google Scholar] [CrossRef]
  42. Golizadeh, A.; Sherazi, M.A.; Moslamanzadeh, S. Classical and Bayesian estimation on Kumaraswamy distribution using grouped and ungrouped data under difference of loss functions. J. Appl. Sci. 2011, 11, 2154–2162. [Google Scholar] [CrossRef]
  43. Sindhu, T.N.; Feroze, N.; Aslam, M. Bayesian analysis of the Kumaraswamy distribution under failure censoring sampling scheme. Int. J. Adv. Sci. Technol. 2013, 51, 39–58. [Google Scholar]
  44. Sharaf EL-Deen, M.M.; AL-Dayian, G.R.; EL-Helbawy, A.A. Statistical inference for Kumaraswamy distribution based on generalized order statistics with applications. J. Adv. Math. Comput. Sci. 2014, 4, 1710–1743. [Google Scholar]
  45. Wang, L. Inference for the Kumaraswamy distribution under -record values. J. Comput. Appl. Math. 2017, 321, 246–260. [Google Scholar] [CrossRef]
  46. Kumar, M.; Singh, S.K.; Singh, U.; Pathak, A. Empirical Bayes estimator of parameter, reliability and hazard rate for Kumaraswamy distribution. Life Cycle Reliab. Saf. Eng. 2019, 8, 243–256. [Google Scholar] [CrossRef]
  47. Fawzy, M.A. Prediction of Kumaraswamy distribution in constant-stress model based on type-I hybrid censored data. Stat. Anal. Data Min. ASA Data Sci. J. 2020, 13, 205–215. [Google Scholar] [CrossRef]
  48. Ferguson, T. A Course in Large Sample Theory. In Chapman & Hall Texts in Statistical Science Series; Taylor & Francis: Milton Park, VA, USA, 1996. [Google Scholar]
  49. Efron, B. The Jackknife, the Bootstrap and other Resampling Plans. In CBMS-NSF Regional Conference Series in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1982; Volume 38. [Google Scholar]
  50. Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953. [Google Scholar] [CrossRef]
  51. Zellner, A. Bayesian and non-Bayesian estimation using balanced loss functions. In Statistical Decision Theory and Methods; Berger, J.O., Gupta, S.S., Eds.; Springer: New York, NY, USA, 1994; pp. 337–390. [Google Scholar]
  52. Ahmadi, J.; Jozani, M.J.; Marchand, E.; Parsian, A. Bayes estimation based on k- record data from a general class of distributions under balanced Type loss functions. J. Stat. Plan. Inference 2009, 139, 1180–1189. [Google Scholar] [CrossRef]
  53. Lindley, D.V. Approximate Bayesian method. Trab. Estad. 1980, 31, 223–237. [Google Scholar] [CrossRef]
  54. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar]
  55. Balakrishnan, N.; Sandhu, R.A. A simple simulational algorithm for generating progressive Type-II censored samples. Am. Stat. 1995, 49, 229–230. [Google Scholar]
  56. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280. [Google Scholar]
Figure 1. Plots of estimated pdfs of distributions for first Data set.
Figure 2. Plots of estimated pdfs of distributions for second Data set.
Figure 3. Convergence plots of MCMC for parameter estimates of this model when complete sample.
Figure 4. Convergence plots of MCMC for parameter estimates of this model when scheme 1.
Table 1. MSE of the estimates of R with Ω = 0.0. Columns: n, m, k, censoring scheme (Sch); ML; Bayes via Lindley's approximation: BSEL and BLINEX with c = −2, 0.5, 2; Bayes via MCMC: BSEL and BLINEX with c = −2, 0.5, 2.
20121I0.003230.001630.001570.001650.001690.002070.003090.001940.00167
II0.003280.001670.001610.001680.001730.002080.003030.001950.00169
III0.003330.001700.001640.001710.001760.002130.003140.001990.00171
16 I0.002350.001430.001400.001440.001470.001660.002220.001580.0014
II0.002380.001360.001320.001370.001390.001610.002220.001520.00133
III0.002520.001520.001480.001530.001560.001770.002390.001680.00148
20 0.002020.001290.001260.001290.001310.001450.001890.001380.00122
4024 I0.001580.001140.001130.001140.001150.001240.001560.001190.00107
II0.001500.001030.001020.001030.001050.001140.001450.001080.00097
III0.001610.001140.001120.001140.001150.001240.001550.001190.00107
32 I0.001170.000900.000900.000900.000910.000950.001120.000920.00085
II0.001260.000960.000960.000970.000970.001020.001200.000990.0009
III0.001130.000890.000880.000890.000900.000940.001120.000910.00083
40 0.000990.000800.000790.000800.000800.000830.000950.000810.00075
20123I0.003130.001650.001590.001660.001710.002060.003080.001930.00166
II0.003160.001550.001490.001570.001610.001970.002920.001840.00159
III0.003440.001690.001630.001700.001750.002120.003110.001990.00172
16 I0.002300.001400.001370.001410.001440.001640.002260.001550.00135
II0.002540.001470.001430.001480.001510.001730.002340.001640.00144
III0.002320.001490.001460.001500.001530.001730.002380.001640.00144
2020 0.001890.001330.001310.001330.001350.001480.001910.001410.00126
4024 I0.001900.001280.001270.001290.001300.001400.001730.001350.00122
II0.001620.001150.001140.001160.001170.001260.001570.001200.00108
III0.001580.001160.001150.001160.001170.001260.001590.001210.00109
32 I0.001380.001030.001020.001030.001040.001090.001290.001050.00096
II0.001190.000920.000910.000920.000930.000970.001160.000940.00086
III0.001140.000870.000860.000870.000870.000920.001080.000890.00082
4040 0.001030.000830.000820.000830.000830.000860.000980.000840.00078
Table 2. MSE of the estimates of R with Ω = 0.5. Columns: n, m, k, censoring scheme (Sch); ML; Bayes via Lindley's approximation: BSEL and BLINEX with c = −2, 0.5, 2; Bayes via MCMC: BSEL and BLINEX with c = −2, 0.5, 2.
20121I0.003240.002110.002090.002120.002140.002510.003010.002280.00172
II0.003380.002250.002230.002260.002280.002650.003130.002420.00186
III0.003420.002210.002180.002220.002240.002620.003130.002390.00182
16 I0.002550.001880.001870.001880.001890.002070.002360.001950.00160
II0.002290.001700.001690.001700.001710.001870.002150.001760.00144
III0.002210.001630.001620.001630.001640.001800.002080.001690.00137
20 0.001800.001410.001400.001410.001410.001510.001700.001440.00120
4024 I0.001480.001210.001210.001210.001210.001270.001400.001220.00106
II0.001640.001360.001360.001360.001360.001430.001560.001380.00121
III0.001620.001330.001320.001330.001330.001390.001530.001340.00118
32 I0.001100.000950.000950.000950.000950.000980.001050.000950.00087
II0.001140.001000.001000.001000.001000.001030.001110.001000.00091
III0.001100.000960.000960.000960.000960.000980.001060.000960.00088
40 0.000970.000860.000860.000860.000860.000880.000930.000860.00080
20123I0.003060.002040.002020.002050.002070.002410.002910.002190.00165
II0.003080.002040.002010.002040.002060.002410.002890.002190.00166
III0.003100.002080.002050.002090.002100.002440.002940.002230.00169
16 I0.002340.001760.001750.001760.001770.001940.002240.001820.00147
II0.002360.001800.001790.001800.001810.001980.002270.001860.00152
III0.002590.001910.001900.001910.001920.002110.002440.001980.00158
2020 0.001880.001470.001460.001470.001470.001570.001760.001500.00127
4024 I0.001640.001370.001360.001370.001370.001430.001570.001380.00121
II0.001600.001310.001310.001310.001310.001370.001510.001320.00116
III0.001670.001370.001370.001370.001370.001440.001580.001390.00121
32 I0.001110.000950.000950.000950.000950.000980.001050.000960.00087
II0.001200.001030.001030.001030.001030.001060.001140.001030.00093
III0.001250.001070.001070.001070.001070.001100.001180.001070.00098
4040 0.000940.000840.000840.000840.000840.000860.000910.000840.00078
Table 3. Lengths and CPs of 95% CIs for R estimates with Ω = 0.0. Columns: n, m, k, censoring scheme (Sch); then Length and CP for each of the ML, Boot-p, Boot-t, and Bayes intervals.
20121I0.203410.9060.2264810.253000.9980.194280.949
II0.202680.9090.2250810.253180.9960.194070.959
III0.201050.8850.2259310.250520.9950.193080.947
16 I0.176690.9220.1927010.211950.9980.172920.958
II0.176920.9240.1929410.212430.9980.173300.960
III0.175950.8980.1930410.210510.9970.172250.942
20 0.158330.8990.1717510.185950.9990.157580.940
4024 I0.143720.9180.1567710.164880.9990.144660.951
II0.145190.9330.1560310.1671710.145630.969
III0.144640.9120.1562610.1665810.145200.952
32 I0.125910.9300.1350810.1422710.128110.963
II0.125230.9240.1351610.1416110.127750.961
III0.124930.9270.1351810.1405910.127380.958
40 0.112710.9250.1208510.1258610.115660.949
20123I0.202070.9090.2264010.249590.9970.193230.953
II0.203210.9210.2246010.2536410.194470.974
III0.202700.9120.2251910.254150.9930.194400.954
16 I0.175490.9170.1928410.2094210.172050.964
II0.175800.9090.1926310.211830.9980.172500.956
III0.174550.9110.1935010.207500.9970.171210.956
20 0.156700.9080.1714010.182170.9970.155970.944
4024 I0.144480.8980.1562610.1669710.145390.942
II0.144150.9230.1564910.1653610.144950.948
III0.143750.9170.1563510.164460.9990.144610.951
32 I0.125540.9090.1349610.1419210.127960.951
II0.125370.9240.1350810.1414910.127750.962
III0.125760.9270.1350510.1421210.128200.958
40 0.112330.9150.1209310.1252310.115460.944
Table 4. Lengths and CPs of 95% CIs for R estimates with Ω = 0.5. Columns: n, m, k, censoring scheme (Sch); then Length and CP for each of the ML, Boot-p, Boot-t, and Bayes intervals.
20121I0.202570.9050.2253410.252130.9970.193960.958
II0.201580.9070.2252110.251240.9960.193360.961
III0.203550.9060.2270510.255110.9970.194640.951
16 I0.176760.9140.1938610.212390.9960.173080.955
II0.176520.9190.1931610.210970.9980.172780.962
III0.176160.9160.1927510.210650.9970.172780.962
20 0.158590.9290.1715110.185210.9980.157540.968
4024 I0.144960.9320.1563710.1669310.145520.964
II0.143880.9170.1561410.165650.9990.144790.957
III0.144430.9140.1562410.166300.9990.145100.957
32 I0.125340.9420.1351810.1417410.127830.971
II0.124780.9340.1349210.1400910.127130.956
III0.125200.9380.1351710.1411610.127600.963
40 0.112400.9280.1208010.1252110.115460.956
20123I0.201400.9160.2251110.249160.9950.192970.956
II0.202960.9020.2255510.252740.9950.194100.962
III0.201580.9170.2261710.249710.9990.192990.954
16 I0.175440.9070.1922110.208730.9940.171790.948
II0.174110.9120.1930110.207220.9980.171110.948
III0.175490.9010.1932410.210330.9950.172010.952
20 0.158120.9140.1713110.1853310.157410.961
4024 I0.144160.9090.1564510.165550.9990.144730.943
II0.144910.9140.1564310.167120.9980.145440.957
III0.144490.9130.1561910.1662710.145160.952
32 I0.125980.9320.1349810.1421710.128260.962
II0.125390.9180.1353910.141720.9990.127780.955
III0.125930.9110.1350910.1426610.128120.945
4040 0.112210.9290.1207710.124910.9990.115160.958
Table 5. MLE with SE and different measures.

| Data | Parameter | Estimates | SE | KS | p-Value | CvM | AD | AIC | BIC |
|---|---|---|---|---|---|---|---|---|---|
| x1 | α | 3.9923 | 0.3559 | 0.1439 | 0.1150 | 0.4242 | 2.7357 | −106.9507 | −102.4825 |
|  | λ | 19.8261 | 5.2178 |  |  |  |  |  |  |
| x2 | α | 4.9097 | 0.3914 | 0.0861 | 0.7387 | 0.1062 | 0.5857 | −153.7927 | −149.5065 |
|  | λ | 134.8420 | 50.0833 |  |  |  |  |  |  |
Table 6. MLE and Bayesian estimation method for stress-strength reliability.

| Scheme | Parameter | MLE Estimates | MLE SE | MLE R | Bayesian Estimates | Bayesian SE | Bayesian R |
|---|---|---|---|---|---|---|---|
| Complete | α1 | 3.9891 | 0.3556 | 0.7344 | 4.0046 | 0.3115 | 0.7418 |
|  | λ1 | 19.7760 | 5.2003 |  | 20.3114 | 4.6123 |  |
|  | α2 | 5.1278 | 0.4444 |  | 5.3213 | 0.4118 |  |
|  | λ2 | 169.5333 | 71.7588 |  | 218.8506 | 68.1867 |  |
| 1 | α1 | 6.3335 | 1.2231 | 0.7487 | 5.9833 | 0.7817 | 0.7723 |
|  | λ1 | 59.9327 | 61.2251 |  | 49.1219 | 29.0219 |  |
|  | α2 | 5.6170 | 0.9304 |  | 5.7488 | 0.7462 |  |
|  | λ2 | 100.6982 | 94.3161 |  | 137.1196 | 87.5453 |  |
| 2 | α1 | 4.5667 | 0.7452 | 0.7802 | 4.7140 | 0.5886 | 0.8055 |
|  | λ1 | 10.8924 | 6.1630 |  | 12.8144 | 5.4122 |  |
|  | α2 | 5.1437 | 0.6808 |  | 5.4287 | 0.6144 |  |
|  | λ2 | 60.6901 | 39.9974 |  | 95.0598 | 31.6362 |  |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
