Article

Estimation of the Reliability of a Stress–Strength System from Poisson Half Logistic Distribution

1 School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China
2 College of Mechanical and Electrical Engineering, Guangdong University of Petrochemical Technology, Maoming 525000, China
3 School of Control and Engineering, Northeastern University, Qinhuangdao 066004, China
* Authors to whom correspondence should be addressed.
Entropy 2020, 22(11), 1307; https://doi.org/10.3390/e22111307
Submission received: 11 October 2020 / Revised: 9 November 2020 / Accepted: 12 November 2020 / Published: 17 November 2020

Abstract: This paper discusses the estimation of the stress–strength reliability parameter R = P(Y < X) based on complete samples, when the stress and strength are two independent Poisson half logistic (PHLD) random variables. We address the estimation of R in the general case and when the scale parameter is common. Both classical and Bayesian estimation (BE) techniques for R are studied. The maximum likelihood estimator (MLE) and its asymptotic distribution are obtained, and an approximate asymptotic confidence interval for R is computed using the asymptotic distribution. The non-parametric percentile bootstrap and student's bootstrap confidence intervals for R are discussed. The Bayes estimators of R are computed using gamma priors and discussed under various loss functions, namely the square error loss function (SEL), absolute error loss function (AEL), linear exponential loss function (LINEX), general entropy loss function (GEL), and maximum a posteriori (MAP). The Metropolis–Hastings algorithm is used to sample from the posterior distributions of the estimators of R. The highest posterior density (HPD) credible interval is constructed based on the SEL. Monte Carlo simulations are used to analyze the performance of the MLE and Bayes estimators numerically; the results were quite satisfactory in terms of mean square error (MSE) and confidence intervals. Finally, two real data studies demonstrate the performance of the proposed estimation techniques in practice and illustrate that the PHLD is a good candidate in reliability studies.

1. Introduction

In the context of the mechanical reliability of a system or material, it is very important to study the system performance referred to as the stress–strength parameter. Suppose a component is subjected to a stress X and has strength Y; then R = P(Y < X) defines the system performance, and it is called the stress–strength parameter. The system fails if and only if the applied stress is greater than the strength, so a good design in practice is one in which the strength is always greater than the expected stress. [1] pointed out that numerical values of R make more sense to researchers, particularly those in the medical profession, and that R can be estimated under many distributional assumptions, not necessarily normality, thus giving rise to the use of distributions other than the normal when normality is clearly inappropriate. In statistical mechanics, inference about the stress–strength parameter based on complete or censored samples has attracted many researchers over the decades, and the problem of estimating R under different conditions has been widely studied. R is usually of greatest interest in reliability studies; nevertheless, it is an important measure in fields other than reliability, since it quantifies the difference between two populations. For example, R is called a measure of the treatment effect when Y is the response of a control group and X represents a treatment group. When Y is the strength of a rocket chamber and X is the maximal chamber pressure generated when a solid propellant is ignited, R is the probability that the engine will be fired successfully [2]. R has also been used to compare the strength of two types of steel [3]; more applications of the stress–strength model in engineering, biomedical sciences, and finance can be found in ([4] Chap. 7).
Many authors have studied statistical inference for the stress–strength model R from different viewpoints. For instance, for independent random variables X and Y that follow: half logistic distribution [5], Burr type X distribution [6], normal distribution [7,8], skew normal distribution [9,10], generalized gamma distribution [11,12], logistic distribution [13], generalized logistic distribution [14], Laplace distribution [15], generalized exponential distribution [16], two-parameter exponential distribution [17,18], exponential random variables with a common location parameter [19], generalized quadratic hazard rate distribution [20], Fréchet distribution [21], power Lindley distribution [22], quadratic hazard rate-geometric distribution [23], generalized exponential Poisson [24], beta distribution [25], beta-Erlang truncated exponential [26], exponentiated half logistic-Poisson [27], Pareto distribution [28], and Weibull distribution [29,30,31], among others.
From other viewpoints, ref. [32] estimated R for the Weibull distribution based on hybrid censored data, ref. [33] considered the estimation of R for Weibull random variables based on progressively type-II censored data, ref. [34] discussed the estimation of R for the generalized exponential distribution based on record data, ref. [35] investigated the estimation of R for the Burr type XII distribution under hybrid progressive censored samples, and ref. [36] estimated R for independent Lomax variables under type-II hybrid censored samples. Ref. [37] introduced a new flexible two-parameter lifetime model called the Poisson half logistic distribution (PHLD). The cumulative distribution function F(x) and the density function f(x) of a random variable X > 0 having the PHLD with parameters α, λ > 0 are given by
$$F(x) = \frac{e^{\lambda\Delta(x)} - 1}{e^{\lambda} - 1},$$
where $\Delta(x) = \frac{1 - e^{-\alpha x}}{1 + e^{-\alpha x}}$, and
$$f(x) = \frac{2\alpha\lambda\, e^{-\alpha x + \lambda\Delta(x)}}{(e^{\lambda} - 1)\left[1 + e^{-\alpha x}\right]^{2}}, \tag{1}$$
respectively. The quantile function of the PHLD can be used for random number generation by sampling p from the uniform (0, 1) distribution.
Proposition 1
([37]). The pth quantile of the PHLD is given by
$$x_p = -\frac{1}{\alpha}\log\!\left(\frac{1-\xi}{1+\xi}\right), \quad 0 < p < 1, \tag{2}$$
where $\xi = \frac{1}{\lambda}\log\!\left[p\,(e^{\lambda}-1) + 1\right]$.
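Equation (2) lends itself directly to inverse-transform sampling. Below is a minimal R sketch (the function name rphld is ours, not from [37]):

# Inverse-transform sampling from PHLD(alpha, lambda) via the quantile
# function (2): draw p ~ U(0, 1), compute xi, then x_p.
rphld <- function(n, alpha, lambda) {
  p  <- runif(n)
  xi <- log(p * (exp(lambda) - 1) + 1) / lambda   # xi = (1/lambda) log[p(e^lambda - 1) + 1]
  -log((1 - xi) / (1 + xi)) / alpha               # x_p of Proposition 1
}
x <- rphld(1000, alpha = 1.5, lambda = 0.8)       # illustrative draw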
Proposition 2
([37]). If the random variable X has the PHLD with pdf (1), then the rth moment of X is given, for r = 1, 2, 3, …, by
$$\mu_r' = E[X^r] \approx \sum_{l=0}^{N} \varpi_l\, \frac{2}{(1-t_l)^{2}}\left(\frac{1+t_l}{1-t_l}\right)^{r} f\!\left(\frac{1+t_l}{1-t_l}\right), \tag{3}$$
where the $t_l$ and $\varpi_l$ are the zeros and the corresponding Christoffel numbers of the Legendre–Gauss quadrature formula on the interval $(-1, 1)$; see [38].
The rth moment of X can also be represented, in a similar way to [39], by using the following series expansion in (3). For real and non-integer n and |z| < a, we have
$$(a+z)^n = \sum_{i=0}^{\infty}\binom{n}{i} a^{n-i} z^{i}.$$
After some algebra, the rth moment becomes
$$E[X^r] = \sum_{i=0}^{\infty}\sum_{w=0}^{\infty}\binom{i+w+1}{w}\frac{2(-1)^{r+w}\lambda^{i+1}}{(e^{\lambda}-1)\,\alpha^{r}\, i!}\, B_{0,r}(i+1,\, w+1),$$
where $B_{k,d}(a,b) = \frac{\partial^{k+d}B(a,b)}{\partial a^{k}\,\partial b^{d}}$ is the partial derivative of the beta function B(a, b).
The PHLD has received much attention in recent years as a model for various applications; one can see the performance of the PHLD in an application to the remission times (in months) of a random sample of 128 bladder cancer patients [40], and in the analysis of failure times of ball bearings in millions of revolutions [37].
In this paper, we aim at the estimation of the stress–strength parameter R from independent random variables with a PHLD distribution; the classical maximum likelihood method and Bayesian estimation techniques are discussed and analyzed numerically by simulation studies. Two real data analyses are provided for illustration.
The rest of the paper is organized as follows. In Section 2, we provide the estimation of R in the general case: its maximum likelihood estimation, asymptotic distribution, and confidence interval; the bootstrap confidence intervals are also considered. In Section 3, the estimation of R with one common parameter is studied, and again the maximum likelihood estimation (MLE), asymptotic distribution, confidence interval (CI), and bootstrap confidence intervals are analyzed. In Section 4, Bayes estimation of R, both in the general case and in the case of a common scale parameter, is proposed; the estimation of R under various loss functions, namely the square error loss function (SEL), absolute error loss function (AEL), maximum a posteriori (MAP), linear exponential loss function (LINEX), and general entropy loss function (GEL), is discussed, and the highest posterior density (HPD) credible interval for R is constructed. In Section 5, the estimation techniques are analyzed numerically by simulation studies. Two real data studies consisting of four different data sets are given for illustration in Section 6. In Section 7, we provide a conclusion.

2. Estimation of R in General Case

In this section, we derive the expression for R in the general case and its estimation by maximum likelihood. The asymptotic confidence interval and the bootstrap confidence intervals of R are discussed.
Let X ∼ PHLD1(α1, λ1) and Y ∼ PHLD2(α2, λ2), and let f1(x) be the density of X and F2(y) the cumulative distribution of Y, given by
$$f_1(x) = \frac{2\alpha_1\lambda_1\, e^{-\alpha_1 x + \lambda_1\Delta_1(x)}}{(e^{\lambda_1}-1)\left[1+e^{-\alpha_1 x}\right]^{2}}, \quad x, \alpha_1, \lambda_1 > 0,$$
where $\Delta_1(x) = \frac{1-e^{-\alpha_1 x}}{1+e^{-\alpha_1 x}}$, and
$$F_2(y) = \frac{e^{\lambda_2\Delta_2(y)} - 1}{e^{\lambda_2}-1}, \quad y, \alpha_2, \lambda_2 > 0,$$
where $\Delta_2(y) = \frac{1-e^{-\alpha_2 y}}{1+e^{-\alpha_2 y}}$; then the stress–strength parameter R is derived as
$$\begin{aligned} R &= \int_0^\infty f_1(x)F_2(x)\,dx \\ &= \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^\infty \frac{e^{-\alpha_1 x+\lambda_1\Delta_1(x)}\left(e^{\lambda_2\Delta_2(x)}-1\right)}{\left[1+e^{-\alpha_1 x}\right]^2}\,dx \\ &= \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^\infty \frac{e^{-\alpha_1 x+\lambda_1\Delta_1(x)}\, e^{\lambda_2\Delta_2(x)}}{\left[1+e^{-\alpha_1 x}\right]^2}\,dx - \frac{1}{e^{\lambda_2}-1}. \end{aligned}$$
Let $u = e^{-x}$; then $u^{\alpha_1} = e^{-\alpha_1 x}$ and $u^{\alpha_2} = e^{-\alpha_2 x}$, thus
$$R = \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^1 \frac{u^{\alpha_1-1}}{(1+u^{\alpha_1})^2}\, e^{\lambda_1\frac{1-u^{\alpha_1}}{1+u^{\alpha_1}}}\, e^{\lambda_2\frac{1-u^{\alpha_2}}{1+u^{\alpha_2}}}\,du - \frac{1}{e^{\lambda_2}-1}. \tag{4}$$
The above integral can be computed numerically, but we can also represent R in a series form by solving the integral part as follows. Define B as
$$B = \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^1 \frac{u^{\alpha_1-1}}{(1+u^{\alpha_1})^2}\, e^{\lambda_1\frac{1-u^{\alpha_1}}{1+u^{\alpha_1}}}\, e^{\lambda_2\frac{1-u^{\alpha_2}}{1+u^{\alpha_2}}}\,du.$$
Let $v = 1-u^{\alpha_1}$; this implies $du = -dv/(\alpha_1 u^{\alpha_1-1})$ and $u^{\alpha_2} = (1-v)^{\alpha_2/\alpha_1}$, therefore
$$B = \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^1 \frac{1}{(2-v)^2}\, e^{\lambda_1\frac{v}{2-v}}\, e^{\lambda_2\frac{1-(1-v)^{\alpha_2/\alpha_1}}{1+(1-v)^{\alpha_2/\alpha_1}}}\,dv.$$
Recall that for |z| < 1,
$$(1-z)^{-s} = \sum_{k=0}^{\infty}\binom{s+k-1}{k}\, z^k, \tag{5}$$
and by the exponential expansion we get
$$B = \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\sum_{i,j=0}^{\infty}\frac{\lambda_1^i\lambda_2^j}{i!\,j!}\int_0^1 \frac{v^i}{(2-v)^{2+i}}\left(\frac{1-(1-v)^{\alpha_2/\alpha_1}}{1+(1-v)^{\alpha_2/\alpha_1}}\right)^{j} dv;$$
by the expansion in (5) (with z = -(1-v), noting that 2-v = 1+(1-v)) and the generalized binomial expansion we obtain
$$B = \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\sum_{i,j=0}^{\infty}\sum_{k,r=0}^{\infty}\sum_{l=0}^{j}\binom{i+k+1}{k}\binom{j+r-1}{r}\binom{j}{l}(-1)^{k+l+r}\,\frac{\lambda_1^i\lambda_2^j}{i!\,j!}\int_0^1 v^i (1-v)^{\frac{\alpha_2}{\alpha_1}(r+l)+k}\,dv,$$
thus
$$B = \sum_{i,j=0}^{\infty}\sum_{k,r=0}^{\infty} C_{i,j,k,l,r}\, B\!\left(i+1,\ \tfrac{\alpha_2}{\alpha_1}(r+l)+k+1\right), \tag{6}$$
where $C_{i,j,k,l,r} = \sum_{l=0}^{j}\binom{i+k+1}{k}\binom{j+r-1}{r}\binom{j}{l}\frac{2(-1)^{k+l+r}\lambda_1^{i+1}\lambda_2^{j}}{i!\,j!\,(e^{\lambda_1}-1)(e^{\lambda_2}-1)}$.
Hence, by putting (6) in (4), R can be approximated as
$$R = \sum_{i,j=0}^{\infty}\sum_{k,r=0}^{\infty} C_{i,j,k,l,r}\, B\!\left(i+1,\ \tfrac{\alpha_2}{\alpha_1}(r+l)+k+1\right) - \frac{1}{e^{\lambda_2}-1}. \tag{7}$$
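In practice, instead of truncating the quadruple series in (7), R can also be evaluated by one-dimensional quadrature applied directly to (4); a minimal R sketch of that approach (the name R_general is ours):

# Numerical evaluation of R = P(Y < X) from the integral form (4).
# expm1(z) computes exp(z) - 1 accurately for small z.
R_general <- function(a1, l1, a2, l2) {
  D <- function(a, x) (1 - exp(-a * x)) / (1 + exp(-a * x))    # Delta(x)
  g <- function(x) exp(-a1 * x + l1 * D(a1, x)) * expm1(l2 * D(a2, x)) /
    (1 + exp(-a1 * x))^2
  2 * a1 * l1 / (expm1(l1) * expm1(l2)) * integrate(g, 0, Inf)$value
}
R_general(1.5, 0.8, 1.0, 1.2)    # example evaluation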

2.1. Maximum Likelihood Estimation

Suppose $x_1, x_2, \dots, x_{n_1}$ is a random sample of size $n_1$ from PHLD1$(\alpha_1, \lambda_1)$ and $y_1, y_2, \dots, y_{n_2}$ is an independent random sample of size $n_2$ from PHLD2$(\alpha_2, \lambda_2)$. The log-likelihood function $L(\alpha_1,\alpha_2,\lambda_1,\lambda_2) = L(\theta)$ is given by (8) below, where $\theta = (\alpha_1,\alpha_2,\lambda_1,\lambda_2)^T$ is the vector of parameters.
$$\begin{aligned} \log L &= \sum_{i=1}^{n_1}\log f_X(x_i) + \sum_{j=1}^{n_2}\log f_Y(y_j) \\ &= (n_1+n_2)\log 2 + n_1\log\alpha_1 + n_2\log\alpha_2 + n_1\log\lambda_1 + n_2\log\lambda_2 - n_1\log(e^{\lambda_1}-1) - n_2\log(e^{\lambda_2}-1) \\ &\quad - \alpha_1\sum_{i=1}^{n_1}x_i - \alpha_2\sum_{j=1}^{n_2}y_j - 2\sum_{i=1}^{n_1}\log(1+e^{-\alpha_1 x_i}) - 2\sum_{j=1}^{n_2}\log(1+e^{-\alpha_2 y_j}) + \lambda_1\sum_{i=1}^{n_1}\Delta_1(x_i) + \lambda_2\sum_{j=1}^{n_2}\Delta_2(y_j). \end{aligned} \tag{8}$$
To obtain the estimator of θ, that is, $\hat\theta = (\hat\alpha_1,\hat\alpha_2,\hat\lambda_1,\hat\lambda_2)^T$, we need to solve the nonlinear Equations (9)–(12) below. These equations cannot be solved analytically, but they can be solved by the numerical optimization routines available in Mathematica, Matlab, or R (a numerical sketch is given at the end of this subsection).
$$\frac{\partial L}{\partial\alpha_1} = \frac{n_1}{\alpha_1} - \sum_{i=1}^{n_1}x_i + 2\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha_1 x_i}}{1+e^{-\alpha_1 x_i}} + 2\lambda_1\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2}, \tag{9}$$
$$\frac{\partial L}{\partial\alpha_2} = \frac{n_2}{\alpha_2} - \sum_{j=1}^{n_2}y_j + 2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha_2 y_j}}{1+e^{-\alpha_2 y_j}} + 2\lambda_2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha_2 y_j}}{(1+e^{-\alpha_2 y_j})^2}, \tag{10}$$
$$\frac{\partial L}{\partial\lambda_1} = \frac{n_1}{\lambda_1} - \frac{n_1 e^{\lambda_1}}{e^{\lambda_1}-1} + \sum_{i=1}^{n_1}\Delta_1(x_i), \tag{11}$$
$$\frac{\partial L}{\partial\lambda_2} = \frac{n_2}{\lambda_2} - \frac{n_2 e^{\lambda_2}}{e^{\lambda_2}-1} + \sum_{j=1}^{n_2}\Delta_2(y_j). \tag{12}$$
Once $\hat\theta$ is computed, we can get the maximum likelihood estimator of R(θ), say $\hat R(\hat\theta)$, from (7) or (4).
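A minimal R sketch of this numerical maximization is given below (the names are ours; x and y may be generated with the rphld() sketch above). The parameters are optimized on the log scale so that all components of θ remain positive:

# Numerical MLE of theta = (alpha1, alpha2, lambda1, lambda2) by maximizing (8).
loglik_phld <- function(alpha, lambda, z) {       # log-likelihood of one PHLD sample
  D <- (1 - exp(-alpha * z)) / (1 + exp(-alpha * z))            # Delta(z)
  sum(log(2) + log(alpha) + log(lambda) - log(expm1(lambda)) -
      alpha * z - 2 * log1p(exp(-alpha * z)) + lambda * D)
}
negll <- function(eta, x, y) {                    # eta = log of the four parameters
  p <- exp(eta)
  -(loglik_phld(p[1], p[3], x) + loglik_phld(p[2], p[4], y))
}
y <- rphld(1000, alpha = 1.0, lambda = 1.2)       # second sample, as with x above
fit <- optim(rep(0, 4), negll, x = x, y = y, method = "BFGS")
theta_hat <- exp(fit$par)                         # (alpha1, alpha2, lambda1, lambda2) hat
R_hat <- R_general(theta_hat[1], theta_hat[3], theta_hat[2], theta_hat[4])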

2.2. Asymptotic Distribution and Confidence Interval

In this subsection, we derive the asymptotic distribution of $\hat\theta = (\hat\alpha_1,\hat\alpha_2,\hat\lambda_1,\hat\lambda_2)^T$, then the asymptotic distribution of $\hat R$ and the asymptotic confidence interval of R. We first require the Fisher information matrix, defined by $I(\theta) = E[J(\theta)]$, where $J(\theta) = -\frac{\partial^2 L}{\partial\theta\,\partial\theta^T}$; thus,
$$I(\theta) = \begin{pmatrix} I_{\alpha_1\alpha_1} & I_{\alpha_1\alpha_2} & I_{\alpha_1\lambda_1} & I_{\alpha_1\lambda_2} \\ I_{\alpha_2\alpha_1} & I_{\alpha_2\alpha_2} & I_{\alpha_2\lambda_1} & I_{\alpha_2\lambda_2} \\ I_{\lambda_1\alpha_1} & I_{\lambda_1\alpha_2} & I_{\lambda_1\lambda_1} & I_{\lambda_1\lambda_2} \\ I_{\lambda_2\alpha_1} & I_{\lambda_2\alpha_2} & I_{\lambda_2\lambda_1} & I_{\lambda_2\lambda_2} \end{pmatrix}.$$
The elements of J(θ) and the computation of the elements of I(θ) are provided in Appendix A. Before giving the elements of I(θ), we need the following Lemma 1, used to compute the elements of I(θ).
Lemma 1.
Let $x, \alpha, \gamma_4 > 0$ and $\gamma_1, \gamma_2, \gamma_3 \in \mathbb{N}$, and let
$$\mathcal{I}(\alpha,\gamma_1,\gamma_2,\gamma_3,\gamma_4) = \int_0^\infty \frac{x^{\gamma_1}\, e^{-\gamma_2\alpha x + \gamma_4\Delta(x)}}{\left[1+e^{-\alpha x}\right]^{\gamma_3}}\,dx;$$
then
$$\mathcal{I}(\alpha,\gamma_1,\gamma_2,\gamma_3,\gamma_4) = \sum_{m,s=0}^{\infty}\phi_{m,s}\, B_{0,\gamma_1}(m+1,\ \gamma_2+s),$$
where $\phi_{m,s} = \binom{\gamma_3+m+s-1}{s}\frac{\gamma_4^m(-1)^{\gamma_1+s}}{m!\,\alpha^{\gamma_1+1}}$ and $B_{t,\gamma_1}(a,b) = \frac{\partial^{t+\gamma_1}}{\partial a^{t}\,\partial b^{\gamma_1}}B(a,b)$. In particular, when $\gamma_1 = 1, 2$, we have
$$\mathcal{I}(\alpha,1,\gamma_2,\gamma_3,\gamma_4) = \sum_{m,s=0}^{\infty}\phi_{m,s}\left[\psi_0(\gamma_2+s) - \psi_0(m+\gamma_2+s+1)\right]B(m+1,\gamma_2+s),$$
and
$$\mathcal{I}(\alpha,2,\gamma_2,\gamma_3,\gamma_4) = \sum_{m,s=0}^{\infty}\phi_{m,s}\left(\left[\psi_0(\gamma_2+s) - \psi_0(m+\gamma_2+s+1)\right]^2 + \psi_1(\gamma_2+s) - \psi_1(m+\gamma_2+s+1)\right)B(m+1,\gamma_2+s),$$
where $\psi_m(z) = \frac{d^m}{dz^m}\psi(z) = \frac{d^{m+1}}{dz^{m+1}}\ln\Gamma(z)$ is the polygamma function and $\psi(z) = \psi_0(z) = \frac{d}{dz}\ln\Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)}$ is the digamma function.
Proof. 
Let
$$\mathcal{I} = \int_0^\infty \frac{x^{\gamma_1}\, e^{-\gamma_2\alpha x+\gamma_4\Delta(x)}}{\left[1+e^{-\alpha x}\right]^{\gamma_3}}\,dx;$$
by the exponential expansion we have
$$\mathcal{I} = \sum_{m=0}^{\infty}\frac{\gamma_4^m}{m!}\int_0^\infty \frac{x^{\gamma_1}(1-e^{-\alpha x})^m\, e^{-\gamma_2\alpha x}}{(1+e^{-\alpha x})^{\gamma_3+m}}\,dx.$$
Let $u = 1-e^{-\alpha x}$; then $e^{-\gamma_2\alpha x} = (1-u)^{\gamma_2}$, and in a similar way to the computation of R we get
$$\begin{aligned}\mathcal{I} &= \sum_{m=0}^{\infty}\frac{\gamma_4^m(-1)^{\gamma_1}}{m!\,\alpha^{\gamma_1+1}}\int_0^1 \frac{\log^{\gamma_1}(1-u)\,u^m(1-u)^{\gamma_2-1}}{(1+(1-u))^{\gamma_3+m}}\,du \\ &= \sum_{m,s=0}^{\infty}\binom{\gamma_3+m+s-1}{s}\frac{\gamma_4^m(-1)^{\gamma_1+s}}{m!\,\alpha^{\gamma_1+1}}\int_0^1 \log^{\gamma_1}(1-u)\,u^m(1-u)^{\gamma_2+s-1}\,du \\ &= \sum_{m,s=0}^{\infty}\phi_{m,s}\,B_{0,\gamma_1}(m+1,\gamma_2+s),\end{aligned}$$
where $\phi_{m,s} = \binom{\gamma_3+m+s-1}{s}\frac{\gamma_4^m(-1)^{\gamma_1+s}}{m!\,\alpha^{\gamma_1+1}}$ and $B_{t,\gamma_1}(a,b) = \frac{\partial^{t+\gamma_1}}{\partial a^{t}\,\partial b^{\gamma_1}}B(a,b)$. □
Hence, the elements of I(θ) are derived using Lemma 1 as:
$$I_{\alpha_1\alpha_1} = \frac{n_1}{\alpha_1^2} + \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,2,3,\lambda_1) - \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,3,4,\lambda_1) + \frac{4n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,2,4,\lambda_1) - \frac{8n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,3,5,\lambda_1),$$
$$I_{\alpha_2\alpha_2} = \frac{n_2}{\alpha_2^2} + \frac{4n_2\alpha_2\lambda_2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha_2,2,2,3,\lambda_2) - \frac{4n_2\alpha_2\lambda_2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha_2,2,3,4,\lambda_2) + \frac{4n_2\alpha_2\lambda_2^2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha_2,2,2,4,\lambda_2) - \frac{8n_2\alpha_2\lambda_2^2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha_2,2,3,5,\lambda_2),$$
$$I_{\lambda_1\lambda_1} = \frac{n_1}{\lambda_1^2} - \frac{n_1 e^{\lambda_1}}{(e^{\lambda_1}-1)^2}, \qquad I_{\lambda_2\lambda_2} = \frac{n_2}{\lambda_2^2} - \frac{n_2 e^{\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$I_{\alpha_1\lambda_1} = -\frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,1,2,4,\lambda_1), \qquad I_{\alpha_2\lambda_2} = -\frac{4n_2\alpha_2\lambda_2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha_2,1,2,4,\lambda_2),$$
$$I_{\alpha_2\alpha_1} = I_{\alpha_1\alpha_2} = I_{\alpha_1\lambda_2} = I_{\lambda_2\alpha_1} = I_{\alpha_2\lambda_1} = I_{\lambda_1\alpha_2} = I_{\lambda_1\lambda_2} = I_{\lambda_2\lambda_1} = 0.$$
Thus, we can establish the asymptotic distribution for the MLEs.
Lemma 2.
As $n_1, n_2 \to \infty$, $\sqrt{n_1+n_2}\,(\hat\theta - \theta) \to N_4\!\left(0,\ \bar I^{-1}(\theta)\right)$, where
$$\bar I^{-1}(\theta) = \lim_{n_1,n_2\to\infty}(n_1+n_2)\, I^{-1}(\theta) = \begin{pmatrix} Var(\hat\alpha_1) & Cov(\hat\alpha_1,\hat\alpha_2) & Cov(\hat\alpha_1,\hat\lambda_1) & Cov(\hat\alpha_1,\hat\lambda_2) \\ Cov(\hat\alpha_2,\hat\alpha_1) & Var(\hat\alpha_2) & Cov(\hat\alpha_2,\hat\lambda_1) & Cov(\hat\alpha_2,\hat\lambda_2) \\ Cov(\hat\lambda_1,\hat\alpha_1) & Cov(\hat\lambda_1,\hat\alpha_2) & Var(\hat\lambda_1) & Cov(\hat\lambda_1,\hat\lambda_2) \\ Cov(\hat\lambda_2,\hat\alpha_1) & Cov(\hat\lambda_2,\hat\alpha_2) & Cov(\hat\lambda_2,\hat\lambda_1) & Var(\hat\lambda_2) \end{pmatrix}.$$
Proof. 
Follows from the asymptotic normality of the MLE. □
To establish the asymptotic distribution of R, we need to compute the partial derivatives of R(θ), say $B(\theta) = \left(\frac{\partial R}{\partial\alpha_1}, \frac{\partial R}{\partial\alpha_2}, \frac{\partial R}{\partial\lambda_1}, \frac{\partial R}{\partial\lambda_2}\right)^T$; before that, we need the following Lemma 3.
Lemma 3.
Let $x > 0$ and $\Theta = (\alpha_1,\alpha_2,\lambda_1,\lambda_2,\delta_1,\delta_2,\delta_3,\delta_4,\delta_5,\delta_6)^T > 0$, and let
$$\zeta(\Theta) = \int_0^\infty \frac{x^{\delta_1}(1-e^{-\alpha_1 x})^{\delta_3}(1-e^{-\alpha_2 x})^{\delta_4}}{(1+e^{-\alpha_1 x})^{\delta_5}(1+e^{-\alpha_2 x})^{\delta_6}}\, e^{-\delta_2 x + \lambda_1\Delta_1(x)+\lambda_2\Delta_2(x)}\,dx;$$
then
$$\zeta(\Theta) = \sum_{i,j=0}^{\infty}\sum_{l,m=0}^{\infty} D_{i,j,k,l,m}\, B_{0,\delta_1}\!\left(\delta_3+i+1,\ \frac{\delta_2}{\alpha_1} + \frac{\alpha_2}{\alpha_1}(k+m) + l\right),$$
where $D_{i,j,k,l,m} = \sum_{k=0}^{\delta_4+j}\frac{(-1)^{\delta_1+k+l+m}\lambda_1^i\lambda_2^j}{i!\,j!\,\alpha_1^{\delta_1+1}}\binom{\delta_5+i+l-1}{l}\binom{\delta_6+j+m-1}{m}\binom{\delta_4+j}{k}$.
Proof. 
Let
$$\zeta(\Theta) = \int_0^\infty \frac{x^{\delta_1}(1-e^{-\alpha_1 x})^{\delta_3}(1-e^{-\alpha_2 x})^{\delta_4}}{(1+e^{-\alpha_1 x})^{\delta_5}(1+e^{-\alpha_2 x})^{\delta_6}}\, e^{-\delta_2 x + \lambda_1\Delta_1(x)+\lambda_2\Delta_2(x)}\,dx;$$
by applying the exponential expansion and letting $u = e^{-x}$, in a similar way to the computation of (7) we get
$$\zeta(\Theta) = \sum_{i,j=0}^{\infty}\frac{(-1)^{\delta_1}\lambda_1^i\lambda_2^j}{i!\,j!}\int_0^1 \log^{\delta_1}(u)\, u^{\delta_2-1}\,\frac{(1-u^{\alpha_1})^{\delta_3+i}(1-u^{\alpha_2})^{\delta_4+j}}{(1+u^{\alpha_1})^{\delta_5+i}(1+u^{\alpha_2})^{\delta_6+j}}\,du;$$
letting $v = 1-u^{\alpha_1}$ we have
$$\zeta(\Theta) = \sum_{i,j=0}^{\infty}\frac{(-1)^{\delta_1}\lambda_1^i\lambda_2^j}{i!\,j!\,\alpha_1^{\delta_1+1}}\int_0^1 \log^{\delta_1}(1-v)\,(1-v)^{\frac{\delta_2}{\alpha_1}-1}\,\frac{v^{\delta_3+i}\left(1-(1-v)^{\alpha_2/\alpha_1}\right)^{\delta_4+j}}{(1+(1-v))^{\delta_5+i}\left(1+(1-v)^{\alpha_2/\alpha_1}\right)^{\delta_6+j}}\,dv;$$
by the generalized binomial expansion, finally we obtain
$$\zeta(\Theta) = \sum_{i,j=0}^{\infty}\sum_{l,m=0}^{\infty} D_{i,j,k,l,m}\int_0^1 \log^{\delta_1}(1-v)\, v^{\delta_3+i}(1-v)^{\frac{\delta_2}{\alpha_1}+\frac{\alpha_2}{\alpha_1}(k+m)+l-1}\,dv,$$
hence
$$\zeta(\Theta) = \sum_{i,j=0}^{\infty}\sum_{l,m=0}^{\infty} D_{i,j,k,l,m}\, B_{0,\delta_1}\!\left(\delta_3+i+1,\ \frac{\delta_2}{\alpha_1}+\frac{\alpha_2}{\alpha_1}(k+m)+l\right),$$
where $D_{i,j,k,l,m} = \sum_{k=0}^{\delta_4+j}\frac{(-1)^{\delta_1+k+l+m}\lambda_1^i\lambda_2^j}{i!\,j!\,\alpha_1^{\delta_1+1}}\binom{\delta_5+i+l-1}{l}\binom{\delta_6+j+m-1}{m}\binom{\delta_4+j}{k}$. □
Now, the partial derivatives of R(θ) can be computed by applying Lemma 3.
Differentiating (4) under the integral sign and writing each resulting integral in terms of Lemma 3, we obtain
$$\begin{aligned} \frac{\partial R}{\partial\lambda_1} &= \frac{2\alpha_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,0,0,2,0) - \frac{2\alpha_1\lambda_1 e^{\lambda_1}}{(e^{\lambda_1}-1)^2(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,0,0,2,0) \\ &\quad + \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,1,0,3,0), \end{aligned}$$
$$\frac{\partial R}{\partial\lambda_2} = -\frac{2\alpha_1\lambda_1 e^{\lambda_2}}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)^2}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,0,0,2,0) + \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,0,1,2,1) + \frac{e^{\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$\begin{aligned}\frac{\partial R}{\partial\alpha_1} &= \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,0,\alpha_1,0,0,2,0) - \frac{2\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,1,\alpha_1,0,0,2,0) \\ &\quad + \frac{4\alpha_1\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,1,2\alpha_1,0,0,3,0) + \frac{4\alpha_1\lambda_1^2}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,1,2\alpha_1,0,0,4,0),\end{aligned}$$
$$\frac{\partial R}{\partial\alpha_2} = \frac{4\alpha_1\lambda_1\lambda_2}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta(\alpha_1,\alpha_2,\lambda_1,\lambda_2,1,\alpha_1+\alpha_2,0,0,2,2).$$
Therefore, using Lemma 2 and the delta method, we obtain the asymptotic distribution of $\hat R(\hat\theta)$ as
$$\sqrt{n_1+n_2}\,\left(\hat R - R\right) \to N\!\left(0,\ B^{T}(\theta)\,\bar I^{-1}(\theta)\,B(\theta)\right). \tag{13}$$
Thus, the asymptotic variance of $\hat R$ from (13) is
$$\begin{aligned} Var(\hat R) &= \frac{1}{n_1+n_2}\,B^{T}(\theta)\,\bar I^{-1}(\theta)\,B(\theta) \\ &= \left(\frac{\partial R}{\partial\alpha_1}\right)^{2} Var(\hat\alpha_1) + \left(\frac{\partial R}{\partial\alpha_2}\right)^{2} Var(\hat\alpha_2) + \left(\frac{\partial R}{\partial\lambda_1}\right)^{2} Var(\hat\lambda_1) + \left(\frac{\partial R}{\partial\lambda_2}\right)^{2} Var(\hat\lambda_2) \\ &\quad + 2\,\frac{\partial R}{\partial\alpha_1}\frac{\partial R}{\partial\alpha_2}\,Cov(\hat\alpha_1,\hat\alpha_2) + 2\,\frac{\partial R}{\partial\alpha_1}\frac{\partial R}{\partial\lambda_1}\,Cov(\hat\alpha_1,\hat\lambda_1) + 2\,\frac{\partial R}{\partial\alpha_1}\frac{\partial R}{\partial\lambda_2}\,Cov(\hat\alpha_1,\hat\lambda_2) \\ &\quad + 2\,\frac{\partial R}{\partial\alpha_2}\frac{\partial R}{\partial\lambda_1}\,Cov(\hat\alpha_2,\hat\lambda_1) + 2\,\frac{\partial R}{\partial\alpha_2}\frac{\partial R}{\partial\lambda_2}\,Cov(\hat\alpha_2,\hat\lambda_2) + 2\,\frac{\partial R}{\partial\lambda_1}\frac{\partial R}{\partial\lambda_2}\,Cov(\hat\lambda_1,\hat\lambda_2). \end{aligned}$$
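Rather than coding the series of Lemma 3, Var(R̂) can also be approximated numerically from the observed information and a finite-difference gradient of R; the following sketch continues the objects negll, theta_hat, R_hat, and R_general introduced above (an assumption-laden numerical shortcut, not the paper's expected-information route):

# Delta-method variance of R_hat from the observed information.
fit2 <- optim(log(theta_hat), negll, x = x, y = y,
              method = "BFGS", hessian = TRUE)
Jac  <- diag(1 / theta_hat)                      # d(log theta)/d(theta) at the MLE
info <- t(Jac) %*% fit2$hessian %*% Jac          # observed information in theta
grad_R <- numeric(4); h <- 1e-5
for (k in 1:4) {                                 # central finite differences
  tp <- theta_hat; tm <- theta_hat
  tp[k] <- tp[k] * (1 + h); tm[k] <- tm[k] * (1 - h)
  grad_R[k] <- (R_general(tp[1], tp[3], tp[2], tp[4]) -
                R_general(tm[1], tm[3], tm[2], tm[4])) / (tp[k] - tm[k])
}
var_R <- drop(t(grad_R) %*% solve(info) %*% grad_R)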
The asymptotic 100(1−ε)% confidence interval AS_CI for R can be constructed as
$$\hat R \pm Z_{\epsilon/2}\sqrt{Var(\hat R)},$$
where $Z_{\epsilon/2}$ is the upper ε/2 quantile of the standard normal distribution. Next, we consider bootstrap confidence intervals, which are preferable for small rather than large sample sizes, since the bootstrap for a large sample may require considerable computation time.

2.3. Bootstrap Confidence Intervals for R

In this subsection, we propose two non-parametric confidence intervals: the percentile bootstrap confidence interval (B_p) and the student's bootstrap confidence interval (B_t), based on [41]. The procedures for estimating the two bootstrap confidence intervals of R are as follows (a code sketch implementing both intervals is given at the end of Section 2.3.2).
  • Generate independent samples $x_1, x_2, x_3, \dots, x_{n_1}$ from PHLD1$(\alpha_1, \lambda_1)$ and $y_1, y_2, y_3, \dots, y_{n_2}$ from PHLD2$(\alpha_2, \lambda_2)$. The samples can be generated from (2) by sampling p from the uniform distribution, i.e., p ∼ U(0, 1).
  • Generate independent bootstrap samples $x^*_1, x^*_2, x^*_3, \dots, x^*_{n_1}$ and $y^*_1, y^*_2, y^*_3, \dots, y^*_{n_2}$ taken with replacement from the samples generated in the first step. Based on the bootstrap samples, compute the maximum likelihood estimate of $\theta = (\alpha_1,\alpha_2,\lambda_1,\lambda_2)^T$, say $\hat\theta^* = (\hat\alpha^*_1,\hat\alpha^*_2,\hat\lambda^*_1,\hat\lambda^*_2)^T$, as well as the MLE of R, $\hat R^*(\hat\theta^*)$.
  • Repeat step 2 B times to obtain a set of bootstrap estimates of R, say $\hat R^*_j$, j = 1, 2, …, B.
From the above bootstrap sample of $\hat R^*_j$ we can determine the two different bootstrap confidence intervals of R as follows, after rearranging the estimates in ascending order $\hat R^*_{(1)} < \hat R^*_{(2)} < \hat R^*_{(3)} < \dots < \hat R^*_{(B)}$.

2.3.1. Percentile Bootstrap Confidence Interval ( B p ) :

Let $\hat R^*_{(\tau)}$ be the τ percentile of $\{\hat R^*_j,\ j = 1, 2, 3, \dots, B\}$, such that
$$\frac{1}{B}\sum_{j=1}^{B} I\!\left(\hat R^*_j \le \hat R^*_{(\tau)}\right) = \tau, \quad 0 < \tau < 1,$$
where I(·) is the indicator function. A 100(1−ε)% B_p confidence interval of R is given as
$$\left(\hat R^*_{(\epsilon/2)},\ \hat R^*_{(1-\epsilon/2)}\right).$$

2.3.2. Student’s t Bootstrap Confidence Interval ( B t ):

Let $\bar{\hat R}^*$ and $se(\hat R^*)$ be the sample mean and sample standard deviation of the $\hat R^*_j$, j = 1, 2, …, B, respectively; that is,
$$\bar{\hat R}^* = \frac{1}{B}\sum_{j=1}^{B}\hat R^*_j \quad \text{and} \quad se(\hat R^*) = \sqrt{\frac{1}{B}\sum_{j=1}^{B}\left(\hat R^*_j - \bar{\hat R}^*\right)^2}.$$
Then, let $\hat t^*_{(\tau)}$ be the τ percentile of $\left\{\frac{\hat R^*_j - \hat R}{se(\hat R^*)},\ j = 1, 2, \dots, B\right\}$, that is, $\hat t^*_{(\tau)}$ is such that
$$\frac{1}{B}\sum_{j=1}^{B} I\!\left(\frac{\hat R^*_j - \hat R}{se(\hat R^*)} \le \hat t^*_{(\tau)}\right) = \tau, \quad 0 < \tau < 1.$$
A 100(1−ε)% B_t confidence interval of R is given as
$$\hat R \pm \hat t^*_{(\epsilon/2)}\, se(\hat R^*).$$
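The following R sketch implements both B_p and B_t under the steps above (boot_ci and mle_R are our names; mle_R(x, y) stands for any routine returning R̂ from the two samples, e.g. the optim-based sketch of Section 2.1):

# Percentile (Bp) and student's (Bt) bootstrap intervals for R.
boot_ci <- function(x, y, mle_R, B = 1000, eps = 0.05) {
  R_hat  <- mle_R(x, y)
  R_star <- replicate(B, mle_R(sample(x, replace = TRUE),
                               sample(y, replace = TRUE)))
  se     <- sd(R_star)                        # se(R*) (denominator B - 1 rather than B)
  t_star <- quantile((R_star - R_hat) / se,   # pivot quantiles for Bt
                     c(eps / 2, 1 - eps / 2))
  list(Bp = quantile(R_star, c(eps / 2, 1 - eps / 2)),
       Bt = R_hat + t_star * se)
}

Note that this sketch forms B_t from both tail quantiles of the pivot rather than the symmetric form above; the two coincide when the pivot distribution is symmetric.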

3. Estimation of R with Common Scale Parameter α

In this section, we derive the approximation of R when the random variables share the same scale parameter α. The maximum likelihood estimation of R, the asymptotic distributions of the MLEs and of R, and the bootstrap confidence intervals of R are presented.
Let X ∼ PHLD1(α, λ1) and Y ∼ PHLD2(α, λ2), and let f1(x) be the density of X and F2(y) the cumulative distribution of Y; then, in a similar way, the reliability R in this case is given by
$$R = \frac{2\alpha\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^\infty \frac{e^{-\alpha x+\lambda_1\Delta(x)}\left(e^{\lambda_2\Delta(x)}-1\right)}{\left[1+e^{-\alpha x}\right]^2}\,dx = \frac{2\alpha\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^\infty \frac{e^{-\alpha x+(\lambda_1+\lambda_2)\Delta(x)}}{\left[1+e^{-\alpha x}\right]^2}\,dx - \frac{1}{e^{\lambda_2}-1}.$$
Let $u = 1-e^{-\alpha x}$; then $dx = du/(\alpha e^{-\alpha x})$ and $1+e^{-\alpha x} = 1+(1-u) = 2-u$; therefore,
$$R = \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\int_0^1 \frac{e^{(\lambda_1+\lambda_2)\frac{u}{2-u}}}{(2-u)^2}\,du - \frac{1}{e^{\lambda_2}-1}$$
$$\begin{aligned} &= \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\sum_{i=0}^{\infty}\frac{(\lambda_1+\lambda_2)^i}{i!}\int_0^1 \frac{u^i}{\left[1+(1-u)\right]^{2+i}}\,du - \frac{1}{e^{\lambda_2}-1} \\ &= \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\sum_{i,j=0}^{\infty}\binom{i+j+1}{j}(-1)^j\frac{(\lambda_1+\lambda_2)^i}{i!}\int_0^1 u^i(1-u)^j\,du - \frac{1}{e^{\lambda_2}-1} \\ &= \frac{2\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\sum_{i,j=0}^{\infty}\binom{i+j+1}{j}(-1)^j\frac{(\lambda_1+\lambda_2)^i}{i!}\,B(i+1,j+1) - \frac{1}{e^{\lambda_2}-1} \\ &= \sum_{i,j=0}^{\infty}C^*_{i,j}\,B(i+1,j+1) - \frac{1}{e^{\lambda_2}-1}, \end{aligned}\tag{14}$$
where $C^*_{i,j} = \frac{2\lambda_1(\lambda_1+\lambda_2)^i(-1)^j}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)\,i!}\binom{i+j+1}{j}$. Notice that, in this case, the reliability R does not depend on α.
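Because α cancels, (14) is a function of (λ1, λ2) alone and is cheap to evaluate by quadrature; a one-function R check (the name R_common is ours):

# Numerical evaluation of R in the common-scale case (14); alpha drops out.
R_common <- function(l1, l2) {
  g <- function(u) exp((l1 + l2) * u / (2 - u)) / (2 - u)^2   # after u = 1 - exp(-alpha x)
  2 * l1 / (expm1(l1) * expm1(l2)) * integrate(g, 0, 1)$value - 1 / expm1(l2)
}
R_common(0.8, 0.8)   # equal lambdas: X and Y identically distributed, so R = 0.5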

3.1. Maximum Likelihood Estimation

Let $x_1, x_2, \dots, x_{n_1}$ be a random sample of size $n_1$ from PHLD1$(\alpha, \lambda_1)$ and $y_1, y_2, \dots, y_{n_2}$ an independent random sample of size $n_2$ from PHLD2$(\alpha, \lambda_2)$. The log-likelihood function is given by (15); here $\theta = (\alpha, \lambda_1, \lambda_2)^T$:
$$\begin{aligned}\log L &= (n_1+n_2)\log 2 + (n_1+n_2)\log\alpha + n_1\log\lambda_1 + n_2\log\lambda_2 - \alpha\sum_{i=1}^{n_1}x_i - \alpha\sum_{j=1}^{n_2}y_j - n_1\log(e^{\lambda_1}-1) - n_2\log(e^{\lambda_2}-1) \\ &\quad - 2\sum_{i=1}^{n_1}\log(1+e^{-\alpha x_i}) - 2\sum_{j=1}^{n_2}\log(1+e^{-\alpha y_j}) + \lambda_1\sum_{i=1}^{n_1}\Delta(x_i) + \lambda_2\sum_{j=1}^{n_2}\Delta(y_j).\end{aligned}\tag{15}$$
The estimator of θ, $\hat\theta = (\hat\alpha, \hat\lambda_1, \hat\lambda_2)^T$, can be obtained in a similar way by solving the nonlinear Equations (16)–(18) below:
$$\frac{\partial L}{\partial\lambda_1} = \frac{n_1}{\lambda_1} - \frac{n_1 e^{\lambda_1}}{e^{\lambda_1}-1} + \sum_{i=1}^{n_1}\frac{1-e^{-\alpha x_i}}{1+e^{-\alpha x_i}}, \tag{16}$$
$$\frac{\partial L}{\partial\lambda_2} = \frac{n_2}{\lambda_2} - \frac{n_2 e^{\lambda_2}}{e^{\lambda_2}-1} + \sum_{j=1}^{n_2}\frac{1-e^{-\alpha y_j}}{1+e^{-\alpha y_j}}, \tag{17}$$
$$\begin{aligned}\frac{\partial L}{\partial\alpha} &= \frac{n_1+n_2}{\alpha} - \sum_{i=1}^{n_1}x_i - \sum_{j=1}^{n_2}y_j + 2\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha x_i}}{1+e^{-\alpha x_i}} + 2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha y_j}}{1+e^{-\alpha y_j}} \\ &\quad + 2\lambda_1\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha x_i}}{(1+e^{-\alpha x_i})^2} + 2\lambda_2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha y_j}}{(1+e^{-\alpha y_j})^2}.\end{aligned}\tag{18}$$
Hence, the maximum likelihood estimator of R(θ) in (14) can be computed as $\hat R(\hat\theta)$.

3.2. Asymptotic Distribution and Confidence Intervals

In this subsection, we derive the asymptotic distribution of $\hat\theta = (\hat\alpha, \hat\lambda_1, \hat\lambda_2)^T$, the asymptotic distribution of $\hat R$, and then the asymptotic confidence interval of R. The Fisher information matrix is $I(\theta) = E[J(\theta)]$, where $J(\theta) = -\frac{\partial^2 L}{\partial\theta\,\partial\theta^T}$; therefore,
$$I(\theta) = \begin{pmatrix} I_{\alpha\alpha} & I_{\alpha\lambda_1} & I_{\alpha\lambda_2} \\ I_{\lambda_1\alpha} & I_{\lambda_1\lambda_1} & I_{\lambda_1\lambda_2} \\ I_{\lambda_2\alpha} & I_{\lambda_2\lambda_1} & I_{\lambda_2\lambda_2} \end{pmatrix}.$$
The elements of J(θ) are given by
$$\frac{\partial^2 L}{\partial\lambda_1^2} = -\frac{n_1}{\lambda_1^2} - \frac{n_1 e^{\lambda_1}}{e^{\lambda_1}-1} + \frac{n_1 e^{2\lambda_1}}{(e^{\lambda_1}-1)^2}, \qquad \frac{\partial^2 L}{\partial\lambda_2^2} = -\frac{n_2}{\lambda_2^2} - \frac{n_2 e^{\lambda_2}}{e^{\lambda_2}-1} + \frac{n_2 e^{2\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$\frac{\partial^2 L}{\partial\lambda_1\partial\alpha} = 2\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha x_i}}{(1+e^{-\alpha x_i})^2}, \qquad \frac{\partial^2 L}{\partial\lambda_2\partial\alpha} = 2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha y_j}}{(1+e^{-\alpha y_j})^2},$$
$$\begin{aligned}\frac{\partial^2 L}{\partial\alpha^2} &= -\frac{n_1+n_2}{\alpha^2} - 2\sum_{i=1}^{n_1}\frac{x_i^2 e^{-\alpha x_i}}{1+e^{-\alpha x_i}} - 2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-\alpha y_j}}{1+e^{-\alpha y_j}} + 2\sum_{i=1}^{n_1}\frac{x_i^2 e^{-2\alpha x_i}}{(1+e^{-\alpha x_i})^2} + 2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-2\alpha y_j}}{(1+e^{-\alpha y_j})^2} \\ &\quad - 2\lambda_1\sum_{i=1}^{n_1}\frac{x_i^2 e^{-\alpha x_i}}{(1+e^{-\alpha x_i})^2} + 4\lambda_1\sum_{i=1}^{n_1}\frac{x_i^2 e^{-2\alpha x_i}}{(1+e^{-\alpha x_i})^3} - 2\lambda_2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-\alpha y_j}}{(1+e^{-\alpha y_j})^2} + 4\lambda_2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-2\alpha y_j}}{(1+e^{-\alpha y_j})^3};\end{aligned}$$
thus, the elements of I(θ) are given below by applying Lemma 1:
$$I_{\lambda_1\lambda_1} = \frac{n_1}{\lambda_1^2} + \frac{n_1 e^{\lambda_1}}{e^{\lambda_1}-1} - \frac{n_1 e^{2\lambda_1}}{(e^{\lambda_1}-1)^2}, \qquad I_{\lambda_2\lambda_2} = \frac{n_2}{\lambda_2^2} + \frac{n_2 e^{\lambda_2}}{e^{\lambda_2}-1} - \frac{n_2 e^{2\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$I_{\alpha\lambda_1} = -\frac{4n_1\alpha\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha,1,2,4,\lambda_1), \qquad I_{\alpha\lambda_2} = -\frac{4n_2\alpha\lambda_2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha,1,2,4,\lambda_2),$$
$$\begin{aligned} I_{\alpha\alpha} &= \frac{n_1+n_2}{\alpha^2} + \frac{4n_1\alpha\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha,2,2,4,\lambda_1) + \frac{4n_2\alpha\lambda_2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha,2,2,4,\lambda_2) \\ &\quad + \frac{4n_1\alpha\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha,2,2,4,\lambda_1) - \frac{8n_1\alpha\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha,2,3,5,\lambda_1) + \frac{4n_2\alpha\lambda_2^2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha,2,2,4,\lambda_2) - \frac{8n_2\alpha\lambda_2^2}{e^{\lambda_2}-1}\,\mathcal{I}(\alpha,2,3,5,\lambda_2). \end{aligned}$$
Lemma 4.
As $n_1 \to \infty$ and $n_2 \to \infty$, $\sqrt{n_1+n_2}\,(\hat\theta - \theta) \to N_3\!\left(0,\ \bar I^{-1}(\theta)\right)$, where
$$\bar I^{-1}(\theta) = \lim_{n_1,n_2\to\infty}(n_1+n_2)\, I^{-1}(\theta) = \begin{pmatrix} Var(\hat\alpha) & Cov(\hat\alpha,\hat\lambda_1) & Cov(\hat\alpha,\hat\lambda_2) \\ Cov(\hat\lambda_1,\hat\alpha) & Var(\hat\lambda_1) & Cov(\hat\lambda_1,\hat\lambda_2) \\ Cov(\hat\lambda_2,\hat\alpha) & Cov(\hat\lambda_2,\hat\lambda_1) & Var(\hat\lambda_2) \end{pmatrix}.$$
Proof. 
Follows from the asymptotic normality of the MLE. □
To derive the asymptotic distribution of R in a similar way to the general case, we compute the partial derivatives of R(θ) in (14); remember that in this case R does not depend on α, thus $B(\theta) = \left(\frac{\partial R}{\partial\lambda_1}, \frac{\partial R}{\partial\lambda_2}\right)^T$, obtained by considering the following Lemma 5.
Lemma 5.
Let $x > 0$ and $\Theta^* = (\alpha,\lambda_1,\lambda_2,\delta_1,\delta_2,\delta_3,\delta_4)^T > 0$, and let
$$\zeta^*(\Theta^*) = \int_0^\infty \frac{x^{\delta_1}\, e^{-\delta_2\alpha x + (\lambda_1+\lambda_2)\Delta(x)}\,(1-e^{-\alpha x})^{\delta_3}}{\left[1+e^{-\alpha x}\right]^{\delta_4}}\,dx;$$
then
$$\zeta^*(\Theta^*) = \sum_{i,j=0}^{\infty}\phi^*_{i,j}\, B_{0,\delta_1}(\delta_3+i+1,\ \delta_2+j),$$
where $\phi^*_{i,j} = \frac{(-1)^{\delta_1+j}(\lambda_1+\lambda_2)^i}{\alpha^{\delta_1+1}\, i!}\binom{\delta_4+i+j-1}{j}$; in particular, for $\delta_1 = 1$ we have
$$\zeta^*(\Theta^*) = \sum_{i,j=0}^{\infty}\phi^*_{i,j}\left[\psi_0(\delta_2+j) - \psi_0(\delta_3+\delta_2+i+j+1)\right]B(\delta_3+i+1,\ \delta_2+j),$$
with $\phi^*_{i,j} = \frac{(-1)^{j+1}(\lambda_1+\lambda_2)^i}{\alpha^{2}\, i!}\binom{\delta_4+i+j-1}{j}$.
Proof. 
Let
$$\zeta^*(\Theta^*) = \int_0^\infty \frac{x^{\delta_1}\, e^{-\delta_2\alpha x + (\lambda_1+\lambda_2)\Delta(x)}\,(1-e^{-\alpha x})^{\delta_3}}{\left[1+e^{-\alpha x}\right]^{\delta_4}}\,dx;$$
by the expansion of $e^{(\lambda_1+\lambda_2)\Delta(x)}$ and some algebraic simplification we have
$$\zeta^*(\Theta^*) = \sum_{i=0}^{\infty}\frac{(\lambda_1+\lambda_2)^i}{i!}\int_0^\infty \frac{x^{\delta_1}\, e^{-\delta_2\alpha x}\,(1-e^{-\alpha x})^{\delta_3+i}}{\left[1+e^{-\alpha x}\right]^{\delta_4+i}}\,dx;$$
letting $u = 1-e^{-\alpha x}$ and then expanding the denominator, we get
$$\begin{aligned}\zeta^*(\Theta^*) &= \sum_{i=0}^{\infty}\frac{(\lambda_1+\lambda_2)^i(-1)^{\delta_1}}{i!\,\alpha^{\delta_1+1}}\int_0^1 \frac{\log^{\delta_1}(1-u)\,u^{\delta_3+i}(1-u)^{\delta_2-1}}{(1+(1-u))^{\delta_4+i}}\,du \\ &= \sum_{i,j=0}^{\infty}\frac{(\lambda_1+\lambda_2)^i(-1)^{\delta_1+j}}{i!\,\alpha^{\delta_1+1}}\binom{\delta_4+i+j-1}{j}\int_0^1 \log^{\delta_1}(1-u)\,u^{\delta_3+i}(1-u)^{\delta_2+j-1}\,du.\end{aligned}$$
Thus,
$$\zeta^*(\Theta^*) = \sum_{i,j=0}^{\infty}\phi^*_{i,j}\, B_{0,\delta_1}(\delta_3+i+1,\ \delta_2+j),$$
where $\phi^*_{i,j} = \frac{(-1)^{\delta_1+j}(\lambda_1+\lambda_2)^i}{\alpha^{\delta_1+1}\, i!}\binom{\delta_4+i+j-1}{j}$. □
From the above lemma we derive ∂R/∂λ1 and ∂R/∂λ2 as follows.
$$\frac{\partial R}{\partial\lambda_2} = -\frac{2\alpha\lambda_1 e^{\lambda_2}}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)^2}\,\zeta^*(\alpha,\lambda_1,\lambda_2,0,1,0,2) + \frac{2\alpha\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta^*(\alpha,\lambda_1,\lambda_2,0,1,1,3) + \frac{e^{\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$\frac{\partial R}{\partial\lambda_1} = \frac{2\alpha}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta^*(\alpha,\lambda_1,\lambda_2,0,1,0,2) - \frac{2\alpha\lambda_1 e^{\lambda_1}}{(e^{\lambda_1}-1)^2(e^{\lambda_2}-1)}\,\zeta^*(\alpha,\lambda_1,\lambda_2,0,1,0,2) + \frac{2\alpha\lambda_1}{(e^{\lambda_1}-1)(e^{\lambda_2}-1)}\,\zeta^*(\alpha,\lambda_1,\lambda_2,0,1,1,3).$$
In a similar way, we can obtain the asymptotic distribution of $\hat R(\hat\theta)$ as
$$\sqrt{n_1+n_2}\,(\hat R - R) \to N\!\left(0,\ B^{T}(\theta)\,\bar I^{-1}(\theta)\,B(\theta)\right);$$
hence, the asymptotic variance of $\hat R$ is computed as
$$Var(\hat R) = \frac{1}{n_1+n_2}\,B^{T}(\theta)\,\bar I^{-1}(\theta)\,B(\theta) = \left(\frac{\partial R}{\partial\lambda_1}\right)^{2} Var(\hat\lambda_1) + \left(\frac{\partial R}{\partial\lambda_2}\right)^{2} Var(\hat\lambda_2) + 2\,\frac{\partial R}{\partial\lambda_1}\frac{\partial R}{\partial\lambda_2}\,Cov(\hat\lambda_1,\hat\lambda_2).$$
The 100(1−ε)% asymptotic confidence interval for R can be constructed as
$$\hat R \pm Z_{\epsilon/2}\sqrt{Var(\hat R)},$$
where $Z_{\epsilon/2}$ is the upper ε/2 quantile of the standard normal distribution. Moreover, we can use the bootstrap confidence intervals, preferably for moderate sample sizes; their computation follows the steps given in Section 2.3.

4. Bayes Estimation of R

In this section, we discuss the Bayes estimation of R in the general case and the Bayes estimation of R with a common scale parameter α. We employ Bayesian estimation to estimate R under various loss functions; the point estimators $\hat\vartheta$ are derived from the posterior distributions given the sample data. The estimator that minimizes the square error loss function (SEL), $(\hat\vartheta-\vartheta)^2$, under the assumed prior distribution is the posterior mean; here we compute $\hat R_{SEL} = \frac{1}{N-M}\sum_{i=M+1}^{N} R^{(i)}$. The absolute error loss function (AEL), $|\hat\vartheta-\vartheta|$, under the assumed prior distribution is minimized by the posterior median, giving $\hat R_{AEL}$. The maximum a posteriori (MAP) estimator can be used when no loss function is specified; it depends on the likelihood function and the prior distribution, which makes it closely related to maximum likelihood, and it is the value that maximizes the posterior distribution, i.e., the posterior mode. The linear exponential loss function (LINEX) with parameter c, defined by $e^{c(\hat\vartheta-\vartheta)} - c(\hat\vartheta-\vartheta) - 1$, is minimized by $\hat R_{LIN} = -\frac{1}{c}\log\left[\frac{1}{N-M}\sum_{i=M+1}^{N} e^{-cR^{(i)}}\right]$; the sign of the parameter c reflects the direction of the asymmetry, while its magnitude reflects the degree of the asymmetry. The general entropy loss function (GEL) [42], $\left(\frac{\hat\vartheta}{\vartheta}\right)^q - q\log\left(\frac{\hat\vartheta}{\vartheta}\right) - 1$, is minimized by $\hat R_{GEL} = \left[\frac{1}{N-M}\sum_{i=M+1}^{N}\left(R^{(i)}\right)^{-q}\right]^{-1/q}$. Moreover, the highest posterior density (HPD) credible interval for R is constructed. Here, N is the number of iterations and M is the burn-in.
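Given post-burn-in draws R^(M+1), …, R^(N) produced by the algorithms of the following subsections, all these point estimators are one-liners; a minimal R sketch (bayes_R is our name, and the MAP is approximated here by the mode of a kernel density estimate of the draws):

# Bayes point estimates of R under the loss functions above, from a vector
# R_draws of post-burn-in posterior draws of R.
bayes_R <- function(R_draws, c = 2, q = 3) {
  d <- density(R_draws)                          # kernel density estimate
  c(SEL = mean(R_draws),                         # posterior mean
    AEL = median(R_draws),                       # posterior median
    MAP = d$x[which.max(d$y)],                   # approximate posterior mode
    LIN = -log(mean(exp(-c * R_draws))) / c,     # LINEX with parameter c
    GEL = mean(R_draws^(-q))^(-1 / q))           # general entropy with parameter q
}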

4.1. Bayes Estimation of R in General Case

Let $x_1, x_2, \dots, x_{n_1}$ be an independent random sample of size $n_1$ from PHLD1$(\alpha_1, \lambda_1)$ and $y_1, y_2, \dots, y_{n_2}$ an independent random sample of size $n_2$ from PHLD2$(\alpha_2, \lambda_2)$. Assume that α1, α2, λ1, λ2 are independent and follow the gamma prior densities Gamma(a1, b1), Gamma(a2, b2), Gamma(a3, b3), and Gamma(a4, b4), respectively. Then the joint density of the data and α1, α2, λ1, λ2 is given by
$$\ell(data;\ \alpha_1,\alpha_2,\lambda_1,\lambda_2) = L(\alpha_1,\alpha_2,\lambda_1,\lambda_2 \mid data)\,\pi_1(\alpha_1)\,\pi_2(\alpha_2)\,\pi_3(\lambda_1)\,\pi_4(\lambda_2),$$
where π_i(·), i = 1, 2, 3, 4, are the gamma prior densities for α1, α2, λ1, and λ2, respectively. Thus, the joint posterior density of α1, α2, λ1, λ2 given the data sets is
$$P(\alpha_1,\alpha_2,\lambda_1,\lambda_2 \mid data) = \frac{\ell(data;\ \alpha_1,\alpha_2,\lambda_1,\lambda_2)}{\iiiint \ell(data;\ \alpha_1,\alpha_2,\lambda_1,\lambda_2)\,d\alpha_1\,d\alpha_2\,d\lambda_1\,d\lambda_2}. \tag{19}$$
The above Equation (19) cannot be expressed in closed form; therefore, we employ the Gibbs sampling technique to compute the Bayes estimates of R under the various loss functions and an approximate 100(1−ε)% credible interval of R. The marginal posterior densities of α1, α2, λ1, and λ2 are:
$$P_1(\alpha_1 \mid data) \propto \alpha_1^{n_1+a_1-1}\, e^{-\alpha_1\left(b_1+\sum_{i=1}^{n_1}x_i\right)+\lambda_1\sum_{i=1}^{n_1}\Delta_1(x_i)}\prod_{i=1}^{n_1}\left(1+e^{-\alpha_1 x_i}\right)^{-2}, \tag{20}$$
$$P_2(\alpha_2 \mid data) \propto \alpha_2^{n_2+a_2-1}\, e^{-\alpha_2\left(b_2+\sum_{j=1}^{n_2}y_j\right)+\lambda_2\sum_{j=1}^{n_2}\Delta_2(y_j)}\prod_{j=1}^{n_2}\left(1+e^{-\alpha_2 y_j}\right)^{-2}, \tag{21}$$
$$P_3(\lambda_1 \mid data) \propto \lambda_1^{n_1+a_3-1}\, e^{-\lambda_1\left(b_3-\sum_{i=1}^{n_1}\Delta_1(x_i)\right)}\left(e^{\lambda_1}-1\right)^{-n_1}, \tag{22}$$
$$P_4(\lambda_2 \mid data) \propto \lambda_2^{n_2+a_4-1}\, e^{-\lambda_2\left(b_4-\sum_{j=1}^{n_2}\Delta_2(y_j)\right)}\left(e^{\lambda_2}-1\right)^{-n_2}. \tag{23}$$
The conditional distributions obtained from the posteriors $P_i$ in (20)–(23) are not of standard form, so we obtain samples by applying the Metropolis–Hastings algorithm, see [43,44,45], taking the proposal distribution to be a normal distribution. In general, we consider the Gibbs sampling technique to generate samples from the posterior distributions and then compute the Bayes estimators of R with respect to the loss functions; we can also obtain the highest posterior density (HPD) interval for R. The step-by-step Gibbs sampling algorithm is given below (an R sketch of the Metropolis–Hastings update and of the HPD computation follows after the steps):
  • Step 1: Start with initial guess ( α 1 ( 0 ) , α 2 ( 0 ) , λ 1 ( 0 ) , λ 2 ( 0 ) )
  • Step 2: Set t = 1
  • Step 3: Use the Metropolis–Hastings algorithm to generate α 1 ( t ) from P 1 and λ 1 ( t ) from P 3
  • Step 4: Use the Metropolis–Hastings algorithm to generate α 2 ( t ) from P 2 and λ 2 ( t ) from P 4
  • Step 5: Compute R ( t ) from Equation (7)
  • Step 6: Set t = t + 1
  • Step 7: Repeat steps 3 to 6, T times.
For a sufficiently large value of T, we can obtain approximations of R_SEL, R_AEL, R_MAP, R_LIN, and R_GEL. An approximate 100(1−ε)% credible interval of R based on the SEL can be computed using the procedure provided by [46], as the shortest of the intervals $\left(\hat R_{(1)}, \hat R_{((1-\epsilon)T)}\right), \left(\hat R_{(2)}, \hat R_{((1-\epsilon)T+1)}\right), \dots, \left(\hat R_{(\epsilon T)}, \hat R_{(T)}\right)$, where $\hat R_{(t)}$ denotes the t-th ordered draw.
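A minimal R sketch of the two building blocks follows: a random-walk Metropolis-Hastings update with a normal proposal (proposals outside (0, ∞) are rejected, which is valid for a target supported on the positive half-line), and the shortest-interval HPD computation of [46]; log_post stands for the log of the relevant kernel, e.g. one of (20)–(23):

# One random-walk Metropolis-Hastings update for a positive parameter.
mh_step <- function(theta, log_post, sd_prop = 0.1) {
  prop <- rnorm(1, theta, sd_prop)               # normal proposal
  if (prop > 0 && log(runif(1)) < log_post(prop) - log_post(theta))
    prop else theta
}
# HPD interval: the shortest among the empirical (1 - eps) intervals.
hpd <- function(R_draws, eps = 0.05) {
  s <- sort(R_draws); n <- length(s); k <- floor((1 - eps) * n)
  i <- which.min(s[(k + 1):n] - s[1:(n - k)])    # index of the shortest interval
  c(s[i], s[i + k])
}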

4.2. Bayes Estimation of R with Common Scale Parameter α

Let $x_1, x_2, \dots, x_{n_1}$ be an independent random sample of size $n_1$ from PHLD1$(\alpha, \lambda_1)$ and $y_1, y_2, \dots, y_{n_2}$ an independent random sample of size $n_2$ from PHLD2$(\alpha, \lambda_2)$. Assume that α, λ1, λ2 are independent with gamma prior densities Gamma(a1, b1), Gamma(a2, b2), and Gamma(a3, b3), respectively. Then the joint density of the data and α, λ1, λ2 is given by
$$\ell(data;\ \alpha,\lambda_1,\lambda_2) = L(\alpha,\lambda_1,\lambda_2 \mid data)\,\pi_1(\alpha)\,\pi_2(\lambda_1)\,\pi_3(\lambda_2),$$
where π_i(·), i = 1, 2, 3, are the gamma prior densities for α, λ1, and λ2, respectively. Thus, the joint posterior density of α, λ1, λ2 given the data sets is
$$P(\alpha,\lambda_1,\lambda_2 \mid data) = \frac{\ell(data;\ \alpha,\lambda_1,\lambda_2)}{\iiint \ell(data;\ \alpha,\lambda_1,\lambda_2)\,d\alpha\,d\lambda_1\,d\lambda_2}. \tag{24}$$
Equation (24) is again intractable, so the Gibbs sampling technique is applied to obtain the Bayes estimates of α, λ1, λ2, to compute R and its credible interval. The posterior densities of α, λ1, and λ2 are:
$$P_1(\alpha \mid data) \propto \alpha^{n_1+n_2+a_1-1}\, e^{-\alpha\left(b_1+\sum_{i=1}^{n_1}x_i+\sum_{j=1}^{n_2}y_j\right)+\lambda_1\sum_{i=1}^{n_1}\Delta(x_i)+\lambda_2\sum_{j=1}^{n_2}\Delta(y_j)}\prod_{i=1}^{n_1}\left(1+e^{-\alpha x_i}\right)^{-2}\prod_{j=1}^{n_2}\left(1+e^{-\alpha y_j}\right)^{-2}, \tag{25}$$
$$P_2(\lambda_1 \mid data) \propto \lambda_1^{n_1+a_2-1}\, e^{-\lambda_1\left(b_2-\sum_{i=1}^{n_1}\Delta(x_i)\right)}\left(e^{\lambda_1}-1\right)^{-n_1}, \tag{26}$$
$$P_3(\lambda_2 \mid data) \propto \lambda_2^{n_2+a_3-1}\, e^{-\lambda_2\left(b_3-\sum_{j=1}^{n_2}\Delta(y_j)\right)}\left(e^{\lambda_2}-1\right)^{-n_2}. \tag{27}$$
Here, the posterior distributions $P_i$ in (25)–(27) are not of well-known form; therefore we apply the Gibbs sampling technique to generate samples from the posterior distributions as in Section 4.1 above, then compute the Bayes estimators of R with respect to the loss functions, and the HPD credible interval for R. The steps are given below:
  • Step 1: Start with initial guess ( α ( 0 ) , λ 1 ( 0 ) , λ 2 ( 0 ) )
  • Step 2: Set t = 1
  • Step 3: Use the Metropolis–Hastings algorithm to generate λ 1 ( t ) from P 2 and λ 2 ( t ) from P 3
  • Step 4: Use the Metropolis–Hastings algorithm to generate α ( t ) from P 1
  • Step 5: Compute R ( t ) from Equation (14)
  • Step 6: Set t = t + 1
  • Step 7: Repeat steps 3 to 6, T times.
For a sufficiently large value of T, the approximate R_SEL, R_AEL, R_MAP, R_LIN, R_GEL, and the HPD credible interval of R can be computed from the resulting samples as described in Section 4.1.

5. Simulation

In this section, Monte Carlo simulation is used to examine the performance of the different estimators discussed. Simulated samples were generated for different parameter values from independent PHLD1(α1, λ1) and PHLD2(α2, λ2) of sizes n1 and n2, respectively, using Equation (2). We consider the cases n1 = n2, n1 > n2, and n1 < n2, namely (20, 20), (40, 30), (40, 50), and (60, 60). The simulation studies were conducted using 1000 samples from PHLD1 and PHLD2. The MLE and the 95% asymptotic confidence interval (AS_CI) from the expected information matrix were computed; the percentile bootstrap (B_p) and student's bootstrap (B_t) confidence intervals were computed based on B = 1000 replications. The Bayes estimator of R was computed under the various loss functions with M = 1000 iterations, taking the first 10% as burn-in; the 95% HPD credible interval (HPD_CI) was obtained based on the SEL using the package HDInterval [47] in R. For the estimators based on LINEX and GEL, we take c = 2 and q = 3, respectively. We discuss the bias, mean square error (MSE), and confidence intervals with their coverage probabilities (CP) of R for the various estimation techniques. All computations were performed using R-software [48]. The resulting simulations are presented in Table 1 and Table 2. Observe from these tables that: (i) the MSE decreases as the sample sizes increase for both estimators; (ii) the MSEs of R based on R_SEL, R_AEL, and R_LIN are very close in most cases; (iii) for our choice of q = 3, the bias of R_GEL is negative in the majority of cases; (iv) the average length of the confidence interval (ALCI) decreases as the sample size increases for all techniques; (v) the MLE has a larger ALCI for smaller sample sizes and a smaller ALCI for the largest size in most cases; (vi) the Bayes estimators have, on average, shorter confidence intervals in all cases; (vii) the B_p and B_t performance is quite good, their CIs appear almost the same in all cases, and their coverage probabilities attain the nominal sizes in most cases for both small and large sample sizes; (viii) in general, both the estimation techniques and the confidence intervals are quite satisfactory and can be used to analyze stress-strength data from the PHLD, but for smaller sample sizes we recommend the use of the bootstrap B_p and B_t for estimating the confidence interval.
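For reference, a skeleton of one Monte Carlo cell of such a study, using the earlier sketches rphld, negll, and R_general (parameter values are illustrative only):

# Bias and MSE of the MLE of R over nsim replications at one design point.
nsim <- 1000; n1 <- 20; n2 <- 20
est <- replicate(nsim, {
  x <- rphld(n1, 1.5, 0.8); y <- rphld(n2, 1.0, 1.2)
  p <- exp(optim(rep(0, 4), negll, x = x, y = y, method = "BFGS")$par)
  R_general(p[1], p[3], p[2], p[4])
})
true_R <- R_general(1.5, 0.8, 1.0, 1.2)
c(bias = mean(est) - true_R, MSE = mean((est - true_R)^2))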

6. Real Data Study

In this section, we provide two real data applications to demonstrate how the proposed estimation techniques can be applied in practice. We computed R by maximum likelihood and by Bayes estimation under the various loss functions discussed, and the four confidence intervals studied were also obtained. The Kolmogorov-Smirnov (KS) goodness-of-fit statistic was used to demonstrate how well the models fitted by the proposed techniques represent the data sets. We consider B = 1000 replications for the bootstrap confidence intervals and M = 10,000 iterations for the Bayes estimation, with the first 10% as burn-in. For R_LIN we consider c = 2, and q = 3 for R_GEL.

6.1. Real Data Study 1

These are strength data, measured in GPa, for single carbon fibers and impregnated 1000-carbon-fiber tows. Single fibers were tested under tension at gauge lengths of 1, 10, 20, and 50 mm, and impregnated tows of 1000 fibers were tested at gauge lengths of 20, 50, 150, and 300 mm; some of these data sets were studied in stress-strength analyses by [27,49]. Here we use the single fibers of 20 mm (data1) and 10 mm (data2) gauge lengths. The data were provided by [50] and also analyzed by [51,52,53].
Data1: ( n 1 = 69 ) 0.312, 0.314, 0.479, 0.552, 0.700, 0.803, 0.861, 0.865, 0.944, 0.958, 0.966, 0.997, 1.006, 1.021, 1.027, 1.055, 1.063, 1.098, 1.140, 1.179, 1.224, 1.240, 1.253, 1.270, 1.272, 1.274, 1.301, 1.301, 1.359, 1.382, 1.382, 1.426, 1.434, 1.435, 1.478, 1.490, 1.511, 1.514, 1.535, 1.554, 1.566, 1.570, 1.586, 1.629, 1.633, 1.642, 1.648, 1.684, 1.697, 1.726, 1.770, 1.773, 1.800, 1.809, 1.818, 1.821, 1.848, 1.880, 1.954, 2.012, 2.067, 2.084, 2.090, 2.096, 2.128, 2.233, 2.433, 2.585, 2.585, and
Data2: ( n 2 = 63 ) 0.101, 0.332, 0.403, 0.428, 0.457, 0.550, 0.561, 0.596, 0.597, 0.645, 0.654, 0.674, 0.718, 0.722, 0.725, 0.732, 0.775, 0.814, 0.816, 0.818, 0.824, 0.859, 0.875, 0.938, 0.940, 1.056, 1.117, 1.128, 1.137, 1.137, 1.177, 1.196, 1.230, 1.325, 1.339, 1.345, 1.420, 1.423, 1.435, 1.443, 1.464, 1.472, 1.494, 1.532, 1.546, 1.577, 1.608, 1.635, 1.693, 1.701, 1.737, 1.754, 1.762, 1.828, 2.052, 2.071, 2.086, 2.171, 2.224, 2.227, 2.425, 2.595, 3.220.
For both the MLE and Bayes techniques, the estimated parameters and the KS statistics with p-values are presented, and the log-likelihood (L) for the MLEs is given in Table 3. Table 4 provides the estimated values of R, the confidence intervals, and their lengths; the confidence intervals obtained include the asymptotic confidence interval, B_p, B_t, and the HPD credible interval. From the table, all the computed values of R are very similar, and the MAP and MLE estimates in particular are very close. The confidence intervals are quite good and of almost the same length for all the techniques. Table 3 shows that the PHLD gives a good fit for both data sets according to the KS values. Based on the MLE, Figure 1a and Figure 2c show the empirical and fitted PHLD survival functions for data1 and data2; it can be seen graphically how well the PHLD survival curve follows the empirical curve, indicating how well the PHLD represents the two data sets. Figure 1b and Figure 2d show the quantile-quantile plots for data1 and data2, where almost all of the quantile points lie on the straight line, which also shows how well the PHLD represents the data sets. Figure 3 shows the profile log-likelihood for each parameter; it is clear from the curves that the maximized log-likelihood is unique. Figure 4 shows the posterior density of each parameter and the density of R based on the iterations obtained from the Bayes estimation, indicating how well the Bayes estimation performed, with the densities approaching the true posterior densities of the parameters. Figure 5 shows the Gibbs and Metropolis-Hastings iterations of each parameter and of the computed R from the Bayes technique; notice that for each parameter the iterated sample values are centered at their mean and the chains converge within the first few iterations. This demonstrates the performance of the Bayes estimators of the parameters of the PHLD. From this application, we can see that the PHLD is a good model for reliability studies.
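For completeness, the KS check of a fitted PHLD can be reproduced along the following lines (a sketch: pphld is our name for the cdf F(x) given in the Introduction, and a_hat, l_hat stand for the fitted MLEs of the data set at hand):

# Kolmogorov-Smirnov goodness-of-fit test of a fitted PHLD.
pphld <- function(q, alpha, lambda) {
  D <- (1 - exp(-alpha * q)) / (1 + exp(-alpha * q))    # Delta(q)
  expm1(lambda * D) / expm1(lambda)                     # F(q)
}
# With a data vector and fitted values in hand, e.g.:
# ks.test(data1, pphld, alpha = a_hat, lambda = l_hat)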

6.2. Real Data Study 2

In other fields of study, R is considered an important measure of the difference between two populations. Here, we analyze the data sets studied by [54,55,56,57] in the estimation of the stress-strength parameter. The data consist of the waiting times before service of the customers of two banks, A (data1) and B (data2); data1 was also discussed by [58].
Data1: ( n 1 = 100 ) 0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7, 2.9, 3.1, 3.2, 3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.6, 4.7, 4.7, 4.8, 4.9, 4.9, 5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2, 6.3, 6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0, 8.2, 8.6, 8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6, 9.7, 9.8, 10.7, 10.9, 11.0, 11.0, 11.1, 11.2, 11.2, 11.5, 11.9, 12.4, 12.5, 12.9, 13.0, 13.1, 13.3, 13.6, 13.7, 13.9, 14.1, 15.4, 15.4, 17.3, 17.3, 18.1, 18.2, 18.4, 18.9, 19.0, 19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0, 31.6, 33.1, 38.5, and
Data2: ( n 2 = 60 ) 0.1, 0.2, 0.3, 0.7, 0.9, 1.1, 1.2, 1.8, 1.9, 2.0, 2.2, 2.3, 2.3, 2.3, 2.5, 2.6, 2.7, 2.7, 2.9, 3.1, 3.1, 3.2, 3.4, 3.4, 3.5, 3.9, 4.0, 4.2, 4.5, 4.7, 5.3, 5.6, 5.6, 6.2, 6.3, 6.6, 6.8, 7.3, 7.5, 7.7, 7.7, 8.0, 8.0, 8.5, 8.5, 8.7, 9.5, 10.7, 10.9, 11.0, 12.1, 12.3, 12.8, 12.9, 13.2, 13.7, 14.5, 16.0, 16.5, 28.0.
In a similar way, the MLEs and Bayes estimators of the parameters, the KS statistics with p-values, and the log-likelihood (L) of the MLEs are computed and given in Table 5. The estimated values of R, the confidence intervals, and the lengths of the confidence intervals computed from the various techniques are provided in Table 6. Observe that all the computed values of R are close and the confidence intervals are quite good. It is clear from Table 5 that the PHLD provides a good fit for both data sets according to the KS values. Based on the MLE, Figure 6 and Figure 7 show the fitted PHLD survival functions (e), (g) and quantile-quantile plots (f), (h) for data1 and data2, to illustrate how well the PHLD represents the two data sets. Figure 8 shows the profile log-likelihood for each parameter, indicating that the maximized log-likelihood is unique. We also provide in Figure 9 the posterior density of each parameter and the density of R based on the iterations from the Bayes estimation, whereas Figure 10 shows the Gibbs and Metropolis-Hastings iterations of each parameter and of the computed R; this indicates the good performance of the Bayes estimators. From this illustration, it is quite clear that the PHLD can be a good candidate in stress-strength reliability analysis.

7. Conclusions

The estimation of the stress-strength parameter R when the random variables X and Y are independent Poisson half logistic random variables has been provided. We have addressed the general case and the case where the scale parameter is common. The point and interval estimation of R were discussed; these include the maximum likelihood estimation of R and its asymptotic confidence interval, and the percentile bootstrap and student's bootstrap confidence intervals. The Bayes estimation of R was computed under the square error loss function, absolute error loss function, linear exponential loss function, general entropy loss function, and maximum a posteriori, and the credible interval based on the square error loss function was also obtained. We examined the proposed point and interval estimates by simulation studies, and they work very well for various sample sizes, as measured by their MSE and confidence intervals; the MSE decreases as the sample size increases for both techniques, and based on the simulation results we recommend the use of the bootstrap for estimating the confidence interval for very small sample sizes. We used two real data studies to demonstrate the performance of the two estimators of R in practical applications; the goodness of fit of the MLE and BE estimates for each real data set was examined by the KS statistic, and the results were sufficient and satisfactory, while the Gibbs and Metropolis-Hastings iterations in the Bayes estimation converged within the first few iterations for both data sets. In each of the two real data studies, the estimates of R obtained from the MLE and BE techniques were nearly identical. We hope that the PHLD will be a very useful tool in stress-strength reliability studies. Based on our results, we suggest that the Bayesian estimation of the model can be further discussed under different priors; the MLE and Bayes estimation of the stress-strength parameter R can also be studied under progressively type-II censored samples and hybrid censored samples, and R can be estimated based on record values.

Author Contributions

Conceptualization, I.M. and X.W.; Formal analysis I.M., X.W. and C.L.; Funding acquisition, X.W.; Investigation, I.M., X.W., C.L., M.Y. and M.C.; Methodology, I.M. and X.W.; Project administration, X.W.; Resources, I.M., X.W. and C.L.; Software, I.M.; Supervision, X.W. and C.L.; Validation I.M., X.W. and C.L.; Writing—original draft, I.M.; Writing—review & editing, I.M., X.W., C.L., M.Y. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the National Natural Science Foundation of China (Grant No. 51475086), CAST-BISEE2019-019, the Fundamental Research Funds for the Central Universities (Grant No. N2023023), and the Natural Science Foundation of Hebei Province (No. E2020501013).

Acknowledgments

The authors are thankful to the editor and referees for their useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations and some notations are used in this manuscript:
AEL: absolute error loss function
ALCI: average length of confidence interval
AS_CI: asymptotic confidence interval
BE: Bayes estimation
B_p: percentile bootstrap confidence interval
B_t: student's bootstrap confidence interval
CI: confidence interval
CP: coverage probability
GEL: general entropy loss function
HPD: highest posterior density
I(θ): Fisher information matrix
KS: Kolmogorov-Smirnov
L: log-likelihood
LINEX: linear exponential loss function
MAP: maximum a posteriori
MLE: maximum likelihood estimation
MSE: mean square error
PHLD: Poisson half logistic distribution
R: stress-strength parameter
SEL: square error loss function

Appendix A

The elements of J(θ):
$$\frac{\partial^2 L}{\partial\alpha_1^2} = -\frac{n_1}{\alpha_1^2} - 2\sum_{i=1}^{n_1}\frac{x_i^2 e^{-\alpha_1 x_i}}{1+e^{-\alpha_1 x_i}} + 2\sum_{i=1}^{n_1}\frac{x_i^2 e^{-2\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2} - 2\lambda_1\sum_{i=1}^{n_1}\frac{x_i^2 e^{-\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2} + 4\lambda_1\sum_{i=1}^{n_1}\frac{x_i^2 e^{-2\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^3},$$
$$\frac{\partial^2 L}{\partial\alpha_2^2} = -\frac{n_2}{\alpha_2^2} - 2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-\alpha_2 y_j}}{1+e^{-\alpha_2 y_j}} + 2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-2\alpha_2 y_j}}{(1+e^{-\alpha_2 y_j})^2} - 2\lambda_2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-\alpha_2 y_j}}{(1+e^{-\alpha_2 y_j})^2} + 4\lambda_2\sum_{j=1}^{n_2}\frac{y_j^2 e^{-2\alpha_2 y_j}}{(1+e^{-\alpha_2 y_j})^3},$$
$$\frac{\partial^2 L}{\partial\lambda_1^2} = -\frac{n_1}{\lambda_1^2} + \frac{n_1 e^{\lambda_1}}{(e^{\lambda_1}-1)^2}, \qquad \frac{\partial^2 L}{\partial\lambda_2^2} = -\frac{n_2}{\lambda_2^2} + \frac{n_2 e^{\lambda_2}}{(e^{\lambda_2}-1)^2},$$
$$\frac{\partial^2 L}{\partial\alpha_1\partial\lambda_1} = 2\sum_{i=1}^{n_1}\frac{x_i e^{-\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2}, \qquad \frac{\partial^2 L}{\partial\alpha_2\partial\lambda_2} = 2\sum_{j=1}^{n_2}\frac{y_j e^{-\alpha_2 y_j}}{(1+e^{-\alpha_2 y_j})^2},$$
$$\frac{\partial^2 L}{\partial\alpha_1\partial\alpha_2} = \frac{\partial^2 L}{\partial\alpha_2\partial\alpha_1} = 0, \quad \frac{\partial^2 L}{\partial\alpha_1\partial\lambda_2} = \frac{\partial^2 L}{\partial\lambda_2\partial\alpha_1} = 0, \quad \frac{\partial^2 L}{\partial\alpha_2\partial\lambda_1} = \frac{\partial^2 L}{\partial\lambda_1\partial\alpha_2} = 0, \quad \frac{\partial^2 L}{\partial\lambda_1\partial\lambda_2} = \frac{\partial^2 L}{\partial\lambda_2\partial\lambda_1} = 0.$$
The elements of I(θ) (the rest of the elements follow in a similar way):
$$\begin{aligned} I_{\alpha_1\alpha_1} &= -E\!\left[\frac{\partial^2 L}{\partial\alpha_1^2}\right] = \frac{n_1}{\alpha_1^2} + 2\sum_{i=1}^{n_1}E\!\left[\frac{x_i^2 e^{-\alpha_1 x_i}}{1+e^{-\alpha_1 x_i}}\right] - 2\sum_{i=1}^{n_1}E\!\left[\frac{x_i^2 e^{-2\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2}\right] + 2\lambda_1\sum_{i=1}^{n_1}E\!\left[\frac{x_i^2 e^{-\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^2}\right] - 4\lambda_1\sum_{i=1}^{n_1}E\!\left[\frac{x_i^2 e^{-2\alpha_1 x_i}}{(1+e^{-\alpha_1 x_i})^3}\right] \\ &= \frac{n_1}{\alpha_1^2} + \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\int_0^\infty \frac{x^2 e^{-2\alpha_1 x+\lambda_1\Delta_1(x)}}{(1+e^{-\alpha_1 x})^3}\,dx - \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\int_0^\infty \frac{x^2 e^{-3\alpha_1 x+\lambda_1\Delta_1(x)}}{(1+e^{-\alpha_1 x})^4}\,dx \\ &\quad + \frac{4n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\int_0^\infty \frac{x^2 e^{-2\alpha_1 x+\lambda_1\Delta_1(x)}}{(1+e^{-\alpha_1 x})^4}\,dx - \frac{8n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\int_0^\infty \frac{x^2 e^{-3\alpha_1 x+\lambda_1\Delta_1(x)}}{(1+e^{-\alpha_1 x})^5}\,dx \\ &= \frac{n_1}{\alpha_1^2} + \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,2,3,\lambda_1) - \frac{4n_1\alpha_1\lambda_1}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,3,4,\lambda_1) + \frac{4n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,2,4,\lambda_1) - \frac{8n_1\alpha_1\lambda_1^2}{e^{\lambda_1}-1}\,\mathcal{I}(\alpha_1,2,3,5,\lambda_1). \end{aligned}$$

References

  1. Wolfe, D.A.; Hogg, R.V. On constructing statistics and reporting data. Am. Stat. 1971, 25, 27–30.
  2. Lloyd, D.K.; Lipow, M. Reliability: Management, Methods and Mathematics; Prentice-Hall: Englewood Cliffs, NJ, USA, 1962.
  3. Guttman, I.; Johnson, R.A.; Bhattacharyya, G.K.; Reiser, B. Confidence limits for stress-strength models with explanatory variables. Technometrics 1988, 30, 161–168.
  4. Kotz, S.; Lumelskii, Y.; Pensky, M. The Stress–Strength Model and Its Generalizations: Theory and Applications; World Scientific: Singapore, 2003.
  5. Ratnam, R.R.L.; Rosaiah, K.; Anjaneyulu, M.S.R. Estimation of reliability in multicomponent stress-strength model: Half logistic distribution. IAPQR Trans. 2000, 25, 43–52.
  6. Kim, D.H.; Kang, S.G.; Cho, J.S. Noninformative priors for stress-strength system in the Burr-type X model. J. Korean Stat. Soc. 2000, 29, 17–27.
  7. Guo, H.; Krishnamoorthy, K. New approximate inferential methods for the reliability parameter in a stress-strength model: The normal case. Commun. Stat. Theory Methods 2004, 33, 1715–1731.
  8. Barbiero, A. Confidence intervals for reliability of stress-strength models in the normal case. Commun. Stat. Simul. Comput. 2011, 40, 907–925.
  9. Gupta, R.C.; Brown, N. Reliability studies of the skew-normal distribution and its application to a strength-stress model. Commun. Stat. Theory Methods 2001, 30, 2427–2445.
  10. Azzalini, A.; Chiogna, M. Some results on the stress-strength model for skew-normal variates. Metron 2004, 62, 315–326.
  11. Shawky, A.I.; El Sayed, H.S.; Nassar, M.M. On stress-strength reliability model in generalized gamma case. IAPQR Trans. 2001, 26, 1–8.
  12. Khan, M.A.; Islam, H.M. On strength reliability for generalized gamma distributed stress. J. Stat. Theory Appl. 2009, 8, 115–124.
  13. Nadarajah, S. Reliability for logistic distributions. Elektron. Model. 2004, 26, 65–82.
  14. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Estimation of the stress-strength reliability for the generalized logistic distribution. Stat. Methodol. 2013, 15, 73–94.
  15. Nadarajah, S. Reliability for Laplace distributions. Math. Prob. Eng. 2004, 2, 169–183.
  16. Kundu, D.; Gupta, R.D. Estimation of P(Y < X) for generalized exponential distribution. Metrika 2005, 61, 291–308.
  17. Krishnamoorthy, K.; Mukherjee, S.; Guo, H. Inference on reliability in two-parameter exponential stress-strength model. Metrika 2007, 65, 261–273.
  18. Baklizi, A. Interval estimation of the stress-strength reliability in the two-parameter exponential distribution based on records. J. Stat. Comput. Simul. 2014, 84, 2670–2679.
  19. Baklizi, A.; El-Masri, A.Q. Shrinkage estimation of P(X < Y) in the exponential case with common location parameter. Metrika 2004, 59, 163–171.
  20. Kayid, M.; Elbatal, I.; Merovci, F. A new family of generalized quadratic hazard rate distribution with applications. J. Test. Eval. 2016, 44.
  21. Abbas, K.; Tang, Y. Objective Bayesian analysis of the Fréchet stress-strength model. Stat. Prob. Lett. 2014, 84, 169–175.
  22. Ghitany, M.E.; Al-Mutairi, D.K.; Aboukhamseen, S.M. Estimation of the reliability of a stress-strength system from power Lindley distributions. Commun. Stat. Simul. Comput. 2015, 44, 118–136.
  23. Okasha, H.M.; Kayid, M.; Abouammoh, M.A.; Elbatal, I. A new family of quadratic hazard rate-geometric distributions with reliability applications. J. Test. Eval. 2016, 44, 1937–1948.
  24. Nadarajah, S.; Bagheri, S.; Alizadeh, M.; Samani, E. Estimation of the stress-strength parameter for the generalized exponential-Poisson distribution. J. Test. Eval. 2018, 46, 2184–2202.
  25. Nadarajah, S. Reliability for some bivariate beta distributions. Math. Prob. Eng. 2005, 1, 101–111.
  26. Shrahili, M.; Elbatal, I.; Muhammad, I.; Muhammad, M. Properties and applications of beta Erlang-truncated exponential distribution. J. Math. Comput. Sci. 2020, 22, 16–37.
  27. Muhammad, M.; Liu, L. A new extension of the generalized half logistic distribution with applications to real data. Entropy 2019, 21, 339.
  28. Ahmad, K.E.; Jaheen, Z.F.; Yousef, M.M. Inference on Pareto distribution as stress-strength model based on generalized order statistics. J. Appl. Stat. Sci. 2010, 17, 247–257.
  29. Krishnamoorthy, K.; Lin, Y. Confidence limits for stress-strength reliability involving Weibull models. J. Stat. Plan. Infer. 2010, 140, 1754–1764.
  30. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280.
  31. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Stress-strength reliability of Weibull distribution based on progressively censored samples. SORT 2011, 35, 103–124.
  32. Asgharzadeh, A.; Kazemi, M.; Kundu, D. Estimation of P(X < Y) for Weibull distribution based on hybrid censored samples. Int. J. Syst. Assur. Eng. Manag. 2015.
  33. Valiollahi, R.; Asgharzadeh, A.; Raqab, M.Z. Estimation of P(Y < X) for Weibull distribution under progressive type-II censoring. Commun. Stat. Theory Methods 2013, 42, 4476–4498.
  34. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Estimation of Pr(Y < X) for the two-parameter generalized exponential records. Commun. Stat. Simul. Comput. 2017, 46, 371–394.
  35. Kohansal, A. Bayesian and classical estimation of R = P(X < Y) based on Burr type XII distribution under hybrid progressive censored samples. Commun. Stat. Theory Methods 2019.
  36. Yadav, A.S.; Singh, S.K.; Singh, U. Bayesian estimation of stress-strength reliability for Lomax distribution under type-II hybrid censored data using asymmetric loss function. Life Cycle Reliab. Saf. Eng. 2019, 8, 257–267.
  37. Abdel-Hamid, A.H. Properties, estimations and predictions for a Poisson-half-logistic distribution based on progressively type-II censored samples. Appl. Math. Model. 2016.
  38. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods: Fundamentals in Single Domains; Springer: New York, NY, USA, 2006.
  39. Muhammad, M.; Yahaya, M.A. The half logistic-Poisson distribution. Asian J. Math. Appl. 2017, 2017.
  40. Muhammad, M. Generalized half logistic Poisson distributions. Commun. Stat. Appl. Methods 2017, 24, 1–14.
  41. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman & Hall: New York, NY, USA, 1993.
  42. Calabria, R.; Pulcini, G. Point estimation under asymmetric loss function for left truncated exponential samples. Commun. Stat. Theory Methods 1996, 25, 585–600.
  43. Metropolis, N.; Rosenbluth, A.; Rosenbluth, M.; Teller, A.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1091.
  44. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109.
  45. Gelfand, A.E.; Smith, A.F.M. Sampling-based approaches to calculating marginal densities. J. Am. Stat. Assoc. 1990, 85, 398–409.
  46. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
  47. Meredith, M.; Kruschke, J. HDInterval: Highest (Posterior) Density Intervals. R Package Version 0.2.0. 2018. Available online: https://CRAN.R-project.org/package=HDInterval (accessed on 11 October 2020).
  48. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019. Available online: https://www.R-project.org/ (accessed on 11 October 2020).
  49. Al-Mutairi, D.K.; Ghitany, M.E.; Kundu, D. Inferences on stress-strength reliability from weighted Lindley distributions. Commun. Stat. Theory Methods 2015, 44, 4096–4113.
  50. Badar, M.G.; Priest, A.M. Statistical aspects of fibre and bundle strength in hybrid composites. In Progress in Science and Engineering Composites; Hayashi, T., Kawata, K., Umekawa, S., Eds.; ICCM-IV: Tokyo, Japan, 1982; pp. 1129–1136.
  51. Raqab, M.Z.; Kundu, D. Comparison of different estimators of P(Y < X) for a scaled Burr Type X distribution. Commun. Stat. Simul. Comput. 2005, 34, 465–483.
  52. Surles, J.G.; Padgett, W.J. Inference for reliability and stress-strength for a scaled Burr-Type X distribution. Lifetime Data Anal. 2001, 7, 187–200.
  53. Surles, J.G.; Padgett, W.J. Inference for P(Y < X) in the Burr-Type X model. J. Appl. Stat. Sci. 2001, 7, 225–238.
  54. Al-Mutairi, D.K.; Ghitany, M.E.; Kundu, D. Inferences on stress-strength reliability from Lindley distributions. Commun. Stat. Theory Methods 2013, 42, 1443–1463.
  55. Ali, S. On the mean residual life function and stress and strength analysis under different loss function for Lindley distribution. J. Qual. Reliab. Eng. 2013.
  56. Singh, S.K.; Singh, U.; Sharma, V.K. Estimation on system reliability in generalized Lindley stress-strength model. J. Stat. Appl. Prob. 2014, 3, 61–75.
  57. Sadek, A.; Mohie Eldin, M.; Elmeghawry, S. Estimation of stress-strength reliability for quasi Lindley distribution. Adv. Syst. Sci. Appl. 2018, 18, 39–51.
  58. Lindley, D.V. Fiducial distributions and Bayes theorem. J. R. Stat. Soc. 1958, 20, 102–107.
Figure 1. (a) Fitted PHLD1 survival function and (b) quantile–quantile plot of PHLD1 for data1 of real data study 1.
Figure 2. (c) Fitted PHLD2 survival function and (d) quantile–quantile plot of PHLD2 for data2 of real data study 1.
Figure 3. Profile log-likelihood for each parameter in real data study 1: α1 (left), α2 (middle-left), λ1 (middle-right), λ2 (right).
Figure 4. Posterior densities of each parameter and the density of R for real data study 1: α1 (left), α2 (middle-left), λ1 (middle), λ2 (middle-right), R (right).
Figure 5. Iterations obtained from the Gibbs and Metropolis–Hastings algorithms for each parameter and R for real data study 1: α1 (left), α2 (middle-left), λ1 (middle), λ2 (middle-right), R (right).
Figure 6. (e) Fitted PHLD1 survival function and (f) quantile–quantile plot of PHLD1 for data1 of real data study 2.
Figure 7. (g) Fitted PHLD2 survival function and (h) quantile–quantile plot of PHLD2 for data2 of real data study 2.
Figure 8. Profile log-likelihood for each parameter in real data study 2: α1 (left), α2 (middle-left), λ1 (middle-right), λ2 (right).
Figure 9. Posterior densities of each parameter and the density of R for real data study 2: α1 (left), α2 (middle-left), λ1 (middle), λ2 (middle-right), R (right).
Figure 10. Iterations obtained from the Gibbs and Metropolis–Hastings algorithms for each parameter and R for real data study 2: α1 (left), α2 (middle-left), λ1 (middle), λ2 (middle-right), R (right).
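Figures 5 and 10 show the sampler's trace plots. For readers who want to reproduce this kind of diagnostic, the following is a minimal random-walk Metropolis–Hastings sketch in R; it is a generic template with hypothetical names (`mh_sample`, `log_post`), not the authors' exact sampler.

```r
# Generic random-walk Metropolis-Hastings sampler (illustrative sketch):
# `log_post` is any log-posterior density; trace plots like Figures 5 and 10
# are obtained by plotting the columns of the returned draws.
mh_sample <- function(log_post, init, n_iter = 5000, step = 0.1) {
  draws <- matrix(NA_real_, nrow = n_iter, ncol = length(init))
  cur <- init
  cur_lp <- log_post(cur)
  for (t in seq_len(n_iter)) {
    prop <- cur + rnorm(length(cur), sd = step)   # symmetric Gaussian proposal
    prop_lp <- log_post(prop)
    if (is.finite(prop_lp) && log(runif(1)) < prop_lp - cur_lp) {
      cur <- prop                                  # accept the proposal
      cur_lp <- prop_lp
    }
    draws[t, ] <- cur                              # otherwise keep current state
  }
  draws
}

# Toy example: standard normal target; plot(draws[, 1], type = "l") gives a trace.
draws <- mh_sample(function(th) -0.5 * sum(th^2), init = c(0, 0))
```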
Table 1. Bias, mean square error (MSE), and average length of confidence interval (ALCI) with coverage probability (CP) of various estimators of R. (ASCI: asymptotic CI; BpCI: percentile bootstrap CI; BtCI: student's bootstrap CI; HPDCI: HPD credible interval.)

(α1, α2, λ1, λ2) = (2.0, 1.9, 3.7, 3.8), R = 0.4683

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0021   0.0084   0.0024   0.0049   0.0003   0.0036   0.0032   0.0029
R_SEL      0.0140   0.0031   0.0124   0.0024   0.0045   0.0020   0.0090   0.0019
R_MAP      0.0130   0.0039   0.0128   0.0029   0.0034   0.0023   0.0086   0.0022
R_AEL      0.0136   0.0032   0.0124   0.0024   0.0042   0.0020   0.0089   0.0019
R_LIN      0.0083   0.0029   0.0086   0.0023   0.0013   0.0020   0.0066   0.0019
R_GEL      0.0123   0.0033   0.0048   0.0024   0.0098   0.0022   0.0018   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.8396   0.90     0.3819   0.93     0.2391   0.90     0.1812   0.89
BpCI       0.3476   0.94     0.2669   0.94     0.2349   0.96     0.2024   0.93
BtCI       0.3476   0.93     0.2669   0.94     0.2349   0.95     0.2024   0.93
HPDCI      0.2904   0.98     0.2382   0.98     0.2167   0.98     0.1909   0.95

(α1, α2, λ1, λ2) = (1.5, 4.0, 2.0, 1.9), R = 0.8248

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0050   0.0047   0.0034   0.0024   0.0022   0.0020   0.0024   0.0016
R_SEL      0.0750   0.0075   0.0571   0.0045   0.0470   0.0032   0.0438   0.0028
R_MAP      0.0632   0.0061   0.0507   0.0039   0.0434   0.0030   0.0407   0.0027
R_AEL      0.0709   0.0069   0.0548   0.0042   0.0455   0.0031   0.0428   0.0027
R_LIN      0.0785   0.0081   0.0593   0.0048   0.0486   0.0033   0.0451   0.0027
R_GEL      0.0852   0.0094   0.0628   0.0053   0.0512   0.0036   0.0472   0.0031
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.5684   0.90     0.3748   0.90     0.1762   0.89     0.1341   0.88
BpCI       0.2555   0.96     0.1908   0.97     0.1804   0.97     0.1536   0.96
BtCI       0.2555   0.96     0.1908   0.96     0.1804   0.96     0.1536   0.95
HPDCI      0.2217   0.83     0.1732   0.90     0.1530   0.88     0.1374   0.89

(α1, α2, λ1, λ2) = (1.5, 1.5, 3.0, 3.5), R = 0.4690

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0003   0.0087   0.0005   0.0049   0.0005   0.0037   0.0017   0.0027
R_SEL      0.0137   0.0034   0.0132   0.0026   0.0035   0.0022   0.0064   0.0018
R_MAP      0.0136   0.0043   0.0140   0.0032   0.0023   0.0026   0.0066   0.0021
R_AEL      0.0135   0.0035   0.0132   0.0026   0.0031   0.0022   0.0062   0.0019
R_LIN      0.0080   0.0032   0.0094   0.0025   0.0004   0.0022   0.0039   0.0018
R_GEL      0.0126   0.0034   0.0037   0.0026   0.0105   0.0024   0.0044   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.6148   0.95     0.3373   0.90     0.2060   0.90     0.1583   0.89
BpCI       0.3459   0.93     0.2659   0.95     0.2351   0.94     0.2024   0.96
BtCI       0.3459   0.92     0.2660   0.94     0.2351   0.93     0.2024   0.96
HPDCI      0.2905   0.98     0.2356   0.98     0.2143   0.97     0.1905   0.98

(α1, α2, λ1, λ2) = (1.9, 1.7, 4.0, 3.0), R = 0.5055

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0029   0.0083   0.0013   0.0051   0.0018   0.0035   0.0005   0.0027
R_SEL      0.0057   0.0029   0.0008   0.0024   0.0068   0.0020   0.0008   0.0018
R_MAP      0.0048   0.0039   0.0003   0.0029   0.0068   0.0023   0.0006   0.0021
R_AEL      0.0058   0.0031   0.0006   0.0024   0.0069   0.0020   0.0009   0.0018
R_LIN      0.0115   0.0030   0.0047   0.0024   0.0099   0.0020   0.0017   0.0018
R_GEL      0.0312   0.0042   0.0172   0.0028   0.0200   0.0024   0.0094   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.5754   0.92     0.2823   0.87     0.1868   0.89     0.1406   0.90
BpCI       0.3527   0.94     0.2711   0.94     0.2353   0.95     0.2034   0.95
BtCI       0.3527   0.94     0.2712   0.93     0.2353   0.95     0.2034   0.96
HPDCI      0.2907   0.99     0.2385   0.98     0.2144   0.99     0.1908   0.98

(α1, α2, λ1, λ2) = (2.5, 2.7, 1.6, 1.9), R = 0.5068

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0029   0.0023   0.0013   0.0020   0.0005   0.0019   0.0004   0.0017
R_SEL      0.0028   0.0034   0.0012   0.0023   0.0042   0.0021   0.0047   0.0016
R_MAP      0.0034   0.0045   0.0006   0.0029   0.0045   0.0024   0.0044   0.0019
R_AEL      0.0030   0.0036   0.0010   0.0024   0.0042   0.0021   0.0047   0.0017
R_LIN      0.0030   0.0035   0.0048   0.0023   0.0013   0.0020   0.0024   0.0016
R_GEL      0.0220   0.0043   0.0162   0.0027   0.0076   0.0022   0.0044   0.0017
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.5153   0.91     0.1734   0.80     0.1111   0.81     0.1101   0.85
BpCI       0.2246   0.89     0.2646   0.96     0.2341   0.95     0.2019   0.95
BtCI       0.2246   0.96     0.2646   0.95     0.2341   0.94     0.2019   0.95
HPDCI      0.2896   0.98     0.2283   0.98     0.2053   0.98     0.1807   0.98

(α1, α2, λ1, λ2) = (0.9, 0.9, 0.9, 0.9), R = 0.5000

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0024   0.0079   0.0021   0.0046   0.0002   0.0037   0.0007   0.0024
R_SEL      0.0023   0.0048   0.0020   0.0030   0.0001   0.0026   0.0001   0.0017
R_MAP      0.0017   0.0061   0.0026   0.0036   0.0004   0.0031   0.0001   0.0021
R_AEL      0.0023   0.0051   0.0024   0.0031   0.0002   0.0027   0.0002   0.0018
R_LIN      0.0034   0.0049   0.0016   0.0030   0.0028   0.0027   0.0021   0.0017
R_GEL      0.0228   0.0058   0.0133   0.0033   0.0118   0.0029   0.0090   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       4.7021   0.69     0.1147   0.70     0.0687   0.69     0.0511   0.70
BpCI       0.3457   0.93     0.2631   0.95     0.2321   0.94     0.2007   0.95
BtCI       0.3457   0.93     0.2631   0.94     0.2321   0.93     0.2007   0.95
HPDCI      0.2877   0.95     0.2287   0.96     0.2032   0.94     0.1785   0.95
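For reference, the quantities reported in Tables 1 and 2 follow the standard Monte Carlo definitions; the short R helper below (our own illustrative code, with `est` a vector of simulated estimates of R and `ci` a two-column matrix of interval endpoints) makes these definitions explicit.

```r
# Monte Carlo summaries as reported in Tables 1 and 2 (illustrative helper):
# bias and MSE of the point estimates, average length of the confidence
# intervals (ALCI), and their empirical coverage probability (CP).
summarize_mc <- function(est, ci, R_true) {
  c(Bias = mean(est) - R_true,
    MSE  = mean((est - R_true)^2),
    ALCI = mean(ci[, 2] - ci[, 1]),
    CP   = mean(ci[, 1] <= R_true & R_true <= ci[, 2]))
}
```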
Table 2. Bias, MSE, and ALCI with their CP of various estimators of R.

(α1, α2, λ1, λ2) = (1.7, 3.5, 2.0, 2.1), R = 0.7529

(n1, n2)   (20, 20)          (40, 30)           (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE       Bias     MSE      Bias     MSE
R_MLE      0.0021   0.0065   0.0009   0.0034    0.0244   0.0028   0.0007   0.0020
R_SEL      0.0643   0.0065   0.0511   0.0042    0.0454   0.0033   0.0384   0.0027
R_MAP      0.0549   0.0060   0.0454   0.00396   0.0418   0.0032   0.0036   0.0026
R_AEL      0.0601   0.0061   0.0489   0.0040    0.0441   0.0032   0.0374   0.0026
R_LIN      0.0688   0.0072   0.0538   0.0045    0.0475   0.0036   0.0400   0.0028
R_GEL      0.0784   0.0089   0.0592   0.00523   0.0516   0.0040   0.0432   0.0031
           ALCI     CP       ALCI     CP        ALCI     CP       ALCI     CP
ASCI       0.5512   0.92     0.2947   0.91      0.1801   0.89     0.1361   0.82
BpCI       0.2937   0.94     0.2211   0.94      0.2061   0.95     0.1745   0.95
BtCI       0.2937   0.93     0.2211   0.94      0.2061   0.95     0.1745   0.95
HPDCI      0.2507   0.92     0.1983   0.89      0.1768   0.89     0.1565   0.88

(α1, α2, λ1, λ2) = (2.0, 2.0, 2.0, 1.5), R = 0.5378

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0029   0.0086   0.0010   0.0049   0.0023   0.0037   0.0005   0.0030
R_SEL      0.0174   0.0042   0.0143   0.0028   0.0110   0.0021   0.0092   0.0018
R_MAP      0.0161   0.0051   0.0013   0.0032   0.0110   0.0024   0.0089   0.0021
R_AEL      0.0169   0.0045   0.0139   0.0028   0.0108   0.0021   0.0090   0.0018
R_LIN      0.0232   0.0044   0.0179   0.0029   0.0138   0.0021   0.0115   0.0019
R_GEL      0.0418   0.0061   0.0290   0.0036   0.0223   0.0025   0.0180   0.0021
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.2800   0.79     0.1556   0.78     0.0985   0.75     0.0743   0.72
BpCI       0.3488   0.93     0.2683   0.94     0.2357   0.96     0.2039   0.95
BtCI       0.3488   0.92     0.2683   0.94     0.2357   0.95     0.2039   0.94
HPDCI      0.2895   0.96     0.2299   0.96     0.2049   0.97     0.1814   0.96

(α1, α2, λ1, λ2) = (1.7, 1.8, 2.5, 2.5), R = 0.5245

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0048   0.0083   0.0001   0.0050   0.0022   0.0038   0.0008   0.0025
R_SEL      0.0057   0.0034   0.0065   0.0025   0.0049   0.0021   0.0036   0.0016
R_MAP      0.0040   0.0045   0.0051   0.0031   0.0049   0.0025   0.0029   0.0019
R_AEL      0.0053   0.0035   0.0061   0.0026   0.0048   0.0021   0.0034   0.0016
R_LIN      0.0114   0.0035   0.0101   0.0026   0.0079   0.0021   0.0059   0.0016
R_GEL      0.0270   0.0038   0.0216   0.0031   0.0170   0.0025   0.0131   0.0018
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.4264   0.85     0.2357   0.82     0.1479   0.83     0.1168   0.83
BpCI       0.3500   0.93     0.2681   0.93     0.2367   0.94     0.2044   0.96
BtCI       0.3500   0.92     0.2681   0.93     0.2367   0.94     0.2044   0.96
HPDCI      0.2888   0.98     0.2326   0.98     0.2100   0.98     0.1863   0.98

(α1, α2, λ1, λ2) = (1.9, 1.8, 3.6, 4.5), R = 0.4228

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0001   0.0077   0.0013   0.0046   0.0018   0.0034   0.0002   0.0027
R_SEL      0.0342   0.0037   0.0307   0.0067   0.0133   0.0021   0.0145   0.0020
R_MAP      0.0321   0.0044   0.0290   0.0036   0.0111   0.0025   0.0130   0.0022
R_AEL      0.0335   0.0038   0.0302   0.0032   0.0126   0.0021   0.0140   0.0020
R_LIN      0.0284   0.0034   0.0268   0.0029   0.0101   0.0020   0.0120   0.0019
R_GEL      0.0064   0.0029   0.0124   0.0025   0.0020   0.0021   0.0027   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.8269   0.97     0.4540   0.96     0.2306   0.94     0.2143   0.90
BpCI       0.3393   0.94     0.2603   0.94     0.2306   0.95     0.1990   0.94
BtCI       0.3393   0.93     0.2603   0.93     0.2306   0.94     0.1990   0.94
HPDCI      0.2903   0.98     0.2385   0.97     0.2163   0.98     0.1909   0.96

(α1, α2, λ1, λ2) = (4.0, 3.5, 5.0, 2.9), R = 0.5489

(n1, n2)   (20, 20)          (40, 30)          (40, 50)          (60, 60)
           Bias     MSE      Bias     MSE      Bias     MSE      Bias     MSE
R_MLE      0.0005   0.8882   0.0004   0.0053   0.0028   0.0037   0.0045   0.0027
R_SEL      0.0345   0.0036   0.0216   0.0026   0.0157   0.0021   0.0047   0.0017
R_MAP      0.0346   0.0044   0.0203   0.0031   0.0155   0.0024   0.0039   0.0020
R_AEL      0.0342   0.0037   0.0213   0.0027   0.0156   0.0021   0.0043   0.0017
R_LIN      0.0402   0.0040   0.0254   0.0028   0.0189   0.0022   0.0071   0.0017
R_GEL      0.0589   0.0061   0.0370   0.0037   0.028    0.0027   0.0140   0.0019
           ALCI     CP       ALCI     CP       ALCI     CP       ALCI     CP
ASCI       0.5480   0.94     0.3087   0.90     0.1907   0.88     0.1429   0.80
BpCI       0.3536   0.95     0.2722   0.94     0.2335   0.95     0.2036   0.95
BtCI       0.3536   0.94     0.2722   0.93     0.2335   0.95     0.2036   0.94
HPDCI      0.2889   0.98     0.2374   0.98     0.2126   0.96     0.1889   0.98
Table 3. Parameters estimated by MLE and Bayes estimation, log-likelihood (L), and Kolmogorov–Smirnov (KS) statistics with p-values for real data study 1.

        α1       α2       λ1       λ2       L
MLE     2.2424   2.1369   8.3180   4.5606   109.05
Bayes   1.9001   1.7690   5.0526   2.6749

          Data1 (MLE)   Data2 (MLE)   Data1 (Bayes)   Data2 (Bayes)
KS        0.0762        0.0829        0.1158          0.0942
p-value   0.7894        0.7478        0.3089          0.5975
Table 4. Estimated value of R, confidence intervals (CI), and lengths of the confidence intervals (LCI) for real data study 1.

       R_MLE    R_SEL     R_MAP     R_AEL     R_LIN     R_GEL
R      0.6133   0.59654   0.61173   0.59798   0.59432   0.58887

       ASCI               BpCI               BtCI               HPDCI
CI     (0.4790, 0.7477)   (0.5203, 0.7084)   (0.5203, 0.7064)   (0.5018, 0.6857)
LCI    0.2687             0.1861             0.1861             0.1839
Table 5. Parameters estimated by MLE and Bayes estimation, log-likelihood (L), and KS statistics with p-values for real data study 2.

        α1       α2       λ1       λ2       L
MLE     0.1932   0.2369   1.5734   0.3897   489.34
Bayes   0.1722   0.2425   0.8863   0.4738

          Data1 (MLE)   Data2 (MLE)   Data1 (Bayes)   Data2 (Bayes)
KS        0.0594        0.0700        0.0793          0.0718
p-value   0.8724        0.9101        0.5551          0.8946
Table 6. Estimated value of R, confidence intervals (CI), and lengths of the confidence intervals (LCI) for real data study 2.

       R_MLE     R_SEL     R_MAP     R_AEL     R_LIN     R_GEL
R      0.65995   0.63970   0.65208   0.64078   0.63809   0.63455

       ASCI               BpCI               BtCI               HPDCI
CI     (0.6380, 0.6817)   (0.5729, 0.7417)   (0.5729, 0.7470)   (0.5569, 0.7148)
LCI    0.0439             0.1742             0.1742             0.1579
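The HPD credible intervals in Tables 4 and 6 are computed directly from the posterior draws of R. Below is a minimal sketch using the HDInterval package cited in [47]; the posterior sample here is a placeholder, not the paper's actual chain.

```r
# HPD credible interval for R from posterior draws (illustrative sketch).
library(HDInterval)

set.seed(1)
R_draws <- rbeta(10000, 60, 40)        # placeholder for the MCMC sample of R
hdi(R_draws, credMass = 0.95)          # 95% HPD interval (HDInterval package)
quantile(R_draws, c(0.025, 0.975))     # equal-tailed interval, for comparison
```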
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
