Article

Statistical Inference of Inverted Exponentiated Rayleigh Distribution under Joint Progressively Type-II Censoring

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(2), 171; https://doi.org/10.3390/e24020171
Submission received: 2 December 2021 / Revised: 6 January 2022 / Accepted: 20 January 2022 / Published: 24 January 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract
The inverted exponentiated Rayleigh distribution is a widely used and important continuous lifetime distribution, which plays a key role in lifetime research. The joint progressively type-II censoring scheme is an effective method for the quality evaluation of products from different assembly lines. In this paper, we study the statistical inference of the inverted exponentiated Rayleigh distribution based on joint progressively type-II censored data. The likelihood function and maximum likelihood estimates are first obtained by adopting the Expectation-Maximization algorithm. Then, we calculate the observed information matrix based on the missing value principle. Bootstrap-p and Bootstrap-t methods are applied to obtain confidence intervals. Bayesian approaches under the square loss function and the linex loss function, respectively, are provided to derive the estimates, with the importance sampling method introduced along the way. Finally, Monte Carlo simulation and real data analysis are performed for further study.

1. Introduction

1.1. Inverted Exponentiated Rayleigh Distribution

The Rayleigh distribution is a special form of the Weibull distribution, first proposed by Rayleigh in his study of problems in acoustics. Ref. [1] discussed a generalization of the Rayleigh distribution and its application to practical problems. Ref. [2] introduced Bayesian approaches to the statistical inference of the Rayleigh distribution. Ref. [3] combined progressively type-II censored data with the Rayleigh model and studied parameter estimation. The Rayleigh distribution has only one parameter, so its practical applications are limited and inflexible, and many scholars have extended it to two-parameter distributions. Refs. [4,5] began to study estimation for the generalized Rayleigh distribution. Ref. [6] extended the progressively type-II censoring scheme to the generalized Rayleigh distribution. Recently, under a non-informative prior, Ref. [7] studied estimation of the shape parameter of the generalized Rayleigh distribution. The hazard function of the generalized Rayleigh distribution is monotone when the shape parameter is greater than 1/2, whereas the hazard function of the inverted exponentiated Rayleigh distribution is nonmonotone, which is more realistic in practice; for this reason, more researchers have become interested in this distribution. Ref. [8] analyzed estimation of the inverted exponentiated Rayleigh distribution under a progressively first-failure censoring scheme, and Ref. [9] further studied its prediction. Ref. [10] considered the inverted exponentiated Rayleigh distribution with adaptive type-II progressive hybrid censored data. Ref. [11] applied this distribution to the analysis of coating-weight data for iron sheets.
A random variable X follows the inverted exponentiated Rayleigh distribution (IERD) if its probability density function (pdf), cumulative distribution function (cdf), and hazard function take, respectively, the forms
$$f(x;\theta,\lambda)=2\theta\lambda x^{-3}e^{-\lambda x^{-2}}\left(1-e^{-\lambda x^{-2}}\right)^{\theta-1},\quad x>0;\ \theta,\lambda>0,\tag{1}$$
$$F(x;\theta,\lambda)=1-\left(1-e^{-\lambda x^{-2}}\right)^{\theta},\quad x>0;\ \theta,\lambda>0,\tag{2}$$
$$h(x;\theta,\lambda)=2\theta\lambda x^{-3}e^{-\lambda x^{-2}}\left(1-e^{-\lambda x^{-2}}\right)^{-1},\quad x>0;\ \theta,\lambda>0,$$
where $\theta$ and $\lambda$ are the shape and scale parameters, respectively, and both are positive. Figure 1 shows the pdfs and hazard functions of the IERD for different $\theta$ and $\lambda$. It is obvious that both functions are nonmonotone. The IERD can be treated as a useful alternative to other lifetime distributions such as the Weibull distribution, and it has been applied in many fields. In particular, it can be effectively applied to many experiments on electrical or mechanical devices, patient treatments, and so on.
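As a quick sanity check on these formulas, the pdf, cdf and hazard function, together with inverse-CDF sampling (obtained by solving $F(x)=u$ for $x$), can be sketched in Python. The function names are our own illustration, not code from the paper:

```python
import math
import random

def ierd_pdf(x, theta, lam):
    """Density of the inverted exponentiated Rayleigh distribution."""
    g = math.exp(-lam / x ** 2)
    return 2.0 * theta * lam * x ** -3 * g * (1.0 - g) ** (theta - 1)

def ierd_cdf(x, theta, lam):
    """Distribution function F(x) = 1 - (1 - e^{-lam/x^2})^theta."""
    return 1.0 - (1.0 - math.exp(-lam / x ** 2)) ** theta

def ierd_hazard(x, theta, lam):
    """Hazard function h(x) = f(x) / (1 - F(x))."""
    g = math.exp(-lam / x ** 2)
    return 2.0 * theta * lam * x ** -3 * g / (1.0 - g)

def ierd_sample(theta, lam, rng=random):
    """Inverse-CDF sampling: solve F(x) = u for x, with u uniform on (0,1).
    F(x) = u  =>  x = sqrt(-lam / ln(1 - (1-u)^{1/theta}))."""
    u = rng.random()
    return math.sqrt(-lam / math.log(1.0 - (1.0 - u) ** (1.0 / theta)))
```

The identity $h(x)=f(x)/(1-F(x))$ and the explicit inversion $x=\sqrt{-\lambda/\ln(1-(1-u)^{1/\theta})}$ provide convenient internal checks on an implementation.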

1.2. Joint Progressively Type-II Censoring Scheme

Type-I and type-II censoring schemes have been the most widely used. Under a type-I censoring scheme, the experiment is terminated at a prefixed time and the failure times observed up to then are recorded. Under a type-II censoring scheme, the experiment is terminated once a prefixed number of units have failed. However, these two censoring schemes do not work well when the lifetimes of the tested units are relatively long. More censoring schemes have since been proposed, and popular, attractive ones are the progressive censoring schemes, which remove surviving test units at every failure time rather than only at the end of the experiment. Ref. [12] gave details about progressive censoring schemes. Progressive censoring schemes can be classified as progressively type-I and progressively type-II. The progressively type-I censoring scheme has been used in multiple lifetime models, as in Refs. [13,14,15]. Many scholars have studied the progressively type-II censoring scheme as well. Ref. [16] first applied it to the generalized Gamma distribution. Refs. [17,18] discussed different distributions under the progressively type-II censoring scheme.
The censoring schemes mentioned above are all used in one-sample problems. In practice, we often need to consider two or more samples from different assembly lines. The joint progressively type-II censoring scheme is quite useful for comparing the lifetimes of products from different assembly lines and has received a lot of attention in recent years. Ref. [19] first introduced the joint progressively type-II censoring scheme to study two populations from different exponential distributions. Ref. [20] extended it to multiple exponential populations and provided statistical inference. In the same year, the authors proposed a new two-sample progressively type-II censoring scheme [21] to study two populations more conveniently. Recently, Ref. [22] discussed estimation for the generalized exponential distribution based on joint progressively type-II censored data. Ref. [23] estimated the two unknown parameters of the inverted exponentiated Rayleigh distribution based on progressively censored data using the pivotal quantity method. In Ref. [24], the reliability of the stress-strength model was derived for the probability $P(T<X<Z)$ that a component of strength $X$ lies between two stresses $T$ and $Z$, where the three variables follow IERDs with different unknown shape parameters and a common known scale parameter.
The joint progressively type-II censoring (JPC) scheme can be briefly described as follows. Suppose sample A and sample B are taken from two populations, with sample A containing $m$ units and sample B containing $n$ units. The two samples are merged and placed on a life testing experiment. Let $k$ be the total number of failures to be observed, and let $r_1,\dots,r_k$ be the numbers of units withdrawn at the failure times, which satisfy $\sum_{i=1}^{k}(r_i+1)=m+n$. At the time of the first failure $w_1$, $r_1$ units are intentionally withdrawn from the remaining units; these $r_1$ units consist of $s_1$ units withdrawn from sample A and $t_1$ units withdrawn from sample B. Similarly, at the time of the $i$-th failure $w_i$ ($i=2,3,\dots,k-1$), $r_i$ units are withdrawn from the remaining $m+n-i-\sum_{l=1}^{i-1}r_l$ units; these $r_i$ units consist of $s_i$ units from sample A and $t_i$ units from sample B. Here $r_i$ is prefixed, while $s_i$ and $t_i$ are random but satisfy $s_i+t_i=r_i$. The experiment terminates at the $k$-th failure time, when all remaining surviving units are withdrawn. In addition, let $z_i=1$ (or 0) if the $i$-th failure comes from sample A (or sample B). Under this scheme, the observed censored sample can be expressed by the triples $((w_1,s_1,z_1),\dots,(w_k,s_k,z_k))$. Let $k_1=\sum_{i=1}^{k}z_i$ be the total number of failures from sample A, and $k_2=\sum_{i=1}^{k}(1-z_i)$ the total number of failures from sample B. Figure 2 below shows the scheme.
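The withdrawal mechanism just described can be simulated directly. The following Python sketch (our own illustration; `jpc_sample` is a hypothetical helper, not from the paper) merges the two samples, records each failure time $w_i$ together with $s_i$ and $z_i$, and withdraws the prefixed $r_i$ survivors at random:

```python
import math
import random

def jpc_sample(m, n, k, r, theta1, theta2, lam, seed=0):
    """Simulate one joint progressively type-II censored data set.

    r = (r_1, ..., r_k) is prefixed and must satisfy sum(r_i + 1) = m + n.
    Returns the observed triples [(w_i, s_i, z_i)] of the scheme.
    """
    assert sum(ri + 1 for ri in r) == m + n
    rng = random.Random(seed)

    def draw(theta):
        # inverse-CDF sampling from IERD(theta, lam)
        u = rng.random()
        return math.sqrt(-lam / math.log(1.0 - (1.0 - u) ** (1.0 / theta)))

    # surviving units, tagged with 1 for sample A and 0 for sample B
    alive = [(draw(theta1), 1) for _ in range(m)] + [(draw(theta2), 0) for _ in range(n)]
    observed = []
    for i in range(k):
        alive.sort()                         # the smallest remaining lifetime fails next
        w_i, z_i = alive.pop(0)
        withdrawn = rng.sample(alive, r[i])  # r_i random survivors are withdrawn
        for unit in withdrawn:
            alive.remove(unit)               # lifetimes are distinct almost surely
        s_i = sum(tag for _, tag in withdrawn)  # withdrawn units that came from A
        observed.append((w_i, s_i, z_i))
    return observed
```

Since $\sum_{i=1}^{k}(r_i+1)=m+n$, exactly $r_k$ units remain at the $k$-th failure, so the final withdrawal empties the experiment automatically.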
In this paper, we carry out statistical inference for two samples from the two-parameter IERD under the joint progressively type-II censoring scheme. Maximum likelihood estimation and Bayesian inference are applied to obtain point and interval estimates. The Expectation-Maximization (EM) algorithm is used to calculate the maximum likelihood estimates, which would otherwise require solving a three-dimensional optimization problem. Then, the observed information matrix is derived. The Bootstrap-p and Bootstrap-t methods are adopted to compute confidence intervals; in the Bootstrap-t method, the observed information matrix is essential. For the Bayesian inference, non-informative and informative priors are considered. With these two priors, we obtain Bayesian estimates under both the square loss function and the linex loss function, using the importance sampling technique to obtain the numerical solution. Monte Carlo simulation and real data analysis are performed to compare the performance of the different methods.
The rest of the paper is arranged as follows. In Section 2, the EM algorithm is proposed to derive the MLEs. In Section 3, we calculate the observed information matrix based on the missing value principle. Then, Bootstrap methods are used to obtain confidence intervals in Section 4. Bayesian inference based on non-informative and informative priors is presented in Section 5. In Section 6, the methods above are compared through Monte Carlo simulation and data analysis.

2. Maximum Likelihood Estimators and EM Algorithm

Suppose sample A has $m$ units whose lifetimes independently follow IERD($\theta_1,\lambda$), and sample B has $n$ units whose lifetimes independently follow IERD($\theta_2,\lambda$). According to the JPC scheme, we observe the data $((w_1,s_1,z_1),\dots,(w_k,s_k,z_k))$ for prefixed $r_1,\dots,r_k$. Therefore, the likelihood function, without the normalizing constant, can be written as
$$L(\theta_1,\theta_2,\lambda\,|\,data)=\prod_{w_i\in S}f(w_i;\theta_1,\lambda)\prod_{w_j\in T}f(w_j;\theta_2,\lambda)\prod_{l=1}^{k}\left[1-F(w_l;\theta_1,\lambda)\right]^{s_l}\left[1-F(w_l;\theta_2,\lambda)\right]^{t_l},$$
where $s_l+t_l=r_l$ for $l=1,\dots,k$, and $S$ and $T$ denote the sets of failure times from sample A and sample B, respectively. According to (1) and (2), the likelihood function becomes
$$L(\theta_1,\theta_2,\lambda\,|\,data)=(2\lambda)^k\theta_1^{k_1}\theta_2^{k_2}\prod_{i=1}^{k}w_i^{-3}e^{-\lambda w_i^{-2}}\left(1-e^{-\lambda w_i^{-2}}\right)^{z_i\theta_1+(1-z_i)\theta_2-1}\times\prod_{i=1}^{k}\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_1 s_i}\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_2 t_i},$$
where $k_1=\sum_{i=1}^{k}z_i$ and $k_2=\sum_{i=1}^{k}(1-z_i)=k-k_1$.
If $k_1=0$ and $k_2=k$, the likelihood reduces to the following form:
$$L(\theta_1,\theta_2,\lambda\,|\,data)=(2\theta_2\lambda)^k\prod_{i=1}^{k}w_i^{-3}e^{-\lambda w_i^{-2}}\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_2-1}\prod_{i=1}^{k}\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_1 s_i}\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_2 t_i}.$$
When $s_i=0$, $(1-e^{-\lambda w_i^{-2}})^{\theta_1 s_i}=1$. When $s_i\neq 0$, $(1-e^{-\lambda w_i^{-2}})^{\theta_1 s_i}$ is a strictly decreasing function of $\theta_1$ for fixed $\lambda$, and it decreases to 0. So when $k_1=0$, $L(\theta_1,\theta_2,\lambda\,|\,data)$ is a strictly decreasing function of $\theta_1$ for fixed $\theta_2$ and $\lambda$, which implies that the maximum likelihood estimators (MLEs) do not exist. For $k_2=0$, the situation is similar. Thus, we assume $k_1>0$ and $k_2>0$ to avoid these trivial cases.
The log-likelihood function can be expressed as
$$\ln L(\theta_1,\theta_2,\lambda\,|\,data)=k\ln(2\lambda)+k_1\ln\theta_1+k_2\ln\theta_2+\sum_{i=1}^{k}\left[-3\ln w_i-\lambda w_i^{-2}+\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]+\sum_{i=1}^{k}\left[\theta_1 s_i\ln\left(1-e^{-\lambda w_i^{-2}}\right)+\theta_2 t_i\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right].$$
Taking the partial derivatives of $\ln L(\theta_1,\theta_2,\lambda\,|\,data)$ and setting them equal to 0, the roots of the resulting equations are the maximum likelihood estimates of $(\theta_1,\theta_2,\lambda)$. However, the equations are non-linear, and it is infeasible to solve them directly. The Newton-Raphson method inevitably requires computing second partial derivatives, and the computation is cumbersome.

EM Algorithm

In this subsection, we use the EM algorithm to obtain the MLEs. Ref. [25] proposed this method for maximum likelihood estimation from incomplete data, and Ref. [26] introduced its application to a generalized partial credit model. Here, the observed data are available, while the missing data are the lifetimes of the withdrawn units. The complete data of the experiment consist of the observed data $((w_1,s_1,z_1),\dots,(w_k,s_k,z_k))$ and the missing data. At the time of the $i$-th failure, $s_i$ units are withdrawn from sample A ($i=1,2,\dots,k$); denote their lifetimes by $(u_{i1},u_{i2},\dots,u_{is_i})$. Similarly, $t_i$ units are withdrawn from sample B; denote their lifetimes by $(v_{i1},v_{i2},\dots,v_{it_i})$. The missing data are $((u_{11},\dots,u_{1s_1}),\dots,(u_{k1},\dots,u_{ks_k}),(v_{11},\dots,v_{1t_1}),\dots,(v_{k1},\dots,v_{kt_k}))$, written compactly as $(U_1,\dots,U_k,V_1,\dots,V_k)=(U,V)$. The complete data $data^*$ (say) are $((w_1,s_1,z_1),\dots,(w_k,s_k,z_k),U,V)$. The log-likelihood function for the complete data is obtained as
$$\ln L_c(\theta_1,\theta_2,\lambda\,|\,data^*)=(m+n)\ln(2\lambda)+m\ln\theta_1+n\ln\theta_2-3\sum_{i=1}^{k}\left[\sum_{j=1}^{s_i}\ln u_{ij}+\sum_{l=1}^{t_i}\ln v_{il}+\ln w_i\right]-\lambda\sum_{i=1}^{k}\left[\sum_{j=1}^{s_i}u_{ij}^{-2}+\sum_{l=1}^{t_i}v_{il}^{-2}+w_i^{-2}\right]+(\theta_1-1)\sum_{i=1}^{k}\sum_{j=1}^{s_i}\ln\left(1-e^{-\lambda u_{ij}^{-2}}\right)+(\theta_2-1)\sum_{i=1}^{k}\sum_{l=1}^{t_i}\ln\left(1-e^{-\lambda v_{il}^{-2}}\right)+\sum_{i=1}^{k}\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda w_i^{-2}}\right).$$
The EM algorithm is divided into two steps. The pseudo log-likelihood function at the ‘E’-step is given by:
$$l_c(\theta_1,\theta_2,\lambda\,|\,data^*)=(m+n)\ln(2\lambda)+m\ln\theta_1+n\ln\theta_2-3\sum_{i=1}^{k}\left[s_i E(\ln U_i\,|\,U_i>w_i)+t_i E(\ln V_i\,|\,V_i>w_i)\right]-3\sum_{i=1}^{k}\ln w_i-\lambda\sum_{i=1}^{k}\left[s_i E(U_i^{-2}\,|\,U_i>w_i)+t_i E(V_i^{-2}\,|\,V_i>w_i)+w_i^{-2}\right]+(\theta_1-1)\sum_{i=1}^{k}s_i E\left[\ln\left(1-e^{-\lambda U_i^{-2}}\right)\,\middle|\,U_i>w_i\right]+(\theta_2-1)\sum_{i=1}^{k}t_i E\left[\ln\left(1-e^{-\lambda V_i^{-2}}\right)\,\middle|\,V_i>w_i\right]+\sum_{i=1}^{k}\left(z_i\theta_1+(1-z_i)\theta_2-1\right)\ln\left(1-e^{-\lambda w_i^{-2}}\right).\tag{8}$$
The conditional pdfs of $U_{ij}$ and $V_{il}$ are expressed, respectively, as (see [27])
$$f_{U_{ij}|(W_1,S_1,Z_1),\dots,(W_k,S_k,Z_k)}(u_{ij}\,|\,(w_1,s_1,z_1),\dots,(w_k,s_k,z_k))=f_{U_{ij}|W_i}(u_{ij}\,|\,w_i)=\frac{f_{IERD}(u_{ij};\theta_1,\lambda)}{1-F_{IERD}(w_i;\theta_1,\lambda)},\quad u_{ij}>w_i,$$
$$f_{V_{il}|(W_1,S_1,Z_1),\dots,(W_k,S_k,Z_k)}(v_{il}\,|\,(w_1,s_1,z_1),\dots,(w_k,s_k,z_k))=f_{V_{il}|W_i}(v_{il}\,|\,w_i)=\frac{f_{IERD}(v_{il};\theta_2,\lambda)}{1-F_{IERD}(w_i;\theta_2,\lambda)},\quad v_{il}>w_i,$$
for $i=1,\dots,k$.
Then, the following formulas are obtained:
$$E(\ln U_{ij}\,|\,U_{ij}>w_i)=\int_{w_i}^{\infty}\frac{2\theta_1\lambda\,u_{ij}^{-3}\ln u_{ij}\;e^{-\lambda u_{ij}^{-2}}\left(1-e^{-\lambda u_{ij}^{-2}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_1}}\,du_{ij},$$
$$E(U_{ij}^{-2}\,|\,U_{ij}>w_i)=\int_{w_i}^{\infty}\frac{2\theta_1\lambda\,u_{ij}^{-5}\;e^{-\lambda u_{ij}^{-2}}\left(1-e^{-\lambda u_{ij}^{-2}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_1}}\,du_{ij},$$
$$E\left[\ln\left(1-e^{-\lambda U_{ij}^{-2}}\right)\,\middle|\,U_{ij}>w_i\right]=\int_{w_i}^{\infty}\frac{2\theta_1\lambda\,u_{ij}^{-3}\ln\left(1-e^{-\lambda u_{ij}^{-2}}\right)e^{-\lambda u_{ij}^{-2}}\left(1-e^{-\lambda u_{ij}^{-2}}\right)^{\theta_1-1}}{\left(1-e^{-\lambda w_i^{-2}}\right)^{\theta_1}}\,du_{ij}.$$
The expectations related to the functions of $V_{il}$ are similar and omitted here.
In the ‘M’ step, we calculate the estimates of $\theta_1$, $\theta_2$ and $\lambda$ that maximize the pseudo log-likelihood function through finitely many iterations. Taking partial derivatives of (8) gives the expressions of $\theta_1$ and $\theta_2$ in terms of $\lambda$:
$$\hat\theta_1(\lambda)=-\frac{m}{\sum_{i=1}^{k}s_i E\left[\ln\left(1-e^{-\lambda U_i^{-2}}\right)\,\middle|\,U_i>w_i\right]+\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda w_i^{-2}}\right)},$$
$$\hat\theta_2(\lambda)=-\frac{n}{\sum_{i=1}^{k}t_i E\left[\ln\left(1-e^{-\lambda V_i^{-2}}\right)\,\middle|\,V_i>w_i\right]+\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)}.$$
Note that each $\ln(1-e^{-\lambda w_i^{-2}})<0$, so both denominators are negative and the estimates are positive.
Equation (8) can then be rewritten as a function of $\lambda$ alone, and the problem reduces to a one-dimensional optimization problem. At the $q$-th iteration ($q=1,2,\dots$), let $(\theta_1^{(q)},\theta_2^{(q)},\lambda^{(q)})$ be the estimates of $(\theta_1,\theta_2,\lambda)$. Plugging $(\theta_1^{(q-1)},\theta_2^{(q-1)})$ into (8) and maximizing the function yields $\lambda^{(q)}$. For fixed $\theta_1^{(q-1)}$, $\theta_2^{(q-1)}$ and $\lambda^{(q)}$, $\theta_1^{(q)}$ and $\theta_2^{(q)}$ can be derived using the formulas below:
$$\theta_1^{(q)}=-\frac{m}{\sum_{i=1}^{k}s_i E_{(\theta_1^{(q-1)},\theta_2^{(q-1)},\lambda^{(q)})}\left[\ln\left(1-e^{-\lambda^{(q)}U_i^{-2}}\right)\,\middle|\,U_i>w_i\right]+\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda^{(q)}w_i^{-2}}\right)},$$
$$\theta_2^{(q)}=-\frac{n}{\sum_{i=1}^{k}t_i E_{(\theta_1^{(q-1)},\theta_2^{(q-1)},\lambda^{(q)})}\left[\ln\left(1-e^{-\lambda^{(q)}V_i^{-2}}\right)\,\middle|\,V_i>w_i\right]+\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda^{(q)}w_i^{-2}}\right)}.$$
The iterations are stopped when $|\lambda^{(p)}-\lambda^{(p-1)}|\le 0.0001$, $|\theta_1^{(p)}-\theta_1^{(p-1)}|\le 0.0001$ and $|\theta_2^{(p)}-\theta_2^{(p-1)}|\le 0.0001$; then $(\theta_1^{(p)},\theta_2^{(p)},\lambda^{(p)})$ are taken as the maximum likelihood estimates of $(\theta_1,\theta_2,\lambda)$.
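To make the ‘M’ step concrete, the $\theta$-updates for a fixed $\lambda$ can be sketched in Python. This is our own sketch, not the authors' code: it relies on the observation that, by the probability integral transform applied to the conditional survival function, $E[\ln(1-e^{-\lambda U^{-2}})\,|\,U>w]=\ln(1-e^{-\lambda w^{-2}})-1/\theta$ under IERD($\theta,\lambda$), so the conditional expectations needed here have closed forms; the one-dimensional maximization over $\lambda$ is not shown. The data are assumed to be stored as quadruples $(w_i,s_i,t_i,z_i)$ with $t_i=r_i-s_i$.

```python
import math

def em_theta_step(data, lam, th1_old, th2_old):
    """One M-step update of (theta1, theta2) for a fixed lambda.

    data = [(w_i, s_i, t_i, z_i)].  The conditional expectation
    E[ln(1 - e^{-lam U^-2}) | U > w] under IERD(theta_old, lam) equals
    ln(1 - e^{-lam w^-2}) - 1/theta_old (probability integral transform),
    so the E-step quantities needed here are available in closed form.
    """
    k1 = sum(z for _, _, _, z in data)
    k2 = len(data) - k1
    m = k1 + sum(s for _, s, _, _ in data)   # total units in sample A
    n = k2 + sum(t for _, _, t, _ in data)   # total units in sample B
    den1 = den2 = 0.0
    for w, s, t, z in data:
        lg = math.log(1.0 - math.exp(-lam / w ** 2))   # always negative
        den1 += s * (lg - 1.0 / th1_old) + z * lg
        den2 += t * (lg - 1.0 / th2_old) + (1 - z) * lg
    # denominators are negative, so the minus signs give positive estimates
    return -m / den1, -n / den2
```

In a full implementation this update would alternate with a one-dimensional search for $\lambda^{(q)}$ until the stopping thresholds above are met.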

3. Observed Fisher Information Matrix

In this section, the observed information matrix is calculated and will be used in Section 4. According to the idea of [28], we have
$$I_o(\theta_1,\theta_2,\lambda)=m I_1(\theta_1,\theta_2,\lambda)+n I_2(\theta_1,\theta_2,\lambda)-\left[\sum_{i=1}^{k}\sum_{j=1}^{s_i}I_{u_{ij}|w_i}(\theta_1,\theta_2,\lambda)+\sum_{i=1}^{k}\sum_{l=1}^{t_i}I_{v_{il}|w_i}(\theta_1,\theta_2,\lambda)\right],$$
and the notations below:
$I_c$: the observed information matrix of the complete data;
$I_1$: the observed information matrix of one unit from sample A;
$I_2$: the observed information matrix of one unit from sample B;
$I_o$: the observed information matrix of the observed data;
$I_m$: the observed information matrix of the missing data.
For sample A,
$$I_1=\begin{pmatrix}I_{11}^{(1)}&0&I_{13}^{(1)}\\0&0&0\\I_{31}^{(1)}&0&I_{33}^{(1)}\end{pmatrix},$$
where
$$I_{11}^{(1)}=-E\left(\frac{\partial^2\ln f_{IERD}(x;\theta_1,\lambda)}{\partial\theta_1^2}\right),\quad I_{13}^{(1)}=I_{31}^{(1)}=-E\left(\frac{\partial^2\ln f_{IERD}(x;\theta_1,\lambda)}{\partial\theta_1\,\partial\lambda}\right),\quad I_{33}^{(1)}=-E\left(\frac{\partial^2\ln f_{IERD}(x;\theta_1,\lambda)}{\partial\lambda^2}\right).$$
The missing-data information matrices can be obtained as follows:
$$I_{u_{ij}|w_i}(\theta_1,\theta_2,\lambda)=\begin{pmatrix}I_{11}^{u}&0&I_{13}^{u}\\0&0&0\\I_{31}^{u}&0&I_{33}^{u}\end{pmatrix},$$
where
$$I_{11}^{u}=-E\left(\frac{\partial^2\ln f_{CPF}(x;\theta_1,\lambda)}{\partial\theta_1^2}\right),\quad I_{13}^{u}=I_{31}^{u}=-E\left(\frac{\partial^2\ln f_{CPF}(x;\theta_1,\lambda)}{\partial\theta_1\,\partial\lambda}\right),\quad I_{33}^{u}=-E\left(\frac{\partial^2\ln f_{CPF}(x;\theta_1,\lambda)}{\partial\lambda^2}\right).$$
Here, $f_{CPF}$ denotes the conditional pdf. Expressions for all the expectations related to $\theta_1$ are given in Appendix A. The corresponding expressions for sample B are similar and omitted.
After obtaining the observed information matrix, for every fixed $(\theta_1,\theta_2,\lambda)$, the covariance matrix of the estimators is the inverse of the observed information matrix.
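As a numerical cross-check on this analytical construction, the observed information can also be approximated by differentiating the observed-data log-likelihood of Section 2 directly. The sketch below (our own illustration, not the paper's method) computes the negative Hessian by central differences; inverting the resulting 3×3 matrix gives the covariance matrix. The data format $(w_i,s_i,t_i,z_i)$ is an assumption of the sketch.

```python
import math

def log_likelihood(params, data):
    """Observed-data log-likelihood (without the normalizing constant).
    params = (theta1, theta2, lam); data = [(w_i, s_i, t_i, z_i)]."""
    th1, th2, lam = params
    k1 = sum(z for _, _, _, z in data)
    k2 = len(data) - k1
    ll = len(data) * math.log(2 * lam) + k1 * math.log(th1) + k2 * math.log(th2)
    for w, s, t, z in data:
        lg = math.log(1.0 - math.exp(-lam / w ** 2))
        ll += -3 * math.log(w) - lam / w ** 2 + (z * th1 + (1 - z) * th2 - 1) * lg
        ll += (th1 * s + th2 * t) * lg
    return ll

def observed_information(params, data, h=1e-4):
    """Negative Hessian of the log-likelihood via central differences."""
    def f(p):
        return log_likelihood(p, data)
    n = len(params)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pp = list(params); pp[i] += h; pp[j] += h; fpp = f(pp)
            pm = list(params); pm[i] += h; pm[j] -= h; fpm = f(pm)
            mp = list(params); mp[i] -= h; mp[j] += h; fmp = f(mp)
            mm = list(params); mm[i] -= h; mm[j] -= h; fmm = f(mm)
            H[i][j] = -(fpp - fpm - fmp + fmm) / (4 * h * h)
    return H
```

The diagonal entries in the $\theta$-directions equal $k_1/\theta_1^2$ and $k_2/\theta_2^2$ exactly, which provides a simple sanity check on the finite-difference step.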

4. Bootstrap Methods

In this section, the Bootstrap methods are introduced to construct confidence intervals. The algorithms of Bootstrap-p method and Bootstrap-t method are respectively given in Algorithms 1 and 2.
  • Bootstrap-p method:
    Algorithm 1 The algorithm of the Bootstrap-p method.
    Step 1: Generate two random samples from IERD($\theta_1,\lambda$) and IERD($\theta_2,\lambda$), respectively, and apply the JPC scheme to obtain the observed data.
    Step 2: Calculate the MLEs, say $(\hat\theta_1,\hat\theta_2,\hat\lambda)$.
    Step 3: Use $(\hat\theta_1,\hat\theta_2,\hat\lambda)$ to generate two new samples, respectively, and censor them with the same scheme.
    Step 4: Obtain new MLEs $(\hat\theta_1^{(i)},\hat\theta_2^{(i)},\hat\lambda^{(i)})$.
    Step 5: Repeat Steps 3 and 4 $N$ times.
    Step 6: Collect the results $((\hat\theta_1^{(1)},\hat\theta_2^{(1)},\hat\lambda^{(1)}),\dots,(\hat\theta_1^{(N)},\hat\theta_2^{(N)},\hat\lambda^{(N)}))$.
    Step 7: Sort $(\hat\theta_1^{(1)},\dots,\hat\theta_1^{(N)})$, $(\hat\theta_2^{(1)},\dots,\hat\theta_2^{(N)})$ and $(\hat\lambda^{(1)},\dots,\hat\lambda^{(N)})$ in ascending order as
    $(\hat\theta_{1,(1)},\dots,\hat\theta_{1,(N)})$, $(\hat\theta_{2,(1)},\dots,\hat\theta_{2,(N)})$ and $(\hat\lambda_{(1)},\dots,\hat\lambda_{(N)})$.
    Step 8: The $100(1-\alpha)\%$ symmetric Bootstrap-p confidence intervals for $(\theta_1,\theta_2,\lambda)$ are
    $\left[\hat\theta_{1,(lb)},\hat\theta_{1,(hb)}\right]$, $\left[\hat\theta_{2,(lb)},\hat\theta_{2,(hb)}\right]$ and $\left[\hat\lambda_{(lb)},\hat\lambda_{(hb)}\right]$,
    where $lb=\lfloor\frac{\alpha}{2}N\rfloor$ and $hb=\lfloor(1-\frac{\alpha}{2})N\rfloor$. Here $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$.
  • Bootstrap-t method:
    Algorithm 2 The algorithm of the Bootstrap-t method.
    Steps 1 to 5 are the same as those in Algorithm 1.
    Step 6: Collect the results $((\hat\theta_1^{(1)},\hat\theta_2^{(1)},\hat\lambda^{(1)}),\dots,(\hat\theta_1^{(N)},\hat\theta_2^{(N)},\hat\lambda^{(N)}))$ and
    $(Cov^{(1)},Cov^{(2)},\dots,Cov^{(N)})$, where $Cov^{(i)}$ is given by
    $$Cov^{(i)}=I_o^{-1}(\hat\theta_1^{(i)},\hat\theta_2^{(i)},\hat\lambda^{(i)}),$$
    and $I_o$ is obtained in Section 3.
    Step 7: $Var(\hat\theta_1^{(i)})$, $Var(\hat\theta_2^{(i)})$ and $Var(\hat\lambda^{(i)})$, the diagonal elements of $Cov^{(i)}$, can now be obtained. Define
    $$T_j^{(i)}=\frac{\hat\theta_j^{(i)}-\hat\theta_j}{\sqrt{Var(\hat\theta_j^{(i)})}},\quad j=1,2,$$
    and
    $$T_\lambda^{(i)}=\frac{\hat\lambda^{(i)}-\hat\lambda}{\sqrt{Var(\hat\lambda^{(i)})}}.$$
    Step 8: Sort $(T_1^{(1)},\dots,T_1^{(N)})$, $(T_2^{(1)},\dots,T_2^{(N)})$ and $(T_\lambda^{(1)},\dots,T_\lambda^{(N)})$ in ascending order as
    $(T_{1,(1)},\dots,T_{1,(N)})$, $(T_{2,(1)},\dots,T_{2,(N)})$ and $(T_{\lambda,(1)},\dots,T_{\lambda,(N)})$.
    Step 9: The $100(1-\alpha)\%$ Bootstrap-t confidence intervals for $(\theta_1,\theta_2,\lambda)$ are given by
    $$\left[\hat\theta_j-\sqrt{Var(\hat\theta_j)}\,T_{j,(hb)},\ \hat\theta_j-\sqrt{Var(\hat\theta_j)}\,T_{j,(lb)}\right],\quad j=1,2,$$
    and
    $$\left[\hat\lambda-\sqrt{Var(\hat\lambda)}\,T_{\lambda,(hb)},\ \hat\lambda-\sqrt{Var(\hat\lambda)}\,T_{\lambda,(lb)}\right],$$
    where $lb=\lfloor\frac{\alpha}{2}N\rfloor$ and $hb=\lfloor(1-\frac{\alpha}{2})N\rfloor$. Here $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$.
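Given the $N$ bootstrap replicates (and, for the Bootstrap-t method, the standard errors taken from the inverted information matrices), the two interval constructions can be sketched as follows; the helper names are our own:

```python
import math

def bootstrap_p_interval(estimates, alpha=0.10):
    """Percentile (Bootstrap-p) interval: sort the N bootstrap estimates and
    read off the order statistics at lb = floor(alpha/2 * N) and
    hb = floor((1 - alpha/2) * N).  A tiny epsilon guards the floor against
    floating-point representation of alpha/2."""
    est = sorted(estimates)
    N = len(est)
    lb = max(math.floor(alpha / 2 * N + 1e-9), 1)
    hb = min(math.floor((1 - alpha / 2) * N + 1e-9), N)
    return est[lb - 1], est[hb - 1]          # 1-based order statistics

def bootstrap_t_interval(theta_hat, se_hat, boot_estimates, boot_ses, alpha=0.10):
    """Bootstrap-t interval: studentize each replicate, sort the pivots,
    and re-center them around the original estimate."""
    T = sorted((b - theta_hat) / se for b, se in zip(boot_estimates, boot_ses))
    N = len(T)
    lb = max(math.floor(alpha / 2 * N + 1e-9), 1)
    hb = min(math.floor((1 - alpha / 2) * N + 1e-9), N)
    return theta_hat - se_hat * T[hb - 1], theta_hat - se_hat * T[lb - 1]
```

With $N=1000$ and $\alpha=0.10$, the percentile interval uses the 50th and 950th order statistics, matching Step 8 of Algorithm 1.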

5. Bayesian Inference

In this section, we study Bayesian inference for the three parameters based on a non-informative prior and an informative prior. The estimates under a symmetric loss function (square loss) and an asymmetric loss function (linex loss) are derived.
Notice that all the parameters range over $(0,+\infty)$, so Gamma distributions are chosen as the prior distributions. The Gamma distribution $Ga(\alpha,\beta)$ has density
$$f(x\,|\,\alpha,\beta)=\frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)},\quad x\ge 0,\ \alpha>0,\ \beta>0.$$
It is assumed that $\theta_1$, $\theta_2$ and $\lambda$ are independent a priori and follow different Gamma distributions: $\theta_1\sim Ga(a_1,b_1)$, $\theta_2\sim Ga(a_2,b_2)$ and $\lambda\sim Ga(c,d)$, with hyperparameters $a_1,b_1,a_2,b_2,c,d>0$. These priors are conditionally conjugate.
The joint posterior density is given by
$$\begin{aligned}\pi(\theta_1,\theta_2,\lambda\,|\,data)&\propto\pi(\theta_1,\theta_2,\lambda)L(\theta_1,\theta_2,\lambda\,|\,data)\\&\propto\theta_1^{a_1+k_1-1}e^{-\left[b_1-\sum_{i=1}^{k}(z_i+s_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]\theta_1}\times\theta_2^{a_2+k_2-1}e^{-\left[b_2-\sum_{i=1}^{k}(1-z_i+t_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]\theta_2}\\&\quad\times\lambda^{k+c-1}e^{-\left(d+\sum_{i=1}^{k}w_i^{-2}\right)\lambda}\times\prod_{i=1}^{k}\left(1-e^{-\lambda w_i^{-2}}\right)^{-1}\\&\propto\pi(\theta_1\,|\,\lambda,data)\times\pi(\theta_2\,|\,\lambda,data)\times\pi(\lambda\,|\,data),\end{aligned}\tag{18}$$
where $\pi(\theta_1,\theta_2,\lambda)$ denotes the joint prior density.
Observe that in (18) the first two factors are Gamma density kernels. Indeed, since $\ln(1-e^{-\lambda w_i^{-2}})<0$, checking the shape and rate parameters confirms that $a_1+k_1>0$, $a_2+k_2>0$, $b_1-\sum_{i=1}^{k}(z_i+s_i)\ln(1-e^{-\lambda w_i^{-2}})>0$ and $b_2-\sum_{i=1}^{k}(1-z_i+t_i)\ln(1-e^{-\lambda w_i^{-2}})>0$.
Some facts are obtained:
$$\pi(\theta_1\,|\,\lambda,data)=Ga\left(a_1+k_1,\ b_1-\sum_{i=1}^{k}(s_i+z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right),\tag{19}$$
$$\pi(\theta_2\,|\,\lambda,data)=Ga\left(a_2+k_2,\ b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right),\tag{20}$$
$$\pi(\lambda\,|\,data)\propto\lambda^{c+k-1}e^{-\left(d+\sum_{i=1}^{k}w_i^{-2}\right)\lambda}\prod_{i=1}^{k}\left(1-e^{-\lambda w_i^{-2}}\right)^{-1}\left[b_1-\sum_{i=1}^{k}(s_i+z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]^{-(a_1+k_1)}\left[b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]^{-(a_2+k_2)}.\tag{21}$$
The distributions of θ 1 and θ 2 both depend on λ . Generating the random variates of λ is the key. The well-known log-concave method is considered and discussed first.
$\ln\pi(\lambda\,|\,data)$ can be written as
$$\ln\pi(\lambda\,|\,data)=\ln C+(k+c-1)\ln\lambda-\left(d+\sum_{i=1}^{k}w_i^{-2}\right)\lambda-\sum_{i=1}^{k}\ln\left(1-e^{-\lambda w_i^{-2}}\right)-(a_1+k_1)\ln\left[b_1-\sum_{i=1}^{k}(s_i+z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]-(a_2+k_2)\ln\left[b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right],$$
where $C$ denotes the normalization constant of $\pi(\lambda\,|\,data)$. Writing $A(\lambda)=b_1-\sum_{i=1}^{k}(s_i+z_i)\ln(1-e^{-\lambda w_i^{-2}})$ and $B(\lambda)=b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln(1-e^{-\lambda w_i^{-2}})$, the second-order derivative of $\ln\pi(\lambda\,|\,data)$ is given by
$$\frac{\partial^2\ln\pi(\lambda\,|\,data)}{\partial\lambda^2}=-\frac{k+c-1}{\lambda^2}+\sum_{i=1}^{k}\frac{w_i^{-4}e^{-\lambda w_i^{-2}}}{\left(1-e^{-\lambda w_i^{-2}}\right)^2}-(a_1+k_1)\frac{A''(\lambda)A(\lambda)-\left[A'(\lambda)\right]^2}{\left[A(\lambda)\right]^2}-(a_2+k_2)\frac{B''(\lambda)B(\lambda)-\left[B'(\lambda)\right]^2}{\left[B(\lambda)\right]^2},\tag{22}$$
where $A'(\lambda)=-\sum_{i=1}^{k}(s_i+z_i)\frac{w_i^{-2}e^{-\lambda w_i^{-2}}}{1-e^{-\lambda w_i^{-2}}}$, $A''(\lambda)=\sum_{i=1}^{k}(s_i+z_i)\frac{w_i^{-4}e^{-\lambda w_i^{-2}}}{\left(1-e^{-\lambda w_i^{-2}}\right)^2}$, and $B'(\lambda)$, $B''(\lambda)$ are analogous with $s_i+z_i$ replaced by $t_i+1-z_i$.
It cannot be determined whether (22) is negative or positive, so the log-concave method is not appropriate here. The importance sampling method [29] is adopted instead. (21) is rewritten as below:
$$\pi(\lambda\,|\,data)\propto Ga\left(c+k,\ d+\sum_{i=1}^{k}w_i^{-2}\right)\times\left[b_1-\sum_{i=1}^{k}(s_i+z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]^{-(a_1+k_1)}\times\left[b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln\left(1-e^{-\lambda w_i^{-2}}\right)\right]^{-(a_2+k_2)}\prod_{i=1}^{k}\left(1-e^{-\lambda w_i^{-2}}\right)^{-1}.$$
  • Square loss function:
The square loss function is given by
$$l_S(\hat g(\Theta),g(\Theta))=\left(\hat g(\Theta)-g(\Theta)\right)^2,$$
where $\Theta$ denotes the parameter vector and $g(\Theta)$ is any function of $\Theta$. The Bayesian estimate of any function of $(\theta_1,\theta_2,\lambda)$, say $g(\theta_1,\theta_2,\lambda)$, under the square loss function is the posterior mean
$$\hat g(\theta_1,\theta_2,\lambda)=\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}g(\theta_1,\theta_2,\lambda)\,\pi(\theta_1,\theta_2,\lambda\,|\,data)\,d\theta_1\,d\theta_2\,d\lambda.$$
The following Algorithm 3 can be used to compute Bayesian estimates under the square loss function.
Algorithm 3 Bayesian estimates under the square loss function.
Step 1: Given the data $(W,S,Z)$, generate $\lambda$ from $Ga\left(k+c,\ d+\sum_{i=1}^{k}w_i^{-2}\right)$.
Step 2: For the given $\lambda$, generate $\theta_1$ and $\theta_2$ based on (19) and (20).
Step 3: Repeat Steps 1 and 2 $M$ times.
Step 4: For the generated $(\lambda^{(j)},\theta_1^{(j)},\theta_2^{(j)})$, $j=1,2,\dots,M$, calculate $g(\lambda^{(j)},\theta_1^{(j)},\theta_2^{(j)})$ and the importance weight (say $weight^*$)
$$weight_j^*=\frac{c_j}{\sum_{j'=1}^{M}c_{j'}},$$
where
$$c_j=\left[b_1-\sum_{i=1}^{k}(s_i+z_i)\ln\left(1-e^{-\lambda^{(j)}w_i^{-2}}\right)\right]^{-(a_1+k_1)}\left[b_2-\sum_{i=1}^{k}(t_i+1-z_i)\ln\left(1-e^{-\lambda^{(j)}w_i^{-2}}\right)\right]^{-(a_2+k_2)}\prod_{i=1}^{k}\left(1-e^{-\lambda^{(j)}w_i^{-2}}\right)^{-1}.$$
Step 5: The estimate of $g(\lambda,\theta_1,\theta_2)$ can be obtained by
$$\hat g(\lambda,\theta_1,\theta_2)=\sum_{j=1}^{M}weight_j^*\,g(\lambda^{(j)},\theta_1^{(j)},\theta_2^{(j)}).$$
  • Linex loss function:
The linex loss function is given by
$$l_l(\hat g(\Theta),g(\Theta))=e^{\delta\left(\hat g(\Theta)-g(\Theta)\right)}-\delta\left(\hat g(\Theta)-g(\Theta)\right)-1,\quad\delta\ \text{a nonzero constant}.$$
The Bayesian estimate of any function of $(\theta_1,\theta_2,\lambda)$, say $g(\theta_1,\theta_2,\lambda)$, under the linex loss function is
$$\hat g(\theta_1,\theta_2,\lambda)=-\frac{1}{\delta}\ln\left[\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}e^{-\delta g(\theta_1,\theta_2,\lambda)}\,\pi(\theta_1,\theta_2,\lambda\,|\,data)\,d\theta_1\,d\theta_2\,d\lambda\right].$$
The following Algorithm 4 can be used to compute Bayesian estimates under the linex loss function.
Algorithm 4 Bayesian estimates under the linex loss function.
Steps 1 to 4 are the same as those in Algorithm 3.
Step 5: The estimate of $g(\lambda,\theta_1,\theta_2)$ can be obtained by
$$\hat g(\lambda,\theta_1,\theta_2)=-\frac{1}{\delta}\ln\left[\sum_{j=1}^{M}weight_j^*\,e^{-\delta g(\lambda^{(j)},\theta_1^{(j)},\theta_2^{(j)})}\right].$$
To compute $100(1-\alpha)\%$ symmetric credible intervals of $g(\lambda,\theta_1,\theta_2)$, where $\alpha$ is the significance level, the values $g(\lambda^{(1)},\theta_1^{(1)},\theta_2^{(1)}),g(\lambda^{(2)},\theta_1^{(2)},\theta_2^{(2)}),\dots,g(\lambda^{(M)},\theta_1^{(M)},\theta_2^{(M)})$ are sorted in ascending order as $(g_{(1)},g_{(2)},\dots,g_{(M)})$ with corresponding weights $(weight_{(1)},weight_{(2)},\dots,weight_{(M)})$.
Define $sumc(\rho)=\sum_{l=1}^{\rho}weight_{(l)}$ for $\rho=1,2,\dots,M$. When $sumc(\rho-1)<\frac{\alpha}{2}$ and $sumc(\rho)\ge\frac{\alpha}{2}$, record $g_{(\rho)}$. When $sumc(\eta)<1-\frac{\alpha}{2}$ and $sumc(\eta+1)\ge 1-\frac{\alpha}{2}$, record $g_{(\eta)}$. Then a $100(1-\alpha)\%$ credible interval of $g(\lambda,\theta_1,\theta_2)$ is
$$\left[g_{(\rho)},\ g_{(\eta)}\right].$$
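Putting Algorithms 3 and 4 and the credible-interval construction together, the importance sampling loop can be sketched in Python. This is our own illustration: the data are assumed stored as quadruples $(w_i,s_i,t_i,z_i)$, and the function names are hypothetical:

```python
import math
import random

def importance_sample(data, a1, b1, a2, b2, c, d, M=2000, seed=0):
    """Draw (lambda, theta1, theta2) from the proposal and compute the
    normalized importance weights (Steps 1-4 of Algorithm 3)."""
    rng = random.Random(seed)
    k = len(data)
    k1 = sum(z for _, _, _, z in data)
    rate_lam = d + sum(1.0 / w ** 2 for w, _, _, _ in data)
    draws, weights = [], []
    for _ in range(M):
        lam = rng.gammavariate(c + k, 1.0 / rate_lam)     # Gamma proposal for lambda
        B1 = b1 - sum((s + z) * math.log(1.0 - math.exp(-lam / w ** 2))
                      for w, s, t, z in data)
        B2 = b2 - sum((t + 1 - z) * math.log(1.0 - math.exp(-lam / w ** 2))
                      for w, s, t, z in data)
        th1 = rng.gammavariate(a1 + k1, 1.0 / B1)         # theta1 | lambda, as in (19)
        th2 = rng.gammavariate(a2 + k - k1, 1.0 / B2)     # theta2 | lambda, as in (20)
        c_j = (B1 ** -(a1 + k1)) * (B2 ** -(a2 + k - k1)) * \
              math.prod(1.0 / (1.0 - math.exp(-lam / w ** 2)) for w, _, _, _ in data)
        draws.append((lam, th1, th2))
        weights.append(c_j)
    total = sum(weights)
    return draws, [wt / total for wt in weights]

def bayes_estimate_square(draws, weights, g):
    """Step 5 of Algorithm 3: weighted posterior mean of g."""
    return sum(wt * g(*p) for p, wt in zip(draws, weights))

def credible_interval(values, weights, alpha=0.10):
    """Symmetric credible interval: scan the sorted weighted sample until the
    cumulative weight crosses alpha/2 and 1 - alpha/2."""
    pairs = sorted(zip(values, weights))
    cum, lo, hi = 0.0, pairs[0][0], pairs[-1][0]
    lo_found = False
    for v, wt in pairs:
        cum += wt
        if not lo_found and cum >= alpha / 2:
            lo, lo_found = v, True
        if cum >= 1.0 - alpha / 2:
            hi = v
            break
    return lo, hi
```

Note that $B_1$ and $B_2$ are automatically positive because each $\ln(1-e^{-\lambda w_i^{-2}})$ is negative, so the Gamma draws for $\theta_1$ and $\theta_2$ are always well defined.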

6. Simulation and Data Analysis

6.1. Numerical Simulation

In this section, different values of $m$, $n$, $k$ and $r_1,r_2,\dots,r_k$ are taken, and simulation results for the methods mentioned above are displayed. We consider different censoring schemes. For the rest of the paper, the notation $(0^{(5)},5^{(5)},0^{(10)})$ means that no units are withdrawn at the first 5 failure times, 5 units are withdrawn at each of the next 5, and no units are withdrawn at the last 10. Other notations are interpreted similarly.
Let $\theta_1=3$, $\theta_2=2$, $\lambda=2$. The average values (AVs) and mean square errors (MSEs) of the MLEs are calculated by repeating the EM algorithm 1000 times. To be more practical, both the true values and $\theta_1^0=\theta_2^0=\lambda^0=7$ are set as initial values in the EM algorithm. The results are given in Table 1 and Table 2.
In Table 1 and Table 2, the estimates of $\lambda$ are closer to the true value than those of $\theta_1$ and $\theta_2$ under the same censoring scheme. This is to be expected because $\lambda$ is shared by the two samples: with more information, $\lambda$ is estimated better. For the same sample sizes and number of failures ($k$), the results are better when the withdrawals are executed in the middle of the experiment rather than at the beginning and end. Besides, Figure 3 shows that a more dispersed withdrawal scheme, i.e., withdrawing units over several failure times rather than all at once, yields better estimates. Comparing the censoring schemes $(n=20,m=25)$ and $(n=25,m=20)$, the estimates of $\theta_2$ improve as $n$ increases, and the MSEs of $\hat\theta_2$ decrease. This is reasonable because a larger sample size $n$ provides more information about sample B. Keeping $n$ and $m$ fixed and increasing $k$ also improves the estimates. The plots of the MSEs in Figure 3 take $k=20$, $n=40$, $m=45$ as an example.
The parameters of prior distributions are a 1 = 2 , b 1 = 1 , a 2 = 1 , b 2 = 2 , c = 3 , d = 2 and linex loss constant δ = 2 . Then, the Bayes estimates for informative prior under square loss function (say θ 1 S ^ , θ 2 S ^ , λ S ^ ) and linex loss function (say θ 1 L ^ , θ 2 L ^ , λ L ^ ) are compared based on 1000 replications. The results are given in Table 3 and Table 4.
Table 3 and Table 4 show that, compared with the square loss function, the linex loss function gives bigger MSEs but AVs closer to the true values. Bayesian inference performs better than the MLEs in terms of MSEs. However, the Bayesian estimators mostly underestimate the true parameter values, whereas the MLEs do not show this pattern. In the schemes with larger sample sizes, the MLEs attain better AVs but bigger MSEs than the Bayesian estimators. The tables also reveal that the MSEs become smaller as $k$ increases. As the true value increases, all the methods mentioned above show bigger MSEs, which means that the results are more dispersed.
Next, we compare the Bayes estimates under the non-informative and informative priors. According to [30], the hyperparameters are set to $a_1=b_1=a_2=b_2=c=d=0.0001$. The patterns across schemes are similar to those mentioned above. Observing Table 5 and Table 6, we find that when the samples have more units but relatively few failures, the Bayesian estimators with informative priors perform better than those with non-informative priors. When there are relatively many failures, in other words, when there are more observed data even though the sample sizes are small, the two methods differ little. In addition, the results are closer to the true values under the square loss function. In short, Bayesian estimation with an informative prior under the square loss function performs best among the methods discussed.
Besides the point estimates, Bootstrap-p, Bootstrap-t, and Bayesian methods are used to obtain the 90% confidence/credible intervals. In Table 7 and Table 8, the average lengths (ALs) and coverage percentages (CPs) are calculated based on 1000 replications. In Bootstrap methods, boot-time is set as 1000 ( N = 1000 ). Here, IP means under informative prior density and NIP means under non-informative prior density.
Table 7 displays the ALs and CPs of confidence intervals with Bootstrap-p and Bootstrap-t methods. The contrast between Bootstrap-p and Bootstrap-t indicates that CPs are similar but ALs of Bootstrap-t are wider than those of Bootstrap-p. Therefore, the Bootstrap-p method is more appropriate to get the confidence intervals.
Table 8 shows the ALs and CPs of credible intervals under informative prior density and non-informative prior density. The contrast indicates that ALs of NIP tend to be longer than those of IP but CPs are less than those of IP. Obviously, IP performs better than NIP. Besides, for a fixed scheme, θ 2 has the best estimates of credible intervals. Figure 4 displays the contrast of CPs among different methods which also indicates that Bootstrap-p and Bayes method with informative prior are more suitable for interval estimates. Compared to Bootstrap methods, Bayesian method yields better results of credible intervals. In the condition of large k, CPs increase a lot with both the Bayesian method and Bootstrap methods. When there are sufficient units, choosing the Bayesian method with informative priors is better.
The plots of CPs are shown below (taking k = 20 , n = 40 , m = 45 as an example).

6.2. Real Data Analysis

In this part, we analyze an application to coating weights using two real data sets and apply the approaches developed in the preceding sections. The data sets come from the ALAF (formerly Aluminium Africa Limited) industry in Tanzania and contain the coating weights (mg/m²) measured by a chemical procedure on the top center side (TCS) and on the bottom center side (BCS). Each data set has 72 observations. The data were also analyzed by [11] and are listed below:
Data set 1: (TCS)
36.8, 47.2, 35.6, 36.7, 55.8, 58.7, 42.3, 37.8, 55.4, 45.2, 31.8, 48.3, 45.3, 48.5, 52.8, 45.4, 49.8,
48.2, 54.5, 50.1, 48.4, 44.2, 41.2, 47.2, 39.1, 40.7, 40.3, 41.2, 30.4, 42.8, 38.9, 34.0, 33.2, 56.8,
52.6, 40.5, 40.6, 45.8, 58.9, 28.7, 37.3, 36.8, 40.2, 58.2, 59.2, 42.8, 46.3, 61.2, 58.4, 38.5, 34.2,
41.3, 42.6, 43.1, 42.3, 54.2, 44.9, 42.8, 47.1, 38.9, 42.8, 29.4, 32.7, 40.1, 33.2, 31.6, 36.2, 33.6,
32.9, 34.5, 33.7, 39.9
Data set 2: (BCS)
45.5, 37.5, 44.3, 43.6, 47.1, 52.9, 53.6, 42.9, 40.6, 34.1, 42.6, 38.9, 35.2, 40.8, 41.8, 49.3, 38.2,
48.2, 44.0, 30.4, 62.3, 39.5, 39.6, 32.8, 48.1, 56.0, 47.9, 39.6, 44.0, 30.9, 36.6, 40.2, 50.3, 34.3,
54.6, 52.7, 44.2, 38.9, 31.5, 39.6, 43.9, 41.8, 42.8, 33.8, 40.2, 41.8, 39.6, 24.8, 28.9, 54.1, 44.1,
52.7, 51.5, 54.2, 53.1, 43.9, 40.8, 55.9, 57.2, 58.9, 40.8, 44.7, 52.4, 43.8, 44.2, 40.7, 44.0, 46.3,
41.9, 43.6, 44.9, 53.6
For convenience, the data sets are divided by 10. First, to verify that the IERD is suitable for the data sets, we fit it to each data set and perform the Kolmogorov-Smirnov (K-S) test. By computing the largest difference between the empirical cumulative distribution function and the fitted distribution function and comparing that value with the 95% critical value, we find that both data sets are fitted well. The results are shown in Table 9.
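The K-S computation just described can be sketched as follows; the IERD distribution function F(x) = 1 − (1 − e^{−λ/x²})^θ follows from the density used in this paper, and the function names and check values are our own.

```python
import math

def ierd_cdf(x, theta, lam):
    # IERD distribution function: F(x) = 1 - (1 - exp(-lam/x^2))**theta
    return 1.0 - (1.0 - math.exp(-lam / x**2)) ** theta

def ks_distance(data, cdf):
    """Largest gap between the empirical CDF and a fitted CDF:
    max_i max(i/n - F(x_(i)), F(x_(i)) - (i-1)/n) over sorted data."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Table 9 reports theta ≈ 13.18, lambda ≈ 53.30 for data set 1
# (values divided by 10); the fitted cdf can be passed in directly:
fitted = lambda x: ierd_cdf(x, 13.18, 53.30)
```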
Since the K-S distances are less than the 95% critical value, the IERD fits both data sets well. Figure 5 shows the fit for each data set. Then, the likelihood ratio test is used to check whether the scale parameters can be treated as equal, i.e., H 0 : λ 1 = λ 2 . The p-value is calculated to be 94.3%, so the null hypothesis cannot be rejected and the two scale parameters can be considered equal. Under the null hypothesis, the MLEs are θ̂ 1 = 15.55 , θ̂ 2 = 14.87 , λ̂ = 57.08 .
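The likelihood ratio test of H 0 : λ 1 = λ 2 compares twice the gap in maximized log-likelihoods with a chi-square distribution with one degree of freedom (Wilks' theorem). A minimal sketch, with purely hypothetical log-likelihood values:

```python
import math

def lr_test_pvalue_df1(loglik_full, loglik_reduced):
    """Wilks' likelihood ratio test with one degree of freedom.

    Under H0 the statistic 2*(l_full - l_reduced) is asymptotically
    chi-square(1), whose survival function is erfc(sqrt(stat / 2)).
    """
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, math.erfc(math.sqrt(stat / 2.0))

# Hypothetical maximized log-likelihoods: a tiny gap between the full
# model (lambda1 != lambda2) and the reduced model (lambda1 = lambda2)
# gives a large p-value, so H0 is not rejected.
stat, p = lr_test_pvalue_df1(-100.0, -100.0025)
```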
Using the complete data above, we generate observed data under three censoring schemes: ( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) , ( 0 ( 35 ) , 36 ( 2 ) , 0 ( 35 ) ) , and ( 36 , 0 ( 70 ) , 36 ) . The MLEs from the complete data are taken as the initial values of the EM algorithm. The AVs and MSEs of the MLEs are reported in Table 10.
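Generating a progressively type-II censored sample from complete data can be sketched as follows, using one standard construction: after the i-th observed failure, R_i of the surviving units are withdrawn at random. The function and the toy values are our own.

```python
import random

def progressive_type2_sample(lifetimes, removals):
    """Progressively type-II censor a complete sample.

    removals[i] units are withdrawn (at random) from the surviving
    pool immediately after the i-th observed failure; len(lifetimes)
    must equal len(removals) + sum(removals).
    """
    assert len(lifetimes) == len(removals) + sum(removals)
    pool = list(lifetimes)
    observed = []
    for r in removals:
        x = min(pool)            # next failure among units still on test
        observed.append(x)
        pool.remove(x)
        for unit in random.sample(pool, r):  # random withdrawal
            pool.remove(unit)
    return observed

# Toy scheme: n = 4 units, m = 2 observed failures, removals (1, 1).
random.seed(0)
obs = progressive_type2_sample([4.0, 1.0, 3.0, 2.0], removals=[1, 1])
```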
To verify the stability of the iteration, we change the initial guesses and plot the trend of the estimates. The results are shown in Figure 6. The algorithm takes 15 iterations in (a), 23 in (b) and (c), and 26 in (d). With the same initial guess of λ , the more dispersed scheme needs fewer iterations. When the initial guesses are far from the true values, the number of iterations increases, but the process remains stable.
In this case, informative priors are unavailable, so all Bayesian estimates are based on non-informative priors. Table 11 and Table 12 record the results of the Bayesian method with non-informative priors. The 90% confidence/credible intervals obtained with the Bootstrap methods and the Bayes estimates under the non-informative prior are displayed in Table 13.
The real data reveal several facts. Bayesian point estimates under the non-informative prior and square loss function are higher than those under the linex loss function. Moreover, the first scheme ( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) yields higher point estimates and shorter interval lengths. Table 13 shows that Bayesian inference under non-informative priors produces shorter intervals than the Bootstrap-p and Bootstrap-t methods. Hence, more dispersed schemes and Bayesian inference are preferred in real data analysis.

7. Conclusions

In this article, we studied two samples following the inverted exponentiated Rayleigh distribution under the joint progressively type-II censoring scheme. The shape parameters were assumed to be different while the scale parameters were the same. The expectation-maximization algorithm was applied to obtain the maximum likelihood estimates. The performance of the MLEs and of the Bayesian estimators under non-informative and informative priors was compared. The Bootstrap-p, Bootstrap-t, and Bayesian methods were used for interval estimation. The importance sampling technique was introduced to compute the Bayesian estimates. The contrast between estimates under the square loss function and the linex loss function was also studied.
In the point estimation study, Bayesian inference under informative priors turned out to be the best method, and several patterns across censoring schemes were identified. For the confidence intervals, Bootstrap methods were applied and evaluated. Moreover, the observed Fisher information matrix, derived from the missing value principle, played a key role in constructing the Bootstrap-t intervals.
In the future, these methods can be extended to many other distributions, such as the multivariate Gaussian distribution with zero mean. Ref. [31] proposed a total Bregman divergence-based matrix information geometry (TBD-MIG) detector and applied it to detect targets embedded in nonhomogeneous clutter. We are also working on situations where the scale parameters are different and the samples are not independent.

Author Contributions

Investigation, J.F.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202210004004 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [11].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The elements of the Fisher information matrix are expressed as follows:
$$ E\left(-\frac{\partial^2 \ln f_{IERD}(x;\theta_1,\lambda)}{\partial\theta_1^2}\right)=\frac{1}{\theta_1^2} $$
$$ E\left(-\frac{\partial^2 \ln f_{IERD}(x;\theta_1,\lambda)}{\partial\theta_1\,\partial\lambda}\right)=-\int_0^{\infty}2\theta_1\lambda x^{-5}e^{-2\lambda/x^2}\left(1-e^{-\lambda/x^2}\right)^{\theta_1-2}dx $$
$$ E\left(-\frac{\partial^2 \ln f_{IERD}(x;\theta_1,\lambda)}{\partial\lambda^2}\right)=\frac{1}{\lambda^2}+(\theta_1-1)\int_0^{\infty}2\theta_1\lambda x^{-7}e^{-2\lambda/x^2}\left(1-e^{-\lambda/x^2}\right)^{\theta_1-3}dx $$
$$ E\left(-\frac{\partial^2 \ln f_{CPF}(x;\theta_1,\lambda)}{\partial\theta_1^2}\right)=\frac{1}{\theta_1^2} $$
$$ E^{(w_i)}\left(-\frac{\partial^2 \ln f_{CPF}(x;\theta_1,\lambda)}{\partial\theta_1\,\partial\lambda}\right)=\frac{w_i^{-2}e^{-\lambda/w_i^2}}{1-e^{-\lambda/w_i^2}}-\int_{w_i}^{\infty}\frac{2\theta_1\lambda x^{-5}e^{-2\lambda/x^2}\left(1-e^{-\lambda/x^2}\right)^{\theta_1-2}}{\left(1-e^{-\lambda/w_i^2}\right)^{\theta_1}}dx $$
$$ E^{(w_i)}\left(-\frac{\partial^2 \ln f_{CPF}(x;\theta_1,\lambda)}{\partial\lambda^2}\right)=\frac{1}{\lambda^2}-\frac{\theta_1 w_i^{-4}e^{-\lambda/w_i^2}}{\left(1-e^{-\lambda/w_i^2}\right)^2}+(\theta_1-1)\int_{w_i}^{\infty}\frac{2\theta_1\lambda x^{-7}e^{-2\lambda/x^2}\left(1-e^{-\lambda/x^2}\right)^{\theta_1-3}}{\left(1-e^{-\lambda/w_i^2}\right)^{\theta_1}}dx $$
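As a numerical sanity check on the integrands above: the IERD density f(x) = 2θ1λ x^{−3} e^{−λ/x²}(1 − e^{−λ/x²})^{θ1−1} should integrate to one over (0, ∞). The sketch below uses an illustrative parameter choice and truncation limits of our own; outside [0.05, 200] the mass is numerically negligible for these values.

```python
import math

def ierd_pdf(x, theta, lam):
    # f(x) = 2*theta*lam * x**(-3) * exp(-lam/x^2) * (1 - exp(-lam/x^2))**(theta - 1)
    e = math.exp(-lam / x**2)
    return 2.0 * theta * lam * x**-3 * e * (1.0 - e) ** (theta - 1)

def simpson(f, a, b, n=10000):
    # Composite Simpson quadrature; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

# Total probability mass with theta = 2, lambda = 1.
mass = simpson(lambda x: ierd_pdf(x, 2.0, 1.0), 0.05, 200.0)
```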

References

1. Ochi, M.K. Generalization of Rayleigh probability distribution and its application. J. Ship Res. 1978, 22, 259–265.
2. Dey, S.; Das, M. A note on prediction interval for a Rayleigh distribution: Bayesian approach. Am. J. Math. Manag. Sci. 2007, 27, 43–48.
3. Mousa, M.A.M.A.; Al-Sagheer, S.A. Statistical inference for the Rayleigh model based on progressively type-II censored data. Statistics 2006, 40, 149–157.
4. Al-Khedhairi, A.; Sarhan, A.; Tadj, L. Estimation of the generalized Rayleigh distribution parameters. Int. J. Reliab. Appl. 2007, 8, 199–210.
5. Kundu, D.; Raqab, M.Z. Generalized Rayleigh distribution: Different methods of estimations. Comput. Stat. Data Anal. 2005, 49, 187–200.
6. Raqab, M.Z.; Madi, M.T. Inference for the generalized Rayleigh distribution based on progressively censored data. J. Stat. Plan. Inference 2011, 141, 3313–3322.
7. Aliyu, Y.; Yahaya, A. Bayesian estimation of the shape parameter of generalized Rayleigh distribution under non-informative prior. Int. J. Adv. Stat. Probab. 2015, 4, 1–10.
8. Gao, S.; Gui, W. Parameter estimation of the inverted exponentiated Rayleigh distribution based on progressively first-failure censored samples. Int. J. Syst. Assur. Eng. Manag. 2019, 10, 1–12.
9. Maurya, R.K.; Tripathi, Y.M.; Rastogi, M.K. Estimation and prediction for a progressively first-failure censored inverted exponentiated Rayleigh distribution. J. Stat. Theory Pract. 2019, 13, 38–85.
10. Hp, A.; Nm, B. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive type-II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345.
11. Rao, G.S.; Mbwambo, S. Exponentiated inverse Rayleigh distribution and an application to coating weights of iron sheets data. J. Probab. Stat. 2019, 2019, 1–13.
12. Balakrishnan, N.; Aggarwala, R. Progressive Censoring; Birkhäuser: Boston, MA, USA, 2000.
13. Attia, A.F.; Aly, H.M.; Bleed, S.O. Estimating and planning accelerated life test using constant stress for generalized logistic distribution under type-I censoring. ISRN Appl. Math. 2011, 2011, 67–81.
14. Balakrishnan, N.; Han, D.; Iliopoulos, G. Exact inference for progressively type-I censored exponential failure data. Metrika 2011, 73, 335–358.
15. Noori Asl, M.; Arabi, R.; Bevrani, H. Classical and Bayesian inferential approaches using Lomax model under progressively type-I hybrid censoring. J. Comput. Appl. Math. 2018, 343, 115–147.
16. Maswadah, M.S. Structural inference on the generalized gamma distribution based on type-II progressively censored sample. J. Aust. Math. Soc. 1991, 50, 15–22.
17. Dey, T.; Dey, S.; Kundu, D. On progressively type-II censored two-parameter Rayleigh distribution. Commun. Stat. Simul. Comput. 2016, 45, 438–455.
18. Raqab, M.Z.; Asgharzadeh, A.; Valiollahi, R. Prediction for Pareto distribution based on progressively type-II censored samples. Comput. Stat. Data Anal. 2010, 54, 1732–1743.
19. Balakrishnan, N.; Su, F. Exact likelihood inference for two exponential populations under joint type-II censoring. Comput. Stat. Data Anal. 2010, 39, 2172–2191.
20. Mondal, S.; Kundu, D. Exact inference on multiple exponential populations under a joint type-II progressive censoring scheme. Statistics 2019, 53, 1329–1356.
21. Mondal, S.; Kundu, D. A new two sample type-II progressive censoring scheme. Commun. Stat. Theory Methods 2019, 48, 2602–2618.
22. Arshad, M.; Abdalghani, O. On estimating the location parameter of the selected exponential population under the linex loss function. Braz. J. Probab. Stat. 2020, 34, 167–182.
23. Gao, S.; Yu, J.; Gui, W. Pivotal inference for the inverted exponentiated Rayleigh distribution based on progressive type-II censored data. Am. J. Math. Manag. Sci. 2020, 39, 315–328.
24. Yousif, S.M.; Karam, N.S.; Karam, G.S.; Abood, Z.M. Stress-strength reliability estimation for P(T < X < Z) using exponentiated inverse Rayleigh distribution. AIP Conf. Proc. 2020, 2307, 020013.
25. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–22.
26. Muraki, E. A generalized partial credit model: Application of an EM algorithm. Appl. Psychol. Meas. 1992, 16, 159–176.
27. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Birkhäuser: New York, NY, USA, 2014.
28. Louis, T.A. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1982, 44, 226–233.
29. Liu, J.S. Metropolized independent sampling with comparisons to rejection sampling and importance sampling. Stat. Comput. 1996, 6, 113–119.
30. Congdon, P. Applied Bayesian Modeling; Wiley: Hoboken, NJ, USA, 2005.
31. Hua, X.; Ono, Y.; Peng, L.; Cheng, Y.; Wang, H. Target detection within nonhomogeneous clutter via total Bregman divergence-based matrix information geometry detectors. IEEE Trans. Signal Process. 2021, 69, 4326–4340.
Figure 1. Plots of pdf and hazard function of IERD. (a) Pdf of IERD. (b) Hazard function of IERD.
Figure 2. JPC scheme.
Figure 3. The trend of MSEs for different schemes with MLEs method (set k = 20, n = 40, m = 45 as an example).
Figure 4. The trend of CPs for different schemes with four methods (set k = 20, n = 40, m = 45 as an example).
Figure 5. The IERD fitness of data sets. F o b s ( x ) means the empirical cumulative distribution function of data set. F e p ( x ) means the fitted distribution function of data set. (a) The fitness of data set 1. (b) The fitness of data set 2.
Figure 6. (a,b) The iterations for two different initial guesses of λ , 57.07 and 70, under ( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) . (c,d) The iterations for two different initial guesses of λ , 57.07 and 70, under ( 36 , 0 ( 70 ) , 36 ) .
Table 1. AVs and MSEs of MLEs when the true values are set as the initial values.
k n m Scheme θ̂1 θ̂2 λ̂
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.7780.960 1.9510.511 1.8660.167
( 10 , 0 ( 18 ) , 15 )2.3301.168 1.5250.574 1.6060.269
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )3.0261.277 1.8350.482 1.8780.161
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.9590.778 1.8780.275 1.8960.127
( 10 , 0 ( 18 ) , 15 )2.3760.991 1.5380.351 1.6020.196
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.3030.949 1.4700.402 1.6320.177
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )2.1811.552 1.5980.725 1.5610.314
( 30 , 0 ( 18 ) , 35 )2.1602.219 1.6241.025 1.4350.349
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.1681.276 1.6840.343 1.8870.225
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )3.2461.021 2.5380.516 2.1280.116
( 5 , 0 ( 28 ) , 10 )3.2100.801 2.0570.213 2.0300.075
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )3.1171.119 2.1480.290 2.1960.115
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.5610.871 1.5530.322 1.6840.129
( 27 , 0 ( 28 ) , 28 )2.2161.350 1.5120.618 1.6000.277
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.6791.132 1.5900.402 1.7280.117
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )3.0760.799 1.8700.252 1.9270.111
( 25 , 0 ( 38 ) , 20 )2.7690.625 1.5260.307 1.6590.149
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.8590.672 1.8230.193 1.8710.075
Table 2. AVs and MSEs of MLEs when 7 is set as the initial value.
k n m Scheme θ̂1 θ̂2 λ̂
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.9311.261 1.8130.264 1.8440.103
( 10 , 0 ( 18 ) , 15 )2.1680.967 1.3950.472 1.5880.227
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.9771.078 1.8210.502 1.8830.146
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.8461.098 1.9030.394 1.8680.139
( 10 , 0 ( 18 ) , 15 )2.2320.938 1.5030.348 1.6370.177
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.9011.005 1.9190.356 1.8810.120
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )1.7601.706 1.2210.709 1.5260.252
( 30 , 0 ( 18 ) , 35 )1.5632.156 1.0270.998 1.4270.351
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.1411.497 1.4670.319 1.5360.284
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )3.1020.751 2.2340.220 1.9630.066
( 5 , 0 ( 28 ) , 10 )3.1110.514 1.9990.193 1.9890.069
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )2.9020.618 2.0560.147 2.0080.061
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.0770.959 1.4580.455 1.5940.194
( 27 , 0 ( 28 ) , 28 )1.8241.520 1.2430.640 1.4900.286
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.2791.135 1.5490.303 1.6470.155
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )2.030.824 1.8600.237 1.8630.077
( 25 , 0 ( 38 ) , 20 )2.3350.597 1.5220.301 1.6340.161
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.9520.536 1.9170.205 1.9330.094
Table 3. AVs and MSEs of Bayesian estimates for informative prior under square loss function based on 1000 replications.
k n m Scheme θ̂1S θ̂2S λ̂S
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.7790.519 1.7120.233 1.7510.250
( 10 , 0 ( 18 ) , 15 )2.7480.523 1.6430.270 1.6790.314
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.7120.394 1.7810.207 1.7680.230
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.7480.437 1.6710.243 1.7490.251
( 10 , 0 ( 18 ) , 15 )2.7290.524 1.6960.290 1.7830.311
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.8170.486 1.8260.172 1.7940.211
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )2.4801.220 1.5660.499 1.5910.399
( 30 , 0 ( 18 ) , 35 )2.3811.420 1.5180.697 1.5770.546
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.6141.034 1.6180.463 1.6120.374
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )2.8310.355 1.8670.168 1.8780.218
( 5 , 0 ( 28 ) , 10 )2.7690.345 1.8030.171 1.8350.253
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )2.8550.334 1.8940.143 1.8730.219
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.4211.083 1.6340.406 1.6800.406
( 27 , 0 ( 28 ) , 28 )2.3341.064 1.6080.435 1.6250.479
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.6791.070 1.7170.423 1.7410.456
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )2.6510.830 1.8370.289 1.7050.374
( 25 , 0 ( 38 ) , 20 )2.6840.654 1.6740.259 1.6930.387
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.6410.836 1.8300.286 1.7910.390
Table 4. AVs and MSEs of Bayesian estimates for informative prior under linex loss function based on 1000 replications.
k n m Scheme θ̂1L θ̂2L λ̂L
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.5680.754 1.6780.297 1.6160.279
( 10 , 0 ( 18 ) , 15 )2.6950.842 1.7120.349 1.6470.346
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.5830.595 1.6890.265 1.5330.259
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.6730.636 1.7400.320 1.6120.281
( 10 , 0 ( 18 ) , 15 )2.6830.762 1.7820.372 1.6470.347
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.6590.673 1.6850.212 1.6560.243
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )2.3711.425 1.5160.554 1.5800.412
( 30 , 0 ( 18 ) , 35 )2.2911.825 1.4720.761 1.5660.562
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.5821.241 1.6620.517 1.6000.388
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )2.7620.510 1.8430.194 1.8490.242
( 5 , 0 ( 28 ) , 10 )2.7380.505 1.8890.208 1.8080.278
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )2.7970.458 1.8710.163 1.8450.242
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.3451.232 1.5980.443 1.6720.415
( 27 , 0 ( 28 ) , 28 )2.2561.205 1.5680.477 1.6170.490
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.4121.298 1.6820.459 1.7340.464
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )2.5820.945 1.6050.317 1.6980.382
( 25 , 0 ( 38 ) , 20 )2.5040.759 1.5310.290 1.6850.397
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.5790.941 1.6980.313 1.6850.397
Table 5. AVs and MSEs of Bayesian estimates for non-informative prior under square loss function based on 1000 replications.
k n m Scheme θ̂1NS θ̂2NS λ̂NS
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.6170.677 2.0600.352 1.5560.252
( 10 , 0 ( 18 ) , 15 )2.4370.710 1.9080.365 1.4620.340
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.7560.564 2.1620.458 1.5620.247
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.8090.672 1.9920.360 1.5590.256
( 10 , 0 ( 18 ) , 15 )2.6550.724 1.8330.287 1.4790.325
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.9860.966 2.1540.343 1.6180.214
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )2.1951.688 1.4210.487 1.3450.461
( 30 , 0 ( 18 ) , 35 )2.0941.717 1.3370.716 1.3290.621
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.2651.359 1.5240.492 1.3790.420
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )2.7920.372 2.1600.283 1.6880.216
( 5 , 0 ( 28 ) , 10 )2.7160.389 2.0840.270 1.6460.254
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )2.8320.386 2.1990.313 1.7890.214
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.3331.310 1.5170.355 1.5510.445
( 27 , 0 ( 28 ) , 28 )2.3261.311 1.4560.419 1.4930.524
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.4831.374 1.6660.393 1.6110.499
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )2.4850.972 1.7040.257 1.6830.400
( 25 , 0 ( 38 ) , 20 )2.4520.713 1.6710.217 1.6800.408
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.6810.960 1.8060.242 1.7770.409
Table 6. AVs and MSEs of Bayesian estimates for non-informative prior under linex loss function based on 1000 replications.
k n m Scheme θ̂1NL θ̂2NL λ̂NL
AV MSE AV MSE AV MSE
202025( 0 ( 9 ) , 25 , 0 ( 10 ) )2.5500.902 1.8350.272 1.5230.281
( 10 , 0 ( 18 ) , 15 )2.4560.998 1.7040.339 1.4320.373
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.5770.686 1.9030.289 1.5320.273
2520( 0 ( 9 ) , 25 , 0 ( 10 ) )2.5090.776 1.7860.300 1.5270.285
( 10 , 0 ( 18 ) , 15 )2.4590.932 1.6650.314 1.5470.359
( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) )2.5100.801 1.9500.236 1.5830.239
4045( 0 ( 9 ) , 65 , 0 ( 10 ) )2.1791.925 1.3530.552 1.3340.476
( 30 , 0 ( 18 ) , 35 )2.1501.962 1.2690.804 1.3160.641
( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) )2.2831.582 1.4420.532 1.4670.435
302025( 0 ( 14 ) , 15 , 0 ( 15 ) )2.6060.507 1.9900.211 1.6620.236
( 5 , 0 ( 28 ) , 10 )2.5820.522 1.9300.221 1.6230.275
( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) )2.7640.465 2.0330.215 1.7650.233
4045( 0 ( 14 ) , 55 , 0 ( 15 ) )2.3541.456 1.5700.393 1.5430.455
( 27 , 0 ( 28 ) , 28 )2.2841.472 1.5020.466 1.4840.536
( 0 ( 9 ) , 5 ( 11 ) , 0 ( 10 ) )2.4161.522 1.6240.434 1.5040.508
404045( 0 ( 19 ) , 45 , 0 ( 20 ) )2.4181.093 1.6660.283 1.6770.408
( 25 , 0 ( 38 ) , 20 )2.3710.827 1.6230.243 1.6730.417
( 0 ( 17 ) , 9 ( 5 ) , 0 ( 18 ) )2.7171.074 1.7700.269 1.7710.416
Table 7. ALs and CPs of 90% confidence intervals with Bootstrap-p and Bootstrap-t methods based on 1000 replications.
Scheme Parameter Bootstrap-p Bootstrap-t
AL CP(%) AL CP(%)
k = 20 , n = 20 , m = 25 θ 1 2.72187.8 3.35391.8
scheme = ( 0 ( 9 ) , 25 , 0 ( 10 ) ) θ 2 1.48991.9 1.10488.3
λ 1.31387.6 1.29187.5
k = 20 , n = 20 , m = 25 θ 1 1.64272.5 1.47273.5
scheme = ( 10 , 0 ( 18 ) , 15 ) θ 2 1.22181.2 1.74181.7
λ 0.70673.2 0.86772.7
k = 20 , n = 20 , m = 25 θ 1 2.82375.5 3.40777.8
scheme = ( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) ) θ 2 1.48977.1 2.88478.3
λ 1.31377.6 0.29178.1
k = 20 , n = 25 , m = 20 θ 1 2.90098.8 3.33993.9
scheme = ( 0 ( 9 ) , 25 , 0 ( 10 ) ) θ 2 1.50593.4 1.19990.1
λ 1.00493.2 1.05195.6
k = 20 , n = 25 , m = 20 θ 1 1.66774.3 4.02774.3
scheme = ( 10 , 0 ( 18 ) , 15 ) θ 2 1.22187.2 2.13888.9
λ 0.70775.6 0.68577.5
k = 20 , n = 25 , m = 20 θ 1 2.62983.8 2.28287.3
scheme = ( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) ) θ 2 2.18074.3 2.62770.2
λ 1.87178.9 2.19784.3
k = 20 , n = 40 , m = 45 θ 1 1.17772.9 3.26876.5
scheme = ( 0 ( 9 ) , 65 , 0 ( 10 ) ) θ 2 1.01678.3 0.91975.5
λ 0.57273.0 0.88871.6
k = 20 , n = 40 , m = 45 θ 1 1.00371.4 1.16375.9
scheme = ( 30 , 0 ( 18 ) , 35 ) θ 2 0.88779.5 1.24584.4
λ 0.49378.4 2.70584.2
k = 20 , n = 40 , m = 45 θ 1 1.28167.1 2.25578.2
scheme = ( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) ) θ 2 1.13269.6 2.47580.4
λ 0.69570.4 0.67078.1
k = 30 , n = 20 , m = 25 θ 1 1.50071.6 1.56682.9
scheme = ( 0 ( 14 ) , 15 , 0 ( 15 ) ) θ 2 1.00976.2 2.37086.2
λ 0.62179.3 0.86775.5
k = 30 , n = 20 , m = 25 θ 1 0.80277.1 2.34380.8
scheme = ( 5 , 0 ( 28 ) , 10 ) θ 2 1.86078.5 2.32887.3
λ 1.01980.5 1.53889.2
k = 30 , n = 20 , m = 25 θ 1 1.06084.1 1.81385.4
scheme = ( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) ) θ 2 1.94687.2 0.91780.5
λ 2.23780.3 0.89074.2
Table 8. ALs and CPs of 90% symmetric credible intervals based on 1000 replications.
Scheme Parameter IP NIP
AL CP(%) AL CP(%)
k = 20 , n = 20 , m = 25 θ 1 1.11178.7 1.11777.5
scheme = ( 0 ( 9 ) , 25 , 0 ( 10 ) ) θ 2 0.79781.8 0.89779.5
λ 0.37074.3 0.41672.7
k = 20 , n = 20 , m = 25 θ 1 1.01269.2 1.05563.7
scheme = ( 10 , 0 ( 18 ) , 15 ) θ 2 0.77379.8 0.86472.4
λ 0.35865.0 0.39360.6
k = 20 , n = 20 , m = 25 θ 1 1.12787.0 1.21184.3
scheme = ( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) ) θ 2 0.84389.2 0.96487.5
λ 0.35981.3 0.41778.1
k = 20 , n = 25 , m = 20 θ 1 1.23987.7 1.30187.7
scheme = ( 0 ( 9 ) , 25 , 0 ( 10 ) ) θ 2 0.79488.8 0.85084.3
λ 0.37489.3 0.42882.8
k = 20 , n = 25 , m = 20 θ 1 1.21384.5 1.27178.4
scheme = ( 10 , 0 ( 18 ) , 15 ) θ 2 0.75187.3 0.80681.7
λ 0.38179.7 0.42371.1
k = 20 , n = 25 , m = 20 θ 1 1.38587.2 1.47287.0
scheme = ( 0 ( 5 ) , 5 ( 5 ) , 0 ( 10 ) ) θ 2 0.84089.2 0.87283.4
λ 0.38879.3 0.46176.8
k = 20 , n = 40 , m = 45 θ 1 0.53069.1 0.53860.2
scheme = ( 0 ( 9 ) , 65 , 0 ( 10 ) ) θ 2 0.37672.7 0.42764.5
λ 0.19061.3 0.19560.1
k = 20 , n = 40 , m = 45 θ 1 0.52570.2 0.54459.3
scheme = ( 30 , 0 ( 18 ) , 35 ) θ 2 0.39368.9 0.46262.8
λ 0.20763.4 0.22758.0
k = 20 , n = 40 , m = 45 θ 1 0.55667.0 0.59662.7
scheme = ( 0 ( 5 ) , 13 ( 5 ) , 0 ( 10 ) ) θ 2 0.41268.8 0.43963.1
λ 0.19466.5 0.20263.9
k = 30 , n = 20 , m = 25 θ 1 1.05191.8 1.11588.9
scheme = ( 0 ( 14 ) , 15 , 0 ( 15 ) ) θ 2 0.78192.4 0.82591.2
λ 0.34484.4 0.38983.2
k = 30 , n = 20 , m = 25 θ 1 0.94383.8 1.01982.5
scheme = ( 5 , 0 ( 28 ) , 10 ) θ 2 0.73686.0 0.77681.6
λ 0.32081.6 0.37380.9
k = 30 , n = 20 , m = 25 θ 1 0.99993.2 1.08190.4
scheme = ( 0 ( 12 ) , 3 ( 5 ) , 0 ( 13 ) ) θ 2 0.76694.3 0.79592.0
λ 0.32385.6 0.37782.3
Table 9. The fitting results of the two data sets.
Data Set θ̂ λ̂ K-S Distance 95% Critical Value
data set 1 13.18 53.30 0.0612 0.1603
data set 2 18.22 61.56 0.0871 0.1603
Table 10. Maximum likelihood estimates under three schemes.
Scheme θ 1 ^ θ 2 ^ λ ^
( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) 14.73 13.97 54.15
( 0 ( 35 ) , 36 ( 2 ) , 0 ( 35 ) ) 16.78 12.91 57.23
( 36 , 0 ( 70 ) , 36 ) 13.50 12.76 55.51
Table 11. Bayes estimates for non-informative prior under square loss function.
Scheme θ 1 ^ θ 2 ^ λ ^
( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) 14.40 13.47 54.73
( 0 ( 35 ) , 36 ( 2 ) , 0 ( 35 ) ) 13.55 11.97 54.72
( 36 , 0 ( 70 ) , 36 ) 13.29 10.99 54.68
Table 12. Bayes estimates for non-informative prior under linex loss function.
Scheme θ 1 ^ θ 2 ^ λ ^
( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) 14.23 13.38 53.56
( 0 ( 35 ) , 36 ( 2 ) , 0 ( 35 ) ) 13.41 11.86 53.50
( 36 , 0 ( 70 ) , 36 ) 13.16 10.89 53.34
Table 13. The interval estimates with three methods. LB means lower bound and UB means upper bound.
Scheme Parameter Bootstrap-p Bootstrap-t NIP
LB UB LB UB LB UB
θ1 12.26 17.57 11.97 18.02 12.45 18.51
( 0 ( 18 ) , 2 ( 36 ) , 0 ( 18 ) ) θ2 10.23 17.05 9.91 18.20 11.26 16.78
λ 53.33 61.59 53.06 62.30 53.51 60.13
θ1 11.37 18.41 11.88 19.76 12.43 19.17
( 0 ( 35 ) , 36 ( 2 ) , 0 ( 35 ) ) θ2 9.36 17.29 9.40 18.55 10.23 17.96
λ 51.60 60.64 50.05 61.23 53.30 61.18
θ1 13.27 20.74 12.81 20.07 12.25 19.46
( 36 , 0 ( 70 ) , 36 ) θ2 11.69 19.80 11.09 19.08 12.72 20.79
λ 53.53 65.16 52.33 64.16 51.41 62.11