Article

Statistical Inference for Competing Risks Model with Adaptive Progressively Type-II Censored Gompertz Life Data Using Industrial and Medical Applications

by Muqrin A. Almuqrin 1,*, Mukhtar M. Salah 1 and Essam A. Ahmed 2,3,*
1 Department of Mathematics, Faculty of Science in Zulfi, Majmaah University, Al-Majmaah 11952, Saudi Arabia
2 Faculty of Business Administration, Taibah University, Medina 42353, Saudi Arabia
3 Mathematics Department, Sohag University, Sohag 82524, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4274; https://doi.org/10.3390/math10224274
Submission received: 24 September 2022 / Revised: 2 November 2022 / Accepted: 9 November 2022 / Published: 15 November 2022
(This article belongs to the Special Issue Probability Distributions and Their Applications)

Abstract: This study uses the adaptive Type-II progressively censored competing risks model to estimate the unknown parameters and the survival function of the Gompertz distribution. The lifetime associated with each failure cause is assumed independent, each following a Gompertz distribution with a different shape parameter. First, the Newton-Raphson method is used to derive the maximum likelihood estimators (MLEs), and the existence and uniqueness of these estimators are demonstrated. The stochastic expectation maximization (SEM) method is also used to construct MLEs of the unknown parameters, which simplifies and facilitates computation. Based on the asymptotic normality of the MLE and SEM estimators, we construct the corresponding confidence intervals for the unknown parameters, and the delta approach is utilized to obtain an interval estimate of the reliability function. Additionally, approximate interval estimators for all unknowns are constructed using two bootstrap techniques. Furthermore, we compute the Bayes estimates of the unknown parameters as well as the survival function using the Markov chain Monte Carlo (MCMC) method under squared error and LINEX loss functions. Finally, we analyze two real data sets and carry out a simulation study to evaluate the efficacy of the established approaches.

1. Introduction

In practical applications, especially in the medical fields and engineering sciences, lifetime studies are a useful tool to investigate the distribution of surviving units. When analyzing data from such studies, an important component is the assumed lifetime distribution. Common lifetime distributions include the exponential, generalized exponential, Rayleigh, Pareto, and Weibull, to name a few. Besides these common life distributions, the Gompertz distribution (Gompertz [1]) is also frequently used to analyze lifetime data. Further, it is used to describe growth in plants, animals, bacteria, and cancer cells; see Willemse and Koppelaar [2]. In recent studies, the Gompertz model has been successfully used to characterize growth curves in many fields, including biology, crop science, medicine, engineering, computer science, economics, marketing, human mortality, human demographics, and actuarial mortality. Due to the recent global spread of COVID-19, this distribution has also been used to predict and estimate the number of COVID-19 cases in different countries. For instance, Rodriguez et al. [3] predicted the number of COVID-19 cases in Mexico using the Gompertz model, and according to Jia et al. [4], the Gompertz model has been applied successfully to forecast the number of COVID-19 infections in China. The Gompertz distribution is thus worthwhile to investigate in this paper due to its numerous uses and applications.
The probability density function (PDF) and cumulative distribution function (CDF) of the Gompertz distribution are given, respectively, by

f(x; \theta, \lambda) = \theta e^{\lambda x} \exp\left[-\frac{\theta}{\lambda}\left(e^{\lambda x} - 1\right)\right], \quad x > 0, \; \lambda, \theta > 0, \qquad (1)

and

F(x; \theta, \lambda) = 1 - \exp\left[-\frac{\theta}{\lambda}\left(e^{\lambda x} - 1\right)\right], \quad x \geq 0, \; \lambda, \theta > 0. \qquad (2)

The reliability (survival) function and the failure rate function become, respectively,

S(t) = \exp\left[-\frac{\theta}{\lambda}\left(e^{\lambda t} - 1\right)\right] \quad \text{and} \quad h(t) = \theta e^{\lambda t}, \quad t \geq 0, \; \lambda, \theta > 0, \qquad (3)
where λ > 0 is the scale parameter and θ > 0 is the shape parameter. The failure rate (hazard) function h(t) is monotone, and log h(t) is linear in t. The Gompertz distribution is known to be a flexible model that can be skewed to the left or to the right by varying the values of θ and λ. The parameters satisfy the properties listed below:
  • If 0 < λ ≤ θ, then df(x; θ, λ)/dx < 0 for all x > 0; hence f(x; θ, λ) is monotonically decreasing.
  • If λ > θ, the PDF (1) increases monotonically for x ∈ (0, ln(λ/θ)/λ) and decreases for x ∈ (ln(λ/θ)/λ, ∞).
  • Since h′(t) = θλ e^{λt} > 0 for λ > 0, the hazard increases monotonically with time t; if the parameter space is extended to λ < 0, the hazard decreases monotonically over time t.
  • When λ → 0, the Gompertz distribution tends to the exponential distribution with constant hazard θ.
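As a quick numerical companion to Equations (1)-(3) and the shape properties above, the following sketch (in Python with NumPy; the function names are ours, not from the paper) evaluates the PDF, CDF, and hazard, and illustrates the mode at ln(λ/θ)/λ when λ > θ:

```python
import numpy as np

def gompertz_pdf(x, theta, lam):
    # f(x; theta, lam) = theta * e^{lam x} * exp(-(theta/lam) * (e^{lam x} - 1)), Eq. (1)
    x = np.asarray(x, dtype=float)
    return theta * np.exp(lam * x) * np.exp(-(theta / lam) * np.expm1(lam * x))

def gompertz_cdf(x, theta, lam):
    # F(x; theta, lam) = 1 - exp(-(theta/lam) * (e^{lam x} - 1)), Eq. (2)
    x = np.asarray(x, dtype=float)
    return -np.expm1(-(theta / lam) * np.expm1(lam * x))

def gompertz_hazard(x, theta, lam):
    # h(t) = theta * e^{lam t}; log h(t) is linear in t, Eq. (3)
    return theta * np.exp(lam * np.asarray(x, dtype=float))

# For lam > theta the PDF rises to a mode at ln(lam/theta)/lam, then falls
theta, lam = 0.5, 2.0
mode = np.log(lam / theta) / lam
```

Note that `np.expm1` is used for e^{λx} − 1 to avoid loss of precision for small λx.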
The inference of the unknown lifetime parameters of the Gompertz distribution based on censored data has been widely discussed in the last two decades. For example, Jaheen [5] investigated this distribution based on progressive Type-II censoring using a Bayesian methodology. Also based on progressively Type-II censored samples, Wu et al. [6] developed point and interval estimators for the parameters of the Gompertz distribution. Wu et al. [7] explored estimation of the Gompertz distribution under a Type-II progressive censoring scheme in which units are randomly removed. Ismail [8] applied the Bayesian technique to the estimation problem using step-stress partially accelerated life tests with two stress levels and Type-I censoring, with the Gompertz distribution as the life model. The point and interval estimation of a two-parameter Gompertz distribution under partially accelerated life tests with Type-II censoring was also covered by Ismail [9]. Soliman et al. [10] dealt with parameter estimation using progressive first-failure censored data. Soliman and Al Sobhi [11] analyzed progressive first-failure data for estimation of the Gompertz distribution. The Bayes estimation and expected termination time for the competing risks model of the Gompertz distribution under progressive hybrid censoring with binomial removals were considered by Wu and Shi [12]. Statistical inference for the Gompertz distribution with Type-II progressive hybrid data and generalized progressively hybrid censored data was covered by El-Din et al. [13].
Due to resource shortages, time constraints, staff changes, and accidents, data cannot always be completely observed in practice when conducting lifetime experiments. Experimenters therefore use censoring techniques to shorten testing times and the associated expenses. The two most common forms of censoring in the literature are Type-I and Type-II. With Type-I censoring, the test ends at a prefixed time, whereas with Type-II censoring only the first m failed units in a random sample of size n (m < n) are observed; equivalently, the experiment stops once a specified number of failures has been collected. Although this method is easy to implement, it has the potential to waste a lot of test time, and no unit can be withdrawn until the test is over. In order to increase the efficiency of the experiment, progressive Type-II censoring was suggested. With this censoring method, test units can be removed at different points throughout the experiment; we refer to Balakrishnan and Cramer [14] for additional information. The progressive Type-II censoring model works as follows.
Denote by X_1, X_2, …, X_n the respective lifetimes of the n units placed on a life test. The progressive Type-II censoring scheme R = (R_1, R_2, …, R_m) and the number of observed units m (m < n) are determined before the experiment, where R_i ≥ 0, i = 1, 2, …, m, and n = m + Σ_{i=1}^{m} R_i. When the ith failure is observed, R_i of the remaining units are randomly removed from the experiment. Following this rule, the experiment continues until m failures are observed, at which point it ends. The observed progressively Type-II right-censored order statistics are therefore (X_{1:m:n}, X_{2:m:n}, …, X_{m:m:n}). This model captures the real-world scenario in which some units are lost or removed during the experiment, making it more realistic than Type-II censoring. Even though progressive censoring can considerably increase the efficiency of the experiment, the total runtime of the trial is frequently still too long. In many cases, especially in clinical research, it is important to know how long one must wait before a particular event occurs. Ng et al. [15] therefore suggested adaptive Type-II progressive censoring to improve the effectiveness of statistical inference and reduce the overall test duration. This plan operates as follows. Consider putting n identical units through a life test. The number of observed failures m (m < n) is predetermined, and the test time is permitted to extend beyond a time T that is specified beforehand. The progressive censoring scheme is specified in advance, although some of the R_i values may change accordingly during the life test. As explained above, after the ith failure is observed, R_i units are randomly removed from the test, and we write X_{i:m:n}, i = 1, 2, …, m, for the m fully observed lifetimes. If the mth failure occurs before time T (i.e., X_{m:m:n} < T), the test ends at time X_{m:m:n} using the prespecified progressive censoring scheme (R_1, R_2, …, R_m), where R_m = n − m − Σ_{i=1}^{m−1} R_i. If instead the Jth failure occurs before time T, i.e., X_{J:m:n} < T < X_{J+1:m:n} (1 ≤ J ≤ m − 1), where X_{0:m:n} ≡ 0 and X_{m+1:m:n} ≡ ∞, then we adapt the numbers of units progressively withdrawn upon failure by setting R_{J+1} = R_{J+2} = … = R_{m−1} = 0, and at time X_{m:m:n} all remaining R_m units are removed, where R_m = n − m − Σ_{i=1}^{J} R_i. In this situation, the effectively applied progressive censoring scheme is (R_1, R_2, …, R_J, 0, 0, …, 0, n − m − Σ_{i=1}^{J} R_i). In this study, we write X_i instead of X_{i:m:n}, i = 1, 2, …, m. The data observed under the considered censoring scheme take one of the following two forms:

Case 1: (X_1, R_1), (X_2, R_2), …, (X_m, R_m), if X_m < T, where R_m = n − m − Σ_{i=1}^{m−1} R_i;
Case 2: (X_1, R_1), …, (X_J, R_J), (X_{J+1}, 0), …, (X_{m−1}, 0), (X_m, R_m), if X_J < T < X_{J+1}.
It should be noted that both the conventional Type-II and the progressive Type-II censoring schemes are special cases of the adaptive Type-II censoring scheme. If T = 0, then J = 0 and no units are removed before the final failure, so the adaptive scheme reduces to the conventional Type-II censoring scheme; if T → ∞, then J = m and the planned removals R_i (i = 1, 2, …, m) of surviving units are carried out throughout the trial, so the adaptive scheme is exactly the progressive Type-II censoring scheme.
There have been many discussions recently about the adaptive Type-II censoring scheme. As an illustration, Sobhi and Soliman [16] worked with the exponentiated Weibull distribution and investigated the estimation of its parameters and of the reliability and hazard functions; they employed Bayesian estimation as well as the MLE under the adaptive Type-II censoring scheme. ML and Bayes estimates for the unknown parameters of the inverse Weibull distribution under the adaptive Type-II censoring scheme were described by Nassar and Abo-Kasem [17]. Under the adaptive Type-II censoring scheme, Sewailem and Baklizi [18] investigated the ML and Bayes estimates of the log-logistic distribution parameters. Estimation of entropy for inverse Weibull distributions under the adaptive Type-II censoring scheme was developed by Xu and Gui [19]. The parameters of an exponentiated inverted Rayleigh model were estimated by Panahi and Moradi [20], who studied the MLE and Bayesian analysis under an adaptive Type-II hybrid censoring scheme. Chen and Gui [21] concentrated on a statistical analysis of the Chen model with adaptive progressive Type-II censoring. The Kumaraswamy-exponential distribution was considered under the adaptive progressive Type-II censoring technique by Mohan and Chacko [22]. Under the adaptive progressive Type-II censoring scheme, Hora et al. [23] considered classical and Bayesian inference for the unknown parameters of the inverse Lomax distribution. Recently, using an adaptive Type-II progressively censored sample from the Gompertz distribution, Amein et al. [24] examined several estimation strategies.
The adaptive Type-II progressive censoring scheme is used in this paper, rather than the Type-I, Type-II, or progressive Type-II censoring schemes, because it suits experiments in which units must be withdrawn at different failure stages before the intended sample size is reached and because it sets a target time within which the experiment should conclude.
In some medical or engineering studies, individuals may fail due to different causes. In the literature, this is referred to as the competing risks model. Under the competing risks model, the observable data consist of the failure time of each individual and an indicator of the cause of failure. These failure causes may or may not be independent; in most analyses of competing risks data, they are assumed independent of each other. For example, a patient can die from breast cancer or from a stroke, but not from both. In the same field, when studying thyroid cancer, three causal factors play a possible role: the first is radiation exposure, the second is an elevated level of thyroid-stimulating hormone, and the third suggested factor is prolonged exposure to iodine deficiency. Based on the assessment of these factors, patients are divided into low- and high-risk groups.
Another example arises in the industrial and mechanical fields: an assembled device may fail because the welding/bonding plate fractures from fatigue, or because the electrical/optical signal (voltage, current, or light intensity) degrades to an unacceptable level through aging deterioration. In this example, the electronic product fails due to two independent failure modes: welding interface fracture (catastrophic, or hard, failure) and reduction of the electrical/optical signal (degradation, or soft, failure). Crowder [25] is a reliable source for a comprehensive investigation of competing risk models.
Many academics have recently studied statistical inference for the parameters of various lifetime parametric models utilizing various censoring techniques with competing risks data. Kundu et al. [26], for instance, considered the analysis of progressively Type-II censored competing risks data from exponential distributions. Based on progressive Type-II censoring of competing risks, Pareek et al. [27] determined the MLEs of the parameters of Weibull distributions and their asymptotic variance-covariance matrix. When the lifetime distributions are Weibull, Kundu and Pradhan [28] studied Bayesian inference of the unknown parameters from progressively censored competing risks data. Estimators of the parameters of Lomax distributions were determined by Cramer and Schmiedt [29] using a progressive Type-II censoring competing risks model; they calculated the expected Fisher information matrices and the MLEs. The generalized exponential distribution with adaptive Type-II progressive hybrid censored competing risks data was studied by Ashour and Nassar [30]. A competing risks model with a generalized Type-I hybrid censoring scheme was presented by Mao et al. [31], who constructed exact and approximate confidence intervals using exact distributions, asymptotic distributions, and parametric bootstrap approaches, respectively. Wu and Shi [12] developed the Bayes estimation for the two-parameter Gompertz competing risks model under a Type-I progressively hybrid censoring scheme with binomial removals. Point estimation and prediction for a class of exponential distributions with Type-I progressively interval-censored competing risks data were studied by Ahmadi et al. [32]. Dey et al. [33] considered the Bayesian analysis of the modified Weibull distribution under progressively censored competing risks models.
Inference techniques for the Weibull distribution under adaptive Type-I progressive hybrid censored competing risks data are described by Ashour and Nassar [34]. Additionally, a competing risks model using exponential distributions under the adaptive Type-II progressive censoring scheme is considered by Hemmati and Khorram [35]; they developed MLEs of the unknown parameters, constructed confidence intervals as well as two different bootstrap intervals, and also derived the Bayes estimates and associated two-sided probability intervals. Azizi et al. [36] considered statistical inference for a competing risks model using Weibull data with progressive interval censoring. Based on progressively Type-II censored competing risks data with binomial removals, Chacko and Mohan [37] developed a Bayesian analysis of the Weibull distribution. Baghestani and Baharanchi [38] investigated an improper Weibull distribution for competing risks analysis using a Bayesian technique. Statistical inference for the Burr-XII distribution under progressively Type-II censored competing risks data with binomial removals has been studied by Qin and Gui [39]. Progressively Type-II censored competing risks data from the linear exponential distribution have been examined by Davies and Volterman [40]. In the adaptive progressive Type-II censored model with independent competing risks, Ren and Gui [41] proposed several statistical inference techniques to estimate the parameters and reliability of the Weibull distribution. Recent research by Lodhi et al. [42] examined a competing risks model utilizing the Gompertz distribution under progressive Type-II censoring, where the failure-cause distributions share a common scale parameter but have different shape parameters.
The major goal of this research is to analyze an adaptive progressively Type-II censored competing risks sample from the Gompertz distribution, since few relevant works deal with adaptive progressively Type-II censored competing risks data. The model parameters and the reliability function are estimated using the maximum likelihood method, where the issue of starting values for the MLEs is resolved with the help of the graphical method developed by Balakrishnan and Kateri [43]. The existence and uniqueness of the MLEs of the model parameters are established. The Newton-Raphson (NR) method and the stochastic expectation-maximization (SEM) algorithm are the two algorithms considered for numerically determining the MLEs of the parameters. We cover interval estimation using the approximate information matrix and the bootstrap method. Assuming that the two shape parameters follow independent gamma priors and the scale parameter an inverted gamma prior, the Bayes estimators and associated credible intervals are then obtained using the Metropolis-Hastings (MH) algorithm based on squared error (SE) and linear-exponential (LINEX) loss functions. Last but not least, through Monte Carlo simulation, the performance of the estimates is assessed using average bias and mean squared error (MSE) for point estimation, and average length and coverage probability for interval estimation.
The remaining portions of this article are structured as follows. We describe the model in Section 2. The MLEs of the unknown parameters based on the NR and SEM techniques are discussed in Section 3, where we also present approximate confidence intervals based on the asymptotic normality of the MLEs. In Section 4, bootstrap confidence intervals for the unknown parameters as well as the reliability function are obtained. The Markov chain Monte Carlo (MCMC) approach is used in Section 5 to approximate the Bayesian estimates and to generate MCMC intervals for the unknowns. Section 6 presents a Monte Carlo simulation analysis that contrasts the results of the various approaches; this part also introduces real data sets to demonstrate the efficacy of the methods used in this paper. Conclusions are provided in Section 7.

2. Model Assumptions

In this setting, failure times follow independent Gompertz distributions, and there are two distinct causes of failure. Accordingly, the cause-specific density function of the random variable X_{ik}, k = 1, 2, is given by

f_k(x; \vartheta) = \theta_k e^{\lambda x} \exp\left[-\frac{\theta_k}{\lambda}\left(e^{\lambda x} - 1\right)\right], \quad x > 0, \; \lambda, \theta_k > 0, \; k = 1, 2, \qquad (4)

and the cause-specific survival (reliability) function is defined as

\bar{F}_k(x; \vartheta) = \exp\left[-\frac{\theta_k}{\lambda}\left(e^{\lambda x} - 1\right)\right], \quad x > 0, \; \lambda, \theta_k > 0, \; k = 1, 2, \qquad (5)

where X_i, i = 1, 2, …, n, denotes the lifetime of item i, X_{ik} denotes the time at which item i fails due to cause k, and X_i = min{X_{i1}, X_{i2}}.
Remark 1.
If X_1 ∼ Gompertz(θ_1, λ) and X_2 ∼ Gompertz(θ_2, λ) are mutually independent random variables, then X = min{X_1, X_2} is a Gompertz random variable with shape parameter θ_1 + θ_2 and scale parameter λ. Indeed, the survivor function F̄(x) of X is obtained as

\bar{F}(x) = P(\min\{X_1, X_2\} > x) = P(X_1 > x)\, P(X_2 > x) = \bar{F}_1(x)\, \bar{F}_2(x)
= \exp\left[-\frac{\theta_1}{\lambda}\left(e^{\lambda x} - 1\right)\right] \exp\left[-\frac{\theta_2}{\lambda}\left(e^{\lambda x} - 1\right)\right]
= \exp\left[-\frac{\theta_1 + \theta_2}{\lambda}\left(e^{\lambda x} - 1\right)\right].

Consequently, the cumulative distribution function F(x) and probability density function f(x) of X are given by

F(x) = 1 - \exp\left[-\frac{\theta_1 + \theta_2}{\lambda}\left(e^{\lambda x} - 1\right)\right] \quad \text{and} \quad f(x) = (\theta_1 + \theta_2)\, e^{\lambda x} \exp\left[-\frac{\theta_1 + \theta_2}{\lambda}\left(e^{\lambda x} - 1\right)\right]. \qquad (6)
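Remark 1 can be checked numerically: the product of the two marginal survival functions coincides, pointwise, with the survival function of a Gompertz(θ_1 + θ_2, λ) variable. A minimal sketch (Python/NumPy; the function name and parameter values are illustrative, not from the paper):

```python
import numpy as np

def gompertz_surv(x, theta, lam):
    # S(x) = exp(-(theta/lam) * (e^{lam x} - 1))
    return np.exp(-(theta / lam) * np.expm1(lam * np.asarray(x, dtype=float)))

theta1, theta2, lam = 0.8, 1.5, 0.6
x = np.linspace(0.0, 3.0, 61)

# P(min{X1, X2} > x) = S1(x) * S2(x) for independent X1, X2 ...
prod = gompertz_surv(x, theta1, lam) * gompertz_surv(x, theta2, lam)
# ... which equals the survival function of Gompertz(theta1 + theta2, lam)
combined = gompertz_surv(x, theta1 + theta2, lam)
```

The two arrays agree to machine precision, since the identity holds exactly in the exponent.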
The life test experiment under the adaptive progressively Type-II censoring scheme with competing Gompertz risks is terminated once m < n failures have been observed. The following algorithm is used to create the random sample of the total lifetime X:
A1:
Generate two i.i.d samples of size n for each cause of failure as follows:
(X_{11}, X_{21}, …, X_{n1}) ~ i.i.d. Gompertz(θ_1, λ) and (X_{12}, X_{22}, …, X_{n2}) ~ i.i.d. Gompertz(θ_2, λ).
A2:
For each i = 1, …, n, if X_{i1} ≤ X_{i2} set δ_i = 1 and X_i = X_{i1}; otherwise, if X_{i1} > X_{i2}, set δ_i = 2 and X_i = X_{i2}. We now have the data (X_1, δ_1), (X_2, δ_2), …, (X_n, δ_n). Set n_1 = #{i : X_i = X_{i1}}, n_2 = #{i : X_i = X_{i2}}, and n = n_1 + n_2.
A3:
The data are now ordered irrespective of the cause of failure, while keeping track of the corresponding cause. We thus have the n usual order statistics (X_{1:n}, δ_1), (X_{2:n}, δ_2), …, (X_{n:n}, δ_n).
A4:
In advance of the experiment, the number of observed units m (m < n) and the progressive Type-II censoring scheme R = (R_1, R_2, …, R_m) are both determined, where R_i ≥ 0, i = 1, 2, …, m, and m + Σ_{i=1}^{m} R_i = n.
A5:
To begin with, X_{1:n} is observed as the first failure, so X_{1:m:n} = X_{1:n}. Then R_1 of the n − 1 units that are still alive are chosen at random and removed from the experiment. At this point, the second failure is observed as the next smallest lifetime of the remaining units, i.e., X_{2:m:n}, and consequently R_2 of the remaining n − R_1 − 2 units are randomly censored from the study. This procedure is carried out repeatedly until all of the remaining R_m = n − m − Σ_{i=1}^{m−1} R_i units are censored at the time of the mth observed failure X_{m:m:n}. We then have progressively Type-II censored data (X_{1:m:n}, δ_1*), (X_{2:m:n}, δ_2*), …, (X_{m:m:n}, δ_m*), where the notation * indicates that the δ_i* are concomitants of the order statistics; thus δ_i* may not be equal to δ_i.
A6:
Here, the expression δ_i* = k, k = 1, 2, indicates that unit i failed at time X_{i:m:n} due to cause k. Let I_k(A) be the indicator of the event A, so that

I_1(\delta_i^* = 1) = \begin{cases} 1, & \delta_i^* = 1 \\ 0, & \text{else,} \end{cases} \quad \text{and} \quad I_2(\delta_i^* = 2) = \begin{cases} 1, & \delta_i^* = 2 \\ 0, & \text{else.} \end{cases}

The numbers of failures attributable to the first and second causes of failure are thus described by the random variables m_1 = Σ_{i=1}^{m} I(δ_i* = 1) and m_2 = Σ_{i=1}^{m} I(δ_i* = 2), where m_1 + m_2 = m and m > 0.
A7:
Suppose the test has a prescribed target completion time T, but the total test time is allowed to exceed T. When X_{m:m:n} < T, the life testing experiment ends at X_{m:m:n}. Otherwise, when X_{J:m:n} < T < X_{J+1:m:n} (1 ≤ J ≤ m − 1), where J = max{j : X_{j:m:n} < T}, no surviving unit is withdrawn until the mth failure; that is, the censoring scheme is changed to R_{J+1} = R_{J+2} = … = R_{m−1} = 0 and R_m = n − m − Σ_{j=1}^{J} R_j. Thus, under the adaptive Type-II progressive censoring scheme and in the presence of competing risks data, the observations take the form (X_{1:m:n}, δ_1*, R_1), …, (X_{J:m:n}, δ_J*, R_J), (X_{J+1:m:n}, δ_{J+1}*, 0), …, (X_{m−1:m:n}, δ_{m−1}*, 0), (X_{m:m:n}, δ_m*, R_m).
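Steps A1-A7 can be sketched as a small simulator. The Python code below is our illustrative implementation, not code from the paper; it assumes inverse-CDF sampling of the Gompertz distribution and uniformly random withdrawal of survivors, as the algorithm describes, and returns the m observed (failure time, cause) pairs together with the effectively applied censoring scheme:

```python
import numpy as np

def rgompertz(n, theta, lam, rng):
    # Inverse-CDF sampling: F^{-1}(u) = log(1 - (lam/theta) * log(1 - u)) / lam
    u = rng.uniform(size=n)
    return np.log1p(-(lam / theta) * np.log1p(-u)) / lam

def adaptive_progressive_competing(n, m, R, T, theta1, theta2, lam, rng):
    # A1-A2: latent lifetimes per cause; observed minimum and its cause
    x1 = rgompertz(n, theta1, lam, rng)
    x2 = rgompertz(n, theta2, lam, rng)
    # A3: order jointly while keeping the cause indicator
    pool = sorted(zip(np.minimum(x1, x2), np.where(x1 <= x2, 1, 2)))
    data, R_eff, passed_T = [], [], False
    for i in range(m):
        t_i, cause_i = pool.pop(0)       # next smallest remaining lifetime
        passed_T = passed_T or (t_i > T)
        if i == m - 1:
            r = len(pool)                # A7: all survivors removed at the mth failure
        elif passed_T:
            r = 0                        # A7: no withdrawals once T has passed
        else:
            r = min(R[i], len(pool))     # A5: planned withdrawal R_i
        if r > 0:
            drop = set(rng.choice(len(pool), size=r, replace=False).tolist())
            pool = [u for j, u in enumerate(pool) if j not in drop]
        data.append((t_i, cause_i))
        R_eff.append(r)
    return data, R_eff

rng = np.random.default_rng(2022)
data, R_eff = adaptive_progressive_competing(
    n=30, m=12, R=[2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0], T=1.5,
    theta1=0.6, theta2=0.9, lam=0.5, rng=rng)
```

Regardless of where T falls, exactly m failures are observed and the removals sum to n − m, as the scheme requires.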

3. Maximum Likelihood Estimation

In this section, we address the construction of the maximum likelihood estimators (MLEs), their confidence intervals for the unknown parameters, and the reliability function of the Gompertz distribution under the adaptive Type-II progressive censoring scheme with competing risks data. We propose estimating the unknown parameters using the NR and SEM algorithms. Each of these algorithms has benefits and drawbacks: the SEM algorithm can outperform the NR algorithm by avoiding saddle points and escaping local maxima through repeated simulation of new values, while the NR approach is faster and simpler to compute. It can therefore be challenging to select one algorithm over the other; it may be preferable to consider the two strategies as alternatives and perform a numerical search to assess how the outcomes respond to each technique.

3.1. MLE via Newton–Raphson Procedure

According to the previously mentioned assumptions, the likelihood function of the competing risks model can be expressed as follows (see Kundu et al. [26]):

L(\vartheta; x) = C_J \prod_{i=1}^{m} \left[f_1(x_i)\, \bar{F}_2(x_i)\right]^{I(\delta_i = 1)} \left[f_2(x_i)\, \bar{F}_1(x_i)\right]^{I(\delta_i = 2)} \prod_{i=1}^{J} \left[\bar{F}(x_i)\right]^{R_i} \left[\bar{F}(x_m)\right]^{R^*}, \qquad (7)

where

C_J = \prod_{i=1}^{m} \left(n - i + 1 - \sum_{j=1}^{\max\{i-1,\, J\}} R_j\right), \quad R^* = n - m - \sum_{i=1}^{J} R_i, \quad \text{and} \quad \bar{F}_k(x_i) = 1 - F_k(x_i), \; k = 1, 2.

Substituting (1) and (2) into (7), the likelihood function of the observed sample data x can be written as

L(\vartheta; x) = C_J\, \theta_1^{m_1} \theta_2^{m_2}\, e^{\lambda \sum_{i=1}^{m} x_i} \exp\left[-\frac{A(x, \lambda)}{\lambda} \sum_{k=1}^{2} \theta_k\right], \qquad (8)

where

A(x, \lambda) = \sum_{i=1}^{m} \left(e^{\lambda x_i} - 1\right) + \sum_{i=1}^{J} R_i \left(e^{\lambda x_i} - 1\right) + R^* \left(e^{\lambda x_m} - 1\right). \qquad (9)
Ignoring the additive constant C_J, the log-likelihood function is given by

\ell(\vartheta; x) \propto \sum_{k=1}^{2} m_k \log \theta_k + \lambda \sum_{i=1}^{m} x_i - \frac{A(x, \lambda)}{\lambda} \sum_{k=1}^{2} \theta_k.

The first-order derivatives of the log-likelihood function with respect to θ_1, θ_2 and λ are given by

\frac{\partial \ell(\vartheta; x)}{\partial \theta_k} = \frac{m_k}{\theta_k} - \frac{A(x, \lambda)}{\lambda} = 0, \quad k = 1, 2, \qquad (10)

and

\frac{\partial \ell(\vartheta; x)}{\partial \lambda} = \sum_{i=1}^{m} x_i + \frac{\sum_{k=1}^{2} \theta_k}{\lambda} \left[\frac{A(x, \lambda)}{\lambda} - B(x, \lambda)\right] = 0, \qquad (11)

where A(x, λ) is given by (9) and B(x, \lambda) = \partial A(x, \lambda)/\partial \lambda = \sum_{i=1}^{m} x_i e^{\lambda x_i} + \sum_{i=1}^{J} R_i x_i e^{\lambda x_i} + R^* x_m e^{\lambda x_m}.

From Equation (10), the MLE of θ_k is given by

\hat{\theta}_k(\hat{\lambda}) = \frac{m_k \hat{\lambda}}{A(x, \hat{\lambda})}, \quad k = 1, 2. \qquad (12)
Theorem 1.
Assume that, under an adaptive Type-II progressive censoring scheme, the competing risks follow Gompertz distributions with different parameters θ_1 and θ_2. Then for θ_1 > 0, θ_2 > 0 and λ > 0, the MLE of θ_k, k = 1, 2, exists and is given by (12).
Proof. 
Since log t ≤ t − 1 holds for all t > 0, setting t = θ_k/θ̂_k yields

m_k \log \theta_k = m_k \log\frac{\theta_k}{\hat{\theta}_k} + m_k \log \hat{\theta}_k \leq m_k \frac{\theta_k}{\hat{\theta}_k} - m_k + m_k \log \hat{\theta}_k = \theta_k \frac{A(x, \lambda)}{\lambda} - m_k + m_k \log \hat{\theta}_k,

which implies that

\ell(\vartheta; x) \propto \sum_{k=1}^{2} m_k \log \theta_k + \lambda \sum_{i=1}^{m} x_i - \frac{A(x, \lambda)}{\lambda} \sum_{k=1}^{2} \theta_k \leq \sum_{k=1}^{2} \left[\theta_k \frac{A(x, \lambda)}{\lambda} - m_k + m_k \log \hat{\theta}_k\right] + \lambda \sum_{i=1}^{m} x_i - \frac{A(x, \lambda)}{\lambda} \sum_{k=1}^{2} \theta_k.

From (12), using

m_k = \hat{\theta}_k \frac{A(x, \lambda)}{\lambda},

we obtain

\ell(\vartheta; x) \leq \sum_{k=1}^{2} m_k \log \hat{\theta}_k + \lambda \sum_{i=1}^{m} x_i - \frac{A(x, \lambda)}{\lambda} \sum_{k=1}^{2} \hat{\theta}_k = \ell(\hat{\vartheta}; x).

Equality holds if and only if θ_k = θ̂_k, k = 1, 2. □
By omitting the constant and substituting θ̂_k(λ) into ℓ(ϑ; x), we obtain the profile log-likelihood function of λ:

H(\lambda) \propto m \left[\log \lambda - \log A(x, \lambda)\right] + \lambda \sum_{i=1}^{m} x_i. \qquad (13)

Shi and Wu [44] proved, using the Cauchy-Schwarz inequality, that the profile log-likelihood function H(λ) is concave. From this, we can conclude that H(λ) is unimodal and has a unique maximum, so most common iterative approaches can be used to determine the MLE of λ. The MLE λ̂ of λ satisfies the fixed-point equation

\lambda = g(\lambda), \qquad (14)

where

g(\lambda) = \left[\frac{B(x, \lambda)}{A(x, \lambda)} - \bar{x}\right]^{-1} \quad \text{with} \quad \bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i.
From (14), we can determine the estimate of the parameter λ by applying the simple iterative scheme described by Kundu [45]. Once the iteration results stabilize, the MLEs of the unknown parameters, say θ̂_1^{NR}, θ̂_2^{NR} and λ̂^{NR}, are obtained. The main process is as follows:
Step 1: 
Start with an initial guess of λ , say λ ( 0 ) and set l = 0 .
Step 2: 
Substitute λ^{(l)} into the right-hand side of Equation (14) to calculate λ^{(l+1)}.
Step 3: 
Stop the iterative procedure when λ ( l + 1 ) λ ( l ) < ε , where ε is a tolerable error.
Step 4: 
Once we obtain λ ^ N R , the MLEs of θ k , k = 1 , 2 can be obtained from (12), say θ ^ k N R , k = 1 , 2 .
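Steps 1-4 can be sketched as follows. This Python sketch implements the fixed-point iteration λ^{(l+1)} = g(λ^{(l)}) from (14), with a simple damped update added for numerical stability (an implementation choice of ours, not part of the paper's scheme), and then recovers θ̂_1 and θ̂_2 from (12). It is demonstrated on a simulated complete competing-risks sample (all R_i = 0, J = 0, R* = 0), a special case of the censored model:

```python
import numpy as np

def A_fun(lam, x, R, J, Rstar):
    # A(x, lam) = sum_i (e^{lam x_i}-1) + sum_{i<=J} R_i (e^{lam x_i}-1) + R*(e^{lam x_m}-1), Eq. (9)
    e = np.expm1(lam * x)
    return e.sum() + np.dot(R[:J], e[:J]) + Rstar * e[-1]

def B_fun(lam, x, R, J, Rstar):
    # B(x, lam) = dA/dlam
    e = x * np.exp(lam * x)
    return e.sum() + np.dot(R[:J], e[:J]) + Rstar * e[-1]

def gompertz_mle(x, R, J, Rstar, m1, m2, lam0=0.3, tol=1e-10, maxit=2000):
    x = np.sort(np.asarray(x, dtype=float))
    xbar = x.mean()
    lam = lam0
    for _ in range(maxit):
        # g(lam) = [B/A - xbar]^{-1}, Eq. (14)
        g = 1.0 / (B_fun(lam, x, R, J, Rstar) / A_fun(lam, x, R, J, Rstar) - xbar)
        if abs(g - lam) < tol:
            lam = g
            break
        lam = 0.5 * (lam + g)          # damped fixed-point step
    A = A_fun(lam, x, R, J, Rstar)
    return m1 * lam / A, m2 * lam / A, lam   # theta1_hat, theta2_hat, lam_hat (Eq. 12)

# demo on a simulated complete sample (no censoring)
rng = np.random.default_rng(7)
n, theta1, theta2, lam_true = 4000, 0.6, 0.9, 0.5
u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
x1 = np.log1p(-(lam_true / theta1) * np.log1p(-u1)) / lam_true
x2 = np.log1p(-(lam_true / theta2) * np.log1p(-u2)) / lam_true
x = np.minimum(x1, x2)
m1 = int((x1 <= x2).sum())
th1, th2, lh = gompertz_mle(x, np.zeros(n), 0, 0.0, m1, n - m1)
```

On this sample, the estimates land close to the generating values (0.6, 0.9, 0.5), consistent with the consistency of the MLEs.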
Using the invariance property of the MLE together with the MLEs θ̂_1^{NR}, θ̂_2^{NR} and λ̂^{NR}, the MLE of the reliability function can be determined from (6) as

\hat{S}^{NR}(t) = \exp\left[-\frac{\hat{\theta}_1^{NR} + \hat{\theta}_2^{NR}}{\hat{\lambda}^{NR}}\left(e^{\hat{\lambda}^{NR} t} - 1\right)\right], \quad t \geq 0.

3.2. Asymptotic Confidence Intervals

In this part, we use the MLEs to derive the asymptotic distribution and confidence intervals of ϑ = (θ_1, θ_2, λ). The Fisher information matrix, represented by I(ϑ) = E[−∂²ℓ(ϑ; x)/∂ϑ²], is the inverse of the asymptotic variance-covariance matrix of the MLE of the parameter vector ϑ. Additionally, I(ϑ) can be approximated by the symmetric observed Fisher information matrix I_obs(ϑ), which is readily obtained as

I_{obs}(\vartheta) = I_{obs}(\theta_1, \theta_2, \lambda) = \left(O_{ij}\right) = \left(-\frac{\partial^2 \ell(\vartheta; x)}{\partial \vartheta_i \partial \vartheta_j}\right), \quad \vartheta = (\vartheta_1, \vartheta_2, \vartheta_3) = (\theta_1, \theta_2, \lambda).
The components of the observed Fisher information matrix are determined from (10) and (11) as follows:

O_{11} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \theta_1^2} = \frac{m_1}{\theta_1^2}, \quad O_{12} = O_{21} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \theta_1 \partial \theta_2} = 0, \quad O_{22} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \theta_2^2} = \frac{m_2}{\theta_2^2},

O_{13} = O_{31} = O_{23} = O_{32} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \theta_1 \partial \lambda} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \theta_2 \partial \lambda} = \frac{B(x, \lambda)}{\lambda} - \frac{A(x, \lambda)}{\lambda^2},

and

O_{33} = -\frac{\partial^2 \ell(\vartheta; x)}{\partial \lambda^2} = \frac{2 A(x, \lambda)}{\lambda^3} \sum_{k=1}^{2} \theta_k - \frac{2 B(x, \lambda)}{\lambda^2} \sum_{k=1}^{2} \theta_k + \frac{C(x, \lambda)}{\lambda} \sum_{k=1}^{2} \theta_k,

where C(x, \lambda) = \partial B(x, \lambda)/\partial \lambda = \sum_{i=1}^{m} x_i^2 e^{\lambda x_i} + \sum_{i=1}^{J} R_i x_i^2 e^{\lambda x_i} + R^* x_m^2 e^{\lambda x_m}. The approximate confidence intervals for the unknown model parameters are based on the asymptotic distribution of the MLEs: it can easily be shown that, for large n,
ϑ ^ N R ϑ N ( 0 , v ( ϑ ^ N R ) ) ,
where v ( ϑ ^ N R ) is the Cramer-Rao lower bound, represented by the inverse matrix of I ( ϑ ) , say I o b s − 1 ( ϑ ^ N R ) , and we let
I o b s 1 ( ϑ ^ N R ) = V 11 V 12 V 13 V 21 V 22 V 23 V 31 V 32 V 33
be the variance-covariance matrix of ϑ ^ N R . Based on Slutsky’s Theorem, we can show that the pivotal quantities Z ϑ i = ( ϑ ^ i N R ϑ i ) / V i i , i = 1 , 2 , 3 , converge in distribution to the standard normal distribution. Therefore, two-sided 100 ( 1 α ) % approximate confidence intervals (ACIs) for ϑ are given by:
ϑ ^ i N R Z α / 2 V i i , i = 1 , 2 , 3 , ϑ ^ N R = θ ^ 1 N R , θ ^ 2 N R or λ ^ N R ,
where, Z α / 2 is the upper ( α / 2 ) -th point of the standard normal distribution.
The lower bounds of the approximate confidence intervals computed using the previous procedure occasionally take negative values. To get around this issue, the delta approach proposed by Green [46] can be used to construct the confidence intervals. The distribution of log ϑ ^ N R can be approximated by a normal distribution (Meeker and Escobar [47]); equivalently,
Z log θ = log ϑ ^ N R log ϑ Var ^ ( log ϑ ^ N R ) N ( 0 , 1 ) ; ϑ = θ 1 , θ 2 , or λ ,
where the Var ^ ( log ϑ ^ N R ) can be approximated by delta method as Var ^ ( log ϑ ^ N R ) = Var ^ ( ϑ ^ N R ) / ϑ ^ N R 2 . In light of this, the two-sided 100 ( 1 α ) % normal approximation confidence interval for a positive parameter like ϑ can be expressed as
ϑ ^ N R exp − Z ( 1 − α / 2 ) Var ^ ( ϑ ^ N R ) ϑ ^ N R , ϑ ^ N R exp Z ( 1 − α / 2 ) Var ^ ( ϑ ^ N R ) ϑ ^ N R ,
where ϑ ^ N R and Var ^ ( ϑ ^ N R ) are the MLE and asymptotic variance of ϑ ^ N R = θ ^ 1 N R , θ ^ 2 N R or λ ^ N R , respectively.
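As a minimal illustration of the log-transformed interval above (the function name is ours), the two bounds are the MLE multiplied by exp of minus/plus the normalized half-width:

```python
import math
from statistics import NormalDist

def log_normal_ci(est, var, alpha=0.05):
    """Two-sided 100(1-alpha)% CI for a positive parameter using the
    log-delta normal approximation: est * exp(-+ z * sd / est)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(var) / est
    return est * math.exp(-half), est * math.exp(half)
```

Note that both bounds are positive by construction, which is precisely what the plain normal interval fails to guarantee.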
It is evident that the variance of S ( t ) is required to compute its asymptotic confidence interval. For this purpose, the delta method is used again. The delta method is a statistical approach for deriving an approximate probability distribution of a function of an asymptotically normal estimator using a Taylor series approximation. If W ( ϑ ) is any function of ϑ , the variance of ϑ and the first derivative of W ( ϑ ) need to be considered in calculating the approximate variance of W ( ϑ ) . Using this method, the approximate variance of S ^ N R ( t ) is given by
V ( S ^ N R ( t ) ) ( S t θ 1 , S t θ 2 , S t λ ) V ( ϑ ^ N R ) ( S t θ 1 , S t θ 2 , S t λ ) T ,
where V ( ϑ ^ N R ) = I o b s 1 ( ϑ ^ N R ) is given by (19). Upon using the approximate variances of S ^ N R ( t ) given above, the 100 ( 1 α ) % asymptotic confidence intervals of S ( t ) with respect to NR method is given by
S ^ N R ( t ) exp − Z ( 1 − α / 2 ) V ( S ^ N R ( t ) ) S ^ N R ( t ) , S ^ N R ( t ) exp Z ( 1 − α / 2 ) V ( S ^ N R ( t ) ) S ^ N R ( t ) .
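The quadratic form g' V g above can be evaluated without deriving the analytic partial derivatives; the following sketch (function names ours) uses a central finite-difference gradient of S ( t ) at the MLE as a stand-in for them:

```python
import math

def S(t, th1, th2, lam):
    """Gompertz competing-risks reliability function, Eq. (6)."""
    return math.exp(-(th1 + th2) / lam * (math.exp(lam * t) - 1.0))

def delta_var_S(t, est, cov, h=1e-6):
    """Delta-method Var(S_hat(t)) = g' V g, with a numerical gradient of
    S at est = (th1, th2, lam); cov is the 3x3 inverse observed
    information matrix as a nested list."""
    grad = []
    for i in range(3):
        up, dn = list(est), list(est)
        up[i] += h
        dn[i] -= h
        grad.append((S(t, *up) - S(t, *dn)) / (2 * h))
    return sum(grad[i] * cov[i][j] * grad[j]
               for i in range(3) for j in range(3))
```

Because cov is positive semi-definite, the returned variance is nonnegative, as required for the interval above.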
The second derivatives of the log-likelihood are required for all iterations of the NR method, which can occasionally be challenging. Additionally, it is well known that the NR technique does not always converge and that the MLEs obtained by using it are highly sensitive to the starting parameter values. The stochastic expectation maximization (SEM) algorithm is used in the following paragraph to compute the MLEs and their asymptotic variance-covariance matrix.

3.3. MLE via Stochastic EM Algorithm

The expectation maximization (EM) algorithm is an effective method for estimating MLEs in a missing or incomplete data setting. Dempster and colleagues [48] developed the EM algorithm as an iterative method for computing MLEs. The algorithm estimates the parameters in two steps: the expectation step (E-step) and the maximization step (M-step). Assuming that incomplete data are observed, the E-step determines the expected value of the complete-data likelihood function. In the M-step, we find the estimates by maximizing the expected likelihood function derived in the E-step, and these values replace the preliminary estimates. We obtain the MLEs of the model parameters by repeating the E-step and the M-step until the estimates converge. Here, the data removed under the adaptive progressive Type-II censoring scheme can be viewed as missing data, so the EM algorithm is developed to obtain the MLEs of the parameters.
Let X = ( X 1 , X 2 , … , X m ) denote the observed data, and let Z i = ( Z i 1 , Z i 2 , … , Z i R i ) , i = 1 , 2 , … , J , and Z m = ( Z m 1 , Z m 2 , … , Z m R m ) denote the data censored at the times X i and X m , respectively. We represent Z as Z = { Z i j , j = 1 , 2 , … , R i , i = 1 , 2 , … , J } ∪ { Z m j , j = 1 , 2 , … , R m } , where R m = n − m − ∑ j = 1 J R j . Then, W = ( X , Z ) indicates the complete data in this case. In terms of competing risks with adaptive progressive Type-II censoring, the joint density function of W = ( X , Z ) is thus given by
L W = C J k = 1 2 i = 1 m k f k x i F ¯ 3 k x i I δ i * = k i = 1 J j = 1 R i f k z i j F ¯ 3 k z i j I δ i * = k × j = 1 R m f k z m j F ¯ 3 k z m j I δ i * = k ,
The corresponding log-likelihood function, with the exception of the constant term, is given by
L W = n 1 log θ 1 + n 2 log θ 2 + n θ 1 + θ 2 λ + λ i = 1 m x i + i = 1 J j = 1 R i z i j + j = 1 R m z m j θ 1 + θ 2 λ i = 1 m e λ x i + i = 1 J j = 1 R i e λ z i j + j = 1 R m e λ z m j ,
where the number of failures n k due to risk k is given by
n k = ∑ i = 1 n I ( δ i = k ) , k = 1 , 2 and n = n 1 + n 2 .
In the E-step, the pseudo-likelihood function must be constructed. This function is obtained from L W by substituting any function of z i j , say w ( z i j ) , by its conditional expectation E ( w ( z i j ) | z i j > x i ) , and w ( z m j ) by E ( w ( z m j ) | z m j > x m ) . Consequently, the pseudo-likelihood function is provided by
L W * = n 1 log θ 1 + n 2 log θ 2 + n θ 1 + θ 2 λ + λ i = 1 m x i + i = 1 J j = 1 R i E z i j | z i j > x i + j = 1 R m E z m j | z m j > x m θ 1 + θ 2 λ i = 1 m e λ x i + i = 1 J j = 1 R i E e λ z i j | z i j > x i + j = 1 R m E e λ z m j | z m j > x m ,
To obtain the MLEs of the unknown parameters θ 1 , θ 2 and λ , we next calculate the partial derivatives of L W * with respect to each of them. As a result, the following equations are obtained.
E L W * θ k | x = n k θ k + 1 λ n i = 1 m e λ x i i = 1 J j = 1 R i E e λ z i j | z i j > x i j = 1 R m E e λ z m j | z m j > x m
and
E L W * λ | x = n θ 1 + θ 2 λ 2 + i = 1 m x i + i = 1 J j = 1 R i E z i j | z i j > x i + j = 1 R m E z m j | z m j > x m θ 1 + θ 2 λ i = 1 m x i e λ x i + i = 1 J j = 1 R i E z i j e λ z i j | z i j > x i + j = 1 R m E z m j e λ z m j | z m j > x m , + θ 1 + θ 2 λ 2 i = 1 m e λ x i + i = 1 J j = 1 R i E e λ z i j | z i j > x i + j = 1 R m E e λ z m j | z m j > x m
Therefore, we need to calculate the following conditional expectations: E e λ t | t > x i , E t | t > x i and E t e λ t | t > x i , where the conditional distribution of z i j is a truncated Gompertz distribution with left truncation at x i ; see Ng et al. [15].
f Z i j | X i ( z i j | x i , θ 1 , θ 2 , λ ) = f Z i j ( z i j ; θ 1 , θ 2 , λ ) 1 F X i ( x i ; θ 1 , θ 2 , λ ) , z i j > x i ,
where f Z i j ( z i j ; θ 1 , θ 2 , λ ) and F X i ( x i ; θ 1 , θ 2 , λ ) are defined in (6). Hence,
f Z i j | X i ( z i j | x i , θ 1 , θ 2 , λ ) = θ 1 + θ 2 e λ z i j exp θ 1 + θ 2 λ e λ z i j e λ x i , z i j > x i .
In this situation, it is challenging to obtain the expectations required for the E-step in closed form, so we suggest estimating them by simulation. As a result, we can employ the SEM algorithm first proposed by Celeux and Diebolt [49]. In this algorithm, the E-step is replaced by an S-step, which is relatively simple to construct regardless of the underlying distribution and the missing data, a very appealing feature. Numerous studies demonstrate that the SEM algorithm outperforms the EM method; for more information, see Belaghi et al. [50] and Mitra and Balakrishnan [51].
Denoting by ϑ ( l ) = ( θ 1 ( l ) , θ 2 ( l ) , λ ( l ) ) the value of ϑ at the lth SEM cycle, the ( l + 1 )th cycle proceeds as follows.
The S-Step: First, we generate the missing samples z i j , i = 1 , 2 , … , J , j = 1 , 2 , … , R i , and z m j , j = 1 , 2 , … , R m , whose conditional distribution functions are given by
G i ( z i j ; θ 1 , θ 2 , λ | x i ) = F Z ( z i j ; ϑ ) − F X i ( x i ; ϑ ) 1 − F X i ( x i ; ϑ ) , z i j > x i ,
and
G m ( z m j ; θ 1 , θ 2 , λ | x m ) = F X m ( z m j ; ϑ ) − F X m ( x m ; ϑ ) 1 − F X m ( x m ; ϑ ) , z m j > x m .
To draw a random sample from Equation (31), we first generate a realization u of U ( 0 , 1 ) , and then obtain a realization of z i j as
z i j = F 1 ( u + ( 1 u ) F X i ( x i ; ϑ ) ) ,
where the inverse function of F ( . ) is represented by F 1 ( . ) . The conditional expectation can be approximated by the sample data as
E z i j | z i j > x i z i j , E e λ z i j | z i j > x i e λ z i j , and E z i j e λ z i j | z i j > x i z i j e λ z i j .
In other words, given x i and x m , z i j is a Gompertz variable left-truncated at x i . Given (31), a random realization of z is readily generated from G i ( z ; θ 1 ( l ) , θ 2 ( l ) , λ ( l ) | x i ) .
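The S-step draw described above can be sketched as follows, using the Gompertz CDF and quantile function for the combined parameter θ 1 + θ 2 (the function names are ours):

```python
import math
import random

def gompertz_cdf(x, theta, lam):
    """F(x) = 1 - exp(-(theta/lam)(e^(lam x) - 1))."""
    return 1.0 - math.exp(-theta / lam * (math.exp(lam * x) - 1.0))

def gompertz_inv(p, theta, lam):
    """Quantile function F^{-1}(p) of the Gompertz distribution."""
    return (1.0 / lam) * math.log(1.0 - (lam / theta) * math.log(1.0 - p))

def sample_left_truncated(xi, th1, th2, lam, rng=random):
    """One S-step draw: a Gompertz(th1 + th2, lam) variate left-truncated
    at xi, via z = F^{-1}(u + (1 - u) F(xi)), u ~ Uniform(0, 1)."""
    theta = th1 + th2
    u = rng.random()
    return gompertz_inv(u + (1.0 - u) * gompertz_cdf(xi, theta, lam),
                        theta, lam)
```

Each draw lies above the truncation point xi by construction, so the conditional expectations in the pseudo-likelihood can be approximated directly by these simulated values.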
The M-Step: Subsequently, from (28) the ML estimator of λ at the ( l + 1 )th stage is given by iterating the following fixed-point equation
λ ( l + 1 ) = n + W x , z , λ V x , z , λ + U x , z λ ( l ) 2 ( θ 1 ( l ) λ ( l ) + θ 2 ( l ) λ ( l ) ) V x , z , λ ,
where
U x , z = i = 1 m x i + i = 1 J j = 1 R i z i j + j = 1 R m z m j , V x , z , λ = i = 1 m x i e λ x i + i = 1 J j = 1 R i z i j e λ z i j + j = 1 R m z m j e λ z m j ,
W x , z , λ = i = 1 m e λ x i + i = 1 J j = 1 R i e λ z i j + j = 1 R m e λ z m j ,
and θ k ( l + 1 ) is given from (27) by
θ k ( l + 1 ) = λ n k W x , z , λ − n , k = 1 , 2 .
The MLEs of θ 1 , θ 2 , and λ based on the NR method can be used as the initial values in this algorithm. After an initial burn-in period ( K 0 ), the sequence { ϑ ( l ) , ϑ = θ 1 , θ 2 , or λ } is averaged to obtain an approximation of the MLEs ( θ ^ 1 S E M , θ ^ 2 S E M , λ ^ S E M ). Equivalently, the MLEs of the parameters are given by
ϑ ^ S E M = 1 K − K 0 ∑ l = K 0 + 1 K ϑ ( l ) , ϑ = θ 1 , θ 2 , or λ ,
where K = 1000 iterations are sufficient to estimate the parameters, and a burn-in period of K 0 = 100 iterations is sufficient under moderate missing data rates; see Ye and Ng [52]. Once we have obtained these estimates, we can use the invariance property of MLEs to estimate the reliability function, i.e.,
S ^ S E M ( t ) = exp ( θ ^ 1 S E M + θ ^ 2 S E M ) λ ^ S E M ( e λ ^ S E M t 1 ) , t 0 ,
Additionally, we can obtain the observed Fisher information matrix using the SEM algorithm. Inverting the observed information matrix yields the asymptotic variance-covariance matrix I o b s * − 1 ( ϑ ^ ) of the MLEs ( θ ^ 1 S E M , θ ^ 2 S E M , λ ^ S E M ) of θ 1 , θ 2 , and λ , and it is provided by
I o b s * 1 ( ϑ ^ ) = L W θ 1 2 L W θ 1 θ 2 L W θ 1 λ L W θ 2 2 L W θ 2 λ L W λ 2 θ 1 = θ ^ 1 S E M , θ 2 = θ ^ 2 S E M , λ = λ ^ S E M 1 .
The components of the observed Fisher information matrix can be found as
2 L W θ 1 2 = n 1 θ 1 2 , 2 L W θ 2 2 = n 2 θ 2 2 , 2 L W θ 1 θ 2 = 2 L W θ 2 θ 1 = 0 ,
2 L W λ 2 = 2 θ 1 + θ 2 n W x , z , λ λ 3 + 2 θ 1 + θ 2 W x , z , λ λ 2 θ 1 + θ 2 W x , z , λ λ ,
2 L W λ θ 1 = 2 L W λ θ 2 = W x , z , λ n λ 2 W x , z , λ λ ,
where,
W x , z , λ = i = 1 m x i e λ x i + i = 1 J j = 1 R i z i j e λ z i j + j = 1 R m z m j e λ z m j , W x , z , λ = i = 1 m x i 2 e λ x i + i = 1 J j = 1 R i z i j 2 e λ z i j + j = 1 R m z m j 2 e λ z m j .
Using the above variance-covariance matrix, one can derive the 100 ( 1 − α ) % confidence intervals for the parameters Ψ = ( θ 1 , θ 2 , λ , S ( t ) ) as follows
Ψ ^ S E M exp − Z ( 1 − α / 2 ) Var ^ ( Ψ ^ S E M ) Ψ ^ S E M , Ψ ^ S E M exp Z ( 1 − α / 2 ) Var ^ ( Ψ ^ S E M ) Ψ ^ S E M ,
where Ψ ^ S E M = ( θ ^ 1 S E M , θ ^ 2 S E M , λ ^ S E M , S ^ S E M ( t ) ) , Var ^ ( θ ^ 1 S E M ) , Var ^ ( θ ^ 2 S E M ) , and Var ^ ( λ ^ S E M ) are given from (37) and Var ^ ( S ^ S E M ( t ) ) can be obtained by using delta method.

4. Confidence Intervals via Parametric Bootstrap

As described in the previous section, normal approximations work well when the sample size is large. For a small sample size, however, the normality assumption may not hold. In this case, resampling methods, such as the bootstrap, offer more precise approximations of confidence intervals. To determine approximate confidence intervals for the Gompertz distribution's parameters θ 1 , θ 2 , λ , and S ( t ) , two bootstrap resampling techniques are suggested in this section.

4.1. Bootstrap-p Method

Based on the percentile parametric bootstrap (Boot-p) approach described by Efron and Tibshirani [53], we construct confidence intervals in this subsection. The procedure for obtaining percentile Boot-p confidence intervals is shown below.
Step 1: 
Create an adaptive progressively Type-II censored competing risks sample x 1 : m : n , x 2 : m : n , …, x m : m : n from the Gompertz distributions (Gompertz( θ 1 , λ ), Gompertz( θ 2 , λ )) based on θ 1 , θ 2 , λ , n, m, t, T, and the progressive censoring scheme ( R 1 , R 2 , … , R m ). Then calculate the MLEs Ψ ^ = ( θ ^ 1 , θ ^ 2 , λ ^ , S ^ ( t ) ) . To create the adaptive Type-II censored data set with two competing causes of failure from Gompertz lifetimes, we follow the steps below.
(i)
Generate m independent and identically distributed observations W 1 , W 2 , … , W m from the standard uniform distribution Uniform ( 0 , 1 ) .
(ii)
For the progressive censoring schemes R 1 , R 2 , , R m , set V i = W i 1 / ( i + R m + R m 1 + + R m i + 1 ) for i = 1 , , m .
(iii)
Evaluate U i = 1 − V m V m − 1 ⋯ V m − i + 1 , i = 1 , 2 , … , m . Then { U 1 , U 2 , … , U m } is a progressive Type-II censored sample from the Uniform ( 0 , 1 ) distribution.
(iv)
Thus, given initial values of θ 1 , θ 2 , λ , the sample data from Gompertz( θ k , λ ) under the progressive Type-II censoring scheme can be calculated by setting X i = 1 λ log 1 − λ θ k log 1 − U i , i = 1 , 2 , … , m and k = 1 , 2 .
(v)
For each i = 1 , … , m , if X i 1 ≤ X i 2 set δ i * = 1 and X i = X i 1 ; otherwise, if X i 1 > X i 2 , set δ i * = 2 and X i = X i 2 . Hence, the required progressively Type-II censored competing risks sample is ( X 1 : m : n , δ 1 * ) , ( X 2 : m : n , δ 2 * ) , … , ( X m : m : n , δ m * ) . Here, the random variables m 1 = ∑ i = 1 m I ( δ i * = 1 ) and m 2 = ∑ i = 1 m I ( δ i * = 2 ) describe the number of failures due to cause k , k = 1 , 2 .
Note that: One can generate the required progressively Type-II censored competing risks sample ( X 1 : m : n , X 2 : m : n , , X m : m : n , δ m * ) from Gompertz( θ 1 + θ 2 , λ ).
(vi)
Find the value of J that satisfies the condition X J : m : n < T < X J + 1 : m : n , then discard the sample X J + 2 : m : n , … , X m : m : n .
(vii)
Using the truncated distribution f ( x , θ 1 , θ 2 , λ ) 1 − F ( x J + 1 : m : n , θ 1 , θ 2 , λ ) , generate the first m − J − 1 order statistics X J + 2 : m : n , … , X m : m : n , where the sample size is n − ( J + 1 + ∑ i = 1 J R i ) . Then we have the following observations: ( X 1 : m : n , δ 1 * , R 1 ) , ( X 2 : m : n , δ 2 * , R 2 ) , … , ( X J : m : n , δ J * , R J ) , ( X J + 1 : m : n , δ J + 1 * , 0 ) , … , ( X m − 1 : m : n , δ m − 1 * , 0 ) , ( X m : m : n , δ m * , R m ) .
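Steps (i)-(v) can be sketched as follows; the adaptive adjustment of Steps (vi)-(vii) is omitted for brevity. Following the Note, lifetimes are drawn from Gompertz( θ 1 + θ 2 , λ ) and each cause label is assigned with probability θ k / ( θ 1 + θ 2 ), which is equivalent to the latent-pair construction here because the two cause-specific hazards share λ and are therefore proportional. The function name and this shortcut labelling are ours:

```python
import math
import random

def progressive_sample(n, m, R, theta1, theta2, lam, rng=random):
    """Progressively Type-II censored competing-risks sample of size m,
    with censoring scheme R (n = m + sum(R))."""
    assert n == m + sum(R)
    theta = theta1 + theta2
    W = [rng.random() for _ in range(m)]                         # step (i)
    # step (ii): V_i = W_i^(1 / (i + R_m + ... + R_{m-i+1}))
    V = [W[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    # step (iii): U_i = 1 - V_m V_{m-1} ... V_{m-i+1}
    U = [1.0 - math.prod(V[m - i:]) for i in range(1, m + 1)]
    # step (iv): invert the Gompertz CDF at theta = theta1 + theta2
    X = [(1.0 / lam) * math.log(1.0 - (lam / theta) * math.log(1.0 - u))
         for u in U]
    # step (v): assign a cause of failure to each ordered lifetime
    return [(x, 1 if rng.random() < theta1 / theta else 2) for x in X]
```

The resulting lifetimes come out already ordered, since the U i generated in step (iii) form an increasing sequence.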
Step 2: 
Create a bootstrap sample x 1 : m : n * , x 2 : m : n * , , x m : m : n * , from Gompertz ( θ ^ 1 , λ ^ ) and Gompertz ( θ ^ 2 , λ ^ ) as provided by the previous step based on n, m, ( R 1 , R 2 , , R m ), θ ^ 1 , θ ^ 2 , λ ^ , T.
Step 3: 
Using the bootstrap sample x 1 : m : n * , x 2 : m : n * , , x m : m : n * , determine the bootstrap estimates of Ψ = ( θ 1 , θ 2 , λ , S ( t ) ) , say Ψ ^ * = ( θ ^ 1 * , θ ^ 2 * , λ ^ * , S ^ * ( t ) ) .
Step 4: 
Repeat Steps 2 and 3 B times to obtain B estimates Ψ ^ * ( b ) , b = 1 , 2 , … , B .
Step 5: 
Get the bootstrap estimates in the form { Ψ ^ * 1 , Ψ ^ * 2 , …, Ψ ^ * B } by arranging Ψ ^ * ( b ) ={ θ ^ 1 * ( b ) , θ ^ 2 * ( b ) , λ ^ * ( b ) , S ^ * ( b ) ( t ) }, in ascending order, b = 1 , 2, …, B .
Step 6: 
The two-sided 100 ( 1 α ) % confidence interval for parameters θ 1 , θ 2 , λ , or S ( t ) is provided by
Ψ ^ Boot - p * B α / 2 , Ψ ^ Boot - p * B 1 − α / 2 , Ψ = θ 1 , θ 2 , λ , or S ( t ) ,
where [i] denotes the integer part of i.
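Steps 5 and 6 amount to sorting the bootstrap replicates and reading off two order statistics; a minimal sketch (function name ours, 0-based indexing, so the upper index is shifted down by one):

```python
def boot_p_interval(boot_estimates, alpha=0.05):
    """Percentile Boot-p 100(1-alpha)% interval from B bootstrap
    replicates of a single parameter."""
    s = sorted(boot_estimates)
    B = len(s)
    lo = s[int(B * alpha / 2)]
    hi = s[int(B * (1 - alpha / 2)) - 1]
    return lo, hi
```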

4.2. Bootstrap-t Method

When the sample size is modest ( m < 30 ), the bootstrap-t (Boot-t) approach, as described by [53], allows for the computation of the confidence interval for the parameters of interest. The subsequent procedure can be used to generate parametric Boot-t confidence intervals.
Step 1: 
Repeat Step 1 of the above technique to create an adaptive progressive Type-II censored competing risks sample, such as ( x 1 : m : n , x 2 : m : n , … , x m : m : n ), using Gompertz distributions. Next, compute the MLEs θ ^ 1 , θ ^ 2 , λ ^ and S ^ ( t ) of the unknown parameters θ 1 , θ 2 , λ and S ( t ) .
Step 2: 
Using θ ^ 1 , θ ^ 2 , and λ ^ , generate a bootstrap sample ( x 1 : m : n * , , x m : m : n * ) from Gompertz ( θ ^ 1 , λ ^ ) and Gompertz( θ ^ 2 , λ ^ ) and compute the bootstrap estimates Ψ ^ * = ( θ ^ 1 * , θ ^ 2 * , λ ^ * , S ^ * ( t ) ) . Further, using the Fisher information matrix and delta method, compute the variance of Ψ ^ * , say Var ^ Ψ ^ * , Ψ = θ 1 , θ 2 , λ or S ( t ) .
Step 3: 
Find the forthcoming statistics:
T i * = Ψ ^ i * Ψ ^ i Var ^ ( Ψ ^ i * ) , i = 1 , 2 , 3 , 4 .
Step 4: 
Steps (2) through (3) should be repeated B times.
Step 5: 
Assume that G ^ ( y ) = P ( T i * ≤ y ) represents the cumulative distribution function of T i * , i = 1 , 2 , 3 , 4 . For a given 0 < α < 1 , define Ψ ^ b o o t − t ( α ) = Ψ ^ + Var ^ ( Ψ ^ * ) G ^ − 1 ( α ) ; the approximate 100 ( 1 − α ) % confidence interval of Ψ is then given by
Ψ ^ Boot - t ( α 2 ) , Ψ ^ Boot - t ( 1 α 2 ) , Ψ ^ = θ ^ 1 , θ ^ 2 , λ ^ or S ^ ( t ) .

5. Bayesian Estimation Using MCMC

This section uses a Bayesian approach based on adaptive Type-II censored competing risks data to estimate the unknown parameters and reliability function of the Gompertz distribution. For the sake of simplicity, it is assumed that the unknown parameters θ 1 , θ 2 , and λ are independent and gamma distributed, θ 1 ∼ gamma( a 1 , b 1 ), θ 2 ∼ gamma( a 2 , b 2 ) and λ ∼ gamma( a 3 , b 3 ), respectively. The joint prior distribution can then be expressed as follows:
π θ 1 , θ 2 , λ     θ 1 a 1 1 θ 2 a 2 1 λ a 3 1 e b 1 θ 1 + b 2 θ 2 + b 3 λ , θ 1 , θ 2 , λ , a i , b i > 0 , i = 1 , 2 , 3 .
Thus, Equations (8) and (39) can be combined to yield the joint posterior distribution, and the resulting expression is given by
π * θ 1 , θ 2 , λ | x     θ 1 m 1 + a 1 1 θ 2 m 2 + a 2 1 λ a 3 1 e λ b 3 i = 1 m x i e b 1 θ 1 + b 2 θ 2 exp A ( x , λ ) k = 1 2 θ k λ ,
Under the SE and LINEX loss functions, the Bayes estimator of any function of θ 1 , θ 2 , and λ , say Φ ( θ 1 , θ 2 , λ ) , may be written as
Φ ^ B S = E Φ ( θ 1 , θ 2 , λ ) | x = 0 0 0 Φ ( θ 1 , θ 2 , λ ) π * θ 1 , θ 2 , λ | x d θ 1 d θ 2 d λ 0 0 0 π * θ 1 , θ 2 , λ | x d θ 1 d θ 2 d λ ,
and
Φ ^ L I N E X = 1 c log E e c Φ ( θ 1 , θ 2 , λ ) | x = 1 c log 0 0 0 e c Φ ( θ 1 , θ 2 , λ ) π * θ 1 , θ 2 , λ | x d θ 1 d θ 2 d λ 0 0 0 π * θ 1 , θ 2 , λ | x d θ 1 d θ 2 d λ ,
A numerical approach is needed because the Bayes estimators under the SE and LINEX loss functions cannot be obtained explicitly. Here, we recommend incorporating the Metropolis-Hastings (M-H) algorithm into the Gibbs sampling method to produce the Bayes estimates of θ 1 , θ 2 , and λ . First, we can write the full conditional posterior distributions of θ 1 , θ 2 , and λ as
π 1 * θ 1 | θ 2 , λ , x θ 1 m 1 + a 1 1 exp [ θ 1 ( b 1 + A ( x , λ ) λ ) ] gamma ( m 1 + a 1 , ( b 1 + A ( x , λ ) λ ) ) ,
π 2 * θ 2 | θ 1 , λ , x θ 2 m 2 + a 2 1 exp [ θ 2 ( b 2 + A ( x , λ ) λ ) ] gamma ( m 2 + a 2 , ( b 2 + A ( x , λ ) λ ) ) ,
and
π 3 * λ | θ 1 , θ 2 , x λ a 3 1 e λ ( b 3 i = 1 m x i ) exp { A ( x , λ ) k = 1 2 θ k λ } ,
Since Equation (43) cannot be reduced to standard form, the posterior sample for the parameter λ can be derived using the M-H algorithm (see Metropolis et al. [54], and Hastings [55]). The MCMC algorithm will carry out the following actions:
Step 1: 
Select an initial guess of ( θ 1 , θ 2 , λ ), indicated by ( θ 1 0 , θ 2 0 , λ 0 ), and set i = 1 .
Step 2: 
Generate λ i from π 3 * ( λ | θ 1 i − 1 , θ 2 i − 1 , x ) using the M-H method with the normal proposal distribution N λ i − 1 , V a r λ , where λ i − 1 is the current value of λ and V a r λ is the proposal variance.
Step 3: 
Generate θ k i from gamma ( m k + a k , ( b k + A ( x , λ i 1 ) λ i 1 ) ) , k = 1 , 2 .
Step 4: 
The reliability function can be calculated by
S i ( t ) = exp [ ( θ 1 i + θ 2 i ) λ i ( e λ i t 1 ) ] , t > 0 ,
Step 5: 
Set i = i + 1
Step 6: 
Repeat steps (2–5) N times to get the necessary number of samples ( θ 1 1 , θ 2 1 , λ 1 , S 1 ( t ) ), ( θ 1 2 , θ 2 2 , λ 2 , S 2 ( t ) ), …, ( θ 1 N , θ 2 N , λ N , S N ( t ) ). The first M samples are discarded as burn-in, and the remaining N − M samples are utilized to create the Bayesian estimates.
Step 7: 
The Bayes estimate of any function Φ ( θ 1 , θ 2 , λ ) for the SE and LINEX loss functions may now be calculated as
Φ ^ B S = 1 N M i = M + 1 N Φ ( θ 1 i , θ 2 i , λ i ) and Φ ^ L I N E X = 1 c log 1 N M i = M + 1 N e c Φ ( θ 1 i , θ 2 i , λ i ) ,
where Φ ( θ 1 , θ 2 , λ ) refers to the parameters θ 1 , θ 2 , λ , or S ( t ) .
Step 8: 
Order Φ ( M + 1 ) , Φ ( M + 2 ) , …, Φ ( N ) as Φ ( 1 ) < Φ ( 2 ) < ⋯ < Φ ( N − M ) . Then, the 100 ( 1 − α ) % Bayesian credible interval of Φ is given by Φ ( N − M ) α / 2 , Φ ( N − M ) 1 − α / 2 , where [ q ] denotes the integer part of q.
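The Gibbs-within-M-H sampler of Steps 1-8 can be sketched as follows. The function A ( x , λ ) from the likelihood is passed in as a callable, since its exact form is defined earlier in the paper; the function name, default settings, and the toy inputs in the test are ours:

```python
import math
import random

def gibbs_mh(x, m1, m2, A, a, b, n_iter=30000, burn=5000,
             lam0=0.003, prop_sd=5e-4, rng=random):
    """theta_1, theta_2 are drawn from their gamma full conditionals;
    lam is updated by an M-H step with a normal proposal (Steps 2-3).
    a = (a1, a2, a3) and b = (b1, b2, b3) are gamma hyperparameters."""
    sx = sum(x)

    def log_post_lam(lam, th1, th2):
        # log of pi_3*(lam | theta_1, theta_2, x), up to a constant
        if lam <= 0:
            return -math.inf
        try:
            return ((a[2] - 1) * math.log(lam) - lam * (b[2] - sx)
                    - (th1 + th2) * A(lam) / lam)
        except OverflowError:
            return -math.inf

    lam, th1, th2 = lam0, 1e-3, 1e-3
    chain = []
    for i in range(n_iter):
        # M-H update for lam
        cand = rng.gauss(lam, prop_sd)
        delta = log_post_lam(cand, th1, th2) - log_post_lam(lam, th1, th2)
        if rng.random() < math.exp(min(0.0, delta)):
            lam = cand
        # Gibbs updates for theta_1, theta_2 (gammavariate: shape, scale)
        rate = A(lam) / lam
        th1 = rng.gammavariate(m1 + a[0], 1.0 / (b[0] + rate))
        th2 = rng.gammavariate(m2 + a[1], 1.0 / (b[1] + rate))
        if i >= burn:
            chain.append((th1, th2, lam))
    return chain
```

Per Step 7, the SE-loss Bayes estimate of each parameter is the post-burn-in average of the corresponding chain component, and the LINEX estimate is obtained by averaging e to the minus c times the parameter before taking the logarithm.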

6. Analyzing Application Data

The significance of the theoretical findings discussed in the preceding sections is clarified in this section using examples from medicine and industry. The investigation of two real-world data sets supports the proposed point and interval estimates of the unknown parameters and the reliability function.

6.1. Application to Reticulum Cell Sarcoma

In the first application, we consider the data provided by Hoel [56], which were also analyzed by [26,27,29]. In this experiment, male mice received 300 roentgens of radiation at 5 to 6 weeks of age. The cause of death of each mouse was classified as follows: (1) thymic lymphoma, (2) reticulum cell sarcoma, or (3) other causes. Here, we classify reticulum cell sarcoma as cause 1, and the other two causes of death are combined to form cause 2. The data contain n = 77 observations, of which 38 are due to the first cause of death and 39 are due to the second cause of death.
Cause 1: 317, 318, 399, 495, 525, 536, 549, 552, 554, 557, 558, 571, 586, 594, 596, 605, 612, 621, 628, 631, 636, 643, 647, 648, 649, 661, 663, 666, 670, 695, 697, 700, 705, 712, 713, 738, 748, 753.
Cause 2: 40, 42, 51, 62, 163, 179, 206, 222, 228, 252, 259, 282, 324, 333, 341, 366, 385, 407, 420, 431, 441, 461, 462, 482, 517, 517, 524, 564, 567, 586, 619, 620, 621, 622, 647, 651, 686, 761, 763.
Assuming independent Gompertz distributions for the latent causes of failure, and using the hypotheses H 0 (the data follow the Gompertz distribution) and H 1 (the data do not follow the Gompertz distribution), a Chi-square ( χ 2 ) goodness-of-fit test as well as a Kolmogorov-Smirnov (K-S) test are applied to assess the goodness of fit of the proposed model to the two causes of failure. The values of the χ 2 and K-S test statistics are given in Table 1.
Based on these results, for the two causes of failure, one can say that at the 5 % level of significance, the χ 2 observed value is less than the χ 2 tabulated value, and the p-value is also quite large in this case. Thus, we cannot reject H 0 , and the data set is fitted well by our model. In the same way, the computed K-S test statistics are lower than the critical value of the K-S test statistic. Additionally, the p-values for the K-S test statistics for the Gompertz distribution are higher than the significance level (0.05), indicating that the Gompertz model fits the real data well. For further clarification, Figure 1 contains both the fitted and empirical CDFs of the Gompertz distribution based on the two causes (Figure 1a,b), computed at the estimated parameters. The figures show that the fitted and empirical distributions are very similar. As a result, the Gompertz model provides an excellent fit to the data for each cause of death.
By using the censoring scheme R 1 = R 2 = ··· = R 24 = 2 , R 25 = 4 and an ideal total test time T = 550 , we generate an adaptive progressively Type II censored sample of size m = 25 from the complete data. The generated data is obtained as:
(40, 2), (42,2), (51, 2), (62,2), (179, 2), (206, 2), (222, 2), (228, 2), (252, 2), (259, 2), (282, 2), (317, 1), (318, 1), (324, 2), (341, 2), (366, 2), (385, 2), (399, 1), (461, 2), (517, 2), (549, 1), (557, 1), (586, 1), (636, 1), (649, 1).
From the above generated data, we observed m 1 = 8 failures due to cause 1, m 2 = 17 failures due to cause 2, and only J = 21 failures before time T = 550 . Thus, we have R = ( 2 21 , 0 3 , 10 ) . This sample will be utilized to perform numerical calculations on the theoretical results obtained in the earlier sections.
The iteration method and the SEM algorithm, both covered in Section 3, are used to calculate the MLEs of the unknown parameters. Based on Hoel's data, we plot the profile log-likelihood function (13) before calculating the MLEs; see Figure 2a. From this figure, it can be seen that the profile log-likelihood function is unimodal with the mode falling between 0.003 and 0.004, indicating that the MLE of λ is unique.
Additionally, a graphical technique developed by [43] is used to calculate the MLE of the shape parameter λ . Figure 3a shows the curves of ( 1 λ ) and g λ based on Hoel [56] data. According to Figure 3a, the intersection of the two functions 1 λ and g λ is roughly at 0.00347. Therefore, to begin the iteration to determine the MLE of λ , we suggest choosing λ = 0.00347 as the initial value, and stopping the process when λ ( s + 1 ) λ ( s ) < 10 6 . The MLEs of θ 1 , θ 2 , and S ( t ) are computed based on NR method using the estimated initial value of λ , and the results are presented in Table 2 along with the estimated standard errors. The reliability function S ( t ) is computed at time t = 500 .
Next, we use the SEM method developed in Section 3.3 to compute the MLEs of θ 1 , θ 2 , λ and S ( t ) . For the SEM algorithm, the initial values of θ 1 , θ 2 and λ are established using the NR approach, and K = 5100 is taken as the number of SEM cycles. The first 100 cycles are employed as a burn-in period, and the following 5000 cycles are averaged to estimate the unknown parameters θ 1 , θ 2 , λ , and S ( t ) . The trace plots of these parameters against the SEM cycles are displayed in Figure 4. In this figure, the red horizontal lines represent the averaged SEM estimates, and the parameter values bounce around them without exhibiting an upward or downward trend. This signifies that the Markov chain { ϑ ( s ) } has reached a stationary distribution, so the average of the sequence { ϑ ( s ) } suffices to approximate the MLE. The standard errors (SEs) of the MLEs derived using the SEM technique are also computed and shown in Table 2. Using both the NR and the SEM techniques, the asymptotic 95% confidence intervals of θ 1 , θ 2 , λ , and S ( t ) are computed, and the results are presented in Table 3. Furthermore, the 95% confidence intervals computed using the Boot-p and Boot-t methods with B = 1000 bootstrap replications are also reported in Table 3.
The Bayes estimates of θ 1 , θ 2 , λ , and S ( t = 500 ) under the SE and LINEX loss functions will now be calculated using the MCMC samples. Since we have no prior information about the unknown parameters, we use noninformative gamma priors with a i = b i = 0 , i = 1 , 2 , 3 . Note that for the LINEX loss function, c > 0 implies that overestimation incurs a larger penalty than underestimation, and the converse is true for c < 0 . Additionally, for c near zero the LINEX loss function becomes symmetric and behaves similarly to the SE loss function.
When the LINEX loss function is considered, Bayes estimates are generated for two alternative values of c, where c = ± 10 3 . As mentioned in Section 5, the posterior analysis was conducted using a hybrid technique that combines the Gibbs chain with Metropolis-Hastings steps. To run the MCMC sampler, the initial values of the three parameters θ 1 , θ 2 and λ were taken to be their MLEs. With N = 30 , 000 samples and the first M = 5000 iterations serving as the burn-in period, we generate the Markov chain samples. The trace plots of the 25 , 000 MCMC outputs for the posterior distribution of θ 1 , θ 2 , λ , and S ( t ) are shown in Figure 5 (first row) to verify the MCMC method's convergence. Figure 5 also displays histograms (second row) of the samples generated using the M-H algorithm for θ 1 , θ 2 , λ , and S ( t ) . It is clear that the MCMC method converges extremely well. In Table 2 and Table 3, the point Bayes estimates of θ 1 , θ 2 , λ , and S ( t ) are reported together with the corresponding 95% credible intervals. The estimated standard errors of the Bayes estimates are also calculated and shown in Table 2.

6.2. Application to Breaking Strengths of Jute Fibres

Jute fibers have a wide range of applications and have become one of the most important fibers in the manufacture of bio-composites. For instance, jute fibers are mainly used in the textile industry, where they are used to make clothes, ropes, bed covers, bags, shoelaces, etc. To a large extent, jute fibers have also made their way into the automotive sector, where they are used to make cup holders, various parts of the instrument cluster, and door panels. According to a real-world data set published by Xia et al. [57], the failure data are the breaking strengths of jute fiber at two different gauge lengths. We denote δ i = 1 if the breaking strength of a jute fiber is measured at gauge length 10 mm and δ i = 2 if it is measured at gauge length 20 mm. The breaking strengths of jute fibres at the 10 mm and 20 mm gauge lengths are provided in Table 4. These two independent data sets represent two groups of breaking strength samples treated as competing risks data, say cause 1 and cause 2, respectively.
Before processing, it was determined whether these data sets could be analyzed using the Gompertz distributions. Let the random variables X 1 and X 2 be the breaking strengths of jute fiber at gauge lengths 10 mm and 20 mm, respectively. Based on the MLEs obtained via the NR method, we first compute the K-S statistics with the corresponding p-values between the fitted distribution and the empirical CDF for the two random variables X 1 and X 2 . Table 5 summarizes the results. The results do not allow us to reject the null hypothesis that the data come from the Gompertz distribution, for both cause 1 and cause 2. Figure 6 displays the fitted and empirical distribution functions; the two distributions are a reasonably close match for both random variables X 1 and X 2 .
The previous data set was utilized in this illustration to simulate an adaptive progressive Type-II censored sample with m = 25 , ideal total test time T = 350 , and a progressive censoring scheme R = ( 3 , 1 , 0 8 , 3 ) .
For clarity, ( 3 , 1 , 0 ) 2 is given as a short form of ( 3 , 1 , 0 , 3 , 1 , 0 ) . Thus, the observed adaptive progressive Type-II censored sample of size m drawn from the original complete sample of size n = 60 is
(36.75, 2), (43.93, 1), (45.58, 2), (48.01, 2), (50.16, 1), (71.46, 2), (83.55, 2), (99.72, 2), (108.94, 1), (113.85, 2), (116.99, 2), (119.86, 2), (151.48, 1), (163.4, 1), (177.25, 1), (183.16, 1), (187.13, 2), (200.16, 2), (212.13, 1), (284.64, 2), (323.83, 1), (350.7, 2), (353.24, 1), (375.81, 2), (383.43, 1).
Here, m 1 = 11 , m 2 = 14 , and J = 21 . Thus, we have R = ( 3 , 1 , 0 7 , 0 3 , 7 ) . To find an initial guess of λ , we display the profile log-likelihood function of λ in Figure 2; it is obvious that the profile log-likelihood is a unimodal function with a mode close to 0.004 . Furthermore, the position at which the two functions 1 / λ and g ( λ ) overlap in Figure 3b is quite close to 0.00403 . Then, according to Figure 2 and Figure 3, the initial value of λ can be taken as 0.004 . The MLEs of θ 1 , θ 2 , λ , and S ( t ) are obtained via both the NR method and the SEM algorithm with t = 125 . For the SEM algorithm, we used 5100 iterations, with the first 100 iterations used as burn-in. The trace plots of these parameters versus the SEM cycles are shown in Figure 7. Since Figure 7 indicates that the SEM iterations have converged, the parameters are estimated by averaging the iterations after the burn-in. Table 6 reports the MLEs from the NR method and the SEM algorithm of size 5000. Using noninformative gamma priors, Table 6 also includes the Bayes estimates of θ 1 , θ 2 , λ , and S ( t ) with respect to the squared error (SE) loss function and the LINEX loss function with c = ± 10 3 . The trace plots and histograms of the MCMC outputs of θ 1 , θ 2 , λ , and S ( t ) are presented in Figure 8. It is evident from Figure 8 that the MCMC procedure converges very well. Finally, the 95% asymptotic confidence intervals, bootstrap confidence intervals (Boot-p and Boot-t), and Bayes credible intervals for θ 1 , θ 2 , λ , and S ( t ) are tabulated in Table 7.
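The adaptive progressive Type-II censoring mechanism used above can be sketched as follows. This is an illustrative implementation under assumed parameter values and a self-consistent synthetic scheme, not the authors' Mathematica code: latent cause-specific lifetimes are drawn from Gompertz(θ j , λ) by inversion, the observed time is their minimum, and planned removals R i are suspended once the ideal test time T has passed.

```python
import numpy as np

rng = np.random.default_rng(20224274)

def rgompertz(theta, lam, size, rng):
    # Inverse-CDF sampling from F(x) = 1 - exp(-(theta/lam)(e^{lam x} - 1))
    u = rng.uniform(size=size)
    return np.log1p(-(lam / theta) * np.log1p(-u)) / lam

def adaptive_pt2_competing(theta1, theta2, lam, n, m, T, R, rng):
    """Adaptive progressive Type-II censored competing-risks sample.

    R is the planned removal scheme (length m, sum n - m).  Once a failure
    is observed after time T, no further planned removals are made, and at
    the m-th failure every remaining unit is withdrawn.
    """
    x1 = rgompertz(theta1, lam, n, rng)      # latent cause-1 lifetimes
    x2 = rgompertz(theta2, lam, n, rng)      # latent cause-2 lifetimes
    t = np.minimum(x1, x2)                   # observed failure time
    d = np.where(x1 <= x2, 1, 2)             # failure-cause indicator
    order = np.argsort(t)
    alive_t, alive_d = list(t[order]), list(d[order])
    sample, passed_T = [], False
    for i in range(m):
        ti, di = alive_t.pop(0), alive_d.pop(0)   # next failure among survivors
        sample.append((ti, di))
        passed_T = passed_T or ti > T
        r = len(alive_t) if i == m - 1 else (0 if passed_T else R[i])
        for _ in range(min(r, len(alive_t))):     # random withdrawals
            k = int(rng.integers(len(alive_t)))
            alive_t.pop(k)
            alive_d.pop(k)
    return sample

n, m, T = 30, 15, 1.0
R = [n - m] + [0] * (m - 1)                  # Scheme I of Section 7
data = adaptive_pt2_competing(0.05, 0.06, 1.8, n, m, T, R, rng)
print(len(data), data[0])
```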
In terms of the estimated standard errors, the MCMC technique outperforms the ML method via the NR or SEM algorithm. Further, it is observed from Table 7 that the Boot-t intervals have shorter lengths than the other intervals.
As can be seen in the previous examples, the outcomes of all estimates are similar. It should be noted that the MLEs produced by the SEM algorithm have the lowest standard errors. As a result, the performance of the ML estimates obtained using the SEM algorithm is often superior to that of the estimates obtained using the NR and MCMC methods with noninformative priors. A numerical simulation is needed to compare all the methods accurately, clearly, and objectively.

7. Simulation Study

In this section, we conduct a simulation analysis to evaluate the effectiveness of the several estimation approaches for the unknown parameters and the reliability function covered in the preceding sections. We generate adaptive Type-II progressive censored samples with competing risks from the Gompertz models by employing the algorithm described in Section 2 for total sample sizes n = ( 30 , 60 , 80 ) , failure sample sizes m = ( 15 , 25 , 40 , 50 , 60 ) , and various censoring schemes. The true values of θ 1 , θ 2 and λ are taken to be 0.05, 0.06, and 1.8, respectively. To create the appropriate samples, the following progressive censoring schemes are considered:
Scheme I: R 1 = n − m and R 2 = ··· = R m = 0 ,
Scheme II: R 1 = R 2 = ··· = R n − m = 1 and R n − m + 1 = ··· = R m = 0 ,
Scheme III: R 1 = R 2 = ··· = R m − 1 = 0 and R m = n − m .
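These removal patterns can be built programmatically; a small helper sketch (the function name `scheme` is introduced here for illustration, and Scheme II assumes n − m ≤ m so that the pattern fits):

```python
def scheme(kind, n, m):
    # Return the removal vector (R_1, ..., R_m) for the three designs above
    if kind == "I":            # all n - m removals at the first failure
        return [n - m] + [0] * (m - 1)
    if kind == "II":           # one removal at each of the first n - m failures
        assert n - m <= m, "Scheme II needs n - m <= m"
        return [1] * (n - m) + [0] * (2 * m - n)
    if kind == "III":          # conventional Type-II: all removals at the m-th failure
        return [0] * (m - 1) + [n - m]
    raise ValueError(kind)

for kind in ("I", "II", "III"):
    R = scheme(kind, 30, 15)
    print(kind, R, sum(R))     # every scheme removes n - m = 15 units in total
```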
It should be noted that the first scheme is the left censoring scheme, where n − m units are removed from the test at the time of the first failure; the second scheme is a usual Type-II progressive censoring scheme; and the third scheme is the conventional Type-II censoring scheme, where n − m units are removed at the time of the mth failure. Using the NR and SEM algorithm approaches, we compute the MLEs of the unknown parameters θ 1 , θ 2 , λ as well as the reliability function S ( t = 0.9 ) based on the generated data. We use the parameters' true values as starting points for the SEM algorithm. Additionally, to apply the SEM algorithm, we run the iterative procedure up to K = 1100 iterations, with K 0 = 100 serving as the burn-in sample. We utilize the NMaximize command of the Mathematica 11 package to solve the nonlinear equations and obtain the MLEs of the parameters. Under the SE and LINEX loss functions, gamma prior distributions are used to obtain the Bayes estimates of the unknown parameters. Two distinct priors are considered. First, we examine non-informative priors for the three parameters θ 1 , θ 2 , and λ ; in this case, we choose the hyper-parameters such that a i = b i = 0 , i = 1 , 2 , 3 . It is instructive to use a second, informative prior in which the hyper-parameters are chosen so that the prior expectations equal the corresponding true parameter values, i.e., a 1 = 1 , a 2 = 3 , a 3 = 9 , and b 1 = 20 , b 2 = 50 , b 3 = 5 . This helps us to see how much the informative prior contributes to the results obtained from the observed data. Additionally, when computing the Bayes estimates with respect to the LINEX loss function, we take c = − 2.0 and 2.0 , which, respectively, give more weight to underestimation and overestimation. These calculations are based on 10,000 MCMC samples using Gibbs within the Metropolis method.
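Once MCMC draws are available, the SEL and LINEX Bayes estimates reduce to simple functionals of the posterior sample: the posterior mean, and −(1/c) log E[e^{−cθ}], respectively. A minimal sketch (the gamma draws below are a stand-in posterior sample, not output of the paper's sampler):

```python
import numpy as np

def bayes_sel(draws):
    # Bayes estimate under squared-error loss: the posterior mean
    return float(np.mean(draws))

def bayes_linex(draws, c):
    # Bayes estimate under LINEX loss: -(1/c) * log E[exp(-c * theta)]
    draws = np.asarray(draws, dtype=float)
    return float(-np.log(np.mean(np.exp(-c * draws))) / c)

rng = np.random.default_rng(0)
draws = rng.gamma(shape=9.0, scale=0.2, size=10_000)  # stand-in posterior, mean 1.8

est_sel = bayes_sel(draws)
est_under = bayes_linex(draws, c=-2.0)   # c < 0: underestimation penalized more
est_over = bayes_linex(draws, c=2.0)     # c > 0: overestimation penalized more
print(est_sel, est_under, est_over)
# By Jensen's inequality: bayes_linex(c=2) <= posterior mean <= bayes_linex(c=-2)
```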
The accuracy of the point estimates (ML and Bayes) is compared in terms of the bias and mean squared error (MSE) values in these settings. When evaluating the various interval estimates, we consider the average interval lengths and the coverage probabilities (CPs). The scheme with the lowest MSE of the estimator is considered to be the best one. In Table 8, Table 9, Table 10 and Table 11, we show the biases and MSEs of the proposed estimates of the unknown parameters and the reliability function. The results are presented for two different values of T ( 0.8 , 1.5 ). For the NR and SEM algorithms, the bootstrap methods (Boot-p and Boot-t), and the MCMC intervals (with non-informative prior (NIP) and informative prior (IP)), the average lengths (ALs) and coverage probabilities (CPs) of the 95% confidence intervals are provided in Table 12, Table 13, Table 14 and Table 15. The ALs and CPs are evaluated and summarized for various censoring combinations using 1000 sets of random samples, and the bootstrap confidence intervals are obtained in our simulations after B = 1000 resamples.
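These evaluation criteria can be computed from the Monte Carlo replications as follows. This is an illustrative sketch with simulated stand-in estimates (the helper names are ours), not the paper's recorded output:

```python
import numpy as np

def bias_mse(estimates, true_value):
    # Average bias and mean squared error over Monte Carlo replications
    est = np.asarray(estimates, dtype=float)
    return float(est.mean() - true_value), float(((est - true_value) ** 2).mean())

def al_cp(lowers, uppers, true_value):
    # Average length and coverage probability of interval estimates
    lowers, uppers = np.asarray(lowers), np.asarray(uppers)
    al = float((uppers - lowers).mean())
    cp = float(((lowers <= true_value) & (true_value <= uppers)).mean())
    return al, cp

rng = np.random.default_rng(1)
true_theta1 = 0.05
reps = true_theta1 + rng.normal(0.0, 0.01, size=1000)   # stand-in point estimates
b, mse = bias_mse(reps, true_theta1)
al, cp = al_cp(reps - 0.02, reps + 0.02, true_theta1)   # stand-in intervals
print(round(b, 4), round(mse, 5), round(al, 3), round(cp, 3))
```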
From the results of Table 8, Table 9, Table 10 and Table 11, we can draw the following conclusions:
  • The MSEs of the MLEs obtained by the NR and SEM approaches, as well as those of the Bayes estimates under the SEL and LINEX loss functions, decrease when T and n are fixed but m increases.
  • When T is fixed but n and m increase, the MSEs of all estimates generally decrease.
  • In most cases, when n and m are fixed but T increases, the MSEs increase.
  • In general, all the point estimates are quite effective, because the corresponding average biases and MSEs are very small; both tend to zero as n and m increase.
  • We can see from the simulation results that the Bayes estimates perform better than the other estimates. Compared to all other estimates, the Bayes estimates based on the informative prior (IP) show smaller biases and MSEs. Moreover, the SEM method performs better than the NR method and the Bayes estimates with uninformative priors (NIP).
  • The best MSEs for the estimates of θ 1 , θ 2 , and λ are those based on the Bayes estimates under LINEX ( c = 2 ), while c = − 2 is the better option for S ( t ) under the LINEX loss function.
  • It is clear from the ALs and CPs of all confidence intervals (see Table 12, Table 13, Table 14 and Table 15) that the Bayes credible intervals based on the IP offer shorter widths and higher coverage probabilities than the other approaches, so for interval estimation we advise adopting the Bayesian approach. Furthermore, we see that ML via the NR technique yields the longest ALs. A comparison of the two approximation methods shows that the ALs of the confidence intervals obtained using the SEM algorithm are smaller than those obtained using the NR method. In terms of smaller ALs and greater CPs, the Bayes estimates based on informative priors perform better than those based on noninformative priors for the two Bayesian intervals. Furthermore, among the bootstrap-type intervals, the Boot-p strategy provides more precise confidence interval estimates than the Boot-t method. Additionally, for all approaches, the ALs get shorter and the 95% CPs get closer to 0.95 as the sample sizes n and m increase.
Although the Bayes estimators outperform all other estimators, the simulation results show that all point and interval estimation methods are efficient. The Bayes technique may be preferred if sufficient prior knowledge is available.

8. Conclusions

This paper analyzes the Gompertz competing risks model under adaptive progressive Type-II censoring and presents some statistical inferences. The latent lifetime distributions are assumed to have the same shape parameter but different scale parameters. The point and interval estimators were developed based on the Bayesian approach and classical frequentist theory, respectively. Because explicit expressions for the MLEs of some parameters cannot be constructed, we turn to a stochastic EM technique for support. Utilizing the observed Fisher information matrix, the approximate confidence intervals of the MLEs, as well as bootstrap confidence intervals, have been studied. We then proceed to the Bayesian technique, where the Bayes estimates are generated under independent gamma priors with respect to the squared error and LINEX loss functions. The posterior distributions of some unknown parameters do not follow well-known distributions, so we employ M-H sampling within the Gibbs sampling steps to compute the Bayes estimates with the corresponding credible intervals. The performance of all of the aforementioned approaches was then directly compared in a simulation study. Based on the simulation results, we conclude that the Bayes method can be adopted for estimating and constructing approximate confidence intervals for the unknown parameters when the available data are adaptive progressive Type-II censored with competing risks from independent Gompertz distributions. It was also observed that the performance of the SEM algorithm is very good, and even better than that of the NR method. Finally, the Gompertz distribution was applied to actual medical and industrial data and was found to represent these data accurately enough to be trusted for examining similar real data in those fields.
There is still a lot of future work related to this paper. For instance, the development of optimal censoring schemes, statistical prediction for competing risks models, and inference for competing risks models with more failure causes can be investigated in the future.
Risk-group analysis should be the basis for all patient procedures. Data mining methods can be applied to extract knowledge and develop a data model of all the risk factors to be used in medical predictions and discoveries. In this field of study, experts can identify differences in patient survival and calculate confidence intervals for survival. This may be a topic for further investigation.

Author Contributions

Conceptualization, E.A.A. and M.M.S.; methodology, E.A.A.; software, M.A.A.; validation, E.A.A. and M.A.A.; formal analysis, M.A.A.; investigation, M.M.S.; resources, M.A.A.; data curation, M.M.S.; writing—original draft preparation, E.A.A. and M.M.S.; writing—review and editing, E.A.A.; visualization, E.A.A.; supervision, E.A.A.; project administration, M.A.A.; funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Editor, the Assistant Editor and the anonymous reviewers for their detailed comments and valuable suggestions, which significantly improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gompertz, B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philos. Trans. R. Soc. Lond. 1825, 115, 513–585. [Google Scholar]
  2. Willemse, W.; Koppelaar, H. Knowledge Elicitation of Gompertz’ Law of Mortality. Scand. Actuar. J. 2000, 2, 168–180. [Google Scholar] [CrossRef]
  3. Rodriguez, O.T.; Gutiérrez, R.A.C.; Javier, A.L.H. Modeling and prediction of COVID-19 in Mexico applying mathematical and computational models. Chaos Solitons Fractals 2020, 138, 109946. [Google Scholar] [CrossRef]
  4. Jia, L.; Li, K.; Jiang, Y.; Guo, X.; Zhao, T. Prediction and analysis of coronavirus disease 2019. arXiv 2020, arXiv:2003.05447. [Google Scholar]
  5. Jaheen, Z.F. Prediction of Progressive Censored Data from the Gompertz Model. Commun. Stat. Simul. Comput. 2003, 32, 663–676. [Google Scholar] [CrossRef]
  6. Wu, S.J.; Chang, C.T.; Tsai, T.R. Point and interval estimations for the Gompertz distribution under progressive Type-II censoring. Metron 2003, 61, 403–418. [Google Scholar]
  7. Wu, J.W.; Tseng, H.C. Statistical inference about the shape parameter of the Weibull distribution by upper record values. Stat. Pap. 2006, 48, 95–129. [Google Scholar] [CrossRef]
  8. Ismail, A.A. Bayes estimation of Gompertz distribution parameters and acceleration factor under partially accelerated life tests with Type-I censoring. J. Stat. Comput. Simul. 2010, 80, 1253–1264. [Google Scholar] [CrossRef]
  9. Ismail, A.A. Planning step-stress life tests with Type-II censored Data. Sci. Res. Essay 2011, 6, 4021–4028. [Google Scholar]
  10. Soliman, A.A.; Abd-Ellah, A.H.; Abou-Elheggag, N.A.; Abd-Elmougod, G.A. Estimation of the parameters of life for Gompertz distribution using progressive first-failure censored data. Comput. Stat. Data Anal. 2012, 56, 2471–2485. [Google Scholar] [CrossRef]
  11. Soliman, A.A.; Sobhi, M.M.A. Bayesian MCMC inference for the Gompertz distribution based on progressive first-failure censoring data. Amer. Instit. Phys. Conf. Ser. 2015, 1643, 25–134. [Google Scholar]
  12. Wu, M.; Shi, Y. Bayes estimation and expected termination time for the competing risks model from Gompertz distribution under progressively hybrid censoring with binomial removals. J. Comput. Appl. Math. 2016, 300, 420–431. [Google Scholar] [CrossRef]
  13. El-Din, M.M.; Abdel-Aty, Y.; Abu-Moussa, M.H. Statistical inference for the Gompertz distribution based on Type-II progressively hybrid censored data. Comm. Stat. Simul. Comput. 2017, 46, 6242–6260. [Google Scholar] [CrossRef]
  14. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  15. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressively censoring scheme. Nav. Res. Logist. 2009, 56, 687–698. [Google Scholar] [CrossRef] [Green Version]
  16. Sobhi, M.M.A.; Soliman, A.A. Estimation for the exponentiated Weibull model with adaptive Type-II progressive censored schemes. Appl. Math. Model. 2016, 40, 1180–1192. [Google Scholar] [CrossRef]
  17. Nassar, M.; Abo-Kasem, O.E. Estimation of the inverse Weibull parameters under adaptive Type-II progressive hybrid censoring scheme. J. Comput. Appl. Math. 2017, 315, 228–239. [Google Scholar] [CrossRef]
  18. Sewailem, M.F.; Baklizi, A. Inference for the log-logistic distribution based on an adaptive progressive Type-II censoring scheme. Cogent Math. Stat. 2019, 6, 1684228. [Google Scholar] [CrossRef]
  19. Xu, R.; Gui, W. Entropy Estimation of Inverse Weibull Distribution under Adaptive Type-II Progressive Hybrid Censoring Schemes. Symmetry 2019, 11, 1463. [Google Scholar]
  20. Panahi, H.; Moradi, N. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive Type II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345. [Google Scholar] [CrossRef]
  21. Chen, S.; Gui, W. Statistical Analysis of a Lifetime Distribution with a Bathtub-Shaped Failure Rate Function under Adaptive Progressive Type-II Censoring. Mathematics 2020, 8, 670. [Google Scholar] [CrossRef]
  22. Mohan, R.; Chacko, M. Estimation of parameters of Kumaraswamy-exponential distribution based on adaptive Type-II progressive censored schemes. J. Stat. Comput. Simul. 2021, 91, 81–107. [Google Scholar] [CrossRef]
  23. Hora, R.; Kabdwal, N.C.; Srivastava, P. Classical and Bayesian Inference for the Inverse Lomax Distribution Under Adaptive Progressive Type-II Censored Data with COVID-19. J. Reliab. Stat. Stud. 2022, 505–534. [Google Scholar] [CrossRef]
  24. Amein, M.M.; El-Saady, M.; Shrahili, M.M.; Shafay, A.R. Statistical Inference for the Gompertz Distribution Based on Adaptive Type-II Progressive Censoring Scheme. Math. Probl. Eng. 2022, in press. [CrossRef]
  25. Crowder, M.J. Classical Competing Risks; Chapman and Hall/CRC: London, UK, 2001. [Google Scholar]
  26. Kundu, D.; Kannan, N.; Balakrishnan, N. Analysis of progressively censored competing risks data. Handb. Stat. 2004, 23, 331–348. [Google Scholar]
  27. Pareek, B.; Kundu, D.; Kumar, S. On progressive censored competing risks data for Weibull distributions. Comput. Stat. Data Anal. 2009, 53, 4083–4094. [Google Scholar] [CrossRef]
  28. Kundu, D.; Pradhan, B. Bayesian analysis of progressively censored competing risks data. Sankhya Ser. B. 2011, 73, 276–296. [Google Scholar] [CrossRef]
  29. Cramer, E.; Schmiedt, A.B. Progressively Type-II censored competing risks data from Lomax distributions. Comput. Stat. Data Anal. 2011, 55, 1285–1303. [Google Scholar] [CrossRef]
  30. Ashour, S.K.; Nassar, M.M.A. Analysis of generalized exponential distribution under adaptive Type-II progressive hybrid censored competing risks data. Int. J. Adv. Stat. Probab. 2014, 2, 108–113. [Google Scholar] [CrossRef]
  31. Mao, S.; Shi, Y.M.; Sun, Y.D. Exact inference for competing risks model with generalized Type I hybrid censored exponential data. J. Stat. Comput. Simul. 2014, 84, 2506–2521. [Google Scholar] [CrossRef]
  32. Ahmadi, K.; Yousefzadeh, F.; Rezaei, M. Analysis of progressively Type-I interval censored competing risks data for a class of an exponential distribution. J. Stat. Comput. Simul. 2016, 86, 3629–3652. [Google Scholar] [CrossRef]
  33. Dey, A.K.; Jha, A.; Dey, S. Bayesian analysis of modified Weibull distribution under progressively censored competing risk model. arXiv 2016, arXiv:1605.06585. [Google Scholar]
  34. Ashour, S.K.; Nassar, M.M.A. Inference for Weibull distribution under adaptive Type-I progressive hybrid censored competing risks data. Commun. Stat. Theory Methods 2017, 46, 4756–4773. [Google Scholar] [CrossRef]
  35. Hemmati, F.; Khorram, E. On Adaptive Progressively Type-II Censored Competing Risks Data. Commun. Stat. Simul. Comput. 2017, 46, 4671–4693. [Google Scholar] [CrossRef]
  36. Azizi, F.; Haghighi, F.; Tabibi Gilani, N. Statistical inference for competing risks model under progressive interval censored Weibull data. Commun. Stat. Simul. Comput. 2018, 7, 1–14. [Google Scholar] [CrossRef]
  37. Chacko, M.; Mohan, R. Bayesian analysis of Weibull distribution based on progressive Type-II censored competing risks data with binomial removals. Comput. Stat. 2019, 34, 233–252. [Google Scholar] [CrossRef]
  38. Baghestani, A.R.; Hosseini-Baharanchi, F.S. An improper form of Weibull distribution for competing risks analysis with Bayesian approach. J. Appl. Stat. 2019, 46, 2409–2417. [Google Scholar] [CrossRef]
  39. Qin, X.; Gui, W. Statistical inference of Burr-XII distribution under progressive Type-II censored competing risks data with binomial removals. J. Comput. Appl. Math. 2020, 378, 112922. [Google Scholar] [CrossRef]
  40. Davies, K.F.; Volterman, W. Progressively Type-II censored competing risks data from the linear exponential distribution. Commun. Stat. Theory Methods 2022, 51, 1444–1460. [Google Scholar] [CrossRef]
  41. Ren, J.; Gui, W. Statistical analysis of adaptive Type-II progressively censored competing risks for Weibull models. Appl. Math. Model. 2021, 98, 323–342. [Google Scholar] [CrossRef]
  42. Lodhi, C.; Tripathi, Y.M.; Bhattacharya, R. On a progressively censored competing risks data from Gompertz distribution. Commun. Stat. Simul. Comput. 2021, 1–22. [Google Scholar] [CrossRef]
  43. Balakrishnan, N.; Kateri, M. On the maximum likelihood estimation of parameters of Weibull distribution based on complete and censored data. Stat. Probab. Lett. 2008, 78, 2971–2975. [Google Scholar] [CrossRef]
  44. Shi, Y.; Wu, M. Statistical analysis of dependent competing risks model from Gompertz distribution under progressively hybrid censoring. SpringerPlus 2016, 5, 1–14. [Google Scholar] [CrossRef] [Green Version]
  45. Kundu, D. On hybrid censored Weibull distribution. J. Stat. Plan. Inference 2007, 137, 2127–2142. [Google Scholar] [CrossRef] [Green Version]
  46. Greene, W.H. Econometric Analysis, 4th ed.; Prentice Hall: New York, NY, USA, 2000. [Google Scholar]
  47. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 1998. [Google Scholar]
  48. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol). 1977, 39, 1–22. [Google Scholar]
  49. Celeux, G.; Diebolt, J. The SEM algorithm: A probabilistic teacher algorithm derived from the EM algorithm for the mixture problem. Comput. Stat. Q. 1986, 2, 73–82. [Google Scholar]
  50. Belaghi, R.A.; Asl, M.N.; Singh, S. On estimating the parameters of the Burr XII model under progressive Type-I interval censoring. J. Stat. Comput. Simul. 2017, 87, 3132–3151. [Google Scholar] [CrossRef]
  51. Mitra, D.; Balakrishnan, N. Statistical inference based on left truncated and interval censored data from log-location-scale family of distributions. Commun. Stat. Simul. Comput. 2021, 50, 1073–1093. [Google Scholar] [CrossRef]
  52. Ye, Z.; Ng, H.K.T. On analysis of incomplete field failure data. Ann. Appl. Stat. 2014, 8, 1713–1727. [Google Scholar]
  53. Efron, B.; Tibshirani, R. An Introduction to the Bootstrap; Chapman & Hall: New York, NY, USA, 1994. [Google Scholar]
  54. Metropolis, N.; Rosenbluth, A.; Rosenbluth, M.; Teller, A.; Teller, E. Equations of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef] [Green Version]
  55. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  56. Hoel, D.G. A representation of mortality data by competing risks. Biometrics 1972, 28, 475–488. [Google Scholar] [CrossRef]
  57. Xia, Z.P.; Yu, J.Y.; Cheng, L.D.; Liu, L.F.; Wang, W.M. Study on the breaking strength of jute fibres using modified Weibull distribution. Compos. Part A Appl. Sci. Manuf. 2009, 40, 54–59. [Google Scholar] [CrossRef]
Figure 1. Empirical cumulative distribution functions (black lines) and fitted parametric cumulative distribution functions (red dashed lines) for the data from Hoel (1972). Panels (a,b) represent causes 1 and 2 of death, respectively.
Figure 2. Profile log-likelihood function of the shape parameter λ .
Figure 3. Graphical technique for obtaining the initial value of λ ( 1 / λ is the solid red line, and g ( λ ) is the dashed line).
Figure 4. Trace plots of SEM samples based on the data from Hoel (1972). Horizontal lines are the estimated parameter values.
Figure 5. MCMC trace plots (first row) and histograms (second row) of θ 1 , θ 2 , λ , and S ( t ) for the Hoel (1972) data. Dashed lines (…) represent the posterior means, and solid lines (—) represent the lower and upper bounds of the 95% probability interval.
Figure 6. Empirical cumulative distribution functions (black lines) and fitted parametric cumulative distribution functions (red dashed lines) for the data from Xia et al. (2009). Panels (a,b) represent cause 1 and cause 2 (gauge lengths 10 mm and 20 mm), respectively.
Figure 7. Trace plots of SEM samples based on the data from Xia et al. (2009). Horizontal lines are the estimated parameter values.
Figure 8. MCMC trace plots (first row) and histograms (second row) of θ 1 , θ 2 , λ , and S ( t ) for the Xia et al. (2009) data. Dashed lines (…) represent the posterior means, and solid lines (—) represent the lower and upper bounds of the 95% probability interval.
Table 1. The test statistics for Chi-square ( χ 2 ) and Kolmogorov-Smirnov (K-S) for Hoel (1972) data.
Data | ( θ ^ , λ ^ ) | χ 2 (Observed) | χ 2 (Tabulated) | p-Value | K-S | p-Value
Cause 1 | (2.16 × 10 −6 , 013354) | 1.9745 | 5.9915 | 0.6274 | 0.0613 | 0.9988
Cause 2 | (0.000520, 0.00462) | 2.7280 | 9.4877 | 0.6043 | 0.0744 | 0.9822
Table 2. Point estimate and standard error (SE) of θ 1 , θ 2 , λ and S(t) for data from Hoel (1972).
Method | θ 1 Estimate | θ 1 SE | θ 2 Estimate | θ 2 SE | λ Estimate | λ SE | S ( t = 500 ) Estimate | S ( t = 500 ) SE
NR | 0.000117 | 2.38 × 10 −6 | 0.000248 | 4.36 × 10 −6 | 0.00348 | 0.000044 | 0.61112 | 0.002411
SEM | 0.000160 | 2.32 × 10 −6 | 0.000165 | 2.38 × 10 −6 | 0.003941 | 0.000016 | 0.607149 | 0.001885
MCMC (SEL) | 0.000124 | 2.33 × 10 −6 | 0.000264 | 4.10 × 10 −6 | 0.00338 | 0.000037 | 0.614227 | 0.002959
MCMC (LINEX, c = − 10 3 ) | 0.000126 | – | 0.000269 | – | 0.003838 | – | 0.875585 | –
MCMC (LINEX, c = + 10 3 ) | 0.000123 | – | 0.000259 | – | 0.002973 | – | 0.221293 | –
Table 3. Point and interval estimates of θ 1 , θ 2 , λ and S(t) for the data from Hoel (1972).
Method | θ 1 | θ 2 | λ | S ( t = 500 )
NR | (3.2 × 10 −7 , 0.000331) | (0.000035, 0.000462) | (0.001298, 0.005656) | (0.492959, 0.729277)
SEM | (0.000064, 0.000257) | (0.000066, 0.000264) | (0.002998, 0.004884) | (0.513890, 0.700407)
Boot-p | (0.0000221, 0.000096) | (0.000017, 0.000113) | (0.002673, 0.006567) | (0.774804, 0.875795)
Boot-t | (0.000102, 0.000116) | (0.000169, 0.000237) | (0.003434, 0.003580) | (0.619715, 0.629505)
MCMC | (0.000041, 0.000268) | (0.000108, 0.000501) | (0.001640, 0.005303) | (0.454243, 0.752024)
Table 4. Breaking strengths of jute fiber under different gauge length from Xia et al. (2009).
Cause 1: Data with gauge length 10 mm
43.93 50.16 101.15 123.06 108.94 141.38 151.48 163.40 177.25 183.16
212.13 257.44 262.90 291.27 303.90 323.83 353.24 376.42 383.43 422.11
506.60 530.55 590.48 637.66 671.49 693.73 700.74 704.66 727.23 778.17
Cause 2: Data with gauge length 20 mm
36.75 45.58 48.01 71.46 83.55 99.72 113.85 116.99 119.86 145.96
166.49 187.13 187.85 200.16 244.53 284.64 350.70 375.81 419.02 456.60
547.44 578.62 581.60 585.57 594.29 662.66 688.16 707.36 756.70 765.14
Table 5. The test statistics for Chi-square ( χ 2 ) and Kolmogorov-Smirnov (K-S) from Xia et al. (2009).
Data | ( θ ^ , λ ^ ) | χ 2 (Observed) | χ 2 (Tabulated) | p-Value | K-S | p-Value
Cause 1 | (0.00118, 0.00273) | 7.76162 | 11.0705 | 0.1699 | 0.1020 | 0.9138
Cause 2 | (0.00158, 0.00208) | 5.47388 | 11.0705 | 0.3608 | 0.1426 | 0.5753
Table 6. ML and Bayes estimates of θ 1 , θ 2 , λ , and S(t) for Xia et al. (2009) data.
Parameter | Criteria | MLE (NR) | MLE (SEM) | MCMC (SEL) | MCMC (LINEX, c = − 10 3 ) | MCMC (LINEX, c = + 10 3 )
θ 1 | Estimate | 0.00058 | 0.000605 | 0.00081 | 0.00087 | 0.00075
θ 1 | SE | 0.501 × 10 −4 | 0.353 × 10 −4 | 0.686 × 10 −4 | – | –
θ 2 | Estimate | 0.00074 | 0.000605 | 0.00102 | 0.00111 | 0.00095
θ 2 | SE | 0.604 × 10 −4 | 0.353 × 10 −4 | 0.809 × 10 −4 | – | –
λ | Estimate | 0.00404 | 0.004713 | 0.00215 | 0.00457 | 0.00092
λ | SE | 0.359 × 10 −3 | 0.153 × 10 −3 | 0.411 × 10 −3 | – | –
S ( t = 125 ) | Estimate | 0.80644 | 0.815362 | 0.77626 | 0.93759 | 0.57593
S ( t = 125 ) | SE | 0.948 × 10 −2 | 0.711 × 10 −2 | 1.085 × 10 −2 | – | –
Table 7. 95% confidence intervals of θ 1 , θ 2 , λ , and S(t) for Xia et al. (2009) data.
Parameter | 95% CIs | NR | SEM | Boot-p | Boot-t | Bayes
θ 1 | Lower | 0.00009 | 0.00025 | 0.00003 | 0.00032 | 0.00028
θ 1 | Upper | 0.00117 | 0.00096 | 0.00068 | 0.00060 | 0.00159
θ 1 | Length | 0.00108 | 0.00071 | 0.00065 | 0.00038 | 0.00131
θ 2 | Lower | 0.00015 | 0.00025 | 0.00009 | 0.00008 | 0.00038
θ 2 | Upper | 0.00133 | 0.00096 | 0.00101 | 0.00078 | 0.00192
θ 2 | Length | 0.00118 | 0.00071 | 0.00092 | 0.00070 | 0.00154
λ | Lower | 0.00052 | 0.00304 | 0.00283 | 0.00376 | 0.7 × 10 −9
λ | Upper | 0.00756 | 0.00639 | 0.00757 | 0.00472 | 0.00643
λ | Length | 0.00704 | 0.00335 | 0.00474 | 0.00096 | 0.00643
S ( t = 125 ) | Lower | 0.71355 | 0.74292 | 0.82547 | 0.81030 | 0.66687
S ( t = 125 ) | Upper | 0.89933 | 0.88780 | 0.92886 | 0.85015 | 0.87576
S ( t = 125 ) | Length | 0.18578 | 0.14488 | 0.10339 | 0.03985 | 0.20889
Table 8. Average values of the biases (first row) and MSEs (second row) for MLEs and Bayes estimators of θ 1 under informative prior (IP) and noninformative (NIP) prior and different censoring schemes, when ( θ 1 , θ 2 , λ ) = (0.05, 0.06, 1.8).
T = 0.8 | MLEs: NR, SEM | SEL: NIP, IP | LINEX ( c = − 2 ): NIP, IP | LINEX ( c = 2 ): NIP, IP
( n , m ) | Sc. | Statistic | NR | SEM | NIP | IP | NIP | IP | NIP | IP
(30, 15)IBias0.0029810.0035610.0254900.0040650.0302890.0051100.0216100.003089
MSE0.0017200.0011080.0048060.0006700.0060400.0007200.0039370.000627
IIBias−0.001878−0.0046750.0386690.0023920.0432240.0031400.0346300.001778
MSE0.0013500.0007940.0058590.0004710.0067100.0004970.0051520.000453
IIIBias−0.002942−0.0088420.0345420.0000800.0389060.0008300.030680−0.000636
MSE0.0014100.0009800.0052520.0004380.0060360.0004590.0046050.000421
(30, 25)IBias−0.0004620.0044440.0100350.0022130.0116240.0028900.0086100.001561
MSE0.0009300.0009360.0015110.0004750.0017040.0004980.0013600.000454
IIBias−0.0004620.0044440.0094090.0023820.0108830.0030500.0080700.001745
MSE0.0009300.0007360.0013360.0004660.0014880.0004880.0012140.000447
IIIBias−0.0008890.0029600.0129980.0019690.0150030.0026300.0112200.001337
MSE0.0009100.0008230.0018850.0004280.0021580.0004480.0016680.000410
(60, 40)IBias0.0001250.0011790.0062450.0020900.0069590.0025400.0055600.001651
MSE0.0005400.0004210.0007980.0003530.0008460.0003640.0007550.000342
IIBias0.000029−0.0019360.0080950.0021170.0089660.0025400.0072700.001703
MSE0.0005500.0004230.0008600.0002980.0009210.0003070.0008060.000289
IIIBias−0.000600−0.0021800.0078370.0009950.0087240.0014000.0070000.000597
MSE0.0005600.0004920.0009400.0002810.0010080.0002890.0008800.000274
(60, 50)IBias−0.0000970.0028500.0046330.0019950.0051550.0023700.0041300.001635
MSE0.0004000.0003830.0004880.0002860.0005080.0002940.0004700.000279
IIBias−0.0001120.0031270.0045330.0019070.0050340.0022600.0040500.001558
MSE0.0004100.0003860.0004780.0002830.0004970.0002910.0004620.000276
IIIBias−0.0008970.0019760.0045970.0011500.0051670.0015200.0040500.000792
MSE0.0004300.0004160.0005500.0002750.0005740.0002820.0005280.000268
(80, 65)IBias−0.0007650.0013010.0026840.0010080.0030510.0012900.0023300.000729
MSE0.0003100.0002720.0003560.0002380.0003670.0002430.0003470.000234
IIBias−0.0011520.0014030.0024700.0009160.0028270.0011900.0021200.000645
MSE0.0003000.0002810.0003350.0002200.0003440.0002240.0003270.000216
IIIBias0.0000040.0013340.0040860.0016640.0044980.0019600.0036900.001375
MSE0.0003500.0003220.0004180.0002480.0004320.0002540.0004050.000243
T = 1.5
(30, 15)IBias0.001502−0.0003090.0238550.0038960.0285370.0049300.0200600.002928
MSE0.0015200.0009380.0044000.0006400.0055390.000687 0.0036020.000599
IIBias−0.000454−0.0044520.0387770.0019660.0434370.002770 0.034660.001204
MSE0.0016600.0008990.0058610.0005100.0067540.000538 0.0051260.000485
IIIBias−0.000171−0.008871 0.0386450.0023350.0433110.0031400.0345200.001566
MSE0.0016700.0009280.0059280.0005060.0068350.000534 0.0051790.000481
(30, 25)IBias−0.0015690.0041090.0086760.0016010.0101500.0022700.007340.000963
MSE0.0008000.0008310.0012890.0004320.0014410.0004520.0011660.000414
IIBias0.0021090.0065790.0125610.0038980.0140800.0045800.0111800.003240
MSE0.0009000.0008770.0016940.0004930.0018740.0005190.0015450.000471
IIIBias−0.0002910.0032730.0136090.0021180.0156070.0027800.0118400.001485
MSE0.0008700.0008430.0017610.0004180.0020100.0004380.0015630.000401
(60, 40)IBias0.0002070.0030180.0062310.0021020.0069490.0025600.0055500.001661
MSE0.0005700.0004570.0007350.0003670.0007750.0003790.0007000.000356
IIBias0.0003280.0012480.0054310.0023150.0059910.0027000.0048900.001940
MSE0.0004800.0003960.0005930.0003320.0006190.0003420.0005700.000323
IIIBias0.000520−0.0003140.0088700.0023480.0097800.0027800.0080100.001931
MSE0.0005600.0005010.0009360.0003030.0010060.0003130.0008740.000294
(60, 50)IBias−0.0008240.0016670.0038930.0012640.0044060.0016300.0034000.000909
MSE0.0004200.0003930.0005060.0002970.0005260.0003050.0004880.000290
IIBias0.0000960.0033170.0044410.0019870.0049230.0023400.0039700.001642
MSE0.0004000.0003910.0004850.0002920.0005030.0002990.0004680.000284
IIIBias0.0000630.0019390.0056300.0017000.0062220.0020700.0050600.001336
MSE0.0004800.0004310.0006650.0002990.0006990.0003070.0006330.000292
(80, 65)IBias0.0005280.0041560.0045380.0021140.0045380.0024100.0037800.001826
MSE0.0003200.0003780.0003900.0002420.0003900.0002480.0003660.000237
IIBias0.0004550.0021820.0036660.0020100.0040100.0022800.0033300.001741
MSE0.0003100.0003040.0003570.0002520.0003670.0002570.0003470.000247
IIIBias0.0005710.0015270.0047640.0021240.0051800.0024200.0043600.001832
MSE0.0003100.0002760.0003800.0002210.0003930.0002260.0003680.000216
Table 9. Average values of the biases (first row) and MSEs (second row) for MLEs and Bayes estimators of θ 2 under informative prior (IP) and noninformative (NIP) prior and different censoring schemes, when ( θ 1 , θ 2 , λ ) = (0.05, 0.06, 1.8).
T = 0.8 | MLEs (NR, SEM) | SEL (NIP, IP) | LINEX c = −2 (NIP, IP) | LINEX c = 2 (NIP, IP)
(n, m) | Sc. | Statistic | NR | SEM | NIP | IP | NIP | IP | NIP | IP
(30, 15)IBias0.0045340.0001180.0321010.0061390.0387070.0075820.0268300.004795
MSE0.0025500.0011610.0073200.0009780.0094180.0010660.0058560.000904
IIBias−0.001875−0.0094070.0444290.0011950.0503370.0022120.0392100.000230
MSE0.0019700.0011250.0076280.0006660.0088620.0005990.0066120.000539
IIIBias−0.002673−0.0116890.0428480.0014970.0487610.0025150.0376500.000531
MSE0.0019500.0010390.0070930.0005310.0082720.0005610.0061310.000525
(30, 25)IBias0.000943−0.0061700.0138890.0041750.0161450.0051350.0118900.003263
MSE0.0013800.0009080.0023050.0007310.0026430.0007740.0020430.000693
IIBias−0.000564−0.0065370.0116850.0034580.0136960.0043650.0098900.002593
MSE0.0011700.0008030.0018020.0006300.0020320.0006640.0016210.000599
IIIBias−0.001422−0.0081320.0150440.0019860.0177360.0028680.0127000.001144
MSE0.0012700.0009290.0025930.0005010.0029930.0005310.0022810.000514
(60, 40)IBias0.000053−0.0000550.0073300.0024240.0082910.0030330.0064200.001834
MSE0.0007500.0005640.0010680.0004850.0011360.0005030.0010060.000469
IIBias−0.000804−0.0042450.0086580.0018220.0098040.0023850.0075800.001275
MSE0.0007200.0005520.0011080.0003880.0011940.0004010.0010320.000376
IIIBias0.000663−0.0042570.0110410.0024990.0123070.0030710.0098600.001945
MSE0.0008800.0005310.0015050.0004470.0016290.0004630.0013970.000433
(60, 50)IBias−0.000555−0.0030300.0050740.0020070.0057750.0025040.0044000.001524
MSE0.0005400.0004280.0006580.0003870.0006880.0003990.0006310.000376
IIBias−0.000798−0.0049220.0058350.0017980.0066140.0023000.0050900.001309
MSE0.0005600.0004350.0007270.0003630.0007640.0003740.0006940.000353
IIIBias0.000044−0.0036230.0066530.0023390.0074570.0028480.0058800.001843
MSE0.0006600.0005050.0008460.0004150.0008890.0004280.0008060.000403
(80, 65)IBias−0.000375−0.0027750.0038210.0018280.0043290.0022200.0033300.001444
MSE0.0004200.0003390.0004820.0003160.0004990.0003230.0004670.000308
IIBias−0.001150−0.0025000.0031950.0013700.0036840.0017470.0027200.001001
MSE0.0004100.0003630.0004650.0003090.0004790.0003160.0004530.000303
IIIBias−0.000066−0.0029560.0048350.0019400.0053950.0023400.0042900.001548
MSE0.0004600.0003510.0005540.0003200.0005750.0003290.0005350.000313
T = 1.5
(30, 15)IBias−0.000278−0.0039660.0252520.0032030.0310590.0045340.0206200.001962
MSE0.0020000.0010940.0054450.0007820.0069370.000846 0.0044260.000729
IIBias0.000239−0.0055840.0477900.0037770.0540310.004861 0.0423000.002750
MSE0.0020700.0012060.0078920.0006380.0092260.000659 0.0068010.000596
IIIBias−0.001637−0.012405 0.0444070.0023120.0504260.0033610.0391200.001317
MSE0.0019500.0011850.0074500.0006120.0087040.000650 0.0064310.000579
(30, 25)IBias−0.002069−0.0068320.0101280.0018950.0120990.0027930.0083600.001039
MSE0.0010200.0007770.0016200.0005630.0018260.0005920.0014570.000537
IIBias0.001919−0.0033870.0141080.0045060.0160930.0054230.0123200.003632
MSE0.0011100.0009000.0018470.0006250.0020700.0006610.0016660.000584
IIIBias0.0004370.0085930.0162060.0026770.0181620.0037990.0144300.001619
MSE0.0012700.0008550.0027420.0005660.0030470.0006050.0024860.000533
(60, 40)IBias0.000595−0.0024140.0078750.0030770.0088550.0036990.0069400.002474
MSE0.0007600.0005380.0009920.0004790.0010530.0004980.0009370.000462
IIBias0.0000050.0001510.0060880.0024830.0068310.0029930.0053700.001986
MSE0.0006200.0005670.0007740.0004290.0008120.0004430.0007400.000416
IIIBias0.0004680.0020820.0104220.0022860.0113370.0030040.0095500.001594
MSE0.0008100.0005870.0013060.0004180.0013840.0004370.0012360.000402
(60, 50)IBias−0.001201−0.0041700.0044440.0013940.0051370.0018860.003780.000916
MSE0.0005700.0004670.0006840.0003980.0007150.0004100.0006570.000388
IIBias0.000258−0.0023400.0054420.0025830.0061010.0030650.0048100.002114
MSE0.0005600.0004930.0006710.0004080.0007010.0004200.0006450.000396
IIIBias0.0003980.0041190.0062590.0019050.0068430.0025320.0056900.001299
MSE0.0005600.0004760.0007200.0003550.0007470.0003690.0006940.000349
(80, 65)IBias0.000608−0.0016540.0049970.0025350.0055200.0029350.004490.002144
MSE0.0004500.0003600.0005390.0003470.0005590.0003560.0005210.000338
IIBias0.000155−0.0021290.0039890.0020620.0044480.0024270.0035400.001705
MSE0.0003900.0003420.0004440.0003060.0004580.0003140.0004310.000300
IIIBias0.0002940.0032680.0047250.0017560.0051440.0022540.0043200.001270
MSE0.0005100.0003710.0006230.0003510.0006390.0003660.0006070.000345
Table 10. Average values of the biases (first row) and MSEs (second row) for MLEs and Bayes estimators of λ under informative prior (IP) and noninformative (NIP) prior and different censoring schemes, when ( θ 1 , θ 2 , λ ) = (0.05, 0.06, 1.8).
T = 0.8 | MLEs (NR, SEM) | SEL (NIP, IP) | LINEX c = −2 (NIP, IP) | LINEX c = 2 (NIP, IP)
(n, m) | Sc. | Statistic | NR | SEM | NIP | IP | NIP | IP | NIP | IP
(30, 15)IBias0.1791620.1149000.0305580.0823850.3039080.204413−0.234290−0.028933
MSE0.3359000.1702860.3778860.0740670.5741490.1262210.3816410.055574
IIBias0.3280800.375084−0.1414890.1132660.5900840.310275−0.709730−0.052580
MSE0.7629500.4564611.0100900.0751671.7284900.1827961.1181500.049245
IIIBias0.3474670.426425−0.1140990.1186210.6099370.314043−0.687460−0.046693
MSE0.7518800.5160070.9638290.0755341.7113100.1830351.0791900.068319
(30, 25)IBias0.1364970.1293890.0635290.0751850.2124060.164106−0.082030−0.009066
MSE0.1791500.1553680.1806000.0666760.2450540.0979230.1681430.0529610
IIBias0.1595880.1365010.0810280.0773930.2516960.174322−0.082940−0.012808
MSE0.2157200.1712230.2076200.0693930.3200550.1085270.1710660.052445
IIIBias0.1760440.1863460.0680270.0914510.2865790.202572−0.143810−0.011334
MSE0.254550.2010250.2660550.0742880.3864590.1194400.2620450.066133
(60, 40)IBias0.0755320.0450760.0327020.0483650.1167050.109324−0.049650−0.010646
MSE0.1013000.0681810.1031670.0540860.1244990.0689570.0968150.047279
IIBias0.1097600.1447120.0335700.0620490.1897160.155824−0.118980−0.026133
MSE0.1596500.1233320.1674530.0605920.2183870.0887280.1708700.050997
IIIBias0.0970880.1283390.0156680.0538730.1709580.146834−0.136660−0.033404
MSE0.1814700.1145360.1971740.0702620.2415200.0979030.2069140.061001
(60, 50)IBias0.0603590.0501710.0266130.0369090.0928670.087208−0.038610−0.012216
MSE0.0730300.0638480.0723140.0433330.0853030.0528520.0686690.039243
IIBias0.0672310.0454860.0301820.0384770.1043620.093667−0.042500−0.015081
MSE0.0872100.0690060.0844710.0480720.1059990.0612420.0756730.041768
IIIBias0.0808670.0789590.0362170.0502970.1302020.116715−0.054920−0.013599
MSE0.1022700.0737160.1040550.0515600.1282170.0670040.0989950.045397
(80, 65)IBias0.0536790.0472670.0291300.0360330.0793630.076585−0.020630−0.003885
MSE0.0585900.0477960.0576630.0388980.0660120.0456630.0546790.035593
IIBias0.0767840.0550050.0477020.0502110.1100150.098807−0.0134400.002782
MSE0.0710400.0551290.0691850.0434940.0869020.0552660.0603480.037008
IIIBias0.0545730.0583940.0211730.0345860.0932160.089135−0.049360−0.018463
MSE0.0778900.0564250.0784810.0457610.0917600.0556560.0761840.042101
T = 1.5
(30, 15)IBias0.2065490.1607380.0608770.0983530.3369040.221013−0.206350−0.013571
MSE0.3137200.1993220.3432090.0765870.5295870.1321200.3508840.055150
IIBias0.3142320.318476−0.1612130.1102230.5703950.306423−0.733670−0.055412
MSE0.6971800.4049960.9405260.0707201.6057900.1749451.0929800.046494
IIIBias0.3393020.504470−0.1286730.1199820.6013030.317356−0.70398−0.046703
MSE0.7295500.5959490.9640920.0723671.7229600.1808831.0844900.058519
(30, 25)IBias0.1401060.1147950.0687140.0768180.2185040.166279−0.076300−0.007634
MSE0.1643000.1361960.1617950.0601360.2291240.0910310.1471940.046948
IIBias0.0978240.0925870.0243980.0529230.1635190.138061−0.112640−0.027771
MSE0.1543500.1354580.1606050.0602520.2081220.0852090.1580880.051080
IIIBias0.1516350.1657380.0468970.0773840.2039650.216469−0.10682−0.048438
MSE0.2349900.1737740.2595610.0699840.3256330.1265880.2516300.056881
(60, 40)IBias0.0740500.0560770.0309440.0450800.1156280.106147−0.052120−0.014060
MSE0.1064000.0781090.1061680.0557030.1281690.0701020.10012400.049318
IIBias0.0796310.0508040.0307250.0452490.1111460.104335−0.049930−0.012780
MSE0.1039900.0726580.1019970.0565090.1222530.0707090.0959970.049923
IIIBias0.0991330.0937910.0205450.0583030.1373850.176871−0.09486−0.051435
MSE0.1801000.1131240.1933430.0695400.2240460.1089070.1944150.054740
(60, 50)IBias0.0750590.0683690.0408810.0497680.1076870.101093−0.024990−0.000241
MSE0.0799000.0691320.0784080.0468990.0939280.0582400.0725160.0412230
IIBias0.0609570.0504430.0284560.0385830.0920520.087899−0.03426−0.009645
MSE0.0731000.0672280.0724180.0440990.0850760.0533890.0683880.039979
IIIBias0.0679130.0755430.0224690.0427070.0920420.125652−0.045770−0.036378
MSE0.0950700.0748980.0962930.0488130.1101040.0687100.0929140.043348
(80, 65)IBias0.0376810.0342090.0114380.0232480.0611840.063595−0.037780−0.016424
MSE0.0556200.0468510.0556980.0370400.0620620.0426880.0545270.034793
IIBias0.0402220.0383610.0155530.0251460.0625880.063666−0.031000−0.012852
MSE0.0512400.0465620.0509090.0351260.0571270.0404750.0493600.032896
IIIBias0.0634220.0682070.0289370.0414140.0830570.110367−0.024220−0.024855
MSE0.0838100.0594320.0852140.0461060.0953220.0639840.0814380.043232
Table 11. Average values of the biases (first row) and MSEs (second row) for MLEs and Bayes estimators of S (t = 0.9) under informative prior (IP) and noninformative (NIP) prior and different censoring schemes, when ( θ 1 , θ 2 , λ ) = (0.05, 0.06, 1.8).
T = 0.8 | MLEs (NR, SEM) | SEL (NIP, IP) | LINEX c = −2 (NIP, IP) | LINEX c = 2 (NIP, IP)
(n, m) | Sc. | Statistic | NR | SEM | NIP | IP | NIP | IP | NIP | IP
(30, 15)IBias−0.002090−0.000338−0.023142−0.006557−0.015753−0.001900−0.030980−0.011428
MSE0.0074140.0047560.0092310.0034500.0084350.0032590.0101840.003696
IIBias0.0040430.013142−0.021266−0.004394−0.015819−0.000660−0.026960−0.008289
MSE0.0049300.0044680.0058110.0028110.0054490.0026740.0062430.002985
IIIBias0.0078520.017509−0.017486−0.001774−0.0120930.001900−0.023130−0.005612
MSE0.0046840.0043570.0054240.0025530.0051100.0024460.0058060.002695
(30, 25)IBias0.0034650.005372−0.008718−0.002165−0.0041630.001110−0.013480−0.005554
MSE0.0047260.0040150.0050560.0025610.0048010.0024830.0053670.002665
IIBias0.0044670.004802−0.007372−0.001871−0.0030410.001270−0.011890−0.005112
MSE0.0042920.0039880.0044730.0022670.0042690.0021990.0047240.002358
IIIBias0.0065500.008857−0.007711−0.000177−0.0032600.002850−0.012360−0.003305
MSE0.0043890.0041080.0049050.0022030.0046730.0021530.0051880.002275
(60, 40)IBias0.0028600.001574−0.004505−0.000022−0.0017230.002210−0.007370−0.002309
MSE0.0028330.0021920.0029760.0018310.0028870.0017990.0030840.001874
IIBias0.0017980.005486−0.005777−0.001623−0.0033440.000250−0.008270−0.003528
MSE0.0022220.0021230.0023820.0013780.0023120.0013540.0024660.001411
IIIBias−0.0006790.001804−0.008379−0.003608−0.005913−0.001720−0.010910−0.005531
MSE0.0022120.0020740.0024400.0014270.0023550.0014750.0025390.001447
(60, 50)IBias0.0029020.002082−0.003065−0.000088−0.0008010.001780−0.005380−0.001997
MSE0.0020990.0018010.0021420.0014570.0020890.0014360.0022060.001486
IIBias0.0012760.001583−0.004435−0.00144−0.0022840.000350−0.00663−0.003259
MSE0.0020370.0018430.0020550.0013270.0020040.0013630.0021160.001418
IIIBias0.0038090.004571−0.0025150.000697−0.0003420.002450−0.004740−0.001088
MSE0.0019970.0018840.0020710.0013000.0020270.0012860.0021270.001321
(80, 65)IBias0.0029450.003065−0.0014540.0004980.0002880.002000−0.003230−0.001024
MSE0.0016420.0013680.0016570.0012120.0016280.0011990.0016930.001229
IIBias0.0025470.001987−0.001786−0.000230−0.0001580.001170−0.003440−0.001651
MSE0.0015240.0014400.0015490.0011240.0015260.0011110.0015790.001141
IIIBias0.0017520.002747−0.002839−0.000345−0.0011880.001040−0.004520−0.001755
MSE0.0015840.0014950.0016240.0011270.0015940.0011150.0016590.001144
T = 1.5
(30, 15)IBias0.0034010.009764−0.017613−0.003657−0.0102920.00093−0.025400−0.008463
MSE0.0067360.0046620.0082480.0032390.0075610.0030820.0090910.003448
IIBias−0.0006830.007124−0.025730−0.008511−0.020195−0.00469−0.031520−0.012504
MSE0.0051070.0044300.0059980.0031800.0055860.0029950.0064850.003406
IIIBias0.0024650.013862−0.022256−0.006814−0.016783−0.003020−0.027990−0.01077
MSE0.0053770.0050810.0061620.0030720.0057800.0029060.0066140.003278
(30, 25)IBias0.0080270.006861−0.0041110.0017120.0003640.00494−0.00879−0.001627
MSE0.0041370.0035210.0043180.0022220.0041300.0021780.0045590.002290
IIBias−0.001135−0.000542−0.012432−0.004362−0.008092−0.001190−0.016950−0.007640
MSE0.0042310.0038810.0046440.0024190.0043880.0023310.0049500.002531
IIIBias0.0039260.006532−0.010595−0.001434−0.0060780.001620−0.015320−0.004583
MSE0.0039830.0037610.0045320.0020720.0042860.0020170.0048330.002148
(60, 40)IBias0.0024880.001781−0.004976−0.000560−0.0021570.001700−0.007880−0.002868
MSE0.0028640.0021970.0029600.0018010.0028670.0017670.0030720.001846
IIBias0.0004940.000402−0.005088−0.002108−0.002903−0.000270−0.007320−0.003984
MSE0.0021920.0020130.0022340.0015420.0021730.0015110.0023060.001581
IIIBias−0.0006790.001804−0.008379−0.003608−0.005913−0.001720−0.010910−0.005531
MSE0.0022120.0020740.0024400.0014270.0023550.0014750.0025390.001447
(60, 50)IBias0.0046060.004970−0.0014500.0014260.0007940.003290−0.003750−0.000478
MSE0.0022550.0019660.0022840.0015450.0022370.0015290.0023420.001569
IIBias0.0010560.000651−0.004177−0.001441−0.0020650.000350−0.006340−0.003260
MSE0.0020640.0019480.0021120.0014630.0020590.0014380.0021750.001496
IIIBias0.0026840.003881−0.0035920.000237−0.0014240.001990−0.005810−0.001549
MSE0.0021850.0020290.0022730.0014180.0022210.0014010.0023350.001442
(80, 65)IBias0.0006840.000377−0.003949−0.001248−0.0021930.000260−0.005740−0.002783
MSE0.0017030.0014460.0017450.0012580.0017070.0012400.0017910.001282
IIBias0.0006070.001458−0.003276−0.001182−0.0016870.000210−0.004890−0.002593
MSE0.0015450.0014560.0015600.0011980.0015300.0011810.0015960.001219
IIIBias0.0009370.001927−0.003811−0.000827−0.0021560.000570−0.005500−0.002242
MSE0.0014780.0014180.0015190.0010530.0014880.0010400.0015570.001070
Table 12. 95% CI’s, average length (AL) and coverage percentage (CP) of approximate, Bayes and bootstrap confidence intervals of θ 1 under informative prior (IP) and noninformative (NIP) prior, for different schemes with different values of n and m, when ( θ 1 , θ 2 , λ ) =(0.05,0.06,1.8).
T = 0.8 | MLEs (NR, SEM) | MCMC intervals (NIP, IP) | bootstrap intervals (Boot-p, Boot-t)
(n, m) | Sc. | NR: AL, CP | SEM: AL, CP | NIP: AL, CP | IP: AL, CP | Boot-p: AL, CP | Boot-t: AL, CP
(30, 15)I0.19380.9500.13990.9320.20190.9170.11400.9710.19860.8910.20210.921
II0.19700.9470.13800.8880.22010.8810.10260.9820.19940.9020.20730.934
III0.18930.9570.12640.8960.21380.8930.09910.9680.19030.8970.19880.953
(30,25)I0.13330.9490.13920.9480.13010.9220.09450.9680.13450.9230.13790.961
II0.13160.9480.13100.9530.12780.9310.09370.9690.13320.9190.13560.950
III0.13870.9390.12590.9510.14340.9350.09360.9770.13960.9240.14060.948
(60, 40)I0.09850.9480.08980.9390.09480.9340.07850.9640.09930.9210.09980.945
II0.10290.9450.08660.9220.10310.9320.07680.9710.10370.9190.10420.948
III0.10170.9510.08630.9260.10280.9360.07540.9710.10250.9270.10330.956
(60, 50)I0.08660.9610.08290.9540.08330.9510.07170.9720.08800.9610.08920.962
II0.08530.9540.08260.9550.08210.9440.07070.9650.08640.9540.08760.950
III0.08900.9440.08140.9310.08630.9290.07150.9580.09010.9440.09140.961
(80, 65)I0.07360.9410.06810.9450.07080.9280.06330.9530.07730.9230.07840.952
II0.07300.9550.06820.9470.07040.9440.06270.9620.07680.9180.07710.949
III0.07740.9500.06820.9330.07470.9350.06460.9620.07950.9270.07970.955
T = 1.5
(30, 15)I0.19090.9490.11920.9140.19980.9120.11350.9720.19410.8870.19570.960
II0.19690.9640.14300.9080.22200.9010.10210.9740.19960.8960.20030.953
III0.19450.9510.10660.8970.22090.8870.10170.9670.19710.9020.19850.942
(30,25)I0.13160.9630.10890.950.12720.9480.09370.9690.13280.9190.13430.957
II0.13320.9550.13300.9560.13000.9400.09500.9660.13540.9080.13610.962
III0.14000.9600.12770.9450.14500.9390.09360.9690.14110.9210.14140.951
(60, 40)I0.09910.9360.09170.9560.09520.9180.07870.9440.10130.9240.10200.954
II0.08800.9360.08300.9380.08560.9290.07290.9520.09240.9220.09650.961
III0.10380.9540.08250.9270.10470.9410.07710.9680.10490.9130.10560.949
(60, 50)I0.08550.9520.08370.9490.08230.9290.07100.9580.08730.9320.08780.961
II0.08360.9440.08280.9500.08030.9360.07020.9550.08490.9240.08520.948
III0.09010.9450.08110.9380.08720.9300.07200.9540.09170.9160.09280.962
(80, 65)I0.07510.9500.06950.9430.07240.9490.06440.9580.07750.9250.07820.950
II0.07130.9510.06890.9380.06880.9400.06210.9550.07390.9280.07480.951
III0.07820.9560.06850.9450.07550.9440.06510.9590.07990.9340.08140.956
Table 13. 95% CI’s, average length (AL) and coverage percentage (CP) of approximate, Bayes and bootstrap confidence intervals of θ 2 under informative prior (IP) and noninformative (NIP) prior, for different schemes with different values of n and m, when ( θ 1 , θ 2 , λ ) = (0.05,0.06,1.8).
T = 0.8 | MLEs (NR, SEM) | MCMC intervals (NIP, IP) | bootstrap intervals (Boot-p, Boot-t)
(n, m) | Sc. | NR: AL, CP | SEM: AL, CP | NIP: AL, CP | IP: AL, CP | Boot-p: AL, CP | Boot-t: AL, CP
(30, 15)I0.22620.9460.13920.9260.23850.9180.13420.9740.22810.9210.22870.934
II0.22430.9560.12930.8840.25130.8840.11690.9870.22500.9250.22570.928
III0.22190.9600.13060.8930.25180.8960.11640.9780.22240.9350.22290.945
(30, 25)I0.15790.9420.12920.9480.13010.9220.09450.9680.15840.9330.15920.940
II0.15150.9490.12800.9420.14830.9350.10870.9700.15220.9190.15240.938
III0.16020.9570.12490.9340.16690.9370.10820.9740.16110.9420.16170.961
(60, 40)I0.11450.9440.09130.9360.11040.9300.09140.9580.11610.9350.11660.942
II0.11870.9540.08700.9170.11890.9400.08860.9710.11890.9400.11930.966
III0.12070.9430.08690.9310.12260.9220.08910.9690.12120.9460.12160.958
(60, 50)I0.10040.9510.08560.9540.09680.9420.08330.9610.10210.9460.10320.962
II0.10010.9570.08630.9430.09660.9430.08300.9690.10050.9380.10110.951
III0.10460.9580.08440.9500.10150.9340.08390.9660.10540.9520.10560.956
(80, 65)I0.08670.9550.07480.9450.08350.9420.07460.9660.08740.9440.08810.960
II0.08550.9560.07510.9380.08250.9340.07350.9550.08620.9470.08650.954
III0.09050.9550.07480.9510.08740.9440.07530.9650.09040.9530.09090.961
T = 1.5
(30, 15)I0.21430.9600.13210.9150.22540.9210.12970.9750.21520.9420.21550.946
II0.23050.9690.13390.9080.26070.9130.11980.9830.23150.9300.23160.962
III0.22360.9470.12960.8860.25370.8840.11720.9790.22440.9370.22480.948
(30, 25)I0.15240.9650.12720.9550.14850.9470.10920.9760.15320.9450.15350.957
II0.15360.9540.13280.9480.15040.9370.11040.9710.15390.9510.15420.966
III0.16430.9610.12650.9480.17200.9460.11040.9810.16480.9570.16520.964
(60, 40)I0.11620.9440.09830.9530.11190.9320.09250.9590.11670.9600.11690.959
II0.10150.9460.09170.9270.09900.9380.08430.9610.10160.9470.10190.952
III0.12130.9490.08930.9250.12310.9330.09000.9790.12170.9550.12200.948
(60, 50)I0.09960.9510.08620.9400.09600.9370.08280.9610.09990.9500.10030.962
II0.09760.9470.08840.9440.09400.9400.08200.9580.09780.9630.09840.970
III0.10600.9480.08740.9370.10270.9260.08460.9580.10630.9590.10690.968
(80, 65)I0.08760.9530.07590.9500.08470.9420.07530.9600.08820.9490.08830.964
II0.08270.9520.07540.9460.07990.940.07210.9620.08340.9550.08360.959
III0.09070.9480.07550.9320.08770.9370.07540.9620.09080.9540.09120.950
Table 14. 95% CI’s, average length (AL) and coverage percentage (CP) of approximate, Bayes and bootstrap confidence intervals of λ under informative prior (IP) and noninformative (NIP) prior, for different schemes with different values of n and m, when ( θ 1 , θ 2 , λ ) = (0.05,0.06,1.8).
T = 0.8 | MLEs (NR, SEM) | MCMC intervals (NIP, IP) | bootstrap intervals (Boot-p, Boot-t)
(n, m) | Sc. | NR: AL, CP | SEM: AL, CP | NIP: AL, CP | IP: AL, CP | Boot-p: AL, CP | Boot-t: AL, CP
(30, 15)I2.02280.9181.40990.9062.00580.9181.32750.9742.02340.9322.02510.956
II4.42980.8761.58820.8902.97580.8571.65520.9444.43120.9064.43520.948
III4.87620.8931.57370.9032.98000.8771.64490.9854.87860.9224.88100.950
(30,25)I1.54420.9211.40750.9141.49720.9221.14670.9831.54650.9431.54870.970
II1.63700.9201.40920.9071.58580.9361.18700.9821.63850.9351.64050.964
III1.83680.9101.44140.8781.81270.9291.27350.9921.83740.9291.83810.947
(60, 40)I1.16940.9360.95900.9331.12550.9270.95610.9671.17020.9511.17130.962
II1.58590.9370.99120.9211.53590.9431.17800.9921.58700.9551.58920.949
III1.58250.9230.98350.9341.53080.9261.16990.9801.58340.9481.58520.964
(60, 50)I1.04060.9450.94940.9401.00240.9480.87060.9741.04230.9611.04350.962
II1.09320.9430.95150.9261.05240.9370.90650.9671.09430.9621.09560.958
III1.23680.9460.96330.9251.19010.9390.95690.9791.23840.9591.24030.960
(80, 65)I0.90850.9380.81840.9300.87410.9330.78350.9550.91050.9630.91290.949
II1.00220.9380.92250.9210.96490.9390.85190.9641.00460.9551.00720.951
III1.08900.9430.92500.9231.04450.9460.90630.9741.09160.9621.09390.963
T = 1.5
(30, 15)I2.02750.9171.41950.8862.01800.9231.33160.9862.03120.9262.03250.951
II2.14520.9111.52060.8382.98200.8881.64880.9912.14860.9122.15110.954
III2.30930.8731.60230.8843.01090.8721.65850.9852.31140.9042.31450.938
(30,25)I1.54340.9391.40070.9421.49620.9461.14840.9881.54650.9591.54890.971
II1.49140.9241.39260.9221.45020.9341.12190.9891.49430.9461.49740.954
III1.82240.9401.43040.8991.79300.9381.26680.9901.82560.9481.82900.966
(60, 40)I1.17610.9321.04840.9381.12940.9220.95680.9571.17840.9381.18060.957
II1.14520.9220.95320.9311.10840.9220.94480.9601.14910.9451.15130.949
III1.59100.9420.97940.9161.54650.9331.17750.9861.59830.9511.60040.955
(60, 50)I1.04700.9350.95510.9291.00660.9310.87960.9661.04970.9541.05210.950
II1.01920.9360.95240.9350.98250.9420.86340.9591.02310.9601.02680.962
III1.23090.9400.95970.9071.17990.930.99320.9731.23420.9481.23870.967
(80, 65)I0.90470.9420.81660.9390.87090.9410.78190.9610.90950.9610.10230.951
II0.87650.9480.81620.9380.84600.9380.76520.9600.87920.9570.88320.962
III1.08730.9390.82330.911.04090.9370.90840.9631.09020.9521.09290.965
Table 15. 95% CI’s, average length (AL) and coverage percentage (CP) of approximate, Bayes and bootstrap confidence intervals of S (t = 0.9) under informative prior (IP) and noninformative (NIP) prior, for different schemes with different values of n and m when ( θ 1 , θ 2 , λ ) = (0.05,0.06,1.8).
T = 0.8 | MLEs (NR, SEM) | MCMC intervals (NIP, IP) | bootstrap intervals (Boot-p, Boot-t)
(n, m) | Sc. | NR: AL, CP | SEM: AL, CP | NIP: AL, CP | IP: AL, CP | Boot-p: AL, CP | Boot-t: AL, CP
(30, 15)I0.31650.9050.23600.8880.32940.9330.26520.9810.32050.9440.32180.935
II0.25640.9070.23130.9130.28620.9500.23950.9770.25750.9620.25790.949
III0.25450.9040.22870.8590.28510.9470.23660.9870.25580.9540.25630.962
(30,25)I0.25420.9170.22150.9380.25600.9720.22300.9720.25890.9480.25970.941
II0.24780.9220.22020.9210.25590.9510.21880.9890.24900.9630.24950.961
III0.24480.9000.21270.8930.25870.9430.21500.9860.24680.9350.24750.934
(60, 40)I0.20130.9200.16860.9180.20530.9480.18450.9770.20270.9450.20340.939
II0.18170.9330.16730.9170.19260.9590.16930.9750.18330.9620.18400.956
III0.18100.9270.16650.9210.19200.9490.16810.9860.18210.9580.18250.958
(60, 50)I0.18200.9340.16780.9310.18550.9570.16900.9770.18320.9540.18330.961
II0.17680.9350.16610.9380.18100.9590.16520.9820.17810.9510.17880.964
III0.17610.9310.16620.9350.18200.9510.16370.9730.17790.9500.17810.967
(80, 65)I0.16010.9310.14520.9330.16290.9460.15130.9680.16090.9470.16130.962
II0.15410.9380.14550.9350.15880.9510.14590.9720.15480.9520.15490.967
III0.15370.9380.14560.9320.15770.9540.14640.9690.15420.9560.15440.959
T = 1.5
(30, 15)I0.31390.9060.28080.8670.32750.9400.26330.9810.31520.8970.31580.907
II0.25850.9090.23890.8840.28920.9460.24110.9830.25890.9140.25970.925
III0.25640.9020.23580.8730.28690.9420.23010.9780.25740.9190.25820.927
(30,25)I0.25230.9090.23410.9200.25920.9580.22160.9770.25350.9050.25410.939
II0.24860.9270.23740.9130.25550.9510.21960.9730.24920.9360.24940.949
III0.24690.9140.23450.9040.26070.9530.21590.9850.24710.9290.24790.944
(60, 40)I0.20270.9210.18380.9420.20660.9490.18040.9720.20320.9370.20360.953
II0.17640.9290.16840.9300.18240.9480.16760.9650.17690.9500.17750.957
III0.18250.9320.16830.9200.19390.9530.17010.9750.18340.9440.18410.962
(60, 50)I0.18130.9180.16660.9180.18470.9450.16880.9790.18220.9200.18320.939
II0.17570.9270.16830.9340.17930.9460.16520.9670.17590.9310.17710.957
III0.17630.9220.16710.9200.18160.9350.16380.9630.17700.9520.17840.965
(80, 65)I0.16080.9340.14610.9390.16360.9450.15200.9660.16120.9570.16190.952
II0.15240.9390.14560.9260.15570.9540.14580.9680.15350.9610.15430.957
III0.15440.9390.14580.9350.15910.9530.14620.9720.15530.9500.15590.966
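Tables 8–11 report the average bias (first row) and MSE (second row) of each point estimator over the simulation replications, while Tables 12–15 report the average length (AL) and coverage percentage (CP) of each interval estimator. As a minimal sketch of how such Monte Carlo summaries are typically computed (the function names are illustrative, not the authors' code):

```python
import statistics

def point_summary(true_value, estimates):
    """Average bias (first row of Tables 8-11) and MSE (second row)
    over a list of simulated estimates."""
    errors = [est - true_value for est in estimates]
    bias = statistics.mean(errors)
    mse = statistics.mean(e * e for e in errors)
    return bias, mse

def interval_summary(true_value, intervals):
    """Average length (AL) and coverage percentage (CP), as in Tables 12-15,
    over a list of (lower, upper) simulated confidence intervals."""
    al = statistics.mean(upper - lower for lower, upper in intervals)
    cp = statistics.mean(1.0 if lower <= true_value <= upper else 0.0
                         for lower, upper in intervals)
    return al, cp
```

A CP close to the nominal 0.95 with a small AL indicates a well-calibrated, efficient interval, which is the criterion the tables use to compare the NR, SEM, MCMC, and bootstrap procedures.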
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
