Mathematics
  • Article
  • Open Access

6 May 2023

Generalized Bayes Estimation Based on a Joint Type-II Censored Sample from K-Exponential Populations

1 Department of Mathematics, College of Science, Taibah University, Al-Madinah Al-Munawarah 30002, Saudi Arabia
2 Department of Mathematics, Faculty of Science, Al-Azhar University, Nasr City 11884, Egypt
3 Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
4 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

Abstract

Generalized Bayes estimation is a Bayesian analysis that incorporates a learning rate parameter. This paper applies generalized Bayes estimation to study the effect of the learning rate parameter on the estimation results, based on a joint type-II censored sample from k exponential populations. Squared error, Linex, and general entropy loss functions are used in the Bayesian approach. Monte Carlo simulations were performed to assess how well the different approaches perform. The simulation study compares the Bayesian estimators for different values of the learning rate parameter and different loss functions.

1. Introduction

Generalized Bayes is a Bayesian approach based on a learning rate parameter $\eta$ ($0 < \eta < 1$), applied as a fractional power on the likelihood function $L \equiv L(\theta; data)$ for the parameter $\theta \in \Theta$; the traditional Bayesian framework is recovered for $\eta = 1$. In this paper, we will show the effect of the learning rate parameter on the estimation results. That is, if the prior distribution of the parameter $\theta$ is $\pi(\theta)$, then the generalized Bayes posterior distribution for $\theta$ is:
$$\pi^*(\theta \mid data) \propto L^{\eta}\, \pi(\theta), \quad \theta \in \Theta,\ 0 < \eta < 1. \tag{1}$$
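As a minimal sketch (not the authors' code; all names and values are illustrative), the tempered update above stays conjugate for an exponential likelihood with a gamma prior: raising the likelihood to the power $\eta$ simply scales the sufficient statistics by $\eta$.

```python
import numpy as np

# Illustrative sketch: generalized (tempered) Bayes update for an exponential
# rate theta with a Gamma(a, b) prior.  Raising the likelihood to the power
# eta yields the posterior Gamma(a + eta*r, b + eta*u), where r is the number
# of observations and u their sum.

def generalized_posterior(a, b, data, eta):
    """Return the (shape, rate) of the eta-tempered posterior."""
    r = len(data)
    u = float(np.sum(data))
    return a + eta * r, b + eta * u

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=50)  # true rate theta = 1

for eta in (0.1, 0.5, 1.0):                 # eta = 1 recovers ordinary Bayes
    shape, rate = generalized_posterior(1.0, 1.0, data, eta)
    print(f"eta={eta}: posterior mean = {shape / rate:.3f}")
```

Smaller values of $\eta$ keep the posterior closer to the prior, which is the effect studied throughout the paper.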
For more details on the generalized Bayes method and the choice of the value of the rate parameter, see, for example [1,2,3,4,5,6,7,8,9,10]. In addition, we refer readers to [11,12] for recent work on Bayesian inversion.
An exact inference method based on maximum likelihood estimates (MLEs), with its performance compared against approximate, Bayesian, and bootstrap methods, was developed by [13]. A joint progressive type-II censoring scheme, together with the expected values of the number of failures for two populations under joint progressive type-II censoring, was introduced and studied by [14]. Exact likelihood inference for two exponential populations under joint progressive type-II censoring was studied by [15]. A precise result based on maximum likelihood estimates was developed by [16]. A study of Bayesian estimation and prediction based on a joint type-II censored sample from two exponential populations was presented by [17]. Exact likelihood inference for two populations of two-parameter exponential distributions under joint type-II censoring was studied by [18].
Suppose that products from k different lines are manufactured in the same factory and that k independent samples of size n j , 1 j k are selected from these k lines and simultaneously subjected to a lifetime test. To reduce the cost of the experiment and shorten the duration of the experiment, the experimenter can terminate the lifetime test experiment once a certain number (say r) of failures occur. In this situation, one is interested in either a point or interval estimate of the mean lifetime of the units produced by these k lines.
Suppose $\{X_{jn_j}, j = 1, \dots, k\}$ are $k$ samples, where $X_{jn_j} = \{X_{j1}, X_{j2}, \dots, X_{jn_j}\}$ are the lifetimes of $n_j$ copies of product line $A_j$, assumed to be independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) $F_j(x)$ and probability density function (pdf) $f_j(x)$.
Furthermore, let $N = \sum_{j=1}^{k} n_j$ be the total sample size and $r$ the total number of observed failures. Let $W_1 \le \cdots \le W_N$ denote the order statistics of the $N$ random variables $\{X_{jn_j}, j = 1, \dots, k\}$. Under the joint type-II censoring scheme for the $k$ samples, the observable data consist of $(\delta, W)$, where $W = (W_1, \dots, W_r)$, $W_i \in \{X_{j_i n_{j_i}}, j_i = 1, \dots, k\}$, $r$ is a predefined integer, and $\delta = (\delta_{1j}, \dots, \delta_{rj})$ associated with $(j_1, \dots, j_r)$ is defined by:
$$\delta_{ij} = \begin{cases} 1, & \text{if } j = j_i \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$
If $r_j = \sum_{i=1}^{r} \delta_{ij}$ denotes the number of $X_j$-failures in $W$ and $r = \sum_{j=1}^{k} r_j$, then the joint density function of $(\delta, W)$ is given by:
$$f(\delta, w) = c_r \prod_{i=1}^{r} \prod_{j=1}^{k} \left( f_j(w_i) \right)^{\delta_{ij}} \cdot \prod_{j=1}^{k} \left( \bar{F}_j(w_r) \right)^{n_j - r_j}, \tag{3}$$
where $\bar{F}_j = 1 - F_j$ is the survival function of the $j$th population and $c_r = \prod_{j=1}^{k} \frac{n_j!}{(n_j - r_j)!}$.
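The $(\delta, W)$ construction above can be sketched in a few lines: pool the $k$ samples, sort, and keep the first $r$ order statistics together with their population labels. This is a minimal illustration with made-up names, not the authors' code:

```python
import numpy as np

# Illustrative sketch of drawing a joint type-II censored sample from k
# exponential populations: pool the samples, sort, keep the first r order
# statistics, and record which population each observed failure came from.

def joint_type2_sample(thetas, ns, r, rng):
    times, labels = [], []
    for j, (theta, n) in enumerate(zip(thetas, ns)):
        times.append(rng.exponential(scale=1.0 / theta, size=n))
        labels.append(np.full(n, j))
    times = np.concatenate(times)
    labels = np.concatenate(labels)
    order = np.argsort(times)
    w = times[order][:r]      # W_1 <= ... <= W_r
    j_i = labels[order][:r]   # population index j_i of each observed failure
    return w, j_i

rng = np.random.default_rng(1)
w, j_i = joint_type2_sample([1.0, 2.0, 3.0], [10, 10, 10], r=20, rng=rng)
print(w)
print(j_i)
```

The indicator array `j_i` plays the role of $\delta$ in (2): $\delta_{ij} = 1$ exactly when `j_i[i] == j`.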
The main goal of this paper is to consider the Bayesian estimation of the parameter based on the learning rate parameter under a joint censoring scheme of type-II for exponential populations when censoring is applied to the samples in a combined manner. Section 2 presents the maximum likelihood and generalized Bayes estimators, using squared error, Linex, and general entropy loss functions in the Bayesian approach to estimate the population parameters. A numerical investigation of the results from Section 2 is presented in Section 3. Finally, we conclude the paper in Section 4.

2. Estimation of the Parameters

Suppose that for $1 \le j \le k$, the $k$ populations are exponential with the following pdf and cdf:
$$f_j(x) = \theta_j \exp(-\theta_j x), \quad F_j(x) = 1 - \exp(-\theta_j x), \quad x > 0,\ \theta_j > 0. \tag{4}$$
Then, the likelihood function in (3) becomes:
$$L(\Theta; \delta, w) = c_r \prod_{i=1}^{r} \prod_{j=1}^{k} \left\{ \theta_j \exp(-\theta_j w_i) \right\}^{\delta_{ij}} \prod_{j=1}^{k} \left\{ \exp(-\theta_j w_r) \right\}^{n_j - r_j} = c_r \prod_{j=1}^{k} \theta_j^{r_j} \exp\{-\theta_j u_j\}, \tag{5}$$
where $\Theta = (\theta_1, \dots, \theta_k)$ and $u_j = \sum_{i=1}^{r} w_i \delta_{ij} + w_r (n_j - r_j)$.

2.1. Maximum Likelihood Estimation

From (5), the MLE of $\theta_j$, for $1 \le j \le k$, is given by:
$$\hat{\theta}_{jM} = \frac{r_j}{u_j}. \tag{6}$$
Remark 1.
MLEs of $\theta_j$ exist if we have at least $k$ failures ($r \ge k$) with at least one failure from each sample, i.e., $1 \le r_j \le r - k + 1$ and $r_j \le n_j$.
We determined the MLEs to compare their results with those of the Bayesian estimation, which uses the three types of loss functions for different values of the learning rate parameter, as described in Section 3.
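As a minimal sketch (illustrative names, not the authors' code), the MLE in (6) can be computed directly from the observed $(w, j_i)$ data:

```python
import numpy as np

# Illustrative sketch of equation (6): theta_hat_j = r_j / u_j, where u_j is
# the sum of the observed failure times from line j plus w_r times that
# line's censored count, as defined after equation (5).

def mle_joint_type2(w, j_i, ns):
    """w: ordered observed failure times; j_i: population label of each
    failure; ns: sample size of each line.  Requires r_j >= 1 (Remark 1)."""
    theta_hat = []
    for j, n_j in enumerate(ns):
        mask = (j_i == j)
        r_j = int(np.sum(mask))                         # failures in line j
        u_j = float(np.sum(w[mask])) + w[-1] * (n_j - r_j)
        theta_hat.append(r_j / u_j)
    return np.array(theta_hat)

# toy data: three pooled failures from two lines of size 2 each
w = np.array([1.0, 2.0, 3.0])
j_i = np.array([0, 1, 0])
print(mle_joint_type2(w, j_i, [2, 2]))  # -> [0.5 0.2]
```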

2.2. Generalized Bayes Estimation

Since the parameters $\Theta$ are assumed to be unknown, we consider independent conjugate gamma prior distributions, i.e., $\theta_j \sim \mathrm{Gam}(a_j, b_j)$. Therefore, the joint prior distribution of $\Theta$ is given by:
$$\pi(\Theta) = \prod_{j=1}^{k} \pi_j(\theta_j), \tag{7}$$
where
$$\pi_j(\theta_j) = \frac{b_j^{a_j}}{\Gamma(a_j)} \theta_j^{a_j - 1} e^{-b_j \theta_j}, \tag{8}$$
and $\Gamma(\cdot)$ denotes the complete gamma function.
Combining (5) and (7), after raising (5) to the fractional power $\eta$, the posterior joint density function of $\Theta$ is then:
$$\pi^*(\Theta \mid data) = \prod_{j=1}^{k} \frac{(u_j \eta + b_j)^{r_j \eta + a_j}}{\Gamma(r_j \eta + a_j)} \theta_j^{r_j \eta + a_j - 1} \exp\{-\theta_j (u_j \eta + b_j)\}. \tag{9}$$
Since $\pi_j$ is a conjugate prior, we see that if $\theta_j \sim \mathrm{Gam}(a_j, b_j)$, then the posterior distribution is $(\theta_j \mid data) \sim \mathrm{Gam}(r_j \eta + a_j,\ u_j \eta + b_j)$.
In generalized Bayes estimation, we consider three types of loss functions:
(i) the squared error (SE) loss function, a symmetric function that gives equal importance to losses for overestimates and underestimates of the same magnitude;
(ii) the Linex loss function, which is asymmetric;
(iii) the general entropy (GE) loss function.
Using (9), the Bayesian estimators of $\theta_j$ under the squared error (SE) loss function are:
$$\hat{\theta}_{jS} = E(\theta_j) = \frac{r_j \eta + a_j}{u_j \eta + b_j}, \quad 1 \le j \le k. \tag{10}$$
Under the Linex loss function, the Bayesian estimators of $\theta_j$ are given by:
$$\hat{\theta}_{jL} = -\frac{1}{\nu} \log E\left(e^{-\nu \theta_j}\right) = \frac{r_j \eta + a_j}{\nu} \log\left(1 + \frac{\nu}{u_j \eta + b_j}\right), \quad \nu \ne 0,\ 1 \le j \le k, \tag{11}$$
and under the GE loss function, the Bayesian estimators of $\theta_j$ are given by:
$$\hat{\theta}_{jE} = \left\{ E(\theta_j^{-c}) \right\}^{-1/c} = \left[ \frac{\Gamma(r_j \eta + a_j - c)}{\Gamma(r_j \eta + a_j)} \right]^{-1/c} \frac{1}{u_j \eta + b_j}, \quad 1 \le j \le k. \tag{12}$$
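The three closed-form estimators above can be sketched for a single line $j$ as follows (a minimal illustration with assumed inputs, not the authors' code; `lgamma` keeps the GE ratio numerically stable):

```python
from math import lgamma, log, exp

# Illustrative sketch of the generalized Bayes estimators (10)-(12) for one
# line j, based on the posterior Gam(eta*r_j + a_j, eta*u_j + b_j).

def bayes_estimators(r_j, u_j, a_j, b_j, eta, nu, c):
    alpha = eta * r_j + a_j        # posterior shape
    beta = eta * u_j + b_j         # posterior rate
    se = alpha / beta                                          # SE (10)
    linex = (alpha / nu) * log(1.0 + nu / beta)                # Linex (11)
    ge = exp(-(lgamma(alpha - c) - lgamma(alpha)) / c) / beta  # GE (12)
    return se, linex, ge

se, linex, ge = bayes_estimators(r_j=8, u_j=10.0, a_j=1.0, b_j=1.0,
                                 eta=0.5, nu=0.5, c=0.75)
print(se, linex, ge)
```

Note that the GE estimator requires $r_j \eta + a_j > c$, and that as $\nu \to 0$ the Linex estimator tends to the SE estimator.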
Remark 2.
Obviously, $\hat{\theta}_j$ for $1 \le j \le k$ in the above three cases are the unique Bayes estimators of $\theta_j$ and thus admissible. The estimators $\hat{\theta}_{jJ}$ are Bayes estimators of $\theta_j$ using the noninformative Jeffreys prior $\pi_J \propto \prod_{j=1}^{k} \frac{1}{\theta_j}$, obtained directly by substituting $a_j = b_j = 0$ into (9), so that (10) leads to the MLEs $\hat{\theta}_{jM}$.
Remark 3.
For $c = 1, -1, -2$, the Bayes estimates $\hat{\theta}_{jE}$ agree with the Bayes estimates under the following losses, respectively: the weighted squared error loss function, the squared error loss function, and the precautionary loss function.

3. Numerical Study

This section examines the results of a Monte Carlo simulation study to evaluate the performance of the inference procedures derived in the previous section. An example is then presented to illustrate the inference methods discussed here.

3.1. Simulation Study

We have considered different choices for the three population sample sizes $(n_1, n_2, n_3)$ and also for $r$. We chose the exponential parameters $(\theta_1, \theta_2, \theta_3)$ as $(1, 2, 3)$ and used 10,000 replicates for the Monte Carlo simulations. Using (6), we obtain the MLEs of $\theta_1, \theta_2, \theta_3$ and their estimated risk, which are shown in Table 1.
Table 1. Average value of r ¯ 1 , r ¯ 2 , r ¯ 3 and the average value and estimated risk (ER) of the MLEs θ ^ 1 M , θ ^ 2 M , θ ^ 3 M for different choices of n 1 , n 2 , n 3 , r .
In the simulation study, it should be noted that some of the simulated samples do not meet the condition in Remark 1 and, therefore, must be discarded. Thus, the average values of the observed failures $(\bar{r}_1, \bar{r}_2, \bar{r}_3)$ are calculated and reported in Table 1. For the Bayesian study, the hyperparameters are represented by $\Delta = (a_1, b_1, a_2, b_2, a_3, b_3)$, where $\Delta = \Delta_1 = (1, 1, 2, 1, 3, 1)$.
The results of the Bayesian estimators of $\theta_1, \theta_2, \theta_3$ at $\Delta_1$; $c = -1, 0.75, 0.5$; $\nu = 0.1, 0.5$; and $\eta = 0.1, 0.5, 0.9$ are shown in Tables 2–7, where the loss at $c = -1$ is SE. Tables 2–4 study generalized Bayes under the general entropy loss function for the chosen values $c = -1, 0.75, 0.5$.
Table 2. Average value of the Bayesian estimators $\hat{\theta}_{1E}, \hat{\theta}_{2E}, \hat{\theta}_{3E}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $c = -1, 0.75, 0.5$, $\eta = 0.1$.
Table 3. Average value of the Bayesian estimators $\hat{\theta}_{1E}, \hat{\theta}_{2E}, \hat{\theta}_{3E}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $c = -1, 0.75, 0.5$, $\eta = 0.5$.
Table 4. Average value of the Bayesian estimators $\hat{\theta}_{1E}, \hat{\theta}_{2E}, \hat{\theta}_{3E}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $c = -1, 0.75, 0.5$, $\eta = 0.9$.
Table 5. Average value of the Bayesian estimators $\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $\nu = 0.1, 0.5$, $\eta = 0.1$.
Table 6. Average value of the Bayesian estimators $\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $\nu = 0.1, 0.5$, $\eta = 0.5$.
Table 7. Average value of the Bayesian estimators $\hat{\theta}_{1L}, \hat{\theta}_{2L}, \hat{\theta}_{3L}$ and estimated risk for different choices of $n_1, n_2, n_3, r$ and $\Delta = \Delta_1$, $\nu = 0.1, 0.5$, $\eta = 0.9$.
Tables 5–7 study generalized Bayes under the Linex loss function for $\nu = 0.1, 0.5$ and $\eta = 0.1, 0.5, 0.9$.
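The structure of the Monte Carlo comparison can be sketched as follows: generate a joint type-II censored sample, discard replicates violating Remark 1, and accumulate the estimated risk (mean squared error) of the estimators. This is a compact illustration with assumed replicate counts and hyperparameters, not the paper's exact simulation code:

```python
import numpy as np

# Illustrative Monte Carlo sketch: estimated risk of the generalized Bayes
# SE estimator (10) versus the MLE (6) over repeated joint type-II samples.

def one_replicate(thetas, ns, r, a, b, eta, rng):
    times = np.concatenate([rng.exponential(1 / t, n) for t, n in zip(thetas, ns)])
    labels = np.concatenate([np.full(n, j) for j, n in enumerate(ns)])
    order = np.argsort(times)
    w, j_i = times[order][:r], labels[order][:r]
    mle, bayes = [], []
    for j in range(len(ns)):
        r_j = int(np.sum(j_i == j))
        if r_j == 0:
            return None                       # discard sample (Remark 1)
        u_j = float(np.sum(w[j_i == j])) + w[-1] * (ns[j] - r_j)
        mle.append(r_j / u_j)
        bayes.append((eta * r_j + a[j]) / (eta * u_j + b[j]))
    return np.array(mle), np.array(bayes)

rng = np.random.default_rng(2)
thetas, ns, r = (1.0, 2.0, 3.0), (10, 10, 10), 20
a, b = (1.0, 2.0, 3.0), (1.0, 1.0, 1.0)       # illustrative hyperparameters
mle_err, bayes_err, reps = 0.0, 0.0, 0
for _ in range(2000):                         # illustrative replicate count
    out = one_replicate(thetas, ns, r, a, b, eta=0.5, rng=rng)
    if out is None:
        continue
    mle, bayes = out
    mle_err += np.sum((mle - np.array(thetas)) ** 2)
    bayes_err += np.sum((bayes - np.array(thetas)) ** 2)
    reps += 1
print(f"ER(MLE) = {mle_err/reps:.3f},  ER(Bayes) = {bayes_err/reps:.3f}")
```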

3.2. Illustrative Example

To illustrate the usefulness of the results developed in the previous sections, we consider three samples, each of size $n_1 = n_2 = n_3 = 10$, from Nelson's data (groups 1, 4, and 5) (p. 462, [19]), corresponding to the breakdown times in minutes of an insulating fluid subjected to a high load. These failure times, as samples $X_i$, $i = 1, 2, 3$, and their order statistics $(W, j_i)$ are shown in Table 8.
Table 8. The failure time data for X1, X2, and X3, and their order (w, ji), where δji = 1.
For $r = 20, 25, 30$, the MLE and Bayesian estimates of the parameters are shown in Tables 9 and 10 for $\eta = 0.1, 0.5$, using $\Delta = \Delta_2 = (1, 2.6, 1, 2, 1, 3)$; $c = -1, 0.75, 0.5$; and $\nu = 0.1, 0.5$, respectively.
Table 9. ML and Bayesian estimates of the parameters θ1, θ2, θ3 for different choices of r, η = 0.1 and ∆ = ∆2.
Table 10. Bayesian estimates of the parameters θ1, θ2, θ3 for different choices of r; η = 0.5 and ∆ = ∆2.

4. Conclusions

In this work, we considered a joint type-II censoring scheme when the lifetimes of three populations have exponential distributions. We obtained the MLEs and the Bayesian estimates of the parameters using different values of the learning rate parameter $\eta$ and the loss functions SE, GE, and Linex, in both a simulation study and an illustrative example. For both methods, the MLEs and the generalized Bayes estimates $\hat{\theta}_1, \hat{\theta}_2, \hat{\theta}_3$ improve as the sample sizes $n_i$, $i = 1, 2, 3$, increase for the different values of $r$; moreover, the Bayes estimators outperform the MLEs. From Tables 2–7, it can be seen that the results improve as $c$ increases. In general, the best results are obtained by the generalized Bayes estimators for $\eta = 0.1$; that is, the results improve as $\eta$ becomes smaller. Studying this problem under a different type of censoring might be interesting.

Author Contributions

Conceptualization, Y.A.-A.; methodology, Y.A.-A. and M.K.; software, G.A.; validation, G.A.; formal analysis, M.K.; resources, Y.A.-A.; writing—original draft, Y.A.-A. and M.K.; writing—review & editing, G.A.; supervision, M.K.; project administration, Y.A.-A. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Acknowledgments

The authors thank the anonymous reviewers and the editor for their constructive criticism and valuable suggestions, which have greatly improved the presentation and explanations in this article. This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bissiri, P.G.; Holmes, C.C.; Walker, S.G. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 1103–1130. [Google Scholar] [CrossRef] [PubMed]
  2. De Heide, R.; Kirichenko, A.; Grunwald, P.; Mehta, N. Safe-Bayesian generalized linear regression. Int. Conf. Artif. Intell. Stat. 2020, 108, 2623–2633. [Google Scholar]
  3. Grünwald, P. The safe Bayesian: Learning the learning rate via the mixability gap. In Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, 29–31 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7568, pp. 169–183. [Google Scholar]
  4. Grünwald, P. Safe probability. J. Stat. Plan. Inference 2018, 195, 47–63. [Google Scholar] [CrossRef]
  5. Grünwald, P.; van Ommen, T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017, 12, 1069–1103. [Google Scholar] [CrossRef]
  6. Holmes, C.C.; Walker, S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika 2017, 104, 497–503. [Google Scholar]
  7. Lyddon, S.P.; Holmes, C.C.; Walker, S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika 2019, 106, 465–478. [Google Scholar] [CrossRef]
  8. Martin, R. Invited comment on the article by van der Pas, Szabo, and van der Vaart. Bayesian Anal. 2017, 12, 1254–1258. [Google Scholar]
  9. Martin, R.; Ning, B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhya A 2020, 82, 477–498. [Google Scholar] [CrossRef]
  10. Miller, J.W.; Dunson, D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019, 114, 1113–1125. [Google Scholar] [CrossRef] [PubMed]
  11. Khodadadian, A.; Noii, N.; Parvizi, M.; Abbaszadeh, M.; Wick, T.; Heitzinger, C. A Bayesian estimation method for variational phase-field fracture problems. Comput. Mech. 2020, 66, 827–849. [Google Scholar] [CrossRef] [PubMed]
  12. Noii, N.; Khodadadian, A.; Ulloa, J.; Aldakheel, F.; Wick, T.; François, S.; Wriggers, P. Bayesian inversion with open-source codes for various one-dimensional model problems in computational mechanics. Arch. Comput. Methods Eng. 2022, 29, 4285–4318. [Google Scholar] [CrossRef]
  13. Balakrishnan, N.; Rasouli, A. Exact likelihood inference for two exponential populations under joint Type-II censoring. Comput. Stat. Data Anal. 2008, 52, 2725–2738. [Google Scholar] [CrossRef]
  14. Parsi, S.; Bairamov, I. Expected values of the number of failures for two populations under joint Type-II progressive censoring. Comput. Stat. Data Anal. 2009, 53, 3560–3570. [Google Scholar] [CrossRef]
  15. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat. Theory Methods 2010, 39, 2172–2191. [Google Scholar] [CrossRef]
  16. Su, F. Exact Likelihood Inference for Multiple Exponential Populations under Joint Censoring. Open Access Dissertations and Theses Paper 7589. Ph.D Thesis, McMaster University, Hamilton, ON, Canada, 2013. [Google Scholar]
  17. Shafay, A.R.; Balakrishnan, N.; Abdel-Aty, Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. J. Stat. Comput. Simul. 2014, 84, 2427–2440. [Google Scholar] [CrossRef]
  18. Abdel-Aty, Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint Type-II censoring. Commun. Stat. Theory Methods 2017, 46, 9026–9041. [Google Scholar] [CrossRef]
  19. Nelson, W. Applied Life Data Analysis; Wiley: New York, NY, USA, 1982. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
