Article

Bias-Corrected Maximum Likelihood Estimation and Bayesian Inference for the Process Performance Index Using Inverse Gaussian Distribution

Tzong-Ru Tsai, Hua Xin, Ya-Yen Fan and Yuhlong Lio
1 Department of Statistics, Tamkang University, Tamsui District, New Taipei City 251301, Taiwan
2 School of Mathematics and Statistics, Northeast Petroleum University, Daqing 163318, China
3 Department of Mathematical Sciences, University of South Dakota, Vermillion, SD 57069, USA
* Author to whom correspondence should be addressed.
Stats 2022, 5(4), 1079-1096; https://doi.org/10.3390/stats5040064
Submission received: 25 August 2022 / Revised: 22 October 2022 / Accepted: 3 November 2022 / Published: 5 November 2022

Abstract

In this study, the bias-corrected maximum likelihood (BCML), bootstrap BCML (B-BCML) and Bayesian (using the Jeffreys prior distribution) estimation methods were proposed for the inverse Gaussian distribution in small-sample cases to obtain the ML and Bayes estimators of the model parameters and of the process performance index defined with respect to a lower specification limit. Moreover, an approximate confidence interval and the highest posterior density interval of the process performance index were established via the delta method and Bayesian inference, respectively. To overcome the computational difficulty of sampling from the posterior distribution in the Bayesian inference, the Markov chain Monte Carlo approach was used to implement the proposed Bayesian inference procedures. Monte Carlo simulations were conducted to evaluate the performance of the proposed BCML, B-BCML and Bayesian estimation methods. An example of the active repair times for an airborne communication transceiver is used for illustration.

1. Introduction

Process capability analysis has been widely used to identify how well the outputs from an in-control process meet the requirements, specifications and expectations of customers. In practice, process capability analysis methods aim to continuously monitor the process quality by utilizing capability indices to assess whether the quality of products meets the specifications and to supply information for product design and process quality improvement. The results of process capability analysis can be the basis of cost reduction, which is attributable to the decrease in product failures; see Pan and Wu [1]. Kane [2] presented the relations of the process capability indices C_p, C_pu, C_pl and C_pk, which are, respectively, defined as
$$C_p = \frac{U - L}{6\sigma},$$
$$C_{pu} = \frac{U - \mu}{3\sigma},$$
$$C_{pl} = \frac{\mu - L}{3\sigma}$$
and
$$C_{pk} = \min\left\{\frac{U - \mu}{3\sigma},\ \frac{\mu - L}{3\sigma}\right\},$$
where μ is the population mean, σ is the population standard deviation, L is the lower specification limit and U is the upper specification limit. Kane [2] also indicated that these indices can be a complementary system of measures for evaluating the process performance. Kocherlakota [3] and Kotz and Johnson [4] studied the statistical theories for various process capability indices. Comprehensive discussions about using process capability analysis methods for quality control can be found in the works of Rodriguez [5], Palmer and Tsui [6] and Montgomery [7].
The process performance index is one of the most widely used process capability indices for evaluating the quality of lifetime data. Let X denote the product lifetime random variable and L denote a specified lower-bound threshold. The conforming rate can be defined as δ = P(X > L), and the process performance index has a close connection with the conforming rate δ. The statistical properties of the process capability indices C_p, C_pu, C_pl and C_pk under the normality assumption have been thoroughly studied in the literature. Hong et al. [8] presented analytical procedures for ML estimation and hypothesis testing to evaluate the process performance index when the lifetimes of products follow the Pareto distribution. Ahmadi et al. [9] used generalized order statistics to conduct inferential procedures for the process performance index under an exponential distribution. Lee et al. [10] proposed optimal inferential procedures to evaluate the process performance index based on progressively type-II censored samples taken from the Burr type XII distribution. Lee et al. [11] proposed an inferential procedure using uniformly minimum variance unbiased estimation and a hypothesis-testing method to evaluate the process performance index based on type-II censored samples taken from the two-parameter exponential distribution. Lee et al. [12] proposed Bayesian inference procedures to assess the process performance index based on progressively type-II censored samples under the Rayleigh distribution. Ahmadi et al. [13] proposed ML estimation methods to estimate the process performance index based on progressively first-failure censored samples when the lifetime data follow a Weibull distribution. Wu and Chiu [14] obtained fourteen different estimates of the process performance index for the two-parameter exponential distribution under a multiple type-II censoring scheme; after a simulation-based performance comparison, three of these estimates were recommended by Wu and Chiu [14] for developing hypothesis-testing procedures for the process performance index. Wu and Lin [15] proposed a statistical inference procedure for the process performance index based on type-II exponentially distributed samples. Montgomery [7] recommended using the process performance index to evaluate the quality of products. Zhu et al. [16] proposed an inferential procedure using the ML estimation method to evaluate the process performance index based on power-normal distribution samples. They also discussed the drawbacks of using the exact Fisher information matrix with a delta method under the power-normal distribution to obtain an approximate confidence interval (ACI) of the model parameters. All aforementioned works on the process performance index are summarized in Table 1 for easy reference.
Among widely used lifetime distributions, the inverse Gaussian (IG) distribution, also known as the Wald distribution, is renowned as a versatile lifetime model with a sound physical interpretation. The ML and Bayesian estimation methods have commonly been used to estimate the IG distribution parameters. Banerjee and Bhattacharyya [17] investigated the Bayesian inferential approach for estimating the IG distribution parameters with an application to equipment failure data. Amry [18] studied the Bayesian inference of the IG parameters using the Jeffreys prior under a quadratic loss function. More information about using the IG distribution for engineering applications can be found in the books by Chhikara and Folks [19] and Johnson et al. [20]. Sun and Ye [21] discussed the frequentist validity of posterior quantiles for a two-parameter exponential family that includes the IG distribution as a member. Rostamian and Nematollahi [22] studied stress–strength reliability using the ML estimation method via an expectation-maximization algorithm and the Bayesian estimation method based on progressively type-II censored IG distribution samples. A survival analysis of the IG distribution based on Bayesian and fiducial approaches was studied by Jayalath and Chhikara [23]. Bera and Jana [24] developed a bootstrap interval of stress–strength reliability assuming that the stress and strength variables are IG distributed.
Sundaraiyer [25] proposed inferential procedures to obtain the ML estimator and a bootstrap ACI of the process capability index proposed by Clements [26] when the quality variable follows the IG distribution. We use the term MLE to denote the ML estimator and the ML estimate hereafter. Investigating how to reduce the estimation bias for the process performance index based on IG distributed samples and how to obtain a reliable ACI for the process performance index is helpful for quality control applications. Balay [27] used BCMLEs of the generalized inverse Lindley distribution parameters to compute the generalized process capability index C_pyk, which was first proposed and studied by Maiti et al. [28]. In addition, Balay [27] obtained a bootstrap ACI of C_pyk. Considering the merit of the IG distribution as a versatile lifetime model with a sound physical interpretation, the IG distribution can be an alternative to the generalized inverse Lindley distribution for reliability analysis applications. The purpose of this article is twofold. First, we propose analytical procedures to obtain the BCMLEs, whose bias is O(n^{-2}) based on the bias correction method proposed by Cordeiro and Klein [29], the B-BCMLEs, and the Bayes estimators, using the Jeffreys prior distribution, for the IG distribution parameters and the investigated process performance index. Second, we establish procedures to obtain an ACI and the highest posterior density interval (HPDI) for the one-sided version of C_pyk. The HPDI is the interval with the shortest length on the posterior density at the given confidence level. We use the term BE to denote the Bayes estimator and the Bayes estimate hereafter. Because the posterior distribution in the Bayesian estimation procedure is complicated, the Markov chain Monte Carlo (MCMC) approach is used to overcome the computational difficulty of generating random samples from the posterior distribution. To our knowledge, these two aforementioned purposes have not yet been studied in the literature.
The rest of this paper is organized as follows. We address the process capability indices and define the process performance index based on the one-sided version of C_pyk in Section 2. In Section 3, we derive the inferential procedures to obtain the CK-BCMLEs and B-BCMLEs of the model parameters through the bias correction method proposed by Cordeiro and Klein [29] and the bootstrap method, respectively. A bootstrap algorithm is suggested to obtain the B-BCMLEs of the model parameters and the bootstrap ACI of the process performance index. Moreover, the Bayesian estimation procedure is developed using the Jeffreys prior distribution, and an MCMC hybrid algorithm combining Gibbs sampling and the Metropolis–Hastings algorithm is provided to obtain the BEs of the IG distribution parameters and the process performance index. Monte Carlo simulations are conducted in Section 4 to evaluate the performance of the proposed estimation methods. A real data set with 46 active repair times (in hours) for an airborne communication transceiver is given in Section 5 for illustration. Concluding remarks are given in Section 6.

2. The Generalized Process Performance Index

Based on C_p and C_pk, two generalized process capability indices can be defined as follows:
$$C_{pm} = \frac{C_p}{\sqrt{1 + \xi^2}}$$
and
$$C_{pmk} = \frac{C_{pk}}{\sqrt{1 + \xi^2}},$$
where ξ = (μ − T)/σ and T is the process target. When the quality measurements of products follow a normal distribution, C_p, C_pk, C_pm and C_pmk have been the four most widely used process capability indices in practical applications. However, non-normally distributed quality measurements, which have a skewed distribution, can be found in many works by Clements [26], Gunter [30], Constable and Hobbs [31], Mukherjee and Singh [32], Tang et al. [33] and Chen et al. [34]. Among the aforementioned studies, Clements [26] proposed two generalized versions of process capability indices that are defined by
$$C_p(q) = \frac{U - L}{X_{0.99865} - X_{0.00135}}$$
and
$$C_{pk}(q) = \min\left\{\frac{U - X_{0.5}}{X_{0.99865} - X_{0.5}},\ \frac{X_{0.5} - L}{X_{0.5} - X_{0.00135}}\right\},$$
where X_γ is the γth quantile of the quality characteristic measure X. When X follows a normal distribution, C_p(q) and C_pk(q) reduce to C_p and C_pk, respectively. C_p(q) and C_pk(q) are the two most popular process capability indices used to determine the quality of products when the distribution of the quality variable is not normal. Maiti et al. [28] proposed a new generalized version of the process capability index,
$$C_{pyk} = \min\left\{\frac{F(U) - 0.5}{0.5 - \alpha_2},\ \frac{0.5 - F(L)}{0.5 - \alpha_1}\right\},$$
where F(x) = P(X ≤ x) is the cumulative distribution function (CDF), and α_1 and α_2 are the specified lower and upper tail probabilities of F, respectively. For lifetime products, we are more concerned with whether the lifetime exceeds the lower specification limit. Hence, the one-sided version of C_pyk was used in this study as the process performance index. We denote the one-sided version of C_pyk by C_L in this study for simplicity. The C_L is defined by
$$C_L = \frac{0.5 - F(L)}{0.5 - \alpha_1}.$$

3. The Inverse Gaussian Distribution and Estimation Methods

In this section, the IG distribution is addressed in Section 3.1. Moreover, we propose the analytical procedure in Section 3.2 to obtain the BCMLEs of the model parameters based on the bias correction method proposed by Cordeiro and Klein [29]; this bias correction method is named the CK-BCML method hereafter. The bootstrap procedure used to obtain the B-BCMLEs of the IG distribution parameters is proposed in Section 3.3. Moreover, the delta method based on the Fisher information matrix is used to obtain an ACI of C_L in Section 3.4. In Section 3.5, we use the Jeffreys prior distribution and the proposed MCMC algorithm to develop the Bayesian estimation procedures for obtaining the BEs of the IG distribution parameters and C_L. Moreover, the HPDI of C_L is also obtained through the corresponding MCMC chain of C_L.

3.1. Maximum Likelihood Estimation

The probability density function (PDF) and CDF of the IG distribution are defined by
$$f(x\,|\,\Theta) = \left(\frac{\lambda}{2\pi x^3}\right)^{1/2}\exp\left[-\frac{\lambda (x-\mu)^2}{2\mu^2 x}\right],\quad x > 0,\ \mu,\lambda > 0$$
and
$$F(x\,|\,\Theta) = \Phi\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu} - 1\right)\right) + \exp\left(\frac{2\lambda}{\mu}\right)\Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu} + 1\right)\right),\quad x > 0,\ \mu,\lambda > 0,$$
respectively, where Φ(·) is the CDF of the standard normal distribution, Θ = (θ_1, θ_2) = (μ, λ), μ is the mean of the IG distribution and λ is a reciprocal measure of dispersion. Symbolically, we denote the IG distribution by X ∼ IG(Θ). The moment generating function, variance, skewness and kurtosis can be obtained, respectively, by
$$M(t) = \exp\left\{\frac{\lambda}{\mu}\left[1 - \sqrt{1 - \frac{2\mu^2 t}{\lambda}}\right]\right\},$$
$$\mathrm{Var}(X) = \frac{\mu^3}{\lambda},$$
$$\mathrm{skewness}(X) = 3\sqrt{\frac{\mu}{\lambda}},$$
$$\mathrm{kurtosis}(X) = 3 + \frac{15\mu}{\lambda}.$$
Moreover, we can obtain the mean and variance for the reciprocal of X by
$$E\left(\frac{1}{X}\right) = \frac{1}{\mu} + \frac{1}{\lambda}$$
and
$$\mathrm{Var}\left(\frac{1}{X}\right) = \frac{1}{\mu\lambda} + \frac{2}{\lambda^2}.$$
The IG distribution is positively skewed because its skewness is always positive. Johnson et al. [20] indicated that the IG distribution is a member of the exponential family and is unimodal. The mode of the IG distribution is located at
$$\mu\left[\sqrt{1 + \frac{9\mu^2}{4\lambda^2}} - \frac{3\mu}{2\lambda}\right].$$
Based on the CDF of the IG distribution, C_L can be defined by
$$C_L = \frac{0.5 - F(L\,|\,\Theta)}{0.5 - \alpha_1} = \frac{1}{0.5 - \alpha_1}\left[0.5 - \Phi\left(\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} - 1\right)\right) - e^{2\lambda/\mu}\,\Phi\left(-\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} + 1\right)\right)\right],$$
where α_1 = 0.0027 can be used as a reference value for practical applications.
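For illustration, the following Python sketch (our own illustration, not code from the paper; the helper names ig_cdf and c_l are hypothetical) evaluates the IG CDF and the resulting C_L for given μ, λ, L and α_1:

```python
# Minimal sketch (assumed helper names, not the authors' code) of F(x | Theta) and C_L.
import numpy as np
from scipy.stats import norm

def ig_cdf(x, mu, lam):
    """CDF of the inverse Gaussian distribution IG(mu, lambda)."""
    z = np.sqrt(lam / x)
    return norm.cdf(z * (x / mu - 1.0)) + np.exp(2.0 * lam / mu) * norm.cdf(-z * (x / mu + 1.0))

def c_l(mu, lam, L, alpha1=0.0027):
    """Process performance index C_L = (0.5 - F(L)) / (0.5 - alpha_1)."""
    return (0.5 - ig_cdf(L, mu, lam)) / (0.5 - alpha1)

# For mu = 8, lambda = 5 and L = 1, the nonconforming fraction F(L) is about 0.0459,
# i.e., roughly the 45,939 parts per million reported for this setting in Section 4.
print(ig_cdf(1.0, 8.0, 5.0) * 1e6, c_l(8.0, 5.0, 1.0))
```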
Let X = (X_1, X_2, ..., X_n) denote a random sample taken from the IG distribution. The log-likelihood function ℓ ≡ log(L(Θ|x)), based on the realization x = (x_1, x_2, ..., x_n) of X, can be expressed as
$$\ell = \frac{n}{2}\log(\lambda) - \frac{n}{2}\log(2\pi) - \frac{3}{2}\sum_{i=1}^{n}\log(x_i) + \lambda\left(\frac{n}{\mu} - \frac{n\bar{x}}{2\mu^2} - \frac{1}{2}\sum_{i=1}^{n}\frac{1}{x_i}\right).$$
The first derivatives of ℓ with respect to μ and λ can be obtained, respectively, as
$$\ell_1 = \frac{\partial \ell}{\partial \mu} = \frac{n\lambda}{\mu^3}(\bar{x} - \mu),$$
$$\ell_2 = \frac{\partial \ell}{\partial \lambda} = \frac{n}{2\lambda} + \frac{n}{2}\left(\frac{2}{\mu} - \frac{\bar{x}}{\mu^2} - \frac{1}{n}\sum_{i=1}^{n}\frac{1}{x_i}\right).$$
Setting ℓ_1 = 0 and ℓ_2 = 0, we can express the MLEs of μ and λ as
$$\tilde{\mu}_M = \bar{x}$$
and
$$\tilde{\lambda}_M = \left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{x_i} - \frac{1}{\bar{x}}\right)^{-1}.$$
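As a quick illustration (our own sketch, not the authors' code; the function name ig_mle is hypothetical), the closed-form MLEs can be computed directly from the sample:

```python
# Closed-form MLEs of the IG(mu, lambda) parameters:
# mu_M = x_bar and lambda_M = [ (1/n) * sum(1/x_i) - 1/x_bar ]^(-1).
import numpy as np

def ig_mle(x):
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()
    lam_hat = 1.0 / (np.mean(1.0 / x) - 1.0 / mu_hat)
    return mu_hat, lam_hat
```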
The asymptotic distribution of Θ̃_M = (μ̃_M, λ̃_M) can be expressed as
$$\tilde{\Theta}_M \xrightarrow{d} N(\Theta, A) \quad \text{as } n \to \infty,$$
where A ≡ A(Θ) = I^{-1}(Θ) is the inverse of the Fisher information matrix given in Appendix B. From Appendix B,
$$I^{-1}(\Theta) = \begin{pmatrix} \dfrac{\mu^3}{n\lambda} & 0 \\ 0 & \dfrac{2\lambda^2}{n} \end{pmatrix}.$$

3.2. The Bias-Corrected Maximum Likelihood Estimation Method

Let η_ij = E(ℓ_ij), i, j = 1, 2, as defined in Appendix B, and let
$$\eta_{ij}^{(k)} = \frac{\partial \eta_{ij}}{\partial \theta_k},$$
$$\eta_{ijk} = E\left(\frac{\partial^3 \ell}{\partial \theta_i\,\partial \theta_j\,\partial \theta_k}\right)$$
and
$$b_{ij}^{(k)} \equiv b_{ij}^{(k)}(\Theta) = \eta_{ij}^{(k)} - \frac{1}{2}\eta_{ijk}, \quad \text{for } i, j, k = 1, 2.$$
Construct the matrix B ≡ B(Θ) = [B^{(1)} | B^{(2)}] with
$$B^{(1)} = \begin{pmatrix} b_{11}^{(1)} & b_{12}^{(1)} \\ b_{21}^{(1)} & b_{22}^{(1)} \end{pmatrix}$$
and
$$B^{(2)} = \begin{pmatrix} b_{11}^{(2)} & b_{12}^{(2)} \\ b_{21}^{(2)} & b_{22}^{(2)} \end{pmatrix}.$$
From Table A3 in Appendix C, the matrix B can be expressed as
$$B = \begin{pmatrix} 0 & \dfrac{n}{2\mu^3} & -\dfrac{n}{2\mu^3} & 0 \\[4pt] \dfrac{n}{2\mu^3} & 0 & 0 & \dfrac{n}{2\lambda^3} \end{pmatrix}.$$
Define V_A = vec(A), where vec(·) is the vectorization operator that stacks the columns of A, from left to right, into a single column vector. Hence,
$$V_A = \left[\frac{\mu^3}{n\lambda},\ 0,\ 0,\ \frac{2\lambda^2}{n}\right]^T.$$
Under regularity conditions on the partial derivatives of the log-likelihood function and on η_ij, η_ijk and η_ij^{(k)}, Cordeiro and Klein [29] showed that the bias of Θ̃_M can be expressed as
$$\mathrm{bias}(\tilde{\Theta}_M) = A\,B\,V_A + O(n^{-2}).$$
Denote the obtained BCMLE of Θ by Θ̃_C = (μ̃_C, λ̃_C)^T; then, Θ̃_C can be approximated by
$$\tilde{\Theta}_C = \tilde{\Theta}_M - \tilde{A}\tilde{B}\tilde{V}_A,$$
where Ã = A(Θ̃_M), B̃ = B(Θ̃_M) and Ṽ_A = vec(Ã). We can show that
$$A\,B\,V_A = \begin{pmatrix} \dfrac{\mu^3}{n\lambda} & 0 \\ 0 & \dfrac{2\lambda^2}{n} \end{pmatrix}\begin{pmatrix} 0 & \dfrac{n}{2\mu^3} & -\dfrac{n}{2\mu^3} & 0 \\[4pt] \dfrac{n}{2\mu^3} & 0 & 0 & \dfrac{n}{2\lambda^3} \end{pmatrix}\begin{pmatrix} \dfrac{\mu^3}{n\lambda} \\ 0 \\ 0 \\ \dfrac{2\lambda^2}{n} \end{pmatrix} = \begin{pmatrix} 0 \\ \dfrac{3\lambda}{n} \end{pmatrix},$$
and the CK-BCMLE Θ̃_CK can be obtained by
$$\tilde{\Theta}_{CK} = \begin{pmatrix} \tilde{\mu}_{CK} \\ \tilde{\lambda}_{CK} \end{pmatrix} = \begin{pmatrix} \tilde{\mu}_M \\ \tilde{\lambda}_M - \dfrac{3\tilde{\lambda}_M}{n} \end{pmatrix} = \begin{pmatrix} \tilde{\mu}_M \\ \tilde{\lambda}_M\left(1 - \dfrac{3}{n}\right) \end{pmatrix}.$$
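In code, the CK-BCML correction is a one-line adjustment of the MLE of λ. The sketch below (our own illustration, reusing the ig_mle helper introduced after Section 3.1; the function name ig_ck_bcmle is hypothetical) makes this explicit:

```python
# CK-BCMLE: mu is left unchanged and lambda is shrunk by the factor (1 - 3/n).
def ig_ck_bcmle(x):
    n = len(x)
    mu_m, lam_m = ig_mle(x)
    return mu_m, lam_m * (1.0 - 3.0 / n)
```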

3.3. Bootstrap Bias-Correction Method

Bootstrap methods have been widely used to obtain ACIs for model parameters. Readers can refer to the book by Efron and Tibshirani [35] for more comprehensive information about using bootstrap methods for statistical inference. The bootstrap bias-corrected maximum likelihood (B-BCML) method can be implemented through the following steps:
Initial Step: Obtain the MLE Θ̃_M of Θ based on the working sample x = (x_1, x_2, ..., x_n).
Step 1: Generate a bootstrap sample x* = (x_1*, x_2*, ..., x_n*) from IG(Θ̃_M) and obtain the MLE Θ̃_M* of Θ based on the bootstrap sample x*.
Step 2: Repeat Step 1 B times and label the obtained MLEs by {Θ̃*_{M,j}, j = 1, 2, ..., B}. The bias of Θ̃_M can be estimated by
$$\tilde{\Theta}_{Bias} = \frac{1}{B}\sum_{j=1}^{B}\tilde{\Theta}^*_{M,j} - \tilde{\Theta}_M.$$
Step 3: The B-BCML estimate, the B-BCMLE, can be obtained by
$$\tilde{\Theta}_B = (\tilde{\mu}_B, \tilde{\lambda}_B) = \tilde{\Theta}_M - \tilde{\Theta}_{Bias} = 2\tilde{\Theta}_M - \frac{1}{B}\sum_{j=1}^{B}\tilde{\Theta}^*_{M,j}.$$
The above bootstrap method is also called the parametric bootstrap method.
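The following Python sketch implements the steps above, under the assumption that SciPy's invgauss distribution with shape μ/λ and scale λ corresponds to IG(μ, λ); the function name ig_b_bcmle is ours, and ig_mle refers to the earlier sketch:

```python
# Parametric bootstrap bias correction (B-BCML): Theta_B = 2*Theta_M - mean of bootstrap MLEs.
import numpy as np
from scipy.stats import invgauss

def ig_b_bcmle(x, B=500, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    mu_m, lam_m = ig_mle(x)                          # Initial Step: MLE from the working sample
    boot = np.empty((B, 2))
    for j in range(B):                               # Steps 1-2: resample from IG(Theta_M) and refit
        xb = invgauss.rvs(mu_m / lam_m, scale=lam_m, size=n, random_state=rng)
        boot[j] = ig_mle(xb)
    theta_m = np.array([mu_m, lam_m])
    return 2.0 * theta_m - boot.mean(axis=0)         # Step 3: bias-corrected estimate
```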

3.4. The Approximate Confidence Interval of C L

Replacing the unknown Θ in C_L by Θ̃_M, Θ̃_CK and Θ̃_B obtained above, the corresponding estimates of C_L can be presented as
$$\tilde{C}_{L,M} = \frac{1}{0.5 - \alpha_1}\left[0.5 - \Phi\left(\sqrt{\frac{\tilde{\lambda}_M}{L}}\left(\frac{L}{\tilde{\mu}_M} - 1\right)\right) - e^{2\tilde{\lambda}_M/\tilde{\mu}_M}\,\Phi\left(-\sqrt{\frac{\tilde{\lambda}_M}{L}}\left(\frac{L}{\tilde{\mu}_M} + 1\right)\right)\right],$$
$$\tilde{C}_{L,CK} = \frac{1}{0.5 - \alpha_1}\left[0.5 - \Phi\left(\sqrt{\frac{\tilde{\lambda}_{CK}}{L}}\left(\frac{L}{\tilde{\mu}_{CK}} - 1\right)\right) - e^{2\tilde{\lambda}_{CK}/\tilde{\mu}_{CK}}\,\Phi\left(-\sqrt{\frac{\tilde{\lambda}_{CK}}{L}}\left(\frac{L}{\tilde{\mu}_{CK}} + 1\right)\right)\right]$$
and
$$\tilde{C}_{L,B} = \frac{1}{0.5 - \alpha_1}\left[0.5 - \Phi\left(\sqrt{\frac{\tilde{\lambda}_B}{L}}\left(\frac{L}{\tilde{\mu}_B} - 1\right)\right) - e^{2\tilde{\lambda}_B/\tilde{\mu}_B}\,\Phi\left(-\sqrt{\frac{\tilde{\lambda}_B}{L}}\left(\frac{L}{\tilde{\mu}_B} + 1\right)\right)\right],$$
respectively.
Utilizing the delta method, the asymptotic distribution of C̃_L can be shown to be normal; that is,
$$\tilde{C}_L \xrightarrow{d} N\!\left(C_L,\ (\nabla C_L)^T I^{-1}(\Theta)(\nabla C_L)\right) \quad \text{as } n \to \infty,$$
where ∇C_L is the gradient of C_L, defined by
$$(\nabla C_L)^T = \left(\frac{\partial C_L}{\partial \mu},\ \frac{\partial C_L}{\partial \lambda}\right) = \frac{1}{0.5 - \alpha_1}\,\delta^T,$$
with δ^T = (δ_1, δ_2),
$$\delta_1 = \frac{\sqrt{\lambda L}}{\mu^2}\,\phi\!\left(\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} - 1\right)\right) + \frac{2\lambda e^{2\lambda/\mu}}{\mu^2}\,\Phi\!\left(-\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} + 1\right)\right) - \frac{\sqrt{\lambda L}}{\mu^2}\,e^{2\lambda/\mu}\,\phi\!\left(\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} + 1\right)\right)$$
and
$$\delta_2 = -\frac{1}{2\sqrt{L\lambda}}\left(\frac{L}{\mu} - 1\right)\phi\!\left(\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} - 1\right)\right) - \frac{2 e^{2\lambda/\mu}}{\mu}\,\Phi\!\left(-\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} + 1\right)\right) + \frac{e^{2\lambda/\mu}}{2\sqrt{L\lambda}}\left(\frac{L}{\mu} + 1\right)\phi\!\left(\sqrt{\frac{\lambda}{L}}\left(\frac{L}{\mu} + 1\right)\right),$$
where φ(·) denotes the PDF of the standard normal distribution.
Let δ̃_{j,h} = δ_j(Θ̃_h), j = 1, 2, h = M, CK, B, with δ̃_M^T = (δ̃_{1,M}, δ̃_{2,M}), δ̃_CK^T = (δ̃_{1,CK}, δ̃_{2,CK}) and δ̃_B^T = (δ̃_{1,B}, δ̃_{2,B}). The plug-in asymptotic variance estimate of C̃_{L,h} can be obtained as
$$\tilde{\sigma}_h^2 = \frac{1}{(0.5 - \alpha_1)^2}\,\tilde{\delta}_h^T\, I^{-1}(\tilde{\Theta}_h)\,\tilde{\delta}_h, \quad h = M, CK, B.$$
Therefore, the approximate (1 − γ) × 100% CIs of C_L can be obtained as
$$\left(\tilde{C}_{L,M} - z_{1-\gamma/2}\sqrt{\tilde{\sigma}_M^2},\ \tilde{C}_{L,M} + z_{1-\gamma/2}\sqrt{\tilde{\sigma}_M^2}\right),$$
$$\left(\tilde{C}_{L,CK} - z_{1-\gamma/2}\sqrt{\tilde{\sigma}_{CK}^2},\ \tilde{C}_{L,CK} + z_{1-\gamma/2}\sqrt{\tilde{\sigma}_{CK}^2}\right)$$
and
$$\left(\tilde{C}_{L,B} - z_{1-\gamma/2}\sqrt{\tilde{\sigma}_B^2},\ \tilde{C}_{L,B} + z_{1-\gamma/2}\sqrt{\tilde{\sigma}_B^2}\right)$$
based on the typical ML estimation method, the bias correction method proposed by Cordeiro and Klein [29] and the bootstrap bias-correction method, respectively, where z_{1−γ/2} satisfies Φ(z_{1−γ/2}) = 1 − γ/2.
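A compact way to compute these ACIs is sketched below (our own illustration; it replaces the closed-form gradient (δ_1, δ_2) with a numerical central-difference gradient of C_L, which should be adequate for illustration, and reuses the c_l helper from the earlier sketch):

```python
# Delta-method ACI for C_L, using a numerical gradient and the closed-form inverse Fisher matrix.
import numpy as np
from scipy.stats import norm

def aci_cl(mu_h, lam_h, L, n, alpha1=0.0027, gamma=0.05, eps=1e-6):
    def cl(theta):
        return c_l(theta[0], theta[1], L, alpha1)    # C_L evaluated at a parameter vector
    theta = np.array([mu_h, lam_h])
    grad = np.array([(cl(theta + eps * e) - cl(theta - eps * e)) / (2.0 * eps)
                     for e in np.eye(2)])
    inv_info = np.array([[mu_h ** 3 / (n * lam_h), 0.0],
                         [0.0, 2.0 * lam_h ** 2 / n]])
    se = np.sqrt(grad @ inv_info @ grad)             # plug-in standard error of C_L
    z = norm.ppf(1.0 - gamma / 2.0)
    est = cl(theta)
    return est - z * se, est + z * se
```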

3.5. Bayesian Estimation Method

The Fisher information matrix given in Appendix B has determinant
$$|I(\Theta)| = \frac{n^2}{2\lambda\mu^3}.$$
Hence, the Jeffreys prior distribution of Θ, which is proportional to the square root of |I(Θ)|, can be obtained as
$$\pi(\Theta) \propto \frac{1}{\sqrt{\lambda\mu^3}}.$$
The Jeffreys prior is a parameter-free, non-informative prior distribution. Hence, we can use it to obtain the BEs of μ, λ and C_L with fewer subjective assumptions. From
$$L(\Theta\,|\,x)\times\pi(\Theta) \propto \lambda^{\frac{n-1}{2}}\,\mu^{-\frac{3}{2}}\prod_{i=1}^{n} x_i^{-\frac{3}{2}}\exp\left[-\frac{\lambda}{\mu}\left(\frac{x_i}{2\mu} - 1 + \frac{\mu}{2x_i}\right)\right],$$
the posterior distribution of Θ, given x, can be expressed as
$$\pi(\Theta\,|\,x) \propto \lambda^{\frac{n-1}{2}}\,\mu^{-\frac{3}{2}}\prod_{i=1}^{n}\exp\left[-\frac{\lambda}{\mu}\left(\frac{x_i}{2\mu} - 1 + \frac{\mu}{2x_i}\right)\right].$$
Hence, the conditional distribution of μ, given λ and x, can be obtained as
$$\pi(\mu\,|\,\lambda, x) \propto \mu^{-\frac{3}{2}}\exp\left[-\frac{n\lambda}{\mu}\left(\frac{\bar{x}}{2\mu} - 1\right)\right],$$
and the conditional distribution of λ, given μ and x, can be derived as
$$\pi(\lambda\,|\,\mu, x) \propto \lambda^{\frac{n-1}{2}}\exp\left[-\frac{n\lambda}{\mu}\left(\frac{\bar{x}}{2\mu} - 1 + \frac{\mu\tilde{x}}{2}\right)\right],$$
where x̃ = (1/n)Σ_{i=1}^{n} 1/x_i. It can be noted that the conditional distributions π(μ|λ, x) and π(λ|μ, x) do not have complete analytic forms. In this study, an MCMC hybrid algorithm combining Gibbs sampling and the Metropolis–Hastings algorithm is used to obtain the BEs of μ and λ.

3.6. The MCMC Hybrid Algorithm

Step 1-Update μ:
For h ≥ 1, generate u ∼ U(0, 1) and μ^{(*)} ∼ q_1(μ^{(*)} | μ^{(h)}), where U(0, 1) is the uniform distribution over the interval (0, 1) and q_1(·) is the proposal distribution of μ. Update μ^{(h+1)} by μ^{(*)} if
$$u \le \min\left\{1,\ \frac{\pi(\mu^{(*)}\,|\,\lambda^{(h)}, x)}{\pi(\mu^{(h)}\,|\,\lambda^{(h)}, x)}\times\frac{q_1(\mu^{(h)}\,|\,\mu^{(*)})}{q_1(\mu^{(*)}\,|\,\mu^{(h)})}\right\};$$
otherwise, μ^{(h+1)} = μ^{(h)}.
Step 2-Update λ:
For h ≥ 1, generate u ∼ U(0, 1) and λ^{(*)} ∼ q_2(λ^{(*)} | λ^{(h)}), where q_2(·) is the proposal distribution of λ. Update λ^{(h+1)} by λ^{(*)} if
$$u \le \min\left\{1,\ \frac{\pi(\lambda^{(*)}\,|\,\mu^{(h+1)}, x)}{\pi(\lambda^{(h)}\,|\,\mu^{(h+1)}, x)}\times\frac{q_2(\lambda^{(h)}\,|\,\lambda^{(*)})}{q_2(\lambda^{(*)}\,|\,\lambda^{(h)})}\right\};$$
otherwise, λ^{(h+1)} = λ^{(h)}.
Step 3-Update C_L:
For h ≥ 1, compute C_L^{(h+1)} = C_L(μ^{(h+1)}, λ^{(h+1)}).
Step 4-Obtain BEs:
Repeat Step 1 to Step 3 N times, where N is a large integer. Perform a burn-in operation by removing the leading N_1 (N_1 < N) iterations of each Markov chain. Considering the squared error loss function for implementing the Bayesian estimation, the BE of each parameter is the sample mean of the remaining (N − N_1) iterations. Denote the BEs of μ, λ and C_L by μ̂_BE, λ̂_BE and Ĉ_{L,BE}, respectively.
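The sketch below illustrates one possible implementation of this hybrid algorithm (our own code, reusing the ig_mle and c_l helpers defined earlier; the log-normal random-walk proposals, the step size and the MLE starting values are illustrative assumptions, since the paper does not specify q_1 and q_2):

```python
# Metropolis-within-Gibbs sampler for (mu, lambda) under the Jeffreys prior, recording C_L.
import numpy as np

def log_cond_mu(mu, lam, xbar, n):
    # log pi(mu | lambda, x) up to an additive constant
    return -1.5 * np.log(mu) - n * lam / mu * (xbar / (2.0 * mu) - 1.0)

def log_cond_lam(lam, mu, xbar, xtil, n):
    # log pi(lambda | mu, x) up to an additive constant
    return 0.5 * (n - 1.0) * np.log(lam) - n * lam / mu * (xbar / (2.0 * mu) - 1.0 + mu * xtil / 2.0)

def mcmc_ig(x, L, alpha1=0.0027, N=51000, burn=1000, thin=10, step=0.2, seed=None):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n, xbar, xtil = len(x), x.mean(), np.mean(1.0 / x)
    mu, lam = ig_mle(x)                                 # start at the MLEs (an assumption)
    draws = []
    for _ in range(N):
        # Step 1: log-normal random-walk proposal for mu; the log(mu*/mu) term is the
        # proposal ratio q1(mu | mu*) / q1(mu* | mu) for this particular choice of q1.
        mu_new = mu * np.exp(step * rng.standard_normal())
        log_r = (log_cond_mu(mu_new, lam, xbar, n) - log_cond_mu(mu, lam, xbar, n)
                 + np.log(mu_new / mu))
        if np.log(rng.uniform()) <= min(0.0, log_r):
            mu = mu_new
        # Step 2: the same type of move for lambda
        lam_new = lam * np.exp(step * rng.standard_normal())
        log_r = (log_cond_lam(lam_new, mu, xbar, xtil, n) - log_cond_lam(lam, mu, xbar, xtil, n)
                 + np.log(lam_new / lam))
        if np.log(rng.uniform()) <= min(0.0, log_r):
            lam = lam_new
        # Step 3: record C_L at the current state
        draws.append((mu, lam, c_l(mu, lam, L, alpha1)))
    kept = np.array(draws)[burn::thin]                  # Step 4: burn-in and thinning
    return kept.mean(axis=0), kept                      # posterior means (BEs) and retained chain
```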

4. Monte Carlo Simulations

In this section, Monte Carlo simulations were conducted to evaluate the performance of the typical ML, CK-BCML, B-BCML and Bayesian estimation methods. Random samples of size n = 30 and 50 were generated from IG(μ = 8, λ = 5) and IG(μ = 10, λ = 8), respectively, to obtain the MLEs, CK-BCMLEs, B-BCMLEs and BEs of μ and λ. The MLE, CK-BCMLE, B-BCMLE and BE of the process performance index were also obtained based on the plug-in method for L = 0.5, 0.6, 0.8 and 1. When μ = 8 and λ = 5, the true values of the process performance index are C_L = 1.0043, 0.9957, 0.9644 and 0.9173 for L = 0.5, 0.6, 0.8 and 1, respectively, and the corresponding parts per million are 2876, 7130, 22,625 and 45,939. When μ = 10 and λ = 8, the true values of the process performance index are C_L = 1.0098, 1.0089, 1.0033 and 0.9898 for L = 0.5, 0.6, 0.8 and 1, respectively, and the corresponding parts per million are 138, 568, 3389 and 10,068.
A bootstrap repetition of B = 500 was used to implement the parametric bootstrap bias-correction method. For the Bayesian estimation, we first generated N = 51,000 Markov iterations to implement the MCMC method, and the leading N_1 = 1000 iterations were removed as the burn-in operation. Second, a spacing (thinning) operation that retains one of every ten iterations was used to reduce the autocorrelation in each Markov chain. Finally, the remaining 5000 iterations were used to obtain the BEs of the parameters μ and λ and of the process performance index (PPI).
The measures of relative bias (rbias) and relative root mean squared error (rRMSE) were evaluated using 10,000 iteration runs. Assume that the target parameter is θ and the obtained estimates are θ̃_j, j = 1, 2, ..., 10,000; the rbias and rRMSE are defined by
$$\mathrm{rbias} = \frac{1}{\theta}\times\frac{\sum_{j=1}^{10{,}000}(\tilde{\theta}_j - \theta)}{10{,}000}$$
and
$$\mathrm{rRMSE} = \frac{1}{\theta}\times\sqrt{\frac{\sum_{j=1}^{10{,}000}(\tilde{\theta}_j - \theta)^2}{10{,}000}}.$$
The rbias and rRMSE are scale-free measures. All simulation results for estimating model parameters are reported in Table 2 and Table 3.
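For completeness, the two measures can be computed as follows (a small sketch of our own; rbias and rrmse are hypothetical function names):

```python
# Scale-free performance measures used in Section 4.
import numpy as np

def rbias(estimates, theta):
    return np.mean(np.asarray(estimates) - theta) / theta

def rrmse(estimates, theta):
    return np.sqrt(np.mean((np.asarray(estimates) - theta) ** 2)) / theta
```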
From Table 2 and Table 3, we can find that the estimation quality of the MLE of μ is good. The derivation in Section 3.2 shows that the CK-BCMLE of μ is the same as the typical MLE; that is, μ̃_M = μ̃_CK. Hence, the rbias and rRMSE values of μ̃_M are also close to those of the B-BCMLE μ̃_B. However, we can find that the BCMLEs λ̃_CK and λ̃_B outperform the MLE λ̃_M, with smaller rbias and rRMSE, when the sample size is small. Because the proposed Bayesian estimation method is developed with the parameter-free non-informative Jeffreys prior distribution, the performance of the BE can be fairly compared with that of its competitor, the MLE. In Table 2 and Table 3, we also find that the rbias of μ̃_BE is small but larger than the rbias of μ̃_M, and the rRMSE of λ̃_BE is larger than but close to the rRMSE of λ̃_M. Based on the findings in this study, the ML estimation method can be less reliable for estimating the process performance index, and the ACI of the process performance index obtained via the MLEs with the delta method and the exact Fisher information matrix may fail to attain the nominal coverage. We will study the coverage probabilities (CPs) of the ACI and HPDI of the process performance index to show the good performance of the proposed Bayesian estimation method.
To verify the performance of the delta and Bayesian estimation methods for the interval inference of the PPI under small sample sizes, we established the 95% ACI and HPDI for the PPI at the lower specification limits L = 0.5, 0.6, 0.8 and 1. Moreover, the mean lower bounds and mean upper bounds of the 95% ACI and HPDI of the PPI and their CPs were evaluated based on 10,000 iterations. We use the terms mLB and mUB to denote the mean lower bounds and mean upper bounds, respectively. All simulation results are summarized in Table 4 and Table 5.
In view of Table 4 and Table 5, we find that a small sample size has an impact on the quality of the interval inference for the PPI. When n ≤ 50, the CP of the 95% ACI of C_L is below its nominal value. In particular, for small L and n = 30, the CP of the 95% ACI of C_L falls seriously below the nominal value of 0.95 for the delta method combined with the ML, CK-BCML and B-BCML methods. The delta method with the ML method performs worst among all competitors. When the sample size increases, the performance of the delta method with the ML, CK-BCML and B-BCML methods is significantly improved. The proposed Bayesian estimation method outperforms all of the delta-method competitors in terms of the CP of the HPDI for the process performance index, even when the sample size and L are small. In summary, we recommend using the proposed Bayesian estimation method to obtain the HPDI of the process performance index for the IG distribution.

5. An Example

A maintenance data set concerning the active repair times for an airborne communication transceiver is used to illustrate the applications of the proposed estimation methods. This data set contains 46 repair times (in hours) as follows: 0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.5, 1.5, 2.0, 2.0, 2.2, 2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4, 7.0, 7.5, 8.8, 9.0, 10.3, 22.0, 24.5. This data set of active repair times was initially investigated by Chhikara and Folks [36]. They obtained the MLEs μ̃_M = 3.607 and λ̃_M = 1.659 and used the Kolmogorov–Smirnov test to show that the IG distribution can characterize this data set well. The data set was also analyzed using Bayesian estimation methods in many studies after Chhikara and Folks [36]; for example, Sinha [37], Betrò and Rotondi [38] and Jayalath and Chhikara [23].
Using the proposed methods in Section 3 for the data set of active repair times with L = 0.2, we can obtain the MLEs, CK-BCMLEs, B-BCMLEs and BEs of μ, λ and C_L. The proposed Bayesian estimation method was implemented with N = 50,000, and the first N_1 = 1000 generated estimates were removed as the burn-in operation. The spacing operation, which selects one of every ten generated estimates, was used to thin the Markov chains and reduce the first-order autocorrelation in every Markov chain. To check the quality of the MCMC method, the Markov chain plots based on the 5000 generated estimates of μ̃_BE and λ̃_BE are given in Figure A1 and Figure A2, respectively. The first-order autocorrelation coefficients based on the 5000 generated estimates of μ̃_BE and λ̃_BE are 0.046 and 0.06, respectively. We note that both first-order autocorrelation coefficients are close to 0. These findings indicate that the generated Markov chains are almost independent chains.
From Table 6, we can find that the B-BCMLE and BE of μ are slightly larger than the MLE and CK-BCMLE of μ . The MLE and BE of λ are larger than two BCMLEs of λ . The BE of C L is the smallest one among the four obtained estimates. The parts per million based on the MLE, CK-BCMLE, B-BCMLE and BE of C L are 6232, 8160, 7813 and 6071, respectively.
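As a usage check (our own sketch, reusing the ig_mle and ig_ck_bcmle helpers introduced in Section 3), applying the estimators to the 46 repair times should approximately reproduce the ML and CK-BCML estimates in Table 6:

```python
# Active repair times (in hours) from Chhikara and Folks [36].
repair_times = [0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8,
                1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.5, 1.5, 2.0, 2.0, 2.2,
                2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4,
                7.0, 7.5, 8.8, 9.0, 10.3, 22.0, 24.5]
print(ig_mle(repair_times))        # expected to be close to (3.607, 1.659)
print(ig_ck_bcmle(repair_times))   # CK-corrected lambda expected to be close to 1.551
```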
Table 7 reports the 95% ACI and HPDI of C_L obtained via the delta method and the Bayesian estimation method, respectively. The two proposed BCML methods generate close ACIs. We also find that the lower limit of the ACI based on the ML method is slightly larger than the lower limits of the ACIs based on the two proposed BCML methods and the lower limit of the HPDI based on the proposed Bayesian method.

6. Concluding Remarks

Considering the restriction of the sample resource when evaluating the quality of lifetime products, we used the bias correction estimation method proposed by Cordeiro and Klein [29] and the bootstrap bias correction method to improve the estimation quality of the typical ML estimation method for estimating the process performance index under the IG distribution. We derived the exact forms of the CK-BCMLEs and provided an algorithm to obtain the B-BCMLEs of the IG model parameters and the process performance index. The delta method was used to obtain an ACI for the process performance index. Moreover, a Bayesian estimation procedure was proposed to obtain the BEs of the IG model parameters and the process performance index. The HPDI of the process performance index was obtained via the proposed MCMC hybrid algorithm combining Gibbs sampling and the Metropolis–Hastings algorithm.
An intensive simulation study was conducted to compare the performance of the proposed CK-BCML, B-BCML and Bayesian estimation methods with the typical ML estimation method. Simulation results show that the ACIs based on the delta method with the MLEs, CK-BCMLEs and B-BCMLEs performed less satisfactorily, with seriously underestimated CPs, when the sample size and lower specification limit were small. The proposed Bayesian estimation method can be an alternative to the delta method for obtaining a reliable HPDI for the process performance index when the sample size is small. Because the Jeffreys prior distribution is a parameter-free prior, the proposed Bayesian estimation method is less subjective and easy to implement. A data set composed of 46 active repair times for an airborne communication transceiver was used to illustrate the applications of the proposed methods.
To save testing time and cost, censoring schemes are often adopted in engineering applications to collect censored lifetime data. Extending the two proposed BCML estimation methods and the Bayesian estimation procedures to the IG distribution with different censoring schemes is interesting and can be studied in the future.

Author Contributions

Conceptualization, T.-R.T. and Y.L.; methodology, T.-R.T., H.X. and Y.L.; software, Y.-Y.F. and T.-R.T.; validation, T.-R.T., H.X. and Y.L.; formal analysis, H.X.; investigation, T.-R.T., H.X., Y.-Y.F. and Y.L.; resources, H.X. and Y.-Y.F.; data curation, T.-R.T., H.X. and Y.L.; writing—original draft preparation, T.-R.T., Y.-Y.F. and Y.L.; writing—review and editing, T.-R.T. and Y.L.; visualization, T.-R.T. and Y.L.; supervision, T.-R.T. and Y.L.; project administration, T.-R.T. and Y.L.; funding acquisition, T.-R.T. and H.X. All authors have read and agreed to the published version of the manuscript.

Funding

Ministry of Science and Technology, Taiwan, grant number MOST 110-2221-E-032-034-MY2; National Natural Science Foundation of China, grant number 52174060.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in Section 5 were obtained from Chhikara and Folks [36].

Acknowledgments

This research was funded by the Ministry of Science and Technology, Taiwan, grant number MOST 110-2221-E-032-034-MY2, and the National Natural Science Foundation of China, grant number 52174060. We thank the reviewers and editors for their excellent suggestions, which improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Markov Chain Plots

Figure A1. The Markov chain with 5000 iterations of μ̃_BE after burn-in.
Figure A2. The Markov chain with 5000 iterations of λ̃_BE after burn-in.

Appendix B. Fisher Information Matrix

The second derivatives of ℓ with respect to μ and λ can be obtained, respectively, as
$$\ell_{11} = \frac{\partial^2 \ell}{\partial \mu^2} = \frac{n\lambda}{\mu^3}\left(2 - \frac{3\bar{x}}{\mu}\right),$$
$$\ell_{22} = \frac{\partial^2 \ell}{\partial \lambda^2} = -\frac{n}{2\lambda^2},$$
$$\ell_{12} = \ell_{21} = \frac{\partial^2 \ell}{\partial \mu\,\partial \lambda} = \frac{n}{\mu^3}(\bar{x} - \mu).$$
Let η_ij = E(ℓ_ij), i, j = 1, 2. It is trivial to show that η_12 = η_21 = 0. Moreover, we can obtain the following results:
$$\eta_{11} = -\frac{n\lambda}{\mu^3}$$
and
$$\eta_{22} = -\frac{n}{2\lambda^2}.$$
The Fisher information matrix can be presented as
$$I(\Theta) = -\begin{pmatrix} \eta_{11} & \eta_{12} \\ \eta_{21} & \eta_{22} \end{pmatrix} = \begin{pmatrix} \dfrac{n\lambda}{\mu^3} & 0 \\ 0 & \dfrac{n}{2\lambda^2} \end{pmatrix}.$$

Appendix C. For MLE Bias Correction

Using the expressions for ℓ_ij and η_ij, i, j = 1, 2, the analytic forms of η_ijk and η_ij^{(k)}, i, j, k = 1, 2, are obtained and reported in Table A1 and Table A2. The entries of the sub-matrices B^{(1)} and B^{(2)} are evaluated and reported in Table A3.
Table A1. The entries of η_ijk, i, j, k = 1, 2.
$$\eta_{111} = E\left(\frac{\partial \ell_{11}}{\partial \mu}\right) = \frac{6n\lambda}{\mu^4}; \qquad \eta_{112} = E\left(\frac{\partial \ell_{11}}{\partial \lambda}\right) = -\frac{n}{\mu^3};$$
$$\eta_{121} = E\left(\frac{\partial \ell_{12}}{\partial \mu}\right) = -\frac{n}{\mu^3}; \qquad \eta_{122} = E\left(\frac{\partial \ell_{12}}{\partial \lambda}\right) = 0;$$
$$\eta_{211} = E\left(\frac{\partial \ell_{21}}{\partial \mu}\right) = -\frac{n}{\mu^3}; \qquad \eta_{212} = E\left(\frac{\partial \ell_{21}}{\partial \lambda}\right) = 0;$$
$$\eta_{221} = E\left(\frac{\partial \ell_{22}}{\partial \mu}\right) = 0; \qquad \eta_{222} = E\left(\frac{\partial \ell_{22}}{\partial \lambda}\right) = \frac{n}{\lambda^3}.$$
Table A2. The entries of η_ij^{(k)}, i, j, k = 1, 2.
$$\eta_{11}^{(1)} = \frac{\partial \eta_{11}}{\partial \mu} = \frac{3n\lambda}{\mu^4}; \qquad \eta_{12}^{(1)} = \frac{\partial \eta_{12}}{\partial \mu} = 0;$$
$$\eta_{21}^{(1)} = \frac{\partial \eta_{21}}{\partial \mu} = 0; \qquad \eta_{22}^{(1)} = \frac{\partial \eta_{22}}{\partial \mu} = 0;$$
$$\eta_{11}^{(2)} = \frac{\partial \eta_{11}}{\partial \lambda} = -\frac{n}{\mu^3}; \qquad \eta_{12}^{(2)} = \frac{\partial \eta_{12}}{\partial \lambda} = 0;$$
$$\eta_{21}^{(2)} = \frac{\partial \eta_{21}}{\partial \lambda} = 0; \qquad \eta_{22}^{(2)} = \frac{\partial \eta_{22}}{\partial \lambda} = \frac{n}{\lambda^3}.$$
Table A3. The entries of B.
$$b_{11}^{(1)} = \eta_{11}^{(1)} - \frac{1}{2}\eta_{111} = \frac{3n\lambda}{\mu^4} - \frac{6n\lambda}{2\mu^4} = 0;$$
$$b_{12}^{(1)} = \eta_{12}^{(1)} - \frac{1}{2}\eta_{121} = 0 + \frac{n}{2\mu^3} = \frac{n}{2\mu^3};$$
$$b_{21}^{(1)} = \eta_{21}^{(1)} - \frac{1}{2}\eta_{211} = 0 + \frac{n}{2\mu^3} = \frac{n}{2\mu^3};$$
$$b_{22}^{(1)} = \eta_{22}^{(1)} - \frac{1}{2}\eta_{221} = 0;$$
$$b_{11}^{(2)} = \eta_{11}^{(2)} - \frac{1}{2}\eta_{112} = -\frac{n}{\mu^3} + \frac{n}{2\mu^3} = -\frac{n}{2\mu^3};$$
$$b_{12}^{(2)} = \eta_{12}^{(2)} - \frac{1}{2}\eta_{122} = 0;$$
$$b_{21}^{(2)} = \eta_{21}^{(2)} - \frac{1}{2}\eta_{212} = 0;$$
$$b_{22}^{(2)} = \eta_{22}^{(2)} - \frac{1}{2}\eta_{222} = \frac{n}{\lambda^3} - \frac{n}{2\lambda^3} = \frac{n}{2\lambda^3}.$$

References

  1. Pan, J.; Wu, S. Process capability analysis for non-normal relay test data. Microelectron. Reliab. 1997, 37, 421–428. [Google Scholar] [CrossRef]
  2. Kane, V.E. Process capability indices. J. Qual. Technol. 1986, 18, 41–52. [Google Scholar] [CrossRef]
  3. Kocherlakota, S. Process capability index: Recent developments. Sankhya 1992, 54B, 352–362. [Google Scholar]
  4. Kotz, S.; Johnson, N.L. Process Capability Indices; Chapman & Hall: New York, NY, USA, 1993. [Google Scholar]
  5. Rodriguez, R.N. Recent developments in process capability analysis. J. Qual. Technol. 1992, 24, 176–187. [Google Scholar] [CrossRef]
  6. Palmer, K.; Tsui, K.-L. A review and interpretations of process capability. Ann. Oper. Res. 1999, 87, 31–47. [Google Scholar] [CrossRef]
  7. Montgomery, D.C. Introduction to Quality Control; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2020. [Google Scholar]
  8. Hong, C.-W.; Wu, J.-W.; Cheng, C.-H.; Cheng, A. Computational procedure of performance assessment of lifetime index of businesses for the pareto lifetime model with the right type II censored sample. Appl. Math. Comput. 2007, 184, 336–350. [Google Scholar] [CrossRef]
  9. Ahmadi, M.V.; Doostparast, M.; Ahmadi, J. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution. Int. J. Syst. Sci. 2015, 46, 1094–1107. [Google Scholar] [CrossRef]
  10. Lee, W.-C.; Wu, J.-W.; Hong, C.-W. Assessing the lifetime performance index of products from progressively type II right censored data using Burr XII model. Math. Comput. Simul. 2009, 79, 2167–2179. [Google Scholar] [CrossRef]
  11. Lee, H.-M.; Wu, J.-W.; Lei, C.-L.; Hung, W.L. Implementing lifetime performance index of products with two-parameter exponential distribution. Int. J. Syst. Sci. 2011, 42, 1305–1321. [Google Scholar] [CrossRef]
  12. Lee, W.-C.; Wu, J.-W.; Hong, M.-L. Assessing the lifetime performance index of Rayleigh products based on the Bayesian estimation under progressive type II right censored samples. J. Comput. Appl. Math. 2011, 235, 1676–1688. [Google Scholar] [CrossRef]
  13. Ahmadi, M.V.; Doostparast, M.; Ahmadi, J. Estimating the lifetime performance index with Weibull distribution based on progressive first-failure censoring scheme. J. Comput. Appl. Math. 2013, 239, 93–102. [Google Scholar] [CrossRef]
  14. Wu, S.-F.; Chiu, C.-J. Computational testing algorithmic procedure of assessment for lifetime performance index of products with two-parameter exponential distribution based on the multiply type II censored sample. J. Stat. Comput. Simul. 2014, 84, 2016–2122. [Google Scholar] [CrossRef]
  15. Wu, S.-F.; Lin, Y.-P. Computational testing algorithmic procedure of assessment for lifetime performance index of products with one-parameter exponential distribution under progressive type I interval censoring. Math. Comput. Simul. 2016, 120, 79–90. [Google Scholar] [CrossRef]
  16. Zhu, J.; Xin, H.; Zheng, C.; Tsai, T.-R. Inference for the process performance index of products on the basis of power-normal distribution. Mathematics 2022, 10, 35. [Google Scholar] [CrossRef]
  17. Banerjee, A.K.; Bhattacharyya, G.K. Bayesian results for the inverse Gaussian distribution with an application. Technometrics 1979, 21, 247–251. [Google Scholar] [CrossRef]
  18. Amry, Z. Bayes estimator for inverse Gaussian distribution. SCIRFA J. Math. 2021, 6, 44–50. [Google Scholar]
  19. Chhikara, R.S.; Folks, J.L. The Inverse Gaussian Distribution, Theory: Methodology and Applications; CRC Press: Boca Raton, FL, USA, 1988. [Google Scholar]
  20. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley & Sons: New York, NY, USA, 1995; Volume 2. [Google Scholar]
  21. Sun, D.; Ye, K. Frequentist validity of posterior quantiles for a two-parameter exponential family. Biometrika 1996, 83, 55–65. [Google Scholar] [CrossRef]
  22. Rostamian, S.; Nematollahi, N. Estimation of stress-strength reliability in the inverse Gaussian distribution under progressively type II censored data. Math. Sci. 2019, 13, 175–191. [Google Scholar] [CrossRef] [Green Version]
  23. Jayalath, K.P.; Chhikara, R.S. Survival analysis for the inverse Gaussian distribution with the Gibbs sampler. J. Appl. Stat. 2022, 49, 656–675. [Google Scholar] [CrossRef]
  24. Bera, S.; Jana, N. Estimating reliability parameters for inverse Gaussian distributions under complete and progressively type-II censored samples. Qual. Technol. Quant. Manag. 2022. [Google Scholar] [CrossRef]
  25. Sundaraiyer, V.H. Estimation of a process capability index for inverse Gaussian distribution. Commun. Stat.-Theory Methods 1996, 25, 2381–2393. [Google Scholar] [CrossRef]
  26. Clements, J.A. Process Capability Calculations for Non-Normal Distributions. Qual. Prog. 1989, 24, 95–100. [Google Scholar]
  27. Balay, I.G. Estimation of the generalized process capability index Cpyk based on bias-corrected maximum-likelihood estimators for the generalized inverse Lindley distribution and bootstrap confidence intervals. J. Stat. Comput. Simul. 2021, 91, 1960–1979. [Google Scholar] [CrossRef]
  28. Maiti, S.S.; Mahendra, S.; Asok, K.N. On generalizing process capability indices. Qual. Technol. Quant. Manag. 2010, 7, 279–300. [Google Scholar] [CrossRef]
  29. Cordeiro, G.M.; Klein, R. Bias correction in ARMA models. Stat. Probab. Lett. 1994, 19, 169–176. [Google Scholar] [CrossRef]
  30. Gunter, B.H. The use and abuse of Cpk-Part I. Qual. Prog. 1989, 22, 72–73. [Google Scholar]
  31. Constable, G.K.; Hobbs, J.R. Small samples and non-normal capability. In ASQC Quality Congress Transactions; New York, NY, USA, 1992; pp. 37–43. [Google Scholar]
  32. Mukherjee, S.P.; Singh, N.K. Sampling properties of an estimator of a new process capability index for Weibull distributed quality characteristics. Qual. Eng. 1997, 10, 291–294. [Google Scholar] [CrossRef]
  33. Tang, L.C.; Goh, T.N.; Yam, H.S. Six Sigma: Advanced Tools for Black Belts and Master Black Belts; John Wiley & Sons: New York, NY, USA, 2007. [Google Scholar]
  34. Chen, P.; Wan, B.X.; Yeh, Z.-S. Yield-based process capability indices for nonnormal continuous data. J. Qual. Technol. 2019, 51, 171–180. [Google Scholar] [CrossRef]
  35. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman & Hall: New York, NY, USA, 1993. [Google Scholar]
  36. Chhikara, R.S.; Folks, J.L. The inverse Gaussian distribution as a lifetime model. Technometrics 1977, 19, 461–468. [Google Scholar] [CrossRef]
  37. Sinha, S.K. Bayesian estimation of the reliability function of the inverse Gaussian distribution. Stat. Probab. Lett. 1986, 4, 319–323. [Google Scholar] [CrossRef]
  38. Betrò, B.; Rotondi, R. On Bayesian inference for the inverse Gaussian distribution. Stat. Probab. Lett. 1991, 11, 219–224. [Google Scholar] [CrossRef]
Table 1. Summary of estimation methods for the process performance index.

Reference | Method | Sampling | Distribution
Hong et al. [8] | MLE | Type-II | Pareto
Ahmadi et al. [9] | Statistical inference | Generalized order statistics | Exponential
Lee et al. [10] | Optimal inferences | Progressively type-II | Burr type XII
Lee et al. [11] | UMVUE | Type-II | Two-parameter exponential
Lee et al. [12] | Bayesian inference | Progressively type-II | Rayleigh
Ahmadi et al. [13] | MLE | Progressively first-failure | Weibull
Wu and Chiu [14] | Fourteen estimates | A multiple type-II | Two-parameter exponential
Wu and Lin [15] | Statistical inference | Type-II | Exponential
Zhu et al. [16] | MLE | Random sample | Power-normal
Table 2. The rbias and rRMSEs (in parentheses) of different estimators when μ = 8 and λ = 5.

L | n | μ̃_M | λ̃_M | λ̃_CK | μ̃_B | λ̃_B | μ̃_BE | λ̃_BE
0.5 | 30 | −0.0020 (0.2293) | 0.1098 (0.3318) | −0.0012 (0.2818) | −0.0021 (0.2296) | −0.0135 (0.2789) | 0.1044 (0.3700) | 0.1084 (0.3312)
0.5 | 50 | −0.0016 (0.1772) | 0.0632 (0.2341) | −0.0006 (0.2119) | −0.0016 (0.1772) | −0.0046 (0.2114) | 0.0489 (0.2047) | 0.0625 (0.2339)
0.6 | 30 | 0.0001 (0.2317) | 0.1182 (0.3423) | 0.0064 (0.2891) | 0.0001 (0.2317) | −0.0063 (0.2860) | 0.1080 (0.3814) | 0.1168 (0.3415)
0.6 | 50 | 0.0016 (0.1794) | 0.0635 (0.2309) | −0.0003 (0.2086) | 0.0017 (0.1797) | −0.0044 (0.2081) | 0.0524 (0.2074) | 0.0629 (0.2307)
0.8 | 30 | −0.0030 (0.2313) | 0.1112 (0.3347) | 0.0001 (0.2841) | −0.0033 (0.2315) | −0.0124 (0.2814) | 0.1008 (0.3503) | 0.1099 (0.3341)
0.8 | 50 | −0.0008 (0.1775) | 0.0590 (0.2283) | −0.0045 (0.2074) | −0.0008 (0.1775) | −0.0086 (0.2066) | 0.0499 (0.2059) | 0.0584 (0.2283)
1 | 30 | 0.0023 (0.2311) | 0.1065 (0.3302) | −0.0042 (0.2812) | 0.0022 (0.2313) | −0.0164 (0.2784) | 0.1117 (0.3869) | 0.1052 (0.3297)
1 | 50 | 0.0002 (0.1792) | 0.0612 (0.2317) | −0.0025 (0.2102) | 0.0001 (0.1792) | −0.0065 (0.2095) | 0.0508 (0.2059) | 0.0606 (0.2315)
Table 3. The rbias and rRMSEs (in parentheses) of different estimators when μ = 10 and λ = 8.

L | n | μ̃_M | λ̃_M | λ̃_CK | μ̃_B | λ̃_B | μ̃_BE | λ̃_BE
0.5 | 30 | 0.0005 (0.2042) | 0.1083 (0.3320) | −0.0025 (0.2825) | 0.0004 (0.2045) | −0.0147 (0.2798) | 0.0741 (0.2744) | 0.1069 (0.3309)
0.5 | 50 | −0.0027 (0.1562) | 0.0710 (0.2373) | 0.0067 (0.2128) | −0.0030 (0.1562) | 0.0025 (0.2121) | 0.0348 (0.1726) | 0.0704 (0.2371)
0.6 | 30 | 0.0006 (0.2017) | 0.1088 (0.3292) | −0.0021 (0.2796) | 0.0006 (0.2020) | −0.0143 (0.2775) | 0.0738 (0.2717) | 0.1076 (0.3286)
0.6 | 50 | 0.0004 (0.1575) | 0.0641 (0.2349) | 0.0003 (0.2124) | 0.0003 (0.1578) | −0.0038 (0.2119) | 0.0386 (0.1749) | 0.0635 (0.2345)
0.8 | 30 | 0.0053 (0.2049) | 0.1094 (0.3372) | −0.0015 (0.2871) | 0.0053 (0.2052) | −0.0137 (0.2843) | 0.0805 (0.2768) | 0.1081 (0.3367)
0.8 | 50 | 0.0022 (0.1584) | 0.0619 (0.2341) | −0.0018 (0.2124) | 0.0023 (0.1584) | −0.0058 (0.2117) | 0.0412 (0.1924) | 0.0612 (0.2339)
1 | 30 | 0.0009 (0.2040) | 0.1124 (0.3349) | 0.0011 (0.2839) | 0.0007 (0.2040) | −0.0112 (0.805) | 0.0743 (0.2717) | 0.1111 (0.3342)
1 | 50 | −0.0009 (0.1584) | 0.0640 (0.2287) | 0.0001 (0.2064) | −0.0009 (0.1584) | −0.0040 (0.2064) | 0.0370 (0.1752) | 0.0633 (0.2285)
Table 4. The mLBs, mUBs and CPs for the 95% ACI and HPDI when μ = 8 and λ = 5.

L | n | ML (mLB, mUB; CP) | CK-BCML (mLB, mUB; CP) | B-BCML (mLB, mUB; CP) | Bayesian (mLB, mUB; CP)
0.5 | 30 | 0.984, 1.020; 0.794 | 0.972, 1.024; 0.882 | 0.971, 1.024; 0.890 | 0.970, 1.009; 0.933
0.5 | 50 | 0.989, 1.016; 0.836 | 0.984, 1.018; 0.899 | 0.983, 1.018; 0.902 | 0.982, 1.009; 0.938
0.6 | 30 | 0.957, 1.029; 0.816 | 0.938, 1.034; 0.896 | 0.936, 1.034; 0.903 | 0.942, 1.009; 0.926
0.6 | 50 | 0.966, 1.021; 0.868 | 0.957, 1.023; 0.918 | 0.956, 1.023; 0.920 | 0.959, 1.008; 0.935
0.8 | 30 | 0.873, 1.051; 0.877 | 0.837, 1.057; 0.936 | 0.832, 1.058; 0.940 | 0.868, 1.004; 0.931
0.8 | 50 | 0.892, 1.033; 0.920 | 0.873, 1.034; 0.949 | 0.872, 1.034; 0.951 | 0.895, 1.000; 0.934
1 | 30 | 0.756, 1.077; 0.914 | 0.705, 1.083; 0.956 | 0.699, 1.083; 0.959 | 0.783, 0.991; 0.926
1 | 50 | 0.790, 1.043; 0.945 | 0.764, 1.043; 0.969 | 0.762, 1.043; 0.971 | 0.817, 0.981; 0.930
Table 5. The mLBs, mUBs and CPs for the 95% ACI and HPDI when μ = 10 and λ = 8.

L | n | ML (mLB, mUB; CP) | CK-BCML (mLB, mUB; CP) | B-BCML (mLB, mUB; CP) | Bayesian (mLB, mUB; CP)
0.5 | 30 | 1.007, 1.011; 0.732 | 1.005, 1.012; 0.836 | 1.004, 1.012; 0.846 | 1.003, 1.010; 0.932
0.5 | 50 | 1.008, 1.011; 0.767 | 1.007, 1.011; 0.846 | 1.007, 1.011; 0.849 | 1.006, 1.011; 0.931
0.6 | 30 | 1.002, 1.014; 0.759 | 0.997, 1.016; 0.856 | 0.996, 1.016; 0.864 | 0.994, 1.010; 0.933
0.6 | 50 | 1.004, 1.012; 0.803 | 1.002, 1.013; 0.872 | 1.002, 1.013; 0.876 | 1.000, 1.010; 0.934
0.8 | 30 | 0.978, 1.024; 0.810 | 0.964, 1.029; 0.895 | 0.962, 1.030; 0.900 | 0.966, 1.009; 0.934
0.8 | 50 | 0.984, 1.019; 0.856 | 0.978, 1.021; 0.913 | 0.977, 1.021; 0.916 | 0.979, 1.009; 0.938
1 | 30 | 0.933, 1.041; 0.851 | 0.907, 1.049; 0.917 | 0.903, 1.050; 0.924 | 0.925, 1.008; 0.930
1 | 50 | 0.946, 1.030; 0.899 | 0.934, 1.032; 0.934 | 0.933, 1.033; 0.943 | 0.946, 1.007; 0.938
Table 6. The MLEs, CK-BCMLEs, B-BCMLEs and BEs of μ, λ and PPI for L = 0.2.

Method | μ | λ | C_L
ML | 3.607 | 1.659 | 0.998
CK-BCML | 3.607 | 1.551 | 0.994
B-BCML | 3.646 | 1.567 | 1.000
Bayesian | 3.872 | 1.657 | 0.993
Table 7. The 95% ACI and HPDI of PPI for L = 0.2.

Method | Lower Limit | Upper Limit | Length
ML | 0.976 | 1.020 | 0.044
CK-BCML | 0.966 | 1.021 | 0.055
B-BCML | 0.968 | 1.021 | 0.053
Bayesian | 0.962 | 1.009 | 0.047
