Article

Statistical Inference for Progressive Stress Accelerated Life Testing with Birnbaum-Saunders Distribution

Department of Mathematical Sciences, University of Texas at El Paso, El Paso, TX 79968, USA
Current address: 500W University Ave., El Paso, TX 79968, USA.
Stats 2018, 1(1), 189-203; https://doi.org/10.3390/stats1010014
Submission received: 11 November 2018 / Revised: 7 December 2018 / Accepted: 10 December 2018 / Published: 12 December 2018

Abstract

The two-parameter Birnbaum–Saunders (BS) distribution has been successful in modelling fatigue failure times, and several extensions of the model have been explored from different perspectives. In this article, we consider progressive stress accelerated life testing for the BS model, which induces a generalized Birnbaum–Saunders distribution (we call it Type-II GBS) for the lifetime of products in the test. We outline some interesting properties of this highly flexible distribution, present the Fisher's information used in the maximum likelihood estimation method, and propose a new Bayesian approach for inference. Simulation studies are carried out to assess the performance of the methods under various settings of parameter values and sample sizes. Real data are analyzed for illustrative purposes to demonstrate the efficiency and accuracy of the proposed Bayesian method over the likelihood-based procedure.

1. Introduction

The Birnbaum–Saunders (BS) model is based on a physical argument of cumulative damage that produces fatigue in materials, which is exerted by cyclical stress. The failure follows from the development and growth of a dominant crack in the material. Considering the basic characteristics of the fatigue process, Ref. [1] derived the BS(α, β) distribution function of the failure time T as:
F(t) = Φ( (1/α) [ (t/β)^{1/2} − (β/t)^{1/2} ] ),   t > 0,    (1)
where α > 0, β > 0 are shape and scale parameters, respectively, and Φ(·) is the distribution function of the standard normal variate. The BS distribution can be widely applied to describe fatigue life and lifetimes in general. Its field of application has extended beyond the original context of material fatigue, making it one of the most versatile of the popular failure-time distributions. Over the years, various approaches to parameter inference, generalizations, and applications of the distribution have been introduced and developed by many authors (see [2,3,4,5,6,7,8], just to name a few).
In recent years, much research has focused on accelerated life testing (ALT) with the BS distribution. In industrial experiments, it is often very costly and time consuming to obtain information about the lifetime of highly reliable products under normal operating conditions. To collect failure data rapidly and improve the efficiency of the experiment, one commonly applies ALT, in which products or materials are subjected to stress conditions elevated relative to those normally encountered in practice. These stresses often include temperature, voltage, vibration, cycling rate, pressure, etc., and can be applied in mainly three ways: constant stress, step stress, and progressive stress. Many authors have contributed extensively to the development of, and parameter inference for, ALT with various lifetime distributions; see, for example, [9,10,11,12,13]. An ALT for the BS distribution was considered in [14], which developed the model under the inverse power law accelerated form and explored the inference procedure based on lifetime data collected under several elevated stress levels. The work in [15] presented an inference approach for the BS distribution in step-stress accelerated life testing (SSALT) with the Type-II censoring scheme. In practice, to make a more efficient test plan for collecting the lifetimes of highly reliable products, one usually resorts to an ALT with a progressive stress, and we will consider such a model for the BS distribution.

2. Progressive Stress Accelerated Life Testing

We briefly introduce the SSALT first and then extend it to the progressive stress model for the BS distribution.

2.1. Step-Stress Test

Suppose that, for a particular pattern of SSALT with I steps in total, step i runs at stress level V_i, starting at time t_{i−1} and running to time t_i (t_0 = 0), i = 1, 2, …, I. The life distribution of specimens at stress level V_i is assumed to be BS(α, β_i), with the distribution function F_i(t) in (1), that is, F_i(t) = Φ( [ (u_i(t))^{1/2} − (u_i(t))^{−1/2} ] / α ) with u_i(t) = t/β_i, where α is the common shape parameter and the stress V_i influences only the scale parameter β_i through the inverse power law, i.e., β_i = (V_0/V_i)^p, with a standard level of stress V_0 > 0 and a power p > 0 associated only with the characteristics of the products. Accordingly, based on the principle of cumulative exposure (CE) [16], the population cumulative fraction of specimens failing in step i is:
F(t) = F_i(t − t_{i−1} + s_{i−1}),   t_{i−1} ≤ t ≤ t_i,   i = 1, 2, …, I,    (2)
where the equivalent start time s_{i−1} at step i satisfies:
F_i(s_{i−1}) = F_{i−1}(t_{i−1} − t_{i−2} + s_{i−2}).    (3)
It brings about [u_i(s_{i−1})]^{1/2} − [u_i(s_{i−1})]^{−1/2} = [u_{i−1}(t_{i−1} − t_{i−2} + s_{i−2})]^{1/2} − [u_{i−1}(t_{i−1} − t_{i−2} + s_{i−2})]^{−1/2}, leading to the following recursive expression from u_i(s_{i−1}) = u_{i−1}(t_{i−1} − t_{i−2} + s_{i−2}) by the fact that g(u) = u^{1/2} − u^{−1/2} is an increasing function in u > 0:
s_{i−1}/β_i = (t_{i−1} − t_{i−2})/β_{i−1} + s_{i−2}/β_{i−1}.    (4)
By using the recursive equation above with the time duration Δ_i = t_i − t_{i−1} at the ith step, the population fraction having failed in (2) over the exposure time t_I = Δ_1 + Δ_2 + ⋯ + Δ_I after I steps is:
F(t_I) = F_I(t_I − t_{I−1} + s_{I−1}) = Φ( (1/α) [ (u(t_I))^{1/2} − (u(t_I))^{−1/2} ] ),    (5)
where the “cumulative exposure” u(t_I) is the sum of the “scaled” times experienced at each step, given by:
u(t_I) = (Δ_1/β_1) + (Δ_2/β_2) + ⋯ + (Δ_I/β_I).    (6)
As all time lengths Δ_i → 0, the step-stress levels V_i become a progressive or continuously varying stress V(t) over time in ALT; as a result, the scale parameter β(V) = (V_0/V)^p is a function of time, namely β(t), and the corresponding cumulative exposure u(t) (or damage), which appears as a sum in (6) for a step-stress test, becomes the integral u(t) = ∫_0^t 1/β(x) dx. Specifically, here, we consider a ramp stress V(t) = Rt with R > 0 being the rate of rise of stress; the scale parameter then becomes β(t) = [V_0/V(t)]^p = V_0^p/(R^p t^p). The corresponding cumulative exposure of products at time t is:
u(t) = ∫_0^t dx/β(x) = ∫_0^t (R^p x^p / V_0^p) dx = R^p t^{p+1} / [(p+1) V_0^p],    (7)
and the lifetime distribution of products under the progressive stress V(t) becomes:
F(t) = Φ( (1/α) [ (u(t))^{1/2} − (u(t))^{−1/2} ] ) = Φ( (1/α) [ (t/β)^m − (β/t)^m ] ),    (8)
where:
m = (p+1)/2,   β = [ (p+1) V_0^p / R^p ]^{1/(p+1)}.    (9)
The distribution in (8) is a generalization of the Birnbaum–Saunders distribution (GBS) first proposed in [17] by allowing the exponent (1/2 in the BS) to take on other values. Recently, noting that the original BS distribution can be derived from a homogeneous Poisson process, [18] obtained the same GBS based on a non-homogeneous Poisson process. To distinguish it from the GBS developed in [19], we refer to the distribution as Type-II GBS, denoted GBS-II(m, α, β), whose density function is given by:
f(t) = (m / (α t)) [ (t/β)^m + (β/t)^m ] ϕ( (1/α) [ (t/β)^m − (β/t)^m ] ),   t > 0,    (10)
where m > 0 is another shape-type parameter and ϕ(·) is the density function of the standard normal distribution. As with the BS distribution, the transformation Z = ( (T/β)^m − (β/T)^m )/α is a standard normal variate, and thus it is easy to generate GBS-II random variates from standard normal variables. In the following, we briefly summarize the main properties of the distribution.
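Regarding the generation just mentioned: solving Z = ((T/β)^m − (β/T)^m)/α for T gives T = β[(αZ + √(α²Z² + 4))/2]^{1/m}, so only standard normal draws are needed. The following is a minimal sketch of such a generator in Python (the function name and the parameter values in the example are illustrative, not from the paper):

```python
import numpy as np

def rgbs2(n, m, alpha, beta, seed=None):
    """Generate n GBS-II(m, alpha, beta) variates from standard normal draws.

    Uses T = beta * ((alpha*Z + sqrt(alpha^2 Z^2 + 4)) / 2)**(1/m) with Z ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    return beta * ((alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) / 2.0) ** (1.0 / m)

# Quick check: the sample median of GBS-II draws should be close to beta.
t = rgbs2(100_000, m=1.0, alpha=1.0, beta=2.0, seed=42)
print(np.median(t))  # approximately 2.0
```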

2.2. Properties of GBS-II

The three-parameter GBS-II distribution in (10) is a flexible family of distributions, and the shape of the density varies widely with different values of the parameters. Specifically (the detailed proof is long and so omitted here): (i) when 0 < m ≤ 1, the density is unimodal or upside-down bathtub shaped; (ii) when m > 1 and α² ≤ m/(m−1), the density is also unimodal (upside-down); (iii) if m > 1 and α² > m/(m−1), then the density is either unimodal or bimodal. Figure 1 shows various graphs of the density function for different values of m and α with the scale parameter β fixed at unity. Additionally, Figure 2 presents the failure rate function, given by λ(t) = f(t)/(1 − F(t)), for various values of m and α, showing that it can have an increasing, decreasing, bathtub, or upside-down bathtub shape. Hence, the distribution appears flexible enough to model various situations of product life.
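For readers who wish to reproduce such curves, the density, distribution function, and failure rate can be coded directly; a small sketch (SciPy's standard normal routines are used; the parameter values shown are illustrative):

```python
import numpy as np
from scipy.stats import norm

def gbs2_pdf(t, m, alpha, beta):
    """GBS-II(m, alpha, beta) density f(t) in (10), for t > 0."""
    a, b = (t / beta) ** m, (beta / t) ** m
    return m / (alpha * t) * (a + b) * norm.pdf((a - b) / alpha)

def gbs2_cdf(t, m, alpha, beta):
    """GBS-II(m, alpha, beta) distribution function F(t) in (8)."""
    return norm.cdf(((t / beta) ** m - (beta / t) ** m) / alpha)

def gbs2_hazard(t, m, alpha, beta):
    """Failure rate lambda(t) = f(t) / (1 - F(t)); the survival part uses norm.sf."""
    return gbs2_pdf(t, m, alpha, beta) / norm.sf(((t / beta) ** m - (beta / t) ** m) / alpha)

# Example grid for plotting hazard shapes; m = 1/2 recovers the BS failure rate.
t_grid = np.linspace(0.05, 5.0, 200)
lam = gbs2_hazard(t_grid, m=0.5, alpha=1.0, beta=1.0)
```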
The GBS-II shares several interesting properties with the BS distribution. For example, β remains the median of the distribution. The reciprocal property is also preserved, in that if T ∼ GBS-II(m, α, β), then T^{−1} ∼ GBS-II(m, α, β^{−1}). In fact, the GBS-II describes the distribution family of power transformations of a BS random variable: if T ∼ BS(α, β), then for any nonzero real-valued constant r, T^r ∼ GBS-II(0.5|r|^{−1}, α, β^r), where |·| denotes the absolute value. Conversely, given T ∼ GBS-II(m, α, β), then T^{2m} ∼ BS(α, β^{2m}). In addition, similar to the BS distribution, which can be written as an equal mixture of an inverse normal distribution and the distribution of the reciprocal of an inverse normal random variable [20], the GBS-II distribution can also be expressed as a mixture of power inverse normal-type distributions [21].
Regarding the numeric characteristics of the GBS-II, there is generally no analytic form for the moments except in some special cases. For example, from the fact that T^m = β^m (αZ + √(α²Z² + 4))/2 with a standard normal variate Z, one may easily obtain the moments E(T^{km}) for an even number k, such as E(T^{2m}) = β^{2m}(1 + α²/2), E(T^{4m}) = β^{4m}(1 + 2α² + 3α⁴/2), etc. The general moment expression was obtained in [22], using the relationship between the GBS-II and the three-parameter sinh-normal distribution (see [23]):
E(T^r) = β^r [ exp(α^{−2}) / (α √(2π)) ] [ K_{(r/m+1)/2}(α^{−2}) + K_{(r/m−1)/2}(α^{−2}) ],    (11)
where r is a real number and K_ω(x) is the order-ω modified Bessel function of the third kind, which can be expressed in integral form as K_ω(x) = ∫_0^∞ exp{−x cosh(s)} cosh(ωs) ds with the hyperbolic cosine function cosh(x) = [exp(x) + exp(−x)]/2. Numerous software packages can be used to evaluate K_ω(x) for specific values of ω and x when calculating moments such as the mean and variance. The finite moments guarantee the rationality of moment-based estimation methods.
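For instance, the moment formula in (11) can be evaluated with any routine for the modified Bessel function of the third kind; a brief sketch using SciPy (scipy.special.kv computes K_ω(x); the parameter values below are illustrative):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the third kind, K_omega(x)

def gbs2_moment(r, m, alpha, beta):
    """E(T^r) for T ~ GBS-II(m, alpha, beta), via the Bessel-function formula (11)."""
    x = alpha ** (-2)
    const = beta ** r * np.exp(x) / (alpha * np.sqrt(2.0 * np.pi))
    return const * (kv((r / m + 1.0) / 2.0, x) + kv((r / m - 1.0) / 2.0, x))

m, alpha, beta = 1.5, 0.8, 2.0
# Sanity check against the closed form E(T^{2m}) = beta^{2m} (1 + alpha^2 / 2).
print(gbs2_moment(2 * m, m, alpha, beta), beta ** (2 * m) * (1 + alpha ** 2 / 2))
# Mean and variance follow from the first two moments.
mean = gbs2_moment(1.0, m, alpha, beta)
var = gbs2_moment(2.0, m, alpha, beta) - mean ** 2
```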
To date, little research has studied parameter estimation and inference for the GBS-II distribution. The work in [21] discussed the maximum likelihood (ML) estimation of the parameters and provided interval estimation based on the “observed” Fisher's information. To contribute toward precise inference, in this article, we further investigate the ML method to obtain an analytic expression of the Fisher's information and propose a new Bayesian approach for parameter inference. The rest of the article is arranged as follows. Section 3 presents the methodology of the estimation procedures. Subsequently, we carry out simulation studies to investigate the performance of the proposed methods in Section 4. For illustrative purposes, two real datasets are analyzed in Section 5, followed by some concluding remarks in Section 6.

3. Estimations

Let t = (t_1, t_2, …, t_n) be n observations from the GBS-II distribution. We first consider the likelihood-based approach in the following.

3.1. Likelihood-Based Method

To make the notation simple, let ε(t) = (t/β)^m − (β/t)^m and δ(t) = (t/β)^m + (β/t)^m. Then, the likelihood and log-likelihood functions, up to a constant, are given by:
L(m, α, β | t) = (m^n / α^n) ∏_{i=1}^n δ(t_i) · exp{ −(1/(2α²)) Σ_{i=1}^n ε²(t_i) },    (12)
ℓ = ℓ(m, α, β | t) = n log(m) − n log(α) + Σ_{i=1}^n log(δ(t_i)) − (1/(2α²)) Σ_{i=1}^n ε²(t_i),    (13)
and the first partial derivatives with respect to the parameters are:
∂ℓ/∂α = −n/α + (1/α³) Σ_{i=1}^n ε²(t_i),   ∂ℓ/∂β = −(m/β) [ Σ_{i=1}^n ε(t_i)/δ(t_i) − (1/α²) Σ_{i=1}^n ε(t_i) δ(t_i) ],    (14)
∂ℓ/∂m = n/m + Σ_{i=1}^n log(β^{−1} t_i) ε(t_i)/δ(t_i) − (1/α²) Σ_{i=1}^n log(β^{−1} t_i) ε(t_i) δ(t_i).    (15)
Due to the complexity of the expressions above, a numerical method has to be applied to obtain the maximum likelihood estimates (MLEs) m̂, α̂, β̂ by solving the equations ∂ℓ/∂m = 0, ∂ℓ/∂α = 0 and ∂ℓ/∂β = 0 simultaneously. The work in [21] used the “observed” Fisher's information matrix to obtain large-sample interval estimates for the parameters. However, our study shows that a “partially” analytic form of it can be obtained, so that more precise inference can be made. Let the Fisher's information matrix have the following form:
I(m, α, β) =
⎛ u_mm   u_mα   u_mβ ⎞
⎜ u_αm   u_αα   u_αβ ⎟ ,    (16)
⎝ u_βm   u_βα   u_ββ ⎠
whose elements are the negative expectations of the second partial derivatives of the log-likelihood function in (13) with respect to the parameters, such as u_αα = −E(∂²ℓ/∂α²), u_αβ = −E(∂²ℓ/∂α∂β), etc. These elements are given by (the detailed derivations are provided in Appendix A):
u_αα = 2n/α²,   u_αβ = u_βα = u_βm = u_mβ = 0,    (17)
u_αm = u_mα = −(2n/(α² m)) E[ Z g(Z) √(α²Z² + 4) ],    (18)
u_ββ = (2n m² / (α² β²)) [ α² − 2α h(α) + 2 ],    (19)
u_mm = n/m² − (4n/m²) E[ g²(Z)/(α²Z² + 4) ] + (2n/(m² α²)) E[ g²(Z)(α²Z² + 2) ],    (20)
where Z is the standard normal variate, g(z) = log[ (αz + √(α²z² + 4))/2 ] and h(α) = √(π/2) e^{2/α²} [1 − Φ(2/α)]. Therefore, the asymptotic normal distribution of the MLEs has the following covariance matrix:
I^{−1}(m, α, β) =
⎛  u_αα/c    −u_mα/c    0      ⎞
⎜ −u_αm/c     u_mm/c    0      ⎟ ,    (21)
⎝  0          0         1/u_ββ ⎠
with c = u_mm u_αα − u_mα². Notice from (21) that the MLE β̂ is asymptotically independent of m̂ and α̂. It is also clear that when m = 1/2, the GBS-II reduces to the BS distribution, whose Fisher's information is the lower 2 × 2 block in (16) with u_ββ = n[ α²/2 − α h(α) + 1 ]/(αβ)², the same as the one provided in [4].
Since the parameters are positive and the MLEs are asymptotically normal, we may use the log transformation to obtain approximate confidence intervals (CIs) for the parameters [24]. In particular, for the parameter m and its MLE m̂, we have the approximate normal distribution log(m̂) ∼ N(log(m), Var(log(m̂))), where the variance can be approximated by the delta method as V̂ar(log(m̂)) = V̂ar(m̂)/m̂² = û_αα/(ĉ m̂²), with û_αα and ĉ being the values of u_αα and c, respectively, evaluated at the MLEs m̂, α̂, β̂. A (1 − γ)100% CI for m is then given by:
( m̂ × exp{ −z_{γ/2} √(V̂ar(m̂)) / m̂ },   m̂ × exp{ z_{γ/2} √(V̂ar(m̂)) / m̂ } ),    (22)
where z_{γ/2} is the upper 100 × γ/2 th percentile of the standard normal distribution. The normal-approximation CIs for α and β can be constructed in the same way, taking the form in (22) with m̂ replaced by α̂ and β̂, respectively, and with V̂ar(α̂) = û_mm/ĉ and V̂ar(β̂) = 1/û_ββ. It should be noted that these intervals can exhibit less accurate coverage for small samples.
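To summarize the likelihood-based procedure in code, the sketch below maximizes (13) numerically over the log-parameters and forms the log-scale intervals of (22). For brevity it approximates the variances with the observed information returned by the optimizer rather than the expected (Fisher's) information derived above; the starting values and function names are only illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, t):
    """Negative GBS-II log-likelihood (13); theta = (log m, log alpha, log beta)."""
    m, alpha, beta = np.exp(theta)
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    n = t.size
    return -(n * np.log(m) - n * np.log(alpha)
             + np.sum(np.log(delta)) - np.sum(eps ** 2) / (2.0 * alpha ** 2))

def fit_gbs2(t, m0=0.5, a0=1.0, level=0.95):
    """MLEs of (m, alpha, beta) with log-scale normal-approximation CIs."""
    t = np.asarray(t, dtype=float)
    start = np.log([m0, a0, np.median(t)])        # beta is the median: natural start
    res = minimize(neg_loglik, start, args=(t,), method="BFGS")
    se_log = np.sqrt(np.diag(res.hess_inv))       # observed-information approximation
    z = norm.ppf(0.5 + level / 2.0)
    mle = np.exp(res.x)
    ci = np.exp(np.column_stack([res.x - z * se_log, res.x + z * se_log]))
    return mle, ci                                # rows ordered as (m, alpha, beta)
```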

3.2. Bayesian Inference

Since the MLEs and CIs have no tractable closed forms, the ML estimation method is not very convenient or efficient to apply. We propose a Bayesian inference approach as an alternative. From the model development of the GBS-II distribution in [18], we notice that the parameter m is independent of the other two parameters α and β, which are related through α² ∝ β, and so we propose a joint prior π(m, α, β) = π(m) π(α | β) π(β) with the conditional prior mean E(α² | β) ∝ β. From the likelihood function in (12), it is easily discernible that for α², an inverse gamma is a conjugate prior for the conditional likelihood L(α | m, β, t). Therefore, we specify α² | β ∼ IG(a_0/2, a_1β/2), and thus the prior density of α | β is:
π(α | β) ∝ α^{−(a_0+1)} exp{ −a_1 β / (2α²) },    (23)
with the hyperparameters a_0 > 4 and a_1 > 0 to ensure the existence of the variance. It is also clear that there is no conjugate prior for m or β. However, we may consider prior distributions that have functional forms similar to their conditional likelihood functions. In this case, we pick both priors of m and β to be a gamma distribution,
m ∼ Gamma(d_0/2, d_1/2),   β ∼ Gamma(b_0/2, b_1/2),    (24)
with b_0, b_1, d_0, d_1 > 0. The hyperparameter values can be specified based on the following considerations: (i) Since β is the median of the GBS-II distribution, we can refer to the sample median β̃ = median(t_1, t_2, …, t_n) as its estimate in our attempt to specify b_0 and b_1. Further, by the fact that E(Z) = 0 and E(Z²) = 1 for Z = [(T/β)^m − (β/T)^m]/α ∼ N(0, 1), and by the principle of the method of moments, we establish two equations by equating the first two sample and theoretical moments (with β estimated by β̃), Σ_{i=1}^n [ (t_i/β̃)^m − (β̃/t_i)^m ] = 0 and Σ_{i=1}^n [ (t_i/β̃)^m − (β̃/t_i)^m ]² = nα², whose roots m̃ and α̃ are used to specify a_0, a_1, d_0 and d_1. (ii) The MLEs m̂, α̂, β̂ can also be used to determine the hyperparameters. A non-informative prior with a_i = b_i = d_i = 0, i = 0, 1, can be chosen if no prior knowledge is available. The joint posterior distribution of the parameters (m, α, β) given the sample data t is:
π(m, α, β | t) ∝ L(m, α, β | t) × π(m) π(α | β) π(β).    (25)
It follows that the full conditional posteriors are:
π(m | α, β, t) ∝ m^{n + d_0/2 − 1} ∏_{i=1}^n δ(t_i) · exp{ −(1/2) [ d_1 m + Σ_{i=1}^n ε²(t_i)/α² ] },    (26)
α² | (m, β, t) ∼ IG(ν_0/2, ν_1/2),   ν_0 = n + a_0,   ν_1 = a_1 β + Σ_{i=1}^n ε²(t_i),    (27)
π(β | m, α, t) ∝ β^{b_0/2 − 1} ∏_{i=1}^n δ(t_i) · exp{ −(1/2) [ b_1 β + ν_1/α² ] }.    (28)
We implement a Markov chain Monte Carlo (MCMC) algorithm, specifically a Gibbs sampling procedure (see, for example, Casella and George [25]), to draw posterior samples from these full conditional posterior distributions. We take the MLEs of m, α and β as the initial values to make the algorithm converge more quickly, and then repeat the following steps M times, where, given the values at the kth iteration, the (k+1)th iteration proceeds as follows:
  • Draw m_{k+1} from π(m | α_k, β_k, t) in (26) using a Metropolis–Hastings (MH) procedure (see, for example, Chib and Greenberg [26]). We first propose m_p ∼ Gamma(c_m m_k, c_m), whose mean is m_k and where c_m is a tuning parameter to make the algorithm efficient, and then take m_{k+1} = m_p with probability:
    λ_m = min{ 1, [ π(m_p | α_k, β_k, t) × q_m(m_k | m_p) ] / [ π(m_k | α_k, β_k, t) × q_m(m_p | m_k) ] },
    with the Gamma proposal density function q_m(·), so that:
    q_m(m_k | m_p) / q_m(m_p | m_k) = [ Γ(c_m m_k) c_m^{c_m m_p} / ( Γ(c_m m_p) c_m^{c_m m_k} ) ] × [ m_k^{c_m m_p − 1} exp{−c_m m_k} ] / [ m_p^{c_m m_k − 1} exp{−c_m m_p} ].
  • Draw α²_{k+1} ∼ IG(ν_0/2, ν_1/2) with ν_0 and ν_1 in (27) updated at the current parameter values.
  • Draw β_{k+1} from π(β | m_{k+1}, α_{k+1}, t) in (28) using an MH procedure. We propose β_p from a lognormal distribution centered at the previous value, i.e., log β_p ∼ N(log β_k, c_β² σ_k²), where c_β > 0 is a tuning parameter. For sampling efficiency, we wish to specify a proposal distribution that closely resembles the conditional posterior of β. This consideration prompts us to specify the variance term σ_k², evaluated at the updated values (m_{k+1}, α_{k+1}, β_k), as the reciprocal of the Fisher information of the conditional posterior of log β, whose log-density is log π(β | m_{k+1}, α_{k+1}, t) + log|J|. The Jacobian term J = β is needed here because we make a log transformation on β:
    σ_k² = { −E[ ∂²( log π(β | m_{k+1}, α_{k+1}, t) + log|J| ) / ∂(log β)² ] }^{−1} |_{m = m_{k+1}, α = α_{k+1}, β = β_k}
         = [ −β_k² E( ∂²ℓ/∂β² ) + a_1 β_k / (2α²_{k+1}) + b_1 β_k / 2 ]^{−1}
         = [ β_k² u_{β_k β_k} + a_1 β_k / (2α²_{k+1}) + b_1 β_k / 2 ]^{−1},
    where u_{β_k β_k} = [ 2n m²_{k+1} / (α²_{k+1} β_k²) ] [ α²_{k+1} − 2α_{k+1} h(α_{k+1}) + 2 ] from Equation (19). Finally, take β_{k+1} = β_p with probability:
    λ_β = min{ 1, [ π(β_p | m_{k+1}, α_{k+1}, t) × q_β(β_k | β_p) ] / [ π(β_k | m_{k+1}, α_{k+1}, t) × q_β(β_p | β_k) ] },
    with the proposal density q_β(·), for which q_β(β_k | β_p) / q_β(β_p | β_k) = β_p / β_k.
Posterior inference for the parameters m, α and β is based on their posterior samples (m_k, α_k, β_k), k = 1, 2, …, M, e.g., posterior means and credible intervals (CIs).
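A compact sketch of this sampler is given below. To keep it short, it uses flat priors (a_i = b_i = d_i = 0) and replaces the tailored gamma and lognormal proposals described above with simple log-scale random-walk MH updates for m and β; the step size and all names are illustrative only:

```python
import numpy as np

def log_cond_m(m, alpha2, beta, t, d0=0.0, d1=0.0):
    """log pi(m | alpha, beta, t) up to a constant, from (26)."""
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    return ((t.size + d0 / 2.0 - 1.0) * np.log(m) + np.sum(np.log(delta))
            - 0.5 * (d1 * m + np.sum(eps ** 2) / alpha2))

def log_cond_beta(beta, m, alpha2, t, a1=0.0, b0=0.0, b1=0.0):
    """log pi(beta | m, alpha, t) up to a constant, from (28)."""
    eps = (t / beta) ** m - (beta / t) ** m
    delta = (t / beta) ** m + (beta / t) ** m
    nu1 = a1 * beta + np.sum(eps ** 2)
    return ((b0 / 2.0 - 1.0) * np.log(beta) + np.sum(np.log(delta))
            - 0.5 * (b1 * beta + nu1 / alpha2))

def gibbs_gbs2(t, init, n_iter=10_000, a0=0.0, a1=0.0, step=0.1, seed=None):
    """Posterior draws of (m, alpha, beta): IG draw for alpha^2, MH for m and beta."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    m, alpha, beta = init                          # e.g., the MLEs
    draws = np.empty((n_iter, 3))
    for k in range(n_iter):
        # (1) m | alpha, beta: random-walk MH on log m (Jacobian term m_p/m).
        m_p = m * np.exp(step * rng.standard_normal())
        log_r = (log_cond_m(m_p, alpha**2, beta, t) - log_cond_m(m, alpha**2, beta, t)
                 + np.log(m_p) - np.log(m))
        if np.log(rng.uniform()) < log_r:
            m = m_p
        # (2) alpha^2 | m, beta: inverse gamma draw from (27).
        eps = (t / beta) ** m - (beta / t) ** m
        nu0, nu1 = t.size + a0, a1 * beta + np.sum(eps ** 2)
        alpha = np.sqrt(1.0 / rng.gamma(nu0 / 2.0, 2.0 / nu1))
        # (3) beta | m, alpha: random-walk MH on log beta (Jacobian term b_p/beta).
        b_p = beta * np.exp(step * rng.standard_normal())
        log_r = (log_cond_beta(b_p, m, alpha**2, t) - log_cond_beta(beta, m, alpha**2, t)
                 + np.log(b_p) - np.log(beta))
        if np.log(rng.uniform()) < log_r:
            beta = b_p
        draws[k] = m, alpha, beta
    return draws
```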

4. Simulation Study

We conduct a simulation study to assess the performance of parameter estimation by the ML and Bayesian methods, where we fix the scale parameter β = 1 and take four settings of the other two parameters: (m, α) = (0.25, 0.50), (0.50, 0.50), (1.00, 1.00), (1.50, 2.00). We generate 1000 datasets for each of these parameter settings with three sample sizes n = 20, 30, 50. For the Bayesian analysis, we choose the hyperparameter values a_0 = a_1 = b_0 = b_1 = d_0 = d_1 = 0, such that the prior distributions are rather “flat” or “less informative”, reflecting little prior knowledge about the parameters. For each simulated dataset, we find that the rule-of-thumb tuning parameter values c_m = 15 and c_β = 2.4 are adequate to keep the acceptance rates around 35–40%, and we run five MCMC chains with fairly different initial values, each with a burn-in period of 2000 followed by 8000 iterations. The scale reduction factor estimate R̂F = √( V̂ar(θ)/W ) is used to monitor the convergence of the MCMC simulations [27], where θ is the estimand of the parameter of interest and V̂ar(θ) = (N − 1)W/N + B/N, with N the number of iterations per chain and B and W the between- and within-sequence variances. The scale factors for the sequences of m, α, and β are within 1.00–1.02 for all five MCMC chains, indicating their convergence. The remaining 8000 samples are used to compute the average biases, the mean squared errors (MSE) of the estimates, the average lengths (AL) of the 95% credible intervals (CI), and the coverage probabilities (CP) for the parameters. The results are shown in Table 1, along with the corresponding estimates from the ML method for comparison purposes. The main features are summarized as follows: (i) as expected, the bias of the estimates, the MSE, and the AL of the 95% CI decrease, and the CP moves closer to the nominal level, as the sample size n increases in all cases; (ii) the estimation of all parameters by the Bayesian method is much better than by the ML approach in terms of smaller biases and MSEs, narrower CIs, and higher CPs, and the MLE α̂ does not perform well for the small to moderate sample sizes (n = 20, 30); (iii) comparatively, both methods estimate β much more accurately, m less precisely, and α least precisely, and with the larger sample size (n = 50), the performances of the β estimates are similar for both methods; (iv) the estimate of α appears to have a smaller MSE under a smaller true value of α, whereas the estimate of m has a smaller MSE under a larger true m. Overall, the Bayesian inference method outperforms the ML approach for all parameter settings, especially when the sample size is small.
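For reference, a short sketch of the scale reduction factor computation used above (chains stored as an iterations-by-chains array, burn-in already discarded):

```python
import numpy as np

def scale_reduction_factor(chains):
    """Gelman-Rubin scale reduction factor for one scalar parameter.

    `chains` is an (N, C) array of N post-burn-in draws from each of C chains.
    """
    n, _ = chains.shape
    w = chains.var(axis=0, ddof=1).mean()      # within-sequence variance W
    b = n * chains.mean(axis=0).var(ddof=1)    # between-sequence variance B
    var_hat = (n - 1) / n * w + b / n          # pooled variance estimate Var(theta)
    return np.sqrt(var_hat / w)

# Values near 1 (e.g., below 1.02, as in the study above) indicate convergence.
```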

5. Real Data Analysis

To further illustrate the usefulness of our methods for parameter inference in the GBS-II, we consider a real data example given by [28] on breakdown times in an accelerated test that employed a pair of parallel disk electrodes immersed in an insulating oil. The voltage V across the pair was increased linearly with time t at a specified rate R, and the breakdown time was recorded for a one-square-inch electrode. The data are presented in Table 2, consisting of 60 measured breakdown times (seconds). Fitting the GBS-II distribution with the Bayesian method, we choose the hyperparameters b_0 and b_1 such that the prior mean of β is close to the sample median 4.3 of the data, a_0 and a_1 such that the conditional prior mean E(α² | β = 4.3) is close to the MLE α̂² = 1.3656, and the CV (coefficient of variation) of the gamma or inverse gamma priors is close to the CV of the standard uniform distribution (1/√3). Hence, we have the following hyperparameter values: a_0 = 10, a_1 = 2.54, b_0 = 0.67, b_1 = 0.16, d_0 = d_1 = 1. We run a chain of 20,000 iterations with a burn-in period of 5000. To reduce the correlation among the samples, every fifth sample of the remaining 15,000 samples is used for posterior inference. The results are tabulated in Table 3, where, due to the relatively large sample size (n = 60), the point estimates obtained by the ML and Bayesian methods are close to each other. However, the 95% CIs of the Bayesian method are narrower than those of the ML approach, especially the intervals for m and α. Additionally, the Chi-squared goodness-of-fit statistic and BIC values of the model fit by the Bayesian method are smaller than those by the ML method, indicating greater accuracy of the Bayesian analysis. Based on the estimates of m and β, the given standard level of stress V_0 = 42.30, and the relation in (9), the power p in the inverse power law and the rate of rise of voltage R are computed, respectively from the ML and Bayesian approaches, as p̂ = 8.9455, 8.4045 and R̂ = 11.0734, 10.1067. Our results (especially from the Bayesian method) are close to the estimates obtained by [28], who fitted a Weibull distribution to these data. For graphical comparison, Figure 3 shows the histogram of the data and the fitted GBS-II density curves estimated by both methods.
The second real dataset was presented in [29] and consists of active repair times (in hours) for an airborne communications transceiver. To illustrate the estimation performance with a small sample size, we randomly select 20 repair times out of the total of 46 observations, giving the following data: 0.3, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 1.0, 1.3, 1.5, 1.5, 2.0, 2.2, 2.5, 4.0, 4.7, 5.0, 7.5, 8.8, 22.0. Modeling the data by the GBS-II distribution, we adopt the same procedure as discussed above to specify the hyperparameter values, using the sample median 1.5 of the data and the MLE α̂² = 2.83. In summary, we choose the following hyperparameter values: a_0 = 10, a_1 = 15.07, b_0 = 0.67, b_1 = 1, d_0 = d_1 = 1. An MCMC chain of 20,000 iterations with a burn-in period of 5000 produces the estimation results of the Bayesian approach, which are tabulated in Table 4 together with the results of the ML method. For these data with a relatively small sample size (n = 20), the 95% CIs produced by the Bayesian method are much narrower, and its Chi-squared goodness-of-fit statistic and BIC values are much smaller. Finally, for illustration, Figure 4 shows that the fitted GBS-II density curve estimated by the Bayesian method fits the histogram better than the one obtained by ML estimation. These outcomes demonstrate that the proposed Bayesian method produces much more accurate inference under the small sample size.

6. Concluding Remarks

We presented a Bayesian inference approach for parameter estimation in progressive stress accelerated life testing with the Birnbaum–Saunders (BS) model, which induces the Type-II generalized Birnbaum–Saunders (GBS-II) distribution as the lifetime distribution of products in the test. We summarized the properties of the GBS-II and studied its Fisher's information used in the likelihood-based inference method. The simulation study demonstrated that the Bayesian method outperforms the traditional likelihood-based approach, with especially efficient and impressive performance under small sample sizes. We also illustrated, with two real datasets, that our Bayesian method can be readily applied for efficient, reliable, and precise inference.

Funding

This research was funded by NSF CMMI-0654417 and NIMHD-2G12MD007592.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

We present a detailed derivation of the Fisher information matrix for the GBS-II(m, α, β). From the facts that:
∂ε(t)/∂β = −m β^{−1} δ(t),   ∂ε(t)/∂m = log(β^{−1} t) δ(t),
∂δ(t)/∂β = −m β^{−1} ε(t),   ∂δ(t)/∂m = log(β^{−1} t) ε(t),
∂( ε(t) δ(t) )/∂β = −(m/β) [ ε²(t) + δ²(t) ],   ∂( ε(t) δ(t) )/∂m = log(β^{−1} t) [ ε²(t) + δ²(t) ],
∂( ε(t)/δ(t) )/∂β = −m β^{−1} [ 1 − ε²(t)/δ²(t) ] = −4m / (β δ²(t)),
∂( ε(t)/δ(t) )/∂m = log(β^{−1} t) [ 1 − ε²(t)/δ²(t) ] = 4 log(β^{−1} t) / δ²(t),
the second partial derivatives of the log-likelihood function in (13) with respect to the parameters are as follows:
∂²ℓ/∂α² = n/α² − (3/α⁴) Σ_{i=1}^n ε²(t_i),   ∂²ℓ/∂α∂β = −(2m/(α³β)) Σ_{i=1}^n ε(t_i) δ(t_i),
∂²ℓ/∂α∂m = (2/α³) Σ_{i=1}^n log(β^{−1} t_i) ε(t_i) δ(t_i),
∂²ℓ/∂β² = (m/β²) [ Σ_{i=1}^n ε(t_i)/δ(t_i) − (1/α²) Σ_{i=1}^n ε(t_i) δ(t_i) ] + (m²/β²) [ 4 Σ_{i=1}^n 1/δ²(t_i) − (1/α²) Σ_{i=1}^n ( ε²(t_i) + δ²(t_i) ) ],
∂²ℓ/∂β∂m = −(1/β) [ Σ_{i=1}^n ε(t_i)/δ(t_i) − (1/α²) Σ_{i=1}^n ε(t_i) δ(t_i) ] − (m/β) [ 4 Σ_{i=1}^n log(β^{−1} t_i)/δ²(t_i) − (1/α²) Σ_{i=1}^n log(β^{−1} t_i) ( ε²(t_i) + δ²(t_i) ) ],
∂²ℓ/∂m² = −n/m² + 4 Σ_{i=1}^n log²(β^{−1} t_i)/δ²(t_i) − (1/α²) Σ_{i=1}^n log²(β^{−1} t_i) ( ε²(t_i) + δ²(t_i) ).
Let z = ε(t)/α, so that δ(t) = √(α²z² + 4) and log(t/β) = log[ (αz + √(α²z² + 4))/2 ]/m = g(z)/m. By the fact that Z = ε(T)/α ∼ N(0, 1) and that g(z) = log[ (αz + √(α²z² + 4))/2 ] is an odd function of z, we have:
E( ε²(T) ) = α² E(Z²) = α²,   E( δ²(T) ) = E(α²Z² + 4) = α² + 4,
E( ε(T) δ(T) ) = α E( Z √(α²Z² + 4) ) = 0,   E( ε(T)/δ(T) ) = α E( Z/√(α²Z² + 4) ) = 0,
E( 1/δ²(T) ) = E( 1/(α²Z² + 4) ) = h(α)/α,   E( log(β^{−1}T)/δ²(T) ) = (1/m) E( g(Z)/(α²Z² + 4) ) = 0,
E( log²(β^{−1}T)/δ²(T) ) = (1/m²) E( g²(Z)/(α²Z² + 4) ),
E( log(β^{−1}T) ε(T) δ(T) ) = (α/m) E( Z g(Z) √(α²Z² + 4) ),
E( log(β^{−1}T) ( ε²(T) + δ²(T) ) ) = (2/m) E[ g(Z)(α²Z² + 2) ] = 0,
E( log²(β^{−1}T) ( ε²(T) + δ²(T) ) ) = (2/m²) E[ g²(Z)(α²Z² + 2) ],
where E( 1/(α²Z² + 4) ) = h(α)/α, with h(α) = √(π/2) e^{2/α²} (1 − Φ(2/α)), is provided in [4]. Then, the elements of the Fisher's information in (16) are:
u_mm = −E( ∂²ℓ/∂m² ) = n/m² − (4n/m²) E[ g²(Z)/(α²Z² + 4) ] + (2n/(m²α²)) E[ g²(Z)(α²Z² + 2) ],
u_αm = u_mα = −E( ∂²ℓ/∂α∂m ) = −(2n/(α²m)) E[ Z g(Z) √(α²Z² + 4) ],
u_βm = u_mβ = −E( ∂²ℓ/∂β∂m ) = 0,
u_αα = −E( ∂²ℓ/∂α² ) = 2n/α²,   u_αβ = u_βα = −E( ∂²ℓ/∂α∂β ) = 0,
u_ββ = −E( ∂²ℓ/∂β² ) = (2m²n/(α²β²)) [ α² − 2α h(α) + 2 ].
Therefore, the Fisher's information matrix and its inverse have the following forms:
I(m, α, β) =
⎛ u_mm   u_mα   0    ⎞
⎜ u_αm   u_αα   0    ⎟ ,
⎝ 0      0      u_ββ ⎠
I^{−1}(m, α, β) =
⎛  u_αα/c    −u_mα/c    0      ⎞
⎜ −u_αm/c     u_mm/c    0      ⎟ ,
⎝  0          0         1/u_ββ ⎠
with c = u_mm u_αα − u_mα². When α is small, the following expectations admit approximations:
E( Z g(Z) √(α²Z² + 4) ) = ∫ z g(z) √(α²z² + 4) ϕ(z) dz = −∫ g(z) √(α²z² + 4) dϕ(z) = ∫ ϕ(z) d( g(z) √(α²z² + 4) )
 = ∫ ϕ(z) [ α + α² z g(z)/√(α²z² + 4) ] dz = α + (α/2) ∫ z ϕ(z) d( g²(z) ) = α − (α/2) ∫ g²(z) (1 − z²) ϕ(z) dz
 = α − (α/2) E[ (α²Z²/4)( 1 − α²Z²/12 + α⁴Z⁴/90 + O(α⁶) )(1 − Z²) ]
 = α − (α³/8) [ E(Z²) − (1 + α²/12) E(Z⁴) + (α²/12 + α⁴/90) E(Z⁶) + O(α⁴) ]
 = (α/8)(8 + 2α² − α⁴) + O(α⁷),
E[ g²(Z)/(α²Z² + 4) ] = E[ (α²Z²/4)( 1 − α²Z²/12 + O(α⁴) ) / (α²Z² + 4) ]
 = (1/48) E[ ( −(α²Z² + 4)² + 20(α²Z² + 4) − 64 ) / (α²Z² + 4) ] + O(α⁴)
 = (1/48) [ −(α² E(Z²) + 4) + 20 − 64 E( 1/(α²Z² + 4) ) ] + O(α⁴)
 = (1/48) [ 16 − α² − 64 h(α)/α ] + O(α⁴),
E[ g²(Z)(α²Z² + 2) ] = E[ (α²Z²/4)( 1 − α²Z²/12 + O(α⁴) )(α²Z² + 2) ]
 = (1/4) [ 2α² E(Z²) + (5α⁴/6) E(Z⁴) ] + O(α⁶)
 = (α²/8)(4 + 5α²) + O(α⁶).
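The expectations above, and the small-α expansions, are straightforward to verify numerically; a brief Monte Carlo sketch (the sample size and the value of α are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.3                                   # a small alpha, where the expansions apply
z = rng.standard_normal(2_000_000)
g = np.log((alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) / 2.0)
root = np.sqrt(alpha**2 * z**2 + 4.0)
h = np.sqrt(np.pi / 2.0) * np.exp(2.0 / alpha**2) * norm.sf(2.0 / alpha)

# E[Z g(Z) sqrt(alpha^2 Z^2 + 4)] vs. (alpha/8)(8 + 2 alpha^2 - alpha^4)
print(np.mean(z * g * root), alpha / 8.0 * (8.0 + 2.0 * alpha**2 - alpha**4))

# E[g^2(Z) / (alpha^2 Z^2 + 4)] vs. (1/48)(16 - alpha^2 - 64 h(alpha)/alpha)
print(np.mean(g**2 / root**2), (16.0 - alpha**2 - 64.0 * h / alpha) / 48.0)

# E[g^2(Z)(alpha^2 Z^2 + 2)] vs. (alpha^2/8)(4 + 5 alpha^2)
print(np.mean(g**2 * (alpha**2 * z**2 + 2.0)), alpha**2 / 8.0 * (4.0 + 5.0 * alpha**2))
```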

References

  1. Birnbaum, Z.W.; Saunders, S.C. A new family of life distributions. J. Appl. Probab. 1969, 6, 319–327. [Google Scholar] [CrossRef] [Green Version]
  2. Dupuis, D.J.; Mills, J.E. Robust estimation of the Birnbaum–Saunders distribution. IEEE Trans. Reliab. 1998, 47, 88–95. [Google Scholar] [CrossRef]
  3. Engelhardt, M.; Bain, L.J.; Wright, F.T. Inferences on the parameters of the Birnbaum–Saunders fatigue life distribution based on maximum likelihood estimation. Technometrics 1981, 23, 251–256. [Google Scholar] [CrossRef]
  4. Lemonte, A.J.; Cribari-Neto, F.; Vasconcellos, K.L.P. Improved statistical inference for the two-parameter Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2007, 51, 4656–4681. [Google Scholar] [CrossRef]
  5. Ng, H.K.T.; Kundu, D.; Balakrishnan, N. Modified moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Stat. Data Anal. 2003, 43, 283–298. [Google Scholar] [CrossRef]
  6. Sha, N.; Ng, T.L. Bayesian inference for Birnbaum–Saunders distribution and its generalization. J. Stat. Comput. Simul. 2017, 87, 2411–2429. [Google Scholar] [CrossRef]
  7. Wang, M.; Sun, X.; Park, C. Bayesian analysis of Birnbaum–Saunders distribution via the generalized ratio-of-uniforms method. Comput. Stat. 2016, 31, 207–225. [Google Scholar] [CrossRef]
  8. Xu, A.; Tang, Y. Bayesian analysis of Birnbaum–Saunders distribution with partial information. Comput. Stat. Data Anal. 2011, 55, 2324–2333. [Google Scholar] [CrossRef]
  9. Bagdonavicius, V.B.; Gerville-Reache, L.; Nikulin, M. Parametric Inference for step-stress models. IEEE Trans. Reliab. 2002, 51, 27–31. [Google Scholar] [CrossRef]
  10. Bai, D.S.; Chun, Y.R. Optimum simple step-stress accelerated life tests with competing causes of failure. IEEE Trans. Reliab. 1991, 40, 622–627. [Google Scholar] [CrossRef]
  11. Khamis, I.H. Optimum M-step step-stress test with K stress variables. Commun. Stat. Simul. Comput. 1997, 26, 1301–1313. [Google Scholar] [CrossRef]
  12. Sha, N.; Pan, R. Bayesian analysis for step-stress accelerated life testing using Weibull proportional hazard model. Stat. Pap. 2014, 55, 715–726. [Google Scholar] [CrossRef]
  13. Srivastava, P.W.; Shukla, R. A log-logistic step-stress model. IEEE Trans. Reliab. 2008, 57, 431–434. [Google Scholar] [CrossRef]
  14. Owen, W.J.; Padgett, W.J. A Birnbaum–Saunders accelerated life model. IEEE Trans. Reliab. 2000, 49, 224–229. [Google Scholar] [CrossRef]
  15. Sun, T.; Shi, Y. Estimation for Birnbaum–Saunders distribution in simple step stress-accelerated life test with Type-II censoring. Commun. Stat. Simul. Comput. 2016, 45, 880–901. [Google Scholar] [CrossRef]
  16. Nelson, W. Accelerated life testing step-stress models and data analysis. IEEE Trans. Reliab. 1980, 29, 103–108. [Google Scholar] [CrossRef]
  17. Díaz-García, J.A.; Domínguez-Molina, J.R. Some generalizations of Birnbaum–Saunders and sinh-normal distributions. Int. Math. Forum 2006, 1, 1709–1727. [Google Scholar] [CrossRef]
  18. Fierro, R.; Leiva, V.; Ruggeri, F.; Sanhueza, A. On a Birnbaum–Saunders distribution arising from a non-homogeneous Poisson process. Stat. Probab. Lett. 2013, 83, 1233–1239. [Google Scholar] [CrossRef]
  19. Owen, W.J. A new three-parameter extension to the Birnbaum–Saunders distribution. IEEE Trans. Reliab. 2006, 55, 475–479. [Google Scholar] [CrossRef]
  20. Desmond, A.F. On the relationship between two fatigue-life models. IEEE Trans. Reliab. 1986, 35, 167–169. [Google Scholar] [CrossRef]
  21. Owen, W.J.; Ng, H.K.T. Revisit of relationships and models for the Birnbaum–Saunders and inverse-Gaussian distribution. J. Stat. Distrib. Appl. 2015, 2. [Google Scholar] [CrossRef]
  22. Rieck, J.R. A moment-generating function with application to the Birnbaum–Saunders distribution. Commun. Stat. Theory Methods 1999, 28, 2213–2222. [Google Scholar] [CrossRef]
  23. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley & Sons: New York, NY, USA, 1991; Volume 2. [Google Scholar]
  24. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; John Wiley & Sons: New York, NY, USA, 1998. [Google Scholar]
  25. Casella, G.; George, E.I. Explaining the Gibbs sampler. Am. Stat. 1992, 46, 167–174. [Google Scholar] [CrossRef]
  26. Chib, S.; Greenberg, E. Understanding the Metropolis-Hastings algorithm. Am. Stat. 1995, 49, 327–335. [Google Scholar] [CrossRef]
  27. Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis, 2nd ed.; Chapman & Hall: London, UK, 2004. [Google Scholar]
  28. Nelson, W. Accelerated Testing: Statistical Models, Test Plans, and Data Analysis; John Wiley & Sons, Inc.: New York, NY, USA, 2004; pp. 507–509. [Google Scholar]
  29. Balakrishnan, N.; Leiva, V.; Sanhueza, A.; Cabrera, E. Mixture inverse Gaussian distribution and its transformations, moments and applications. Statistics 2009, 43, 91–104. [Google Scholar] [CrossRef]
Figure 1. The generalized Birnbaum–Saunders-II(m, α, β) density curves for various parameter values.
Figure 2. GBS-II(m, α, β) failure rate curves for various parameter values.
Figure 3. Failure data: histogram and fitted density curves by ML and Bayesian estimation methods.
Figure 4. Repair data: histogram and fitted density curves by ML and Bayesian estimation methods.
Table 1. GBS-II estimation results for simulated data. AL, average length; CP, coverage probability.

                  ML Method                                    Bayesian
n        Bias      MSE       AL        CP (%)      Bias      MSE       AL        CP (%)

m = 0.25, α = 0.5, β = 1.0
20   m   0.1229    0.2861    1.2258    92.45       0.1166    0.1889    0.7493    93.11
     α   0.1375    0.1578    0.4718    92.27       0.1302    0.1145    0.4523    93.19
     β   0.0302    0.0736    0.2905    93.87       0.0216    0.0622    0.2322    94.69
30   m   0.1092    0.2363    1.1052    93.17       0.1007    0.1567    0.5329    94.82
     α   0.1236    0.1353    0.4546    93.21       0.1149    0.1072    0.4076    94.65
     β   0.0244    0.0668    0.2118    94.29       0.0210    0.0516    0.1209    95.08
50   m   0.1035    0.1339    0.9777    94.43       0.0916    0.1113    0.2244    94.94
     α   0.1121    0.1151    0.4033    94.67       0.1108    0.0968    0.3323    95.15
     β   −0.0112   0.0240    0.1352    95.15       0.0036    0.0116    0.1054    95.20

m = 0.5, α = 0.5, β = 1.0
20   m   0.1104    0.1304    1.0141    93.27       0.1072    0.1184    0.7345    93.87
     α   0.1229    0.1073    0.3694    93.51       0.1115    0.1016    0.3541    93.50
     β   0.0314    0.0940    0.3181    93.85       −0.0214   0.0720    0.2299    94.71
30   m   0.1051    0.1231    0.9386    94.52       0.1015    0.1125    0.5687    94.84
     α   0.1181    0.1063    0.3457    94.40       0.1047    0.0905    0.3186    94.77
     β   0.0158    0.0674    0.2422    94.88       0.0083    0.0613    0.1646    95.14
50   m   0.0809    0.1150    0.5503    94.48       0.0202    0.0822    0.3789    95.18
     α   0.0492    0.0738    0.2128    94.53       0.0327    0.0578    0.1832    95.22
     β   0.0080    0.0345    0.1957    95.12       0.0023    0.0207    0.0819    95.29

m = 1.0, α = 1.0, β = 1.0
20   m   0.1792    0.1357    0.9778    91.94       0.1620    0.1212    0.7107    93.02
     α   0.2242    0.2433    0.6218    92.19       0.2032    0.1493    0.4865    92.89
     β   0.0371    0.1067    0.4728    93.56       0.0272    0.0831    0.2191    94.05
30   m   0.1628    0.1230    0.9018    93.52       0.1464    0.1108    0.6109    94.28
     α   0.1854    0.2038    0.5896    93.88       0.1494    0.1247    0.4033    94.29
     β   −0.0232   0.0728    0.3480    94.26       −0.0105   0.0624    0.1122    94.89
50   m   0.1155    0.1103    0.5135    93.49       0.0974    0.1042    0.4278    94.58
     α   0.1355    0.1665    0.4071    94.31       0.1134    0.1046    0.3083    94.82
     β   0.0138    0.0416    0.2598    95.03       0.0094    0.0310    0.0844    95.26

m = 1.5, α = 2.0, β = 1.0
20   m   0.2832    0.1303    0.9650    92.38       0.2176    0.1163    0.7058    93.41
     α   0.3482    0.2777    0.8674    93.20       0.2887    0.2214    0.7195    93.14
     β   −0.0526   0.1432    0.5216    93.92       0.0411    0.0933    0.2852    94.17
30   m   0.2583    0.1298    0.8960    93.35       0.1734    0.1091    0.6779    94.11
     α   0.3009    0.2356    0.7744    93.83       −0.1939   0.2042    0.6438    94.71
     β   −0.0423   0.1061    0.4238    94.33       0.0320    0.0720    0.1661    94.86
50   m   0.1598    0.1091    0.7047    94.38       0.1145    0.0928    0.5682    95.03
     α   −0.2064   0.1836    0.5209    94.73       0.1303    0.1555    0.4105    95.17
     β   −0.0209   0.0674    0.2675    94.92       0.0106    0.0411    0.0784    95.30
Table 2. Oil breakdown times (seconds) in an accelerated test employing insulating oil.

3.4  3.4  3.4  3.5  3.5  3.5  3.6  3.8  3.8  3.8  3.8  3.9  3.9  3.9  4.0
4.0  4.0  4.0  4.1  4.1  4.1  4.1  4.1  4.1  4.2  4.2  4.2  4.2  4.2  4.3
4.3  4.3  4.3  4.3  4.4  4.4  4.4  4.4  4.4  4.4  4.4  4.5  4.5  4.6  4.6
4.6  4.6  4.6  4.7  4.7  4.7  4.7  4.7  4.8  4.9  4.9  4.9  5.0  5.1  5.2
Table 3. Breakdown time: estimation results.

Method      m (95% CI)                 α (95% CI)                β (95% CI)                χ², BIC
ML method   4.9728 (2.0185, 12.2511)   1.1686 (0.3807, 3.5874)   4.2058 (3.7681, 4.7816)   16.19, 125.81
Bayesian    4.6523 (3.1521, 6.2339)    1.2391 (0.9279, 2.3936)   4.5614 (4.4250, 4.6945)   8.82, 121.62
Table 4. Repair time: estimation results.

Method      m (95% CI)                α (95% CI)                β (95% CI)                χ², BIC
ML method   0.8326 (0.2170, 1.8444)   1.6813 (0.3743, 7.5531)   2.6093 (1.3854, 3.2163)   97.30, 103.37
Bayesian    0.4717 (0.3229, 0.6202)   1.0291 (0.7826, 1.5031)   1.7718 (1.0633, 2.9523)   12.98, 92.36
