Estimation of Unknown Parameters of Truncated Normal Distribution under Adaptive Progressive Type II Censoring Scheme

Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(1), 49; https://doi.org/10.3390/math9010049
Submission received: 27 November 2020 / Revised: 20 December 2020 / Accepted: 21 December 2020 / Published: 28 December 2020

Abstract

Estimation of the unknown parameters of truncated distributions from censored data is widely used in practice, and the truncated normal distribution is often more suitable than the normal distribution for fitting lifetime data. This article makes statistical inferences about the parameters of the truncated normal distribution using adaptive progressive type II censored data. First, the estimates are calculated with the maximum likelihood method, and the observed and expected Fisher information matrices are derived to establish asymptotic confidence intervals. Second, Bayesian estimation under three loss functions is studied: the point estimates are calculated by Lindley approximation, and the importance sampling technique is applied to obtain the Bayes estimates and to build the associated highest posterior density credible intervals. Bootstrap confidence intervals are constructed for comparison. Monte Carlo simulations and a real data analysis are employed to assess the performance of the various methods. Finally, optimal censoring schemes are obtained under different criteria.

1. Introduction

1.1. Truncated Normal Distribution

The normal distribution is one of the most significant probability distributions in statistics because it fits many natural phenomena. Owing to its satisfactory performance in fitting data and its convenient properties, analysts often describe the characteristics of data through a normal distribution. However, negative values seldom appear in many practical settings, such as life testing experiments. To accommodate this, truncation is introduced to limit the values to a specific range, which fits such data better, so that the analytical results obtained after computation and estimation are more accurate. Thus, the truncated normal distribution, which restricts the domain of the normal distribution by one or two bounds, has both theoretical and practical value.
Owing to the complexity of its numerical characteristics in statistical inference, the truncated normal distribution did not receive extensive attention in academia until recent years. Based on this distribution, Ref. [1] compared the maximum likelihood estimates of the mean and variance under censored samples with those under complete samples. Using the progressive type II censoring model, Ref. [2] estimated the parameters of the left truncated normal distribution. On the application side, Ref. [3] approximated the length of a queue with abandonment based on the truncated normal distribution. So far, studies related to this distribution still leave plenty of room for exploration.
In this article, a normal distribution that is left truncated at zero is investigated, denoted by TN(μ, τ) for convenience; here μ and τ are the mean and variance parameters of the underlying normal distribution. The corresponding probability density function (pdf) is
f(x;\mu,\tau)=\frac{e^{-\frac{1}{2\tau}(x-\mu)^{2}}}{\sqrt{2\pi\tau}\,\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)},\qquad x,\mu,\tau>0,
and the cumulative distribution function (cdf) is written as
F(x;\mu,\tau)=1-\frac{1-\Phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right)}{\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)}.
Here Φ ( · ) represents the cdf of standard normal distribution. The survival function can be written as
S(x;\mu,\tau)=1-F(x;\mu,\tau)=\frac{1-\Phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right)}{\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)}.
Figure 1 shows that the truncated normal pdf is unimodal. As with the normal distribution, the location of the density moves to the right as μ increases, while the shape becomes narrower and sharper as τ decreases. However, because of the truncation, a change of location is no longer a pure translation when the variance is fixed; instead, the peak of the pdf becomes lower as μ moves away from the truncation point in the positive direction. Figure 2 presents some characteristics of the cdf: the fastest increase over the whole domain occurs at the mean, the growth of the cdf becomes slower as τ decreases, and a larger μ leads to a lower rate of increase.
In general, we denote by X a random variable following a truncated normal distribution and assume that the distribution is left truncated at a. Unlike a normal random variable, the expectation and variance of X are determined not only by the parameters μ and τ but also by the truncation point. According to [4], the expectation is
E(X)=\mu+\sqrt{\tau}\,\frac{\phi(\alpha)}{1-\Phi(\alpha)}
and the variance is
Var(X)=\tau\left[1+\alpha\,\frac{\phi(\alpha)}{1-\Phi(\alpha)}-\left(\frac{\phi(\alpha)}{1-\Phi(\alpha)}\right)^{2}\right],
where \alpha=\frac{a-\mu}{\sqrt{\tau}}. Meanwhile, the characteristic function is
\chi(t)=e^{i\mu t-\frac{1}{2}\tau t^{2}}\,\frac{1-\Phi(\alpha-it\sqrt{\tau})}{1-\Phi(\alpha)}
and the moment generating function is
M(t)=e^{\mu t+\frac{1}{2}\tau t^{2}}\,\frac{1-\Phi(\alpha-t\sqrt{\tau})}{1-\Phi(\alpha)},
which are helpful for calculating the moments.
In particular, if a = 0, the expectation, variance, characteristic function and moment generating function become, respectively,
E(X;\mu,\tau)=\mu+\sqrt{\tau}\,\frac{\phi\left(\frac{\mu}{\sqrt{\tau}}\right)}{\Phi\left(\frac{\mu}{\sqrt{\tau}}\right)},
Var(X;\mu,\tau)=\tau\left[1-\frac{\mu}{\sqrt{\tau}}\,\frac{\phi\left(\frac{\mu}{\sqrt{\tau}}\right)}{\Phi\left(\frac{\mu}{\sqrt{\tau}}\right)}-\left(\frac{\phi\left(\frac{\mu}{\sqrt{\tau}}\right)}{\Phi\left(\frac{\mu}{\sqrt{\tau}}\right)}\right)^{2}\right],
\chi(t;\mu,\tau)=e^{i\mu t-\frac{1}{2}t^{2}\tau}\,\frac{\Phi\left(\frac{\mu}{\sqrt{\tau}}+it\sqrt{\tau}\right)}{\Phi\left(\frac{\mu}{\sqrt{\tau}}\right)},
M(t;\mu,\tau)=e^{\mu t+\frac{1}{2}t^{2}\tau}\,\frac{\Phi\left(\frac{\mu}{\sqrt{\tau}}+t\sqrt{\tau}\right)}{\Phi\left(\frac{\mu}{\sqrt{\tau}}\right)}.
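These expressions translate directly into code. The following R sketch evaluates the pdf, the cdf and the mean of TN(μ, τ) left truncated at zero; the function names dtn, ptn and etn are ours, not part of the paper, and τ enters as a variance, so the standard normal arguments use sqrt(tau).

    dtn <- function(x, mu, tau) {   # pdf (1)
      ifelse(x > 0, dnorm((x - mu)/sqrt(tau)) / (sqrt(tau)*pnorm(mu/sqrt(tau))), 0)
    }
    ptn <- function(x, mu, tau) {   # cdf (2)
      ifelse(x > 0, 1 - (1 - pnorm((x - mu)/sqrt(tau))) / pnorm(mu/sqrt(tau)), 0)
    }
    etn <- function(mu, tau) {      # E(X) for truncation point a = 0, Equation (4)
      xi <- mu/sqrt(tau)
      mu + sqrt(tau)*dnorm(xi)/pnorm(xi)
    }

As a quick sanity check, integrate(dtn, 0, Inf, mu = 2, tau = 1)$value returns 1 up to numerical error.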

1.2. Adaptive Progressive Type-II Censored Scheme

In survival analysis and reliability testing, incomplete observation is a universal problem that perplexes many scholars because of limitations of time and cost. Censoring schemes are therefore applied to improve the efficiency of experiments, and various schemes have been put forward and studied by statisticians. Type I and type II censoring have found the most widespread application among these models. In type I censoring, the life test stops at a prefixed time and the remaining units are right censored, while under type II censoring the number of failures m is predetermined and the experiment stops once the m-th failure happens. However, one weakness of both schemes is that surviving units cannot be removed during the test. To overcome this, the progressive type II censoring model was proposed. In this scheme there are n sample units in total. On the occurrence of the first failure, R₁ live units are removed at random from the n − 1 remaining units. Similarly, when the j-th failure appears, R_j units are removed at random from the remaining n − j − Σ_{i=1}^{j−1} R_i surviving units. This process continues until the m-th failure happens. The failure times of the m observed units are denoted X_{(1:m:n)}, …, X_{(m−1:m:n)}, X_{(m:m:n)}, and (R₁, …, R_{m−1}, R_m) represents the censoring scheme. Note that the number m and the censoring scheme are prescribed at the beginning. Readers interested in progressive censoring can consult [5,6] for further information.
However, the progressive type II censoring scheme lacks flexibility. For example, during a life test researchers may want to control the experiment time according to practical conditions. The adaptive progressive type II censoring scheme was proposed to deal with this problem and is implemented as follows. At the start of a life test with n units in total, m, the number of observed failures, and (R₁, …, R_{m−1}, R_m) are predetermined, and the expected total testing time T is provided by the researchers. Before T, the test is carried out according to the prefixed censoring plan, exactly as in progressive type II censoring. Once the actual time exceeds T, however, researchers prefer to obtain more observations in a shorter time by leaving more units on test: no more surviving units are withdrawn until the m-th failure appears. Assume that the test time first exceeds T right after the J-th failure, i.e., X_{(J:m:n)} < T < X_{(J+1:m:n)}, with X_{(m+1:m:n)} ≡ ∞, X_{(0:m:n)} ≡ 0 and J = 0, 1, …, m. When the actual time overruns T, the subsequent censoring scheme therefore becomes R_{J+1} = R_{J+2} = ⋯ = R_{m−1} = 0 and R_m = n − m − Σ_{i=1}^{J} R_i. In particular, if T = 0 this scheme reduces to the conventional type II censoring scheme, while if the available test time is unlimited, corresponding to T = ∞, the model reduces to the progressive type II censoring model.
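The adjustment of the removal plan after T is mechanical; the small R helper below (our own illustration, not from the paper's code) revises a prefixed scheme R once the J-th failure turns out to be the last one observed before T.

    adjust_scheme <- function(R, J, n) {
      m <- length(R)
      if (J < m) {
        R[(J + 1):m] <- 0                    # no withdrawals after time T ...
        R[m] <- n - m - sum(R[seq_len(J)])   # ... until the m-th failure
      }
      R
    }
    adjust_scheme(c(2, 2, 2, 2, 2), J = 2, n = 15)   # returns 2 2 0 0 6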
Considering the limitations of time and efficiency in practical situations, the adaptive progressive type II censoring scheme enables researchers to control the experiment flexibly while it is running. This useful property has attracted more and more scholars to related investigations. Ref. [7] first proposed the adaptive progressive type II censoring model. Under one- and two-parameter exponential distributions, Ref. [8] developed inferences for the unknown parameters on the basis of this censoring scheme. Ref. [6] introduced the concepts of this censoring model and illustrated other related models in detail. Ref. [9] extended the model by considering competing risks under the exponential distribution and presented the corresponding statistical inferences. On the basis of this scheme, Ref. [10] estimated the unknown parameters of a distribution whose failure rate function is bathtub-shaped through maximum likelihood estimation and a Bayesian approach.
For estimation, classical statistics and Bayesian inference are the mainstream approaches. Maximum likelihood estimation is widely used as an important classical method: the lifetimes of the experimental units are assumed to follow a certain lifetime distribution and to be independent and identically distributed (i.i.d.), and the maximum likelihood estimates are computed by maximizing the likelihood function. However, small sample sizes lead to inaccurate estimation under this method. Bayesian estimation tackles this disadvantage by incorporating prior knowledge: given the probability density function of the sample and the prior distribution of the parameters, Bayes estimates are obtained by calculating expected values under the posterior distribution. An improper choice of prior distribution may cause large errors, and there is a certain degree of subjectivity in the selection, so the prior distribution must be chosen with care. In addition, the integrals in Bayes estimation are sometimes difficult to compute; importance sampling is a suitable approach for performing the numerical calculations with an easily sampled distribution.
This paper studies estimation problems for the truncated normal distribution parameters under the adaptive progressive type II censoring model. Both point estimation and interval estimation are discussed using classical and Bayesian methods. The same distribution as in Lodhi et al. (2019) is considered, but we focus on enriching the Bayesian estimation part and calculate the estimates under three kinds of loss functions, which makes the results comprehensive and comparable. Under this more flexible and complex censoring model, deriving the relevant functions, taking derivatives and writing simulation code are all more difficult. In addition, proofs are supplied for properties that the employed methods rely on, which makes the article rigorous.
In Section 2, the maximum likelihood estimates of the truncated normal distribution parameters are obtained via the Newton–Raphson method. Meanwhile, the observed Fisher information matrix is computed to build the corresponding asymptotic confidence intervals, and the distribution of the j-th failure and the expected Fisher information matrix are also discussed. Section 3 carries out Bayesian estimation under different loss functions, including the squared error loss function (SELF), the Linex loss function (LLF) and the general entropy loss function (GELF). Lindley approximation and importance sampling are adopted to calculate the estimates of the parameters, and on that basis the highest posterior density (HPD) credible intervals are established. Bootstrap confidence intervals are constructed following the algorithm in Section 4. In Section 5, the results obtained with the different methods are compared and evaluated through simulations. To illustrate the effectiveness of the estimation methods, a real data set is analyzed in Section 6. Section 7 investigates the selection of optimal censoring schemes under the adaptive type II censoring model. Finally, Section 8 summarizes the whole article.

2. Maximum Likelihood Estimation

In this section the estimators are derived, from the perspectives of point and interval estimation, by the maximum likelihood method. The equations inferred from the likelihood function are demonstrated, asymptotic confidence intervals are constructed through the observed Fisher information matrix, and some inferences about the expected Fisher information matrix are presented in the sequel. Assume that X_{(1:m:n)}, …, X_{(m−1:m:n)}, X_{(m:m:n)} represent the censored sample under the prefixed censoring plan (R₁, …, R_{m−1}, R_m), and that the ideal total time is T. In particular, the last failure before T occurs at X_{(J:m:n)}. For simplicity, X_{(1:m:n)}, …, X_{(m:m:n)} are denoted by X_{(1)}, …, X_{(m)}.

2.1. Point Estimation

Suppose that the failure data x_{(1)}, …, x_{(m−1)}, x_{(m)} are observed in the experiment, and let x̲ represent the observed data (x_{(1)}, …, x_{(m−1)}, x_{(m)}). Given the pdf f(x), the cdf F(x) and the value of J determined by the prefixed T, the likelihood function is
L(\underline{x}) = D_J\left[1-F(x_{(m)})\right]^{n-m-\sum_{i=1}^{J}R_i}\left[\prod_{i=1}^{m}f(x_{(i)})\right]\left\{\prod_{i=1}^{J}\left[1-F(x_{(i)})\right]^{R_i}\right\},
where
D_J = \prod_{i=1}^{m}\left[n+1-i-\sum_{k=1}^{\max\{J,\,i-1\}}R_k\right].
When the data follow the truncated normal distribution, f(x) and F(x) are the pdf and cdf of TN(μ, τ), respectively. The corresponding likelihood and log-likelihood functions are
L(\mu,\tau\,|\,\underline{x}) = D_J\left[\prod_{i=1}^{m}\frac{e^{-\xi_{x_{(i)}}^{2}/2}}{\sqrt{2\pi\tau}\,\Phi(\xi)}\right]\left\{\prod_{i=1}^{J}\left[\frac{1-\Phi(\xi_{x_{(i)}})}{\Phi(\xi)}\right]^{R_i}\right\}\left[\frac{1-\Phi(\xi_{x_{(m)}})}{\Phi(\xi)}\right]^{n-m-\sum_{i=1}^{J}R_i},
\ln L(\mu,\tau) = \text{constant} - \frac{1}{2}\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} - \frac{m}{2}\ln\tau - n\ln\Phi(\xi) + \sum_{i=1}^{J}R_i\ln\!\left(1-\Phi(\xi_{x_{(i)}})\right) + \left(n-m-\sum_{i=1}^{J}R_i\right)\ln\!\left(1-\Phi(\xi_{x_{(m)}})\right),
where ξ = μ/√τ, ξ_{x_{(i)}} = (x_{(i)} − μ)/√τ, and ϕ(·) is the pdf of the standard normal distribution. Let μ̂ and τ̂ denote the MLEs of μ and τ. To obtain them, the partial derivatives of the log-likelihood with respect to μ and τ are set equal to 0; solving the resulting equations maximizes the likelihood function. The equations are as follows:
\frac{\partial \ln L(\mu,\tau)}{\partial\mu} = \frac{1}{\sqrt{\tau}}\left[\sum_{i=1}^{m}\xi_{x_{(i)}} - n\frac{\phi(\xi)}{\Phi(\xi)} + \sum_{i=1}^{J}R_i\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} + \left(n-m-\sum_{i=1}^{J}R_i\right)\frac{\phi(\xi_{x_{(m)}})}{1-\Phi(\xi_{x_{(m)}})}\right] = 0,
\frac{\partial \ln L(\mu,\tau)}{\partial\tau} = \frac{1}{2\tau}\left[\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} - m + n\xi\frac{\phi(\xi)}{\Phi(\xi)} + \sum_{i=1}^{J}R_i\,\xi_{x_{(i)}}\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} + \left(n-m-\sum_{i=1}^{J}R_i\right)\xi_{x_{(m)}}\frac{\phi(\xi_{x_{(m)}})}{1-\Phi(\xi_{x_{(m)}})}\right] = 0.
Theorem 1.
When μ is given, the MLE of τ exists.
Proof. 
Please see Appendix A.  □
However, Equations (11) and (12) are nonlinear and admit no closed form solution: their denominators contain complicated nonlinear functions that cannot be simplified, so analytic solutions are unavailable and numerical techniques must be introduced. Here the Newton–Raphson method, implemented in R, is chosen to obtain the final estimates μ̂ and τ̂.
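As an alternative to hand-coded Newton–Raphson iterations, the same maximization can be handed to a general-purpose optimizer. The sketch below is ours (the helper names negloglik and fit are assumptions, not the paper's code); it minimizes the negative of the log-likelihood (10) with optim(), given the observed sample x, the applied removals R, the count J of failures before T and the total sample size n.

    negloglik <- function(par, x, R, J, n) {
      mu <- par[1]; tau <- par[2]
      if (mu <= 0 || tau <= 0) return(Inf)        # keep the search in the domain
      m   <- length(x)
      xi  <- mu/sqrt(tau)
      xiX <- (x - mu)/sqrt(tau)
      Rm  <- n - m - sum(R[seq_len(J)])           # units removed at the m-th failure
      ll  <- -m/2*log(tau) - sum(xiX^2)/2 - n*pnorm(xi, log.p = TRUE) +
        sum(R[seq_len(J)]*pnorm(xiX[seq_len(J)], lower.tail = FALSE, log.p = TRUE)) +
        Rm*pnorm(xiX[m], lower.tail = FALSE, log.p = TRUE)
      -ll
    }
    fit <- function(x, R, J, n, start = c(mean(x), var(x))) {
      optim(start, negloglik, x = x, R = R, J = J, n = n)$par   # (mu.hat, tau.hat)
    }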

2.2. Asymptotic Confidence Interval

To construct approximate intervals for the two parameters, the asymptotic variance-covariance matrix of μ̂ and τ̂ is obtained by inverting the Fisher information matrix.
The Fisher information matrix is calculated by taking the expectation of the negative second derivatives of the log-likelihood function l = ln L(μ, τ):
I(\mu,\tau) = -E\begin{pmatrix} \dfrac{\partial^{2} l}{\partial\mu^{2}} & \dfrac{\partial^{2} l}{\partial\mu\,\partial\tau} \\[4pt] \dfrac{\partial^{2} l}{\partial\tau\,\partial\mu} & \dfrac{\partial^{2} l}{\partial\tau^{2}} \end{pmatrix}.

2.2.1. Observed Fisher Information Matrix

The observed Fisher information matrix, computed from the sample, is a convenient basis for interval estimation of μ̂ and τ̂. For the truncated normal distribution it is given by
I(\hat{\mu},\hat{\tau}) = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},
where
a_{11} = \frac{1}{\tau}\left[m + nA_{11}(\xi) + \sum_{i=1}^{J}R_i A_{11}^{(i)}(\xi_{x_{(i)}}) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{11}^{(m)}(\xi_{x_{(m)}})\right],
a_{12} = a_{21} = \frac{1}{2\tau\sqrt{\tau}}\left[2\sum_{i=1}^{m}\xi_{x_{(i)}} + nA_{12}(\xi) + \sum_{i=1}^{J}R_i A_{12}^{(i)}(\xi_{x_{(i)}}) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{12}^{(m)}(\xi_{x_{(m)}})\right],
a_{22} = \frac{1}{4\tau^{2}}\left[4\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} - 2m + nA_{22}(\xi) + \sum_{i=1}^{J}R_i A_{22}^{(i)}(\xi_{x_{(i)}}) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{22}^{(m)}(\xi_{x_{(m)}})\right],
and
A_{11}(\xi) = Q'(\xi) - Q_{2}(\xi),\qquad A_{12}(\xi) = -Q(\xi) - \xi Q'(\xi) + \xi Q_{2}(\xi),\qquad A_{22}(\xi) = 3\xi Q(\xi) + \xi^{2}Q'(\xi) - \xi^{2}Q_{2}(\xi),
A_{11}^{(i)}(\xi_{x_{(i)}}) = P'_{(i)}(\xi_{x_{(i)}}) + P_{(i)2}(\xi_{x_{(i)}}),
A_{12}^{(i)}(\xi_{x_{(i)}}) = P_{(i)}(\xi_{x_{(i)}}) + \xi_{x_{(i)}}P'_{(i)}(\xi_{x_{(i)}}) + \xi_{x_{(i)}}P_{(i)2}(\xi_{x_{(i)}}),
A_{22}^{(i)}(\xi_{x_{(i)}}) = 3\xi_{x_{(i)}}P_{(i)}(\xi_{x_{(i)}}) + \xi_{x_{(i)}}^{2}P'_{(i)}(\xi_{x_{(i)}}) + \xi_{x_{(i)}}^{2}P_{(i)2}(\xi_{x_{(i)}}),
with \xi = \frac{\mu}{\sqrt{\tau}}, \xi_{x_{(i)}} = \frac{x_{(i)}-\mu}{\sqrt{\tau}}, Q(\xi) = \frac{\phi(\xi)}{\Phi(\xi)}, P_{(i)}(\xi_{x_{(i)}}) = \frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})}, Q'(\xi) = \frac{\phi'(\xi)}{\Phi(\xi)}, P'_{(i)}(\xi_{x_{(i)}}) = \frac{\phi'(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})}, Q_{2}(\xi) = \left(\frac{\phi(\xi)}{\Phi(\xi)}\right)^{2}, P_{(i)2}(\xi_{x_{(i)}}) = \left(\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})}\right)^{2}, where ϕ′(·) is the first derivative of the standard normal pdf. Then the associated variance-covariance matrix is
Var(\hat{\mu},\hat{\tau}) = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}^{-1}.
Thus, the 100(1 − α)% asymptotic confidence intervals are constructed as (μ̂ − z_{α/2}√σ₁₁, μ̂ + z_{α/2}√σ₁₁) and (τ̂ − z_{α/2}√σ₂₂, τ̂ + z_{α/2}√σ₂₂), where z_{α/2} is the upper α/2 quantile of the standard normal distribution.
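The same intervals can be obtained without coding the analytic entries a₁₁, a₁₂, a₂₂: the numDeriv package (already loaded in Appendix E) can differentiate the negative log-likelihood numerically. A minimal sketch, reusing the hypothetical negloglik() helper from Section 2.1:

    library(numDeriv)
    asymp_ci <- function(mle, x, R, J, n, alpha = 0.05) {
      H <- hessian(negloglik, mle, x = x, R = R, J = J, n = n)  # observed information
      V <- solve(H)                                             # variance-covariance matrix
      z <- qnorm(1 - alpha/2)
      cbind(lower = mle - z*sqrt(diag(V)), upper = mle + z*sqrt(diag(V)))
    }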

2.2.2. Expected Fisher Information Matrix

Before the experiment, n, m, R and T are predetermined. J is the index of the last failure before the time point T, so it depends on T and on the actual course of the test. Because of the randomness of the life test, the exact value of J is known only after the test is over, so J can be treated as a random variable. An unknown J, however, makes it difficult to obtain expectation expressions involving J for the pdfs of the experimental units. By assuming that J = j is known, J is treated as a constant; the specific expectation expressions can then be derived, and the expected Fisher information matrix under adaptive progressive type II censored data is demonstrated as follows.
The expected Fisher information matrix is determined by the distributions of the experimental units X_{(i)}, i = 1, 2, …, m. The pdf of X_{(i)} based on TN(μ, τ) is (see Appendix B)
f_{x_{(i)}}^{TN}(x_{(i)}) = \begin{cases} c_{i-1}^{0}\sum\limits_{k=1}^{i} d_{k,i}^{0}\,\phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-1}\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{-r_k^{0}}, & i = 1,2,\ldots,j, \\[6pt] \dfrac{c_{i-1}^{1}}{c_{j-1}^{1}}\sum\limits_{k=j+1}^{i} d_{k,i}^{1}\,\phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{1}-1}\left[1-\Phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\right]^{-r_k^{1}}, & i = j+1,\ldots,m, \end{cases}
where
r_i^{0} = m-i+1+\sum_{k=i}^{m}R_k,\qquad c_{i-1}^{0} = \prod_{k=1}^{i}r_k^{0},
r_i^{1} = n-i+1-\sum_{k=1}^{j}R_k,\qquad c_{i-1}^{1} = \prod_{k=1}^{i}r_k^{1},
and
d_{1,1}^{0} = 1,\qquad d_{k,i}^{0} = \prod_{h=1,\,h\neq k}^{i}\frac{1}{r_h^{0}-r_k^{0}},\quad 1\le k\le i\le j,
d_{j+1,j+1}^{1} = 1,\qquad d_{k,i}^{1} = \prod_{h=j+1,\,h\neq k}^{i}\frac{1}{r_h^{1}-r_k^{1}},\quad j+1\le k\le i\le m.
Thus based on Equation (14), the expected Fisher information matrix is expressed by
E\left[I(\hat{\mu},\hat{\tau})\right] = \begin{pmatrix} a_{11}^{*} & a_{12}^{*} \\ a_{21}^{*} & a_{22}^{*} \end{pmatrix},
where
a_{11}^{*} = \frac{1}{\tau}\left[m + nA_{11}(\xi) + \sum_{i=1}^{J}R_i A_{11}^{(i)}(\mu,\tau) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{11}^{(m)}(\mu,\tau)\right],
a_{12}^{*} = a_{21}^{*} = \frac{1}{2\tau\sqrt{\tau}}\left[2\sum_{i=1}^{m}\xi_{(i)}(\mu,\tau) + nA_{12}(\xi) + \sum_{i=1}^{J}R_i A_{12}^{(i)}(\mu,\tau) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{12}^{(m)}(\mu,\tau)\right],
a_{22}^{*} = \frac{1}{4\tau^{2}}\left[4\sum_{i=1}^{m}\xi_{(i)}^{2}(\mu,\tau) - 2m + nA_{22}(\xi) + \sum_{i=1}^{J}R_i A_{22}^{(i)}(\mu,\tau) + \left(n-m-\sum_{i=1}^{J}R_i\right)A_{22}^{(m)}(\mu,\tau)\right],
where A_{11}^{(i)}(μ, τ), A_{12}^{(i)}(μ, τ), A_{22}^{(i)}(μ, τ) and ξ_{(i)}(μ, τ), i = 1, 2, …, n, are obtained by multiplying A_{11}^{(i)}(ξ_{x_{(i)}}), A_{12}^{(i)}(ξ_{x_{(i)}}), A_{22}^{(i)}(ξ_{x_{(i)}}) and ξ_{x_{(i)}}, respectively, by f_{x_{(i)}}^{TN}(x_{(i)}) and then integrating from zero to infinity; that is, they are the expectations of these random quantities. In fact, these expectations depend only on μ, τ, i and J. Denote the corresponding asymptotic variance-covariance matrix by Var*(μ̂, τ̂); then
Var^{*}(\hat{\mu},\hat{\tau}) = \begin{pmatrix} \sigma_{11}^{*} & \sigma_{12}^{*} \\ \sigma_{21}^{*} & \sigma_{22}^{*} \end{pmatrix} = \begin{pmatrix} a_{11}^{*} & a_{12}^{*} \\ a_{21}^{*} & a_{22}^{*} \end{pmatrix}^{-1}.
Hence the 100(1 − α)% asymptotic confidence intervals computed from the expected Fisher information matrix are (μ̂ − z_{α/2}√σ₁₁*, μ̂ + z_{α/2}√σ₁₁*) and (τ̂ − z_{α/2}√σ₂₂*, τ̂ + z_{α/2}√σ₂₂*).
Furthermore, on the basis of the pdf of X_{(i)}, the distribution of J can also be inferred. The probability mass function (pmf) of J is obtained as (see Appendix C)
P^{TN}(J=j) = c_{j-1}^{0}\left[1-\Phi\!\left(\frac{T-\mu}{\sqrt{\tau}}\right)\right]^{r_{j+1}^{1}}\left\{\sum_{k=1}^{j}\frac{d_{k,j}^{0}}{r_k^{0}-r_{j+1}^{1}}\left[\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-r_{j+1}^{1}}-\left[1-\Phi\!\left(\frac{T-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-r_{j+1}^{1}}\right]\right\},
where j = 0, 1, …, m and r_{m+1}^{1} ≡ 0.

3. Bayesian Estimation

Bayesian estimation regards the unknown parameters as random variables. The selection of the prior distributions of the unknown parameters is discussed in [2], where the associated simulation results are satisfactory, so we adopt the same priors here. Suppose that, given τ, μ follows a truncated normal distribution TN(a, τ/b), and that τ follows an Inverse Gamma distribution IG(c, d/2). Then the prior density functions of the unknown parameters in TN(μ, τ) are
\pi_1(\mu\,|\,\tau) \propto \frac{1}{\sqrt{\tau}\,\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)}\,e^{-\frac{b(\mu-a)^{2}}{2\tau}},
\pi_2(\tau) \propto \left(\frac{1}{\tau}\right)^{c+1} e^{-\frac{d}{2\tau}},
where the hyperparameters a, b, c, d > 0. The joint prior density is then
\pi(\mu,\tau) = \pi_1(\mu\,|\,\tau)\,\pi_2(\tau).
Hence we have
\pi(\mu,\tau) \propto \frac{1}{\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)}\left(\frac{1}{\tau}\right)^{c+\frac{3}{2}} e^{-\frac{1}{2\tau}\left[b(\mu-a)^{2}+d\right]},\qquad a,b,c,d,\mu,\tau>0.
Then, given the data, the posterior density derived from the above becomes
\pi(\mu,\tau\,|\,\underline{x}) = N_C^{-1}\,\frac{\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{-n}}{\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)}\left(\frac{1}{\tau}\right)^{c+\frac{m+3}{2}} e^{-\frac{1}{2\tau}\left[(a-\mu)^{2}b + d + \sum_{i=1}^{m}(x_{(i)}-\mu)^{2}\right]} \times \left[1-\Phi\!\left(\frac{x_{(m)}-\mu}{\sqrt{\tau}}\right)\right]^{n-m-\sum_{i=1}^{J}R_i}\prod_{i=1}^{J}\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{R_i},
where the normalizing constant N_C = \int_0^{\infty}\!\!\int_0^{\infty} L(\mu,\tau\,|\,\underline{x})\,\pi(\mu,\tau)\,d\mu\,d\tau.

3.1. Three Loss Functions

Choosing the loss function is significant in Bayesian estimation. Both a symmetric loss function and asymmetric loss functions are considered in this section.

3.1.1. Square Error Loss Function (SELF)

The squared error loss function (SELF) is a symmetric loss function with extensive application, given by
L_S(\theta,\tilde{\theta}) = (\tilde{\theta}-\theta)^{2},
where θ̃ is the estimate of θ. For TN(μ, τ), the Bayes estimates under the SELF are
\tilde{\mu}_S = \int_0^{\infty}\!\!\int_0^{\infty} \mu\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau,
\tilde{\tau}_S = \int_0^{\infty}\!\!\int_0^{\infty} \tau\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau.

3.1.2. Linex Loss Function (LLF)

In some situations a symmetric loss function is inappropriate, so an asymmetric loss function is used instead. Ref. [11] first proposed the Linex loss function, and it has since been used widely. Based on the progressive type II censoring scheme, Ref. [12] discussed Bayesian estimation of the inverse Weibull (IW) distribution under the LLF, and under the progressive first failure censoring scheme Ref. [13] adopted the LLF to compute Bayes estimates for the log-logistic distribution. The LLF is a linear exponential loss function: it penalizes heavily on one side of 0 and increases linearly on the other side, which makes it asymmetric and helpful for overestimation and underestimation problems. It is given by
L_L(\theta,\tilde{\theta}) = e^{p(\tilde{\theta}-\theta)} - p(\tilde{\theta}-\theta) - 1,
where p ≠ 0 determines the direction and intensity of the penalty. If p > 0, the LLF increases exponentially in the positive direction and linearly in the negative direction, while for p < 0 the negative side is penalized more heavily; the intensity of the penalty rises as |p| becomes larger. As for how to choose p, one can fix a set of candidate values, resample with the Monte Carlo method and compare the simulation results to select the most effective value, or refer to [14] for more information.
Under the LLF, the Bayes estimates of μ and τ are
\tilde{\mu}_L = -\frac{1}{p}\ln\left[\int_0^{\infty}\!\!\int_0^{\infty} e^{-p\mu}\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau\right],
\tilde{\tau}_L = -\frac{1}{p}\ln\left[\int_0^{\infty}\!\!\int_0^{\infty} e^{-p\tau}\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau\right].

3.1.3. General Entropy Loss Function (GELF)

The general entropy loss function is a suitable modification of the LLF; see [15] for detailed information. The GELF is expressed as
L_E(\theta,\tilde{\theta}) \propto \left(\frac{\tilde{\theta}}{\theta}\right)^{q} - q\ln\frac{\tilde{\theta}}{\theta} - 1.
Hence the Bayes estimates under the GELF can be computed as
\tilde{\mu}_E = \left[\int_0^{\infty}\!\!\int_0^{\infty}\mu^{-q}\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau\right]^{-\frac{1}{q}},\qquad \tilde{\tau}_E = \left[\int_0^{\infty}\!\!\int_0^{\infty}\tau^{-q}\,\pi(\mu,\tau\,|\,\underline{x})\,d\mu\,d\tau\right]^{-\frac{1}{q}}.
However, none of the Bayes estimates above has a closed form. In the next subsection, on the basis of the three loss functions, the estimates of the parameters are computed by introducing the Lindley approximation method.

3.2. Lindley Approximation

The Lindley method was first put forward by [16] to determine point estimates of unknown parameters. By approximating the two integrals appearing in the Bayes analysis through Taylor expansion at the MLE point, numerical values of the approximate Bayes estimates can be obtained. Let u denote any function of μ and τ, and let ũ be the associated Bayes estimate. Lindley's approximation in the two-parameter case is
\tilde{u} = u + u_{11}\sigma_{11} + u_{12}\sigma_{12} + u_{21}\sigma_{21} + u_{22}\sigma_{22} + \frac{1}{2}\Big[l_{30}(u_1\sigma_{11}+u_2\sigma_{12})\sigma_{11} + l_{03}(u_2\sigma_{22}+u_1\sigma_{21})\sigma_{22} + l_{21}\big(3u_1\sigma_{11}\sigma_{12}+u_2(2\sigma_{12}^{2}+\sigma_{11}\sigma_{22})\big) + l_{12}\big(3u_2\sigma_{22}\sigma_{21}+u_1(2\sigma_{21}^{2}+\sigma_{22}\sigma_{11})\big)\Big] + \rho_1(u_1\sigma_{11}+u_2\sigma_{21}) + \rho_2(u_2\sigma_{22}+u_1\sigma_{12}),
where
l = \ln L(\mu,\tau),\quad l_{ij} = \frac{\partial^{i+j} l}{\partial\mu^{i}\,\partial\tau^{j}},\quad \rho = \ln\pi(\mu,\tau),\quad \rho_1 = \frac{\partial\rho}{\partial\mu},\quad \rho_2 = \frac{\partial\rho}{\partial\tau},
u_1 = \frac{\partial u}{\partial\mu},\quad u_2 = \frac{\partial u}{\partial\tau},\quad u_{11} = \frac{\partial^{2} u}{\partial\mu^{2}},\quad u_{12} = u_{21} = \frac{\partial^{2} u}{\partial\mu\,\partial\tau},\quad u_{22} = \frac{\partial^{2} u}{\partial\tau^{2}}.
Here l is the log-likelihood in (10), ρ is the log density of the joint prior of the unknown parameters, and σ_{ij} is the (i, j)-th entry of Var(μ̂, τ̂) in (13); all quantities are evaluated at the MLE (μ̂, τ̂). These expressions and their derivatives are computed in detail in Appendix D. The function u to be approximated takes different forms under the different loss functions.
(1)
Estimation under the square error loss function.
Bayes estimate μ ˜ L S is written as
u = \mu,\quad u_1 = 1,\quad u_2 = u_{11} = u_{12} = u_{21} = u_{22} = 0,
\tilde{\mu}_{LS} = \hat{\mu} + \frac{1}{2}\left[l_{30}\sigma_{11}^{2} + l_{03}\sigma_{12}\sigma_{22} + 3l_{21}\sigma_{11}\sigma_{12} + l_{12}\sigma_{22}\sigma_{11} + 2l_{12}\sigma_{21}^{2}\right] + \rho_1\sigma_{11} + \rho_2\sigma_{12}.
Similarly, Bayes estimate τ ˜ L S is given as
u = \tau,\quad u_2 = 1,\quad u_1 = u_{11} = u_{12} = u_{21} = u_{22} = 0,
\tilde{\tau}_{LS} = \hat{\tau} + \frac{1}{2}\left[l_{30}\sigma_{12}\sigma_{11} + l_{03}\sigma_{22}^{2} + l_{21}\sigma_{11}\sigma_{22} + 2l_{21}\sigma_{12}^{2} + 3l_{12}\sigma_{22}\sigma_{21}\right] + \rho_1\sigma_{21} + \rho_2\sigma_{22}.
(2)
Estimation under Linex loss function.
In this case u is an exponential function of the unknown parameter, and the Bayes estimate μ̃_{LL} is written as
u = e^{-p\mu},\quad u_1 = -pe^{-p\mu},\quad u_{11} = p^{2}e^{-p\mu},\quad u_2 = u_{12} = u_{21} = u_{22} = 0,
\tilde{\mu}_{LL} = -\frac{1}{p}\ln\Big\{e^{-p\hat{\mu}} + u_{11}\sigma_{11} + \frac{1}{2}\big[l_{30}u_1\sigma_{11}^{2} + l_{03}u_1\sigma_{21}\sigma_{22} + 3l_{21}u_1\sigma_{11}\sigma_{12} + l_{12}u_1\sigma_{22}\sigma_{11} + 2l_{12}u_1\sigma_{21}^{2}\big] + \rho_1u_1\sigma_{11} + \rho_2u_1\sigma_{12}\Big\}.
Similarly, Bayes estimate τ ˜ L L is given as
u = e^{-p\tau},\quad u_2 = -pe^{-p\tau},\quad u_{22} = p^{2}e^{-p\tau},\quad u_1 = u_{11} = u_{12} = u_{21} = 0,
\tilde{\tau}_{LL} = -\frac{1}{p}\ln\Big\{e^{-p\hat{\tau}} + u_{22}\sigma_{22} + \frac{1}{2}\big[l_{30}u_2\sigma_{12}\sigma_{11} + l_{03}u_2\sigma_{22}^{2} + l_{21}u_2\sigma_{11}\sigma_{22} + 2l_{21}u_2\sigma_{12}^{2} + 3l_{12}u_2\sigma_{22}\sigma_{21}\big] + \rho_1u_2\sigma_{21} + \rho_2u_2\sigma_{22}\Big\}.
(3)
Estimation under general entropy loss function.
In this situation u is a power function, and the Bayes estimate μ̃_{LE} is obtained as
u = \mu^{-q},\quad u_1 = -q\mu^{-(q+1)},\quad u_{11} = q(q+1)\mu^{-(q+2)},\quad u_2 = u_{12} = u_{21} = u_{22} = 0,
\tilde{\mu}_{LE} = \Big\{\hat{\mu}^{-q} + u_{11}\sigma_{11} + \frac{1}{2}\big[l_{30}u_1\sigma_{11}^{2} + l_{03}u_1\sigma_{21}\sigma_{22} + 3l_{21}u_1\sigma_{11}\sigma_{12} + l_{12}u_1\sigma_{22}\sigma_{11} + 2l_{12}u_1\sigma_{21}^{2}\big] + \rho_1u_1\sigma_{11} + \rho_2u_1\sigma_{12}\Big\}^{-\frac{1}{q}}.
Similarly, the Bayes estimate τ̃_{LE} is given as
u = \tau^{-q},\quad u_2 = -q\tau^{-(q+1)},\quad u_{22} = q(q+1)\tau^{-(q+2)},\quad u_1 = u_{11} = u_{12} = u_{21} = 0,
\tilde{\tau}_{LE} = \Big\{\hat{\tau}^{-q} + u_{22}\sigma_{22} + \frac{1}{2}\big[l_{30}u_2\sigma_{12}\sigma_{11} + l_{03}u_2\sigma_{22}^{2} + l_{21}u_2\sigma_{11}\sigma_{22} + 2l_{21}u_2\sigma_{12}^{2} + 3l_{12}u_2\sigma_{22}\sigma_{21}\big] + \rho_1u_2\sigma_{21} + \rho_2u_2\sigma_{22}\Big\}^{-\frac{1}{q}}.
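To make the use of the generic two-parameter formula concrete, the following R transcription (entirely ours, under the assumption that the lij, rho1, rho2 and Sig inputs have been computed from Appendix D) returns the Lindley-adjusted estimate of any function u:

    lindley <- function(u, u1, u2, u11, u12, u22,
                        l30, l21, l12, l03, rho1, rho2, Sig) {
      s11 <- Sig[1,1]; s12 <- Sig[1,2]; s21 <- Sig[2,1]; s22 <- Sig[2,2]
      u + u11*s11 + u12*s12 + u12*s21 + u22*s22 +
        0.5*( l30*(u1*s11 + u2*s12)*s11 +
              l03*(u2*s22 + u1*s21)*s22 +
              l21*(3*u1*s11*s12 + u2*(2*s12^2 + s11*s22)) +
              l12*(3*u2*s22*s21 + u1*(2*s21^2 + s22*s11)) ) +
        rho1*(u1*s11 + u2*s21) + rho2*(u2*s22 + u1*s12)
    }
    # e.g. mu under SELF: lindley(mu.hat, 1, 0, 0, 0, 0, l30, l21, l12, l03, rho1, rho2, Sig)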
Using these expressions, the approximate Bayes estimates can be computed under the different loss functions. Although this method yields point estimates, it cannot produce interval estimates of the unknown parameters. To overcome this difficulty, the importance sampling method is adopted to construct credible intervals in the following subsection.

3.3. Importance Sampling

As a useful Monte Carlo technique, importance sampling can be applied to obtain both point and interval estimates. Guided by the form of the posterior in (19), τ is sampled from an Inverse Gamma distribution and, conditionally on τ, μ is sampled from a truncated normal distribution. The corresponding densities are
\Pi^{*}(\tau) \propto \left(\frac{1}{\tau}\right)^{c+\frac{m}{2}+1} e^{-\frac{1}{2\tau}\left(a^{2}b+d+\sum_{i=1}^{m}x_{(i)}^{2}-\frac{\left(\sum_{i=1}^{m}x_{(i)}+ab\right)^{2}}{b+m}\right)},
\Pi^{*}(\mu\,|\,\tau) \propto \frac{1}{\sqrt{\tau}\,\Phi\!\left(\frac{\sum_{i=1}^{m}x_{(i)}+ab}{\sqrt{\tau(b+m)}}\right)}\, e^{-\frac{b+m}{2\tau}\left(\mu-\frac{\sum_{i=1}^{m}x_{(i)}+ab}{b+m}\right)^{2}}.
Theorem 2.
Π * ( τ ) is the density function from Inverse Gamma distribution.
Proof. 
Let IG(α̃, β̃) denote the Inverse Gamma distribution. The expression for Π*(τ) has the form of an Inverse Gamma density, so to prove Theorem 2 it only remains to show that α̃ and β̃ are positive.
  • α̃ = c + m/2 > 0. By (18) all the hyperparameters are positive, and m > 0, so α̃ = c + m/2 > 0.
  • β̃ = ½(a²b + d + Σ_{i=1}^{m} x_{(i)}² − (Σ_{i=1}^{m} x_{(i)} + ab)²/(b + m)) > 0. Expanding over the common denominator b + m gives
    \tilde{\beta} = \frac{1}{2(b+m)}\left[b\sum_{i=1}^{m}\left(x_{(i)}-a\right)^{2} + \left(m\sum_{i=1}^{m}x_{(i)}^{2}-\Big(\sum_{i=1}^{m}x_{(i)}\Big)^{2}\right) + db + dm\right],
    so β̃ splits into three parts:
    (a) b Σ_{i=1}^{m} (x_{(i)} − a)² ≥ 0, since b > 0;
    (b) m Σ_{i=1}^{m} x_{(i)}² − (Σ_{i=1}^{m} x_{(i)})² ≥ 0, by the Cauchy–Schwarz inequality;
    (c) db + dm > 0, because all the hyperparameters and the integer m are positive.
    All three parts are non-negative and the last is strictly positive, so β̃ > 0. Hence Π*(τ) is indeed an Inverse Gamma density.
 □
Theorem 3.
Π * ( μ | τ ) is the density function from truncated normal distribution which is left truncated at 0.
Proof. 
Denote the left truncated normal distribution by TN(η̃, δ̃). The form of Π*(μ|τ) is identical to the pdf of TN(η̃, δ̃) with η̃ = (Σ_{i=1}^{m} x_{(i)} + ab)/(b + m) and δ̃ = τ/(b + m), so it only remains to show that η̃ > 0 and δ̃ > 0. By the assumptions of our model, the sample data and hyperparameters are positive; hence η̃ > 0 and δ̃ > 0.  □
Using (33) and (34), (19) can be rewritten as
\pi(\mu,\tau\,|\,\underline{x}) \propto IG\!\left(\tau;\ \frac{m}{2}+c,\ \frac{1}{2}\left(a^{2}b+d+\sum_{i=1}^{m}x_{(i)}^{2}-\frac{\left(\sum_{i=1}^{m}x_{(i)}+ab\right)^{2}}{b+m}\right)\right) \times TN\!\left(\mu\,|\,\tau;\ \frac{\sum_{i=1}^{m}x_{(i)}+ab}{b+m},\ \frac{\tau}{b+m}\right) \times Q(\mu,\tau) \propto \Pi^{*}(\tau)\,\Pi^{*}(\mu\,|\,\tau)\,Q(\mu,\tau),
where
Q(\mu,\tau) = \frac{\Phi\!\left(\frac{\sum_{i=1}^{m}x_{(i)}+ab}{\sqrt{\tau(b+m)}}\right)}{\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)}\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{-n}\left[1-\Phi\!\left(\frac{x_{(m)}-\mu}{\sqrt{\tau}}\right)\right]^{n-m-\sum_{i=1}^{J}R_i}\prod_{i=1}^{J}\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{R_i}.
Using the expressions above, samples from π(μ, τ | x̲) are generated by the following steps.
  • Prefix the number of samples S n .
  • Generate τ from Π * ( τ ) .
  • Generate μ from Π * ( μ | τ ) .
  • Repeat step 2 and 3 to produce a series of samples ( μ 1 , τ 1 ), ( μ 2 , τ 2 ), ⋯, ( μ S n , τ S n ).
Let θ(μ, τ) denote the function whose Bayes estimate is required. Its Bayes estimate θ̃*(μ, τ) is then obtained as
\tilde{\theta}^{*}(\mu,\tau) = \frac{\sum_{i=1}^{S_n}\theta(\mu_i,\tau_i)\,Q(\mu_i,\tau_i)}{\sum_{i=1}^{S_n}Q(\mu_i,\tau_i)}.
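A compact R sketch of these steps follows. All function names are ours; cc stands for the hyperparameter c (c is reserved in R), Inverse Gamma variates are drawn as reciprocals of Gamma variates, and the weights Q(μ, τ) are accumulated on the log scale for numerical stability.

    rtn <- function(k, mean, var) {            # TN draws by inverting the cdf (2)
      u <- runif(k)
      mean + sqrt(var)*qnorm(1 - (1 - u)*pnorm(mean/sqrt(var)))
    }
    Qw <- function(mu, tau, x, R, J, n, a, b) {   # weight Q(mu, tau) from the display above
      m <- length(x)
      lw <- pnorm((sum(x) + a*b)/sqrt(tau*(b + m)), log.p = TRUE) -
        pnorm(a*sqrt(b)/sqrt(tau), log.p = TRUE) -
        n*pnorm(mu/sqrt(tau), log.p = TRUE) +
        (n - m - sum(R[seq_len(J)])) *
          pnorm((x[m] - mu)/sqrt(tau), lower.tail = FALSE, log.p = TRUE)
      for (i in seq_len(J))
        lw <- lw + R[i]*pnorm((x[i] - mu)/sqrt(tau), lower.tail = FALSE, log.p = TRUE)
      exp(lw)
    }
    is_bayes <- function(theta, Sn, x, R, J, n, a, b, cc, d) {
      m    <- length(x)
      beta <- (a^2*b + d + sum(x^2) - (sum(x) + a*b)^2/(b + m))/2
      tau  <- 1/rgamma(Sn, shape = cc + m/2, rate = beta)   # step 2: IG draws
      mu   <- rtn(Sn, (sum(x) + a*b)/(b + m), tau/(b + m))  # step 3: TN draws given tau
      w    <- Qw(mu, tau, x, R, J, n, a, b)
      sum(theta(mu, tau)*w)/sum(w)                          # weighted estimate above
    }
    # e.g. the SELF estimate of mu: is_bayes(function(mu, tau) mu, 5000, x, R, J, n, a, b, cc, d)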
In the sequel, HPD intervals for the unknown parameters can also be obtained from the generated samples. Assume that P[θ(μ, τ) ≤ θ_p] = p, 0 < p < 1. Since p is determined in advance, we construct the corresponding HPD interval by estimating θ_p.
Suppose that
w_i^{*} = \frac{Q(\mu_i,\tau_i)}{\sum_{i=1}^{S_n}Q(\mu_i,\tau_i)}.
For brevity, θ̃*(μ_i, τ_i) is written θ̃*_i. Sort {θ̃*_1, θ̃*_2, …, θ̃*_{S_n}} in ascending order as {θ̃*_{(1)}, θ̃*_{(2)}, …, θ̃*_{(S_n)}}, and let w*_{(i)} be the weight associated with θ̃*_{(i)} after the reordering. The Bayes estimate of θ_p is θ̂*_p = θ̃*_{(z_p)}, where z_p is the integer satisfying
\sum_{i=1}^{z_p} w^{*}_{(i)} \le p \le \sum_{i=1}^{z_p+1} w^{*}_{(i)}.
Hence a 100(1 − α)% credible interval can be expressed as (θ̂*_γ, θ̂*_{γ+1−α}), γ = w_{(1)}, w_{(1)}+w_{(2)}, …, Σ_{i=1}^{z_α} w_{(i)}. Finally, the HPD interval (θ̂*_{γ*}, θ̂*_{γ*+1−α}) is the one of shortest length; that is, γ* satisfies θ̂*_{γ*+1−α} − θ̂*_{γ*} ≤ θ̂*_{γ+1−α} − θ̂*_{γ} for all γ.
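The shortest-interval search is easy to express in code. A short sketch (our own helper, taking the draws theta and weights w produced as in the previous sketch): sort the draws, carry the normalized weights along, then slide a window of posterior mass 1 − α and keep the shortest one.

    hpd <- function(theta, w, alpha = 0.05) {
      o  <- order(theta)
      th <- theta[o]; ww <- w[o]/sum(w)
      cw <- cumsum(ww)
      lo <- th[1]; hi <- th[length(th)]
      for (i in seq_along(th)) {
        j <- which(cw - (cw[i] - ww[i]) >= 1 - alpha)[1]   # right end holding 1 - alpha mass
        if (!is.na(j) && th[j] - th[i] < hi - lo) { lo <- th[i]; hi <- th[j] }
      }
      c(lower = lo, upper = hi)
    }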

4. Bootstrap Confidence Interval

The bootstrap method is a useful tool for constructing confidence intervals of unknown parameters by resampling. Here the percentile bootstrap (Boot-p) method is considered, with the following algorithm.
step 1 :
Set the number of bootstrap replications N_boot in advance. Obtain the initial MLEs μ̂, τ̂ of the parameters from the original sample X = (x₁, x₂, …, x_J, …, x_{m−1}, x_m).
step 2 :
Using the estimates μ̂, τ̂, generate a new censored sample from an experiment of size n. Denote it X* = (x₁¹, x₂¹, …, x_J¹, …, x_{m−1}¹, x_m¹).
step 3 :
Compute the MLEs μ̂^{(1)}, τ̂^{(1)} based on X*.
step 4 :
Repeat steps 2 and 3 N_boot times to obtain μ̂₁, μ̂₂, …, μ̂_{N_boot−1}, μ̂_{N_boot} and τ̂₁, τ̂₂, …, τ̂_{N_boot−1}, τ̂_{N_boot}.
step 5 :
Sort μ̂₁, μ̂₂, …, μ̂_{N_boot} in ascending order and denote them μ̂_{(1)}, μ̂_{(2)}, …, μ̂_{(N_boot)}; similarly, let τ̂_{(1)}, τ̂_{(2)}, …, τ̂_{(N_boot)} be the ordered sample. Then the 100(1 − α)% Boot-p intervals of μ and τ are (μ̂_{([αN_boot/2])}, μ̂_{([(1−α/2)N_boot])}) and (τ̂_{([αN_boot/2])}, τ̂_{([(1−α/2)N_boot])}), where [·] denotes rounding to the nearest integer.
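A condensed sketch of these steps in R follows; adapttype2() stands for the sample generator of Appendix E (with our completion returning the sample and the observed J) and fit() for the MLE routine sketched in Section 2.1 — both names are assumptions. The quantile() call implements the percentile step.

    bootp_ci <- function(mle, R, T, Nboot = 1000, alpha = 0.05) {
      est <- replicate(Nboot, {
        s <- adapttype2(mle[1], mle[2], R, T)       # step 2: resample at the MLE
        fit(s$x, R, s$J, sum(R) + length(R))        # step 3: refit by maximum likelihood
      })
      t(apply(est, 1, quantile, probs = c(alpha/2, 1 - alpha/2)))  # steps 4-5
    }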

5. Simulation Results

In this section, the performances of the different point and interval estimation methods are evaluated by Monte Carlo simulations; comparing their expected values and mean squared errors reveals the advantages and weaknesses of the various methods. R is used for the computations and simulations. Taking into account the sample size n, the number of effective samples m, the ideal test time T and different censoring schemes, simulation settings are designed for comparison purposes. Because of the randomness of simulation, sampling is repeated N = 5000 times and the final value is taken as the average of the computed results, which are tabulated below. For brevity, the censoring schemes are abbreviated: for example, (1, 1, 0⁷) represents (1, 1, 0, 0, 0, 0, 0, 0, 0) and ((1, 1, 0)³) represents (1, 1, 0, 1, 1, 0, 1, 1, 0). For the combinations of sample sizes (n, m) = (30, 12), (40, 16), (50, 20), two types of censoring schemes, ((n−m)/2, (n−m)/2, 0^{m−2}) and ((1, 1, 0)^{m/3}), are considered, and the prefixed ideal test times T = 1 and T = 3 are both taken into account. Please see Appendix E for selected R code.
Generally, μ = 2 and τ = 1 are fixed, and truncated normal samples are generated under the adaptive progressive type II censoring schemes by the algorithm proposed in [7]. Using the optim command in R, with the true values as initial values, the ML estimates of the unknown parameters are obtained from the samples. Table 1 presents the ML estimated values (EV) and mean squared errors (MSE). For Bayesian point estimation, on the one hand the Lindley approximation is applied starting from the known ML estimates, and on the other hand the importance sampling method is used as a numerical alternative. For comparison, the three loss functions SELF, LLF and GELF are evaluated. The hyperparameters taken for the simulations are a = 0.6, b = 0.3, c = 37, d = 0.6. For the asymmetric loss functions, p = 1 and p = −1 are compared under the Linex loss function, and q = 1 and q = −1 under the general entropy loss function. Table 2 and Table 3 report the computed Bayesian estimates and their assessments. Additionally, the asymptotic confidence intervals, bootstrap-p intervals and HPD intervals at the 95% confidence level are summarized in Table 4 through the interval mean length (ML) and coverage rate (CR).
From Table 1, Table 2 and Table 3, some conclusions can be drawn as follows.
(1)
Among all the methods there is a common tendency: the estimated values approach the true values, and the mean squared errors decrease, as the sample size n and the number of observed failures m increase.
(2)
Generally, the Bayes estimates are more accurate than the ML estimates in terms of both estimated values and MSE. This is because the MLEs are calculated from the data alone, while the Bayesian method also takes prior information about the unknown parameters of TN(μ, τ) into account. An inappropriate choice of prior distribution and hyperparameters may give the Bayesian estimates a larger bias; the simulation results show that our prior selection is fairly suitable.
(3)
In Bayesian estimation, the estimates under the SELF perform slightly better than those under the LLF and GELF, which indicates that the symmetric loss function is a suitable choice in these cases. Both the LLF and GELF with p = 1, q = 1 give smaller estimated values than with p = −1, q = −1, but between the LLF and GELF the results show little difference.
(4)
As for the prefixed expected time, the ML estimates under T = 3 are lower than those under T = 1. However, the effect of T = 1 versus T = 3 is not obvious in Bayesian estimation, and the different censoring schemes likewise show no significant tendency.
(5)
Regarding computational techniques, importance sampling provides estimated values closer to the true values, with smaller MSEs, than the Lindley approximation.
Table 4 presents the results of the interval estimations. The properties of the different methods can be summarized as follows.
(1)
The interval mean length becomes narrower and the coverage rate higher as the sample size n and the number of observed failures m increase.
(2)
Although the results for the asymptotic confidence intervals are not very satisfactory, the Boot-p intervals and HPD credible intervals perform well, with lower ML and coverage rates closer to the nominal level.
(3)
For different censoring schemes, the first type, ((n−m)/2, (n−m)/2, 0^{m−2}), yields smaller ML for the asymptotic confidence intervals than for the other intervals when T = 1, while in the T = 3 cases the pattern is reversed.

6. Real Data Analysis

To show the effectiveness of the estimation methods, we adopt the data set on the fatigue failure times of twenty-three ball bearings in [17], which has been applied in a number of studies. Ref. [18] used this data set to discuss the fit of the exponentiated exponential, Gamma and Weibull distributions. Ref. [19] illustrated reliability estimation procedures for a multicomponent system and evaluated the results on this data set. Ref. [20] applied it to show the characteristics of the Inverted Exponentiated Weibull (IEW) distribution. In particular, Ref. [2] studied this data set and assessed the goodness of fit under four criteria among the half normal, folded normal and truncated normal distributions, concluding that the truncated normal distribution fits this data set best.
In this section, therefore, the data set mentioned above is analyzed on the basis of our model. Table 5 presents the complete data and the adaptive type II censored data under various schemes. Two types of plans are considered: scheme (4, 4, 0^{13}), denoted R_I, and scheme ((1, 1, 0)⁴, 1, 0), denoted R_II. In addition, the prescribed expected times T = 0.4 and T = 0.9 are both used for evaluation. The point and interval estimation results computed from the ball bearing fatigue failure data set are tabulated in Table 6 and Table 7.
Table 6 presents the estimates of the unknown parameters for the real data set. For interval estimation, the interval bounds and mean lengths are shown in Table 7; for convenience, the lower and upper bounds are denoted LB and UB, and ML denotes the mean length of the interval. From Table 6 and Table 7 the following conclusions are drawn.
(1)
The Bayesian estimates under importance sampling and the estimates under the Lindley approximation are relatively close. In particular, their estimates of the parameter μ are very similar regardless of the expected time T and the censoring scheme R. For the parameter τ, however, the estimates obtained by importance sampling are lower than those obtained by the Lindley method.
(2)
Compared with the Bayesian estimator using importance sampling, the ML estimates of μ are relatively smaller except in the case T = 0.4, R_II, while the ML estimates of τ are close to the Bayesian ones when T = 0.4 but become higher when T = 0.9.
(3)
According to the interval estimation results in Table 7, the lower and upper bounds of the Boot-p intervals are slightly larger. Generally, in terms of interval length, the HPD credible intervals are better than the other two.

7. Optimal Censoring Scheme

In the simulations and the real data analysis, different censoring schemes influence the effectiveness of the estimates to a certain degree. The notion of an optimal censoring scheme is therefore put forward so that the parameters can be estimated with higher efficiency and accuracy. Selecting optimal schemes has attracted many scholars, and various criteria have been raised and analyzed to solve this problem. Ref. [21] discussed the asymptotic variance through the Fisher information matrix based on type I censored data. Ref. [22] regarded choosing the optimal censoring scheme as a discrete optimization problem and compared different censoring schemes under six criteria. Ref. [23] adopted three criteria to find the optimal censoring scheme for the progressively censored lognormal distribution.
In the following, the criteria we adopt to evaluate the different censoring schemes are listed.
Criterion I.
Minimize trace ( V a r ( μ ^ , τ ^ ) ).
Criterion II.
Minimize det( V a r ( μ ^ , τ ^ ) ).
The optimal censoring scheme is obtained through simulations. By repeating the sampling procedure N = 1000 times, the trace and determinant of Var(μ̂, τ̂) are averaged as the final results, and the total expected test time of every censoring scheme is computed in the same way. Three kinds of censoring schemes are then evaluated. Given the total number of sample units n and the number of failures m, the schemes ((n/10)⁴, 0^{(4n/10−4)}), ((1, 1, 0)^{n/5}) and (0^{(4n/10−4)}, (n/10)⁴) correspond to censoring at the start, censoring uniformly and censoring at the end, and are denoted by R_I, R_II and R_III, respectively. By varying n and m, more flexible rules for choosing the optimal censoring scheme are found. For the different prefixed values of T, all the calculation results, the optimal censoring schemes and the total expected times are shown in Table 8.
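The comparison loop itself is mechanical; the sketch below is our own illustration, reusing the hypothetical adapttype2(), fit() and negloglik() helpers assumed earlier, and averages the two criteria over N simulated samples per scheme.

    compare_schemes <- function(schemes, mu, tau, T, N = 1000) {
      sapply(schemes, function(R) {
        n <- sum(R) + length(R)
        crit <- replicate(N, {
          s  <- adapttype2(mu, tau, R, T)
          ml <- fit(s$x, R, s$J, n)
          V  <- solve(numDeriv::hessian(negloglik, ml, x = s$x, R = R, J = s$J, n = n))
          c(trace = sum(diag(V)), det = det(V))   # Criterion I and Criterion II
        })
        rowMeans(crit)
      })
    }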
From Table 8, some conclusions are summarized as follows.
(1)
As the sample size rises, the determinant and trace of the variance-covariance matrix decrease.
(2)
Setting a larger value of T slightly reduces the trace and determinant of Var(μ̂, τ̂).
(3)
The first kind of censoring scheme, R_I, is the optimal scheme under both Criterion I and Criterion II, except when n = 30, m = 12, T = 3. However, according to the column E(X_{(m)}), the total expected experiment time of R_I is always the largest of the three kinds of schemes for any T or (n, m); thus R_I costs more time to carry out the test.
Experimenters should therefore weigh effectiveness against time when designing test plans. To maximize the effectiveness of estimation, increasing the sample size, censoring at the beginning and prescribing a larger T are all good choices; if saving time matters more, censoring uniformly or at the end ought to be adopted.

8. Conclusions

In this paper, the parameter estimation problem for the truncated normal distribution is investigated and statistical inferences are derived under the adaptive type II censoring scheme. The maximum likelihood method is applied to obtain the point estimates, and the Newton–Raphson algorithm is utilized to overcome the difficulty of solving the nonlinear likelihood equations. Bayesian estimation under the three loss functions SELF, LLF and GELF is also considered, with the estimates calculated by importance sampling as well as the Lindley approximation. Based on the theoretical results, simulations and a real data analysis are used to evaluate the effectiveness of the various methods. From the numerical simulations, on the one hand, the Bayes estimation method is better than maximum likelihood estimation; this pattern is also found in [2], which studies the same distribution under the progressive type II censoring model, and in [10], which considers the exponentiated Weibull distribution under the same censoring model as ours. On the other hand, importance sampling is more effective than the Lindley approximation here, whereas the two methods are not clearly distinguishable in [2].
Furthermore, confidence and credible intervals for the two unknown parameters are constructed. Asymptotic confidence intervals are established on the basis of the observed and expected Fisher information matrices; HPD credible intervals are obtained by the importance sampling procedure; and Boot-p intervals are computed for comparison. Among the interval estimation methods, the simulation results show that the Boot-p and HPD intervals perform relatively better than the asymptotic confidence intervals for small samples. In [2], the HPD intervals have the best performance in terms of interval length, whereas in [10] the choice of method has no significant effect. This hints that the approach used for interval estimation of the truncated normal distribution may be an important factor influencing the results. Finally, the optimal censoring schemes are investigated under two criteria.
The truncated normal distribution with censored data can be practical and helpful for survival analysis in reality, and it is worth further study. One can extend this work by considering more flexible and complex censoring schemes, such as the generalized progressive hybrid censoring scheme. Combining the model with competing risks is another potential way to develop a more practical model.

Author Contributions

Investigation, S.C.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Project 202110004003 of the National Training Program of Innovation and Entrepreneurship for Undergraduates.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of MLE Existence

Proof of Theorem 1.
When μ is given, the MLE of τ exists.
  • Assume that
f_2(\tau) = \frac{\partial \ln L(\mu,\tau)}{\partial\tau} = \frac{1}{2\tau}\left[\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} - m + n\xi\frac{\phi(\xi)}{\Phi(\xi)} + \sum_{i=1}^{J}R_i\,\xi_{x_{(i)}}\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} + \left(n-m-\sum_{i=1}^{J}R_i\right)\xi_{x_{(m)}}\frac{\phi(\xi_{x_{(m)}})}{1-\Phi(\xi_{x_{(m)}})}\right].
    For given μ > 0
    (a)
When τ → 0⁺, we have 1/(2τ) → +∞, ξ → +∞ and |ξ_{x_{(i)}}| → ∞, where ξ = μ/√τ and ξ_{x_{(i)}} = (x_{(i)} − μ)/√τ. Thus
\lim_{\tau\to 0^{+}}\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} = +\infty,\qquad \lim_{\tau\to 0^{+}} n\xi\frac{\phi(\xi)}{\Phi(\xi)} = \lim_{\xi\to+\infty}\frac{n\xi\, e^{-\xi^{2}/2}}{\sqrt{2\pi}\,\Phi(\xi)} = 0.
    • If ξ_{x_{(i)}} > 0, then ξ_{x_{(i)}} → +∞ as τ → 0⁺; since 1 − Φ(ξ_{x_{(i)}}) > 0 and R_i ξ_{x_{(i)}} ϕ(ξ_{x_{(i)}}) > 0, we get R_i ξ_{x_{(i)}} ϕ(ξ_{x_{(i)}})/(1 − Φ(ξ_{x_{(i)}})) > 0, i = 1, 2, …, m.
    • If ξ_{x_{(i)}} < 0, then ξ_{x_{(i)}} → −∞ as τ → 0⁺, and therefore
    \lim_{\tau\to 0^{+}} R_i\,\xi_{x_{(i)}}\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} = \lim_{\xi_{x_{(i)}}\to-\infty}\frac{R_i\,\xi_{x_{(i)}}\, e^{-\xi_{x_{(i)}}^{2}/2}}{\sqrt{2\pi}\left[1-\Phi(\xi_{x_{(i)}})\right]} = 0,\quad i = 1,2,\ldots,m.
    • If ξ_{x_{(i)}} = 0, then R_i ξ_{x_{(i)}} ϕ(ξ_{x_{(i)}})/(1 − Φ(ξ_{x_{(i)}})) = 0, i = 1, 2, …, m.
    In conclusion, whatever the values of the ξ_{x_{(i)}} are,
    \lim_{\tau\to 0^{+}}\left[\sum_{i=1}^{J}R_i\,\xi_{x_{(i)}}\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} + \left(n-m-\sum_{i=1}^{J}R_i\right)\xi_{x_{(m)}}\frac{\phi(\xi_{x_{(m)}})}{1-\Phi(\xi_{x_{(m)}})}\right] \ge 0.
    In Equation (A1), m is a finite positive constant; hence f₂(τ) → +∞.
    (b)
When τ → +∞, we have 1/(2τ) → 0⁺, ξ → 0 and ξ_{x_{(i)}} → 0, so that ϕ(ξ) → 1/√(2π) and Φ(ξ_{x_{(i)}}) → 1/2. Hence
\lim_{\tau\to+\infty}\left[\sum_{i=1}^{m}\xi_{x_{(i)}}^{2} - m + n\xi\frac{\phi(\xi)}{\Phi(\xi)} + \sum_{i=1}^{J}R_i\,\xi_{x_{(i)}}\frac{\phi(\xi_{x_{(i)}})}{1-\Phi(\xi_{x_{(i)}})} + \left(n-m-\sum_{i=1}^{J}R_i\right)\xi_{x_{(m)}}\frac{\phi(\xi_{x_{(m)}})}{1-\Phi(\xi_{x_{(m)}})}\right] = -m < 0,
and as a result f₂(τ) → 0 from below.
To sum up, f₂(τ) is positive as τ → 0⁺ and negative for τ large enough.
    Hence the solution of Equation (12) exists.
 □

Appendix B. Inference of Density Function of X(i) under TN(μ, τ)

Based on the pdf of x_{(i)}, the expected Fisher information matrix can be computed accordingly. Suppose that J = j is known. From the definition of adaptive progressive type II censoring, the experiment proceeds like the progressive type II censoring model before the time x_{(j)}; then, starting from the (j+1)-th failure, the remaining test becomes conventional type II censoring under the distribution left truncated at x_{(j)}. In the general case, let f(·) and F(·) be the pdf and cdf of the lifetime distribution. The pdf of X_{(i)}, i = 1, 2, …, j, is then written as (see [5])
f_{x_{(i)}}(x_{(i)}) = c_{i-1}^{0}\sum_{k=1}^{i} d_{k,i}^{0}\, f(x_{(i)})\left[1-F(x_{(i)})\right]^{r_k^{0}-1},
where
c_{i-1}^{0} = \prod_{k=1}^{i}r_k^{0},\qquad r_i^{0} = m-i+1+\sum_{k=i}^{m}R_k,\qquad i = 1,2,\ldots,j,
d_{1,1}^{0} = 1,\qquad d_{k,i}^{0} = \prod_{h=1,\,h\neq k}^{i}\frac{1}{r_h^{0}-r_k^{0}},\qquad 1\le k\le i\le j.
When the actual experiment time passes T, the censoring plan becomes (R₁, …, R_j, 0, …, 0, n − m − Σ_{i=1}^{j} R_i); in other words, the subsequent failures arise from a new truncated distribution under the scheme (0, 0, …, 0, n − m − Σ_{i=1}^{j} R_i). Denoting the pdf and cdf of this truncated distribution by g(x_{(i)}) and G(x_{(i)}), respectively, we have
g(x_{(i)}) = \frac{f(x_{(i)})}{1-F(x_{(j)})},\qquad G(x_{(i)}) = 1-\frac{1-F(x_{(i)})}{1-F(x_{(j)})}.
So, in the general case, the pdf of X_{(i)}, i = j+1, j+2, …, m, is given by
f_{x_{(i)}}(x_{(i)}) = \frac{c_{i-1}^{1}}{c_{j-1}^{1}}\sum_{k=j+1}^{i} d_{k,i}^{1}\, g(x_{(i)})\left[1-G(x_{(i)})\right]^{r_k^{1}-1},
where
c_{i-1}^{1} = \prod_{k=1}^{i}r_k^{1},\qquad r_i^{1} = n-i+1-\sum_{k=1}^{j}R_k,\qquad i = j+1,j+2,\ldots,m,
d_{j+1,j+1}^{1} = 1,\qquad d_{k,i}^{1} = \prod_{h=j+1,\,h\neq k}^{i}\frac{1}{r_h^{1}-r_k^{1}},\qquad j+1\le k\le i\le m.
For the truncated normal distribution, the pdf of X_{(i)}, split into the two cases, is
f_{x_{(i)}}^{TN}(x_{(i)}) = \begin{cases} c_{i-1}^{0}\sum\limits_{k=1}^{i} d_{k,i}^{0}\,\phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-1}\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{-r_k^{0}}, & i = 1,2,\ldots,j, \\[6pt] \dfrac{c_{i-1}^{1}}{c_{j-1}^{1}}\sum\limits_{k=j+1}^{i} d_{k,i}^{1}\,\phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(i)}-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{1}-1}\left[1-\Phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\right]^{-r_k^{1}}, & i = j+1,j+2,\ldots,m. \end{cases}

Appendix C. Inference of the Probability Mass Function of J

According to Equation (A4), the pdf of X_{(j)} can be described as
f_{x_{(j)}}(x_{(j)}) = c_{j-1}^{0}\sum_{k=1}^{j} d_{k,j}^{0}\,\phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-1}\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{-r_k^{0}},
where
c_{j-1}^{0} = \prod_{k=1}^{j}r_k^{0},\qquad r_k^{0} = m-k+1+\sum_{h=k}^{m}R_h,\qquad d_{k,j}^{0} = \prod_{h=1,\,h\neq k}^{j}\frac{1}{r_h^{0}-r_k^{0}}.
Given X_{(j)} = x_{(j)}, X_{(j+1)} can be considered as the first order statistic of a random sample from the distribution left truncated at x_{(j)}, so its cdf under TN(μ, τ) is
F_{x_{(j+1)}}(x\,|\,X_{(j)}=x_{(j)}) = \int_{x_{(j)}}^{x} \frac{r_{j+1}^{1}}{\sqrt{\tau}}\,\phi\!\left(\frac{x_{(j+1)}-\mu}{\sqrt{\tau}}\right)\left[1-\Phi\!\left(\frac{x_{(j+1)}-\mu}{\sqrt{\tau}}\right)\right]^{r_{j+1}^{1}-1}\left[1-\Phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\right]^{-r_{j+1}^{1}} dx_{(j+1)} = 1-\frac{\left[1-\Phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right)\right]^{r_{j+1}^{1}}}{\left[1-\Phi\!\left(\frac{x_{(j)}-\mu}{\sqrt{\tau}}\right)\right]^{r_{j+1}^{1}}}.
Then the pmf of J is deduced as follows:
P(J=j) = P\left(X_{(j)} < T \le X_{(j+1)}\right) = \int_{0}^{T} f_{x_{(j)}}(x)\,P\left(X_{(j+1)} > T\,|\,X_{(j)}=x\right)dx = \int_{0}^{T} f_{x_{(j)}}(x)\left[1-F_{x_{(j+1)}}(T\,|\,X_{(j)}=x)\right]dx = c_{j-1}^{0}\left[1-\Phi\!\left(\frac{T-\mu}{\sqrt{\tau}}\right)\right]^{r_{j+1}^{1}}\left\{\sum_{k=1}^{j}\frac{d_{k,j}^{0}}{r_k^{0}-r_{j+1}^{1}}\left[\left[\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-r_{j+1}^{1}}-\left[1-\Phi\!\left(\frac{T-\mu}{\sqrt{\tau}}\right)\right]^{r_k^{0}-r_{j+1}^{1}}\right]\right\},
where j = 0, 1, …, m and r_{m+1}^{1} ≡ 0.

Appendix D. Lindley Approximation Expressions

\rho = \ln\pi(\mu,\tau) = -\ln\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right) - \left(c+\frac{3}{2}\right)\ln\tau - \frac{1}{2\tau}\left[b(\mu-a)^{2}+d\right] + \text{constant},
\rho_1 = \frac{\partial\rho}{\partial\mu} = -\frac{b(\mu-a)}{\tau},\qquad \rho_2 = \frac{\partial\rho}{\partial\tau} = \frac{a\sqrt{b}\,\phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)}{2\tau\sqrt{\tau}\,\Phi\!\left(\frac{a\sqrt{b}}{\sqrt{\tau}}\right)} - \frac{c+\frac{3}{2}}{\tau} + \frac{b(\mu-a)^{2}+d}{2\tau^{2}},
\begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} = I(\hat{\mu},\hat{\tau})^{-1} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}^{-1},\qquad N = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22}-a_{12}^{2},
\sigma_{11} = \frac{a_{22}}{N},\qquad \sigma_{12} = \sigma_{21} = -\frac{a_{12}}{N},\qquad \sigma_{22} = \frac{a_{11}}{N},
[Q'(\xi)]_{\mu} = \frac{\Phi(\xi)\phi''(\xi) - \phi'(\xi)\phi(\xi)}{\sqrt{\tau}\,\Phi^{2}(\xi)},\qquad [Q_{2}(\xi)]_{\mu} = \frac{2\left[\Phi(\xi)\phi(\xi)\phi'(\xi) - \phi^{3}(\xi)\right]}{\sqrt{\tau}\,\Phi^{3}(\xi)},
and, since \partial\xi/\partial\tau = -\xi/(2\tau), the corresponding τ-derivatives are [Q'(\xi)]_{\tau} = -\frac{\xi\sqrt{\tau}}{2\tau}[Q'(\xi)]_{\mu} and [Q_{2}(\xi)]_{\tau} = -\frac{\xi\sqrt{\tau}}{2\tau}[Q_{2}(\xi)]_{\mu}. Moreover,
[P'_{(i)}(\xi_{x_{(i)}})]_{\mu} = -\frac{\left(1-\Phi(\xi_{x_{(i)}})\right)\phi''(\xi_{x_{(i)}}) + \phi'(\xi_{x_{(i)}})\phi(\xi_{x_{(i)}})}{\sqrt{\tau}\left(1-\Phi(\xi_{x_{(i)}})\right)^{2}},
[P'_{(i)}(\xi_{x_{(i)}})]_{\tau} = -\frac{\left(1-\Phi(\xi_{x_{(i)}})\right)\phi''(\xi_{x_{(i)}}) + \phi'(\xi_{x_{(i)}})\phi(\xi_{x_{(i)}})}{2\tau\left(1-\Phi(\xi_{x_{(i)}})\right)^{2}}\,\xi_{x_{(i)}},
[P_{(i)2}(\xi_{x_{(i)}})]_{\mu} = -\frac{2\left[\phi(\xi_{x_{(i)}})\phi'(\xi_{x_{(i)}})\left(1-\Phi(\xi_{x_{(i)}})\right) + \phi^{3}(\xi_{x_{(i)}})\right]}{\sqrt{\tau}\left(1-\Phi(\xi_{x_{(i)}})\right)^{3}},
[P_{(i)2}(\xi_{x_{(i)}})]_{\tau} = -\frac{\phi(\xi_{x_{(i)}})\phi'(\xi_{x_{(i)}})\left(1-\Phi(\xi_{x_{(i)}})\right) + \phi^{3}(\xi_{x_{(i)}})}{\tau\left(1-\Phi(\xi_{x_{(i)}})\right)^{3}}\,\xi_{x_{(i)}},
[\xi_{x_{(i)}}^{2}]_{\mu} = -\frac{2\xi_{x_{(i)}}}{\sqrt{\tau}},\qquad [\xi_{x_{(i)}}^{2}]_{\tau} = -\frac{\xi_{x_{(i)}}^{2}}{\tau},
[\xi Q(\xi)]_{\mu} = \frac{\Phi(\xi)\phi(\xi) - \xi\phi^{2}(\xi) + \xi\Phi(\xi)\phi'(\xi)}{\sqrt{\tau}\,\Phi^{2}(\xi)},
[\xi Q(\xi)]_{\tau} = -\frac{\Phi(\xi)\phi(\xi) - \xi\phi^{2}(\xi) + \xi\Phi(\xi)\phi'(\xi)}{2\tau\,\Phi^{2}(\xi)}\,\xi,
[\xi Q'(\xi)]_{\mu} = \frac{\Phi(\xi)\phi'(\xi) - \xi\phi'(\xi)\phi(\xi) + \xi\Phi(\xi)\phi''(\xi)}{\sqrt{\tau}\,\Phi^{2}(\xi)},
[\xi Q'(\xi)]_{\tau} = -\frac{\Phi(\xi)\phi'(\xi) - \xi\phi'(\xi)\phi(\xi) + \xi\Phi(\xi)\phi''(\xi)}{2\tau\,\Phi^{2}(\xi)}\,\xi,
[\xi^{2}Q'(\xi)]_{\mu} = \frac{2\Phi(\xi)\xi\phi'(\xi) - \xi^{2}\phi'(\xi)\phi(\xi) + \Phi(\xi)\xi^{2}\phi''(\xi)}{\sqrt{\tau}\,\Phi^{2}(\xi)},
[\xi^{2}Q'(\xi)]_{\tau} = -\frac{2\xi\Phi(\xi)\phi'(\xi) - \xi^{2}\phi'(\xi)\phi(\xi) + \xi^{2}\Phi(\xi)\phi''(\xi)}{2\tau\,\Phi^{2}(\xi)}\,\xi,
[\xi^{2}Q_{2}(\xi)]_{\mu} = \frac{2\left[\xi\Phi(\xi)\phi^{2}(\xi) - \xi^{2}\phi^{3}(\xi) + \xi^{2}\Phi(\xi)\phi(\xi)\phi'(\xi)\right]}{\sqrt{\tau}\,\Phi^{3}(\xi)},
[\xi^{2}Q_{2}(\xi)]_{\tau} = -\frac{\xi\Phi(\xi)\phi^{2}(\xi) - \xi^{2}\phi^{3}(\xi) + \xi^{2}\Phi(\xi)\phi(\xi)\phi'(\xi)}{\tau\,\Phi^{3}(\xi)}\,\xi,
[ ξ x ( i ) P ( i ) ( ξ x ( i ) ) ] μ = ( [ 1 Φ ( ξ x ( i ) ) ] ϕ ( ξ x ( i ) ) + ξ x ( i ) ϕ ( ξ x ( i ) ) [ 1 ϕ ( ξ x ( i ) ) ] + ξ x ( i ) ϕ 2 ( ξ x ( i ) ) τ [ 1 Φ ( ξ x ( i ) ) ] 2 ) ,
[ ξ x ( i ) P ( i ) ( ξ x ( i ) ) ] τ = ( ( [ 1 Φ ( ξ x ( i ) ) ] ϕ ( ξ x ( i ) ) + ξ x ( i ) ϕ ( ξ x ( i ) ) [ 1 ϕ ( ξ x ( i ) ) ] + ξ x ( i ) ϕ 2 ( ξ x ( i ) ) 2 τ [ 1 Φ ( ξ x ( i ) ) ] 2 ) ξ x ( i ) ,
[ ξ x ( i ) 2 P ( i ) ( ξ x ( i ) ) ] μ = ( 2 [ 1 Φ ( ξ x ( i ) ) ] ξ x ( i ) ϕ ( ξ x ( i ) ) + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) [ 1 Φ ( ξ x ( i ) ) ] + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) ϕ ( ξ x ( i ) ) τ [ 1 Φ ( ξ x ( i ) ) ] 2 ) ,
[ ξ x ( i ) 2 P ( i ) ( ξ x ( i ) ) ] τ = ( 2 [ 1 Φ ( ξ x ( i ) ) ] ξ x ( i ) ϕ ( ξ x ( i ) ) + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) [ 1 Φ ( ξ x ( i ) ) ] + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) ϕ ( ξ x ( i ) ) τ [ 1 Φ ( ξ x ( i ) ) ] 2 ) ξ x ( i ) ,
[ ξ x ( i ) 2 P ( i ) 2 ( ξ x ( i ) ) ] μ = 2 ( [ 1 Φ ( ξ x ( i ) ) ] ξ x ( i ) ϕ 2 ( ξ x ( i ) ) + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) ϕ ( ξ x ( i ) ) [ 1 Φ ( ξ x ( i ) ) ] + ξ x ( i ) 2 ϕ 3 ( ξ x ( i ) ) τ [ 1 Φ ( ξ x ( i ) ) ] 3 ) ,
[ ξ x ( i ) 2 P ( i ) 2 ( ξ x ( i ) ) ] τ = ( [ 1 Φ ( ξ x ( i ) ) ] ξ x ( i ) ϕ 2 ( ξ x ( i ) ) + ξ x ( i ) 2 ϕ ( ξ x ( i ) ) ϕ ( ξ x ( i ) ) [ 1 Φ ( ξ x ( i ) ) ] + ξ x ( i ) 2 ϕ 3 ( ξ x ( i ) ) τ [ 1 Φ ( ξ x ( i ) ) ] 3 ) ξ x ( i ) .
l 30 = 1 τ [ n [ Q ( ξ ) ] μ n [ Q 2 ( ξ ) ] μ + i = 1 J R i ( [ p ( i ) ( ξ x ( i ) ) ] μ + [ p ( i ) 2 ( ξ x ( i ) ) ] ) + ( n m i = 1 J R i ) ( [ P ( m ) ( ξ x ( m ) ) ] μ + [ P ( m ) 2 ( ξ x ( m ) ) ] μ ) ] ,
l 21 = a 11 τ 1 τ [ n [ Q 2 ( ξ ) ] τ n [ Q 2 ( ξ ) ] τ + i = 1 J R i ( [ P ( i ) ( ξ x ( i ) ) ] μ + [ P ( i ) 2 ( ξ x ( i ) ) ] μ ) + ( n m i = 1 J R i ) ( [ P m ( ξ x ( m ) ) ] τ + [ P ( m ) 2 ( ξ x ( m ) ) ] τ ) ] ,
l 12 = 2 τ 2 τ i = 1 m ξ x ( i ) 1 4 τ 2 [ 3 n [ ξ Q ( ξ ) ] μ + n [ ξ 2 Q ( ξ ) ] μ n [ ξ 2 Q 2 ( ξ ) ] μ + i = 1 J R i ( 3 [ ξ x ( i ) P ( i ) ( ξ x ( i ) ) ] μ + [ ξ x ( i ) 2 P ( i ) ( ξ x ( i ) ) ] μ + [ ξ x ( i ) 2 P ( i ) 2 ( ξ x ( i ) ) ] μ ) + ( n m i = 1 J R i ) ( 3 [ ξ x ( m ) P ( m ) ( ξ x ( m ) ) ] μ + [ ξ x ( m ) 2 P ( m ) ( ξ x ( m ) ) ] μ + [ ξ x ( m ) 2 P ( m ) 2 ( ξ x ( m ) ) ] μ ) ] ,
l 03 = 3 τ 3 i = 1 m ξ x ( i ) 2 + 1 2 τ 3 [ 2 m + 3 n ξ Q ( ξ ) + n ξ 2 Q ( ξ ) n ξ 2 Q 2 ( ξ ) + i = 1 J R i ( 3 ξ x ( i ) P ( i ) ( ξ x ( i ) ) + ξ x ( i ) 2 P ( i ) ( ξ x ( i ) ) + ξ x ( i ) 2 P ( i ) 2 ( ξ x ( i ) ) ) + ( n m i = 1 J R i ) ( 3 ξ x ( m ) P ( m ) ( ξ x ( m ) ) + ξ x ( m ) P ( m ) ( ξ x ( m ) ) + ξ x ( m ) 2 P ( m ) 2 ( ξ x ( m ) ) ) n 4 τ 2 [ [ ξ 2 Q ( ξ ) ] τ + 3 [ ξ Q ( ξ ) ] τ [ ξ 2 Q 2 ( ξ ) ] τ + i = 1 J R i ( 3 [ ξ x ( i ) P ( i ) ( ξ x ( i ) ) ] τ + [ ξ x ( i ) 2 P ( i ) ] τ + [ ξ x ( i ) 2 P ( i ) ( ξ x ( i ) ) ] τ ) + ( n m i = 1 J R i ) ( 3 [ ξ x ( m ) P ( m ) ( ξ x ( m ) ) ] τ + [ ξ x ( m ) 2 P ( m ) ] τ + [ ξ x ( m ) 2 P ( m ) ( ξ x ( m ) ) ] τ ) ] ] .
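These quantities plug into the usual two-parameter Lindley correction. As an implementation aid, the sketch below assembles the standard two-parameter Lindley approximation of a posterior expectation E[g(μ, τ)] at the MLE; the function name lindley2 and its argument layout are ours, and sigma is the inverse of the observed information matrix:

lindley2 <- function(g, g1, g2, g11, g12, g22,
                     rho1, rho2, l30, l21, l12, l03, sigma) {
  # g and its partial derivatives are evaluated at (mu.hat, tau.hat)
  s11 <- sigma[1, 1]; s12 <- sigma[1, 2]; s22 <- sigma[2, 2]
  g +
    0.5 * (g11 * s11 + 2 * g12 * s12 + g22 * s22) +
    rho1 * (g1 * s11 + g2 * s12) + rho2 * (g1 * s12 + g2 * s22) +
    0.5 * (l30 * s11 * (g1 * s11 + g2 * s12) +
           l21 * (3 * g1 * s11 * s12 + g2 * (s11 * s22 + 2 * s12^2)) +
           l12 * (3 * g2 * s22 * s12 + g1 * (s11 * s22 + 2 * s12^2)) +
           l03 * s22 * (g1 * s12 + g2 * s22))
}
# e.g., the posterior mean of mu under SELF uses g = mu.hat, g1 = 1 and all
# other derivatives of g equal to zero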

Appendix E. Selected Computational Codes

library(numDeriv)   # numerical Hessian for the observed information matrix
library(invgamma)   # rinvgamma(): inverse-gamma draws for tau
library(truncnorm)  # rtruncnorm(): truncated normal draws for mu

### generate adaptive progressive type II censored data
adapttype2 <- function(mu, tau, R, T) {
  m <- length(R)
  n <- sum(R) + m
  ## stage 1: an ordinary progressive type II censored sample from TN(mu, tau)
  W1 <- runif(m)
  V1 <- rep(0, m)
  U1 <- rep(0, m)
  x <- rep(0, m)
  for (i in 1:m) {
    V1[i] <- W1[i]^(1 / (i + sum(R[(m - i + 1):m])))
  }
  for (i in 1:m) {
    U1[i] <- 1 - prod(V1[m:(m - i + 1)])
  }
  for (i in 1:m) {
    x[i] <- sqrt(tau) * qnorm(1 - (1 - U1[i]) * pnorm(mu / sqrt(tau))) + mu
  }
  ## locate j: the number of failures observed before the threshold T
  i <- 1
  repeat {
    if (x[i] > T) { j <- i - 1; break } else { i <- i + 1 }
    if (i == m) { j <- m; break }
  }
  begin <- j + 2
  if (j == 0) { begin <- 1 }
  if ((j + 2) > m) { end <- m } else { end <- j + 1 }
  if (end == m) { return(x) }
  ## stage 2: after T no more units are removed, so x[begin:m] are regenerated
  ## as order statistics from the distribution left truncated at x[j + 1]
  len <- length(begin:m)
  W2 <- runif(len)
  V2 <- rep(0, len)
  U2 <- rep(0, len)
  for (i in 1:len) {
    V2[i] <- W2[i]^(1 / i)
  }
  for (i in 1:len) {
    U2[i] <- 1 - prod(V2[len:(len - i + 1)])
  }
  xi <- function(x, mu, tau) { (x - mu) / sqrt(tau) }
  for (i in begin:m) {
    if (begin == 1) {
      x[i] <- qnorm(U2[i] * (1 - pnorm(xi(x[j + 1], mu, tau))) +
                      pnorm(xi(x[j + 1], mu, tau))) * sqrt(tau) + mu
    } else {
      x[i] <- qnorm(U2[i - j - 1] * (1 - pnorm(xi(x[j + 1], mu, tau))) +
                      pnorm(xi(x[j + 1], mu, tau))) * sqrt(tau) + mu
    }
  }
  return(x)
}

## MLE and asymptotic confidence intervals
Xii <- function(xi, mu, tau) { (xi - mu) / sqrt(tau) }
Xi <- function(mu, tau) { mu / sqrt(tau) }
mu <- 2
tau <- 1
num <- 1
T <- 1
p <- 0.2                         # nominal non-coverage probability
covermu <- 0
covertau <- 0
z <- qnorm(1 - p / 2)
sim <- 1000
leftmu <- rep(0, sim); rightmu <- rep(0, sim)
lefttau <- rep(0, sim); righttau <- rep(0, sim)
mumle <- rep(0, sim); taumle <- rep(0, sim)
repeat {
  R1 <- rep(c(1, 1, 0), 6)
  R <- R1
  datax <- adapttype2(mu, tau, R, T)
  i <- 1
  m <- length(R)
  n <- sum(R) + m
  repeat {
    if (datax[i] > T) { j <- i - 1; break } else { i <- i + 1 }
    if (i == m) { j <- m; break }
  }
  mu1 <- mu
  tau1 <- tau
  test <- try({
    ## negative log-likelihood; x = c(mu, tau)
    mle <- function(datax, R, x) {
      loglike <- -sum(Xii(datax, x[1], x[2])^2 / 2) - 0.5 * m * log(x[2]) -
        n * log(pnorm(Xi(x[1], x[2]))) +
        sum(R[seq_len(j)] * log(1 - pnorm(Xii(datax[seq_len(j)], x[1], x[2])))) +
        (n - m - sum(R[seq_len(j)])) * log(1 - pnorm(Xii(datax[m], x[1], x[2])))
      return(-loglike)
    }
    a <- optim(c(mu1, tau1), mle, datax = datax, R = R1, method = "L-BFGS-B",
               lower = c(0.01, 0.01), control = list(trace = FALSE, maxit = 1000))
    mumle[num] <- a$par[1]
    taumle[num] <- a$par[2]
  })
  ## discard runs where the optimization failed or clearly diverged
  if ("try-error" %in% class(test) | taumle[num] > 1.3 | mumle[num] > 2.3) { next }
  else {
    h1 <- hessian(func = mle, x = c(a$par[1], a$par[2]), datax = datax, R = R1)
    var <- solve(h1)              # inverse observed information
    leftmu[num] <- mumle[num] - z * sqrt(var[1, 1])
    rightmu[num] <- mumle[num] + z * sqrt(var[1, 1])
    lefttau[num] <- taumle[num] - z * sqrt(var[2, 2])
    righttau[num] <- taumle[num] + z * sqrt(var[2, 2])
    if (mu > leftmu[num] & mu < rightmu[num]) { covermu <- covermu + 1 }
    if (tau > lefttau[num] & tau < righttau[num]) { covertau <- covertau + 1 }
    num <- num + 1
  }
  if (num > sim) { break }
}
mean(mumle); mean((mumle - mu)^2)      # EV and MSE of the MLE of mu
mean(taumle); mean((taumle - tau)^2)   # EV and MSE of the MLE of tau
mean(leftmu); mean(rightmu)
mean(lefttau); mean(righttau)
covermu / sim; covertau / sim          # coverage rates
mean(rightmu - leftmu)                 # mean interval length

### Bayesian estimation by importance sampling:
### point estimates under SELF, LLF and GELF, and HPD credible intervals
a <- 0.6; b <- 0.3; c <- 37; d <- 0.6  # prior hyperparameters
p <- 1                                 # LLF parameter
q <- 1                                 # GELF parameter
SIM <- 1000                            # Monte Carlo repetitions
sim <- 5000                            # importance sample size per repetition
mum <- 2
taum <- 1
R <- rep(c(1, 1, 0), 10)
T <- 3
m <- length(R)
n <- sum(R) + m
muappro <- rep(0, SIM); tauappro <- rep(0, SIM)
mullfappro <- rep(0, SIM); taullfappro <- rep(0, SIM)
mugelfappro <- rep(0, SIM); taugelfappro <- rep(0, SIM)
muleft <- rep(0, SIM); muright <- rep(0, SIM); mulen <- rep(0, SIM)
tauleft <- rep(0, SIM); tauright <- rep(0, SIM); taulen <- rep(0, SIM)
mucover <- 0
taucover <- 0
for (i in 1:SIM) {
  mu <- rep(0, sim)
  tau <- rep(0, sim)
  Q <- rep(0, sim)
  datax <- adapttype2(mum, taum, R, T)
  ## locate J for the current sample
  ii <- 1
  repeat {
    if (datax[ii] > T) { J <- ii - 1; break } else { ii <- ii + 1 }
    if (ii == m) { J <- m; break }
  }
  IG1 <- c + m / 2
  IG2 <- 1 / 2 * (a^2 * b + d + sum(datax^2))
  for (j in 1:sim) {
    ## draw (mu, tau) from the importance distribution
    tau[j] <- rinvgamma(1, IG1, IG2)
    TN1 <- (sum(datax) + a * b) / (b + m)
    TN2 <- tau[j] / (b + m)
    mu[j] <- rtruncnorm(1, 0, Inf, TN1, sqrt(TN2))
    ## importance weight
    re <- 1
    for (k in seq_len(J)) {
      re <- re * (1 - pnorm((datax[k] - mu[j]) / sqrt(tau[j])))^R[k]
    }
    Q[j] <- pnorm((sum(datax) + a * b) / (sqrt(tau[j] * (b + m)))) /
      pnorm(a * sqrt(b) / sqrt(tau[j])) *
      pnorm(mu[j] / sqrt(tau[j]))^(-n) * re *
      (1 - pnorm((datax[m] - mu[j]) / sqrt(tau[j])))^(n - m - sum(R[seq_len(J)]))
  }
  ## Bayes point estimates under the three loss functions
  muappro[i] <- sum(mu * Q) / sum(Q)                              # SELF
  tauappro[i] <- sum(tau * Q) / sum(Q)
  mullfappro[i] <- -1 / p * log(sum(exp(-p * mu) * Q) / sum(Q))   # LLF
  taullfappro[i] <- -1 / p * log(sum(exp(-p * tau) * Q) / sum(Q))
  mugelfappro[i] <- (sum(mu^(-q) * Q) / sum(Q))^(-1 / q)          # GELF
  taugelfappro[i] <- (sum(tau^(-q) * Q) / sum(Q))^(-1 / q)
  ## 95% HPD interval for mu: the shortest interval carrying mass 1 - prob
  summu <- 0
  prob <- 0.05
  muord <- order(mu)
  for (nu in 1:sim) {
    summu <- summu + Q[muord[nu]] / sum(Q)
    if (summu >= prob) { break }
  }
  muinterval <- matrix(0, nu - 1, 2)
  for (k in 1:(nu - 1)) {
    summ2 <- 0
    l <- k
    repeat {
      summ2 <- summ2 + Q[muord[l]] / sum(Q)
      l <- l + 1
      if (summ2 >= 1 - prob) { break }
    }
    l <- l - 1
    muinterval[k, 1] <- mu[muord[k]]
    muinterval[k, 2] <- mu[muord[l]]
  }
  muinterval <- muinterval[!is.na(muinterval[, 2]), , drop = FALSE]
  judge1 <- which.min(muinterval[, 2] - muinterval[, 1])
  muleft[i] <- muinterval[judge1, 1]
  muright[i] <- muinterval[judge1, 2]
  mulen[i] <- muright[i] - muleft[i]
  ## 95% HPD interval for tau (same construction)
  sumtau <- 0
  tauord <- order(tau)
  for (nu in 1:sim) {
    sumtau <- sumtau + Q[tauord[nu]] / sum(Q)
    if (sumtau >= prob) { break }
  }
  tauinterval <- matrix(0, nu - 1, 2)
  for (k in 1:(nu - 1)) {
    summ2 <- 0
    l <- k
    repeat {
      summ2 <- summ2 + Q[tauord[l]] / sum(Q)
      l <- l + 1
      if (summ2 >= 1 - prob) { break }
    }
    l <- l - 1
    tauinterval[k, 1] <- tau[tauord[k]]
    tauinterval[k, 2] <- tau[tauord[l]]
  }
  tauinterval <- tauinterval[!is.na(tauinterval[, 2]), , drop = FALSE]
  judge2 <- which.min(tauinterval[, 2] - tauinterval[, 1])
  tauleft[i] <- tauinterval[judge2, 1]
  tauright[i] <- tauinterval[judge2, 2]
  taulen[i] <- tauright[i] - tauleft[i]
  if (mum >= muleft[i] & mum <= muright[i]) { mucover <- mucover + 1 }
  if (taum >= tauleft[i] & taum <= tauright[i]) { taucover <- taucover + 1 }
}
mean(muleft); mean(muright); mean(mulen); mucover / SIM
mean(tauleft); mean(tauright); mean(taulen); taucover / SIM
mean(muappro); mean((muappro - mum)^2)
mean(tauappro); mean((tauappro - taum)^2)
mean(mullfappro); mean((mullfappro - mum)^2)
mean(taullfappro); mean((taullfappro - taum)^2)
mean(mugelfappro); mean((mugelfappro - mum)^2)
mean(taugelfappro); mean((taugelfappro - taum)^2)
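For completeness, a short usage sketch of the generator defined above (the parameter values mirror the first simulation setting; set.seed is added only for reproducibility):

set.seed(2021)
sam <- adapttype2(mu = 2, tau = 1, R = rep(c(1, 1, 0), 6), T = 1)
round(sam, 4)   # one adaptive progressive type II censored sample of size m = 18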

References

  1. Barr, D.R.; Sherrill, E.T. Mean and Variance of Truncated Normal Distributions. Am. Stat. 1999, 53, 357–361.
  2. Lodhi, C.; Tripathi, Y.M.; Rastogi, M.K. Estimating the parameters of a truncated normal distribution under progressive type II censoring. Commun. Stat. Simul. Comput. 2019, 1–25.
  3. Pender, J. The truncated normal distribution: Applications to queues with impatient customers. Oper. Res. Lett. 2015, 43, 40–45.
  4. Horrace, W.C. Moments of the truncated normal distribution. J. Product. Anal. 2015, 43, 133–138.
  5. Balakrishnan, N.; Aggarwala, R. Progressive Censoring; Birkhäuser: Boston, MA, USA, 2000.
  6. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring; Birkhäuser: New York, NY, USA, 2014.
  7. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Nav. Res. Logist. 2009, 56, 687–698.
  8. Cramer, E.; Iliopoulos, G. Adaptive progressive Type-II censoring. TEST 2010, 19, 342–358.
  9. Hemmati, F.; Khorram, E. On adaptive progressively Type-II censored competing risks data. Commun. Stat. Simul. Comput. 2016, 46, 4671–4693.
  10. Chen, S.; Gui, W. Statistical Analysis of a Lifetime Distribution with a Bathtub-Shaped Failure Rate Function under Adaptive Progressive Type-II Censoring. Mathematics 2020, 8, 670.
  11. Varian, H.R. A Third Remark on the Number of Equilibria of an Economy. Econometrica 1975, 43, 985–986.
  12. Sultan, K.S.; Alsadat, N.H.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Simul. 2014, 84, 2248–2265.
  13. Xie, Y.; Gui, W. Statistical Inference of the Lifetime Performance Index with the Log-Logistic Distribution Based on Progressive First-Failure-Censored Data. Symmetry 2020, 12, 937.
  14. Khatun, N.; Matin, M.A. A Study on LINEX Loss Function with Different Estimating Methods. Open J. Stat. 2020, 10, 52–63.
  15. Calabria, R.; Pulcini, G. An engineering approach to Bayes estimation for the Weibull distribution. Microelectron. Reliab. 1994, 34, 789–802.
  16. Lindley, D.V. Approximate Bayesian methods. Trab. Estad. Investig. Oper. 1980, 31, 223–245.
  17. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: New York, NY, USA, 1982.
  18. Gupta, R.D.; Kundu, D. Exponentiated Exponential Family: An Alternative to Gamma and Weibull Distributions. Biom. J. 2001, 43, 117–130.
  19. Rao, G.S. Estimation of Reliability in Multicomponent Stress-strength Based on Generalized Exponential Distribution. Rev. Colomb. Estad. 2012, 35, 67–76.
  20. Lee, S.; Noh, Y.; Chung, Y. Inverted exponentiated Weibull distribution with applications to lifetime data. Commun. Stat. Appl. Methods 2017, 24, 227–240.
  21. Gupta, R.D.; Kundu, D. On the comparison of Fisher information of the Weibull and GE distributions. J. Stat. Plan. Inference 2006, 136, 3130–3144.
  22. Pradhan, B.; Kundu, D. Inference and optimal censoring schemes for progressively censored Birnbaum–Saunders distribution. J. Stat. Plan. Inference 2013, 143, 1098–1108.
  23. Singh, S.; Tripathi, Y.M.; Wu, S.J. On estimating parameters of a progressively censored lognormal distribution. J. Stat. Comput. Simul. 2013, 85, 1071–1089.
Figure 1. Pdf of truncated normal distribution.
Figure 2. Cdf of truncated normal distribution.
Table 1. The MLE results of μ and τ.

T | (n,m) | R | EV(μ) | MSE(μ) | EV(τ) | MSE(τ)
1 | (30,12) | (6,6,0*16) | 2.1481 | 0.0389 | 1.1417 | 0.0445
  |         | (1,1,0)*6  | 1.8934 | 0.0392 | 0.9253 | 0.0499
  | (40,16) | (8,8,0*22) | 2.1223 | 0.0371 | 1.1389 | 0.0436
  |         | (1,1,0)*8  | 1.8939 | 0.0389 | 0.9230 | 0.0495
  | (50,20) | (10,10,0*28) | 2.0802 | 0.0379 | 1.1367 | 0.0431
  |         | (1,1,0)*10 | 1.9025 | 0.0375 | 0.9208 | 0.0495
3 | (30,12) | (6,6,0*16) | 1.8849 | 0.0292 | 0.9207 | 0.204
  |         | (1,1,0)*6  | 1.8657 | 0.0540 | 0.8395 | 0.0586
  | (40,16) | (8,8,0*22) | 1.8853 | 0.0290 | 0.9293 | 0.0193
  |         | (1,1,0)*8  | 1.9034 | 0.0471 | 0.8941 | 0.0380
  | (50,20) | (10,10,0*28) | 1.8934 | 0.0279 | 0.9374 | 0.0188
  |         | (1,1,0)*10 | 1.9225 | 0.0372 | 0.9723 | 0.0258
Table 2. The Bayesian estimation results using importance sampling under SELF, LLF and GELF. Entries are EV/MSE.

T | (n,m) | R | Par. | SELF | LLF (p = -1) | LLF (p = 1) | GELF (q = -1) | GELF (q = 1)
1 | (30,12) | (6,6,0*16) | μ | 1.9420/0.0476 | 1.9325/0.0481 | 2.1018/0.0264 | 1.9322/0.0485 | 2.0760/0.0273
  |         |            | τ | 0.8782/0.0404 | 0.8924/0.0578 | 0.8644/0.0337 | 0.8839/0.0596 | 0.8567/0.0359
  |         | (1,1,0)*6  | μ | 1.9630/0.0513 | 1.9533/0.0595 | 1.1203/0.0486 | 1.9530/0.0560 | 1.1107/0.0464
  |         |            | τ | 1.1994/0.0564 | 1.1827/0.0489 | 1.2498/0.0456 | 1.1720/0.0452 | 1.2374/0.0697
  | (40,16) | (8,8,0*22) | μ | 1.9711/0.0162 | 1.9600/0.0165 | 2.0483/0.0257 | 1.9597/0.0167 | 2.0494/0.0261
  |         |            | τ | 0.8865/0.0386 | 0.9105/0.0328 | 0.9821/0.0208 | 0.9011/0.0344 | 0.9773/0.0206
  |         | (1,1,0)*8  | μ | 1.9708/0.0413 | 1.9572/0.0512 | 2.0587/0.0270 | 1.9568/0.0542 | 2.0706/0.0479
  |         |            | τ | 1.1242/0.0473 | 1.1097/0.0419 | 1.0602/0.0323 | 1.0993/0.0401 | 1.0507/0.0314
  | (50,20) | (10,10,0*28) | μ | 2.0006/0.0159 | 1.9900/0.0158 | 2.0046/0.0039 | 1.9899/0.0160 | 1.9928/0.0037
  |         |            | τ | 0.8995/0.0166 | 1.0081/0.0117 | 0.9879/0.0204 | 0.9974/0.0140 | 0.9867/0.0214
  |         | (1,1,0)*10 | μ | 1.9783/0.0195 | 1.9690/0.0198 | 2.0194/0.0276 | 1.9688/0.0200 | 2.0195/0.0279
  |         |            | τ | 1.0178/0.0274 | 1.0145/0.0268 | 1.0293/0.0220 | 1.0117/0.0266 | 1.0198/0.0217
3 | (30,12) | (6,6,0*16) | μ | 1.9420/0.0364 | 1.9309/0.0371 | 1.9586/0.0360 | 1.9305/0.0375 | 1.9584/0.0363
  |         |            | τ | 0.8776/0.0233 | 0.8713/0.0246 | 1.1339/0.0537 | 0.8636/0.0266 | 1.1404/0.0371
  |         | (1,1,0)*6  | μ | 1.9507/0.0299 | 1.9400/0.0307 | 2.0979/0.0429 | 1.9396/0.0310 | 2.0981/0.0432
  |         |            | τ | 0.9618/0.0245 | 0.9522/0.0242 | 1.1531/0.0441 | 0.9430/0.0251 | 1.1559/0.0461
  | (40,16) | (8,8,0*22) | μ | 1.9505/0.0268 | 1.9412/0.0274 | 1.9815/0.0225 | 1.9408/0.0277 | 1.9812/0.0228
  |         |            | τ | 0.9209/0.0226 | 0.9137/0.0233 | 1.0608/0.0182 | 0.9056/0.0246 | 1.0509/0.0169
  |         | (1,1,0)*8  | μ | 1.9708/0.0241 | 1.9572/0.0212 | 2.0341/0.0221 | 1.9568/0.0218 | 2.0339/0.0218
  |         |            | τ | 0.9714/0.0250 | 0.9627/0.0248 | 1.0867/0.0365 | 0.9538/0.0257 | 1.0717/0.0349
  | (50,20) | (10,10,0*28) | μ | 2.0202/0.0275 | 2.0332/0.0207 | 2.0050/0.0055 | 2.0227/0.0197 | 2.0049/0.0056
  |         |            | τ | 1.0156/0.0220 | 1.0104/0.0114 | 1.0241/0.0170 | 1.0052/0.0119 | 1.0374/0.0135
  |         | (1,1,0)*10 | μ | 1.9993/0.0260 | 1.9911/0.0157 | 2.0139/0.0171 | 1.9911/0.0159 | 2.0138/0.0173
  |         |            | τ | 0.9815/0.0242 | 0.9724/0.0239 | 1.0675/0.0216 | 0.9635/0.0245 | 1.0594/0.0270
Table 3. The Bayesian estimation results of μ and τ by applying Lindley approximation under SELF, LLF and GELF. Entries are EV/MSE.

T | (n,m) | R | Par. | SELF | LLF (p = -1) | LLF (p = 1) | GELF (q = -1) | GELF (q = 1)
1 | (30,12) | (6,6,0*16) | μ | 1.9472/0.0392 | 1.9350/0.0396 | 1.9672/0.0609 | 1.9346/0.0401 | 1.9639/0.0696
  |         |            | τ | 0.8971/0.0667 | 0.9031/0.0686 | 1.1401/0.0740 | 0.9137/0.0720 | 1.1292/0.0738
  |         | (1,1,0)*6  | μ | 1.9118/0.0582 | 1.8993/0.0520 | 2.0753/0.0550 | 1.8986/0.0507 | 2.0755/0.0552
  |         |            | τ | 1.1763/0.0843 | 1.1577/0.0740 | 1.1385/0.0434 | 1.1472/0.0688 | 1.1369/0.0475
  | (40,16) | (8,8,0*22) | μ | 1.9726/0.0228 | 1.9631/0.0231 | 1.9810/0.0218 | 1.9629/0.0233 | 1.9808/0.0212
  |         |            | τ | 0.9210/0.0548 | 0.9392/0.0540 | 1.1021/0.0709 | 0.9392/0.0541 | 1.0906/0.0654
  |         | (1,1,0)*8  | μ | 1.9243/0.0325 | 1.9131/0.0337 | 2.0347/0.0412 | 1.9125/0.0341 | 2.0220/0.0395
  |         |            | τ | 1.1148/0.0417 | 1.2100/0.0433 | 1.0867/0.0373 | 1.2034/0.0458 | 1.0863/0.0370
  | (50,20) | (10,10,0*28) | μ | 2.0397/0.0218 | 2.0222/0.0225 | 2.0570/0.0200 | 2.0224/0.0225 | 2.0568/0.0196
  |         |            | τ | 0.9564/0.0115 | 0.9479/0.0120 | 1.0676/0.0253 | 0.9390/0.0131 | 1.0571/0.0262
  |         | (1,1,0)*10 | μ | 1.9390/0.0281 | 1.9245/0.0348 | 2.0153/0.0214 | 1.9240/0.0489 | 1.9969/0.0207
  |         |            | τ | 1.0813/0.0340 | 1.0708/0.0319 | 1.9527/0.0373 | 1.0718/0.0317 | 1.0430/0.0368
3 | (30,12) | (6,6,0*16) | μ | 1.9118/0.0410 | 1.8933/0.424 | 1.2046/0.0896 | 1.8918/0.0425 | 1.2144/0.0898
  |         |            | τ | 0.9695/0.0426 | 0.9711/0.0438 | 1.1675/0.0608 | 0.9705/0.0461 | 1.1783/0.0470
  |         | (1,1,0)*6  | μ | 1.9602/0.0222 | 1.9469/0.0228 | 2.0932/0.0738 | 1.9464/0.0232 | 2.0941/0.0774
  |         |            | τ | 0.8807/0.0986 | 0.8990/0.0986 | 1.1623/0.0407 | 0.8988/0.0990 | 1.1516/0.0375
  | (40,16) | (8,8,0*22) | μ | 1.9315/0.0327 | 1.9175/0.0343 | 0.9064/0.0292 | 1.9167/0.0348 | 0.9062/0.0296
  |         |            | τ | 1.0042/0.0321 | 0.9951/0.0311 | 1.1559/0.0461 | 0.9928/0.0326 | 1.1567/0.0575
  |         | (1,1,0)*8  | μ | 1.9775/0.0169 | 1.9628/0.0176 | 2.0509/0.0544 | 1.9623/0.0232 | 2.0515/0.0535
  |         |            | τ | 0.9001/0.0276 | 0.9197/0.0265 | 1.1046/0.0369 | 0.9100/0.0268 | 1.1073/0.0394
  | (50,20) | (10,10,0*28) | μ | 1.9521/0.0246 | 1.9356/0.0256 | 2.0491/0.0122 | 1.9349/0.0261 | 2.0491/0.0118
  |         |            | τ | 1.0041/0.0178 | 0.9934/0.0173 | 1.0736/0.0211 | 0.9833/0.0176 | 1.0808/0.0297
  |         | (1,1,0)*10 | μ | 2.0022/0.0041 | 1.9832/0.0139 | 2.0221/0.0400 | 1.9833/0.0140 | 2.0123/0.0123
  |         |            | τ | 0.9302/0.0260 | 0.9417/0.0253 | 1.0711/0.0185 | 0.9436/0.0256 | 1.0593/0.0165
Table 4. Mean lengths (ML) and coverage rates (CR) of confidence intervals and credible intervals at the 95% credibility level. Entries are ML/CR.

T | (n,m) | R | Par. | Asymptotic Interval | boot-p Interval | HPD Interval
1 | (30,12) | (6,6,0*16) | μ | 0.7051/0.8526 | 0.6564/0.8524 | 0.6240/0.8688
  |         |            | τ | 1.2807/0.8976 | 0.9509/0.8887 | 0.9502/0.8548
  |         | (1,1,0)*6  | μ | 0.5661/0.7897 | 0.7719/0.8976 | 0.7027/0.9679
  |         |            | τ | 1.2702/0.8526 | 1.7133/0.9161 | 1.7402/0.9211
  | (40,16) | (8,8,0*22) | μ | 0.5869/0.8900 | 0.5681/0.9237 | 0.5693/0.8739
  |         |            | τ | 0.9909/0.9128 | 0.8337/0.9380 | 0.7506/0.8676
  |         | (1,1,0)*8  | μ | 0.4996/0.8648 | 0.6698/0.9377 | 0.6888/0.9637
  |         |            | τ | 1.1521/0.8723 | 1.5411/0.9046 | 1.5419/0.9486
  | (50,20) | (10,10,0*28) | μ | 0.5150/0.8932 | 0.5071/0.9368 | 0.4980/0.8273
  |         |            | τ | 0.8397/0.9170 | 0.7508/0.9444 | 0.5322/0.8787
  |         | (1,1,0)*10 | μ | 0.4541/0.9085 | 0.5994/0.9624 | 0.6016/0.9251
  |         |            | τ | 1.0714/0.9231 | 1.4105/0.9387 | 1.4032/0.9316
3 | (30,12) | (6,6,0*16) | μ | 0.6784/0.8998 | 0.6560/0.9860 | 0.6516/0.8934
  |         |            | τ | 0.9468/0.9198 | 0.9504/0.9000 | 0.9523/0.8892
  |         | (1,1,0)*6  | μ | 0.5133/0.9080 | 0.5983/0.8340 | 0.6001/0.9225
  |         |            | τ | 0.9138/0.8496 | 1.0507/0.8953 | 1.0341/0.9165
  | (40,16) | (8,8,0*22) | μ | 0.5811/0.9023 | 0.5667/0.9171 | 0.6098/0.8717
  |         |            | τ | 0.9468/0.9182 | 0.8327/0.9153 | 0.7143/0.8856
  |         | (1,1,0)*8  | μ | 0.4522/0.9018 | 0.5141/0.9568 | 0.4824/0.9385
  |         |            | τ | 0.8231/0.8658 | 0.9323/0.9723 | 0.9348/0.9649
  | (50,20) | (10,10,0*28) | μ | 0.5153/0.9188 | 0.5070/0.9256 | 0.5652/0.9095
  |         |            | τ | 0.8296/0.9268 | 0.7521/0.9356 | 0.6791/0.9101
  |         | (1,1,0)*10 | μ | 0.4107/0.8876 | 0.4568/0.9799 | 0.4593/0.9474
  |         |            | τ | 0.7617/0.8836 | 0.8560/0.9216 | 0.8398/0.9178
Table 5. The complete data and the adaptive progressive type II censored data under different schemes based on the real data set.

Complete data: 0.1788, 0.2892, 0.3300, 0.4152, 0.4212, 0.4560, 0.4840, 0.5184, 0.5196, 0.5412, 0.5556, 0.6780, 0.6864, 0.6888, 0.8412, 0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
R_I, T = 0.4: 0.1788, 0.4560, 0.4840, 0.5184, 0.5196, 0.5412, 0.5556, 0.6780, 0.6864, 0.6864, 0.6888, 0.8412, 0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
R_I, T = 0.9: 0.1788, 0.4560, 0.5556, 0.6780, 0.6864, 0.6864, 0.6888, 0.8412, 0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
R_II, T = 0.4: 0.1788, 0.3300, 0.4212, 0.4560, 0.4840, 0.5184, 0.5196, 0.5412, 0.5556, 0.6780, 0.6864, 0.6864, 0.6888, 0.8412, 0.9312, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
R_II, T = 0.9: 0.1788, 0.3300, 0.4212, 0.4560, 0.5184, 0.5412, 0.5556, 0.6864, 0.6888, 0.8412, 0.9864, 1.0512, 1.0584, 1.2792, 1.2804, 1.7340.
Table 6. Point estimations based on the real data set.

T | (n,m) | R | Par. | MLE | Bayes-importance sampling: SELF | LLF (p = -1) | LLF (p = 1) | GELF (q = -1) | GELF (q = 1)
0.4 | (23,15) | (4,4,0*13) | μ | 0.8088 | 1.0179 | 1.0115 | 1.0247 | 1.0050 | 1.0184
    |         |            | τ | 0.1728 | 0.1873 | 0.1869 | 0.1870 | 0.1828 | 0.1866
    | (23,14) | ((1,1,0*4),1,0) | μ | 0.8890 | 0.8835 | 0.8777 | 0.8910 | 0.8699 | 0.8848
    |         |            | τ | 0.1339 | 0.1756 | 0.1752 | 0.1766 | 0.1713 | 0.1762
0.9 | (23,15) | (4,4,0*13) | μ | 0.8396 | 1.2052 | 1.1995 | 1.2118 | 1.1955 | 1.2058
    |         |            | τ | 0.2930 | 0.1908 | 0.1903 | 0.1911 | 0.1861 | 0.1907
    | (23,14) | ((1,1,0*4),1,0) | μ | 0.8869 | 0.9927 | 0.9877 | 0.9976 | 0.9824 | 0.9925
    |         |            | τ | 0.2535 | 0.1683 | 0.1679 | 0.1685 | 0.1640 | 0.1681

T | (n,m) | R | Par. | Bayes-Lindley approximation: SELF | LLF (p = -1) | LLF (p = 1) | GELF (q = -1) | GELF (q = 1)
0.4 | (23,15) | (4,4,0*13) | μ | 1.0184 | 1.0008 | 1.1067 | 0.9910 | 1.0075
    |         |            | τ | 0.2586 | 0.2575 | 0.2602 | 0.2501 | 0.2590
    | (23,14) | ((1,1,0*4),1,0) | μ | 0.8656 | 0.8560 | 0.8765 | 0.8415 | 0.8671
    |         |            | τ | 0.2454 | 0.2443 | 0.2457 | 0.2370 | 0.2446
0.9 | (23,15) | (4,4,0*13) | μ | 1.2162 | 1.2080 | 1.2264 | 1.2025 | 1.2184
    |         |            | τ | 0.2615 | 0.2603 | 0.2632 | 0.2529 | 0.2619
    | (23,14) | ((1,1,0*4),1,0) | μ | 0.9985 | 0.9908 | 1.0122 | 0.9825 | 1.0043
    |         |            | τ | 0.2337 | 0.2328 | 0.2354 | 0.2259 | 0.2344
Table 7. Interval estimation results of μ and τ on the real data set. Entries give the lower bound, upper bound and mean length as LB/UB/ML.

T | R | Par. | Asymptotic Interval | Boot-p Interval | HPD Interval
0.4 | (4,4,0*13) | μ | 0.6163/1.0013/0.3849 | 0.7978/1.2182/0.4203 | 0.8003/1.2464/0.4460
    |            | τ | 0.0459/0.2997/0.2538 | 0.1072/0.4152/0.3079 | 0.1331/0.2486/0.1155
    | ((1,1,0*4),1,0) | μ | 0.7041/1.0739/0.3698 | 0.8436/1.2601/0.4165 | 0.6666/1.0873/0.4208
    |            | τ | 0.0773/0.1904/0.1131 | 0.1290/0.4557/0.3267 | 0.1234/0.2307/0.1074
0.9 | (4,4,0*13) | μ | 0.5975/1.0818/0.4843 | 0.7545/1.2495/0.4949 | 1.0015/1.4206/0.4191
    |            | τ | 0.0000/0.6531/0.6531 | 0.1493/0.6821/0.5328 | 0.1341/0.2502/0.1161
    | ((1,1,0*4),1,0) | μ | 0.6348/1.1389/0.5040 | 0.6511/1.2273/0.5762 | 0.7885/1.1742/0.3857
    |            | τ | 0.0028/0.5041/0.5013 | 0.1902/0.8585/0.6683 | 0.1186/0.2248/0.1062
Table 8. Total expected time and optimal censoring schemes under different criteria.

T | (n,m) | Scheme | Criteria I | Optimal Plan (I) | Criteria II | Optimal Plan (II) | E(X_(m))
1 | (30,12) | (3*4,0*14) | 0.2722 | R_I | 0.0145 | R_I  | 3.5264
  |         | (1,1,0)*6  | 0.3762 |     | 0.0187 |      | 2.8765
  |         | (0*14,3*4) | 0.3747 |     | 0.0177 |      | 2.8634
  | (40,16) | (4*4,0*20) | 0.1841 | R_I | 0.0071 | R_I  | 3.7767
  |         | (1,1,0)*8  | 0.3056 |     | 0.0117 |      | 2.9185
  |         | (0*20,4*4) | 0.3038 |     | 0.0110 |      | 2.9083
  | (50,20) | (5*4,0*26) | 0.1362 | R_I | 0.0040 | R_I  | 3.9394
  |         | (1,1,0)*10 | 0.2578 |     | 0.0080 |      | 2.9486
  |         | (0*26,5*4) | 0.2565 |     | 0.0076 |      | 2.9421
3 | (30,12) | (3*4,0*14) | 0.2040 | R_I | 0.0098 | R_II | 3.7690
  |         | (1,1,0)*6  | 0.2181 |     | 0.0096 |      | 3.3055
  |         | (0*14,3*4) | 0.2344 |     | 0.0101 |      | 2.4204
  | (40,16) | (4*4,0*20) | 0.1542 | R_I | 0.0055 | R_I  | 3.8994
  |         | (1,1,0)*8  | 0.1781 |     | 0.0061 |      | 3.3466
  |         | (0*20,4*4) | 0.1906 |     | 0.0061 |      | 2.3948
  | (50,20) | (5*4,0*26) | 0.1280 | R_I | 0.0037 | R_I  | 4.0243
  |         | (1,1,0)*10 | 0.1431 |     | 0.0038 |      | 3.4533
  |         | (0*26,5*4) | 0.1618 |     | 0.0042 |      | 2.3903