Article

On Estimation of Shannon’s Entropy of Maxwell Distribution Based on Progressively First-Failure Censored Data

by
Kapil Kumar
1,†,
Indrajeet Kumar
2,† and
Hon Keung Tony Ng
3,*,†
1
Department of Statistics, Central University of Haryana, Mahendergarh 123031, India
2
Department of Statistics, Central University of South Bihar, Gaya 824236, India
3
Department of Mathematical Sciences, Bentley University, Waltham, MA 02452, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Stats 2024, 7(1), 138-159; https://doi.org/10.3390/stats7010009
Submission received: 29 December 2023 / Revised: 5 February 2024 / Accepted: 6 February 2024 / Published: 8 February 2024
(This article belongs to the Section Reliability Engineering)

Abstract:
Shannon’s entropy is a fundamental concept in information theory that quantifies the uncertainty or information in a random variable or data set. This article addresses the estimation of Shannon’s entropy for the Maxwell lifetime model based on progressively first-failure-censored data from both classical and Bayesian points of view. In the classical perspective, the entropy is estimated using maximum likelihood estimation and bootstrap methods. For Bayesian estimation, two approximation techniques, including the Tierney-Kadane (T-K) approximation and the Markov Chain Monte Carlo (MCMC) method, are used to compute the Bayes estimate of Shannon’s entropy under the linear exponential (LINEX) loss function. We also obtained the highest posterior density (HPD) credible interval of Shannon’s entropy using the MCMC technique. A Monte Carlo simulation study is performed to investigate the performance of the estimation procedures and methodologies studied in this manuscript. A numerical example is used to illustrate the methodologies. This paper aims to provide practical values in applied statistics, especially in the areas of reliability and lifetime data analysis.

1. Introduction

Information theory offers a feasible way to quantify the uncertainty and mutual information of random variables, in which entropy plays a significant role. Shannon [1] established the concept of statistical entropy, namely Shannon's entropy, which measures the average level of missing information in a random source, as a fundamental idea in information theory. The relationship between Shannon's entropy and the information available in classical and quantum systems is well established. In addition, statistical mechanics interpretations can be made through Shannon's entropy. Shannon's entropy is typically used in statistical mechanics to represent the entropy of classical systems with configurations taken from canonical ensembles. Moreover, Shannon's entropy is useful in many disciplines, such as computer science, molecular biology, hydrology, and meteorology, for solving scientific problems involving uncertainties in random quantities. For example, molecular biologists use the principle of Shannon's entropy to study trends in gene sequences. Recently, Saraiva [2] exemplified Shannon's entropy in biological diversity and student migration studies and provided an intuitive introduction to Shannon's entropy. For more details, one may refer to the excellent monograph by Cover [3] on the theory and implications of entropy with applications in various disciplines. Shannon's entropy is one of the most widely used entropies in statistics and information theory.
Suppose a random variable X has probability density function (pdf) f(·). Then Shannon's entropy of the random variable X is given by
H(f) = E[−ln f(X)] = −∫ f(x) ln f(x) dx.    (1)
Please note that the entropy in Equation (1) is a continuous entropy, and the measurement is relative to the coordinate system ([1], Section 20). In other words, the continuous entropy is not invariant under a change of variable in general. However, the entropy based on the variable change can be expressed as the original entropy less the expected logarithm of the Jacobian of the variable change. Recently, researchers have studied parametric statistical inference for measuring the entropy under different lifetime models based on complete or censored data. For example, entropy for several shifted exponential populations was studied by Kayal and Kumar [4]. Cho et al. [5] studied entropy estimation for the Rayleigh distribution under doubly generalized Type-II hybrid censored data, Du et al. [6] developed statistical inference for the information entropy of the log-logistic distribution based on progressively Type-I interval-censored data, and Liu and Gui [7] investigated entropy estimation for the Lomax distribution based on generalized progressively hybrid censoring. Yu et al. [8] developed statistical inference on Shannon's entropy for the inverse Weibull distribution based on progressively first-failure censored data.
The Maxwell–Boltzmann distribution, popularly known as the Maxwell distribution (MWD), was initially developed by James Clerk Maxwell and Ludwig Boltzmann in the late 1800s as a distribution of velocities in a gas at a specific temperature. In this paper, we consider the MWD due to its simplicity and nice physical interpretation. For further information about the reliability characteristics of the MWD, see Bekker and Roux [9]. The MWD is frequently employed in physics and chemistry for several reasons, including the fact that the MWD can be used to describe several important gas properties, such as pressure and diffusion. The MWD has become a well-known lifespan model in recent years. Many researchers studied this statistical distribution in depth for modeling lifespan data. As Bekker and Roux [9] pointed out, the MWD is useful in life-testing and reliability studies because of its desirable properties, especially for situations where the assumption of a constant failure rate is not realistic. Recently, in the reliability report for Gallium Nitride (GaN) power devices by Pozo et al. [10], they found that the Maxwell–Boltzmann distribution fits the hot carrier distribution in the high energy regime tails well.
The classical estimation of the model parameters of MWD has been studied by [11,12] for complete and censored samples, respectively. Following this, Krishna and Malik [13] studied the MWD under progressive censoring, Krishna et al. [14] discussed the MWD under randomly censored data, Tomer and Panwar [15] developed an estimation procedure for the MWD based on Type-I progressive hybrid censored data, Panwar and Tomer [16] discussed robust Bayesian analysis for the MWD, and Kumari et al. [17] discussed classical and Bayesian estimation for the MWD based on adaptive progressive Type-II censored data.
The pdf and cumulative distribution function (cdf), respectively, for a random variable X that follows the Maxwell distribution (MWD) with parameter λ (denoted as M W D ( λ ) ) are given by
f(x; λ) = (4/√π) λ^(−3/2) x² e^(−x²/λ);  0 < x < ∞,  λ > 0,    (2)
F(x; λ) = Γ(x²/λ, 3/2);  0 < x < ∞,  λ > 0,    (3)
where Γ(x, b) = (1/Γ(b)) ∫₀^x e^(−t) t^(b−1) dt is the incomplete gamma ratio. The MWD has an increasing failure rate function [13]. The MWD is a special case of many generalized distributions, such as the generalized gamma distribution [18], the generalized Rayleigh distribution ([19], p. 452), and the generalized Weibull distribution [20].
Assuming that the lifetimes of the items of interest follow the MWD with pdf in Equation (2), Shannon's entropy in Equation (1) is given by
H(f) = (1/2) ln λ + γ + (1/2) ln π − 1/2 ≝ H(λ), say,    (4)
where γ ≈ 0.5772156649 is the Euler–Mascheroni constant.
Censoring often occurs in reliability engineering or life-testing procedures when the actual lifetimes of the items of interest are not observed; for instance, when subjects or experimental units are taken out of the experiments intentionally or accidentally. Pre-planned censoring saves time and costs for experiments, and it has been applied in various fields, including but not limited to engineering, survival analysis, clinical trials, and medical research. Several censoring schemes, such as Type-I and Type-II censoring schemes (also known as time and item censoring schemes, respectively), are often utilized in life-testing experiments. When products or items have a long lifespan, a life-testing experiment may require a long time to complete, even with Type-I or Type-II censoring. For these situations, Balasooriya [21] established the first-failure censoring plan, which provides a time- and cost-saving strategy for life-testing experiments. In this censoring scheme, an experimenter examines k × n units by grouping them into n groups, each having k items, and then conducts all the tests jointly until the first failure is seen in each group. Although the first-failure censoring plan saves time and costs, it does not allow for the periodic removal of surviving units or subjects during the life-testing experiment. Therefore, the progressive censoring scheme proposed by Cohen [22], which allows for removing units throughout the life-testing experiment, can be considered. For this reason, first-failure and progressive censoring were combined to create a more flexible life-testing strategy known as the progressive first-failure censoring (PFFC) scheme [23]. Due to its compatible features with other censoring plans, the PFFC scheme has gained much attention in the literature. For instance, the Lindley distribution based on PFFC data was studied by Dube et al. [24], the exponentiated exponential distribution based on PFFC data was studied by Mohammed et al. [25], and the estimation of stress-strength reliability for the generalized inverted exponential distribution based on PFFC data was studied by Krishna et al. [26]. Kayal et al. [27] studied the PFFC scheme and developed statistical inference on the Chen distribution, Saini et al. [28] studied the estimation of stress-strength reliability for the generalized Maxwell distribution based on PFFC data, and Kumar et al. [29] discussed reliability estimation for the inverse Pareto distribution based on PFFC data.
The PFFC scheme can be described as follows. Suppose n independent groups, each with k items, are placed on a life test, with a prespecified number of observed failures m (≤ n) and a prefixed progressive censoring plan G̃ = (G_1, G_2, …, G_m), where G_i (i = 1, 2, …, m) is the prefixed number of groups without item failure to be removed at the i-th failure, such that n = m + G_1 + G_2 + ⋯ + G_m. In other words, the test terminates when the m-th failure is observed. When the first failure occurs at time X_1:m:n:k, G_1 groups without item failure and the group in which the first failure is observed are removed from the experiment. When the second failure occurs at time X_2:m:n:k, G_2 groups without item failure and the group containing the second failure are removed from the experiment, and so on. Finally, when the m-th failure occurs at time X_m:m:n:k, the remaining G_m groups without item failure and the group containing the m-th failure are removed from the experiment. Consequently, the observed failure times, X_1:m:n:k^(G̃) < X_2:m:n:k^(G̃) < ⋯ < X_m:m:n:k^(G̃), are called progressively first-failure censored order statistics with the progressive censoring plan G̃ = (G_1, G_2, …, G_m). Figure 1 represents the schematic diagram of the PFFC scheme. Note that the PFFC scheme bears the following special cases: (i) it reduces to the complete sample case when k = 1, n = m, and G_i = 0, i = 1, 2, …, m; (ii) it reduces to the conventional Type-II censoring plan when k = 1, G_i = 0, i = 1, 2, …, m − 1, and G_m = n − m; (iii) it becomes a progressive Type-II censoring plan when k = 1; and (iv) it reduces to a first-failure censoring plan when m = n and G_i = 0, i = 1, 2, …, m.
Moreover, a practical setting for the PFFC scheme is a parallel–series system in which n homogeneous systems (e.g., batches of electric bulbs) are connected in parallel, with each batch consisting of k bulbs or electronic components connected in series. For the testing procedure and the collection of lifetimes, the same mechanism presented in Figure 1 can be used.
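To make the sampling mechanism concrete, the following sketch (our own illustration; the function and variable names are ours) generates a PFFC sample from MWD(λ) by combining the standard uniform-variates algorithm for progressive Type-II samples with the group-minimum relation F_min(x) = 1 − [1 − F(x)]^k; SciPy's `gammaincinv` inverts the cdf F(x; λ) = Γ(x²/λ, 3/2):

```python
import numpy as np
from scipy.special import gammaincinv  # inverse of the regularized lower incomplete gamma

def mwd_quantile(u, lam):
    # invert F(x; lam) = Gamma(x^2/lam, 3/2): x = sqrt(lam * P^{-1}(3/2, u))
    return np.sqrt(lam * gammaincinv(1.5, u))

def pffc_sample(lam, k, G, rng=None):
    """PFFC sample of size m = len(G) from MWD(lam): n = m + sum(G) groups of k items."""
    rng = np.random.default_rng(rng)
    G = np.asarray(G)
    m = len(G)
    # progressive Type-II uniform order statistics via the uniform-variates algorithm
    exponents = np.arange(1, m + 1) + np.cumsum(G[::-1])   # i + G_m + ... + G_{m-i+1}
    V = rng.uniform(size=m) ** (1.0 / exponents)
    U = 1.0 - np.cumprod(V[::-1])
    # a group's first failure is the minimum of k lifetimes: F_min = 1 - (1 - F)^k
    return mwd_quantile(1.0 - (1.0 - U) ** (1.0 / k), lam)
```

For example, `pffc_sample(2.0, 3, [1, 0, 2, 0, 1])` returns five increasing failure times from a test with n = 9 groups of k = 3 items.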
Let x̃ = (x_1:m:n:k^(G̃), x_2:m:n:k^(G̃), …, x_m:m:n:k^(G̃)) be a PFFC sample drawn from a continuous population with cdf F(x) and pdf f(x). For notational convenience, we suppress the notations G̃ and (m:n:k) and denote the observed data by x̃ = (x_1, x_2, …, x_m). The likelihood function can be expressed as [23]
L(x̃) = A k^m ∏_{i=1}^m f(x_i) [1 − F(x_i)]^{k(G_i+1)−1},  0 < x_1 < x_2 < ⋯ < x_m < ∞,    (5)
where A = n(n − G_1 − 1)(n − G_1 − G_2 − 2) ⋯ (n − G_1 − G_2 − ⋯ − G_{m−1} − m + 1).
To draw inferences from the available data, we consider a statistical probability model involving some parameter(s), and we first need to estimate the model parameter(s). The two popular estimation approaches in the literature are the classical (frequentist) and the Bayesian approaches. Bayesian estimation can be used when prior information is available. In this study, we consider both estimation approaches. The main objective of this study is to estimate the associated parameter and Shannon's entropy of the MWD based on progressively first-failure censored data. Through this study, we aim to provide practical value in applied statistics, especially in the areas of reliability and lifetime data analysis.
The rest of this paper is organized as follows. Section 2 develops the frequentist estimation techniques, including the maximum likelihood estimation and asymptotic and bootstrap confidence intervals. Section 3 is devoted to Bayesian estimation techniques using the Tierney-Kadane (T-K) approximation and Markov Chain Monte Carlo (MCMC) methods. In Section 4, a Monte Carlo simulation study is used to evaluate the performance of the estimation procedures developed in this manuscript. A numerical example is provided in Section 5 to illustrate the methodologies developed in this manuscript. Finally, in Section 6, some concluding remarks are presented.

2. Frequentist Estimation Approach

In this section, we develop the expectation-maximization (EM) algorithm to obtain the maximum likelihood estimates (MLEs) of the parameter λ and Shannon's entropy H(λ). The EM algorithm is one of the most popular iterative methods for censored data; for more details, one may refer to Dempster et al. [30], McLachlan and Krishnan [31], and Casella and Berger [32]. Additionally, we consider the asymptotic and bootstrap confidence intervals for λ and H(λ).

2.1. Maximum Likelihood Estimation

Based on the observed sample x̃ = (x_1, x_2, …, x_m) under the PFFC scheme with censoring plan G̃, and assuming that the lifetimes of the items follow MWD(λ), the likelihood function in Equation (5) becomes
L(x̃; λ) = A k^m (4/√π)^m λ^(−3m/2) exp(−(1/λ) ∑_{j=1}^m x_j²) ∏_{j=1}^m x_j² [1 − Γ(x_j²/λ, 3/2)]^{k(G_j+1)−1}.    (6)
The MLE of the parameter λ can be obtained by maximizing the likelihood function in Equation (6). Instead of directly maximizing the likelihood function, we take advantage of the closed-form MLE of λ based on a complete sample by considering the EM algorithm to obtain the MLE of λ .
Suppose the observed and censored data are X̃ = x̃ = (x_1, x_2, …, x_m) and Z̃ = (Z_11, …, Z_1[k(G_1+1)−1], …, Z_m1, …, Z_m[k(G_m+1)−1]), respectively; the combined complete data set is then Ỹ = (X̃, Z̃). If a complete sample Ỹ = (y_1, y_2, …, y_nk) from the MWD were available, the MLE of λ, denoted by λ̂_C, could be expressed in closed form as
λ̂_C = (2/(3nk)) ∑_{i=1}^{nk} y_i².    (7)
Based on the complete sample Ỹ = (X̃, Z̃), the log-likelihood function can be expressed as
ln L_c(Ỹ; λ) = constant − (3nk/2) ln λ − (1/λ) ∑_{i=1}^m x_i² − (1/λ) ∑_{i=1}^m ∑_{j=1}^{k(G_i+1)−1} Z_ij².    (8)
The EM algorithm treats the censored data as missing observations and replaces those missing values with their corresponding expected values. In the expectation step (E-step) of the EM algorithm, we replace the functions of the unobserved (censored) data Z ˜ with the corresponding expected values given the observed data x ˜ . Hence, the pseudo-complete log-likelihood function can be obtained as
ln L_c(Ỹ; λ) = constant − (3nk/2) ln λ − (1/λ) ∑_{i=1}^m x_i² − (1/λ) ∑_{i=1}^m ∑_{j=1}^{k(G_i+1)−1} E[Z_ij² | Z_ij > x_i].    (9)
For given X_i = x_i, the conditional distribution of Z_ij follows a truncated MWD with left truncation at x_i,
f(z_ij | x_i, λ) = f(z_ij; λ) / [1 − F(x_i; λ)];  z_ij > x_i,  i = 1, 2, …, m;  j = 1, 2, …, k(G_i+1)−1.
Then, the conditional expectations involved in Equation (9) can be computed as
A(c, λ) = E[Z_ij² | Z_ij > c] = (3λ/2) [1 − Γ(c²/λ, 5/2)] / [1 − F(c; λ)].
In the maximization step (M-step) of the EM algorithm, if the estimate of λ at the r-th stage of the EM algorithm is λ̂^(r), then the updated estimate at the (r+1)-th stage can be obtained as
λ̂^(r+1) = (2/(3nk)) [∑_{i=1}^m x_i² + ∑_{i=1}^m ∑_{j=1}^{k(G_i+1)−1} A(x_i, λ̂^(r))].
With an initial estimate λ ( 0 ) of parameter λ , the E-step and M-step iterate until convergence occurs. Here, we consider convergence as | λ ( r + 1 ) λ ( r ) | < ϵ , where ϵ is a small value. Once the MLE λ ^ of parameter λ is computed, using the invariance property of MLE, the MLE of Shannon’s entropy H ( λ ) is obtained as
Ĥ(λ̂) = (1/2) ln λ̂ + γ + (1/2) ln π − 1/2.
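A minimal sketch of the E- and M-steps above (our own illustration, not the authors' code): `trunc_second_moment` implements the conditional expectation A(c, λ), with SciPy's `gammainc` playing the role of the incomplete gamma ratio Γ(·, ·).

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def trunc_second_moment(c, lam):
    # A(c, lam) = E[Z^2 | Z > c] under MWD(lam):
    # (3*lam/2) * (1 - P(5/2, c^2/lam)) / (1 - P(3/2, c^2/lam))
    u = c**2 / lam
    return 1.5 * lam * (1.0 - gammainc(2.5, u)) / (1.0 - gammainc(1.5, u))

def em_mle(x, G, k, tol=1e-8, max_iter=1000):
    """EM estimate of lambda from a PFFC sample x with plan G and group size k."""
    x, G = np.asarray(x, dtype=float), np.asarray(G)
    n = len(x) + int(G.sum())
    w = k * (G + 1) - 1                              # censored items per observed failure
    lam = (2.0 / (3.0 * n * k)) * np.sum(x**2)       # complete-data form as initial guess
    for _ in range(max_iter):
        # E-step: expected squared lifetimes of the censored items
        miss = sum(wi * trunc_second_moment(xi, lam) for xi, wi in zip(x, w) if wi > 0)
        # M-step: closed-form complete-data update
        lam_new = (2.0 / (3.0 * n * k)) * (np.sum(x**2) + miss)
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam
```

In the complete-sample case (k = 1, G_i = 0, n = m) the E-step contributes nothing and the routine reduces to the closed-form MLE λ̂_C in a single iteration.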

2.2. Asymptotic Confidence Interval

In this subsection, we construct confidence intervals for the parameter λ and Shannon's entropy H(λ) based on the asymptotic theory of the MLE. The standard error of the MLE can be obtained by direct computation or by numerical differentiation, evaluating the negative of the second derivative of the observed log-likelihood function ([33], Section 4.4). However, the observed information can be difficult to compute because of the incomplete data. Therefore, the missing information principle, which computes the observed Fisher information as the difference between the information in the pseudo-complete sample and the information in the missing observations, can be used to obtain the asymptotic variance of the MLE of λ. The missing information principle presented by Louis [34] (see also [33]) can be described as follows:
Observed Information = Complete Information − Missing Information.    (12)
Let I X ( λ ) be the Fisher information based on the observed data, I Y ( λ ) be the Fisher information based on a complete sample, and I Y | X ( λ ) be the missing information, then Equation (12) can be expressed as
I_X(λ) = I_Y(λ) − I_{Y|X}(λ).    (13)
For the MWD, the Fisher information based on complete data is given by
I_Y(λ) = E[−∂² ln L_c(Ỹ; λ)/∂λ²] = 3nk/(2λ²).    (14)
For a censored observation in a group removed at the i-th failure time x_i, the Fisher information can be obtained as
I_{Y|X}^(i)(λ) = E_{Z_ij|x_i}[−∂² ln f(Z_ij | x_i, λ)/∂λ²] = −3/(2λ²) + (2/λ³) A(x_i, λ) + ψ′(λ),
where
ψ(λ) = ∂ ln[1 − Γ(x_i²/λ, 3/2)]/∂λ = 3/(2λ) · [Γ(x_i²/λ, 3/2) − Γ(x_i²/λ, 5/2)] / [1 − Γ(x_i²/λ, 3/2)],
ψ′(λ) = ∂ψ(λ)/∂λ = −3/(2λ²) · [Q₁ + Q₂/(1 − Γ(x_i²/λ, 3/2))] / [1 − Γ(x_i²/λ, 3/2)],
Q₁ = (5/2)[Γ(x_i²/λ, 7/2) − 2Γ(x_i²/λ, 5/2) + Γ(x_i²/λ, 3/2)],
Q₂ = (3/2)[Γ(x_i²/λ, 5/2) − Γ(x_i²/λ, 3/2)]²,
and A(x_i, λ) is the conditional expectation defined above.
Consequently, the missing information can be obtained as
I_{Y|X}(λ) = ∑_{i=1}^m [k(G_i+1) − 1] I_{Y|X}^(i)(λ).    (15)
From Equation (12), the Fisher information of the observed data I_X(λ) can be obtained using Equations (14) and (15). Then, under suitable regularity conditions (e.g., the true value of λ is interior to the parameter space, and the derivatives of the log-likelihood function with respect to λ exist up to third order and are bounded), the asymptotic variance of the MLE of λ can be obtained as Var̂(λ̂) = I_X^(−1)(λ)|_{λ=λ̂} = [I_Y(λ) − I_{Y|X}(λ)]^(−1)|_{λ=λ̂}. Therefore, an asymptotic 100(1−α)% confidence interval of λ is λ̂ ± z_{α/2} √(Var̂(λ̂)), where z_{α/2} is the upper (α/2)-th quantile of the standard normal distribution N(0, 1).
To construct an asymptotic confidence interval of Shannon's entropy H(λ), we first use the delta method to approximate the variance of the MLE of H(λ) [32]. Specifically, suppose λ̂ is the MLE of λ; the asymptotic variance of Ĥ = H(λ̂) using the delta method (see, for example, Krishnamoorthy and Lin [35]) is given by
Var(Ĥ) = b_C I_X^(−1)(λ) b_C,
where b_C = ∂H(λ)/∂λ = 1/(2λ). Hence, the approximate variance of Ĥ is
Var̂(Ĥ) = [b_C I_X^(−1)(λ) b_C]|_{λ=λ̂}.
Based on the asymptotic theory of the MLE, (Ĥ − H)/√(Var̂(Ĥ)) converges to N(0, 1) in distribution. Therefore, an asymptotic 100(1−α)% confidence interval of H is given by Ĥ ± z_{α/2} √(Var̂(Ĥ)).
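As an illustration (our own sketch, assuming λ̂ has already been obtained), the Wald intervals can be computed by approximating the observed information numerically from the observed-data log-likelihood in Equation (6), instead of the missing-information decomposition; the delta-method step uses ∂H/∂λ = 1/(2λ):

```python
import numpy as np
from scipy.special import gammainc
from scipy.stats import norm

def obs_loglik(lam, x, G, k):
    # observed-data log-likelihood from Equation (6), constants dropped
    x, G = np.asarray(x, dtype=float), np.asarray(G)
    u = x**2 / lam
    return (-1.5 * len(x) * np.log(lam) - np.sum(x**2) / lam
            + np.sum((k * (G + 1) - 1) * np.log(1.0 - gammainc(1.5, u))))

def asymptotic_cis(lam_hat, x, G, k, alpha=0.05):
    """Wald intervals for lambda and H(lambda); lam_hat should be the MLE."""
    h = 1e-5 * lam_hat
    # observed information = -(second derivative), by central differences
    d2 = (obs_loglik(lam_hat + h, x, G, k) - 2 * obs_loglik(lam_hat, x, G, k)
          + obs_loglik(lam_hat - h, x, G, k)) / h**2
    var_lam = 1.0 / (-d2)
    z = norm.ppf(1 - alpha / 2)
    ci_lam = (lam_hat - z * np.sqrt(var_lam), lam_hat + z * np.sqrt(var_lam))
    # delta method: Var(H) = Var(lam) / (2*lam)^2
    H_hat = 0.5 * np.log(lam_hat) + np.euler_gamma + 0.5 * np.log(np.pi) - 0.5
    se_H = np.sqrt(var_lam) / (2.0 * lam_hat)
    ci_H = (H_hat - z * se_H, H_hat + z * se_H)
    return ci_lam, ci_H
```

This numerical shortcut is a stand-in for the analytical information formulas above and agrees with them up to finite-difference error.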

2.3. Bootstrap Confidence Intervals

The bootstrap method is a well-known resampling technique used in statistics to estimate the sampling distribution of a statistic by repeatedly resampling with replacement from the observed data. The main objective of this method is to provide an empirical approximation of the sampling distribution of a statistic, which is useful for making inferences and constructing confidence intervals. The concept of the bootstrap technique was first introduced by Efron [36], and since then, numerous applications have been discussed. For more details, one may refer to Efron and Tibshirani [37]. Here, we construct confidence intervals for the parameter λ and Shannon's entropy using two parametric bootstrap methods: the percentile bootstrap (boot-p) method [38] and the bootstrap-t (boot-t) method [39]. Following Efron and Tibshirani [37], the procedures to obtain the two parametric bootstrap confidence intervals are described as follows.

2.3.1. Percentile Bootstrap (Boot-P) Confidence Intervals

Step A1:
Based on the observed PFFC sample X ˜ = x 1 , x 2 , , x m with effective sample size m from n groups each having k items and progressive censoring plan G ˜ , compute the MLEs λ ^ and H ^ of parameter λ and entropy H ( λ ) , respectively.
Step A2:
Generate an independent bootstrap PFFC sample X ˜ ( b ) = x 1 ( b ) , x 2 ( b ) , , x m ( b ) from M W D ( λ ^ ) with effective sample size m from n groups each having k items and progressive censoring plan G ˜ .
Step A3:
Based on the bootstrap sample X ˜ ( b ) , compute the MLEs λ ^ ( b ) and H ^ ( b ) of parameter λ and entropy H ( λ ) , respectively.
Step A4:
Repeat Step A2 and Step A3 B times to obtain a set of bootstrap estimates as ( λ ^ ( b ) , H ^ ( b ) ); b = 1 , 2 , , B .
Step A5:
Order the bootstrap estimates (λ̂^(1), λ̂^(2), …, λ̂^(B)) and (Ĥ^(1), Ĥ^(2), …, Ĥ^(B)) in ascending order as (λ̂^[1] < λ̂^[2] < ⋯ < λ̂^[B]) and (Ĥ^[1] < Ĥ^[2] < ⋯ < Ĥ^[B]), respectively. The 100(1−α)% percentile bootstrap confidence intervals for λ and H, respectively, are given by
(λ̂^[(α/2)B], λ̂^[(1−α/2)B])  and  (Ĥ^[(α/2)B], Ĥ^[(1−α/2)B]),
where [a] denotes the integer part of a.
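In code, Step A5 amounts to sorting the bootstrap estimates and reading off two order statistics (a sketch; the exact indexing convention is ours):

```python
import numpy as np

def percentile_ci(boot_estimates, alpha=0.05):
    # order the B bootstrap estimates and pick the [(alpha/2)B]-th
    # and [(1 - alpha/2)B]-th ordered values
    b = np.sort(np.asarray(boot_estimates))
    B = len(b)
    lo = b[int(np.floor((alpha / 2) * B))]
    hi = b[min(int(np.floor((1 - alpha / 2) * B)), B - 1)]
    return lo, hi
```

The same helper serves for both λ̂ and Ĥ, applied to their respective bootstrap samples.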

2.3.2. Bootstrap-t (Boot-t) Confidence Intervals

Step B1:
Based on the observed PFFC sample X ˜ = x 1 , x 2 , , x m with effective sample size m from n groups each having k items and progressive censoring plan G ˜ , compute the MLEs λ ^ and H ^ of parameter λ and entropy H ( λ ) , respectively.
Step B2:
Generate an independent bootstrap PFFC sample X ˜ ( b ) = x 1 ( b ) , x 2 ( b ) , , x m ( b ) from M W D ( λ ^ ) with effective sample size m from n groups each having k items and progressive censoring plan G ˜ .
Step B3:
Compute the bootstrap-t statistics for λ as
τ_λ^(b) = (λ̂^(b) − λ̂) / √(Var̂(λ̂^(b))),
and the bootstrap-t statistic for H as
τ_H^(b) = (Ĥ^(b) − Ĥ) / √(Var̂(Ĥ^(b))).
Step B4:
Repeat Step B2 and Step B3 B times to obtain a set of bootstrap statistics as ( τ λ ( b ) , τ H ( b ) ); b = 1 , 2 , , B .
Step B5:
Order the bootstrap statistics ( τ λ ( 1 ) , τ λ ( 2 ) , , τ λ ( B ) ) and ( τ H ( 1 ) , τ H ( 2 ) , , τ H ( B ) ) in ascending order as ( τ λ [ 1 ] < τ λ [ 2 ] < < τ λ [ B ] ) and ( τ H [ 1 ] < τ H [ 2 ] < < τ H [ B ] ) , respectively.
Step B6:
The 100 ( 1 α ) % bootstrap-t confidence intervals for λ and H, respectively, are given by
(λ̂ − τ_λ^[(1−α/2)B] √(Var̂(λ̂)),  λ̂ + τ_λ^[(1−α/2)B] √(Var̂(λ̂)))
and
(Ĥ − τ_H^[(1−α/2)B] √(Var̂(Ĥ)),  Ĥ + τ_H^[(1−α/2)B] √(Var̂(Ĥ))).
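Similarly, Step B6 can be sketched as a small helper that takes the ordered bootstrap-t statistics, the original estimate, and its standard error (our indexing convention):

```python
import numpy as np

def boot_t_ci(tau, est, se, alpha=0.05):
    # pick the [(1 - alpha/2)B]-th ordered bootstrap-t statistic (Step B6)
    t = np.sort(np.asarray(tau))[int(np.floor((1.0 - alpha / 2) * len(tau)))]
    return est - t * se, est + t * se
```

Here `est` is λ̂ (or Ĥ) and `se` is the corresponding estimated standard error √(Var̂).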

3. Bayesian Estimation Approach

In this section, we derive Bayes estimators under the linear exponential (LINEX) loss function and construct the highest posterior density (HPD) credible intervals for the parameter λ and Shannon's entropy H(λ). For details of Bayesian statistical inference and data analysis methods, one may refer to the books by Box and Tiao [40] and Gelman et al. [41]. The Bayesian approach to reliability analysis allows previous knowledge of lifetime parameters, technical knowledge of failure mechanisms, and experimental data to be incorporated into the inferential procedure. As pointed out by Tian et al. [42], employing the Bayesian approach in reliability analysis has the advantages of making statistical inferences using information from prior experience with the failure mechanism or physics-of-failure and avoiding inferences based on possibly inaccurate large-sample theory in the frequentist approach. For more details about Bayesian inference methods and specifying prior distributions in reliability applications, one may refer to Tian et al. [42]. As a result, Bayesian techniques are frequently applied to small samples, which is highly advantageous when life-testing experiments are costly. For Bayesian estimation, the inverted gamma distribution is commonly used as the natural conjugate prior density for the parameter λ of the MWD (see, for example, Bekker and Roux [9] and Chaudhary et al. [43]). Following Bekker and Roux [9] and Chaudhary et al. [43], we consider the prior distribution of the unknown parameter λ to be the inverted gamma distribution with pdf
g(λ) ∝ (1/λ^(a+1)) exp(−b/λ);  λ > 0,  a > 0,  b > 0,    (16)
where a and b are hyper-parameters. Thus, combining the prior in Equation (16) with the likelihood function in Equation (6), the posterior distribution of λ can be expressed as
π(λ | x̃) = L(x̃ | λ) g(λ) / ∫₀^∞ L(x̃ | λ) g(λ) dλ = K λ^(−(3m/2 + a + 1)) exp(−(1/λ)(∑_{i=1}^m x_i² + b)) ∏_{i=1}^m [1 − Γ(x_i²/λ, 3/2)]^{k(G_i+1)−1},    (17)
where K^(−1) is the normalizing constant given by
K^(−1) = ∫₀^∞ λ^(−(3m/2 + a + 1)) exp(−(1/λ)(∑_{i=1}^m x_i² + b)) ∏_{i=1}^m [1 − Γ(x_i²/λ, 3/2)]^{k(G_i+1)−1} dλ.
Here, we consider the LINEX loss function proposed by [44], which is one of the most commonly used asymmetric loss functions. The LINEX loss is defined as
L(Δ) = exp(cΔ) − cΔ − 1;  c ≠ 0,  Δ = λ̃ − λ,    (18)
where the loss function’s scaling parameter is c and λ ˜ is an estimate of λ . The LINEX loss function provides greater weight to overestimation or underestimation, depending on whether the value of c is positive or negative, and for small values of c 0 , it is virtually identical to the squared error loss function. This loss function is appropriate when overestimation is more expensive than underestimating. Thus, under the LINEX loss function, the Bayes estimator of any function of the parameter λ , say ϕ ( λ ) , is given by
ϕ̂ = −(1/c) ln E[e^(−c ϕ(λ)) | x̃] = −(1/c) ln [∫₀^∞ e^(−c ϕ(λ)) π(λ | x̃) dλ / ∫₀^∞ π(λ | x̃) dλ].    (19)
From Equation (19), the Bayes estimator involves the ratio of two integrals, which has no closed-form solution. To evaluate the ratio of the two integrals in Equation (19), we suggest two approximation techniques: the T-K approximation and the MCMC method. The T-K approximation is one of the oldest deterministic approximation techniques, whereas the MCMC method is one of the most popular techniques based on posterior sampling. MCMC can be expensive to compute, especially for large sample sizes n, and many MCMC algorithms require a rough estimate of key posterior quantities, such as the posterior variance. Unlike MCMC methods, the error of the T-K approximation cannot be reduced by running the algorithm longer; however, deterministic approximation is typically very fast to compute and sufficiently reliable in many applied contexts. These considerations motivate us to develop both techniques in this study. The details of the two approximation techniques are presented in the following subsections.
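Because λ is scalar here, the ratio in Equation (19) can also be evaluated by direct numerical quadrature, which serves as a small-sample benchmark for the approximations that follow (a sketch; the function names are ours, and the data and hyper-parameter values in the usage example below are illustrative only):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammainc

def post_kernel(lam, x, G, k, a, b):
    # unnormalized posterior from Equation (17)
    u = x**2 / lam
    return (lam ** (-(1.5 * len(x) + a + 1)) * np.exp(-(np.sum(x**2) + b) / lam)
            * np.prod((1.0 - gammainc(1.5, u)) ** (k * (G + 1) - 1)))

def linex_bayes(x, G, k, a, b, c):
    # Equation (19) with phi(lam) = lam, evaluated by quadrature (requires c > 0 here,
    # since the posterior has a polynomial right tail)
    num, _ = quad(lambda l: np.exp(-c * l) * post_kernel(l, x, G, k, a, b), 0, np.inf)
    den, _ = quad(lambda l: post_kernel(l, x, G, k, a, b), 0, np.inf)
    return -np.log(num / den) / c
```

By Jensen's inequality, the LINEX estimate with c > 0 lies below the posterior mean and decreases as c grows, reflecting the heavier penalty on overestimation.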

3.1. Tierney-Kadane (T-K) Approximation Technique

According to the T-K approximation technique proposed by Tierney and Kadane [45], the approximation of the posterior mean of a function of the parameter, say ϕ(λ), is given by
E[ϕ(λ) | x̃] = ∫₀^∞ e^(n δ*(λ)) dλ / ∫₀^∞ e^(n δ(λ)) dλ ≈ (|Σ*|/|Σ|)^(1/2) exp{n[δ*(λ̂_{δ*}) − δ(λ̂_δ)]},
where δ(λ) = (1/n)[ℓ(λ) + ρ(λ)], δ*(λ) = δ(λ) + (1/n) ln ϕ(λ), ℓ(λ) is the log-likelihood function, ρ(λ) = ln g(λ), and |Σ*| and |Σ| are the determinants of the inverses of the negative Hessians of δ*(λ) and δ(λ) at λ̂_{δ*} and λ̂_δ, respectively. Here, λ̂_δ and λ̂_{δ*} maximize δ(λ) and δ*(λ), respectively. We observe that
δ(λ) = (1/n)[−(3m/2 + a + 1) ln λ − (1/λ)(∑_{i=1}^m x_i² + b) + 2 ∑_{i=1}^m ln x_i + ∑_{i=1}^m [k(G_i+1) − 1] ln(1 − Γ(x_i²/λ, 3/2))].
To determine the value of λ̂_δ, we solve the following non-linear equation:
∂δ(λ)/∂λ = (1/n)[−(3m/2 + a + 1)/λ + (1/λ²)(∑_{i=1}^m x_i² + b) + ∑_{i=1}^m [k(G_i+1) − 1] ψ(λ)] = 0,
where
ψ(λ) = ∂ ln[1 − Γ(x_i²/λ, 3/2)]/∂λ = 3/(2λ) · [Γ(x_i²/λ, 3/2) − Γ(x_i²/λ, 5/2)] / [1 − Γ(x_i²/λ, 3/2)].
Now, |Σ| can be obtained from
Σ^(−1) = −∂²δ(λ)/∂λ² = (1/n)[(2/λ³)(∑_{i=1}^m x_i² + b) − (3m/2 + a + 1)/λ² − ∑_{i=1}^m [k(G_i+1) − 1] ψ′(λ)], evaluated at λ = λ̂_δ,
where
ψ′(λ) = ∂ψ(λ)/∂λ = −3/(2λ²) · [Q₁ + Q₂/(1 − Γ(x_i²/λ, 3/2))] / [1 − Γ(x_i²/λ, 3/2)],
Q₁ = (5/2)[Γ(x_i²/λ, 7/2) − 2Γ(x_i²/λ, 5/2) + Γ(x_i²/λ, 3/2)],  Q₂ = (3/2)[Γ(x_i²/λ, 5/2) − Γ(x_i²/λ, 3/2)]².
To calculate the Bayes estimator of λ under the LINEX loss function, we take ϕ(λ) = e^(−cλ); consequently, the function δ*(λ) becomes
δ*(λ) = δ(λ) − cλ/n,
and λ̂_{δ*} is then computed as the solution of the following non-linear equation:
∂δ*(λ)/∂λ = ∂δ(λ)/∂λ − c/n = 0, and |Σ*| is obtained from (Σ*)^(−1) = −∂²δ*(λ)/∂λ²|_{λ=λ̂_{δ*}}.
Thus, the approximate Bayes estimator of λ under the LINEX loss function is given by
λ̂_TK = −(1/c) ln[(|Σ*|/|Σ|)^(1/2) exp{n[δ*(λ̂_{δ*}) − δ(λ̂_δ)]}].
Similarly, taking ϕ(λ) = e^(−c H(λ)), the Bayes estimator of Shannon's entropy H(λ) under the LINEX loss function is given by
Ĥ_TK = −(1/c) ln[(|Σ_H*|/|Σ|)^(1/2) exp{n[δ_H*(λ̂_{δ_H*}) − δ(λ̂_δ)]}],
where δ_H*(λ) = δ(λ) − c H(λ)/n and λ̂_{δ_H*} maximizes δ_H*(λ).
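A compact numerical sketch of the T-K recipe for λ̂_TK (our own illustration): the maximizers are found with `scipy.optimize` rather than by solving the score equations analytically, the Hessians are approximated by finite differences, and the search bounds are our assumption. Since n·δ(λ) equals the log posterior kernel, the factors of n cancel in the final ratio.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammainc

def log_kernel(lam, x, G, k, a, b):
    # log of the unnormalized posterior in Equation (17); equals n * delta(lam) + const
    u = x**2 / lam
    return (-(1.5 * len(x) + a + 1) * np.log(lam) - (np.sum(x**2) + b) / lam
            + np.sum((k * (G + 1) - 1) * np.log1p(-gammainc(1.5, u))))

def tk_linex_lambda(x, G, k, a, b, c, bounds=(0.1, 50.0)):
    """Tierney-Kadane approximation of the LINEX Bayes estimate of lambda (c > 0)."""
    ell = lambda l: log_kernel(l, x, G, k, a, b)      # n * delta(lam)
    ells = lambda l: ell(l) - c * l                   # n * delta*(lam) for phi = exp(-c*lam)
    lhat = minimize_scalar(lambda l: -ell(l), bounds=bounds, method="bounded").x
    lhats = minimize_scalar(lambda l: -ells(l), bounds=bounds, method="bounded").x
    h = 1e-4
    d2 = (ell(lhat + h) - 2 * ell(lhat) + ell(lhat - h)) / h**2
    d2s = (ells(lhats + h) - 2 * ells(lhats) + ells(lhats - h)) / h**2
    ratio = np.sqrt(d2 / d2s)                         # sqrt(|Sigma*| / |Sigma|)
    return -np.log(ratio * np.exp(ells(lhats) - ell(lhat))) / c
```

In small examples this deterministic approximation tracks the quadrature-based value of Equation (19) closely while requiring only two one-dimensional maximizations.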

3.2. Markov Chain Monte Carlo (MCMC) Techniques

In this subsection, we use MCMC techniques to obtain the Bayes estimates of the parameter λ and Shannon's entropy H(λ) under the LINEX loss function. The Metropolis-Hastings (M-H) algorithm was initially established by Metropolis et al. [46], subsequently extended by Hastings [47], and has become one of the most commonly used MCMC techniques. Candidate points are generated from a normal proposal distribution to sample from the posterior distribution of λ in Equation (17) based on the observed data x̃. The following steps are used to obtain the MCMC sequences:
Step C1.
For the parameter λ, set an initial guess value λ^(0).
Step C2.
From proposal density η ( λ ( j ) | λ ( j 1 ) ) , generate a candidate point λ c ( j ) .
Step C3.
Generate u from uniform distribution in ( 0 , 1 ) .
Step C4.
Compute
$$\alpha\left(\lambda_c^{(j)}\,\middle|\,\lambda^{(j-1)}\right)=\min\left\{\frac{\pi\left(\lambda_c^{(j)}\,\middle|\,\tilde{x}\right)\,\eta\left(\lambda^{(j-1)}\,\middle|\,\lambda_c^{(j)}\right)}{\pi\left(\lambda^{(j-1)}\,\middle|\,\tilde{x}\right)\,\eta\left(\lambda_c^{(j)}\,\middle|\,\lambda^{(j-1)}\right)},\;1\right\}.$$
Step C5.
If $u \le \alpha$, accept the candidate and set $\lambda^{(j)} = \lambda_c^{(j)}$; otherwise, set $\lambda^{(j)} = \lambda^{(j-1)}$.
Step C6.
Compute H ( j ) = H ( λ ( j ) ) from Equation (1).
Step C7.
Repeat Steps C2–C6 $M$ times to obtain the sequence $(\lambda_1, \lambda_2, \ldots, \lambda_M)$ of the parameter $\lambda$ and the sequence $(H_1, H_2, \ldots, H_M)$ of Shannon's entropy $H$.
To obtain samples from the stationary distribution of the Markov chain, we consider a burn-in period of size $M_0$ by discarding the first $M_0$ values of the MCMC sequences. Thus, the Bayes estimators of $\lambda$ and $H(\lambda)$ under the LINEX loss function, respectively, are given by
$$\hat{\lambda}_{MH}=-\frac{1}{c}\ln\left[\frac{1}{M-M_0}\sum_{j=M_0+1}^{M}e^{-c\lambda_j}\right], \quad \text{and} \quad \hat{H}_{MH}=-\frac{1}{c}\ln\left[\frac{1}{M-M_0}\sum_{j=M_0+1}^{M}e^{-cH(\lambda_j)}\right].$$
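Steps C1–C7 and the LINEX estimators above can be sketched as follows. This is an illustrative sketch, not the paper's R code: the target is the complete-sample ($k = 1$, no removals) Maxwell posterior kernel with toy sufficient statistics, the proposal is a symmetric random walk (so the $\eta$ terms in $\alpha$ cancel), and the closed-form Maxwell entropy $H(\lambda) = \tfrac{1}{2}\ln(\pi\lambda) + \gamma - \tfrac{1}{2}$ is assumed for Equation (1), consistent with the values $H(0.75)=0.5057$ and $H(1.5)=0.8523$ quoted in Section 4.

```python
import math, random

def maxwell_entropy(lam):
    # Assumed closed form of Shannon's entropy of the MWD:
    # H(lam) = 0.5*ln(pi*lam) + gamma - 1/2, which reproduces the
    # values H(0.75) = 0.5057 and H(1.5) = 0.8523 quoted in the text.
    euler_gamma = 0.5772156649015329
    return 0.5 * math.log(math.pi * lam) + euler_gamma - 0.5

def mh_linex(log_post, c, start=1.0, prop_sd=0.2, M=10000, M0=2000, seed=1):
    """Steps C1-C7: random-walk M-H with a normal proposal, followed by
    the LINEX Bayes estimates of lambda and H(lambda) after burn-in."""
    rng = random.Random(seed)
    lam, lp, chain = start, log_post(start), []          # Step C1
    for _ in range(M):
        cand = rng.gauss(lam, prop_sd)                   # Step C2
        if cand > 0:  # candidates outside (0, inf) are always rejected
            lp_cand = log_post(cand)
            # Steps C3-C5 (symmetric proposal: eta terms cancel in alpha)
            if math.log(rng.random()) <= lp_cand - lp:
                lam, lp = cand, lp_cand
        chain.append(lam)                                # Steps C6-C7
    kept = chain[M0:]
    lam_hat = -math.log(sum(math.exp(-c * l) for l in kept) / len(kept)) / c
    h_hat = -math.log(
        sum(math.exp(-c * maxwell_entropy(l)) for l in kept) / len(kept)) / c
    return lam_hat, h_hat
```

With the toy kernel $-(3m/2+a+1)\ln\lambda - (\sum x_i^2 + b)/\lambda$ for $m = 50$, $\sum x_i^2 = 75$, $(a,b) = (3,2)$, the posterior is inverse gamma with mean exactly 1, and the LINEX estimates land close to 1 and $H(1) \approx 0.65$.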
Based on the MCMC samples, we can obtain the HPD credible intervals for the parameter $\lambda$ and Shannon's entropy $H(\lambda)$. Suppose $\lambda_{(1)} < \lambda_{(2)} < \cdots < \lambda_{(M^{*})}$ and $H_{(1)} < H_{(2)} < \cdots < H_{(M^{*})}$ denote the ordered values of $(\lambda_1, \lambda_2, \ldots, \lambda_{M^{*}})$ and $(H_1, H_2, \ldots, H_{M^{*}})$, respectively, after the burn-in period, where $M^{*} = M - M_0$. Then, following Chen and Shao [48], the $100(1-\alpha)\%$ HPD credible interval for $\lambda$ can be obtained as $\left(\lambda_{(j)}, \lambda_{(j+[(1-\alpha)M^{*}])}\right)$, where $j$ is chosen such that
$$\lambda_{(j+[(1-\alpha)M^{*}])}-\lambda_{(j)}=\min_{1\le i\le \alpha M^{*}}\left(\lambda_{(i+[(1-\alpha)M^{*}])}-\lambda_{(i)}\right).$$
Similarly, the $100(1-\alpha)\%$ HPD credible interval for $H(\lambda)$ is given by $\left(H_{(j)}, H_{(j+[(1-\alpha)M^{*}])}\right)$, where $j$ is chosen such that
$$H_{(j+[(1-\alpha)M^{*}])}-H_{(j)}=\min_{1\le i\le \alpha M^{*}}\left(H_{(i+[(1-\alpha)M^{*}])}-H_{(i)}\right).$$
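The search described by the two display equations above reduces to a few lines of code. The following is a generic sketch of the Chen–Shao idea, not the authors' implementation: sort the retained draws and take the shortest window covering a fraction $1-\alpha$ of them.

```python
def hpd_interval(samples, alpha=0.05):
    """Chen-Shao HPD interval: the shortest interval
    (x_(j), x_(j + [(1-alpha)*M*])) over the ordered MCMC draws."""
    x = sorted(samples)
    m_star = len(x)
    w = int((1 - alpha) * m_star)  # window width [(1-alpha)*M*]
    # choose j minimizing the window length x[j+w] - x[j]
    best = min(range(m_star - w), key=lambda j: x[j + w] - x[j])
    return x[best], x[best + w]
```

The same function applies unchanged to the entropy draws $H_j = H(\lambda_j)$, since the interval depends only on the ordered sample.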

4. Monte Carlo Simulation Study

In this section, a Monte Carlo simulation study is conducted to evaluate the performance of the proposed estimation procedures. The frequentist and Bayesian point estimation procedures for the parameter λ and Shannon’s entropy H ( λ ) are compared by means of the average estimates (AE) and mean squared errors (MSE). For interval estimation procedures, we compare the asymptotic confidence intervals (Asym), the percentile bootstrap confidence intervals (boot-p), the bootstrap-t confidence intervals (boot-t), and the HPD credible intervals in terms of their simulated average lengths (AL) and the simulated coverage probabilities (CP). For the bootstrap confidence intervals, the intervals are obtained based on B = 1000 bootstrap samples.
In the simulation study, we consider PFFC samples generated from the MWD with parameter λ = 0.75 and 1.5 (the corresponding entropy values are H(0.75) = 0.5057 and H(1.5) = 0.8523, respectively) under various combinations of the number of groups n, effective sample size m, group size k, and censoring scheme G̃. We consider group sizes k = 3 and 5, numbers of groups n = 20 and 50, and effective sample sizes m = 0.4n and 0.8n. Three different censoring schemes G̃ are considered for each combination of n and m:
(I)
[(k, n, m), (G₁ = n − m, G_j = 0 for j = 2, 3, …, m)]: (n − m) groups are removed from the experiment at the first failure only;
(II)
[(k, n, m), (G_j = 0 for j = 1, 2, …, m − 1, G_m = n − m)]: (n − m) groups are removed at the m-th failure;
(III)
[(k, n = m), (G_j = 0 for j = 1, 2, …, m)]: first-failure censored sample.
The censoring schemes [CS] used in the Monte Carlo simulation study are summarized in Table 1. Note that simplified notations are used to denote the censoring schemes; for example, (0^7) denotes (0, 0, 0, 0, 0, 0, 0) and (4^3) stands for (4, 4, 4).
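To make the scheme notation and the data-generation step concrete, the sketch below expands compact schemes such as (15^1, 0^9) and generates one PFFC sample by drawing a progressive Type-II sample from the first-failure distribution $1-(1-F)^k$ via the standard uniform-based algorithm of Balakrishnan and Aggarwala; the Maxwell cdf $F(x)=\Gamma(x^2/\lambda,\,3/2)$ (regularized lower incomplete gamma) is written via the error function and inverted by bisection. Function names are illustrative, not from the paper.

```python
import math, random

def expand(scheme):
    # Expand compact notation, e.g. [(4, 3), (0, 5)] -> [4, 4, 4, 0, 0, 0, 0, 0].
    out = []
    for value, count in scheme:
        out.extend([value] * count)
    return out

def maxwell_cdf(x, lam):
    # F(x) = erf(sqrt(z)) - (2/sqrt(pi)) * sqrt(z) * exp(-z), with z = x^2/lam,
    # which equals the regularized lower incomplete gamma P(3/2, x^2/lam).
    z = x * x / lam
    return math.erf(math.sqrt(z)) - 2.0 * math.sqrt(z / math.pi) * math.exp(-z)

def maxwell_quantile(p, lam, lo=0.0, hi=100.0):
    # Invert the cdf by bisection (no closed form is available).
    for _ in range(200):
        mid = (lo + hi) / 2
        if maxwell_cdf(mid, lam) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pffc_sample(lam, k, scheme, seed=1):
    """Generate a PFFC sample of size m = len(scheme): a progressive Type-II
    sample from 1 - (1 - F)^k using the uniform-based algorithm."""
    rng = random.Random(seed)
    m = len(scheme)
    v = []
    for i in range(1, m + 1):
        # exponent is i plus the tail sum G_{m-i+1} + ... + G_m
        e = i + sum(scheme[m - i:])
        v.append(rng.random() ** (1.0 / e))
    xs = []
    for i in range(1, m + 1):
        u = 1.0 - math.prod(v[m - i:m])          # ordered progressive uniforms
        p = 1.0 - (1.0 - u) ** (1.0 / k)          # undo the first-failure transform
        xs.append(maxwell_quantile(p, lam))
    return xs
```

For example, `pffc_sample(1.5, k=4, scheme=expand([(15, 1), (0, 9)]))` returns an ordered sample of m = 10 first-failure times under scheme (15^1, 0^9).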
For the Bayesian estimation approach, the Bayes estimates of Shannon's entropy are computed with an informative inverted gamma prior under the LINEX loss function. The hyper-parameters (a, b) are selected for the Bayesian computations of the parameter λ and Shannon's entropy in such a manner that the prior mean is precisely identical to the true value of the parameter, i.e., a/b = λ. Specifically, we consider (a, b) = (3, 4) and (3, 2) for λ = 0.75 and 1.5, respectively. When computing the Bayes estimators under the LINEX loss function, we consider the loss function parameter c = −0.5 and 0.5. We use M = 10,000 iterations with a burn-in period of M₀ = 2000 for the M-H algorithm. The simulation results are based on 1000 repetitions. All computations are performed using the statistical software R (https://www.r-project.org/) [49].
The simulated results for point estimation are presented in Table 2 and Table 3, and the simulation results for interval estimation are presented in Table 4 and Table 5. For point estimation, from Table 2 and Table 3, we observe that the MLEs and Bayes estimates of Shannon's entropy perform well with small MSEs. The simulated MSEs decrease as n or m increases. The Bayes estimates perform better than the MLEs in terms of MSEs when the prior information matches the true value of λ. Between the two approaches for obtaining the Bayes estimates, the estimates obtained by the M-H algorithm outperform those obtained by the T-K approximation in terms of MSEs.
For interval estimation, from Table 4 and Table 5, it can be seen that the simulated average lengths of the 95% asymptotic, percentile bootstrap, and bootstrap-t confidence intervals and the Bayesian HPD credible intervals decrease as the number of failures m increases. All the interval estimation procedures provide reasonable simulated coverage probabilities (CP) close to the nominal level of 95%. The HPD credible intervals have smaller simulated average lengths than the frequentist confidence intervals.

5. Practical Data Analysis

To demonstrate the effectiveness of the MWD in modeling lifetime data and illustrate the methodologies developed in the paper, a practical data analysis of a real data set is conducted. We consider the tensile strength (in GPa) of 100 carbon fibers. This data set was originally reported by Nichols and Padgett [50] and further studied by Mohammed et al. [25] and Xie and Gui [51]. The data set is presented in Table 6.
First, we use the scaled total time on test (TTT) transform to understand the behavior of the failure rate function of the data set. The scaled TTT transform is given by
$$\psi(r/n)=\frac{\sum_{i=1}^{r}t_{(i)}+(n-r)\,t_{(r)}}{\sum_{i=1}^{n}t_{(i)}},\qquad r=1,2,\ldots,n,$$
where $t_{(i)}$, $i = 1, 2, \ldots, n$, denotes the $i$-th order statistic of the sample. If the plot of $(r/n, \psi(r/n))$ is convex (concave), the failure rate function has a decreasing (increasing) shape. For more details about the scaled TTT transform, see, for example, Mudholkar et al. [52]. The scaled TTT plot of the data set in Table 6 is displayed in Figure 2, which shows that the data set has an increasing failure rate function. This empirical behavior of the failure rate function indicates that the MWD can be considered a suitable model for this data set.
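The scaled TTT transform above reduces to a few lines of code. A minimal sketch (the helper name is ours, not the paper's):

```python
def scaled_ttt(sample):
    """Scaled TTT transform: the points (r/n, psi(r/n)) for r = 1..n,
    where psi(r/n) = (sum of the r smallest values + (n-r)*t_(r)) / total."""
    t = sorted(sample)
    n = len(t)
    total = sum(t)
    points = []
    for r in range(1, n + 1):
        num = sum(t[:r]) + (n - r) * t[r - 1]
        points.append((r / n, num / total))
    return points
```

Plotting these points and comparing their shape to the diagonal gives the convex/concave diagnostic described above; for the data in Table 6 the curve is concave, indicating an increasing failure rate.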
Furthermore, we check whether the MWD fits the data set in Table 6 well using two goodness-of-fit tests. We consider the Kolmogorov–Smirnov (KS) and Anderson–Darling (AD) test statistics and obtain the corresponding p-values. The KS and AD statistics with their corresponding p-values (in parentheses) are 0.0884 (0.4145) and 0.7977 (0.4824), respectively. According to these p-values, the MWD fits the data set in Table 6 quite well. In addition to the goodness-of-fit tests, we also assess the fit of the MWD graphically using the empirical and fitted cdf plot and the probability–probability (P–P) plot in Figure 3. These plots are standard tools for assessing the goodness-of-fit of a statistical model to observed data. A good fit is indicated when the points in the P–P plot lie close to a straight line (usually the 45-degree line), suggesting that the empirical and theoretical cumulative probabilities agree closely. From Figure 3, one can observe that the observed data points follow the theoretical distribution closely, i.e., the MWD fits the data set in Table 6 reasonably well.
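As an illustration of the KS part of this check, the sketch below computes the one-sample KS distance against a fitted Maxwell cdf; in practice λ would be replaced by its estimate from the data, and the quoted p-values would come from the usual KS reference distribution. Helper names are ours, not the paper's.

```python
import math

def maxwell_cdf(x, lam):
    # F(x) = erf(sqrt(z)) - (2/sqrt(pi)) * sqrt(z) * exp(-z), z = x^2/lam
    z = x * x / lam
    return math.erf(math.sqrt(z)) - 2.0 * math.sqrt(z / math.pi) * math.exp(-z)

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov distance between the empirical cdf
    and a fitted cdf: D = max over the sample of the one-sided gaps."""
    x = sorted(sample)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x, start=1):
        f = cdf(xi)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d
```

For the carbon-fiber data one would call `ks_statistic(data, lambda x: maxwell_cdf(x, lam_hat))` with the fitted value of λ.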
To illustrate the methodologies developed in this paper, we generate first-failure censored data based on the data set in Table 6. We group the 100 carbon fibers into n = 25 groups with k = 4 individuals in each group. The grouped data and the corresponding first-failure censored sample are reported in Table 7. The item marked with "+" within each group indicates the first failure. Then, we obtain six different PFFC samples using different censoring schemes for m = 10 and 20 based on the first-failure censored data in Table 7. The censoring schemes and the corresponding PFFC samples are presented in Table 8. To avoid ambiguity with the censoring schemes in Table 1, we name these censoring schemes [CS1]–[CS6].
Based on each PFFC sample in Table 8, we compute the MLEs and Bayes estimates of the parameter λ and Shannon's entropy H(λ). As we do not have prior information on the parameter λ, we use a non-informative prior for the Bayes estimates. The Bayes estimates are computed using the T-K approximation and MCMC methods under the LINEX loss function at two values of the loss parameter, c = −0.5 and 0.5. We construct 95% asymptotic, percentile bootstrap, and bootstrap-t confidence intervals and Bayesian HPD credible intervals for the parameter λ and Shannon's entropy H(λ). The point and interval estimation results are presented in Table 9 and Table 10, respectively.
For the Bayesian estimation procedures, we check the convergence of the MCMC sequences of the parameter λ generated from the posterior distribution by the M-H algorithm using graphical diagnostic tools, such as the trace plot, boxplot, and histogram with Gaussian density overlay, shown in Figure 4 for c = 0.5. The trace plots show random scatter around the mean (marked by a thick red line) and good mixing of the chains. The posterior distributions are almost symmetric, as seen from the boxplots and histograms of the generated samples, implying that the posterior mean can be used as a reasonable Bayes estimate of the parameter λ. For illustration, we also present the simulated posterior predictive densities in Figure 5.

6. Concluding Remarks

In this study, we developed statistical inference for the associated parameter and Shannon’s entropy of MWD based on PFFC data using frequentist and Bayesian approaches. For frequentist estimation procedures, we applied the EM algorithm to compute the MLEs. In addition, we obtained asymptotic, percentile bootstrap, and bootstrap-t confidence intervals for the parameter and Shannon’s entropy of MWD. For Bayesian estimation procedures, we applied two approximation techniques—the T-K approximation and the M-H algorithm—to obtain the Bayes estimates under the LINEX loss function. Moreover, we use the M-H algorithm to obtain the HPD credible intervals for the parameter and Shannon’s entropy of MWD. A Monte Carlo simulation study is used to evaluate the performance of the estimation procedures and a real data analysis is used to illustrate the methodologies. Based on the simulation results, we recommend using the Bayesian point and interval estimation of the parameter and Shannon’s entropy based on the MCMC method for the MWD when prior information about the model parameter is available. If the prior information about the model parameter is unavailable, then the MLEs are recommended.
For future research, it will be interesting to study the optimal censoring plan (i.e., to determine the values of n, k, and the censoring scheme G̃ = (G₁, G₂, …, G_m)) that maximizes the information that can be obtained about Shannon's entropy for a specific effective sample size m and total sample size nk available for the experiment. On the other hand, we considered the MWD due to its simplicity. Considering the trade-off between model performance/usability and complexity, the current work can be extended to some generalized distributions if one is willing to use a more complex model with higher flexibility.

Author Contributions

Conceptualization, Methodology and Original Draft Preparation: K.K. and I.K.; Investigation, Writing, Review, and Editing: H.K.T.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the two anonymous referees for their valuable comments, which helped improve the quality of this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Saraiva, P. On Shannon entropy and its applications. Kuwait J. Sci. 2023, 50, 194–199. [Google Scholar] [CrossRef]
  3. Cover, T.M. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  4. Kayal, S.; Kumar, S. Estimation of the Shannon’s entropy of several shifted exponential populations. Stat. Probab. Lett. 2013, 83, 1127–1135. [Google Scholar] [CrossRef]
  5. Cho, Y.; Sun, H.; Lee, K. An estimation of the entropy for a Rayleigh distribution based on doubly-generalized Type-II hybrid censored samples. Entropy 2014, 16, 3655–3669. [Google Scholar] [CrossRef]
  6. Du, Y.; Guo, Y.; Gui, W. Statistical Inference for the Information Entropy of the Log-Logistic Distribution under Progressive Type-I Interval Censoring Schemes. Symmetry 2018, 10, 445. [Google Scholar] [CrossRef]
  7. Liu, S.; Gui, W. Estimating the entropy for Lomax distribution based on generalized progressively hybrid censoring. Symmetry 2019, 11, 1219. [Google Scholar] [CrossRef]
  8. Yu, J.; Gui, W.; Shan, Y. Statistical Inference on the Shannon Entropy of Inverse Weibull Distribution under the Progressive First-Failure Censoring. Entropy 2019, 21, 1209. [Google Scholar] [CrossRef]
  9. Bekker, A.; Roux, J. Reliability characteristics of the Maxwell distribution: A Bayes estimation study. Commun. Stat. Theory Methods 2005, 34, 2169–2178. [Google Scholar] [CrossRef]
  10. Pozo, A.; Zhang, S.; Stecklein, G.; Garcia, R.; Glaser, J.; Tang, Z.; Strittmatter, R. GaN Reliability and Lifetime Projections: Phase 14; Technical report; Reliability Report; EPC Corp.: El Segundo, CA, USA, 2002. [Google Scholar]
  11. Woolley, R.v.R. A note on the MVU estimation of reliability for the Maxwell failure distribution. Estadistica 1989, 41, 73–79. [Google Scholar]
  12. Harter, H.L. Order Statistics and their Use in Testing and Estimation, Volume 2; Technical report; Aerospace Research Laboratories, Office of Aerospace Research, United States Air Force: 1970. Available online: https://www.amazon.com/Order-statistics-testing-estimation-Volumes/dp/B0006C3I0Q (accessed on 13 December 2023).
  13. Krishna, H.; Malik, M. Reliability estimation in Maxwell distribution with progressively Type-II censored data. J. Stat. Comput. Simul. 2012, 82, 623–641. [Google Scholar] [CrossRef]
  14. Krishna, H.; Vivekanand; Kumar, K. Estimation in Maxwell distribution with randomly censored data. J. Stat. Comput. Simul. 2015, 85, 3560–3578. [Google Scholar] [CrossRef]
  15. Tomer, S.K.; Panwar, M. Estimation procedures for Maxwell distribution under Type-I progressive hybrid censoring scheme. J. Stat. Comput. Simul. 2015, 85, 339–356. [Google Scholar] [CrossRef]
  16. Panwar, M.; Tomer, S.K. Robust Bayesian Analysis of Lifetime Data from Maxwell Distribution. Austrian J. Stat. 2019, 48, 38–55. [Google Scholar] [CrossRef]
  17. Kumari, A.; Kumar, K.; Kumar, I. Bayesian and classical inference in Maxwell distribution under adaptive progressively Type-II censored data. Int. J. Syst. Assur. Eng. Manag. 2023, 1–22. [Google Scholar] [CrossRef]
  18. Stacy, E.W. A Generalization of the Gamma Distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  19. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1994; Volume 1. [Google Scholar]
  20. Lai, C.D. Generalized Weibull Distributions; Springer: New York, NY, USA, 1994. [Google Scholar]
  21. Balasooriya, U. Failure–censored reliability sampling plans for the exponential distribution. J. Stat. Comput. Simul. 1995, 52, 337–349. [Google Scholar] [CrossRef]
  22. Cohen, A.C. Progressively Censored Samples in Life Testing. Technometrics 1963, 5, 327–339. [Google Scholar] [CrossRef]
  23. Wu, S.J.; Kuş, C. On estimation based on progressive first-failure-censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670. [Google Scholar] [CrossRef]
  24. Dube, M.; Garg, R.; Krishna, H. On progressively first failure censored Lindley distribution. Comput. Stat. 2016, 31, 139–163. [Google Scholar] [CrossRef]
  25. Mohammed, H.S.; Ateya, S.F.; AL-Hussaini, E.K. Estimation based on progressive first-failure censoring from exponentiated exponential distribution. J. Appl. Stat. 2017, 44, 1479–1494. [Google Scholar] [CrossRef]
  26. Krishna, H.; Dube, M.; Garg, R. Estimation of P(Y < X) for progressively first-failure-censored generalized inverted exponential distribution. J. Stat. Comput. Simul. 2017, 87, 2274–2289. [Google Scholar]
  27. Kayal, T.; Tripathi, Y.M.; Wang, L. Inference for the Chen Distribution Under Progressive First-Failure Censoring. J. Stat. Theory Pract. 2019, 13, 52. [Google Scholar] [CrossRef]
  28. Saini, S.; Chaturvedi, A.; Garg, R. Estimation of stress–strength reliability for generalized Maxwell failure distribution under progressive first failure censoring. J. Stat. Comput. Simul. 2021, 91, 1366–1393. [Google Scholar] [CrossRef]
  29. Kumar, I.; Kumar, K.; Ghosh, I. Reliability estimation in inverse Pareto distribution using progressively first failure censored data. Am. J. Math. Manag. Sci. 2023, 42, 126–147. [Google Scholar] [CrossRef]
  30. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. Ser. B 1977, 39, 1–38. [Google Scholar] [CrossRef]
  31. McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  32. Casella, G.; Berger, R.L. Statistical Inference, 2nd ed.; Duxbury Press: Pacific Grove, CA, USA, 2002. [Google Scholar]
  33. Tanner, M.A. Tools for Statistical Inference, 3rd ed.; Springer: New York, NY, USA, 1996. [Google Scholar]
  34. Louis, T.A. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1982, 44, 226–233. [Google Scholar] [CrossRef]
  35. Krishnamoorthy, K.; Lin, Y. Confidence limits for stress—Strength reliability involving Weibull models. J. Stat. Plan. Inference 2010, 140, 1754–1764. [Google Scholar] [CrossRef]
  36. Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  37. Efron, B.; Tibshirani, R.J. An Introduction to The Bootstrap. Monogr. Stat. Appl. Probab. 1993, 57, 1–436. [Google Scholar]
  38. Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans; SIAM: Philadelphia, PA, USA, 1982. [Google Scholar]
  39. Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953. [Google Scholar] [CrossRef]
  40. Box, G.E.P.; Tiao, G.C. Bayesian Inference Statistical Analysis; John Wiley & Sons: New York, NY, USA, 1992. [Google Scholar]
  41. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar]
  42. Tian, Q.; Lewis-Beck, C.; Niemi, J.B.; Meeker, W.Q. Specifying prior distributions in reliability applications. In Applied Stochastic Models in Business and Industry; 2023; Available online: https://onlinelibrary.wiley.com/doi/10.1002/asmb.2752 (accessed on 13 December 2023).
  43. Chaudhary, S.; Kumar, J.; Tomer, S.K. Estimation of P[Y < X] for Maxwell distribution. J. Stat. Manag. Syst. 2017, 20, 467–481. [Google Scholar]
  44. Varian, H.R. A Bayesian approach to real estate assessment. Stud. Bayesian Econom. Stat. Honor. Leonard J. Savage 1975, 4, 195–208. [Google Scholar]
  45. Tierney, L.; Kadane, J.B. Accurate approximations for posterior moments and marginal densities. J. Am. Stat. Assoc. 1986, 81, 82–86. [Google Scholar] [CrossRef]
  46. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1091. [Google Scholar] [CrossRef]
  47. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  48. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar]
  49. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2023. [Google Scholar]
  50. Nichols, M.D.; Padgett, W. A bootstrap control chart for Weibull percentiles. Qual. Reliab. Eng. Int. 2006, 22, 141–151. [Google Scholar] [CrossRef]
  51. Xie, Y.; Gui, W. Statistical Inference of the Lifetime Performance Index with the Log-Logistic Distribution Based on Progressive First-Failure-Censored Data. Symmetry 2020, 12, 937. [Google Scholar] [CrossRef]
  52. Mudholkar, G.S.; Srivastava, D.K.; Kollia, G.D. A generalization of the Weibull distribution with application to the analysis of survival data. J. Am. Stat. Assoc. 1996, 91, 1575–1583. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of PFFC scheme.
Figure 2. Scaled TTT plot of the real data set.
Figure 3. Empirical and fitted Maxwell distribution plots for real data.
Figure 4. MCMC diagnostic plots for different PFFC samples in Table 8.
Figure 5. Simulated posterior predictive densities for the Bayesian estimates based on different PFFC samples in Table 8.
Table 1. Censoring plans used in the simulation study.

| n | m | [CS] | Scheme | n | m | [CS] | Scheme |
| 20 | 8 | [1] | (12^1, 0^7) | 50 | 20 | [7] | (30^1, 0^19) |
| | | [2] | (4^3, 0^5) | | | [8] | (5^6, 0^14) |
| | | [3] | (0^7, 12^1) | | | [9] | (0^19, 30^1) |
| 20 | 16 | [4] | (4^1, 0^15) | 50 | 40 | [10] | (10^1, 0^39) |
| | | [5] | (2^2, 0^14) | | | [11] | (5^2, 0^38) |
| | | [6] | (0^15, 4^1) | | | [12] | (0^39, 10^1) |
Table 2. Simulated average MLEs and Bayes estimates and the MSEs of Shannon's entropy H(λ) when λ = 1.5 and H(1.5) = 0.8523. Entries are (AE, MSE).

| (k, n, m) | [CS] | MLE | T-K, c = −0.50 | T-K, c = 0.50 | MCMC, c = −0.50 | MCMC, c = 0.50 |
| (3, 20, 8) | [1] | 0.8372, 0.0189 | 0.8066, 0.0174 | 0.7991, 0.0181 | 0.8329, 0.0024 | 0.8298, 0.0025 |
| | [2] | 0.8371, 0.0185 | 0.8074, 0.0171 | 0.8000, 0.0178 | 0.8333, 0.0024 | 0.8302, 0.0025 |
| | [3] | 0.8371, 0.0172 | 0.8101, 0.0160 | 0.8031, 0.0166 | 0.8342, 0.0023 | 0.8314, 0.0024 |
| (3, 20, 16) | [4] | 0.8475, 0.0091 | 0.8305, 0.0087 | 0.8263, 0.0089 | 0.8413, 0.0016 | 0.8394, 0.0017 |
| | [5] | 0.8475, 0.0091 | 0.8305, 0.0087 | 0.8263, 0.0089 | 0.8413, 0.0016 | 0.8394, 0.0017 |
| | [6] | 0.8475, 0.0088 | 0.8312, 0.0084 | 0.8272, 0.0086 | 0.8416, 0.0016 | 0.8398, 0.0016 |
| (3, 50, 20) | [7] | 0.8485, 0.0074 | 0.8348, 0.0071 | 0.8314, 0.0073 | 0.8433, 0.0014 | 0.8418, 0.0014 |
| | [8] | 0.8485, 0.0072 | 0.8353, 0.0070 | 0.8320, 0.0071 | 0.8435, 0.0014 | 0.8420, 0.0014 |
| | [9] | 0.8483, 0.0066 | 0.8366, 0.0064 | 0.8336, 0.0065 | 0.8442, 0.0013 | 0.8428, 0.0013 |
| (3, 50, 40) | [10] | 0.8496, 0.0039 | 0.8425, 0.0039 | 0.8407, 0.0039 | 0.8472, 0.0009 | 0.8463, 0.0009 |
| | [11] | 0.8496, 0.0039 | 0.8425, 0.0039 | 0.8407, 0.0039 | 0.8472, 0.0009 | 0.8463, 0.0009 |
| | [12] | 0.8495, 0.0038 | 0.8428, 0.0037 | 0.8411, 0.0038 | 0.8474, 0.0008 | 0.8465, 0.0008 |
| (5, 20, 8) | [1] | 0.8371, 0.0180 | 0.8505, 0.0124 | 0.8011, 0.0174 | 0.8483, 0.0023 | 0.8306, 0.0025 |
| | [2] | 0.8370, 0.0177 | 0.8504, 0.0122 | 0.8020, 0.0171 | 0.8484, 0.0023 | 0.8310, 0.0024 |
| | [3] | 0.8369, 0.0166 | 0.8503, 0.0117 | 0.8046, 0.0161 | 0.8485, 0.0022 | 0.8319, 0.0023 |
| (5, 20, 16) | [4] | 0.8474, 0.0087 | 0.8536, 0.0072 | 0.8276, 0.0085 | 0.8511, 0.0016 | 0.8400, 0.0016 |
| | [5] | 0.8474, 0.0086 | 0.8536, 0.0072 | 0.8276, 0.0085 | 0.8511, 0.0016 | 0.8400, 0.0016 |
| | [6] | 0.8474, 0.0084 | 0.8536, 0.0070 | 0.8283, 0.0082 | 0.8511, 0.0015 | 0.8403, 0.0016 |
| (5, 50, 20) | [7] | 0.8485, 0.0070 | 0.8535, 0.0061 | 0.8325, 0.0069 | 0.8515, 0.0013 | 0.8422, 0.0014 |
| | [8] | 0.8484, 0.0069 | 0.8535, 0.0060 | 0.8329, 0.0068 | 0.8515, 0.0013 | 0.8424, 0.0014 |
| | [9] | 0.8482, 0.0064 | 0.8534, 0.0056 | 0.8342, 0.0063 | 0.8515, 0.0012 | 0.8431, 0.0013 |
| (5, 50, 40) | [10] | 0.8495, 0.0038 | 0.8521, 0.0035 | 0.8413, 0.0037 | 0.8517, 0.0008 | 0.8466, 0.0008 |
| | [11] | 0.8495, 0.0038 | 0.8521, 0.0035 | 0.8413, 0.0037 | 0.8517, 0.0008 | 0.8466, 0.0008 |
| | [12] | 0.8495, 0.0036 | 0.8520, 0.0034 | 0.8416, 0.0036 | 0.8517, 0.0008 | 0.8468, 0.0008 |
Table 3. Simulated average MLEs and Bayes estimates and the MSEs of Shannon's entropy H(λ) when λ = 0.75 and H(0.75) = 0.5057. Entries are (AE, MSE).

| (k, n, m) | [CS] | MLE | T-K, c = −0.50 | T-K, c = 0.50 | MCMC, c = −0.50 | MCMC, c = 0.50 |
| (3, 20, 8) | [1] | 0.4906, 0.0189 | 0.5823, 0.0151 | 0.5746, 0.0139 | 0.5371, 0.0037 | 0.5338, 0.0035 |
| | [2] | 0.4905, 0.0185 | 0.5811, 0.0148 | 0.5735, 0.0137 | 0.5365, 0.0036 | 0.5333, 0.0035 |
| | [3] | 0.4905, 0.0172 | 0.5768, 0.0138 | 0.5696, 0.0129 | 0.5347, 0.0034 | 0.5317, 0.0032 |
| (3, 20, 16) | [4] | 0.5009, 0.0091 | 0.5507, 0.0083 | 0.5465, 0.0079 | 0.5252, 0.0021 | 0.5232, 0.0021 |
| | [5] | 0.5009, 0.0091 | 0.5506, 0.0083 | 0.5464, 0.0079 | 0.5251, 0.0021 | 0.5232, 0.0020 |
| | [6] | 0.5009, 0.0088 | 0.5493, 0.0080 | 0.5453, 0.0077 | 0.5246, 0.0020 | 0.5227, 0.0020 |
| (3, 50, 20) | [7] | 0.5020, 0.0074 | 0.5424, 0.0068 | 0.5390, 0.0066 | 0.5219, 0.0017 | 0.5203, 0.0017 |
| | [8] | 0.5019, 0.0072 | 0.5417, 0.0067 | 0.5384, 0.0064 | 0.5215, 0.0017 | 0.5200, 0.0017 |
| | [9] | 0.5018, 0.0066 | 0.5391, 0.0062 | 0.5360, 0.0060 | 0.5204, 0.0016 | 0.5190, 0.0015 |
| (3, 50, 40) | [10] | 0.5030, 0.0039 | 0.5242, 0.0037 | 0.5225, 0.0037 | 0.5142, 0.0009 | 0.5134, 0.0009 |
| | [11] | 0.5030, 0.0039 | 0.5242, 0.0037 | 0.5224, 0.0037 | 0.5142, 0.0009 | 0.5134, 0.0009 |
| | [12] | 0.5030, 0.0038 | 0.5235, 0.0036 | 0.5218, 0.0035 | 0.5139, 0.0009 | 0.5131, 0.0009 |
| (5, 20, 8) | [1] | 0.4905, 0.0180 | 0.5795, 0.0144 | 0.5721, 0.0134 | 0.5359, 0.0036 | 0.5327, 0.0034 |
| | [2] | 0.4904, 0.0177 | 0.5785, 0.0142 | 0.5712, 0.0132 | 0.5354, 0.0035 | 0.5323, 0.0033 |
| | [3] | 0.4903, 0.0166 | 0.5749, 0.0134 | 0.5679, 0.0125 | 0.5339, 0.0033 | 0.5309, 0.0031 |
| (5, 20, 16) | [4] | 0.5008, 0.0087 | 0.5488, 0.0079 | 0.5448, 0.0076 | 0.5243, 0.0020 | 0.5225, 0.0020 |
| | [5] | 0.5008, 0.0086 | 0.5488, 0.0079 | 0.5447, 0.0076 | 0.5243, 0.0020 | 0.5225, 0.0020 |
| | [6] | 0.5008, 0.0084 | 0.5476, 0.0077 | 0.5437, 0.0074 | 0.5238, 0.0020 | 0.5220, 0.0019 |
| (5, 50, 20) | [7] | 0.5019, 0.0070 | 0.5409, 0.0065 | 0.5376, 0.0063 | 0.5212, 0.0017 | 0.5197, 0.0016 |
| | [8] | 0.5018, 0.0069 | 0.5403, 0.0064 | 0.5371, 0.0062 | 0.5209, 0.0016 | 0.5194, 0.0016 |
| | [9] | 0.5016, 0.0064 | 0.5381, 0.0060 | 0.5351, 0.0058 | 0.5199, 0.0015 | 0.5185, 0.0015 |
| (5, 50, 40) | [10] | 0.5029, 0.0038 | 0.5233, 0.0035 | 0.5216, 0.0035 | 0.5138, 0.0009 | 0.5130, 0.0009 |
| | [11] | 0.5029, 0.0038 | 0.5233, 0.0035 | 0.5216, 0.0035 | 0.5138, 0.0009 | 0.5130, 0.0009 |
| | [12] | 0.5029, 0.0036 | 0.5227, 0.0034 | 0.5211, 0.0034 | 0.5135, 0.0009 | 0.5128, 0.0009 |
Table 4. Simulated average lengths and coverage probabilities of 95% asymptotic, percentile bootstrap, and bootstrap-t confidence intervals and Bayesian HPD credible intervals of Shannon's entropy H(λ), when λ = 1.5 and H = 0.8523. Entries are (AL, CP).

| (k, n, m) | [CS] | Asym | Boot-p | Boot-t | HPD |
| (3, 20, 8) | [1] | 0.5257, 0.948 | 0.5324, 0.927 | 0.5335, 0.952 | 0.3055, 0.999 |
| | [2] | 0.5198, 0.947 | 0.5271, 0.927 | 0.5279, 0.952 | 0.3027, 0.999 |
| | [3] | 0.5007, 0.951 | 0.5105, 0.925 | 0.5103, 0.951 | 0.2941, 0.999 |
| (3, 20, 16) | [4] | 0.3746, 0.948 | 0.3721, 0.945 | 0.3721, 0.950 | 0.2387, 0.998 |
| | [5] | 0.3743, 0.948 | 0.3719, 0.945 | 0.3718, 0.950 | 0.2385, 0.998 |
| | [6] | 0.3680, 0.945 | 0.3657, 0.946 | 0.3658, 0.951 | 0.2350, 0.998 |
| (3, 50, 20) | [7] | 0.3343, 0.945 | 0.3458, 0.955 | 0.3459, 0.954 | 0.2170, 0.997 |
| | [8] | 0.3304, 0.944 | 0.3419, 0.956 | 0.3416, 0.954 | 0.2146, 0.998 |
| | [9] | 0.3165, 0.944 | 0.3273, 0.954 | 0.3273, 0.953 | 0.2065, 0.997 |
| (3, 50, 40) | [10] | 0.2372, 0.941 | 0.2472, 0.950 | 0.2472, 0.950 | 0.1602, 0.987 |
| | [11] | 0.2371, 0.941 | 0.2471, 0.950 | 0.2471, 0.950 | 0.1601, 0.987 |
| | [12] | 0.2327, 0.942 | 0.2427, 0.950 | 0.2427, 0.950 | 0.1573, 0.986 |
| (5, 20, 8) | [1] | 0.5129, 0.948 | 0.5210, 0.925 | 0.5219, 0.952 | 0.3050, 0.997 |
| | [2] | 0.5081, 0.947 | 0.5162, 0.925 | 0.5175, 0.953 | 0.3027, 0.997 |
| | [3] | 0.4917, 0.950 | 0.5024, 0.923 | 0.5023, 0.951 | 0.2950, 0.997 |
| (5, 20, 16) | [4] | 0.3651, 0.946 | 0.3628, 0.945 | 0.3629, 0.951 | 0.2345, 0.997 |
| | [5] | 0.3649, 0.946 | 0.3626, 0.945 | 0.3627, 0.951 | 0.2344, 0.997 |
| | [6] | 0.3592, 0.945 | 0.3576, 0.946 | 0.3577, 0.951 | 0.2312, 0.997 |
| (5, 50, 20) | [7] | 0.3260, 0.946 | 0.3369, 0.953 | 0.3370, 0.953 | 0.2130, 0.997 |
| | [8] | 0.3228, 0.946 | 0.3336, 0.954 | 0.3336, 0.953 | 0.2111, 0.997 |
| | [9] | 0.3109, 0.944 | 0.3215, 0.953 | 0.3214, 0.954 | 0.2041, 0.996 |
| (5, 50, 40) | [10] | 0.2312, 0.942 | 0.2409, 0.950 | 0.2409, 0.950 | 0.1565, 0.988 |
| | [11] | 0.2311, 0.942 | 0.2408, 0.950 | 0.2408, 0.950 | 0.1564, 0.988 |
| | [12] | 0.2272, 0.942 | 0.2368, 0.949 | 0.2368, 0.950 | 0.1539, 0.988 |
Table 5. Simulated average lengths and coverage probabilities of 95% asymptotic, bootstrap confidence, and HPD credible intervals of entropy H(λ), when λ = 0.75 and H = 0.5057. Entries are (AL, CP).

| (k, n, m) | [CS] | Asym | Boot-p | Boot-t | HPD |
| (3, 20, 8) | [1] | 0.5219, 0.948 | 0.5256, 0.927 | 0.5312, 0.952 | 0.3158, 0.987 |
| | [2] | 0.5163, 0.947 | 0.5207, 0.927 | 0.5258, 0.952 | 0.3133, 0.988 |
| | [3] | 0.4981, 0.951 | 0.5055, 0.925 | 0.5089, 0.951 | 0.3051, 0.988 |
| (3, 20, 16) | [4] | 0.3746, 0.948 | 0.3721, 0.945 | 0.3722, 0.950 | 0.2413, 0.988 |
| | [5] | 0.3743, 0.948 | 0.3719, 0.945 | 0.3718, 0.950 | 0.2411, 0.988 |
| | [6] | 0.3680, 0.945 | 0.3657, 0.946 | 0.3658, 0.951 | 0.2377, 0.987 |
| (3, 50, 20) | [7] | 0.3343, 0.945 | 0.3458, 0.955 | 0.3459, 0.954 | 0.2189, 0.989 |
| | [8] | 0.3304, 0.945 | 0.3419, 0.956 | 0.3416, 0.954 | 0.2167, 0.989 |
| | [9] | 0.3165, 0.944 | 0.3273, 0.954 | 0.3273, 0.953 | 0.2089, 0.989 |
| (3, 50, 40) | [10] | 0.2372, 0.941 | 0.2472, 0.950 | 0.2472, 0.950 | 0.1607, 0.983 |
| | [11] | 0.2371, 0.941 | 0.2471, 0.950 | 0.2471, 0.950 | 0.1606, 0.983 |
| | [12] | 0.2327, 0.942 | 0.2427, 0.950 | 0.2427, 0.950 | 0.1579, 0.985 |
| (5, 20, 8) | [1] | 0.5098, 0.948 | 0.5151, 0.925 | 0.5201, 0.952 | 0.3104, 0.988 |
| | [2] | 0.5052, 0.947 | 0.5107, 0.924 | 0.5158, 0.953 | 0.3083, 0.988 |
| | [3] | 0.4895, 0.950 | 0.4979, 0.923 | 0.5011, 0.951 | 0.3012, 0.988 |
| (5, 20, 16) | [4] | 0.3651, 0.946 | 0.3628, 0.945 | 0.3629, 0.951 | 0.2362, 0.989 |
| | [5] | 0.3649, 0.946 | 0.3625, 0.945 | 0.3626, 0.951 | 0.2361, 0.989 |
| | [6] | 0.3592, 0.945 | 0.3576, 0.946 | 0.3577, 0.951 | 0.2331, 0.990 |
| (5, 50, 20) | [7] | 0.3260, 0.946 | 0.3369, 0.953 | 0.3370, 0.953 | 0.2143, 0.989 |
| | [8] | 0.3228, 0.946 | 0.3336, 0.954 | 0.3336, 0.953 | 0.2125, 0.989 |
| | [9] | 0.3109, 0.944 | 0.3215, 0.952 | 0.3214, 0.954 | 0.2056, 0.989 |
| (5, 50, 40) | [10] | 0.2312, 0.942 | 0.2409, 0.950 | 0.2409, 0.950 | 0.1569, 0.985 |
| | [11] | 0.2311, 0.942 | 0.2408, 0.950 | 0.2408, 0.950 | 0.1568, 0.985 |
| | [12] | 0.2272, 0.942 | 0.2368, 0.949 | 0.2369, 0.950 | 0.1544, 0.985 |
Table 6. Tensile strength of 100 carbon fibers originally reported by Nichols and Padgett [50].

3.70, 3.11, 4.42, 3.28, 3.75, 2.96, 3.39, 3.31, 3.15, 2.81, 1.41, 2.76, 3.19, 1.59, 2.17,
3.51, 1.84, 1.61, 1.57, 1.89, 2.74, 3.27, 2.41, 3.09, 2.43, 2.53, 2.81, 3.31, 2.35, 2.77,
2.68, 4.91, 1.57, 2.00, 1.17, 2.17, 0.39, 2.79, 1.08, 2.88, 2.73, 2.87, 3.19, 1.87, 2.95,
2.67, 4.20, 2.85, 2.55, 2.17, 2.97, 3.68, 0.81, 1.22, 5.08, 1.69, 3.68, 4.70, 2.03, 2.82,
2.50, 1.47, 3.22, 3.15, 2.97, 2.93, 3.33, 2.56, 2.59, 2.83, 1.36, 1.84, 5.56, 1.12, 2.48,
1.25, 2.48, 2.03, 1.61, 2.05, 3.60, 3.11, 1.69, 4.90, 3.39, 3.22, 2.55, 3.56, 2.38, 1.92,
0.98, 1.59, 1.73, 1.71, 1.18, 4.38, 0.85, 1.80, 2.12, 3.65
Table 7. Grouped real data set (an observation marked with "+" indicates the first failure (FF) in the group).

| Group | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| Item I | 3.27 | 2.05 | 3.33 | 1.87 | 2.03 | 3.68 | 2.87 | 2.67+ | 1.84 | 1.73 | 2.82 | 2.77 | 2.73 |
| Item II | 2.97 | 1.61 | 4.38 | 2.97 | 2.12 | 2.68 | 1.59+ | 2.96 | 0.39+ | 3.19 | 2.41+ | 2.17 | 2.88 |
| Item III | 3.11 | 3.11 | 1.69 | 1.57+ | 0.85+ | 4.90 | 1.89 | 3.09 | 2.17 | 1.57+ | 3.60 | 3.51 | 3.75 |
| Item IV | 2.03+ | 1.25+ | 1.18+ | 1.59 | 1.84 | 2.38+ | 2.43 | 4.20 | 2.35 | 2.93 | 3.22 | 1.08+ | 1.69+ |
| FF Obs. | 2.03 | 1.25 | 1.18 | 1.57 | 0.85 | 2.38 | 1.59 | 2.67 | 0.39 | 1.57 | 2.41 | 1.08 | 1.69 |

| Group | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
| Item I | 5.56 | 2.81 | 2.55 | 4.70 | 1.36+ | 3.22 | 1.61 | 3.15 | 4.91 | 3.31 | 2.76 | 0.98+ |
| Item II | 1.41+ | 3.39 | 2.17 | 2.59 | 2.83 | 1.71+ | 2.85 | 4.42 | 1.17 | 1.92 | 5.08 | 3.39 |
| Item III | 2.48 | 3.68 | 3.56 | 3.19 | 2.74 | 3.65 | 1.47+ | 2.00+ | 1.12+ | 1.80+ | 3.28 | 2.50 |
| Item IV | 2.95 | 0.81+ | 1.22+ | 2.56+ | 2.53 | 3.15 | 2.79 | 2.81 | 3.70 | 2.55 | 2.48+ | 3.31 |
| FF Obs. | 1.41 | 0.81 | 1.22 | 2.56 | 1.36 | 1.71 | 1.47 | 2.00 | 1.12 | 1.80 | 2.48 | 0.98 |
Table 8. Censoring schemes and progressively first-failure censored samples corresponding to considered real data set.
(k, n, m)    [CS]    Scheme            Progressively First-Failure Censored Sample
(4, 25, 10)  [CS1]   (15^1, 0^9)       0.39, 1.80, 1.84, 2.03, 2.12, 2.17, 2.48, 2.50, 2.73, 2.77
             [CS2]   (5^3, 0^7)        0.39, 1.18, 1.57, 2.03, 2.12, 2.17, 2.48, 2.50, 2.73, 2.77
             [CS3]   (0^9, 15^1)       0.39, 0.81, 0.85, 0.98, 1.08, 1.12, 1.18, 1.22, 1.25, 1.36
(4, 25, 20)  [CS4]   (5^1, 0^19)       0.39, 1.18, 1.22, 1.25, 1.36, 1.41, 1.47, 1.57, 1.59, 1.61,
                                       1.69, 1.80, 1.84, 2.03, 2.12, 2.17, 2.48, 2.50, 2.73, 2.77
             [CS5]   (2^1, 3^1, 0^18)  0.39, 0.98, 1.22, 1.25, 1.36, 1.41, 1.47, 1.57, 1.59, 1.61,
                                       1.69, 1.80, 1.84, 2.03, 2.12, 2.17, 2.48, 2.50, 2.73, 2.77
             [CS6]   (0^19, 5^1)       0.39, 0.81, 0.85, 0.98, 1.08, 1.12, 1.18, 1.22, 1.25, 1.36,
                                       1.41, 1.47, 1.57, 1.59, 1.61, 1.69, 1.80, 1.84, 2.03, 2.12
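Samples of this kind can also be generated in simulation: a PFFC sample with group size k from a lifetime cdf F is distributionally a progressively Type-II censored sample from the group-minimum cdf G(x) = 1 − (1 − F(x))^k, so the standard exponential-spacings construction applies. A minimal sketch (function names mine; a unit-exponential lifetime stands in for the Maxwell so the quantile function stays in closed form):

```python
import math
import random

def pffc_sample(F_inv, scheme, k, rng=None):
    """Progressively first-failure censored sample via exponential spacings.

    F_inv  : quantile function of the item lifetime cdf F
    scheme : (R_1, ..., R_m); after the i-th observed first failure,
             R_i of the surviving groups are removed from the test
    k      : number of items per group
    """
    rng = rng or random.Random()
    m = len(scheme)
    s, out = 0.0, []
    for i in range(m):
        # gamma_i = number of groups still on test just before the i-th failure
        gamma_i = sum(r + 1 for r in scheme[i:])
        s += rng.expovariate(1.0) / gamma_i           # normalized spacing
        u = 1.0 - math.exp(-s)                        # uniform prog. order statistic
        out.append(F_inv(1.0 - (1.0 - u) ** (1.0 / k)))  # invert the minimum cdf G
    return out

# Illustration: scheme (0^19, 5^1) mirrors [CS6] in Table 8 (k = 4, n = 25, m = 20)
exp_inv = lambda p: -math.log(1.0 - p)
sample = pffc_sample(exp_inv, [0] * 19 + [5], k=4, rng=random.Random(1))
print(len(sample), sample == sorted(sample))  # prints: 20 True
```

For i = 0 the factor gamma_i equals the total number of groups n, as it should.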
Table 9. MLEs and Bayes estimates of λ and H(λ) for the PFFC samples in Table 8 with k = 4, n = 25, m = 10 and 20, and c = −0.5 and 0.5.
                      MLE              T-K Bayes, c = −0.5   T-K Bayes, c = 0.5   M-H Bayes, c = −0.5   M-H Bayes, c = 0.5
(k, n, m)   [CS]      λ^       H^      λ^       H^            λ^      H^           λ^       H^            λ^      H^
(4, 25, 10) [CS1]     9.2897   1.7640  13.4153  1.7800        8.7078  1.7714       9.6163   1.7467        8.5906  1.7436
            [CS2]    10.6695   1.8333  11.5620  1.8493        9.9031  1.8411      11.1367   1.8167        9.8262  1.8137
            [CS3]     5.6674   1.5169   6.5958  1.5330        5.5463  1.5304       5.7261   1.5016        5.3784  1.4988
(4, 25, 20) [CS4]     6.6806   1.5992   7.2764  1.6083        6.5572  1.6046       6.7445   1.5903        6.4575  1.5887
            [CS5]     6.7637   1.6054   7.3707  1.6145        6.6362  1.6107       6.8301   1.5965        6.5371  1.5949
            [CS6]     5.7635   1.5254   6.2046  1.5344        5.6944  1.5308       5.8011   1.5169        5.5937  1.5153
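The M-H Bayes columns in Table 9 come from posterior draws: under the LINEX loss the Bayes estimate is −(1/c) ln E[exp(−cH)], approximated by the sample average over the MCMC output. A minimal sketch with hypothetical draws (names mine):

```python
import math

def linex_estimate(draws, c):
    """Bayes estimate under LINEX loss: d_hat = -(1/c) * ln(mean(exp(-c * h)))
    over posterior draws h; c > 0 penalizes overestimation more heavily,
    c < 0 penalizes underestimation more heavily."""
    mean_exp = sum(math.exp(-c * h) for h in draws) / len(draws)
    return -math.log(mean_exp) / c

draws = [1.70, 1.74, 1.78, 1.82]        # hypothetical posterior draws of H(lambda)
post_mean = sum(draws) / len(draws)
# By Jensen's inequality the two estimates bracket the posterior mean,
# matching the ordering of the c = -0.5 and c = 0.5 columns in Table 9:
print(linex_estimate(draws, -0.5) >= post_mean >= linex_estimate(draws, 0.5))  # prints: True
```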
Table 10. Ninety-five percent asymptotic (Asym), percentile bootstrap (boot-p), bootstrap-t (boot-t) confidence intervals, and the Bayesian HPD credible intervals for the parameter λ and Shannon’s entropy H(λ) for the PFFC samples in Table 8 with k = 4, n = 25, m = 10 and 20, and c = −0.5 and 0.5.
Interval estimates for the parameter λ
(k, n, m)   [CS]   Asym             boot-p           boot-t           HPD (c = −0.5)   HPD (c = 0.5)
(4, 25, 10) [CS1]  (4.973, 13.606)  (6.636, 12.366)  (6.213, 11.944)  (6.357, 11.872)  (6.357, 11.872)
            [CS2]  (5.804, 15.535)  (7.816, 13.602)  (7.736, 13.523)  (7.365, 13.587)  (7.365, 13.587)
            [CS3]  (3.157, 8.178)   (3.880, 8.365)   (2.970, 7.454)   (3.967, 7.187)   (3.967, 7.187)
(4, 25, 20) [CS4]  (4.478, 8.883)   (5.006, 8.771)   (4.590, 8.355)   (5.125, 8.058)   (5.125, 8.058)
            [CS5]  (4.538, 8.989)   (5.044, 8.824)   (4.703, 8.483)   (5.192, 8.156)   (5.192, 8.156)
            [CS6]  (3.893, 7.634)   (4.064, 7.962)   (3.565, 7.463)   (4.443, 6.937)   (4.443, 6.938)
Interval estimates for Shannon’s entropy H(λ)
(k, n, m)   [CS]   Asym            boot-p          boot-t          HPD (c = −0.5)  HPD (c = 0.5)
(4, 25, 10) [CS1]  (1.532, 1.996)  (1.596, 1.907)  (1.621, 1.932)  (1.591, 1.900)  (1.591, 1.900)
            [CS2]  (1.605, 2.061)  (1.678, 1.955)  (1.712, 1.989)  (1.664, 1.968)  (1.664, 1.968)
            [CS3]  (1.295, 1.738)  (1.327, 1.712)  (1.322, 1.706)  (1.353, 1.648)  (1.353, 1.648)
(4, 25, 20) [CS4]  (1.434, 1.764)  (1.455, 1.735)  (1.463, 1.743)  (1.472, 1.698)  (1.472, 1.698)
            [CS5]  (1.441, 1.770)  (1.459, 1.738)  (1.472, 1.752)  (1.478, 1.704)  (1.478, 1.704)
            [CS6]  (1.363, 1.688)  (1.351, 1.687)  (1.364, 1.700)  (1.400, 1.623)  (1.400, 1.623)
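The HPD credible intervals above are extracted from the MCMC output; a common device (in the spirit of Chen and Shao) takes, among all windows of ⌈(1 − α)N⌉ consecutive sorted draws, the shortest one. A minimal sketch (names mine):

```python
import math

def hpd_interval(draws, alpha=0.05):
    """Shortest interval whose endpoints are sorted posterior draws and
    which contains a (1 - alpha) fraction of them (Chen-Shao-style HPD)."""
    xs = sorted(draws)
    n = len(xs)
    w = int(math.ceil((1.0 - alpha) * n))              # window size in draws
    i = min(range(n - w + 1), key=lambda j: xs[j + w - 1] - xs[j])
    return xs[i], xs[i + w - 1]

# On a right-skewed sample the HPD interval is no wider than the equal-tail one
xs = sorted(v * v for v in (i / 1000.0 for i in range(1, 1001)))
lo, hi = hpd_interval(xs, alpha=0.05)
eq_lo, eq_hi = xs[24], xs[974]          # 2.5% and 97.5% empirical quantiles
print(hi - lo <= eq_hi - eq_lo)         # prints: True
```

This is why the HPD columns in Table 10 tend to be shorter than the asymptotic and bootstrap intervals.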
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kumar, K.; Kumar, I.; Ng, H.K.T. On Estimation of Shannon’s Entropy of Maxwell Distribution Based on Progressively First-Failure Censored Data. Stats 2024, 7, 138-159. https://doi.org/10.3390/stats7010009
