**1. Introduction**

To analyze and evaluate the reliability of products, life tests are often carried out. For products with long lives and high reliability, a censoring scheme is often adopted during the test to save on time and costs. Two commonly used censoring schemes are Type-I and Type-II censoring, but these two censoring schemes do not have the flexibility of allowing the removal of units at points other than the terminal point of the experiment. To allow for more flexibility in removing surviving units from the test, more general censoring approaches are required. The progressive Type-II censoring scheme is appealing and has attracted much attention in the literature. This topic can be found in [1]. One may also refer to [2] for a comprehensive review on progressive censoring. One drawback of the Type-II progressive censoring scheme is that the length of the experiment may be quite long for long-life products. Therefore, Kundu and Joarder [3] proposed a Type-II progressive hybrid censoring scheme where the experiment terminates at a pre-specified time. However, for the Type-II progressive hybrid censoring scheme, the drawback is that the effective sample size is a random variable, which may be very small or even zero. To strike a balance between the total testing time and the efficiency in statistical inference, Ng et al. [4] introduced an adaptive Type-II progressive hybrid censoring scheme (ATII-PHCS). This censoring scheme is described as follows. Suppose that *n* units are placed on test and *X*1, *X*2, ... , *Xn* denote the corresponding lifetimes from a distribution with the cumulative distribution function

**Citation:** Shi, X.; Shi, Y.; Zhou, K. Estimation for Entropy and Parameters of Generalized Bilal Distribution under Adaptive Type II Progressive Hybrid Censoring Scheme. *Entropy* **2021**, *23*, 206. https://doi.org/10.3390/e23020206

Received: 17 December 2020 Accepted: 4 February 2021 Published: 8 February 2021



(CDF) *F*(*x*) and the probability density function (PDF) *f*(*x*). The number of observed failures *m* and the time *T* are specified in advance, with *m* < *n*. At the first failure time *X*<sub>1:*m*:*n*</sub>, *R*<sub>1</sub> units are randomly removed from the remaining *n* − 1 units. Similarly, at the second failure time *X*<sub>2:*m*:*n*</sub>, *R*<sub>2</sub> units are randomly removed from the remaining *n* − 2 − *R*<sub>1</sub> units, and so on. If the *m*th failure occurs before time *T* (i.e., *X*<sub>*m*:*m*:*n*</sub> < *T*), the test terminates at time *X*<sub>*m*:*m*:*n*</sub> and all remaining *R*<sub>*m*</sub> = *n* − *m* − ∑<sup>*m*−1</sup><sub>*i*=1</sub> *R*<sub>*i*</sub> units are removed, where the *R*<sub>*i*</sub> (*i* = 1, 2, ... , *m*) are specified in advance. If instead only *J* < *m* failures occur before time *T* (i.e., *X*<sub>*J*:*m*:*n*</sub> < *T* < *X*<sub>*J*+1:*m*:*n*</sub>), then we withdraw no further units from the test by setting *R*<sub>*J*+1</sub> = *R*<sub>*J*+2</sub> = ... = *R*<sub>*m*−1</sub> = 0, and the test continues until the number of failures reaches the prefixed number *m*. At the time of the *m*th failure, all remaining *R*<sub>*m*</sub> = *n* − *m* − ∑<sup>*J*</sup><sub>*i*=1</sub> *R*<sub>*i*</sub> units are removed and the test terminates.

The main advantage of the ATII-PHCS is that it speeds up the test when the test duration exceeds the predetermined time *T* while still guaranteeing the prescribed number of failures *m*. It also gives the experimenter control over the experiment: if one is interested in obtaining observations early, one can remove fewer units (or even none) at each failure. For convenience, we let *X*<sub>*i*</sub> = *X*<sub>*i*:*m*:*n*</sub>, *i* = 1, 2, ... , *m*. After the above test, we obtain one of the following two observed data cases:

Case I: (*X*<sub>1</sub>, *R*<sub>1</sub>), (*X*<sub>2</sub>, *R*<sub>2</sub>), ... , (*X*<sub>*m*</sub>, *R*<sub>*m*</sub>) if *X*<sub>*m*</sub> < *T*, where *R*<sub>*m*</sub> = *n* − *m* − ∑<sup>*m*−1</sup><sub>*i*=1</sub> *R*<sub>*i*</sub>.

Case II: (*X*<sub>1</sub>, *R*<sub>1</sub>), (*X*<sub>2</sub>, *R*<sub>2</sub>), ... , (*X*<sub>*J*</sub>, *R*<sub>*J*</sub>), (*X*<sub>*J*+1</sub>, 0), ... , (*X*<sub>*m*−1</sub>, 0), (*X*<sub>*m*</sub>, *R*<sub>*m*</sub>) if *X*<sub>*J*</sub> < *T* < *X*<sub>*J*+1</sub> and *J* < *m*, where *R*<sub>*m*</sub> = *n* − *m* − ∑<sup>*J*</sup><sub>*i*=1</sub> *R*<sub>*i*</sub>.
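The removal mechanism above can be sketched as a small simulation. The following is a minimal illustrative sketch, not taken from the paper: `atii_phcs_sample` and the exponential latent lifetimes are our own assumptions. The function returns the *m* observed failure times and the removals actually applied under the adaptive rule.

```python
import random

def atii_phcs_sample(lifetimes, R, T, rng=random):
    """Adaptive Type-II progressive hybrid censoring of a set of latent lifetimes.

    R is the planned removal scheme R_1, ..., R_m with sum(R) + m == len(lifetimes).
    Returns the m observed failure times and the removals actually applied.
    """
    n, m = len(lifetimes), len(R)
    assert sum(R) + m == n
    pool = list(lifetimes)            # surviving units still on test
    failures, applied = [], []
    for i in range(m):
        x = min(pool)                 # next failure
        pool.remove(x)
        failures.append(x)
        if i == m - 1:
            r = len(pool)             # mth failure: withdraw every survivor
        elif x < T:
            r = R[i]                  # before T: follow the planned scheme
        else:
            r = 0                     # past T: adaptive rule, no removals
        for _ in range(r):            # withdraw r randomly chosen survivors
            pool.pop(rng.randrange(len(pool)))
        applied.append(r)
    return failures, applied
```

With *T* = 0 the adaptive rule is active from the first failure (Case II with *J* = 0), so all intermediate removals are zero and every surviving unit is withdrawn at the *m*th failure; with *T* = ∞ the planned progressive Type-II scheme is followed exactly (Case I).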

The ATII-PHCS has been studied in recent years. Mazen et al. [5] discussed the statistical analysis of the Weibull distribution under an adaptive Type-II progressive hybrid censoring scheme. Zhang et al. [6] investigated the maximum likelihood estimates (MLEs) of the unknown parameters and acceleration factors in the step-stress accelerated life test, based on the tampered failure rate model with ATII-PHC samples. Cui et al. [7] studied the point and interval estimates of the parameters of the Weibull distribution, based on adaptive Type-II progressive hybrid censored data in a constant-stress accelerated life test. Ismail [8] derived the MLEs of the Weibull distribution parameters and the acceleration factor based on ATII-PHC schemes under a step-stress partially accelerated life test model. The statistical inference of the dependent competitive failure system under the constant-stress accelerated life test with ATII-PHC data was studied by Zhang et al. [9]. Under an adaptive Type-II progressive censoring scheme, Ye et al. [10] investigated the general statistical properties and then used the maximum likelihood technique to estimate the parameters of the extreme value distribution. Some other studies on the statistical inference of life models using ATII-PHCS were presented by Sobhi and Soliman [11] and Nassar et al. [12]. Xu and Gui [13] studied entropy estimation for the two-parameter inverse Weibull distribution under adaptive Type-II progressive hybrid censoring schemes.

Entropy measures the uncertainty associated with a random variable. Let X be a random variable having a continuous CDF *F*(*x*) and PDF *f*(*x*). Then, the Shannon entropy is defined as

$$H(f) = -\int\_{-\infty}^{+\infty} f(x) \ln f(x) dx. \tag{1}$$

In recent years, several scholars have studied the entropy estimation of different life distributions. Kang et al. [14] investigated the entropy estimators of a double exponential distribution based on multiply Type-II censored samples. Cho et al. [15] derived an estimation for the entropy function of a Rayleigh distribution based on doubly generalized Type-II hybrid censored samples. Baratpour et al. [16] developed the entropy of the upper record values and provided several upper and lower bounds for this entropy by using the hazard rate function. Cramer and Bagh [17] discussed the entropy of the Weibull distribution under progressive censoring. Cho et al. [18] obtained estimators for the entropy function of the Weibull distribution based on a generalized Type-II hybrid censored sample. Yu et al. [19] studied statistical inference in the Shannon entropy of the inverse Weibull distribution under progressive first-failure censoring.

In addition to the above-mentioned life distributions, the generalized Bilal (GB) distribution is also an important life distribution for analyzing lifetime data. The PDF and the CDF of the GB distribution, respectively, are given as

$$f(x;\beta,\lambda)=6\beta\lambda x^{\lambda-1}\exp(-2\beta x^{\lambda})\left[1-\exp(-\beta x^{\lambda})\right],\quad x>0,\ \beta>0,\ \lambda>0,\tag{2}$$

$$F(x;\beta,\lambda)=1-\exp(-2\beta x^{\lambda})\left[3-2\exp(-\beta x^{\lambda})\right],\quad x>0,\ \beta>0,\ \lambda>0.\tag{3}$$

The Shannon entropy of the GB distribution is given by

$$H(f)=H(\beta,\lambda)=2.5+\gamma-\ln(27/4)-\ln\!\left(\lambda\beta^{\frac{1}{\lambda}}\right)+\frac{1}{\lambda}\left(\ln(9/8)-\gamma\right),\quad \beta>0,\ \lambda>0,$$

where *γ* denotes the Euler–Mascheroni constant, *γ* ≈ 0.5772.
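The closed form above can be checked numerically against the definition in Equation (1). The sketch below is illustrative and assumes only Equations (1)–(3); `entropy_numeric` approximates −∫ *f* ln *f* d*x* by a trapezoidal sum, and the integration limits are ad hoc choices that cover the bulk of the density.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gb_pdf(x, beta, lam):
    # Equation (2): PDF of the generalized Bilal distribution
    u = beta * x ** lam
    return 6 * beta * lam * x ** (lam - 1) * math.exp(-2 * u) * (1 - math.exp(-u))

def gb_entropy(beta, lam):
    # Closed-form Shannon entropy of the GB distribution
    return (2.5 + GAMMA - math.log(27 / 4) - math.log(lam * beta ** (1 / lam))
            + (math.log(9 / 8) - GAMMA) / lam)

def entropy_numeric(beta, lam, upper=12.0, steps=100_000):
    # Trapezoidal approximation of -∫ f ln f dx on (0, upper);
    # endpoint contributions vanish since f -> 0 at both ends.
    h = upper / steps
    total = 0.0
    for k in range(1, steps):
        f = gb_pdf(k * h, beta, lam)
        if f > 0:
            total -= f * math.log(f) * h
    return total
```

For example, with *β* = 1 and *λ* = 2 the closed form gives H ≈ 0.245, and the numerical integral agrees to three decimal places.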

The GB distribution was first introduced by Abd-Elrahman [20], who investigated the properties of its probability density and failure rate functions, provided a comprehensive mathematical treatment of the distribution, and derived the maximum likelihood estimates of the unknown parameters under the complete sample. Abd-Elrahman [21] provided the MLEs and Bayesian estimates of the unknown parameters and the reliability function based on a Type-II censored sample. Since the failure rate function of the GB distribution has an upside-down bathtub shape, and can also be monotonically decreasing or monotonically increasing for some values of the shape parameter *λ*, the GB model is very useful in survival analysis and reliability studies.

To the best of our knowledge, there has been no published work on the estimation of the entropy and parameters of the GB distribution under an ATII-PHCS. As such, these issues are considered in this paper. The main objective of this paper is to provide estimates of the entropy and unknown parameters of the GB distribution under an ATII-PHCS by using both frequentist and Bayesian methods.

The rest of this paper is organized as follows. In Section 2, the MLEs of the parameters and entropy of GB distribution are obtained, and approximate confidence intervals are constructed using the ATII-PHC data. In Section 3, the Bayesian estimation of the parameters and entropy under three different loss functions are provided using Lindley's approximation method. In addition, the Bayesian credible intervals of the parameters and entropy are also obtained by using the Markov chain Monte Carlo (MCMC) method. In Section 4, Monte Carlo simulations are carried out to investigate the performance of different point estimates and interval estimates. In Section 5, a real data set is analyzed for illustrative purposes. Some conclusions are presented in Section 6.

#### **2. Maximum Likelihood Estimation**

In this section, the MLE and approximate confidence intervals of the parameters and entropy of GB distribution will be discussed under the ATII-PHCS. Based on the data in Case I and Case II, the likelihood functions can be respectively written as

$$\text{Case I: } L_{I}(\beta,\lambda|\vec{x})\propto\prod_{i=1}^{m}f(x_{i};\beta,\lambda)\left[1-F(x_{i};\beta,\lambda)\right]^{R_{i}},\tag{4}$$

$$\text{Case II: } L_{II}(\beta,\lambda|\vec{x})\propto\prod_{i=1}^{m}f(x_{i};\beta,\lambda)\prod_{i=1}^{J}\left[1-F(x_{i};\beta,\lambda)\right]^{R_{i}}\left[1-F(x_{m};\beta,\lambda)\right]^{n-m-\sum_{i=1}^{J}R_{i}},\tag{5}$$

where $\vec{x}=(x_{1},x_{2},\dots,x_{m})$.

By combining $L_{I}(\beta,\lambda|\vec{x})$ and $L_{II}(\beta,\lambda|\vec{x})$, the likelihood function can be written in the unified form

$$\begin{split}L(\beta,\lambda|\vec{x})&\propto\prod_{i=1}^{m}f(x_{i};\beta,\lambda)\prod_{i=1}^{D}\left[1-F(x_{i};\beta,\lambda)\right]^{R_{i}}\left[1-F(x_{m};\beta,\lambda)\right]^{R^{*}}\\&=\prod_{i=1}^{m}6\beta\lambda x_{i}^{\lambda-1}\exp(-2\beta x_{i}^{\lambda})[1-\exp(-\beta x_{i}^{\lambda})]\prod_{i=1}^{D}\left[\exp(-2\beta x_{i}^{\lambda})(3-2\exp(-\beta x_{i}^{\lambda}))\right]^{R_{i}}\\&\quad\times\left[\exp(-2\beta x_{m}^{\lambda})(3-2\exp(-\beta x_{m}^{\lambda}))\right]^{R^{*}},\end{split}\tag{6}$$

where, for Case I, *D* = *m* and *R*<sup>∗</sup> = 0, and for Case II, *D* = *J* and *R*<sup>∗</sup> = *n* − *m* − ∑<sup>*J*</sup><sub>*i*=1</sub> *R*<sub>*i*</sub>.

The log-likelihood function is given by

$$\begin{split}l=\ln L(\beta,\lambda|\vec{x})&\propto m\ln(6\beta\lambda)+\sum_{i=1}^{m}\left[(\lambda-1)\ln x_{i}-2\beta x_{i}^{\lambda}+\ln\!\left(1-\exp(-\beta x_{i}^{\lambda})\right)\right]\\&\quad+\sum_{i=1}^{D}\left[-2R_{i}\beta x_{i}^{\lambda}+R_{i}\ln\!\left(3-2\exp(-\beta x_{i}^{\lambda})\right)\right]-2R^{*}\beta x_{m}^{\lambda}+R^{*}\ln\!\left(3-2\exp(-\beta x_{m}^{\lambda})\right).\end{split}\tag{7}$$

By taking the first partial derivative of the log-likelihood function with regard to *β* and *λ* and equating them to zero, the following results can be obtained:

$$\frac{\partial l}{\partial\beta}=\frac{m}{\beta}+\sum_{i=1}^{m}\left[-3x_{i}^{\lambda}+x_{i}^{\lambda}[y_{1}(\theta)]^{-1}\right]+\sum_{i=1}^{D}\left[-3R_{i}x_{i}^{\lambda}+3R_{i}x_{i}^{\lambda}[y_{2}(\theta)]^{-1}\right]-3R^{*}x_{m}^{\lambda}+3R^{*}x_{m}^{\lambda}[y_{3}(\theta)]^{-1}=0,\tag{8}$$

$$\begin{split}\frac{\partial l}{\partial\lambda}&=\frac{m}{\lambda}+\sum_{i=1}^{m}\left[\ln x_{i}-3\beta x_{i}^{\lambda}\ln x_{i}+\beta x_{i}^{\lambda}\ln x_{i}\,[y_{1}(\theta)]^{-1}\right]+\sum_{i=1}^{D}\left[-3R_{i}\beta x_{i}^{\lambda}\ln x_{i}+3R_{i}\beta x_{i}^{\lambda}\ln x_{i}\,[y_{2}(\theta)]^{-1}\right]\\&\quad-3R^{*}\beta x_{m}^{\lambda}\ln x_{m}+3R^{*}\beta x_{m}^{\lambda}\ln x_{m}\,[y_{3}(\theta)]^{-1}=0,\end{split}\tag{9}$$

where *θ* = (*β*, *λ*), *y*<sub>1</sub>(*θ*) = 1 − exp(−*βx*<sub>*i*</sub><sup>*λ*</sup>), *y*<sub>2</sub>(*θ*) = 3 − 2 exp(−*βx*<sub>*i*</sub><sup>*λ*</sup>), and *y*<sub>3</sub>(*θ*) = 3 − 2 exp(−*βx*<sub>*m*</sub><sup>*λ*</sup>). The MLEs of *β* and *λ* can be obtained by solving Equations (8) and (9), but these two equations do not admit an analytical solution. Thus, we use the Newton–Raphson iteration method to obtain the MLEs of the parameters. For this purpose, we first calculate the second partial derivatives of the log-likelihood function with regard to *β* and *λ*:

$$\frac{\partial^{2}l}{\partial\beta^{2}}=-\frac{m}{\beta^{2}}-\sum_{i=1}^{m}x_{i}^{2\lambda}\exp(-\beta x_{i}^{\lambda})[y_{1}(\theta)]^{-2}-\sum_{i=1}^{D}6R_{i}x_{i}^{2\lambda}\exp(-\beta x_{i}^{\lambda})[y_{2}(\theta)]^{-2}-6R^{*}x_{m}^{2\lambda}\exp(-\beta x_{m}^{\lambda})[y_{3}(\theta)]^{-2},\tag{10}$$

$$\begin{split}\frac{\partial^{2}l}{\partial\beta\partial\lambda}&=\sum_{i=1}^{m}\left\{-3x_{i}^{\lambda}\ln x_{i}+x_{i}^{\lambda}\ln x_{i}\,[y_{1}(\theta)]^{-1}\left[1-\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})[y_{1}(\theta)]^{-1}\right]\right\}\\&\quad+\sum_{i=1}^{D}\left\{-3R_{i}x_{i}^{\lambda}\ln x_{i}+3R_{i}x_{i}^{\lambda}\ln x_{i}\,[y_{2}(\theta)]^{-1}\left[1-2\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})[y_{2}(\theta)]^{-1}\right]\right\}\\&\quad-3R^{*}x_{m}^{\lambda}\ln x_{m}+3R^{*}x_{m}^{\lambda}\ln x_{m}\,[y_{3}(\theta)]^{-1}\left[1-2\beta x_{m}^{\lambda}\exp(-\beta x_{m}^{\lambda})[y_{3}(\theta)]^{-1}\right],\end{split}\tag{11}$$

$$\begin{split}\frac{\partial^{2}l}{\partial\lambda^{2}}&=-\frac{m}{\lambda^{2}}+\sum_{i=1}^{m}\left\{\beta x_{i}^{\lambda}(\ln x_{i})^{2}\left[-3+[y_{1}(\theta)]^{-1}\right]-\beta^{2}x_{i}^{2\lambda}(\ln x_{i})^{2}\exp(-\beta x_{i}^{\lambda})[y_{1}(\theta)]^{-2}\right\}\\&\quad+\sum_{i=1}^{D}\left\{-3R_{i}\beta x_{i}^{\lambda}(\ln x_{i})^{2}\left[1-[y_{2}(\theta)]^{-1}\right]-6R_{i}\beta^{2}x_{i}^{2\lambda}(\ln x_{i})^{2}\exp(-\beta x_{i}^{\lambda})[y_{2}(\theta)]^{-2}\right\}\\&\quad-3R^{*}\beta x_{m}^{\lambda}(\ln x_{m})^{2}\left[1-[y_{3}(\theta)]^{-1}\right]-6R^{*}\beta^{2}x_{m}^{2\lambda}(\ln x_{m})^{2}\exp(-\beta x_{m}^{\lambda})[y_{3}(\theta)]^{-2}.\end{split}\tag{12}$$

$$\text{Let } I(\boldsymbol{\beta},\lambda)=\begin{bmatrix}I\_{11}&I\_{12}\\I\_{21}&I\_{22}\end{bmatrix}, \text{where}$$


$$I\_{11} = -\frac{\partial^2 l}{\partial \beta^2},\ I\_{22} = -\frac{\partial^2 l}{\partial \lambda^2}, I\_{12} = I\_{21} = -\frac{\partial^2 l}{\partial \beta \partial \lambda}.\tag{13}$$

On the basis of the above calculation results, we can implement the Newton–Raphson iteration method to obtain the MLEs of the unknown parameters. The specific steps of this iteration method can be seen in Appendix B. After obtaining the MLEs *β*ˆ and *λ*ˆ of the parameters *β* and *λ*, using the invariance property of MLEs, the MLE of the entropy H(*f*) for the generalized Bilal distribution is given by

$$\hat{H}(f) = 2.5 + \gamma - \ln(27/4) - \frac{1}{\hat{\lambda}} \ln \hat{\beta} - \ln \hat{\lambda} + \frac{1}{\hat{\lambda}} (\ln(9/8) - \gamma). \tag{14}$$
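As a rough illustration of this estimation route, the sketch below treats the complete-sample special case (all *R*<sub>*i*</sub> = 0, *D* = *m*, *R*<sup>∗</sup> = 0) and replaces the closed-form derivatives (8)–(12) with finite differences; it is not the paper's Appendix B algorithm. `gb_sample` (which uses the fact that the kernel density 6(e<sup>−2*y*</sup> − e<sup>−3*y*</sup>) is that of the median of three unit exponentials), the grid of starting values, and the step-damping rule are all our own illustrative choices.

```python
import math
import random

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gb_sample(n, beta, lam, rng):
    # Y ~ 6(e^{-2y} - e^{-3y}) is the median of three unit exponentials;
    # X = (Y / beta)^(1/lam) then follows the GB(beta, lam) distribution.
    xs = []
    for _ in range(n):
        y = sorted(rng.expovariate(1.0) for _ in range(3))[1]
        xs.append((y / beta) ** (1.0 / lam))
    return xs

def loglik(beta, lam, xs):
    # Complete-sample special case of Equation (7): D = m = n, all R_i = R* = 0
    s = len(xs) * math.log(6 * beta * lam)
    for x in xs:
        u = beta * x ** lam
        s += (lam - 1) * math.log(x) - 2 * u + math.log(1 - math.exp(-u))
    return s

def newton_mle(xs, iters=60, eps=1e-4):
    # Crude grid search for a starting point, then damped Newton-Raphson with
    # finite-difference gradient and Hessian of the log-likelihood.
    b, l = max(((bb, ll) for bb in (0.25, 0.5, 1.0, 2.0, 4.0)
                for ll in (0.25, 0.5, 1.0, 2.0, 4.0)),
               key=lambda p: loglik(p[0], p[1], xs))
    for _ in range(iters):
        f0 = loglik(b, l, xs)
        gb = (loglik(b + eps, l, xs) - loglik(b - eps, l, xs)) / (2 * eps)
        gl = (loglik(b, l + eps, xs) - loglik(b, l - eps, xs)) / (2 * eps)
        hbb = (loglik(b + eps, l, xs) - 2 * f0 + loglik(b - eps, l, xs)) / eps ** 2
        hll = (loglik(b, l + eps, xs) - 2 * f0 + loglik(b, l - eps, xs)) / eps ** 2
        hbl = (loglik(b + eps, l + eps, xs) - loglik(b + eps, l - eps, xs)
               - loglik(b - eps, l + eps, xs) + loglik(b - eps, l - eps, xs)) / (4 * eps ** 2)
        det = hbb * hll - hbl ** 2
        db, dl = (hll * gb - hbl * gl) / det, (hbb * gl - hbl * gb) / det
        step = 1.0
        while (b - step * db <= 0 or l - step * dl <= 0
               or loglik(b - step * db, l - step * dl, xs) < f0):
            step /= 2
            if step < 1e-10:        # no improving Newton step left: stop
                return b, l
        b, l = b - step * db, l - step * dl
    return b, l

def entropy_mle(beta_hat, lam_hat):
    # Equation (14): entropy MLE via the invariance property
    return (2.5 + GAMMA - math.log(27 / 4) - math.log(beta_hat) / lam_hat
            - math.log(lam_hat) + (math.log(9 / 8) - GAMMA) / lam_hat)
```

The line search only accepts steps that stay in the parameter space and do not decrease the log-likelihood, which keeps the crude finite-difference Newton iteration stable.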

#### *Approximate Confidence Interval*

In this subsection, the approximate confidence intervals of the parameters *β*, *λ* and the Shannon entropy H(*f*) are derived. Under the usual regularity conditions, the MLEs (*β*ˆ, *λ*ˆ) are approximately distributed as the bivariate normal *N*((*β*, *λ*), I<sup>−1</sup>(*β*ˆ, *λ*ˆ)), where the covariance matrix I<sup>−1</sup>(*β*ˆ, *λ*ˆ) is an estimate of I<sup>−1</sup>(*β*, *λ*) and

$$I^{-1}(\hat{\beta},\hat{\lambda})=\begin{bmatrix}I_{11}&I_{12}\\I_{21}&I_{22}\end{bmatrix}^{-1}\Bigg|_{(\beta,\lambda)=(\hat{\beta},\hat{\lambda})},$$

where *I*<sub>11</sub>, *I*<sub>22</sub>, *I*<sub>12</sub> and *I*<sub>21</sub> are given by Equations (10)–(13).

Thus, the approximate 100(1 − *α*)% two-sided confidence intervals (CIs) for parameters *β*, *λ* are given by

$$\left(\hat{\beta}\pm z_{\alpha/2}\sqrt{\widehat{Var}(\hat{\beta})}\right),\quad\left(\hat{\lambda}\pm z_{\alpha/2}\sqrt{\widehat{Var}(\hat{\lambda})}\right),\tag{15}$$

where *z*<sub>*α*/2</sub> is the upper *α*/2 percentile of the standard normal distribution and *Var*(*β*ˆ), *Var*(*λ*ˆ) are the main diagonal elements of the matrix I<sup>−1</sup>(*β*ˆ, *λ*ˆ).

Next, we use the delta method to obtain the asymptotic confidence interval of the entropy H (*f*). The delta method is a general approach to compute CIs for functions of MLEs. Under a progressive Type-II censored sample, the authors of [22] used the delta method to study the estimation of a new Weibull–Pareto distribution. The authors of [23] also used this method to investigate the estimation of the two-parameter bathtub lifetime model.

Let $M^{T}=\left(\frac{\partial H(f)}{\partial\beta},\frac{\partial H(f)}{\partial\lambda}\right)$, where $\frac{\partial H(f)}{\partial\beta}=-\frac{1}{\beta\lambda}$ and $\frac{\partial H(f)}{\partial\lambda}=\frac{1}{\lambda^{2}}\ln\beta-\frac{1}{\lambda}-\frac{1}{\lambda^{2}}\left(\ln\frac{9}{8}-\gamma\right)$. Then, the approximate estimate of $var(\hat{H}(f))$ is given by

$$\widehat{var}(\hat{H}(f))=\left[M^{T}I^{-1}(\beta,\lambda)M\right]\Big|_{(\beta,\lambda)=(\hat{\beta},\hat{\lambda})},$$

where *β*ˆ and *λ*ˆ are the MLEs of *β* and *λ*, respectively, and *I*<sup>−1</sup>(*β*, *λ*) denotes the inverse of the matrix $I(\beta,\lambda)=\begin{bmatrix}I_{11}&I_{12}\\I_{21}&I_{22}\end{bmatrix}$, whose elements are given by Equations (10)–(13). Thus, $\frac{\hat{H}(f)-H(f)}{\sqrt{\widehat{var}(\hat{H}(f))}}$ is asymptotically distributed as *N*(0, 1), and the asymptotic 100(1 − *α*)% CI for the entropy H(*f*) is given by

$$\left(\hat{H}(f)\pm z_{\alpha/2}\sqrt{\widehat{var}(\hat{H}(f))}\right),$$

where *zα*/2 is the upper *α*/2 percentile of the standard normal distribution.
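The delta-method step above is a small linear-algebra computation. A minimal sketch, given the MLEs and the 2×2 observed information matrix from Equation (13); the numeric inputs used in the usage example are hypothetical:

```python
import math

Z975 = 1.959963984540054    # upper 2.5% point of N(0, 1)
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def entropy_ci(beta_hat, lam_hat, info):
    """95% delta-method CI for H(f); `info` is the 2x2 observed information matrix."""
    # Equation (14): entropy MLE
    h_hat = (2.5 + GAMMA - math.log(27 / 4) - math.log(beta_hat) / lam_hat
             - math.log(lam_hat) + (math.log(9 / 8) - GAMMA) / lam_hat)
    # gradient M of H(f) with respect to (beta, lambda), at the MLEs
    m1 = -1.0 / (beta_hat * lam_hat)
    m2 = (math.log(beta_hat) / lam_hat ** 2 - 1.0 / lam_hat
          - (math.log(9 / 8) - GAMMA) / lam_hat ** 2)
    (a, b), (c, d) = info
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))   # I^{-1}
    var = (m1 * (inv[0][0] * m1 + inv[0][1] * m2)
           + m2 * (inv[1][0] * m1 + inv[1][1] * m2))   # M^T I^{-1} M
    half = Z975 * math.sqrt(var)
    return h_hat - half, h_hat + half, var
```

For instance, `entropy_ci(1.0, 2.0, ((4.0, 1.0), (1.0, 2.0)))` returns an interval centered at the entropy MLE, with the hypothetical information matrix standing in for Equation (13).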

#### **3. Bayesian Estimation**

In this section, we discuss the Bayesian point estimation of the parameters and entropy H (*f*) for generalized Bilal distribution using Lindley's approximation method under symmetric as well as asymmetric loss functions. Furthermore, the Bayesian CI of the parameters and entropy are also derived by using the Markov chain Monte Carlo method.

#### *3.1. Loss Functions and Posterior Distribution*

Choosing the loss function is an important part in the Bayesian inference. The commonly used symmetric loss function is the squared error loss (SEL) function, which is defined as

$$L\_1(\mathbf{U}, \hat{\mathcal{U}}) = \left(\hat{\mathcal{U}} - \mathcal{U}\right)^2. \tag{16}$$

Two popular asymmetric loss functions are the Linex loss (LL) and general entropy loss (GEL) functions, which are respectively given by

$$L\_2(\mathbb{U}, \hat{\mathcal{U}}) = \exp(h(\hat{\mathcal{U}} - \mathcal{U})) - h\left(\hat{\mathcal{U}} - \mathcal{U}\right) - 1, \ h \neq 0,\tag{17}$$

$$L_{3}(U,\hat{U})=\left(\frac{\hat{U}}{U}\right)^{q}-q\ln\left(\frac{\hat{U}}{U}\right)-1,\ q\neq 0.\tag{18}$$

Here, *U* = *U*(*β*, *λ*) is any function of *β* and *λ*, and *U*ˆ is an estimate of *U*. The constants *h* and *q* represent the weight assigned to errors in different directions. Under the above loss functions, the Bayesian estimates of the function *U* can be calculated by

$$
\hat{\mathcal{U}}\_{\mathcal{S}} = E(\mathcal{U}|\stackrel{\rightarrow}{\mathfrak{x}}).\tag{19}
$$

$$\hat{U}_{L}=-\frac{1}{h}\ln\left[E(\exp(-hU)|\vec{x})\right],\quad h\neq 0.\tag{20}$$

$$
\hat{\mathcal{U}}\_E = \left[ E(\mathcal{U}^{-q}|\vec{\mathbf{x}}^\dagger) \right]^{-1/q}, \quad q \neq 0 \; . \tag{21}
$$
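When posterior draws of *U* are available (for example, from the MCMC output mentioned above), the expectations in (19)–(21) reduce to sample averages. A minimal sketch; the sample values, *h* and *q* below are illustrative:

```python
import math

def bayes_estimates(samples, h=1.0, q=1.0):
    """Bayes estimates of U under the SEL, LL and GEL losses, approximating
    the posterior expectations in Equations (19)-(21) by sample means."""
    n = len(samples)
    u_sel = sum(samples) / n                                        # (19): posterior mean
    u_ll = -math.log(sum(math.exp(-h * u) for u in samples) / n) / h  # (20)
    u_gel = (sum(u ** (-q) for u in samples) / n) ** (-1.0 / q)       # (21)
    return u_sel, u_ll, u_gel
```

By Jensen's inequality the Linex estimate with *h* > 0 and the GEL estimate with *q* > 0 never exceed the posterior mean, and as *h* → 0 the Linex estimate converges back to the SEL estimate.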

To derive the Bayesian estimates of the function *U*(*β*, *λ*), we consider prior distributions of the unknown parameters *β* and *λ* as independent Gamma distributions *Ga* (*a*, *b*) and *Ga* (*c*, *d*), respectively. Therefore, the joint prior distribution of *β* and *λ* becomes

$$\pi(\beta,\lambda)=\frac{b^{a}\beta^{a-1}}{\Gamma(a)}\exp(-b\beta)\,\frac{d^{c}\lambda^{c-1}}{\Gamma(c)}\exp(-d\lambda),\quad \beta,\lambda,a,b,c,d>0.$$

Based on the likelihood function *L*(*β*, *λ*| → *x* ) and the joint prior distribution of *β* and *λ*, the joint posterior density of parameters *β* and *λ* can be written as

$$
\pi(\boldsymbol{\beta},\boldsymbol{\lambda}|\stackrel{\rightarrow}{\mathbf{x}}) = \frac{\pi(\boldsymbol{\beta},\boldsymbol{\lambda})L(\boldsymbol{\beta},\boldsymbol{\lambda}|\stackrel{\rightarrow}{\mathbf{x}})}{\int\_{0}^{\infty}\int\_{0}^{\infty}\pi(\boldsymbol{\beta},\boldsymbol{\lambda})L(\boldsymbol{\beta},\boldsymbol{\lambda}|\stackrel{\rightarrow}{\mathbf{x}})d\boldsymbol{\beta}d\boldsymbol{\lambda}}
$$

$$\begin{aligned}&\propto\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})\\&=\beta^{a-1}\exp(-b\beta)\lambda^{c-1}\exp(-d\lambda)A_{1}(\beta,\lambda)A_{2}(\beta,\lambda)A_{3}(\beta,\lambda),\end{aligned}\tag{22}$$

where

$$A\_{1}(\boldsymbol{\beta},\boldsymbol{\lambda}) = \prod\_{i=1}^{m} 6\beta\lambda \mathbf{x}\_{i}^{\lambda-1} \exp(-2\beta \mathbf{x}\_{i}^{\lambda}) [1 - \exp(-\beta \mathbf{x}\_{i}^{\lambda})],$$

$$A\_{2}(\boldsymbol{\beta},\boldsymbol{\lambda}) = \prod\_{i=1}^{D} \left[\exp(-2\beta \mathbf{x}\_{i}^{\lambda})(3 - 2\exp(-\beta \mathbf{x}\_{i}^{\lambda}))\right]^{R\_{i}}.$$

$$A\_{3}(\boldsymbol{\beta},\boldsymbol{\lambda}) = \left[\exp(-2\beta \mathbf{x}\_{m}^{\lambda})(3 - 2\exp(-\beta \mathbf{x}\_{m}^{\lambda}))\right]^{R^{\*}}.$$

Therefore, the Bayesian estimate of *U*(*β*, *λ*) under the SEL, LL and GEL functions are respectively given by

$$\hat{U}_{S}(\beta,\lambda)=\frac{\int_{0}^{\infty}\int_{0}^{\infty}U(\beta,\lambda)\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda}{\int_{0}^{\infty}\int_{0}^{\infty}\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda},\tag{23}$$

$$\hat{U}_{L}(\beta,\lambda)=-\frac{1}{h}\ln\left[\frac{\int_{0}^{\infty}\int_{0}^{\infty}\exp(-hU(\beta,\lambda))\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda}{\int_{0}^{\infty}\int_{0}^{\infty}\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda}\right],\tag{24}$$

$$\hat{U}_{E}(\beta,\lambda)=\left[\frac{\int_{0}^{\infty}\int_{0}^{\infty}(U(\beta,\lambda))^{-q}\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda}{\int_{0}^{\infty}\int_{0}^{\infty}\pi(\beta,\lambda)L(\beta,\lambda|\vec{x})d\beta d\lambda}\right]^{-\frac{1}{q}}.\tag{25}$$

#### *3.2. Lindley's Approximation*

From Equations (23)–(25), it is observed that all of these estimates of *U*(*β*, *λ*) take the form of a ratio of two integrals which cannot be reduced to a closed form. Therefore, we use Lindley's approximation method to obtain the Bayesian estimates. If we let *θ* = (*θ*<sub>1</sub>, *θ*<sub>2</sub>), then the posterior expectation of a function U(*θ*<sub>1</sub>, *θ*<sub>2</sub>) can be approximated as in [18]:

$$\hat{\mathcal{U}} = \mathcal{U}(\boldsymbol{\theta}\_1, \boldsymbol{\theta}\_2) + 0.5(\boldsymbol{A} + z\_{30}\boldsymbol{B}\_{12} + z\_{03}\mathbf{B}\_{21} + z\_{21}\mathbf{C}\_{12} + z\_{12}\mathbf{C}\_{21}) + p\_1\boldsymbol{A}\_{12} + p\_2\boldsymbol{A}\_{21},\tag{26}$$

where U(*θ*ˆ<sub>1</sub>, *θ*ˆ<sub>2</sub>) is the MLE of U(*θ*<sub>1</sub>, *θ*<sub>2</sub>) and

$$A=\sum_{i=1}^{2}\sum_{j=1}^{2}u_{ij}\tau_{ij},\quad A_{ij}=u_{i}\tau_{ii}+u_{j}\tau_{ij},\quad B_{ij}=(u_{i}\tau_{ii}+u_{j}\tau_{ij})\tau_{ii},\quad C_{ij}=3u_{i}\tau_{ii}\tau_{ij}+u_{j}(\tau_{ii}\tau_{jj}+2\tau_{ij}^{2}),$$

$$p_{i}=\frac{\partial p}{\partial\theta_{i}},\quad u_{i}=\frac{\partial U}{\partial\theta_{i}},\quad u_{ij}=\frac{\partial^{2}U}{\partial\theta_{i}\partial\theta_{j}},\quad z_{ij}=\frac{\partial^{i+j}l(\theta_{1},\theta_{2})}{\partial\theta_{1}^{\,i}\partial\theta_{2}^{\,j}},\quad i,j=0,1,2,3,\ i+j=3,$$

where *p* = ln *π*(*θ*<sub>1</sub>, *θ*<sub>2</sub>) denotes the logarithm of the joint prior density, *l* denotes the log-likelihood function and *τ*<sub>*ij*</sub> denotes the (*i*, *j*)th element of the matrix [−*∂*<sup>2</sup>*l*/*∂θ*<sub>*i*</sub>*∂θ*<sub>*j*</sub>]<sup>−1</sup>. All terms are evaluated at the MLEs of the parameters *θ*<sub>1</sub> and *θ*<sub>2</sub>.

Based on the above equations, we have

$$\begin{split}z_{30}=\frac{\partial^{3}l}{\partial\beta^{3}}&=\frac{2m}{\beta^{3}}+\sum_{i=1}^{m}\left\{x_{i}^{3\lambda}\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-2}\left[1+2\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-1}\right]\right\}\\&\quad+\sum_{i=1}^{D}\left\{6R_{i}x_{i}^{3\lambda}\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-2}\left[1+4\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-1}\right]\right\}\\&\quad+6R^{*}x_{m}^{3\lambda}\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-2}\left[1+4\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-1}\right].\end{split}\tag{27}$$

$$\begin{split}z_{03}=\frac{\partial^{3}l}{\partial\lambda^{3}}&=\frac{2m}{\lambda^{3}}+\sum_{i=1}^{m}\Big\{\beta x_{i}^{\lambda}(\ln x_{i})^{3}\left[-3+(y_{1}(\theta))^{-1}\right]\\&\qquad-\beta^{2}x_{i}^{2\lambda}(\ln x_{i})^{3}\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-2}\left[3-\beta x_{i}^{\lambda}-2\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-1}\right]\Big\}\\&\quad+\sum_{i=1}^{D}\Big\{-3R_{i}\beta x_{i}^{\lambda}(\ln x_{i})^{3}\left[1-(y_{2}(\theta))^{-1}\right]\\&\qquad+6R_{i}\beta^{2}x_{i}^{2\lambda}(\ln x_{i})^{3}\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-2}\left[-3+\beta x_{i}^{\lambda}+4\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-1}\right]\Big\}\\&\quad-3R^{*}\beta x_{m}^{\lambda}(\ln x_{m})^{3}\left[1-(y_{3}(\theta))^{-1}\right]\\&\quad+6R^{*}\beta^{2}x_{m}^{2\lambda}(\ln x_{m})^{3}\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-2}\left[-3+\beta x_{m}^{\lambda}+4\beta x_{m}^{\lambda}\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-1}\right].\end{split}\tag{28}$$

$$\begin{split}z_{21}=\frac{\partial^{3}l}{\partial\beta^{2}\partial\lambda}&=\sum_{i=1}^{m}\left\{-x_{i}^{2\lambda}\ln x_{i}\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-2}\left[2-\beta x_{i}^{\lambda}-2\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})(y_{1}(\theta))^{-1}\right]\right\}\\&\quad-\sum_{i=1}^{D}6R_{i}x_{i}^{2\lambda}\ln x_{i}\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-2}\left[2-\beta x_{i}^{\lambda}-4\beta x_{i}^{\lambda}\exp(-\beta x_{i}^{\lambda})(y_{2}(\theta))^{-1}\right]\\&\quad-6R^{*}x_{m}^{2\lambda}\ln x_{m}\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-2}\left[2-\beta x_{m}^{\lambda}-4\beta x_{m}^{\lambda}\exp(-\beta x_{m}^{\lambda})(y_{3}(\theta))^{-1}\right].\end{split}\tag{29}$$

$$
\begin{split}
z\_{12} = \frac{\partial^3 l}{\partial \beta\,\partial \lambda^2} &= \sum\_{i=1}^{m}\Big\{-3x\_i^{\lambda}(\ln x\_i)^2 + x\_i^{\lambda}(\ln x\_i)^2(y\_1(\theta))^{-1} \\
&\quad+\beta x\_i^{2\lambda}(\ln x\_i)^2\exp(-\beta x\_i^{\lambda})(y\_1(\theta))^{-2}\big[-3+\beta x\_i^{\lambda}+(y\_1(\theta))^{-1}\beta x\_i^{\lambda}\exp(-\beta x\_i^{\lambda})\big]\Big\} \\
&\quad+\sum\_{i=1}^{D}\Big\{-3R\_i x\_i^{\lambda}(\ln x\_i)^2 + 3R\_i x\_i^{\lambda}(\ln x\_i)^2(y\_2(\theta))^{-1} \\
&\quad+6\beta R\_i x\_i^{2\lambda}(\ln x\_i)^2\exp(-\beta x\_i^{\lambda})(y\_2(\theta))^{-2}\big[-3+\beta x\_i^{\lambda}\exp(-\beta x\_i^{\lambda})+4(y\_2(\theta))^{-1}\beta x\_i^{\lambda}\exp(-\beta x\_i^{\lambda})\big]\Big\} \\
&\quad-3R^{\*}x\_m^{\lambda}(\ln x\_m)^2 + 3R^{\*}x\_m^{\lambda}(\ln x\_m)^2(y\_3(\theta))^{-1} \\
&\quad+6\beta R^{\*}x\_m^{2\lambda}(\ln x\_m)^2\exp(-\beta x\_m^{\lambda})(y\_3(\theta))^{-2}\big[-3+\beta x\_m^{\lambda}\exp(-\beta x\_m^{\lambda})+4(y\_3(\theta))^{-1}\beta x\_m^{\lambda}\exp(-\beta x\_m^{\lambda})\big].
\end{split}
\tag{30}
$$

$$p\_1 = \frac{a-1}{\beta} - b, p\_2 = \frac{c-1}{\lambda} - d,$$

$$\tau\_{11} = -\frac{z\_{02}}{z\_{20}z\_{02} - z\_{11}^2}, \tau\_{22} = -\frac{z\_{20}}{z\_{20}z\_{02} - z\_{11}^2}, \tau\_{12} = \tau\_{21} = \frac{z\_{11}}{z\_{20}z\_{02} - z\_{11}^2},$$

$$z\_{20} = \frac{\partial^2 l}{\partial \beta^2}, z\_{11} = \frac{\partial^2 l}{\partial \beta\,\partial \lambda}, z\_{02} = \frac{\partial^2 l}{\partial \lambda^2},$$

where *z*20, *z*11, *z*02 are given by Equations (10)–(12), respectively.
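Since the *τ*'s are simply the entries of the inverse of the negative matrix of second derivatives evaluated at the MLEs, they can be computed directly. A minimal sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def tau_matrix(z20, z11, z02):
    """Entries tau11, tau12 (= tau21), tau22 of the inverse of the negative
    2x2 matrix of second derivatives of the log-likelihood, evaluated at the
    MLEs (the z's of Eqs. (10)-(12)). Hypothetical helper."""
    det = z20 * z02 - z11**2
    return -z02 / det, z11 / det, -z20 / det

# Equivalently, the three values are the (1,1), (1,2) and (2,2) entries of
# np.linalg.inv(-np.array([[z20, z11], [z11, z02]])).
t11, t12, t22 = tau_matrix(-4.0, 1.5, -3.0)
```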

Based on Lindley's approximation, we can derive the Bayesian estimation of the two parameters, *β* and *λ*, and the entropy under different loss functions.

3.2.1. Squared Error Loss Function

When U (*β*, *λ*) = *β* or *λ*, the Bayesian estimations of the parameters *β* and *λ* under the SEL function are given by, respectively,

$$\hat{\beta}\_S = \hat{\beta} + 0.5[\tau\_{11}^2 z\_{30} + \tau\_{21}\tau\_{22}z\_{03} + 3\tau\_{11}\tau\_{12}z\_{21} + (\tau\_{11}\tau\_{22} + 2\tau\_{21}^2)z\_{12}] + \tau\_{11}p\_1 + \tau\_{12}p\_2,$$

$$
\hat{\lambda}\_S = \hat{\lambda} + 0.5[\tau\_{11}\tau\_{12}z\_{30} + \tau\_{22}^2 z\_{03} + 3\tau\_{22}\tau\_{21}z\_{12} + (\tau\_{11}\tau\_{22} + 2\tau\_{21}^2)z\_{21}] + \tau\_{21}p\_1 + \tau\_{22}p\_2,
$$

where *β*ˆ and *λ*ˆ are the MLEs of the parameters *β* and *λ*, respectively.

Similarly, the Bayesian estimation of the entropy can be derived. We notice that

$$\begin{cases} U(\beta,\lambda) = H(\beta,\lambda) = 2.5 + \gamma - \ln(27/4) - \ln\lambda - \frac{1}{\lambda}\ln\beta + \frac{1}{\lambda}(\ln(9/8) - \gamma), \\\ u\_1 = -\frac{1}{\beta\lambda},\quad u\_2 = -\frac{1}{\lambda} + \frac{1}{\lambda^2}(\ln\beta - \ln(9/8) + \gamma), \\\ u\_{11} = \frac{1}{\beta^2\lambda},\quad u\_{22} = \frac{1}{\lambda^2} - \frac{2}{\lambda^3}(\ln\beta - \ln(9/8) + \gamma),\quad u\_{12} = u\_{21} = \frac{1}{\beta\lambda^2}. \end{cases}$$

Thus, the Bayesian estimation of the entropy H (*f*) under the SEL function is given by

$$\begin{split} \hat{H}\_S(f) &= \hat{H}(f) + 0.5[u\_{11}\tau\_{11} + 2u\_{12}\tau\_{12} + u\_{22}\tau\_{22} + z\_{30}(u\_1\tau\_{11} + u\_2\tau\_{12})\tau\_{11} + z\_{03}(u\_2\tau\_{22} + u\_1\tau\_{12})\tau\_{22} \\ &\quad + z\_{21}(3u\_1\tau\_{11}\tau\_{12} + u\_2(\tau\_{11}\tau\_{22} + 2\tau\_{12}^2)) + z\_{12}(3u\_2\tau\_{22}\tau\_{21} + u\_1(\tau\_{11}\tau\_{22} + 2\tau\_{21}^2))] \\ &\quad + p\_1(u\_1\tau\_{11} + u\_2\tau\_{21}) + p\_2(u\_2\tau\_{22} + u\_1\tau\_{12}), \end{split} \tag{31}$$

where Hˆ (*f*) represents the maximum likelihood estimate of H (*f*).
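As a numerical check, the entropy expression above and its first partial derivatives can be coded directly; with the simulation settings of Section 4 (*β* = 1, *λ* = 2) this reproduces the quoted value H(*f*) ≈ 0.2448. A minimal sketch (function names are ours):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gb_entropy(beta, lam):
    """Shannon entropy H(beta, lambda) of the GB distribution, coded
    directly from the expression above."""
    return (2.5 + EULER_GAMMA - math.log(27 / 4) - math.log(lam)
            - math.log(beta) / lam + (math.log(9 / 8) - EULER_GAMMA) / lam)

def gb_entropy_grad(beta, lam):
    """First partial derivatives u1 = dH/dbeta and u2 = dH/dlambda."""
    u1 = -1.0 / (beta * lam)
    u2 = -1.0 / lam + (math.log(beta) - math.log(9 / 8) + EULER_GAMMA) / lam**2
    return u1, u2

# Simulation settings of Section 4: beta = 1, lambda = 2
print(round(gb_entropy(1.0, 2.0), 4))  # 0.2448
```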

3.2.2. Linex Loss Function

Based on Lindley's approximation, the Bayesian estimations of two parameters, *β* and *λ*, and the entropy under the LL function can, respectively, be given by

$$
\begin{split}
\hat{\beta}\_L &= -\frac{1}{h}\ln\Big\{\exp(-h\hat{\beta}) + 0.5\big[u\_{11}\tau\_{11} + u\_1\tau\_{11}^2 z\_{30} + u\_1\tau\_{21}\tau\_{22}z\_{03} + 3u\_1\tau\_{11}\tau\_{12}z\_{21} \\
&\quad + (\tau\_{11}\tau\_{22} + 2\tau\_{21}^2)u\_1 z\_{12}\big] + u\_1\tau\_{11}p\_1 + u\_1\tau\_{12}p\_2\Big\},
\end{split}
$$

$$
\begin{split}
\hat{\lambda}\_L &= -\frac{1}{h}\ln\Big\{\exp(-h\hat{\lambda}) + 0.5\big[u\_{22}\tau\_{22} + u\_2\tau\_{11}\tau\_{12}z\_{30} + u\_2\tau\_{22}^2 z\_{03} + (\tau\_{11}\tau\_{22} + 2\tau\_{12}^2)u\_2 z\_{21} \\
&\quad + 3u\_2\tau\_{22}\tau\_{21}z\_{12}\big] + u\_2\tau\_{21}p\_1 + u\_2\tau\_{22}p\_2\Big\},
\end{split}
$$

$$\begin{split} \hat{H}\_L(f) &= -\frac{1}{h}\ln\Big\{\exp[-h\hat{H}(f)] + 0.5\big[u\_{11}\tau\_{11} + 2u\_{12}\tau\_{12} + u\_{22}\tau\_{22} + z\_{30}(u\_1\tau\_{11} + u\_2\tau\_{12})\tau\_{11} + z\_{03}(u\_2\tau\_{22} + u\_1\tau\_{12})\tau\_{22} \\ &\quad + z\_{21}(3u\_1\tau\_{11}\tau\_{12} + u\_2(\tau\_{11}\tau\_{22} + 2\tau\_{12}^2)) + z\_{12}(3u\_2\tau\_{22}\tau\_{21} + u\_1(\tau\_{11}\tau\_{22} + 2\tau\_{21}^2))\big] \\ &\quad + p\_1(u\_1\tau\_{11} + u\_2\tau\_{21}) + p\_2(u\_2\tau\_{22} + u\_1\tau\_{12})\Big\}. \end{split} \tag{32}$$

Here, *β*ˆ and *λ*ˆ are the MLEs of the parameters *β* and *λ*, and Hˆ (*f*) represents the MLE of H (*f*). The detailed derivation of these Bayesian estimates is shown in Appendix C.
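The quantity inside the logarithm above is the posterior expectation E[exp(−*h θ*)], which Lindley's method approximates analytically. A Monte Carlo analogue makes the role of the shape parameter *h* visible (a toy posterior sample; all names are ours):

```python
import numpy as np

def linex_estimate(draws, h):
    """Monte Carlo analogue of the Linex-loss Bayes estimate
    theta_L = -(1/h) * ln E[exp(-h * theta)]; the closed forms above
    approximate this posterior expectation via Lindley's method."""
    return float(-np.log(np.mean(np.exp(-h * np.asarray(draws)))) / h)

rng = np.random.default_rng(0)
draws = rng.gamma(shape=5.0, scale=0.2, size=200_000)  # toy posterior, mean 1

# h > 0 penalizes overestimation, pulling the estimate below the posterior
# mean; h < 0 pushes it above.
print(linex_estimate(draws, 1.0) < draws.mean() < linex_estimate(draws, -1.0))  # True
```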

3.2.3. General Entropy Loss Function

Using Lindley's approximation method, the Bayesian estimations of two parameters, *β* and *λ*, and the entropy under the GEL function can, respectively, be given by

$$\begin{split} \hat{\beta}\_E &= \big\{\hat{\beta}^{-q} + 0.5[u\_{11}\tau\_{11} + u\_1\tau\_{11}^2 z\_{30} + u\_1\tau\_{21}\tau\_{22}z\_{03} + 3u\_1\tau\_{11}\tau\_{12}z\_{21} + (\tau\_{11}\tau\_{22} + 2\tau\_{21}^2)u\_1 z\_{12}] + u\_1\tau\_{11}p\_1 + u\_1\tau\_{12}p\_2\big\}^{-1/q}, \\ \hat{\lambda}\_E &= \big\{\hat{\lambda}^{-q} + 0.5[u\_{22}\tau\_{22} + u\_2\tau\_{11}\tau\_{12}z\_{30} + u\_2\tau\_{22}^2 z\_{03} + (\tau\_{11}\tau\_{22} + 2\tau\_{12}^2)u\_2 z\_{21} + 3u\_2\tau\_{22}\tau\_{21}z\_{12}] + u\_2\tau\_{21}p\_1 + u\_2\tau\_{22}p\_2\big\}^{-1/q}, \\ \hat{H}\_E(f) &= \big\{[\hat{H}(f)]^{-q} + 0.5[u\_{11}\tau\_{11} + 2u\_{12}\tau\_{12} + u\_{22}\tau\_{22} + z\_{30}(u\_1\tau\_{11} + u\_2\tau\_{12})\tau\_{11} + z\_{03}(u\_2\tau\_{22} + u\_1\tau\_{12})\tau\_{22} \\ &\quad + z\_{21}(3u\_1\tau\_{11}\tau\_{12} + u\_2(\tau\_{11}\tau\_{22} + 2\tau\_{12}^2)) + z\_{12}(3u\_2\tau\_{22}\tau\_{21} + u\_1(\tau\_{11}\tau\_{22} + 2\tau\_{21}^2))] \\ &\quad + p\_1(u\_1\tau\_{11} + u\_2\tau\_{21}) + p\_2(u\_2\tau\_{22} + u\_1\tau\_{12})\big\}^{-1/q}. \end{split} \tag{33}$$

Here, *β*ˆ and *λ*ˆ are the MLEs of the parameters *β* and *λ*, and Hˆ (*f*) represents the MLE of H (*f*). The detailed derivation of these Bayesian estimates is shown in Appendix D.
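Analogously, the GEL estimate is the transformed posterior moment {E[*θ*^(−*q*)]}^(−1/*q*), which Equation (33) approximates. A minimal Monte Carlo sketch (hypothetical helper name):

```python
import numpy as np

def gel_estimate(draws, q):
    """Monte Carlo analogue of the general-entropy-loss Bayes estimate
    theta_E = (E[theta^(-q)])^(-1/q); Equation (33) approximates this
    posterior expectation analytically via Lindley's method."""
    return float(np.mean(np.asarray(draws, dtype=float) ** (-q)) ** (-1.0 / q))

# q = -1 recovers the posterior mean; q > 0 penalizes overestimation more.
print(gel_estimate([1.0, 3.0], -1.0))  # 2.0
```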

#### *3.3. Bayesian Credible Interval*

In the previous subsection, we used Lindley's approximation method to obtain the Bayesian point estimations of the parameters and the entropy. However, this approximation method cannot determine the Bayesian CIs. Thus, the MCMC method is applied to obtain the Bayesian CIs for the parameters and the entropy. The MCMC method is a useful technique for estimating complex Bayesian models. Gibbs sampling and the Metropolis–Hastings algorithm are the two most frequently applied MCMC methods, used in reliability analysis, statistical physics and machine learning, among other applications. Due to their practicality, they have gained some attention among researchers, and interesting results have been obtained. For example, Gilks and Wild [24] proposed adaptive rejection sampling to handle non-conjugacy in applications of Gibbs sampling. Koch [25] studied the Gibbs sampler by means of the sampling–importance resampling algorithm. Martino et al. [26] established a new approach, namely recycling the Gibbs sampler to improve efficiency without adding any extra computational cost. Panahi and Moradi [27] developed a hybrid strategy, combining the Metropolis–Hastings [28,29] algorithm with the Gibbs sampler, to generate samples from the posterior arising from the inverted exponentiated Rayleigh distribution. In this paper, we adopt the method proposed in [27] to generate samples from the posterior arising from the GB distribution. From Equations (6) and (22), the joint posterior of the parameters *β*, *λ* can be written as

$$\begin{split} \pi(\beta,\lambda|\stackrel{\rightarrow}{\mathbf{x}}) &\propto \pi(\beta,\lambda)L(\beta,\lambda|\stackrel{\rightarrow}{\mathbf{x}}) \propto [V(\lambda)]^{m+a}\beta^{m+a-1}\exp[-\beta V(\lambda)]\prod\_{i=1}^{m}\big[1-\exp(-\beta x\_i^{\lambda})\big] \\ &\quad\times\frac{1}{[V(\lambda)]^{m+a}}\prod\_{i=1}^{D}\big(3-2\exp(-\beta x\_i^{\lambda})\big)^{R\_i}\big(3-2\exp(-\beta x\_m^{\lambda})\big)^{R^{\*}}\lambda^{m+c-1}\exp(-d\lambda)\prod\_{i=1}^{m}x\_i^{\lambda-1}. \end{split} \tag{34}$$

Here, $V(\lambda) = b + 2\sum\_{i=1}^{m}x\_i^{\lambda} + 2\sum\_{i=1}^{D}R\_i x\_i^{\lambda} + 2R^{\*}x\_m^{\lambda}$. Therefore, we have

$$
\pi(\beta,\lambda|\stackrel{\rightarrow}{\mathbf{x}}) \propto \pi\_1(\beta|\lambda,\stackrel{\rightarrow}{\mathbf{x}})\,\pi\_2(\lambda|\beta,\stackrel{\rightarrow}{\mathbf{x}}),\tag{35}
$$

where

$$
\pi\_1(\beta|\lambda,\stackrel{\rightarrow}{\mathbf{x}}) \propto [V(\lambda)]^{m+a}\beta^{m+a-1}\exp[-\beta V(\lambda)], \tag{36}
$$

$$\begin{split} \pi\_2(\lambda|\beta,\stackrel{\rightarrow}{\mathbf{x}}) &\propto \frac{\lambda^{m+c-1}}{[V(\lambda)]^{m+a}}\exp(-d\lambda)\exp\Big[-\beta\Big(2\sum\_{i=1}^{m}x\_i^{\lambda} + 2\sum\_{i=1}^{D}R\_i x\_i^{\lambda} + 2R^{\*}x\_m^{\lambda}\Big)\Big] \\ &\quad\times\prod\_{i=1}^{m}\big[1-\exp(-\beta x\_i^{\lambda})\big]\prod\_{i=1}^{D}\big(3-2\exp(-\beta x\_i^{\lambda})\big)^{R\_i}\big(3-2\exp(-\beta x\_m^{\lambda})\big)^{R^{\*}}\prod\_{i=1}^{m}x\_i^{\lambda-1}. \end{split} \tag{37}$$

It is observed that the posterior density $\pi\_1(\beta|\lambda,\stackrel{\rightarrow}{\mathbf{x}})$ of *β*, given *λ*, is the PDF of the gamma distribution $Gamma\big(m+a,\ b+2\sum\_{i=1}^{m}x\_i^{\lambda}+2\sum\_{i=1}^{D}R\_i x\_i^{\lambda}+2R^{\*}x\_m^{\lambda}\big)$. However, the posterior density $\pi\_2(\lambda|\beta,\stackrel{\rightarrow}{\mathbf{x}})$ of *λ*, given *β*, cannot be reduced analytically to a known distribution. Therefore, we use the Metropolis–Hastings method with a normal proposal distribution to generate random numbers from Equation (37). We use the following algorithm (Algorithm 1), proposed in [27], to generate random numbers from Equation (34) and construct the Bayesian credible intervals of *λ*, *β* and the entropy H (*f*).
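Sampling from the gamma conditional of *β* is straightforward. A sketch of one Gibbs draw, assuming the rate parameterization above (all names are ours; the hyperparameter defaults *a* = 1, *b* = 3 follow Section 4):

```python
import numpy as np

def draw_beta(lam, x, R, Rstar, a=1.0, b=3.0, rng=None):
    """One Gibbs draw of beta from pi_1(beta | lambda, x): a gamma
    distribution with shape m + a and rate
    b + 2*sum(x_i^lam) + 2*sum(R_i * x_i^lam) + 2*Rstar*x_m^lam.
    x holds the m observed failure times, R the removal numbers R_1..R_D
    paired with the first D observations. Illustrative helper."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    R = np.asarray(R, dtype=float)
    rate = (b + 2.0 * np.sum(x**lam) + 2.0 * np.sum(R * x[: R.size] ** lam)
            + 2.0 * Rstar * x[-1] ** lam)
    # numpy's gamma is parameterized by shape and scale = 1/rate
    return rng.gamma(shape=x.size + a, scale=1.0 / rate)
```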

**Algorithm 1** The MCMC method

**Step 1:** Choose the initial value $(\beta^{(0)}, \lambda^{(0)})$.

**Step 2:** At stage *i* and for the given *m*, *n* and ATII-PH censored data, generate $\beta^{(i)}$ from the following:

$$
Gamma\Big(m+a,\ b+2\sum\_{i=1}^{m}x\_i^{\lambda}+2\sum\_{i=1}^{D}R\_i x\_i^{\lambda}+2R^{\*}x\_m^{\lambda}\Big).
$$

**Step 3:** Generate $\lambda^{(i)}$ from $\pi\_2(\lambda^{(i-1)}|\beta^{(i)},\stackrel{\rightarrow}{\mathbf{x}})$ using the following steps.

**Step 3-1:** Generate $\lambda^{\*}$ from $N(\lambda^{(i-1)}, \mathrm{var}(\lambda))$.

**Step 3-2:** Generate *ω* from the uniform distribution U(0, 1).

**Step 3-3:** Set
$$
\lambda^{(i)} = \begin{cases} \lambda^{\*}, & \text{if } \omega \le r^{\*}, \\\ \lambda^{(i-1)}, & \text{if } \omega > r^{\*}, \end{cases} \qquad \text{where } r^{\*} = \min\Big\{1,\ \frac{\pi\_2(\lambda^{\*}|\beta^{(i)},\stackrel{\rightarrow}{\mathbf{x}})}{\pi\_2(\lambda^{(i-1)}|\beta^{(i)},\stackrel{\rightarrow}{\mathbf{x}})}\Big\}.
$$

**Step 4:** Set *i* = *i* + 1.

**Step 5:** By repeating Steps 2–4 N times, we get (*β*1, *λ*1),(*β*2, *λ*2),...,(*βN*, *λN*). Furthermore, we compute *H*1, *H*2,..., *HN*, where *Hi* = *H*(*βi*, *λi*), *i* = 1, 2, . . . , *N* and *H*(*β*, *λ*) is the Shannon entropy of the GB distribution.

Rearrange (*β*1, *β*2, ... , *βN*), (*λ*1, *λ*2, ... , *λN*) and (*H*1, *H*2, ... , *HN*) into (*β*(1), *β*(2), ... , *β*(*N*)), (*λ*(1), *λ*(2), ... , *λ*(*N*)) and (*H*(1), *H*(2), ... , *H*(*N*)), where *β*(1) < *β*(2) < ... < *β*(*N*), *λ*(1) < *λ*(2) < ... < *λ*(*N*) and *H*(1) < *H*(2) < ... < *H*(*N*).

Then, the 100(1 − *α*)% Bayesian credible interval of the two parameters *β*, *λ* and the entropy are given by (*β*(*Nα*/2), *<sup>β</sup>*(*N*(1−*α*/2))), (*λ*(*Nα*/2), *<sup>λ</sup>*(*N*(1−*α*/2))) and (*H*(*Nα*/2), *<sup>H</sup>*(*N*(1−*α*/2))).
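The interval construction in Step 5 and above amounts to sorting the chain and reading off two order statistics; a minimal helper (our naming), applied separately to the *β*, *λ* and entropy chains:

```python
import numpy as np

def credible_interval(chain, alpha=0.05):
    """Equal-tailed 100(1 - alpha)% Bayesian credible interval from MCMC
    draws, as in Step 5: sort the chain and read off the order statistics
    at positions N*alpha/2 and N*(1 - alpha/2)."""
    s = np.sort(np.asarray(chain, dtype=float))
    N = s.size
    lo = s[int(np.floor(N * alpha / 2.0))]
    hi = s[min(int(np.ceil(N * (1.0 - alpha / 2.0))) - 1, N - 1)]
    return lo, hi
```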

#### **4. Simulation Study**

In this section, a Monte Carlo simulation study is carried out to observe the performance of different estimators of the entropy, in terms of the MSEs for different values of *n*, *m*, *T* and censoring schemes. In addition, the average 95% asymptotic confidence intervals (ACIs), Bayesian credible intervals (BCIs) of *β*, *λ* and the entropy, as well as the average interval length (IL), are computed, and the performances are also compared. We consider the following three different progressive censoring schemes (CSs):


Based on the following algorithm proposed by Balakrishnan and Sandhu [30] (Algorithm 2), we can generate an adaptive Type-II progressive hybrid censored sample from the GB distribution.

**Algorithm 2.** Generating an adaptive Type-II progressive hybrid censored sample from the GB distribution.

**Step 1:** Generate *m* independent observations *Z*1, *Z*2, ..., *Zm*, where *Zi* follows the uniform distribution *U*(0, 1), *i* = 1, 2, ..., *m*.

**Step 2:** For the known censoring scheme (*R*1, *R*2, ..., *Rm*), let $\xi\_i = Z\_i^{1/(i+R\_m+R\_{m-1}+\cdots+R\_{m-i+1})}$, *i* = 1, 2, ..., *m*.

**Step 3:** By setting $U\_i = 1 - \xi\_m\xi\_{m-1}\cdots\xi\_{m-i+1}$, *U*1, *U*2, ..., *Um* is a Type-II progressive censored sample from the uniform distribution *U*(0, 1).

**Step 4:** Using the inverse transformation $X\_{i:m:n} = F^{-1}(U\_i)$, *i* = 1, 2, ..., *m*, we obtain a Type-II progressive censored sample from the GB distribution, that is, *X*1:*m*:*n*, *X*2:*m*:*n*, ..., *Xm*:*m*:*n*, where $F^{-1}(\cdot)$ denotes the inverse cumulative distribution function of the GB distribution with parameters (*β*, *λ*). Theorem 1 below establishes the uniqueness of the solution of the equation $X\_{i:m:n} = F^{-1}(U\_i)$, *i* = 1, 2, ..., *m*.

**Step 5:** If there exists an index *J* satisfying $X\_{J:m:n} < T \le X\_{J+1:m:n}$, set the index *J* and record *X*1:*m*:*n*, *X*2:*m*:*n*, ..., *X*J+1:*m*:*n*.

**Step 6:** Generate the first *m* − *J* − 1 order statistics *X*J+2:*m*:*n*, *X*J+3:*m*:*n*, ..., *Xm*:*m*:*n* from the truncated distribution $f(x;\beta,\lambda)/[1 - F(x\_{J+1};\beta,\lambda)]$ with sample size $n - J - 1 - \sum\_{i=1}^{J}R\_i$.

**Theorem 1.** *The equation Xi*:*m*:*<sup>n</sup>* = *F*−1(*Ui*) *has a unique solution, i* = 1, 2, . . . , *m.*

**Proof.** See Appendix A. □
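Steps 1–4 of Algorithm 2 can be sketched as follows. We take F(*x*) = 1 − e^(−2*βx*^*λ*)(3 − 2e^(−*βx*^*λ*)), which is our reading of the GB cumulative distribution function implied by the likelihood factors above; since F is strictly increasing, the bisection solve reflects the uniqueness asserted in Theorem 1:

```python
import numpy as np

def progressive_typeII_sample(R, beta, lam, rng=None):
    """Steps 1-4 of Algorithm 2: a Type-II progressive censored sample of
    size m from the GB distribution for removal scheme R = (R_1, ..., R_m).
    Assumes F(x) = 1 - exp(-2*beta*x**lam) * (3 - 2*exp(-beta*x**lam))."""
    if rng is None:
        rng = np.random.default_rng()
    R = np.asarray(R, dtype=float)
    m = R.size
    Z = rng.uniform(size=m)                                   # Step 1
    # Step 2: xi_i = Z_i^(1/(i + R_m + R_{m-1} + ... + R_{m-i+1}))
    xi = Z ** (1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    # Step 3: U_i = 1 - xi_m * xi_{m-1} * ... * xi_{m-i+1}
    U = 1.0 - np.cumprod(xi[::-1])

    def F(x):
        y = np.exp(-beta * x**lam)
        return 1.0 - y * y * (3.0 - 2.0 * y)

    # Step 4: invert F by bisection; F is strictly increasing, so the
    # solution is unique (Theorem 1)
    X = []
    for u in U:
        lo, hi = 0.0, 1.0
        while F(hi) < u:
            hi *= 2.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if F(mid) < u else (lo, mid)
        X.append(0.5 * (lo + hi))
    return np.array(X)
```

Steps 5–6 then truncate this sample at *T* and top it up from the left-truncated density, exactly as described above.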

In the simulation study, we took the values of the parameters of the GB distribution as *β* = 1, *λ* = 2. In this case, H(*f*) = 0.2448. The hyperparameter values of the prior distribution were taken as *a* = 1, *b* = 3, *c* = 2, *d* = 3. For the Linex loss function and the general entropy loss function, we set *h* = −1.0, 1.0 and *q* = −1.0, 1.0, respectively. In the Newton iterative algorithm and the MCMC sampling algorithm, we chose the initial values of *β* and *λ* as *β*(0) = 0.9, *λ*(0) = 1.9, and the tolerance *ε* was taken as 10<sup>−6</sup>. For different sample sizes *n*, effective sample sizes *m* and times *T*, we used 3000 simulated samples in each case. The average values and mean square errors (MSEs) of the MLEs and Bayesian estimations (BEs) of *β*, *λ* and the entropy were calculated. These results are reported in Tables 1–6.

**Table 1.** The average maximum likelihood estimations (MLEs) and mean square errors (MSEs) of *β*, *λ* and the entropy (*β* = 1, *λ* = 2, H(*f*) = 0.2448).


From Tables 1–6, the following observations can be made:



**Table 2.** The average Bayesian estimations and MSEs of *β*, *λ* and the entropy under the squared error loss function (*β* = 1, *λ* = 2, H(*f*) = 0.2448).

**Table 3.** The average Bayesian estimations and MSEs of *β*, *λ* and the entropy under the Linex loss function (*β* = 1, *λ* = 2, T = 0.6, H(*f*) = 0.2448).



**Table 4.** The average Bayesian estimations and MSEs of *β*, *λ* and the entropy under the Linex loss function (*β* = 1, *λ* = 2, T = 1.5, H(*f*) = 0.2448).

**Table 5.** The average Bayesian estimations and MSEs of *β*, *λ* and the entropy under the general entropy loss function (*β* = 1, *λ* = 2, T = 0.6, H(*f*) = 0.2448).



**Table 6.** The average Bayesian estimations and MSEs of *β*, *λ* and the entropy under the general entropy loss function (*β* = 1, *λ* = 2, T = 1.5, H(*f*) = 0.2448).

To further demonstrate the conclusions, the MSEs are plotted when the sample size increases under different censoring schemes. The trends are shown in Figure 1 (values come from Tables 1–6).

Furthermore, the average 95% ACIs and BCIs of *β*, *λ* and the entropy, as well as the average lengths (ALs) and coverage probabilities of the confidence intervals, were computed. These results are displayed in Tables A1–A4 (See Appendix E).

From Tables A1–A4, the following can be observed:


**Figure 1.** MSEs of different entropy estimations. (**a**) MSEs of MLEs of entropy in the case of T = 0.6 and T = 1.5. (**b**) MSEs of Bayesian estimations of entropy under a squared error loss function in the case of T = 0.6 and T = 1.5. (**c**) MSEs of Bayesian estimations of entropy under a Linex loss function in the case of T = 0.6. (**d**) MSEs of Bayesian estimations of entropy under a Linex loss function in the case of T = 1.5. (**e**) MSEs of Bayesian estimations of entropy under a general entropy loss function in the case of T = 0.6. (**f**) MSEs of Bayesian estimations of entropy under a general entropy loss function in the case of T = 1.5.
