#### **5. Real Data Analysis**

In this section, a real data set is considered to illustrate the inference procedures discussed in this paper. The data set consists of 30 successive values of March precipitation (in inches) in Minneapolis–Saint Paul, reported by Hinkley [31]: 0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18, 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37 and 4.75.

This data set was used by Barreto-Souza and Cribari-Neto [32] to fit the generalized exponential-Poisson (GEP) distribution and by Abd-Elrahman [20] to fit the Bilal and GB distributions. In the complete sample case, the MLEs of *β* and *λ* were 0.4168 and 1.2486, respectively, and the corresponding maximum likelihood estimate of the entropy was H(*f*) = 1.2786. For these MLEs, Abd-Elrahman [20] reported a negative log-likelihood of 38.1763, a Kolmogorov–Smirnov (K–S) test statistic of 0.0532 and a corresponding p value of 1.0. Based on this p value, the GB distribution fits the data very well. Using this data set, we generated adaptive Type-II progressive hybrid censored samples with an effective failure number m = 20.
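The complete-sample MLEs can be reproduced numerically. The sketch below (Python with `numpy`/`scipy`; the helper names `gb_logpdf` and `nll` are our own) minimizes the negative log-likelihood of the GB distribution, whose density f(x) = 6βλx^(λ−1) e^(−2βx^λ) (1 − e^(−βx^λ)) follows from the CDF written out in Appendix A:

```python
import numpy as np
from scipy.optimize import minimize

# March precipitation data (inches), Hinkley's Minneapolis-Saint Paul series.
data = np.array([0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18,
                 1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89,
                 1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37, 4.75])

def gb_logpdf(x, beta, lam):
    """Log-density of the GB distribution:
    f(x) = 6*beta*lam * x**(lam-1) * exp(-2*beta*x**lam) * (1 - exp(-beta*x**lam))."""
    t = beta * x**lam
    return (np.log(6.0) + np.log(beta) + np.log(lam)
            + (lam - 1.0) * np.log(x) - 2.0 * t + np.log1p(-np.exp(-t)))

def nll(params):
    """Negative log-likelihood of the complete sample."""
    beta, lam = params
    if beta <= 0 or lam <= 0:
        return np.inf
    return -gb_logpdf(data, beta, lam).sum()

res = minimize(nll, x0=[0.5, 1.0], method="Nelder-Mead")
beta_hat, lam_hat = res.x
```

With the complete sample, the minimizer should land near the reported MLEs (β ≈ 0.4168, λ ≈ 1.2486) and the minimized negative log-likelihood near the 38.1763 cited above.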

When we took T = 4.0 and *R*<sub>1</sub> = *R*<sub>2</sub> = ... = *R*<sub>5</sub> = 1, *R*<sub>6</sub> = *R*<sub>7</sub> = ... = *R*<sub>15</sub> = 0, *R*<sub>16</sub> = *R*<sub>17</sub> = ... = *R*<sub>20</sub> = 1, the data obtained in Case I were as follows:

Case I: 0.32, 0.52, 0.77, 0.81, 0.96, 1.18, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89, 1.95, 2.10, 2.48, 2.81 and 3.37.

When we took T = 2.0, *R*<sub>1</sub> = 1, *R*<sub>2</sub> = *R*<sub>3</sub> = ... = *R*<sub>8</sub> = 0, *R*<sub>9</sub> = *R*<sub>10</sub> = ... = *R*<sub>15</sub> = 1, *R*<sub>16</sub> = *R*<sub>17</sub> = ... = *R*<sub>19</sub> = 0 and *R*<sub>20</sub> = 2, the data obtained in Case II were as follows:

Case II: 0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.9, 0.96, 1.18, 1.20, 1.35, 1.43, 1.74, 1.87, 1.95, 2.10, 2.20, 2.48, 2.81 and 3.09.
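The construction of such a censored sample from the complete data can be sketched as follows. This is a hypothetical helper (`adaptive_pt2_sample` is our own name): withdrawals before T are drawn at random, so the observed values will generally differ from Cases I and II above, which fixed particular removals.

```python
import numpy as np

# Complete March precipitation sample (inches).
data = [0.32, 0.47, 0.52, 0.59, 0.77, 0.81, 0.81, 0.9, 0.96, 1.18,
        1.20, 1.20, 1.31, 1.35, 1.43, 1.51, 1.62, 1.74, 1.87, 1.89,
        1.95, 2.05, 2.10, 2.20, 2.48, 2.81, 3.0, 3.09, 3.37, 4.75]

def adaptive_pt2_sample(full, R, T, seed=None):
    """Draw one adaptive Type-II progressive hybrid censored sample of
    size m = len(R). After the i-th failure, R[i] surviving units are
    withdrawn at random, but only while failures occur before time T;
    once a failure exceeds T, no further planned withdrawals are made
    (the adaptive rule)."""
    rng = np.random.default_rng(seed)
    alive = sorted(full)
    observed = []
    for r in R:
        x = alive.pop(0)                  # smallest surviving lifetime fails next
        observed.append(x)
        if x <= T and r > 0 and alive:    # planned withdrawals apply only before T
            drop = set(rng.choice(len(alive), size=min(r, len(alive)),
                                  replace=False))
            alive = [a for j, a in enumerate(alive) if j not in drop]
    return observed

# Scheme of Case I: R_1 = ... = R_5 = 1, R_6 = ... = R_15 = 0, R_16 = ... = R_20 = 1.
sample = adaptive_pt2_sample(data, [1]*5 + [0]*10 + [1]*5, T=4.0, seed=1)
```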

Based on the above data, the maximum likelihood and Bayesian estimates of the entropy and the two parameters can be calculated. For the Bayesian estimation, since we had no prior information about the unknown parameters, we used noninformative gamma priors with a = b = c = d = 0. For the Linex and general entropy loss functions, we set *h* = −1.0, 1.0 and *q* = −1.0, 1.0, respectively. The MLEs and Bayesian estimates of the entropy and the two parameters were computed using the Newton–Raphson iteration and Lindley's approximation method; these results are tabulated in Tables 7 and 8. In addition, the 95% asymptotic confidence intervals (ACIs) and Bayesian credible intervals (BCIs) of the two parameters and the entropy were computed using the Newton–Raphson iteration, the delta method and the MCMC method; these results are displayed in Table 9.
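A Wald-type ACI of the kind reported in Table 9 can be sketched generically: the observed information is obtained as the (here, numerical) Hessian of the negative log-likelihood at the MLE, and the 95% interval is the estimate plus or minus 1.96 standard errors. The helper names below (`observed_information`, `wald_ci`) are our own, and any smooth negative log-likelihood can be plugged in.

```python
import numpy as np

def observed_information(nll, theta, h=1e-4):
    """Central-difference Hessian of the negative log-likelihood,
    i.e., the observed information matrix evaluated at theta."""
    theta = np.asarray(theta, float)
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (nll(theta + ei + ej) - nll(theta + ei - ej)
                       - nll(theta - ei + ej) + nll(theta - ei - ej)) / (4.0 * h * h)
    return H

def wald_ci(nll, mle, z=1.96):
    """95% asymptotic CI for each parameter: mle_i -/+ z * sqrt([I^{-1}]_{ii})."""
    cov = np.linalg.inv(observed_information(nll, mle))  # inverse observed information
    se = np.sqrt(np.diag(cov))
    mle = np.asarray(mle, float)
    return np.column_stack([mle - z * se, mle + z * se])
```

An interval for the entropy would additionally apply the delta method, i.e., propagate `cov` through the gradient of H(*f*) with respect to (*β*, *λ*).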


**Table 7.** MLEs and Bayesian estimations of the parameters and the entropy.

**Table 8.** Bayesian estimations of the parameters and the entropy under two loss functions.


From Tables 7–9, we can observe that the MLEs and Bayesian estimates of the parameters and the entropy were close to the estimates obtained in the complete sample case. In most cases, the lengths of the Bayesian credible intervals were smaller than those of the asymptotic confidence intervals.


**Table 9.** The 95% asymptotic confidence intervals (ACIs) and Bayesian credible intervals (BCIs) with the corresponding interval lengths (ILs) of the two parameters and the entropy.

#### **6. Conclusions**

In this paper, we considered the estimation of the parameters and entropy of the generalized Bilal distribution using adaptive Type-II progressive hybrid censored data. Using an iterative procedure and asymptotic normality theory, we developed the MLEs and approximate confidence intervals of the unknown parameters and the entropy. The Bayesian estimates were derived by Lindley's approximation under the squared error, Linex and general entropy loss functions. Since Lindley's method cannot construct intervals, we utilized Gibbs sampling together with the Metropolis–Hastings sampling procedure to construct the Bayesian credible intervals of the unknown parameters and the entropy. A Monte Carlo simulation was provided to assess all the estimation results, which illustrated that the proposed methods performed well. The applicability of the considered model in a real situation was illustrated using the data on March precipitation in Minneapolis–Saint Paul, and it was observed that the model could be utilized to analyze these real data appropriately.

**Author Contributions:** Methodology and writing, X.S.; supervision, Y.S.; simulation study, K.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is supported by the National Natural Science Foundation of China (71571144, 71401134, 71171164, 11701406) and the Program of International Cooperation and Exchanges in Science and Technology funded by Shaanxi Province (2016KW-033).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not available.

**Acknowledgments:** The authors would like to thank the editors and the anonymous reviewers.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Proof of Theorem 1**

We set *y* = exp(−*βx*<sup>*λ*</sup>); then 0 < *y* < 1. The cumulative distribution function of the GB distribution can be written as

$$F(x; \beta, \lambda) = 1 - 3y^2 + 2y^3, 0 < y < 1$$

By setting *u* = 1 − 3*y*<sup>2</sup> + 2*y*<sup>3</sup>, 0 < *u* < 1, we get 3*y*<sup>2</sup> − 2*y*<sup>3</sup> + *u* − 1 = 0, 0 < *y* < 1. Set *ρ*(*y*) = 3*y*<sup>2</sup> − 2*y*<sup>3</sup> + *u* − 1 and take the first derivative of *ρ*(*y*) with respect to *y*; we have d*ρ*(*y*)/d*y* = 6*y* − 6*y*<sup>2</sup> > 0 for 0 < *y* < 1.

Notice that *ρ*(*y*) is a monotonically increasing function when 0 < *y* < 1. Thus, there is a unique solution to the equation 3*y*<sup>2</sup> − 2*y*<sup>3</sup> + *u* − 1 = 0 when 0 < *y* < 1. As such, we have proven that the equation *X*<sub>*i*:*m*:*n*</sub> = *F*<sup>−1</sup>(*U*<sub>*i*</sub>) has a unique solution (*i* = 1, 2, . . . , *m*).
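Because the root is unique and *ρ* is increasing, *F* can be inverted numerically by bisection, which is also how GB samples can be generated from uniform variates. A minimal sketch (the helper names `gb_cdf` and `gb_quantile` are our own):

```python
import numpy as np

def gb_cdf(x, beta, lam):
    """CDF of the GB distribution: F(x) = 1 - 3y^2 + 2y^3 with y = exp(-beta*x^lam)."""
    y = np.exp(-beta * x**lam)
    return 1.0 - 3.0 * y**2 + 2.0 * y**3

def gb_quantile(u, beta, lam, tol=1e-10):
    """Invert F by bisection; the proof above guarantees a unique root."""
    # First solve rho(y) = 3y^2 - 2y^3 + u - 1 = 0 for y in (0, 1) ...
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 3.0 * mid**2 - 2.0 * mid**3 + u - 1.0 > 0.0:  # rho is increasing in y
            hi = mid
        else:
            lo = mid
    y = 0.5 * (lo + hi)
    # ... then map back through y = exp(-beta*x^lam).
    return (-np.log(y) / beta) ** (1.0 / lam)
```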

#### **Appendix B. The Specific Steps of the Newton–Raphson Iteration Method**

**Step 1:** Give the initial values of *θ* = (*β*, *λ*); that is, *θ*<sup>(0)</sup> = (*β*<sup>(0)</sup>, *λ*<sup>(0)</sup>).

**Step 2:** In the *k*th iteration, calculate ∂*l*/∂*β* and ∂*l*/∂*λ* evaluated at *β* = *β*<sup>(*k*)</sup>, *λ* = *λ*<sup>(*k*)</sup>, together with *I*(*β*<sup>(*k*)</sup>, *λ*<sup>(*k*)</sup>), where

$$I(\beta^{(k)}, \lambda^{(k)}) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix}\Bigg|_{\beta = \beta^{(k)},\ \lambda = \lambda^{(k)}}$$

is the observed information matrix of the parameters *β* and *λ*, and the entries *I*<sub>*ij*</sub>, *i*, *j* = 1, 2, are given by Equations (10)–(13).

**Step 3:** Update (*β*, *λ*)<sup>*T*</sup> with

$$\left(\beta^{(k+1)}, \lambda^{(k+1)}\right)^{T} = \left(\beta^{(k)}, \lambda^{(k)}\right)^{T} + I^{-1}(\beta^{(k)}, \lambda^{(k)}) \left(\frac{\partial l}{\partial \beta}, \frac{\partial l}{\partial \lambda}\right)^{T}\Bigg|_{\beta = \beta^{(k)},\ \lambda = \lambda^{(k)}}$$

Here, (*β*, *λ*)<sup>*T*</sup> is the transpose of the vector (*β*, *λ*), and *I*<sup>−1</sup>(*β*<sup>(*k*)</sup>, *λ*<sup>(*k*)</sup>) represents the inverse of the matrix *I*(*β*<sup>(*k*)</sup>, *λ*<sup>(*k*)</sup>).

**Step 4:** Setting *k* = *k* + 1, the MLEs of the parameters (denoted by *β*ˆ and *λ*ˆ) can be obtained by repeating Steps 2 and 3 until ||(*β*<sup>(*k*+1)</sup>, *λ*<sup>(*k*+1)</sup>)<sup>*T*</sup> − (*β*<sup>(*k*)</sup>, *λ*<sup>(*k*)</sup>)<sup>*T*</sup>|| < *ε*, where *ε* is a threshold value fixed in advance.
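Steps 1–4 can be sketched generically as follows. Here `grad` and `hess_obs` stand for user-supplied functions returning the score vector (∂*l*/∂*β*, ∂*l*/∂*λ*) and the observed information *I*; in the usage below, a toy quadratic log-likelihood stands in for Equations (10)–(13), which are not reproduced here.

```python
import numpy as np

def newton_raphson(grad, hess_obs, theta0, eps=1e-8, max_iter=100):
    """Two-parameter Newton-Raphson iteration as in Steps 1-4:
    theta^(k+1) = theta^(k) + I^{-1}(theta^(k)) * grad l(theta^(k)),
    stopping once the update norm falls below the threshold eps."""
    theta = np.asarray(theta0, float)
    for _ in range(max_iter):
        # Solve I(theta) * step = grad l(theta) instead of inverting I explicitly.
        step = np.linalg.solve(hess_obs(theta), grad(theta))
        theta_new = theta + step
        if np.linalg.norm(theta_new - theta) < eps:
            return theta_new
        theta = theta_new
    return theta

# Usage with a toy log-likelihood l(b, l) = -(b - 2)^2 - 2*(l - 1)^2:
score = lambda t: np.array([-2.0 * (t[0] - 2.0), -4.0 * (t[1] - 1.0)])
info = lambda t: np.array([[2.0, 0.0], [0.0, 4.0]])  # observed information = -Hessian of l
theta_hat = newton_raphson(score, info, [0.5, 0.5])
```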
