*2.5. Entropy*

Entropy is a measure of the uncertainty of a random variable. The entropy of a discrete random variable *X* with pmf *p*(*x*) and alphabet $\mathcal{X}$ is given by

$$\mathbb{H}(X) = -E(\log p(X)) = -\sum\_{x \in \mathcal{X}} p(x) \log(p(x)).$$

Entropy can be interpreted as the measure of average uncertainty in *X* or the average number of bits needed to describe *X*. For more details on entropy and information theory, we refer the reader to Gray [23].
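As a concrete illustration of this definition, the short Python sketch below (ours, not part of the original text) evaluates H(*X*) for any finite pmf; the function name `entropy` is illustrative. Natural logarithms are used, so values are in nats, and dividing by log 2 converts them to bits.

```python
import numpy as np

def entropy(pmf):
    """Shannon entropy H(X) = -sum_x p(x) log p(x) of a discrete pmf.

    Natural logarithms are used, so the result is in nats; divide by
    np.log(2) to convert to bits. Zero-probability outcomes are skipped
    (0 log 0 is taken to be 0).
    """
    pmf = np.asarray(pmf, dtype=float)
    pmf = pmf[pmf > 0]
    return float(-np.sum(pmf * np.log(pmf)))

# A fair coin: log 2 ≈ 0.693 nats (exactly 1 bit) of uncertainty.
print(entropy([0.5, 0.5]))
# A heavily biased coin is far less uncertain.
print(entropy([0.95, 0.05]))
```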

Now, if *X* ∼ BNDL(*p*), then the entropy of the random variable *X* can be calculated by the following formula

$$\begin{split} \mathbb{H}(X) &= \frac{1}{\left(2-p\right)^{2}(1+p)} \left\{ \left(2-p\right)^{2} \left[ \left(-2+p+p^{2}\right) \log(1-p) + \left(4+p-p^{2}\right) \log(2-p) + (1+p)\log(1+p) \right] \right. \\ &\qquad \left. + \operatorname{LerchPhi}^{\left(0,1,0\right)}\left[ \frac{1-p}{2-p},\, -1,\, 1+2p-p^{2} \right] \right\}, \end{split}$$

where $\operatorname{LerchPhi}^{(0,1,0)}[z, s, a]$ denotes the derivative with respect to *s* of the Lerch transcendent $\Phi(z, s, a) = \sum\_{k=0}^{\infty} z^{k}/(a+k)^{s}$. Table 2 presents some numerical values of the entropy of *X* ∼ BNDL(*p*) for different choices of *p*. From Table 2, one can observe that H(*X*) is monotonically decreasing in *p* ∈ (0, 1), with its limits tending to 1.88 as *p* → 0 and to 0 as *p* → 1.
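The closed form above can be cross-checked by direct truncated summation of −∑ *f*(*x*) log *f*(*x*) over the BNDL pmf, the same pmf that appears in the likelihood of Section 3.1. The sketch below is ours, assumes natural logarithms, and uses the illustrative names `bndl_pmf` and `bndl_entropy`.

```python
import numpy as np

def bndl_pmf(x, p):
    """BNDL pmf f(x) = (1-p)^x (1 + x + 2p - p^2) / [(p+1)(2-p)^(x+2)], x = 0, 1, 2, ...

    Rewritten with q = (1-p)/(2-p) so that large x underflows to 0
    instead of overflowing (2-p)^(x+2).
    """
    q = (1 - p) / (2 - p)
    return q ** x * (1 + x + 2 * p - p ** 2) / ((p + 1) * (2 - p) ** 2)

def bndl_entropy(p, x_max=1000):
    """H(X) in nats by direct summation; q <= 1/2, so the tail beyond x_max is negligible."""
    x = np.arange(x_max + 1)
    f = bndl_pmf(x, p)
    f = f[f > 0]
    return float(-np.sum(f * np.log(f)))

# H(X) decreases in p, from about 1.88 near p = 0 towards 0 near p = 1.
for p in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"p = {p:4.2f}   H(X) ≈ {bndl_entropy(p):.4f}")
```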

**Table 2.** Numerical results of H(*X*) for different values of the parameter *p*.


Figure 5 plots H(*X*) against the parameter *p*. One may note that H(*X*) is monotonically decreasing in *p* ∈ (0, 1), with its limit tending to zero as *p* tends to 1.

**Figure 5.** H(*X*) versus *p*.

#### **3. Estimation and Simulation**

In this section, we estimate the unknown parameter *p* by the methods of maximum likelihood, moments, and proportions.

#### *3.1. Method of Maximum Likelihood Estimation*

Let $x\_1, x\_2, \ldots, x\_n$ be the observed values of a random sample from the BNDL distribution with parameter *p*. The likelihood and log-likelihood functions are given, respectively, by

$$L(p) = \prod\_{i=1}^{n} f(x\_i) = \prod\_{i=1}^{n} \frac{(1-p)^{x\_i} \left(1 + x\_i + 2p - p^2\right)}{(p+1)(2-p)^{x\_i+2}}$$

and


$$l(p) = \log(1 - p) \sum\_{i=1}^{n} x\_i + \sum\_{i=1}^{n} \log\left(1 + x\_i + 2p - p^2\right) - n\log(p + 1) - 2n\log(2 - p) - \log(2 - p)\sum\_{i=1}^{n} x\_i.$$
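As a rough numerical illustration (ours, not part of the original derivation), the log-likelihood above can be maximized directly over (0, 1), for instance with SciPy's bounded scalar minimizer. The sample values below are made up purely for illustration, and the root-finding sketch after the score equation below should agree with this direct maximization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_lik(p, x):
    """BNDL log-likelihood l(p), written exactly as in the display above."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (np.log(1 - p) * x.sum()
            + np.sum(np.log(1 + x + 2 * p - p ** 2))
            - n * np.log(p + 1)
            - 2 * n * np.log(2 - p)
            - np.log(2 - p) * x.sum())

# Made-up counts, for illustration only.
sample = [0, 1, 1, 2, 0, 3, 1, 0, 2, 4]
res = minimize_scalar(lambda p: -log_lik(p, sample),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"p_hat ≈ {res.x:.4f}")
```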

The maximum likelihood estimate (MLE) of the parameter *p* can be obtained by solving the following equation numerically:

$$\frac{\partial l(p)}{\partial p} = \frac{3pn}{2+p-p^2} - \frac{\sum\_{i=1}^{n} x\_i}{2-3p+p^2} + 2\sum\_{i=1}^{n} \frac{1-p}{1+2p-p^2+x\_i} = 0$$
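Equivalently, the MLE is the root of this score equation in (0, 1). Below is a minimal sketch of ours using SciPy's `brentq` root-finder on the same made-up sample as before; its output should agree with the direct maximization of l(p) above.

```python
import numpy as np
from scipy.optimize import brentq

def score(p, x):
    """dl/dp for the BNDL log-likelihood, as in the score equation above."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (3 * p * n / (2 + p - p ** 2)
            - x.sum() / (2 - 3 * p + p ** 2)
            + 2 * np.sum((1 - p) / (1 + 2 * p - p ** 2 + x)))

# Made-up counts, for illustration only (same toy sample as above).
# brentq needs a sign change over the bracket; for this sample the score
# is positive near p = 0 and negative near p = 1, so an interior root exists.
sample = [0, 1, 1, 2, 0, 3, 1, 0, 2, 4]
p_hat = brentq(score, 1e-6, 1 - 1e-6, args=(sample,))
print(f"p_hat ≈ {p_hat:.4f}")
```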
