*3.4. Entropy*

The Rényi entropy of a positive random variable *X*, which measures the variation of uncertainty, is defined by

$$\begin{split} I\_{\rho}(X) &= \frac{1}{1-\rho} \log \left( \int\_{0}^{\infty} f^{\rho}(x; \lambda, \beta, a)\, dx \right); \quad \rho > 0 \text{ and } \rho \neq 1, \\ &= \frac{1}{1-\rho} \log \left( \sum\_{k,l=0}^{\infty} \Omega\_{k,l} \int\_{0}^{\infty} g^{\rho}(x)\, G^{k+l}(x)\, dx \right) \\ &= \frac{1}{1-\rho} \log \left( \sum\_{k,l=0}^{\infty} \Omega\_{k,l} \frac{f^{1-\rho}\, \Gamma(2\rho-1)}{\left(\rho+k+l\right)^{2\rho-1}} \right), \end{split} \tag{16}$$

where (2*ρ* − 1) ≠ −1, −2, −3, . . . and

$$
\Omega\_{k,l} = (2a\lambda)^{\rho} \sum\_{j,i=0}^{\infty} \frac{(-1)^{j+k+l} (\lambda(i+j+1))^k}{k!} \binom{-a\rho-\rho}{i} \binom{\rho a-\rho}{j} \binom{-k-2\rho}{l} \dots
$$
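The definition in (16) can be checked numerically. The sketch below approximates $I\_{\rho}(X) = \frac{1}{1-\rho}\log\int_0^\infty f^\rho(x)\,dx$ by the trapezoid rule for an exponential density; the exponential baseline, the truncation point, and the grid size are illustrative assumptions, not part of the paper's model. For Exp(*λ*) the closed form is $I\_{\rho} = -\log\lambda + \frac{\log\rho}{\rho-1}$, which tends to the Shannon value $1-\log\lambda$ as *ρ* → 1.

```python
import math

def renyi_entropy(f, rho, upper=50.0, n=200_000):
    """Approximate I_rho = 1/(1-rho) * log( int_0^inf f(x)^rho dx )
    by the trapezoid rule on the truncated interval [0, upper]."""
    h = upper / n
    s = 0.5 * (f(0.0) ** rho + f(upper) ** rho)
    for i in range(1, n):
        s += f(i * h) ** rho
    return math.log(s * h) / (1.0 - rho)

lam = 2.0                                 # illustrative Exp(lambda) density
f = lambda x: lam * math.exp(-lam * x)

print(renyi_entropy(f, 2.0))              # closed form gives exactly 0 here
print(renyi_entropy(f, 1.001))            # near the Shannon limit 1 - log(2)
```

Taking *ρ* close to 1 recovers the Shannon entropy numerically, illustrating the limiting relation stated below.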

Binomial coefficients with negative arguments are computed using built-in functions in Maple. The Shannon entropy is the limiting case of the Rényi entropy as *ρ* → 1. Tables 7–9 list some numerical values of the entropies computed in Maple.
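Outside Maple, the generalized binomial coefficient $\binom{r}{k} = \frac{r(r-1)\cdots(r-k+1)}{k!}$ needed in $\Omega\_{k,l}$ is easy to implement directly for real (possibly negative) upper argument *r*; a minimal sketch:

```python
from math import factorial

def gbinom(r, k):
    """Generalized binomial coefficient C(r, k) for real (possibly
    negative) upper argument r and nonnegative integer k, via the
    falling factorial r(r-1)...(r-k+1) / k!."""
    num = 1.0
    for i in range(k):
        num *= (r - i)
    return num / factorial(k)

# negative-upper-argument identity: C(-n, k) = (-1)^k C(n+k-1, k)
print(gbinom(-3, 2))   # 6.0, matching (-1)^2 * C(4, 2)
```

The identity in the comment gives a quick consistency check against ordinary binomial coefficients.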

**Table 7.** Entropies for different values of *β*.


**Table 8.** Entropies for different values of *α*.


**Table 9.** Entropies for different values of *λ*.


From Tables 7–9, the entropy increases in two cases: for fixed *α* and *λ* as *β* → ∞, and for fixed *β* and *λ* as *α* → ∞. In contrast, the entropy decreases for fixed *α* and *β* as *λ* → ∞.
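The decreasing trend in *λ* has a simple analogue for an exponential baseline, used here only as an illustrative stand-in for the paper's distribution: the closed-form Rényi entropy of Exp(*λ*), $I\_{\rho} = -\log\lambda + \frac{\log\rho}{\rho-1}$, is strictly decreasing in the rate *λ* for any fixed order *ρ*.

```python
import math

def exp_renyi_entropy(lam, rho):
    """Closed-form Renyi entropy of an Exp(lam) density:
    I_rho = -log(lam) + log(rho)/(rho - 1), for rho > 0, rho != 1."""
    return -math.log(lam) + math.log(rho) / (rho - 1.0)

# entropy falls monotonically as the rate lambda grows, for fixed rho
vals = [exp_renyi_entropy(l, rho=2.0) for l in (0.5, 1.0, 2.0, 4.0)]
print(vals)
```

This mirrors, for a simple special case, the direction of the *λ* trend observed in the tables.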
