*Article* **On Geometric Mean and Cumulative Residual Entropy for Two Random Variables with Lindley Type Distribution**

**Marius Giuclea 1,2,\* and Costin-Ciprian Popescu 1**


**Abstract:** In this paper, we focus on two generalizations of the Lindley distribution and investigate, for each one separately, some special properties related to the geometric mean (*GM*) and the cumulative residual entropy (*CRE*), both of them being of great importance from the theoretical as well as from the practical point of view.

**Keywords:** random variable; mean; geometric mean; entropy; cumulative residual entropy; Lindley distribution

**MSC:** 60E05

#### **1. Introduction**

One of the most widely used numerical characteristics of a random variable is its mean. If *X* is a continuous random variable whose values are strictly positive and the probability density function of *X* is *f*(*x*), then the geometric mean [1,2] is

$$GM(X) = e^{\int\_0^{\infty} (\ln x) f(x) dx}, \tag{1}$$

where *x* > 0.
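As an illustrative numerical check (not part of the original article), definition (1) can be evaluated by quadrature. The helper name `geometric_mean` and the use of NumPy/SciPy are assumptions of this sketch; the exponential distribution is used because its geometric mean has the known closed form *e*<sup>−γ</sup>/λ.

```python
import numpy as np
from scipy.integrate import quad

def geometric_mean(pdf):
    # GM(X) = exp( int_0^inf (ln x) f(x) dx ), as in Equation (1);
    # the integral is split at 1 so the logarithmic singularity at 0
    # sits on a finite subinterval.
    a, _ = quad(lambda x: np.log(x) * pdf(x), 0, 1)
    b, _ = quad(lambda x: np.log(x) * pdf(x), 1, np.inf)
    return np.exp(a + b)

# Exponential distribution with rate lam: GM = exp(-euler_gamma) / lam.
lam = 2.0
gm_numeric = geometric_mean(lambda x: lam * np.exp(-lam * x))
gm_closed = np.exp(-np.euler_gamma) / lam
print(gm_numeric, gm_closed)  # the two values agree closely
```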

The concept of geometric mean has various uses [1,3–7] in many fields of science. A detailed approach can be found in [1]. The formulas for the geometric mean of some probability distributions are also provided in [1]. In the present work, one of the topics of discussion is the geometric mean of two continuous random variables that will be specified in the next section.

Another perspective on a random variable is provided by information theory. In this framework, a central role is played by the concept of entropy, which is a measure of uncertainty. If *X* is a discrete random variable with possible values *xi*, *i* = 1, ..., *n*, *n* ∈ N<sup>∗</sup> and

$$p\_i = P(X = x\_i),\ i \in \{1, ..., n\},$$

then the Shannon entropy of *X* [8] is

$$H(X) = -\sum\_{i=1}^{n} p\_i \log\_a p\_i. \tag{2}$$

The base of the logarithm can be 2 but, more generally, it can be chosen depending on the application. If the base is equal to the number *e*, then we obtain

$$H(X) = -\sum\_{i=1}^{n} p\_i \ln p\_i. \tag{3}$$
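A small numerical illustration of (2) and (3), not part of the original article; the function name `shannon_entropy` and the NumPy dependency are assumptions of this sketch. A fair coin has entropy exactly 1 bit in base 2, i.e. ln 2 nats in base *e*.

```python
import numpy as np

def shannon_entropy(p, base=np.e):
    # H(X) = -sum p_i log_a p_i, per Equations (2)-(3);
    # zero-probability outcomes are dropped, since p log p -> 0 as p -> 0.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

print(shannon_entropy([0.5, 0.5], base=2))  # 1 bit for a fair coin
print(shannon_entropy([0.5, 0.5]))          # the same value in nats, ln 2
```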

**Citation:** Giuclea, M.; Popescu, C.-C. On Geometric Mean and Cumulative Residual Entropy for Two Random Variables with Lindley Type Distribution. *Mathematics* **2022**, *10*, 1499. https://doi.org/10.3390/ math10091499

Academic Editor: Christophe Chesneau

Received: 23 March 2022 Accepted: 28 April 2022 Published: 30 April 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

If *X* is a continuous random variable with the probability density function *f*(*x*) and *D* is the set where *f*(*x*) is strictly positive, then the differential entropy of *X* [9] is

$$h(X) = -\int\_{D} f(x) \ln f(x) dx.\tag{4}$$

The differential entropy of a continuous random variable has some interesting properties [9] but, compared to the Shannon entropy of the discrete case, it has certain limitations [10] that must be taken into account. For example, the Shannon entropy is nonnegative, but the differential entropy does not always have this property. To overcome such inconveniences, another measure of uncertainty was proposed [10], namely the cumulative residual entropy. If *X* is a non-negative random variable with cumulative distribution function *F*(*x*), then the cumulative residual entropy of *X* is

$$\mathcal{E}(X) = -\int\_0^{\infty} \overline{F}(x) \ln \overline{F}(x) dx,\tag{5}$$

where

$$
\overline{F}(\mathbf{x}) = 1 - F(\mathbf{x}).\tag{6}
$$
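As a numerical illustration of (5) and (6), not part of the original article, the *CRE* of an exponential distribution can be checked against its known value 1/λ (for the exponential, the survival function is *e*<sup>−λx</sup>). The helper name `cre` and the NumPy/SciPy usage are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import quad

def cre(survival):
    # E(X) = -int_0^inf Fbar(x) ln Fbar(x) dx, per Equations (5)-(6);
    # where Fbar underflows to 0 the integrand is set to 0, since s ln s -> 0.
    def integrand(x):
        s = survival(x)
        return -s * np.log(s) if s > 0 else 0.0
    val, _ = quad(integrand, 0, np.inf)
    return val

lam = 1.5
cre_exp = cre(lambda x: np.exp(-lam * x))
print(cre_exp, 1 / lam)  # both equal 1/lam
```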

In [10], some properties of the cumulative residual entropy are given and the relationship between it and the differential entropy is established. Also in [10], the usefulness of *CRE* in reliability engineering and computer vision is shown. In various works, the concept of *CRE* is a good starting point for obtaining new and interesting results. For instance, in [11], the Bayesian estimator of the dynamic cumulative residual Rényi entropy is discussed. In [12], some properties of the dynamic cumulative residual entropy are studied, and in [13] the *CRE* is investigated for coherent and mixed systems whose component lifetimes are identically distributed. In [14], the *CRE* of fractional order is introduced and its properties are given, and in [15] a consistent estimator of the *CRE* is proposed, whose asymptotic distribution is normal.

The Lindley distribution [16,17] is important not only for its direct applications but also for the many theoretical developments that have followed it. For instance, in [17], some of its characteristics, such as moments and entropies, are extensively studied. In addition, the Lindley distribution is proposed for modeling the waiting time in a bank [17]. The probability density function of the Lindley distribution is

$$f(x; \theta) : (0, \infty) \to \mathbb{R},\ f(x; \theta) = \frac{\theta^2}{\theta + 1} (1 + x) e^{-\theta x},\tag{7}$$

with *θ* > 0.

The cumulative distribution function of the Lindley distribution [17] is

$$F(x; \theta) = 1 - \frac{1 + \theta + \theta x}{1 + \theta} e^{-\theta x},\ x > 0.$$
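The density (7) and the cumulative distribution function above can be sanity-checked numerically; this sketch is not part of the original article, and the function names and the NumPy/SciPy usage are assumptions. It verifies that the density integrates to 1 and that the derivative of *F* matches *f*.

```python
import numpy as np
from scipy.integrate import quad

def lindley_pdf(x, theta):
    # Equation (7)
    return theta**2 / (theta + 1) * (1 + x) * np.exp(-theta * x)

def lindley_cdf(x, theta):
    # CDF of the Lindley distribution, x > 0
    return 1 - (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x)

theta = 0.8
total, _ = quad(lindley_pdf, 0, np.inf, args=(theta,))
h = 1e-5
deriv_check = (lindley_cdf(2.0 + h, theta) - lindley_cdf(2.0 - h, theta)) / (2 * h)
print(total)                                 # close to 1
print(deriv_check, lindley_pdf(2.0, theta))  # F'(x) matches f(x)
```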

Regarding the developments based on the Lindley distribution, it is worth noting the introduction of new random variables [18–27]. In [18], two new families of distributions with applications in repairable data are considered. A new model, namely the generalized Lindley of integer order, is given in [19] and its application in studying some medical data is also emphasized. In [20], a new distribution that can be used in insurance is proposed. The model of distribution discussed in [21] is suitable for reliability and fatigue life problems. In [22], a three-parameter Lindley distribution is introduced. A five-parameter generalized Lindley distribution is given in [23]. It was used in the study of four data sets, among them a set of medical data and a set of data regarding the strength of glass in a certain environment [23]. A discrete Lindley distribution is given in [24]. It is compared with the geometric and Poisson distributions and its usefulness in analyzing some data sets, including medical data, is studied. A Lindley distribution of discrete type is given in [25] and it is employed in the study of automobile claim data, a situation in which it is compared

with the Poisson model. In [26], a distribution called the exponential-modified discrete Lindley distribution is proposed and used in modeling exceedances of flood peaks for a river or the period between earthquakes of a certain magnitude. The three-parameter Lindley distribution given in [22] is considered in [27], where some medical data are modeled. In the present paper, two continuous distributions [22,23] that generalize the Lindley distribution are discussed. Following the results already obtained [22,23], some new relationships regarding these two distributions are given.

#### **2. Materials and Methods**

This work focuses on two random variables related to the Lindley distribution: a continuous random variable with three parameters [22] and one with five parameters [23]. For each one, the geometric mean and the cumulative residual entropy will be determined. There is a relationship between the cumulative residual entropy and the differential entropy [10], but in this paper the formulas for the cumulative residual entropy will be deduced using only its definition. For both random variables, we will consider that all parameters are strictly positive, except for *β*, which is nonnegative. The three-parameter Lindley distribution *X* [22] has the probability density function

$$f\_X(x; \theta, \alpha, \mu) : (0, \infty) \to \mathbb{R},\ f\_X(x; \theta, \alpha, \mu) = \frac{\theta^2}{\theta\alpha + \mu} (\alpha + \mu x) e^{-\theta x}.\tag{8}$$

The corresponding cumulative distribution function is [22]

$$F\_X(x;\theta,\alpha,\mu):\mathbb{R}\to\mathbb{R},\ F\_X(x;\theta,\alpha,\mu)=\begin{cases}1-\left(1+\frac{\theta\mu x}{\theta\alpha+\mu}\right)e^{-\theta x},& x>0\\0,& x\leq 0\end{cases}$$

The five-parameter Lindley distribution *Y* [23] has the probability density function

$$f\_Y(y; \delta, \alpha, \eta, \theta, \beta) : (\beta, \infty) \to \mathbb{R},\ f\_Y(y; \delta, \alpha, \eta, \theta, \beta) = \frac{\theta}{\delta\alpha + \eta} [\delta\alpha + \eta \theta(y - \beta)] e^{-\theta(y - \beta)}.\tag{9}$$

In this case, the cumulative distribution function is [23]

$$F\_Y(y;\delta,\alpha,\eta,\theta,\beta) : \mathbb{R} \to \mathbb{R},\ F\_Y(y;\delta,\alpha,\eta,\theta,\beta) = \begin{cases} 1 - \left[1 + \frac{\theta\eta(y-\beta)}{\delta\alpha + \eta} \right] e^{-\theta(y-\beta)},& y > \beta \\ 0,& y \le \beta \end{cases}$$

The three-parameter distribution [22] can be viewed as a sub-model of the five-parameter distribution [23] because the five-parameter distribution reduces to the three-parameter distribution for *β* = 0, *δ* = *θ* and *η* = *μ* [23]. Some details about the relations between the parameters of these two random variables are given in [23].
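The reduction described above is easy to check numerically; the following sketch is not part of the original article, and the function names and the NumPy dependency are assumptions. It evaluates both densities on a grid and confirms they coincide under *β* = 0, *δ* = *θ*, *η* = *μ*.

```python
import numpy as np

def f_X(x, theta, alpha, mu):
    # Equation (8): three-parameter Lindley density
    return theta**2 / (theta * alpha + mu) * (alpha + mu * x) * np.exp(-theta * x)

def f_Y(y, delta, alpha, eta, theta, beta):
    # Equation (9): five-parameter Lindley density
    return (theta / (delta * alpha + eta)
            * (delta * alpha + eta * theta * (y - beta))
            * np.exp(-theta * (y - beta)))

theta, alpha, mu = 1.3, 0.7, 2.1
xs = np.linspace(0.01, 10, 200)
# With beta = 0, delta = theta, eta = mu, the five-parameter density reduces to (8).
diff = np.max(np.abs(f_Y(xs, theta, alpha, mu, theta, 0.0) - f_X(xs, theta, alpha, mu)))
print(diff)  # essentially zero
```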

In the next section of the paper, some notions and results related to mathematical analysis will be used. These are briefly presented below.

The Euler–Mascheroni constant is

$$\gamma = \lim\_{n \to \infty} \left( \sum\_{k=1}^{n} \frac{1}{k} - \ln n \right) \approx 0.57721$$

and one of the ways this constant can be written [28] is

$$
\gamma = -\int\_0^\infty e^{-x} \ln x dx. \tag{10}
$$

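The integral representation (10) can be verified numerically; this sketch is not part of the original article, and the NumPy/SciPy usage is an assumption. The integral is split at 1 so the logarithmic singularity at 0 sits on a finite subinterval.

```python
import numpy as np
from scipy.integrate import quad

# gamma = - int_0^inf exp(-x) ln(x) dx, per Equation (10)
a, _ = quad(lambda x: np.exp(-x) * np.log(x), 0, 1)
b, _ = quad(lambda x: np.exp(-x) * np.log(x), 1, np.inf)
gamma_numeric = -(a + b)
print(gamma_numeric, np.euler_gamma)  # both approximately 0.57721
```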

If *p* > 0, the gamma function [29] is defined as

$$
\Gamma(p) = \int\_0^\infty x^{p-1} e^{-x} dx.\tag{11}
$$

Among the many properties of the gamma function [29], there are the following relationships:

$$
\Gamma(1) = \int\_0^\infty e^{-x} dx = 1 \tag{12}
$$

and

$$
\Gamma(n) = (n-1)!,\ \text{for } n \in \mathbb{N},\ n \ge 2. \tag{13}
$$

The integral

$$E\_1(\mathbf{x}) = \int\_{\mathbf{x}}^{\infty} \frac{e^{-t}}{t} dt \tag{14}$$

is related to the exponential integral [30].
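SciPy ships this function as `scipy.special.exp1`; comparing it against direct quadrature of (14) makes a quick check. This sketch is not part of the original article.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # E_1(x) = int_x^inf exp(-t)/t dt

x = 0.5
e1_numeric, _ = quad(lambda t: np.exp(-t) / t, x, np.inf)
e1_scipy = exp1(x)
print(e1_numeric, e1_scipy)  # the two values agree closely
```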

#### **3. Results**

**Theorem 1.** *If X is a random variable having the probability density function*

$$f\_X(x; \theta, \alpha, \mu) : (0, \infty) \to \mathbb{R},\ f\_X(x; \theta, \alpha, \mu) = \frac{\theta^2}{\theta\alpha + \mu} (\alpha + \mu x) e^{-\theta x},$$

*with θ* > 0*, α* > 0*, μ* > 0*, then*

$$GM(X) = \frac{1}{\theta} e^{\frac{\mu}{\theta\alpha + \mu} - \gamma}, \tag{15}$$

*where γ is the Euler–Mascheroni constant.*

**Proof.** We have

$$GM(X) = e^{I\_1},$$

where

$$I\_1 = \int\_0^{\infty} (\ln x) f\_X(x; \theta, \alpha, \mu)\, dx.$$

Consider the integrals

$$J\_1 = \int\_0^{\infty} (\ln x) e^{-\theta x} dx,\quad J\_2 = \int\_0^{\infty} x(\ln x) e^{-\theta x} dx.$$

We have

$$\begin{aligned} J\_1 &= \int\_0^\infty (\ln x) e^{-\theta x} dx = \int\_0^\infty \left( \ln \frac{t}{\theta} \right) e^{-t} \frac{1}{\theta} dt = \\ &= \frac{1}{\theta} \left[ \int\_0^\infty (\ln t) e^{-t} dt - \ln \theta \int\_0^\infty e^{-t} dt \right] = \frac{-\gamma - \ln \theta}{\theta}. \end{aligned}$$

Consider

$$J\_{21} = \int\_0^w \mathbf{x}(\ln \mathbf{x}) e^{-\theta \mathbf{x}} d\mathbf{x},\; J\_{22} = \int\_w^\infty \mathbf{x}(\ln \mathbf{x}) e^{-\theta \mathbf{x}} d\mathbf{x},$$

where *w* ∈ (0, ∞).

We have

$$\begin{aligned} J\_{21} &= \lim\_{\substack{u \to 0 \\ u>0}} \int\_{u}^{w} x(\ln x) e^{-\theta x} dx = \lim\_{\substack{u \to 0 \\ u>0}} \int\_{u}^{w} x(\ln x) \left(\frac{e^{-\theta x}}{-\theta}\right)' dx = \\ &= \lim\_{\substack{u \to 0 \\ u>0}} \left\{ -\frac{1}{\theta} \left[ w(\ln w) e^{-\theta w} - u(\ln u) e^{-\theta u} \right] + \frac{1}{\theta} \int\_{u}^{w} (1 + \ln x) e^{-\theta x} dx \right\} = \\ &= -\frac{1}{\theta} w(\ln w) e^{-\theta w} + \frac{1}{\theta} \lim\_{\substack{u \to 0 \\ u>0}} \int\_{\theta u}^{\theta w} \left(1 + \ln \frac{t}{\theta}\right) e^{-t} \frac{1}{\theta} dt = \\ &= -\frac{1}{\theta} w(\ln w) e^{-\theta w} + \frac{1}{\theta^2} \lim\_{\substack{u \to 0 \\ u>0}} \left[ \int\_{\theta u}^{\theta w} (\ln t) e^{-t} dt + (1 - \ln \theta) \int\_{\theta u}^{\theta w} e^{-t} dt \right] \end{aligned}$$

and

$$\begin{split} J\_{22} &= \lim\_{v \to \infty} \int\_{w}^{v} x(\ln x) e^{-\theta x} dx = \lim\_{v \to \infty} \int\_{w}^{v} x(\ln x) \left( \frac{e^{-\theta x}}{-\theta} \right)' dx = \\ &= \lim\_{v \to \infty} \left\{ -\frac{1}{\theta} \left[ v(\ln v) e^{-\theta v} - w(\ln w) e^{-\theta w} \right] + \frac{1}{\theta} \int\_{w}^{v} (1 + \ln x) e^{-\theta x} dx \right\} = \\ &= \frac{1}{\theta} w(\ln w) e^{-\theta w} + \frac{1}{\theta} \lim\_{v \to \infty} \int\_{\theta w}^{\theta v} \left( 1 + \ln \frac{t}{\theta} \right) e^{-t} \frac{1}{\theta} dt = \\ &= \frac{1}{\theta} w(\ln w) e^{-\theta w} + \frac{1}{\theta^2} \lim\_{v \to \infty} \left[ \int\_{\theta w}^{\theta v} (\ln t) e^{-t} dt + (1 - \ln \theta) \int\_{\theta w}^{\theta v} e^{-t} dt \right]. \end{split}$$

We obtain

$$J\_2 = J\_{21} + J\_{22} = \frac{1}{\theta^2} \left[ \int\_0^\infty (\ln t) e^{-t} dt + (1 - \ln \theta) \int\_0^\infty e^{-t} dt \right] = \frac{1}{\theta^2} (-\gamma + 1 - \ln \theta).$$

Finally,

$$\begin{split} I\_{1} &= \int\_{0}^{\infty} (\ln x) f\_X(x;\theta,\alpha,\mu) dx = \frac{\theta^{2}}{\theta\alpha + \mu} \int\_{0}^{\infty} (\ln x)(\alpha + \mu x) e^{-\theta x} dx = \\ &= \frac{\theta^{2}}{\theta\alpha + \mu} (\alpha J\_{1} + \mu J\_{2}) = \frac{\theta^{2}}{\theta\alpha + \mu} \left( \alpha \frac{-\gamma - \ln \theta}{\theta} + \mu \frac{-\gamma + 1 - \ln \theta}{\theta^{2}} \right) = \\ &= -\ln \theta + \frac{\mu}{\theta\alpha + \mu} - \gamma \end{split}$$

and

$$GM(X) = e^{I\_1} = \frac{1}{\theta} e^{\frac{\mu}{\theta\alpha + \mu} - \gamma}.$$
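Theorem 1 can be confirmed numerically; this sketch is not part of the original article, and the parameter values and NumPy/SciPy usage are assumptions. It compares the closed form (15) against direct quadrature of the defining integral.

```python
import numpy as np
from scipy.integrate import quad

theta, alpha, mu = 1.7, 0.9, 2.3

def f_X(x):
    # Equation (8)
    return theta**2 / (theta * alpha + mu) * (alpha + mu * x) * np.exp(-theta * x)

# GM(X) = exp( int_0^inf (ln x) f_X(x) dx ), split at 1 for the log singularity
a, _ = quad(lambda x: np.log(x) * f_X(x), 0, 1)
b, _ = quad(lambda x: np.log(x) * f_X(x), 1, np.inf)
gm_numeric = np.exp(a + b)

# Equation (15): GM(X) = (1/theta) * exp( mu/(theta*alpha + mu) - gamma )
gm_theorem = np.exp(mu / (theta * alpha + mu) - np.euler_gamma) / theta
print(gm_numeric, gm_theorem)  # the two values agree closely
```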

**Theorem 2.** *If Y is a random variable having the probability density function*

$$f\_Y(y;\delta,\alpha,\eta,\theta,\beta):(\beta,\infty) \to \mathbb{R},\ f\_Y(y;\delta,\alpha,\eta,\theta,\beta) = \frac{\theta}{\delta\alpha+\eta}[\delta\alpha+\eta\theta(y-\beta)]e^{-\theta(y-\beta)},$$

*with δ*,*α*, *η*, *θ* ∈ (0, ∞)*, β* ∈ [0, ∞)*, then*

$$GM(Y) = \begin{cases} e^{I\_2},& \beta > 0\\ \frac{1}{\theta} e^{\frac{\eta}{\delta\alpha + \eta} - \gamma},& \beta = 0 \end{cases},\tag{16}$$

*where*

$$I\_2 = \ln \beta + \frac{\eta}{\delta \alpha + \eta} + \left(1 - \frac{\eta \theta \beta}{\delta \alpha + \eta}\right) e^{\theta \beta} E\_1(\theta \beta). \tag{17}$$

**Proof.** If *β* > 0, we have

$$I\_2 = \int\_{\beta}^{\infty} (\ln y) f\_Y(y; \delta, \alpha, \eta, \theta, \beta) dy = \frac{1}{\delta\alpha + \eta} \lim\_{v \to \infty} \int\_{\beta}^{v} (\ln y) [\delta\alpha + \eta\theta(y - \beta)]\, \theta e^{-\theta(y - \beta)} dy =$$

$$\begin{aligned} &= \frac{1}{\delta\alpha + \eta} \lim\_{v \to \infty} \int\_0^{\theta(v-\beta)} \left( \ln \frac{\theta\beta + z}{\theta} \right) (\delta\alpha + \eta z) e^{-z} dz = \\ &= \frac{1}{\delta\alpha + \eta} \lim\_{v \to \infty} \left\{ \left. \left( \ln \frac{\theta\beta + z}{\theta} \right) (\delta\alpha + \eta z) \left( -e^{-z} \right) \right|\_0^{\theta(v-\beta)} + \int\_0^{\theta(v-\beta)} \left[ \frac{\delta\alpha + \eta z}{\theta\beta + z} + \eta \ln \frac{\theta\beta + z}{\theta} \right] e^{-z} dz \right\} = \\ &= \frac{\delta\alpha \ln \beta}{\delta\alpha + \eta} + \frac{1}{\delta\alpha + \eta} \lim\_{v \to \infty} \int\_0^{\theta(v-\beta)} \left[ \eta + \frac{\delta\alpha - \eta\theta\beta}{\theta\beta + z} + \eta \ln \frac{\theta\beta + z}{\theta} \right] e^{-z} dz = \\ &= \frac{\eta \Gamma(1) + \delta\alpha \ln \beta}{\delta\alpha + \eta} + \frac{1}{\delta\alpha + \eta} \lim\_{v \to \infty} \left[ (\delta\alpha - \eta\theta\beta) \int\_0^{\theta(v-\beta)} \frac{e^{-z}}{\theta\beta + z} dz + \eta \int\_0^{\theta(v-\beta)} \left( \ln \frac{\theta\beta + z}{\theta} \right) e^{-z} dz \right], \end{aligned}$$

where the substitution *z* = *θ*(*y* − *β*) was used in the first step. For the last integral, another integration by parts gives

$$\int\_0^{\infty} \left( \ln \frac{\theta\beta + z}{\theta} \right) e^{-z} dz = \left. \left( \ln \frac{\theta\beta + z}{\theta} \right) \left( -e^{-z} \right) \right|\_0^{\infty} + \int\_0^{\infty} \frac{e^{-z}}{\theta\beta + z} dz = \ln \beta + \int\_0^{\infty} \frac{e^{-z}}{\theta\beta + z} dz,$$

so that

$$\begin{aligned} I\_2 &= \frac{\eta + (\delta\alpha + \eta) \ln \beta}{\delta\alpha + \eta} + \frac{\delta\alpha - \eta\theta\beta + \eta}{\delta\alpha + \eta} \int\_0^{\infty} \frac{e^{-z}}{\theta\beta + z} dz = \\ &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} \int\_{\theta\beta}^{\infty} \frac{e^{-t}}{t} dt = \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} E\_1(\theta\beta). \end{aligned}$$

If *β* = 0, we have

$$GM(Y) = e^{I\_3},$$

where

$$\begin{split} I\_3 &= \int\_0^{\infty} (\ln y) \frac{\theta}{\delta\alpha + \eta} (\delta\alpha + \eta\theta y) e^{-\theta y} dy = \frac{\theta}{\delta\alpha + \eta} (\delta\alpha J\_1 + \eta\theta J\_2) = \\ &= \frac{\theta}{\delta\alpha + \eta} \left( \delta\alpha \frac{-\gamma - \ln \theta}{\theta} + \eta\theta \frac{-\gamma + 1 - \ln \theta}{\theta^2} \right) = -\ln \theta + \frac{\eta}{\delta\alpha + \eta} - \gamma. \end{split}$$
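Theorem 2 (case *β* > 0) can also be confirmed numerically; this sketch is not part of the original article, and the parameter values and NumPy/SciPy usage are assumptions. It compares *e*<sup>*I*2</sup> from (16) and (17) against direct quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

delta, alpha, eta, theta, beta = 1.1, 0.6, 1.8, 1.4, 0.5

def f_Y(y):
    # Equation (9)
    return (theta / (delta * alpha + eta)
            * (delta * alpha + eta * theta * (y - beta))
            * np.exp(-theta * (y - beta)))

# GM(Y) = exp( int_beta^inf (ln y) f_Y(y) dy ); no singularity since beta > 0
num, _ = quad(lambda y: np.log(y) * f_Y(y), beta, np.inf)
gm_numeric = np.exp(num)

# Equation (17)
I2 = (np.log(beta) + eta / (delta * alpha + eta)
      + (1 - eta * theta * beta / (delta * alpha + eta))
      * np.exp(theta * beta) * exp1(theta * beta))
gm_theorem = np.exp(I2)
print(gm_numeric, gm_theorem)  # the two values agree closely
```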

**Theorem 3.** *If Y is a random variable having the cumulative distribution function*

$$F\_Y(y; \delta, \alpha, \eta, \theta, \beta) : \mathbb{R} \to \mathbb{R},\ F\_Y(y; \delta, \alpha, \eta, \theta, \beta) = \begin{cases} 1 - \left[ 1 + \frac{\theta\eta(y - \beta)}{\delta\alpha + \eta} \right] e^{-\theta(y - \beta)},& y > \beta \\ 0,& y \le \beta \end{cases}$$

*with δ*,*α*, *η*, *θ* ∈ (0, ∞)*, β* ∈ [0, ∞)*, then*

$$\mathcal{E}(Y) = \frac{1}{\theta(\delta\alpha + \eta)} \left[ \delta\alpha + 2\eta - \eta e^{\frac{\delta\alpha + \eta}{\eta}} E\_1\left(\frac{\delta\alpha + \eta}{\eta}\right) \right]. \tag{18}$$

**Proof.** We have

$$\overline{F}\_Y(y;\delta,\alpha,\eta,\theta,\beta) = \left[1 + \frac{\theta\eta(y-\beta)}{\delta\alpha+\eta}\right]e^{-\theta(y-\beta)}, \text{ for } y > \beta,$$

and

$$\begin{aligned} \mathcal{E}(Y) &= -\int\_{\beta}^{\infty} \overline{F}\_Y(y; \delta, \alpha, \eta, \theta, \beta) \ln \overline{F}\_Y(y; \delta, \alpha, \eta, \theta, \beta) dy = \\ &= -\lim\_{v \to \infty} \int\_{\beta}^{v} \left[ 1 + \frac{\theta\eta(y - \beta)}{\delta\alpha + \eta} \right] e^{-\theta(y-\beta)} \ln \left\{ \left[ 1 + \frac{\theta\eta(y - \beta)}{\delta\alpha + \eta} \right] e^{-\theta(y-\beta)} \right\} dy = \\ &= -\frac{1}{\theta} \lim\_{v \to \infty} \int\_0^{\theta(v-\beta)} \left( 1 + \frac{\eta z}{\delta\alpha + \eta} \right) e^{-z} \left[ -z + \ln \left( 1 + \frac{\eta z}{\delta\alpha + \eta} \right) \right] dz = \\ &= \frac{1}{\theta} \lim\_{v \to \infty} \int\_0^{\theta(v-\beta)} \left[ z e^{-z} + \frac{\eta}{\delta\alpha + \eta} z^2 e^{-z} - \left( 1 + \frac{\eta z}{\delta\alpha + \eta} \right) \ln \left( 1 + \frac{\eta z}{\delta\alpha + \eta} \right) e^{-z} \right] dz = \\ &= \frac{1}{\theta} \left[ \Gamma(2) + \frac{\eta}{\delta\alpha + \eta} \Gamma(3) \right] - \frac{1}{\theta} \lim\_{v \to \infty} \int\_0^{\theta(v-\beta)} \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \left( \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) e^{-z} dz. \end{aligned}$$

Integration by parts gives

$$\begin{aligned} \int\_0^{\infty} \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \left( \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) e^{-z} dz &= \left. \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \left( \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) \left( -e^{-z} \right) \right|\_0^{\infty} + \\ &\quad + \frac{\eta}{\delta\alpha + \eta} \int\_0^{\infty} \left( 1 + \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) e^{-z} dz = \\ &= \frac{\eta}{\delta\alpha + \eta} \left[ \Gamma(1) + \int\_0^{\infty} \left( \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) e^{-z} dz \right] \end{aligned}$$

and, integrating by parts once more,

$$\int\_0^{\infty} \left( \ln \frac{\delta\alpha + \eta + \eta z}{\delta\alpha + \eta} \right) e^{-z} dz = \int\_0^{\infty} \frac{\eta e^{-z}}{\delta\alpha + \eta + \eta z} dz = e^{\frac{\delta\alpha + \eta}{\eta}} \int\_{\frac{\delta\alpha + \eta}{\eta}}^{\infty} \frac{e^{-t}}{t} dt.$$

Collecting the terms, we obtain

$$\begin{split} &= \frac{\delta\alpha + 2\eta}{\theta(\delta\alpha + \eta)} - \frac{\eta}{\theta(\delta\alpha + \eta)} e^{\frac{\delta\alpha + \eta}{\eta}} \int\_{\frac{\delta\alpha + \eta}{\eta}}^{\infty} \frac{1}{t} e^{-t} dt = \\ &= \frac{1}{\theta(\delta\alpha + \eta)} \left[ \delta\alpha + 2\eta - \eta e^{\frac{\delta\alpha + \eta}{\eta}} E\_1 \left( \frac{\delta\alpha + \eta}{\eta} \right) \right]. \end{split}$$
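Theorem 3 admits the same kind of numerical confirmation; this sketch is not part of the original article, and the parameter values and NumPy/SciPy usage are assumptions. It compares the closed form (18) against direct quadrature of the *CRE* definition.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

delta, alpha, eta, theta, beta = 0.9, 1.2, 1.5, 2.0, 0.3

def surv(y):
    # survival function of the five-parameter distribution, y > beta
    return (1 + theta * eta * (y - beta) / (delta * alpha + eta)) * np.exp(-theta * (y - beta))

def integrand(y):
    # -Fbar ln Fbar, with the underflow case handled (s ln s -> 0)
    s = surv(y)
    return -s * np.log(s) if s > 0 else 0.0

cre_numeric, _ = quad(integrand, beta, np.inf)

# Equation (18)
r = (delta * alpha + eta) / eta
cre_theorem = (delta * alpha + 2 * eta - eta * np.exp(r) * exp1(r)) / (theta * (delta * alpha + eta))
print(cre_numeric, cre_theorem)  # the two values agree closely
```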

**Theorem 4.** *If X is a random variable having the cumulative distribution function*

$$F\_X(x;\theta,\alpha,\mu):\mathbb{R}\to\mathbb{R},\ F\_X(x;\theta,\alpha,\mu)=\begin{cases}1-\left(1+\frac{\theta\mu x}{\theta\alpha+\mu}\right)e^{-\theta x},& x>0\\0,& x\leq 0\end{cases}$$

*with θ* > 0*, α* > 0*, μ* > 0*, then*

$$\mathcal{E}(X) = \frac{1}{\theta(\theta\alpha + \mu)} \left[ \theta\alpha + 2\mu - \mu e^{\frac{\theta\alpha + \mu}{\mu}} E\_1 \left( \frac{\theta\alpha + \mu}{\mu} \right) \right]. \tag{19}$$

**Proof.** The proof comes directly from Theorem 3, by choosing *β* = 0, *δ* = *θ* and *η* = *μ*.

#### **4. Discussion**

Regarding the numerical characteristics of random variables, one can notice that several papers consider the geometric mean [1–7]. In the study of the uncertainty associated with a random variable, the cumulative residual entropy [10] overcomes some drawbacks of the differential entropy.

In this paper, two generalizations of the Lindley distribution [22,23] were discussed. The three-parameter distribution [22] is a sub-model of the five-parameter one [23]. The work focused on the geometric mean and the cumulative residual entropy of these two distributions. The cumulative residual entropy of the three-parameter distribution can be deduced directly from that of the five-parameter one, as shown in Theorems 3 and 4.

In connection with the geometric mean, note that the integral *I*<sub>2</sub> from Theorem 2 can be transformed as follows:

$$\begin{aligned} I\_2 &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} E\_1(\theta\beta) = \\ &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} \lim\_{v \to \infty} \int\_{\theta\beta}^{v} \frac{e^{-t}}{t} dt = \\ &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} \lim\_{v \to \infty} \int\_{\theta\beta}^{v} e^{-t} (\ln t)' dt = \\ &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) e^{\theta\beta} \lim\_{v \to \infty} \left[ \left. e^{-t} \ln t \right|\_{\theta\beta}^{v} + \int\_{\theta\beta}^{v} e^{-t} \ln t\, dt \right] = \\ &= \ln \beta + \frac{\eta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) \left( -\ln \theta - \ln \beta + e^{\theta\beta} \int\_{\theta\beta}^{\infty} e^{-t} \ln t\, dt \right) = \\ &= \frac{\eta}{\delta\alpha + \eta} + \frac{\eta\theta\beta \ln \beta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) \left( -\ln \theta + e^{\theta\beta} \int\_{\theta\beta}^{\infty} e^{-t} \ln t\, dt \right). \end{aligned}$$

We have

$$\lim\_{\substack{\beta \to 0 \\ \beta > 0}} \left[ \frac{\eta}{\delta\alpha + \eta} + \frac{\eta\theta\beta \ln \beta}{\delta\alpha + \eta} + \left( 1 - \frac{\eta\theta\beta}{\delta\alpha + \eta} \right) \left( -\ln \theta + e^{\theta\beta} \int\_{\theta\beta}^{\infty} e^{-t} \ln t\, dt \right) \right] = \frac{\eta}{\delta\alpha + \eta} - \ln \theta + \int\_0^{\infty} e^{-t} \ln t\, dt = -\ln \theta + \frac{\eta}{\delta\alpha + \eta} - \gamma.$$

Therefore, the geometric mean of the five-parameter distribution is right-continuous at zero with respect to the parameter *β*. By taking *β* = 0 in Theorem 2 and then making the substitutions *δ* = *θ*, *η* = *μ*, the geometric mean of the three-parameter distribution can be deduced from the geometric mean of the five-parameter distribution. Due to the special position of the parameter *β* in the calculation of the integrals, the geometric mean was computed independently for each distribution, as seen in Theorems 1 and 2.
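The right-continuity at *β* = 0 can be observed numerically; this sketch is not part of the original article, and the parameter values and NumPy/SciPy usage are assumptions. It evaluates *GM*(*Y*) from (16) and (17) for decreasing *β* and compares against the *β* = 0 value.

```python
import numpy as np
from scipy.special import exp1

delta, alpha, eta, theta = 1.0, 0.5, 2.0, 1.3

def gm_five(beta):
    # GM(Y) = exp(I_2) for beta > 0, per Equations (16)-(17)
    I2 = (np.log(beta) + eta / (delta * alpha + eta)
          + (1 - eta * theta * beta / (delta * alpha + eta))
          * np.exp(theta * beta) * exp1(theta * beta))
    return np.exp(I2)

# The beta = 0 branch of Equation (16)
gm_limit = np.exp(eta / (delta * alpha + eta) - np.euler_gamma) / theta
for beta in [1e-2, 1e-4, 1e-6]:
    print(beta, gm_five(beta), gm_limit)  # converges to gm_limit as beta -> 0+
```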

#### **5. Conclusions**

From the rather large set of Lindley-type distributions, two related distributions were selected for study. For each of them, formulas for the geometric mean and the cumulative residual entropy were obtained. These results complement those already known from previous works, thus extending the body of knowledge on Lindley-type distributions.

**Author Contributions:** Conceptualization, M.G. and C.-C.P.; methodology, M.G. and C.-C.P.; writing—original draft preparation, M.G. and C.-C.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

