#### *2.3. Moments and Moment Generating Function*

Let *X* be a random variable (rv) with the minLLx distribution; then the *r*th ordinary moment of *X*, say $\mu'_r$, is given by

$$\begin{split}
\mu'_r = E(X^r) &= \int_{-\infty}^{\infty} x^r f(x)\,dx \\
&= \frac{\lambda\beta}{1+\theta}\int_0^{\infty} x^r(1+\theta+\theta x)(1+\lambda x)^{-\beta-1}e^{-\theta x}\,dx + \frac{\theta^2}{1+\theta}\int_0^{\infty} x^r(1+x)(1+\lambda x)^{-\beta}e^{-\theta x}\,dx \\
&= \sum_{j=0}^{\infty}\binom{-\beta-1}{j}\frac{\lambda^{j+1}\beta}{1+\theta}\int_0^{\infty} x^{r+j}(1+\theta+\theta x)e^{-\theta x}\,dx + \sum_{j=0}^{\infty}\binom{-\beta}{j}\frac{\theta^2\lambda^{j}}{1+\theta}\int_0^{\infty} x^{r+j}(1+x)e^{-\theta x}\,dx \\
&= \sum_{j=0}^{\infty}\binom{-\beta-1}{j}\frac{\lambda^{j+1}\beta(r+\theta+j+2)\,\Gamma(r+j+1)}{(1+\theta)\theta^{r+j+1}} + \sum_{j=0}^{\infty}\binom{-\beta}{j}\frac{\lambda^{j}(r+\theta+j+1)\,\Gamma(r+j+1)}{(1+\theta)\theta^{r+j}} \\
&= \sum_{j=0}^{\infty}\frac{\lambda^{j}\,\Gamma(r+j+1)}{(1+\theta)\theta^{r+j+1}}\left\{\lambda\beta(r+\theta+j+2)\binom{-\beta-1}{j}+\theta(r+\theta+j+1)\binom{-\beta}{j}\right\},
\end{split}\tag{5}$$

where $\Gamma(n)=\int_0^{\infty}x^{n-1}e^{-x}\,dx$ is the gamma function. Substituting $r=1,2,3,4$ into (5), we obtain the mean $=\mu'_1$, variance $=\mu'_2-\mu'^{\,2}_1$, *skewness* $=\left(\mu'_3-3\mu'_2\mu'_1+2\mu'^{\,3}_1\right)\left[\mu'_2-(\mu'_1)^2\right]^{-3/2}$, and *kurtosis* $=\left(\mu'_4-4\mu'_3\mu'_1+6\mu'_2\mu'^{\,2}_1-3\mu'^{\,4}_1\right)\left[\mu'_2-(\mu'_1)^2\right]^{-2}$. Table 1 provides the mean, variance, standard deviation, skewness, and kurtosis of *X* for different combinations of $\theta$, $\lambda$, $\beta$, namely $A_1$: $\theta=3.5$, $\lambda=0.4$, $\beta=0.5$; $A_2$: $\theta=0.3$, $\lambda=1$, $\beta=0.8$; $A_3$: $\theta=1.5$, $\lambda=0.1$, $\beta=1.5$; and $A_4$: $\theta=0.3$, $\lambda=0.5$, $\beta=0.3$.


**Table 1.** Moments, variance, standard deviation, skewness and kurtosis of *X* for randomly selected parameter values of minLLx(*θ*,*λ*,*β*).

The empirical findings in Table 1 show that the skewness is greater than zero, indicating asymmetric tails with an elongated right tail; the mean and median are therefore pulled to the right. Moreover, the kurtosis values are less than three, demonstrating that the distribution is platykurtic.
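As a quick cross-check on (5) and the shape measures above, the following sketch computes the first four ordinary moments by direct numerical integration of the pdf. It is a minimal illustration in Python/SciPy (the paper's own numerical work uses Mathematica); the function names are ours, and parameter set $A_1$ is taken from the text.

```python
# Minimal numerical check of the moments in (5), assuming pdf (2).
import numpy as np
from scipy.integrate import quad

def minllx_pdf(x, theta, lam, beta):
    """pdf (2) of the minLLx distribution."""
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def raw_moment(r, theta, lam, beta):
    """r-th ordinary moment mu'_r by direct numerical integration."""
    return quad(lambda x: x ** r * minllx_pdf(x, theta, lam, beta),
                0, np.inf)[0]

theta, lam, beta = 3.5, 0.4, 0.5                     # parameter set A1
m1, m2, m3, m4 = (raw_moment(r, theta, lam, beta) for r in range(1, 5))
var = m2 - m1 ** 2
skewness = (m3 - 3 * m2 * m1 + 2 * m1 ** 3) / var ** 1.5
kurtosis = (m4 - 4 * m3 * m1 + 6 * m2 * m1 ** 2 - 3 * m1 ** 4) / var ** 2
print(m1, var, np.sqrt(var), skewness, kurtosis)
```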

The *n*th central moment of the minLLx distribution, say $\mu_n$, can be obtained from

$$\begin{split}
\mu_n &= \sum_{r=0}^{n}\binom{n}{r}\left(-\mu'_1\right)^{n-r}E(X^r) \\
&= \sum_{r=0}^{n}\sum_{j=0}^{\infty}\binom{n}{r}\frac{(-\mu'_1)^{n-r}\lambda^{j}\,\Gamma(r+j+1)}{(1+\theta)\theta^{r+j+1}}\left\{\lambda\beta(r+\theta+j+2)\binom{-\beta-1}{j}+\theta(r+\theta+j+1)\binom{-\beta}{j}\right\}.
\end{split}\tag{6}$$

The *s*th incomplete moment of the minLLx distribution, denoted by $\phi_s(t)$, is

$$\begin{split}
\phi_s(t) &= \int_{-\infty}^{t} x^{s}f(x)\,dx \\
&= \sum_{i=0}^{\infty}\frac{\lambda^{i}}{(1+\theta)\theta^{s+i+1}}\left\{\begin{array}{l}
\lambda\beta\left[(1+\theta)\gamma(s+i+1,\theta t)+\gamma(s+i+2,\theta t)\right]\displaystyle\binom{-\beta-1}{i} \\[6pt]
+\,\theta\left[\theta\gamma(s+i+1,\theta t)+\gamma(s+i+2,\theta t)\right]\displaystyle\binom{-\beta}{i}
\end{array}\right\},
\end{split}\tag{7}$$

where $\gamma(a,x)=\int_0^{x}t^{a-1}e^{-t}\,dt$ is the lower incomplete gamma function.

The moment generating function of the minLLx distribution, denoted by $M_X(t)$, can be obtained as

$$M_X(t) = E\!\left(e^{tX}\right) = \sum_{j=0}^{\infty}\frac{\lambda^{j}\,\Gamma(j+1)}{(1+\theta)(\theta-t)^{j+2}}\left\{\begin{array}{l}\lambda\beta\left[\theta(\theta+j-t+2)-t\right]\displaystyle\binom{-\beta-1}{j} \\[6pt] +\,\theta^{2}(\theta+j-t+1)\displaystyle\binom{-\beta}{j}\end{array}\right\},\qquad t<\theta.\tag{8}$$
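Since $E(e^{tX})$ is finite only for $t<\theta$, the mgf can also be evaluated directly by one-dimensional quadrature. The sketch below is a minimal illustration assuming pdf (2); the check $M_X(0)=1$ is just the normalization of the density.

```python
# mgf of the minLLx distribution by direct integration, valid for t < theta.
import numpy as np
from scipy.integrate import quad

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def mgf(t, theta, lam, beta):
    assert t < theta, "E[e^{tX}] is finite only for t < theta"
    return quad(lambda x: np.exp(t * x) * minllx_pdf(x, theta, lam, beta),
                0, np.inf)[0]

theta, lam, beta = 1.5, 0.1, 1.5                    # parameter set A3
print(mgf(0.0, theta, lam, beta))                   # ~1.0 (normalization)
print(mgf(0.5, theta, lam, beta))                   # finite, since 0.5 < theta
```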

#### *2.4. Probability Weighted Moments*

Probability weighted moments of a random variable generalize the ordinary moments of order statistics and arise naturally when working with such moments. They also play a significant role in several parameter estimation techniques. The probability weighted moments of a random variable with the minLLx distribution are derived as follows.

The (*r* + *s*)th probability weighted moment (PWM) of a random variable *X* with the minLLx distribution, denoted by $M_{r,s}$, is given by

$$\begin{split}
M_{r,s} &= E\!\left[X^{r}F(X)^{s}\right] = \int_{-\infty}^{\infty} x^{r}F(x)^{s}f(x)\,dx \\
&= \int_0^{\infty} x^{r}\,\frac{(1+\lambda x)^{-\beta-1}}{1+\theta}\left\{\lambda\beta(1+\theta+\theta x)+\theta^{2}(1+x)(1+\lambda x)\right\}e^{-\theta x}\left[1-\frac{(1+\lambda x)^{-\beta}(1+\theta+\theta x)}{1+\theta}\,e^{-\theta x}\right]^{s}dx \\
&= \sum_{j=0}^{\infty}\frac{(-1)^{j}}{(1+\theta)^{j+1}}\binom{s}{j}\int_0^{\infty} x^{r}(1+\lambda x)^{-\beta(j+1)-1}(1+\theta+\theta x)^{j}e^{-\theta(j+1)x}\left\{\lambda\beta(1+\theta+\theta x)+\theta^{2}(1+x)(1+\lambda x)\right\}dx \\
&= \sum_{j=0}^{\infty}\frac{(-1)^{j}\lambda\beta}{(1+\theta)^{j+1}}\binom{s}{j}\underbrace{\int_0^{\infty} x^{r}(1+\lambda x)^{-\beta(j+1)-1}(1+\theta+\theta x)^{j+1}e^{-\theta(j+1)x}\,dx}_{A} \\
&\quad+\sum_{j=0}^{\infty}\frac{(-1)^{j}\theta^{2}}{(1+\theta)^{j+1}}\binom{s}{j}\underbrace{\int_0^{\infty} x^{r}(1+\lambda x)^{-\beta(j+1)}(1+x)(1+\theta+\theta x)^{j}e^{-\theta(j+1)x}\,dx}_{B},
\end{split}$$

where

$$A = \sum_{i=0}^{\infty}\sum_{w=0}^{j+1}\frac{\lambda^{i}\theta^{w}(1+\theta)^{j+1-w}\,\Gamma(r+i+w+1)}{\left(\theta(j+1)\right)^{r+i+w+1}}\binom{-\beta(j+1)-1}{i}\binom{j+1}{w},$$

and

$$B = \sum_{i=0}^{\infty}\sum_{w=0}^{j}\frac{\lambda^{i}\theta^{w}(1+\theta)^{j-w}\left[\theta(j+1)+r+i+w+1\right]\Gamma(r+i+w+1)}{\left(\theta(j+1)\right)^{r+i+w+2}}\binom{-\beta(j+1)}{i}\binom{j}{w}.$$

Consequently, we arrive at

$$\begin{split}
M_{r,s} &= \sum_{j,i=0}^{\infty}\frac{(-1)^{j}\lambda^{i}}{(1+\theta)^{j+1}\theta^{r+i}(1+j)^{r+i}}\binom{s}{j} \\
&\quad\times\left\{\begin{array}{l}
\displaystyle\sum_{w=0}^{j+1}\frac{\lambda\beta(1+\theta)^{j+1-w}\,\Gamma(r+i+w+1)}{\theta(1+j)^{w+1}}\binom{-\beta(j+1)-1}{i}\binom{j+1}{w} \\[12pt]
\displaystyle+\sum_{w=0}^{j}\frac{(1+\theta)^{j-w}\left[\theta(j+1)+r+i+w+1\right]\Gamma(r+i+w+1)}{(1+j)^{w+2}}\binom{-\beta(j+1)}{i}\binom{j}{w}
\end{array}\right\}.
\end{split}\tag{9}$$
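Since (9) arises from term-by-term integration, a direct numerical evaluation of $M_{r,s}=E[X^{r}F(X)^{s}]$ offers an independent check. The sketch below is an illustrative Python/SciPy computation assuming cdf (1) and pdf (2); the parameter values are arbitrary.

```python
# PWM M_{r,s} = E[X^r F(X)^s] by one-dimensional numerical integration.
import numpy as np
from scipy.integrate import quad

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def minllx_cdf(x, theta, lam, beta):               # cdf (1)
    return 1 - ((1 + lam * x) ** (-beta)
                * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def pwm(r, s, theta, lam, beta):
    integrand = lambda x: (x ** r * minllx_cdf(x, theta, lam, beta) ** s
                           * minllx_pdf(x, theta, lam, beta))
    return quad(integrand, 0, np.inf)[0]

print(pwm(1, 1, theta=1.5, lam=0.1, beta=1.5))     # M_{1,1} for set A3
```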

#### *2.5. Order Statistics*

Sorted random variables, commonly known as order statistics, are crucial in modeling various longevity systems with distinct component structures; David and Nagaraja [10] laid the foundation for this paradigm. The order statistics of the minLLx distribution admit tractable distributional forms, hence their importance.

Let $X_{1:n}\le X_{2:n}\le\cdots\le X_{n:n}$ denote the order statistics of a random sample of size *n* from the minLLx distribution. The pdf of $X_{k:n}$, the *k*th order statistic, is given by

$$f_{X_{k:n}}(x) = \frac{1}{B(k,\,n-k+1)}\sum_{w=0}^{n-k}(-1)^{w}\binom{n-k}{w}f(x)\,F(x)^{k+w-1},\tag{10}$$

where $B(\cdot,\cdot)$ is the beta function. From (1) and (2), we have

$$\begin{split}
f(x)F(x)^{k+w-1} &= \sum_{j=0}^{\infty}\frac{(-1)^{j}(1+\lambda x)^{-\beta(j+1)-1}e^{-\theta(j+1)x}}{(1+\theta)^{j+1}} \\
&\quad\times\left\{\lambda\beta(1+\theta+\theta x)+\theta^{2}(1+x)(1+\lambda x)\right\}\binom{k+w-1}{j}.
\end{split}\tag{11}$$

Inserting Equation (11) into Equation (10), we have

$$\begin{split}
f_{X_{k:n}}(x) &= \sum_{w=0}^{n-k}\sum_{j=0}^{\infty}\frac{(-1)^{w+j}(1+\lambda x)^{-\beta(j+1)-1}e^{-\theta(j+1)x}}{B(k,\,n-k+1)(1+\theta)^{j+1}} \\
&\quad\times\left\{\lambda\beta(1+\theta+\theta x)+\theta^{2}(1+x)(1+\lambda x)\right\}\binom{n-k}{w}\binom{k+w-1}{j}.
\end{split}\tag{12}$$

Furthermore, the *r*th moment of the *k*th order statistic for the minLLx distribution is given by

$$\begin{split}
E\!\left(X_{k:n}^{r}\right) &= \sum_{w=0}^{n-k}\sum_{j,i=0}^{\infty}\frac{(-1)^{w+j}\lambda^{i}\,\theta\,\Gamma(r+i+1)}{B(k,\,n-k+1)(1+\theta)^{j+1}\left(\theta(1+j)\right)^{r+i+2}}\binom{n-k}{w}\binom{k+w-1}{j} \\
&\quad\times\left\{\lambda\beta\left[(1+\theta)(1+j)+r+i+1\right]\binom{-\beta(j+1)-1}{i}+\theta\left[\theta(1+j)+r+i+1\right]\binom{-\beta(j+1)}{i}\right\}.
\end{split}\tag{13}$$
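As a sanity check on (10)–(12), the pdf of the *k*th order statistic can also be evaluated directly in its standard closed form $f(x)F(x)^{k-1}[1-F(x)]^{n-k}/B(k,n-k+1)$. The sketch below is an illustrative Python/SciPy computation, assuming cdf (1) and pdf (2), that verifies the density integrates to one.

```python
# Direct evaluation of the k-th order statistic pdf and a normalization check.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def minllx_cdf(x, theta, lam, beta):               # cdf (1)
    return 1 - ((1 + lam * x) ** (-beta)
                * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def order_stat_pdf(x, k, n, theta, lam, beta):
    F = minllx_cdf(x, theta, lam, beta)
    return (minllx_pdf(x, theta, lam, beta) * F ** (k - 1)
            * (1 - F) ** (n - k) / beta_fn(k, n - k + 1))

k, n = 3, 10
theta, lam, beta = 0.3, 1.0, 0.8                   # parameter set A2
print(quad(lambda x: order_stat_pdf(x, k, n, theta, lam, beta),
           0, np.inf)[0])                          # ~1.0
```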

#### *2.6. Rényi Entropy*

Entropy is a mathematical concept that quantifies the uncertainty inherent in various mechanisms. Entropy techniques are used in many fields, including bioenergetics, queuing theory, thermodynamics, colligative properties of solutions, and statistics. There are several ways to quantify the entropy of the minLLx distribution. Here, we derive the Rényi entropy in a tractable form that may be evaluated with any analytical software. For the minLLx distribution, the following result gives a series expansion of this entropy measure.

Rényi entropy is defined as

$$I_R(X) = (1-\mu)^{-1}\log\int_{-\infty}^{\infty} f(x)^{\mu}\,dx,\qquad \mu>0,\ \mu\neq 1.$$

Using Equation (2) and after some manipulations, we have

$$I_R(X) = (1-\mu)^{-1}\log\left\{\sum_{i,\ell,w=0}^{\infty}\sum_{j=i}^{\infty}\frac{\lambda^{\mu+w-j}\,\beta^{\mu-j}\,\Gamma(i+\ell+w+1)}{\theta^{i+w-2j+1}(1+\theta)^{j+\ell}\,\mu^{i+\ell+w+1}}\binom{\mu}{j}\binom{j}{i}\binom{\mu-j}{\ell}\binom{j-\mu(\beta+1)}{w}\right\}.\tag{14}$$
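Because (14) involves a fourfold series, a direct quadrature of $\int f(x)^{\mu}\,dx$ is a convenient cross-check. Below is a minimal Python/SciPy sketch, assuming pdf (2) and arbitrary illustrative values of $\mu$ and the parameters.

```python
# Renyi entropy I_R(X) = (1 - mu)^{-1} log \int f(x)^mu dx by quadrature.
import numpy as np
from scipy.integrate import quad

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def renyi_entropy(mu, theta, lam, beta):
    assert mu > 0 and mu != 1
    integral = quad(lambda x: minllx_pdf(x, theta, lam, beta) ** mu,
                    0, np.inf)[0]
    return np.log(integral) / (1 - mu)

print(renyi_entropy(2.0, theta=1.5, lam=0.1, beta=1.5))
```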

#### *2.7. Stochastic Dominance*

Across many distinct fields of probability and statistics, stochastic orderings and inequalities are employed extensively to examine comparative behavior. Biometrics, robustness studies, econometrics, and actuarial science have all made use of this notion. According to Shaked and Shanthikumar [11], an rv *X*<sub>1</sub> is said to be smaller than another rv *X*<sub>2</sub> in the likelihood ratio order (*X*<sub>1</sub> ≤<sub>*lr*</sub> *X*<sub>2</sub>) if *f*<sub>1</sub>(*x*)/*f*<sub>2</sub>(*x*) decreases in *x*. The following theorem shows that the minLLx distribution is ordered in the likelihood ratio order under appropriate assumptions.

**Theorem 1:** *Let X*<sup>1</sup> ∼ minLLx(*θ*1, *λ*1, *β*1) *and X*<sup>2</sup> ∼ minLLx(*θ*2, *λ*2, *β*2). *If θ*<sup>1</sup> = *θ*2, *λ*<sup>1</sup> = *λ*<sup>2</sup> *and β*<sup>1</sup> ≥ *β*<sup>2</sup> (*or if θ*<sup>1</sup> = *θ*2, *β*<sup>1</sup> = *β*<sup>2</sup> *and λ*<sup>1</sup> ≥ *λ*2), *then X*<sup>1</sup> ≤*lr X*2.

**Proof:** We have

$$\frac{f_1(x)}{f_2(x)} = \frac{(1+\theta_2)(1+\lambda_2 x)^{1+\beta_2}\,e^{-(\theta_1-\theta_2)x}}{(1+\theta_1)(1+\lambda_1 x)^{1+\beta_1}}\left\{\frac{\lambda_1\beta_1(1+\theta_1+\theta_1 x)+\theta_1^{2}(1+x)(1+\lambda_1 x)}{\lambda_2\beta_2(1+\theta_2+\theta_2 x)+\theta_2^{2}(1+x)(1+\lambda_2 x)}\right\}.$$

Then

$$\begin{split}
\log\frac{f_1(x)}{f_2(x)} &= -(\theta_1-\theta_2)x-(1+\beta_1)\log(1+\lambda_1 x)+(1+\beta_2)\log(1+\lambda_2 x)+\log\!\left(\frac{1+\theta_2}{1+\theta_1}\right) \\
&\quad+\log\!\left[\lambda_1\beta_1(1+\theta_1+\theta_1 x)+\theta_1^{2}(1+x)(1+\lambda_1 x)\right] \\
&\quad-\log\!\left[\lambda_2\beta_2(1+\theta_2+\theta_2 x)+\theta_2^{2}(1+x)(1+\lambda_2 x)\right].
\end{split}$$

If *θ*<sup>1</sup> = *θ*2, *λ*<sup>1</sup> = *λ*<sup>2</sup> and *β*<sup>1</sup> ≥ *β*<sup>2</sup> or if *θ*<sup>1</sup> = *θ*2, *β*<sup>1</sup> = *β*<sup>2</sup> and *λ*<sup>1</sup> ≥ *λ*2, then we have

$$\begin{split}
\frac{d}{dx}\log\frac{f_1(x)}{f_2(x)} &= \frac{-\lambda_1(1+\beta_1)}{1+\lambda_1 x}+\frac{\lambda_2(1+\beta_2)}{1+\lambda_2 x}+\frac{\theta_1\left\{\lambda_1\beta_1+\theta_1\left[1+\lambda_1(1+2x)\right]\right\}}{\lambda_1\beta_1(1+\theta_1+\theta_1 x)+\theta_1^{2}(1+x)(1+\lambda_1 x)} \\
&\quad-\frac{\theta_2\left\{\lambda_2\beta_2+\theta_2\left[1+\lambda_2(1+2x)\right]\right\}}{\lambda_2\beta_2(1+\theta_2+\theta_2 x)+\theta_2^{2}(1+x)(1+\lambda_2 x)}<0.
\end{split}$$

Consequently, *f*<sub>1</sub>(*x*)/*f*<sub>2</sub>(*x*) is decreasing in *x*, and hence *X*<sub>1</sub> ≤<sub>*lr*</sub> *X*<sub>2</sub>.
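Theorem 1 can be illustrated numerically: with $\theta_1=\theta_2$, $\lambda_1=\lambda_2$, and $\beta_1\ge\beta_2$, the ratio $f_1(x)/f_2(x)$ evaluated on a grid should be strictly decreasing. A sketch with arbitrary illustrative parameter values, assuming pdf (2):

```python
# Grid check that f1/f2 declines in x under the conditions of Theorem 1.
import numpy as np

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

x = np.linspace(0.01, 20.0, 2000)
ratio = (minllx_pdf(x, 1.0, 0.5, 2.0)              # theta, lambda, beta_1
         / minllx_pdf(x, 1.0, 0.5, 1.0))           # same theta, lambda; beta_2
print(np.all(np.diff(ratio) < 0))                  # True: ratio is decreasing
```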

#### *2.8. Stress Strength Model*

Reliability metrics are used in lifetime testing to ascertain a system's durability. The stress-strength parameter, for instance, is the probability that a system functions properly, i.e., that the stress applied to it is less than its strength. The following result gives an explicit expression for this parameter under the minLLx distribution.

Let *X*<sub>1</sub> and *X*<sub>2</sub> be two independent random variables with minLLx(*θ*<sub>1</sub>, *λ*<sub>1</sub>, *β*<sub>1</sub>) and minLLx(*θ*<sub>2</sub>, *λ*<sub>2</sub>, *β*<sub>2</sub>) distributions. Then, the stress−strength model is given by

$$\begin{split}
R &= \Pr(X_2<X_1) = \int_0^{\infty} f_1(x;\theta_1,\lambda_1,\beta_1)\,F_2(x;\theta_2,\lambda_2,\beta_2)\,dx \\
&= 1-\frac{\lambda_1\beta_1}{(1+\theta_1)(1+\theta_2)}\underbrace{\int_0^{\infty}(1+\lambda_1 x)^{-\beta_1-1}(1+\lambda_2 x)^{-\beta_2}(1+\theta_1+\theta_1 x)(1+\theta_2+\theta_2 x)\,e^{-(\theta_1+\theta_2)x}\,dx}_{H} \\
&\quad-\frac{\theta_1^{2}}{(1+\theta_1)(1+\theta_2)}\underbrace{\int_0^{\infty}(1+\lambda_1 x)^{-\beta_1}(1+\lambda_2 x)^{-\beta_2}(1+x)(1+\theta_2+\theta_2 x)\,e^{-(\theta_1+\theta_2)x}\,dx}_{E},
\end{split}$$

where

$$H = \sum_{j,i=0}^{\infty}\frac{\lambda_1^{j}\lambda_2^{i}\,\Gamma(j+i+1)}{(\theta_1+\theta_2)^{j+i+3}}\left\{\begin{array}{l}(1+\theta_1)(1+\theta_2)(\theta_1+\theta_2)^{2}+(\theta_1+\theta_2)(j+i+1) \\ \times\left[\theta_2(1+\theta_1)+\theta_1(1+\theta_2)\right]+\theta_1\theta_2(j+i+1)(j+i+2)\end{array}\right\}\binom{-\beta_1-1}{j}\binom{-\beta_2}{i},$$

and

$$E = \sum_{j,i=0}^{\infty}\frac{\lambda_1^{j}\lambda_2^{i}\,\Gamma(j+i+1)}{(\theta_1+\theta_2)^{j+i+3}}\left\{\begin{array}{l}(1+\theta_2)(\theta_1+\theta_2)^{2}+(\theta_1+\theta_2)(1+2\theta_2)(j+i+1) \\ +\,\theta_2(j+i+1)(j+i+2)\end{array}\right\}\binom{-\beta_1}{j}\binom{-\beta_2}{i}.$$

Therefore, the stress−strength model for the minLLx distribution is

$$\begin{split}
R &= 1-\sum_{j,i=0}^{\infty}\frac{\lambda_1^{j}\lambda_2^{i}\,\Gamma(j+i+1)}{(1+\theta_1)(1+\theta_2)(\theta_1+\theta_2)^{j+i+3}}\binom{-\beta_2}{i} \\
&\quad\times\left\{\begin{array}{l}
\lambda_1\beta_1\left\{\begin{array}{l}(1+\theta_1)(1+\theta_2)(\theta_1+\theta_2)^{2}+(\theta_1+\theta_2)(j+i+1) \\ \times\left[\theta_2(1+\theta_1)+\theta_1(1+\theta_2)\right]+\theta_1\theta_2(j+i+1)(j+i+2)\end{array}\right\}\displaystyle\binom{-\beta_1-1}{j} \\[16pt]
+\,\theta_1^{2}\left\{\begin{array}{l}(1+\theta_2)(\theta_1+\theta_2)^{2}+(\theta_1+\theta_2)(1+2\theta_2)(j+i+1) \\ +\,\theta_2(j+i+1)(j+i+2)\end{array}\right\}\displaystyle\binom{-\beta_1}{j}
\end{array}\right\}.
\end{split}\tag{15}$$
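The series (15) can be cross-checked by evaluating $R=\int_0^{\infty}f_1(x)F_2(x)\,dx$ numerically. A minimal Python/SciPy sketch, assuming pdf (2) and cdf (1) for each population and arbitrary parameter values:

```python
# Stress-strength probability R = Pr(X_2 < X_1) by numerical integration.
import numpy as np
from scipy.integrate import quad

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def minllx_cdf(x, theta, lam, beta):               # cdf (1)
    return 1 - ((1 + lam * x) ** (-beta)
                * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def stress_strength(p1, p2):
    """R = Pr(X_2 < X_1); p1, p2 are (theta, lam, beta) for X_1 and X_2."""
    return quad(lambda x: minllx_pdf(x, *p1) * minllx_cdf(x, *p2),
                0, np.inf)[0]

print(stress_strength((0.3, 1.0, 0.8), (1.5, 0.1, 1.5)))
```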

#### **3. Characterization Results**

This section outlines two ways to characterize the minLLx distribution: (i) on the basis of the ratio of two truncated moments, and (ii) by using the conditional expectation of certain functions of the rv. It is worth emphasizing that for characterization (i) the cdf need not have a closed form; the characterization instead relies on the solution of a first-order differential equation, which serves as a bridge between probability and differential equations. We would also like to highlight that, owing to the nature of the minLLx density function, our characterizations may be the only ones available. Further, bear in mind that characterization (i) is stable in the sense of weak convergence (Glanzel [12]). We present characterizations (i) and (ii) in the following two subsections.

#### *3.1. Characterizations on the Basis of Two Truncated Moments*

This subsection deals with the characterization of the minLLx distribution based on the ratio of two truncated moments. Our first characterization employs a theorem of Glanzel [13]; see Theorem A1 of Appendix A. The result holds even if the interval *H* is not closed, since the theorem's condition applies only to the interior of *H*.

**Proposition 1.** *Let $X:\Omega\to(0,\infty)$ be a continuous rv, and let $q_1(x)=\left[\lambda\beta(1+\theta+\theta x)+\theta^{2}(1+x)(1+\lambda x)\right]^{-1}e^{\theta x}$ and $q_2(x)=q_1(x)(1+\lambda x)^{-1}$ for $x>0$. The rv X has pdf (2) iff the function $\psi$ defined in Theorem A1 has the form*

$$\psi(x) = \frac{\beta(1+\beta)^{-1}}{1+\lambda x},\qquad x>0.$$

**Proof.** Suppose the rv *X* has pdf (2); then

$$\left(1-F(x)\right)E\!\left[q_1(X)\,|\,X\ge x\right] = \frac{(1+\theta)^{-1}}{\lambda\beta\,(1+\lambda x)^{\beta}},\qquad x>0,$$

and

$$\left(1-F(x)\right)E\!\left[q_2(X)\,|\,X\ge x\right] = \frac{(1+\theta)^{-1}}{\lambda(\beta+1)(1+\lambda x)^{\beta+1}},\qquad x>0.$$

Furthermore,

$$\psi(x)\,q_1(x)-q_2(x) = -\frac{q_1(x)}{(\beta+1)(1+\lambda x)}<0,\qquad \text{for } x>0.$$

Conversely, if *ψ* is of the above form, then

$$s'(x) = \frac{\psi'(x)\,q_1(x)}{\psi(x)\,q_1(x)-q_2(x)} = \frac{\lambda\beta}{1+\lambda x},\qquad x>0,$$

and consequently

$$s(\mathbf{x}) = -\log\left\{ (1 + \lambda \mathbf{x})^{-\beta} \right\}, \qquad \mathbf{x} > \mathbf{0}.$$

Now, according to Theorem A1, *X* has density (2).

**Corollary 1.** *Let X* : Ω → (0, ∞) *be a continuous rv and let q*<sub>1</sub>(*x*) *be as in Proposition 1. The rv X has pdf (2) iff there exist functions q*<sub>2</sub> *and ψ defined in Theorem A1 satisfying the following differential equation*

$$\frac{\psi'(x)\,q_1(x)}{\psi(x)\,q_1(x)-q_2(x)} = \frac{\lambda\beta}{1+\lambda x},\qquad x>0.$$

**Corollary 2.** *The general solution of the differential equation in Corollary 1 is*

$$\psi(x) = (1+\lambda x)^{\beta}\left[-\int \lambda\beta\,(1+\lambda x)^{-(\beta+1)}\left(q_1(x)\right)^{-1}q_2(x)\,dx+D\right],$$

where *D* is a constant. It is worth emphasizing that one set of functions satisfying the above differential equation is given in Proposition 1 with *D* = 0. Clearly, there are other triplets (*q*<sub>1</sub>, *q*<sub>2</sub>, *ψ*) that satisfy the constraints of Theorem A1.

#### *3.2. Characterizations on the Basis of Conditional Expectation of Certain Functions of the Random Variable*

In this subsection, we employ a single function Ψ of *X* and characterize the distribution of *X* in terms of the truncated moment of Ψ(*X*). The following proposition has already appeared in Hamedani [14], so we simply state it here; it can be used to characterize the minLLx distribution.

**Proposition 2.** *Let X* : Ω → (*e*, *f*) *be a continuous rv with cdf F. Let* Ψ(*x*) *be a differentiable function on* (*e*, *f*) *with* lim<sub>*x*→*e*<sup>+</sup></sub> Ψ(*x*) = 1. *Then, for δ* ≠ 1,

$$E\!\left[\Psi(X)\,|\,X\ge x\right] = \delta\,\Psi(x),\qquad x\in(e,f),$$

*iff*

$$\Psi(x) = \left[1-F(x)\right]^{\frac{1}{\delta}-1},\qquad x\in(e,f).$$

**Remark 1.** *For* $(e,f)=(0,\infty)$, $\Psi(x)=\frac{e^{-\theta x/\beta}}{1+\lambda x}\left(\frac{1+\theta+\theta x}{1+\theta}\right)^{1/\beta}$, *and* $\delta=\frac{\beta}{\beta+1}$, *Proposition 2 provides a characterization of the minLLx distribution.*
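The characterizing identity of Remark 1 is easy to verify numerically: for any $x_0>0$, the truncated mean $E[\Psi(X)\,|\,X\ge x_0]$ should equal $\frac{\beta}{\beta+1}\Psi(x_0)$. A sketch with arbitrary illustrative parameter values, assuming pdf (2) and cdf (1):

```python
# Numerical check of E[Psi(X) | X >= x0] = (beta/(beta+1)) Psi(x0).
import numpy as np
from scipy.integrate import quad

theta, lam, beta = 0.3, 1.0, 0.8                   # illustrative values

def pdf(x):                                        # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def sf(x):                                         # survival function 1 - F(x)
    return ((1 + lam * x) ** (-beta)
            * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def Psi(x):
    return (np.exp(-theta * x / beta) / (1 + lam * x)
            * ((1 + theta + theta * x) / (1 + theta)) ** (1 / beta))

x0 = 1.3
lhs = quad(lambda x: Psi(x) * pdf(x), x0, np.inf)[0] / sf(x0)
print(lhs, beta / (beta + 1) * Psi(x0))            # the two numbers should agree
```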

#### **4. Maximum Likelihood Estimation**

The maximum likelihood estimates (MLEs) and the observed information matrix for the model parameters of the minLLx distribution will be investigated in this section. Let *x*1, *x*2, ... , *xn* be a random sample from the minLLx distribution, then the corresponding log-likelihood function is given by

$$\begin{split}
\ell &= -n\log(1+\theta)-\theta\sum_{i=1}^{n}x_i-(1+\beta)\sum_{i=1}^{n}\log(1+\lambda x_i) \\
&\quad+\sum_{i=1}^{n}\log\left\{\lambda\beta(1+\theta+\theta x_i)+\theta^{2}(1+x_i)(1+\lambda x_i)\right\}.
\end{split}\tag{16}$$

The components of the score vector $\nabla\ell=\left(\frac{\partial\ell}{\partial\theta},\frac{\partial\ell}{\partial\lambda},\frac{\partial\ell}{\partial\beta}\right)^{T}$ are:

$$\frac{\partial\ell}{\partial\theta} = \frac{-n}{1+\theta}-\sum_{i=1}^{n}x_i+\sum_{i=1}^{n}\left\{\frac{(1+x_i)\left[\lambda\beta+2\theta(1+\lambda x_i)\right]}{\lambda\beta(1+\theta+\theta x_i)+\theta^{2}(1+x_i)(1+\lambda x_i)}\right\},\tag{17}$$

$$\frac{\partial\ell}{\partial\lambda} = -(1+\beta)\sum_{i=1}^{n}\frac{x_i}{1+\lambda x_i}+\sum_{i=1}^{n}\left\{\frac{\beta(1+\theta+\theta x_i)+\theta^{2}x_i(1+x_i)}{\lambda\beta(1+\theta+\theta x_i)+\theta^{2}(1+x_i)(1+\lambda x_i)}\right\},\tag{18}$$

and

$$\frac{\partial\ell}{\partial\beta} = -\sum_{i=1}^{n}\log(1+\lambda x_i)+\sum_{i=1}^{n}\left\{\frac{\lambda(1+\theta+\theta x_i)}{\lambda\beta(1+\theta+\theta x_i)+\theta^{2}(1+x_i)(1+\lambda x_i)}\right\}.\tag{19}$$

The MLEs, say $\hat{\Theta}=(\hat{\theta},\hat{\lambda},\hat{\beta})^{T}$, of $\Theta=(\theta,\lambda,\beta)^{T}$ can be obtained by setting the system of nonlinear Equations (17)–(19) equal to zero and solving it simultaneously. The components of the observed information matrix $J(\Theta)=\{J_{wv}\}$ (for $w,v=\theta,\lambda,\beta$) are given in Appendix B.
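In practice, the score equations have no closed-form solution, so the log-likelihood (16) is maximized numerically. The following is a minimal sketch using SciPy's Nelder–Mead optimizer; the optimizer, starting values, and toy data are illustrative choices, not the paper's.

```python
# Numerical MLE for (theta, lambda, beta) by maximizing log-likelihood (16).
import numpy as np
from scipy.optimize import minimize

def negloglik(params, x):
    theta, lam, beta = params
    if theta <= 0 or lam <= 0 or beta <= 0:        # enforce the parameter space
        return np.inf
    term = lam * beta * (1 + theta + theta * x) \
           + theta ** 2 * (1 + x) * (1 + lam * x)
    ll = (-x.size * np.log(1 + theta) - theta * x.sum()
          - (1 + beta) * np.log(1 + lam * x).sum() + np.log(term).sum())
    return -ll

data = np.array([0.01, 0.05, 0.23, 0.42, 0.73, 1.10, 1.54, 2.17])  # toy data
fit = minimize(negloglik, x0=[1.0, 1.0, 1.0], args=(data,),
               method="Nelder-Mead")
print(fit.x)                                       # (theta, lambda, beta) MLEs
```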

## **5. Simulation Study**

It is very difficult to compare the theoretical performance of the different estimators for the minLLx distribution. Therefore, simulation is needed to compare the performance of the different estimation methods, mainly with respect to their biases, mean square errors (MSEs), and variances for different sample sizes. A numerical study is performed using Mathematica (v9) software; a portion of the code is provided as Supplementary Materials. Different sample sizes are considered throughout the experiments, namely *n* = 50, 100, 200, 300, and 500. For each sample size *n*, the empirical bias and MSE values are aggregated over *N* = 2000 replicated samples for the different values of the parameters *θ*, *λ*, and *β*. Traditionally, the qf, i.e., the inverse of the cdf, $Q(p)=F^{-1}(p)=\min\{x:F(x)\ge p\}$, is employed to generate variates. However, the qf of the minLLx distribution cannot be obtained in closed form. To obtain minLLx variates, we can instead implement the Newton−Raphson algorithm as follows:


$$x_{k+1} = x_k - R(x_k;\,\lambda,\theta,\beta),\qquad k=0,1,2,\ldots,$$

where $R(x_k;\lambda,\theta,\beta)=\dfrac{F(x_k;\lambda,\theta,\beta)-u}{f(x_k;\lambda,\theta,\beta)}$, $u$ is a uniform(0, 1) draw, and $F(\cdot\,;\lambda,\theta,\beta)$ and $f(\cdot\,;\lambda,\theta,\beta)$ are the cdf and pdf of the minLLx distribution (Equations (1) and (2)), respectively; the iteration is repeated until convergence.
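A minimal sketch of this variate generator, assuming cdf (1) and pdf (2); the starting value, tolerance, and safeguard against nonpositive iterates are our illustrative choices.

```python
# Newton-Raphson inversion: solve F(x) = u for each uniform draw u.
import numpy as np

rng = np.random.default_rng(2022)

def minllx_pdf(x, theta, lam, beta):               # pdf (2)
    return ((1 + lam * x) ** (-beta - 1) * np.exp(-theta * x) / (1 + theta)
            * (lam * beta * (1 + theta + theta * x)
               + theta ** 2 * (1 + x) * (1 + lam * x)))

def minllx_cdf(x, theta, lam, beta):               # cdf (1)
    return 1 - ((1 + lam * x) ** (-beta)
                * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def minllx_rvs(n, theta, lam, beta, tol=1e-10, max_iter=100):
    out = np.empty(n)
    for k in range(n):
        u = rng.uniform()
        x = 1.0                                    # illustrative starting point
        for _ in range(max_iter):
            step = ((minllx_cdf(x, theta, lam, beta) - u)
                    / minllx_pdf(x, theta, lam, beta))
            x = max(x - step, 1e-12)               # keep the iterate positive
            if abs(step) < tol:
                break
        out[k] = x
    return out

print(minllx_rvs(5, theta=0.3, lam=1.0, beta=0.8))
```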


The average estimates, biases, MSEs, coverage probabilities (CPs), and confidence intervals (CIs) at 95% and 99%, for different parameter combinations, are reported in Tables 2–5, respectively.

**Table 2.** The MLEs, Bias, MSE, and CPs for the model parameters of the minLLx distribution based on some initial (Init) values.




**Table 3.** The MLEs, Bias, MSE, and CPs for the model parameters of the minLLx distribution based on some initial (Init) values.


From Tables 2 and 3, we deduce that, as anticipated, the MSE of the estimators rises when the postulated model differs significantly from the genuine model. The MSE drops as the sample size increases and the homogeneity disintegrates. In general, the MSE declines as the kurtosis increases. Likewise, the bias grows as the asymmetry widens, and vice versa; the bias lessens as the kurtosis increases. It is therefore evident that the MSEs and biases reduce as the sample size *n* gets larger. Similarly, the CPs of the confidence intervals are quite near the nominal levels of certainty (95% and 99%), which endorses the established empirical findings. In a nutshell, we may infer that the MLEs perform impressively in estimating the parameters of the minLLx distribution.


**Table 4.** The MLEs, Bias, MSE, and CPs for the model parameters of the minLLx distribution based on some initial (Init) values.

**Table 5.** The MLEs, Bias, MSE, and CPs for the model parameters of the minLLx distribution based on some initial (Init) values.


#### **6. Applications**

In this section, we consider two real data sets to showcase the effectiveness of the minLLx distribution. The first data set represents the failure times of Kevlar 49/epoxy strands when the pressure is at the 90% stress level. These data are leptokurtic, unimodal, and substantially right-skewed, with a likely outlier (skewness = 3.05 and kurtosis = 14.47). The data set is taken from Andrews and Herzberg [15]; the original source is Barlow et al. [16]. The data are: 0.01, 0.01, 0.02, 0.02, 0.02, 0.03, 0.03, 0.04, 0.05, 0.06, 0.07, 0.07, 0.08, 0.09, 0.09, 0.10, 0.10, 0.11, 0.11, 0.12, 0.13, 0.18, 0.19, 0.20, 0.23, 0.24, 0.24, 0.29, 0.34, 0.35, 0.36, 0.38, 0.40, 0.42, 0.43, 0.52, 0.54, 0.56, 0.60, 0.60, 0.63, 0.65, 0.67, 0.68, 0.72, 0.72, 0.72, 0.73, 0.79, 0.79, 0.80, 0.80, 0.83, 0.85, 0.90, 0.92, 0.95, 0.99, 1.00, 1.01, 1.02, 1.03, 1.05, 1.10, 1.10, 1.11, 1.15, 1.18, 1.20, 1.29, 1.31, 1.33, 1.34, 1.40, 1.43, 1.45, 1.50, 1.51, 1.52, 1.53, 1.54, 1.54, 1.55, 1.58, 1.60, 1.63, 1.64, 1.80, 1.80, 1.81, 2.02, 2.05, 2.14, 2.17, 2.33, 3.03, 3.03, 3.34, 4.20, 4.69, and 7.89. These data were also used by Cooray and Ananda [17] and Al-Aqtash et al. [18].

The second data set represents the failure times of 20 components from Murthy et al. [19]. The data are: 0.072, 4.763, 8.663, 12.089, 0.477, 5.284, 9.511, 13.036, 1.592, 7.709, 10.636, 13.949, 2.475, 7.867, 10.729, 16.169, 3.597, 8.661, 11.501, and 19.809.

We obtained the MLEs of the unknown parameters for all competing models and then compared the results via goodness-of-fit statistics: the Anderson–Darling statistic (A∗), the Cramér–von Mises statistic (W∗), the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The best model corresponds to the smallest values of these criteria. The values of the Kolmogorov–Smirnov (KS) statistic and its *p*-value are also presented.
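A sketch of these computations in Python/SciPy: the KS statistic and its p-value come from `scipy.stats.kstest` applied to the minLLx cdf, and AIC/BIC follow from the maximized log-likelihood (16). The short data excerpt and the stand-in parameter values below are illustrative only, not the fitted values of Tables 6–9.

```python
# Goodness-of-fit sketch: KS test plus AIC/BIC for a fitted minLLx model.
import numpy as np
from scipy.stats import kstest

def minllx_cdf(x, theta, lam, beta):               # cdf (1)
    return 1 - ((1 + lam * x) ** (-beta)
                * (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x))

def loglik(x, theta, lam, beta):                   # log-likelihood (16)
    term = lam * beta * (1 + theta + theta * x) \
           + theta ** 2 * (1 + x) * (1 + lam * x)
    return (-x.size * np.log(1 + theta) - theta * x.sum()
            - (1 + beta) * np.log(1 + lam * x).sum() + np.log(term).sum())

data = np.array([0.01, 0.07, 0.18, 0.40, 0.72, 1.02, 1.45, 2.05])  # excerpt
th, la, be = 1.0, 1.0, 1.0                         # stand-ins for the MLEs
ks = kstest(data, lambda x: minllx_cdf(x, th, la, be))
k, n = 3, data.size
ll = loglik(data, th, la, be)
aic, bic = 2 * k - 2 * ll, k * np.log(n) - 2 * ll
print(ks.statistic, ks.pvalue, aic, bic)
```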

We compared the minLLx distribution with the Weibull Lindley (WL) (Asgharzadeh et al. [20]), Lomax (Lx), Lindley (L), quasi Lindley (QL) (Shanker and Mishra [21]), and power Lomax (PLx) (Rady et al. [22]) distributions. The MLEs, their standard errors (SEs), and some goodness-of-fit statistics of the models for the respective data sets are reported in Tables 6–9. The estimated pdf and cdf plots of all competing distributions for the two data sets are displayed in Figures 3 and 4, respectively.


**Table 6.** The MLEs alongside their accompanying SEs (in parentheses) for the first data set.

**Table 7.** Some goodness of fit statistics for the fitted models to the first data set.




**Table 8.** The MLEs alongside their accompanying SEs (in parentheses) for the second data set.


**Table 9.** Some goodness of fit statistics for the models fitted to the second data set.


The values in Tables 7 and 9 clearly show that the minLLx distribution has the smallest values of A\*, W\*, AIC, BIC, and KS, and the largest *p*-values, among all competing models, so it is chosen as the best model. It is also clear from Figures 3 and 4 that the new minLLx distribution provides the best fit for the two data sets.

**Figure 3.** Estimated pdf and cdf plots of the minLLx distribution for the first data set.

**Figure 4.** Estimated pdf and cdf plots of the minLLx distribution for the second data set.
