*5.1. Estimation of the Parameters*

Here, we derive the maximum likelihood estimates (MLEs) of the parameters of the new family of distributions from complete samples only. Let *x*<sub>1</sub>, *x*<sub>2</sub>, ... , *x<sub>n</sub>* be observed values from the GOLE-F family with parameters *a*, *b*, *c* and *φ*. Let *ξ* = (*a*, *b*, *c*, *φ*)<sup>T</sup> be the parameter vector. The total log-likelihood function for *ξ* is given by

$$\begin{split} l(\xi) &= n \log c + \sum_{i=1}^{n} \log f(x_i;\varphi) + (c-1)\sum_{i=1}^{n}\log F(x_i;\varphi) + \sum_{i=1}^{n}\log\left(a+(b-a)F(x_i;\varphi)^{c}\right) \\ &\quad - 3\sum_{i=1}^{n}\log\left(1-F(x_i;\varphi)^{c}\right) - a\sum_{i=1}^{n}H(x_i;\varphi) - \frac{b}{2}\sum_{i=1}^{n}H(x_i;\varphi)^{2}, \end{split} \tag{40}$$

where $H(x_i;\varphi)=\frac{F(x_i;\varphi)^{c}}{1-F(x_i;\varphi)^{c}}$ and $H(x_i;\varphi)^{2}=\left(\frac{F(x_i;\varphi)^{c}}{1-F(x_i;\varphi)^{c}}\right)^{2}$. The components of the score vector *U*(*ξ*) = (*U<sub>a</sub>*, *U<sub>b</sub>*, *U<sub>c</sub>*, *U<sub>φ</sub>*)<sup>T</sup> are given by
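It is convenient to record the partial derivatives of *H*, which enter the score components in Equations (43) and (44) below:

$$\frac{\partial H(x_i;\varphi)}{\partial c} = \frac{F(x_i;\varphi)^{c}\log F(x_i;\varphi)}{\left(1-F(x_i;\varphi)^{c}\right)^{2}}, \qquad \frac{\partial H(x_i;\varphi)}{\partial \varphi_k} = \frac{c\,F(x_i;\varphi)^{c-1}\,\partial F(x_i;\varphi)/\partial \varphi_k}{\left(1-F(x_i;\varphi)^{c}\right)^{2}}.$$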

$$U_a = \sum_{i=1}^{n} \frac{1-F(x_i;\varphi)^{c}}{a+(b-a)F(x_i;\varphi)^{c}} - \sum_{i=1}^{n} H(x_i;\varphi), \tag{41}$$

$$U_b = \sum_{i=1}^{n} \frac{F(x_i;\varphi)^{c}}{a+(b-a)F(x_i;\varphi)^{c}} - \frac{1}{2}\sum_{i=1}^{n} H(x_i;\varphi)^{2}, \tag{42}$$

$$\begin{split} U_c &= \frac{n}{c} + \sum_{i=1}^{n}\log F(x_i;\varphi) + \sum_{i=1}^{n}\frac{(b-a)F(x_i;\varphi)^{c}\log F(x_i;\varphi)}{a+(b-a)F(x_i;\varphi)^{c}} + 3\sum_{i=1}^{n}\frac{F(x_i;\varphi)^{c}\log F(x_i;\varphi)}{1-F(x_i;\varphi)^{c}} \\ &\quad - a\sum_{i=1}^{n}\frac{F(x_i;\varphi)^{c}\log F(x_i;\varphi)}{\left(1-F(x_i;\varphi)^{c}\right)^{2}} - b\sum_{i=1}^{n}\frac{F(x_i;\varphi)^{2c}\log F(x_i;\varphi)}{\left(1-F(x_i;\varphi)^{c}\right)^{3}}, \end{split} \tag{43}$$

and

$$\begin{split} U_{\varphi_k} &= \sum_{i=1}^{n}\frac{\partial f(x_i;\varphi)/\partial \varphi_k}{f(x_i;\varphi)} + (c-1)\sum_{i=1}^{n}\frac{\partial F(x_i;\varphi)/\partial \varphi_k}{F(x_i;\varphi)} + \sum_{i=1}^{n}\frac{c(b-a)F(x_i;\varphi)^{c-1}\,\partial F(x_i;\varphi)/\partial \varphi_k}{a+(b-a)F(x_i;\varphi)^{c}} \\ &\quad + 3\sum_{i=1}^{n}\frac{c\,F(x_i;\varphi)^{c-1}\,\partial F(x_i;\varphi)/\partial \varphi_k}{1-F(x_i;\varphi)^{c}} - a\sum_{i=1}^{n}\frac{\partial H(x_i;\varphi)}{\partial \varphi_k} - b\sum_{i=1}^{n}H(x_i;\varphi)\frac{\partial H(x_i;\varphi)}{\partial \varphi_k}. \end{split} \tag{44}$$

Setting *U<sub>a</sub>*, *U<sub>b</sub>*, *U<sub>c</sub>* and *U<sub>φ</sub>* equal to zero and solving the resulting equations simultaneously yields the MLE $\hat{\xi}=(\hat{a},\hat{b},\hat{c},\hat{\varphi})^{T}$ of $\xi=(a,b,c,\varphi)^{T}$. These equations cannot be solved analytically, so statistical software must be used to solve them numerically via iterative methods such as Newton–Raphson-type algorithms.
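As an illustration only (the paper's computations use R; this is our own Python sketch, and the function names are ours), the numerical maximization can be carried out with `scipy.optimize.minimize` and the L-BFGS-B method, here for the GOLE-E special case with exponential baseline *F*(*x*; *λ*) = 1 − e<sup>−*λx*</sup>:

```python
import numpy as np
from scipy.optimize import minimize

def gole_e_negloglik(theta, x):
    """Negative log-likelihood of Equation (40) for the GOLE-E model,
    with exponential baseline F(x; lam) = 1 - exp(-lam * x)."""
    a, b, c, lam = theta
    F = 1.0 - np.exp(-lam * x)          # baseline CDF
    f = lam * np.exp(-lam * x)          # baseline PDF
    Fc = F**c
    H = Fc / (1.0 - Fc)                 # H(x; phi) = F^c / (1 - F^c)
    ll = (len(x) * np.log(c) + np.sum(np.log(f)) + (c - 1) * np.sum(np.log(F))
          + np.sum(np.log(a + (b - a) * Fc)) - 3 * np.sum(np.log(1 - Fc))
          - a * np.sum(H) - (b / 2) * np.sum(H**2))
    return -ll

def gole_e_sample(n, a, b, c, lam, rng):
    """Inverse-transform sampling: solve a*H + (b/2)*H^2 = -log(1-u) for H."""
    u = rng.uniform(size=n)
    t = -np.log1p(-u)
    H = (-a + np.sqrt(a**2 + 2 * b * t)) / b    # positive root of the quadratic
    F = (H / (1.0 + H))**(1.0 / c)              # invert H = F^c / (1 - F^c)
    return -np.log1p(-F) / lam                  # invert the exponential CDF

rng = np.random.default_rng(1)
x = gole_e_sample(2000, a=2.0, b=1.5, c=2.0, lam=2.5, rng=rng)
fit = minimize(gole_e_negloglik, x0=[1.0, 1.0, 1.0, 1.0], args=(x,),
               method="L-BFGS-B", bounds=[(1e-4, None)] * 4)
print(fit.x)  # estimates of (a, b, c, lambda)
```

The same routine, applied to the observed data sets in Section 6 instead of simulated samples, mirrors the bound-constrained L-BFGS-B maximization described there.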

#### *5.2. Simulation Study*

In this section, a graphical Monte Carlo simulation study is conducted to assess the performance of the estimators of the unknown parameters of the GOLE-W and GOLE-E distributions. All computations in this section are carried out in R. We generate *N* = 1000 samples of sizes *n* = 20, 25, ... , 500 from the GOLE-W and GOLE-E distributions. The true parameter values for GOLE-W (with *λ* = 1) are *a* = 1.8, *b* = 0.5, *c* = 1.7 and *β* = 2.8, and those for GOLE-E are *a* = 2, *b* = 1.5, *c* = 2 and *λ* = 2.5, respectively. We also calculate the bias and mean square error (*MSE*) of the MLEs empirically. The bias and *MSE* are computed by

$$\widehat{Bias}_{h} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}_{i}-h\right), \qquad \widehat{MSE}_{h} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}_{i}-h\right)^{2},$$

for *h* = *a*, *b*, *c* and *λ* (with *β* in place of *λ* for the GOLE-W model), respectively.
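The empirical bias and *MSE* formulas can be sketched directly (a minimal illustration with made-up replicated estimates, not the paper's R simulation output):

```python
import numpy as np

def empirical_bias_mse(h_hat, h):
    """Empirical bias and MSE of N replicated estimates h_hat of a true value h."""
    h_hat = np.asarray(h_hat, dtype=float)
    bias = np.mean(h_hat - h)           # (1/N) * sum(h_hat_i - h)
    mse = np.mean((h_hat - h)**2)       # (1/N) * sum((h_hat_i - h)^2)
    return bias, mse

# Hypothetical replicated estimates of a parameter with true value h = 2
bias, mse = empirical_bias_mse([2.1, 1.9, 2.05], h=2.0)
print(bias, mse)  # 0.05/3 ≈ 0.0167 and 0.0225/3 = 0.0075
```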

We present the results of this simulation study in Figures 3–6. These figures show that, as the sample size increases, the empirical biases and *MSEs* approach zero in all cases for both models.

**Figure 3.** The biases of the estimates of parameters of the GOLE-W distribution.

**Figure 4.** The *MSEs* of the estimates of parameters of the GOLE-W distribution.

**Figure 5.** The biases of the estimates of parameters of the GOLE-E distribution.

**Figure 6.** The *MSEs* of the estimates of parameters of the GOLE-E distribution.

#### **6. Applications on Real-Life Data Sets**

In this section, we illustrate the suitability of the proposed family by fitting two real data sets with its special models GOLE-W (*a*, *b*, *c*, *λ*, *β*) and GOLE-E (*a*, *b*, *c*, *λ*), whose PDFs are given in Sections 3.1 and 3.2, respectively. The comparison with some existing models is conducted via numerical maximization of the log-likelihood functions using a limited-memory quasi-Newton code for bound-constrained optimization (L-BFGS-B). The log-likelihood function evaluated at the MLEs is reported for each fitted model.

Data I: The first data set consists of measurements of nicotine levels in 346 cigarettes [https://arxiv.org/ftp/arxiv/papers/1509/1509.08108.pdf, accessed on 19 May 2022]. Data II: The second data set consists of 74 observations of failure stresses of single carbon fibers of gauge length 20 mm (Kundu and Raqab, 2009 [17]). The descriptive statistics for these data sets are given in Table 2.

**Table 2.** Descriptive Statistics for the data set I and data set II.


The total time on test (TTT) plot proposed by Aarset (1987) [18] is a technique for extracting information about the shape of the hazard function. It is drawn by plotting

$$T(i/n) = \frac{\sum_{r=1}^{i} y_{(r)} + (n-i)\,y_{(i)}}{\sum_{r=1}^{n} y_{(r)}},$$

against *i*/*n*, where *i* = 1, 2, ... , *n* and *y*<sub>(*r*)</sub>, *r* = 1, 2, ... , *n*, are the order statistics of the sample. For a constant hazard the plot is a straight diagonal line, while for decreasing (increasing) hazards it is convex (concave), respectively. The TTT plots for the data sets in Figure 7 indicate that both data sets have an increasing hazard rate.
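The TTT transform can be sketched as follows (an illustrative implementation of Aarset's statistic, not the authors' plotting code):

```python
import numpy as np

def ttt_transform(y):
    """Scaled total time on test T(i/n) for i = 1, ..., n (Aarset, 1987)."""
    y = np.sort(np.asarray(y, dtype=float))   # order statistics y_(1) <= ... <= y_(n)
    n = len(y)
    total = y.sum()
    cum = np.cumsum(y)                        # partial sums of the order statistics
    i = np.arange(1, n + 1)
    return (cum + (n - i) * y) / total        # one value of T(i/n) per i

# Plotting these values against i/n gives the TTT plot; a concave (convex)
# curve above (below) the diagonal suggests an increasing (decreasing) hazard.
print(ttt_transform([1.0, 2.0, 3.0]))  # [0.5, 0.8333..., 1.0]
```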

**Figure 7.** TTT plots of the data set I and II.

The best model is chosen on the basis of information criteria, namely the AIC (Akaike Information Criterion), CAIC (Consistent Akaike Information Criterion), BIC (Bayesian Information Criterion) and HQIC (Hannan–Quinn Information Criterion), together with the goodness-of-fit measures A\* (Anderson–Darling criterion), W\* (Cramér–von Mises criterion) and the Kolmogorov–Smirnov (K-S) test with its *p*-value. The model with the minimum values of these statistics is chosen as the best fit to the data, except for the K-S *p*-value, for which the maximum value is desired. Asymptotic standard errors and 95% confidence intervals of the MLEs of the parameters for each competing model are also computed. For visual comparison, the fitted PDFs and CDFs are plotted against the corresponding observed histograms and ogives.
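These criteria are simple functions of the maximized log-likelihood $\hat{\ell}$, the number of parameters *k* and the sample size *n*. A sketch with the usual definitions follows; note that the CAIC shown is Bozdogan's variant, and papers differ on this definition, so this is an assumption rather than the paper's exact formula:

```python
import numpy as np

def info_criteria(loglik, k, n):
    """AIC, BIC, CAIC (Bozdogan's variant) and HQIC from a maximized log-likelihood."""
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * np.log(n)
    caic = -2 * loglik + k * (np.log(n) + 1)
    hqic = -2 * loglik + 2 * k * np.log(np.log(n))
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "HQIC": hqic}

# Example with a hypothetical fit: smaller values indicate a better
# trade-off between fit quality and model complexity.
ic = info_criteria(loglik=-100.0, k=4, n=74)
print(ic["AIC"])  # 208.0
```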

#### *6.1. Application of GOLE-E*

The GOLE-E (*a*, *b*, *c*, *λ*) distribution is compared with some models, namely exponential (E), moment exponential (ME) (Dara and Ahmad, 2012 [19]), exponentiated moment exponential (EM-E) (Hasnain et al., 2015 [20]), exponentiated exponential (E-E) (Gupta and Kundu, 2001 [21]), beta exponential (B-E) (Nadarajah and Kotz, 2006 [22]) and Kumaraswamy exponential (Kw-E) (Cordeiro and de Castro, 2011 [4]) distributions for all data sets.

In Tables 3–6, the MLEs, standard errors (in parentheses) and confidence intervals [in brackets] of the parameters for all the fitted distributions, along with the AIC, BIC, CAIC and HQIC, are presented for the two data sets. From Tables 3–6, it is evident that the GOLE-E distribution is the best model for both data sets, with the lowest values of the AIC, BIC, CAIC, HQIC, A\* and W\* and the highest *p*-value of the K-S statistic. Hence, it fits better than the competing models, namely the exponential (E), moment exponential (ME), exponentiated moment exponential (EM-E), exponentiated exponential (E-E), beta exponential (B-E) and Kumaraswamy exponential (Kw-E) distributions, for the two data sets. For visual comparison, histograms and ogives (cumulative frequency curves) of the observed data are displayed with the fitted densities and fitted CDFs in Figures 8 and 9. These plots show that the proposed distribution provides the closest fit to the observed data sets.


**Table 3.** MLEs, standard error (in parentheses), confidence interval values [in brackets] for the data set I.

**Table 4.** The AIC, BIC, CAIC, HQIC, A\*, W\* and KS (*p*-value) values for data set I.



**Table 5.** MLEs, standard error (in parentheses) and confidence interval values [in brackets] for data set II.

**Table 6.** The AIC, BIC, CAIC, HQIC, A\*, W\* and KS (*p*-value) values for data set II.


**Figure 8.** Plots of (**a**) the fitted PDF and (**b**) estimated CDF for the GOLE-E distribution for data set I.

**Figure 9.** Plots of (**a**) the fitted PDF and (**b**) estimated CDF for the GOLE-E distribution for data set II.

#### *6.2. Application of GOLE-W*

The GOLE-W (*a*, *b*, *c*, *λ*, *β*) distribution with (*λ* = 1) is compared with some models, namely Weibull (W), moment exponential (ME), exponentiated Weibull (EW) (Mudholker and Srivastava, 1993 [23]), generalized Weibull (GW) (Lai 2014 [24]), beta Weibull (B-W) (Lee et al., 2007 [25]) and Kumaraswamy Weibull (Kw-W) (Cordeiro et al. 2010 [26]) distributions for all data sets.

Likewise, in Tables 7–10, the MLEs, standard errors (in parentheses) and confidence intervals [in brackets] of the parameters for all the competing models, along with the AIC, CAIC, BIC and HQIC, are presented for the two data sets. From these tables, it is evident that the GOLE-W distribution is the best model for both data sets, with the lowest values of the AIC, BIC, CAIC, HQIC, A\* and W\* and the highest *p*-value of the K-S statistic. Hence, the proposed GOLE-F family provides a more useful generalization (with exponential and Weibull as special models) than the competing models for both data sets. A visual comparison is presented in Figures 10 and 11, where the fitted densities and distribution functions are compared against the observed data. These plots reveal that the proposed distribution provides the closest fit to the observed data sets.


**Table 7.** MLEs, standard errors (in parentheses) and confidence interval [in brackets] values for data set I.

**Table 8.** The AIC, CAIC, BIC, HQIC, A\*, W\* and KS (*p*-value) values for data set I.



**Table 9.** MLEs, standard errors (in parentheses), confidence interval values [in brackets] for data set II.

**Table 10.** The AIC, CAIC, BIC, HQIC, A\*, W\* and KS (*p*-value) values for data set II.


**Figure 10.** Plots of (**a**) the fitted PDF for the GOLE-W distribution and (**b**) estimated CDF for the GOLE-W distribution for data set I.

**Figure 11.** Plots of (**a**) the fitted PDF for the GOLE-W distribution and (**b**) estimated CDF for the GOLE-W distribution for data set II.

#### **7. Conclusions**

In this paper, we provide a new general family of distributions that generalizes any continuous baseline distribution. The main properties of the new family, together with properties relevant to reliability, are discussed. The distributions generated by the new family are highly flexible for data modeling; we fitted its special models to two real data sets to illustrate the importance of the family, and they provided consistently better fits than the competing distributions.

**Author Contributions:** Data curation, L.H.; Formal analysis, L.H.; Investigation, S.S.; Project administration, A.H.N.A.; Resources, S.K.; Supervision, W.M.; Writing—original draft, F.J. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors are thankful to the Editor-in-Chief and the anonymous referees for their meticulous and thorough reading, which significantly enhanced the readability of this paper, and special thanks to Christophe Chesneau, Department of Mathematics, University of Caen-Normandie, LMNO, France, for their appreciable directions regarding Propositions 1 and 2.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

We recall Equations (3) and (4), renumbered as (A1) and (A2), respectively:

$$G(\mathbf{x};a,b,c,\boldsymbol{\phi}) = 1 - \exp\left[ -\left( \frac{aF(\mathbf{x};\boldsymbol{\phi})^{c}}{1 - F(\mathbf{x};\boldsymbol{\phi})^{c}} + \frac{b}{2} \left( \frac{F(\mathbf{x};\boldsymbol{\phi})^{c}}{1 - F(\mathbf{x};\boldsymbol{\phi})^{c}} \right)^{2} \right) \right].\tag{A1}$$

$$g(x;a,b,c,\varphi) = \frac{c\,f(x;\varphi)F(x;\varphi)^{c-1}\left(a+(b-a)F(x;\varphi)^{c}\right)}{\left(1-F(x;\varphi)^{c}\right)^{3}} \times \exp\left[-\left(\frac{aF(x;\varphi)^{c}}{1-F(x;\varphi)^{c}}+\frac{b}{2}\left(\frac{F(x;\varphi)^{c}}{1-F(x;\varphi)^{c}}\right)^{2}\right)\right].\tag{A2}$$
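As a sanity check on Equations (A1) and (A2), the PDF should integrate to one and match the numerical derivative of the CDF. The following is our own illustrative code, not part of the paper, using an exponential baseline *F*(*x*; *λ*) = 1 − e<sup>−*λx*</sup>:

```python
import numpy as np
from scipy.integrate import quad

def gole_cdf(x, a, b, c, lam):
    """Equation (A1) with exponential baseline F(x; lam) = 1 - exp(-lam * x)."""
    F = 1.0 - np.exp(-lam * x)
    H = F**c / (1.0 - F**c)
    return 1.0 - np.exp(-(a * H + (b / 2) * H**2))

def gole_pdf(x, a, b, c, lam):
    """Equation (A2) with the same exponential baseline."""
    F = 1.0 - np.exp(-lam * x)
    f = lam * np.exp(-lam * x)
    H = F**c / (1.0 - F**c)
    num = c * f * F**(c - 1) * (a + (b - a) * F**c)
    return num / (1.0 - F**c)**3 * np.exp(-(a * H + (b / 2) * H**2))

a, b, c, lam = 2.0, 1.5, 2.0, 2.5
# Truncate at x = 5: the survival function is numerically zero well before that.
total, _ = quad(gole_pdf, 0, 5, args=(a, b, c, lam))
print(round(total, 6))  # 1.0 (up to quadrature error)
```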

**Proposition A1.** *By the equivalence $1-e^{-y}\sim y$ as $y\to 0$, which applies here since $\lim_{x\to-\infty}F(x)^{c}=0$, the CDF in Equation (A1) satisfies*

$$G(\mathbf{x}) \sim \frac{aF(\mathbf{x})^c}{1 - F(\mathbf{x})^c} + \frac{b}{2} \left( \frac{F(\mathbf{x})^c}{1 - F(\mathbf{x})^c} \right)^2,$$

*and, by asymptotic dominance, we obtain*

$$G(\mathbf{x}) \sim a \, F(\mathbf{x})^{c}. \tag{A3}$$

*Using the same arguments, we obtain*

$$g(\mathbf{x}) \sim c \, a \, f(\mathbf{x}) F(\mathbf{x})^{c-1}. \tag{A4}$$

*In addition, the survival function is close to one; thus, the denominator in the hazard function is close to one. Then, using Equations (A3) and (A4), we obtain*

$$h(\mathbf{x}) \sim \ c \, a \, f(\mathbf{x}) F(\mathbf{x})^{c-1}. \tag{A5}$$

**Proposition A2.** *Similarly, using the same arguments when $\lim_{x\to+\infty}F(x)^{c}=1$, we can show that the survival function reduces approximately as follows:*

$$1 - G(\mathbf{x}) \sim e^{-\left(\frac{a}{1 - F(\mathbf{x})^{c}} + \frac{b}{2}\left(\frac{1}{1 - F(\mathbf{x})^{c}}\right)^2\right)}.\tag{A6}$$

*Using the same arguments, we obtain*

$$g(\mathbf{x}) \sim \frac{b\,c\,f(\mathbf{x})}{\left(1 - F(\mathbf{x})^c\right)^3} \; e^{-\left(\frac{a}{1 - F(\mathbf{x})^{c}} + \frac{b}{2}\left(\frac{1}{1 - F(\mathbf{x})^{c}}\right)^2\right)}.\tag{A7}$$

*Using Equations (A6) and (A7), we obtain*

$$h(\mathbf{x}) \sim \frac{bcf(\mathbf{x})}{\left(1 - F(\mathbf{x})^c\right)^3}.$$

*This completes the proof.*
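Proposition A2's hazard approximation can also be checked numerically. Note that the exponential factors of *g*(*x*) and 1 − *G*(*x*) cancel exactly in *h*(*x*) = *g*(*x*)/(1 − *G*(*x*)), so the ratio of the exact hazard to the approximation has a closed form. The sketch below (our own illustrative code, with an exponential baseline) verifies that this ratio tends to one:

```python
import numpy as np

def hazard_ratio(x, a, b, c, lam):
    """Ratio of the exact GOLE hazard to the approximation b*c*f(x)/(1 - F^c)^3.

    The exact hazard is c*f*F^(c-1)*(a + (b - a)*F^c) / (1 - F^c)^3, since the
    exponential factors cancel, so the ratio reduces to
    F^(c-1)*(a + (b - a)*F^c) / b, which tends to 1 as F -> 1.
    """
    F = 1.0 - np.exp(-lam * x)   # exponential baseline CDF
    return F**(c - 1) * (a + (b - a) * F**c) / b

a, b, c, lam = 2.0, 1.5, 2.0, 2.5
# The ratio approaches 1 as x grows, confirming the tail approximation.
print(hazard_ratio(1.0, a, b, c, lam), hazard_ratio(8.0, a, b, c, lam))
```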

#### **References**

