Article

Sampling Plan for the Kavya–Manoharan Generalized Inverted Kumaraswamy Distribution with Statistical Inference and Applications

1 Department of Quantitative Analysis, College of Business Administration, King Saud University, P.O. Box 71115, Riyadh 11587, Saudi Arabia
2 Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
3 Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni-Suef 62521, Egypt
4 Department of Mathematics, Université de Caen Normandie, Campus II, Science 3, 14032 Caen, France
5 Department of Basic Sciences, Obour High Institute for Management & Informatics, Obour 11848, Egypt
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(8), 739; https://doi.org/10.3390/axioms12080739
Submission received: 3 June 2023 / Revised: 21 July 2023 / Accepted: 24 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Probability, Statistics and Estimation)

Abstract:
In this article, we introduce the Kavya–Manoharan generalized inverse Kumaraswamy (KM-GIKw) distribution, which can be presented as an improved version of the generalized inverse Kumaraswamy distribution with three parameters. It contains numerous lifetime distributions referenced in the literature and a large panel of new ones. Among the essential features and attributes covered in our research are quantiles, moments, and information measures. In particular, various entropy measures (Rényi, Tsallis, etc.) are derived and discussed numerically. The adaptability of the KM-GIKw distribution in terms of the shapes of the probability density and hazard rate functions demonstrates how well it is able to fit different types of data. Based on it, an acceptance sampling plan is created when the life test is truncated at a predefined time. More precisely, the truncation time is intended to represent the median of the KM-GIKw distribution with preset factors. In a separate part, the focus is put on the inference of the KM-GIKw distribution. The related parameters are estimated using the Bayesian, maximum likelihood, and maximum product of spacings methods. For the Bayesian method, both symmetric and asymmetric loss functions are employed. To examine the behaviors of the various estimates based on criterion measurements, a Monte Carlo simulation study is carried out. Finally, with the aim of demonstrating the applicability of our findings, three real datasets are used. The results show that the KM-GIKw distribution offers superior fits when compared to other well-known distributions.

1. Introduction

Nowadays, in order to communicate and use data, developing effective statistical models remains a challenge. The environmental sciences, biology, economics, and engineering are particularly demanding of such models. The desired characteristics of the (probability) distributions that form the bases of these models have thus been the focus of numerous generations of statisticians. In particular, various types of extension or generalization techniques have been elaborated to enhance the properties of existing distributions. In parallel, modern computational advancements have made it practical to apply the complex mathematical transformations that have emerged in this area. A traditional approach involves adding scale or shape parameters to make an existing distribution more sensitive to certain key modeling elements (mean, variance, tails, skewness, kurtosis, etc.). Consequently, several novel families of continuous distributions have been put forth, some of which were created in Refs. [1,2,3,4,5,6,7,8,9,10,11,12]. In particular, Ref. [13] suggested the Dinesh–Umesh–Sanjay (DUS) transformation to obtain original lifetime distributions. Its primary advantage is the capacity of the created distributions to keep their parameter-parsimonious nature while possessing new functionalities. The cumulative distribution function (CDF) and probability density function (PDF) of a random variable (RV) V based on the DUS transformation are given by
F(v) = \frac{1}{e-1}\left(e^{G(v)} - 1\right), \quad v \in \mathbb{R},
and
f(v) = \frac{1}{e-1}\, g(v)\, e^{G(v)}, \quad v \in \mathbb{R},
respectively, where G ( . ) and g ( . ) are the CDF and the PDF of a baseline continuous distribution, respectively. Ref. [14] suggested an alternative approach that created new lifetime distributions using the sine function. Based on this, Ref. [15] developed a generalized DUS transformation for generating several relevant lifetime distributions. Still in the spirit of the DUS transformation, Ref. [16] proposed a new class of parsimonious distributions, known as the Kavya–Manoharan (KM) transformation, with the following CDF and PDF:
F(v) = \frac{e}{e-1}\left(1 - e^{-G(v)}\right), \quad v \in \mathbb{R},
and
f(v) = \frac{e}{e-1}\, g(v)\, e^{-G(v)}, \quad v \in \mathbb{R},
respectively. Using the exponential and Weibull distributions as the baseline distributions for this modification, Ref. [16] introduced two new distributions. Some analytical properties, parameter estimates, and data analyses were presented. Based on the KM transformation, some recently modified distributions were established. The one-parameter distribution called the KM inverse length biased exponential distribution was suggested in Ref. [17]. Ref. [18] proposed an enhanced version of the Burr X (BX) distribution based on the KM transformation and used ranked set sampling for the estimation of the parameters involved. In regard to biomedical data, Ref. [19] presented an extended version of the log-logistic distribution using this transformation. A new expanded form of the Kumaraswamy (Kw) distribution, still based on the KM transformation, was presented in Ref. [20]. Further, Ref. [21] provided a new three-parameter KM exponentiated Weibull distribution.
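To make the two constructions concrete, the DUS and KM transformations above can be sketched in a few lines of code. The following is an illustrative Python sketch (the function names and the exponential baseline are our own choices, not from the paper):

```python
import math

# Minimal sketch of the DUS and KM transformations applied to a generic
# baseline CDF G and PDF g; the exponential baseline below is illustrative.

def dus_cdf(G, v):
    # DUS: F(v) = (e^{G(v)} - 1) / (e - 1)
    return (math.exp(G(v)) - 1.0) / (math.e - 1.0)

def km_cdf(G, v):
    # KM: F(v) = e/(e-1) * (1 - e^{-G(v)})
    return math.e / (math.e - 1.0) * (1.0 - math.exp(-G(v)))

def km_pdf(G, g, v):
    # KM: f(v) = e/(e-1) * g(v) * e^{-G(v)}
    return math.e / (math.e - 1.0) * g(v) * math.exp(-G(v))

# Baseline: standard exponential distribution
G = lambda v: 1.0 - math.exp(-v) if v > 0 else 0.0
g = lambda v: math.exp(-v) if v > 0 else 0.0
```

Both transformed CDFs equal 0 at the lower end and tend to 1 in the limit, and neither transformation adds any extra parameter to the baseline, which is the parsimony argument made above.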
On the other hand, the distribution of the inverse of an RV is also a famous transformation technique known as the inverse distribution. Based on an RV, say U, it consists of considering the distribution of the inverse RV of U, i.e., V = 1 / U . Here are a few applications where the inverse distribution arises: In finance, the inverse distribution is used to model the distribution of returns on investments. More precisely, the returns on investments are often modeled by log-normal or other distributions, and the inverse of these distributions is used to model the distribution of the time to reach a certain investment goal. In actuarial science, the inverse distribution is used to model the time until a claim is made. In queuing theory, the distribution of the time between two arrivals in a queue is often modeled by an inverse distribution. The inverse distribution is also used to model the service times of customers in a queue. Overall, many studies involving inverse distributions have been treated in the literature by different researchers (see, for instance, Refs. [22,23,24,25,26,27,28,29]). In particular, the inverse Kw (IKw) distribution was created in Ref. [29] from the original Kw distribution. The corresponding CDF is indicated as follows:
G(v) = \left[1 - (1+v)^{-\vartheta_1}\right]^{\vartheta_2}, \quad v > 0,
where \vartheta_1 > 0 and \vartheta_2 > 0 are shape parameters, and G(v) = 0 for v \le 0. Ref. [30] published the generalized version of Equation (3), known as the generalized IKw (GIKw) distribution, obtained by adding a new shape parameter. The CDF and PDF of the GIKw distribution are as follows:
G(v) = \left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}, \quad v > 0,
and G(v) = 0 for v \le 0, and
g(v) = \vartheta_1\vartheta_2\vartheta_3\, v^{\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1}, \quad v > 0,
and g(v) = 0 for v \le 0, respectively, where \vartheta_1 > 0, \vartheta_2 > 0, and \vartheta_3 > 0 are shape parameters.
In light of the above, this article provides a contribution to the topic by introducing the Kavya–Manoharan-GIKw (KM-GIKw) distribution as a new three-parameter distribution based on the KM transformation. The following points provide sufficient justification for studying it:
  • The KM-GIKw distribution has a PDF that possesses both symmetric and asymmetric forms (unimodal, inverse J-shaped, and right-skewed).
  • The KM-GIKw distribution provides a great deal of versatility and contains a plethora of novel and published sub-distributions.
  • The hazard function (HF) of the KM-GIKw distribution can exhibit decreasing and upside-down shapes.
  • The KM-GIKw distribution has a closed-form quantile function (QF); it is easy to compute numerous properties and generate random numbers using it.
  • In the setting of the KM-GIKw distribution, an accurate acceptance sampling plan (ASP) based on the truncated life test can be constructed.
  • The parameters of the KM-GIKw distribution can be estimated quite efficiently using the Bayesian, maximum likelihood (ML), and maximum product of spacings (MPS) methods.
  • In terms of data fitting, thanks to its high level of flexibility, the KM-GIKw distribution can outperform other well-known and comparable distributions (this will be shown using three actual datasets; the model selection results demonstrate that the suggested distribution is the most suitable choice for them).
All these items are developed through the article, with a maximum amount of information and details.
The rest is divided into the following sections: The KM-GIKw distribution is described in Section 2. Section 3 discusses some of its structural aspects. For the truncated life tests based on the KM-GIKw distribution, the ASP and the related numerical work are provided in Section 4. The parameter estimation and a Monte Carlo simulation study are presented in Section 5. Analyses of real-world datasets are given in Section 6. Section 7 summarizes the article and provides its conclusion.

2. The New KM-GIKw Distribution

The CDF of the KM-GIKw distribution is obtained by inserting the CDF of the GIKw distribution in Equation (4) into Equation (1); that is:
F(v) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right), \quad v > 0,
and F(v) = 0 for v \le 0, where \vartheta_1 > 0, \vartheta_2 > 0, and \vartheta_3 > 0 are shape parameters.
To make the parameters explicit, the KM-GIKw distribution can be denoted by KM-GIKw(\vartheta_1, \vartheta_2, \vartheta_3). The PDF associated with Equation (6) is determined by
f(v) = \frac{e}{e-1}\,\vartheta_1\vartheta_2\vartheta_3\, v^{\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1} e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}, \quad v > 0,
and f(v) = 0 for v \le 0.
The survival function (SF) and HF of the KM-GIKw distribution are defined as follows:
\bar{F}(v) = 1 - \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right), \quad v > 0,
and \bar{F}(v) = 1 for v \le 0, and
h(v) = \frac{e\,\vartheta_1\vartheta_2\vartheta_3\, v^{\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1} e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}}{e\, e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} - 1}, \quad v > 0,
and h(v) = 0 for v \le 0, respectively.
Figure 1 displays the PDF and HF of the KM-GIKw distribution for certain parameter values.
This figure shows that the PDF can have symmetric and asymmetric forms. Meanwhile, the HF can be decreasing and upside-down-shaped.
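For readers who wish to reproduce such plots or numerical checks, the CDF, PDF, SF, and HF above can be coded directly. Below is a minimal Python sketch (t1, t2, t3 stand for the shape parameters \vartheta_1, \vartheta_2, \vartheta_3; the naming is ours, and the HF is computed as f(v)/(1 - F(v))):

```python
import math

# Sketch of the main KM-GIKw functions; t1, t2, t3 denote the shape
# parameters; all functions return 0 (or 1 for the SF) for v <= 0.
C = math.e / (math.e - 1.0)

def _a(v, t1, t2, t3):
    # inner exponent [1 - (1 + v^t1)^(-t2)]^t3
    return (1.0 - (1.0 + v**t1) ** (-t2)) ** t3

def cdf(v, t1, t2, t3):
    return C * (1.0 - math.exp(-_a(v, t1, t2, t3))) if v > 0 else 0.0

def pdf(v, t1, t2, t3):
    if v <= 0:
        return 0.0
    core = (t1 * t2 * t3 * v ** (t1 - 1.0)
            * (1.0 + v**t1) ** (-t2 - 1.0)
            * (1.0 - (1.0 + v**t1) ** (-t2)) ** (t3 - 1.0))
    return C * core * math.exp(-_a(v, t1, t2, t3))

def sf(v, t1, t2, t3):
    return 1.0 - cdf(v, t1, t2, t3)

def hf(v, t1, t2, t3):
    # hazard function h(v) = f(v) / (1 - F(v))
    return pdf(v, t1, t2, t3) / sf(v, t1, t2, t3)
```

Evaluating these functions over a grid of v values for several parameter triplets reproduces the shape variety discussed above.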
By introducing an RV Z obtained by transforming V, the following new or existing parsimonious distributions are derived, mainly based on the CDF in Equation (6).
  • For ϑ 1 = 1 , the CDF in Equation (6) provides the KM-IKw distribution as a new sub-distribution.
  • Using the transformation Z = \delta V^{\vartheta_1}, where V has the CDF in Equation (6), Z has the KM-exponentiated Lomax distribution with parameters (\delta, \vartheta_2, \vartheta_3) as a new distribution. Hence, the CDF of Z takes the following form:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + \frac{z}{\delta}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = \log\left(1 + V^{\vartheta_1}\right), where V has the CDF in Equation (6), Z has the KM-exponentiated exponential distribution with parameters (\vartheta_2, \vartheta_3) (see Ref. [31]). Hence, the CDF of Z takes the following form:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left(1 - e^{-\vartheta_2 z}\right)^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0. For \vartheta_3 = 1, Z has the KM-exponential distribution with parameter \vartheta_2 (see Ref. [16]).
  • Using the transformation Z = \frac{1}{\vartheta_1}\left[\log\left(1 + V^{\vartheta_1}\right)\right]^{1/\delta}, where V has the CDF in Equation (6) with \vartheta_2 = 1, Z has the KM-exponentiated Weibull distribution with parameters (\vartheta_3, \vartheta_1, \delta) (see Ref. [21]). Hence, the CDF of Z takes the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left(1 - e^{-(\vartheta_1 z)^{\delta}}\right)^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0. For \vartheta_3 = 1, Z has the KM–Weibull (KM-W) distribution with parameters (\vartheta_1, \delta) (see Ref. [16]).
  • Using the transformation Z = \left[\log\left(1 + V^{\vartheta_1}\right)\right]^{1/2}, where V has the CDF in Equation (6) with \vartheta_3 = 1, Z has the KM–Rayleigh distribution with parameter \vartheta_2 (see Ref. [16]). Hence, the CDF of Z has the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left(1 - e^{-\vartheta_2 z^2}\right)}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = \frac{1}{\vartheta_1}\left[\log\left(1 + V^{\vartheta_1}\right)\right]^{1/2}, where V has the CDF in Equation (6) with \vartheta_2 = 1, Z has the KM–Burr X (KM-BX) distribution with parameters (\vartheta_1, \vartheta_3) (see Ref. [18]). Hence, the CDF of Z has the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left(1 - e^{-(\vartheta_1 z)^2}\right)^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = \delta V, where V has the CDF in Equation (6) with \vartheta_2 = \vartheta_3 = 1, Z has the KM–log-logistic distribution with parameters (\delta, \vartheta_1) (see Ref. [19]). Hence, the CDF of Z has the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + \left(\frac{z}{\delta}\right)^{\vartheta_1}\right)^{-1}\right]}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = V^{-\vartheta_1/\delta}, where V has the CDF in Equation (6) with \vartheta_3 = 1, Z has the KM–Burr III distribution with parameters (\delta, \vartheta_2) as a new distribution. Hence, the CDF of Z has the following structure:
    F(z) = 1 - \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + z^{-\delta}\right)^{-\vartheta_2}\right]}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = V^{-\vartheta_1/\delta}, where V has the CDF in Equation (6), Z has the KM–Kumaraswamy Burr III distribution with parameters (\delta, \vartheta_2, \vartheta_3) as a new distribution. Hence, the CDF of Z has the following structure:
    F(z) = 1 - \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + z^{-\delta}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = V^{\vartheta_1/\delta}, where V has the CDF in Equation (6), Z has the KM-exponentiated Burr XII distribution with parameters (\delta, \vartheta_2, \vartheta_3) as a new distribution. Hence, the CDF of Z has the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + z^{\delta}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.
  • Using the transformation Z = V^{\vartheta_1/\delta}, where V has the CDF in Equation (6) with \vartheta_3 = 1, Z has the KM–Burr XII (KM-BXII) distribution with parameters (\delta, \vartheta_2) as a new distribution. Hence, the CDF of Z has the following structure:
    F(z) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + z^{\delta}\right)^{-\vartheta_2}\right]}\right), \quad z > 0,
    and F(z) = 0 for z \le 0.

3. Statistical Properties

In this section, we examine the statistical features of the KM-GIKw distribution, such as the QF, moments, and entropy measures.

3.1. Quantile Function

Theoretical considerations, statistical applications, and Monte Carlo techniques all involve the QF. The QF of the KM-GIKw distribution is represented by
v_u = Q(u) = \left\{\left[1 - \left(-\log\left(1 - \frac{(e-1)u}{e}\right)\right)^{1/\vartheta_3}\right]^{-1/\vartheta_2} - 1\right\}^{1/\vartheta_1},
where u \in (0,1). For u = 0.5, we obtain the median of the KM-GIKw distribution.
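Because the QF is in closed form, random numbers from the KM-GIKw distribution can be generated by feeding uniform variates into it. The following Python sketch implements the QF by direct inversion of the CDF in Equation (6) (t1, t2, t3 stand for \vartheta_1, \vartheta_2, \vartheta_3; the naming is ours):

```python
import math

# Closed-form quantile function of the KM-GIKw distribution, obtained by
# inverting the CDF; the round trip F(Q(u)) = u serves as a sanity check.

def quantile(u, t1, t2, t3):
    inner = -math.log(1.0 - u * (math.e - 1.0) / math.e)
    return ((1.0 - inner ** (1.0 / t3)) ** (-1.0 / t2) - 1.0) ** (1.0 / t1)

def cdf(v, t1, t2, t3):
    a = (1.0 - (1.0 + v**t1) ** (-t2)) ** t3
    return math.e / (math.e - 1.0) * (1.0 - math.exp(-a))

# Q(0.5) is the median; feeding uniform random numbers into quantile()
# generates KM-GIKw variates (inverse-transform sampling).
median = quantile(0.5, 1.25, 1.25, 1.25)
```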

3.2. Moment Measures

The moments of various orders play a key role in defining the characteristics of the variability of a distribution. Here, we determine the main moment measures for the KM-GIKw distribution. To this aim, hereafter, we consider an RV V having the KM-GIKw distribution. For any non-negative integer q, the qth moment of V is obtained as \mu_q' = E(V^q), so that
\mu_q' = \frac{e\,\vartheta_1\vartheta_2\vartheta_3}{e-1}\int_0^{\infty} v^{q+\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1} e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\, dv.
Let us now investigate its existence as a function of the involved parameters. When v \to 0, we have
v^{q+\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1} e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} \sim \vartheta_2^{\vartheta_3-1}\, v^{q+\vartheta_1\vartheta_3-1},
and, since q \ge 0, \vartheta_1 > 0, and \vartheta_3 > 0, we always have q + \vartheta_1\vartheta_3 - 1 > -1. On the other hand, when v \to \infty, we obtain
v^{q+\vartheta_1-1}\left(1 + v^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1} e^{-\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} \sim e^{-1}\, v^{q-\vartheta_1\vartheta_2-1},
and we have q - \vartheta_1\vartheta_2 - 1 < -1 if and only if q < \vartheta_1\vartheta_2. Hence, according to the Riemann integral convergence rules, the qth moment of V exists if and only if q < \vartheta_1\vartheta_2.
This integral is complicated to simplify, but two complementary options are possible: (i) numerical computation for the fixed values of the parameters and (ii) series expansions that incorporate as much moment information as possible into discrete coefficients. The numerical approach is taken into account throughout the rest of the study.
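Option (i) can be sketched in a few lines. The following Python example evaluates the qth moment integral with `scipy.integrate.quad` under the existence condition q < \vartheta_1\vartheta_2 (t1, t2, t3 stand for \vartheta_1, \vartheta_2, \vartheta_3; the parameter values are illustrative):

```python
import math
from scipy.integrate import quad

# Numerical evaluation of the q-th raw moment of the KM-GIKw
# distribution; valid only when q < t1 * t2 (existence condition).
def raw_moment(q, t1, t2, t3):
    assert q < t1 * t2, "the q-th moment exists only for q < t1*t2"
    c = math.e * t1 * t2 * t3 / (math.e - 1.0)
    def integrand(v):
        w = (1.0 + v**t1) ** (-t2)
        return (v ** (q + t1 - 1.0) * (1.0 + v**t1) ** (-t2 - 1.0)
                * (1.0 - w) ** (t3 - 1.0) * math.exp(-(1.0 - w) ** t3))
    val, _err = quad(integrand, 0.0, math.inf)
    return c * val

mean = raw_moment(1, 2.0, 1.75, 2.5)
variance = raw_moment(2, 2.0, 1.75, 2.5) - mean**2
```

The q = 0 case integrates the PDF itself and returns 1, which is a handy sanity check on the implementation.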
Furthermore, based on these moments, the qth central moment of V is defined by
\mu_q = E\left[\left(V - \mu_1'\right)^q\right] = \sum_{l=0}^{q} (-1)^l \binom{q}{l} \left(\mu_1'\right)^l \mu_{q-l}'.
Diverse moment measures can be obtained based on Equation (10). The main ones are provided in Table 1, considering the first few moments, i.e., variance (var), skewness (SK), kurtosis (KU), coefficient of variation (CV), and index of dispersion (ID) of V. We conclude from this table that
  • As the value of \vartheta_3 increases, at fixed values of \vartheta_1 and \vartheta_2, the values of \mu_1', \mu_2', \mu_3', \mu_4', SK, and KU rise, whereas those of var, CV, and ID decrease.
  • The values of \mu_1', \mu_2', \mu_3', \mu_4', var, SK, KU, CV, and ID decrease when the value of \vartheta_2 increases at fixed values of \vartheta_1 and \vartheta_3 in the majority of situations.
  • The KM-GIKw distribution is positively skewed, as indicated by the values of SK.
  • The KM-GIKw distribution can be platykurtic or leptokurtic, depending on the values of KU.
Figure 2, Figure 3 and Figure 4 provide further trends by displaying the 3D plots of the mean, variance, skewness, and kurtosis of V for various values of ϑ 1 , ϑ 2 , and ϑ 3 . Various non-monotonic shapes are observed, illustrating the versatility of these measures.

3.3. Entropy Measures

In several disciplines, including physics, engineering, and economics, the entropy of an RV is a measure of the variation in uncertainty. As introduced by Rényi in Ref. [32], the Rényi (Ri) entropy is defined as follows:
I(\tau) = \frac{1}{1-\tau}\log\left(\int_{-\infty}^{\infty} f(v)^{\tau}\, dv\right),
with \tau \neq 1, \tau > 0, and where f(v) denotes the PDF of the considered RV. In the precise context of the KM-GIKw distribution, for v > 0, we have
f(v)^{\tau} = \left(\frac{e\,\vartheta_1\vartheta_2\vartheta_3}{e-1}\right)^{\tau} v^{\tau(\vartheta_1-1)}\left(1 + v^{\vartheta_1}\right)^{-\tau(\vartheta_2+1)}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\tau(\vartheta_3-1)} e^{-\tau\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}.
For v 0 , we obviously have f ( v ) τ = 0 . Let us now investigate the existence of 0 f ( v ) τ d v , and the Ri entropy as well, in the function of the involved parameters. When v 0 , we have
v^{\tau(\vartheta_1-1)}\left(1 + v^{\vartheta_1}\right)^{-\tau(\vartheta_2+1)}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\tau(\vartheta_3-1)} e^{-\tau\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} \sim \vartheta_2^{\tau(\vartheta_3-1)}\, v^{\tau(\vartheta_1\vartheta_3-1)}.
Hence, according to the Riemann integral convergence rules, we must have \tau(\vartheta_1\vartheta_3 - 1) > -1. On the other hand, when v \to \infty, we obtain
v^{\tau(\vartheta_1-1)}\left(1 + v^{\vartheta_1}\right)^{-\tau(\vartheta_2+1)}\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\tau(\vartheta_3-1)} e^{-\tau\left[1 - \left(1 + v^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} \sim e^{-\tau}\, v^{-\tau(\vartheta_1\vartheta_2+1)},
Hence, according to the Riemann integral convergence rules, we must have \tau(\vartheta_1\vartheta_2 + 1) > 1. To summarize, the Ri entropy makes mathematical sense if and only if \tau(\vartheta_1\vartheta_3 - 1) > -1 and \tau(\vartheta_1\vartheta_2 + 1) > 1.
Under these conditions, it is challenging to simplify this integral, and there are two complementary approaches that can be used: (i) numerical computation of the integral for fixed values of the parameters, and (ii) series expansions that incorporate as much moment information as possible into discrete coefficients. The first point will be investigated in this study.
Based on the Ri entropy, the Shannon entropy can be derived as
S = \lim_{\tau \to 1} I(\tau) = -\int_{-\infty}^{\infty} f(v)\log f(v)\, dv.
See Ref. [33]. By applying the previous findings, it exists if and only if \vartheta_1\vartheta_3 > 0 and \vartheta_1\vartheta_2 > 0, which is always the case.
On the other hand, the following formula is used to compute the Tsallis entropy of V:
T_{\tau} = \frac{1}{\tau - 1}\left(1 - \int_{-\infty}^{\infty} f(v)^{\tau}\, dv\right),
where \tau \neq 1 and \tau > 0.
The Arimoto entropy is specified by
A_{\tau} = \frac{\tau}{1 - \tau}\left[\left(\int_{-\infty}^{\infty} f(v)^{\tau}\, dv\right)^{1/\tau} - 1\right],
where \tau \neq 1 and \tau > 0.
Also, another famous entropy measure, the Havrda–Charvat entropy, is specified by
HC_{\tau} = \frac{1}{2^{1-\tau} - 1}\left[\int_{-\infty}^{\infty} f(v)^{\tau}\, dv - 1\right],
where \tau \neq 1 and \tau > 0.
The three above entropy measures make sense in the setting of the KM-GIKw distribution if and only if \tau(\vartheta_1\vartheta_3 - 1) > -1 and \tau(\vartheta_1\vartheta_2 + 1) > 1.
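All of these measures are built on the single quantity \int f(v)^{\tau} dv, which can be computed numerically. A Python sketch under the existence conditions above (t1, t2, t3 stand for \vartheta_1, \vartheta_2, \vartheta_3; the library choice and parameter values are illustrative):

```python
import math
from scipy.integrate import quad

# Numerical evaluation of I_f(tau) = integral of f(v)^tau over (0, inf),
# then the Renyi and Tsallis entropies derived from it.
def pdf(v, t1, t2, t3):
    if v <= 0.0:
        return 0.0
    w = (1.0 + v**t1) ** (-t2)
    return (math.e / (math.e - 1.0) * t1 * t2 * t3 * v ** (t1 - 1.0)
            * (1.0 + v**t1) ** (-t2 - 1.0) * (1.0 - w) ** (t3 - 1.0)
            * math.exp(-(1.0 - w) ** t3))

def f_tau_integral(tau, t1, t2, t3):
    val, _err = quad(lambda v: pdf(v, t1, t2, t3) ** tau, 0.0, math.inf)
    return val

def renyi(tau, t1, t2, t3):
    return math.log(f_tau_integral(tau, t1, t2, t3)) / (1.0 - tau)

def tsallis(tau, t1, t2, t3):
    return (1.0 - f_tau_integral(tau, t1, t2, t3)) / (tau - 1.0)
```

The Arimoto and Havrda–Charvat measures follow from the same integral by plugging it into their respective formulas; at tau = 1, the integral reduces to 1, since f is a density.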
Table 2 displays some numerical measures of the introduced entropy measures. We conclude from this table that:
  • All the entropy measures decrease when the value of \tau increases.
  • As the value of ϑ 3 increases, at fixed values of ϑ 1 and ϑ 2 , except at ( ϑ 1 , ϑ 2 ) = ( 3 , 2 ) , we observe that all the entropy measures decrease.
  • As the value of ϑ 2 increases, at fixed values of ϑ 1 and ϑ 3 , we observe that all the entropy measures decrease.

4. Acceptance Sampling Plans

This section is devoted to the construction of an ASP in the context of the KM-GIKw distribution. Hence, we assume that the lifetime of a product can be modeled as an RV with the KM-GIKw(\vartheta_1, \vartheta_2, \vartheta_3) distribution given by the CDF in Equation (6), and that the producer specifies a median lifetime of the units, M_0. The main objective is to determine whether the suggested lot should be approved or disapproved based on the criterion that the units' actual median lifetime, M, is longer than the specified lifetime, M_0. The test is typically terminated after a certain period t, and the number of failures is recorded. In order to assess the median lifetime, the experiment is conducted for t = a M_0 units of time, where a is a positive constant.
The ASP has been the subject of several studies. In particular, a single ASP for the three-parameter inverse Topp–Leone (ITL) and power ITL distributions was found in Refs. [34,35] using the median lifetime of the provided distribution and the truncated life test.
The idea, as described in Ref. [36], is to accept the offered lot based on evidence that M \ge M_0, with a probability of at least p^* (consumer's risk). The ASP proceeds as follows:
  • Pick a sample of n units at random from the indicated lot.
  • Execute the test below for t units of time:
    If c or fewer units fail during the test, where c denotes the acceptance number, accept the entire lot; otherwise, reject the entire lot.
According to the proposed ASP, the probability of accepting a lot is provided by taking into account lots that are large enough to allow for the implementation of the binomial distribution. It is given by
L(\delta) = \sum_{i=0}^{c}\binom{n}{i}\delta^{i}(1-\delta)^{n-i},
where δ = F ( t ) = F t ; ϑ 1 , ϑ 2 , ϑ 3 , as defined by the CDF in Equation (6). The function L ( δ ) represents the sampling plan’s operational characteristic or the acceptance probability of the lot as a function of the failure probability. Additionally, by using the formula t = a M 0 , δ 0 can be expressed as follows:
\delta_0 = F(aM_0; \vartheta_1, \vartheta_2, \vartheta_3) = \frac{e}{e-1}\left(1 - e^{-\left[1 - \left(1 + (aM_0)^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right).
The current problem is to find the lowest positive integer n for given values of p^*, aM_0, and c. Thus, the operating characteristic function can be rewritten as follows:
L(\delta_0) = \sum_{i=0}^{c}\binom{n}{i}\delta_0^{i}\left(1-\delta_0\right)^{n-i} \le 1 - p^*,
where \delta_0 is provided in Equation (13).
The smallest values of n satisfying the inequality in Equation (14), together with the corresponding operating characteristic probabilities, are computed and reported in Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, for the supposed parameters listed below:
  • The consumer risk is assumed as follows: p * = 0.1 , 0.25 , 0.5 , 0.75 , and 0.99 .
  • The acceptance number of each proposed lot is assumed as follows: c = 0 , 2 , 4 , 10 , and 20.
  • The factor of the median lifetime is assumed as a = 0.15, 0.30, 0.60, 0.90, and 1. If a = 1, then t = M_0 and \delta_0 = 0.5 for all values of \vartheta_1, \vartheta_2, and \vartheta_3.
  • Eight cases for parameters of the KM-GIKw distribution are considered, where the following values are assumed: 0.25 , 0.75 , 1.25 , 1.5 , and 2.
The following observations may be drawn based on the information shown in the tables:
  • For the parameters of the ASP: when p^* and c rise, the necessary sample size n rises as well, whereas L(\delta_0) is reduced. As a rises, the necessary n falls, while L(\delta_0) rises.
  • For the parameters of the KM-GIKw distribution: when any of \vartheta_1, \vartheta_2, and \vartheta_3 increases while the other parameters are kept constant, the required n rises, but L(\delta_0) decreases.
Lastly, we verify each of our outcomes: L(\delta_0) \le 1 - p^*. Also, when a = 1, we have \delta_0 = 0.5, as t = M_0, and, hence, all the numerical results (n, L(\delta_0)) are the same for any vector of parameters (\vartheta_1, \vartheta_2, \vartheta_3).
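The search for the smallest n described above can be sketched directly. In the following Python example, M_0 is taken as the median Q(0.5) of the distribution with the preset parameters, so that \delta_0 = F(aM_0); the function names are our own (t1, t2, t3 stand for \vartheta_1, \vartheta_2, \vartheta_3):

```python
import math

# ASP sketch: find the smallest n such that
# sum_{i=0}^{c} C(n, i) * d0^i * (1 - d0)^(n - i) <= 1 - p_star.

def cdf(v, t1, t2, t3):
    a = (1.0 - (1.0 + v**t1) ** (-t2)) ** t3
    return math.e / (math.e - 1.0) * (1.0 - math.exp(-a))

def quantile(u, t1, t2, t3):
    inner = -math.log(1.0 - u * (math.e - 1.0) / math.e)
    return ((1.0 - inner ** (1.0 / t3)) ** (-1.0 / t2) - 1.0) ** (1.0 / t1)

def accept_prob(n, c, d0):
    # binomial acceptance probability L(d0)
    return sum(math.comb(n, i) * d0**i * (1.0 - d0) ** (n - i)
               for i in range(c + 1))

def min_sample_size(p_star, c, a, t1, t2, t3, n_max=2000):
    m0 = quantile(0.5, t1, t2, t3)   # specified median lifetime M0
    d0 = cdf(a * m0, t1, t2, t3)     # failure probability by time t = a*M0
    for n in range(c + 1, n_max + 1):
        L = accept_prob(n, c, d0)
        if L <= 1.0 - p_star:
            return n, L
    raise ValueError("no sample size found up to n_max")
```

For example, with a = 1 (so that \delta_0 = 0.5), p^* = 0.9, and c = 0, the smallest n with 0.5^n \le 0.1 is n = 4, independently of the parameter values, in line with the last observation above.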

5. Methods of Estimation for the Unknown Parameters

In this section, we investigate the estimation of the parameters of the KM-GIKw distribution based on the ML, MPS, and Bayesian methods.

5.1. Maximum Likelihood Estimates

In this part, the ML estimates (MLEs) of the parameters of the KM-GIKw distribution are determined on the basis of complete samples. Thus, let v 1 , , v n be the observed values of an RV with the KM-GIKw distribution. We denote the parameter vector as Ψ ( ϑ 1 , ϑ 2 , ϑ 3 ) T . Let us set f ( v ) = f ( v ; Ψ ) and F ( v ) = F ( v ; Ψ ) . The associated log-likelihood function is provided by
\log L(\Psi) = \sum_{i=1}^{n}\log f(v_i; \Psi) = n - n\log(e-1) + n\log\vartheta_1 + n\log\vartheta_2 + n\log\vartheta_3 + (\vartheta_1 - 1)\sum_{i=1}^{n}\log v_i - (\vartheta_2 + 1)\sum_{i=1}^{n}\log\left(1 + v_i^{\vartheta_1}\right) + (\vartheta_3 - 1)\sum_{i=1}^{n}\log\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right] - \sum_{i=1}^{n}\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}.
The MLEs ϑ ^ 1 , ϑ ^ 2 , and ϑ ^ 3 of ϑ 1 , ϑ 2 , and ϑ 3 , respectively, are obtained by maximizing log L Ψ with respect to Ψ . They can be obtained numerically based on the first partial derivatives of log L Ψ with respect to Ψ . These derivatives are given by
\frac{\partial \log L(\Psi)}{\partial \vartheta_1} = \frac{n}{\vartheta_1} + \sum_{i=1}^{n}\log v_i - (\vartheta_2 + 1)\sum_{i=1}^{n}\frac{v_i^{\vartheta_1}\log v_i}{1 + v_i^{\vartheta_1}} + \vartheta_2(\vartheta_3 - 1)\sum_{i=1}^{n}\frac{v_i^{\vartheta_1}\log v_i \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2-1}}{1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}} - \vartheta_2\vartheta_3\sum_{i=1}^{n} v_i^{\vartheta_1}\log v_i \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2-1}\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1},
\frac{\partial \log L(\Psi)}{\partial \vartheta_2} = \frac{n}{\vartheta_2} - \sum_{i=1}^{n}\log\left(1 + v_i^{\vartheta_1}\right) + (\vartheta_3 - 1)\sum_{i=1}^{n}\frac{\left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\log\left(1 + v_i^{\vartheta_1}\right)}{1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}} - \vartheta_3\sum_{i=1}^{n}\left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\log\left(1 + v_i^{\vartheta_1}\right)\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3-1},
and
\frac{\partial \log L(\Psi)}{\partial \vartheta_3} = \frac{n}{\vartheta_3} + \sum_{i=1}^{n}\log\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right] - \sum_{i=1}^{n}\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}\log\left[1 - \left(1 + v_i^{\vartheta_1}\right)^{-\vartheta_2}\right].
Setting \partial \log L(\Psi)/\partial \vartheta_1 = \partial \log L(\Psi)/\partial \vartheta_2 = \partial \log L(\Psi)/\partial \vartheta_3 = 0 and solving this system of nonlinear equations simultaneously yields the MLEs. It is often more feasible to use nonlinear optimization techniques, such as the quasi-Newton algorithm, to numerically maximize \log L(\Psi).
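The numerical maximization can be sketched as follows. The paper's computations were performed in R; this Python version with `scipy.optimize.minimize` is purely illustrative (data are simulated via the quantile function, and the derivative-free Nelder–Mead method is one possible substitute for a quasi-Newton routine):

```python
import math
import random
from scipy.optimize import minimize

# Illustrative ML fit: simulate KM-GIKw data with the quantile function,
# then minimize the negative log-likelihood numerically.

def quantile(u, t1, t2, t3):
    inner = -math.log(1.0 - u * (math.e - 1.0) / math.e)
    return ((1.0 - inner ** (1.0 / t3)) ** (-1.0 / t2) - 1.0) ** (1.0 / t1)

def neg_loglik(theta, data):
    t1, t2, t3 = theta
    if min(t1, t2, t3) <= 0.0:
        return math.inf  # parameters must be positive
    try:
        n = len(data)
        ll = n - n * math.log(math.e - 1.0) + n * math.log(t1 * t2 * t3)
        for v in data:
            w = (1.0 + v**t1) ** (-t2)
            ll += ((t1 - 1.0) * math.log(v)
                   - (t2 + 1.0) * math.log(1.0 + v**t1)
                   + (t3 - 1.0) * math.log(1.0 - w)
                   - (1.0 - w) ** t3)
        return -ll
    except (OverflowError, ValueError):
        return math.inf

random.seed(1)
true = (1.5, 1.75, 1.5)  # illustrative parameter values
data = [quantile(random.random(), *true) for _ in range(200)]
res = minimize(neg_loglik, x0=[1.0, 1.0, 1.0], args=(data,),
               method="Nelder-Mead")
```

After the fit, `res.x` holds the numerical MLEs of (\vartheta_1, \vartheta_2, \vartheta_3).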

5.2. Maximum Product Spacing Estimates

A good alternative to the ML method is the MPS method, which consists of approximating the Kullback–Leibler information measure. In this method, an ordered sample of size n taken from an RV with the KM-GIKw distribution is considered, say v_{(1)} \le \ldots \le v_{(n)}. Then, the MPS objective function is computed as follows:
\zeta^*(\Psi) = \frac{1}{n+1}\sum_{i=1}^{n+1}\log \Theta_{(i)},
where
\Theta_{(1)} = F(v_{(1)}; \Psi), \quad \Theta_{(i)} = F(v_{(i)}; \Psi) - F(v_{(i-1)}; \Psi), \; i = 2, \ldots, n, \quad \Theta_{(n+1)} = 1 - F(v_{(n)}; \Psi),
with F(v_{(0)}; \Psi) = 0 and F(v_{(n+1)}; \Psi) = 1.
For the KM-GIKw distribution, the logarithm of the geometric mean of the spacings is expressed as follows:
\zeta^*(\Psi) = \frac{1}{n+1}\sum_{i=1}^{n+1}\log\left\{\frac{e}{e-1}\left(e^{-\left[1 - \left(1 + v_{(i-1)}^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}} - e^{-\left[1 - \left(1 + v_{(i)}^{\vartheta_1}\right)^{-\vartheta_2}\right]^{\vartheta_3}}\right)\right\}.
The MPS estimates (MPSEs) ϑ ˜ 1 , ϑ ˜ 2 and ϑ ˜ 3 of ϑ 1 , ϑ 2 and ϑ 3 , respectively, are obtained by maximizing ζ * ( Ψ ) with respect to Ψ . However, the obtained estimates cannot be expressed analytically. As a result, a numerical technique using nonlinear optimization algorithms can be used.
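The MPS objective described above can be sketched in a few lines of Python (t1, t2, t3 stand for \vartheta_1, \vartheta_2, \vartheta_3; the naming is ours):

```python
import math

# Sketch of the MPS objective: the average log-spacing of the ordered
# sample under the KM-GIKw CDF, with F(v_(0)) = 0 and F(v_(n+1)) = 1.

def cdf(v, t1, t2, t3):
    a = (1.0 - (1.0 + v**t1) ** (-t2)) ** t3
    return math.e / (math.e - 1.0) * (1.0 - math.exp(-a))

def mps_objective(theta, data):
    t1, t2, t3 = theta
    x = sorted(data)
    F = [0.0] + [cdf(v, t1, t2, t3) for v in x] + [1.0]
    spacings = [F[i + 1] - F[i] for i in range(len(F) - 1)]  # n + 1 spacings
    return sum(math.log(s) for s in spacings) / (len(x) + 1)
```

Maximizing `mps_objective` over (t1, t2, t3), e.g., with a derivative-free optimizer, yields the MPSEs; since the spacings are probabilities summing to one, the objective is always negative.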

5.3. Bayesian Estimates

This section examines the estimation of the unknown parameters of the KM-GIKw distribution using the Bayesian method. The squared error (SE) loss function (LF) and the linear exponential LF (LELF) are two different types of LFs that may be considered for the Bayesian estimates (BEs). We suggest employing separate gamma priors of parameters ϑ 1 , ϑ 2 , and ϑ 3 with the following PDFs:
\pi_1(\vartheta_1) \propto \vartheta_1^{a_1 - 1} e^{-b_1 \vartheta_1}, \quad \vartheta_1 > 0,\; a_1 > 0,\; b_1 > 0,
\pi_2(\vartheta_2) \propto \vartheta_2^{a_2 - 1} e^{-b_2 \vartheta_2}, \quad \vartheta_2 > 0,\; a_2 > 0,\; b_2 > 0,
\pi_3(\vartheta_3) \propto \vartheta_3^{a_3 - 1} e^{-b_3 \vartheta_3}, \quad \vartheta_3 > 0,\; a_3 > 0,\; b_3 > 0,
where the hyperparameters a s and b s , with s = 1 , 2 , 3 , are picked to represent the prior information of the unknown parameters. The joint prior distribution of Ψ = ( ϑ 1 , ϑ 2 , ϑ 3 ) is given as follows:
π ( Ψ ) = π 1 ( ϑ 1 ) π 2 ( ϑ 2 ) π 3 ( ϑ 3 ) ,
that is
\pi(\Psi) \propto \vartheta_1^{a_1 - 1}\vartheta_2^{a_2 - 1}\vartheta_3^{a_3 - 1} e^{-(b_1\vartheta_1 + b_2\vartheta_2 + b_3\vartheta_3)}.
Given the observed data v = v 1 , v 2 , , v n , the posterior density is provided via the following equation:
\pi(\Psi \mid \mathbf{v}) = \frac{\pi(\Psi)\, L(\Psi)}{\int_{(0,\infty)^3} \pi(\Psi)\, L(\Psi)\, d\Psi}.
Thus, it is expressed as follows:
\pi(\Psi \mid \mathbf{v}) \propto \frac{\vartheta_1^{n + a_1 - 1}\,\vartheta_2^{n + a_2 - 1}\,\vartheta_3^{n + a_3 - 1}\, e^{\,n - (b_1\vartheta_1 + b_2\vartheta_2 + b_3\vartheta_3)}}{(e-1)^n} \prod_{i=1}^{n} v_i^{\vartheta_1 - 1}\, \Omega_i^{-\vartheta_2 - 1}\left(1 - \Omega_i^{-\vartheta_2}\right)^{\vartheta_3 - 1} e^{-\left(1 - \Omega_i^{-\vartheta_2}\right)^{\vartheta_3}},
where Ω i = 1 + v i ϑ 1 . The BE of L ( Ψ ) under the SE loss function, denoted by BE-SELF, is provided via
\hat{L}(\Psi)_{BE\text{-}SELF} = E\left[L(\Psi) \mid \mathbf{v}\right] = \int_{(0,\infty)^3} L(\Psi)\, \pi(\Psi \mid \mathbf{v})\, d\Psi.
On the other hand, the LELF is an asymmetric LF that does not weight under- and overestimation equally. Underestimation can be less desirable than overestimation in a number of real-world scenarios, and vice versa. In these circumstances, the LELF can be proposed as an alternative to the SELF. It is given by the following equation:
L\left(L(\Psi), \hat{L}(\Psi)\right) \propto e^{\tau^*\left(\hat{L}(\Psi) - L(\Psi)\right)} - \tau^*\left(\hat{L}(\Psi) - L(\Psi)\right) - 1,
where \tau^* \neq 0. Here, \tau^* > 0 indicates that an overestimation is more serious than an underestimation, and \tau^* < 0 indicates the opposite. As \tau^* moves closer to zero, the BE-LELF replicates the BE-SELF itself. For further information on this subject, see Refs. [37,38]. The BE of L(\Psi) under this loss can be calculated as follows:
\hat{L}(\Psi)_{BE\text{-}LELF} = -\frac{1}{\tau^*}\log E\left[e^{-\tau^* L(\Psi)} \mid \mathbf{v}\right] = -\frac{1}{\tau^*}\log\left[\int_{(0,\infty)^3} e^{-\tau^* L(\Psi)}\, \pi(\Psi \mid \mathbf{v})\, d\Psi\right].
As can be seen, there is no way to convert the estimates given in Equations (19) and (20) into closed-form expressions. Thus, we use the Metropolis–Hasting (MH) algorithm to create posterior samples and the Markov chain Monte Carlo (MCMC) method to generate the appropriate BEs.

5.4. Markov Chain Monte Carlo

The MCMC technique is a generic simulation technique employed for computing and sampling from posterior values of concern. In fact, the posterior uncertainty regarding the parameters Ψ as well as a kernel estimate of the posterior distribution may both be fully summarized by the MCMC samples (see Ref. [39]).
A discrete-time Markov chain (MC) serves as the foundation for the MCMC technique. An MC is a stochastic process: Ψ ( 0 ) , Ψ ( 1 ) , Ψ ( 2 ) , . There are numerous methods to generate proposals in the MCMC technique, like the MH algorithm.

5.5. MH Algorithm

A proposal distribution and starting values for the unknown parameters \Psi must be specified in order to implement the MH algorithm for the KM-GIKw distribution. To this end, a multivariate normal distribution is considered, that is, N_3(\log \Psi, S_{\Psi}), where S_{\Psi} denotes the variance–covariance matrix (V-CM) of the proposal distribution. Since a normal proposal can generate negative values, the sampling is performed on the log scale. The MLEs of \Psi are taken as the starting values, i.e., \Psi^{(0)} = \hat{\Psi}_{MLE}, and S_{\Psi} is selected as the asymptotic V-CM, i.e., S_{\Psi} = I^{-1}(\hat{\Psi}_{MLE}), where I(\cdot) is the Fisher information matrix. The selection of S_{\Psi} is a key factor in the MH algorithm, as the acceptance rate depends on it. Taking all of this into account, the steps of the MH algorithm for sampling from the posterior density in Equation (18) are as follows:
I. 
Set the initial parameter value to Ψ ( 0 ) = ( ϑ ^ 1 , M L E , ϑ ^ 2 , M L E , ϑ ^ 3 , M L E ) .
II. 
For i = 1 , 2 , … , M , perform the following operations:
II.1:
Set Ψ = Ψ ( i − 1 ) .
II.2:
Generate a new candidate parameter value Ψ ′ from N 3 ( log Ψ , S Ψ ) .
II.3:
Set ξ = e Ψ ′ .
II.4:
Compute β = π ( ξ ∣ v ) / π ( Ψ ∣ v ) , where π ( · ∣ v ) is given in Equation (18).
II.5:
Generate a sample U from the uniform distribution over (0, 1).
II.6:
Accept or reject the new candidate ξ : if U ≤ β , set Ψ ( i ) = ξ ; otherwise, set Ψ ( i ) = Ψ .
Finally, a portion of the M random samples taken from the posterior density may be discarded (burn-in), and the remaining samples used to obtain the BEs. More precisely, based on the MCMC draws Ψ ( i ) = ( ϑ 1 ( i ) , ϑ 2 ( i ) , ϑ 3 ( i ) ) , the BEs under the SELF and LELF can be estimated as follows:
$$\hat{\Psi}_{BE\text{-}SELF} = \frac{1}{M - l_B} \sum_{i = l_B + 1}^{M} \Psi^{(i)},$$
and
$$\hat{\Psi}_{BE\text{-}LELF} = -\frac{1}{\tau^{*}} \log \left[ \frac{1}{M - l_B} \sum_{i = l_B + 1}^{M} e^{-\tau^{*} \Psi^{(i)}} \right],$$
where l B represents the number of burn-in samples.
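The steps above, together with the burn-in and the BE formulas, can be sketched as follows. This is a hedged illustration in Python (the paper used R): the mock log-posterior below stands in for Equation (18), and, because the proposal acts on the log scale, the sketch includes the Hastings (Jacobian) correction in the acceptance ratio so that the chain targets the posterior exactly.

```python
import math
import random

def mh_sampler(log_post, psi0, sds, M=20000, l_b=4000, seed=7):
    """Sketch of the MH steps of Section 5.5: propose on the log scale
    (steps II.2-II.3), accept or reject (steps II.4-II.6), discard l_B
    burn-in draws, and return the BEs under the SELF and LELF."""
    random.seed(seed)
    psi, draws = list(psi0), []
    for _ in range(M):
        # II.2-II.3: candidate xi = exp(psi'), with psi' ~ N(log psi, S_psi)
        xi = [math.exp(math.log(p) + random.gauss(0.0, s))
              for p, s in zip(psi, sds)]
        # II.4: log acceptance ratio; the log-scale proposal requires the
        # Jacobian term sum(log xi) - sum(log psi)
        log_beta = (log_post(xi) - log_post(psi)
                    + sum(map(math.log, xi)) - sum(map(math.log, psi)))
        # II.5-II.6: accept with probability min(1, beta)
        if math.log(random.random()) <= log_beta:
            psi = xi
        draws.append(list(psi))
    kept = draws[l_b:]                         # discard l_B burn-in samples
    n, tau = len(kept), 0.5
    be_self = [sum(d[j] for d in kept) / n for j in range(len(psi0))]
    be_lelf = [-(1.0 / tau)
               * math.log(sum(math.exp(-tau * d[j]) for d in kept) / n)
               for j in range(len(psi0))]
    return be_self, be_lelf

# Stand-in log-posterior (NOT Equation (18)): independent Gamma(2, 1)
# factors for the three parameters, so the sketch runs end to end.
def mock_log_post(psi):
    return sum(math.log(p) - p for p in psi)

be_self, be_lelf = mh_sampler(mock_log_post, [1.0, 1.0, 1.0], [0.3, 0.3, 0.3])
```

With this stand-in posterior, each BE-SELF component settles near the Gamma(2, 1) mean of 2, and each BE-LELF component (τ* > 0) falls below it, as Jensen's inequality predicts.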

5.6. Simulation Study

The aim of this section is to examine the behaviors of the MLE, MPSE, and BE, which were covered in the previous subsections. To evaluate the effectiveness of the suggested estimation methods, an MC analysis is employed. The computation is done using the statistical programming language R. Additionally, the bbmle and BMT packages in R are used to compute the MLEs and MPSEs, respectively.
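For readers unfamiliar with the MPS criterion, the following sketch illustrates it in Python rather than R, with a toy exponential model standing in for the KM-GIKw distribution: the MPSE maximizes the sum of log spacings of the fitted CDF evaluated at the ordered sample.

```python
import math
import random

def mps_objective(data, cdf):
    """MPS criterion: sum of log spacings D_i = F(x_(i)) - F(x_(i-1)),
    with F(x_(0)) = 0 and F(x_(n+1)) = 1, over the ordered sample."""
    u = [0.0] + sorted(cdf(x) for x in data) + [1.0]
    eps = 1e-12                                # guard against zero spacings
    return sum(math.log(max(u[i + 1] - u[i], eps)) for i in range(len(u) - 1))

# Toy model (an exponential stands in for the KM-GIKw distribution):
# the MPSE maximizes the objective, here by a simple grid search.
random.seed(3)
data = [random.expovariate(2.0) for _ in range(400)]       # true rate = 2.0
grid = [0.05 * k for k in range(1, 121)]                   # rates 0.05..6.00
mpse_rate = max(grid,
                key=lambda lam: mps_objective(
                    data, lambda x: 1.0 - math.exp(-lam * x)))
```

In practice, a numerical optimizer replaces the grid search; the packages named above play that role in R.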
Utilizing the recommended estimation techniques, the MC simulation is run. One thousand random datasets are generated from the KM-GIKw distribution under the following assumptions:
  • The target sample sizes for the KM-GIKw distribution are n = 20 , 30 , 50 , 100 , and 200.
  • The parameters of the KM-GIKw distribution are assumed to be
    Case 1:
    ϑ 1 = 1.25 , ϑ 2 = 1.25 , ϑ 3 = 1.25 .
    Case 2:
    ϑ 1 = 1.50 , ϑ 2 = 1.75 , ϑ 3 = 1.50 .
    Case 3:
    ϑ 1 = 2.00 , ϑ 2 = 1.75 , ϑ 3 = 2.50 .
    Case 4:
    ϑ 1 = 2.50 , ϑ 2 = 2.50 , ϑ 3 = 2.50 .
    Case 5:
    ϑ 1 = 2.50 , ϑ 2 = 2.50 , ϑ 3 = 3.50 .
  • Monte Carlo steps:
Step 1:
Generate random data from the KM-GIKw distribution from Equation (8) with ϑ 1 , ϑ 2 , and ϑ 3 , given the sample size n.
Step 2:
Compute the MLEs of ϑ 1 , ϑ 2 , and ϑ 3 using the true value of these parameters as the initial values for solving the normal equations.
Step 3:
Compute the MPSEs of ϑ 1 , ϑ 2 , and ϑ 3 .
Step 4:
The MH method and MCMC technique are used with an informative prior (IP) to determine the BEs. For the IP, assume that
a 1 = 0.5 , b 1 = 1.5 , a 2 = 0.75 , b 2 = 1.75 , a 3 = 0.65 , b 3 = 1.65 .
After that, the estimated values are calculated using these hyperparameters. When utilizing the MH algorithm, the MLEs are used as the initial guess values. From the 10,000 samples created from the posterior density, 2000 burn-in samples are deleted, and the BEs are then derived under two distinct LFs, the SELF and the LELF (at τ* = −0.5 and τ* = 0.5).
Step 5:
Repeat Step 1 to Step 4 a total of 1000 times, saving all estimates.
Step 6:
Compute the following statistical measures of performance for the point estimates, namely the average estimated bias (ABias) and the root mean square error (RMSE). These measures are computed as follows:
$$ABias(\Psi) = \frac{1}{1000} \sum_{l=1}^{1000} \left( \hat{\Psi}_{l} - \Psi \right), \qquad RMSE(\Psi) = \sqrt{\frac{1}{1000} \sum_{l=1}^{1000} \left( \hat{\Psi}_{l} - \Psi \right)^{2}},$$
where Ψ refers to the parameter vector, Ψ ^ refers to the estimated value of the given parameter, and l indicates the number of the considered sample.
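The replication logic of Steps 1, 2, and 6 can be sketched in a few lines. In this hedged illustration, an exponential model and its MLE are stand-ins for the KM-GIKw fits; only the bookkeeping of the Monte Carlo loop is being demonstrated.

```python
import math
import random

def abias_rmse(estimates, true_value):
    """Step 6: ABias and RMSE over the saved replicated estimates."""
    n = len(estimates)
    abias = sum(e - true_value for e in estimates) / n
    rmse = math.sqrt(sum((e - true_value) ** 2 for e in estimates) / n)
    return abias, rmse

# Stand-in experiment: an exponential model replaces the KM-GIKw fits.
random.seed(11)
lam_true, n = 2.0, 50
estimates = []
for _ in range(1000):                                       # Step 5: 1000 reps
    sample = [random.expovariate(lam_true) for _ in range(n)]   # Step 1
    estimates.append(n / sum(sample))                       # Step 2: MLE = 1/mean
abias, rmse = abias_rmse(estimates, lam_true)               # Step 6
```

As the sample size n grows, both measures shrink toward zero, which is the pattern reported in the tables below.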
Hence, point estimates are obtained under all of the considered methods.
All the results of the Monte Carlo simulation for each case of the given parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) are reported in Table 11, Table 12, Table 13, Table 14 and Table 15, respectively. From these tabulated values, one can note the following:
  • As n increases, the ABiases tend to zero and the RMSEs decrease.
  • The ABiases and RMSEs of the parameters ϑ 1 and ϑ 2 for the MLEs have larger magnitudes than those of the MPSEs. However, for ϑ 3 , the MPSEs have larger magnitudes than the MLEs.
  • For the BEs, one can order the RMSEs as follows: RMSE (BE-LELF at τ* = 0.5) < RMSE (BE-SELF) < RMSE (BE-LELF at τ* = −0.5).

6. Real Data Analysis

In this section, we present applications to two datasets to demonstrate the usefulness and adaptability of the KM-GIKw distribution. More precisely, adopting the modeling viewpoint, the KM-GIKw model is fitted to these two datasets and compared with the three-parameter GIKw model as well as several two-parameter models, including the KM–Lomax (KM-L), Lomax (L), KM–Burr XII (KM-BXII), BXII, KM–Burr X (KM-BX), BX, KM–Weibull (KM-W), and Weibull (W) models. To decide on the best-fitting competing model, we compute two important criteria, namely the Kolmogorov–Smirnov statistic (KS) and its p-value (PVKS). The model with the lowest KS value and the largest PVKS value is considered the best. Table 16 displays several statistical measures for the two datasets.
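The KS-based comparison can be sketched as follows. The two candidate models below (an exponential fit versus a deliberately poor uniform fit) are stand-ins for the paper's competitors; the decision rule is the same: prefer the model with the smaller KS distance.

```python
import math
import random

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the ECDF and a fitted CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Stand-in comparison on simulated lifetimes.
random.seed(5)
data = [random.expovariate(1.5) for _ in range(60)]
lam_hat = len(data) / sum(data)                      # exponential MLE

ks_exp = ks_statistic(data, lambda x: 1.0 - math.exp(-lam_hat * x))
ks_unif = ks_statistic(data, lambda x: min(x / max(data), 1.0))
best = "exponential" if ks_exp < ks_unif else "uniform"
```

In the tables below, the same comparison is made with the KS value supplemented by its p-value (PVKS), computed by the fitting software.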

6.1. The First Dataset

The first dataset consists of a sample of 30 failure times of an airplane’s air-conditioning system. It was provided in Ref. [40]. The data are reported in Table 17.

6.2. The Second Dataset

The second dataset consists of the relief times of twenty patients receiving an analgesic introduced in Ref. [41]. The data are described in Table 18.
Table 19 and Table 20 show the numerical results of the MLEs, standard errors (SEs), KS, and PVKS for both datasets. From these tables, we can note that the KM-GIKw model has the lowest value of KS and the largest value of PVKS for both datasets. Then, we can conclude that the KM-GIKw model gives the best fit. Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 display the plots of the empirical CDF (ECDF), empirical PDF (EPDF), and probability–probability (PP) plots for both datasets. They support the numerical values in Table 19 and Table 20; visually, the KM-GIKw model provides the best fitting results. Table 21 and Table 22 show the estimates for the KM-GIKw model by using different methods of estimation.

7. Discussion and Conclusions

This study combines the generalized inverse Kumaraswamy distribution with the Kavya–Manoharan-G family to produce a new three-parameter distribution known as the Kavya–Manoharan generalized inverse Kumaraswamy distribution, abbreviated as KM-GIKw. The corresponding PDF takes a wide range of forms, which increases its versatility in fitting a variety of data types. This was illustrated with dedicated graphics. The corresponding HF is decreasing or upside-down shaped. Some fundamental mathematical characteristics of the KM-GIKw distribution were derived, including the Rényi, Tsallis, Arimoto, and Havrda and Charvat entropy measures. On the other hand, an ASP was constructed using the KM-GIKw distribution when the life test was terminated at the median lifetime of the suggested distribution. The required sample size was determined using a variety of truncation periods, different parameter values of the proposed distribution, and degrees of consumer risk. Additionally, it was determined that the probability of acceptance at the obtained sample sizes must be less than or equal to the complement of the consumer's risk. In the statistical part, the model parameters were estimated using the ML, MPS, and Bayesian estimation methods. Based on these different methods, a simulation study examined the performance of the parameter estimates. The adaptability and possibilities of the KM-GIKw model were then illustrated through real-world data applications. It was shown that it can provide a better fit than well-established competing lifetime models.

Author Contributions

Conceptualization, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; methodology, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; software, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; validation, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; formal analysis, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; investigation, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; writing—original draft preparation, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; writing—review and editing, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; visualization, N.A., A.S.H., M.E., C.C. and A.R.E.-S.; funding acquisition, N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the reported results are given in the text, and the associated references are given.

Acknowledgments

Researchers Supporting Project number (RSPD2023R548), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Marshall, A.; Olkin, I. A new method for adding a parameter to a family of distributions with applications to the exponential and Weibull families. Biometrika 1997, 84, 641–652.
  2. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to Gamma and Weibull distributions. Biom. J. 2001, 43, 117–130.
  3. Eugene, N.; Lee, C.; Famoye, F. Beta-normal distribution and its applications. Commun. Stat. Theory Methods 2002, 31, 497–512.
  4. Cordeiro, G.M.; de Castro, M. A new family of generalized distributions. J. Stat. Comput. Simul. 2011, 81, 883–898.
  5. Alzaatreh, A.; Lee, C.; Famoye, F. A new method for generating families of continuous distributions. Metron 2013, 71, 63–79.
  6. Chesneau, C.; Djibrila, S. The generalized odd inverted exponential-G family of distributions: Properties and applications. Eurasian Bull. Math. 2019, 2, 86–110.
  7. Alizadeh, M.; Afify, A.Z.; Eliwa, M.S.; Ali, S. The odd log-logistic Lindley-G family of distributions: Properties, Bayesian and non-Bayesian estimation with applications. Comput. Stat. 2020, 35, 281–308.
  8. Shaw, W.T.; Buckley, I.R. The alchemy of probability distributions: Beyond Gram–Charlier expansions, and a skew-kurtotic-normal distribution from a rank transmutation map. arXiv 2009, arXiv:0901.0434.
  9. Reyes, J.; Iriarte, Y.A. A New Family of Modified Slash Distributions with Applications. Mathematics 2023, 11, 3018.
  10. Gillariose, J.; Balogun, O.S.; Almetwally, E.M.; Sherwani, R.A.K.; Jamal, F.; Joseph, J. On the Discrete Weibull Marshall–Olkin Family of Distributions: Properties, Characterizations, and Applications. Axioms 2021, 10, 287.
  11. Liu, B.; Ananda, M.M.A. A Generalized Family of Exponentiated Composite Distributions. Mathematics 2022, 10, 1895.
  12. Kharazmi, O.; Alizadeh, M.; Contreras-Reyes, J.E.; Haghbin, H. Arctan-Based Family of Distributions: Properties, Survival Regression, Bayesian Analysis and Applications. Axioms 2022, 11, 399.
  13. Kumar, D.; Singh, U.; Singh, S.K. A method of proposing new distribution and its application to bladder cancer patient data. J. Stat. Probab. Lett. 2015, 2, 235–245.
  14. Kumar, D.; Singh, U.; Singh, S.K. The new distribution using sine function—Its application to bladder cancer patients data. J. Stat. Appl. Probab. 2015, 4, 417–427.
  15. Maurya, S.K.; Kaushik, A.; Singh, R.K.; Singh, U. A new class of distribution having decreasing, increasing, and bathtub-shaped failure rate. Commun. Stat. Theory Methods 2017, 46, 10359–10372.
  16. Kavya, P.; Manoharan, M. Some parsimonious models for lifetimes and applications. J. Stat. Comput. Simul. 2021, 91, 3693–3708.
  17. Alotaibi, N.; Hashem, A.F.; Elbatal, I.; Alyami, S.A.; Al-Moisheer, A.S.; Elgarhy, M. Inference for a Kavya–Manoharan inverse length biased exponential distribution under progressive-stress model based on progressive type-II censoring. Entropy 2022, 24, 1033.
  18. Hassan, O.H.M.; Elbatal, I.; Al-Nefaie, A.H.; Elgarhy, M. On the Kavya–Manoharan–Burr X model: Estimations under ranked set sampling and applications. J. Risk Financ. Manag. 2023, 16, 19.
  19. Al-Nefaie, A.H. Applications to bio-medical data and statistical inference for a Kavya–Manoharan log-logistic model. J. Radiat. Res. Appl. Sci. 2023, 16, 100523.
  20. Alotaibi, N.; Elbatal, I.; Shrahili, M.; Al-Moisheer, A.S.; Elgarhy, M.; Almetwally, E.M. Statistical inference for the Kavya–Manoharan Kumaraswamy model under ranked set sampling with applications. Symmetry 2023, 15, 587.
  21. Alotaibi, N.; Elbatal, I.; Almetwally, E.M.; Alyami, S.A.; Al-Moisheer, A.S.; Elgarhy, M. Bivariate step-stress accelerated life tests for the Kavya–Manoharan exponentiated Weibull model under progressive censoring with applications. Symmetry 2022, 14, 1791.
  22. Dubey, S.D. Compound Gamma, Beta and F distributions. Metrika 1970, 16, 27–31.
  23. Voda, V.G. On the inverse Rayleigh distributed random variable. Rep. Stat. Appl. Res. 1972, 19, 13–21.
  24. Folks, J.L.; Chhikara, R.S. The inverse Gaussian distribution and its statistical application—A review. J. R. Stat. Soc. Ser. B (Methodol.) 1978, 40, 263–289.
  25. Calabria, R.; Pulcini, G. On the maximum likelihood and least-squares estimation in the inverse Weibull distribution. Stat. Appl. 1990, 2, 53–66.
  26. Sharma, V.K.; Singh, S.K.; Singh, U.; Agiwal, V. The inverse Lindley distribution: A stress-strength reliability model with application to head and neck cancer data. J. Ind. Prod. Eng. 2015, 32, 162–173.
  27. Barco, K.V.P.; Mazucheli, J.; Janeiro, V. The inverse power Lindley distribution. Commun. Stat. Simul. Comput. 2017, 46, 6308–6323.
  28. Tahir, M.H.; Cordeiro, G.M.; Ali, S.; Dey, S.; Manzoor, A. The inverted Nadarajah–Haghighi distribution: Estimation methods and applications. J. Stat. Comput. Simul. 2018, 88, 2775–2798.
  29. Abd AL-Fattah, A.M.; El-Helbawy, A.A.; Al-Dayian, G.R. Inverted Kumaraswamy distribution: Properties and estimation. Pak. J. Stat. 2017, 33, 37–61.
  30. Iqbal, Z.; Tahir, M.M.; Riaz, N.; Ali, S.A.; Ahmad, M. Generalized inverted Kumaraswamy distribution: Properties and application. Open J. Stat. 2017, 7, 645–662.
  31. Abdelwahab, M.M.; Ghorbal, A.B.; Hassan, A.S.; Elgarhy, M.; Almetwally, E.M.; Hashem, A.F. Classical and Bayesian inference for the Kavya–Manoharan generalized exponential distribution under generalized progressively hybrid censored data. Symmetry 2023, 15, 1193.
  32. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20–30 June 1960; pp. 547–561.
  33. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  34. Nassr, S.G.; Hassan, A.S.; Alsultan, R.; El-Saeed, A.R. Acceptance sampling plans for the three-parameter inverted Topp–Leone model. Math. Biosci. Eng. 2022, 19, 13628–13659.
  35. Abushal, T.A.; Hassan, A.S.; El-Saeed, A.R.; Nassr, S.G. Power inverted Topp–Leone in acceptance sampling plans. Comput. Mater. Contin. 2021, 67, 991–1011.
  36. Singh, S.; Tripathi, Y.M. Acceptance sampling plans for inverse Weibull distribution based on truncated life test. Life Cycle Reliab. Saf. Eng. 2017, 6, 169–178.
  37. Varian, H.R. A Bayesian approach to real estate assessment. In Variants in Economic Theory: Selected Works of H. R. Varian; Varian, H.R., Ed.; Edward Elgar Publishing: Cheltenham, UK, 2000; pp. 144–155.
  38. Doostparast, M.; Akbari, M.G.; Balakrishnan, N. Bayesian analysis for the two-parameter Pareto distribution based on record values and times. J. Stat. Comput. Simul. 2011, 81, 1393–1403.
  39. van Ravenzwaaij, D.; Cassey, P.; Brown, S.D. A simple introduction to Markov chain Monte-Carlo sampling. Psychon. Bull. Rev. 2018, 25, 143–154.
  40. Linhart, H.; Zucchini, W. Model Selection; Wiley: New York, NY, USA, 1986.
  41. Gross, A.J.; Clark, V.A. Survival Distributions: Reliability Applications in the Biomedical Sciences; John Wiley and Sons: New York, NY, USA, 1975.
Figure 1. Selected plots of (a) f ( v ) and (b) h ( v ) .
Figure 2. The 3D plots of the mean (light green), variance (dark green), skewness (dark pink), and kurtosis (dark blue) associated with the KM-GIKw distribution at ϑ 3 = 15.0.
Figure 3. The 3D plots of the mean (light green), variance (dark green), skewness (dark pink), and kurtosis (dark blue) associated with the KM-GIKw distribution at ϑ 2 = 2.5.
Figure 4. The 3D plots of the mean (light green), variance (dark green), skewness (dark pink), and kurtosis (dark blue) associated with the KM-GIKw distribution at ϑ 1 = 3.0.
Figure 5. ECDF plots of all the competitive models for the first dataset.
Figure 6. EPDF plots of all the competitive models for the first dataset.
Figure 7. PP plots of all the competitive models for the first dataset.
Figure 8. ECDF plots of all the competitive models for the second dataset.
Figure 9. EPDF plots of all the competitive models for the second dataset.
Figure 10. PP plots of all the competitive models for the second dataset.
Table 1. Numerical values of certain moments associated with the KM-GIKw distribution.
ϑ1   ϑ2   ϑ3   μ1     μ2     μ3     μ4     Var    SK     KU      CV     ID
3.0  2.0  2.0  0.915  0.969  1.222  1.955  0.139  1.938  14.148  0.398  0.145
3.0  2.0  3.0  1.036  1.211  1.645  2.778  0.137  2.101  15.927  0.357  0.132
3.0  2.0  4.0  1.123  1.402  2.011  3.539  0.132  2.211  17.162  0.336  0.126
3.0  2.0  5.0  1.190  1.563  2.337  4.252  0.127  2.289  18.081  0.322  0.123
3.0  3.0  2.0  0.759  0.644  0.613  0.660  0.068  1.172  6.412   0.344  0.090
3.0  3.0  3.0  0.851  0.790  0.804  0.911  0.065  1.279  7.035   0.300  0.077
3.0  3.0  4.0  0.915  0.901  0.963  1.132  0.064  1.358  7.495   0.276  0.070
3.0  3.0  5.0  0.963  0.991  1.099  1.331  0.063  1.418  7.852   0.261  0.066
3.0  4.0  2.0  0.673  0.500  0.409  0.368  0.047  0.897  4.896   0.322  0.070
3.0  4.0  3.0  0.751  0.608  0.530  0.501  0.043  0.982  5.279   0.277  0.058
3.0  4.0  4.0  0.804  0.688  0.629  0.616  0.041  1.047  5.566   0.252  0.051
3.0  4.0  5.0  0.845  0.753  0.712  0.718  0.040  1.099  5.794   0.236  0.047
5.0  2.0  2.0  0.932  0.914  0.946  1.039  0.045  1.017  6.036   0.228  0.048
5.0  2.0  3.0  1.008  1.058  1.161  1.341  0.042  1.198  6.844   0.203  0.042
5.0  2.0  4.0  1.059  1.163  1.328  1.589  0.040  1.313  7.409   0.190  0.038
5.0  2.0  5.0  1.098  1.246  1.467  1.803  0.040  1.394  7.832   0.181  0.036
5.0  3.0  2.0  0.836  0.728  0.660  0.623  0.029  0.602  4.199   0.203  0.034
5.0  3.0  3.0  0.899  0.832  0.796  0.786  0.025  0.750  4.606   0.175  0.028
5.0  3.0  4.0  0.940  0.906  0.898  0.915  0.023  0.850  4.911   0.161  0.024
5.0  3.0  5.0  0.970  0.963  0.980  1.023  0.022  0.923  5.150   0.152  0.022
5.0  4.0  2.0  0.779  0.629  0.527  0.456  0.022  0.423  3.680   0.192  0.029
5.0  4.0  3.0  0.835  0.716  0.630  0.570  0.019  0.554  3.952   0.164  0.022
5.0  4.0  4.0  0.871  0.776  0.707  0.658  0.017  0.645  4.159   0.148  0.019
5.0  4.0  5.0  0.898  0.822  0.767  0.731  0.015  0.712  4.327   0.139  0.017
Table 2. Some numerical values of the considered entropy measures.
ϑ1   ϑ2   ϑ3    τ = 0.5                          τ = 0.8                          τ = 1.2
                I       T       A       HC       I       T       A       HC       I       T       A       HC
3.0  2.0  2.0   0.265   0.714   0.842   2.033    0.153   0.366   0.369   0.621    0.083   0.187   0.188   0.241
3.0  2.0  3.0   0.271   0.733   0.868   2.095    0.155   0.369   0.372   0.626    0.081   0.184   0.184   0.237
3.0  2.0  4.0   0.277   0.751   0.891   2.152    0.157   0.375   0.379   0.637    0.082   0.186   0.187   0.240
3.0  2.0  5.0   0.282   0.767   0.914   2.207    0.160   0.384   0.387   0.651    0.085   0.191   0.192   0.247
3.0  3.0  2.0   0.122   0.302   0.325   0.785    0.033   0.077   0.077   0.130   −0.027  −0.063  −0.063  −0.081
3.0  3.0  3.0   0.114   0.281   0.301   0.726    0.021   0.048   0.048   0.080   −0.042  −0.098  −0.098  −0.126
3.0  3.0  4.0   0.109   0.267   0.285   0.688    0.013   0.030   0.030   0.051   −0.051  −0.120  −0.119  −0.154
3.0  3.0  5.0   0.106   0.259   0.276   0.666    0.008   0.019   0.019   0.031   −0.057  −0.133  −0.133  −0.171
3.0  4.0  2.0   0.040   0.093   0.096   0.231   −0.039  −0.090  −0.090  −0.151   −0.095  −0.224  −0.223  −0.287
3.0  4.0  3.0   0.024   0.056   0.057   0.138   −0.059  −0.134  −0.134  −0.225   −0.117  −0.277  −0.276  −0.355
3.0  4.0  4.0   0.013   0.031   0.031   0.076   −0.072  −0.163  −0.163  −0.274   −0.132  −0.312  −0.311  −0.400
3.0  4.0  5.0   0.006   0.013   0.013   0.032   −0.081  −0.184  −0.183  −0.308   −0.142  −0.337  −0.335  −0.432
5.0  2.0  2.0   0.045   0.106   0.108   0.262   −0.046  −0.106  −0.105  −0.177   −0.108  −0.256  −0.255  −0.328
5.0  2.0  3.0   0.025   0.059   0.060   0.145   −0.070  −0.158  −0.158  −0.265   −0.134  −0.318  −0.316  −0.407
5.0  2.0  4.0   0.015   0.035   0.035   0.084   −0.083  −0.186  −0.186  −0.312   −0.148  −0.352  −0.350  −0.451
5.0  2.0  5.0   0.009   0.020   0.020   0.049   −0.090  −0.203  −0.202  −0.340   −0.156  −0.373  −0.371  −0.478
5.0  3.0  2.0  −0.056  −0.125  −0.121  −0.292   −0.134  −0.299  −0.297  −0.499   −0.189  −0.455  −0.452  −0.582
5.0  3.0  3.0  −0.088  −0.193  −0.183  −0.442   −0.169  −0.375  −0.372  −0.625   −0.227  −0.550  −0.545  −0.702
5.0  3.0  4.0  −0.108  −0.233  −0.219  −0.530   −0.191  −0.421  −0.416  −0.700   −0.249  −0.608  −0.602  −0.776
5.0  3.0  5.0  −0.121  −0.260  −0.243  −0.587   −0.206  −0.452  −0.447  −0.751   −0.265  −0.648  −0.642  −0.826
5.0  4.0  2.0  −0.114  −0.245  −0.230  −0.556   −0.186  −0.410  −0.406  −0.682   −0.238  −0.579  −0.574  −0.739
5.0  4.0  3.0  −0.152  −0.322  −0.296  −0.714   −0.227  −0.497  −0.491  −0.825   −0.281  −0.692  −0.684  −0.881
5.0  4.0  4.0  −0.177  −0.368  −0.334  −0.807   −0.254  −0.551  −0.543  −0.914   −0.309  −0.764  −0.755  −0.972
5.0  4.0  5.0  −0.194  −0.400  −0.360  −0.870   −0.272  −0.589  −0.580  −0.975   −0.328  −0.815  −0.804  −1.036
Table 3. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters: ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 0.25 , 0.25 , 0.25 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 11.000011.000011.000011.000011.0000
2 31.000031.000031.000031.000031.0000
4 70.918170.904860.968560.965260.9643
10 170.9351170.9164160.9399160.9302160.9274
20 360.9191350.9174340.9171330.9298330.9258
0.250 11.000011.000011.000011.000011.0000
2 40.897740.886540.874340.866740.8647
4 80.823080.798680.772080.755380.7508
10 200.7712190.8000190.7573180.8104180.8044
20 400.7662390.7526370.7944370.7586360.8036
0.500 20.532320.515911.000011.000011.0000
2 60.560460.529750.686250.671150.6671
4 100.5791100.539090.634890.612890.6070
10 230.5377220.5585210.5849210.5495210.5402
20 440.5486430.5211410.5582410.5079400.5599
0.750 30.283430.266120.499120.489220.4866
2 80.282980.253470.342170.323770.3189
4 130.2623120.3113120.2724120.2505110.3444
10 270.2583260.2614250.2677240.3013240.2926
20 490.2875480.2558460.2717450.2755450.2641
0.990 80.012170.018870.015570.013770.0133
2 150.0123140.0150140.0110130.0160130.0153
4 210.0128200.0137190.0152190.0122190.0115
10 380.0113360.0136350.0118340.0128340.0119
20 640.0111610.0130590.0119580.0109570.0134
Table 4. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 0.25 , 0.25 , 0.75 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 11.000011.000011.000011.000011.0000
2 40.948140.927840.900531.000031.0000
4 80.927670.950970.921360.971160.9687
10 210.9183190.9235170.9393170.9057160.9407
20 440.9189400.9159360.9265350.9017340.9186
0.250 11.000011.000011.000011.000011.0000
2 50.850550.801340.900540.880640.8750
4 100.787390.799880.829080.785880.7734
10 240.7978220.7824200.7825190.7796190.7596
20 490.7823440.7895400.7823380.7734370.7974
0.500 20.627020.583520.536620.507611.0000
2 70.600370.510260.568360.514350.6874
4 130.5163110.5913100.5894100.518890.6366
10 290.5156260.5189230.5537220.5282210.5880
20 560.5034500.5141450.5147420.5392410.5625
0.750 30.393230.340530.287930.257720.5000
2 100.286190.282080.290870.358270.3437
4 160.2857140.3088130.2721120.2919120.2743
10 340.2609300.2790270.2727250.2962250.2705
20 620.2783560.2569500.2645470.2635460.2756
0.990 100.015090.013480.012870.017170.0156
2 200.0103170.0133150.0133140.0129140.0112
4 270.0137240.0129210.0141200.0114190.0154
10 490.0113430.0125380.0129360.0106350.0121
20 820.0112730.0105650.0101600.0125590.0124
Table 5. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 0.25 , 0.75 , 0.25 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 11.000011.000011.000011.000011.0000
2 40.921540.905931.000031.000031.0000
4 70.944570.927570.907460.969960.9687
10 190.9076180.9111170.9202170.9005160.9408
20 390.9166370.9161350.9227340.9251340.9186
0.250 11.000011.000011.000011.000011.0000
2 50.786850.752040.888740.877940.8750
4 90.779780.840780.803380.779880.7734
10 210.8097200.8043190.8073190.7699190.7597
20 430.7844410.7686390.7646380.7593370.7975
0.500 20.571820.545220.519020.503911.0000
2 60.632860.584360.535560.507350.6875
4 110.5611100.6100100.5466100.509590.6367
10 250.5406240.5091220.5698220.5144210.5881
20 480.5469460.5058430.5371420.5199410.5627
0.750 30.327030.297230.269330.253920.5000
2 90.259480.307180.258870.351170.3437
4 140.2791130.2926120.3187120.2832120.2744
10 290.2874270.3028260.2716250.2835250.2706
20 540.2730510.2633480.2696460.2935460.2757
0.990 90.011480.014380.010170.016470.0156
2 170.0105150.0156140.0159140.0121140.0112
4 230.0141220.0113200.0147200.0105190.0154
10 420.0113390.0122370.0105350.0136350.0122
20 710.0100660.0111620.0111600.0107590.0124
Table 6. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 0.25 , 0.75 , 0.75 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 11.000011.000011.000011.000011.0000
2 50.928040.952740.913531.000031.0000
4 100.919780.936270.935970.901260.9688
10 270.9094220.9047180.9269170.9111160.9408
20 570.9113460.9037380.9141350.9096340.9186
0.250 11.000011.000011.000011.000011.0000
2 70.773550.862150.768840.883640.8750
4 130.7649100.808090.754380.792280.7734
10 320.7554250.7824210.7727190.7898190.7597
20 640.7702510.7634420.7724380.7881370.7975
0.500 30.512820.638320.557820.511720.5000
2 100.506380.505660.607460.521960.5000
4 170.5071130.5492110.5246100.5287100.5000
10 380.5099300.5090240.5579220.5432220.5000
20 730.5136570.5324470.5209420.5599420.5000
0.750 50.262940.260130.311130.261830.2500
2 130.2933100.310780.331770.365970.3438
4 210.2885170.2568130.3239120.3014120.2744
10 450.2573350.2642280.2906250.3102250.2706
20 830.2511640.2773520.2827470.2819460.2757
0.990 140.0130110.011280.016870.017970.0156
2 270.0106200.0132160.0124140.0139140.0112
4 370.0121280.0135230.0101200.0125190.0154
10 660.0110510.0104400.0132360.0120350.0122
20 1100.0109850.0107680.0114610.0110590.0124
Table 7. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 0.75 , 0.75 , 0.75 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 11.000011.000011.000011.000011.0000
2 60.912450.901240.926131.000031.0000
4 120.909090.925970.949270.905660.9688
10 320.9112240.9185190.9194170.9175160.9408
20 690.9021510.9141400.9093350.9190340.9186
0.250 20.765711.000011.000011.000011.0000
2 80.788360.810850.797440.887240.8750
4 150.7861110.815290.794480.800080.7734
10 380.7666280.7846220.7734190.8021190.7597
20 780.7503570.7770440.7768390.7561370.7975
0.500 30.586320.680920.580320.516820.5000
2 120.504890.503770.503660.531460.5000
4 200.5302150.5218110.5831100.5412100.5000
10 460.5067340.5049260.5058220.5618220.5000
20 880.5209650.5151490.5445430.5258420.5000
0.750 60.263240.315730.336830.267130.2500
2 160.2813120.266190.275780.255070.3438
4 260.2698190.2719140.3005120.3135120.2744
10 540.2733400.2563300.2673260.2644250.2706
20 1000.2656730.2693550.2774480.2598460.2757
0.990 180.0107120.014690.012970.019070.0156
2 330.0112230.0132170.0125140.0153140.0112
4 460.0110330.0108240.0119200.0140190.0154
10 810.0110580.0115430.0112360.0140350.0122
20 1350.0105970.0112720.0115620.0102590.0124
Table 8. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 1.25 , 1.25 , 1.25 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 20.944111.000011.000011.000011.0000
2 210.902290.904140.967040.901731.0000
4 450.9025180.913590.924370.922760.9687
10 1280.9007510.9025240.9157180.9017160.9408
20 2780.90171100.9002510.9099370.9029340.9186
0.250 60.750120.855711.000011.000011.0000
2 320.7509130.755260.808440.901740.8750
4 610.7565240.7696110.811980.831780.7734
10 1550.7562610.7582280.7788200.7875190.7596
20 3190.75401250.7527570.7685400.7894370.7975
0.500 130.501650.536120.679220.538511.0000
2 480.5074190.507480.599660.571950.6875
4 840.5027330.5009150.5162100.594090.6367
10 1910.5042740.5114330.5445230.5609210.5881
20 3700.50271430.5122650.5033450.5249410.5626
0.750 250.251690.287440.313430.290020.5000
2 700.2517270.2544120.262180.294470.3437
4 1120.2510430.2560190.2668130.2767120.2744
10 2320.2523890.2590390.2836270.2793250.2706
20 4260.25171640.2550730.2591500.2734460.2757
0.990 810.0101300.0109120.014280.013170.0156
2 1480.0101550.0112230.0127150.0138140.0112
4 2040.0103770.0106330.0103210.0147190.0154
10 3560.01011350.0103580.0107380.0137350.0121
20 5860.01022230.0104970.0102650.0109590.0124
Table 9. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 1.5 , 1.5 , 1.5 ) for selected values of p * , c, and a.
p*    c     a = 0.15     a = 0.30     a = 0.60     a = 0.90     a = 1
            n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)    n   L(δ0)
0.100 60.906720.919611.000011.000011.0000
2 580.9010150.903150.939540.909931.0000
4 1270.9004320.9008110.901370.931960.9687
10 3640.9004890.9052290.9004180.9195160.9408
20 7950.90091940.9029610.9043380.9017340.9186
0.250 150.760140.777711.000011.000011.0000
2 900.7508220.764170.803950.760740.8750
4 1740.7533430.7537140.751280.849280.7734
10 4450.75191080.7589340.7550210.7555190.7597
20 9170.75012230.7503680.7715410.7931370.7975
0.500 360.503690.511530.538320.551711.0000
2 1380.5022330.5194100.555060.596250.6875
4 2410.5013580.5121180.5116110.508790.6367
10 5500.50101330.5040400.5277240.5342210.5881
20 10650.50112570.5058780.5074460.5407410.5627
0.750 710.2537170.261750.289730.304320.5000
2 2020.2502480.2604140.285580.319770.3437
4 3230.2504770.2594230.2639130.3085120.2744
10 6700.25061610.2532480.2577280.2691250.2706
20 12290.25092960.2508880.2623520.2538460.2757
0.990 2350.0102550.0108150.013180.015570.0156
2 4300.01011020.0102290.0105160.0110140.0112
4 5950.01001410.0103400.0112220.0131190.0154
10 10330.01012460.0102710.0104400.0109350.0122
20 17000.01004060.01011180.0104670.0113590.0124
Table 10. The ASP ( n , L ( δ 0 ) ) for the KM-GIKw distribution with parameters ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 2 , 2 , 2 ) for selected values of p * , c, and a.
| p* | c | n (a = 0.15) | L(δ₀) | n (a = 0.30) | L(δ₀) | n (a = 0.60) | L(δ₀) | n (a = 0.90) | L(δ₀) | n (a = 1) | L(δ₀) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.10 | 0 | 91 | 0.9008 | 7 | 0.9056 | 1 | 1.0000 | 1 | 1.0000 | 1 | 1.0000 |
| 0.10 | 2 | 951 | 0.9000 | 68 | 0.9023 | 8 | 0.9097 | 4 | 0.9268 | 3 | 1.0000 |
| 0.10 | 4 | 2098 | 0.9000 | 150 | 0.9004 | 16 | 0.9174 | 7 | 0.9499 | 6 | 0.9688 |
| 0.10 | 10 | 6052 | 0.9001 | 430 | 0.9009 | 45 | 0.9090 | 19 | 0.9211 | 16 | 0.9408 |
| 0.10 | 20 | 13259 | 0.9000 | 941 | 0.9005 | 97 | 0.9079 | 40 | 0.9120 | 34 | 0.9186 |
| 0.25 | 0 | 248 | 0.7507 | 18 | 0.7550 | 2 | 0.8373 | 1 | 1.0000 | 1 | 1.0000 |
| 0.25 | 2 | 1489 | 0.7502 | 106 | 0.7522 | 11 | 0.7861 | 5 | 0.7990 | 4 | 0.8750 |
| 0.25 | 4 | 2904 | 0.7501 | 206 | 0.7524 | 22 | 0.7516 | 9 | 0.7966 | 8 | 0.7734 |
| 0.25 | 10 | 7430 | 0.7500 | 527 | 0.7509 | 54 | 0.7641 | 22 | 0.7771 | 19 | 0.7597 |
| 0.25 | 20 | 15302 | 0.7501 | 1085 | 0.7502 | 111 | 0.7541 | 44 | 0.7820 | 37 | 0.7975 |
| 0.50 | 0 | 597 | 0.5006 | 42 | 0.5078 | 4 | 0.5869 | 2 | 0.5816 | 2 | 0.5000 |
| 0.50 | 2 | 2305 | 0.5000 | 163 | 0.5032 | 17 | 0.5041 | 7 | 0.5063 | 6 | 0.5000 |
| 0.50 | 4 | 4025 | 0.5002 | 285 | 0.5018 | 29 | 0.5121 | 11 | 0.5864 | 10 | 0.5000 |
| 0.50 | 10 | 9194 | 0.5001 | 651 | 0.5009 | 66 | 0.5049 | 26 | 0.5111 | 22 | 0.5000 |
| 0.50 | 20 | 17811 | 0.5001 | 1261 | 0.5005 | 127 | 0.5105 | 50 | 0.5032 | 42 | 0.5000 |
| 0.75 | 0 | 1194 | 0.2503 | 84 | 0.2536 | 8 | 0.2884 | 3 | 0.3383 | 3 | 0.2500 |
| 0.75 | 2 | 3378 | 0.2501 | 239 | 0.2504 | 24 | 0.2528 | 9 | 0.2782 | 7 | 0.3438 |
| 0.75 | 4 | 5407 | 0.2500 | 382 | 0.2512 | 38 | 0.2584 | 14 | 0.3039 | 12 | 0.2744 |
| 0.75 | 10 | 11219 | 0.2500 | 793 | 0.2509 | 79 | 0.2568 | 30 | 0.2720 | 25 | 0.2706 |
| 0.75 | 20 | 20580 | 0.2501 | 1455 | 0.2509 | 145 | 0.2586 | 55 | 0.2839 | 46 | 0.2757 |
| 0.99 | 0 | 3967 | 0.0100 | 279 | 0.0101 | 26 | 0.0118 | 9 | 0.0131 | 7 | 0.0156 |
| 0.99 | 2 | 7241 | 0.0100 | 510 | 0.0101 | 49 | 0.0105 | 17 | 0.0128 | 14 | 0.0112 |
| 0.99 | 4 | 9997 | 0.0100 | 705 | 0.0100 | 68 | 0.0105 | 24 | 0.0123 | 19 | 0.0154 |
| 0.99 | 10 | 17356 | 0.0100 | 1224 | 0.0101 | 119 | 0.0105 | 43 | 0.0117 | 35 | 0.0122 |
| 0.99 | 20 | 28521 | 0.0100 | 2013 | 0.0100 | 197 | 0.0105 | 72 | 0.0121 | 59 | 0.0124 |
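For reference, the plans tabulated above follow the classical single acceptance sampling design for a truncated life test: with truncation time t = a × median, the probability that a tested unit fails before t is p = F(t), and the smallest sample size n is sought for which the probability of observing at most c failures does not exceed 1 − p*. The sketch below assumes a KM-GIKw cdf of the form F(x) = e/(e − 1) · (1 − e^{−G(x)}) with baseline GIKw cdf G(x) = [1 − (1 + x^{ϑ₃})^{−ϑ₁}]^{ϑ₂}; this parameterization is an assumption for illustration, not necessarily the paper's exact one.

```python
from math import comb, e, exp, log

def gikw_cdf(x, t1, t2, t3):
    # assumed baseline GIKw cdf (parameterization is illustrative)
    return (1.0 - (1.0 + x ** t3) ** (-t1)) ** t2

def km_gikw_cdf(x, t1, t2, t3):
    # Kavya-Manoharan transform of the baseline cdf
    return e / (e - 1.0) * (1.0 - exp(-gikw_cdf(x, t1, t2, t3)))

def km_gikw_median(t1, t2, t3):
    # solve F(m) = 1/2 in closed form by inverting the KM transform,
    # then the baseline GIKw cdf
    g = -log(1.0 - 0.5 * (e - 1.0) / e)   # baseline cdf level at the median
    return ((1.0 - g ** (1.0 / t2)) ** (-1.0 / t1) - 1.0) ** (1.0 / t3)

def accept_prob(n, c, p):
    # probability of at most c failures among n units, each failing before t
    # with probability p (binomial tail)
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(c + 1))

def min_sample_size(p_star, c, a, t1, t2, t3):
    # smallest n whose acceptance probability does not exceed 1 - p*
    p = km_gikw_cdf(a * km_gikw_median(t1, t2, t3), t1, t2, t3)
    n = c + 1
    while accept_prob(n, c, p) > 1.0 - p_star:
        n += 1
    return n
```

A useful sanity check: for a = 1 the failure probability is p = F(median) = 1/2 regardless of the parameter values, so for p* = 0.99 and c = 0 the design gives n = 7 (since 0.5⁷ ≈ 0.008 ≤ 0.01 while 0.5⁶ ≈ 0.016 > 0.01), matching the a = 1 column of the tables above.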
Table 11. ABiases and RMSEs of different estimation methods for the KM-GIKw distribution with ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 1.25 , 1.25 , 1.25 ) at different sample sizes n.
| n | Par. | MLE ABias | MLE RMSE | MPSE ABias | MPSE RMSE | BE-SELF ABias | BE-SELF RMSE | BE-LELF (τ* = −0.5) ABias | RMSE | BE-LELF (τ* = 0.5) ABias | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | ϑ1 | 0.7316 | 1.7584 | −0.0425 | 1.3785 | −0.5916 | 0.6724 | −0.5532 | 0.7331 | −0.6225 | 0.6705 |
| 20 | ϑ2 | 1.8898 | 4.7333 | 0.8544 | 4.0574 | −0.5588 | 0.5863 | −0.5327 | 0.5676 | −0.5808 | 0.6038 |
| 20 | ϑ3 | 0.3424 | 1.5112 | 1.0926 | 2.2619 | 0.8328 | 0.9379 | 0.9602 | 1.0804 | 0.7827 | 1.2488 |
| 30 | ϑ1 | 0.6693 | 1.6406 | −0.0303 | 1.3035 | −0.5440 | 0.6488 | −0.5281 | 0.5777 | −0.5823 | 0.6159 |
| 30 | ϑ2 | 1.9725 | 4.7845 | 0.6817 | 3.3038 | −0.5182 | 0.5532 | −0.4917 | 0.5356 | −0.5407 | 0.5700 |
| 30 | ϑ3 | 0.2547 | 1.2547 | 0.8893 | 1.8100 | 0.9111 | 1.4813 | 0.8869 | 0.9990 | 0.7977 | 1.3782 |
| 50 | ϑ1 | 0.4162 | 1.2621 | −0.0479 | 1.1619 | −0.4874 | 0.5438 | −0.4558 | 0.5424 | −0.5127 | 0.5614 |
| 50 | ϑ2 | 1.3037 | 3.6018 | 0.5193 | 3.0619 | −0.4504 | 0.5064 | −0.4218 | 0.4925 | −0.4748 | 0.5212 |
| 50 | ϑ3 | 0.1706 | 0.8818 | 0.6128 | 1.2320 | 0.8205 | 1.6151 | 0.8234 | 1.1918 | 0.7281 | 1.2419 |
| 100 | ϑ1 | 0.4184 | 1.0113 | 0.0026 | 0.8663 | −0.3717 | 0.4531 | −0.3459 | 0.4475 | −0.3952 | 0.4653 |
| 100 | ϑ2 | 1.0295 | 2.5959 | 0.3959 | 2.2452 | −0.3434 | 0.4337 | −0.3149 | 0.4260 | −0.3684 | 0.4437 |
| 100 | ϑ3 | −0.0175 | 0.5828 | 0.2960 | 0.7670 | 0.5166 | 0.6365 | 0.5651 | 0.6879 | 0.4915 | 0.7394 |
| 200 | ϑ1 | 0.2137 | 0.7564 | −0.0919 | 0.6796 | −0.3350 | 0.4342 | −0.3137 | 0.4240 | −0.3552 | 0.4451 |
| 200 | ϑ2 | 0.5632 | 1.8495 | 0.1247 | 1.4341 | −0.2966 | 0.4204 | −0.2686 | 0.4202 | −0.3211 | 0.4253 |
| 200 | ϑ3 | 0.0361 | 0.5049 | 0.2805 | 0.6553 | 0.4592 | 0.5961 | 0.4996 | 0.6502 | 0.4240 | 0.5587 |
Table 12. ABiases and RMSEs of different estimation methods for the KM-GIKw distribution with ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 1.50 , 1.75 , 1.50 ) at different sample sizes n.
| n | Par. | MLE ABias | MLE RMSE | MPSE ABias | MPSE RMSE | BE-SELF ABias | BE-SELF RMSE | BE-LELF (τ* = −0.5) ABias | RMSE | BE-LELF (τ* = 0.5) ABias | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | ϑ1 | 0.5246 | 1.6695 | −0.3838 | 1.3168 | −0.7705 | 0.7938 | −0.7391 | 0.7666 | −0.7991 | 0.8192 |
| 20 | ϑ2 | 2.3495 | 6.3725 | 0.4480 | 4.6099 | −0.8761 | 0.8986 | −0.8374 | 0.8665 | −0.9085 | 0.9268 |
| 20 | ϑ3 | 0.4631 | 1.5513 | 1.5144 | 2.5522 | 1.0225 | 1.1076 | 1.1902 | 1.2888 | 0.8873 | 0.9657 |
| 30 | ϑ1 | 0.5510 | 1.6154 | −0.1520 | 1.4451 | −0.7220 | 0.7483 | −0.6932 | 0.7237 | −0.7487 | 0.7716 |
| 30 | ϑ2 | 2.6210 | 6.6031 | 1.1645 | 5.7568 | −0.8251 | 0.8504 | −0.7863 | 0.8193 | −0.8575 | 0.8782 |
| 30 | ϑ3 | 0.3648 | 1.5363 | 1.1933 | 2.2814 | 1.0141 | 1.1110 | 1.1632 | 1.2751 | 0.8924 | 0.9809 |
| 50 | ϑ1 | 0.4939 | 1.4495 | −0.0869 | 1.2708 | −0.6301 | 0.6669 | −0.6020 | 0.6438 | −0.6561 | 0.6890 |
| 50 | ϑ2 | 2.3578 | 6.1063 | 0.9671 | 4.7559 | −0.7399 | 0.7775 | −0.6984 | 0.7468 | −0.7748 | 0.8057 |
| 50 | ϑ3 | 0.2193 | 1.1584 | 0.7910 | 1.6130 | 0.9391 | 1.4851 | 0.9978 | 1.1121 | 0.8195 | 1.0244 |
| 100 | ϑ1 | 0.3337 | 1.0898 | −0.1169 | 0.9897 | −0.5622 | 0.6114 | −0.5377 | 0.5925 | −0.5853 | 0.6299 |
| 100 | ϑ2 | 1.3954 | 3.9828 | 0.4382 | 3.0738 | −0.6601 | 0.7196 | −0.6202 | 0.6962 | −0.6944 | 0.7431 |
| 100 | ϑ3 | 0.0970 | 0.8204 | 0.5065 | 1.0929 | 0.7672 | 0.8745 | 0.8401 | 0.9510 | 0.7015 | 0.8066 |
| 200 | ϑ1 | 0.2369 | 0.8830 | −0.1593 | 0.8164 | −0.4943 | 0.5620 | −0.4720 | 0.5471 | −0.5158 | 0.5771 |
| 200 | ϑ2 | 0.9288 | 2.8265 | 0.1694 | 2.1749 | −0.5844 | 0.6790 | −0.5437 | 0.6690 | −0.6194 | 0.6943 |
| 200 | ϑ3 | 0.0641 | 0.6343 | 0.4225 | 0.8734 | 0.6613 | 0.7745 | 0.7171 | 0.8335 | 0.6108 | 0.7222 |
Table 13. ABiases and RMSEs of different estimation methods for the KM-GIKw distribution with ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 2.00 , 1.75 , 2.50 ) at different sample sizes n.
| n | Par. | MLE ABias | MLE RMSE | MPSE ABias | MPSE RMSE | BE-SELF ABias | BE-SELF RMSE | BE-LELF (τ* = −0.5) ABias | RMSE | BE-LELF (τ* = 0.5) ABias | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | ϑ1 | 0.5611 | 1.7686 | −0.1133 | 1.7178 | −0.8147 | 0.8634 | −0.7595 | 0.8172 | −0.8653 | 0.9073 |
| 20 | ϑ2 | 2.7515 | 7.3533 | 1.8432 | 6.6930 | −0.7407 | 0.7662 | −0.6887 | 0.7238 | −0.7834 | 0.8033 |
| 20 | ϑ3 | 0.9808 | 3.7088 | 2.4913 | 5.5586 | 0.9436 | 1.0320 | 1.2018 | 1.3005 | 0.8197 | 1.8058 |
| 30 | ϑ1 | 0.4477 | 1.6176 | −0.1606 | 1.5830 | −0.7126 | 0.7578 | −0.6616 | 0.7147 | −0.7599 | 0.7991 |
| 30 | ϑ2 | 2.1123 | 6.0099 | 1.0897 | 5.1813 | −0.6580 | 0.6906 | −0.6014 | 0.6479 | −0.7041 | 0.7292 |
| 30 | ϑ3 | 0.6810 | 2.8021 | 2.0666 | 4.6228 | 0.9620 | 1.0637 | 1.1915 | 1.3043 | 0.7798 | 0.8808 |
| 50 | ϑ1 | 0.3479 | 1.2937 | −0.1061 | 1.2867 | −0.5821 | 0.6500 | −0.5380 | 0.6158 | −0.6240 | 0.6839 |
| 50 | ϑ2 | 1.5401 | 4.5769 | 0.8561 | 4.0290 | −0.5681 | 0.6308 | −0.5122 | 0.5986 | −0.6144 | 0.6630 |
| 50 | ϑ3 | 0.3670 | 2.0177 | 1.2711 | 3.2743 | 0.9065 | 1.0394 | 1.0907 | 1.2393 | 0.7583 | 0.8870 |
| 100 | ϑ1 | 0.1935 | 0.9246 | −0.1235 | 0.9579 | −0.4646 | 0.5504 | −0.4277 | 0.5240 | −0.5002 | 0.5773 |
| 100 | ϑ2 | 0.8289 | 2.7821 | 0.3507 | 2.3355 | −0.4700 | 0.5689 | −0.4178 | 0.5480 | −0.5150 | 0.5933 |
| 100 | ϑ3 | 0.1523 | 1.0608 | 0.6896 | 1.8525 | 0.7481 | 0.8986 | 0.8733 | 1.0322 | 0.6417 | 0.7910 |
| 200 | ϑ1 | 0.1686 | 0.7778 | −0.0629 | 0.7797 | −0.3791 | 0.5035 | −0.3462 | 0.4838 | −0.4111 | 0.5243 |
| 200 | ϑ2 | 0.5959 | 1.7857 | 0.2476 | 1.5078 | −0.3603 | 0.5352 | −0.3060 | 0.5352 | −0.4072 | 0.5463 |
| 200 | ϑ3 | 0.0954 | 1.0296 | 0.4097 | 1.4428 | 0.6088 | 0.8107 | 0.7064 | 0.9182 | 0.5245 | 0.7242 |
Table 14. ABiases and RMSEs of different estimation methods for the KM-GIKw distribution with ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 2.50 , 2.50 , 2.50 ) at different sample sizes n.
| n | Par. | MLE ABias | MLE RMSE | MPSE ABias | MPSE RMSE | BE-SELF ABias | BE-SELF RMSE | BE-LELF (τ* = −0.5) ABias | RMSE | BE-LELF (τ* = 0.5) ABias | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | ϑ1 | 0.4491 | 1.9251 | −0.3490 | 1.9073 | −1.1529 | 1.2137 | −1.0588 | 1.3444 | −1.2202 | 1.2509 |
| 20 | ϑ2 | 4.2616 | 11.3458 | 2.1601 | 8.7924 | −1.3623 | 1.3793 | −1.3010 | 1.3237 | −1.4125 | 1.4263 |
| 20 | ϑ3 | 1.2525 | 4.0293 | 3.0214 | 6.2343 | 1.2724 | 1.3530 | 1.5482 | 1.6413 | 1.2070 | 2.4814 |
| 30 | ϑ1 | 0.3770 | 1.6916 | −0.1718 | 1.7480 | −1.0427 | 1.0855 | −0.9886 | 1.0367 | −1.0931 | 1.1317 |
| 30 | ϑ2 | 3.5462 | 9.6190 | 1.9969 | 7.6894 | −1.2911 | 1.3116 | −1.2258 | 1.2535 | −1.3438 | 1.3604 |
| 30 | ϑ3 | 0.9190 | 3.2612 | 2.2280 | 5.3408 | 1.3217 | 1.4105 | 1.5663 | 1.6703 | 1.1262 | 1.2085 |
| 50 | ϑ1 | 0.4073 | 1.4816 | −0.0103 | 1.5210 | −0.8714 | 0.9164 | −0.8239 | 0.8745 | −0.9168 | 0.9573 |
| 50 | ϑ2 | 2.7688 | 7.6290 | 1.7529 | 6.7930 | −1.1285 | 1.1640 | −1.0582 | 1.1073 | −1.1869 | 1.2144 |
| 50 | ϑ3 | 0.4457 | 2.3392 | 1.2457 | 3.5957 | 1.1896 | 1.2912 | 1.3756 | 1.4911 | 1.0390 | 1.1345 |
| 100 | ϑ1 | 0.1672 | 0.9408 | −0.1319 | 0.9760 | −0.7570 | 0.8105 | −0.7170 | 0.7753 | −0.7959 | 0.8453 |
| 100 | ϑ2 | 1.1330 | 4.2257 | 0.5191 | 3.5660 | −1.0104 | 1.0638 | −0.9429 | 1.0121 | −1.0679 | 1.1111 |
| 100 | ϑ3 | 0.1391 | 0.9752 | 0.5157 | 1.6269 | 1.0352 | 1.1518 | 1.1717 | 1.3014 | 0.9207 | 1.0307 |
| 200 | ϑ1 | 0.1454 | 0.8496 | −0.0860 | 0.8652 | −0.6253 | 0.7003 | −0.5902 | 0.6713 | −0.6598 | 0.7296 |
| 200 | ϑ2 | 0.8290 | 3.0290 | 0.4561 | 3.3005 | −0.8457 | 0.9369 | −0.7745 | 0.8957 | −0.9067 | 0.9798 |
| 200 | ϑ3 | 0.1127 | 1.0067 | 0.3814 | 1.4589 | 0.8284 | 0.9687 | 0.9288 | 1.0842 | 0.7429 | 0.8758 |
Table 15. ABiases and RMSEs of different estimation methods for the KM-GIKw distribution with ( ϑ 1 , ϑ 2 , ϑ 3 ) = ( 2.50 , 2.50 , 3.50 ) at different sample sizes n.
| n | Par. | MLE ABias | MLE RMSE | MPSE ABias | MPSE RMSE | BE-SELF ABias | BE-SELF RMSE | BE-LELF (τ* = −0.5) ABias | RMSE | BE-LELF (τ* = 0.5) ABias | RMSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | ϑ1 | 0.4693 | 1.8287 | −0.2074 | 1.8708 | −0.9543 | 0.9991 | −0.8896 | 0.9418 | −1.0147 | 1.0538 |
| 20 | ϑ2 | 3.4209 | 9.1172 | 2.3421 | 8.9577 | −1.1298 | 1.1524 | −1.0441 | 1.0763 | −1.1980 | 1.2156 |
| 20 | ϑ3 | 1.4743 | 4.7706 | 3.5656 | 7.9989 | 1.0206 | 1.1420 | 1.3566 | 1.4808 | 0.8484 | 2.1064 |
| 30 | ϑ1 | 0.4582 | 1.6699 | −0.1520 | 1.6820 | −0.8169 | 0.8657 | −0.7596 | 0.8158 | −0.8711 | 0.9140 |
| 30 | ϑ2 | 2.9969 | 8.1020 | 1.7638 | 7.3588 | −1.0134 | 1.0451 | −0.9206 | 0.9677 | −1.0868 | 1.1106 |
| 30 | ϑ3 | 0.8390 | 3.6453 | 2.5670 | 6.5515 | 1.1261 | 1.2277 | 1.4255 | 1.5356 | 0.8874 | 0.9900 |
| 50 | ϑ1 | 0.3047 | 1.4393 | −0.1342 | 1.5184 | −0.6835 | 0.7323 | −0.6352 | 0.6895 | −0.7302 | 0.7744 |
| 50 | ϑ2 | 2.1835 | 6.9038 | 1.4187 | 6.5544 | −0.8928 | 0.9375 | −0.8033 | 0.8663 | −0.9667 | 1.0008 |
| 50 | ϑ3 | 0.5551 | 2.4754 | 1.8370 | 4.7715 | 1.0950 | 1.2145 | 1.3467 | 1.4806 | 0.8930 | 1.0108 |
| 100 | ϑ1 | 0.1940 | 0.9748 | −0.1029 | 1.0387 | −0.5449 | 0.6194 | −0.5035 | 0.5858 | −0.5851 | 0.6532 |
| 100 | ϑ2 | 1.1916 | 3.7559 | 0.5906 | 3.3470 | −0.7221 | 0.8177 | −0.6244 | 0.7661 | −0.8012 | 0.8731 |
| 100 | ϑ3 | 0.1698 | 1.3761 | 0.7391 | 2.3793 | 0.9181 | 1.0719 | 1.0980 | 1.2616 | 0.7669 | 0.9200 |
| 200 | ϑ1 | 0.1452 | 0.8401 | −0.0928 | 0.8728 | −0.4836 | 0.5750 | −0.4498 | 0.5495 | −0.5168 | 0.6011 |
| 200 | ϑ2 | 0.8788 | 3.1260 | 0.4142 | 2.7938 | −0.6414 | 0.7807 | −0.5578 | 0.7475 | −0.7124 | 0.8211 |
| 200 | ϑ3 | 0.1242 | 1.2396 | 0.5395 | 2.0225 | 0.8120 | 0.9940 | 0.9476 | 1.1376 | 0.6945 | 0.8747 |
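The simulation design behind Tables 11–15 can be sketched as follows: each replicate draws a KM-GIKw sample by inverse-transform sampling, the parameters are re-estimated, and the ABias and RMSE are averaged over replicates. The cdf and quantile below use the same assumed parameterization as earlier, F(x) = e/(e − 1) · (1 − e^{−G(x)}) with G(x) = [1 − (1 + x^{ϑ₃})^{−ϑ₁}]^{ϑ₂}; the estimation step itself (MLE, MPS, or Bayesian) is omitted.

```python
import math
import random

E = math.e

def km_gikw_cdf(x, t1, t2, t3):
    # assumed KM-GIKw cdf (illustrative parameterization)
    g = (1.0 - (1.0 + x ** t3) ** (-t1)) ** t2
    return E / (E - 1.0) * (1.0 - math.exp(-g))

def km_gikw_quantile(u, t1, t2, t3):
    # invert the KM transform, then the GIKw baseline, for u in (0, 1)
    g = -math.log(1.0 - u * (E - 1.0) / E)   # baseline cdf level
    return ((1.0 - g ** (1.0 / t2)) ** (-1.0 / t1) - 1.0) ** (1.0 / t3)

def draw_sample(n, t1, t2, t3, rng=random):
    # inverse-transform sampling: U ~ Uniform(0,1) mapped through the quantile
    return [km_gikw_quantile(rng.random(), t1, t2, t3) for _ in range(n)]

def abias_rmse(estimates, true_value):
    # averages over Monte Carlo replicates, as reported in the tables
    k = len(estimates)
    abias = sum(est - true_value for est in estimates) / k
    rmse = math.sqrt(sum((est - true_value) ** 2 for est in estimates) / k)
    return abias, rmse
```

A round-trip check, km_gikw_cdf(km_gikw_quantile(u, …), …) ≈ u, is a cheap way to validate the sampler before running the full experiment.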
Table 16. Descriptive analysis for both datasets.
| Dataset | Minimum | Var | Median | Mean | Standard Deviation | Maximum | SK | KU |
|---|---|---|---|---|---|---|---|---|
| Dataset 1 | 1.00 | 4995.173 | 22.00 | 59.60 | 70.677 | 261.00 | 1.784 | 2.569 |
| Dataset 2 | 1.10 | 0.471 | 1.70 | 1.90 | 0.686 | 4.10 | 1.862 | 0.686 |
Table 17. The failure time dataset.
246 21 120 23 261 87 7 14 62 47 225 71 42 20 5
12 71 11 14 120 11 3 16 52 95 14 11 16 90 1
Table 18. The values of the relief-time data.
1.5 1.2 1.4 1.9 1.8 1.6 2.2 1.1 1.4 1.3 1.7 1.7 2.7 4.1 1.8
2.3 1.6 2.0 3.0 1.7
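The Dataset 2 row of Table 16 can be reproduced directly from these 20 relief times; note that the reported variance and standard deviation are the population (divisor-n) versions, not the sample (divisor n − 1) ones. A quick check:

```python
import statistics

# relief times of the 20 patients (Dataset 2)
relief = [1.5, 1.2, 1.4, 1.9, 1.8, 1.6, 2.2, 1.1, 1.4, 1.3,
          1.7, 1.7, 2.7, 4.1, 1.8, 2.3, 1.6, 2.0, 3.0, 1.7]

mean = statistics.fmean(relief)      # 1.90
median = statistics.median(relief)   # 1.70
var = statistics.pvariance(relief)   # 0.471 (divisor n)
sd = statistics.pstdev(relief)       # 0.686
```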
Table 19. Numerical results of the MLE, SE, KS, and PVKS for the first dataset.
| Measures | KM-GIKw | GIKw | KM-LL | LL | KM-BXII | BXII | KM-BX | BX | KM-W | W |
|---|---|---|---|---|---|---|---|---|---|---|
| ϑ̂1 | 0.331 | 0.352 | – | – | 0.006 | 0.007 | 0.013 | 0.018 | – | – |
| ϑ̂2 | 2.803 | 2.958 | 7.533 | 3.296 | – | – | 0.019 | 0.029 | – | – |
| ϑ̂3 | 43.765 | 45.251 | – | – | – | – | – | – | 0.353 | 0.290 |
| δ̂ | – | – | 511.530 | 141.265 | 10.513 | 10.198 | – | – | 0.938 | 0.854 |
| SE(ϑ̂1) | 0.250 | 0.265 | – | – | 0.001 | 0.001 | 0.003 | 0.004 | – | – |
| SE(ϑ̂2) | 2.727 | 2.844 | 18.296 | 3.075 | – | – | 0.083 | 0.142 | – | – |
| SE(ϑ̂3) | 91.054 | 97.513 | – | – | – | – | – | – | 0.063 | 0.059 |
| SE(δ̂) | – | – | 1365.278 | 168.443 | 45.141 | 49.620 | – | – | 0.128 | 0.119 |
| KS | 0.131 | 0.135 | 0.145 | 0.142 | 0.372 | 0.377 | 0.181 | 0.196 | 0.146 | 0.153 |
| PVKS | 0.680 | 0.641 | 0.557 | 0.585 | <0.001 | <0.001 | 0.282 | 0.201 | 0.546 | 0.481 |
Table 20. Numerical results of the MLE, SE, KS, and PVKS for the second dataset.
| Measures | KM-GIKw | GIKw | KM-LL | LL | KM-BXII | BXII | KM-BX | BX | KM-W | W |
|---|---|---|---|---|---|---|---|---|---|---|
| ϑ̂1 | 3.628 | 5.111 | – | – | 0.655 | 0.691 | 0.422 | 0.469 | – | – |
| ϑ̂2 | 1.093 | 0.809 | 1670.930 | 1405.725 | – | – | 0.022 | 0.032 | – | – |
| ϑ̂3 | 9.050 | 6.470 | – | – | – | – | – | – | 3.563 | 3.246 |
| δ̂ | – | – | 4716.450 | 2669.973 | 53.623 | 52.862 | – | – | 3.114 | 2.787 |
| SE(ϑ̂1) | 6.075 | 6.631 | – | – | 0.085 | 0.086 | 0.033 | 0.040 | – | – |
| SE(ϑ̂2) | 2.418 | 1.316 | 19,573.048 | 12,641.637 | – | – | 0.053 | 0.077 | – | – |
| SE(ϑ̂3) | 19.360 | 7.799 | – | – | – | – | – | – | 1.338 | 1.321 |
| SE(δ̂) | – | – | 55,248.332 | 24,014.725 | 129.668 | 125.750 | – | – | 0.458 | 0.427 |
| KS | 0.094 | 0.096 | 0.436 | 0.440 | 0.291 | 0.285 | 0.172 | 0.190 | 0.186 | 0.185 |
| PVKS | 0.995 | 0.993 | 0.001 | 0.001 | 0.068 | 0.078 | 0.595 | 0.465 | 0.492 | 0.501 |
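The KS values in Tables 19 and 20 are one-sample Kolmogorov–Smirnov distances between the empirical cdf of the data and each fitted cdf. A generic helper (not tied to any particular model; any fitted cdf can be passed in as a callable):

```python
def ks_statistic(data, cdf):
    # one-sample Kolmogorov-Smirnov distance: sup_x |F_n(x) - F(x)|
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        # the empirical cdf jumps from (i-1)/n to i/n at x, so check both sides
        d = max(d, i / n - fx, fx - (i - 1) / n)
    return d
```

For instance, `ks_statistic(sample, lambda x: km_gikw_cdf(x, t1, t2, t3))` with the fitted parameters would give the KM-GIKw entry, assuming a cdf implementation such as the sketch given earlier.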
Table 21. Different methods of estimation of the KM-GIKw distribution for the first dataset.
| Parameter | MLE | MPSE | BE-LELF: τ* = −0.5 | BE-LELF: τ* = 0.5 | BE-SELF |
|---|---|---|---|---|---|
| ϑ̂1 | 0.3310 | 0.0117 | 0.0090 | 0.0090 | 0.0090 |
| ϑ̂2 | 2.8030 | 4.8698 | 2.7485 | 2.7346 | 2.7417 |
| ϑ̂3 | 43.7650 | 45.7956 | 59.8282 | 35.2123 | 43.2794 |
Table 22. Different methods of estimation of the KM-GIKw distribution for the second dataset.
| Parameter | MLE | MPSE | BE-LELF: τ* = −0.5 | BE-LELF: τ* = 0.5 | BE-SELF |
|---|---|---|---|---|---|
| ϑ̂1 | 3.6280 | 0.8197 | 1.0706 | 1.0687 | 1.0696 |
| ϑ̂2 | 1.0930 | 5.6366 | 10.1222 | 9.6864 | 9.9097 |
| ϑ̂3 | 9.0500 | 3.9271 | 3.8744 | 3.7049 | 3.7908 |

Alsadat, N.; Hassan, A.S.; Elgarhy, M.; Chesneau, C.; El-Saeed, A.R. Sampling Plan for the Kavya–Manoharan Generalized Inverted Kumaraswamy Distribution with Statistical Inference and Applications. Axioms 2023, 12, 739. https://doi.org/10.3390/axioms12080739
