Article

Different Statistical Inference Algorithms for the New Pareto Distribution Based on Type-II Progressively Censored Competing Risk Data with Applications

by Essam A. Ahmed 1,2, Tariq S. Alshammari 3 and Mohamed S. Eliwa 4,5,*
1 College of Business Administration, Taibah University, Al-Madina 41411, Saudi Arabia
2 Department of Mathematics, Sohag University, Sohag 82524, Egypt
3 Department of Mathematics, College of Science, University of Ha’il, Hail, Saudi Arabia
4 Department of Statistics and Operations Research, College of Science, Qassim University, Saudi Arabia
5 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(13), 2136; https://doi.org/10.3390/math12132136
Submission received: 16 May 2024 / Revised: 1 July 2024 / Accepted: 4 July 2024 / Published: 7 July 2024
(This article belongs to the Special Issue Application of the Bayesian Method in Statistical Modeling)

Abstract:
In this research, the statistical inference of unknown lifetime parameters is proposed in the presence of independent competing risks using a progressive Type-II censored dataset. The lifetime distribution associated with a failure mode is assumed to follow the new Pareto distribution, with consideration given to two distinct competing failure reasons. Maximum likelihood estimators (MLEs) for the unknown model parameters, as well as reliability and hazard functions, are derived, noting that they are not expressible in closed form. The Newton–Raphson, expectation maximization (EM), and stochastic expectation maximization (SEM) methods are employed to generate maximum likelihood (ML) estimations. Approximate confidence intervals for the unknown parameters, reliability, and hazard rate functions are constructed using the normal approximation of the MLEs and the normal approximation of the log-transformed MLEs. Additionally, the missing information principle is utilized to derive the closed form of the Fisher information matrix, which, in turn, is used with the delta approach to calculate confidence intervals for reliability and hazards. Bayes estimators are derived under both symmetric and asymmetric loss functions, with informative and non-informative priors considered, including independent gamma distributions for informative priors. The Markov chain Monte Carlo (MCMC) sampling approach is employed to obtain the highest posterior density credible intervals and Bayesian point estimates for unknown parameters and reliability characteristics. A Monte Carlo simulation is conducted to assess the effectiveness of the proposed techniques, with the performances of the Bayes and maximum likelihood estimations examined using average values and mean squared errors as benchmarks. Interval estimations are compared in terms of average lengths and coverage probabilities. Real datasets are considered and examined for each topic to provide illustrative examples.

1. Introduction

Lifetime studies offer a valuable approach across engineering sciences, medical fields, and economics for exploring the survival distribution of entities or individuals. Analyzing data from such studies necessitates the consideration of the anticipated lifetime distribution, with typical distributions including the exponential, Weibull, Burr, Pareto, and Gamma distributions. Among these, the Pareto distribution family is widely acknowledged in the literature for its ability to model data with heavy tails. One significant outcome of advances in Pareto modeling is the attention it has drawn to properties that are advantageous for measuring income inequality. The Pareto distribution has found applications in various disciplines such as actuarial science, economics, finance, life testing, reliability, survival analysis, and engineering, owing to its tractable nature as a lifespan model. However, it becomes evident that modeling data with heavy-tailed distributions necessitates considerations beyond the capabilities of the Pareto distribution alone. Hence, there arises a need for alternatives, and several distributions based on the Pareto framework have been proposed to address this need. The new Pareto (NP) distribution, introduced by [1], is one such recently proposed distribution that merits further investigation due to its significance across multiple domains. The probability density function (PDF) and cumulative distribution function (CDF) of the NP distribution, denoted $NP(\alpha,\beta)$, are as follows:
$$f(x,\alpha,\beta)=\frac{2\alpha\beta^{\alpha}x^{\alpha-1}}{\left(x^{\alpha}+\beta^{\alpha}\right)^{2}},\quad x\ge\beta,\ \alpha>0,\ \beta>0,\tag{1}$$
and
$$F(x,\alpha,\beta)=1-\frac{2\beta^{\alpha}}{x^{\alpha}+\beta^{\alpha}},\quad x\ge\beta,\ \alpha>0,\ \beta>0,\tag{2}$$
where α and β denote the shape and scale parameters of the distribution, respectively. The survival function (SF) for the NP distribution is derived from Equation (2) as follows:
$$S(x,\alpha,\beta)=\frac{2\beta^{\alpha}}{x^{\alpha}+\beta^{\alpha}},\quad x\ge\beta,\ \alpha>0,\ \beta>0.\tag{3}$$
The hazard rate function (HRF) can be formulated as
$$H(x,\alpha,\beta)=\frac{f(x,\alpha,\beta)}{1-F(x,\alpha,\beta)}=\frac{\alpha x^{\alpha-1}}{x^{\alpha}+\beta^{\alpha}},\quad x\ge\beta,\ \alpha>0,\ \beta>0.\tag{4}$$
To highlight certain unique characteristics of this function, its first derivative with respect to x can be expressed as follows:
$$H'(x,\alpha,\beta)=\frac{\alpha(\alpha-1)x^{\alpha-2}\left(x^{\alpha}+\beta^{\alpha}\right)-\alpha^{2}x^{2\alpha-2}}{\left(x^{\alpha}+\beta^{\alpha}\right)^{2}}=\frac{\alpha x^{\alpha-2}\,\Delta(x)}{\left(x^{\alpha}+\beta^{\alpha}\right)^{2}}.\tag{5}$$
As reported in the literature, ref. [2] noted that the NP hazard rate function can be unimodal or decreasing depending on its parameters. For $x>\beta>0$, it is evident that $\Delta(x)=(\alpha-1)\beta^{\alpha}-x^{\alpha}$ and $\Delta(\beta)=\beta^{\alpha}(\alpha-2)<0$ if $\alpha<2$. Consequently, for $\alpha<2$, $H'(x,\alpha,\beta)<0$, indicating that $H(x,\alpha,\beta)$ is decreasing in $x$. In the case of $\alpha>2$, the function $H(x,\alpha,\beta)$ exhibits a distinct mode at $x=x_0$, where $H(x,\alpha,\beta)$ increases for all $x<x_0$ and decreases for all $x>x_0$, with $x_0=\beta(\alpha-1)^{1/\alpha}$. Figure 1 shows how the behavior of $H(x,\alpha,\beta)$ changes as the parameters $\alpha$ and $\beta$ increase. It is clear from this figure that the hazard rate of the NP distribution may exhibit either an upside-down-bathtub (unimodal) shape or a decreasing trend, contingent upon the parameter values.
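The closed-form PDF, CDF, and hazard rate above, together with the claimed mode $x_0=\beta(\alpha-1)^{1/\alpha}$, are easy to verify numerically. A minimal Python sketch (function names are ours, chosen for illustration):

```python
import math

def np_pdf(x, a, b):
    # f(x) = 2*a*b^a * x^(a-1) / (x^a + b^a)^2, for x >= b
    return 2 * a * b**a * x**(a - 1) / (x**a + b**a) ** 2

def np_cdf(x, a, b):
    # F(x) = 1 - 2*b^a / (x^a + b^a)
    return 1.0 - 2 * b**a / (x**a + b**a)

def np_hazard(x, a, b):
    # H(x) = f(x) / (1 - F(x)) = a*x^(a-1) / (x^a + b^a)
    return a * x**(a - 1) / (x**a + b**a)

a, b = 3.0, 2.0                      # alpha > 2, so the hazard should be unimodal
x0 = b * (a - 1) ** (1 / a)          # claimed mode x0 = beta*(alpha-1)^(1/alpha)
```

Evaluating `np_hazard` on a grid around `x0` confirms the unimodal shape for $\alpha>2$; repeating the check with $\alpha\le 2$ shows a hazard that decreases on $[\beta,\infty)$.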
The NP distribution is often employed for modeling a wide range of real-world datasets, such as insurance, reliability, engineering, and economics. For example, ref. [1] delved into the mathematical properties of the NP distribution, demonstrating its utility in modeling income and reliability data through the analysis of seven real datasets. Ref. [3] used the new Pareto distribution to explain the income of the upper-class group using the Malaysian household incomes dataset from 2012–2019. Further insights into this distribution were uncovered in [2], where the authors derived simpler formulas for relevant risk measures and inequality indices. Ref. [4] conducted parameter estimation for the NP distribution using two distinct methodologies: classical estimation and Bayesian inference, with the latter utilizing importance sampling. Additionally, ref. [5] explored parameter estimation across various phases of progressively censored samples derived from the NP distribution and made predictions regarding failure times for removed units using a Bayesian approach and Markov chain Monte Carlo techniques. Furthermore, ref. [6] applied this distribution to data on rainfall and COVID-19. Ref. [7] utilized Type-II censored data and full samples to estimate NP distribution parameters employing the mixed Gibbs sampling method. They used the NP distribution to model the data of the air conditioning system in an aircraft, as well as some data for mechanical units. Recently, ref. [8] applied the NP distribution to some different data, including the net worth of the affluent in Singapore and China, the daily increment of NASDAQ-100, and the daily new-case rate of COVID-19.
Recognizing the presence or absence of competing risks is pivotal when considering the distribution of units’ lifetimes in research. In realistic testing scenarios, it is well understood that failures often stem from multiple factors, competing within the lifespan continuum. This phenomenon is referred to as a competing risk model in the statistical literature and finds widespread application across various scientific domains, including economics, medical sciences, electronic engineering, and social sciences. In the medical field, for instance, reducing mortality rates from specific diseases is a key objective of public health initiatives. However, assessing the impact on overall mortality and life expectancy is complicated by the presence of competing causes of death. Individuals face risks from various ailments like heart disease, diabetes, hypertension, cancer, AIDS, and tuberculosis, all vying for their longevity, even though death is typically attributed to a single cause. Similarly, in assessing the reliability of car tires, factors such as sidewall damage, punctures, or tread wear can lead to failure, highlighting the competition among different failure modes. In industrial reliability contexts, the focus on specific system components prone to failure may overlook potential risks posed by other components, thus introducing a competing risk problem. Competition among failure causes exists in each scenario, with only one mode typically leading to failure. Therefore, the precise inference of each failure mode in the presence of others becomes necessary. Moreover, in medical survival analysis and industrial experimentation, the item of interest often becomes inadvertently lost or removed before failure, leading to data censorship as a result of these investigations.
Due to the challenge of obtaining comprehensive lifespan data for every component in life experiments, researchers often categorize censoring schemes into two types: Type-I and Type-II. Type-I censoring occurs when the experimental duration is fixed and the number of failures is random. Conversely, Type-II censoring arises when the experimental time is unpredictable but the number of failures is predetermined. Under both types, no remaining items can be removed before the test concludes. Therefore, the progressive Type-II censoring approach, introduced in [9], aims to streamline resources and reduce costs by implementing multiple censoring stages according to a predefined scheme. The practical implementation of progressive Type-II censoring involves initiating the experiment with $n$ units on test. The experiment concludes at the failure time of the $m$th (where $m<n$) unit. Upon the occurrence of the first failure, $R_1$ units are randomly removed from the remaining $n-1$ units, and the failure time, $X_{1:m:n}$, is recorded. Subsequently, $R_2$ units are randomly removed after noting the second failure time, $X_{2:m:n}$, and this process continues until the $m$th unit fails. At this point, all surviving units are removed, and the failure time, $X_{m:m:n}$, is noted, where $R_m$ represents the number of units removed at the $m$th stage. Thus, $X_{1:m:n},X_{2:m:n},\ldots,X_{m:m:n}$ constitute the progressive Type-II censored sample. It is essential to note that the total number of units, $n$, is the sum of $R_1,R_2,\ldots,R_m$ and $m$, with $R_1,R_2,\ldots,R_m$ being predefined constants. Further insights into censoring-related studies can be found in [9], which serves as a valuable resource in this area. Building upon the progressive Type-II censored sample drawn from an NP distribution, ref. [6] investigated issues related to estimating unknown characteristics and predicting the failure periods of the eliminated units.
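The sampling scheme just described can be simulated with the standard uniform-transformation algorithm for progressive Type-II censored order statistics; the NP quantile function follows from inverting the CDF in Equation (2), giving $F^{-1}(u)=\beta\left(\frac{1+u}{1-u}\right)^{1/\alpha}$. A sketch, where the scheme `R` and parameter values are illustrative rather than taken from the paper:

```python
import math
import random

def np_quantile(u, a, b):
    # inverse CDF of NP(a, b): F^{-1}(u) = b*((1+u)/(1-u))^(1/a)
    return b * ((1 + u) / (1 - u)) ** (1 / a)

def progressive_type2_sample(a, b, R, seed=None):
    # Uniform-based algorithm: generate uniform progressive order
    # statistics, then transform them by the NP quantile function.
    rng = random.Random(seed)
    m = len(R)
    W = [rng.random() for _ in range(m)]
    V = []
    for i in range(1, m + 1):
        gamma = i + sum(R[m - i:])        # i + R_m + ... + R_{m-i+1}
        V.append(W[i - 1] ** (1.0 / gamma))
    U, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= V[m - i]                  # V_m * V_{m-1} * ... * V_{m-i+1}
        U.append(1.0 - prod)              # U is nondecreasing by construction
    return [np_quantile(u, a, b) for u in U]

R = [2, 0, 1, 0, 2]                       # n = m + sum(R) = 10 units, m = 5 failures
x = progressive_type2_sample(3.0, 2.0, R, seed=7)
```

Each returned sample has exactly $m$ ordered failure times, all at least $\beta$, as the scheme requires.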
In recent years, there has been a growing interest among researchers in statistical inference, particularly concerning the presence of competing risks and various forms of censoring in available data. A multitude of recent studies have addressed this topic, spanning different censoring schemes and statistical models. For instance, ref. [10] explored competing risk analysis under Type-II hybrid censoring for the two-parameter exponential distribution. Ref. [11] investigated statistical inference for competing hazard models using progressive interval censored Weibull data. Ref. [12] analyzed a competing risk model with Weibull distributions under a unified hybrid censoring scheme, deriving maximum likelihood estimators and approximate confidence intervals for distributional parameters. Similarly, ref. [13] focused on statistical inference for competing risk models with inverted exponentiated distributions under a unified hybrid censoring scheme. Additionally, researchers have examined various Bayesian and maximum likelihood estimation techniques under different censoring scenarios and model assumptions. Notably, there is a noticeable gap in the literature regarding the application of the new Pareto lifespan distribution within competing risk models under any form of censorship. Hence, our study aimed to address this gap by studying the maximum likelihood and Bayesian estimation of the unknown parameters, reliability, and hazard rate functions of the NP distribution under progressively censored competing risks models. Moreover, we introduce innovative methodologies, such as employing expectation maximization (EM) and stochastic EM (SEM) algorithms for deriving maximum likelihood estimators for the NP distribution, a novel approach not previously explored. To achieve these objectives, we employ a variety of methods, including the Newton–Raphson method, EM, SEM algorithms, and Bayesian estimation techniques, utilizing different loss functions and prior distributions. 
Furthermore, we conducted a Monte Carlo simulation analysis to evaluate the efficacy of the proposed estimators using metrics such as absolute bias, mean squared error, average width, and coverage probability. Finally, we illustrate our findings using a real-world dataset, emphasizing the practical relevance of our research.
This paper’s structure is organized as follows: Section 2 provides descriptions of the models under the competing risks scenario. In Section 3, maximum likelihood estimators (MLEs) and corresponding approximate confidence intervals (ACIs) for the NP distribution’s unknown parameters in the progressively censored competing risk setting are derived using the Newton–Raphson (NR) method. The existence and uniqueness of MLEs are also discussed in this section. Section 4 applies the EM and SEM approximation methods to obtain the MLEs of the unknown parameters. Additionally, a Fisher information matrix is constructed to facilitate approximate interval estimations in this section. Section 5 utilizes the Markov chain Monte Carlo (MCMC) technique to generate Bayes estimates for the unknown parameters, reliability, and hazard functions, assuming independent gamma priors for the unknown parameters. HPD credible intervals for the unknown parameters are also established based on MCMC samples. Section 6 presents a Monte Carlo simulation investigation to compare the suggested estimates in terms of average bias, mean squared error (MSE), average width (AW), and coverage probability (CP). In Section 7, real-world datasets and a simulation example are examined for illustrative purposes. Finally, Section 8 concludes the paper with some closing remarks.

2. Model and Data Overview

Consider a life testing experiment commencing with a set of $n\in\mathbb{N}$ identical units, where the failure times of these units are denoted by $X_1,X_2,\ldots,X_n$. To simplify, let us assume that there are only two distinct causes of failure. Thus, for each unit $i=1,\ldots,n$, we have $X_i=\min\{X_{i1},X_{i2}\}$, where $X_{ik}$, $(k=1,2)$, represents the latent failure time of the $i$th unit under the $k$th cause of failure. We assume that the latent failure times $X_{i1}$ and $X_{i2}$ are statistically independent and that the pairs $(X_{i1},X_{i2})$ are independently and identically distributed (i.i.d.).
As is clear from Figure 1, the effect of the $\beta$ parameter on the hazard function is not significant, unlike that of the $\alpha$ parameter. Therefore, here, we assume that the failure times follow the NP distribution, with a common scale parameter $\beta$ and distinct shape parameters $\alpha_k$, $k=1,2$. The PDF $f_k(x,\Theta)$ and CDF $F_k(x,\Theta)$ of the $k$th failure cause for the random variable $X_{ik}$ are provided in [1] as follows:
$$f_k(x,\Theta)=\frac{2\alpha_k\beta^{\alpha_k}x^{\alpha_k-1}}{\left(x^{\alpha_k}+\beta^{\alpha_k}\right)^{2}},\quad\text{and}\quad F_k(x,\Theta)=1-\frac{2\beta^{\alpha_k}}{x^{\alpha_k}+\beta^{\alpha_k}},\quad x\ge\beta,\ \alpha_k>0,\ \beta>0.\tag{6}$$
Given the observation X i = min { X i 1 , X i 2 } , where X i 1 and X i 2 represent two distinct failure times for a test unit, we solely consider the smaller of the two, indicating the overall failure time. Subsequently, the CDF of this overall failure time is readily derived as follows:
$$F(x,\Theta)=1-\left[1-F_1(x,\Theta)\right]\left[1-F_2(x,\Theta)\right]=1-\frac{2\beta^{\alpha_1}}{x^{\alpha_1}+\beta^{\alpha_1}}\cdot\frac{2\beta^{\alpha_2}}{x^{\alpha_2}+\beta^{\alpha_2}}=1-4\prod_{k=1}^{2}\frac{\beta^{\alpha_k}}{x^{\alpha_k}+\beta^{\alpha_k}},\quad x\ge\beta,\ \alpha_k>0,\ \beta>0.\tag{7}$$
Consequently, the PDF can be represented as
$$f(x,\Theta)=\frac{4\beta^{\alpha_1+\alpha_2}\left[(\alpha_1+\alpha_2)x^{\alpha_1+\alpha_2}+\alpha_2\beta^{\alpha_1}x^{\alpha_2}+\alpha_1\beta^{\alpha_2}x^{\alpha_1}\right]}{x\left(x^{\alpha_1}+\beta^{\alpha_1}\right)^{2}\left(x^{\alpha_2}+\beta^{\alpha_2}\right)^{2}}=4\beta^{\alpha_1+\alpha_2}\sum_{k=1}^{2}\frac{\alpha_k x^{\alpha_k-1}}{\left(x^{\alpha_k}+\beta^{\alpha_k}\right)^{2}\left(x^{\alpha_{3-k}}+\beta^{\alpha_{3-k}}\right)}=\sum_{k=1}^{2}f_k(x,\Theta)\,\bar F_{3-k}(x,\Theta),\quad x\ge\beta,\ \alpha_k>0,\ \beta>0.\tag{8}$$
Subsequently, the survival and hazard functions manifest in the subsequent form:
$$S(t,\Theta)=4\prod_{k=1}^{2}\frac{\beta^{\alpha_k}}{t^{\alpha_k}+\beta^{\alpha_k}},\quad t\ge\beta,\ \alpha_k>0,\ \beta>0,\tag{9}$$
and
$$H(t,\Theta)=\sum_{k=1}^{2}\frac{\alpha_k t^{\alpha_k-1}}{t^{\alpha_k}+\beta^{\alpha_k}},\quad t\ge\beta,\ \alpha_k>0,\ \beta>0.\tag{10}$$
Let
$$I(\delta_i=k)=\begin{cases}1, & \delta_i=k,\\ 0, & \text{otherwise},\end{cases}\qquad i=1,2,\ldots,m,$$
then
$$m_k=\sum_{i=1}^{m}I(\delta_i=k),$$
signifies the total count of units that failed because of cause $k$ $(k=1,2)$, with $m_1+m_2=m$. Exploiting the independence of the latent failure times $X_{i1}$ and $X_{i2}$ for $i=1,2,\ldots,n$, the relative risk rate attributed to a specific cause (say, cause 1) can be derived as follows:
$$p=P(X_1<X_2)=\int_{\beta}^{\infty}F_1(x,\alpha_1,\beta)\,f_2(x,\alpha_2,\beta)\,dx=\int_{\beta}^{\infty}\frac{2\alpha_2\beta^{\alpha_2}x^{\alpha_2-1}}{\left(x^{\alpha_2}+\beta^{\alpha_2}\right)^{2}}\left[1-\frac{2\beta^{\alpha_1}}{x^{\alpha_1}+\beta^{\alpha_1}}\right]dx=1-\int_{\beta}^{\infty}\frac{4\alpha_2\beta^{\alpha_1+\alpha_2}x^{\alpha_2-1}}{\left(x^{\alpha_2}+\beta^{\alpha_2}\right)^{2}\left(x^{\alpha_1}+\beta^{\alpha_1}\right)}\,dx.\tag{11}$$
A numerical approach is necessary to evaluate the integral on the right side of Equation (11) since it lacks an analytical solution. Once $P(X_1<X_2)$ is determined, $P(X_2<X_1)$ can be computed using the relationship $P(X_2<X_1)=1-P(X_1<X_2)$. Hence, if $m_1\sim\text{Binomial}(m,p)$, then $m_2\sim\text{Binomial}(m,1-p)$. In progressive Type-II censoring, the total number of failed individuals $m$ and the predefined censoring scheme $(R_1,R_2,\ldots,R_m)$ are specified in advance, where $R_m=n-m-\sum_{i=1}^{m-1}R_i$. Consequently, the observed set of progressive Type-II censored data with competing risks can be expressed as
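As a concrete illustration of this numerical step, $p=P(X_1<X_2)$ can be approximated by Monte Carlo using the inverse CDF $x=\beta\left(\frac{1+u}{1-u}\right)^{1/\alpha}$; the parameter values below are illustrative only:

```python
import random

def np_sample(a, b, rng):
    # inverse-CDF draw from NP(a, b): x = b*((1+u)/(1-u))^(1/a)
    u = rng.random()
    return b * ((1 + u) / (1 - u)) ** (1 / a)

def relative_risk(a1, a2, b, n=200_000, seed=1):
    # Monte Carlo estimate of p = P(X1 < X2) for independent
    # NP(a1, b) and NP(a2, b) latent failure times
    rng = random.Random(seed)
    hits = sum(np_sample(a1, b, rng) < np_sample(a2, b, rng) for _ in range(n))
    return hits / n

p = relative_risk(2.5, 1.5, 2.0, seed=1)
q = relative_risk(1.5, 2.5, 2.0, seed=2)   # should be close to 1 - p
```

Swapping the shape parameters estimates $P(X_2<X_1)$, and the two estimates sum to approximately one, consistent with the relationship above; the cause with the larger shape parameter tends to fail first.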
$$\left(X_{1:m:n},\delta_1,R_1\right),\left(X_{2:m:n},\delta_2,R_2\right),\ldots,\left(X_{m:m:n},\delta_m,R_m\right).\tag{12}$$
To simplify the notation, we denote X i : m : n as X i . Then, the likelihood equation for the progressive Type-II censored data, based on the competing risk model, is expressed as
$$L=C_R\prod_{i=1}^{m}\left[f_1(x_i)\,\bar F_2(x_i)\right]^{I(\delta_i=1)}\left[f_2(x_i)\,\bar F_1(x_i)\right]^{I(\delta_i=2)}\left[\bar F_1(x_i)\,\bar F_2(x_i)\right]^{R_i},\tag{13}$$
where
$$C_R=n\left(n-R_1-1\right)\left(n-R_1-R_2-2\right)\cdots\left(n-R_1-\cdots-R_{m-1}-m+1\right),\quad\text{and}\quad \bar F_k(x_i)=1-F_k(x_i),\ k=1,2.$$

3. Estimation of Maximum Likelihood Using Different Algorithms

A widely employed and highly regarded statistical estimation technique is MLE. MLE stands out as a preferred and efficient choice for parametric estimation procedures, owing to its consistency, efficiency, and asymptotic normality properties. This section is dedicated to utilizing maximum likelihood to derive both point and interval estimates for the unknown parameters α 1 , α 2 , and β , as well as for the reliability S ( t , α 1 , α 2 , β ) and hazard H ( t , α 1 , α 2 , β ) functions, within the framework of progressive Type-II censoring with competing risk data.

3.1. MLEs via Newton–Raphson Algorithm

A widely utilized numerical technique for determining MLEs is the NR algorithm. In this context, we outline the NR algorithm for computing the MLEs of $\alpha_1$, $\alpha_2$, $\beta$, $S(t)$, and $H(t)$. Let $x_{1:m:n}<x_{2:m:n}<\cdots<x_{m:m:n}$ denote the order statistics of Type-II progressively censored data with competing risks from independent NP distributions $NP(\alpha_1,\beta)$ and $NP(\alpha_2,\beta)$, respectively, with a pre-fixed censoring scheme $(R_1,R_2,\ldots,R_m)$. Henceforth, we will use $x=(x_1,x_2,\ldots,x_m)$ in lieu of $(x_{1:m:n},x_{2:m:n},\ldots,x_{m:m:n})$. The likelihood function utilizing Equations (6) and (12) is presented as follows:
$$L\propto\prod_{i=1}^{m}\prod_{k=1}^{2}\left[\frac{\alpha_k\beta^{\alpha_k}\beta^{\alpha_{3-k}}x_i^{\alpha_k-1}}{\left(x_i^{\alpha_k}+\beta^{\alpha_k}\right)^{2}\left(x_i^{\alpha_{3-k}}+\beta^{\alpha_{3-k}}\right)}\right]^{I(\delta_i=k)}\prod_{i=1}^{m}\prod_{k=1}^{2}\left[\frac{\beta^{\alpha_k}}{x_i^{\alpha_k}+\beta^{\alpha_k}}\right]^{R_i}.\tag{14}$$
Disregarding the additive constant, the log-likelihood function is expressed as
$$\ell=\ln L=\sum_{k=1}^{2}m_k\log\alpha_k+n\log\beta\sum_{k=1}^{2}\alpha_k+\sum_{k=1}^{2}\sum_{i=1}^{m}\left(\alpha_k-1\right)I(\delta_i=k)\log x_i-\sum_{k=1}^{2}\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\log\left(x_i^{\alpha_k}+\beta^{\alpha_k}\right),\tag{15}$$
where
$$\Delta_k(\delta_i,R_i)=I(\delta_i=k)+R_i+1,\qquad i=1,2,\ldots,m,\ \ k=1,2.$$
Taking the derivative of Equation (15) with respect to α k and β , where k = 1 , 2 , yields the partial derivatives of the likelihood as follows:
$$\frac{\partial\ell}{\partial\alpha_k}=\frac{m_k}{\alpha_k}+n\log\beta+\sum_{i=1}^{m}I(\delta_i=k)\log x_i-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{x_i^{\alpha_k}\log x_i+\beta^{\alpha_k}\log\beta}{x_i^{\alpha_k}+\beta^{\alpha_k}},\tag{16}$$
and
$$\frac{\partial\ell}{\partial\beta}=\sum_{k=1}^{2}\left[\frac{n\alpha_k}{\beta}-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{\alpha_k\beta^{\alpha_k-1}}{x_i^{\alpha_k}+\beta^{\alpha_k}}\right].\tag{17}$$
Concurrently solving the intricate nonlinear equations $\partial\ell/\partial\alpha_k=0$ for $k=1,2$ and $\partial\ell/\partial\beta=0$ allows for the determination of the MLEs of $\alpha_k$ and $\beta$. However, closed-form solutions of Equations (16) and (17) are not available. Hence, various numerical techniques, such as NR, are employed to compute the MLEs of the unknown parameters. Notably, given $x\ge\beta$, the likelihood is increasing in $\beta$, so the MLE of $\beta$ is straightforwardly derived as $\hat\beta=x_1$. Furthermore, it is evident that the MLE of $\alpha_k$ can be obtained as a fixed-point solution of the following equation:
$$\Phi(\alpha_k)=1/\alpha_k,\quad k=1,2,\tag{18}$$
where
$$\Phi(\alpha_k)=\frac{1}{m_k}\left[\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{x_i^{\alpha_k}\log x_i+x_1^{\alpha_k}\log x_1}{x_i^{\alpha_k}+x_1^{\alpha_k}}-n\log x_1-\sum_{i=1}^{m}I(\delta_i=k)\log x_i\right],\quad k=1,2.\tag{19}$$
The prevalent approach for numerically solving Equation (19) is the straightforward iterative scheme $\alpha_k^{(j+1)}=1/\Phi\bigl(\alpha_k^{(j)}\bigr)$, where $\alpha_k^{(j)}$ is the value obtained in the $j$th iteration. The solutions of the likelihood equations are denoted as $\hat\alpha_k$ $(k=1,2)$ and $\hat\beta$. The following steps illustrate the specific processes involved in the iteration method:
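To illustrate, the fixed-point recursion for $\alpha_k$ can be coded directly once $\hat\beta=x_1$ is plugged in. The small dataset below is hypothetical and serves only to demonstrate the iteration:

```python
import math

# hypothetical progressively censored competing-risk data:
# x = failure times, delta = failure cause (1 or 2), R = removals at each failure
x = [2.1, 2.4, 2.7, 3.1, 3.8]
delta = [1, 2, 1, 1, 2]
R = [1, 0, 1, 0, 1]

def phi(alpha, x, delta, R, k):
    # Phi(alpha_k) of Equation (19), with beta-hat = x_(1) = min(x)
    n = len(x) + sum(R)
    x1 = min(x)
    mk = sum(d == k for d in delta)
    s = -n * math.log(x1) - sum(math.log(xi) for xi, d in zip(x, delta) if d == k)
    for xi, d, r in zip(x, delta, R):
        Dk = (1 if d == k else 0) + r + 1          # Delta_k(delta_i, R_i)
        s += Dk * (xi**alpha * math.log(xi) + x1**alpha * math.log(x1)) \
                / (xi**alpha + x1**alpha)
    return s / mk

def fixed_point_mle(x, delta, R, k, alpha0=1.0, tol=1e-10, maxit=500):
    # iterate alpha <- 1/Phi(alpha) until successive values agree
    a = alpha0
    for _ in range(maxit):
        a_new = 1.0 / phi(a, x, delta, R, k)
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

alpha1_hat = fixed_point_mle(x, delta, R, k=1)
```

On this toy dataset the recursion oscillates toward its fixed point, at which $\Phi(\hat\alpha_1)=1/\hat\alpha_1$ holds to numerical precision.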
Step 1: Choose initial values $\Theta^{(0)}=(\alpha_1^{(0)},\alpha_2^{(0)},\beta^{(0)})$ for $\Theta=(\alpha_1,\alpha_2,\beta)$ and set $j=0$.
Step 2: In the $j$th iteration, calculate $\left(\frac{\partial\ell}{\partial\alpha_1},\frac{\partial\ell}{\partial\alpha_2},\frac{\partial\ell}{\partial\beta}\right)^{T}\Big|_{\alpha_1=\alpha_1^{(j)},\,\alpha_2=\alpha_2^{(j)},\,\beta=\beta^{(j)}}$ and $I=I(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)})$, where
$$I=I(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)})=-\begin{pmatrix}L_{11}&L_{12}&L_{13}\\ L_{21}&L_{22}&L_{23}\\ L_{31}&L_{32}&L_{33}\end{pmatrix}_{\alpha_1=\alpha_1^{(j)},\,\alpha_2=\alpha_2^{(j)},\,\beta=\beta^{(j)}}.\tag{20}$$
The matrix $I$ denotes the observed information matrix of the parameters $\alpha_1$, $\alpha_2$, and $\beta$; its elements $L_{ij}$ are as follows:
$$L_{kk}=\frac{\partial^{2}\ell}{\partial\alpha_k^{2}}=-\frac{m_k}{\alpha_k^{2}}-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{\beta^{\alpha_k}x_i^{\alpha_k}\log^{2}(x_i/\beta)}{\left(x_i^{\alpha_k}+\beta^{\alpha_k}\right)^{2}},\quad k=1,2,\tag{21}$$
$$L_{12}=L_{21}=\frac{\partial^{2}\ell}{\partial\alpha_1\,\partial\alpha_2}=0,\tag{22}$$
$$L_{k3}=L_{3k}=\frac{\partial^{2}\ell}{\partial\alpha_k\,\partial\beta}=\frac{n}{\beta}-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{\beta^{\alpha_k-1}\left[x_i^{\alpha_k}\left(1-\alpha_k\log(x_i/\beta)\right)+\beta^{\alpha_k}\right]}{\left(x_i^{\alpha_k}+\beta^{\alpha_k}\right)^{2}},\quad k=1,2,\tag{23}$$
and
$$L_{33}=\frac{\partial^{2}\ell}{\partial\beta^{2}}=-\frac{n}{\beta^{2}}\sum_{k=1}^{2}\alpha_k-\sum_{k=1}^{2}\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{\alpha_k\beta^{\alpha_k-2}\left[\left(\alpha_k-1\right)x_i^{\alpha_k}-\beta^{\alpha_k}\right]}{\left(x_i^{\alpha_k}+\beta^{\alpha_k}\right)^{2}}.\tag{24}$$
Step 3: Set
$$\left(\alpha_1^{(j+1)},\alpha_2^{(j+1)},\beta^{(j+1)}\right)^{T}=\left(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)}\right)^{T}+I^{-1}(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)})\left(\frac{\partial\ell}{\partial\alpha_1},\frac{\partial\ell}{\partial\alpha_2},\frac{\partial\ell}{\partial\beta}\right)^{T}\Bigg|_{\alpha_1=\alpha_1^{(j)},\,\alpha_2=\alpha_2^{(j)},\,\beta=\beta^{(j)}},\tag{25}$$
where $(\alpha_1,\alpha_2,\beta)^{T}$ is the transpose of the vector $(\alpha_1,\alpha_2,\beta)$ and $I^{-1}(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)})$ represents the inverse of the matrix $I(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)})$.
Step 4: Set $j=j+1$ and repeat Steps 2 and 3 until $\left\|\left(\alpha_1^{(j+1)},\alpha_2^{(j+1)},\beta^{(j+1)}\right)^{T}-\left(\alpha_1^{(j)},\alpha_2^{(j)},\beta^{(j)}\right)^{T}\right\|<\varepsilon$, where $\varepsilon$ is a threshold value fixed in advance; the limiting values are the MLEs of the parameters, denoted by $\hat\alpha_1$, $\hat\alpha_2$, and $\hat\beta$.
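Because $\partial^{2}\ell/\partial\alpha_1\partial\alpha_2=0$ and $\hat\beta=x_1$, the three-parameter update effectively decouples, and each $\alpha_k$ can be found by a scalar Newton–Raphson recursion using the score and second derivative above with $\beta=x_1$. A sketch on a hypothetical dataset (not from the paper):

```python
import math

# hypothetical data: failure times, causes, and removal counts
x = [2.1, 2.4, 2.7, 3.1, 3.8]
delta = [1, 2, 1, 1, 2]
R = [1, 0, 1, 0, 1]

def score(a, x, delta, R, k):
    # first derivative of the log-likelihood w.r.t. alpha_k, with beta = x_(1)
    n, x1 = len(x) + sum(R), min(x)
    mk = sum(d == k for d in delta)
    s = mk / a + n * math.log(x1)
    s += sum(math.log(xi) for xi, d in zip(x, delta) if d == k)
    for xi, d, r in zip(x, delta, R):
        Dk = (1 if d == k else 0) + r + 1
        s -= Dk * (xi**a * math.log(xi) + x1**a * math.log(x1)) / (xi**a + x1**a)
    return s

def hess(a, x, delta, R, k):
    # second derivative: always negative, so Newton steps are well behaved
    x1 = min(x)
    mk = sum(d == k for d in delta)
    h = -mk / a**2
    for xi, d, r in zip(x, delta, R):
        Dk = (1 if d == k else 0) + r + 1
        h -= Dk * x1**a * xi**a * math.log(xi / x1) ** 2 / (xi**a + x1**a) ** 2
    return h

def newton_mle(x, delta, R, k, a0=1.0, tol=1e-10, maxit=100):
    a = a0
    for _ in range(maxit):
        a_new = a - score(a, x, delta, R, k) / hess(a, x, delta, R, k)
        if a_new <= 0:               # crude safeguard to stay in (0, inf)
            a_new = a / 2
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

alpha1_hat = newton_mle(x, delta, R, k=1)
```

Since the second derivative is strictly negative, the scalar recursion converges to the unique root of the score equation from any reasonable starting value.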
Utilizing the acquired point estimators and the invariance property of MLEs, we can derive the estimates of the reliability and hazard functions from Equations (9) and (10), which are expressed as follows:
$$\hat S(t)=4\hat\beta^{\hat\alpha_1+\hat\alpha_2}\prod_{k=1}^{2}\left(t^{\hat\alpha_k}+\hat\beta^{\hat\alpha_k}\right)^{-1},\quad\text{and}\quad \hat H(t)=\sum_{k=1}^{2}\frac{\hat\alpha_k t^{\hat\alpha_k-1}}{t^{\hat\alpha_k}+\hat\beta^{\hat\alpha_k}},\quad t\ge\hat\beta>0.$$

3.2. Existence and Uniqueness of MLEs

The existence and uniqueness of the MLEs are fundamental aspects to consider in statistical inference.
Theorem 1. 
The maximum likelihood estimator of $\alpha_k$, where $k=1,2$, exists and is unique for the lifetimes of objects subject to competing risks and following the NP distribution with parameters $(\alpha_1,\beta)$ and $(\alpha_2,\beta)$, provided $m_k>0$ for $k=1,2$.
Proof. 
Since $x\ge\beta$, we determine the MLE $\hat\beta=x_1$, and the MLE $\hat\alpha_k$ of $\alpha_k$, $k=1,2$, can be obtained from the solution of the following equation:
$$\frac{\partial\ell}{\partial\alpha_k}=\frac{m_k}{\alpha_k}+n\log x_1+\sum_{i=1}^{m}I(\delta_i=k)\log x_i-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{x_i^{\alpha_k}\log x_i+x_1^{\alpha_k}\log x_1}{x_i^{\alpha_k}+x_1^{\alpha_k}},\quad k=1,2.\tag{26}$$
The left-hand side of Equation (26) is a continuous function of $\alpha_k$. As $\alpha_k$ approaches 0, it tends to infinity, and as $\alpha_k\to\infty$, it tends to $\phi$, where
$$\phi=\lim_{\alpha_k\to\infty}\frac{\partial\ell}{\partial\alpha_k}=n\log x_1+\sum_{i=1}^{m}I(\delta_i=k)\log x_i-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\log x_i=n\log x_1-\sum_{i=1}^{m}\left(R_i+1\right)\log x_i=\sum_{i=1}^{m}\left(R_i+1\right)\left(\log x_1-\log x_i\right)<0,\tag{27}$$
since $\sum_{i=1}^{m}(R_i+1)=n$ and $x_i>x_1$ for $i\ge 2$.
Hence, the solution of Equation (26) exists. Additionally, we find that the second derivative of α k ( k = 1 , 2 ) takes the following form:
$$\frac{\partial^{2}\ell}{\partial\alpha_k^{2}}=-\frac{m_k}{\alpha_k^{2}}-\sum_{i=1}^{m}\Delta_k(\delta_i,R_i)\,\frac{x_1^{\alpha_k}x_i^{\alpha_k}\log^{2}(x_i/x_1)}{\left(x_i^{\alpha_k}+x_1^{\alpha_k}\right)^{2}}<0.\tag{28}$$
As Equation (28) is always negative, $\partial\ell/\partial\alpha_k$ is a continuous function on $(0,\infty)$ that monotonically decreases from $\infty$ to the negative limit $\phi$. Hence, Equation (26) has a unique root, and this root is the maximum likelihood estimator (MLE) of $\alpha_k$ $(k=1,2)$. This demonstrates the existence and uniqueness of the MLE of $\alpha_k$ $(k=1,2)$.    □

3.3. Approximate Confidence Intervals

In this section, we derive the approximate confidence intervals (ACIs) for the unknown parameters $\alpha_1$, $\alpha_2$, and $\beta$, as well as for the reliability and hazard functions, utilizing the asymptotic normality of the MLEs. Under the usual regularity conditions, the MLEs $(\hat\alpha_1,\hat\alpha_2,\hat\beta)$ are approximately normally distributed with mean $(\alpha_1,\alpha_2,\beta)$ and variance–covariance matrix $I^{-1}(\hat\alpha_1,\hat\alpha_2,\hat\beta)$. Or equivalently,
$$\left(\hat\alpha_1,\hat\alpha_2,\hat\beta\right)-\left(\alpha_1,\alpha_2,\beta\right)\sim N\!\left(0,\ I^{-1}(\hat\alpha_1,\hat\alpha_2,\hat\beta)\right),\tag{29}$$
where from (20),
$$I^{-1}(\hat\alpha_1,\hat\alpha_2,\hat\beta)=-\begin{pmatrix}L_{11}&L_{12}&L_{13}\\ L_{21}&L_{22}&L_{23}\\ L_{31}&L_{32}&L_{33}\end{pmatrix}^{-1}_{\Theta=\hat\Theta}=\begin{pmatrix}\mathrm{Var}(\hat\alpha_1)&\mathrm{Cov}(\hat\alpha_1,\hat\alpha_2)&\mathrm{Cov}(\hat\alpha_1,\hat\beta)\\ \mathrm{Cov}(\hat\alpha_2,\hat\alpha_1)&\mathrm{Var}(\hat\alpha_2)&\mathrm{Cov}(\hat\alpha_2,\hat\beta)\\ \mathrm{Cov}(\hat\beta,\hat\alpha_1)&\mathrm{Cov}(\hat\beta,\hat\alpha_2)&\mathrm{Var}(\hat\beta)\end{pmatrix},\tag{30}$$
is the inverse of the observed Fisher information matrix, with $\Theta=(\alpha_1,\alpha_2,\beta)$ and $L_{ij}=\partial^{2}\ell/\partial\Theta_i\partial\Theta_j$, $i,j=1,2,3$, given by Equations (21)–(24). Thus, the $100(1-\gamma)\%$ ACIs for $\alpha_1$, $\alpha_2$, and $\beta$ are given by
$$\hat\alpha_1\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat\alpha_1)},\quad \hat\alpha_2\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat\alpha_2)},\quad\text{and}\quad \hat\beta\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat\beta)},$$
where Z γ / 2 is the upper ( γ / 2 ) -th point of the standard normal distribution N ( 0 , 1 ) . Alternatively, the ACIs for S ( t ) and the hazard function H ( t ) can be constructed using the asymptotic normality of the MLE. To derive these intervals, we employ the delta method (Greene [14]) to estimate the variances of their estimators. The delta method is a statistical technique used to approximate the probability distribution of a function of an asymptotically normal estimator by employing the Taylor series approximation. In utilizing this method, the approximate variances of S ^ ( t ) and H ^ ( t ) are determined through the following steps:
Step 1: Let Ω 1 and Ω 2 be two quantities that take the following forms:
$$\Omega_1=\left(\frac{\partial S(t)}{\partial\alpha_1},\frac{\partial S(t)}{\partial\alpha_2},\frac{\partial S(t)}{\partial\beta}\right)\quad\text{and}\quad \Omega_2=\left(\frac{\partial H(t)}{\partial\alpha_1},\frac{\partial H(t)}{\partial\alpha_2},\frac{\partial H(t)}{\partial\beta}\right),$$
where from Equations (9) and (10),
$$\frac{\partial S(t)}{\partial\alpha_k}=-\frac{4\beta^{\alpha_k+\alpha_{3-k}}\,t^{\alpha_k}\left(\log t-\log\beta\right)}{\left(t^{\alpha_k}+\beta^{\alpha_k}\right)^{2}\left(t^{\alpha_{3-k}}+\beta^{\alpha_{3-k}}\right)},\quad k=1,2,$$
$$\frac{\partial S(t)}{\partial\beta}=\frac{4\beta^{\alpha_1+\alpha_2-1}\left[\left(\alpha_1+\alpha_2\right)t^{\alpha_1+\alpha_2}+\alpha_1\beta^{\alpha_2}t^{\alpha_1}+\alpha_2\beta^{\alpha_1}t^{\alpha_2}\right]}{\left(t^{\alpha_1}+\beta^{\alpha_1}\right)^{2}\left(t^{\alpha_2}+\beta^{\alpha_2}\right)^{2}},$$
$$\frac{\partial H(t)}{\partial\alpha_k}=\frac{t^{\alpha_k-1}\left[t^{\alpha_k}+\beta^{\alpha_k}\left(1+\alpha_k\left(\log t-\log\beta\right)\right)\right]}{\left(t^{\alpha_k}+\beta^{\alpha_k}\right)^{2}},\quad k=1,2,$$
and
$$\frac{\partial H(t)}{\partial\beta}=-\frac{1}{\beta t}\sum_{k=1}^{2}\frac{\alpha_k^{2}\,t^{\alpha_k}\beta^{\alpha_k}}{\left(t^{\alpha_k}+\beta^{\alpha_k}\right)^{2}}.$$
Step 2: Using the following formulas, determine the approximate variances of S ( t ) and  H ( t ) :
$$\mathrm{Var}(\hat S(t))\simeq\left[\Omega_1\,\mathrm{Var}(\hat\Theta)\,\Omega_1^{T}\right]_{\Theta=\hat\Theta},\quad\text{and}\quad \mathrm{Var}(\hat H(t))\simeq\left[\Omega_2\,\mathrm{Var}(\hat\Theta)\,\Omega_2^{T}\right]_{\Theta=\hat\Theta},$$
where $\mathrm{Var}(\hat\Theta)$ is obtained from (30) for $\hat\Theta=(\hat\alpha_1,\hat\alpha_2,\hat\beta)$.
Step 3: Determine the $100(1-\gamma)\%$ asymptotic confidence intervals for $S(t)$ and $H(t)$ using the following formula:
$$\hat S(t)\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat S(t))},\quad\text{and}\quad \hat H(t)\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat H(t))}.$$
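The three delta-method steps can be mimicked numerically: approximate $\Omega_1$ by finite differences and form $\Omega_1\,\mathrm{Var}(\hat\Theta)\,\Omega_1^{T}$. The point estimates and covariance matrix below are hypothetical placeholders, not values from the paper:

```python
import math

def surv(t, a1, a2, b):
    # S(t) = 4*b^(a1+a2) / ((t^a1 + b^a1)*(t^a2 + b^a2))
    return 4 * b**(a1 + a2) / ((t**a1 + b**a1) * (t**a2 + b**a2))

def num_grad(f, theta, h=1e-6):
    # central finite differences as a stand-in for the analytic Omega_1
    g = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += h
        dn[i] -= h
        g.append((f(*up) - f(*dn)) / (2 * h))
    return g

theta_hat = (2.2, 1.6, 2.0)          # hypothetical MLEs (alpha1, alpha2, beta)
V = [[0.04, 0.0, 0.001],             # hypothetical inverse observed information
     [0.0, 0.03, 0.001],
     [0.001, 0.001, 0.002]]
t = 3.0
g = num_grad(lambda a1, a2, b: surv(t, a1, a2, b), theta_hat)
var_S = sum(g[i] * V[i][j] * g[j] for i in range(3) for j in range(3))
z = 1.959964                         # Z_{gamma/2} for gamma = 0.05
s_hat = surv(t, *theta_hat)
ci = (s_hat - z * math.sqrt(var_S), s_hat + z * math.sqrt(var_S))
```

The same recipe with $\Omega_2$ and the hazard function yields the interval for $H(t)$; in practice the analytic gradients above would replace `num_grad`.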

3.4. Log-Normal Approximation Confidence Intervals (LACIs)

In instances where the lower bound of an asymptotic confidence interval falls below 0, conflicting with the prerequisite $\Theta>0$, the issue can be circumvented through a log transformation combined with the delta method. We have $\ln\hat\Theta\sim N\left(\ln\Theta,\mathrm{Var}(\ln\hat\Theta)\right)$, where $\mathrm{Var}(\ln\hat\Theta)=\mathrm{Var}(\hat\Theta)/\hat\Theta^{2}$. The $100(1-\gamma)\%$ asymptotic confidence intervals based on the log-transformed MLEs of $\Theta=(\alpha_1,\alpha_2,\beta)$ are as follows:
$$\left[\hat\Theta\exp\left(-Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat\Theta)}}{\hat\Theta}\right),\ \hat\Theta\exp\left(Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat\Theta)}}{\hat\Theta}\right)\right],\quad \hat\Theta=\hat\alpha_1,\hat\alpha_2,\hat\beta.$$
Similarly, the $100(1-\gamma)\%$ LACIs of $S(t)$ and $H(t)$ can be listed as
$$\left[\hat S(t)\exp\left(-Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat S(t))}}{\hat S(t)}\right),\ \hat S(t)\exp\left(Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat S(t))}}{\hat S(t)}\right)\right],$$
and
$$\left[\hat H(t)\exp\left(-Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat H(t))}}{\hat H(t)}\right),\ \hat H(t)\exp\left(Z_{\gamma/2}\frac{\sqrt{\mathrm{Var}(\hat H(t))}}{\hat H(t)}\right)\right].$$
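The motivation for the log transformation is easy to demonstrate numerically: with a small estimate and a large standard error, the plain normal-approximation interval dips below zero while the log-transformed interval stays positive. A sketch with a hypothetical estimate and variance:

```python
import math
from statistics import NormalDist

def aci(theta_hat, var, gamma=0.05):
    # plain normal-approximation interval: theta_hat +/- Z_{gamma/2}*sqrt(var)
    z = NormalDist().inv_cdf(1 - gamma / 2)
    half = z * math.sqrt(var)
    return theta_hat - half, theta_hat + half

def log_aci(theta_hat, var, gamma=0.05):
    # log-transformed interval: theta_hat * exp(-/+ Z_{gamma/2}*sqrt(var)/theta_hat)
    z = NormalDist().inv_cdf(1 - gamma / 2)
    w = math.exp(z * math.sqrt(var) / theta_hat)
    return theta_hat / w, theta_hat * w

# hypothetical estimate 0.4 with variance 0.09: the plain interval crosses
# zero, the log-transformed one respects the constraint theta > 0
lo1, hi1 = aci(0.4, 0.09)
lo2, hi2 = log_aci(0.4, 0.09)
```

Both intervals contain the point estimate, but only the log-transformed one is guaranteed to lie in $(0,\infty)$.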

4. Expectation Maximization (EM) Algorithm

Given that the ML estimators are not straightforwardly derived and that initial values can affect the maximum likelihood estimates obtained from the NR method, we resort to the EM algorithm to compute the estimates of the unknown parameters $(\alpha_1,\alpha_2,\beta)$, alongside the reliability function $S(t)$ and hazard function $H(t)$ of the NP models. Ref. [15] introduced a generic iterative approach, known as the EM algorithm, for computing the MLEs of unknown parameters when both observed and censored data are available. This method offers several advantages: it can address complex problems, the log-likelihood is guaranteed to increase with each iteration, the computations are straightforward yet meticulous, and second- and higher-order derivatives are not required. The EM algorithm comprises two key steps: the expectation step (E-step) and the maximization step (M-step). In the E-step, the conditional expectation of the missing data is determined based on the observed data and the current parameter estimates, and the pseudo-log-likelihood function is calculated. Subsequently, in the M-step, the likelihood function based on the observed and censored data is maximized. This method is particularly useful for analyzing partially observed or censored datasets, such as those encountered in the Type-II progressive censoring with competing risks sample model. Thus, we employed the EM algorithm to obtain the MLEs of $\alpha_1$, $\alpha_2$, $\beta$, $S(t)$, and $H(t)$.

4.1. Point Estimation via EM Algorithm

Consider the observed sample denoted as X = ( x 1 : m : n , x 2 : m : n , , x m : m : n ) and the censored data as Z = ( Z 1 , Z 2 , . . , Z m ) , where Z i   = ( z i 1 , z i 2 , , z i R i ) for i = 1 , 2 , , m . Here, the censored data represent the missing data. It is important to note that the complete sample comprises both the observed sample and the censored data. Thus, the complete sample is represented as W = ( X , Z ) . For the parameter set Θ = ( α 1 , α 2 , β ) , the likelihood function of the complete sample data W is expressed as
L c Θ i = 1 m k = 1 2 f k x i : m : n F ¯ 3 k x i : m : n j = 1 R i [ f k z i j F ¯ 3 k z i j I δ i = k .
Hence, the log-likelihood function for α 1 , α 2 , and β derived from a complete sample is given by
L c = k = 1 2 n k ln α k + n ln β k = 1 2 α k + k = 1 2 α k 1 i = 1 m I δ i = k log x i + j = 1 R i log z i j k = 1 2 i = 1 m Ψ δ i log x i α k + β α k + j = 1 R i log ( z i j α k + β α k ) ,
where n k = ∑ i = 1 m I δ i = k ( R i + 1 ) represents the total number of failures for each cause in the complete dataset with size n, n = n 1 + n 2 , and Ψ d = 2 I d = k + I d = 3 − k = I d = k + 1 . After differentiating the log-likelihood function in Equation (38) with respect to α k and β and setting the resulting normal equations to zero, the maximum likelihood estimators (MLEs) of the parameters α 1 , α 2 , and β for the entire sample W can be determined as follows:
L c α k = n k α k + n ln β + i = 1 m I δ i = k log x i + j = 1 R i log z i j i = 1 m Ψ δ i x i α k log x i + β α k log β x i α k + β α k + j = 1 R i z i j α k log z i j + β α k log β ( z i j α k + β α k ) = 0 , k = 1 , 2 ,
and
L c β = n k = 1 2 α k β k = 1 2 i = 1 m Ψ δ i α k β α k 1 x i α k + β α k + j = 1 R i α k β α k 1 z i j α k + β α k = 0 .
The pseudo-log-likelihood function changes during the E-step to become
n k α k + n ln β + i = 1 m I δ i = k log x i + i = 1 m I δ i = k j = 1 R i E log z i j | z i j > x i i = 1 m Ψ δ i x i α k log x i + β α k log β x i α k + β α k + E z i j α k log z i j + β α k log β z i j α k + β α k | z i j > x i = 0 , k = 1 , 2 ,
and
n k = 1 2 α k β k = 1 2 i = 1 m Ψ δ i α k β α k 1 x i α k + β α k + j = 1 R i E α k β α k 1 z i j α k + β α k | z i j > x i = 0 .
To carry out the E-step, we first need the following result.
Theorem 2. 
Given X 1 = x 1 , , X i = x i , the conditional distribution of z i j , j = 1 , , R i , which has the left-truncated NP distribution at x i , is given by
f Z | x ( z i j | X i = x i , α 1 , α 2 , β ) = f Z ( z i j | α 1 , α 2 , β ) 1 − F X ( x i | α 1 , α 2 , β ) , z i j > x i ,
where F X ( x i ; α 1 , α 2 , β ) and f Z ( z i j ; α 1 , α 2 , β ) can be found in Equations (7) and (8), respectively.
Proof. 
The proof is straightforward and can be found in detail in [16]. Hence, based on Equation (43), the conditional expectations described in Equations (39) and (42) are obtained, as illustrated below:
E 1 x i , α 1 , α 2 , β = E log z i | z i > x i = 4 β α 1 + α 2 1 F X ( x i ; α 1 , α 2 , β ) k = 1 2 x i α k y α k 1 log y y α k + β α k 2 y α 3 k + β α 3 k d y ,
E 2 x i , α 1 , α 2 , β = E z i j α k log z i j + β α k log β z i j α k + β α k | z i j > x i = 4 β α 1 + α 2 1 F X ( x i ; α 1 , α 2 , β ) k = 1 2 x i α k y α k 1 log y α k log y + β α k log β y α k + β α k 3 y α 3 k + β α 3 k d y ,
and
E 3 x i , α 1 , α 2 , β = E α k β α k 1 z i j α k + β α k | z i j > x i = 4 β α 1 + α 2 1 F X ( x i ; α 1 , α 2 , β ) k = 1 2 x i α k 2 y α k 1 β α k 1 y α k + β α k 3 y α 3 k + β α 3 k d y .
In the M-step of the EM algorithm, at the ( l + 1 ) iteration, the value of β l + 1 is first obtained by solving the following equation:
n k = 1 2 α k l β l + 1 k = 1 2 i = 1 m Ψ δ i α k l β l + 1 α k l 1 x i α k l + β l + 1 α k l 1 + j = 1 R i E 3 x i , α 1 l , α 2 l , β l = 0 .
The estimate of β might then be obtained by
β ^ = n k = 1 2 α k l k = 1 2 i = 1 m Ψ δ i α k l β l + 1 α k l 1 x i α k l + β l + 1 α k l 1 + j = 1 R i E 3 ( x i , α 1 l , α 2 l , β l ) 1 .
After obtaining β l + 1 , solve the following to obtain α 1 l + 1 and α 2 l + 1 :
n k α k l + 1 + n ln β l + 1 + i = 1 m I δ i = k log x i i = 1 m Ψ δ i x i α k l + 1 log x i + β l + 1 α k l + 1 log β l + 1 x i α k l + 1 + β l + 1 α k l + i = 1 m I δ i = k j = 1 R i E 1 ( x i , α 1 l , α 2 l , β l + 1 ) i = 1 m Ψ δ i j = 1 R i E 2 ( x i , α 1 l , α 2 l , β l + 1 ) = 0 , k = 1 , 2 .
The estimate of α k might then be obtained by
α ^ k = n k i = 1 m Ψ δ i x i α k l + 1 log x i + β l + 1 α k l + 1 log β l + 1 x i α k l + 1 + β l + 1 α k l i = 1 m I δ i = k log x i n ln β l + 1 + n k i = 1 m Ψ δ i j = 1 R i E 2 ( x i , α 1 l , α 2 l , β l + 1 ) i = 1 m I δ i = k j = 1 R i E 1 ( x i , α 1 l , α 2 l , β l + 1 ) .
The next iteration uses ( α 1 l + 1 , α 2 l + 1 , β l + 1 ) as the new value of ( α 1 , α 2 , β ). Now, an iterative process can be employed to obtain the necessary maximum likelihood estimates of α 1 , α 2 , and β . This iterative procedure continues until
| α 1 ( l + 1 ) − α 1 ( l ) | + | α 2 ( l + 1 ) − α 2 ( l ) | + | β ( l + 1 ) − β ( l ) | < ε ,
for a predetermined small value of ε and some l. This suggested approach converges to a local maximum of the likelihood, since the log-likelihood increases with each iteration. We use the maximum likelihood estimates of the parameters based on the entire sample as starting values in the expectation maximization technique. Henceforth, we denote the maximum likelihood estimates of α 1 , α 2 , and β as α ^ 1 E M , α ^ 2 E M , and β ^ E M , respectively. Furthermore, the MLEs of S ( t ) and H ( t ) can be straightforwardly derived using the invariance property of the MLEs. These are given as
S ^ E M ( t ) = 4 β ^ E M α ^ 1 E M + α ^ 2 E M ∏ k = 1 2 ( t α ^ k E M + β ^ E M α ^ k E M ) , and H ^ E M ( t ) = ∑ k = 1 2 α ^ k E M t α ^ k E M − 1 ( t α ^ k E M + β ^ E M α ^ k E M ) , t > β > 0 .
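The E-step/M-step cycle and the stopping rule above can be sketched on a simpler model where the conditional expectation is available in closed form. The toy below uses an exponential lifetime with right censoring rather than the NP model of this paper; the function name, data, and tolerances are illustrative assumptions only.

```python
import numpy as np

def em_exponential(obs, cens, lam0=1.0, eps=1e-8, max_iter=1000):
    """EM for an Exp(lam) lifetime with right-censoring times `cens`.
    E-step: E[Z | Z > c] = c + 1/lam (memorylessness of the exponential).
    M-step: lam = n_total / (sum of observed + expected censored lifetimes)."""
    lam = lam0
    n = len(obs) + len(cens)
    for _ in range(max_iter):
        e_cens = cens + 1.0 / lam                      # E-step: conditional expectations
        lam_new = n / (np.sum(obs) + np.sum(e_cens))   # M-step: maximize pseudo-likelihood
        if abs(lam_new - lam) < eps:                   # stopping rule as in the text
            return lam_new
        lam = lam_new
    return lam
```

At convergence this matches the closed-form censored-data MLE, number of observed failures divided by total exposure (here 3 / 15 = 0.2), illustrating that the fixed point of the iteration is the MLE.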

4.2. Point Estimation via Stochastic Expectation Maximization (SEM) Algorithm

Most EM algorithms follow a simple closed-form approach, iterating through two phases. However, a notable limitation of the EM method is its susceptibility to becoming trapped in a saddle point, particularly when handling high-dimensional or complex data such as censored lifespan models (see [17]). Notably, the EM equations mentioned earlier lack a closed form, necessitating numerical computation. Hence, to obtain maximum likelihood estimators, we employed the SEM method in this context. The SEM algorithm, proposed in [18], replaces the expectation step (E-step) of the EM method with a stochastic step (S-step). This modification involves substituting each missing datum with a randomly generated value from the conditional distribution of the missing data, based on the observed data and current parameter values. By incorporating random values obtained from the conditional distribution of the missing data, the SEM algorithm augments the observed sample. It is often preferred over the EM approach due to its simplicity, avoidance of complex computations, and lack of reliance on calculating conditional expectations. Moreover, studies have demonstrated that the SEM approach is robust to initial values and performs effectively with small sample sizes; see [19]. Let W = ( X , Z ) represent the complete dataset, where X = ( x 1 , x 2 , , x m ) denotes the observed data, and Z i = ( z i 1 , z i 2 , , z i R i ) for i = 1 , 2 , , m , and j = 1 , 2 , , R i signifies the censored data when x i fails. Following the SEM algorithm’s principle, we initially generate the missing samples z i j , i = 1 , 2 , , m and j = 1 , 2 , , R i , from the truncated conditional distribution below:
G Z ( z i j ; α 1 , α 2 , β | z i j > x i ) = F Z ( z i j ; Θ ) − F X i ( x i ; α 1 , α 2 , β ) 1 − F X i ( x i ; α 1 , α 2 , β ) , z i j > x i .
To obtain a random sample from Equation (50), we start by generating a random value from the uniform distribution U ( 0 , 1 ) , denoted as u. Subsequently, the realization of z i j is derived as
z i j = F 1 ( u + ( 1 u ) F X i ( x i ; Θ ) ) ,
where F − 1 ( ⋅ ) represents the inverse function of F ( ⋅ ) , as defined in Equation (7). Subsequently, by replacing each value of z i j with the value generated by Equation (53), the ML estimators of α 1 , α 2 , and β at the ( l + 1 )th stage are obtained from Equations (39) and (40) using
β l + 1 = n k = 1 2 α k l k = 1 2 i = 1 m Ψ δ i α k l β l α k l 1 x i α k l + β l α k l 1 + j = 1 R i α k l β l α k l 1 z i j α k l + β l α k l 1 ,
and
α k l + 1 = n k i = 1 m Ψ δ i x i α k l log x i + β l + 1 α k l log β l + 1 x i α k l + β l + 1 α k l i = 1 m I δ i = k log x i n ln β l + 1 + n k i = 1 m Ψ δ i j = 1 R i z i j α k l log z i j + β l + 1 α k l log β l + 1 z i j α k l + β l + 1 α k l i = 1 m I δ i = k j = 1 R i log z i j .
As in the EM algorithm, the iterations can be terminated when | α 1 ( l + 1 ) − α 1 ( l ) | + | α 2 ( l + 1 ) − α 2 ( l ) | + | β ( l + 1 ) − β ( l ) | < ε .
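The S-step draw z = F^{-1}(u + (1 − u) F(x_i)) can be sketched directly. For illustration only, an exponential CDF stands in for the NP one so that F^{-1} has a simple closed form; the rate and truncation point below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_left_truncated_exp(x_i, lam, size, rng):
    """Draw from F truncated to (x_i, inf) by inverting
    G(z) = (F(z) - F(x_i)) / (1 - F(x_i)), i.e. z = F^{-1}(u + (1-u)F(x_i))."""
    u = rng.uniform(size=size)
    p = u + (1.0 - u) * (1.0 - np.exp(-lam * x_i))  # u + (1 - u) F(x_i)
    return -np.log1p(-p) / lam                       # F^{-1}(p) for Exp(lam)

z = sample_left_truncated_exp(x_i=2.0, lam=0.5, size=10_000, rng=rng)
```

Every draw exceeds the truncation point, and by the memoryless property the sample mean is close to x_i + 1/lam = 4 here, which is a quick sanity check on the inversion.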

4.3. Fisher Information Matrix

This section presents the observed Fisher information matrix derived from [20]’s concept of missing values. It is worth noting that this observed Fisher information matrix can be utilized for constructing asymptotic confidence intervals. The missing information principle, referenced in various scholarly articles, can be summarized as follows:
Observed information = Complete information Missing information .
Let us employ the following notation: Θ = ( α 1 , α 2 , β ) ;  X: the observed data; W: the complete data; and I X ( Θ ) , I W ( Θ ) , and I W | X ( Θ ) : the observed, complete, and missing information, respectively. By the definition in [20], the observed information is the difference between the complete and missing information, which may be stated in the following manner:
I X ( Θ ) = I W ( Θ ) I W | X ( Θ ) .
The complete information matrix I W ( Θ ) is provided as follows:
I W ( Θ ) = E 2 ln L ( W , Θ ) Θ 2 Θ = ( α 1 , α 2 , β ) = a 11 α 1 , α 2 , β a 12 α 1 , α 2 , β a 13 α 1 , α 2 , β a 21 α 1 , α 2 , β a 22 α 1 , α 2 , β a 23 α 1 , α 2 , β a 31 α 1 , α 2 , β a 32 α 1 , α 2 , β a 33 α 1 , α 2 , β | Θ = Θ ^ .
The symbol a i j ( α 1 , α 2 , β ) represents the matrices’ elements for I W ( Θ ) . They are listed in the following order:
a k k = E 2 ln L ( Θ ) α k 2 = 2 i = 1 n E β α k ( log x i log β ) 2 x i α k x i α k + β α k 2 i = 1 n E U 1 ( x ̲ , α k , α 3 k , β ) 2 V ( x ̲ , α k , α 3 k , β ) 2 + i = 1 n E U 2 ( x ̲ , α k , α 3 k , β ) V ( x ̲ , α k , α 3 k , β ) ,
where
U 1 ( x ̲ , α k , α 3 k , β ) = β α 3 k 1 + α k ln x i x i α k + α 3 k β α k x i α 3 k ln β + 1 + α k + α 3 k ln x i x i α k + α 3 k ,
U 2 ( x ̲ , α k , α 3 k , β ) = β α 3 k ln x i 2 + α k ln x i x i α k + α 2 β α k x i α 3 k ln 2 β + ln x i 2 + α k + α 3 k ln x i x i α k + α 3 k ,
and
V ( x ̲ , α k , α 3 k β ) = α k β α 3 k x i α k + α 3 k β α k x i α 3 k + α k + α 3 k x i α k + α 3 k ,
a k ( 3 k ) = a ( 3 k ) k = E 2 ln L ( Θ ) α k α 3 k = E 2 ln L ( Θ ) α 3 k α k = i = 1 n E k = 1 2 U 1 ( x ̲ , α k , α 3 k , β ) 1 + V ( x ̲ , α k , α 3 k , β ) V ( x ̲ , α k , α 3 k , β ) 2 ,
a k 3 = a 3 k = E 2 ln L ( η ) α k β = E 2 ln L ( η ) β α k = n β + 2 i = 1 n E β α k 1 β α k + 1 + α k log β α k log x i x i α k x i α k + β α k 2 + i = 1 n α 3 k x i α 3 k x i α k β α k + α 3 k α 3 k + α k α k α 3 k log β + α k α 3 k α k log x i V ( x ̲ , α k , α 3 k , β ) 2 + i = 1 n α 3 k x i α 3 k α 3 k β α 3 k x i 2 α k + α 3 k β 2 α k x i α 3 k V ( x ̲ , α k , α 3 k , β ) 2 + i = 1 n α 3 k x i α 3 k β α k ( α 3 k + α k α k + α 3 k log β α k α k + α 3 k log x i ) x i α k + α 3 k V ( x ̲ , α k , α 3 k , β ) 2 ,
and
a 33 = E 2 ln L ( η ) β 2 = n β 2 k = 1 2 α k + 2 k = 1 2 i = 1 n E α k β α k 2 α k 1 x i α k β α k x i α k + β α k 2 i = 1 n E α k 2 α 3 k 2 β α k x i α 3 k + β α 3 k x i α k 2 β 2 V ( x ̲ , α k , α 3 k β ) 2 + i = 1 n E α k α 3 k α 3 k 1 β α 3 k x i α k + α k 1 β α k x i α 3 k β 2 V ( x ̲ , α k , α 3 k , β ) ,
where for every function g ( y ) , the expected value is provided by
E g ( y ) = 0 g ( y ) f y , α 1 , α 2 , β d y = 4 β α 1 + α 2 k = 1 2 α k β g ( y ) y α k 1 y α k + β α k 2 y α 3 k + β α 3 k d y .
Moreover, when considering a single observation that was censored at the time of the ith failure, the Fisher information matrix is obtained as follows:
I W | X i ( Θ ) = E Z i | X i − ∂ 2 ln f Z | X z i | z i > x i , α 1 , α 2 , β ∂ Θ 2 Θ = ( α 1 , α 2 , β ) = b 11 x i , α 1 , α 2 , β b 12 x i , α 1 , α 2 , β b 13 x i , α 1 , α 2 , β b 21 x i , α 1 , α 2 , β b 22 x i , α 1 , α 2 , β b 23 x i , α 1 , α 2 , β b 31 x i , α 1 , α 2 , β b 32 x i , α 1 , α 2 , β b 33 x i , α 1 , α 2 , β | Θ = Θ ^ .
where f Z | X z i | z i > x i , α 1 , α 2 , β is given by Equation (43). It is therefore simple to retrieve the expected missing information as
I W | X ( Θ ) = i = 1 m R i I W | X i ( Θ ) ,
where I W | X i ( Θ ) is the information matrix of a single observation from the NP distribution left-truncated at x i .
For brevity and simplicity, assume that 𝟊 = f Z | X z i | z i > x i , α 1 , α 2 , β . From Equations (7), (8), and (43), the logarithm of the PDF of the truncated NP distribution with left truncation at x i can be written as
ln 𝟊 = ln α 1 + α 2 z i α 1 + α 2 + α 2 β α 1 z i α 2 + α 1 β α 2 z i α 1 B + k = 1 2 ln ( x i α k + β α k ) ,
where
B = ln z i + 2 k = 1 2 ln ( x i α k + β α k ) .
The negative expected value of the second partial derivatives with respect to α 1 , α 2 , and β are obtained by
b k k . = E 2 ln 𝟊 α k 2 = x i α k β α k ln x i ln β 2 ( x i α k + β α k ) 2 + 2 k = 1 2 E z i j α k β α k ln z i j ln β 2 ( z i j α k + β α k ) 2 + E z i 2 α 1 ( z i j α 2 + β α 2 ) 2 z i α 1 + α 2 α 2 β α 1 ( z i j α 2 α 1 + α 2 + α 1 β α 2 ) ln 2 z i j α 1 + α 2 z i α 1 + α 2 + α 2 β α 1 z i α 2 + α 1 β α 2 z i α 1 2 + E 2 z i α 1 + α 2 α 2 β α 1 ( z i j α 2 + β α 2 ) ln β z i α 1 + α 2 α 2 β α 1 ( z i j α 2 α 1 + α 2 + α 1 β α 2 ) ln 2 z i j α 1 + α 2 z i α 1 + α 2 + α 2 β α 1 z i α 2 + α 1 β α 2 z i α 1 2 + E 2 z i α 1 + α 2 α 2 β α 1 ln z i j ( z i j α 2 β α 2 + ( z i j α 2 α 1 + α 2 + α 1 β α 2 ) ln β ) α 1 + α 2 z i α 1 + α 2 + α 2 β α 1 z i α 2 + α 1 β α 2 z i α 1 2 ,
b k 3 . = b 3 k . = E 2 ln 𝟊 α k β = β α k 1 x i α k + β α k + α k x i α k ln z i j ln β ( x i α k + β α k ) 2 + E β α k 1 z i j α k + β α k + α k z i j α k ln z i j ln β ( z i j α k + β α k ) 2 , k = 1 , 2 ,
b 33 . = E 2 ln 𝟊 β 2 = α 1 β α k 2 α 1 1 x i α k β α k ) ( x i α k + β α k ) 2 + E α 1 β α k 2 α 1 1 z i j α k β α k ) ( z i j α k + β α k ) 2 .
The inverse of the observed Fisher information matrix I X ( Θ ) at the maximum likelihood estimates Θ ^ E M = ( α ^ 1 E M , α ^ 2 E M , β ^ E M ) is the asymptotic variance–covariance matrix of Θ ^ E M :
I X 1 ( Θ ^ E M ) = I W ( Θ ^ E M ) I W | X ( Θ ^ E M ) 1 .
Accordingly, for Θ = ( α 1 , α 2 , β ) , the 100 ( 1 γ ) % asymptotic confidence interval is then given by
α ^ 1 E M ± Z γ / 2 V a r ( α ^ 1 E M ) , α ^ 2 E M ± Z γ / 2 V a r ( α ^ 2 E M ) , and β ^ E M ± Z γ / 2 V a r ( β ^ E M ) ,
and the 100 ( 1 γ ) % log-transformed MLE confidence intervals of Θ = ( α 1 , α 2 , β ) through the EM algorithm are as follows:
Θ ^ E M exp − Z ( γ / 2 ) V a r ( Θ ^ E M ) Θ ^ E M 2 , Θ ^ E M exp Z ( γ / 2 ) V a r ( Θ ^ E M ) Θ ^ E M 2 , Θ ^ E M = ( α ^ 1 E M , α ^ 2 E M , β ^ E M ) .
Using the delta method in the same way as presented in Section 3, interval estimators of S ( t ) and H ( t ) can also be obtained.
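Numerically, this construction reduces to one matrix inversion followed by the two interval formulas. The sketch below uses made-up information matrices and parameter estimates (not values from the paper) purely to show the mechanics of the missing information principle and the plain vs. log-transformed intervals.

```python
import numpy as np

# Illustrative placeholders: (alpha1, alpha2, beta) estimates and the
# complete / missing information matrices. None of these are real values.
theta_hat = np.array([0.7, 0.8, 0.5])
I_W  = np.array([[40.0, 2.0, 1.0], [2.0, 35.0, 1.0], [1.0, 1.0, 50.0]])
I_WX = np.array([[ 5.0, 1.0, 0.0], [1.0,  4.0, 0.0], [0.0, 0.0, 10.0]])

cov = np.linalg.inv(I_W - I_WX)   # observed info = complete - missing; invert it
se = np.sqrt(np.diag(cov))        # asymptotic standard errors
z = 1.959964                      # upper 2.5% point of N(0, 1)

# Plain asymptotic CI: theta_hat -/+ z * se
aci = np.column_stack([theta_hat - z * se, theta_hat + z * se])
# Log-transformed CI: theta_hat * exp(-/+ z * se / theta_hat), always positive
laci = np.column_stack([theta_hat * np.exp(-z * se / theta_hat),
                        theta_hat * np.exp( z * se / theta_hat)])
```

The log-transformed interval is often preferred for positive parameters since its lower endpoint cannot go below zero, unlike the plain interval.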

5. Bayesian Estimation

In this section, Bayesian estimates for the NP distributions are generated based on progressively censored competing risks samples, focusing on the unknown parameters α 1 , α 2 , and β , alongside the reliability function S ( t ) and the hazard function H ( t ) . Estimations are derived considering three distinct loss functions: the general entropy loss function (GELF), LINEX loss function (LLF), and squared error loss function (SELF). While the LINEX and GE loss functions exhibit asymmetry, the squared error loss function is symmetric, affording equal weight to both over- and under-estimations. Given its prevalence in the Bayesian literature, the SELF is often chosen for its symmetry; however, in cases where over-estimation or under-estimation holds varying degrees of significance, asymmetric loss functions such as the LINEX come into play. In Bayesian estimation, prior distributions for parameters are essential, drawing from previous experiences and available parameter information. Since the NP distribution lacks a natural conjugate prior, and joint conjugate priors are not feasible for unknown parameters, independent gamma priors with hyper-parameters ( a i , b i ) , i = 1 , 2 , 3 , are recommended for all positive parameters in the model.
π 1 α 1 ∝ α 1 a 1 − 1 e − b 1 α 1 , π 2 α 2 ∝ α 2 a 2 − 1 e − b 2 α 2 , and π 3 β ∝ β a 3 − 1 e − b 3 β ; α 1 , α 2 , β > 0 .
In this context, each hyper-parameter ( a i , b i ) , i = 1 , 2 , 3 , is presumed known and non-negative; setting all of them to zero yields the non-informative prior case. Consequently, the joint prior distribution of ( α 1 , α 2 , β ) can be expressed as follows:
π α 1 , α 2 , β α 1 a 1 1 α 2 a 2 1 β a 3 1 e b 1 α 1 + b 2 α 2 + b 3 β .
In combining the prior information from Equation (65) with the likelihood function presented in Equation (14), the joint posterior distribution can be represented as follows:
π * α 1 , α 2 , β | x ̲ α 1 a 1 1 α 2 a 2 1 β a 3 1 e b 1 α 1 + b 2 α 2 + b 3 β i = 1 m k = 1 2 α k β α k β α 3 k x i α k 1 x i α k + β α k 2 x i α 3 k + β α 3 k I δ i = k × i = 1 m k = 1 2 β α k x i α k + β α k R i .
The Bayes estimator of any function of α 1 , α 2 , and β , say Φ ( α 1 , α 2 , β ) under the SE, LINEX, and GE loss functions, can be expressed as
Φ ^ B S = E Φ ( α 1 , α 2 , β ) | x ̲ = x 1 0 0 Φ ( α 1 , α 2 , β ) π * α 1 , α 2 , β | x ̲ d α 1 d α 2 d β x 1 0 0 π * α 1 , α 2 , β | x ̲ d α 1 d α 2 d β ,
Φ ^ L I N E X = 1 c log x 1 0 0 e c Φ ( α 1 , α 2 , β ) π * α 1 , α 2 , β | x d α 1 d α 2 d β x 1 0 0 π * α 1 , α 2 , β | x ̲ d α 1 d α 2 d β ,
and
E α 1 , α 2 , β | x ̲ Φ ( α 1 , α 2 , β ) − q = x 1 0 0 Φ ( α 1 , α 2 , β ) − q π * α 1 , α 2 , β | x ̲ d α 1 d α 2 d β x 1 0 0 π * α 1 , α 2 , β | x ̲ d α 1 d α 2 d β .
Numerically solving the Bayes estimators becomes necessary as they are not explicitly obtained under the SE or LINEX loss functions. Hence, we propose deriving the Bayes estimators of α 1 , α 2 , and β using the MCMC method. Initially, the fully conditional posterior distribution π * α k | x ̲ , β of α k is
π * α k | x ̲ , β α k a k + m k 1 β n α k e b k α k i = 1 m x i α k 1 I δ i = k i = 1 m x i α k + β α k Δ k δ i , R i , k = 1 , 2 ,
and the fully conditional posterior distribution π * β | x ̲ , α 1 , α 2 of β is
π * β | x ̲ , α 1 , α 2 β a 3 1 + n α k + α 3 k e b 3 β i = 1 m k = 1 2 x i α k 1 x i α k + β α k 2 x i α 3 k + β α 3 k I δ i = k × i = 1 m k = 1 2 1 x i α k + β α k R i .
The absence of closed-form solutions for the three posterior distributions necessitates the utilization of the Metropolis–Hastings algorithm to derive our Bayes estimators from posterior samples. The Metropolis–Hastings (M-H) algorithm is a valuable tool for generating random samples from the posterior distribution by leveraging a proposal density, as detailed in [21,22]. Algorithm 1 proceeds through the following stages:
Algorithm 1 MCMC algorithm
Step 1: 
Choose an initial guess of ( α 1 , α 2 , β ) denoted by ( α 1 0 , α 2 0 , β 0 ), and set i = 1 .
Step 2: 
Using the Metropolis–Hastings method, generate β i from π * with the normal proposal distribution N β i − 1 , V a r β , where β i − 1 is the current value of β , and V a r β is the variance of β . Perform the following:
  • Generate a proposal β * from N β i 1 , V a r β ;
  • Evaluate the acceptance probability ρ = min 1 , π * ( β * | α 1 i − 1 , α 2 i − 1 , x ̲ ) π * ( β i − 1 | α 1 i − 1 , α 2 i − 1 , x ̲ ) ;
  • Generate a u from a Uniform ( 0 , 1 ) distribution;
  • If u ≤ ρ , accept the proposal and set β i = β * ; else, set β i = β i − 1 .
Step 3: 
In the same way as in the previous step, generate α k i from π * α k i 1 | x ̲ , β i with the normal proposal distribution N α k i 1 , V a r α k , where ( α k i 1 ) is the current value of α k , and V a r α k is a variance of α k .
Step 4: 
For given t, compute the reliability and hazard functions:
S ( t , α 1 i , α 2 i , β i ) = 4 ∏ k = 1 2 β i α k i t α k i + β i α k i ,
and
H ( t , α 1 i , α 2 i , β i ) = ∑ k = 1 2 α k i t α k i − 1 t α k i + β i α k i .
Step 5: 
Set i = i + 1 .
Step 6: 
Repeat steps ( 2 4 ) N times and obtain the desired number of samples. After discarding the first M burn-in samples, the remaining N M samples are used to obtain the Bayesian estimates.
Step 7: 
It is now possible to calculate the Bayes estimate of Φ = Φ ( α 1 , α 2 , β ) under the SE, LINEX, and GE loss functions as
Φ ^ S E = 1 N − M ∑ i = M + 1 N Φ ( α 1 i , α 2 i , β i ) ,
Φ ^ L I N E X = − 1 c log 1 N − M ∑ i = M + 1 N e − c Φ ( α 1 i , α 2 i , β i ) ,
and
Φ ^ G E = 1 N − M ∑ i = M + 1 N Φ ( α 1 i , α 2 i , β i ) − q − 1 q ,
where Φ = Φ ( α 1 , α 2 , β ) denotes parameters α 1 , α 2 , β , S ( t ) , and H ( t ) .
Step 8: 
Order Φ ( M + 1 ) , Φ ( M + 2 ) , …, Φ ( N ) as Φ ( 1 ) < Φ ( 2 ) < < Φ ( N M ) . Consequently,
Φ ( [ ( N − M ) γ / 2 ] ) , Φ ( [ ( N − M ) ( 1 − γ / 2 ) ] )
yields the 100 ( 1 − γ ) % Bayesian credible interval of Φ . Here, [ q ] stands for the integer part of q .
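The sampling step and the three loss-based point estimates of Algorithm 1 can be sketched compactly. The Gamma(3, 2) target below stands in for the conditional posterior π*, and the proposal standard deviation, chain length, and burn-in are all illustrative assumptions, not tuned values from the paper.

```python
import numpy as np

def log_target(x):
    """Log-density (up to a constant) of a Gamma(3, 2) stand-in for pi*."""
    return (3 - 1) * np.log(x) - 2 * x if x > 0 else -np.inf

def mh_chain(n_iter, x0=1.0, prop_sd=0.5, seed=1):
    """Random-walk Metropolis-Hastings with a normal proposal N(x, prop_sd^2)."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x = x0
    for i in range(n_iter):
        prop = rng.normal(x, prop_sd)                      # propose
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                                       # accept with prob. rho
        chain[i] = x                                       # else keep current value
    return chain

def se_est(phi):         # squared-error loss: posterior mean
    return phi.mean()

def linex_est(phi, c):   # LINEX loss: -(1/c) log E[exp(-c * phi)]
    return -np.log(np.mean(np.exp(-c * phi))) / c

def ge_est(phi, q):      # general entropy loss: (E[phi^{-q}])^{-1/q}
    return np.mean(phi ** (-q)) ** (-1.0 / q)

chain = mh_chain(30_000)[5_000:]   # discard burn-in, as in Step 6
```

On a degenerate posterior sample all three estimators coincide, which gives a quick consistency check on the loss-function formulas.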

6. Actual Data Illustration

The main objective of this part is to illustrate how the recommended techniques may be used in real-world situations. The first example involves the failure times of electrical appliances. The source of this dataset is listed in [23]. This real dataset, first examined by [24], contains 18 electrical equipment failure times. Among all the failure modes found in these data, only failure mode 11 is observed to occur more than twice. In this article, failure mode 11 was regarded as the first cause (cause-1) and the other failure modes as the second cause (cause-2). In this instance, we treat X 1 and X 2 as failing because of two competing risks under cause-1 and cause-2, respectively. It is observed that there are 8 failures attributed to cause-1 and 13 failures attributed to cause-2. Data analysis begins with dividing the original data by 100. Since this transformation is only a scale change, it does not affect this study's results. Table 1 displays the electrical appliance survival times.
Before proceeding, we first examine whether these datasets can be analyzed using the NP distribution. It is well known that the Kolmogorov–Smirnov (K–S) test may be used with both very small and large samples. The Kolmogorov–Smirnov distances and the accompanying p-values (in brackets) for cause-1 and cause-2 are, respectively, 0.3637 ( 0.2405 ) and 0.1792 ( 0.7981 ). Because the p-values associated with both causes are clearly higher than the significance level of γ = 0.05 , we cannot reject the null hypothesis that the electrical appliance data originate from the NP distribution. Figure 2 displays the plot of the empirical vs. the fitted CDFs. Furthermore, the most widely used graphical approach for model validation is the quantile–quantile (Q-Q) plot, which assesses whether the fitted model agrees with the given data. Let F ^ ( x ) be the estimated value of F ( x ) given x 1 , x 2 , … , x n . The points F ^ ( i / n ) vs. x i : n , where i = 1 , 2 , … , n , are plotted as a Q-Q scatter diagram, displaying the estimated against the observed quantiles. If the model matches the data well, the points on the Q-Q plot should fall close to a 45 ∘ straight line. Figure 3 displays the quantile–quantile (Q-Q) plot for the appliance failure data. As can be seen in Figure 3, a rough straight-line pattern suggests a good fit for the NP model. Furthermore, one crucial graphical technique for determining whether the data may be fitted by a particular distribution is the total time on test (TTT) plot. The empirical TTT statistic is given by τ k / n = ∑ i = 1 k x i : n + ( n − k ) x k : n / ( ∑ i = 1 n x i ) , k = 1 , 2 , … , n , where x i : n denotes the sample's order statistics. In a graphical representation of the TTT plot, the HRF is increasing (or decreasing) if the TTT plot is concave (or convex), and it is constant if the TTT plot is a straight diagonal line. 
When the TTT plot is convex and then concave, the HRF is U-shaped; otherwise, it is unimodal. Figure 4 shows the TTT plot for the two datasets. The failure data’s total time on the test (TTT) plot in Figure 4 illustrates the increasing and decreasing hazard rate for cause-1 and cause-2, respectively, which is consistent with the NP distribution.
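The empirical scaled TTT transform plotted in Figure 4 can be computed directly from the order statistics; the data vector below is illustrative, not the appliance data.

```python
import numpy as np

def ttt_statistic(x):
    """Scaled TTT transform: tau(k/n) = [sum_{i<=k} x_{i:n} + (n-k) x_{k:n}]
    / sum_i x_{i:n}, for k = 1, ..., n."""
    x = np.sort(np.asarray(x, dtype=float))   # order statistics x_{1:n} <= ... <= x_{n:n}
    n = len(x)
    csum = np.cumsum(x)
    k = np.arange(1, n + 1)
    return (csum + (n - k) * x) / csum[-1]

tau = ttt_statistic([0.3, 1.2, 0.7, 2.5, 0.9])   # made-up lifetimes
```

Plotting tau against k/n and inspecting concavity/convexity then reproduces the graphical diagnostic described above; by construction the statistic is non-decreasing and ends at 1.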
Three groups of progressive Type-II samples are formed based on the total failure data from Table 1 and are displayed in Table 2. These generated samples are used to compute the MLEs and Bayes estimates of α 1 , α 2 , β , S ( t ) , and H ( t ) together with their standard errors (in parentheses).
In real-world data analysis, choosing the starting values for an NR algorithm is a challenging task. For Sc. I, the initial guess of β is taken to be x 1 , and the graphical approach presented in [25] is utilized to find starting values for the shape parameters α 1 and α 2 , where, from Equation (19), the curves of 1 α k and Φ ( α k ) are plotted in Figure 5. According to Figure 5, for k = 1 , 2 , the crossing points of the two functions ( 1 α k , Φ ( α k ) ) are around 0.1543 and 0.4605 , respectively. To acquire the MLEs of α 1 and α 2 , we thus recommend starting the iteration with α 1 = 0.15 and α 2 = 0.46 as initial values. Moreover, a contour plot of the log-likelihood function of α 1 and α 2 with β = x 1 is shown in Figure 6.
Based on these initial values, we first calculated the MLEs of α 1 , α 2 , β , S ( t ) , and H ( t ) using the NR approach. Table 3 presents the values of the MLEs and their estimated standard errors, where t = 0.4 is the evaluation point for the reliability function S ( t ) and the hazard rate function H ( t ) . We also analyzed the dataset (Sc. I) using the EM and SEM methods developed in Section 4. The MLEs obtained from the NR approach are used as the starting values of α 1 , α 2 , and β for the EM and SEM algorithms. Table 3 also lists all point estimates obtained using the EM and SEM methods, along with their estimated standard errors. It can be seen from the standard errors that the estimates produced by the NR approach are often higher than those produced by the EM or SEM algorithms. Next, the 95% asymptotic confidence intervals of α 1 , α 2 , β , S ( t ) , and H ( t ) were computed using the asymptotic normality of the MLEs and the missing information principle.
The next step is to compute the Bayes estimates of α 1 , α 2 , β , S ( t ) , and H ( t ) under the SE, LINEX, and GE loss functions using the MCMC samples. Selecting the hyper-parameters for a real dataset in Bayesian estimation is a difficult task since the true parameter values are not known in advance. Hence, we combine the non-informative prior assumption ( a i = b i = 0 , i = 1 , 2 , 3 ) with the MCMC approach to build the Bayes estimators. The initial values for executing the MCMC algorithm are taken to be the MLEs of α 1 , α 2 , and β . Employing 30 , 000 posterior sample points and discarding the first 5000 as burn-in, the Bayes estimates of α 1 , α 2 , β , S ( t ) , and H ( t ) are evaluated using the M-H method. To apply the MCMC approach, a normal proposal density is constructed with mean equal to the MLEs of the unknown parameters and the corresponding variance–covariance matrix.
It is well known that, for the LINEX loss function, overestimation incurs a larger penalty than underestimation when c > 0 , and the opposite holds when c < 0 . Additionally, as c approaches 0, the LINEX loss function becomes symmetric and behaves much like the SE loss function. The Bayes estimates under the LINEX loss function were produced for two different values, c = − 5 and + 2 . Furthermore, for the general entropy (GE) loss function, the values of the parameter q were chosen to be − 2 and + 2 . The point Bayes estimates for α 1 , α 2 , β , S ( t ) , and H ( t ) were computed and are presented in Table 3, along with the estimated standard errors, and the corresponding 95% credible intervals are given in Table 4.
To assess the convergence of the Markov chain Monte Carlo (MCMC) method for the studied dataset, we present density plots as well as trace plots of the MCMC outputs for the parameters α 1 , α 2 , β , S ( t = 0.4 ) , and H ( t = 0.4 ) in Figure 7. These graphs demonstrate the good convergence of all the parameters considered for different chains. Furthermore, the samples appear to come from the same posterior density, because all of the plots display a very strong overlap of the density plots for the different chains. Moreover, as Figure 7 illustrates, the estimates of α 1 , α 2 , β , S ( t = 0.4 ) , and H ( t = 0.4 ) for each sample were typically symmetrical. Thus, the trace and density plots indicate that the MCMC approach has a high degree of convergence.
The results are shown in Table 3 and Table 4, where the recommended Bayes estimates outperform the frequentist estimates in terms of the lowest standard errors. Moreover, the HPD credible interval estimates outperform the ACIs and LACIs based on the NR, EM, and SEM methods with respect to the shortest interval lengths (ILs). This example also shows how similar the results of all the classical estimates are to one another. However, it should be mentioned that the MLEs obtained using the SEM approach had the lowest standard errors. As a result, the maximum likelihood estimates obtained from the SEM algorithm frequently performed better than the estimates obtained from the NR and EM techniques. Furthermore, it is important to keep in mind that the techniques utilized to compute the point and interval estimators based on Sc. I were also applied to Sc. II and Sc. III; Table 4 and Table 5 present the results.

7. Simulation Study

This section compares the performance of the suggested estimation methods under progressive Type-II censoring using Monte Carlo simulations. The Monte Carlo simulation analysis was carried out using the Mathematica version 11 statistical program. A comparison is made between the point estimators of the competing risks lifetime parameters based on the following criteria:
(i) Bias = 1 N s ∑ i = 1 N s Θ i − Θ ^ i , where Θ i and Θ ^ i stand for the unknown parameters and the associated estimates, and N s is the number of simulation repetitions. A smaller average bias indicates closer agreement between the estimated model and the experimental data.
(ii) Mean squared error (MSE) = 1 N s ∑ i = 1 N s Θ i − Θ ^ i 2 . A lower MSE value indicates better estimation performance. All results are based on 1000 replications.
Additionally, average confidence lengths (ACLs) (a smaller width corresponds to a better interval estimate) and coverage percentages (CPs) (the probability that the true parameter values lie within the interval estimates) are used to evaluate the interval estimators of the asymptotic and HPD intervals. A confidence interval estimator performs well when its CP is close to the nominal 95% level. The significance threshold that we employed was γ = 0.05 . We generated progressive Type-II censored competing risks data, where the true parameter values of the NP distribution are taken to be ( α 1 , α 2 , β ) = ( 0.7 , 0.8 , 0.5 ) . The hazard rate functions in Figure 1 were used to determine the values of α 1 , α 2 , and β . For this simulation study, we chose three distinct censoring schemes, detailed below, for ( n , m ) = ( 30 , 20 ) , ( 30 , 25 ) , ( 50 , 35 ) , ( 50 , 45 ) , and ( 80 , 75 ) :
Sc. I: R_1 = R_2 = ⋯ = R_{m−1} = 0 and R_m = n − m;
Sc. II: R_1 = n − m and R_2 = R_3 = ⋯ = R_m = 0;
Sc. III: R_1 = R_2 = ⋯ = R_{n−m} = 1 and R_{n−m+1} = ⋯ = R_m = 0.
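The three removal patterns above can be sketched as follows for any valid (n, m); note that Sc. III requires n − m ≤ m, which holds for all sample sizes considered here (names are ours, not the paper's):

```python
def scheme(name, n, m):
    """Removal vector R for Sc. I-III; len(R) = m and sum(R) = n - m."""
    if name == "I":       # all n - m removals at the final observed failure
        R = [0] * (m - 1) + [n - m]
    elif name == "II":    # all n - m removals at the first observed failure
        R = [n - m] + [0] * (m - 1)
    elif name == "III":   # one removal at each of the first n - m failures
        R = [1] * (n - m) + [0] * (2 * m - n)
    else:
        raise ValueError("unknown scheme: " + name)
    assert len(R) == m and sum(R) == n - m
    return R
```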
Here, Sc. I is equivalent to the conventional Type-II censoring scheme, Sc. II removes all n − m surviving units at the first observed failure, and Sc. III spreads one removal over each of the first n − m failures. After the samples are generated, we use the NR, EM, and SEM algorithms to obtain the MLEs of the parameters α1, α2, β, S(t), and H(t). The EM and SEM procedures were started, in each case, at the true parameter values, and iteration stopped when the absolute difference between two successive iterates was less than 10^−5 for each of the three parameters. Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 display the biases and MSEs of the MLEs of α1, α2, β, S(t), and H(t), respectively. Additionally, we use the missing information principle, the asymptotic distribution of the MLEs, and the log-transformed MLEs to construct the 95% confidence intervals for α1, α2, β, S(t), and H(t) (at time t = 1.0).
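Progressive Type-II censored samples of this kind can be drawn with the classical uniform-transformation algorithm of Balakrishnan and Aggarwala; the sketch below is illustrative (all names are ours), and since the NP quantile function is not restated in this section, an ordinary Pareto quantile stands in for F⁻¹:

```python
import random

def progressive_sample(R, quantile, seed=2024):
    """One progressive Type-II censored sample for removal scheme R,
    given the quantile function F^{-1} of the lifetime distribution."""
    rng = random.Random(seed)
    m = len(R)
    w = [rng.random() for _ in range(m)]
    # V_i = W_i^(1 / (i + R_m + ... + R_{m-i+1})), i = 1, ..., m
    v = [w[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    xs, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= v[m - i]                  # V_m * V_{m-1} * ... * V_{m-i+1}
        xs.append(quantile(1.0 - prod))   # U_i = 1 - prod: censored U(0,1) order stats
    return xs

# Stand-in quantile: ordinary Pareto(alpha) on x >= 1 (NOT the NP quantile)
alpha = 1.5
pareto_q = lambda p: (1.0 - p) ** (-1.0 / alpha)
```

For example, `progressive_sample([0] * 14 + [6], pareto_q)` generates an ordered sample of size m = 15 from n = 21 units under the conventional Type-II pattern.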
In the Bayesian calculations, both a non-informative prior (NIP) and an informative prior (IP) are considered for the unknown parameters (α1, α2, β, S(t), H(t)). For the informative (proper) case, the hyper-parameters are set to a1 = 4, a2 = 5, a3 = 15, and b1 = b2 = b3 = 10, for which the prior means coincide with the corresponding true values. For the improper prior case, the hyper-parameters are set to ai = bi = 0 (NIP). Prior I and Prior II denote the informative and non-informative Bayes estimators, respectively. Three distinct loss functions, squared error loss (SEL), LINEX loss, and general entropy loss (GEL), are considered when evaluating all Bayes estimators of the unknown parameters (α1, α2, β, S(t), H(t)). The LINEX and GE loss parameters c and q were both set to ±2. The Bayesian estimates are obtained using the M-H technique described in Section 4, with the normal proposal densities of Equations (70) and (71) used to generate α1, α2, and β; the chains were initialized at the MLEs or at the true parameter values. Following Section 5, we generated N = 10,000 MCMC samples and discarded the first M = 1000 values as the burn-in period, so the Bayes estimates of α1, α2, β, S(t), and H(t) under the SE, LINEX, and GE loss functions are based on 9000 M-H draws. From the same M-H samples, we obtained the 95% symmetric credible intervals of α1, α2, β, S(t), and H(t). Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the relationships between the progressive censoring schemes (Cs.), bias, and MSE values for the different estimates of α1, α2, β, S(t), and H(t) at different sample sizes.
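Given the retained M-H draws, the point estimates under the three losses follow the standard Bayes rules: the posterior mean for SEL, −(1/c) ln E[e^{−cθ}] for LINEX, and (E[θ^{−q}])^{−1/q} for GE. A minimal sketch (function name and defaults are illustrative):

```python
import math

def bayes_points(draws, burn_in=1000, c=2.0, q=2.0):
    """SEL, LINEX, and GE point estimates from M-H draws after burn-in."""
    s = draws[burn_in:]
    n = len(s)
    sel = sum(s) / n                                                 # posterior mean
    linex = -(1.0 / c) * math.log(sum(math.exp(-c * t) for t in s) / n)
    ge = (sum(t ** (-q) for t in s) / n) ** (-1.0 / q)
    return sel, linex, ge
```

For c → 0 (respectively q → −1) the LINEX (GE) estimate approaches the posterior mean, which is why the SEL column serves as the natural baseline in Table 3.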
Overall, the numerical results displayed in each figure demonstrate that the recommended estimates improve as n increases: in every instance, biases and mean squared errors decrease with increasing sample size. For all unknown parameters, the Bayesian estimators outperform the MLEs, primarily because the Bayesian approach uses both the data and prior information about the unknown parameters, whereas the MLEs use the data alone; for the same reason, informative prior Bayesian estimators perform better than non-informative ones. When m increases with n fixed, or n increases with m fixed, the performance of the estimators of all unknown parameters improves, with both the MSE and the bias of the Bayesian estimators and MLEs decreasing; increasing the effective sample size is therefore one strategy for improving the estimates. The best Bayes estimates of the reliability function S(t) are obtained under the GE loss function with q = +2, whereas for α1, α2, β, and H(t), the Bayesian estimates under the LINEX loss function with c = 2 are better than those under the SE or GE loss functions. The MSE and bias values of all Bayes estimates of α1, α2, β, S(t), and H(t) are also presented in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. Finally, the ALs and associated CPs of the estimated confidence intervals for α1, α2, β, S(t), and H(t) are presented in Table 5, Table 6 and Table 7.
We found that the SEM strategy produces confidence intervals with shorter average lengths than either the NR or EM strategies. Still, among the five methods considered (NR, log-normal, EM, SEM, and HPD), the HPD credible intervals perform best: their ALs are the smallest, and their CPs are closest to the nominal level. Furthermore, Bayesian credible intervals with an informative prior outperform those with a non-informative prior, for all parameters α1, α2, β, S(t), and H(t).
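The HPD bounds reported in Tables 5–7 can be recovered from the MCMC draws by the usual shortest-interval scan over the sorted sample (in the spirit of Chen and Shao); a sketch with illustrative names:

```python
import math

def hpd_interval(draws, level=0.95):
    """Shortest interval containing a `level` fraction of the draws."""
    s = sorted(draws)
    n = len(s)
    k = max(1, math.ceil(level * n))          # draws the interval must contain
    # scan every window of k consecutive sorted draws; keep the narrowest
    i = min(range(n - k + 1), key=lambda j: s[j + k - 1] - s[j])
    return s[i], s[i + k - 1]
```

Because the scan keeps the narrowest qualifying window, the resulting interval is never wider than the equal-tailed symmetric credible interval at the same level.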
Finally, for point and interval estimation with various combinations of sample size (n, m) and censoring schemes, the Bayesian technique with an informative prior provides excellent results, and different estimates of the parameters perform differently under different loss functions. Consequently, if prior knowledge about the unknown parameters is available, the Bayesian approach is preferable; otherwise, the Bayesian approach with a non-informative prior is often selected when the sample size is modest.

8. Conclusions

In this study, we considered progressive Type-II censored data for the statistical inference of unknown lifetime parameters in the presence of independent competing failure mechanisms. The lifetime distribution for each cause of failure is assumed to follow an NP distribution. Both maximum likelihood and Bayesian estimation techniques were considered; the study covered point and approximate confidence interval estimation for the unknown parameters, reliability, and hazard rate functions. We note that the EM and SEM techniques significantly reduce the complexity of the numerical computation of the MLEs. To produce the Bayesian estimates under the squared error, LINEX, and GE loss functions, together with the corresponding credible intervals, the Metropolis–Hastings method was used within the Bayesian paradigm. We analyzed a real-world dataset to illustrate the techniques presented in this article and then carried out an extensive simulation analysis to contrast the effectiveness of the different estimators. When no subjective information is available, the MLEs perform rather well; with subjective information, the Bayesian estimators perform better than the MLEs, as expected.
Although the units here fail from two competing risks under progressive Type-II censoring, the results also apply to cases with more than two causes of failure and to other types of failure data, including complete data, Type-II censoring, and progressive first-failure censoring. The optimal design and sampling plan of progressive censoring under the competing risk model are interesting topics for further investigation and will be examined in future work.

Author Contributions

Methodology, E.A.A. and M.S.E.; software, E.A.A. and M.S.E.; validation, T.S.A.; formal analysis, E.A.A. and M.S.E.; investigation, E.A.A. and T.S.A.; resources, T.S.A.; data curation, E.A.A.; writing—review and editing, E.A.A. and M.S.E.; visualization, T.S.A. and M.S.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Qassim University under project number (QU-APC-2024-9/1).

Data Availability Statement

All datasets are listed within the article.

Acknowledgments

The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The effects of parameters α and β on the hazard function of N P ( α , β ) .
Figure 2. Fitted and empirical CDFs associated with NP distribution for two causes.
Figure 3. Q-Q plots associated with NP distribution for two causes.
Figure 4. T-T plots associated with NP distribution for two causes.
Figure 5. The MLE shape parameters with NP distribution.
Figure 6. Contour plot of the log-likelihood function of α 1 and α 2 .
Figure 7. MCMC trace plots (first row) and histograms (second row) of α1, α2, β, S(t), and H(t) for the electrical appliance dataset. Dashed lines (---) represent the posterior means, and solid lines (—) represent the lower and upper bounds of the 95% probability interval.
Figure 8. Relationship between the three censoring schemes, bias, and MSE values for different estimates of α 1 at different sample sizes.
Figure 9. Relationship between the three censoring schemes, bias, and MSE values for different estimates of α 2 at different sample sizes.
Figure 10. Relationship between the three censoring schemes, bias, and MSE values for different estimates of β at different sample sizes.
Figure 11. Relationship between the three censoring schemes, bias, and MSE values for different estimates of S ( t ) at different sample sizes.
Figure 12. Relationship between the three censoring schemes, bias, and MSE values for different estimates of H ( t ) at different sample sizes.
Table 1. Electrical appliance survival times.
Cause 1: 0.98, 4.13, 4.95, 6.92, 10.65, 11.93, 14.67, 19.37
Cause 2: 0.12, 0.16, 0.16, 0.46, 0.46, 0.52, 0.98, 2.70, 4.95, 5.57, 6.16, 11.07, 14.67
Table 2. Progressive Type-II data for electrical appliance data, with n = 21 and m = 15 .
Sc. I(6 , 0 14 ) ( 0.12 , 2 ) ( 0.16 , 2 ) ( 0.16 , 2 ) ( 0.46 , 2 ) ( 0.46 , 2 )
( 0.52 , 2 ) ( 0.98 , 2 ) ( 0.98 , 1 ) ( 2.70 , 2 ) ( 4.95 , 2 )
( 5.57 , 2 ) ( 6.16 , 2 ) ( 10.65 , 1 ) ( 11.93 , 1 ) ( 14.67 , 2 )
Sc. II( 1 6 , 0 9 ) ( 0.12 , 2 ) ( 0.16 , 2 ) ( 0.46 , 2 ) ( 0.46 , 2 ) ( 0.52 , 2 )
( 0.98 , 2 ) ( 0.98 , 1 ) ( 2.70 , 2 ) ( 4.13 , 1 ) ( 4.95 , 2 )
( 4.95 , 1 ) ( 6.92 , 1 ) ( 11.93 , 1 ) ( 14.67 , 2 ) ( 14.67 , 1 )
Sc. III( 0 14 , 6 ) ( 0.12 , 2 ) ( 0.16 , 2 ) ( 0.16 , 2 ) ( 0.46 , 2 ) ( 0.46 , 2 )
( 0.52 , 2 ) ( 0.98 , 2 ) ( 0.98 , 1 ) ( 2.70 , 2 ) ( 4.13 , 1 )
( 4.95 , 2 ) ( 4.95 , 1 ) ( 5.57 , 2 ) ( 6.16 , 2 ) ( 10.65 , 1 )
Table 3. ML and Bayesian point estimates (first row) and their standard errors (second row) of α 1 , α 2 , β , S ( t ) , and H ( t ) for the real data.
MLEBayes
Loss Function
SchemeParameterNREMSEMSELLINEXGEL
c = 2 c = 2 q = 2 c = 2
Sc. I: ( 6 , 0 14 ) α 1 0.15230.11110.10910.15010.15100.14980.15130.1466
(0.0218)(0.0061)(0.0065)(0.0048)(0.0521)(0.0072)(0.0014)(3.1353)
α 2 0.46380.49170.48350.45670.46500.45340.46030.4458
(0.0356)(0.0319)(0.0291)(0.0146)(0.7900)(0.0117)(0.0137)(0.3379)
β 0.12000.12000.12000.11410.11420.11400.11430.1139
(0.0146)(0.0143)(0.0133)(0.0009)(0.0077)(0.0014)(0.0002)(1.1689)
S ( t = 0.5 ) 0.60700.61040.61620.60060.60380.59940.60170.5974
(0.0224)(0.0237)(0.0217)(0.0092)(0.9349)(0.0055)(0.0110)(0.0874)
H ( t = 0.5 ) 0.78070.77730.76160.77420.79870.76500.78020.7561
(0.0611)(0.0474)(0.0432)(0.0247)(0.7132)(0.0105)(0.0392)(0.1149)
Sc. II: ( 1 6 , 0 9 ) α 1 0.23180.17220.18340.15600.15690.15570.15720.1527
(0.0234)(0.0093)(0.0118)(0.0048)(0.0537)(0.0071)(0.0015)(2.7634)
α 2 0.30560.35610.37820.41080.41810.40800.41420.4005
(0.0263)(0.0204)(0.0241)(0.0136)(0.5889)(0.0119)(0.0115)(0.4275)
β 0.12000.12000.12000.11560.11560.11550.11560.1154
(0.0157)(0.0157)(0.0137)(0.0009)(0.0077)(0.0014)(0.0002)(1.1245)
S ( t = 0.5 ) 0.65660.65940.64060.62760.63040.62650.62850.6249
(0.0213)(0.0246)(0.0210)(0.0085)(0.9958)(0.0049)(0.0107)(0.0722)
H ( t = 0.5 ) 0.64110.63780.68510.70660.72680.69920.71190.6910
(0.0492)(0.0306)(0.0369)(0.0223)(0.9597)(0.0107)(0.0323)(0.1335)
Sc. III: ( 0 14 , 6 ) α 1 0.12900.21610.22360.14810.14890.14780.14920.1447
(0.0160)(0.0113)(0.0158)(0.0047)(0.0497)(0.0069)(0.0014)(3.1217)
α 2 0.29500.21820.22610.39920.40540.39670.40220.3900
(0.0227)(0.0114)(0.0165)(0.0130)(0.5229)(0.0116)(0.0106)(0.4435)
β 0.12000.12000.12000.11570.11570.11570.11570.1155
(0.0170)(0.0181)(0.0155)(0.0009)(0.0076)(0.0014)(0.0002)(1.1088)
S ( t = 0.5 ) 0.71980.71620.70710.63890.64130.63780.63960.6364
(0.0206)(0.0250)(0.0204)(0.0083)(1.0171)(0.0047)(0.0105)(0.0661)
H ( t = 0.5 ) 0.49710.50100.52120.67840.69530.67210.68320.6644
(0.0367)(0.0204)(0.0296)(0.0211)(0.96918)(0.0107)(0.0293)(0.1419)
Table 4. The 95% interval estimates of α 1 , α 2 , β , S ( t ) , and H ( t ) for the real data.
SchemeMethod α 1 α 2 β S ( t = 0.5 ) H ( t = 0.5 )
Sc. I: ( 6 , 0 14 )ACI(−0.0127, 0.3172)(0.1936, 0.7340)(0.0094, 0.2306)(0.4370, 0.7768)(0.3165, 1.2448)
LACI(0.0515, 0.4498)(0.2590, 0.8305)(0.0477, 0.3016)(0.4588, 0.8030)(0.4308, 1.4147)
EM(0.0730, 0.1689)(0.0679, 0.8047)(0.0486, 0.2963)(0.4548, 0.8194)(0.4891, 1.2353)
SEM(0.0690, 0.1727)(0.3062, 0.7636)(0.0519, 0.2775)(0.4717, 0.8047)(0.4949, 1.1718)
HPD(0.1149, 0.1892)(0.3496, 0.5749)(0.1077, 0.1208)(0.5305, 0.6692)(0.5986, 0.9751)
Sc. II: ( 1 6 , 0 9 )ACI(0.0535, 0.4101)(0.1064, 0.5049)(0.0010, 0.2390)(0.4951, 0.8180)(0.2679, 1.0142)
LACI(0.1074, 0.5002)(0.1592, 0.5866)(0.0445, 0.3235)(0.5135, 0.8396)(0.3582, 1.1474)
EM(0.1144, 0.2591)(0.1114, 0.5502)(0.0445, 0.3235)(0.4969, 0.8751)(0.4430, 0.9185)
SEM(0.1124, 0.2993)(0.2332, 0.6136)(0.0505, 0.2852)(0.4995, 0.8216)(0.4550, 1.0317)
HPD(0.1206, 0.1949)(0.3145, 0.5210)(0.1090, 0.1221)(0.5594, 0.6896)(0.5536, 0.8917)
Sc. III: ( 0 14 , 6 )ACI(0.0068, 0.2511)(0.12215, 0.4679)(−0.0089, 0.2489)(0.5637, 0.8758)(0.2186, 0.7755)
LACI(0.0500, 0.3325)(0.1642, 0.5301)(0.0410, 0.3514)(0.5795, 0.894)(0.2839, 0.8704)
EM(0.1452, 0.3216)(0.1452, 0.3247)(0.0382, 0.3770)(0.5495, 0.9334)(0.3673, 0.6833)
SEM(0.1306, 0.3829)(0.1299, 0.3934)(0.0449, 0.3204)(0.5678, 0.8805)(0.3389, 0.8016)
HPD(0.1144, 0.1860)(0.3082, 0.5035)(0.1092, 0.1223)(0.5743, 0.6981)(0.535, 0.8502)
Table 5. The average confidence lengths (ALs) and the corresponding coverage percentages (CPs) of α 1 , α 2 , β , S ( t ) , and H ( t ) based on ML and Bayes estimates for censoring Scheme I.
MLEMCMC
( n , m ) NormalLog-NormalEMSEMNIPIP
ALCPALCPALCPALCPALCPALCP
(30, 20) α 1 1.62370.8132.19500.8750.99990.8590.82710.8790.62820.8210.37090.953
α 2 1.80540.8292.12470.8941.13880.8940.95770.8930.66010.8930.39120.962
β 1.15140.9390.98710.9070.65610.9930.54410.9650.11490.9550.08880.962
S ( t ) 0.60430.9310.42010.9180.39860.9740.39840.9930.24810.8870.15470.955
H ( t ) 1.72210.8142.44350.9451.15050.9571.01940.9890.67020.8210.39830.947
(30, 25) α 1 1.34800.9581.78740.9580.95960.9420.76440.9300.60000.9210.36100.951
α 2 1.48750.9592.04420.9671.08250.9750.92650.9250.65090.9320.38910.973
β 0.96360.9670.97150.9810.64220.9920.52300.9750.10860.9400.08180.962
S ( t ) 0.58220.9190.41660.9640.38510.9810.38910.9610.23440.9370.15100.964
H ( t ) 1.37560.9342.12490.9721.01450.9460.90820.9580.66180.9390.39290.947
(50, 35) α 1 1.28390.9181.47030.920.75500.9760.62220.9750.51800.9330.33670.963
α 2 1.45640.9331.67390.9130.82970.9880.68380.9810.53790.9320.35620.957
β 0.76700.9410.84920.9870.49370.9820.41520.9920.06580.9410.05370.951
S ( t ) 0.32530.9390.33100.9330.30320.9790.30180.9980.19070.9540.12990.959
H ( t ) 1.56510.9251.69490.9130.83310.9900.72300.9860.53600.9550.36190.975
(50, 45) α 1 0.85910.9840.87930.9820.74580.980.57860.9270.46090.9620.32550.960
α 2 0.97300.9910.98490.9730.79690.9820.66120.9600.48320.9470.34140.933
β 0.55360.9560.56880.9880.48900.9630.40650.9710.06400.9560.05130.960
S ( t ) 0.30500.9500.30900.9610.30210.9550.30290.9650.17540.9630.12250.967
H ( t ) 1.02250.9800.98910.9790.82270.9680.64320.9680.49200.9530.34740.968
(80, 60) α 1 0.82890.9820.84340.9810.58490.9670.46500.9450.39320.9230.29840.955
α 2 0.93310.9690.95190.9730.62890.9520.51830.9360.40510.9200.31420.946
β 0.53080.9900.55840.9800.39090.9670.32560.9760.03980.9790.03490.954
S ( t ) 0.25130.9640.25350.9670.24080.9820.23970.9870.14620.9420.11200.972
H ( t ) 0.94860.9650.95310.9720.63590.9800.52980.9730.41160.9630.31930.969
(80, 75) α 1 0.65380.9810.67680.9450.56810.9620.44290.9730.37480.9290.29130.962
α 2 0.70930.9570.73270.9800.61580.9570.48130.9640.38950.9370.30380.952
β 0.42660.9660.43940.9820.38270.9810.31710.9580.03620.9630.03370.967
S ( t ) 0.24410.9470.24610.9710.23780.9640.23960.9720.13560.9650.10660.960
H ( t ) 0.72760.9930.74560.9940.62140.9890.46420.9870.39950.9710.31280.946
Table 6. The average confidence lengths (ALs) and the corresponding coverage percentages (CPs) of α 1 , α 2 , β , S ( t ) , and H ( t ) based on ML and Bayes estimates for censoring Scheme II.
MLEMCMC
( n , m )Sc. IINormalLog-NormalEMSEMNIPIP
ALCPALCPALCPALCPALCPALCP
(30, 20) α 1 1.51930.8951.62580.8820.99350.9280.74260.8930.66980.9390.38170.957
α 2 1.74200.9211.67770.9151.09810.9480.88340.9090.66710.9460.38980.961
β 0.90440.9581.17780.9350.65920.9500.55080.9550.10930.9630.07930.953
S ( t ) 0.42010.9750.47150.9600.39970.9700.38990.9730.25340.9710.15280.958
H ( t ) 1.82890.9611.87240.9651.09640.9740.95070.9640.70840.9800.40410.980
(30, 25) α 1 1.29360.9411.45670.9670.95190.9380.71780.9410.60870.9330.36730.948
α 2 1.46500.9651.54020.9551.02030.9250.85440.9530.62610.9720.38350.936
β 0.80760.9890.92870.9710.65310.9960.52270.9620.10760.9810.07840.968
S ( t ) 0.41570.9860.42580.9890.38970.9860.37330.9710.23310.9670.15080.955
H ( t ) 1.52540.9571.69100.9761.06790.9500.89920.9640.65160.9420.39490.970
(50, 35) α 1 1.20550.9481.32110.9570.71050.9650.61480.8930.52410.9630.33960.965
α 2 1.31690.9631.46680.9690.85460.9700.68630.9250.52270.9570.35300.932
β 0.72460.9590.80160.9790.54340.9790.42670.9620.06560.9510.05110.950
S ( t ) 0.31710.9740.33750.9630.31650.9830.31100.9600.19210.9760.12850.912
H ( t ) 1.43750.9681.51270.9820.93940.9740.84130.9070.54630.9800.36030.956
(50, 45) α 1 0.98630.9391.13460.9860.65590.9540.58350.9640.45960.9780.32510.976
α 2 1.08980.9441.21080.9820.82020.9590.64850.9620.47500.9690.33950.957
β 0.58910.9190.64060.9870.49960.9600.40370.9740.05450.9830.04420.948
S ( t ) 0.30820.9410.32480.9650.30790.9520.30390.9780.17160.9820.12540.969
H ( t ) 1.09690.9631.23960.9840.82630.9030.63340.9800.48700.9850.34600.973
(80, 60) α 1 0.91610.9421.01690.9400.56110.9490.44160.9310.39100.9470.29870.920
α 2 0.99870.9501.15590.9360.61810.9570.49620.9350.41530.9520.31610.927
β 0.56610.9230.61060.9670.38790.9640.32400.9410.04070.9620.03580.940
S ( t ) 0.25790.9460.26190.9770.23910.9590.23110.9400.14780.9680.11160.953
H ( t ) 1.03160.9621.11030.9810.61500.9610.47510.9530.41630.9720.31840.946
(80, 75) α 1 0.60100.9610.62030.9480.55010.9590.42640.9730.34250.9580.27580.949
α 2 0.66640.9470.68610.9620.60820.9710.48350.9640.36040.9670.28890.951
β 0.40670.9560.41770.9670.37570.9610.31240.9570.03810.9550.03430.953
S ( t ) 0.24050.9690.24240.9820.22940.9740.22060.9700.13030.9590.10460.960
H ( t ) 0.66940.9650.68410.9800.60660.980.47280.9520.35750.9690.29040.957
Table 7. The average confidence lengths (ALs) and the corresponding coverage percentages (CPs) of α 1 , α 2 , β , S ( t ) , and H ( t ) based on ML and Bayes estimates for censoring Scheme III.
MLEMCMC
( n , m )Sc. IIINormalLog-NormalEMSEMNIPIP
ALCPALCPALCPALCPALCPALCP
(30, 20) α 1 1.50680.8991.41650.8580.96730.8850.78620.9440.69750.9620.39180.961
α 2 1.67270.9011.79010.8741.16280.9090.83410.9580.88790.9540.40460.958
β 1.09340.9120.91060.9060.67370.9240.53710.9660.09410.9270.07610.974
S ( t ) 0.42880.9330.45750.9610.40200.9640.40250.9840.27800.9320.15320.966
H ( t ) 1.77910.9471.83640.9311.14770.9500.77300.9750.92800.9740.43660.970
(30, 25) α 1 1.35990.9131.37950.9250.92050.9210.72570.9670.65610.8870.37650.964
α 2 1.51550.9251.59600.9301.09070.9550.77930.9350.70430.9270.39680.949
β 0.86010.9730.89830.9410.63280.9560.51460.9690.08710.9470.06600.964
S ( t ) 0.42460.9240.43100.9660.39610.9710.39490.9780.23740.9270.15160.973
H ( t ) 1.59010.9271.65300.9651.05480.9750.75200.9670.72430.9130.41090.982
(50, 35) α 1 1.18100.9271.28320.9340.75420.9350.54380.9610.56910.9430.35290.958
α 2 1.29600.9321.37180.9390.82440.9600.61380.9720.59420.9370.37890.952
β 0.79100.9650.71640.9510.49850.9570.40450.9540.05180.9850.05050.961
S ( t ) 0.32110.9190.34030.9450.30540.9680.31710.9560.20320.9720.13100.964
H ( t ) 1.39490.9411.42980.9670.83210.9740.59270.9640.63230.9680.38920.956
(50, 45) α 1 0.91550.9290.96550.9470.73360.9510.51360.9600.49600.9390.33740.971
α 2 1.00210.9320.86730.9540.81850.9560.60920.9410.51720.9230.35420.957
β 0.58300.9700.63410.9620.49520.9700.39100.9650.04650.9470.04600.962
S ( t ) 0.31600.9610.32280.9690.30260.9680.31160.9710.17740.9120.12780.967
H ( t ) 1.04080.9681.17340.9590.82480.9730.58370.9590.54870.9130.36810.960
(80, 60) α 1 0.70330.9490.76030.9440.58490.9490.44290.9620.48140.9400.32860.967
α 2 0.77830.9380.84300.9570.66540.9070.50210.9560.49950.9120.35370.981
β 0.46800.9680.58100.9620.39090.9680.30820.9720.03620.9730.03360.973
S ( t ) 0.30980.9770.31940.9770.24130.9830.23970.9660.16210.8670.11720.965
H ( t ) 0.98210.9511.01900.9640.65660.9870.46920.9610.52800.8670.36660.947
(80, 75) α 1 0.65380.9550.67680.9780.57260.9680.39960.9690.37480.9530.29130.962
α 2 0.70930.9430.73270.9800.62890.9650.48130.9730.38950.9870.30450.957
β 0.42660.9760.43940.9810.38780.9710.30500.9680.03010.9600.02960.967
S ( t ) 0.24410.9810.24610.9790.24080.9670.23780.9710.13560.9590.10670.961
H ( t ) 0.72760.9630.74560.9940.63590.9890.46400.9870.39950.9810.31260.948