Article

Estimation of Marshall–Olkin Extended Generalized Extreme Value Distribution Parameters under Progressive Type-II Censoring by Using a Genetic Algorithm

by Rasha Abd El-Wahab Attwa 1, Shimaa Wasfy Sadk 1 and Taha Radwan 2,3,*
1 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
2 Department of Management Information Systems, College of Business and Economics, Qassim University, Buraydah 52571, Saudi Arabia
3 Department of Mathematics and Statistics, Faculty of Management Technology and Information Systems, Port Said University, Port Said 42521, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(6), 669; https://doi.org/10.3390/sym16060669
Submission received: 9 March 2024 / Revised: 4 May 2024 / Accepted: 8 May 2024 / Published: 29 May 2024

Abstract:
In this article, we consider the statistical analysis of parameter estimation for the Marshall–Olkin extended generalized extreme value under linear normalization (MO-GEVL) distribution within the context of progressively type-II censored data. The progressively type-II censored data are considered under three specific removal patterns: fixed, discrete uniform, and binomial random removal. The challenge lies in the computation of the maximum likelihood estimations (MLEs), as there is no straightforward analytical solution. Classical numerical methods are inadequate for solving the complex system of MLE equations, which necessitates artificial intelligence algorithms; this article utilizes the genetic algorithm (GA) to overcome this difficulty. We consider parameter estimation through both maximum likelihood and Bayesian methods. For the MLE, the confidence intervals of the parameters are calculated using the Fisher information matrix. In the Bayesian estimation, the Lindley approximation is applied under both the LINEX and squared error loss functions, suitable for both non-informative and informative contexts. The effectiveness and applicability of the proposed methods are demonstrated through numerical simulations and practical real-data examples.

1. Introduction

Extreme value theory (EVT) is fundamental to various sectors, including environmental sciences, engineering, finance, insurance, and others. In finance, EVT is especially important for studying price changes in areas such as financial assets or market indices, and forecasting these extremes is important for effective risk management and strategy formulation. The importance of EVT lies in modeling the distribution tail, which is essential for evaluating the risk of rare events. This is crucial in market crashes or unexpected surges, where typical financial models may not appropriately capture the risk or its possible effects. Ref. [1] is an excellent resource that provides in-depth insights into EVT and its applications to financial data, offering a comprehensive understanding of its methodologies and implications in the financial sector. EVT’s primary goal is to provide a probabilistic description of extreme occurrences within a series of random events. The core concepts of EVT, as originally outlined by [2], form the cornerstone of the conventional EVT framework.
EVT identifies three forms of extreme value distributions: Fréchet, Gumbel, and inverse Weibull. The Fréchet distribution has an unbounded, heavy upper tail, indicating that it can handle very large values; it is especially beneficial for modeling data with extreme maxima that are much higher than the average. Meanwhile, the Gumbel distribution is known for its lighter upper tail, particularly when compared to the Fréchet distribution, and is frequently used in situations where the distribution of extremes lacks a heavy tail. Because the inverse Weibull distribution has a finite upper tail, it is appropriate for modeling situations in which values have a clear upper bound. The generalized extreme value distribution under linear normalization (GEVL) is a unified framework that combines these three types of distributions, offering a comprehensive approach for modeling extremes. This versatility makes EVT a powerful tool in statistical analysis, especially in fields where understanding and predicting extreme events is crucial. Several authors have delved into studies concerning GEVL distributions, including [3,4,5,6,7,8], among others. The GEVL probability density function (PDF) and cumulative distribution function (CDF) are given, respectively, as
$$
f(x,\theta)=\begin{cases}
\dfrac{1}{\sigma}\left[1+\zeta\dfrac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}-1} e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}} & \text{if } \zeta \neq 0,\\[6pt]
\dfrac{1}{\sigma}\, e^{-\frac{(x-\mu)}{\sigma}}\, e^{-e^{-\frac{(x-\mu)}{\sigma}}} & \text{if } \zeta \to 0,
\end{cases}
$$

$$
F(x,\theta)=\begin{cases}
e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}} & \text{if } \zeta \neq 0,\\[6pt]
e^{-e^{-\frac{(x-\mu)}{\sigma}}} & \text{if } \zeta \to 0,
\end{cases}
$$

and the support of the distribution is

$$
x\in\begin{cases}
\left(\mu-\frac{\sigma}{\zeta},\,\infty\right) & \text{if } \zeta>0,\\
(-\infty,\,\infty) & \text{if } \zeta\to 0,\\
\left(-\infty,\,\mu-\frac{\sigma}{\zeta}\right) & \text{if } \zeta<0.
\end{cases}
$$
Hence, $\theta$ indicates different parameter sets based on the value of $\zeta$: $\theta = (\mu, \sigma, \zeta)$ when $\zeta \neq 0$, giving a three-parameter model, whereas when $\zeta$ approaches zero, $\theta = (\mu, \sigma)$, giving a two-parameter model. Here, $\mu$ is a location parameter, $\sigma$ is a scale parameter, and $\zeta$ is the shape parameter that governs the distribution's tail behavior. The value of $\zeta$ (approaching zero, positive, or negative) defines the sub-models corresponding to the Gumbel, Fréchet, and Weibull distributions mentioned earlier.
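As a numerical companion to the piecewise definitions above, the following sketch (our own illustration; the function names are ours, not from the paper) evaluates the GEVL PDF and CDF directly from the formulas, treating the $\zeta \to 0$ branch as the Gumbel case and returning 0 outside the support:

```python
import math

def gevl_pdf(x, mu, sigma, zeta):
    """GEVL density; zeta = 0 selects the Gumbel limiting branch."""
    z = (x - mu) / sigma
    if zeta == 0.0:
        return math.exp(-z - math.exp(-z)) / sigma
    t = 1.0 + zeta * z
    if t <= 0.0:
        return 0.0                      # x outside the support
    phi = t ** (-1.0 / zeta)            # phi = [1 + zeta z]^{-1/zeta}
    return phi ** (1.0 + zeta) * math.exp(-phi) / sigma

def gevl_cdf(x, mu, sigma, zeta):
    """GEVL distribution function, with the support cases handled explicitly."""
    z = (x - mu) / sigma
    if zeta == 0.0:
        return math.exp(-math.exp(-z))
    t = 1.0 + zeta * z
    if t <= 0.0:
        # below the lower endpoint (zeta > 0) or above the upper one (zeta < 0)
        return 0.0 if zeta > 0 else 1.0
    return math.exp(-t ** (-1.0 / zeta))
```

A finite-difference check that the PDF is the derivative of the CDF is a quick way to catch sign errors in the exponents.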
Marshall and Olkin (1997) proposed a technique for modifying any distribution by adding a new parameter, which is considered a very helpful tool for academics, providing a methodical approach for refining and adapting existing distribution models to better suit a wide range of practical applications. Their technique has been widely applied in a variety of industries where an accurate model of the data distribution is required. Ref. [9] emphasizes their substantial contribution to statistical theory, especially in extending families of distributions. According to [10], the Marshall–Olkin family of extended distributions is widely applicable and valuable in statistical analysis. The method starts from a base survival function (SF) and PDF, denoted as $\bar{H}(x,\theta)$ and $h(x,\theta)$, respectively, to construct a new survival function and PDF as
$$
\bar{G}(x;\alpha,\theta)=\frac{\alpha \bar{H}(x,\theta)}{1-\bar{\alpha}\,\bar{H}(x,\theta)},
$$

$$
g(x;\alpha,\theta)=\frac{\alpha\, h(x,\theta)}{\left[1-\bar{\alpha}\,\bar{H}(x,\theta)\right]^{2}},
$$
respectively, where $\alpha > 0$ and $\bar{\alpha} = 1 - \alpha$; this construction is known as the Marshall–Olkin extension. Many authors have utilized the Marshall–Olkin method as an extension of a parent distribution, such as [11,12,13,14].
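The construction in Equations (4) and (5) is mechanical enough to script. The sketch below (our own illustration; the standard exponential is used as a stand-in base distribution, and all names are ours) applies the Marshall–Olkin transform to an arbitrary base survival function and density:

```python
import math

def mo_survival(x, alpha, base_sf):
    """Marshall-Olkin extended SF: alpha*H(x) / (1 - (1-alpha)*H(x))."""
    h = base_sf(x)
    return alpha * h / (1.0 - (1.0 - alpha) * h)

def mo_pdf(x, alpha, base_sf, base_pdf):
    """Marshall-Olkin extended PDF: alpha*h(x) / [1 - (1-alpha)*H(x)]^2."""
    h = base_sf(x)
    return alpha * base_pdf(x) / (1.0 - (1.0 - alpha) * h) ** 2

# stand-in base distribution: standard exponential, H(x) = h(x) = e^{-x}
sf = lambda x: math.exp(-x)
pdf = lambda x: math.exp(-x)
```

Setting `alpha = 1` recovers the parent distribution, mirroring Remark 1 below; the survival function still equals 1 at the lower end of the support for any `alpha`.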
Because of their robust asymptotic features, maximum likelihood estimation (MLE) and Bayesian approaches have become widely known and used for parameter estimation. MLE treats parameters as fixed but unknown values, which is often useful in estimating measures like means or variances. On the other hand, Bayesian approaches estimate parameters using known prior distributions. This fundamental difference allows Bayesian analysis to incorporate prior knowledge or beliefs about the parameters, making it an exceedingly flexible and powerful tool in statistical inference. For those interested in a deeper understanding of these methodologies and their applications, the work by [15] is an excellent resource.
This paper uses an artificial intelligence algorithm in the estimation process due to the limitations of traditional numerical methods in dealing with complex systems of equations. Genetic algorithms (GAs) fall under the broader category of evolutionary computation and employ mechanisms akin to natural selection. A genetic algorithm (GA) encodes potential solutions to a problem as 'chromosomes', which undergo iterative processes of selection, crossover, and mutation. Through these iterations, the population evolves, ideally leading to increasingly optimal solutions. The algorithm's strength lies in its ability to search through vast and complex solution spaces, making it applicable to a wide variety of problems in optimization and machine learning.
Refs. [16,17] offer deep insights into GAs. Ref. [16] provides a comprehensive overview of GAs, delving into their mechanisms, applications, and theoretical underpinnings. Ref. [17] focuses on the practical applications of GAs, offering insights into how these algorithms can be implemented and optimized for various real data problems.
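To make the selection/crossover/mutation loop concrete, here is a minimal real-coded GA sketch (our own toy implementation with our own parameter choices, not the R package GA used later in the paper): tournament selection, blend crossover, Gaussian mutation, and elitism, maximizing a simple quadratic test function.

```python
import random

def genetic_maximize(fitness, bounds, pop_size=50, generations=200,
                     crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Minimal real-coded GA returning (best fitness, best chromosome)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        def pick():                                   # tournament selection, size 2
            a, b = rng.sample(scored, 2)
            return (a if a[0] >= b[0] else b)[1]
        nxt = [max(scored)[1][:]]                     # elitism: carry the best over
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < crossover_rate:         # arithmetic (blend) crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for j, (lo, hi) in enumerate(bounds):     # Gaussian mutation per gene
                if rng.random() < mutation_rate:
                    child[j] = clip(child[j] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            nxt.append(child)
        pop = nxt
    return max((fitness(ind), ind) for ind in pop)

# toy check: maximize -(x-2)^2 - (y+1)^2 over [-5,5]^2, optimum at (2, -1)
best_val, best = genetic_maximize(lambda v: -(v[0] - 2) ** 2 - (v[1] + 1) ** 2,
                                  [(-5, 5), (-5, 5)])
```

The same loop applies unchanged when the fitness function is a log-likelihood, which is how the GA is used for the MLE problem later in the paper.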
In many experiments, observing the failure of all units under study can be a challenge due to constraints such as time limitations, budgetary restrictions, or other practical limitations. To address this problem, censoring schemes are commonly utilized. Censoring in statistical analysis is a technique primarily utilized in the fields of reliability testing and survival analysis. Censoring allows researchers to make inferences about the entire data set, even for units with unknown failure times. It is particularly useful in fields such as reliability engineering, where testing until the failure of all components may be impractical or impossible, and in medical research, where patients may leave a study before its conclusion or the event of interest (like recovery or relapse) may not occur before the study ends.
The type-II progressive censoring scheme is a popular censoring strategy for parameter estimation research. Type-II progressively censored data are described as follows: Consider an experiment with $N$ items. Only a preset number, designated as $n$ (where $n < N$), are observed until they fail. When the first failure, denoted by $x_{1:n:N}$, occurs, a certain number $R_1$ of items are randomly eliminated from the remaining $(N-1)$ items. Subsequently, when the second failure $x_{2:n:N}$ occurs, another set of $R_2$ items is randomly eliminated from the remaining $(N-R_1-2)$ items, and this process continues. The process repeats until the $n$th failure $x_{n:n:N}$ occurs, at which point the experiment is ended. The type-II progressive censoring technique enables researchers to collect and evaluate data without observing the failure times of all objects in the experiment. For more comprehensive information on this scheme, refer to the work by [18]. Many others have also used the type-II progressive censoring scheme in their papers, such as [19,20,21,22,23].
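The removal mechanism described above is easy to simulate. The following sketch (illustrative; the function name and removal scheme are our own choices) draws $N$ unit lifetimes, then repeatedly records the smallest surviving lifetime as the next failure and withdraws $r_i$ random survivors:

```python
import random

def progressive_type2_sample(lifetimes, removals, rng=random):
    """Simulate progressively type-II censored failure times.

    lifetimes: the N underlying unit lifetimes; removals: (r_1,...,r_n)
    with n + sum(removals) == N.  After the i-th observed failure, r_i of
    the surviving units are withdrawn at random."""
    pool = list(lifetimes)
    assert len(removals) + sum(removals) == len(pool)
    observed = []
    for r in removals:
        x = min(pool)                      # next failure among the survivors
        observed.append(x)
        pool.remove(x)
        for _ in range(r):                 # withdraw r random survivors
            pool.remove(rng.choice(pool))
    return observed

random.seed(1)
data = sorted(random.expovariate(1.0) for _ in range(20))       # N = 20 lifetimes
obs = progressive_type2_sample(data, [1, 0, 1, 0, 2, 0, 1, 0, 2, 3])  # n = 10
```

By construction the observed failures come out in increasing order, matching the notation $x_{1:n:N} < x_{2:n:N} < \dots < x_{n:n:N}$.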
This paper's structure is methodically arranged into sections. It begins with a discussion of the Marshall–Olkin extended generalized extreme value under linear normalization distribution, addressing its PDF and CDF along with the type-II progressive censoring schemes considered, in Section 2. Parameter estimation from type-II progressively censored samples, exploring both point estimates and confidence intervals, is discussed in Section 3. In Section 4, Bayesian estimation techniques are detailed, followed by Section 5, which gives the simulation analysis demonstrating the theoretical applications. Then, in Section 6, practical applications are exemplified through real data analysis. Finally, Section 7 presents the paper's conclusions.

2. Marshall–Olkin Extended Generalized Extreme Value under Linear Normalization Distribution and Cases of Type-II Progressive Censored Scheme Considered

2.1. Marshall–Olkin Extended Generalized Extreme Value under Linear Normalization Distribution

Let $X$ follow the Marshall–Olkin extended generalized extreme value under linear normalization distribution. Using Equations (1) and (2) in Equations (4) and (5), the survival function (SF) and PDF of the Marshall–Olkin extended GEVL (MO-GEVL), which contains the Marshall–Olkin extended Gumbel (MO-Gumbel) as a limiting case, are obtained for $\zeta \neq 0$ and $\zeta \to 0$, respectively, as
$$
\bar{G}(x;\alpha,\theta)=\begin{cases}
\dfrac{\alpha\left[1-e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}}\right]}{1-\bar{\alpha}\left[1-e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}}\right]} & \text{if } \zeta\neq 0,\\[12pt]
\dfrac{\alpha\left[1-e^{-e^{-\frac{(x-\mu)}{\sigma}}}\right]}{1-\bar{\alpha}\left[1-e^{-e^{-\frac{(x-\mu)}{\sigma}}}\right]} & \text{if } \zeta\to 0,
\end{cases}
$$

and

$$
g(x;\alpha,\theta)=\begin{cases}
\dfrac{\alpha\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}-1} e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}}}{\sigma\left[1-\bar{\alpha}\left(1-e^{-\left[1+\zeta\frac{(x-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}}\right)\right]^{2}} & \text{if } \zeta\neq 0,\\[12pt]
\dfrac{\alpha\, e^{-\frac{(x-\mu)}{\sigma}}\, e^{-e^{-\frac{(x-\mu)}{\sigma}}}}{\sigma\left[1-\bar{\alpha}\left(1-e^{-e^{-\frac{(x-\mu)}{\sigma}}}\right)\right]^{2}} & \text{if } \zeta\to 0.
\end{cases}
$$
The parameter set θ , which consists of ( μ , σ , ζ ) for ζ 0 and transitions to ( μ , σ ) as ζ approaches zero, plays a critical role in defining the distribution’s characteristics. The influence of α , μ , σ , and ζ on the distribution’s PDF and CDF shapes is graphically depicted in Figure 1, Figure 2, Figure 3 and Figure 4 to facilitate a deeper understanding of their impacts.
Remark 1.
1. Ref. [24] is a special case of our paper at $\alpha \to 1$.
2. As $\alpha \to 1$ in Equations (4) and (5), we obtain the parent distribution's PDF and SF given in Equations (2) and (1).
3. For small values of $\zeta$ and $\alpha \to 1$ in Figure 1, Figure 2, Figure 3 and Figure 4, the CDF and PDF plots of MO-GEVL tend to coincide with those of the parent GEVL distribution.
4. The provided plots effectively show the remarkable flexibility of the introduced models.

2.2. Cases of Type-II Progressive Censored Scheme Considered

Assume that $R^{*} = (R_1 = r_1, R_2 = r_2, \ldots, R_n = r_n)$ is the censoring scheme for the progressively type-II censored ordered data $X = (X_{1:n:N}, X_{2:n:N}, \ldots, X_{n:n:N})$ from the MO-GEVL distribution, with observed data $x = (x_1, x_2, \ldots, x_n)$.
  • The first scenario: Fixed Removal Censoring Scheme
In the first scenario, $R^{*} = (R_1, R_2, \ldots, R_n)$ is considered to be a set of predefined values of the censoring structure.
  • The second scenario: Removals With Discrete Uniform Distribution
In this scenario, the censoring scheme $R^{*} = (R_1, R_2, \ldots, R_n)$ is characterized as independent random variables adhering to a discrete uniform distribution. The joint likelihood function of $R^{*}$ is given by
$$
P_{R^{*}}(R^{*}) = P(R_1,\ldots,R_n) = P(R_n=r_n \mid R_{n-1}=r_{n-1},\ldots,R_1=r_1)\cdots P(R_2=r_2 \mid R_1=r_1)\,P(R_1=r_1),
$$
where

$$
P(R_1=r_1)=\frac{1}{N-n+1},\qquad
P(R_i=r_i \mid R_{i-1}=r_{i-1},\ldots,R_1=r_1)=\frac{1}{N-n-\sum_{k=1}^{i-1} r_k+1},
$$

with $0 \le r_1 \le N-n$; $0 \le r_i \le N-n-\sum_{k=1}^{i-1} r_k$ for $i = 2, 3, \ldots, n-1$; and $R_n = N-n-\sum_{k=1}^{n-1} R_k$.
From Equation (9) in Equation (8), the joint distribution of $R^{*}$ can easily be obtained as
$$
P_{R^{*}}(R^{*}) = P(R_1,\ldots,R_n)=\frac{1}{N-n+1}\prod_{i=2}^{n-1}\frac{1}{N-n-\sum_{k=1}^{i-1} r_k+1}.
$$
Notice that Equation (10) is parameter-free; see [25].
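Under our reading of Equation (9), a removal scheme can be drawn sequentially, with the last removal forced to absorb the remaining units. A small sketch (illustrative; names and the example sizes are ours):

```python
import random

def uniform_removals(N, n, rng=random):
    """Draw (r_1,...,r_n) under the discrete-uniform removal model:
    r_i | r_1..r_{i-1} ~ Uniform{0,...,N - n - sum(r_1..r_{i-1})},
    and r_n takes whatever is left so that n + sum(r) == N."""
    left = N - n
    r = []
    for _ in range(n - 1):
        ri = rng.randint(0, left)          # uniform on {0, ..., left}
        r.append(ri)
        left -= ri
    r.append(left)                          # r_n is determined, not random
    return r

random.seed(7)
scheme = uniform_removals(N=25, n=8)
```

Every scheme drawn this way satisfies the constraint $n + \sum_i r_i = N$, so it can be fed directly to a progressive type-II simulation.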
  • The third scenario: Removals With Binomial Distribution
In this particular scenario, the censoring scheme $R^{*}$ is modeled as independent random variables adhering to binomial distributions, characterized by a specific probability $P$; see [26]. Then, the joint distribution of $R^{*}$ is given by
$$
P_{R^{*}}(R^{*};P) = P(R_1,\ldots,R_n;P) = P(R_n=r_n \mid R_{n-1}=r_{n-1},\ldots,R_1=r_1;P)\cdots P(R_2=r_2 \mid R_1=r_1;P)\,P(R_1=r_1;P),
$$
where

$$
P(R_1=r_1;P)=\binom{N-n}{r_1}P^{r_1}(1-P)^{N-n-r_1},\qquad
P(R_i=r_i \mid R_{i-1}=r_{i-1},\ldots,R_1=r_1;P)=\binom{N-n-\sum_{k=1}^{i-1} r_k}{r_i}P^{r_i}(1-P)^{N-n-\sum_{k=1}^{i} r_k},
$$

with $0 \le r_1 \le N-n$; $0 \le r_i \le N-n-\sum_{k=1}^{i-1} r_k$ for $i = 2, 3, \ldots, n-1$; and $R_n = N-n-\sum_{k=1}^{n-1} R_k$.
Using Equation (12) in Equation (11), the joint distribution of $R^{*}$ can be obtained as
$$
P_{R^{*}}(R^{*};P)=\frac{(N-n)!}{\left(N-n-\sum_{k=1}^{n-1} r_k\right)!\,\prod_{k=1}^{n-1} r_k!}\; P^{\sum_{k=1}^{n-1} r_k}\,(1-P)^{(N-n)(n-1)-\sum_{k=1}^{n-1}(n-k)r_k}.
$$
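The sequential binomial structure of Equation (12) translates directly into a sampler. The sketch below (our own illustration) draws each $R_i$ as a Binomial draw on the units still at risk and lets $R_n$ absorb the remainder:

```python
import random

def binomial_removals(N, n, p, rng=random):
    """Removal scheme with R_i | past ~ Binomial(N - n - sum(past), p),
    and R_n equal to the units remaining at the end."""
    left = N - n
    r = []
    for _ in range(n - 1):
        ri = sum(rng.random() < p for _ in range(left))   # Binomial(left, p) draw
        r.append(ri)
        left -= ri
    r.append(left)                                        # forced final removal
    return r

random.seed(3)
scheme = binomial_removals(N=30, n=10, p=0.3)
```

As with the uniform case, the constraint $n + \sum_i r_i = N$ holds by construction.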

3. Maximum Likelihood Estimation of Parameters and Observed Fisher Information

In the subsequent section, we investigate the point and interval estimation of the MO-GEVL distribution parameters from type-II progressively censored data for the three scenarios explained above. The interval estimation employs the observed Fisher information and combines both the complete and approximate likelihood equations to establish parameter confidence intervals.

3.1. Maximum Likelihood Estimation of Parameters

To estimate the MO-GEVL distribution parameters based on type-II progressively censored data for all three scenarios discussed above, consider $(X_1 < X_2 < \cdots < X_n)$ as the sequence of failure times, arranged in ascending order, of $n$ units selected from $N$, where $N$ is pre-established before initiating the test. When the $i$th failure occurs, $R_i$ units are progressively eliminated from the test. The joint likelihood function is
  • The first scenario:
$$
L = C\prod_{i=1}^{n} g(x_i;\alpha,\theta)\,\bar{G}^{\,r_i}(x_i;\alpha,\theta),
$$
  • The second scenario:
$$
L^{*} = P_{R^{*}}(R^{*})\,L \propto L,
$$
  • The third scenario:
$$
L^{**} = P_{R^{*}}(R^{*};P)\,L \propto L\; P^{\sum_{i=1}^{n-1} r_i}\,(1-P)^{(n-1)(N-n)-\sum_{k=1}^{n-1}(n-k)r_k},
$$
where $P_{R^{*}}(R^{*})$ and $P_{R^{*}}(R^{*};P)$ are given in Equations (10) and (13), respectively. Meanwhile, $C$ is defined as

$$
C = N(N-r_1-1)(N-r_1-r_2-2)\cdots\Big(N-\sum_{i=1}^{n-1}(r_i+1)\Big).
$$
Clearly, the maximum likelihood (ML) equations for the three scenarios are identical; the only difference is the PDF of the censoring scheme $R^{*}$. Additionally, in the third scenario, the MLE of $P$ can be identified easily as
$$
\hat{P}=\frac{\sum_{k=1}^{n-1} r_k}{(N-n)(n-1)-\sum_{k=1}^{n-1}(n-k-1)r_k}.
$$
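Because $\hat{P}$ is available in closed form, it costs one line to compute from an observed removal scheme. A small sketch (our own helper name; the scheme below is a hypothetical example, not data from the paper):

```python
def p_hat(removals, N):
    """Closed-form MLE of the binomial removal probability:
    P-hat = sum_{k<n} r_k / [(N - n)(n - 1) - sum_{k<n} (n - k - 1) r_k]."""
    n = len(removals)
    num = sum(removals[:n - 1])                 # r_n is forced, so it is excluded
    den = (N - n) * (n - 1) - sum((n - (k + 1) - 1) * removals[k]
                                  for k in range(n - 1))
    return num / den

# hypothetical scheme: N = 10 units, n = 3 observed failures, removals (2, 1, 4)
est = p_hat([2, 1, 4], N=10)
```

For this example the numerator is $2+1=3$ and the denominator $7\cdot 2 - (1\cdot 2 + 0\cdot 1) = 12$, giving $\hat{P} = 0.25$.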
To obtain the ML equations of the parameters, substituting Equations (6) and (7) into Equation (14) yields the likelihood function
$$
L \propto \begin{cases}
\displaystyle\prod_{i=1}^{n}\frac{\alpha\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{\sigma\left[1-\bar{\alpha}\left(1-\exp(-\Phi(x_i))\right)\right]^{2}}\left[\frac{\alpha\left[1-\exp(-\Phi(x_i))\right]}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\right]^{r_i} & \text{if } \zeta\neq 0,\\[12pt]
\displaystyle\prod_{i=1}^{n}\frac{\alpha\,\Psi(x_i)\,e^{-\Psi(x_i)}}{\sigma\left[1-\bar{\alpha}\left(1-e^{-\Psi(x_i)}\right)\right]^{2}}\left[\frac{\alpha\left[1-e^{-\Psi(x_i)}\right]}{1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]}\right]^{r_i} & \text{if } \zeta\to 0,
\end{cases}
$$
where $\Phi(x_i) = \left[1+\zeta\frac{(x_i-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}$ and $\Psi(x_i) = e^{-\frac{(x_i-\mu)}{\sigma}}$. The ML equations of the MO-GEVL and MO-Gumbel distributions can easily be derived by taking the derivatives of the log of Equation (18) with respect to the parameters, $\theta = (\mu, \sigma, \zeta)$ if $\zeta \neq 0$ and $\theta = (\mu, \sigma)$ if $\zeta \to 0$, and setting them equal to zero. Then, the ML equations of the MO-GEVL distribution are
$$
\frac{\partial \log L}{\partial \alpha} = \frac{n+\sum_{i=1}^{n} r_i}{\alpha} - \sum_{i=1}^{n}\frac{(2+r_i)\left[1-\exp(-\Phi(x_i))\right]}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]},
$$
$$
\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma}\left[\sum_{i=1}^{n}(\zeta+1)\Phi(x_i)^{\zeta} - \sum_{i=1}^{n}\Phi(x_i)^{1+\zeta} + \sum_{i=1}^{n}\frac{r_i\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{1-\exp(-\Phi(x_i))} + \sum_{i=1}^{n}\frac{(2+r_i)\,\bar{\alpha}\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\right],
$$
$$
\frac{\partial \log L}{\partial \sigma} = \frac{1}{\sigma}\left[-n + \sum_{i=1}^{n}(\zeta+1)\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta} - \sum_{i=1}^{n}\Phi(x_i)^{1+\zeta}\frac{(x_i-\mu)}{\sigma} + \sum_{i=1}^{n}\frac{r_i\,\Phi(x_i)^{1+\zeta}\frac{(x_i-\mu)}{\sigma}\exp(-\Phi(x_i))}{1-\exp(-\Phi(x_i))} + \sum_{i=1}^{n}\frac{(2+r_i)\,\bar{\alpha}\,\Phi(x_i)^{1+\zeta}\frac{(x_i-\mu)}{\sigma}\exp(-\Phi(x_i))}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\right],
$$
and

$$
\frac{\partial \log L}{\partial \zeta} = \frac{1}{\zeta}\Bigg[\sum_{i=1}^{n}\Phi(x_i)\left(\ln\Phi(x_i)+\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta}\right) - \sum_{i=1}^{n}\ln\Phi(x_i) - \sum_{i=1}^{n}(\zeta+1)\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta} - \sum_{i=1}^{n}\frac{r_i\,\Phi(x_i)\exp(-\Phi(x_i))\left(\ln\Phi(x_i)+\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta}\right)}{1-\exp(-\Phi(x_i))} - \sum_{i=1}^{n}\frac{(2+r_i)\,\bar{\alpha}\,\Phi(x_i)\exp(-\Phi(x_i))\left(\ln\Phi(x_i)+\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta}\right)}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\Bigg].
$$
Meanwhile, the ML equations of the MO-Gumbel distribution are
$$
\frac{\partial \log L}{\partial \alpha} = \frac{n+\sum_{i=1}^{n} r_i}{\alpha} - \sum_{i=1}^{n}\frac{(2+r_i)\left[1-e^{-\Psi(x_i)}\right]}{1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]},
$$
$$
\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma}\left[n - \sum_{i=1}^{n}\Psi(x_i) + \sum_{i=1}^{n}\frac{r_i\,\Psi(x_i)\,e^{-\Psi(x_i)}}{1-e^{-\Psi(x_i)}} + \sum_{i=1}^{n}\frac{(2+r_i)\,\bar{\alpha}\,\Psi(x_i)\,e^{-\Psi(x_i)}}{1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]}\right],
$$
and

$$
\frac{\partial \log L}{\partial \sigma} = \frac{1}{\sigma}\left[-n + \sum_{i=1}^{n}\frac{(x_i-\mu)}{\sigma} - \sum_{i=1}^{n}\frac{(x_i-\mu)}{\sigma}\Psi(x_i) + \sum_{i=1}^{n}\frac{r_i\,\Psi(x_i)\frac{(x_i-\mu)}{\sigma}\,e^{-\Psi(x_i)}}{1-e^{-\Psi(x_i)}} + \sum_{i=1}^{n}\frac{(2+r_i)\,\bar{\alpha}\,\Psi(x_i)\frac{(x_i-\mu)}{\sigma}\,e^{-\Psi(x_i)}}{1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]}\right].
$$
Notice:
In type-II progressively censored data, the maximum likelihood estimators (MLEs) of the parameters $\hat{\alpha}$, $\hat{\mu}$, $\hat{\sigma}$, and $\hat{\zeta}$ are derived by setting Equations (19)–(22) and (23)–(25) to zero. The nonlinear character of these likelihood equations poses a challenge, as they do not have a straightforward explicit solution. Given a specific data set and censoring scheme, obtaining the maximum likelihood estimators involves solving this complex system of equations, which may not be feasible by classical techniques. Artificial intelligence algorithms are therefore used to discover the optimum likelihood estimators based on the observed data and equations. We used the genetic algorithm (GA), a popular AI technique, to optimize the MLE using type-II progressively censored data. The GA is explained in detail in the R package GA vignette, https://cran.r-project.org/web/packages/GA/vignettes/GA.html, accessed on 1 March 2024.
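The paper itself uses the R package GA; as a language-agnostic illustration of the same idea, the following sketch (our own code and GA settings, not the authors' implementation) maximizes the MO-Gumbel log-likelihood of Equation (18), $\zeta \to 0$ branch, with a small real-coded GA. The sampler, bounds, and true parameter values are our own choices for the demonstration.

```python
import math, random

def mo_gumbel_loglik(params, x, r):
    """Log-likelihood of the zeta -> 0 branch of Equation (18) under a
    progressive type-II scheme with removal numbers r_i."""
    alpha, mu, sigma = params
    if alpha <= 0 or sigma <= 0:
        return -math.inf
    ll = 0.0
    for xi, ri in zip(x, r):
        psi = math.exp(-(xi - mu) / sigma)
        s = 1.0 - math.exp(-psi)              # base survival 1 - e^{-Psi}
        d = 1.0 - (1.0 - alpha) * s
        if s <= 0.0 or d <= 0.0:              # numerically invalid candidate
            return -math.inf
        ll += (math.log(alpha) + math.log(psi) - psi - math.log(sigma)
               - 2.0 * math.log(d)
               + ri * (math.log(alpha) + math.log(s) - math.log(d)))
    return ll

def sample_mo_gumbel(n, alpha, mu, sigma, rng):
    """Inverse-CDF draws: solve the MO survival function for the base SF."""
    xs = []
    for _ in range(n):
        u = rng.random()
        hbar = (1.0 - u) / (alpha + (1.0 - alpha) * (1.0 - u))
        xs.append(mu - sigma * math.log(-math.log(1.0 - hbar)))
    return xs

def ga_mle(x, r, bounds, pop=60, gens=200, seed=0):
    """Small real-coded GA: truncation selection, blend crossover, mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(((mo_gumbel_loglik(p, x, r), p) for p in P), reverse=True)
        elite = [p for _, p in scored[:pop // 4]]
        P = [e[:] for e in elite]             # elitism keeps the best candidates
        while len(P) < pop:
            a, b = rng.choice(elite), rng.choice(elite)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            j = rng.randrange(len(child))     # mutate one coordinate
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0.0, 0.05 * (hi - lo))))
            P.append(child)
    return max(P, key=lambda p: mo_gumbel_loglik(p, x, r))

rng = random.Random(42)
x = sorted(sample_mo_gumbel(80, 1.5, 0.0, 1.0, rng))   # complete sample, n = N
r = [0] * len(x)                                        # degenerate scheme r_i = 0
est = ga_mle(x, r, bounds=[(0.1, 5.0), (-2.0, 2.0), (0.2, 3.0)])
```

The fitness here is the log-likelihood itself, so the same loop handles any of the three removal scenarios by changing `r`; only the censoring-scheme PDF, which is parameter-free in scenarios 1 and 2, drops out of the optimization.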

3.2. Observed Fisher Information and Approximate Confidence Interval Concerning the Distribution of the Censoring Scheme

The observed Fisher information is determined by utilizing both the complete and approximate likelihood equations. This approach enables a thorough evaluation of the precision of the parameter estimates derived from the data, providing critical insights into the variability of the estimation process. Monte Carlo simulations are used to systematically analyze the performance of pivotal quantities. For sufficiently large sample sizes $N$ and $n$, we can construct approximate confidence intervals for the three scenarios previously discussed. Confidence intervals offer insights into the precision and statistical significance of the parameter estimates across different contexts.

3.2.1. Observed Fisher Information Corresponding

Under the presumption of the censoring scheme distribution, the Fisher information matrices based on the log-likelihood function given in Equation (18) for scenarios 1 and 2, denoted by $I_1^{*}$ and $I_2^{*}$ for $\zeta \neq 0$ and $\zeta \to 0$, respectively, are
$$
I_1^{*} = \begin{pmatrix}
\Omega_{\alpha^2}^{1} & \Omega_{\alpha\mu}^{1} & \Omega_{\alpha\sigma}^{1} & \Omega_{\alpha\zeta}^{1}\\
\Omega_{\mu\alpha}^{1} & \Omega_{\mu^2}^{1} & \Omega_{\mu\sigma}^{1} & \Omega_{\mu\zeta}^{1}\\
\Omega_{\sigma\alpha}^{1} & \Omega_{\sigma\mu}^{1} & \Omega_{\sigma^2}^{1} & \Omega_{\sigma\zeta}^{1}\\
\Omega_{\zeta\alpha}^{1} & \Omega_{\zeta\mu}^{1} & \Omega_{\zeta\sigma}^{1} & \Omega_{\zeta^2}^{1}
\end{pmatrix}
$$

and

$$
I_2^{*} = \begin{pmatrix}
\Omega_{\alpha^2}^{2} & \Omega_{\alpha\mu}^{2} & \Omega_{\alpha\sigma}^{2}\\
\Omega_{\mu\alpha}^{2} & \Omega_{\mu^2}^{2} & \Omega_{\mu\sigma}^{2}\\
\Omega_{\sigma\alpha}^{2} & \Omega_{\sigma\mu}^{2} & \Omega_{\sigma^2}^{2}
\end{pmatrix}.
$$
Meanwhile, for the third scenario, the Fisher information matrices $I_1^{**}$ and $I_2^{**}$ for $\zeta \neq 0$ and $\zeta \to 0$, respectively, are obtained as
$$
I_1^{**} = \begin{pmatrix}
\Omega_{\alpha^2}^{1} & \Omega_{\alpha\mu}^{1} & \Omega_{\alpha\sigma}^{1} & \Omega_{\alpha\zeta}^{1} & 0\\
\Omega_{\mu\alpha}^{1} & \Omega_{\mu^2}^{1} & \Omega_{\mu\sigma}^{1} & \Omega_{\mu\zeta}^{1} & 0\\
\Omega_{\sigma\alpha}^{1} & \Omega_{\sigma\mu}^{1} & \Omega_{\sigma^2}^{1} & \Omega_{\sigma\zeta}^{1} & 0\\
\Omega_{\zeta\alpha}^{1} & \Omega_{\zeta\mu}^{1} & \Omega_{\zeta\sigma}^{1} & \Omega_{\zeta^2}^{1} & 0\\
0 & 0 & 0 & 0 & \delta(P)
\end{pmatrix}
$$

and

$$
I_2^{**} = \begin{pmatrix}
\Omega_{\alpha^2}^{2} & \Omega_{\alpha\mu}^{2} & \Omega_{\alpha\sigma}^{2} & 0\\
\Omega_{\mu\alpha}^{2} & \Omega_{\mu^2}^{2} & \Omega_{\mu\sigma}^{2} & 0\\
\Omega_{\sigma\alpha}^{2} & \Omega_{\sigma\mu}^{2} & \Omega_{\sigma^2}^{2} & 0\\
0 & 0 & 0 & \delta(P)
\end{pmatrix},
$$
where $\Omega^{1}$ and $\Omega^{2}$ denote the log-likelihood functions from Equation (18) for $\zeta \neq 0$ and $\zeta \to 0$, respectively. The partial derivatives are given by
$$
\Omega_{\alpha^2}^{1} = -\frac{n+\sum_{i=1}^{n} r_i}{\alpha^2} + \sum_{i=1}^{n}\frac{(2+r_i)\left[1-\exp(-\Phi(x_i))\right]^{2}}{\left[1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]\right]^{2}},
$$
$$
\Omega_{\alpha\mu}^{1} = \Omega_{\mu\alpha}^{1} = -\frac{1}{\sigma}\sum_{i=1}^{n}\frac{(2+r_i)\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{\left[1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]\right]^{2}},
$$
$$
\Omega_{\alpha\sigma}^{1} = \Omega_{\sigma\alpha}^{1} = -\frac{1}{\sigma}\sum_{i=1}^{n}\frac{(2+r_i)\,\Phi(x_i)^{1+\zeta}\frac{(x_i-\mu)}{\sigma}\exp(-\Phi(x_i))}{\left[1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]\right]^{2}},
$$
$$
\Omega_{\alpha\zeta}^{1} = \Omega_{\zeta\alpha}^{1} = \frac{1}{\zeta}\sum_{i=1}^{n}\frac{(2+r_i)\,\Phi(x_i)\left(\ln\Phi(x_i)+\frac{(x_i-\mu)}{\sigma}\Phi(x_i)^{\zeta}\right)\exp(-\Phi(x_i))}{\left[1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]\right]^{2}},
$$
Ω μ 2 1 = i = 1 n ( ζ + 1 ) [ Φ ( x i ) ] 2 ζ i = 1 n ( 1 + ζ ) [ Φ ( x i ) ] 2 ζ 1 + 1 + i = 1 n r i [ Φ ( x i ) ] 1 + 2 ζ exp ( Φ ( x i ) ) [ ( 1 + ζ ) Φ ( x i ) [ 1 exp ( Φ ( x i ) ) ] ] [ 1 exp ( Φ ( x i ) ) ] 2 + i = 1 n ( 2 + r i ) α ¯ [ Φ ( x i ) ] 1 + 2 ζ exp ( Φ ( x i ) ) [ ( 1 + ζ ) Φ ( x i ) [ 1 exp ( Φ ( x i ) ) ] ] [ 1 α ¯ ( 1 exp ( Φ ( x i ) ) ) ] 2
Ω σ μ 1 = Ω μ σ 1 = i = 1 n ( ζ + 1 ) [ Φ ( x i ) ζ ζ α Φ ( x i ) 2 ζ ] i = 1 n [ Φ ( x i ) ] 1 + 2 ζ [ ( 1 + ζ ) ( x i μ ) σ Φ ( x i ) ζ ] i = 1 n r i Φ ( x i ) 1 + 2 ζ exp ( Φ ( x i ) ) [ ( 1 + ζ ) ( x i μ ) σ Φ ( x i ) ζ ( x i μ ) σ Φ ( x i ) [ 1 exp ( Φ ( x i ) ) ] ] [ 1 exp ( Φ ( x i ) ) ] i = 1 n ( 2 + r i ) α ¯ Φ ( x i ) 1 + 2 ζ exp ( Φ ( x i ) ) [ ( 1 + ζ ) ( x i μ ) σ Φ ( x i ) ζ ( x i μ ) σ Φ ( x i ) [ 1 exp ( Φ ( x i ) ) ] ] [ 1 ( α ¯ [ 1 exp ( Φ ( x i ) ) ] ) ]
Ω μ ζ 1 = Ω ζ μ 1 = i = 1 n Φ ( x i ) ζ i = 1 n ( 1 + ζ ) Φ ( x i ) ζ [ ζ ( x i μ ) σ Φ ( x i ) ζ 1 ] + i = 1 n Φ ( x i ) 1 + ζ [ ln ( Φ ( x i ) ) + 2 ( x i μ ) σ Φ ( x i ) ζ ] i = 1 n ζ ( 2 + r i ) α ¯ Φ ( x i ) ζ + 1 exp ( Φ ( x i ) ) ( x i μ ) σ [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] i = 1 n r i Φ ( x i ) 1 + ζ exp ( Φ ( x i ) ) [ Φ ( x i ) ζ ( x i μ ) σ + ( 1 exp ( Φ ( x i ) ) Φ ( x i ) 1 exp ( Φ ( x i ) ) ) ( ln ( Φ ( x i ) ) + Φ ( x i ) ζ ( x i μ ) σ ) ] [ 1 exp ( Φ ( x i ) ) ] + i = 1 n ( 2 + r i ) α ¯ Φ ( x i ) 2 ζ + 1 exp ( Φ ( x i ) ) ( x i μ ) σ [ 1 + Φ ( x i ) α ¯ Φ ( x i ) exp ( Φ ( x i ) ) 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] + i = 1 n ( 2 + r i ) α ¯ Φ ( x i ) ζ + 1 exp ( Φ ( x i ) ) ln ( Φ ( x i ) ) [ 1 + Φ ( x i ) α ¯ Φ ( x i ) exp ( Φ ( x i ) ) [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] ] [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ]
Ω σ 2 1 = n i = 1 n ( ζ + 1 ) ( x i μ ) σ [ 2 ζ ( x i μ ) σ Φ ( ρ ) ζ ] Φ ( ρ ) ζ i = 1 n Φ ( ρ ) 1 + ζ ( x i μ ) σ [ ( ζ + ( x i μ ) σ ) Φ ( x i ) ζ ] 2 ζ ] + i = 1 n r i Φ ( x i ) 1 + 2 ζ exp ( Φ ( x i ) ) ( ( x i μ ) σ ) 2 [ ( 1 + ζ ) 2 ( ( ( x i μ ) σ ) Φ ( x i ) ζ ) 1 Φ ( x i ) 1 exp ( Φ ( x i ) ) ] [ 1 exp ( Φ ( x i ) ) ] i = 1 n ( 2 + r i ) α ¯ ( ( x i μ ) σ ) 2 exp ( Φ ( x i ) ) Φ ( x i ) 1 + 2 ζ [ ζ Φ ( x i ) ( 1 + ζ ) + 2 ζ ( Φ ( x i ) ζ ( x i μ ) σ ) α ¯ [ Φ ( x i ) ] 1 + ζ exp ( Φ ( x i ) ) 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ]
Ω σ ζ 1 = i = 1 n ( x i μ ) σ [ Φ ( x i ) ] ζ i = 1 n ( ζ + 1 ) [ ( x i μ ) σ Φ ( x i ) ζ + ζ ( ( x i μ ) σ Φ ( x i ) ζ ) 2 ] + i = 1 n ( x i μ ) σ [ Φ ( x i ) ] 1 + ζ [ ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ( 1 + 1 ζ ) ] i = 1 n r i Φ ( x i ) 1 + ζ ( x i μ ) σ exp ( Φ ( x i ) ) [ [ 1 Φ ( x i ) 1 exp ( Φ ( x i ) ) ] [ ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ] + ( x i μ ) σ Φ ( x i ) ] [ 1 exp ( Φ ( x i ) ) ] + i = 1 n ζ ( 2 + r i ) α ¯ Φ ( x i ) 1 + 2 ζ ( ( x i μ ) σ ) 2 exp ( Φ ( x i ) ) [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] i = 1 n ( 2 + r i ) α ¯ Φ ( x i ) 1 + ζ ( x i μ ) σ exp ( Φ ( x i ) ) [ [ ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ] [ 1 + Φ ( x i ) α ¯ Φ ( x i ) exp ( Φ ( x i ) ) 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] ] ζ [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ]
Ω ζ 2 1 = 2 i = 1 n [ ( x i μ ) σ Φ ( x i ) ζ ln ( Φ ( x i ) ) ] + i = 1 n ζ ( ζ + 1 ) ( ( x i μ ) σ Φ ( x i ) ζ ) + i = 1 n Φ ( x i ) [ ( ζ + 2 ) [ ln ( Φ ( x i ) + ( x i μ ) σ Φ ( x i ) ζ ) ] ζ ( Φ ( x i ) ζ ( ( x i μ ) σ ) ) 2 ] + i = 1 n r i Φ ( x i ) exp ( Φ ( x i ) ) [ ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ] 2 [ 1 Φ ( x i ) exp ( Φ ( x i ) ) [ 1 exp ( Φ ( x i ) ) ] ] [ 1 exp ( Φ ( x i ) ) ] + i = 1 n r i Φ ( x i ) exp ( Φ ( x i ) ) [ 2 ( ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ) + [ ζ ( ( x i μ ) σ Φ ( x i ) ζ ) 2 ] ] [ 1 exp ( Φ ( x i ) ) ] i = 1 n ( 2 + r i ) Φ ( x i ) exp ( Φ ( x i ) ) α ¯ [ ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ] 2 [ 1 + Φ ( x i ) α ¯ Φ ( x i ) exp ( Φ ( x i ) ) [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] ] [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ] + i = 1 n ( 2 + r i ) Φ ( x i ) exp ( Φ ( x i ) ) α ¯ [ 2 ( ln ( Φ ( x i ) ) + ( x i μ ) σ Φ ( x i ) ζ ) + ( ( x i μ ) σ Φ ( x i ) ζ ) 2 ] [ 1 α ¯ [ 1 exp ( Φ ( x i ) ) ] ]
$$
\Omega_{\alpha^2}^{2} = -\frac{n+\sum_{i=1}^{n} r_i}{\alpha^2} + \sum_{i=1}^{n}\frac{(2+r_i)\left[1-e^{-\Psi(x_i)}\right]^{2}}{\left[1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]\right]^{2}},
$$
$$
\Omega_{\alpha\mu}^{2} = \Omega_{\mu\alpha}^{2} = -\frac{1}{\sigma}\sum_{i=1}^{n}\frac{(2+r_i)\,\Psi(x_i)\,e^{-\Psi(x_i)}}{\left[1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]\right]^{2}},
$$
$$
\Omega_{\alpha\sigma}^{2} = \Omega_{\sigma\alpha}^{2} = -\frac{1}{\sigma}\sum_{i=1}^{n}\frac{(2+r_i)\,\Psi(x_i)\frac{(x_i-\mu)}{\sigma}\,e^{-\Psi(x_i)}}{\left[1-\bar{\alpha}\left[1-e^{-\Psi(x_i)}\right]\right]^{2}},
$$
Ω μ 2 2 = i = 1 n Ψ ( x i ) + i = 1 n ( 2 + r i ) α ¯ Ψ ( x i ) e Ψ ( x i ) [ 1 Ψ ( x i ) + α ¯ Ψ ( x i ) e Ψ ( x i ) [ 1 α ¯ [ 1 e Ψ ( x i ) ] ] ] [ 1 α ¯ ( 1 e Ψ ( x i ) ) ] + i = 1 n r i Ψ ( x i ) e Ψ ( x i ) [ 1 Ψ ( x i ) [ 1 e Ψ ( x i ) ] ] [ 1 e Ψ ( x i ) ]
Ω μ σ 2 = Ω σ μ 2 = i = 1 n Ψ ( x i ) [ ( x i μ ) σ 1 ] i = 1 n r i Ψ ( x i ) e Ψ ( x i ) [ ( ( x i μ ) σ 1 ) ( x i μ ) σ Ψ ( x i ) [ 1 e Ψ ( x i ) ] ] [ 1 e Ψ ( x i ) ] + n + i = 1 n ( 2 + r i ) α ¯ Ψ ( x i ) e Ψ ( x i ) [ ( 1 ( x i μ ) σ ) + ( x i μ ) σ Ψ ( x i ) α ¯ ( x i μ ) σ Ψ ( x i ) e Ψ ( x i ) [ 1 α ¯ [ 1 e Ψ ( x i ) ] ] ] [ 1 α ¯ ( 1 e Ψ ( x i ) ) ]
Ω σ 2 2 = n + i = 1 n 2 ( x i μ ) σ + i = 1 n ( x i μ ) σ Ψ ( x i ) [ 2 ( x i μ ) σ ] + i = 1 n r i ( x i μ ) σ Ψ ( x i ) e Ψ ( x i ) [ ( ( x i μ ) σ 2 ) ( x i μ ) σ Ψ ( x i ) [ 1 e Ψ ( x i ) ] ] [ 1 e Ψ ( x i ) ] i = 1 n ( 2 + r i ) α ¯ Ψ ( x i ) e Ψ ( x i ) [ ( 2 ( x i μ ) σ ) + ( x i μ ) σ Ψ ( x i ) α ¯ ( x i μ ) σ Ψ ( x i ) e Ψ ( x i ) [ 1 α ¯ [ 1 e Ψ ( x i ) ] ] ] [ 1 α ¯ ( 1 e Ψ ( x i ) ) ]
and

$$
\delta(P)=\frac{\partial^{2}\log L}{\partial P^{2}}=-\frac{\sum_{k=1}^{n-1} r_k}{P^{2}}-\frac{(n-1)(N-n)-\sum_{k=1}^{n-1}(n-k)r_k}{(1-P)^{2}},
$$
where $\Phi(x_i) = \left[1+\zeta\frac{(x_i-\mu)}{\sigma}\right]^{-\frac{1}{\zeta}}$ and $\Psi(x_i) = \exp\left(-\frac{(x_i-\mu)}{\sigma}\right)$.
Similarly, the observed Fisher information can be constructed by plugging the MLEs of the parameters from Equations (19)–(22) for MO-GEVL and Equations (23)–(25) for MO-Gumbel into these expressions.

3.2.2. The Asymptotic Confidence Interval

The variance–covariance matrix $I^{-1}$ can be obtained by computing the inverse of the matrices ($I_1^{*}$, $I_2^{*}$, $I_1^{**}$, and $I_2^{**}$), depending on the scenario under investigation. According to [27], the asymptotic distribution of the vector of parameters $Q$ for scenarios 1, 2, and 3 adheres to the normal distribution $\hat{Q} \sim N(Q, I^{-1}(\hat{Q}))$. The asymptotic $100(1-r)\%$ confidence interval for a parameter $Q$ at significance level $r$ can be found by
$$
\left[\hat{Q}-z_{\frac{r}{2}}\sqrt{I^{-1}(\hat{Q})},\;\;\hat{Q}+z_{\frac{r}{2}}\sqrt{I^{-1}(\hat{Q})}\right],
$$
where $I^{-1}(\hat{Q})$ represents the corresponding diagonal element of the variance–covariance matrix.
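The whole pipeline, observed information at the MLE, invert, take square roots of the diagonal, can be checked on a one-parameter case with a known answer. The sketch below (our own illustration with made-up data; not the paper's computation) approximates the observed information by a central finite difference and builds the Wald interval:

```python
import math

def wald_ci(loglik, theta_hat, z=1.96, h=1e-4):
    """Asymptotic CI from the observed Fisher information: approximate
    -d^2 loglik / d theta^2 at the MLE by a central difference, then
    return theta_hat +/- z * sqrt(1 / I)."""
    d2 = (loglik(theta_hat + h) - 2.0 * loglik(theta_hat)
          + loglik(theta_hat - h)) / h ** 2
    info = -d2                                # observed information
    se = math.sqrt(1.0 / info)
    return theta_hat - z * se, theta_hat + z * se

# worked check: exponential sample with rate lam has MLE lam_hat = n / sum(x)
# and observed information n / lam_hat^2, so the half-width is z*lam_hat/sqrt(n)
data = [0.5, 1.2, 0.3, 2.0, 0.8, 1.1, 0.6, 1.7]
n, s = len(data), sum(data)
lam_hat = n / s
loglik = lambda lam: n * math.log(lam) - lam * s
lo, hi = wald_ci(loglik, lam_hat)
```

For the multi-parameter MO-GEVL case, the same idea applies with a finite-difference Hessian and the diagonal of its inverse in place of `1 / info`.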

4. Bayesian Estimation

In this section, we discuss Bayesian estimation of the MO-GEVL distribution parameters under the three proposed scenarios. We combine the observed sample data with prior knowledge about the sample distribution; combining the prior probabilities with the available information on the population distribution provides a more subjective and informed perspective on the unknown parameters. We explore parameter estimation focusing on two loss functions: squared error loss and LINEX loss. Our analysis covers two different situations: one with informative priors, which rely on specific prior knowledge, and another with non-informative priors, which assume minimal prior information. By integrating these loss functions and types of prior information, we thoroughly examine parameter estimation.
Informative priors. Assume that the unknown parameters in every scenario investigated are independent of one another. The approach of selecting parameter priors for the informative case depends on the parameter validation region, as introduced by [28,29]. We consider prior PDFs of the parameters $\alpha, \mu, \sigma, \zeta$ following exponential distributions with hyper-parameters $a_j$, $j = 1, 2, 3, 4$, while the random variable $P$ follows a beta distribution with parameters $(\gamma, \Gamma)$. Then, the prior PDFs of the parameters are
$$
\pi_1(\alpha)=a_1 e^{-a_1\alpha},\ \ a_1,\alpha>0,\qquad
\pi_2(\mu)=a_2 e^{-a_2\mu},\ \ a_2,\mu>0,\qquad
\pi_3(\sigma)=a_3 e^{-a_3\sigma},\ \ a_3,\sigma>0,\qquad
\pi_4(\zeta)=a_4 e^{-a_4\zeta},\ \ a_4,\zeta>0,
$$
and

$$
\pi_5(P)=\frac{1}{B(\gamma,\Gamma)}P^{\gamma-1}(1-P)^{\Gamma-1},\qquad \gamma,\Gamma>0,\ 0<P<1,
$$
where $B(\gamma,\Gamma)$ is the beta function. The hyper-parameters ($\gamma$, $\Gamma$, $a_j$ for $j = 1, 2, 3, 4$) are estimated by the same method given in [30]. Then, the joint prior PDF of the parameters for MO-GEVL can be written as
$$
\Pi^{*} \propto \begin{cases}
\exp\left(-\sum_{j=1}^{4} a_j\theta_j\right) & \text{for scenarios 1 and 2},\\[4pt]
P^{\gamma-1}(1-P)^{\Gamma-1}\exp\left(-\sum_{j=1}^{4} a_j\theta_j\right) & \text{for scenario 3},
\end{cases}
$$
where $\theta_j \in (\alpha, \mu, \sigma, \zeta)$ if $\zeta \neq 0$ and $\theta_j \in (\alpha, \mu, \sigma)$ if $\zeta \to 0$. The joint prior PDF for MO-Gumbel is easily obtained by setting $\zeta$ equal to 0 in Equation (50).
Non-informative prior. In this scenario, all parameter prior PDFs are assumed to be equal to 1.
According to the informative prior, the posterior PDFs ($\pi_1^{*}$, $\pi_2^{*}$) of MO-GEVL for $\zeta \neq 0$ and $\zeta \to 0$, respectively, are given by
$$
\pi_1^{*} \propto \begin{cases}
\displaystyle e^{-\sum_{j} a_j\theta_j}\prod_{i=1}^{n}\frac{\alpha\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{\sigma\left[1-\bar{\alpha}\left(1-\exp(-\Phi(x_i))\right)\right]^{2}}\left[\frac{\alpha\left[1-\exp(-\Phi(x_i))\right]}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\right]^{r_i} & \text{for scenarios 1 and 2},\\[12pt]
\displaystyle P^{\gamma-1}(1-P)^{\Gamma-1}\,e^{-\sum_{j} a_j\theta_j}\prod_{i=1}^{n}\frac{\alpha\,\Phi(x_i)^{1+\zeta}\exp(-\Phi(x_i))}{\sigma\left[1-\bar{\alpha}\left(1-\exp(-\Phi(x_i))\right)\right]^{2}}\left[\frac{\alpha\left[1-\exp(-\Phi(x_i))\right]}{1-\bar{\alpha}\left[1-\exp(-\Phi(x_i))\right]}\right]^{r_i} & \text{for scenario 3},
\end{cases}
$$
$$\pi_2^{*} \propto
\begin{cases}
e^{-\sum_{j=1}^{3} a_j \theta_j} \displaystyle\prod_{i=1}^{n} \frac{\alpha\, \Psi(x_i)\, e^{-\Psi(x_i)}}{\sigma \left[1 - \bar{\alpha}\left(1 - e^{-\Psi(x_i)}\right)\right]^{2}} \left[\frac{\alpha \left[1 - e^{-\Psi(x_i)}\right]}{1 - \bar{\alpha}\left[1 - e^{-\Psi(x_i)}\right]}\right]^{r_i} & \text{for Cases 1 and 2,} \\[2ex]
P^{\gamma-1}(1-P)^{\Gamma-1}\, e^{-\sum_{j=1}^{3} a_j \theta_j} \displaystyle\prod_{i=1}^{n} \frac{\alpha\, \Psi(x_i)\, e^{-\Psi(x_i)}}{\sigma \left[1 - \bar{\alpha}\left(1 - e^{-\Psi(x_i)}\right)\right]^{2}} \left[\frac{\alpha \left[1 - e^{-\Psi(x_i)}\right]}{1 - \bar{\alpha}\left[1 - e^{-\Psi(x_i)}\right]}\right]^{r_i} & \text{for Case 3,}
\end{cases}$$
where $\theta_j = (\alpha, \mu, \sigma, \zeta)$ if $\zeta \neq 0$ and $\theta_j = (\alpha, \mu, \sigma)$ if $\zeta \to 0$. The informative Bayesian estimates under the SELF and LINEX loss functions (see [31]) are then given, respectively, by
$$\hat{\theta}_S =
\begin{cases}
\int \theta\, \pi_1^{*}\, d\theta & \text{if } \zeta \neq 0, \\
\int \theta\, \pi_2^{*}\, d\theta & \text{if } \zeta \to 0,
\end{cases}$$
$$\hat{\theta}_{lx} =
\begin{cases}
-\dfrac{1}{\eta} \log\left( \int \exp(-\eta \theta)\, \pi_1^{*}\, d\theta \right) & \text{if } \zeta \neq 0, \\[1.5ex]
-\dfrac{1}{\eta} \log\left( \int \exp(-\eta \theta)\, \pi_2^{*}\, d\theta \right) & \text{if } \zeta \to 0,
\end{cases}$$
where $\eta$ is a shape parameter whose negative values give more weight to underestimation than to overestimation, while for small values of $|\eta|$ the LINEX loss function $L_{lx}(\Delta) = \exp(\eta \Delta) - \eta \Delta - 1$, with $\Delta = \hat{\theta} - \theta$, is almost symmetric (see [32]).
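The contrast between the two loss functions can be illustrated numerically. The sketch below computes the SELF estimate (posterior mean) and the LINEX estimate by quadrature on a one-dimensional toy posterior (a Gamma(3, 2) density chosen purely for illustration, not the MO-GEVL posterior); it shows that a negative $\eta$ pulls the LINEX estimate above the posterior mean, guarding against underestimation.

```python
import numpy as np

def trap(y, dx):
    """Trapezoidal integral of samples y on a uniform grid with step dx."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

def bayes_estimates(theta, post, eta):
    """SELF (posterior-mean) and LINEX Bayes estimates on a discretised
    one-dimensional posterior; `post` need not be normalised."""
    dx = theta[1] - theta[0]
    w = post / trap(post, dx)                                 # normalise
    self_est = trap(theta * w, dx)                            # posterior mean
    linex_est = -np.log(trap(np.exp(-eta * theta) * w, dx)) / eta
    return self_est, linex_est

# toy Gamma(3, 2) posterior, purely for illustration
theta = np.linspace(1e-9, 15.0, 20001)
post = theta**2 * np.exp(-2.0 * theta)

mean, lx_neg = bayes_estimates(theta, post, eta=-1.0)
_, lx_pos = bayes_estimates(theta, post, eta=+1.0)
# eta < 0 penalises underestimation: the estimate sits above the posterior mean
print(mean, lx_neg, lx_pos)
```

For this toy posterior the exact values are known (mean 1.5, LINEX estimates $3\log 2$ and $3\log 1.5$), which makes the quadrature easy to verify.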
Meanwhile, for non-informative priors, the posterior PDFs ( π 3 * , π 4 * ) are given by
$$\pi_3^{*} \propto \prod_{i=1}^{n} \frac{\alpha\, \Phi(x_i)^{1+\zeta} \exp(-\Phi(x_i))}{\sigma \left[1 - \bar{\alpha}\left(1 - \exp(-\Phi(x_i))\right)\right]^{2}} \left[\frac{\alpha \left[1 - \exp(-\Phi(x_i))\right]}{1 - \bar{\alpha}\left[1 - \exp(-\Phi(x_i))\right]}\right]^{r_i}, \quad \text{with the same form for all three cases,}$$
$$\pi_4^{*} \propto \prod_{i=1}^{n} \frac{\alpha\, \Psi(x_i)\, e^{-\Psi(x_i)}}{\sigma \left[1 - \bar{\alpha}\left(1 - e^{-\Psi(x_i)}\right)\right]^{2}} \left[\frac{\alpha \left[1 - e^{-\Psi(x_i)}\right]}{1 - \bar{\alpha}\left[1 - e^{-\Psi(x_i)}\right]}\right]^{r_i}, \quad \text{with the same form for all three cases,}$$
where $\Phi(x_i) = \left[1 + \zeta\left(\frac{x_i - \mu}{\sigma}\right)\right]^{-1/\zeta}$ and $\Psi(x_i) = e^{-\left(\frac{x_i - \mu}{\sigma}\right)}$.
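Using these definitions, the baseline GEVL CDF $\exp(-\Phi(x))$ and its Marshall–Olkin extension can be sketched directly. The Python fragment below is an illustrative implementation, assuming the tilt parameterization $G(x) = F(x)/\left(1 - (1-\alpha)(1-F(x))\right)$ that underlies the posterior expressions above; parameter values in the checks are hypothetical.

```python
import math

def Phi(x, mu, sigma, zeta):
    """Phi(x) = [1 + zeta*(x - mu)/sigma]^(-1/zeta) for zeta != 0,
    on the support where the bracket is positive."""
    return (1.0 + zeta * (x - mu) / sigma) ** (-1.0 / zeta)

def gevl_cdf(x, mu, sigma, zeta):
    """Baseline GEVL CDF F(x) = exp(-Phi(x))."""
    return math.exp(-Phi(x, mu, sigma, zeta))

def mo_gevl_cdf(x, alpha, mu, sigma, zeta):
    """Marshall-Olkin extension with tilt parameter alpha:
    G(x) = F(x) / (1 - (1 - alpha) * (1 - F(x)))."""
    F = gevl_cdf(x, mu, sigma, zeta)
    return F / (1.0 - (1.0 - alpha) * (1.0 - F))

# alpha = 1 recovers the baseline GEVL distribution
print(mo_gevl_cdf(1.2, 1.0, 0.3, 1.0, 0.1), gevl_cdf(1.2, 0.3, 1.0, 0.1))
```

Setting $\alpha = 1$ collapses the extension back to the parent GEVL CDF, which is a convenient sanity check for any implementation.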
The Bayesian estimates of the MO-GEVL parameters under the SELF and LINEX loss functions for the non-informative prior are given, respectively, by
$$\hat{\theta}_S =
\begin{cases}
\int \theta\, \pi_3^{*}\, d\theta & \text{if } \zeta \neq 0, \\
\int \theta\, \pi_4^{*}\, d\theta & \text{if } \zeta \to 0,
\end{cases}$$
$$\hat{\theta}_{lx} =
\begin{cases}
-\dfrac{1}{\eta} \log\left( \int \exp(-\eta \theta)\, \pi_3^{*}\, d\theta \right) & \text{if } \zeta \neq 0, \\[1.5ex]
-\dfrac{1}{\eta} \log\left( \int \exp(-\eta \theta)\, \pi_4^{*}\, d\theta \right) & \text{if } \zeta \to 0.
\end{cases}$$
The systems of equations presented in Equations (53), (54), (57), and (58) do not have analytical solutions, so numerical methods are necessary for their solution. Among these methods, Lindley's approximation technique stands out as a popular choice for Bayesian estimation. We explore this method in detail in the subsequent subsection.

Lindley’s Approximation Method

Lindley’s approximation is a simple technique for approximating the ratio of integrals appearing in Equations (53), (54), (57), and (58). Other methods, such as Markov chain Monte Carlo and the Gibbs sampler, are also used; however, the analytical simplicity of Lindley’s approximation makes it a popular choice among these approaches. The method typically involves expanding the log-likelihood L and the log-prior P around a suitable point, such as the mode of the posterior distribution, and then approximating the integral using this expansion. This approach is particularly useful when exact computation of the integral is challenging. Lindley’s approximation provides a way to estimate the expected value of U(ϑ) under the posterior distribution, which is crucial in Bayesian analysis for making inferences about the parameters based on the observed data. Below is the Lindley approximation of
$$E(U(\vartheta)) = \frac{\int U(\vartheta) \exp\left(L(\vartheta) + P(\vartheta)\right) d\vartheta}{\int \exp\left(L(\vartheta) + P(\vartheta)\right) d\vartheta}.$$
In this context, $\vartheta = (\vartheta_1, \vartheta_2, \ldots, \vartheta_r)$ is a vector representing a set of parameters, and $U(\vartheta)$ is a function of these parameters.
$$E(U(\vartheta)) \approx U(\hat{\vartheta}) + \frac{1}{2} \sum_{i=1}^{r} \sum_{j=1}^{r} \left[ U_{i,j}(\hat{\vartheta}) + 2 U_i(\hat{\vartheta}) P_j(\hat{\vartheta}) \right] I^{-1}(\hat{\vartheta})_{i,j} + \frac{1}{2} \sum_{i=1}^{r} \sum_{j=1}^{r} \sum_{k=1}^{r} \sum_{l=1}^{r} L_{i,j,k}(\hat{\vartheta})\, U_l(\hat{\vartheta})\, I^{-1}(\hat{\vartheta})_{i,j}\, I^{-1}(\hat{\vartheta})_{k,l},$$
where $U_i = \frac{\partial U}{\partial \vartheta_i}$, $P_i = \frac{\partial P}{\partial \vartheta_i}$, $U_{i,j} = \frac{\partial^2 U}{\partial \vartheta_i \partial \vartheta_j}$, $L_{i,j,k} = \frac{\partial^3 L}{\partial \vartheta_i \partial \vartheta_j \partial \vartheta_k}$, and $I^{-1}(\hat{\vartheta})_{i,j}$ is the $(i, j)$ element of the variance–covariance matrix. All partial derivatives are evaluated at the maximum likelihood estimates of the parameters. For more details, see [33].
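To make the mechanics of the approximation concrete, the sketch below applies the one-parameter version of Lindley's formula to a deliberately simple example: Exponential($\lambda$) data with an exponential prior on $\lambda$, for which the exact Gamma posterior mean is available for comparison. This toy setting is ours for illustration only; the paper's application involves the four MO-GEVL parameters.

```python
def lindley_exponential(n, S, a):
    """One-parameter Lindley approximation of the posterior mean of the
    rate lambda for Exponential(lambda) data with n observations summing
    to S, under an Exp(a) prior (log-prior P = -a*lambda + const):

        E[U] ~ U + 0.5*(U'' + 2*U'*P')*s2 + 0.5*L'''*U'*s2**2,

    with s2 = -1/L'' and all derivatives evaluated at the MLE."""
    mle = n / S                  # maximiser of L = n*log(lam) - lam*S
    s2 = mle ** 2 / n            # -1/L'',  since L'' = -n/lam**2
    L3 = 2.0 * n / mle ** 3      # third derivative L'''
    # U(lam) = lam  =>  U' = 1, U'' = 0;  log-prior slope P' = -a
    return mle + 0.5 * (0.0 + 2.0 * 1.0 * (-a)) * s2 + 0.5 * L3 * 1.0 * s2 ** 2

n, S, a = 50, 100.0, 1.0
approx = lindley_exponential(n, S, a)
exact = (n + 1) / (S + a)        # exact Gamma(n + 1, S + a) posterior mean
print(approx, exact)
```

The agreement improves as $n$ grows, reflecting the asymptotic character of the expansion.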

5. Simulation

A Monte Carlo simulation is performed in this section to compare the performance of the various parameter estimators discussed in the preceding sections, offering a comprehensive evaluation of their effectiveness. The simulation generates 1000 samples of a variable X for two sample sizes, n = 100 and n = 200. The maximum likelihood estimator (MLE) of the parameters ( α , μ , σ , ζ ) is compared with the Bayesian estimators under two distinct loss functions, the squared error loss and the linear exponential (LINEX) loss, to provide insight into their relative performance and suitability for statistical analysis. Moreover, we explore Bayesian estimation using Lindley’s method for both informative and non-informative priors. For the informative priors, the hyperparameters are determined following the approach initiated by [30] and subsequently employed by [34]. Additionally, an approximate 95 % confidence interval for the parameters is calculated, offering a comprehensive statistical analysis based on the established methodologies. The LINEX loss function is evaluated for three values β ∈ { −1 , 0.5 , 1 }, as shown in Table 1 and Table 2. Table 3 introduces the lower bound (LB), upper bound (UB), and length of the confidence interval (LC). All calculations are performed using the R programming language. The estimators are evaluated under the three scenarios of random removal, with censoring schemes eliminating 10% of the sample size. For fixed removal, the elimination is executed either at the beginning or at the end of the sample. For details on the algorithm for generating progressive Type-II censoring, see [18].
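The progressively Type-II censored samples used in the simulation can be generated with the standard algorithm of [18], which first produces a censored sample from the uniform distribution and then transforms it through the inverse CDF of the target law. The paper's computations use R; the Python sketch below is ours for illustration, with a hypothetical removal scheme and seed.

```python
import math
import random

def progressive_type2_uniform(m, R, seed=42):
    """Progressively Type-II censored sample U_(1) < ... < U_(m) from
    U(0, 1) under removal scheme R = (R_1, ..., R_m), following the
    algorithm of Balakrishnan and Aggarwala [18]. Applying the inverse
    CDF of any continuous distribution to the result yields a censored
    sample from that distribution."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(m)]
    # V_i = W_i^(1 / (i + R_m + ... + R_{m-i+1}))
    v = [w[i - 1] ** (1.0 / (i + sum(R[m - i:]))) for i in range(1, m + 1)]
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    return [1.0 - math.prod(v[m - i:]) for i in range(1, m + 1)]

# e.g. n = 20 units, m = 15 observed, 5 removed at the first failure
u = progressive_type2_uniform(15, [5] + [0] * 14)
print(u[:3], u[-1])
```

For the GEVL case, the quantile transform $x = \mu + \frac{\sigma}{\zeta}\left((-\ln u)^{-\zeta} - 1\right)$ would then map each $u$ to a censored GEVL observation.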
From Table 1, the informative Bayesian estimation gives smaller bias and MSE than the non-informative Bayesian estimation. Bayesian estimation utilizing the LINEX loss function tends to give better outcomes in terms of bias and mean squared error (MSE) than the squared error loss function. This advantage is particularly noticeable when the parameter β is negative, which places a greater penalty on underestimation than on overestimation. The LINEX loss function approaches symmetry for extremely small or large values of β, as detailed in the study by [32]. This finding underscores the effectiveness of the LINEX loss function in various estimation scenarios.
Furthermore, in the first scenario, eliminating at the beginning provides a more accurate estimate in terms of bias and MSE than eliminating at the end. In terms of the sensitivity of the GA to sample size, we find that its behavior is consistent for both small and large sample sizes. Furthermore, the MLE provides smaller bias than the Bayesian estimates. Table 3 shows that, as the sample size grows, the 95% confidence intervals shorten while still encompassing the real value, i.e., we obtain better estimates. The Lindley approximation for the second scenario fails to estimate the parameters effectively for a sample size of 100; however, for n = 200, the estimation improves in terms of bias and MSE.

6. Real Data Example

In this section, we apply the discussed algorithms using actual real data, with a focus on extreme weather events. Extreme weather phenomena, including heatwaves, droughts, and wildfires, are often associated with high temperatures. It is important to recognize that such excessively high temperatures can lead to significant health risks and heat-related illnesses.
Given the recent global crisis triggered by an unexpected rise in temperature levels, understanding historical temperature extremes for each country has become vital. This information is crucial for mitigating the impact of catastrophic events caused by these temperature increases. It also helps in assessing the frequency, severity, and consequences of such events, thereby aiding in the development of more effective strategies for preparedness and response.
For practical demonstration, we analyze real data sets representing the temperatures in Egypt from 1999 to 2023. These data were sourced from https://www.ogimet.com/home.phtml.en, accessed on 12 May 2024.
Table 4 provides basic statistics for this data set, including mean, variance, median, and other relevant measures. This statistical analysis offers insights into the temperature trends in Egypt over the specified period, contributing to a broader understanding of the impacts of rising temperatures on a national scale.
In Table 5, these data sets are fitted to both GEVL and MO-GEVL by using computationally intensive measures of model fit: the Kolmogorov–Smirnov (K–S) goodness-of-fit test at a significance level of α = 0.01, −log(L), the Akaike information criterion (AIC) [35], and the Bayesian information criterion (BIC) [36].
Moreover, Figure 5 shows the empirical data and fitted cumulative distribution function plot for GEVL and MO-GEVL.
The results in Table 5 for that data set show that both the GEVL and MO-GEVL distributions give a good fit for these data, while MO-GEVL gives a better fit than GEVL in terms of lower indicators (K–S, −log(L), AIC, and BIC). Moreover, Figure 5 indicates that the MO-GEVL distribution provides a more reasonable fit than GEVL.
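The fit indicators in Table 5 can be assembled from the fitted CDF and log-density alone. The sketch below is an illustrative Python implementation of the K–S statistic, AIC, and BIC for the GEVL model; the data vector is hypothetical, while the plugged-in parameters are those reported for GEVL in Table 5.

```python
import math

def gevl_cdf(x, mu, sigma, zeta):
    """GEVL CDF: exp(-Phi(x)) with Phi(x) = [1 + zeta*(x-mu)/sigma]^(-1/zeta)."""
    return math.exp(-(1.0 + zeta * (x - mu) / sigma) ** (-1.0 / zeta))

def gevl_logpdf(x, mu, sigma, zeta):
    t = (1.0 + zeta * (x - mu) / sigma) ** (-1.0 / zeta)
    return (1.0 + zeta) * math.log(t) - t - math.log(sigma)

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and `cdf`."""
    xs = sorted(data)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def fit_criteria(loglik, k, n):
    """AIC and BIC from a maximised log-likelihood with k parameters."""
    return 2 * k - 2 * loglik, k * math.log(n) - 2 * loglik

data = [31.2, 33.5, 34.1, 36.8, 38.0]      # hypothetical temperatures
mu, sigma, zeta = 32.287, 5.3049, 0.2481   # GEVL fit reported in Table 5
ll = sum(gevl_logpdf(x, mu, sigma, zeta) for x in data)
aic, bic = fit_criteria(ll, k=3, n=len(data))
ks = ks_statistic(data, lambda x: gevl_cdf(x, mu, sigma, zeta))
print(ks, aic, bic)
```

The same three functions, with the MO-GEVL CDF and log-density substituted, reproduce the corresponding row of the comparison.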
In Table 6, we present the maximum likelihood and Bayes estimates of the MO-GEVL parameters considered in the sections above for the three elimination cases, computed using a GA.

7. Conclusions

This study focuses on parameter estimation for variables following the Marshall–Olkin extended generalized extreme value distribution (MO-GEVL) within the framework of type-II progressive censoring. This methodological approach aims to enhance the accuracy of statistical inferences drawn from censored data sets, providing significant insights into the distribution’s characteristics and behavior under specified conditions.
We utilize artificial intelligence, specifically genetic algorithms (GAs), for computing the maximum likelihood estimates (MLEs) of the parameters. Additionally, Bayesian estimation of the parameters is examined under two types of loss functions, symmetric and asymmetric, considering both informative and non-informative priors.
We also use progressive censoring type-II under three different scenarios of elimination: fixed, discrete uniform, and binomial, with a 10 % elimination from the sample size. Within the fixed random removal censoring scheme, we consider two specific cases: elimination at the beginning and at the end of the sample.
A Monte Carlo simulation is conducted to compare the performance of various estimators of the MO-GEVL distribution parameters using Lindley’s method. The simulation results show that informative Bayesian estimation yields results very close to, but still better than, non-informative Bayesian estimation in terms of bias and mean squared error (MSE). Furthermore, Bayesian estimation under the LINEX loss function shows results comparable to the SELF loss function regarding both bias and MSE. It is noted that a negative value of β in the LINEX loss function places more emphasis on underestimation than on overestimation. For extreme (small or large) values of β, the LINEX loss function tends to be almost symmetric, as illustrated in Table 1.
For the first case of elimination, removing samples at the beginning provides better estimates in terms of bias and MSE than elimination at the end. In terms of sensitivity to sample size, the GA’s performance is consistent across both smaller and larger sample sizes. Moreover, the MLE tends to offer smaller bias compared with Bayesian estimation. As illustrated in Table 3, we notice that as sample sizes increase, we obtain more accurate confidence intervals, in the sense of a smaller interval length containing both the true and estimated values of the parameters.

Author Contributions

Conceptualization, R.A.E.-W.A., S.W.S. and T.R.; methodology, R.A.E.-W.A., S.W.S. and T.R.; software, R.A.E.-W.A., S.W.S. and T.R.; validation, R.A.E.-W.A., S.W.S. and T.R.; formal analysis, R.A.E.-W.A., S.W.S. and T.R.; investigation, R.A.E.-W.A., S.W.S. and T.R.; resources, R.A.E.-W.A., S.W.S. and T.R.; data curation, R.A.E.-W.A., S.W.S. and T.R.; writing—original draft preparation, R.A.E.-W.A., S.W.S. and T.R.; writing—review and editing, R.A.E.-W.A., S.W.S. and T.R.; visualization, R.A.E.-W.A., S.W.S. and T.R.; supervision, R.A.E.-W.A., S.W.S. and T.R.; project administration, R.A.E.-W.A., S.W.S. and T.R.; funding acquisition, T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Graduate Studies and Scientific Research at Qassim University, Saudi Arabia.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The Researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gilli, M.; Këllezi, E. An application of extreme value theory for measuring financial risk. Comput. Econ. 2006, 27, 207–228. [Google Scholar] [CrossRef]
  2. Fisher, R.A.; Tippett, L.H.C. Limiting forms of the frequency distribution of the largest or smallest member of a sample. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1928; Volume 24, pp. 180–190. [Google Scholar]
  3. Bali, T.G. The generalized extreme value distribution. Econ. Lett. 2003, 79, 423–427. [Google Scholar] [CrossRef]
  4. Hosking, J.R.M.; Wallis, J.R.; Wood, E.F. Estimation of the generalized extreme-value distribution by the method of probability-weighted moments. Technometrics 1985, 27, 251–261. [Google Scholar] [CrossRef]
  5. Bertin, E.; Clusel, M. Generalized extreme value statistics and sum of correlated variables. J. Phys. A Math. Gen. 2006, 39, 7607. [Google Scholar] [CrossRef]
  6. Hu, X.; Fang, G.; Yang, J.; Zhao, L.; Ge, Y. Simplified models for uncertainty quantification of extreme events using Monte Carlo technique. Reliab. Eng. Syst. Saf. 2023, 230, 108935. [Google Scholar] [CrossRef]
  7. Ali, Y.; Haque, M.M.; Mannering, F. A Bayesian generalised extreme value model to estimate real-time pedestrian crash risks at signalised intersections using artificial intelligence-based video analytics. Anal. Methods Accid. Res. 2023, 38, 100264. [Google Scholar] [CrossRef]
  8. Lovell, C.C.; Harrison, I.; Harikane, Y.; Tacchella, S.; Wilkins, S.M. Extreme value statistics of the halo and stellar mass distributions at high redshift: Are JWST results in tension with ΛCDM? Mon. Not. R. Astron. Soc. 2023, 518, 2511–2520. [Google Scholar] [CrossRef]
  9. Marshall, A.W.; Olkin, I. A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika 1997, 84, 641–652. [Google Scholar] [CrossRef]
  10. Jose, K. Marshall-Olkin family of distributions and their applications in reliability theory, time series modeling and stress-strength analysis. In Proceedings of the 58th World Statistical Congress, Dublin, Ireland, 21–26 August 2011; Volume 201, pp. 3918–3923. [Google Scholar]
  11. Obulezi, O.J.; Anabike, I.C.; Oyo, O.G.; Igbokwe, C.; Etaga, H. Marshall-Olkin Chris-Jerry distribution and its applications. Int. J. Innov. Sci. Res. Technol. 2023, 8, 522–533. [Google Scholar]
  12. Ozkan, E.; Golbasi Simsek, G. Generalized Marshall-Olkin exponentiated exponential distribution: Properties and applications. PLoS ONE 2023, 18, e0280349. [Google Scholar] [CrossRef]
  13. Niyoyunguruza, A.; Odongo, L.O.; Nyarige, E.; Habineza, A.; Muse, A.H. Marshall-Olkin Exponentiated Frechet Distribution. J. Data Anal. Inf. Process. 2023, 11, 262–292. [Google Scholar]
  14. Alsadat, N.; Nagarjuna, V.B.; Hassan, A.S.; Elgarhy, M.; Ahmad, H.; Almetwally, E.M. Marshall–Olkin Weibull–Burr XII distribution with application to physics data. AIP Adv. 2023, 13, 095325. [Google Scholar] [CrossRef]
  15. Phoong, S.Y.; Ismail, M.T. A comparison between Bayesian and maximum likelihood estimations in estimating finite mixture model for financial data. Sains Malays. 2015, 44, 1033–1039. [Google Scholar] [CrossRef]
  16. Haldurai, L.; Madhubala, T.; Rajalakshmi, R. A study on genetic algorithm and its applications. Int. J. Comput. Sci. Eng 2016, 4, 139–143. [Google Scholar]
  17. Scrucca, L. GA: A package for genetic algorithms in R. J. Stat. Softw. 2013, 53, 1–37. [Google Scholar] [CrossRef]
  18. Balakrishnan, N.; Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000; pp. 1221–1253. [Google Scholar] [CrossRef]
  19. Khalifa, E.H.; Ramadan, D.A.; Alqifari, H.N.; El-Desouky, B.S. Bayesian Inference for Inverse Power Exponentiated Pareto Distribution Using Progressive Type-II Censoring with Application to Flood-Level Data Analysis. Symmetry 2024, 16, 309. [Google Scholar] [CrossRef]
  20. El-Morshedy, M.; El-Sagheer, R.M.; El-Essawy, S.H.; Alqahtani, K.M.; El-Dawoody, M.; Eliwa, M.S. One-and Two-Sample Predictions Based on Progressively Type-II Censored Carbon Fibres Data Utilizing a Probability Model. Comput. Intell. Neurosci. 2022, 2022, 6416806. [Google Scholar] [CrossRef] [PubMed]
  21. Eliwa, M.S.; Ahmed, E.A. Reliability analysis of constant partially accelerated life tests under progressive first failure type-II censored data from Lomax model: EM and MCMC algorithms. AIMS Math. 2023, 8, 29–60. [Google Scholar] [CrossRef]
  22. EL-Sagheer, R.M.; El-Morshedy, M.; Al-Essa, L.A.; Alqahtani, K.M.; Eliwa, M.S. The Process Capability Index of Pareto Model under Progressive Type-II Censoring: Various Bayesian and Bootstrap Algorithms for Asymmetric Data. Symmetry 2023, 15, 879. [Google Scholar] [CrossRef]
  23. EL-Sagheer, R.M.; Eliwa, M.S.; El-Morshedy, M.; Al-Essa, L.A.; Al-Bossly, A.; Abd-El-Monem, A. Analysis of the Stress–Strength Model Using Uniform Truncated Negative Binomial Distribution under Progressive Type-II Censoring. Axioms 2023, 12, 949. [Google Scholar] [CrossRef]
  24. Attwa, R.; Sadk, S.; Aljohani, H. Investigation the generalized extreme value under liner distribution parameters for progressive type-II censoring by using optimization algorithms. AIMS Math 2024, 9, 15276–15302. [Google Scholar] [CrossRef]
  25. Wu, S.J.; Chang, C.T. Inference in the Pareto distribution based on progressive type II censoring with random removals. J. Appl. Stat. 2003, 30, 163–172. [Google Scholar] [CrossRef]
  26. Ghahramani, M.; Sharafi, M.; Hashemi, R. Analysis of the progressively Type-II right censored data with dependent random removals. J. Stat. Comput. Simul. 2020, 90, 1001–1021. [Google Scholar] [CrossRef]
  27. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  28. Mokhlis, N.A. Reliability of a Stress-Strength Model with Burr Type III Distributions. Commun. Stat.-Theory Methods 2014, 34, 1643–1657. [Google Scholar] [CrossRef]
  29. Mokhlis, N.A.; Khames, S.K.; Sadk, S.W. Estimation of Stress-Strength Reliability for Marshall-Olkin Extended Weibull Family Based on Type-II Progressive Censoring. J. Stat. Appl. Probab. 2021, 10, 385–396. [Google Scholar]
  30. Ahn, S.E.; Park, C.; Kim, H. Hazard rate estimation of a mixture model with censored lifetimes. Stoch. Environ. Res. Risk Assess. 2007, 21, 711–716. [Google Scholar] [CrossRef]
  31. Tse, C.; Yuen, H. Statistical analysis of Weibull distributed lifetime data under type II progressive censoring with binomial removals. J. Appl. Stat. 2000, 27, 1033–1043. [Google Scholar] [CrossRef]
  32. Khatun, N.; Matin, M.A. A study on LINEX loss function with different estimating methods. Open J. Stat. 2020, 10, 52. [Google Scholar] [CrossRef]
  33. Lindley, D.V. Approximate bayesian methods. Trab. Estadística Y Investig. Oper. 1980, 31, 223–245. [Google Scholar] [CrossRef]
  34. Ali, S.; Aslam, M. Choice of suitable informative prior for the scale parameter of mixture of Laplace distribution using type-I censoring scheme under different loss function. Electron. J. Appl. Stat. Anal. 2013, 6, 32–56. [Google Scholar] [CrossRef]
  35. Akaike, H. On Entropy Maximization Principle. In Applications of Statistics; Krishnaiah, P.R., Ed.; Scientific Research Publishing Inc.: Amsterdam, The Netherlands, 1977; Available online: https://www.scirp.org/reference/referencespapers?referenceid=2053767 (accessed on 1 March 2024).
  36. Wright, B.D.; Stone, M.H. Best Test Design: Rasch Measurement; Mesa Press: Chicago, IL, USA, 1979; Available online: https://www.scirp.org/reference/referencespapers?referenceid=1646017 (accessed on 1 March 2024).
Figure 1. Plots of the parent distribution GEVL (at α = 1 ) and the MO-GEVL cumulative distribution function for ζ ≠ 0 at certain parameter values. (a) α varying and ζ > 0 fixed, where ( μ = 0.2 , σ = 1 , ζ = 0.05 ). (b) α fixed and ζ > 0 varying, where ( α = 0.6 , μ = 0.3 , σ = 1 ). (c) α varying and ζ < 0 fixed, where ( μ = 0.3 , σ = 1 , ζ = −0.02 ). (d) α fixed and ζ < 0 varying, where ( α = 0.6 , μ = 0.3 , σ = 1 ).
Figure 2. Plots of the parent distribution GEVL (at α = 1 ) and the MO-GEVL cumulative distribution function, where ζ → 0 and ( μ = 0.3 , σ = 1 ).
Figure 3. Plots of the parent distribution GEVL (at α = 1 ) and the MO-GEVL probability density function for ζ ≠ 0 at certain parameter values. (a) α varying and ζ > 0 fixed, where ( μ = 0.2 , σ = 1 , ζ = 0.5 ). (b) α fixed and ζ > 0 varying, where ( α = 0.6 , μ = 0.3 , σ = 1 ). (c) α varying and ζ < 0 fixed, where ( μ = 0.3 , σ = 1 , ζ = −0.2 ). (d) α fixed and ζ < 0 varying, where ( α = 0.6 , μ = 0.3 , σ = 1 ).
Figure 4. Plots of the parent distribution GEVL (at α = 1 ) and the MO-GEVL probability density function, where ζ → 0 and ( μ = 0.3 , σ = 1 ).
Figure 5. The empirical and fitted cumulative distribution function plot for GEVL and MO-GEVL.
Table 1. The bias and MSE of the MLE and Bayesian estimates of the parameters ( α = 0.8 , μ = 0.1 , σ = 0.3 , ζ = 0.1 ) for informative and non-informative priors under the SELF (sq) and LINEX loss functions at β = ( 0.5 , 1 , −1 ), for n = 100, 200, under the first scenario.
MLE | Bayesian
Bayesian: Non-Informative | Informative
mle | sq | β = 0.5 | β = 1 | β = −1 | sq | β = 0.5 | β = 1 | β = −1
Beginning, n = 100: α Bias −0.04849 0.14440 0.28214 0.13157 0.28826 0.16201 0.26780 0.11578 0.29291
MSE 0.00235 0.02085 0.07960 0.01731 0.08309 0.02625 0.07172 0.01340 0.08579
μ Bias −0.01901 −0.05079 −0.08582 −0.05175 −0.08592 −0.04878 −0.08554 −0.05267 −0.08598
MSE 0.00036 0.00258 0.00737 0.00268 0.00738 0.00238 0.00732 0.00277 0.00739
σ Bias 0.02540 0.02361 0.01151 0.02347 0.01141 0.02390 0.01171 0.02332 0.01132
MSE 0.00064 0.00056 0.00013 0.00055 0.00013 0.00057 0.00014 0.00054 0.00013
ζ Bias 0.00648 0.00882 −0.09157 0.00591 −0.09201 0.01460 −0.09029 0.00301 −0.09233
MSE 0.00004 0.00008 0.00839 0.00003 0.00847 0.00021 0.00815 0.00001 0.00853
n = 200: α Bias 0.00286 0.07174 0.14451 0.06425 0.14074 0.08511 0.14999 0.05630 0.13617
MSE 0.00001 0.00515 0.02088 0.00413 0.01981 0.00724 0.02250 0.00317 0.01854
μ Bias −0.02139 −0.03285 −0.04908 −0.03332 −0.04940 −0.03188 −0.04843 −0.03379 −0.04970
MSE 0.00046 0.00108 0.00241 0.00111 0.00244 0.00102 0.00234 0.00114 0.00247
σ Bias 0.02540 0.02627 0.02141 0.02621 0.02135 0.02639 0.02152 0.02615 0.02130
MSE 0.00064 0.00069 0.00046 0.00069 0.00046 0.00070 0.00046 0.00068 0.00045
ζ Bias −0.01275 −0.01415 −0.09499 −0.01601 −0.09514 −0.01042 −0.09452 −0.01787 −0.09522
MSE 0.00016 0.00020 0.00902 0.00026 0.00905 0.00011 0.00893 0.00032 0.00907
End, n = 100: α Bias −0.14978 −0.03872 −0.01939 −0.04800 −0.02756 −0.02340 −0.00651 −0.05832 −0.03691
MSE 0.02243 0.00150 0.00038 0.00230 0.00076 0.00055 0.00004 0.00340 0.00136
μ Bias 0.00032 −0.01795 −0.01635 −0.01890 −0.01732 −0.01599 −0.01437 −0.01984 −0.01827
MSE 0.00000 0.00032 0.00027 0.00036 0.00030 0.00026 0.00021 0.00039 0.00033
σ Bias 0.02540 0.02480 0.01497 0.02467 0.01486 0.02506 0.01517 0.02454 0.01476
MSE 0.00064 0.00061 0.00022 0.00061 0.00022 0.00063 0.00023 0.00060 0.00022
ζ Bias −0.01092 −0.02958 −0.21639 −0.03401 −0.21064 −0.02049 −0.22949 −0.03833 −0.20533
MSE 0.00012 0.00087 0.04682 0.00116 0.04437 0.00042 0.05267 0.00147 0.04216
n = 200: α Bias −0.04199 0.02381 0.06964 0.01766 0.06545 0.03482 0.07639 0.01115 0.06069
MSE 0.00176 0.00057 0.00485 0.00031 0.00428 0.00121 0.00584 0.00012 0.00368
μ Bias −0.00963 −0.02037 −0.02762 −0.02082 −0.02802 −0.01944 −0.02679 −0.02127 −0.02841
MSE 0.00009 0.00041 0.00076 0.00043 0.00079 0.00038 0.00072 0.00045 0.00081
σ Bias 0.02540 0.02547 0.01943 0.02541 0.01938 0.02560 0.01954 0.02535 0.01933
MSE 0.00064 0.00065 0.00038 0.00065 0.00038 0.00066 0.00038 0.00064 0.00037
ζ Bias −0.01493 −0.03159 −0.14315 −0.03402 −0.14158 −0.02661 −0.14637 −0.03641 −0.14004
MSE 0.00022 0.00100 0.02049 0.00116 0.02004 0.00071 0.02142 0.00133 0.01961
Table 2. The bias and MSE of the MLE and Bayesian estimates of the parameters ( α = 0.8 , μ = 0.1 , σ = 0.3 , ζ = 0.1 ) for informative and non-informative priors under the SELF (sq) and LINEX loss functions at β = ( 0.5 , 1 , −1 ), for n = 100, 200, under the second and third scenarios.
MLE | Bayesian
Bayesian: Non-Informative | Informative
mle | sq | β = 0.5 | β = 1 | β = −1 | sq | β = 0.5 | β = 1 | β = −1
The second scenario, n = 100: α Bias −0.16839 10.81341 12.15082 0.08153 −0.04269 0.80065 0.75774 0.11382 −0.03394
MSE 0.02941 757.20413 951.78606 0.04362 0.05409 2.72760 3.01962 0.08857 0.05107
μ Bias 0.02725 −3.00983 −3.39137 −0.80625 −0.85106 −0.37712 −0.00351 −0.54127 −0.56237
MSE 0.00088 57.52639 72.39837 3.33325 3.68416 0.73362 0.00270 1.35998 1.49328
σ Bias 0.03994 −0.44201 −0.51031 −0.25278 −0.28219 0.02032 0.01054 −0.18350 −0.20396
MSE 0.00163 1.48031 1.89812 0.51358 0.60202 0.00119 0.00242 0.28501 0.32537
ζ Bias −0.00825 −7.41651 −8.45450 −1.11587 −1.23368 −0.07490 −0.11915 −0.72414 −0.80845
MSE 0.00128 363.37079 461.35626 6.29009 6.85314 0.01961 0.02492 2.35096 2.55924
n = 200: α Bias −0.14404 0.10046 0.14950 −0.03728 0.04401 −0.00077 0.03724 −0.04789 0.02488
MSE 0.02494 1.61073 1.56547 0.04453 0.20362 0.08524 0.09271 0.02982 0.12389
μ Bias 0.02102 −0.03843 −0.05339 −0.02538 −0.03962 −0.01708 −0.03771 −0.02068 −0.03437
MSE 0.00065 0.10861 0.11110 0.03770 0.03969 0.02444 0.05115 0.02185 0.02346
σ Bias 0.03906 0.03067 0.02699 0.03161 0.02786 0.03406 0.02120 0.03222 0.02845
MSE 0.00158 0.00602 0.00513 0.00452 0.00399 0.00284 0.01998 0.00373 0.00334
ζ Bias −0.01357 −0.07887 −0.14503 −0.05161 −0.11441 −0.03012 −0.10538 −0.04537 −0.10586
MSE 0.00069 0.32622 0.35107 0.06834 0.08135 0.01110 0.04185 0.03512 0.04605
The third scenario, n = 100: α Bias −0.15661 0.29659 0.43497 0.06345 0.15832 0.08177 0.14033 0.01536 0.12855
MSE 0.02946 5.18245 6.72495 0.21091 0.30526 0.14855 0.21269 0.08859 0.18725
μ Bias 0.01515 −0.10443 −0.14572 −0.07260 −0.10354 −0.05720 −0.08329 −0.06271 −0.08987
MSE 0.00048 0.35805 0.47979 0.08231 0.10751 0.07242 0.06162 0.04615 0.06138
σ Bias 0.03803 0.02643 0.01838 0.02861 0.02118 0.03028 0.02225 0.02968 0.02255
MSE 0.00151 0.01543 0.01908 0.00817 0.00954 0.00705 0.01824 0.00584 0.00655
ζ Bias −0.01938 −0.17487 −0.35649 −0.10983 −0.25452 −0.06443 −0.23201 −0.09540 −0.22588
MSE 0.00085 1.01769 1.59569 0.15787 0.25495 0.04943 0.15537 0.08432 0.14830
n = 200: α Bias −0.15676 0.03544 0.10326 −0.05754 −0.00603 −0.04898 −0.00481 −0.07226 −0.01604
MSE 0.02857 5.87232 6.70814 0.07186 0.07848 0.05296 0.06541 0.05341 0.06126
μ Bias 0.01497 −0.03135 −0.05191 −0.01691 −0.03477 −0.01085 −0.02802 −0.01457 −0.03137
MSE 0.00046 0.31528 0.36467 0.02724 0.03339 0.00764 0.00957 0.01326 0.01724
σ Bias 0.03777 0.03334 0.02957 0.03498 0.03141 0.03706 0.03336 0.03552 0.03203
MSE 0.00149 0.01553 0.01734 0.00599 0.00625 0.00180 0.00190 0.00397 0.00401
ζ Bias −0.02116 −0.06314 −0.14052 −0.04503 −0.11530 −0.02866 −0.11144 −0.04236 −0.10937
MSE 0.00082 0.38461 0.48628 0.04078 0.06280 0.00440 0.04438 0.02127 0.03749
Table 3. The 95% confidence intervals of the parameters ( α = 0.8 , μ = 0.1 , σ = 0.3 , ζ = 0.1 ) of MO-GEVL under the three censoring scenarios.
Cases | n | Parameter | LB | UB | LC
The first scenario, Beginning, n = 100: α 0.27815 1.22488 0.94673
μ −0.03417 0.19615 0.23032
σ 0.28552 0.36528 0.07976
ζ −0.07086 0.28382 0.35468
n = 200: α 0.50091 1.10481 0.60390
μ 0.00426 0.15295 0.14869
σ 0.29995 0.35084 0.05089
ζ −0.05482 0.22933 0.28415
End, n = 100: α 0.29244 1.00801 0.71557
μ −0.00611 0.20676 0.21288
σ 0.28791 0.36289 0.07498
ζ −0.13324 0.31141 0.44465
n = 200: α 0.48231 1.03371 0.55140
μ 0.01781 0.16293 0.14512
σ 0.29962 0.35117 0.05155
ζ −0.08024 0.25038 0.33062
The third scenario, n = 100: α 0.25359 1.03319 0.77960
μ −0.00513 0.23543 0.24056
σ 0.30017 0.37589 0.07572
ζ −0.11454 0.27578 0.39032
n = 200: α 0.37721 0.90928 0.53207
μ 0.03269 0.19724 0.16456
σ 0.31128 0.36426 0.05298
ζ −0.04625 0.20393 0.25018
The second scenario, n = 100: α −0.02292 1.28613 1.30905
μ −0.06549 0.31999 0.38548
σ 0.29329 0.38659 0.09330
ζ −0.24793 0.43143 0.67936
n = 200: α 0.35876 0.95316 0.59440
μ 0.03342 0.20863 0.17521
σ 0.31109 0.36704 0.05596
ζ −0.03795 0.21080 0.24875
Table 4. The basic statistics of extreme temperatures in Celsius for the Egypt data set.
Country | Mean | Median | VAR | Standard Deviation | Minimum | Maximum | Range | Quartiles (25%, 50%, 75%)
Egypt 34.29843 34.4 29.73764 5.45322 23 46 23 (30.4, 34.4, 38.2)
Table 5. The values of K-S, log ( L ) , (AIC), and BIC for both GEVL and MO-GEVL distributions.
Distribution | Parameters | K–S | Tabulated Value | −log(L) | AIC | BIC
GEVL ( 32.287 , 5.3049 , 0.2481 ) 0.04378 0.06809 1780.451 3566.902 3579.954
MO-GEVL ( 0.9567 , 32.346 , 5.404 , 0.2536 ) 0.03769 152.853 297.7059 254.89
Table 6. MLE and Bayes estimates for α , μ , σ , ζ , where X represents the temperature of Egypt and Y represents the temperature of Queensland (Australia).
Bayesian
Bayesian: Non-Informative | Informative
Complete | mle | sq | β = 0.5 | β = 1 | β = −1 | sq | β = 0.5 | β = 1 | β = −1
The first scenarioBeginning α 0.90390.90430.91040.93330.91040.93350.90220.92500.91860.9422
μ 44.953944.949344.942144.921544.945044.924544.936244.915144.947944.9275
σ 1.22081.27071.26921.26641.26991.26631.26931.26651.26911.2663
ζ −0.18700.21420.21420.21550.21420.21550.21420.21540.21420.2155
End α 0.90390.90010.28550.68190.36410.6930−0.42570.48760.49810.8063
μ 44.953944.942745.305344.991645.389245.031245.195244.915945.518745.0753
σ 1.22081.27041.37481.29711.32531.29931.36621.29291.38511.3015
ζ −0.18700.18480.13330.18460.13490.18550.12980.18260.13640.1865
The second scenario α 0.90390.90150.98840.91380.99030.91381.00760.93810.96540.8888
μ 44.953944.959844.920244.980444.913044.972744.935544.995344.906244.9649
σ 1.22081.27001.25671.26811.26261.26741.25801.26951.25541.2667
ζ −0.18700.19290.19300.17900.19260.17860.19380.17970.19210.1782
The third scenario α 0.90390.88120.82490.94330.82560.94420.77940.90260.86550.9894
μ 44.953944.955844.959644.867744.971844.881344.935044.836544.984344.8940
σ 1.22081.27061.27651.25191.27411.25261.27541.25071.27761.2532
ζ −0.18700.21640.21790.22820.21810.22830.21770.22790.21820.2285
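The squared error and LINEX columns differ only in the loss function applied to the posterior: under squared error the Bayes estimate is the posterior mean, while under LINEX loss with parameter β it is −(1/β) log E[exp(−βθ)], so positive β penalizes over-estimation and negative β penalizes under-estimation. A sketch from posterior draws (the normal draws for α below are hypothetical, not the paper's Lindley-approximation posterior):

```python
import numpy as np

def bayes_sq(draws):
    """Bayes estimate under squared-error loss: the posterior mean."""
    return float(np.mean(draws))

def bayes_linex(draws, beta):
    """Bayes estimate under LINEX loss:
    -(1/beta) * log E[exp(-beta * theta)]."""
    draws = np.asarray(draws, dtype=float)
    return float(-np.log(np.mean(np.exp(-beta * draws))) / beta)

# Hypothetical posterior draws for alpha
rng = np.random.default_rng(42)
alpha_draws = rng.normal(0.90, 0.05, size=10_000)

est_sq = bayes_sq(alpha_draws)
est_pos = bayes_linex(alpha_draws, beta=1.0)   # shrinks below the mean
est_neg = bayes_linex(alpha_draws, beta=-1.0)  # pushed above the mean
```

For a roughly normal posterior the LINEX estimate is approximately the mean minus β times half the posterior variance, which is why the β = 1 and β = −1 columns in Table 6 bracket the squared-error column.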
Attwa, R.A.E.-W.; Sadk, S.W.; Radwan, T. Estimation of Marshall–Olkin Extended Generalized Extreme Value Distribution Parameters under Progressive Type-II Censoring by Using a Genetic Algorithm. Symmetry 2024, 16, 669. https://doi.org/10.3390/sym16060669