Article

The Weighted Flexible Weibull Model: Properties, Applications, and Analysis for Extreme Events

by Ziaurrahman Ramaki 1, Morad Alizadeh 1, Saeid Tahmasebi 1, Mahmoud Afshari 1, Javier E. Contreras-Reyes 2,3,* and Haitham M. Yousof 4
1 Department of Statistics, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr 75169, Iran
2 Instituto de Matemática, Física y Estadística, Facultad de Ingeniería y Negocios, Universidad de Las Américas, Sede Viña del Mar, Viña del Mar 2520000, Chile
3 Centro de Modelación Ambiental y Dinámica de Sistemas (CEMADIS), Facultad de Ingeniería y Negocios, Universidad de Las Américas, Santiago 7500975, Chile
4 Department of Statistics, Mathematics and Insurance, Benha University, Benha 13518, Egypt
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2025, 30(2), 42; https://doi.org/10.3390/mca30020042
Submission received: 17 February 2025 / Revised: 13 April 2025 / Accepted: 14 April 2025 / Published: 16 April 2025
(This article belongs to the Section Social Sciences)

Abstract

The weighted flexible Weibull distribution stands out for its ability to exhibit a bathtub-shaped hazard rate, characterized by an initial increase followed by a decrease over time. This property plays a major role in reliability analysis. In this paper, this distribution and its main properties are examined, and the parameters are estimated using several estimation methods. In addition, a simulation study is conducted for different sample sizes. The performance of the proposed model is illustrated through two real-world applications: component failure times and COVID-19 mortality. Moreover, the value-at-risk (VaR), the tail value-at-risk (TVaR), the peaks over a random threshold VaR (PORT-VaR), the mean of order P (MOP_P) analysis, and the optimal order of P based on the true mean value can help identify and characterize critical events or outliers in failure events and COVID-19 death data across different counties. Finally, the PORT-VaR estimators are provided under a risk analysis for both applications.

1. Introduction

The Weibull distribution is a versatile model widely used in fields such as reliability engineering and survival analysis. However, it struggles to adequately fit data with bathtub-shaped or unimodal hazard rates. To address this limitation, several extensions and modifications have been proposed in the literature. Notable contributions include the works of [1,2,3,4,5,6], among others. More recently, the flexible Weibull (FW) distribution was introduced by [7]. Its cumulative distribution function (CDF) is defined as
$$F(x)=1-\exp\!\left(-e^{\theta x-\lambda/x}\right),$$
and the corresponding probability density function (PDF) as
$$f(x)=\left(\theta+\frac{\lambda}{x^{2}}\right)\exp\!\left(\theta x-\frac{\lambda}{x}\right)\exp\!\left(-e^{\theta x-\lambda/x}\right).$$
For λ = 0 and θ = log(α), the FW model reduces to the exponential distribution, demonstrating its role as a generalization of the Weibull distribution. Several extensions of the FW distribution have been developed, such as those provided by [7,8,9,10], and others. From the latter references, it is known that there does not exist a single distribution suitable for modeling and analyzing all types of data. Thus, the goal of this study is to develop a generalization of the FW distribution, called the weighted flexible Weibull (WFW) distribution. The WFW model is based on a PDF derived from the upper record values of independent FW random variables. This approach has also been applied to the Lindley distribution [11]. The WFW distribution could be considered in medical data analysis, especially in modeling unimodal and bimodal datasets related to COVID-19 events and cancer disease, as discussed in subsequent sections.
On the other hand, the PORT-VaR analysis plays a crucial role in identifying and characterizing critical events or outliers in failure events and COVID-19 death data across various counties [12]. In this study, PORT-VaR estimators are utilized to gain insight in risk analysis relative to the fields of engineering and medicine. In the context of engineering, the PORT-VaR analysis assists in pinpointing extreme failure events that exceed predetermined risk thresholds. By identifying these peaks over the random threshold, engineers can gain insights into potential weaknesses or vulnerabilities in systems, components, or processes. By applying PORT-VaR estimators to mortality data, healthcare professionals can identify countries with unusually high mortality rates compared to the expected threshold [13]. This analysis enables targeted interventions, resource allocation, and public health measures to mitigate the impact of the pandemic and prioritize healthcare resources effectively.
This study proposes several key contributions to actuarial risk modeling and statistical analysis, addressing critical gaps and introducing novel methodologies with broad applications. One of the primary contributions is the introduction of the WFW distribution, a versatile model with good performance in lifetime data analysis. Its properties (including cumulative and residual cumulative entropy) provide deeper insights into risk modeling. A simulation study confirms the consistency of the maximum likelihood estimators (MLEs), ensuring the reliability of parameter estimation. Furthermore, the study applies the WFW distribution to real-world datasets, including unimodal and COVID-19 case data, demonstrating its effectiveness in practical scenarios. A significant advancement in this work is the development of PORT-VaR estimators, specifically tailored for risk assessment in engineering and medical applications. These estimators provide a refined approach for identifying and quantifying extreme events, which is crucial for making informed decisions in high-risk environments. The study expands the discussion on PORT-VaR, emphasizing its practical significance in modern risk assessment and its potential to improve public health planning, engineering safety measures, and financial risk management. Our key findings go beyond a review of existing ideas and introduce several new aspects of actuarial risk assessment:
  • This study is a pioneer in the use of weighted distributions in actuarial risk modeling, filling a critical research gap and providing a fresh perspective on risk evaluation.
  • Actuarial risk metrics are applied beyond traditional financial contexts to medical data, specifically for assessing COVID-19 risk and managing datasets with extreme values. This cross-disciplinary approach highlights the adaptability of actuarial methods in diverse fields.
  • A new estimation technique and a sequential sampling plan based on truncated life testing are introduced, enhancing the precision and efficiency of risk assessment models.
  • The discussion on PORT-VaR is expanded, providing a more detailed analysis of its applicability in modern risk assessment scenarios.

2. The WFW Distribution

A random variable X follows the WFW distribution if its CDF and PDF, respectively, are defined as
$$F(x)=\frac{1-\exp\!\left(-e^{\theta x-\lambda/x}\right)}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)},\qquad \theta,\lambda>0,\ x>0,$$
and
$$f(x)=\frac{2\,e^{\theta x-\lambda/x}\left(\theta+\frac{\lambda}{x^{2}}\right)\exp\!\left(-e^{\theta x-\lambda/x}\right)}{\left[1+\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]^{2}},\qquad \theta,\lambda>0,\ x>0.$$
By using Equations (1) and (2), the hazard rate function of X can be formulated as
$$h(x)=\frac{\left(\theta+\frac{\lambda}{x^{2}}\right)e^{\theta x-\lambda/x}}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)}.$$
Figure 1 and Figure 2 provide visual representations of the density and hazard rate functions for varying parameter values. Figure 1 demonstrates that the density function f ( x ) can exhibit symmetrical, right-skewed, left-skewed, or even bimodal forms. On the other hand, Figure 2 shows that the hazard rate function h ( x ) can take on increasing, decreasing, or unimodal shapes depending on the parameter settings.
The reversed hazard rate function of the WFW distribution is obtained as follows:
$$r(x;\theta,\lambda)=\frac{2\left(\theta+\frac{\lambda}{x^{2}}\right)e^{-e^{\theta x-\lambda/x}+\theta x-\lambda/x}}{\left[1-\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]\left[1+\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]}.$$
The reversed hazard rate function of the WFW distribution exhibits a decreasing trend across different parameter values. The quantile function is a widely used tool in data analysis and probability, providing valuable information about the distribution of data and their observed values. Let Q(p) = F^{-1}(p) be the quantile function of X. Then, by solving a quadratic equation, the quantile function (for 0 < p < 1) is obtained:
$$Q(p)=\frac{1}{2\theta}\left\{\log\!\left[\log\!\left(\frac{1+p}{1-p}\right)\right]+\sqrt{\left(\log\!\left[\log\!\left(\frac{1+p}{1-p}\right)\right]\right)^{2}+4\theta\lambda}\right\}.$$
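To make the model concrete, the following R sketch implements the CDF, PDF, quantile function, and inverse-transform random generation of Equations (1)–(3); the function names pwfw/dwfw/qwfw/rwfw are illustrative and are not part of the paper.

```r
# Illustrative R implementation of the WFW distribution (not the authors' code).
pwfw <- function(x, theta, lambda) {
  u <- exp(-exp(theta * x - lambda / x))          # exp(-e^{theta*x - lambda/x})
  (1 - u) / (1 + u)                               # CDF of Equation (1)
}
dwfw <- function(x, theta, lambda) {
  H <- exp(theta * x - lambda / x)                # H(x) = e^{theta*x - lambda/x}
  2 * (theta + lambda / x^2) * H * exp(-H) / (1 + exp(-H))^2   # PDF of Equation (2)
}
qwfw <- function(p, theta, lambda) {
  a <- log(log((1 + p) / (1 - p)))                # quantile function of Equation (3)
  (a + sqrt(a^2 + 4 * theta * lambda)) / (2 * theta)
}
rwfw <- function(n, theta, lambda) qwfw(runif(n), theta, lambda)  # inverse transform
```

A quick sanity check is that pwfw(qwfw(p, theta, lambda), theta, lambda) returns p for any p in (0, 1).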

3. Properties

3.1. Asymptotic Properties

Let X be a WFW random variable with parameters θ and λ. The asymptotic behavior of the CDF, PDF, and hazard rate function (HRF) is given by
$$F(x)\sim e^{-\lambda/x},\qquad f(x)\sim \frac{\lambda}{x^{2}}\,e^{-\lambda/x},\qquad h(x)\sim \frac{\lambda}{x^{2}}\,\frac{e^{-\lambda/x}}{1-e^{-\lambda/x}},$$
as $x\to 0^{+}$. As $x\to\infty$, the asymptotic behavior of these functions for the WFW distribution is expressed as
$$1-F(x)\sim 2e^{-e^{\theta x}},\qquad f(x)\sim 2\theta\, e^{\theta x}e^{-e^{\theta x}},\qquad h(x)\sim \theta\, e^{\theta x}.$$

3.2. Moments and Generating Function

Moments play a crucial role in identifying and quantifying various statistical characteristics, including the detection of flatness and dispersion, as well as assessing the coefficient of variation. In Table 1, the first four moments, standard deviation (SD), skewness (SK), and kurtosis (KR) are numerically computed. The nth moment of the WFW distribution can be determined using the following relationship [14]:
$$E(X^{n})=n\int_{0}^{\infty}x^{n-1}\left[1-F(x)\right]dx.$$
Employing the outcomes from the study conducted by [15], it is evident that
$$E(X^{n})=n\int_{0}^{\infty}x^{n-1}\,\frac{2\exp\!\left(-e^{\theta x-\lambda/x}\right)}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)}\,dx=2n\sum_{i=0}^{\infty}(-1)^{i}\int_{0}^{\infty}x^{n-1}e^{-(i+1)e^{\theta x-\lambda/x}}\,dx.$$
Further, by using the result 3.471.9 of [16], the nth moment of the WFW distribution is
$$E(X^{n})=2n\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j}}{j!}\int_{0}^{\infty}x^{n-1}e^{-j\theta x-\frac{j\lambda}{x}}\,dx.$$
Then,
$$E(X^{n})=2n\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j}}{j!}\,2\left(\frac{\lambda}{\theta}\right)^{n/2}K_{n}\!\left(2j\sqrt{\theta\lambda}\right)=4n\left(\frac{\lambda}{\theta}\right)^{n/2}\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j}}{j!}\,K_{n}\!\left(2j\sqrt{\theta\lambda}\right),$$
where
$$K_{n}(z)=\frac{\pi\csc(\pi n)}{2}\left[I_{-n}(z)-I_{n}(z)\right].$$
The modified Bessel function of the first kind is
$$I_{n}(x)=\sum_{m=0}^{\infty}\frac{1}{\Gamma(m+n+1)\,m!}\left(\frac{x}{2}\right)^{2m+n},\qquad n\notin\mathbb{Z},$$
and $\Gamma(a)=\int_{0}^{\infty}x^{a-1}e^{-x}\,dx$ denotes the gamma function. Similarly, the nth incomplete moment of X, say
$$E(X^{n}\mid X\leq y)=\frac{1}{F(y)}\int_{0}^{y}x^{n}f(x)\,dx,$$
is
$$E(X^{n}\mid X\leq y)=\frac{1}{F(y)}\int_{0}^{y}\frac{2x^{n}\left(\theta+\frac{\lambda}{x^{2}}\right)e^{\theta x-\lambda/x}\exp\!\left(-e^{\theta x-\lambda/x}\right)}{\left[1+\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]^{2}}\,dx=\frac{2}{F(y)}\sum_{i=0}^{\infty}(-1)^{i}(i+1)\int_{0}^{y}x^{n}\left(\theta+\frac{\lambda}{x^{2}}\right)e^{\theta x-\lambda/x}\,e^{-(i+1)e^{\theta x-\lambda/x}}\,dx=\frac{2}{F(y)}\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j+1}}{j!}\int_{0}^{y}x^{n}\left(\theta+\frac{\lambda}{x^{2}}\right)e^{(j+1)\theta x-\frac{(j+1)\lambda}{x}}\,dx.$$
Following Theorem 2 of [17], then
$$E(X^{n}\mid X\leq y)=\frac{2}{F(y)}\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j+1}}{j!}\left[\theta\,\gamma\!\left(n+1,(j+1)\theta y-\frac{(j+1)\lambda}{y}\right)+\lambda\,\gamma\!\left(n-1,\frac{(j+1)\lambda}{y}\right)\right],$$
for $n\geq 1$, where
$$\gamma(a,x)=\int_{0}^{x}t^{a-1}e^{-t}\,dt$$
represents the lower incomplete gamma function.
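As a numerical cross-check of these series expressions, the nth raw moment can also be evaluated directly from E(X^n) = n∫₀^∞ x^{n-1}[1 − F(x)] dx using the pwfw() sketch of Section 2; the code below is an illustrative computation and not part of the paper.

```r
# Numerical evaluation of E(X^n) via n * Integral_0^Inf x^(n-1) [1 - F(x)] dx
# (illustrative check; pwfw() is the hypothetical CDF sketch from Section 2).
moment_wfw <- function(n, theta, lambda) {
  integrand <- function(x) n * x^(n - 1) * (1 - pwfw(x, theta, lambda))
  integrate(integrand, lower = 0, upper = Inf)$value
}
# e.g., the first two moments for theta = 1.5, lambda = 0.5 (a setting used in Table 1)
moment_wfw(1, theta = 1.5, lambda = 0.5)
moment_wfw(2, theta = 1.5, lambda = 0.5)
```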

3.3. Moment Generating Function

For a random variable X, the moment generating function (MGF) is given by
$$M_{X}(t)=\int_{0}^{\infty}e^{tx}f(x)\,dx=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}\,E(X^{n}).$$
Based on (4), the MGF of X can be expressed as
$$M_{X}(t)=4\sum_{n=1}^{\infty}\sum_{i,j=0}^{\infty}\frac{n\,(-1)^{i+j}(i+1)^{j}\,t^{n}}{j!\,n!}\left(\frac{\lambda}{\theta}\right)^{n/2}K_{n}\!\left(2j\sqrt{\theta\lambda}\right).$$

4. Entropy

4.1. Cumulative Entropy

Cumulative entropy serves as a measure for quantifying the uncertainty of a random variable. It is defined based on the CDF of the random variable X [18]. Then, the cumulative entropy is defined as:
$$CE(X)=-\int_{0}^{\infty}F(x)\log\!\left(F(x)\right)dx.$$
By (1), the C E ( X ) can be expressed as
$$CE(X)=-\int_{0}^{\infty}\frac{1-\exp\!\left(-e^{\theta x-\lambda/x}\right)}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)}\,\log\!\left[\frac{1-\exp\!\left(-e^{\theta x-\lambda/x}\right)}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)}\right]dx=-\int_{0}^{\infty}\frac{1-\exp\!\left(-e^{\theta x-\lambda/x}\right)}{1+\exp\!\left(-e^{\theta x-\lambda/x}\right)}\left\{\log\!\left[1-\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]-\log\!\left[1+\exp\!\left(-e^{\theta x-\lambda/x}\right)\right]\right\}dx.$$
Using the Taylor expansion, the cumulative entropy of the weighted flexible Weibull (WFW) distribution is
$$CE(X)=-\sum_{i,j=0}^{\infty}\frac{(-1)^{i}\left[1+(-1)^{i+1}\right]}{i+1}\int_{0}^{\infty}\left[e^{-(i+j+1)e^{\theta x-\lambda/x}}-e^{-(i+j+2)e^{\theta x-\lambda/x}}\right]dx=-\sum_{i,j=0}^{\infty}\frac{(-1)^{i}\left[1+(-1)^{i+1}\right]}{i+1}\int_{0}^{\infty}e^{-(2i+2j+3)e^{\theta x-\lambda/x}}\,dx=-\sum_{i,j=0}^{\infty}\frac{(-1)^{i}\left[1+(-1)^{i+1}\right]}{i+1}\int_{0}^{\infty}e^{-k\,e^{\theta x-\lambda/x}}\,dx=-\sum_{i,j,l=0}^{\infty}\frac{(-1)^{i+l}\left[1+(-1)^{i+1}\right]k^{l}}{l!\,(i+1)}\int_{0}^{\infty}e^{-l\theta x-\frac{l\lambda}{x}}\,dx,$$
where $k=2i+2j+3$.
Based on [14], it can be seen that
$$\int_{0}^{\infty}e^{-\gamma x-\frac{\lambda}{4x}}\,dx=\sqrt{\frac{\lambda}{\gamma}}\,K_{1}\!\left(\sqrt{\gamma\lambda}\right)$$
and
$$CE(X)=-\sum_{i,j,l=0}^{\infty}\frac{(-1)^{i+l}\left[1+(-1)^{i+1}\right]k^{l}}{l!\,(i+1)}\sqrt{\frac{\lambda}{l\theta}}\,K_{1}\!\left(2\sqrt{l\theta\lambda}\right).$$

4.2. Cumulative Residual Entropy

Cumulative residual entropy [19,20] is a metric for measuring uncertainty in the time remaining until an event occurs. This concept is based on the survival function and provides a better description of temporal unpredictability in events. The applications of cumulative residual entropy have been studied in various fields, including insurance and risk assessment [21]. For a random variable X, the cumulative residual entropy is given by
$$CRE(X)=-\int_{0}^{\infty}\left(1-F(x)\right)\log\!\left(1-F(x)\right)dx.$$
By using the equation above and writing $H(x)=e^{\theta x-\lambda/x}$, the CRE(X) can be expanded as
$$CRE(X)=\int_{0}^{\infty}\left(1-F(x)\right)\sum_{i=0}^{\infty}\frac{F(x)^{i+1}}{i+1}\,dx=-\int_{0}^{\infty}\frac{2e^{-H(x)}}{1+e^{-H(x)}}\log\!\left[\frac{2e^{-H(x)}}{1+e^{-H(x)}}\right]dx$$
$$=-\int_{0}^{\infty}\frac{2e^{-H(x)}}{1+e^{-H(x)}}\left[\log 2-H(x)-\log\!\left(1+e^{-H(x)}\right)\right]dx$$
$$=-\int_{0}^{\infty}\frac{2e^{-H(x)}}{1+e^{-H(x)}}\left[\log 2-H(x)\right]dx+\int_{0}^{\infty}\frac{2e^{-H(x)}}{1+e^{-H(x)}}\sum_{i=0}^{\infty}\frac{(-1)^{i}e^{-(i+1)H(x)}}{(i+1)\left(1+e^{-H(x)}\right)}\,dx$$
$$=-2\log 2\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j}}{j!}\int_{0}^{\infty}e^{-j\theta x-\frac{j\lambda}{x}}\,dx-2\sum_{i,j=0}^{\infty}\frac{(-1)^{i+j}(i+1)^{j}}{j!}\int_{0}^{\infty}e^{-2\left(j\theta x+\frac{j\lambda}{x}\right)}\,dx+2\sum_{i,j,k=0}^{\infty}\frac{(-1)^{i+j+k}(i+1)^{j}(i+j+1)^{k}}{j!\,k!}\int_{0}^{\infty}e^{-k\theta x-\frac{k\lambda}{x}}\,dx.$$
Finally, based on [14], the C R E ( X ) can be expanded as
$$CRE(X)=2\sum_{i,j,k=0}^{\infty}\frac{(-1)^{i+j+k}(i+1)^{j}(i+j+1)^{k}}{j!\,k!}\sqrt{\frac{\lambda}{\theta}}\,K_{1}\!\left(2k\sqrt{\theta\lambda}\right).$$
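Because the series forms above involve Bessel functions and multiple infinite sums, a direct numerical evaluation of CE(X) and CRE(X) from their integral definitions is a useful check. The R sketch below is illustrative (it reuses the hypothetical pwfw() function from Section 2) and is not part of the paper.

```r
# Illustrative numerical evaluation of the cumulative entropy CE(X) and the
# cumulative residual entropy CRE(X) from their integral definitions.
ce_wfw <- function(theta, lambda) {
  g <- function(x) { Fx <- pwfw(x, theta, lambda); ifelse(Fx > 0, -Fx * log(Fx), 0) }
  integrate(g, lower = 0, upper = Inf)$value
}
cre_wfw <- function(theta, lambda) {
  g <- function(x) { Sx <- 1 - pwfw(x, theta, lambda); ifelse(Sx > 0, -Sx * log(Sx), 0) }
  integrate(g, lower = 0, upper = Inf)$value
}
ce_wfw(1.5, 0.5)
cre_wfw(1.5, 0.5)
```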

4.3. Rényi Entropy

For a random variable X, the Rényi entropy [22] is given by
$$I_{R}(\delta)=\frac{1}{1-\delta}\log\int_{0}^{\infty}f(x)^{\delta}\,dx.$$
Additional properties of Rényi entropy can be seen in [23]. Using (6), the term f ( x ) δ can be expressed as
$$f(x)^{\delta}=\frac{2^{\delta}\left(\theta+\frac{\lambda}{x^{2}}\right)^{\delta}}{\left[1+e^{-H(x)}\right]^{2\delta}}\,H(x)^{\delta}\,e^{-\delta H(x)},$$
so that
$$I_{R}(\delta)=\frac{1}{1-\delta}\log\int_{0}^{\infty}\frac{2^{\delta}\left(\theta+\frac{\lambda}{x^{2}}\right)^{\delta}}{\left[1+e^{-H(x)}\right]^{2\delta}}\,H(x)^{\delta}\,e^{-\delta H(x)}\,dx.$$
Note that
$$\frac{1}{\left[1+e^{-H(x)}\right]^{2\delta}}=\sum_{i=0}^{\infty}\binom{-2\delta}{i}e^{-iH(x)}.$$
Thus,
$$\int_{0}^{\infty}f(x)^{\delta}\,dx=2^{\delta}\sum_{i=0}^{\infty}\binom{-2\delta}{i}\int_{0}^{\infty}\left(\theta+\frac{\lambda}{x^{2}}\right)^{\delta}H(x)^{\delta}e^{-(i+\delta)H(x)}\,dx=2^{\delta}\sum_{i,j=0}^{\infty}\binom{-2\delta}{i}\binom{\delta}{j}\theta^{\delta-j}\lambda^{j}\int_{0}^{\infty}x^{-2j}H(x)^{\delta}e^{-(i+\delta)H(x)}\,dx=2^{\delta}\sum_{i,j,k=0}^{\infty}\binom{-2\delta}{i}\binom{\delta}{j}\theta^{\delta-j}\lambda^{j}\int_{0}^{\infty}x^{-2j}e^{\theta(\delta-k)x-\frac{(\delta+k)\lambda}{x}}\,dx.$$
Based on Formula 3.471.9 of [16], $I_{R}(\delta)$ can be simplified as
$$I_{R}(\delta)=\frac{1}{1-\delta}\left\{\delta\log(2)+\log\left[\sum_{i,j,k=0}^{\infty}M(i,j,k)\times 2\left(\frac{(\delta+k)\lambda}{\theta(k-\delta)}\right)^{\frac{1-2j}{2}}K_{1-2j}\!\left(2\sqrt{(\delta+k)\lambda\,(\delta-k)\theta}\right)\right]\right\},$$
where
$$M(i,j,k)=\binom{-2\delta}{i}\binom{\delta}{j}\theta^{\delta-j}\lambda^{j}.$$

5. Estimation Methods

5.1. Maximum Likelihood Estimation Method

The parameters of the WFW distribution are estimated using the MLE method. Given a random sample $x_{1},x_{2},\ldots,x_{n}$ from the WFW distribution, the likelihood function is derived. Using (2), the likelihood function can be expressed as
$$L(\theta,\lambda)=\prod_{i=1}^{n}\frac{2\left(\theta+\frac{\lambda}{x_{i}^{2}}\right)e^{\theta x_{i}-\lambda/x_{i}}\exp\!\left(-e^{\theta x_{i}-\lambda/x_{i}}\right)}{\left[1+\exp\!\left(-e^{\theta x_{i}-\lambda/x_{i}}\right)\right]^{2}}=2^{n}\prod_{i=1}^{n}\left(\theta+\frac{\lambda}{x_{i}^{2}}\right)\exp\!\left(-\sum_{i=1}^{n}e^{\theta x_{i}-\lambda/x_{i}}\right)\prod_{i=1}^{n}\left[1+e^{-e^{\theta x_{i}-\lambda/x_{i}}}\right]^{-2}e^{\theta\sum_{i=1}^{n}x_{i}-\sum_{i=1}^{n}\lambda/x_{i}}.$$
By taking the natural logarithm, the log-likelihood function is obtained as
$$\ell(\theta,\lambda)=n\log(2)+\sum_{i=1}^{n}\log\!\left(\theta+\frac{\lambda}{x_{i}^{2}}\right)+\theta\sum_{i=1}^{n}x_{i}-\sum_{i=1}^{n}\frac{\lambda}{x_{i}}-\sum_{i=1}^{n}e^{\theta x_{i}-\lambda/x_{i}}-2\sum_{i=1}^{n}\log\!\left(1+e^{-e^{\theta x_{i}-\lambda/x_{i}}}\right).$$
The MLEs are numerically calculated using the caret package [24] of the R software [25].
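The following R sketch shows one way to maximize this log-likelihood directly with optim(); it is only an illustration under the assumption of a standard Nelder–Mead fit (the paper reports using the caret package, which is not reproduced here), and the function names are hypothetical.

```r
# Illustrative maximum likelihood fit of the WFW model by direct optimization
# of the log-likelihood above.
loglik_wfw <- function(par, x) {
  theta <- par[1]; lambda <- par[2]
  if (theta <= 0 || lambda <= 0) return(-1e10)    # keep the search in the parameter space
  H <- exp(theta * x - lambda / x)
  sum(log(2) + log(theta + lambda / x^2) + theta * x - lambda / x - H -
        2 * log(1 + exp(-H)))
}
fit_wfw <- function(x, start = c(0.1, 0.2)) {
  nll <- function(par) -loglik_wfw(par, x)        # optim() minimizes by default
  optim(start, nll, hessian = TRUE)
}
# fit$par holds the MLEs; sqrt(diag(solve(fit$hessian))) gives approximate SEs.
```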

5.2. Least Squares Estimation Method

The least squares estimation (LSE) approach seeks the model parameters that minimize the sum of squared differences between the observed data points and those predicted by the model. For a model with parameters θ and λ and CDF F, the LSE method minimizes the objective function
$$S_{\mathrm{LSE}}(\theta,\lambda)=\sum_{i=1}^{n}\left[F(x_{i:n})-\frac{i}{n+1}\right]^{2},$$
where x 1 : n , x 2 : n , , x n : n are ordered samples from the data and F ( x i : n ) represents the value of the hypothetical distribution at x i : n .

5.3. Weighted Least Squares Estimation

In the weighted LSE method, the objective function is
$$S_{\mathrm{WLSE}}(\theta,\lambda)=\sum_{i=1}^{n}\frac{(n+1)^{2}(n+2)}{i\,(n-i+1)}\left[F(x_{i:n})-\frac{i}{n+1}\right]^{2},$$
where x 1 : n , x 2 : n , , x n : n represents the ordered samples from the data, and F ( x i : n ) denotes the value of the hypothetical distribution at x i : n .

5.4. Cramer–Von Mises Estimator

The Cramer–von Mises estimator (CME) is defined as
$$S_{\mathrm{CME}}(\theta,\lambda)=\frac{1}{12n}+\sum_{i=1}^{n}\left[F(x_{i:n})-\frac{2i-1}{2n}\right]^{2},$$
where x 1 : n , x 2 : n , , x n : n represents the ordered samples from the data, and F ( x i : n ) denotes the value of the hypothetical distribution at x i : n .

5.5. Anderson–Darling Estimator

The Anderson–Darling estimator (ADE) and right-tailed Anderson–Darling estimator (RTADE) are respectively defined as
$$S_{\mathrm{ADE}}(\theta,\lambda)=-n-\frac{1}{n}\sum_{i=1}^{n}(2i-1)\left[\log F(x_{i})+\log\bar{F}(x_{n+1-i})\right],$$
$$S_{\mathrm{RTADE}}(\theta,\lambda)=\frac{n}{2}-2\sum_{i=1}^{n}F(x_{i})-\frac{1}{n}\sum_{i=1}^{n}(2i-1)\log\bar{F}(x_{n+1-i}),$$
where $\bar{F}(\cdot)=1-F(\cdot)$. In these equations, $x_{1},x_{2},\ldots,x_{n}$ represent the (ordered) samples from the data, and $F(x_{i})$ denotes the value of the hypothetical distribution at $x_{i}$. The estimate of each parameter is obtained by taking the first partial derivatives of the corresponding objective (or log-likelihood) function with respect to the parameters, equating them to zero, and solving the resulting equations simultaneously. The solutions cannot be obtained in closed form, but they can be computed numerically using the R software [25].
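A hedged R sketch of these distance-based criteria, written for the WFW CDF, is given below; it assumes the pwfw() function from Section 2, the helper names are illustrative, and each objective is minimized with optim() (only the LSE call is shown).

```r
# Illustrative objective functions of Sections 5.2-5.5 for the WFW model.
obj_lse <- function(par, x) {                     # least squares (Section 5.2)
  x <- sort(x); n <- length(x); i <- seq_len(n)
  sum((pwfw(x, par[1], par[2]) - i / (n + 1))^2)
}
obj_wlse <- function(par, x) {                    # weighted least squares (Section 5.3)
  x <- sort(x); n <- length(x); i <- seq_len(n)
  w <- (n + 1)^2 * (n + 2) / (i * (n - i + 1))
  sum(w * (pwfw(x, par[1], par[2]) - i / (n + 1))^2)
}
obj_cme <- function(par, x) {                     # Cramer-von Mises (Section 5.4)
  x <- sort(x); n <- length(x); i <- seq_len(n)
  1 / (12 * n) + sum((pwfw(x, par[1], par[2]) - (2 * i - 1) / (2 * n))^2)
}
obj_ade <- function(par, x) {                     # Anderson-Darling (Section 5.5)
  x <- sort(x); n <- length(x); i <- seq_len(n)
  Fi <- pwfw(x, par[1], par[2])
  -n - mean((2 * i - 1) * (log(Fi) + log(1 - rev(Fi))))
}
# e.g., the LSE fit: optim(c(0.1, 0.2), obj_lse, x = data)
```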

6. Simulations

To study and compare the different estimation methods, several simulations were conducted for the parameters θ and λ of the WFW distribution, generating N = 1000 Monte Carlo replicates from the WFW model through Equation (3) with sample sizes n = 20, 50, 100, 200, 500. The simulations were performed under the following parameter settings: (θ, λ) = (0.5, 1) and (θ, λ) = (1, 1). To assess the accuracy of the estimation methods, the bias and MSE were calculated using the following equations:
$$\mathrm{Bias}(\hat{\theta})=\frac{1}{N}\sum_{i=1}^{N}(\hat{\theta}_{i}-\theta),\qquad \mathrm{Bias}(\hat{\lambda})=\frac{1}{N}\sum_{i=1}^{N}(\hat{\lambda}_{i}-\lambda),\qquad \mathrm{MSE}(\hat{\theta})=\frac{1}{N}\sum_{i=1}^{N}(\hat{\theta}_{i}-\theta)^{2},\qquad \mathrm{MSE}(\hat{\lambda})=\frac{1}{N}\sum_{i=1}^{N}(\hat{\lambda}_{i}-\lambda)^{2}.$$
This section discusses the assessment of the estimates of θ and λ in terms of their bias and mean squared error (MSE) values. The bias reflects the accuracy of an estimation method by averaging the deviation of the estimates from the true parameter value. The MSE combines variance and bias for a deeper comparison, where a smaller value indicates a more accurate estimator. It can be seen in Table 2 that, as the sample size n increases, the bias decreases for all methods, which shows an improvement in accuracy. In this table, the ADE and RTADE methods estimate θ better than the other methods. Also, the MLE method is more biased in smaller samples, but its bias decreases as the sample size increases. In Table 3, the MSE values of all methods indicate the improvement of the estimates for large sample sizes, where the RTADE method has the lowest MSE values compared to the other methods. Also, the MLE and CME methods have higher MSE in small samples, but both improve with increasing sample size. Based on the results, it is suggested to use the ADE and RTADE methods to estimate the model parameters, because in many cases these methods have lower bias and MSE values than the other methods. Also, in small samples, WLSE can be a suitable option. The performance of the MLE method improves with larger sample sizes.
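A compact Monte Carlo sketch of this design (shown for the MLE only; the paper also evaluates LSE, WLSE, CME, ADE, and RTADE) could look as follows, reusing the illustrative rwfw() from Section 2 and fit_wfw() from Section 5.1.

```r
# Illustrative Monte Carlo study of bias and MSE for the WFW parameters.
simulate_wfw <- function(n, theta, lambda, N = 1000) {
  est <- t(replicate(N, fit_wfw(rwfw(n, theta, lambda))$par))
  c(bias_theta  = mean(est[, 1] - theta),
    bias_lambda = mean(est[, 2] - lambda),
    mse_theta   = mean((est[, 1] - theta)^2),
    mse_lambda  = mean((est[, 2] - lambda)^2))
}
# e.g., simulate_wfw(n = 100, theta = 1, lambda = 1)
```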

7. Risk Indicators

7.1. The Mean of Order P and Optimal Order of P

The MOP_P analysis [26], formally defined by [27], is used to characterize the central tendency or average behavior of a dataset based on different orders of moments. The MOP_P is calculated by raising each data point to the power of P (a positive integer) and then taking the average of these values. The optimal order of P in the MOP_P analysis refers to determining the most suitable value of P that provides meaningful insights into the dataset's characteristics or distribution. The choice of P influences the sensitivity of the analysis to different aspects of the dataset. This is usually done by calculating MOP_P for several values of P (e.g., P = 1, 2, 3, ...) to examine how sensitive the summary is to different moments of the dataset.
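Coded literally from the description above, the MOP_P of a numeric sample x is simply the average of the P-th powers of the data; since conventions differ across references, a common power-mean variant is also shown. This sketch is illustrative and is not the authors' implementation.

```r
# Mean of order P as described in the text (average of x^P), plus a common
# power-mean variant; both helpers are illustrative.
mop <- function(x, P) mean(x^P)
mop_power_mean <- function(x, P) mean(x^P)^(1 / P)
# e.g., evaluate MOP_P for P = 1, ..., 5 on a numeric vector x:
# sapply(1:5, function(P) mop(x, P))
```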

7.2. The PORT-VaR Estimator

The PORT-VaR estimator is a statistical method used for risk analysis and extreme value modeling. This method is particularly useful for identifying and analyzing extreme events or peaks in a dataset that exceed a specified threshold, which is often determined based on a certain level of confidence or risk tolerance. The PORT-VaR estimator calculates the value-at-risk (VaR) associated with the chosen threshold. VaR represents the maximum potential loss that could occur within a given confidence interval (e.g., 95% or 99%) under normal market conditions. The PORT-VaR estimator is widely used in modeling and analyzing rare events or outliers, which may have significant implications for risk assessment and mitigation strategies. It helps in understanding the probability and severity of extreme events, thereby enabling organizations to prepare and respond effectively to potential risks; see [28] for more details.
The necessary steps to obtain the PORT-VaR estimator are as follows (an illustrative R sketch is given after the list):
  • Gather relevant data that capture extreme events or rare occurrences. Clean and preprocess the data to ensure quality and suitability for analysis.
  • Choose an appropriate statistical model.
  • Select a threshold above which extreme events are considered for analysis. This threshold is crucial and should be based on domain expertise and risk management goals.
  • Identify all data points that exceed the chosen threshold to form the PORT subset.
  • Estimate the VaR for each peak in the PORT subset, where VaR represents the maximum expected loss at a specified confidence level based on extreme value modeling.
  • Analyze the distribution of PORT-VaR estimates to quantify the tail risk associated with extreme events. Assess the impact of these events on overall risk exposure.
  • Utilize PORT-VaR results to inform risk management strategies, such as setting reserves, determining insurance premiums, or implementing risk mitigation measures.
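A minimal empirical version of these steps is sketched below in R; it assumes the threshold is taken as the empirical quantile of the data at the chosen confidence level and that TVaR is computed as the mean of the exceedances, and the function name port_var() is illustrative rather than taken from the paper.

```r
# Empirical PORT-VaR sketch following the steps above (illustrative assumptions:
# the random threshold is the sample quantile at level CL; TVaR = mean exceedance).
port_var <- function(x, CL = 0.95) {
  VaR   <- quantile(x, probs = CL, names = FALSE)  # step 3: choose the threshold
  peaks <- x[x > VaR]                              # step 4: PORT subset
  list(CL = CL, VaR = VaR, TVaR = mean(peaks),     # steps 5-6: tail summaries
       n_peaks = length(peaks), stats = summary(peaks))
}
```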

8. Applications

This section presents and evaluates three different applications of the proposed model. Each application demonstrates the model's effectiveness in capturing the underlying patterns within the data. Furthermore, we assess its performance by comparing it against other well-established distributions, highlighting its advantages and potential limitations in various scenarios. Through this comparative analysis, we aim to provide a comprehensive understanding of the model's strengths and its applicability across different datasets. The competing models are the exponentiated Weibull (EW) [29], modified Weibull (MW) [30], beta Weibull (BW) [3], gamma flexible Weibull (GFW) [31], and Kumaraswamy Burr XII (KwBXII) [32] distributions. The best-fitting model is identified based on the Cramér–von Mises statistic (W*), the Anderson–Darling statistic (A*), the Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), the Bayesian information criterion (BIC), and the Hannan–Quinn information criterion (HQIC). The MLEs, standard errors (SEs), and relevant statistics are computed using the AdequacyModel package [33] in the R software [25].
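For reference, these information criteria can be computed from a maximized log-likelihood ll, the number of parameters k, and the sample size n with the standard formulas below; the helper is illustrative (the paper obtains these values through the AdequacyModel package [33]).

```r
# Standard model-selection criteria from a maximized log-likelihood (illustrative helper).
info_criteria <- function(ll, k, n) {
  c(AIC  = -2 * ll + 2 * k,
    CAIC = -2 * ll + 2 * k * n / (n - k - 1),      # corrected AIC
    BIC  = -2 * ll + k * log(n),
    HQIC = -2 * ll + 2 * k * log(log(n)))
}
```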

8.1. Failure Times Dataset

The failure times of 50 components (measured per 1000 h) are given in [34], as follows: 0.036, 0.058, 0.061, 0.074, 0.078, 0.086, 0.102, 0.103, 0.114, 0.116, 0.148, 0.183, 0.192, 0.254, 0.262, 0.379, 0.381, 0.538, 0.570, 0.574, 0.590, 0.618, 0.645, 0.961, 1.228, 1.600, 2.006, 2.054, 2.804, 3.058, 3.076, 3.147, 3.625, 3.704, 3.931, 4.073, 4.393, 4.534, 4.893, 6.274, 6.816, 7.896, 7.904, 8.022, 9.337, 10.940, 11.020, 13.880, 14.730, 15.080. This failure time dataset is used to analyze the failure behavior of components, fit probabilistic models for reliability analysis, and predict the lifespan of components. Table 4 shows the MLEs and their respective standard errors (SEs) for the parameters of the different models fitted to the failure time data. From this table, the parameter estimates of the WFW model have relatively small standard errors, indicating a good fit of the model to the actual data. On the other hand, estimates from models like MW, EW, and BW result in higher standard errors, suggesting a potential compromise in parameter estimation accuracy. The goodness-of-fit test statistics (W* and A*) and model selection criteria (AIC, CAIC, BIC, and HQIC) are summarized in Table 5. Smaller values of these metrics indicate a better model fit. As can be seen, the WFW model has the lowest values for all criteria, proving its superiority over the competing models. In particular, the AIC value of the WFW model, 192.294, is lower than that of the other models, demonstrating that the WFW model generalizes the data more accurately. The fitted WFW density also aligns the closest with the empirical distribution, supporting the goodness of the WFW model in fitting the real failure time data. Thus, the results of this study support the ability of the WFW model to adequately analyze and predict real failure time data.
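In outline, the Section 8.1 fit can be reproduced with the sketches introduced earlier (fit_wfw() from Section 5.1 and info_criteria() from Section 8); the code below is illustrative and is not the authors' script.

```r
# Failure times of the 50 components (per 1000 h), as listed above.
failure_times <- c(0.036, 0.058, 0.061, 0.074, 0.078, 0.086, 0.102, 0.103, 0.114,
                   0.116, 0.148, 0.183, 0.192, 0.254, 0.262, 0.379, 0.381, 0.538,
                   0.570, 0.574, 0.590, 0.618, 0.645, 0.961, 1.228, 1.600, 2.006,
                   2.054, 2.804, 3.058, 3.076, 3.147, 3.625, 3.704, 3.931, 4.073,
                   4.393, 4.534, 4.893, 6.274, 6.816, 7.896, 7.904, 8.022, 9.337,
                   10.940, 11.020, 13.880, 14.730, 15.080)
fit <- fit_wfw(failure_times)                     # MLEs of theta and lambda
ses <- sqrt(diag(solve(fit$hessian)))             # approximate standard errors
info_criteria(ll = -fit$value, k = 2, n = length(failure_times))
```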

8.2. COVID-19 Mortality Dataset

The dataset includes the number of COVID-19 deaths recorded across 83 counties in Illinois, United States, up to December 2021. The reported number of deaths are as follows: 169, 13, 28, 91, 13, 107, 4, 41, 31, 89, 46, 57, 108, 146, 35, 30, 32, 156, 38, 52, 21, 113, 73, 59, 130, 93, 10, 40, 101, 25, 36, 16, 15, 95, 90, 101, 21, 150, 57, 32, 34, 127, 184, 38, 69, 115, 78, 121, 165, 24, 53, 58, 72, 15, 38, 108, 85, 104, 39, 110, 82, 16, 58, 7, 15, 7, 107, 67, 74, 14, 8, 56, 29, 124, 52, 19, 72, 30, 66, 34, 196, 201, 98. This dataset can be found at: https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker (accessed on 16 February 2025).
The obtained results regarding MLEs and their associated SEs are presented in Table 6. The results indicate that the WFW, GFW, and FW models yield precise parameter estimates with relatively low SEs, while other models exhibit higher SEs. Table 7 reports the values of various model selection criteria, showing that the WFW model consistently achieves the lowest values. Consequently, it is identified as the most suitable model for the data. Figure 3 illustrates the estimated density functions for the top-performing models. Among them, the WFW model demonstrates the closest agreement with the empirical histogram, underscoring its suitability for modeling the observed data.

8.3. COVID-19 Times Dataset

The dataset of reported COVID-19 cases, introduced by [35], comprises 30 recorded observations representing the spread of the disease over a specific time interval. These observations offer valuable insights into infection trends and patterns, facilitating a thorough analysis of the disease’s progression and predictive modeling. These observations are as follows: 14.918, 10.656, 12.274, 10.289, 10.832, 7.099, 5.928, 13.211, 7.968, 7.584, 5.555, 6.027, 4.097, 3.611, 4.960, 7.498, 6.940, 5.307, 5.048, 2.857, 2.254, 5.431, 4.462, 3.883, 3.461, 3.647, 1.974, 1.273, 1.416, and 4.235. For more details, see [35]. This dataset requires appropriate statistical modeling techniques to capture their underlying patterns effectively. Therefore, to evaluate the ability of various distribution models to describe the observed trends, different statistical models are fitted to the dataset (see Table 8). The fitting results are presented in the following sections. Based on the low values of AIC, BIC, and HQIC, the WFW model emerges as the best option for describing this dataset (see Table 9). The FW and EW models also demonstrate a good performance, while the KwW model provides the weakest fit. Therefore, the WFW model is recommended for analyzing the COVID-19 times dataset.

9. MOP P Assessments and Risk Analysis Under Real Data

The PORT-VaR estimator helps in identifying and analyzing extreme events or outliers in COVID-19 data. Peaks over a specified VaR threshold may represent unusual or critical situations, such as sudden spikes in infection rates, hospitalizations, or other medical metrics related to COVID-19 [13,36,37]. Below, three subsections are presented: the first focuses on MOP [ P ] assessments and the optimal order of P based on the true mean value (TMV), the second examines the PORT-VaR estimator under failure time data, and the third explores the PORT-VaR estimator in the context of medical data.

9.1. MOP P Assessments and Optimal Order of P

Following [27], selecting the optimal order of P is crucial for MOP_P assessments, as it balances capturing sufficient detail in risk profiles with avoiding overfitting or excessive complexity. Empirical and statistical analyses are employed to determine the ideal P, ensuring an accurate representation of the underlying risk distribution. This process enhances robust risk evaluation and decision-making. Statistical tools like MOP_P distributions enable quantitative risk assessment, hazard identification, and effective mitigation strategies, empowering stakeholders to allocate resources efficiently and build resilience against uncertainties. Table 10 provides the MOP_P assessment, including the TMV, MSE, and bias under P = 1, 2, ..., 5, n = 1000, and different parameter values. Based on the results presented in Table 10 (the first scenario) for MOP_P assessments under different orders (P = 1, 2, 3, 4, 5), with parameter values θ0 = 10 and λ0 = 2.5 and a dataset size of n = 1000, the TMVs for MOP[P] across all orders remain stable around 1.001314, confirming a consistent estimate of the dataset's true mean. The identical MSE values (0.9983274) indicate that prediction errors remain constant across the different orders of P, reflecting a moderate deviation from the TMV. Similarly, the consistent bias values (0.9991634) suggest a systematic underestimation of the TMV. The stability of the TMV, MSE, and bias across varying orders of P highlights the robustness of MOP[P], ensuring reliable mean estimation and prediction accuracy despite changes in model parameters and dataset size. Based on the results presented in Table 10 for the second scenario, MOP_P assessments are conducted under different orders (P = 1, 2, 3, 4, 5) with parameter values θ0 = 2 and λ0 = 1.5 and a dataset size of n = 1000. The TMVs for MOP[P] across all orders remain stable at approximately 0.9973669, confirming consistent mean estimation. Additionally, the identical MSE values (0.9947407) indicate a uniform level of prediction error across the different orders of P. Similarly, the bias values (0.9973669) consistently show a slight underestimation of the TMV. The stability of the TMV, MSE, and bias across varying orders of P highlights the robustness of MOP[P], ensuring reliable mean estimation with predictable error levels regardless of the parameter settings and dataset size. Across all parameter combinations and orders of P, the TMV estimates derived from the MOP_P assessments remain relatively stable and close to the expected value, indicating the robustness of the method in estimating central tendencies within the dataset and the stability of the TMV. The MSE and bias values also exhibit consistency across the different orders of P and parameter settings. This suggests that, while there is a predictable level of MSE and bias in the MOP_P estimates, these metrics remain relatively invariant to changes in the order of P and the specific parameter values; that is, both the MSE and the bias are consistent. The consistent performance of MOP_P across varied conditions underscores its reliability as a statistical tool for risk analysis. Despite potential variations in dataset characteristics or model parameters, MOP_P consistently provides estimates that capture meaningful aspects of risk distributions.

9.2. VaR, TVaR and PORT-VaR Estimators for Extreme Failure Times

Failure time datasets often represent the occurrence of critical events such as equipment failures, system breakdowns, or component malfunctions. By applying the PORT-VaR estimator, analysts can identify and quantify extreme failure events that exceed a specified risk threshold. This information is crucial for understanding the reliability profile of systems or components. PORT-VaR analysis allows for a deeper assessment of the risk and uncertainty associated with failure times. By setting a threshold based on the desired confidence level (e.g., 95% or 99%), researchers can evaluate the probability of experiencing extreme failures beyond this threshold. This insight helps in assessing the overall risk exposure and potential consequences of critical failures. In this subsection, the PORT-VaR analysis is presented under failure times data. A summary of the analysis results obtained at different confidence levels (CLs: 80%, 85%, 90%, 95%, and 99%) is presented in Table 11. The trend shows that, as CL increases, the inferences of PORT volume decreases. This trend characterizes a shift towards a safer risk assessment method, one that relies on a higher range to incorporate only the most extreme data present in the sample. This relationship is critical for risk tolerance measurement and risk management design when risk exposure is acceptable. Also, Figure 4 shows the PORT-VaR analysis for extreme failure times. Based on Table 11, as the CL increases, the number of identified PORTs generally decreases, reflecting stricter risk thresholds. Lower CLs (e.g., 80% and 85%) yield higher extreme event counts, indicating a greater prevalence of significant failure time events relative to the VaR threshold. The statistical distribution (Min, Median, Mean, Max) highlights variability in severity and frequency, offering key insights into the risk profile of failure times. These findings inform risk assessment and response strategies, helping identify areas requiring targeted interventions and resource allocation to mitigate elevated mortality risks. Understanding the distribution of extreme failure events across CLs aids in quantifying risk exposure, prioritizing mitigation efforts, and optimizing reliability and maintenance practices. Decision-makers can leverage these insights to allocate resources efficiently, implement targeted interventions, and enhance system resilience against extreme failures. Notably, as CL increases, the number of PORTs rises monotonically, while VaR and TVaR decrease, indicating that more extreme failure events are captured at higher CLs, though their severity declines over time. Managing risk based on VaR alone suggests that an 80–85% CL may be overly conservative, while 99% could underestimate extreme failure impacts. A balanced choice (e.g., 90–95%) might be optimal. However, at 99% CL, the number of exceedances doubles, meaning organizations must prepare for more frequent small-to-moderate failures rather than rare, catastrophic events.
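Using the empirical sketch of Section 7.2, the same exercise can be illustrated directly on the failure times of Section 8.1; this is an illustration with empirical quantile thresholds, not the authors' computation of Table 11.

```r
# Empirical PORT-VaR summaries of the failure times at the confidence levels of Table 11.
lapply(c(0.80, 0.85, 0.90, 0.95, 0.99),
       function(cl) port_var(failure_times, CL = cl))
```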

9.3. VaR, TVaR, and PORT-VaR Estimators for Extreme COVID-19 Deaths

Analyzing PORT-VaR can aid in optimizing resource allocation within the medical sector. Identification of peaks over the VaR threshold allows for the focused allocation of healthcare resources, such as medical supplies, personnel, and hospital beds, to areas experiencing the highest levels of risk or demand during the pandemic. Insights from the PORT-VaR estimator can enhance decision-making processes and preparedness efforts. Healthcare facilities and public health authorities can use this information to develop robust contingency plans, allocate budgets effectively, and implement targeted interventions to address the emerging challenges highlighted by the peaks in COVID-19 data; for more applications, see [38,39]. According to [28], effective public health responses rely on timely and accurate data analysis. The PORT-VaR approach offers a quantitative framework to monitor and respond to evolving COVID-19 trends. It enables proactive measures to contain outbreaks, implement targeted vaccination campaigns, and mitigate the impact of unforeseen events. PORT-VaR analysis contributes to evidence-based research and policy development in the medical sector. Insights gained from studying peaks over the VaR threshold can inform the design of public health policies, clinical guidelines, and research priorities aimed at combating the COVID-19 pandemic. For this purpose, Table 12 is presented; it provides the PORT-VaR analysis under COVID-19 data. Table 12 provides valuable insights into the distribution and characteristics of extreme events (PORTs), identified using the VaR estimator under different confidence levels. These findings have practical implications for risk assessment, decision-making, and resource allocation in the context of COVID-19 dataset analysis and pandemic response efforts, where minimum (Min.) refers to the smallest PORT value observed above the VaR threshold and maximum (Max.) refers to the largest PORT value observed above the VaR threshold. However, the mean column refers to the average value of the PORT distribution, indicating the central tendency of extreme events. Finally, median refers to the middle value of the PORT distribution, separating the higher and lower halves of the data. Moreover, Figure 5 shows the PORT-VaR analysis for extreme COVID-19 deaths.
Table 12 presents the PORT-VaR analysis for COVID-19 data, where the PORT column indicates the count of extreme events exceeding the VaR threshold at different CLs. As CL increases, fewer extreme events are detected, reflecting stricter risk thresholds, while lower CLs (e.g., 80% and 85%) capture more frequent extreme events. Descriptive statistics of PORT values (Min., Median, Mean, Max.) highlight the severity and frequency of extreme cases. The analysis underscores the sensitivity of risk assessment to CL variations, aiding in resource allocation and pandemic response planning. Notably, as CL rises, PORT counts increase while VaR and TVaR decrease, suggesting that higher thresholds capture more extreme cases but reduce individual severity estimates. A CL of 90–95% may offer a balanced approach for policymakers, avoiding overly conservative or lenient risk assessments.

10. Conclusions and Limitations

This study introduces the weighted flexible Weibull distribution, and its main properties, including cumulative and residual cumulative entropy, are explored. A simulation study confirms the consistency of the MLEs for parameter estimation. Applications to real-world data, including failure times and COVID-19 datasets, demonstrate the model’s good competitive performance against other lifetime distributions. Additionally, peaks over a random threshold value-at-risk (PORT-VaR) estimators are developed for risk analysis in engineering and medicine. These estimators mark a significant advancement in quantitative risk assessment, offering tailored solutions for diverse challenges. The findings highlight the importance of PORT-VaR analysis in various fields, including public health and engineering, contributing to improved risk management strategies.
Future research could integrate advanced statistical techniques and predictive modeling to enhance the accuracy and robustness of PORT-VaR assessments. Comparative studies across different datasets and contexts could further refine their applicability for risk mitigation and decision-making in complex systems. The PORT-VaR analysis provides valuable insights for interdisciplinary risk analysis, addressing contemporary challenges across multiple domains. This study introduces several novel aspects of actuarial risk assessment, expanding beyond a review of existing concepts. It pioneers the use of weighted distributions in actuarial risk modeling, addressing a critical research gap and offering a fresh perspective on risk evaluation. Actuarial risk metrics are applied beyond traditional financial contexts to medical data, particularly in assessing COVID-19 risk and managing datasets with extreme values, highlighting the adaptability of actuarial methods across diverse fields. Additionally, a new estimation technique and a sequential sampling plan based on truncated life testing are introduced, enhancing the precision and efficiency of risk assessment models. Furthermore, the discussion on PORT-VaR has been expanded, offering a more detailed analysis of its significance and applicability in modern risk assessment scenarios. These contributions advance actuarial risk modeling methodologies and reinforce their relevance across multiple disciplines, supporting more robust and informed decision-making in risk management. The PORT-VaR analysis for COVID-19 data highlights the impact of CLs on risk assessment, with higher CLs detecting fewer extreme events due to stricter thresholds, while lower CLs capture more frequent occurrences. Descriptive statistics of PORT values provide insights into the severity and distribution of extreme cases, emphasizing the importance of selecting appropriate CLs for effective risk evaluation. The observed trend of increasing PORT counts alongside decreasing VaR and TVaR at higher CLs suggests that broader thresholds identify more extreme cases while moderating individual severity estimates. A CL range emerges as a balanced choice for policymakers, ensuring a comprehensive yet practical approach to pandemic response and resource allocation.
While this study introduces significant advancements in actuarial risk modeling and PORT-VaR analysis, several limitations should be acknowledged. First, the proposed weighted flexible Weibull distribution, though competitive, has been evaluated on a limited number of datasets; its application to broader real-world settings, including different domains and larger datasets, requires further validation. Second, the PORT-VaR estimators were developed and tested within specific contexts, such as engineering and medical risk analysis, but their performance in other industries remains unexplored. Additionally, the study assumes that extreme events follow particular statistical patterns, which may not always hold true in highly dynamic or unpredictable environments. Another limitation is that the study primarily relies on historical data, which may not fully account for evolving risk factors, policy changes, or unexpected future developments. While simulation studies confirm the consistency of the estimation methods, real-world uncertainties could introduce biases or deviations not captured in the models. Finally, the selection of confidence levels (CLs) in the PORT-VaR analysis is based on statistical reasoning and practical considerations, but the optimal choice may vary depending on specific risk tolerance levels and decision-making frameworks. Future work should address these limitations by expanding the application of the proposed methods to diverse datasets, exploring adaptive modeling approaches for evolving risks, and refining PORT-VaR techniques to enhance robustness across various domains.

Author Contributions

Conceptualization, Z.R., M.A. (Morad Alizadeh), S.T. and M.A. (Mahmoud Afshari); methodology, Z.R., M.A. (Morad Alizadeh), M.A. (Mahmoud Afshari), S.T. and H.M.Y.; software, Z.R. and H.M.Y.; validation, J.E.C.-R. and H.M.Y.; formal analysis, Z.R., M.A. (Morad Alizadeh), M.A. (Mahmoud Afshari) and H.M.Y.; investigation, M.A. (Morad Alizadeh), M.A. (Mahmoud Afshari) and H.M.Y.; resources, J.E.C.-R., M.A. (Morad Alizadeh) and H.M.Y.; data curation, Z.R., M.A. (Mahmoud Afshari) and H.M.Y.; writing—original draft preparation, Z.R., M.A. (Morad Alizadeh), M.A. (Mahmoud Afshari) and H.M.Y.; writing—review and editing, J.E.C.-R. and H.M.Y.; visualization, Z.R. and H.M.Y.; supervision, J.E.C.-R. and H.M.Y.; project administration, M.A. (Morad Alizadeh); funding acquisition, J.E.C.-R., M.A. (Morad Alizadeh) and H.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this work are available in Section 8.1, Section 8.2 and Section 8.3.

Acknowledgments

The authors thank the editor and two anonymous referees for their helpful comments and suggestions.

Conflicts of Interest

The authors declare that they have no known conflicting/competing interests that could have appeared to influence the work reported in this paper.

References

  1. Chen, Y.S.; Lai, S.B.; Wen, C.T. The influence of green innovation performance on corporate advantage in Taiwan. J. Bus. Ethics 2006, 67, 331–339. [Google Scholar] [CrossRef]
  2. Cordeiro, G.M.; Ortega, E.M.; Nadarajah, S. The Kumaraswamy Weibull distribution with application to failure data. J. Frankl. Inst. 2010, 347, 1399–1429. [Google Scholar] [CrossRef]
  3. Famoye, F.; Lee, C.; Olumolade, O. The beta-Weibull distribution. J. Stat. Appl. 2005, 4, 121–136. [Google Scholar]
  4. Xie, M.; Tang, Y.; Goh, T.N. A modified Weibull extension with bathtub-shaped failure rate function. Reliab. Eng. Syst. Saf. 2002, 76, 279–285. [Google Scholar] [CrossRef]
  5. Xie, M.; Lai, C.D. Reliability analysis using an additive Weibull model with bathtub-shaped failure rate function. Reliab. Eng. Syst. Saf. 1996, 52, 87–93. [Google Scholar] [CrossRef]
  6. Mustafa, A.; El-Desouky, B.S.; AL-Garash, S. The Marshall-Olkin Flexible Weibull Extension Distribution. arXiv 2016, arXiv:1609.08997. [Google Scholar]
  7. Bebbington, M.; Lai, C.D.; Zitikis, R. A flexible Weibull extension. Reliab. Eng. Syst. Saf. 2007, 92, 719–726. [Google Scholar] [CrossRef]
  8. Mustafa, A.; El-Desouky, B.S.; Al-Garash, S. The exponentiated generalized flexible Weibull extension distribution. Fundam. J. Math. Math. Sci. 2016, 6, 75–98. [Google Scholar]
  9. Amasyali, K.; El-Gohary, N.M. A review of data-driven building energy consumption prediction studies. Renew. Sustain. Energy Rev. 2018, 81, 1192–1205. [Google Scholar] [CrossRef]
  10. Zekri, A.R.N.; Youssef, A.S.E.D.; El-Desouky, E.D.; Ahmed, O.S.; Lotfy, M.M.; Nassar, A.A.M.; Bahnassey, A.A. Serum microRNA panels as potential biomarkers for early detection of hepatocellular carcinoma on top of HCV infection. Tumor Biol. 2016, 37, 12273–12286. [Google Scholar] [CrossRef]
  11. Alizadeh, M.; Afshari, M.; Cordeiro, G.M.; Ramaki, Z.; Contreras-Reyes, J.E.; Dirnik, F.; Yousof, H.M. A Weighted Lindley Claims Model with Applications to Extreme Historical Insurance Claims. Stats 2025, 8, 8. [Google Scholar] [CrossRef]
  12. Kharazmi, O.; Contreras-Reyes, J.E.; Balakrishnan, N. Optimal information, Jensen-RIG function and α-Onicescu’s correlation coefficient in terms of information generating functions. Phys. A 2023, 609, 128362. [Google Scholar] [CrossRef]
  13. Kuschel, K.; Carrasco, R.; Idrovo-Aguirre, B.J.; Duran, C.; Contreras-Reyes, J.E. Preparing Cities for Future Pandemics: Unraveling the influence of urban and housing variables on COVID-19 incidence in Santiago de Chile. Healthcare 2023, 11, 2259. [Google Scholar] [CrossRef] [PubMed]
  14. Zwillinger, D. Table of Integrals, Series, and Products; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  15. Ferreira, A.A.; Cordeiro, G.M. The gamma flexible Weibull distribution: Properties and Applications. Span. J. Stat. 2023, 4, 55–71. [Google Scholar] [CrossRef]
  16. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products; Academic Press: Cambridge, MA, USA, 2007. [Google Scholar]
  17. Chaudhry, M.A.; Zubair, S.M. Generalized incomplete gamma functions with applications. J. Comput. Appl. Math. 1994, 55, 99–124. [Google Scholar] [CrossRef]
  18. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
  19. Kharazmi, O.; Contreras-Reyes, J.E. Fractional cumulative residual inaccuracy information measure and its extensions with application to chaotic maps. Int. J. Bifurc. Chaos 2024, 34, 2450006. [Google Scholar] [CrossRef]
  20. Kharazmi, O.; Yalcin, F.; Contreras-Reyes, J.E. Cumulative residual Fisher information based on finite and infinite mixture models. Fluct. Noise Lett. 2025. [Google Scholar] [CrossRef]
  21. Psarrakos, G.; Toomaj, A. On the generalized cumulative residual entropy with applications in actuarial science. J. Comput. Appl. Math. 2017, 309, 186–199. [Google Scholar] [CrossRef]
  22. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; Volume 4, pp. 547–562. [Google Scholar]
  23. Contreras-Reyes, J.E. Mutual information matrix based on Rényi entropy and application. Nonlinear Dyn. 2022, 110, 623–633. [Google Scholar] [CrossRef]
  24. Kuhn, M.; Wing, J.; Weston, S.; Williams, A.; Keefer, C.; Engelhardt, A.; Cooper, T.; Mayer, Z.; Kenkel, B.; Team, R.C. Package ‘caret’. R J. 2020, 223, 48. [Google Scholar]
  25. R Core Team. A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2024; Available online: http://www.R-project.org (accessed on 17 February 2025).
  26. Alizadeh, M.; Afshari, M.; Contreras-Reyes, J.E.; Mazarei, D.; Yousof, H.M. The Extended Gompertz Model: Applications, Mean of Order P Assessment and Statistical Threshold Risk Analysis Based on Extreme Stresses Data. IEEE Trans. Reliab. 2024. [Google Scholar] [CrossRef]
  27. Rice, J.A. Mathematical Statistics and Data Analysis; Thomson/Brooks/Cole: Belmont, CA, USA, 2007; Volume 371. [Google Scholar]
  28. Szubzda, F.; Chlebus, M. Comparison of Block Maxima and Peaks Over Threshold Value-at-Risk models for market risk in various economic conditions. Cent. Econ. J. 2019, 6, 70–85. [Google Scholar] [CrossRef]
  29. Mudholkar, G.S.; Srivastava, D.K. Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 1993, 42, 299–302. [Google Scholar] [CrossRef]
  30. Lai, C.D.; Xie, M.; Murthy, D.N.P. A modified Weibull distribution. IEEE Trans. Reliab. 2003, 52, 33–37. [Google Scholar] [CrossRef]
  31. Cordeiro, G.M.; Vasconcelos, J.C.S.; dos Santos, D.P.; Ortega, E.M.; Sermarini, R.A. Three mixed-effects regression models using an extended Weibull with applications on games in differential and integral calculus. Braz. J. Probability Stat. 2022, 36, 751–770. [Google Scholar] [CrossRef]
  32. Paranaíba, P.F.; Ortega, E.M.; Cordeiro, G.M.; Pascoa, M.A.D. The Kumaraswamy Burr XII distribution: Theory and practice. J. Stat. Comput. Simul. 2013, 83, 2117–2143. [Google Scholar] [CrossRef]
  33. Marinho, P.R.D.; Silva, R.B.; Bourguignon, M.; Cordeiro, G.M.; Nadarajah, S. AdequacyModel: An R package for probability distributions and general purpose optimization. PLoS ONE 2019, 14, e0221487. [Google Scholar] [CrossRef]
  34. Murthy, D.P.; Xie, M.; Jiang, R. Weibull Models; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  35. Ahmad, Z.; Almaspoor, Z.; Khan, F.; El-Morshedy, M. On predictive modeling using a new flexible Weibull distribution and machine learning approach: Analyzing the COVID-19 data. Mathematics 2022, 10, 1792. [Google Scholar] [CrossRef]
  36. Brown, A.; Williams, R. Equity implications of ride-hail travel during COVID-19 in California. Transp. Res. Rec. 2023, 2677, 1–14. [Google Scholar] [CrossRef]
  37. Mohamed, H.S.; Cordeiro, G.M.; Minkah, R.; Yousof, H.M.; Ibrahim, M. A size-of-loss model for the negatively skewed insurance claims data: Applications, risk analysis using different methods and statistical forecasting. J. Appl. Stat. 2024, 51, 348–369. [Google Scholar] [CrossRef] [PubMed]
  38. Golinelli, D.; Boetto, E.; Carullo, G.; Nuzzolese, A.G.; Landini, M.P.; Fantini, M.P. Adoption of digital technologies in health care during the COVID-19 pandemic: Systematic review of early scientific literature. J. Medical Internet Res. 2020, 22, e22280. [Google Scholar] [CrossRef] [PubMed]
  39. Mohamed, E.B.; Souissi, M.N.; Baccar, A.; Bouri, A. CEO’s personal characteristics, ownership and investment cash flow sensitivity: Evidence from NYSE panel data firms. J. Econ. Financ. Adm. Sci. 2014, 19, 98–103. [Google Scholar] [CrossRef]
Figure 1. PDF of WFW distribution for several parameter settings.
Figure 2. HRF of WFW distribution for several parameter settings.
Figure 3. Fitted WFW density to COVID-19 mortality dataset.
Figure 4. PORT-VaR analysis for extreme failure times. Each histogram is related to a confidence level (CL) of (a) 80%, (b) 85%, (c) 90%, (d) 95%, and (e) 99%.
Figure 5. PORT-VaR analysis for extreme COVID-19 deaths. Each histogram is related to a confidence level (CL) of (a) 80%, (b) 85%, (c) 90%, (d) 95%, and (e) 99%.
Table 1. First four moments, standard deviation (SD), skewness (SK), and kurtosis (KR) for the WFW model.
θ = 0.04, λ = 1.1 | θ = 0.04, λ = 0.99 | θ = 0.04, λ = 1.5 | θ = 0.04, λ = 0.5
μ1: 10.923 | 10.773 | 11.439 | 10.040
μ2: 243.831 | 240.959 | 254.236 | 228.0944
μ3: 6860.908 | 6783.186 | 7146.237 | 6440.962
μ4: 219054.8 | 216538.5 | 228340.1 | 205525.1
SD: 11.15824 | 11.17569 | 11.106 | 11.282
SK: 1.063338 | 1.071 | 1.032 | 1.11053
KR: 3.29815 | 3.309 | 3.257 | 3.352
θ = 0.5, λ = 20 | θ = 0.5, λ = 18 | θ = 0.5, λ = 26 | θ = 0.5, λ = 16
μ1: 6.2996 | 5.980 | 7.174612 | 5.642
μ2: 40.71086 | 36.776 | 52.5268 | 32.84342
μ3: 268.776 | 231.495 | 391.2338 | 196.1263
μ4: 1807.249 | 1486.404 | 2957.79 | 1196.871
SD: 1.012774 | 1.0074 | 1.025543 | 1.001466
SK: −0.5871091 | −0.5543 | −0.6650084 | −0.5167628
KR: 3.301012 | 3.227 | 3.492299 | 3.147694
θ = 0.4, λ = 20 | θ = 0.3, λ = 9 | θ = 0.3, λ = 14 | θ = 0.3, λ = 11
μ1: 7.0534 | 5.565 | 6.869 | 6.119
μ2: 51.317 | 33.453 | 49.785 | 39.985
μ3: 383.059 | 213.284 | 376.477 | 274.881
μ4: 2922.048 | 1425.085 | 2947.551 | 1968.856
SD: 1.251 | 1.574 | 1.612 | 1.591
SK: −0.516 | −0.127 | −0.294 | 0.20
KR: 3.147 | 2.603 | 2.776 | 2.672
θ = 1.5, λ = 0.5 | θ = 1.5, λ = 1.0 | θ = 1.5, λ = 1.5 | θ = 1.5, λ = 2.0
μ1: 0.631 | 0.849 | 1.022 | 1.169
μ2: 0.487 | 0.815 | 1.142 | 1.468
μ3: 0.430 | 0.854 | 1.364 | 1.947
μ4: 0.416 | 0.958 | 1.719 | 2.697
SD: 0.297 | 0.305 | 0.311 | 0.316
SK: 0.381 | 0.105 | −0.055 | −0.167
KR: 2.555 | 2.498 | 2.555 | 2.638
Table 2. Bias and MSE of the estimates of θ and λ for different sample sizes n (θ = 1, λ = 1).
Bias for θ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.0707 | 0.0039 | 0.0129 | 0.0812 | 0.0255 | 0.0440
50 | 0.0256 | 0.0013 | 0.0077 | 0.0304 | 0.0099 | 0.0169
100 | 0.0099 | −0.0020 | 0.0028 | 0.0122 | 0.0028 | 0.0052
200 | 0.0050 | −0.0014 | 0.0010 | 0.0057 | 0.0009 | 0.0008
500 | 0.0024 | 0.0006 | 0.0016 | 0.0035 | 0.0012 | 0.0016
Bias for λ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.1061 | 0.0202 | 0.0311 | 0.1116 | 0.0444 | 0.0839
50 | 0.0396 | 0.0101 | 0.0178 | 0.0445 | 0.0190 | 0.0343
100 | 0.0174 | 0.0011 | 0.0073 | 0.0179 | 0.0066 | 0.0122
200 | 0.0061 | −0.0039 | −0.0003 | 0.0044 | −0.0008 | −0.0004
500 | 0.0027 | 0.0000 | 0.0013 | 0.0033 | 0.0006 | 0.0016
MSE for θ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.1948 | 0.2158 | 0.2010 | 0.2538 | 0.1827 | 0.1958
50 | 0.1045 | 0.1258 | 0.1145 | 0.1342 | 0.1087 | 0.1115
100 | 0.0661 | 0.0841 | 0.0751 | 0.0866 | 0.0725 | 0.0749
200 | 0.0474 | 0.0589 | 0.0528 | 0.0597 | 0.0517 | 0.0532
500 | 0.0288 | 0.0379 | 0.0326 | 0.0382 | 0.0323 | 0.0326
MSE for λ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.3333 | 0.3366 | 0.3248 | 0.3865 | 0.3087 | 0.3622
50 | 0.1778 | 0.2027 | 0.1898 | 0.2146 | 0.1835 | 0.2102
100 | 0.1268 | 0.1452 | 0.1358 | 0.1489 | 0.1334 | 0.1513
200 | 0.0838 | 0.0964 | 0.0895 | 0.0973 | 0.0888 | 0.0989
500 | 0.0509 | 0.0598 | 0.0546 | 0.0601 | 0.0543 | 0.0608
Table 3. Bias and MSE of the estimates of θ and λ for different sample sizes n (θ = 0.5, λ = 1).
Bias for θ = 0.5
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.03848 | 0.01386 | 0.01524 | 0.05298 | 0.01992 | 0.02619
50 | 0.01225 | −0.00096 | 0.00260 | 0.01317 | 0.00367 | 0.00728
100 | 0.00644 | 0.00041 | 0.00234 | 0.00738 | 0.00261 | 0.00440
200 | 0.00465 | 0.00161 | 0.00267 | 0.00507 | 0.00262 | 0.00313
500 | 0.00070 | −0.00075 | −0.00008 | 0.00062 | −0.00026 | 0.00003
Bias for λ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.11110 | 0.03352 | 0.03706 | 0.13005 | 0.05047 | 0.09532
50 | 0.03963 | 0.00562 | 0.01467 | 0.04093 | 0.01528 | 0.03732
100 | 0.02875 | 0.01374 | 0.01831 | 0.03127 | 0.01813 | 0.02871
200 | 0.01243 | 0.00486 | 0.00786 | 0.01346 | 0.00739 | 0.01084
500 | 0.00199 | −0.00221 | −0.00043 | 0.00119 | −0.00115 | 0.00058
MSE for θ = 0.5
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.09613 | 0.11221 | 0.10429 | 0.13553 | 0.09246 | 0.09473
50 | 0.05239 | 0.06250 | 0.05693 | 0.06627 | 0.05443 | 0.05605
100 | 0.03401 | 0.04156 | 0.03730 | 0.04300 | 0.03626 | 0.03730
200 | 0.02483 | 0.02923 | 0.02625 | 0.02989 | 0.02604 | 0.02644
500 | 0.01552 | 0.01908 | 0.01687 | 0.01915 | 0.01676 | 0.01702
MSE for λ = 1
n | MLEs | LSEs | WLSEs | CMEs | ADEs | RTADEs
20 | 0.36420 | 0.37965 | 0.36290 | 0.43784 | 0.34708 | 0.43814
50 | 0.19811 | 0.22821 | 0.21303 | 0.24043 | 0.20657 | 0.25621
100 | 0.14033 | 0.16054 | 0.15059 | 0.16596 | 0.14732 | 0.17763
200 | 0.09688 | 0.10939 | 0.10210 | 0.11110 | 0.10170 | 0.11767
500 | 0.05945 | 0.07042 | 0.06431 | 0.07066 | 0.06398 | 0.07407
Table 4. MLEs and their respective SEs of fitted models for failure time dataset.
| Model | MLE (SE) |
| WFW(θ, λ) | 0.109 (0.011), 0.146 (0.03) |
| GFW(a, θ, λ) | 1.362 (0.19), 0.109 (0.013), 0.126 (0.033) |
| FW(α, β) | 0.099 (0.012), 0.183 (0.034) |
| MW(α, λ, β) | 0.496 (0.099), 0.034 (0.025), 0.562 (0.098) |
| EW(α, λ, β) | 0.290 (0.681), 0.770 (0.99), 0.785 (1.546) |
| KwBXII(a, b, c, k, s) | 0.121 (0.019), 2.199 (0.477), 4.381 (0.147), 1.193 (0.217), 21.015 (0.125) |
| BW(a, b, α, β) | 0.708 (1.392), 0.703 (1.46), 0.412 (1.575), 0.819 (1.057) |
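The point estimates and standard errors in Tables 4 and 6 come from maximizing the log-likelihood of each candidate model. A generic sketch is given below; `logpdf(x, theta, lam)` is a placeholder for the log-density of the model being fitted, and the standard errors are approximated from the inverse Hessian returned by the optimizer.

```python
# Generic maximum-likelihood fit for a two-parameter lifetime model.
# `logpdf(x, theta, lam)` is a placeholder for the model's log-density.
import numpy as np
from scipy.optimize import minimize

def fit_mle(data, logpdf, start=(0.1, 0.1)):
    data = np.asarray(data, dtype=float)
    nll = lambda p: -np.sum(logpdf(data, p[0], p[1]))   # negative log-likelihood
    res = minimize(nll, x0=np.asarray(start), method="BFGS")
    std_err = np.sqrt(np.diag(res.hess_inv))            # approximate SEs
    return res.x, std_err, -res.fun                     # estimates, SEs, max log-lik
```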
Table 5. Evaluation metrics for the models applied to failure time dataset.
| Model | W | A | AIC | CAIC | BIC | HQIC |
| WFW(θ, λ) | 0.041 | 0.267 | 192.294 | 192.55 | 196.118 | 193.750 |
| GFW(a, α, β) | 0.042 | 0.257 | 193.850 | 194.372 | 199.586 | 196.035 |
| FW(α, β) | 0.079 | 0.414 | 195.846 | 196.101 | 199.670 | 197.302 |
| MW(α, λ, β) | 0.130 | 0.850 | 208.727 | 209.249 | 214.463 | 210.912 |
| EW(α, λ, β) | 0.150 | 0.946 | 210.713 | 211.234 | 216.449 | 212.897 |
| BW(a, b, α, β) | 0.149 | 0.942 | 212.696 | 213.585 | 220.344 | 215.608 |
| KwBXII(a, b, c, k, s) | 1.132 | 0.870 | 213.086 | 214.450 | 222.646 | 216.726 |
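The adequacy measures reported in Tables 5, 7, and 9 follow standard definitions. The helper below assumes that CAIC denotes the small-sample corrected AIC (an assumed convention that is consistent with the gaps between the reported AIC and CAIC values) and that W and A are the classical Cramér-von Mises and Anderson-Darling statistics computed from the fitted CDF values.

```python
# Adequacy measures used in Tables 5, 7, and 9, computed from the maximized
# log-likelihood `loglik`, the number of parameters k, and the sample size n.
# CAIC is taken as the small-sample corrected AIC (assumed convention); W and A
# are the classical Cramer-von Mises and Anderson-Darling statistics.
import numpy as np

def info_criteria(loglik, k, n):
    aic = -2.0 * loglik + 2.0 * k
    return {"AIC": aic,
            "CAIC": aic + 2.0 * k * (k + 1) / (n - k - 1),
            "BIC": -2.0 * loglik + k * np.log(n),
            "HQIC": -2.0 * loglik + 2.0 * k * np.log(np.log(n))}

def gof_statistics(u):
    """W and A from u_i = F(x_i; fitted parameters), with 0 < u_i < 1."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    W = 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)
    A = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
    return W, A
```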
Table 6. MLEs and their respective SEs of the fitted models for the COVID-19 mortality dataset.
| Model | MLE (SE) |
| WFW(θ, λ) | 0.010 (0.0006), 23.243 (3.755) |
| GFW(a, α, β) | 1.702 (0.253), 0.010 (0.001), 14.005 (4.458) |
| FW(α, β) | 0.008 (0.001), 32.812 (4.290) |
| MW(α, λ, β) | 0.005 (0.002), 0.003 (0.002), 1.161 (0.116) |
| EW(α, λ, β) | 0.013 (0.005), 1.418 (0.503), 0.986 (0.574) |
| KwBXII(a, b, c, k, s) | 10.526 (25.224), 72.271 (95.623), 0.327 (0.401), 1.393 (2.142), 40.836 (122.510) |
| BW(a, b, α, β) | 3.697 (1.303), 3.665 (1.943), 0.011 (0.006), 0.615 (0.120) |
Table 7. Adequacy measures for the fitted models for COVID-19 mortality dataset.
| Model | W | A | AIC | CAIC | BIC | HQIC |
| WFW(θ, λ) | 0.029 | 0.212 | 853.026 | 853.176 | 857.864 | 854.969 |
| GFW(a, α, β) | 0.034 | 0.241 | 855.333 | 855.637 | 862.590 | 858.248 |
| FW(α, β) | 0.095 | 0.596 | 859.516 | 859.666 | 864.353 | 861.459 |
| MW(α, λ, β) | 0.057 | 0.348 | 858.767 | 859.071 | 866.024 | 861.682 |
| EW(α, λ, β) | 0.059 | 0.351 | 858.690 | 858.994 | 865.946 | 861.605 |
| BW(a, b, α, β) | 0.088 | 0.544 | 863.876 | 864.389 | 873.551 | 867.763 |
| KwBXII(a, b, c, k, s) | 0.083 | 0.508 | 864.870 | 865.649 | 876.964 | 869.728 |
Table 8. Estimates of the fitted models for COVID-19 times data.
| Model | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 |
| WFW(θ, λ) | 0.127 (0.015) | 3.951 (0.931) | | |
| GFW(a, α, β) | 0.3418 (0.566) | 0.1315 (0.07) | 16.38 (24.75) | |
| FW(θ, λ) | 0.1199 (0.018) | 5.596 (1.056) | | |
| EW(α, λ, β) | 4.084 (3.476) | 1.188 (0.679) | 2.508 (3.049) | |
| KwW(a, b, λ, k) | 100 (1472.02) | 100 (960.966) | 2.444 (13.220) | 0.1249 (0.6912) |
| W(α, β) | 6.9637 (0.715) | 1.879 (0.262) | | |
| LNORM(μ, σ) | 1.6471 (0.112) | 0.611 (0.078) | | |
Table 9. Adequacy measures of the fitted models for the COVID-19 times dataset.
| Model | AIC | CAIC | BIC | HQIC |
| WFW(θ, λ) | 157.7342 | 158.1787 | 159.5366 | 158.6308 |
| GFW(a, α, β) | 158.677 | 159.6 | 162.8806 | 160.0217 |
| FW(α, β) | 158.810 | 158.2553 | 160.6132 | 158.7073 |
| KwW(a, b, λ, k) | 161.3179 | 162.9179 | 166.9226 | 163.1109 |
| W(α, β) | 158.0685 | 158.5129 | 160.8709 | 158.965 |
| LNORM(μ, σ) | 158.4702 | 159 | 161.2726 | 159.3667 |
Table 10. MOP P assessment under n = 1000 and P = 1 , 2 , 3 , 4 , 5 .
α0 = 10, β0 = 2.5 (TMV = 1.001314)
| Measure | P = 1 | P = 2 | P = 3 | P = 4 | P = 5 |
| MSE | 0.9983274 | 0.9983274 | 0.9983274 | 0.9983274 | 0.9983274 |
| Bias | 0.9991634 | 0.9991634 | 0.9991634 | 0.9991634 | 0.9991634 |

θ0 = 2, λ0 = 1.5 (TMV = 0.9973669)
| Measure | P = 1 | P = 2 | P = 3 | P = 4 | P = 5 |
| MSE | 0.9947407 | 0.9947407 | 0.9947407 | 0.9947407 | 0.9947407 |
| Bias | 0.9973669 | 0.9973669 | 0.9973669 | 0.9973669 | 0.9973669 |

θ0 = 0.5, λ0 = 0.5 (TMV = 0.9978629)
| Measure | P = 1 | P = 2 | P = 3 | P = 4 | P = 5 |
| MSE | 0.9957304 | 0.9957304 | 0.9957304 | 0.9957303 | 0.9957296 |
| Bias | 0.9978629 | 0.9978629 | 0.9978629 | 0.9978629 | 0.9978625 |

θ0 = 100, λ0 = 50 (TMV = 0.9988142)
| Measure | P = 1 | P = 2 | P = 3 | P = 4 | P = 5 |
| MSE | 0.9976297 | 0.9976297 | 0.9976297 | 0.9976297 | 0.9976297 |
| Bias | 0.9988142 | 0.9988142 | 0.9988142 | 0.9988142 | 0.9988142 |
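Table 10 compares the mean of order P (MOP_P) with the true mean value (TMV) and reports the corresponding MSE and bias for P = 1, ..., 5. The sketch below illustrates one way to carry out such a comparison, under the assumption that MOP_P is computed as the power mean of order P; if the paper's MOP_P estimator differs, that estimator should be substituted for `mop`.

```python
# Sketch of a mean-of-order-P comparison, assuming MOP_P is the power mean of
# order P (an interpretation, not necessarily the paper's exact estimator).
# The "optimal" order is the P whose MOP_P lies closest to the true mean value.
import numpy as np

def mop(x, P):
    x = np.asarray(x, dtype=float)
    return np.mean(x ** P) ** (1.0 / P)

def optimal_order(x, true_mean, orders=(1, 2, 3, 4, 5)):
    gaps = {P: abs(mop(x, P) - true_mean) for P in orders}
    best = min(gaps, key=gaps.get)
    return best, gaps
```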
Table 11. PORT-VaR analysis for extreme failure times.
| CL | N. of PORT | VaR | TVaR | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max. |
| 80% | 40 | 3.490 | 1.5764 | 0.148 | 0.586 | 3.067 | 4.158 | 6.410 | 15.08 |
| 85% | 42 | 3.019 | 1.035 | 0.114 | 0.571 | 2.931 | 3.965 | 5.929 | 15.08 |
| 90% | 45 | 1.491 | 0.593 | 0.086 | 0.381 | 2.054 | 3.708 | 4.893 | 15.08 |
| 95% | 47 | 0.744 | 0.375 | 0.0740 | 0.3205 | 2.0060 | 3.5530 | 4.7135 | 15.08 |
| 99% | 94 | 0.321 | 0.130 | 0.058 | 0.254 | 1.600 | 3.410 | 4.534 | 15.08 |
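Tables 11 and 12 summarize the observations exceeding a high threshold at each confidence level (CL), together with VaR- and TVaR-type quantities. The sketch below shows a simplified empirical version in which the threshold is the sample quantile at the chosen CL; the paper's PORT-VaR procedure uses a random threshold, so this is only an illustrative stand-in.

```python
# Simplified empirical peaks-over-threshold summary in the spirit of Tables 11
# and 12. The threshold is taken as the sample quantile at the confidence level
# (the paper's PORT-VaR uses a random threshold instead).
import numpy as np

def port_summary(data, cl=0.80):
    x = np.sort(np.asarray(data, dtype=float))
    var_cl = np.quantile(x, cl)              # empirical VaR at level cl
    peaks = x[x > var_cl]                    # exceedances over the threshold
    q1, med, q3 = np.quantile(peaks, [0.25, 0.50, 0.75])
    return {"CL": cl, "N. of PORT": int(peaks.size), "VaR": var_cl,
            "TVaR": peaks.mean(),            # empirical tail mean; the paper's
                                             # TVaR may be computed differently
            "Min.": peaks.min(), "1st Qu.": q1, "Median": med,
            "Mean": peaks.mean(), "3rd Qu.": q3, "Max.": peaks.max()}
```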
Table 12. PORT-VaR analysis for extreme COVID-19 deaths.
| CL | N. of PORT | VaR | TVaR | Min. | 1st Qu. | Median | Mean | 3rd Qu. | Max. |
| 80% | 66 | 24.43 | 0.375 | 25.00 | 40.25 | 72.50 | 81.74 | 107.75 | 201.00 |
| 85% | 70 | 16.91 | 1.769 | 19.00 | 38.00 | 70.50 | 78.29 | 107.00 | 201.00 |
| 90% | 72 | 15 | 11 | 16.00 | 37.50 | 68.00 | 76.56 | 107.00 | 201.00 |
| 95% | 78 | 10.3 | 7.2 | 13.00 | 32.50 | 58.50 | 71.76 | 103.25 | 201.00 |
| 99% | 82 | 6.46 | 4 | 7.00 | 30.25 | 57.50 | 68.65 | 101.00 | 201.00 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
