Search Results (50)

Search Parameters:
Keywords = exponentiated exponential-Pareto model

20 pages, 797 KB  
Article
A Novel Exponentiated Pareto Exponential Distribution with Applications in Environmental and Financial Datasets
by Ibrahim Sule and Mogiveny Rajkoomar
Stats 2026, 9(2), 41; https://doi.org/10.3390/stats9020041 - 9 Apr 2026
Viewed by 168
Abstract
Environmental and financial datasets often display complex distributional characteristics, including heavy tails, high skewness and the presence of extreme observations. Traditional probability models such as the exponential, gamma or log-normal distributions may not adequately capture these behaviours, particularly when modelling extreme events such as rainfall, pollution levels, stock returns or loss severities. By integrating the characteristics of the Pareto and exponential distributions into an exponentiated framework suited to environmental and financial data, this study presents a novel three-parameter exponentiated Pareto exponential (EPE) distribution, constructed from the exponentiated Pareto family of distributions with the classical exponential distribution as the baseline model. The new model extends the classical exponential distribution with additional shape parameters that simultaneously regulate its centre and tail behaviour. The statistical and mathematical characteristics of the proposed distribution are derived and studied. A simulation exercise based on maximum likelihood estimation shows that the estimators are efficient. The practical applicability of the model is illustrated with four real-life datasets using model-adequacy and goodness-of-fit measures such as the log-likelihood, Akaike information criterion and Bayesian information criterion. The results reveal that the proposed model fits better than the comparator models, making the EPE distribution useful and robust in environmental and financial applications.
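The paper's EPE density is not reproduced in this listing, but the estimation-and-comparison workflow the abstract describes (maximum likelihood fitting followed by AIC/BIC comparison) can be sketched on the related, well-known exponentiated exponential model with CDF F(x) = (1 − e^(−λx))^α; the parameter values and sample size below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Simulated data from the exponentiated exponential via the inverse CDF:
# F(x) = (1 - exp(-lam*x))**alpha  =>  x = -log(1 - u**(1/alpha)) / lam
alpha_true, lam_true = 2.0, 1.5
u = rng.uniform(size=500)
data = -np.log(1.0 - u ** (1.0 / alpha_true)) / lam_true

def nll(theta):
    """Negative log-likelihood of the exponentiated exponential."""
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    # density: f(x) = alpha*lam*exp(-lam*x)*(1 - exp(-lam*x))**(alpha - 1)
    z = -lam * data
    return -np.sum(np.log(alpha) + np.log(lam) + z
                   + (alpha - 1) * np.log1p(-np.exp(z)))

fit = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
k, n = 2, len(data)
aic = 2 * k + 2 * fit.fun        # Akaike information criterion
bic = k * np.log(n) + 2 * fit.fun  # Bayesian information criterion
print(fit.x, aic, bic)
```

The same loop over several candidate models, ranked by AIC/BIC, is the comparison the abstract reports for the EPE distribution.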

25 pages, 3930 KB  
Article
A Novel Unit Exponential Delay Time Distribution: Theory, Inference and Applications
by Ahmed M. Herzallah, Asmaa S. Al-Moisheer and Khalaf S. Sultan
Mathematics 2026, 14(6), 1029; https://doi.org/10.3390/math14061029 - 18 Mar 2026
Viewed by 215
Abstract
This paper introduces the Unit Exponential Delay Time Distribution (UEDTD), a two-parameter model for data with support in the unit interval (0,1). The model is derived using two distinct approaches: a transformation method applied to the Exponential Delay Time Distribution (EDTD), which itself arises as the convolution of two independent exponential random variables, and a product-convolution method of two independent power-function random variables that connects the UEDTD to the Pareto distribution, offering additional interpretability and giving rise to several exact and efficient algorithms for generating random samples. The limit distribution is examined, and key statistical properties are derived. The order statistics are discussed and formulated, along with interesting asymptotic results for the distributions of the extremes. A reparameterization of the model is suggested to improve estimation stability, and the maximum likelihood approach is employed for parameter inference. A simulation study demonstrates the consistency and efficiency of the estimators across various sample sizes and parameter configurations. The practical applicability of the UEDTD is demonstrated through a real-world dataset, where it shows superior performance compared to established unit distributions, confirming its utility for modeling proportional data in applied research.
(This article belongs to the Section D1: Probability and Statistics)
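The two construction routes the abstract mentions can be illustrated with a small, self-contained sketch. The shape parameters a and b below are hypothetical, and the code only demonstrates the standard identity that exp(−(E1 + E2)) for independent exponentials equals a product of two power-function variables on (0,1) — not the UEDTD itself:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, 3.0   # hypothetical shape parameters
n = 100_000

# Route 1: transform a convolution of two independent exponentials.
# If T = E1 + E2 with E1 ~ Exp(rate=a), E2 ~ Exp(rate=b), then X = exp(-T) lies in (0,1).
t = rng.exponential(1 / a, n) + rng.exponential(1 / b, n)
x1 = np.exp(-t)

# Route 2: product of two independent power-function variables.
# U**(1/a) has CDF P(U**(1/a) <= x) = x**a on (0,1), i.e. PowerFunction(a).
x2 = rng.uniform(size=n) ** (1 / a) * rng.uniform(size=n) ** (1 / b)

# Both routes target the same law; by independence E[X] = (a/(a+1)) * (b/(b+1)).
print(x1.mean(), x2.mean(), (a / (a + 1)) * (b / (b + 1)))
```

The agreement of the two sample means with the product-moment formula is a quick check that the two constructions coincide.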

21 pages, 3969 KB  
Article
Modelling NO2 Emissions at Eskom’s Coal-Fired Power Station: Application of Statistical Distributions at Arnot
by Mpendulo Wiseman Mamba and Delson Chikobvu
Environments 2026, 13(2), 111; https://doi.org/10.3390/environments13020111 - 17 Feb 2026
Viewed by 850
Abstract
The combustion of coal comes at a heavy price in pollutant emissions. To assist in the planning and management of these emissions and to protect human health, this study uses three relatively heavy-tailed distributions, namely the Weibull, Lognormal and Pareto distributions, to analyse and characterise the distribution of NO2 emissions (in tons) from Arnot, a coal-fired power station of South Africa's power utility, Eskom. Quantile–quantile (QQ) plots and their corresponding derivative plots for the three distributions are used to characterise the statistical distribution of NO2 emissions. The strength of derivative plots for this purpose is that they better capture and explain the behaviour of the data across its different components. Although the method offers a flexible means of characterising data, it is not commonly applied to emissions data, and in particular not to NO2 emissions from a coal-fuelled power station such as Arnot. The choice of distributions is motivated by their ability to accommodate tails of varying heaviness relative to the exponential distribution; the ranking from lighter to heavier tail (Weibull, Lognormal, Pareto) is taken into consideration in arriving at the best-fitting distribution(s). The Weibull distribution, with a lighter tail than the exponential distribution, gave the best fit to the main body of the data, outperforming the Lognormal and Pareto distributions. The Pareto distribution, however, captures the extreme emission tail behaviour much better than the other two. The Kolmogorov–Smirnov and Vasicek–Song (VS) goodness-of-fit statistics were used to further assess the appropriateness of the fitted distributions. The selection of the Weibull distribution implies that milder high values and less frequent very high NO2 emissions are expected, revealing the weakness of such criteria when extremes are present. These findings may interest authorities planning and drawing up policies for the reduction and management of emissions, helping them better understand emission behaviour and plan to reduce the impact on humans and the environment. They may also assist practitioners in air quality modelling before more sophisticated methods are explored.
(This article belongs to the Special Issue Air Pollution in Urban and Industrial Areas, 4th Edition)
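A minimal sketch of QQ-based comparison of Weibull, Lognormal and Pareto fits, using synthetic Weibull data as a stand-in (the Arnot emission data are not reproduced in this listing); the QQ-plot correlation coefficient serves as a simple numeric summary of how linear each plot is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic stand-in for emission totals (tons); parameters are illustrative only.
data = stats.weibull_min.rvs(1.4, scale=300.0, size=200, random_state=rng)

probs = (np.arange(1, len(data) + 1) - 0.5) / len(data)   # plotting positions
sample_q = np.sort(data)

results = {}
for name, dist in [("weibull", stats.weibull_min),
                   ("lognorm", stats.lognorm),
                   ("pareto", stats.pareto)]:
    params = dist.fit(data, floc=0)      # location fixed at zero for comparability
    theo_q = dist.ppf(probs, *params)
    # Correlation of theoretical vs. sample quantiles: near 1 means a near-linear QQ plot.
    results[name] = np.corrcoef(theo_q, sample_q)[0, 1]

print(results)
```

Plotting `theo_q` against `sample_q` for each candidate reproduces the usual QQ plots; the paper's derivative plots go further by examining local slopes across the body and tail.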

27 pages, 454 KB  
Article
Optimal Dividend and Capital Injection Strategies with Exit Options in Jump-Diffusion Models
by Ningning Feng and Ran Xu
Mathematics 2026, 14(3), 447; https://doi.org/10.3390/math14030447 - 27 Jan 2026
Viewed by 353
Abstract
This paper studies optimal dividend and capital injection strategies with active exit options under a jump-diffusion model. We introduce a piecewise terminal payoff function to capture stop-loss exits (for deficits) and profit-taking exits (for surpluses), enabling shareholders to dynamically balance risk and return. Using the dynamic programming principle, we derive the associated quasi-variational inequalities (QVIs) and characterize the value function as the unique viscosity solution. To address analytical challenges, we employ the Markov chain approximation method, constructing a controlled Markov chain that closely approximates the jump-diffusion dynamics. Numerical solutions of the approximated problem are obtained via value iteration. The numerical results demonstrate how the value function and optimal strategies respond to different claim distributions (comparing Exponential and Pareto cases), key model parameters, and exit payoff functions. The numerical study further validates the algorithm’s convergence and examines the stability of solutions with respect to domain truncation in the QVI formulation.

19 pages, 5275 KB  
Article
Prediction of Micro-Milling-Induced Residual Stress and Deformation in Titanium Alloy Thin-Walled Components and Multi-Objective Collaborative Optimization
by Jie Yi, Rui Wang, Dengyun Du, Dong Han, Xinyao Wang and Junfeng Xiang
Materials 2026, 19(2), 219; https://doi.org/10.3390/ma19020219 - 6 Jan 2026
Viewed by 525
Abstract
The intrinsically low stiffness of titanium alloy thin-walled components causes residual stresses to readily accumulate during high-speed micro-milling, leading to deformation and hindering machining precision. To clarify the residual-stress formation mechanism and enable deformation control, this study first proposes a surface residual stress characterization model based on an exponentially decaying sinusoidal function, with model parameters efficiently identified via an improved particle swarm optimization algorithm, allowing rapid characterization of stress distributions under different process conditions. A response surface model constructed using a central composite design is then employed to reveal the coupled effects of machining parameters on residual stress and top-surface deformation. On this basis, a GA-BP neural network–based prediction framework is developed to improve the accuracy of residual stress and deformation prediction, while the AGE-MOEA2 multi-objective evolutionary algorithm is used to optimize micro-milling parameters for the simultaneous minimization of residual stress and deformation via Pareto-optimal solutions. Validation experiments on thin-wall micro-milling confirm that the optimized parameters significantly reduce peak residual stress and suppress top-surface deformation. The proposed modeling and optimization strategy provides an effective reference for high-precision machining of titanium alloy thin-walled components.

34 pages, 5123 KB  
Article
Comparative Analysis of Tail Risk in Emerging and Developed Equity Markets: An Extreme Value Theory Perspective
by Sthembiso Dlamini and Sandile Charles Shongwe
Int. J. Financial Stud. 2026, 14(1), 11; https://doi.org/10.3390/ijfs14010011 - 6 Jan 2026
Viewed by 1570
Abstract
This research explores the application of extreme value theory in modelling and quantifying tail risks across different economic equity markets, with a focus on the Nairobi Securities Exchange (NSE20), the South African Equity Market (FTSE/JSE Top40) and the US Equity Index (S&P500). The study aims to recommend the most suitable probability distribution between the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD) and to assess the associated tail risk using the value-at-risk and expected shortfall. To address volatility clustering, four generalised autoregressive conditional heteroscedasticity (GARCH) models (standard GARCH, exponential GARCH, threshold-GARCH and APARCH (asymmetric power ARCH)) are first applied to returns before implementing the peaks-over-threshold and block maxima methods on standardised residuals. For each equity index, the probability models are ranked on goodness-of-fit and accuracy using a combination of graphical and numerical methods, as well as a comparison of empirical and theoretical risk measures. Beyond its technical contributions, this study has broader implications for building sustainable and resilient financial systems. The results indicate that, for the GEVD, the maxima and minima returns of block size 21 yield the best fit for all indices. For the GPD, Hill’s plot is the preferred threshold selection method across all indices due to higher exceedances. A final comparison between the GEVD and GPD is conducted to estimate tail risk for each index, and it is observed that the GPD consistently outperforms the GEVD regardless of market classification.
(This article belongs to the Special Issue Financial Markets: Risk Forecasting, Dynamic Models and Data Analysis)
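The peaks-over-threshold step described above can be sketched with the standard POT formulas for value-at-risk and expected shortfall; the Student-t stand-in for GARCH-standardised residuals, the threshold quantile and the confidence level are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-in for GARCH-standardised loss residuals (heavy right tail).
losses = stats.t.rvs(df=4, size=5000, random_state=rng)

u = np.quantile(losses, 0.95)                   # peaks-over-threshold threshold
exc = losses[losses > u] - u                    # exceedances over the threshold
xi, _, beta = stats.genpareto.fit(exc, floc=0)  # GPD shape xi and scale beta

n, n_u = len(losses), len(exc)
p = 0.99
# Standard POT estimators (valid for xi < 1):
var_p = u + beta / xi * ((n / n_u * (1 - p)) ** (-xi) - 1)  # value-at-risk
es_p = (var_p + beta - xi * u) / (1 - xi)                   # expected shortfall
print(var_p, es_p)
```

The same computation on each index's standardised residuals, compared against block-maxima (GEVD) estimates, is the GPD-vs-GEVD comparison the abstract reports.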

28 pages, 1641 KB  
Article
Bayesian Estimation of R-Vine Copula with Gaussian-Mixture GARCH Margins: An MCMC and Machine Learning Comparison
by Rewat Khanthaporn and Nuttanan Wichitaksorn
Mathematics 2025, 13(23), 3886; https://doi.org/10.3390/math13233886 - 4 Dec 2025
Viewed by 978
Abstract
This study proposes Bayesian estimation of multivariate regular vine (R-vine) copula models with generalized autoregressive conditional heteroskedasticity (GARCH) margins modeled by Gaussian-mixture distributions. The Bayesian estimation approach includes Markov chain Monte Carlo and variational Bayes with data augmentation. Although R-vines typically involve computationally intensive procedures limiting their practical use, we address this challenge through parallel computing techniques. To demonstrate our approach, we employ thirteen bivariate copula families within an R-vine pair-copula construction, applied to a large number of marginal distributions. The margins are modeled as exponential-type GARCH processes with intertemporal capital asset pricing specifications, using a mixture of Gaussian and generalized Pareto distributions. Results from an empirical study involving 100 financial returns confirm the effectiveness of our approach.
(This article belongs to the Special Issue Contemporary Bayesian Analysis: Methods and Applications)

17 pages, 591 KB  
Article
Extending Approximate Bayesian Computation to Non-Linear Regression Models: The Case of Composite Distributions
by Mostafa S. Aminzadeh and Min Deng
Risks 2025, 13(11), 220; https://doi.org/10.3390/risks13110220 - 5 Nov 2025
Viewed by 616
Abstract
Modeling loss data is a crucial aspect of actuarial science. In the insurance industry, small claims occur frequently, while large claims are rare. Traditional heavy-tailed distributions, such as the Weibull, Log-Normal, and Inverse Gaussian distributions, are not well suited to describing insurance data, which often exhibit skewness and fat tails. The literature has explored classical and Bayesian inference methods for the parameters of composite distributions, such as the Exponential–Pareto, Weibull–Pareto, and Inverse Gamma–Pareto distributions. These models effectively separate small to moderate losses from significant losses using a threshold parameter. This research introduces a new composite distribution, the two-parameter Gamma–Pareto distribution, and employs a numerical computational approach to find the maximum likelihood estimates (MLEs) of its parameters. A novel computational approach is proposed for a nonlinear regression model in which the loss variable follows the Gamma–Pareto distribution and depends on multiple covariates. The maximum likelihood (ML) and Approximate Bayesian Computation (ABC) methods are used to estimate the regression parameters. The Fisher information matrix, along with a multivariate normal distribution as the prior distribution, is utilized in the ABC method. Simulation studies indicate that the ABC method outperforms the ML method in terms of accuracy.
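The ABC idea the abstract relies on can be sketched with a toy rejection sampler on a one-parameter exponential model (not the paper's Gamma–Pareto regression); the flat prior, the mean as summary statistic and the tolerance are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
# Observed data from a model with unknown rate (ground truth 2.0).
obs = rng.exponential(1 / 2.0, 300)
s_obs = obs.mean()                       # summary statistic of the observed data

# ABC rejection: draw theta from the prior, simulate a dataset of the same size,
# and keep draws whose simulated summary lands close to the observed one.
prior = rng.uniform(0.1, 5.0, 20_000)    # flat prior over the rate parameter
accepted = []
for theta in prior:
    sim = rng.exponential(1 / theta, 300)
    if abs(sim.mean() - s_obs) < 0.02:   # tolerance epsilon
        accepted.append(theta)
accepted = np.array(accepted)
print(len(accepted), accepted.mean())
```

The accepted draws approximate the posterior; in the paper, this step is driven by the Gamma–Pareto likelihood structure with the Fisher-information-based multivariate normal prior.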

27 pages, 547 KB  
Article
Derivation of the Pareto Index in the Economic System as a Scale-Free Network and Introduction of New Parameters to Monitor Optimal Wealth and Income Distributions
by John G. Ingersoll
Economies 2025, 13(11), 310; https://doi.org/10.3390/economies13110310 - 30 Oct 2025
Viewed by 1093
Abstract
The purpose of this work is twofold: first, to derive an exact analytical form of the Pareto index based on the already developed model of the economy as a scale-free network comprising a given amount of either wealth or income (the total number of links, each link representing a non-zero amount or quantum of income or wealth) distributed among its variable number of actors (nodes), all of whom have equal access to the system; and second, to employ the derived analytical form of the Pareto index to determine the degree to which the observed inequality in wealth and in income, as measured by the respective empirical values of the Pareto index, is inherent in the economic system rather than the result of externally imposed factors invariably reflecting a lack of equal access. The derived analytical form of the Pareto index for wealth or income is an exponential function whose exponent is the inverse of the average amount of wealth or income per actor (one-half of the average number of links per node) in the economic model. This exponent features prominently in the scale-free model of the economy and has a numerical value of 0.69 when the Pareto index attains a numerical value of 2, which signifies the optimal, albeit still unequal, distribution of wealth or income in the economy under the condition of equal access. Because the scale-free model of the economy corresponds to a physical system of quantum particles, such as photons in thermodynamic equilibrium or a state of maximum entropy in accordance with the laws of statistical mechanics, the inverse of the exponent is proportional to the temperature of the economic system, and a new parameter is introduced to describe, in a comprehensible manner, the deviation of the economic system from its optimal distribution of wealth or income. A comparison of the empirical wealth and income Pareto indexes for the four largest economies in the world, i.e., the USA, China, Germany, and Japan, which together account for over 50% of global GDP, against the corresponding optimal values per the scale-free model of the economy reveals interesting trends that can be explained by prevailing deviations from equal access, as manifested in inadequate education, health care, and housing, as well as in rules and institutions favoring certain actors over others, particularly with regard to the accumulation of wealth. It is also determined that the newly introduced parameters of the scale-free model of the economy, namely the temperature and the quanta of wealth and of income, should be expressed in purchasing-power-parity exchange rates for meaningful comparisons among national economies over time.

17 pages, 505 KB  
Article
On Doubly-Generalized-Transmuted Distributions
by Barry C. Arnold, Yolanda M. Gómez, Diego I. Gallardo and Héctor W. Gómez
Symmetry 2025, 17(10), 1606; https://doi.org/10.3390/sym17101606 - 27 Sep 2025
Cited by 1 | Viewed by 510
Abstract
Many parametric models can be enriched by introducing additional parameters through transmutation, mixing, or compounding techniques. In this paper, we develop the framework of doubly generalized transmutation models (DGTMs), obtained by the repeated application of rank transmutation maps and their generalizations. We show that several flexible families already available in the literature can be reinterpreted as instances of double or multiple transmutation, thus unifying apparently disparate constructions under a common perspective. A key feature of DGTMs is their ability to flexibly control symmetry through parameterization, enabling more accurate modeling of asymmetric or heavy-tailed phenomena. We also discuss the potential extension of these models to the bivariate case. In addition, we introduce the gentransmuted R package, Version 1.0, which provides routines for data generation, parameter estimation, and model comparison for generalized transmutation models. Two real data applications illustrate the practical advantages of this approach, highlighting improved model fit relative to classical alternatives. Our results underscore the value of transmutation-based methods as a systematic tool for generating flexible probability distributions and advancing their computational implementation.
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)
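The basic building block, the quadratic rank transmutation map G = (1 + λ)F − λF² (valid for |λ| ≤ 1), can be applied twice to illustrate a doubly transmuted family; the exponential baseline CDF and the two λ values below are hypothetical, and the check confirms the result is still a valid CDF:

```python
import numpy as np

# Quadratic rank transmutation map: G(x) = (1 + lam)*F(x) - lam*F(x)**2, |lam| <= 1.
def transmute(F, lam):
    return lambda x: (1 + lam) * F(x) - lam * F(x) ** 2

F0 = lambda x: 1 - np.exp(-x)   # baseline: standard exponential CDF
G1 = transmute(F0, 0.6)         # singly transmuted
G2 = transmute(G1, -0.4)        # doubly transmuted (hypothetical lambdas)

x = np.linspace(0, 10, 2001)
cdf = G2(x)
# A valid CDF starts at 0, approaches 1, and is non-decreasing.
print(cdf[0], cdf[-1], np.all(np.diff(cdf) >= 0))
```

Monotonicity holds because the map u ↦ (1 + λ)u − λu² is non-decreasing on [0,1] whenever |λ| ≤ 1, so compositions of such maps with a baseline CDF remain CDFs.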

24 pages, 7349 KB  
Article
Return Level Prediction with a New Mixture Extreme Value Model
by Emrah Altun, Hana N. Alqifari and Kadir Söyler
Mathematics 2025, 13(17), 2705; https://doi.org/10.3390/math13172705 - 22 Aug 2025
Viewed by 1174
Abstract
The generalized Pareto distribution is frequently used for modeling extreme values above an appropriate threshold. Since determining an appropriate threshold value is difficult, mixture extreme value models have risen to prominence. In this study, mixture extreme value models based on the exponentiated Pareto distribution are proposed. The Weibull, gamma, and log-normal models are used as bulk densities. The parameter estimates of the proposed models are obtained using the maximum likelihood approach. Two approaches, based on maximizing the log-likelihood and the Kolmogorov–Smirnov p-value, are used to determine the appropriate threshold value. The effectiveness of these methods is compared using simulation studies. The proposed models are compared with other mixture models through an application to earthquake data. The GammaEP web application is developed to ensure the reproducibility of the results and the usability of the proposed model.
(This article belongs to the Special Issue Mathematical Modelling and Applied Statistics)
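The threshold-selection idea based on maximizing the Kolmogorov–Smirnov p-value can be sketched as follows, using a plain generalized Pareto tail rather than the paper's exponentiated Pareto mixture; the synthetic lognormal data and the grid of candidate threshold quantiles are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = stats.lognorm.rvs(0.9, size=2000, random_state=rng)  # synthetic heavy-tailed data

best = (None, -1.0)
for q in np.arange(0.80, 0.96, 0.01):        # candidate threshold quantiles
    u = np.quantile(x, q)
    exc = x[x > u] - u                       # exceedances over candidate threshold
    c, _, scale = stats.genpareto.fit(exc, floc=0)
    # KS p-value of the fitted tail model; pick the threshold that maximizes it.
    p = stats.kstest(exc, "genpareto", args=(c, 0, scale)).pvalue
    if p > best[1]:
        best = (u, p)

u_star, p_star = best
print(u_star, p_star)
```

One caveat worth noting: a KS test with parameters estimated from the same data yields optimistic p-values, so the p-value here serves as a relative selection score rather than a formal test.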

28 pages, 835 KB  
Article
Progressive First-Failure Censoring in Reliability Analysis: Inference for a New Weibull–Pareto Distribution
by Rashad M. EL-Sagheer and Mahmoud M. Ramadan
Mathematics 2025, 13(15), 2377; https://doi.org/10.3390/math13152377 - 24 Jul 2025
Viewed by 914
Abstract
This paper explores statistical techniques for estimating unknown lifetime parameters using data from a progressive first-failure censoring scheme. The failure times are modeled with a new Weibull–Pareto distribution. Maximum likelihood estimators are derived for the model parameters, as well as for the survival and hazard rate functions, although these estimators do not have explicit closed-form solutions. The Newton–Raphson algorithm is employed for the numerical computation of these estimates. Confidence intervals for the parameters are approximated based on the asymptotic normality of the maximum likelihood estimators. The Fisher information matrix is calculated using the missing information principle, and the delta technique is applied to approximate confidence intervals for the survival and hazard rate functions. Bayesian estimators are developed under squared error, linear exponential, and general entropy loss functions, assuming independent gamma priors. Markov chain Monte Carlo sampling is used to obtain Bayesian point estimates and the highest posterior density credible intervals for the parameters and reliability measures. Finally, the proposed methods are demonstrated through the analysis of a real dataset.
(This article belongs to the Section D1: Probability and Statistics)

33 pages, 15492 KB  
Article
Seasonal Bias Correction of Daily Precipitation over France Using a Stitch Model Designed for Robust Representation of Extremes
by Philippe Ear, Elena Di Bernardino, Thomas Laloë, Adrien Lambert and Magali Troin
Atmosphere 2025, 16(4), 480; https://doi.org/10.3390/atmos16040480 - 19 Apr 2025
Cited by 1 | Viewed by 2089
Abstract
Highly resolved and accurate daily precipitation data are required for impact models to perform adequately and to correctly measure the impacts of high-risk events. To produce such data, bias correction is often needed. Most of these statistical methods correct the probability distributions of daily precipitation by modeling them with either empirical or parametric distributions. A recent semi-parametric model based on a penalized Berk–Jones (BJ) statistical test, which allows automatic and customized splicing of parametric and non-parametric distributions, has been developed. This method, called the Stitch-BJ model, was found to model daily precipitation correctly and showed interesting potential in a bias correction setting. In the present study, we consolidate these results by taking into account the seasonal properties of daily precipitation in an out-of-sample context and by considering dry-day probabilities in our methodology. We evaluate the performance of the Stitch-BJ method in this seasonal bias correction setting against more classical models such as the Gamma, Exponentiated Weibull (ExpW), Extended Generalized Pareto (EGP) and empirical distributions. Results show that a seasonal separation of the data is necessary to account for intra-annual non-stationarity. Moreover, the Stitch-BJ distribution consistently performed as well as or better than all the other considered models over the validation set, including the empirical distribution, which is often used due to its robustness. Finally, while methods for correcting dry-day probabilities can be applied easily, their relevance can be questioned, as temporal and spatial correlations are often neglected.

24 pages, 755 KB  
Article
Inference for Dependent Competing Risks with Partially Observed Causes from Bivariate Inverted Exponentiated Pareto Distribution Under Generalized Progressive Hybrid Censoring
by Rani Kumari, Yogesh Mani Tripathi, Rajesh Kumar Sinha and Liang Wang
Axioms 2025, 14(3), 217; https://doi.org/10.3390/axioms14030217 - 16 Mar 2025
Viewed by 856
Abstract
In this paper, inference under dependent competing risk data is considered with multiple causes of failure. We discuss both classical and Bayesian methods for estimating model parameters under the assumption that data are observed under generalized progressive hybrid censoring. The maximum likelihood estimators of model parameters are obtained when occurrences of latent failure follow a bivariate inverted exponentiated Pareto distribution. The associated existence and uniqueness properties of these estimators are established. The asymptotic interval estimators are also constructed. Further, Bayes estimates and highest posterior density intervals are derived using flexible priors. A Monte Carlo sampling algorithm is proposed for posterior computations. The performance of all proposed methods is evaluated through extensive simulations. Moreover, a real-life example is also presented to illustrate the practical applications of our inferential procedures.

22 pages, 1347 KB  
Article
Semi-Empirical Approach to Evaluating Model Fit for Sea Clutter Returns: Focusing on Future Measurements in the Adriatic Sea
by Bojan Vondra
Entropy 2024, 26(12), 1069; https://doi.org/10.3390/e26121069 - 9 Dec 2024
Cited by 1 | Viewed by 1278
Abstract
A method for evaluating Kullback–Leibler (KL) divergence and Squared Hellinger (SH) distance between empirical data and a model distribution is proposed. This method exclusively utilises the empirical Cumulative Distribution Function (CDF) of the data and the CDF of the model, avoiding data processing such as histogram binning. The proposed method converges almost surely, with the proof based on the use of exponentially distributed waiting times. An example demonstrates convergence of the KL divergence and SH distance to their true values when utilising the Generalised Pareto (GP) distribution as empirical data and the K distribution as the model. Another example illustrates the goodness of fit of these (GP and K-distribution) models to real sea clutter data from the widely used Intelligent PIxel processing X-band (IPIX) measurements. The proposed method can be applied to assess the goodness of fit of various models (not limited to GP or K distribution) to clutter measurement data such as those from the Adriatic Sea. Distinctive features of this small and immature sea, like the presence of over 1300 islands that affect local wind and wave patterns, are likely to result in an amplitude distribution of sea clutter returns that differs from predictions of models designed for oceans or open seas. However, to the author’s knowledge, no data on this specific topic are currently available in the open literature, and such measurements have yet to be conducted.
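For two fully specified models, the Squared Hellinger distance can be evaluated directly by numerical integration of the densities; this is a simpler setting than the paper's CDF-only estimator, and an exponential comparator is used in place of the K distribution (which SciPy does not provide). The generalized Pareto shape parameter below is hypothetical:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Squared Hellinger distance between densities f and g on [lo, hi]:
# SH(f, g) = 1 - integral of sqrt(f(x) * g(x)) dx  (one minus the Bhattacharyya coefficient).
def sq_hellinger(pdf_f, pdf_g, lo, hi):
    bc, _ = quad(lambda x: np.sqrt(pdf_f(x) * pdf_g(x)), lo, hi)
    return 1.0 - bc

gp = stats.genpareto(c=1.0)   # hypothetical heavy-tailed clutter-amplitude model
ex = stats.expon()            # light-tailed comparator

h2_same = sq_hellinger(gp.pdf, gp.pdf, 0, np.inf)  # identical models: distance ~ 0
h2_diff = sq_hellinger(gp.pdf, ex.pdf, 0, np.inf)  # different models: distance in (0, 1)
print(h2_same, h2_diff)
```

The SH distance is bounded in [0, 1], which makes it a convenient absolute goodness-of-fit scale; the paper's contribution is estimating the same quantity from empirical CDFs without binning.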