Search Results (72)

Search Parameters:
Keywords = Metropolis–Hastings sampling

54 pages, 22294 KB  
Article
Research on Risk Evolution Probability of Urban Lifeline Natech Events Based on MdC-MCMC
by Shifeng Li and Yu Shang
Sustainability 2025, 17(17), 7664; https://doi.org/10.3390/su17177664 - 25 Aug 2025
Viewed by 669
Abstract
Urban lifeline Natech events are coupled systems composed of multiple risks and entities with complex dynamic transmission chains. Predicting risk evolution probabilities is the core task in the safety management of urban lifeline Natech events. First, the risk evolution mechanism is analyzed: urban lifeline Natech events exhibit spatial evolution characteristics, and the parallel and synergistic effects of risk evolution are dissected in the spatial dimension. Next, based on fitting marginal probability distribution functions for natural hazard and urban lifeline risk evolution, a Multi-dimensional Copula (MdC) function for the joint probability distribution of urban lifeline Natech event risk evolution is constructed. Building on the MdC function, a Markov chain Monte Carlo (MCMC) model for predicting risk evolution probabilities of urban lifeline Natech events is developed using the Metropolis–Hastings (M-H) algorithm and Gibbs sampling. Finally, taking the 2021 Zhengzhou ‘7·20’ catastrophic rainstorm as a case study, joint probability distribution functions for risk evolution under rainfall–wind speed scenarios are fitted for the traffic, electric power, communication, water supply, and drainage systems (including different risk transmission chains). Numerical simulations of the joint probability distributions are conducted, and visualizations of the joint probability predictions are generated.
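The MdC construction itself is not given in this listing. As a rough illustration of the general recipe (fit the marginals, then join them with a copula to obtain joint exceedance probabilities), here is a minimal Python sketch using a bivariate Gaussian copula; the marginal families, their parameters, and the copula correlation below are all made up, and the paper's multi-dimensional copula will differ.

```python
# Hypothetical sketch: joining fitted marginals with a bivariate Gaussian
# copula, in the spirit of (but not identical to) the paper's MdC function.
# Marginal parameters and the copula correlation are invented for illustration.
import numpy as np
from scipy import stats

rain = stats.gamma(a=2.0, scale=40.0)       # assumed rainfall marginal (mm/h)
wind = stats.weibull_min(c=2.2, scale=9.0)  # assumed wind-speed marginal (m/s)

rho = 0.55  # assumed dependence between the two hazards
biv = stats.multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, rho], [rho, 1.0]])

def joint_cdf(x_rain, x_wind):
    """Gaussian-copula joint CDF: C(F1(x), F2(y)) = Phi2(z1, z2; rho)."""
    z1 = stats.norm.ppf(rain.cdf(x_rain))
    z2 = stats.norm.ppf(wind.cdf(x_wind))
    return biv.cdf([z1, z2])

def joint_exceedance(x_rain, x_wind):
    """P(rain > x_rain and wind > x_wind) by inclusion-exclusion."""
    return 1.0 - rain.cdf(x_rain) - wind.cdf(x_wind) + joint_cdf(x_rain, x_wind)

print(joint_exceedance(100.0, 15.0))  # joint probability of both exceedances
```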

16 pages, 882 KB  
Article
MatBYIB: A MATLAB-Based Toolkit for Parameter Estimation of Eccentric Gravitational Waves from EMRIs
by Genliang Li, Shujie Zhao, Huaike Guo, Jingyu Su and Zhenheng Lin
Universe 2025, 11(8), 259; https://doi.org/10.3390/universe11080259 - 6 Aug 2025
Viewed by 262
Abstract
Accurate parameter estimation is essential for gravitational wave data analysis. In extreme mass-ratio inspiral binary systems, orbital eccentricity is a critical parameter, yet current software for gravitational wave parameter estimation often neglects its direct estimation. To fill this gap, we have developed MatBYIB, a MATLAB-based software package (Version 1.0) for gravitational wave parameter estimation with arbitrary eccentricity. MatBYIB employs the Analytical Kludge waveform as a computationally efficient signal generator and computes parameter uncertainties via the Fisher information matrix and Markov chain Monte Carlo (MCMC). For Bayesian inference, we implement the Metropolis–Hastings algorithm to derive posterior distributions. To guarantee convergence, the Gelman–Rubin criterion (the potential scale reduction factor $\hat{R}$) is used to determine sampling adequacy, with MatBYIB dynamically increasing the sample size until $\hat{R} < 1.05$ for all parameters. Our results demonstrate strong agreement between predictions based on the Fisher information matrix and full MCMC sampling. The program is user-friendly and allows estimation of gravitational wave parameters with arbitrary eccentricity on standard personal computers.
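MatBYIB's MATLAB internals are not shown in this listing, but the stopping rule it describes is standard. A minimal Python sketch of the Gelman–Rubin diagnostic with a grow-until-converged loop, using a toy target and random-walk Metropolis updates (all choices below are illustrative, not MatBYIB's):

```python
# Minimal sketch (not MatBYIB itself): Gelman-Rubin R-hat over several chains,
# with sampling extended until R-hat < 1.05. Target and proposal are toy choices.
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x**2  # toy target: standard normal log-density

def extend_chain(chain, n_new, step=1.0):
    """Append n_new random-walk Metropolis states to an existing chain."""
    x = chain[-1]
    out = []
    for _ in range(n_new):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        out.append(x)
    return np.concatenate([chain, out])

def gelman_rubin(chains):
    """Potential scale reduction factor from between/within-chain variances."""
    n = min(len(c) for c in chains)
    draws = np.array([c[-n:] for c in chains])     # shape (m, n)
    means = draws.mean(axis=1)
    W = draws.var(axis=1, ddof=1).mean()           # within-chain variance
    B = n * means.var(ddof=1)                      # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Overdispersed starting points, then grow all chains until R-hat < 1.05.
chains = [np.array([x0]) for x0 in (-5.0, 0.0, 5.0, 10.0)]
while gelman_rubin(chains) >= 1.05 if len(chains[0]) > 1 else True:
    chains = [extend_chain(c, 500) for c in chains]
print(len(chains[0]), gelman_rubin(chains))
```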

23 pages, 3124 KB  
Article
Bee Swarm Metropolis–Hastings Sampling for Bayesian Inference in the Ginzburg–Landau Equation
by Shucan Xia and Lipu Zhang
Algorithms 2025, 18(8), 476; https://doi.org/10.3390/a18080476 - 2 Aug 2025
Viewed by 274
Abstract
To improve the sampling efficiency of Markov chain Monte Carlo in complex parameter spaces, this paper proposes an adaptive sampling method that integrates a swarm intelligence mechanism, called the BeeSwarm-MH algorithm. The method combines global exploration by scout bees with local exploitation by worker bees, employing multi-stage perturbation intensities and adaptive step-size tuning to enable efficient posterior sampling. Focusing on Bayesian parameter estimation for the soliton solutions of the two-dimensional complex Ginzburg–Landau equation, we design a dedicated inference framework to systematically compare BeeSwarm-MH with the classical Metropolis–Hastings algorithm. Experimental results demonstrate that BeeSwarm-MH achieves comparable estimation accuracy while significantly reducing the number of iterations and total computation time required for convergence. Moreover, it exhibits superior global search capability and adaptivity, offering a practical approach to efficient Bayesian inference in complex physical models.
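BeeSwarm-MH itself is not reproduced here. For reference, a minimal sketch of the classical Metropolis–Hastings baseline it is compared against, with a simple diminishing-adaptation step-size rule and a toy bimodal posterior standing in for the Ginzburg–Landau inference (both are assumptions for illustration):

```python
# Sketch of a classical random-walk Metropolis-Hastings baseline (the bee-swarm
# mechanism is NOT reproduced here). Step size adapts with diminishing magnitude,
# so the chain still targets the correct distribution. Toy bimodal posterior.
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # toy bimodal density standing in for the soliton-parameter posterior
    return np.log(np.exp(-0.5 * (theta - 2.0) ** 2) +
                  np.exp(-0.5 * (theta + 2.0) ** 2))

def adaptive_mh(n_iter=20000, target_acc=0.4):
    theta, step, samples = 0.0, 1.0, []
    for i in range(1, n_iter + 1):
        prop = theta + step * rng.normal()
        accepted = np.log(rng.uniform()) < log_post(prop) - log_post(theta)
        if accepted:
            theta = prop
        # diminishing adaptation: nudge log step size toward the target rate
        step *= np.exp((float(accepted) - target_acc) / i ** 0.6)
        samples.append(theta)
    return np.array(samples)

draws = adaptive_mh()
print(draws.mean(), draws.std())  # both modes should be visited
```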

35 pages, 11039 KB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Viewed by 264
Abstract
A novel unified progressive hybrid censoring scheme is introduced that combines progressive and hybrid censoring plans, allowing flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced under such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference with informative priors. Two real-world applications, drawn from gold and diamond durability studies, demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science and its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. By applying four optimality criteria to multiple competing plans, the progressive censoring strategies that yield the best performance are identified.
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
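The INH-specific derivations are in the paper; the delta-method step itself is generic. A sketch with a hypothetical reliability function and made-up MLEs and inverse Fisher information (none of these numbers come from the article):

```python
# Generic delta-method sketch (not the paper's INH-specific derivation):
# asymptotic CI for a smooth function g(theta) of the MLE, using the inverse
# Fisher information as the MLE covariance. All numbers are hypothetical.
import numpy as np
from scipy import stats

theta_hat = np.array([1.8, 0.9])        # hypothetical MLEs (alpha, lam)
cov = np.array([[0.040, -0.010],        # hypothetical inverse Fisher
                [-0.010, 0.025]])       # information matrix

def reliability(theta, t=2.0):
    """Hypothetical stand-in reliability function R(t) = exp(-(lam*t)^alpha)."""
    alpha, lam = theta
    return np.exp(-(lam * t) ** alpha)

# Numerical gradient of g at theta_hat (central differences)
eps = 1e-6
grad = np.array([
    (reliability(theta_hat + eps * e) - reliability(theta_hat - eps * e)) / (2 * eps)
    for e in np.eye(2)
])

g = reliability(theta_hat)
se = np.sqrt(grad @ cov @ grad)         # delta-method standard error
z = stats.norm.ppf(0.975)
print(f"R(2.0) = {g:.4f}, 95% CI = ({g - z*se:.4f}, {g + z*se:.4f})")
```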

20 pages, 774 KB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Viewed by 399
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) (e.g., p/n > 0.1) and there are extreme outliers in the response variables or covariates. Traditional penalized regression techniques, however, are notably vulnerable to outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised robustness. To address this limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and the Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, the approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the ad hoc selection of optimal penalty parameters that is a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of the model's random errors. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to outlier contamination.

24 pages, 2253 KB  
Article
Modeling Spatial Data with Heteroscedasticity Using PLVCSAR Model: A Bayesian Quantile Regression Approach
by Rongshang Chen and Zhiyong Chen
Entropy 2025, 27(7), 715; https://doi.org/10.3390/e27070715 - 1 Jul 2025
Viewed by 371
Abstract
Spatial data not only enable smart cities to visualize, analyze, and interpret location- and space-related information, but also help departments make more informed decisions. We apply Bayesian quantile regression (BQR) to the partially linear varying coefficient spatial autoregressive (PLVCSAR) model to improve prediction performance for spatial data, capturing the linear and nonlinear effects of covariates on the response at different quantiles. Approximating the nonparametric functions with free-knot splines, we develop a Bayesian sampling approach implemented via Markov chain Monte Carlo (MCMC) and design an efficient Metropolis–Hastings-within-Gibbs sampling algorithm to explore the joint posterior distributions. Computational efficiency is achieved through a modified reversible-jump MCMC algorithm that incorporates adaptive movement steps to accelerate chain convergence. Simulation results demonstrate that our estimator is robust to alternative spatial weight matrices and outperforms both quantile regression (QR) and instrumental variable quantile regression (IVQR) in finite samples at different quantiles. The effectiveness of the proposed model and estimation method is demonstrated using real data on Boston median house prices.
(This article belongs to the Special Issue Bayesian Hierarchical Models with Applications)

12 pages, 5618 KB  
Article
An Algorithm for the Conditional Distribution of Independent Binomial Random Variables Given the Sum
by Kelly Ayres and Steven E. Rigdon
Mathematics 2025, 13(13), 2155; https://doi.org/10.3390/math13132155 - 30 Jun 2025
Viewed by 558
Abstract
We investigate Metropolis–Hastings (MH) algorithms for approximating the distribution of independent binomial random variables conditioned on their sum. Let $X_i \sim \mathrm{BIN}(n_i, p_i)$; we want the distribution of $[X_1, \ldots, X_k]$ conditioned on $X_1 + \cdots + X_k = n$. We propose both a random walk MH algorithm and an independence sampling MH algorithm for simulating from this conditional distribution. The acceptance probability in the MH algorithm always involves the probability mass function of the proposal distribution. For the random walk MH algorithm, we take this distribution to be uniform across all possible proposals. There is an inherent asymmetry: the number of moves from one state to another is not, in general, equal to the number of moves back, which requires a careful counting of the possible moves out of each state. The independence sampler proposes a move based on the Poisson approximation to the binomial. While random walk MH algorithms in general tend to outperform independence samplers, we find that in this case the independence sampler is more efficient.
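The random walk variant is described precisely enough above to sketch: move one unit between two coordinates (keeping the sum fixed) and correct the Hastings ratio by the count of legal moves out of each state. A minimal Python sketch with made-up $n_i$ and $p_i$ (the paper's implementation details may differ):

```python
# Minimal sketch of the random-walk MH idea described above. The proposal is
# uniform over all legal one-unit transfers between coordinates, so the
# acceptance ratio carries the move-count correction |M(x)|/|M(y)|.
# The n_i and p_i below are made up.
import numpy as np
from math import comb

rng = np.random.default_rng(2)
n_vec = np.array([10, 15, 20])
p_vec = np.array([0.2, 0.5, 0.7])
total = 25  # condition: X1 + X2 + X3 = 25

def log_pmf(x):
    """Unnormalized log-probability of x under independent binomials."""
    return sum(np.log(comb(int(n), int(xi))) + xi * np.log(p) + (n - xi) * np.log1p(-p)
               for n, xi, p in zip(n_vec, x, p_vec))

def legal_moves(x):
    """All ordered pairs (i, j): take one unit from i, give it to j."""
    return [(i, j) for i in range(len(x)) if x[i] > 0
                   for j in range(len(x)) if j != i and x[j] < n_vec[j]]

x = np.array([5, 10, 10])          # any starting state with the right sum
samples = []
for _ in range(20000):
    moves = legal_moves(x)
    i, j = moves[rng.integers(len(moves))]
    y = x.copy()
    y[i] -= 1
    y[j] += 1
    # q(y|x) = 1/|M(x)|, so the Hastings ratio needs log|M(x)| - log|M(y)|
    log_alpha = (log_pmf(y) - log_pmf(x)
                 + np.log(len(moves)) - np.log(len(legal_moves(y))))
    if np.log(rng.uniform()) < log_alpha:
        x = y
    samples.append(x.copy())
print(np.mean(samples, axis=0))    # conditional means of X1, X2, X3
```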

16 pages, 616 KB  
Article
Bayesian Quantile Regression for Partial Functional Linear Spatial Autoregressive Model
by Dengke Xu, Shiqi Ke, Jun Dong and Ruiqin Tian
Axioms 2025, 14(6), 467; https://doi.org/10.3390/axioms14060467 - 16 Jun 2025
Viewed by 372
Abstract
When performing Bayesian modeling on functional data, the model error is often assumed to be normal, so the results may be sensitive to outliers and/or heavy-tailed data. Quantile regression is a good choice for addressing such problems. This paper therefore introduces quantile regression into the partial functional linear spatial autoregressive model (PFLSAM), basing the errors on the asymmetric Laplace distribution. Functional principal component analysis and a hybrid MCMC algorithm combining Gibbs sampling with the Metropolis–Hastings algorithm are then developed to generate draws from the full posterior distributions, yielding Bayesian estimates of the unknown parameters and functional coefficients in the model. Finally, simulation studies show that the proposed Bayesian estimation method is feasible and effective.

14 pages, 698 KB  
Article
Inferring the Timing of Antiretroviral Therapy by Zero-Inflated Random Change Point Models Using Longitudinal Data Subject to Left-Censoring
by Hongbin Zhang, McKaylee Robertson, Sarah L. Braunstein, David B. Hanna, Uriel R. Felsen, Levi Waldron and Denis Nash
Algorithms 2025, 18(6), 346; https://doi.org/10.3390/a18060346 - 5 Jun 2025
Viewed by 704
Abstract
We propose a new random change point model that utilizes routinely recorded individual-level HIV viral load data to estimate the timing of antiretroviral therapy (ART) initiation in people living with HIV. The change point is assumed to follow a zero-inflated exponential distribution, the longitudinal data are subject to left-censoring, and the underlying data-generating mechanism is a nonlinear mixed-effects model. We extend the Stochastic EM (StEM) algorithm by combining a Gibbs sampler with Metropolis–Hastings sampling. We apply the method to real HIV data to infer the timing of ART initiation since diagnosis, and we conduct simulation studies to assess the performance of the proposed method.

10 pages, 234 KB  
Article
Convergence of Limiting Cases of Continuous-Time, Discrete-Space Jump Processes to Diffusion Processes for Bayesian Inference
by Aaron Lanterman
Mathematics 2025, 13(7), 1084; https://doi.org/10.3390/math13071084 - 26 Mar 2025
Viewed by 1097
Abstract
Jump-diffusion algorithms are applied to sampling from Bayesian posterior distributions. We consider a class of random sampling algorithms based on continuous-time jump processes. The semigroup theory of random processes lets us show that limiting cases of certain jump processes acting on discretized spaces converge to diffusion processes as the discretization is refined. One of these processes leads to the familiar Langevin diffusion equation; another leads to an entirely new diffusion equation.
(This article belongs to the Section D1: Probability and Statistics)
16 pages, 808 KB  
Article
Modern Bayesian Sampling Methods for Cosmological Inference: A Comparative Study
by Denitsa Staicova
Universe 2025, 11(2), 68; https://doi.org/10.3390/universe11020068 - 17 Feb 2025
Cited by 1 | Viewed by 567
Abstract
We present a comprehensive comparison of Markov chain Monte Carlo (MCMC) sampling methods, evaluating their performance on both standard test problems and cosmological parameter estimation. Our analysis includes traditional Metropolis–Hastings MCMC, Hamiltonian Monte Carlo (HMC), slice sampling, and nested sampling as implemented in dynesty and PolyChord. We evaluate the samplers on multiple metrics, including runtime, memory usage, effective sample size, and parameter accuracy, testing their scaling with dimension and their response to different probability distributions. While all samplers perform well with simple Gaussian distributions, we find that HMC and nested sampling show advantages for the more complex distributions typical of cosmological problems. Traditional MCMC and slice sampling become less efficient in higher dimensions, while nested methods maintain accuracy at higher computational cost. In cosmological applications using BAO data, we observe similar patterns, with particular challenges arising from parameter degeneracies and poorly constrained parameters.
(This article belongs to the Section Cosmology)
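Of the metrics listed, effective sample size is the least self-explanatory. A common autocorrelation-based estimator (not necessarily this paper's exact implementation) can be sketched as follows, checked against an AR(1) chain whose theoretical ESS is known:

```python
# Sketch of one metric named above: effective sample size (ESS) from the
# chain's autocorrelation, truncated at the first non-positive lag. This is
# a common estimator, not necessarily the paper's exact choice.
import numpy as np

def effective_sample_size(chain):
    x = np.asarray(chain, dtype=float)
    n = len(x)
    # autocovariance via FFT (zero-padded to avoid circular wrap-around)
    f = np.fft.rfft(x - x.mean(), n=2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]
    tau = 1.0
    for t in range(1, n):
        if rho[t] <= 0:            # truncate at first non-positive lag
            break
        tau += 2.0 * rho[t]
    return n / tau                 # ESS = n / integrated autocorrelation time

# Check on an AR(1) chain: theoretical ESS = n * (1 - phi) / (1 + phi)
rng = np.random.default_rng(3)
n, phi = 20_000, 0.9
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(effective_sample_size(x))    # expect roughly 20000/19, i.e. ~1050
```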

21 pages, 954 KB  
Article
Advanced Monte Carlo for Acquisition Sampling in Bayesian Optimization
by Javier Garcia-Barcos and Ruben Martinez-Cantin
Entropy 2025, 27(1), 58; https://doi.org/10.3390/e27010058 - 10 Jan 2025
Viewed by 1535
Abstract
Optimizing complex systems usually involves costly and time-consuming experiments, so selecting which experiments to perform is fundamental. Bayesian optimization (BO) has proved to be a suitable optimization method in these situations thanks to its sample efficiency and principled way of learning from previous data, but it typically requires that experiments be performed sequentially. Fully distributed BO addresses the need for efficient parallel and asynchronous active search, especially where traditional centralized BO faces limitations, such as privacy in federated learning and resource utilization in high-performance computing settings. Boltzmann sampling is an embarrassingly parallel method that enables fully distributed BO using Monte Carlo sampling. However, it requires sampling from a continuous acquisition function, which can be challenging even for advanced Monte Carlo methods due to its highly multimodal nature, constrained search space, and possibly numerically unstable values. We introduce a simplified version of Boltzmann sampling, and we analyze multiple Markov chain Monte Carlo (MCMC) methods with a numerically improved log EI implementation for acquisition sampling. Our experiments suggest that introducing gradient information during MCMC sampling, via methods such as MALA or Cyclical SGLD, improves acquisition sampling efficiency. Interestingly, a mixture of proposals for the Metropolis–Hastings approach proves effective despite its simplicity.
(This article belongs to the Special Issue Advances in Bayesian Optimization and Deep Reinforcement Learning)
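The mixture-of-proposals idea is simple to sketch: because a state-independent mixture of symmetric random-walk kernels is itself symmetric, the plain Metropolis acceptance rule still applies. Below, the constrained multimodal target is a made-up stand-in for a Boltzmann-tempered log acquisition function, not the paper's log EI:

```python
# Sketch of a mixture-of-proposals Metropolis step: mix a narrow local kernel
# with a wide exploratory one. Both components are zero-mean symmetric and the
# mixture weight does not depend on the state, so q(y|x) = q(x|y) and the plain
# Metropolis acceptance applies. The target below is a toy stand-in.
import numpy as np

rng = np.random.default_rng(4)

def log_acq(x):
    """Toy multimodal log-density (acquisition / temperature), on [0, 10]."""
    if not 0.0 <= x <= 10.0:
        return -np.inf                      # constrained search space
    return np.sin(3.0 * x) + 2.0 * np.exp(-0.5 * (x - 7.0) ** 2)

def mixture_mh(n_iter=50000, w_local=0.8, s_local=0.1, s_wide=2.0):
    x, out = 5.0, []
    for _ in range(n_iter):
        # choose a component, then take a symmetric Gaussian step
        s = s_local if rng.uniform() < w_local else s_wide
        prop = x + s * rng.normal()
        if np.log(rng.uniform()) < log_acq(prop) - log_acq(x):
            x = prop
        out.append(x)
    return np.array(out)

draws = mixture_mh()
print(draws.mean(), np.quantile(draws, [0.05, 0.95]))
```

The wide component lets the chain hop between acquisition modes that the narrow component alone would rarely cross, which is the practical appeal noted above.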

21 pages, 445 KB  
Article
Analysis of Block Adaptive Type-II Progressive Hybrid Censoring with Weibull Distribution
by Kundan Singh, Yogesh Mani Tripathi, Liang Wang and Shuo-Jye Wu
Mathematics 2024, 12(24), 4026; https://doi.org/10.3390/math12244026 - 22 Dec 2024
Viewed by 1006
Abstract
The estimation of unknown model parameters and reliability characteristics is considered under a block adaptive progressive hybrid censoring scheme, where data are observed from a Weibull model. This censoring scheme enhances experimental efficiency by conducting experiments across different testing facilities. Point and interval estimates for parameters and reliability assessments are derived using both classical and Bayesian approaches, and the existence and uniqueness of the maximum likelihood estimates are established. Reliability performance and differences across testing facilities are then analyzed. In addition, a Metropolis–Hastings sampling algorithm is developed to approximate complex posterior computations, and approximate confidence intervals and highest posterior density credible intervals are obtained for the parametric functions. The performance of all estimators is evaluated through an extensive simulation study, and a cancer dataset is analyzed to illustrate the findings under the block adaptive censoring scheme.
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)

31 pages, 10049 KB  
Article
A New Hyperparameter Tuning Framework for Regression Tasks in Deep Neural Network: Combined-Sampling Algorithm to Search the Optimized Hyperparameters
by Nguyen Huu Tiep, Hae-Yong Jeong, Kyung-Doo Kim, Nguyen Xuan Mung, Nhu-Ngoc Dao, Hoai-Nam Tran, Van-Khanh Hoang, Nguyen Ngoc Anh and Mai The Vu
Mathematics 2024, 12(24), 3892; https://doi.org/10.3390/math12243892 - 10 Dec 2024
Cited by 4 | Viewed by 4542
Abstract
This paper introduces a novel hyperparameter optimization framework for regression tasks, the Combined-Sampling Algorithm to Search the Optimized Hyperparameters (CASOH). Our approach tunes deep learning models with two hidden layers and multiple types of hyperparameters, enhancing the model’s capacity to handle complex optimization problems. The primary goal is to improve hyperparameter tuning performance over conventional methods such as Bayesian optimization and random search; CASOH is also evaluated alongside the state-of-the-art hyperparameter reinforcement learning (Hyp-RL) framework for a comprehensive assessment. The CASOH framework integrates the Metropolis–Hastings algorithm with uniform random sampling, increasing the likelihood of identifying promising hyperparameter configurations. Specifically, an acceptance probability in the sampling algorithm keeps each new sample strongly correlated with the current sample through its objective function value. The effectiveness of the proposed method was examined on regression datasets such as Boston Housing, Critical Heat Flux (CHF), Concrete Compressive Strength, Combined Cycle Power Plant, and Gas Turbine CO and NOx Emission, as well as an ‘in-house’ dataset of lattice-physics parameters generated from a Monte Carlo code for nuclear fuel assembly simulation. A primary goal of this study is to construct an optimized deep learning model capable of accurately predicting lattice-physics parameters for future applications of machine learning in nuclear reactor analysis. Our results indicate that the framework achieves competitive accuracy compared to conventional random search and Bayesian optimization. The most significant enhancement was observed on the lattice-physics dataset, with a 56.6% improvement in prediction accuracy relative to the nominal prediction, compared to improvements of 53.2% by Hyp-RL, 44.9% by Bayesian optimization, and 38.8% by random search. While the results are promising, further empirical validation across a broader range of datasets would help assess the framework’s suitability for high-dimensional, highly non-linear, and multi-objective optimization tasks.
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
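CASOH itself is not reproduced here, but the mechanism described above (uniform random proposals filtered by an acceptance probability tied to the objective) amounts to an independence-type Metropolis sampler over configurations. A generic sketch with a hypothetical search space and score function, neither of which comes from the article:

```python
# Generic sketch (NOT the authors' CASOH): draw candidate hyperparameter
# configurations uniformly at random and accept via a Metropolis-style
# probability, so successive samples concentrate in well-scoring regions.
# Because the proposal is uniform, its density cancels in the ratio.
# score() is a hypothetical stand-in for validation performance (higher = better).
import math
import random

random.seed(0)
SPACE = {                                   # hypothetical search space
    "lr": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    "units1": [32, 64, 128, 256],
    "units2": [16, 32, 64, 128],
    "dropout": [0.0, 0.1, 0.2, 0.3],
}

def sample_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def score(cfg):
    # hypothetical smooth preference for lr ~ 1e-3 and mid-size first layer
    return (-abs(math.log10(cfg["lr"]) + 3.0)
            - abs(math.log2(cfg["units1"]) - 7.0) / 4.0
            - cfg["dropout"])

def metropolis_search(n_iter=200, temperature=0.5):
    cur = sample_config()
    cur_s = score(cur)
    best, best_s = cur, cur_s
    for _ in range(n_iter):
        cand = sample_config()              # uniform proposal
        cand_s = score(cand)
        if math.log(random.random()) < (cand_s - cur_s) / temperature:
            cur, cur_s = cand, cand_s       # Metropolis acceptance
        if cur_s > best_s:
            best, best_s = cur, cur_s
    return best, best_s

print(metropolis_search())
```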

23 pages, 14253 KB  
Article
Optimal Estimation of Reliability Parameters for Modified Frechet-Exponential Distribution Using Progressive Type-II Censored Samples with Mechanical and Medical Data
by Dina A. Ramadan, Ahmed T. Farhat, M. E. Bakr, Oluwafemi Samson Balogun and Mustafa M. Hasaballah
Symmetry 2024, 16(11), 1476; https://doi.org/10.3390/sym16111476 - 6 Nov 2024
Cited by 1 | Viewed by 1368
Abstract
The aim of this research is to estimate the parameters of the modified Frechet-exponential (MFE) distribution using different methods applied to progressive type-II censored samples. The maximum likelihood technique and the Bayesian approach are used to estimate the parameters and to calculate the reliability and failure functions at time t. Approximate confidence intervals (ACIs) and credible intervals (CRIs) are derived for these parameters, and two parametric bootstrap techniques are provided to compute bootstrap confidence intervals. Both symmetric loss functions, such as the squared error loss (SEL), and asymmetric loss functions, such as the linear-exponential (LINEX) loss, are used to obtain the Bayesian estimates, with the Markov chain Monte Carlo (MCMC) technique applied via the Metropolis–Hastings sampler. Two real datasets are utilized to examine the various progressive schemes and estimation methods considered in this paper, and a simulation study is performed to compare the schemes and estimation techniques.
(This article belongs to the Section Mathematics)
