Stats, Volume 4, Issue 2 (June 2021) – 16 articles

Cover Story: In this paper, we propose a new clustering method inspired by mode-clustering that not only finds clusters but also assigns an attribute label to each cluster. Clusters obtained from our method show the connectivity of the underlying distribution. Beyond the regions around the modes, this connectivity refers to regions with relatively high density compared to other regions. We improve the usual mode-clustering method by (1) adding additional clusters that further partition the entire sample space, and (2) assigning an attribute label to each cluster. We also design a local two-sample test based on the clustering result that has more power than a conventional method. We apply our method to the Astronomy and GvHD data for illustration. Finally, we derive both statistical and computational guarantees of the proposed method.
21 pages, 311 KiB  
Article
Robust Causal Estimation from Observational Studies Using Penalized Spline of Propensity Score for Treatment Comparison
by Tingting Zhou, Michael R. Elliott and Roderick J. A. Little
Stats 2021, 4(2), 529-549; https://doi.org/10.3390/stats4020032 - 10 Jun 2021
Cited by 3 | Viewed by 2435
Abstract
Without randomization of treatments, valid inference of treatment effects from observational studies requires controlling for all confounders, because the treated subjects generally differ systematically from the control subjects. Confounding control is commonly achieved using the propensity score, defined as the conditional probability of assignment to a treatment given the observed covariates. The propensity score collapses all the observed covariates into a single measure and serves as a balancing score, such that treated and control subjects with similar propensity scores can be directly compared. Common propensity score-based methods include regression adjustment and inverse probability of treatment weighting using the propensity score. We recently proposed a robust multiple imputation-based method, penalized spline of propensity for treatment comparisons (PENCOMP), that includes a penalized spline of the assignment propensity as a predictor. Under the Rubin causal model assumptions that there is no interference across units, that each unit has a non-zero probability of being assigned to either treatment group, and that there are no unmeasured confounders, PENCOMP has a double robustness property for estimating treatment effects. In this study, we examine the impact on PENCOMP of using variable selection techniques that restrict the predictors in the propensity score model to true confounders of the treatment–outcome relationship. We also propose a variant of PENCOMP and compare alternative approaches to standard error estimation for PENCOMP. Compared to the weighted estimators, PENCOMP is less affected by the inclusion of non-confounding variables in the propensity score model. We illustrate the use of PENCOMP and competing methods in estimating the impact of antiretroviral treatments on CD4 counts in HIV-positive patients.
(This article belongs to the Special Issue Robust Statistics in Action)
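As a rough, hedged illustration of the propensity-score machinery the abstract refers to (not PENCOMP itself, whose penalized-spline imputation step is considerably more involved), the following Python sketch estimates propensity scores with logistic regression and computes an inverse-probability-of-treatment-weighted (IPTW) estimate of the average treatment effect. The simulated data and all variable names are illustrative assumptions, not material from the paper.

```python
# Hedged sketch: propensity-score estimation and IPTW, not the PENCOMP method itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                      # observed covariates (simulated)
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.25 * X[:, 1])))
T = rng.binomial(1, p_treat)                     # treatment assignment
Y = 1.0 * T + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)  # true effect = 1.0

# Propensity score: P(T = 1 | X), estimated by logistic regression.
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# IPTW estimate of the average treatment effect.
w1, w0 = T / ps, (1 - T) / (1 - ps)
ate_iptw = np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)
print(f"IPTW ATE estimate: {ate_iptw:.3f}")
```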

20 pages, 9243 KiB  
Article
A Bayesian Approach to Linking a Survey and a Census via Small Areas
by Balgobin Nandram
Stats 2021, 4(2), 509-528; https://doi.org/10.3390/stats4020031 - 9 Jun 2021
Cited by 1 | Viewed by 2103
Abstract
We predict the finite population proportion of a small area when individual-level data are available from a survey and more extensive household-level (not individual-level) data (covariates but not responses) are available from a census. The census and the survey consist of the same strata and primary sampling units (PSUs, or wards) that are matched, but the households are not matched. There are some common covariates at the household level in the survey and the census, and these covariates are used to link the households within wards. There are also covariates at the ward level, and the wards are the same in the survey and the census. Using a two-stage procedure, we study the multinomial counts in the sampled households within the wards and use a projection method to make inference about the non-sampled wards. This is accommodated by a multinomial-Dirichlet–Dirichlet model, a three-stage hierarchical Bayesian model for multinomial counts, as it is necessary to account for heterogeneity among the households. The key theoretical contribution of this paper is to develop a computational algorithm to sample the joint posterior density of the multinomial-Dirichlet–Dirichlet model. Specifically, we obtain samples from the distributions of the proportions for each multinomial cell. The second key contribution is to use two projection procedures (parametric, based on the nested error regression model, and non-parametric, based on iteratively re-weighted least squares) on these proportions to link the survey to the census, thereby providing a copy of the census counts. We compare the multinomial-Dirichlet–Dirichlet (heterogeneous) model and the multinomial-Dirichlet (homogeneous) model without household effects via these two projection methods. An example based on the second Nepal Living Standards Survey is presented.
(This article belongs to the Special Issue Small Area Estimation: Theories, Methods and Applications)
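One plausible reading of the "multinomial-Dirichlet–Dirichlet" hierarchy described above is sketched below, purely as a hedged aid to intuition; the paper's exact parameterization, indexing, and use of ward-level covariates may well differ.

```latex
% Hedged sketch of a three-stage multinomial-Dirichlet-Dirichlet hierarchy;
% ward i, household j, K multinomial cells (illustrative notation only).
\begin{aligned}
\mathbf{y}_{ij} \mid \mathbf{p}_{ij} &\sim \mathrm{Multinomial}\!\left(n_{ij}, \mathbf{p}_{ij}\right), \\
\mathbf{p}_{ij} \mid \boldsymbol{\theta}_i, \tau_1 &\sim \mathrm{Dirichlet}\!\left(\tau_1 \boldsymbol{\theta}_i\right)
  \quad \text{(household heterogeneity within ward } i\text{)}, \\
\boldsymbol{\theta}_i \mid \boldsymbol{\mu}, \tau_2 &\sim \mathrm{Dirichlet}\!\left(\tau_2 \boldsymbol{\mu}\right),
\end{aligned}
```

with priors on the pooled proportions and concentration parameters completing the specification.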

23 pages, 4313 KiB  
Article
Refined Mode-Clustering via the Gradient of Slope
by Kunhui Zhang and Yen-Chi Chen
Stats 2021, 4(2), 486-508; https://doi.org/10.3390/stats4020030 - 1 Jun 2021
Viewed by 2398
Abstract
In this paper, we propose a new clustering method inspired by mode-clustering that not only finds clusters but also assigns an attribute label to each cluster. Clusters obtained from our method show the connectivity of the underlying distribution. We also design a local two-sample test based on the clustering result that has more power than a conventional method. We apply our method to the Astronomy and GvHD data and show that our method finds meaningful clusters. We also derive the statistical and computational theory of our method.
(This article belongs to the Special Issue Recent Developments in Clustering and Classification Methods)
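For readers unfamiliar with the mode-clustering baseline that the refined method builds on, the following hedged sketch runs a standard mean-shift mode clustering using scikit-learn's MeanShift. It illustrates only the conventional starting point on toy data, not the paper's refinement, attribute labels, or local two-sample test.

```python
# Hedged sketch: conventional mean-shift mode clustering, the baseline the paper refines.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(1)
# Two-mode toy data (illustrative only).
X = np.vstack([rng.normal(0, 0.5, size=(150, 2)),
               rng.normal(3, 0.5, size=(150, 2))])

bw = estimate_bandwidth(X, quantile=0.2)        # kernel bandwidth for the density estimate
ms = MeanShift(bandwidth=bw).fit(X)             # each point climbs to a density mode
print("modes found:", len(ms.cluster_centers_))
print("cluster sizes:", np.bincount(ms.labels_))
```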

14 pages, 2134 KiB  
Article
Smart Visualization of Mixed Data
by Aurea Grané, Giancarlo Manzi and Silvia Salini
Stats 2021, 4(2), 472-485; https://doi.org/10.3390/stats4020029 - 1 Jun 2021
Cited by 4 | Viewed by 2997
Abstract
In this work, we propose a new protocol that integrates robust classification and visualization techniques to analyze mixed data. This protocol is based on the combination of the Forward Search Distance-Based (FS-DB) algorithm (Grané, Salini, and Verdolini 2020) and robust clustering. The resulting groups are visualized via MDS maps and characterized through an analysis of several graphical outputs. The methodology is illustrated on a real dataset of European COVID-19 numerical health data, as well as the policy and restriction measures of the 2020–2021 COVID-19 pandemic across the EU Member States. The results show similarities among countries in terms of incidence and the management of the emergency across several waves of the disease. With the proposed methodology, new smart visualization tools for analyzing mixed data are provided.
(This article belongs to the Special Issue Robust Statistics in Action)
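As a hedged, minimal sketch of the visualization backbone mentioned above (a dissimilarity for mixed data fed into an MDS map), the Python snippet below computes a Gower-style distance over simulated numeric and categorical variables and embeds it with scikit-learn's MDS. It does not reproduce the FS-DB algorithm or the robust clustering step; the data and variable names are illustrative assumptions.

```python
# Hedged sketch: a Gower-style dissimilarity for mixed data visualized with MDS.
# This is only the visualization backbone, not the FS-DB algorithm or robust clustering.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n = 30
num = rng.normal(size=(n, 2))                          # two numeric variables
cat = rng.integers(0, 3, size=(n, 1))                  # one categorical variable (3 levels)

# Range-normalized absolute differences for numeric parts, simple mismatch for categorical.
rng_num = num.max(axis=0) - num.min(axis=0)
d_num = np.abs(num[:, None, :] - num[None, :, :]) / rng_num
d_cat = (cat[:, None, :] != cat[None, :, :]).astype(float)
D = np.concatenate([d_num, d_cat], axis=2).mean(axis=2)  # Gower-style average

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords[:3])                                       # 2-D map coordinates for plotting
```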

18 pages, 1000 KiB  
Article
Robust Fitting of a Wrapped Normal Model to Multivariate Circular Data and Outlier Detection
by Luca Greco, Giovanni Saraceno and Claudio Agostinelli
Stats 2021, 4(2), 454-471; https://doi.org/10.3390/stats4020028 - 1 Jun 2021
Cited by 3 | Viewed by 3000
Abstract
In this work, we deal with the robust fitting of a wrapped normal model to multivariate circular data. Robust estimation is intended to mitigate the adverse effects of outliers on inference. Furthermore, the use of a proper robust method leads to the definition of effective outlier detection rules. Robust fitting is achieved by a suitable modification of a classification–expectation–maximization algorithm that has been developed to perform maximum likelihood estimation of the parameters of a multivariate wrapped normal distribution. The modification concerns the use of complete-data estimating equations that involve a set of data-dependent weights aimed at downweighting the effect of possible outliers. Several robust techniques are considered to define the weights. The finite sample behavior of the resulting proposed methods is investigated through numerical studies and real data examples.
(This article belongs to the Special Issue Robust Statistics in Action)
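For intuition about the data model named above, the hedged sketch below simulates multivariate wrapped normal circular data by wrapping a multivariate normal onto [0, 2π) and computes a naive (non-robust) circular mean. The robust, weighted CEM fitting and the outlier detection rules developed in the paper are not reproduced; all parameter values are illustrative.

```python
# Hedged sketch: simulating multivariate wrapped normal circular data by wrapping
# a multivariate normal onto [0, 2*pi). The paper's robust CEM fitting is not shown.
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.5, 2.0])                      # circular means (radians)
Sigma = np.array([[0.3, 0.1],
                  [0.1, 0.2]])                 # covariance before wrapping

Y = rng.multivariate_normal(mu, Sigma, size=500)
Theta = np.mod(Y, 2 * np.pi)                   # wrapped angles in [0, 2*pi)

# A naive (non-robust) circular mean per coordinate, for reference.
circ_mean = np.mod(np.angle(np.exp(1j * Theta).mean(axis=0)), 2 * np.pi)
print("circular means:", np.round(circ_mean, 3))
```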

35 pages, 8817 KiB  
Article
On the Mistaken Use of the Chi-Square Test in Benford’s Law
by Alex Ely Kossovsky
Stats 2021, 4(2), 419-453; https://doi.org/10.3390/stats4020027 - 28 May 2021
Cited by 21 | Viewed by 7426
Abstract
Benford’s Law predicts that the first significant digit on the leftmost side of numbers in real-life data is distributed among the possible digits 1 to 9 approximately as LOG(1 + 1/digit), so that low digits occur much more frequently than high digits in the first place. Researchers, data analysts, and statisticians typically rush to apply the chi-square test in order to verify compliance with, or deviation from, this statistical law. In almost all cases of real-life data this approach is mistaken and without a mathematical-statistical basis, yet applying the chi-square test to whatever data set the researcher is considering, regardless of its true applicability, has become a dogma, or rather an impulsive ritual, in the field of Benford’s Law. The mistaken use of the chi-square test has led to much confusion and many errors, and has done much to undermine trust and confidence in the whole discipline of Benford’s Law. This article is an attempt to correct course and bring rationality and order to a field that has demonstrated harmony and consistency in all of its results, manifestations, and explanations. The first research question of this article demonstrates that real-life data sets typically do not arise from random and independent selections of data points from some larger universe of parental data, as the chi-square approach supposes; this conclusion is reached by examining how several real-life data sets are formed and obtained. The second research question demonstrates that the chi-square approach is actually about the reasonableness of the random selection process and the Benford status of that parental universe of data, not solely about the Benford status of the data set under consideration, since the focus of the chi-square test is exclusively on whether the entire process of data selection was probable or too rare. In addition, the chi-square statistic is compared with the Sum of Squared Deviations (SSD) measure of distance from Benford, pitting one measure against the other and concluding with a strong preference for the SSD measure.
(This article belongs to the Special Issue Benford's Law(s) and Applications)
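To make the two quantities contrasted in the abstract concrete, the hedged sketch below computes the LOG(1 + 1/digit) first-digit frequencies, a chi-square statistic against them, and an SSD-style distance. SSD is taken here as the sum of squared deviations of observed from Benford first-digit proportions expressed in percentage points, which is how it is commonly defined in the Benford literature; the article's exact convention may differ, and the simulated counts are illustrative.

```python
# Hedged sketch: Benford first-digit frequencies, the chi-square statistic, and an
# SSD-style distance (sum of squared deviations in percentage points; conventions vary).
import numpy as np
from scipy.stats import chisquare

digits = np.arange(1, 10)
benford = np.log10(1 + 1 / digits)                   # LOG(1 + 1/digit)

rng = np.random.default_rng(4)
n = 5000
observed = rng.multinomial(n, benford)               # simulated Benford-compliant counts

chi2_stat, p_value = chisquare(observed, f_exp=n * benford)
ssd = np.sum((observed / n * 100 - benford * 100) ** 2)

print("Benford %:", np.round(benford * 100, 2))
print(f"chi-square = {chi2_stat:.2f}, p = {p_value:.3f}, SSD = {ssd:.2f}")
```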

19 pages, 1357 KiB  
Article
Analysis of ‘Pre-Fit’ Datasets of gLAB by Robust Statistical Techniques
by Maria Teresa Alonso, Carlo Ferigato, Deimos Ibanez Segura, Domenico Perrotta, Adria Rovira-Garcia and Emmanuele Sordini
Stats 2021, 4(2), 400-418; https://doi.org/10.3390/stats4020026 - 24 May 2021
Viewed by 2115
Abstract
The GNSS LABoratory tool (gLAB) is an interactive educational suite of applications for processing data from the Global Navigation Satellite System (GNSS). gLAB is composed of several data analysis modules that compute the solution of the problem of determining a position by means of GNSS measurements. The present work aimed to improve the pre-fit outlier detection function of gLAB, since outliers, if undetected, degrade the obtained position coordinates. The methodology exploits robust statistical tools for regression provided by the Flexible Statistics and Data Analysis (FSDA) toolbox, an extension of MATLAB for the analysis of complex datasets. Our results show how the robust FSDA analysis technique improves the capability of detecting actual outliers in GNSS measurements with respect to the present gLAB pre-fit outlier detection function. This study concludes that robust statistical analysis techniques, when applied to the pre-fit layer of gLAB, improve the overall reliability and accuracy of the positioning solution.
(This article belongs to the Special Issue Robust Statistics in Action)
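As a hedged, minimal stand-in for the idea of screening a "pre-fit" for outliers, the sketch below fits an ordinary least-squares regression and flags observations with large MAD-standardized residuals. The FSDA forward-search machinery actually used in the paper is far more elaborate; the data and the 3.0 cut-off here are illustrative assumptions.

```python
# Hedged sketch: a minimal robust residual screen for a linear "pre-fit".
# The FSDA forward-search tools used in the paper are considerably more sophisticated.
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=n)
y[:5] += 3.0                                    # inject a few gross errors ("outliers")

beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # ordinary least-squares pre-fit
resid = y - X @ beta
mad = np.median(np.abs(resid - np.median(resid)))
scaled = np.abs(resid - np.median(resid)) / (1.4826 * mad)   # robust standardization
flagged = np.where(scaled > 3.0)[0]             # simple 3-sigma-type rule
print("flagged observations:", flagged)
```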

15 pages, 560 KiB  
Article
Fiducial Inference on the Right Censored Birnbaum–Saunders Data via Gibbs Sampler
by Kalanka P. Jayalath
Stats 2021, 4(2), 385-399; https://doi.org/10.3390/stats4020025 - 21 May 2021
Cited by 4 | Viewed by 1676
Abstract
In this article, we implement a flexible Gibbs sampler to make inferences for two-parameter Birnbaum–Saunders (BS) distribution in the presence of right-censored data. The Gibbs sampler is applied on the fiducial distributions of the BS parameters derived using the maximum likelihood, methods of [...] Read more.
In this article, we implement a flexible Gibbs sampler to make inferences for two-parameter Birnbaum–Saunders (BS) distribution in the presence of right-censored data. The Gibbs sampler is applied on the fiducial distributions of the BS parameters derived using the maximum likelihood, methods of moments, and their bias-reduced estimates. A Monte-Carlo study is conducted to make comparisons between these estimates for Type-II right censoring with various parameter settings, sample sizes, and censoring percentages. It is concluded that the bias-reduced estimates outperform the rest with increasing precision. Higher sample sizes improve the overall accuracy of all the estimates while the amount of censoring shows a negative effect. Further comparisons are made with existing methods using two real-world examples. Full article
(This article belongs to the Special Issue Survival Analysis: Models and Applications)
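For orientation, the two-parameter Birnbaum–Saunders distribution is available in SciPy as the "fatiguelife" distribution. The hedged sketch below simulates complete (uncensored) BS data and fits it by maximum likelihood; the paper's fiducial Gibbs sampler for right-censored data is not reproduced, and the parameter values are illustrative.

```python
# Hedged sketch: the two-parameter Birnbaum-Saunders distribution via SciPy's "fatiguelife".
# Complete-data maximum likelihood only; the paper's fiducial Gibbs sampler for
# right-censored data is not reproduced here.
import numpy as np
from scipy.stats import fatiguelife

alpha_true, beta_true = 0.5, 2.0               # shape and scale (illustrative values)
data = fatiguelife.rvs(alpha_true, loc=0, scale=beta_true, size=300, random_state=6)

# Maximum likelihood fit with location fixed at zero (standard two-parameter BS).
alpha_hat, loc_hat, beta_hat = fatiguelife.fit(data, floc=0)
print(f"alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")
```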

26 pages, 5960 KiB  
Article
Unsupervised Feature Selection for Histogram-Valued Symbolic Data Using Hierarchical Conceptual Clustering
by Manabu Ichino, Kadri Umbleja and Hiroyuki Yaguchi
Stats 2021, 4(2), 359-384; https://doi.org/10.3390/stats4020024 - 18 May 2021
Cited by 2 | Viewed by 1714
Abstract
This paper presents an unsupervised feature selection method for multi-dimensional histogram-valued data. We define a multi-role measure, called the compactness, based on the concept size of given objects and/or clusters described using a fixed number of equal-probability bin-rectangles. In each step of clustering, we agglomerate objects and/or clusters so as to minimize the compactness of the generated cluster. This means that the compactness plays the role of a similarity measure between the objects and/or clusters to be merged. Minimizing the compactness is equivalent to maximizing the dissimilarity of the generated cluster, i.e., concept, against the whole concept in each step. In this sense, the compactness plays the role of a measure of cluster quality. We also show that the average compactness of each feature with respect to objects and/or clusters over several clustering steps is useful as a feature effectiveness criterion. Features having small average compactness are mutually covariate and are able to detect a geometrically thin structure embedded in the given multi-dimensional histogram-valued data. We obtain a thorough understanding of the given data via visualization using dendrograms and scatter diagrams with respect to the selected informative features. We illustrate the effectiveness of the proposed method using an artificial data set and real histogram-valued data sets.
(This article belongs to the Special Issue Recent Developments in Clustering and Classification Methods)

11 pages, 456 KiB  
Article
Weighted Log-Rank Statistics for Accelerated Failure Time Model
by Seung-Hwan Lee
Stats 2021, 4(2), 348-358; https://doi.org/10.3390/stats4020023 - 3 May 2021
Cited by 3 | Viewed by 1930
Abstract
This paper improves the sensitivity of the Gρ family of weighted log-rank tests for the accelerated failure time model, accommodating realistic alternatives in survival analysis with censored data, such as heavy censoring and crossing hazards. The procedures are based on a weight function with the censoring proportion incorporated as a component. Extensive simulations show that the weight function enhances the performance of the Gρ family, increasing its sensitivity and flexibility. The weight function method is illustrated with an example concerning vaginal cancer.
(This article belongs to the Special Issue Survival Analysis: Models and Applications)
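For context, the Gρ (Fleming–Harrington) family referenced above weights the log-rank increments by powers of the pooled Kaplan–Meier estimate. A standard form of the statistic is sketched below; the paper's modification, which further builds the censoring proportion into the weight, is not reproduced here.

```latex
% Standard G^rho weighted log-rank statistic (Fleming-Harrington form); the paper's
% censoring-adjusted weight modifies this and is not reproduced here.
W_\rho \;=\; \sum_{i} \hat{S}(t_i^{-})^{\rho}
      \left( d_{1i} - \frac{n_{1i}\, d_i}{n_i} \right),
\qquad
\widehat{\mathrm{Var}}(W_\rho) \;=\; \sum_{i} \hat{S}(t_i^{-})^{2\rho}\,
      \frac{n_{1i}\, n_{2i}\, d_i\,(n_i - d_i)}{n_i^{2}\,(n_i - 1)},
```

where the sum runs over distinct event times, d and n denote pooled events and numbers at risk (subscripts 1 and 2 for the two groups), and Ŝ is the pooled Kaplan–Meier estimator; ρ = 0 recovers the ordinary log-rank test.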

21 pages, 2366 KiB  
Article
fsdaSAS: A Package for Robust Regression for Very Large Datasets Including the Batch Forward Search
by Francesca Torti, Aldo Corbellini and Anthony C. Atkinson
Stats 2021, 4(2), 327-347; https://doi.org/10.3390/stats4020022 - 18 Apr 2021
Cited by 5 | Viewed by 2530
Abstract
The forward search (FS) is a general method of robust data fitting that moves smoothly from very robust to maximum likelihood estimation. The regression procedures are included in the MATLAB toolbox FSDA. The work on a SAS version of the FS originates from the need, expressed by law enforcement services operating in the European Union, to analyse large datasets; these services use our SAS software for detecting data anomalies that may point to fraudulent customs returns. Specific to our SAS implementation, the fsdaSAS package, we describe the approximation used to provide fast analyses of large datasets using an FS which progresses through the inclusion of batches of observations, rather than progressing one observation at a time. We do, however, test for outliers one observation at a time. We demonstrate that our SAS implementation becomes appreciably faster than the MATLAB version as the sample size increases and is also able to analyse larger datasets. The series of fits provided by the FS leads to the adaptive data-dependent choice of maximally efficient robust estimates. This also allows the monitoring of residuals and parameter estimates for fits of differing robustness levels. fsdaSAS also applies the idea of monitoring to several robust estimators for regression over a range of values of breakdown point or nominal efficiency, leading to adaptive values for these parameters. We have also provided a variety of plots linked through brushing. Further programmed analyses include the robust transformations of the response in regression. Our package also provides the SAS community with methods of monitoring robust estimators for multivariate data, including multivariate data transformations.
(This article belongs to the Special Issue Robust Statistics in Action)
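To convey the basic forward-search idea described above (grow the fitting subset by the observations with the smallest residuals, optionally in batches), the hedged Python sketch below implements a bare-bones version for regression. It is not the fsdaSAS implementation: the initial-subset rule, formal outlier tests, monitoring, and linked plots of FSDA/fsdaSAS are all omitted, and the data are simulated.

```python
# Hedged sketch: the basic forward-search idea for regression. The fsdaSAS package
# additionally batches inclusions efficiently and adds formal outlier tests,
# monitoring of residuals/estimates, and linked plots.
import numpy as np

def forward_search(X, y, m0=10, batch=1):
    n = len(y)
    order = np.argsort(np.abs(y - np.median(y)))     # crude initial subset (illustrative)
    subset = order[:m0]
    betas = []
    while True:
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        betas.append(beta)
        if len(subset) == n:
            break
        resid = np.abs(y - X @ beta)                 # residuals on ALL observations
        subset = np.argsort(resid)[:min(len(subset) + batch, n)]
    return np.array(betas)

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.2, size=300)
betas = forward_search(X, y, m0=20, batch=10)        # batch > 1 mimics the batch FS idea
print("final coefficients:", np.round(betas[-1], 3))
```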

19 pages, 389 KiB  
Article
A Flexible Multivariate Distribution for Correlated Count Data
by Kimberly F. Sellers, Tong Li, Yixuan Wu and Narayanaswamy Balakrishnan
Stats 2021, 4(2), 308-326; https://doi.org/10.3390/stats4020021 - 15 Apr 2021
Cited by 5 | Viewed by 3906
Abstract
Multivariate count data are often modeled via a multivariate Poisson distribution, but it contains an underlying, constraining assumption of data equi-dispersion (where the variance equals the mean). Real data are oftentimes over-dispersed and, as such, are commonly modeled via various advancements of a negative binomial structure. While over-dispersion is more prevalent than under-dispersion in real data, examples containing under-dispersed data are surfacing with greater frequency. Thus, there is a demonstrated need for a flexible model that can accommodate both data types. We develop a multivariate Conway–Maxwell–Poisson (MCMP) distribution to serve as a flexible alternative for correlated count data that contain data dispersion. This structure contains the multivariate Poisson, multivariate geometric, and multivariate Bernoulli distributions as special cases, and serves as a bridge distribution across these three classical models to address other levels of over- or under-dispersion. In this work, we not only derive the distributional form and statistical properties of this model, but we further address parameter estimation, establish informative hypothesis tests to detect statistically significant data dispersion and aid in model parsimony, and illustrate the distribution’s flexibility through several simulated and real-world data examples. These examples demonstrate that the MCMP distribution performs on par with the multivariate negative binomial distribution for over-dispersed data, and proves particularly beneficial in effectively representing under-dispersed data. Thus, the MCMP distribution offers an effective, unifying framework for modeling over- or under-dispersed multivariate correlated count data that do not necessarily adhere to Poisson assumptions.
(This article belongs to the Special Issue Directions in Statistical Modelling)
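For intuition about the building block behind the MCMP distribution, the hedged sketch below evaluates the univariate Conway–Maxwell–Poisson pmf, P(X = x) ∝ λ^x / (x!)^ν, and shows how the dispersion parameter ν moves the variance relative to the mean. The multivariate construction developed in the paper is not reproduced; λ and ν values are illustrative.

```python
# Hedged sketch: the univariate Conway-Maxwell-Poisson pmf, P(X = x) ∝ lambda^x / (x!)^nu.
# nu = 1 gives the Poisson; nu > 1 gives under-dispersion; nu < 1 gives over-dispersion
# (and nu -> 0 with lambda < 1 gives a geometric distribution).
import numpy as np
from math import lgamma

def cmp_pmf(lam, nu, x_max=200):
    x = np.arange(x_max + 1)
    log_terms = x * np.log(lam) - nu * np.array([lgamma(k + 1) for k in x])
    log_terms -= log_terms.max()                      # stabilize before normalizing
    p = np.exp(log_terms)
    return p / p.sum()

for nu in (0.5, 1.0, 2.0):
    p = cmp_pmf(lam=3.0, nu=nu)
    x = np.arange(len(p))
    mean = (x * p).sum()
    var = ((x - mean) ** 2 * p).sum()
    print(f"nu = {nu}: mean = {mean:.2f}, variance = {var:.2f}")  # var/mean shifts with nu
```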

17 pages, 544 KiB  
Article
Optimal Sampling Regimes for Estimating Population Dynamics
by Rebecca E. Atanga, Edward L. Boone, Ryad A. Ghanam and Ben Stewart-Koster
Stats 2021, 4(2), 291-307; https://doi.org/10.3390/stats4020020 - 7 Apr 2021
Cited by 1 | Viewed by 1872
Abstract
Ecologists are interested in modeling the population growth of species in various ecosystems. Specifically, logistic growth arises as a common model for population growth. Studying such growth can assist environmental managers in making better decisions when collecting data. Traditionally, ecological data are recorded at a regular time frequency and are very well documented. However, sampling can be an expensive process due to the limited resources, money, and time available. Limiting sampling makes it challenging to properly track the growth of a population. Thus, this design study proposes an approach to sampling based on the dynamics associated with logistic growth. The proposed method is demonstrated via a simulation study across various theoretical scenarios to evaluate its performance in identifying optimal designs that best estimate the curves. Markov chain Monte Carlo sampling techniques are implemented to estimate the posterior distributions of the model parameters via Bayesian inference. The intention of this study is to demonstrate a method that can minimize the amount of time ecologists spend in the field, while maximizing the information provided by the data.
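As a hedged illustration of the two ingredients named above, the sketch below simulates noisy observations from the logistic growth curve N(t) = K / (1 + ((K − N0)/N0) e^(−rt)) and runs a minimal random-walk Metropolis sampler for (r, K). It is illustrative only; the study's search over optimal sampling times is not reproduced, and the priors, proposal scales, and data values are assumptions.

```python
# Hedged sketch: logistic growth N(t) = K / (1 + ((K - N0)/N0) * exp(-r t)) fitted with a
# minimal random-walk Metropolis sampler. The study's optimal-design search over sampling
# times is not reproduced here.
import numpy as np

def logistic(t, r, K, N0=10.0):
    return K / (1 + (K - N0) / N0 * np.exp(-r * t))

rng = np.random.default_rng(8)
t_obs = np.linspace(0, 20, 15)                          # regular sampling times (illustrative)
y_obs = logistic(t_obs, r=0.4, K=100.0) + rng.normal(scale=3.0, size=t_obs.size)

def log_post(theta, sigma=3.0):
    r, K = theta
    if r <= 0 or K <= 0:                                # flat priors on the positive axis
        return -np.inf
    return -0.5 * np.sum((y_obs - logistic(t_obs, r, K)) ** 2) / sigma**2

theta = np.array([0.2, 80.0])                           # starting values
samples = []
for _ in range(5000):                                   # random-walk Metropolis
    prop = theta + rng.normal(scale=[0.02, 2.0])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post = np.array(samples[1000:])                         # drop burn-in
print("posterior means (r, K):", np.round(post.mean(axis=0), 2))
```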

22 pages, 422 KiB  
Article
Inverted Weibull Regression Models and Their Applications
by Sarah R. Al-Dawsari and Khalaf S. Sultan
Stats 2021, 4(2), 269-290; https://doi.org/10.3390/stats4020019 - 1 Apr 2021
Cited by 1 | Viewed by 2153
Abstract
In this paper, we propose classical and Bayesian regression models for use in conjunction with the inverted Weibull (IW) distribution, namely, the inverted Weibull regression model (IW-Reg) and the inverted Weibull Bayesian regression model (IW-BReg). In the proposed models, we suggest the logarithm and identity link functions, while in the Bayesian approach, we use a gamma prior and two loss functions, namely the zero-one and modified general entropy (MGE) loss functions. To deal with outliers in the proposed models, we apply Huber and Tukey’s bisquare (biweight) functions. In addition, we use the iteratively reweighted least squares (IRLS) algorithm to estimate the Bayesian regression coefficients. Further, we compare IW-Reg and IW-BReg using some performance criteria, such as Akaike’s information criterion (AIC), deviance (D), and mean squared error (MSE). Finally, we apply the theoretical findings to some real datasets collected from Saudi Arabia, together with the corresponding explanatory variables. The Bayesian approach shows better performance compared to the classical approach in terms of the considered performance criteria.
(This article belongs to the Section Regression Models)
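As a hedged sketch of what an inverted Weibull regression with a log link can look like, the snippet below fits, by plain maximum likelihood, a Fréchet-type model with CDF exp(−(y/σ)^(−β)) and σ_i = exp(x_i'b). The paper's exact parameterization, gamma prior, loss functions, and robust Huber/Tukey weighting are not reproduced; the data and starting values are illustrative assumptions.

```python
# Hedged sketch: one plausible inverted Weibull (Frechet) regression with a log link on
# the scale, sigma_i = exp(x_i' b), fitted by maximum likelihood. The paper's exact
# parameterization, prior, loss functions, and robust weights are not shown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
b_true, beta_true = np.array([1.0, 0.5]), 2.5
sigma = np.exp(X @ b_true)
y = sigma * rng.weibull(beta_true, size=n) ** (-1)      # inverted Weibull draws: sigma * W^(-1)

def neg_loglik(params):
    b, beta = params[:2], np.exp(params[2])             # beta kept positive via log-scale
    sig = np.exp(X @ b)
    z = y / sig
    # log f(y) = log(beta/sigma) - (beta + 1) * log(y/sigma) - (y/sigma)^(-beta)
    return -np.sum(np.log(beta / sig) - (beta + 1) * np.log(z) - z ** (-beta))

res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("coefficients:", np.round(res.x[:2], 3), "shape beta:", np.round(np.exp(res.x[2]), 3))
```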

18 pages, 343 KiB  
Article
Measuring Bayesian Robustness Using Rényi Divergence
by Luai Al-Labadi, Forough Fazeli Asl and Ce Wang
Stats 2021, 4(2), 251-268; https://doi.org/10.3390/stats4020018 - 29 Mar 2021
Cited by 6 | Viewed by 2147
Abstract
This paper deals with measuring the Bayesian robustness of classes of contaminated priors. Two different classes of priors in the neighborhood of the elicited prior are considered. The first one is the well-known ϵ-contaminated class, while the second one is the geometric mixing class. The proposed measure of robustness is based on computing the curvature of the Rényi divergence between posterior distributions. Simulated and real data sets are used to illustrate the results.
(This article belongs to the Special Issue Robust Statistics in Action)
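For reference, the two ingredients named in the abstract can be written in their standard forms as below; this is only a hedged restatement of textbook definitions, and the paper's curvature-based robustness measure itself is not reproduced here. The geometric mixing class analogously replaces the arithmetic mixture by a density proportional to π₀^(1−ϵ) q^ϵ.

```latex
% epsilon-contaminated class around an elicited prior pi_0, and the Renyi divergence of
% order alpha used to compare posteriors (standard definitions).
\Gamma_{\epsilon} \;=\; \left\{ \pi : \pi = (1-\epsilon)\,\pi_0 + \epsilon\, q,\; q \in \mathcal{Q} \right\},
\qquad
R_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha - 1}
      \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx,
\quad \alpha > 0,\ \alpha \neq 1 .
```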
23 pages, 660 KiB  
Article
Decisions in Risk and Reliability: An Explanatory Perspective
by Antonio Pievatolo, Fabrizio Ruggeri, Refik Soyer and Simon Wilson
Stats 2021, 4(2), 228-250; https://doi.org/10.3390/stats4020017 - 26 Mar 2021
Viewed by 2256
Abstract
The paper discusses issues that surround decisions in risk and reliability, with a major emphasis on quantitative methods. We start with a brief history of quantitative methods in risk and reliability from the 17th century onwards. Then, we look at the principal concepts and methods in decision theory. Finally, we give several examples of their application to a wide variety of risk and reliability problems: software testing, preventive maintenance, portfolio selection, adversarial testing, and the defend-attack problem. These illustrate how the general framework of game and decision theory plays a relevant part in risk and reliability.