Statistical Significance and the Logic of Hypothesis Testing

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 May 2016) | Viewed by 81692

Special Issue Editors

Prof. Dr. Julio Stern
Department of Applied Mathematics, Institute of Mathematics and Statistics, University of São Paulo, São Paulo 05508-900, Brazil
Interests: Bayesian inference; foundations of statistics; significance measures; evidence; logic and epistemology of significance measures
Prof. Dr. Adriano Polpo
Federal University of São Carlos, São Carlos, São Paulo, Brazil
Interests: Bayesian inference; foundations of statistics; significance tests; reliability and survival analysis; model selection; biostatistics

Special Issue Information

Dear Colleagues,

The importance of hypothesis testing in the development of research is widely recognized by the scientific community. This Special Issue focuses on statistical significance, the logic of hypothesis testing, and their proper application to practical problems.

Specific areas of interest include (but are not limited to):

Formal applications of hypothesis testing to novel applied problems.
Philosophical accounts of hypothesis testing (including contributions arguing for or against them).
Methodological developments in hypothesis testing (taking into account their practical usability).
Logical accounts of hypothesis testing.
Logical (compositional) properties of truth values.
Statistical significance measures.
Epistemology of statistical significance.
Coherence properties of significance measures.
Surveys of the state of the art in one of the above areas.

Prof. Dr. Julio Stern
Prof. Dr. Adriano Polpo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Statistical significance measures
  • Hypothesis testing or verification
  • Logic and truth value composition
  • Epistemology of statistical significance
  • Coherence properties of significance measures

Published Papers (15 papers)

Research

Article
On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests
by Aaditya Ramdas, Nicolás García Trillos and Marco Cuturi
Entropy 2017, 19(2), 47; https://doi.org/10.3390/e19020047 - 26 Jan 2017
Cited by 197 | Viewed by 12109
Abstract
Nonparametric two-sample or homogeneity testing is a decision-theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
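
As a pointer for practitioners: in the univariate case surveyed here, the test statistic is simply the Wasserstein distance between the two empirical distributions, and its null distribution can be approximated by permutation. The sketch below is not code from the paper — the function name and permutation scheme are our illustrative choices — and uses SciPy's built-in 1-D Wasserstein distance.

    import numpy as np
    from scipy.stats import wasserstein_distance

    def wasserstein_permutation_test(x, y, n_perm=2000, seed=0):
        """Two-sample test of H0: x and y come from one distribution.

        Statistic: W1 distance between the empirical distributions;
        the null is approximated by pooling the data and permuting labels.
        """
        rng = np.random.default_rng(seed)
        stat = wasserstein_distance(x, y)
        pooled = np.concatenate([x, y])
        n, exceed = len(x), 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if wasserstein_distance(pooled[:n], pooled[n:]) >= stat:
                exceed += 1
        return stat, (exceed + 1) / (n_perm + 1)

    # A location shift of 0.7 standard deviations is readily detected.
    rng = np.random.default_rng(1)
    print(wasserstein_permutation_test(rng.normal(0, 1, 100),
                                       rng.normal(0.7, 1, 100)))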

Article
Measures of Difference and Significance in the Era of Computer Simulations, Meta-Analysis, and Big Data
by Reinout Heijungs, Patrik J.G. Henriksson and Jeroen B. Guinée
Entropy 2016, 18(10), 361; https://doi.org/10.3390/e18100361 - 09 Oct 2016
Cited by 16 | Viewed by 5460
Abstract
In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used to not only estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with the square root of the sample size, which provides a stimulus to probe large samples. In simulation models, the situation is entirely different. When probability distribution functions for model features are specified, the probability distribution function of the model output can be approached using numerical techniques, such as bootstrapping or Monte Carlo sampling. Given the computational power of most PCs today, the sample size can be increased almost without bounds. The result is that standard errors of parameters are vanishingly small, and that almost all significance tests will lead to a rejected null hypothesis. Clearly, another approach to statistical significance is needed. This paper analyzes the situation and connects the discussion to other domains in which the null hypothesis significance test (NHST) paradigm is challenged. In particular, the notions of effect size and Cohen’s d provide promising alternatives for the establishment of a new indicator of statistical significance. This indicator attempts to cover significance (precision) and effect size (relevance) in one measure. Although in the end more fundamental changes are called for, our approach has the attractiveness of requiring only a minimal change to the practice of statistics. The analysis is not only relevant for artificial samples, but also for present-day huge samples, associated with the availability of big data. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
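
The paper's central observation — at simulation-scale sample sizes every significance test rejects, while the effect size may remain negligible — can be reproduced in a few lines. A minimal sketch (the scenario and numbers are ours, not the authors'):

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n = 1_000_000                    # Monte Carlo-scale sample size
    a = rng.normal(0.00, 1.0, n)     # two "model outputs" whose means
    b = rng.normal(0.01, 1.0, n)     # differ by an irrelevant 0.01 sd

    t, p = ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd    # Cohen's d (effect size)

    print(f"p = {p:.1e}")   # vanishing: H0 rejected at any usual level
    print(f"d = {d:.3f}")   # ~0.01: negligible by Cohen's benchmarks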

Article
Ordering Quantiles through Confidence Statements
by Cassio P. De Campos, Carlos A. De B. Pereira, Paola M. V. Rancoita and Adriano Polpo
Entropy 2016, 18(10), 357; https://doi.org/10.3390/e18100357 - 08 Oct 2016
Cited by 1 | Viewed by 4176
Abstract
Ranking variables according to their relevance for predicting an outcome is an important task in biomedicine. For instance, such a ranking can be used to select a smaller number of genes, identified as important, on which to run further, more sophisticated experiments. A nonparametric method called Quor is designed to provide a confidence value for the order of arbitrary quantiles of different populations using independent samples. This confidence may provide insights about possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real omics data sets are performed, and they show the advantages and disadvantages of the method. Quor makes no assumptions other than the independence of samples, so it may be a better option when the assumptions of other methods cannot be verified. The software is publicly available on CRAN. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
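
Our reading of the order-statistic argument behind Quor, for two populations and continuous distributions (a reconstruction for illustration only — the CRAN package is the reference implementation): if θX ≤ x(i), y(j) ≤ θY and x(i) ≤ y(j), then θX ≤ θY, and the two events have exact, independent binomial probabilities.

    import numpy as np
    from scipy.stats import binom

    def quantile_order_confidence(x, y, q=0.5):
        """Confidence that the q-quantile of X is below that of Y."""
        x, y = np.sort(x), np.sort(y)
        m, n = len(x), len(y)
        best = 0.0
        for i in range(1, m + 1):
            # P(theta_X <= X_(i)) = P(Bin(m, q) <= i - 1), exact
            px = binom.cdf(i - 1, m, q)
            # smallest j with y_(j) >= x_(i) maximizes the second factor
            j = np.searchsorted(y, x[i - 1], side="left") + 1
            if j > n:
                continue
            # P(Y_(j) <= theta_Y) = P(Bin(n, q) >= j), exact
            best = max(best, px * binom.sf(j - 1, n, q))
        return best

    rng = np.random.default_rng(2)
    print(quantile_order_confidence(rng.normal(0, 1, 60),
                                    rng.normal(1, 1, 60)))  # near 1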

Article
Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem
by Juliana Bueno-Soler and Walter Carnielli
Entropy 2016, 18(9), 325; https://doi.org/10.3390/e18090325 - 07 Sep 2016
Cited by 8 | Viewed by 6499
Abstract
This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs). We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions employing appropriate notions of conditional probability and paraconsistent updating, via a version of Bayes’ theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
Article
Bayesian Dependence Tests for Continuous, Binary and Mixed Continuous-Binary Variables
by Alessio Benavoli and Cassio P. De Campos
Entropy 2016, 18(9), 326; https://doi.org/10.3390/e18090326 - 06 Sep 2016
Cited by 1 | Viewed by 4917
Abstract
Tests for dependence of continuous, discrete and mixed continuous-discrete variables are ubiquitous in science. The goal of this paper is to derive Bayesian alternatives to frequentist null hypothesis significance tests for dependence. In particular, we will present three Bayesian tests for dependence of binary, continuous and mixed variables. These tests are nonparametric and based on the Dirichlet Process, which allows us to use the same prior model for all of them. Therefore, the tests are “consistent” among each other, in the sense that the probabilities that variables are dependent computed with these tests are commensurable across the different types of variables being tested. By means of simulations with artificial data, we show the effectiveness of the new tests. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
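
For the binary–binary case, a Dirichlet Process prior restricted to the four cells of a 2 × 2 table reduces to an ordinary Dirichlet prior, so the flavor of the approach can be sketched as below. The dependence threshold eps and prior strength alpha are our illustrative choices, not the paper's.

    import numpy as np

    def prob_dependent(table, eps=0.05, alpha=1.0, n_draws=20_000, seed=0):
        """Posterior probability that two binary variables are dependent.

        table: 2x2 counts. Prior: Dirichlet(alpha/4 per cell).
        'Dependent' is operationalized as |p11 - p1. * p.1| > eps.
        """
        rng = np.random.default_rng(seed)
        counts = np.asarray(table, float).ravel()   # order: 00, 01, 10, 11
        draws = rng.dirichlet(counts + alpha / 4, size=n_draws)
        p11 = draws[:, 3]
        px1 = draws[:, 2] + draws[:, 3]             # P(X = 1)
        py1 = draws[:, 1] + draws[:, 3]             # P(Y = 1)
        return np.mean(np.abs(p11 - px1 * py1) > eps)

    # Strong association in the counts -> posterior probability near 1.
    print(prob_dependent([[40, 10], [10, 40]]))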

Article
Optimal Noise Benefit in Composite Hypothesis Testing under Different Criteria
by Shujun Liu, Ting Yang, Mingchun Tang, Hongqing Liu, Kui Zhang and Xinzheng Zhang
Entropy 2016, 18(8), 400; https://doi.org/10.3390/e18080400 - 19 Aug 2016
Viewed by 3898
Abstract
The detectability for a noise-enhanced composite hypothesis testing problem according to different criteria is studied. In this work, the noise-enhanced detection problem is formulated as a noise-enhanced classical Neyman–Pearson (NP), Max–min, or restricted NP problem when the prior information is completely known, completely unknown, or partially known, respectively. Next, the detection performances are compared and the feasible range of the constraint on the minimum detection probability is discussed. Under certain conditions, the noise-enhanced restricted NP problem is equivalent to a noise-enhanced classical NP problem with modified prior distribution. Furthermore, the corresponding theorems and algorithms are given to search the optimal additive noise in the restricted NP framework. In addition, the relationship between the optimal noise-enhanced average detection probability and the constraint on the minimum detection probability is explored. Finally, numerical examples and simulations are provided to illustrate the theoretical results. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)

Article
Indicators of Evidence for Bioequivalence
by Stephan Morgenthaler and Robert Staudte
Entropy 2016, 18(8), 291; https://doi.org/10.3390/e18080291 - 09 Aug 2016
Cited by 2 | Viewed by 4489
Abstract
Some equivalence tests are based on two one-sided tests, where in many applications the test statistics are approximately normal. We define and find evidence for equivalence in Z-tests and then one- and two-sample binomial tests as well as for t-tests. Multivariate equivalence tests are typically based on statistics with non-central chi-squared or non-central F distributions in which the non-centrality parameter λ is a measure of heterogeneity of several groups. Classical tests of the null λ ≥ λ0 versus the equivalence alternative λ < λ0 are available, but simple formulae for power functions are not. In these tests, the equivalence limit λ0 is typically chosen by context. We provide extensions of classical variance stabilizing transformations for the non-central chi-squared and F distributions that are easy to implement and which lead to indicators of evidence for equivalence. Approximate power functions are also obtained via simple expressions for the expected evidence in these equivalence tests. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
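
For readers unfamiliar with the construction named in the opening sentence, the two one-sided tests (TOST) scheme in its simplest Z-form is sketched below; the paper's evidence transformations for the non-central chi-squared and F cases are not reproduced here.

    from scipy.stats import norm

    def tost_z(estimate, se, low, high):
        """Two one-sided Z-tests of equivalence: H1 is low < mu < high.

        Each one-sided null must be rejected, so the overall p-value
        is the larger of the two one-sided p-values.
        """
        p_lower = norm.sf((estimate - low) / se)    # tests mu <= low
        p_upper = norm.cdf((estimate - high) / se)  # tests mu >= high
        return max(p_lower, p_upper)

    # Difference 0.02 (se 0.05) within a +/- 0.2 margin: equivalence.
    print(tost_z(0.02, 0.05, -0.2, 0.2))   # ~1.6e-4 -> declare equivalence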

Article
The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas
by Jesús E. García, Verónica A. González-López and Roger B. Nelsen
Entropy 2016, 18(7), 264; https://doi.org/10.3390/e18070264 - 19 Jul 2016
Cited by 3 | Viewed by 4521
Abstract
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
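
A quick numerical companion to the result: for densities of the form c(u, v) = 1 + θ f(u) g(v) (the FGM family below is the best-known member of the Rodríguez-Lallena and Úbeda-Flores class; the choice is ours), the Tsallis–Havrda–Chavát entropy with q = 2, namely S2 = 1 − ∫∫ c², is maximized at independence (θ = 0).

    import numpy as np

    def tsallis2_fgm(theta, n=400):
        """S_2 entropy of the FGM copula density on a midpoint grid.

        c(u, v) = 1 + theta * (1 - 2u) * (1 - 2v), |theta| <= 1.
        Closed form for comparison: S_2 = -theta**2 / 9.
        """
        u = (np.arange(n) + 0.5) / n
        U, V = np.meshgrid(u, u)
        c = 1 + theta * (1 - 2 * U) * (1 - 2 * V)
        return 1 - np.mean(c ** 2)

    for theta in (0.0, 0.5, 1.0):
        print(theta, round(tsallis2_fgm(theta), 5))  # ~0, -0.02778, -0.11111

The numerical values match the closed form to grid accuracy.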
Article
The Logical Consistency of Simultaneous Agnostic Hypothesis Tests
by Luís G. Esteves, Rafael Izbicki, Julio M. Stern and Rafael B. Stern
Entropy 2016, 18(7), 256; https://doi.org/10.3390/e18070256 - 13 Jul 2016
Cited by 14 | Viewed by 6195
Abstract
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized in terms of a Bayesian decision theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region estimator-based tests. Examples of such tests are provided. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
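
The region-estimator characterization in the closing sentences admits a one-screen illustration: accept when a confidence region lies inside H0, reject when it misses H0, and remain agnostic otherwise. The interval hypothesis and numbers below are our own example.

    from scipy.stats import norm

    def agnostic_test(xbar, se, h0_low, h0_high, conf=0.95):
        """Region-based agnostic test of H0: h0_low <= mu <= h0_high."""
        z = norm.ppf(0.5 + conf / 2)
        lo, hi = xbar - z * se, xbar + z * se
        if h0_low <= lo and hi <= h0_high:
            return "accept"          # region contained in H0
        if hi < h0_low or lo > h0_high:
            return "reject"          # region disjoint from H0
        return "agnostic"            # region straddles the boundary

    print(agnostic_test(0.1, 0.02, -0.5, 0.5))  # accept
    print(agnostic_test(1.0, 0.02, -0.5, 0.5))  # reject
    print(agnostic_test(0.5, 0.10, -0.5, 0.5))  # agnostic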

Article
A Simulation-Based Study on Bayesian Estimators for the Skew Brownian Motion
by Manuel Barahona, Laura Rifo, Maritza Sepúlveda and Soledad Torres
Entropy 2016, 18(7), 241; https://doi.org/10.3390/e18070241 - 28 Jun 2016
Cited by 4 | Viewed by 4563
Abstract
In analyzing a temporal data set from a continuous variable, diffusion processes can be suitable under certain conditions, depending on the distribution of increments. We are interested in processes where a semi-permeable barrier splits the state space, producing a skewed diffusion that can have different rates on each side. In this work, the asymptotic behavior of some Bayesian inferences for this class of processes is discussed and validated through simulations. As an application, we model the location of South American sea lions (Otaria flavescens) on the coast of Calbuco, southern Chile, which can be used to understand how the foraging behavior of apex predators varies temporally and spatially. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
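
A discrete sketch of the object under study: skew Brownian motion arises as the limit of a random walk that is fair away from the barrier but steps up with probability p when at it (the Harrison–Shepp construction). Given a path, only the moves made at the barrier inform p, so a conjugate Beta posterior is immediate. The barrier at zero and the flat prior are our simplifications, not the paper's model.

    import numpy as np
    from scipy.stats import beta

    def skew_walk(p, n_steps, seed=0):
        """Random walk: up with prob p at 0, fair elsewhere."""
        rng = np.random.default_rng(seed)
        path = np.zeros(n_steps + 1, dtype=int)
        for t in range(n_steps):
            prob_up = p if path[t] == 0 else 0.5
            path[t + 1] = path[t] + (1 if rng.random() < prob_up else -1)
        return path

    path = skew_walk(p=0.7, n_steps=20_000, seed=3)
    at_zero = path[:-1] == 0                       # visits to the barrier
    ups = np.sum((np.diff(path) == 1) & at_zero)   # upward moves from it
    visits = at_zero.sum()
    post = beta(1 + ups, 1 + visits - ups)         # Beta(1, 1) prior on p
    print(visits, post.mean(), post.interval(0.95))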

Article
Optimal Noise Enhanced Signal Detection in a Unified Framework
by Ting Yang, Shujun Liu, Mingchun Tang, Kui Zhang and Xinzheng Zhang
Entropy 2016, 18(6), 213; https://doi.org/10.3390/e18060213 - 17 Jun 2016
Cited by 2 | Viewed by 4031
Abstract
In this paper, a new framework for variable detectors is formulated in order to solve different optimal noise-enhanced signal detection problems, where six different disjoint sets of detector and discrete vector pairs are defined according to the two inequality constraints on detection and false-alarm probabilities. Then, theorems and algorithms constructed based on the new framework are presented to search for the optimal noise-enhanced solutions that maximize the relative improvements of the detection and the false-alarm probabilities, respectively. Further, the optimal noise-enhanced solution of the maximum overall improvement is obtained based on the new framework, and the relationship among the three maxima is presented. In addition, the sufficient conditions for improvability or non-improvability under the two constraints are given. Finally, numerous examples are presented to illustrate the theoretical results, and the proofs of the main theorems are given in the Appendix. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)

Article
Reproducibility Probability Estimation and RP-Testing for Some Nonparametric Tests
by Lucio De Capitani and Daniele De Martini
Entropy 2016, 18(4), 142; https://doi.org/10.3390/e18040142 - 16 Apr 2016
Cited by 6 | Viewed by 4082
Abstract
Several reproducibility probability (RP)-estimators for the binomial, sign, Wilcoxon signed rank and Kendall tests are studied. Their behavior in terms of MSE is investigated, as well as their performances for RP-testing. Two classes of estimators are considered: the semi-parametric one, where RP-estimators are derived from the expression of the exact or approximated power function, and the non-parametric one, whose RP-estimators are obtained on the basis of the nonparametric plug-in principle. In order to evaluate the precision of RP-estimators for each test, the MSE is computed, and the best overall estimator turns out to belong to the semi-parametric class. Then, in order to evaluate the RP-testing performances provided by RP estimators for each test, the disagreement between the RP-testing decision rule, i.e., “accept H0 if the RP-estimate is lower than, or equal to, 1/2, and reject H0 otherwise”, and the classical one (based on the critical value or on the p-value) is obtained. It is shown that the RP-based testing decision for some semi-parametric RP estimators exactly replicates the classical one. In many situations, the RP-estimator replicating the classical decision rule also provides the best MSE. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
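
The equivalence noted in the penultimate sentence is easy to exhibit for a one-sided Z-test (our stand-in for the paper's nonparametric settings): the semi-parametric plug-in RP estimate exceeds 1/2 exactly when the observed statistic exceeds the critical value, so RP-testing replicates the classical decision.

    from scipy.stats import norm

    def rp_estimate_z(z_obs, alpha=0.05):
        """Plug-in reproducibility probability of a one-sided Z-test.

        Treating z_obs as the true effect, RP = P(reject on replication)
        = Phi(z_obs - z_alpha); RP > 1/2 iff z_obs > z_alpha.
        """
        return norm.cdf(z_obs - norm.ppf(1 - alpha))

    for z in (1.0, 1.6449, 2.5):    # below, at, above the critical value
        rp = rp_estimate_z(z)
        print(z, round(rp, 3), "reject" if rp > 0.5 else "accept")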

Article
Statistical Evidence Measured on a Properly Calibrated Scale for Multinomial Hypothesis Comparisons
by Veronica J. Vieland and Sang-Cheol Seok
Entropy 2016, 18(4), 114; https://doi.org/10.3390/e18040114 - 30 Mar 2016
Cited by 7 | Viewed by 5018
Abstract
Measurement of the strength of statistical evidence is a primary objective of statistical analysis throughout the biological and social sciences. Various quantities have been proposed as definitions of statistical evidence, notably the likelihood ratio, the Bayes factor and the relative belief ratio. Each of these can be motivated by direct appeal to intuition. However, for an evidence measure to be reliably used for scientific purposes, it must be properly calibrated, so that one “degree” on the measurement scale always refers to the same amount of underlying evidence, and the calibration problem has not been resolved for these familiar evidential statistics. We have developed a methodology for addressing the calibration issue itself, and previously applied this methodology to derive a calibrated evidence measure E in application to a broad class of hypothesis contrasts in the setting of binomial (single-parameter) likelihoods. Here we substantially generalize previous results to include the m-dimensional multinomial (multiple-parameter) likelihood. In the process we further articulate our methodology for addressing the measurement calibration issue, and we show explicitly how the more familiar definitions of statistical evidence are patently not well behaved with respect to the underlying evidence. We also continue to see striking connections between the calculating equations for E and equations from thermodynamics as we move to more complicated forms of the likelihood. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)

Review

Review
A Confidence Set Analysis for Observed Samples: A Fuzzy Set Approach
by José Alejandro González, Luis Mauricio Castro, Víctor Hugo Lachos and Alexandre Galvão Patriota
Entropy 2016, 18(6), 211; https://doi.org/10.3390/e18060211 - 30 May 2016
Cited by 4 | Viewed by 6181
Abstract
Confidence sets are generally interpreted in terms of replications of an experiment. However, this interpretation is only valid before observing the sample. After observing the sample, any confidence set has probability zero or one of containing the parameter value. In this paper, we provide a confidence set analysis for an observed sample based on fuzzy set theory, using the concept of membership functions. We show that the traditional ad hoc thresholds (the confidence and significance levels) can be attained from a general membership function. The applicability of the newly proposed theory is demonstrated by using well-known examples from the statistical literature and an application in the context of contingency tables. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
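
One way to read the proposal (our reconstruction, for a normal mean): take the p-value function θ ↦ p(θ) as the membership function of the observed sample's fuzzy confidence set; its α-cut is exactly the classical 1 − α confidence interval, so the usual thresholds appear as level sets of a single function.

    import numpy as np
    from scipy.stats import norm

    def membership(theta, xbar, se):
        """Membership of theta: two-sided p-value of H0: mu = theta."""
        return 2 * norm.sf(np.abs(xbar - theta) / se)

    xbar, se, alpha = 1.2, 0.3, 0.05
    grid = np.linspace(0.0, 2.5, 11)
    memb = membership(grid, xbar, se)
    ci = grid[memb > alpha]            # alpha-cut = classical 95% CI
    print(np.round(memb, 3))
    print(ci.min(), ci.max())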

Review
Beyond Hypothesis Testing
by Joseph B. Kadane
Entropy 2016, 18(5), 199; https://doi.org/10.3390/e18050199 - 20 May 2016
Cited by 1 | Viewed by 4066
Abstract
The extraordinary success of physicists in finding simple laws that explain many phenomena is beguiling. With the exception of quantum mechanics, it suggests a deterministic world in which theories are right or wrong, and the world is simple. However, attempts to apply such thinking to other phenomena have not been so successful. Individually and collectively, we face many situations dominated by uncertainty: about weather and climate, about how wisely to raise children, and about how the economy should be managed. The controversy about hypothesis testing is dominated by the tension between simple explanations and the complexity of the world we live in. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)