Article

Statistical Information: A Bayesian Perspective

Rafael B. Stern and Carlos A. de B. Pereira
1 Department of Statistics, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
2 Instituto de Matemática e Estatística, Universidade de São Paulo, Rua do Matão 1010, 05508-900, São Paulo, Brazil
* Author to whom correspondence should be addressed.
Entropy 2012, 14(11), 2254-2264; https://doi.org/10.3390/e14112254
Submission received: 15 August 2012 / Revised: 28 September 2012 / Accepted: 1 November 2012 / Published: 7 November 2012

Abstract

We explore the meaning of information about quantities of interest. Our approach is divided into two scenarios: the analysis of observations and the planning of an experiment. First, we review the Sufficiency, Conditionality and Likelihood principles and how they relate to trivial experiments. Next, we review Blackwell Sufficiency and show that sampling without replacement is Blackwell Sufficient for sampling with replacement. Finally, we unify the two scenarios, presenting an extension of the relationship between Blackwell Equivalence and the Likelihood Principle.

1. Introduction

One of the goals of statistics is to extract information about unknown quantities of interest from observations or from an experiment to be performed. The intuitive definition of information that we adopt from Basu [1] is:
“Information is what it does for you, it changes your opinion”.
One might further question:
  • Information about what?
We are interested in information about a quantity of interest, θ ∈ Θ. A quantity of interest represents a state of nature about which we are uncertain. For example, one might be interested in the number of rainy days next year: θ can be this number and Θ the set of all natural numbers smaller than or equal to 366.
  • Where is the information?
Stating Θ already uses previous knowledge about θ. In the example in the last paragraph, we used the fact that any year has at most 366 days and, therefore, θ can be no larger than this number. Besides stating Θ, one might also think that some values are more probable than others. This kind of knowledge is used to elicit the prior distribution for θ. The prior distribution describes our present state of uncertainty about θ. Usually, the scientist's goal is to decrease his uncertainty about θ. Thus, he collects data he believes to be related to the quantity of interest. That is, he expects that there is information about θ in the data he collects.
  • How is information extracted?
We focus on the case in which one uses Bayes' theorem to compute the posterior distribution for θ given the observation. The posterior distribution describes the uncertainty about the quantity of interest after calibrating the prior by the observation. (In practice, the posterior distribution can rarely be computed in closed form. In these cases, it is usually sufficient to compute a quantity proportional to the posterior or to sample from the posterior.) Information also depends on the statistical framework.
  • How much information is extracted?
In Section 3 we question: How much information is extracted from a given observation? Section 3.1 reviews common principles in Statistics and their relationship with the Likelihood principle. Section 3.2 presents a simple example and discusses information functions compatible with the Likelihood principle.
In Section 4 we consider questions related to experimental design: "How much information do we expect to obtain by performing an experiment?" or "What is the best choice among possible experiments?". Blackwell Sufficiency is a strong criterion for the comparison of experiments. The definition of Blackwell Sufficiency, with a new example, is presented in Section 4.1. If Θ is finite, then two experiments are equally informative in Blackwell’s sense iff the distribution of their likelihoods is the same (Torgersen [2]). In Section 4.2 we extend this result to a setting with no restrictions on Θ. Finally, since not all experiments are comparable in Blackwell’s sense, Section 4.3 explores the metrics discussed in Section 3.2 within the framework of decision theory to compare experiments.
In the following section, we formalize the definitions introduced here.

2. Definitions

A probability space is a triple (Ω, ℱ, P) in which Ω is a set, ℱ is a σ-algebra on Ω and P: ℱ → [0, 1] is a probability function. A quantity R corresponds to a function from Ω to a set ℛ. We define the probability space induced by R, (ℛ, ℱ_R, P_R), where ℱ_R = {M ⊆ ℛ : R⁻¹[M] ∈ ℱ} and P_R(M) = P(R⁻¹[M]). Finally, the σ-algebra induced on Ω by a quantity R is called ℱ|R and corresponds to {R⁻¹[M] : M ∈ ℱ_R}.
An experiment corresponds to a mechanism that allows observing a given quantity. The performance of an experiment corresponds to the observation of this quantity. In order to be concise, from this point forward, we use the word experiment for both the experiment itself and the quantity that is observed when the experiment is performed.
Many quantities of interest are not observable. Therefore, it is only possible to learn about them in an indirect manner. Here, we restrict ourselves to performing experiments that are related to the quantity of interest and applying Bayes’ Theorem to update our knowledge about the latter. For example, in Section 1, the quantity of interest θ corresponds to the number of rainy days next year. A possible experiment to learn about θ would be to collect pluviometric data from recent years. Let X be the quantity representing this yet unobserved data. For brevity, we also call X the experiment. Our uncertainty about θ after performing X and observing X = x is given by P ( θ | X = x ) .
Let X be an experiment taking values in 𝒳. A function T: 𝒳 → τ is called a statistic of X. Therefore, T(X) is also an experiment. Whenever there is no confusion, we use the letter T both to indicate the statistic T and the experiment T(X).
From now on, we restrict ourselves to quantities in ℝ with probability distributions that are discrete or absolutely continuous with respect to the Lebesgue measure. p_X(x|θ) is the conditional probability (density) function of the experiment X given the quantity of interest θ. After the experiment is performed, we write L(θ|X = x) for the likelihood function of X at the point x. Whenever clear from the context, we write p(x|θ) and L(θ|x) for these functions. The prior distribution of θ is denoted by p(θ) and the posterior by p(θ|X = x). Bayes' Theorem provides p(θ|X = x) ∝ p(θ) L(θ|x).
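To fix ideas, the sketch below implements this discrete Bayes update for a made-up prior and likelihood; the specific numbers are only illustrative.

```python
def posterior(prior, likelihood):
    """Discrete Bayes update: p(theta | x) is proportional to p(theta) * L(theta | x)."""
    unnormalized = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(unnormalized.values())
    return {t: u / total for t, u in unnormalized.items()}

# Hypothetical example: theta takes values in {1, 2}, uniform prior,
# and L(theta | x) = p(x | theta) for some observed x.
prior = {1: 0.5, 2: 0.5}
likelihood = {1: 1 / 3, 2: 2 / 3}
print(posterior(prior, likelihood))   # {1: 0.333..., 2: 0.666...}
```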
Finally, we say that an experiment X: Ω → 𝒳 is trivial for a quantity of interest θ: Ω → Θ if ℱ|X is independent of ℱ|θ. This condition is equivalent to the assertion that, ∀ θ ∈ Θ and ∀ x ∈ 𝒳, p(x|θ) = p(x). We use the word trivial to emphasize that X and θ are not associated. Consequently, performing X alone does not bring “information” about θ.

3. Information after an Experiment is Performed

3.1. Statistical Principles and Information

Let I n f ( X , x , θ ) denote the information gained about the quantity of interest θ after observing outcome x in experiment X. We follow Basu [1] and Birnbaum [3] in restricting the possible forms for I n f ( X , x , θ ) by assuming common statistical principles.
A statistic T: 𝒳 → τ is sufficient if X and θ are conditionally independent given T, that is, X is a trivial experiment for θ given T. Thus, all the information about θ in X is gained by observing T alone. The Sufficiency Principle states that, for any sufficient statistic T and any x and y in 𝒳, if T(x) = T(y) then Inf(X, x, θ) = Inf(X, y, θ). This principle is usually followed by all scientists, although not always explicitly mentioned: for inference about θ, the scientist only needs to consider a sufficient statistic.
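As a small illustration (a Bernoulli example of our own, not from the paper), the sketch below shows that, for i.i.d. Bernoulli observations, the posterior depends on the sample only through the sufficient statistic T(x) = Σ x_i: two samples with the same value of T yield the same posterior.

```python
import numpy as np

def posterior_on_grid(x, grid, prior):
    """Posterior over a grid of theta values for i.i.d. Bernoulli(theta) observations x."""
    t, n = sum(x), len(x)
    likelihood = grid ** t * (1 - grid) ** (n - t)   # depends on x only through T(x) = sum(x)
    post = prior * likelihood
    return post / post.sum()

grid = np.linspace(0.01, 0.99, 99)          # discrete approximation of Theta = (0, 1)
prior = np.full(grid.size, 1 / grid.size)   # uniform prior on the grid
x1 = [1, 0, 0, 1, 0]                        # two samples with the same value of T
x2 = [0, 0, 1, 0, 1]
print(np.allclose(posterior_on_grid(x1, grid, prior),
                  posterior_on_grid(x2, grid, prior)))   # True
```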
The Conditionality Principle is another important statistical principle; it can be seen as a counterpart of Sufficiency. The latter states that a trivial experiment performed after T does not bring extra information about θ. The former states that a trivial experiment performed before another experiment does not bring extra information about θ. Let X_1 and X_2 be two arbitrary experiments. Let Y ∈ {1, 2} be an experiment jointly independent of θ, X_1 and X_2. Let X_Y be the mixture of X_1 and X_2; X_Y is performed in the following way: perform Y; if the result of Y is 1, then perform X_1, else perform X_2. The Conditionality Principle states that Inf((Y, X_Y), (i, x), θ) = Inf(X_i, x, θ), ∀ i ∈ {1, 2}. This principle is more controversial than that of Sufficiency.
The Likelihood Principle states that any two possible outcomes having proportional likelihood functions must provide the same information about the quantity of interest. Therefore, for any experiments X_1 and X_2 and any x_1 ∈ 𝒳_1 and x_2 ∈ 𝒳_2, if L(θ|x_1) ∝ L(θ|x_2), then Inf(X_1, x_1, θ) = Inf(X_2, x_2, θ). This principle is stronger than the Sufficiency Principle and the Conditionality Principle. Birnbaum [3] and Basu [4] prove the converse, which yields:
Theorem 1 The Sufficiency and the Conditionality Principles hold iff the Likelihood Principle holds.
A scientist who follows the Likelihood Principle can perform inference about the quantity of interest based solely on the likelihood function. Lindley and Philips [5] and Pereira and Lindley [6] provide examples in which some frequentist methods violate the Likelihood Principle. Indeed, Frequentist Statistics does not follow the Conditionality Principle. On the other hand, Wechsler et al. [7] show that Bayesian Statistics follows the Likelihood Principle.
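The sketch below illustrates the principle with the classic Bernoulli stopping-rule situation discussed by Lindley and Philips [5] (the specific numbers are ours): a fixed-sample-size experiment and a wait-for-the-third-success experiment produce proportional likelihoods and, therefore, the same posterior.

```python
import numpy as np
from math import comb

grid = np.linspace(0.01, 0.99, 99)          # discrete approximation of Theta = (0, 1)
prior = np.full(grid.size, 1 / grid.size)

# Two different experiments whose likelihoods in theta are proportional:
# (a) n = 12 Bernoulli trials are fixed in advance and t = 3 successes are observed;
# (b) trials are performed until the 3rd success, which happens at trial 12.
lik_binomial = comb(12, 3) * grid ** 3 * (1 - grid) ** 9
lik_negative_binomial = comb(11, 2) * grid ** 3 * (1 - grid) ** 9

post_a = prior * lik_binomial
post_a = post_a / post_a.sum()
post_b = prior * lik_negative_binomial
post_b = post_b / post_b.sum()
print(np.allclose(post_a, post_b))          # True: the two outcomes carry the same information
```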

3.2. Information in the Observation

After performing an experiment, how much information about θ does one obtain? In the last section, we argued that the information obtained from points with proportional likelihoods should be the same. Nevertheless, this property only gives a vague idea of what the information function should be. In order to make the definition of information more precise, we again rely on: “Information is what it does for you, it changes your opinion”.
Before one performs an experiment, his opinion about θ is given by his prior distribution. On the other hand, his opinion after the experiment is performed is given by his posterior distribution. Hence, since the information should represent the change in opinion, it should be a function of prior and posterior distributions. If prior and posterior distributions are equal, there is no gain of information.
Next, we use an intuitive example to illustrate some information functions that satisfy this property. Consider that there are 4 balls, 2 of them black and 2 white. 3 of these 4 balls are put in an urn and you do not know which ball was left out. You are offered the possibility of performing one of the following three experiments: Experiment 1 consists of taking only one ball from the urn; Experiment 2 consists of taking two balls with replacement; and Experiment 3 consists of taking two balls without replacement. Your goal is to guess the number of white balls in the urn, 1 or 2. Assume that, a priori, you do not believe any combination of balls is more likely than another, that is, a uniform prior. Also assume that all balls in the urn have equal probability of being selected. Let θ be the number of white balls in the urn and X_i be the number of white balls observed in the i-th experiment. The posterior probabilities P(θ = 1 | X_i = j) are provided in Table 1.
Table 1. P(θ = 1 | X_i = j).

i \ j     0       1       2
1         0.67    0.33    -
2         0.80    0.50    0.20
3         1       0.50    0
Some information functions that can be applied to these experiments are:
  • The Euclidean distance: Inf_E(X_i, x_i, θ) = √( Σ_j [P(θ = j) − P(θ = j | X_i = x_i)]² ).
  • Inf_V(X_i, x_i, θ) = [E(θ | X_i = x_i) − E(θ)]².
  • The Kullback–Leibler divergence: Inf_KL(X_i, x_i, θ) = Σ_j P(θ = j | X_i = x_i) log [ P(θ = j | X_i = x_i) / P(θ = j) ].
Which of the experiments is the most informative? That is, which experiment do you expect to change your opinion the most? Table 2 does not provide a straightforward answer. For example, Experiment 1, in the worst case scenario, brings more information than Experiments 2 and 3. Similarly, P(X_2 = 1) < P(X_3 = 1) and, thus, obtaining no information in Experiment 2 is less likely than in Experiment 3. On the other hand, Experiment 3 provides the largest possible increments in information. In the next section, we discuss how to decide which experiment is the most informative.
Table 2. From left to right, tables for Inf_E(X_i, j, θ), Inf_V(X_i, j, θ) and Inf_KL(X_i, j, θ).

         Inf_E                  Inf_V                  Inf_KL
i \ j    0     1     2     |    0     1     2     |    0     1     2
1        0.23  0.23  -     |    0.03  0.03  -     |    0.02  0.02  -
2        0.42  0     0.42  |    0.09  0     0.09  |    0.08  0     0.08
3        0.70  0     0.70  |    0.25  0     0.25  |    0.30  0     0.30
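The sketch below recomputes the posteriors of Table 1 and the per-outcome information values of Table 2 for the urn example (up to rounding). We use the base-10 logarithm in the Kullback–Leibler divergence, which appears to be the choice behind the published values; this is an assumption on our part.

```python
from fractions import Fraction as F
from math import comb, sqrt, log10

# Urn of Section 3.2: 3 balls in the urn, theta = number of white balls (1 or 2),
# uniform prior. For each experiment i, the model gives P(X_i = j | theta).
def sampling_model(i, theta):
    w = F(theta, 3)                                                    # chance of drawing a white ball
    if i == 1:                                                         # one draw
        return {0: 1 - w, 1: w}
    if i == 2:                                                         # two draws with replacement
        return {j: comb(2, j) * w ** j * (1 - w) ** (2 - j) for j in range(3)}
    return {j: F(comb(theta, j) * comb(3 - theta, 2 - j), comb(3, 2))  # two draws without replacement
            for j in range(3)}

prior = {1: F(1, 2), 2: F(1, 2)}
for i in (1, 2, 3):
    for j in sampling_model(i, 1):
        joint = {t: prior[t] * sampling_model(i, t)[j] for t in (1, 2)}
        post = {t: joint[t] / sum(joint.values()) for t in (1, 2)}
        euclid = sqrt(sum(float(post[t] - prior[t]) ** 2 for t in (1, 2)))
        var = float(sum(t * (post[t] - prior[t]) for t in (1, 2))) ** 2
        kl = sum(float(post[t]) * log10(float(post[t] / prior[t])) for t in (1, 2) if post[t] > 0)
        print(f"i={i} j={j}  P(theta=1|x)={float(post[1]):.2f}  "
              f"Inf_E={euclid:.2f}  Inf_V={var:.2f}  Inf_KL={kl:.2f}")
```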

4. Information before an Experiment is Performed

4.1. Blackwell Sufficiency

Consider two experiments, X and Y, that depend on θ. One usually wants to choose between X and Y for inferences about θ based solely on the conditional distributions of X given θ and of Y given θ. In this section, we review the concept of Blackwell Sufficiency (Blackwell [8]) and show that it is a generalization of the Sufficiency Principle for the comparison of experiments.
A statistic T is sufficient for an experiment X if X and θ are conditionally independent given T. Consequently, T is sufficient iff p(x|θ) = p(t|θ) p(x|t), where t = T(x). The conditional distribution of X given θ can thus be generated by observing T and sampling from p(x|t).
Let X and Y be two statistical experiments taking values in 𝒳(X) and 𝒳(Y), respectively. X is Blackwell Sufficient for Y if there exists a map H: 𝒳(X) × 𝒳(Y) → [0, 1], a transition function, satisfying the following properties:
  • For any y ∈ 𝒳(Y), H(·, y) is measurable on the σ-algebra induced by X, ℱ|X.
  • For any x ∈ 𝒳(X), H(x, ·) is a probability (density) function defined on (𝒳(Y), ℱ|Y).
  • For any y ∈ 𝒳(Y), p(y|θ) = E(H(X, y)|θ), the conditional expectation of H(X, y) given θ.
Let 𝒳(X) and 𝒳(Y) be countable sets and define, for all x ∈ 𝒳(X), Z_x ∈ 𝒳(Y) as a trivial experiment such that P(Z_x = y) = H(x, y). From the definition of Blackwell Sufficiency, the quantities (Z_X, θ) and (Y, θ) are equally distributed: X is Blackwell Sufficient for Y if and only if one can obtain an experiment with the same distribution as Y by observing X = x and, after that, performing the “randomization” Z_x.
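As a toy illustration of these conditions (a hypothetical example of our own, not from the paper), take X with values in {0, 1, 2} and Y equal to the indicator of X ≥ 1; since Y is a statistic of X, the transition function H is deterministic, and the last condition reduces to p(y|θ) = Σ_x H(x, y) p(x|θ).

```python
# Hypothetical discrete illustration of a transition function H.
# X takes values in {0, 1, 2} and Y = 1 if X >= 1, else 0, so that
# H(x, y) = 1 when y equals the indicator of x >= 1 and H(x, y) = 0 otherwise.
p_x = {0: [0.5, 0.3, 0.2],     # assumed p(x | theta) for theta = 0
       1: [0.2, 0.3, 0.5]}     # assumed p(x | theta) for theta = 1
H = [[1.0, 0.0],               # H(0, y): all mass on y = 0
     [0.0, 1.0],               # H(1, y): all mass on y = 1
     [0.0, 1.0]]               # H(2, y): all mass on y = 1

for theta, px in p_x.items():
    # p(y | theta) = E(H(X, y) | theta) = sum over x of H(x, y) * p(x | theta)
    p_y = [sum(H[x][y] * px[x] for x in range(3)) for y in range(2)]
    print(theta, p_y)          # theta = 0: [0.5, 0.5]; theta = 1: [0.2, 0.8]
```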
Next, we provide two examples of Blackwell Sufficiency that address the question at the end of Section 3.2. Example 1 is a version of that in Basu and Pereira [9]. Example 2 is new and shows that sampling without replacement is Blackwell Sufficient for sampling with replacement. Other examples of Blackwell Sufficiency can be found, for example, in Goel and Ginebra [10] and Torgersen [2].
Example 1 Let X and Y be two experiments, π a quantity of interest in [0, 1], and q and p known constants in [0, 1]. Representing the Bernoulli distribution with parameter p by Ber(p), consider also that the conditional distributions of X and Y given π are, respectively:
X ∼ Ber(π)   and   Y ∼ Ber(qπ + (1 − q)p)
X is Blackwell Sufficient for Y regarding π.
Proof. Let A ∼ Ber(q) and B ∼ Ber(p), both independent of all other variables. Defining Y* = AX + (1 − A)B, the quantities (Y*, π) and (Y, π) are equally distributed. Therefore, X is Blackwell Sufficient for Y.
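A minimal Monte Carlo check of this construction is sketched below; the values of π, q and p are arbitrary illustrative choices.

```python
import random

def simulate_y_star(pi, q, p, reps, rng):
    """Monte Carlo estimate of P(Y* = 1 | pi) for Y* = A X + (1 - A) B."""
    hits = 0
    for _ in range(reps):
        x = rng.random() < pi      # X ~ Ber(pi)
        a = rng.random() < q       # A ~ Ber(q), independent of X
        b = rng.random() < p       # B ~ Ber(p), independent of X and A
        hits += (x if a else b)    # Y* = A X + (1 - A) B
    return hits / reps

rng = random.Random(2012)
pi, q, p = 0.3, 0.6, 0.8                              # arbitrary illustrative values
print(simulate_y_star(pi, q, p, 200_000, rng))        # empirical P(Y* = 1 | pi)
print(q * pi + (1 - q) * p)                           # P(Y = 1 | pi) from Example 1
```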
Example 2 Next, we generalize the example of Section 3.2. Consider an urn with N balls: θ of these balls are black and N − θ are white. n (n ≤ N) balls are drawn from the urn.
By stating that (X_1, …, X_n) is a sample with replacement from the urn, we mean:
  • Conditionally on θ, X_1 ∼ Ber(θ/N);
  • Conditionally on θ, X_1, …, X_n are identically distributed;
  • X_{i+1} is conditionally independent of (X_i, …, X_1) given θ, ∀ i ∈ {1, …, n − 1}.
Analogously, (Y_1, …, Y_n) corresponds to a sample without replacement, that is:
  • Conditionally on θ, Y_1 ∼ Ber(θ/N);
  • Y_{i+1} | (y_i, …, y_1, θ) ∼ Ber( (θ − Σ_{j=1}^{i} y_j) / (N − i) ), ∀ i ∈ {1, …, n − 1} and ∀ (y_i, …, y_1) ∈ {0, 1}^i.
(Y_1, …, Y_n) is Blackwell Sufficient for (X_1, …, X_n) regarding θ.
Proof. Define X_1* = Y_1, T_i = Σ_{j=1}^{i} Y_j and, for each i ∈ {1, …, n − 1}, two quantities A_{i+1} and B_{i+1}. These two quantities are such that:
  • A_{i+1} ∼ Ber((N − i)/N), and is independent of all other variables;
  • B_{i+1} | T_i = t_i ∼ Ber(t_i / i);
  • ∀ i ∈ {1, …, n}, conditionally on T_i = t_i, B_i is jointly independent of (A_1, …, A_n), (B_1, …, B_{i−1}), (Y_{i+1}, …, Y_n) and θ.
Define:
X_{i+1}* = A_{i+1} Y_{i+1} + (1 − A_{i+1}) B_{i+1}
Conditionally on θ and on T_i = t_i, X_{i+1}* ∼ Ber(θ/N), ∀ t_i ∈ {0, …, i}. Therefore, X_{i+1}* ∼ Ber(θ/N) and is conditionally independent of (Y_i, …, Y_1) given θ. Finally, since (X_i*, …, X_1*) is a function of (Y_i, …, Y_1), (A_i, …, A_2) and (B_i, …, B_2), conclude that X_{i+1}* is independent of (X_i*, …, X_1*) given θ. By the previous conclusions, (X_1*, …, X_n*, θ) is identically distributed to (X_1, …, X_n, θ). Also, by construction, (X_1*, …, X_n*) | (Y_1 = y_1, …, Y_n = y_n) is trivial for θ, ∀ (y_1, …, y_n) ∈ {0, 1}^n. Hence, sampling without replacement is Blackwell Sufficient for sampling with replacement.
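The construction can be checked by simulation; the sketch below compares the empirical distribution of (X_1*, …, X_n*), built from a without-replacement sample, with the distribution of an i.i.d. Ber(θ/N) sample (the values of N, θ and n are arbitrary illustrative choices).

```python
import itertools
import random
from collections import Counter

def sample_without_replacement(N, theta, n, rng):
    """Draw n balls without replacement from an urn with theta '1' balls and N - theta '0' balls."""
    urn = [1] * theta + [0] * (N - theta)
    return rng.sample(urn, n)

def randomize(y, N, rng):
    """Build (X_1*, ..., X_n*) from a without-replacement sample y, as in the proof."""
    n = len(y)
    x_star = [y[0]]                              # X_1* = Y_1
    t = y[0]                                     # running total T_i
    for i in range(1, n):                        # builds X_{i+1}* of the proof
        a = rng.random() < (N - i) / N           # A_{i+1} ~ Ber((N - i)/N)
        b = rng.random() < t / i                 # B_{i+1} | T_i = t ~ Ber(t / i)
        x_star.append(y[i] if a else int(b))     # X_{i+1}* = A Y_{i+1} + (1 - A) B
        t += y[i]
    return tuple(x_star)

rng = random.Random(0)
N, theta, n, reps = 5, 2, 3, 200_000
counts = Counter(randomize(sample_without_replacement(N, theta, n, rng), N, rng)
                 for _ in range(reps))
for x in itertools.product((0, 1), repeat=n):
    iid = (theta / N) ** sum(x) * (1 - theta / N) ** (n - sum(x))   # sampling with replacement
    print(x, round(counts[x] / reps, 3), round(iid, 3))
```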
Hence, in Section 3.2, Experiment 3 is Blackwell Sufficient for Experiment 2. Similarly, Basu and Pereira [11] show that Experiment 3 is Blackwell Sufficient for Experiment 1. One expects that the information gained about θ by performing Experiment 3 is at least as much as one would obtain by performing Experiment 1 or 2. Are Experiments 1 or 2 also Blackwell Sufficient for Experiment 3? In this case, the experiments would be equally informative. In the next subsection we present a theorem that characterizes when two experiments are equally informative in Blackwell's sense and, thus, also settles the comparison of the experiments in Section 3.2.

4.2. Equivalence Relation in Experiment Information

In this section, the experiments are assumed to take values in countable sets. For an experiment X: Ω → 𝒳, we assume that X is measurable with respect to the power set of 𝒳 and that, ∀ θ ∈ Θ and ∀ x ∈ 𝒳, P(x|θ) > 0. No assumption is required on Θ.
Using Blackwell Sufficiency, it is possible to define an equivalence relation between experiments: X and Y are Blackwell Equivalent, written X ≡ Y, if each one is Blackwell Sufficient for the other. This equivalence relates to the Likelihood Principle of Section 3.1 through:
Theorem 2 Let X and Y be two experiments taking values in 𝒳 and 𝒴, respectively. X ≡ Y iff, for every likelihood function L(·),
∀ θ ∈ Θ, P({x ∈ 𝒳 : L_X(·|x) ∝ L(·)} | θ) = P({y ∈ 𝒴 : L_Y(·|y) ∝ L(·)} | θ)
The following notation reduces the algebra involved. Since all sets are countable, consider them to be ordered. For each θ ∈ Θ, let P(X = x|θ) be a probability function; we define p(·|θ) as the vector whose i-th entry is P(x_i|θ), where x_i is the i-th element of the ordering assumed on the set of values of X. Consider F to be an arbitrary map from 𝒳 × 𝒴 into [0, 1]. We also use the symbol F for the countably infinite matrix that has in its j-th row and i-th column the value F(x_i, y_j); x_i is the i-th element of the ordering in 𝒳 and y_j is the j-th element of the ordering in 𝒴. Finally, a (transposed) transition matrix is one in which all elements are greater than or equal to 0 and every column sums to 1.
Proof. (⇐) Let S: 𝒳 → [0, 1]^Θ and T: 𝒴 → [0, 1]^Θ be such that S(x) and T(y) are likelihood nuclei of x and y; a likelihood nucleus is a chosen likelihood among all of those that are proportional. Recall from Basu [4] that S and T are, respectively, minimal sufficient statistics for X and Y. Therefore, S ≡ X and T ≡ Y. By the hypothesis, (S, θ) and (T, θ) are identically distributed and, therefore, S and T are Blackwell Equivalent. By transitivity of Blackwell Equivalence, S ≡ T implies X ≡ Y, since S ≡ X and Y ≡ T.
(⇒) Consider the above statistics S and T. For simplicity, we write S[X[Ω]] = ξ_X and T[Y[Ω]] = ξ_Y (for an arbitrary function f and set A, we define f[A] as the image of A through f). We also write P(S(X) = l_x | θ) = p_X(l_x|θ) and P(T(Y) = l_y | θ) = p_Y(l_y|θ). Clearly, by construction, if the likelihood functions of two points in ξ_X (or in ξ_Y) are proportional, then they are the same point. Since S and T are minimal sufficient statistics, S ≡ X, T ≡ Y and, therefore, S ≡ T.
Since S is Blackwell Sufficient for T, there exists a map A: ξ_X × ξ_Y → [0, 1] such that A is a transition matrix and:
A p_X(·|θ) = p_Y(·|θ), ∀ θ ∈ Θ
On the other hand, T is also Blackwell Sufficient for S and, similarly, there exists a map B: ξ_Y × ξ_X → [0, 1] such that B is a transition matrix and:
B p_Y(·|θ) = p_X(·|θ), ∀ θ ∈ Θ
From these two equations, there exist two other transition matrices, M = BA and N = AB, such that:
M p_X(·|θ) = p_X(·|θ) and N p_Y(·|θ) = p_Y(·|θ), ∀ θ ∈ Θ
Since M and N are transition matrices, respectively, from ξ_X to ξ_X and from ξ_Y to ξ_Y, we consider the Markov Chains associated with them. All probability functions in the family {p_X(·|θ) : θ ∈ Θ} are invariant measures for M. Note that there are no transient states in M: if x were a transient state of M, then, since invariant probability measures assign zero mass to transient states, P(x|θ) = 0, ∀ θ ∈ Θ. This contradicts the assumption that, ∀ θ ∈ Θ and ∀ x ∈ 𝒳, P(x|θ) > 0.
Next, we use the following result found in Ferrari and Galves [12]:
Lemma 1 Consider a Markov Chain on a countable space 𝒳 with a transition matrix M and no transient states. Let M have irreducible components C(1), …, C(n), …. Then, there exists a unique set of probability functions {p_j(·) : j ∈ ℕ}, with p_j(·) defined on {1, …, |C(j)|}, such that every invariant measure μ of M can be written in the following way: if c_{k,i} is the i-th element of C(k), then μ(c_{k,i}) = p_k(i) · q(k), where q is a probability function on ℕ.
Recall that if a Markov Chain is irreducible, it admits a unique ergodic measure. This lemma states that any invariant measure of an arbitrary countable Markov Chain is a mixture of the unique ergodic measures in each of the irreducible components.
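The sketch below gives a small numerical illustration of Lemma 1 with a hypothetical two-component chain (it is not part of the proof); following the paper's convention, the transition matrix is column-stochastic and invariant measures satisfy M μ = μ.

```python
import numpy as np

# Hypothetical 4-state chain with two irreducible components, {0, 1} and {2, 3}.
# M is a (transposed) transition matrix: every column sums to 1.
M = np.array([[0.5, 0.2, 0.0, 0.0],
              [0.5, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.3],
              [0.0, 0.0, 0.1, 0.7]])
p1 = np.array([2 / 7, 5 / 7])     # unique invariant probability on component {0, 1}
p2 = np.array([0.75, 0.25])       # unique invariant probability on component {2, 3}
q = 0.4                           # an arbitrary mixture weight
mu = np.concatenate([q * p1, (1 - q) * p2])
print(np.allclose(M @ mu, mu))    # True: the mixture is invariant for M
```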
Using the lemma, since C(1), …, C(n), … are the irreducible components of M and c_{k,i} is the i-th element of C(k), then p_X(c_{k,i}|θ) = p_k(i) · q_θ(k), ∀ θ, where q_θ is the mixture weight given by Lemma 1 for the invariant measure p_X(·|θ). Consequently,
p_X(c_{k,i}|θ) = p_X(c_{k,j}|θ) · p_k(i) / p_k(j)
Hence, if two states are in the same irreducible component, then their likelihood functions are proportional. The same argument holds for the matrix N.
The i-th element of ξ X is said to connect to the j-th element of ξ Y if A ( i , j ) > 0 . Similarly, the i-th element of ξ Y is said to connect to the j-th element of ξ X if B ( i , j ) > 0 . Note that every state in ξ X connects to at least one state in ξ Y and vice-versa. This is true because A and B are transition matrices.
For all x_1 ∈ ξ_X, if x_1 connects to y ∈ ξ_Y, then y connects only to x_1. If there were a state x_2 ∈ ξ_X such that y connected to x_2, then x_1 and x_2 would be in the same irreducible component of M. Therefore, x_1 and x_2 would yield proportional likelihood functions and, by the definition of S, x_1 = x_2. Similarly, if a state y ∈ ξ_Y connects to a state x ∈ ξ_X, then x connects solely to y.
Finally, we conclude that every state in ξ_X connects to exactly one state in ξ_Y and vice-versa. Also, if x ∈ ξ_X connects to y ∈ ξ_Y, then y connects to x and vice-versa. This implies that if x connects to y, then p_X(x|θ) = p_Y(y|θ), ∀ θ ∈ Θ. Since S and T are sufficient, the Theorem is proved.
Applying the above Theorem and the Likelihood Principle, one obtains the following result: if X is Blackwell Equivalent to Y and, for each possible value e of the information, we define
A_e = {x ∈ 𝒳 : Inf(X, x, θ) = e} and B_e = {y ∈ 𝒴 : Inf(Y, y, θ) = e},
then P(A_e|θ) = P(B_e|θ), ∀ θ ∈ Θ.
For any information function Inf satisfying the Likelihood Principle (that is, such that if x and y yield proportional likelihood functions, then Inf(X, x, θ) = Inf(Y, y, θ)), X is Blackwell Equivalent to Y if and only if the distributions of (Inf, θ) under X and Y are the same.
Also, since the likelihood nuclei are not equally distributed across the experiments in Section 3.2, we conclude that no pair of them is Blackwell Equivalent. Hence, from the conclusions in Section 4.1, Experiment 3 is strictly more informative than Experiments 1 and 2.

4.3. Experiment Information Function

In the previous sections, we discussed properties an information function should satisfy and reviewed Blackwell Sufficiency as a general rule for comparing experiments. Nevertheless, not every two experiments are comparable through this criterion. Next, we explicitly consider functions capable of describing the information of an experiment. A possible approach to this problem is to consider the information gained as a utility function (DeGroot [13]) that the scientist wants to maximize. This way, it follows from DeGroot [13] that Inf(X, θ) = E[Inf(X, x, θ)], the expectation being taken over the outcomes x of X. Since we consider the data information function as non-negative, the utility function is concave; see DeGroot [14] for instance.
Proceeding with this approach, we compare the different information functions presented in Section 3.2. In this example, the maximum information is obtained when the posterior distribution is such that P(θ = 1|x) = 0 or P(θ = 1|x) = 1. Therefore, to compare these information functions, we divide each of them by its respective maximum.
First, we consider the Euclidean distance as the information function. In the first experiment, with probability 1 the gain of information is 33% of the maximum; that is, a small gain with a small risk. In the second experiment, with probability 56% the gain is 60% of the maximum and with probability 44% it is 0% of the maximum: a moderate gain with a moderate risk. In the third experiment, one can get 100% of the maximum possible information with probability 33% and 0% of the maximum possible information with probability 67%: the maximum gain with a great risk. In conclusion, if one uses the Euclidean "utility", then he/she would have no preference among the three experiments since, for all of them, the expected information gain is 33% of the maximum. This is surprising, as the third experiment is Blackwell Sufficient for both of the others.
Next, consider Inf(X, x, θ) = [E(θ) − E(θ|X = x)]². The information of an experiment under this metric is Inf(X, θ) = V(E(θ|X)). The expected information gain for each of the three experiments is, respectively, 11%, 20% and 33% of the maximum. Thus, the third experiment is more informative than the second, which in turn is more informative than the first.
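The sketch below reproduces these expected gains for the Euclidean and variance-based measures (the Kullback–Leibler divergence can be treated in the same way).

```python
from fractions import Fraction as F
from math import comb, sqrt

# Expected normalized information gain Inf(X_i, theta) = E[Inf(X_i, x, theta)] for the
# urn experiments of Section 3.2, using the Euclidean and variance-based measures.
def sampling_model(i, theta):
    w = F(theta, 3)
    if i == 1:
        return {0: 1 - w, 1: w}
    if i == 2:
        return {j: comb(2, j) * w ** j * (1 - w) ** (2 - j) for j in range(3)}
    return {j: F(comb(theta, j) * comb(3 - theta, 2 - j), comb(3, 2)) for j in range(3)}

prior = {1: F(1, 2), 2: F(1, 2)}
max_euclid = sqrt(0.5)     # distance from the uniform prior to a degenerate posterior
max_var = 0.25             # value of [E(theta | x) - E(theta)]^2 for a degenerate posterior

for i in (1, 2, 3):
    expected_euclid = expected_var = 0.0
    for j in sampling_model(i, 1):
        marginal = sum(prior[t] * sampling_model(i, t)[j] for t in (1, 2))    # P(X_i = j)
        post = {t: prior[t] * sampling_model(i, t)[j] / marginal for t in (1, 2)}
        euclid = sqrt(sum(float(post[t] - prior[t]) ** 2 for t in (1, 2)))
        var = float(sum(t * (post[t] - prior[t]) for t in (1, 2))) ** 2
        expected_euclid += float(marginal) * euclid
        expected_var += float(marginal) * var
    print(f"Experiment {i}: Euclidean {expected_euclid / max_euclid:.0%}, "
          f"variance {expected_var / max_var:.0%}")
```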
Similarly, considering the Kullback–Leibler divergence, the expected gain of information for each of the three experiments is, respectively, 2.4%, 4.6% and 33%. Again, the ordering of X_1, X_2, X_3 induced by the information gain agrees with the ordering induced by Blackwell Sufficiency. The difference in information between Experiments 3 and 2, relative to that between Experiments 2 and 1, is much larger when using the Kullback–Leibler divergence than when using V(E(θ|X)).

5. Conclusions

We used Basu’s concept of information as a starting point for reflection. To operationalize Basu’s concept, we discussed some common Statistical Principles. While these principles are usually presented under a frequentist perspective, we chose a Bayesian one. For instance, the definition of the Conditionality Principle that we presented is slightly different from that in Birnbaum [3] and Basu [4]. Such principles are based on the idea that trivial experiments (or ancillary statistics) should not bring information about the parameter.
We also discussed the comparison of experiments. A known alternative to the classical definition of sufficiency is Blackwell Sufficiency. Let X and Y be two experiments such that X is Blackwell Sufficient for Y; if one is restricted to choosing only one of them, it should be X. We showed that sampling without replacement is preferable to sampling with replacement in this sense. Blackwell Sufficiency is also useful for the characterization of distributions; see, for instance, Basu and Pereira [11].
Theorem 2 states that two experiments are Blackwell Equivalent if and only if their likelihood-function statistics are equally distributed conditionally on θ. Two applications of this Theorem are as follows. (i) If one believes in the Likelihood Principle and that two experiments are equally informative when the distributions of their information functions are equal, then the information equivalence between experiments induced by Blackwell Equivalence follows. (ii) To prove that an experiment is not Blackwell Sufficient for another is, in general, difficult: one must show that there is no transition function from one to the other. However, by Theorem 2, if X is Blackwell Sufficient for Y but the likelihood-function statistics, conditionally on θ, are not equally distributed, then Y is not Blackwell Sufficient for X. This is the case for both examples in Section 4.1 and, thus, Blackwell Equivalence does not hold.
We end this paper by evoking the memory of D. Basu who, among other teachings, inspires the authors with the illuminating concept of information: “Information is what it does for you, it changes your opinion”.

Acknowledgments

The authors are grateful to Basu for his legacy on Foundations of Statistics. We are grateful to Adriano Polpo, Estéfano Alves de Souza, Fernando V. Bonassi, Luis G. Esteves, Julio M. Stern, Paulo C. Marques and Sergio Wechsler for the insightful discussions and suggestions. We are grateful to the suggestions from the anonymous referees. The authors of this paper have benefited from the support of CNPq and FAPESP.

References

  1. Basu, D. A Note on Likelihood. In Statistical Information and Likelihood: A Collection of Critical Essays; Ghosh, J.K., Ed.; Springer: Berlin, Germany, 1988. [Google Scholar]
  2. Torgersen, E.N. Comparison of Experiments; Cambridge Press: Cambridge, UK, 1991. [Google Scholar]
  3. Birnbaum, A. On the foundations of statistical inference. J. Am. Stat. Assoc. 1962, 57, 269–326. [Google Scholar] [CrossRef]
  4. Basu, D. Statistical information and likelihood: a collection of critical essays. In Statistical Information & Likelihood; Ghosh, J.K., Ed.; Springer: Berlin, Germany, 1988. [Google Scholar]
  5. Lindley, D.V.; Philips, L.D. Inference for a Bernoulli process (a Bayesian view). Am. Stat. 1976, 30, 112–119. [Google Scholar]
  6. Pereira, C.; Lindley, D.V. Examples questioning the use of partial likelihood. Statistician 1987, 36, 15–20. [Google Scholar] [CrossRef]
  7. Wechsler, S.; Pereira, C.; Marques, P. Birnbaum’s theorem redux. AIP Conf. Proc. 2008, 1073, 96–100. [Google Scholar]
  8. Blackwell, D. Comparison of experiments. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 31 July–12 August 1950.
  9. Basu, D.; Pereira, C. Blackwell sufficiency and Bernoulli experiments. Braz. J. Prob. Stat. 1990, 4, 137–145. [Google Scholar]
  10. Goel, P.K.; Ginebra, J. When is one experiment "always better than" another? Statistician 2003, 52, 515–537. [Google Scholar] [CrossRef]
  11. Basu, D.; Pereira, C. A note on Blackwell sufficiency and Skibinsky characterization of distributions. Sankhya A 1983, 45, 99–104. [Google Scholar]
  12. Ferrari, P.; Galves, A. Coupling and Regeneration for Stochastic Processes; Sociedad Venezoelana de Matematicas: Caracas, Venezuela, 2000. [Google Scholar]
  13. DeGroot, M.H. Optimal Statistical Decisions; Wiley: New York, NY, USA, 1970. [Google Scholar]
  14. DeGroot, M.H. Uncertainty, information, and sequential experiments. Ann. Math. Stat. 1971, 33, 404–419. [Google Scholar] [CrossRef]
