Article

Partial Exchangeability for Contingency Tables

Persi Diaconis
Department of Mathematics and Statistics, Sequoia Hall, Stanford University, Stanford, CA 94305, USA
Mathematics 2022, 10(3), 442; https://doi.org/10.3390/math10030442
Submission received: 30 December 2021 / Revised: 19 January 2022 / Accepted: 20 January 2022 / Published: 29 January 2022

Abstract

A parameter-free version of classical models for contingency tables is developed along the lines of de Finetti’s notions of partial exchangeability.

1. Introduction

Consider cross-classified data: $X_1, X_2, \ldots, X_n$, where $X_a = (i_a, j_a)$, $i_a \in [I]$, $j_a \in [J]$ (for $[I] = \{1, 2, \ldots, I\}$). Such data are often presented as an $I \times J$ contingency table $T = (t_{ij})$, where $t_{ij}$ is the number of times $(i,j)$ occurs. Suppose that $X_1, \ldots, X_n$ are exchangeable and extendible. Then, de Finetti’s theorem says:
Theorem 1.
For exchangeable $\{X_i\}_{i=1}^{\infty}$ taking values in $[I] \times [J]$,
$$P[X_1 = (i_1, j_1), \ldots, X_n = (i_n, j_n)] = \int_{\Delta_{I \times J}} \prod_{i,j} p_{ij}^{t_{ij}}\, \mu(dp),$$
where $\Delta_{I \times J} = \{p_{ij} \geq 0 : \sum_{i,j} p_{ij} = 1\}$. The representing measure $\mu$ is unique.
A popular model for cross-classified data is
$$p_{ij} = \theta_i\, \eta_j.$$
Here is a Bayesian, parameter-free description.
Theorem 2.
For exchangeable $\{X_i\}_{i=1}^{\infty}$ taking values in $[I] \times [J]$, a necessary and sufficient condition for the mixing measure $\mu$ in Theorem 1 to be supported on $\Delta_I \times \Delta_J$ (with $\Delta_I = \{(p_1, \ldots, p_I) : p_i \geq 0,\ \sum_i p_i = 1\}$), so that
$$P[X_1 = (i_1, j_1), \ldots, X_n = (i_n, j_n)] = \int_{\Delta_I \times \Delta_J} \prod_i \theta_i^{t_{i\cdot}} \prod_j \eta_j^{t_{\cdot j}}\, \mu(d\theta, d\eta),$$
is that
$$P[X_1 = (i_1, j_1), X_2 = (i_2, j_2), X_3 = (i_3, j_3), \ldots, X_n = (i_n, j_n)] = P[X_1 = (i_1, j_2), X_2 = (i_2, j_1), X_3 = (i_3, j_3), \ldots, X_n = (i_n, j_n)]. \tag{1}$$
Condition (1) is to hold for all $n \geq 2$ and all $(i_a, j_a)$, $1 \leq a \leq n$.
Proof. 
Condition (1) implies, for all $n$ and $h \geq 1$ (suppressing “$P$-a.s.” throughout),
$$P[X_1 = (i_1, j_1), X_2 = (i_2, j_2) \mid X_n = (i_n, j_n), \ldots, X_{n+h} = (i_{n+h}, j_{n+h})] = P[X_1 = (i_1, j_2), X_2 = (i_2, j_1) \mid X_n = (i_n, j_n), \ldots, X_{n+h} = (i_{n+h}, j_{n+h})].$$
Let $h \to \infty$ and then $n \to \infty$. Let $\mathcal{T}$ be the tail field of $\{X_i\}_{i=1}^{\infty}$. Then, Doob’s increasing and decreasing martingale theorems show
$$P[X_1 = (i_1, j_1), X_2 = (i_2, j_2) \mid \mathcal{T}] = P[X_1 = (i_1, j_2), X_2 = (i_2, j_1) \mid \mathcal{T}]. \tag{2}$$
However, a standard form of de Finetti’s theorem says that, given $\mathcal{T}$, the $\{X_i\}_{i=1}^{\infty}$ are i.i.d. with $P[X_1 = (i,j) \mid \mathcal{T}] = p_{ij}$. Thus,
$$p_{ij}\, p_{i'j'} = p_{ij'}\, p_{i'j} \quad \text{for all } i, i', j, j'. \tag{3}$$
Finally, observe that (3) implies (writing $p_{i\cdot} := \sum_j p_{ij}$, $p_{\cdot j} := \sum_i p_{ij}$)
$$p_{i\cdot}\, p_{\cdot j} = \sum_{h,l} p_{ih}\, p_{lj} = \sum_{h,l} p_{ij}\, p_{lh} = p_{ij}. \qquad \square$$
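As a quick numeric sanity check of this last step (my illustration, not part of the paper), the following few lines build a hypothetical matrix satisfying (3) and confirm that it factors into the product of its margins:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-one probability matrix p_ij = theta_i * eta_j satisfies (3),
# since then p_ij p_i'j' = p_ij' p_i'j for all i, i', j, j'.
theta = rng.dirichlet(np.ones(3))   # hypothetical row parameters
eta = rng.dirichlet(np.ones(4))     # hypothetical column parameters
p = np.outer(theta, eta)            # entries sum to 1

# Check condition (3): p[i,j] * p[i',j'] == p[i,j'] * p[i',j].
lhs = np.einsum('ij,kl->ijkl', p, p)    # p[i,j] * p[i',j']
rhs = np.einsum('il,kj->ijkl', p, p)    # p[i,j'] * p[i',j]
assert np.allclose(lhs, rhs)

# Check the conclusion: p_ij equals the product of its margins.
assert np.allclose(p, np.outer(p.sum(axis=1), p.sum(axis=0)))
```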
We remark on the following points.
1.
If $X_i = (Y_i, Z_i)$, condition (2) is equivalent to
$$\mathcal{L}((Y_1, Z_1), (Y_2, Z_2), \ldots, (Y_n, Z_n)) = \mathcal{L}((Y_1, Z_{\sigma(1)}), \ldots, (Y_n, Z_{\sigma(n)}))$$
for all $n$ and $\sigma \in S_n$ ($S_n$ is the symmetric group on $\{1, 2, \ldots, n\}$). Since the $\{(Y_i, Z_i)\}_{i=1}^{n}$ are exchangeable, this is equivalent to saying that the law is invariant under $S_n \times S_n$.
2.
The mixing measure $\mu(d\theta, d\eta)$ allows general dependence between the row parameters $\theta$ and the column parameters $\eta$. Classical Bayesian analysis of contingency tables often chooses $\mu$ so that $\theta$ and $\eta$ are independent. A parameter-free version is that, under $P$, the row sums $t_{i\cdot}$ and the column sums $t_{\cdot j}$ are independent. It is natural to weaken this to “close to independent” along the lines of [1] or [2]. See also [3].
3.
Theorems 1 and 2 have been stated for discrete state spaces. By a standard discretization argument, they hold for quite general spaces. For example:
Theorem 3.
Let $X_i = (Y_i, Z_i)$ be exchangeable with $Y_i \in \mathcal{Y}$, $Z_i \in \mathcal{Z}$, complete separable metric spaces, $1 \leq i < \infty$. Suppose
$$P[X_1 \in A_1 \times B_1, X_2 \in A_2 \times B_2, \ldots, X_n \in A_n \times B_n] = P[X_1 \in A_1 \times B_2, X_2 \in A_2 \times B_1, \ldots, X_n \in A_n \times B_n]$$
for all measurable $A_i, B_i$ and all $n$. Then,
$$P(X_1 \in A_1 \times B_1, \ldots, X_n \in A_n \times B_n) = \int_{\mathcal{P}(\mathcal{Y}) \times \mathcal{P}(\mathcal{Z})} \prod_{i=1}^{n} \theta(A_i)\, \eta(B_i)\, \mu(d\theta, d\eta),$$
with $\mathcal{P}(\mathcal{Y}), \mathcal{P}(\mathcal{Z})$ the probabilities on the Borel sets of $\mathcal{Y}, \mathcal{Z}$. The mixing measure $\mu$ is unique.
4.
Theorem 2 is closely related to de Finetti’s work in [1,4].
5.
De Finetti’s law of large numbers holds as well: in Theorem 3,
$$\frac{1}{n} \sum_{i=1}^{n} \delta_{X_i}(A \times B) \longrightarrow \theta(A)\, \eta(B) \quad \text{almost surely},$$
with $(\theta, \eta)$ distributed as $\mu$.
One object of this paper is to develop similar parameter free de Finetti theorems for widely used log-linear models for discrete data. Section 2 begins by relating this to an ongoing conversation with Eugenio Regazzini. Section 3 provides needed background on discrete exponential families and algebraic statistics. Section 4 and Section 5 apply those tools to give de Finetti style partially exchangeable theorems for some widely used hierarchical and graphical models for contingency tables. Section 6 shows how these exponential family tools can be used for other Bayesian tasks: building “de Finetti priors” for “almost exchangeability” and running the “exchange” algorithm for doubly intractable Bayesian computation. Some philosophy and open problems are in the final section.

2. Some History

I was lucky enough to be able to speak at Eugenio Regazzini’s 60th birthday celebration, in Milan, in 2006. My talk began this way:
≪ Hello, my name is Persi and I have a problem. ≫
For those of you not aware of the many “10-step programs” (Alcoholics Anonymous, Gamblers Anonymous, …), they all begin this way, with the participants admitting to having a problem. In my case the problem was this:
(a)
After 50 years of thinking about it, I think that the subjectivist approach to probability, induction and statistics is the only thing that works;
(b)
At the same time, I have done a lot of work inventing and analyzing various schemes for generating random samples for things like contingency tables with given row and column sums; graphs with given degree sequences; …; Markov Chain Monte Carlo. These are used for things like permutation tests and Fisher’s exact test.
There is a lot of nice mathematics and hard work in (b) but such tests violate the likelihood principle and lead to poor scientific practice. Hence my problem (I still have it): (a) and (b) are incompatible.
There has been some progress. I now see how some of the tools developed for (b) can be usefully employed for natural tasks suggested by (a). Not so many people care about such inferential questions in these ’big data’ days. However, there are also lots of small datasets where the inferential details matter. There are still useful questions for people like Eugenio (and me).

3. Background on Exponential Families and Algebraic Statistics

The following development is closely based on [5], which should be consulted for examples, proofs and more details.
Let $\mathcal{X}$ be a finite set. Consider the exponential family
$$p_\theta(x) = \frac{1}{Z(\theta)}\, e^{\theta \cdot T(x)}, \qquad \theta \in \mathbb{R}^d,\ x \in \mathcal{X}. \tag{4}$$
Here, $Z(\theta)$ is a normalizing constant and $T : \mathcal{X} \to \mathbb{N}^d \setminus \{0\}$. If $X_1, X_2, \ldots, X_n$ are independent and identically distributed from (4), the statistic $t = T(X_1) + \cdots + T(X_n)$ is sufficient for $\theta$. Let
$$\mathcal{Y}_t = \{(x_1, \ldots, x_n) : T(x_1) + \cdots + T(x_n) = t\}.$$
Under (4), the conditional distribution of $X_1, \ldots, X_n$ given $t$ is uniform on $\mathcal{Y}_t$. It is usual to write
$$t = \sum_{i=1}^{n} T(X_i) = \sum_{x \in \mathcal{X}} \sigma(x)\, T(x), \quad \text{with } \sigma(x) = \#\{i : X_i = x\}.$$
Let
$$\mathcal{F}_t = \Big\{f : \mathcal{X} \to \mathbb{N} \,:\, \sum_{x} f(x)\, T(x) = t\Big\}.$$
Example 1.
For contingency tables, $\mathcal{X} = \{(i,j) : 1 \leq i \leq I,\ 1 \leq j \leq J\}$. The usual model for independence has $T(i,j) \in \mathbb{N}^{I+J}$, a vector of length $I+J$ with two non-zero entries equal to 1: the 1’s in $T(i,j)$ are in the $i$th place and in position $j$ of the last $J$ places. The sufficient statistic $t$ contains the row and column sums of the contingency table associated to the first $n$ observations. The set $\mathcal{F}_t$ is the set of all $I \times J$ tables with these row and column sums.
A Markov chain on this $\mathcal{F}_t$ can be based on the following moves: pick $i \neq i'$, $j \neq j'$ and change the entries of the current $f$ by adding $\pm 1$ in the pattern
$$\begin{array}{c|cc} & j & j' \\ \hline i & + & - \\ i' & - & + \end{array} \qquad \text{or} \qquad \begin{array}{c|cc} & j & j' \\ \hline i & - & + \\ i' & + & - \end{array}$$
This does not change the row sums and it does not change the column sums. If a move would force an entry negative, just pick new $i, i', j, j'$. This gives a connected, aperiodic Markov chain on $\mathcal{F}_t$ with a uniform stationary distribution. See [6].
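The following is a minimal sketch of this chain (my own code, not from [6]; the starting table and step count are illustrative). Rejected moves keep the chain at the current table, so the uniform distribution on $\mathcal{F}_t$ is stationary:

```python
import numpy as np

def swap_move_chain(table, steps, rng=None):
    """Run the +/- swap-move chain on tables with fixed row/column sums.

    Each step picks rows i != i', columns j != j', a sign, and proposes
    adding the 2x2 pattern [+1 -1; -1 +1] (or its negative).  Proposals
    that would create a negative entry are rejected (chain stays put)."""
    rng = np.random.default_rng(rng)
    f = table.copy()
    I, J = f.shape
    for _ in range(steps):
        i, ip = rng.choice(I, size=2, replace=False)
        j, jp = rng.choice(J, size=2, replace=False)
        eps = rng.choice([-1, 1])
        move = np.zeros_like(f)
        move[i, j] = move[ip, jp] = eps
        move[i, jp] = move[ip, j] = -eps
        if (f + move).min() >= 0:   # stay put if an entry would go negative
            f += move
    return f

# Example: a 3x4 table; the output has the same margins as the input.
t0 = np.array([[2, 0, 1, 3], [1, 4, 0, 0], [0, 1, 2, 1]])
t1 = swap_move_chain(t0, steps=1000, rng=0)
assert (t1.sum(axis=1) == t0.sum(axis=1)).all()
assert (t1.sum(axis=0) == t0.sum(axis=0)).all()
```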
Returning to the general case, an analog of the $\pm$ swap moves is given by the following:
Definition 1
(Markov basis). A Markov basis is a set of functions $f_1, f_2, \ldots, f_L$ from $\mathcal{X}$ to $\mathbb{Z}$ such that
$$\sum_{x \in \mathcal{X}} f_i(x)\, T(x) = 0, \qquad 1 \leq i \leq L, \tag{5}$$
and such that for any $t$ and $f, f' \in \mathcal{F}_t$ there are $(\epsilon_1, f_{i_1}), \ldots, (\epsilon_A, f_{i_A})$ with $\epsilon_j = \pm 1$, such that
$$f' = f + \sum_{j=1}^{A} \epsilon_j f_{i_j} \quad \text{and} \quad f + \sum_{j=1}^{a} \epsilon_j f_{i_j} \geq 0 \quad \text{for } 1 \leq a \leq A. \tag{6}$$
This allows the construction of a Markov chain on $\mathcal{F}_t$: from $f$, pick $I \in \{1, 2, \ldots, L\}$ and $\epsilon = \pm 1$ at random and consider $f + \epsilon f_I$. If this is non-negative, move there. If not, stay at $f$. Assumptions (5) and (6) ensure that this Markov chain is symmetric and ergodic, with a uniform stationary distribution. Below, I will use a Markov basis to formulate a de Finetti theorem characterizing mixtures of the model (4).
One of the main contributions of [5] is a method for effectively constructing Markov bases using polynomial algebra. For each $x \in \mathcal{X}$, introduce an indeterminate, also called $x$. Consider the ring of polynomials $k[\mathcal{X}]$ in these indeterminates, where $k$ is a field, e.g., the complex numbers. A function $g : \mathcal{X} \to \mathbb{N}$ is represented as a monomial $\mathcal{X}^g = \prod_x x^{g(x)}$. The statistic $T : \mathcal{X} \to \mathbb{N}^d$ gives a homomorphism
$$\varphi_T : k[\mathcal{X}] \to k[t_1, \ldots, t_d], \qquad x \mapsto t_1^{T_1(x)} t_2^{T_2(x)} \cdots t_d^{T_d(x)},$$
extended linearly and multiplicatively ($\varphi_T(x + y) = \varphi_T(x) + \varphi_T(y)$, $\varphi_T(x^2) = \varphi_T(x)^2$, and so on). The basic object of interest is the kernel of $\varphi_T$:
$$\mathcal{I}_T = \{p \in k[\mathcal{X}] : \varphi_T(p) = 0\}.$$
This is an ideal in $k[\mathcal{X}]$. A key result of [5] is that a generating set for $\mathcal{I}_T$ is equivalent to a Markov basis. To state this, observe that any $f : \mathcal{X} \to \mathbb{Z}$ can be written $f = f^+ - f^-$ with $f^+(x) = \max(f(x), 0)$ and $f^-(x) = \max(-f(x), 0)$. Observe that $\sum_x f(x)\, T(x) = 0$ iff $\mathcal{X}^{f^+} - \mathcal{X}^{f^-} \in \mathcal{I}_T$. The key result is:
Theorem 4.
A collection of functions $f_1, f_2, \ldots, f_L$ is a Markov basis if and only if the set
$$\big\{\mathcal{X}^{f_i^+} - \mathcal{X}^{f_i^-}\big\}_{1 \leq i \leq L}$$
generates the ideal $\mathcal{I}_T$.
Now, the Hilbert Basis Theorem shows that ideals in $k[\mathcal{X}]$ have finite generating sets, and modern computer algebra packages give an effective way of finding them.
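As a small, hypothetical illustration of the correspondence in Theorem 4 (a sketch using SymPy; the 2 × 2 table and the symbol names are mine), one can check that the binomial attached to the swap move of Example 1 lies in the kernel $\mathcal{I}_T$:

```python
import sympy as sp

# Indeterminates: one per cell of a 2x2 table, x[i][j] <-> cell (i, j).
x = [[sp.Symbol(f"x{i}{j}") for j in range(2)] for i in range(2)]
r = sp.symbols("r0 r1")   # target variables for the row sums
c = sp.symbols("c0 c1")   # target variables for the column sums

# phi_T sends cell (i, j) to r_i * c_j, since T(i, j) has 1's in
# coordinate i and coordinate I + j.
phi = {x[i][j]: r[i] * c[j] for i in range(2) for j in range(2)}

# The swap move has f+ supported on (0,0),(1,1) and f- on (0,1),(1,0),
# so the associated binomial is X^{f+} - X^{f-} = x00*x11 - x01*x10.
binomial = x[0][0] * x[1][1] - x[0][1] * x[1][0]

# Its image r0*c0*r1*c1 - r0*c1*r1*c0 expands to 0: it lies in I_T.
print(sp.expand(binomial.subs(phi)))   # -> 0
```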
I do not want (or need) to develop this further. See [5] or the book by Sullivant [7] or Aoki et al. [8]. There is even a Journal of Algebraic Statistics.
I hope that the above gives a flavor for what I mean by “working in (b) is hard honest work”. Most of the applications are for standard frequentist tasks. In the following sections, I will give Bayesian applications.

4. Log Linear Model for Contingency Tables

Log linear models for multiway contingency tables are a healthy part of modern statistics. The index set is $\mathcal{X} = \prod_{\gamma \in \Gamma} I_\gamma$, with $\Gamma$ indexing categories and $I_\gamma$ the levels of category $\gamma$. Let $p(x)$ be the probability of falling into cell $x \in \mathcal{X}$. A log linear model can be specified by writing
$$\log p(x) = \sum_{a \subseteq \Gamma} \varphi_a(x).$$
The sum ranges over subsets $a$ of $\Gamma$, and $\varphi_a(x)$ denotes a function that depends on $x$ only through the coordinates in $a$. Thus, $\varphi_\emptyset(x)$ is a constant and $\varphi_\Gamma(x)$ is allowed to depend on all coordinates. Specifying $\varphi_a \equiv 0$ for some class of sets $a$ determines a model. Background and extensive references are in [9]. If the sets $a$ with $\varphi_a \not\equiv 0$ permitted form a simplicial complex $\mathcal{C}$ (so $a \in \mathcal{C}$ and $a' \subseteq a$ imply $a' \in \mathcal{C}$), the model is called hierarchical. If $\mathcal{C}$ consists of the cliques of a graph, the model is called graphical. If the graph is chordal (every cycle of length $\geq 4$ contains a chord), the graphical model is called decomposable.
Example 2
(3-way contingency tables). The graphical models for three-way tables are shown in the figure [Mathematics 10 00442 i001].
The simplest hierarchical model that is not graphical is the no-three-way-interaction model.
This can be specified by saying “the odds ratio of any pair of variables does not depend on the third”. Thus,
$$\frac{p_{ijk}\, p_{i'j'k}}{p_{ij'k}\, p_{i'jk}} \ \text{ is constant in } k \text{ for fixed } i, i', j, j'. \tag{7}$$
As one motivation, recall that for two variables the independence model is specified by
$$p_{ij} = \theta_i\, \eta_j.$$
For three variables, suppose there are parameters $\theta_{ij}, \eta_{jk}, \psi_{ik}$ satisfying
$$p_{ijk} = \theta_{ij}\, \eta_{jk}\, \psi_{ik} \quad \text{for all } i, j, k. \tag{8}$$
It is easy to see that (8) entails (7), hence “no three-way interaction”. Cross-multiplying, (7) entails
$$p_{ijk}\, p_{i'j'k}\, p_{ij'k'}\, p_{i'jk'} = p_{ij'k}\, p_{i'jk}\, p_{ijk'}\, p_{i'j'k'}. \tag{9}$$
This is the form we will work with for the de Finetti theorems below.
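A few lines of NumPy make the implication (8) ⇒ (9) concrete (a sketch with hypothetical parameter values; the index choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K = 3, 4, 2

# Hypothetical positive parameters; p need not be normalized to check (9).
theta = rng.uniform(0.5, 2.0, (I, J))
eta = rng.uniform(0.5, 2.0, (J, K))
psi = rng.uniform(0.5, 2.0, (I, K))
p = np.einsum('ij,jk,ik->ijk', theta, eta, psi)  # p_ijk = theta_ij eta_jk psi_ik

# Check (9) for one choice of indices i, i', j, j', k, k'.
i, ip, j, jp, k, kp = 0, 1, 0, 2, 0, 1
lhs = p[i, j, k] * p[ip, jp, k] * p[i, jp, kp] * p[ip, j, kp]
rhs = p[i, jp, k] * p[ip, j, k] * p[i, j, kp] * p[ip, jp, kp]
assert np.isclose(lhs, rhs)
print("(8) => (9) holds for this example")
```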
For background, history and examples (and some nice theorems), see ([10], Section 8.2) and [11,12]. Simpson’s ‘paradox’ [13] is based on understanding the no-three-way-interaction model. Further discussion is in Section 5 below.

5. From Markov Bases to de Finetti Theorems

Suppose $\mathcal{X}$ is a finite set, $T : \mathcal{X} \to \mathbb{N}^d \setminus \{0\}$ is a statistic, and $\{f_i\}_{i=1}^{L}$ is a Markov basis as in Section 3. The following development shows how to translate this into de Finetti theorems for the contingency table examples of Section 4. The first argument abstracts the argument used for Theorem 2 above.
Lemma 1
(Key Lemma). Let $\mathcal{X}$ be a finite set and $\{X_i\}_{i=1}^{\infty}$ an exchangeable sequence of $\mathcal{X}$-valued random variables. Suppose for all $n > m$
$$P[X_1 = x_1, \ldots, X_m = x_m, X_{m+1} = x_{m+1}, \ldots, X_n = x_n] = P[X_1 = y_1, \ldots, X_m = y_m, X_{m+1} = x_{m+1}, \ldots, X_n = x_n]. \tag{10}$$
In (10), $x_1, \ldots, x_m, y_1, \ldots, y_m$ are fixed and $x_{m+1}, \ldots, x_n$ are arbitrary. Then, if $\mathcal{T}$ is the tail field of $\{X_i\}_{i=1}^{\infty}$ and $p(x) = P[X_1 = x \mid \mathcal{T}]$,
$$\prod_{i=1}^{m} p(x_i) = \prod_{i=1}^{m} p(y_i). \tag{11}$$
Proof. 
From (10) and exchangeability,
$$P[X_1 = x_1, \ldots, X_m = x_m, X_{n+1} = x_{n+1}, \ldots, X_{n+h} = x_{n+h}] = P[X_1 = y_1, \ldots, X_m = y_m, X_{n+1} = x_{n+1}, \ldots, X_{n+h} = x_{n+h}],$$
so
$$P[X_1 = x_1, \ldots, X_m = x_m \mid X_{n+1} = x_{n+1}, \ldots, X_{n+h} = x_{n+h}] = P[X_1 = y_1, \ldots, X_m = y_m \mid X_{n+1} = x_{n+1}, \ldots, X_{n+h} = x_{n+h}].$$
Let $h \to \infty$ and then $n \to \infty$; Doob’s upward and then downward martingale convergence theorems give
$$P[X_1 = x_1, \ldots, X_m = x_m \mid \mathcal{T}] = P[X_1 = y_1, \ldots, X_m = y_m \mid \mathcal{T}].$$
Now, de Finetti’s theorem implies (11). □
Remark 1.
The Key Lemma shows that the $p(x)$ satisfy certain relations. Using choices of $\{x_i\}, \{y_i\}$ derived from a Markov basis will show that the $p(x)$ satisfy the required independence properties. Suppose that $\sum_x f(x)\, T(x) = 0$, $\sum_x f(x) = 0$, and $f$ takes values in $\{0, \pm 1\}$. Let $S^+ = \{x : f(x) = 1\}$ and $S^- = \{y : f(y) = -1\}$; say $|S^+| = |S^-| = m$. Enumerate $S^+ = \{x_1, \ldots, x_m\}$ and $S^- = \{y_1, \ldots, y_m\}$. Assumption (10) and conclusion (11) will give our theorems.
Example 3
(Independence in a two-way table). Let $\mathcal{X} = [I] \times [J]$. A minimal basis for the independence model is given by the $f_{i,j,i',j'}$:
$$\begin{array}{c|cc} & j & j' \\ \hline i & + & - \\ i' & - & + \end{array} \qquad (\text{all other entries} = 0).$$
The condition of the Key Lemma becomes:
$$P[X_1 = (i, j), X_2 = (i', j'), X_3 = (i_3, j_3), \ldots, X_n = (i_n, j_n)] = P[X_1 = (i, j'), X_2 = (i', j), X_3 = (i_3, j_3), \ldots, X_n = (i_n, j_n)].$$
Passing to the limit gives
$$p_{ij}\, p_{i'j'} = p_{ij'}\, p_{i'j},$$
and so
$$p_{i\cdot}\, p_{\cdot j} = \sum_{i', j'} p_{ij'}\, p_{i'j} = \sum_{i', j'} p_{ij}\, p_{i'j'} = p_{ij}.$$
This is precisely Theorem 2 of the Introduction. □
Example 4
(Complete independence in a three-way table). The sufficient statistics are the one-dimensional margins $t_{i\cdot\cdot}, t_{\cdot j\cdot}, t_{\cdot\cdot k}$. From [5], there are two kinds of moves in a minimal basis. Up to symmetries, these are:
$$\text{Class I:}\quad k:\ \begin{array}{c|cc} & j & j' \\ \hline i & + & - \\ i' & - & + \end{array} \qquad\qquad \text{Class II:}\quad k:\ \begin{array}{c|cc} & j & j' \\ \hline i & + & - \end{array} \quad k':\ \begin{array}{c|cc} & j & j' \\ \hline i & - & + \end{array}$$
Passing to the limit, this entails
$$p_{ijk}\, p_{i'j'k} = p_{ij'k}\, p_{i'jk} \quad \text{and} \quad p_{ijk}\, p_{ij'k'} = p_{ij'k}\, p_{ijk'}.$$
These may be summarized as “the product of any $p_{ijk}, p_{i'j'k'}$ remains unchanged if the middle coordinates are exchanged”. By symmetry, this remains true if the two first or the two last coordinates are exchanged. As above, this entails
$$p_{i\cdot\cdot}\, p_{\cdot j\cdot}\, p_{\cdot\cdot k} = p_{ijk}.$$
These observations can be rephrased into a statement that looks more like the classical de Finetti theorem, using symmetry:
Theorem 5.
Let $\{X_i\}_{i=1}^{\infty}$ be exchangeable, taking values in $[I] \times [J] \times [K]$. Then
$$P[X_1 = (i_1, j_1, k_1), \ldots, X_n = (i_n, j_n, k_n)] = P[X_1 = (\sigma(i_1), \zeta(j_1), \eta(k_1)), \ldots, X_n = (\sigma(i_n), \zeta(j_n), \eta(k_n))]$$
for all $n$, all $\{(i_a, j_a, k_a)\}_{a=1}^{n}$, and all $(\sigma, \zeta, \eta) \in S_I \times S_J \times S_K$ is necessary and sufficient for there to exist a unique $\mu$ on $\Delta_I \times \Delta_J \times \Delta_K$ with
$$P[X_a = (i_a, j_a, k_a),\ 1 \leq a \leq n] = \int_{\Delta_I \times \Delta_J \times \Delta_K} \prod_{a=1}^{n} p_{i_a}\, q_{j_a}\, r_{k_a}\, \mu(dp, dq, dr).$$
Example 5
(One variable independent of the other two). Suppose, without loss of generality, that the graph is the one shown in the figure [Mathematics 10 00442 i002].
Identify the pairs $(j,k)$ with $\{1, 2, \ldots, L\}$, where $L = JK$. The problem reduces to Example 4. A minimal basis consists of (again, up to relabeling)
$$\begin{array}{c|cc} & l & l' \\ \hline i & + & - \\ i' & - & + \end{array}$$
We may conclude
Theorem 6.
Let $\{X_i\}_{i=1}^{\infty}$ be exchangeable, taking values in $[I] \times [J] \times [K]$. Then
$$P[X_1 = (i_1, j_1, k_1), \ldots, X_n = (i_n, j_n, k_n)] = P[X_1 = (\sigma(i_1), \zeta(j_1, k_1)), \ldots, X_n = (\sigma(i_n), \zeta(j_n, k_n))]$$
for all $n$, all $\{(i_a, j_a, k_a)\}_{a=1}^{n}$, and all $(\sigma, \zeta) \in S_I \times S_{J \times K}$ is necessary and sufficient for there to exist a unique $\mu$ on $\Delta_I \times \Delta_{JK}$ with
$$P[X_a = (i_a, j_a, k_a),\ 1 \leq a \leq n] = \int_{\Delta_I \times \Delta_{JK}} \prod_{a=1}^{n} p_{i_a}\, q_{(j_a, k_a)}\, \mu(dp, dq).$$
Example 6
(Conditional independence). Suppose variables $i$ and $j$ are conditionally independent given $k$, as in the figure [Mathematics 10 00442 i003].
Rewrite the parameter condition of Section 4 as
$$p_{\cdot\cdot k}\, p_{ijk} = p_{i\cdot k}\, p_{\cdot jk} \quad \text{for all } i, j, k.$$
The sufficient statistics are $\{T_{i\cdot k}\}_{i,k}$ and $\{T_{\cdot jk}\}_{j,k}$. From [5], a minimal generating set consists of the 2 × 2 swaps within a fixed level $k$:
$$k:\ \begin{array}{c|cc} & j & j' \\ \hline i & + & - \\ i' & - & + \end{array} \qquad \Big(K \times \tfrac{I(I-1)}{2} \times \tfrac{J(J-1)}{2} \text{ moves in all}\Big).$$
From this, the Key Lemma shows (for all $i, i', j, j', k$)
$$p_{ijk}\, p_{i'j'k} = p_{ij'k}\, p_{i'jk}.$$
This entails:
$$p_{i\cdot k}\, p_{\cdot jk} = \sum_{i', j'} p_{ij'k}\, p_{i'jk} = \sum_{i', j'} p_{ijk}\, p_{i'j'k} = p_{ijk}\, p_{\cdot\cdot k}.$$
Again, phrasing condition (10) in terms of symmetry:
Theorem 7.
Let $\{X_i\}_{i=1}^{\infty}$ be exchangeable, taking values in $[I] \times [J] \times [K]$. Then,
$$P[X_1 = (i_1, j_1, k_1), \ldots, X_n = (i_n, j_n, k_n)] = P[X_1 = (\sigma_{k_1}(i_1), \zeta_{k_1}(j_1), k_1), \ldots, X_n = (\sigma_{k_n}(i_n), \zeta_{k_n}(j_n), k_n)] \tag{12}$$
for all $n$, all $\{(i_a, j_a, k_a)\}_{a=1}^{n}$, and all $(\sigma_k, \zeta_k) \in S_I \times S_J$, $1 \leq k \leq K$, is necessary and sufficient for there to exist a unique family $\mu \times \prod_{b=1}^{K} \mu_{b,r}$ on $\Delta_K \times (\Delta_I \times \Delta_J)^K$ with
$$P[X_a = (i_a, j_a, k_a),\ 1 \leq a \leq n] = \int_{\Delta_K \times (\Delta_I \times \Delta_J)^K} \prod_{a=1}^{n} r_{k_a}\, p_{i_a k_a}\, q_{j_a k_a} \prod_{b=1}^{K} \mu_{b,r}(dp_{\cdot b}, dq_{\cdot b})\, \mu(dr). \tag{13}$$
Both (12) and (13) have a simple interpretation. For (12), $\{X_i\}_{i=1}^{n}$ are exchangeable 3-vectors. For any $k$ and specified sequence of values $\{(i_a, j_a, k)\}_{a=1}^{n}$, the chance of observing these values is unchanged under permuting the $(i_a, j_a, k)$ by permutations $\sigma_k \in S_I$, $\zeta_k \in S_J$. Here, $\sigma_k, \zeta_k$ are allowed to depend on $k$.
On the right side of (13), the mixing measure may be understood as follows. There is a probability $\mu$ on $\Delta_K$. Pick $r = (r_1, \ldots, r_K)$ from $\mu$ on $\Delta_K$. Given this $r$, pick $(p_{\cdot k}, q_{\cdot k})$ from $\mu_{k,r}$ on the $k$th copy of $\Delta_I \times \Delta_J$. These choices are allowed to depend on $r$ but are independent, conditional on $r$, $1 \leq k \leq K$.
All of this simply says that, conditional on the tail field,
$$P[X_a = (i, j, k) \mid \mathcal{T}]\; P[X_a = (\cdot, \cdot, k) \mid \mathcal{T}] = P[X_a = (i, \cdot, k) \mid \mathcal{T}]\; P[X_a = (\cdot, j, k) \mid \mathcal{T}],$$
where a dot denotes a sum over the corresponding coordinate.
The first two coordinates are conditionally independent given the third.
Example 7
(No three-way interaction). The model is described in Section 4. The sufficient statistics are the two-dimensional margins $\{T_{ij}\}, \{T_{ik}\}, \{T_{jk}\}$. Minimal Markov bases have proved intractable; see [5] or [8]. For any fixed $I, J, K$, the computer can produce a Markov basis, but these can have a huge number of terms. See [7,8] and their references for a surprisingly rich development.
There is a pleasant surprise. Markov bases are required for the associated Markov chain to be connected. There is a natural subset of moves, the first ones anyone considers, and these are enough for a satisfactory de Finetti theorem (!).
Described informally, for an $I \times J \times K$ array, pick a pair of parallel planes, say the $k$ and $k'$ planes in the three-dimensional array, and consider moves depicted as
$$k:\ \begin{array}{c|cc} & j & j' \\ \hline i & + & - \\ i' & - & + \end{array} \qquad\qquad k':\ \begin{array}{c|cc} & j & j' \\ \hline i & - & + \\ i' & + & - \end{array}$$
These moves preserve all line sums (the sufficient statistics). They are not sufficient to connect any two datasets with the same sufficient statistics. Using the prescription in the Key Lemma, suppose:
$$P[X_1 = (i, j, k), X_2 = (i', j', k), X_3 = (i, j', k'), X_4 = (i', j, k'),\ X_a = (i_a, j_a, k_a),\ 5 \leq a \leq n] = P[X_1 = (i, j', k), X_2 = (i', j, k), X_3 = (i, j, k'), X_4 = (i', j', k'),\ X_a = (i_a, j_a, k_a),\ 5 \leq a \leq n]. \tag{14}$$
Passing to the limit gives
$$p_{ijk}\, p_{i'j'k}\, p_{ij'k'}\, p_{i'jk'} = p_{ij'k}\, p_{i'jk}\, p_{ijk'}\, p_{i'j'k'}. \tag{15}$$
This is exactly the no-three-way-interaction condition. Or, equivalently:
$$\frac{p_{ijk}\, p_{i'j'k}}{p_{ij'k}\, p_{i'jk}} = \frac{p_{ijk'}\, p_{i'j'k'}}{p_{ij'k'}\, p_{i'jk'}}.$$
The odds ratios are the same on the $k$th and $k'$th planes (of course, they depend on $i, j, i', j'$). These considerations imply:
Theorem 8.
Let $\{X_i\}_{i=1}^{\infty}$ be exchangeable, taking values in $[I] \times [J] \times [K]$. Then, condition (14) is necessary and sufficient for the existence of a unique probability $\mu$ on $\Delta_{IJK}$, supported on the no-three-way-interaction variety (15), satisfying
$$P[X_a = (i_a, j_a, k_a),\ 1 \leq a \leq n] = \int_{\Delta_{IJK}} \prod_{i,j,k} p_{ijk}^{t_{ijk}}\, \mu(dp),$$
with $t_{ijk} = \#\{a \leq n : X_a = (i, j, k)\}$.
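A minimal sketch of these parallel-plane moves (my illustration, not code from the paper), checking that such a move has all two-dimensional margins zero and hence preserves the sufficient statistics:

```python
import numpy as np

def plane_move(shape, rng=None):
    """Return a random parallel-plane move on an I x J x K array.

    The move adds the 2x2 pattern [+1 -1; -1 +1] on rows i, i' and
    columns j, j' of plane k, and the negated pattern on plane k'.
    All line sums (the two-dimensional margins) are preserved."""
    rng = np.random.default_rng(rng)
    I, J, K = shape
    i, ip = rng.choice(I, size=2, replace=False)
    j, jp = rng.choice(J, size=2, replace=False)
    k, kp = rng.choice(K, size=2, replace=False)
    m = np.zeros(shape, dtype=int)
    m[i, j, k] = m[ip, jp, k] = m[i, jp, kp] = m[ip, j, kp] = 1
    m[i, jp, k] = m[ip, j, k] = m[i, j, kp] = m[ip, jp, kp] = -1
    return m

move = plane_move((3, 4, 2), rng=0)
# Every two-dimensional margin of the move is zero, so adding it to a
# table leaves the sufficient statistics {T_ij}, {T_ik}, {T_jk} unchanged.
for axis in range(3):
    assert (move.sum(axis=axis) == 0).all()
```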
We remark on the following points.
  • It follows from theorems in [12] and [11] that, if all $p_{ijk} > 0$, condition (15) is equivalent to the unique representation
$$p_{ijk} = r\, \alpha_{jk}\, \beta_{ki}\, \gamma_{ij}, \tag{16}$$
    where $r, \alpha, \beta, \gamma$ have positive entries and satisfy
$$\sum_k \alpha_{jk} = \sum_i \beta_{ki} = \sum_j \gamma_{ij} = 1 \quad \text{for all } i, j, k,$$
    and
$$r \sum_{i,j,k} \alpha_{jk}\, \beta_{ki}\, \gamma_{ij} = 1.$$
    The integral representation in the theorem can be stated in this parametrization. The condition $p_{ijk} > 0$ is equivalent to $P(X_1 = (i,j,k)) > 0$, a condition on observables.
  • Condition (14) does not have an obvious symmetry interpretation.
  • Conditions (14) and (15) are stated via varying the third variable while $i, j, i', j'$ are held fixed. Because of (16), if they hold in this form, they hold with any two variables fixed as the third varies.
  • It is possible to go on, but, as John Darroch put it, ’the extensions to higher order interactions… are not likely to be of practical interest’. The most natural development—the generalization to decomposable models—is being developed by Paula Gablenz.
  • There are many extensions of the Key Lemma above. These allow a similar development for more general log linear models and exponential families.

6. Discussion and Conclusions

The tools of algebraic statistics have been harnessed above to develop partial exchangeability for standard contingency table models. I have used them for two further Bayesian tasks: approximate exchangeability and the problem of ‘doubly intractable priors’. As both are developed in separate papers, I will be brief.
Approximate exchangeability. Consider $n$ men and $m$ women along with a binary outcome. If the men are judged exchangeable (for fixed outcomes for the women) and vice versa, and if both sequences are extendable, de Finetti [1] shows that there is a unique prior on the unit square $[0,1]^2$ such that, for any outcomes $t_1, \ldots, t_n, \sigma_1, \ldots, \sigma_m$ in $\{0,1\}$,
$$P[X_1 = t_1, \ldots, X_n = t_n, Y_1 = \sigma_1, \ldots, Y_m = \sigma_m] = \int_{[0,1]^2} p^S (1-p)^{n-S}\, \theta^T (1-\theta)^{m-T}\, \mu(dp, d\theta),$$
with $S = \sum_{i=1}^{n} t_i$ and $T = \sum_{j=1}^{m} \sigma_j$.
If, for the outcome of interest, $\{X_i\}, \{Y_j\}$ were almost fully exchangeable (so the men/women difference is judged practically irrelevant), the prior $\mu$ would be concentrated near the diagonal of $[0,1]^2$. De Finetti suggested implementing this by considering priors of the form
$$\mu(dp, d\theta) = Z^{-1} e^{-A(p-\theta)^2}\, dp\, d\theta$$
for $A$ large.
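Since $Z$ need not be known, a short random-walk Metropolis sampler can draw from such a prior; this is my sketch, with illustrative values of $A$ and the proposal scale:

```python
import numpy as np

def definetti_prior_sample(A, steps=20000, scale=0.1, rng=None):
    """Random-walk Metropolis on [0,1]^2 targeting the density
    proportional to exp(-A * (p - theta)^2); Z is never needed."""
    rng = np.random.default_rng(rng)
    log_dens = lambda z: -A * (z[0] - z[1]) ** 2
    z = rng.uniform(size=2)
    for _ in range(steps):
        znew = z + scale * rng.normal(size=2)
        if (znew >= 0).all() and (znew <= 1).all():   # reject outside the square
            if np.log(rng.uniform()) < log_dens(znew) - log_dens(z):
                z = znew
    return z   # (p, theta), concentrated near the diagonal for large A

p, theta = definetti_prior_sample(A=200.0, rng=0)
print(f"p = {p:.3f}, theta = {theta:.3f}")   # typically |p - theta| ~ 1/sqrt(A)
```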
In joint work with Sergio Bacallado and Susan Holmes [3], multivariate versions of such priors are developed. These are required to concentrate near sub-manifolds of cubes or products of simplices; think about ‘approximate no three-way interaction’. We used the tools of algebraic statistics to suggest appropriate polynomials in many variables which vanish on the submanifold of interest. Many ad hoc choices were involved. Sampling from such priors or posteriors is a fresh research area. See [2,14,15].
Doubly intractable priors. Consider an exponential family as in Section 3:
$$p_\theta(x) = \frac{1}{Z(\theta)}\, e^{\theta \cdot T(x)}.$$
Here, $x \in \mathcal{X}$, a finite set, $T : \mathcal{X} \to \mathbb{R}^d$, and $\theta \in \mathbb{R}^d$. In many real examples, the normalizing constant $Z(\theta)$ will be unknown and unknowable. For a Bayesian treatment, let $\Pi(d\theta)$ be a prior distribution on $\mathbb{R}^d$, for example, the conjugate prior.
If $X_1, X_2, \ldots, X_n$ is an i.i.d. sample from $p_\theta$, then $T$ is a sufficient statistic and the posterior has the form
$$\bar{Z}\, (Z(\theta))^{-n}\, e^{\theta \cdot F}\, \Pi(d\theta),$$
with $F = \sum_{i=1}^{n} T(X_i)$ and $\bar{Z}$ another normalizing constant. The problem is that $Z(\theta)$ depends on $\theta$ and is unknown!
The exchange algorithm and many variants offer a useful solution. See [16,17].
In practical implementations, there is an intermediate step requiring a sample from $p_\theta^{T}$, the measure induced by $p_\theta^{\otimes n}$ under $\sum_{i \leq n} T(x_i) : \mathcal{X}^n \to \mathbb{R}^d$. This is a discrete sampling task, and Markov basis techniques have proved useful. See [16].
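For concreteness, here is a minimal sketch of one step of the exchange algorithm for this family (my own illustration; `sample_from_model` stands in for an exact sampler from $p_\theta$, e.g., one built from a Markov basis chain, and the Gaussian proposal and generic prior are assumptions). Both unknown normalizing constants cancel in the acceptance ratio:

```python
import numpy as np

def exchange_step(theta, T_data, n, sample_from_model, log_prior,
                  prop_scale=0.1, rng=None):
    """One step of the exchange algorithm for p_theta(x) propto exp(theta.T(x)).

    T_data is the observed sufficient statistic sum_i T(x_i).  Drawing
    auxiliary data w ~ p_{theta'} (n points) makes Z(theta) and Z(theta')
    cancel, leaving the log acceptance ratio
        (theta' - theta) . (T_data - T_aux) + log-prior difference."""
    rng = np.random.default_rng(rng)
    theta_new = theta + prop_scale * rng.normal(size=theta.shape)
    # sample_from_model(theta, rng) is assumed to return T(x) for one
    # exact draw x ~ p_theta (e.g., via a Markov basis chain run long).
    T_aux = sum(sample_from_model(theta_new, rng) for _ in range(n))
    log_ratio = ((theta_new - theta) @ (T_data - T_aux)
                 + log_prior(theta_new) - log_prior(theta))
    if np.log(rng.uniform()) < log_ratio:
        return theta_new
    return theta
```

In a full run, `exchange_step` would be iterated to produce a Markov chain whose stationary distribution is the posterior, with no evaluation of $Z(\theta)$ ever required.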
A philosophical comment. The task undertaken above, finding believable Bayesian interpretations for widely used log linear models, goes somewhat against the grain of standard statistical practice. I do not think anyone takes a reasonably complex, high dimensional hierarchical model seriously. They are mostly used as a part of exploratory data analysis; this is not to deny their usefulness. Making any sense of a high dimensional dataset is a difficult task. Practitioners search through huge collections of models in an automated way. Usually, any reflection suggests the underlying data is nothing like a sample from a well specified population. Nonetheless, models are compared using product likelihood criteria. It is a far far cry from being based on anyone’s reasoned opinion.
I have written elsewhere about finding Bayesian justification for important statistical tasks such as graphical methods or exploratory data analysis [18]. These seem like tasks similar to ‘how do you form a prior’, different from the focus of even the most liberal Bayesian thinking.
The sufficiency approach. There is a different approach to extending de Finetti’s theorem. This uses ‘sufficiency’. Consider exchangeable $\{X_i\}_{i=1}^{\infty}$. For each $n$, suppose $T_n : \mathcal{X}^n \to \mathcal{Y}$ is a function. The $\{T_n\}$ have to fit together according to simple rules satisfied in all of the examples above. Call $\{X_i\}$ partially exchangeable with respect to $\{T_n\}$ if $P[X_1 = x_1, \ldots, X_n = x_n \mid T_n = t_n]$ is uniform. Then, Diaconis and Freedman [19] show that a version of de Finetti’s theorem holds: the law of $\{X_i\}$ is a mixture of i.i.d. laws indexed by the extremal laws. In dozens of examples, these extremal laws can be identified with standard exponential families. This last step remains to be carried out in the generality of Section 3 above. What is required is a version of the Koopman–Pitman–Darmois theorem for discrete random variables. This is developed in [19] when $\mathcal{X} = \mathbb{N}$ and $T_n(X_1, \ldots, X_n) = X_1 + \cdots + X_n$. Passing to interpretation, this version of partial exchangeability has the following form:
$$\text{if } T_n(x_1, \ldots, x_n) = T_n(y_1, \ldots, y_n), \text{ then } P[X_1 = x_1, \ldots, X_n = x_n] = P[X_1 = y_1, \ldots, X_n = y_n].$$
This is neat mathematics (and allows a very general theoretical development). However, it does not seem as easy to think about in natural examples. Exchangeability via symmetry is much easier. The development above is a half-way house between symmetry and sufficiency. A close relative of the sufficiency approach is the topic of ‘extremal models’ as developed by Martin-Löf and Lauritzen; see [20] and its references. Moreover, [21,22] are recent extensions aimed at contingency tables.
Classical Bayesian contingency table analysis. There is a healthy development of parametric analysis for the examples of Section 5. This is based on natural conjugate priors. It includes nice theory and R packages to actually carry out calculations in real problems. Four papers that I like are [23,24,25,26]. The many wonderful contributions by I.J. Good are still very much worth consulting; see [27] for a survey. Section 5 provides ‘observable characterizations’ of the models. The problem of providing ‘observable characterizations’ of the associated conjugate priors (along the lines of [28]) remains open.

Funding

This research received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant agreement No. 817257, and from NSF grant No. 1954042.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank Paula Gablenz, Sourav Chatterjee and Emanuele Dolera for help throughout.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. de Finetti, B. On the condition of partial exchangeability. Stud. Inductive Log. Probab. 1980, 2, 193–205. [Google Scholar]
  2. Bruno, A. On the notion of partial exchangeability (Italian). In Giornale dell’Istituto Italiano degli Attuari; English Translation in: de Finetti, Probability, Induction and Statistics; International Statistical Institute: Leidschenveen, The Netherlands, 1964; Volume 27, Chapter 10; pp. 174–196. [Google Scholar]
  3. Bacallado, S.; Diaconis, P.; Holmes, S. De Finetti priors using Markov chain Monte Carlo computations. Stat. Comput. 2015, 25, 797–808. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. de Finetti, B. Probability, Induction and Statistics: The Art of Guessing; Wiley: Hoboken, NJ, USA, 1972. [Google Scholar]
  5. Diaconis, P.; Sturmfels, B. Algebraic algorithms for sampling from conditional distributions. Ann. Stat. 1998, 26, 363–397. [Google Scholar] [CrossRef] [Green Version]
  6. Diaconis, P.; Gangolli, A. Rectangular arrays with fixed margins. In Discrete Probability and Algorithms; Springer: New York, NY, USA, 1995; Volume 72, pp. 15–41. [Google Scholar]
  7. Sullivant, S. Algebraic Statistics; AMS: Providence, RI, USA, 2018. [Google Scholar]
  8. Aoki, S.; Hara, H.; Takemura, A. Markov Bases in Algebraic Statistics; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  9. Lauritzen, S.L. Graphical Models, 2nd ed.; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  10. Agresti, A. Categorical Data Analysis, 2nd ed.; Wiley: Hoboken, NJ, USA, 2002. [Google Scholar]
  11. Birch, M.W. Maximum likelihood in three-way contingency tables. J. R. Stat. Soc. Ser. B 1963, 25, 220–233. [Google Scholar] [CrossRef]
  12. Darroch, J.N. Interactions in multi-factor contingency tables. J. R. Stat. Soc. Ser. 1962, 24, 251–263. [Google Scholar] [CrossRef]
  13. Simpson, E.H. The interpretation of interaction in contingency tables. J. R. Stat. Soc. Ser. 1951, 13, 238–241. [Google Scholar] [CrossRef]
  14. Diaconis, P.; Holmes, S.; Shahshahani, M. Sampling From a Manifold. In Advances in Modern Statistical Theory and Applications: A Festschrift in Honor of Morris L. Eaton; IMS Statistics Collections: Beachwood, OH, USA, 2013; pp. 102–125. [Google Scholar]
  15. Gerencsér, B.; Ottolini, A. Rates of convergence for Gibbs sampling in the analysis of almost exchangeable data. arXiv 2020, arXiv:2010.15539v2. [Google Scholar]
  16. Diaconis, P.; Wang, G. Bayesian goodness of fit tests: A conversation for David Mumford. Ann. Math. Sci. Appl. 2018, 3, 287–308. [Google Scholar] [CrossRef] [Green Version]
  17. Wang, G. On the Theoretical Properties of the Exchange Algorithm. arXiv 2021, arXiv:2005.09235v4. [Google Scholar]
  18. Diaconis, P. Theories of data analysis: From magical thinking through classical statistics. In Exploring Data Tables, Trends, and Shapes; Hoaglin, D.C., Mosteller, F., Tukey, J.W., Eds.; Wiley: Hoboken, NJ, USA, 1985; pp. 1–36. [Google Scholar]
  19. Diaconis, P.; Freedman, D. Partial Exchangeability and Sufficiency. In Statistics: Applications and New Directions; Ghosh, K., Roy, F., Eds.; Indian Statistical Institute: Calcutta, India, 1984; pp. 205–236. [Google Scholar]
  20. Lauritzen, S.L. General Exponential Models for Discrete Observations. Scand. J. Stat. 1975, 2, 23–33. [Google Scholar]
  21. Lauritzen, S.L.; Rinaldo, A.; Sadeghi, K. Random Networks, Graphical Models, and Exchangeability. arXiv 2017, arXiv:1701.08420v2. [Google Scholar] [CrossRef] [Green Version]
  22. Lauritzen, S.L.; Rinaldo, A.; Sadeghi, K. On exchangeability in network models. J. Algebr. Stat. 2019, 10, 85–114. [Google Scholar] [CrossRef]
  23. Albert, J.H.; Gupta, A.K. Mixtures of Dirichlet distributions and estimation in contingency tables. Ann. Stat. 1982, 10, 1261–1268. [Google Scholar] [CrossRef]
  24. Murray, I.; Ghahramani, Z.; MacKay, D.J.C. MCMC for doubly-intractable distributions. In Proceedings of the 22nd Conference in Uncertainty in Artificial Intelligence (UAI ’06), Cambridge, MA, USA, 13–16 July 2006. [Google Scholar]
  25. Letac, G.; Massam, H. Bayes factors and the geometry of discrete hierarchical loglinear models. Ann. Stat. 2012, 40, 861–890. [Google Scholar] [CrossRef]
  26. Tarantola, C.; Ntzoufras, I. Bayesian Analysis of Graphical Models of Marginal Independence for Three Way Contingency Tables. In Quaderni di Dipartimento from University of Pavia; No 172; Department of Economics and Quantitative Methods, University of Pavia: Pavia, Italy, 2012; Available online: http://dem-web.unipv.it/web/docs/dipeco/quad/ps/RePEc/pav/wpaper/q172.pdf (accessed on 30 December 2021).
  27. Diaconis, P.; Efron, B. Testing for independence in a two-way table: New interpretations of the Chi-Square statistic. Ann. Stat. 1985, 13, 845–874. [Google Scholar] [CrossRef]
  28. Diaconis, P.; Ylvisaker, D. Quantifying prior opinion. In Bayesian Statistics, II; Bernardo, J., DeGroot, M., Lindley, D., Smith, A.F.M., Eds.; North-Holland: Amsterdam, The Netherlands, 1985; pp. 133–156. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
