Article

On Maximum Entropy and Inference

Luigi Gresele 1,2 and Matteo Marsili 3
1 Empirical Inference Department, Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076 Tübingen, Germany
2 High-field Magnetic Resonance Department, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 11, 72076 Tübingen, Germany
3 Quantitative Life Sciences Section, The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
* Author to whom correspondence should be addressed.
Entropy 2017, 19(12), 642; https://doi.org/10.3390/e19120642
Submission received: 9 October 2017 / Revised: 18 November 2017 / Accepted: 20 November 2017 / Published: 28 November 2017
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Abstract: Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

1. Introduction

Statistical mechanics stems from classical (or quantum) mechanics. The latter prescribes which quantities are relevant (i.e., the conserved ones). The former takes this further: it predicts that the probability of observing a system in a microscopic state $s$, in thermal equilibrium, is given by:

$$P(s) = \frac{1}{Z} e^{-\beta H[s]} \qquad (1)$$

where $H[s]$ is the energy of configuration $s$. The inverse temperature $\beta$ is the only relevant parameter that needs to be adjusted, so that the ensemble average $\langle H \rangle$ matches the observed energy $U$. It has been argued [1] that the recipe that leads from $H[s]$ to the distribution $P(s)$ is maximum entropy: among all distributions that satisfy $\langle H \rangle = U$, the one maximizing the entropy $S = -\sum_s P(s) \log P(s)$ should be chosen. Information theory clarifies that the distribution of Equation (1) is the one that assumes nothing else but $\langle H \rangle = U$, or equivalently, that all other observables can be predicted from the knowledge of $H[s]$.
This idea carries through more generally to inference problems: given a dataset of $N$ observations $\hat{s} = \{s^{(1)}, \ldots, s^{(N)}\}$ of a system, one may invoke maximum entropy to infer the underlying distribution $P(s)$ that reproduces the empirical averages of a set $\mathcal{M}$ of observables $\phi^\mu(s)$ ($\mu \in \mathcal{M}$). This leads to Equation (1) with:

$$-\beta H[s] = \sum_{\mu \in \mathcal{M}} g^\mu \phi^\mu(s) \qquad (2)$$
where the parameters $g^\mu$ should be fixed by solving the convex optimization problems:

$$\langle \phi^\mu \rangle = \overline{\phi^\mu} \equiv \frac{1}{N} \sum_{i=1}^{N} \phi^\mu\!\left(s^{(i)}\right) \qquad (3)$$

that result from entropy maximization and are also known to coincide with maximum likelihood estimation (see [2,3,4]).
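To make the moment-matching conditions concrete, here is a minimal sketch (our own illustration, not code from the paper) that solves Equation (3) by gradient ascent on the log-likelihood for a system small enough that all $2^n$ states can be enumerated; all helper names are hypothetical.

```python
import itertools
import numpy as np

def all_states(n):
    """Enumerate all 2^n spin configurations s in {-1, +1}^n."""
    return np.array(list(itertools.product([-1, 1], repeat=n)))

def fit_max_entropy(phi, phi_bar, lr=0.5, steps=5000):
    """Solve <phi_mu> = empirical average (Eq. (3)) by gradient ascent.

    phi: (2^n, M) matrix of operator values; phi_bar: (M,) empirical averages.
    The gradient of the per-sample log-likelihood is phi_bar - <phi>_model.
    """
    g = np.zeros(phi.shape[1])
    for _ in range(steps):
        logp = phi @ g
        p = np.exp(logp - logp.max())
        p /= p.sum()                      # model distribution P(s | g)
        g += lr * (phi_bar - p @ phi)     # moment-matching gradient
    return g

# Example: two spins with the single operator phi(s) = s1*s2;
# the exact solution of tanh(g) = 0.4 is g = arctanh(0.4) ~ 0.4236.
s = all_states(2)
phi = (s[:, 0] * s[:, 1])[:, None]
print(fit_max_entropy(phi, np.array([0.4])))
```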
For example, in the case of spin variables $s \in \{\pm 1\}^n$, the distribution that reproduces the empirical averages $\overline{s_i}$ and correlations $\overline{s_i s_j}$ is the pairwise model:

$$H[s] = -\sum_i h_i s_i - \sum_{i<j} J_{ij} s_i s_j, \qquad (4)$$

which in the case $J_{ij} \equiv J$ for all $i, j$ and $h_i \equiv h$ for all $i$ is the celebrated Ising model. The literature on the inference of Ising models, stemming from the original paper on Boltzmann learning [5] to early applications to neural data [6], has grown considerably (see [7] for a recent review), to the point that some have suggested [8] that a purely data-based statistical mechanics is possible.
Research has mostly focused on the estimation of the parameters $g = \{h_i, J_{ij}\}$, which is itself a computationally challenging issue when $n \gg 1$ [7], or on recovering sparse models, distinguishing true interactions ($J_{ij} \neq 0$) from spurious ones ($J_{ij} = 0$; see, e.g., [9]). Little has been done to go beyond pairwise interactions (yet, see [10,11,12,13]). This is partly because pairwise interactions offer a convenient graphical representation of statistical dependences, and partly because $\ell$-th order interactions require $\sim n^\ell$ parameters, and the available data hardly ever allow one to go beyond $\ell = 2$ [14]. Yet, strictly speaking, there may be no reason to believe that the interactions among the variables are only pairwise. The choice of the form (4) for the Hamiltonian represents an assumption on the intrinsic laws of motion, reflecting an a priori belief of the observer about the system. Conversely, one would like to have inference schemes that certify that pairwise interactions are really the relevant ones, i.e., those that need to be included in $H$ in order to reproduce correlations $\overline{s_{i_1} s_{i_2} \cdots s_{i_\ell}}$ of arbitrary order [15].
We contrast a view of inference as parameter estimation of a preassigned (pairwise) model, where maximum entropy serves a merely ancillary purpose, with one where the ultimate goal of statistical inference is precisely to identify the minimal set $\mathcal{M}$ of sufficient statistics that, for a given dataset $\hat{s}$, accurately reproduces all empirical averages. In this latter perspective, maximum entropy plays a key role in that it affords a sharp distinction between relevant variables ($\phi^\mu(s)$, $\mu \in \mathcal{M}$), which are the sufficient statistics, and irrelevant ones, i.e., all other operators that are not a linear combination of the relevant ones, but whose values can be predicted through theirs. To some extent, understanding amounts precisely to distinguishing the relevant variables from the "dependent" ones: those whose values can be predicted.
Bayesian model selection provides a general recipe for identifying the best model $\mathcal{M}$; yet, as we shall see, the procedure is computationally unfeasible for spin models with interactions of arbitrary order, even for moderate sizes ($n = 5$). Our strategy will then be to perform model selection within the class of mixture models, where it is straightforward [16], and then to project the result onto spin models. The most likely models in this setting are those that enforce a symmetry among configurations that occur with the same frequency in the dataset. These symmetries entail a decomposition of the log-likelihood with a flavor similar to principal component analysis (yet of a different kind than that described in [17]), and they directly predict the sufficient statistics $\psi_\lambda(s)$ that need to be considered in maximum entropy inference. Interestingly, we find that the number of sufficient statistics depends on the frequency distribution of the observations in the data. This implies that the dimensionality of the inference problem is not determined by the number of parameters in the model, but rather by the richness of the data.
The resulting model features interactions of arbitrary order, in general, but is able to recover sparse models in simple cases. An application to real data shows that the proposed approach is able to spot the prevalence of two-body interactions, while suggesting that some specific higher order terms may also be important.

2. Background

2.1. Spin Models with Interactions of Arbitrary Order

Consider a system of $n$ spin variables $s_i = \pm 1$, a state of which is defined by a configuration $s = (s_1, \ldots, s_n)$. The number of all possible states is $2^n$. A generic model is written as:

$$P(s \,|\, g, \mathcal{M}) = \frac{1}{Z} e^{\sum_{\mu \in \mathcal{M}} g^\mu \phi^\mu(s)}, \qquad (5)$$

where $\phi^\mu(s) = \prod_{i \in \mu} s_i$ is the product of all spins involved in the corresponding interaction $g^\mu$, the sum on $\mu$ runs over a subset $\mathcal{M}$ of operators $\phi^\mu$, and $Z$ ensures normalization. We follow the same notation as in [18]: there are $2^n$ possible such operators, which can be indexed by an integer $\mu = 0, \ldots, 2^n - 1$ whose binary representation indicates the spins that occur in the operator $\phi^\mu$. Therefore, $\mu = 0$ corresponds to the constant operator $\phi^0(s) = 1$, and for $\mu = 11$ (binary 1011), the notation $i \in \mu$ is equivalent to $i \in \{1, 2, 4\}$, i.e., $\phi^{11}(s) = s_1 s_2 s_4$.
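This bitwise indexing is straightforward to implement; the short sketch below (our own helper, not from the paper) evaluates $\phi^\mu(s)$ directly from the binary representation of $\mu$.

```python
import numpy as np

def phi(mu, s):
    """phi_mu(s): product of s_i over the bits i set in mu (s is +/-1 valued)."""
    out = 1
    for i, si in enumerate(s):
        if (mu >> i) & 1:
            out *= si
    return out

s = np.array([+1, -1, +1, -1, +1])
print(phi(0, s))    # constant operator phi_0(s) = 1 (empty product)
print(phi(11, s))   # 11 = binary 1011 -> s_1 * s_2 * s_4 = (+1)(-1)(-1) = +1
```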
Given a dataset $\hat{s}$ of $N$ observations of $s$, assumed to be i.i.d. draws from $P(s \,|\, g, \mathcal{M})$, the parameters $g^\mu$ are determined by solving Equation (3). Bayesian inference maintains that different models should be compared on the basis of the posterior $P\{\mathcal{M} \,|\, \hat{s}\}$, which can be computed by integrating the likelihood over the parameters $g$ (we refer to [18] for a discussion of Bayesian model selection within this setup). This can already be a daunting task if $n$ is large. Note that each operator $\phi^\mu$ ($\mu > 0$) can either be present or not in $\mathcal{M}$; this implies that the number of possible models is $2^{2^n - 1}$. Finding the most likely model by exhaustive search is impossible in practice, even for moderately large $n$.

2.2. Bayesian Model Selection on Mixture Models

Let us consider mixture models in which the probability of state $s$ is of the form:

$$P(s \,|\, \varrho, \mathcal{Q}) = \sum_{j=1}^{q} \rho_j \, \mathbb{1}_{Q_j}(s) \qquad (6)$$

where $\mathbb{1}_A(s)$ is the indicator function: $\mathbb{1}_A(s) = 1$ if $s \in A$, and $\mathbb{1}_A(s) = 0$ otherwise. The prior (and posterior) distributions of $\varrho$ take a Dirichlet form; see Appendix B. In other words, model $\mathcal{Q}$ assigns the same probability $\rho_j$ to all configurations in the same set $Q_j$. Formally, $\mathcal{Q} = \{Q_j\}$ is a partition of the set of configurations $s$, i.e., a collection of subsets such that $\{\pm 1\}^n = \bigcup_j Q_j$ and $Q_j \cap Q_{j'} = \emptyset$ for $j \neq j'$. The model's parameters $\rho_j$ are subject to the normalization constraint $\sum_j |Q_j| \rho_j = 1$, where $|Q|$ stands for the number of elements of $Q$. We denote by $q = |\mathcal{Q}|$ the number of subsets in $\mathcal{Q}$; the number of independent parameters in model $\mathcal{Q}$ is then $q - 1$.
The number of possible models $\mathcal{Q}$ is the number of partitions of a set of $2^n$ elements, the Bell number $B_{2^n}$, which grows even faster than the number of spin models $\mathcal{M}$. Yet, Bayesian model selection can easily be carried out, as shown in [16], assuming a Dirichlet prior. In brief, the most likely partition $\mathcal{Q}^*$ depends on the assumed prior, but it is such that if two states $s$ and $s'$ are observed a similar number of times, $k_s \simeq k_{s'}$, then the most likely model places them in the same set and assigns them the same probability. In other words, considering the frequency partition:

$$\mathcal{K} = \{Q_k\}, \qquad Q_k = \{s : k_s = k\}, \qquad (7)$$

which groups in the same subset all states $s$ that are observed the same number $k_s$ of times, the optimal partition $\mathcal{Q}^*$ is always a coarse-graining of $\mathcal{K}$, likely to merge subsets corresponding to similar empirical frequencies.
For example, in the $n = 1$ case, the possible partitions are $\mathcal{Q}^{(1)} = \{\{\pm 1\}\}$ and $\mathcal{Q}^{(2)} = \{\{-1\}, \{+1\}\}$. The first corresponds to a model with no parameters where $P(s \,|\, \mathcal{Q}^{(1)}) = 1/2$ for $s = \pm 1$, whereas the second assigns probabilities $P(s = +1 \,|\, \mathcal{Q}^{(2)}) = \rho_+$ and $P(s = -1 \,|\, \mathcal{Q}^{(2)}) = 1 - \rho_+$. A detailed calculation [16] shows that model $\mathcal{Q}^{(1)}$ should be preferred unless $\bar{s}$ is sufficiently different from zero (i.e., unless the frequencies with which the states $s = \pm 1$ are observed are sufficiently different).
We refer the interested reader to Appendix B and [16] for more details, as well as for a heuristic for finding the $\mathcal{Q}^*$ model.
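As an illustration of the frequency partition of Equation (7), the following sketch (our own helper, not from [16]) groups states by their observed counts, with unobserved states falling into $Q_0$.

```python
import itertools
from collections import Counter

def frequency_partition(samples, n):
    """samples: iterable of +/-1 tuples of length n -> {k: set of states with k_s = k}."""
    counts = Counter(map(tuple, samples))
    partition = {}
    for s in itertools.product([-1, 1], repeat=n):
        partition.setdefault(counts.get(s, 0), set()).add(s)
    return partition

samples = [(1, 1), (1, 1), (-1, -1), (1, -1)]
for k, states in sorted(frequency_partition(samples, 2).items()):
    print(k, sorted(states))
# k=0: {(-1, 1)}; k=1: {(-1, -1), (1, -1)}; k=2: {(1, 1)}
```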

3. Mapping Mixture Models into Spin Models

Model $\mathcal{Q}$ allows for a representation in terms of the variables $g^\mu$, thanks to the relation:

$$g_{\mathcal{Q}}^\mu = \frac{1}{2^n} \sum_s \phi^\mu(s) \log P(s \,|\, \varrho, \mathcal{Q}) = \frac{1}{2^n} \sum_j \chi_j^\mu \log \rho_j, \qquad \chi_j^\mu = \sum_{s \in Q_j} \phi^\mu(s), \qquad (8)$$

which is of the same nature as the one discussed in [11]; its proof is deferred to Appendix A. The index in $g_{\mathcal{Q}}^\mu$ indicates that the coupling refers to model $\mathcal{Q}$ and merely corresponds to a change of variables $\varrho \to g$; we shall drop it in what follows when it causes no confusion.
In Bayesian inference, $\varrho$ should be considered as a random variable, whose posterior distribution for a given dataset $\hat{s}$ can be derived (see [16] and Appendix B). Equation (8) then implies that $g$ too is a random variable, whose distribution can be derived from that of $\varrho$.
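The change of variables of Equation (8) is easy to implement directly for small $n$; the sketch below (hypothetical helper names, with phi() as in the sketch of Section 2.1) computes $\chi_j^\mu$ for each subset and assembles $g^\mu$, and, applied to the two-spin example of Section 4.1, reproduces the symmetry argument given there.

```python
import numpy as np

def phi(mu, s):
    out = 1
    for i, si in enumerate(s):
        if (mu >> i) & 1:
            out *= si
    return out

def couplings_from_partition(parts, rho, n):
    """parts: list of collections of states; rho: probability per state in each part."""
    g = np.zeros(2 ** n)
    for mu in range(2 ** n):
        g[mu] = sum(sum(phi(mu, s) for s in Q) * np.log(r)
                    for Q, r in zip(parts, rho)) / 2 ** n
    return g

# Two-spin example of Section 4.1: aligned vs. unaligned configurations
parts = [[(1, 1), (-1, -1)], [(1, -1), (-1, 1)]]
print(couplings_from_partition(parts, [0.4, 0.1], 2))
# g_1 = g_2 = 0 by symmetry; g_3 = log(0.4/0.1)/2 is the only nonzero interaction
```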
Notice, however, that Equation (8) spans only a $(q-1)$-dimensional manifold in the $(2^n - 1)$-dimensional space of $g$, because there are only $q - 1$ independent variables $\varrho$. This fact is made more evident by the following argument: let $v$ be a $(2^n - 1)$-component vector such that:

$$\sum_{\mu=1}^{2^n - 1} v^\mu \chi_j^\mu = 0, \qquad \forall\, j = 1, \ldots, q. \qquad (9)$$

Then, we find that:

$$\sum_\mu v^\mu g^\mu = 0. \qquad (10)$$

In other words, the linear combination of the random variables $g^\mu$ with coefficients $v^\mu$ satisfying Equation (9) is not random at all. There are (generically) $2^n - 1 - q$ vectors $v$ that satisfy Equation (9), each of which imposes a linear constraint of the form of Equation (10) on the possible values of $g$.
In addition, there are $q$ orthogonal directions $u_\lambda$ that can be derived from the singular value decomposition of $\chi_j^\mu$:

$$\chi_j^\mu = \sum_{\lambda=1}^{q} \Lambda_\lambda\, u_\lambda^\mu\, w_{\lambda,j}, \qquad \sum_\mu u_\lambda^\mu u_{\lambda'}^\mu = \delta_{\lambda,\lambda'}, \qquad \sum_j w_{\lambda,j} w_{\lambda',j} = \delta_{\lambda,\lambda'}. \qquad (11)$$
This in turn implies that model $\mathcal{Q}$ can be written in the exponential form (see Appendix D for details):

$$P(s \,|\, g, \mathcal{Q}) = \frac{1}{Z} e^{\sum_{\lambda=1}^q g_\lambda \psi_\lambda(s)}, \qquad (12)$$

where:

$$\psi_\lambda(s) = \sum_\mu u_\lambda^\mu \phi^\mu(s); \qquad (13)$$

$$g_\lambda = \sum_\mu u_\lambda^\mu g^\mu. \qquad (14)$$
The exponential form of Equation (12) identifies the variables $\psi_\lambda(s)$ as the sufficient statistics of the model. The maximum likelihood parameters $\hat{g}_\lambda$ can be determined from the empirical averages of the $\psi_\lambda(s)$ alone, solving the equations $\langle \psi_\lambda \rangle = \overline{\psi_\lambda}$ for all $\lambda = 1, \ldots, q$. The resulting distribution is the maximum entropy distribution that reproduces the empirical averages of the $\psi_\lambda(s)$. In this precise sense, the $\psi_\lambda(s)$ are the relevant variables. Notice that the variables $\psi_\lambda(s)$ themselves form an orthonormal set:

$$\sum_s \psi_\lambda(s) = 0, \qquad \frac{1}{2^n} \sum_s \psi_\lambda(s)\, \psi_{\lambda'}(s) = \delta_{\lambda,\lambda'}. \qquad (15)$$
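The construction of the sufficient statistics via Equations (11) and (13) takes a few lines of linear algebra; the sketch below (our own names, using numpy's SVD, with phi() as before) builds $\chi$, decomposes it, and checks the orthonormality relations (15).

```python
import itertools
import numpy as np

def phi(mu, s):
    out = 1
    for i, si in enumerate(s):
        if (mu >> i) & 1:
            out *= si
    return out

def sufficient_statistics(parts, n):
    states = list(itertools.product([-1, 1], repeat=n))
    chi = np.array([[sum(phi(mu, s) for s in Q) for Q in parts]
                    for mu in range(1, 2 ** n)])        # mu = 0 excluded
    U, Lam, Wt = np.linalg.svd(chi, full_matrices=False)  # Eq. (11)
    Phi = np.array([[phi(mu, s) for mu in range(1, 2 ** n)] for s in states])
    Psi = Phi @ U      # column lambda is psi_lambda evaluated on all 2^n states
    return Lam, Psi, states

# Check the orthonormality (15) on the two-spin partition of Section 4.1
Lam, Psi, _ = sufficient_statistics([[(1, 1), (-1, -1)], [(1, -1), (-1, 1)]], 2)
print(np.round(Psi.T @ Psi / 2 ** 2, 6))   # identity matrix
```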
In particular, if we focus on the $\mathcal{K}$ partition of the set of states, the one assigning the same probability $\rho_k$ to all states $s$ that are observed $k$ times, we find that $P(s \,|\, \hat{g}, \mathcal{Q}) = k_s/N$ exactly reproduces the empirical distribution. This is a consequence of the fact that the variables $\hat{g}_\lambda$ that maximize the likelihood must correspond to the maximum likelihood estimates $\hat{\rho}_k = k/N$ via Equation (8). This implies that the maximum entropy distribution of Equation (12) reproduces not only the empirical averages of the $\psi_\lambda(s)$, but also those of the operators $\phi^\mu(s)$ for all $\mu$. A direct application of Equation (8) shows that the maximum entropy parameters are given by the formula:

$$\hat{g}_{\mathcal{K}}^\mu = \frac{1}{2^n} \sum_s \phi^\mu(s) \log\frac{k_s}{N} = \frac{1}{2^n} \sum_k \chi_k^\mu \log\frac{k}{N}. \qquad (16)$$

Similarly, the maximum likelihood parameters $\hat{g}_\lambda$ are given by:

$$\hat{g}_\lambda = \frac{1}{2^n} \sum_s \psi_\lambda(s) \log\frac{k_s}{N} = \frac{\Lambda_\lambda}{2^n} \sum_k w_{\lambda,k} \log\frac{k}{N}. \qquad (17)$$
Notice that, when the set $Q_0 = \{s : k_s = 0\}$ of states that are not observed is not empty, all couplings $\hat{g}^\mu$ with $\chi_0^\mu \neq 0$ diverge. Similarly, all $\hat{g}_\lambda$ with $w_{\lambda,0} \neq 0$ also diverge. We shall discuss later how to regularize these divergences, which are expected to occur in the under-sampling regime (i.e., when $N \ll 2^n$).
It has to be noted that, of the $q$ parameters $\rho_j$, only $q - 1$ are independent. Indeed, we find that one of the $q$ singular values $\Lambda_\lambda$ in Equation (11) is practically zero. It is interesting to inspect the covariance matrix $C^{\mu\nu} = E[\delta g^\mu \, \delta g^\nu]$ of the deviations $\delta g^\mu = g^\mu - E[g^\mu]$ from the expected values, computed over the posterior distribution. We find (see Appendix C and Appendix D) that $C^{\mu\nu}$ has eigenvalues $\Lambda_\lambda^2$ along the eigenvectors $u_\lambda$ and zero eigenvalues along the directions $v$. The components $\lambda$ with the largest singular values $\Lambda_\lambda$ are those with the largest statistical error, so one would be tempted to consider them as "sloppy" directions, as in [19]. Yet, by Equation (17), the value of $\hat{g}_\lambda$ itself is proportional to $\Lambda_\lambda$, so the relative fluctuations are independent of $\Lambda_\lambda$. Indeed, "sloppy" modes appear in models that overfit the data, whereas in our case, model selection on mixtures ensures that the model $\mathcal{Q}$ does not overfit. This is why the relative errors on the parameters $g_\lambda$ are of comparable magnitude. Actually, the variables $\psi_\lambda$ that correspond to the largest eigenvalues $\Lambda_\lambda$ are the most relevant ones, since they identify the directions along which the maximum likelihood distribution of Equation (12) tilts most away from the unconstrained maximum entropy distribution $P_0(s) = 2^{-n}$. A further hint in this direction is that Equation (17) implies that the variables $\psi_\lambda(s)$ with the largest $\Lambda_\lambda$ are those whose variation across states $s$ typically correlates most with the variation of $\log k_s$ in the sample.
Notice that the procedure outlined above produces a model that is sparse in the $\varrho$ variables, i.e., it depends on only $q - 1$ parameters, where $q$ is, in the case of the $\mathcal{K}$ partition, the number of different values that $k_s$ takes in the sample. Yet, it is not sparse in the $g^\mu$ variables. Many of the results derived here carry through, with obvious modifications, if the sums over $\mu$ are restricted to a subset $\mathcal{M}$ of putatively relevant interactions. Alternatively, the results discussed above can be the starting point for approximate schemes to find sparse models in the spin representation.

4. Illustrative Examples

In the following, we present simple examples clarifying the effects of the procedure outlined above.

4.1. Recovering the Generating Hamiltonian from Symmetries: Two and Four Spins

As a simple example, consider a system of two spins. The most general Hamiltonian to be considered in the inference procedure is:

$$H^{\mathrm{Inf}}[s] = g^1 s_1 + g^2 s_2 + g^3 s_1 s_2. \qquad (18)$$

Imagine the data are generated from the Hamiltonian:

$$H^{\mathrm{Gen}}[s] = J\, s_1 s_2, \qquad (19)$$

and let us assume that the number of samples is large enough that the optimal partition $\mathcal{Q}^*$ groups configurations of aligned spins, $Q_{=} = \{s : s_1 = s_2\}$, distinguishing them from the configurations of unaligned ones, $Q_{\neq} = \{s : s_1 = -s_2\}$.
Following the strategy explained in Section 2.1, we observe that $\chi_{=}^\mu = \chi_{\neq}^\mu = 0$ for both $\mu = 1$ and $\mu = 2$. Therefore, Equation (8) implies $g^1 = g^2 = 0$, so the $\mathcal{Q}$ model only allows $g^3$ to be nonzero. In this simple case, the symmetries induced by the $\mathcal{Q}$ model (i.e., invariance under $(s_1, s_2) \to (-s_1, -s_2)$) directly produce a sparse model where all interactions that are not consistent with them are set to zero.
Consider now a four-spin system. Suppose that the generating Hamiltonian is that of a pairwise, fully-connected model, as in Figure 1 (left), with equal couplings $g^3 = g^5 = g^6 = g^9 = g^{10} = g^{12} = J$. With enough data, we can expect that the optimal model is based on the partition $\mathcal{Q}$ that distinguishes three sets of configurations:

$$Q_j = \left\{s : s_1 + s_2 + s_3 + s_4 = \pm 2j\right\}, \qquad j = 0, 1, 2, \qquad (20)$$
depending on the absolute value of the total magnetization. The $\mathcal{Q}$ model assigns the same probability $\rho_j$ to configurations $s$ in the same set $Q_j$. Along lines similar to the previous example, it can be shown that every interaction of order one is set to zero ($g^1 = g^2 = g^4 = g^8 = 0$), as is every interaction of order three ($g^7 = g^{11} = g^{13} = g^{14} = 0$), because the corresponding operators are not invariant under the symmetry $s \to -s$ that leaves $\mathcal{Q}$ invariant. The interactions of order two, on the other hand, will correctly be nonzero and take on the same value $g^3 = g^5 = g^6 = g^9 = g^{10} = g^{12}$. The value of the four-body interaction is:

$$g^{15} = \frac{1}{2^4}\left[2 \log\rho_2 - 2\binom{4}{1}\log\rho_1 + \binom{4}{2}\log\rho_0\right]. \qquad (21)$$

This, in general, is different from zero. Indeed, a model with two- and four-body interactions shares the same partition $\mathcal{Q}$ of Equation (20). Therefore, unlike in the two-spin example, the symmetries of the $\mathcal{Q}$ model do not allow one to uniquely recover the generative model (Figure 1, left). Rather, the inferred model has a fourth-order interaction (Figure 1, right) that cannot be excluded on the basis of symmetries alone. Note that there are $2^{2^4 - 1} = 32{,}768$ possible models of four spins; in this case, symmetries allow us to reduce the set of possible models to just two.
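A quick numerical check of this example (our own illustration, reusing the phi() helper from earlier sketches) confirms that $\chi_j^\mu$ vanishes for every odd-order operator under the partition of Equation (20), while even orders, including $\mu = 15$, survive:

```python
import itertools
import numpy as np

def phi(mu, s):
    out = 1
    for i, si in enumerate(s):
        if (mu >> i) & 1:
            out *= si
    return out

states = list(itertools.product([-1, 1], repeat=4))
parts = [[s for s in states if abs(sum(s)) == 2 * j] for j in (0, 1, 2)]
for mu in range(1, 16):
    chi = [sum(phi(mu, s) for s in Q) for Q in parts]
    print(mu, bin(mu).count('1'), chi)
# chi = [0, 0, 0] for all odd orders; mu = 15 gives [6, -8, 2],
# matching the coefficients of Eq. (21)
```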

4.2. Exchangeable Spin Models

Consider models where $P(s)$ is invariant under any permutation $\pi$ of the spins, i.e., $P(s_1, \ldots, s_n) = P(s_{\pi_1}, \ldots, s_{\pi_n})$. For these models, $P(s)$ only depends on the total magnetization $\sum_i s_i$. For example, the fully-connected Ising model:

$$P(s) = \frac{1}{Z} e^{\frac{\beta}{n} \sum_{i<j} s_i s_j} \qquad (22)$$

belongs to this class. It is natural to consider the partition where $Q_q$ contains all configurations with $q$ spins $s_i = -1$ and $n - q$ spins $s_j = +1$ ($q = 0, 1, \ldots, n$). Therefore, when computing $\chi_q^\mu$, one has to consider $|Q_q| = \binom{n}{q}$ configurations. If $\mu$ involves $m$ spins, then $\binom{m}{j}\binom{n-m}{q-j}$ of these configurations will have $j$ of the $m$ involved spins equal to $-1$, and the operator $\phi^\mu(s)$ takes the value $(-1)^j$ on them. Therefore, $\chi_q^\mu$ only depends on the number $m = |\mu|$ of spins involved:

$$\chi_q^\mu = \chi_{q,m} \equiv \frac{1}{2^n} \sum_{j=0}^{q} \binom{m}{j}\binom{n-m}{q-j} (-1)^j, \qquad m = |\mu|. \qquad (23)$$
This implies that the coefficients $g^\mu$ of terms that involve $m$ spins must all be equal. Indeed, for any two operators $\mu \neq \mu'$ with $|\mu| = |\mu'| = m$:

$$g^\mu = \sum_{q=0}^{n} \chi_{q,m} \log \rho_q = g^{\mu'}. \qquad (24)$$

Therefore, the proposed scheme is able, in this case, to reduce the dimensionality of the inference problem dramatically, to models where the interactions $g^\mu$ only depend on the number $m = |\mu|$ of spins involved in $\phi^\mu$.
Note also that any non-null vector $v^\mu$ such that $\sum_\mu v^\mu = 0$ and $v^\mu = 0$ whenever $|\mu| \neq m$ satisfies:

$$\sum_\mu v^\mu \chi_q^\mu = 0. \qquad (25)$$

The vectors $u_\lambda$ corresponding to the non-zero singular values of $\hat{\chi}$ need to be orthogonal to each of these vectors, so they must be constant over all $\mu$ that involve the same number of spins. In other words, $u_\lambda^\mu = u_\lambda(|\mu|)$ only depends on the number $|\mu|$ of spins involved. A suitable choice of a set of $n$ independent eigenvectors in this space is given by $u_\lambda(m) = a\,\delta_{\lambda,m}$, corresponding to vectors that are constant within the sector of $\mu$ with $|\mu| = \lambda$ and zero outside. In such a case, the sufficient statistics for models of this type are:

$$\psi_\lambda(s) = \delta\!\left(\sum_i s_i,\; n - 2\lambda\right), \qquad (26)$$

as indeed they should be. We note in passing that terms of this form have been used in [20].
Inference can also be carried out directly. We first observe that the $g_\lambda$ are defined up to a constant, which allows us to fix one of them arbitrarily; we take $g_0 = 0$. If $K_\lambda$ is the number of observed configurations with $\lambda$ spins $s_i = -1$, then the equation $\langle \psi_\lambda \rangle = \overline{\psi_\lambda}$ (for $\lambda > 0$) reads:

$$\binom{n}{\lambda} \frac{e^{g_\lambda}}{Z} = \frac{K_\lambda}{N}, \qquad Z = 1 + \sum_{\lambda=1}^{n} \binom{n}{\lambda} e^{g_\lambda}, \qquad (27)$$

so that, after some algebra,

$$g_\lambda = \log\frac{K_\lambda}{\binom{n}{\lambda} K_0}. \qquad (28)$$

From this, one can go back to the couplings $g^\mu$ of the operators using:

$$g^\mu = \sum_{\lambda=0}^{n} \log\!\left[\frac{K_\lambda}{\binom{n}{\lambda} K_0}\right] \frac{1}{2^n} \sum_{j=0}^{\lambda} \binom{|\mu|}{j}\binom{n - |\mu|}{\lambda - j} (-1)^j. \qquad (29)$$
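The closed form of Equation (28) is simple to implement; the following sketch (our own helper, which assumes $K_0 > 0$, i.e., that the all-up sector has been observed) recovers the $g_\lambda$ from the sector counts $K_\lambda$, with unobserved sectors producing the divergences discussed in the next section.

```python
import numpy as np
from math import comb, log

def exchangeable_couplings(samples, n):
    """g_lambda = log[ K_lambda / (C(n, lambda) K_0) ], with g_0 = 0 (Eq. (28))."""
    K = np.zeros(n + 1)
    for s in samples:
        K[list(s).count(-1)] += 1     # count configurations with lambda down-spins
    return [log(K[lam] / (comb(n, lam) * K[0])) if K[lam] > 0 else -np.inf
            for lam in range(n + 1)]

# Toy usage with a hand-made sample of n = 3 spins
samples = [(1, 1, 1)] * 4 + [(-1, 1, 1)] * 3 + [(1, -1, 1)] * 3 + [(-1, -1, -1)]
print(exchangeable_couplings(samples, 3))
# g_0 = 0; g_1 = log(6 / (3 * 4)); the unobserved lambda = 2 sector gives -inf
```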
Figure 2 illustrates this procedure for the case of the mean-field (pairwise) Ising model of Equation (22). As the figure shows, the procedure outlined above identifies the right model when the number of samples is large enough. If $N$ is not large enough, large deviations from the theoretical values start to arise in the couplings of highest order, especially if $\beta$ is large.

4.3. The Deep Under-Sampling Limit

The case where the number $N$ of sampled configurations is so small that some of the configurations are never observed deserves some comments. As we have seen, taking the frequency partition $\mathcal{K}$, where $Q_k = \{s : k_s = k\}$, if $Q_0 \neq \emptyset$, then divergences can manifest themselves in those couplings for which $\chi_0^\mu \neq 0$.
It is instructive to consider the deep under-sampling regime, where the number $N$ of sampled configurations is so small that each configuration is observed at most once. This occurs when $N \ll 2^n$. In this case, the most likely partitions are (i) the one where all states have the same probability, $\mathcal{Q}^{(0)}$, and (ii) the one where states observed once have probability $\rho/N$ and states not yet observed have probability $(1 - \rho)/(2^n - N)$, i.e., $\mathcal{Q}^{(1)} = \{Q_0, Q_1\}$ with $Q_k = \{s : k_s = k\}$, $k = 0, 1$. Following [16], it is easy to see that, generically, the probability of model $\mathcal{Q}^{(0)}$ outweighs that of model $\mathcal{Q}^{(1)}$, because $P\{\hat{s} \,|\, \mathcal{Q}^{(1)}\} \ll P\{\hat{s} \,|\, \mathcal{Q}^{(0)}\}$. Under $\mathcal{Q}^{(0)}$, it is easy to see that $\chi_0^\mu = 0$ for all $\mu > 0$. This, in turn, implies that $g^\mu = 0$ exactly for all $\mu > 0$. We reach the conclusion that no interaction can be inferred in this case [21].
Taking instead the partition $\mathcal{Q}^{(1)}$, a straightforward calculation shows that Equation (8) leads to $g^\mu = a\,\overline{\phi^\mu}$, where $a$ should be fixed in order to solve Equation (3). It is not hard to see that this leads to $a \to \infty$. This is necessary in order to recover the empirical averages, which are computed assuming that unobserved states $s \in Q_0$ have zero probability.
This example suggests that the divergences that occur when assuming the K partition, because of unobserved states ( k s = 0 ) can be removed by considering partitions where unobserved states are clamped together with states that are observed once.
These singularities arise because, when all the singular values are considered, the maximum entropy distribution exactly reproduces the empirical distribution. This suggests a further method to remove the singularities: consider only the first singular values (those with the largest $\Lambda_\lambda$) and neglect the others, i.e., set $g_\lambda = 0$ for all other $\lambda$'s. It is easy to see that this solves the problem in the deep under-sampling regime considered above: there, only one singular value exists, and when it is neglected, one recovers the result $\hat{g}^\mu = 0$ for all $\mu$. In order to illustrate this procedure in a more general setting, we turn to the specific case of the U.S. Supreme Court data [8].
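In practice, this truncation can be implemented by keeping the top-$\ell$ columns of the $\psi$ matrix and refitting their couplings by moment matching, rather than by the closed form of Equation (17), which diverges when unobserved states contribute. The sketch below (our own names; Psi is the matrix of sufficient statistics built in the SVD sketch of Section 3, and state_index maps each sample to its row) illustrates one way to do this.

```python
import numpy as np

def truncated_fit(Psi, Lam, state_index, ell, lr=0.1, steps=10000):
    """Keep only the ell components with largest singular values and refit."""
    order = np.argsort(Lam)[::-1][:ell]           # top-ell singular values
    Psi_top = Psi[:, order]
    psi_bar = Psi_top[state_index].mean(axis=0)   # empirical averages of psi
    g = np.zeros(ell)
    for _ in range(steps):                        # maximum likelihood refit
        h = Psi_top @ g
        w = np.exp(h - h.max())
        g += lr * (psi_bar - (w / w.sum()) @ Psi_top)
    return order, g                               # g_lambda = 0 for all others
```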

4.4. A Real-World Example

We have applied the inference scheme to the data of [8], which refer to the decisions of the U.S. Supreme Court on 895 cases. The U.S. Supreme Court is composed of nine judges, each of whom casts a vote against ($s_i = -1$) or in favor ($s_i = +1$) of a given case. Therefore, this is an $n = 9$ spin system for which we have $N = 895$ observations. The work in [8] fitted this dataset with a fully-connected pairwise spin model; we refer to [8] for details on the dataset and on the analysis. The question we wish to address here is whether the statistical dependence between judges of the U.S. Supreme Court can really be described by pairwise interactions, which would hint at the direct influence of one judge on another, or whether higher order interactions are also present.
In order to address this issue, we also studied a dataset of $n = 9$ spins generated from the pairwise interacting model of Equation (22), from which we drew $N = 895$ independent samples. The value $\beta = 2.28$ was chosen so as to match the average value of the two-body interactions fitted on the true dataset. This allows us to test the ability of our method to recover the correct model when no assumption on the model is made.
As discussed above, the procedure of the previous section yields estimates $\hat{g}^\mu$ that allow us to recover the empirical averages of all the operators. For a finite sample size $N$, these are likely to be affected by considerable noise, which is expected to render the estimates $\hat{g}^\mu$ extremely unstable. In particular, since the sample contains unobserved states, i.e., states with $k_s = 0$, we expect some of the parameters $g^\mu$ to diverge or, with finite numerical precision, to attain large values.
Therefore, we also performed inference considering only the components with the largest $\Lambda_\lambda$ in the singular value decomposition. Table 1 reports the values of the estimated parameters $\hat{g}_\lambda$ obtained for the U.S. Supreme Court when only the top $\ell = 2$ to $7$ singular values are considered, and compares them to those obtained when all singular values are considered. We observe that when enough singular values are included, the estimated couplings converge to stable values, which are very different from those obtained when all 18 singular values are considered. This signals that the instability due to unobserved states can be cured by neglecting small singular values $\Lambda_\lambda \ll 1$.
This is confirmed by Figure 3, which shows that the estimates $\hat{g}^\mu$ are much more stable when few singular values are considered (top right panel). The top left panel, which refers to synthetic data generated from Equation (22), confirms this conclusion: the estimates $\hat{g}^\mu$ are significantly larger for two-body interactions than for higher order and one-body interactions, as expected. Yet, when all singular values are considered, the estimated values of the two-body interactions fluctuate around values that are much larger than the theoretical one ($\beta/n \simeq 0.2533$) and than those estimated from fewer singular values.
In order to test the performance of the inferred couplings, we measure for each operator $\mu$ the change:

$$\Delta_\mu = \max_{g:\, g^\mu = 0} \sum_{\mu'} g^{\mu'}\, \overline{\phi^{\mu'}} \;-\; \sum_{\mu'} \hat{g}^{\mu'}\, \overline{\phi^{\mu'}} \qquad (30)$$

in log-likelihood when $g^\mu$ is set to zero. If $\Delta_\mu$ is positive, or small and negative, the coupling $g^\mu$ can be set to zero without much affecting the ability of the model to describe the data. A large, negative $\Delta_\mu$ instead signals a relevant interaction $g^\mu$.
Clearly, $\Delta_\mu \le 0$ for all $\mu$ when $\hat{g}^\mu$ is computed using all the $q$ components, because in that case the log-likelihood reaches the maximal value it can possibly achieve. When not all singular values are used, $\Delta_\mu$ can also attain positive values.
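One way to evaluate this diagnostic numerically (a sketch under our own conventions, not the authors' code; phi is the $(2^n, M)$ operator matrix used in the earlier sketches) is to refit the model with the gradient component of $g^\mu$ masked so that it stays at zero, and compare per-sample log-likelihoods, including the $-\log Z$ term:

```python
import numpy as np

def fit(phi, phi_bar, clamp=None, lr=0.1, steps=20000):
    """Maximize the per-sample log-likelihood; clamp holds one g_mu at zero."""
    g = np.zeros(phi.shape[1])
    for _ in range(steps):
        h = phi @ g
        p = np.exp(h - h.max())
        p /= p.sum()
        grad = phi_bar - p @ phi
        if clamp is not None:
            grad[clamp] = 0.0          # constrained maximization, g_mu = 0
        g += lr * grad
    return g

def loglik(g, phi, phi_bar):
    """Per-sample log-likelihood: sum_mu g_mu phi_bar_mu - log Z."""
    return g @ phi_bar - np.log(np.exp(phi @ g).sum())

def delta(phi, phi_bar, mu):
    """Delta_mu as in Eq. (30): <= 0 when the unconstrained fit is exact."""
    return (loglik(fit(phi, phi_bar, clamp=mu), phi, phi_bar)
            - loglik(fit(phi, phi_bar), phi, phi_bar))
```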
Figure 3 confirms our conclusion that inference using all the components is unstable. Indeed, for the synthetic data, the loss in likelihood is spread out over operators of all orders when all singular values are considered. When few singular values are considered, instead, the loss in likelihood is heavily concentrated on two-body terms (Figure 3, bottom left). Pairwise interactions stick out prominently because $\Delta_\mu < 0$ for all two-body operators $\mu$. Still, we see that some of the higher order interactions, of even order, also generate significant likelihood losses.
With this insight, we can now turn to the U.S. Supreme Court data, focusing on inference with few singular values. Pairwise interactions stick out as having both sizable $\hat{g}^\mu$ (Figure 3, top right) and significant likelihood losses (Figure 3, bottom right). Indeed, the top interactions (those with minimal $\Delta_\mu$) are predominantly pairwise ones. Figure 4 shows the hypergraph obtained by considering the top 15 interactions [22], which are two- or four-body terms (see the caption for details). Comparing this with the synthetic data, where we find that the top 19 interactions are all pairwise, we conjecture that the four-body interactions may not be spurious. The resulting network clearly reflects the orientation of individual judges across an ideological spectrum going from liberal to conservative positions (as defined in [8]). Interestingly, while two-body interactions describe a network polarized across this spectrum, with two clear groups, four-body terms appear to mediate the interactions between the two groups. The prevalence of two-body interactions suggests that direct interaction between the judges is clearly important; yet, higher order interactions seem to play a relevant role in shaping their collective behavior.
As in the analysis in [8], single-body terms, representing a priori biases of individual judges, are not very relevant [23].

5. Conclusions

The present work represents a first step towards a Bayesian model selection procedure for spin models with interactions of arbitrary order. Rather than tackling the problem directly, which would imply comparing an astronomical number of models even for moderate $n$, we show that model selection can be performed first on mixture models, and the result can then be projected onto the space of spin models. This approach spots symmetries between states that occur with a similar frequency, which impose constraints on the parameters $g^\mu$. As we have seen, in simple cases these symmetries are enough to recover the correct sparse model, imposing $g^\mu = 0$ for all those interactions $\phi^\mu$ that are not consistent with them. These symmetries allow us to derive a set of sufficient statistics $\psi_\lambda(s)$ (the relevant variables) whose empirical values allow one to derive the maximum likelihood parameters $\hat{g}_\lambda$. The number $q$ of sufficient statistics is given by the number of sets in the optimal partition $\mathcal{Q}^*$ of states. Therefore, the dimensionality of the inference problem is not related to the number of different interaction terms $\phi^\mu(s)$ (or, equivalently, of parameters $g^\mu$), but is rather controlled by the number of different frequencies observed in the data. As the number $N$ of samples increases, $q$ increases, and so does the dimensionality of the inference problem, until one reaches the well-sampled regime ($N \gg 2^n$), where all states $s$ are well resolved in frequency.
It has been observed [11] that the family of probability distributions of the form (5) is endowed with a hierarchical structure that implies that high-order and low-order interactions are entangled in a nontrivial way. For example, we observe a non-trivial dependence between two- and four-body interactions. On the other hand, Ref. [18] shows that the structure of interdependence between operators in a model is not simply related to the order of the interactions and is invariant with respect to gauge transformations that do not conserve the order of operators. This, combined with the fact that our approach does not introduce any explicit bias to favor an interaction of any particular order, suggests that the approach generates a genuine prediction on the relevance of interactions of a particular order (e.g., pairwise). Yet, it would be interesting to explore these issues further, combining the quasi-orthogonal decomposition introduced in [11] with our approach.
It is interesting to contrast our approach with the growing literature on sloppy models (see, e.g., [19]). Transtrum et al. [19] have observed that inference of a given model is often plagued by overfitting that causes large errors in particular combinations of the estimated parameters.
Our approach is markedly different in that we start from Bayesian model selection right from the beginning, and hence we rule out overfitting from the outset. Our decomposition in singular values identifies those directions in the space of parameters that allow one to match the empirical distribution while preserving the symmetries between configurations observed with a similar frequency.
The approach discussed in this paper is only feasible when the number of variables $n$ is small. Yet, the generalization to the case where the set $\mathcal{M}$ of interactions is only a subset of the possible interactions is straightforward: it entails setting to zero all couplings $g^\mu$ relative to interactions $\mu \notin \mathcal{M}$. Devising decimation schemes for doing this in a systematic manner, as well as combining our approach with regularization schemes (e.g., LASSO) to recover sparse models, is a promising avenue of research for exploring the space of models.

Acknowledgments

We gratefully acknowledge Edward D. Lee and William Bialek for sharing the data of [8]. We are grateful to Iacopo Mastromatteo, Vijay Balasubramanian, Yasser Roudi, Clélia de Mulatier and Paolo Pietro Mazza for insightful discussions.

Author Contributions

Luigi Gresele and Matteo Marsili conceived the research, performed the analysis and wrote the paper. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Completeness Relation (8)

We notice that the set of operators satisfies the following orthogonality relations:

$$\sum_{\{s\}} \phi^\mu(s)\, \phi^\nu(s) = 2^n\, \delta_{\mu,\nu}; \qquad \text{(A1)}$$

$$\sum_{\{s\}} \phi^\mu(s) = 0, \qquad \forall\, \mu > 0. \qquad \text{(A2)}$$

Then, taking the logarithm of Equation (5), multiplying by $\phi^\nu(s)/2^n$ with $\nu > 0$ and summing over $s$, one finds:

$$\frac{1}{2^n} \sum_{\{s\}} \phi^\nu(s) \left[\sum_\mu \phi^\mu(s)\, g^\mu - \log Z\right] = \sum_\mu \delta_{\mu,\nu}\, g^\mu = g^\nu,$$

where the $\log Z$ term drops out by Equation (A2). Combining the above identity with Equation (6) finally yields Equation (8).

Appendix B. The Posterior Distribution of ϱ

Following [16], we assume a Dirichlet prior for the parameter vector $\varrho$, i.e.,

$$P_0^{\mathcal{Q}}(\varrho) = \frac{\Gamma(a \cdot 2^n)}{\Gamma(a)^{2^n}} \prod_i \rho_i^{a-1}\, \delta\!\left(\sum_i \rho_i - 1\right). \qquad \text{(A3)}$$

This is a natural choice, because the Dirichlet distribution is the conjugate prior for the parameters of the multinomial distribution [24]: the posterior distribution has the same functional form as the prior, and the parameter $a$ can be interpreted as a pseudocount. In other words, the posterior probability of $\varrho$ is still a Dirichlet distribution, i.e.,

$$P_1^{\mathcal{Q}}(\varrho \,|\, \hat{s}) = \Gamma\!\left(\sum_i (k_i + a)\right) \prod_i \frac{\rho_i^{k_i + a - 1}}{\Gamma(k_i + a)}\, \delta\!\left(\sum_i \rho_i - 1\right), \qquad \text{(A4)}$$

where $k_i$ is the number of times state $i$ was observed in the sample. We recall that the choice $a = 0.5$ corresponds to the least informative (Jeffreys) prior [25], whereas for $a = 0$, the expected values of the parameters $\varrho$ coincide with the maximum likelihood estimates.
The log-likelihood of the sample $\hat{s}$ under model $\mathcal{Q}$ is given by:

$$\log P\{\hat{s} \,|\, \mathcal{Q}\} = \sum_j \left[\log\frac{\Gamma(K_j + a)}{\Gamma(a)} - K_j \log m_j\right] - \log\frac{\Gamma(N + aq)}{\Gamma(aq)}, \qquad \text{(A5)}$$

where:

$$K_j = \sum_{s \in Q_j} k_s$$

is the number of sample points in the subset $Q_j$ and $m_j = |Q_j|$ is the number of states in $Q_j$. The work in [16] shows that the partition maximizing the likelihood is one where states with similar frequencies are grouped together, which is a coarse-graining of the $\mathcal{K}$ partition.

Appendix C. The Covariance Matrix of the gμ Parameters

The covariance matrix $\hat{C}$ can be written as:

$$C^{\mu\nu} = \mathrm{Cov}[g^\mu, g^\nu] = \frac{1}{2^{2n}} \sum_{q,q'} \chi_q^\mu\, \chi_{q'}^\nu\, \mathrm{Cov}[\log\rho_q, \log\rho_{q'}]. \qquad \text{(A6)}$$

In order to compute the covariance of the $g$ parameters, $\mathrm{Cov}[\log\rho_q, \log\rho_{q'}]$ must be computed first. For the following considerations, it is useful to define:

$$Z(\lambda) = E\!\left[\prod_q \rho_q^{\lambda_q}\right], \qquad \text{(A7)}$$

where the expectation is taken over the posterior distribution of $\varrho$, Equation (A4), as in [16]. First, one can see that:

$$\frac{\partial^2 Z(\lambda)}{\partial\lambda_q\, \partial\lambda_{q'}} = E\!\left[\delta_{q,q'}\, \rho_q^{\lambda_q} (\log\rho_q)^2 \prod_{q'' \neq q} \rho_{q''}^{\lambda_{q''}}\right] + E\!\left[(1 - \delta_{q,q'})\, \rho_q^{\lambda_q} \log\rho_q\; \rho_{q'}^{\lambda_{q'}} \log\rho_{q'} \prod_{q'' \neq q,q'} \rho_{q''}^{\lambda_{q''}}\right]. \qquad \text{(A8)}$$

This relation, for $\lambda = 0$, yields:

$$\left.\frac{\partial^2 Z(\lambda)}{\partial\lambda_q\, \partial\lambda_{q'}}\right|_{\lambda=0} = E\!\left[\delta_{q,q'} (\log\rho_q)^2 + (1 - \delta_{q,q'}) \log\rho_q \log\rho_{q'}\right] \qquad \text{(A9)}$$

$$= E\!\left[\log\rho_q\, \log\rho_{q'}\right], \qquad \text{(A10)}$$

and, since $Z(0) = 1$, we find that:

$$\mathrm{Cov}[\log\rho_q, \log\rho_{q'}] = \left.\left[\frac{\partial^2 Z(\lambda)}{\partial\lambda_q\, \partial\lambda_{q'}} - \frac{\partial Z(\lambda)}{\partial\lambda_q} \frac{\partial Z(\lambda)}{\partial\lambda_{q'}}\right]\right|_{\lambda=0} \qquad \text{(A11)}$$

$$= \left.\frac{\partial^2 \log Z(\lambda)}{\partial\lambda_q\, \partial\lambda_{q'}}\right|_{\lambda=0}. \qquad \text{(A12)}$$

Given the posterior distribution of $\varrho$, the expression in Equation (A7) reads (up to a $\lambda$-independent normalization, which does not affect the second derivatives of $\log Z$):

$$Z(\lambda) = \frac{1}{\prod_q m_q^{\lambda_q}} \cdot \frac{\prod_q \Gamma(K_q + \lambda_q + a)}{\Gamma\!\left(\sum_q (K_q + \lambda_q + a)\right)}. \qquad \text{(A13)}$$

Then:

$$\frac{\partial^2 \log Z(\lambda)}{\partial\lambda_q\, \partial\lambda_{q'}} = \frac{\partial^2}{\partial\lambda_q\, \partial\lambda_{q'}} \left[\sum_{q''} \log\Gamma(K_{q''} + \lambda_{q''} + a) - \log\Gamma\!\left(\sum_{q''} (K_{q''} + \lambda_{q''} + a)\right)\right] = \delta_{q,q'}\, \psi^{(1)}(K_q + \lambda_q + a) - \psi^{(1)}\!\left(\sum_{q''} (K_{q''} + \lambda_{q''} + a)\right),$$

where $\psi^{(n)}(x) = \frac{d^{n+1}}{dx^{n+1}} \log\Gamma(x)$ is the polygamma function.

Evaluated at $\lambda = 0$, and using $\sum_q K_q = N$, this yields:

$$\mathrm{Cov}[\log\rho_q, \log\rho_{q'}] = C_{q,q'} = \delta_{q,q'}\, \psi^{(1)}(K_q + a) - \psi^{(1)}(N + aq). \qquad \text{(A14)}$$

Therefore, the covariance matrix of the $\log\rho_q$ is composed of a diagonal part plus a constant part (proportional to the matrix of all ones).

Inserting (A14) into (A6):

$$C^{\mu\nu} = \mathrm{Cov}[g^\mu, g^\nu] = \frac{1}{2^{2n}} \sum_q \chi_q^\mu \chi_q^\nu\, \psi^{(1)}(K_q + a) - \frac{1}{2^{2n}}\, \psi^{(1)}(N + aq) \sum_q \chi_q^\mu \sum_{q'} \chi_{q'}^\nu \qquad \text{(A15)}$$

$$= \frac{1}{2^{2n}} \sum_q \chi_q^\mu \chi_q^\nu\, \psi^{(1)}(K_q + a), \qquad \text{(A16)}$$

due to the fact that, by Equation (A2), $\sum_q \chi_q^\nu = 0$ for all $\nu > 0$.

Appendix D. Sufficient Statistics

Property (10) can be used to give a proof of Equation (12).

Let $v_j^\mu$, $j = 1, \ldots, 2^n - 1 - q$, be the vectors such that $\sum_\mu v_j^\mu g^\mu = 0$. Together with the $u_\lambda^\mu$, these constitute an orthonormal basis of the $(2^n - 1)$-dimensional space with coordinates $g^\mu$. This implies the identity:

$$\delta_{\mu,\nu} = \sum_\lambda u_\lambda^\mu u_\lambda^\nu + \sum_j v_j^\mu v_j^\nu,$$

which can be inserted in the expression:

$$\sum_\mu g^\mu \phi^\mu(s) = \sum_{\mu,\nu} g^\mu \phi^\nu(s)\, \delta_{\mu,\nu} \qquad \text{(A17)}$$

$$= \sum_\lambda \left(\sum_\mu u_\lambda^\mu g^\mu\right) \left(\sum_\nu u_\lambda^\nu \phi^\nu(s)\right) + \sum_j \left(\sum_\mu v_j^\mu g^\mu\right) \left(\sum_\nu v_j^\nu \phi^\nu(s)\right), \qquad \text{(A18)}$$

which yields the desired result, because $\sum_\mu v_j^\mu g^\mu = 0$ for all $j$.

References and Notes

  1. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  2. Pitman, E.J.G. Sufficient statistics and intrinsic accuracy. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1936; Volume 32, pp. 567–579. [Google Scholar]
  3. Darmois, G. Sur les lois de probabilité à estimation exhaustive. C. R. Acad. Sci. Paris 1935, 200, 1265–1266. (In French) [Google Scholar]
  4. Koopman, B.O. On distributions admitting a sufficient statistic. Trans. Am. Math. Soc. 1936, 39, 399–409. [Google Scholar] [CrossRef]
  5. Ackley, D.H.; Hinton, G.E.; Sejnowski, T.J. A Learning Algorithm for Boltzmann Machines. Cogn. Sci. 1985, 9, 147–169. [Google Scholar] [CrossRef]
  6. Schneidman, E.; Berry, M.J., II; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef] [PubMed]
  7. Nguyen, H.C.; Zecchina, R.; Berg, J. Inverse statistical problems: From the inverse Ising problem to data science. arXiv, 2017; arXiv:1702.01522. [Google Scholar]
  8. Lee, E.; Broedersz, C.; Bialek, W. Statistical mechanics of the US Supreme Court. J. Stat. Phys. 2015, 160, 275–301. [Google Scholar] [CrossRef]
  9. Wainwright, M.J.; Jordan, M.I. Variational inference in graphical models: The view from the marginal polytope. In Proceedings of the Annual Allerton Conference on Communication Control and Computing, Allerton, IL, USA, 23–25 September 1998; Volume 41, pp. 961–971. [Google Scholar]
  10. Sejnowski, T.J. Higher-order Boltzmann machines. AIP Conf. Proc. 1986, 151, 398–403. [Google Scholar]
  11. Amari, S. Information Geometry on Hierarchy of Probability Distributions. IEEE Trans. Inf. Theory 2001, 47, 1701–1711. [Google Scholar]
  12. Margolin, A.; Wang, K.; Califano, A.; Nemenman, I. Multivariate dependence and genetic networks inference. IET Syst. Biol. 2010, 4, 428–440. [Google Scholar] [CrossRef] [PubMed]
  13. Merchan, L.; Nemenman, I. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks. J. Stat. Phys. 2016, 162, 1294–1308. [Google Scholar] [CrossRef]
  14. Limiting inference schemes to pairwise interactions is non-trivial when variables take more than two values (e.g., Potts spins). A notable example is the inference of protein contacts from amino acid sequences. There, each variable can take 20 possible values; hence, there are 200 parameters for each pair of positions. Sequences are typically n ∼ 100 amino acids long, so a pairwise model contains 200·n²/2 ∼ 10⁶ parameters. In spite of the fact that the number of available sequences is much smaller than that (i.e., N ∼ 10³–10⁴), learning Potts model parameters has been found to be an effective means to predict structural properties of proteins [7]. However, we will not enter into details related to the Potts model in the present work.
  15. As already pointed out in [5], any higher order interaction can be reduced to pairwise interaction, introducing hidden variables. Conversely, higher order interactions may signal the presence of hidden variables.
  16. Haimovici, A.; Marsili, M. Criticality of mostly informative samples: A Bayesian model selection approach. J. Stat. Mech. Theory Exp. 2015, 2015, P10013. [Google Scholar] [CrossRef]
  17. Collins, M.; Dasgupta, S.; Schapire, R.E. A Generalization of Principal Component Analysis to the Exponential Family. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2001; pp. 617–624. [Google Scholar]
  18. Beretta, A.; Battistin, C.; Mulatier, C.; Mastromatteo, I.; Marsili, M. The Stochastic complexity of spin models: How simple are simple spin models? arXiv, 2017; arXiv:1702.07549. [Google Scholar]
  19. Transtrum, M.K.; Machta, B.B.; Brown, K.S.; Daniels, B.C.; Myers, C.R.; Sethna, J.P. Perspective: Sloppiness and emergent theories in physics, biology, and beyond. J. Chem. Phys. 2015, 143, 010901. [Google Scholar] [CrossRef] [PubMed]
  20. Tkačik, G.; Marre, O.; Mora, T.; Amodei, D.; Berry, M.J., II; Bialek, W. The simplest maximum entropy model for collective behavior in a neural network. J. Stat. Mech. Theory Exp. 2013, 2013, P03011. [Google Scholar] [CrossRef]
  21. Notice that other inference methods may infer non-zero interactions in this case [7]. Note also that the statistics of the frequencies can be very different if one takes a subset of n′ < n spins, so the present approach may predict gμ ≠ 0 when the same dataset is restricted to a subset of spins.
  22. A conservative estimate of the number of significant interactions is given by the number of independent parameters gλ in our data. These are 18 in the U.S. Supreme Court data and 12 in the synthetic data.
  23. Reference [8] remarks that the definitions of “yes” and “no” are somewhat arbitrary and do not carry any information on the political orientation associated with a given vote, since they are decided in lower courts; it also shows that, even when a “left-wing/right-wing” label is attached to the “yes/no” votes, the fields alone do not explain the data well.
  24. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; Chapman and Hall/CRC Press: Boca Raton, FL, USA, 2014; Volume 2. [Google Scholar]
  25. Box, G.E.P.; Tiao, G.C. Bayesian Inference in Statistical Analysis; Addison-Wesley Publishing Company: Boston, MA, USA, 1973. [Google Scholar]
Figure 1. A representation of the two models consistent with the Q partition in Equation (20). Blue links represent pairwise interactions and the shaded area represents a four-body interaction.
Figure 2. Inferred parameters of the mean-field Ising model of Equation (22) with $n = 10$ (bottom) and $n = 20$ (top) spins (only interactions up to $m = 10$ spins are shown for $n = 20$). (Left) Couplings $g_m$ of the $m$-th order interactions as a function of $\beta$ ($N = 10^3$ samples for $n = 10$ and $N = 10^4$ for $n = 20$). (Right) $g_m$ as a function of $N$ for $\beta = 1$. The correct value $g_2 = \beta/n$ is shown as a full line.
Figure 3. Inference of a system of $n = 9$ spins from a dataset of $N = 895$ samples. (Left) Data generated from the pairwise Ising model of Equation (22) with $\beta = 2.28$. (Right) Data from the U.S. Supreme Court [8]. The upper panels report the estimated values of the parameters $\hat{g}^\mu$ as a function of the order $m$ of the interaction. Different colors refer to inference limited to the largest $\ell = 2, 3, 5, 7$ singular values, or to the case when all singular values are considered. The lower panels report the change in log-likelihood (per sample point) when a single parameter $g^\mu$ is set to zero, as a function of the order $m = |\mu|$ of the interaction.
Figure 4. Hypergraph of the top 15 interactions between the nine judges of the second Rehnquist Court. Judges are represented as nodes, with labels referring to their initials (as in [8]). Two-body interactions are represented by (red) links whose width increases with $|\Delta_\mu|$, whereas four-body interactions are represented by (green) shapes joining four nodes. The shade of each node represents the ideological orientation of the judge, as reported in [8], from liberal (black) to conservative (white).
Table 1. Singular values and estimated parameters for the U.S. Supreme Court data. The parameters $\hat{g}_\lambda^{(\ell)}$ refer to maximum entropy estimates of the model that considers only the top $\ell$ singular values (i.e., $\lambda \le \ell$), whereas $\hat{g}_\lambda$ in the last column refers to the parameters estimated using all singular values.

| $\lambda$ | $\Lambda_\lambda$ | $\hat{g}_\lambda^{(2)}$ | $\hat{g}_\lambda^{(3)}$ | $\hat{g}_\lambda^{(4)}$ | $\hat{g}_\lambda^{(5)}$ | $\hat{g}_\lambda^{(7)}$ | $\hat{g}_\lambda$ |
|---|---|---|---|---|---|---|---|
| 1 | 0.528 | 0.946 | 1.023 | 1.347 | 1.512 | 1.510 | 3.680 |
| 2 | 0.250 | −0.506 | −0.573 | −0.688 | −0.722 | −0.722 | −1.213 |
| 3 | 0.159 | 0 | 0.256 | 0.358 | 0.378 | 0.377 | 0.519 |
| 4 | 0.102 | 0 | 0 | −0.436 | −0.492 | −0.491 | −0.601 |
| 5 | 0.073 | 0 | 0 | 0 | −0.178 | −0.131 | −0.152 |
| 6 | 0.062 | 0 | 0 | 0 | 0 | 0.018 | 0.087 |
| 7 | 0.062 | 0 | 0 | 0 | 0 | −0.010 | −0.041 |
| 8 | 0.055 | 0 | 0 | 0 | 0 | 0 | −0.222 |
