Article

The Onset of Parisi’s Complexity in a Mismatched Inference Problem

by Francesco Camilli 1,†, Pierluigi Contucci 2,† and Emanuele Mingione 2,*,†
1 The Abdus Salam International Center for Theoretical Physics, 34151 Trieste, Italy
2 Dipartimento di Matematica, Alma Mater Studiorum—Università di Bologna, 40127 Bologna, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2024, 26(1), 42; https://doi.org/10.3390/e26010042
Submission received: 27 November 2023 / Revised: 21 December 2023 / Accepted: 25 December 2023 / Published: 30 December 2023

Abstract: We show that a statistical mechanics model where both the Sherrington–Kirkpatrick and Hopfield Hamiltonians appear, which is equivalent to a high-dimensional mismatched inference problem, is described by a replica symmetry-breaking Parisi solution.

1. Introduction

Beginning with Parisi’s seminal works on the Sherrington–Kirkpatrick (SK) model [1,2], the ideas and tools developed in spin glass theory have spread across many other research fields, such as computer science, probability theory, and neural networks [3,4,5,6,7]. From a mathematical perspective, efforts to rigorously prove Parisi’s theory have yielded powerful techniques, such as interpolation methods [8,9], stochastic stability [10], and synchronization [11], which are currently instrumental in analyzing numerous disordered systems.
In this work, we will consider a family of mean-field spin glasses whose Hamiltonian contains two types of random interactions: the first is the SK type, while the second is a Hopfield model with a finite number of patterns. This class of models can also be interpreted as a high-dimensional inference problem known as a spiked Wigner model in a mismatched setting [12,13,14,15,16].
Our main result is a representation of the thermodynamic limit of the quenched pressure per particle in terms of a variational problem of Parisi type. The proof relies on two main ingredients: Guerra’s replica symmetry-breaking bound, which allows us to control the SK contribution, and adaptive interpolation, which is employed to linearize the Hopfield interaction. We start with a review of the SK model in Section 2 and then lay the ground for the inferential interpretation of the model under study in Section 3. In Section 4, we define the model and rigorously identify its exact solution. Finally, we describe some interesting challenges for future investigations.

2. The SK Model

The SK model was introduced in the 1970s by D. Sherrington and S. Kirkpatrick [17] and stands as an explicitly solvable mean-field spin glass. In their work, the authors discovered that the solution obtained through the replica symmetric (RS) approximation was not correct at low temperature. With a groundbreaking approach, Parisi identified a new type of solution, nowadays called replica symmetry breaking (RSB), which proved to be correct at any temperature, thereby revealing a novel mathematical and physical structure [18].
The SK model is defined by its Hamiltonian, a function of N spins σ = (σ_i)_{i≤N} ∈ {−1, +1}^N:
$$H_N^{SK}(\sigma) = \frac{1}{\sqrt{2N}} \sum_{i,j \le N} z_{ij}\,\sigma_i\sigma_j \tag{1}$$
where z = (z_{ij})_{i,j≤N} is a collection of i.i.d. standard Gaussian random variables. In physical terms, the couplings between pairs of spins can be ferromagnetic or antiferromagnetic with equal probability. Consider also a random variable ξ with E|ξ| < ∞ and a collection ξ = (ξ_i)_{i≤N} of i.i.d. copies of ξ, representing random external fields acting on the spins. The Parisi formula is a representation of the large N limit of the pressure p_N^{SK} defined by
$$p_N^{SK}(\beta,h) = \frac{1}{N}\log \sum_{\sigma\in\{-1,+1\}^N} \exp\Big( \beta H_N^{SK}(\sigma) + h\,\sigma\cdot\xi \Big) \tag{2}$$
In the definition (2), (β, h) ∈ R_{>0} × R are fixed parameters, and the dependence on the realization of the random collections z, ξ is kept implicit. One can prove [5] that p_N^{SK} converges, for almost all realizations of the disorder, to its average p̄_N^{SK}(β, h) = E p_N^{SK}(β, h). Notice that E, taken after the logarithm, averages both the collections z and ξ, which are called quenched variables. The Hamiltonian (1) can also be regarded as a centered Gaussian process with covariance
$$\mathbb E\big[H_N^{SK}(\sigma^1)\,H_N^{SK}(\sigma^2)\big] = \frac{N}{2}\,q_N^2(\sigma^1,\sigma^2) \tag{3}$$
where
$$q_N(\sigma^1,\sigma^2) = \frac{1}{N}\sum_{i=1}^N \sigma_i^1\sigma_i^2 = \frac{1}{N}\,\sigma^1\cdot\sigma^2 \tag{4}$$
is the overlap between the two spin configurations σ^1 and σ^2.
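For the reader who prefers to see these objects numerically, the following Python sketch (ours, not part of the original paper) draws one realization of the couplings z, evaluates the Hamiltonian (1), computes the finite-N pressure (2) by exhaustive enumeration, and evaluates an overlap (4). Rademacher external-field variables ξ_i are an arbitrary choice made here for concreteness, and the enumeration is only feasible for very small N.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def sk_hamiltonian(sigma, z):
    # H_N^SK(sigma) = (1/sqrt(2N)) sum_{i,j<=N} z_ij sigma_i sigma_j, eq. (1)
    N = len(sigma)
    return sigma @ z @ sigma / np.sqrt(2 * N)

def sk_pressure(z, xi, beta, h):
    # p_N^SK(beta, h) = (1/N) log sum_sigma exp(beta H + h sigma.xi), eq. (2)
    N = z.shape[0]
    log_terms = []
    for sigma in itertools.product([-1, 1], repeat=N):
        s = np.array(sigma)
        log_terms.append(beta * sk_hamiltonian(s, z) + h * s @ xi)
    return np.logaddexp.reduce(log_terms) / N

def overlap(s1, s2):
    # q_N(sigma^1, sigma^2) = (1/N) sigma^1 . sigma^2, eq. (4)
    return s1 @ s2 / len(s1)

# N, beta, h are arbitrary small values chosen only for this illustration
N, beta, h = 10, 1.2, 0.3
z = rng.standard_normal((N, N))          # quenched couplings
xi = rng.choice([-1.0, 1.0], size=N)      # Rademacher external-field directions (our choice)
print("p_N^SK =", sk_pressure(z, xi, beta, h))
print("q_N    =", overlap(rng.choice([-1, 1], N), rng.choice([-1, 1], N)))
```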
The Parisi variational principle for the limiting pressure per particle of this model was proved after almost three decades of efforts, and it is mainly due to the works of Guerra [8] and Talagrand [19]. We hereby summarize these milestones in a single theorem.
Theorem 1
(Parisi Formula [8,19]). Let M[0,1] be the space of probability measures on [0,1], β > 0 and y ∈ R. Consider the Parisi functional, defined as
$$\mathcal M[0,1] \ni \chi \;\mapsto\; \mathcal P(\chi;\beta,y) = \log 2 + \Phi_\chi(0,y,\beta) - \frac{\beta^2}{2}\int_0^1 dq\; q\,\chi([0,q]), \tag{5}$$
where Φ_χ(s, y, β) solves the PDE
$$\partial_s\Phi_\chi = -\frac{\beta^2}{2}\Big[ \partial_y^2\Phi_\chi + \chi([0,s])\,(\partial_y\Phi_\chi)^2 \Big], \qquad \Phi_\chi(1,y,\beta) = \log\cosh y. \tag{6}$$
The following holds
$$\lim_{N\to\infty} p_N^{SK}(\beta,h) = \inf_{\chi\in\mathcal M[0,1]} \mathbb E_\xi\, \mathcal P(\chi;\beta,h\xi) \qquad a.s. \tag{7}$$
The key tool in the proof is the (Gaussian) interpolation method, which was introduced in [9] in order to prove the existence of the large N limit of p̄_N^{SK}.
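The Parisi functional is easy to evaluate numerically once χ has finitely many atoms, because the PDE (6) then reduces to nested Gaussian convolutions. The sketch below (ours; the discretization choices are not from the paper) evaluates P(χ; β, y) at a fixed external field value y for a one-step RSB measure χ = m δ_{q1} + (1 − m) δ_{q2}, using Gauss–Hermite quadrature; setting q1 = q2 reproduces the replica symmetric expression appearing later in (13).

```python
import numpy as np

# Gauss-Hermite rule: E_z f(z) ≈ sum_i _p[i] f(_z[i]) for z ~ N(0,1); grid size is arbitrary
_x, _w = np.polynomial.hermite.hermgauss(60)
_z, _p = np.sqrt(2.0) * _x, _w / np.sqrt(np.pi)

def parisi_1rsb(beta, y, q1, q2, m):
    """Parisi functional (5) for chi = m*delta_{q1} + (1-m)*delta_{q2}, 0<=q1<=q2<=1, 0<m<=1.
    The CDF of chi is 0 on [0,q1), m on [q1,q2) and 1 on [q2,1], so the PDE (6) is solved
    segment by segment with log-Exp Gaussian convolutions (a standard 1-RSB computation)."""
    # Phi(q2, y') = log E_z cosh(y' + beta z sqrt(1-q2))
    def Phi_q2(yp):
        return np.log(np.sum(_p * np.cosh(yp + beta * _z * np.sqrt(1.0 - q2))))
    # Phi(q1, y') = (1/m) log E_z exp(m * Phi(q2, y' + beta z sqrt(q2-q1)))
    def Phi_q1(yp):
        vals = np.array([Phi_q2(yp + beta * zz * np.sqrt(q2 - q1)) for zz in _z])
        return np.log(np.sum(_p * np.exp(m * vals))) / m
    # Phi(0, y) = E_z Phi(q1, y + beta z sqrt(q1))
    Phi0 = np.sum(_p * np.array([Phi_q1(y + beta * zz * np.sqrt(q1)) for zz in _z]))
    # penalty (beta^2/2) int_0^1 q chi([0,q]) dq = (beta^2/4) [m (q2^2 - q1^2) + 1 - q2^2]
    penalty = 0.25 * beta**2 * (m * (q2**2 - q1**2) + 1.0 - q2**2)
    return np.log(2.0) + Phi0 - penalty

beta, y = 1.5, 0.2                        # arbitrary illustrative values
print("RS (q1=q2=0.4):", parisi_1rsb(beta, y, 0.4, 0.4, 0.5))
print("1-RSB:         ", parisi_1rsb(beta, y, 0.2, 0.6, 0.4))
```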
The thermodynamic equilibrium induced by the pressure p ¯ N S K is called quenched equilibrium and is defined as follows. Physical quantities (e.g., energy) are functions of the disorder variables z , ξ and the spin configurations σ . Given a function f ( z , ξ , σ ) , its equilibrium value is defined as
$$\mathbb E\langle f\rangle_N := \mathbb E \sum_{\sigma\in\{-1,+1\}^N} G_N(\sigma)\, f(z,\xi,\sigma) \tag{8}$$
where G N is the (random) Boltzmann–Gibbs distribution
$$G_N(\sigma) = \frac{\exp\big( \beta H_N^{SK}(\sigma) + h\,\sigma\cdot\xi \big)}{Z_N} \tag{9}$$
The measure E⟨·⟩_N is called a quenched measure and can be viewed as a two-step measuring process. Initially, for a given realization of the disorder variables z, ξ, one assumes that the system equilibrates according to the canonical Boltzmann–Gibbs distribution G_N, defining a (random) measure on the space of spin configurations. The expectation with respect to G_N is denoted by ⟨·⟩_N, namely
$$\langle f\rangle_N := \sum_{\sigma\in\{-1,+1\}^N} G_N(\sigma)\, f(z,\xi,\sigma) \tag{10}$$
In probabilistic terms, G_N defines a conditional measure given z and ξ. The remaining degrees of freedom, namely z and ξ, are then averaged according to their a priori distribution through E.
An important role is played by the concept of replicas. Replicas are i.i.d. samples from G_N at fixed disorder. Hence, the equilibrium value of a function f(z, ξ, σ^1, …, σ^n) of n replicas and the quenched variables z, ξ is defined by
$$\mathbb E\langle f\rangle_N = \mathbb E \sum_{\sigma^1,\dots,\sigma^n\in\{-1,+1\}^N} \prod_{a\le n} G_N(\sigma^a)\; f(z,\xi,\sigma^1,\dots,\sigma^n). \tag{11}$$
The computation of derivatives of p̄_N^{SK} shows, using integration by parts, that the SK model is fully characterized by the (joint) distribution of the overlap array (q_N(σ^l, σ^{l'}))_{l,l'≤n} ≡ (q_{l,l'})_{l,l'≤n}, namely the overlaps between any finite number n of replicas with respect to the measure (11). The main feature of the Parisi theory is the characterization of the mentioned joint measure by means of two structural properties:
(i)
It is uniquely determined by a one-dimensional marginal, namely the distribution of q_{1,2};
(ii)
The distribution of three replicas has, with probability one, an ultrametric support:
$$\lim_{N\to\infty} \mathbb E\big\langle \mathbb 1\big\{ q_{1,2} \ge \min(q_{1,3},\,q_{2,3}) \big\}\big\rangle = 1. \tag{12}$$
Despite having a mathematical proof of the Parisi Formula (7) for the SK model, (i) and (ii) have been rigorously proved only for the mixed p-spin model [6,20,21], an extension of the SK model whose Hamiltonian also contains higher-order interactions (three-body, four-body, etc.).
One of the crucial instruments for achieving rigorous control of the model is the so-called Ruelle Probability Cascades (RPCs), defined by Ruelle [22] when formalizing the properties of the Generalized Random Energy Model of Derrida [23]. See also the characterization of RPCs in terms of coalescent processes given in [24]. The first direct link between RPCs and the SK model appeared in the work of Aizenman–Sims–Starr [25], where the authors found a representation of the thermodynamic limit of the quenched pressure per particle in terms of the cavity field distribution. This representation strongly suggested that if the thermodynamic limit of the overlap distribution is described by an RPC, then the Parisi formula is correct.
The first signal that the overlap array is described by an RPC was found by Aizenman and Contucci in [10], with the identification of stochastic stability, and by Ghirlanda and Guerra [26]. Both papers establish an (infinite) set of identities for the moments of the overlap array distribution. It turns out that these identities actually imply that the support of the joint distribution of the overlaps is ultrametric, as proved by Panchenko [27]. It should be noted that Panchenko’s theorem requires identities for the overlap moments of all orders. The latter do not hold for the bare SK model, but it can be shown that there exists a perturbation of the Hamiltonian that forces the SK model to satisfy them without affecting the limit of the quenched pressure [28].
Once the validity of the Parisi Formula (7) is established, it is natural to ask for the properties of its solution. The uniqueness of the minimizer of (7) has been assessed by Auffinger and Chen [29], and its properties have been investigated for example in [30,31].
A relevant question about the minimizer is the following: for which values of the parameters (β, h) is the solution of (7) a Dirac delta δ_q for some q ∈ [0,1]? In this case, we say that the model is replica symmetric, and the Parisi Formula (7) reads
$$\lim_{N\to\infty} p_N^{SK}(\beta,h) = \inf_{q\in[0,1]}\Big[ \log 2 + \mathbb E_{z,\xi}\log\cosh\big(\beta z\sqrt q + h\xi\big) + \frac{\beta^2}{4}(1-q)^2 \Big]. \tag{13}$$
The replica symmetric region can be identified [6,32] with the region of parameters (β, h) where the overlap is a self-averaging quantity, namely
$$\lim_{N\to\infty} \mathbb E\big\langle (q_{1,2} - \bar q)^2 \big\rangle_N = 0 \tag{14}$$
where q̄ is exactly the value realizing the infimum in (13). The physics conjecture is that the replica symmetric region is identified by the so-called Almeida–Thouless condition [33]
$$\beta^2\, \mathbb E_{z,\xi}\, \cosh^{-4}\!\big(\beta z\sqrt{\bar q} + h\xi\big) \le 1. \tag{15}$$
The above conjecture has been proved only in the case of a Gaussian external field, ξ_i ∼ N(0,1) [34]. An alternative characterization of the replica symmetric region has been obtained in [6,35]. If the minimizer corresponds to a non-trivial distribution (i.e., with non-zero variance), we say that replica symmetry breaking occurs, and the overlap is not a self-averaging quantity.
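As a numerical illustration of the replica symmetric scenario (ours, with Rademacher external fields assumed purely for concreteness), the sketch below iterates the fixed-point equation q = E tanh²(β z √q + hξ) coming from the stationarity of (13), and then evaluates the left-hand side of the Almeida–Thouless condition (15) at that fixed point.

```python
import numpy as np

_x, _w = np.polynomial.hermite.hermgauss(80)
_z, _p = np.sqrt(2.0) * _x, _w / np.sqrt(np.pi)   # E_z f(z) ≈ sum _p f(_z), z ~ N(0,1)

def rs_fixed_point(beta, h, tol=1e-12, max_iter=10_000):
    """Iterate q <- E_{z,xi} tanh^2(beta z sqrt(q) + h xi), the stationarity
    condition of the replica symmetric functional (13); xi is Rademacher here,
    and by z -> -z symmetry xi = +1 and xi = -1 give the same average."""
    q = 0.5
    for _ in range(max_iter):
        new_q = np.sum(_p * np.tanh(beta * _z * np.sqrt(q) + h) ** 2)
        if abs(new_q - q) < tol:
            break
        q = new_q
    return q

def at_condition(beta, h):
    """Left-hand side of the Almeida-Thouless condition (15):
    beta^2 E cosh^{-4}(beta z sqrt(q) + h xi); RS is expected where this is <= 1."""
    q = rs_fixed_point(beta, h)
    return beta**2 * np.sum(_p * np.cosh(beta * _z * np.sqrt(q) + h) ** (-4))

for beta in (0.5, 1.0, 1.5):   # arbitrary illustrative temperatures
    print(f"beta={beta}: q={rs_fixed_point(beta, 0.3):.4f}, AT lhs={at_condition(beta, 0.3):.4f}")
```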
The Parisi formula has been extended to other mean field models with centered Gaussian interactions: vector spins [36], multispecies models [11,37,38], multiscale models [39,40]. Finally, we mention that the SK model fulfills a remarkable universality property: as long as z i j ’s are independent, centered, and with unit variance, the thermodynamic limit is still described by the Parisi solution [41].
In this work, we show that a class of non-centered Gaussian spin glasses admits a high-dimensional inference interpretation that extends the celebrated correspondence between the spiked Wigner model and the SK model on the Nishimori line, where replica symmetry is always fulfilled [3]. We show that the addition of an SK Hamiltonian to a Hopfield Hamiltonian with a finite number of patterns can be mapped into a high-dimensional mismatched inference problem, where the statistician ignores the correct a priori distribution of the signal components they have to reconstruct. We shall see that even this slight mismatch may lead to the emergence of complexity, namely to the breakdown of replica symmetry, which is instead guaranteed under very mild hypotheses for optimal statisticians.

3. High-Dimensional Inference and Statistical Physics

High-dimensional inference aims at recovering a ground-truth signal, denoted by ξ in the following, which is usually a vector with a very large number of components, from noisy observations of it, denoted by Y. The main feature of this setting is that the dimension of the signal, i.e., the number of real parameters to reconstruct, and the number of available observations are functions of one another, typically polynomially related. For instance, for our purposes, ξ will be a vector in R^N and Y will be an N × N matrix, for a total of N² noisy observations. Hence, if the number of observations becomes large, so does the number of parameters to retrieve. Contrary to what happens in typical low-dimensional settings, where maximum likelihood or Maximum A Posteriori (MAP) approaches yield provably satisfactory reconstruction performance, in a high-dimensional setting this is not always the case. In particular, one needs to devise more refined estimators that exploit the marginal posterior probabilities of each signal component.
Both approaches described above are Bayesian, and the knowledge of a prior distribution on the signal components can play a key role, especially for high-dimensional problems. Furthermore, to compose the posterior measure for the entire signal, one needs the likelihood of the data, which is the probability of an outcome y of the variable Y given a certain ground-truth realization ξ = x. As we shall discuss shortly, under certain hypotheses the Bayesian approach highlights the correspondence of relevant information theoretic quantities with thermodynamic ones. Among them, a key quantity is the mutual information between the signal ξ and the observations Y, which quantifies the residual amount of information left in Y about ξ after the noise corruption. As intuition may suggest, the mutual information gives access to the best reconstruction error that is information theoretically achievable.
Finally, we stress that the high dimensionality of the problem can induce phase transitions in some parameters of the model, such as the so-called signal-to-noise ratio (SNR), which tunes the strength of the signal relative to that of the noise in the observations.

3.1. Bayes-Optimality and Nishimori Identities

For the sake of simplicity, we start by considering a signal ξ = (ξ_i)_{i≤N} ∈ R^N with i.i.d. (independent and identically distributed) components ξ_i ∼ P_ξ, where P_ξ has a finite fourth moment. The observations at the disposal of a statistician can be modeled as a stochastic function of the ground-truth signal, Y = F(ξ; z), where z is the source of randomness, or simply the noise. From a Bayesian perspective, knowing the function F translates directly into having the likelihood of the model, namely the conditional distribution dP_{Y|ξ=x}(y) = p_{Y|ξ=x}(y) dy, which we assume to have a density p_{Y|ξ=x}(y) with respect to the Lebesgue measure. Observe that the likelihood is strongly affected by the nature of the noise.
According to Bayes’ rule, the posterior distribution of ξ given the data is:
$$dP_{\xi|Y=y}(x) = \frac{p_{Y|\xi=x}(y)\, dP_\xi(x)}{Z(y)}, \qquad Z(y) = \int p_{Y|\xi=x}(y)\, dP_\xi(x), \tag{16}$$
where dP_ξ(x) = ∏_{i≤N} dP_ξ(x_i), and Z(y) is the probability of a given realization of the data, sometimes also called the evidence. In practice, the above posterior, which would be ideal for performing inference, is rarely available: the statistician may not be aware of the likelihood, of the correct prior distribution of the signal, or even of both. This motivates the following definition of a special inference setting:
Definition 1
(Bayes optimality). The statistician is said to be Bayes optimal, or in the Bayes-optimal setting, if they are aware of both P_ξ and F(·; z); namely, they have access to the posterior (16).
The above says that an optimal statistician knows everything about the model except for the ground truth ξ itself. The Bayes-optimal setting is thus often used as a theoretical framework to establish information theoretic limits. Indeed, it is known that the mean square error between the ground truth and an estimator ξ̂(y)
$$\mathrm{MSE}(\hat\xi) = \mathbb E\,\big\| \xi - \hat\xi(y) \big\|^2 \tag{17}$$
is minimized by an optimal statistician, who can use the posterior mean as an estimator, yielding the minimum mean square error (MMSE)
$$\mathrm{MMSE} = \mathbb E\,\big\| \xi - \mathbb E[\xi\,|\,Y] \big\|^2. \tag{18}$$
In the following, we shall denote averages with respect to the posterior by ⟨·⟩_Y.
Another important consequence of this setting is the so-called Nishimori identities, which can be stated as follows. Given any continuous bounded function f of the data Y, the ground truth ξ and n − 1 i.i.d. samples (x^(k))_{k=2}^n from the posterior, one has
$$\mathbb E\big\langle f\big(Y,\xi,(x^{(k)})_{k=2}^n\big)\big\rangle_Y = \mathbb E\big\langle f\big(Y,x^{(1)},(x^{(k)})_{k=2}^n\big)\big\rangle_Y, \tag{19}$$
where x^(1) ∼ P_{ξ|Y}. An elementary proof can be found in [42]. These identities enforce a symmetry between replicas drawn from the posterior and the ground truth. For instance, a direct application of the Nishimori identities yields
$$\mathrm{MMSE} = \mathbb E\,\big\| \xi - \langle x\rangle_Y \big\|^2 = \mathbb E\big\langle \| x - \langle x\rangle_Y \|^2\big\rangle_Y. \tag{20}$$
It is important to stress that, as can be seen from the above equation, an optimal statistician is actually able to compute the minimum mean square error using their posterior.
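The Nishimori identities are easy to verify numerically in the simplest conceivable setting, a single Rademacher component observed through a Gaussian channel Y = √μ ξ + z (a toy example of ours, not the matrix model of the next section). There the posterior mean is ⟨x⟩_Y = tanh(√μ Y), and the sketch below checks by Monte Carlo that E[ξ⟨x⟩_Y] = E[⟨x⟩_Y²], which is the identity underlying (20).

```python
import numpy as np

rng = np.random.default_rng(1)

def nishimori_check(mu, n_samples=2_000_000):
    """Scalar Gaussian channel Y = sqrt(mu)*xi + z with Rademacher xi.
    The Bayes posterior mean is <x>_Y = tanh(sqrt(mu)*Y); the Nishimori
    identity behind (20) says E[xi <x>_Y] = E[<x>_Y^2]."""
    xi = rng.choice([-1.0, 1.0], size=n_samples)
    y = np.sqrt(mu) * xi + rng.standard_normal(n_samples)
    post_mean = np.tanh(np.sqrt(mu) * y)
    lhs = np.mean(xi * post_mean)          # E[xi <x>_Y]
    rhs = np.mean(post_mean ** 2)          # E[<x>_Y^2]
    mmse = np.mean((xi - post_mean) ** 2)  # = 1 - E[<x>_Y^2] up to sampling error
    return lhs, rhs, mmse

for mu in (0.5, 1.0, 2.0):   # arbitrary signal-to-noise ratios for the illustration
    lhs, rhs, mmse = nishimori_check(mu)
    print(f"mu={mu}: E[xi<x>]={lhs:.4f}, E[<x>^2]={rhs:.4f}, MMSE={mmse:.4f}")
```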
At this point, the reader will have noticed a similarity with the statistical mechanics formalism. In fact, it is possible to interpret Z(y) as the partition function of a model with Hamiltonian −log p_{Y|ξ=x}(y) and unit inverse absolute temperature. The pressure per particle of such a model would thus be
$$p_N(y) = \frac{1}{N}\log Z(y), \qquad \bar p_N = \mathbb E_Y\, p_N(Y) = -\frac{1}{N} H(Y), \tag{21}$$
namely minus the Shannon entropy of the data per signal component, which is related to the mutual information
$$\frac{1}{N} I_N(Y;\xi) = \frac{1}{N} H(Y) - \frac{1}{N} H(Y\,|\,\xi). \tag{22}$$
The contribution coming from the conditional entropy H(Y|ξ) can be regarded as due only to the noise, since for fixed ξ the only randomness in Y is due to z.
We stress here that Bayes optimality and the Nishimori identities are, under rather mild hypotheses [43], enough to grant replica symmetry in the model, i.e., concentration of its order parameters. For the models we are interested in, the latter can be shown to imply finite-dimensional variational principles for the limiting mutual information.

3.2. The Spiked Wigner Model

The spiked Wigner model was first introduced in [44] as a model for Principal Component Analysis (PCA), and it has since been widely studied; without any pretension of being exhaustive, we refer the interested reader to [42,45,46,47,48,49,50,51]. For our purposes, we restrict ourselves to the case where the signal is an N-dimensional vector of ±1s, drawn from a Rademacher distribution, ξ_i ∼ P_ξ = (δ_{-1} + δ_{+1})/2. The function F(·; z) is a Gaussian channel, namely
$$Y_{ij} = \sqrt{\frac{\mu}{2N}}\,\xi_i\xi_j + z_{ij} \tag{23}$$
where z_{ij} ∼ N(0,1) i.i.d., and μ is a positive parameter called the signal-to-noise ratio. The statistician is tasked with the recovery of ξ given the observations Y. Thanks to the Gaussian nature of the likelihood, the Bayes-optimal posterior measure for this inference problem can be written directly as a Boltzmann–Gibbs random measure:
$$G_N(\sigma) = \frac{1}{Z(z,\xi)}\, e^{H_N(\sigma;z,\xi)}, \qquad H_N(\sigma;z,\xi) = \sum_{i,j=1}^N \Big[ \sqrt{\frac{\mu}{2N}}\, z_{ij}\,\sigma_i\sigma_j + \frac{\mu}{2N}\,\sigma_i\xi_i\sigma_j\xi_j \Big] \tag{24}$$
where we have already exploited the fact that (ξ_i)² = σ_i² = 1, and we denote the posterior samples by σ. Since the quantity we are interested in is the quenched pressure of this model,
$$\bar p_N = \frac{1}{N}\,\mathbb E \log \sum_{\sigma\in\{-1,+1\}^N} \exp H_N(\sigma;z,\xi), \tag{25}$$
which is connected to the mutual information I_N(Y; ξ)/N by a simple shift by an additive constant, we are allowed to perform a gauge transformation without altering its value:
$$z_{ij} \to z_{ij}\,\xi_i\xi_j, \qquad \sigma_i\sigma_j \to \sigma_i\sigma_j\,\xi_i\xi_j. \tag{26}$$
This results in a Hamiltonian that is now independent of the original ground-truth signal
$$H_N(\sigma;z) = \sum_{i,j=1}^N \Big[ \sqrt{\frac{\mu}{2N}}\, z_{ij} + \frac{\mu}{2N} \Big]\sigma_i\sigma_j \tag{27}$$
and the couplings between spins are Gaussian random variables with a mean equal to their variance. This condition identifies a peculiar region of the phase space of a spin-glass model, called the Nishimori line. In fact, the Nishimori identities were first discovered and studied in the context of gauge spin glasses. Despite looking simpler, the above model retains most of the features we need for our study.
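The gauge transformation (26) can be checked exactly at finite N: summing over σ with the Hamiltonian (24) is the same as summing over τ_i = σ_i ξ_i with the Hamiltonian (27), once the noise is replaced by z_{ij} ξ_i ξ_j, which has the same law as z_{ij}. The sketch below (ours; the small size and parameter values are arbitrary) verifies the equality of the two log partition functions for a single realization by exhaustive enumeration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, mu = 8, 1.3                               # arbitrary small instance
z = rng.standard_normal((N, N))
xi = rng.choice([-1.0, 1.0], size=N)

def log_Z(hamiltonian):
    # log sum_{sigma in {-1,+1}^N} exp(H(sigma)) by exhaustive enumeration
    return np.logaddexp.reduce([hamiltonian(np.array(s))
                                for s in itertools.product([-1, 1], repeat=N)])

def H_planted(s):
    # eq. (24): sqrt(mu/2N) z_ij s_i s_j + (mu/2N) s_i xi_i s_j xi_j
    return (np.sqrt(mu / (2 * N)) * s @ z @ s
            + (mu / (2 * N)) * (s @ xi) ** 2)

z_gauged = z * np.outer(xi, xi)              # z_ij -> z_ij xi_i xi_j, same law as z

def H_nishimori(s):
    # eq. (27) with the gauged noise: [sqrt(mu/2N) z'_ij + mu/2N] s_i s_j
    return np.sqrt(mu / (2 * N)) * s @ z_gauged @ s + (mu / (2 * N)) * s.sum() ** 2

print(log_Z(H_planted), log_Z(H_nishimori))  # identical up to floating point error
```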
For inference models with additive Gaussian noise, like the one above, it is possible to prove the so-called I-MMSE relation:
$$\frac{d}{d\mu}\,\frac{I_N(Y;\xi)}{N} = \frac{1}{4N^2}\,\mathbb E\,\big\| \xi\xi^\intercal - \langle\sigma\sigma^\intercal\rangle \big\|_F^2, \tag{28}$$
where ‖·‖_F is the Frobenius norm and ⟨·⟩ denotes the expectation with respect to the Boltzmann–Gibbs measure induced by (27). Hence, once the mutual information is known, the MMSE can be accessed through a derivative with respect to the signal-to-noise ratio. A clarification is in order here: the above is the MMSE for the reconstruction of the rank-one matrix ξξ^⊺ because, due to flip symmetry, we do not have any actual information on the single vector ξ, but only on the spike ξξ^⊺.

3.3. Sub-Optimality and Replica Symmetry Breaking

There are several ways to break Bayes optimality. Some examples: the statistician does not know the signal-to-noise ratio μ [13,52]; the statistician adopts a likelihood different from that of the true model [14]; the statistician adopts a wrong prior [12,53]; combinations of the previous; and many others. We will focus on the case of mismatched priors, where the statistician not only adopts a wrong prior on the ground-truth elements, but is also unaware of the rank of the spiked matrix hidden inside the noise, which is denoted by M. The rest is assumed to be known. The channel of the inference problem is
$$Y_{ij} = \sqrt{\frac{\mu}{2N}}\,\sum_{k=1}^M \xi_i^{(k)}\xi_j^{(k)} + z_{ij}. \tag{29}$$
If the statistician assumes a Rademacher prior on the signal components and a rank-one hidden matrix, they will write a posterior of the form
$$\slashed P_{\xi|z,\xi}(\sigma) = \frac{1}{\slashed Z(z,\xi)}\, \exp H_N(\sigma;z,\xi) \tag{30}$$
where
$$H_N(\sigma;z,\xi) = \sum_{i,j=1}^N \Big[ \sqrt{\frac{\mu}{2N}}\, z_{ij} + \frac{\mu}{2N}\sum_{k=1}^M \xi_i^{(k)}\xi_j^{(k)} \Big]\sigma_i\sigma_j \tag{31}$$
The slash on these quantities emphasizes that they are not the Bayes-optimal ones. In this setting, one can no longer rely on the Nishimori identities, and in principle replica symmetry is no longer guaranteed. On the contrary, as we shall argue later on, a mismatch in the prior alone is already sufficient to cause replica symmetry breaking.
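The sketch below (ours) makes the mismatch concrete: data are generated from the rank-M channel (29), the statistician plugs them into a rank-one Rademacher posterior, and, after dropping σ-independent terms, the resulting log-posterior is exactly the Hamiltonian (31). The rank M = 2 and the other numerical values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, mu = 6, 2, 1.1                                    # arbitrary small instance

xi = rng.choice([-1.0, 1.0], size=(M, N))               # true rank-M signal, Rademacher for simplicity
z = rng.standard_normal((N, N))
Y = np.sqrt(mu / (2 * N)) * (xi.T @ xi) + z             # channel (29)

def H_mismatched(s):
    # eq. (31): [sqrt(mu/2N) z_ij + (mu/2N) sum_k xi_i^(k) xi_j^(k)] s_i s_j
    return (np.sqrt(mu / (2 * N)) * s @ z @ s
            + (mu / (2 * N)) * np.sum((xi @ s) ** 2))

def log_posterior_unnormalized(s):
    # rank-one Gaussian log-likelihood assumed by the statistician,
    # with all sigma-independent terms dropped
    return np.sqrt(mu / (2 * N)) * s @ Y @ s

s = rng.choice([-1.0, 1.0], size=N)
print(H_mismatched(s), log_posterior_unnormalized(s))   # equal: (31) is the mismatched log-posterior
```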

4. The Model

Let M be a fixed integer and k ∈ {1, …, M}. Consider two independent random collections (z_{ij})_{i,j≤N} i.i.d. ∼ N(0,1) and (ξ_i^(k))_{i≤N, k≤M} i.i.d. ∼ P_ξ, where P_ξ is such that E[ξ⁴] < ∞. The above random collections play the role of quenched disorder in the model. Consider N Ising spins σ = (σ_i)_{i≤N} ∈ {+1, −1}^N and the Hamiltonian function
$$H_N(\sigma;\mu,\nu,\lambda) \equiv H_N(\sigma) = H_N^{int}(\sigma) + h\cdot\sigma = \sum_{i,j=1}^N \Big[ \sqrt{\frac{\mu}{2N}}\, z_{ij} + \frac{\nu}{2N}\sum_{k\le M} \xi_i^{(k)}\xi_j^{(k)} \Big]\sigma_i\sigma_j + \sum_{k\le M}\lambda_k\sum_{i=1}^N \xi_i^{(k)}\sigma_i \tag{32}$$
with μ, ν ≥ 0 and λ = (λ_k)_{k≤M} ∈ R^M. Here, H_N^{int} is the interacting part, while
$$h = (h_i)_{i\le N} \equiv h(\lambda,\xi) = \Big( \sum_{k\le M} \lambda_k\,\xi_i^{(k)} \Big)_{i\le N} \tag{33}$$
denotes the random external field acting on the spins. The Hamiltonian (32) is determined by the choice of M, μ, ν, λ and P_ξ. For μ = ν, the interaction term H_N^{int} coincides with the Hamiltonian (31). Note that for some special choices of the parameters, we recover some well-known spin glass models:
  • ν = 0 gives the SK model (1) at inverse temperature β = √μ with random external field h.
  • μ = 0 gives the Hopfield model [6,7,18] with a finite number of patterns (ξ^(k))_{k≤M}.
  • M = 1, ν = μ, λ_1 = 0 and P_ξ = (δ_{-1} + δ_{+1})/2 gives the SK model on the Nishimori line (27). As we have seen in Section 3, the latter can also be viewed as a spiked Wigner model in the Bayes-optimal setting.
Notice that the entire model can be interpreted as a Hopfield model in which the traditional Hebbian matrix Σ_{k≤M} ξ_i^(k) ξ_j^(k) is corrupted by Gaussian noise. Furthermore, if the Hebbian coupling is replaced by a constant matrix, the model reduces to an SK model with the addition of a ferromagnetic interaction, which was studied in [54].
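A minimal numerical sketch (ours; sizes and parameter values are arbitrary) of the Hamiltonian (32) at small N, checking two of the reductions listed above: for ν = 0 it collapses to √μ H_N^{SK}(σ) + h·σ, and for μ = 0 only the Hopfield and external-field terms survive.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 8, 3
z = rng.standard_normal((N, N))
xi = rng.choice([-1.0, 1.0], size=(M, N))   # patterns; Rademacher chosen only for the example

def H_model(s, mu, nu, lam):
    # eq. (32): interaction part plus random external field h = sum_k lam_k xi^(k)
    interaction = (np.sqrt(mu / (2 * N)) * s @ z @ s
                   + (nu / (2 * N)) * np.sum((xi @ s) ** 2))
    field = np.sum(lam * (xi @ s))
    return interaction + field

def H_sk(s):
    # eq. (1)
    return s @ z @ s / np.sqrt(2 * N)

s = rng.choice([-1.0, 1.0], size=N)
mu, lam = 1.3, np.array([0.1, -0.2, 0.3])

# nu = 0: SK model at inverse temperature sqrt(mu) with random field h
lhs = H_model(s, mu, 0.0, lam)
rhs = np.sqrt(mu) * H_sk(s) + np.sum(lam * (xi @ s))
print(np.isclose(lhs, rhs))                 # True

# mu = 0: Hopfield model with M patterns plus the external field
print(H_model(s, 0.0, 0.9, lam))
```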
Our main result is the computation of the thermodynamic limit of the pressure per particle
$$p_N(\mu,\nu,\lambda) = \frac{1}{N}\log\sum_{\sigma\in\{-1,+1\}^N} e^{H_N(\sigma)} \tag{34}$$
whose variance can be shown to converge to 0 as O(N^{-1}), namely:
Lemma 1.
Assume E[ξ_1⁴] < ∞. Then, for any μ, ν ≥ 0 and λ ∈ R^M,
$$\mathbb E\Big[ \big( p_N(\mu,\nu,\lambda) - \mathbb E\, p_N(\mu,\nu,\lambda) \big)^2 \Big] \le \frac{K}{N} \tag{35}$$
where K is a suitable positive constant.
We thus focus on p̄_N(μ, ν, λ) = E p_N(μ, ν, λ). The proof of this lemma makes use of the Efron–Stein concentration inequality to bound the variance; it is simple but tedious and follows closely that of ([12], Lemma 9). We are now in a position to state our main theorem:
Theorem 2
(Variational solution). If E[ξ⁴] < +∞, then
$$\lim_{N\to\infty} \bar p_N(\mu,\nu,\lambda) = \sup_{x\in\mathbb R^M} \varphi(x;\mu,\nu,\lambda) \tag{36}$$
where
$$\varphi(x;\mu,\nu,\lambda) := -\frac{\nu|x|^2}{2} + \mathbb E\,\mathcal P_{\mu,\,h(x,\lambda,\xi)}, \qquad x = (x_k)_{k\le M}\in\mathbb R^M, \tag{37}$$
and P is the Parisi functional (5) with a random external field
$$h(x,\lambda,\xi) = \sum_{k\le M} (\lambda_k + \nu x_k)\,\xi^{(k)} \tag{38}$$
and E denotes the expectation with respect to the i.i.d. patterns ξ^(k) ∼ P_ξ. The consistency equations are
$$x_k = \partial_{x_k}\, \mathbb E\,\mathcal P_{\mu,\,h(x,\lambda,\xi)}, \qquad k\le M. \tag{39}$$
Moreover, there exists C > 0 such that for any k ≤ M one has |x_k| ≤ C, and the supremum in (36) can be restricted to [−C, C]^M.
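To get a feeling for the structure of the variational problem (36)-(39), the sketch below (ours) replaces the Parisi functional P_{μ,h} with its replica symmetric approximation (which, as this paper argues, is in general not exact for this mismatched model) and iterates the resulting two-dimensional fixed point in (x, q) for M = 1 and Rademacher patterns. It is a numerical illustration only, not a computation of the true limit.

```python
import numpy as np

_x, _w = np.polynomial.hermite.hermgauss(80)
_z, _p = np.sqrt(2.0) * _x, _w / np.sqrt(np.pi)   # E_z f(z) ≈ sum _p f(_z), z ~ N(0,1)

def rs_consistency(mu, nu, lam, iters=5000, damping=0.5):
    """Fixed-point iteration for the consistency equations (39) in the REPLICA
    SYMMETRIC approximation of P_{mu,h}, with M = 1 and Rademacher patterns.
    This ansatz is generally NOT exact for this model (that is the point of the
    paper); the sketch only illustrates the structure of (36)-(39)."""
    x, q = 0.1, 0.1
    for _ in range(iters):
        field = np.sqrt(mu * q) * _z + lam + nu * x
        t = np.tanh(field)
        x_new = np.sum(_p * t)          # x = E[xi tanh(...)], xi = +-1 symmetrized away
        q_new = np.sum(_p * t ** 2)     # q = E tanh^2(...)
        x = damping * x_new + (1 - damping) * x
        q = damping * q_new + (1 - damping) * q
    return x, q

for mu, nu, lam in [(0.5, 0.5, 0.0), (2.0, 2.0, 0.0), (2.0, 1.0, 0.1)]:   # arbitrary values
    print((mu, nu, lam), rs_consistency(mu, nu, lam))
```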
The proof of the theorem is based on the concentration of the Mattis magnetization, which is the normalized scalar product between a spin configuration (or a sample from the wrong posterior measure) and one of the patterns ξ^(k):
$$m_N(\sigma|\xi^{(k)}) = \frac{1}{N}\sum_{i=1}^N \sigma_i\,\xi_i^{(k)} = \frac{1}{N}\,\sigma\cdot\xi^{(k)}. \tag{40}$$
The Hamiltonian can thus be rewritten using (40) in the following form:
$$H_N(\sigma) = \sqrt{\mu}\, H_N^{SK}(\sigma) + N\sum_{k\le M}\Big[ \frac{\nu}{2}\, m_N^2(\sigma|\xi^{(k)}) + \lambda_k\, m_N(\sigma|\xi^{(k)}) \Big] \tag{41}$$
The Mattis magnetization, in fact, plays the role of an order parameter for this model. The concentration we can prove holds only on average over some suitably small magnetic fields, which is still sufficient for our purposes:
Proposition 1
(Concentration of Mattis Magnetizations). Let ϵ = (ϵ_k)_{k≤M} with ϵ_k ∈ [s_N, 2s_N], where s_N = N^{-α}/2 and α ∈ (0, 1/(2M)). For y ∈ R^M, denote by ⟨·⟩_{N,y} the Boltzmann–Gibbs measure induced by the Hamiltonian H_{N,y}(σ) = H_N(σ) + Σ_{k≤M} y_k σ·ξ^(k). Then, for every k ≤ M,
$$\lim_{N\to\infty} \frac{1}{s_N^M}\int_{[s_N,2s_N]^M} d\epsilon\; \mathbb E\Big\langle \big( m_N(\sigma|\xi^{(k)}) - \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,\epsilon} \big)^2 \Big\rangle_{N,\epsilon} = 0 \tag{42}$$
for all μ, ν ≥ 0 and λ ∈ R^M.
We shall omit the proof of the above result, as it is completely analogous to the one in [12]. We will need an intermediate lemma that leads to it (see Lemma 2 below), together with a second key ingredient: the adaptive interpolation technique [48], combined with Guerra’s replica symmetry-breaking upper bound for the quenched pressure of the SK model [8].
Proof of Theorem 2.
Here, we outline the main steps of the proof of the variational principle for the thermodynamic limit. The proof is achieved via two bounds that match in the N → ∞ limit. Let us start by defining the interpolating Hamiltonian
$$H_N(t;\sigma) := H_N\big(\sigma;\,\mu,\,(1-t)\nu,\,\lambda + R_\epsilon(t)\big) = \sqrt{\mu}\, H_N^{SK}(\sigma) + (1-t)\,N\,\frac{\nu}{2}\sum_{k\le M} m_N^2(\sigma|\xi^{(k)}) + N\sum_{k\le M}\big(\lambda_k + R_\epsilon^{(k)}(t)\big)\, m_N(\sigma|\xi^{(k)}) \tag{43}$$
where R_ϵ = (R_ϵ^(k))_{k≤M} and
$$R_\epsilon^{(k)}(t) = \epsilon_k + \nu\int_0^t ds\; r_\epsilon^{(k)}(s), \qquad \epsilon_k\in[s_N, 2s_N], \quad s_N = \frac{N^{-\alpha}}{2}, \tag{44}$$
with α ∈ (0, 1/(2M)), and where the interpolating functions r_ϵ^(k), which must be continuously differentiable on [0,1] and non-negative, will be suitably chosen. With this interpolation, one is able to prove the following sum rule:
Proposition 2.
The following sum rule holds:
$$\bar p_N(\mu,\nu,\lambda) = \bar p_N^{SK}\big(\mu,\,\lambda + R_\epsilon(1)\big) - \frac{\nu}{2}\int_0^1 dt\sum_{k\le M}\Big[ r_\epsilon^{(k)\,2}(t) - \Delta_\epsilon^{(k)}(t) \Big] + O(s_N) \tag{45}$$
where
$$\Delta_\epsilon^{(k)}(t) := \mathbb E\Big\langle \big( m_N(\sigma|\xi^{(k)}) - r_\epsilon^{(k)}(t) \big)^2 \Big\rangle_{N,\,R_\epsilon(t)}. \tag{46}$$
The proof consists in computing the derivative of the interpolating pressure associated with the model (43); it follows closely that of ([12], Proposition 7), to which we refer the interested reader. Since the remainder Δ_ϵ^(k) is non-negative, the above proposition already yields a bound for the quenched pressure of our model when we choose r_ϵ^(k) = x_k ∈ R constant:
$$\liminf_{N\to\infty} \bar p_N(\mu,\nu,\lambda) \ge \sup_{x\in\mathbb R^M} \varphi(x;\mu,\nu,\lambda) \tag{47}$$
where we used Lipschitz continuity of the SK pressure in the magnetic fields.
The upper bound requires more attention. First, we notice that p̄_N^{SK}(μ, λ + R_ϵ(1)) is convex in the magnetic fields and that R_ϵ^(k)(1) = ∫_0^1 dt (ϵ_k + ν r_ϵ^(k)(t)). Hence, we can use Jensen’s inequality and the Lipschitz continuity of p̄^{SK} to obtain:
$$\bar p_N(\mu,\nu,\lambda) \le \int_0^1 dt\,\Big[ \bar p_N^{SK}\big(\mu,\,\lambda + \nu r_\epsilon(t)\big) - \frac{\nu}{2}\sum_{k\le M} r_\epsilon^{(k)\,2}(t) \Big] + \frac{\nu}{2}\sum_{k\le M}\int_0^1 \Delta_\epsilon^{(k)}(t)\, dt + O(s_N). \tag{48}$$
Now we use Guerra’s bound for the SK pressure which, importantly, is uniform in N, and we average over ϵ on both sides:
$$\begin{aligned} \bar p_N(\mu,\nu,\lambda) &\le \mathbb E_\epsilon \int_0^1 \varphi\big(r_\epsilon(t);\mu,\nu,\lambda\big)\, dt + \frac{\nu}{2}\sum_{k\le M}\mathbb E_\epsilon\int_0^1 \Delta_\epsilon^{(k)}(t)\, dt + O(s_N) \\ &\le \sup_{x\in\mathbb R^M}\varphi(x;\mu,\nu,\lambda) + \frac{\nu}{2}\sum_{k\le M}\mathbb E_\epsilon\int_0^1 \Delta_\epsilon^{(k)}(t)\, dt + O(s_N). \end{aligned} \tag{49}$$
What remains is to prove that E_ϵ Δ_ϵ^(k)(t) → 0 as N → ∞ for a proper choice of the interpolating functions R_ϵ. The choice is made through a system of coupled ODEs:
$$\dot R_\epsilon^{(k)}(t) \equiv \nu\, r_\epsilon^{(k)}(t) = \nu\,\mathbb E\big\langle m_N(\sigma|\xi^{(k)})\big\rangle_{N,\,R_\epsilon(t)}, \qquad R_\epsilon^{(k)}(0) = \epsilon_k, \quad k\le M. \tag{50}$$
One can easily check that the above system is regular enough to admit a unique solution on the interval t ∈ [0,1]. In this case, the remainder to be pushed to 0 takes the form
$$\frac{1}{s_N^M}\int_{[s_N,2s_N]^M} d\epsilon\;\mathbb E\Big\langle \big( m_N(\sigma|\xi^{(k)}) - \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,R_\epsilon(t)} \big)^2 \Big\rangle_{N,R_\epsilon(t)}. \tag{51}$$
The goal is now to apply a concentration lemma here:
Lemma 2.
Let y ∈ [y_1, y_2], δ ∈ (0,1), and denote by ⟨·⟩_{N,y} the Boltzmann–Gibbs expectation associated with the Hamiltonian H_N(σ; μ, ν, λ + y e_k), where k ≤ M and e_k is the k-th canonical basis vector of R^M. Then
$$\mathbb E\Big\langle \big( m_N(\sigma|\xi^{(k)}) - \langle m_N(\sigma|\xi^{(k)})\rangle_{N,y} \big)^2 \Big\rangle_{N,y} = \frac{1}{N}\,\frac{d^2}{dy^2}\,\bar p_N(\mu,\nu,\lambda + y e_k) \tag{52}$$
$$\mathbb E\Big[ \big( \langle m_N(\sigma|\xi^{(k)})\rangle_{N,y} - \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,y} \big)^2 \Big] \le \frac{12K}{\delta^2 N} + 8a\,\frac{d}{dy}\Big[ \bar p_N\big(\mu,\nu,\lambda+(y+\delta)e_k\big) - \bar p_N\big(\mu,\nu,\lambda+(y-\delta)e_k\big) \Big] \tag{53}$$
where K and a are suitable positive constants.
Notice that the integral in (51) is over ϵ and not over the effective magnetic field of the model, which is instead R_ϵ(t). Nevertheless, we can integrate over the magnetic fields R_ϵ(t) with a change of variables. This involves a Jacobian that is at least 1. In fact, thanks to Liouville’s theorem ([55], Corollary 3.1, Chapter V), one can prove that
$$\det\frac{\partial R_\epsilon(t)}{\partial\epsilon} = \exp\int_0^t \nu\sum_{k\le M}\partial_{R_\epsilon^{(k)}(s)}\,\mathbb E\big\langle m_N(\sigma|\xi^{(k)})\big\rangle_{N,R_\epsilon(s)}\, ds = \exp\int_0^t N\nu\sum_{k\le M}\mathbb E\Big\langle\big( m_N(\sigma|\xi^{(k)}) - \langle m_N(\sigma|\xi^{(k)})\rangle_{N,R_\epsilon(s)}\big)^2\Big\rangle_{N,R_\epsilon(s)}\, ds \ge 1, \tag{54}$$
since ν ≥ 0.
This allows us to bound the thermal fluctuations in (51) using (52) and then Liouville’s theorem:
$$\begin{aligned} \Delta_T^{(k)} &:= \frac{1}{s_N^M}\int_{[s_N,2s_N]^M} d\epsilon\;\mathbb E\Big\langle \big( m_N(\sigma|\xi^{(k)}) - \langle m_N(\sigma|\xi^{(k)})\rangle_{N,R_\epsilon(t)} \big)^2\Big\rangle_{N,R_\epsilon(t)} \\ &\le \frac{1}{s_N^M}\int_{\prod_{\ell\le M}\big[R_{s_N}^{(\ell)}(t),\,R_{2s_N}^{(\ell)}(t)\big]} dh\; \frac{1}{N}\,\frac{d^2}{dh_k^2}\,\bar p_N(\mu,\nu,\lambda+h) \\ &= \frac{1}{N s_N^M}\int_{\prod_{\ell\le M,\,\ell\neq k}\big[R_{s_N}^{(\ell)}(t),\,R_{2s_N}^{(\ell)}(t)\big]} \prod_{\ell\neq k} dh_\ell\; \Big[ \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,h;\,h_k=R_{2s_N}^{(k)}(t)} - \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,h;\,h_k=R_{s_N}^{(k)}(t)} \Big] \end{aligned} \tag{55}$$
Since ξ_i has a bounded second moment, using the Cauchy–Schwarz inequality one can show that |E⟨m_N(σ|ξ^(k))⟩_{N,h}| is uniformly bounded by a constant C. Hence, |R_ϵ^(k)(t)| ≤ ϵ_k + Ctν ≤ 1 + Cν for any k ≤ M by construction (recall (44) and (50)). Therefore, Δ_T^(k) = O(1/(N s_N^M)).
The fluctuations induced by the disorder can be bounded in a very similar fashion using (53):
$$\Delta_D^{(k)} := \frac{1}{s_N^M}\int_{[s_N,2s_N]^M} d\epsilon\; \mathbb E\Big[ \big( \langle m_N(\sigma|\xi^{(k)})\rangle_{N,R_\epsilon(t)} - \mathbb E\langle m_N(\sigma|\xi^{(k)})\rangle_{N,R_\epsilon(t)} \big)^2 \Big] = O\Big( \frac{1}{N\delta^2} + \frac{\delta}{s_N^M} \Big) \tag{56}$$
Hence, (51), which equals Δ_T^(k) + Δ_D^(k), is overall O(1/(N s_N^M) + 1/(Nδ²) + δ/s_N^M). The parameter δ can be chosen as a function of N in order to optimize the convergence rate: δ = s_N^{2M/3} N^{-1/3}. Using Fubini’s theorem in (49) to exchange the t and ϵ averages, and then dominated convergence, one concludes the proof. □
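Purely as an illustration of how the adaptive interpolation path in the proof is built, the sketch below (ours) integrates the ODE system (50) with an explicit Euler scheme at a very small size and M = 1, estimating the quenched average E⟨m_N⟩_{N,R} for the interpolating Hamiltonian (43) by exhaustive enumeration over a handful of disorder samples. None of the numerical choices (N, step size, number of samples) come from the paper, and the scale is far too small to say anything quantitative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
N, mu, nu, lam, eps = 8, 1.0, 1.5, 0.1, 0.05     # arbitrary toy values
n_disorder, n_steps = 20, 20

configs = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)  # all 2^N spins

def quenched_mattis(t, R, disorder):
    """E<m_N>_{N,R} for the interpolating Hamiltonian (43) with M = 1,
    estimated over a few disorder samples with exact Gibbs averages."""
    vals = []
    for z, xi in disorder:
        overlap_sk = np.einsum('ci,ij,cj->c', configs, z, configs)   # sigma^T z sigma
        m = configs @ xi / N                                         # Mattis magnetization (40)
        H = (np.sqrt(mu / (2 * N)) * overlap_sk
             + (1 - t) * N * nu / 2 * m ** 2
             + N * (lam + R) * m)                                    # eq. (43)
        w = np.exp(H - H.max())
        vals.append(np.sum(w * m) / np.sum(w))
    return np.mean(vals)

disorder = [(rng.standard_normal((N, N)), rng.choice([-1.0, 1.0], size=N))
            for _ in range(n_disorder)]

R, dt = eps, 1.0 / n_steps
for j in range(n_steps):                  # explicit Euler for eq. (50)
    t = j * dt
    R += dt * nu * quenched_mattis(t, R, disorder)
print("R_eps(1) ≈", R)
```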
From the variational problem (36), we can also deduce the differentiability properties of the limiting pressure, obtaining the average values of the relevant thermodynamic quantities of the model:
Corollary 1.
Let p(μ, ν, λ) = lim_{N→∞} p̄_N(μ, ν, λ) and Ω_{μ,ν,λ} = argmax_{x∈[−C,C]^M} φ(x; μ, ν, λ). Then:
  • If Ω_{μ,ν,λ} = {x̄} is a singleton with x̄ = (x̄_k)_{k≤M}, then for any k ≤ M, one has
    $$\lim_{N\to\infty}\mathbb E\big\langle m_N(\sigma|\xi^{(k)})\big\rangle_N = \partial_{\lambda_k}\, p(\mu,\nu,\lambda) = \bar x_k \tag{57}$$
    and
    $$\lim_{N\to\infty}\mathbb E\big\langle q_{12}^2\big\rangle_N = 1 - 4\,\partial_\mu\, p(\mu,\nu,\lambda) = \int_0^1 q^2\, d\bar\chi(q), \tag{58}$$
    where χ̄ denotes the unique measure solving the Parisi variational principle in Theorem 1.
  • If {|x|² : x ∈ Ω_{μ,ν,λ}} = {|x̄|²} is a singleton, then
    $$\partial_\nu\, p(\mu,\nu,\lambda) = \frac{|\bar x|^2}{2}. \tag{59}$$
More generally, let y be any one of the variables μ, ν, λ_1, …, λ_M; then the function y ↦ p(μ, ν, λ) is convex. By Danskin’s theorem (see [56]), y ↦ p(μ, ν, λ) is differentiable if and only if the set { ∂_y φ(x; μ, ν, λ) : x ∈ Ω_{μ,ν,λ} } is a singleton.
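The first relation of Corollary 1 has a finite-N counterpart, ∂_{λ_k} p̄_N = E⟨m_N(σ|ξ^(k))⟩_N, which follows by differentiating the log partition function. The sketch below (ours) checks it at a very small size with a central finite difference, exhaustive enumeration and a modest number of disorder samples; all numerical values are arbitrary choices for the illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)
N, M, mu, nu = 8, 2, 1.0, 0.8
lam = np.array([0.2, -0.1])
configs = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)

def pressure_and_mattis(z, xi, lam):
    # p_N and <m_N(.|xi^(1))>_N for one disorder realization, by enumeration of eq. (32)
    m = configs @ xi.T / N                                   # (2^N, M) Mattis magnetizations
    overlap_sk = np.einsum('ci,ij,cj->c', configs, z, configs)
    H = (np.sqrt(mu / (2 * N)) * overlap_sk
         + N * nu / 2 * np.sum(m ** 2, axis=1)
         + N * m @ lam)
    p = np.logaddexp.reduce(H) / N
    w = np.exp(H - H.max()); w /= w.sum()
    return p, np.sum(w * m[:, 0])

delta, n_disorder = 1e-4, 50
dp, m_avg = 0.0, 0.0
for _ in range(n_disorder):
    z = rng.standard_normal((N, N))
    xi = rng.choice([-1.0, 1.0], size=(M, N))
    p_plus, _ = pressure_and_mattis(z, xi, lam + np.array([delta, 0.0]))
    p_minus, _ = pressure_and_mattis(z, xi, lam - np.array([delta, 0.0]))
    _, m1 = pressure_and_mattis(z, xi, lam)
    dp += (p_plus - p_minus) / (2 * delta) / n_disorder
    m_avg += m1 / n_disorder

print(dp, m_avg)   # the two numbers agree up to the finite-difference error
```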

5. Conclusions and Perspectives

In this paper, we offer an overview of the Parisi formula from a mathematical physics perspective, emphasizing its potential applications, particularly in addressing the mismatched inference problem outlined earlier. Building upon our previous work [12], we investigate a scenario where a statistician, tasked with reconstructing a finite-rank matrix, lacks knowledge about the underlying matrix generation process, including both the matrix elements and its rank. We consider the case in which the statistician assumes a rank-one matrix, leading to a mismatch between the "true" Bayes posterior and the one used for inference. Our key contribution is the proof that, contrary to what happens in the Bayes-optimal setting, this Bayesian mismatch induces replica symmetry breaking in the model. Consequently, we express the pressure of the corresponding spin glass as an infinite-dimensional variational principle over the space of distributions on [0, 1].
The chosen mismatch scenario shares some similarities with those studied in [57,58] with the fundamental difference being that here the rank of the hidden matrix is finite. In a recent work [59], the authors consider a general case of mismatch, which includes mismatching priors and likelihoods. The mentioned paper proves a universality property with respect to the likelihood assumed by the statistician provided that observations remain independent given the ground truth.
Despite these advancements, all the proofs available so far in the literature break down when considering a high-rank hidden matrix. To rigorously comprehend this scenario, addressing the solution of the Hopfield model is of crucial importance. However, to the best of our knowledge, its complete solution remains elusive [5,6].

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Pierluigi Contucci and Emanuele Mingione were partially supported by the EU H2020 ICT48 project Humane AI Net, contract number 952026; by the Italian Extended Partnership PE01—FAIR Future Artificial Intelligence Research—Proposal code PE00000013 under the MUR National Recovery and Resilience Plan; and by the project PRIN 2022—Proposal code J53D23003690006.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank Jorge Kurchan, Nicolas Macris and Farzad Pourkamali for fruitful interactions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parisi, G. An Infinite Number of Order Parameters for Spin Glasses. Phys. Rev. Lett. 1979, 43, 1754–1756. [Google Scholar] [CrossRef]
  2. Parisi, G. A Sequence of Approximated Solutions to the S-K Model for Spin Glasses. J. Phys. A 1980, 13, L115. [Google Scholar] [CrossRef]
  3. Nishimori, H. Statistical Physics of Spin Glasses and Information Processing: An Introduction; Oxford University Press: Oxford, NY, USA, 2001. [Google Scholar]
  4. Mézard, M.; Montanari, A. Information, Physics, and Computation; Oxford Academic: Oxford, UK, 2009. [Google Scholar]
  5. Talagrand, M. Mean Field Models for Spin Glasses: Volume I: Basic Examples; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  6. Talagrand, M. Mean Field Models for Spin Glasses: Volume II: Advanced Replica-Symmetry and Low Temperature; Springer: Berlin/Heidelberg, Germany, 2011; Volume 55. [Google Scholar] [CrossRef]
  7. Amit, D.J.; Gutfreund, H.; Sompolinsky, H. Spin-glass models of neural networks. Phys. Rev. A 1985, 32, 1007–1018. [Google Scholar] [CrossRef] [PubMed]
  8. Guerra, F. Broken Replica Symmetry Bounds in the Mean Field Spin Glass Model. Commun. Math. Phys. 2003, 233, 1–12. [Google Scholar] [CrossRef]
  9. Guerra, F.; Toninelli, F.L. The Thermodynamic Limit in Mean Field Spin Glass Models. Commun. Math. Phys. 2002, 230, 71–79. [Google Scholar] [CrossRef]
  10. Aizenman, M.; Contucci, P. On the Stability of the Quenched State in Mean Field Spin Glass Models. J. Stat. Phys. 1998, 92, 765–783. [Google Scholar] [CrossRef]
  11. Panchenko, D. The free energy in a multi-species Sherrington–Kirkpatrick model. Ann. Probab. 2015, 43, 3494–3513. [Google Scholar] [CrossRef]
  12. Camilli, F.; Contucci, P.; Mingione, E. An inference problem in a mismatched setting: A spin-glass model with Mattis interaction. SciPost Phys. 2022, 12, 125. [Google Scholar] [CrossRef]
  13. Pourkamali, F.; Macris, N. Mismatched Estimation of rank-one symmetric matrices under Gaussian noise. arXiv 2021, arXiv:cs.IT/2107.08927. [Google Scholar]
  14. Barbier, J.; Hou, T.; Mondelli, M.; Sáenz, M. The price of ignorance: How much does it cost to forget noise structure in low-rank matrix estimation? Adv. Neural Inf. Process. Syst. 2022, 35, 36733–36747. [Google Scholar]
  15. Fu, T.; Liu, Y.; Barbier, J.; Mondelli, M.; Liang, S.; Hou, T. Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise. arXiv 2023, arXiv:2302.03306. [Google Scholar]
  16. Barbier, J.; Chen, W.K.; Panchenko, D.; Sáenz, M. Performance of Bayesian linear regression in a model with mismatch. arXiv 2021, arXiv:2107.06936. [Google Scholar]
  17. Sherrington, D.; Kirkpatrick, S. Solvable Model of a Spin-Glass. Phys. Rev. Lett. 1975, 35, 1792–1796. [Google Scholar] [CrossRef]
  18. Mezard, M.; Parisi, G.; Virasoro, M. Spin Glass Theory and Beyond; Lecture Notes in Physics Series; World Scientific: Singapore, 1987. [Google Scholar]
  19. Talagrand, M. The Parisi Formula. Ann. Math. 2006, 163, 221–263. [Google Scholar] [CrossRef]
  20. Panchenko, D. The Sherrington-Kirkpatrick Model; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  21. Auffinger, A.; Jagannath, A. Thouless–Anderson–Palmer equations for generic p-spin glasses. Ann. Probab. 2019, 47, 2230–2256. [Google Scholar] [CrossRef]
  22. Ruelle, D. A mathematical reformulation of Derrida’s REM and GREM. Commun. Math. Phys. 1987, 108, 225–239. [Google Scholar] [CrossRef]
  23. Derrida, B. A generalization of the Random Energy Model which includes correlations between energies. J. Phys. Lett. 1985, 46, 401–407. [Google Scholar] [CrossRef]
  24. Bolthausen, E.; Sznitman, A.S. On Ruelle’s Probability Cascades and an Abstract Cavity Method. Commun. Math. Phys. 1998, 197, 247–276. [Google Scholar] [CrossRef]
  25. Aizenman, M.; Sims, R.; Starr, S. Extended variational principle for the Sherrington-Kirkpatrick spin-glass model. Phys. Rev. B 2003, 68, 214403. [Google Scholar] [CrossRef]
  26. Ghirlanda, S.; Guerra, F. General properties of overlap probability distributions in disordered spin systems. Towards Parisi ultrametricity. J. Phys. A Math. Gen. 1998, 31, 9149. [Google Scholar] [CrossRef]
  27. Panchenko, D. The Parisi ultrametricity conjecture. Ann. Math. 2011, 177, 383–393. [Google Scholar] [CrossRef]
  28. Contucci, P.; Mingione, E.; Starr, S. Factorization Properties in d-Dimensional Spin Glasses. Rigorous Results and Some Perspectives. J. Stat. Phys. 2012, 151, 809–829. [Google Scholar] [CrossRef]
  29. Auffinger, A.; Chen, W.K. The Parisi Formula has a Unique Minimizer. Commun. Math. Phys. 2015, 335, 1429–1444. [Google Scholar] [CrossRef]
  30. Auffinger, A.; Chen, W.K. On properties of Parisi measures. Probab. Theory Relat. Fields 2013, 161, 817–850. [Google Scholar] [CrossRef]
  31. Jagannath, A.; Tobasco, I. Some Properties of the Phase Diagram for Mixed p-Spin Glasses. Probab. Theory Relat. Fields 2017, 167, 615–672. [Google Scholar] [CrossRef]
  32. Pastur, L.A.; Shcherbina, M. Absence of self-averaging of the order parameter in the Sherrington-Kirkpatrick model. J. Stat. Phys. 1991, 62, 1–19. [Google Scholar] [CrossRef]
  33. de Almeida, J.R.L.; Thouless, D.J. Stability of the Sherrington-Kirkpatrick solution of a spin glass model. J. Phys. A Math. Gen. 1978, 11, 983. [Google Scholar] [CrossRef]
  34. Chen, W.K. On the Almeida-Thouless transition line in the SK model with centered Gaussian external field. arXiv 2021, arXiv:2103.04802. [Google Scholar]
  35. Guerra, F.; Toninelli, F.L. Quadratic replica coupling in the Sherrington-Kirkpatrick mean field spin glass model. J. Math. Phys. 2002, 43, 3704–3716. [Google Scholar] [CrossRef]
  36. Panchenko, D. Free energy in the mixed p-spin models with vector spins. Ann. Probab. 2018, 46, 865–896. [Google Scholar] [CrossRef]
  37. Barra, A.; Contucci, P.; Mingione, E.; Tantari, D. Multi-Species Mean Field Spin Glasses. Rigorous Results. Ann. Inst. Henri Poincaré 2013, 16, 691–708. [Google Scholar] [CrossRef]
  38. Subag, E. TAP approach for multispecies spherical spin glasses II: The free energy of the pure models. Ann. Probab. 2023, 51, 1004–1024. [Google Scholar] [CrossRef]
  39. Contucci, P.; Mingione, E. A Multi-scale Spin-Glass Mean-Field Model. Commun. Math. Phys. 2019, 368, 1323–1344. [Google Scholar] [CrossRef]
  40. Mourrat, J.C.; Panchenko, D. Extending the Parisi formula along a Hamilton-Jacobi equation. Electron. J. Probab. 2020, 25, 1–17. [Google Scholar] [CrossRef]
  41. Carmona, P.; Hu, Y. Universality in Sherrington–Kirkpatrick’s spin glass model. In Annales de l’Institut Henri Poincare (B) Probability and Statistics; Elsevier: Amsterdam, The Netherlands, 2006; Volume 42, pp. 215–222. [Google Scholar]
  42. Lelarge, M.; Miolane, L. Fundamental limits of symmetric low-rank matrix estimation. Probab. Theory Relat. Fields 2017, 173, 859–929. [Google Scholar] [CrossRef]
  43. Barbier, J. Overlap matrix concentration in optimal Bayesian inference. Inf. Inference A J. IMA 2020, 10, 597–623. [Google Scholar] [CrossRef]
  44. Johnstone, I.M. On the distribution of the largest eigenvalue in principal components analysis. Ann. Stat. 2001, 29, 295–327. [Google Scholar] [CrossRef]
  45. Krzakala, F.; Xu, J.; Zdeborová, L. Mutual information in rank-one matrix estimation. In Proceedings of the 2016 IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 71–75. [Google Scholar] [CrossRef]
  46. Barbier, J.; Macris, N.; Miolane, L. The layered structure of tensor estimation and its mutual information. In Proceedings of the 55th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 3–6 October 2017. [Google Scholar]
  47. Barbier, J.; Macris, N. The adaptive interpolation method for proving replica formulas. Applications to the Curie–Weiss and Wigner spike models. J. Phys. A Math. Theor. 2019, 52, 294002. [Google Scholar] [CrossRef]
  48. Barbier, J.; Macris, N. The adaptive interpolation method: A simple scheme to prove replica formulas in Bayesian inference. Probab. Theory Relat. Fields 2019, 174, 1133–1185. [Google Scholar] [CrossRef]
  49. Barbier, J.; Dia, M.; Macris, N.; Krzakala, F.; Zdeborová, L. Rank-one matrix estimation: Analysis of algorithmic and information theoretic limits by the spatial coupling method. arXiv 2018, arXiv:1812.02537. [Google Scholar]
  50. Alaoui, A.E.; Krzakala, F.; Jordan, M. Fundamental limits of detection in the spiked Wigner model. Ann. Stat. 2020, 48, 863–885. [Google Scholar] [CrossRef]
  51. Lesieur, T.; Krzakala, F.; Zdeborová, L. Constrained low-rank matrix estimation: Phase transitions, approximate message passing and applications. J. Stat. Mech. Theory Exp. 2017, 2017, 073403. [Google Scholar] [CrossRef]
  52. Pourkamali, F.; Macris, N. Mismatched Estimation of Non-Symmetric Rank-One Matrices Under Gaussian Noise. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 1288–1293. [Google Scholar] [CrossRef]
  53. Verdú, S. Mismatched Estimation and Relative Entropy. IEEE Trans. Inf. Theory 2010, 56, 3712–3720. [Google Scholar] [CrossRef]
  54. Chen, W. On the mixed even-spin Sherrington-Kirkpatrick model with ferromagnetic interaction. Ann. Inst. Henri Poincare 2014, 50, 63–83. [Google Scholar] [CrossRef]
  55. Hartman, P. Ordinary Differential Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1964. [Google Scholar]
  56. Bertsekas, D. Nonlinear Programming; Athena Scientific: Nashua, NH, USA, 2003. [Google Scholar]
  57. Camilli, F.; Mézard, M. Matrix factorization with neural networks. Phys. Rev. E 2023, 107, 064308. [Google Scholar] [CrossRef]
  58. Camilli, F.; Mézard, M. The Decimation Scheme for Symmetric Matrix Factorization. arXiv 2023, arXiv:2307.16564. [Google Scholar]
  59. Guionnet, A.; Ko, J.; Krzakala, F.; Zdeborová, L. Estimating rank-one matrices with mismatched prior and noise: Universality and large deviations. arXiv 2023, arXiv:2306.09283. [Google Scholar]