Next Article in Journal
Lévy Flight Model of Gaze Trajectories to Assist in ADHD Diagnoses
Previous Article in Journal
A Study of Adjacent Intersection Correlation Based on Temporal Graph Attention Network
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

The Inverse of Exact Renormalization Group Flows as Statistical Inference

1
Centre for Theoretical Physics, Queen Mary University of London, Mile End Road, London E1 4NS, UK
2
Department of Physics, University of Illinois, Urbana, IL 61801, USA
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(5), 389; https://doi.org/10.3390/e26050389
Submission received: 15 March 2024 / Revised: 26 April 2024 / Accepted: 28 April 2024 / Published: 30 April 2024
(This article belongs to the Special Issue Applications of Fisher Information in Sciences II)

Abstract

:
We build on the view of the Exact Renormalization Group (ERG) as an instantiation of Optimal Transport described by a functional convection–diffusion equation. We provide a new information-theoretic perspective for understanding the ERG through the intermediary of Bayesian Statistical Inference. This connection is facilitated by the Dynamical Bayesian Inference scheme, which encodes Bayesian inference in the form of a one-parameter family of probability distributions solving an integro-differential equation derived from Bayes’ law. In this note, we demonstrate how the Dynamical Bayesian Inference equation is, itself, equivalent to a diffusion equation, which we dub Bayesian Diffusion. By identifying the features that define Bayesian Diffusion and mapping them onto the features that define the ERG, we obtain a dictionary outlining how renormalization can be understood as the inverse of statistical inference.

1. Introduction

The Renormalization Group (RG) is used in physical settings to deduce how a theory changes when it is viewed at different scales. In Wilsonian RG, one regards the set of possible theories as a space coordinatized by a collection of coupling constants specifying all of the possible contributions to a classical action. An RG flow can then be understood as a one-parameter family, or flow, in the space of theories generated by a vector field on theory space referred to as the Beta Function [1]. The flow is completely specified once the requisite initial data are provided, for example, fixing a theory in the ultraviolet (UV) from which the RG flow begins (by fixing a theory in the UV, we mean a theory that is a priori valid at all energy scales).
A typical approach to RG is to deduce the beta function by sequentially coarse-graining the theory—that is, integrating out degrees of freedom that become suppressed at lower energy scales. In field theoretic contexts, this approach makes use of perturbative techniques in order to perform the requisite functional integrals. From a formal perspective, however, it is possible to study the properties of renormalization as an abstract flow equation without immediately concerning ourselves with any complicated or even intractable calculations that may be required to realize the flow explicitly. This approach to RG goes under the name of the Exact Renormalization Group (ERG) and will be the primarily focus of our paper.
In our approach to ERG, we shall regard a Quantum Field Theory (QFT) as equivalent to the specification of a probability distribution on the space of fields included in the theory, which we denote by F ( M ) and refer to as the sample space in standard probability theoretic nomenclature (to accommodate this perspective, we shall consider only Euclidean QFT). From this perspective, the space of theories is isomorphic to the space of probability distributions on the sample space F ( M ) , denoted by M . An ERG flow is therefore equivalent to a one-parameter family of probability distributions on M .
From this point of view, we can still make contact with the Wilsonian picture by regarding a Wilsonian Effective Action, written in terms of a collection of coupling constants, as specifying a parameteric family of probability distributions. In other words, Wilsonian RG corresponds to a particular coordinatization of the space M .
Following the lead of [2,3,4] and others, it is natural to interpret the one-parameter family of probability distributions generated by an ERG flow as being governed by a functional convection–diffusion equation on the sample space F ( M ) . Regarding ERG as a flow on the space of probability distributions over a given sample space contextualizes renormalization in a language that is amenable to applications outside of the usual realm of physical theory. In particular, it suggests a manifestly information-theoretic interpretation for renormalization. Uncovering and presenting the details of this interpretation is the primarily objective of this note.
In seeking an answer to the question, “What is the information-theoretic Interpretation of an ERG flow?” we find it useful to consider a related question: “What does it mean to invert an ERG flow?”. Here, our understanding of ERG as being governed by a diffusion equation provides us with a direction. A powerful method for formally inverting a diffusion process is Bayesian Inversion [5]. Bayesian Inversion is a probabilistic approach used to determine the initial data that was fed into a partial differential equation and subsequently generated an observed sequence of outcomes. As the name implies, the main tool employed in Bayesian Inversion is Bayes’ Law. This lends some credence to the idea that Bayesian Inference may serve as an “inverse process” to ERG.
In our previous publication [6], we introduced Dynamical Bayesian Inference, which recasts the Bayesian Inference as a dynamical system. Like ERG, Dynamical Bayesian Inference generates a one-parameter family of probability distributions satisfying a flow equation. Whereas in ERG, the flow equation is deduced by sequential application of a coarse-graining law, in Dynamical Bayes the flow equation is obtained by the sequential application of Bayes’ law with respect to a continuously growing set of observed data. In this respect, an ERG flow continuously loses information (hence, it is diffusive), while a Dynamical Bayesian flow continuously gains information.
In this note, we shall argue in favor of this interpretation. In particular, we will demonstrate that the Dynamical Bayesian Flow equation gives rise to a convection–diffusion equation describing the evolution of an associated posterior predictive distribution backwards against the collection of additional data. We refer to the process described by this equation as Bayesian Diffusion, or Backward Inference. We interpret the equation governing Bayesian Diffusion as defining an ERG flow, with a coarse-graining procedure given by the continuous discarding of observed data. By construction, this ERG is inverted by the forward Dynamical Bayesian inference flow that is obtained by reincorporating the lost data back into the model. Alternatively, starting from an ERG flow we identify the Dynamical Bayesian flow for which it is the Backward Inference process by drawing a correspondence between the partial differential equation governing ERG and the partial differential equation governing Bayesian diffusion. Ultimately, this correspondence suggests a fascinating information-theoretic interpretation for ERG: it can be regarded as the one-parameter family of probability distributions obtained by starting from a data-generating model and continuously throwing away information in the form of observed data.
The organization of the paper is given as follows: in Section 2, we review the formulation of ERG as a functional differential equation, specifically in the Wegner–Morris form. We also demonstrate that the Wegner–Morris equation is equivalent to a Fokker–Planck equation with a given potential function, establishing the correspondence between ERG and optimal transport. In Section 3, we introduce Stochastic Differential Equations (SDEs) and discuss the relationship between SDEs and partial differential equations of the type appearing in ERG. In Section 4, we review Dynamical Bayesian Inference and derive the Bayesian Diffusion equations. Finally, in Section 5, we establish the correspondence between Bayesian Diffusion and ERG explicitly. We conclude with Section 6, in which we review our findings and discuss future research directions.

2. ERG Equation as a Functional Diffusion Equation

In this section, we will provide an overview of functional renormalization with the objective of demonstrating how ERG can be understood as a functional differential equation. Our presentation will only focus on the aspects of ERG that are relevant for this work; for a more complete overview, see [2,4,7,8] and the references therein. The presentation here follows very closely with that in [4].
Let us consider a theory with fields Φ F ( M ) . Here, we are using a notation in which capital letters correspond to random variables, and lower case letters to realizations. In this case, Φ is to be regarded as a random variable taking values in a space of continuous functions on a manifold M. The distribution over Φ is a probability functional P [ ϕ ( x ) ] e S [ ϕ ( x ) ] where S [ ϕ ( x ) ] is the Euclidean action. The idea of RG is that we are only capable of probing scales less than a cutoff Λ . As we change our cutoff, we obtain a family of probability distributions, P Λ [ ϕ ( x ) ] e S Λ [ ϕ ( x ) ] , corresponding to effective descriptions for the field Φ at each measurement threshold. ERG provides a description for the changes to the effective probability distribution in the form of a flow equation:
Λ d d Λ P Λ [ ϕ ] = F P Λ [ ϕ ] , δ P Λ [ ϕ ] δ ϕ , δ 2 P Λ [ ϕ ] δ ϕ δ ϕ , . . . .
where F is a functional of P Λ [ ϕ ] and all its functional derivatives and specifies a particular ERG scheme.

2.1. Polchinski’s Equation

The canonical example of ERG is Polchinski’s approach [2]. We begin with the partition function of a scalar field with source J given by:
Z Λ [ J ] : = D ϕ exp { 1 2 d d p ( 2 π ) d ϕ ( p ) ϕ ( p ) ( p 2 + m 2 ) K Λ 1 ( p 2 ) + J ( p ) ϕ ( p ) S i n t , Λ [ ϕ ] } .
Here, S i n t , Λ is the interacting action, and K Λ ( p 2 ) is a cutoff function that differentially weighs modes ϕ ( p ) based on their momenta.
Polchinski’s idea was to consider a scale Λ R < Λ and integrate out modes down to Λ R . To this end, we assume that J ( p ) has compact support in the sphere of momenta with radius Λ R ϵ for a small ϵ > 0 . If Λ R is only infinitesimally smaller than Λ , we can compute the differential change to Z Λ [ J ] on account of integrating out the shell of modes between Λ and Λ R . Then, we demand that:
Λ d d Λ Z Λ [ J ] = A Λ Z Λ [ J ] .
where A Λ is a constant for each value of Λ . If (3) holds, any correlation functions below the changed scale (i.e., for which one can take functional derivatives with respect to J) will be unchanged. Hence, this form of RG respects the fact that measured correlation functions ought to be independent of the RG flow below the relevant momentum scale.
Expanding out (3), we find:
Λ d d Λ Z Λ [ J ] = D ϕ 1 2 d d p ( 2 π ) d ϕ ( p ) ϕ ( p ) ( p 2 + m 2 ) Λ K Λ 1 ( p 2 ) Λ + Λ S i n t , Λ [ ϕ ] Λ e S Λ [ ϕ , J ] .
Again, K Λ ( p 2 ) is a function with prescribed Λ dependence. The behavior governed by (3) is that of S i n t , Λ . Polchinski showed that one can consistently satisfy (3) by taking S i n t , Λ to satisfy the functional differential equation:
Λ S i n t , Λ [ ϕ ] Λ = 1 2 d d p ( 2 π ) d ( p 2 + m 2 ) 1 Λ K Λ ( p 2 ) Λ δ 2 S i n t , Λ δ ϕ ( p ) δ ϕ ( p ) δ S i n t , Λ δ ϕ ( p ) δ S i n t , Λ δ ϕ ( p ) .
Following the approach of (1), we define the probability functional: P Λ [ ϕ ] = e S Λ [ ϕ ] / Z Λ , which is explicitly the probability distribution for the field Φ at scale Λ . Then, we can truly write the Polchinski equation in the form of a convection–diffusion equation as:
Λ d d Λ P Λ [ ϕ ] = 1 2 d d p ( 2 π ) d ( p 2 + m 2 ) 1 Λ K Λ ( p 2 ) Λ δ 2 δ ϕ ( p ) δ ϕ ( p ) P Λ [ ϕ ]
     + 1 2 d d p ( 2 π ) d ( p 2 + m 2 ) 1 Λ K Λ ( p 2 ) Λ δ δ ϕ ( p ) 2 ( p 2 + m 2 ) ( 2 π ) d K Λ ( p 2 ) ϕ ( p ) P Λ [ ϕ ] .
Throughout this note, we shall emphasize the sense in which such equations can be understood as functional analogs of more well defined finite dimensional equations. For example, (6) ought to be compared to the finite dimensional equation:
d d t p t ( y ) = i , j g i j y i y j p t ( y ) + i , j y i ( g i j v j ( y ) p t ( y ) ) ,
where we have written sums explicitly to make the analogy clear. To move between the finite dimensional equation and the Polchinski equation, we have made use of the dictionary found in Table 1.

2.2. Wegner–Morris Flows

We have now established that Polchinski’s equation is simply an infinite dimensional convection–diffusion equation. It can be shown that a family of such equations exists that satisfies the constraint (3). This family is defined by considering different choices for the metric and drift velocity appearing in Table 1. The choice of this data then corresponds to a choice of scheme for the ERG flow. The aforementioned family of ERG schemes is given explicit representation in terms of the Wegner–Morris flow equation [9,10,11,12,13]. As a one-parameter family of probability distributions, the Wegner–Morris flow is governed by the equation:
Λ d d Λ P Λ [ ϕ ] = d d x δ δ ϕ ( x ) Ψ Λ [ ϕ , x ] P Λ [ ϕ ] .
The Wegner–Morris scheme is encapsulated in the kernel Ψ Λ [ ϕ , x ] . Again, it is useful to compare to a finite dimensional flow equation:
d d t p t ( y ) = i y i V i ( p t , y ) p t .
Here, Ψ Λ has been replaced by a vector field that depends not only on y but on the entire probability function p t . It is often natural to regard V as the gradient of a scalar function W ( p t , y ) , which also depends on p t , i.e., V i = g i j ( y ) y j W ( p t , y ) . Then, we can write:
d d t p t ( y ) = i j y i g i j ( y ) y j W ( p t , y ) p t .
A natural interpretation of (8) is that it simply reparameterizes the field at each new scale. To be precise, as Λ changes we find:
ϕ ( x ) = ϕ ( x ) + δ Λ Λ Ψ Λ [ ϕ , x ] .
This implies that the Wegner–Morris flow conserves probability; it only changes the way that the probability is distributed. This is the reason why the Wegner–Morris family of flow equations satisfies (3). On account of this new interpretation, Ψ Λ is given the title of the Reparameterization Kernel.
Following the intuition from the finite dimensional case, we can represent the Reparameterization Kernel as a functional gradient:
Ψ Λ [ ϕ , x ] = d d y 1 2 C ˙ Λ ( x , y ) δ Σ Λ [ ϕ ] δ ϕ ( y ) .
Here, C ˙ Λ ( x , y ) is playing the same role it did in the Polchinski equation as an inverse metric on the sample space of fields; however, we have not yet fixed its functional form, and, indeed, it may differ from Polchinski’s choice. In the literature, C ˙ Λ ( x , y ) is called the ERG kernel.
A popular choice for Σ Λ is given by the difference between the renormalized action of the theory and a second action, S ^ Λ , called the seed action:
Σ Λ [ ϕ ] : = S Λ [ ϕ ] 2 S ^ Λ [ ϕ ] .
The factor of two is conventional. In this framework, an RG scheme is therefore specified entirely by a choice of C ˙ Λ and S ^ Λ . For example, we can reconcile the Polchinski equation from the Wegner–Morris set up by taking:
C ˙ Λ ( p 2 ) = ( 2 π ) d ( p 2 + m 2 ) 1 Λ K Λ ( p 2 ) Λ ; S ^ Λ [ ϕ ] = 1 2 d d p ( 2 π ) d ( p 2 + m 2 ) K Λ 1 ( p 2 ) ϕ ( p ) ϕ ( p ) .

2.3. Field Reparameterization and Scheme Independence

In Quantum Field Theory, fields are not an observable quantity but rather a device used to encode a theory. In this respect, we should regard field parametrization as a choice of coordinates on the sample space of the theory and require that the theory be invariant under diffeomorphisms of these coordinates. Following this line of thought, the author of [3] showed that the field reparamterization appearing in (11) can be interpreted as a gauge redundancy, and, as a consequence, the drift component of the Wegner–Morris equation can be interpreted as a choice of gauge. This perspective can be made concrete by viewing the convective derivative of the resulting flow equation as a covariant derivative, with the vector field generating the drift playing the role of a gauge field. Changing the drift field changes the description of the ERG only cosmetically; in particular, the expectation values of relevant operators are left-invariant under a change of gauge. We shall adopt this perspective and regard the prescription of drift in an ERG as tantamount to a choice of scheme.

2.4. Wegner–Morris and Fokker–Planck

In the previous subsection, we saw that ERG can be understood as a functional differential equation associated with the Wegner–Morris Equation (9). In this section, we shall demonstrate how the Wegner–Morris equation may be regarded as a form of the Fokker–Planck equation. For simplicity, we shall work in the context of probability theory on a differentiable sample space, S.
Given a function F : M R on a (psuedo)-Riemannian manifold M with metric g, we define the gradient as follows: for any path γ : [ 0 , 1 ] M , with γ ( 0 ) = x , the gradient of F at x with respect to the metric g is the tangent vector grad g F ( x ) such that:
d d t γ * F | t = 0 = g ( grad g F ( x ) , γ * d d t ) | t = 0 .
An equivalent definition that does not require the introduction of a path simply defines the gradient in terms of the exterior derivative on M:
d F ( X ) = g ( grad g F , X ) ; grad g F = ( d F ) .
Here, the map : T * M T M corresponds to the usual notion of index raising via the metric, mapping one form into vectors. To be precise, given α , β T * M we have: α ( β ) = g 1 ( α , β ) .
Let M = dens ( S ) denote the manifold of probability distribution functions on the sample space S. The tangent space to M at a point p is defined by:
T p M : = η C ( S ) | S Vol S η = 0 .
This is a good definition since functions η T M can be regarded as perturbations to probability densities that do not spoil the integral normalization (⋆ is the Hodge star on S):
S ( p + η ) = S p + S η = S p = 1 .
Hence, perturbations p p + η still belong to the manifold M .
Now, as a first exercise in understanding gradients on M , let us consider the most straightforward combination of metric and scalar function on this manifold. M is naturally equipped with an 2 metric of the form
G 2 ( η 1 , η 2 ) : = S η 1 η 2 .
Moreover, there is a natural functional to consider, namely, the Dirichlet Energy Functional  E : M R defined by (d is the exterior derivative on S not on M ):
E [ p ] = 1 2 S d p d p .
The exterior derivative on M should be understood as the variational derivative with respect to the density p. Thus, by standard techniques, we can compute:
δ E [ p ] = S d δ p d p = S ( 1 ) d δ p d d p + S d ( δ p d p ) .
A probability distribution must necessarily have bounded support to satisfy an integral normalization condition; hence, in this and any future computations we can safely take the boundary variation to zero. Thus, we arrive at the desired result:
δ E [ p ] = S ( 1 ) d δ p d d p = G 2 ( ( 1 ) d d d p , δ p ) .
Matching this to the definition of the gradient (16), we conclude that the gradient of the Dirichlet Energy Functional with respect to the 2 metric on M is equivalent to the Laplacian of p:
grad 2 E [ p ] = Δ p .
An immediate corollary of this fact is that it allows us to write the standard heat equation, t p t = Δ p t , in the form of a gradient flow as
t p t = grad 2 E [ p t ] .
Remarkably, if we choose a different metric for the space of probability distributions M , we can reconcile the heat equation as a gradient flow with respect to a different functional of the probability distribution. Of particular interest to us is the task of constructing a metric for which the potential is the differential entropy:
S [ p ] = S p ln ( p ) .
To find such a metric, we begin by establishing the following isomorphism of T M : given η T p M and η ^ C ( S ) , the following equation implicitly defines an isomorphism between them, up to a constant:
η = d i p ( d η ^ ) Vol S .
In more familiar vector calculus notation, this equation takes the form
η = div ( p grad η ^ ) .
Using this isomorphism, we may now define a new metric on M , which is essentially a probability weighted version of the Dirichlet Energy Functional. In particular, we take:
G W 2 ( η 1 , η 2 ) = S p d η ^ 1 d η ^ 2 = S η 1 η ^ 2 = S η ^ 1 η 2 .
As our notation suggests, this metric can be understood as the infinitesimal form of the Wasserstein two distance defined in Appendix A. A proof of this fact can be found in [4].
Now, let us compute the exterior derivative of the differential entropy, and by extension its gradient with respect to the newly minted Wasserstein metric (28). Let η = δ p denote an element of T M obtained by perturbing the density p. Moreover, let η ^ denote the element isomorphic to η via (26). We may now complete the desired computation:
          δ S [ p ] = S δ p ln ( p ) + p δ p p = S η ( ln ( p ) + 1 )
            = S d i p ( d η ^ ) Vol S ( ln ( p ) + 1 ) Using ( 26 )
            = S ( 1 ) d i p ( d η ^ ) Vol S d p p Integrating by parts
            = S ( 1 ) d p d η ^ d p p i ω Vol S = ω and S α β = S α β
            = S ( 1 ) d d η ^ d p = S η ^ Δ p Integrating by parts once more
= G W 2 ( Δ p , η ) .
Hence, we have succeeded in showing that:
grad W 2 S [ p ] = Δ p .
meaning that we can indeed write the heat equation:
t p t = grad W 2 S [ p t ] .
In general, if we specify a functional, F : M R , and follow the approach described above, the resulting gradient flow equation will read:
t p t = grad W 2 F [ p t ] = d i p t d ( δ F δ p | p t ) Vol S .
For our purposes, it will be interesting to consider functionals that are the sum of two pieces:
F [ p ] = S [ p ] + V [ p ] = S p ln ( p ) + S p V .
Here, S [ p ] is the differential entropy; thus, this part of the functional will source a diffusion equation in (37). The second term should be interpreted as a potential and will introduce drift into (37). Provided V : S R is a function that does not depend on p, (37) takes the form of a convection–diffusion equation:
t p t d i d p Vol S d i p t d V Vol S = 0 .
This is precisely the Fokker–Planck equation. In vector calculus notation, it takes the form:
t p t Δ p t div ( p grad V ) = 0 .
If the normalization Z = S e V Vol S is finite, q = 1 Z e V is a probability distribution, which is the stationary state of (39). In this case, the gradient flow of F can equivalently be understood as the gradient flow of the KL-Divergence between p t and q. This follows from a simple calculation:
F [ p t ] + ln ( Z ) = S p ( ln ( p t e V ) + ln ( Z ) ) = S p t ln ( p t q ) = D K L ( p t     q ) .
Because ln ( Z ) is independent of the distribution p, the variational derivatives of F and D K L ( p t     q ) will be equal: δ F δ p = δ D K L ( p     q ) δ p . Thus, both of these functionals provide the same (37).
We are now prepared to make good on our promise and show that the Wegner–Morris equation is equivalent to the Fokker–Planck equation with a specified stationary distribution. To this end, let us write:
p = e S p ( x ) Z p , q = e S q ( x ) Z q .
where Z d = S d ^ is the normalization factor for the Boltzmann weight of the distribution d, i.e., p ^ = e S p . Let us also define
Σ = ln ( p ^ q ^ ) = S q S p ; Ψ = grad g Σ .
which are the analogs of the renormalization scheme function, and the associated reparameterization kernel. Now, we need only compute the variation of the KL-Divergence:
δ D K L ( p     q ) = S δ p ( ln ( p ) ln ( q ) + 1
           = S δ p S p ln ( Z p ) + 1 + S q + ln ( Z q )
           = S δ p S q S q S δ p const . = 0 s . t . S ( p + δ p ) = 1
           = S d i p d η ^ Vol S Σ Via the tan gent space isomorphism with δ p = η
           = ( 1 ) d S i p d η ^ Vol S d Σ Integrating by parts
           = ( 1 ) d S p d η ^ d Σ i ω Vol S = ω and S α β = S α β
           = ( 1 ) d S η ^ d ( p Σ ) Integrating by parts
  = G W 2 ( d ( p d Σ ) ) , η ^ ) .
Hence, we have shown that:
grad W 2 D K L ( p     q ) = d ( p d Σ ) .
Thus, the Fokker Planck equation associated with the gradient flow of F is precisely the Finite Dimensional Wegner–Morris equation:
t p t = grad W 2 D K L ( p     q ) = d ( p d Σ ) = x i p g i j x j Σ = x i ψ i p .

2.5. Renormalization Group Flow as Optimal Transport

Relating RG flow to the Continuity Equation (53) is more or less an exercise in properly identifying the sample space of the RG probability distribution and completing the analogies that follow from there. For simplicity, we have reproduced this exercise in the form of a dictionary between optimal transport for finite sample spaces and ERG which can be found in Table 2.

3. From Stochastic Differential Equations to Partial Differential Equations and Back Again

Having placed the Exact Renormalization Group flow equations squarely in the context of Partial Differential Equations, we would now like to take a brief detour into addressing some of the generic properties of these equations. In particular, we will be interested in understanding the relationship between continuity equations on the one hand and Stochastic Differential Equations on the other. For more information on the relationship between Stochastic Differential Equations, partial differential equations, and optimal transport, see [14,15,16,17,18,19,20,21].

3.1. Stochastic Differential Equations

A stochastic process is a one-parameter family of random variables, { X t } t 0 . For our purposes, we shall be interested in stochastic processes whose dynamics are governed by a Stochastic Differential Equation of the Ito form [22,23,24]:
d X t i = m i ( X t , t ) d t + σ j i ( X t , t ) d W t j .
Here, m i describes the drift, or mean rate of change, while σ j i describes the diffusion, or mean variability. d W t i corresponds to an independent increment of a Brownian motion, a random variable drawn from a standard normal distribution corresponding to the noisy motion of the process over a single increment of time. (We recall that the Ito formulation is one of two major approaches to the subject of stochastic calculus, with the other being the Stratonovich formalism. Here, we use Ito formulation because it makes the relationship between Stochastic Differential Equations and Partial Differential Equations very clear).
Physicists may find it useful to compare (54) to the Langevin Equation
d X i d t = μ i ( X t ) + η t i .
which describes the dynamics of a quantity X t governed by a law d X d t = μ but subject to random fluctuations, η [25,26]. To match the Stochastic Differential Equation, we should take η t = ( η t 1 , . . . , η t n ) to be a random variable with the correlation structure
η t i η t j = δ k l σ k i σ l j δ ( t t ) .
Given a function: f : [ 0 , T ] × S R , we can determine its stochastic differential when evaluated over the process X t by using the principal of quadratic variation. Quadratic variation dictates that the product of two increments of Brownian motion scales linearly in the interval between realizations of the stochastic process. This fact follows from the observation that the variance of a Weiner process is linear in time. Schematically, we encode the principal of quadratic variation in the form:
d W t i d W t j δ i j d t + ( 1 δ i j ) O ( d t 2 ) .
If we now expand the differential of f ( X t , t ) in a power series, retaining terms up to O ( d t ) , we find:
d f ( X t , t ) = f t + m i f x i + 1 2 δ k l σ k i σ l j 2 f x i x j d t + f x i d W t i .
The second-order derivative terms are the unique addition provided to us by quadratic variation. The realizations, { f ( X t , t ) } t 0 , now define a stochastic process in their own right, governed by the Stochastic Differential Equation (58).
A stochastic process, { Y t } t 0 , is called a Martingale if it satisfies the equation
E ( Y t | { Y s } s τ ) = Y τ ; s t .
One can read (59) as specifying that the mean value of the stochastic process, Y t , has no tendency to change over time. Inspired by this interpretation, it is not hard to show that a stochastic process will be a Martingale if and only if it is described by a Stochastic Differential Equation with vanishing drift. This leads to a beautiful connection between the theory of stochastic processes, and harmonic analysis [27,28,29]. Considering the stochastic process { f ( X t , t ) } t 0 , if we demand that the drift in (58) be set equal to zero, we find that f must satisfy the Partial Differential Equation:
f t + m i f x i + 1 2 δ k l σ k i σ l j 2 f x i x j = 0 .
Moreover, using (59), we see that this partial differential equation has a formal solution:
f ( X t , t ) = E ( f ( X T , T ) { X s } s t ) .
where T should be regarded as the terminal time of the stochastic process.
Let us define the differential operator that appears in (60) as:
A = m i x i + 1 2 δ k l σ k i σ l j 2 x i x j .
With respect to the 2 inner product on the space of functions, we can define a formal adjoint operator: A such that
G 2 ( f , A ( g ) ) = G 2 ( A ( f ) , g ) .
The adjoint operator can be deduced by integrating by parts, and we find:
A ( f ) = x i ( m i f ) + 1 2 2 x i x j ( δ k l σ k i σ l j f ) .
The differential equation generated by the adjoint operator therefore reads:
f t = x i ( m i f ) 1 2 2 x i x j ( δ k l σ k i σ l j f ) .
This is the Fokker–Planck equation. In fact, it is slightly more general than the Fokker–Planck equation we uncovered through our analysis in Section 2.4 because it allows for a stochastic process with non-trivial diffusion matrices σ j i . To reconcile the Fokker–Planck Equation (39), we should consider the Stochastic Differential Equation:
d X t i = ( grad g V ) i d t + 2 δ j i d W t j .
Here, we again see the role of the potential V in sourcing the mean drift behavior of the stochastic process. That is, the stochastic behavior associated with the random variable described by a probability distribution solving the Fokker–Planck equation with potential function V is itself a stochastic gradient flow with respect to that potential.
The pair of differential equations we have written have interpretations as “Forward” and “Backward” processes. The equation generated by A :
t + A f = 0 ,
is called the Backward equation because its solution, i.e., (61), is specified by a terminal condition. On the contrary, the adjoint equation:
t A f = 0 ,
is called the Forward equation because its solution is specified by an initial condition: f ( 0 , x ) .

3.2. Continuity Equations

Let us now begin from the reverse perspective and seek to understand how a continuity equation might be associated with a stochastic process. A measure on S is a top form whose integral over all of S can be normalized to 1. For convenience, we will refer to such a form as μ M S with
M S : = { μ Ω d ( S ) | S μ = 1 } .
By Hodge duality, a measure can be related to a distribution: p = μ , or μ = p Vol S , which would bring us back into the notation used in the previous section. We will move back and forth between the measure-based and probability-density-based approaches more or less at will.
To write down a continuity equation, we consider a one-parameter family of measures, or otherwise a trajectory through the space M S , μ : [ 0 , 1 ] M S , which we denote as μ t ( x ) M S , governed by the differential equation:
t μ t + d i v t μ t = 0 .
Here, v t : [ 0 , 1 ] T S is a time-dependent vector field. Since μ t is a top form, we can regard the second term above as the Lie Derivative and write:
t μ t + L v t μ t = 0 .
Thus, a continuity equation has the immediate interpretation as a flow generated by the vector field v t [19]. Compared to (39), we see that the vector field that generates a gradient flow with respect to the potential F is given by:
v t = d δ F δ p ( p t ) .
A measure μ t that solves the continuity equation is referred to as a strong solution. However, we can also consider a weaker condition in which the integral of (70) against a compactly supported C 1 ( S ) function is always zero [30]. That is, given ψ : [ 0 , 1 ] C 1 ( S ) , we demand:
0 = [ 0 , 1 ] × S ψ t ( t μ t + d i v t μ t ) .
Integrating by parts and using the fact that ψ t is compactly supported (73) implies that
0 = [ 0 , 1 ] × S μ t ( ψ t t + d ψ t ( v t ) ) .
To arrive at (74) we have also used the fact that
d ψ t i v t μ t = d ψ t ( v t ) μ t ,
where d ψ t ( v t ) is the pairing of d ψ t and v t in the sense of forms and tangent vectors. We can interpret (74) as the condition:
[ 0 , 1 ] E μ t ( ψ t t + d ψ t ( v t ) ) = 0 .
Provided we can exchange the order of the t-derivative and the integral over S, we can reframe our analysis in terms of a t-independent function, ϕ : S R , in which case we find
d d t E μ t ( ϕ ) + E μ t ( d ϕ ( v t ) ) = 0 .
This equation can be regarded as an Ehrenfest theorem specifying the mean dynamics of the field ϕ . In particular, it says that ϕ will satisfy a gradient flow in the direction of v t , and thus we can regard (77) as the expectation value of the Stochastic Differential Equation (54).
The heat equation is a special case of the continuity equation with the flow vector given by grad p t p t = grad ( ln ( p t ) ) : to see that this is the case, let us write μ t = p t Vol . Then, it is straightforward to show that
d i grad p t p t μ t = d i grad p t p t Vol S p t = d i grad p t Vol S = div ( grad p t ) Vol S = Δ p t Vol S = Δ μ t .
Here we have used that the Laplacian operator is defined on differential forms as Δ = d d + d d . It is easy to show that the Laplacian of a top form, μ t = p t Vol , is related to the Laplacian of its associated scalar by:
Δ μ t = ( d d + d d ) μ t = d d μ t = d d p t = Δ p t .
Hence, in this case (70) becomes:
t μ t + Δ μ t = 0 ; t p t + Δ p t = 0 .
The fundamental solution to the heat equation with initial data p 0 is written formally in terms of the heat kernel [31,32]:
p t = e t Δ p 0 ,
with e t Δ the operator obtained by exponentiating Δ , which acts on the initial data. Indeed, it is easy to show that:
t ( e t Δ p 0 ) = Δ e t Δ p 0 .
In a position basis, we can represent the heat kernel as an integral kernel of the form:
x | e t Δ | y = H ( x , y , t ) = ( 4 π t ) d / 2 e x y 2 4 t .
Thus, we can interpret the heat kernel as the transition function for a Markov process corresponding to the probability density of translation from sample point x to sample point y in an interval 2 t . This density of the form of a multivariate Gaussian, which we shall denote by H ( x , y , t ) = N ( y , 2 t I ) ( N ( μ , Σ ) , is the multivariate normal distribution with mean parameter μ and covariance matrix Σ ).
A general continuity equation will be generated by a differential operator A , through the equation:
t + A p t = 0 .
We can play the same game as before and solve this differential equation formally by writing p t = e t A p 0 . Again, we can choose a continuous basis and express the heat kernel of the operator A as an integral kernel:
x | e t A | y = H ( x , y , t ) .
It remains natural to interpret H ( x , y ; t ) as the transition probability density for a Markov process [33,34]. The Markov process in question is defined by a continuous set of operators, { P t } t 0 . One can regard the operator P t as the “time evolution operator” associated with A , which, in this context, should be understood as the infinitesimal generator of time translations. In other words, P t is obtained by exponentiating the operator A , P t = e t A .
Given any measurable function, f : S R , the action of the operator P t translates f along the Markov chain. If we work in the positional basis, we can write:
P t ( f ) = e t A ( f ) = S Vol S ( z ) H ( z , x , t ) f ( z ) .
which is simply the convolution of the transition density H with f. This provides us with a very useful formula for determining the operator A :
A ( f ) = lim t 0 P t ( f ) f t .
Equation (87) reverses the operator exponentiation by differentiating along the integral curve P t ( f ) .
The family of operators { P t } t 0 can be regarded as a semigroup with the simple composition law [35]:
P t + s ( f ) = P t ( P s ( f ) ) = P s ( P t ( f ) ) .
Moreover, the set of operators { P t } t 0 defines a stochastic process { X t } t 0 , satisfying the law [17]:
E ( f ( X s + t ) X r t ) = P s ( f ) ( X t ) .
This expression is equivalent to the Martingale condition (61). Writing the operator A in the form (62), the stochastic process { X t } t 0 is immediately identified with the Stochastic Process:
d X t i = m i ( X t , t ) d t + σ j i d W t i .
Thus, we have succeeded in mapping our way back to a Stochastic Differential Equation, this time beginning from a diffusion equation.

4. Dynamical Bayesian Inference and the Backward Equation

The formulation of the Exact Renormalization Group flow as an Optimal Transport problem, particularly in its relationship to the extremization of relative entropy, already suggests a deep connection between renormalization, diffusion, and information theory. We now turn our attention to the task of fleshing out this relationship and, in doing so, providing an information-theoretic conceptualization of renormalization through the intermediary of statistical inference. To accomplish this task, we will build on the language of Dynamical Bayesian Inference introduced in [6]. We will show that the dynamics of an inferred probability model defined by a continuously updating Bayesian scheme give rise to an inverse process governed by a diffusion equation that can be brought into correspondence with ERG. Using this fact, we advocate for the perspective that renormalization can be understood as the inverse process of statistical inference.
We should emphasize that inversion is understood in the statistical sense as a ‘reverse time’ stochastic process relative to the Kolmogorov forward process defined by ERG [36]. Such stochastic processes have recently appeared in diffusion learning, where the reverse time Stochastic Differential Equation relative to a forward diffusion process is modeled via a score-based generative algorithm [37]. From the point of view developed in this section, a score-based generative algorithm can be interpreted as implementing a form of Bayesian inference. We will revisit this and other related points in Section 6.

4.1. Bayesian Inversion

Bayesian inference is a probabilistic, self-consistent approach to adjusting one’s beliefs in the presence of new information. It originates from Bayes’ Law, which encodes the probabilistic relationship between two, potentially co-varying, random variables H S H and E S E :
P ( H E ) = P ( E H ) P ( H ) P ( E ) .
In Bayesian Inference, the variables H and E are respectively identified with the Hypothesis and the Evidence. In this language, one can attach a compelling geometric interpretation to Bayes’ law as measuring the relative volumes of the region H S H × S E , for which both H and E are realized, and the region O S E , for which the observed evidence is true. Thus, the interpretation of Bayesian Inference is that it measures the volume of hypotheses that is consistent with a given set of observed data, which we denote as the region H O S H × S E .
A particularly interesting form of Bayesian Inference arises in the context of so-called Bayesian Inversion problems [5,38,39,40,41,42,43]. There, we are concerned with a pair of random variables, Y S Y and U S U , which are related by the following process:
y = G ( u ) + n
We regard G : S U S Y as a deterministic function, denoted by N S N S Y noise, which can either be associated with the process itself or the process of measuring its output.
The goal of the Bayesian Inversion problem is to determine the signal, u, that led to the measured output, y. However, it is often impractical to truly invert the process; thus, it is more prudent to seek to place a probability distribution over the inputs.
To this end, we regard U S U as having a prior measure Π 0 = π 0 ( u ) Vol U ( u ) M S U and treat the noise N as a random variable independent from Y also possessing of a measure ρ 0 = p 0 ( n ) Vol N ( n ) M S N . Then, we can define the conditional random variable Y U as:
Y U = G ( U ) + N .
If we regard ϕ : S Y S N as a map from output to noise, such that
n = ϕ ( y ) = y G ( u ) .
we can obtain a conditional measure for Y U by pulling back the measure ρ 0 via the map ϕ :
ρ Y U ( y u ) = ϕ * ( ρ 0 ( n ) ) = p 0 ( y G ( u ) ) det ϕ i y j Vol N ( y G ( u ) ) = p 0 ( y G ( u ) ) Vol Y ( y ) .
We regard the pulled-back density p 0 ( y G ( u ) ) as the likelihood function for an inference problem and often denote it by p Y U ( y u ) . We therefore obtain a joint measure:
μ Y , U ( y , u ) = ρ Y U ( y u ) Π 0 ( u ) = p 0 ( y G ( u ) ) π 0 ( u ) Vol Y ( y ) Vol U ( u )
for the random variable ( Y , U ) S Y × S U .
Provided the marginalization:
p Y ( y ) = S U p 0 ( y G ( u ) ) π 0 ( u ) Vol U ( u )
is greater than zero and finite, we can define the Bayesian posterior measure as:
Π y * ( u ) = 1 p Y ( y * ) ρ Y U ( y * u ) Π 0 ( u ) ,
which is just a recapitulation of Bayes’ law as written in the form (91). The notation Π y * ( u ) is meant to remind us that the posterior distribution depends on the realization of observed data y * S Y . We can also recognize p Y ( y ) as the marginal density for the random variable Y modulo the prior distribution π 0 .
Next, we define the Bayesian Potential as:
Φ ( u ; y ) : = ln ( p 0 ( y G ( u ) ) )
which is nothing but the negative log likelihood. With this potential in hand, we can write the Bayesian Inference condition suggestively as:
d Π y d Π 0 ( u ) = 1 p Y ( y ) e Φ ( u ; y ) .
On the left hand side, we have the Radon–Nikodym derivative of the posterior measure with respect to the prior measure. On the right hand side, we have what we would like to interpret as the stationary distribution modulo observations y. This equation can be interpreted as follows: given any measurable function f : S U R
E Π y f ( U ) = E Π 0 d Π y d Π 0 ( U ) f ( U )
meaning that the expectation value with respect to the posterior measure is the same as the expectation value with respect to the prior provided the function is augmented by the evidence in the form of the stationary distribution e Φ ( u ; y ) p Y ( y ) .

4.2. Dynamical Bayes

Dynamical Bayesian Inference is an extension of the conventional approach to Bayesian Inference in which one implements Bayesian inference as an iterative process where swathes of evidence are collected, and the posterior distribution at the end of a given iteration is used as the prior distribution in the following iteration [6].
Using such an approach, one can regard Bayesian Inference as a dynamical system governed by a first-order differential equation. To begin, we define a timelike variable, T, which essentially corresponds to the total number of data point observed. Inference up to the “time” T therefore leverages evidence from the set of observed data { y t } 0 t T , which we can regard as a continuous time stochastic process coming from the data-generating measure μ Y * ( y ) = p Y * ( y ) Vol Y . In what follows we shall regard the data-generating measure as belonging to the parametric family p Y U , associated with the true underlying signal value u * . That is, p Y * ( y ) = p Y U ( y u * ) . The equation satisfied by π T ( u ) is then given by:
T π T ( u ) = D ( p Y U ) E Π T ( D ( p Y U ( y U ) ) π T ( u ) ,
where
D ( p ) = D K L ( p Y *     p )
is the KL-Divergence between a distribution p on S Y and the data-generating distribution p Y * . As an aside, we note that (102) is equivalent to the replicator dynamic which appears in evolutionary game theory provided the fitness of a particular model p Y U ( y u ) is taken to be minus its KL-Divergence with the data-generating model. We refer the reader to [44,45] for more detail.
Among the most significant insights of this approach is that, if we consider models that are within an ϵ neighborhood of the true underlying signal, the 2 n -point functions described by Π T are very well approximated by power laws:
C i 1 . . . i 2 n : = E Π T j = 1 2 n ( U u * ) i j = 1 T n p P 2 n 2 ( r , s ) p I i r i s | u * .
Here, I i j is the inverse of the Fisher Metric arising from the family of distributions p Y U , evaluated at the data-generating parameter u * .

4.3. Bayesian Diffusion for Normal Data

To illustrate the Dynamical Bayesian Inference dynamic, it will be beneficial to work through the problem of performing Bayesian inference on the mean of normally distributed data with known variance, σ 2 . In this case, we take:
y = G ( u ) + n
where n is noise distributed according to a distribution N ( 0 , σ 2 ) :
p 0 ( n ) = 1 2 π σ 2 e 1 2 σ 2 n 2
and G ( u ) = μ is a random draw from the distribution over means for the data y so that p 0 ( y μ ) = N ( μ , σ 2 ) (for notational simplicity, we shall denote this distribution by p ( y μ ) ). The data-generating distribution is taken to belong to the same parametric family of distributions, only with a fixed but unknown “true” underlying mean parameter μ * p Y * ( y ) = p ( y μ * ) .
The governing equation of Dynamical Bayesian Inference can be solved formally as:
π T ( u ) = exp T D ( u ) exp T 0 T d T D ( α T ) .
Here, we have used an abbreviated notation in which D ( u ) = D K L ( u *     u ) is the KL-divergence between two distributions of the same paramteric form with given parameter values u. Notice that only the first exponential depends on the variable u; hence, we conclude that the role of the second exponential is simply to maintain the normalization of π T as a probability distribution. Thus, we can write:
π T ( u ) = 1 Z exp T D ( u )
where
Z = Vol U exp T D ( u ) = exp T 0 T d T D ( α T ) .
In the case of the normal model with fixed variance, the KL-divergence is given by D ( μ ) = 1 2 σ 2 ( μ μ * ) 2 . The T-Posterior is therefore given by:
π T ( μ ) = 1 Z e T 2 σ 2 ( μ μ * ) 2 .
It is straightforward to determine the normalization of this distribution by performing the requisite integral that is now Gaussian. When all is said and done, the T-dependent posterior density is of the form:
π T ( μ ) = 1 2 π ( σ 2 / T ) e 1 2 ( σ 2 / T ) ( μ μ * ) 2 .
Let us now compare (111) to the standard density of a length t increment of a Weiner process with diffusivity parameter σ :
f W t ( x ) = 1 2 π σ 2 t e x 2 2 σ 2 t .
Recall that the density f W t ( x ) solves the heat equation:
t f W t ( x ) σ 2 2 x 2 f W t ( x ) = 0 .
Following the analysis of Section 3, one can also recognize the stochastic process { f W t ( X t ) } t 0 as a martingale adapted to the Weiner process
d X t = σ d W t .
We therefore recognize (111) as describing a shifted Brownian motion for the mean parameter with diffusivity σ in the “time” parameter τ = 1 T :
π τ ( μ ) = 1 2 π σ 2 τ e ( μ μ * ) 2 2 σ 2 τ .
This observation lends credence to the idea that Bayesian inference can be associated with a diffusion process. Moreover, it provides an important insight: Bayesian Diffusion ensues backwards with respect to the performance of Bayesian Inference, in the timelike parameter, τ , which is the inverse of the Bayesian time, T, originally introduced.
Given the τ -posterior π τ , and terminal data such as the parametric form of the data-generating distribution, one can obtain the τ path of the posterior predictive distribution for future data by marginalizing. For the normal model, this means:
p τ ( y ) = R d μ π τ ( μ ) p ( y μ ) = R d μ π τ ( μ ) p 0 ( y μ ) ,
which we can recognize as the convolution of π τ and p 0 ( y ) and interpret as the action of the Green Function of the Heat Operator translating the model forward in τ (and backward in T).

4.4. Bayesian Drift and Scheme Independence

We have now shown that the solution to the Dynamical Bayesian inference equation, (111), is also the solution to the standard diffusion equation when viewed as transforming backwards relative to the update time, T. As we shall now discuss, we can promote (111) to a solution to a drift-diffusion equation by an analogous argument to the one that appeared in Section 2.3. In particular, we argue that drift in the context of Bayesian diffusion is also associated with a redundancy of description, in this case related to the specific sequence with which data are observed.
The reasoning behind this argument is most easily understood through an example. Suppose again that one is performing Bayesian inference on a system is tthataken to follow a normal distribution with known variance, σ 2 , but an unknown mean. To deduce the mean of the distribution governing the system, we observe a sequence of N-independent, identically distributed random draws from the true underlying distribution, E = { Y 1 , . . . , Y N } . Starting with a normal prior, one finds that the mean of the posterior distribution after observing the first n pieces of data shall be given by:
μ ( E n ) = 1 n i = 1 n Y i .
Let π Perm ( N ) be a permutation, and let π ( E ) = { Y π ( 1 ) , . . . , Y π ( N ) } denote the same set of evidence but in a new order defined by π . We interpret this transformation as changing the sequence in which the data are obtained. Crucially, π ( E ) is still a set of N-independent, identically distributed random variables drawn from the same data-generating distribution. If we compute the maximum likelihood estimate based on the first n piece of data appearing in π ( E ) , we find:
π ( μ ( E n ) ) = μ ( π ( E ) n ) = 1 n i = 1 n Y π ( i ) .
and hence it is true that π ( μ ( E n ) ) μ ( E n ) . However, if we compute the maximum likelihood estimate on account of all of the data contained in either set, we find:
π ( μ ( E ) ) = 1 N i = 1 N Y π ( i ) = 1 N i = 1 N Y i = μ ( E ) .
That is, the terminal maximum likelihood estimate is invariant with respect to the sequence in which the data are incorporated into the model.
We extrapolate this observation to the statement that the posterior distribution can be assigned an arbitrary path through the space of probability models, provided the terminal distribution remains consistent with the large observation limit, that is, the central tendency towards the data-generating distribution. This is completely analogous to the role of drift in defining an ERG scheme: the invariant definition of an ERG flow is given in terms of the IR fixed point it describes, and the path through the space of theories by which the theory moves from the UV to the IR is scheme-dependent.
Operationally, we make use of the sequencing freedom in the Bayesian inference to “seed” the flow with information about the individual inference trajectory by specifying the τ path of the maximum a posteriori estimate (MAP). In the general case where we are given a set of signal parameters u S U , we specify the trajectory of the MAP as a flow on the manifold S U generated by a vector field V : S U T S U . That is,
γ : R S U
such that
γ * d d τ = V ( γ τ )
or, in more standard notation,
d d τ γ τ = γ ˙ τ = V ( γ τ )
The trajectory of the MAP arises from maximizing the log-likelihood of the data-generating model. Thus, in many cases the MAP path, γ τ will realize a gradient descent:
γ ˙ τ = grad u ln ( p Y ( y G ( γ τ ) ) ) = grad u Φ ( u ; y ) | γ τ = I i j ( γ τ ) G k u i Φ y k | γ τ u j ,
where I i j are the matrix components of the inverse Fisher metric, and we have implemented the chain rule to evaluate the derivative. It is very crucial to note that the gradient descent here ensues in the direction of increased data observation, that is, as T , γ approaches the data-generating parameter value, as desired.

4.5. Generic Bayesian Diffusion at Late T

The Normal Model is significant because it arises as the late T limit of any Dynamical Bayesian Inference scheme for which the parameters of the data-generating distribution are equal to some fixed values. This observation arises naturally as an asymptotic limit of the solution to (102) and is a statement of the Central Limit Theorem. At late T, the KL-Divergence can be approximated by the quadratic form:
D K L ( u *     u ) = 1 2 I i j ( u u * ) i ( u u * ) j + O ( ( 1 T ) 2 )
meaning we can write the unnormalized posterior distribution as:
ϖ T ( u ) = e T 2 I i j ( u u * ) i ( u u * ) j .
Provided I does not depend on u, the normalization is obtained by performing a Gaussian integral, and we can write the posterior distribution as:
π τ ( u ) = 1 | 2 π τ I 1 | e 1 2 τ I i j ( u u * ) i ( u u * ) j .
Here, we have changed the variables to τ in order to observe that this is the distribution of a multi-variate Brownian motion.
Using the gauge freedom discussed in the last section, we can promote this solution to one of the forms:
π τ ( u ) = 1 | 2 π τ I 1 | e 1 2 τ I i j ( u γ τ ) i ( u γ τ ) j
where γ τ is the trajectory of the MAP. As advertised, both (126) and (127) are consistent with the late T statistics (104).

4.6. A Partial Differential Equation for Bayesian Inference

In light of the previous sections, we shall now show that one can derive a convection–diffusion equation describing the evolution of the posterior predictive distribution when it is updated according to Bayes’ law. As we have come to recognize, the sense in which Bayesian inference describes a diffusion process is in moving backwards relative to the observation of new information. We will therefore work in the time parameter τ , defined as the inverse to the time parameter T that tracks the amount of data observed. Given the time τ posterior distribution, which we have argued is of the form of a modified heat kernel, we can define the time τ posterior predictive model for future data p τ by marginalizing over the likelihood model:
p τ ( y ) = S U Vol U ( u ) π τ ( u ) p Y U ( y u ) .
In Section 3, we reviewed the relationship between the solution to a convention-diffusion equation and the convolution of given boundary value data with a heat kernel. Comparing (128) with (86), it is natural to regard the posterior distribution as a Markovian transition kernel measuring the probability of going from y S Y to y G ( u ) S Y . From this perspective, we define the set of operators { P τ } τ 0 such that:
P τ ( p 0 ) ( y ) : = E Π τ ( p 0 ( y G ( U ) ) ) .
Following the approach outlined in (87), we can now deduce the diffusion equation generated by the semi-group { P τ } τ 0 . By Taylor Expansion, we can compute:
P τ ( p 0 ) ( y ) = E Π τ p 0 ( y ) p 0 y i G i u j u ˙ j τ + 1 2 2 p 0 y i y j G i u k G j u l u ˙ k u ˙ l τ 2 + O ( ( u ˙ τ ) 3 ) .
Here, we regard u ˙ i τ as corresponding to the infinitesimal flow of the parameter u i in terms of the vector field γ ˙ τ generating the flow of the MAP. That is, u ˙ i τ = δ u i . Following this interpretation:
P τ ( p 0 ) ( y ) = E Π τ p 0 ( y ) p 0 y i G i u j γ ˙ τ τ + 1 2 2 p 0 y i y j G i u k G j u l δ u k δ u l + O ( ( δ u ) 3 ) .
We can now use (104) to compute the expectation values explicitly to the order specified in the expansion. Because we are describing the diffusion in terms of the variable τ = 1 / T and we shall eventually be taking the limit as τ 0 , we can use the T results described in Equations (104) and (127). In particular, we make use of the fact that
E Π τ j = 1 2 n δ u i j = 1 T n p P 2 n 2 ( r , s ) p I i r i s .
Meaning, we can write:
P τ ( p 0 ) ( y ) = p 0 ( y ) G i u j γ ˙ τ j p 0 y i τ + G i u k G j u l I k l ( γ τ ) 1 2 2 p 0 y i y j τ + O ( τ 2 ) .
We can rewrite these terms in a slightly more illuminating way by writing:
G i u k G j u l I k l ( γ τ ) = ( G * I 1 ) i j | γ τ = K i j ( γ τ ) .
Moreover, if we assume the MAP follows a gradient flow with respect to the log-likelihood, we can also write:
G i u j γ ˙ τ j = G * grad u Φ ( u ; y ) | u = γ τ = G i u j G k u l I j l Φ y k = K i j ( γ τ ) Φ y j : = m i .
Putting everything together, we have therefore shown that:
P τ ( p 0 ) ( y ) p 0 ( y ) τ = m i p 0 y i + K i j ( γ τ ) 2 p 0 y i y j + O ( τ 2 ) .
Thus, taking the limit τ 0 we obtain the result:
p 0 τ = lim τ 0 P τ ( p 0 ) p 0 τ = m i p 0 y i + K i j ( γ τ ) 2 p 0 y i y j .
This is precisely of the form of the Kolmogorov equation for a diffusion process with potential V = Φ ( γ τ ; y ) = Φ τ !
It is tempting to interpret the matrix K i j as being associated with the induced Fisher metric in the space of probability distributions along the path of the MAP. From this perspective, we can regard Y τ , the random variable associated with the T-dependent posterior predictive distribution, as a stochastic process on a curved space specified by the Stochastic Differential Equation:
d Y τ = grad Φ τ d τ + d W τ .
The potential is given by Φ τ , which is minus the log-likelihood associated with the time τ maximum likelihood estimate for the generating distribution. Thus, we have shown that a Dynamical Bayesian inference induces a gradient flow with respect to the log-likelihood of the data-generating distribution.

5. The ERG Flow/Dynamical Bayesian Inference Correspondence

We are now prepared to build the dictionary relating ERG flow and Bayesian Inference. For simplicity, we shall consider one-parameter families of probability distributions on finite dimensional sample spaces; however, it is a simple exercise to generalize these insights to the infinite dimensional case as well. To begin, let us review the work we have presented in the previous sections.
In Section 2, we recalled the Wegner–Morris formulation for ERG and demonstrated that it is equivalent to a one-parameter family of probability distributions described by a gradient flow with respect to the relative entropy:
$$\frac{\partial p_t}{\partial t} = \mathrm{grad}_{W_2}\, D_{KL}(p_t \,\Vert\, q_t) = \frac{\partial}{\partial x^i}\left( p_t\, g^{ij}\, \frac{\partial \Sigma}{\partial x^j} \right) = \frac{\partial}{\partial x^i}\left( p_t\, \Psi^i \right).$$
In this equation, we interpret the time parameter $t$ as a logarithm of the scale, $t = \ln\Lambda$. The distribution $q_t = \hat{q}_t / Z_q$, with $\hat{q}_t = e^{-V}$ for the potential $V$, can be regarded as specifying the ERG scheme through the fixing of a stationary point along the flow generated by (139). The choice of $V$ defines the functional $\Sigma = \ln(\hat{p}_t / \hat{q}_t)$ and the reparameterization kernel $\Psi = \mathrm{grad}_g\, \Sigma$; hence, it is equivalent to a choice of scheme in the standard Wegner–Morris sense. As we reviewed in Section 3, this Fokker–Planck equation is equivalent to the Kolmogorov Forward equation for the stochastic process governed by the Stochastic Differential Equation:
$$dX_t = -\big(\mathrm{grad}_g V\big)\, dt + \sqrt{2}\, dW_t.$$
This is intuitively satisfying since such an equation describes a stochastic gradient descent of the potential function V. Once initial data are supplied in the form of a UV theory, p 0 , (139) completely describes an ERG flow terminating at an IR fixed point.
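This stochastic gradient descent can be illustrated with a short Euler-Maruyama sketch. The double-well potential below is an arbitrary choice made purely for illustration; the point is only that a broad "UV" ensemble relaxes onto the stationary distribution proportional to $e^{-V}$, which plays the role of the IR fixed point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choice of potential (not taken from the text): V(x) = x^4/4 - x^2/2,
# whose stationary density is proportional to exp(-V).
def grad_V(x):
    return x**3 - x

# Euler-Maruyama integration of dX_t = -grad V(X_t) dt + sqrt(2) dW_t.
n_particles, dt, n_steps = 20_000, 1e-3, 4_000
x = rng.normal(0.0, 3.0, size=n_particles)     # broad "UV" initial ensemble
for _ in range(n_steps):
    x += -grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_particles)

# Compare the empirical histogram with the stationary density exp(-V)/Z.
bins = np.linspace(-3.0, 3.0, 61)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
target = np.exp(-(centers**4 / 4 - centers**2 / 2))
target /= target.sum() * (centers[1] - centers[0])   # normalize on the grid
print("max deviation from exp(-V)/Z:", float(np.abs(hist - target).max()))
```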
In Section 4, we introduced the notion of Dynamical Bayesian Inference. Dynamical Bayesian Inference describes a one-parameter family of probability distributions obtained by implementing Bayes' law using data collected from a continuous time stochastic process. To quantify this family of distributions, we introduced the “time” parameter, $T$, which corresponds to the number of data points incorporated into the model. In the direction of increasing $T$, the inferred probability model converges onto the distribution generating the observed data. In this respect, a Dynamical Bayesian flow has the complexion of an inversion in the ERG sense because it begins with an uninformed prior distribution but eventually converges to an informed distribution. In the language of ERG, this describes a flow from an IR theory to a UV theory. The pair consisting of an uninformed prior, along with the specification of a sufficiently complete set of data, therefore defines a flow terminating at a UV theory. We take this as our definition of a Dynamical Bayesian Flow.
This picture suggests that Dynamical Bayesian Inference and ERG flow are inverses of each other. If we can find a Dynamical Bayesian inference that begins with a prior distribution equal to the IR fixed point of an ERG flow, and which terminates at a data-generating distribution equal to the UV initial data of said ERG flow, then we will have obtained an inversion of the ERG flow. Let us now describe a strategy for pairing ERG flows with Dynamical Bayesian flows.

5.1. Dynamical Bayesian Flow → ERG Flow

Suppose we are given a Dynamical Bayesian flow and asked to determine an ERG flow that is inverse to it. Recall, we have defined a Dynamical Bayesian Flow as the pair, ( q 0 , { Y t } t = 1 T ) , where q 0 is an uninformed prior distribution and { Y t } t = 1 T is a set of data generated by a distribution p * . Using these data, it is straightforward to define the corresponding ERG flow as the one-parameter family of probability distributions obtained from the Dynamical Bayesian flow when viewed as evolving backwards with respect to the time parameter T. In other words, one takes the terminal distribution of the Dynamical Bayesian flow, p * , as defining the UV initial data of the ERG flow and obtains subsequent probability distributions along the ERG flow by removing items of data from the inferred model.
In Section 4.6, we derived a partial differential equation governing just such a situation, in which the posterior predictive distribution evolves backwards against the collection of new data. The resulting partial differential equation describes a diffusion process that we dubbed Bayesian Diffusion: that is, one obtains a one-parameter family of probability distributions { p τ } such that p 0 = p * and for which:
$$\frac{\partial p_\tau}{\partial \tau} + m^i\, \frac{\partial p_\tau}{\partial y^i} - K^{ij}(\gamma_\tau)\, \frac{\partial^2 p_\tau}{\partial y^i \partial y^j} = 0.$$
This differential equation describes the evolution of the probability model for the stochastic process $Y_\tau$, which satisfies the Stochastic Differential Equation
$$dY_\tau = -\big(\mathrm{grad}_K\, \Phi\big)\, d\tau + dW_\tau.$$
Here, γ τ is a trajectory in the model space of the theory describing the path of the maximum a posteriori parameter estimate generated by the sequence of observed data, K is the pullback of the Fisher Information Metric on the space of models defined in (134), and Φ is the log-likelihood function associated with the Bayesian Inference scheme.
Together, the aforementioned items constitute a choice of scheme for the Dynamical Bayesian flow. Bayesian diffusion can therefore be associated with an ERG flow governed by the Wegner–Morris equation:
$$\frac{\partial p_\tau}{\partial \tau} = \mathrm{grad}_{W_2}\, D_{KL}(p_\tau \,\Vert\, q_\tau) = \frac{\partial}{\partial y^i}\left( p_\tau\, K^{ij}\, \frac{\partial \Sigma}{\partial y^j} \right) = \frac{\partial}{\partial y^i}\left( p_\tau\, \Psi^i \right),
where now the stationary distribution, scheme function, and reparameterization kernel are, respectively, given by:
$$q_\tau = \frac{e^{-\Phi(\gamma_\tau; y)}}{Z_q}\,; \qquad \Sigma = \Phi(\gamma_\tau; y) - \Phi(u; y)\,; \qquad \Psi = \mathrm{grad}_K\, \Sigma.$$
These data, along with the distribution p * , define an ERG flow in the Wegner–Morris sense, which, by construction, is the inverse of the Dynamical Bayesian flow we began with.
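The logic of this subsection can be illustrated with a toy conjugate model of our own choosing (a Beta-Bernoulli model, which plays no special role in the construction above): incorporating observations one at a time traces out the Dynamical Bayesian flow, while removing the same observations retraces the flow exactly, which is the sense in which the associated ERG flow inverts it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical conjugate example (not from the text): Bernoulli data with a
# Beta(a, b) prior.  Bayes' law acts by a -> a + y, b -> b + 1 - y, so each
# update can be undone exactly by subtracting the same observation.
def update(state, y):
    a, b = state
    return (a + y, b + 1 - y)

def downdate(state, y):
    a, b = state
    return (a - y, b - 1 + y)

prior = (1.0, 1.0)                      # uninformed prior q_0
data = rng.integers(0, 2, size=20)      # observations {Y_t}, t = 1..T

# Forward Dynamical Bayesian flow: incorporate the data one item at a time.
states = [prior]
for y in data:
    states.append(update(states[-1], y))

# Backward ("ERG") flow: remove the data again, recovering every earlier state.
state = states[-1]
for y in reversed(data):
    state = downdate(state, y)
assert state == prior
print("terminal posterior (a, b):", states[-1], " -> recovered prior:", state)
```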

5.2. Dynamical Bayesian Flow ← ERG Flow

Conversely, suppose that we are given an ERG flow in a Wegner–Morris form and asked to determine a Dynamical Bayesian flow that is inverse to it. To do so, we first identify the Bayesian Diffusion process that the ERG flow corresponds to. Since we have shown that ERG and Bayesian Diffusion are governed by the same equations, this is as simple as translating between the ERG scheme and the Dynamical Bayesian scheme. Comparing (139) and (143), we deduce that the ERG associated with an optimal transport with potential V and sample space metric g is equivalent to a Bayesian diffusion in which V is taken as the log-likelihood function, and g is identified with the metric K. These data are sufficient to define a Bayesian inference problem. Notice, if we go all the way back to (13), we can finally provide a conceptual understanding of the seed action: the seed action sets the log-likelihood for the Bayesian Inference scheme related to the ERG flow it defines.

5.3. A Dictionary

We summarize the analysis of this section in Table 3, which provides a dictionary translating between the Wegner–Morris equations relevant to finite-dimensional sample space ERG, infinite-dimensional sample space ERG, and Bayesian Diffusion.

5.4. Renormalizability and Scale

The dictionary in Table 3 provides a blueprint for interpreting ERG in the language of Statistical Inference. In this role, our work suggests new approaches to understanding and resolving many interesting problems inside and outside of field theory. As a demonstration, we shall use this final subsection to discuss the meaning of renormalizability in the context of an ERG flow related to the Bayesian Inference paradigm.
An important observation from Table 3 is the correspondence between the Fisher Metric in the inference context and $\dot{C}_\Lambda(x, y)$ in the ERG context. In the exact renormalization of a free field theory, $\dot{C}_\Lambda(x, y)$ is the regulated two-point function and therefore sets a running momentum scale for operators in the theory. The Fisher Metric, $I$, plays an analogous role in the Bayesian Inference scheme as a generalized two-point function encoding a notion of scale through the covariance between operators.
The interpretation of the Fisher Metric as defining an energy scale is made very clear when we consider the inverse Bayesian flow as a diffusion process. Let $\mathcal{M} = \mathrm{dens}(S)$ denote the manifold of probability distributions over a sample space $S$. Then, a Bayesian diffusion, or equivalently an Exact Renormalization Group flow, can be described by a drift-diffusion process generated by an operator $L: \mathcal{M} \to \mathcal{M}$, such that:
$$\left( \frac{\partial}{\partial t} + L \right) p_t = 0.$$
Given initial data (for example, in an ERG we would provide a UV theory) $p_0 \in \mathcal{M}$, we can write the solution to (145) symbolically as:
$$p_t = e^{-tL}\, p_0,$$
where $e^{-tL}$ is the heat kernel of $L$. To give more concrete meaning to (146), let us assume that $L$ is a positive semi-definite operator that can be diagonalized as
$$L(\psi_n) = \lambda_n^2\, \psi_n.$$
Here, $\{\psi_n\}$ is a countably infinite set forming a basis for $\mathcal{M}$, and each $\lambda_n^2$ is non-negative and may be equal to zero. An arbitrary element $p \in \mathcal{M}$ can be expanded as a series, $p = \sum_n p^n \psi_n$, where $p^n$ is the coordinate of $p$ in the eigendirection $\psi_n$.
The spectrum of L defines an emergent energy scale in the following sense: consider the action of (146) as given by:
$$p_t = \sum_n e^{-t \lambda_n^2}\, p_0^n\, \psi_n.$$
(148) dictates that the projection of $p_t$ onto each mode, $\psi_n$, is damped over time with a strength determined by the “energy” $\lambda_n^2$. At time $t$, the effective description exponentially suppresses modes with large eigenvalues of the operator $L$ and thus sequentially removes these modes in a generalized integration over effective “momentum shells”. From Equation (137), we can see that the operator $L$ is generically a convection–diffusion operator with a diffusion matrix given by the Fisher Information. Through further comparison to (6), we see that in the ERG of physical theories the analogous role is played by the regulated two-point function. This leads to the important conclusion that, in physical contexts, the emergent energy scale is in fact equivalent to a physical one.
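A concrete, if elementary, instance of this mode-by-mode damping is ordinary heat flow on a periodic interval, where $L = -\partial_x^2$ is diagonalized by Fourier modes with $\lambda_n^2 = k_n^2$. The sketch below illustrates Equation (148) only; the initial profile is an arbitrary choice, and the output shows the high-momentum mode being suppressed long before the low-momentum one:

```python
import numpy as np

# Heat flow p_t = exp(-t L) p_0 with L = -d^2/dx^2 on a periodic interval,
# diagonalized by Fourier modes with eigenvalues lambda_n^2 = k_n^2.
N, Lbox = 512, 2 * np.pi
x = np.linspace(0, Lbox, N, endpoint=False)
p0 = 1.0 + 0.5 * np.cos(3 * x) + 0.5 * np.cos(40 * x)   # low- and high-frequency content

k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)           # mode "momenta" k_n
p0_hat = np.fft.fft(p0)

for t in (0.0, 1e-3, 1e-1):
    p_t_hat = np.exp(-t * k**2) * p0_hat                # damp mode n by exp(-t lambda_n^2)
    amp_low = np.abs(p_t_hat[3]) / N                    # amplitude of the k = 3 mode
    amp_high = np.abs(p_t_hat[40]) / N                  # amplitude of the k = 40 mode
    print(f"t={t:6.3f}  |k=3 mode| = {amp_low:.4f}   |k=40 mode| = {amp_high:.6f}")
```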
More generally, interpreting the two-point function, or the covariance matrix, in a statistical inference problem as generating an emergent energy scale provides the foundation for an information-theoretic interpretation of the conditions for renormalizability. In Wilsonian RG, a theory is said to be non-renormalizable if the divergences present in higher-order Feynman diagrams can only be canceled by the introduction of an infinite number of arbitrarily high-energy couplings [46]. By contrast, a theory is said to be renormalizable if arbitrarily high-order Feynman diagrams can be computed by introducing only a finite number of operator-sourced counterterms.
In the language of statistical inference, the operator content of a theory is related to the problem of model selection [47,48]. In the context of parametric statistics (as we have mentioned in the introduction, the relationship between Wilsonian RG and ERG is analogous to the relationship between Parametric and Non-Parametric statistics), model selection can be reduced to determining the set of sufficient parameters needed to form a model that can accurately compute the expectation values of any observable associated with the system of interest. A natural framing of this problem is given in terms of n-point correlation functions for the random variable, Y, observed throughout the inference. It is sensible to restrict our attention to n-point functions since arbitrary observables can be constructed from them using a Taylor expansion. Put differently, a probability distribution can be reconstructed with knowledge of all its moments.
In view of the previous discussion, higher n-point functions can be interpreted as encoding information at higher values of the emergent energy scale. This inspires the interpretation that an inference model is “renormalizable” if there exists a finite $N$ such that, for any $n > N$, the n-point function can be computed from the information contained only in m-point functions with $m \leq N$. In other words, there exists an energy $\lambda_*^2$, as measured through $L$, above which all of the information in the theory is actually encoded in lower-energy operators. This happens, for example, in a Gaussian theory, in which all n-point functions with $n > 2$ can be formulated as sums of products of 2-point functions using Wick's theorem. If no such finite $N$ exists, the inference problem is “non-renormalizable”. A non-renormalizable theory can therefore be understood as a theory in which an infinite number of n-point functions is required to compute the expectation values of arbitrary observables. In other words, there is no energy scale above which information becomes encoded in the energy scales below it; every energy scale contributes, in some sense, independently to the theory.
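The Gaussian example can be checked directly by Monte Carlo. In the following sketch the covariance matrix is an arbitrary choice; the sampled 4-point function agrees with the Wick sum of products of 2-point functions, which is precisely the statement that no independent information resides above the $N = 2$ “scale”:

```python
import numpy as np

rng = np.random.default_rng(3)

# A Gaussian "theory" with a chosen covariance: all higher n-point functions are
# fixed by the 2-point function (Wick / Isserlis), so the model is renormalizable
# with N = 2 in the sense described above.
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.5, 0.4],
              [0.2, 0.4, 2.0]])
samples = rng.multivariate_normal(mean=np.zeros(3), cov=C, size=2_000_000)

i, j, k, l = 0, 1, 2, 1
four_point_mc = np.mean(samples[:, i] * samples[:, j] * samples[:, k] * samples[:, l])
four_point_wick = C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]
print("Monte Carlo   <y_i y_j y_k y_l>                 =", round(float(four_point_mc), 4))
print("Wick pairing  C_ij C_kl + C_ik C_jl + C_il C_jk =", round(float(four_point_wick), 4))
```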

6. Discussion

In this note, we have demonstrated that an ERG flow can be identified with a diffusion process that is inversely related to a Dynamical Bayesian Inference scheme. In particular, we have argued that an ERG flow can be understood as a one-parameter family of probability distributions arising when data are continuously removed from the inferred probability model. We have motivated this interpretation by illustrating that the equations governing ERG and Bayesian Diffusion can be brought into direct correspondence with one another, as outlined in Section 5. The resulting dictionary provides a novel, fully information-theoretic language for understanding ERG flow. It also provides an operational answer to the question of what it means to “invert” an ERG flow.
From a very general perspective, the solution to this problem can be framed in the following way. Given a preliminary probability distribution, p 0 , we imagine running our model through a noisy channel generated by a diffusion operator B. In other words, we produce a probability distribution, p τ , which solves the differential equation:
$$\frac{\partial p_\tau}{\partial \tau} + B(p_\tau) = 0,$$
with initial data $p_0$. After a given period of time, $t$, we obtain a new probability distribution, $p_t = e^{-tB}(p_0)$, which has lost some of the information previously contained in $p_0$ to diffusion. In the case of an ERG flow viewed from the functional diffusion perspective, we can regard this loss of information as being generated by a coarse-graining scheme encoded in the operator $B$.
We then ask: can this diffusion process be “inverted”? Since exact inversion may not be possible, we frame this problem in the form of an optimization scheme. Consider the set $\mathcal{F}$ consisting of all operators $F$ generating a one-parameter flow of probability distributions, $q_T$, such that:
$$\frac{\partial q_T}{\partial T} + F(q_T) = 0,$$
subject to an initial condition $q_0$. If we take the initial data of this process to be the terminal distribution of the diffusion process given by (149), $q_0 = e^{-tB}(p_0)$, we can interpret the solution $q_t = e^{-tF}(q_0)$ as a reconstruction algorithm for the initial data, $p_0$. It is natural to identify the optimal reconstruction algorithm with the operator $F_* \in \mathcal{F}$ for which the relative entropy between the reconstructed distribution, $q_t$, and the initial data, $p_0$, is minimal:
$$F_* := \underset{F \in \mathcal{F}}{\arg\min}\; D_{KL}\big(p_0 \,\Vert\, e^{-tF} e^{-tB} p_0\big).$$
In this language, we interpret the main result of our paper as dictating that, given an ERG generated by a diffusion operator $B$, the optimal reconstruction operator $F_*$ corresponds to a continuous Bayesian Inference scheme in which the information lost to coarse-graining is re-learned and hence reincorporated into the model. This clarifies the sense in which an ERG is “invertible”: it can be inverted provided we allow for the reconstruction of information ostensibly destroyed by diffusion.
One can visualize this process as follows: imagine an experimenter performing a statistical inference experiment in which they observe a collection of data $\{Y_i\}_{i=1}^T$ generated from the distribution $p_0$. Next, imagine that we place each of the observations along the real axis, distinguishing a series of points, each of which we label by a probability distribution $\{p_T\}_{T \geq 0}$. The probability distribution at the $T$th point, $p_T$, is obtained by incorporating all of the data to the left of $T$ into a model using Bayes' law. Moving to the left along this axis corresponds to removing data from the model and therefore induces a diffusion process and, by extension, an ERG scheme. Conversely, moving to the right along this axis corresponds to reincorporating lost data and therefore inverts the ERG flow.
Framing the relationship between ERG and Statistical Inference in terms of the reconstruction problem (151) suggests several interesting paths for future study. Firstly, the reconstruction problem is equivalent to a common problem encountered in Machine Learning when one wishes to sample data from an analytically intractable distribution, p 0 . An approach to this problem goes by the name of Diffusion Learning [49,50,51,52]. Diffusion learning is a two-step process: first, one uses a diffusion operator, B, with a known fixed point to transform the initial data p 0 into an analytically tractable form. Then one identifies a second diffusion operator, F, which optimally reconstructs the initial data without sacrificing analytic tractability. This routine is equivalent to (151), provided we restrict the set of allowed reconstruction algorithms to operators that generate diffusion processes. More generally, the information-theoretic formulation of ERG constructed in this paper renders renormalization in a form that is amenable to applications outside of pure physics. We hope this will catalyze continued work, especially at the intersection of physics and data science, geared towards constructing and better understanding machine learning algorithms like diffusion learning. Since the original drafting of this note, some work to this end has been undertaken. In [53], the approach introduced in this paper was adapted into a practical renormalization scheme for generic statistical inference models including neural networks. As a proof of concept, this scheme was subsequently applied to construct renormalization group flows for autoencoder neural networks.
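For orientation, the sketch below runs this two-step routine for a one-dimensional Gaussian target; the Gaussian case is chosen only because the score of the noised marginal is then available in closed form, so the reverse-time Stochastic Differential Equation of [36] can be integrated without any learning step. The forward flow erases the target into an approximately standard normal distribution, and the backward flow reconstructs it:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy instance of the two-step routine for a 1D Gaussian target p_0 = N(mu0, s0^2),
# chosen so that the score of the noised marginal is known exactly.
mu0, s0 = 2.0, 0.5
T, n_steps, n_samples = 3.0, 3000, 200_000
dt = T / n_steps

def marginal(t):
    # The forward (noising) SDE dX = -X dt + sqrt(2) dW keeps Gaussian marginals.
    m = mu0 * np.exp(-t)
    v = s0**2 * np.exp(-2 * t) + (1 - np.exp(-2 * t))
    return m, v

def score(x, t):
    m, v = marginal(t)
    return -(x - m) / v          # grad_x log p_t(x) for a Gaussian marginal

# Step 1 (operator B): after time T the data distribution is close to N(0, 1).
m_T, v_T = marginal(T)
x = rng.normal(m_T, np.sqrt(v_T), size=n_samples)

# Step 2 (operator F): integrate the reverse-time SDE,
# dX = [f(X, t) - g^2 score(X, t)] dt + g dW, backwards from t = T to t = 0.
for n in range(n_steps, 0, -1):
    t = n * dt
    drift = -x - 2.0 * score(x, t)
    x = x - drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)

print("reconstructed mean/std:", round(float(x.mean()), 3), round(float(x.std()), 3),
      "  target:", mu0, s0)
```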
A second fascinating implementation of (151) appears in the study of Holography [54]. There, one is interested in reconstructing a bulk spacetime from the data contained in a quantum field theory on its conformal boundary [55,56]. The relationship between our work and bulk reconstruction is very natural. In the modern literature, bulk reconstruction is often interpreted through the language of Quantum Error Correction as the inversion of a quantum channel associated with the propagation of bulk data into a subregion of the conformal boundary [57,58,59,60]. Placed in this context, the bulk reconstruction problem is a non-commutative generalization of (151) in which one replaces probability distributions with density operators, and the maps $B$ and $F$ with Quantum Channels [61,62] (see Appendix B for a short discussion of the relationship between this paper and error correction). This suggests that by studying a quantum version of the correspondence introduced in this paper, one might be able to shed light on some of the mysterious aspects of the AdS/CFT correspondence. Beyond this, there are also interesting questions pertaining to the generalization of our correspondence into the broader language of non-commutative probability theory and $C^*$ operator algebras. These include understanding statistical inference for non-commutative operator algebras [63,64,65], the study of non-commutative diffusion processes [66,67,68], and the formulation of renormalization techniques that work beyond the scope of Euclidean QFT, in the vein of entanglement renormalization and tensor networks [69,70,71,72,73,74].
Finally, our picture of ERG can provide a powerful tool for constructing and interpreting theorems about renormalization. By formulating ERG flows as a related statistical inference problem, one obtains a stark accounting of the information/degrees of freedom contained in the renormalized theory at any point over the course of its flow. Such knowledge is of great use in computing the information lost between various points along an RG flow and, especially, in constructing and interpreting RG monotones [75,76,77,78,79]. We are hopeful that our conceptual approach to the ERG can expand renormalization in its role as a toolset for studying the space of Quantum Field Theories.

Author Contributions

Conceptualization, D.S.B. and M.S.K.; Methodology, D.S.B.; Formal analysis, D.S.B. and M.S.K.; Investigation, D.S.B. and M.S.K.; Writing—original draft, D.S.B. and M.S.K.; Funding acquisition, D.S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Pierre Andurand.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We thank Jonathan Heckman for collaboration on dynamical Bayes in [6], which led to many of the ideas in this paper. We are grateful to the participants of the “String Data 2022” conference, where this work was first presented, and for subsequent comments from Semon Rezchikov and Miranda Cheng. We also wish to thank Alex Stapleton for related collaboration on forthcoming work on diffusion models and reconstruction channels in holography. Finally, we thank Samuel Goldman and Robert Leigh for enlightening discussions on Exact Renormalization. DSB acknowledges support from Pierre Andurand over the course of this research. MSK is supported through the Physics Department at the University of Illinois at Urbana-Champaign.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Optimal Transport

Optimal transport consists of redistributing the mass between two probability measures in order to minimize a cost. To be precise, let $Y \in S$ be a random variable valued in the sample space $S$. We regard $S$ as an orientable differentiable manifold possessing a reference measure, $\mathrm{Vol}_S(y)$, which is simply the volume form on $S$. A probability measure can then be obtained by considering a measurable function $p: S \to \mathbb{R}_+$ that is normalized in the integral sense:
$$\int_S p = \int_S p(y)\, \mathrm{Vol}_S(y) = 1.$$
Optimal transport can then be stated in two equivalent forms. The Monge Formulation is as follows: given a space $S$ and two probability distributions $p_1$ and $p_2$, corresponding to measures $\mu_1 = p_1\, \mathrm{Vol}_S$ and $\mu_2 = p_2\, \mathrm{Vol}_S$, we seek a transport function $T: S \to S$ such that:
1. The transport function pulls back mass from $\mu_2$ to $\mu_1$,
$$T^* \mu_2 = \mu_1,$$
meaning that
$$\int_{T(U)} \mu_2 = \int_U \mu_1$$
for any subset $U \subseteq S$. Note that this is usually written in terms of the “pushforward”, $T_* \mu_1 = \mu_2$, which is the pullback by the inverse map $T^{-1}: S \to S$, i.e., $\mu_2 = (T^{-1})^* \mu_1$.
2. The transport function is selected to minimize the objective function:
$$M[T] = \int_S \mu_1(y)\, c\big(y, T(y)\big),$$
where $c: S \times S \to \mathbb{R}$ is a cost function that is typically associated with a distance on the sample space.
Alternatively, we can state the optimal transport problem in the Kantorovich Formulation. In that case, one begins with a joint measure $\Pi$ on the sample space $S \times S$, written $\Pi(y_1, y_2) = \pi(y_1, y_2)\, \mathrm{Vol}_S(y_1)\, \mathrm{Vol}_S(y_2)$, such that:
1. $\Pi$ pushes forward (in the sense of integrating along the fiber, or simply marginalizing in the probabilistic sense) to the measures $\mu_1$ and $\mu_2$, respectively:
$$\mu_1(y_1) = \mathrm{Vol}_S(y_1) \int_S \mathrm{Vol}_S(y_2)\, \pi(y_1, y_2), \qquad \mu_2(y_2) = \mathrm{Vol}_S(y_2) \int_S \mathrm{Vol}_S(y_1)\, \pi(y_1, y_2).$$
We shall henceforth denote by $\Gamma(\mu_1, \mu_2)$ the set of joint measures that marginalize to $\mu_1$ and $\mu_2$.
2. The joint measure $\Pi$ is chosen so as to minimize the joint expectation value of the cost:
$$K(\Pi) = \int_{S \times S} \Pi(y_1, y_2)\, c(y_1, y_2).$$
Notice that if we choose $\pi(y_1, y_2) = p_1(y_1)\, \delta\big(y_2 - T(y_1)\big)$, the Kantorovich objective function reduces to the Monge objective function.
There is an important theorem which states that, for the $L^2$ cost function $c(y_1, y_2) = \| y_1 - y_2 \|^2$, a smooth solution to the Monge problem exists. In particular, there will exist a smooth function $f: S \to \mathbb{R}$ for which
$$T^i(y) = g^{ij}(y)\, \frac{\partial}{\partial y^j} f(y),$$
or
$$T(y) = \mathrm{grad}\, f(y).$$
Recall the relationship between $\mu_1$ and $\mu_2$: $\mu_1 = T^* \mu_2$, or
$$\mu_1 = T^*(\mu_2) = \det\left( \frac{\partial T^i(y)}{\partial y^j} \right) p_2\big(T(y)\big)\, \mathrm{Vol}_S(y),$$
or, with respect to the probability distributions:
$$p_1(y) = \det\left( \frac{\partial T^i}{\partial y^j} \right) p_2\big(T(y)\big).$$
Hence, with respect to the solution we have specified in (A7), we can write:
$$p_1(y) = \det\left( \frac{\partial}{\partial y^j}\left( g^{ik}(y)\, \frac{\partial f}{\partial y^k} \right) \right) p_2\big(\mathrm{grad}\, f(y)\big),$$
or, put more suggestively,
$$\frac{p_1(y)}{p_2\big(\mathrm{grad}\, f(y)\big)} = \Delta f(y),$$
where $\Delta$ is the Laplacian.
The Wasserstein Distance is a metric on the space of probability measures that is defined as the solution to an optimal transport problem with a specified cost:
$$W_c(\mu_1, \mu_2) := \inf_{\Pi \in \Gamma(\mu_1, \mu_2)} \int_{S \times S} \Pi(y_1, y_2)\, c(y_1, y_2).$$
Of particular interest to us will be the Wasserstein-2 distance, which is the Wasserstein distance defined with respect to the $L^2$ cost function:
$$W_2(\mu_1, \mu_2) = \left( \inf_{\Pi \in \Gamma(\mu_1, \mu_2)} \int_{S \times S} \Pi(y_1, y_2)\, \| y_1 - y_2 \|^2 \right)^{1/2}.$$
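As a simple numerical illustration (with distributions of our own choosing): in one dimension the optimal Monge map for the $L^2$ cost is the monotone rearrangement $T = F_2^{-1} \circ F_1$, so $W_2$ between two sample sets can be estimated by matching sorted samples, and for two Gaussians the result can be compared against the closed form $W_2^2 = (\mu_1 - \mu_2)^2 + (\sigma_1 - \sigma_2)^2$:

```python
import numpy as np

rng = np.random.default_rng(5)

# In one dimension the optimal Monge map for the L^2 cost is the monotone
# rearrangement T = F_2^{-1} o F_1, so W_2 between two sample sets can be
# estimated simply by matching sorted samples.
def w2_from_samples(xs, ys):
    xs, ys = np.sort(xs), np.sort(ys)
    return np.sqrt(np.mean((xs - ys) ** 2))

mu1, s1, mu2, s2 = 0.0, 1.0, 3.0, 2.0
xs = rng.normal(mu1, s1, size=200_000)
ys = rng.normal(mu2, s2, size=200_000)

w2_empirical = w2_from_samples(xs, ys)
w2_gaussian = np.sqrt((mu1 - mu2) ** 2 + (s1 - s2) ** 2)   # closed form for 1D Gaussians
print("empirical W_2 =", round(float(w2_empirical), 3), "   closed form =", round(float(w2_gaussian), 3))
```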

Appendix B. ERG and Error Correction

In this section, we would like to note how the work we have presented in this paper connects to a related approach to understanding RG through the language of quantum error correction [58,59,60].
Given an operator algebra, $A$, corresponding to a set of observable degrees of freedom, a theory can be thought of as a state or, in more physical language, as a density operator, which assigns to each operator an expectation value. We shall denote the set of states on $A$ by $A^*$. A quantum channel, $E: A^* \to B^*$, is a completely positive, trace-preserving, linear map from states on an operator algebra $A$ to states on an operator algebra $B$. A quantum channel is a natural mathematical representation for the generator of an RG flow because of the Data-Processing Inequality:
$$D_{KL}(\rho \,\Vert\, \rho') \geq D_{KL}\big(E(\rho) \,\Vert\, E(\rho')\big).$$
One can interpret (A15) as stating that the distinguishability of states is decreased under the action of any quantum channel. For this reason, a quantum channel is sometimes also referred to as a coarse-graining map, in analogy with an RG flow.
A quantum channel is exactly reversible if and only if it is sufficient in the sense that no information is lost: that is, for all $\rho, \rho' \in A^*$ the Data-Processing Inequality is saturated and
$$D_{KL}(\rho \,\Vert\, \rho') = D_{KL}\big(E(\rho) \,\Vert\, E(\rho')\big).$$
In this case, there will exist a complementary channel, $P_{\rho, E}: B^* \to A^*$, called the Petz Map, such that $P_{\rho, E} \circ E(\rho) = \rho$.
In general, (A16) will only be met for a subset of states, C * A * . The operator algebra associated with this set of states is denoted by C and called the Code Subspace. The operators that live inside the code-subspace are defined by the property that their expectation values are invariant under the flow generated by E . Leveraging the interpretation of a quantum channel as generating an RG Flow, we can therefore interpret the code subspace of E as corresponding to the set of relevant operators.
In this paper, we have provided a concrete link between information theory and ERG through the intermediary of statistical inference by applying the concept of Bayesian inversion to the heat flow describing an ERG. Regarding Bayesian Inference as dual to ERG fits neatly into the Error Correcting picture discussed above. The Petz Map of a quantum channel $E$ can be defined as its formal adjoint with respect to a non-commutative generalization of the Fisher Information Metric on $A^*$: $g_\rho: T_\rho A^* \times T_\rho A^* \to \mathbb{R}$, such that $g_\rho(X, Y) = \mathrm{Tr}\big(X\, \Omega_\rho^{-1}(Y)\big)$, where $\Omega_\rho^{-1}$ is a map that corresponds to a non-commutative generalization of “division by $\rho$”. Explicitly,
$$\Omega_\rho^{-1}(Y) = \frac{d}{dt}\bigg|_{t=0} \log(\rho + t Y).$$
The metric $g_\rho$ takes the schematic form $\mathrm{Tr}(X Y \rho^{-1})$, which is equivalent to the usual form of the Fisher Metric. The adjoint condition defining the Petz Map, $P_{\rho, E}$, then reads:
$$g_\rho\big(X, P_{\rho, E}(Y)\big) = g_{E(\rho)}\big(E(X), Y\big).$$
When the operator algebra in question is commutative, the set of states becomes equivalent to the space of probability distributions, and the metric $g_\rho$ reduces to the unique Fisher Metric on this space. A consequence of this fact is that the Petz Map for commutative operator algebras, as obtained through (A18), is equivalent to the Bayesian posterior with prior $\rho$. We prove this statement now.
Let $A$ be a commutative algebra. For example, its elements may correspond to the set of measurable functions on a domain, $S$, or, in the parlance of Probability Theory, random variables on the sample space $S$. A state on $A$ is then a probability measure on $S$, which is a measurable function $p: S \to \mathbb{R}$ that is normalized in the integral sense:
$$\mathrm{Tr}_{\mathrm{Vol}}(p) = \int_S \mathrm{Vol}_S(x)\, p(x) = 1.$$
Here, $\mathrm{Vol}_S$ is the reference measure on $S$; if $S$ is an orientable differentiable manifold, this is nothing but the volume form. The pairing of a state, $p \in A^*$, with a random variable, $f \in A$, is the computation of an expectation value:
$$p[f] := \mathbb{E}_p\big(f(X)\big) = \mathrm{Tr}_{\mathrm{Vol}}(p f) = \int_S \mathrm{Vol}_S(x)\, p(x)\, f(x).$$
A channel between states on commutative algebras, $E: A^* \to B^*$, is a stochastic map. Let $S_A$ and $S_B$ denote the sample spaces associated with the algebras $A$ and $B$, respectively. Then, we can associate $E$ with a conditional probability distribution $p_{B|A}: S_A \times S_B \to \mathbb{R}$ such that:
$$\int_{S_B} \mathrm{Vol}_B(y)\, p_{B|A}(y \mid x) = 1 \quad \forall\, x \in S_A,$$
and
$$\int_{R \subseteq S_B} \mathrm{Vol}_B(y)\, p_{B|A}(y \mid x) = P(Y \in R \mid X = x).$$
More to the point, given a marginal probability distribution $p_A \in A^*$, the action of the map $E$ is given by the following integral:
$$E(p_A) = \int_{S_A} \mathrm{Vol}_A(x)\, p_{B|A}(y \mid x)\, p_A(x) \;\in\; B^*.$$
The Fisher Metric on $A^*$ takes the form:
$$g_{p_A}(U, V) = \mathrm{Tr}_{\mathrm{Vol}_A}\left( \frac{U V}{p_A} \right) = \int_{S_A} \mathrm{Vol}_A(x)\, \frac{U(x)\, V(x)}{p_A(x)}.$$
Here, $U, V: S_A \to \mathbb{R}$ are elements of $T_{p_A} A^*$, which, in the vein of (17), we identify with measurable functions on the sample space that have zero weight when integrated over the sample space with respect to the reference measure. This guarantees that such random variables can be identified with perturbations of a probability density that preserve the integral normalization condition.
This form of the Fisher Metric should be compared with the more standard form:
$$(g_{p_A})_{ij} = \int_{S_A} \mathrm{Vol}_A(x)\, p_A(x \mid \theta)\, \frac{\partial \log p_A(x \mid \theta)}{\partial \theta^i}\, \frac{\partial \log p_A(x \mid \theta)}{\partial \theta^j} = \int_{S_A} \mathrm{Vol}_A(x)\, \frac{1}{p_A(x \mid \theta)}\, \frac{\partial p_A(x \mid \theta)}{\partial \theta^i}\, \frac{\partial p_A(x \mid \theta)}{\partial \theta^j}.$$
Here, $\theta$ is a set of parameters specifying a probability density within a parametric family. In the first equality, one regards $\partial_i = \frac{\partial \log p_A(x \mid \theta)}{\partial \theta^i}$ as a basis for $T_{p_A} A^*$, with $T_{p_A} A^*$ identified as the set of random variables on $S_A$ with zero expectation value. The second equality arises from a simple algebraic manipulation of the first, but it implies a different characterization of $T_{p_A} A^*$, namely that it is the set of measurable functions on $S_A$ that integrate to zero. This second characterization is consistent with our chosen specification of $T_{p_A} A^*$, which is why we have chosen the form of the Fisher Metric in (A25).
With all of this terminology in place, we are now prepared to understand the implication of the Petz Map for commutative algebras. To begin, let us remark that the Petz Map, as a channel $P_{p_A, E}: B^* \to A^*$, can be associated with a stochastic map $q_{A|B}: S_A \times S_B \to \mathbb{R}$ with
$$P_{p_A, E}(p_B) = \int_{S_B} \mathrm{Vol}_B(y)\, q_{A|B}(x \mid y)\, p_B(y) \;\in\; A^*.$$
Thus, the adjoint condition:
$$g_{p_A}\big(U, P_{p_A, E}(V)\big) = g_{E(p_A)}\big(E(U), V\big)$$
implies the following. First, the left hand side can be written:
$$g_{p_A}\big(U, P_{p_A, E}(V)\big) = \int_{S_A} \mathrm{Vol}_A(x)\, \frac{U(x)}{p_A(x)} \int_{S_B} \mathrm{Vol}_B(y)\, q_{A|B}(x \mid y)\, V(y) = \int_{S_A \times S_B} \mathrm{Vol}_A(x)\, \mathrm{Vol}_B(y)\, \frac{U(x)\, q_{A|B}(x \mid y)\, V(y)}{p_A(x)}.$$
Similarly, the right hand side is of the form:
$$g_{E(p_A)}\big(E(U), V\big) = \int_{S_B} \mathrm{Vol}_B(y)\, \frac{\left( \int_{S_A} \mathrm{Vol}_A(x)\, p_{B|A}(y \mid x)\, U(x) \right) V(y)}{\int_{S_A} \mathrm{Vol}_A(x)\, p_{B|A}(y \mid x)\, p_A(x)} = \int_{S_A \times S_B} \mathrm{Vol}_A(x)\, \mathrm{Vol}_B(y)\, \frac{U(x)\, p_{B|A}(y \mid x)\, V(y)}{\int_{S_A} \mathrm{Vol}_A(x')\, p_{B|A}(y \mid x')\, p_A(x')}.$$
Equating the left-hand side and the right-hand side for arbitrary functions $U: S_A \to \mathbb{R}$ and $V: S_B \to \mathbb{R}$, we therefore find:
$$\frac{q_{A|B}(x \mid y)}{p_A(x)} = \frac{p_{B|A}(y \mid x)}{\int_{S_A} \mathrm{Vol}_A(x')\, p_{B|A}(y \mid x')\, p_A(x')}.$$
This is Bayes' Law: rearranging, $q_{A|B}(x \mid y) = p_{B|A}(y \mid x)\, p_A(x) / E(p_A)(y)$, so the Petz Map applied to the pushforward state recovers the Bayesian posterior over $S_A$.
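The commutative statement can also be verified directly on finite sample spaces. In the sketch below (a randomly generated channel and prior, used only for illustration), the channel is a column-stochastic matrix $E$, its adjoint with respect to the Fisher inner products $\langle u, v \rangle_p = \sum_x u(x) v(x) / p(x)$ is $\mathrm{diag}(p_A)\, E^{\mathsf{T}}\, \mathrm{diag}(E p_A)^{-1}$, and this matrix coincides with the Bayes posterior:

```python
import numpy as np

rng = np.random.default_rng(6)

# Finite sample spaces: |S_A| = 4, |S_B| = 3.  The channel is a column-stochastic
# matrix E with E[y, x] = p(y | x); the prior p_A is an arbitrary distribution.
nA, nB = 4, 3
E = rng.random((nB, nA))
E /= E.sum(axis=0, keepdims=True)          # columns sum to 1
p_A = rng.random(nA)
p_A /= p_A.sum()
p_B = E @ p_A                              # pushforward E(p_A)

# Adjoint of E with respect to the Fisher inner products <u, v>_p = sum(u * v / p):
# P = diag(p_A) E^T diag(p_B)^{-1}.
P = np.diag(p_A) @ E.T @ np.diag(1.0 / p_B)

# Bayes' law: q(x | y) = p(y | x) p_A(x) / p_B(y).
bayes = (E * p_A[None, :]).T / p_B[None, :]
print("Petz (adjoint) map equals Bayes posterior:", np.allclose(P, bayes))

# Direct check of the adjoint condition for random zero-mean tangent vectors.
u = rng.standard_normal(nA); u -= u.mean()
v = rng.standard_normal(nB); v -= v.mean()
lhs = np.sum(u * (P @ v) / p_A)
rhs = np.sum((E @ u) * v / p_B)
print("adjoint condition holds:", bool(np.isclose(lhs, rhs)))
```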

References

  1. Wilson, K.G.; Kogut, J. The renormalization group and the ϵ expansion. Phys. Rep. 1974, 12, 75–199. [Google Scholar] [CrossRef]
  2. Polchinski, J. Renormalization and effective lagrangians. Nucl. Phys. B 1984, 231, 269–295. [Google Scholar] [CrossRef]
  3. Latorre, J.I.; Morris, T.R. Exact scheme independence. J. High Energy Phys. 2000, 2000, 004. [Google Scholar] [CrossRef]
  4. Cotler, J.; Rezchikov, S. Renormalization group flow as optimal transport. arXiv 2022, arXiv:2202.11737. [Google Scholar] [CrossRef]
  5. Dashti, M.; Stuart, A.M. The bayesian approach to inverse problems. In Handbook of Uncertainty Quantification; Springer: Berlin/Heidelberg, Germany, 2017; pp. 311–428. [Google Scholar]
  6. Berman, D.S.; Heckman, J.J.; Klinger, M. On the dynamics of inference and learning. arXiv 2022, arXiv:2204.12939. [Google Scholar]
  7. Bagnuls, C.; Bervillier, C. Exact renormalization group equations: An introductory review. Phys. Rep. 2001, 348, 91–157. [Google Scholar] [CrossRef]
  8. Rosten, O.J. Fundamentals of the exact renormalization group. Phys. Rep. 2012, 511, 177–272. [Google Scholar] [CrossRef]
  9. Wegner, F.J.; Houghton, A. Renormalization group equation for critical phenomena. Phys. Rev. A 1973, 8, 401. [Google Scholar] [CrossRef]
  10. Wegner, F. Some invariance properties of the renormalization group. J. Phys. C Solid State Phys. 1974, 7, 2098. [Google Scholar] [CrossRef]
  11. Morris, T.R. The exact renormalization group and approximate solutions. Int. J. Mod. Phys. A 1994, 9, 2411–2449. [Google Scholar] [CrossRef]
  12. Morris, T.R. Derivative expansion of the exact renormalization group. Phys. Lett. B 1994, 329, 241–248. [Google Scholar] [CrossRef]
  13. Morris, T.R. Elements of the continuous renormalization group. Prog. Theor. Phys. Suppl. 1998, 131, 395–414. [Google Scholar] [CrossRef]
  14. Bogachev, V.I.; Krylov, N.V.; Röckner, M.; Shaposhnikov, S.V. Fokker–Planck–Kolmogorov Equations; American Mathematical Society: Providence, RI, USA, 2022; Volume 207. [Google Scholar]
  15. Da Prato, G. Kolmogorov Equations for Stochastic PDEs; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  16. Fuhrman, M. Nonlinear kolmogorov equations in infinite dimensional spaces: The backward Stochastic Differential Equations approach and applications to optimal control. Ann. Probab. 2002, 30, 1397–1465. [Google Scholar] [CrossRef]
  17. Chen, X. Geometric Flows for Applied Mathematicians. Available online: http://publish.illinois.edu/xiaohuichen/files/2020/12/geometric_flows.pdf (accessed on 27 April 2024).
  18. Shreve, S.E. Stochastic Calculus for Finance II: Continuous-Time Models; Springer: Berlin/Heidelberg, Germany, 2004; Volume 11. [Google Scholar]
  19. Ambrosio, L.; Gigli, N.; Savaré, G. Gradient Flows: In Metric Spaces and in the Space of Probability Measures; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  20. Santambrogio, F. {Euclidean, metric, and Wasserstein} gradient flows: An overview. Bull. Math. Sci. 2017, 7, 87–154. [Google Scholar] [CrossRef]
  21. Villani, C. Optimal Transport: Old and New; Springer: Berlin/Heidelberg, Germany, 2009; Volume 338. [Google Scholar]
  22. Itô, K. 109. stochastic integral. Proc. Imp. Acad. 1944, 20, 519–524. [Google Scholar]
  23. Itô, K. On Stochastic Differential Equations; American Mathematical Soc.: Providence, RI, USA, 1951; No. 4. [Google Scholar]
  24. Itô, K.; Henry, P., Jr. Diffusion Processes and Their Sample Paths: Reprint of the 1974 Edition; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  25. Coffey, W.; Kalmykov, Y.P. The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering; World Scientific: Singapore, 2012; Volume 27. [Google Scholar]
  26. Sekimoto, K. Langevin equation and thermodynamics. Prog. Theor. Phys. Suppl. 1998, 130, 17–27. [Google Scholar] [CrossRef]
  27. Simon, B. Harmonic Analysis; American Mathematical Soc.: Providence, RI, USA, 2015. [Google Scholar]
  28. Maruyama, G. The harmonic analysis of stationary stochastic processes. Mem. Fac. Sci. Kyushu Univ. Ser. A Math. 1949, 4, 45–106. [Google Scholar] [CrossRef]
  29. Bochner, S. Harmonic Analysis and the Theory of Probability; Courier Corporation: North Chelmsford, MA, USA, 2005. [Google Scholar]
  30. Santambrogio, F. Optimal transport for applied mathematicians. Birkäuser NY 2015, 55, 94. [Google Scholar]
  31. Davies, E.B. Heat Kernels and Spectral Theory; Cambridge University Press: Cambridge, UK, 1989; No. 92. [Google Scholar]
  32. Berline, N.; Getzler, E.; Vergne, M. Heat Kernels and Dirac Operators; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  33. Rudnicki, R.; Pichór, K.; Tyran-Kamińska, M. Markov semigroups and their applications. In Dynamics of Dissipation; Springer: Berlin/Heidelberg, Germany, 2002; pp. 215–238. [Google Scholar]
  34. Lorenzi, L.; Bertoldi, M. Analytical Methods for Markov Semigroups; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  35. Kolokoltsov, V.N. Markov processes, semigroups and generators. In Markov Processes, Semigroups and Generators; de Gruyter: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  36. Anderson, B.D. Reverse-time diffusion equation models. Stoch. Process. Their Appl. 1982, 12, 313–326. [Google Scholar] [CrossRef]
  37. Song, Y.; Sohl-Dickstein, J.; Kingma, D.P.; Kumar, A.; Ermon, S.; Poole, B. Score-Based Generative Modeling through Stochastic Differential Equations. arXiv 2020, arXiv:2011.13456. [Google Scholar]
  38. Hoang, V.H.; Schwab, C.; Stuart, A.M. Complexity analysis of accelerated mcmc methods for bayesian inversion. Inverse Probl. 2013, 29, 085010. [Google Scholar] [CrossRef]
  39. Cockayne, J.; Oates, C.; Sullivan, T.; Girolami, M. Probabilistic meshless methods for partial differential equations and bayesian inverse problems. arXiv 2016, arXiv:1605.07811. [Google Scholar]
  40. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  41. Adler, J.; Öktem, O. Deep bayesian inversion. arXiv 2018, arXiv:1811.05910. [Google Scholar]
  42. Schillings, C.; Schwab, C. Scaling limits in computational bayesian inversion. ESAIM Math. Model. Numer. Anal. 2016, 50, 1825–1856. [Google Scholar] [CrossRef]
  43. Matthies, H.G.; Zander, E.; Rosić, B.V.; Litvinenko, A. Parameter estimation via conditional expectation: A bayesian inversion. Adv. Model. Simul. Eng. Sci. 2016, 3, 24. [Google Scholar] [CrossRef]
  44. Harper, M. Information geometry and evolutionary game theory. arXiv 2009, arXiv:0911.1383. [Google Scholar]
  45. Harper, M. The replicator equation as an inference dynamic. arXiv 2009, arXiv:0911.1763. [Google Scholar]
  46. Parisi, G. The theory of non-renormalizable interactions: The large n expansion. Nucl. Phys. B 1975, 100, 368–388. [Google Scholar]
  47. Anderson, D.; Burnham, K.; White, G. Comparison of akaike information criterion and consistent akaike information criterion for model selection and statistical inference from capture-recapture studies. J. Appl. Stat. 1998, 25, 263–282. [Google Scholar] [CrossRef]
  48. Balasubramanian, V. Statistical inference, occam’s razor, and statistical mechanics on the space of probability distributions. Neural Comput. 1997, 9, 349–368. [Google Scholar] [CrossRef]
  49. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR. pp. 2256–2265. [Google Scholar]
  50. Neal, R.M. Annealed importance sampling. Stat. Comput. 2001, 11, 125–139. [Google Scholar] [CrossRef]
  51. Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; Sutskever, I. Zero-shot text-to-image generation. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; PMLR. pp. 8821–8831. [Google Scholar]
  52. Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; Chen, M. Hierarchical text-conditional image generation with clip latents. arXiv 2022, arXiv:2204.06125. [Google Scholar]
  53. Berman, D.S.; Klinger, M.S.; Stapleton, A.G. Bayesian renormalization. Mach. Learn. Sci. Tech. 2022, 4, 045011. [Google Scholar] [CrossRef]
  54. Maldacena, J. The large-n limit of superconformal field theories and supergravity. Int. J. Theor. Phys. 1999, 38, 1113–1133. [Google Scholar] [CrossRef]
  55. Dong, X.; Harlow, D.; Wall, A.C. Reconstruction of bulk operators within the entanglement wedge in gauge-gravity duality. Phys. Rev. Lett. 2016, 117, 021601. [Google Scholar] [CrossRef]
  56. Pastawski, F.; Yoshida, B.; Harlow, D.; Preskill, J. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence. J. High Energy Phys. 2015, 2015, 149. [Google Scholar] [CrossRef]
  57. Lashkari, N.; Raamsdonk, M.V. Canonical energy is quantum fisher information. J. High Energy Phys. 2016, 2016, 153. [Google Scholar] [CrossRef]
  58. Cotler, J.; Hayden, P.; Penington, G.; Salton, G.; Swingle, B.; Walter, M. Entanglement wedge reconstruction via universal recovery channels. Phys. Rev. X 2019, 9, 031011. [Google Scholar] [CrossRef]
  59. Faulkner, T. The holographic map as a conditional expectation. arXiv 2020, arXiv:2008.04810. [Google Scholar]
  60. Furuya, K.; Lashkari, N.; Ouseph, S. Real-space renormalization, error correction and conditional expectations. arXiv 2020, arXiv:2012.14001. [Google Scholar]
  61. Ohya, M.; Petz, D. Quantum Entropy and Its Use; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  62. Junge, M.; Renner, R.; Sutter, D.; Wilde, M.M.; Winter, A. Universal recovery maps and approximate sufficiency of quantum relative entropy. In Annales Henri Poincaré; Springer: Berlin/Heidelberg, Germany, 2018; Volume 19, pp. 2955–2978. [Google Scholar]
  63. Helstrom, C.W. Quantum detection and estimation theory. J. Stat. Phys. 1969, 1, 231–252. [Google Scholar] [CrossRef]
  64. Bény, C.; Osborne, T.J. Information-geometric approach to the renormalization group. Phys. Rev. A 2015, 92, 022330. [Google Scholar] [CrossRef]
  65. Bény, C.; Osborne, T.J. The renormalization group via statistical inference. New J. Phys. 2015, 17, 083005. [Google Scholar] [CrossRef]
  66. Carlen, E.A.; Maas, J. An analog of the 2-wasserstein metric in non-commutative probability under which the fermionic fokker–planck equation is gradient flow for the entropy. Commun. Math. Phys. 2014, 331, 887–926. [Google Scholar] [CrossRef]
  67. Carlen, E.A.; Maas, J. Gradient flow and entropy inequalities for quantum markov semigroups with detailed balance. J. Funct. Anal. 2017, 273, 1810–1869. [Google Scholar] [CrossRef]
  68. Carlen, E.A.; Maas, J. Non-commutative calculus, optimal transport and functional inequalities in dissipative quantum systems. J. Stat. Phys. 2020, 178, 319–378. [Google Scholar] [CrossRef]
  69. Nozaki, M.; Ryu, S.; Takayanagi, T. Holographic geometry of entanglement renormalization in quantum field theories. J. High Energy Phys. 2012, 2012, 193. [Google Scholar] [CrossRef]
  70. Swingle, B. Entanglement renormalization and holography. Phys. Rev. D 2012, 86, 065007. [Google Scholar] [CrossRef]
  71. Alvarez, E.; Gomez, C. Geometric holography, the renormalization group and the c-theorem. Nucl. Phys. B 1999, 541, 441–460. [Google Scholar] [CrossRef]
  72. Leigh, R.G.; Parrikar, O.; Weiss, A.B. Holographic geometry of the renormalization group and higher spin symmetries. Phys. Rev. D 2014, 89, 106012. [Google Scholar] [CrossRef]
  73. Mollabashi, A.; Naozaki, M.; Ryu, S.; Takayanagi, T. Holographic geometry of cmera for quantum quenches and finite temperature. J. High Energy Phys. 2014, 2014, 98. [Google Scholar] [CrossRef]
  74. Evenbly, G.; Vidal, G. Tensor network renormalization. Phys. Rev. Lett. 2015, 115, 180405. [Google Scholar] [CrossRef] [PubMed]
  75. Zamolodchikov, A.B. Irreversibility of the flux of the renormalization group in a 2d field theory. JETP Lett 1986, 43, 730–732. [Google Scholar]
  76. Myers, R.C.; Sinha, A. Seeing a c-theorem with holography. Phys. Rev. D 2010, 82, 046006. [Google Scholar] [CrossRef]
  77. Casini, H.; Huerta, M. A c-theorem for entanglement entropy. J. Phys. A Math. Theor. 2007, 40, 7031. [Google Scholar] [CrossRef]
  78. Casini, H.; Huerta, M.; Myers, R.C.; Yale, A. Mutual information and the f-theorem. J. High Energy Phys. 2015, 2015, 3. [Google Scholar] [CrossRef]
  79. Casini, H.; Testé, E.; Torroba, G. Markov property of the conformal field theory vacuum and the a theorem. Phys. Rev. Lett. 2017, 118, 261602. [Google Scholar] [CrossRef]
Table 1. Dictionary relating Finite Dimensional Diffusion and Polchinski's ERG equation.
|  | Finite Dim. | Infinite Dim. |
| Time Parameter | $t$ | $\ln(\Lambda)$ |
| Random Variable | $y^i$ | $\phi(p)$ |
| Metric | $g_{ij}$ | $\dot{C}_\Lambda(p) = (2\pi)^d\, (p^2 + m^2)^{-1}\, \Lambda\, \partial_\Lambda K_\Lambda(p^2)$ |
| Drift Velocity | $g^{ij} v_j$ | $\Psi[\phi] = \int d^d p\; \frac{\dot{C}_\Lambda(p)}{2}\, \frac{(p^2 + m^2)}{(2\pi)^d\, K_\Lambda(p^2)}\, \phi(p)$ |
Table 2. A dictionary describing the correspondence between Optimal Transport and ERG.
|  | Optimal Transport | Exact Renormalization Group |
| Sample Space | $S$, a differentiable manifold | $F(M)$, the space of fields on spacetime $M$ |
| Coordinates | $y \in S$, a point | $\phi \in F(M)$, a field |
| Indices | $y^i$, $i \in \{1, \ldots, \dim(S)\}$, components | $\phi(x)$, $x \in M$, the image at each point in $M$ |
| Metric | $g = g_{ij}\, dy^i \otimes dy^j$ | $\dot{C}_\Lambda(x, y)$, an integral kernel |
| Space of Distributions | $\mathcal{M} := \{\, p: S \to \mathbb{R} \mid \int_S p\, \mathrm{Vol}_S = 1 \,\}$ | $\mathcal{M} := \{\, P: F(M) \to \mathbb{R} \mid \int_{F(M)} \mathcal{D}\phi\, P[\phi] = 1 \,\}$ |
| Tangent Space | $T\mathcal{M} := \{\, \eta: S \to \mathbb{R} \mid \int_S \eta\, \mathrm{Vol}_S = 0 \,\}$ | $T\mathcal{M} := \{\, \eta: F(M) \to \mathbb{R} \mid \int_{F(M)} \mathcal{D}\phi\, \eta[\phi] = 0 \,\}$ |
| Tangent Space Isomorphism | $\eta = \mathrm{d}\, i_{p\, \mathrm{d}\hat{\eta}}\, \mathrm{Vol}_S$ | $\eta = \int_{M \times M} \mathrm{Vol}_M(x)\, \mathrm{Vol}_M(y)\, \frac{\delta}{\delta \phi(x)} \Big( P\, \dot{C}_\Lambda(x, y)\, \frac{\delta}{\delta \phi(y)} \hat{\eta}[\phi] \Big)$ |
| Wasserstein Metric | $G_{W_2}(\eta_1, \eta_2) = \int_S p\; \mathrm{d}\hat{\eta}_1 \cdot \mathrm{d}\hat{\eta}_2$ | $G_{W_2}(\eta_1, \eta_2) = \int_{F(M)} \mathcal{D}\phi\, P[\phi] \int_{M \times M} \dot{C}_\Lambda(x, y)\, \frac{\delta \hat{\eta}_1[\phi]}{\delta \phi(x)}\, \frac{\delta \hat{\eta}_2[\phi]}{\delta \phi(y)}$ |
| ERG Kernel | $\Sigma(x) = \ln(\hat{p}/\hat{q})$ | $\Sigma[\phi] = -S[\phi] + 2\tilde{S}[\phi]$ |
| Reparameterization Kernel | $\Psi(x, p) = \mathrm{grad}_g\, \Sigma(x)$ | $\Psi[\phi, x] = \frac{1}{2} \int_M \mathrm{Vol}_M(y)\, \dot{C}_\Lambda(x, y)\, \frac{\delta \Sigma[\phi]}{\delta \phi(y)}$ |
| Potential Function | $V: S \to \mathbb{R}$, the stationary distribution | $2\tilde{S}[\phi]$, twice the Seed Action |
| Wegner–Morris Equation | $\mathrm{grad}_{W_2} D_{KL}(p_t \,\Vert\, q_t) = \mathrm{d}\big( p\, \mathrm{d}\Sigma \big)$ | $\mathrm{grad}_{W_2} D_{KL}(P_\Lambda \,\Vert\, Q_\Lambda) = \int_M \mathrm{Vol}_M(x)\, \frac{\delta}{\delta \phi(x)} \Big( \Psi_\Lambda[\phi, x]\, P_\Lambda[\phi] \Big)$ |
Table 3. Dictionary relating ERG and Bayesian Diffusion.
|  | Finite Dim. ERG | Infinite Dim. ERG | Bayesian Diffusion |
| Time Parameter | $t$ | $\ln(\Lambda)$ | $\tau = 1/T$ |
| Metric | $g_{ij}$ | $\dot{C}_\Lambda(x, y)$ | $I_{ij}(\gamma_\tau)$ |
| Potential | $V$ | $2\hat{S}_\Lambda[\phi]$ | $\Phi(\gamma_\tau; y)$ |
| Scheme | $\Sigma = \ln(p/q)$ | $\Sigma = -S_\Lambda[\phi] + 2\hat{S}_\Lambda[\phi]$ | $\Sigma = -\Phi(u; y) + \Phi(\gamma_\tau; y)$ |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
