
Critical Behavior in Physics and Probabilistic Formal Languages

1 Department of Physics, Harvard University, Cambridge, MA 02138, USA
2 Department of Physics & MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Entropy 2017, 19(7), 299; https://doi.org/10.3390/e19070299
Submission received: 15 December 2016 / Revised: 12 June 2017 / Accepted: 12 June 2017 / Published: 23 June 2017
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity, which we dub the rational mutual information, and discuss generalizations of our claims involving more complicated Bayesian networks.

1. Introduction

Critical behavior, where long-range correlations decay as a power law with distance, has many important physics applications ranging from phase transitions in condensed matter experiments to turbulence and inflationary fluctuations in our early Universe. It has important applications beyond the traditional purview of physics, as well [1,2,3,4,5], including applications to music [4,6], genomics [7,8] and human languages [9,10,11,12].
In Figure 1, we plot a statistic that can be applied to all of the above examples: the mutual information between two symbols as a function of the number of symbols in between the two symbols [9]. As discussed in previous works [9,11,13], the plot shows that the number of bits of information provided by a symbol about another drops roughly as a power law (The power law discussed here should not be confused with another famous power law that occurs in natural languages: Zipf’s law [14]. Zipf’s law implies power law behavior in one-point statistics (in the histogram of word frequencies), whereas we are interested in two-point statistics. In the former case, the power law is in the frequency of words; in the latter case, the power law is in the separation between characters. One can easily cook up sequences that obey Zipf’s law, but are not critical and do not exhibit a power law in the mutual information. However, there are models of certain physical systems where Zipf’s law follows from criticality [15,16].) with distance in sequences (defined as the number of symbols between the two symbols of interest) as diverse as the human genome, music by Bach and text in English and French. Why is this, when so many other correlations in nature instead drop exponentially [17]?
Better understanding the statistical properties of natural languages is interesting not only for geneticists, musicologists and linguists, but also for the machine learning community. Any task that involves natural language processing (e.g., data compression, speech-to-text conversion, auto-correction) exploits statistical properties of language and can be further improved if we can better understand these properties, even in the context of a toy model of these data sequences. Indeed, the difficulty of automatic natural language processing has been known at least as far back as Turing, whose eponymous test [22] relies on this fact. A tempting explanation is that natural language is something uniquely human. However, this is far from a satisfactory one, especially given the recent successes of machines at performing tasks as complex and as “human” as playing Jeopardy! [23], chess [24], Atari games [25] and Go [26]. We will show that computer descriptions of language suffer from a much simpler problem, one that involves no talk of meaning or of being non-human: they tend to get the basic statistical properties wrong.
To illustrate this point, consider Markov models of natural language. From a linguistics point of view, it has been known for decades that such models are fundamentally unsuitable for modeling human language [27]. However, linguistic arguments typically do not produce an observable that can be used to quantitatively falsify any Markovian model of language. Instead, these arguments rely on highly specific knowledge about the data, in this case, an understanding of the language’s grammar. This knowledge is non-trivial for a human speaker to acquire, much less an artificial neural network. In contrast, the mutual information is comparatively trivial to observe, requiring no specific knowledge about the data, and it immediately indicates that natural languages would be poorly approximated by a Markov/hidden Markov model, as we will demonstrate.
Furthermore, the mutual information decay may offer a partial explanation of the impressive progress that has been made by using deep neural networks for natural language processing (see, e.g., [28,29,30,31,32]; for recent reviews of deep neural networks, see [33,34]). We will see that a key reason that currently popular recurrent neural networks with long short-term memory (LSTM) [35] do much better is that they can replicate critical behavior, but that even they can be further improved, since they can under-predict long-range mutual information.
While motivated by questions about natural languages and other data sequences, we will explore the information-theoretic properties of formal languages. For simplicity, we focus on probabilistic regular grammars and probabilistic context-free grammars (PCFGs). Of course, real-world data sources like English are likely more complex than a context-free grammar [36], just as a real-world magnet is more complex than the Ising model. However, these formal languages serve as toy models that capture some aspects of the real data source, and the theoretical techniques we develop for studying these toy models might be adapted to more complex formal languages. Of course, independent of their connection to natural languages, formal languages are also theoretically interesting in their own right and have connections to, e.g., group theory [37].
This paper is organized as follows. In Section 2, we show how Markov processes exhibit exponential decay in mutual information with scale; we give a rigorous proof of this and other results in a series of Appendices. To enable such proofs, we introduce a convenient quantity that we term rational mutual information, which bounds the mutual information and converges to it in the near-independence limit. In Section 3, we define a subclass of generative grammars and show that they exhibit critical behavior with power law decays. We then generalize our discussion using Bayesian nets and relate our findings to theorems in statistical physics. In Section 4, we discuss our results and explain how LSTM RNNs can reproduce critical behavior by emulating our generative grammar model.

2. Markov Implies Exponential Decay

For two discrete random variables X and Y, the following definitions of mutual information are all equivalent:
$$I(X,Y) \equiv S(X) + S(Y) - S(X,Y) = D\big(p(X,Y)\,\|\,p(X)\,p(Y)\big) = \left\langle \log_B \frac{P(a,b)}{P(a)P(b)} \right\rangle = \sum_{ab} P(a,b)\,\log_B \frac{P(a,b)}{P(a)P(b)}, \tag{1}$$
where $S \equiv -\langle \log_B P \rangle$ is the Shannon entropy [38] and $D(p(X,Y)\,\|\,p(X)p(Y))$ is the Kullback–Leibler divergence [39] between the joint probability distribution and the product of the individual marginals. If the base of the logarithm is taken to be $B = 2$, then $I(X,Y)$ is measured in bits. The mutual information can be interpreted as how much one variable knows about the other: $I(X,Y)$ is the reduction in the number of bits needed to specify $X$ once $Y$ is specified. Equivalently, it is the number of encoding bits saved by using the true joint probability $P(X,Y)$ instead of approximating $X$ and $Y$ as independent. It is thus a measure of statistical dependencies between $X$ and $Y$. Although it is more conventional in statistics and statistical physics to measure quantities such as the correlation coefficient $\rho$, the mutual information is more suitable for generic data, since it does not require that the variables $X$ and $Y$ be numbers or have any algebraic structure, whereas $\rho$ requires that we be able to multiply $X \cdot Y$ and average. Whereas it makes sense to multiply numbers, it is meaningless to multiply or average two characters such as “!” and “?”.
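To make Equation (1) concrete, the following minimal sketch (our own illustration; the example table is hypothetical) computes the mutual information from a joint probability table with numpy.

```python
import numpy as np

def mutual_information(P, base=2.0):
    """I(X,Y) = sum_ab P(a,b) log[P(a,b) / (P(a) P(b))], with 0 log 0 := 0."""
    P = np.asarray(P, dtype=float)
    px = P.sum(axis=1, keepdims=True)      # marginal P(a), column vector
    py = P.sum(axis=0, keepdims=True)      # marginal P(b), row vector
    mask = P > 0                           # skip zero-probability cells
    ratio = P[mask] / (px @ py)[mask]
    return float(np.sum(P[mask] * np.log(ratio)) / np.log(base))

# Example: a weakly correlated pair of binary symbols.
P = np.array([[0.30, 0.20],
              [0.20, 0.30]])
print(mutual_information(P))               # approximately 0.029 bits
```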
The rest of this paper is largely a study of the mutual information between two random variables that are realizations of a discrete stochastic process, with some separation $\tau$ in time. More concretely, we can think of sequences $\{X_1, X_2, X_3, \ldots\}$ of random variables, where each one might take values from some finite alphabet. For example, if we model English as a discrete stochastic process and take $\tau = 2$, $X$ could represent the first character (“F”) in this sentence, whereas $Y$ could represent the third character (“r”) in this sentence.
In particular, we start by studying the mutual information function of a Markov process, which is analytically tractable. Let us briefly recapitulate some basic facts about Markov processes (see, e.g., [40] for a pedagogical review). A Markov process is defined by a matrix $M$ of conditional probabilities $M_{ab} = P(X_{t+1} = a\,|\,X_t = b)$. Such Markov matrices (also known as stochastic matrices) thus have the properties $M_{ab} \ge 0$ and $\sum_a M_{ab} = 1$. They fully specify the dynamics of the model:
$$\mathbf{p}_{t+1} = M\,\mathbf{p}_t, \tag{2}$$
where $\mathbf{p}_t$ is a vector with components $P(X_t = a)$ that specifies the probability distribution at time $t$. Let $\lambda_i$ denote the eigenvalues of $M$, sorted by decreasing magnitude: $|\lambda_1| \ge |\lambda_2| \ge |\lambda_3| \ge \cdots$. All Markov matrices have $|\lambda_i| \le 1$, which is why blowup is avoided when Equation (2) is iterated, and $\lambda_1 = 1$, with the corresponding eigenvector giving a stationary probability distribution $\boldsymbol{\mu}$ satisfying $M\boldsymbol{\mu} = \boldsymbol{\mu}$.
In addition, two mild conditions are usually imposed on Markov matrices. First, $M$ is irreducible, meaning that every state is accessible from every other state (otherwise, we could decompose the Markov process into separate Markov processes). Second, to avoid processes like $1 \to 2 \to 1 \to 2 \to \cdots$ that never converge, we take the Markov process to be aperiodic. It is easy to show using the Perron–Frobenius theorem that being irreducible and aperiodic implies $|\lambda_2| < 1$ and, therefore, that $\boldsymbol{\mu}$ is unique.
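These facts are easy to check numerically. The sketch below (ours, with an arbitrary random example) builds a dense random stochastic matrix, which is automatically irreducible and aperiodic, and verifies that $\lambda_1 = 1$ and that the leading eigenvector is the stationary distribution $\boldsymbol{\mu}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.random((n, n))
M /= M.sum(axis=0)                      # normalize columns: sum_a M_ab = 1

eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-np.abs(eigvals))    # sort by decreasing magnitude
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

mu = np.real(eigvecs[:, 0])
mu /= mu.sum()                          # stationary distribution, M mu = mu

print(np.allclose(eigvals[0], 1.0))     # True
print(np.allclose(M @ mu, mu))          # True
print(abs(eigvals[1]))                  # |lambda_2| < 1 sets the decay rate
```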
This section is devoted to the intuition behind the following theorem, whose full proof is given in Appendix A and Appendix B. The theorem states roughly that for a Markov process, the mutual information between two points in time $t_1$ and $t_2$ decays exponentially for large separation $|t_2 - t_1|$:
Theorem 1.
Let $M$ be a Markov matrix that generates a Markov process. If $M$ is irreducible and aperiodic, then the asymptotic behavior of the mutual information $I(t_1, t_2)$ is exponential decay toward zero for $|t_2 - t_1| \gg 1$, with decay timescale set by $\log\frac{1}{|\lambda_2|}$, where $\lambda_2$ is the second largest eigenvalue of $M$. If $M$ is reducible or periodic, $I$ can instead decay to a constant; no Markov process whatsoever can produce power law decay.

Suppose $M$ is irreducible and aperiodic, so that $\mathbf{p}_t \to \boldsymbol{\mu}$ as $t \to \infty$, as mentioned above. This convergence of one-point statistics, e.g., $\mathbf{p}_t$, has been well-studied [40]. However, one can also study higher order statistics such as the joint probability distribution for two points in time. For succinctness, let us write $P(a,b) \equiv P(X = a, Y = b)$, where $X = X_{t_1}$, $Y = X_{t_2}$ and $\tau \equiv |t_2 - t_1|$. We are interested in the asymptotic situation where the Markov process has converged to its steady state, so the marginal distribution $P(a) \equiv \sum_b P(a,b) = \mu_a$, independently of time.
If the joint probability distribution approximately factorizes as $P(a,b) \approx \mu_a\mu_b$ for sufficiently large and well-separated times $t_1$ and $t_2$ (as we will soon prove), the mutual information will be small. We can therefore Taylor expand the logarithm from Equation (1) around the point $P(a,b) = P(a)P(b)$, giving:
$$I(X,Y) = \left\langle \log_B \frac{P(a,b)}{P(a)P(b)} \right\rangle = \left\langle \log_B \left[1 + \left(\frac{P(a,b)}{P(a)P(b)} - 1\right)\right] \right\rangle \approx \left\langle \frac{P(a,b)}{P(a)P(b)} - 1 \right\rangle \frac{1}{\ln B} = \frac{I_R(X,Y)}{\ln B}, \tag{3}$$
where we have defined the rational mutual information:
$$I_R \equiv \left\langle \frac{P(a,b)}{P(a)P(b)} \right\rangle - 1.$$
For comparing the rational mutual information with the usual mutual information, it will be convenient to take $e$ as the base $B$ of the logarithm. We derive useful properties of the rational mutual information in Appendix A. To mention just one, we note that the rational mutual information is not just asymptotically equal to the mutual information in the limit of near-independence, but it also provides a strict upper bound on it: $0 \le I \le I_R$.
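As a quick numerical sanity check of this bound, the following sketch (ours, not from the paper) compares $I$ and $I_R$ for a random joint distribution, using natural logarithms ($B = e$).

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum()                            # random joint distribution P(a,b)
px, py = P.sum(axis=1), P.sum(axis=0)
indep = np.outer(px, py)

I = np.sum(P * np.log(P / indep))       # mutual information, nats
I_R = np.sum(P**2 / indep) - 1.0        # rational mutual information

print(0.0 <= I <= I_R)                  # True
```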
Let us without loss of generality take $t_2 > t_1$. Then, iterating Equation (2) $\tau$ times gives $P(b|a) = (M^\tau)_{ba}$. Since $P(a,b) = P(a)\,P(b|a)$, we obtain:
$$I_R + 1 = \left\langle \frac{P(a,b)}{P(a)P(b)} \right\rangle = \sum_{ab} P(a,b)\,\frac{P(a,b)}{P(a)P(b)} = \sum_{ab} \frac{P(b|a)^2 P(a)^2}{P(a)P(b)} = \sum_{ab} \frac{\mu_a}{\mu_b}\left[(M^\tau)_{ba}\right]^2. \tag{4}$$
We will continue the proof by considering the typical case, where the eigenvalues of $M$ are all distinct (non-degenerate) and the Markov matrix is irreducible and aperiodic; we will generalize to the other cases (which form a set of measure zero) in Appendix B. Since the eigenvalues are distinct, we can diagonalize $M$ by writing:
$$M = BDB^{-1} \tag{5}$$
for some invertible matrix $B$ and a diagonal matrix $D$ whose diagonal elements are the eigenvalues: $D_{ii} = \lambda_i$. Raising Equation (5) to the power $\tau$ gives $M^\tau = BD^\tau B^{-1}$, i.e.,
$$(M^\tau)_{ba} = \sum_c \lambda_c^\tau\, B_{bc}\,(B^{-1})_{ca}. \tag{6}$$
Since $M$ is non-degenerate, irreducible and aperiodic, $1 = \lambda_1 > |\lambda_2| > \cdots > |\lambda_n|$, so all terms except the first in the sum of Equation (6) decay exponentially with $\tau$, at a decay rate that grows with $c$. Defining $r = \lambda_3/\lambda_2$, we have:
$$(M^\tau)_{ba} = B_{b1}(B^{-1})_{1a} + \lambda_2^\tau\left[B_{b2}(B^{-1})_{2a} + O(r^\tau)\right] = \mu_b + \lambda_2^\tau A_{ba}, \tag{7}$$
where we have made use of the fact that an irreducible and aperiodic Markov process must converge to its stationary distribution for large $\tau$ (so that $B_{b1}(B^{-1})_{1a} = \mu_b$), and we have defined $A$ as the expression in square brackets above, satisfying $\lim_{\tau\to\infty} A_{ba} = B_{b2}(B^{-1})_{2a}$. Note that $\sum_b A_{ba} = 0$ in order for $M^\tau$ to be properly normalized.
Substituting Equation (7) into Equation (4) and using the facts that $\sum_a \mu_a = 1$ and $\sum_b A_{ba} = 0$, we obtain:
$$I_R = \sum_{ab}\frac{\mu_a}{\mu_b}\left[(M^\tau)_{ba}\right]^2 - 1 = \sum_{ab}\frac{\mu_a}{\mu_b}\left[\mu_b^2 + 2\mu_b\lambda_2^\tau A_{ba} + \lambda_2^{2\tau}A_{ba}^2\right] - 1 = \lambda_2^{2\tau}\left(\sum_{ab}\mu_b^{-1}\mu_a A_{ba}^2\right) = C\,\lambda_2^{2\tau}, \tag{8}$$
where the term in the last parentheses is of the form $C = C_0 + O(r^\tau)$.
In summary, we have shown that an irreducible and aperiodic Markov process with non-degenerate eigenvalues cannot produce critical behavior, because the mutual information decays exponentially. In fact, no Markov process can, as we show in Appendix B.
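Before moving on, the following sketch (our own illustration) verifies this numerically: it evaluates $I_R(\tau)$ directly from Equation (4) for a random irreducible, aperiodic Markov matrix and confirms that the ratio $I_R(\tau)/|\lambda_2|^{2\tau}$ tends to a constant $C$, as in Equation (8).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.random((n, n))
M /= M.sum(axis=0)                       # column-stochastic: sum_a M_ab = 1

w, V = np.linalg.eig(M)
order = np.argsort(-np.abs(w))
w, V = w[order], V[:, order]
lam2 = abs(w[1])                         # second largest eigenvalue magnitude

mu = np.real(V[:, 0])
mu /= mu.sum()                           # stationary distribution

def I_R(tau):
    Mt = np.linalg.matrix_power(M, tau)  # (M^tau)_{ba} = P(b at t2 | a at t1)
    return float(np.sum(Mt**2 * (mu[None, :] / mu[:, None])) - 1.0)

for tau in (5, 10, 15, 20):
    print(tau, I_R(tau), I_R(tau) / lam2**(2 * tau))  # last ratio ~ constant C
```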
To hammer the final nail into the coffin of Markov processes as models of critical behavior, we need to close a final loophole. Their fundamental problem is the lack of long-term memory, which can be superficially overcome by redefining the state space to include symbols from the past. For example, if the current state is one of $n$ possibilities, and we wish the process to depend on the last $\tau$ symbols, we can define an expanded state space consisting of the $n^\tau$ possible sequences of length $\tau$ and a corresponding $n^\tau \times n^\tau$ Markov matrix (or an $n^\tau \times n$ table of conditional probabilities for the next symbol given the last $\tau$ symbols). Although such a model could fit the curves in Figure 1 in theory, it cannot in practice, because $M$ requires far more parameters than there are atoms in our observable Universe ($\sim 10^{78}$): even for as few as $n = 4$ symbols and $\tau = 1000$, the Markov process involves over $4^{1000} \approx 10^{602}$ parameters. Scale-invariance aside, we can also see how Markov processes fail simply by considering the structure of text. To model English well, $M$ would need to correctly close parentheses even if they were opened more than $\tau = 100$ characters ago, requiring an $M$-matrix with more than $n^{100}$ parameters, where $n > 26$ is the number of characters used.
We can significantly generalize Theorem 1 into a theorem about hidden Markov models (HMM). In an HMM, the observed sequence $X_1, \ldots, X_n$ is only part of the picture: there are hidden variables $Y_1, \ldots, Y_n$ that themselves form a Markov chain. We can think of an HMM as follows: imagine a machine with an internal state space $Y$ that updates itself according to some Markovian dynamics. The internal dynamics are never observed, but at each time-step, the machine also produces an output $X_i$ from the hidden state $Y_i$ ($Y_i \to X_i$), and these outputs form the sequence that we can observe. These models are quite general and are used to model a wealth of empirical data (see, e.g., [41]).
Theorem 2.
Let $M$ be a Markov matrix that generates the transitions between hidden states $Y_i$ in an HMM. If $M$ is irreducible and aperiodic, then the asymptotic behavior of the mutual information $I(t_1, t_2)$ is exponential decay toward zero for $|t_2 - t_1| \gg 1$, with decay timescale set by $\log\frac{1}{|\lambda_2|}$, where $\lambda_2$ is the second largest eigenvalue of $M$.

This theorem is a strict generalization of Theorem 1, since given any Markov process $\mathcal{M}$ with corresponding matrix $M$, we can construct an HMM that reproduces the exact statistics of $\mathcal{M}$ by using $M$ as the transition matrix between the $Y$'s and generating $X_i$ from $Y_i$ by simply setting $X_i = Y_i$ with probability one.
The proof is very similar in spirit to the proof of Theorem 1, so we will just present a sketch here, leaving a full proof to Appendix B. Let $G$ be the Markov matrix that governs the emission $Y_i \to X_i$. To compute the joint probability between two random variables $X_{t_1}$ and $X_{t_2}$, we simply compute the joint probability distribution between $Y_{t_1}$ and $Y_{t_2}$, which again involves a factor of $M^\tau$, and then we use two factors of $G$ to convert the joint probability on $Y_{t_1}, Y_{t_2}$ to a joint probability on $X_{t_1}, X_{t_2}$. These additional two factors of $G$ will not change the fact that there is an exponential decay given by $M^\tau$.
A simple, intuitive bound from information theory (namely the data processing inequality [40]) gives $I(Y_{t_1}, Y_{t_2}) \ge I(Y_{t_1}, X_{t_2}) \ge I(X_{t_1}, X_{t_2})$. However, Theorem 1 implies that $I(Y_{t_1}, Y_{t_2})$ decays exponentially. Hence, $I(X_{t_1}, X_{t_2})$ must also decay at least as fast as exponentially.
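The same conclusion can be checked numerically for HMMs. The sketch below (ours; the transition and emission matrices are random examples) builds the visible-symbol joint distribution used in Appendix B and confirms that $I_R$ again decays as $|\lambda_2|^{2\tau}$.

```python
import numpy as np

rng = np.random.default_rng(3)
nh, nv = 4, 3                                    # hidden / visible alphabet sizes
M = rng.random((nh, nh)); M /= M.sum(axis=0)     # hidden-state transition matrix
G = rng.random((nv, nh)); G /= G.sum(axis=0)     # emission matrix G_xy = P(x | y)

w, V = np.linalg.eig(M)
mu = np.real(V[:, np.argmax(np.real(w))]); mu /= mu.sum()
lam2 = np.sort(np.abs(w))[-2]

def I_R_visible(tau):
    # joint[b, a] = sum_{cd} G_bd (M^tau)_{dc} mu_c G_ac
    joint = G @ (np.linalg.matrix_power(M, tau) * mu) @ G.T
    pa, pb = joint.sum(axis=0), joint.sum(axis=1)
    return float(np.sum(joint**2 / np.outer(pb, pa)) - 1.0)

for tau in (5, 10, 15):
    print(tau, I_R_visible(tau) / lam2**(2 * tau))   # ratio ~ constant
```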
There is a well-known correspondence between so-called probabilistic regular grammars [42] (sometimes referred to as stochastic regular grammars) and HMMs. Given a probabilistic regular grammar, one can generate an HMM that reproduces all statistics and vice versa. Hence, we can also state Theorem 2 as follows:
Corollary 1.
No probabilistic regular grammar exhibits criticality.
In the next section, we will show that this statement is not true for context-free grammars.

3. Power Laws from Generative Grammar

If computationally-feasible Markov processes cannot produce critical behavior, then how do such sequences arise? In this section, we construct a toy model where sequences exhibit criticality. In the parlance of theoretical linguistics, our language is generated by a stochastic or probabilistic context-free grammar (PCFG) [43,44,45,46]. We will discuss the relationship between our model and a generic PCFG in Section 3.3.

3.1. A Simple Recursive Grammar Model

We can formalize the above considerations by giving production rules for a toy language $L$ over an alphabet $A$. The language is defined by how a native speaker of $L$ produces sentences: first, she/he draws one of the $|A|$ characters from some probability distribution $\mu$ on $A$. She/he then takes this character $x_0$ and replaces it with $q$ new symbols, drawn from a probability distribution $P(b|a)$, where $a \in A$ is the original symbol and $b \in A$ is any one of the new symbols. This is repeated over and over. After $u$ steps, she/he has a sentence of length $q^u$. (This exponential blowup is reminiscent of de Sitter space in cosmic inflation. There is actually a much deeper mathematical analogy involving conformal symmetry and p-adic numbers that has been discussed [47].)
One can ask for the character statistics of the sentence at production step $u$ given the statistics of the sentence at production step $u-1$. The character distribution is simply:

$$P_u(b) = \sum_a P(b|a)\,P_{u-1}(a). \tag{9}$$
Of course this equation does not imply that the process is a Markov process when the sentences are read left to right. To characterize the statistics as read from left to right, we really want to compute the statistical dependencies within a given sequence, e.g., at fixed u.
To see that the mutual information decays like a power law rather than exponentially with separation, consider two random variables $X$ and $Y$ separated by $\tau$. One can ask how many generations took place between $X$ and the nearest common ancestor of $X$ and $Y$. Typically, this will be about $\log_q \tau$ generations. Hence, in the tree graph shown in Figure 2, which illustrates the special case $q = 2$, the number of edges $\Delta$ between $X$ and $Y$ is about $2\log_q \tau$. Hence, by the previous result for Markov processes, we expect an exponential decay of the mutual information in the variable $\Delta \approx 2\log_q \tau$. This means that $I(X,Y)$ should be of the form:
$$I(X,Y) \sim q^{-\gamma\Delta} = q^{-2\gamma\log_q\tau} = \tau^{-2\gamma}, \tag{10}$$
where $\gamma$ is controlled by the second-largest eigenvalue of $G$, the matrix of conditional probabilities $P(b|a)$. However, this exponential decay in $\Delta$ is exactly a power law decay in $\tau$! This intuitive argument is transformed into a rigorous proof in Appendix C.
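This power law is straightforward to observe in simulation. The sketch below (a toy implementation of this section's model, with an arbitrary random choice of $P(b|a)$) grows a sentence by repeated branching and estimates the mutual information as a function of separation; plotted on log-log axes, the resulting curve is roughly a straight line.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, depth = 3, 2, 16                      # alphabet size, branching, generations
G = rng.random((n, n)) + 0.5 * np.eye(n)    # P(b|a), biased toward copying
G /= G.sum(axis=0)

seq = np.array([0])
for _ in range(depth):                      # replace every symbol by q children
    seq = np.concatenate([rng.choice(n, size=q, p=G[:, a]) for a in seq])

def mutual_information(x, y, n):
    joint = np.zeros((n, n))
    np.add.at(joint, (x, y), 1.0)           # empirical joint distribution
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / np.outer(px, py)[mask]))

for tau in (1, 4, 16, 64, 256, 1024):
    print(tau, mutual_information(seq[:-tau], seq[tau:], n))
```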

3.2. Further Generalization: Strongly Correlated Characters in Words

In the model we have been describing so far, all nodes emanating from the same parent can be freely permuted, since they are conditionally independent. In this sense, characters within a newly-generated word are uncorrelated. We call models with this property weakly correlated. There are still arbitrarily large correlations between words, but not inside of words. If a weakly correlated grammar allows $a \to ab$, it must allow $a \to ba$ with the same probability. We now wish to relax this property to allow for the strongly-correlated case, where variables may not be conditionally independent given the parents. This allows us to take a big step towards modeling realistic languages: in English, “god” significantly differs in meaning and usage from “dog”.
In the previous computation, the crucial ingredient was the joint probability $P(a,b) = P(X = a, Y = b)$. Let us start with a seemingly trivial remark. This joint probability can be re-interpreted as a conditional joint probability. Instead of $X$ and $Y$ being random variables at specified sites $t_1$ and $t_2$, we can view them as random variables at randomly chosen locations, conditioned on their locations being $t_1$ and $t_2$. Somewhat pedantically, we write $P(a,b) = P(a,b\,|\,t_1,t_2)$. This clarifies the important fact that the only way that $P(a,b\,|\,t_1,t_2)$ depends on $t_1$ and $t_2$ is via a dependence on $\Delta(t_1,t_2)$. Hence,
$$P(a,b\,|\,t_1,t_2) = P(a,b\,|\,\Delta). \tag{11}$$
This equation is specific to weakly-correlated models and does not hold for generic strongly-correlated models.
In computing the mutual information as a function of separation, the relevant quantity is the right-hand side of Equation (11). The reason is that in practical scenarios, we estimate probabilities by sampling a sequence at fixed separation $|t_2 - t_1|$, corresponding to $\Delta \approx 2\log_q|t_2 - t_1| + O(1)$, but varying $t_1$ and $t_2$ (the $O(1)$ term is discussed in Appendix C).
Now, whereas $P(a,b\,|\,t_1,t_2)$ will change when strong correlations are introduced, $P(a,b\,|\,\Delta)$ will retain a very similar form. This can be seen as follows: knowledge of the geodesic distance corresponds to knowledge of how high up the closest parent node is in the hierarchy (see Figure 2). Imagine flowing down from the parent node to the leaves. We start with the stationary distribution $\mu_i$ at the parent node. At the first layer below the parent node (corresponding to a causal distance $\Delta = 2$), we get $Q_{rr'} \equiv P(rr') = \sum_i P_S(rr'\,|\,i)\,P(i)$, where the symmetrized probability $P_S(rr'\,|\,i) = \frac{1}{2}\left[P(rr'\,|\,i) + P(r'r\,|\,i)\right]$ comes into play because knowledge of the fact that $r, r'$ are separated by $\Delta = 2$ gives no information about their order. To continue this process to the second stage and beyond, we only need the matrix $G_{sr} = P(s\,|\,r) = \sum_{s'} P_S(ss'\,|\,r)$. The reason is that since we only wish to compute the two-point function at the bottom of the tree, the only place where a three-point function is ever needed is at the very top of the tree, where we need to take a single parent into two children nodes. After that, the computation only involves evolving a child node into a grand-child node, and so forth. Hence, the overall two-point probability matrix $P(ab\,|\,\Delta)$ is given by the simple equation:
$$P(\Delta) = G^{\Delta/2-1}\, Q\, \left(G^{\Delta/2-1}\right)^T. \tag{12}$$
As we can see from the above formula, the strongly-correlated case essentially reduces to the weakly-correlated case, where:
$$P(\Delta) = G^{\Delta/2}\,\mathrm{diag}(\boldsymbol{\mu})\,\left(G^{\Delta/2}\right)^T, \tag{13}$$
except for a perturbation near the top of the tree. We can think of the generalization as equivalent to the old model except for a different initial condition. We thus expect on intuitive grounds that the model will still exhibit power law decay. This intuition is correct, as we will prove in Appendix C. Our result can be summarized by the following theorem:
Theorem 3.
There exist probabilistic context-free grammars (PCFGs) such that the mutual information $I(A,B)$ between two symbols $A$ and $B$ in the terminal strings of the language decays like $d^{-k}$, where $d$ is the number of symbols in between $A$ and $B$.
In Appendix C, we give an explicit formula for k, as well as the normalization of the power law for a particular class of grammars.
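Theorem 3 can also be checked without sampling by evaluating Equation (12) exactly. In the sketch below (ours), the top-of-tree matrix $Q$ is a hypothetical mixture of $\mathrm{diag}(\boldsymbol{\mu})$ and $\boldsymbol{\mu}\boldsymbol{\mu}^T$, both of which satisfy the constraint $\sum_r Q_{rs} = \mu_s$; the ratio $I_R(\Delta)/\lambda_2^{2\Delta-4}$ then tends to a constant, in agreement with Appendix C.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
G = rng.random((n, n)); G /= G.sum(axis=0)       # G_sr = P(s|r)

w, V = np.linalg.eig(G)
mu = np.real(V[:, np.argmax(np.real(w))]); mu /= mu.sum()
lam2 = np.sort(np.abs(w))[-2]

alpha = 0.7                                      # hypothetical mixing weight
Q = alpha * np.diag(mu) + (1 - alpha) * np.outer(mu, mu)

def I_R(Delta):                                  # Delta = even causal distance >= 4
    Gp = np.linalg.matrix_power(G, Delta // 2 - 1)
    P = Gp @ Q @ Gp.T                            # Equation (12)
    return float(np.sum(P**2 / np.outer(mu, mu)) - 1.0)

for Delta in (4, 8, 12, 16):
    print(Delta, I_R(Delta) / lam2**(2 * Delta - 4))   # ratio ~ constant
```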

3.3. Further Generalization: Bayesian Networks and Context-Free Grammars

Just how generic is the scaling behavior of our model? What if the length of the words is not constant? What about more complex dependencies between layers? If we retrace the derivation in the above arguments, it becomes clear that the only key feature of all of our models considered so far is that the rational mutual information decays exponentially with the causal distance $\Delta$:

$$I_R \sim e^{-\gamma\Delta}. \tag{14}$$
This is true for (hidden) Markov processes and the hierarchical grammar models that we have considered above. So far, we have defined $\Delta$ in terms of quantities specific to these models; for a Markov process, $\Delta$ is simply the time separation. Can we define $\Delta$ more generically? In order to do so, let us make a brief aside about Bayesian networks. Formally, a Bayesian net is a directed acyclic graph (DAG), where the vertices are random variables and conditional dependencies are represented by the arrows. Now, instead of thinking of $X$ and $Y$ as living at certain times $(t_1, t_2)$, we can think of them as living at vertices $(i,j)$ of the graph.
We define $\Delta(i,j)$ as follows. Since the Bayesian net is a DAG, it is equipped with a partial order $\le$ on vertices: we write $k \le l$ iff there is a directed path from $k$ to $l$, in which case we say that $k$ is an ancestor of $l$. We define $L(k,l)$ to be the number of edges on the shortest directed path from $k$ to $l$. Finally, we define the causal distance $\Delta(i,j)$ to be:

$$\Delta(i,j) \equiv \min_{x \le i,\; x \le j}\left[L(x,i) + L(x,j)\right]. \tag{15}$$
It is easy to see that this reduces to our previous definition of Δ for Markov processes and recursive generative trees (see Figure 2).
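A brute-force implementation of Equation (15) makes the definition concrete. The sketch below is ours and assumes the networkx package is available; it simply minimizes $L(x,i) + L(x,j)$ over all common ancestors $x$.

```python
import networkx as nx  # assumption: the networkx package is installed

def causal_distance(dag_edges, i, j):
    """Delta(i, j) = min over common ancestors x of L(x, i) + L(x, j)."""
    g = nx.DiGraph(dag_edges)
    best = float("inf")
    for x in g.nodes:
        try:
            best = min(best,
                       nx.shortest_path_length(g, x, i) +
                       nx.shortest_path_length(g, x, j))
        except nx.NetworkXNoPath:
            continue                     # x is not a common ancestor of i and j
    return best

# A depth-2 binary tree: root 0 has children 1, 2 and grandchildren 3..6.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(causal_distance(edges, 3, 4))      # 2: the nearest common ancestor is 1
print(causal_distance(edges, 3, 6))      # 4: the nearest common ancestor is 0
```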
Is it true that our exponential decay result from Equation (14) holds even for a generic Bayesian net? The answer is yes, under a suitable approximation. The approximation is to ignore long paths in the network when computing the mutual information. In other words, the mutual information tends to be dominated by the shortest paths via a common ancestor, whose length is Δ . This is generally a reasonable approximation, because these longer paths will give exponentially weaker correlations, so unless the number of paths increases exponentially (or faster) with length, the overall scaling will not change.
With this approximation, we can state a key finding of our theoretical work. Deep models are important because without the extra “dimension” of depth/abstraction, there is no way to construct “shortcuts” between random variables separated by large amounts of time using only short-range interactions; one-dimensional models are doomed to exponential decay. Hence, the ubiquity of power laws may partially explain the success of applications of deep learning to natural language processing. In fact, this can be seen as the Bayesian net version of the important result in statistical physics that there are no phase transitions in 1D [48,49].
One might object that while the requirement of short-ranged interactions is highly motivated in physical systems, it is unclear why this restriction is necessary in the context of natural languages. Our response is that allowing for a generic interaction between say k-nearest neighbors will increase the number of parameters in the model exponentially with k.
There are close analogies between our deep recursive grammar and more conventional physical systems. For example, according to the emerging standard model of cosmology, there was an early period of cosmological inflation when density fluctuations were added on a fixed scale as space itself underwent repeated doublings, combining to produce an excellent approximation to a power law correlation function. This inflationary process is simply a special case of our deep recursive model (generalized from one to three dimensions). In this case, the hidden “depth” dimension of our model corresponds to cosmic time, and the time parameter that labels the place in the sequence of interest corresponds to space. A similar physical analogy is turbulence in a fluid, where energy in the form of vortices cascades from large scales to ever smaller scales through a recursive process in which larger vortices create smaller ones, leading to a scale-invariant power spectrum. There is also a close analogy to quantum mechanics: Equation (14) expresses the exponential decay of the mutual information with geodesic distance through the Bayesian network; in quantum mechanics, the correlation function of a many-body system decays exponentially with the geodesic distance defined by the tensor network that represents the wavefunction [50].
It is also worth examining our model using techniques from linguistics. A generic PCFG G consists of three ingredients:
  • An alphabet $\mathcal{A} = A \cup T$, which consists of non-terminal symbols $A$ and terminal symbols $T$.
  • A set of production rules of the form $a \to B$, where the left-hand side $a \in A$ is always a single non-terminal symbol and $B$ is a string consisting of symbols in $\mathcal{A}$.
  • Probabilities associated with each production rule, $P(a \to B)$, such that for each $a \in A$, $\sum_B P(a \to B) = 1$.
It is a remarkable fact that any stochastic context-free grammar can be put in Chomsky normal form [27,45]. This means that given $G$, there exists some other grammar $\bar{G}$ such that all of the production rules are either of the form $a \to bc$ or $a \to \alpha$, where $a, b, c \in A$ and $\alpha \in T$, and the corresponding languages agree: $L(G) = L(\bar{G})$. In other words, given some complicated grammar $G$, we can always find a grammar $\bar{G}$ such that the corresponding statistics of the languages are identical and all of the production rules replace a symbol by at most two symbols (at the cost of increasing the number of production rules in $\bar{G}$).
This formalism allows us to strengthen our claims. Our model with a branching factor $q = 2$ is precisely the class of all context-free grammars generated by production rules of the form $a \to bc$. While this might naively seem like a very small subset of all possible context-free grammars, the fact that any context-free grammar can be converted into Chomsky normal form shows that our theory deals with a generic context-free grammar, except for the additional step of producing terminal symbols from non-terminal symbols. Starting from a single symbol, the deep dynamics of the PCFG in normal form are given by a strongly-correlated branching process with $q = 2$, which proceeds for a characteristic number of productions before terminal symbols are produced. Before most symbols have been converted to terminal symbols, our theory applies, and power law correlations will exist amongst the non-terminal symbols. To the extent that the terminal symbols that are then produced from non-terminal symbols reflect the correlations of the non-terminal symbols, we expect context-free grammars to be able to produce power law correlations.
From our corollary to Theorem 2, we know that regular grammars cannot exhibit power law decays in mutual information. Hence, context-free grammars are the simplest grammars that support criticality, i.e., they are the lowest level of the Chomsky hierarchy that supports criticality. Note that our corollary to Theorem 2 also implies that not all context-free grammars exhibit criticality, since regular grammars are a strict subset of context-free grammars. Whether one can formulate an even sharper criterion should be the subject of future work.

4. Discussion

By introducing a quantity we term rational mutual information, we have proven that hidden Markov processes generically exhibit exponential decay, whereas PCFGs can exhibit power law decays thanks to the “extra dimension” in the network. To the extent that natural languages and other empirical data sources are generated by processes more similar to PCFGs than Markov processes, this explains why they can exhibit power law decays.
We will draw on these lessons to give a semi-heuristic explanation for the success of deep recurrent neural networks widely used for natural language processing and discuss how mutual information can be used as a tool for validating machine learning algorithms.

4.1. Connection to Recurrent Neural Networks

While the generative grammar model is appealing from a linguistic perspective, it may appear to have little to do with machine learning algorithms that are implemented in practice. However, as we will now see, this model can in fact be viewed as an idealized version of a long short-term memory (LSTM) recurrent neural network (RNN) that is generating (“hallucinating”) a sequence.
Figure 3 shows that an LSTM RNN can reproduce critical behavior. In this example, we trained an RNN (consisting of three hidden LSTM layers of size 256 as described in [29]) to predict the next character in the 100-MB Wikipedia sample known as enwik8 [20]. We then used the LSTM to hallucinate 1 MB of text and measured the mutual information as a function of distance. Figure 3 shows that not only is the resulting mutual information function a rough power law, but it also has a slope that is relatively similar to the original.
We can understand this success by considering a simplified model that is less powerful and complex than a full LSTM, but retains some of its core features; such an approach to studying deep neural nets has proven fruitful in the past (e.g., [51]).
The usual implementation of LSTMs consists of multiple cells stacked one on top of another. Each cell of the LSTM (depicted as a yellow circle in Figure 4) has a state that is characterized by a matrix of numbers $C_t$ and is updated according to the following rule:
$$C_t = f_t \odot C_{t-1} + i_t \odot D_t, \tag{16}$$
where $\odot$ denotes element-wise multiplication, and $D_t = D_t(C_{t-1}, x_t)$ is some function of the input $x_t$ from the cell in the layer above (denoted by downward arrows in Figure 4), the details of which do not concern us. Generically, a graph of this picture would look like a rectangular lattice, with each node having an arrow to its right (corresponding to the first term in the above equation) and an arrow from above (corresponding to the second term in the equation). However, if the forget weights $f$ decay rapidly with depth (e.g., as we go from the bottom cell towards the top) so that the timescales for forgetting grow exponentially, we will show that a reasonable approximation to the dynamics is given by Figure 4.
If we neglect the dependency of $D_t$ on $C_{t-1}$, the forget gate $f_t$ leads to exponential decay of $C_t$, e.g., $C_t = f^t \odot C_0$ for a constant forget weight $f$; this is how LSTMs forget their past. Note that all operations, including exponentiation, are performed element-wise in this section only.
In general, a cell will smoothly forget its past over a timescale $\tau_f \equiv 1/\log(1/f)$. On timescales much longer than $\tau_f$, the cells are weakly correlated; on timescales much shorter than $\tau_f$, the cells are strongly correlated. Hence, a discrete approximation to the above equation is the following:
$$C_t = \begin{cases} C_{t-1} & \text{for } \tau_f \text{ time steps,} \\ D_t(x_t) & \text{on every } (\tau_f + 1)\text{-th time step.} \end{cases} \tag{17}$$
This simple approximation leads us right back to the hierarchical grammar. The first line of the above equation is labeled “remember” in Figure 2, and the second line is what we refer to as “Markov”, since the next state depends only on the previous one. Since each cell perfectly remembers its previous state for $\tau_f$ time steps, the tree can be reorganized so that it is exactly of the form shown in Figure 4, by omitting nodes that simply copy the previous state. Now, supposing that $\tau_f$ grows exponentially with depth, $\tau_f(\text{layer } i) \approx q\,\tau_f(\text{layer } i+1)$, we see that the successive layers become exponentially sparse, which is exactly what happens in our deep grammar model, identifying the parameter $q$, which governs the growth of the forget timescale, with the branching parameter of the deep grammar model (compare Figure 2 and Figure 4).
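The sketch below is a toy illustration of this approximation (not an actual LSTM implementation): a stack of cells whose forget timescales grow as $\tau_f = q^k$ with layer index $k$, so that printing which cells refresh at each step makes the exponentially sparsening hierarchy of Figure 4 visible.

```python
import numpy as np

rng = np.random.default_rng(5)
q, layers, steps = 2, 4, 16
state = np.zeros(layers)

for t in range(1, steps + 1):
    refreshed = []
    for k in range(layers):
        tau_f = q**k                 # forget timescale grows with layer index
        if t % tau_f == 0:           # "Markov" update: overwrite the state
            state[k] = rng.random()
            refreshed.append(k)
        # else: "remember", i.e., the cell keeps its previous state
    print(t, refreshed)              # higher k refreshes exponentially more rarely
```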

4.2. A New Diagnostic for Machine Learning

How can one tell whether a neural network can be further improved? For example, an LSTM RNN similar to the one we used in Figure 3 can predict Wikipedia text with a residual entropy of ∼1.4 bits/character [29], which is very close to the performance of current state-of-the-art custom compression software, which achieves ∼1.3 bits/character [52]. Is that essentially the best compression possible, or can significant improvements be made?
Our results provide a powerful diagnostic for shedding further light on this question: measuring the mutual information as a function of separation between symbols is a computationally-efficient way of extracting much more meaningful information about the performance of a model than simply evaluating the loss function, usually given by the conditional entropy $H(X_t\,|\,X_{t-1}, X_{t-2}, \ldots)$.
Figure 3 shows that even with just three layers, the LSTM-RNN is able to learn long-range correlations; the slope of the mutual information of hallucinated text is comparable to that of the training set. However, the figure also shows that the predictions of our LSTM-RNN are far from optimal. Interestingly, the hallucinated text shows about the same mutual information at distances of $O(1)$, but significantly less mutual information at large separation. Without requiring any knowledge about the true entropy of the input text (which is famously NP-hard to compute), this figure immediately shows that the LSTM-RNN we trained is performing sub-optimally: it is not able to capture all of the long-term dependencies found in the training data.
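The diagnostic itself is simple to implement. The sketch below (ours) estimates the character-level mutual information as a function of separation for any pair of text samples; the file names are hypothetical placeholders, and finite-sample estimation bias is ignored for simplicity.

```python
import numpy as np

def char_mi_vs_separation(text, separations):
    """Mutual information (bits) between characters separated by tau symbols."""
    symbols = sorted(set(text))
    lookup = {c: k for k, c in enumerate(symbols)}
    idx = np.array([lookup[c] for c in text])
    n = len(symbols)
    out = {}
    for tau in separations:
        joint = np.zeros((n, n))
        np.add.at(joint, (idx[:-tau], idx[tau:]), 1.0)
        joint /= joint.sum()
        px, py = joint.sum(axis=1), joint.sum(axis=0)
        mask = joint > 0
        out[tau] = float(np.sum(joint[mask] *
                                np.log2(joint[mask] / np.outer(px, py)[mask])))
    return out

# Compare training data against hallucinated text (placeholder file names);
# finite-sample bias is ignored in this sketch.
for name in ("enwik8_sample.txt", "hallucinated.txt"):
    text = open(name).read()
    print(name, char_mi_vs_separation(text, [1, 4, 16, 64]))
```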
As a comparison, we also calculated the bigram transition matrix $P(X_3X_4\,|\,X_1X_2)$ from the data and used it to hallucinate 1 MB of text. Despite the fact that this higher order Markov model needs $\sim 10^3$ times more parameters than our LSTM-RNN, it captures less than a fifth of the mutual information captured by the LSTM-RNN even at modest separations $\gtrsim 5$. This phenomenon is related to a classic result in the theory of formal languages: context-free grammars can generate languages that no regular grammar, and hence no (hidden) Markov model, can reproduce, so even approximating their long-range statistics with a Markov model requires a number of parameters that grows exponentially with the range of the correlations (cf. Section 2).
In summary, Figure 3 shows both the successes and shortcomings of machine learning. On the one hand, LSTM-RNNs can capture long-range correlations much more efficiently than Markovian models; on the other hand, they cannot match the two-point functions of the training data, never mind higher order statistics!
One might wonder how the lack of mutual information at large scales for the bigram Markov model is manifested in the hallucinated text. Below, we give a line from the Markov hallucinations:
[[computhourgist, Flagesernmenserved whirequotes
or thand dy excommentaligmaktophy as
its:Fran at ||\&lt;If ISBN 088;\&ampategor
and on of to [[Prefung]]’ and at them rector>
This can be compared with an example from the LSTM RNN:
 Proudknow pop groups at Oxford
- [http://ccw.com/faqsisdaler/cardiffstwander
--helgar.jpg] and Cape Normans’s first
 attacks Cup rigid (AM).
Despite using many fewer parameters, the LSTM manages to produce a realistic-looking URL and is able to close brackets correctly [53], something with which the Markov model struggles.
Although great challenges remain to accurately model natural languages, our results at least allow us to improve on some earlier answers to key questions we sought to address:
  • Why is natural language so hard? The old answer was that language is uniquely human. Our new answer is that at least part of the difficulty is that natural language is a critical system, with long-range correlations that are difficult for machines to learn.
  • Why are machines bad at natural languages, and why are they good? The old answer is that Markov models are simply not brain/human-like, whereas neural nets are more brain-like and, hence, better. Our new answer is that Markov models or other one-dimensional models cannot exhibit critical behavior, whereas neural nets and other deep models (where an extra hidden dimension is formed by the layers of the network) are able to exhibit critical behavior.
  • How can we know when machines are bad or good? The old answer is to compute the loss function. Our new answer is to also compute the mutual information as a function of separation, which can immediately show how well the model is doing at capturing correlations on different scales.
Future studies could include generalizing our theorems to more complex formal languages, such as merge grammars.

Acknowledgments

This work was supported by the Foundational Questions Institute http://fqxi.org. The authors wish to thank Noam Chomsky and Greg Lessard for valuable comments on the linguistic aspects of this work, Taiga Abe, Meia Chita-Tegmark, Hanna Field, Esther Goldberg, Emily Mu, John Peurifoy, Tomaso Poggio, Luis Seoane, Leon Shen, David Theurel, Cindy Zhao and two anonymous referees for helpful discussions and encouragement, Michelle Xu for help acquiring genome data and the Center for Brains, Minds and Machines (CBMM) for hospitality.

Author Contributions

H.W.L. proposed the project idea in collaboration with M.T. H.W.L. and M.T. collaboratively formulated the proofs, performed the numerical experiments, analyzed the data, and wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Properties of Rational Mutual Information

In this Appendix, we prove the following elementary properties of rational mutual information:
  • Symmetry: for any two random variables $X$ and $Y$, $I_R(X,Y) = I_R(Y,X)$. The proof is straightforward:

    $$I_R(X,Y) = \sum_{ab}\frac{P(X=a, Y=b)^2}{P(X=a)\,P(Y=b)} - 1 = \sum_{ba}\frac{P(Y=b, X=a)^2}{P(Y=b)\,P(X=a)} - 1 = I_R(Y,X).$$
  • Upper bound to mutual information: The logarithm function satisfies $\ln(1+x) \le x$, with equality if and only if (iff) $x = 0$. Therefore, setting $x = \frac{P(a,b)}{P(a)P(b)} - 1$ gives:

    $$I(X,Y) = \left\langle\log_B\frac{P(a,b)}{P(a)P(b)}\right\rangle = \frac{1}{\ln B}\left\langle\ln\left[1 + \left(\frac{P(a,b)}{P(a)P(b)} - 1\right)\right]\right\rangle \le \frac{1}{\ln B}\left\langle\frac{P(a,b)}{P(a)P(b)} - 1\right\rangle = \frac{I_R(X,Y)}{\ln B}.$$
    Hence, the rational mutual information satisfies $I_R \ge I\ln B$, with equality iff $I = 0$ (or simply $I_R \ge I$, if we use the natural logarithm base $B = e$).
  • Non-negativity: It follows from the above inequality that $I_R(X,Y) \ge 0$, with equality iff $P(a,b) = P(a)P(b)$, since $I_R = I = 0$ iff $P(a,b) = P(a)P(b)$. Note that this short proof is only possible because of the information inequality $I \ge 0$. From the definition of $I_R$, it is only obvious that $I_R \ge -1$; information theory gives a much tighter bound. Our Findings 1–3 can be summarized as follows:

    $$I_R(X,Y) = I_R(Y,X) \ge I(X,Y) \ge 0,$$

    where both equalities occur iff $p(X,Y) = p(X)p(Y)$. It is impossible for one of the last two relations to hold with equality while the other is strict.
  • Generalization: Note that if we view the mutual information as the divergence between two joint probability distributions, we can generalize the notion of rational mutual information to that of rational divergence:

    $$D_R(p\,\|\,q) = \left\langle\frac{p}{q}\right\rangle - 1,$$

    where the expectation value is taken with respect to the “true” probability distribution $p$. This is a special case of what is known in the literature as $\alpha$-divergence [54]. The $\alpha$-divergence is itself a special case of the so-called $f$-divergences [55,56,57]:

    $$D_f(p\,\|\,q) = \sum_i p_i\, f(q_i/p_i),$$

    where $D_R(p\,\|\,q)$ corresponds to $f(x) = \frac{1}{x} - 1$.
    Note that as it is written, $p$ could be any probability measure on either a discrete or continuous space. The above results can be trivially modified to show that $D_R(p\,\|\,q) \ge D_{KL}(p\,\|\,q)$ and, hence, $D_R(p\,\|\,q) \ge 0$, with equality iff $p = q$; a quick numeric sanity check follows below.
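As a quick numeric sanity check (ours, not part of the original derivation) of the chain $D_R \ge D_{KL} \ge 0$:

```python
import numpy as np

rng = np.random.default_rng(6)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

D_KL = np.sum(p * np.log(p / q))      # Kullback-Leibler divergence (nats)
D_R = np.sum(p * (p / q)) - 1.0       # rational divergence <p/q>_p - 1

print(D_R >= D_KL >= 0.0)             # True
```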

Appendix B. General Proof for Markov Processes

In this Appendix, we drop the assumptions of non-degeneracy, irreducibility and aperiodicity made in the main body of the paper, where we proved that Markov processes lead to exponential decay.

Appendix B.1. The Degenerate Case

First, we consider the case where the Markov matrix $M$ has degenerate eigenvalues. In this case, we cannot guarantee that $M$ can be diagonalized. However, any complex matrix can be put into Jordan normal form. In Jordan normal form, a matrix is block diagonal, with each $d \times d$ block corresponding to an eigenvalue with degeneracy $d$. These blocks have a particularly simple form, with block $i$ having $\lambda_i$ on the diagonal and ones immediately above the diagonal. For example, if there are only three distinct eigenvalues and $\lambda_2$ is three-fold degenerate, the Jordan form of $M$ would be:

$$B^{-1}MB = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2 & 1 & 0 & 0 \\ 0 & 0 & \lambda_2 & 1 & 0 \\ 0 & 0 & 0 & \lambda_2 & 0 \\ 0 & 0 & 0 & 0 & \lambda_3 \end{pmatrix}.$$
Note that the largest eigenvalue is unique and equal to one for all irreducible and aperiodic $M$. In this example, the matrix power $M^\tau$ is:

$$B^{-1}M^\tau B = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & \binom{\tau}{2}\lambda_2^{\tau-2} & 0 \\ 0 & 0 & \lambda_2^\tau & \binom{\tau}{1}\lambda_2^{\tau-1} & 0 \\ 0 & 0 & 0 & \lambda_2^\tau & 0 \\ 0 & 0 & 0 & 0 & \lambda_3^\tau \end{pmatrix}.$$
In the general case, raising a matrix to an arbitrary power will yield a matrix that is still block diagonal, with each block being an upper triangular matrix. The important point is that in block $i$, every entry scales as $\lambda_i^\tau$, up to a combinatorial factor. Each combinatorial factor grows only polynomially with $\tau$, with the degree of the polynomials in the $i$-th block bounded by the multiplicity of $\lambda_i$ minus one.
Using this Jordan decomposition, we can replicate Equation (7) and write:

$$(M^\tau)_{ij} = \mu_i + \lambda_2^\tau A_{ij}.$$
There are two cases, depending on whether the second eigenvalue $\lambda_2$ is degenerate or not. If not, then the equation:

$$\lim_{\tau\to\infty} A_{ij} = B_{i2}(B^{-1})_{2j}$$
still holds, since for $i \ge 3$, $(\lambda_i/\lambda_2)^\tau$ decays faster than any polynomial of finite degree. On the other hand, if the second eigenvalue is degenerate with multiplicity $m_2$, we instead define $A$ with the combinatorial factor removed:

$$(M^\tau)_{ij} = \mu_i + \tau^{m_2-1}\lambda_2^\tau A_{ij}.$$
If $m_2 = 1$, this definition simply reduces to the previous definition of $A$. With this definition,

$$\lim_{\tau\to\infty} A_{ij} = \frac{\lambda_2^{-(m_2-1)}}{(m_2-1)!}\, B_{i2}\,(B^{-1})_{(m_2+1)j}.$$
Hence, in the most general case, the mutual information decays like a polynomial times an exponential, $P(\tau)\,e^{-\gamma\tau}$, where $\gamma = 2\ln\frac{1}{|\lambda_2|}$. The polynomial is non-constant if and only if the second largest eigenvalue is degenerate. Note that even in this case, the mutual information decays exponentially in the sense that it is possible to bound it by a decaying exponential.

Appendix B.2. The Reducible Case

Now, let us generalize to the case where the Markov process is reducible. A general Markov state space can be partitioned into $m$ subsets,

$$S = \bigcup_{i=1}^m S_i,$$

where elements of the same subset communicate with each other: for any $i, j \in S_k$, it is possible to transition from $i$ to $j$ and from $j$ to $i$.
In general, the set of partitions will be a finite directed acyclic graph (DAG), where the arrows of the DAG are inherited from the Markov chain. Since the DAG is finite, after some finite amount of time, almost all of the probability will be concentrated in the “final” partitions that have no outgoing arrows, and almost no probability will be in the “transient” partitions. Since the statistics of the chain that we are interested in are determined by running the chain for infinite time, they are insensitive to transient behavior, and hence, we can ignore all but the final partitions (the mutual information at fixed separation is still determined by averaging over all (infinite) time steps).
Consider the case where the initial probability distribution only has support on one of the $S_i$. Since states in $S_j \ne S_i$ will never be accessed, the Markov process (with this initial condition) is identical to an irreducible Markov process on $S_i$. Our previous results imply that the mutual information will exponentially decay to zero.
Let us define the random variable $Z = f(X)$, where $f(x) = S_i$ for $x \in S_i$. For a general initial condition, the total probability within each set $S_i$ is independent of time. This means that the entropy $H(Z)$ is independent of time. Using the fact that $H(Z|X) = H(Z|Y) = 0$, one can show that:

$$I(X,Y) = I(X,Y\,|\,Z) + H(Z),$$

where $I(X,Y\,|\,Z) = H(Y|Z) - H(Y|X,Z)$ is the conditional mutual information. Our previous results then imply that the conditional mutual information decays exponentially, whereas the second term $H(Z) \le \log m$ is constant. In the language of statistical physics, this is an example of topological order, which leads to constant terms in the correlation functions; here, the Markov graph of $M$ is disconnected, so there are $m$ degenerate equilibrium states.

Appendix B.3. The Periodic Case

If a Markov process is periodic, one can further decompose each final partition. It is easy to check that the period of each element in a partition must be constant throughout the partition. It follows that each final partition $S_i$ can be decomposed into cyclic classes $S_i^1, S_i^2, \ldots, S_i^d$, where $d$ is the period of the elements in $S_i$. The arguments in the previous section, with $f(x) = S_i^k$ for $x \in S_i^k$, then show that the mutual information again has two terms, one of which exponentially decays, while the other is constant.

Appendix B.4. The n > 1 Case

The preceding proof holds only for Markov processes of order $n = 1$, but we can easily extend the results to arbitrary $n$. Any $n = 2$ Markov process can be converted into an $n = 1$ Markov process on pairs of letters $X_1X_2$. Hence, our proof shows that $I(X_1X_2, Y_1Y_2)$ decays exponentially. However, for any random variables $X, Y$, the data processing inequality [40] states that $I(X, g(Y)) \le I(X,Y)$, where $g$ is an arbitrary function of $Y$. Letting $g(Y_1Y_2) = Y_1$, and then permuting and applying $g(X_1X_2) = X_1$, gives:

$$I(X_1X_2, Y_1Y_2) \ge I(X_1X_2, Y_1) \ge I(X_1, Y_1).$$

Hence, we see that $I(X_1, Y_1)$ must exponentially decay. The preceding remarks can be easily formalized into a proof for an arbitrary Markov process by induction on $n$.

Appendix B.5. The Detailed Balance Case

This asymptotic relation can be strengthened for a subclass of Markov processes that obey a condition known as detailed balance. This subclass arises naturally in the study of statistical physics [58]. For our purposes, this simply means that there exist some real numbers $K_m$ and a symmetric matrix $S_{ab} = S_{ba}$, such that:

$$M_{ab} = e^{-K_a/2}\, S_{ab}\, e^{K_b/2}.$$
Let us note the following facts. (1) The matrix power is simply $(M^\tau)_{ab} = e^{-K_a/2}(S^\tau)_{ab}e^{K_b/2}$. (2) By the spectral theorem, we can diagonalize $S$ into an orthonormal basis of eigenvectors, which we label as $v$ (or sometimes $w$), e.g., $Sv = \lambda_v v$ and $v \cdot w = \delta_{vw}$. Notice that:

$$\sum_n M_{mn}\, e^{-K_n/2} v_n = \sum_n e^{-K_m/2} S_{mn} v_n = \lambda_v\, e^{-K_m/2} v_m.$$

Hence, we have found an eigenvector of $M$ for every eigenvector of $S$. Conversely, the set of eigenvectors of $S$ forms a basis, so there cannot be any more eigenvectors of $M$. This implies that all of the eigenvectors of $M$ are given by $(P_v)_m = e^{-K_m/2} v_m$, with corresponding eigenvalues $\lambda_v$. In other words, $M$ and $S$ share the same eigenvalues.
(3) $\mu_a = \frac{1}{Z}e^{-K_a}$ is an eigenvector with eigenvalue one and, hence, is the stationary state:

$$\sum_b M_{ab}\,\mu_b = \frac{1}{Z}\sum_b e^{-(K_a+K_b)/2}\, S_{ab} = \frac{1}{Z}e^{-K_a}\sum_b e^{-K_b/2}\, S_{ba}\, e^{K_a/2} = \mu_a \sum_b M_{ba} = \mu_a.$$
The previous facts then let us finish the calculation:

$$\left\langle\frac{P(a,b)}{P(a)P(b)}\right\rangle = \sum_{ab}\frac{P(a,b)^2}{P(a)P(b)} = \sum_{ab}\frac{\frac{1}{Z^2}e^{-(K_a+K_b)}\left[(S^\tau)_{ab}\right]^2}{\frac{1}{Z^2}e^{-K_a}\,e^{-K_b}} = \sum_{ab}\left[(S^\tau)_{ab}\right]^2 = \|S^\tau\|^2.$$
Now, using the fact that $\|A\|^2 = \mathrm{tr}\,A^TA$ is invariant under an orthogonal change of basis, we find that:

$$\left\langle\frac{P(a,b)}{P(a)P(b)}\right\rangle = \sum_i |\lambda_i|^{2\tau}.$$
Since the $\lambda_i$'s are the eigenvalues of both $M$ and $S$, and since $M$ is irreducible and aperiodic, there is exactly one eigenvalue $\lambda_1 = 1$, and all other eigenvalues have magnitude less than one. Altogether,

$$I_R(t_1, t_2) = \left\langle\frac{P(a,b)}{P(a)P(b)}\right\rangle - 1 = \sum_{i\ge 2} |\lambda_i|^{2\tau}.$$
Hence, one can easily estimate the asymptotic behavior of the mutual information if one has knowledge of the spectrum of $M$. We see that the mutual information decays exponentially, with a decay timescale set by the second largest eigenvalue $\lambda_2$:

$$\tau_{\mathrm{decay}}^{-1} = 2\log\frac{1}{|\lambda_2|}.$$
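This exact relation is easy to verify numerically. The sketch below (ours) constructs a reversible chain via the standard Metropolis rule, which satisfies detailed balance by construction, and checks that $I_R(\tau) = \sum_{i\ge 2}|\lambda_i|^{2\tau}$ holds to machine precision.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
mu = rng.random(n); mu /= mu.sum()       # target stationary distribution

# Metropolis chain with uniform proposals; M_ab = P(next = a | current = b).
# Detailed balance holds: M_ab mu_b = M_ba mu_a.
M = np.zeros((n, n))
for b in range(n):
    for a in range(n):
        if a != b:
            M[a, b] = min(1.0, mu[a] / mu[b]) / n
    M[b, b] = 1.0 - M[:, b].sum()        # leftover probability of staying put

S = M * np.sqrt(mu)[None, :] / np.sqrt(mu)[:, None]   # symmetric by reversibility
lams = np.sort(np.abs(np.linalg.eigvalsh(S)))[::-1]   # lams[0] = 1

def I_R(tau):
    Mt = np.linalg.matrix_power(M, tau)
    return float(np.sum(Mt**2 * (mu[None, :] / mu[:, None])) - 1.0)

for tau in (1, 3, 5, 10):
    print(I_R(tau), np.sum(lams[1:]**(2 * tau)))      # the two columns agree
```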

Appendix B.6. Hidden Markov Model

In this subsection, we generalize our findings to hidden Markov models and present a proof of Theorem 2. Based on the considerations in the main body of the text, the joint probability distribution between two visible states $X_{t_1}, X_{t_2}$ is given by:

$$P(a,b) = \sum_{cd} G_{bd}\left[(M^\tau)_{dc}\,\mu_c\right]G_{ac},$$

where the term in brackets would have been the answer in an ordinary Markov model, and the two new factors of $G$ are the result of the generalization. Note that as before, $\boldsymbol{\mu}$ is the stationary state corresponding to $M$. We will only consider the typical case where $M$ is aperiodic, irreducible and non-degenerate; once we have this case, the other cases can be easily treated by mimicking our above proof for ordinary Markov processes. Using Equation (7) and defining $g = G\mu$ gives:
$$P(a,b) = \sum_{cd} G_{bd}\,(M^\tau)_{dc}\,\mu_c\, G_{ac} = g_a g_b + \lambda_2^\tau \sum_{cd} G_{bd}\, A_{dc}\,\mu_c\, G_{ac}.$$
Plugging this into our definition of rational mutual information gives:

$$I_R + 1 = \sum_{ab}\frac{P(a,b)^2}{g_a g_b} = \sum_{ab}\frac{\left[g_ag_b + \lambda_2^\tau\sum_{cd}G_{bd}A_{dc}\mu_cG_{ac}\right]^2}{g_ag_b} = 1 + 2\lambda_2^\tau\sum_{cd}A_{dc}\mu_c + \lambda_2^{2\tau}C = 1 + \lambda_2^{2\tau}C,$$

where we have used the facts that $\sum_i G_{ij} = 1$ and $\sum_i A_{ij} = 0$, and, as before, $C$ is asymptotically constant. This shows that $I_R \approx C\lambda_2^{2\tau}$ decays exponentially.

Appendix C. Power Laws for Generative Grammars

In this Appendix, we prove that the rational mutual information decays like a power law for a sub-class of generative grammars. We proceed by mimicking the strategy employed in the previous Appendix. Let $G$ be the linear operator associated with the matrix $P(b|a)$, the probability that a node takes the value $b$ given that its parent node has the value $a$. We will assume that $G$ is irreducible and aperiodic, with no degeneracies. From the above discussion, we see that removing the degeneracy assumption does not qualitatively change things; one simply replaces the procedure of diagonalizing $G$ with putting $G$ in Jordan normal form.
Let us start with the weakly-correlated case. In this case,
$$P(a,b) = \sum_r \mu_r\,(G^{\Delta/2})_{ar}\,(G^{\Delta/2})_{br},$$
since, as we have discussed in the main text, the parent node has the stationary distribution $\boldsymbol{\mu}$ and $G^{\Delta/2}$ gives the conditional probabilities for transitioning from the parent node to the nodes at the bottom of the tree in which we are interested. We now employ our favorite trick of diagonalizing $G$ and then writing:
$$(G^{\Delta/2})_{ij} = \mu_i + \lambda_2^{\Delta/2} A_{ij},$$
which gives:
$$P(a,b) = \sum_r \mu_r \left(\mu_a + \lambda_2^{\Delta/2} A_{ar}\right)\left(\mu_b + \lambda_2^{\Delta/2} A_{br}\right) = \sum_r \mu_r \left(\mu_a \mu_b + \mu_a \epsilon A_{br} + \mu_b \epsilon A_{ar} + \epsilon^2 A_{ar} A_{br}\right) ,$$
where we have defined $\epsilon = \lambda_2^{\Delta/2}$. Now, note that $\sum_r A_{ar}\, \mu_r = 0$, since $\mu$ is an eigenvector of $G^{\Delta/2}$ with eigenvalue one. Hence, this simplifies the above to just:
$$P(a,b) = \mu_a \mu_b + \epsilon^2 \sum_r \mu_r A_{ar} A_{br} .$$
The definition of rational mutual information, together with the fact that $\sum_i A_{ij} = 0$, gives:
$$I_R + 1 = \sum_{ab} \frac{\left(\mu_a \mu_b + \epsilon^2 \sum_r \mu_r A_{ar} A_{br}\right)^2}{\mu_a \mu_b} = \sum_{ab} \left(\mu_a \mu_b + \epsilon^4 N_{ab}^2\right) = 1 + \epsilon^4\, ||N||^2 ,$$
where $N_{ab} \equiv (\mu_a \mu_b)^{-1/2} \sum_r \mu_r A_{ar} A_{br}$ is a symmetric matrix and $||\cdot||$ denotes the Frobenius norm. Hence:
$$I_R = \lambda_2^{2\Delta}\, ||N||^2 .$$
Let us now generalize to the strongly correlated case. As discussed in the text, the joint probability is modified to:
$$P(a,b) = \sum_{rs} Q_{rs} \left(G^{\Delta/2-1}\right)_{ar} \left(G^{\Delta/2-1}\right)_{bs} ,$$
where Q is some symmetric matrix that satisfies $\sum_r Q_{rs} = \mu_s$. We now employ our favorite trick of diagonalizing G and then writing:
$$\left(G^{\Delta/2-1}\right)_{ij} = \mu_i + \epsilon A_{ij} ,$$
where $\epsilon \equiv \lambda_2^{\Delta/2-1}$. This gives:
$$\begin{aligned} P(a,b) &= \sum_{rs} Q_{rs} \left(\mu_a + \epsilon A_{ar}\right)\left(\mu_b + \epsilon A_{bs}\right) \\ &= \mu_a \mu_b + \sum_{rs} Q_{rs} \left(\mu_a \epsilon A_{bs} + \mu_b \epsilon A_{ar} + \epsilon^2 A_{ar} A_{bs}\right) \\ &= \mu_a \mu_b + \sum_s \mu_a \epsilon A_{bs}\, \mu_s + \sum_r \mu_b \epsilon A_{ar}\, \mu_r + \epsilon^2 \sum_{rs} Q_{rs} A_{ar} A_{bs} \\ &= \mu_a \mu_b + \epsilon^2 \sum_{rs} Q_{rs} A_{ar} A_{bs} . \end{aligned}$$
Now, defining the symmetric matrices $R_{ab} \equiv \sum_{rs} Q_{rs} A_{ar} A_{bs}$ and $N_{ab} \equiv (\mu_a \mu_b)^{-1/2} R_{ab}$ and noting that $\sum_a R_{ab} = 0$, we have:
$$I_R + 1 = \sum_{ab} \frac{\left(\mu_a \mu_b + \epsilon^2 R_{ab}\right)^2}{\mu_a \mu_b} = \sum_{ab} \left(\mu_a \mu_b + \epsilon^4 N_{ab}^2\right) = 1 + \epsilon^4\, ||N||^2 ,$$
which gives:
$$I_R = \lambda_2^{2\Delta - 4}\, ||N||^2 .$$
In either the strongly- or the weakly-correlated case, note that N is asymptotically constant. Writing the second largest eigenvalue as $|\lambda_2|^2 = q^{-k_2/2}$, where q is the branching factor, we obtain:
$$I_R \propto q^{-\Delta k_2/2} \sim q^{-k_2 \log_q |i-j|} = \frac{C}{|i-j|^{k_2}} .$$
Behold the glorious power law! We note that the normalization C must be a function of the form $C = m_2\, f(\lambda_2, q)$, where $m_2$ is the multiplicity of the eigenvalue $\lambda_2$. We evaluate this normalization in the next section.
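To see the power law emerge numerically, here is a minimal simulation sketch (ours) of the $q = 2$ branching process; the copy probability $p = 0.9$ mirrors the simulation described in Appendix C.1 below, and the binary-sequence estimator for $I_R$ is the one derived in Appendix D:

```python
# Sketch: power law decay of I_R in a q = 2 branching grammar. Each symbol
# spawns two children, each copying its parent with probability p = 0.9 and
# flipping otherwise, i.e., G = [[0.9, 0.1], [0.1, 0.9]].
import numpy as np

rng = np.random.default_rng(0)
p, depth = 0.9, 18

x = rng.integers(0, 2, size=1)              # root symbol
for _ in range(depth):                      # each level doubles the string
    x = np.repeat(x, 2)
    flip = rng.random(x.size) >= p
    x = np.where(flip, 1 - x, x)

for d in [1, 4, 16, 64, 256, 1024]:
    a, b = x[:-d], x[d:]
    rho = np.mean(a * b) - np.mean(a) * np.mean(b)      # covariance at lag d
    IR = (rho / (np.mean(x) * (1 - np.mean(x)))) ** 2   # binary I_R (Appendix D)
    print(d, IR)                            # roughly C / d^{k_2}, k_2 = -4 log2(0.8)
```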
As before, this result can be sharpened if we assume that G satisfies detailed balance, $G_{mn} = e^{-K_m/2}\, S_{mn}\, e^{K_n/2}$, where S is a symmetric matrix and the $K_n$ are just numbers. Let us only consider the weakly-correlated case. By the spectral theorem, we can diagonalize S into an orthonormal basis of eigenvectors v. As before, G and S share the same eigenvalues. Proceeding,
$$P(a,b) = \frac{1}{Z} \sum_v \lambda_v^\Delta\, v_a v_b\, e^{-(K_a + K_b)/2} ,$$
where Z is a constant that ensures that P is properly normalized. Let us move full steam ahead to compute the rational mutual information:
$$\sum_{ab} \frac{P(a,b)^2}{P(a)P(b)} = \sum_{ab} Z^2\, e^{K_a + K_b} \left(\frac{1}{Z}\sum_v \lambda_v^\Delta\, v_a v_b\, e^{-(K_a+K_b)/2}\right)^2 = \sum_{ab} \left(\sum_v \lambda_v^\Delta\, v_a v_b\right)^2 .$$
This is just the squared Frobenius norm of the symmetric matrix $H_{ab} \equiv \sum_v \lambda_v^\Delta\, v_a v_b$! The eigenvalues of this matrix, namely $\lambda_v^\Delta$, can be read off directly, so we have:
$$I_R(a,b) = \sum_{i \ge 2} |\lambda_i|^{2\Delta} .$$
Hence, we have computed the rational mutual information exactly as a function of $\Delta$. In the next section, we use this result to compute the mutual information as a function of separation $|i-j|$, which will lead to a precise evaluation of the normalization constant C in the equation:
$$I(a,b) \approx \frac{C}{|i-j|^{k_2}} .$$
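For a symmetric binary alphabet, this exact formula is easy to evaluate: $G = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix}$ satisfies detailed balance with constant K (so S = G), giving $\lambda_2 = 2p - 1$ and $I_R(\Delta) = \lambda_2^{2\Delta}$. A small sketch of ours, using the illustrative value $p = 0.9$:

```python
# Sketch: exact I_R(Delta) for a symmetric binary grammar, plus the
# corresponding power law exponent k_2 (illustrative values p = 0.9, q = 2).
import numpy as np

p, q = 0.9, 2
lam2 = 2 * p - 1                            # second eigenvalue of G
k2 = -4 * np.log(lam2) / np.log(q)          # from |lambda_2|^2 = q^(-k_2 / 2)
print("k2 =", k2)                           # ~1.29 for p = 0.9

for Delta in [2, 4, 8, 16]:
    print(Delta, lam2 ** (2 * Delta))       # I_R(Delta) = sum_{i>=2} |lambda_i|^{2 Delta}
```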

Appendix C.1. Detailed Evaluation of the Normalization

For simplicity, we specialize to the case $q = 2$, although our results can surely be extended to $q > 2$. Define $\delta = \Delta/2$ and $d = |i - j|$. We wish to compute the expected value of $I_R$ conditioned on knowledge of d. By Bayes' rule, $p(\delta|d) \propto p(d|\delta)\, p(\delta)$. Now, $p(d|\delta)$ is given by a triangle distribution with mean $2^{\delta-1}$ and compact support $(0, 2^\delta)$. On the other hand, $p(\delta) \propto 2^\delta$ for $\delta \le \delta_{\max}$, and $p(\delta) = 0$ for $\delta \le 0$ or $\delta > \delta_{\max}$. This new constant $\delta_{\max}$ serves two purposes. First, it can be thought of as a way to regulate the probability distribution $p(\delta)$ so that it is normalizable; at the end of the calculation, we formally take $\delta_{\max} \to \infty$ without obstruction. Second, if we are interested in empirically sampling the mutual information, we cannot generate an infinite string, so setting $\delta_{\max}$ to a finite value accounts for the fact that our generated string may be finite.
We now assume $d \gg 1$, so that we can swap discrete sums for integrals. We can then compute the conditional expectation value of $2^{-k_2 \delta}$. This yields:
$$\langle I_R \rangle = \int_0^\infty 2^{-k_2 \delta}\, p(\delta|d)\, d\delta = \frac{1 - 2^{-k_2}}{d^{k_2}\, k_2 (k_2 + 1) \log(2)} ,$$
or equivalently,
$$C_{q=2} = \frac{1 - |\lambda_2|^4}{k_2 (k_2 + 1)}\, \frac{1}{\log 2} .$$
It turns out that it is also possible to compute the answer exactly, without approximating the discrete sums by integrals:
$$I_R = 2^{-(k_2+1)\lceil \log_2(d) \rceil}\; \frac{\left(2^{k_2+1} - 1\right) 2^{\lceil \log_2(d) \rceil} - 2d\left(2^{k_2} - 1\right)}{2^{k_2+1} - 1} .$$
The resulting predictions are compared in Figure A1.
Figure A1. Decay of rational mutual information with separation for a binary sequence from a numerical simulation with probabilities $p(0|0) = p(1|1) = 0.9$ and a branching factor $q = 2$. The blue curve is not a fit to the simulated data, but rather an analytic calculation. The smooth power law displayed on the left is what is predicted by our “continuum” approximation. The very small discrepancies (right) are not random, but are fully accounted for by more involved exact calculations with discrete sums.
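A short sketch (ours) of the comparison in Figure A1, evaluating the continuum and exact expressions derived above for a hypothetical exponent $k_2$:

```python
# Sketch: continuum approximation vs. exact discrete-sum result for <I_R>(d),
# with an illustrative exponent k2 = 1.29 (the value for p = 0.9, q = 2).
import numpy as np

k2 = 1.29
d = np.arange(2, 4096, dtype=float)

continuum = (1 - 2.0**-k2) / (k2 * (k2 + 1) * np.log(2)) / d**k2

ds = np.ceil(np.log2(d))                   # delta* = ceil(log2 d)
exact = (2.0**(-(k2 + 1) * ds)
         * ((2.0**(k2 + 1) - 1) * 2.0**ds - 2 * d * (2.0**k2 - 1))
         / (2.0**(k2 + 1) - 1))

print(np.max(np.abs(exact / continuum - 1)))  # small periodic wiggles, as in Figure A1
```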

Appendix D. Estimating (Rational) Mutual Information from Empirical Data

Estimating mutual information or rational mutual information from empirical data is fraught with subtleties.
It is well known that the naive entropy estimate $\hat S = -\sum_{i=1}^K \frac{N_i}{N} \log \frac{N_i}{N}$ is biased, generally underestimating the true entropy from finite samples. We therefore use the estimator advocated by Grassberger [59]:
$$\hat S = \log N - \frac{1}{N} \sum_{i=1}^K N_i\, \psi(N_i) ,$$
where $\psi(x)$ is the digamma function, $N = \sum_i N_i$, and K is the number of characters in the alphabet. The mutual information can then be estimated as $\hat I(X,Y) = \hat S(X) + \hat S(Y) - \hat S(X,Y)$. The variance of this estimator is then the sum of the variances:
$$\mathrm{var}(\hat I) = \mathrm{varEnt}(X) + \mathrm{varEnt}(Y) + \mathrm{varEnt}(X,Y) ,$$
where the varentropy is defined as:
$$\mathrm{varEnt}(X) = \mathrm{var}\left[\log p(X)\right] ,$$
where we can again replace logarithms with the digamma function $\psi$. The uncertainty after N measurements is then $\sqrt{\mathrm{var}(\hat I)/N}$.
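A direct implementation sketch of this estimator (entropies in nats; the function names are ours):

```python
# Sketch of the Grassberger entropy estimator and the derived mutual
# information estimator described above.
import numpy as np
from collections import Counter
from scipy.special import digamma

def entropy_grassberger(counts):
    """S-hat = log N - (1/N) sum_i N_i psi(N_i), in nats."""
    n = np.array(list(counts), dtype=float)
    N = n.sum()
    return np.log(N) - np.dot(n, digamma(n)) / N

def mutual_information(pairs):
    """I-hat(X, Y) = S-hat(X) + S-hat(Y) - S-hat(X, Y) for a list of (x, y)."""
    cx = Counter(x for x, _ in pairs)
    cy = Counter(y for _, y in pairs)
    cxy = Counter(pairs)
    return (entropy_grassberger(cx.values())
            + entropy_grassberger(cy.values())
            - entropy_grassberger(cxy.values()))

# Example: mutual information between symbols separated by d = 5 characters.
s = "abracadabra" * 2000
d = 5
print(mutual_information(list(zip(s, s[d:]))))
```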
To compare our theoretical results with the experiment in Figure 3, we must measure the rational mutual information for a binary sequence from (simulated) data. For a binary sequence with covariance coefficient $\rho(X,Y) = P(1,1) - P(1)^2$, the rational mutual information is:
$$I_R(X,Y) = \left(\frac{\rho(X,Y)}{P(0)\, P(1)}\right)^2 .$$
This was essentially calculated in [60] by considering the limit where the covariance coefficient is small, $\rho \ll 1$ (their paper contains an erroneous factor of two). To estimate the covariance $\rho(d)$ as a function of d (sometimes confusingly referred to as the correlation function), we use the unbiased estimator for a data sequence $\{x_1, x_2, \ldots, x_n\}$:
$$\hat\rho(d) = \frac{1}{n - d - 1} \sum_{i=1}^{n-d} \left(x_i - \bar x\right)\left(x_{i+d} - \bar x\right) .$$
However, it is important to note that estimating the covariance function $\rho$ by averaging and then squaring will generically yield a biased estimate; we circumvent this by simply estimating $I_R(X,Y)^{1/2} \propto \rho(X,Y)$ and squaring only at the end.
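A sketch of this procedure (the helper name is ours):

```python
# Sketch: estimate rho-hat(d) first with the unbiased estimator above, and
# square only at the very end to avoid biasing I_R.
import numpy as np

def rational_mi_binary(x, d):
    """I_R at lag d for a 0/1 sequence x, via I_R^(1/2) = rho / (P0 P1)."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    rho = np.dot(x[:-d] - xbar, x[d:] - xbar) / (len(x) - d - 1)
    return (rho / (xbar * (1 - xbar))) ** 2

# Example: a sticky two-state binary sequence (illustrative only).
rng = np.random.default_rng(1)
flips = rng.random(10_000) < 0.1
x = np.cumsum(flips) % 2                    # flips state with probability 0.1
for d in [1, 2, 4, 8]:
    print(d, rational_mi_binary(x, d))      # decays like (1 - 2 * 0.1)^(2 d)
```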

References

1. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett. 1987, 59, 381–384.
2. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality. Phys. Rev. A 1988, 38, 364.
3. Linkenkaer-Hansen, K.; Nikouline, V.V.; Palva, J.M.; Ilmoniemi, R.J. Long-Range Temporal Correlations and Scaling Behavior in Human Brain Oscillations. J. Neurosci. 2001, 21, 1370–1377.
4. Levitin, D.J.; Chordia, P.; Menon, V. Musical rhythm spectra from Bach to Joplin obey a 1/f power law. Proc. Natl. Acad. Sci. USA 2012, 109, 3716–3720.
5. Tegmark, M. Consciousness as a State of Matter. arXiv 2014.
6. Manaris, B.; Romero, J.; Machado, P.; Krehbiel, D.; Hirzel, T.; Pharr, W.; Davis, R.B. Zipf’s law, music classification, and aesthetics. Comput. Music J. 2005, 29, 55–69.
7. Peng, C.K.; Buldyrev, S.V.; Goldberger, A.; Havlin, S.; Sciortino, F.; Simons, M.; Stanley, H.E. Long-range correlations in nucleotide sequences. Nature 1992, 356, 168–170.
8. Mantegna, R.N.; Buldyrev, S.V.; Goldberger, A.L.; Havlin, S.; Peng, C.K.; Simons, M.; Stanley, H.E. Linguistic features of noncoding DNA sequences. Phys. Rev. Lett. 1994, 73, 3169–3172.
9. Ebeling, W.; Pöschel, T. Entropy and Long-Range Correlations in Literary English. EPL (Europhys. Lett.) 1994, 26, 241–246.
10. Ebeling, W.; Neiman, A. Long-range correlations between letters and sentences in texts. Phys. A Stat. Mech. Appl. 1995, 215, 233–241.
11. Altmann, E.G.; Cristadoro, G.; Degli Esposti, M. On the origin of long-range correlations in texts. Proc. Natl. Acad. Sci. USA 2012, 109, 11582–11587.
12. Montemurro, M.A.; Pury, P.A. Long-range fractal correlations in literary corpora. Fractals 2002, 10, 451–461.
13. Deco, G.; Schürmann, B. Information Dynamics: Foundations and Applications; Springer: New York, NY, USA, 2012.
14. Zipf, G.K. Human Behavior and the Principle of Least Effort; Addison-Wesley Press: Boston, MA, USA, 1949.
15. Lin, H.W.; Loeb, A. Zipf’s law from scale-free geometry. Phys. Rev. E 2016, 93, 032306.
16. Pietronero, L.; Tosatti, E.; Tosatti, V.; Vespignani, A. Explaining the uneven distribution of numbers in nature: The laws of Benford and Zipf. Phys. A Stat. Mech. Appl. 2001, 293, 297–304.
17. Kardar, M. Statistical Physics of Fields; Cambridge University Press: Cambridge, UK, 2007.
18. Homo Sapiens Genome. Available online: ftp://ftp.ncbi.nih.gov/genomes/Homo_sapiens/ (accessed on 15 June 2016).
19. MIDI Files: Sonatas and Partitas for Solo Violin. Available online: http://www.jsbach.net/midi/midi_solo_violin.html (accessed on 15 June 2016).
20. 50,000 Euro Prize for Compressing Human Knowledge. Available online: http://prize.hutter1.net/ (accessed on 15 June 2016).
21. Corpatext 1.02. Available online: http://www.lexique.org/public/lisezmoi.corpatext.htm (accessed on 15 June 2016).
22. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460.
23. Ferrucci, D.; Brown, E.; Chu-Carroll, J.; Fan, J.; Gondek, D.; Kalyanpur, A.A.; Lally, A.; Murdock, J.W.; Nyberg, E.; Prager, J.; et al. Building Watson: An overview of the DeepQA project. AI Mag. 2010, 31, 59–79.
24. Campbell, M.; Hoane, A.J.; Hsu, F.H. Deep Blue. Artif. Intell. 2002, 134, 57–83.
25. Mnih, V.; Kavukcuoglu, K.; Silver, D.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533.
26. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
27. Chomsky, N. On certain formal properties of grammars. Inf. Control 1959, 2, 137–167.
28. Kim, Y.; Jernite, Y.; Sontag, D.; Rush, A.M. Character-Aware Neural Language Models. arXiv 2015.
29. Graves, A. Generating Sequences with Recurrent Neural Networks. arXiv 2013.
30. Graves, A.; Mohamed, A.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649.
31. Collobert, R.; Weston, J. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, 5–9 July 2008; pp. 160–167.
32. Van den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016.
33. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
34. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
35. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
36. Shieber, S.M. Evidence against the context-freeness of natural language. In The Formal Complexity of Natural Language; Springer: Dordrecht, The Netherlands, 1985; pp. 320–334.
37. Anisimov, A.V. Group languages. Cybern. Syst. Anal. 1971, 7, 594–601.
38. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
39. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
40. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
41. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286.
42. Carrasco, R.C.; Oncina, J. Learning stochastic regular grammars by means of a state merging method. In Proceedings of the Second International Colloquium on Grammatical Inference and Applications (ICGI’94), Alicante, Spain, 21–23 September 1994; pp. 139–152.
43. Ginsburg, S. The Mathematical Theory of Context-Free Languages; McGraw-Hill Book Company: New York, NY, USA, 1966.
44. Booth, T.L. Probabilistic representation of formal languages. In Proceedings of the 1969 IEEE Conference Record of the 10th Annual Symposium on Switching and Automata Theory, Waterloo, ON, Canada, 15–17 October 1969; pp. 74–81.
45. Huang, T.; Fu, K. On stochastic context-free languages. Inf. Sci. 1971, 3, 201–224.
46. Lari, K.; Young, S.J. The estimation of stochastic context-free grammars using the inside-outside algorithm. Comput. Speech Lang. 1990, 4, 35–56.
47. Harlow, D.; Shenker, S.H.; Stanford, D.; Susskind, L. Tree-like structure of eternal inflation: A solvable model. Phys. Rev. D 2012, 85, 063516.
48. Van Hove, L. Sur l’intégrale de configuration pour les systèmes de particules à une dimension. Physica 1950, 16, 137–143.
49. Cuesta, J.A.; Sánchez, A. General Non-Existence Theorem for Phase Transitions in One-Dimensional Systems with Short Range Interactions, and Physical Examples of Such Transitions. J. Stat. Phys. 2004, 115, 869–893.
50. Evenbly, G.; Vidal, G. Tensor network states and geometry. J. Stat. Phys. 2011, 145, 891–918.
51. Saxe, A.M.; McClelland, J.L.; Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv 2013.
52. Mahoney, M. Large Text Compression Benchmark. Available online: http://mattmahoney.net/dc/text.html (accessed on 23 June 2017).
53. Karpathy, A.; Johnson, J.; Fei-Fei, L. Visualizing and Understanding Recurrent Networks. arXiv 2015.
54. Amari, S.I. α-Divergence and α-Projection in Statistical Manifold. In Differential-Geometrical Methods in Statistics; Springer: New York, NY, USA, 1985; pp. 66–103.
55. Morimoto, T. Markov processes and the H-theorem. J. Phys. Soc. Jpn. 1963, 18, 328–331.
56. Csiszár, I. Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318.
57. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. R. Stat. Soc. Ser. B (Methodol.) 1966, 28, 131–142.
58. Gardiner, C.W. Handbook of Stochastic Methods; Springer: Berlin, Germany, 1985; Volume 3.
59. Grassberger, P. Entropy Estimates from Insufficient Samplings. arXiv 2003.
60. Li, W. Mutual information functions versus correlation functions. J. Stat. Phys. 1990, 60, 823–837.
Figure 1. Decay of mutual information with separation. Here, the mutual information in bits per symbol is shown as a function of separation $d(X,Y) = |i - j|$, where the symbols X and Y are located at positions i and j in the sequence in question, and shaded bands correspond to $1\sigma$ error bars. The statistics were computed with a sliding window, using the estimator for the mutual information detailed in Appendix D. All measured curves are seen to decay roughly as power laws, explaining why they cannot be accurately modeled as Markov processes, for which the mutual information instead plummets exponentially (the example shown has $I \propto e^{-d/6}$). The measured curves are seen to be qualitatively similar to that of a famous critical system in physics: a 1D slice through a critical 2D Ising model, where the slope is $-1/2$. The human genome data consist of 177,696,512 base pairs {A, C, T, G} from chromosome 5 from the National Center for Biotechnology Information [18], with unknown base pairs omitted. The Bach data consist of 5727 notes from Partita No. 2 [19], with all notes mapped into a 12-symbol alphabet consisting of the 12 half-tones {C, C#, D, D#, E, F, F#, G, G#, A, A#, B}, with all timing, volume and octave information discarded. The three text corpora are 100 MB from Wikipedia [20] (206 symbols), the first 114 MB of a French corpus [21] (185 symbols) and 27 MB of English articles from slate.com (143 symbols). The large long-range information appears to be dominated by poems in the French sample and by html-like syntax in the Wikipedia sample.
Figure 2. Both a traditional Markov process (top) and our recursive generative grammar process (bottom) can be represented as Bayesian networks, where the random variable at each node depends only on the node pointing to it with an arrow. The numbers show the geodesic distance $\Delta$ to the leftmost node, defined as the smallest number of edges that must be traversed to get there. Roughly speaking, our results show that for large $\Delta$, the mutual information decays exponentially with $\Delta$ (see Theorems 1 and 2). Since this geodesic distance $\Delta$ grows only logarithmically with the separation in time in a hierarchical generative grammar (the hierarchy creates very efficient shortcuts), the exponential kills the logarithm, and we are left with power law decays of mutual information in such languages.
Figure 3. Diagnosing different models by hallucinating text and then measuring the mutual information as a function of separation. The red line is the mutual information of enwik8, a 100-MB sample of English Wikipedia. In shaded blue is the mutual information of hallucinated Wikipedia from a trained LSTM with three layers of size 256. We plot in solid black the mutual information of a Markov process on single characters, which we compute exactly (this would correspond to the mutual information of hallucinations in the limit where the length of the hallucinations goes to infinity). This curve shows a sharp exponential decay after a distance of $\sim$10, in agreement with our theoretical predictions. We also measured the mutual information for hallucinated text on a Markov process for bigrams, which still underperforms the LSTMs in long-range correlations, despite having $\sim 10^3$ more parameters.
Figure 4. Our deep generative grammar model can be viewed as an idealization of a long short-term memory (LSTM) recurrent neural net, where the “forget weights” drop with depth so that the forget timescales grow exponentially with depth. The graph drawn here is clearly isomorphic to the graph drawn in Figure 2. For each cell, we approximate the usual incremental updating rule by either perfectly remembering the previous state (horizontal arrows) or by ignoring the previous state and determining the cell state by a random rule depending on the node above (vertical arrows).
