1 The Maximum Entropy Principle – overview and a generic example
The Maximum Entropy Principle as conceived in its modern form by Jaynes, cf. [11], [12] and [13], is easy to formulate: “Given a model of probability distributions, choose the distribution with highest entropy.” With this choice you single out the most significant distribution, the least biased one, the one which best represents the “true” distribution. The soundness of this principle in a number of situations is well understood and discussed at length by Jaynes in particular.
The principle is by now well established and has numerous applications in physics, biology, demography, economics etc. For practically all applications, the key example which is taken as point of departure – and often the only example discussed – is that of models prescribed by moment conditions. We refer to Kapur [14] for a large collection of examples as well as a long list of references.
In this section we present models defined by just one moment condition. These special models will later be used to illustrate theoretical points of more technical sections to follow.
Our approach will be based on the introduction of a two-person zero-sum game. The principle which this leads to, called the principle of Game Theoretical Equilibrium, is taken to be even more basic than the Maximum Entropy Principle. In fact, from this principle you are led directly to the Maximum Entropy Principle and, besides, new interesting features emerge naturally by focusing on the interplay between a system and the observer of the system. As such, the new principle is in conformity with views of quantum physics; e.g. we can view the principle of Game Theoretical Equilibrium as one way of expressing, in a precise mathematical way, certain aspects of the notion of complementarity as advocated by Niels Bohr.
To be more specific, let us choose the language of physics and assume that on the set of natural numbers ℕ we are given a function E, the energy function. This function is assumed to be bounded below; typically, E will be non-negative. Further, we specify a certain finite energy level, λ, and take as our model all probability distributions with mean energy λ. We assume that the energy Ei in “state” i ∈ ℕ tends to infinity fast enough that the entropies of distributions in the model remain bounded. In particular, this condition is fulfilled if Ei = ∞ for all i sufficiently large – the corresponding states are then “forbidden states” – and in this case the study reduces to a study of models with finite support.
Once you have accepted the Maximum Entropy Principle, this leads to a search for a maximum entropy distribution in the model. It is then tempting to introduce Lagrange multipliers and to solve the constrained optimization problem you are faced with in the standard manner. In fact, this is what practically all authors do, and we shall briefly indicate this approach.
We want to maximize the entropy
H(P) = −∑i∈ℕ pi ln pi
subject to the moment condition
∑i∈ℕ pi Ei = λ
and subject to the usual constraints pi ≥ 0; i ∈ ℕ and ∑i∈ℕ pi = 1. Introducing Lagrange multipliers β and μ, we are led to search for a solution for which all partial derivatives of the function
(pi)i∈ℕ ↷ −∑i pi ln pi − β ∑i pi Ei − μ ∑i pi
vanish. This leads to the suggestion that the solution is of the form
pi = Z(β)^(−1) exp(−βEi),  i ∈ ℕ,   (1.1)
for some value of β for which the partition function Z, defined by
Z(β) = ∑i∈ℕ exp(−βEi),   (1.2)
is finite.
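As a concrete illustration of the standard approach just described, the following sketch (a minimal numerical example, not part of the original text) determines β by bisection so that a distribution of the form (1.1) on a small finite alphabet meets a prescribed mean energy. The energy values and the level λ are arbitrary illustrative choices.

```python
import numpy as np

def maxent_single_moment(E, lam, beta_lo=-50.0, beta_hi=50.0, tol=1e-10):
    """Return beta and the distribution of form (1.1) with mean energy lam,
    found by bisection on beta (finite alphabet, illustrative only)."""
    E = np.asarray(E, dtype=float)

    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))      # shift for numerical stability
        p = w / w.sum()                        # p_i = exp(-beta E_i) / Z(beta)
        return p, float(p @ E)

    # the mean energy is decreasing in beta; bisect until it matches lam
    for _ in range(200):
        beta = 0.5 * (beta_lo + beta_hi)
        p, m = mean_energy(beta)
        if abs(m - lam) < tol:
            break
        if m > lam:
            beta_lo = beta
        else:
            beta_hi = beta
    return beta, p

if __name__ == "__main__":
    E = [0.0, 1.0, 2.0, 3.0, 4.0]              # toy energy levels
    beta, p = maxent_single_moment(E, lam=1.5)
    H = float(-(p * np.log(p)).sum())
    print(f"beta = {beta:.4f}, mean energy = {p @ np.asarray(E):.4f}, entropy = {H:.4f}")
```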
The approach is indeed very expedient. But there are difficulties connected with it. Theoretically, we have to elaborate on the method to be absolutely certain that it leads to the solution, even in the finite case when Ei = ∞ for i sufficiently large. Worse than this, in the infinite case there may not be any solution at all. This is connected with the fact that there may be no distribution of the form (1.1) which satisfies the required moment condition. In such cases it is not clear what to do.
Another concern is connected with the observation that the method of Lagrange multipliers is a completely general tool, and this very fact indicates that in any particular situation there may possibly be other ways forward which better reflect the special structure of the problem at hand. Thus one could hope to discover new basic features by appealing to more intrinsic methods.
Finally we note that the method of Lagrange multipliers cannot handle all models of interest. If the model is refined by just adding more moment constraints, this is no great obstacle: the distributions and partition functions that will occur instead of (1.1) and (1.2) will then involve inner products of the form β · Ei, with β and Ei vectors, in place of the simple product βEi. In fact, we shall look into this later. Also other cases can be handled based on the above analysis; e.g. if we specify the geometric mean, this really corresponds to a linear constraint by taking logarithms, and the maximum entropy problem can be solved as above (and leads to interesting distributions in this case, so-called power laws). But for some problems it may be difficult or even impossible to use natural extensions of the standard method or to use suitable transformations which reduce the study to the standard set-up. In such cases, new techniques are required in the search for a maximum entropy distribution. As examples of this difficulty we point to models involving binomial or empirical distributions, cf. [8] and [22].
After presentation of preliminary material, we introduce in Section 3 the basic concepts related to the game we shall study. Then follows a section which quickly leads to familiar key results. The method depends on the information- and game-theoretical point of view. This does not lead to complete clarification. For the proper understanding of certain phenomena, a more thorough theoretical discussion is required, and this is taken up in the remaining sections.
New results are related to so-called entropy loss – situations where a maximum entropy distribution does not exist. In the last section, these types of models are related to Zipf’s law regarding statistical aspects of the semantics of natural languages.
Mathematical justification of all results is provided. Some technical results which we shall need involve special analytical tools regarding Dirichlet series and are delegated to an appendix.
2 Information theoretical preliminaries
Let 𝔸, the alphabet, be a discrete set, either finite or countably infinite, and denote by M̃¹₊(𝔸), respectively M¹₊(𝔸), the set of non-negative measures P on 𝔸 (with the discrete Borel structure) such that P(𝔸) ≤ 1, respectively P(𝔸) = 1. The elements in 𝔸 can be thought of in many ways, e.g. as letters (for purely information theoretical or computer science oriented studies), as pure states (for applications to quantum physics) or as outcomes (for models of probability theory and statistics).
For convenience, 𝔸 will always be taken to be the set ℕ of natural numbers or a finite section thereof, and elements in 𝔸 are typically referred to by indices like i, j, ⋯.
Measures in M¹₊(𝔸) are probability distributions, or just distributions, measures in M̃¹₊(𝔸) are general distributions and measures in M̃¹₊(𝔸) \ M¹₊(𝔸) are incomplete distributions. For P, Q, ⋯ ∈ M̃¹₊(𝔸), the point masses are, typically, denoted by pi, qi, ⋯.
By K̃(𝔸) we denote the set of all mappings κ : 𝔸 → [0; ∞] which satisfy Kraft’s inequality
∑i∈𝔸 exp(−κi) ≤ 1.   (2.3)
Elements in K̃(𝔸) are general codes. The values of a general code κ are denoted κi. The terminology is motivated by the fact that if κ ∈ K̃(𝔸) and if the base for the exponential in (2.3) is 2, then there exists a binary prefix-free code such that the i’th code word consists of approximately κi binary digits.
By K(𝔸) we denote the set of mappings κ : 𝔸 → [0; ∞] which satisfy Kraft’s equality
∑i∈𝔸 exp(−κi) = 1.   (2.4)
This case corresponds to codes without superfluous digits. For further motivation, the reader may wish to consult [23] or standard textbooks such as [3] and [6].
Elements in K(𝔸) are compact codes, for short just codes.
For mathematical convenience, we shall work with exponentials and logarithms to the base e.
For κ ∈ K̃(𝔸) and i ∈ 𝔸, κi is the code length associated with i or, closer to the intended interpretation, we may think of κi as the code length of the code word which we imagine κ associates with i. There is a natural bijective correspondence between M̃¹₊(𝔸) and K̃(𝔸), expressed notationally by writing P ↔ κ or κ ↔ P, and defined by the formulas
κi = −ln pi,  pi = exp(−κi),  i ∈ 𝔸.
Here the values κi = ∞ and pi = 0 correspond to each other. When the above formulas hold, we call (κ, P) a matching pair and we say that κ is adapted to P or that P is the general distribution which matches κ. If 𝒫 ⊆ M¹₊(𝔸) and κ ∈ K(𝔸), we say that κ is 𝒫-adapted if κ is adapted to one of the distributions in 𝒫. Note that the correspondence κ ↔ P also defines a bijection between M¹₊(𝔸) and K(𝔸).
The support of κ is the set of i ∈ 𝔸 with κi < ∞. Thus, with obvious notation, supp(κ) = supp(P) where P is the distribution matching κ and supp(P) is the usual support of P.
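The correspondence between general distributions and general codes can be made concrete in a few lines. The sketch below (illustrative, with natural logarithms as in the text) computes the adapted code of a distribution, checks Kraft’s inequality/equality, and recovers the matching distribution.

```python
import math

def kraft_sum(kappa):
    """Sum of exp(-kappa_i); <= 1 for general codes, = 1 for compact codes."""
    return sum(math.exp(-k) for k in kappa)

def code_adapted_to(p):
    """Matching pair: kappa_i = -ln p_i (kappa_i = +inf when p_i = 0)."""
    return [(-math.log(pi) if pi > 0 else math.inf) for pi in p]

def distribution_matching(kappa):
    """Inverse direction: p_i = exp(-kappa_i)."""
    return [math.exp(-k) for k in kappa]

if __name__ == "__main__":
    P = [0.5, 0.25, 0.125, 0.125]
    kappa = code_adapted_to(P)
    print("kappa =", [round(k, 4) for k in kappa])
    print("Kraft sum of adapted code:", round(kraft_sum(kappa), 10))        # = 1 (compact)
    general = [k + 0.1 for k in kappa]                                      # lengthen every word
    print("Kraft sum of lengthened code:", round(kraft_sum(general), 10))   # < 1 (general, non-compact)
    print("matching distribution recovered:", distribution_matching(kappa))
```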
For expectations – always w.r.t. genuine probability distributions – we use the bracket notation. Thus, for P ∈ M¹₊(𝔸) and f : 𝔸 → [−∞; ∞], we put
⟨f, P⟩ = ∑i∈𝔸 f(i) pi
whenever this is a well-defined extended real number. Mostly, our functions will be non-negative and then ⟨f, P⟩ will of course be a well-defined number in [0; ∞]. In particular this is the case for the average code length, defined for κ ∈ K̃(𝔸) and P ∈ M¹₊(𝔸) by
⟨κ, P⟩ = ∑i∈𝔸 κi pi.
Entropy and divergence are defined as usual, i.e., for P ∈ M¹₊(𝔸), the entropy of P is given by
H(P) = −∑i∈𝔸 pi ln pi
or, equivalently, by H(P) = ⟨κ, P⟩ where κ is the code adapted to P. And for P ∈ M¹₊(𝔸) and Q ∈ M̃¹₊(𝔸) we define the divergence (or relative entropy) between P and Q by
D(P‖Q) = ∑i∈𝔸 pi ln (pi / qi).
Divergence is well defined with 0 ≤ D(P‖Q) ≤ ∞ and D(P‖Q) = 0 if and only if P = Q.
The topological properties which we shall find useful for codes and for distributions do not quite go in parallel. On the coding side we consider the space K̃(𝔸) of all general codes and remark that this space is a metrizable, compact and convex Hausdorff space. This may be seen by embedding K̃(𝔸) in the space [0; ∞]^𝔸 of all functions on 𝔸 taking values in the compact space [0; ∞]. The topology on K̃(𝔸) then is the topology of pointwise convergence. This is the only topology we shall need on K̃(𝔸).
On the distribution side we shall primarily consider probability distributions, but on the corresponding space, M¹₊(𝔸), we find it useful to consider two topologies: the usual, pointwise topology and a certain stronger, non-metrizable topology, the information topology.
As to the usual topology on M¹₊(𝔸), we remind the reader that this is a metrizable topology; indeed it is metrized by total variation, defined by
V(P, Q) = ∑i∈𝔸 |pi − qi|.
We write Pn → P for convergence and cl(𝒫), cl(co(𝒫)), etc. for closure in this topology (the examples show the closure of 𝒫 and of the convex hull of 𝒫, respectively).
As to the information topology – the second topology which we need on the space M¹₊(𝔸) – this can be described as the strongest topology such that, for (Pn)n≥1 ⊆ M¹₊(𝔸) and P ∈ M¹₊(𝔸), limn→∞ D(Pn‖P) = 0 implies that the sequence (Pn)n≥1 converges to P. Convergence in this topology is denoted Pn →_D P. We only need convergence in this topology for sequences, not for generalized sequences or nets. Likewise, we only need sequential closure, and cl_D(𝒫), or what the case may be, denotes sequential closure. Thus cl_D(𝒫) denotes the set of distributions P for which there exists a sequence (Pn)n≥1 of distributions in 𝒫 with Pn →_D P. The necessary and sufficient condition that Pn →_D P holds is that D(Pn‖P) → 0 as n → ∞. We warn the reader that the corresponding statement for nets (generalized sequences) is wrong – only the sufficiency part holds generally. For the purposes of this paper, the reader need only worry about sequences, but it is comforting to know that the sequential notion →_D is indeed a topological notion of convergence. Further details will be in [9].
An important connection between total variation and divergence is expressed by Pinsker’s inequality:
D(P‖Q) ≥ ½ V(P, Q)²,   (2.7)
which shows that convergence in the information topology is stronger than convergence in total variation.
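A quick numerical sanity check of Pinsker’s inequality on randomly drawn distributions (an illustrative sketch only):

```python
import numpy as np

def divergence(p, q):
    """D(P||Q) = sum p_i ln(p_i/q_i), with the convention 0 ln 0 = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def total_variation(p, q):
    return float(np.abs(np.asarray(p, float) - np.asarray(q, float)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(5):
        p = rng.dirichlet(np.ones(6))
        q = rng.dirichlet(np.ones(6))
        assert divergence(p, q) >= 0.5 * total_variation(p, q) ** 2   # Pinsker
    print("Pinsker's inequality D(P||Q) >= V(P,Q)^2 / 2 held on all samples")
```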
The functions of relevance to us, entropy and divergence, have important continuity properties: P ↷ H(P) is lower semi-continuous on M¹₊(𝔸) and (P, Q) ↷ D(P‖Q) is jointly lower semi-continuous on M¹₊(𝔸) × M̃¹₊(𝔸). These continuity properties even hold w.r.t. the usual, pointwise topology. Details may be found in [23].
3 The Code Length Game, introduction
In this section, 𝒫 is a non-empty subset of M¹₊(𝔸), neutrally referred to as the model. In specific applications, other terminology may be more appropriate, e.g. the preparation space or the statistical model. Distributions in 𝒫 are called consistent distributions.
With 𝒫 we associate a two-person zero-sum game, called the Code Length Game over 𝒫. In this game, Player I chooses a consistent distribution, and Player II chooses a general code. The cost function, seen from the point of view of Player II, is the map (P, κ) ↷ ⟨κ, P⟩ given by the average code length:
⟨κ, P⟩ = ∑i∈𝔸 κi pi.
This game was introduced in [20], see also [15], [21], [10], [8] and [22]. Player I may be taken to represent “the system”, “Nature”, “God” or ⋯, whereas Player II represents “the observer”, “the statistician” or ⋯.
We can motivate the game just introduced in various ways. The personification of the two participants in the game is natural as far as Player II is concerned since, in many situations, we can identify ourselves with that person. Also, the objective of Player II appears well motivated. To comment on this in more detail, we first remind the reader that we imagine that with κ ∈ K̃(𝔸) there is associated a real code consisting of binary sequences, and that κ merely tells us what the code lengths of the various code words are.
We can think of a specific code in at least three different ways: as a representation of the letters in 𝔸, as a means for identification of these letters and – the view we find most fruitful – as a strategy for making observations from a source generating letters from 𝔸. The last two views are interrelated. In fact, for the strategy of observation which we have in mind, we use the code to identify the actual outcome by posing a succession of questions, starting with the question “is the first binary digit in the code word corresponding to the outcome a 1?”, then we ask for the second binary digit and so on until it is clear to us which letter is the actual outcome from the source. The number of questions asked is the number of binary digits in the corresponding code word.
The cost function can be interpreted as mean representation time, mean identification time or mean observation time, and it is natural for Player II to attempt to minimize this quantity. The sense in assuming that Player I has the opposite aim, namely to maximize the cost function, is more dubious. The arguments one can suggest to justify this, thereby motivating the zero-sum character of the Code Length Game, are partly natural to game theory in general, partly can be borrowed from Jaynes’ reasoning behind his Maximum Entropy Principle. Without going into lengthy discussions we give some indications: Though we do not seriously imagine that Player I is a “real” person with rational behaviour, such thoughts regarding the fictive Player I reflect back on our own conceptions. With our fictitious assumptions we express our own modelling. If all we know is the model and if, as is natural, all we strive for is minimization of the cost function, we cannot do better than imagining that Player I is a real person behaving rationally in a way which is least favourable to us. Any other assumption would, typically, lead to nonsensical results which would reveal that we actually knew more than first expressed and therefore, as a consequence, we should change the model in order better to reflect our level of knowledge.
To sum up, we have argued that the observer should be allowed freely to choose the means of observation, that codes offer an appropriate technical tool for this purpose and that the choice of a specific code should be dictated by the wish to minimize mean observation time, modelled adequately by the chosen cost function. Further, the more fictitious views regarding Player I and the behaviour of that player really reflect on the adequacy and completeness of our modelling. If our modelling is precise, the assumptions regarding Player I are sensible and general theory of two-person zero-sum games can be expected to lead to relevant and useful results.
The overall principle we shall apply, we call the principle of Game Theoretical Equilibrium. It is obtained from general game theoretical considerations applied to the Code Length Game. No very rigid formulation of this principle is necessary. It simply dictates that in the study of a model, we shall investigate standard game theoretical notions such as equilibrium and optimal strategies.
According to our basic principle, Player I should consider, for each possible strategy P ∈ 𝒫, the infimum of ⟨κ, P⟩ over κ ∈ K̃(𝔸). This corresponds to the optimal response of Player II to the chosen strategy. The infimum in question can easily be identified by appealing to an important identity which we shall use frequently in the following. The identity connects average code length, entropy and divergence and states that
⟨κ, P⟩ = H(P) + D(P‖Q),
valid for any κ ∈ K̃(𝔸) and P ∈ M¹₊(𝔸), with Q the (possibly incomplete) distribution matching κ. The identity is called the linking identity. As D(P‖Q) ≥ 0 with equality if and only if P = Q, an immediate consequence of the linking identity is that entropy can be conceived as minimal average code length:
H(P) = min {⟨κ, P⟩ : κ ∈ K̃(𝔸)}.   (3.9)
The minimum is attained for the code adapted to P and, provided H(P) < ∞, for no other code.
Seen from the point of view of Player I, the optimal performance is therefore achieved by maximizing entropy. The maximum value to strive for is called the maximum entropy value (Hmax-value) and is given by
Hmax(𝒫) = sup {H(P) : P ∈ 𝒫}.
On the side of Player II – the “coding side” – we consider, analogously, for each κ ∈ K̃(𝔸) the associated risk given by
R(κ|𝒫) = sup {⟨κ, P⟩ : P ∈ 𝒫}
and then the minimum risk value (Rmin-value)
Rmin(𝒫) = inf {R(κ|𝒫) : κ ∈ K̃(𝔸)}.
This is the value for Player II to strive for.
We have now looked at each side of the game separately. Combining the two sides, we are led to the usual concepts, well known from the theory of two-person zero-sum games. Thus, the model 𝒫 is in equilibrium if Hmax(𝒫) = Rmin(𝒫) < ∞, and in this case Hmax(𝒫) = Rmin(𝒫) is the value of the game. Note that, as a “sup inf” is bounded by the corresponding “inf sup”, the inequality
Hmax(𝒫) ≤ Rmin(𝒫)   (3.13)
always holds.
The concept of optimal strategies also follows from general considerations. For Player I, an optimal strategy is a consistent distribution with maximal entropy, i.e. a distribution P ∈ 𝒫 with H(P) = Hmax(𝒫). And for Player II, an optimal strategy is a code κ∗ ∈ K̃(𝔸) such that R(κ∗|𝒫) = Rmin(𝒫). Such a code is also called a minimum risk code (Rmin-code).
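For a small finite model the two values Hmax(𝒫) and Rmin(𝒫) can be approximated by brute force, illustrating the equilibrium just defined. The sketch below uses a three-letter alphabet, a model given by one linear constraint, and coarse grid searches; everything about it (energy function, level, grid sizes) is an illustrative assumption.

```python
import numpy as np

def simplex_grid(n):
    """All distributions (i/n, j/n, k/n) on a 3-letter alphabet."""
    pts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            pts.append((i / n, j / n, (n - i - j) / n))
    return np.array(pts)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    E = np.array([0.0, 1.0, 2.0])             # toy energy function
    lam = 0.7                                  # prescribed mean energy
    grid = simplex_grid(200)
    model = grid[np.abs(grid @ E - lam) < 1e-9]        # P with <E,P> = lam
    Hmax = max(entropy(p) for p in model)

    # risk R(kappa | model) for codes kappa = -ln Q, Q ranging over a coarser grid
    codes = simplex_grid(100)
    codes = codes[(codes > 0).all(axis=1)]             # avoid infinite code lengths
    kappas = -np.log(codes)
    risks = (kappas @ model.T).max(axis=1)             # worst-case mean code length
    Rmin = float(risks.min())
    print(f"Hmax ≈ {Hmax:.4f}   Rmin ≈ {Rmin:.4f}   (equal up to grid error)")
```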
4 Cost-stable codes, partition functions and exponential families
The purpose of this section is to establish a certain sufficient condition for equilibrium and to identify the optimal strategies for each of the players in the Code Length Game. This cannot always be done but the simple result presented here already covers most applications. Furthermore, the approach leads to familiar concepts and results. This will enable the reader to judge the merits of the game theoretical method as compared to a more standard approach via the introduction of Lagrange multipliers.
As in the previous section, we consider a model 𝒫 ⊆ M¹₊(𝔸). Let κ∗ ∈ K(𝔸) together with its matching distribution P∗ be given and assume that P∗ is consistent. Then we call κ∗ a Nash equilibrium code for the model 𝒫 if
⟨κ∗, P⟩ ≤ H(P∗)  for all P ∈ 𝒫,
and if H(P∗) < ∞. The terminology is adapted from mathematical economy, cf. e.g. Aubin [2]. The requirement can be written R(κ∗|𝒫) ≤ H(P∗) < ∞. Note that here we insist that a Nash equilibrium code be 𝒫-adapted. This condition will later be relaxed.
Theorem 4.1. Let 𝒫 be a model and assume that there exists a 𝒫-adapted Nash equilibrium code κ∗, say, with matching distribution P∗. Then 𝒫 is in equilibrium and both players have optimal strategies. Indeed, P∗ is the unique optimal strategy for Player I and κ∗ the unique optimal strategy for Player II.
Proof. Since R(κ∗|𝒫) ≤ H(P∗), Rmin(𝒫) ≤ Hmax(𝒫). As the opposite inequality always holds by (3.13), 𝒫 is in equilibrium, the value of the Code Length Game associated with 𝒫 is H(P∗), and κ∗ and P∗ are optimal strategies.
To establish the uniqueness of κ∗, let κ be any code distinct from κ∗, and let P be the distribution matching κ. Then, by the linking identity,
R(κ|𝒫) ≥ ⟨κ, P∗⟩ = H(P∗) + D(P∗‖P) > H(P∗) = Rmin(𝒫),
hence κ is not optimal.
For the uniqueness proof of P∗, let P be a consistent distribution distinct from P∗. Then, again by the linking identity,
H(P) = ⟨κ∗, P⟩ − D(P‖P∗) ≤ H(P∗) − D(P‖P∗) < H(P∗),
and P cannot be optimal. ☐
As we shall see later, the existence of a Nash equilibrium code is, essentially, also necessary for the conclusion of the theorem.¹ This does not remove the difficulty of actually finding the Nash equilibrium code in concrete cases of interest. In many cases it turns out to be helpful to search for codes with stronger properties. A code κ∗ is a cost-stable code for 𝒫 if there exists h < ∞ such that ⟨κ∗, P⟩ = h for all P ∈ 𝒫. Clearly, a cost-stable code with a consistent matching distribution is a Nash equilibrium code. Therefore, we obtain the following corollary from Theorem 4.1:
Corollary 4.2. If κ∗ is a cost-stable code for 𝒫 and if the matching distribution P∗ is consistent, then 𝒫 is in equilibrium and κ∗ and P∗ are the unique optimal strategies pertaining to the Code Length Game.
In order to illustrate the usefulness of this result, consider the case of a model 𝒫 given by finitely many linear constraints, say
𝒫 = {P ∈ M¹₊(𝔸) : ⟨E1, P⟩ = λ1, … , ⟨En, P⟩ = λn},   (4.15)
with E1, . . . , En real-valued functions bounded from below and λ1, . . . , λn real-valued constants.
Let us search for cost-stable codes κ for 𝒫. Clearly, any code of the form
κi = α + β · Ei,  i ∈ 𝔸,   (4.16)
is cost-stable. Here, α denotes a constant, β = (β1, … , βn) and Ei = (E1,i, … , En,i) are vectors, and the dot signifies the scalar product of vectors. For κ defined by (4.16) to define a code we must require that κ ≥ 0 and, more importantly, that Kraft’s equality (2.4) holds. We are thus forced to assume that the partition function evaluated at β, i.e.
Z(β) = ∑i∈𝔸 exp(−β · Ei),
is finite, and that α = ln Z(β). When these conditions are fulfilled, the code κβ defined by
κβ,i = ln Z(β) + β · Ei,  i ∈ 𝔸,   (4.18)
is a cost-stable code with these individual code lengths. The matching distribution Pβ is given by the point probabilities
pβ,i = Z(β)^(−1) exp(−β · Ei),  i ∈ 𝔸.   (4.20)
In most cases where linear models occur in the applications, one will be able to adjust the parameters in β such that Pβ is consistent. By Corollary 4.2, the entropy maximization problem will then be solved. However, not all cases can be settled in this way, as there may not exist a consistent maximum entropy distribution.
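As a concrete illustration of (4.16)–(4.20) in the simplest case of a single energy function, the following sketch builds the code κβ and its matching distribution on a small finite alphabet and checks numerically that ⟨κβ, P⟩ takes the same value for every distribution P satisfying the moment constraint. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
E = np.array([0.0, 1.0, 2.0, 3.0])                  # toy energy function
beta = 0.8
Z = np.exp(-beta * E).sum()                          # partition function
kappa = np.log(Z) + beta * E                         # cost-stable code, form (4.18)
p_beta = np.exp(-beta * E) / Z                       # matching distribution, form (4.20)
lam = float(p_beta @ E)                              # the mean energy it realises

def random_consistent(E, lam):
    """A random distribution with <E,P> = lam, obtained by mixing two random
    distributions that straddle the constraint (illustrative only)."""
    while True:
        a, b = rng.dirichlet(np.ones(len(E))), rng.dirichlet(np.ones(len(E)))
        ma, mb = a @ E, b @ E
        if (ma - lam) * (mb - lam) < 0:              # straddle the level lam
            t = (lam - mb) / (ma - mb)
            return t * a + (1 - t) * b

for _ in range(5):
    P = random_consistent(E, lam)
    print(f"<kappa_beta, P> = {float(kappa @ P):.6f}")   # same value every time
print(f"ln Z + beta*lam   = {np.log(Z) + beta * lam:.6f}")
```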
We have seen that the search for cost-stable codes led us to consider the well-known partition function and also the well-known exponential family consisting of the distributions Pβ with β ranging over all vectors for which Z(β) < ∞.
From our game theoretical point of view, the family of codes κβ with Z(β) < ∞ has at least as striking features as the corresponding family of distributions. We shall therefore focus on both types of objects and shall call the family of matching pairs (κβ, Pβ), with β ranging over the vectors for which Z(β) < ∞, the exponential family associated with the set E1, … , En of functions on 𝔸, or associated with the family of models one can define from these functions by choosing λ1, … , λn and considering 𝒫 given by (4.15).
We stress that the huge literature on exponential families displays other families of discrete distributions than those that can be derived from the above definition. In spite of this we maintain the view that an information theoretical definition in terms of codes (or related objects) is more natural than the usual structural definitions. We shall not pursue this point vigorously here as it will require the consideration of further games than the simple Code Length Game.
In order to further stress the significance of the class of cost-stable codes, we mention a simple continuity result:
Theorem 4.3. If a model 𝒫 has a cost-stable code, the entropy function H is continuous when restricted to 𝒫.
Proof. Assume that ⟨κ∗, P⟩ = h < ∞ for all P ∈ 𝒫. Then H(P) + D(P‖P∗) = h for P ∈ 𝒫, with P∗ the distribution matching κ∗. As the sum of the two lower semi-continuous functions in this identity is a constant function, each of the functions, in particular the entropy function, must be continuous. ☐
As we have already seen, the notion of cost-stable codes is especially well suited to handle models defined by linear constraints. In section 6 we shall take this up in more detail.
The following sections will be more technical and mathematically abstract. This appears necessary in order to give a comprehensive treatment of all basic aspects related to the Code Length Game and to the Maximum Entropy Principle.
5 The Code Length Game, further preparations
In section 3 we introduced a minimum of concepts that enabled us to derive the useful results of section 4. With that behind us as motivation and background material, we are ready to embark on a more thorough investigation which will lead to a clarification of certain obscure points, especially those related to the possibility that a consistent distribution with maximal entropy may not exist. In this section we point out certain results and concepts which will later be useful.
In view of our focus on codes it is natural to look upon divergence in a different way, as redundancy. Suppose given a code κ ∈ K̃(𝔸) and a distribution P ∈ M¹₊(𝔸). We imagine that we use κ to code letters from 𝔸 generated by a “source” and that P is the “true” distribution of the letters. The optimal performance is, according to (3.9), represented by the entropy H(P) whereas the actual performance is represented by the number ⟨κ, P⟩. The difference ⟨κ, P⟩ − H(P) is then taken as the redundancy. This is well defined if H(P) < ∞ and then coincides with D(P‖Q) where Q denotes the distribution matching κ. As D(P‖Q) is always well defined, we use this quantity for our technical definition: the redundancy of κ ∈ K̃(𝔸) against P ∈ M¹₊(𝔸) is denoted D(P‖κ) and defined by
D(P‖κ) = D(P‖Q)  with  Q the distribution matching κ.
Thus D(P‖κ) and D(P‖Q) can be used synonymously and reflect different ways of thinking. Using redundancy rather than divergence, the linking identity takes the following form:
⟨κ, P⟩ = H(P) + D(P‖κ).
We shall often appeal to basic concavity and convexity properties. Clearly, the entropy function is concave as a minimum of affine functions, cf. (3.9). However, we need a more detailed result which also implies strict concavity. The desired result is the following identity:
H(P̄) = ∑ν αν H(Pν) + ∑ν αν D(Pν‖P̄),   (5.22)
where P̄ = ∑ν αν Pν is any finite or countably infinite convex combination of probability distributions. This follows by the linking identity.
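The identity just stated is easily checked numerically for a finite convex combination (an illustrative sketch):

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def D(p, q):
    m = p > 0
    return float((p[m] * np.log(p[m] / q[m])).sum())

rng = np.random.default_rng(3)
Ps = rng.dirichlet(np.ones(6), size=4)        # four distributions P_1..P_4
alpha = rng.dirichlet(np.ones(4))             # convex weights
Pbar = alpha @ Ps                             # the mixture

lhs = H(Pbar)
rhs = sum(a * H(p) for a, p in zip(alpha, Ps)) + sum(a * D(p, Pbar) for a, p in zip(alpha, Ps))
print(f"H(mixture) = {lhs:.8f}   sum a_v H(P_v) + sum a_v D(P_v||Pbar) = {rhs:.8f}")
```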
A closely related identity involves divergence and states that, with notation as above and with Q denoting an arbitrary general distribution,
∑ν αν D(Pν‖Q) = D(P̄‖Q) + ∑ν αν D(Pν‖P̄).   (5.23)
The identity shows that divergence D(·‖Q) is strictly convex. A proof can be found in [23].
For the remainder of the section we consider a model 𝒫 ⊆ M¹₊(𝔸) and the associated Code Length Game.
By supp(𝒫) we denote the support of 𝒫, i.e. the set of i ∈ 𝔸 for which there exists P ∈ 𝒫 with pi > 0. Thus supp(𝒫) = ⋃P∈𝒫 supp(P), the union of the usual supports of all consistent distributions. Often one may restrict attention to models with full support, i.e. to models with supp(𝒫) = 𝔸. However, we shall not make this assumption unless pointed out specifically.
Recall that distributions in 𝒫 are said to be consistent. Often, it is more appropriate to consider distributions in cl_D(𝒫), the sequential closure of 𝒫 in the information topology. These distributions are called essentially consistent distributions. Using these distributions we relax the requirements on a distribution with maximum entropy, previously only considered for consistent distributions. Accordingly, a distribution P∗ is called a maximum entropy distribution (Hmax-distribution) if P∗ is essentially consistent and H(P∗) = Hmax(𝒫). We warn the reader that the usual definition in the literature insists on the requirement of consistency. Nevertheless, we find the relaxed requirement of essential consistency more adequate. For one thing, lower semi-continuity of the entropy function implies that
Hmax(cl_D(𝒫)) = Hmax(cl(𝒫)) = Hmax(𝒫),   (5.25)
and this points to the fact that the models cl_D(𝒫) and cl(𝒫) behave in the same way as 𝒫. This view is further supported by the observation that, for any κ ∈ K̃(𝔸),
R(κ|cl_D(𝒫)) = R(κ|cl(𝒫)) = R(κ|𝒫).   (5.26)
This follows as the map P ↷ ⟨κ, P⟩ is lower semi-continuous and affine. As a consequence,
Rmin(cl_D(𝒫)) = Rmin(cl(𝒫)) = Rmin(𝒫).
It follows that all models 𝒫0 with 𝒫 ⊆ 𝒫0 ⊆ cl(𝒫) behave similarly as far as the Code Length Game is concerned. The reason why we do not relax the requirement on a Hmax-distribution further, from cl_D(𝒫) to cl(𝒫), is firstly that we hold the information topology to be more relevant for our investigations than the usual topology. Secondly, we shall see that the property P∗ ∈ cl_D(𝒫), which is stronger than P∗ ∈ cl(𝒫), can in fact be verified in the situations we have in mind (see Theorem 6.2).
The fact that a consistent Hmax-distribution may not exist leads to further important notions. Firstly, a sequence (Pn)n≥1 of distributions is said to be asymptotically optimal if all the Pn are consistent and if H(Pn) → Hmax(𝒫) for n → ∞. And, secondly, a distribution P∗ is the maximum entropy attractor (Hmax-attractor) if P∗ is essentially consistent and if Pn →_D P∗ for every asymptotically optimal sequence (Pn)n≥1.
As an example, consider the (uninteresting!) model of all deterministic distributions. For this model, the Hmax-attractor does not exist and there is no unique Hmax-distribution. For more sensible models, the Hmax-attractor P∗ will exist, but it may not be a Hmax-distribution, as lower semi-continuity only guarantees the inequality H(P∗) ≤ Hmax(𝒫), not the corresponding equality.
Having by now refined the concepts related to the distribution side of the Code Length Game, we turn to the coding side.
It turns out that we need a localized variant of the risk associated with certain codes. The codes we shall consider are, intuitively, all codes which the observer (Player II) off-hand finds it worthwhile to consider. If P ∈ 𝒫 is the “true” distribution, and the observer knows this, he will choose the code adapted to P in order to minimize the average code length. As Nature (Player I) could from time to time change the choice of P ∈ 𝒫, essentially any strategy in the closure of 𝒫 could be approached. With these remarks in mind, we find it natural for the observer to consider only cl_D(𝒫)-adapted codes in the search for reasonable strategies.
Assume now that the observer decides to choose a cl_D(𝒫)-adapted code κ. Let P ∈ cl_D(𝒫) be the distribution which matches κ. Imagine that the choice of κ is dictated by a strong belief that the true distribution is P or some distribution very close to P (in the information topology!). Then the observer can evaluate the associated risk by calculating the localized risk associated with κ, which is defined by the equation
Rloc(κ|𝒫) = sup lim supn→∞ ⟨κ, Pn⟩,   (5.27)
where the supremum is over the class of all sequences (Pn)n≥1 of consistent distributions which converge in the information topology to P. Note that we insist on a definition which operates with sequences.
Clearly, the normal “global” risk must be at least as large as the localized risk; therefore, for any cl_D(𝒫)-adapted code κ,
Rloc(κ|𝒫) ≤ R(κ|𝒫).   (5.28)
A further and quite important inequality is the following:
Rloc(κ|𝒫) ≤ Hmax(𝒫).   (5.29)
This inequality is easily derived from the defining relation (5.27) by writing ⟨κ, Pn⟩ in the form H(Pn) + D(Pn‖P) with κ ↔ P, noting also that Pn →_D P implies that D(Pn‖P) → 0. We note that, had we allowed nets in the defining relation (5.27), a different and sometimes strictly larger quantity would result and (5.29) would not necessarily hold.
As the last preparatory result, we establish some pretty obvious properties of an eventual optimal strategy for the observer, i.e. of an eventual Rmin-code.
Lemma 5.1. Let 𝒫 with Rmin(𝒫) < ∞ be given. Then the Rmin-code is unique and, if it exists, say R(κ∗|𝒫) = Rmin(𝒫), then κ∗ is compact with supp(κ∗) = supp(𝒫).
Proof. Assume that κ∗ ∈ K̃(𝔸) is a Rmin-code. As R(κ∗|𝒫) < ∞, supp(𝒫) ⊆ supp(κ∗). Then consider an a0 ∈ supp(κ∗) and assume, for the purpose of an indirect proof, that a0 ∈ 𝔸 \ supp(𝒫). Then the code κ obtained from κ∗ by putting κ(a0) = ∞ and keeping all other values fixed is a general non-compact code which is not identically +∞. Therefore, there exists ε > 0 such that κ − ε is a compact code. For any P ∈ 𝒫, we use the fact that a0 ∉ supp(P) to conclude that ⟨κ − ε, P⟩ = ⟨κ∗ − ε, P⟩ = ⟨κ∗, P⟩ − ε, hence R(κ − ε|𝒫) = R(κ∗|𝒫) − ε, contradicting the minimality property of κ∗. Thus we conclude that supp(κ∗) = supp(𝒫). Similarly, it is clear that κ∗ must be compact – since otherwise κ∗ − ε would be more efficient than κ∗ for some ε > 0.
In order to prove uniqueness, assume that both κ1 and κ2 are Rmin-codes for 𝒫. If we assume that κ1 ≠ κ2, then κ1(a) ≠ κ2(a) holds for some a in the common support of the codes κ1 and κ2, and then, by the geometric/arithmetic inequality, we see that κ = ½(κ1 + κ2) is a general non-compact code. For some ε > 0, κ − ε will then also be a code and, as this code is seen to be more efficient than κ1 and κ2, we have arrived at a contradiction. Thus κ1 = κ2, proving the uniqueness assertion. ☐
6 Models in equilibrium
Let . By definition the requirement of equilibrium is one which involves the relationship between both sides of the Code Length Game. The main result of this section shows that the requirement can be expressed in terms involving only one of the sides of the game, either distributions or codes.
Theorem 6.1 (conditions for equilibrium). Let be a model and assume that Hmax(
) < ∞
. Then the following conditions are equivalent:- (i)
is in equilibrium,
- (ii)
Hmax(co ) = Hmax(),
- (iii)
there exists a-adapted code κ∗ such that
Proof. (i) ⇒ (iii): Here we assume that Hmax(𝒫) = Rmin(𝒫). In particular, Rmin(𝒫) < ∞. As the map κ ↷ R(κ|𝒫) is lower semi-continuous on K̃(𝔸) (as the supremum of the maps κ ↷ ⟨κ, P⟩; P ∈ 𝒫), and as K̃(𝔸) is compact, the minimum of κ ↷ R(κ|𝒫) is attained. Thus, there exists κ∗ ∈ K̃(𝔸) such that R(κ∗|𝒫) = Rmin(𝒫). As observed in Lemma 5.1, κ∗ is a compact code and κ∗ is the unique Rmin-code.
For P ∈ 𝒫,
H(P) + D(P‖κ∗) = ⟨κ∗, P⟩ ≤ R(κ∗|𝒫) = Hmax(𝒫).
It follows that D(Pn‖κ∗) → 0 for any asymptotically optimal sequence (Pn)n≥1. In other words, the distribution P∗ which matches κ∗ is the Hmax-attractor of the model.
We can now consider any asymptotically optimal sequence (Pn)n≥1 in order to conclude that
Rloc(κ∗|𝒫) ≥ lim sup ⟨κ∗, Pn⟩ = lim (H(Pn) + D(Pn‖P∗)) = Hmax(𝒫) = R(κ∗|𝒫).
By (5.28), the assertion of (iii) follows.
(iii) ⇒ (ii): Assuming that (iii) holds, we find from (5.25), (3.13) and (5.29) that
Hmax(𝒫) ≤ Hmax(co(𝒫)) ≤ Rmin(co(𝒫)) ≤ R(κ∗|co(𝒫)) = R(κ∗|𝒫) = Rloc(κ∗|𝒫) ≤ Hmax(𝒫),
and the equality of (ii) must hold.
(ii) ⇒ (i): For this part of the proof we fix a specific asymptotically optimal sequence (Pn)n≥1 ⊆ 𝒫. We assume that (ii) holds. For each n and m we observe that, by (5.22) and (2.7), with Q = ½(Pn + Pm),
D(Pn‖Q) + D(Pm‖Q) = 2H(Q) − H(Pn) − H(Pm) ≤ 2Hmax(𝒫) − H(Pn) − H(Pm),
and the right-hand side tends to 0 as n, m → ∞; by Pinsker’s inequality, V(Pn, Q) and V(Pm, Q), and hence also V(Pn, Pm), tend to 0. It follows that (Pn)n≥1 is a Cauchy sequence with respect to total variation, hence there exists P∗ ∈ M¹₊(𝔸) such that Pn → P∗ in total variation.
Let κ∗ be the code adapted to P∗. In order to evaluate R(κ∗|𝒫) we consider any P ∈ 𝒫. For a suitable sequence (εn)n≥1 of positive numbers converging to zero, we consider the sequence (Qn)n≥1 ⊆ co(𝒫) given by
Qn = (1 − εn) Pn + εn P.
By (5.22) we find that
εn D(P‖Qn) ≤ H(Qn) − (1 − εn) H(Pn) − εn H(P) ≤ Hmax(𝒫) − (1 − εn) H(Pn) − εn H(P),
hence
D(P‖Qn) ≤ (Hmax(𝒫) − H(Pn)) / εn + H(Pn) − H(P).
As Qn → P∗ in total variation and Q ↷ D(P‖Q) is lower semi-continuous, we conclude from this that
D(P‖P∗) ≤ lim inf D(P‖Qn).
Choosing the εn’s appropriately, e.g. εn = max{(Hmax(𝒫) − H(Pn))^½, 1/n}, it follows that H(P) + D(P‖P∗) ≤ Hmax(𝒫), i.e. that ⟨κ∗, P⟩ ≤ Hmax(𝒫). As this holds for all P ∈ 𝒫, R(κ∗|𝒫) ≤ Hmax(𝒫) follows. Thus Rmin(𝒫) ≤ Hmax(𝒫), hence equality must hold here, and we have proved that 𝒫 is in equilibrium, as desired. ☐
It is now easy to derive the basic properties which hold for a system in equilibrium.
Theorem 6.2 (models in equilibrium). Assume that 𝒫 is a model in equilibrium. Then the following properties hold:
- (i) There exists a unique Hmax-attractor and for this distribution, say P∗, the inequality
Rmin(𝒫) + D(P∗‖κ) ≤ R(κ|𝒫)   (6.30)
holds for all κ ∈ K̃(𝔸).
- (ii) There exists a unique Rmin-code and for this code, say κ∗, the inequality
H(P) + D(P‖P∗) ≤ Hmax(𝒫)   (6.31)
holds for every P ∈ 𝒫, even for every P ∈ cl(𝒫). The Rmin-code is compact.
Proof. The existence of the Rmin-code was established by the compactness argument at the beginning of the proof of Theorem 6.1. The inequality (6.31) for P ∈ 𝒫 is nothing but an equivalent form of the inequality R(κ∗|𝒫) ≤ Hmax(𝒫), and this inequality immediately implies that the distribution P∗ matching κ∗ is the Hmax-attractor. The extension of the validity of (6.31) to cl(𝒫) follows from (5.26).
To prove (6.30), let (Pn)n≥1 ⊆ 𝒫 be asymptotically optimal. Then, for any κ ∈ K̃(𝔸),
R(κ|𝒫) ≥ lim sup ⟨κ, Pn⟩ = lim sup (H(Pn) + D(Pn‖κ)) ≥ Hmax(𝒫) + D(P∗‖κ),
which is the desired conclusion as Hmax(𝒫) = Rmin(𝒫). ☐
If 𝒫 is a model in equilibrium, we refer to the pair (κ∗, P∗) from Theorem 6.2 as the optimal matching pair. Thus κ∗ denotes the Rmin-code and P∗ the Hmax-attractor.
Combining Theorem 6.2 and Theorem 6.1 we realize that, unless R(κ|𝒫) = ∞ for every κ ∈ K̃(𝔸), there exists a unique Rmin-code. The matching distribution is the Hmax-attractor for the model co(𝒫). We also note that simple examples (even with a two-element alphabet) show that 𝒫 may have a Hmax-attractor without being in equilibrium, and this attractor may be far away from the Hmax-attractor for co(𝒫).
Corollary 6.3. For a model 𝒫 in equilibrium, the Rmin-code and the Hmax-attractor form a matching pair: κ∗ ↔ P∗, and for any matching pair (κ, P) with P ∈ cl(𝒫),
V(P, P∗)² ≤ 2 (R(κ|𝒫) − H(P)).
Proof. Combining (6.30) with (6.31) it follows that, for κ ↔ P with P ∈ cl(𝒫),
D(P‖P∗) + D(P∗‖P) ≤ R(κ|𝒫) − H(P),
and the result follows from Pinsker’s inequality, (2.7). ☐
Corollary 6.3 may help us to judge the approximate position of the Hmax-attractor P∗ even without knowing the value of Hmax(𝒫). Note also that the proof gave the more precise bound J(P, P∗) ≤ R(κ|𝒫) − H(P), with assumptions as in the corollary and with J(·, ·) denoting Jeffreys’ measure of discrimination, cf. [3] or [16].
Corollary 6.4. Assume that the model 𝒫 has a cost-stable code κ∗ and let P∗ be the matching distribution. Then 𝒫 is in equilibrium and has (κ∗, P∗) as optimal matching pair if and only if P∗ is essentially consistent.
Proof. By definition an Hmax-attractor is essentially consistent. Therefore, the necessity of the condition is trivial. For the proof of sufficiency, assume that ⟨κ∗, P⟩ = h for all P ∈ 𝒫, with h a finite constant. Clearly then, Hmax(𝒫) ≤ h. Now, let (Pn)n≥1 be a sequence of consistent distributions with Pn →_D P∗. By the linking identity, H(Pn) + D(Pn‖P∗) = h for all n, and we see that Hmax(𝒫) ≥ h. Thus Hmax(𝒫) = h and (Pn) is asymptotically optimal. By Theorem 6.2, the sequence converges in the information topology to the Hmax-attractor, which must then be P∗. The result follows. ☐
Note that this result is a natural further development of Corollary 4.2.
Corollary 6.5. Assume that 𝒫 is a model in equilibrium. Then all models 𝒫0 with 𝒫 ⊆ 𝒫0 ⊆ co(𝒫) are in equilibrium too, and they all have the same optimal matching pair.
Proof. If 𝒫 ⊆ 𝒫0 ⊆ co(𝒫) then, by Theorem 6.1, Hmax(𝒫0) ≤ Hmax(co(𝒫)) = Hmax(𝒫), and, since R(κ|𝒫0) ≤ R(κ|co(𝒫)) = R(κ|𝒫) for every code κ, also Rmin(𝒫0) ≤ Rmin(𝒫) = Hmax(𝒫) ≤ Hmax(𝒫0), and we see that 𝒫0 is in equilibrium. As an asymptotically optimal sequence for 𝒫 is also asymptotically optimal for 𝒫0, it follows that 𝒫0 has the same Hmax-attractor, hence also the same optimal matching pair, as 𝒫. ☐
Another corollary is the following result, which can be used as a basis for proving certain limit theorems, cf. [22].
Corollary 6.6. Let (𝒫n)n≥1 be a sequence of models and assume that they are all in equilibrium with Hmax(𝒫n) < ∞, and that they are nested in the sense that 𝒫1 ⊆ 𝒫2 ⊆ ⋯. Let there further be given a model 𝒫 such that
⋃n≥1 𝒫n ⊆ 𝒫 ⊆ co(⋃n≥1 𝒫n)  and  Hmax(𝒫) = limn→∞ Hmax(𝒫n) < ∞.
Then 𝒫 is in equilibrium too, and the sequence of Hmax-attractors of the 𝒫n’s converges in divergence to the Hmax-attractor of 𝒫.
Clearly, the corollaries are related and we leave it to the reader to extend the argument in the proof of Corollary 6.5 so that it also covers the case of Corollary 6.6.
We end this section by developing some results on models given by linear conditions, thereby continuing the preliminary results from section 1 and section 4. We start with a general result which uses the following notion: a distribution P∗ is algebraically inner in the model 𝒫 if, for every P ∈ 𝒫, there exists Q ∈ 𝒫 such that P∗ is a convex combination of P and Q.
Lemma 6.7. If the model 𝒫 is in equilibrium and has a Hmax-distribution P∗ which is algebraically inner in 𝒫, then the code adapted to P∗ is cost-stable.
Proof. Let κ∗ be the code adapted to P∗. To any P ∈ 𝒫 we determine Q ∈ 𝒫 such that P∗ is a convex combination of these two distributions. Then, as ⟨κ∗, P⟩ ≤ Hmax(𝒫) and ⟨κ∗, Q⟩ ≤ Hmax(𝒫), and as the convex combination gives ⟨κ∗, P∗⟩ ≤ Hmax(𝒫), we must conclude that ⟨κ∗, P⟩ = ⟨κ∗, Q⟩ = Hmax(𝒫), since ⟨κ∗, P∗⟩ is in fact equal to Hmax(𝒫). Therefore, κ∗ is cost-stable. ☐
Theorem 6.8. If the alphabet 𝔸 is finite and the model 𝒫 affine, then the model is in equilibrium and the Rmin-code is cost-stable.
Proof. We may assume that 𝒫 is closed. By Theorem 6.1, the model is in equilibrium and, by continuity of the entropy function, the Hmax-attractor P∗ is a Hmax-distribution. For the Rmin-code κ∗, supp(P∗) = supp(𝒫) by Lemma 5.1. As 𝔸 is finite, we can then conclude that P∗ is algebraically inner in 𝒫, and Lemma 6.7 applies. ☐
We can now prove the following result:
Theorem 6.9. Let 𝒫 be a non-empty model given by finitely many linear constraints as in (4.15). Assume that the functions E1, ⋯, En, 1 (with 1 the constant function) are linearly independent and that Hmax(𝒫) < ∞. Then the model is in equilibrium and the optimal matching pair (κ∗, P∗) belongs to the exponential family defined by (4.18) and (4.20). In particular, κ∗ is cost-stable.
Proof. The model is in equilibrium by Theorem 6.1. Let (κ∗, P∗) be the corresponding optimal matching pair. If 𝔸 is finite, the result follows by Theorem 6.8 and some standard linear algebra.
Assume now that 𝔸 is infinite. Choose an asymptotically optimal sequence (Pn)n≥1. Let 𝔸0 be a finite subset of 𝔸, chosen sufficiently large (see below), and denote by 𝒫n the convex model of all P ∈ 𝒫 for which pi = Pn,i for all i ∉ 𝔸0. Let Pn∗ be the Hmax-attractor for 𝒫n and κn∗ the adapted code. Then this code is cost-stable for 𝒫n and of the form
κn,i∗ = αn + βn · Ei  for i ∈ 𝔸0.
If the set 𝔸0 is sufficiently large, the constants appearing here are uniquely determined. We find that (Pn∗)n≥1 is asymptotically optimal for 𝒫, and therefore Pn∗ →_D P∗. It follows that the constants βn,ν and αn converge to some constants βν and α and that
κi∗ = α + β · Ei  for i ∈ 𝔸0.
As 𝔸0 can be chosen arbitrarily large, the constants α and βν must be independent of 𝔸0, and the above equation must hold for all i ∈ 𝔸. ☐
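The following sketch illustrates Theorem 6.9 on a finite alphabet: it maximizes entropy under two linear constraints with a general-purpose optimizer (scipy's SLSQP, used here only as a convenient stand-in) and then checks that the optimal code lengths −ln pi are, up to numerical tolerance, an affine function of (E1,i, E2,i), i.e. of the form (4.18). The energy functions, moments and solver settings are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 8
E1, E2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)       # toy energy functions
w = np.arange(1, n + 1, dtype=float); w /= w.sum()        # a feasible witness distribution
lam1, lam2 = float(E1 @ w), float(E2 @ w)                  # prescribed moments

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return float((p * np.log(p)).sum())

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E1 - lam1},
        {"type": "eq", "fun": lambda p: p @ E2 - lam2}]
res = minimize(neg_entropy, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(1e-9, 1.0)] * n, constraints=cons)
p_star = res.x / res.x.sum()

# Theorem 6.9: the optimal code lengths -ln p_i should be affine in (E1_i, E2_i)
A = np.column_stack([np.ones(n), E1, E2])
coef, residual, *_ = np.linalg.lstsq(A, -np.log(p_star), rcond=None)
print("alpha, beta_1, beta_2 =", np.round(coef, 4))
print("residual of affine fit (should be close to zero):", residual)
```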
Remark. Extensions of the result just proved may well be possible, but care has to be taken. For instance, if we consider models obtained by infinitely many linear constraints, the result does not hold. As a simple instance of this, the reader may consider the case where the model is a “line”, viz. the affine hull generated by the two distributions P, Q on ℕ given by pi = 2^(−i); i ≥ 1 and qi = (ζ(3) i³)^(−1); i ≥ 1. This model is in equilibrium with P as Hmax-distribution, but the adapted code is not cost-stable. These facts can be established quite easily via the results quoted in the footnote following the proof of Theorem 4.1.
7 Entropy-continuous models
In the sequel we shall only discuss models in equilibrium. Such models can be quite different regarding the behaviour of the entropy function near the maximum. We start with a simple observation.
Lemma 7.1. If 𝒫 is in equilibrium and the Hmax-value Hmax(𝒫) is attained on cl(𝒫), it is only attained for the Hmax-attractor.
Proof. Assume that P ∈ cl(𝒫) and that H(P) = Hmax(𝒫). Choose a sequence (Pn)n≥1 ⊆ 𝒫 which converges to P in total variation. By lower semi-continuity and as H(P) = Hmax(𝒫), we see that (Pn) is asymptotically optimal. Therefore, for the Hmax-attractor P∗, Pn →_D P∗, hence also Pn → P∗ in total variation. It follows that P = P∗. ☐
Lemma 7.2. For a model 𝒫 in equilibrium and with Hmax-attractor P∗ the following conditions are equivalent:
- (i) the entropy function is continuous at P∗ in the topology of total variation,
- (ii) the entropy function is sequentially continuous at P∗ in the information topology,
- (iii) H(P∗) = Hmax(𝒫).
Proof. Clearly, (i) implies (ii).
Assume that (ii) holds and let (Pn)n≥1 ⊆ 𝒫 be asymptotically optimal. Then Pn →_D P∗. By assumption, H(Pn) → H(P∗) and (iii) follows since H(Pn) → Hmax(𝒫) also holds.
Finally, assume that (iii) holds and let (Pn)n≥1 ⊆ 𝒫 satisfy Pn → P∗ in total variation. By lower semi-continuity, lim inf H(Pn) ≥ H(P∗) = Hmax(𝒫), and H(Pn) → H(P∗) follows. Thus (i) holds. ☐
A model 𝒫 in equilibrium is entropy-continuous if H(P∗) = Hmax(𝒫), with P∗ the Hmax-attractor. In the opposite case we say that there is an entropy loss.
We now discuss entropy-continuous models. As we shall see, the previously introduced notion of a Nash equilibrium code, cf. Section 4, is of central importance in this connection. We need this concept for any cl_D(𝒫)-adapted code. Thus, by definition, a code κ∗ is a Nash equilibrium code if κ∗ is cl_D(𝒫)-adapted and if
⟨κ∗, P⟩ ≤ H(P∗) < ∞  for all P ∈ 𝒫,
where P∗ denotes the distribution matching κ∗. We stress that the definition is used for any model (whether or not it is known beforehand that the model is in equilibrium). We shall see below that a Nash equilibrium code is unique.
Theorem 7.3 (entropy-continuous models). Let 𝒫 be a model. The following conditions are equivalent:
- (i) 𝒫 is in equilibrium and entropy-continuous,
- (ii) 𝒫 is in equilibrium and has a maximum entropy distribution,
- (iii) 𝒫 has a Nash equilibrium code.
If these conditions are fulfilled, the Hmax-distribution is unique and coincides with the Hmax-attractor. Likewise, the Nash equilibrium code is unique and coincides with the Rmin-code.
Proof. (i) ⇒ (ii): This is clear since, assuming that (i) holds, the Hmax-attractor must be a Hmax-distribution.
(ii) ⇒ (iii): Assume that 𝒫 is in equilibrium and that P0 is a Hmax-distribution. Let (κ∗, P∗) be the optimal matching pair. Applying Theorem 6.2, (6.31) with P = P0, we conclude that D(P0‖P∗) = 0, hence P0 = P∗. Then we find that
R(κ∗|𝒫) ≤ Hmax(𝒫) = H(P∗) < ∞,
and we see that κ∗ is a Nash equilibrium code.
(iii) ⇒ (i): If κ∗ is a Nash equilibrium code for 𝒫, with matching distribution P∗, then
Hmax(𝒫) ≤ Rmin(𝒫) ≤ R(κ∗|𝒫) ≤ H(P∗) ≤ Hmax(𝒫),
and we conclude that 𝒫 is in equilibrium and that κ∗ is the minimum risk code.
In establishing the equivalence of (i)–(iii) we also established the uniqueness assertions claimed. ☐
The theorem generalizes the previous result, Theorem 4.1. We refer to section 4 for results which point to the great applicability of results like Theorem 7.3.
8 Loss of entropy
We shall study a model 𝒫 in equilibrium. By previous results we realize that for many purposes we may assume that 𝒫 is a closed, convex subset of M¹₊(𝔸) with Hmax(𝒫) < ∞. Henceforth, these assumptions are in force.
Denote by (κ∗, P∗) the optimal matching pair associated with 𝒫. By the dissection of 𝒫 we understand the decomposition of 𝒫 consisting of all non-empty sets of the form
𝒫x = {P ∈ 𝒫 : ⟨κ∗, P⟩ = x},  0 ≤ x < ∞.   (8.33)
Let ∆ denote the set of x with 𝒫x ≠ ∅. As R(κ∗|𝒫) = Rmin(𝒫) = Hmax(𝒫), and as 𝒫 is convex with P∗ ∈ 𝒫, ∆ is a subinterval of [0; Hmax(𝒫)] which contains the interval [H(P∗), Hmax(𝒫)[.
Clearly, κ∗ is a cost-stable code for all the models 𝒫x; x ∈ ∆. Hence, by Theorem 4.3, the entropy function is continuous on each of the sets 𝒫x; x ∈ ∆.
Each set 𝒫x; x ∈ ∆ is a sub-model of 𝒫 and, as each 𝒫x is convex with Hmax(𝒫x) < ∞, these sub-models are all in equilibrium. The linking identity shows that, for all P ∈ 𝒫x,
H(P) + D(P‖P∗) = x.   (8.34)
This implies that Hmax(𝒫x) ≤ x, a sharpening of the trivial inequality Hmax(𝒫x) ≤ Hmax(𝒫). From (8.34) it also follows that maximizing entropy H(·) over 𝒫x amounts to the same thing as minimizing divergence D(·‖P∗) over 𝒫x. In other words, the Hmax-attractor of 𝒫x may, alternatively, be characterized as the I-projection of P∗ on 𝒫x, i.e. as the unique distribution Px∗ such that Qn →_D Px∗ for every sequence (Qn) ⊆ 𝒫x for which D(Qn‖P∗) converges to the infimum of D(Q‖P∗) with Q ∈ 𝒫x.
Further basic results are collected below:
Theorem 8.1 (dissection of models). Let 𝒫 be a convex model in equilibrium with optimal matching pair (κ∗, P∗) and assume that P∗ ∈ 𝒫. Then the following properties hold for the dissection defined by (8.33):
- (i) The set ∆ is an interval with sup ∆ = Hmax(𝒫). A necessary and sufficient condition that Hmax(𝒫) ∈ ∆ is that 𝒫 is entropy-continuous. If 𝒫 has entropy loss, ∆ contains the non-degenerate interval [H(P∗), Hmax(𝒫)[.
- (ii) The entropy function is continuous on each sub-model 𝒫x; x ∈ ∆.
- (iii) Each sub-model 𝒫x; x ∈ ∆ is in equilibrium and the Hmax-attractor for 𝒫x is the I-projection of P∗ on 𝒫x.
- (iv) For x ∈ ∆, Hmax(𝒫x) ≤ x and the following bi-implications hold, where Px∗ denotes the Hmax-attractor of 𝒫x:
Hmax(𝒫x) = x  ⟺  Px∗ = P∗  ⟺  x ≥ H(P∗).   (8.35)
Proof. (i)–(iii) as well as the inequality Hmax(𝒫x) ≤ x of (iv) were proved above.
For the proof of the bi-implications in (iv) we consider an x ∈ ∆ and let (Pn) be an asymptotically optimal sequence for 𝒫x. Then the condition Hmax(𝒫x) = x is equivalent with the condition H(Pn) → x, and the condition Px∗ = P∗ is equivalent with the condition D(Pn‖P∗) → 0. In view of the equality x = H(Pn) + D(Pn‖P∗) we now realize that the first bi-implication of (8.35) holds. For the second bi-implication we first remark that, as x ≥ H(Px∗) holds generally, if Px∗ = P∗ then x ≥ H(P∗) must hold.
For the final part of the proof of (iv), we assume that x ≥ H(P∗). The equality Hmax(𝒫x) = x is evident if x = H(P∗). We may therefore assume that H(P∗) < x < Hmax(𝒫). We now let (Pn) denote an asymptotically optimal sequence for the full model 𝒫 such that H(Pn) ≥ x; n ≥ 1. As ⟨κ∗, P∗⟩ ≤ x ≤ ⟨κ∗, Pn⟩ for all n, we can find a sequence (Qn)n≥1 of distributions in 𝒫x such that each Qn is a convex combination of the form Qn = αnP∗ + βnPn. By (5.23), D(Qn‖P∗) ≤ βnD(Pn‖P∗) → 0. Thus P∗ is essentially consistent for 𝒫x and, as the code adapted to P∗ is cost-stable for this model, Corollary 6.4 implies that the model 𝒫x has P∗ as its Hmax-attractor. ☐
A distribution P∗ is said to have potential entropy loss if it is the Hmax-attractor of a model in equilibrium with entropy loss. As we shall see, this amounts to a very special behaviour of the point probabilities. The definition we need at this point we first formulate quite generally for an arbitrary distribution P. With P we consider the density function Ω associated with the adapted code, cf. the appendix. In terms of P this function is given by
Ω(t) = #{i ∈ 𝔸 : pi ≥ exp(−t)},  t ≥ 0
(# = “number of elements in”). We can now define a hyperbolic distribution as a distribution P such that
lim sup (1/t) ln Ω(t) = 1  as t → ∞.   (8.37)
Clearly, Ω(t) ≤ exp(t) for each t, so that the equality in the defining relation may just as well be replaced by the inequality “≥”.
We note that zero point probabilities do not really enter into the definition; therefore we may assume, without any essential loss of generality, that all point probabilities are positive. And then we may as well assume that the point probabilities are ordered: p1 ≥ p2 ≥ ⋯. In this case it is easy to see that (8.37) is equivalent with the requirement
lim inf (ln i) / ln(1/pi) = 1  as i → ∞.   (8.38)
In the sequel we shall typically work with distributions which are ordered in the above sense. The terminology regarding hyperbolic distributions is inspired by [19] but goes back further, cf. [24]. In these references the reader will find remarks and results pertaining to this and related types of distributions and their discovery from empirical studies, which we will also comment on in the next section.
We note that in (8.38) the inequality “≤” is trivial, as pi ≤ 1/i for every i when the point probabilities are ordered. Therefore, in more detail, a distribution with ordered point probabilities is hyperbolic if and only if, for every a > 1, pi ≥ i^(−a) holds for infinitely many indices i.
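A small numerical illustration of the criterion (8.38): for an ordered distribution we can inspect the ratio ln i / ln(1/pi). For a Zipf-like distribution with a logarithmic correction the ratio creeps (slowly) towards 1, while for a geometric distribution it tends to 0. The two example distributions are hypothetical and chosen only for illustration; we work directly with the code lengths κi = −ln pi to avoid numerical underflow.

```python
import numpy as np

N = 10_000_000
i = np.arange(1, N + 1, dtype=float)

# work with kappa_i = -ln p_i directly; the normalizing constant only shifts
# kappa by an additive constant and does not change the limiting behaviour
kappa_zipf = np.log(i) + 3.0 * np.log(np.log(i + 1.0))   # p_i ~ 1/(i ln^3 i): hyperbolic
kappa_geom = i * np.log(2.0)                              # p_i ~ 2^-i: not hyperbolic

for name, kappa in [("zipf-like", kappa_zipf), ("geometric", kappa_geom)]:
    r = np.log(i) / kappa
    print(f"{name:9s}: ln(i)/ln(1/p_i) at i=10^3: {r[999]:.3g}, at i=10^7: {r[-1]:.3g}")
```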
Theorem 8.2. Every distribution with infinite entropy is hyperbolic.
Proof. Assume that P is not hyperbolic and that the point probabilities are ordered. Then there exists a > 1 such that pi ≤ i^(−a) for all sufficiently large i. As the distribution with point probabilities equal to i^(−a), properly normalized, has finite entropy, the result follows. ☐
With every model 𝒫 in equilibrium we associate a partition function and an exponential family, simply by considering the corresponding objects associated with the Rmin-code for the model in question. This then follows the definition given in Section 4, but for the simple case where there is only one “energy function”, with the Rmin-code playing the role of the energy function.
Theorem 8.3 (maximal models). Let 𝒫 ⊆ M¹₊(𝔸) be given and assume that there exists a model 𝒫0 such that 𝒫 ⊆ 𝒫0, 𝒫0 is in equilibrium and Hmax(𝒫0) = Hmax(𝒫). Then 𝒫 itself must be in equilibrium. Furthermore, there exists a largest model with the stated properties, namely the model
𝒫max = {P ∈ M¹₊(𝔸) : ⟨κ∗, P⟩ ≤ Hmax(𝒫)},   (8.40)
where κ∗ denotes the minimum risk code of 𝒫. Finally, any model 𝒫1 with 𝒫 ⊆ 𝒫1 ⊆ 𝒫max is in equilibrium and has the same optimal matching pair as 𝒫.
Proof. Choose 𝒫0 with the stated properties. By Theorem 6.1,
Hmax(co(𝒫)) ≤ Hmax(co(𝒫0)) = Hmax(𝒫0) = Hmax(𝒫),
hence 𝒫 is in equilibrium. Let κ∗ be the Rmin-code of 𝒫 in accordance with Theorem 6.2 and consider 𝒫max defined by (8.40).
Now let 𝒫0 be an equilibrium model with 𝒫 ⊆ 𝒫0 and Hmax(𝒫0) = Hmax(𝒫). As an asymptotically optimal sequence for 𝒫 is also asymptotically optimal for 𝒫0, we realize that 𝒫0 has the same Hmax-attractor, hence also the same Rmin-code, as 𝒫. Thus R(κ∗|𝒫0) = Rmin(𝒫0) = Hmax(𝒫0) = Hmax(𝒫), and it follows that 𝒫0 ⊆ 𝒫max.
Clearly, 𝒫max is convex and Hmax(𝒫max) = Hmax(𝒫), hence 𝒫max is in equilibrium by Theorem 6.1.
The final assertion of the theorem follows by one more application of Theorem 6.1. ☐
The models which can arise as in Theorem 8.3 via (8.40) are called maximal models.
Let κ∗ ∈ K(𝔸) and 0 ≤ h < ∞. Put
𝒫h = {P ∈ M¹₊(𝔸) : ⟨κ∗, P⟩ ≤ h}.   (8.41)
We know that any maximal model must be of this form. Naturally, the converse does not hold. An obvious necessary condition is that the entropy of the matching distribution be finite. But we must require more. Clearly, the models in (8.41) are in equilibrium, but it is not clear that they have κ∗ as Rmin-code and h as Hmax-value.
Theorem 8.4. A distribution with finite entropy has potential entropy loss if and only if it is hyperbolic.
Proof. We may assume that the point probabilities of P∗ are ordered.
Assume first that P∗ is not hyperbolic and that P∗ is the attractor for some model 𝒫 in equilibrium. Consider the corresponding maximal models 𝒫h of the form (8.41), with κ∗ the code adapted to P∗, and consider a value of h with H(P∗) ≤ h ≤ Hmax(𝒫). Let γ be the abscissa of convergence associated with κ∗ and let Φ be defined as in the appendix. As γ < 1, we can choose β > γ such that Φ(β) = h. Now both P∗ and Qβ given by
qβ,i = Z(β)^(−1) exp(−β κi∗),  i ∈ 𝔸,
are attractors for 𝒫h and hence equal. It follows that h = H(P∗).
Next we show that a hyperbolic distribution has potential entropy loss. Consider the maximal models 𝒫h. Each one of these models is given by a single linear constraint. Therefore, the attractor is an element of the corresponding exponential family. The abscissa of convergence is 1 and, therefore, the range of the map β ↷ Φ(β) is contained in ]0; H(P∗)]. For h ≤ H(P∗), there exists a consistent maximum entropy distribution. Assume that h0 > H(P∗) and that the attractor of 𝒫h0 equals Qβ for some β. By Theorem 8.1, Qβ must be the attractor for all 𝒫h with h ∈ [Φ(β); h0]. Especially, this holds for h = H(P∗). This shows that P∗ = Qβ. By Theorem 8.1 the conclusion is now clear. ☐
9 Zipf’s law
Zipf’s law is an empirically discovered relationship for the relative frequencies of the words of a natural language. The law states that
fi ≈ c · (i + b)^(−a),
where fi is the relative frequency of the i’th most common word in the language, and where a and b (together with the normalizing constant c) denote constants. For large values of i we then have
fi ≈ c · i^(−a).
The constants a and b depend on the language, but for many languages a ≈ 1, see [24].
Now consider an ideal language where the frequencies of words are described by a hyperbolic probability distribution P∗. Assume that the entropy of the distribution is finite. We shall describe in qualitative terms the consequences of these assumptions as they can be derived from the developed theory, especially Theorem 8.4. We shall see that our assumption introduces a kind of stability of the language which is desirable in most situations.
Small children with a limited vocabulary will use the few words they know with relative frequencies very different from the probabilities described by P∗. They will only form simple sentences, and at this stage the number of bits per word will be small, in the sense that the entropy of the child’s probability distribution is small. Therefore the parents will often be able to understand the child even though the pronunciation is poor. The parents will, typically, talk to their children with a lower bit rate than they normally use, but with a higher bit rate than their children. Thereby new words and grammatical structures will be presented to the child, and, adopting elements of this structure, the child will be able to increase its bit rate. At a certain stage the child will be able to communicate at a reasonably high rate (about H(P∗)). Now the child knows all the basic words and structures of the language.
The child is still able to increase its bit rate, but from now on this will make no significant change in the relative frequencies of the words. Bit rates higher than H(P∗) are from now on obtained by the introduction of specialized words, which occur seldom in the language as a whole. The introduction of new specialized words can be continued throughout life. Therefore one is able to express even complicated ideas without changing the basic structure of the language; indeed there is no limit, theoretically, to the bit rate at which one can communicate without change of basic structure.
We realize that, in view of our theoretical results, specifically Theorem 8.4, the features of a natural language as just discussed are only possible if the language obeys Zipf’s law. Thus we have the striking phenomenon that the apparently “irregular” behaviour of models with entropy loss (or just potential entropy loss) is actually the key to desirable stability: the fact that for such models you can increase the bit rate, the level of communication, and maintain the basic features of the language. One could even speculate that modelling based on entropy loss lies behind the phenomenon that many will recognize as a fact, viz. that “we can talk without thinking”. We just start talking using the basic structure of the language (and rather common words) and then from time to time stick in more informative words and phrases in order to give our talk more semantic content; but in doing so, we use relatively infrequent words and structures, thus not violating basic principles – hence still speaking recognizably Danish, English or what the case may be, so that also the receiver or listener feels at ease and recognizes our talk as unmistakably Danish, English or ...
We see that very informative speaking can be obtained by use of infrequent expressions. Therefore a conversation between, say, two physicists may use English supplemented with specialized words like electron and magnetic flux. We recognize their language as English because the basic words and grammar are the same in all English. The specialists only have to know special words, not a special grammar. In this sense the languages are stable. If the entropy of our distribution is infinite, the language will behave in just about the same manner as described above. In fact one would not feel any difference between a language with finite entropy and a language with infinite entropy.
We see that it is convenient that a language follows Zipf’s law, but the information theoretic methods also give some explanation of how the language may have evolved into a state which obeys Zipf’s law. The set of hyperbolic distributions is convex. Therefore, if two information sources both follow Zipf’s law, then so does their mixture, and if two information sources both approximately follow Zipf’s law, their mixture will do so even more. The information sources may be from different languages, but it is more interesting to consider a small child learning the language. The child gets input from different sources: the mother, the father, other children etc. Trying to imitate their language, the child will use the words with frequencies which are closer to Zipf’s law than those of the sources. As the language develops over the centuries, the frequencies will converge to a hyperbolic distribution.
Here we have discussed entropy as bits per word and not bits per letter. The letters give an encoding of the words which should primarily be understood by others, and therefore the encoding cannot just be changed to obtain a better data compression. To stress the difference between bits per word and bits per letter, we remark that the words are the basic semantic structure of the language. Therefore we may have an internal representation of the words which has very little to do with their length when spoken, which could explain why it is often much easier to remember a long word in a language you understand than a short word in a language you do not understand. It would be interesting to compare these ideas with empirical measurements of the entropy here considered but, precisely in the regime where Zipf’s law holds, such a study is very difficult as convergence of estimators of the entropy is very slow, cf. [1].
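To illustrate the closing remark about slow convergence of entropy estimators in the Zipf regime, the following sketch draws samples from numpy’s built-in Zipf sampler with exponent just above 1 (used only as a stand-in for a natural-language word source) and prints the naive plug-in entropy estimate for increasing sample sizes; the estimate keeps growing with the sample size, reflecting the very slow convergence referred to above. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def plug_in_entropy(sample):
    """Naive estimator: entropy of the empirical distribution (in nats)."""
    _, counts = np.unique(sample, return_counts=True)
    freq = counts / counts.sum()
    return float(-(freq * np.log(freq)).sum())

# a Zipf-like word source; exponent just above 1 mimics natural-language rank statistics
for n in (10**3, 10**4, 10**5, 10**6):
    sample = rng.zipf(a=1.1, size=n)
    print(f"n = {n:>8d}   plug-in entropy estimate = {plug_in_entropy(sample):.3f} nats")
```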
1].