
Maximum Entropy on Compact Groups

Peter Harremoës
Centrum Wiskunde & Informatica, Science Park 123, 1098 GB Amsterdam, Noord-Holland, The Netherlands
Entropy 2009, 11(2), 222-237; https://doi.org/10.3390/e11020222
Submission received: 30 December 2008 / Accepted: 31 March 2009 / Published: 1 April 2009

Abstract:
In a compact group the Haar probability measure plays the role of the uniform distribution. The entropy and rate distortion theory for this uniform distribution is studied. New results and simplified proofs on convergence of convolutions on compact groups are presented; they can be formulated as statements that entropy increases to its maximum. Information theoretic techniques and Markov chains play a crucial role. The convergence results are also formulated via rate distortion functions. The rate of convergence is shown to be exponential.

1. Introduction

It is a well-known and celebrated result that the uniform distribution on a finite set can be characterized as having maximal entropy. Jaynes used this idea as a foundation of statistical mechanics [1], and the Maximum Entropy Principle has become a popular principle for statistical inference [2,3,4,5,6,7,8]. Often it is used as a method to get prior distributions. On a finite set, for any distribution P we have $H(P) = H(U) - D(P\|U)$, where H is the Shannon entropy, D is information divergence, and U is the uniform distribution. Thus, maximizing $H(P)$ is equivalent to minimizing $D(P\|U)$. Minimization of information divergence can be justified by the conditional limit theorem of Csiszár [9, Theorem 4]. So if we have a good reason to use the uniform distribution as prior distribution, we automatically get a justification of the Maximum Entropy Principle. The conditional limit theorem cannot justify the use of the uniform distribution itself, so we need something else. Here we shall focus on symmetry.
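The identity can be checked numerically. A minimal sketch (Python with NumPy, which we also use for the other numerical illustrations below):

```python
import numpy as np

# Numerical check of H(P) = H(U) - D(P||U) on a finite set of size k.
k = 6
rng = np.random.default_rng(0)
P = rng.random(k); P /= P.sum()             # an arbitrary distribution
U = np.full(k, 1 / k)                       # the uniform distribution

H = lambda Q: -np.sum(Q * np.log(Q))        # Shannon entropy (nats)
D = lambda Q, R: np.sum(Q * np.log(Q / R))  # information divergence

print(H(P), H(U) - D(P, U))                 # the two numbers agree
```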
Example 1.
A die has six sides that can be permuted via rotations of the die. We note that not all permutations can be realized as rotations and not all rotations will give permutations. Let G be the group of permutations that can be realized as rotations. We shall consider G as the symmetry group of the die and observe that the uniform distribution on the six sides is the only distribution that is invariant under the action of the symmetry group G.
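The group in this example can be generated explicitly. The following sketch (Python; the face labeling and the choice of generating quarter-turns are our own illustrative assumptions) builds all 24 rotations of the cube as permutations of the six faces and checks that the action is transitive, so that the uniform distribution is the only invariant one:

```python
# Rotations of a die acting as permutations of the six faces.
# Assumed labeling: 0/1 = up/down, 2/3 = north/south, 4/5 = east/west.
rz = (0, 1, 4, 5, 3, 2)   # quarter-turn about the up-down axis
rx = (2, 3, 1, 0, 4, 5)   # quarter-turn about the east-west axis

def compose(p, q):
    """Composition of permutations: (p o q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

identity = tuple(range(6))
group, frontier = {identity}, [identity]
while frontier:                      # closure under the two generators
    g = frontier.pop()
    for h in (rz, rx):
        f = compose(h, g)
        if f not in group:
            group.add(f)
            frontier.append(f)

print(len(group))                    # 24 rotations of the cube
print({g[0] for g in group})         # orbit of face 0: all six faces,
                                     # so only the uniform distribution is invariant
```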
Example 2.
$G = \mathbb{R}/2\pi\mathbb{Z}$ is a commutative group that can be identified with the group $SO(2)$ of rotations in 2 dimensions. This is the simplest example of a group that is compact but not finite.
For an object with symmetries the symmetry group defines a group action on the object, and any group action on an object defines a symmetry group of the object. A special case of a group action of the group G is left translation of the elements in G. Instead of studying distributions on objects with symmetries, in this paper we shall focus on distributions on the symmetry groups themselves. This is no serious restriction because a distribution on the symmetry group of an object will induce a distribution on the object itself.
Convergence of convolutions of probability measures on compact groups was studied by Stromberg [10], who proved weak convergence of the convolutions. An information theoretic approach was introduced by Csiszár [11]. Classical methods involving characteristic functions have been used to give conditions for uniform convergence of the densities of convolutions [12]. See [13] for a review of the subject and further references.
It is shown that convergence in information divergence corresponds to uniform convergence of the rate distortion functions and that weak convergence corresponds to pointwise convergence of the rate distortion functions. In this paper we shall mainly consider convolutions as Markov chains. This gives us a tool that allows us to prove convergence of i.i.d. convolutions, and the rate of convergence is proved to be exponential.
The rest of the paper is organized as follows. In Section 2 we establish a number of simple results on distortion functions on compact sets. These results will be used in Section 4. In Section 3 we define the uniform distribution on a compact group as the uniquely determined Haar probability measure. In Section 4 it is shown that the uniform distribution is the maximum entropy distribution on a compact group in the sense that it maximizes the rate distortion function at any positive distortion level. Convergence of convolutions of a distribution to the uniform distribution is established in Section 5 using Markov chain techniques, and the rate of convergence is discussed in Section 6. The group $SO(2)$ is used as our running example. We finish with a short discussion.

2. Distortion on compact groups

Let G be a compact group where ∗ denotes the composition. The neutral element will be denoted e, and the inverse of an element g will be denoted $g^{-1}$.
We shall start with some general comments on distortion functions on compact sets. Assume that the group plays the role of both source alphabet and reproduction alphabet. A distortion function $d: G \times G \to \mathbb{R}$ is given, and we will assume that $d(x,y) \ge 0$ with equality if and only if $x = y$. We will also assume that the distortion function is continuous.
Example 3.
As distortion function on $SO(2)$ we use the squared Euclidean distance between the corresponding points on the unit circle, i.e.
$$d(x,y) = 4\sin^2\!\Big(\frac{x-y}{2}\Big) = 2 - 2\cos(x-y).$$
This is illustrated in Figure 1.
Figure 1. Squared Euclidean distance between the rotation angles x and y.
The distortion function might be a metric, but even if the distortion function is not a metric, the relation between the distortion function and the topology is the same as if it were a metric. One way of constructing a distortion function on a group is to use the squared Hilbert-Schmidt norm in a unitary representation of the group.
Theorem 4.
If C is a compact set and $d: C \times C \to \mathbb{R}$ is a non-negative continuous distortion function such that $d(x,y) = 0$ if and only if $x = y$, then the topology on C is generated by the distortion balls $\{x \in C \mid d(x,y) < r\}$, where $y \in C$ and $r > 0$.
Proof. 
We have to prove that a subset $B \subseteq C$ is open if and only if any $y \in B$ is contained in a ball that is a subset of B. Assume that $B \subseteq C$ is open and that $y \in B$. Then the complement $\complement B$ is compact. Hence the function $x \mapsto d(x,y)$ has a minimum r on $\complement B$, and r must be positive because $r = d(x,y) = 0$ for some $x \in \complement B$ would imply that $x = y \in B$. Therefore $\{x \in C \mid d(x,y) < r\} \subseteq B$.
Continuity of d implies that the balls $\{x \in C \mid d(x,y) < r\}$ are open. If any point in B is contained in an open ball that is a subset of B, then B is a union of open sets and hence open. ☐
The following lemma may be considered as a kind of uniform continuity of the distortion function, or as a substitute for the triangle inequality when d is not a metric.
Lemma 5.
If C is a compact set and $d: C \times C \to \mathbb{R}$ is a non-negative continuous distortion function such that $d(x,y) = 0$ if and only if $x = y$, then there exists a continuous function $f_1$ satisfying $f_1(0) = 0$ such that
$$|d(x,y) - d(z,y)| \le f_1(d(z,x)) \quad \text{for } x,y,z \in C. \tag{1}$$
Proof. 
Assume that the lemma does not hold. Then there exist $\epsilon > 0$ and a net $(x_\lambda, y_\lambda, z_\lambda)_{\lambda \in \Lambda}$ such that
$$|d(x_\lambda, y_\lambda) - d(z_\lambda, y_\lambda)| > \epsilon$$
and $d(z_\lambda, x_\lambda) \to 0$. A net in a compact set has a convergent subnet, so without loss of generality we may assume that the net $(x_\lambda, y_\lambda, z_\lambda)_{\lambda \in \Lambda}$ converges to some triple $(x,y,z)$. By continuity of the distortion function we get
$$|d(x,y) - d(z,y)| \ge \epsilon$$
and $d(z,x) = 0$, which implies $z = x$, so the left hand side is 0 and we have a contradiction. ☐
We note that if a distortion function satisfies (1) then it defines a topology in which the distortion balls are open.
In order to define the weak topology on probability distributions we extend the distortion function from $C \times C$ to $M_+^1(C) \times M_+^1(C)$ via
$$d(P,Q) = \inf E[d(X,Y)],$$
where X and Y are random variables with values in C and the infimum is taken over all joint distributions on $(X,Y)$ such that the marginal distribution of X is P and the marginal distribution of Y is Q. The distortion function is continuous, so $(x,y) \mapsto d(x,y)$ has a maximum that we denote $d_{\max}$.
Theorem 6.
If C is a compact set and $d: C \times C \to \mathbb{R}$ is a non-negative continuous distortion function such that $d(x,y) = 0$ if and only if $x = y$, then
$$|d(P,Q) - d(S,Q)| \le f_2(d(S,P)) \quad \text{for } P,Q,S \in M_+^1(C)$$
for some continuous function $f_2$ satisfying $f_2(0) = 0$.
Proof. 
According to Lemma 5 there exists a function $f_1$ satisfying (1); replacing $f_1$ by $t \mapsto \max_{s \in [0,t]} f_1(s)$ if necessary, we may assume that $f_1$ is increasing. Choose a coupling of X and Z such that $E[d(Z,X)] = d(S,P)$ and extend it, together with a coupling of Z and Y, to a joint distribution on $(X,Y,Z)$. We use that
$$
\begin{aligned}
\big|E[d(X,Y)] - E[d(Z,Y)]\big| &\le E[f_1(d(Z,X))] \\
&= E[f_1(d(Z,X)) \mid d(Z,X) \le \delta]\cdot P(d(Z,X) \le \delta) + E[f_1(d(Z,X)) \mid d(Z,X) > \delta]\cdot P(d(Z,X) > \delta) \\
&\le f_1(\delta)\cdot 1 + f_1(d_{\max})\cdot \frac{E[d(Z,X)]}{\delta} \\
&\le f_1(\delta) + f_1(d_{\max})\cdot \frac{d(S,P)}{\delta}.
\end{aligned}
$$
This holds for all $\delta > 0$ and in particular for $\delta = d(S,P)^{1/2}$, which proves the theorem. ☐
The theorem can be used to construct the weak topology on $M_+^1(C)$, with
$$\{P \in M_+^1(C) \mid d(P,Q) < r\},$$
$Q \in M_+^1(C)$, $r > 0$, as open balls that generate the topology. We note without proof that this definition is equivalent to the quite different definition of the weak topology that one will find in most textbooks.
For a group G we assume that the distortion function is right invariant in the sense that for all $x,y,z \in G$ the distortion function d satisfies
$$d(x*z,\, y*z) = d(x,y).$$
A right invariant distortion function satisfies $d(x,y) = d(x*y^{-1}, e)$, so right invariant continuous distortion functions on a group can be constructed from non-negative continuous functions with a minimum at e.

3. The Haar measure

We use ∗ to denote convolution of probability measures on G. For $g \in G$ we shall use $g*P$ to denote the g-translation of the measure P or, equivalently, the convolution with a measure concentrated in g. The n-fold convolution of a distribution P with itself will be denoted $P^{*n}$. For random variables with values in G one can formulate an analog of the central limit theorem. We recall some facts about probability measures on compact groups and their Haar measures.
Definition 7.
Let G be a group. A measure P is said to be a left Haar measure if $g*P = P$ for any $g \in G$. Similarly, P is said to be a right Haar measure if $P*g = P$ for any $g \in G$. A measure is said to be a Haar measure if it is both a left Haar measure and a right Haar measure.
Example 8.
The uniform distribution on $SO(2)$ or $\mathbb{R}/2\pi\mathbb{Z}$ has density $1/2\pi$ with respect to the Lebesgue measure on $[0; 2\pi[$. The function
$$f(x) = 1 + \sum_{n=1}^{\infty} a_n \cos(nx + \phi_n) \tag{2}$$
is the density of a probability distribution P on $SO(2)$ if the Fourier coefficients $a_n$ are sufficiently small so that f is non-negative. A sufficient condition for f to be non-negative is that $\sum_{n=1}^{\infty} |a_n| \le 1$.
Translation by y gives a distribution with density
$$f(x-y) = 1 + \sum_{n=1}^{\infty} a_n \cos\big(n(x-y) + \phi_n\big).$$
The distribution P is invariant if and only if $f \equiv 1$ or, equivalently, all Fourier coefficients $(a_n)_{n \in \mathbb{N}}$ are 0.
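The conditions of this example are easy to verify numerically. A minimal sketch (Python with NumPy; the coefficients and phases are arbitrary illustrative choices):

```python
import numpy as np

# Density f(x) = 1 + sum_n a_n cos(n x + phi_n) with sum |a_n| <= 1.
a = np.array([0.5, 0.3, 0.1])            # illustrative Fourier coefficients
phi = np.array([0.0, 1.0, 2.0])          # illustrative phases
x = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
f = 1 + sum(a_n * np.cos((n + 1) * x + p)
            for n, (a_n, p) in enumerate(zip(a, phi)))

print(f.min() > 0)        # nonnegative, since sum |a_n| <= 1
print(np.mean(f))         # ~1: f integrates to 1 against dU = dx/(2*pi)
```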
A measure P on G is said to have full support if the support of P is G, i.e. $P(A) > 0$ for any non-empty open set $A \subseteq G$. The following theorem is well-known [14,15,16].
Theorem 9.
Let U be a probability measure on the compact group G. Then the following five conditions are equivalent.
  • U is a left Haar measure.
  • U is a right Haar measure.
  • U has full support and is idempotent in the sense that U ∗ U = U.
  • There exists a probability measure P on G with full support such that P ∗ U = U.
  • There exists a probability measure P on G with full support such that U ∗ P = U.
In particular a Haar probability measure is unique.
In [14,15,16] one can find the proof that any locally compact group has a Haar measure. The unique Haar probability measure on a compact group will be called the uniform distribution and denoted U. For probability measures P and Q the information divergence from P to Q is defined by
$$D(P\|Q) = \begin{cases} \displaystyle\int \log \frac{dP}{dQ}\, dP, & \text{if } P \ll Q; \\ \infty, & \text{otherwise.} \end{cases}$$
We shall often calculate the divergence from a distribution to the uniform distribution U, and introduce the notation
$$D(P) = D(P\|U).$$
For a random variable X with values in G we will sometimes write $D(X\|U)$ instead of $D(P\|U)$ when X has distribution P.
Example 10.
The distribution P with density f given by (2) has
$$D(P) = \frac{1}{2\pi}\int_0^{2\pi} f(x) \log f(x)\, dx \le \frac{1}{2\pi}\int_0^{2\pi} f(x)\,\big(f(x) - 1\big)\, dx = \frac{1}{2}\sum_{n=1}^{\infty} a_n^2,$$
using $\log t \le t - 1$ and Parseval's identity.
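A numerical sanity check of this bound, in the same setup as the sketch above:

```python
import numpy as np

# Check D(P) <= (1/2) * sum a_n^2 for f = 1 + sum a_n cos(n x + phi_n).
a = np.array([0.5, 0.3, 0.1])
phi = np.array([0.0, 1.0, 2.0])
x = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
f = 1 + sum(a_n * np.cos((n + 1) * x + p)
            for n, (a_n, p) in enumerate(zip(a, phi)))

D = np.mean(f * np.log(f))          # D(P) = (1/2*pi) * int f log f dx
print(D, 0.5 * np.sum(a ** 2))      # D(P) is slightly below the bound 0.175
```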
Let G be a compact group with uniform distribution U and let F be a closed subgroup of G. Then the subgroup has a Haar probability measure $U_F$ and
$$D(U_F) = \log\,[G:F],$$
where $[G:F]$ denotes the index of F in G. In particular $D(U_F)$ is finite if and only if $[G:F]$ is finite.
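A finite illustration (a sketch on the cyclic group $\mathbb{Z}_6$ with the subgroup $F = \{0,3\}$ of index 3):

```python
import numpy as np

# D(U_F) = log [G:F] for the subgroup F = {0, 3} of the cyclic group Z_6.
m = 6
U = np.full(m, 1 / m)              # Haar probability measure on Z_6
UF = np.zeros(m)
UF[[0, 3]] = 0.5                   # Haar probability measure on F

mask = UF > 0
D = np.sum(UF[mask] * np.log(UF[mask] / U[mask]))
print(D, np.log(3))                # both equal log 3, the log of the index
```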

4. The rate distortion theory

We will develop aspects of the rate distortion theory of a compact group G. Let P be a probability measure on G. We observe that compactness of G implies that any covering of G by distortion balls of radius $\delta > 0$ contains a finite covering. If k is the number of balls in a finite covering then $R_P(\delta) \le \log k$, where $R_P$ is the rate distortion function of the probability measure P. In particular the rate distortion function is upper bounded. The entropy of a probability distribution P is given by $H(P) = R_P(0)$. If the group is finite then the uniform distribution maximizes the Shannon entropy $R_P(0)$, but if the group is not finite then in principle there is no entropy maximizer. As we shall see, the uniform distribution still plays the role of entropy maximizer in the sense that it maximizes the value $R_P(\delta)$ of the rate distortion function at any positive distortion level $\delta > 0$. The rate distortion function $R_P$ can be studied using its convex conjugate $R_P^*$ given by
$$R_P^*(\beta) = \sup_{\delta}\, \big(\beta\cdot\delta - R_P(\delta)\big).$$
The rate distortion function is then recovered by the formula
$$R_P(\delta) = \sup_{\beta}\, \big(\beta\cdot\delta - R_P^*(\beta)\big).$$
The techniques are pretty standard [17].
Theorem 11.
The convex conjugate of the rate distortion function of the uniform distribution is given by
$$R_U^*(\beta) = \log Z(\beta) \quad \text{for } \beta \le 0,$$
where Z is the partition function defined by
$$Z(\beta) = \int_G \exp\big(\beta \cdot d(g,e)\big)\, dU(g).$$
The rate distortion function of an arbitrary distribution P satisfies
$$R_U - D(P\|U) \le R_P \le R_U. \tag{4}$$
Proof. 
First we prove a Shannon type lower bound on the rate distortion function of an arbitrary distribution P on the group. Let X be a random variable with values in G and distribution P, and let $\hat X$ be a random variable coupled with X such that the mean distortion $E[d(X,\hat X)]$ equals δ. Then
$$
\begin{aligned}
I(X;\hat X) &= D(X\|U \mid \hat X) - D(X\|U) \\
&= D(X*\hat X^{-1}\,\|\,U \mid \hat X) - D(X\|U) \\
&\ge D(X*\hat X^{-1}\,\|\,U) - D(X\|U).
\end{aligned}
\tag{7}
$$
Now $E[d(X,\hat X)] = E[d(X*\hat X^{-1}, e)]$ and
$$D(X*\hat X^{-1}\,\|\,U) \ge D(P_\beta\|U),$$
where $P_\beta$ is the distribution that minimizes divergence from U under the constraint $E[d(Y,e)] = \delta$ when Y has distribution $P_\beta$. The distribution $P_\beta$ is given by the density
$$\frac{dP_\beta}{dU}(g) = \frac{\exp\big(\beta\cdot d(g,e)\big)}{Z(\beta)},$$
where β is determined by the condition $\delta = Z'(\beta)/Z(\beta)$. Since $D(P_\beta\|U) = \beta\cdot\delta - \log Z(\beta)$, the bound (7) gives $R_P(\delta) \ge \beta\cdot\delta - \log Z(\beta) - D(P\|U)$.
If P is uniform then a joint distribution is obtained by choosing $\hat X$ uniformly distributed, and choosing Y distributed according to $P_\beta$ and independent of $\hat X$. Then $X = Y*\hat X$ is distributed according to $P_\beta * U = U$, and we have equality in (7). Hence the lower bound (7) is achievable for the uniform distribution, which proves the first part of the theorem and the left inequality in (4).
The joint distribution on $(X,\hat X)$ that achieves the rate distortion function when X has the uniform distribution defines a Markov kernel $\Psi: X \to \hat X$ that is invariant under translations in the group. For any distribution P, the joint distribution on $(X,\hat X)$ determined by P and Ψ gives an achievable pair of distortion and rate that lies on the rate distortion curve of the uniform distribution. This proves the right inequality in Equation (4). ☐
Figure 2. The rate distortion region of the uniform distribution on $SO(2)$ is shaded. The rate distortion function is the lower bounding curve. In the figure the rate is measured in nats. The critical distortion $d_{crit}$ equals 2, and the dashed line indicates $d_{\max} = 4$.
Example 12.
For the group $SO(2)$ the rate distortion function can be parametrized using the modified Bessel functions $I_j$, $j \in \mathbb{N}_0$. For $\beta \le 0$ the partition function is given by
$$
Z(\beta) = \int_G \exp\big(\beta\cdot d(g,e)\big)\, dU(g)
= \frac{1}{2\pi}\int_0^{2\pi} \exp\big(\beta\,(2 - 2\cos x)\big)\, dx
= \exp(2\beta)\cdot\frac{1}{\pi}\int_0^{\pi} \exp(-2\beta\cos x)\, dx
= \exp(2\beta)\cdot I_0(2\beta),
$$
using that $I_0$ is even. Hence $R_U^*(\beta) = \log Z(\beta) = 2\beta + \log I_0(2\beta)$. The distortion δ corresponding to β is given by
$$\delta = 2 + 2\,\frac{I_1(2\beta)}{I_0(2\beta)}$$
(since $I_1$ is odd, δ runs through $]0; 2[$ as β runs through $]-\infty; 0[$), and the corresponding rate is
$$R_U(\delta) = \beta\cdot\delta - \big(2\beta + \log I_0(2\beta)\big) = \beta\cdot\frac{2\, I_1(2\beta)}{I_0(2\beta)} - \log I_0(2\beta).$$
These joint values of distortion and rate can be plotted with β as parameter as illustrated in Figure 2.
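The parametric curve is easy to evaluate. A sketch (Python, assuming SciPy's `iv` for the modified Bessel functions), with $\beta \le 0$ as above:

```python
import numpy as np
from scipy.special import iv          # modified Bessel function I_v

# Parametric (distortion, rate) curve of the uniform distribution on SO(2).
beta = -np.linspace(1e-2, 10.0, 200)  # beta <= 0
ratio = iv(1, 2 * beta) / iv(0, 2 * beta)
delta = 2 + 2 * ratio                 # distortion; tends to d_crit = 2 as beta -> 0
rate = 2 * beta * ratio - np.log(iv(0, 2 * beta))   # rate in nats

print(delta[0], rate[0])    # close to (2, 0): zero rate at the critical distortion
print(delta[-1], rate[-1])  # small distortion at a rate of about 1.9 nats
```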
The minimal rate of the uniform distribution is achieved when X and $\hat X$ are independent. In this case the distortion is $E[d(X,\hat X)] = \int_G d(x,e)\, dU(x)$. This distortion level will be called the critical distortion and will be denoted $d_{crit}$. On the interval $[0; d_{crit}]$ the rate distortion function is decreasing, and the distortion rate function is the inverse $R_P^{-1}$ of the rate distortion function $R_P$ on this interval. The distortion rate function satisfies:
Theorem 13.
The distortion rate function of an arbitrary distribution P satisfies
$$R_U^{-1}(R) - f_2(d(P,U)) \le R_P^{-1}(R) \le R_U^{-1}(R) \quad \text{for all rates } R \ge 0$$
for some increasing continuous function $f_2$ satisfying $f_2(0) = 0$.
Proof. 
The right hand inequality follows from Theorem 11 because $R_U$ is decreasing on the interval $[0; d_{crit}]$. Let X be a random variable with distribution P and let Y be a random variable coupled with X. Let Z be a random variable with distribution U coupled with X such that $E[d(Z,X)] = d(P,U)$. The couplings between X and Y, and between X and Z, can be extended to a joint distribution on $(X,Y,Z)$ such that Y and Z are independent given X. For this joint distribution we have
$$I(Z;Y) \le I(X;Y)$$
and
$$E[d(Z,Y)] \le E[d(X,Y)] + f_2(d(P,U)).$$
We have to prove that
$$E[d(X,Y)] \ge R_U^{-1}\big(I(X;Y)\big) - f_2(d(P,U)),$$
but $I(Z;Y) \le I(X;Y)$ and $R_U^{-1}$ is decreasing, so it is sufficient to prove that
$$E[d(X,Y)] \ge R_U^{-1}\big(I(Z;Y)\big) - f_2(d(P,U)),$$
and this follows because $E[d(Z,Y)] \ge R_U^{-1}\big(I(Z;Y)\big)$. ☐

5. Convergence of convolutions

We shall prove that under certain conditions the n-fold convolutions $P^{*n}$ converge to the uniform distribution.
Example 14.
The function
$$f(x) = 1 + \sum_{n=1}^{\infty} a_n \cos(nx + \phi_n)$$
is the density of a probability distribution P on $SO(2)$ if the Fourier coefficients $a_n$ are sufficiently small. If $a_n$ and $b_n$ are the Fourier coefficients of P and Q, with phases $\phi_n$ and $\psi_n$, then the convolution $P*Q$ has density
$$
\begin{aligned}
\frac{1}{2\pi}\int_0^{2\pi} &\Big(1 + \sum_{n=1}^{\infty} a_n \cos\big(n(x-y) + \phi_n\big)\Big)\Big(1 + \sum_{n=1}^{\infty} b_n \cos(ny + \psi_n)\Big)\, dy \\
&= 1 + \frac{1}{2\pi}\sum_{n=1}^{\infty} a_n b_n \int_0^{2\pi} \cos\big(nx + \phi_n + \psi_n - ny\big)\cos(ny)\, dy \\
&= 1 + \frac{1}{2\pi}\sum_{n=1}^{\infty} a_n b_n \int_0^{2\pi} \big(\cos(nx + \phi_n + \psi_n)\cos(ny) + \sin(nx + \phi_n + \psi_n)\sin(ny)\big)\cos(ny)\, dy \\
&= 1 + \sum_{n=1}^{\infty} \frac{a_n b_n \cos(nx + \phi_n + \psi_n)}{2\pi}\int_0^{2\pi} \cos^2(ny)\, dy \\
&= 1 + \sum_{n=1}^{\infty} \frac{a_n b_n \cos(nx + \phi_n + \psi_n)}{2},
\end{aligned}
$$
where the second line uses the orthogonality of the trigonometric system and, termwise, the substitution $y \mapsto y - \psi_n/n$. Therefore the n-fold convolution has density
$$1 + \sum_{k=1}^{\infty} \frac{a_k^n \cos(kx + n\phi_k)}{2^{n-1}} = 1 + \sum_{k=1}^{\infty} \Big(\frac{a_k}{2}\Big)^{n} \cdot 2\cos(kx + n\phi_k).$$
Therefore each of the Fourier coefficients decreases exponentially in n.
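The coefficient formula can be verified numerically by comparing a discretized circular convolution, computed via the FFT, with the predicted density (a sketch with NumPy):

```python
import numpy as np

# For f = 1 + a cos(x) and g = 1 + b cos(x + psi), the convolution should be
# 1 + (a*b/2) cos(x + psi).
N = 4096
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
a, b, psi = 0.8, 0.6, 1.0
f = 1 + a * np.cos(x)
g = 1 + b * np.cos(x + psi)

# Circular convolution against dU = dx/(2*pi): an average, hence the 1/N.
h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real / N
print(np.max(np.abs(h - (1 + (a * b / 2) * np.cos(x + psi)))))  # ~1e-15
```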
Clearly, if P is uniform on a proper subgroup then convergence to the uniform distribution on the whole group does not hold. In several papers on this topic [13,18, and references therein] it is claimed and “proved” that if convergence does not hold then the support of P is contained in a coset of a proper normal subgroup. The proofs, however, contain errors that seem to have been copied from paper to paper. To avoid this problem and make this paper more self-contained we shall reformulate and reprove some already known theorems.
In the theory of finite Markov chains it is well-known that there exists an invariant probability measure. Certain Markov chains exhibit periodic behavior where a certain distribution is repeated after a number of transitions. All distributions in such a cycle will lie at a fixed distance from any (fixed) measure, where the distance is given by information divergence or total variation (or any other Csiszár f-divergence). It is also well-known that finite Markov chains without periodic behavior are convergent. In general a Markov chain will converge to a “cyclic” behavior as stated in the following theorem [19].
Theorem 15.
Let Φ be a transition operator on a state space A with an invariant probability measure $Q_{in}$. If $D(S\|Q_{in}) < \infty$ then there exists a probability measure Q such that $D(\Phi^n S \,\|\, \Phi^n Q) \to 0$ and $D(\Phi^n Q \,\|\, Q_{in})$ is constant.
We shall also use the following proposition that has a purely computational proof [20].
Proposition 16.
Let $P_x$, $x \in X$, be distributions on a space Y, let Q be a probability distribution on X, and let R be a distribution on Y. Then
$$\int D(P_x\|R)\, dQ(x) = D\Big(\int P_x\, dQ(x)\,\Big\|\, R\Big) + \int D\Big(P_x\,\Big\|\,\int P_t\, dQ(t)\Big)\, dQ(x).$$
We denote the set of probability measures on G by $M_+^1(G)$.
Theorem 17.
Let P be a distribution on a compact group G and assume that the support of P is not contained in any coset of a proper subgroup of G. If $D(S\|U)$ is finite then $D(P^{*n}*S\,\|\,U) \to 0$ for $n \to \infty$.
Proof. 
Let $\Psi: G \to M_+^1(G)$ denote the Markov kernel $\Psi(g) = P*g$. Then $P^{*n}*S = \Psi^n(S)$, and U is an invariant measure of Ψ. Thus by Theorem 15 there exists a probability measure Q on G such that $D(\Psi^n(S)\,\|\,\Psi^n(Q)) \to 0$ for $n \to \infty$ and such that $D(\Psi^n(Q))$ is constant. We shall prove that $Q = U$.
First we note that $D(Q) = D(P*Q)$, and that Proposition 16 together with translation invariance of U gives
$$D(P*Q) = \int_G \big(D(g*Q) - D(g*Q\,\|\,P*Q)\big)\, dP(g) = D(Q) - \int_G D(g*Q\,\|\,P*Q)\, dP(g).$$
Therefore $g*Q = P*Q$ for P-almost every $g \in G$. Thus there exists at least one $g_0 \in G$ such that $g_0*Q = P*Q$. Then $Q = \tilde P * Q$ where $\tilde P = g_0^{-1}*P$.
Let $\tilde\Psi: G \to M_+^1(G)$ denote the Markov kernel $g \mapsto \tilde P * g$. Put
$$P_n = \frac{1}{n}\sum_{i=1}^{n} \tilde P^{*i} = \frac{1}{n}\sum_{i=1}^{n} \tilde\Psi^{i-1}(\tilde P).$$
According to [19] this ergodic mean will converge to a distribution T such that $\tilde\Psi(T) = T$, i.e. $\tilde P * T = T$. Hence we also have $T * T = T$, i.e. T is idempotent and therefore T is the Haar probability measure of a closed subgroup of G. Since $\tilde P * T = T$, the support of $\tilde P$ is contained in the support of T, and if the support of T were a proper subgroup F then the support of P would be contained in the coset $g_0 * F$, contrary to our assumption. So the support of T must be all of G. We also get $Q = T * Q$, which together with Theorem 9 implies that $Q = U$. ☐
By choosing S = P we get the following corollary.
Corollary 18.
Let P be a probability measure on the compact group G with Haar probability measure U. Assume that the support of P is not contained in any coset of a proper subgroup of G and that $D(P\|U)$ is finite. Then $D(P^{*n}\|U) \to 0$ for $n \to \infty$.
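Corollary 18 can be illustrated on a finite group. The following sketch takes $G = \mathbb{Z}_6$ and P uniform on $\{0, 1\}$, a support contained in no coset of a proper subgroup, and shows $D(P^{*n}\|U)$ decreasing to 0 — in fact roughly geometrically, anticipating Section 6:

```python
import numpy as np

# D(P^{*n} || U) on Z_6 for P uniform on {0, 1}.
m = 6
P = np.zeros(m)
P[0] = P[1] = 0.5                 # support {0,1}: in no coset of a proper subgroup
U = np.full(m, 1 / m)

def conv(p, q):
    """Convolution on Z_m: (p*q)[k] = sum_j p[j] q[(k - j) mod m]."""
    return np.array([np.sum(p * q[(k - np.arange(m)) % m]) for k in range(m)])

def D(p):
    mask = p > 0                  # convention 0 log 0 = 0
    return np.sum(p[mask] * np.log(p[mask] / U[mask]))

Q = P.copy()
for n in range(1, 13):
    print(n, D(Q))                # decreases to 0, roughly geometrically
    Q = conv(Q, P)
```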
Corollary 18 together with Theorem 11 implies the following result.
Corollary 19.
Let P be a probability measure on the compact group G with Haar probability measure U. Assume that the support of P is not contained in any coset of a proper subgroup of G and that $D(P\|U)$ is finite. Then the rate distortion function of $P^{*n}$ converges uniformly to the rate distortion function of the uniform distribution.
We also get weak versions of these results.
Corollary 20.
Let P be a probability measure on the compact group G with Haar probability measure U. Assume that the support of P is not contained in any coset of a proper subgroup of G. Then $P^{*n}$ converges to U in the weak topology, i.e. $d(P^{*n}, U) \to 0$ for $n \to \infty$.
Proof. 
If we take $S = P_\beta$ then $D(P_\beta)$ is finite and $D(P^{*n}*P_\beta\,\|\,U) \to 0$ for $n \to \infty$. We have
$$d(P^{*n}*P_\beta,\, U) \le d_{\max} \cdot \|P^{*n}*P_\beta - U\| \le d_{\max} \cdot \big(2\, D(P^{*n}*P_\beta\,\|\,U)\big)^{1/2},$$
implying that $d(P^{*n}*P_\beta,\, U) \to 0$ for $n \to \infty$. Now, by Theorem 6,
$$d(P^{*n}, U) \le d(P^{*n}*P_\beta,\, U) + f_2\big(d(P^{*n}*P_\beta,\, P^{*n})\big) \le d(P^{*n}*P_\beta,\, U) + f_2(\delta),$$
where $\delta = E[d(Y,e)]$ is the distortion level of $P_\beta$. Therefore
$$\limsup_{n \to \infty}\, d(P^{*n}, U) \le f_2(\delta)$$
for all β. Since δ can be made arbitrarily small by choice of β, this implies that
$$\lim_{n \to \infty} d(P^{*n}, U) = 0.$$
 ☐
Corollary 21.
Let P be a probability measure on the compact group G with Haar probability measure U. Assume that the support of P is not contained in any coset of a proper subgroup of G and that $D(P\|U)$ is finite. Then $R_{P^{*n}}$ converges to $R_U$ pointwise on the interval $]0; d_{\max}]$ for $n \to \infty$.
Proof. 
Corollary 20 together with Theorem 13 implies uniform convergence of the distortion rate functions for distortions less than $d_{crit}$. This implies pointwise convergence of the rate distortion functions on $]0; d_{crit}[$ because rate distortion functions are convex. The same argument works on the interval $]d_{crit}; d_{\max}]$. Pointwise convergence at $d_{crit}$ must also hold because of continuity. ☐

6. Rate of convergence

Normally the rate of convergence will be exponential. If the density is lower bounded this is well-known; we give a simplified proof.
Lemma 22.
Let P be a probability distribution on the compact group G with Haar probability measure U. If $dP/dU \ge c > 0$ and $D(P)$ is finite, then
$$D(P^{*n}) \le (1-c)^{n-1}\, D(P).$$
Proof. 
First we write
$$P = (1-c)\cdot S + c\cdot U,$$
where S denotes the probability measure
$$S = \frac{P - c\,U}{1-c}.$$
For any distribution Q on G we have
$$D(Q*P) = D\big((1-c)\cdot Q*S + c\cdot Q*U\big) \le (1-c)\cdot D(Q*S) + c\cdot D(Q*U) \le (1-c)\cdot D(Q) + c\cdot D(U) = (1-c)\cdot D(Q).$$
Here we have used convexity of divergence and the fact that convolution cannot increase the divergence from U. Applying the inequality repeatedly with $Q = P, P^{*2}, \dots, P^{*(n-1)}$ proves the lemma. ☐
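A numerical check of the bound on $\mathbb{Z}_6$, where the condition $dP/dU \ge c$ simply reads $\min_g m\,P(g) \ge c$ (a sketch; the chosen P is arbitrary):

```python
import numpy as np

# Check D(P^{*n}) <= (1-c)^(n-1) * D(P) on Z_6.
m = 6
P = np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.1])
U = np.full(m, 1 / m)
c = np.min(P / U)                    # here c = 0.6

def conv(p, q):
    return np.array([np.sum(p * q[(k - np.arange(m)) % m]) for k in range(m)])

D = lambda p: np.sum(p * np.log(p / U))

Q = P.copy()
for n in range(1, 8):
    print(n, D(Q), (1 - c) ** (n - 1) * D(P))   # D(P^{*n}) stays below the bound
    Q = conv(Q, P)
```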
If a distribution P has support in a proper subgroup F then
$$D(P) \ge D(U_F) = \log\,[G:F] \ge \log 2 = 1 \text{ bit}.$$
Therefore $D(P) < 1$ bit implies that P cannot be supported by a proper subgroup, but it implies more.
Proposition 23.
If P is a distribution on the compact group G and $D(P) < 1$ bit, then $d(P*P)/dU$ is lower bounded by a positive constant.
Proof. 
The condition $D(P) < 1$ bit implies that $U\big(\tfrac{dP}{dU} > 0\big) > 1/2$, because $D(P) \ge -\log U\big(\tfrac{dP}{dU} > 0\big)$. Hence there exists $\varepsilon > 0$ such that $U\big(\tfrac{dP}{dU} > \varepsilon\big) > 1/2$. We have
$$\frac{d(P*P)}{dU}(y) = \int_G \frac{dP}{dU}(x)\cdot\frac{dP}{dU}(x^{-1}*y)\, dU(x) \ge \varepsilon^2 \cdot U\Big(\tfrac{dP}{dU}(x) > \varepsilon \text{ and } \tfrac{dP}{dU}(x^{-1}*y) > \varepsilon\Big).$$
Using the inclusion-exclusion inequalities and the fact that $x \mapsto x^{-1}*y$ preserves U, we get
$$U\Big(\tfrac{dP}{dU}(x) > \varepsilon \text{ and } \tfrac{dP}{dU}(x^{-1}*y) > \varepsilon\Big) \ge U\Big(\tfrac{dP}{dU}(x) > \varepsilon\Big) + U\Big(\tfrac{dP}{dU}(x^{-1}*y) > \varepsilon\Big) - 1 = 2\cdot U\Big(\tfrac{dP}{dU} > \varepsilon\Big) - 1.$$
Hence
$$\frac{d(P*P)}{dU}(y) \ge 2\varepsilon^2\Big(U\Big(\tfrac{dP}{dU} > \varepsilon\Big) - \frac{1}{2}\Big) > 0$$
for all $y \in G$. ☐
Combining Theorem 17, Lemma 22, and Proposition 23 we get the following result.
Theorem 24.
Let P be a probability measure on a compact group G with Haar probability measure U. If the support of P is not contained in any coset of a proper subgroup of G and $D(P\|U)$ is finite, then the rate of convergence of $D(P^{*n}\|U)$ to zero is exponential.
As a corollary we get the following result that was first proved by Kloss [21] for total variation.
Corollary 25.
Let P be a probability measure on the compact group G with Haar probability measure U. If the support of P is not contained in any coset of a proper subgroup of G and $D(P\|U)$ is finite, then $P^{*n}$ converges to U in total variation and the rate of convergence is exponential.
Proof. 
This follows directly from Pinsker's inequality [22,23],
$$\frac{1}{2}\,\|P^{*n} - U\|^2 \le D(P^{*n}\|U). \;☐$$
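A quick numerical illustration of Pinsker's inequality, with $\|\cdot\|$ the total variation norm $\sum_i |p_i - u_i|$ (a sketch with NumPy):

```python
import numpy as np

# Pinsker's inequality: (1/2) * ||P - U||^2 <= D(P || U).
rng = np.random.default_rng(1)
U = np.full(6, 1 / 6)
for _ in range(5):
    P = rng.random(6)
    P /= P.sum()
    tv = np.sum(np.abs(P - U))        # total variation norm
    D = np.sum(P * np.log(P / U))
    print(0.5 * tv ** 2 <= D)         # True in every trial
```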
Corollary 26.
Let P be a probability measure on the compact group G with Haar probability measure U. If the support of P is not contained in any coset of a proper subgroup of G and $D(P\|U)$ is finite, then the density
$$\frac{dP^{*n}}{dU}$$
converges to 1 pointwise U-almost surely as n tends to infinity.
Proof. 
The variation norm can be written as
$$\|P^{*n} - U\| = \int_G \Big|\frac{dP^{*n}}{dU} - 1\Big|\, dU.$$
Thus, by Markov's inequality,
$$U\Big(\Big|\frac{dP^{*n}}{dU} - 1\Big| \ge \varepsilon\Big) \le \frac{\|P^{*n} - U\|}{\varepsilon}.$$
The result follows from the exponential rate of convergence of $P^{*n}$ to U in total variation combined with the Borel-Cantelli lemma. ☐

7. Discussion

In this paper we have assumed the existence of the Haar measure by referring to the literature. With the Haar measure we have then proved convergence of convolutions using Markov chain techniques. The Markov chain approach can also be used to prove the existence of the Haar measure, by simply referring to the fact that a homogeneous Markov chain on a compact set has an invariant distribution. The problem with this approach is that the proof that a Markov chain on a compact set has an invariant distribution is no easier than the proof of the existence of the Haar measure, and it is less well known.
We have shown that the Haar probability measure maximizes the rate distortion function at any distortion level. The usual proofs of the existence of the Haar measure use a kind of covering argument that is very close to the techniques found in rate distortion theory. There is a chance that one can get an information theoretic proof of the existence of the Haar measure. It seems obvious to use concavity arguments as one would do for Shannon entropy, but, as proved by Ahlswede [24], the rate distortion function at a given distortion level is not a concave function of the underlying distribution, so some more refined technique is needed.
As noted in the introduction, for any algebraic structure A the group Aut(A) can be considered as a symmetry group, and if it has a compact subgroup the results of this paper apply. It would be interesting to extend the information theoretic approach to the algebraic object A itself, but in general there is no known equivalent of the Haar measure for other algebraic structures. Algebraic structures are used extensively in channel coding theory and cryptography, so although the theory may become more involved, extensions of the results presented in this paper are definitely worthwhile.

Acknowledgement

The author wants to thank Ioannis Kontoyiannis for stimulating discussions.

References and Notes

  1. Jaynes, E.T. Information Theory and Statistical Mechanics, I and II. Physical Review 1957, 106, 620–630; 108, 171–190. [Google Scholar]
  2. Topsøe, F. Game Theoretical Equilibrium, Maximum Entropy and Minimum Information Discrimination. In Maximum Entropy and Bayesian Methods; Mohammad-Djafari, A., Demoments, G., Eds.; Kluwer Academic Publishers: Dordrecht, Boston, London, 1993; pp. 15–23. [Google Scholar]
  3. Jaynes, E.T. Clearing up mysteries – The original goal. In Maximum Entropy and Bayesian Methods; Skilling, J., Ed.; Kluwer: Dordrecht, 1989. [Google Scholar]
  4. Kapur, J.N. Maximum Entropy Models in Science and Engineering, revised Ed.; Wiley: New York, 1993. [Google Scholar]
  5. Grünwald, P.D.; Dawid, A.P. Game Theory, Maximum Entropy, Minimum Discrepancy, and Robust Bayesian Decision Theory. Annals of Statistics 2004, 32, 1367–1433. [Google Scholar]
  6. Topsøe, F. Information Theoretical Optimization Techniques. Kybernetika 1979, 15, 8–27. [Google Scholar]
  7. Harremoës, P.; Topsøe, F. Maximum Entropy Fundamentals. Entropy 2001, 3, 191–226. [Google Scholar]
  8. Jaynes, E.T. Probability Theory - The Logic of Science; Cambridge University Press: Cambridge, 2003. [Google Scholar]
  9. Csiszár, I. Sanov Property, Generalized I-Projection and a Conditional Limit Theorem. Ann. Probab. 1984, 12, 768–793. [Google Scholar] [CrossRef]
  10. Stromberg, K. Probabilities on compact groups. Trans. Amer. Math. Soc. 1960, 94, 295–309. [Google Scholar] [CrossRef]
  11. Csiszár, I. A note on limiting distributions on topological groups. Magyar Tud. Akad. Mat. Kutató Int. Közl. 1964, 9, 595–598. [Google Scholar]
  12. Schlosman, S. Limit theorems of probability theory for compact groups. Theory Probab. Appl. 1980, 25, 604–609. [Google Scholar] [CrossRef]
  13. Johnson, O. Information Theory and the Central Limit Theorem; Imperial College Press: London, 2004. [Google Scholar]
  14. Haar, A. Der Massbegriff in der Theorie der kontinuierlichen Gruppen. Ann. Math. 1933, 34. [Google Scholar] [CrossRef]
  15. Halmos, P. Measure Theory; D. van Nostrand and Co., 1950. [Google Scholar]
  16. Conway, J. A Course in Functional Analysis; Springer-Verlag: New York, 1990. [Google Scholar]
  17. Vogel, P.H.A. On the Rate Distortion Function of Sources with Incomplete Statistics. IEEE Trans. Inform. Theory 1992, 38, 131–136. [Google Scholar] [CrossRef]
  18. Johnson, O.T.; Suhov, Y.M. Entropy and convergence on compact groups. J. Theoret. Probab. 2000, 13, 843–857. [Google Scholar] [CrossRef]
  19. Harremoës, P.; Holst, K.K. Convergence of Markov Chains in Information Divergence. Journal of Theoretical Probability 2009, 22, 186–202. [Google Scholar] [CrossRef]
  20. Topsøe, F. An Information Theoretical Identity and a problem involving Capacity. Studia Scientiarum Mathematicarum Hungarica 1967, 2, 291–292. [Google Scholar]
  21. Kloss, B. Probability distributions on bicompact topological groups. Theory Probab. Appl. 1959, 4, 237–270. [Google Scholar] [CrossRef]
  22. Csiszár, I. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar. 1967, 2, 299–318. [Google Scholar]
  23. Fedotov, A.; Harremoës, P.; Topsøe, F. Refinements of Pinsker’s Inequality. IEEE Trans. Inform. Theory 2003, 49, 1491–1498. [Google Scholar] [CrossRef]
  24. Ahlswede, R.F. Extremal Properties of Rate-Distortion Functions. IEEE Trans. Inform. Theory 1990, 36, 166–171. [Google Scholar] [CrossRef]
