Article

On the Fisher Metric of Conditional Probability Polytopes

1 Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, Leipzig 04103, Germany
2 Department of Mathematics and Computer Science, Leipzig University, PF 10 09 20, Leipzig 04009, Germany
3 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
* Author to whom correspondence should be addressed.
Entropy 2014, 16(6), 3207-3233; https://doi.org/10.3390/e16063207
Submission received: 31 March 2014 / Revised: 18 May 2014 / Accepted: 29 May 2014 / Published: 6 June 2014
(This article belongs to the Special Issue Information Geometry)

Abstract

We consider three different approaches to define natural Riemannian metrics on polytopes of stochastic matrices. First, we define a natural class of stochastic maps between these polytopes and give a metric characterization of Chentsov type in terms of invariance with respect to these maps. Second, we consider the Fisher metric defined on arbitrary polytopes through their embeddings as exponential families in the probability simplex. We show that these metrics can also be characterized by an invariance principle with respect to morphisms of exponential families. Third, we consider the Fisher metric resulting from embedding the polytope of stochastic matrices in a simplex of joint distributions by specifying a marginal distribution. All three approaches result in slight variations of products of Fisher metrics. This is consistent with the nature of polytopes of stochastic matrices, which are Cartesian products of probability simplices. The first approach yields a scaled product of Fisher metrics; the second, a product of Fisher metrics; and the third, a product of Fisher metrics scaled by the marginal distribution.


1. Introduction

The Riemannian structure of a function’s domain has a crucial impact on the performance of gradient optimization methods, especially in the presence of plateaus and local maxima. The natural gradient [1] gives the steepest increase direction of functions on a Riemannian space. For example, artificial neural networks can often be trained by following some function’s gradient on a space of probabilities. In this context, it has been observed that following the natural gradient with respect to the Fisher information metric, instead of the Euclidean metric, can significantly alleviate the plateau problem [1,2]. The Fisher information metric, which is also called Shahshahani metric [3] in biological contexts, is broadly recognized as the natural metric of probability spaces. An important argument was given by Chentsov [4], who showed that the Fisher information metric is the only metric on probability spaces for which certain natural statistical embeddings, called Markov morphisms, are isometries. More generally, Chentsov’s theorem characterizes the Fisher metric and α-connections of statistical manifolds uniquely (up to a multiplicative constant) by requiring invariance with respect to Markov morphisms. Campbell [5] gave another proof that characterizes invariant metrics on the set of non-normalized positive measures, which restrict to the Fisher metric in the case of probability measures (up to a multiplicative constant). In this paper, we explore ways of defining distinguished Riemannian metrics on spaces of stochastic matrices.

In learning theory, when modeling the policy of a system, it is often preferred to consider stochastic matrices instead of joint probability distributions. For example, in robotics applications, policies are optimized over a parametric set of stochastic matrices by following the gradient of a reward function [6,7]. The set of stochastic matrices can be parametrized in many ways, e.g., in terms of feedforward neural networks, Boltzmann machines [8] or projections of exponential families [9]. The information geometry of policy models plays an important role in these applications and has been studied by Kakade [2], Peters and co-workers [10–12], and Bagnell and Schneider [13], among others. A stochastic matrix is a tuple of probability distributions, and therefore, the space of stochastic matrices is a Cartesian product of probability simplices. Accordingly, in applications, usually a product metric is considered, with the usual Fisher metric on each factor. On the other hand, Lebanon [14] takes an axiomatic approach, following the ideas of Chentsov and Campbell, and characterizes a class of invariant metrics of positive matrices that restricts to the product of Fisher metrics in the case of stochastic matrices. We will consider three different approaches discussed in the following.

In the first part, we take another look at Lebanon’s approach for characterizing a distinguished metric on polytopes of stochastic matrices. However, since the maps considered by Lebanon do not map stochastic matrices to stochastic matrices, we will use different maps. We show that the product of Fisher metrics can be characterized by an invariance principle with respect to natural maps between stochastic matrices.

In the second part, we consider an approach that allows us to define Riemannian structures on arbitrary polytopes. Any polytope can be identified with an exponential family by using the coordinates of the polytope vertices as observables. The inverse of the moment map then defines an embedding of the polytope in a probability simplex. This embedding can be used to pull back geometric structures from the probability simplex to the polytope, including Riemannian metrics, affine connections, divergences, etc. This approach has been considered in [9] as a way to define low-dimensional families of conditional probability distributions. More general embeddings can be defined by identifying each exponential family with a point configuration, B, together with a weight function, ν. Given B and ν, the corresponding exponential family defines geometric structures on the set (conv B)°, which is the relative interior of the convex support of the exponential family. Moreover, we can define natural morphisms between weighted point configurations as surjective maps between the point sets, which are compatible with the weight functions. As it turns out, the Fisher metric on (conv B)° can be characterized by invariance under these maps.

In the third part, we return to stochastic matrices. We study natural embeddings of conditional distributions in probability simplices as joint distributions with a fixed marginal. These embeddings define a Fisher metric equal to a weighted product of Fisher metrics. This result corresponds to the definitions commonly used in robotics applications.

All three approaches give very similar results. In all cases, the identified metric is a product metric. This is a sensible result, since the set of k × m stochastic matrices is a Cartesian product of probability simplices $\Delta_{m-1} \times \cdots \times \Delta_{m-1} = \Delta_{m-1}^k$, which suggests using the product metric of the Fisher metrics defined on the factor simplices, $\Delta_{m-1}$. Indeed, this is the result obtained from our second approach. The first approach yields the same result with an additional scaling factor of 1/k. The two approaches differ only when stochastic matrices of different sizes are compared. The third approach yields a product of Fisher metrics scaled by the marginal distribution that defines the embedding.

Which metric to use depends on the concrete problem and whether a natural marginal distribution is defined and known. In Section 7, we do a case study using a reward function that is given as an expectation value over a joint distribution. In this simple example, the weighted product metric gives the best asymptotic rate of convergence, under the assumption that the weights are optimally chosen. In Section 8, we sum up our findings.

The paper is organized as follows. Section 2 contains basic definitions around the Fisher metric and concepts of differential geometry. In Section 3, we discuss the theorems of Chentsov, Campbell and Lebanon, which characterize natural geometric structures on the probability simplex, on the set of positive measures and on the cone of positive matrices, respectively. In Section 4, we study metrics on polytopes of stochastic matrices, which are invariant under natural embeddings. In Section 5, we define a Riemannian structure for polytopes, which generalizes the Fisher information metric of probability simplices and conditional models in a natural way. In Section 6, we study a class of weighted product metrics. In Section 7, we study the gradient flow with respect to an expectation value. Section 8 contains concluding remarks. In Appendix A, we investigate restrictions on the parameters of the metrics characterized in Sections 3 and 4 that make them positive definite. Appendix B contains the proofs of the results from Section 4.

2. Preliminaries

We will consider the simplex of probability distributions on [m] := {1,…, m}, m ≥ 2, which is given by $\Delta_{m-1} := \{(p_i)_i \in \mathbb{R}^m : p_i \geq 0,\ \sum_i p_i = 1\}$. The relative interior of $\Delta_{m-1}$ consists of all strictly positive probability distributions on [m], and will be denoted $\Delta_{m-1}^\circ$. This is a subset of $\mathbb{R}^m_+$, the cone of strictly positive vectors. The set of k × m row-stochastic matrices is given by $\Delta_{m-1}^k := \{(K_{ij})_{ij} \in \mathbb{R}^{k\times m} : (K_{ij})_j \in \Delta_{m-1} \text{ for all } i \in [k]\}$ and is equal to the Cartesian product $\times_{i\in[k]}\Delta_{m-1}$. The relative interior $(\Delta_{m-1}^k)^\circ$ is a subset of $\mathbb{R}^{k\times m}_+$, the cone of strictly positive matrices.

Given two random variables X and Y taking values in the finite sets [k] and [m], respectively, the conditional probability distribution of Y given X is the stochastic matrix K = (P(y|x))x∈[k],y∈[m] with rows (P(y|x))y∈[m] ∈ ∆m−1 for all x ∈ [k]. Therefore, the polytope of stochastic matrices $\Delta_{m-1}^k$ is called a conditional polytope.
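As a small numerical illustration (our own sketch, not part of the original text; the function name is ours), the following Python snippet recovers the stochastic matrix of conditional distributions from a joint distribution by normalizing its rows.

```python
import numpy as np

def conditional_from_joint(P):
    """Rows of K = (P(y|x)) obtained by normalizing the rows of a joint distribution P(x, y)."""
    P = np.asarray(P, dtype=float)
    return P / P.sum(axis=1, keepdims=True)

P_joint = np.array([[0.1, 0.2],     # joint distribution of (X, Y) on [2] x [2]
                    [0.3, 0.4]])
K = conditional_from_joint(P_joint)
print(K, K.sum(axis=1))             # K lies in the conditional polytope: every row sums to 1
```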

The tangent space of $\mathbb{R}^n_+$ at a point $p \in \mathbb{R}^n_+$, denoted by $T_p\mathbb{R}^n_+$, is the real vector space spanned by the vectors $\partial_1,\dots,\partial_n$ of partial derivatives with respect to the n components. The tangent space of $\Delta_{n-1}^\circ$ at a point $p \in \Delta_{n-1}^\circ \subseteq \mathbb{R}^n_+$ is the subspace $T_p\Delta_{n-1}^\circ \subseteq T_p\mathbb{R}^n_+$ consisting of the vectors:

$$u = \sum_i u_i \partial_i \in T_p\mathbb{R}^n_+ \quad \text{with} \quad \sum_i u_i = 0.$$

The Fisher metric on the positive probability simplex $\Delta_{n-1}^\circ$ is the Riemannian metric given by:

$$g^{(n)}_p(u, v) = \sum_{i=1}^n \frac{u_i v_i}{p_i}, \quad \text{for all } u, v \in T_p\Delta_{n-1}^\circ.$$

The same formula (2) also defines a Riemannian metric on $\mathbb{R}^n_+$, which we will denote by the same symbol. This, however, is not the only way in which the Fisher metric can be extended from $\Delta_{n-1}^\circ$ to $\mathbb{R}^n_+$. We will discuss other extensions in the next section (see Campbell's theorem, Theorem 3).
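The following minimal sketch (our own illustration, not from the original text) evaluates Equation (2) numerically for tangent vectors whose coordinates sum to zero.

```python
import numpy as np

def fisher_metric_simplex(p, u, v):
    """Fisher metric g_p(u, v) = sum_i u_i v_i / p_i on the open simplex."""
    p, u, v = map(np.asarray, (p, u, v))
    assert np.all(p > 0) and np.isclose(p.sum(), 1.0)
    # Tangent vectors of the simplex have coordinates summing to zero.
    assert np.isclose(u.sum(), 0.0) and np.isclose(v.sum(), 0.0)
    return float(np.sum(u * v / p))

p = np.array([0.5, 0.3, 0.2])
u = np.array([0.1, -0.05, -0.05])
print(fisher_metric_simplex(p, u, u))   # squared Fisher length of u at p
```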

Consider a smoothly parametrized family of probability distributions $\mathcal{M} = \{(p(x;\theta))_{x\in[n]} : \theta \in \Omega\} \subseteq \Delta_{n-1}^\circ$, where $\Omega \subseteq \mathbb{R}^d$ is open. Then, $g^{(n)}$ induces a Riemannian metric on $\mathcal{M}$. Denote by $\partial_{\theta_i} = \frac{\partial}{\partial\theta_i}$ the tangent vector corresponding to the partial derivative with respect to $\theta_i$, for all i ∈ [d]. Then, the Fisher matrix has coordinates:

$$g_\theta(\partial_{\theta_i}, \partial_{\theta_j}) = \sum_{x=1}^{n} p(x;\theta)\,\frac{\partial \log p(x;\theta)}{\partial\theta_i}\,\frac{\partial \log p(x;\theta)}{\partial\theta_j}, \quad \text{for all } i, j \in [d].$$

Here, it is not necessary to assume that the parameters $\theta_i$ are independent. In particular, the dimension of $\mathcal{M}$ may be smaller than d, in which case the matrix is not positive definite. If the map $\Omega \to \mathcal{M},\ \theta \mapsto p(\cdot;\theta)$ is an embedding (i.e., a smooth injective map that is a diffeomorphism onto its image), then $g_\theta$ defines a Riemannian metric on $\mathcal{M}$, which corresponds to the pull-back of $g^{(n)}$.

Consider an embedding $f: \varepsilon \to \varepsilon'$. The pull-back of a metric $g'$ on $\varepsilon'$ through f is defined as:

$$(f^*g')_p(u, v) := g'_{f(p)}(f_*u, f_*v), \quad \text{for all } u, v \in T_p\varepsilon,$$

where $f_*$ denotes the push-forward of $T_p\varepsilon$ through f, which in coordinates is given by:

$$f_*: T_p\varepsilon \to T_{f(p)}\varepsilon';\quad \sum_i u_i \partial_{\theta_i} \mapsto \sum_j \sum_i u_i \frac{\partial f_j(p)}{\partial\theta_i}\,\partial_{\theta'_j},$$

where $\{\partial_{\theta_i}\}_i$ spans $T_p\varepsilon$ and $\{\partial_{\theta'_j}\}_j$ spans $T_{f(p)}\varepsilon'$.

An embedding $f: \varepsilon \to \varepsilon'$ of two Riemannian manifolds $(\varepsilon, g)$ and $(\varepsilon', g')$ is an isometry iff:

$$g_p(u, v) = (f^*g')_p(u, v), \quad \text{for all } p \in \varepsilon \text{ and } u, v \in T_p\varepsilon.$$

In this case, we say that the metric g is invariant with respect to f (and g′).

3. The Results of Campbell and Lebanon

One of the theoretical motivations for using the Fisher metric is provided by Chentsov's characterization [4], which states that the Fisher metric is uniquely specified, up to a multiplicative constant, by an invariance principle under a class of stochastic maps, called Markov morphisms. Later, Campbell [5] considered the characterization problem on the space $\mathbb{R}^n_+$ instead of $\Delta_{n-1}^\circ$. This simplifies the computations, since $\mathbb{R}^n_+$ has a more symmetric parametrization.

Definition 1. Let 2 ≤ m ≤ n. A (row) stochastic partition matrix (or just row-partition matrix) is a matrix $Q \in \mathbb{R}^{m\times n}$ of non-negative entries, which satisfies $\sum_{j\in A_{i'}} Q_{ij} = \delta_{ii'}$ for an m-block partition $\{A_1,\dots, A_m\}$ of [n]. The linear map defined by:

$$\mathbb{R}^m_+ \to \mathbb{R}^n_+;\quad p \mapsto pQ$$

is called a congruent embedding by a Markov mapping of $\mathbb{R}^m_+$ to $\mathbb{R}^n_+$ or just a Markov map, for short.

An example of a 3 × 5 row-partition matrix is:

$$Q = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/3 & 0 & 2/3 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

Markov maps preserve the 1-norm and restrict to embeddings $\Delta_{m-1} \to \Delta_{n-1}$.
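As an illustration (ours, using the matrix of Equation (8)), the following sketch applies the Markov map p ↦ pQ to a distribution and checks that the 1-norm is preserved.

```python
import numpy as np

# Row-partition matrix from Equation (8); partition blocks A_1 = {1,3}, A_2 = {2,4}, A_3 = {5}.
Q = np.array([[1/2, 0, 1/2, 0, 0],
              [0, 1/3, 0, 2/3, 0],
              [0, 0, 0, 0, 1.0]])

def markov_map(p, Q):
    """Congruent embedding by a Markov mapping: p -> p Q."""
    return np.asarray(p) @ Q

p = np.array([0.2, 0.5, 0.3])            # a point of the simplex Delta_2
q = markov_map(p, Q)
print(q, q.sum())                        # the image lies in Delta_4; the 1-norm is preserved
```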

Theorem 2 (Chentsov’s theorem.).

  • Let $g^{(m)}$ be a Riemannian metric on $\Delta_{m-1}^\circ$ for m ∈ {2, 3,…}. Let this sequence of metrics have the property that every congruent embedding by a Markov mapping is an isometry. Then, there is a constant C > 0 that satisfies:

    $$g^{(m)}_p(u, v) = C\sum_i \frac{u_i v_i}{p_i}.$$

  • Conversely, for any C > 0, the metrics given by Equation (9) define a sequence of Riemannian metrics under which every congruent embedding by a Markov mapping is an isometry.

The main result in Campbell’s work [5] is the following variant of Chentsov’s theorem.

Theorem 3 (Campbell’s theorem.).

  • Let $g^{(m)}$ be a Riemannian metric on $\mathbb{R}^m_+$ for m ∈ {2,3,…}. Let this sequence of metrics have the property that every embedding by a Markov mapping is an isometry. Then:

    $$g^{(m)}_p(\partial_i, \partial_j) = A(|p|) + \delta_{ij}\,C(|p|)\,\frac{|p|}{p_i},$$

    where $|p| = \sum_{i=1}^m p_i$, $\delta_{ij}$ is the Kronecker delta, and A and C are $C^\infty$ functions on $\mathbb{R}_+$ satisfying C(α) > 0 and A(α) + C(α) > 0 for all α > 0.

  • Conversely, if A and C are $C^\infty$ functions on $\mathbb{R}_+$ satisfying C(α) > 0, A(α) + C(α) > 0 for all α > 0, then Equation (10) defines a sequence of Riemannian metrics under which every embedding by a Markov mapping is an isometry.

The metrics from Campbell's theorem also define metrics on the probability simplices $\Delta_{m-1}^\circ$ for m = 2,3,…. Since the tangent vectors $v = \sum_i v_i\partial_i \in T_p\Delta_{m-1}^\circ$ satisfy $\sum_i v_i = 0$, for any two vectors $u, v \in T_p\Delta_{m-1}^\circ$ we also have $\sum_{ij} A\,u_i v_j = 0$ for any A. In this case, the choice of A is immaterial, and the metric becomes Chentsov's metric.

Remark 4. Observe that Chentsov's theorem is not a direct implication of Campbell's theorem. However, it can be deduced from it by the following arguments. Suppose that we have a family of Riemannian simplices $(\Delta_{m-1}^\circ, g^{(m)})$ for m ∈ {2, 3,…}, and suppose that they are isometric with respect to Markov maps. If we can extend every $g^{(m)}$ to a Riemannian metric $\tilde g^{(m)}$ on $\mathbb{R}^m_+$ in such a way that the resulting spaces $(\mathbb{R}^m_+, \tilde g^{(m)})$ are still isometric with respect to Markov maps, then Campbell's theorem implies that $g^{(m)}$ is a multiple of the Fisher metric. Such metric extensions can be defined as follows. Consider the diffeomorphism:

$$\Delta_{m-1}^\circ \times \mathbb{R}_+ \to \mathbb{R}^m_+, \quad (p, r) \mapsto rp.$$

Any tangent vector $u \in T_{(p,r)}\mathbb{R}^m_+$ can be written uniquely as $u = u_p + u_r\partial_r$, where $u_p$ is tangent to $r\Delta_{m-1}^\circ$. Since each Markov map f preserves the one-norm | · |, its push-forward $f_*$ maps the tangent vector $\partial_r \in T_{(p,r)}\mathbb{R}^m_+$ to the corresponding tangent vector $\partial_r \in T_{f(p,r)}\mathbb{R}^m_+$; that is, $f_*u = f_*u_p + u_r\partial_r$. Therefore,

$$\tilde g^{(m)}_{(p,r)}(u, v) := g^{(m)}_p(u_p, v_p) + u_r v_r$$

is a metric on $\mathbb{R}^m_+$ that is invariant under f.

In what follows, we will focus on positive matrices. In order to define a natural Riemannian metric, we can use the identification $\mathbb{R}^{k\times m}_+ \cong \mathbb{R}^{km}_+$ and apply Campbell's theorem. This leads to metrics of the form:

$$g^{(k,m)}_M(\partial_{ij}, \partial_{kl}) = A(|M|) + \delta_{ik}\delta_{jl}\,C(|M|)\,\frac{|M|}{M_{ij}},$$

where $\partial_{ij} = \frac{\partial}{\partial M_{ij}}$ and $|M| = \sum_{ij} M_{ij}$. However, a disadvantage of this approach is that the action of general Markov maps on $\mathbb{R}^{km}_+$ has no natural interpretation in terms of the matrix structure. Therefore, Lebanon [14] considered a special class of Markov maps defined as follows.

Definition 5. Consider a k × l row-partition matrix R and a collection of m × n row-partition matrices Q = {Q(1),…, Q(k)}. The map:

$$\mathbb{R}^{k\times m}_+ \to \mathbb{R}^{l\times n}_+;\quad M \mapsto R^\top(M \otimes Q)$$

is called a congruent embedding by a Markov morphism of $\mathbb{R}^{k\times m}_+$ to $\mathbb{R}^{l\times n}_+$ in [15]. We will refer to such an embedding as a Lebanon map. Here, the row product $M \otimes Q$ is defined by:

$$(M \otimes Q)_{ab} = (M Q^{(a)})_{ab}, \quad \text{for all } a \in [k],\ b \in [n];$$

that is, the a-th row of M is multiplied by the matrix Q(a).

In a Lebanon map, each row of the input matrix M is mapped by an individual Markov mapping $Q^{(i)}$, and each resulting row is copied and scaled by an entry of R. This kind of map preserves the sum of all matrix entries. Therefore, with the identification $\mathbb{R}^{k\times m}_+ \cong \mathbb{R}^{km}_+$, each Lebanon map restricts to a map $\Delta_{km-1}^\circ \to \Delta_{ln-1}^\circ$. The set $\Delta_{km-1}$ can be identified with the set of joint distributions of two random variables. Lebanon maps can be regarded as special Markov maps that incorporate the product structure present in the set of joint probability distributions of a pair of random variables. In Section 4, we will give an interpretation of these maps.
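The sketch below is our own illustration; it reads Definition 5 with the dimensionally necessary transpose, M ↦ R⊤(M ⊗ Q), implements the row product and a Lebanon map, and checks that the total sum of the matrix entries is preserved.

```python
import numpy as np

def row_product(M, Qs):
    """Row product (M ⊗ Q): the a-th row of M is multiplied by Q^(a)."""
    return np.vstack([M[a] @ Qs[a] for a in range(M.shape[0])])

def lebanon_map(M, R, Qs):
    """Congruent embedding by a Markov morphism (Lebanon map): M -> R^T (M ⊗ Q)."""
    return R.T @ row_product(M, Qs)

R = np.array([[0.4, 0.6, 0.0],           # 2x3 row-partition matrix
              [0.0, 0.0, 1.0]])
Q1 = np.array([[1.0, 0.0, 0.0],          # two 2x3 row-partition matrices
               [0.0, 0.5, 0.5]])
Q2 = np.array([[0.25, 0.75, 0.0],
               [0.0, 0.0, 1.0]])
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # a positive 2x2 matrix
N = lebanon_map(M, R, [Q1, Q2])
print(N.shape, M.sum(), N.sum())         # 3x3 image; the sum of all entries is preserved
```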

Contrary to what is stated in [15], a Lebanon map does not map $(\Delta_{m-1}^k)^\circ$ to $(\Delta_{n-1}^l)^\circ$, unless k = l. Therefore, later, we will provide a characterization of the metrics on $(\Delta_{m-1}^k)^\circ$ in terms of invariance under other maps (which are neither Markov nor Lebanon maps).

The main result in Lebanon’s work [15, Theorems 1 and 2] is the following.

Theorem 6 (Lebanon’s theorem.).

  • For each k ≥ 1, m ≥ 2, let $g^{(k,m)}$ be a Riemannian metric on $\mathbb{R}^{k\times m}_+$ in such a way that every Lebanon map is an isometry. Then:

    $$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = A(|M|) + \delta_{ac}\left(B(|M|)\,\frac{|M|}{|M_a|} + \delta_{bd}\,C(|M|)\,\frac{|M|}{M_{ab}}\right)$$

    for some differentiable functions $A, B, C \in C^\infty(\mathbb{R}_+)$.

  • Conversely, let $\{(\mathbb{R}^{k\times m}_+, g^{(k,m)})\}$ be a sequence of Riemannian manifolds, with metrics $g^{(k,m)}$ of the form (16) for some $A, B, C \in C^\infty(\mathbb{R}_+)$. Then, every Lebanon map is an isometry.

Lebanon does not study the question under which assumptions on $A, B, C \in C^\infty(\mathbb{R}_+)$ the formula (16) does indeed define a Riemannian metric. This question has the following simple answer, which we will prove in Appendix A:

Proposition 7. The matrix (16) is positive definite if and only if C(|M|) > 0, B(|M|) + C(|M|) > 0 and A(|M|) + B(|M|) + C(|M|) > 0.

The class of metrics (16) is larger than the class of metrics (13) derived in Campbell’s theorem. The reason is that Campbell’s metrics are invariant with respect to a larger class of embeddings.

The special case with A(|M|) = 0, B(|M|) = 0 and C(|M|) = 1/|M| is called the product Fisher metric,

$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = \delta_{ac}\delta_{bd}\,\frac{1}{M_{ab}}.$$

Furthermore, if we restrict to $(\Delta_{m-1}^k)^\circ$, the functions A and B do not play any role. In this case |M| = k, and we obtain the scaled product Fisher metric:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = \delta_{ac}\delta_{bd}\,\frac{C(k)}{M_{ab}},$$

where $C(k) > 0$ is a positive function of k. As mentioned before, Lebanon's theorem does not give a characterization of invariant metrics of stochastic matrices, since Lebanon maps do not preserve the stochasticity of the matrices. However, Lebanon maps are natural maps on the set $\Delta_{km-1}^\circ$ of positive joint distributions. In the same way as Chentsov's theorem can be derived from Campbell's theorem (see Remark 4), we obtain the following corollary:

Corollary 8.

  • Let $\{(\Delta_{km-1}^\circ, g^{(k,m)}): k \geq 1,\ m \geq 2\}$ be a double sequence of Riemannian manifolds with the property that every Lebanon map is an isometry. Then:

    $$g^{(k,m)}_P(u, v) = B\sum_a\sum_{b,c}\frac{u_{ab}v_{ac}}{|P_a|} + C\sum_{ab}\frac{u_{ab}v_{ab}}{P_{ab}}, \quad \text{for each } P \in \Delta_{km-1}^\circ,$$

    for some constants $B, C \in \mathbb{R}$ with C > 0 and B + C > 0, where $|P_a| = \sum_b P_{ab}$.

  • Conversely, let $\{(\Delta_{km-1}^\circ, g^{(k,m)})\}$ be a sequence of Riemannian manifolds with metrics $g^{(k,m)}$ of the form of Equation (19) for some $B, C \in \mathbb{R}$. Then, every Lebanon map is an isometry.

Observe that these metrics agree with (a multiple of) the Fisher metric only if B = 0. The case B = 0 can also be characterized; note that Lebanon maps do not treat the two random variables symmetrically. Switching the two random variables corresponds to transposing the joint distribution matrix P. When exchanging the role of the two random variables, the Lebanon map becomes P ⟼ (PQ)R. We call such a map a dual Lebanon map. If we require invariance under both Lebanon maps and their duals in Theorem 6 or Corollary 8, the statements remain true with the additional restriction that B = 0 (as a function or constant, respectively).

4. Invariance Metric Characterizations for Conditional Polytopes

According to Chentsov's theorem (Theorem 2), a natural metric on the probability simplex can be characterized by requiring the isometry of natural embeddings. Lebanon follows this axiomatic approach to characterize metrics on products of positive measures (Theorem 6). However, the maps considered by Lebanon do not preserve the row-normalization of conditional distributions. In general, they do not map conditional polytopes to conditional polytopes. Therefore, we will consider a slight modification of Lebanon maps, in order to obtain maps between conditional polytopes.

4.1. Stochastic Embeddings of Conditional Polytopes

A matrix of conditional distributions P(Y|X) in $\Delta_{m-1}^k$ can be regarded as the equivalence class of all joint probability distributions P(X, Y) ∈ ∆km−1 with conditional distribution P(Y|X). Which Markov maps of probability simplices are compatible with this equivalence relation? The most obvious examples are permutations (relabelings) of the state spaces of X and Y.

In information theory, stochastic matrices are also viewed as channels. For any distribution of X, the stochastic matrix gives us a joint distribution of the pair (X, Y) and, hence, a marginal distribution of Y. If we input a distribution of X into the channel, the stochastic matrix determines what the distribution of the output Y will be.

Channels can be combined, provided the cardinalities of the state spaces fit together. If we take the output Y of the first channel P(Y|X) and feed it into another channel P(Y′|Y) then we obtain a combined channel P(Y′|X). The composition of channels corresponds to ordinary matrix multiplication. If the first channel is described by the stochastic matrix K and the second channel by Q, then the combined channel is described by K · Q. Observe that in this case, the joint distribution P (considered as a normalized matrix P ∈ ∆km−1) is transformed similarly; that is, the joint distribution of the pair (X, Y′) is given by P · Q.

More general maps result from compositions where the choice of the second channel depends on the input of the first channel. In other words, we have a first channel that takes as input X and gives as output Y, and we have another channel that takes as input (X,Y) and gives as output Y′; we are interested in the resulting channel from X to Y′. The second channel can be described by a collection of stochastic matrices Q = {Q(i)}i. If K describes the first channel, then the combined channel is described by the row product KQ (see Definition 5). Again, the joint distribution of (X, Y′) arises in a similar way as PQ.

We can also consider transformations of the first random variable X. Suppose that we use X as the input to a channel described by a stochastic matrix R. In this case, the joint distribution of the output X′ of the channel and Y is described by $R^\top P$. However, in general, there is not much that we can say about the conditional distribution of Y given X′. The result depends in an essential way on the original distribution of X. However, this is not true in the special case that the channel is "not mixing", that is, in the case that R is a stochastic partition matrix. In this case, the conditional distribution P(Y|X′) is described by $\bar R^\top K$, where $\bar R$ is the corresponding partition indicator matrix, in which all non-zero entries of R are replaced by one. In other words, each state of X corresponds to several states of X′, and the corresponding row of K is copied a corresponding number of times.

To sum up, if we combine the transformations due to Q and R, then the joint probability distribution transforms as $P \mapsto R^\top(P \otimes Q)$ and the conditional transforms as $K \mapsto \bar R^\top(K \otimes Q)$. In particular, for the joint distribution, we obtain the definition of a Lebanon map. Figure 1 illustrates the situation.

Finally, we will also consider the special case where the partition of R (and R ¯) is homogeneous, i.e., such that all blocks have the same size. For example, this describes the case where there is a third random variable Z that is independent of Y given X. In this case, the conditional distribution satisfies P(Y|X) = P(Y|X, Z), and R describes the conditional distribution of (X, Z) given X.

Definition 9. A (row) partition indicator matrix is a matrix $\bar R \in \{0,1\}^{k\times l}$ that satisfies:

$$\bar R_{ij} = \begin{cases} 1, & \text{if } j \in A_i, \\ 0, & \text{else}, \end{cases}$$

for a k-block partition $\{A_1,\dots, A_k\}$ of [l].

For example, the 3 × 5 partition indicator matrix corresponding to Equation (8) is:

$$\bar R = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

Definition 10. Consider a k × l partition indicator matrix $\bar R$ and a collection of m × n stochastic partition matrices $Q = \{Q^{(i)}\}_{i=1}^k$. We call the map:

$$f: \mathbb{R}^{k\times m}_+ \to \mathbb{R}^{l\times n}_+;\quad M \mapsto \bar R^\top(M \otimes Q)$$

a conditional embedding of $\mathbb{R}^{k\times m}_+$ in $\mathbb{R}^{l\times n}_+$. We denote the set of all such maps by $\hat{\mathcal{F}}^{l,n}_{k,m}$. If $\bar R$ is the partition indicator matrix of a homogeneous partition (with partition blocks of equal cardinality), then we call f a homogeneous conditional embedding. We denote the set of all such homogeneous conditional embeddings by $\mathcal{F}^{l,n}_{k,m}$ and assume that l is a multiple of k.

Conditional embeddings preserve the 1-norm of the matrix rows; that is, the elements of $\hat{\mathcal{F}}^{l,n}_{k,m}$ map $(\Delta_{m-1}^k)^\circ$ to $(\Delta_{n-1}^l)^\circ$. On the other hand, they do not preserve the 1-norm of the entire matrix. Conditional embeddings are Markov maps only when k = l, in which case they are also Lebanon maps.
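A short sketch (ours, using the same transpose reading of Definition 10 as above) of a homogeneous conditional embedding, verifying that a stochastic matrix is mapped to a stochastic matrix:

```python
import numpy as np

def conditional_embedding(K, R_bar, Qs):
    """Conditional embedding K -> R_bar^T (K ⊗ Q) of Definition 10."""
    KQ = np.vstack([K[a] @ Qs[a] for a in range(K.shape[0])])   # row product
    return R_bar.T @ KQ

R_bar = np.array([[1, 0, 1, 0],          # homogeneous 2x4 partition indicator matrix
                  [0, 1, 0, 1]])
Q1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5]])
Q2 = np.array([[0.3, 0.7, 0.0],
               [0.0, 0.0, 1.0]])
K = np.array([[0.6, 0.4],
              [0.1, 0.9]])               # a 2x2 stochastic matrix
K_img = conditional_embedding(K, R_bar, [Q1, Q2])
print(K_img, K_img.sum(axis=1))          # a 4x3 stochastic matrix: each row sums to 1
```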

4.2. Invariance Characterization

Considering the conditional embeddings discussed in the previous section, we obtain the following metric characterization.

Theorem 11.

  • Let $g^{(k,m)}$ denote a metric on $\mathbb{R}^{k\times m}_+$ for each k ≥ 1 and m ≥ 2. If every homogeneous conditional embedding $f \in \mathcal{F}^{l,n}_{k,m}$ is an isometry with respect to these metrics, then:

    $$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = \frac{A}{k^2} + \delta_{ac}\left(\frac{k\,B}{k^2} + \delta_{bd}\,\frac{|M|}{M_{ab}}\,\frac{C}{k^2}\right), \quad \text{for all } M \in \mathbb{R}^{k\times m}_+,$$

    for some constants $A, B, C \in \mathbb{R}$, where $\partial_{ab} = \frac{\partial}{\partial M_{ab}}$ and $|M| = \sum_{ab} M_{ab}$.

  • Conversely, given the metrics defined by Equation (23) for any non-degenerate choice of constants $A, B, C \in \mathbb{R}$, each homogeneous conditional embedding $f \in \mathcal{F}^{l,n}_{k,m}$, $k \leq l$, $m \leq n$, is an isometry.

  • Moreover, the tensors g(k,m) from Equation (23) are positive-definite for all k ≥ 1 and m ≥ 2 if and only if C > 0, B + C > 0 and A + B + C > 0.

The proof of Theorem 11 is similar to the proof of the theorems of Chentsov, Campbell and Lebanon. Due to its technical nature, we defer it to Appendix B.

Now, for the restriction of the metric $g^{(k,m)}$ to $(\Delta_{m-1}^k)^\circ$, we have the following. In this case, |M| = k. Since tangent vectors $v = \sum_{ab} v_{ab}\partial_{ab} \in T_M(\Delta_{m-1}^k)^\circ$ satisfy $\sum_b v_{ab} = 0$ for all a, the constants A and B become immaterial, and the metric can be written as:

$$g^{(k,m)}_M(u, v) = \sum_{ab}|M|\,\frac{u_{ab}v_{ab}}{M_{ab}}\,\frac{C}{k^2} = \sum_{ab}\frac{u_{ab}v_{ab}}{M_{ab}}\,\frac{C}{k}, \quad \text{for all } u, v \in T_M(\Delta_{m-1}^k)^\circ.$$

This metric is a specialization of the metric (18) derived by Lebanon (Theorem 6).

The statement of Theorem 11 becomes false if we consider general conditional embeddings instead of homogeneous ones:

Theorem 12. There is no family of metrics $g^{(k,m)}$ on $\mathbb{R}^{k\times m}_+$ (or on $(\Delta_{m-1}^k)^\circ$) for each k ≥ 1 and m ≥ 2, for which every conditional embedding $f \in \hat{\mathcal{F}}^{l,n}_{k,m}$ is an isometry.

This negative result will become clearer from the perspective of Section 6: as we will show in Theorem 17, although there are no metrics that are invariant under all conditional embeddings, there are families of metrics (depending on a parameter, ρ) that transform covariantly (that is, in a well-defined manner) with respect to the conditional embeddings. We defer the proof of Theorem 12 to Appendix B.

5. The Fisher Metric on Polytopes and Point Configurations

In the previous section, we obtained distinguished Riemannian metrics on $\mathbb{R}^{k\times m}_+$ and $(\Delta_{m-1}^k)^\circ$ by postulating invariance under natural maps. In this section, we take another viewpoint based on general considerations about Riemannian metrics on arbitrary polytopes. This is achieved by embedding each polytope in a probability simplex as an exponential family. We first recall the necessary background. In Section 5.2, we then present our general results, and in Section 5.3, we discuss the special case of conditional polytopes.

5.1. Exponential Families and Polytopes

Let $\mathcal{X}$ be a finite set and $A \in \mathbb{R}^{d\times\mathcal{X}}$ a matrix with columns $a_x$ indexed by $x \in \mathcal{X}$. It will be convenient to consider the rows $A_i$, i ∈ [d], of A as functions $A_i: \mathcal{X} \to \mathbb{R}$. Finally, let $\nu: \mathcal{X} \to \mathbb{R}_+$. The exponential family $\varepsilon_{A,\nu}$ is the set of probability distributions on $\mathcal{X}$ given by:

$$p(x;\theta) = \exp\big(\langle\theta, a_x\rangle + \log(\nu(x)) - \log(Z(\theta))\big), \quad \text{for all } x \in \mathcal{X}, \text{ for all } \theta \in \mathbb{R}^d,$$

with the normalization function $Z(\theta) = \sum_{x\in\mathcal{X}}\exp(\langle\theta, a_x\rangle + \log(\nu(x)))$. The functions $A_i$ are called the observables and ν the reference measure of the exponential family. When the reference measure ν is constant, ν(x) = 1 for all $x \in \mathcal{X}$, we omit the subscript and write $\varepsilon_A$.

A direct calculation shows that the Fisher information matrix of $\varepsilon_{A,\nu}$ at a point $\theta \in \mathbb{R}^d$ has coordinates:

$$g^{\varepsilon_{A,\nu}}_\theta(\partial_{\theta_i}, \partial_{\theta_j}) = \mathrm{cov}_\theta(A_i, A_j), \quad \text{for all } i, j \in [d].$$

Here, covθ denotes the covariance computed with respect to the probability distribution p(·; θ).
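The following sketch (ours; the point configuration is an arbitrary example) builds a small exponential family as above and computes its Fisher matrix as the covariance of the observables.

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 1.0],      # observables: columns a_x, indexed by x
              [0.0, 0.0, 1.0, 1.0]])     # (here: the vertices of the unit square)
nu = np.ones(A.shape[1])                 # constant reference measure

def exp_family_point(theta, A, nu):
    """p(x; theta) proportional to nu(x) exp(<theta, a_x>)."""
    w = nu * np.exp(theta @ A)
    return w / w.sum()

def fisher_matrix(theta, A, nu):
    """Fisher matrix cov_theta(A_i, A_j) of the exponential family."""
    p = exp_family_point(theta, A, nu)
    mean = A @ p
    centered = A - mean[:, None]
    return (centered * p) @ centered.T

theta = np.array([0.3, -0.7])
print(fisher_matrix(theta, A, nu))
```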

The convex support of εA,ν is defined as:

$$\mathrm{conv}\,A := \mathrm{conv}\,\{a_x : x \in \mathcal{X}\} \subseteq \mathbb{R}^d,$$

where conv S is the set of all convex combinations of points in S. The moment map $\mu: \Delta_{|\mathcal{X}|-1} \to \mathbb{R}^d$, $p \mapsto A\cdot p$, restricts to a homeomorphism $\overline{\varepsilon_{A,\nu}} \to \mathrm{conv}\,A$; see [16]. Here, $\overline{\varepsilon_{A,\nu}}$ denotes the Euclidean closure of $\varepsilon_{A,\nu}$. The inverse of μ will be denoted by $\mu^{-1}: \mathrm{conv}\,A \to \overline{\varepsilon_{A,\nu}} \subseteq \Delta_{|\mathcal{X}|-1}$. This gives a natural embedding of the polytope conv A in the probability simplex $\Delta_{|\mathcal{X}|-1}$. Note that the convex support is independent of the reference measure ν. See [17] for more details.

5.2. Invariance Fisher Metric Characterizations for Polytopes

Let $P \subseteq \mathbb{R}^d$ be a polytope with n vertices $a_1,\dots, a_n$. Let $A = (a_1,\dots, a_n)$ be the matrix with columns $a_i \in \mathbb{R}^d$ for all i ∈ [n]. Then, $\varepsilon_A \subseteq \Delta_{n-1}^\circ$ is an exponential family with convex support P. We will also denote this exponential family by $\varepsilon_P$. We can use the inverse of the moment map, $\mu^{-1}$, to pull back geometric structures on $\Delta_{n-1}^\circ$ to the relative interior P° of P.

Definition 13. The Fisher metric on P° is the pull-back of the Fisher metric on $\varepsilon_A \subseteq \Delta_{n-1}^\circ$ by $\mu^{-1}$.

Some obvious questions are: Why is this a natural construction? Which maps between polytopes are isometries between their Fisher metrics? Can we find a characterization of Chentsov type for this metric?

Affine maps are natural maps between polytopes. However, in order to obtain isometries, we need to impose some additional constraints. Consider two polytopes $P \subseteq \mathbb{R}^d$, $P' \subseteq \mathbb{R}^{d'}$ and an affine map $\phi: \mathbb{R}^d \to \mathbb{R}^{d'}$ that satisfies ϕ(P) ⊆ P′. A natural condition in the context of exponential families is that ϕ restricts to a bijection between the set vert(P) of vertices of P and the set vert(P′) of vertices of P′. In this case, $\varepsilon_{P'} \subseteq \varepsilon_P \subseteq \Delta_{n-1}^\circ$. Moreover, the moment map μ′ of P′ factorizes through the moment map μ of P: μ′ = ϕ ○ μ. Let $\phi^{-1} := \mu \circ \mu'^{-1}$. Then, the following diagram commutes:

$$\begin{array}{ccc} P^\circ & \xrightarrow{\ \mu^{-1}\ } & \varepsilon_P \\ {\scriptstyle \phi^{-1}}\big\uparrow & & \big\uparrow \\ P'^\circ & \xrightarrow{\ \mu'^{-1}\ } & \varepsilon_{P'} \end{array}$$

It follows that ϕ−1 is an isometry from P′° to its image in P°. Observe that the inverse moment map itself arises in this way: In the diagram (28), if P is equal to ∆n−1, then the upper moment map μ−1 is the identity map, and ϕ−1 equals the inverse moment map μ'−1 of P′.

The constraint of mapping vertices to vertices bijectively is very restrictive. In order to consider a larger class of affine maps, we need to generalize our construction from polytopes to weighted point configurations.

Definition 14. A weighted point configuration is a pair (A, ν) consisting of a matrix $A \in \mathbb{R}^{d\times n}$ with columns $a_1,\dots, a_n$ and a positive weight function $\nu: \{1,\dots, n\} \to \mathbb{R}_+$ assigning a weight to each column $a_i$. The pair (A, ν) defines the exponential family $\varepsilon_{A,\nu}$.

The (A, ν)-Fisher metric on (conv A)° is the pull-back of the Fisher metric on $\Delta_{n-1}^\circ$ through the inverse of the moment map.

We recover Definition 13 as follows. For a polytope P, let A be the point configuration consisting of the vertices of P. Moreover, let ν be a constant function. Then, εP = εA,ν, and the two definitions of the Fisher metric on P° coincide.

The following are natural maps between weighted point configurations:

Definition 15. Let (A, ν), (A′, ν′) be two weighted point configurations with $A = (a_i)_i \in \mathbb{R}^{d\times n}$ and $A' = (a'_j)_j \in \mathbb{R}^{d'\times n'}$. A morphism (A, ν) → (A′, ν′) is a pair (ϕ, σ) consisting of an affine map $\phi: \mathbb{R}^d \to \mathbb{R}^{d'}$ and a surjective map σ: {1,…, n} → {1,…, n′} with $\phi(a_i) = a'_{\sigma(i)}$ and $\nu'(a'_j) = \alpha\sum_{i:\sigma(i)=j}\nu(a_i)$, where α > 0 is a constant that does not depend on j.

Consider a morphism (ϕ, σ): (A, ν) → (A′, ν′). For each j ∈ [n′], let $A_j = \{i : \phi(a_i) = a'_j\}$. Then, $(A_1,\dots, A_{n'})$ is a partition of [n]. Define a matrix $Q \in \mathbb{R}^{n'\times n}$ by:

$$Q_{ji} = \begin{cases} \dfrac{\nu(i)}{\sum_{i'\in A_j}\nu(i')}, & \text{if } i \in A_j, \\ 0, & \text{else}. \end{cases}$$

Then, Q is a Markov mapping, and the following diagram commutes:

$$\begin{array}{ccc} (\mathrm{conv}\,A)^\circ & \xrightarrow{\ \mu^{-1}\ } & \Delta_{n-1}^\circ \\ {\scriptstyle \phi^{-1}}\big\uparrow & & \big\uparrow{\scriptstyle Q} \\ (\mathrm{conv}\,A')^\circ & \xrightarrow{\ \mu'^{-1}\ } & \Delta_{n'-1}^\circ \end{array}$$

By Chentsov’s theorem (Theorem 2), Q is an isometric embedding. It follows that ϕ−1 also induces an isometric embedding. This shows the first part of the following theorem:

Theorem 16.

  • Let (ϕ, σ): (A, ν) → (A′, ν′) be a morphism of weighted point configurations. Then, ϕ−1: (conv A′)° → (conv A)° is an isometric embedding with respect to the Fisher metrics on (conv A)° and (conv A')°.

  • Let $g^{A,\nu}$ be a Riemannian metric on (conv A)° for each weighted point configuration (A, ν). If every morphism (ϕ, σ): (A, ν) → (A′, ν′) of weighted point configurations induces an isometric embedding ϕ−1: (conv A′)° → (conv A)°, then there exists a constant $\alpha \in \mathbb{R}_+$ such that $g^{A,\nu}$ is equal to α times the (A, ν)-Fisher metric.

Proof. The first statement follows from the discussion before the theorem. For the second statement, we show that under the given assumptions, all Markov maps are isometric embeddings. By Chentsov’s theorem (Theorem 2), this implies that the metrics gP agree with the Fisher metric whenever P is a simplex. The statement then follows from the two facts that the metric on P° or (conv A)° is the pull-back of the Fisher metric through the inverse of the moment map and that μ−1 is itself a morphism.

Observe that $\Delta_{n-1} = \mathrm{conv}\,I_n = \mathrm{conv}\{e_1,\dots, e_n\}$ is a polytope, and $\Delta_{n-1}^\circ$ is the corresponding exponential family. Consider a Markov embedding $Q: \Delta_{n'-1} \to \Delta_{n-1}$, $p \mapsto p\cdot Q$. Let $\nu(i) = \sum_j Q_{ji}$ be the value of the unique non-zero entry of Q in the i-th column. This defines a morphism and an embedding as follows:

Let A be the matrix that arises from Q by replacing each non-zero entry by one. We define ϕ as the linear map represented by the matrix A, and define σ: [n] → [n′] by σ(j) = i if and only if $a_j = e_i$, that is, σ(j) indicates the row i in which the j-th column of A is non-zero. Then, (ϕ, σ) is a morphism $(I_n, \nu) \to (I_{n'}, \mathbb{1})$, and by assumption, the inverse ϕ−1 is an isometric embedding $\Delta_{n'-1}^\circ \to \Delta_{n-1}^\circ$. However, ϕ−1 is equal to the Markov map Q. This shows that all Markov maps are isometric embeddings, and so, by Chentsov's theorem, the statement holds true on the simplices. □

Theorem 16 defines a natural metric on $(\Delta_{m-1}^k)^\circ$ that we want to discuss in more detail next.

5.3. Independence Models and Conditional Polytopes

Consider k random variables with finite state spaces [n1],…, [nk]. The independence model consists of all joint distributions $p \in \Delta_{\prod_{i\in[k]}n_i - 1}$ of these variables that factorize as:

$$p(x_1,\dots, x_k) = \prod_{i\in[k]} p_i(x_i), \quad \text{for all } x_1 \in [n_1],\dots, x_k \in [n_k],$$

where $p_i \in \Delta_{n_i-1}$ for all i ∈ [k]. Assuming fixed n1,…, nk, we denote the independence model by $\overline{\varepsilon_k}$. It is the Euclidean closure of an exponential family (with observables of the form $\delta_{i y_i}$). The convex support of $\varepsilon_k$ is equal to the product of simplices $P_k := \Delta_{n_1-1}\times\cdots\times\Delta_{n_k-1}$. The parametrization (31) corresponds to the inverse of the moment map.

We can write any tangent vector $u \in T_{(p_1,\dots,p_k)}P_k^\circ$ of this open product of simplices as a linear combination $u = \sum_{i\in[k]}\sum_{x_i\in[n_i]} u_{i x_i}\,\partial_{i,x_i}$, where $\sum_{x_i\in[n_i]} u_{i x_i} = 0$ for all i ∈ [k]. Given two such tangent vectors, the Fisher metric is given by:

$$g^{P_k}_{(p_1,\dots,p_k)}(u, v) = \sum_{i\in[k]}\sum_{x_i\in[n_i]} \frac{u_{i x_i} v_{i x_i}}{p_i(x_i)}.$$

Just as the convex support of the independence model is the Cartesian product of probability simplices, the Fisher metric on the independence model is the product metric of the Fisher metrics on the probability simplices of the individual variables. If n1 = … = nk =: n, then $P_k = \Delta_{n-1}^k$ can be identified with the set of k × n stochastic matrices.

The Fisher metric on the product of simplices is equal to the product of the Fisher metrics on the factors. More generally, if P = Q1 × Q2 is a Cartesian product, then the Fisher metric on P° is equal to the product of the Fisher metrics on $Q_1^\circ$ and $Q_2^\circ$. In fact, in this case, the inverse of the moment map of P can be expressed in terms of the two moment map inverses $\mu_1^{-1}: Q_1^\circ \to \overline{\varepsilon_{Q_1}} \subseteq \Delta_{m_1-1}$ and $\mu_2^{-1}: Q_2^\circ \to \overline{\varepsilon_{Q_2}} \subseteq \Delta_{m_2-1}$ and the moment map $\tilde\mu$ of the independence model $\Delta_{m_1-1} \times \Delta_{m_2-1}$, by:

$$\mu^{-1}(q_1, q_2) = \tilde\mu^{-1}\big(\mu_1^{-1}(q_1), \mu_2^{-1}(q_2)\big).$$

Therefore, the pull-back by $\mu^{-1}$ factorizes through the pull-back by $\tilde\mu^{-1}$, and since the independence model carries a product metric, the product of polytopes also carries a product metric.

Let us compare the metric $g^{(k,m)}_K$ from Equation (24) with the Fisher metric $g^{P_k}_{(K_1,\dots,K_k)}$ from Equation (32) on the product of simplices $P_k = \Delta_{m-1}^k$. In both cases, the metric is a product metric; that is, it has the form:

$$g = g_1 + \cdots + g_k,$$

where $g_i$ is a metric on the i-th factor $\Delta_{m-1}^\circ$. For $g^{\Delta_{m-1}^k}_K$, $g_i$ is equal to the Fisher metric on $\Delta_{m-1}^\circ$. However, for $g^{(k,m)}_K$, $g_i$ is equal to 1/k times the Fisher metric on $\Delta_{m-1}^\circ$. Since this factor only depends on k, it only plays a role if stochastic matrices of different sizes are compared. The additional factor of 1/k can be interpreted as the uniform distribution on k elements. This is related to another, more general class of Riemannian metrics that are used in applications; namely, given a map $(\Delta_{m-1}^k)^\circ \ni K \mapsto \rho_K \in \mathbb{R}^k_+$, it is common to use product metrics with $g_i$ equal to $\rho_K(i)$ times the Fisher metric on $\Delta_{m-1}^\circ$. When K has the interpretation of a channel or when K describes the policy by which a system reacts to some sensor values, a natural possibility is to let $\rho_K$ be the stationary distribution of the channel input or of the sensor values, respectively. We will discuss this approach in Section 6.

6. Weighted Product Metrics for Conditional Models

In this section, we consider metrics on spaces of stochastic matrices defined as weighted sums of the Fisher metrics on the spaces of the matrix rows, similar to Equation (34). This kind of metric was used initially by Amari [1] in order to define a natural gradient in the supervised learning context. Later, in the context of reinforcement learning, Kakade [2] defined a natural policy gradient based on this kind of metric, which has been further developed by Peters et al. [10]. Related applications within unsupervised learning have been pursued by Zahedi et al. [18].

Consider the following weighted product Fisher metric:

$$g^{\rho,m}_K = \sum_a \rho_K(a)\, g^{(m),a}_{K_a}, \quad \text{for all } K \in (\Delta_{m-1}^k)^\circ,$$

where $g^{(m),a}_{K_a}$ denotes the Fisher metric of $\Delta_{m-1}^\circ$ at the a-th row of K and $\rho_K \in \Delta_{k-1}$ is a probability distribution over a ∈ [k] associated with each $K \in (\Delta_{m-1}^k)^\circ$. For example, the distribution $\rho_K$ could be the stationary distribution of sensor values observed by an agent when operating under a policy described by K.

In the following, we will try to illuminate the properties of polytope embeddings that yield the metric (35) as the pull-back of the Fisher information metric on a probability simplex. We will focus on the case that ρK = ρ is independent of K.

There are two direct ways of embedding $\Delta_{m-1}^k$ in a probability simplex. In Section 5, we used the inverse of the moment map of an exponential family, possibly with some reference measure. This embedding is illustrated in the left panel of Figure 2. If we are given a fixed probability distribution $\rho \in \Delta_{k-1}$, there is a second natural embedding $\psi_\rho: \Delta_{m-1}^k \to \Delta_{km-1}$ defined as follows:

$$\psi_\rho(K)(x, y) = \rho(x)\,K_{x,y}, \quad \text{for all } x \in [k],\ y \in [m].$$

If ρ is the distribution of a random variable X and K Δ m 1 k is the stochastic matrix describing the conditional distribution of another variable Y given X, then ψρ(K) is the joint distribution of X and Y. Note that ψρ is an affine embedding. See the right panel of Figure 2 for an illustration.

The pull-back of the Fisher metric on $\Delta_{km-1}^\circ$ through $\psi_\rho$ is given by:

$$(\psi_\rho^*\,g^{(km)})_K(u, v) = \sum_{x\in[k]}\sum_{y\in[m]} \frac{\rho(x)\,u_{xy}\;\rho(x)\,v_{xy}}{\rho(x)\,K_{x,y}} = \sum_{x\in[k]} \rho(x) \sum_{y\in[m]} \frac{u_{xy}\,v_{xy}}{K_{x,y}}.$$

This recovers the weighted sum of Fisher metrics from Equation (35).
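A numerical check of this computation (our sketch): embed K through ψ_ρ, evaluate the Fisher metric of the joint simplex on the pushed-forward tangent vector, and compare with the weighted product formula of Equation (35).

```python
import numpy as np

def psi(rho, K):
    """Embedding psi_rho(K)(x, y) = rho(x) K[x, y] into the joint simplex."""
    return rho[:, None] * K

def weighted_product_metric(rho, K, U, V):
    """Weighted product Fisher metric sum_x rho(x) sum_y U[x,y] V[x,y] / K[x,y], Equation (35)."""
    return float(np.sum(rho[:, None] * U * V / K))

rho = np.array([0.3, 0.7])
K = np.array([[0.6, 0.4],
              [0.2, 0.8]])
U = np.array([[0.1, -0.1],
              [-0.05, 0.05]])            # tangent vector: each row sums to zero

# psi_rho is linear in K, so U pushes forward to rho * U, and the Fisher metric on the
# joint simplex is evaluated at the joint distribution P = psi_rho(K).
P = psi(rho, K)
pullback = float(np.sum((rho[:, None] * U) ** 2 / P))
print(pullback, weighted_product_metric(rho, K, U, U))   # the two values agree
```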

Are there natural maps that leave the metrics $g^{\rho,m}$ invariant? Let us reconsider the stochastic embeddings from Definition 10. Let $\bar R$ be a k × l partition indicator matrix and R a stochastic partition matrix with the same block structure as $\bar R$. Observe that to each partition indicator matrix $\bar R$ there are many compatible stochastic partition matrices R, but the partition indicator matrix $\bar R$ for any stochastic partition matrix R is unique. Furthermore, let $Q = \{Q^{(a)}\}_{a\in[k]}$ be a collection of stochastic partition matrices. The corresponding conditional embedding $\bar f$ maps $K \in \Delta_{m-1}^k$ to $\bar f(K) := \bar R^\top(K \otimes Q) \in \Delta_{n-1}^l$.

Let $\rho \in \Delta_{k-1}$. Suppose that K describes the conditional distribution of Y given X and that $\psi_\rho(K)$ describes the joint distribution of X and Y. As explained in Section 4.1, the matrix $f(P) := R^\top(P \otimes Q)$ describes the joint distribution of a pair of random variables (X′, Y′), and the conditional distribution of Y′ given X′ is given by $\bar f(K)$. In this situation, the marginal distribution of X′ is given by ρ′ = ρR. Therefore, the following diagram commutes:

$$\begin{array}{ccc} (\Delta_{m-1}^k)^\circ & \xrightarrow{\ \psi_\rho\ } & \Delta_{km-1}^\circ \\ {\scriptstyle \bar f}\big\downarrow & & \big\downarrow{\scriptstyle f} \\ (\Delta_{n-1}^l)^\circ & \xrightarrow{\ \psi_{\rho'}\ } & \Delta_{ln-1}^\circ \end{array}$$

The preceding discussion implies the first statement of the following result:

Theorem 17.

  • For any k ≥ 1 and m ≥ 2 and any $\rho \in \Delta_{k-1}$, the Riemannian metric $g^{\rho,m}$ on $(\Delta_{m-1}^k)^\circ$ satisfies:

    $$g^{\rho,m} = \bar f^*\big(g^{\rho',n}\big), \quad \text{for } \rho' = \rho R,$$

    for any conditional embedding $\bar f: K \mapsto \bar R^\top(K \otimes Q)$.

  • Conversely, suppose that for any k ≥ 1 and m ≥ 2 and any $\rho \in \Delta_{k-1}$, there is a Riemannian metric $g^{(\rho,m)}$ on $(\Delta_{m-1}^k)^\circ$, such that Equation (39) holds for all conditional embeddings, and suppose that $g^{(\rho,m)}$ depends continuously on ρ. Then, there is a constant A > 0 that satisfies $g^{(\rho,m)} = A\,g^{\rho,m}$.

Proof. The first statement follows from the commutative diagram (38). For the second statement, denote by $\rho_k$ the uniform distribution on a set of k elements. If $\bar f: K \mapsto \bar R^\top(K \otimes Q)$ is a homogeneous conditional embedding of $\Delta_{m-1}^k$ in $\Delta_{n-1}^l$, then $R = \frac{k}{l}\bar R$ is a stochastic partition matrix corresponding to the partition indicator matrix $\bar R$. Observe that $\rho_l = \rho_k R$. Therefore, the family of Riemannian metrics $g^{(\rho_k,m)}$ on $\Delta_{m-1}^k$ satisfies the assumptions of Theorem 11. Therefore, there is a constant A > 0 for which $g^{(\rho_k,m)}$ equals A/k times the product Fisher metric. This proves the statement for uniform distributions ρ.

A general distribution $\rho \in \Delta_{k-1}$ can be approximated by a distribution with rational probabilities. Since $g^{(\rho,m)}$ is assumed to be continuous, it suffices to prove the statement for rational ρ. In this case, there exists a stochastic partition matrix R for which $\rho' := \rho R$ is a uniform distribution, and so, $g^{(\rho',n)}$ is of the desired form. Equation (39) shows that $g^{(\rho,m)}$ is also of the desired form. □

7. Gradient Fields and Replicator Equations

In this section, we use gradient fields in order to compare Riemannian metrics on the space $(\Delta_{n-1}^k)^\circ$.

7.1. Replicator Equations

We start with gradient fields on the simplex $\Delta_{n-1}^\circ$. A Riemannian metric g on $\Delta_{n-1}^\circ$ allows us to consider gradient fields of differentiable functions $F: \Delta_{n-1}^\circ \to \mathbb{R}$. To be more precise, consider the differential $d_pF: T_p\Delta_{n-1}^\circ \to \mathbb{R}$ of F in p. It is a linear form on $T_p\Delta_{n-1}^\circ$, which maps each tangent vector u to $d_pF(u) = \frac{\partial F}{\partial u}(p)$. Using the map $u \mapsto g_p(u, \cdot)$, this linear form can be identified with a tangent vector in $T_p\Delta_{n-1}^\circ$, which we denote by $\mathrm{grad}_p F$. If we choose the Fisher metric $g^{(n)}$ as the Riemannian metric, we obtain the gradient in the following way. First, consider a differentiable extension of F to the positive cone $\mathbb{R}^n_+$, which we will denote by the same symbol F. With the partial derivatives $\partial_i F$ of F, the Fisher gradient of F on the simplex $\Delta_{n-1}^\circ$ is given as:

$$(\mathrm{grad}_p F)_i = p_i\left(\partial_i F(p) - \sum_{j=1}^n p_j\,\partial_j F(p)\right), \quad i \in [n].$$

Note that the expression on the right-hand side of Equation (40) does not depend on the particular differentiable extension of F to $\mathbb{R}^n_+$. The corresponding differential equation is well known in theoretical biology as the replicator equation; see [19,20]:

$$\dot p_i = p_i\left(\partial_i F(p) - \sum_{j=1}^n p_j\,\partial_j F(p)\right), \quad i \in [n].$$

We now apply this gradient formula to functions that have the structure of an expectation value. Given real numbers Fi, i ∈ [n], referred to as fitness values, we consider the mean fitness:

$$\bar F(p) := \sum_{i=1}^n p_i F_i.$$

Replacing the $p_i$ by arbitrary positive real numbers leads to a differentiable extension of $\bar F$, also denoted by $\bar F$. Obviously, we have $\partial_i \bar F = F_i$, which leads to the following replicator equation:

$$\dot p_i = p_i\left(F_i - \bar F(p)\right), \quad i \in [n].$$

This equation has the solution:

$$p_i(t) = \frac{p_i(0)\,e^{t F_i}}{\sum_{j=1}^n p_j(0)\,e^{t F_j}}, \quad i \in [n].$$

Clearly, the mean fitness will increase along this solution of the gradient field. The rate of increase can be easily calculated:

$$\frac{d}{dt}\bar F(p(t)) = \sum_{i=1}^n \dot p_i(t)\,F_i = \sum_{i=1}^n p_i\left(F_i - \bar F(p)\right)F_i = \sum_{i=1}^n p_i\left(F_i - \bar F(p)\right)^2 = \mathrm{var}_p(F) > 0.$$

As limit points of this solution, we obtain:

$$\lim_{t\to+\infty} p_i(t) = \begin{cases} \dfrac{p_i(0)}{\sum_{j\in\operatorname{argmax}_l F_l} p_j(0)}, & \text{if } i \in \operatorname{argmax}_l F_l, \\ 0, & \text{else}, \end{cases}$$

and:

$$\lim_{t\to-\infty} p_i(t) = \begin{cases} \dfrac{p_i(0)}{\sum_{j\in\operatorname{argmin}_l F_l} p_j(0)}, & \text{if } i \in \operatorname{argmin}_l F_l, \\ 0, & \text{else}. \end{cases}$$
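For concreteness, a small simulation (ours) of the closed-form solution above: the mean fitness increases monotonically and the distribution concentrates on the maximizers of F.

```python
import numpy as np

def replicator_solution(p0, F, t):
    """Closed-form replicator solution p_i(t) = p_i(0) e^{t F_i} / sum_j p_j(0) e^{t F_j}."""
    w = p0 * np.exp(t * F)
    return w / w.sum()

p0 = np.array([0.5, 0.3, 0.2])
F = np.array([1.0, 2.0, 0.5])

for t in [0.0, 1.0, 5.0, 20.0]:
    p = replicator_solution(p0, F, t)
    print(t, p.round(4), round(float(p @ F), 4))   # mean fitness increases towards max_i F_i
```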

7.2. Extension of the Replicator Equations to Stochastic Matrices

Now, we come to the corresponding considerations of gradient fields in the context of stochastic matrices $K \in (\Delta_{n-1}^k)^\circ$. We consider a function:

$$K \mapsto F(K) = F(K_{11},\dots, K_{1n};\ K_{21},\dots, K_{2n};\ \dots;\ K_{k1},\dots, K_{kn}).$$

One way to deal with this is to consider for each i ∈ [k] the corresponding replicator equation:

$$\dot K_{ij} = K_{ij}\left(\partial_{ij} F(K) - \sum_{j'=1}^n K_{ij'}\,\partial_{ij'} F(K)\right), \quad j \in [n].$$

Obviously, this is the gradient field that one obtains by using the product Fisher metric on $(\Delta_{n-1}^k)^\circ$ (Equation (17)):

$$g^{(k,m)}_K(u, v) = \sum_{ij} \frac{1}{K_{ij}}\,u_{ij} v_{ij}.$$

If we replace the metric by the weighted product Fisher metric considered by Kakade (Equation (35)),

$$g^{\rho,m}_K(u, v) = \sum_{ij} \frac{\rho_i}{K_{ij}}\,u_{ij} v_{ij},$$

then we obtain

$$\dot K_{ij} = \frac{K_{ij}}{\rho_i}\left(\partial_{ij} F(K) - \sum_{j'=1}^n K_{ij'}\,\partial_{ij'} F(K)\right), \quad j \in [n].$$

7.3. The Example of Mean Fitness

Next, we want to study how the gradient flows with respect to different metrics compare. We restrict to the class of metrics $g^{\rho,m}$ (Equation (35)), where $\rho \in \Delta_{k-1}$ is a probability distribution. In principle, one could drop the normalization condition $\sum_i \rho_i = 1$ and allow arbitrary coefficients $\rho_i$. However, it is clear that the rate of convergence can always be increased by scaling all values $\rho_i$ with a common positive factor. Therefore, some normalization condition is needed for ρ.

With a probability distribution $\rho \in \Delta_{k-1}$ and fitness values $F_{ij}$, let us consider again the example of an expectation value function:

$$\bar F(K) = \sum_{i=1}^k p_i \sum_{j=1}^n K_{ij} F_{ij}.$$

With $\partial_{ij}\bar F(K) = p_i F_{ij}$, this leads to:

$$\dot K_{ij} = \frac{p_i}{\rho_i}\,K_{ij}\left(F_{ij} - \sum_{j'=1}^n K_{ij'} F_{ij'}\right), \quad j \in [n].$$

The corresponding solutions are given by:

$$K_{ij}(t) = \frac{K_{ij}(0)\,e^{t\frac{p_i}{\rho_i}F_{ij}}}{\sum_{j'=1}^n K_{ij'}(0)\,e^{t\frac{p_i}{\rho_i}F_{ij'}}}, \quad j \in [n].$$

Since $\operatorname{argmax}_j\!\big(\tfrac{p_i}{\rho_i}F_{ij}\big)$ and $\operatorname{argmin}_j\!\big(\tfrac{p_i}{\rho_i}F_{ij}\big)$ are independent of $\rho_i > 0$, the limit points are given independently of the chosen ρ as:

$$\lim_{t\to+\infty} K_{ij}(t) = \begin{cases} \dfrac{K_{ij}(0)}{\sum_{j'\in\operatorname{argmax}_l F_{il}} K_{ij'}(0)}, & \text{if } j \in \operatorname{argmax}_l F_{il}, \\ 0, & \text{else}, \end{cases}$$

and:

$$\lim_{t\to-\infty} K_{ij}(t) = \begin{cases} \dfrac{K_{ij}(0)}{\sum_{j'\in\operatorname{argmin}_l F_{il}} K_{ij'}(0)}, & \text{if } j \in \operatorname{argmin}_l F_{il}, \\ 0, & \text{else}. \end{cases}$$

This is consistent with the fact that the critical points of gradient fields are independent of the chosen Riemannian metric. However, the speed of convergence does depend on the metric:

For each i, let $G_i = \max_j F_{ij}$ and $g_i = \max_{j\notin\operatorname{argmax}_l F_{il}} F_{ij}$ be the largest and second-largest values in the i-th row of $F_{ij}$, respectively. Then, as t → ∞,

$$1 - \sum_{j\in\operatorname{argmax}_l F_{il}} K_{ij}(t) = O\!\left(e^{-t\,\frac{p_i}{\rho_i}(G_i - g_i)}\right), \quad \text{for all } i \in [k].$$

Therefore,

$$\lim_{s\to\infty}\bar F(K(s)) - \bar F(K(t)) = O\!\left(e^{-t\,\inf_i\{\frac{p_i}{\rho_i}(G_i - g_i)\}}\right).$$

Thus, in the long run, the rate of convergence is given by $\inf_i\{\frac{p_i}{\rho_i}(G_i - g_i)\}$, which depends on the parameter ρ of the metric. As a result, in this case study, the optimal choice of the $\rho_i$, i.e., the one with the largest convergence rate, can be computed if the numbers $G_i$ and $g_i$ are known.

Consider, for example, the case that the differences $G_i - g_i$ are of comparable sizes for all i. Then, we need to find the choice of ρ that maximizes $\inf_i\{\frac{p_i}{\rho_i}\}$. Clearly, $\inf_i\{\frac{p_i}{\rho_i}\} \leq 1$ (since there is always an index i with $p_i \leq \rho_i$). Equality is attained for the choice $\rho_i = p_i$. Thus, we recover the choice of Kakade.
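The following sketch (ours; the fitness values and distributions are arbitrary examples) compares the flow above for uniform weights and for Kakade's choice ρ = p: the limit points coincide, but the expected reward converges at different speeds.

```python
import numpy as np

def flow_solution(K0, F, p, rho, t):
    """Row-wise closed-form solution: K_ij(t) proportional to K_ij(0) exp(t (p_i/rho_i) F_ij)."""
    W = K0 * np.exp(t * (p / rho)[:, None] * F)
    return W / W.sum(axis=1, keepdims=True)

def expected_reward(K, F, p):
    """Mean fitness sum_i p_i sum_j K_ij F_ij."""
    return float(p @ np.sum(K * F, axis=1))

K0 = np.array([[0.5, 0.5],
               [0.5, 0.5]])
F = np.array([[1.0, 0.0],
              [0.0, 2.0]])
p = np.array([0.9, 0.1])
for rho in [np.array([0.5, 0.5]), p]:              # uniform weights vs. rho = p (Kakade)
    rewards = [expected_reward(flow_solution(K0, F, p, rho, t), F, p) for t in (0, 2, 5, 10)]
    print(rho, [round(r, 4) for r in rewards])
```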

8. Conclusions

So, which Riemannian metric should one use in practice on the set of stochastic matrices, $(\Delta_{n-1}^k)^\circ$? The results provided in this manuscript give different answers, depending on the approach. In all cases, the characterized Riemannian metrics are products of Fisher metrics with suitable factor weights. Theorem 11 suggests to use a factor weight proportional to 1/k, and Theorem 16 suggests to use a constant weight independent of k. In many cases, it is possible to work within a single conditional polytope $(\Delta_{n-1}^k)^\circ$ and a fixed k, and then, these two results are basically equivalent. On the other hand, Theorem 17 gives an answer that allows arbitrary factor weights ρ.

Which metric performs best obviously depends on the concrete application. The first observation is that in order to use the metric gρ,m of Theorem 17, it is necessary to know ρ. If the problem at hand suggests a natural marginal distribution ρ, then it is natural to make use of it and choose the metric gρ,m. Even if ρ is not known at the beginning, a learning system might try to learn it to improve its performance.

On the other hand, there may be situations where there is no natural choice of the weights ρ. Observe that ρ breaks the symmetry of permuting the rows of a stochastic matrix. This is also expressed by the structural difference between Theorems 11 and 16 on the one side and Theorem 17 on the other. While the first two theorems provide an invariance metric characterization, Theorem 17 provides a “covariance” classification; that is, the metrics gρ,m are not invariant under conditional embeddings, but they transform in a controlled manner. This again illustrates that the choice of a metric should depend on which mappings are natural to consider, e.g., which mappings describe the symmetries of a given problem.

For example, consider a utility function of the form $F = \sum_i \rho_i \sum_j K_{ij} F_{ij}$. Row permutations do not leave $g^{\rho,m}$ invariant (for a general ρ), but they are not symmetries of the utility function F either, and hence, they are not very natural mappings to consider. However, row permutations transform the metric $g^{\rho,m}$ and the utility function in a controlled manner, in such a way that the two transformations match. Therefore, in this case, it is natural to use $g^{\rho,m}$. On the other hand, when studying problems that are symmetric under all row permutations, it is more natural to use the invariant metric $g^{(k,m)}$.

Acknowledgments

The authors are grateful to Keyan Zahedi for discussions related to policy gradient methods in robotics applications. Guido Montúfar thanks the Santa Fe Institute for hosting him during the initial work on this article. Johannes Rauh acknowledges support by the VW Foundation. This work was supported in part by the DFG Priority Program, Autonomous Learning (DFG-SPP 1527).

Author Contributions

All authors contributed to the design of the research. The research was carried out by all authors, with main contributions by Guido Montúfar and Johannes Rauh. The manuscript was written by Guido Montúfar, Johannes Rauh and Nihat Ay. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A. Conditions for Positive Definiteness

Equation (16) in Lebanon's Theorem 6 defines a Riemannian metric whenever it defines a positive-definite quadratic form. The next proposition gives necessary and sufficient conditions for this to be the case.

Proposition 18. For each pair k ≥ 1 and m ≥ 2, consider the tensor on $\mathbb{R}^{k\times m}_+$ defined by:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = A(|M|) + \delta_{ac}\left(B(|M|)\,\frac{|M|}{|M_a|} + \delta_{bd}\,C(|M|)\,\frac{|M|}{M_{ab}}\right)$$

for some differentiable functions $A, B, C \in C^\infty(\mathbb{R}_+)$. The tensor $g^{(k,m)}$ defines a Riemannian metric for all k and m if and only if C(α) > 0, B(α) + C(α) > 0 and A(α) + B(α) + C(α) > 0 for all $\alpha \in \mathbb{R}_+$.

Proof. The tensors are Riemannian metrics when:

$$g^{(k,m)}_M(V) = A(|M|)\left(\sum_{ab} V_{ab}\right)^2 + B(|M|)\sum_a \frac{|M|}{|M_a|}\left(\sum_b V_{ab}\right)^2 + C(|M|)\sum_{ab} \frac{|M|}{M_{ab}}\,V_{ab}^2$$

is strictly positive for all non-zero $V \in \mathbb{R}^{k\times m}$, for all $M \in \mathbb{R}^{k\times m}_+$.

We can derive necessary conditions on the functions A, B, C from some basic observations. Choosing $V = \partial_{ab}$ in Equation (A2) shows that $A(|M|) + \frac{|M|}{|M_a|}B(|M|) + \frac{|M|}{M_{ab}}C(|M|)$ has to be positive for all a ∈ [k], b ∈ [m], for all $M \in \mathbb{R}^{k\times m}_+$. Since $M_{ab}$ can be arbitrarily small for fixed |M| and $|M_a|$, we see that C has to be non-negative. Since we can choose $|M_a| \approx M_{ab} \ll |M|$ for a fixed |M|, we find that B + C has to be non-negative. Further, since we can choose $M_{ab} \approx |M_a| \approx |M|$ for a given |M|, we find that A + B + C has to be non-negative. This shows that the quadratic form is positive definite only if C ≥ 0, B + C ≥ 0, A + B + C ≥ 0. Since the cone of positive definite matrices is open, these inequalities have to be strictly satisfied. In the following, we study sufficient conditions.

For any given $M \in \mathbb{R}^{k\times m}_+$, we can write Equation (A2) as a product $V^\top G V$, for all $V \in \mathbb{R}^{km}$, where $G = G_A + G_B + G_C \in \mathbb{R}^{km\times km}$ is the sum of a matrix $G_A$ with all entries equal to A(|M|), a block diagonal matrix $G_B$ whose a-th block has all entries equal to $\frac{|M|}{|M_a|}B(|M|)$, and a diagonal matrix $G_C$ with diagonal entries equal to $\frac{|M|}{M_{ab}}C(|M|)$. The matrix G is obviously symmetric, and by Sylvester's criterion, it is positive definite iff all its leading principal minors are positive. We can evaluate the minors using Sylvester's determinant theorem. That theorem states that for any invertible m × m matrix X, an m × n matrix Y and an n × m matrix Z, one has the equality $\det(X + YZ) = \det(X)\det(I_n + ZX^{-1}Y)$.

Let us consider a leading square block G′, consisting of all entries $G_{ab,cd}$ of G with row-index pairs (a, b) satisfying b ∈ [m] for all a < a′ and b ≤ b′ for a = a′, for some a′ ≤ k and b′ ≤ m; and the same restriction for the column index pairs. The corresponding block $G'_A + G'_B$ can be written as the rank-a′ matrix YZ, with Y consisting of the columns $\mathbb{1}_a$ for all a ≤ a′ and Z consisting of the rows $A + \mathbb{1}_a\frac{|M|}{|M_a|}B$ for all a ≤ a′. Hence, the determinant of G′ is equal to:

$$\det(G') = \det(G'_C)\,\det\!\left(I_{a'} + Z\,G_C'^{-1}\,Y\right).$$

Since G′C is diagonal, the first term is just:

$$\det(G'_C) = \left(\prod_{a<a'}\prod_b \frac{|M|}{M_{ab}}\,C\right)\left(\prod_{b\le b'} \frac{|M|}{M_{a'b}}\,C\right).$$

The matrix in the second term of Equation (A3) is given by:

$$I_{a'} + Z\,G_C'^{-1}\,Y = \left(\delta_{a\tilde a} + \frac{A_{\tilde a}}{C} + \delta_{a\tilde a}\,\frac{B_{\tilde a}}{C}\right)_{a,\tilde a\le a'}.$$

By Sylvester’s determinant theorem, we have:

$$\det\!\left(I_{a'} + Z\,G_C'^{-1}\,Y\right) = \prod_{a\le a'}\left(1 + \frac{B_a}{C}\right)\,\left(1 + \sum_{a\le a'}\frac{A_a}{C + B_a}\right),$$

where $A_a = \frac{|M_a|}{|M|}A$ for a < a′ and $A_{a'} = \frac{\sum_{b\le b'} M_{a'b}}{|M|}A$, and $B_a = B$ for a < a′ and $B_{a'} = \frac{\sum_{b\le b'} M_{a'b}}{|M_{a'}|}B$.

This shows that the matrix G is positive definite for all M if and only if C > 0, C + B > 0 and $\left(1 + \sum_{a\le a'}\frac{A_a}{C + B_a}\right) > 0$ for all a′ and b′. The latter inequality is satisfied whenever A + B + C > 0. This completes the proof. □

B. Proofs of the Invariance Characterization

The following lemma follows directly from the definition and contains all the technical details we need for the proofs.

Lemma 19. The push-forward $f_* \colon T_M\mathbb{R}_+^{k\times m} \to T_{f(M)}\mathbb{R}_+^{l\times n}$ of a map $f$ from $\mathbb{R}_+^{k\times m}$ to $\mathbb{R}_+^{l\times n}$ of the kind defined above is given by:

$$f_*(\partial_{ab}) = \sum_{i=1}^{l}\sum_{j=1}^{n}\bar{R}_{ai}\,Q^{(a)}_{bj}\,\partial_{ij},$$

and the pull-back of a metric $g^{(l,n)}$ on $\mathbb{R}_+^{l\times n}$ through $f$ is given by:

$$(f^* g^{(l,n)})_M(\partial_{ab}, \partial_{cd}) = g^{(l,n)}_{f(M)}(f_*\partial_{ab},\, f_*\partial_{cd}) = \sum_{i=1}^{l}\sum_{j=1}^{n}\sum_{s=1}^{l}\sum_{t=1}^{n}\bar{R}_{ai}\bar{R}_{cs}\,Q^{(a)}_{bj}Q^{(c)}_{dt}\; g^{(l,n)}_{f(M)}(\partial_{ij}, \partial_{st}). \qquad (A8)$$
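As a concrete illustration of Lemma 19 (a hedged sketch, not part of the paper; the matrices $\bar R$, $Q^{(a)}$ and the target metric below are arbitrary choices), the pull-back can be computed by assembling the Jacobian with entries $\bar R_{ai} Q^{(a)}_{bj}$ and conjugating the Gram matrix of $g^{(l,n)}$ with it.

```python
# Hedged sketch: pull back a metric on R_+^{l x n} through a map described by
# a k x l matrix Rbar and m x n matrices Q[a], a = 1..k, following the index
# bookkeeping of Lemma 19. All concrete inputs below are made up.
import numpy as np

def pullback_metric(Rbar, Q, G_target):
    """Return the (k*m) x (k*m) Gram matrix of (f^* g)(d_ab, d_cd).

    Rbar     : (k, l) array, Rbar[a, i]
    Q        : (k, m, n) array, Q[a, b, j] = Q^{(a)}_{bj}
    G_target : (l*n, l*n) array, G_target[i*n + j, s*n + t] = g(d_ij, d_st)
    """
    k, l = Rbar.shape
    _, m, n = Q.shape
    J = np.zeros((l * n, k * m))   # Jacobian: J[(i,j),(a,b)] = Rbar[a,i] Q[a,b,j]
    for a in range(k):
        for b in range(m):
            J[:, a * m + b] = np.outer(Rbar[a], Q[a, b]).reshape(-1)
    return J.T @ G_target @ J

# Tiny example: k = m = l = n = 2, Rbar a permutation, identity Q's, Euclidean
# target metric; the pull-back is again the Euclidean metric.
Rbar = np.array([[0., 1.], [1., 0.]])
Q = np.stack([np.eye(2), np.eye(2)])
print(np.allclose(pullback_metric(Rbar, Q, np.eye(4)), np.eye(4)))  # True
```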

Proof of Theorem 11. We follow the strategy of [5,14]. The idea is to consider subclasses of the maps from $\mathbb{R}_+^{k\times m}$ to $\mathbb{R}_+^{l\times n}$ introduced above and to evaluate their push-forward and pull-back maps together with the isometry requirement. This yields restrictions on the possible metrics, eventually fully characterizing them.

First. Consider the maps $h_{\pi,\sigma}\colon \mathbb{R}_+^{k\times m} \to \mathbb{R}_+^{k\times m}$, resulting from permutation matrices $Q^{(a)} = P_{\pi_a}$, $\pi_a\colon[m]\to[m]$, for all $a \in [k]$, and $\bar R = P_\sigma$, $\sigma\colon[k]\to[k]$. Requiring isometry yields:

$$(h_{\pi,\sigma})_*(\partial_{ab}) = \partial_{\sigma(a)\pi_a(b)},$$
$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = g^{(k,m)}_{h_{\pi,\sigma}(M)}\big(\partial_{\sigma(a)\pi_a(b)},\, \partial_{\sigma(c)\pi_c(d)}\big).$$

Second. Consider the maps $r_{zw}\colon \mathbb{R}_+^{k\times m} \to \mathbb{R}_+^{kz\times mw}$ defined by $Q^{(1)} = \cdots = Q^{(k)} \in \mathbb{R}^{m\times mw}$ and $\bar R \in \mathbb{R}^{k\times kz}$ being uniform. In this case, for some permutations $\pi$ and $\sigma$,

$$(r_{zw})_*(\partial_{ab}) = \frac{1}{w}\sum_{i=1}^{z}\sum_{j=1}^{w}\partial_{\sigma(a)(i)\,\pi(b)(j)},$$
$$(r_{zw}^*\, g^{(kz,mw)})_M(\partial_{ab}, \partial_{cd}) = \frac{1}{w^2}\sum_{i=1}^{z}\sum_{j=1}^{w}\sum_{s=1}^{z}\sum_{t=1}^{w} g^{(kz,mw)}_{r_{zw}(M)}\big(\partial_{\sigma(a)(i)\,\pi(b)(j)},\, \partial_{\sigma(c)(s)\,\pi(d)(t)}\big).$$

Third. For a rational matrix $M = \frac{1}{Z}\tilde M$ with $\tilde M \in \mathbb{N}^{k\times m}$ and row sums $|\tilde M_a| = N$ for all $a \in [k]$, consider the map $\upsilon_M\colon \mathbb{R}_+^{k\times m} \to \mathbb{R}_+^{kz\times N}$ that maps $M$ to a constant matrix. In this case, $\bar R \in \mathbb{R}^{k\times kz}$, and the $b$-th row of $Q^{(a)}$ has $\tilde M_{ab}$ entries with value $\frac{1}{\tilde M_{ab}}$, at positions $\pi(ab)([\tilde M_{ab}]) \subseteq [N]$, and:

$$(\upsilon_M)_*(\partial_{ab}) = \frac{1}{\tilde M_{ab}}\sum_{i=1}^{z}\sum_{j=1}^{\tilde M_{ab}}\partial_{\sigma(a)(i)\,\pi(ab)(j)},$$
$$(\upsilon_M^*\, g^{(kz,N)})_M(\partial_{ab}, \partial_{cd}) = \frac{1}{\tilde M_{ab}}\frac{1}{\tilde M_{cd}}\sum_{i=1}^{z}\sum_{j=1}^{\tilde M_{ab}}\sum_{s=1}^{z}\sum_{t=1}^{\tilde M_{cd}} g^{(kz,N)}_{\upsilon_M(M)}\big(\partial_{\sigma(a)(i)\,\pi(ab)(j)},\, \partial_{\sigma(c)(s)\,\pi(cd)(t)}\big).$$
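For concreteness, here is a small sketch of the third family of maps (illustration only, not from the paper): $\tilde M$, $Z$ and $z$ are arbitrary choices, the positions $\pi(ab)([\tilde M_{ab}])$ are taken as consecutive blocks, and $\bar R$ is assumed to be the 0/1 row-replication matrix suggested by the push-forward formula above. The linear map whose differential is given by Lemma 19 then indeed sends $M$ to a constant matrix.

```python
# Hedged sketch of upsilon_M for a small rational M = (1/Z) * Mtilde with equal
# integer row sums N. The placement of the positions pi(ab) and the 0/1 form of
# Rbar are illustrative assumptions.
import numpy as np

Z = 10
Mtilde = np.array([[2, 3, 5],
                   [4, 4, 2]])              # integer entries, row sums N = 10
M = Mtilde / Z
k, m = Mtilde.shape
N = int(Mtilde.sum(axis=1)[0])
z = 2                                        # each row is replicated z times

Rbar = np.kron(np.eye(k), np.ones(z))        # (k, k*z) row-replication matrix
Q = np.zeros((k, m, N))                      # Q[a]: m x N
for a in range(k):
    pos = 0
    for b in range(m):                       # row b: Mtilde[a,b] entries of 1/Mtilde[a,b]
        Q[a, b, pos:pos + Mtilde[a, b]] = 1.0 / Mtilde[a, b]
        pos += Mtilde[a, b]

# Linear map with the differential of Lemma 19:
# f(M)_{ij} = sum_{a,b} Rbar[a,i] * Q[a,b,j] * M[a,b].
fM = np.einsum('ai,abj,ab->ij', Rbar, Q, M)
print(np.allclose(fM, 1.0 / Z))              # True: a constant (kz x N) matrix
```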

Step 1: $a \neq c$. Consider a constant matrix $M = U$. Then:

$$g^{(k,m)}_U(\partial_{a_1 b_1}, \partial_{c_1 d_1}) = g^{(k,m)}_{h_{\pi,\sigma}(U)}(\partial_{a_2 b_2}, \partial_{c_2 d_2}) = g^{(k,m)}_U(\partial_{a_2 b_2}, \partial_{c_2 d_2}).$$

This implies that $g^{(k,m)}_U(\partial_{ab}, \partial_{cd}) = \hat A(k,m)$ whenever $a \neq c$.

Using the second type of map, we get:

$$\hat A(k,m) = \frac{z^2 w^2}{w^2}\,\hat A(kz, mw),$$

which implies that $k^2\,\hat A(k,m)$ takes the same value, $A$, for all $k$ and $m$, i.e., $g^{(k,m)}_U(\partial_{ab}, \partial_{cd}) = \frac{A}{k^2}$ when $a \neq c$. Considering a rational matrix $M$ and the map $\upsilon_M$ yields:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = \frac{A}{k^2}.$$

Step 2: $a = c$, $b \neq d$. By arguments similar to those in Step 1, $g^{(k,m)}_U(\partial_{ab}, \partial_{ad}) = \hat B(k,m)$. Evaluating the map $r_{zw}$ yields:

$$\hat B(k,m) = \frac{z w^2}{w^2}\,\hat B(kz,mw) + \frac{z(z-1)w^2}{w^2}\,\frac{A}{(kz)^2} = z\,\hat B(kz,mw) + \frac{z-1}{z}\,\frac{A}{k^2},$$

and therefore,

$$\frac{1}{z}\Big(\hat B(k,m) - \frac{A}{k^2}\Big) = \hat B(kz,mw) - \frac{A}{(kz)^2},$$

which implies that $\hat B(k,m) - \frac{A}{k^2}$ is independent of $m$ and scales with the inverse of $k$, such that it can be written as $\frac{B}{k}$. Rearranging the terms yields $g^{(k,m)}_U(\partial_{ab}, \partial_{ad}) = \frac{A}{k^2} + \frac{B}{k}$ for $b \neq d$.

For a rational matrix $M$, the pull-back through $\upsilon_M$ then shows:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{ad}) = \frac{A}{k^2} + \frac{B}{k}, \qquad b \neq d.$$

Step 3: $a = c$ and $b = d$. In this case, $g^{(k,m)}_U(\partial_{a_1 b_1}, \partial_{a_1 b_1}) = g^{(k,m)}_U(\partial_{a_2 b_2}, \partial_{a_2 b_2}) = \hat C(k,m)$ and:

$$\hat C(k,m) = \frac{zw}{w^2}\,\hat C(kz,mw) + \frac{zw(w-1)}{w^2}\Big(\frac{A}{(kz)^2} + \frac{B}{kz}\Big) + \frac{z(z-1)w^2}{w^2}\,\frac{A}{(kz)^2},$$

which implies:

$$\frac{k}{m}\Big(\hat C(k,m) - \frac{A}{k^2} - \frac{B}{k}\Big) = \frac{kz}{mw}\Big(\hat C(kz,mw) - \frac{A}{(kz)^2} - \frac{B}{kz}\Big),$$

such that the left-hand side is a constant $C$, and $g^{(k,m)}_U(\partial_{ab}, \partial_{ab}) = \frac{A}{k^2} + \frac{B}{k} + \frac{m}{k}C$. Now, for a rational matrix $M$, pulling back through $\upsilon_M$ gives:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{ab}) = \frac{A}{k^2} + \frac{B}{k} + \frac{|M|}{M_{ab}}\,\frac{C}{k^2}.$$

Summarizing, we found:

$$g^{(k,m)}_M(\partial_{ab}, \partial_{cd}) = \frac{A}{k^2} + \delta_{ac}\Big(\frac{B}{k} + \delta_{bd}\,\frac{|M|}{M_{ab}}\,\frac{C}{k^2}\Big),$$

which proves the first statement. The second statement follows by plugging Equation (23) into Equation (A8). Finally, the statement about the positive-definiteness is a direct consequence of Proposition 7. □
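As a sanity check on the form just derived (a hedged numerical sketch, not from the paper; the constants $A$, $B$, $C$, the matrix $M$ and the factors $z$, $w$ are arbitrary choices), one can verify that a metric of this form is invariant under a replication map of the second type, using the pull-back formula of Lemma 19. Invariance under the permutation maps of the first type holds for the same reason, since they only relabel rows and columns.

```python
# Hedged sketch: the family g_M(d_ab, d_cd) = A/k^2 + delta_ac (B/k + delta_bd C|M|/(k^2 M_ab))
# is preserved by a replication map r_zw (0/1 row replication, uniform column spreading).
import numpy as np

def metric_matrix(M, A, B, C):
    """Gram matrix of g_M in the basis d_ab, with index a*m + b."""
    k, m = M.shape
    tot = M.sum()
    G = np.full((k * m, k * m), A / k**2)
    for a in range(k):
        blk = slice(a * m, (a + 1) * m)
        G[blk, blk] += B / k                                     # delta_ac term
        for b in range(m):
            G[a*m + b, a*m + b] += C * tot / (k**2 * M[a, b])    # delta_bd term
    return G

def pullback(Rbar, Q, G_target):
    """(f^* g)_M via the Jacobian J[(i,j),(a,b)] = Rbar[a,i] Q[a,b,j] (Lemma 19)."""
    k, l = Rbar.shape
    _, m, n = Q.shape
    J = np.zeros((l * n, k * m))
    for a in range(k):
        for b in range(m):
            J[:, a * m + b] = np.outer(Rbar[a], Q[a, b]).reshape(-1)
    return J.T @ G_target @ J

A, B, C = 0.2, 0.3, 0.7
k, m, z, w = 2, 3, 2, 2
M = np.random.default_rng(1).uniform(0.1, 1.0, size=(k, m))

Rbar = np.kron(np.eye(k), np.ones(z))                # (k, k*z), 0/1 entries
Qrow = np.kron(np.eye(m), np.ones(w)) / w            # (m, m*w), rows sum to 1
Q = np.stack([Qrow] * k)
fM = np.einsum('ai,abj,ab->ij', Rbar, Q, M)          # image of M, a (kz x mw) matrix

lhs = metric_matrix(M, A, B, C)                      # g at M
rhs = pullback(Rbar, Q, metric_matrix(fM, A, B, C))  # pulled-back g at f(M)
print(np.allclose(lhs, rhs))                         # True: the isometry requirement holds
```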

Proof of Theorem 12. Suppose, contrary to the claim, that a family of metrics $g^{(k,m)}_M$ exists that is invariant with respect to every conditional embedding. By Theorem 11, these metrics are of the form of Equation (23). To prove the claim, we only need to show that $A$, $B$ and $C$ vanish. In the following, we study conditional embeddings where $Q$ consists of identity matrices and evaluate the isometry requirement $(f^* g^{(l,n)})_M(\partial_{ab}, \partial_{cd}) = g^{(k,m)}_M(\partial_{ab}, \partial_{cd})$.

Step 1: In the case $a \neq c$, we obtain from the invariance requirement and Equation (A8), that:

$$\frac{A}{k^2} = |\bar R_a|\,|\bar R_c|\,\frac{A}{l^2}. \qquad (A25)$$

Observe that:

$$\frac{1}{k}\sum_{i=1}^{k}|\bar R_i| = \frac{1}{k}\,|\bar R| = \frac{l}{k}.$$

In fact, $|\bar R_i|$ is the cardinality of the $i$-th block of the partition belonging to $\bar R$. Therefore, if we choose $\bar R$ to be the partition indicator matrix of a partition that is not homogeneous and in which $|\bar R_a| > l/k$ and $|\bar R_c| > l/k$ (for instance, $k = 3$, $l = 5$ with block sizes $2, 2, 1$), then $|\bar R_a|\,|\bar R_c| \neq l^2/k^2$, and Equation (A25) implies that $A = 0$.

Step 2: In the case $a = c$ and $b \neq d$, we obtain from invariance and Equation (A8), that:

$$\frac{B}{k} = \sum_{i=1}^{l}\sum_{s=1}^{l}\bar R_{ai}\,\bar R_{as}\,\delta_{is}\,\frac{B}{l} = |\bar R_a|\,\frac{B}{l}.$$

Again, we may choose $\bar R_a$ in such a way that $|\bar R_a| \neq l/k$ and find that $B = 0$.

Step 3: Finally, in the case a = c and b = d, we obtain from invariance and Equation (A8), that:

$$\frac{C\,|M|}{k^2\,M_{ab}} = \sum_{i=1}^{l}\sum_{s=1}^{l}\bar R_{ai}\,\bar R_{as}\,\delta_{is}\,\frac{C\,|\bar R M|}{l^2\,(\bar R M)_{ib}} = |\bar R_a|\,\frac{C\,|\bar R M|}{l^2\,M_{ab}}.$$

If we choose $\bar R_a$ such that $|\bar R_a|\,|\bar R M| \neq \frac{l^2}{k^2}\,|M|$, then we see that $C = 0$. Therefore, $g^{(k,m)}$ is the zero tensor, which is not a metric. □

References

  1. Amari, S. Natural gradient works efficiently in learning. Neural Comput. 1998, 10, 251–276.
  2. Kakade, S. A Natural Policy Gradient. In Advances in Neural Information Processing Systems 14; MIT Press: Cambridge, MA, USA, 2001; pp. 1531–1538.
  3. Shahshahani, S. A New Mathematical Framework for the Study of Linkage and Selection; American Mathematical Society: Providence, RI, USA, 1979.
  4. Chentsov, N. Statistical Decision Rules and Optimal Inference; American Mathematical Society: Providence, RI, USA, 1982.
  5. Campbell, L. An extended Čencov characterization of the information metric. Proc. Am. Math. Soc. 1986, 98, 135–141.
  6. Sutton, R.S.; McAllester, D.; Singh, S.; Mansour, Y. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Advances in Neural Information Processing Systems 12; MIT Press: Cambridge, MA, USA, 2000; pp. 1057–1063.
  7. Marbach, P.; Tsitsiklis, J. Simulation-based optimization of Markov reward processes. IEEE Trans. Autom. Control 2001, 46, 191–209.
  8. Montúfar, G.; Ay, N.; Zahedi, K. Expressive power of conditional restricted Boltzmann machines for sensorimotor control. arXiv 2014, arXiv:1402.3346.
  9. Ay, N.; Montúfar, G.; Rauh, J. Selection Criteria for Neuromanifolds of Stochastic Dynamics. In Advances in Cognitive Neurodynamics (III); Yamaguchi, Y., Ed.; Springer-Verlag: Dordrecht, The Netherlands, 2013; pp. 147–154.
  10. Peters, J.; Schaal, S. Natural Actor-Critic. Neurocomputing 2008, 71, 1180–1190.
  11. Peters, J.; Schaal, S. Policy Gradient Methods for Robotics. In Proceedings of the IEEE International Conference on Intelligent Robotics Systems (IROS 2006), Beijing, China, 9–15 October 2006.
  12. Peters, J.; Vijayakumar, S.; Schaal, S. Reinforcement Learning for Humanoid Robotics. In Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots, Karlsruhe, Germany, 29–30 September 2003; pp. 1–20.
  13. Bagnell, J.A.; Schneider, J. Covariant Policy Search. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 9–15 August 2003; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2003; pp. 1019–1024.
  14. Lebanon, G. Axiomatic geometry of conditional models. IEEE Trans. Inf. Theory 2005, 51, 1283–1294.
  15. Lebanon, G. An Extended Čencov-Campbell Characterization of Conditional Information Geometry. In Proceedings of the 20th Conference in Uncertainty in Artificial Intelligence (UAI 04), Banff, AB, Canada, 7–11 July 2004; Chickering, D.M., Halpern, J.Y., Eds.; AUAI Press: Arlington, VA, USA, 2004; pp. 341–345.
  16. Barndorff-Nielsen, O. Information and Exponential Families: In Statistical Theory; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1978.
  17. Brown, L.D. Fundamentals of Statistical Exponential Families with Applications in Statistical Decision Theory; Institute of Mathematical Statistics: Hayward, CA, USA, 1986.
  18. Zahedi, K.; Ay, N.; Der, R. Higher coordination with less control—A result of information maximization in the sensorimotor loop. Adapt. Behav. 2010, 18.
  19. Hofbauer, J.; Sigmund, K. Evolutionary Games and Population Dynamics; Cambridge University Press: Cambridge, UK, 1998.
  20. Ay, N.; Erb, I. On a notion of linear replicator equations. J. Dyn. Differ. Equ. 2005, 17, 427–451.
Figure 1. An interpretation for Lebanon maps and conditional embeddings. The variable X′ is computed from X by R, and Y′ is computed from X and Y by Q.
Figure 2. An illustration of different embeddings of the conditional polytope $\Delta_{m-1}^{k}$ in a probability simplex. The left panel shows an embedding in $\Delta_{m^k-1}$ by the inverse of the moment map $\mu$ of the independence model. The right panel shows an affine embedding in $\Delta_{km-1}$ as a set of joint probability distributions for two different specifications of marginals.
