Article

Lagrangian Function on the Finite State Space Statistical Bundle

Giovanni Pistone
De Castro Statistics, Collegio Carlo Alberto, 10122 Torino, Italy
Entropy 2018, 20(2), 139; https://doi.org/10.3390/e20020139
Submission received: 26 December 2017 / Revised: 21 January 2018 / Accepted: 24 January 2018 / Published: 22 February 2018
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)

Abstract

The statistical bundle is the set of couples $(Q,W)$ of a probability density $Q$ and a random variable $W$ such that $\mathbb E_Q[W] = 0$. On a finite state space, we assume $Q$ to be a probability density with respect to the uniform probability and give an affine atlas of charts such that the resulting manifold is a model for Information Geometry. Velocity and acceleration of a one-dimensional statistical model are computed in this set-up. The Euler–Lagrange equations are derived from the Lagrange action integral. An example Lagrangian using minus the entropy as potential energy is briefly discussed.

1. Introduction

The set-up of classical Lagrangian Mechanics is a finite-dimensional Riemannian manifold. For example, see the monographs by V.I. Arnold ([1], Chapters III–IV), R. Abraham and J.E. Marsden ([2], Chapter 3), and J.E. Marsden and T.S. Ratiu ([3], Chapter 7). Classical Information Geometry, as first defined in the monograph by S.-I. Amari and H. Nagaoka [4], views a parametric statistical model as a manifold endowed with a dually flat connection. In a recent paper, M. Leok and J. Zhang [5] have pointed out the natural relation between these two topics and have given a wide overview of the mathematical structures involved.
In the present paper, we take up the same research program with two further qualifications. First, we take a non-parametric approach by considering the full set of positive probability functions on a finite set, as was done, for example, in our review paper [6]. The discussion is restricted here to a finite state space in order to avoid difficult technical problems. Second, we consider a specific expression of the tangent space of the statistical manifold, namely a Hilbert bundle that we call the statistical bundle. Our aim is to emphasize the basic statistical intuition behind the geometric quantities involved. Because of that, we chose to systematically use the language of non-parametric differential geometry as it is developed in the monograph by S. Lang [7].
Herein, we use our version of Information Geometry; see the review paper [6]. Preliminary versions of this paper were presented at the SigmaPhi2017 Conference held in Corfu, Greece, 10–14 July 2017, and at a seminar held at Collegio Carlo Alberto, Moncalieri, on 5 September 2017. In these early versions, we did not refer to Leok and Zhang's work, of which we were unaware at the time.
In Section 2, we review the definition and properties of the statistical bundle and of the affine atlas that endows it with both a manifold structure and a natural family of transports between the fibers. In Section 3, we develop the formalism of the tangent space of the statistical bundle and derive the expression of the velocity and the acceleration of a one-dimensional statistical model in the given affine atlas. The Lagrangian formalism and the derivation of the Euler–Lagrange equations from the action integral, together with a running example, are discussed in Section 4 and Section 5.

2. Statistical Bundle

We consider a finite sample space $\Omega$, with $\#\Omega = N$. The probability simplex is $\Delta(\Omega)$, and $\mathring\Delta(\Omega)$ is its interior. The uniform probability on $\Omega$ is denoted by $\mu$, $\mu(x) = 1/N$, $x \in \Omega$. The maximal exponential family $\varepsilon_\mu$ is the set of all strictly positive probability densities of $(\Omega,\mu)$. The expected value of $f \colon \Omega \to \mathbb R$ with respect to the density $P \in \varepsilon_\mu$ is denoted by
$$\mathbb E_P[f] = \mathbb E_\mu[fP] = \frac1N\sum_{x\in\Omega} f(x)P(x).$$
In [6,8,9], we made the case for the statistical bundle being the key structure of Information Geometry. The statistical bundle on the sample space $\Omega$ is
$$S\varepsilon_\mu = \left\{(Q,V)\ \middle|\ Q \in \varepsilon_\mu,\ \mathbb E_Q[V] = 0\right\}.$$
The statistical bundle is a semi-algebraic subset of $\mathbb R^{2N}$; i.e., it is defined by algebraic equations and strict inequalities. It is trivially a real manifold. At each $Q \in \varepsilon_\mu$, the fiber $S_Q\varepsilon_\mu$ is endowed with the scalar product
$$(V_1, V_2) \mapsto \langle V_1, V_2\rangle_Q = \mathbb E_Q[V_1V_2] = \operatorname{Cov}_Q(V_1, V_2).$$
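For concreteness, here is a minimal numerical sketch of these objects (plain NumPy; the helper names density, E, center, inner, and the sample values are our own illustrative choices, not part of the paper's formalism): densities are positive vectors with uniform mean equal to one, a fiber element is a $Q$-centered random variable, and the fiber scalar product is the $Q$-covariance.

```python
import numpy as np

# Illustrative only: a tiny finite sample space and the fiber scalar product.
rng = np.random.default_rng(0)
N = 5                                    # #Omega = N; points are x = 0, ..., N-1

def density(weights):
    """Positive weights -> density w.r.t. the uniform probability mu, so that E_mu[Q] = 1."""
    w = np.asarray(weights, dtype=float)
    return w / w.mean()

def E(Q, f):
    """Expected value E_Q[f] = E_mu[f Q] = (1/N) * sum_x f(x) Q(x)."""
    return np.mean(f * Q)

def center(Q, f):
    """Map a random variable into the fiber S_Q by subtracting its Q-mean."""
    return f - E(Q, f)

def inner(Q, V1, V2):
    """Fiber scalar product <V1, V2>_Q = E_Q[V1 V2] = Cov_Q(V1, V2) for centered V1, V2."""
    return E(Q, V1 * V2)

Q = density(rng.uniform(0.5, 2.0, N))
V1 = center(Q, rng.normal(size=N))
V2 = center(Q, rng.normal(size=N))
print(np.isclose(E(Q, V1), 0.0), inner(Q, V1, V2))   # V1 is in the fiber; value of the covariance
```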
To this structure we add a special affine atlas of charts, in order to exhibit a structure of affine manifold that is of interest in statistical applications. The exponential atlas of the statistical bundle $S\varepsilon_\mu$ is the collection of charts given, for each $P\in\varepsilon_\mu$, by
$$s_P \colon S\varepsilon_\mu \ni (Q,V) \mapsto \left(s_P(Q),\ {}^e\mathbb U_Q^P V\right) \in S_P\varepsilon_\mu \times S_P\varepsilon_\mu, \tag{1}$$
where (with a slight abuse of notation)
$$s_P(Q) = \log\frac{Q}{P} - \mathbb E_P\left[\log\frac{Q}{P}\right], \qquad {}^e\mathbb U_Q^P V = V - \mathbb E_P[V].$$
As $s_P(P,V) = (0,V)$, we say that $s_P$ is the chart centered at $P$. If $s_P(Q) = U$, it is easy to derive the exponential form of $Q$ as a density with respect to $P$; namely, $Q = e^{U + \mathbb E_P[\log(Q/P)]}\cdot P$. As $\mathbb E_\mu[Q] = 1$, then $1 = \mathbb E_P\left[e^{U - \mathbb E_P[\log(P/Q)]}\right] = \mathbb E_P\left[e^U\right]e^{-\mathbb E_P[\log(P/Q)]}$, so that the cumulant function $K_P$ is defined on $S_P\varepsilon_\mu$ by
$$K_P(U) = \log\mathbb E_P\left[e^U\right] = \mathbb E_P\left[\log\frac{P}{Q}\right] = D(P\,\|\,Q);$$
that is, $K_P(U)$ is the expression in the chart centered at $P$ of the Kullback–Leibler divergence $Q \mapsto D(P\,\|\,Q)$, and we can write
$$Q = e^{U - K_P(U)}\cdot P = e_P(U).$$
The patch centered at $P$ is
$$s_P^{-1} = e_P \colon (S_P\varepsilon_\mu)^2 \ni (U,W) \mapsto \left(e_P(U),\ {}^e\mathbb U_P^{e_P(U)} W\right) \in S\varepsilon_\mu. \tag{2}$$
In statistical terms, the random variable $\log(Q/P)$ is the point-wise information of $Q$ relative to the reference $P$, while $s_P(Q)$ is its deviation from the mean value at $P$. The expression of the other divergence in the chart centered at $P$ is
$$D(Q\,\|\,P) = \mathbb E_Q\left[\log\frac{Q}{P}\right] = \mathbb E_Q[U] - K_P(U) = \mathbb E_Q[U - K_P(U)].$$
The equation above shows that the two divergences are convex conjugate functions in the proper charts; see [10].
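As a sanity check, the chart, the cumulant function, and the two divergences are easy to reproduce numerically. The following sketch (illustrative only; the helper names and the random densities are ours) verifies that the patch inverts the chart, that $K_P(U) = D(P\,\|\,Q)$, and that $D(Q\,\|\,P) = \mathbb E_Q[U] - K_P(U)$.

```python
import numpy as np

# Illustrative only: chart, cumulant function, patch, and the two divergences.
E = lambda Q, f: np.mean(f * Q)                      # E_Q[f] = E_mu[f Q] on a finite space

def s_P(P, Q):                                       # chart centered at P
    U = np.log(Q / P)
    return U - E(P, U)

def K_P(P, U):                                       # cumulant function K_P(U) = log E_P[exp U]
    return np.log(E(P, np.exp(U)))

def e_P(P, U):                                       # patch: e_P(U) = exp(U - K_P(U)) * P
    return np.exp(U - K_P(P, U)) * P

def KL(Q1, Q2):                                      # Kullback-Leibler divergence D(Q1 || Q2)
    return E(Q1, np.log(Q1 / Q2))

rng = np.random.default_rng(1)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
Q = rng.uniform(0.5, 2.0, N); Q /= Q.mean()

U = s_P(P, Q)
print(np.allclose(e_P(P, U), Q))                     # the patch inverts the chart
print(np.isclose(K_P(P, U), KL(P, Q)))               # K_P(U) = D(P || Q)
print(np.isclose(E(Q, U) - K_P(P, U), KL(Q, P)))     # D(Q || P) = E_Q[U] - K_P(U)
```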
The transition maps of the exponential atlas in Equations (1) and (2) are
$$\begin{aligned}
s_{P_2}\circ e_{P_1}(U,W) &= \left(s_{P_2}(e_{P_1}(U)),\ {}^e\mathbb U_{e_{P_1}(U)}^{P_2}\,{}^e\mathbb U_{P_1}^{e_{P_1}(U)}W\right) \\
&= \left(s_{P_2}\left(e^{U-K_{P_1}(U)}\cdot P_1\right),\ W - \mathbb E_{e_{P_1}(U)}[W] - \mathbb E_{P_2}\left[W - \mathbb E_{e_{P_1}(U)}[W]\right]\right) \\
&= \left(U + \log\frac{P_1}{P_2} - \mathbb E_{P_2}\left[U + \log\frac{P_1}{P_2}\right],\ W - \mathbb E_{P_2}[W]\right) \\
&= \left({}^e\mathbb U_{P_1}^{P_2}U + s_{P_2}(P_1),\ {}^e\mathbb U_{P_1}^{P_2}W\right),
\end{aligned}$$
so that the exponential atlas is indeed affine. Notice that the linear part is ${}^e\mathbb U_{P_1}^{P_2}$.
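The affinity of the transition maps can also be checked numerically. The sketch below (again with our own illustrative helper names) pushes a point of the chart at $P_1$ through the patch $e_{P_1}$ and back through the chart at $P_2$, and compares the result with the affine expression ${}^e\mathbb U_{P_1}^{P_2}U + s_{P_2}(P_1)$.

```python
import numpy as np

# Illustrative only: the transition map s_{P2} o e_{P1} is affine with linear part eU_{P1}^{P2}.
E = lambda Q, f: np.mean(f * Q)
def s_P(P, Q): U = np.log(Q / P); return U - E(P, U)
def K_P(P, U): return np.log(E(P, np.exp(U)))
def e_P(P, U): return np.exp(U - K_P(P, U)) * P
def eU(P, V):  return V - E(P, V)                    # exponential transport into S_P (centering at P)

rng = np.random.default_rng(2)
N = 6
P1 = rng.uniform(0.5, 2.0, N); P1 /= P1.mean()
P2 = rng.uniform(0.5, 2.0, N); P2 /= P2.mean()
U = eU(P1, rng.normal(size=N))                       # a point (U, W) of the chart centered at P1
W = eU(P1, rng.normal(size=N))

Q = e_P(P1, U)                                       # through the patch at P1 ...
V = eU(Q, W)
lhs = (s_P(P2, Q), eU(P2, V))                        # ... and back through the chart at P2
rhs = (eU(P2, U) + s_P(P2, P1), eU(P2, W))           # affine expression: linear part plus constant
print(np.allclose(lhs[0], rhs[0]), np.allclose(lhs[1], rhs[1]))
```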

3. The Tangent Space of the Statistical Bundle

Let us compute the expression of the velocity at time $t$ of a smooth curve $t\mapsto\gamma(t) = (Q(t),W(t)) \in S\varepsilon_\mu$ in the chart centered at $P$. The expression of the curve is
$$\gamma_P(t) = s_P(\gamma(t)) = \left(s_P(Q(t)),\ {}^e\mathbb U_{Q(t)}^P W(t)\right),$$
and hence we have, denoting by a dot the derivative in $\mathbb R^N$,
$$\frac{d}{dt}s_P(Q(t)) = \frac{d}{dt}\left(\log\frac{Q(t)}{P} - \mathbb E_P\left[\log\frac{Q(t)}{P}\right]\right) = \frac{\dot Q(t)}{Q(t)} - \mathbb E_P\left[\frac{\dot Q(t)}{Q(t)}\right] = {}^e\mathbb U_{Q(t)}^P\,\frac{\dot Q(t)}{Q(t)}, \tag{3}$$
and
$$\frac{d}{dt}\,{}^e\mathbb U_{Q(t)}^P W(t) = \frac{d}{dt}\left(W(t) - \mathbb E_P[W(t)]\right) = \dot W(t) - \mathbb E_P[\dot W(t)] = {}^e\mathbb U_{Q(t)}^P\left(\dot W(t) - \mathbb E_{Q(t)}[\dot W(t)]\right). \tag{4}$$
If we define the velocity of $t\mapsto Q(t) = e^{U(t) - K_P(U(t))}\cdot P$ to be
$$\overset{\star}{Q}(t) = \frac{\dot Q(t)}{Q(t)} = \frac{d}{dt}\log Q(t) = \dot U(t) - dK_P(U(t))[\dot U(t)] \in S_{Q(t)}\varepsilon_\mu,$$
then $t\mapsto\left(Q(t),\overset{\star}{Q}(t)\right)$ is a curve in the statistical bundle whose expression in the chart centered at $P$ is $t\mapsto(U(t),\dot U(t))$. The velocity as defined above is nothing else than the score function of the one-dimensional statistical model; see, e.g., the textbook by B. Efron and T. Hastie ([11], Section 4.2). The variance of the score (i.e., the squared norm of $\overset{\star}{Q}(t)$ in $S_{Q(t)}\varepsilon_\mu$) is classically known as the Fisher information at $t$.
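To make the statistical reading explicit, the sketch below (illustrative, with our own helper names) computes the velocity of the one-parameter exponential model $t\mapsto e_P(tH)$ by a finite difference of $\dot Q/Q$, checks that it coincides with the score $H - \mathbb E_{Q(t)}[H]$, and evaluates the Fisher information as its squared fiber norm.

```python
import numpy as np

# Illustrative only: the velocity *Q(t) is the score, its squared norm the Fisher information.
E = lambda Q, f: np.mean(f * Q)
def K_P(P, U): return np.log(E(P, np.exp(U)))
def e_P(P, U): return np.exp(U - K_P(P, U)) * P

rng = np.random.default_rng(3)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
H = rng.normal(size=N); H -= E(P, H)                 # a direction in S_P

Q_of = lambda t: e_P(P, t * H)                       # one-dimensional model t -> e_P(t H)

t, h = 0.7, 1e-6
Q = Q_of(t)
Qdot = (Q_of(t + h) - Q_of(t - h)) / (2 * h)         # numerical derivative in R^N
score = Qdot / Q                                     # velocity *Q(t) = dQ/dt / Q

print(np.isclose(E(Q, score), 0.0, atol=1e-8))       # the score lies in the fiber S_{Q(t)}
print(np.allclose(score, H - E(Q, H), atol=1e-6))    # for this model the score is H - E_{Q(t)}[H]
fisher = E(Q, score**2)                              # Fisher information at t = squared fiber norm
print(np.isclose(fisher, E(Q, (H - E(Q, H))**2)))
```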
We define the second statistical bundle to be
$$S^2\varepsilon_\mu = \left\{(Q,W,X,Y)\ \middle|\ (Q,W)\in S\varepsilon_\mu,\ X,Y \in S_Q\varepsilon_\mu\right\},$$
with charts
$$s_P(Q,V,X,Y) = \left(s_P(Q,V),\ {}^e\mathbb U_Q^P X,\ {}^e\mathbb U_Q^P Y\right).$$
We can identify the second bundle with the tangent space of the first bundle as follows.
For each curve $t\mapsto\gamma(t) = (Q(t),W(t))$ in the statistical bundle, define its velocity at $t$ to be
$$\overset{\star}{\gamma}(t) = \left(Q(t),\ W(t),\ \overset{\star}{Q}(t),\ \overset{\star}{W}(t)\right), \qquad \overset{\star}{W}(t) = \dot W(t) - \mathbb E_{Q(t)}[\dot W(t)].$$
Indeed, $t\mapsto\overset{\star}{\gamma}(t)$ is a curve in the second statistical bundle, and its expression in the chart at $P$ has the last two components equal to the values given in Equations (3) and (4).
In particular, consider the curve $t\mapsto\chi(t) = \left(Q(t),\overset{\star}{Q}(t)\right)$. Its velocity is
$$\overset{\star}{\chi}(t) = \left(Q(t),\ \overset{\star}{Q}(t),\ \overset{\star}{Q}(t),\ \overset{\star\star}{Q}(t)\right),$$
where the acceleration $\overset{\star\star}{Q}(t)$ is
$$\overset{\star\star}{Q}(t) = \frac{d}{dt}\frac{\dot Q(t)}{Q(t)} - \mathbb E_{Q(t)}\left[\frac{d}{dt}\frac{\dot Q(t)}{Q(t)}\right] = \frac{\ddot Q(t)}{Q(t)} - \left(\frac{\dot Q(t)}{Q(t)}\right)^2 + \mathbb E_{Q(t)}\left[\left(\frac{\dot Q(t)}{Q(t)}\right)^2\right]. \tag{5}$$
It should be noted that the acceleration has been defined without explicitly mentioning the relevant connection. In fact, the connection here is implicitly defined by the transports ${}^e\mathbb U_P^Q$, which is unusual in Differential Geometry, but is quite natural from the probabilistic point of view; see P. Gibilisco and G. Pistone [12]. We shall see below that the non-parametric approach to Information Geometry allows the definition of a dual transport, hence a dual connection, as in [4]. Because of that, we could have defined other types of acceleration together with the one above. Namely, we could consider an exponential acceleration ${}^e\mathrm D^2 Q(t) = \overset{\star\star}{Q}(t)$, a mixture acceleration ${}^m\mathrm D^2 Q(t) = \ddot Q(t)/Q(t)$, and a Riemannian acceleration
$${}^0\mathrm D^2 Q(t) = \frac12\left({}^e\mathrm D^2 Q(t) + {}^m\mathrm D^2 Q(t)\right) = \frac{\ddot Q(t)}{Q(t)} - \frac12\left(\left(\frac{\dot Q(t)}{Q(t)}\right)^2 - \mathbb E_{Q(t)}\left[\left(\frac{\dot Q(t)}{Q(t)}\right)^2\right]\right), \tag{6}$$
each acceleration being associated with a specific connection; see the review paper [6]. We do not further discuss the different second-order geometries associated with the statistical bundle in this paper.
Example 1 (Boltzmann–Gibbs).
Let us compare the formalism we have introduced above with standard computations in Statistical Physics. The Boltzmann–Gibbs distribution gives to the point $x\in\Omega$ the probability $e^{-(1/\theta)H(x)}/Z(\theta)$, with $Z(\theta) = \sum_{x\in\Omega} e^{-(1/\theta)H(x)}$ and $\theta > 0$; see Landau and Lifshitz ([13], Chapter 3). As a curve in $\varepsilon_\mu$, it is $Q(\theta) = N e^{-(1/\theta)H}/Z(\theta)$ because of the reference to the uniform probability. The velocity defined above becomes in this case $\overset{\star}{Q}(\theta) = \theta^{-2}\left(H - \mathbb E_\theta[H]\right)$, while the acceleration of Equation (5) is $\overset{\star\star}{Q}(\theta) = -2\theta^{-3}\left(H - \mathbb E_\theta[H]\right)$. Notice that we have the equation $\theta\,\overset{\star\star}{Q}(\theta) + 2\,\overset{\star}{Q}(\theta) = 0$.
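These two formulas are easy to confirm numerically by finite differences in $\theta$. The sketch below is illustrative only: the energy function, the temperature, and the helper names are arbitrary choices of ours.

```python
import numpy as np

# Illustrative only: finite-difference check of the Boltzmann-Gibbs velocity and acceleration.
E = lambda Q, f: np.mean(f * Q)                      # E_Q[f] w.r.t. the uniform reference

rng = np.random.default_rng(4)
N = 6
Ham = rng.uniform(0.0, 3.0, N)                       # an arbitrary energy function H on Omega

def Q_of(theta):                                     # Boltzmann-Gibbs density w.r.t. mu
    w = np.exp(-Ham / theta)
    return w / w.mean()                              # equals N * exp(-H/theta) / Z(theta)

def velocity(theta, h=1e-5):                         # *Q(theta) = (d/dtheta Q) / Q
    Q = Q_of(theta)
    dQ = (Q_of(theta + h) - Q_of(theta - h)) / (2 * h)
    return Q, dQ / Q

theta = 1.3
Q, v = velocity(theta)
print(np.allclose(v, theta**-2 * (Ham - E(Q, Ham)), atol=1e-6))         # *Q = theta^-2 (H - E_theta[H])

h = 1e-4                                             # acceleration **Q by Equation (5)
dv = (velocity(theta + h)[1] - velocity(theta - h)[1]) / (2 * h)
acc = dv - E(Q, dv)
print(np.allclose(acc, -2 * theta**-3 * (Ham - E(Q, Ham)), atol=1e-4))  # **Q = -2 theta^-3 (H - E_theta[H])
print(np.allclose(theta * acc + 2 * v, 0.0, atol=1e-4))                 # theta **Q + 2 *Q = 0
```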
Following the original construction of Amari's Information Geometry [4], we have defined on the statistical bundle a manifold structure which is both an affine and a Riemannian manifold. The base manifold $\varepsilon_\mu$ is actually a Hessian manifold with respect to any of the convex functions $K_P(U) = \log\mathbb E_P\left[e^U\right]$, $U \in S_P\varepsilon_\mu$ (see [14]). Many computations are actually performed using the Hessian structure. The following equations are easily checked and frequently used:
$$\mathbb E_{e_P(U)}[H] = dK_P(U)[H]; \tag{7}$$
$${}^e\mathbb U_P^{e_P(U)} H = H - dK_P(U)[H]; \tag{8}$$
$$d^2K_P(U)[H_1,H_2] = \left\langle {}^e\mathbb U_P^{e_P(U)}H_1,\ {}^e\mathbb U_P^{e_P(U)}H_2\right\rangle_{e_P(U)}; \tag{9}$$
$$d^3K_P(U)[H_1,H_2,H_3] = \mathbb E_{e_P(U)}\left[{}^e\mathbb U_P^{e_P(U)}H_1 \times {}^e\mathbb U_P^{e_P(U)}H_2 \times {}^e\mathbb U_P^{e_P(U)}H_3\right]. \tag{10}$$
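Equations (7)–(10) can be checked against finite differences of $t\mapsto K_P(U+tH)$; the sketch below does so along a single direction $H$ (a purely illustrative computation with our own helper names, using Equation (8) to center $H$ at $e_P(U)$).

```python
import numpy as np

# Illustrative only: Equations (7)-(10) versus finite differences of t -> K_P(U + t H).
E = lambda Q, f: np.mean(f * Q)
def K_P(P, U): return np.log(E(P, np.exp(U)))
def e_P(P, U): return np.exp(U - K_P(P, U)) * P

rng = np.random.default_rng(5)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
U = rng.normal(size=N); U -= E(P, U)                 # U, H in S_P
H = rng.normal(size=N); H -= E(P, H)
Q = e_P(P, U)
Hc = H - E(Q, H)                                     # Equation (8): the eU-transport of H into S_Q

g = lambda t: K_P(P, U + t * H)                      # cumulant along the segment U + t H
h = 1e-3
d1 = (g(h) - g(-h)) / (2 * h)
d2 = (g(h) - 2 * g(0) + g(-h)) / h**2
d3 = (g(2*h) - 2*g(h) + 2*g(-h) - g(-2*h)) / (2 * h**3)

print(np.isclose(d1, E(Q, H), atol=1e-6))            # (7):  dK_P(U)[H]      = E_{e_P(U)}[H]
print(np.isclose(d2, E(Q, Hc**2), atol=1e-4))        # (9):  d2K_P(U)[H,H]   = <Hc, Hc>_{e_P(U)}
print(np.isclose(d3, E(Q, Hc**3), atol=1e-2))        # (10): d3K_P(U)[H,H,H] = E_{e_P(U)}[Hc^3]
```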
We have defined a centering operation that can be thought of as a transport among fibers,
$${}^e\mathbb U_P^Q \colon S_P\varepsilon_\mu \to S_Q\varepsilon_\mu,$$
whose adjoint is the mixture transport ${}^m\mathbb U_Q^P V = \frac{Q}{P}V$. In fact, ${}^m\mathbb U_Q^P$ is the adjoint of ${}^e\mathbb U_P^Q$:
$$\left\langle {}^e\mathbb U_P^Q U,\ V\right\rangle_Q = \mathbb E_Q\left[(U - \mathbb E_Q[U])V\right] = \mathbb E_Q[UV] = \mathbb E_P\left[U\,\frac{Q}{P}\,V\right] = \left\langle U,\ {}^m\mathbb U_Q^P V\right\rangle_P.$$
Moreover, if $U, V \in S_P\varepsilon_\mu$, then
$$\left\langle {}^e\mathbb U_P^Q U,\ {}^m\mathbb U_P^Q V\right\rangle_Q = \left\langle {}^e\mathbb U_Q^P\,{}^e\mathbb U_P^Q U,\ V\right\rangle_P = \langle U, V\rangle_P.$$
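Both identities are immediate to verify numerically; the following sketch does so for random densities and random fiber vectors (illustrative code, with our own names for the transports).

```python
import numpy as np

# Illustrative only: duality of the exponential and mixture transports.
E = lambda Q, f: np.mean(f * Q)
inner = lambda Q, A, B: E(Q, A * B)

rng = np.random.default_rng(6)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
Q = rng.uniform(0.5, 2.0, N); Q /= Q.mean()
U = rng.normal(size=N); U -= E(P, U)                 # U, V in S_P
V = rng.normal(size=N); V -= E(P, V)
W = rng.normal(size=N); W -= E(Q, W)                 # W in S_Q

eU_PQ = lambda X: X - E(Q, X)                        # exponential transport S_P -> S_Q
mU_QP = lambda X: (Q / P) * X                        # mixture transport     S_Q -> S_P
mU_PQ = lambda X: (P / Q) * X                        # mixture transport     S_P -> S_Q

print(np.isclose(inner(Q, eU_PQ(U), W), inner(P, U, mU_QP(W))))    # <eU U, W>_Q = <U, mU W>_P
print(np.isclose(inner(Q, eU_PQ(U), mU_PQ(V)), inner(P, U, V)))    # the second identity above
```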
Example 2 (Entropy flow).
This example is taken from [8]. Consider the scalar field given by the entropy, $\mathcal H(Q) = -\mathbb E_Q[\log Q]$, which does not depend on the fiber variable. If $t\mapsto Q(t) = e^{V(t) - K_P(V(t))}\cdot P$ is a smooth curve in $\varepsilon_\mu$ expressed in the chart centered at $P$, then we can write
$$\mathcal H(Q(t)) = -\mathbb E_{Q(t)}\left[V(t) - K_P(V(t)) + \log P\right] = K_P(V(t)) - \mathbb E_{Q(t)}\left[V(t) + \log P + \mathcal H(P)\right] + \mathcal H(P) = K_P(V(t)) - dK_P(V(t))\left[V(t) + \log P + \mathcal H(P)\right] + \mathcal H(P), \tag{11}$$
where the argument of the last expectation belongs to the fiber $S_P\varepsilon_\mu$ and we have expressed the expected value as a derivative by using Equation (7).
Again using Equations (7) and (9), we compute the derivative of the entropy along the given curve as
$$\begin{aligned}
\frac{d}{dt}\mathcal H(Q(t)) &= \frac{d}{dt}K_P(V(t)) - \frac{d}{dt}\,dK_P(V(t))\left[V(t) + \log P + \mathcal H(P)\right] \\
&= dK_P(V(t))[\dot V(t)] - d^2K_P(V(t))\left[V(t) + \log P + \mathcal H(P),\ \dot V(t)\right] - dK_P(V(t))[\dot V(t)] \\
&= -\mathbb E_{Q(t)}\left[{}^e\mathbb U_P^{Q(t)}\left(V(t) + \log P + \mathcal H(P)\right)\ {}^e\mathbb U_P^{Q(t)}\dot V(t)\right].
\end{aligned}$$
We use now the equations
$$V(t) + \log P = \log Q(t) + K_P(V(t)), \qquad {}^e\mathbb U_P^{Q(t)}\left(\log Q(t) + K_P(V(t))\right) = \log Q(t) + \mathcal H(Q(t)),$$
and ${}^e\mathbb U_P^{Q(t)}\dot V(t) = \overset{\star}{Q}(t)$ to obtain
$$\frac{d}{dt}\mathcal H(Q(t)) = -\left\langle \log Q(t) + \mathcal H(Q(t)),\ \overset{\star}{Q}(t)\right\rangle_{Q(t)}.$$
We have identified the gradient of the entropy in the statistical bundle,
$$\operatorname{grad}\mathcal H(Q) = -\left(\log Q + \mathcal H(Q)\right). \tag{12}$$
Notice that the previous computation could have been done using the exponential family $Q(t) = e_P(tV)$. See the computation of the gradient flow in [8].
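The identity $\frac{d}{dt}\mathcal H(Q(t)) = \left\langle\operatorname{grad}\mathcal H(Q(t)),\ \overset{\star}{Q}(t)\right\rangle_{Q(t)}$ is again easy to test numerically along the exponential family $t\mapsto e_P(tV)$; the sketch below is illustrative only, with our own helper names and random data.

```python
import numpy as np

# Illustrative only: d/dt H(Q(t)) = <grad H(Q(t)), *Q(t)>_{Q(t)} along Q(t) = e_P(tV).
E = lambda Q, f: np.mean(f * Q)
def K_P(P, U): return np.log(E(P, np.exp(U)))
def e_P(P, U): return np.exp(U - K_P(P, U)) * P
entropy = lambda Q: -E(Q, np.log(Q))                 # H(Q) = -E_Q[log Q]

rng = np.random.default_rng(7)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
V = rng.normal(size=N); V -= E(P, V)

Q_of = lambda t: e_P(P, t * V)                       # exponential family through P in direction V
t, h = 0.4, 1e-5
Q = Q_of(t)
score = (Q_of(t + h) - Q_of(t - h)) / (2 * h) / Q    # velocity *Q(t)

lhs = (entropy(Q_of(t + h)) - entropy(Q_of(t - h))) / (2 * h)   # d/dt H(Q(t))
grad = -(np.log(Q) + entropy(Q))                     # grad H(Q) = -(log Q + H(Q)), Equation (12)
rhs = E(Q, grad * score)                             # <grad H(Q), *Q(t)>_Q
print(np.isclose(lhs, rhs, atol=1e-6))
```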
In the next section, we extend the computation illustrated in the example above to scalar fields on the statistical bundle.

4. Lagrangian Function

A Lagrangian function is a smooth scalar field on the statistical bundle,
$$L \colon S\varepsilon_\mu \ni (Q,W) \mapsto L(Q,W) \in \mathbb R.$$
At each fixed density $Q\in\varepsilon_\mu$, the partial mapping
$$S_Q\varepsilon_\mu \ni W \mapsto L(Q,W)$$
is defined on the vector space $S_Q\varepsilon_\mu$; hence, we can use the ordinary derivative, which in this case is called the fiber derivative,
$$d_2L(Q,W)[H_2] = \left.\frac{d}{dt}L(Q, W + tH_2)\right|_{t=0}, \qquad H_2 \in S_Q\varepsilon_\mu.$$
Example 3 (Running Example 1).
If
$$L(Q,W) = \frac12\langle W, W\rangle_Q + \kappa\,\mathcal H(Q), \qquad \kappa \ge 0, \tag{15}$$
then $d_2L(Q,W)[H_2] = \langle W, H_2\rangle_Q$. The example is suggested by the form of the classical Lagrangian function of Mechanics, where the first term is the kinetic energy and $-\kappa\,\mathcal H(Q)$ is the potential energy.
As the statistical bundle $S\varepsilon_\mu$ is non-trivial, the computation of the partial derivative of the Lagrangian with respect to the first variable requires some care. We want to compute the expression of the total derivative in a chart of the affine atlas defined in Equations (1) and (2).
Let $t\mapsto\gamma(t) = (Q(t),W(t))$ be a smooth curve in the statistical bundle. In the chart centered at $P$, we have
$$Q(t) = e^{U(t) - K_P(U(t))}\cdot P = e_P(U(t)), \qquad W(t) = {}^e\mathbb U_P^{e_P(U(t))} V(t),$$
with $t\mapsto\gamma_P(t) = (U(t),V(t))$ a smooth curve in $(S_P\varepsilon_\mu)^2$. Let us compute the derivative of the Lagrangian $L$ along the curve $\gamma$:
$$\frac{d}{dt}L(\gamma(t)) = \frac{d}{dt}L(Q(t),W(t)) = \frac{d}{dt}L\left(e_P(U(t)),\ {}^e\mathbb U_P^{e_P(U(t))}V(t)\right) = \frac{d}{dt}L_P(U(t),V(t)),$$
with $L_P(U,V) = L\left(e_P(U),\ {}^e\mathbb U_P^{e_P(U)}V\right)$. It follows that
$$\frac{d}{dt}L(Q(t),W(t)) = d_1L_P(U(t),V(t))[\dot U(t)] + d_2L_P(U(t),V(t))[\dot V(t)]. \tag{16}$$
If we write $Q = e_P(U)$ and $W = {}^e\mathbb U_P^{e_P(U)}V$, then we have
$$d_2L_P(U,V)[H_2] = \left.\frac{d}{dt}L_P(U, V + tH_2)\right|_{t=0} = \left.\frac{d}{dt}L\left(Q, W + t\,{}^e\mathbb U_P^Q H_2\right)\right|_{t=0} = d_2L(Q,W)\left[{}^e\mathbb U_P^Q H_2\right], \tag{17}$$
where $d_2L$ is the fiber derivative of $L$. As $\dot U(t) = {}^e\mathbb U_{Q(t)}^P\,\overset{\star}{Q}(t)$ and ${}^e\mathbb U_P^{e_P(U(t))}\dot V(t) = \overset{\star}{W}(t)$, it follows from Equations (16) and (17) that
$$\frac{d}{dt}L(Q(t),W(t)) = d_1L_P(U(t),V(t))\left[{}^e\mathbb U_{Q(t)}^P\,\overset{\star}{Q}(t)\right] + d_2L(Q(t),W(t))\left[\overset{\star}{W}(t)\right].$$
In the equation above, the first term on the RHS does not depend on $P$, because the LHS and the second term on the RHS do not depend on $P$. Hence, we define the first partial derivative of the Lagrangian function to be
$$d_1L(Q,W)[H_1] = d_1L_P(U,V)\left[{}^e\mathbb U_{e_P(U)}^P H_1\right], \qquad H_1 \in S_Q\varepsilon_\mu,$$
so that the derivative of $L$ along $\gamma$ becomes
$$\frac{d}{dt}L(Q(t),W(t)) = d_1L(Q(t),W(t))\left[\overset{\star}{Q}(t)\right] + d_2L(Q(t),W(t))\left[\overset{\star}{W}(t)\right]. \tag{19}$$
In particular, if $W(t) = \overset{\star}{Q}(t)$, then
$$\frac{d}{dt}L\left(Q(t),\overset{\star}{Q}(t)\right) = d_1L\left(Q(t),\overset{\star}{Q}(t)\right)\left[\overset{\star}{Q}(t)\right] + d_2L\left(Q(t),\overset{\star}{Q}(t)\right)\left[\overset{\star\star}{Q}(t)\right];$$
see Equation (5).
Example 4 (Running Example 2).
With the Lagrangian of Equation (15), we have
$$L_P(U,V) = \frac12\left\langle {}^e\mathbb U_P^{e_P(U)}V,\ {}^e\mathbb U_P^{e_P(U)}V\right\rangle_{e_P(U)} - \kappa\,\mathbb E_{e_P(U)}\left[U - K_P(U) + \log P\right] = \frac12 d^2K_P(U)[V,V] + \kappa\left(K_P(U) - dK_P(U)\left[U + \log P + \mathcal H(P)\right] + \mathcal H(P)\right),$$
see Equations (9) and (11). The first partial derivative is
$$\begin{aligned}
d_1L_P(U,V)[H_1] &= \frac12 d^3K_P(U)[V,V,H_1] + \kappa\left(dK_P(U)[H_1] - d^2K_P(U)\left[U + \log P + \mathcal H(P),\ H_1\right] - dK_P(U)[H_1]\right) \\
&= \frac12 d^3K_P(U)[V,V,H_1] - \kappa\, d^2K_P(U)\left[U + \log P + \mathcal H(P),\ H_1\right] \\
&= \frac12\mathbb E_Q\left[W^2\ {}^e\mathbb U_P^{e_P(U)}H_1\right] - \kappa\,\mathbb E_Q\left[\left(\log Q + \mathcal H(Q)\right)\ {}^e\mathbb U_P^{e_P(U)}H_1\right] \\
&= \mathbb E_Q\left[\left(\frac12\left(W^2 - \mathbb E_Q\left[W^2\right]\right) - \kappa\left(\log Q + \mathcal H(Q)\right)\right)\ {}^e\mathbb U_P^{e_P(U)}H_1\right],
\end{aligned}$$
where we have used Equations (9) and (10) together with ${}^e\mathbb U_P^{e_P(U)}\left(U + \log P + \mathcal H(P)\right) = \log Q + \mathcal H(Q)$.
We have found that
$$d_1L(Q,W)[H_1] = \left\langle \frac12\left(W^2 - \mathbb E_Q\left[W^2\right]\right) - \kappa\left(\log Q + \mathcal H(Q)\right),\ H_1\right\rangle_Q, \qquad H_1 \in S_Q\varepsilon_\mu, \tag{20}$$
and also
$$d_1L\left(Q(t),\overset{\star}{Q}(t)\right)\left[\overset{\star}{Q}(t)\right] = \left\langle \frac12\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right) - \kappa\left(\log Q(t) + \mathcal H(Q(t))\right),\ \overset{\star}{Q}(t)\right\rangle_{Q(t)}.$$
Using the fiber derivative computed in the first part of the running example, we find
$$\frac{d}{dt}L\left(Q(t),\overset{\star}{Q}(t)\right) = \left\langle \frac12\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right) - \kappa\left(\log Q(t) + \mathcal H(Q(t))\right),\ \overset{\star}{Q}(t)\right\rangle_{Q(t)} + \left\langle \overset{\star}{Q}(t),\ \overset{\star\star}{Q}(t)\right\rangle_{Q(t)}.$$
Notice that, by Equation (12), the term $-\kappa\left(\log Q(t) + \mathcal H(Q(t))\right)$ appearing above is exactly $\kappa\operatorname{grad}\mathcal H(Q(t))$.
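As a numerical cross-check of the last identity, the sketch below evaluates $t\mapsto L(Q(t),\overset{\star}{Q}(t))$ along an arbitrary smooth curve, differentiates it by finite differences, and compares the result with the right-hand side above (velocity and acceleration are also computed by finite differences; the coupling constant, helper names, and data are our own illustrative choices).

```python
import numpy as np

# Illustrative only: total derivative of L(Q(t), *Q(t)) for the running-example Lagrangian.
E = lambda Q, f: np.mean(f * Q)
def K_P(P, U): return np.log(E(P, np.exp(U)))
def e_P(P, U): return np.exp(U - K_P(P, U)) * P
entropy = lambda Q: -E(Q, np.log(Q))
kappa = 0.5                                          # arbitrary coupling constant

rng = np.random.default_rng(8)
N = 5
P = rng.uniform(0.5, 2.0, N); P /= P.mean()
V = rng.normal(size=N); V -= E(P, V)
Q_of = lambda t: e_P(P, np.sin(t) * V)               # an arbitrary smooth curve in the manifold

def velocity(t, h=1e-5):                             # (Q(t), *Q(t)) by central differences
    Q = Q_of(t)
    return Q, (Q_of(t + h) - Q_of(t - h)) / (2 * h) / Q

def lagrangian(t):                                   # L(Q, *Q) = kinetic energy + kappa * entropy
    Q, v = velocity(t)
    return 0.5 * E(Q, v * v) + kappa * entropy(Q)

t, h = 0.3, 1e-4
Q, v = velocity(t)
acc = (velocity(t + h)[1] - velocity(t - h)[1]) / (2 * h)
acc -= E(Q, acc)                                     # acceleration **Q(t), Equation (5)

lhs = (lagrangian(t + h) - lagrangian(t - h)) / (2 * h)
rhs = E(Q, (0.5 * (v * v - E(Q, v * v)) - kappa * (np.log(Q) + entropy(Q))) * v) + E(Q, v * acc)
print(np.isclose(lhs, rhs, atol=1e-4))
```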

5. Action Integral

If $[0,1]\ni t\mapsto Q(t)$ is a smooth curve in the exponential manifold, then the action integral
$$A(Q) = \int_0^1 L\left(Q(t),\overset{\star}{Q}(t)\right)dt$$
is well defined. We consider the expression of $Q$ in the chart centered at $P$, $Q(t) = e^{U(t) - K_P(U(t))}\cdot P$.
Given $\varphi \in C^1([0,1])$ with $\varphi(0) = \varphi(1) = 0$, for each $\delta\in\mathbb R$ and $H\in S_P\varepsilon_\mu$, we define the perturbed curve
$$Q_\delta(t) = e^{(U(t) + \delta\varphi(t)H) - K_P(U(t) + \delta\varphi(t)H)}\cdot P.$$
We have $Q_\delta(0) = Q(0)$, $Q_\delta(1) = Q(1)$, and
$$\overset{\star}{Q}_\delta(t) = \left(\dot U(t) + \delta\dot\varphi(t)H\right) - \mathbb E_{Q_\delta(t)}\left[\dot U(t) + \delta\dot\varphi(t)H\right],$$
whose expression in the chart centered at $P$ is $\dot U(t) + \delta\dot\varphi(t)H$.
Let us consider the variation in $\delta$ of the action integral. We apply Equation (19) to the smooth curve in $S\varepsilon_\mu$ given by
$$\delta \mapsto \left(Q_\delta(t),\ \overset{\star}{Q}_\delta(t)\right),$$
where $t$ is fixed. As
$$\frac{d}{d\delta}\log Q_\delta(t) = \frac{d}{d\delta}\left(U(t) + \delta\varphi(t)H\right) - \mathbb E_{Q_\delta(t)}\left[\frac{d}{d\delta}\left(U(t) + \delta\varphi(t)H\right)\right] = \varphi(t)\left(H - \mathbb E_{Q_\delta(t)}[H]\right)$$
and
$${}^e\mathbb U_P^{Q_\delta(t)}\,\frac{d}{d\delta}\left(\dot U(t) + \delta\dot\varphi(t)H\right) = \dot\varphi(t)\left(H - \mathbb E_{Q_\delta(t)}[H]\right),$$
we obtain
$$\begin{aligned}
\frac{d}{d\delta}A(Q_\delta) &= \int_0^1 \frac{d}{d\delta}L\left(Q_\delta(t),\overset{\star}{Q}_\delta(t)\right)dt \\
&= \int_0^1\left(\varphi(t)\,d_1L\left(Q_\delta(t),\overset{\star}{Q}_\delta(t)\right)\left[H - \mathbb E_{Q_\delta(t)}[H]\right] + \dot\varphi(t)\,d_2L\left(Q_\delta(t),\overset{\star}{Q}_\delta(t)\right)\left[H - \mathbb E_{Q_\delta(t)}[H]\right]\right)dt \\
&= \int_0^1 \varphi(t)\left(d_1L\left(Q_\delta(t),\overset{\star}{Q}_\delta(t)\right)\left[H - \mathbb E_{Q_\delta(t)}[H]\right] - \frac{d}{dt}\,d_2L\left(Q_\delta(t),\overset{\star}{Q}_\delta(t)\right)\left[H - \mathbb E_{Q_\delta(t)}[H]\right]\right)dt,
\end{aligned}$$
where the last equality follows from an integration by parts together with $\varphi(0) = \varphi(1) = 0$.
If $t\mapsto Q(t)$ is a critical curve of the action integral, then $\left.\frac{d}{d\delta}A(Q_\delta)\right|_{\delta=0} = 0$; hence, for all $\varphi$ and $H$, we have
$$\int_0^1 \varphi(t)\left(d_1L\left(Q(t),\overset{\star}{Q}(t)\right)\left[H - \mathbb E_{Q(t)}[H]\right] - \frac{d}{dt}\,d_2L\left(Q(t),\overset{\star}{Q}(t)\right)\left[H - \mathbb E_{Q(t)}[H]\right]\right)dt = 0.$$
This in turn implies that for each $t\in[0,1]$ and $H\in S_{Q(t)}\varepsilon_\mu$, the Euler–Lagrange equation holds:
$$d_1L\left(Q(t),\overset{\star}{Q}(t)\right)[H] - \frac{d}{dt}\,d_2L\left(Q(t),\overset{\star}{Q}(t)\right)[H] = 0.$$
Example 5 (Running Example 3).
For the Lagrangian of Equation (15), we can use Equation (20) in the form
$$d_1L\left(Q(t),\overset{\star}{Q}(t)\right)\left[H - \mathbb E_{Q(t)}[H]\right] = \left\langle \frac12\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right) - \kappa\left(\log Q(t) + \mathcal H(Q(t))\right),\ H - \mathbb E_{Q(t)}[H]\right\rangle_{Q(t)},$$
with $H \in S_P\varepsilon_\mu$. For the other term, we have
$$d_2L\left(Q(t),\overset{\star}{Q}(t)\right)\left[H - \mathbb E_{Q(t)}[H]\right] = \left\langle \overset{\star}{Q}(t),\ H - \mathbb E_{Q(t)}[H]\right\rangle_{Q(t)} = d^2K_P(U(t))[\dot U(t), H],$$
whose derivative is
$$\begin{aligned}
\frac{d}{dt}\,d^2K_P(U(t))[\dot U(t), H] &= d^3K_P(U(t))[\dot U(t),\dot U(t),H] + d^2K_P(U(t))[\ddot U(t),H] \\
&= \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\left(H - \mathbb E_{Q(t)}[H]\right)\right] + \mathbb E_{Q(t)}\left[\overset{\star\star}{Q}(t)\left(H - \mathbb E_{Q(t)}[H]\right)\right] \\
&= \mathbb E_{Q(t)}\left[\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right)\left(H - \mathbb E_{Q(t)}[H]\right)\right] + \mathbb E_{Q(t)}\left[\overset{\star\star}{Q}(t)\left(H - \mathbb E_{Q(t)}[H]\right)\right].
\end{aligned}$$
Dropping the generic $H$, the Euler–Lagrange equation becomes
$$\overset{\star\star}{Q}(t) + \overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right] = \frac12\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right) - \kappa\left(\log Q(t) + \mathcal H(Q(t))\right);$$
that is,
$$\overset{\star\star}{Q}(t) + \frac12\left(\overset{\star}{Q}(t)^2 - \mathbb E_{Q(t)}\left[\overset{\star}{Q}(t)^2\right]\right) = -\kappa\left(\log Q(t) + \mathcal H(Q(t))\right).$$
The equation above has been derived using the exponential affine geometry of the statistical bundle and involves $\overset{\star\star}{Q}(t)$. However, by using Equations (5), (6), and (12), we find the equivalent form
$${}^0\mathrm D^2 Q(t) = \kappa\operatorname{grad}\mathcal H(Q(t)).$$
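To illustrate how the equation can be used in practice, the following rough sketch integrates it as a second-order ODE for $Q$ in the ambient coordinates with a plain Runge–Kutta step (not a structure-preserving integrator; the coupling constant, the initial data, and all helper names are arbitrary choices of ours). Two diagnostics are monitored: the normalization $\mathbb E_\mu[Q] = 1$, which the vector field preserves, and the mechanical energy $\frac12\|\overset{\star}{Q}\|_Q^2 - \kappa\,\mathcal H(Q)$, which is expected to stay approximately constant along critical curves.

```python
import numpy as np

# Illustrative only: integrate the Euler-Lagrange equation as a second-order ODE for Q.
E = lambda Q, f: np.mean(f * Q)
entropy = lambda Q: -E(Q, np.log(Q))
kappa = 0.5                                          # arbitrary coupling constant

def accel(Q, Qdot):
    """Qddot prescribed by **Q + (1/2)(*Q^2 - E_Q[*Q^2]) = -kappa (log Q + H(Q))."""
    s = Qdot / Q                                     # velocity *Q
    return Q * (0.5 * (s * s - E(Q, s * s)) - kappa * (np.log(Q) + entropy(Q)))

def rk4_step(Q, Qdot, dt):
    """One classical Runge-Kutta step for the first-order system (Q, Qdot)."""
    f = lambda q, qd: (qd, accel(q, qd))
    k1 = f(Q, Qdot)
    k2 = f(Q + dt / 2 * k1[0], Qdot + dt / 2 * k1[1])
    k3 = f(Q + dt / 2 * k2[0], Qdot + dt / 2 * k2[1])
    k4 = f(Q + dt * k3[0], Qdot + dt * k3[1])
    return (Q + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            Qdot + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

rng = np.random.default_rng(9)
N = 4
Q = rng.uniform(0.5, 2.0, N); Q /= Q.mean()          # initial density
s0 = rng.normal(size=N); s0 -= E(Q, s0); s0 *= 0.3   # initial velocity *Q(0) in the fiber
Qdot = Q * s0

energy = lambda Q, Qdot: 0.5 * E(Q, (Qdot / Q)**2) - kappa * entropy(Q)
E0 = energy(Q, Qdot)
for _ in range(1000):                                # integrate up to t = 1
    Q, Qdot = rk4_step(Q, Qdot, 1e-3)
print(np.isclose(Q.mean(), 1.0), Q.min() > 0.0, abs(energy(Q, Qdot) - E0) < 1e-6)
```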

6. Discussion

We have shown that the research program of applying concepts taken from Classical Mechanics to Statistics makes sense, even though no practical application has been produced in this paper. Some simple examples have been discussed in order to show that the language of Classical Mechanics is indeed suggestive when applied to typical concepts of Statistics such as the Fisher score and the statistical entropy. The derivation of the Euler–Lagrange equations is classically done in the set-up of Riemannian geometry, while here we have used the affine structure of Information Geometry. The present provisional results prompt a generalization to non-finite sample spaces and the development of applications. Finally, the related Hamiltonian formalism remains to be investigated.

Acknowledgments

The Author gratefully thanks Hiroshi Matsuzoe (Nagoya Institute of Technology, Japan), Lamberto Rondoni (Politecnico di Torino, Italy), Antonio Scarfone (CNR and Politecnico di Torino, Italy), Tatsuaki Wada (Ibaraki University, Japan), for their interesting comments on early versions of this piece of research. He thanks two anonymous referees for their useful and enlightening comments. He acknowledges the support of de Castro Statistics, Collegio Carlo Alberto, and of GNAMPA-INdAM.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Arnold, V.I. Mathematical Methods of Classical Mechanics, 2nd ed.; Graduate Texts in Mathematics; Springer: New York, NY, USA, 1989; Volume 60, p. xvi+516.
  2. Abraham, R.; Marsden, J.E. Foundations of Mechanics, 2nd ed.; Advanced Book Program; Benjamin/Cummings Publishing Co., Inc.: Reading, MA, USA, 1978; pp. xxii+m–xvi+806.
  3. Marsden, J.E.; Ratiu, T.S. Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems, 2nd ed.; Texts in Applied Mathematics; Springer: New York, NY, USA, 1999; Volume 17, p. xviii+582.
  4. Amari, S.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA, 2000; p. x+206.
  5. Leok, M.; Zhang, J. Connecting Information Geometry and Geometric Mechanics. Entropy 2017, 19, 518.
  6. Pistone, G. Nonparametric information geometry. In Geometric Science of Information, Proceedings of the First International Conference, GSI 2013, Paris, France, 28–30 August 2013; Nielsen, F., Barbaresco, F., Eds.; Lecture Notes in Computer Science; Springer: Heidelberg, Germany, 2013; Volume 8085, pp. 5–36.
  7. Lang, S. Differential and Riemannian Manifolds, 3rd ed.; Graduate Texts in Mathematics; Springer: Berlin, Germany, 1995; Volume 160, p. xiv+364.
  8. Pistone, G. Examples of the application of nonparametric information geometry to statistical physics. Entropy 2013, 15, 4042–4065.
  9. Lods, B.; Pistone, G. Information Geometry Formalism for the Spatially Homogeneous Boltzmann Equation. Entropy 2015, 17, 4323–4363.
  10. Pistone, G.; Rogantin, M. The exponential statistical manifold: Mean parameters, orthogonality and space transformations. Bernoulli 1999, 5, 721–760.
  11. Efron, B.; Hastie, T. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science; Cambridge University Press: New York, NY, USA, 2016; Volume 5, p. xix+475.
  12. Gibilisco, P.; Pistone, G. Connections on non-parametric statistical manifolds by Orlicz space geometry. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 1998, 1, 325–347.
  13. Landau, L.D.; Lifshits, E.M. Course of Theoretical Physics. Statistical Physics, 3rd ed.; Butterworth-Heinemann: Oxford, UK, 1980; Volume 5.
  14. Shima, H. The Geometry of Hessian Structures; World Scientific Publishing Co. Pte. Ltd.: Hackensack, NJ, USA, 2007; p. xiv+246.
