Review

λ-Deformation: A Canonical Framework for Statistical Manifolds of Constant Curvature

by Jun Zhang 1,2,* and Ting-Kam Leonard Wong 3
1 Department of Psychology, University of Michigan, Ann Arbor, MI 48109-1109, USA
2 Department of Statistics, University of Michigan, Ann Arbor, MI 48109-1109, USA
3 Department of Statistical Sciences, University of Toronto, Toronto, ON M5S 1A1, Canada
* Author to whom correspondence should be addressed.
Entropy 2022, 24(2), 193; https://doi.org/10.3390/e24020193
Submission received: 15 December 2021 / Revised: 15 January 2022 / Accepted: 18 January 2022 / Published: 27 January 2022
(This article belongs to the Special Issue Review Papers for Entropy)

Abstract: This paper systematically presents the λ-deformation as the canonical framework of deformation to the dually flat (Hessian) geometry, which has been well established in information geometry. We show that, based on deforming the Legendre duality, all objects in the Hessian case have their correspondence in the λ-deformed case: λ-convexity, λ-conjugation, λ-biorthogonality, λ-logarithmic divergence, λ-exponential and λ-mixture families, etc. In particular, λ-deformation unifies Tsallis and Rényi deformations by relating them to two manifestations of an identical λ-exponential family, under subtractive or divisive probability normalization, respectively. Unlike the different Hessian geometries of the exponential and mixture families, the λ-exponential family, in turn, coincides with the λ-mixture family after a change of random variables. The resulting statistical manifolds, while still carrying a dualistic structure, replace the Hessian metric and a pair of dually flat conjugate affine connections with a conformal Hessian metric and a pair of projectively flat connections carrying constant (nonzero) curvature. Thus, λ-deformation is a canonical framework in generalizing the well-known dually flat Hessian structure of information geometry.

1. Introduction

Information geometry is a differential-geometric framework for studying finite-dimensional statistical models that coherently integrates the following notions:
(i)
A differentiable manifold M consisting of probability density functions or finite measures on a common sample space;
(ii)
A divergence function $D[p\,\|\,p']$ that defines an asymmetric proximity between points $p$, $p'$ in $\mathcal{M}$;
(iii)
A Riemannian metric $g$ plus a pair of torsion-free dual (conjugate) affine connections $\nabla, \nabla^*$ on $\mathcal{M}$.
For completeness, we recall that a pair of affine connections $\nabla, \nabla^*$ on $\mathcal{M}$ are said to be dual (or conjugate) with respect to a Riemannian metric $g$ if, for any vector fields $X$, $Y$, and $Z$ on $\mathcal{M}$, one has:
$$ Z\, g(X, Y) = g(\nabla_Z X, Y) + g(X, \nabla_Z^* Y). $$
Here, $(\mathcal{M}, g, \nabla, \nabla^*)$ is called a dualistic structure. When $D$ is the Kullback–Leibler divergence (or more generally, $f$-divergence), the induced Riemannian metric $g$ is the Fisher–Rao metric, and the induced cubic form $C = \nabla^* - \nabla$ is the Amari–Chentsov tensor [1]. It can be shown that the Fisher–Rao metric and the Amari–Chentsov tensor are unique invariants, of respectively second and third orders, under sufficient statistics on the manifold $\mathcal{M}$ [2].
Geometrically, the standard model (denoted the $\mathcal{S}$-model in this paper) uses a pair of affine connections that are torsion-free, though in general, they are not curvature-free. An alternative, “partially flat” model (denoted the $\mathcal{P}$-model in this paper) was recently investigated in [3], leading to the notion of “statistical mirror symmetry” [4]. Under the $\mathcal{P}$-model, the affine connections $\nabla$ and $\nabla^*$ are allowed to carry torsion, but are both curvature-free. See [4] for the geometric properties of the $\mathcal{P}$-model leading to a symplectic-to-complex correspondence characteristic of mirror Calabi–Yau manifolds studied in string theory and mathematical physics.
Within the usual S -model, a special case is the dually flat geometry where the Riemannian metric can be expressed under special coordinate systems as a Hessian metric. Two prominent examples are the exponential family and the mixture family, where the Hessian metric coincides with the Fisher–Rao metric. The Hessian geometry is said to be dually flat because the Riemann curvature tensors of both the primal and the dual connections vanish; the corresponding primal and dual affine coordinate systems are linked via Legendre transformations by a pair of convex potentials. For an exponential family, these coordinates are precisely the natural (canonical) and mixture (expectation) coordinate systems, respectively. Note that the Hessian metric itself is not flat as its Levi-Civita connection contains curvature in general.
Between the well-understood dually flat Hessian geometry and the full-blown S -model, there is a wide range of geometries capturing various probability models. Of special interest are generalizations of the exponential family, namely deformed exponential families. The ϕ -exponential family was introduced in the context of statistical physics [5]; it was later shown [6] to be equivalent to the U-model [7] motivated by applications in machine learning—[6] revealed that both the ϕ - and U-models can be generated from the ( ρ , τ ) -model [8] through the mechanism of “gauge selection”. The ( ρ , τ ) -metric generalizes the Fisher–Rao metric and may lead to a conformal Hessian metric for a ϕ -exponential family. However, the connections are typically not curvature-free unless a special type of gauge is selected; this underlies the geometric characterization of the q-exponential model of Tsallis by [9,10,11].
In recent years, the second author [12], motivated by previous works with Pal on mathematical finance and optimal transport [13,14,15,16], studied a class of deformed exponential families generating constant curvatures through the use of a new divergence function called logarithmic divergence. By constant (information geometric) curvature, we mean that both the primal and dual Riemann curvature tensors have (the same) constant sectional curvature with respect to g . In [17], the present authors developed a unified framework, based on the notions of λ-duality and the λ-exponential family, which appears to provide a canonical extension of the dually flat geometry to the constant curvature case. Previously, statistical manifolds with constant curvature were studied using the abstract tools of affine differential geometry; see, e.g., [1,18] (also see [19]). Our framework provides a concrete approach and an explicit construction that elucidates how the properties of the exponential family and the dually flat geometry may be extended to the constant curvature case. In this paper, a careful exposition of the λ -deformation framework is provided from the perspective of λ -duality, namely the λ -deformation of Legendre duality.
The rest of the paper is organized as follows. In Section 2, we review the standard $\mathcal{S}$-model of information geometry with a focus on the dually flat geometry, based on convex duality and Bregman divergence, of the exponential and mixture families. The section closes with a preview of λ-deformation by introducing a suite of four deformation functions, as two pairs of mutually inverse functions: $\log_\lambda$ versus $\exp_\lambda$ and $\kappa_\lambda$ versus $\gamma_\lambda$, with the first pair deforming log and exp and the second pair deforming the identity function. In Section 3, we describe the λ-duality, which deforms the standard convex duality. In particular, we compare λ-duality and standard Legendre duality and show their relations to each other upon a change of parameterization. In Section 4, we define the λ-gradient and then the λ-logarithmic divergence and study the constant curvature information geometry the latter induces. In Section 5, we relate λ-divergence to Rényi entropy by introducing the λ-exponential and λ-mixture family. The two expressions of the λ-exponential family under divisive and subtractive normalization correspond to, respectively, Rényi deformation and Tsallis deformation. Section 6 concludes with a comparison of λ-deformation with the standard dually flat (Hessian) framework.

2. The Standard Model of Information Geometry

2.1. The Standard Model

We begin by recalling the standard framework (referred to as the $\mathcal{S}$-model) of parametric information geometry [1,20]. Let $\mathcal{M}$ be a finite-dimensional differentiable manifold with dimension $d$ and $\theta = (\theta^1, \ldots, \theta^d)$ be a local coordinate system. The most important case is where $\mathcal{M}$ is a manifold of parametric probability density functions. However, the idea of deforming Legendre duality to λ-duality and hence dually flat (Hessian) manifolds to manifolds of constant curvature, discussed in Section 3 and Section 4, is entirely general and does not rely on $\mathcal{M}$ being a manifold of probability density functions.
Let $(\mathcal{X}, \mu)$ be a measure space, where $\mu$ is called the reference (or dominating) measure. Let $\Theta \subseteq \mathbb{R}^d$ be an open domain. A parametric family of density functions is a mapping $\theta \in \Theta \mapsto p(\cdot|\theta)$, where each $p(\cdot|\theta)$ is a probability density function with respect to $\mu$, i.e., $\int_{\mathcal{X}} p(\zeta|\theta)\, d\mu(\zeta) = 1$. We assume that the family is sufficiently regular such that all analytical operations (such as differentiation under the integral sign) can be performed as needed.
While a dualistic structure $(\mathcal{M}, g, \nabla, \nabla^*)$ can be defined abstractly, in practice, it is often constructed by a divergence, namely a smooth, non-negative function $D[\cdot\,\|\,\cdot]$ on $\mathcal{M} \times \mathcal{M}$ such that $D[p\,\|\,p'] = 0$ only if $p = p'$ and the (0,2)-tensor $g$ it induces on $\mathcal{M}$ (see (2) below) is positive definite. Intuitively, $D[p\,\|\,p']$ defines a notion of “asymmetric distance” between points $p$ and $p'$ of $\mathcal{M}$. When $\mathcal{M}$ is a manifold of density functions, a prominent example is the Kullback–Leibler (KL) divergence (relative entropy) given by:
$$ H[p\,\|\,p'] = \int p \log\frac{p}{p'}\, d\mu. $$
When dealing with parametric probability families, $p$ and $p'$ are replaced by $p(\cdot|\theta)$ and $p(\cdot|\theta')$; we then denote $D[p\,\|\,p']$ as $D(\theta, \theta')$ with an abuse of notation, that is:
$$ D[p(\cdot|\theta)\,\|\,p(\cdot|\theta')] \equiv D(\theta, \theta'), $$
and similarly for $H$ as well—the notation $[p\,\|\,p']$ in the divergence for probability density functions emphasizes the asymmetry in $p$, $p'$; see [1].
Eguchi [21] showed that any divergence function (called a “contrast function” there) induces a dualistic structure $(\mathcal{M}, g, \nabla, \nabla^*)$. In local coordinates, given $D(\theta, \theta')$, the components $g_{ij}$ of the metric $g$ are given by:
$$ g_{ij}(\theta) = \left.\frac{\partial^2}{\partial\theta^i \partial\theta^j} D(\theta, \theta')\right|_{\theta'=\theta} = \left.\frac{\partial^2}{\partial\theta'^i \partial\theta'^j} D(\theta, \theta')\right|_{\theta'=\theta}, $$
and the Christoffel symbols of the conjugate connections $\nabla$ and $\nabla^*$ are given respectively by:
$$ \Gamma_{ij,k}(\theta) = -\left.\frac{\partial^3}{\partial\theta^i \partial\theta^j \partial\theta'^k} D(\theta, \theta')\right|_{\theta'=\theta}, \qquad \Gamma^*_{ij,k}(\theta) = -\left.\frac{\partial^3}{\partial\theta'^i \partial\theta'^j \partial\theta^k} D(\theta, \theta')\right|_{\theta'=\theta}. $$
Conversely, given any dualistic structure $(\mathcal{M}, g, \nabla, \nabla^*)$, there exists a divergence $D$ that induces it, but this $D$ is not unique in general [22]. Thus, the standard model $\mathcal{S}$ is completely encoded by the choice of a divergence function.
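To make this construction concrete, here is a minimal numerical sketch of ours (not from the paper), assuming a one-dimensional Bernoulli family: differentiating the KL divergence as in (2) recovers the Fisher information.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def kl(a, b):
    # H[p_a || p_b] for two Bernoulli densities with natural parameters a, b
    p, q = sigmoid(a), sigmoid(b)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def eguchi_metric(D, theta, h=1e-4):
    # 1-d case of (2): differentiate D(., theta') twice in the first slot,
    # then evaluate on the diagonal theta' = theta.
    return (D(theta + h, theta) - 2 * D(theta, theta) + D(theta - h, theta)) / h**2

theta = 0.3
print(eguchi_metric(kl, theta))               # induced metric g(theta)
print(sigmoid(theta) * (1 - sigmoid(theta)))  # Fisher information: same value
```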

2.2. Dually Flat Geometry

The most important example of a dualistic structure is the dually flat geometry, which is induced by a Bregman divergence [23]. Let $\mathcal{M}$ be prescribed with an affine coordinate chart $\theta \in \Theta$ on an open convex set $\Theta \subseteq \mathbb{R}^d$. Let $\phi: \Theta \to \mathbb{R}$ be a differentiable convex function; specifically, we assumed that $\phi$ is $C^2$ and its Hessian $D^2\phi$ is strictly positive definite. The Bregman divergence of $\phi$ is defined by:
$$ B_\phi(\theta, \theta') = \phi(\theta) - \phi(\theta') - D\phi(\theta') \cdot (\theta - \theta'), \qquad \theta, \theta' \in \Theta, $$
where $D\phi(\theta) = (\partial_{\theta^1}\phi(\theta), \ldots, \partial_{\theta^d}\phi(\theta))$ is the Euclidean gradient and $a \cdot b$ denotes the standard dot product. We call $\theta \in \Theta$ the primal coordinates and $\eta = D\phi(\theta)$ the dual coordinates, where the inverse of $D\phi$ is given by $\theta = D\phi^*(\eta)$. Here, the Legendre conjugate $\phi^*$ (or convex conjugate) of $\phi$ is defined by:
$$ \phi^*(\eta) = \sup_\theta \{\theta \cdot \eta - \phi(\theta)\}. $$
Then, the components $g_{ij}$ of the Riemannian metric $g$, under the respective local coordinate systems, are given by:
$$ g_{ij}(\theta) = \frac{\partial^2}{\partial\theta^i \partial\theta^j}\phi(\theta), \qquad g^{ij}(\eta) = \frac{\partial^2}{\partial\eta_i \partial\eta_j}\phi^*(\eta). $$
In particular, $g$ is a Hessian metric with potential $\phi$ (resp. $\phi^*$) under $\theta$ (resp. $\eta$). Furthermore, the Christoffel symbols of $\nabla$ and $\nabla^*$ are given respectively by:
$$ \Gamma_{ij,k}(\theta) = 0, \qquad \Gamma^*_{ij,k}(\eta) = 0. $$
From (6), we see that the Riemann curvature tensors of both $\nabla$ and $\nabla^*$ vanish. Thus, we call this a dually flat geometry. Furthermore, a $\nabla$-geodesic (resp. $\nabla^*$-geodesic) is a constant velocity straight line under the $\theta$ (resp. $\eta$) coordinate system.
Moreover, the $\theta$ and $\eta$ coordinates are biorthogonal in the sense that:
$$ g\left(\frac{\partial}{\partial\theta^i}, \frac{\partial}{\partial\eta_j}\right) = \delta_i^j, $$
and the Bregman divergence takes the forms of:
$$ B_\phi(\theta, \theta'(\eta')) = A_\phi(\theta, \eta') = A_{\phi^*}(\eta', \theta) = B_{\phi^*}(\eta', \eta(\theta)) $$
with $\eta' = D\phi(\theta')$ and $\theta' = D\phi^*(\eta')$, where $A$ is called the canonical divergence:
$$ A_\phi(\theta, \eta') = \phi(\theta) + \phi^*(\eta') - \theta \cdot \eta' = A_{\phi^*}(\eta', \theta). $$
Following [24,25], we call the equality between the two expressions of $B$ and the equality between the two expressions of $A$ in (8) reference–representation biduality. In [26], the identity (8) was used to motivate a family of Fenchel–Young losses in the context of regularized prediction in machine learning. Last but not least, the Bregman divergence satisfies the generalized Pythagorean theorem: given points $P$, $Q$, and $R$, we have the equality:
$$ B_\phi(\theta_Q, \theta_P) + B_\phi(\theta_R, \theta_Q) = B_\phi(\theta_R, \theta_P) $$
if and only if the $\nabla$-geodesic between $Q$ and $R$ and the $\nabla^*$-geodesic between $Q$ and $P$ meet $g$-orthogonally at $Q$. As we will see in Section 4, all the properties above have natural generalizations under our λ-framework. We stress that the dually flat geometry depends crucially on classical convex (or Legendre) duality, as seen from (4) and (8).
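As a quick numerical sanity check of the biduality (8) (our sketch, not part of the paper), one can take the convex potential $\phi(\theta) = \log(1 + e^\theta)$, whose Legendre conjugate is the negative binary entropy:

```python
import numpy as np

phi       = lambda t: np.log1p(np.exp(t))
dphi      = lambda t: 1.0 / (1.0 + np.exp(-t))          # eta = D phi(theta)
phi_star  = lambda e: e * np.log(e) + (1 - e) * np.log(1 - e)
dphi_star = lambda e: np.log(e / (1 - e))               # theta = D phi*(eta)

def bregman(F, dF, a, b):
    # B_F(a, b) = F(a) - F(b) - DF(b) . (a - b)
    return F(a) - F(b) - dF(b) * (a - b)

theta, theta_p = 1.2, -0.4
eta, eta_p = dphi(theta), dphi(theta_p)
print(bregman(phi, dphi, theta, theta_p))        # B_phi(theta, theta')
print(bregman(phi_star, dphi_star, eta_p, eta))  # B_phi*(eta', eta): equal
```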

2.3. Exponential and Mixture Families

The dually flat Hessian geometry arises naturally in the exponential and mixture families of probability densities. Given a reference measure $\mu$ on a state space $\mathcal{X}$, an exponential family is a parameterized probability density $p^{(e)}(\cdot|\theta)$ of the form:
$$ p^{(e)}(\zeta|\theta) = e^{\theta \cdot F(\zeta) - \phi(\theta)}, $$
where $\theta = (\theta^1, \ldots, \theta^d) \in \Theta \subseteq \mathbb{R}^d$ and $F(\zeta) = (F_1(\zeta), \ldots, F_d(\zeta))$ is a vector of sufficient statistics. In (9), the cumulant generating function $\phi$, defined by:
$$ \phi(\theta) = \log \int e^{\theta \cdot F}\, d\mu, $$
enforces the normalization $\int p^{(e)}\, d\mu = 1$. The exponential family generalizes the Boltzmann–Gibbs distribution in statistical physics, where $Z(\theta) = e^{\phi(\theta)}$ is called the partition function.
The information geometry of the exponential family begins with the observation that $\phi$ is convex. Then, $\phi$ defines a Bregman divergence $B_\phi$ giving rise to a dually flat structure. It can be shown that the Bregman divergence is a KL-divergence:
$$ B_\phi(\theta, \theta') = H[p^{(e)}(\cdot|\theta')\,\|\,p^{(e)}(\cdot|\theta)]. $$
The induced Riemannian metric $g$, the Fisher–Rao metric (in matrix components $g_{ij}$), becomes a Hessian metric $D^2\phi$:
$$ g_{ij}(\theta) = \int \frac{\partial \log p^{(e)}(\zeta|\theta)}{\partial\theta^i}\, \frac{\partial \log p^{(e)}(\zeta|\theta)}{\partial\theta^j}\, p^{(e)}(\zeta|\theta)\, d\mu = \frac{\partial^2}{\partial\theta^i \partial\theta^j}\phi(\theta). $$
Equivalently, $g(\theta)$ is the covariance matrix of the sufficient statistics $F$:
$$ g_{ij}(\theta) = \int p^{(e)}(\zeta|\theta)\left(F_i(\zeta) - \int p^{(e)}(\zeta|\theta) F_i(\zeta)\, d\mu\right)\left(F_j(\zeta) - \int p^{(e)}(\zeta|\theta) F_j(\zeta)\, d\mu\right) d\mu. $$
Furthermore, the dual coordinate $\eta = D\phi(\theta)$ is the expectation coordinate given by:
$$ \eta = \int p^{(e)}(\zeta|\theta)\, F(\zeta)\, d\mu, $$
and the dual potential function $\phi^*$ is, as a function of $\eta$, the negative Shannon entropy:
$$ \phi^*(\eta) = -H[p^{(e)}(\cdot|\theta)] = \int p^{(e)}(\zeta|\theta) \log p^{(e)}(\zeta|\theta)\, d\mu. $$
A theoretical justification for the exponential family is that it maximizes the Shannon entropy under constraints on the expected values of the vector of random functions $F(\cdot)$.
The mixture family is another probability family that is very useful in both theory and applications. Let $P_0(\zeta), P_1(\zeta), \ldots, P_d(\zeta)$ be a set of affinely independent probability densities with respect to the same dominating measure $\mu$. Given mixture parameters $\eta_i > 0$ for $i = 0, \ldots, d$ with $\sum_{i=0}^d \eta_i = 1$, the mixture family $p^{(m)}(\cdot|\eta)$ is defined by:
$$ p^{(m)}(\zeta|\eta) = \sum_{i=0}^d \eta_i P_i(\zeta) = P_0(\zeta) + \sum_{i=1}^d \eta_i (P_i(\zeta) - P_0(\zeta)), $$
where $(\eta_1, \ldots, \eta_d)$ may be taken as the independent parameters. It can be shown that the negative Shannon entropy:
$$ \psi(\eta) = -H[p^{(m)}(\cdot|\eta)] = \int p^{(m)}(\zeta|\eta) \log p^{(m)}(\zeta|\eta)\, d\mu $$
of a mixture family is convex in $\eta$. Using $\psi$ as the potential function, we have:
$$ B_\psi(\eta, \eta') = H[p^{(m)}(\cdot|\eta')\,\|\,p^{(m)}(\cdot|\eta)], $$
which is again a KL-divergence and induces a dually flat geometry. In summary, the exponential and mixture families are both dually flat when the geometry is induced by the KL-divergence.
For completeness, we note that the convex conjugate of $\psi(\eta)$ is:
$$ \psi^*(\theta) = -\int P_0(\zeta) \log p^{(m)}(\zeta|\eta)\, d\mu, $$
with conjugate parameters $\theta = D\psi(\eta)$ given by:
$$ \theta_i = \int (P_i(\zeta) - P_0(\zeta)) \log p^{(m)}(\zeta|\eta)\, d\mu. $$
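This identification is easy to verify numerically; the sketch below (ours, using the Bernoulli family as an assumed example) confirms that the Bregman divergence of the cumulant generating function reproduces the KL divergence with the arguments swapped:

```python
import numpy as np

# Bernoulli exponential family on {0,1}: F(z) = z, phi(theta) = log(1 + e^theta)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))
phi = lambda t: np.log1p(np.exp(t))

def bregman(a, b):
    return phi(a) - phi(b) - sig(b) * (a - b)

def kl(a, b):  # H[p_a || p_b]
    p, q = sig(a), sig(b)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

th, thp = 0.8, -0.5
print(bregman(th, thp), kl(thp, th))   # identical values
```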

2.4. Deforming exp and log

The exponential function used in the exponential family:
$$ p^{(e)}(\zeta|\theta) = \exp\{\theta \cdot F(\zeta) - \phi(\theta)\} = \frac{e^{\theta \cdot F(\zeta)}}{Z(\theta)} $$
allows the cumulant generating function $\phi(\theta)$ (also called the potential function) and the partition function $Z(\theta)$ to be linked by the simple relation $\phi = \log Z$. The equivalence of using $\phi$ as subtractive normalization and $Z$ as divisive normalization of the same exponential family ($\int p^{(e)}(\zeta|\theta)\, d\mu = 1$) is due to the elementary, but crucial, property $\exp(x+y) = \exp(x)\exp(y)$ of the exponential function. Using a functional form other than exp (exponential function) or log (logarithm function) is referred to as deformation in information geometric (statistical and information theoretic) contexts, and the resulting probability families are called “deformed” families. Typically, this is performed by regarding log, or equivalently exp, as a special member of some parametric class of functions.
More generally, the exponential/logarithmic function can be considered within a non-parametric function space that includes exp or log as a special member. Several approaches can be found in the literature, including the ϕ-deformed exponential approach by Naudts [5,27,28], the conjugate (ρ,τ)-embedding approach by the first author [8,25,29], and the U-model by Eguchi [7,30]. The ϕ-model and U-model are both one-function models, while the (ρ,τ)-model uses two free functions. It eventually became clear in the 2018 paper [6] by Naudts and the first author that (i) the ϕ- and U-models turned out to be equivalent; (ii) they are special cases of the (ρ,τ)-model upon a particular fixing of the “gauge freedom”; (iii) the corresponding (ρ,τ)-geometry of the manifold of the ϕ-exponential family can have different appearances (gauge freedom), such as a Hessian geometry (under one type of gauge selection) and a conformal Hessian geometry (under another type of gauge selection). The work [6] unified the intermediary results in [10,11,31] and provided a general deformation framework that preserves the rigid interlocking of: (i) the functional form of entropy, cross-entropy, and relative entropy (divergence); (ii) the functional form of the deformed probability family with the corresponding normalization and potential and the duality between the natural and expectation parameterizations; (iii) the expressions of the Riemannian metric (Fisher–Rao metric in general and Hessian metric in particular) and of the conjugate connections. Some of these concepts have their correspondence in nonparametric probability families as well [32,33,34].
Although the (ρ,τ)-model may admit a conformal Hessian metric (more rigorously stated: the ϕ-exponential family with the (ρ,τ)-metric under a certain gauge will lead to conformal Hessian geometry), the dual connections are not projectively flat (unlike the geometry studied in [12]). As a result, while the connections are not flat (torsion-free, but not curvature-free), they are not in general of the constant-curvature type either. Therefore, they are “too general” and do not generate the space of constant curvatures.

2.5. Highlights of λ -Deformation

Here enters λ-deformation as a middle ground [17]. The λ-deformation theory absorbs the q-deformation model of Tsallis and the $F^{(\pm\alpha)}$ model of Wong [12] in deforming the exponential family and unifies the subtractive and divisive normalization—this is an occasion where subtractive and divisive normalizations are still linked by a simple reparameterization of the probability family.
Let us introduce some notation. Consider the following deformed logarithm and exponential functions (note the slight difference from the $\log_q$ notation used by Tsallis, in how the subscript indicates the deformation parameter):
$$ \log_\lambda(t) = \frac{1}{\lambda}\left(t^\lambda - 1\right), \qquad \exp_\lambda(t) = (1 + \lambda t)^{1/\lambda}. $$
More precisely, we define $\exp_\lambda: \mathbb{R} \to [0, \infty]$ (where $\lambda \in \mathbb{R}$, $\lambda \neq 0$) by:
$$ \exp_\lambda(t) = (1 + \lambda t)_+^{1/\lambda}, $$
where $a_+ = \max\{a, 0\}$. In our analysis, we assumed implicitly that $1 + \lambda t > 0$, which is shown to hold for λ-duality, so the subscript $+$ can be omitted. Furthermore, $\frac{d}{dt}\exp_\lambda(t) = [\exp_\lambda(t)]^{1-\lambda}$, so $\exp_\lambda(\cdot)$ is convex if and only if $\lambda < 1$. For this reason, we restricted λ to this range as in [9,28]. Below, we also took $\log t = -\infty$ whenever $t \leq 0$. Note that our notation differs slightly from Tsallis’ indexing of the deformed logarithm and exponential functions; see Section 5.
Next, we construct another pair of inverse functions $\kappa_\lambda, \gamma_\lambda$ by:
$$ \kappa_\lambda = \log \circ \exp_\lambda, \qquad \gamma_\lambda = \log_\lambda \circ \exp, $$
where $\circ$ denotes function composition. Explicitly, they are:
$$ \kappa_\lambda(t) = \frac{1}{\lambda}\log(1 + \lambda t), \qquad \gamma_\lambda(t) = \frac{1}{\lambda}\left(e^{\lambda t} - 1\right). $$
This suite of four functions, namely $\exp_\lambda, \log_\lambda$ as one inverse pair and $\kappa_\lambda, \gamma_\lambda$ as another inverse pair, is called λ-deformation and is used in the discussions of λ-convexity, λ-conjugation, and λ-duality. The regular exponential and logarithmic functions are recovered when $\lambda \to 0$, whence both $\kappa_\lambda$ and $\gamma_\lambda$ reduce to the identity function.
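The following minimal sketch (ours; the value of λ is an arbitrary choice with $\lambda < 1$, $\lambda \neq 0$) implements the suite of four functions and verifies the two inverse pairs and the $\lambda \to 0$ limit:

```python
import numpy as np

lam = -0.5   # an arbitrary deformation parameter, lam != 0, lam < 1

exp_l = lambda t: (1 + lam * t) ** (1 / lam)     # exp_lambda
log_l = lambda t: (t ** lam - 1) / lam           # log_lambda
kap_l = lambda t: np.log(1 + lam * t) / lam      # kappa_lambda = log o exp_lambda
gam_l = lambda t: (np.exp(lam * t) - 1) / lam    # gamma_lambda = log_lambda o exp

t = 0.7
print(log_l(exp_l(t)), gam_l(kap_l(t)))          # both recover t
# as lam -> 0, kappa (and likewise gamma) approaches the identity:
for eps in (1e-2, 1e-4, 1e-6):
    print(np.log(1 + eps * t) / eps)             # -> 0.7
```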
Using these four functions, Wong and Zhang [17] developed the λ-deformation framework to solve the problem of relating the exponential family under subtractive normalization:
$$ p^{(\lambda)}(\zeta|\theta) = \exp_\lambda(\theta \cdot F(\zeta) - \phi_\lambda(\theta)) $$
to that under divisive normalization:
$$ p^{(\lambda)}(\zeta|\vartheta) = \exp_\lambda(\vartheta \cdot F(\zeta))\, e^{-\varphi_\lambda(\vartheta)}. $$
There, the same λ-deformed exponential family can be expressed by two parameterizations $\theta$ and $\vartheta$ linked through:
$$ \theta = \vartheta\, e^{-\lambda \varphi_\lambda(\vartheta)} \;\Longleftrightarrow\; \vartheta = \frac{\theta}{1 - \lambda \phi_\lambda(\theta)}, $$
while the normalization functions $\phi_\lambda$ and $\varphi_\lambda$ (with different domains) are linked through:
$$ \phi_\lambda(\theta) = \gamma_{-\lambda}(\varphi_\lambda(\vartheta)) \;\Longleftrightarrow\; \varphi_\lambda(\vartheta) = \kappa_{-\lambda}(\phi_\lambda(\theta)). $$
The λ-deformation framework leads to a unified way of looking at the Tsallis entropy (related to subtractive normalization) and the Rényi entropy (related to divisive normalization), as well as generating new insights into the distinction between the exponential and mixture families through the lens of deformation theory. To understand this deformation better, we describe the underlying mathematical framework of λ-deformation.

3. Deforming the Legendre Duality: λ -Duality

In this section, we describe the λ-duality and its link to the standard Legendre duality. We start by defining the notions of λ-conjugate and λ-convexity/λ-concavity, then draw a parallel to the regular Legendre duality. We proceed to establish a formal correspondence between the λ-duality and classical convex duality, including the associated notions of the λ-gradient, λ-logarithmic divergence, etc. Some of the derivations are illustrative, yet heuristic—a rigorous analysis in the spirit of Rockafellar [35] is yet to be performed in future research.

3.1. Legendre Duality and Bregman Divergence Reviewed

Recall from (4) that the convex conjugate of a function $f$ on $\mathbb{R}^d$ is defined by:
$$ f^*(u) = \sup_x \{x \cdot u - f(x)\}, \qquad u \in \mathbb{R}^d. $$
It can be proven that:
(i)
$f^*$ is convex;
(ii)
$((f^*)^*)^* = f^*$;
(iii)
$(f^*)^* = f$ if $f$ is convex and lower semicontinuous.
When $f$ is furthermore differentiable, the Legendre transformation:
$$ u = Df(x), $$
which can be motivated by the first-order condition in (11), defines a “dual variable” $u$ satisfying the Fenchel identity:
$$ f(x) + f^*(u) = x \cdot u. $$
We have $x = Df^*(u)$, provided the second derivative (Hessian) $D^2 f$ is positive definite. The function $f$ also defines a Bregman divergence $B_f$ given by:
$$ B_f(x, x') = f(x) - f(x') - Df(x') \cdot (x - x') \geq 0. $$
The Bregman divergence satisfies the reference–representation biduality [24,25] in the sense that:
$$ B_f(x, x') = B_{f^*}(u', u), $$
where $u = Df(x)$, $u' = Df(x')$. Note that when $f$ is convex and differentiable, the non-negativity of the Bregman divergence encodes the fact that for any $x, x'$:
$$ f(x) - f(x') \geq Df(x') \cdot (x - x'). $$

3.2. λ -Deformation of Legendre Duality

The main idea behind the λ-deformation of the Legendre duality (“λ-duality”) is to replace the term $x \cdot u$ in (11) by a monotone transformation of $x \cdot u$. Given a parameter $\lambda \in \mathbb{R} \setminus \{0\}$, later revealed to be the curvature parameter of the information geometric characterization, we replace the term $x \cdot u$ by:
$$ \kappa_\lambda(x \cdot u) = \frac{1}{\lambda}\log(1 + \lambda\, x \cdot u), $$
where $\kappa_\lambda(t)$ and its inverse $\gamma_\lambda(t)$ are given by (10). With this in mind, we give the following definition.
Definition 1
(λ-conjugation). Let $\Omega, \Omega' \subseteq \mathbb{R}^d$. Given a function $f: \Omega \to \mathbb{R}$, we define its λ-conjugate $f^{(\lambda)}$ by:
$$ f^{(\lambda)}(u) = \sup_{x \in \Omega} \{\kappa_\lambda(x \cdot u) - f(x)\}, \qquad u \in \Omega'. $$
Generalized convex dualities have been heavily used in optimal transport theory [36,37] to characterize the optimal transport plans; in this context, it is called the c-duality where c is the cost function of the transport problem. A major novelty of our framework is that the functional form of κ λ (and of γ λ ) leads to explicit formulas, which are not available in the general case. We remark that this is closely related to the fact that the associated information geometry has constant curvature λ .
It turns out that the λ-conjugation defined by (14) corresponds to an appropriately generalized notion of convexity or concavity, through the aid of the function $\gamma_\lambda$ given by (10). Henceforth, we let $\lambda \in \mathbb{R} \setminus \{0\}$ be a fixed constant.
Definition 2
(λ-exponential convexity and concavity). Let $\Omega \subseteq \mathbb{R}^d$ be an open convex set. A function $f: \Omega \to \mathbb{R}$ is said to be λ-exponentially convex (“λ-convex”), or λ-exponentially concave (“λ-concave”), if:
$$ G_{\lambda,f}(x) = (\gamma_\lambda \circ f)(x) = \frac{1}{\lambda}\left(e^{\lambda f(x)} - 1\right) $$
is convex, or concave, on $\Omega$. When $f$ is $C^2$, we have equivalently that $f$ is λ-convex, or λ-concave, if the Hessian of $G_{\lambda,f} \equiv \gamma_\lambda \circ f$ is positive definite, or negative definite.
Note that the additive term $-1/\lambda$ in the above definition of $G_{\lambda,f}(x) = \frac{1}{\lambda}(e^{\lambda f(x)} - 1)$ is not necessary; it is included so that $\lim_{\lambda \to 0} G_{\lambda,f}(x) = f(x)$, so that in the limiting case $\lambda \to 0$, λ-convexity is just ordinary convexity.
It is easily shown that, for $\lambda > 0$ a fixed positive number,
(i)
$f$ is λ-convex if and only if $-f$ is $(-\lambda)$-concave;
(ii)
$f$ is λ-concave if and only if $-f$ is $(-\lambda)$-convex.
Proposition 1.
Given any $f: \Omega \to \mathbb{R}$, define the variable $\tilde{x}$, with range $\tilde{\Omega} \subseteq \mathbb{R}^d$, and the function $g: \tilde{\Omega} \to \mathbb{R}$ by:
$$ \tilde{x} = x\, e^{-\lambda f(x)} = x\left(1 - \lambda\, G_{-\lambda,f}(x)\right), $$
$$ g(\tilde{x}) = -\frac{1}{\lambda}\left(e^{-\lambda f(x)} - 1\right) = \gamma_{-\lambda}(f(x)) = G_{-\lambda,f}(x). $$
Then, the convex (Legendre) conjugate $g^*$ of the function $g$:
$$ g^*(u) = \sup_{\tilde{x} \in \tilde{\Omega}} \{\tilde{x} \cdot u - g(\tilde{x})\} $$
is related to the λ-conjugate $f^{(\lambda)}$ of the function $f$ via:
$$ g^*(u) = \frac{1}{\lambda}\left(e^{\lambda f^{(\lambda)}(u)} - 1\right) = \gamma_\lambda(f^{(\lambda)}(u)) = G_{\lambda, f^{(\lambda)}}(u). $$
Proof. 
We first prove the following identities:
$$ (1 + \lambda\, x \cdot u)\, e^{-\lambda f(x)} = e^{-\lambda f(x)} + \lambda\, e^{-\lambda f(x)}\, x \cdot u = (1 - \lambda g(\tilde{x})) + \lambda\, \tilde{x} \cdot u = 1 + \lambda(\tilde{x} \cdot u - g(\tilde{x})), $$
where, going from the first to the second expression, we used (15) and the fact:
$$ 1 - \lambda g(\tilde{x}) = e^{-\lambda f(x)}, $$
which is a re-write of the definition of $g$ given by (16).
With the above identity, we can proceed to prove this proposition. For $u \in \Omega'$, we have:
$$ f^{(\lambda)}(u) = \sup_{x \in \Omega} \left\{\frac{1}{\lambda}\log(1 + \lambda(x \cdot u)) - f(x)\right\} = \sup_{\tilde{x} \in \tilde{\Omega}} \frac{1}{\lambda}\log\left(1 + \lambda(\tilde{x} \cdot u - g(\tilde{x}))\right) = \frac{1}{\lambda}\log\left(1 + \lambda \sup_{\tilde{x} \in \tilde{\Omega}} \{\tilde{x} \cdot u - g(\tilde{x})\}\right) = \frac{1}{\lambda}\log(1 + \lambda g^*(u)) = \kappa_\lambda(g^*(u)). $$
Recasting the above relation yields (17). □
Recall from convex analysis that $g^*$ is always a convex function regardless of whether $g$ is convex (by the property of Legendre conjugation). The expression $g^*(u) = \gamma_\lambda(f^{(\lambda)}(u))$ in (17) therefore implies that $f^{(\lambda)}$ is λ-convex, by the definition of λ-convexity.
Corollary 1.
For any $f: \Omega \to \mathbb{R}$, its λ-conjugate $f^{(\lambda)}(u)$ as defined by (14) is a λ-convex function of $u$ on $\Omega'$ (note that $\Omega'$ may not necessarily be convex).
Proof. 
We can also give a direct proof (essentially reversing the steps of the proof of Proposition 1):
$$ g^*(u) = \sup_{\tilde{x} \in \tilde{\Omega}} \{\tilde{x} \cdot u - g(\tilde{x})\} = \sup_{x \in \Omega} \left\{e^{-\lambda f(x)}(x \cdot u) + \frac{1}{\lambda}\left(e^{-\lambda f(x)} - 1\right)\right\} = \sup_{x \in \Omega} \frac{1}{\lambda}(1 + \lambda\, x \cdot u)\, e^{-\lambda f(x)} - \frac{1}{\lambda} = \frac{1}{\lambda}\sup_{x \in \Omega} e^{\log(1 + \lambda\, x \cdot u) - \lambda f(x)} - \frac{1}{\lambda} = \frac{1}{\lambda}e^{\sup_{x \in \Omega}\left(\log(1 + \lambda\, x \cdot u) - \lambda f(x)\right)} - \frac{1}{\lambda} = \frac{1}{\lambda}\left(e^{\lambda f^{(\lambda)}(u)} - 1\right) = \gamma_\lambda(f^{(\lambda)}(u)). \;\square $$
Corollary 1 is the extension of the claim that, for any $f$, the standard Legendre conjugate $f^*$ as given by (11) is always a convex function. Because of this, we can prove, in analogy to the standard Legendre conjugation $*$, the following relations:
(i)
$((f^{(\lambda)})^{(\lambda)})^{(\lambda)} = f^{(\lambda)}$ for any $f$;
(ii)
$(f^{(\lambda)})^{(\lambda)} = f$ if $f$ is λ-convex.
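As a numerical illustration of Definition 1 and Corollary 1 (our sketch; the test function $f(x) = x^2/2$ is an assumed example, λ-convex for $\lambda > 0$), the λ-conjugate can be computed by brute-force maximization on a grid and the λ-convexity of $f^{(\lambda)}$ checked via second differences of $\gamma_\lambda \circ f^{(\lambda)}$:

```python
import numpy as np

lam = 0.5
f = lambda x: x**2 / 2.0   # lambda-convex: e^{lam x^2 / 2} is convex

xs = np.linspace(-3, 3, 4001)   # grid over Omega

def f_conj(u):
    # f^{(lam)}(u) = sup_x { kappa_lam(x u) - f(x) } over 1 + lam x u > 0, cf. (14)
    arg = 1 + lam * xs * u
    ok = arg > 0
    return np.max(np.log(arg[ok]) / lam - f(xs[ok]))

us = np.linspace(-1.5, 1.5, 301)
G = np.array([(np.exp(lam * f_conj(u)) - 1) / lam for u in us])  # gamma_lam o f^{(lam)}
# Corollary 1: f^{(lam)} is lambda-convex, i.e., G is convex; its discrete
# second differences should be nonnegative up to grid error.
print(np.min(np.diff(G, 2)))
```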

3.3. Relations between the λ -Duality and Legendre Duality

We proceed to establish a formal relationship between the λ-duality and the ordinary Legendre duality, by relating the λ-conjugation of a λ-convex function $f$, denoted by $f^{(\lambda)}$, to the standard Legendre conjugation (denoted by $*$).
We continue the analysis performed in Proposition 1. Taking the λ-conjugation for a second time,
$$ (f^{(\lambda)})^{(\lambda)}(x) = \sup_{u \in \Omega'} \left\{\frac{1}{\lambda}\log(1 + \lambda(x \cdot u)) - f^{(\lambda)}(u)\right\} = \sup_{\tilde{u} \in \tilde{\Omega}'} \frac{1}{\lambda}\log\left(1 + \lambda(x \cdot \tilde{u} - \tilde{g}(\tilde{u}))\right) = \frac{1}{\lambda}\log\left(1 + \lambda \sup_{\tilde{u} \in \tilde{\Omega}'} \{x \cdot \tilde{u} - \tilde{g}(\tilde{u})\}\right) = \frac{1}{\lambda}\log(1 + \lambda\, \tilde{g}^*(x)) = \kappa_\lambda(\tilde{g}^*(x)). $$
Here, the variable $\tilde{u}$ is defined by:
$$ \tilde{u} = u\, e^{-\lambda f^{(\lambda)}(u)}, $$
and the function $\tilde{g}$ by:
$$ \tilde{g}(\tilde{u}) = \gamma_{-\lambda}(f^{(\lambda)}(u)) = G_{-\lambda, f^{(\lambda)}}(u). $$
In the event that $f$ is λ-convex, then $(f^{(\lambda)})^{(\lambda)} = f$. Therefore:
$$ \tilde{g}^*(x) = \gamma_\lambda(f(x)) = G_{\lambda,f}(x). $$
Therefore, $\tilde{g}(\tilde{u}) = (G_{\lambda,f})^*(\tilde{u})$. That is, the function $\tilde{g}$ is just the (regular) Legendre conjugate $*$ of the function $G_{\lambda,f}(x)$. In the $\tilde{u}$ parameterization, the $\tilde{g}$ function has the expression of (18), with $\tilde{u}$ and $u$ related by (20). This parallels the fact that $g(\tilde{x}) = (G_{\lambda,f^{(\lambda)}})^*(\tilde{x})$, and in the $\tilde{x}$ parameterization, the $g$ function has the expression of (16), with $\tilde{x}$ and $x$ related by (19).
Summarizing the above, we have:
Theorem 1
(Connecting λ-duality to Legendre duality). Let $f$ be a λ-convex function and $f^{(\lambda)}$ be its λ-conjugate. Denote two functions $g$ and $\tilde{g}$:
$$ g(\tilde{x}) = G_{-\lambda,f}(x) = \gamma_{-\lambda}(f(x)), \qquad \tilde{g}(\tilde{u}) = G_{-\lambda,f^{(\lambda)}}(u) = \gamma_{-\lambda}(f^{(\lambda)}(u)), $$
where the two variables $\tilde{x}$ and $\tilde{u}$ are given by:
$$ \tilde{x} = x\, e^{-\lambda f(x)} \;\Longleftrightarrow\; x = \frac{\tilde{x}}{1 - \lambda g(\tilde{x})}, $$
$$ \tilde{u} = u\, e^{-\lambda f^{(\lambda)}(u)} \;\Longleftrightarrow\; u = \frac{\tilde{u}}{1 - \lambda \tilde{g}(\tilde{u})}. $$
Then, the following statements are equivalent:
(i) 
The $(x, u)$ variables satisfy the λ-duality of a pair of λ-convex functions $(f, f^{(\lambda)})$:
$$ \kappa_\lambda(x \cdot u) = f(x) + f^{(\lambda)}(u); $$
(ii) 
The $(\tilde{x}, u)$ variables satisfy the Legendre duality of a pair of convex functions $(g, g^*)$:
$$ \tilde{x} \cdot u = g(\tilde{x}) + g^*(u) $$
with:
$$ g^*(u) = G_{\lambda,f^{(\lambda)}}(u) = \gamma_\lambda(f^{(\lambda)}(u)); $$
(iii) 
The $(x, \tilde{u})$ variables satisfy the Legendre duality of a pair of convex functions $(\tilde{g}, \tilde{g}^*)$:
$$ x \cdot \tilde{u} = \tilde{g}^*(x) + \tilde{g}(\tilde{u}) $$
with:
$$ \tilde{g}^*(x) = G_{\lambda,f}(x) = \gamma_\lambda(f(x)). $$
Proof. 
To prove the equivalence of (21) and (22), we re-write the latter as:
$$ e^{-\lambda f(x)}\, x \cdot u = \gamma_{-\lambda}(f(x)) + \gamma_\lambda(f^{(\lambda)}(u)) = \frac{1}{\lambda}\left(e^{\lambda f^{(\lambda)}(u)} - e^{-\lambda f(x)}\right), $$
where we inserted the relations:
$$ g(\tilde{x}) = \gamma_{-\lambda}(f(x)), \qquad g^*(u) = \gamma_\lambda(f^{(\lambda)}(u)) $$
and replaced $\tilde{x}$ by $x$ using (19). Multiplying both sides by $e^{\lambda f(x)}$, we obtain:
$$ x \cdot u = \frac{1}{\lambda}\left(e^{\lambda(f^{(\lambda)}(u) + f(x))} - 1\right) = \gamma_\lambda(f^{(\lambda)}(u) + f(x)). $$
Noting $(\gamma_\lambda)^{-1} = \kappa_\lambda$ verifies (21). To prove the equivalence of (21) and (23), we rely on an analogous identity:
$$ (1 + \lambda\, x \cdot u)\, e^{-\lambda f^{(\lambda)}(u)} = 1 + \lambda\left(x \cdot \tilde{u} - \tilde{g}(\tilde{u})\right), $$
where:
$$ \tilde{u} = u\, e^{-\lambda f^{(\lambda)}(u)}, \qquad \tilde{g}(\tilde{u}) = \gamma_{-\lambda}(f^{(\lambda)}(u)). $$
We have, after multiplying both sides of (24) by $e^{-\lambda f^{(\lambda)}(u)}$:
$$ x \cdot \tilde{u} = x \cdot u\, e^{-\lambda f^{(\lambda)}(u)} = \frac{1}{\lambda}\left(e^{\lambda f(x)} - e^{-\lambda f^{(\lambda)}(u)}\right) = \gamma_\lambda(f(x)) + \gamma_{-\lambda}(f^{(\lambda)}(u)) = \tilde{g}^*(x) + \tilde{g}(\tilde{u}), $$
where the last step used:
$$ \tilde{g}^*(x) = \gamma_\lambda\left((f^{(\lambda)})^{(\lambda)}(x)\right), \qquad \tilde{g}(\tilde{u}) = \gamma_{-\lambda}(f^{(\lambda)}(u)). $$
Noting that $(f^{(\lambda)})^{(\lambda)} = f$, as $f$ is assumed to be λ-convex, (23) follows. □
We see that the functions $\gamma_\lambda$ and $\gamma_{-\lambda}$ serve as link functions from the $(f, f^{(\lambda)})$-pair of the λ-deformed Legendre conjugation to the $(g, g^*)$-pair and the $(\tilde{g}, \tilde{g}^*)$-pair of the regular Legendre conjugation.
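The link functions above can be verified numerically. In the sketch below (ours, with the same assumed $f(x) = x^2/2$), the ordinary Legendre conjugate of $g$ and the λ-conjugate of $f$ are computed by two independent grid maximizations and agree through $\gamma_\lambda$, as Theorem 1(ii) asserts:

```python
import numpy as np

lam = 0.5
f = lambda x: x**2 / 2.0
xs = np.linspace(-3, 3, 20001)

# change of variables of Proposition 1: x~ = x e^{-lam f(x)}, g(x~) = gamma_{-lam}(f(x))
x_t = xs * np.exp(-lam * f(xs))
g_t = (1 - np.exp(-lam * f(xs))) / lam

def g_star(u):                  # ordinary conjugate: sup { x~ u - g(x~) }
    return np.max(x_t * u - g_t)

def gamma_f_conj(u):            # gamma_lam( f^{(lam)}(u) ), via the lam-conjugate (14)
    arg = 1 + lam * xs * u
    ok = arg > 0
    f_l = np.max(np.log(arg[ok]) / lam - f(xs[ok]))
    return (np.exp(lam * f_l) - 1) / lam

for u in (0.3, 0.8, 1.4):
    print(g_star(u), gamma_f_conj(u))   # equal: g* = gamma_lam o f^{(lam)}
```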

4. λ -Logarithmic Divergence and Its Dualistic Geometry

In this section, we study the λ -deformation of the Bregman (canonical) divergence function and the resulting dualistic geometry (Riemannian metric and dual connections), which correspond to the λ -duality. This involves first establishing the λ -deformation to the gradient operation (so-called λ -gradient), which then leads to the so-called λ -logarithmic divergence function as deformation to the Bregman divergence. Finally, we show that the resulting Riemannian metric is a conformal Hessian metric, while the resulting dual connections are projectively flat (with constant curvature). The conformal and projective factor is parameterized by λ , which gives the curvature of the constant curvature space.

4.1. λ -Gradient

Definition 3
(λ-gradient). For $x \in \Omega$, define the λ-gradient $D^{(\lambda)} f$ by:
$$ D^{(\lambda)} f(x) = \frac{1}{1 - \lambda\, Df(x) \cdot x}\, Df(x). $$
The work [17] (Theorem 2.2) established the above formula for deforming the gradient of a function, motivated by the λ-duality setting. For mathematical convenience, it is proven under some regularity conditions; a full generalization along the lines of [35] is a natural direction for further research.
Theorem 2
(λ-gradient for λ-duality). Let $\lambda \neq 0$, and let $f$ be a λ-exponentially convex function that is $C^2$ on some open convex set $\Omega \subseteq \mathbb{R}^d$, such that (a) $D^2 G_{\lambda,f}$ is strictly positive definite and (b) $1 - \lambda\, Df(x) \cdot x > 0$ on $\Omega$. Then we have
(i) 
$D^{(\lambda)} f$ is a $C^1$-diffeomorphism from $\Omega$ to its range $\Omega'$.
(ii) 
Denote $u = D^{(\lambda)} f(x)$. We have $1 + \lambda\, x \cdot u > 0$, and the following identity holds:
$$ f(x) + f^{(\lambda)}(u) = \frac{1}{\lambda}\log(1 + \lambda\, x \cdot u) \equiv \kappa_\lambda(x \cdot u). $$
(iii) 
Furthermore, $x = D^{(\lambda)} f^{(\lambda)}(u)$.
Note that the λ-gradient $D^{(\lambda)} f$ differs from the regular gradient $Df$ by a scalar multiplication. The duality between $x$ and $u$ under the λ-duality is mediated by the dual variable $u = D^{(\lambda)} f(x)$, which plays an important role in what follows.
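The following sketch (ours, assuming the one-dimensional test function $f(x) = x^2/2$) checks that the λ-gradient of Definition 3 produces exactly the maximizer in the λ-conjugation (14):

```python
import numpy as np

lam = 0.5
f, df = (lambda x: x**2 / 2.0), (lambda x: x)

def lam_grad(x):
    # D^{(lam)} f(x) = D f(x) / (1 - lam D f(x) . x), Eq. (25)
    return df(x) / (1 - lam * df(x) * x)

x0 = 0.9
u0 = lam_grad(x0)
xs = np.linspace(-3, 3, 200001)
arg = 1 + lam * xs * u0
ok = arg > 0
vals = np.log(arg[ok]) / lam - f(xs[ok])
print(x0, xs[ok][np.argmax(vals)])   # the sup over x is attained at x0
```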
Let:
(a)
$u = D^{(\lambda)} f(x)$ denote the λ-conjugate variable corresponding to $x$ with respect to $f(x)$;
(b)
$\hat{u} = Dg(\tilde{x})$ be the Legendre conjugate variable corresponding to $\tilde{x}$ with respect to $g(\tilde{x})$;
(c)
$x = D^{(\lambda)} f^{(\lambda)}(u)$ denote the λ-conjugate variable corresponding to $u$ with respect to $f^{(\lambda)}(u)$;
(d)
$\hat{x} = D\tilde{g}(\tilde{u})$ be the Legendre conjugate variable corresponding to $\tilde{u}$ with respect to $\tilde{g}(\tilde{u})$.
Is there a simple relationship between them? The following proposition says $u(x) = \hat{u}(\tilde{x})$, where $\tilde{x}$ and $x$ are linked by (19), and $x(u) = \hat{x}(\tilde{u})$, where $\tilde{u}$ and $u$ are linked by (20).
Proposition 2.
We have:
$$ u = D_x^{(\lambda)} f(x) = D_{\tilde{x}}\, g(\tilde{x}), \qquad x = D_u^{(\lambda)} f^{(\lambda)}(u) = D_{\tilde{u}}\, \tilde{g}(\tilde{u}). $$
Here, we add a subscript to $D$ to emphasize the argument with respect to which the derivative is taken.
Proof. 
We use matrix notation where the gradient is regarded as a column vector. Applying the multivariate chain rule to (16), we have:
$$ (D_{\tilde{x}}\, g(\tilde{x}))^\top = e^{-\lambda f(x)}\, (D_x f(x))^\top\, \frac{\partial x}{\partial \tilde{x}}(\tilde{x}), $$
where $\frac{\partial x}{\partial \tilde{x}}(\tilde{x})$ is the Jacobian of the transformation $\tilde{x} \mapsto x$ and $(\cdot)^\top$ denotes the transpose. For two vectors $x$ and $y$, their outer product is denoted by $x \otimes y$, which is a rank-one square matrix with $(i,j)$-entry $x_i y_j$.
From (15), we have:
$$ \frac{\partial \tilde{x}}{\partial x}(x) = e^{-\lambda f(x)}\left(I - \lambda\, x \otimes (D_x f(x))\right). $$
Since $1 - \lambda\, D_x f(x) \cdot x > 0$ by assumption, we can invert the Jacobian by the Sherman–Morrison formula (see [12], Proposition 4) to obtain:
$$ \frac{\partial x}{\partial \tilde{x}}(\tilde{x}) = e^{\lambda f(x)}\left(I + \frac{\lambda\, x \otimes (D_x f(x))}{1 - \lambda\, D_x f(x) \cdot x}\right). $$
Plugging this into the above, we have:
$$ (D_{\tilde{x}}\, g(\tilde{x}))^\top = \frac{(D_x f(x))^\top}{1 - \lambda\, D_x f(x) \cdot x}. $$
Using (25) to relate $D_x^{(\lambda)} f(x)$ to $D_x f(x)$, the first relation involving $D_{\tilde{x}}\, g(\tilde{x})$ is proven. The proof of the second relation in this proposition is analogous. □
Just as ordinary convexity leads to the notion of the Bregman divergence (12), the notion of λ-exponential convexity leads to a generalization that we call the λ-logarithmic divergence. Henceforth, we let $f: \Omega \to \mathbb{R}$ be a λ-exponentially convex function on an open convex domain $\Omega \subseteq \mathbb{R}^d$, and we assumed that the regularity conditions in Theorem 2 hold.

4.2. λ -Logarithmic Divergence

By the definition of λ-convexity, we have that $G_{\lambda,f}(x) = \gamma_\lambda(f(x))$ is convex on $\Omega$. By the ordinary convexity of $G_{\lambda,f}$, we have:
$$ G_{\lambda,f}(x) - G_{\lambda,f}(x') \geq DG_{\lambda,f}(x') \cdot (x - x'), \qquad x, x' \in \Omega. $$
In terms of $f$, we have, after some manipulations,
$$ \gamma_\lambda(f(x) - f(x')) \geq Df(x') \cdot (x - x'). $$
Since $\gamma_\lambda$ is increasing, we have:
$$ f(x) - f(x') \geq (\gamma_\lambda)^{-1}\left(Df(x') \cdot (x - x')\right) = \kappa_\lambda\left(Df(x') \cdot (x - x')\right). $$
This motivates the following definition.
Definition 4
(λ-logarithmic divergence). We define the λ-logarithmic divergence of $f$ by:
$$ L_{\lambda,f}(x, x') = f(x) - f(x') - \kappa_\lambda\left(Df(x') \cdot (x - x')\right) = f(x) - f(x') - \frac{1}{\lambda}\log\left(1 + \lambda\, Df(x') \cdot (x - x')\right), \qquad x, x' \in \Omega. $$
See Figure 1 for a graphical illustration. We note that the logarithmic correction in (26) corresponds to a logarithmic first-order approximation, based at $x'$, which is possible due to the λ-exponential convexity of $f$. We also note that when $\lambda > 0$, it is possible that $L_{\lambda,f}(x, x') = +\infty$. Nevertheless, $L_{\lambda,f}(x, x')$ is finite when $x$ and $x'$ are sufficiently close. Formally, letting $\lambda \to 0$ in (26) recovers the Bregman divergence.
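A minimal numerical sketch of Definition 4 (ours, again with the assumed $f(x) = x^2/2$), illustrating non-negativity and the Bregman divergence as the $\lambda \to 0$ limit of (26):

```python
import numpy as np

f, df = (lambda x: x**2 / 2.0), (lambda x: x)

def L_div(lam, x, xp):
    # L_{lam,f}(x, x') of Eq. (26)
    return f(x) - f(xp) - np.log(1 + lam * df(xp) * (x - xp)) / lam

def bregman(x, xp):
    return f(x) - f(xp) - df(xp) * (x - xp)

x, xp = 1.1, 0.4
for lam in (0.5, 0.1, 1e-3, 1e-6):
    print(lam, L_div(lam, x, xp))   # nonnegative, decreasing toward the limit
print(bregman(x, xp))               # the lam -> 0 limit, B_f(x, x')
```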

4.3. λ -Logarithmic Divergence in Different Forms

We now prove a lemma about the relationship of the variables $x', u'$ to the gradients or λ-gradients of $f$ and $f^{(\lambda)}$. We assumed, for convenience, that $1 + \lambda\, x \cdot u > 0$ for all $x \in \Omega$, $u \in \Omega'$.
Lemma 1.
Given $u = D_x^{(\lambda)} f(x)$, or equivalently $x = D_u^{(\lambda)} f^{(\lambda)}(u)$, for arbitrary $x', u'$ (such that the expressions are well defined), we have the following identities:
$$ \kappa_\lambda(u \cdot x') - \kappa_\lambda(u \cdot x) = \kappa_\lambda\left(u \cdot (x' - x)\, (\Pi_\lambda)^{-1}\right), $$
$$ \kappa_\lambda(u' \cdot x) - \kappa_\lambda(u \cdot x) = \kappa_\lambda\left((u' - u) \cdot x\, (\Pi_\lambda)^{-1}\right), $$
where $\Pi_\lambda$ is a multiplicative factor (a function of $x$ or $u$) given by:
$$ \Pi_\lambda \equiv 1 + \lambda\, x \cdot u = \frac{1}{1 - \lambda\, Df(x) \cdot x} = \frac{1}{1 - \lambda\, Df^{(\lambda)}(u) \cdot u}. $$
Proof. 
Since $u = D_x^{(\lambda)} f(x)$, substituting (25), we have:
$$ u \cdot x = \frac{Df(x) \cdot x}{1 - \lambda\, Df(x) \cdot x} $$
and:
$$ 1 + \lambda\, u \cdot x = \frac{1}{1 - \lambda\, Df(x) \cdot x}, $$
so:
$$ 1 + \lambda\, u \cdot x' = \frac{1 + \lambda\, Df(x) \cdot (x' - x)}{1 - \lambda\, Df(x) \cdot x} = (1 + \lambda\, u \cdot x)\left(1 + \lambda\, Df(x) \cdot (x' - x)\right). $$
Taking the logarithm and rearranging, we obtain (27).
On the other hand, because:
$$ x = D_u^{(\lambda)} f^{(\lambda)}(u) = \frac{Df^{(\lambda)}(u)}{1 - \lambda\, Df^{(\lambda)}(u) \cdot u}, $$
we also have:
$$ 1 + \lambda\, x \cdot u = \frac{1}{1 - \lambda\, Df^{(\lambda)}(u) \cdot u}. $$
The proof of (28) is similar. □
In the above lemma, $x'$ and $u'$ are arbitrary; it is interesting that a modified form of “linearity” holds even though $\kappa_\lambda$ is itself nonlinear. As a consequence, we have an alternative expression for $L_{\lambda,f}(x, x')$.
Proposition 3.
$L_{\lambda,f}(x, x')$ defined by (26) can also be written as:
$$ L_{\lambda,f}(x, x') = f(x) - f(x') - \kappa_\lambda(x \cdot u') + \kappa_\lambda(x' \cdot u'), $$
where $u' = D^{(\lambda)} f(x')$.
Of course, we may express the λ-logarithmic divergence using the conjugate variables $u, u'$ as well. Indeed, we have the analogous reference–representation biduality (see [24,25]) that is characteristic of the Bregman divergence and the canonical divergence for dually flat spaces, that is, (8). See [38] for the reference–representation biduality of a general c-divergence (which includes both the Bregman and logarithmic divergences) based on optimal transport.
Theorem 3.
The λ-logarithmic divergence satisfies the reference–representation biduality, namely:
$$ L_{\lambda,f^{(\lambda)}}(u', u) = L_{\lambda,f}(x, x'), $$
where $u = D^{(\lambda)} f(x)$ and $u' = D^{(\lambda)} f(x')$. Moreover, define the λ-deformed canonical divergence $A_{\lambda,f}$ by:
$$ A_{\lambda,f}(x, u') = f(x) + f^{(\lambda)}(u') - \frac{1}{\lambda}\log(1 + \lambda\, x \cdot u') = A_{\lambda,f^{(\lambda)}}(u', x). $$
We have:
$$ L_{\lambda,f}(x, x') = A_{\lambda,f}(x, u') = A_{\lambda,f^{(\lambda)}}(u', x) = L_{\lambda,f^{(\lambda)}}(u', u). $$
Proposition 3 also allows us to derive our next theorem (Theorem 4) linking λ -logarithmic divergence and Bregman divergence (also see [19] for a discussion of conformal divergence in the affine immersion setting).
Theorem 4.
The canonical forms of the λ-logarithmic divergence, $A_{\lambda,f}$ and $A_{\lambda,f^{(\lambda)}}$, are related to the canonical forms of the Bregman divergence, $A_{g^*}$ and $A_{\tilde{g}^*}$, via a conformal transformation and the non-linear link function $\kappa_{-\lambda}$:
$$ A_{\lambda,f^{(\lambda)}}(u, x) = \kappa_{-\lambda}\left(e^{-\lambda f^{(\lambda)}(u)}\, A_{g^*}(u, \tilde{x})\right) = A_{\lambda,f}(x, u) = \kappa_{-\lambda}\left(e^{-\lambda f(x)}\, A_{\tilde{g}^*}(x, \tilde{u})\right). $$
Proof. 
$$ \begin{aligned} A_{\lambda,f^{(\lambda)}}(u, x) &= f^{(\lambda)}(u) + f(x) - \frac{1}{\lambda}\log(1 + \lambda\, u \cdot x) \\ &= f^{(\lambda)}(u) - \frac{1}{\lambda}\log\left[(1 + \lambda\, u \cdot x)\, e^{-\lambda f(x)}\right] \\ &= \frac{1}{\lambda}\log(1 + \lambda g^*(u)) - \frac{1}{\lambda}\log\left[e^{-\lambda f(x)} + \lambda\, u \cdot (x\, e^{-\lambda f(x)})\right] \\ &= \frac{1}{\lambda}\log(1 + \lambda g^*(u)) - \frac{1}{\lambda}\log\left(1 - \lambda g(\tilde{x}) + \lambda\, u \cdot \tilde{x}\right) \\ &= -\frac{1}{\lambda}\log\frac{1 + \lambda(u \cdot \tilde{x} - g(\tilde{x}))}{1 + \lambda g^*(u)} = -\frac{1}{\lambda}\log\left(1 + \lambda\, \frac{u \cdot \tilde{x} - g(\tilde{x}) - g^*(u)}{1 + \lambda g^*(u)}\right) \\ &= -\frac{1}{\lambda}\log\left(1 - \lambda\, \frac{A_{g^*}(u, \tilde{x})}{1 + \lambda g^*(u)}\right) = -\frac{1}{\lambda}\log\left(1 - \lambda\, e^{-\lambda f^{(\lambda)}(u)}\, A_{g^*}(u, \tilde{x})\right) = \kappa_{-\lambda}\left(e^{-\lambda f^{(\lambda)}(u)}\, A_{g^*}(u, \tilde{x})\right). \end{aligned} $$
The proof of the second expression in Theorem 4 is similar; we have $A_{\lambda,f^{(\lambda)}}(u, x) = A_{\lambda,f}(x, u)$ from Theorem 3. □

4.4. Dualistic Geometry of λ -Logarithmic Divergence

Regard $x \in \Omega$ as the primal (global) coordinate system of a manifold $\mathcal{M}$. As described in Section 2.1, we may use the λ-logarithmic divergence $L_{\lambda,f}$ of $f$ to construct a dualistic structure $(\mathcal{M}, g, \nabla, \nabla^*)$. In this subsection, we provide explicit expressions of the corresponding coefficients and state some key geometric consequences.
We begin with the Riemannian metric.
Theorem 5.
The Riemannian metric $g$ induced from $L_{\lambda,f}(x, x')$ is given in the primal coordinates $x$ by:
$$ g(x) = D^2 f(x) + \lambda\, (Df(x)) \otimes (Df(x)) = e^{-\lambda f(x)}\, D^2 G_{\lambda,f}(x). $$
Proof. 
According to (2), we perform direct differentiation of (26):
$$ g_{ij}(x) = \left.\frac{\partial^2}{\partial x^i \partial x^j} L_{\lambda,f}(x, x')\right|_{x'=x} $$
and obtain the expression (29). □
By symmetry, under the dual coordinate system $u = D^{(\lambda)} f(x)$, we have:
$$ g(u) = D^2 f^{(\lambda)}(u) + \lambda\, (Df^{(\lambda)}(u)) \otimes (Df^{(\lambda)}(u)). $$
From the first equality in (29), we see that $g$ is a rank-one correction of the Hessian matrix $D^2 f(x)$. From the second equality, we see that $g$ is in fact a conformal Hessian metric, i.e., it has the form $g(x) = e^{-\lambda f(x)}\, g_0(x)$, where $g_0(x) = D^2 G_{\lambda,f}(x)$ is the Hessian metric induced by the convex function $G_{\lambda,f}(x) = \frac{1}{\lambda}(e^{\lambda f(x)} - 1)$. This conclusion is entirely anticipated from Theorem 4.
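The conformal Hessian identity (29) can be confirmed by finite differences; in this sketch of ours, the bivariate $f$ is an arbitrary smooth test function, not one from the paper:

```python
import numpy as np

lam = 0.5
f = lambda x: x[0]**2 / 2 + np.cosh(x[1]) / 4   # arbitrary smooth test function

def grad(F, x, h=1e-5):
    return np.array([(F(x + np.eye(len(x))[i] * h) - F(x - np.eye(len(x))[i] * h)) / (2 * h)
                     for i in range(len(x))])

def hess(F, x, h=1e-4):
    d = len(x); H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (F(x + ei + ej) - F(x + ei - ej)
                       - F(x - ei + ej) + F(x - ei - ej)) / (4 * h * h)
    return H

x = np.array([0.3, -0.2])
G   = lambda y: (np.exp(lam * f(y)) - 1) / lam                 # G_{lam,f} = gamma_lam o f
lhs = hess(f, x) + lam * np.outer(grad(f, x), grad(f, x))      # D^2 f + lam Df (x) Df
rhs = np.exp(-lam * f(x)) * hess(G, x)                         # e^{-lam f} D^2 G_{lam,f}
print(np.max(np.abs(lhs - rhs)))   # ~ 0: the metric is conformally Hessian
```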
To compute the Christoffel symbols of the primal and dual connections, we need an expression of the inverse of the Riemannian metric g ( x ) as a matrix. This is provided by the following proposition.
Proposition 4.
The metric $g$ can be expressed as:
$$ g(x) = \frac{1}{\Pi_\lambda(x)}\left(I_d - \frac{\lambda}{\Pi_\lambda(x)}\, u \otimes x\right)\frac{\partial u}{\partial x}(x), $$
where $\frac{\partial u}{\partial x}$ is the Jacobian matrix of the coordinate transformation $x \mapsto u$ and $I_d$ is the $d \times d$ identity matrix with the Kronecker $\delta_{ij}$ as its entries. Here:
$$ \Pi_\lambda(x) = 1 + \lambda\, x \cdot u = \frac{1}{1 - \lambda\, Df(x) \cdot x}, $$
and $\Pi_\lambda(x) > 0$ for $x \in \Omega$ and $u = D^{(\lambda)} f(x)$, due to Part (ii) of Theorem 2.
Moreover, the inverse of $g(x)$ can be expressed as:
$$ (g(x))^{-1} = \Pi_\lambda(x)\, \frac{\partial x}{\partial u}(u)\left(I_d + \lambda\, u \otimes x\right). $$
Proof. 
Using the λ-logarithmic divergence represented as the generalized canonical divergence $A_{\lambda,f}$, we apply (2) (in its equivalent mixed-derivative form) to obtain:
$$ g_{ij}(x) = -\left.\frac{\partial^2}{\partial x^i \partial x'^j} L_{\lambda,f}(x, x')\right|_{x'=x} = -\left.\frac{\partial^2}{\partial x^i \partial x'^j}\left[f(x) + f^{(\lambda)}(u') - \frac{1}{\lambda}\log(1 + \lambda\, x \cdot u')\right]\right|_{x'=x} = \frac{1}{\Pi_\lambda(x)}\frac{\partial u_i}{\partial x^j} - \frac{\lambda}{\Pi_\lambda(x)^2}\, u_i \sum_{k=1}^d x^k \frac{\partial u_k}{\partial x^j}. $$
Expressing the above in matrix notation gives (30). Formula (31) follows by inverting (30) using the Sherman–Morrison formula. □
Under the dualistic structure induced by a λ-logarithmic divergence, the primal and dual coordinate vector fields are no longer biorthogonal in the sense of (7). Nevertheless, we have the following generalization. Again, we write $\Pi_\lambda(x) = 1 + \lambda\, x \cdot u$.
Corollary 2.
The inner product of the coordinate vector fields $\frac{\partial}{\partial x^i}$ and $\frac{\partial}{\partial u_j}$ is given by a λ-deformed “biorthogonality” relation:
$$ g\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial u_j}\right) = \frac{1}{\Pi_\lambda(x)}\,\delta_i^j - \frac{\lambda}{\Pi_\lambda(x)^2}\, x^j u_i. $$
Proof. 
Write $\frac{\partial}{\partial u_j} = \sum_{m=1}^d \frac{\partial x^m}{\partial u_j}\frac{\partial}{\partial x^m}$. Then:
$$ g\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial u_j}\right) = \sum_{m=1}^d \frac{\partial x^m}{\partial u_j}\, g\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^m}\right). $$
Simplifying the expression using (30) gives the result. For details, see ([12], Proposition 8). □
Theorem 6.
The Christoffel symbols of the primal connection $\nabla$ are given by:
$$ \Gamma_{ij,k}(x) = -\frac{\lambda}{\Pi_\lambda(x)^2}\left(u_j \frac{\partial u_i}{\partial x^k} + u_i \frac{\partial u_j}{\partial x^k}\right) + \frac{2\lambda^2}{\Pi_\lambda(x)^3}\, u_i u_j \sum_{\ell=1}^d x^\ell \frac{\partial u_\ell}{\partial x^k}, $$
where $\Pi_\lambda(x) = 1 + \lambda\, x \cdot u$ as in Proposition 4. Furthermore, let $\Gamma_{ij}^k = \sum_{\ell=1}^d \Gamma_{ij,\ell}\, g^{\ell k}$ be the Christoffel symbols of the second kind; then:
$$ \Gamma_{ij}^k(x) = -\frac{\lambda}{\Pi_\lambda(x)}\left(u_i \delta_j^k + u_j \delta_i^k\right) = -\lambda\left(\frac{\partial f}{\partial x^i}(x)\,\delta_j^k + \frac{\partial f}{\partial x^j}(x)\,\delta_i^k\right), $$
where $\delta$ is the Kronecker delta.
Similarly, under the dual coordinate system $u$, the Christoffel symbols (of the second kind) of the dual connection $\nabla^*$ are given by:
$$ \Gamma_{ij}^{*k}(u) = -\lambda\left(\frac{\partial f^{(\lambda)}}{\partial u_i}(u)\,\delta_j^k + \frac{\partial f^{(\lambda)}}{\partial u_j}(u)\,\delta_i^k\right). $$
Proof. 
This is a straightforward computation using (3) and Proposition 4. The details, which are a minor modification of the proof of ([12], Proposition 5), are omitted. □
Although the curvatures of $\nabla$ and $\nabla^*$ are nonzero, it can be shown that $\nabla$ and $\nabla^*$ are both projectively flat, i.e., each of them is projectively equivalent to a flat connection. Specifically, any $\nabla$-geodesic (resp. $\nabla^*$-geodesic) is a time-reparameterized straight line under the $x$ (resp. $u$) coordinate system.
Theorem 7.
The sectional curvatures of $\nabla$ and $\nabla^*$ with respect to $g$ are both equal to $\lambda$.
Proof. 
See ([12], Theorem 15). □
Using the dual projective flatness and Corollary 2, Reference ([12], Theorem 16) showed that the λ -logarithmic divergence satisfies a generalized Pythagorean theorem, which generalizes the property of Bregman divergence outlined in Section 2.2.
Theorem 8
(Generalized Pythagorean theorem). Let $P, Q, R \in \mathcal{M}$. Then:
$$ L_{\lambda,f}(x_Q, x_P) + L_{\lambda,f}(x_R, x_Q) = L_{\lambda,f}(x_R, x_P) $$
if and only if the $\nabla$-geodesic between $Q$ and $R$ and the $\nabla^*$-geodesic between $Q$ and $P$ meet $g$-orthogonally at $Q$.
To summarize, the dually flat geometry becomes a dually projectively flat geometry with constant sectional curvature λ , and the Hessian metric becomes a conformal Hessian metric. Nevertheless, the primal and dual geodesics are still straight lines (up to time reparametrizations), and the generalized Pythagorean theorem holds.
We say that the above λ-deformation framework is “canonical” because the statistical manifold $(\mathcal{M}, g, \nabla, \nabla^*)$, with the conformal Hessian metric $g_{ij}$ given by (29) and the pair of dual projectively flat affine connections $\Gamma_{ij}^k, \Gamma_{ij}^{*k}$ given by (32) and (33), is the only statistical structure with constant curvature ([12], Theorem 15). Moreover, given such a statistical manifold, one can locally construct a λ-logarithmic divergence that induces the given geometry.

5. Linking λ -Deformation to Rényi Entropy and Divergence

5.1. Relation between Tsallis’ and Rényi’s Deformation Expressions

Recall that Tsallis [39], in the context of statistical physics, introduced the generalized entropy:
$$ H_\lambda^{\mathrm{Tsallis}}[p] = \int p \log_\lambda \frac{1}{p}\, d\mu = \frac{1}{\lambda}\left(\int (p(\zeta))^{1-\lambda}\, d\mu - 1\right); $$
note that we use $\lambda$ here in place of $q = 1 - \lambda$ as in [40].
Tsallis entropy is related to Rényi entropy [41], defined as:
$$ H_\lambda^{\mathrm{Rényi}}[p] := \frac{1}{\lambda}\log \int p^{1-\lambda}(\zeta)\, d\mu, $$
through a monotonic transformation:
$$ H_\lambda^{\mathrm{Tsallis}}[p] = \frac{1}{\lambda}\left(e^{\lambda H_\lambda^{\mathrm{Rényi}}[p]} - 1\right). $$
In our current notation,
$$ H_\lambda^{\mathrm{Tsallis}}[p] = \gamma_\lambda\left(H_\lambda^{\mathrm{Rényi}}[p]\right) \;\Longleftrightarrow\; H_\lambda^{\mathrm{Rényi}}[p] = \kappa_\lambda\left(H_\lambda^{\mathrm{Tsallis}}[p]\right). $$
The Rényi divergence (with Rényi index $1 - \lambda$) is defined by:
$$ H_\lambda^{\mathrm{Rényi}}[p\,\|\,p'] = -\frac{1}{\lambda}\log \int (p(\zeta))^{1-\lambda} (p'(\zeta))^{\lambda}\, d\mu. $$
The Rényi divergence is additive: given two product measures $p_1 \otimes p_2$ and $p_1' \otimes p_2'$, we have:
$$ H_\lambda^{\mathrm{Rényi}}[p_1 \otimes p_2\,\|\,p_1' \otimes p_2'] = H_\lambda^{\mathrm{Rényi}}[p_1\,\|\,p_1'] + H_\lambda^{\mathrm{Rényi}}[p_2\,\|\,p_2']. $$
Because Tsallis entropy is not additive, this has been used as an argument for favoring Rényi entropy as a physical concept over Tsallis entropy; see [28] (Section 9.3) and [42].
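These relations are straightforward to check numerically. In the sketch below (ours, with arbitrary discrete densities and the counting measure as the reference measure), we verify the monotone Tsallis–Rényi transformation and the additivity of the Rényi divergence over product measures:

```python
import numpy as np

lam = 0.4                      # Renyi/Tsallis index q = 1 - lam
p = np.array([0.5, 0.3, 0.2])  # an arbitrary discrete density

tsallis = (np.sum(p ** (1 - lam)) - 1) / lam
renyi   = np.log(np.sum(p ** (1 - lam))) / lam
gam_l   = lambda t: (np.exp(lam * t) - 1) / lam
print(tsallis, gam_l(renyi))   # H^Tsallis = gamma_lam(H^Renyi)

def renyi_div(p, q):
    # H_lam^Renyi[p || p'] = -(1/lam) log sum p^{1-lam} p'^{lam}
    return -np.log(np.sum(p ** (1 - lam) * q ** lam)) / lam

p1, q1 = np.array([0.6, 0.4]), np.array([0.3, 0.7])
p2, q2 = np.array([0.2, 0.8]), np.array([0.5, 0.5])
prod = lambda a, b: np.outer(a, b).ravel()   # product measure
print(renyi_div(prod(p1, p2), prod(q1, q2)),
      renyi_div(p1, q1) + renyi_div(p2, q2))  # additivity over products
```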

5.2. λ -Exponential Family

Under the λ-deformation, there is an intrinsic link between the subtractive and divisive normalizations of the λ-deformed exponential family. Starting with the observation:
$$ e^{\kappa_\lambda(t)} = (1 + \lambda t)^{1/\lambda} = \exp_\lambda(t), $$
we investigate the identity:
$$ (1 + \lambda\, \vartheta \cdot F(\zeta))^{1/\lambda}\, e^{-\varphi_\lambda(\vartheta)} = \left(1 + \lambda(\theta \cdot F(\zeta) - \phi_\lambda(\theta))\right)^{1/\lambda}. $$
Taking the λ-th power of both sides and equating, we obtain the conditions for the above identity to hold:
$$ \theta = \vartheta\, e^{-\lambda \varphi_\lambda(\vartheta)} \;\Longleftrightarrow\; \vartheta = \frac{\theta}{1 - \lambda \phi_\lambda(\theta)}, \qquad \phi_\lambda(\theta) = \frac{1}{\lambda}\left(1 - e^{-\lambda \varphi_\lambda(\vartheta)}\right) \;\Longleftrightarrow\; \varphi_\lambda(\vartheta) = -\frac{1}{\lambda}\log\left(1 - \lambda \phi_\lambda(\theta)\right). $$
This fact led us to define a λ -exponential family that can be normalized both subtractively and divisively: the former denoted by p ( ζ | θ ) and the latter denoted by p ( ζ | ϑ ) .
Proposition 5
(Reparameterization equivalence). Let $\lambda \neq 0$. With respect to a given reference measure $\mu$ and a fixed vector of random functions $F(\zeta) = (F_1(\zeta), \ldots, F_d(\zeta))$, the λ-exponential family is given by $p^{(\lambda)}(\zeta|\theta)$ under subtractive normalization and by $p^{(\lambda)}(\zeta|\vartheta)$ under divisive normalization; they are reparameterizations of each other:
$$ p^{(\lambda)}(\zeta|\theta) = \exp_\lambda(\theta \cdot F(\zeta) - \phi_\lambda(\theta)) = \exp_\lambda(\vartheta \cdot F(\zeta))\, e^{-\varphi_\lambda(\vartheta)} = p^{(\lambda)}(\zeta|\vartheta). $$
Here, the function $\phi_\lambda(\theta)$ is called the subtractive λ-potential and is used for subtractive normalization, while $\varphi_\lambda(\vartheta)$ is called the divisive λ-potential and is used for divisive normalization. Note that $\phi_\lambda$ and $\varphi_\lambda$ may not have the same domains. They satisfy:
$$ \phi_\lambda(\theta) = \gamma_{-\lambda}(\varphi_\lambda(\vartheta)) \;\Longleftrightarrow\; \varphi_\lambda(\vartheta) = \kappa_{-\lambda}(\phi_\lambda(\theta)), $$
where:
$$ \kappa_{-\lambda}(t) = -\frac{1}{\lambda}\log(1 - \lambda t), \qquad \gamma_{-\lambda}(t) = \frac{1}{\lambda}\left(1 - e^{-\lambda t}\right). $$
Note again that we use $\vartheta$ for the divisive normalization setting to distinguish it from $\theta$ for the subtractive normalization setting. For later convenience, we also note:
$$ e^{-\lambda \varphi_\lambda(\vartheta)} = 1 - \lambda \phi_\lambda(\theta). $$

5.2.1. Under Subtractive Normalization

The deformed exponential family takes the form:
$$ p(\zeta|\theta) = \exp_\lambda\left(\theta \cdot F(\zeta) - \phi_\lambda(\theta)\right), $$
where $\theta \cdot F(\zeta) = \sum_{i=1}^d \theta^i F_i(\zeta)$, and the subtractive λ-potential $\phi_\lambda(\theta)$ is specified by the normalization:
$$ 1 = \int p(\zeta|\theta)\, d\mu = \int \exp_\lambda\left(\theta \cdot F(\zeta) - \phi_\lambda(\theta)\right) d\mu. $$
This leads to:
$$ \frac{\partial \phi_\lambda}{\partial \theta^i} = \int \tilde{p}(\zeta|\theta)\, F_i(\zeta)\, d\mu, $$
with the escort transformation given by:
$$ \tilde{p}(\zeta|\theta) = \frac{(p(\zeta|\theta))^{1-\lambda}}{\int (p(\zeta|\theta))^{1-\lambda}\, d\mu}. $$
Clearly, when $\lambda \to 0$, we recover the regular exponential family (9). It was Tsallis who introduced the q-exponential family, where $q = 1 - \lambda$.
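Since $\phi_\lambda(\theta)$ is defined only implicitly by the normalization, it must in general be computed numerically. The sketch below (ours; the four-point sample space and statistic are assumed for illustration) solves for $\phi_\lambda$ by bisection and verifies that $D\phi_\lambda$ is the escort expectation:

```python
import numpy as np

lam, theta = 0.3, 0.6
F  = np.array([-1.0, 0.0, 0.5, 2.0])   # sufficient statistic on a 4-point space
mu = np.ones(4)                        # counting reference measure

def exp_l(t):
    return np.maximum(1 + lam * t, 0.0) ** (1 / lam)

def phi(th):
    # subtractive potential: solve  sum_i exp_lam(th F_i - phi) mu_i = 1  by bisection
    lo, hi = -10.0, 10.0               # the sum is decreasing in phi
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.sum(exp_l(th * F - mid) * mu) > 1 else (lo, mid)
    return lo

p = exp_l(theta * F - phi(theta)) * mu
escort = p ** (1 - lam) / np.sum(p ** (1 - lam))
h = 1e-5
dphi = (phi(theta + h) - phi(theta - h)) / (2 * h)
print(np.sum(p), dphi, np.sum(escort * F))   # 1, then d(phi)/d(theta) = escort mean
```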

5.2.2. Under Divisive Normalization

To deform the exponential family through divisive normalization, we use the smooth monotone function $\kappa_\lambda(\cdot)$ and define a parametric probability family of the form:
$$ \log p(\zeta|\vartheta) = \kappa_\lambda(\vartheta \cdot F(\zeta)) - \varphi_\lambda(\vartheta). $$
Note that we use the symbol $\vartheta$ to distinguish it from the parameter $\theta$ in the subtractive case. Here:
$$ \varphi_\lambda(\vartheta) = \log \int e^{\kappa_\lambda(\vartheta \cdot F(\zeta))}\, d\mu $$
is the divisive normalization function, and it was assumed that:
$$ \int e^{\kappa_\lambda(\vartheta \cdot F(\zeta))}\, d\mu < \infty $$
in the domain of $\vartheta$ (the natural parameter set). It is possible that the support of the density depends on the parameter $\vartheta$, as in the case of the q-exponential family; see [17]. To avoid technicalities, we assumed that the support of $p(\zeta|\vartheta)$ is independent of $\vartheta$.
Writing out $\kappa_\lambda(\cdot)$, the resulting family is:
$$ p(\zeta|\vartheta) = (1 + \lambda\, \vartheta \cdot F(\zeta))^{1/\lambda}\, e^{-\varphi_\lambda(\vartheta)}, $$
where the divisive λ-potential $\varphi_\lambda(\vartheta)$, given by:
$$ \varphi_\lambda(\vartheta) = \log \int (1 + \lambda\, \vartheta \cdot F(\zeta))^{1/\lambda}\, d\mu, $$
is finite on the parameter set. This family unifies the $F^{(\pm\alpha)}$-families introduced in [12].
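The subtractive and divisive normalizations can be reconciled explicitly. The following sketch (ours, reusing the assumed four-point family) builds the divisively normalized density and maps it to the subtractive form through the parameter and potential links of Proposition 5:

```python
import numpy as np

lam = 0.3
F = np.array([-1.0, 0.0, 0.5, 2.0])
vtheta = 0.25   # natural parameter in the divisive convention

def exp_l(t):
    return np.maximum(1 + lam * t, 0.0) ** (1 / lam)

# divisive potential (38) and density:
varphi = np.log(np.sum(exp_l(vtheta * F)))
p_div  = exp_l(vtheta * F) * np.exp(-varphi)

# reparameterize to the subtractive form via the links above:
theta = vtheta * np.exp(-lam * varphi)          # theta = vartheta e^{-lam varphi}
phi   = (1 - np.exp(-lam * varphi)) / lam       # phi = gamma_{-lam}(varphi)
p_sub = exp_l(theta * F - phi)

print(np.max(np.abs(p_div - p_sub)))   # ~ 0: the two normalizations coincide
```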

5.3. λ -Mixture Family

We next define a mixture-type family dual to the λ-exponential family, in a way analogous to how the exponential family is dual to the mixture family. The form of the family is justified by its compatibility with the λ-duality.
Definition 5
(λ-mixture family). Let $\lambda \neq 0, 1$ be given. The λ-mixture family with respect to a fixed set of densities $P_0(\zeta), P_1(\zeta), \ldots, P_d(\zeta)$ is defined by:
$$ p^{(\lambda)}(\zeta|\eta) = \frac{1}{Z_\lambda(\eta)}\left(\sum_{i=0}^d \eta_i\, \tilde{P}_i(\zeta)\right)^{1/(1-\lambda)}, $$
where $\eta = (\eta_1, \ldots, \eta_d)$ is the mixture parameter satisfying $0 \leq \eta_i \leq 1$ and $\eta_0 = 1 - \sum_{i=1}^d \eta_i > 0$. Here, $\tilde{P}_i$, $i = 0, 1, \ldots, d$, denotes the escort transformation, as given by (36), of the given $P_i$'s:
$$ \tilde{P}_i(\zeta) = \frac{(P_i(\zeta))^{1-\lambda}}{\int (P_i(\zeta))^{1-\lambda}\, d\mu}, $$
where the denominator is assumed to exist, and $Z_\lambda(\eta)$ represents the integral:
$$ Z_\lambda(\eta) = \int \left(\sum_{i=0}^d \eta_i\, \tilde{P}_i(\zeta)\right)^{1/(1-\lambda)} d\mu, $$
which is assumed to converge for all $\eta$ and to be differentiable under the integral sign.
Denote:
$$ C_i = \int (P_i(\zeta))^{1-\lambda}\, d\mu $$
and:
$$ \tilde{\eta}_i = \frac{1}{h_\lambda(\eta)}\, \frac{\eta_i}{C_i}, $$
where:
$$ h_\lambda(\eta) = \sum_{i=0}^d \frac{\eta_i}{C_i}. $$
Then, $0 \leq \tilde{\eta}_i \leq 1$ and $\sum_{i=0}^d \tilde{\eta}_i = 1$. We can now express $p^{(\lambda)}$ in $\tilde{\eta}$:
$$ \begin{aligned} p^{(\lambda)} &= \frac{1}{Z_\lambda(\eta)}\left(\sum_{i=0}^d \eta_i\, \tilde{P}_i(\zeta)\right)^{1/(1-\lambda)} = \frac{(h_\lambda(\eta))^{1/(1-\lambda)}}{Z_\lambda(\eta)}\left(\sum_{i=0}^d \tilde{\eta}_i\, (P_i(\zeta))^{1-\lambda}\right)^{1/(1-\lambda)} \\ &= e^{\frac{1}{1-\lambda}\log h_\lambda(\eta) - \log Z_\lambda(\eta)}\left[\left(1 - \sum_{i=1}^d \tilde{\eta}_i\right)(P_0(\zeta))^{1-\lambda} + \sum_{i=1}^d \tilde{\eta}_i\, (P_i(\zeta))^{1-\lambda}\right]^{1/(1-\lambda)} \\ &= e^{\frac{1}{1-\lambda}\log h_\lambda(\eta) - \log Z_\lambda(\eta)}\left[1 + \sum_{i=1}^d \tilde{\eta}_i\, \frac{(P_i(\zeta))^{1-\lambda} - (P_0(\zeta))^{1-\lambda}}{(P_0(\zeta))^{1-\lambda}}\right]^{1/(1-\lambda)} P_0(\zeta). \end{aligned} $$
Setting:
$$ F_i(\zeta) = \frac{1}{1-\lambda}\left[\left(\frac{P_i(\zeta)}{P_0(\zeta)}\right)^{1-\lambda} - 1\right], $$
with $d\nu = P_0(\zeta)\, d\mu$, the density of the λ-mixture family $p^{(\lambda)}$ with respect to the new measure $\nu$ now has the form:
$$ p^{(\lambda)}(\zeta|\tilde{\eta}) = \left(1 + (1-\lambda)\, \tilde{\eta} \cdot F(\zeta)\right)^{1/(1-\lambda)}\, e^{-\psi_{1-\lambda}(\tilde{\eta})}, $$
where $\psi_{1-\lambda}(\tilde{\eta}) = \log Z_\lambda(\eta) - \frac{1}{1-\lambda}\log h_\lambda(\eta)$. Thus, we showed the following:
Proposition 6
(Relation between λ-exponential and λ-mixture families). Suppose $\lambda \neq 0, 1$. A λ-mixture family with pure densities:
$$ P(\zeta) = \{P_0(\zeta), P_1(\zeta), \ldots, P_d(\zeta)\} $$
becomes a $(1-\lambda)$-exponential family with the vector of random functions:
$$ F(\zeta) = \{F_1(\zeta), \ldots, F_d(\zeta)\} $$
after a transformation of the dominating measure $d\mu \to d\nu = P_0(\zeta)\, d\mu$ and of the random variables $P(\zeta) \to F(\zeta)$:
$$ F_i(\zeta) = \frac{1}{1-\lambda}\left[\left(\frac{P_i(\zeta)}{P_0(\zeta)}\right)^{1-\lambda} - 1\right] = \log_{1-\lambda}\frac{P_i(\zeta)}{P_0(\zeta)}, $$
and a reparameterization $\eta \to \tilde{\eta}$:
$$ \tilde{\eta}_i = \frac{\eta_i}{h_\lambda\, C_i} \;\Longleftrightarrow\; \eta_i = \tilde{\eta}_i\, (h_\lambda\, C_i), $$
with:
$$ h_\lambda = \sum_{i=0}^d \frac{\eta_i}{C_i} = \left(\sum_{i=0}^d \tilde{\eta}_i\, C_i\right)^{-1}. $$
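Proposition 6 can be confirmed on a finite sample space. In this sketch of ours (three arbitrary densities on a three-point space), the mixture form of the density and its deformed-exponential form coincide after the reparameterization:

```python
import numpy as np

lam = 0.3
P = np.array([[0.5, 0.3, 0.2],   # P_0
              [0.2, 0.5, 0.3],   # P_1
              [0.1, 0.2, 0.7]])  # P_2 -- three densities on a 3-point space
eta = np.array([0.5, 0.3, 0.2])  # mixture weights (eta_0, eta_1, eta_2)

C  = np.sum(P ** (1 - lam), axis=1)           # C_i = sum P_i^{1-lam}
Pt = P ** (1 - lam) / C[:, None]              # escort densities P~_i
m  = (eta @ Pt) ** (1 / (1 - lam))
p_mix = m / np.sum(m)                         # lambda-mixture density

h_l  = np.sum(eta / C)
etat = eta / (h_l * C)                        # reparameterized weights, sum to 1
Fi   = ((P[1:] / P[0]) ** (1 - lam) - 1) / (1 - lam)   # random functions F_i
base = (1 + (1 - lam) * (etat[1:] @ Fi)) ** (1 / (1 - lam)) * P[0]
p_exp = base / np.sum(base)                   # deformed-exponential form, d(nu) = P_0 d(mu)
print(np.max(np.abs(p_mix - p_exp)))          # ~ 0
```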

5.4. Potential Functions as Rényi Entropies

We now show that our λ -duality framework is naturally compatible with the λ -exponential and λ -mixture families, with Rényi entropy and Rényi divergence replacing Shannon entropy and Kullback–Leibler divergence. In what follows, we assume λ < 1 .
Proposition 7
(For the λ-exponential family). With respect to the λ-exponential family defined by (37), with the divisive potential function $\varphi_\lambda$ given by (38), we have:
(i) 
$\varphi_\lambda(\vartheta)$ is λ-convex. Moreover, $1 - \lambda\, D\varphi_\lambda(\vartheta) \cdot \vartheta > 0$.
(ii) 
The λ-conjugate variable $\eta = D^{(\lambda)}\varphi_\lambda(\vartheta) = D\phi_\lambda(\theta)$ is the escort expectation:
$$ \eta = \frac{\int (p(\zeta|\theta))^{1-\lambda}\, F(\zeta)\, d\mu}{\int (p(\zeta|\theta))^{1-\lambda}\, d\mu} = \int \tilde{p}(\zeta|\theta)\, F(\zeta)\, d\mu. $$
(iii) 
The λ-conjugate function $\psi_\lambda(\eta)$ with respect to $\varphi_\lambda(\vartheta)$ is given by:
$$ \psi_\lambda(\eta) = -H_\lambda^{\mathrm{Rényi}}[p(\cdot|\vartheta)]. $$
(iv) 
The λ-logarithmic divergence is the Rényi divergence:
$$ L_{\lambda,\varphi_\lambda}(\vartheta, \vartheta') = H_\lambda^{\mathrm{Rényi}}[p(\cdot|\vartheta')\,\|\,p(\cdot|\vartheta)]. $$
Proposition 8
(For the λ-mixture family). With respect to the λ-mixture family given by (39), with its potential function $\psi_\lambda(\eta)$ given by:
$$ \psi_\lambda(\eta) = \frac{1-\lambda}{\lambda}\log \int \left(\sum_{i=0}^d \eta_i\, \tilde{P}_i\right)^{1/(1-\lambda)} d\mu = \frac{1-\lambda}{\lambda}\log Z_\lambda(\eta), $$
we have:
(i) 
The potential function $\psi_\lambda(\eta)$ is a λ-convex function of $\eta$.
(ii) 
The potential function $\psi_\lambda(\eta)$ is given by:
$$ \psi_\lambda(\eta) = -H_\lambda^{\mathrm{Rényi}}[p(\cdot|\eta)]. $$
(iii) 
The λ-logarithmic divergence is the Rényi divergence:
$$ L_{\lambda,\psi_\lambda}(\eta, \eta') = H_\lambda^{\mathrm{Rényi}}[p(\cdot|\eta')\,\|\,p(\cdot|\eta)]. $$
The proofs of the above Proposition 7 (about the λ -exponential family) and Proposition 8 (about the λ -mixture family) can be found in [17].
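Proposition 7(iv) lends itself to a direct numerical check. The sketch below (ours, on the assumed four-point family) compares the λ-logarithmic divergence of the divisive potential with the Rényi divergence of the corresponding densities:

```python
import numpy as np

lam = 0.3
F = np.array([-1.0, 0.0, 0.5, 2.0])

exp_l  = lambda t: np.maximum(1 + lam * t, 0.0) ** (1 / lam)
varphi = lambda v: np.log(np.sum(exp_l(v * F)))        # divisive potential (38)
dens   = lambda v: exp_l(v * F) * np.exp(-varphi(v))   # lambda-exponential density

v, vp = 0.25, 0.1
h = 1e-5
dvp = (varphi(vp + h) - varphi(vp - h)) / (2 * h)      # D varphi at vartheta'
L = varphi(v) - varphi(vp) - np.log(1 + lam * dvp * (v - vp)) / lam
renyi = -np.log(np.sum(dens(vp) ** (1 - lam) * dens(v) ** lam)) / lam
print(L, renyi)   # Proposition 7(iv): both values agree
```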

6. Summary and Conclusions

Our paper summarizes a canonical approach to deforming exponential and mixture families and the associated dually flat Hessian geometry. The λ -exponential family we introduced has two parameterizations (35) and (37):
$$ p^{(\lambda)}(\zeta \mid \cdot) = \exp_\lambda(\vartheta \cdot F(\zeta)) \, e^{-\varphi_\lambda(\vartheta)} = \exp_\lambda(\theta \cdot F(\zeta) - \phi_\lambda(\theta)). $$
The two expressions reflect subtractive and divisive normalizations, respectively: a typical example of the former is the q-exponential family with the associated Tsallis entropy, whereas an example of the latter is the $F^{(\pm\alpha)}$-family with the associated Rényi entropy. These two versions of deformation of an exponential family are two sides of the same coin; furthermore, the λ-exponential family is also linked to the λ-mixture family, when $\lambda \ne 0, 1$, via a reparameterization of the random functions $F(\zeta)$ above.
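The agreement of the two normalizations can be made concrete. Matching the two displays above, under the convention $\exp_\lambda(t) = (1 + \lambda t)^{1/\lambda}$, forces $\theta = \vartheta \, e^{-\lambda \varphi_\lambda(\vartheta)}$ and $\phi_\lambda(\theta) = (1 - e^{-\lambda \varphi_\lambda(\vartheta)})/\lambda$, which is exactly the rescaling pattern described below. The following sketch is our own (the parameter and potential relations are our derivation, not formulas quoted from the paper):

```python
# Our own sketch (convention assumed: exp_lam(t) = (1 + lam*t)^(1/lam)) showing
# that the divisive and subtractive normalizations give the same density, with
# theta = vartheta * e^(-lam*phi) and phi_sub = (1 - e^(-lam*phi)) / lam.
import numpy as np

lam, n, d = 0.4, 6, 2
rng = np.random.default_rng(4)
F = rng.normal(size=(d, n))
vtheta = np.array([0.15, -0.1])              # small, so 1 + lam*vtheta.F > 0

def exp_lam(t):
    return (1.0 + lam * t) ** (1.0 / lam)

phi = np.log(exp_lam(vtheta @ F).sum())      # divisive potential
p_div = exp_lam(vtheta @ F) * np.exp(-phi)

theta = vtheta * np.exp(-lam * phi)          # rescaled parameter
phi_sub = (1.0 - np.exp(-lam * phi)) / lam   # subtractive potential
p_sub = exp_lam(theta @ F - phi_sub)

print(np.max(np.abs(p_div - p_sub)))         # ~1e-16: identical densities
```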
The coincidence of these two parameterizations of the deformed family is associated with the λ-duality, which is the main focus of our exposition. The λ-duality is a “deformation” (see Table 1) of the usual Legendre duality reviewed in Section 3.1. In a nutshell, instead of convex functions, we work with λ-convex functions f, namely those for which $\frac{1}{\lambda}(e^{\lambda f} - 1)$ is convex, for a fixed $\lambda \ne 0$. Furthermore, instead of the convex conjugate, we use the λ-conjugate given by:
$$ f^{(\lambda)}(u) = \sup_x \left\{ \frac{1}{\lambda} \log(1 + \lambda \, x \cdot u) - f(x) \right\}. $$
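For intuition, the λ-conjugate can be computed by brute force. In the sketch below (our own), we take $f(x) = x^2$, which is λ-convex for $\lambda = 1$ since $e^{x^2}$ is convex, and evaluate the supremum on a grid; the resulting Fenchel–Young-type inequality $\frac{1}{\lambda} \log(1 + \lambda \, x u) \le f(x) + f^{(\lambda)}(u)$ then holds for every x, with equality at the maximizer:

```python
# A brute-force sketch (ours) of the lambda-conjugate, for f(x) = x^2, which is
# lambda-convex for lam = 1 since e^(x^2) is convex. The grid is chosen so that
# 1 + lam*x*u stays positive for the u used below.
import numpy as np

lam = 1.0
f = lambda x: x ** 2
xs = np.linspace(-0.999, 3.0, 400001)

def f_conj(u):
    """f^(lam)(u) = sup_x { (1/lam) log(1 + lam*x*u) - f(x) } over the grid."""
    t = 1.0 + lam * xs * u
    vals = np.where(t > 0, np.log(np.clip(t, 1e-300, None)) / lam - f(xs), -np.inf)
    i = int(np.argmax(vals))
    return vals[i], xs[i]                    # conjugate value and maximizer x*(u)

u = 1.0
fc, xstar = f_conj(u)
# kappa_lam(x*u) <= f(x) + f^(lam)(u) for all x, with equality at x = x*(u):
for xv in (0.0, 0.5, xstar, 2.0):
    print(xv, np.log(1.0 + lam * xv * u) / lam - f(xv) <= fc + 1e-12)
```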
The expression of the λ-duality:
$$ \kappa_\lambda(x \cdot u) = f(x) + f^{(\lambda)}(u) $$
turns out to be a rewriting of the Legendre duality between $\tilde{x}$ and u:
$$ \tilde{x} \cdot u = g(\tilde{x}) + g^*(u), \quad \text{with} \quad \tilde{x} = x \, e^{-\lambda f(x)}; $$
and a rewriting of the Legendre duality between x and $\tilde{u}$:
$$ x \cdot \tilde{u} = \tilde{g}^*(x) + \tilde{g}(\tilde{u}), \quad \text{with} \quad \tilde{u} = u \, e^{-\lambda f^{(\lambda)}(u)}. $$
Therefore, the λ-duality is in essence the Legendre duality with a λ-dependent rescaling of the variables:
$$ \tilde{x} = x \, e^{-\lambda f(x)} \quad \Longleftrightarrow \quad x = \frac{\tilde{x}}{1 - \lambda g(\tilde{x})} $$
and:
$$ \tilde{u} = u \, e^{-\lambda f^{(\lambda)}(u)} \quad \Longleftrightarrow \quad u = \frac{\tilde{u}}{1 - \lambda \tilde{g}(\tilde{u})}. $$
The two pairs of convex functions $(g, g^*)$ and $(\tilde{g}, \tilde{g}^*)$ are linked with the pair of λ-convex functions $(f, f^{(\lambda)})$ via:
$$ g(\tilde{x}) = G_{-\lambda, f}(x) = \gamma_{-\lambda}(f(x)) = (\gamma_{-\lambda} \circ f)(x); $$
$$ g^*(u) = G_{\lambda, f^{(\lambda)}}(u) = \gamma_\lambda(f^{(\lambda)}(u)) = (\gamma_\lambda \circ f^{(\lambda)})(u); $$
$$ \tilde{g}(\tilde{u}) = G_{-\lambda, f^{(\lambda)}}(u) = \gamma_{-\lambda}(f^{(\lambda)}(u)) = (\gamma_{-\lambda} \circ f^{(\lambda)})(u); $$
$$ \tilde{g}^*(x) = G_{\lambda, f}(x) = \gamma_\lambda(f(x)) = (\gamma_\lambda \circ f)(x). $$
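These identities can be verified mechanically: for any pair $(x, u)$ satisfying $\frac{1}{\lambda}\log(1 + \lambda \, x \cdot u) = f(x) + f^{(\lambda)}(u)$, the relation $\tilde{x} \cdot u = \gamma_{-\lambda}(f(x)) + \gamma_\lambda(f^{(\lambda)}(u))$ follows by pure algebra, where we take $\gamma_\lambda(t) = (e^{\lambda t} - 1)/\lambda$ (the convention assumed in our reconstruction). The sketch below, our own and continuing the example of the previous code block, confirms this numerically:

```python
# Our own verification of the rescaling identities: at a conjugate pair (x, u),
# with x~ = x e^(-lam f(x)), g = gamma_(-lam) o f and g* = gamma_lam o f^(lam),
# the lambda-duality becomes the Legendre relation x~ * u = g(x~) + g*(u).
import numpy as np

lam = 1.0
f = lambda x: x ** 2                         # lambda-convex for lam = 1
xs = np.linspace(-0.999, 3.0, 400001)

u = 1.0
vals = np.log(1.0 + lam * xs * u) / lam - f(xs)   # 1 + lam*x*u > 0 on this grid
i = int(np.argmax(vals))
x, fc = xs[i], vals[i]                       # x = x*(u), fc = f^(lam)(u)

gamma = lambda a, t: (np.exp(a * t) - 1.0) / a    # gamma_a(t) = (e^(a t) - 1)/a

x_tilde = x * np.exp(-lam * f(x))            # rescaled variable
lhs = x_tilde * u
rhs = gamma(-lam, f(x)) + gamma(lam, fc)     # g(x~) + g*(u)
print(lhs, rhs)                              # equal to machine precision

# The inverse rescaling x = x~ / (1 - lam * g(x~)) recovers x exactly:
print(x, x_tilde / (1.0 - lam * gamma(-lam, f(x))))
```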
The λ-duality leads to nontrivial mathematical questions, e.g., developing a differential calculus in the spirit of Rockafellar [35], analogous to that for functions of Legendre type. Some of the derivations in the current paper were heuristic; a complete and rigorous development is left for future research.
Coming back to the probability families, we first verified that the subtractive potential $\phi_\lambda(\theta)$ is convex in $\theta$ and that the divisive potential $\varphi_\lambda(\vartheta)$ is λ-convex in $\vartheta$. Subtractive normalization using $\phi_\lambda(\theta)$ is associated with the regular Legendre duality, whereas divisive normalization using $\varphi_\lambda(\vartheta)$ is associated with the λ-duality. This explains how the Rényi entropy (arising in the latter) is distinct from the Tsallis entropy (arising in the former): they are intimately connected to the λ-duality (for $\lambda \ne 0$) and to the Legendre duality, respectively. As λ is the parameter that controls the curvature in the Riemannian geometry of these probability families (see [12]), our framework provides a simple parametric deformation from the dually flat geometry (of the exponential model) to the dually projectively flat geometry (of the λ-exponential model). We expect that this framework will generate new insights into the applications of the q-exponential family and related concepts in statistical physics and information science.

Author Contributions

Conceptualization, J.Z.; Formal analysis, J.Z. and T.-K.L.W.; Investigation, T.-K.L.W.; Writing—original draft, J.Z.; Writing—review & editing, J.Z. and T.-K.L.W. All authors have read and agreed to the published version of the manuscript.

Funding

J.Z. is supported by the United States Air Force Office of Scientific Research, grant number AFOSR-FA9550-19-1-0213. T.-K.L.W. is supported by NSERC Discovery Grant RGPIN-2019-04419 and a Connaught New Researcher Award.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amari, S.I.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society: Providence, RI, USA, 2000; Volume 191.
  2. Ay, N.; Jost, J.; Vân Lê, H.; Schwachhöfer, L. Information geometry and sufficient statistics. Probab. Theory Relat. Fields 2015, 162, 327–364.
  3. Zhang, J.; Khan, G. From Hessian to Weitzenböck: Manifolds with torsion-carrying connections. Inf. Geom. 2019, 2, 77–98.
  4. Zhang, J.; Khan, G. Statistical mirror symmetry. Differ. Geom. Its Appl. 2020, 73, 101678.
  5. Naudts, J. Estimators, escort probabilities, and ϕ-exponential families in statistical physics. J. Inequal. Pure Appl. Math. 2004, 5, 102.
  6. Naudts, J.; Zhang, J. Rho–tau embedding and gauge freedom in information geometry. Inf. Geom. 2018, 1, 79–115.
  7. Murata, N.; Takenouchi, T.; Kanamori, T.; Eguchi, S. Information geometry of U-Boost and Bregman divergence. Neural Comput. 2004, 16, 1437–1481.
  8. Zhang, J. Divergence function, duality, and convex analysis. Neural Comput. 2004, 16, 159–195.
  9. Amari, S.I.; Ohara, A. Geometry of q-exponential family of probability distributions. Entropy 2011, 13, 1170–1185.
  10. Amari, S.I.; Ohara, A.; Matsuzoe, H. Geometry of deformed exponential families: Invariant, dually-flat and conformal geometries. Phys. A Stat. Mech. Appl. 2012, 391, 4308–4319.
  11. Ohara, A.; Matsuzoe, H.; Amari, S.I. Conformal geometry of escort probability and its applications. Mod. Phys. Lett. B 2012, 26, 1250063.
  12. Wong, T.K.L. Logarithmic divergences from optimal transport and Rényi geometry. Inf. Geom. 2018, 1, 39–78.
  13. Pal, S.; Wong, T.K.L. Multiplicative Schrödinger problem and the Dirichlet transport. Probab. Theory Relat. Fields 2020, 178, 613–654.
  14. Pal, S.; Wong, T.K.L. The geometry of relative arbitrage. Math. Financ. Econ. 2016, 10, 263–293.
  15. Pal, S.; Wong, T.K.L. Exponentially concave functions and a new information geometry. Ann. Probab. 2018, 46, 1070–1113.
  16. Wong, T.K.L. Information Geometry in Portfolio Theory. In Geometric Structures of Information; Springer: Berlin/Heidelberg, Germany, 2019; pp. 105–136.
  17. Wong, T.K.L.; Zhang, J. Tsallis and Rényi deformations linked via a new λ-duality. arXiv 2021, arXiv:2107.11925.
  18. Kurose, T. On the divergences of 1-conformally flat statistical manifolds. Tohoku Math. J. Second Ser. 1994, 46, 427–433.
  19. Wong, T.K.L.; Yang, J. Logarithmic divergences: Geometry and interpretation of curvature. In International Conference on Geometric Science of Information; Springer: Berlin/Heidelberg, Germany, 2019; pp. 413–422.
  20. Amari, S.I. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016.
  21. Eguchi, S. Geometry of minimum contrast. Hiroshima Math. J. 1992, 22, 631–647.
  22. Matumoto, T. Any statistical manifold has a contrast function—On the C³-functions taking the minimum at the diagonal of the product manifold. Hiroshima Math. J. 1993, 23, 327–332.
  23. Nagaoka, H.; Amari, S.I. Differential Geometry of Smooth Families of Probability Distributions; Technical Report METR 82-7; University of Tokyo: Tokyo, Japan, 1982.
  24. Zhang, J. Referential duality and representational duality on statistical manifolds. In Proceedings of the Second International Symposium on Information Geometry and Its Applications, Tokyo, Japan, 12–16 December 2005; Volume 1216, pp. 58–67.
  25. Zhang, J. Nonparametric information geometry: From divergence function to referential-representational biduality on statistical manifolds. Entropy 2013, 15, 5384–5418.
  26. Blondel, M.; Martins, A.F.; Niculae, V. Learning with Fenchel–Young losses. J. Mach. Learn. Res. 2020, 21, 1–69.
  27. Naudts, J. Generalized exponential families and associated entropy functions. Entropy 2008, 10, 131–149.
  28. Naudts, J. Generalized Thermostatistics; Springer: Berlin/Heidelberg, Germany, 2011.
  29. Zhang, J. On monotone embedding in information geometry. Entropy 2015, 17, 4485–4499.
  30. Eguchi, S. Information geometry and statistical pattern recognition. Sugaku Expos. 2006, 19, 197–216.
  31. Matsuzoe, H. Hessian structures on deformed exponential families and their conformal structures. Differ. Geom. Its Appl. 2014, 35, 323–333.
  32. Newton, N.J. An infinite-dimensional statistical manifold modelled on Hilbert space. J. Funct. Anal. 2012, 263, 1661–1681.
  33. Montrucchio, L.; Pistone, G. Deformed exponential bundle: The linear growth case. In International Conference on Geometric Science of Information; Springer: Berlin/Heidelberg, Germany, 2017; pp. 239–246.
  34. De Andrade, L.H.; Vieira, F.L.; Cavalcante, C.C. On Normalization Functions and ϕ-Families of Probability Distributions. In Progress in Information Geometry: Theory and Applications; Springer Nature Switzerland AG: Cham, Switzerland, 2021.
  35. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  36. Villani, C. Topics in Optimal Transportation; American Mathematical Society: Providence, RI, USA, 2003.
  37. Villani, C. Optimal Transport: Old and New; Springer: Berlin/Heidelberg, Germany, 2008.
  38. Wong, T.K.L.; Yang, J. Pseudo-Riemannian geometry encodes information geometry in optimal transport. Inf. Geom. 2021, 1–29.
  39. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  40. Tsallis, C. What are the numbers that experiments provide? Quim. Nova 1994, 17, 468–471.
  41. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; The Regents of the University of California: Oakland, CA, USA, 1961.
  42. Van Erven, T.; Harremos, P. Rényi divergence and Kullback–Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
Figure 1. Illustration of the λ-logarithmic divergence. Top: $\lambda = 1$ and $f(x) = -x(10 - x)$. Bottom: $\lambda = 1$ and $f(x) = 2 \log x$. In both cases, $x = 4$ and $x' = 8$, and we plot the functions on the interval $(2, 9)$. Note that the first-order logarithmic approximation (dashed grey curve) supports the graph of f from below.
Entropy 24 00193 g001
Table 1. Generalization of objects from the Hessian (dually flat) geometry to the λ-deformed (dually projectively flat) geometry.

Objects | Conventional ($\lambda = 0$) | λ-Deformed
transformation | Legendre | λ-Legendre
conjugation | $\sup_x (x \cdot u - f(x))$ | $\sup_x (\kappa_\lambda(x \cdot u) - f(x))$
potentials | convex | λ-convex
associated divergence | Bregman | λ-logarithmic
Riemannian metric | Hessian | conformal Hessian
affine connections | dually flat | dually projectively flat
curvature of connections | 0 | constant $\lambda \ne 0$
biorthogonal coordinates | $(x, u)$ | $(\tilde{x}, u)$ or $(x, \tilde{u})$
probability family | exponential | λ-exponential
probability family | mixture | λ-mixture
associated divergence | Kullback–Leibler | Rényi
associated entropy | Shannon | Rényi/Tsallis
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
