Article

On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures

by Steeve Zozor 1,*, David Puertas-Centeno 2,3 and Jesús S. Dehesa 2,3
1 GIPSA-Lab, Université Grenoble Alpes, 11 rue des Mathématiques, 38420 Grenoble, France
2 Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, 18071 Granada, Spain
3 Departamento de Física Atómica, Molecular y Nuclear, Universidad de Granada, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Entropy 2017, 19(9), 493; https://doi.org/10.3390/e19090493
Submission received: 21 August 2017 / Revised: 8 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
(This article belongs to the Special Issue Foundations of Quantum Mechanics)

Abstract: Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal bounding from below. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality was already proposed by Bercher and Lutwak; it is a particular case of the general one, in which the three parameters are linked, allowing one to determine the sharp lower bound and the associated probability density of minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We determine as well the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function.
Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential.

1. Introduction

The definition of complexity measures to quantify the internal disorder of physical systems is an important and challenging task in science, basically because of the many facets of the notion of disorder [1,2,3,4,5,6,7,8,9,10,11,12]. It seems clear that a unique measure is unable to capture the essence of such a vague notion. In the scalar continuous-state context we consider in this paper, many complexity measures based on the probability distribution describing a system have been proposed in the literature, attempting to capture simultaneously the spreading (global) and the oscillatory (local) behaviors of such a distribution [10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. They mostly depend on entropy-like quantities such as the Shannon entropy [30], the Fisher information [31] and their generalizations. The measures of complexity of a probability density ρ proposed up until now, say C[ρ], built upon two information-theoretic quantities, share several properties (see e.g., [32]), such as invariance under translation or under a scaling factor (i.e., for any x₀ ∈ ℝ and σ > 0, for ρ̃(x) = (1/σ) ρ((x − x₀)/σ), they satisfy C[ρ̃] = C[ρ]). For instance, the disorder should not change under a displacement of the (reference-independent) center of mass. Moreover, all the proposed measures are also bounded from below, which means that there exists, in a certain sense, a distribution of minimal complexity, namely the probability density that reaches the lower bound.
In this paper, we generalize the complexity measures of global-local character published in the literature (see e.g., [10,12,23,24,26,27,29]) to grasp both the spreading and the fluctuations of a probability density ρ by introducing a three-parametric Fisher–Rényi complexity, which involves the Rényi entropy [33] and a generalized Fisher information [34,35,36]. The product of these two generalized information-theoretic tools, which is translation and scaling invariant as well as bounded from below, can be used as a generalized complexity measure of ρ.
Historically, the first inequality involving the Shannon entropy and the Fisher information was proved by Stam [37] in the form
F[ρ] N[ρ] ≥ 2πe,  (1)
where F and N are, respectively, the (nonparametric) Fisher information of ρ ,
F[ρ] = ∫_ℝ ( (d/dx) log ρ(x) )² ρ(x) dx  (2)
and the Shannon entropy power of ρ , i.e., an exponential of the Shannon entropy H,
N[ρ] = exp( 2 H[ρ] ),  where  H[ρ] = −∫_ℝ ρ(x) log ρ(x) dx.  (3)
In fact, the Fisher information concerns a density parametrized by a parameter θ, and the derivative is taken with respect to θ; when this parameter is a position parameter, this leads to the nonparametric Fisher information. Concerning the entropy power, more rigorously, a factor 1/(2πe) multiplies N, and the bound in the Stam inequality is then unity. This factor does not change anything for our purpose; hence, for the sake of simplicity, we omit it. The lower bound in Inequality (1) is achieved by the Gaussian distribution ρ(x) ∝ exp(−x²/2), up to a translation and a scaling factor (where ∝ means “proportional to”). In other words, the so-called Fisher–Shannon complexity C[ρ] = F[ρ] N[ρ], which is translation and scale invariant, is never smaller than 2πe (and thus cannot be zero), and the distribution of lowest complexity is the Gaussian, exhibiting (also) through this measure its fundamental character. The proof of this inequality relies on the entropy power inequality and on the de Bruijn identity, two information-theoretic inequalities, both being saturated in the Gaussian context [37,38].
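The saturation of Inequality (1) by the Gaussian can be checked numerically. The following sketch evaluates F[ρ] and N[ρ] of Equations (2) and (3) by simple quadrature on a grid; the helper names are ours, not from the paper.

```python
# Numerical sanity check of the Stam inequality F[rho] * N[rho] >= 2*pi*e,
# which is saturated by the Gaussian. Quadrature is a plain Riemann sum on a
# fine grid; the density is kept strictly positive to allow log(rho).
import numpy as np

x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]

def fisher(rho):
    """Nonparametric Fisher information: int ((log rho)')^2 rho dx."""
    dlog = np.gradient(np.log(rho), dx)
    return np.sum(dlog**2 * rho) * dx

def shannon_power(rho):
    """Shannon entropy power N[rho] = exp(2 H[rho])."""
    H = -np.sum(rho * np.log(rho)) * dx
    return np.exp(2.0 * H)

sigma = 1.7
gauss = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
product = fisher(gauss) * shannon_power(gauss)
print(product, 2.0 * np.pi * np.e)  # the two values coincide: the bound is saturated
```

Replacing the Gaussian by any other density (e.g., a Laplace density) yields a strictly larger product, in agreement with Inequality (1).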
Although introduced, respectively, in the estimation context through the Cramér–Rao bound [31,39,40] and in communication theory through the coding theorem of Shannon [30,38], these quantities have found applications in physics, as previously mentioned (and also in the earlier papers [41,42] and that of Stam). In particular, the analysis of a signal with these measures was proposed by Vignat and Bercher [43], and the Fisher–Shannon complexity C[ρ] = F[ρ] N[ρ] is widely applied in atomic physics or quantum mechanics, for instance [25,26,44,45,46,47].
Recently, the Stam inequality was extended by substituting the Shannon entropy with the Rényi entropies (a family of entropies characterized by a parameter playing the role of a focus [33]), and the Fisher information with a generalized two-parametric family of Fisher informations introduced in [34,35,36]. As we will see later on, this extended inequality involves, however, only two free parameters, because one of the two Fisher parameters is linked to the Rényi one. This constraint is imposed so as to determine the sharp bound of the inequality and the minimizers within the framework of the (stretched) Tsallis distributions [48,49]. Thus, this extended inequality again allows one to define a complexity measure, based on this generalized Fisher information and the Rényi entropy power [27].
In this paper, we study the full three-parametric Fisher–Rényi complexity, disconnecting the two parameters tuning the extended Fisher information from the parameter tuning the Rényi entropy. Like Bercher, we use an approach based on the Gagliardo–Nirenberg inequality. This inequality allows for proving the existence of a lower bound of the complexity when the parameters are decoupled, in a certain range. The minimizers are then only implicitly known, as solutions of a nonlinear equation (or through a complicated series of integrations and inversions of nonlinear functions). Moreover, the sharp bound of the associated extended Stam inequality is explicitly known only once the minimizers have been determined. We propose here an indirect approach allowing us (i) to extend a step further the domain where the Stam inequality holds (or where the complexity is non-trivially lower-bounded); (ii) to determine explicitly the minimizers; and (iii) to find the sharp bound, regardless of the knowledge of the minimizers.
The structure of the paper is the following. In Section 2, we introduce both the λ-dependent Rényi entropy power and the (p, β)-Fisher information, thus generalizing the usual (i.e., translationally invariant) Fisher information. Then, we propose a complexity measure based on these two information quantities, the (p, β, λ)-Fisher–Rényi complexity, and we study its fundamental properties regarding the invariance under translation and scaling transformations and, above all, the universal bounding from below. In particular, we come back briefly to the results of Lutwak [34] or of Bercher [35] concerning the sharpness of the bound and the minimizers, derived only when the three parameters belong to a two-dimensional manifold, finding that our results remain valid in a domain slightly wider than theirs. In Section 3, the core of the paper, we come back to the lower bound (or to the extended Stam inequality), dealing with a wide three-dimensional domain. In this extended domain, which includes that of the previous section, we are able to derive explicitly the minimizers and the sharp lower bound, regardless of the knowledge of the minimizers. In order to do this, we introduce a special nonlinear stretching of the state, leading to the so-called differential-escort distribution [50]. This geometrical deformation allows us to start from the Bercher–Lutwak inequality and to introduce a supplementary degree of freedom so as to decouple the parameters (in a certain range). This approach is the key point for the determination of the extended domain where the complexity is bounded from below (the generalized Stam inequality). Moreover, we provide an explicit expression for the densities which minimize this complexity, an expression involving the inverse incomplete beta function.
In Section 4, we apply the previous results to some relevant multidimensional physical systems subject to a central potential, whose quantum-mechanically allowed stationary states are described by wave functions that factorize into a potential-dependent radial part and a common spherical part. Focusing on the radial part, we calculate the three-parametric complexity of the two main prototypes of d-dimensional physical systems, the harmonic (i.e., oscillator-like) and hydrogenic systems, for various quantum-mechanical states and dimensionalities. Finally, three appendices contain the details of the proofs of various propositions of the paper.

2. (p, β, λ)-Fisher–Rényi Complexity and the Extended Stam Inequality

In this section, we first review the extension of the Stam inequality based on the efforts of Lutwak et al. and Bercher [34,35,36] or, more generally, on those of Agueh [51,52]. To this aim, we introduce a three-parametric Fisher–Rényi complexity, showing its scaling and translation invariance and its non-trivial bounding from below. We then come back to the results of Lutwak or Bercher concerning the determination of the sharp bound and of the minimizers of the associated complexity, where a constraint on the parameters was imposed. Indeed, the constraint they imposed can be slightly relaxed, as we will see in this section.

2.1. Rényi Entropy, Extended Fisher Information and Rényi–Fisher Complexity

Let us begin with the definitions of the following information-theoretic quantities of the probability density ρ: the Rényi entropy power N_λ[ρ], the (p, β)-Fisher information F_{p,β}[ρ], and the (p, β, λ)-Fisher–Rényi complexity C_{p,β,λ}[ρ].
Definition 1
(Rényi entropy power [33]). Let λ ∈ ℝ₊*. Provided that the integral exists, the Rényi entropy power of index λ of a probability density function ρ is given by
N_λ[ρ] = exp( 2 H_λ[ρ] ),  where  H_λ[ρ] = (1/(1 − λ)) log ∫_ℝ [ρ(x)]^λ dx,  (4)
where the limiting case λ → 1 gives the Shannon entropy power N[ρ] = N₁[ρ] ≡ lim_{λ→1} N_λ[ρ].
The entropy H_λ was introduced by Rényi in [33] as a generalization of the Shannon entropy. In this expression, through the exponent λ applied to the distribution, more weight is given to the tails (λ < 1) or to the head (λ > 1) of the distribution [34,53,54,55]. This measure has found many applications in numerous fields such as, e.g., signal processing [56,57,58,59,60,61,62], information theory, where it serves to reformulate the entropy power inequality [63], statistical inference [64], multifractal analysis [65,66], chaotic systems [67], or physics, as mentioned in the introduction (see the references above). For instance, the Rényi entropies were used to reformulate the Heisenberg uncertainty principle (see [68,69,70,71,72], or [73,74], where this formulation also appears and is applied in quantum physics).
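Definition 1 can be illustrated numerically. For a Gaussian of width σ, Equation (4) admits the closed form N_λ = 2πσ² λ^{1/(λ−1)}, which tends to the Shannon value 2πeσ² as λ → 1. The following sketch (helper names are ours) checks this:

```python
# Renyi entropy power N_lambda[rho] = exp(2 H_lambda[rho]) of a Gaussian,
# compared with the closed form 2*pi*sigma^2 * lambda^(1/(lambda-1)).
import numpy as np

x = np.linspace(-25.0, 25.0, 100001)
dx = x[1] - x[0]
sigma = 1.3
rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def renyi_power(rho, lam):
    """N_lambda = exp( 2/(1-lambda) * log(int rho^lambda dx) ), lambda != 1."""
    return np.exp(2.0 * np.log(np.sum(rho**lam) * dx) / (1.0 - lam))

n_half = renyi_power(rho, 0.5)   # lambda < 1: tails emphasized
n_two = renyi_power(rho, 2.0)    # lambda > 1: head emphasized
print(n_half, 8.0 * np.pi * sigma**2)   # closed form: 2*pi*sigma^2 * 0.5^(-2)
print(n_two, 4.0 * np.pi * sigma**2)    # closed form: 2*pi*sigma^2 * 2
```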
Whereas the power applied to the probability density ρ in the Rényi entropy aims at focusing on the head or the tails of the distribution, one may wish to act similarly when dealing with the Fisher information. In this case, since both the density and its derivative are involved, one may wish to stress either some parts of the distribution, or some of its variations (small or large fluctuations). Thus, two different power parameters, for ρ and for its derivative, respectively, can be considered, leading, with our notation, to the following definition of the bi-parametric Fisher information.
Definition 2
((p, β)-Fisher information [34,35,36]). For any p ∈ (1, ∞) and any β ∈ ℝ₊*, the (p, β)-Fisher information of a continuously differentiable density ρ is defined by
F_{p,β}[ρ] = ( ∫_ℝ | [ρ(x)]^{β−1} (d/dx) log ρ(x) |^p ρ(x) dx )^{2/(pβ)},  (5)
provided that this integral exists. When ρ is strictly positive on a bounded support, the integration is to be understood over this support, but ρ must be differentiable on the closure of this support.
It is straightforward to see that F_{2,1} is the usual Fisher information. When it exists, lim_{p→+∞} [F_{p,β}]^{β/2} is the essential supremum of |ρ^{β−1} (d/dx) log ρ|. Conversely, [F_{1,β}]^{β/2} is, up to the factor 1/β, the total variation of ρ^β. For p = 2, this extended Fisher information is closely related to the α-Fisher information introduced by Hammad in 1978 when dealing with a position parameter [75]. Note also that a variety of generalized Fisher informations have been applied, especially in non-extensive physics [76,77,78,79].
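The reduction of Equation (5) to the usual Fisher information for (p, β) = (2, 1) can be checked numerically on a Gaussian, for which the usual Fisher information equals 1/σ². A sketch, with helper names of our own:

```python
# The (p, beta)-Fisher information of Definition 2, evaluated by quadrature.
# F_{2,1} must coincide with the usual Fisher information 1/sigma^2.
import numpy as np

x = np.linspace(-25.0, 25.0, 100001)
dx = x[1] - x[0]
sigma = 2.0
rho = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def fisher_pb(rho, p, beta):
    """F_{p,beta}[rho] = ( int |rho^(beta-1) (log rho)'|^p rho dx )^(2/(p*beta))."""
    dlog = np.gradient(np.log(rho), dx)
    integral = np.sum(np.abs(rho**(beta - 1.0) * dlog)**p * rho) * dx
    return integral**(2.0 / (p * beta))

f21 = fisher_pb(rho, 2.0, 1.0)
print(f21, 1.0 / sigma**2)          # usual Fisher information of the Gaussian
print(fisher_pb(rho, 3.0, 1.2))     # a genuinely two-parametric case
```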
From the Rényi entropy power and the (p, β)-Fisher information, we define a (p, β, λ)-Fisher–Rényi complexity by the product of these quantities, up to a given power.
Definition 3
((p, β, λ)-Fisher–Rényi complexity). We define the (p, β, λ)-Fisher–Rényi complexity of a probability density ρ by
C_{p,β,λ}[ρ] = ( F_{p,β}[ρ] N_λ[ρ] )^β,  (6)
provided that the involved quantities exist.
We choose to raise the product of the entropy power and of the Fisher information to the power β > 0 for reasons of simplicity. Indeed, this does not change the spirit of this measure of complexity, while it allows us to express its symmetry properties in a more elegant manner, as we will see later on.
This quantity has the minimal properties expected for a complexity measure (see e.g., [32]), as stated in the next subsection.

2.2. Shift and Scale Invariance, Bounding from Below and Minimizing Distributions

The first property of the proposed complexity C_{p,β,λ}[ρ] is its invariance under the basic translation and scaling transformations.
Proposition 1.
The (p, β, λ)-Fisher–Rényi complexity of the probability density ρ is invariant under any translation x₀ ∈ ℝ and scaling factor σ > 0 applied to ρ; i.e., for ρ̃(x) = (1/σ) ρ((x − x₀)/σ), C_{p,β,λ}[ρ̃] = C_{p,β,λ}[ρ].
Proof. 
This is a direct consequence of a change of variables in the integrals, showing that N_λ[ρ̃] = σ² N_λ[ρ] (justifying the term “entropy power”) for any λ, and that F_{p,β}[ρ̃] = σ^{−2} F_{p,β}[ρ], whatever (p, β). □
From now on, due to these properties, all the definitions related to probability density functions will be given up to a translation and a scaling factor. In other words, when evoking a density ρ, except when specified, we will deal with the whole family (1/σ) ρ((x − x₀)/σ) for any x₀ ∈ ℝ and σ > 0.
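Proposition 1 can also be verified numerically: shifting and scaling a Gaussian keeps it inside the Gaussian family, so it suffices to compare the complexity of two members of that family. A sketch, combining the quadrature helpers used above (the names are ours):

```python
# Numerical check of Proposition 1: C_{p,beta,lambda} is unchanged under a
# shift x0 and a scale sigma applied to the density.
import numpy as np

x = np.linspace(-25.0, 25.0, 100001)
dx = x[1] - x[0]

def gaussian(mean, std):
    return np.exp(-(x - mean)**2 / (2.0 * std**2)) / (std * np.sqrt(2.0 * np.pi))

def complexity(rho, p, beta, lam):
    """C_{p,beta,lambda}[rho] = (F_{p,beta}[rho] * N_lambda[rho])^beta, lambda != 1."""
    dlog = np.gradient(np.log(rho), dx)
    F = (np.sum(np.abs(rho**(beta - 1.0) * dlog)**p * rho) * dx)**(2.0 / (p * beta))
    N = np.exp(2.0 * np.log(np.sum(rho**lam) * dx) / (1.0 - lam))
    return (F * N)**beta

c1 = complexity(gaussian(0.0, 1.0), p=2.0, beta=1.0, lam=0.8)
c2 = complexity(gaussian(5.0, 3.0), p=2.0, beta=1.0, lam=0.8)  # shifted and scaled
print(c1, c2)  # equal up to quadrature error
```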
More importantly, the complexity has a universal, non-trivial bounding from below, so that the distribution corresponding to this minimal complexity can be viewed as the least complex one.
Proposition 2
(Extended Stam inequality). For any p > 1 and
(β, λ) ∈ D_p = { (β, λ) ∈ (ℝ₊*)² : β ∈ ( 1/p* ; 1/p* + min(1, λ) ) },  (7)
with p* = p/(p − 1) the Hölder conjugate of p, there exists a universal optimal positive constant K_{p,β,λ} that bounds from below the (p, β, λ)-Fisher–Rényi complexity of any density ρ, i.e.,
∀ ρ,  C_{p,β,λ}[ρ] ≥ K_{p,β,λ}.  (8)
The optimal bound is achieved when, up to a shift and a scaling factor,
ρ_{p,β,λ} = u^ϑ  with  ϑ = p*/(β p* − 1),  (9)
and where u is a solution of the differential equation
(d/dx)( | (d/dx) u |^{p−2} (d/dx) u ) + γ ϑ (u^{λϑ−1} − u^{ϑ−1})/(1 − λ) = 0,  (10)
with γ determined a posteriori so as to impose that u^ϑ sums to unity. When λ → 1, the limit has to be taken, leading to γ ϑ (u^{λϑ−1} − u^{ϑ−1})/(1 − λ) → γ u^{ϑ−1} log u.
Proof. 
The proof is mainly based on the sharp Gagliardo–Nirenberg inequality [52], as explained in detail in Appendix A. □
Finally, the minimizers of the (p, β, λ)-Fisher–Rényi complexity and the tight bound satisfy a remarkable symmetry property, as stated hereafter.
Proposition 3.
Let us consider the involutary transform
T_p : (β, λ) ↦ ( (β p* + λ − 1)/(λ p*) , 1/λ ).  (11)
The minimizers of the complexity satisfy the relation
ρ_{p,T_p(β,λ)} ∝ ( ρ_{p,β,λ} )^λ,  (12)
and the optimal bounds satisfy the relation
K_{p,T_p(β,λ)} = λ² K_{p,β,λ}.  (13)
Proof. 
See Appendix B. ⎕
A difficulty in determining the sharp bound and the minimizer is to solve the nonlinear differential Equation (10). One can find, in Corollary 3.2 of [52], a series of explicit equations allowing one to determine the solution, and thus the optimal bound in Equation (A1), but in general the expression of u remains in integral form. Agueh, however, exhibits several situations where the solution is known explicitly (and thus the optimal bound as well), as summarized in the next subsection.

2.3. Some Explicitly Known Minimizing Distributions

These particular cases arise from special cases of saturation of the Gagliardo–Nirenberg inequality, some of them having been studied by Bercher [35,80,81] or Lutwak [34]. All these cases are restated hereafter, with the notations of the paper. Let us first recall the definition of the stretched deformed Gaussian, studied for instance by Lutwak [34] or Bercher [35,80,81], also known as the stretched q-Gaussian or stretched Tsallis distribution [48,49] and intensively studied in non-extensive physics.
Definition 4
(Stretched deformed Gaussian distribution). Let p > 1 and λ > 1 − p*. The (p, λ)-stretched deformed Gaussian distribution is defined by
g_{p,λ}(x) ∝ [ 1 + (1 − λ) |x|^{p*} ]₊^{1/(λ−1)}  for λ ≠ 1,  and  g_{p,1}(x) ∝ exp( −|x|^{p*} )  for λ = 1,  (14)
where (·)₊ = max(·, 0) (the case λ = 1 is indeed obtained by taking the limit).
This distribution plays a fundamental role in the extended Stam inequality, as we will see in the next subsections and in the next section.
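The limiting behavior in Definition 4 can be observed numerically: as λ → 1, the λ ≠ 1 branch of Equation (14) converges pointwise to exp(−|x|^{p*}). A small illustration (unnormalized densities; the function name is ours):

```python
# The stretched deformed Gaussian of Definition 4:
# g_{p,lambda}(x) ∝ (1 + (1-lambda)|x|^{p*})_+^{1/(lambda-1)} for lambda != 1,
# tending to exp(-|x|^{p*}) as lambda -> 1.
import numpy as np

def stretched_gaussian(x, p, lam):
    """Unnormalized g_{p,lambda}; the lam == 1 branch is the limiting case."""
    pstar = p / (p - 1.0)
    if lam == 1.0:
        return np.exp(-np.abs(x)**pstar)
    base = np.maximum(1.0 + (1.0 - lam) * np.abs(x)**pstar, 0.0)
    return base**(1.0 / (lam - 1.0))

xs = np.linspace(-2.0, 2.0, 5)
for lam in (0.99, 1.0, 1.01):
    print(lam, stretched_gaussian(xs, 3.0, lam))  # three nearly identical rows
```

For λ > 1 the support becomes compact (the (·)₊ truncation is active), while for λ < 1 the tails are of power-law type.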

2.3.1. The Case β = λ

For any p > 1 and for
(β, λ) ∈ B_p = { (β, λ) ∈ D_p : β = λ },  (15)
one obtains that the minimizing distribution of the (p, β, λ)-Fisher–Rényi complexity is the (p, λ)-stretched deformed Gaussian distribution,
ρ_{p,λ,λ} = g_{p,λ}  (16)
(see Corollary 3.4 in [52], (i) where λ = q/s; and (ii) where λ = s/q, respectively; the case λ = 1 is obtained by taking the limit λ → 1 (resp. lower and upper limit) or by a direct computation). This situation is nothing more than that studied by Bercher in [35] or Lutwak in [34]. Remarkably, by a mass transport approach, Lutwak proved in [34] that this relation is valid for λ > 1/(1 + p*), i.e., for
(β, λ) ∈ L_p = { (β, λ) ∈ (ℝ₊*)² : β = λ > 1/(1 + p*) }.  (17)
Note that the exponent in the Lutwak expression is not the same as ours, but since β > 0, the Lutwak relation can be raised to the adequate power so as to recover our formulation.

2.3.2. Stretched Deformed Gaussian: The Symmetric Case

Immediately, from the relation in Equation (12) induced by the involution T_p, one obtains, after the re-parametrization λ ↦ 1/λ and an adequate scaling, for any p > 1 and
(β, λ) ∈ B̄_p = { (β, λ) ∈ D_p : β = (p* + 1 − λ)/p* },  (18)
that the minimizing distribution is again a stretched deformed Gaussian,
ρ_{p,(p*+1−λ)/p*,λ} = g_{p,2−λ}.  (19)
Again, starting from the Lutwak result, the validity of this result extends to
(β, λ) ∈ L̄_p = { (β, λ) ∈ (ℝ₊*)² : 0 < β = (p* + 1 − λ)/p* < 1 + 1/p* },  (20)
and the symmetry of the bound given by Proposition 3 remains valid. Indeed, since the minimizers in L_p satisfy the differential equation of the Gagliardo–Nirenberg problem given in Appendix A, the reasoning of that appendix and of Appendix B still holds.

2.3.3. Dealing with the Usual Fisher Information

This situation corresponds to p = 2 and β = 1. Then, for
(β, λ) ∈ A₂ = { (β, λ) ∈ D₂ : β = 1 },  (21)
one obtains the minimizing distribution, for λ ≠ 1,
ρ_{2,1,λ}(x) ∝ [ cos( √(1−λ) |x| ) ]^{2/(1−λ)} 𝕝_{[0 ; π/(2 ℜ{√(1−λ)}))}(|x|),  (22)
where 𝕝_A denotes the indicator function of the set A, √(−1) = ı (remember that cos(ı x) = cosh(x)), ℜ denotes the real part, and 1/0 is to be understood as +∞ (see Corollary 3.3 in [51] with λ = s/q and Corollary 3.4 in [51] with λ = q/s, respectively). The case λ = 1 is again obtained by taking the limit, leading to the Gaussian distribution ρ_{2,1,1} (see the previous cases with p = 2; this also corresponds to the usual Stam inequality).

2.3.4. The Symmetrical of the Usual Fisher Information

From the relation in Equation (12) induced by the involution T_p, after the re-parametrization λ ↦ 1/λ and an adequate scaling, for p = 2 and
(β, λ) ∈ Ā₂ = { (β, λ) ∈ D₂ : β = (λ + 1)/2 },  (23)
the minimizing distribution for λ ≠ 1 takes the form
ρ_{2,(λ+1)/2,λ}(x) ∝ [ cos( √(λ−1) |x| ) ]^{2/(λ−1)} 𝕝_{[0 ; π/(2 ℜ{√(λ−1)}))}(|x|)  (24)
(with, again, the Gaussian as the limit when λ → 1).
The graphs in Figure 1 describe the domain D_p (for a given p). Therein, we also represent the particular domains L_p (the Bercher–Lutwak situation), L̄_p (the transform of L_p), A₂ and Ā₂, where the explicit expressions of the minimizing distributions are known from the works of [34,35,51,52].

3. Extended Optimal Stam Inequality: A Step Further

In this section, we further extend the previous Stam inequality, namely by largely widening the domain of the parameters and by disentangling the two connected parameters. For this, we use the differential-escort deformation introduced in [50], which is the key tool allowing for the introduction of a new degree of freedom. Afterwards, we will give the minimizing distribution, which turns out to be a new deformation of the Gaussian family, intimately linked with the inverse incomplete beta function.

3.1. Differential-Escort Distribution: A Brief Overview

We have already seen the crucial role played by the power operation applied to a probability density function ρ. The subsequent escort distribution, duly normalized, ρ(x)^α / ∫_ℝ ρ(x)^α dx, is a simple monoparametric deformation of ρ (see e.g., [82]). Notice that the parameter α allows us to explore different regions of ρ: for α > 1, the more singular regions are amplified and, for α < 1, the less singular regions are magnified. A careful look at the minimizing distributions of the usual Stam inequality shows that the x-axis is stretched via a power operation. This suggests that a certain nonlinear stretching may also play a key role in the saturation (i.e., equality) of the extended Stam inequality.
These ideas led us to the definition of the differential-escort distribution of a probability distribution ρ (see also [50]), motivated by the following principle: the power operation provokes a two-fold stretching, in the density itself and in the differential interval, so as to conserve the probability in the differential intervals, ρ_α(y) dy = ρ(x) dx with ρ_α(y) = [ρ(x(y))]^α.
Definition 5
(Differential-escort distributions). Given a probability distribution ρ ( x ) and given an index α R , the differential-escort distribution of ρ of order α is defined as
E_α[ρ](y) = [ρ(x(y))]^α,  (25)
where y(x) is a bijection satisfying dy/dx = [ρ(x)]^{1−α} and y(0) = 0.
The differential-escort transformation E_α exhibits various properties, studied in detail in [50]. We present here the key ones, allowing the extension of the Stam inequality to a wider domain than that of the previous section.
Property 1.
The differential-escort transformation satisfies the composition relation
E_α ∘ E_{α′} = E_{α′} ∘ E_α = E_{α α′},  (26)
where ∘ is the composition operator. Moreover, since E₁ is the identity, for any α ≠ 0, E_α is invertible and
E_α^{−1} = E_{1/α}.  (27)
In addition to the trivial case α = 1, which keeps the distribution invariant, a remarkable case is given by α = 0, leading to the uniform distribution. This case is not surprising since then x(y) is nothing more than the inverse of the cumulative distribution function, well known to uniformize a random variable [83].
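Definition 5 lends itself to a direct numerical construction: tabulate y(x) as the cumulative integral of ρ^{1−α} (with y(0) = 0) and attach the values ρ(x)^α. The transformation preserves normalization for every α, and α = 0 indeed produces the uniform density on an interval of length 1. A sketch (function name is ours):

```python
# Numerical construction of the differential-escort E_alpha[rho] of Definition 5,
# and a check that it integrates to 1 for several values of alpha.
import numpy as np

x = np.linspace(-8.0, 8.0, 160001)
rho = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

def differential_escort(rho, x, alpha):
    """Return the stretched grid y(x) and the values E_alpha[rho](y(x))."""
    f = rho**(1.0 - alpha)
    # cumulative trapezoidal integral of rho^(1-alpha)
    y = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
    y -= np.interp(0.0, x, y)          # enforce y(0) = 0
    return y, rho**alpha

for alpha in (0.0, 0.5, 1.0, 1.5):
    y, ea = differential_escort(rho, x, alpha)
    mass = np.sum(0.5 * (ea[1:] + ea[:-1]) * np.diff(y))
    print(alpha, mass)                 # ≈ 1 in every case
```

For α = 0 the values ρ^0 are identically 1 and the y-range has length ∫ρ dx = 1: the uniform distribution announced above.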
In the sequel, we focus on the differential-escort distributions obtained for α > 0. Under this condition, when ρ is continuously differentiable, its differential-escort is also continuously differentiable. This is important in order to be able to define its (p, β)-Fisher information (see Definition 2). Under this condition, the differential-escort transformation induces a scaling property on the index of the Rényi entropy power (for this quantity, it remains true for any α ∈ ℝ), on the (p, β)-Fisher information, and thus on the subsequent complexity, as stated in the following proposition.
Proposition 4.
Let ρ be a probability distribution and α > 0 an index. Then, the Rényi entropy powers of ρ and of its differential-escort distribution E_α[ρ] satisfy
N_λ[E_α[ρ]] = ( N_{1+α(λ−1)}[ρ] )^α  (28)
for any λ ∈ ℝ₊*. Moreover, if the density ρ is continuously differentiable, then the extended Fisher informations of ρ and of its differential-escort distribution E_α[ρ] satisfy
F_{p,β}[E_α[ρ]] = α^{2/β} ( F_{p,αβ}[ρ] )^α  (29)
for any p > 1, β ∈ ℝ₊*.
Consequently, the (p, β, λ)-Fisher–Rényi complexities of ρ and of E_α[ρ] satisfy the relation
C_{p,β,λ}[E_α[ρ]] = α² C_{p,A_α(β,λ)}[ρ],  (30)
where A_α(β,λ) = (αβ, 1 + α(λ−1)).
Proof. 
It is straightforward to note that
( N_λ[E_α[ρ]] )^{(1−λ)/2} = ∫_ℝ ( E_α[ρ](y) )^λ dy = ∫_ℝ ( E_α[ρ](y(x)) )^λ (dy/dx) dx = ∫_ℝ ρ(x)^{αλ+1−α} dx = ( N_{1+α(λ−1)}[ρ] )^{α(1−λ)/2},
leading to Equation (28).
Similarly,
( F_{p,β}[E_α[ρ]] )^{pβ/2} = ∫_ℝ | ( E_α[ρ](y) )^{β−2} (d/dy) E_α[ρ](y) |^p E_α[ρ](y) dy
 = ∫_ℝ | ( E_α[ρ](y(x)) )^{β−2} ( (d/dx) E_α[ρ](y(x)) ) (dx/dy) |^p E_α[ρ](y(x)) (dy/dx) dx
 = ∫_ℝ | ρ(x)^{α(β−2)} ( (d/dx) ρ(x)^α ) ρ(x)^{α−1} |^p ρ(x) dx
 = ∫_ℝ | α ρ(x)^{αβ−2} (d/dx) ρ(x) |^p ρ(x) dx,
leading to Equation (29).
Relation (30) is a consequence of Equations (28) and (29), together with Definition 3 of the complexity. □
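The Rényi part of Proposition 4 can be checked numerically on a Gaussian, for which the right-hand side of Equation (28) is known in closed form, N_μ = 2πσ² μ^{1/(μ−1)}. A sketch under that assumption, with σ = 1:

```python
# Check of Equation (28): N_lambda[E_alpha[rho]] = N_{1+alpha(lambda-1)}[rho]^alpha,
# on a unit Gaussian, with the escort built as a cumulative integral of rho^(1-alpha).
import numpy as np

x = np.linspace(-8.0, 8.0, 160001)
rho = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # sigma = 1

alpha, lam = 0.5, 2.0
f = rho**(1.0 - alpha)
y = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
ea = rho**alpha                                     # E_alpha[rho] on the grid y

# left-hand side: N_lambda of the differential-escort, on the non-uniform grid y
integral = np.sum(0.5 * (ea[1:]**lam + ea[:-1]**lam) * np.diff(y))
lhs = np.exp(2.0 * np.log(integral) / (1.0 - lam))

# right-hand side: Gaussian closed form at index mu = 1 + alpha*(lambda-1)
mu = 1.0 + alpha * (lam - 1.0)
rhs = (2.0 * np.pi * mu**(1.0 / (mu - 1.0)))**alpha
print(lhs, rhs)  # the two values agree up to quadrature error
```

The Fisher relation, Equation (29), can be verified along the same lines by differentiating ea with respect to the non-uniform grid y.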
We may mention [84], where the author studies the effect of a rescaling of the Tsallis non-additivity parameter (equivalent to the entropic parameter of the Rényi entropy), a rescaling which is exactly that of Equation (28). In particular, this rescaling has an effect on the maximum entropy distribution, in such a way that it is equivalent to raising this particular distribution to a power. Here, the spirit is slightly different, since we start from a given distribution and the nonlinear stretching is made on the state (x-axis) of any probability density, in such a way that the density is raised to an exponent. The stretching is intimately linked to the distribution, of maximum entropy or not, and the scaling effect on the Rényi entropy is a consequence of this nonlinear stretching. The study of the links between the present result and that of [84] goes beyond the scope of our work and is left as a perspective.

3.2. Enlarging the Validity Domain of the Extended Stam Inequality

We now have all the ingredients to enlarge the domain of validity of the Stam inequality. Moreover, we are able to determine an explicit expression of the minimizer by means of a special function (i.e., in a form simpler to determine than that of Proposition 2), and of the tight bound as well.
To this aim, let us consider the following affine transform A_a, together with the associated set of transformed parameters, for a ∈ ℝ₊*:
A_a : (β, λ) ↦ ( aβ , 1 + a(λ−1) )  and  A(β, λ) = { A_a(β, λ) : a ∈ ℝ₊* } ∩ (ℝ₊*)².  (31)
Then, for any strictly positive real a, one can apply Proposition 2 to E_a[ρ]; that is, for p > 1 and (β, λ) ∈ D_p, C_{p,β,λ}[E_a[ρ]] ≥ K_{p,β,λ}. Thus, from Proposition 4, one immediately has that C_{p,A_a(β,λ)}[ρ] ≥ a^{−2} K_{p,β,λ} ≡ K_{p,A_a(β,λ)}. Moreover, this inequality is sharp, since it is achieved for E_a[ρ] = ρ_{p,β,λ}, i.e., for ρ_{p,A_a(β,λ)} = E_a^{−1}[ρ_{p,β,λ}] = E_{1/a}[ρ_{p,β,λ}].
As a conclusion, the existence of a universal optimal positive constant bounding the complexity (see Proposition 2) extends from D_p to A(D_p). Note that A(β, λ) is the intersection of the line defined by the point (0, 1) and (β, λ) itself (achieved for a = 1) with (ℝ₊*)², as depicted in Figure 2. Then, it is straightforward to see that D̃_p ≡ A(D_p) = { (β, λ) ∈ (ℝ₊*)² : λ > 1 − β p* } (see Figure 2a). The approach is thus the following:
  • Consider a point (β, λ) ∈ D̃_p and find an index α ∈ ℝ₊* such that A_α(β, λ) ∈ D_p, i.e., a point of the intersection between D_p and the line joining (0, 1) and (β, λ).
  • Apply Proposition 2 at the point (p, A_α(β, λ)), leading to the minimizing distribution ρ_{p,A_α(β,λ)} and its corresponding bound.
  • Then, remarking that A_{1/α}( A_α(β, λ) ) = (β, λ), the minimizer of the extended complexity writes ρ_{p,β,λ} = E_α[ρ_{p,A_α(β,λ)}], and the corresponding bound can be computed from this minimizer, or by noting that K_{p,β,λ} = α² K_{p,A_α(β,λ)}.
The same procedure obviously applies when dealing with L_p: A(L_p) = { (β, λ) ∈ (ℝ₊*)² : 1 − β p* < λ < β + 1 } appears to be a subset of D̃_p (see Figure 2b). Similarly, one can also deal with L̄_p: A(L̄_p) = { (β, λ) ∈ (ℝ₊*)² : λ > 1 − p* β/(p* + 1) } also appears to be a subset of D̃_p (see Figure 2c). Remarkably, A(D_p) = A(L_p) ∪ A(L̄_p). Moreover, we have explicit expressions for the minimizers in both L_p and L̄_p, which greatly eases determining the minimizers in D̃_p (including in D_p itself).
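The first step above, finding an index α that brings A_α(β, λ) back into D_p, can be sketched as a simple scan along the line through (0, 1) and (β, λ). The helper names (`in_Dp`, `decoupling_index`) and the grid search are ours, for illustration only; in practice α can be solved for in closed form from the boundary of D_p.

```python
# Sketch of the decoupling step: given (beta, lambda) in the enlarged domain
# D~_p = {lambda > 1 - beta*p*}, scan a -> A_a(beta, lambda) = (a*beta, 1 + a*(lambda-1))
# for an index a landing inside D_p.
def in_Dp(p, beta, lam):
    """Membership test for D_p = {1/p* < beta < 1/p* + min(1, lam), lam > 0}."""
    pstar = p / (p - 1.0)
    return lam > 0.0 and 1.0 / pstar < beta < 1.0 / pstar + min(1.0, lam)

def decoupling_index(p, beta, lam, steps=200000, a_max=10.0):
    """First a in (0, a_max] with A_a(beta, lam) in D_p, or None."""
    for k in range(1, steps + 1):
        a = k * a_max / steps
        if in_Dp(p, a * beta, 1.0 + a * (lam - 1.0)):
            return a
    return None

a = decoupling_index(2.0, 2.0, 0.9)   # (beta, lambda) = (2, 0.9) lies outside D_2
print(a, in_Dp(2.0, 2.0 * a, 1.0 + a * (0.9 - 1.0)))  # the transported point is in D_2
```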
These remarks, together with both the knowledge of the minimizing distributions and of the bounds on L_p ∪ L̄_p, lead to the following definition and proposition.
Definition 6
( ( p , β , λ ) -Gaussian distribution). For any p > 1 and ( β , λ ) R + * 2 , we define the ( p , β , λ ) -Gaussian distribution as
g p , β , λ ( x ) 1 B 1 1 p * , q p , β , λ ; p * | x | | 1 λ | 1 p * 1 | 1 λ | 𝕝 0 ; B 1 p * , q p , β , λ p * | x | | 1 λ | 1 p * , i f λ 1 , exp G 1 1 p * ; β 1 β 1 p * p * | x | β 1 𝕝 0 ; Γ ( 1 / p * ) 𝕝 ( 0 ; 1 ) ( β ) p * | x | , i f λ = 1 , β 1 , exp | x | p * , i f β = λ = 1 ,
with
$q_{p,\beta,\lambda} = \frac{\beta-1}{|1-\lambda|} + \frac{\mathbb 1_{\mathbb R_+}(1-\lambda)}{p}.$
T_p is the involutory transform defined in Equation (11). $B(a,b,x) = \int_0^x t^{a-1}(1-t)^{b-1}\, dt$ is the incomplete beta function, defined when a > 0 and for x ∈ [0;1) (see [85]), and $B(a,b) = \lim_{x \to 1} B(a,b,x)$, which is the standard beta function if b > 0 and is infinite otherwise. $B^{-1}$ thus denotes the inverse incomplete beta function. $G(a,x) = \int_0^x t^{a-1} \exp(-t)\, dt$ is the incomplete gamma function, defined when a > 0 and for x ∈ ℝ [85], and $\Gamma(a) = \lim_{x \to +\infty} G(a,x)$ is the gamma function. By definition, $z^\alpha = |z|^\alpha e^{\imath\alpha \operatorname{Arg}(z)}$, where $0 \le \operatorname{Arg}(z) < 2\pi$. Finally, by convention, 1/0 = +∞.
Note that, when b > 0, the inverse incomplete beta function is well known and tabulated in the usual mathematical software packages, since it is the inverse cumulative function of the beta distributions [86]. Otherwise, as the incomplete beta function can be written in terms of a hypergeometric function [87] (see also [85,86]), also well known and tabulated, $B^{-1}$ can at least be computed numerically. The incomplete beta function admits many special cases for particular parameters [87,88]. For instance, when a + b is a negative integer, it can be expressed in terms of elementary functions [87].
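When b ≤ 0, B(a, b, ·) is no longer a beta cumulative distribution, but it can still be evaluated and inverted by brute force. A minimal, stdlib-only Python sketch (midpoint quadrature plus bisection; the function names and tolerances are ours, not the paper's):

```python
import math

def inc_beta(a, b, x, n=4000):
    """B(a, b, x) = integral_0^x t^(a-1) (1-t)^(b-1) dt by the midpoint rule.
    Works for a > 0 and any real b, since t = 1 is never sampled for x < 1."""
    h = x / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

def inc_beta_inv(a, b, y, tol=1e-10):
    """Invert x -> B(a, b, x) by bisection (B is increasing in x)."""
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inc_beta(a, b, mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip for b > 0 ...
y0 = inc_beta(0.5, 2.5, 0.3)
x0 = inc_beta_inv(0.5, 2.5, y0)
# ... and a b < 0 value, checked against the identity B(a, -a, x) = (1/a) (x/(1-x))^a
v = inc_beta(0.5, -0.5, 0.25)
```

The last check uses the elementary-function identity recalled later in Appendix C.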
Similarly, when its argument is positive, the incomplete gamma function and its inverse are well known and tabulated because they are linked to the cumulative distribution of the gamma laws [86]. Even for negative arguments, the incomplete gamma function is very often tabulated in mathematical software. Otherwise, one can write it using a confluent hypergeometric function [85] (see also [86,87]), generally tabulated. Thus, it can be inverted, at least numerically. The incomplete gamma function also admits special cases for particular parameters. For instance, $G\!\left( \frac{1}{2}, x^2 \right) = \sqrt{\pi}\, \operatorname{erf}(x)$, where erf is the error function [85]. Hence, for p = 2 and λ = 1, the (p,β,λ)-Gaussian can be written in terms of the inverse error function.
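The p = 2 identity can be verified directly against the error function of the Python standard library (a small numerical sketch; the quadrature routine is ours):

```python
import math

def inc_gamma(a, x, n=20000):
    """Lower incomplete gamma G(a, x) = integral_0^x t^(a-1) e^(-t) dt
    by the midpoint rule (valid for a > 0)."""
    h = x / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * math.exp(-(i + 0.5) * h)
                   for i in range(n))

# G(1/2, x^2) = sqrt(pi) * erf(x): the p* = 2 case used in the text
x = 0.8
lhs = inc_gamma(0.5, x * x)
rhs = math.sqrt(math.pi) * math.erf(x)
```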
Now, from the procedure previously described, we obtain the Stam inequality with the widest possible domain, together with the minimizing distributions and the explicit tight lower bound.
Proposition 5
(Stam inequality in a wider domain). The (p,β,λ)-Fisher–Rényi complexity is non-trivially lower bounded as follows:
$\forall\, p > 1\,,\ \forall\, (\beta,\lambda) \in \widetilde{\mathcal D}_p = \left\{ (\beta,\lambda) \in \mathbb R_{+*}^2 \,:\, \lambda > 1 - p^*\beta \right\}, \qquad C_{p,\beta,\lambda}[\rho] \ \ge\ K_{p,\beta,\lambda}.$
The minimizers are explicitly given by
argmin ρ C p , β , λ [ ρ ] = g p , β , λ ,
the ( p , β , λ ) -Gaussian of Definition 6. Proposition 3 remains valid in D ˜ p . Moreover, the tight bound is
K p , β , λ = 2 p * ζ p , β , λ p * ζ p , β , λ | 1 λ | 1 p * p * ζ p , β , λ p * ζ p , β , λ | 1 λ | ζ p , β , λ | 1 λ | + 1 p B 1 p * , ζ p , β , λ | 1 λ | + 1 p 2 , i f λ 1 , 2 e 1 p * Γ 1 p * β p * 1 p 2 , i f λ = 1 ,
with
ζ p , β , λ = β + ( λ 1 ) + p * .
Proof. 
See Appendix C. ⎕

4. Applications to Quantum Physics

Let us now apply the ( p , β , λ ) -Fisher–Rényi complexity for some specific values of the parameters to the analysis of the two main prototypes of d-dimensional quantum systems subject to a central (i.e., spherically symmetric) potential; namely, the hydrogenic and harmonic (i.e., oscillator-like) systems. The wave functions of the bound stationary states of these systems have the same angular part, so that we concentrate here on the radial distribution in both position and momentum spaces.

4.1. Brief Review on the Quantum Systems with Radial Potential

The time-independent Schrödinger equation of a single-particle system in a central potential V ( r ) can be written as
$\left( -\frac{1}{2}\, \nabla_d^2 + V(r) \right) \Psi(\mathbf r) = E_n\, \Psi(\mathbf r),$
(atomic units are used from here onwards), where $\nabla_d$ denotes the d-dimensional gradient operator, and the position vector $\mathbf r = (x_1, \ldots, x_d)$ in hyperspherical units is given by $(r, \theta_1, \theta_2, \ldots, \theta_{d-1}) \equiv (r, \Omega_{d-1})$, $\Omega_{d-1} \in \mathcal S^{d-1}$, the unit d-dimensional sphere, where $r \equiv |\mathbf r| = \sqrt{\sum_{i=1}^d x_i^2} \in \mathbb R_+$ and $x_i = r \left( \prod_{k=1}^{i-1} \sin\theta_k \right) \cos\theta_i$ for $1 \le i \le d$, with $\theta_i \in [0;\pi)$ for $i < d-1$, $\theta_{d-1} \equiv \phi \in [0; 2\pi)$ and $\theta_d = 0$ by convention. The physical wave functions are known to factorize (see e.g., [89,90,91]) as
Ψ n , l , μ ( r ) = R n , l ( r ) Y l , { μ } ( Ω d 1 ) ,
where $R_{n,l}(r)$ and $Y_{l,\{\mu\}}(\Omega_{d-1})$ denote the radial and the angular part, respectively, $(l, \{\mu\}) \equiv (l \equiv \mu_1, \mu_2, \ldots, \mu_{d-1})$ being the hyperquantum numbers associated to the angular variables $\Omega_{d-1} \equiv (\theta_1, \theta_2, \ldots, \theta_{d-1})$, which may take all values consistent with the inequalities $l \equiv \mu_1 \ge \mu_2 \ge \cdots \ge \mu_{d-1} \equiv m \ge 0$.
As already stated, the angular part Y l , { μ } is independent of the potential V and its expression is detailed in [14,17,91,92], for instance. Only the radial part R n , l is dependent on V (and also on the energy level n and the angular quantum number l), being the solution of the radial differential equation
$\left[ -\frac{1}{2}\, \frac{d^2}{dr^2} - \frac{d-1}{2r}\, \frac{d}{dr} + \frac{l(l+d-2)}{2r^2} + V(r) \right] R_{n,l}(r) = E_n\, R_{n,l}(r)$
(see e.g., [14,17,92] for further details). Then, the associated radial probability density ρ ( r ) is given by
$\rho_{n,l}(r)\, dr = \left( \int_{\mathcal S^{d-1}} \left| \Psi(\mathbf r) \right|^2 d\Omega_{d-1} \right) r^{d-1}\, dr = \left[ R_{n,l}(r) \right]^2 r^{d-1}\, dr,$
where we have taken into account the volume element d r = r d 1 d r d Ω d 1 and the normalization of the hyperspherical harmonics Y l , { μ } Ω d 1 to unity.
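To make the radial density above concrete, anticipating the hydrogenic case of Section 4.2: for d = 3 and Z = 1, the ground state has $R_{1,0}(r) = 2e^{-r}$, hence $\rho_{1,0}(r) = 4r^2 e^{-2r}$, which is normalized and has mean radius 3/2 in atomic units. A quick stdlib-only check (Simpson's rule; the truncation at r = 40 is an assumption of ours):

```python
import math

def rho_ground(r):
    """Radial density rho(r) = [R_{1,0}(r)]^2 r^2 with R_{1,0}(r) = 2 e^{-r}
    (3D hydrogen, Z = 1, atomic units)."""
    return 4.0 * r * r * math.exp(-2.0 * r)

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

norm = simpson(rho_ground, 0.0, 40.0)                     # integral of rho: 1
mean_r = simpson(lambda r: r * rho_ground(r), 0.0, 40.0)  # <r> = 3/2 a.u.
```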
Then, the wavefunction associated to the momentum of the system is given by the Fourier transform Ψ ˜ of Ψ . It is known that, again, Ψ ˜ writes as the product of a radial and angular part
Ψ ˜ n , l , μ ( k ) = M n , l ( k ) Y l , { μ } ( Ω d 1 ) ,
with the radial part being the modified Hankel transform of $R_{n,l}$,
$M_{n,l}(k) = (-\imath)^l\, k^{1-\frac{d}{2}} \int_{\mathbb R_+} r^{\frac{d}{2}}\, R_{n,l}(r)\, J_{l+\frac{d}{2}-1}(kr)\, dr,$
with $J_\nu$ the Bessel function of the first kind and order ν (see e.g., [14,17,91,92]). Again, this leads to the radial probability density function
$\gamma_{n,l}(k) = \left[ M_{n,l}(k) \right]^2 k^{d-1}.$
In the following, we will focus on the ( p , β , λ ) -Fisher–Rényi complexity of the radial densities ρ n , l ( r ) and γ n , l ( k ) of the d-dimensional harmonic and hydrogenic systems.

4.2. ( p , β , λ ) -Fisher–Rényi Complexity and the Hydrogenic System

The bound states of a d-dimensional hydrogenic system, where $V(r) = -\frac{Z}{r}$ (Z denotes the nuclear charge), are the physical solutions of Equation (40), which correspond to the well-known energies
$E_n^{(h)} = -\frac{Z^2}{2\eta^2} \qquad\text{where}\qquad \eta = n + \frac{d-3}{2}\,; \qquad n = 1, 2, \ldots$
(see [14,89,90]). The radial eigenfunctions are given by
$R^{(h)}_{n,l}(r) = R_{n,l} \left( \frac{2Z}{\eta} \right)^{\frac{d-1}{2}} \tilde r^{\,l}\, e^{-\tilde r/2}\, L^{(2L+1)}_{\eta-L-1}(\tilde r).$
L is the grand orbital angular momentum quantum number, r̃ is a dimensionless variable, and the normalization coefficient $R_{n,l}$ is given by
$L = l + \frac{d-3}{2}\,,\ \ l = 0, 1, \ldots, n-1\,; \qquad \tilde r = \frac{2Z}{\eta}\, r \qquad\text{and}\qquad R_{n,l} = \left[ \frac{Z\, \Gamma(\eta-L)}{\eta^2\, \Gamma(\eta+L+1)} \right]^{1/2},$
respectively, with L n ( α ) ( x ) the Laguerre polynomials [85,87]. Then, the radial probability density (41) of a d-dimensional hydrogenic stationary state ( n , l , { μ } ) is given in position space by
$\rho^{(h)}_{n,l}(r) = R_{n,l}^2\, \tilde r^{\,2L+2}\, e^{-\tilde r} \left[ L^{(2L+1)}_{\eta-L-1}(\tilde r) \right]^2 .$
Furthermore, using 8.971 in [87], one can compute $\frac{d\rho^{(h)}_{n,l}}{dr} = \frac{2Z}{\eta}\, \frac{d\rho^{(h)}_{n,l}}{d\tilde r}$.
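These closed forms can be sanity-checked numerically. For d = 3, l = 0, Z = 1, the ground-state radial function $R_{1,0}(r) = 2e^{-r}$ must satisfy the radial Equation (40) with $E_1 = -1/2$; a finite-difference Python sketch (step size and test radii are arbitrary choices of ours):

```python
import math

def R10(r):
    """3D hydrogen ground-state radial wavefunction (Z = 1): R_{1,0}(r) = 2 e^{-r}."""
    return 2.0 * math.exp(-r)

def radial_op(R, r, h=1e-4, d=3, l=0, Z=1.0):
    """Apply -(1/2) d2/dr2 - (d-1)/(2r) d/dr + l(l+d-2)/(2r^2) + V(r),
    with V(r) = -Z/r, using central finite differences."""
    d2 = (R(r + h) - 2.0 * R(r) + R(r - h)) / h ** 2
    d1 = (R(r + h) - R(r - h)) / (2.0 * h)
    return (-0.5 * d2 - (d - 1) / (2.0 * r) * d1
            + l * (l + d - 2) / (2.0 * r * r) * R(r) - Z / r * R(r))

# The operator applied to R10 should equal E_1 * R10 with E_1 = -1/2 at every r
errs = [abs(radial_op(R10, r) + 0.5 * R10(r)) for r in (0.5, 1.0, 2.0, 5.0)]
```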
On the other hand, the modified Hankel transform of R n , l Equation (43) gives the radial part of the wavefunction in the conjugated momentum space as [14,89,90]
$M_{n,l}(k) = M_{n,l} \left( \frac{\eta}{Z} \right)^{\frac{d-1}{2}} \frac{\tilde k^{\,l}}{(1+\tilde k^2)^{L+2}}\; G^{(L+1)}_{\eta-L-1}\!\left( \frac{1-\tilde k^2}{1+\tilde k^2} \right),$
where k ˜ is a dimensionless parameter and the normalization coefficient M n , l are given by
$\tilde k = \frac{\eta}{Z}\, k \qquad\text{and}\qquad M_{n,l} = \left[ \frac{4^{2L+3}\, \Gamma(\eta-L)\, \left[ \Gamma(L+1) \right]^2 \eta^2}{2\pi Z\, \Gamma(\eta+L+1)} \right]^{1/2},$
and where G n ( α ) ( x ) denotes the Gegenbauer polynomials [85,87]. This gives the radial probability density function in the momentum space as
$\gamma^{(h)}_{n,l}(k) = M_{n,l}^2\, \frac{\tilde k^{\,2L+2}}{(1+\tilde k^2)^{2L+4}} \left[ G^{(L+1)}_{\eta-L-1}\!\left( \frac{1-\tilde k^2}{1+\tilde k^2} \right) \right]^2 .$
Furthermore, using 8.939 in [87], one can compute $\frac{d\gamma^{(h)}_{n,l}}{dk} = \frac{\eta}{Z}\, \frac{d\gamma^{(h)}_{n,l}}{d\tilde k}$.
These expressions can thus be injected into Equations (4)–(6) to evaluate the ( p , β , λ ) -Fisher–Rényi complexity of both ρ n , l ( h ) and γ n , l ( h ) . Due to the special form of the density, involving orthogonal polynomials, this can be done using for instance a Gauss-quadrature method for the integrations [86].
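As an elementary illustration of such numerical evaluations (with Simpson's rule standing in for the Gauss quadrature mentioned above), one can compute the order-λ Rényi integral of the hydrogenic ground-state radial density $\rho(r) = 4r^2 e^{-2r}$ (d = 3, Z = 1) and compare it with its closed form. We use the standard definition $R_\lambda = \frac{1}{1-\lambda}\log\int\rho^\lambda$, leaving aside the entropy-power normalization fixed earlier in the paper:

```python
import math

def rho(r):
    """3D hydrogenic ground-state radial density (Z = 1): rho(r) = 4 r^2 e^{-2r}."""
    return 4.0 * r * r * math.exp(-2.0 * r)

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

lam = 7.0
I = simpson(lambda r: rho(r) ** lam, 0.0, 40.0)   # integral of rho^lam
renyi = math.log(I) / (1.0 - lam)                 # Rényi entropy of order lam

# Closed form: integral of (4 r^2 e^{-2r})^lam over (0, inf)
#            = 4^lam * Gamma(2 lam + 1) / (2 lam)^(2 lam + 1)
I_exact = 4.0 ** lam * math.gamma(2.0 * lam + 1.0) / (2.0 * lam) ** (2.0 * lam + 1.0)
```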
For illustration purposes, we depict in Figure 3 the behavior of the Fisher information $F_{p,\beta}$, of the Rényi entropy power $N_\lambda$, and of the (p,β,λ)-Fisher–Rényi complexity $C_{p,\beta,\lambda}$ (normalized by the lower bound) of the radial position density $\rho^{(h)}_{n,l}$ of the d-dimensional hydrogenic system, versus n and l, for the parameters (p,β,λ) = (2,1,7) and in dimensions d = 3 and 12. Therein, we firstly observe that, for a given quantum state of the system (i.e., when n and l are fixed), the Fisher information decreases (see left graph) and the Rényi entropy power increases (see center graph) when d goes from 3 to 12. This indicates that the oscillatory degree and the spreading amount of the radial electron distribution have a decreasing and an increasing behavior, respectively, as the dimension increases. The resulting combined effect, as captured and quantified by the Fisher–Rényi complexity (see right graph), is such that the complexity has a clear dependence on the difference n − l, in such a delicate way that it decreases when n − l = 1 but increases when n − l is bigger than unity as d increases.
To better understand this phenomenon, we have to look carefully at the opposite behavior of the Fisher information and the Rényi entropy power versus the pair ( n , l ) .
Indeed, for the two dimensionality cases considered in this work, the Fisher information presents a decreasing behavior when l increases at fixed n, essentially reflecting that the number of oscillations of the radial electron distribution becomes gradually smaller; keep in mind that η − L = n − l fixes the degree n − l − 1 of the Laguerre polynomials which control the radial electron distribution. At the smaller dimension (d = 3), a similar behavior is observed when l is fixed and n increases, while the opposite behavior occurs at the higher dimension (d = 12). This indicates that the radial fluctuations are bigger in number as n increases, and that their amplitudes depend on the dimension d so as to be gradually smaller (bigger) at the high (small) dimension. This is because the dimension, hidden in both the hyperquantum numbers η and L, tunes the coefficients of the Laguerre polynomials and thus the amplitude height of the oscillations.
In the case of the Rényi quantity, which is a global spreading measure, the behavior for fixed l and increasing n is clearly increasing, whereas, for fixed n, it is slowly decreasing versus l; this indicates that the radial electron distribution gradually spreads more and more (respectively, less and less) all over the space as n (respectively, l) increases.
Then, in Figure 4, the parameter dependence of the ( p , β , λ ) -Fisher–Rényi complexity C p , β , λ (duly normalized to the lower bound) for the radial distribution of various states ( n , l ) of the d-dimensional hydrogenic system in position space with dimensions d = 3 and 12, is investigated for the sets ( p , β , λ ) = ( 2 , 0.8 , 7 ) , ( 2 , 1 , 1 ) (usual Fisher–Shannon complexity) and ( 5 , 2 , 7 ) . Roughly speaking, the average behavior of the complexity versus ( n , l ) is similar for both dimensional cases to the one shown in the right graph of the previous figure. Of course, for a given pair ( n , l ), the behavior of the complexity in terms of the dimension is quantitatively different according to the values of the parameters. Let us just point out, for instance, that the comparison of the behavior of C 5 , 2 , 7 versus d and the corresponding ones of the other complexities shows that the complexity with higher value of p is more sensitive to the radial electron fluctuations with higher amplitudes.
A similar study for the previous entropy- and complexity-like measures in momentum space has been done in Figure 5 and Figure 6. Briefly, we observe that the behavior of these momentum quantities is in accordance with the analysis of the corresponding ones in position space, which has just been discussed. Note that, here again, the difference n − l determines the degree of the Gegenbauer polynomials that control the momentum density $\gamma^{(h)}_{n,l}$, so that the influence of n, l and d is formally similar to that for the position density $\rho^{(h)}_{n,l}$. Here, the influence of d on the height of the radial oscillations of the electron distribution (through the coefficients of the Gegenbauer polynomials) is the same for the two dimensionality cases considered in this work.
Let us highlight that the (n, l, d)-behavior of the Rényi entropy power in momentum space is just the opposite of the corresponding position one, manifesting the conjugacy of the two spaces; that is, the spreads of the position and momentum electron distributions behave in opposite ways.

4.3. ( p , β , λ ) -Fisher–Rényi Complexity and the Harmonic System

The bound states of a d-dimensional harmonic (i.e., oscillator-like) system, where $V(r) = \frac{1}{2}\, \omega^2 r^2$ (without loss of generality, the mass is assumed to be unity), are known to have the energies
$E_n^{(o)} = \omega \left( 2n + L + \frac{3}{2} \right) \qquad\text{with}\qquad n = 0, 1, \ldots\,,\ \ l = 0, 1, \ldots$
(see e.g., [90,93,94]). The radial eigenfunctions are written in terms of the Laguerre polynomials as
$R^{(o)}_{n,l}(r) = R_{n,l}\, \omega^{\frac{d-1}{4}}\, \tilde r^{\,l}\, e^{-\tilde r^2/2}\, L_n^{(L+\frac{1}{2})}(\tilde r^2),$
where r̃ is a dimensionless variable and the normalization coefficient $R_{n,l}$ is given by
$\tilde r = \sqrt{\omega}\, r \qquad\text{and}\qquad R_{n,l} = \left[ \frac{2\sqrt{\omega}\;\Gamma(n+1)}{\Gamma\!\left( n+L+\frac{3}{2} \right)} \right]^{1/2},$
respectively. Then, the associated radial position density is given by
$\rho^{(o)}_{n,l}(r) = R_{n,l}^2\, \tilde r^{\,2L+2}\, e^{-\tilde r^2} \left[ L_n^{(L+\frac{1}{2})}(\tilde r^2) \right]^2 .$
As for the hydrogenic system, using 8.971 in [87], one can compute $\frac{d\rho^{(o)}_{n,l}}{dr} = \sqrt{\omega}\, \frac{d\rho^{(o)}_{n,l}}{d\tilde r}$, and thus the (p,β,λ)-Fisher–Rényi complexity of $\rho^{(o)}_{n,l}$. Remarkably, $R^{(o)}_{n,l}$ is invariant under the modified Hankel transform, so that the momentum radial density is formally the same as the position radial density.
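The oscillator densities can be checked in the same spirit. For d = 3 and ω = 1, the ground state (n = l = 0) gives $\rho_{0,0}(r) = \frac{4}{\sqrt{\pi}}\, r^2 e^{-r^2}$, which is normalized with $\langle r^2 \rangle = 3/2$; a short stdlib-only verification (a sketch, not the paper's code):

```python
import math

def rho_osc(r):
    """3D isotropic oscillator ground-state radial density (omega = 1):
    rho(r) = (4/sqrt(pi)) r^2 e^{-r^2}."""
    return 4.0 / math.sqrt(math.pi) * r * r * math.exp(-r * r)

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

norm = simpson(rho_osc, 0.0, 10.0)                       # integral of rho: 1
r2 = simpson(lambda r: r * r * rho_osc(r), 0.0, 10.0)    # <r^2> = 3/2
```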
For illustration purposes, we plot in Figure 7 the behavior of the Fisher information $F_{2,1}$, the Rényi entropy power $N_7$ and the (2,1,7)-Fisher–Rényi complexity $C_{2,1,7}$ of the radial position distribution of the d-dimensional harmonic system for various values of the quantum numbers n and l at the dimensions d = 3 and 12. Figure 8 depicts $C_{p,\beta,\lambda}$, duly renormalized by its lower bound, for the triplets of complexity parameters (p,β,λ) = (2,0.8,7), (2,1,1) and (5,2,7), respectively. In these graphs, one can make an interpretation similar to that of the hydrogenic case. Note, however, that here the degree of the Laguerre polynomials involved in the distribution $\rho^{(o)}_{n,l}$ only depends on n; this fact makes the behavior of the previous information-theoretic measures more regular in the oscillator case than in the hydrogenic one. Concomitantly, as n increases, the spreading of the distribution also increases. Conversely, the parameters l and d have a relatively small influence on both the smoothness of the oscillations and the spreading (compared to that of n). Thus, unsurprisingly, both the Fisher information and the Rényi entropy power are weakly influenced by l (especially at the higher dimension) and by d. The Fisher–Rényi complexity, which quantifies the combined oscillatory and spreading effects, exhibits a very regular increasing behavior in terms of n.
Most interesting is the parameter dependence of the complexity. Indeed, we can play with the complexity parameters to stress different aspects of the oscillator density, and thus to reveal differences between the quantum states of the system. For instance, as one can see in Figure 8, the usual Fisher–Shannon complexity is unable to quantify the difference between the states of a given n versus the orbital number l and the dimension d (especially when n ≥ 1, whereas the systems are quite different). This remains the case when playing with λ or β; however, when increasing the parameter p (right graph), these states become distinguishable. This graph clearly shows the potential of the family of complexities $C_{p,\beta,\lambda}$ to analyze a system, especially thanks to the full freedom we have in choosing the complexity parameters p, β and λ.

5. Conclusions

In this paper, we have defined a three-parametric complexity measure of Fisher–Rényi type for a univariate probability density ρ that generalizes all the previously published quantifiers of the combined balance of the spreading and oscillatory facets of ρ. We have shown that this measure satisfies the three fundamental properties of a statistical complexity, namely, the invariance under translation and scaling transformations and the universal bounding from below. Moreover, the minimizing distributions are found to be closely related to the stretched Gaussian distributions. We have used an approach based on the Gagliardo–Nirenberg inequality and the differential-escort transformation of ρ. In fact, this inequality was previously used by Bercher and by Lutwak et al. to find a biparametric extension of the celebrated Stam inequality, which lower-bounds the product of the Rényi entropy power and the Fisher information. We have extended this biparametric Stam inequality to a three-parametric one by using the idea of differential-escort deformation of a probability density.
Then, we have numerically analyzed the previous entropy-like quantities and the three-parametric complexity measure for various specific quantum states of the two main prototypes of multidimensional electronic systems subject to a central potential of Coulomb (the d-dimensional hydrogenic atom) and harmonic (the d-dimensional isotropic harmonic oscillator) character. Briefly, we have found that the proposed complexity allows one to capture and quantify the delicate balance of the gradient and the spreading contents of the radial electron distribution of ground and excited states of the system. The variation of the three parameters of the proposed complexity allows one to stress this balance differently in the various radial regions of the charge distribution.
The results found in this work can be generalized in various ways that remain open. Indeed, the Gagliardo–Nirenberg relation is quite powerful, since it involves the p-norm of the function u, the q-norm of its j-th derivative and the s-norm of its m-th derivative, where p, q, s and the integers j, m are linked by inequalities (see [95]). This leaves open the possibility of defining still more extended (complete) complexity measures, with higher-order (in terms of derivatives) measures of information. Even more interestingly, this inequality-based relation holds for any dimension d ≥ 1; thus, it supports the possibility of extending our univariate results to multidimensional distributions, but with tighter restrictions on the parameters. The main difficulty in this case is related to the multidimensional extension of the validity domain by using the differential-escort technique or a similar one.

Acknowledgments

The authors are very grateful to the CNRS (Steeve Zozor) and the Junta de Andalucía and the MINECO–FEDER under the grants FIS2014–54497 and FIS2014–59311P (Jesús Sánchez-Dehesa) for partial financial support. Moreover, they are grateful for the warm hospitality during their stays at GIPSA–Lab of the University of Grenoble–Alpes (David Puertas-Centeno) and Departamento de Física Atómica, Molecular y Nuclear of the University of Granada (Steeve Zozor) where this work was partially carried out.

Author Contributions

The authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 2

Appendix A.1. The Case λ ≠ 1

The result of the proposition is a direct consequence of the Gagliardo–Nirenberg inequality [35,52,95], stated in our context as follows: let $p > 1$, $s > q \ge 1$ and $\theta = \frac{p(s-q)}{s(p+pq-q)}$; then, there exists an optimal strictly positive constant K, depending only on p, q and s, such that, for any function $u : \mathbb R \to \mathbb R_+$,
$K \left\| \frac{du}{dx} \right\|_p^{\theta} \left\| u \right\|_q^{1-\theta} \ \ge\ \left\| u \right\|_s,$
provided that the involved quantities exist, the equality being achieved for u solution of the differential equation
$\frac{d}{dx}\!\left( \left| \frac{du}{dx} \right|^{p-2} \frac{du}{dx} \right) - u^{q-1} = -\gamma\, u^{s-1},$
where γ > 0 is such that $\| u \|_s$ is fixed and can be chosen arbitrarily (it corresponds to a Lagrange multiplier; see Equations (26) and (27) in [52]). Finding u thus allows one to determine the optimal constant K. Note that, if the equality in (A1) is reached for $u_\gamma$, then it is also reached for $\bar u_\gamma = \delta\, u_\gamma(x)$ for any δ > 0. One can see that $\bar u_\gamma$ satisfies the differential equation $\frac{d}{dx}\!\left( \left| \frac{d\bar u}{dx} \right|^{p-2} \frac{d\bar u}{dx} \right) - \delta^{p-q}\, \bar u^{q-1} + \gamma\, \delta^{p-s}\, \bar u^{s-1} = 0$. Thus, the function u reaching the equality in Equation (A1) can also be chosen as the solution of the differential equation $\frac{d}{dx}\!\left( \left| \frac{du}{dx} \right|^{p-2} \frac{du}{dx} \right) - \kappa\, u^{q-1} + \zeta\, u^{s-1} = 0$, where κ > 0 and ζ > 0 can be chosen arbitrarily. As we will see later on, a judicious choice allowing one to include the limit case s → q is to take $\kappa = \zeta = \frac{\gamma}{s-q}$, i.e., to choose the function u reaching the equality in Equation (A1) as the solution of the differential equation
$\frac{d}{dx}\!\left( \left| \frac{du}{dx} \right|^{p-2} \frac{du}{dx} \right) + \gamma\; \frac{u^{s-1} - u^{q-1}}{s-q} = 0,$
where γ > 0 can be arbitrarily chosen.

Appendix A.1.1. The Sub-Case λ < 1

Following the very same steps as in [35], let us consider first
$\lambda = \frac{q}{s} < 1.$
With $u^s$ integrable, one can normalize it, that is, write it in the form $u = \rho^{1/s} = \rho^{\lambda/q}$, with ρ a probability density function. Thus, $\| u \|_s = 1$ and, from the Gagliardo–Nirenberg inequality,
ρ λ q 1 d d x ρ p θ ρ λ q q 1 θ s θ K 1 .
Simple algebra allows to write the terms of the left-hand side in terms of the generalized Fisher information and of the Rényi entropy power, respectively, to conclude that
F p , λ q 1 p + 1 [ ρ ] θ p p λ q 1 p + 1 2 N λ [ ρ ] 1 θ q 1 λ 2 s θ K 1 .
Using $1 - \frac{1}{p} = \frac{1}{p^*}$, let us then denote
$\beta = \frac{\lambda}{q} + \frac{1}{p^*} = \frac{1}{s} + \frac{1}{p^*},$
and note that the conditions imposed on p, q and s, together with λ > 0, impose
$\beta \in \left( \frac{1}{p^*}\,;\ \frac{1}{p^*} + \lambda \right),$
once p and λ are given. Simple algebra allows one to show that the exponents of the Fisher information and of the entropy power in Equation (A4) are equal, with common value $\frac{\theta\beta}{2} > 0$. Moreover, θ being strictly positive, both sides of Equation (A4) can be raised to the power $\frac{2}{\theta}$, leading to the result of the proposition, where the bound is given by
K p , β , λ = s 2 K 2 θ ,
where s and θ can be expressed through their parametrization in p, β, λ. Finally, the differential Equation (10) satisfied by the minimizer u comes from Equation (A3), noting that $s = \frac{p^*}{\beta p^* - 1}$ and $q = \frac{\lambda p^*}{\beta p^* - 1}$, and remembering that $\rho = u^s$, so that γ is to be chosen such that $u^s$ sums to unity.

Appendix A.1.2. The Sub-Case λ > 1

Consider now
$\lambda = \frac{s}{q} > 1$
and $u = \rho^{1/q} = \rho^{\lambda/s}$, leading to
F p , λ s 1 p + 1 [ ρ ] θ p p λ s 1 p + 1 2 N λ [ ρ ] 1 s 1 λ 2 q θ K 1 .
Denoting now
$\beta = \frac{\lambda}{s} + \frac{1}{p^*} = \frac{1}{q} + \frac{1}{p^*},$
imposing
$\beta \in \left( \frac{1}{p^*}\,;\ \frac{1}{p^*} + 1 \right)$
once p and λ are given. Simple algebra allows one to show that the exponents of the Fisher information and of the entropy power in Equation (A6) are again equal, with common value $\frac{\theta\beta}{2} > 0$. Here again, θ > 0 allows one to raise both sides of Equation (A6) to the power $\frac{2}{\theta}$. The bound is now given by
K p , β , λ = q 2 K 2 θ
where q and θ can be expressed through their parametrization in p, β, λ. Finally, as in the previous case, the differential Equation (10) satisfied by the minimizer u comes from Equation (A3), noting that now $q = \frac{p^*}{\beta p^* - 1}$ and $s = \frac{\lambda p^*}{\beta p^* - 1}$, and remembering that now $\rho = u^q$, so that γ is to be chosen such that $u^q$ sums to unity.

Appendix A.2. The Case λ = 1

The minimizer for λ = 1 can be viewed as the limiting case λ → 1, i.e., s → q.
One can also proceed as Agueh did in [52] to determine the sharp bound of the Gagliardo–Nirenberg inequality. To this end, let us consider the minimization problem
$\inf\left\{ \frac{1}{p} \int_{\mathbb R} \left| \frac{du}{dx}(x) \right|^p dx \ -\ \frac{1}{q} \int_{\mathbb R} [u(x)]^q \log u(x)\, dx \ :\ u \ge 0\,,\ \int_{\mathbb R} [u(x)]^q\, dx = 1 \right\}$
for p > 1 and q ≥ 1 (see Chapters 5 and 6 in [96], which justify the existence of a minimum). Hence, there exists an optimal constant K such that, for any function u such that $u^q$ sums to unity,
$\frac{1}{p} \int_{\mathbb R} \left| \frac{du}{dx}(x) \right|^p dx \ -\ \frac{1}{q} \int_{\mathbb R} [u(x)]^q \log u(x)\, dx \ \ge\ K.$
Now, fix a function u and consider $v(x) = \gamma^{1/q}\, u(\gamma x)$ for some γ > 0. $v^q$ also sums to unity and can thus be put in the previous inequality, leading to
$f_u(\gamma) \ \equiv\ \gamma^{\frac{p}{q}+p-1}\; \frac{1}{p} \int_{\mathbb R} \left| \frac{du}{dx}(x) \right|^p dx \ -\ \frac{1}{q} \int_{\mathbb R} [u(x)]^q \log u(x)\, dx \ -\ \frac{1}{q^2}\, \log\gamma \ \ge\ K$
for any γ > 0. Thus, this inequality is necessarily satisfied for the γ that minimizes $f_u(\gamma)$. A rapid study of $f_u$ allows one to conclude that it attains its minimum at
$\gamma = \left[ \frac{p}{q\,(p+q(p-1))\, \int_{\mathbb R} \left| \frac{du}{dx}(x) \right|^p dx} \right]^{\frac{q}{p+q(p-1)}}.$
Now, injecting Equation (A11) into Equation (A10) gives
$\frac{1}{p+q(p-1)}\, \log \int_{\mathbb R} \left| \frac{du}{dx}(x) \right|^p dx \ -\ \int_{\mathbb R} [u(x)]^q \log u(x)\, dx \ \ge\ \widetilde K,$
with $\widetilde K = qK + \frac{1}{p+q(p-1)} \left[ \log\!\left( \frac{p}{q\,(p+q(p-1))} \right) - 1 \right]$. Consider now $u_{\min}$, the minimizer of problem (A8). Obviously, $f_{u_{\min}}(\gamma)$ is minimum for γ = 1, which gives, from Equation (A11), $\int_{\mathbb R} \left| \frac{du_{\min}}{dx}(x) \right|^p dx = \frac{p}{q\,(p+q(p-1))}$ and, from Equation (A9), which is an equality, $\int_{\mathbb R} [u_{\min}(x)]^q \log u_{\min}(x)\, dx = \frac{1}{p+q(p-1)} - qK$. Injecting these expressions into Equation (A12) allows one to conclude that this inequality is sharp and, moreover, that its minimizer coincides with that of the minimization problem (A8).
Inequality (8) is obtained by injecting $u = \rho^{1/q}$ into Equation (A12), after some trivial algebra, denoting $\beta = \frac{1}{q} + \frac{1}{p^*} \in \left( \frac{1}{p^*}\,;\ 1 + \frac{1}{p^*} \right)$, confirming that it can be viewed as the limit case λ → 1.
Let us now solve the minimization problem (A8); that is, from the Lagrangian technique [97], minimize $\int_{\mathbb R} F(x, u, u')\, dx$, where $F(x,u,u') = \frac{1}{p} \left| \frac{du}{dx}(x) \right|^p - \frac{1}{q}\, [u(x)]^q \log u(x) - \gamma\, [u(x)]^q$, with $u' = \frac{du}{dx}$ and γ the Lagrange multiplier. The solution of this variational problem is given by the Euler–Lagrange equation [97], $\frac{\partial F}{\partial u} - \frac{d}{dx} \frac{\partial F}{\partial u'} = 0$, which writes here, after the re-parametrization $\delta = \frac{1}{q} + q\gamma$,
$\frac{d}{dx}\!\left( \left| \frac{du}{dx} \right|^{p-2} \frac{du}{dx} \right) + u^{q-1} \left( \log u + \delta \right) = 0.$
δ is to be determined a posteriori so as to satisfy the constraint $\int_{\mathbb R} [u(x)]^q\, dx = 1$. Again, one can easily see that, if the bound in Equation (A12) is achieved for $u_{\min}$, then it is also achieved for $\bar u_\sigma(x) = \sigma\, u_{\min}(\sigma^q x)$ whatever σ > 0. Reporting $u_{\min}(x) = \sigma^{-1}\, \bar u_\sigma(\sigma^{-q} x)$ in the differential equation allows one to see that $\bar u_\sigma$ is a solution of the differential equation $\frac{d}{dx}\!\left( \left| \frac{d\bar u}{dx} \right|^{p-2} \frac{d\bar u}{dx} \right) + \sigma^{p+q(p-1)}\, \bar u^{q-1} \left( \log \bar u - \log\sigma + \delta \right) = 0$. Choosing $\sigma = \exp(\delta)$ and rewriting $\sigma^{p+q(p-1)} = \gamma$, one can thus choose the minimizer u as the solution of the differential equation
$\frac{d}{dx}\!\left( \left| \frac{du}{dx} \right|^{p-2} \frac{du}{dx} \right) + \gamma\, u^{q-1} \log u = 0,$
where γ is to be determined a posteriori so as to satisfy the constraint $\int_{\mathbb R} [u(x)]^q\, dx = 1$. This result is precisely the limit case of the differential Equation (A3) when s → q.

Appendix B. Proof of Proposition 3

For λ = 1 , Relations (12) and (13) induced by Transform (11) of the indexes are obvious since T p ( β , 1 ) = ( β , 1 ) .
Then, for λ ≠ 1, Relation (12) comes from the fact that the function u solution of Equation (A3) depends only on p, q and s. Let us write (β,λ) and ϑ for the parameters in the first situation of the above proof, i.e., $\lambda = \frac{q}{s}$ and $\beta = \frac{1}{s} + \frac{1}{p^*} = \frac{\lambda}{q} + \frac{1}{p^*}$, and (β̄,λ̄) and ϑ̄ for the parameters in the second situation, i.e., $\bar\lambda = \frac{s}{q}$ and $\bar\beta = \frac{1}{q} + \frac{1}{p^*} = \frac{\bar\lambda}{s} + \frac{1}{p^*}$. It is straightforward to see that $\bar\lambda = \frac{1}{\lambda}$ and $\bar\beta = \frac{\beta}{\lambda} - \frac{1}{\lambda p^*} + \frac{1}{p^*} = \frac{\beta p^* + \lambda - 1}{\lambda p^*}$, i.e., $(p, \bar\beta, \bar\lambda) = (p, T_p(\beta,\lambda))$, and, conversely, that $(p, \beta, \lambda) = (p, T_p(\bar\beta,\bar\lambda))$. Since the optimal u is fixed once p, q and s are given, one has $u_{p,T_p(\beta,\lambda)} = u_{p,\beta,\lambda}$. Finally, simple algebra allows one to show that $\bar\vartheta = \lambda\vartheta$ and $\vartheta = \bar\lambda\,\bar\vartheta$, which finishes the proof.
Now, Relation (13) immediately comes from Equations (A5) and (A7), together with $\lambda = \frac{q}{s}$.

Appendix C. Proof of Proposition 5

Appendix C.1. The (p,β,λ)-Fisher–Rényi Complexity is Lowerbounded over D ˜ p

As detailed in the text, consider a point ( β , λ ) D ˜ p . Thus, there exists an index α > 0 such that A α ( β , λ ) L p L ¯ p . Applying Propositions 2 and 4, we have
C p , β , λ [ ρ ] = α 2 C p , A α ( β , λ ) E α E α 1 [ ρ ] α 2 K p , A α ( β , λ ) K p , β , λ .
Finally, denoting $(\tilde\beta, \tilde\lambda) = A_\alpha(\beta,\lambda)$, the minimizers satisfy $\mathcal E_{\alpha^{-1}}\left[ \rho_{p,\beta,\lambda} \right] = g_{p,\tilde\lambda}$ (see Section 2.3.1), or $\mathcal E_{\alpha^{-1}}\left[ \rho_{p,\beta,\lambda} \right] = g_{p,2-\tilde\lambda}$ (see Section 2.3.2), that is,
$\rho_{p,\beta,\lambda} = \begin{cases} \mathcal E_\alpha\left[ g_{p,\tilde\lambda} \right], & \text{if } A_\alpha(\beta,\lambda) \in \mathcal L_p, \\ \mathcal E_\alpha\left[ g_{p,2-\tilde\lambda} \right], & \text{if } A_\alpha(\beta,\lambda) \in \bar{\mathcal L}_p. \end{cases}$

Appendix C.2. Explicit Expression for the Minimizers.

In the sequel, we determine the differential-escort transformation $\mathcal E_\alpha[g_{p,\lambda}]$ with λ < 1. Let us denote by $Z_{p,\lambda} = \int_{\mathbb R} \left[ 1 + (1-\lambda)|x|^{p^*} \right]^{\frac{1}{\lambda-1}} dx = \frac{2\, B\!\left( \frac{1}{p^*} \,,\ \frac{1}{1-\lambda} - \frac{1}{p^*} \right)}{p^*\, (1-\lambda)^{\frac{1}{p^*}}}$ the normalization coefficient of the distribution $g_{p,\lambda}$ [35,87]. Hence, as defined in Definition 5, $\mathcal E_\alpha[g_{p,\lambda}](y) = \left[ g_{p,\lambda}(x(y)) \right]^{\alpha}$, with
$\frac{dy}{dx} = \left[ g_{p,\lambda}(x) \right]^{1-\alpha} = Z_{p,\lambda}^{\alpha-1} \left[ 1 + (1-\lambda)\, |x|^{p^*} \right]^{\frac{1-\alpha}{\lambda-1}}.$
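The normalization $Z_{p,\lambda}$ of $g_{p,\lambda}$ can be checked numerically. For p = p* = 2 and λ = 1/2, the closed form $2B(1/p^*, \frac{1}{1-\lambda} - \frac{1}{p^*})/(p^*(1-\lambda)^{1/p^*})$ equals $\pi\sqrt{2}/2$, in agreement with a direct quadrature of $\int (1+(1-\lambda)x^2)^{1/(\lambda-1)}\,dx$ (a stdlib-only sketch; cutoff and step are arbitrary choices of ours):

```python
import math

p_star, lam = 2.0, 0.5

def integrand(x):
    """[1 + (1 - lam) |x|^{p*}]^{1/(lam - 1)} for x >= 0."""
    return (1.0 + (1.0 - lam) * x ** p_star) ** (1.0 / (lam - 1.0))

def simpson(f, a, b, n=100000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def beta_fn(a, b):
    """Complete beta function via gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

Z_num = 2.0 * simpson(integrand, 0.0, 1000.0)     # even integrand: twice the half-line
Z_formula = (2.0 * beta_fn(1.0 / p_star, 1.0 / (1.0 - lam) - 1.0 / p_star)
             / (p_star * (1.0 - lam) ** (1.0 / p_star)))
```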
Thus, y(x) writes
$y(x) = Z_{p,\lambda}^{\alpha-1}\, \operatorname{sign}(x) \int_0^{|x|} \left[ 1 + (1-\lambda)\, t^{p^*} \right]^{\frac{1-\alpha}{\lambda-1}} dt = \kappa_{p,\lambda,\alpha}\, \operatorname{sign}(x) \int_0^{\frac{(1-\lambda)|x|^{p^*}}{1+(1-\lambda)|x|^{p^*}}} \tau^{\frac{1}{p^*}-1}\, (1-\tau)^{\frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} - 1}\, d\tau$
when making the change of variables $\tau = \frac{(1-\lambda)\, t^{p^*}}{1+(1-\lambda)\, t^{p^*}}$ and denoting $\kappa_{p,\lambda,\alpha} = \frac{Z_{p,\lambda}^{\alpha-1}}{p^*\, (1-\lambda)^{\frac{1}{p^*}}}$. One can recognize in the integral the incomplete beta function $B(a,b,x) = \int_0^x t^{a-1}(1-t)^{b-1}\, dt$, defined when $\Re(a) > 0$ and for x ∈ [0;1) [85]. Here, $a = \frac{1}{p^*} > 0$ and $b = \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*}$, noting that $\frac{(1-\lambda)|x|^{p^*}}{1+(1-\lambda)|x|^{p^*}} \in [0;1)$. Hence,
$y(x) = \kappa_{p,\lambda,\alpha}\, \operatorname{sign}(x)\; B\!\left( \frac{1}{p^*} \,,\ \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \;;\ \frac{(1-\lambda)|x|^{p^*}}{1+(1-\lambda)|x|^{p^*}} \right).$
Note that $y : \mathbb R \to \kappa_{p,\lambda,\alpha} \left( -B\!\left( \frac{1}{p^*}, \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \right) ;\ B\!\left( \frac{1}{p^*}, \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \right) \right)$, where $B(a,b) = \lim_{x \to 1} B(a,b,x)$ is the beta function [85,86,87]; B(a,b) is thus infinite when b ≤ 0.
Denoting by $B^{-1}$ the inverse of the incomplete beta function, we obtain
$1 + (1-\lambda)\, |x(y)|^{p^*} = \left[ 1 - B^{-1}\!\left( \frac{1}{p^*} \,,\ \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \;;\ \frac{|y|}{\kappa_{p,\lambda,\alpha}} \right) \right]^{-1}$
and, thus,
$\mathcal E_\alpha\left[ g_{p,\lambda} \right](y) \ \propto\ \left[ 1 - B^{-1}\!\left( \frac{1}{p^*} \,,\ \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \;;\ \frac{|y|}{\kappa_{p,\lambda,\alpha}} \right) \right]^{\frac{\alpha}{1-\lambda}} \mathbb 1_{\left[ 0 \,;\ B\left( \frac{1}{p^*},\, \frac{\alpha-1}{\lambda-1} - \frac{1}{p^*} \right) \right)}\!\left( \frac{|y|}{\kappa_{p,\lambda,\alpha}} \right).$
Note that, from $B(a,-a,x) = a^{-1} \left( \frac{x}{1-x} \right)^{a}$ [86,87], we naturally recover that $\mathcal E_1[g_{p,\lambda}] = g_{p,\lambda}$.
Finally, let us remark that
$\widetilde{\mathcal D}_p = \left\{ (\beta,\lambda) \in \mathbb R_{+*}^2 \,:\, 1 - p^*\beta < \lambda < 1 \right\} \,\cup\, \left\{ (\beta,\lambda) \in \mathbb R_{+*}^2 \,:\, \lambda > 1 \right\} \,\cup\, \left\{ (\beta, 1)\,,\ \beta \in \mathbb R_{+*} \right\},$
the first set being a subset of $A[\mathcal L_p]$ and the second one a subset of $A[\bar{\mathcal L}_p]$. We now treat these three cases separately.

Appendix C.2.1. The Case 1 − p*β < λ < 1

Following Appendix C.1, let us first determine α such that $A_\alpha(\beta,\lambda) \in \mathcal L_p$, that is, α such that $\alpha\beta = 1 + \alpha(\lambda-1)$. Hence,
$\alpha = \frac{1}{\beta + 1 - \lambda} \qquad\text{and}\qquad A_\alpha(\beta,\lambda) = \left( \frac{\beta}{\beta+1-\lambda} \,,\ \frac{\beta}{\beta+1-\lambda} \right).$
The fact that β > 0 and λ < 1 ensures that β + 1 − λ > 0.
From Section 2.3.1 and Appendix C.1, the minimizer of the complexity is thus given by
$\rho_{p,\beta,\lambda} = \mathcal E_{\frac{1}{\beta+1-\lambda}}\left[ g_{p,\, \frac{\beta}{\beta+1-\lambda}} \right].$
One can easily see that $\frac{\beta}{\beta+1-\lambda} \in \left( \frac{1}{1+p^*} \,;\ 1 \right)$, and thus we immediately get from Equation (A17)
$\rho_{p,\beta,\lambda}(y) \ \propto\ \left[ 1 - B^{-1}\!\left( \frac{1}{p^*} \,,\ \frac{\beta-\lambda}{1-\lambda} - \frac{1}{p^*} \;;\ \frac{|y|}{\kappa_{p,\alpha\beta,\alpha}} \right) \right]^{\frac{1}{1-\lambda}} \mathbb 1_{\left[ 0 \,;\ B\left( \frac{1}{p^*},\, \frac{\beta-\lambda}{1-\lambda} - \frac{1}{p^*} \right) \right)}\!\left( \frac{|y|}{\kappa_{p,\alpha\beta,\alpha}} \right).$
Noting that $\frac{\beta-\lambda}{1-\lambda} - \frac{1}{p^*} = \frac{\beta-1}{1-\lambda} + \frac{1}{p} = q_{p,\beta,\lambda}$, it appears that this density is nothing more than the (p,β,λ)-Gaussian of Definition 6 (remember that the families of densities are defined up to a shift and a scaling).

Appendix C.2.2. The Case λ > 1

Following again Appendix C.1, let us first determine α such that $A_\alpha(\beta,\lambda) \in \bar{\mathcal L}_p$, i.e., such that $\alpha\beta = \frac{p^* + 1 - \left[ 1 + \alpha(\lambda-1) \right]}{p^*}$. We thus obtain
$\alpha = \frac{p^*}{p^*\beta + \lambda - 1} \qquad\text{and}\qquad A_\alpha(\beta,\lambda) = \left( \frac{p^*\beta}{p^*\beta+\lambda-1} \,,\ \frac{p^*\beta + (1+p^*)(\lambda-1)}{p^*\beta+\lambda-1} \right).$
The fact that β > 0 and λ > 1 ensures that p*β + λ − 1 > 0.
From Section 2.3.1 and Appendix C.1, the minimizer of the complexity is expressed as
$\rho_{p,\beta,\lambda} = \mathcal E_{\frac{p^*}{p^*\beta+\lambda-1}}\left[ g_{p,\; 1 - \frac{p^*(\lambda-1)}{p^*\beta+\lambda-1}} \right].$
One can easily has that 1 p * ( λ 1 ) p * β + λ 1 1 p * ; 1 and thus we immediately get from Equation (A17)
ρ p , β , λ ( y ) 1 B 1 1 p * , β 1 λ 1 ; | y | κ p , 1 α ( λ 1 ) , α 1 λ 1 𝕝 0 ; B 1 p * , β 1 λ 1 | y | κ p , 1 α ( λ 1 ) , α .
The density is again nothing more than the ( p , β , λ ) -Gaussian of Definition 6.
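As in the previous case, the defining relation of α can be verified numerically (again a sketch, with helper names of our own):

```python
def alpha_lbar(beta_, lam, p_star):
    # α solving αβ = (p* + 1 - [1 + α(λ - 1)]) / p*, i.e., sending (β, λ)
    # onto the line L̄_p; well defined since β > 0 and λ > 1 give p*β + λ - 1 > 0.
    return p_star / (p_star * beta_ + lam - 1.0)
```

For instance, with p* = 2, β = 1, λ = 3/2, both sides of the defining relation evaluate to the same number.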

Appendix C.2.3. The Case λ = 1

We exclude here the trivial point β = 1. Now, taking α = 1/β gives A_α(β, 1) = (1, 1). We know that the minimizer for β = 1 is given by g_{p,1}(x) = Z_{p,1}^{−1} exp(−|x|^{p*}) with Z_{p,1} = ∫_ℝ exp(−|x|^{p*}) dx = 2Γ(1/p*)/p* [35,87].
Following again Appendix C.1, we have to determine
E 1 β g p , 1 ( y ) = g p , 1 ( x ( y ) ) 1 β = Z p , 1 1 β exp | x ( y ) | p * β
with
d y d x = g p , 1 1 1 β = Z p , 1 1 β β exp β 1 β | x | p * ,
and thus
y ( x ) = Z p , 1 1 β β sign ( x ) 0 | x | exp β 1 β t p * d t .
Viewing this integral in the complex plane (here in the real line), one can make the change of variables τ = β 1 β t p * , i.e., t = β 1 β 1 p * τ 1 p * to obtain
y ( x ) = Z p , 1 1 β β p * β 1 β 1 p * sign ( x ) 0 β 1 β | x | p * τ 1 p * 1 exp ( τ ) d τ ,
where ((β − 1)/β)^{−1/p*} is complex in general, and real only if (β − 1)/β ≥ 0, i.e., if β ∉ (0; 1). One can recognize in the integral the incomplete gamma function G(a, x) = ∫_0^x t^{a−1} exp(−t) dt, defined for ℜ{a} > 0 and for any complex x [85]. We then obtain,
y ( x ) = κ p , β sign ( x ) β 1 β 1 p * G 1 p * ; β 1 β | x | p * ,
where κ p , β = Z p , 1 1 β β p * . Note that the term in square brackets is real and positive, and takes its values over R + if β > 1 (remember that we excluded the trivial situation β = 1 ), and over 0 ; Γ 1 p * if β < 1 .
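As an aside, the substitution used above can be checked numerically for the orientation in which the integrand decays, taking c = (β − 1)/β > 0 (i.e., β > 1); the sign conventions are partly ambiguous in the rendered equations, so this is one consistent reading, with helper names of our own. The identity being tested is ∫_0^x e^{−c t^{p*}} dt = (1/p*) c^{−1/p*} G(1/p*, c x^{p*}):

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.integrate import quad

def G(a, x):
    # non-regularized lower incomplete gamma G(a, x) = ∫_0^x t^{a-1} e^{-t} dt;
    # SciPy's gammainc is the regularized ratio G(a, x) / Γ(a)
    return gammainc(a, x) * gamma(a)

def y_by_quadrature(x, beta_, p_star):
    # direct quadrature of ∫_0^x exp(-((β-1)/β) t^{p*}) dt  (β > 1)
    c = (beta_ - 1.0) / beta_
    return quad(lambda t: np.exp(-c * t**p_star), 0.0, x)[0]

def y_by_gamma(x, beta_, p_star):
    # same integral after the substitution τ = ((β-1)/β) t^{p*}
    c = (beta_ - 1.0) / beta_
    return c**(-1.0 / p_star) * G(1.0 / p_star, c * x**p_star) / p_star
```

Both routes agree to quadrature precision, e.g., at x = 1.3, β = 2, p* = 2.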
Denoting by G^{−1} the inverse of the incomplete gamma function, this gives
1 β | x ( y ) | p = 1 β 1 G 1 1 p * ; β 1 β 1 p * | y | κ p , 1
defined for |y|/κ_{p,1} < Γ(1/p*)/𝕝_{(0;1)}(β), with the convention 1/0 = +∞. We thus arrive at
ρ p , β , 1 ( y ) exp 1 1 β G 1 1 p * ; β 1 β 1 p * | y | κ p , 1 𝕝 0 ; Γ ( 1 / p * ) 𝕝 ( 0 ; 1 ) ( β ) | y | κ p , 1 .
We again recover the ( p , β , λ ) -Gaussian.

Appendix C.3. Symmetry through the Involution T_p

For λ = 1 , the result is trivial since T p ( β , 1 ) = ( β , 1 ) (see Equation (11)).
Now, for λ ≠ 1, let us denote (β̄, λ̄) = T_p(β, λ) = ((p*β + λ − 1)/(p*λ), 1/λ) the involutory transform of (β, λ). Some simple algebra allows one to show that if 1 − βp* < λ < 1, then λ̄ > 1, and conversely. Thus, it is straightforward to see that q_{p,T_p(β,λ)} = q_{p,β,λ} and that 1/|1 − λ̄| = λ/|1 − λ|, leading to
g_{p,T_p(β,λ)} ∝ g_{p,β,λ}^{λ}.
Now, if λ < 1, the optimal bound is given by K_{p,β,λ} = α^{−2} K_{p,αβ,αβ} (see Equations (A19) and (30)). Then, λ̄ > 1 and thus K_{p,T_p(β,λ)} = ᾱ^{−2} K_{p,ᾱβ̄,1+ᾱ(λ̄−1)} (see Equations (A22) and (30), where α is here denoted by ᾱ and (β, λ) is obviously replaced by (β̄, λ̄)). Simple algebraic manipulations allow us to see that ᾱ = λ/β and that T_p(αβ, αβ) = (ᾱβ̄, 1 + ᾱ(λ̄ − 1)); hence, K_{p,T_p(β,λ)} = (λ/β)^{−2} K_{p,T_p(αβ,αβ)} = (λα)^{−2} K_{p,αβ,αβ} from Proposition 3. We then obtain again K_{p,T_p(β,λ)} = λ^{−2} K_{p,β,λ}. The case λ > 1 is treated in a similar way, leading to the same conclusion.
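The involutive structure of T_p is easy to verify numerically; the sketch below assumes only the formula for T_p quoted above (the function name is ours):

```python
def T(beta_, lam, p_star):
    # the involution T_p(β, λ) = ((p*β + λ - 1)/(p*λ), 1/λ) of Equation (11)
    return (p_star * beta_ + lam - 1.0) / (p_star * lam), 1.0 / lam
```

Applying T twice returns the original pair; a point with λ < 1 is mapped to one with λ̄ > 1, and the line λ = 1 is fixed, as stated above.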

Appendix C.4. Explicit Expression of the Lower Bound

Let us first consider the case λ < 1, for which ζ_{p,β,λ} = β (see Equation (37)). From Equations (A19), (A20) and (30), we have
K_{p,β,λ} = α^{−2} K_{p,αβ,αβ} = (αβ)^{−2} K_{p,αβ,αβ} β²,
that is, noting that αβ = ζ_{p,β,λ}/(ζ_{p,β,λ} + |1 − λ|),
K_{p,β,λ} = (ζ_{p,β,λ}/(ζ_{p,β,λ} + |1 − λ|))^{−2} K_{p, ζ_{p,β,λ}/(ζ_{p,β,λ}+|1−λ|), ζ_{p,β,λ}/(ζ_{p,β,λ}+|1−λ|)} ζ_{p,β,λ}².
Consider now the case λ > 1, for which ζ_{p,β,λ} = β + (λ − 1)/p* (see Equation (37)). Denoting (β̄, λ̄) = T_p(β, λ) (see Equation (11)), noting that λ̄ = 1/λ < 1, and applying successively Equation (13) (see the previous subsection) and Equations (A19), (A20) and (30) (where α is denoted here ᾱ and (β, λ) is obviously replaced by (β̄, λ̄)), we have
K_{p,β,λ} = λ² K_{p,β̄,λ̄} = (ᾱβ̄)^{−2} K_{p,ᾱβ̄,ᾱβ̄} λ²β̄².
It is straightforward to see that λ²β̄² = (β + (λ − 1)/p*)² = ζ_{p,β,λ}² and that ᾱβ̄ = (p*β + λ − 1)/(p*β + λ − 1 + p*(λ − 1)) = ζ_{p,β,λ}/(ζ_{p,β,λ} + |λ − 1|), so that Equation (A31) still holds.
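These identities lend themselves to a quick numerical check (helper names are ours; ζ follows Equation (37) as quoted above):

```python
def zeta(beta_, lam, p_star):
    # ζ_{p,β,λ} of Equation (37): β for λ < 1, and β + (λ - 1)/p* for λ ≥ 1
    return beta_ if lam < 1.0 else beta_ + (lam - 1.0) / p_star

def l_argument(beta_, lam, p_star):
    # the index l = ζ/(ζ + |1 - λ|) at which K_{p,l,l} is evaluated in (A31)
    z = zeta(beta_, lam, p_star)
    return z / (z + abs(1.0 - lam))
```

At a sample point (p* = 2, β = 1, λ = 3/2), λ²β̄² indeed coincides with ζ², l lies in (0, 1), and l = 1 is recovered in the limiting case λ = 1.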
The case λ = 1 can be viewed as a limiting case, or treated directly using Equations (A25) and (30) to conclude that Equation (A31) still holds. It remains to evaluate l^{−2} K_{p,l,l} = l^{−2} C_{p,l,l}(g_{p,l}) with l ≤ 1. The evaluation of N_l(g_{p,l}) and F_{p,l}(g_{p,l}) was conducted, for instance, in [34], which gives, with our notation, for l < 1,
l 2 K p , l , l = 2 p * p * l 1 l 1 p * p * l ( p * + 1 ) l 1 l 1 l + 1 p B 1 p * , 1 1 l 1 p * 2
and
K p , 1 , 1 = 2 e 1 p * Γ 1 p * p * 1 p 2 .
Noting that 1/(1 − l) − 1/p* = l/(1 − l) + 1/p and taking l = ζ_{p,β,λ}/(ζ_{p,β,λ} + |1 − λ|), we obtain the desired result from Equation (A31).

References

  1. Sen, K.D. Statistical Complexity. Application in Electronic Structure; Springer: New York, NY, USA, 2011. [Google Scholar]
  2. López-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef]
  3. López-Ruiz, R. Shannon information, LMC complexity and Rényi entropies: A straightforward approach. Biophys. Chem. 2005, 115, 215–218. [Google Scholar] [CrossRef] [PubMed]
  4. Chatzisavvas, K.C.; Moustakidis, C.C.; Panos, C.P. Information entropy, information distances, and complexity in atoms. J. Chem. Phys. 2005, 123, 174111. [Google Scholar] [CrossRef] [PubMed]
  5. Sen, K.D.; Panos, C.P.; Chatzisavvas, K.C.; Moustakidis, C.C. Net Fisher information measure versus ionization potential and dipole polarizability in atoms. Phys. Lett. A 2007, 364, 286–290. [Google Scholar] [CrossRef] [Green Version]
  6. Bialynicki-Birula, I.; Rudnicki, Ł. Entropic uncertainty relations in quantum physics. In Statistical Complexity. Application in Electronic Structure; Sen, K.D., Ed.; Springer: Berlin, Germany, 2010. [Google Scholar]
  7. Dehesa, J.S.; López-Rosa, S.; Manzano, D. Entropy and complexity analyses of D-dimensional quantum systems. In Statistical Complexities: Application to Electronic Structure; Sen, K.D., Ed.; Springer: Berlin, Germany, 2010. [Google Scholar]
  8. Huang, Y. Entanglement detection: Complexity and Shannon entropic criteria. IEEE Trans. Inf. Theor. 2013, 59, 6774–6778. [Google Scholar] [CrossRef]
  9. Ebeling, W.; Molgedey, L.; Kurths, J.; Schwarz, U. Entropy, complexity, predictability and data analysis of time series and letter sequences. In Theory of Disaster; Springer: Berlin, Germany, 2000. [Google Scholar]
  10. Angulo, J.C.; Antolín, J. Atomic complexity measures in position and momentum spaces. J. Chem. Phys. 2008, 128, 164109. [Google Scholar] [CrossRef] [PubMed]
  11. Rosso, O.A.; Ospina, R.; Frery, A.C. Classification and verification of handwritten signatures with time causal information theory quantifiers. PLoS ONE 2016, 11, e0166868. [Google Scholar] [CrossRef] [PubMed]
  12. Toranzo, I.V.; Sánchez-Moreno, P.; Rudnicki, Ł.; Dehesa, J.S. One-parameter Fisher-Rényi complexity: Notion and hydrogenic applications. Entropy 2017, 19, 16. [Google Scholar] [CrossRef]
  13. Angulo, J.C.; Romera, E.; Dehesa, J.S. Inverse atomic densities and inequalities among density functionals. J. Math. Phys. 2000, 41, 7906–7917. [Google Scholar] [CrossRef]
  14. Dehesa, J.S.; López-Rosa, S.; Martínez-Finkelshtein, A.; Yáñez, R.J. Information theory of D-dimensional hydrogenic systems: Application to circular and Rydberg states. Int. J. Quantum Chem. 2010, 110, 1529–1548. [Google Scholar] [CrossRef]
  15. López-Rosa, S.; Esquievel, R.O.; Angulo, J.C.; Antolín, J.; Dehesa, J.S.; Flores-Gallegos, N. Fisher information study in position and momentum spaces for elementary chemical reactions. J. Chem. Theor. Comput. 2010, 6, 145–154. [Google Scholar] [CrossRef] [PubMed]
  16. Romera, E.; Sánchez-Moreno, P.; Dehesa, J.S. Uncertainty relation for Fisher information of D-dimensional single-particle systems with central potentials. J. Math. Phys. 2006, 47, 103504. [Google Scholar] [CrossRef]
  17. Sánchez-Moreno, P.; Zozor, S.; Dehesa, J.S. Upper bounds on Shannon and Rényi entropies for central potential. J. Math. Phys. 2011, 52, 022105. [Google Scholar] [CrossRef]
  18. Zozor, S.; Portesi, M.; Sánchez-Moreno, P.; Dehesa, J.S. Position-momentum uncertainty relation based on moments of arbitrary order. Phys. Rev. A 2011, 83, 052107. [Google Scholar] [CrossRef]
  19. Martin, M.T.; Plastino, A.R.; Plastino, A. Tsallis-like information measures and the analysis of complex signals. Phys. A Stat. Mech. Appl. 2000, 275, 262–271. [Google Scholar] [CrossRef]
  20. Portesi, M.; Plastino, A. Generalized entropy as measure of quantum uncertainty. Phys. A Stat. Mech. Appl. 1996, 225, 412–430. [Google Scholar] [CrossRef]
  21. Massen, S.E.; Panos, C.P. Universal property of the information entropy in atoms, nuclei and atomic clusters. Phys. Lett. A 1998, 246, 530–533. [Google Scholar] [CrossRef]
  22. Guerrero, A.; Sanchez-Moreno, P.; Dehesa, J.S. Upper bounds on quantum uncertainty products and complexity measures. Phys. Rev. A 2011, 84, 042105. [Google Scholar] [CrossRef]
  23. Dehesa, J.S.; Sánchez-Moreno, P.; Yáñez, R.J. Crámer-Rao information plane of orthogonal hypergeometric polynomials. J. Comput. Appl. Math. 2006, 186, 523–541. [Google Scholar] [CrossRef]
  24. Antolín, J.; Angulo, J.C. Complexity analysis of ionization processes and isoelectronic series. Int. J. Quantum Chem. 2009, 109, 586–593. [Google Scholar] [CrossRef]
  25. Angulo, J.C.; Antolín, J.; Sen, K.D. Fisher-Shannon plane and statistical complexity of atoms. Phys. Lett. A 2008, 372, 670–674. [Google Scholar] [CrossRef]
  26. Romera, E.; Dehesa, J.S. The Fisher-Shannon information plane, an electron correlation tool. J. Chem. Phys. 2004, 120, 8906–8912. [Google Scholar] [CrossRef] [PubMed]
  27. Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S. The biparametric Fisher-Rényi complexity measure and its application to the multidimensional blackbody radiation. J. Stat. Mech. Theor. Exp. 2017, 2017, 043408. [Google Scholar] [CrossRef]
  28. Sobrino-Coll, N.; Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S. Complexity measures and uncertainty relations of the high-dimensional harmonic and hydrogenic systems. J. Stat. Mech. Theor. Exp. 2017, 2017, 083102. [Google Scholar] [CrossRef]
  29. Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S. Biparametric complexities and the generalized Planck radiation law. arXiv, 2017; arXiv:1704.08452v. [Google Scholar]
  30. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  31. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. A 1922, 222, 309–368. [Google Scholar]
  32. Rudnicki, Ł.; Toranzo, I.V.; Sánchez-Moreno, P.; Dehesa., J.S. Monotone measures of statistical complexity. Phys. Lett. A 2016, 380, 377–380. [Google Scholar]
  33. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; pp. 547–561. [Google Scholar]
  34. Lutwak, E.; Yang, D.; Zhang, G. Cramér-Rao and moment-entropy inequalities for Rényi entropy and generalized Fisher information. IEEE Trans. Inf. Theor. 2005, 51, 473–478. [Google Scholar] [CrossRef]
  35. Bercher, J.F. On a (β,q)-generalized Fisher information and inequalities involving q-Gaussian distributions. J. Math. Phys. 2012, 53, 063303. [Google Scholar] [CrossRef] [Green Version]
  36. Lutwak, E.; Lv, S.; Yang, D.; Zhang, G. Extension of Fisher information and Stam’s inequality. IEEE Trans. Inf. Theor. 2012, 58, 1319–1327. [Google Scholar] [CrossRef]
  37. Stam, A.J. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Inf. Control 1959, 2, 101–112. [Google Scholar] [CrossRef]
  38. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  39. Kay, S.M. Fundamentals for Statistical Signal Processing: Estimation Theory; Prentice Hall: Upper Saddle River, NJ, USA, 1993. [Google Scholar]
  40. Lehmann, E.L.; Casella, G. Theory of Point Estimation, 2nd ed.; Springer: New York, NY, USA, 1998. [Google Scholar]
  41. Bourret, R. A note on an information theoretic form of the uncertainty principle. Inf. Control 1958, 1, 398–401. [Google Scholar] [CrossRef]
  42. Leipnik, R. Entropy and the uncertainty principle. Inf. Control 1959, 2, 64–79. [Google Scholar] [CrossRef]
  43. Vignat, C.; Bercher, J.F. Analysis of signals in the Fisher-Shannon information plane. Phys. Lett. A 2003, 312, 27–33. [Google Scholar] [CrossRef]
  44. Sañudo, J.; López-Ruiz, R. Statistical complexity and Fisher-Shannon information in the H-atom. Phys. Lett. A 2008, 372, 5283–5286. [Google Scholar]
  45. Dehesa, J.S.; López-Rosa, S.; Manzano, D. Configuration complexities of hydrogenic atoms. Eur. Phys. J. D 2009, 55, 539–548. [Google Scholar] [CrossRef]
  46. López-Ruiz, R.; Sañudo, J.; Romera, E.; Calbet, X. Statistical complexity and Fisher-Shannon information: Application. In Statistical Complexity. Application in Electronic Structure; Springer: New York, NY, USA, 2012. [Google Scholar]
  47. Manzano, D. Statistical measures of complexity for quantum systems with continuous variables. Phys. A Stat. Mech. Appl. 2012, 391, 6238–6244. [Google Scholar] [CrossRef]
  48. Gell-Mann, M.; Tsallis, C. (Eds.) Nonextensive Entropy: Interdisciplinary Applications; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  49. Tsallis, C. Introduction to Nonextensive Statistical Mechanics—Approaching a Complex World; Springer: New York, NY, USA, 2009. [Google Scholar]
  50. Puertas-Centeno, D.; Rudnicki, L.; Dehesa, J.S. LMC-Rényi complexity monotones, heavy tailed distributions and stretched-escort deformation. 2017; in preparation. [Google Scholar]
  51. Agueh, M. Sharp Gagliardo-Nirenberg inequalities and mass transport theory. J. Dyn. Differ. Equ. 2006, 18, 1069–1093. [Google Scholar] [CrossRef]
  52. Agueh, M. Sharp Gagliardo-Nirenberg inequalities via p-Laplacian type equations. Nonlinear Differ. Equ. Appl. 2008, 15, 457–472. [Google Scholar]
  53. Costa, J.A.; Hero, A.O., III; Vignat, C. On solutions to multivariate maximum α-entropy problems. In Proceedings of the 4th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Lisbon, Portugal, 7–9 July 2003; pp. 211–226. [Google Scholar]
  54. Johnson, O.; Vignat, C. Some results concerning maximum Rényi entropy distributions. Ann. Inst. Henri Poincare B Probab. Stat. 2007, 43, 339–351. [Google Scholar] [CrossRef]
  55. Nanda, A.K.; Maiti, S.S. Rényi information measure for a used item. Inf. Sci. 2007, 177, 4161–4175. [Google Scholar] [CrossRef]
  56. Panter, P.F.; Dite, W. Quantization distortion in pulse-count modulation with nonuniform spacing of levels. Proc. IRE 1951, 39, 44–48. [Google Scholar] [CrossRef]
  57. Lloyd, S.P. Least squares quantization in PCM. IEEE Trans. Inf. Theor. 1982, 28, 129–137. [Google Scholar] [CrossRef]
  58. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer: Boston, MA, USA, 1992. [Google Scholar]
  59. Campbell, L.L. A coding theorem and Rényi’s entropy. Inf. Control 1965, 8, 423–429. [Google Scholar] [CrossRef]
  60. Humblet, P.A. Generalization of the Huffman coding to minimize the probability of buffer overflow. IEEE Trans. Inf. Theor. 1981, 27, 230–232. [Google Scholar] [CrossRef]
  61. Baer, M.B. Source coding for quasiarithmetic penalties. IEEE Trans. Inf. Theor. 2006, 52, 4380–4393. [Google Scholar] [CrossRef]
  62. Bercher, J.F. Source coding with escort distributions and Rényi entropy bounds. Phys. Lett. A 2009, 373, 3235–3238. [Google Scholar] [CrossRef] [Green Version]
  63. Bobkov, S.G.; Chistyakov, G.P. Entropy Power Inequality for the Rényi Entropy. IEEE Trans. Inf. Theor. 2015, 61, 708–714. [Google Scholar] [CrossRef]
  64. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall: Boca Raton, FL, USA, 2006. [Google Scholar]
  65. Harte, D. Multifractals: Theory and Applications, 1st ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001. [Google Scholar]
  66. Jizba, P.; Arimitsu, T. The world according to Rényi: Thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59. [Google Scholar] [CrossRef]
  67. Beck, C.; Schögl, F. Thermodynamics of Chaotic Systems: An Introduction; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  68. Bialynicki-Birula, I. Formulation of the uncertainty relations in terms of the Rényi entropies. Phys. Rev. A 2006, 74, 052101. [Google Scholar] [CrossRef]
  69. Zozor, S.; Vignat, C. On classes of non-Gaussian asymptotic minimizers in entropic uncertainty principles. Phys. A Stat. Mech. Appl. 2007, 375, 499–517. [Google Scholar] [CrossRef]
  70. Zozor, S.; Vignat, C. Forme entropique du principe d’incertitude et cas d’égalité asymptotique. In Proceedings of the Colloque GRETSI, Troyes, France, 11–14 Septembre 2007. (In French). [Google Scholar]
  71. Zozor, S.; Portesi, M.; Vignat, C. Some extensions to the uncertainty principle. Phys. A Stat. Mech. Appl. 2008, 387, 4800–4808. [Google Scholar] [CrossRef]
  72. Zozor, S.; Bosyk, G.M.; Portesi, M. General entropy-like uncertainty relations in finite dimensions. J. Phys. A 2014, 47, 495302. [Google Scholar] [CrossRef]
  73. Jizba, P.; Dunningham, J.A.; Joo, J. Role of information theoretic uncertainty relations in quantum theory. Ann. Phys. 2015, 355, 87–115. [Google Scholar] [CrossRef]
  74. Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. One-parameter class of uncertainty relations based on entropy power. Phys. Rev. E 2016, 93, 060104. [Google Scholar] [CrossRef] [PubMed]
  75. Hammad, P. Mesure d’ordre α de l’information au sens de Fisher. Rev. Stat. Appl. 1978, 26, 73–84. (In French) [Google Scholar]
  76. Pennini, F.; Plastino, A.R.; Plastino, A. Rényi entropies and Fisher information as measures of nonextensivity in a Tsallis setting. Phys. A Stat. Mech. Appl. 1998, 258, 446–457. [Google Scholar] [CrossRef]
  77. Chimento, L.P.; Pennini, F.; Plastino, A. Naudts-like duality and the extreme Fisher information principle. Phys. Rev. E 2000, 62, 7462–7465. [Google Scholar] [CrossRef]
  78. Casas, M.; Chimento, L.; Pennini, F.; Plastino, A.; Plastino, A.R. Fisher information in a Tsallis non-extensive environment. Chaos Solitons Fractals 2002, 13, 451–459. [Google Scholar] [CrossRef]
  79. Pennini, F.; Plastino, A.; Ferri, G.L. Semiclassical information from deformed and escort information measures. Phys. A Stat. Mech. Appl. 2007, 383, 782–796. [Google Scholar] [CrossRef]
  80. Bercher, J.F. On generalized Cramér-Rao inequalities, generalized Fisher information and characterizations of generalized q-Gaussian distributions. J. Phys. A 2012, 45, 255303. [Google Scholar] [CrossRef] [Green Version]
  81. Bercher, J.F. Some properties of generalized Fisher information in the context of nonextensive thermostatistics. Phys. A Stat. Mech. Appl. 2013, 392, 3140–3154. [Google Scholar] [CrossRef] [Green Version]
  82. Bercher, J.F. On escort distributions, q-gaussians and Fisher information. In Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Chamonix, France, 4–9 July 2010; pp. 208–215. [Google Scholar]
  83. Devroye, L. Non-Uniform Random Variate Generation; Springer: New York, NY, USA, 1986. [Google Scholar]
  84. Korbel, J. Rescaling the nonadditivity parameter in Tsallis thermostatistics. Phys. Lett. A 2017, 381, 2588–2592. [Google Scholar] [CrossRef]
  85. Olver, F.W.J.; Lozier, D.W.; Boisvert, R.F.; Clark, C.W. NIST Handbook of Mathematical Functions; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  86. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Dover: New York, NY, USA, 1970. [Google Scholar]
  87. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products, 7th ed.; Academic Press: San Diego, CA, USA, 2007. [Google Scholar]
  88. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series, Volume 3: More Special Functions; Gordon and Breach: New York, NY, USA, 1990. [Google Scholar]
  89. Nieto, M.M. Hydrogen atom and relativistic pi-mesic atom in N-space dimensions. Am. J. Phys. 1979, 47, 1067–1072. [Google Scholar] [CrossRef]
  90. Yáñez, R.J.; van Assche, W.; Dehesa, J.S. Position and momentum information entropies of the D-dimensional harmonic oscillator and hydrogen atoms. Phys. Rev. A 1994, 50, 3065–3079. [Google Scholar] [CrossRef] [PubMed]
  91. Avery, J.S. Hyperspherical Harmonics and Generalized Sturmians; Kluwer Academic: Dordrecht, The Netherlands, 2002. [Google Scholar]
  92. Yáñez, R.J.; van Assche, W.; González-Férez, R.; Sánchez-Dehesa, J. Entropic integrals of hyperspherical harmonics and spatial entropy of D-dimensional central potential. J. Math. Phys. 1999, 40, 5675–5686. [Google Scholar] [CrossRef]
  93. Louck, J.D.; Shaffer, W.H. Generalized orbital angular momentum of the n-fold degenerate quantum-mechanical oscillator. Part I. The twofold degenerate oscillator. J. Mol. Spectrosc. 1960, 4, 285–297. [Google Scholar]
  94. Louck, J.D.; Shaffer, W.H. Generalized orbital angular momentum of the n-fold degenerate quantum-mechanical oscillator. Part II. The n-fold degenerate oscillator. J. Mol. Spectrosc. 1960, 4, 298–333. [Google Scholar]
  95. Nirenberg, L. On elliptical partial differential equations. Annali della Scuola Normale Superiore di Pisa 1959, 13, 115–169. [Google Scholar]
  96. Gelfand, I.M.; Fomin, S.V. Calculus of Variations; Prentice Hall: Englewood Cliff, NJ, USA, 1963. [Google Scholar]
  97. Van Brunt, B. The Calculus of Variations; Springer: New York, NY, USA, 2004. [Google Scholar]
Figure 1. (a) the domain D p for a given p is represented by the gray area (here p > 2 ). The thick line belongs to D p . The dashed line represents L p , corresponding to the Lutwak situation of Section 2.3.1, where the relation holds and the minimizers are explicitly known (stretched deformed Gaussian distributions), whereas L ¯ p corresponds to Section 2.3.2 ( B p and B ¯ p obtained by the Gagliardo–Nirenberg inequality are their restrictions to D p ); (b) same situation for p = 2 , with the domains A 2 and A ¯ 2 (dashed lines) that correspond to the situations of Section 2.3.3 and Section 2.3.4, respectively, ( L 2 and L ¯ 2 are not represented for the clarity of the figure).
Figure 2. Given a p, the domain in gray represents D ˜ p , where we know that the ( p , β , λ ) -Fisher–Rényi complexity is optimally lower bounded and where the minimizers can be deduced from proposition 2. (a) the domain in dark gray represents D p , which is obviously included in D ˜ p ; the dot is a particular point ( β , λ ) D p and the dotted line represents its transform by A ; (b) the domain in dark gray represents A ( L p ) D ˜ p , which obviously contains L p represented by the dashed line; (c) same as (b) with L ¯ p and A ( L ¯ p ) D ˜ p . This illustrates that D ˜ p = A ( L p ) A ( L ¯ p ) .
Figure 3. Fisher information F p , β (left graph), Rényi entropy power N λ (center graph), and ( p , β , λ ) -Fisher–Rényi complexity C p , β , λ (right graph) of the radial hydrogenic distribution in position space with dimensions d = 3 ( ) , 12 ( * ) versus the quantum numbers n and l. The complexity parameters are p = 2 , β = 1 , λ = 7 .
Figure 4. ( p , β , λ ) -Fisher–Rényi complexity (normalized to its lower bound), C p , β , λ , with ( p , λ , β ) = ( 2 , 0.8 , 7 ) , ( 2 , 1 , 1 ) , ( 5 , 2 , 7 ) for the radial hydrogenic distribution in the position space with dimensions d = 3 ( ) and 12 ( * ) .
Figure 5. Fisher information F p , β (left graph), Rényi entropy power N λ (center graph), and ( p , β , λ ) -Fisher–Rényi complexity C p , β , λ (right graph) of the radial hydrogenic distribution in momentum space with dimensions d = 3 ( ) , 12 ( * ) versus the quantum numbers n and l. The complexity parameters are p = 2 , β = 1 , λ = 7 .
Figure 6. ( p , β , λ ) -Fisher–Rényi complexity (normalized to its lower bound), C p , β , λ , with ( p , λ , β ) = ( 2 , 0.8 , 7 ) , ( 2 , 1 , 1 ) , ( 5 , 2 , 7 ) for the radial hydrogenic distribution in the momentum space with dimensions d = 3 ( ) and 12 ( * ) .
Figure 7. Fisher information F p , β (left graph), Rényi entropy power N λ (center graph), and ( p , β , λ ) -Fisher–Rényi complexity C p , β , λ (right graph) versus n and l for the radial harmonic system in position space with dimensions d = 3 ( ) , 12 ( * ) . The informational parameters are p = 2 , β = 1 , λ = 7 .
Figure 8. ( p , β , λ ) -Fisher–Rényi complexity (normalized to its lower bound) C p , β , λ with ( p , λ , β ) = ( 2 , 0.8 , 7 ) , ( 2 , 1 , 1 ) , ( 5 , 2 , 7 ) for the oscillator system in the position space with dimensions d = 3 ( ) , 12 ( * ) .
