Proceeding Paper

Linear (h,φ)-Entropies for Quasi-Power Sequences with a Focus on the Logarithm of Taneja Entropy †

by Valérie Girardin 1,* and Philippe Regnault 2

1 LMNO UMR CNRS 6139, Normandie Université, 14000 Caen, France
2 LMR UMR CNRS 9008, Université de Reims Champagne-Ardenne, BP 1039, CEDEX 2, 51687 Reims, France
* Author to whom correspondence should be addressed.
Presented at the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Paris, France, 18–22 July 2022.
Phys. Sci. Forum 2022, 5(1), 9; https://doi.org/10.3390/psf2022005009
Published: 3 November 2022

Abstract

Conditions are highlighted for generalized entropies to allow for non-trivial time-averaged entropy rates for a large class of random sequences, including Markov chains and continued fractions. The axiomatic-free conditions arise from the behavior of the marginal entropy of the sequence. Apart from the well-known Shannon and Rényi cases, only logarithmic versions of Sharma–Taneja–Mittal entropies may fulfill these conditions. Their main properties are detailed.

1. Introduction

Quantifying the information—or uncertainty—of a sequence of random variables has been considered since the foundation of information theory in [1], where the entropy rate of a random sequence $\mathbf{X} = (X_n)_{n \in \mathbb{N}^*}$ is defined as the limit $S(\mathbf{X}) = \lim_{n \to +\infty} S(X_1, \dots, X_n)/n$, with $S(X_1, \dots, X_n)$ denoting the entropy of the first $n$ coordinates of $\mathbf{X}$, for $n \in \mathbb{N}^*$ a positive integer. Shannon's original concept of entropy naturally generalizes to various entropies, introduced in the literature for a better fit to certain complex systems through proper axiomatic requirements. Most of the classical examples given in Table 1 (first column) belong to the class of $(h,\varphi)$-entropies
$$S_{h,\varphi}(X) = S_{h,\varphi}(m_X) = h\Big(\sum_{i \in E} \varphi\big(m_X(i)\big)\Big), \tag{1}$$
where $h$ and $\varphi$ are real functions satisfying compatibility assumptions and $m_X$ is the distribution of a discrete random variable $X$ on $E$; see Section 2 below and also [2,3]. When $h$ is the identity, the corresponding $(h,\varphi)$-entropy is a $\varphi$-entropy, denoted by $S_\varphi$.
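For concreteness, here is a minimal Python sketch of (1)—our own illustration, not from the paper—instantiated for the Shannon and Rényi cases of Table 1; the example distribution is arbitrary and probabilities are assumed strictly positive.

```python
import numpy as np

def h_phi_entropy(m, h, phi):
    """Generic (h, phi)-entropy (1): h applied to the sum of phi(m(i))."""
    m = np.asarray(m, dtype=float)   # assumed strictly positive probabilities
    return h(np.sum(phi(m)))

# Shannon: phi(x) = -x log x, h = identity.
def shannon(m):
    return h_phi_entropy(m, lambda z: z, lambda x: -x * np.log(x))

# Renyi of order q != 1: phi(x) = x^q, h(z) = log(z) / (1 - q).
def renyi(m, q):
    return h_phi_entropy(m, lambda z: np.log(z) / (1.0 - q), lambda x: x ** q)

m = [0.5, 0.25, 0.25]
print(shannon(m))        # ~1.0397 nats
print(renyi(m, 2.0))     # ~0.9808 nats
print(renyi(m, 1.001))   # approaches the Shannon value as q -> 1
```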
Ref. [1] established the existence and a closed-form expression for the entropy rate $S(\mathbf{X})$ of homogeneous ergodic Markov chains. This relies on the chain rule—strong additivity property—of the Shannon entropy, $S(X_n, X_{n+1}) = S(X_n) + \sum_{i \in E} m_{X_n}(i)\, S(X_{n+1} \mid X_n = i)$, where $S(X_{n+1} \mid X_n = i)$ denotes the Shannon entropy of the distribution of $X_{n+1}$ conditional on $(X_n = i)$. This chain rule implies that, for ergodic Markov chains, the sequence of marginal entropies $S(X_{1:n})$ grows (asymptotically) linearly with $n$.
This type of linearity appears as an interesting property for a generalized entropy to measure—in a meaningful way—the information carried by a random process: all the variables contribute equally to the global information of the sequence. Below, an $(h,\varphi)$-entropy is said to be linear for a random sequence $\mathbf{X} = (X_n)_{n \in \mathbb{N}^*}$ if the marginal entropy sequence $\big(S_{h,\varphi}(X_{1:n})\big)_n$ grows linearly with $n$, where $X_{1:n} = (X_1, \dots, X_n)$. Furthermore, Ciuperca et al. [4] show that linearity is required for obtaining a non-trivial time-averaged $(h,\varphi)$-entropy rate
$$S_{h,\varphi}(\mathbf{X}) = \lim_{n \to +\infty} \frac{1}{n}\, S_{h,\varphi}(X_{1:n}). \tag{2}$$
Self-evidently, the Shannon entropy is linear for independent and identically distributed (i.i.d.) sequences and most ergodic Markov chains. A natural attempt to prove linearity of other entropies consists in investigating some additive functional identity replacing the chain rule. Regnault et al. [5] establish the linearity of Rényi entropies for homogeneous ergodic Markov chains, based on the identity $R_q(X,Y) = R_q(X) + \sum_{i \in E} m_X^{*q}(i)\, R_q(Y \mid X = i)$, where $m_X^{*q} = \big(m_X(i)^q / \sum_{j \in E} m_X(j)^q\big)_{i \in E}$ denotes the $q$-escort distribution of $X$; explicit weighted expressions for the Rényi entropy rates follow. Most likely, such an approach cannot be systematically generalized to other $(h,\varphi)$-entropies, for lack of a proper additive functional identity.
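The $q$-escort distribution entering this identity is straightforward to compute; a small sketch, on an arbitrary example distribution:

```python
import numpy as np

def escort(m, q):
    """q-escort distribution m^{*q}(i) = m(i)^q / sum_j m(j)^q."""
    w = np.asarray(m, dtype=float) ** q
    return w / w.sum()

m = [0.5, 0.3, 0.2]          # an arbitrary example distribution
print(escort(m, 2.0))        # q > 1 reweights toward likely states
print(escort(m, 0.5))        # q < 1 reweights toward rare states
print(escort(m, 1.0))        # q = 1 gives back m itself
```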
Ciuperca et al. [4] successfully explore an axiomatic-free alternative approach, originating in analytic combinatorics. It consists of deriving the asymptotic behavior of $S_{h,\varphi}(X_{1:n})$ from the behavior of the associated Dirichlet series
$$\Lambda(X_{1:n}; s) = \Lambda(m_{1:n}; s) = \sum_{x_{1:n} \in E^n} m_{1:n}(x_{1:n})^s, \tag{3}$$
where $s > 0$ and $m_{1:n}$ is the distribution of $X_{1:n}$. A random sequence $\mathbf{X}$ is a quasi-power (QP) sequence if $\sup\{m_{1:n}(i_{1:n}) : i_{1:n} \in E^n\}$ converges to zero as $n$ tends to infinity, and if a real number $\sigma_0 < 1$ and strictly positive analytic functions $c$ and $\lambda$ exist, with $\lambda$ strictly decreasing and $\lambda(1) = c(1) = 1$, such that for all real numbers $s > \sigma_0$, some $\rho(s) \in (0,1)$ exists, such that
$$\Lambda(m_{1:n}; s) = c(s)\, \lambda(s)^{n-1} \big(1 + R_n(s)\big) \tag{4}$$
for all $n$, where $R_n$ is an analytic function such that $|R_n(s)| = O\big(\rho(s)^{n-1} \lambda(s)^{n-1}\big)$; see [6]. For any $(h,\varphi)$-entropy that is linear for QP sequences, the entropy rate (2) is proven to reduce to either the Shannon or Rényi entropy rate, up to a constant factor. The large class of QP sequences includes i.i.d. sequences, most ergodic Markov chains with atomic state spaces, and more complex ergodic dynamical systems, such as random continued fractions.
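For i.i.d. sequences the QP property can be checked by hand: the Dirichlet series (3) factorizes, so (4) holds exactly with $\lambda(s) = c(s) = \Lambda(m; s)$ and $R_n \equiv 0$. A short numeric sketch (the example distribution is an assumption):

```python
from itertools import product
from math import prod

# For an i.i.d. sequence with marginal m, Lambda(m_{1:n}; s) = (sum_i m(i)^s)^n,
# so the QP property (4) holds exactly with lambda(s) = c(s) = sum_i m(i)^s.
m = [0.5, 0.3, 0.2]
lam = lambda s: sum(p ** s for p in m)

n = 4
for s in (0.5, 1.0, 2.0):
    direct = sum(prod(x) ** s for x in product(m, repeat=n))
    print(s, direct, lam(s) * lam(s) ** (n - 1))   # the two columns agree

print(lam(1.0))                  # = 1, as (4) requires
print(lam(0.5), lam(2.0))        # lambda strictly decreases in s
```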
The present paper identifies the only three types of $(h,\varphi)$-entropies that are linear—up to obvious equivalences—for QP sequences: the Shannon and Rényi entropies, plus a third one, the logarithm of the Taneja entropy, which we call the log-Taneja entropy. This result, valid for all QP sequences, extends [7], which is dedicated to Markov chains. As stated in Theorem 2, it relies on the asymptotic behavior of $(h,\varphi)$-entropies given in Theorem 1. In Section 4, we highlight a classification of $(h,\varphi)$-entropies into five exclusive classes, depending on the growth rate of the marginal entropies: linear, over-linear, sub-linear, contracting, and constant.
Further, a pertinent choice of the function $h$ changes the class of the $\varphi$-entropy. A well-known example is provided by the linear Rényi entropy, which appears as the logarithm of the Tsallis entropy (up to constants), while the latter is either constant or over-linear, depending on the parameter. Another interesting case considered below is the Sharma–Taneja–Mittal (STM) entropy
$$T_{q,p}(X) = a_{q,p} \sum_{i \in E} \big(m(i)^q - m(i)^p\big), \tag{5}$$
where $a_{q,p} = \big(2^{1-q} - 2^{1-p}\big)^{-1}$ and $q \neq p$ are two positive parameters; see [8]. Frank and Plastino [9] show that STM entropy is the only entropy that gives rise to a thermostatistics based on escort mean values, while [10] and the references therein highlight connections to statistical mechanics. The $\varphi$-entropy
$$T_q(X) = a_q \sum_{i \in E} m(i)^q \log m(i), \tag{6}$$
where $a_q = -2^{q-1}$, called Taneja entropy in the literature, appears as the limit of $T_{q,p}(X)$ as $p$ tends to $q$, with $q > 0$ fixed; Shannon entropy is obtained for $q = 1$. The logarithm transforms both of these contracting or over-linear entropies into linear ones. In other words, their associated entropy rates (2) will be non-trivial for QP sequences. Since, to our knowledge, the log-STM and log-Taneja entropies have never been considered in the literature, this paper ends by studying their main properties.
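The limit defining the Taneja entropy is easy to observe numerically. The sketch below is our own illustration: it assumes base-2 logarithms, for which the normalization $a_q = -2^{q-1}$ matches the limit of (5) exactly, and an arbitrary example distribution.

```python
import numpy as np

m = np.array([0.5, 0.3, 0.2])    # an arbitrary example distribution

def stm(m, q, p):
    """STM entropy (5) with a_{q,p} = 1 / (2^(1-q) - 2^(1-p))."""
    a_qp = 1.0 / (2.0 ** (1 - q) - 2.0 ** (1 - p))
    return a_qp * np.sum(m ** q - m ** p)

def taneja(m, q):
    """Taneja entropy (6) with a_q = -2^(q-1), base-2 logarithm."""
    return -(2.0 ** (q - 1)) * np.sum(m ** q * np.log2(m))

q = 0.7
for p in (0.9, 0.71, 0.7001):
    print(stm(m, q, p))          # approaches taneja(m, q) as p -> q
print(taneja(m, q))
print(taneja(m, 1.0))            # q = 1 recovers Shannon entropy, in bits
```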

2. Quasi-Power Log Entropies

Generally speaking, the $(h,\varphi)$-entropies defined by (1), with $\varphi \colon [0,1] \to \mathbb{R}_+$ and $h \colon \mathbb{R}_+ \to \mathbb{R}$, are such that either $\varphi$ is concave and $h$ increasing, or $\varphi$ is convex and $h$ decreasing. Moreover, $h$ and $\varphi$ are such that $S_{h,\varphi}$ takes only non-negative values. For most functionals of interest in the literature, $h$ and $\varphi$ are locally equivalent to products of power and logarithmic functions, which led in [4] to the definition of the class of quasi-power log entropies.
A $\varphi$-entropy is quasi-power log (QPL) if $a \in \mathbb{R}^*$, $s \in \mathbb{R}$ and $\delta \in \{0,1\}$ exist such that $\varphi$ is QPL at 0, in the sense that
$$\varphi(x) \underset{0}{\sim} a\, x^s (\log x)^\delta. \tag{7}$$
Further, an $(h,\varphi)$-entropy is QPL if (7) is satisfied and $b \in \mathbb{R}_+^*$, $t \in \mathbb{R}$ and $\varepsilon \in \{0,1\}$ exist, such that
$$h(z) \underset{\Phi}{\sim} b\, z^t (\log z)^\varepsilon, \tag{8}$$
where
$$\Phi = \begin{cases} 0 & \text{if } s > 1, \\ a & \text{if } s = 1,\ \delta = 0, \\ \mathcal{S}(a)\,\infty & \text{if } s < 1,\ \delta = 0, \\ -\mathcal{S}(a)\,\infty & \text{if } s \le 1,\ \delta = 1, \end{cases}$$
with $\mathcal{S}(a)$ denoting the sign of $a$; see [11]. Of course, (8) makes sense only if $z^t (\log z)^\varepsilon$ is well defined in a neighborhood of $\Phi$, inducing the following restrictions on the parameters:
$$(\delta = 0) \Rightarrow (a > 0) \ \text{ or } \ (t \in \mathbb{Z},\ \varepsilon = 0) \quad \text{and} \quad (\delta = 1) \Rightarrow (a < 0) \ \text{ or } \ (t \in \mathbb{Z},\ \varepsilon = 0).$$
Note that for s = 0 and δ = 0 , the φ -entropy collapses to a constant. Column 1 of Table 1 shows the most classical entropies in the literature that are QPL.
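To illustrate (7), a small sketch with a hypothetical kernel $\varphi(x) = -x \log x + 5x^2$ (our own example, not from the paper): the quadratic term vanishes faster at 0, so this $\varphi$ is QPL with the same parameters $[a, s, \delta] = [-1, 1, 1]$ as the Shannon kernel.

```python
import numpy as np

# Hypothetical kernel: the 5 x^2 term is negligible near 0, so phi is QPL
# at 0 with [a, s, delta] = [-1, 1, 1], the Shannon parameters.
phi = lambda x: -x * np.log(x) + 5 * x ** 2
a, s, delta = -1.0, 1.0, 1
for x in (1e-1, 1e-3, 1e-6):
    print(x, phi(x) / (a * x ** s * np.log(x) ** delta))   # ratio -> 1 as x -> 0
```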

3. Asymptotic Behavior of the Marginal Entropies of QP Sequences

The asymptotic behaviors of the sequences of marginal QPL entropies of any random sequence $\mathbf{X} = (X_n)_{n \in \mathbb{N}^*}$ are controlled by the Dirichlet series (3) of the sequence, as made explicit in the next theorem. This is a special case of [11] (Theorem 1), where the asymptotic behaviors of divergence functionals are addressed; it deserves to be detailed for the entropy of QP sequences. Note that Markov chains are considered in [7] in order to obtain weighted rate formulas involving Perron–Frobenius eigenvalues and eigenvectors.
Theorem 1. 
Let $\mathbf{X} = (X_n)_{n \in \mathbb{N}^*}$ be a QP sequence. For any QPL $\varphi$-entropy,
$$S_\varphi(X_{1:n}) \underset{n \to \infty}{\sim} r_n\, H_\varphi(\mathbf{X}), \tag{9}$$
with
$$H_\varphi(\mathbf{X}) = a\, c(s) \left(\frac{\lambda'(s)}{\lambda(s)}\right)^{\!\delta} \quad \text{and} \quad r_n = (n-1)^\delta\, \lambda(s)^{n-1}. \tag{10}$$
Further, for any QPL ( h , φ ) -entropy,
$$S_{h,\varphi}(X_{1:n}) \underset{n \to \infty}{\sim} r_n\, H_{h,\varphi}(\mathbf{X}), \tag{11}$$
where
$$H_{h,\varphi}(\mathbf{X}) = \begin{cases} b\, a^t c(s)^t \big(\log \lambda(s)\big)^\varepsilon & \text{if } s \neq 1,\ \delta = 0, \\[4pt] b\, c(s)^t \Big(a\, \dfrac{\lambda'(s)}{\lambda(s)}\Big)^{\!t} \big(\log \lambda(s)\big)^\varepsilon & \text{if } s \neq 1,\ \delta = 1, \\[4pt] b\, a^t (\log a)^\varepsilon & \text{if } s = 1,\ \delta = 0, \\[4pt] b\, \big(a\, \lambda'(1)\big)^t & \text{if } s = 1,\ \delta = 1, \end{cases}$$
and
$$r_n = (n-1)^{\delta t + \varepsilon}\, \lambda(s)^{t(n-1)} \ \text{ if } s \neq 1, \quad \text{and} \quad r_n = (n-1)^{\delta t} \big(\log(n-1)\big)^{\delta \varepsilon} \ \text{ if } s = 1. \tag{12}$$
Proof. 
For any QP sequence, the probability $m_{1:n}(i_{1:n})$ converges to 0 as $n$ tends to infinity, for any sequence $(i_n) \in E^{\mathbb{N}^*}$. Since $\varphi$ is QPL at 0, and thanks to (4),
$$S_\varphi(X_{1:n}) = \sum_{i_{1:n} \in E^n} \varphi\big(m_{1:n}(i_{1:n})\big) \sim \begin{cases} a\, \Lambda_n(s) \sim \lambda(s)^{n-1}\, a\, c(s) & \text{if } \delta = 0, \\[4pt] a\, \Lambda_n'(s) \sim (n-1)\, \lambda(s)^{n-1}\, a\, \dfrac{\lambda'(s)}{\lambda(s)}\, c(s) & \text{if } \delta = 1, \end{cases}$$
and (9) follows.
Further, if $h$ satisfies (8), then for $s \neq 1$,
$$S_{h,\varphi}(X_{1:n}) \sim \begin{cases} \lambda(s)^{(n-1)t}\ b\, a^t c(s)^t & \text{if } \delta = 0,\ \varepsilon = 0, \\[4pt] (n-1)\, \lambda(s)^{(n-1)t}\ b\, a^t c(s)^t \log \lambda(s) & \text{if } \delta = 0,\ \varepsilon = 1, \\[4pt] (n-1)^t\, \lambda(s)^{(n-1)t}\ b\, c(s)^t \Big(a\, \dfrac{\lambda'(s)}{\lambda(s)}\Big)^{\!t} & \text{if } \delta = 1,\ \varepsilon = 0, \\[4pt] (n-1)^{t+1}\, \lambda(s)^{(n-1)t}\ b\, c(s)^t \Big(a\, \dfrac{\lambda'(s)}{\lambda(s)}\Big)^{\!t} \log \lambda(s) & \text{if } \delta = 1,\ \varepsilon = 1, \end{cases}$$
and for s = 1 ,
$$S_{h,\varphi}(X_{1:n}) \sim \begin{cases} b\, a^t & \text{if } \delta = 0,\ \varepsilon = 0, \\[4pt] b\, a^t \log a & \text{if } \delta = 0,\ \varepsilon = 1, \\[4pt] (n-1)^t\ b\, \big(a\, \lambda'(1)\big)^t & \text{if } \delta = 1,\ \varepsilon = 0, \\[4pt] (n-1)^t \log(n-1)\ b\, \big(a\, \lambda'(1)\big)^t & \text{if } \delta = 1,\ \varepsilon = 1, \end{cases}$$
and (11) follows. □
In [7,11], the first quantities in (10) and (12) are called the rescaled $\varphi$-entropy and $(h,\varphi)$-entropy rates of $\mathbf{X}$. They constitute proper—non-degenerate—information measures associated with $\varphi$- or $(h,\varphi)$-entropies. The rescaled entropy rate $H_{h,\varphi}$ depends on $\mathbf{X}$ only through the parameters $[\lambda, c]$ of the quasi-power property (4); see [7] for an interpretation of these parameters for Markov chains. Note that the averaging sequence is defined up to asymptotic equivalence $r_n \sim \widetilde{r}_n$.
Table 1 presents, for the classical entropies, the values of $\Phi$, the averaging sequence $r = (r_n)_n$, and the rescaled entropy rates $H_{h,\varphi}(\mathbf{X})$, depending on the values of the parameters. Table 2 details the dependencies for $\varphi$-entropies.
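Theorem 1 can be checked numerically for an i.i.d. QP sequence, for which $\lambda(s) = c(s) = \Lambda(m; s)$. The sketch below—our own check, with an assumed example distribution—does so for the Taneja kernel $\varphi(x) = -x^s \log x$, computing the marginal entropy exhaustively and comparing it to $r_n H_\varphi(\mathbf{X})$ from (10).

```python
from itertools import product
from math import prod, log

# Taneja kernel phi(x) = -x^s log x, i.e. [a, s, delta] = [-1, s, 1].
# For an i.i.d. sequence, lambda(s) = c(s) = sum_i m(i)^s and
# lambda'(s) = sum_i m(i)^s log m(i).
m = [0.5, 0.3, 0.2]
s = 2.0
lam = sum(p ** s for p in m)
dlam = sum(p ** s * log(p) for p in m)
H_phi = -1.0 * lam * (dlam / lam)     # a c(s) (lambda'(s)/lambda(s))^delta

for n in (4, 8, 12):
    S_phi = -sum(prod(x) ** s * log(prod(x)) for x in product(m, repeat=n))
    r_n = (n - 1) * lam ** (n - 1)    # (n-1)^delta lambda(s)^(n-1)
    print(n, S_phi / (r_n * H_phi))   # -> 1; here the ratio is exactly n/(n-1)
```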

4. Classification of QPL ( h , φ ) -Entropies—Dependence on h

The classification of all QPL $(h,\varphi)$-entropies into five exclusive classes derives from Theorem 1, for QP sequences, according to the asymptotic behavior of the marginal entropy or, equivalently, to the type of rescaling sequence $r$. The important point is that this classification depends only on the functions $\varphi$ and $h$—through the parameters $[s,\delta]$ and $[t,\varepsilon]$ involved in the QPL properties (7) and (8)—and not on the specific dynamics of the QP sequence $\mathbf{X}$. In particular, the classification is valid for a wide class of atomic Markov chains, including finite ones, as shown in [7].
The classification of $\varphi$-entropies into four classes easily derives from either (9) or Table 2. Indeed, four types of behavior of the sequence of marginal entropies are possible, according to the values of $s$ and $\delta$. First, for $s = 1$ and $\delta = 1$ (Shannon entropy or equivalent), the marginal entropy $S_\varphi(X_{1:n})$ increases linearly with $n$ and, hence, $r_n = n - 1 \sim n$; it is the only linear QPL $\varphi$-entropy. Second, for $s < 1$, $\delta = 0$, and $s < 1$, $\delta = 1$, the rescaling sequences are respectively $\lambda(s)^{n-1}$ and $(n-1)\lambda(s)^{n-1}$. Equation (4) states that $\lambda$ is decreasing and that $\lambda(1) = 1$, so $\lambda(s) > 1$ for $s < 1$, and the marginal entropy explodes exponentially fast towards $\Phi = \lim_n S_\varphi(X_{1:n})$; such an entropy is called over-linear or expanding. Third, for $s > 1$ (with either $\delta = 0$ or $1$), both $r_n$ and the marginal entropy decrease (in absolute value) exponentially fast to 0; such an entropy is called contracting. Finally, for $s = 1$ and $\delta = 0$ (Tsallis entropy or equivalent), the marginal entropy converges to some finite value, and the rescaling sequence is constant.
The discussion easily extends to $(h,\varphi)$-entropies through the asymptotic behaviors (11) of the marginal entropies or the rescaling sequence $r_n$ in (12). The following classification of $(h,\varphi)$-entropies ensues, shown in Table 3 according to the parameters; a short sketch implementing it follows the list below. The last column of Table 1 shows the classes of the classical entropies.
  • Contracting case: $r_n = o(1)$. The marginal entropy decreases to 0, typically exponentially fast.
  • Constant case: $|r_n| = k \in \mathbb{R}_+^*$. The marginal entropy converges to a non-degenerate value.
  • Sub-linear case: $r_n = o(n)$ and $r_n = \omega(1)$. The marginal entropy increases slower than $n$, typically as $n^c$ or $n^c \log n$, with $0 < c < 1$.
  • Linear case: $r_n = k\, n$, with $k \in \mathbb{R}_+^*$. The marginal entropy increases linearly with the number of variables.
  • Over-linear (or expanding) case: $r_n = \omega(n)$. The marginal entropy increases faster than $n$, typically exponentially fast.
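The classification of Table 3 is mechanical enough to be coded directly; the sketch below is our own summary of the table, returning the type from the parameters $[s, \delta, t, \varepsilon]$ and reproducing the last column of Table 1 for the classical entropies.

```python
def entropy_class(s, delta, t, eps):
    """Type of a QPL (h, phi)-entropy from [s, delta, t, eps], per Table 3:
    -1 contracting, 0 constant, 1 sub-linear, 2 linear, 3 over-linear."""
    if s == 1 and delta == 0:
        return 0                      # Tsallis-like row: always constant
    if s == 1 and delta == 1:
        if t < 0:
            return -1
        if t == 0:
            return 0 if eps == 0 else 1
        if t < 1:
            return 1                  # 0 < t < 1
        if t == 1:
            return 2 if eps == 0 else 3
        return 3                      # t > 1
    if s < 1:                         # delta plays no role when s != 1
        if t < 0:
            return -1
        if t == 0:
            return 0 if eps == 0 else 2
        return 3                      # t > 0
    if t < 0:                         # s > 1
        return 3
    if t == 0:
        return 0 if eps == 0 else 2
    return -1                         # t > 0

print(entropy_class(1.0, 1, 1, 0))    # Shannon: 2 (linear)
print(entropy_class(0.5, 0, 0, 1))    # Renyi, q = 0.5: 2 (linear)
print(entropy_class(2.0, 0, 0, 0))    # Tsallis, q = 2: 0 (constant)
print(entropy_class(0.5, 1, 1, 0))    # Taneja, q = 0.5: 3 (over-linear)
print(entropy_class(2.0, 1, 0, 1))    # log-Taneja, q = 2: 2 (linear)
```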
Transforming a φ -entropy into an ( h , φ ) -entropy with a QPL function h leads to a change of class in several cases. Any transformation h with t < 0 exchanges the over-linear and contracting classes, while the choice of ε has no impact on these two classes. Taking t = ε = 1 would transform the linear Shannon entropy into an intricate entropy of the over-linear class.
Further, taking t = 0 and ε = 1 transforms the Tsallis entropy of the constant class into the Rényi entropy of the linear class. It also transforms Sharma–Taneja–Mittal (STM) entropies (5), especially Taneja entropies (6) that are over-linear for s = q < 1 and contracting for s = q > 1 , into entropies of the linear class.

5. Linear QPL Entropies for QP Sequences

Since both STM and Taneja entropies are positive (for a pertinent choice of parameters), their logarithms are well defined, which allows us to derive new $(\log, \varphi)$-entropies and, hence, to complete the linear class alongside the Shannon and Rényi entropies.
Definition 1. 
The log-STM entropy and the log-Taneja entropy are defined, for any random variable $X$ taking values in a set $E$ with distribution $m$, respectively, by
$$L_{q,p}(X) = S_{\log,\, a_{q,p}(x^q - x^p)}(X) = \log\Big(a_{q,p} \sum_{i \in E} \big(m(i)^q - m(i)^p\big)\Big)$$
and
$$L_q(X) = S_{\log,\, a_q x^q \log x}(X) = \log\Big(a_q \sum_{i \in E} m(i)^q \log m(i)\Big).$$
In particular, the Rényi entropy appears as a log-STM entropy for p = 1 , up to multiplicative and additive constants, here omitted for the sake of simplicity.
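A minimal sketch of Definition 1 (our own illustration, assuming base-2 logarithms inside the entropies, consistent with (5) and (6), and an arbitrary example distribution); note the last value, which shows that $L_q$ may be negative.

```python
import numpy as np

def log_stm(m, q, p):
    """Log-STM entropy L_{q,p}(X) = log( a_{q,p} sum_i (m(i)^q - m(i)^p) )."""
    a_qp = 1.0 / (2.0 ** (1 - q) - 2.0 ** (1 - p))
    return np.log(a_qp * np.sum(m ** q - m ** p))

def log_taneja(m, q):
    """Log-Taneja entropy L_q(X) = log( a_q sum_i m(i)^q log m(i) ), a_q = -2^(q-1)."""
    return np.log(-(2.0 ** (q - 1)) * np.sum(m ** q * np.log2(m)))

m = np.array([0.5, 0.3, 0.2])
print(log_stm(m, 0.5, 2.0))
print(log_taneja(m, 0.5))
print(log_taneja(m, 2.0))   # slightly negative: L_q is not a positive functional
```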
The classification and ensuing discussion presented in Section 4 lead to the following characterization of the class of linear QPL ( h , φ ) -entropies.
Theorem 2. 
Due to obvious equivalences, such as $\varphi_1 \underset{0}{\sim} \varphi_2$ and $h_1 \underset{\Phi}{\sim} h_2$, and up to multiplicative constants, exactly three types of linear QPL $(h,\varphi)$-entropies exist for QP sequences: the Shannon entropy, with $[s,\delta,t,\varepsilon] = [1,1,1,0]$; the Rényi entropies, with $[s,\delta,t,\varepsilon] = [q,0,0,1]$ and $q > 0$, $q \neq 1$; the log-Taneja entropies, with $[s,\delta,t,\varepsilon] = [q,1,0,1]$ and $q > 0$, $q \neq 1$.
Moreover, the entropy rates associated with these entropies are either the Shannon entropy rate (for the Shannon entropy) or the Rényi entropy rate (for both the Rényi and log-Taneja entropies).
Obviously, all log-STM entropies are also linear and equivalent to Rényi entropies, since
$$a_{q,p}\big(x^q - x^p\big) \underset{0}{\sim} -a_{q,p}\, x^p \ \text{ if } q > p \quad \text{and} \quad a_{q,p}\big(x^q - x^p\big) \underset{0}{\sim} a_{q,p}\, x^q \ \text{ if } q < p.$$
To our knowledge, log-Taneja entropies and log-STM entropies have never been considered in the literature, so we will now establish some of their properties.
Various properties of STM entropies are studied in [12]. STM and Taneja entropies are symmetric, continuous, and positive. Symmetry and continuity are preserved by composition by the logarithm, yielding symmetry and continuity of log-STM and log-Taneja entropies. However, L q is clearly not positive.
For a number $N$ of equiprobable states, the STM entropy is $a_{q,p}\big(N^{1-q} - N^{1-p}\big)$, while the Taneja entropy is $-a_q N^{1-q} \log N$. Both increase with $N$, except when $q, p > 1$. The property transfers to the logarithmic versions.
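These closed forms for equiprobable states are easily checked against the definitions; a short numeric sketch, with assumed parameter values and base-2 logarithms:

```python
import numpy as np

def stm(m, q, p):
    return np.sum(m ** q - m ** p) / (2.0 ** (1 - q) - 2.0 ** (1 - p))

def taneja(m, q):
    return -(2.0 ** (q - 1)) * np.sum(m ** q * np.log2(m))

q, p = 0.5, 0.8                       # assumed parameter values
a_qp = 1.0 / (2.0 ** (1 - q) - 2.0 ** (1 - p))
for N in (2, 8, 32):
    m = np.full(N, 1.0 / N)           # uniform distribution on N states
    print(stm(m, q, p), a_qp * (N ** (1 - q) - N ** (1 - p)))        # agree
    print(taneja(m, q), 2.0 ** (q - 1) * N ** (1 - q) * np.log2(N))  # agree
```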
Borges and Roditi [12] show that STM entropies are concave for $0 < q < 1 < p$ and convex both for $0 < q < 1$ with $p < 1$ and for $q, p < 0$; all other cases present no definite concavity. Since the logarithm is a concave and increasing function, the log-STM entropies are also concave for $0 < q < 1 < p$, the other cases remaining indefinite.
Scarfone [13] gives the maximum STM entropy distribution, while Beitollahi and Azhdari [14] compute the Taneja entropy for numerous classical distributions, including the Bernoulli and geometric distributions. Both transfer to the logarithmic versions.
STM entropies are axiomatically characterized as the family of $\varphi$-entropies—with $\varphi \colon [0,1] \to \mathbb{R}$ such that $\varphi(1/2) = 1/2$—satisfying the functional identity
$$\sum_{i \in E} \sum_{j \in E} \varphi\big(m(i)\, n(j)\big) = \sum_{i \in E} m(i)^q \sum_{j \in E} \varphi\big(n(j)\big) + \sum_{j \in E} n(j)^p \sum_{i \in E} \varphi\big(m(i)\big), \tag{13}$$
where $m = (m(i))_{i \in E}$ and $n = (n(j))_{j \in E}$ denote arbitrary probability distributions on $E$; see [15] and the references therein. From a probabilistic point of view, (13) states that the entropy $T_{q,p}(X,Y)$ of a pair of independent variables, with distributions $m_X$ and $m_Y$, satisfies, for the Dirichlet series $\Lambda$ defined by (3),
$$T_{q,p}(X,Y) = \Lambda(m_X; q)\, T_{q,p}(Y) + \Lambda(m_Y; p)\, T_{q,p}(X). \tag{14}$$
For any fixed $q$ and $p$ tending to $q$, (14) yields, for the Taneja entropy of independent variables, $T_q(X,Y) = \Lambda(m_X; q)\, T_q(Y) + \Lambda(m_Y; q)\, T_q(X)$. In particular, if $\Lambda(m_X; q) = \Lambda(m_Y; q) = \Lambda(m; q)$, then $T_q(X,Y) = \Lambda(m; q)\,[T_q(Y) + T_q(X)]$, that is, a "weighted additivity" rule for independent variables with equal Dirichlet series at $q$—or, equivalently, equal Rényi entropy of parameter $q$.
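This weighted additivity holds exactly for independent pairs, as the following numeric sketch confirms on assumed example distributions:

```python
import numpy as np

def taneja(m, q):
    """Taneja entropy (6), base-2 logarithm."""
    return -(2.0 ** (q - 1)) * np.sum(m ** q * np.log2(m))

def dirichlet(m, s):
    """Dirichlet series (3) of a single variable: Lambda(m; s) = sum_i m(i)^s."""
    return np.sum(m ** s)

mX = np.array([0.5, 0.3, 0.2])
mY = np.array([0.6, 0.4])
q = 1.5

joint = np.outer(mX, mY).ravel()      # distribution of the independent pair (X, Y)
lhs = taneja(joint, q)
rhs = dirichlet(mX, q) * taneja(mY, q) + dirichlet(mY, q) * taneja(mX, q)
print(lhs, rhs)                       # equal: weighted additivity (14) at p = q
```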
The axiomatic property (13) for STM entropies yields an alternative proof of the linearity of log-STM entropies for i.i.d. sequences of random variables. For the sake of simplicity, let us detail it for log-Taneja entropies. Let $\mathbf{X} = (X_n)_n$ be an i.i.d. sequence with common distribution $m$. A simple calculation from the definition gives, up to an additive constant,
$$L_q(m) = (1-q)\, R_q(m) + \log\left(\frac{1}{q}\Big(S(m^{*q}) - (1-q)\, R_q(m)\Big)\right), \tag{15}$$
where $m^{*q}$ denotes the $q$-escort distribution associated with $m$. The escort transformation is known to preserve independence, in the sense that $(m^{\otimes n})^{*q} = (m^{*q})^{\otimes n}$; see, e.g., [5] and the references therein. Moreover, both the Shannon entropy $S$ and the Rényi entropy $R_q$ are well known to be additive for independent variables, and hence $S\big((m^{\otimes n})^{*q}\big) = n\, S(m^{*q})$ and $R_q(m^{\otimes n}) = n\, R_q(m)$. Applying (15) to $m^{\otimes n}$ thus yields
$$L_q(X_{1:n}) = L_q(m^{\otimes n}) = (1-q)\, n\, R_q(m) + \log\left(\frac{n}{q}\Big(S(m^{*q}) - (1-q)\, R_q(m)\Big)\right). \tag{16}$$
Obviously, the second term of the sum is negligible with respect to $n$, so that $L_q(X_{1:n}) \sim n\,(1-q)\, R_q(m)$. Finally, the entropy rate of the sequence reduces, up to the factor $1-q$, to the Rényi entropy of the common distribution.
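The identities (15) and (16) can be verified numerically; the sketch below is our own check, with the constant $a_q$ dropped as in (15), natural logarithms, and an assumed example distribution.

```python
import numpy as np

def renyi(m, q):
    return np.log(np.sum(m ** q)) / (1.0 - q)

def shannon(m):
    return -np.sum(m * np.log(m))

def escort(m, q):
    w = m ** q
    return w / w.sum()

def log_taneja(m, q):
    """L_q with the constant a_q dropped: log( -sum_i m(i)^q log m(i) )."""
    return np.log(-np.sum(m ** q * np.log(m)))

m = np.array([0.5, 0.3, 0.2])
q = 1.5
rhs15 = (1 - q) * renyi(m, q) \
        + np.log((shannon(escort(m, q)) - (1 - q) * renyi(m, q)) / q)
print(log_taneja(m, q), rhs15)        # agree: identity (15)

m2 = np.outer(m, m).ravel()           # product distribution of an i.i.d. pair
rhs16 = (1 - q) * 2 * renyi(m, q) \
        + np.log(2 * (shannon(escort(m, q)) - (1 - q) * renyi(m, q)) / q)
print(log_taneja(m2, q), rhs16)       # agree: identity (16) with n = 2
```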

Author Contributions

Both authors have contributed equally to all aspects of this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Menéndez, M.L.; Morales, D.; Pardo, L.; Salicrú, M. (h,ϕ)-entropy differential metric. Appl. Math. 1997, 42, 81–98.
  3. Basseville, M. Divergence measures for statistical data processing—An annotated bibliography. Signal Process. 2013, 93, 621–633.
  4. Ciuperca, G.; Girardin, V.; Lhote, L. Computation and Estimation of Generalized Entropy Rates for Denumerable Markov Chains. IEEE Trans. Inf. Theory 2011, 57, 4026–4034.
  5. Regnault, P.; Girardin, V.; Lhote, L. Weighted Closed Form Expressions Based on Escort Distributions for Rényi Entropy Rates of Markov Chains. In Geometric Science of Information; Lecture Notes in Computer Science; Nielsen, F., Barbaresco, F., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 648–656.
  6. Vallée, B. Dynamical sources in information theory: Fundamental intervals and word prefixes. Algorithmica 2001, 29, 262–306.
  7. Girardin, V.; Lhote, L.; Regnault, P. Different Closed-Form Expressions for Generalized Entropy Rates of Markov Chains. Methodol. Comput. Appl. Probab. 2019, 21, 1431–1452.
  8. Sharma, B.D.; Taneja, I.J. Entropy of type (α,β) and other generalized measures in information theory. Metrika 1975, 22, 205–215.
  9. Frank, T.; Plastino, A. Generalized thermostatistics based on the Sharma-Mittal entropy and escort mean values. Eur. Phys. J. B 2002, 30, 543–549.
  10. Kolesnichenko, A.V. Two-parameter Sharma–Taneja–Mittal entropy as the basis of family of equilibrium thermodynamics of nonextensive systems. Prepr. Keldysh Inst. Appl. Math. 2020, 36, 35.
  11. Girardin, V.; Lhote, L. Rescaling Entropy and Divergence Rates. IEEE Trans. Inf. Theory 2015, 61, 5868–5882.
  12. Borges, E.P.; Roditi, I. A family of nonextensive entropies. Phys. Lett. A 1998, 246, 399–402.
  13. Scarfone, A.M. A Maximal Entropy Distribution Derivation of the Sharma-Taneja-Mittal Entropic Form. Open Syst. Inf. Dyn. 2018, 25, 1850002.
  14. Beitollahi, A.; Azhdari, P. Exponential family and Taneja’s entropy. Appl. Math. Sci. 2010, 4, 2013–2019.
  15. Suyari, H.; Ohara, A.; Wada, T. Mathematical Aspects of Generalized Entropies and their Applications. J. Phys. Conf. Ser. 2010, 201, 011001.
Table 1. Some classical entropies, with parameters $p, q > 0$. From left to right: parameters of (7), $\Phi = \lim_n S_\varphi(X_{1:n})$, parameters of (8), $h\{\Phi\} = \lim_n S_{h,\varphi}(X_{1:n})$, rescaling sequence, entropy rate, and type (−1: contracting, 0: constant, 1: sub-linear, 2: linear, 3: over-linear).

| Entropy: $\varphi(x)$, $h(z)$ | $[a,s,\delta]$ | $\Phi$ | $[b,t,\varepsilon]$ | $h\{\Phi\}$ | $r_n$ | $H_{h,\varphi}(\mathbf{X})$ | Type |
|---|---|---|---|---|---|---|---|
| Shannon $S$: $-x\log x$, $z$ | $[-1,1,1]$ | $+\infty$ | $[1,1,0]$ | $+\infty$ | $n-1$ | $-\lambda'(1)$ | 2 |
| Rényi $R_q$: $x^q$, $\frac{1}{1-q}\log z$ | $[1,q,0]$ | $q>1$: $0$; $q<1$: $+\infty$ | $[\frac{1}{1-q},0,1]$ | $+\infty$ | $n-1$ | $\frac{1}{1-q}\log\lambda(q)$ | 2 |
| Tsallis: $x^q$, $\frac{1}{q-1}(1-z)$, for $q>1$ | $[1,q,0]$ | $0$ | $[\frac{1}{q-1},0,0]$ | $\frac{1}{q-1}$ | $1$ | $\frac{1}{q-1}$ | 0 |
| Tsallis, for $q<1$ | $[1,q,0]$ | $+\infty$ | $[\frac{1}{1-q},1,0]$ | $+\infty$ | $\lambda(q)^{n-1}$ | $\frac{1}{1-q}\,c(q)$ | 3 |
| STM $T_{q,p}$: $x^q-x^p$, $\frac{z}{p-q}$, for $q>p$ | $[-1,p,0]$ | $p>1$: $0$; $p<1$: $-\infty$ | $[\frac{1}{p-q},1,0]$ | $p>1$: $0$; $p<1$: $+\infty$ | $\lambda(p)^{n-1}$ | $\frac{1}{q-p}\,c(p)$ | $p>1$: −1; $p<1$: 3 |
| STM $T_{q,p}$, for $q<p$ | $[1,q,0]$ | $q>1$: $0$; $q<1$: $+\infty$ | $[\frac{1}{p-q},1,0]$ | $q>1$: $0$; $q<1$: $+\infty$ | $\lambda(q)^{n-1}$ | $\frac{1}{p-q}\,c(q)$ | $q>1$: −1; $q<1$: 3 |
| Taneja $T_q$: $-x^q\log x$, $z$ | $[-1,q,1]$ | $q>1$: $0$; $q<1$: $+\infty$ | $[1,1,0]$ | $q>1$: $0$; $q<1$: $+\infty$ | $(n-1)\lambda(q)^{n-1}$ | $-c(q)\frac{\lambda'(q)}{\lambda(q)}$ | $q>1$: −1; $q<1$: 3 |
Table 2. Limit marginal $\varphi$-entropy $\Phi = \lim_n S_\varphi(X_{1:n})$, averaging sequence $r = (r_n)_{n \in \mathbb{N}}$, and $H_\varphi(\mathbf{X})$, according to the values of $a$, $s$, $\delta$, and the sign $\mathcal{S}(a)$ of $a$.

| | $s<1$ | $s=1$ | $s>1$ |
|---|---|---|---|
| $\delta=0$: $\Phi$ | $\mathcal{S}(a)\,\infty$ | $a$ | $0$ |
| $r_n$ | $\lambda(s)^{n-1}$ | $1$ | $\lambda(s)^{n-1}$ |
| $H_\varphi(\mathbf{X})$ | $a\,c(s)$ | $a$ | $a\,c(s)$ |
| $\delta=1$: $\Phi$ | $-\mathcal{S}(a)\,\infty$ | $-\mathcal{S}(a)\,\infty$ | $0$ |
| $r_n$ | $(n-1)\lambda(s)^{n-1}$ | $n-1$ | $(n-1)\lambda(s)^{n-1}$ |
| $H_\varphi(\mathbf{X})$ | $a\,c(s)\frac{\lambda'(s)}{\lambda(s)}$ | $a\,\lambda'(1)$ | $a\,c(s)\frac{\lambda'(s)}{\lambda(s)}$ |
Table 3. Type of the $(h,\varphi)$-entropy according to the parameters in (7) and (8): −1 contracting, 0 constant, 1 sub-linear, 2 linear, 3 over-linear.

| | $t<0$ | $t=0$, $\varepsilon=0$ | $t=0$, $\varepsilon=1$ | $t>0$ |
|---|---|---|---|---|
| $\delta=0$, $s<1$ | −1 | 0 | 2 | 3 |
| $\delta=0$, $s=1$ | 0 | 0 | 0 | 0 |
| $\delta=0$, $s>1$ | 3 | 0 | 2 | −1 |
| $\delta=1$, $s<1$ | −1 | 0 | 2 | 3 |
| $\delta=1$, $s>1$ | 3 | 0 | 2 | −1 |

For $\delta=1$, $s=1$, the type further depends on $t$ and $\varepsilon$:

| | $t<0$ | $t=0$, $\varepsilon=0$ | $t=0$, $\varepsilon=1$ | $0<t<1$ | $t=1$, $\varepsilon=0$ | $t=1$, $\varepsilon=1$ | $t>1$ |
|---|---|---|---|---|---|---|---|
| $\delta=1$, $s=1$ | −1 | 0 | 1 | 1 | 2 | 3 | 3 |