Article

Calibration Invariance of the MaxEnt Distribution in the Maximum Entropy Principle

by Jan Korbel 1,2,3

1 Section for the Science of Complex Systems, Center for Medical Statistics, Informatics, and Intelligent Systems (CeMSIIS), Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
2 Complexity Science Hub Vienna, Josefstädterstrasse 39, 1080 Vienna, Austria
3 Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, 11519 Prague, Czech Republic
Entropy 2021, 23(1), 96; https://doi.org/10.3390/e23010096
Submission received: 11 December 2020 / Revised: 7 January 2021 / Accepted: 9 January 2021 / Published: 11 January 2021
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

Abstract: The maximum entropy principle consists of two steps: The first step is to find the distribution that maximizes entropy under given constraints. The second step is to calculate the corresponding thermodynamic quantities. The second step is determined by the relation of the Lagrange multipliers to measurable physical quantities such as temperature or Helmholtz free energy/free entropy. We show that for a given MaxEnt distribution there exists a whole class of entropies and constraints that leads to the same distribution but generally to different thermodynamics. Two simple classes of transformations that preserve the MaxEnt distributions are studied: The first case is a transform of the entropy to an arbitrary increasing function of that entropy. The second case is the transform of the energetic constraint to a combination of the normalization and energetic constraints. We derive group transformations of the Lagrange multipliers corresponding to these transformations and determine their connections to thermodynamic quantities. For each case, we provide a simple example of this transformation.

1. Introduction

The maximum entropy principle (MEP) is one of the most fundamental concepts in equilibrium statistical mechanics. It was originally proposed by Jaynes [1,2] in order to connect the information entropy introduced by Shannon and the thermodynamic entropy introduced by Clausius, Boltzmann, and Gibbs. Although the MEP was originally introduced for the case of Shannon entropy, with the advent of generalized entropies [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] a natural effort was to apply the maximum entropy principle beyond the case of Shannon entropy. Another question that arose naturally is whether the MEP can be applied to constraints other than the ordinary linear ones. Examples of constraints that might be considered in connection with the MEP are escort constraints [18,19,20], Kolmogorov–Nagumo means [21,22], or more exotic types of constraints [23]. This brought about some discussion on the applicability of the principle to generalized entropies [24,25] and nonlinear constraints, and on its thermodynamic interpretation [26,27,28,29,30]. Indeed, the MEP is not the only extremal principle in statistical physics; let us mention, e.g., the principle of maximum caliber [31], which is useful in non-equilibrium physics. In this paper, however, we stick to the MEP, as it is the most widespread principle, and the theory of generalized thermostatistics has mainly focused on it. For a recent review of other principles, see [32]. For a discussion of the relation between the entropy arising from information theory and the thermodynamic entropy, see [33]. For the sake of simplicity, we consider the canonical ensemble, i.e., fluctuations in internal energy. For the grand-canonical ensemble, one can obtain results similar to those presented in this paper for the chemical potential μ.
In order to grasp the debate about the applicability of the MEP, let us emphasize that the MEP consists of two main parts:
(I)
Finding a distribution (MaxEnt distribution) that maximizes entropy under given constraints.
(II)
Plugging the distribution into the entropic functional and calculating physical quantities such as thermodynamic potentials, temperature, or response coefficients (specific heat, compressibility, etc.).
The first part is rather a mathematical procedure of finding a maximum subject to constraints. This is done by the method of Lagrange multipliers, by defining a Lagrange function of the form

$$\text{Lagrange function} = \text{entropy} - (\text{Lagrange multiplier}) \cdot (\text{constraint})$$
The role of the Lagrange multipliers at this stage is to ensure the fulfillment of the constraints, as they are determined from the set of equations obtained from the maximization of the Lagrange function. For Shannon entropy, this procedure is known in statistics as the softmax, a method used to infer a distribution from given data. Shore and Johnson [34,35] therefore studied the MEP as a statistical inference procedure and established a set of consistency axioms. Shore and Johnson’s work ignited a debate about whether the MEP for generalized entropies can also be understood as a statistical inference method satisfying the consistency requirements [24,36,37,38,39,40,41]. In [42], it was shown that the class of entropies satisfying the original Shore–Johnson axioms is wider than previously thought. Moreover, in [43], the connection between the Shore–Johnson axioms and the Shannon–Khinchin axioms was investigated, and the equivalence of the information-theoretic and statistical-inference axiomatics was established.
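To illustrate part (I) in isolation, here is a minimal numerical sketch (not from the paper; the three-level spectrum and the target energy are arbitrary choices): Shannon entropy is maximized under the normalization and a linear energy constraint, and the maximizer is compared with the analytic softmax/Gibbs form.

```python
import numpy as np
from scipy.optimize import minimize, brentq

# Illustrative inputs (not from the paper): a three-level spectrum
# and a target average energy.
E_levels = np.array([0.0, 1.0, 2.0])
E_target = 0.8

def neg_entropy(p):
    # negative Shannon entropy, to be minimized
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},           # f_0(P) = 0
    {"type": "eq", "fun": lambda p: p @ E_levels - E_target},  # f_E(P) = 0
]
res = minimize(neg_entropy, np.full(3, 1/3), constraints=constraints,
               bounds=[(1e-12, 1.0)] * 3)

# Analytic check: solve <E>_beta = E_target for beta, then build the
# Gibbs/softmax distribution e^{-beta E_i}/Z.
def mean_energy_gap(b):
    w = np.exp(-b * E_levels)
    return (w @ E_levels) / w.sum() - E_target

beta = brentq(mean_energy_gap, -50, 50)
w = np.exp(-beta * E_levels)
print(res.x, w / w.sum())  # the two distributions should agree
```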
In the second part, the physical interpretation of the entropy starts to arise. Similarly to Lagrangian mechanics, where the Lagrangian is the difference between kinetic and potential energy and the Lagrange multipliers play the role of normal forces to the constraints, here the entropy becomes a thermodynamic state variable. For Shannon entropy and linear constraints, the Lagrange multipliers become the inverse temperature and the free entropy, respectively.
The main aim of this paper is to discuss the relation between points (I) and (II). In the first part, it is possible to find a class of entropic functionals and constraints leading to the same MaxEnt distribution. However, in the second part, different entropy and/or constraints lead to different thermodynamics and different relations between physical quantities and Lagrange multipliers. The two main messages of this paper are listed below.
(i)
For each MaxEnt distribution, there exists a whole class of entropies and constraints leading to generally different thermodynamics.
(ii)
It is possible to establish transformation relations of Lagrange parameters (and subsequently the thermodynamic quantities) for classes of entropies and constraints giving the same MaxEnt distribution.
We call these transformation relations the calibration invariance of the MaxEnt distribution. A straightforward consequence is that, in order to fully determine the statistical properties of a thermal system in equilibrium, it is not enough to measure the statistical distribution of energies.
The rest of the paper is organized as follows. In the next section, we briefly discuss the main aspects of MEP for the case of general entropic functional and general constraints. In the following two sections, we introduce two simple transformations of entropic functional (Section 3) and constraints (Section 4) that lead to the same MaxEnt distribution and derive transformations between the Lagrange multipliers. These transformations form a group. After the general derivation, we provide a few simple examples for each case. The last section is devoted to conclusions.

2. Maximum Entropy Principle in Statistical Physics

The maximum entropy principle is a way of obtaining a representative probability distribution from a limited amount of information. Our aim is to find the probability distribution of the system $P = \{p_i\}_{i=1}^n$ under a set of given constraints. In the simplest case, the principle can be formulated as follows.
Maximum entropy principle: Maximize the entropy $S(P)$ under the normalization constraint $f_0(P) = 0$ and the energy constraint $f_E(P) = 0$.
The normalization condition is considered in the regular form, i.e., $f_0(P) = \sum_i p_i - 1 = 0$. Moreover, we have a class of constraints which originally described the average energy of the system; we therefore call them energy constraints. For simplicity, we consider only one energy constraint, although there can be more, and they do not have to involve only the internal energy but also other thermodynamic quantities. In the original formulation, the energy constraint is linear in the probabilities, i.e.,

$$f_E(P) = \sum_i p_i E_i - E = \langle E \rangle - E, \qquad (1)$$

but it can generally be any nonlinear function of the probabilities; escort means provide an example. A large class of energy constraints can be written in separable form, which means that $f_E(P) = E(P) - E$, i.e., in the form expressing the “expected” internal energy (a macroscopic variable) as a function of the probability distribution (a microscopic variable). This class of constraints plays a dominant role in thermodynamic systems.
In order to find a solution of the maximum entropy principle, we use the common method of Lagrange multipliers, i.e., we maximize the Lagrange function

$$\mathcal{L}(P; \alpha, \beta) = S(P) - \alpha f_0(P) - \beta f_E(P). \qquad (2)$$
The maximization procedure leads to the set of equations
$$\frac{\partial \mathcal{L}(P;\alpha,\beta)}{\partial p_i} = 0 \quad \forall i \in \{1,\dots,n\}, \qquad \frac{\partial \mathcal{L}(P;\alpha,\beta)}{\partial \alpha} = -f_0(P) = 0, \qquad \frac{\partial \mathcal{L}(P;\alpha,\beta)}{\partial \beta} = -f_E(P) = 0, \qquad (3)$$
from which we determine the resulting MaxEnt distribution. In order to obtain a unique solution, we require that the entropic functional should be a Schur-concave symmetric function [42].
As a consequence, we obtain the values of the Lagrange multipliers $\alpha$ and $\beta$. From a strictly mathematical point of view, Lagrange multipliers are just auxiliary parameters to be solved for from the set of Equations (3). However, in physics, Lagrange multipliers also have a physical interpretation. In Lagrangian mechanics, Lagrange multipliers play the role of normal forces to the constraints. Similarly, in ordinary statistical mechanics based on Shannon entropy $H(P) = -\sum_i p_i \log p_i$ and the linear constraint (1), the Lagrange multipliers have a particular physical interpretation:

$$\beta = \frac{1}{T} \quad \text{(inverse temperature)},$$

$$\alpha = S - \frac{1}{T} E \quad \text{(free entropy)}.$$
Note that the free entropy is, similarly to the Helmholtz free energy, a Legendre transform of the entropy w.r.t. the internal energy. For the case of ordinary thermodynamics (Shannon entropy and linear constraints), it is equal to the logarithm of the partition function.
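To make this statement concrete, here is a one-line check for the Gibbs distribution $p_i = e^{-\beta E_i}/Z$ (a standard computation, added for completeness):

$$S = -\sum_i p_i \ln p_i = \sum_i p_i \left(\beta E_i + \ln Z\right) = \beta E + \ln Z, \qquad S - \frac{1}{T} E = \ln Z.$$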
This interpretation is valid only in this case. When we use a different entropy functional or different constraints, these relations between the Lagrange multipliers and thermodynamic quantities are no longer valid. This is the case even when the resulting MaxEnt distribution is the same.
The main aim of this paper is to show how the invariance of MaxEnt distribution affects the Lagrange multipliers and their relations to thermodynamic quantities. Let us now solve Equation (3). The first set of equations leads to
$$\frac{\partial S(P)}{\partial p_i} - \alpha \frac{\partial f_0(P)}{\partial p_i} - \beta \frac{\partial f_E(P)}{\partial p_i} = 0.$$
Let us assume the normalization in the usual form, which leads to $\partial f_0(P)/\partial p_i = 1$. Moreover, let us consider a separable energy constraint, so $\partial f_E(P)/\partial p_i = \partial E(P)/\partial p_i$. The resulting probability distribution can be expressed as
$$p_i = \left(\frac{\partial S}{\partial p_i}\right)^{(-1)} \left(\alpha + \beta \frac{\partial E(P)}{\partial p_i}\right),$$

where $(-1)$ denotes the inverse function of $\partial S/\partial p_i$ (provided it exists and is unique). We can express $\alpha$ by multiplying the stationarity condition by $p_i$ and summing over $i$, which leads to
$$\alpha = \langle \partial_P S(P) \rangle - \beta \langle \partial_P E(P) \rangle,$$

where $\langle X \rangle = \sum_i x_i p_i$, $\partial_P = (\partial/\partial p_1, \dots, \partial/\partial p_n)$, and $P = (p_1, \dots, p_n)$. By plugging this back into the previous equation, we can get $\beta$ as

$$\beta = \frac{\Delta_i(\partial_P S(P))}{\Delta_i(\partial_P E(P))},$$

where $\Delta_i(X) = x_i - \langle X \rangle$ is the difference from the average.
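As a worked illustration of the inversion (the standard Shannon case with the linear constraint (1); a textbook computation, not a new result): from $\partial H/\partial p_i = -\ln p_i - 1$, the stationarity condition reads

$$-\ln p_i - 1 = \alpha + \beta E_i \quad \Longrightarrow \quad p_i = e^{-1-\alpha-\beta E_i} = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_j e^{-\beta E_j} = e^{1+\alpha},$$

i.e., the Gibbs (softmax) distribution quoted above.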
The solution of Equations (3) depends on the internal energy E. However, in thermodynamics it is natural to invert the relation $\beta = \beta(E)$ and express the relevant quantities in terms of $\beta$, so $E = E(\beta)$. With that, we can calculate the dependence of the entropy on $\beta$:

$$\frac{\partial S}{\partial \beta} = \sum_i \frac{\partial S}{\partial p_i} \frac{\partial p_i}{\partial \beta} = \sum_i \left(\alpha + \beta \frac{\partial E(P)}{\partial p_i}\right) \frac{\partial p_i}{\partial \beta} = \beta \sum_i \frac{\partial f_E}{\partial p_i} \frac{\partial p_i}{\partial \beta} = \beta \frac{\partial f_E}{\partial E} \frac{\partial E}{\partial \beta}$$

(the $\alpha$ term vanishes because $\sum_i \partial p_i/\partial \beta = 0$ due to normalization). For separable energy constraints, $\partial f_E/\partial E = 1$, so we obtain the well-known relation

$$\frac{\partial S}{\partial \beta} = \beta \frac{\partial E}{\partial \beta} \quad \Leftrightarrow \quad \beta = \frac{\partial S}{\partial E}.$$
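A minimal numerical sketch of this relation for the Shannon/Gibbs case (the four-level spectrum and the value of β are arbitrary illustrative choices):

```python
import numpy as np

E_levels = np.array([0.0, 1.0, 2.0, 3.0])  # illustrative spectrum

def gibbs(beta):
    w = np.exp(-beta * E_levels)
    return w / w.sum()

def S(beta):   # Shannon entropy of the MaxEnt state
    p = gibbs(beta)
    return -np.sum(p * np.log(p))

def U(beta):   # internal energy E = <E>
    return gibbs(beta) @ E_levels

beta, h = 0.7, 1e-6
dS = S(beta + h) - S(beta - h)   # central finite differences
dU = U(beta + h) - U(beta - h)
print(dS / dU, beta)             # dS/dE should reproduce beta
```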
Let us now define the Legendre conjugate of the entropy, called free entropy (also called the Jaynes parameter [44] or Massieu function [45]):

$$\psi = S - \frac{\partial S}{\partial E} E = S - \beta E.$$

Free entropy is connected to the Helmholtz free energy as $\psi = -\beta F$. The difference between $\alpha$ and $\psi$ can be expressed as

$$\psi - \alpha = \left(S - \langle \partial_P S(P) \rangle\right) - \beta \left(E - \langle \partial_P E(P) \rangle\right).$$
Therefore, we can understand the difference $\psi - \alpha$ as the Legendre transform of $\psi$ with respect to $P$. From this, we see that the difference between $\psi$ and $\alpha$ is a constant (not depending on thermodynamic quantities) if two independent conditions are fulfilled, i.e., $E = \langle \partial_P E(P) \rangle$ and $S = \langle \partial_P S(P) \rangle + a$. The former condition leads to linear energy constraints, while the latter leads to the conclusion that the entropy must be of trace form $S(P) = \sum_i g(p_i)$. Moreover, the function $g$ has to fulfill the equation

$$g(x) - a x = x g'(x),$$

leading to $g(x) = -a x \log(x) + b x$, which is equivalent to Shannon entropy.
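As a quick symbolic check (an illustration assuming sympy, not part of the original text), one can verify that $g(x) = -a x \log x + b x$ indeed solves the equation above:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
g = -a * x * sp.log(x) + b * x

# g(x) - a*x should equal x*g'(x); the difference simplifies to zero
print(sp.simplify((g - a * x) - x * sp.diff(g, x)))  # prints 0
```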
In the next sections, we will explore how the transformation of the entropy and the energy constraint that leaves the MaxEnt distribution invariant affects the Lagrange multipliers and their relation to thermodynamic quantities.

3. Calibration Invariance of MaxEnt Distribution with Entropy Transformation

The simplest transformation of the Lagrange function that leaves the MaxEnt distribution invariant is to consider an arbitrary increasing function of the entropy, i.e., we replace $S(P)$ by $c(S(P))$, where $c'(x) > 0$. Let us note that this transformation preserves the uniqueness of the MEP, because it is easy to show that if $S(P)$ is Schur-concave, then $c(S(P))$ is also Schur-concave [42], which is a sufficient condition for uniqueness of the MaxEnt distribution.
In this case, the Lagrange equations are adjusted as follows,
$$c'(S(P)) \frac{\partial S(P)}{\partial p_i} - \alpha_c \frac{\partial f_0(P)}{\partial p_i} - \beta_c \frac{\partial E(P)}{\partial p_i} = 0,$$
leading to
$$\alpha_c = c'(S(P)) \langle \partial_P S(P) \rangle - \beta_c \langle \partial_P E(P) \rangle$$
and
$$\beta_c = c'(S(P)) \frac{\Delta_i(\partial_P S(P))}{\Delta_i(\partial_P E(P))},$$
so we find that the function $c$ rescales both $\alpha$ and $\beta$:

$$\alpha_c = c'(S(P))\, \alpha,$$

$$\beta_c = c'(S(P))\, \beta,$$
while their ratio remains unchanged, i.e., $\alpha_c/\beta_c = \alpha/\beta$. In fact, the set of increasing functions induces a group of transformations of the Lagrange multipliers, because it is easy to show that the Lagrange multipliers related to the entropy $c_1(c_2(S(P)))$ satisfy

$$\beta_{c_1 \circ c_2} = c_1'(c_2(S(P))) \cdot c_2'(S(P))\, \beta = c_1'(c_2(S(P)))\, \beta_{c_2},$$

which can be described by the group operation $(c_1 \circ c_2)' = c_1'(c_2) \cdot c_2'$.
An important property of this transformation is that it can change the extensive–intensive duality of the conjugated pair of thermodynamic variables and the respective forces while it maintains the distribution. Notably, changing the entropic functional from extensive (i.e., $S(n) \sim U(n)$, where $n$ is the system size) to non-extensive changes $\beta$ from intensive (i.e., size-independent, at least in the thermodynamic limit) to non-intensive, i.e., explicitly size-dependent. This point has been discussed in connection with q-non-extensive statistical physics in [29,30], and the relation to the zeroth law of thermodynamics was shown in [46]. As one can see from the example below, although Rényi entropy and Tsallis entropy have the same maximizer, the corresponding thermodynamics is different. While Rényi entropy is additive (and therefore extensive for systems where $U(n) \sim n$) and the corresponding temperature is intensive, Tsallis entropy is non-extensive, and the corresponding temperature explicitly depends on the size of the system.
Let us finally mention that the difference between the free entropy and the Lagrange parameter $\alpha$ transforms as

$$\psi_c - \alpha_c = \left(c(S) - c'(S) \langle \partial_P S(P) \rangle\right) - c'(S)\, \beta \left(E - \langle \partial_P E(P) \rangle\right) = c'(S)\left(\psi - \alpha\right) + \left(c(S) - c'(S) \cdot S\right).$$
While free entropy and other thermodynamic potentials are transformed, the heat change remains invariant under this transformation:
$$\text{đ}Q_c = T_c\, \mathrm{d}\, c(S) = \frac{T}{c'(S)}\, c'(S)\, \mathrm{d}S = T\, \mathrm{d}S = \text{đ}Q.$$
Example 1.
We exemplify the calibration invariance with two popular pairs of closely related entropies.
  • Rényi entropy and Tsallis entropy: The two most famous examples of generalized entropies are Rényi entropy $R_q(P) = \frac{1}{1-q} \ln \sum_i p_i^q$ and Tsallis entropy $S_q(P) = \frac{1}{1-q}\left(\sum_i p_i^q - 1\right)$. Their relation can be expressed as

    $$R_q(P) = c_q(S_q(P)) = \frac{1}{1-q} \ln\left[(1-q)\, S_q(P) + 1\right],$$

    and therefore we obtain

    $$c_q'(S_q(P)) = \frac{1}{1 + (1-q) S_q} = \frac{1}{\sum_i p_i^q}.$$

    The difference between the free entropy and $\alpha$ can be obtained as

    $$\psi_R - \alpha_R = \frac{1}{\sum_i p_i^q}\left(\psi_S - \alpha_S\right) + R_q(P) - \frac{S_q(P)}{\sum_i p_i^q}.$$

    One can therefore see that even though Rényi and Tsallis entropy lead to the same MaxEnt distribution, their thermodynamic quantities, such as temperature or free entropy, are different. Whether a system follows Rényi or Tsallis entropy depends on additional facts, e.g., the (non-)extensivity and (non-)intensivity of its thermodynamic quantities. (A numerical check of the multiplier rescaling is sketched after this example.)
  • Shannon entropy and entropy power: A similar example is provided by Shannon entropy $H(P) = \sum_i p_i \ln(1/p_i)$ and the entropy power $\mathcal{P}(P) = \prod_i (1/p_i)^{p_i}$. The relation between them is simply

    $$H(P) = c(\mathcal{P}(P)) = \log(\mathcal{P}(P)),$$

    so we obtain

    $$c'(\mathcal{P}(P)) = 1/\mathcal{P}(P) = \exp(-H(P)).$$

    For the difference between the free entropy and $\alpha$, we obtain

    $$0 = \psi_H - \alpha_H = \frac{1}{\mathcal{P}(P)}\left(\psi_{\mathcal{P}} - \alpha_{\mathcal{P}}\right) + H(P) - 1,$$

    from which we get

    $$\psi_{\mathcal{P}} - \alpha_{\mathcal{P}} = \mathcal{P}(P)\left(1 - \log \mathcal{P}(P)\right).$$

    Therefore, we see that even though the MaxEnt distribution remains unchanged, the relation between $\alpha$ and the free entropy is different.
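The numerical check announced in the first example above: for any fixed distribution $P$, the gradient-based formula for $\beta$ from Section 2 rescales by $c_q'(S_q) = 1/\sum_i p_i^q$ when passing from Tsallis to Rényi entropy. This is a sketch with arbitrary illustrative inputs, not data from the paper.

```python
import numpy as np

q = 1.5
p = np.array([0.5, 0.3, 0.2])   # illustrative distribution
E = np.array([0.0, 1.0, 2.0])   # illustrative spectrum; dE/dp_i = E_i

dS_tsallis = q * p**(q - 1) / (1 - q)     # dS_q/dp_i
dS_renyi = dS_tsallis / np.sum(p**q)      # dR_q/dp_i = c'(S_q) * dS_q/dp_i

def delta(x):   # Delta_i(X) = x_i - <X>
    return x - p @ x

beta_tsallis = delta(dS_tsallis)[0] / delta(E)[0]
beta_renyi = delta(dS_renyi)[0] / delta(E)[0]
print(beta_renyi / beta_tsallis, 1 / np.sum(p**q))  # the two should coincide
```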

4. Calibration Invariance of MaxEnt Distribution with Constraints Transformation

Similarly, one can uncover an invariance of the MaxEnt distribution when the constraints are transformed in a certain way. Generally, if two sets of constraints define the same domain, the maximum entropy principle should lead to equivalent results. We will not be this general; instead, we focus on a specific situation, which is quite interesting for thermodynamic applications. Let us recall the two conditions we assume: the normalization $f_0(P) = 0$ and the energy constraint $f_E(P) = 0$, and let us investigate the latter. Similarly to the previous case, it is possible to take any function $g$ of $f_E(P)$ for which $g(y) = 0 \Leftrightarrow y = 0$. More generally, we can also take into account the normalization constraint and replace the original energy condition by
$$g(f_0(P), f_E(P)) = 0$$

for any $g(x, y)$ for which $g(x, y) = 0 \Leftrightarrow y = 0$. Let us investigate the maximum entropy principle for this case. We can express the Lagrange function as
$$\mathcal{L}(P) = S(P) - \alpha_g f_0(P) - \beta_g\, g(f_0(P), f_E(P)),$$
which leads to a set of equations
$$\frac{\partial S(P)}{\partial p_i} - \alpha_g \frac{\partial f_0(P)}{\partial p_i} - \beta_g \left(G^{(1,0)} \frac{\partial f_0(P)}{\partial p_i} + G^{(0,1)} \frac{\partial E(P)}{\partial p_i}\right) = 0,$$

where $G^{(1,0)} = \left.\frac{\partial g(x,y)}{\partial x}\right|_{(0,0)}$ and $G^{(0,1)} = \left.\frac{\partial g(x,y)}{\partial y}\right|_{(0,0)}$. We again take into account that $\partial f_0(P)/\partial p_i = 1$, multiply the equations by $p_i$, and sum over $i$. This gives us
$$\alpha_g = \langle \partial_P S(P) \rangle - \beta_g \left(G^{(1,0)} + G^{(0,1)} \langle \partial_P E(P) \rangle\right).$$
By plugging $\alpha_g$ back, we end up with a relation for $\beta_g$:

$$\beta_g = \frac{1}{G^{(0,1)}} \frac{\Delta_i(\partial_P S(P))}{\Delta_i(\partial_P E(P))}.$$
For $\alpha_g$, we end up with

$$\alpha_g = \langle \partial_P S(P) \rangle - \frac{\Delta_i(\partial_P S(P))}{\Delta_i(\partial_P E(P))} \left(\frac{G^{(1,0)}}{G^{(0,1)}} + \langle \partial_P E(P) \rangle\right).$$
Thus, we again end up with a rescaling of $\alpha_g$ and $\beta_g$, which reads

$$\alpha_g(\alpha, \beta) = \alpha - \frac{G^{(1,0)}}{G^{(0,1)}}\, \beta,$$

$$\beta_g(\beta) = \frac{\beta}{G^{(0,1)}}.$$
The ratio of Lagrange multipliers is also transformed, so we get
$$\frac{\alpha_g}{\beta_g} = G^{(0,1)}\, \frac{\alpha}{\beta} - G^{(1,0)}.$$
Again, the set of all functions fulfilling the aforementioned condition forms a group. The group operation can be described by the relation between the coefficients $G^{(1,0)}$ and $G^{(0,1)}$ of the composite function $g(x, y) = g_1(x, g_2(x, y))$. We obtain
$$G^{(1,0)} = G_1^{(1,0)} + G_1^{(0,1)} G_2^{(1,0)},$$

$$G^{(0,1)} = G_1^{(0,1)} G_2^{(0,1)},$$
which leads to the group relations

$$\alpha_g(\alpha, \beta) = \alpha_{g_1}\left(\alpha_{g_2}(\alpha, \beta), \beta_{g_2}(\beta)\right) = \alpha_{g_2}(\alpha, \beta) - \frac{G_1^{(1,0)}}{G_1^{(0,1)}}\, \beta_{g_2}(\beta),$$

$$\beta_g(\beta) = \frac{\beta_{g_2}(\beta)}{G_1^{(0,1)}}.$$
Example 2.
Here we mention two simple examples of the aforementioned transformation.
  • Energy shift: Under this scheme, we can consider a constant shift $E_0$ of the energy spectrum. Let us rewrite the constraint $f_E(P)$ in the following form,

    $$f_E(P) = \sum_i p_i E_i - E = \sum_i p_i (E_i - E_0) - (E - E_0) + E_0 \left(\sum_i p_i - 1\right),$$

    which allows us to identify the shifted constraint $\sum_i p_i (E_i - E_0) - (E - E_0)$ with the function

    $$g(x, y) = y - E_0\, x.$$

    We obtain $G^{(1,0)} = -E_0$ and $G^{(0,1)} = 1$, which means that $\alpha_g = \alpha + \beta E_0$ while $\beta_g = \beta$. (A numerical sketch for the Shannon case follows after this example.)
  • Latent escort means: Apart from linear means, it is possible to use generalized approaches. One example is provided by the so-called escort mean,

    $$E_q = \langle E \rangle_q = \frac{\sum_i p_i^q E_i}{\sum_i p_i^q},$$

    which for $q = 1$ becomes the ordinary linear mean when $P = \{p_i\}_{i=1}^n$ is normalized to one. When we use this class of means in the maximum entropy principle, the normalization is enforced by the normalization condition $f_0(P) = 0$; therefore, for $q = 1$ we obtain the same results. Nevertheless, for $q = 1$ the escort form of the energy constraint, which is actually expressed as

    $$\frac{\sum_i p_i E_i}{\sum_i p_i} - E = 0,$$

    can be understood in the same way as considered before in this section, i.e., as a combination of the normalization constraint and the energy constraint. In this case, the function $g$ has the following form,

    $$g(x, y) = \frac{y + E}{x + 1} - E.$$

    Therefore, we obtain $G^{(1,0)} = -E$ and $G^{(0,1)} = 1$, which corresponds to the previous example with $E_0 = E$. Thus, the latent escort mean can be understood, in terms of the MaxEnt procedure, as a shift of the energy spectrum by its average energy.
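A minimal numerical sketch of the energy-shift calibration for the Shannon/Gibbs case, announced in the first example above (spectrum, β, and E₀ are illustrative; the relation $\alpha = \ln Z - 1$ follows from the worked case $p_i = e^{-1-\alpha-\beta E_i}$ in Section 2):

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])   # illustrative spectrum
beta, E0 = 0.7, 0.5             # illustrative multiplier and shift

def gibbs_alpha(E_levels, beta):
    Z = np.sum(np.exp(-beta * E_levels))
    p = np.exp(-beta * E_levels) / Z
    alpha = np.log(Z) - 1.0     # from p_i = exp(-1 - alpha - beta*E_i)
    return p, alpha

p, alpha = gibbs_alpha(E, beta)
p_shifted, alpha_shifted = gibbs_alpha(E - E0, beta)
print(np.allclose(p, p_shifted))                     # distribution invariant
print(np.isclose(alpha_shifted, alpha + beta * E0))  # alpha_g = alpha + beta*E0
```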

5. Conclusions

In this paper, we have discussed the calibration invariance of the MEP, which means that for a given MaxEnt distribution there exists a whole class of entropies and constraints that lead to different thermodynamics (thermodynamic quantities and response coefficients generally behave differently; for example, an intensive temperature can turn into a temperature that explicitly depends on the size of the system). We have stressed that the MEP procedure consists of two parts: the first part, determining the MaxEnt distribution, is rather a mathematical tool, while the second part, making the connection between the Lagrange multipliers and thermodynamic quantities, is specific to the application of the MEP in statistical physics. Naturally, the paper does not cover all possible transformations leading to the same MaxEnt distribution (let us mention at least the additive duality of Tsallis entropy, where maximizing $S_{2-q}$ with a linear constraint leads to the same result as maximizing $S_q$ with escort constraints [47]). The main lesson of this paper is that, in order to fully determine a thermal system in equilibrium, we need to measure not only the probability distribution but also all relevant thermodynamic quantities (such as entropy). Moreover, the transformations between the Lagrange parameters and their connection to thermodynamic potentials can be useful in situations where one is not certain about the exact form of the entropy.

Funding

This research was funded by the Austrian Science Fund (FWF), project I 3073; by the Austrian Research Promotion Agency (FFG), project 882184; and by the Grant Agency of the Czech Republic (GAČR), grant No. 19-16066S.

Acknowledgments

I would like to thank Petr Jizba for helpful discussions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  2. Jaynes, E.T. Information Theory and Statistical Mechanics. II. Phys. Rev. 1957, 108, 171. [Google Scholar] [CrossRef]
  3. Burg, J.P. The relationship between maximum entropy spectra and maximum likelihood spectra. Geophysics 1972, 37, 375–376. [Google Scholar] [CrossRef]
  4. Rényi, A. Selected Papers of Alfréd Rényi; Akademia Kiado: Budapest, Hungary, 1976; Volume 2. [Google Scholar]
  5. Havrda, J.H.; Charvát, F. Quantification Method of Classification Processes. Concept of Structural α-Entropy. Kybernetika 1967, 3, 30–35. [Google Scholar]
  6. Sharma, B.D.; Mitter, J.; Mohan, M. On Measures of “Useful” Information. Inf. Control 1978, 39, 323–336. [Google Scholar] [CrossRef] [Green Version]
  7. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  8. Frank, T.; Daffertshofer, A. Exact time-dependent solutions of the Renyi Fokker-Planck equation and the Fokker-Planck equations related to the entropies proposed by Sharma and Mittal. Physica A 2000, 285, 351–366. [Google Scholar] [CrossRef]
  9. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125. [Google Scholar] [CrossRef] [Green Version]
  10. Jizba, P.; Arimitsu, T. The world according to Rényi: Thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59. [Google Scholar] [CrossRef]
  11. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an ab-initio derivation of their entropy and distribution functions. Europhys. Lett. 2011, 93, 20006. [Google Scholar] [CrossRef]
  12. Thurner, S.; Hanel, R.; Klimek, P. Introduction to the Theory of Complex Systems; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  13. Korbel, J.; Hanel, R.; Thurner, S. Classification of complex systems by their sample-space scaling exponents. New J. Phys. 2018, 20, 093007. [Google Scholar] [CrossRef]
  14. Tempesta, P.; Jensen, H.J. Universality classes and information-theoretic Measures of complexity via Group entropies. Sci. Rep. 2020, 10, 1–11. [Google Scholar] [CrossRef] [Green Version]
  15. Ilić, V.M.; Stanković, M.S. Generalized Shannon-Khinchin axioms and uniqueness theorem for pseudo-additive entropies. Physica A 2014, 411, 138–145. [Google Scholar] [CrossRef] [Green Version]
  16. Ilić, V.M.; Scarfone, A.M.; Wada, T. Equivalence between four versions of thermostatistics based on strongly pseudoadditive entropies. Phys. Rev. E 2019, 100, 062135. [Google Scholar] [CrossRef] [Green Version]
  17. Czachor, M. Unifying Aspects of Generalized Calculus. Entropy 2020, 22, 1180. [Google Scholar] [CrossRef]
  18. Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems: An Introduction; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  19. Abe, S. Geometry of escort distributions. Phys. Rev. E 2003, 68, 031101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Bercher, J.-F. On escort distributions, q-gaussians and Fisher information. AIP Conf. Proc. 2011, 1305, 208. [Google Scholar]
  21. Czachor, M.; Naudts, J. Thermostatistics based on Kolmogorov-Nagumo averages: Unifying framework for extensive and nonextensive generalizations. Phys. Lett. A 2002, 298, 369–374. [Google Scholar] [CrossRef] [Green Version]
  22. Scarfone, A.M.; Matsuzoe, H.; Wada, T. Consistency of the structure of Legendre transform in thermodynamics with the Kolmogorov-Nagumo average. Phys. Lett. A 2016, 380, 3022–3028. [Google Scholar] [CrossRef]
  23. Bercher, J.-F. Tsallis distribution as a standard maximum entropy solution with ‘tail’ constraint. Phys. Lett. A 2008, 372, 5657–5659. [Google Scholar] [CrossRef] [Green Version]
  24. Pressé, S.; Ghosh, K.; Lee, J.; Dill, K.A. Nonadditive Entropies Yield Probability Distributions with Biases not Warranted by the Data. Phys. Rev. Lett. 2013, 111, 180604. [Google Scholar] [CrossRef] [PubMed]
  25. Oikonomou, T.; Bagci, B. Misusing the entropy maximization in the jungle of generalized entropies. Phys. Lett. A 2017, 381, 207–211. [Google Scholar] [CrossRef] [Green Version]
  26. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Physica A 1998, 261, 534–554. [Google Scholar] [CrossRef]
  27. Martínez, S.; Nicolás, F.; Pennini, F.; Plastino, A. Tsallis’ entropy maximization procedure revisited. Physica A 2000, 286, 489–502. [Google Scholar] [CrossRef] [Green Version]
  28. Plastino, A.; Plastino, A.R. On the universality of thermodynamics’ Legendre transform structure. Phys. Lett. A 1997, 226, 257–263. [Google Scholar] [CrossRef]
  29. Rama, S.K. Tsallis Statistics: Averages and a Physical Interpretation of the Lagrange Multiplier β. Phys. Lett. A 2000, 276, 103–108. [Google Scholar] [CrossRef] [Green Version]
  30. Campisi, M.; Bagci, G.B. Tsallis Ensemble as an Exact Orthode. Phys. Lett. A 2007, 362, 11–15. [Google Scholar] [CrossRef] [Green Version]
  31. Dixit, P.D.; Wagoner, J.; Weistuch, C.; Pressé, S.; Ghosh, K.; Dill, K.A. Perspective: Maximum caliber is a general variational principle for dynamical systems. J. Chem. Phys. 2018, 148, 010901. [Google Scholar] [CrossRef] [Green Version]
  32. Lucia, U. Stationary Open Systems: A Brief Review on Contemporary Theories on Irreversibility. Physica A 2013, 392, 1051–1062. [Google Scholar] [CrossRef]
  33. Palazzo, P. Hierarchical Structure of Generalized Thermodynamic and Informational Entropy. Entropy 2018, 20, 553. [Google Scholar] [CrossRef] [Green Version]
  34. Shore, J.E.; Johnson, R.W. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theor. 1980, 26, 26–37. [Google Scholar] [CrossRef] [Green Version]
  35. Shore, J.E.; Johnson, R.W. Properties of cross-entropy minimization. IEEE Trans. Inf. Theor. 1981, 27, 472–482. [Google Scholar] [CrossRef] [Green Version]
  36. Uffink, J. Can the maximum entropy principle be explained as a consistency requirement? Stud. Hist. Philos. Mod. Phys. 1995, 26, 223–261. [Google Scholar] [CrossRef] [Green Version]
  37. Tsallis, C. Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems. Entropy 2015, 17, 2853–2861. [Google Scholar] [CrossRef] [Green Version]
  38. Pressé, S.; Ghosh, K.; Lee, J.; Dill, K.A. Reply to C. Tsallis’ “Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems”. Entropy 2015, 17, 5043–5046. [Google Scholar] [CrossRef] [Green Version]
  39. Oikonomou, T.; Bagci, G.B. Rényi entropy yields artificial biases not in the data and incorrect updating due to the finite-size data. Phys. Rev. E 2019, 99, 032134. [Google Scholar] [CrossRef] [Green Version]
  40. Jizba, P.; Korbel, J. Comment on “Rényi entropy yields artificial biases not in the data and incorrect updating due to the finite-size data”. Phys. Rev. E 2019, 100, 026101. [Google Scholar] [CrossRef] [Green Version]
  41. Oikonomou, T.; Bagci, G.B. Reply to “Comment on Rényi entropy yields artificial biases not in the data and incorrect updating due to the finite-size data”. Phys. Rev. E 2019, 100, 026102. [Google Scholar] [CrossRef] [Green Version]
  42. Jizba, P.; Korbel, J. Maximum Entropy Principle in Statistical Inference: Case for Non-Shannonian Entropies. Phys. Rev. Lett. 2019, 122, 120601. [Google Scholar] [CrossRef] [Green Version]
  43. Jizba, P.; Korbel, J. When Shannon and Khinchin meet Shore and Johnson: Equivalence of information theory and statistical inference axiomatics. Phys. Rev. E 2020, 101, 042126. [Google Scholar] [CrossRef]
  44. Plastino, A.; Plastino, A.R. Tsallis Entropy and Jaynes’ Information Theory Formalism. Braz. J. Phys. 1999, 29, 50–60. [Google Scholar] [CrossRef]
  45. Naudts, J. Generalized Thermostatistics; Springer: London, UK, 2011. [Google Scholar]
  46. Biró, T.S.; Ván, P. Zeroth law compatibility of nonadditive thermodynamics. Phys. Rev. E 2011, 83, 061147. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Wada, T.; Scarfone, A.M. Connections between Tsallis’ formalisms employing the standard linear average energy and ones employing the normalized q-average energy. Phys. Lett. A 2005, 335, 351–362. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
