**1. Introduction**

The maximum entropy principle (MEP) is one of the most fundamental concepts in equilibrium statistical mechanics. It was originally proposed by Jaynes [1,2] in order to connect the information entropy introduced by Shannon and the thermodynamic entropy introduced by Clausius, Boltzmann, and Gibbs. Although the MEP was originally introduced for the case of Shannon entropy, with the advent of generalized entropies [3–17] the natural effort was to apply the maximum entropy principle beyond the case of Shannon entropy. Another question that arose naturally is whether the MEP can be applied to constraints other than ordinary linear ones. Examples of constraints that might be considered in connection with the MEP are *escort constraints* [18–20], *Kolmogorov–Nagumo means* [21,22], or more exotic types of constraints [23]. This brought about some discussion regarding the applicability of the principle for the case of generalized entropies [24,25] and nonlinear constraints and its thermodynamic interpretation [26–30]. Indeed, the MEP is not the only extremal principle in statistical physics; let us mention, e.g., the *principle of maximum caliber* [31], which is useful in non-equilibrium physics. In this paper, we stick, however, to the MEP, as it is the most widespread principle, and the theory of generalized thermostatistics has mainly focused on it. For a recent review of other principles, see [32]. For a discussion of the relation between entropy arising from information theory and from thermodynamics, see [33]. For the sake of simplicity, let us consider the canonical ensemble, i.e., fluctuations in internal energy. For the case of the grand-canonical ensemble, one can obtain results similar to the ones presented in this paper for the case of a chemical potential *μ*.

In order to grasp the debate about the applicability of the MEP, let us emphasize that the MEP consists of two main parts: (I) the mathematical procedure of finding the maximum of the entropy subject to given constraints, and (II) the physical interpretation of the entropy, the constraints, and the corresponding Lagrange multipliers.

**Citation:** Korbel, J. Calibration Invariance of the MaxEnt Distribution in the Maximum Entropy Principle. *Entropy* **2021**, *23*, 96. https://doi.org/10.3390/e23010096

Received: 11 December 2020 Accepted: 9 January 2021 Published: 11 January 2021




The first part is rather a mathematical procedure of finding a maximum subject to constraints. This is done by the *method of Lagrange multipliers*, i.e., by defining a Lagrange function of the form

*Lagrange f unction* = *entropy* − (*Lagrange multiplier*) · (*constraint*)

At this stage, the role of the Lagrange multipliers is to ensure the fulfillment of the constraints, as the multipliers are determined from the set of equations obtained from the maximization of the Lagrange function. This procedure is known in statistics as *Softmax*, a method used to infer a distribution from given data. Shore and Johnson [34,35] therefore studied the MEP as a statistical inference procedure and established a set of consistency axioms. Shore and Johnson's work sparked a debate about whether the MEP for generalized entropies can also be understood as a statistical inference method satisfying the consistency requirements [24,36–41]. In [42], it was shown that the class of entropies satisfying the original Shore–Johnson axioms is wider than previously thought. Moreover, in [43], the connection between the Shore–Johnson axioms and the Shannon–Khinchin axioms was investigated, and the equivalence of the information-theoretic and statistical-inference axiomatics was established.
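To illustrate the first, purely mathematical part of the MEP, here is a minimal numerical sketch (with a freely chosen illustrative four-level spectrum and target energy) for Shannon entropy with a linear constraint: the maximizer is the familiar softmax/Boltzmann form, and the multiplier β is found by bisection so that the energy constraint is fulfilled.

```python
import numpy as np

def maxent_boltzmann(E, beta):
    """MaxEnt distribution for Shannon entropy + linear constraint (softmax)."""
    w = np.exp(-beta * (E - E.min()))     # shift for numerical stability
    return w / w.sum()

def solve_beta(E, E_target, lo=-50.0, hi=50.0):
    """Bisect on beta so that <E> matches the energy constraint."""
    f = lambda b: maxent_boltzmann(E, b) @ E - E_target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E = np.array([0.0, 1.0, 2.0, 3.0])        # illustrative energy spectrum
beta = solve_beta(E, E_target=1.0)        # multiplier enforcing <E> = 1
p = maxent_boltzmann(E, beta)
print(p, p @ E)                           # normalized distribution with <E> = 1
```

At this stage β is nothing but a root of the constraint equation; its physical meaning only enters in the second part of the MEP.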

In the second part, the physical interpretation of entropy starts to arise. Similarly to Lagrangian mechanics, where the Lagrangian is the difference between kinetic and potential energy and the Lagrange multipliers play the role of normal forces associated with the constraints, here the entropy becomes a thermodynamic state variable. For Shannon entropy and linear constraints, the Lagrange multipliers become the inverse temperature and the free entropy, respectively.

The main aim of this paper is to discuss the relation between points (I) and (II). In the first part, it is possible to find a whole class of entropic functionals and constraints leading to the same MaxEnt distribution. However, in the second part, different entropies and/or constraints lead to different thermodynamics and to different relations between physical quantities and Lagrange multipliers. The two main messages of this paper are therefore the following: first, the two parts of the MEP must be carefully distinguished; second, the transformations of the entropy and the constraints that leave the MaxEnt distribution invariant induce a well-defined transformation relation between the corresponding Lagrange multipliers.


We call the latter transformation relation *calibration invariance* of the MaxEnt distribution. A straightforward consequence is that in order to fully determine the statistical properties of a thermal system in equilibrium, it is not enough to measure the statistical distribution of energies.

The rest of the paper is organized as follows. In the next section, we briefly discuss the main aspects of MEP for the case of general entropic functional and general constraints. In the following two sections, we introduce two simple transformations of entropic functional (Section 3) and constraints (Section 4) that lead to the same MaxEnt distribution and derive transformations between the Lagrange multipliers. These transformations form a group. After the general derivation, we provide a few simple examples for each case. The last section is devoted to conclusions.

#### **2. Maximum Entropy Principle in Statistical Physics**

The maximum entropy principle is a way of obtaining a representative probability distribution from a limited amount of information. Our aim is to find the probability distribution of the system $P = \{p_i\}_{i=1}^n$ under a set of given constraints. In the simplest case, the principle can be formulated as follows.

**Maximum entropy principle:** *Maximize the entropy $S(P)$ under the normalization constraint $f_0(P) = 0$ and the energy constraint $f_E(P) = 0$.*

The normalization condition is considered in the regular form, i.e., $f_0(P) = \sum_i p_i - 1 = 0$. Moreover, we have a class of constraints which originally described the average energy of the system; therefore, we call them *energy constraints*. We consider only one energy constraint for simplicity, although there can be more of them, and they do not have to constrain only the internal energy but also other thermodynamic quantities. In the original formulation, the energy constraint is linear in the probabilities, i.e.,

$$f_E(P) = \sum_i p_i E_i - E = \langle E \rangle - E, \tag{1}$$

but it can generally be any nonlinear function of the probabilities; escort means provide an example. A large class of energy constraints can be written in a *separable form*, which means that $f_E(P) = \mathcal{E}(P) - E$, i.e., in a form expressing the "expected" internal energy (a macroscopic variable) as a function of the probability distribution (the microscopic variables). This class of constraints plays a dominant role in thermodynamic systems.

In order to find a solution of the maximum entropy principle, we use the method of Lagrange multipliers, i.e., we maximize the Lagrange function

$$\mathcal{L}(P; \alpha, \beta) = S(P) - \alpha f_0(P) - \beta f_E(P) \tag{2}$$

The maximization procedure leads to the set of equations

$$\begin{cases} \dfrac{\partial \mathcal{L}(P; \alpha, \beta)}{\partial p_i} = 0 \quad \forall\, i \in \{1, \dots, n\} \\[2mm] \dfrac{\partial \mathcal{L}(P; \alpha, \beta)}{\partial \alpha} = -f_0(P) = 0 \\[2mm] \dfrac{\partial \mathcal{L}(P; \alpha, \beta)}{\partial \beta} = -f_E(P) = 0 \end{cases} \tag{3}$$

from which we determine the resulting MaxEnt distribution. In order to obtain a unique solution, we require that the entropic functional be a Schur-concave symmetric function [42].

As a consequence, we obtain the values of the Lagrange multipliers $\alpha$ and $\beta$. From the strictly mathematical point of view, Lagrange multipliers are just auxiliary parameters to be solved from the set of Equations (3). However, in physics, Lagrange parameters also have a physical interpretation. In Lagrangian mechanics, they play the role of normal forces associated with the constraints. Similarly, in ordinary statistical mechanics based on Shannon entropy $H(P) = -\sum_i p_i \log p_i$ and the linear constraint (1), the Lagrange multipliers have a particular physical interpretation:

$$\beta = \frac{1}{T} \quad \text{(inverse temperature)}, \tag{4}$$

$$\alpha = S - \frac{1}{T} E \quad \text{(free entropy)}. \tag{5}$$

Note that the free entropy is, similarly to Helmholtz free energy, a Legendre transform of entropy w.r.t. internal energy. For the case of ordinary thermodynamics (Shannon entropy and linear constraints), it is equal to the logarithm of the partition function.
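As a quick numerical sanity check of this last statement (a sketch with an arbitrary illustrative spectrum and inverse temperature), one can verify that for Shannon entropy with a linear constraint the free entropy $\psi = S - \beta E$ indeed equals the logarithm of the partition function:

```python
import numpy as np

E = np.array([0.0, 0.5, 1.2, 2.0])     # illustrative energy levels
beta = 1.3                              # an arbitrary inverse temperature

Z = np.exp(-beta * E).sum()             # partition function
p = np.exp(-beta * E) / Z               # Boltzmann (MaxEnt) distribution

S   = -(p * np.log(p)).sum()            # Shannon entropy
U   = p @ E                             # internal energy <E>
psi = S - beta * U                      # free entropy, psi = S - beta*E
print(psi, np.log(Z))                   # the two coincide: psi = log Z
```

The agreement is exact (up to rounding), since $\log p_i = -\beta E_i - \log Z$ implies $S = \beta \langle E \rangle + \log Z$.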

This interpretation is valid only in this case. When we use a different entropy functional or different constraints, these relations between the Lagrange multipliers and thermodynamic quantities are no longer valid. This is the case even when the resulting MaxEnt distribution is the same.

The main aim of this paper is to show how the invariance of MaxEnt distribution affects the Lagrange multipliers and their relations to thermodynamic quantities. Let us now solve Equation (3). The first set of equations leads to

$$\frac{\partial S(P)}{\partial p_i} - \alpha \frac{\partial f_0(P)}{\partial p_i} - \beta \frac{\partial f_E(P)}{\partial p_i} = 0. \tag{6}$$

Let us assume the normalization in the usual form, which leads to $\frac{\partial f_0(P)}{\partial p_i} = 1$. Moreover, let us consider a separable energy constraint, so $\frac{\partial f_E(P)}{\partial p_i} = \frac{\partial \mathcal{E}(P)}{\partial p_i}$. The resulting probability distribution can be expressed as

$$p_i^\star = \left( \frac{\partial S}{\partial p_i} \right)^{(-1)} \left[ \alpha + \beta \frac{\partial \mathcal{E}(P)}{\partial p_i} \right], \tag{7}$$

where $(-1)$ denotes the inverse function of $\partial S / \partial p_i$ (provided it exists and is unique). We can express $\alpha$ by multiplying the equation by $p_i$ and summing over $i$, which leads to

$$\alpha = \langle \nabla_P S(P) \rangle - \beta \langle \nabla_P \mathcal{E}(P) \rangle \tag{8}$$

where $\langle X \rangle = \sum_i x_i p_i$ and $\nabla_P = \left( \frac{\partial}{\partial p_1}, \dots, \frac{\partial}{\partial p_n} \right)$. By plugging this back into the previous equation, we get $\beta$ as

$$\beta = \frac{\Delta_i(\nabla_P S(P))}{\Delta_i(\nabla_P \mathcal{E}(P))} \tag{9}$$

where $\Delta_i(X) = x_i - \langle X \rangle$ is the difference from the average.

The solution of Equation (3) depends on the internal energy *E*. However, in thermodynamics it is natural to invert the relation *β* = *β*(*E*) and express the relevant quantities in terms of *β*, so *E* = *E*(*β*). With that, we can calculate dependence of entropy on *β*:

$$\frac{\partial S}{\partial \beta} = \sum_i \frac{\partial S}{\partial p_i} \frac{\partial p_i}{\partial \beta} = \sum_i \left( \alpha + \beta \frac{\partial \mathcal{E}(P)}{\partial p_i} \right) \frac{\partial p_i}{\partial \beta} = \beta \sum_i \frac{\partial f_E}{\partial p_i} \frac{\partial p_i}{\partial \beta} = \beta \left( -\frac{\partial f_E}{\partial E} \frac{\partial E}{\partial \beta} \right) \tag{10}$$

where we used $\sum_i \frac{\partial p_i}{\partial \beta} = 0$, which follows from the normalization. For separable energy constraints, $\frac{\partial f_E}{\partial E} = -1$, so we obtain the well-known relation

$$\frac{\partial S}{\partial \beta} = \beta \frac{\partial E}{\partial \beta} \quad \Rightarrow \quad \beta = \frac{\partial S}{\partial E}. \tag{11}$$
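This relation is easy to check numerically for the Shannon case; the sketch below (with an illustrative spectrum) compares the finite-difference ratio $(\mathrm{d}S/\mathrm{d}\beta)/(\mathrm{d}E/\mathrm{d}\beta)$ against $\beta$ itself.

```python
import numpy as np

def boltzmann(E, beta):
    w = np.exp(-beta * E)
    return w / w.sum()

def S_and_U(E, beta):
    """Shannon entropy and internal energy at a given beta."""
    p = boltzmann(E, beta)
    return -(p * np.log(p)).sum(), p @ E

E = np.array([0.0, 1.0, 2.5])        # illustrative spectrum
beta, h = 0.7, 1e-6                  # test point and finite-difference step

S1, U1 = S_and_U(E, beta - h)
S2, U2 = S_and_U(E, beta + h)
dS_dE = (S2 - S1) / (U2 - U1)        # (dS/dbeta) / (dE/dbeta) = dS/dE
print(dS_dE, beta)                   # the two agree
```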

Let us now define the Legendre conjugate of entropy called *free entropy* (also called Jaynes parameter [44] or Massieu function [45]):

$$\psi = S - \frac{\partial S}{\partial E} E = S - \beta E \tag{12}$$

Free entropy is connected to Helmholtz free energy as *ψ* = −*βF*. The difference between *α* and *ψ* can be expressed as

$$\psi - \alpha = (S - \langle \nabla_P S \rangle) - \beta \, (E - \langle \nabla_P \mathcal{E} \rangle) \tag{13}$$

Therefore, we can understand the difference $\psi - \alpha$ as the Legendre transform of $\psi$ with respect to $P$. From this, we see that the difference between $\psi$ and $\alpha$ is a constant (not depending on thermodynamic quantities) if two independent conditions are fulfilled, namely $E = \langle \nabla_P \mathcal{E}(P) \rangle$ and $S = \langle \nabla_P S \rangle + a$. The former condition leads to linear energy constraints, while the latter leads to the conclusion that the entropy must be of trace form $S(P) = \sum_i g(p_i)$. Moreover, the function $g$ has to fulfill the following equation,

$$g(x) - a x = x\, g'(x) \tag{14}$$

leading to $g(x) = -a x \log(x) + b x$, which is equivalent to Shannon entropy.
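The stated solution can be checked directly: $g(x) = -a x \log(x) + b x$ satisfies the identity $g(x) - a x = x\, g'(x)$ term by term. A short numerical sketch with arbitrary constants $a$, $b$:

```python
import numpy as np

a, b = 1.7, 0.4                            # arbitrary constants
g  = lambda x: -a * x * np.log(x) + b * x  # candidate solution
gp = lambda x: -a * np.log(x) - a + b      # its derivative g'(x), by hand

x = np.linspace(0.05, 0.95, 7)             # sample probabilities in (0, 1)
lhs = g(x) - a * x
rhs = x * gp(x)
print(np.max(np.abs(lhs - rhs)))           # ~0: the identity holds
```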

In the next sections, we will explore how the transformation of the entropy and the energy constraint that leaves the MaxEnt distribution invariant affects the Lagrange multipliers and their relation to thermodynamic quantities.

#### **3. Calibration Invariance of MaxEnt Distribution with Entropy Transformation**

The simplest transformation of the Lagrange function that leaves the MaxEnt distribution invariant is to consider an arbitrary increasing function of the entropy, i.e., we replace $S(P)$ by $c(S(P))$, where $c'(x) > 0$. Let us note that this transformation preserves the uniqueness of the MEP, because it is easy to show that if $S(P)$ is Schur-concave, then $c(S(P))$ is also Schur-concave [42], which is a sufficient condition for uniqueness of the MaxEnt distribution.
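That the maximizer is unchanged under an increasing reparameterization $c$ can be illustrated numerically. The sketch below (assuming SciPy's SLSQP solver is available, with an illustrative three-level spectrum and the choice $c(S) = e^S$) maximizes both $S$ and $c(S)$ under the same constraints and obtains the same distribution.

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0])                 # illustrative spectrum
cons = [{'type': 'eq', 'fun': lambda p: p.sum() - 1.0},   # f_0(P) = 0
        {'type': 'eq', 'fun': lambda p: p @ E - 0.8}]     # f_E(P) = 0
bnds = [(1e-9, 1.0)] * 3
x0 = np.full(3, 1.0 / 3.0)

H  = lambda p: -(p * np.log(p)).sum()         # Shannon entropy S(P)
cS = lambda p: np.exp(H(p))                   # increasing function c(S) = e^S

p1 = minimize(lambda p: -H(p),  x0, method='SLSQP',
              bounds=bnds, constraints=cons).x
p2 = minimize(lambda p: -cS(p), x0, method='SLSQP',
              bounds=bnds, constraints=cons).x
print(np.round(p1, 4), np.round(p2, 4))       # the same MaxEnt distribution
```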

In this case, the Lagrange equations are adjusted as follows,

$$c'(S(P)) \frac{\partial S(P)}{\partial p_i} - \alpha_c \frac{\partial f_0(P)}{\partial p_i} - \beta_c \frac{\partial \mathcal{E}(P)}{\partial p_i} = 0 \tag{15}$$

leading to

$$\alpha_c = c'(S(P)) \langle \nabla_P S(P) \rangle - \beta_c \langle \nabla_P \mathcal{E}(P) \rangle \tag{16}$$

and

$$\beta_c = c'(S(P)) \frac{\Delta_i(\nabla_P S(P))}{\Delta_i(\nabla_P \mathcal{E}(P))} \tag{17}$$

so we find that the function $c$ causes a *rescaling* of $\alpha$ and $\beta$:

$$\alpha_c = c'(S(P))\, \alpha \tag{18}$$

$$\beta_c = c'(S(P))\, \beta \tag{19}$$

while their ratio remains unchanged, i.e., $\alpha_c / \beta_c = \alpha / \beta$. Actually, the set of increasing functions forms a group under composition, because it is easy to show that the Lagrange parameters related to the entropy $c_1(c_2(S(P)))$ satisfy

$$\beta_{c_1 \circ c_2} = c_1'(c_2(S(P))) \cdot c_2'(S(P)) \, \beta = c_1'(c_2(S(P)))\, \beta_{c_2} \tag{20}$$

which can be described by the group operation $(c_1 \circ c_2)' = c_1'(c_2) \cdot c_2'$, i.e., the chain rule.

An important property of this transformation is that it changes the extensive–intensive duality of the conjugated pair of thermodynamic variables and the respective forces, while it maintains the distribution. Notably, by changing the entropic functional from extensive (i.e., $S(n) \sim U(n)$) to non-extensive, it changes $\beta$ from intensive (i.e., size-independent, at least in the thermodynamic limit) to non-intensive, i.e., explicitly size-dependent. This point has been discussed in connection with $q$-non-extensive statistical physics [29,30], and the relation to the zeroth law of thermodynamics was shown in [46]. As one can see from the example below, although Rényi entropy and Tsallis entropy have the same maximizer, the corresponding thermodynamics is different. While Rényi entropy is additive (and therefore extensive for systems where $U(n) \sim n$) and the temperature is intensive, Tsallis entropy is non-extensive, and the corresponding temperature explicitly depends on the size of the system.

Let us finally mention that the difference between free entropy and Lagrange parameter *α* transforms as

$$\psi_c - \alpha_c = \big( c(S) - c'(S) \langle \nabla_P S(P) \rangle \big) - c'(S)\, \beta \big( E - \langle \nabla_P \mathcal{E}(P) \rangle \big) = c'(S)\, (\psi - \alpha) + \big( c(S) - c'(S)\, S \big). \tag{21}$$

While free entropy and other thermodynamic potentials are transformed, the heat change remains invariant under this transformation:

$$\mathrm{d} Q_c = T_c\, \mathrm{d}\, c(S) = \frac{T}{c'(S)}\, c'(S)\, \mathrm{d}S = T\, \mathrm{d}S = \mathrm{d}Q. \tag{22}$$

**Example 1.** *We exemplify the calibration invariance on two popular examples of closely related entropies.*

• **Rényi entropy and Tsallis entropy**: *The two most famous examples of generalized entropies are Rényi entropy $R_q(P) = \frac{1}{1-q} \ln \sum_i p_i^q$ and Tsallis entropy $S_q(P) = \frac{1}{1-q} \left( \sum_i p_i^q - 1 \right)$. Their relation can be expressed as*

$$R_q(P) = c_q(S_q(P)) = \frac{1}{1-q} \ln\left[ (1-q) S_q(P) + 1 \right] \tag{23}$$

*and therefore we obtain that*

$$c_q'(S_q(P)) = \frac{1}{1 + (1-q) S_q} = \frac{1}{\sum_i p_i^q}. \tag{24}$$

*The difference between free entropy and α can be obtained as*

$$\psi_R - \alpha_R = \frac{1}{\sum_i p_i^q} (\psi_S - \alpha_S) + \left( R_q(P) - \frac{S_q(P)}{\sum_i p_i^q} \right). \tag{25}$$

*One can therefore see that even though Rényi and Tsallis entropy lead to the same MaxEnt distribution, their thermodynamic quantities, such as temperature or free entropy, are different. Whether a system follows Rényi or Tsallis entropy depends on additional properties, such as the (non-)extensivity and (non-)intensivity of the thermodynamic quantities.*

• **Shannon entropy and entropy power**: *A similar example is provided by Shannon entropy $H(P) = \sum_i p_i \ln(1/p_i)$ and the entropy power $\mathcal{P}(P) = \prod_i (1/p_i)^{p_i}$. The relation between them is simply*

$$H(P) = c(\mathcal{P}(P)) = \log(\mathcal{P}(P)), \tag{26}$$

*so we obtain that*

$$c'(\mathcal{P}(P)) = 1/\mathcal{P}(P) = \exp(-H(P)). \tag{27}$$

*For the difference between free entropy and α, we obtain that*

$$0 = \psi_H - \alpha_H = \frac{1}{\mathcal{P}(P)} (\psi_{\mathcal{P}} - \alpha_{\mathcal{P}}) + (H(P) - 1) \tag{28}$$

*from which we get that*

$$\psi_{\mathcal{P}} - \alpha_{\mathcal{P}} = \mathcal{P}(P) \, (1 - \log \mathcal{P}(P)). \tag{29}$$

*Therefore, we see that even though the MaxEnt distribution remains unchanged, the relation between α and the free entropy is different.*
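Both correspondences in Example 1 are exact identities and are easy to verify numerically; a small sketch with an arbitrarily chosen distribution and index $q$:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.15, 0.05])      # any normalized distribution
q = 0.7

S_q = (np.sum(p**q) - 1) / (1 - q)        # Tsallis entropy
R_q = np.log(np.sum(p**q)) / (1 - q)      # Renyi entropy
print(R_q, np.log((1 - q) * S_q + 1) / (1 - q))   # Eq. (23): the two agree

H = -(p * np.log(p)).sum()                # Shannon entropy
P_pow = np.prod((1 / p)**p)               # entropy power
print(H, np.log(P_pow))                   # Eq. (26): the two agree
```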

#### **4. Calibration Invariance of MaxEnt Distribution with Constraints Transformation**

Similarly, one can uncover the invariance of the MaxEnt distribution when the constraints are transformed in a certain way. Generally, if two sets of constraints define the same domain, the maximum entropy principle should lead to equivalent results. We will not treat the fully general case, but focus on a specific situation that is quite interesting for thermodynamic applications. Let us recall the two conditions we assume: the normalization $f_0(P) = 0$ and the energy constraint $f_E(P) = 0$, and let us investigate the latter. Similarly to the previous case, it is possible to take any function $g$ of $f_E(P)$ for which $g(y) = 0$ if $y = 0$. More generally, we can also take into account the normalization constraint and replace the original energy condition by

$$g(f_0(P), f_E(P)) = 0 \tag{30}$$

for any *g*(*x*, *y*), for which *g*(*x*, *y*) = 0 ⇒ *y* = 0. Let us investigate the Maximum entropy principle for this case. We can express the Lagrange function as

$$\mathcal{L}(P) = S(P) - \alpha_g f_0(P) - \beta_g\, g\left( f_0(P), f_E(P) \right) \tag{31}$$

which leads to a set of equations

$$\frac{\partial S(P)}{\partial p_i} - \alpha_g \frac{\partial f_0(P)}{\partial p_i} - \beta_g \left[ G^{(1,0)} \frac{\partial f_0(P)}{\partial p_i} + G^{(0,1)} \frac{\partial \mathcal{E}(P)}{\partial p_i} \right] = 0 \tag{32}$$

where $G^{(1,0)} = \left. \frac{\partial g(x,y)}{\partial x} \right|_{(0,0)}$ and $G^{(0,1)} = \left. \frac{\partial g(x,y)}{\partial y} \right|_{(0,0)}$. We again take into account that $\frac{\partial f_0(P)}{\partial p_i} = 1$, multiply the equations by $p_i$, and sum over $i$. This gives us

$$\alpha_g = \langle \nabla_P S(P) \rangle - \beta_g \left[ G^{(1,0)} + G^{(0,1)} \langle \nabla_P \mathcal{E}(P) \rangle \right]. \tag{33}$$

By plugging $\alpha_g$ back, we end up with the relation for $\beta_g$:

$$\beta_g = \frac{1}{G^{(0,1)}} \frac{\Delta_i(\nabla_P S(P))}{\Delta_i(\nabla_P \mathcal{E}(P))} \,. \tag{34}$$

For $\alpha_g$, we end up with

$$\alpha_g = \langle \nabla_P S(P) \rangle - \frac{\Delta_i(\nabla_P S(P))}{\Delta_i(\nabla_P \mathcal{E}(P))} \langle \nabla_P \mathcal{E}(P) \rangle \left[ 1 + \frac{G^{(1,0)}}{G^{(0,1)}} \frac{1}{\langle \nabla_P \mathcal{E}(P) \rangle} \right]. \tag{35}$$

Thus, we end up again with a rescaling of $\alpha_g$ and $\beta_g$, which reads

$$\alpha_g(\alpha, \beta) = \alpha - \frac{G^{(1,0)}}{G^{(0,1)}}\, \beta \,, \tag{36}$$

$$\beta_g(\beta) = \frac{\beta}{G^{(0,1)}}. \tag{37}$$

The ratio of Lagrange multipliers is also transformed, so we get

$$\frac{\alpha_g}{\beta_g} = G^{(0,1)} \frac{\alpha}{\beta} - G^{(1,0)}. \tag{38}$$

Again, the set of all functions fulfilling the aforementioned condition forms a group. The group operation can be described by the relation between the coefficients $G^{(1,0)}$ and $G^{(0,1)}$ for the composite function $g(x, y) = g_1(x, g_2(x, y))$. We obtain

$$G^{(1,0)} = G_1^{(1,0)} + G_1^{(0,1)} G_2^{(1,0)} \tag{39}$$

$$G^{(0,1)} = G_1^{(0,1)} G_2^{(0,1)} \tag{40}$$

which leads to group relations

$$\alpha_g(\alpha, \beta) = \alpha_{g_1}\big( \alpha_{g_2}(\alpha, \beta), \beta_{g_2}(\beta) \big) = \alpha_{g_2}(\alpha, \beta) - \frac{G_1^{(1,0)}}{G_1^{(0,1)}}\, \beta_{g_2}(\beta) \tag{41}$$

$$\beta_g(\beta) = \frac{\beta_{g_2}(\beta)}{G_1^{(0,1)}}. \tag{42}$$

**Example 2.** *Here we mention two simple examples of the aforementioned transformation.*

• **Energy shift:** *Under this scheme, we can assume a constant shift of the energy spectrum. Let us rewrite the constraint $f_E(P)$ in the following form,*

$$f_E(P) = \sum_i p_i E_i - E = \sum_i p_i (E_i - E') - (E - E') \tag{43}$$

*which allows us to identify the function g*(*x*, *y*) *as*

$$g(x, y) = y - E' x \tag{44}$$

*We obtain $G^{(1,0)} = -E'$ and $G^{(0,1)} = 1$, which means that $\alpha_g = \alpha + \beta E'$.*

• **Latent escort means:** *Apart from linear means, it is possible to use generalized approaches. One such example is provided by the so-called escort mean:*

$$E_q = \langle E \rangle_q = \frac{\sum_i p_i^q E_i}{\sum_i p_i^q} \tag{45}$$

*which for $q = 1$ becomes an ordinary linear mean when $P = \{p_i\}_{i=1}^n$ is normalized to one. When we use this class of means in the maximum entropy principle, the normalization is enforced by the normalization condition $f_0(P) = 0$; therefore, for $q = 1$ we obtain the same results. Nevertheless, for $q = 1$ the energy constraint with the escort distribution is actually expressed as*

$$\frac{\sum_i p_i E_i}{\sum_i p_i} - E \tag{46}$$

*which can be understood in the same way as considered before in this section, i.e., as a combination of the normalization constraint and the energy constraint. In this case, the function $g$ has the following form,*

$$g(x, y) = \frac{y + E}{x + 1} - E.\tag{47}$$

*Therefore, we obtain $G^{(1,0)} = -E$ and $G^{(0,1)} = 1$, which corresponds to the previous example with $E' = E$. The latent escort mean can therefore be understood, in terms of the MaxEnt procedure, as a shift of the energy spectrum by its average energy.*
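The energy-shift example can be checked numerically for the Shannon case: shifting the spectrum by a constant leaves the Boltzmann distribution unchanged, while the log-partition function (the free entropy) picks up exactly $\beta E'$. A small sketch with illustrative values:

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.5])          # illustrative spectrum
beta, shift = 0.9, 1.25                      # inverse temperature, shift E'

Z  = np.exp(-beta * E).sum()                 # original partition function
Zs = np.exp(-beta * (E - shift)).sum()       # shifted spectrum E_i - E'

p  = np.exp(-beta * E) / Z                   # Boltzmann distribution
ps = np.exp(-beta * (E - shift)) / Zs        # distribution, shifted spectrum

print(np.allclose(p, ps))                    # True: distribution unchanged
print(np.log(Zs) - np.log(Z), beta * shift)  # log Z shifts by beta * E'
```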

#### **5. Conclusions**

In this paper, we have discussed the calibration invariance of the MEP, which means that for a given MaxEnt distribution there exists a whole class of entropies and constraints that lead to different thermodynamics (thermodynamic quantities and response coefficients generally behave differently; for example, an intensive temperature can turn into a temperature that explicitly depends on the size of the system). We have stressed that the MEP procedure consists of two parts: the first part, determining the MaxEnt distribution, is rather a mathematical tool, while the second part, making the connection between Lagrange multipliers and thermodynamic quantities, is specific to the application of the MEP in statistical physics. Indeed, the paper does not cover all possible transformations leading to the same MaxEnt distribution (let us mention, at least, the additive duality of Tsallis entropy, where maximizing $S_{2-q}$ with a linear constraint leads to the same result as maximizing $S_q$ with escort constraints [47]). The main lesson of this paper is that in order to fully determine a thermal system in equilibrium, we need to measure not only the probability distribution, but also all relevant thermodynamic quantities (such as the entropy). Moreover, the transformations between Lagrange parameters and their connection to thermodynamic potentials can be useful in situations when one is not certain about the exact form of the entropy.

**Funding:** This research was funded by the Austrian Science Fund (FWF), project I 3073, the Austrian Research Promotion Agency (FFG), project 882184, and by the Grant Agency of the Czech Republic (GAČR), grant No. 19-16066S.

**Acknowledgments:** I would like to thank Petr Jizba for helpful discussions.

**Conflicts of Interest:** The author declares no conflict of interest.
