Article

Tsallis Entropy, Escort Probability and the Incomplete Information Theory

Department of Physics, Zanjan University, P.O. Box 45196-313, Zanjan, Iran
* Author to whom correspondence should be addressed.
Entropy 2010, 12(12), 2497-2503; https://doi.org/10.3390/e12122497
Submission received: 31 October 2010 / Revised: 25 November 2010 / Accepted: 27 November 2010 / Published: 21 December 2010
(This article belongs to the Special Issue Advances in Statistical Mechanics)

Abstract

Non-extensive statistical mechanics appears to be a powerful way to describe complex systems. The Tsallis entropy, the core of this theory, has remained an unproven assumption, and many authors have tried to derive it axiomatically. Here we follow the work of Wang (EPJB, 2002) and use the incomplete information theory to retrieve the Tsallis entropy. We modify the incomplete information axioms to incorporate the escort probability and obtain the correct form of the Tsallis entropy, in contrast with the form obtained in Wang’s work.

1. Introduction

Entropy is the key concept for extracting the universal features of a system from its microscopic details. In statistical mechanics, two forms are used to describe the concept of entropy. In the first form, the entropy is defined as the logarithm of the total number of microstates in phase/Hilbert space, multiplied by a constant coefficient (the Boltzmann viewpoint). In the second form it is written in terms of the probabilities of occupying the microstates (the Gibbs viewpoint) [1]. Using simple algebraic manipulations, which can be found in every textbook of statistical mechanics, the equality of these two definitions is proved. In information theory, the entropy measures the uncertainty in an ensemble. Shannon derived a form for the entropy which is identical to the Gibbs relation, using axioms of information theory that are intuitively correct for non-interacting systems in physics [2]. Jaynes showed that all the results of statistical mechanics are an immediate consequence of the Shannon entropy [3]. In spite of all the successes of the Boltzmann-Gibbs-Shannon (BGS) entropy in deriving the thermodynamics of systems from their mechanical details under equilibrium conditions, it is not able to describe the complexity of natural systems. In 1988 Tsallis introduced a new definition of entropy which successfully describes the statistical features of complex systems [4]:
$$S_q = k\,\frac{1-\sum_{i=1}^{N} p_i^{\,q}}{q-1}$$
where $k$ is a positive constant (with an appropriate choice of units we can set it to one), $p_i$ is the occupation probability of the $i$-th state of the system, $N$ counts the known microstates of the system and $q$ is a positive real parameter. The Tsallis entropy is non-extensive, which means that if two identical systems are combined, the entropy of the combined system is not equal to the sum of the entropies of its subsystems. A simple calculation shows that in the limit $q\to 1$ the Tsallis entropy tends to the BGS entropy.
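A minimal sketch of that limit (our own, standard expansion): writing $p_i^{\,q}=p_i\,e^{(q-1)\ln p_i}$ and expanding to first order in $q-1$,
$$S_q = k\,\frac{1-\sum_{i=1}^{N} p_i\big(1+(q-1)\ln p_i\big)+\mathcal{O}\big((q-1)^2\big)}{q-1}\;\xrightarrow{\,q\to 1\,}\;-k\sum_{i=1}^{N} p_i\ln p_i ,$$
where the normalization $\sum_i p_i=1$ removes the zeroth-order term in the numerator.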
Non-extensive statistical mechanics, which is established by optimizing the Tsallis entropy in the presence of appropriate constraints, can describe the properties of many physical systems [5,6,7,8]. Since the thermodynamic limit is a meaningless concept for nanosystems, many familiar thermodynamical results cannot be derived for them; it has been shown that nanosystems obey non-extensive statistical mechanics [9,10]. The existence of long-range interactions between a system’s entities violates the non-interacting-components condition required to derive the BGS entropy; in this case simulations show that the entropy and energy are non-extensive [11,12]. There are many natural or social systems with small size or long-range interactions between their components, and therefore non-extensive statistical mechanics can be used to study such systems. It was found that the wealth distribution for an economic agent in a conservative exchange market can be classified by the Tsallis entropic index q, which distinguishes two different regimes: the large and the small size market [13]. Tsallis statistics can also be used to obtain the spatio-temporal and magnitude distributions of seismic activity [14,15]. Many other applications of non-extensive statistical mechanics are found in the references of the Tsallis book [8].
Several attempts have been made to derive the Tsallis entropy from the axioms of information theory. Among them the works of Wang are the most plausible [16,17], but he obtained a different form for the non-extensive entropy instead of the usual form of the Tsallis entropy,
$$S_q=\frac{1-\sum_{i=1}^{N} p_i}{1-q}$$
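A side remark of ours (not stated in the paper): this expression can be read as a $q$-deformed Shannon entropy. Using the standard $q$-logarithm $\ln_q x=\frac{x^{1-q}-1}{1-q}$ together with Wang’s incomplete normalization $\sum_i p_i^{\,q}=1$ (introduced in the next section),
$$-\sum_{i=1}^{N} p_i^{\,q}\,\ln_q p_i=\frac{\sum_{i=1}^{N} p_i^{\,q}-\sum_{i=1}^{N} p_i}{1-q}=\frac{1-\sum_{i=1}^{N} p_i}{1-q}=S_q .$$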
Here we follow Wang’s method in using the axioms of the incomplete information theory, except that we employ the escort probability, which will be introduced in the third section. The usual form of the Tsallis entropy, expressed in terms of the escort probability, is the outcome of our attempt.
We discuss incomplete knowledge and its relation to information theory in the next section. The third section is devoted to introducing the escort probability and deriving the Tsallis entropy from the axioms of the incomplete information theory. Finally we summarize our work and discuss its advantages and disadvantages.

2. Incomplete Information Theory

The set of outcomes of a random variable, $\{x_1,\dots,x_N\}$, together with their corresponding occurrence probabilities, $\{p_1,\dots,p_N\}$, is called an ensemble. $N$ is the number of distinguishable outcomes. The main goal of statistical mechanics is the determination of the outcome probabilities subject to some constraints. The constraints are usually given as equations of the form $f(x_1,\dots,x_N,p_1,\dots,p_N)=0$. According to the maximum entropy principle, the entropy, which is a function of the probabilities, $S(p_1,\dots,p_N)$, should be maximized in the equilibrium condition. The method of Lagrange multipliers can be used to optimize the entropy subject to the constraints, as illustrated below.
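As a standard illustration of this procedure (a textbook sketch, not part of the original text; the energies $E_i$ and mean energy $U$ are assumed constraint data), maximizing the BGS entropy subject to normalization and a mean-energy constraint,
$$\Lambda=-\sum_i p_i\ln p_i+\alpha\Big(\sum_i p_i-1\Big)-\beta\Big(\sum_i p_i E_i-U\Big),\qquad \frac{\partial\Lambda}{\partial p_i}=0\;\Longrightarrow\; p_i=\frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}} ,$$
reproduces the canonical (Boltzmann) distribution.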
The functional form of the entropy is derived from the axioms of the information theory. These axioms address the entropy of a microcanonical ensemble, in which all probabilities are equal to each other. In this case the entropy equals the Hartley measure, $I(N)=S(\tfrac{1}{N},\dots,\tfrac{1}{N})$. For other cases the entropy is obtained directly from the Hartley measure. The axioms of the information theory are,
(i) $I(1)=0$,
(ii) $I(e)=1$,
(iii) $I(N)\le I(N+1)$,
(iv) $I(N\times M)=I(N)+I(M)$,
(v) $I(N)=S(p_1,\dots,p_n)+\sum_{a=1}^{n} p_a I(N_a)$.
The first axiom fixes the value zero, i.e., for a certain outcome the entropy is equal to zero. The second axiom determines a unit in which to measure the uncertainty. The Hartley measure is an increasing function of the number of outcomes, as stated in the third axiom. The fourth axiom states the extensivity of the Hartley measure and the entropy. The fifth axiom is called the composition rule; it states that if we partition the set of outcomes into $n$ distinct subsets, $\sum_{a=1}^{n} N_a = N$, then each subset can be considered as a new event whose probability of occurrence is $p_a$ for the $a$-th subset. The uncertainty of the mother set in terms of its outcomes equals the uncertainty of the mother set in terms of its subsets plus the weighted sum of the uncertainties of the subsets. The fifth axiom allows us to derive the entropy for an ensemble with unequal outcome probabilities. It can be shown that the unique function satisfying the above axioms is $I(N)=\ln N$; the Shannon entropy then follows by a simple calculation,
$$S(p_1,\dots,p_n)=-\sum_{a=1}^{n} p_a\ln p_a$$
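A brief sketch of that calculation (our filling-in of the standard route): take $N$ equiprobable outcomes partitioned into subsets of sizes $N_a$, so that $p_a=N_a/N$; axiom (v) with $I(N)=\ln N$ then gives
$$S(p_1,\dots,p_n)=I(N)-\sum_{a=1}^{n} p_a I(N_a)=\ln N-\sum_{a=1}^{n} p_a\ln N_a=-\sum_{a=1}^{n} p_a\ln\frac{N_a}{N}=-\sum_{a=1}^{n} p_a\ln p_a ,$$
where $\sum_a p_a=1$ has been used to write $\ln N=\sum_a p_a\ln N$.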
The last two axioms rely on complete knowledge of the subsets, $\sum_{a=1}^{n} p_a = 1$. Sometimes this assumption fails [16,17,18,19] and the sum of the event probabilities becomes less than unity, $\sum_{a=1}^{n} p_a < 1$, which means that we do not know the set of events completely and some events may remain unknown. For example, earthquakes with magnitude greater than ten on the Richter scale have not been recorded yet, but they are possible; our knowledge of seismic events is incomplete in this respect [15]. In his works, Wang assumed that a real parameter $q$ exists such that $\sum_{a=1}^{n} p_a^{\,q} = 1$ (for instance, if only two events with $p_1=p_2=0.3$ are known, this condition holds for $q=\ln 2/\ln(10/3)\approx 0.58$). He also changed the last two axioms of the information theory to include the incomplete information condition [16,17],
(iv′) $I(N\times M)=I(N)+I(M)+(1-q)\,I(N)I(M)$,
(v′) $I(N)=S(p_1,\dots,p_n)+\sum_{a=1}^{n} p_a^{\,q}\, I(N_a)$.
The second axiom is also changed into $I\big((1+(1-q))^{\frac{1}{1-q}}\big)=1$ for simplicity of the algebraic manipulations. It is clear that $I(N)=\frac{N^{1-q}-1}{1-q}$ is a monotonically increasing function and satisfies axiom (iv′), as checked below. The entropy can then be obtained by using axiom (v′),
$$S(p_1,\dots,p_n)=\frac{1-\sum_{a=1}^{n} p_a}{1-q}$$
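A quick consistency check (ours, not spelled out in the paper): the proposed Hartley measure indeed satisfies the modified axioms, since
$$I(N)+I(M)+(1-q)I(N)I(M)=\frac{N^{1-q}-1+M^{1-q}-1+(N^{1-q}-1)(M^{1-q}-1)}{1-q}=\frac{(NM)^{1-q}-1}{1-q}=I(N\times M),$$
and $I\big((1+(1-q))^{\frac{1}{1-q}}\big)=\frac{(2-q)-1}{1-q}=1$.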
The entropy obtained above has a different form from the usual Tsallis entropy. In the following section we introduce the escort probability and modify the axioms of the information theory to take it into account. Using the same method as in Wang’s work, we obtain the Tsallis entropy in terms of the escort probability.

3. Tsallis Entropy in Terms of the Escort Probability

Corresponding to any probability $p_a$ in an incomplete set of probabilities, we can define an effective real probability $\pi_a$ as follows [20,21,22,23],
$$\pi_a=\frac{p_a^{\,q}}{\sum_{b=1}^{n} p_b^{\,q}}$$
where $q$ is a real positive parameter. This is the actual probability which can be measured from empirical data, and it is called the escort probability. In order to include the escort probabilities in the entropy we should change the last two axioms of the information theory. The dependence of the entropy on the escort probabilities represents the incompleteness of our knowledge,
(iv′′) $I(N\times M)=I(N)+I(M)+g(q)\,I(N)I(M)$,
(v′′) $I(N)=S(\pi_1,\dots,\pi_n)+\sum_{a=1}^{n}\pi_a I(N_a)$.
The function $g(q)$ will be determined later as a consequence of the entropy maximization principle.
If we choose $I(N)=\frac{N^{g(q)}-1}{g(q)}$ as a solution satisfying axiom (iv′′), the entropy takes the following form.
$$S(\pi_1,\dots,\pi_n)=\frac{1-\sum_{a=1}^{n}\pi_a^{\,1+g(q)}}{g(q)}$$
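A minimal sketch (ours) of how this entropy reduces to the BGS form: writing $\pi_a^{\,1+g(q)}=\pi_a e^{g(q)\ln\pi_a}\approx\pi_a\big(1+g(q)\ln\pi_a\big)$ for small $g(q)$,
$$\frac{1-\sum_{a=1}^{n}\pi_a^{\,1+g(q)}}{g(q)}\approx\frac{1-\sum_a\pi_a-g(q)\sum_a\pi_a\ln\pi_a}{g(q)}=-\sum_{a=1}^{n}\pi_a\ln\pi_a ,$$
where $\sum_a\pi_a=1$ has been used.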
A straightforward calculation (sketched above) shows that this entropy approaches the BGS form in the limit $g(q)\to 0$. Any result obtained under the incomplete-knowledge condition should approach its counterpart in the complete-information case if we let the function $g(q)$ tend to zero. As an example we consider the canonical ensemble: the above entropy should be maximized subject to the following constraints,
$$\sum_{a=1}^{n}\pi_a=1$$
$$\sum_{a=1}^{n}\pi_a x_a=X$$
Using the method of Lagrange multipliers, the escort probabilities can be derived. To this end we define the following auxiliary function,
$$R(\pi_a)=\frac{1-\sum_{a=1}^{n}\pi_a^{\,1+g(q)}}{g(q)}+\gamma\Big(\sum_{a=1}^{n}\pi_a-1\Big)-\beta\Big(\sum_{a=1}^{n}\pi_a x_a-X\Big)$$
The condition $\delta R(\pi_a)=0$ gives us the stationary form of the escort probability,
$$\pi_a=\frac{\left(1-\frac{\beta x_a}{\gamma}\right)^{\frac{1}{g(q)}}}{\left(\frac{1+g(q)}{\gamma\, g(q)}\right)^{\frac{1}{g(q)}}}$$
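The intermediate step (our own algebra, consistent with the expressions above): setting the derivative of $R$ with respect to $\pi_a$ to zero,
$$-\frac{1+g(q)}{g(q)}\,\pi_a^{\,g(q)}+\gamma-\beta x_a=0\quad\Longrightarrow\quad\pi_a=\left(\frac{g(q)\,(\gamma-\beta x_a)}{1+g(q)}\right)^{\frac{1}{g(q)}},$$
which becomes the expression above after factoring $\gamma$ out of the parenthesis.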
Applying the normalization condition we have,
$$Z=\left(\frac{1+g(q)}{\gamma\, g(q)}\right)^{\frac{1}{g(q)}}=\sum_{a=1}^{n}\left(1-\frac{\beta x_a}{\gamma}\right)^{\frac{1}{g(q)}}$$
In the limit $g(q)\to 0$ this stationary distribution approaches $\pi_a\propto\exp(-\beta x_a)$, provided we put $\gamma=\frac{1}{g(q)}$. To ensure that the derived form of the escort probability maximizes the entropy, we examine the second derivative of the function $R$,
$$\frac{d^2 R(\pi_a)}{d\pi_a^2}=-\big(g(q)+1\big)\,\pi_a^{\,g(q)-1}$$
Requiring this to be negative, so that the stationary point is a maximum, the first criterion for the function $g(q)$ is obtained,
$$g(q)+1>0$$
Monotonicity of the entropy gives us the second criterion,
$$\frac{dg(q)}{dq}>0$$
The above criteria do not determine the function $g(q)$ uniquely. In the special case $g(q)=q-1$, we recover the usual form of the Tsallis entropy,
$$S(\pi_1,\dots,\pi_n)=\frac{1-\sum_{a=1}^{n}\pi_a^{\,q}}{q-1}$$
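As a quick check (ours): the choice $g(q)=q-1$ is compatible with both criteria for the positive parameter $q$, since
$$g(q)+1=q>0,\qquad\frac{dg(q)}{dq}=1>0 .$$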
In the above proof we used $\sum_{a=1}^{n}\pi_a x_a=X$ as the definition of the expectation value of the random variable $x$. Other definitions of the expectation value [24,25] also lead to the same form of the Tsallis entropy.

4. Conclusions

In conclusion, we have modified the axioms of the information theory to include the escort probability as a signature of the incompleteness of our knowledge. Following a method similar to Wang’s, we obtain the usual form of the Tsallis entropy, but in terms of the escort probability.

References

1. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Longmans, Green and Company: New York, NY, USA, 1928.
2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
3. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630.
4. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–488.
5. Tsallis, C. Nonextensive Statistical Mechanics and Its Applications; Abe, S., Okamoto, Y., Eds.; Springer: Berlin, Germany, 2001.
6. Tsallis, C. Nonextensive Entropy: Interdisciplinary Applications; Gell-Mann, M., Tsallis, C., Eds.; Oxford University Press: New York, NY, USA, 2004.
7. Tsallis, C.; Gell-Mann, M.; Sato, Y. Special issue on the nonextensive statistical mechanics. Europhys. News 2005, 36, 186–189.
8. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: Berlin, Germany, 2009.
9. Mohazzabi, P.; Mansoori, G.A. Nonextensivity and nonintensivity in nanosystems: A molecular dynamics simulation. J. Comput. Theor. Nanosci. 2005, 2, 1–10.
10. Mohazzabi, P.; Mansoori, G.A. Why nanosystems and macroscopic systems behave differently. Int. J. Nanosci. Nanotechnol. 2006, 1, 46–53.
11. Grigera, J.R. Extensive and non-extensive thermodynamics. A molecular dynamic test. Phys. Lett. A 1996, 217, 47–51.
12. Anteneodo, C. Nonextensive scaling in a long-range Hamiltonian system. Physica A 2004, 342, 112–118.
13. Darooneh, A.H. Insurance pricing in small size markets. Physica A 2007, 380, 411–417.
14. Darooneh, A.H.; Dadashinia, C. Analysis of the spatial and temporal distribution between successive earthquakes: Nonextensive statistical mechanics viewpoint. Physica A 2008, 387, 3647–3654.
15. Darooneh, A.H.; Mehri, A. A nonextensive modification of the Gutenberg-Richter law: q-stretched exponential form. Physica A 2010, 389, 509–514.
16. Wang, Q.A. Incomplete statistics: Nonextensive generalization of statistical mechanics. Chaos Solitons Fractals 2001, 12, 1431–1437.
17. Wang, Q.A. Nonextensive statistics and incomplete information. Eur. Phys. J. B 2002, 26, 357–368.
18. Rényi, A. On measures of entropy and information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20–30 June 1960; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
19. Rényi, A. Probability Theory; North-Holland: Amsterdam, The Netherlands, 1970.
20. Abe, S. Remark on the escort distribution representation of nonextensive statistical mechanics. Phys. Lett. A 2000, 275, 250–253.
21. Naudts, J. Generalised exponential families and associated entropy functions. Entropy 2008, 10, 131–149.
22. Naudts, J. Parameter estimation in non-extensive thermostatistics. Physica A 2006, 365, 42–49.
23. Campos, D. Rényi and Tsallis entropies for incomplete or overcomplete systems of events. Physica A 2010, 389, 981–992.
24. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Physica A 1998, 261, 534–554.
25. Tsallis, C.; Plastino, A.R.; Alvarez-Estrada, R.F. Escort mean values and the characterization of power-law-decaying probability densities. J. Math. Phys. 2009, 50, 043303.
