Proceeding Paper

Legendre Transformation and Information Geometry for the Maximum Entropy Theory of Ecology †

Pedro Pessoa

Department of Physics, University at Albany (SUNY), Albany, NY 12222, USA
Presented at the 40th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, online, 4–9 July 2021.
Phys. Sci. Forum 2021, 3(1), 1; https://doi.org/10.3390/psf2021003001
Published: 3 November 2021

Abstract

Here I investigate some mathematical aspects of the maximum entropy theory of ecology (METE). In particular, I address the geometrical structure with which information geometry endows METE. As novel results, the macrostate entropy is calculated analytically as the Legendre transformation of the log-normalizer in METE. This result allows for the calculation of the metric terms in the information geometry arising from METE and, as a consequence, of the covariance matrix between METE variables.

1. Introduction

The method of maximum entropy (MaxEnt) is usually associated with Jaynes’ work [1,2,3] connecting statistical physics and the information entropy proposed by Shannon [4]—although its mathematics has been known since Gibbs [5]. It consists of selecting probability distributions by maximizing a functional—namely, entropy—usually under a set of expected-value constraints, arriving at what are known as Gibbs distributions. Since Shore and Johnson [6], MaxEnt has been understood as a general method for inference—see also [7,8,9]—hence it is not surprising that (i) Gibbs distributions are what is known in statistical theory as the exponential family—the only distributions for which sufficient statistics exist (see, e.g., [10]), (ii) MaxEnt encompasses the methods of Bayesian statistics [11], and (iii) MaxEnt has found successful applications in several fields of science (e.g., [12,13,14,15,16,17,18,19,20,21,22]).
One of the scientific fields in which MaxEnt has been successfully applied is macroecology. The work of Harte and collaborators [23,24,25,26,27] presents what is known as the maximum entropy theory of ecology (METE). It consists of finding, through MaxEnt, a joint conditional distribution for the abundance of a species and the metabolic rate of its individuals. From the marginalization and expected values of the MaxEnt distribution, it is possible to obtain (i) the species abundance distribution (Fisher’s log series), (ii) the species-area distribution, (iii) the distribution of metabolic rates over individuals, and (iv) the relationship between the metabolic rate of individuals in a species and that species’ abundance—for a comprehensive confirmation of METE with experimental data see [28]. In a recent article, Harte et al. [29] bring forward the need for dynamical models based on MaxEnt, as METE assumes the variables to be static. It is relevant to say that Jaynes applied dynamical methods based on information theory to nonequilibrium statistical mechanics [30], leading to what is known as maximum caliber [31,32]. However, maximum caliber assumes a Hamiltonian dynamics and, therefore, does not generalize to ecology and other complex systems.
The field known as information geometry (IG) [33,34,35,36] assigns a Riemannian geometric structure to probability distributions. In information geometry, distances are given by the Fisher-Rao information metric (FRIM) [37,38], which is the only metric in accordance with the grouping property of probability distributions [39]. IG has found important applications in probabilistic dynamical systems [34,40,41,42,43]. Here, the FRIM terms for the distributions arising from METE will be calculated. In a future publication I will evolve METE into entropic dynamical models for ecology, as explained in [43]; in order to do so, it is necessary to calculate the macrostate entropy and the FRIM terms, which can be obtained by differentiating the macrostate entropy. Therefore, the present article performs the calculations necessary for an entropic dynamics model of macroecology.
The layout of the paper is as follows: the following section (2) presents MaxEnt in general terms, followed by the MaxEnt procedure in METE. In particular, we obtain the macrostate entropy through the Legendre transformation and the Lambert W special function [44,45], which, to the best of my knowledge, is a novel result. Section 3 presents some general results of IG and calculates the information metric terms for METE. Section 4 concludes the present article by commenting on possible applications and perspectives for IG in a dynamical theory of macroecology.

2. Maximum Entropy

In information theory, probability distributions encode the available information about a system’s variables $x \in \mathcal{X}$. MaxEnt consists of updating from a prior distribution $q(x)$—usually, but not necessarily, taken to be uniform—to a posterior $\rho(x)$ that maximizes the entropy functional under a set of constraints meant to represent the known information about the system. Usually these constraints are the expected values $A_i$ of a set of real-valued functions $\{a_i(x)\}$, known as sufficient statistics. The distribution $\rho$ is found as the solution to the following optimization problem
$$\max_{\rho}\; H[\rho] = -\int dx\; \rho(x)\,\log\frac{\rho(x)}{q(x)},$$
$$\text{s.t.}\quad \int dx\; \rho(x) = 1, \qquad \int dx\; a_i(x)\,\rho(x) = A_i, \tag{1}$$
where $\int dx$ refers to the appropriate measure on the set $\mathcal{X}$: if one is interested in a discrete set $\mathcal{X} = \{x_\mu\}$, where $\mu$ corresponds to an enumeration of $\mathcal{X}$, we have $\int dx = \sum_\mu$; if one is interested in a continuous subset of real variables, e.g., $\mathcal{X} = [a,b]$, we have $\int dx = \int_a^b dx$.
The solution of (1) is the Gibbs distribution
$$\rho(x|\lambda_1, \lambda_2, \ldots, \lambda_n) = \frac{q(x)}{Z(\lambda)}\,\exp\!\left(-\sum_{i=1}^{n}\lambda_i\, a_i(x)\right), \tag{2}$$
where λ = { λ i } is the set of Lagrange multipliers dual to the expected values A = { A i } and Z ( λ ) is a normalization factor given by
$$Z(\lambda) = \int dx\; q(x)\,\exp\!\left(-\lambda_i\, a_i(x)\right). \tag{3}$$
Above, and in the remainder of this article, we use Einstein’s summation notation, $A_i B_i = \sum_i A_i B_i$. The expected values can be recovered as
$$A_i = -\frac{1}{Z}\frac{\partial Z}{\partial \lambda_i} = \frac{\partial F}{\partial \lambda_i}, \qquad\text{where}\qquad F(\lambda) \doteq -\log(Z(\lambda)). \tag{4}$$
We will refer to $F$ as the log-normalizer, which plays a role similar to the free energy in statistical mechanics.
If one is able to invert the equations arising from (4), obtaining $\lambda_i(A)$, the probability distributions can be expressed in terms of the expected values, $\rho(x|A) = \rho(x|\lambda(A))$. This also allows one to calculate the entropy $H$ at its maximum—that means $H[\rho(x|A)]$ for $\rho$ in (2)—as a function of the expected values, rather than as a functional of $\rho$, obtaining
$$H(A) \doteq H[\rho(x|\lambda(A))] = -\int dx\; \rho(x|\lambda(A))\,\log\frac{\rho(x|\lambda(A))}{q(x)} = \lambda_i(A)\, A_i - F(\lambda(A)). \tag{5}$$
We will refer to $H(A)$ as the macrostate entropy, which is what in statistical mechanics is called the thermodynamical entropy—meaning the one that appears in the laws of thermodynamics. (Since the arguments that identify the macrostate entropy as the thermodynamical entropy assume that the sufficient statistics are conserved quantities in a Hamiltonian dynamics [2], analogous ‘laws of thermodynamics’—e.g., conservation of $A_2$ in (12) or the impossibility of $H$ in (15) decreasing—are not expected in ecological systems.) One can see from (5) that $H(A)$ is the Legendre transformation [46] of $F(\lambda)$. It also follows that $\lambda_i = \frac{\partial H}{\partial A_i}$.
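As a numerical illustration of (2)-(5), not part of the derivation itself, the identities $A_i = \partial F/\partial \lambda_i$ and $H(A) = \lambda_i A_i - F$ can be checked on a small discrete toy system. The sketch below does so in Python; the ten-state space, the statistic $a(x) = x$, and the multiplier value are all arbitrary choices for illustration.

```python
import numpy as np

# Toy discrete system: ten states, uniform prior, one sufficient
# statistic a(x) = x. Nothing here is METE-specific.
x = np.arange(10)
a = x.astype(float)
q = np.full(10, 1 / 10)          # uniform prior q(x)
lam = 0.3                        # arbitrary Lagrange multiplier

Z = np.sum(q * np.exp(-lam * a))     # normalization, Eq. (3)
rho = q * np.exp(-lam * a) / Z       # Gibbs distribution, Eq. (2)
F = -np.log(Z)                       # log-normalizer
A = np.sum(a * rho)                  # expected value of a(x)

# Eq. (4): A = dF/dlambda, checked by a forward finite difference
eps = 1e-6
F_eps = -np.log(np.sum(q * np.exp(-(lam + eps) * a)))
print(A, (F_eps - F) / eps)          # the two numbers should agree

# Eq. (5): the Legendre form lam*A - F equals the entropy at its maximum
print(lam * A - F, -np.sum(rho * np.log(rho / q)))
```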

METE

The first step towards a MaxEnt description involves choosing the appropriate variables for the problem at hand. In METE [24] one assumes an ecosystem of $S$ species supporting $N$ individuals with a total metabolic rate $E$, meaning that in a unit of time the ecosystem consumes a quantity $E$ of energy. The state of the system $x$ in MaxEnt is defined, for a single species, by its number of individuals (abundance) $n$, $n \in \{1, 2, \ldots, N\}$, and the metabolic rate $\varepsilon$ of an individual of that species, $\varepsilon \in [1, E]$—note that one can choose a system of units so that the smallest metabolic rate is the unit, $\varepsilon_{\min} = 1$. We represent the state as $x = (n, \varepsilon)$.
The second step consists of assigning the sufficient statistics that appropriately capture the information about the system. In METE [24] the statistics chosen are the number of individuals in the species, $a_1(n,\varepsilon) \doteq n$, and the total metabolic rate, $a_2(n,\varepsilon) \doteq n\varepsilon$. Substituting these into the expected value constraints for the sufficient statistics in (1), we obtain a constraint on the average abundance per species,
$$A_1 = \sum_{n=1}^{N}\int_1^E d\varepsilon\; n\, \rho(n,\varepsilon|\lambda) = \frac{N}{S} \doteq \mathcal{N}, \tag{6}$$
and a constraint on the average metabolic consumption per species,
$$A_2 = \sum_{n=1}^{N}\int_1^E d\varepsilon\; n\varepsilon\, \rho(n,\varepsilon|\lambda) = \frac{E}{S} \doteq \mathcal{E}. \tag{7}$$
The defined variables $\mathcal{N}$ and $\mathcal{E}$ will replace $A_1$ and $A_2$, respectively, when convenient.
Having chosen the state variables and the sufficient statistics, we can compute all quantities defined in the previous subsection for the specific system defined by METE. With a uniform prior $q$, justified by the fact that at this level of complexity organisms should be considered distinguishable, this leads to the canonical distribution (2) of the form
$$\rho(n,\varepsilon|\lambda) = \frac{1}{Z(\lambda)}\, e^{-\lambda_1 n}\, e^{-\lambda_2 n\varepsilon}, \tag{8}$$
where the normalization factor (3) is given by
$$Z(\lambda) = \sum_{n=1}^{N}\int_1^E d\varepsilon\; e^{-\lambda_1 n}\, e^{-\lambda_2 n\varepsilon} = \sum_{n=1}^{N} e^{-\lambda_1 n}\, \frac{e^{-\lambda_2 n} - e^{-\lambda_2 n E}}{\lambda_2\, n}, \tag{9}$$
from which the expected values (4) can be calculated as
$$A_1 = \mathcal{N} = \frac{1}{\lambda_2\, Z(\lambda)}\sum_{n=1}^{N} e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - e^{-\lambda_2 n E}\right), \tag{10a}$$
$$A_2 = \mathcal{E} = \frac{1}{\lambda_2}\left(1 + \frac{1}{Z(\lambda)}\sum_{n=1}^{N} e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - E\, e^{-\lambda_2 n E}\right)\right). \tag{10b}$$
These are complicated equations; however, some approximations may make them more tractable.
A fair assumption, given what the variables are meant to represent, is that there are far more individuals than species, $N \gg S$, and that the average metabolic rate per individual is far greater than the unit of metabolic rate, $E/N = \mathcal{E}/\mathcal{N} \gg 1$. This allows for a sequence of approximations that we will treat as assumptions here, namely (i) $e^{-\lambda_2 n E} \ll e^{-\lambda_2 n}$, (ii) $E\, e^{-\lambda_2 n E} \ll e^{-\lambda_2 n}$, (iii) $\lambda_1 + \lambda_2 \ll 1$, and (iv) $e^{-(\lambda_1+\lambda_2)N} \ll 1$. Further explanation of the validity of these assumptions, under $S \ll N \ll E$, can be found in [24,26], and their confirmation by numerical calculation can be seen in [24]. Under this understanding we can substitute (9) into (10a), obtaining
$$\mathcal{N} = \frac{\displaystyle\sum_{n=1}^{N} e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - e^{-\lambda_2 n E}\right)}{\displaystyle\sum_{n=1}^{N}\frac{1}{n}\, e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - e^{-\lambda_2 n E}\right)} \approx \frac{\displaystyle\sum_{n=1}^{N} e^{-(\lambda_1+\lambda_2)n}}{\displaystyle\sum_{n=1}^{N}\frac{1}{n}\, e^{-(\lambda_1+\lambda_2)n}}, \tag{11a}$$
$$\mathcal{N} \approx -\frac{1}{(\lambda_1+\lambda_2)\log(\lambda_1+\lambda_2)}. \tag{11b}$$
We can also rewrite (10b), obtaining
$$\mathcal{E} = \frac{1}{\lambda_2} + \frac{\displaystyle\sum_{n=1}^{N} e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - E\, e^{-\lambda_2 n E}\right)}{\displaystyle\sum_{n=1}^{N}\frac{1}{n}\, e^{-\lambda_1 n}\left(e^{-\lambda_2 n} - e^{-\lambda_2 n E}\right)} \approx \frac{1}{\lambda_2} + \mathcal{N}. \tag{12}$$
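The quality of these approximations can be checked numerically. The sketch below is my own illustration: it evaluates the exact sums (9), (10a), and (10b) and compares them with the closed forms (11b) and (12). The multipliers, the cutoff, and the value of $E$ are assumed values chosen only so that $\lambda_1 + \lambda_2 \ll 1$ and $N \ll E$.

```python
import numpy as np

l1, l2 = 1e-3, 1e-4      # assumed multipliers with l1 + l2 << 1
E = 1e6                  # total metabolic rate (illustrative)
N = 10**5                # abundance cutoff in the sums (illustrative)

n = np.arange(1, N + 1)
u = np.exp(-l1 * n) * (np.exp(-l2 * n) - np.exp(-l2 * n * E))
Z = np.sum(u / (l2 * n))                        # Eq. (9)

N_exact = np.sum(u) / (l2 * Z)                  # Eq. (10a)
v = np.exp(-l1 * n) * (np.exp(-l2 * n) - E * np.exp(-l2 * n * E))
E_exact = (1 + np.sum(v) / Z) / l2              # Eq. (10b)

beta = l1 + l2
N_approx = -1 / (beta * np.log(beta))           # Eq. (11b)
E_approx = 1 / l2 + N_approx                    # Eq. (12)
print(N_exact, N_approx)    # average abundance per species
print(E_exact, E_approx)    # average metabolic rate per species
```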
In order to obtain the macrostate entropy (5) analytically, one needs to perform the Legendre transformation for METE, which includes inverting (11) and (12) to obtain $\lambda_1(\mathcal{N},\mathcal{E})$ and $\lambda_2(\mathcal{N},\mathcal{E})$. On page 149 of [24] this is said to be unfeasible. However, it is possible, obtaining
$$\lambda_1 = \beta(\mathcal{N}) - \frac{1}{\mathcal{E}-\mathcal{N}}, \qquad\text{and}\qquad \lambda_2 = \frac{1}{\mathcal{E}-\mathcal{N}}, \tag{13}$$
where
$$\beta(\mathcal{N}) \doteq \left[-\mathcal{N}\, W_{-1}\!\left(-\frac{1}{\mathcal{N}}\right)\right]^{-1}, \qquad \dot{\beta}(\mathcal{N}) \doteq \frac{d\beta}{d\mathcal{N}} = \left[\mathcal{N}^2 - \frac{\mathcal{N}}{\beta(\mathcal{N})}\right]^{-1}, \tag{14}$$
and $W_{-1}$ refers to the second main branch of the Lambert W function (see [44,45]). The details of how (13) inverts (11) and (12) are presented in Appendix A. The macrostate entropy can be calculated directly from (5) as
$$H(\mathcal{N},\mathcal{E}) = \mathcal{N}\beta(\mathcal{N}) + \log(\mathcal{E}-\mathcal{N}) - \log\!\left(\mathcal{N}\beta(\mathcal{N})\right) + 1. \tag{15}$$
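Since SciPy implements the Lambert W branches (see Appendix A), Equations (13)-(15) can be evaluated directly. The sketch below does so for arbitrary illustrative values of $\mathcal{N}$ and $\mathcal{E}$ (the variable names are mine) and checks that the resulting $\beta$ satisfies (11b).

```python
import numpy as np
from scipy.special import lambertw

N_avg = 200.0      # script-N = N/S (illustrative)
E_avg = 2000.0     # script-E = E/S (illustrative)

beta = 1 / (-N_avg * lambertw(-1 / N_avg, k=-1).real)   # Eq. (14)
lam2 = 1 / (E_avg - N_avg)                              # Eq. (13)
lam1 = beta - lam2                                      # Eq. (13)

# Macrostate entropy, Eq. (15)
H = N_avg * beta + np.log(E_avg - N_avg) - np.log(N_avg * beta) + 1
print(lam1, lam2, H)

# Consistency with Eq. (11b): beta should satisfy 1/script-N = -beta*log(beta)
print(1 / N_avg, -beta * np.log(beta))
```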
With the calculation of the macrostate entropy finished, we can move into a geometric description of METE.

3. Information Geometry

This section presents the elementary notions of IG—for more in-depth discussion and examples see, e.g., [33,34,35,36]—and some useful identities for the IG of Gibbs distributions. IG consists of assigning a Riemannian geometric structure to the space of probability distributions: if a set of distributions $P(x|\theta)$ is parametrized by a finite number of coordinates, $\theta = \{\theta^i\}$, the distances $d\ell$—which are a measure of distinguishability—between the neighbouring distributions $P(x|\theta + d\theta)$ and $P(x|\theta)$ are given by $d\ell^2 = g_{ij}\, d\theta^i d\theta^j$. The work of Cencov [39] demonstrated that the only metric invariant under Markov embeddings—and, therefore, the only one adequate to represent a space of probability distributions—is the metric of the form
$$g_{ij} = \int dx\; P(x|\theta)\, \frac{\partial \log P(x|\theta)}{\partial \theta^i}\, \frac{\partial \log P(x|\theta)}{\partial \theta^j}, \tag{16}$$
known as FRIM.
Considering the MaxEnt results presented in the previous section, we can restrict our investigation to the Gibbs distributions, using the expected values $A$ as coordinates—$\theta^i = A_i$ and $P(x|\theta) = \rho(x|A)$ as in (2). Two useful expressions arise in that case—for proofs see, e.g., [33]—first: the metric terms are the Hessian of the negative of the macrostate entropy, meaning
$$g_{ij} = -\frac{\partial^2 H}{\partial A_i\, \partial A_j}, \tag{17}$$
and second: the covariance matrix between the sufficient statistics $a_i(x)$ is the inverse matrix of $g_{ij}$, meaning
$$C^{ij} g_{jk} = \delta^i_{\;k}, \qquad\text{where}\qquad C^{ij} = \left\langle a_i(x)\, a_j(x)\right\rangle - A_i A_j. \tag{18}$$
We can then see how these quantities are calculated for METE.

Information Geometry of METE

By substituting the macrostate entropy for METE (15) into (17), we obtain the FRIM terms:
$$g_{11} = -\dot\beta(\mathcal{N}) + \frac{1}{(\mathcal{E}-\mathcal{N})^2}, \qquad g_{12} = g_{21} = -\frac{1}{(\mathcal{E}-\mathcal{N})^2}, \qquad g_{22} = \frac{1}{(\mathcal{E}-\mathcal{N})^2}, \qquad\text{and}\qquad g = -\frac{\dot\beta(\mathcal{N})}{(\mathcal{E}-\mathcal{N})^2}, \tag{19}$$
where $g = \det g_{ij}$. Per (18), and using the general form of the inverse of a two-dimensional matrix, the covariance matrix terms can be calculated directly by inverting (19), obtaining
$$C^{11} = \frac{g_{22}}{g} = \frac{\mathcal{N}}{\beta(\mathcal{N})} - \mathcal{N}^2, \qquad C^{12} = C^{21} = -\frac{g_{12}}{g} = \frac{\mathcal{N}}{\beta(\mathcal{N})} - \mathcal{N}^2, \qquad\text{and}\qquad C^{22} = \frac{g_{11}}{g} = \mathcal{E}^2 - 2\mathcal{E}\mathcal{N} + \frac{\mathcal{N}}{\beta(\mathcal{N})}; \tag{20}$$
completing the calculation. The matrix $C^{ij}$ can be interpreted directly as the covariance between a species’ abundance and its total metabolic rate—METE’s sufficient statistics. The information metric terms presented in (19) allow for further studies on dynamical ecology from an information theory background, as we comment in the following section.
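As a consistency check, the sketch below evaluates (19) and (20) at illustrative values of $\mathcal{N}$ and $\mathcal{E}$ and verifies numerically that $C^{ij} g_{jk} = \delta^i_{\;k}$, as stated in (18). The numbers are arbitrary assumptions; only the formulas are from the text.

```python
import numpy as np
from scipy.special import lambertw

N_avg, E_avg = 200.0, 2000.0     # illustrative state variables

beta = 1 / (-N_avg * lambertw(-1 / N_avg, k=-1).real)   # Eq. (14)
beta_dot = 1 / (N_avg**2 - N_avg / beta)                # Eq. (14)

d2 = (E_avg - N_avg) ** 2
g = np.array([[-beta_dot + 1 / d2, -1 / d2],
              [-1 / d2, 1 / d2]])                       # Eq. (19)

c = N_avg / beta - N_avg**2
C = np.array([[c, c],
              [c, E_avg**2 - 2 * E_avg * N_avg + N_avg / beta]])  # Eq. (20)

print(C @ g)    # should be numerically the 2x2 identity, cf. Eq. (18)
```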

4. Discussion and Perspectives

The present article calculates the macrostate entropy (15) for METE. This was made possible by the analytical calculation of the Lagrange multipliers (13) as functions of the expected values (10), previously believed to be unfeasible. This allows for a complete description of METE in terms of the average abundance $\mathcal{N}$ and the expected metabolic rate $\mathcal{E}$ of each of the ecosystem’s species, opening a broad range of investigations possible by analytical calculations. In particular, the IG arising from METE is presented by calculating the FRIM terms in (19). Independently of any geometric interpretation, this is equivalent to calculating the covariance between METE’s sufficient statistics (20).
The variables that define an ecosystem’s state are not expected to remain constant. Because of this, and given the growing relevance of IG in dynamical systems, the calculations made in the present article are an important step toward expanding maximum entropy ideas into further investigations in macroecology. The calculations done here allow for evolving METE into an entropic dynamics for ecology, as in the framework developed in [43]; this avenue of research will be explored in a future publication.

Institutional Review Board Statement

This study did not involve humans or other animals.

Informed Consent Statement

This study did not involve humans.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

I would like to thank A. Caticha, J. Harte, E.A. Newman, and C. Camargo for insightful discussions.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. On the Lambert W Function

In this appendix we will explain how (13) inverts (11) and (12). The Lambert W function is defined as the solution of
$$W(x)\, e^{W(x)} = x. \tag{A1}$$
The Python library SciPy [47] implements the numerical calculation of W. This relates to (11b) in the following manner: by defining the variable $\beta = \lambda_1 + \lambda_2$, we obtain
$$\frac{1}{\mathcal{N}} = -\beta \log\beta \quad\Longrightarrow\quad -\frac{1}{\beta\mathcal{N}}\; e^{-\frac{1}{\beta\mathcal{N}}} = -\frac{1}{\mathcal{N}}, \tag{A2}$$
hence $\beta = \left[-\mathcal{N}\, W\!\left(-\frac{1}{\mathcal{N}}\right)\right]^{-1}$. It is relevant to say that, per (A1), $W(x)$ is multivalued—the terminology Lambert W ‘function’ is used loosely. The several single-valued functions that solve (A1) are known as the different ‘branches’ of the Lambert W. In (13) and (14) only the $W_{-1}$ branch was taken into account. Given our object of study, we restrict ourselves to functions that are guaranteed to give a $\beta$ that is real for large $\mathcal{N}$. As explained in [44], the two branches $W_0(x)$ and $W_{-1}(x)$ are real and analytic for $-e^{-1} < x < 0$, or equivalently, $\beta$ is real for $\mathcal{N} > e$. This is coherent with the fact that (11) was derived for large $\mathcal{N}$.
Figure A1 presents the graphs of $\beta$ obtained from the $W_0(x)$ and $W_{-1}(x)$ branches, as well as a comparison to the $\beta$ obtained numerically by inverting (11a). Even though, per (A2), the $\beta$ obtained from either branch inverts (11b), it can be seen from Figure A1 that only the one obtained from $W_{-1}(x)$ approximates the inverse of (11a) for large $\mathcal{N}$ and, therefore, it is the only one appropriate for the present investigation.
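The comparison of Figure A1 can be reproduced at a single point with a few lines of Python. The sketch below is my own illustration: the root-finding bracket passed to brentq and the choices $S = 20$, $\mathcal{N} = 200$ are assumptions made for the example.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

S = 20
N_avg = 200.0                  # script-N (illustrative)
N_total = int(S * N_avg)       # upper limit of the sums in (11a)

beta_0 = 1 / (-N_avg * lambertw(-1 / N_avg, k=0).real)    # from W_0
beta_m1 = 1 / (-N_avg * lambertw(-1 / N_avg, k=-1).real)  # from W_{-1}

def rhs_11a(beta):
    """Right-hand side of (11a) as a function of beta = lambda_1 + lambda_2."""
    n = np.arange(1, N_total + 1)
    return np.sum(np.exp(-beta * n)) / np.sum(np.exp(-beta * n) / n)

# Numerical inversion of (11a); the bracket [1e-8, 1] is chosen by hand.
beta_num = brentq(lambda b: rhs_11a(b) - N_avg, 1e-8, 1.0)

print(beta_0, beta_m1, beta_num)   # only the W_{-1} value tracks beta_num
```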
To complete the claim that $\lambda_1$ and $\lambda_2$ in (13) are calculated analytically, it is relevant to say that $W_{-1}\!\left(-\frac{1}{\mathcal{N}}\right)$ can be calculated using the series expansion (see page 153 in [44])
$$W_{-1}\!\left(-\frac{1}{\mathcal{N}}\right) = \sum_{m=0}^{\infty} a_m z^m, \qquad\text{where}\qquad z = -\sqrt{2\left(\log\mathcal{N} - 1\right)}, \tag{A3}$$
and $a_m$ is defined recursively as $a_0 = -1$, $a_1 = 1$, and
$$a_m = -\frac{1}{m+1}\left(a_{m-1} + \sum_{k=2}^{m-1} k\, a_k\, a_{m+1-k}\right). \tag{A4}$$
Note that a real $z$ implies $\mathcal{N} > e$, which is coherent with the condition for $W_{-1}$ to be real.
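The recursion (A4) is simple to implement. The sketch below compares a truncated series (A3) with SciPy’s $W_{-1}$ at an illustrative value of $\mathcal{N}$; the truncation order $M$ is an arbitrary choice of mine.

```python
import numpy as np
from scipy.special import lambertw

N_avg = 50.0                     # illustrative script-N > e
z = -np.sqrt(2 * (np.log(N_avg) - 1))

M = 30                           # truncation order (arbitrary)
a = [-1.0, 1.0]                  # a_0 and a_1
for m in range(2, M + 1):
    s = sum(k * a[k] * a[m + 1 - k] for k in range(2, m))
    a.append(-(a[m - 1] + s) / (m + 1))          # Eq. (A4)

series = sum(a[m] * z**m for m in range(M + 1))  # Eq. (A3), truncated
print(series, lambertw(-1 / N_avg, k=-1).real)   # the two should be close
```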
Figure A1. Graphical comparison between the functions defined as $\beta_0(\mathcal{N}) \doteq \left[-\mathcal{N}\, W_0\!\left(-\frac{1}{\mathcal{N}}\right)\right]^{-1}$, $\beta_{-1}(\mathcal{N}) \doteq \left[-\mathcal{N}\, W_{-1}\!\left(-\frac{1}{\mathcal{N}}\right)\right]^{-1}$, and $\beta_i(\mathcal{N})$, obtained numerically by inverting (11a), here using $S = N/\mathcal{N} = 20$. $W_0$ and $W_{-1}$ have complex values for $\mathcal{N} < e$; the graph only plots the real part in that region.

References

  1. Jaynes, E.T. Information theory and statistical mechanics. I. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  2. Jaynes, E.T. Gibbs vs Boltzmann entropies. Am. J. Phys. 1965, 33, 391–398. [Google Scholar] [CrossRef]
  3. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar] [CrossRef]
  4. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  5. Gibbs, J. Elementary Principles in Statistical Mechanics; Yale University Press: New Haven, CT, USA, 1902; reprinted by Ox Bow Press, Connecticut 1981. [Google Scholar]
  6. Shore, J.; Johnson, R. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37. [Google Scholar] [CrossRef] [Green Version]
  7. Skilling, J. The Axioms of Maximum Entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering; Erickson, G.J., Smith, C.R., Eds.; Springer: Dordrecht, The Netherlands, 1988. [Google Scholar] [CrossRef]
  8. Caticha, A. Relative Entropy and Inductive Inference. In AIP Conference Proceedings; American Institute of Physics: College Park, MD, USA, 2004; Volume 707, pp. 75–96. [Google Scholar] [CrossRef]
  9. Vanslette, K. Entropic Updating of Probabilities and Density Matrices. Entropy 2017, 19, 664. [Google Scholar] [CrossRef] [Green Version]
  10. Daum, F. The Fisher-Darmois-Koopman-Pitman theorem for random processes. In Proceedings of the 25th IEEE Conference on Decision and Control, Athens, Greece, 10–12 December 1986; pp. 1043–1044. [Google Scholar] [CrossRef]
  11. Caticha, A.; Giffin, A. Updating Probabilities. In AIP Conference Proceedings; American Institute of Physics: College Park, MD, USA, 2006; Volume 872, pp. 31–42. [Google Scholar] [CrossRef] [Green Version]
  12. Golan, A. Information and Entropy Econometrics—A Review and Synthesis. Found. Trends(R) Econom. 2008, 2, 1–145. [Google Scholar] [CrossRef]
  13. Bianconi, G. Entropy of network ensembles. Phys. Rev. E 2009, 79. [Google Scholar] [CrossRef] [Green Version]
  14. Caticha, A.; Golan, A. An entropic framework for modeling economies. Phys. A Stat. Mech. Its Appl. 2014, 408, 149–163. [Google Scholar] [CrossRef]
  15. Vicente, R.; Susemihl, A.; Jericó, J.P.; Caticha, N. Moral foundations in an interacting neural networks society: A statistical mechanics analysis. Phys. A Stat. Mech. Its Appl. 2014, 400, 124–138. [Google Scholar] [CrossRef] [Green Version]
  16. Yong, N.; Ni, S.; Shen, S.; Ji, X. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle. Phys. A Stat. Mech. Its Appl. 2016, 456, 222–227. [Google Scholar] [CrossRef]
  17. Delgado-Bonal, A.; Martín-Torres, J. Human vision is determined based on information theory. Sci. Rep. 2016, 6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. De Martino, A.; De Martino, D. An introduction to the maximum entropy approach and its application to inference problems in biology. Heliyon 2018, 4, e00596. [Google Scholar] [CrossRef] [Green Version]
  19. Cimini, G.; Squartini, T.; Saracco, F.; Garlaschelli, D.; Gabrielli, A.; Caldarelli, G. The statistical physics of real-world networks. Nat. Rev. Phys. 2019, 1, 58–71. [Google Scholar] [CrossRef] [Green Version]
  20. Dixit, P.D.; Lyashenko, E.; Niepel, M.; Vitkup, D. Maximum Entropy Framework for Predictive Inference of Cell Population Heterogeneity and Responses in Signaling Networks. Cell Syst. 2020, 10, 204–212.e8. [Google Scholar] [CrossRef]
  21. Radicchi, F.; Krioukov, D.; Hartle, H.; Bianconi, G. Classical information theory of networks. J. Physics Complex. 2020, 1, 025001. [Google Scholar] [CrossRef]
  22. Caldarelli, G.; Nicola, R.D.; Vigna, F.D.; Petrocchi, M.; Saracco, F. The role of bot squads in the political propaganda on Twitter. Commun. Phys. 2020, 3. [Google Scholar] [CrossRef]
  23. Harte, J.; Zillio, T.; Conlisk, E.; Smith, A.B. Maximum entropy and the state-variable approach to macroecology. Ecology 2008, 89, 2700–2711. [Google Scholar] [CrossRef]
  24. Harte, J. Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  25. Harte, J.; Newman, E.A. Maximum information entropy: A foundation for ecological theory. Trends Ecol. Evol. 2014, 29, 384–389. [Google Scholar] [CrossRef]
  26. Brummer, A.; Newman, E. Derivations of the Core Functions of the Maximum Entropy Theory of Ecology. Entropy 2019, 21, 712. [Google Scholar] [CrossRef] [Green Version]
  27. Newman, E.A.; Wilber, M.Q.; Kopper, K.E.; Moritz, M.A.; Falk, D.A.; McKenzie, D.; Harte, J. Disturbance macroecology: A comparative study of community structure metrics in a high-severity disturbance regime. Ecosphere 2020, 11. [Google Scholar] [CrossRef] [Green Version]
  28. Xiao, X.; McGlinn, D.J.; White, E.P. A Strong Test of the Maximum Entropy Theory of Ecology. Am. Nat. 2015, 185, E70–E80. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Harte, J.; Umemura, K.; Brush, M. DynaMETE: A hybrid MaxEnt-plus-mechanism theory of dynamic macroecology. Ecol. Lett. 2021. [Google Scholar] [CrossRef]
  30. Jaynes, E.T. Where do we stand on maximum entropy? In The Maximum Entropy Principle; Levine, R.D., Tribus, M., Eds.; MIT Press: Cambridge, MA, USA, 1979. [Google Scholar] [CrossRef]
  31. Pressé, S.; Ghosh, K.; Lee, J.; Dill, K.A. Principles of maximum entropy and maximum caliber in statistical physics. Reviews of Modern Physics 2013, 85, 1115–1141. [Google Scholar] [CrossRef] [Green Version]
  32. González, D.; Davis, S. The maximum caliber principle applied to continuous systems. J. Phys. Conf. Ser. 2016, 720, 012006. [Google Scholar] [CrossRef]
  33. Caticha, A. The basics of information geometry. In AIP Conference Proceedings; American Institute of Physics: College Park, MD, USA, 2015; Volume 1641, pp. 15–26. [Google Scholar] [CrossRef] [Green Version]
  34. Amari, S. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar] [CrossRef]
  35. Ay, N.; Jost, J.; Lê, H.V.; Schwachhöfer, L. Information Geometry; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar] [CrossRef]
  36. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100. [Google Scholar] [CrossRef] [PubMed]
  37. Fisher, R.A. Theory of Statistical Estimation. Math. Proc. Camb. Philos. Soc. 1925, 22, 700–725. [Google Scholar] [CrossRef] [Green Version]
  38. Rao, C.R. Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81. [Google Scholar] [CrossRef]
  39. Cencov, N.N. Statistical decision rules and optimal inference. In Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1981; Volume 53. [Google Scholar]
  40. Hayashi, M.; Watanabe, S. Information geometry approach to parameter estimation in Markov chains. Ann. Stat. 2016, 44, 1495–1535. [Google Scholar] [CrossRef] [Green Version]
  41. Felice, D.; Cafaro, C.; Mancini, S. Information geometric methods for complexity. Chaos Interdiscip. J. Nonlinear Sci. 2018, 28, 032101. [Google Scholar] [CrossRef]
  42. Ruppeiner, G. Riemannian geometry in thermodynamic fluctuation theory. Rev. Mod. Phys. 1995, 67, 605. [Google Scholar] [CrossRef]
  43. Pessoa, P.; Costa, F.X.; Caticha, A. Entropic dynamics on Gibbs statistical manifolds. Entropy 2021, 23, 494. [Google Scholar] [CrossRef] [PubMed]
  44. Corless, R.M.; Jeffrey, D.J. The LambertW function. In The Princeton Companion to Applied Mathematics; Higham, N.J., Dennis, M.R., Glendinning, P., Martin, P.A., Santosa, F., Tanner, J., Eds.; Princeton University Press: Princeton, NJ, USA, 2016; Chapter III-17; pp. 151–155. [Google Scholar] [CrossRef]
  45. Lehtonen, J. The Lambert W function in ecological and evolutionary models. Methods Ecol. Evol. 2016, 7, 1110–1118. [Google Scholar] [CrossRef]
  46. Nielsen, F. Legendre Transformation and Information Geometry. Technical Report CIG-MEMO2. 2010. Available online: https://www2.sonycsl.co.jp/person/nielsen/Note-LegendreTransformation.pdf (accessed on 1 November 2021).
  47. SciPy Documentation: scipy.special.lambertw. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.lambertw.html (accessed on 20 March 2021).