Comment published on 14 April 2018, see Entropy 2018, 20(4), 285.
Article

Remarks on the Maximum Entropy Principle with Application to the Maximum Entropy Theory of Ecology

Dipartimento di Matematica “Tullio Levi-Civita”, Università degli Studi di Padova, 35122 Padova, Italy
Entropy 2018, 20(1), 11; https://doi.org/10.3390/e20010011
Submission received: 1 November 2017 / Revised: 27 November 2017 / Accepted: 21 December 2017 / Published: 27 December 2017
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Abstract
In the first part of the paper we work out the consequences of the fact that Jaynes' Maximum Entropy Principle, when translated into mathematical terms, is a constrained extremum problem for an entropy function $H(p)$ expressing the uncertainty associated with the probability distribution $p$. Consequently, if two observers use different independent variables $p$ or $g(p)$, the associated entropy functions have to be defined accordingly, and in general they are different. In the second part we apply our findings to an analysis of the foundations of the Maximum Entropy Theory of Ecology (M.E.T.E.), a purely statistical model of an ecological community. Since the theory has received considerable attention from the scientific community, we hope to give a useful contribution to that community by showing that the procedure of application of MEP, in the light of the theory developed in the first part, suffers from some inconsistencies. We exhibit an alternative formulation which is free from these limitations and which gives different results.

1. Introduction

The Maximum Entropy Principle (MEP) (E.T. Jaynes, 1957, see [1,2]) is a powerful inference principle which allows one to determine the probability distribution that describes a system on the basis of the information available, usually in the form of averages of observables (random variables) of interest for the system. It is based on: (i) the enumeration of the system states $i = 1, \dots, N$; (ii) the introduction of one or more functions that translate the information available on the system into constraints on the probability, i.e., as $f(p) = c$ where $c$ is a vector of average values; and (iii) a function measuring the uncertainty associated with a candidate probability distribution for the system. We consider here only systems with a finite number of states, since the extra generality of an infinite number of states is not necessary for the aims of this work. The sought distribution is the one for which uncertainty is maximal on the set of distributions satisfying $f(p) = c$. This distribution thus represents the least biased estimate on the basis of the given information, whether or not its prescriptions are empirically confirmed by experiments. Each of the Steps (i)–(iii) outlined above has a profound meaning and impact on the implementation of the MEP procedure. For example, if the states of the system are a priori equally probable, the uncertainty function to use is the Shannon entropy $H(p)$, while if they are statistically described by a prior distribution $q$, the uncertainty function is (minus) the relative entropy $D(p|q)$. If the system's states are the result of a coarse-graining procedure from a bigger set of a priori equiprobable states, then a prior distribution $q$ which counts the number of coarse-grained states has to be used in $D(p|q)$ in order to render the MEP procedure invariant with respect to the coarse graining, as explained in [3]. See also [4], where a number of different probability distributions used in ecology are derived for a system of individuals arranged in different cells by simply making different assumptions on the individuals (indistinguishable or not), on the constraints and on the coarse graining. The variety of MEP responses simply reflects the fact that different pieces of information are considered; if the predictions fail to be confirmed by experiment, this is the signal that relevant information for the statistical description of the system under study has been neglected.
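As a concrete illustration of Steps (i)–(iii), the following minimal numerical sketch (ours, not part of the original paper) maximizes Shannon entropy for a five-state system under a single mean-value constraint; the state values and the target mean are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

states = np.arange(1, 6)        # step (i): enumerate the system states
mean_target = 2.0               # step (ii): available information, E[state] = 2

def neg_entropy(p):             # step (iii): (minus) Shannon entropy to minimize
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                   # normalization
    {"type": "eq", "fun": lambda p: np.dot(states, p) - mean_target}  # f(p) = c
]
p0 = np.full(len(states), 1 / len(states))
res = minimize(neg_entropy, p0, bounds=[(1e-9, 1)] * len(states),
               constraints=constraints)
print(res.x)   # Gibbs-like solution, proportional to e^{-lambda * state}
```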
A basic tenet of the Maximum Entropy Principle is that two observers who are given the same information, expressed by Steps (i) and (ii) above, must obtain the same result upon application of the MEP, i.e., the solution to the constrained extremum problem has to be unique. Notice that the MEP procedure does not specify the probability $p = (p_1, \dots, p_N)$ to use; thus if one observer uses $p$ and another uses $p'$, related to $p$ by a one-to-one transformation $p = g(p')$, provided that they set up the constraints in the form $f(p) = c$ and $f(g(p')) = c$ respectively, they are translating into mathematical terms the same information on the system. This is the same freedom one has in physics when describing the position of a point using different systems of coordinates related by an invertible transformation. Therefore, to ensure that the results of the application of the MEP procedure are independent of the choice of the variable used, the uncertainty function has to be defined accordingly. Proposition 1 in Section 2 below is the main tool to determine the form of the uncertainty function for g-related distributions.
The analysis of this subtle point of the application of MEP is the main aim of this paper. We argue that, since there seems to be no preferred choice for $p$, all the choices have to be considered equivalent; therefore the related uncertainty functions are also equivalent and can be derived from a single given one. The question is: which translation into mathematical terms of the initial information is entitled to use the Shannon entropy as its uncertainty function? We give an answer by resorting to the Boltzmann statistical derivation of the entropy function, which has the same form as Shannon's. This statistical procedure, based on independent tosses of indistinguishable particles into bins, can be applied to different models and allows determining from a sort of "first principle" the form of the entropy function to use. We think that there is room for further investigation of this subtle issue of the application of MEP.
In the second part of the paper we apply the above findings to a specific application of MEP to ecology called the Maximum Entropy Theory of Ecology (M.E.T.E.), as presented in [5]. M.E.T.E. is a purely statistical model of an ecological community. Using the Maximum Entropy Principle, the theory aims at inferring the form of some of the most used distributions in ecology from the knowledge of macroscopic information on the system: number of individuals, number of species and total metabolic requirement. MEP has been applied in the field of statistical ecology mainly to derive one of the most important probability distributions, the Species Abundance Distribution (SAD), but not exclusively (see [6], or [7] where it is used for modeling species geographic distributions with presence-only data). Several papers ([3,8,9]) have revised the application of the MEP inference principle to ecological problems, stressing the importance of the choice of the system's coordinates [10,11,12], of the prior distribution [4,8] and of the entropy function [3,6]. The interest of [5] is that it considers simultaneously information on the distribution of individuals among species (SAD) and on the distribution of metabolic energy rate among individuals. In this richer scenario a number of probability distributions for different observables of interest are derived. The output of the work has grown to become a comprehensive theory of application of MEP to ecological problems, called the Maximum Entropy Theory of Ecology (see the book [13,14]), which has been extensively tested with real ecosystem data in [15,16,17]. In Section 4.1 we review the assumptions of the model in [5] and the related MEP solution; in Section 4.2 we propose an alternative application of the MEP procedure starting from the same initial information but giving a different result. In Section 5 we discuss the non-equivalence of the two procedures using Proposition 1 below and motivate why the MEP solution in [5] is flawed by some inconsistencies.

2. MEP Formulation Using Different Variables

The Maximum Entropy Principle is generally expressed as: $p$ is the distribution that maximizes Shannon entropy $H(p)$ over the set of distributions allowed by the constraints. Let $p'$, defined by $p' = g^{-1}(p)$ with $g$ invertible, be a different choice of the variable used to express the probability distribution and the constraints, and suppose that we solve this constrained extremum problem by the method of Lagrange multipliers. Then the following Proposition holds (see Appendix A for a proof).
Proposition 1.
$\hat p$ is the constrained extremum of $H(p)$ over the set $f(p) = c$ if and only if $\hat p' = g^{-1}(\hat p)$ is the constrained extremum of $H(g(p'))$ over the set $f(g(p')) = c$.
From the above Proposition 1 we learn that, if we are given the entropy function and the constraints as $(p, f(p), H(p))$ and we want to use another variable $p' = g^{-1}(p)$, to find the same MEP solution we have to transform the constraints and the entropy function accordingly, using $(p', f \circ g(p'), H \circ g(p'))$. In this way, changing variable and finding the maximum are operations whose order can be interchanged. A natural question arises in this setting: which couple $(p, f(p))$, translating into mathematical terms the initial information on the system, "is justified" in using the entropy function in Shannon form? We will give a partial answer to this question in Section 3 below.
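Proposition 1 can also be checked numerically. The sketch below (ours) maximizes $H(p)$ under $f(p) = c$, then maximizes $H(g(p'))$ under $f(g(p')) = c$, and verifies that the two extrema are g-related; the invertible map used is the inverse of the transformation of Equation (4) in Section 3, with the ratio $c$ fixed, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

n = np.arange(1, 8)                 # abundance states 1..7
c = 3.0                             # constrained mean, playing the role of N0/S

def H(p):                           # Shannon entropy
    return -np.sum(p * np.log(p))

def g_inv(q):                       # inverse of Eq. (4): phi(n) = (c/n) * phi_I(n)
    return c * q / n

# observer 1: maximize H(p) under f(p) = (sum p, sum n p) = (1, c)
cons_p = [{"type": "eq", "fun": lambda p: p.sum() - 1},
          {"type": "eq", "fun": lambda p: (n * p).sum() - c}]
# observer 2: the same constraints, written through the change of variable
cons_q = [{"type": "eq", "fun": lambda q: g_inv(q).sum() - 1},
          {"type": "eq", "fun": lambda q: (n * g_inv(q)).sum() - c}]

x0 = np.full(len(n), 1 / len(n)); bnds = [(1e-9, 1)] * len(n)
p_hat = minimize(lambda p: -H(p),        x0, bounds=bnds, constraints=cons_p).x
q_hat = minimize(lambda q: -H(g_inv(q)), x0, bounds=bnds, constraints=cons_q).x

# small difference: the extrema are g-related, as Proposition 1 states
print(np.abs(p_hat - g_inv(q_hat)).max())
```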
Note that the invariance requirement with respect to the choice of the independent variable is different from the requirement, introduced in the axiomatic derivation of MEP in [18], of invariance in the form of $H(p)$ with respect to a transformation of the type $y = \Gamma(x)$, where $x \in D$ are the (possibly infinite) states of a system and $p = p(x)$. For a system with finitely many states, the transformation $\Gamma$ (called a coordinate transformation in [18]) reduces to a relabeling of the states. Shannon entropy $H(p) = -\sum_{i=1}^N p_i \ln p_i$, being additive, is clearly invariant in form, expressing the fact that different observers may use different labels for the $N$ system states. This relabeling transformation
$$p'_k = g_k(p_1, \dots, p_N) = p_{\varphi(k)}, \qquad k = 1, \dots, N$$
where $\varphi$ is a permutation of the $N$ state labels, is thus a particular case of the more general $g$ considered in the above Proposition 1. It is clearly an invertible transformation in which $H(p')$ and $H(g(p'))$ do coincide, but this is no longer assured for more general transformations $g$; see Section 3 below.

3. A Simple, Ecologically Oriented Example

To throw light on the questions considered above, we take a very simple system, used in a number of physical models, that for our purposes can be considered as a minimal example of an ecological community. Consider a set $I$ of $N_0$ identical individuals (balls) which belong to $S$ species (indistinguishable urns). Suppose that each species contains at least one individual, so that the maximal number of individuals allowed to one species is $N = N_0 - (S - 1)$. Suppose that the species labels can be interchanged, so that the observable quantity is not the number of individuals $n_\alpha$ of a given species $\alpha$ but the number $S_n$ of species which have exactly $n$ individuals, for $n = 1, \dots, N$.
Suppose that the set of species $\mathcal S$ is equipped with the uniform probability $1/S$ and consider the random variable $\nu : \mathcal S \to \mathcal N$, $\alpha \mapsto \nu(\alpha) = n_\alpha$, $\mathcal N = \{1, \dots, N\}$, that assigns to a species its abundance. Let $\phi(n)$ be the density associated with the random variable $\nu$ (in the sequel $P$ denotes probability)
$$\phi(n) = P(\nu = n) = P(\alpha \in \nu^{-1}(n)) = \frac{|\nu^{-1}(n)|}{S} = \frac{S_n}{S}, \qquad n \in \mathcal N \tag{1}$$
where $|A|$ is the number of elements of $A$. Set for later use $\mathcal S_n = \nu^{-1}(n) \subseteq \mathcal S$, therefore $|\mathcal S_n| = S_n$.
Let the set of individuals $I$ be equipped with the uniform probability $1/N_0$, let $\alpha : I \to \mathcal S$, $i \mapsto \alpha(i)$ be the assignation of each individual to a species, and consider the random variable $\nu \circ \alpha : I \to \mathcal N$, $i \mapsto \nu(\alpha(i))$. Moreover, consider the following subset $\mathcal I_n$ of $I$ obtained by grouping together all the individuals belonging to the species having $n$ individuals (we call this subset the "multispecies" $n$)
$$\mathcal I_n = \{ i \in I : \nu(\alpha(i)) = n \} \subseteq I, \qquad I_n = |\mathcal I_n|. \tag{2}$$
As a straightforward consequence, we have that $I_n = n S_n$. Let $\phi_I(n)$ be the density associated with the random variable $\nu \circ \alpha$, that is
$$\phi_I(n) = P(\nu \circ \alpha = n) = P(i \in \mathcal I_n) = \frac{I_n}{N_0}, \qquad n \in \mathcal N. \tag{3}$$
Both $\phi$ and $\phi_I$ are probability distributions on $\mathcal N$; it is easy to see that, since $I_n = n S_n$, they are related by the invertible transformation $\phi_I = g(\phi)$ defined as
$$\phi_I(n) = g_n(\phi) = \frac{n S}{N_0}\, \phi(n), \qquad n = 1, \dots, N. \tag{4}$$
In ecology the distribution $\phi$ is called the Species Abundance Distribution (SAD) and is probably the most important (and investigated) indicator of how an ecological community is structured. While $\phi(n)$ gives the probability of extracting a species of abundance $n$ from the set of species, $\phi_I(n)$ gives the probability that an individual extracted from the set $I$ belongs to a species of abundance $n$. Real communities are composed of many species with few individuals (rare species) and few species which contain the majority of the individuals (common species). As a consequence, the two probability distributions are very different.
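The distinction is easy to see on a toy community (our illustrative numbers, not from the paper):

```python
import numpy as np

# Toy community: 5 species with abundances 1, 1, 2, 3, 8, so N0 = 15, S = 5
abundances = np.array([1, 1, 2, 3, 8])
N0, S = abundances.sum(), len(abundances)
N = N0 - (S - 1)                                    # maximal allowed abundance

n = np.arange(1, N + 1)
S_n = np.bincount(abundances, minlength=N + 1)[1:]  # S_n: species with n individuals
phi = S_n / S                                       # SAD, Eq. (1)
phi_I = n * S_n / N0                                # individual-level density, Eq. (3)

assert np.allclose(phi_I, (n * S / N0) * phi)       # transformation g of Eq. (4)
print(n[phi > 0], phi[phi > 0])       # abundances 1,2,3,8: rare species dominate phi
print(n[phi_I > 0], phi_I[phi_I > 0]) # the common species (n = 8) dominates phi_I
```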
The information on the system summarized by the numbers $N_0$, $S$ sets the following equalities
$$N_0 = \sum_{n=1}^N n\, S_n = \sum_{n=1}^N I_n \qquad \text{and} \qquad S = \sum_{n=1}^N S_n = \sum_{n=1}^N \frac{1}{n}\, I_n. \tag{5}$$
If we divide the above two equalities by $S$ or by $N_0$, respectively, we obtain the following equations to be satisfied by $\phi$
$$\frac{N_0}{S} = \sum_{n=1}^N n\, \phi(n), \qquad 1 = \sum_{n=1}^N \phi(n) \tag{6}$$
and by $\phi_I$, respectively,
$$1 = \sum_{n=1}^N \phi_I(n), \qquad \frac{S}{N_0} = \sum_{n=1}^N \frac{1}{n}\, \phi_I(n). \tag{7}$$
Constraints (6) and (7) are g-related, that is, $\phi$ satisfies Equation (6) if and only if $g(\phi)$ satisfies Equation (7). Note that in the first case we are constraining the weighted arithmetic average of $1, \dots, N$, while in the second one we are constraining the weighted harmonic average. Note also that the actual information being used is the ratio $N_0/S$, or equivalently $S/N_0$, so that the MEP description of the system is independent of the size $N_0$, $S$ of the system, provided that the ratio $N_0/S$ is kept constant.
Suppose that two different observers translate into mathematical terms the information contained in the data $N_0$, $S$ using the two variables $\phi$ and $\phi_I$ introduced above, and that both invoke the MEP to find the least informative probability distribution allowed by the constraints. The application of MEP with entropy function $H(\phi) = -\sum_{n=1}^N \phi(n) \ln \phi(n)$ and Constraint (6) gives, upon application of the Lagrange multipliers method, the geometric probability distribution
$$\hat\phi(n) = \frac{e^{-\lambda n}}{Z(\lambda)}, \qquad Z(\lambda) = \sum_{n=1}^N e^{-\lambda n} \tag{8}$$
while the MEP procedure with the same form of entropy function $H(\phi_I) = -\sum_{n=1}^N \phi_I(n) \ln \phi_I(n)$ and Constraint (7) gives the solution
$$\hat\phi_I(n) = \frac{e^{-\gamma \frac{1}{n}}}{Z(\gamma)}, \qquad Z(\gamma) = \sum_{n=1}^N e^{-\gamma \frac{1}{n}} \tag{9}$$
whose corresponding SAD is
$$\tilde\phi(n) = g^{-1}(\hat\phi_I)(n) = \frac{N_0}{S n}\, \frac{e^{-\gamma \frac{1}{n}}}{Z(\gamma)}. \tag{10}$$
It is easy to see that $\hat\phi \neq g^{-1}(\hat\phi_I)$; therefore, with this example we have shown that changing independent variables and applying the MEP with the Shannon entropy are non-commuting operations in the general case. Had we used Constraint (7) with the transformed entropy
$$H(g^{-1}(\phi_I)) = -\sum_{n=1}^N \frac{N_0}{n S}\, \phi_I(n) \ln\left[ \frac{N_0}{n S}\, \phi_I(n) \right] \tag{11}$$
we would have obtained the solution $g(\hat\phi)$ with $\hat\phi$ in Equation (8). Note that the transformed entropy in Equation (11) does not have the form of a relative entropy; therefore it cannot be obtained by introducing a suitable prior probability distribution. Note also that the non-commutativity considered above is different from the non-commutativity of the MEP procedure with respect to coarse graining of the system states considered in [3,4].
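The non-commutativity is easy to exhibit numerically. In the sketch below (ours; the community sizes are arbitrary) the multipliers $\lambda$ and $\gamma$ are fixed by root-finding so that Constraints (6) and (7) hold exactly, and the SAD of Equation (8) is compared with that of Equation (10):

```python
import numpy as np
from scipy.optimize import brentq

N0, S = 100, 20                        # community totals, N0/S = 5
n = np.arange(1, N0 - (S - 1) + 1)     # abundances 1..N

def arith_mean(lam):                   # mean abundance under Eq. (8)
    w = np.exp(-lam * n)
    return (n * w).sum() / w.sum()

def harm_constraint(gam):              # sum of (1/n) phi_hat_I under Eq. (9)
    w = np.exp(-gam / n)
    return ((1 / n) * w).sum() / w.sum()

lam = brentq(lambda x: arith_mean(x) - N0 / S, 1e-6, 5.0)
gam = brentq(lambda x: harm_constraint(x) - S / N0, -200.0, 1e-6)

phi_hat = np.exp(-lam * n); phi_hat /= phi_hat.sum()         # Eq. (8)
phi_I_hat = np.exp(-gam / n); phi_I_hat /= phi_I_hat.sum()   # Eq. (9)
phi_tilde = (N0 / (S * n)) * phi_I_hat                       # Eq. (10)

print(np.abs(phi_hat - phi_tilde).max())   # clearly non-zero: the two SADs differ
```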

Which One is the Correct Application of MEP?

Both $\hat\phi(n)$ in Equation (8) and $\tilde\phi(n)$ in Equation (10) result from the application of the MEP starting from the same initial information $N_0$, $S$ on the system, and they provide non-equivalent answers to the question of how the abundance of individuals is distributed among the species. Since the sets of distributions allowed by the constraints (feasible sets) in the two formulations, Equations (6) and (7), are related by an invertible transformation, $\mathcal D_I = g(\mathcal D)$, the non-equivalence must reside in the choice of the form of the entropy functions, which are not g-related according to Proposition 1. On the basis of the same Proposition, once the form of the entropy function is established in a given formulation of MEP $(p, f(p), H(p))$, it is also determined for all g-related formulations. It seems thus necessary to invoke a sort of first principle to establish the correct form of the entropy function in the initial MEP formulation.
A possible approach could be to compare the form of the resulting probability distribution against field data. This is not entirely satisfactory, since a discrepancy between the MEP solution and the empirical curve could be due to the fact that we have neglected (or we are not aware of) some information on the system which is relevant for the question addressed. In this case the discrepancy between the model and the empirical pattern is a signal that other factors shaping the probability distribution are acting on the system. Incidentally, for the problem considered above, it is generally acknowledged in the ecological literature (see e.g., [5]) that the geometric SAD in Equation (8) has a poor fit with the empirical SAD, while the one introduced in Equation (10) could provide a better fit since it is more flexible (it can be monotonically decreasing or display an interior mode depending on the value of the Lagrange multiplier).
At least for systems with a discrete number of states, another criterion can be introduced, which uses the combinatorial argument that historically preceded the axiomatic derivation in [18] and provides one of the justifications for the form of the Shannon entropy.
This is the celebrated Boltzmann problem of statistical mechanics [19]. It is useful to briefly review its solution here, because it tells us how to determine the correct form of the entropy function from first principles in cases where its determination is not obvious. It deals with the distribution of $K$ indistinguishable particles (individuals) among $M$ equally spaced energy levels $\epsilon = 1, \dots, M$. Boltzmann's original formulation dealt with arbitrarily spaced energy levels, but this extra generality is not needed here. If $n_\epsilon$ is the number of particles having energy $\epsilon$ and $\mathbf n = (n_1, \dots, n_M)$ is the vector of occupation numbers of the energy levels, the logarithm of the probability of $\mathbf n$ is
$$\ln W(\mathbf n) = \ln \left[ \frac{1}{M^K}\, \frac{K!}{n_1! \cdots n_M!} \right] \tag{12}$$
which, using the Stirling approximation $\ln n! \simeq n \ln n$, can be rewritten as
$$\ln W(\mathbf n) = K \ln K - \sum_\epsilon n_\epsilon \ln n_\epsilon. \tag{13}$$
The maximization of $\ln W(\mathbf n)$ under the constraints $\sum_\epsilon n_\epsilon = K$ and $\sum_\epsilon \epsilon\, n_\epsilon = E_0$ gives the most common, i.e., least informative, vector of occupation numbers. The solution of this constrained maximization problem is the celebrated Boltzmann–Gibbs distribution
$$n_\epsilon = K\, \frac{e^{-\beta \epsilon}}{\sum_{\epsilon=1}^M e^{-\beta \epsilon}}, \qquad \psi(\epsilon) = \frac{n_\epsilon}{K} = \frac{e^{-\beta \epsilon}}{\sum_{\epsilon=1}^M e^{-\beta \epsilon}}. \tag{14}$$
Note that, writing $n_\epsilon = \psi(\epsilon) K$, $\ln W$ in Equation (13) coincides up to a constant with
$$H(\psi) = -\sum_\epsilon \psi(\epsilon) \ln \psi(\epsilon). \tag{15}$$
In the same manner, if we have $S$ indistinguishable species to be assigned to the $N$ levels (abundances) of $\mathcal N$, let $\mathbf s = (S_1, \dots, S_N)$ be the occupation vector, with constraints $\sum_n S_n = S$ and $\sum_n n\, S_n = N_0$. As before, setting $S \phi(n) = S_n$, the least informative probability distribution is
$$\hat\phi(n) = \frac{e^{-\lambda n}}{\sum_n e^{-\lambda n}} = \frac{e^{-\lambda n}}{Z(\lambda)} \tag{16}$$
and the associated entropy is
$$H(\phi) = -\sum_n \phi(n) \ln \phi(n). \tag{17}$$
Therefore, the form of the entropy function for the formulation of the MEP problem given by Equation (6), using the variable $\phi$, can be derived by Boltzmann's combinatorial argument.
Let us see which entropy function results if we apply Boltzmann's argument to the formulation given by Equation (7), using $\phi_I$. Remember that
$$(A)\ \ \phi(n) = P(\alpha \in \mathcal S_n), \qquad (B)\ \ \phi_I(n) = P(i \in \mathcal I_n) \tag{18}$$
so basically in (A) we are "tossing" $S$ species into the $N$ bins $\mathcal S_n$, while in (B) we are "tossing" $N_0$ individuals into the $N$ multispecies $\mathcal I_n$. In the Boltzmann argument the tosses are supposed to be independent, and this is true for (A) but not for (B). Indeed, a multispecies $\mathcal I_n$ is the grouping of $S_n$ species each having $n$ individuals, so the number of individuals in $\mathcal I_n$ can be updated only by multiples of $n$, i.e., by adding or subtracting a species of $n$ individuals. Hence the tosses of single individuals do not represent independent tosses. If we want to support the choice of the entropy function by the Boltzmann argument, we have to consider tosses of groups of $n$ individuals. Let $i_n$ be the number of individuals tossed into $\mathcal I_n$. Therefore the related occupation vector has the form
$$\mathbf i = \left( \frac{i_1}{1}, \dots, \frac{i_n}{n}, \dots, \frac{i_N}{N} \right), \qquad W(\mathbf i) = \frac{1}{N^{N_0}}\, \frac{N_0!}{\frac{i_1}{1}! \cdots \frac{i_N}{N}!} \tag{19}$$
which leads to an entropy (recall that $i_n = N_0\, \phi_I(n)$) of the form
$$H(\phi_I) = -\sum_n \frac{N_0}{n}\, \phi_I(n) \ln \left[ \frac{N_0}{n}\, \phi_I(n) \right] \tag{20}$$
and, under the Constraints (7) above, to the solution
$$\hat\phi_I(n) = \frac{n S}{N_0}\, \frac{e^{-\lambda n}}{Z(\lambda)} = g(\hat\phi)(n). \tag{21}$$
We have shown that, in the case where the probability distribution being used can be linked, at least conceptually, to independent repetitions of an experiment, we have a criterion to derive the correct form of the entropy function from a first principle. Of course, to build a general theory, this simple example deserves further generalization.
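As a consistency check, one can verify numerically that maximizing the group-toss entropy of Equation (20) under Constraints (7) indeed returns $g(\hat\phi)$ of Equation (21). A sketch (ours; small community sizes chosen for speed):

```python
import numpy as np
from scipy.optimize import brentq, minimize

N0, S = 30, 10
N = N0 - (S - 1)
n = np.arange(1, N + 1)

def H_group(q):                     # entropy of Eq. (20): tosses of groups of size n
    t = (N0 / n) * q
    return -np.sum(t * np.log(t))

cons = [{"type": "eq", "fun": lambda q: q.sum() - 1},
        {"type": "eq", "fun": lambda q: ((1 / n) * q).sum() - S / N0}]  # Constraints (7)
q_hat = minimize(lambda q: -H_group(q), np.full(N, 1 / N),
                 bounds=[(1e-9, 1)] * N, constraints=cons).x

# reference: g(phi_hat) of Eq. (21), with lambda fixed by the mean N0/S
w = lambda lam: np.exp(-lam * n)
lam = brentq(lambda x: (n * w(x)).sum() / w(x).sum() - N0 / S, 1e-6, 5.0)
phi_hat = w(lam) / w(lam).sum()
ref = (n * S / N0) * phi_hat

print(np.abs(q_hat - ref).max())    # ~0: the Boltzmann argument fixes the entropy
```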

4. Maximum Entropy Theory of Ecology

4.1. Review of M.E.T.E. Assumptions

Let us review the assumptions of M.E.T.E. as presented in [5]. In [5], various metrics (probability distributions) are derived, but here we will content ourselves with the analysis of the derivation of two of them, namely the Species Abundance Distribution $\phi(n)$ and the distribution of metabolic energy $\psi(\epsilon)$ among the individuals.
Following [5], we consider a community made of $N_0$ individuals living on an area $A_0$, subdivided into $S$ species and having total metabolic energy rate $E_0$. Assume that each species has at least one individual and that the individual metabolic rate is a discrete quantity which ranges from 1 to a maximum $M$ in a suitable energy unit. With these assumptions, the abundance of a species $\alpha$ is $\nu(\alpha) \in \{1, \dots, N\}$ with $N = N_0 - S + 1$, and the individual metabolic rate is $m(i) \in \mathcal M = \{1, \dots, M\}$, with $M = E_0 - N_0 + 1$. Each individual is labelled by a double label $(\alpha, \epsilon)$ indicating the species $\alpha = 1, \dots, S$ and the metabolic rate $\epsilon = 1, \dots, M$. Here we depart from [5] by assuming, for the sake of simplicity, that the individual metabolic rate is a discrete quantity, while in [5] it is assumed to be a continuous one. This change does not affect our argument, since for dealing with the continuous case it is enough to substitute the sums with integrals.
The information on the system is limited to the values $N_0$, $S$, $E_0$ and $A_0$, but for our aims the knowledge of $A_0$ is not relevant. Therefore we are dealing with a non-spatial model of a community of $S$ non-interacting species. On the basis of this sole knowledge we would like to answer two questions: ($Q_1$) How is the energy distributed among the individuals? ($Q_2$) How are the individuals distributed among the species? Note that, since the species labels can be exchanged, the correct formulation of ($Q_2$) is how many of the species have exactly $n$ individuals, for $n = 1, \dots, N$, which is the information contained in $\phi(n)$.
Both questions can be answered in a non-deterministic manner; for ($Q_2$), we use the density $\phi$ introduced in Equation (1), while, for ($Q_1$), we introduce the map $m : I \to \mathcal M$, $i \mapsto m(i)$, that we will consider as a random variable, and its density $\psi(\epsilon)$ on $\mathcal M$
$$\psi(\epsilon) = P(m = \epsilon) = P(i \in m^{-1}(\epsilon)) = \frac{I_\epsilon}{N_0}, \qquad \epsilon \in \mathcal M$$
where $\mathcal I_\epsilon = m^{-1}(\epsilon) \subseteq I$ and $I_\epsilon = |\mathcal I_\epsilon|$. For later use, consider also the joint probability distribution
$$P(n, \epsilon) = P(i \in \mathcal I_n,\, i \in \mathcal I_\epsilon) = \frac{1}{N_0}\, |\mathcal I_n \cap \mathcal I_\epsilon| \tag{22}$$
having marginals, respectively, $\phi_I(n)$ and $\psi(\epsilon)$, since the space $I$ is doubly partitioned into multispecies and energy classes.

4.2. Our Solution of the M.E.T.E. Problem

The following constraints contain all the information ($N_0$, $S$, $E_0$) available on the system
$$\sum_\epsilon \psi(\epsilon) = 1, \qquad \sum_\epsilon \epsilon\, \psi(\epsilon) = \frac{E_0}{N_0} \tag{23}$$
$$\sum_n \phi(n) = 1, \qquad \sum_n n\, \phi(n) = \frac{N_0}{S} \tag{24}$$
and a straightforward application of MEP with the constraints in Equation (23) and the entropy in Equation (15) [with the constraints in Equation (24) and the entropy in Equation (17), respectively] gives the answers $\hat\psi$ in Equation (14) [$\hat\phi$ in Equation (16)] to the above questions ($Q_1$) and ($Q_2$), i.e.,
$$\hat\psi(\epsilon) = \frac{e^{-\beta \epsilon}}{Z(\beta)} = P(i \in \mathcal I_\epsilon), \qquad \hat\phi(n) = \frac{e^{-\lambda n}}{Z(\lambda)} = P(\alpha \in \mathcal S_n). \tag{25}$$
Basically, we are solving Boltzmann's problem of statistical mechanics twice. Now, if one is interested in the joint probability
$$Q(n, \epsilon) = P(\alpha \in \mathcal S_n,\, i \in \mathcal I_\epsilon) \tag{26}$$
we proceed as before, introducing the vector $\mathbf c = (C_{1,1}, \dots, C_{N,M})$ where $C_{n,\epsilon}$ counts the number of times the object $(\alpha, i)$ is assigned to the bin $\mathcal S_n \times \mathcal I_\epsilon$. Hence
$$W(\mathbf c) = \frac{1}{(N M)^{N_0 S}}\, \frac{(N_0 S)!}{C_{1,1}! \cdots C_{N,M}!} \tag{27}$$
with
$$\ln W(\mathbf c) = N_0 S \ln N_0 S - \sum_{n,\epsilon} C_{n,\epsilon} \ln C_{n,\epsilon}$$
to be maximized under the decoupled constraints ($\sum_\epsilon C_{n,\epsilon} = S_n$, $\sum_n C_{n,\epsilon} = n_\epsilon$)
$$\sum_{n,\epsilon} C_{n,\epsilon} = N_0 S, \qquad \sum_{n,\epsilon} n\, C_{n,\epsilon} = N_0, \qquad \sum_{n,\epsilon} \epsilon\, C_{n,\epsilon} = E_0. \tag{28}$$
The related entropy is, setting $C_{n,\epsilon} = Q(n, \epsilon)\, N_0 S$,
$$H_{N \times M}(Q) = -\sum_{n,\epsilon} Q(n, \epsilon) \ln Q(n, \epsilon) \tag{29}$$
and the maximum entropy distribution with Constraint (28) can be shown to be
$$\hat Q(n, \epsilon) = \hat\phi(n)\, \hat\psi(\epsilon) = \frac{e^{-\lambda n}}{Z(\lambda)}\, \frac{e^{-\beta \epsilon}}{Z(\beta)}. \tag{30}$$
We claim that Equation (30) is the correct solution of the MEP problem dealt with in [5,13], in the sense that the choice of the entropy function adopted is supported by the Boltzmann argument. We do not claim that this solution has a better fit with empirically derived patterns with respect to others, but only that it is derived in a consistent way. In this solution the two random variables $\nu$ and $m$ are uncorrelated; therefore the energy and species constraints can be dealt with separately, giving Equations (14) and (16), or at the same time. This is a well-known property of MEP: if the constraints concern only the marginals of a joint probability, then the MEP solution is the product of two one-dimensional distributions.
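This factorization property is easy to confirm numerically. The sketch below (ours; grid sizes and target means are arbitrary) maximizes the joint entropy of Equation (29) under constraints that involve only the two marginals, and checks that the solution is a product distribution:

```python
import numpy as np
from scipy.optimize import minimize

Nn, Mm = 6, 5                                   # abundance and energy levels
n = np.arange(1, Nn + 1); e = np.arange(1, Mm + 1)

cons = [
    {"type": "eq", "fun": lambda x: x.sum() - 1},
    {"type": "eq",                              # mean abundance (marginal in n)
     "fun": lambda x: (x.reshape(Nn, Mm).sum(axis=1) * n).sum() - 2.0},
    {"type": "eq",                              # mean energy (marginal in eps)
     "fun": lambda x: (x.reshape(Nn, Mm).sum(axis=0) * e).sum() - 1.5},
]
x0 = np.full(Nn * Mm, 1 / (Nn * Mm))
Q = minimize(lambda x: np.sum(x * np.log(x)), x0,      # minimize minus entropy
             bounds=[(1e-9, 1)] * (Nn * Mm), constraints=cons).x.reshape(Nn, Mm)

product = np.outer(Q.sum(axis=1), Q.sum(axis=0))  # product of the marginals
print(np.abs(Q - product).max())                  # ~0: no correlation appears
```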

5. Analysis of the Application of MEP in M.E.T.E.

The starting point of the M.E.T.E. theory is the probability distribution $R(n, \epsilon)$ on $\mathcal N \times \mathcal M$ in Equation (31) below, introduced in [5], formula (4c). Quoting [5], page 2702, above formula (1a): "$R(n, \epsilon)$ is the probability that if a species is picked at random from the species list, then it has abundance $n$, and if an individual is picked at random from that species with abundance $n$, then its metabolic rate is $\epsilon$" (is between $[\epsilon, \epsilon + d\epsilon]$ in the original continuum energy formulation; italics are ours)
$$R(n, \epsilon) = \phi(n)\, \Theta(\epsilon \,|\, n). \tag{31}$$
This is a delicate point: we think that using "that species" makes the above definition logically inconsistent, because if there is more than one species with abundance $n$ it is impossible to know which species is being picked, and so the probability of picking an individual with a given metabolic rate is not uniquely defined. One can readily convince oneself of this by considering a toy model of the system along the lines of the example in Box 7.1 in [13], page 144, but with at least two species having equal abundance and containing individuals with the same metabolic rate. The only way for us to be logically consistent is to substitute "that species" with "a species". This is equivalent to picking an individual of given metabolic rate from the multispecies $\mathcal I_n$. In this paper we have thus interpreted the definition of $R(n, \epsilon)$ as amended in this way.
Note, however, that in the sequel of [5], below formula (4c), it is (correctly) written "a species". Note also that the same ambiguity is present in the book [13], Chapter 7, pages 142 and 143. In our framework, we thus rephrase the definition as
$$R(n, \epsilon) = \phi(n)\, P(i \in \mathcal I_\epsilon \,|\, i \in \mathcal I_n) \tag{32}$$
and, using the definition of conditional probability $P(A, B) = P(A)\, P(B|A)$ and Equations (4) and (22), also as
$$R(n, \epsilon) = \phi(n)\, \frac{P(i \in \mathcal I_\epsilon,\, i \in \mathcal I_n)}{P(i \in \mathcal I_n)} = \phi(n)\, \frac{P(n, \epsilon)}{\phi_I(n)} = \frac{N_0}{S n}\, P(n, \epsilon).$$
Therefore, the probability distributions $P(n, \epsilon)$ and $R(n, \epsilon)$ are related by the invertible transformation $g$ (we use the same symbol for the sake of simplicity)
$$P(n, \epsilon) = g_n(R(n, \epsilon)) = \frac{n S}{N_0}\, R(n, \epsilon). \tag{33}$$
Moreover, the information $N_0$, $S$, $E_0$ sets the following constraints on $R(n, \epsilon)$, see (1a)–(1c) in [5]:
$$\sum_{n,\epsilon} n\, R(n, \epsilon) = \frac{N_0}{S} \tag{34}$$
$$\sum_{n,\epsilon} \epsilon\, n\, R(n, \epsilon) = \frac{E_0}{S} \tag{35}$$
$$\sum_{n,\epsilon} R(n, \epsilon) = 1 \tag{36}$$
hence $R(n, \epsilon)$ is also properly normalized. In [5], the MEP procedure is applied as follows: the probability distribution $R(n, \epsilon)$ is the one that maximizes the Shannon entropy
$$H(R) = -\sum_{n,\epsilon} R(n, \epsilon) \ln R(n, \epsilon) \tag{37}$$
on the set defined by Constraints (34)–(36) above. It is therefore the least informative probability distribution on the basis of the sole knowledge of the macroscopic ratios $E_0/S$ and $N_0/S$. Note that, as observed in [5], if the previous ratios are known, $E_0/N_0$ is also known.
Standard application of the MEP gives the probability distribution depending on the unknown multipliers $\lambda$ and $\beta$ (see [5], formula (6))
$$R(n, \epsilon) = \frac{e^{-\lambda n - \beta \epsilon n}}{Z(\lambda, \beta)}. \tag{38}$$
The Lagrange multipliers $\lambda$ and $\beta$ have to be determined by inserting Equation (38) into the constraints in Equations (34) and (35), but an analytical solution of the resulting equations does not exist. Adopting the approximations made in [5], we are led to the following explicit formulae for the marginal (the SAD distribution)
$$\Phi(n) = \sum_\epsilon R(n, \epsilon) \simeq \frac{1}{\ln \lambda^{-1}}\, \frac{e^{-\lambda n}}{n} \tag{39}$$
which is the celebrated Fisher log-series ([20]), and to
$$\Psi(\epsilon) = \frac{S}{N_0} \sum_n n\, R(n, \epsilon) \simeq \frac{\lambda \beta\, e^{-\lambda - \beta \epsilon}}{\left( 1 - e^{-\lambda - \beta \epsilon} \right)^2} \tag{40}$$
for the energy marginal. Moreover, the conditional probability is
$$\Theta(\epsilon \,|\, n) \simeq \beta n\, e^{-\beta n \epsilon}. \tag{41}$$
What is striking in the above result is that it prescribes a non-zero correlation (Equation (41)) between the distribution of energy among individuals and the distribution of abundance among species, even if nothing in the initial information seems to hint at a possible correlation. This is for us the signal of a flaw in the above application of MEP.
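Although the multipliers of Equation (38) admit no closed form, they are easy to obtain numerically. A sketch (ours; the totals $N_0$, $S$, $E_0$ are arbitrary illustrative values):

```python
import numpy as np
from scipy.optimize import least_squares

N0, S, E0 = 100, 20, 500
N, M = N0 - S + 1, E0 - N0 + 1
n = np.arange(1, N + 1)[:, None]          # abundances as a column
e = np.arange(1, M + 1)[None, :]          # metabolic rates as a row

def residuals(x):                         # Constraints (34) and (35) on Eq. (38)
    lam, beta = x
    R = np.exp(-lam * n - beta * e * n)
    R /= R.sum()                          # normalization, Constraint (36)
    return [(n * R).sum() - N0 / S, (n * e * R).sum() - E0 / S]

sol = least_squares(residuals, x0=[0.05, 0.01],
                    bounds=([1e-6, 1e-6], [10.0, 10.0]))
lam, beta = sol.x
print(lam, beta)                          # multipliers of the M.E.T.E. solution
```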

The Correct Form of the Entropy Function for the M.E.T.E. Problem

Our aim now is to derive the form of the entropy function for the MEP problem in the $R(n, \epsilon)$ variable, with constraints on energy and species as in Equations (34)–(36), using the Boltzmann argument. Since $R(n, \epsilon)$ is not a joint probability distribution, we derive the entropy function for the distribution $P(n, \epsilon) = P(i \in \mathcal I_n, i \in \mathcal I_\epsilon)$ in Equation (22), which is related to $R(n, \epsilon)$ by the change of variable (33), and then use Proposition 1. The g-related constraints of Equations (34)–(36) for $P(n, \epsilon)$ are
$$\sum_{n,\epsilon} P(n, \epsilon) = 1, \qquad \sum_{n,\epsilon} \frac{1}{n}\, P(n, \epsilon) = \frac{S}{N_0}, \qquad \sum_{n,\epsilon} \epsilon\, P(n, \epsilon) = \frac{E_0}{N_0}. \tag{42}$$
Note that, using $P(n, \epsilon)$, the constraints involve only the marginals of $P(n, \epsilon)$; hence, unlike for $R(n, \epsilon)$, the MEP solution is the product of the marginals, see Equation (48) below. Given the occupation vector $\mathbf i = (i_1, \dots, i_N)$ of the $N_0$ individuals in the $N$ multispecies, for every $n$ we have to distribute $i_n$ individuals among the $M$ energy groups $\mathcal I_\epsilon$, and each arrangement $\mathbf i_n = (i_{n,1}, \dots, i_{n,M})$ with $i_n = \sum_\epsilon i_{n,\epsilon}$ has probability
$$W_n(\mathbf i_n) = \frac{1}{M^{i_n}}\, \frac{i_n!}{i_{n,1}! \cdots i_{n,M}!} \tag{43}$$
Therefore, taking into account that we have $i_n / n$ independent tosses in the bin $\mathcal I_n$, we have to use the multiplicity factor $W(\mathbf i)$ in Equation (19)
$$W = W(\mathbf i) \prod_{n=1}^N W_n(\mathbf i_n)$$
hence
$$\ln W = \ln W(\mathbf i) + \ln \prod_{n=1}^N W_n(\mathbf i_n). \tag{44}$$
The maximum of $\ln W$ is reached when both terms on the right-hand side are maximized, but the first term $\ln W(\mathbf i)$ is independent of the arrangement in the energy levels. Therefore we can maximize the second term on the right-hand side of Equation (44) for fixed values of $\mathbf i$, and then maximize the first term with respect to $\mathbf i$. Now
$$\ln \prod_{n=1}^N W_n(\mathbf i_n) = \sum_n \ln W_n(\mathbf i_n) = -\sum_{n,\epsilon} i_{n,\epsilon} \ln i_{n,\epsilon} + f(\mathbf i) \tag{45}$$
is to be maximized under the $N + 1$ constraints (note that the constraint $\sum_{n,\epsilon} i_{n,\epsilon} = N_0$ is already enforced, since $\sum_n i_n = N_0$)
$$\sum_{n,\epsilon} \epsilon\, i_{n,\epsilon} = E_0, \qquad \text{and} \qquad \sum_\epsilon i_{n,\epsilon} = i_n, \quad n = 1, \dots, N. \tag{46}$$
The unconstrained extremum problem for the Lagrange function
$$G = -\sum_{n,\epsilon} i_{n,\epsilon} \ln i_{n,\epsilon} - \beta \left( \sum_{n,\epsilon} \epsilon\, i_{n,\epsilon} - E_0 \right) - \sum_n \mu_n \left( \sum_\epsilon i_{n,\epsilon} - i_n \right) \tag{47}$$
gives the solution
$$i_{n,\epsilon} = i_n\, \frac{e^{-\beta \epsilon}}{\sum_\epsilon e^{-\beta \epsilon}} = i_n\, \frac{e^{-\beta \epsilon}}{Z(\beta)}.$$
Since $i_{n,\epsilon} = P(n, \epsilon)\, N_0$ and $i_n = \phi_I(n)\, N_0$, we have
$$P(n, \epsilon) = \phi_I(n)\, \frac{e^{-\beta \epsilon}}{Z(\beta)} \tag{48}$$
and, to get the complete solution, we have to maximize $W(\mathbf i)$, which has already been dealt with in the previous section, Equations (19) and (20). Therefore, the MEP procedure for the distribution $P(n, \epsilon)$ with the above constraints gives the solution
$$\hat P(n, \epsilon) = \frac{n S}{N_0}\, \frac{e^{-\lambda n}}{Z(\lambda)}\, \frac{e^{-\beta \epsilon}}{Z(\beta)}. \tag{49}$$
Using the change of variables in Equation (33) and Proposition 1, we find the solution of the MEP problem in [5] with respect to the variable $R(n, \epsilon)$:
$$\hat R(n, \epsilon) = \frac{N_0}{S n}\, \hat P(n, \epsilon) = \frac{e^{-\lambda n}}{Z(\lambda)}\, \frac{e^{-\beta \epsilon}}{Z(\beta)} \tag{50}$$
which coincides with $\hat Q(n, \epsilon)$ in Equation (30).
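The difference between the two solutions is visible in the correlation between $n$ and $\epsilon$: under Equation (38) it is non-zero, while under the g-related solution above it vanishes. A quick check (ours; the multiplier values are arbitrary):

```python
import numpy as np

N, M = 40, 60
n = np.arange(1, N + 1)[:, None]; e = np.arange(1, M + 1)[None, :]
lam, beta = 0.1, 0.05                     # illustrative multiplier values

def cov_ne(R):                            # covariance of n and eps under R
    R = R / R.sum()
    return (n * e * R).sum() - (n * R).sum() * (e * R).sum()

R_mete = np.exp(-lam * n - beta * e * n)       # Eq. (38): n and eps are coupled
R_ours = np.exp(-lam * n) * np.exp(-beta * e)  # the g-related solution above

print(cov_ne(R_mete))   # clearly non-zero (negative)
print(cov_ne(R_ours))   # zero up to floating point
```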

6. Discussion

A non-trivial task in the application of the MEP is the translation into mathematical terms (i.e., as constraints on the sought probability distribution) of the information available on the system. In some cases one may hesitate between mathematically equivalent formalizations. What is not so well known is that the use of the Shannon entropy $H(p)$ may not be justified in all cases. We have provided a criterion, based on Boltzmann's method of the most probable occupation vector, to derive the correct form of the entropy function in a given formulation. Note that the problem addressed here is different from the search for a suitable prior distribution for the relative entropy $D(p|q)$. In the second part of the paper we have applied this analysis to critically examine the Maximum Entropy Theory of Ecology (M.E.T.E.). Even if the application of MEP contained in M.E.T.E. seems flawed, the resulting SAD proposed by M.E.T.E. is the Fisher log-series, which is widely accepted in the ecological community and considered to give one of the best fits for many ecological communities (although the use of the log-series SAD has been questioned recently, see [21]). Therefore, the predictions of species abundance based on M.E.T.E. may well be in good agreement with field data. What seems to lack a sound basis in M.E.T.E. is the claim that the information contained in $N_0$, $S$, $E_0$ produces a coupling between the distribution of abundance among species and the distribution of metabolic energy among individuals.

Acknowledgments

We thank A. Maritan, S. Suweis, A. Tovo, M. Formentin and M. Pavon for fruitful discussions on the subject of this paper.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Proof of Proposition 1.
To seek the constrained extrema of $H(p)$ over the set $f(p) - c = 0$, $c \in \mathbb R^k$, we introduce the Lagrange function
$$G(p, \lambda) = H(p) + \sum_{\alpha=1}^k \lambda_\alpha \left( f_\alpha(p) - c_\alpha \right). \tag{A1}$$
$\hat p$ is a constrained extremum if and only if, for all $i$,
$$\left( \nabla_p G(\hat p, \lambda) \right)_i = \left( \nabla H(\hat p) \right)_i + \sum_{\alpha=1}^k \lambda_\alpha \frac{\partial f_\alpha(\hat p)}{\partial p_i} = \left( \nabla H(\hat p) + df^T(\hat p)\, \lambda \right)_i = 0. \tag{A2}$$
Let $H'(p') = H(g(p'))$ and $f'(p') = f(g(p'))$, where $p = g(p')$ is invertible, and
$$G'(p', \mu) = H'(p') + \sum_{\alpha=1}^k \mu_\alpha \left( f'_\alpha(p') - c_\alpha \right). \tag{A3}$$
$\hat p'$ is a constrained extremum of $G'$ over the set $f'(p') = c$ if and only if, for all $i$,
$$\left( \nabla_{p'} G'(\hat p', \mu) \right)_i = \left( \nabla H'(\hat p') \right)_i + \sum_{\alpha=1}^k \mu_\alpha \frac{\partial f'_\alpha(\hat p')}{\partial p'_i} = 0 \tag{A4}$$
that is
$$\sum_j \frac{\partial H(g(p'))}{\partial p_j} \frac{\partial g_j(p')}{\partial p'_i} + \sum_{j, \alpha} \mu_\alpha \frac{\partial f_\alpha(g(p'))}{\partial p_j} \frac{\partial g_j(p')}{\partial p'_i} = 0 \tag{A5}$$
or, in compact notation
$$dg^T(p') \left[ \nabla_p H(g(p')) + df^T(g(p'))\, \mu \right] = 0. \tag{A6}$$
Since $g$ is invertible, $\det dg \neq 0$, and $\nabla_p H(g(p')) + df^T(g(p'))\, \mu = 0$ if and only if $\hat p = g(\hat p')$ is a constrained extremum for $H(p)$, with $\lambda = \mu$. ☐

References

  1. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  2. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  3. Banavar, J.R.; Maritan, A.; Volkov, I. Applications of the principle of maximum entropy: From physics to ecology. J. Phys. Condens. Matter 2010, 22, 063101. [Google Scholar] [CrossRef] [PubMed]
  4. Haegeman, B.; Etienne, R.S. Entropy maximization and the spatial distribution of species. Am. Nat. 2010, 175, E74–E90. [Google Scholar] [CrossRef] [PubMed]
  5. Harte, J.; Zillio, T.; Conlisk, E.; Smith, A.B. Maximum entropy and the state-variable approach to macroecology. Ecology 2008, 89, 2700–2711. [Google Scholar]
  6. Dewar, R.C.; Porte, A. Statistical mechanics unifies different ecological patterns. J. Theor. Biol. 2008, 251, 389–403. [Google Scholar]
  7. Phillips, S.J.; Anderson, R.P.; Schapire, R.E. Maximum entropy modeling of species geographic distributions. Ecol. Model. 2006, 190, 231–259. [Google Scholar] [CrossRef]
  8. Pueyo, S.; He, F.; Zillio, T. The maximum entropy formalism and the idiosyncratic theory of biodiversity. Ecol. Lett. 2007, 10, 1017–1028. [Google Scholar] [CrossRef] [PubMed]
  9. Frank, S.A. Measurement scale in maximum entropy models of species abundance. J. Evolut. Biol. 2011, 24, 485–496. [Google Scholar] [CrossRef] [PubMed]
  10. Haegeman, B.; Loreau, M. Limitations of entropy maximization in ecology. Oikos 2008, 117, 1700–1710. [Google Scholar] [CrossRef]
  11. Shipley, B. Limitations of entropy maximization in ecology: A reply to Haegeman and Loreau. Oikos 2009, 118, 152–159. [Google Scholar] [CrossRef]
  12. Haegeman, B.; Loreau, M. Trivial and non trivial applications of entropy maximization in ecology: A reply to Shipley. Oikos 2009, 118, 1270–1278. [Google Scholar] [CrossRef]
  13. Harte, J. Maximum Entropy and Ecology. A Theory of Abundance, Distribution, and Energetics; Oxford University Press: Oxford, UK, 2011. [Google Scholar]
  14. Harte, J.; Newman, A. Maximum information entropy: A foundation for ecological theory. Trends Ecol. Evol. 2014, 29, 384–389. [Google Scholar] [CrossRef] [PubMed]
  15. White, E.P.; Thibault, K.M.; Xiao, X. Characterizing species abundance distributions across taxa and ecosystems using a simple maximum entropy model. Ecology 2012, 93, 1772–1778. [Google Scholar] [CrossRef] [PubMed]
  16. McGlinn, D.J.; Xiao, X.; Kitzes, J.; White, E.P. Exploring the spatially explicit predictions of the Maximum Entropy Theory of Ecology. Glob. Ecol. Biogeogr. 2015, 24, 675–684. [Google Scholar] [CrossRef]
  17. Xiao, X.; McGlinn, D.J.; White, E.P. A strong test of the maximum entropy theory of ecology. Am. Nat. 2015, 185, E70–E80. [Google Scholar] [CrossRef] [PubMed]
  18. Shore, J.; Johnson, R. Axiomatic derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37. [Google Scholar] [CrossRef]
  19. Schrödinger, E. Statistical Thermodynamics; Courier Corporation: North Chelmsford, MA, USA, 1989. [Google Scholar]
  20. Fisher, R.A.; Corbet, A.S.; Williams, C.B. The relation between the number of species and the number of individuals in a random sample of an animal population. J. Anim. Ecol. 1943, 12, 42–58. [Google Scholar] [CrossRef]
  21. Tovo, A.; Suweis, S.; Formentin, M.; Favretti, M.; Volkov, I.; Banavar, J.R.; Maritan, A. Upscaling species richness and abundances in tropical forests. Sci. Adv. 2017, 3, e1701438. [Google Scholar] [CrossRef] [PubMed]
