Article

Maximum Entropy Analysis of Flow Networks: Theoretical Foundation and Applications

1 School of Engineering and Information Technology, The University of New South Wales, Northcott Drive, Canberra, ACT 2600, Australia
2 Ambrosys GmbH, 14469 Potsdam, Germany
3 Institute for Physics and Astrophysics, University of Potsdam, 14469 Potsdam, Germany
4 Institut für Strömungsmechanik und Technische Akustik, Technische Universität Berlin, 10623 Berlin, Germany
* Author to whom correspondence should be addressed.
Entropy 2019, 21(8), 776; https://doi.org/10.3390/e21080776
Submission received: 24 June 2019 / Revised: 29 July 2019 / Accepted: 31 July 2019 / Published: 8 August 2019
(This article belongs to the Special Issue Entropy Based Inference and Optimization in Machine Learning)

Abstract:
The concept of a “flow network”—a set of nodes and links which carries one or more flows—unites many different disciplines, including pipe flow, fluid flow, electrical, chemical reaction, ecological, epidemiological, neurological, communications, transportation, financial, economic and human social networks. This Feature Paper presents a generalized maximum entropy framework to infer the state of a flow network, including its flow rates and other properties, in probabilistic form. In this method, the network uncertainty is represented by a joint probability function over its unknowns, subject to all that is known. This gives a relative entropy function which is maximized, subject to the constraints, to determine the most probable or most representative state of the network. The constraints can include “observable” constraints on various parameters, “physical” constraints such as conservation laws and frictional properties, and “graphical” constraints arising from uncertainty in the network structure itself. Since the method is probabilistic, it enables the prediction of network properties when there is insufficient information to obtain a deterministic solution. The derived framework can incorporate nonlinear constraints or nonlinear interdependencies between variables, at the cost of requiring numerical solution. The theoretical foundations of the method are first presented, followed by its application to a variety of flow networks.


1. Introduction

The past decade witnessed a tremendous growth of interest in the structural and dynamic properties of networks, especially from the perspective of statistical physics, e.g., [1,2,3,4,5,6,7,8,9,10]. This was triggered by many new applications, e.g., the Internet and Web 2.0 applications, e.g., [11,12]; human social networks, e.g., [3,4,8]; electrical power networks with distributed generation and storage, e.g., [13,14]; land transport, air traffic and freight networks, e.g., [15,16,17,18,19]; complex dynamical systems and causal networks, e.g., [20,21,22]; nonequilibrium chemical reaction and biological networks, such as metabolic networks, e.g., [23,24,25]; ecological and global climate networks, e.g., [26,27]; and “networks of networks”, e.g., [28,29,30,31]. A literature review spanning all of these applications would be vast; we here defer to several reviews and monographs [3,4,5,7,9,19,21,30,31]. However, despite the breadth and depth of these studies, many fundamental questions on the analysis of networks have not been satisfactorily resolved. In particular, while many studies have analyzed the dynamics of networks, much less attention has been devoted to the analysis of dynamics on networks, or the combined problem of dynamics on and by networks. Despite several pioneering studies within several disciplines, a general probabilistic framework for the analysis of flows on networks subject to uncertainty has not been presented.
We here define a flow network as a set of nodes connected by links, which carries flow(s) of one or more quantities; typically but not exclusively of conserved quantities. Such networks can be represented by undirected or directed graph structures such as those illustrated in Figure 1a,b. Many flow networks will carry flows that are driven by or associated with differences in a physical field or potential, while in others, the demand is governed by parameters other than potentials. The concept of a flow network thus provides a unifying theoretical framework for the analysis of networks from many disparate disciplines, e.g., pipe flow, fluid flow, electrical, chemical reaction, ecological, epidemiological, neurological, communications, transportation, financial, economic and human social networks. Consider, for example, two types of flow network of importance to human society: (i) pipe flow networks used for the distribution of potable water and natural gas in populated areas, e.g., [32,33], and (ii) transportation networks used by motor vehicles, e.g., [15,16,17,18,19]. In the latter case, the network can have a physical manifestation such as a road or rail network, or could be a virtual (topological) network represented by aircraft or shipping routes. A third example of increasing interest is that of epidemiological networks, which govern the spread of disease, e.g., [34,35]; these commonly require interconnections between networks of different character, or “networks of networks” [28,29,30,31].
The aim of this study is to present a general framework for the analysis of any flow network based on the maximum entropy (MaxEnt) method [36,37,38,39,40,41], the fundamental tool of statistical physics. The method is founded on the generic (probabilistic or information-theoretic) definition of entropy, a measure of the spread of a probability distribution, which in thermodynamics has a deep connection to observable irreversible change. In the MaxEnt method, the user maximizes the entropy of a system, subject to any physical constraints, to infer its state, expressed in the form of a probability distribution. This can be post-processed to extract additional information on the system, such as its statistical moments. The power of the MaxEnt method lies in its ability to infer the state of an underconstrained (underdetermined) dynamical system, i.e., one for which the number of unknowns exceeds the number of equations. In many situations it also enables a substantial reduction in model order. The classical example of this utility is demonstrated by chemical thermodynamics, for which the MaxEnt method generally furnishes a 23-order-of-magnitude reduction compared to the underlying (molecular or canonical) dynamical system. In addition to its outstanding successes in statistical thermodynamics, over the past few decades the MaxEnt method has provided new insights in many other fields of research, throughout all branches of science, engineering, mathematics and the social sciences [38,41,42,43].
We first sketch the relationship between the present framework and conventional thermodynamics: whereas in many problems of physics, a system will consist of a discrete or continuous set of (macro)states embedded in a continuous (physical or phase) space, in a flow network the geometry is partly discretized. Adopting a statistical physics formulation, we assign state variables to the flow rates through each internal and external link of the network, and also (if defined) to the potentials at each node. A network ensemble can then be conceptualized by considering all copies of the system at different times or with different initial conditions, as is usual in statistical physics. The (generic) entropy of the system can then be defined using the Boltzmann equation $\mathbb{H} = \ln \mathbb{P}$ [44,45,46], where $\mathbb{P}$ is the governing probability distribution, which in this case expresses the allocation of entities (flow rate and potential state variables) to states (their possible values). For discrete, distinguishable entities and states, a multinomial distribution results: $\mathbb{P} = N! \prod_{i} q_i^{n_i}/n_i!$, where $n_i$ entities are allocated to the $i$th state, $q_i$ is the prior probability of allocating an entity to the $i$th state, and $N$ is the total number of entities. Here we note that the prior probability $q_i$ represents the $i$th state of the “unconstrained” system, and is needed to account for states of unequal weighting (degeneracy). In the asymptotic limits $N \to \infty$ and $n_i/N \to p_i$, this gives the relative entropy $H = -\sum_i p_i \ln(p_i/q_i)$ [38,47,48]. The network will also be subject to a variety of constraints, in the form of specified values of various statistical moments, physical laws and network properties; these must be incorporated into the optimization procedure. Maximizing the relative entropy $H$, subject to the constraints, is then equivalent to maximizing $\mathbb{P}$ (in the asymptotic limit) subject to the same constraints, to give the most probable state of the network.
From the information-theoretic perspective, this can also be interpreted as the least informative description of the network, or that which contains the least information subject to the constraints [36,37,38,41]. In the present study, for mathematical convenience we consider continuous flows on a discretized network, giving a continuous, multivariate form of the above apparatus.
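The asymptotic convergence of the multinomial probability to the relative entropy can be checked numerically. The following sketch, using a hypothetical two-state example (prior and occupation fractions chosen for illustration, not taken from this paper), compares $\ln \mathbb{P}/N$ against $H = -\sum_i p_i \ln(p_i/q_i)$ for increasing $N$:

```python
import math

# Hypothetical two-state example: prior q_i and occupation fractions p_i
q = [0.5, 0.5]
p = [0.8, 0.2]

def log_multinomial(N, q, p):
    """ln P for the multinomial P = N! * prod_i q_i^{n_i} / n_i!,
    with occupation numbers n_i = round(p_i * N)."""
    n = [round(pi * N) for pi in p]
    logP = math.lgamma(N + 1)
    for ni, qi in zip(n, q):
        logP += ni * math.log(qi) - math.lgamma(ni + 1)
    return logP

# Relative entropy H = -sum_i p_i ln(p_i / q_i), the asymptotic limit
H = -sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

for N in (10, 100, 10000):
    print(N, log_multinomial(N, q, p) / N, H)
# As N grows, ln(P)/N approaches H (here H is approximately -0.1927),
# with Stirling corrections of order (ln N)/N.
```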
This work is set out as follows. In Section 2, we define the generalized flow network under consideration, including a standard set of specifications applicable to potential-flow and transportation networks. In Section 3, we provide a general formulation for the probability distribution, entropy and constraints for this standard network, and work through the operation of the MaxEnt method to its main findings. For this we examine the handling of nonlinear moment constraints and interdependencies, not usually examined under the MaxEnt framework. We also discuss the role of time in the MaxEnt formulation. In Section 4, we discuss the application of the method to a variety of flow networks. A brief comparison between the MaxEnt method and Bayesian inference is provided in Section 5. The conclusions are then summarized in Section 6.

2. Network Specifications

Consider a generalized flow network such as one of those represented in Figure 1a,b. For maximum generality, we consider all possible undirected graphs (with unspecified flow directions), directed graphs or digraphs (with fixed flow directions), simple graphs (no self-loops or multiple connections), loop-graphs (with self-loops) and multigraphs (with multiple connections). The multigraphs can be further classified into undirected and directed forms, and those without or with loops. Weighted graphs are treated as continuous forms of multigraphs. We do not specifically discuss bipartite graphs or networks of networks, but these can be analyzed by extension of the formulation given here.
We first consider a flow network defined by the following set of quantitative specifications, here termed the “standard set”:
(1)
A set of $N \in \mathbb{N} = \mathbb{N}_0 \setminus \{0\}$ vertices (nodes), with index $i$ or $j$.
(2)
A set of $M \in \mathbb{N}_0$ internal edges (including loop edges) between pairs of nodes, commonly represented by an $N \times N$ adjacency matrix $\mathbf{A}$, in which each element $A_{ij}$ indicates the connectivity between nodes $i$ and $j$. On an undirected graph, $A_{ij} \in \{0, 1\}$, indicating Boolean connectivity (hence $\mathbf{A}$ is symmetric with a zero diagonal), so $\sum_{i=1}^{N} \sum_{j=1}^{N} A_{ij} = [2M, 2M - L]$ respectively for undirected simple graphs and undirected loop-graphs, where $L \in \mathbb{N}_0$ is the number of self-loops. On a digraph, $A_{ij} \in \{0, 1\}$ refers strictly to the connection from node $i$ to $j$, thus incorporating the flow direction (so $\mathbf{A}$ can be asymmetric), hence $\sum_{i=1}^{N} \sum_{j=1}^{N} A_{ij} = M$. On a multigraph, $A_{ij} \in \mathbb{N}_0$ indicates the number of edges from node $i$ to $j$, whence $\sum_{i=1}^{N} \sum_{j=1}^{N} A_{ij} = [2M, 2M - L, M]$ respectively for undirected multigraphs, undirected loop multigraphs and multidigraphs. For weighted graphs, $A_{ij} \in [0, \infty) = \mathbb{R}_0^+$ (or more generally $A_{ij} \in \mathbb{R}$), representing the weight of each edge. Double summation then gives $\sum_{i=1}^{N} \sum_{j=1}^{N} A_{ij} = [2W, 2W - W_L, W]$ respectively for undirected, undirected loop and directed weighted graphs, where $W, W_L \in \mathbb{R}_0^+$ (or more generally $W, W_L \in \mathbb{R}$) are respectively the total weight and the weight of self-loops.
(3)
A set of $P \in \mathbb{N}_0$ external edges (links) to nodes, represented by an $N$-dimensional external adjacency vector $\boldsymbol{U}$, in which each element $U_i \in \mathbb{N}_0$ indicates the number of external links to node $i$. For undirected graphs, digraphs and multigraphs, $P = \sum_{i=1}^{N} U_i$. (While not implemented here, the external links could alternatively be represented by $P$ connections to a fictitious external node, allowing all links to be united into an $(N+1) \times (N+1)$ augmented adjacency matrix $\tilde{\mathbf{A}}$ [24,33].) For a weighted graph, if the external edges are also weighted then $W_P = \sum_{i=1}^{N} U_i$, where $W_P \in \mathbb{R}_0^+$ (or more generally $W_P \in \mathbb{R}$) is the total weight of external edges.
(4)
A set of $M$ internal flow rates $Q_{ij}$, $i, j \in \{1, \ldots, N\}$, of some countable quantity $B$ from nodes $i$ to $j$ along the $ij$th edge, measured in units of $B\ \mathrm{s}^{-1}$. In general, the flow rates will be functions of time and/or the graph ensemble. For a simple graph, they can be grouped into the $N \times N$ flow rate matrix $\mathbf{Q}$, with nonexistent edges handled by assigning $Q_{ij} = 0$ or $Q_{ij} = \text{undefined}$. (Alternatively, the admissible flow rates $Q_n$ can be stacked into a flow rate vector $\boldsymbol{Q}$, for which the $ij$th and $n$th edges are connected by a lookup table [32,33].) In addition:
(a)
On an undirected graph, the non-diagonal flow rates $Q_{ij} \in \mathbb{R}$, $i \neq j$, are antisymmetric, $Q_{ij} = -Q_{ji}$, and can reverse direction. In contrast, on a digraph they are strictly nonnegative, $Q_{ij} \in \mathbb{R}_0^+$, and need not be correlated, $Q_{ij} \neq Q_{ji}$. We recognize that in all networks, the flows are quantized (especially significant for transport networks), but for maximum generality, we here adopt the continuum assumption with real-valued flow rates.
(b)
A graph with self-loops can have $Q_{ii} \in \mathbb{R}_0^+$, i.e., $Q_{ii} \geq 0$. Such flows are incompatible with potential flow networks, but can be realized in other systems such as transport networks.
(c)
On a multigraph, there may be up to $K = \max(A_{ij}) \in \mathbb{N}_0$ connections from node $i$ to $j$, with flow rates labelled $Q_{ij}^{(k)}$ for $k \in \{1, \ldots, K\}$. These can be united into the $N \times N \times K$ third-order matrix $\mathbf{Q}$, assigning $Q_{ij}^{(k)} = 0$ or $Q_{ij}^{(k)} = \text{undefined}$ for nonexistent edges. If summed along the $k$th direction, this gives an $N \times N$ total internal flow rate matrix $\hat{\mathbf{Q}}$, with entries $\hat{Q}_{ij} = \sum_{k=1}^{K} Q_{ij}^{(k)}$.
(d)
For networks with alternating electrical currents, it is convenient to consider complex flow rates (phasors) $Q_{ij}^{(k)} \in \mathbb{C}$, commonly expanded as $Q_{ij}^{(k)} = I_{ij}^{(k)} = |I_{ij}^{(k)}| \exp(\imath \psi_{ij}^{(k)})$, where $|I_{ij}^{(k)}|$ is the current amplitude, $\psi_{ij}^{(k)}$ is the phase and $\imath = \sqrt{-1}$.
(e)
For multidimensional flows, we define $Q_{ij}^{(k)c}$ as the flow of species $c \in \{1, \ldots, C\}$ on the $ij(k)$th edge, where $C$ is the total number of species (e.g., cars, trucks and buses on a road network, or independent chemical species on a chemical reaction network). These can be collated into the $N \times N \times K \times C$ fourth-order matrix $\mathbf{Q}$, which if summed along the $k$th direction gives the $N \times N \times C$ total internal flow rate matrix $\hat{\mathbf{Q}}$. For brevity, we refer to flows of a $C$-dimensional vector quantity $\boldsymbol{B}$ with components $B_c \in \{B_1, \ldots, B_C\}$.
(f)
In some flow networks, a given edge into a node may be connected only to some outward edges (e.g., certain traffic junctions in cities, or protected nodes in electrical networks). Such nodes must be treated as graphical objects in their own right, leading to embedded networks-of-networks. Such complications, while important, are not examined further here.
(5)
A set of $P$ external flow rates $\Theta_i^{(m)} \in \mathbb{R}$, $i \in \{1, \ldots, N\}$, $m \in \{1, \ldots, O\}$, where $O = \max(U_i)$, denoting flow on the $m$th external link to node $i$, here defined positive if an inward flow. These flow rates are also measured in units of $B\ \mathrm{s}^{-1}$, and can be grouped into the $N \times O$ matrix $\boldsymbol{\Theta}$, again assigning $\Theta_i^{(m)} = 0$ or $\Theta_i^{(m)} = \text{undefined}$ for nonexistent external links. Furthermore:
(a)
For networks with a maximum of one external link to each node ($O = 1$), we need only consider the $N$-vector $\boldsymbol{\Theta}$ of external flow rates $\Theta_i$.
(b)
For alternating current networks, we consider complex external flow rates (phasors) $\Theta_i^{(m)} = |\Theta_i^{(m)}| \exp(\imath \eta_i^{(m)}) \in \mathbb{C}$, where $|\Theta_i^{(m)}|$ is the external current amplitude and $\eta_i^{(m)}$ the phase.
(c)
For multidimensional flows, we consider the flow rate $\Theta_i^{(m)c}$ of each species $c$, which can be assembled into the $N \times O \times C$ external flow rate matrix $\boldsymbol{\Theta}$. This can be summed along the $m$ direction to give the $N \times C$ total external flow rate matrix $\hat{\boldsymbol{\Theta}}$, with entries $\hat{\Theta}_i^c = \sum_{m=1}^{O} \Theta_i^{(m)c}$.
(6)
For potential flow systems: a set of $N$ potentials $E_i \in \mathbb{R}$ at each node $i \in \{1, \ldots, N\}$, united into the $N$-dimensional vector $\boldsymbol{E}$. For alternating current networks, we consider complex potentials (phasors) $E_i \in \mathbb{C}$, commonly written as $E_i = V_i = |V_i| \exp(\imath \phi_i)$, where $|V_i|$ is the electrical potential amplitude and $\phi_i$ is the phase. For multidimensional flows, we consider the independent potentials $E_i^c$, assembled into the $N \times C$ matrix $\mathbf{E}$.
(7)
For potential flow systems: a set of $M$ potential losses (negative differences), on a simple graph given by:
$\mathcal{E}_{ij} = -\Delta E_{ij} = E_i - E_j, \quad \forall i, j \qquad (1)$
These are generally interpreted as driving forces for the flow rates $Q_{ij}$, measured in units of a difference or gradient in the intensive variable conjugate to $B$. Examples include losses in pressure, electrical potential, temperature (or reciprocal temperature) and chemical potential (or chemical potential divided by temperature). For simple graphs, these can be assembled into the $N \times N$ matrix $\boldsymbol{\mathcal{E}}$. In a multigraph, the potential losses are independent of subindex $k$:
$\mathcal{E}_{ij}^{(k)} = \mathcal{E}_{ij}, \quad \forall i, j, k \qquad (2)$
but it is useful computationally to retain all terms in an $N \times N \times K$ matrix $\boldsymbol{\mathcal{E}}$. For multicomponent flows, the potential losses $\mathcal{E}_{ij}^{(k)c} = -\Delta E_{ij}^{(k)c} = E_i^c - E_j^c$ give the $N \times N \times K \times C$ matrix $\boldsymbol{\mathcal{E}}$.
(8)
For potential flow systems: a set of $M$ resistance or constitutive relations for each edge, which for purely local dependencies can be written as:
$\mathcal{E}_{ij}^{(k)} = -\Delta E_{ij}^{(k)} = R_{ij}^{(k)}(Q_{ij}^{(k)}), \quad \forall i, j, k \qquad (3)$
where $R_{ij}^{(k)}$ is the $ij(k)$th resistance relation. For example, in electrical circuits, (3) gives the linear relations $\mathcal{E}_{ij}^{(k)} = R_{ij}^{(k)} Q_{ij}^{(k)}$ (Ohm’s law), where $R_{ij}^{(k)}$ is the resistance (for direct currents) or impedance (for alternating currents) of the $ij(k)$th edge. In pipe flow networks, (3) is often written as the power-law relation $\mathcal{E}_{ij}^{(k)} = X_{ij}^{(k)} Q_{ij}^{(k)} |Q_{ij}^{(k)}|^{\alpha - 1}$ (Blasius’ law), where $X_{ij}^{(k)}$ is a parameter and $1 \leq \alpha \leq 2$ is a coefficient [49,50], or can be expressed in more complicated forms such as the Colebrook equation [51]. Some resistance relations may not be functions: i.e., they may allow multiple solutions. For multidimensional flows, (3) applied to the $c$th pair of flow rates and potential losses gives:
$\mathcal{E}_{ij}^{(k)c} = R_{ij}^{(k)c}(Q_{ij}^{(k)c}), \quad \forall i, j, k, c \qquad (4)$
In multidimensional flows, there is also the possibility of cross-phenomenological transport processes, e.g., of thermodiffusive, thermoelectric, electrokinetic, electroosmotic or galvanomagnetic phenomena, e.g., [52,53,54,55,56]. These are usually represented by linear Onsager relations, valid close to equilibrium, of the form:
$\mathcal{E}_{ij}^{(k)b} = \sum_{c=1}^{C} R_{ij}^{(k)bc} Q_{ij}^{(k)c}, \quad \forall i, j, k, b \qquad (5)$
in which $R_{ij}^{(k)bc}$ is the $bc$th phenomenological resistance, which will satisfy reciprocity $R_{ij}^{(k)bc} = R_{ij}^{(k)cb}$. However, in general these will require the (nonlinear) relations:
$\mathcal{E}_{ij}^{(k)b} = \sum_{c=1}^{C} R_{ij}^{(k)bc}(Q_{ij}^{(k)c}), \quad \forall i, j, k, b \qquad (6)$
in which $R_{ij}^{(k)bc}$ is now the $bc$th phenomenological resistance function. The connections between reciprocal functions may be complicated. Equations (3)–(6) can be assembled into the function:
$\boldsymbol{\mathcal{E}} = \mathbf{R}(\mathbf{Q}) \qquad (7)$
where $\mathbf{R}$ is an $N \times N \times K \times C$ resistance operator, which in general may contain non-local effects, cross-phenomenological effects and other dependencies.
Combining the above points, we consider a flow network defined by the standard set of specifications:
$\mathcal{S} = \{N, M, P, K, O, \mathbf{A}, \boldsymbol{U}, C, \boldsymbol{B}, \mathbf{Q}, \boldsymbol{\Theta}, \boldsymbol{E}, \boldsymbol{\mathcal{E}}, \mathbf{R}\} \qquad (8)$
which from (1)–(7) and various other constraints—to be discussed—will involve some functional interdependencies. It is convenient here to amalgamate all graph properties into the discrete function:
$\mathcal{G} = \mathcal{G}(N, M, P, K, O, \mathbf{A}, \boldsymbol{U}) \qquad (9)$
and also to remove the redundancy between potentials and potential differences, to give the set:
$\mathcal{S} = \{\mathcal{G}, C, \boldsymbol{B}, \mathbf{Q}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \boldsymbol{\mathcal{E}}, \mathbf{R}\} \qquad (10)$
where $\boldsymbol{E}^*$ is a vector of reference potentials $E_c^*$ at a nominated node.
We note that many networks—especially in transportation—do not have potentials, potential differences or resistances. Such networks can be examined using the above standard set (10), with the demand handled by the use of cost or travel time functions instead of potentials, in many cases augmented by various strategies or optimization schemes for route selection [15,16,18].
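To make the standard set concrete, the following minimal sketch (with assumed potentials and resistances, not taken from this paper) instantiates a three-node undirected potential-flow network with linear Ohm-type resistance relations, and checks that the potential losses drive antisymmetric internal flows whose nodal imbalances define the external flow rates:

```python
# Minimal sketch (assumed values): 3-node undirected potential-flow network
# with linear resistance relations E_i - E_j = R_ij * Q_ij.
N = 3
E = [2.0, 1.0, 0.0]                          # nodal potentials E_i (assumed)
R = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 4.0}  # edge resistances (assumed)

Q = [[0.0] * N for _ in range(N)]            # internal flow rate matrix
for (i, j), r in R.items():
    Q[i][j] = (E[i] - E[j]) / r              # potential loss drives the flow
    Q[j][i] = -Q[i][j]                       # antisymmetry on an undirected graph

# External flow rates Theta_i (positive if inward): at steady state each
# node's external injection balances its net internal outflow.
Theta = [sum(Q[i][j] for j in range(N)) for i in range(N)]
print("Theta =", Theta)                      # [1.5, -0.5, -1.0]
print("total =", sum(Theta))                 # 0.0: the quantity B is conserved
```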

3. Maximum Entropy Analysis

3.1. Overview

We now consider the analysis of flow networks by the maximum entropy (MaxEnt) method. This always consists of the following steps [38]:
(1)
Definition of a joint probability and/or probability density function (pdf), to quantify all uncertainties in the specification of a given system;
(2)
Definition of a relative entropy function for the network, based on this probabilistic representation;
(3)
Incorporation of information about the a priori distribution of the system over its parameter space, in the form of a prior probability and/or pdf;
(4)
Encoding of other background information, for example any physical laws or known parameter values, in the form of constraints;
(5)
Maximization of the relative entropy function, subject to the prior and constraints, to predict the state of the system.
Philosophically, the MaxEnt algorithm has been shown to select the most probable description of the system [44,45,46], or its least informative description [36,37,38,41], consistent with the constraints. This is generally interpreted as the stationary state of the system, equivalent to its “typical” or “most representative” state. In typical thermodynamic systems involving isolated systems or canonical ensembles—for which the probabilities and constraints are defined over the contents of the system or the ensemble—the stationary state is interpreted as the equilibrium position of the system, e.g., [57,58,59,60]. However, in forced flow systems such as flow networks—in which the probabilities and constraints are defined over constant flow rates or fluxes into or through part of the system—the stationary state must instead be interpreted as the steady state of the system [61,62,63]. (We note that the term “steady-state” is somewhat misleading, since it refers only to the mean flow and not its fluctuations; a steady-state flow need not be steady in time, only in the mean.) More elaborate formulations involving time-dependent constraints are discussed further in Section 3.6.
Each step of the MaxEnt framework, applied to the analysis of flow networks, is discussed in turn.
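The five steps above can be illustrated on a small discrete system. In this sketch (the states, prior and constraint value are illustrative assumptions, not from the paper), maximizing $H = -\sum_n p_n \ln(p_n/q_n)$ subject to normalization and a single mean constraint yields the Gibbs form $p_n \propto q_n e^{-\lambda n}$, with the Lagrange multiplier $\lambda$ found numerically:

```python
import math

states = [0, 1, 2, 3]              # step 1: parameter space of the unknown
q = [0.25, 0.25, 0.25, 0.25]       # step 3: uniform prior (assumed)
target_mean = 2.1                  # step 4: moment constraint <n> = 2.1 (assumed)

def maxent_dist(lam):
    """Maximizing H subject to normalization and <n> = v gives the
    Gibbs form p_n proportional to q_n * exp(-lam * n)."""
    w = [qi * math.exp(-lam * n) for n, qi in zip(states, q)]
    Z = sum(w)                     # partition function
    return [wi / Z for wi in w]

def mean(p):
    return sum(n * pn for n, pn in zip(states, p))

# Step 5: solve for lam by bisection; the mean decreases monotonically in lam
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean(maxent_dist(mid)) > target_mean:
        lo = mid
    else:
        hi = mid
p = maxent_dist(0.5 * (lo + hi))
print(p, mean(p))                  # inferred distribution satisfies the constraint
```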

3.2. Uncertainty and Probabilistic Representation

A flow network is always subject to some uncertainty associated with a lack of knowledge of its specification. For example, there may be uncertainty in the known values of its instantaneous flow rates Q and potential differences Δ E ; in its controlling external flows Θ ; in its more fundamental properties such as the resistance functions R ; or even concerning the existence of specific internal or external links (represented by A or U ) or the number of nodes N. The maximum entropy (MaxEnt) method of Jaynes provides a rigorous technique with which to predict the state of the network, despite these inherent uncertainties in its specification.
We must first represent the uncertainties in quantitative form. Following the well-established line of reasoning developed by Bayes [64], Laplace [65] and many others, extended by Polya [66,67], Cox [68] and Jaynes [38], the uncertainty (or degree of belief) in a discrete parameter should be expressed as a probability. Writing $\Upsilon_n$ for a discrete random variable which takes values $n$ over the domain $\Omega_n \subseteq \mathbb{N}$ or $\Omega_n \subseteq \mathbb{Z}$, this probability is defined by:
$p(n|I) = \mathrm{Prob.}(\Upsilon_n = n \,|\, I) \qquad (11)$
Some authors refer to this as a probability mass function (pmf) [69]. Equation (11) is conditioned on the background information $I$, which contains all that is known from the problem specification (including the parameter space, prior and constraints). Equation (11) will satisfy normalization, $\sum_{n \in \Omega_n} p(n|I) = 1$.
Alternatively, for a real parameter $x$, the uncertainty should be expressed as a pdf. For a continuous random variable $\Upsilon_x$ which takes values $x$ over the domain $\Omega_x \subseteq \mathbb{R}$, this is defined by the probability:
$p(x|I)\, dx = \mathrm{Prob.}(x \leq \Upsilon_x \leq x + dx \,|\, I) \qquad (12)$
again conditioned on some background information $I$. This will again satisfy normalization $\int_{\Omega_x} p(x|I)\, dx = 1$. Extending this to an $s$-dimensional continuous vector or matrix quantity $\boldsymbol{x}$, we can write the joint probability:
$p(\boldsymbol{x}|I)\, d\boldsymbol{x} = \mathrm{Prob.}(\boldsymbol{x} \leq \Upsilon_{\boldsymbol{x}} \leq \boldsymbol{x} + d\boldsymbol{x} \,|\, I) = \mathrm{Prob.}(x_1 \leq \Upsilon_{x_1} \leq x_1 + dx_1, \ldots, x_s \leq \Upsilon_{x_s} \leq x_s + dx_s \,|\, I) \qquad (13)$
in which $d\boldsymbol{x} = \prod_{i=1}^{s} dx_i$ is the product of differentials of all terms. We can also consider a joint probability containing both discrete and continuous variables, thus invoking a mixed probability mass and density function, for simplicity also referred to here as a pdf.
Accordingly, the uncertainty in a flow network defined by the standard set of specifications (10) is given by the joint probability:
$p(\mathcal{S}|I) = p(\mathcal{G}, C, \boldsymbol{B}, \mathbf{Q}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \boldsymbol{\mathcal{E}}, \mathbf{R} \,|\, I)\, d\mathbf{Q}\, d\boldsymbol{\Theta}\, d\boldsymbol{E}^*\, d\boldsymbol{\mathcal{E}}\, d\mathbf{R} \qquad (14)$
This definition accounts for the fact that G , C and B are discrete, while the other variables are continuous. Equation (14) is again conditioned on the background information I. In (14), the configurational specifications are listed first, but this does not imply any priority order. If some parameters are not required, e.g., if there is no need to consider potentials or potential differences (such as in a transport network), these can be omitted from (14).
In consequence, for any given problem involving a flow network, we can write a joint pdf to describe its uncertainty over the set of unknown parameters, conditional on those which are known. For example, if the network configuration, flow species, external flow rates, reference potentials and resistance functions are known, but there is uncertainty over the internal flow rates and potential differences, we adopt the joint conditional pdf:
$p(\mathbf{Q}, \boldsymbol{\mathcal{E}} \,|\, \mathcal{G}, C, \boldsymbol{B}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \mathbf{R}, I) = \dfrac{p(\mathcal{G}, C, \boldsymbol{B}, \mathbf{Q}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \boldsymbol{\mathcal{E}}, \mathbf{R} \,|\, I)}{p(\mathcal{G}, C, \boldsymbol{B}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \mathbf{R} \,|\, I)} \qquad (15)$
which as indicated, is equivalent to the original pdf (14) divided by the pdf of known parameters. The latter is defined by:
$p(\mathcal{G}, C, \boldsymbol{B}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \mathbf{R} \,|\, I) = \int_{\Omega_{\mathbf{Q}}} \int_{\Omega_{\boldsymbol{\mathcal{E}}}} p(\mathcal{G}, C, \boldsymbol{B}, \mathbf{Q}, \boldsymbol{\Theta}, \boldsymbol{E}^*, \boldsymbol{\mathcal{E}}, \mathbf{R} \,|\, I)\, d\mathbf{Q}\, d\boldsymbol{\mathcal{E}} \qquad (16)$
i.e., by marginalization (summation or integration) of the original pdf (14) over the unknown parameters. For simplicity, in the following we adopt the notation:
$p(\boldsymbol{f} \,|\, \boldsymbol{g}, I) = \dfrac{p(\boldsymbol{f}, \boldsymbol{g} \,|\, I)}{p(\boldsymbol{g} \,|\, I)} = \dfrac{p(\boldsymbol{f}, \boldsymbol{g} \,|\, I)}{\int_{\Omega_{\boldsymbol{f}}} p(\boldsymbol{f}, \boldsymbol{g} \,|\, I)} \qquad (17)$
where $\boldsymbol{f}$ and $\boldsymbol{g}$ respectively denote the sets of unknown and known parameters and operators, while $\int_{\Omega_{\boldsymbol{f}}}$ indicates summation or integration, as appropriate, with respect to the discrete or continuous parameters within $\Omega_{\boldsymbol{f}}$, the differentials being implicit [69].
Based on the above analysis, we can therefore speak of the level of description of a flow network, given by the sets of unknown and known parameters and operators in its specifying pdf. At the lowest level of description, only the background information I is known, giving the pdf (14). At the next few levels, successively more information will be included, for example on its graphical properties G . At higher levels, knowledge of other features, such as the resistance functions R , will also be included. The flow network specified by (15) is posed at an even higher level of description. At the highest level of description, a flow network can be said to be fully specified; i.e., its description—including the prior and constraints embedded within I—furnishes sufficient information to fully determine all properties, including all flow rates within each edge and (if included) all potentials at each node. At this level, there is no need to invoke a probabilistic formulation, since the problem is now deterministic.
For some networks, the conditional pdf (17) might be separable into distinct sums and/or integrals over particular parameters or subsets, allowing dramatic simplification. However, the above probabilistic representation will always remain valid, regardless of whether such separability is possible.
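The conditioning-by-marginalization construction above (the joint pdf divided by the marginal of the known parameters) can be sketched on a toy joint pmf over one unknown and one known discrete parameter; the probability values are arbitrary illustrations, not from the paper:

```python
# Toy joint pmf p(f, g) over an unknown f and a known g (values arbitrary)
p_joint = {
    ('f1', 'g1'): 0.20, ('f1', 'g2'): 0.10,
    ('f2', 'g1'): 0.30, ('f2', 'g2'): 0.40,
}

def marginal_g(g):
    """p(g|I): marginalize the joint pmf over the unknown f."""
    return sum(pr for (f, gg), pr in p_joint.items() if gg == g)

def conditional(f, g):
    """p(f|g, I) = p(f, g|I) / p(g|I)."""
    return p_joint[(f, g)] / marginal_g(g)

print(conditional('f1', 'g1'))   # 0.2 / 0.5 = 0.4
print(conditional('f2', 'g1'))   # 0.3 / 0.5 = 0.6
# The conditional pmf is normalized over the unknown parameter:
print(conditional('f1', 'g1') + conditional('f2', 'g1'))  # 1.0
```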

3.3. Entropy and Prior Probabilities

The relative entropy of a flow network is defined over all uncertainties in the network, and therefore follows directly from its probabilistic representation (17). From the Boltzmann approach discussed in the Introduction, we adopt the multivariate relative entropy (negative Kullback–Leibler divergence [47]), in shorthand form:
$H = -\int_{\Omega_{\boldsymbol{f}}} p(\boldsymbol{f} \,|\, \boldsymbol{g}, I) \ln \dfrac{p(\boldsymbol{f} \,|\, \boldsymbol{g}, I)}{q(\boldsymbol{f} \,|\, \boldsymbol{g}, I)} \qquad (18)$
where q ( f | g , I ) is the joint prior pdf or simply the prior, defined at the same level of description as the problem specification (17). The sums and/or integrals in (18) are evaluated over the domain Ω f of all unknown parameters.
The prior represents one part of the background information I, providing a probabilistic baseline or “reference distribution” for the flow network in the absence of constraints. It thereby accounts for different a priori weights of the parameter values. Indeed, for any continuous parameters, to ensure dimensional consistency and scale invariance, the prior cannot be omitted [37,38].
In many MaxEnt analyses, the prior is assumed constant by default, equivalent to adopting the improper prior $q = 1$ [38]. If, however, one knows that a random variable is not distributed uniformly over its parameter space—if it is more probable, a priori, for certain values or subdomains to be occupied—then this information must be included in the prior. In thermodynamics, this is referred to as the degeneracy. Similar (opposite) considerations apply to less probable values or domains, or those which must be strictly excluded. Sometimes these restrictions can be formulated as strict constraints rather than imposed within the prior. An illustrative example is the adjacency matrix $\mathbf{A}$ of an $N$-node multigraph, which is drawn from the domain $\Omega_{\mathbf{A}} = \mathbb{N}_0^{N \times N}$. In the absence of a prior or some other restriction, the MaxEnt algorithm will assign equal weightings to all graphs within this ensemble, i.e., will be heavily biased towards graphs with infinite numbers of edges. Another example is a constructed potential flow network such as an electrical or pipe flow network: a priori, no edge can carry an infinite flow rate, in either direction. Unless this information is encoded within a prior, the MaxEnt algorithm will be biased towards the consideration of extremely high (unphysical) flow rates. It is, therefore, crucial to incorporate any restrictions on the parameter specifications within the problem formulation, to avoid inferring a solution for a system which is far from the network of interest.
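The role of the prior can be seen in a minimal numeric sketch. With no constraints beyond normalization, maximizing the relative entropy $H = -\sum_n p_n \ln(p_n/q_n)$ simply returns $p = q$; the geometric prior below (an assumption chosen for illustration, penalizing large edge multiplicities $A_{ij} = n$) keeps the unconstrained solution normalizable, whereas a uniform prior over all of $\mathbb{N}_0$ would spread the probability without bound:

```python
import math

def relative_entropy(p, q):
    """H = -sum_n p_n ln(p_n / q_n); maximal value 0 is attained at p = q."""
    return -sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

n_max = 50                                        # truncation for the numeric check
q = [0.5 * (0.5 ** n) for n in range(n_max)]      # geometric prior (assumed)

print(relative_entropy(q, q))                     # 0.0: p = q is the unconstrained maximum
p_other = [1.0 / n_max] * n_max                   # any other distribution scores lower
print(relative_entropy(p_other, q) < 0)           # True
```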

3.4. Moment Constraints

The second component of the background information I consists of constraints upon or between parameters, in MaxEnt analysis usually expressed in the form of mathematical moments (expectations). These always include normalization of the joint pdf:
$$\langle 1 \rangle = \int_{\Omega_{\mathbf{f}}} p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} = 1$$
If the problem is formulated using several probabilities or pdfs, each must be normalized. A flow network may also be subject to linear global constraints on various functions r ( f ) of the unknown parameters—which could be scalars, vectors or matrices—summed and/or integrated with respect to all unknowns:
$$\langle \mathbf{r} \rangle = \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} = \mathbf{v}_{r}$$
where v r indicates some specified value of r. Some flow networks may also have intermediate or local constraints, in which the expectation is calculated over a lower-dimensional projection h = π ( f ) of the unknowns:
$$\langle \mathbf{r} \rangle_{\mathbf{h}} = \int_{\Omega_{\mathbf{h}} \subseteq \Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{h})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{h} = \mathbf{v}_{r}$$
These are less mathematically tractable, being functions of the remaining unknowns f \ h . Many flow networks may also be subject to nonlinear global constraints of the form:
$$\mathbf{F}(\langle \mathbf{r} \rangle) = \mathbf{F}\Bigl( \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} \Bigr) = \mathbf{0}$$
where F is a nonlinear function of the expectations of r ( f ) , whence in general $\mathbf{F}(\langle \mathbf{r} \rangle) \neq \langle \mathbf{F}(\mathbf{r}) \rangle$. It is also possible to have nonlinear local constraints, defined by
$$\mathbf{F}(\langle \mathbf{r} \rangle_{\mathbf{h}}) = \mathbf{F}\Bigl( \int_{\Omega_{\mathbf{h}} \subseteq \Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{h})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{h} \Bigr) = \mathbf{0}$$
Finally, any of the above constraints could be imposed as an inequality, for example for the linear global constraints:
$$\langle \mathbf{r} \rangle = \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} \leq \mathbf{v}_{r} \quad \text{or} \quad \langle \mathbf{r} \rangle = \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} \geq \mathbf{v}_{r}$$
In this study, both linear and nonlinear global equality constraints are examined in detail, and we also comment on inequality constraints. In principle, many other kinds of constraints can be included in the MaxEnt method, although moment constraints are best suited to extremization by the method of the calculus of variations.
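To make the machinery concrete, the following sketch (a standard dice-style toy problem, not taken from the paper) maximizes the entropy of a discrete variable with a uniform prior, subject to normalization and a single linear moment constraint; the result takes the Boltzmann form, with the multiplier found by bisection:

```python
import numpy as np

# MaxEnt over r in {1,...,6}, uniform prior, subject to <r> = v_r.
r = np.arange(1, 7, dtype=float)
v_r = 4.5  # target mean

def mean_for(eta):
    w = np.exp(-eta * r)   # Boltzmann form p* ∝ q·exp(-eta·r), q uniform
    p = w / w.sum()        # normalization fixes the partition function Z
    return p @ r

# Bisection on the multiplier eta: <r> is monotone decreasing in eta.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > v_r:
        lo = mid
    else:
        hi = mid
eta = 0.5 * (lo + hi)
w = np.exp(-eta * r)
p_star = w / w.sum()
print(p_star, p_star @ r)   # inferred pdf and its mean (≈ 4.5)
```

Since v_r exceeds the prior mean of 3.5, the multiplier is negative and the inferred pdf is tilted towards larger values of r.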
Physically, the moment constraints (20)–(21) are usually identified with the time average:
$$\bar{\mathbf{r}} = \int_{\Omega_{t}} \mathbf{r}(t)\, p(t \mid I)\, \mathrm{d}t$$
where t is the discrete or continuous time, p ( t | I ) is the probability or pdf of time and Ω t is the time domain, or alternatively the ensemble average:
$$\tilde{\mathbf{r}} = \int_{\Omega_{\rho}} \mathbf{r}(\rho)\, p(\rho \mid I)\, \mathrm{d}\rho$$
where ρ is the discrete index or continuous density of individual realizations, p ( ρ | I ) is the probability or pdf of a realization and Ω ρ is the domain of realizations. These are not the only viable choices; indeed, moment constraints can be identified with any function which satisfies the properties of a Reynolds average [70].
We therefore consider three categories of moment constraints on a flow network: (i) those arising from specified moments of various physical parameters, e.g., means, variances or higher-order moments of particular flow rates or potential differences, as defined by (20)–(21); (ii) those imposed by physical laws, in particular the conservation laws (Kirchhoff’s laws) and resistance functions; and (iii) those arising from constraints on the network properties (the graph ensemble). Philosophically, the physical laws and graphical constraints are appropriately imposed in the mean, to allow for fluctuations about their mean values; such analysis is particularly well suited to the MaxEnt framework.
Consider the latter two categories of constraints, for a flow network defined by the standard set of specifications (10). Firstly, Kirchhoff’s laws can be written as:
(1)
Kirchhoff’s first law: Applied in the mean, this states that the mean flow rate of each species c through each node i will be zero at steady state. For an undirected graph, this can be written as the sums of inflows and outflows:
$$0 = \sum_{m=1}^{O} \langle \Theta_{i}^{(m)c} \rangle + \frac{1}{2} \sum_{j=1}^{N} \sum_{k=1}^{K} \Bigl( \langle Q_{ji}^{(k)c} \rangle - \langle Q_{ij}^{(k)c} \rangle \Bigr) = \sum_{m=1}^{O} \langle \Theta_{i}^{(m)c} \rangle + \sum_{k=1}^{K} \langle Q_{ii}^{(k)c} \rangle - \sum_{j=1}^{N} \sum_{k=1}^{K} \langle Q_{ij}^{(k)c} \rangle, \quad \forall i, \forall c$$
using $Q_{ji}^{(k)c} = -Q_{ij}^{(k)c}$ except for $i = j$. On a directed graph, the flow paths into and out of each node must be counted separately:
$$0 = \sum_{m=1}^{O} \langle \Theta_{i}^{(m)c} \rangle + \sum_{j=1}^{N} \sum_{k=1}^{K} \Bigl( \langle Q_{ji}^{(k)c} \rangle - \langle Q_{ij}^{(k)c} \rangle \Bigr), \quad \forall i, \forall c$$
Reversing the sum and expectation operators, (27)–(28) can be assembled into the N × C matrix equation:
$$\langle \hat{\boldsymbol{\Theta}} \rangle + \epsilon \bigl( \langle \hat{\mathbf{Q}} \rangle^{\top} - \langle \hat{\mathbf{Q}} \rangle \bigr) \mathbf{1} = \langle \hat{\boldsymbol{\Theta}} \rangle + \epsilon\, \Phi( \langle \hat{\mathbf{Q}} \rangle ) = \mathbf{0}$$
where $\mathbf{0}$ and $\mathbf{1}$ are vectors or matrices of 0s or 1s of appropriate dimension, $\top$ denotes the transpose, $\epsilon = \frac{1}{2}$ for an undirected graph and $\epsilon = 1$ for a directed graph, and $\Phi$ is an N × C operator on $\langle \hat{\mathbf{Q}} \rangle$. Equation (29) makes use of the k-summed flow rate matrices $\langle \hat{\boldsymbol{\Theta}} \rangle$ and $\langle \hat{\mathbf{Q}} \rangle$, with the latter viewed from its N × N face, so the bracketed term accounts for the row and column sums of $\langle \hat{\mathbf{Q}} \rangle$ respectively.
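A minimal numerical check of this matrix form is possible on a hypothetical 4-node directed network with a single species (the numbers here are our own, and we read the operator Φ as Φ(Q) = (Qᵀ − Q)1, i.e., the net internal inflow at each node):

```python
import numpy as np

# Toy check of Kirchhoff's first law in matrix form:
# <Theta> + eps * Phi(<Q>) = 0, with Phi(Q) = (Q^T - Q) @ 1.
rng = np.random.default_rng(0)
N = 4
Q = rng.random((N, N))          # mean edge flow rates <Q_ij> (one species, K = 1)
np.fill_diagonal(Q, 0.0)
eps = 1.0                       # directed graph

Phi = (Q.T - Q) @ np.ones(N)    # net inflow at each node from internal edges
Theta = -eps * Phi              # external in/outflows chosen to balance each node

residual = Theta + eps * Phi    # Kirchhoff's first law in the mean
print(residual)                 # ~ zeros
```

Note that the external flows sum to zero over the whole network, as they must for global conservation.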
(2)
Kirchhoff’s second law: Applied in the mean, this states that at steady state, the mean potential losses must be in balance around each connected loop, or equivalently that the mean potentials at each node must be constant. For a flow network with multidimensional potentials, the first definition gives an expression for each loop (cycle) on the network:
$$\sum_{ij(k) \in \ell} \langle E_{ij}^{(k)c} \rangle = 0, \quad \forall \ell, \forall c$$
in which the potential losses are added in the assigned direction of ℓ. In a multigraph, (30) applies to each parallel edge k ∈ {1, …, K}, while in a directed graph, the two loop orientations (clockwise or anticlockwise) can be independent. A search algorithm is required to identify a maximal set of Λ independent loops, expressed by the N × N × K loop adjacency matrices $\mathbf{M}_{\ell}$ for ℓ ∈ {1, …, Λ}, containing elements $M_{\ell ij}^{(k)} \in \{-1, 0, 1\}$ to indicate the presence and orientation of each edge in loop ℓ. These can be stacked into the N × N × K × Λ loop adjacency matrix $\mathbb{M}$. Equation (30) can then be rewritten as the Λ × C matrix equation:
$$\mathbb{M} \oslash \langle \mathbf{E} \rangle = \sum_{\text{all } ij(k)} M_{\ell ij}^{(k)} \langle E_{ij}^{(k)c} \rangle = \mathbf{0}$$
where ⊘ is a contraction product over the common indices of two matrices, given by the sum of their element-wise products, which subsumes the vector scalar product (dot product) for vectors and the tensor scalar product (double dot product) for second order tensors. In some networks, additional terms (e.g., reservoir potentials, pump heads, electrical source potentials, etc.) may also appear in (30)–(31).
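The contraction product and loop law can be checked on a hypothetical 3-node example of our own, in which the potential differences are derived from nodal potentials, so the loop sum must vanish identically:

```python
import numpy as np

# Kirchhoff's loop law via the contraction product: the sum of element-wise
# products of a loop adjacency matrix M (entries in {-1, 0, 1}) with the mean
# potential-difference matrix <E>.
h = np.array([5.0, 3.0, 1.0])          # nodal potentials (assumed known)
E = h[:, None] - h[None, :]            # E_ij = h_i - h_j, potential drop on edge ij

M = np.zeros((3, 3))                   # loop 1 -> 2 -> 3 -> 1
M[0, 1] = M[1, 2] = M[2, 0] = 1.0      # +1: edge traversed in its assigned direction

loop_sum = np.sum(M * E)               # contraction product M ⊘ <E>
print(loop_sum)                        # 0.0 — potential drops balance around the loop
```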
Secondly, the resistance function constraints (7) can be imposed either (i) strictly, thereby eliminating the potential differences as independent variables; or (ii) in the mean, based on the nonlinear relation:
$$\langle \mathbf{E} \rangle = \mathbf{R}( \langle \mathbf{Q} \rangle )$$
In this case, E must be included as an unknown within f . We note that the form of (32), rather than $\langle \mathbf{E} \rangle = \langle \mathbf{R}(\mathbf{Q}) \rangle$, is necessary to ensure consistency between the MaxEnt and deterministic solutions for a fully determined problem, when calculated using mean potential differences and flow rates. For alternating currents in electrical systems, (32) should instead be formulated in terms of root-mean-square quantities, thus connecting the second moments, $\sqrt{\langle \mathbf{E}^{2} \rangle} = \mathbf{R}\bigl( \sqrt{\langle \mathbf{Q}^{2} \rangle} \bigr)$. Thirdly, in some networks there may be uncertainty in the number C or identities B of the flow species, necessitating their inclusion in f and the imposition of species constraints, for example:
$$\Psi( \langle C \rangle ) = \mathbf{0}$$
where Ψ is a vector or matrix operator. Finally, the graphical constraints could take many forms, depending on the problem specification. These again divide into (i) strict constraints, for example on the number of nodes or their connections, and (ii) weaker constraints such as moment constraints, for example on the expected degrees of certain nodes. The strict graphical constraints can be written as:
Γ ( G ) = 0
where Γ is a vector or matrix operator and G represents the properties of an individual graph. These relations can be included directly in the problem formulation I and do not require extremization. On the other hand, graphical moment constraints can be written in moment form as:
$$\Xi( \langle \mathbf{G} \rangle ) = \mathbf{0}$$
where Ξ is another vector or matrix operator, based on the expected properties $\langle \mathbf{G} \rangle$ of the graph ensemble. We see that (34) defines a “microcanonical graph ensemble”, whereas (35) can be interpreted as a “canonical graph ensemble”. In either case (34)–(35), the loop adjacency matrix $\mathbb{M}$ will be a function of the graph properties, respectively $\mathbf{G}$ or $\langle \mathbf{G} \rangle$. In more complicated networks, there may also be coupling constraints between the graphical properties and physical parameters.
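Returning to the resistance-function constraint above: for a nonlinear R, the two candidate formulations—resistance applied to the mean flow rate versus the mean of the resistance law—genuinely differ whenever Q fluctuates, by Jensen's inequality. A Monte Carlo sketch with an assumed quadratic (turbulent-flow-like) law R(Q) = r·Q·|Q| and our own illustrative numbers:

```python
import numpy as np

# For nonlinear R, R(<Q>) and <R(Q)> differ under fluctuations in Q.
rng = np.random.default_rng(1)
r = 2.0                                              # toy resistance coefficient
Q = rng.normal(loc=1.0, scale=0.5, size=1_000_000)   # fluctuating flow rate

R_of_mean = r * Q.mean() * abs(Q.mean())   # R(<Q>): resistance law applied to the mean
mean_of_R = np.mean(r * Q * np.abs(Q))     # <R(Q)>: expectation of the loss

print(R_of_mean, mean_of_R)  # differ; the gap grows with the variance of Q
```

For mostly positive flows the law is convex, so the expected loss exceeds the loss at the mean flow; this is precisely why the choice of formulation matters for consistency with the deterministic solution.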

3.5. Extended MaxEnt Algorithm

We now provide an extended formulation of Jaynes’ MaxEnt algorithm, based on the relative entropy (18) subject to normalization (19), linear (20) and nonlinear (22) global constraints. Collecting the latter into generic stacked vector or matrix forms $\langle \mathbf{r} \rangle = \mathbf{v}_{r}$ and $\mathbf{F}(\langle \mathbf{r} \rangle) = \mathbf{0}$ respectively, we write the Lagrangian [36,37,38,41]:
$$\mathcal{L} = -\int_{\Omega_{\mathbf{f}}} p(\mathbf{f} \mid \mathbf{g}, I) \ln \frac{p(\mathbf{f} \mid \mathbf{g}, I)}{q(\mathbf{f} \mid \mathbf{g}, I)}\, \mathrm{d}\mathbf{f} - \kappa \Bigl( \int_{\Omega_{\mathbf{f}}} p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} - 1 \Bigr) - \boldsymbol{\eta} \oslash \Bigl( \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f} - \mathbf{v}_{r} \Bigr) - \boldsymbol{\zeta} \oslash \mathbf{F}( \langle \mathbf{r} \rangle )$$
where κ , η and ζ are Lagrangian multipliers respectively for each type of constraint, of compatible order and dimension to enable contraction into scalars. Applying the calculus of variations, we set the total variation of (36) to zero:
$$\delta \mathcal{L} = 0 = \int_{\Omega_{\mathbf{f}}} \frac{\partial \mathcal{L}}{\partial p(\mathbf{f} \mid \mathbf{g}, I)}\, \delta p(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f}$$
Since each $\delta p$ can be non-zero, this gives $\partial \mathcal{L} / \partial p = 0$ for each $\mathbf{f} \in \Omega_{\mathbf{f}}$. Taking the functional derivative of (36) with respect to p, making use of the chain rule:
$$\frac{\partial \mathbf{F}(\langle \mathbf{r} \rangle)}{\partial p} = \frac{\partial \mathbf{F}(\langle \mathbf{r} \rangle)}{\partial \langle \mathbf{r} \rangle} \frac{\partial \langle \mathbf{r} \rangle}{\partial p} = \frac{\partial \mathbf{F}(\langle \mathbf{r} \rangle)}{\partial \langle \mathbf{r} \rangle}\, \mathbf{r}(\mathbf{f})$$
followed by normalization (19) gives the inferred pdf:
$$p^{*}(\mathbf{f} \mid \mathbf{g}, I) = \frac{q(\mathbf{f} \mid \mathbf{g}, I)}{Z} \exp \Bigl[ -\boldsymbol{\eta} \oslash \mathbf{r}(\mathbf{f}) - \boldsymbol{\zeta} \oslash \frac{\partial \mathbf{F}(\langle \mathbf{r} \rangle)}{\partial \langle \mathbf{r} \rangle}\, \mathbf{r}(\mathbf{f}) \Bigr]$$
where Z = exp ( κ + 1 ) is the partition function. Equation (39) is an extended (nonlinear) form of the Boltzmann distribution, which can be solved in conjunction with the constraints to give p * , Z, η and ζ . Once the pdf (39) is available, the nth moment of any other uncertain parameter a ( f ) can also be calculated:
$$\langle a(\mathbf{f})^{n} \rangle = \int_{\Omega_{\mathbf{f}}} a(\mathbf{f})^{n}\, p^{*}(\mathbf{f} \mid \mathbf{g}, I)\, \mathrm{d}\mathbf{f}$$
e.g., n = 1 gives the mean, while n ∈ {1, 2} gives the mean and variance.
We note that previous treatments of the MaxEnt method, based almost exclusively on linear constraints, give an analytic solution for p * , with the Lagrangian multipliers then computed numerically [36,37,38,39,40,41]. In this study, in which (39) contains nonlinear constraints and interdependencies, its solution will usually require an iterative computational scheme. Depending on the problem formulation, nonlinear constraints can give rise to a non-convex Lagrangian, which would preclude the use of the standard tools of convex optimization, e.g., [71], and with the possibility of multiple solutions, e.g., [72]. In the authors’ experience of water distribution, electrical and transport networks, we did not encounter significant computational difficulties or multiple solutions, although we note that the resistance function constraints (6) in such systems tend to be monotonic. We have, however, encountered examples of sharp transitions, for example in pipe flows due to the laminar-turbulent transition, e.g., [32,33], and in transport networks due to routing changes, e.g., [73]. In contrast, for a problem with nonlinear constraints and Gaussian priors, we identified an analytical solution based on matrix operations [74]. We note that a general numerical algorithm for MaxEnt analysis based on nonlinear constraints is not available, and that it may be necessary to implement a tailored solution for a given problem.
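As a sketch of such an iterative scheme, consider a deliberately simple toy problem of our own (not from the cited studies): two discrete unknowns (Q, E) with a uniform prior, a linear constraint pinning ⟨Q⟩, and a nonlinear "resistance-like" constraint ⟨E⟩ − r⟨Q⟩² = 0. Because ⟨Q⟩ is pinned, the fixed-point iteration on the nonlinear constraint converges at once; in general, each pass would re-solve all multipliers jointly:

```python
import numpy as np

# Unknowns (Q, E) on discrete grids, uniform prior; constraints:
#   <Q> = v_Q                       (linear)
#   F(<Q>,<E>) = <E> - r*<Q>**2 = 0 (nonlinear toy "resistance" law)
Qs = np.linspace(0.0, 3.0, 61)
Es = np.linspace(0.0, 20.0, 201)
v_Q, r = 1.2, 2.0

def solve_linear(grid, target):
    """Bisection for the multiplier eta such that <x> = target under p ∝ exp(-eta*x)."""
    lo, hi = -50.0, 50.0    # range chosen to avoid overflow for these grids
    for _ in range(200):
        eta = 0.5 * (lo + hi)
        w = np.exp(-eta * (grid - grid.mean()))   # centred for numerical stability
        m = (w / w.sum()) @ grid
        lo, hi = (eta, hi) if m > target else (lo, eta)
    w = np.exp(-eta * (grid - grid.mean()))
    return w / w.sum()

pQ = solve_linear(Qs, v_Q)            # fixes <Q> = v_Q
mean_Q = pQ @ Qs
for _ in range(5):                    # fixed-point iteration on the nonlinear constraint
    target_E = r * mean_Q**2          # F = 0  =>  <E> must equal r*<Q>**2
    pE = solve_linear(Es, target_E)

mean_E = pE @ Es
print(mean_Q, mean_E)   # ≈ 1.2 and ≈ 2.88
```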
If any inequality constraints such as (24) are included in (36), these can be handled by a Lagrangian multiplier which is real-valued at equality and which vanishes when the inequality applies. Usually, an inequality test is implemented at each iteration, enabling the inequality Lagrangian multipliers to be switched on or off as appropriate, e.g., [32]. For some problems, inequality constraints can also be handled using integration limits, e.g., [73]. If any local constraints (21) or (23) are included in (36), they and their corresponding Lagrangian multipliers will be functions of the non-marginalized variables, providing a more elaborate solution than (39), but which accords with the same mathematical framework.
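The multiplier-switching treatment of an inequality constraint can be sketched as follows (a toy example of our own): solve first without the constraint; only if the result violates the inequality is the multiplier switched on, binding the constraint at equality:

```python
import numpy as np

# Inequality constraint <x> <= v over a uniform prior on {0,...,10}.
x = np.arange(0, 11, dtype=float)
v = 3.0

p = np.full_like(x, 1.0 / len(x))   # unconstrained MaxEnt = the prior itself
if p @ x > v:                       # inequality test: prior mean 5.0 > 3.0, so activate
    lo, hi = 0.0, 100.0             # multiplier must be >= 0 for an upper bound
    for _ in range(200):
        eta = 0.5 * (lo + hi)
        w = np.exp(-eta * x)
        p = w / w.sum()
        lo, hi = (eta, hi) if p @ x > v else (lo, eta)

print(p @ x)   # ≈ 3.0: the constraint binds at equality
```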
As an example, consider a flow network defined by the standard specifications (10), subject to constraints on normalization (19), global expectation values of some physical properties (20), Kirchhoff’s node laws (29), Kirchhoff’s loop laws (31), resistance functions (32), expected number of species (33) and expected graph properties (35). (Please note that only some expected values (20) should be imposed, otherwise the problem will become overdetermined.) The Lagrangian, for brevity written in expectation form, now becomes:
$$\mathcal{L} = -\Bigl\langle \ln \frac{p}{q} \Bigr\rangle - \kappa \bigl( \langle 1 \rangle - 1 \bigr) - \boldsymbol{\lambda} \oslash \bigl( \langle \mathbf{Q} \rangle - \mathbf{v}_{Q} \bigr) - \boldsymbol{\mu} \oslash \bigl( \langle \boldsymbol{\Theta} \rangle - \mathbf{v}_{\Theta} \bigr) - \boldsymbol{\nu} \oslash \bigl( \langle \mathbf{E} \rangle - \mathbf{v}_{E} \bigr) - \boldsymbol{\alpha} \oslash \bigl( \langle \hat{\boldsymbol{\Theta}} \rangle + \epsilon\, \Phi( \langle \hat{\mathbf{Q}} \rangle ) \bigr) - \boldsymbol{\beta} \oslash \bigl( \mathbb{M}( \langle \mathbf{G} \rangle ) \oslash \langle \mathbf{E} \rangle \bigr) - \boldsymbol{\gamma} \oslash \bigl( \langle \mathbf{E} \rangle - \mathbf{R}( \langle \mathbf{Q} \rangle ) \bigr) - \boldsymbol{\delta} \oslash \Psi( \langle C \rangle ) - \boldsymbol{\phi} \oslash \Xi( \langle \mathbf{G} \rangle )$$
where λ , μ , ν , α , β , γ , δ and ϕ are Lagrangian multiplier vectors or matrices of appropriate order and dimension. Combining like terms and taking the functional derivative yields:
$$p^{*} = \frac{q}{Z} \exp \Bigl[ -\boldsymbol{\lambda} \oslash \mathbf{Q} - \boldsymbol{\mu} \oslash \boldsymbol{\Theta} - \boldsymbol{\nu} \oslash \mathbf{E} - \boldsymbol{\alpha} \oslash \hat{\boldsymbol{\Theta}} - \epsilon\, \boldsymbol{\alpha} \oslash \frac{\partial \Phi( \langle \hat{\mathbf{Q}} \rangle )}{\partial \langle \hat{\mathbf{Q}} \rangle} \oslash \hat{\mathbf{Q}} - \boldsymbol{\beta} \oslash \mathbb{M}( \langle \mathbf{G} \rangle ) \oslash \mathbf{E} - \boldsymbol{\beta} \oslash \Bigl( \frac{\partial \mathbb{M}( \langle \mathbf{G} \rangle )}{\partial \langle \mathbf{G} \rangle} \oslash \langle \mathbf{E} \rangle \Bigr) \oslash \mathbf{G} - \boldsymbol{\gamma} \oslash \mathbf{E} + \boldsymbol{\gamma} \oslash \frac{\partial \mathbf{R}( \langle \mathbf{Q} \rangle )}{\partial \langle \mathbf{Q} \rangle} \oslash \mathbf{Q} - \boldsymbol{\delta} \oslash \frac{\partial \Psi( \langle C \rangle )}{\partial \langle C \rangle}\, C - \boldsymbol{\phi} \oslash \frac{\partial \Xi( \langle \mathbf{G} \rangle )}{\partial \langle \mathbf{G} \rangle} \oslash \mathbf{G} \Bigr]$$
Depending on the network and its level of description, many of the terms in (42) may simplify further. In principle, (42) can be solved in conjunction with the constraints to give the pdf, the partition function and all Lagrangian multipliers. Since p * is implicit within each expectation, (42) will usually require numerical solution.
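For the special case of a Gaussian prior with purely linear mean constraints, the solution reduces to a standard matrix projection (consistent with the Gaussian-prior matrix formulations cited above; the network and numbers below are our own toy): with prior q = N(μ₀, Σ) and constraints A⟨f⟩ = b, the MaxEnt posterior is Gaussian with unchanged covariance and shifted mean μ* = μ₀ + ΣAᵀ(AΣAᵀ)⁻¹(b − Aμ₀):

```python
import numpy as np

# Hypothetical junction: flows f = (Q1, Q2, Q3) with node law Q1 - Q2 - Q3 = 0
# imposed in the mean, plus an observed mean inflow <Q1> = 1.
mu0 = np.zeros(3)
Sigma = np.diag([1.0, 1.0, 2.0])    # prior covariance (Q3 less certain a priori)
A = np.array([[1.0, -1.0, -1.0],    # Kirchhoff node law in the mean
              [1.0,  0.0,  0.0]])   # observable constraint on <Q1>
b = np.array([0.0, 1.0])

K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
mu_star = mu0 + K @ (b - A @ mu0)
print(mu_star)            # [1, 1/3, 2/3]: mean flows satisfying both constraints
print(A @ mu_star - b)    # ~ zeros
```

The inflow splits in proportion to the prior variances of the two outflow pipes: the a priori more uncertain pipe absorbs the larger share of the constraint adjustment.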

3.6. Role of Time

We finally comment on the role of time in MaxEnt analysis. In traditional applications in statistical mechanics and in the above formulation, the probabilities, entropy function and constraints are considered independent of time, usually invoking the ergodic hypothesis that the time- and ensemble-averages (25)–(26) are equivalent [57]. However, for many networks it may be desirable to adopt a time-based formulation, in which the probabilities, entropy function and constraints are functions of time. Explicitly, instead of (36), the Lagrangian can be written:
$$\mathcal{L}(t) = -\int_{\Omega_{\mathbf{f}}} p(\mathbf{f} \mid \mathbf{g}, t, I) \ln \frac{p(\mathbf{f} \mid \mathbf{g}, t, I)}{q(\mathbf{f} \mid \mathbf{g}, t, I)}\, \mathrm{d}\mathbf{f} - \kappa(t) \Bigl( \int_{\Omega_{\mathbf{f}}} p(\mathbf{f} \mid \mathbf{g}, t, I)\, \mathrm{d}\mathbf{f} - 1 \Bigr) - \boldsymbol{\eta}(t) \oslash \Bigl( \int_{\Omega_{\mathbf{f}}} \mathbf{r}(\mathbf{f})\, p(\mathbf{f} \mid \mathbf{g}, t, I)\, \mathrm{d}\mathbf{f} - \mathbf{v}_{r}(t) \Bigr) - \boldsymbol{\zeta}(t) \oslash \mathbf{F}( \langle \mathbf{r}(t) \rangle )$$
Philosophically, (43) makes a clear distinction between time and ensemble averaging, essential for the probabilistic representation of a time-dependent dynamical system. The suitability of the MaxEnt algorithm will now depend on the time scales of the network. For simple dynamics, in which the response times of relaxation processes within the network are significantly faster than that of the forcing, the MaxEnt algorithm will provide the “moving stationary state” of the network, or in other words its asymptotic distribution subject to the forcing. If on the other hand, the relaxation processes are significantly slower than the forcing, the network may experience periods of quiescence followed by sudden responses (phase changes or tipping points), for which it may be necessary to construct a dynamical model to meaningfully capture the dynamics. For systems with multiple time-scales or more complicated dynamics, the network response to forcing can become very complicated [21]. For such systems, the observer may need to seek insights from both maximum entropy and dynamical models, in the same way that an observer of crystallization or chemical reaction processes will seek insights from both thermodynamics and kinetics.
An alternative path-based or maximum calibre approach for the analysis of time-varying systems is to redefine the entropy and all constraints as integrals over time [75,76,77], i.e., by integrating (43) with respect to time. The stationary state of the system then becomes the description of its most probable path through time. While this approach is well-defined, with deep connections to the principle of least action [77,78], it requires a very detailed knowledge of the behaviour of the constraints over all times, which is unlikely to be available for most flow networks of interest.

4. Applications

The above MaxEnt framework (42) can be used to infer the state of any flow network constrained by mean parameter values, physical laws (such as Kirchhoff’s laws) and/or network properties. It applies to a wide variety of networks, including pipe flow, electrical, chemical reaction, communications, transportation, epidemiological, human economic and social networks. Since the method is probabilistic, it enables the probabilistic prediction of the flow and other properties of a network when there is insufficient information to obtain a deterministic solution. As noted, the mean properties (or any other moments) of the system can be inferred from the probabilistic solution, by calculating their expected values.
In recent years, variants of the above MaxEnt formulation (42) have been applied to the analysis of a variety of engineered flow networks. These include:
(1)
Analyses of pipe flow networks used for water distribution systems, incorporating constraints on normalization (19), linear and nonlinear global expectation values (20) and (22), Kirchhoff’s node and loop laws (29) and (31), and resistance functions (32) [32,33,73,74,79,80,81,82,83,84,85]. This includes development of the analytical formulation, iterative numerical methods to handle nonlinear constraints, and rapid semi-analytical (quadratic programming) schemes for partition function integration. More recently, this work was extended by the use of reduced parameter basis sets to ensure consistency regardless of network representation [33,73,81]. Research also extended to comparative analyses of different prior probability functions [32,33,73,74,79], the use of priors to encode “soft constraints” [73,74,79], and formulation of an alternative linear algebra solution method for nonlinear systems with Gaussian priors [73,74,79]. The method was also demonstrated by application to a 1123-node, 1140-pipe water distribution network from Torrens, ACT, Australia [32,33,73]. An example set of inferred water flow rates on this network is illustrated in Figure 2a; further details of this analysis are given in [32].
(2)
Analyses of electrical networks, incorporating constraints on normalization (19), global expectation values (20) and (22), Kirchhoff’s node and loop laws (29) and (31), and resistance or impedance functions (32) [73,86]. This includes analysis of a 400-node electrical network in Campbell, ACT, Australia, subject to solar forcing from distributed household photovoltaic systems. An example set of inferred electrical power flows on this network is illustrated in Figure 2b, showing flow reversals due to high solar forcing; further details of this analysis are given in [73,86].
(3)
Analyses of transport networks, reformulated in terms of trip flow rates (from origins to destinations) rather than link flow rates, and incorporating constraints on normalization (19), global expectation values (20) and (22), various forms of Kirchhoff’s node laws (29) and various cost constraints [73,87]. These include different formulations using the gravity model of transport flows, or for route selection by proportional assignment or equilibrium assignment (cost minimization) methods.
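The gravity model mentioned here admits a compact sketch (with our own toy numbers, not those of the cited studies): trip flows T_ij ∝ O_i D_j exp(−β c_ij), with balancing factors found by iterative proportional fitting so that row and column sums match the origin and destination totals:

```python
import numpy as np

# Doubly-constrained gravity model for trip flows via iterative
# proportional fitting of the balancing factors a_i, b_j.
O = np.array([100.0, 200.0])        # trips produced at each origin
D = np.array([150.0, 150.0])        # trips attracted to each destination
c = np.array([[1.0, 4.0],           # travel costs c_ij
              [3.0, 1.0]])
beta = 0.5                          # cost-sensitivity parameter

base = np.exp(-beta * c) * np.outer(O, D)
a = np.ones(2)
for _ in range(100):                # alternate row/column balancing
    b = D / (a @ base)              # fix column sums to D
    a = O / (base @ b)              # fix row sums to O
T = base * np.outer(a, b)

print(T)
print(T.sum(axis=1), T.sum(axis=0))  # ≈ O and ≈ D
```

The exponential cost factor is itself the Boltzmann form produced by a MaxEnt derivation of trip distribution, with β the multiplier on a mean travel-cost constraint.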
(4)
Derivation of graph priors for several graph ensembles [88], invoking the relative entropy:
$$H_{G} = -\sum_{\Omega_{G}} p(G \mid I) \ln \frac{p(G \mid I)}{q(G \mid I)}$$
where G is reinterpreted to represent a graph macrostate, p ( G | I ) and q ( G | I ) are the posterior and prior probabilities and Ω G is the graph ensemble. This is used in place of the Shannon entropy typically used in MaxEnt analyses of networks, e.g., [6,7,9,10,28,29,30]:
$$H_{G}^{Sh} = -\sum_{\Omega_{G}} p(G \mid I) \ln p(G \mid I)$$
in which G represents an individual graph. The use of graph priors enables a simplified accounting over graph macrostates—taking advantage of the most important feature of statistical mechanics—thereby simplifying the analysis of networks with uncertainty in the network structure.
Considerable research is still required on the full extent of the MaxEnt formulation (42) for flow networks, especially for the analysis of multidimensional flows and chemical reaction networks, the formulation of numerical algorithms for particular classes of nonlinear constraints, and the analysis of coupled flow and graphical properties due to uncertainties in both the state and structure of the network.

5. Comparison to Bayesian Inference

An alternative method for probabilistic inference is that of Bayesian inference, in which one calculates the probability of an hypothesis or model using Bayes’ rule [64,65]. In the terminology of the present analysis, this gives:
$$p(\mathbf{f} \mid \mathbf{g}, I) = \frac{p(\mathbf{g} \mid \mathbf{f}, I)\, p(\mathbf{f} \mid I)}{p(\mathbf{g} \mid I)} = \frac{p(\mathbf{g} \mid \mathbf{f}, I)\, p(\mathbf{f} \mid I)}{\int_{\Omega_{\mathbf{f}}} p(\mathbf{g} \mid \mathbf{f}, I)\, p(\mathbf{f} \mid I)\, \mathrm{d}\mathbf{f}}$$
where p ( f | g , I ) is the posterior probability or pdf, p ( f | I ) is the prior probability or pdf of f , p ( g | f , I ) is the likelihood function, and p ( g | I ) is the prior probability or pdf of g , commonly termed the evidence. This approach makes use of the Bayesian understanding of a probability as a value between 0 and 1, assigned based on what is known, which need not correspond to a measurable frequency [38]. The application of Bayesian inference to the analysis of flow networks has been studied by many authors, including comparative analyses of the MaxEnt and Bayesian approaches, e.g., [73,74,89,90]. For flow network problems involving moment constraints as formulated here, the MaxEnt method offers the advantage (usually) of simpler mathematical operations compared to the Bayesian method, in which constraints are generally incorporated in a more complicated form, such as delta functions. However, for problems which include nonlinear constraints, this advantage of the MaxEnt method may not be so prominent. For problems involving significant observational data, the Bayesian method will be the more natural choice, since it allows the incorporation of individual data points rather than summary data expressed in the form of moment constraints.
For Gaussian priors, it can be shown that the MaxEnt and Bayesian methods give the same or very similar posterior mean values, but their covariances are different, e.g., [73,74]. This arises from the different algorithms used: in the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. This suggests that the MaxEnt method could provide a numerical advantage over Bayesian inference for moment-constrained problems of the type examined here, in cases which avoid covariance terms in its integrations.
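The covariance contrast described here can be reproduced in a few lines (with toy numbers of our own, not those of [73,74]): a MaxEnt moment constraint ⟨Af⟩ = b on a Gaussian prior N(μ₀, Σ) leaves the covariance at Σ, whereas Bayesian conditioning on a noisy observation g = Af + N(0, R) contracts it to (Σ⁻¹ + AᵀR⁻¹A)⁻¹, with the two posterior means coinciding in the noise-free limit R → 0:

```python
import numpy as np

mu0 = np.zeros(2)
Sigma = np.eye(2)
A = np.array([[1.0, 1.0]])   # single linear constraint / observation operator
b = np.array([1.0])

# MaxEnt: shifted mean, unchanged covariance
mu_me = mu0 + Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T) @ (b - A @ mu0)

# Bayes: standard Gaussian conditioning with small observation noise R
R = np.array([[1e-8]])
S = np.linalg.inv(np.linalg.inv(Sigma) + A.T @ np.linalg.inv(R) @ A)
mu_b = S @ (np.linalg.inv(Sigma) @ mu0 + A.T @ np.linalg.inv(R) @ b)

print(mu_me, mu_b)                   # both ≈ [0.5, 0.5]
print(np.trace(Sigma), np.trace(S))  # MaxEnt keeps the prior spread; Bayes contracts it
```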

6. Conclusions

We present a generalized MaxEnt framework for inferring the state of a flow network—here defined as a set of nodes and links which carries one or more flows—subject to uncertainty in the parameters of interest and in the network itself. In this method, the network uncertainty is represented by a joint probability function over its unknowns, subject to all that is known. This gives a relative entropy function which is maximized, subject to the constraints, to determine the most probable or most representative state of the network. The constraints can include “observable” constraints on various parameters, “physical” constraints such as conservation laws and frictional properties, and “graphical” constraints arising from uncertainty in the network structure itself. Since the method is probabilistic, it enables the prediction of network properties when there is insufficient information to obtain a deterministic solution. For linear constraints, the MaxEnt framework is analytic, but can also handle nonlinear constraints or nonlinear interdependencies between variables, generally at the cost of requiring numerical solution. The MaxEnt formulation provided can be applied to a large variety of flow networks across many different disciplines, including pipe flow, fluid flow, electrical, chemical reaction, ecological, epidemiological, neurological, communications, transportation, financial, economic and human social networks.

Author Contributions

Conceptualization, R.K.N., M.A. and M.S.; Formal analysis, R.K.N., M.A., M.S. and S.H.W.; Methodology, R.K.N., M.A., M.S. and S.H.W.; Writing—original draft, R.K.N., M.A. and M.S.; Writing—review & editing, R.K.N., M.A., M.S. and S.H.W.

Funding

This research was funded by the Go8/DAAD Australia-Germany Joint Research Cooperation Scheme 2013-15, the Australian Research Council Discovery Projects grant DP140104402, and also supported by French sources including Institute Pprime, CNRS, (former) Région Poitou-Charentes and l’Agence Nationale de la Recherche Chair of Excellence (TUCOROM), all in Poitiers, France, and CentraleSupélec, Gif-sur-Yvette, France.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Barabasi, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef] [PubMed]
  2. Strogatz, S.H. Exploring complex networks. Nature 2001, 410, 268–276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Dorogovtsev, S.N.; Mendes, J.F.F. Evolution of networks. Adv. Phys. 2002, 51, 1079–1187. [Google Scholar] [CrossRef] [Green Version]
  4. Albert, R.; Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef] [Green Version]
  5. Newman, M.E.J. The structure and function of complex networks. SIAM Rev. 2003, 45, 167–256. [Google Scholar] [CrossRef]
  6. Park, J.; Newman, M.E.J. Statistical mechanics of networks. Phys. Rev. E 2004, 70, 066117. [Google Scholar] [CrossRef] [Green Version]
  7. Barrat, A.; Barthélemy, M.; Vespignani, A. Dynamical Processes on Complex Networks; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  8. Castellano, C.; Fortunato, S.; Loreto, V. Statistical physics of social dynamics. Rev. Mod. Phys. 2009, 81, 591–646. [Google Scholar] [CrossRef] [Green Version]
  9. Newman, M.E.J. Networks: An Introduction; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  10. Squartini, T.; Garlaschelli, D. Maximum-Entropy Networks: Pattern Detection, Network Reconstruction and Graph Combinatorics; Springer: Cham, Switzerland, 2017. [Google Scholar]
  11. Barabási, A.-L.; Albert, R.; Jeong, H. Scale-free characteristics of random networks: the topology of the world-wide web. Phys. A 2000, 281, 69–77. [Google Scholar] [CrossRef]
  12. Fu, F.; Liu, L.; Wang, L. Empirical analysis of online social networks in the age of Web 2.0. Phys. A 2008, 387, 675–684. [Google Scholar] [CrossRef]
  13. Caamaño-Martín, E.; Laukamp, H.; Jantsch, M.; Erge, T.; Thornycroft, J.; De Moor, H.; Cobben, S.; Suna, D.; Gaiddon, B. Interaction between photovoltaic distributed generation and electricity networks. Prog. Photovolt. Res. Appl. 2008, 16, 629–643. [Google Scholar] [CrossRef]
  14. Buldyrev, S.V.; Parshani, R.; Paul, G.; Stanley, H.E.; Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 2010, 464, 1025–1028. [Google Scholar] [CrossRef] [PubMed]
  15. Wilson, A.G. A statistical theory of spatial distribution models. Transp. Res. 1967, 1, 253–269. [Google Scholar] [CrossRef]
  16. Willumsen, L.G. Estimation of an O-D Matrix from Traffic Counts: A Review; Working Paper 99; Institute of Transport Studies, University of Leeds: Leeds, UK, 1978. [Google Scholar]
  17. Guimera, R.; Mossa, S.; Turtschi, A.; Amaral, L.A.N. The worldwide air transportation network: Anomalous centrality, community structure, and cities’ global roles. Proc. Natl. Acad. Sci. USA 2005, 102, 7794–7799. [Google Scholar] [CrossRef] [PubMed]
  18. de Ortúzar, J.D.; Willumsen, L.G. Modelling Transport, 4th ed.; Wiley: New York, NY, USA, 2011. [Google Scholar]
  19. Barthélemy, M. Spatial networks. Phys. Rep. 2011, 499, 1–101. [Google Scholar] [CrossRef] [Green Version]
  20. Arenas, A.; Díaz-Guilera, A.; Kurths, J.; Moreno, Y.; Zhoug, C. Synchronization in complex networks. Phys. Rep. 2008, 469, 93–153. [Google Scholar] [CrossRef] [Green Version]
  21. George, B.; Kim, S. Spatio-Temporal Networks; Springer: Heidelberg, Germany, 2013. [Google Scholar]
  22. Albantakis, L.; Marshall, W.; Hoel, E.; Tononi, G. What caused what? A quantitative account of actual causation using dynamical causal networks. Entropy 2019, 21, 459. [Google Scholar] [CrossRef]
  23. Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z.N.; Barabasi, A.L. The large-scale organization of metabolic networks. Nature 2000, 407, 651–654. [Google Scholar] [CrossRef] [Green Version]
  24. Famili, I.; Palsson, B.O. The convex basis of the left null space of the stoichiometric matrix leads to the definition of metabolically meaningful pools. Biophys. J. 2003, 85, 16–26. [Google Scholar] [CrossRef]
  25. Guimera, R.; Amaral, L.A.N. Functional cartography of complex metabolic networks. Nature 2005, 433, 895. [Google Scholar] [CrossRef]
  26. Reichstein, M.; Falge, E.; Baldocchi, D.; Papale, D.; Aubinet, M.; Berbigier, P.; Bernhofer, C.; Buchmann, N.; Gilmanov, T.; Granier, A.; et al. On the separation of net ecosystem exchange into assimilation and ecosystem respiration: review and improved algorithm. Glob. Chang. Biol. 2005, 11, 1424–1439. [Google Scholar] [CrossRef]
  27. Donges, J.F.; Zou, Y.; Marwan, N.; Kurths, J. Complex networks in climate dynamics: Comparing linear and nonlinear network construction methods. Eur. Phys. J. Spec. Top. 2009, 174, 157–179. [Google Scholar] [CrossRef]
  28. Bianconi, G. Statistical mechanics of multiplex networks: Entropy and overlap. Phys. Rev. E 2013, 87, 062806. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Menichetti, G.; Remondini, D.; Bianconi, G. Correlations between weights and overlap in ensembles of weighted multiplex networks. Phys. Rev. E 2014, 90, 062817. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C.I.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M. The structure and dynamics of multilayer networks. Phys. Rep. 2014, 544, 1–122. [Google Scholar] [CrossRef] [Green Version]
  31. Kivelä, M.; Arenas, A.; Barthélemy, M.; Gleeson, J.P.; Moreno, Y.; Porter, M.A. Multilayer networks. J. Complex Netw. 2014, 2, 203–271. [Google Scholar] [CrossRef] [Green Version]
  32. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M. Maximum entropy analysis of hydraulic pipe flow networks. J. Hydraul. Eng. ASCE 2016, 142, 04016028. [Google Scholar] [CrossRef]
  33. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M. Reduced-parameter method for maximum entropy analysis of hydraulic pipe flow networks. J. Hydraul. Eng. ASCE 2018, 144, 04017060. [Google Scholar] [CrossRef]
  34. Perra, N.; Gonçalves, B.; Pastor-Satorras, R.; Vespignani, A. Activity driven modeling of time varying networks. Sci. Rep. 2012, 2, 469. [Google Scholar] [CrossRef]
  35. Keeling, M.J.; Eames, K.T.D. Networks and epidemic models. J. R. Soc. Interface 2005, 2, 295–307. [Google Scholar] [CrossRef] [Green Version]
  36. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  37. Jaynes, E.T. Information theory and statistical mechanics. In Brandeis University Summer Institute, Lectures in Theoretical Physics, Vol. 3: Statistical Physics; Ford, K.W., Ed.; Benjamin-Cummings Publ. Co.: San Francisco, CA, USA, 1963; pp. 181–218, In Papers on Probability, Statistics and Statistical Physics; Rosenkratz, R.D., Ed.; D. Reidel Publ. Co.: Dordrecht, Holland, 1983; pp. 39–76. [Google Scholar]
  38. Jaynes, E.T. Probability Theory: The Logic of Science; Bretthorst, G.L., Ed.; Cambridge U.P.: Cambridge, UK, 2003. [Google Scholar]
  39. Tribus, M. Information theory as the basis for thermostatics and thermodynamics. J. Appl. Mech. Trans. ASME 1961, 28, 1–8. [Google Scholar] [CrossRef]
  40. Tribus, M. Thermostatics and Thermodynamics; D. Van Nostrand Co. Inc.: Princeton, NJ, USA, 1961. [Google Scholar]
  41. Kapur, J.N.; Kesevan, H.K. Entropy Optimization Principles with Applications; Academic Press, Inc.: Boston, MA, USA, 1992. [Google Scholar]
  42. Gzyl, H. The Method of Maximum Entropy; World Scientific: Singapore, 1995. [Google Scholar]
  43. Wu, N. The Maximum Entropy Method; Springer: Berlin, Germany, 1997. [Google Scholar]
  44. Boltzmann, L. Über die Beziehung zwischen dem zweiten Hauptsatze der Mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung, respektive den Sätzen über das Wärmegleichgewicht. Wien. Ber. 1877, 76, 373–435, English Translated: Le Roux, J., 2002. Available online: http://users.polytech.unice.fr/~leroux/boltztrad.pdf (accessed on 1 June 2019).
  45. Planck, M. Über das Gesetz der Energieverteilung im Normalspektrum. Annalen der Physik 1901, 4, 553–563. [Google Scholar] [CrossRef]
  46. Ellis, R.S. Entropy, Large Deviations, and Statistical Mechanics; Springer: New York, NY, USA, 1985. [Google Scholar]
  47. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  48. Sanov, I.N. On the probability of large deviations of random variables. Mat. Sbornik 1957, 42, 11–44. (In Russian) [Google Scholar]
  49. Schlichting, H.; Gersten, K. Boundary Layer Theory, 8th ed.; Springer: New York, NY, USA, 2001. [Google Scholar]
  50. Niven, R.K. Simultaneous extrema in the entropy production for steady-state fluid flow in parallel pipes. J. Non-Equilib. Thermodyn. 2010, 35, 347–378. [Google Scholar] [CrossRef]
  51. Colebrook, C.F. Turbulent Flow in Pipes, With Particular Reference to the Transition Region Between the Smooth and Rough Pipe Laws. J. ICE 1939, 11, 133–156. [Google Scholar] [CrossRef]
  52. De Groot, S.R.; Mazur, P. Non-Equilibrium Thermodynamics; Dover Publications: New York, NY, USA, 1984. [Google Scholar]
  53. Kreuzer, H.J. Nonequilibrium Thermodynamics and Its Statistical Foundations; Clarendon Press: Oxford, UK, 1981. [Google Scholar]
  54. Demirel, Y. Nonequilibrium Thermodynamics; Elsevier: New York, NY, USA, 2002. [Google Scholar]
  55. Bird, R.B.; Stewart, W.E.; Lightfoot, E.N. Transport Phenomena, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2002. [Google Scholar]
  56. Kondepudi, D.; Prigogine, I. Modern Thermodynamics: From Heat Engines to Dissipative Structures, 2nd ed.; John Wiley & Sons: Chichester, UK, 2015. [Google Scholar]
  57. Tolman, R.C. The Principles of Statistical Mechanics; Oxford University Press: London, UK, 1938. [Google Scholar]
  58. Davidson, N. Statistical Mechanics; McGraw-Hill: New York, NY, USA, 1962. [Google Scholar]
  59. Hill, T.L. Statistical Mechanics: Principles and Selected Applications; McGraw-Hill: New York, NY, USA, 1956. [Google Scholar]
  60. Callen, H.B. Thermodynamics and an Introduction to Thermostatistics, 2nd ed.; John Wiley: New York, NY, USA, 1985. [Google Scholar]
  61. Niven, R.K. Steady state of a dissipative flow-controlled system and the maximum entropy production principle. Phys. Rev. E 2009, 80, 021113. [Google Scholar] [CrossRef] [Green Version]
  62. Niven, R.K.; Noack, B.R. Control volume analysis, entropy balance and the entropy production in flow systems. In Beyond the Second Law: Entropy Production and Non-Equilibrium Systems; Dewar, R.C., Lineweaver, C., Niven, R.K., Regenauer-Lieb, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 129–162. [Google Scholar]
  63. Niven, R.K.; Ozawa, H. Entropy production extremum principles. In Handbook of Applied Hydrology, 2nd ed.; Singh, V., Ed.; McGraw-Hill: New York, NY, USA, 2016; Chapter 32. [Google Scholar]
  64. Bayes, T. (presented by Price, R.). An essay towards solving a problem in the doctrine of chances. Philos. Trans. R. Soc. Lond. 1763, 53, 370–418. [Google Scholar]
  65. Laplace, P. Mémoire sur la probabilité des causes par les évènements. l’Académie Royale des Sciences 1774, 6, 621–656. [Google Scholar]
  66. Polya, G. Mathematics and Plausible Reasoning, Vol II, Patterns of Plausible Inference; Princeton U.P.: Princeton, NJ, USA, 1954. [Google Scholar]
  67. Polya, G. Mathematics and Plausible Reasoning, Vol II, Patterns of Plausible Inference, 2nd ed.; Princeton U.P.: Princeton, NJ, USA, 1968. [Google Scholar]
  68. Cox, R.T. The Algebra of Probable Inference; Johns Hopkins Press: Baltimore, MD, USA, 1961. [Google Scholar]
  69. Zwillinger, D. CRC Standard Mathematical Tables and Formulae; Chapman & Hall/CRC Press: Boca Raton, FL, USA, 2003. [Google Scholar]
  70. Monin, A.S.; Yaglom, A.M. Statistical Fluid Mechanics: Mechanics of Turbulence, Vol. I; Dover Publ.: Mineola, NY, USA, 1971. [Google Scholar]
  71. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  72. Favretti, M. Lagrangian submanifolds generated by the maximum entropy principle. Entropy 2005, 7, 1–14. [Google Scholar] [CrossRef]
  73. Waldrip, S.H. Probabilistic Analysis of Flow Networks using the Maximum Entropy Method. Ph.D. Thesis, The University of New South Wales, Canberra, Australia, 2017. [Google Scholar]
  74. Waldrip, S.H.; Niven, R.K. Comparison between Bayesian and maximum entropy analyses of flow networks. Entropy 2017, 19, 58. [Google Scholar] [CrossRef]
  75. Dewar, R.C. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states. J. Phys. A Math. Gen. 2003, 36, 631–641. [Google Scholar] [CrossRef] [Green Version]
  76. Dewar, R.C. Maximum entropy production and the fluctuation theorem. J. Phys. A Math. Gen. 2005, 38, L371–L381. [Google Scholar] [CrossRef] [Green Version]
  77. Jaynes, E.T. The minimum entropy production principle. Ann. Rev. Phys. Chem. 1980, 31, 579–601. [Google Scholar] [CrossRef]
  78. Wang, Q.A. Maximum entropy change and least action principle for nonequilibrium systems. Astrophys. Space Sci. 2006, 305, 273–281. [Google Scholar] [CrossRef]
  79. Waldrip, S.H.; Niven, R.K. Bayesian and Maximum Entropy Analyses of Flow Networks with Gaussian or Non-Gaussian Priors, and Soft Constraints. In Proceedings of the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), Sao Paulo, Brazil, 9–14 July 2017. Springer Proc. Math. Stat., 2018, 239, 285–294. [Google Scholar]
  80. Niven, R.K.; Waldrip, S.H.; Abel, M.; Schlegel, M. Probabilistic modelling of water distribution networks, extended abstract. In Proceedings of the 22nd International Congress on Modelling and Simulation (MODSIM2017), Hobart, Tasmania, Australia, 3–8 December 2017. [Google Scholar]
  81. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M. Consistent maximum entropy representations of pipe flow networks. In Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2016), Ghent, Belgium, 10–15 July 2016. AIP Conf. Proc. 1853, Melville NY USA, 2017, 070004. [Google Scholar]
  82. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M.; Noack, B.R. MaxEnt analysis of a water distribution network in Canberra, ACT, Australia. In Proceedings of the 34th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014), Amboise, France, 21–26 September 2014. AIP Conf. Proc. 1641, Melville NY USA, 2015, 479–486. [Google Scholar]
  83. Niven, R.K.; Abel, M.; Waldrip, S.H.; Schlegel, M. Maximum entropy analysis of flow and reaction networks. In Proceedings of the 34th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2014), Amboise, France, 21–26 September 2014. AIP Conf. Proc. 1641, Melville NY USA, 2015, 271–278. [Google Scholar]
  84. Niven, R.K.; Abel, M.; Schlegel, M.; Waldrip, S.H. Maximum entropy analysis of flow networks. In Proceedings of the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2013), Canberra, Australia, 15–20 December 2013. AIP Conf. Proc. 1636, Melville NY USA, 2014, 159–164. [Google Scholar]
  85. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M. Maximum entropy analysis of hydraulic pipe networks. In Proceedings of the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2013), Canberra, Australia, 15–20 December 2013. AIP Conf. Proc. 1636, Melville NY USA, 2014, 180–186. [Google Scholar]
  86. Niven, R.K.; Waldrip, S.H.; Abel, M.; Schlegel, M. Probabilistic modelling of energy networks, extended abstract. In Proceedings of the 22nd International Congress on Modelling and Simulation (MODSIM2017), Hobart, Tasmania, Australia, 3–8 December 2017. [Google Scholar]
  87. Waldrip, S.H.; Niven, R.K.; Abel, M.; Schlegel, M. MaxEnt analysis of transport networks. In Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2016), Ghent, Belgium, 10–15 July 2016. AIP Conf. Proc. 1853, Melville NY USA, 2017, 070003. [Google Scholar]
  88. Niven, R.K.; Abel, M.; Waldrip, S.H.; Schlegel, M.; Guimera, R. Maximum entropy analysis of flow networks with structural uncertainty (graph ensembles). In Proceedings of the 37th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2017), Sao Paulo, Brazil, 9–14 July 2017. Springer Proc. Math. Stat. 2018, 239, 261–274. [Google Scholar]
  89. Skilling, J.; Gull, S. Bayesian maximum entropy image reconstruction. In Spatial Statistics and Imaging: Papers from the Research Conference on Image Analysis and Spatial Statistics, Bowdoin College, Brunswick, Maine, Summer 1988; Possolo, A., Ed.; Institute of Mathematical Statistics: Hayward, CA, USA, 1991; pp. 341–367. [Google Scholar]
  90. Mohammad-Djafari, A.; Giovannelli, J.F.; Demoment, G.; Idier, J. Regularization, maximum entropy and probabilistic methods in mass spectrometry data processing problems. Int. J. Mass Spectrom. 2002, 215, 175–193. [Google Scholar] [CrossRef]
Figure 1. Graph representations of flow networks, with (a) undirected or (b) directed flows.
Figure 2. (a) Case study water distribution network from Torrens, ACT, Australia, showing inferred flow rates on the network (lines) and delivered to houses (dots), based on constraints on normalization, Kirchhoff’s laws, a grouped inflow rate and a head difference, using a multidimensional Gaussian prior with zero means (after [32] Figure 7a, used with permission), and (b) case study electricity grid in Campbell, ACT, Australia, showing inferred mean power flows on the network and to houses, based on constraints on normalization, Kirchhoff’s laws and solar forcing (after [86] Figure 1, used with permission).

Share and Cite

MDPI and ACS Style

Niven, R.K.; Abel, M.; Schlegel, M.; Waldrip, S.H. Maximum Entropy Analysis of Flow Networks: Theoretical Foundation and Applications. Entropy 2019, 21, 776. https://doi.org/10.3390/e21080776
