Article

Entropy and the Complexity of Graphs Revisited

by Abbe Mowshowitz 1,* and Matthias Dehmer 2,*

1 Department of Computer Science, The City College of New York (CUNY), 138th Street at Convent Avenue, New York, NY 10031, USA
2 Institute for Bioinformatics and Translational Research, UMIT, Eduard Wallnoefer Zentrum 1, 6060 Hall in Tyrol, Austria
* Authors to whom correspondence should be addressed.
Entropy 2012, 14(3), 559-570; https://doi.org/10.3390/e14030559
Submission received: 16 January 2012 / Revised: 5 March 2012 / Accepted: 12 March 2012 / Published: 14 March 2012

Abstract

This paper presents a taxonomy and overview of approaches to the measurement of graph and network complexity. The taxonomy distinguishes between deterministic (e.g., Kolmogorov complexity) and probabilistic approaches with a view to placing entropy-based probabilistic measurement in context. Entropy-based measurement is the main focus of the paper. Relationships between the different entropy functions used to measure complexity are examined, and intrinsic (e.g., classical measures) and extrinsic (e.g., Körner entropy) variants of entropy-based models are discussed in some detail.

1. Introduction

Entropy applied to graphs is one of two major approaches to measuring the complexity of these protean mathematical structures. The aim of this paper is to classify and contextualize the various entropy-based measures that have been proposed since Rashevsky introduced the concept of topological information content in the mid-1950s [1]. Contemplating the complexity of graphs conjures up the parable of the blind men and the elephant. The approach taken depends on the objective and experience of the observer. Since the advent and spectacular growth of the Internet, networks have become a lightning rod for scientific investigation [2,3]. Efforts to assess and measure the complexity of networks are part of this gathering wave of interest. However, studies of graph complexity have a longer history and have proven important in a number of fields ranging from chemistry [4,5] to sociology [6,7]. Information-theoretic measures have been used to analyze ecological networks such as food webs [8,9]. Related measures have been applied to labeled chemical structures possessing bond types and hetero-atoms [10]. In this application, a measure is used together with special weighting schemes allowing for efficient classification of chemical structures with the aid of support vector machines. The same type of measure can be applied to an arbitrary weighted graph [10].
The wide applicability of graphs and the absence of a complete set of invariants apart from the definition pose a great challenge to crafting useful measures of complexity. Nevertheless, such measures have been defined, applied and found useful [7,11,12,13,14,15]. Note that there has also been considerable effort in applying various kinds of graph entropies in network physics. For related work, see [16,17,18,19]. In this paper we present a systematic interpretation of entropy based measures of graph complexity.
To understand the significance and the limitations of the entropy approach we must first situate it in the broader context of efforts to characterize the complexity of graphs. Thus we begin by describing a general taxonomy and identifying the major threads of research in each area. Then we turn our attention to a detailed analysis of the different sub-branches of the taxonomy that rely on the use of the entropy concept.

2. Taxonomy

Two very different approaches to measuring the complexity of graphs have been developed. One may be termed deterministic, the other probabilistic.
  • The deterministic category encompasses the encoding, substructure count and generative approaches. Dominant in the encoding approach is Kolmogorov complexity. The second includes measures which count the number of substructures of a specified kind [20]. Generative approaches consist of measures based on operations required to generate a graph [21].
  • The probabilistic category includes measures that apply an entropy function to a probability distribution associated with a graph. This category is subdivided into intrinsic and extrinsic subcategories. Intrinsic measures use structural features of a graph to partition the graph (usually the set of vertices or edges) and thereby determine a probability distribution over the components of the partition. Extrinsic measures impose an arbitrary probability distribution on graph elements [22]. Both of these categories employ the probability distribution to compute an entropy value. Shannon’s entropy function is most commonly used, but several different families of entropy functions have been considered [23]. In the next section, we provide a brief overview of the main subcategories of the deterministic class of complexity measures. The probabilistic category is our main concern and will be examined in more detail in subsequent sections.
Table 1 summarizes the taxonomy described above.
Table 1. Taxonomy of graph complexity measures.
Deterministic Measures     Probabilistic Measures
Encoding                   Intrinsic (probability distribution derived from structural features)
Substructure Count         Extrinsic (probability distribution externally imposed)
Generative

3. Deterministic Complexity Measures

First, we examine the encoding variety of deterministic measures of graph complexity. The principal encoding-based measure is Kolmogorov complexity, which applies to any object that can be represented mathematically. For a given encoding scheme, the Kolmogorov measure assigns an object a complexity value equal to the length of the word (i.e., the number of characters taken from the code alphabet) needed to encode that object [24]. The length of a code word for an object is independent of the coding scheme in the sense that the length differs by a constant from one alphabet to another [24]. Kolmogorov complexity can be applied to graphs in a straightforward way, see, e.g., [21,25]. A graph may be defined by an adjacency list. In particular, an undirected graph with n vertices is completely described by a binary string of length C(n, 2) where each bit in the string indicates whether or not a particular pair of vertices is adjacent. Thus the Kolmogorov complexity of an n-vertex undirected graph with respect to a binary encoding scheme is at most C(n, 2). This figure can be treated as an upper bound since additional information about a graph (such as the symmetry structure) could be used to reduce the length of an adjacency list encoding [21].
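As a concrete illustration (not taken from the cited sources), the following Python sketch builds such a C(n, 2)-bit adjacency code; the example graph (a 4-cycle) and the vertex ordering are arbitrary choices.

```python
# Minimal sketch: encode an undirected n-vertex graph as a C(n,2)-bit string,
# one bit per unordered vertex pair (1 = adjacent, 0 = not adjacent).
from itertools import combinations

def adjacency_code(n, edges):
    """Return the C(n,2)-bit adjacency code of a graph on vertices 1..n."""
    edge_set = {frozenset(e) for e in edges}
    return "".join(
        "1" if frozenset((u, v)) in edge_set else "0"
        for u, v in combinations(range(1, n + 1), 2)
    )

# A 4-cycle needs at most C(4,2) = 6 bits under this scheme.
print(adjacency_code(4, [(1, 2), (2, 3), (3, 4), (1, 4)]))  # '101101'
```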
An alternative encoding scheme for an n-vertex undirected graph could be obtained by labeling the vertices with the numbers 1 to n, and assigning a code word to each vertex whose length equals the size needed to encode the number n. The encoding for the entire graph would be the code word for n concatenated with the code word pairs corresponding to the edges in the graph. For example, the 4-vertex graph with five edges shown in Figure 1 could be assigned the following code word consisting of 33 = 3 + 5 · (2 · 3) binary digits: 100001010001011001100010011011100. In general, the length of such a code for an n-vertex graph is given by (2|E| + 1) · C(n) where |E| is the number of edges and C(n) is the length of the code word for the number n. If there are relatively few edges in the graph this scheme might give a smaller code word length than the former code. Note, however, that the code word length for both schemes is fully determined by the number of vertices and edges.
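A sketch of this second scheme follows. The edge list used here is an assumption, obtained by decoding the 33-bit example word above, so it should be read as illustrative rather than as authoritative for Figure 1.

```python
# Sketch of the edge-list code: the code word for n, followed by a pair of
# fixed-width vertex code words for each edge.
def edge_list_code(n, edges):
    width = n.bit_length()                     # C(n): bits needed to write n
    enc = lambda k: format(k, "0{}b".format(width))
    return enc(n) + "".join(enc(u) + enc(v) for u, v in edges)

# Assumed edge list, decoded from the example word in the text.
word = edge_list_code(4, [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)])
print(word)       # 100001010001011001100010011011100
print(len(word))  # 33 = (2*5 + 1) * 3
```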
A number of substructure measures to determine graph complexity have been proposed [12,26,27]. These measures are based on the substructure count of a given type in the graph. One such measure counts the number of distinct induced subgraphs. For example, the cycle $C_4$ has four distinct induced subgraphs and so does the complete graph $K_4$. Both of these graphs therefore have complexity 4 according to this measure. In general, it has been claimed that the more complex a graph is, the more subgraphs the graph contains [26,27]. Constantine [20] defined the complexity of a graph to be the number of its spanning trees. In addition, Bonchev [27] defined the subgraph count SC for determining graph complexity as
$$SC(G) := \sum_{k=0}^{|E|} {}^{k}SC$$
In particular, ${}^{2}SC$ is the known Platt index [28] counting the number of two-edge subgraphs. A similar concept based on the use of information theory is incorporated in the overall complexity indices [29]. For additional related work, see [29,30,31].
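As a rough computational illustration of substructure counts, the sketch below computes Constantine's spanning-tree count via Kirchhoff's matrix-tree theorem and the Platt index via the identity that a vertex of degree d lies on d(d−1)/2 two-edge subgraphs; the use of networkx and numpy is an implementation choice, not something prescribed by the cited work.

```python
# Sketch: two substructure counts for a graph G.
import networkx as nx
import numpy as np

def spanning_tree_count(G):
    # Matrix-tree theorem: any cofactor of the Laplacian equals the number
    # of spanning trees.
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return round(np.linalg.det(L[1:, 1:]))

def platt_index(G):
    # Number of two-edge subgraphs: each vertex of degree d contributes d*(d-1)/2.
    return sum(d * (d - 1) // 2 for _, d in G.degree())

K4 = nx.complete_graph(4)
print(spanning_tree_count(K4))  # 16 (= 4^2, Cayley's formula)
print(platt_index(K4))          # 12
```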
Generative measures of graph complexity have also been developed. Such a measure specifies (i) a base set of elementary graph structures, and (ii) a set of operations that allow for combining elements of the base set to generate a subgraph H of G. The complexity of a graph G is the number of operations on the elements of the base required to generate H. The boolean functions approach to measuring complexity in [32] provides an example. Here complexity is defined as the minimum number of union and intersection operations required to obtain the complete edge set of a network given a set of generator graphs isomorphic to stars.
Figure 1. Variant Kolmogorov Encoding of a graph.

4. Probabilistic Measures of Graph Complexity

Probabilistic measures are typically associated with particular structural features of a graph. There are two different types of such measures, namely, intrinsic and extrinsic, both of which associate a probability distribution with elements (e.g., vertices or edges) of a graph. The numerical value of these measures is usually obtained by applying an entropy function to the probability distribution. These two types of probabilistic complexity measures differ in the way the probability distribution is determined. For intrinsic measures the distribution is induced by some structural feature of the graph. In the extrinsic case an arbitrary probability distribution is assigned to elements of the graph.
The entropy function adopted by Shannon [33] to analyze the performance of communication channels is the one most commonly used to measure the complexity of graphs. This particular function has been adopted because it satisfies the following three basic requirements of a measure of information interpreted as uncertainty removal: the amount of uncertainty removed by the receipt of messages from independent sources is the sum of their respective uncertainty removals; the amount of uncertainty removed by equiprobable messages increases monotonically with the number of messages; and the function is ‘well behaved’. More precisely, let $H(X)$ be a function of the random variable X with probability distribution $P_X = \{p_1, \ldots, p_n\}$. Suppose $H(X)$ satisfies the following three properties: (i) $H(X,Y) = H(X) + H(Y)$ if X and Y are independent random variables; (ii) if $P_X$ is a uniform distribution, i.e., $p_i = 1/n$, then $H(X)$ increases monotonically with n; and (iii) $H(X)$ is a continuous function of $P_X$. These three properties determine a function of the form $H(X) = -C \sum_{i=1}^{n} p_i \log p_i$, where the constant C determines the base of the logarithm [34].
Shannon’s entropy function is a special case of Rényi entropy. The latter designates a family of functions determined by properties (ii) and (iii) only, i.e., by removing the additivity requirement. Rényi entropy has the form $H_q(X) = \frac{1}{1-q} \ln \sum_{i=1}^{n} p_i^q$. The limit of $H_q(X)$ as $q \to 1$ is the Shannon entropy function [34]. Another generalization of Shannon entropy is Kendall information content, which defines a family of functions that play a role in statistical analysis. Any of these functions could be used to measure the complexity of a graph [23]. All of them share a characteristic ability to distinguish between ensembles of events based on their respective probability distributions. For further discussion, see [6,35].
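The following sketch evaluates both families numerically; the base-2 logarithm is an arbitrary choice of the constant C above, and the q → 1 limit is illustrated by evaluating the Rényi form at q close to 1.

```python
# Sketch: Shannon entropy and Rényi entropy of a finite probability distribution.
import math

def shannon_entropy(p, base=2):
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

def renyi_entropy(p, q, base=2):
    assert q > 0 and q != 1
    return math.log(sum(pi ** q for pi in p), base) / (1 - q)

p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))        # 1.75 bits
print(renyi_entropy(p, 0.999))   # ~1.75, approaching the Shannon value as q -> 1
print(renyi_entropy(p, 2.0))     # ~1.54 (Rényi entropy of order 2)
```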

4.1. Classical Graph Entropies

The first measure of graph complexity to appear in the literature was an intrinsic measure based on the symmetries of a graph [1,15]. This measure of complexity equated “information content” with the Shannon entropy of a finite probability scheme associated with the graph. In general, such classical measures are defined in the following way. If X is an arbitrary graph invariant and τ an equivalence criterion for inducing equivalence classes $X_i$, the resulting graph entropy is

$$I(G,\tau) := -\sum_{i=1}^{k} \frac{|X_i|}{|X|} \log \frac{|X_i|}{|X|}$$

The quantity $|X_i| / |X|$ can be interpreted as a probability value for $X_i$. Mowshowitz [15] formalized the notion of topological information content introduced by Rashevsky [1] and traced the properties of the measure. This measure $I_a(G)$ is an index based on the orbits of the automorphism group of a graph.

$$I_a(G) := -\sum_{i=1}^{k} \frac{|V_i|}{|V|} \log \frac{|V_i|}{|V|}$$

where $|V_i|$ is the cardinality of the i-th orbit of G, and k is the number of different orbits.
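A brute-force sketch of $I_a(G)$ follows: it enumerates all automorphisms with networkx's GraphMatcher to recover the orbits and then applies Shannon's entropy to the orbit sizes. Enumerating automorphisms is exponential in general, so this is only workable for small graphs; the base-2 logarithm is likewise a choice made here for illustration.

```python
# Sketch: topological information content I_a(G) from automorphism orbits.
import math
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def orbit_entropy(G):
    orbits = {v: {v} for v in G}
    # Every automorphism of G is an isomorphism from G to itself.
    for auto in GraphMatcher(G, G).isomorphisms_iter():
        for v, image in auto.items():
            orbits[v].add(image)
    distinct = {frozenset(o) for o in orbits.values()}
    n = G.number_of_nodes()
    return -sum((len(o) / n) * math.log2(len(o) / n) for o in distinct)

print(orbit_entropy(nx.complete_graph(4)))  # 0.0: a single orbit (maximal symmetry)
print(orbit_entropy(nx.path_graph(4)))      # 1.0: two orbits of size 2
```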
Another entropy-based measure, defined relative to the chromatic structure of a graph, is called the chromatic information content [15,36]. This measure is defined as

$$I_c(G) := \min_{\hat{V}} \left\{ -\sum_{i=1}^{h} \frac{n_i(\hat{V})}{|V|} \log \frac{n_i(\hat{V})}{|V|} \right\}$$

where $\hat{V} = \{V_i \mid 1 \le i \le h\}$, with $|V_i| = n_i(\hat{V})$, denotes an arbitrary chromatic decomposition of a graph G, and $h = \chi(G)$ is the chromatic number of G. Using simple graph invariants such as metrical properties and degrees, various other graph entropies such as the radial centric information measure [5]

$$I_{cr}(G) := -\sum_{i=1}^{\rho} \frac{|N_i^{\sigma_v}|}{|V|} \log \frac{|N_i^{\sigma_v}|}{|V|}$$
and the vertex degree equality-based information measure [5]
$$I_{\delta}(G) := -\sum_{i=1}^{\bar{\delta}} \frac{|N_i^{\delta_v}|}{|V|} \log \frac{|N_i^{\delta_v}|}{|V|}$$

have been developed. Here $|N_i^{\sigma_v}|$ is the number of vertices having the same eccentricity (σ), $|N_i^{\delta_v}|$ is the number of vertices with degree equal to i, and $\bar{\delta} := \max_{v \in V} \delta_v$, where $\delta_v$ is the degree of $v \in V$. Bonchev [5] offers another alternative using weighted probability schemes based on such characteristics as distances and degrees, resulting in so-called magnitude-based graph entropy measures. The following is an example.

$$I_D(G) := -\frac{1}{|V|} \log \frac{1}{|V|} - \sum_{i=1}^{\rho(G)} \frac{2 k_i}{|V|^2} \log \frac{2 k_i}{|V|^2}$$

where $k_i$ denotes the number of pairs of vertices at distance i from each other.
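The degree- and distance-based measures above are easy to compute directly. The sketch below implements $I_\delta$ and $I_D$ with base-2 logarithms, reading the distance distribution over the $|V|^2$ entries of the distance matrix ($|V|$ diagonal zeros plus $2k_i$ entries for each distance i); these conventions are assumptions made here for illustration.

```python
# Sketch: vertex-degree equality-based measure I_delta and the magnitude-based
# distance measure I_D for a connected graph G (integer node labels assumed).
import math
from collections import Counter
import networkx as nx

def I_delta(G):
    n = G.number_of_nodes()
    classes = Counter(d for _, d in G.degree())        # vertices grouped by degree
    return -sum((c / n) * math.log2(c / n) for c in classes.values())

def I_D(G):
    n = G.number_of_nodes()
    # k_i = number of unordered vertex pairs at distance i.
    k = Counter(d for u, lengths in nx.all_pairs_shortest_path_length(G)
                  for v, d in lengths.items() if u < v)
    return (-(1 / n) * math.log2(1 / n)
            - sum((2 * ki / n**2) * math.log2(2 * ki / n**2) for ki in k.values()))

P4 = nx.path_graph(4)
print(I_delta(P4))  # two degree classes of size 2 -> 1.0
print(I_D(P4))
```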
Extensions and related work on these measures can be found in [37,38].
The essential defining characteristic of classical entropy based measures is sensitivity to particular structural features of a graph. These measures differentiate graphs according to the structural feature (e.g., vertices, edges, degrees) used to partition the elements of the graph. A finite probability scheme consisting of the orbits of the automorphism group gives rise to a measure (topological information content) that differentiates graphs with different degrees of symmetry; a scheme based on color classes produces a measure (chromatic information content) that distinguishes between graphs with different independence characteristics. The relative character of graph complexity is evident in comparing the symmetry and color class measures applied to a complete graph on n vertices. The former measure gives the minimum value of 0 (since the group has but one orbit), while the latter gives the maximum value of log n (since at least n colors are required). Clearly, these two different structural measures respond differently to increasing edge density. Topological information content is useful in applications (e.g., chemistry) [12,16] in which reactivity or complementarity of structures plays a central role. Chromatic information content is suited to applications in which the identification of independent sets of elements is the main desideratum.

4.2. Körner Entropy

Körner [40] introduced the first extrinsic measure of graph complexity, now appropriately called Körner entropy, which is given by
$$H(G, P) := \lim_{t \to \infty} \min_{U \subseteq V^t,\; P^t(U) > 1 - \epsilon} \frac{1}{t} \log \chi\bigl(G^t(U)\bigr)$$

For a vertex subset $U \subseteq V(G)$, the induced subgraph on U is denoted by $G(U)$; $\chi(G)$ is the chromatic number [39] of G, $G^t$ is the t-th co-normal power [40] of G, and

$$P^t(U) := \sum_{x \in U} P^t(x)$$
Note that this quantity does not express the structural information content of a graph (see the previous section) because it was developed as a solution of a coding problem in information theory. There exist several definitions of Körner’s entropy which have been proven equivalent [41]. According to Körner’s first definition [40], this quantity is related to the stable set problem, which in turn is related to minimum entropy colorings of graphs, whose computation is known to be NP-hard; see [42]. This means that Körner entropy does not offer a practical tool for measuring the complexity of large scale networks. Because the sub-additivity inequality holds for this measure, it has several applications such as quantum sorting, perfect hashing, and obtaining lower bounds for the size of Boolean formulas; see [41]. Further examples and more information-theoretic interpretations of this measure can be found in [40,41].

4.3. Parametric Graph Entropies

Another way to define extrinsic complexity measures for graphs using Shannon’s entropy is associated with assigning a probability value to each vertex by employing functions which capture structural information of a graph [14,22,43]. Various information functions have been defined based on metrical properties and other graph invariants. Typically, such functions are parameterized by weight differences and characterize structural spread in a graph [22]. Also, the parametrization of the measures allows for defining optimization problems based on given data sets.
Let G = (V, E) be a graph and let f be an information function, i.e., a positive function that maps vertices to the positive reals using structural features of the graph. Then one obtains the measures [14,22,43]
$$I_f(G) := -\sum_{i=1}^{|V|} \frac{f(v_i)}{\sum_{j=1}^{|V|} f(v_j)} \log \frac{f(v_i)}{\sum_{j=1}^{|V|} f(v_j)}$$

$$I_f^{\lambda}(G) := \lambda \left( \log(|V|) + \sum_{i=1}^{|V|} \frac{f(v_i)}{\sum_{j=1}^{|V|} f(v_j)} \log \frac{f(v_i)}{\sum_{j=1}^{|V|} f(v_j)} \right)$$
representing families of graph entropy measures [14,22,43]. The vertex probabilities are defined by
$$p(v_i) := \frac{f(v_i)}{\sum_{j=1}^{|V|} f(v_j)}$$
In order to derive concrete graph entropies, the parametric information functions [14,22,43]
$$f_1(v_i) := \alpha^{\,c_1 |S_1(v_i, G)| + c_2 |S_2(v_i, G)| + \cdots + c_{\rho(G)} |S_{\rho(G)}(v_i, G)|}, \qquad c_k > 0,\ 1 \le k \le \rho(G),\ \alpha > 0$$
and
$$f_2(v_i) := c_1 |S_1(v_i, G)| + c_2 |S_2(v_i, G)| + \cdots + c_{\rho(G)} |S_{\rho(G)}(v_i, G)|, \qquad c_k > 0,\ 1 \le k \le \rho(G)$$
have been used. Criteria for selecting the parameters have been examined in [43]. Clearly, the interpretation of the resulting entropy depends on the information function chosen. When using $f_1$, Dehmer et al. found that $I_{f_1}$ measures a special kind of inner symmetry of the graph under consideration. This observation is based on the fact that the more the vertices differ with respect to their spherical neighborhoods, the smaller the value of the measure, and conversely [44].
Parametric entropy based measures rely on information functions to assign probabilities to the vertices of a graph. These vertex probabilities are then used to construct a finite probability scheme for a graph. This procedure gives greater scope to the definition of entropy based structural measures. Measures that are sensitive to very specific structures in a graph can be designed by choosing an appropriate information function. For example, the information function $f_2$ above gives rise to a measure $I_{f_2}$ that can discriminate reasonably well between non-isomorphic graphs [43]. In particular, the resulting measure has low degeneracy [43] for chemical graphs and exhaustively generated graphs up to 10 vertices.
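The sketch below illustrates $I_{f_2}$: the j-sphere $S_j(v, G)$ is taken to be the set of vertices at distance exactly j from v, and the coefficients $c_1, \ldots, c_{\rho(G)}$ are chosen arbitrarily for illustration (they are free parameters in the cited work); the base-2 logarithm is likewise a choice made here.

```python
# Sketch: the parametric entropy I_{f_2}(G) for a connected graph G, with
# linearly decreasing coefficients emphasising close neighbourhoods.
import math
import networkx as nx

def I_f2(G, coeffs):
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    def f2(v):
        # |S_j(v, G)|: number of vertices at distance exactly j from v.
        spheres = [sum(1 for u in G if u != v and lengths[v][u] == j)
                   for j in range(1, len(coeffs) + 1)]
        return sum(c * s for c, s in zip(coeffs, spheres))
    total = sum(f2(v) for v in G)
    probs = [f2(v) / total for v in G]
    return -sum(p * math.log2(p) for p in probs if p > 0)

P5 = nx.path_graph(5)                 # diameter rho(G) = 4
print(I_f2(P5, coeffs=[4, 3, 2, 1]))  # arbitrary c_1 > c_2 > ... > 0
```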

4.4. Non-Parametric Graph Entropies

A novel information function is
$$f_3(v_i) := |\lambda_i|$$
leading to the extrinsic graph entropy [45]
$$I_{f_3}(G) = -\sum_{i=1}^{k} \frac{|\lambda_i|}{\sum_{j=1}^{|V|} |\lambda_j|} \log \frac{|\lambda_i|}{\sum_{j=1}^{|V|} |\lambda_j|}$$

where G is an undirected graph and $\lambda_1, \lambda_2, \ldots, \lambda_k$ are the non-zero eigenvalues of its characteristic polynomial. Note that this non-parametric graph complexity measure quantifies the entropy of the underlying graph topology based on the spectrum of G. Obviously, $f_3$ can also be calculated by using the eigenvalues of various other graph polynomials such as the distance polynomial and Wiener polynomial [46,47]. This leads to graph entropies which have yet to be explored. Other than $f_3$, the information function

$$f_4(v_i) := d(v_i, v_1) + d(v_i, v_2) + \cdots + d(v_i, v_{|V|})$$
is also non-parametric but is based on metrical properties of an underlying graph.
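Sketches of $I_{f_3}$ (computed here from the adjacency spectrum, though other graph matrices could be substituted as noted above) and of the entropy induced by the distance-sum function $f_4$ follow; the zero-eigenvalue tolerance and the base-2 logarithm are implementation choices made for illustration.

```python
# Sketch: eigenvalue-based entropy I_{f_3} and distance-sum-based entropy from f_4.
import math
import networkx as nx
import numpy as np

def I_f3(G, tol=1e-9):
    A = nx.adjacency_matrix(G).toarray().astype(float)
    eigs = np.linalg.eigvalsh(A)                  # real, since A is symmetric
    mags = [abs(x) for x in eigs if abs(x) > tol] # non-zero eigenvalues only
    total = sum(mags)
    return -sum((m / total) * math.log2(m / total) for m in mags)

def I_f4(G):
    # f_4(v): sum of distances from v to all other vertices.
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    f = {v: sum(lengths[v].values()) for v in G}
    total = sum(f.values())
    return -sum((x / total) * math.log2(x / total) for x in f.values())

print(I_f3(nx.complete_graph(4)))  # spectrum {3, -1, -1, -1}
print(I_f4(nx.path_graph(5)))
```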
Non-parametric measures differ from parametric entropy based measures in the way their respective finite probability schemes are constructed. Schemes in the latter case are directly related to the probabilities assigned to the vertices of a graph; those in the former case are only indirectly related to the probabilities assigned to the vertices. As indicated above, the function $f_3(v_i)$, defined in terms of the eigenvalues of a graph-theoretical matrix of a graph (such as the adjacency or distance matrix), gives rise to a finite probability scheme consisting of the ratios of individual eigenvalues to the sum of all the eigenvalues.
Interestingly, this measure is also highly effective in distinguishing between non-isomorphic graphs, especially chemical graphs and exhaustively generated graphs [45]. The existence of cospectral graphs justifies the general observation that eigenvalue-based graph invariants are not effective in distinguishing non-isomorphic graphs. However, this result suggests the contrary for several classes of graphs [48,49].

5. Conclusions

Entropy functions have been used successfully to capture different aspects of graph complexity. Much of the work in this area has been undertaken by those directly concerned with applications in chemistry, biology and sociology, which accounts in part for the different approaches and entropy functions found in the literature. In this paper we have discussed the main entropy-based approaches to graph complexity measurement by introducing a general taxonomy. Coverage of the many different applications of entropy-based measures was not meant to be exhaustive, so, for example, the overview did not include applications to weighted graphs. Classical entropy-based measures can handle the weighted case by modifying the criterion for decomposing graph elements into equivalence classes. Elements (e.g., vertices or edges) that are equivalent by virtue of automorphisms of the underlying unweighted graph can be differentiated by applying numerical criteria. If, for example, edges e and f are equivalent under an edge automorphism, they might yet be distinguished if the difference in their respective weights is greater than some given threshold value. Proceeding in this way, a finite probability scheme could be constructed to which an entropy function is applied.
Formally speaking, intrinsic and extrinsic measures of graph complexity differ in the underlying finite probability scheme used for applying an entropy function to obtain a quantitative value. Both types of measures attempt to differentiate graphs according to some structural feature. The classical measures use global structures such as symmetry or independent sets of graph elements. Topological information content [1], for example, relies on symmetry to distinguish graphs representing the molecular structure of chemical compounds. Different structures give rise to different classifications of graphs. Extrinsic measures extend the possibilities for classifying graphs according to structural features. Parametric measures make explicit use of probabilities assigned to vertices to construct finite probability schemes. Non-parametric measures abstract the construction of such schemes by using values only indirectly related to the probabilities assigned to the vertices. The latter measures extend the possibilities for crafting “designer” measures that differentiate graphs according to ever more subtle structural features. All the entropy based measures are linked to particular structural features and are thus limited in their ability to differentiate graphs. The example given earlier showing the extreme difference in the respective values assigned to the complete graph by topological information content and chromatic information content illustrates this assertion. Those wishing to classify graphs using entropy-based measures must be content with an ordering relative to some particular structural feature of interest.

Acknowledgements

Research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defense and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defense or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute for Government purposes notwithstanding any copyright notation hereon. Matthias Dehmer thanks the Austrian Science Funds and the Standortagentur Tirol for supporting this work (project P22029-N13).

References and Notes

  1. Rashevsky, N. Life, information theory, and topology. Bull. Math. Biophys. 1955, 17, 229–235. [Google Scholar] [CrossRef]
  2. Albert, R.; Jeong, H.; Barabási, A.L. Diameter of the world wide web. Nature 1999, 401, 130–131. [Google Scholar]
  3. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47. [Google Scholar] [CrossRef]
  4. Bonchev, D.; Trinajstić, N. Information theory, distance matrix and molecular branching. J. Chem. Phys. 1977, 67, 4517–4533. [Google Scholar] [CrossRef]
  5. Bonchev, D. Information Theoretic Indices for Characterization of Chemical Structures; Research Studies Press: Chichester, West Sussex, UK, 1983. [Google Scholar]
  6. Butts, C.T. The complexity of social networks: Theoretical and empirical findings. Soc. Network. 2001, 23, 31–71. [Google Scholar] [CrossRef]
  7. Mehler, A.; Weiß, P.; Lücking, A. A network model of interpersonal alignment. Entropy 2010, 12, 1440–1483. [Google Scholar] [CrossRef]
  8. Solé, R.V.; Montoya, J.M. Complexity and fragility in ecological networks. Proc. Roy. Soc. Lond. B Biol. Sci. 2001, 268, 2039–2045. [Google Scholar] [CrossRef] [PubMed]
  9. Ulanowicz, R.E. Quantitative methods for ecological network analysis. Comput. Biol. Chem. 2004, 28, 321–339. [Google Scholar] [CrossRef]
  10. Dehmer, M.; Barbarini, N.; Varmuza, K.; Graber, A. Novel topological descriptors for analyzing biological networks. BMC Struct. Biol. 2010, 10. [Google Scholar] [CrossRef]
  11. Basak, S.C.; Balaban, A.T.; Grunwald, G.D.; Gute, B.D. Topological indices: Their nature and mutual relatedness. J. Chem. Inf. Comput. Sci. 2000, 40, 891–898. [Google Scholar] [CrossRef] [PubMed]
  12. Bonchev, D.; Rouvray, D.H. (Eds.) Complexity in Chemistry, Biology, and Ecology; Mathematical and Computational Chemistry; Springer: New York, NY, USA, 2005.
  13. Emmert-Streib, F.; Dehmer, M. Information theoretic measures of UHG graphs with low computational complexity. Appl. Math. Comput. 2007, 190, 1783–1794. [Google Scholar] [CrossRef]
  14. Dehmer, M.; Mowshowitz, A. A history of graph entropy measures. Inform. Sci. 2011, 181, 57–78. [Google Scholar] [CrossRef]
  15. Mowshowitz, A. Entropy and the complexity of the graphs: I. An index of the relative complexity of a graph. Bull. Math. Biophys. 1968, 30, 175–204. [Google Scholar] [CrossRef]
  16. Anand, K.; Bianconi, G. Entropy measures for networks: Toward an information theory of complex topologies. Phys. Rev. E 2009, 80, 045102(R). [Google Scholar] [CrossRef]
  17. Solé, R.V.; Valverde, S. Information theory of complex networks: On evolution and architectural constraints. Lect. Notes Phys. 2004, 650, 189–207. [Google Scholar]
  18. Thurner, S. Statistical mechanics of complex networks. In Analysis of Complex Networks: From Biology to Linguistics; Dehmer, M., Emmert-Streib, F., Eds.; Wiley-VCH: Weinheim, Germany, 2009; pp. 23–45. [Google Scholar]
  19. Wilhelm, T.; Hollunder, J. Information theoretic description of networks. Physica A 2007, 388, 385–396. [Google Scholar] [CrossRef]
  20. Constantine, G. Graph complexity and the Laplacian matrix in blocked experiments. Linear and Multilinear Algebra 1990, 28, 49–56. [Google Scholar] [CrossRef]
  21. Bonchev, D. Kolmogorov’s information, Shannon’s entropy, and topological complexity of molecules. Bulg. Chem. Commun. 1995, 28, 567–582. [Google Scholar]
  22. Dehmer, M. Information processing in complex networks: Graph entropy and information functionals. Appl. Math. Comput. 2008, 201, 82–94. [Google Scholar] [CrossRef]
  23. Dehmer, M.; Mowshowitz, A. Generalized graph entropies. Complexity 2011, 17, 45–50. [Google Scholar] [CrossRef]
  24. Kolmogorov, A.N. Three approaches to the definition of information (in Russian). Probl. Peredaci Inform. 1965, 1, 3–11. [Google Scholar]
  25. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: New York, NY, USA, 1997. [Google Scholar]
  26. Bertz, S.H.; Sommer, T.J. Rigorous mathematical approaches to strategic bonds and synthetic analysis based on conceptually simple new complexity indices. Chem. Commun. 1997. [Google Scholar] [CrossRef]
  27. Bonchev, D. Complexity analysis of yeast proteome network. Chem. Biodivers. 2004, 1, 312–326. [Google Scholar] [CrossRef]
  28. Platt, J.R. Influence of neighbor bonds on additive bond properties in paraffins. J. Chem. Phys. 1947, 15, 419–420. [Google Scholar] [CrossRef]
  29. Bonchev, D. Overall connectivities and topological complexities: A new powerful tool for QSPR/QSAR. J. Chem. Inf. Comput. Sci. 2000, 40, 934–941. [Google Scholar] [CrossRef] [PubMed]
  30. Bonchev, D. The overall Wiener index—A new tool for characterization of molecular topology. J. Chem. Inf. Comput. Sci. 2001, 41, 582–592. [Google Scholar] [CrossRef] [PubMed]
  31. Bonchev, D. The overall topological complexity indices. In Advances in Computational Methods in Science and Engineering; Simos, T., Maroulis, G., Eds.; VSP Publications: Melville, NY, USA, 2005; Volume 4B, pp. 1554–1557. [Google Scholar]
  32. Pudlák, P.; Rödl, V.; Savický, P. Graph complexity. Acta Informatica 1988, 25, 515–535. [Google Scholar]
  33. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, USA, 1949. [Google Scholar]
  34. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley Series in Telecommunications and Signal Processing; Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  35. Butts, C.T. An axiomatic approach to network complexity. J. Math. Sociol. 2000, 24, 273–301. [Google Scholar] [CrossRef]
  36. Mowshowitz, A. Entropy and the complexity of graphs II: The information content of digraphs and infinite graphs. Bull. Math. Biophys. 1968, 30, 225–240. [Google Scholar] [CrossRef] [PubMed]
  37. Bonchev, D. Information theoretic measures of complexity. In Encyclopedia of Complexity and System Science; Meyers, R., Ed.; Springer: New York, NY, USA, 2009; Volume 5, pp. 4820–4838. [Google Scholar]
  38. Todeschini, R.; Consonni, V.; Mannhold, R. Handbook of Molecular Descriptors; Wiley-VCH: Weinheim, Germany, 2002. [Google Scholar]
  39. Bang-Jensen, J.; Gutin, G. Digraphs, Theory, Algorithms and Applications; Springer: London, UK, 2002. [Google Scholar]
  40. Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. In Proceedings of the 6th Prague Conference on Information Theory, Prague, Czech Republic, 1973; pp. 411–425.
  41. Simonyi, G. Graph entropy: A survey. In Combinatorial Optimization; Cook, W., Lovász, L., Seymour, P., Eds.; DIMACS Series in Discrete Mathematics and Theoretical Computer Science; American Mathematical Society: Providence, RI, USA, 1995; Volume 20, pp. 399–441. [Google Scholar]
  42. Cardinal, J.; Fiorini, S.; Assche, G.V. On minimum entropy graph colorings. In Proceedings of the 2004 IEEE International Symposium on Information Theory, Piscataway, NJ, USA, 2004; p. 43.
  43. Dehmer, M.; Varmuza, K.; Borgert, S.; Emmert-Streib, F. On entropy-based molecular descriptors: Statistical analysis of real and synthetic chemical structures. J. Chem. Inf. Model. 2009, 49, 1655–1663. [Google Scholar] [CrossRef] [PubMed]
  44. Dehmer, M.; Mowshowitz, A.; Emmert-Streib, F. Connections between classical and parametric network entropies. PLoS One 2011, 6, e15733. [Google Scholar] [CrossRef]
  45. Dehmer, M.; Sivakumar, L.; Varmuza, K. Uniquely discriminating molecular structures using novel eigenvalue-based descriptors. MATCH Commun. Math. Comput. Chem. 2012, 67, 147–172. [Google Scholar]
  46. Polansky, O.E.; Bonchev, D. The Wiener number of graphs. I. General theory and changes due to graph operations. MATCH Commun. Math. Comput. Chem. 1986, 21, 133–186. [Google Scholar]
  47. Hosoya, H. On some counting polynomials. Discrete Appl. Math. 1988, 19, 239–257. [Google Scholar] [CrossRef]
  48. Balaban, A.T.; Harary, F. The characteristic polynomial does not uniquely determine the topology of a molecule. J. Chem. Doc. 1971, 11, 258–259. [Google Scholar] [CrossRef]
  49. Randić, M.; Vracko, M.; Novic, M. Eigenvalues as molecular descriptors. In QSPR/QSAR Studies by Molecular Descriptors; Diudea, M.V., Ed.; Nova Publishing: Huntington, NJ, USA, 2001; pp. 93–120. [Google Scholar]
