
Complexity, Criticality and Computation (C³)

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: closed (28 February 2017) | Viewed by 83169

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor

Prof. Mikhail Prokopenko
Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
Interests: self-organisation; information theory; complex systems; artificial life; computational epidemiology

Special Issue Information

Dear Colleagues,

Complex systems science is a new approach to science and engineering that studies how relationships between parts give rise to the collective, emergent behaviours of the entire system, and how the system interacts with its environment.

The dynamics of a complex system cannot be predicted or explained as a linear aggregation of the individual dynamics of its components: the interactions among the many constituent microscopic parts bring about synergistic macroscopic phenomena that cannot be understood by considering any single part alone. There is a growing awareness that complexity is strongly related to criticality: the behaviour of dynamical spatiotemporal systems at an order/disorder phase transition, where scale invariance prevails.

Complex systems can also be viewed as distributed information-processing systems, in domains ranging from systems biology and artificial life to computational neuroscience, digital circuitry and transport networks. Consciousness emerging from neuronal activity and interactions, cell behaviour arising from gene regulatory networks, and swarming behaviour are all examples of global system behaviour emerging from the local interactions of individuals (neurons, genes, animals). Can these interactions be seen as a generic computational process? This question shapes the third component of our Special Issue, linking computation to complexity and criticality.

Prof. Mikhail Prokopenko
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complexity
  • criticality
  • computation
  • emergent phenomena
  • self-organization
  • non-linear dynamics
  • phase transitions
  • information thermodynamics
  • distributed information-processing
  • computational neuroscience
  • swarming behavior
  • systems biology
  • artificial life
  • bio-inspired computing
  • agent-based simulation

Published Papers (13 papers)


Editorial


Editorial
Complexity, Criticality and Computation
by Mikhail Prokopenko
Entropy 2017, 19(8), 403; https://doi.org/10.3390/e19080403 - 04 Aug 2017
Viewed by 3751
Abstract
What makes a system “complex”? [...] Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Research


Article
Multiscale Information Theory and the Marginal Utility of Information
by Benjamin Allen, Blake C. Stacey and Yaneer Bar-Yam
Entropy 2017, 19(6), 273; https://doi.org/10.3390/e19060273 - 13 Jun 2017
Cited by 25 | Viewed by 12033
Abstract
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
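The scale-weighted counting described in the abstract can be illustrated with a minimal sketch. This is not the authors' formalism, only its flavour: a hypothetical three-component system in which two components are perfect copies of one fair coin (a shared bit at scale 2) and a third is an independent coin (a bit at scale 1), using Shannon information as the underlying measure.

```python
import itertools, math

def entropy(joint, idx):
    """Shannon entropy (bits) of the marginal over component indices idx."""
    marg = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

# Toy system (A, B, C): A and B are perfect copies of one fair coin,
# C is an independent fair coin.
joint = {}
for a, c in itertools.product([0, 1], repeat=2):
    joint[(a, a, c)] = 0.25

H_AB = entropy(joint, [0, 1])                             # 1 bit: A and B share it
I_AB = entropy(joint, [0]) + entropy(joint, [1]) - H_AB   # 1 shared bit at scale 2
H_C = entropy(joint, [2])                                 # 1 independent bit at scale 1
scale_weighted = 2 * I_AB + 1 * H_C
print(H_AB, I_AB, scale_weighted)  # 1.0 1.0 3.0
# Conservation: the scale-weighted total equals the sum of marginal entropies.
assert abs(scale_weighted - sum(entropy(joint, [i]) for i in range(3))) < 1e-12
```

The final assertion checks the conservation-under-reorganization property mentioned in the abstract: counting each bit weighted by its scale recovers the sum of the components' marginal entropies.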

Article
Can a Robot Have Free Will?
by Keith Douglas Farnsworth
Entropy 2017, 19(5), 237; https://doi.org/10.3390/e19050237 - 20 May 2017
Cited by 13 | Viewed by 9623
Abstract
Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired, definition of free-will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent and (f) for “deep free-will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms, not just humans, but a wide range of organisms. The main impediment to free-will in present-day artificial robots is their lack of being a Kantian whole. Consciousness does not seem to be a requirement and the minimum complexity for a free-will system may be quite low and include relatively simple life-forms that are at least able to learn. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Article
Specific and Complete Local Integration of Patterns in Bayesian Networks
by Martin Biehl, Takashi Ikegami and Daniel Polani
Entropy 2017, 19(5), 230; https://doi.org/10.3390/e19050230 - 18 May 2017
Cited by 3 | Viewed by 6893
Abstract
We present a first formal analysis of specific and complete local integration. Complete local integration was previously proposed as a criterion for detecting entities or wholes in distributed dynamical systems. Such entities in turn were conceived to form the basis of a theory of emergence of agents within dynamical systems. Here, we give a more thorough account of the underlying formal measures. The main contribution is the disintegration theorem which reveals a special role of completely locally integrated patterns (what we call ι-entities) within the trajectories they occur in. Apart from proving this theorem we introduce the disintegration hierarchy and its refinement-free version as a way to structure the patterns in a trajectory. Furthermore, we construct the least upper bound and provide a candidate for the greatest lower bound of specific local integration. Finally, we calculate the ι-entities in small example systems as a first sanity check and find that ι-entities largely fulfil simple expectations. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Article
Cockroach Swarm Optimization Algorithm for Travel Planning
by Joanna Kwiecień and Marek Pasieka
Entropy 2017, 19(5), 213; https://doi.org/10.3390/e19050213 - 06 May 2017
Cited by 15 | Viewed by 6996
Abstract
In transport planning, one should allow passengers to travel through a complicated transportation scheme with efficient use of different modes of transport. In this paper, we propose the use of a cockroach swarm optimization algorithm for determining paths with the shortest travel time. In our approach, this algorithm has been modified to work with the time-expanded model. Therefore, we present how the algorithm has to be adapted to this model, including correctly creating solutions and defining steps and movement in the search space. By introducing the proposed modifications, we are able to solve the journey planning problem. The results have shown that the performance of our approach, in terms of converging to the best solutions, is satisfactory. Moreover, we have compared our results with Dijkstra’s algorithm and a particle swarm optimization algorithm. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
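The Dijkstra baseline that the authors compare against can be sketched on a toy time-expanded graph, where each node is a (stop, time) event and edge weights are elapsed minutes. The timetable below is hypothetical, invented purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Earliest-arrival search over a time-expanded graph.
    Nodes are (stop, time) events; edge weights are elapsed minutes."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical timetable: ride A@0 -> B@10, wait at B, ride B@15 -> C@25,
# plus a slower direct ride A@0 -> C@40.
graph = {
    ("A", 0): [(("B", 10), 10), (("C", 40), 40)],
    ("B", 10): [(("B", 15), 5)],          # waiting edge
    ("B", 15): [(("C", 25), 10)],
}
dist = dijkstra(graph, ("A", 0))
best_C = min(d for (stop, t), d in dist.items() if stop == "C")
print(best_C)  # 25: the transfer via B beats the direct ride
```

Waiting is modelled as an ordinary edge between successive events at the same stop, which is what lets a plain shortest-path search answer earliest-arrival queries on this model.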

Article
Utility, Revealed Preferences Theory, and Strategic Ambiguity in Iterated Games
by Michael Harré
Entropy 2017, 19(5), 201; https://doi.org/10.3390/e19050201 - 29 Apr 2017
Cited by 7 | Viewed by 4515
Abstract
Iterated games, in which the same economic interaction is repeatedly played between the same agents, are an important framework for understanding the effectiveness of strategic choices over time. To date, very little work has applied information theory to the information sets used by agents in order to decide what action to take next in such strategic situations. This article looks at the mutual information between previous game states and an agent’s next action by introducing two new classes of games: “invertible games” and “cyclical games”. By explicitly expanding out the mutual information between past states and the next action, we show under what circumstances the explicit values of the utility are irrelevant for iterated games, and this is then related to the revealed preferences theory of classical economics. These information measures are then applied to the Traveler’s Dilemma game and the Prisoner’s Dilemma game, the Prisoner’s Dilemma being invertible, to illustrate their use. In the Prisoner’s Dilemma, a novel connection is made between the computational principles of logic gates and both the structure of games and the agents’ decision strategies. This approach is applied to the cyclical game Matching Pennies to analyse the foundations of a behavioural ambiguity between two well-studied strategies: “Tit-for-Tat” and “Win-Stay, Lose-Switch”. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
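As a hedged illustration of the central quantity, the snippet below estimates the mutual information between the opponent's previous move and a Tit-for-Tat player's next move from simulated play. Because Tit-for-Tat is deterministic, this mutual information equals the entropy of the past move, close to 1 bit against a uniformly random opponent. This is a toy estimate, not the paper's invertible/cyclical-game analysis:

```python
import math, random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (bits) between the two symbols of each pair."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(1)
# Hypothetical opponent playing Cooperate/Defect uniformly at random.
opponent = [random.choice("CD") for _ in range(4000)]
# Tit-for-Tat cooperates first, then copies the opponent's previous move.
tft = ["C"] + opponent[:-1]
# Pair the opponent's move at round t with TFT's move at round t + 1.
pairs = list(zip(opponent[:-1], tft[1:]))
mi = mutual_information(pairs)
print(round(mi, 3))  # close to 1 bit: the past move fully determines TFT's action
```

For a stochastic strategy such as Win-Stay, Lose-Switch with noise, the same estimator would report strictly less than the past move's entropy, which is the kind of behavioural distinction the paper formalises.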

Article
Criticality and Information Dynamics in Epidemiological Models
by E. Yagmur Erten, Joseph T. Lizier, Mahendra Piraveenan and Mikhail Prokopenko
Entropy 2017, 19(5), 194; https://doi.org/10.3390/e19050194 - 27 Apr 2017
Cited by 28 | Viewed by 6905
Abstract
Understanding epidemic dynamics has always been a challenge. As witnessed in the ongoing Zika or the seasonal influenza epidemics, we still need to improve our analytical methods to better understand and control epidemics. While the emergence of complex sciences at the turn of the millennium has led to their use in modelling epidemics, there is still a need to improve our understanding of critical dynamics in epidemics. In this study, using agent-based modelling, we simulate a Susceptible-Infected-Susceptible (SIS) epidemic on a homogeneous network. We use transfer entropy and active information storage from the information dynamics framework to characterise the critical transition in epidemiological models. Our study shows that both (bias-corrected) transfer entropy and active information storage maximise after the critical threshold (R₀ = 1). This is the first step toward an information dynamics approach to epidemics. Understanding the dynamics around criticality in epidemiological models can provide insights about emergent diseases and disease control. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
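The paper's simulations are agent-based; a simpler mean-field sketch already exhibits the critical threshold the authors study: below R₀ = 1 the infection dies out, while above it the endemic fraction approaches 1 − 1/R₀. The parameter values below are illustrative assumptions:

```python
def sis_steady_state(r0, gamma=0.1, i0=0.01, dt=0.01, steps=100_000):
    """Mean-field SIS dynamics di/dt = beta*i*(1 - i) - gamma*i with R0 = beta/gamma,
    integrated by forward Euler; returns the infected fraction after a long run."""
    beta = r0 * gamma
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

below = sis_steady_state(0.8)   # R0 < 1: the epidemic dies out
above = sis_steady_state(2.0)   # R0 > 1: endemic level approaches 1 - 1/R0 = 0.5
print(round(below, 3), round(above, 3))  # 0.0 0.5
```

The sharp change in the long-run infected fraction as R₀ crosses 1 is the critical transition around which the information-dynamic measures above are computed.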

Article
Complexity and Vulnerability Analysis of the C. Elegans Gap Junction Connectome
by James M. Kunert-Graf, Nikita A. Sakhanenko and David J. Galas
Entropy 2017, 19(3), 104; https://doi.org/10.3390/e19030104 - 08 Mar 2017
Cited by 3 | Viewed by 5456
Abstract
We apply a network complexity measure to the gap junction network of the somatic nervous system of C. elegans and find that it possesses a much higher complexity than we might expect from its degree distribution alone. This “excess” complexity is seen to be caused by a relatively small set of connections involving command interneurons. We describe a method which progressively deletes these “complexity-causing” connections, and find that when these are eliminated, the network becomes significantly less complex than a random network. Furthermore, this result implicates the previously-identified set of neurons from the synaptic network’s “rich club” as the structural components encoding the network’s excess complexity. This study and our method thus support a view of the gap junction connectome as consisting of a rather low-complexity network component whose symmetry is broken by the unique connectivities of singularly important rich club neurons, sharply increasing the complexity of the network. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Article
Emergence of Distinct Spatial Patterns in Cellular Automata with Inertia: A Phase Transition-Like Behavior
by Klaus Kramer, Marlus Koehler, Carlos E. Fiore and Marcos G.E. Da Luz
Entropy 2017, 19(3), 102; https://doi.org/10.3390/e19030102 - 07 Mar 2017
Cited by 6 | Viewed by 5149
Abstract
We propose a Cellular Automata (CA) model in which three ubiquitous and relevant processes in nature are present, namely, spatial competition, distinction between dynamically stronger and weaker agents, and the existence of an inner resistance to changes in the actual state Sₙ (= −1, 0, +1) of each CA lattice cell n (which we call inertia). Considering ensembles of initial lattices, we study the average properties of the CA final stationary configuration structures resulting from the system's time evolution. Taking the inertia as a (proper) control parameter, we identify qualitative changes in the CA spatial patterns resembling usual phase transitions. Interestingly, some of the observed features may be associated with continuous transitions (critical phenomena). However, certain quantities seem to present jumps, typical of discontinuous transitions. We argue that these apparently contradictory findings can be attributed to the discrete character of the inertia parameter. Throughout the work, we also briefly discuss a few potential applications of the present CA formulation. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
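A toy version of a CA with inertia (not the authors' exact update rules) can be sketched in one dimension: a cell adopts its neighbourhood's majority state only when the majority's margin exceeds an inertia threshold, so high inertia freezes the lattice while zero inertia lets any strict majority win:

```python
def step(lattice, inertia):
    """One synchronous update of a 1D three-state CA (states -1, 0, +1) on a ring.
    A cell switches to its neighbourhood's majority state only when the majority's
    margin over the runner-up state exceeds the inertia threshold."""
    n = len(lattice)
    new = lattice[:]
    for i in range(n):
        neigh = [lattice[(i - 1) % n], lattice[i], lattice[(i + 1) % n]]
        # Rank states by count (ties broken deterministically by state value).
        counts = sorted(((neigh.count(s), s) for s in (-1, 0, 1)), reverse=True)
        (c1, s1), (c2, _s2) = counts[0], counts[1]
        if c1 - c2 > inertia:
            new[i] = s1
    return new

lattice = [1, 1, -1, 0, 0, 0, 1, -1]
frozen = step(lattice, inertia=3)    # margin can never exceed 3, so nothing moves
relaxed = step(lattice, inertia=0)   # any strict majority wins immediately
print(frozen == lattice, relaxed)    # True [1, 1, -1, 0, 0, 0, 1, 1]
```

Sweeping the inertia threshold in such a model and measuring the final pattern statistics over ensembles of random initial lattices is the kind of experiment the abstract describes.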

Article
Identifying Critical States through the Relevance Index
by Andrea Roli, Marco Villani, Riccardo Caprari and Roberto Serra
Entropy 2017, 19(2), 73; https://doi.org/10.3390/e19020073 - 16 Feb 2017
Cited by 14 | Viewed by 4881
Abstract
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Article
Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
by Daniel Chicharro and Stefano Panzeri
Entropy 2017, 19(2), 71; https://doi.org/10.3390/e19020071 - 16 Feb 2017
Cited by 20 | Viewed by 7543
Abstract
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or redundancy and synergy components. In this work, we extend this framework focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
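The Williams–Beer decomposition that this work extends can be made concrete on the XOR gate, a standard example: with uniform inputs, each single input carries no specific information about the output, so the redundancy and unique terms vanish and the full 1 bit of joint mutual information is synergy. A minimal sketch of the Imin redundancy measure:

```python
import itertools, math

# XOR gate with uniform inputs: y = x1 ^ x2; joint distribution over (x1, x2, y).
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in itertools.product([0, 1], repeat=2)}

def marginal(idx):
    """Marginal distribution over the given component indices of the joint."""
    m = {}
    for s, pr in p.items():
        k = tuple(s[i] for i in idx)
        m[k] = m.get(k, 0.0) + pr
    return m

def mi(idx_a, idx_b):
    """Mutual information (bits) between two index groups of the joint."""
    pa, pb, pab = marginal(idx_a), marginal(idx_b), marginal(idx_a + idx_b)
    return sum(pr * math.log2(pr / (pa[k[:len(idx_a)]] * pb[k[len(idx_a):]]))
               for k, pr in pab.items() if pr > 0)

def specific_info(y, src):
    """Specific information I(Y=y; X_src), as used in Williams-Beer's Imin."""
    py = marginal([2])[(y,)]
    ps, pys = marginal(src), marginal(src + [2])
    total = 0.0
    for xs, pr_x in ps.items():
        p_y_x = pys.get(xs + (y,), 0.0) / pr_x
        if p_y_x > 0:
            p_x_y = p_y_x * pr_x / py
            total += p_x_y * (math.log2(1 / py) - math.log2(1 / p_y_x))
    return total

# Imin redundancy: expected minimum specific information over the two sources.
redundancy = sum(marginal([2])[(y,)] *
                 min(specific_info(y, [0]), specific_info(y, [1])) for y in (0, 1))
unique1 = mi([0], [2]) - redundancy
unique2 = mi([1], [2]) - redundancy
synergy = mi([0, 1], [2]) - redundancy - unique1 - unique2
print(redundancy, unique1, unique2, synergy)  # 0.0 0.0 0.0 1.0
```

Replacing XOR with AND in the first line yields nonzero redundancy, which is a quick way to see why the invariance properties of the redundancy term across lattices matter.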

Article
Echo State Condition at the Critical Point
by Norbert Michael Mayer
Entropy 2017, 19(1), 3; https://doi.org/10.3390/e19010003 - 23 Dec 2016
Cited by 8 | Viewed by 4337
Abstract
Recurrent networks with transfer functions that are Lipschitz continuous with constant K = 1 may be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient if the largest singular value of the recurrent connectivity is smaller than 1. The main achievement of this paper is a proof of the conditions under which the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the exact shape of the transfer function plays a decisive role in determining whether the network still fulfils the echo state condition. In addition, several examples with one-neuron networks are outlined to illustrate the effects of critical connectivity. Moreover, a mathematical definition of a critical echo state network is suggested. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
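The sufficient condition quoted in the abstract (largest singular value below 1) is easy to check numerically. In the sketch below, an arbitrary 5-unit reservoir's weights are rescaled by their Frobenius norm, which upper-bounds the largest singular value, so the tanh map is a contraction and two trajectories started from different initial states but driven by the same input converge. The network size and input signal are illustrative assumptions:

```python
import math, random

random.seed(0)
N = 5
# Random recurrent weights, rescaled via the Frobenius norm: the largest singular
# value is bounded above by the Frobenius norm, so sigma_max <= 0.9 < 1 here.
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
fro = math.sqrt(sum(w * w for row in W for w in row))
W = [[0.9 * w / fro for w in row] for row in W]
w_in = [random.uniform(-1, 1) for _ in range(N)]

def step(x, u):
    """One reservoir update x' = tanh(W x + w_in * u) with 1-Lipschitz tanh units."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

# Drive two very different initial states with the same input signal.
xa, xb = [1.0] * N, [-1.0] * N
for t in range(300):
    u = math.sin(0.3 * t)
    xa, xb = step(xa, u), step(xb, u)
gap = max(abs(a - b) for a, b in zip(xa, xb))
print(gap < 1e-6)  # True: the reservoir forgets initial conditions (echo states)
```

The paper's critical case, largest singular value exactly 1, is precisely where this simple contraction argument stops working and the shape of the transfer function takes over.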

Article
Consensus of Second Order Multi-Agent Systems with Exogenous Disturbance Generated by Unknown Exosystems
by Xuxi Zhang, Qidan Zhu and Xianping Liu
Entropy 2016, 18(12), 423; https://doi.org/10.3390/e18120423 - 25 Nov 2016
Cited by 10 | Viewed by 3788
Abstract
This paper is concerned with the consensus problem for a class of second-order multi-agent systems subject to external disturbances generated by unknown exosystems. In comparison with the case where the disturbance is generated by known exosystems, we need to combine adaptive control and internal model design to deal with external disturbances generated by unknown exosystems. With the help of the internal model, an adaptive protocol is proposed for the consensus problem of the multi-agent systems. Finally, a numerical example is provided to demonstrate the effectiveness of the control design. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
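The disturbance-free core of second-order consensus can be sketched without the adaptive internal-model machinery (which handles the unknown exosystems and is beyond a short example): double-integrator agents apply u_i = −Σ_j a_ij[(x_i − x_j) + γ(v_i − v_j)], and on a connected undirected graph both positions and velocities reach agreement. The ring network and gain below are illustrative assumptions:

```python
def simulate(adj, x, v, gamma=1.5, dt=0.01, steps=4000):
    """Double-integrator agents under a standard second-order consensus protocol
    u_i = -sum_j a_ij * ((x_i - x_j) + gamma * (v_i - v_j)), forward-Euler integrated."""
    n = len(x)
    for _ in range(steps):
        u = [-sum(adj[i][j] * ((x[i] - x[j]) + gamma * (v[i] - v[j]))
                  for j in range(n)) for i in range(n)]
        x = [x[i] + dt * v[i] for i in range(n)]
        v = [v[i] + dt * u[i] for i in range(n)]
    return x, v

# Hypothetical 4-agent undirected ring, no external disturbance.
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
x, v = simulate(adj, x=[0.0, 2.0, -1.0, 3.0], v=[1.0, -1.0, 0.5, 0.0])
spread_x = max(x) - min(x)
spread_v = max(v) - min(v)
print(spread_x < 1e-2 and spread_v < 1e-2)  # True: positions and velocities agree
```

Adding a persistent sinusoidal disturbance to each u_i would break this agreement, which is what motivates the paper's internal-model-based adaptive protocol.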
