Article

The Natural Philosophy of Economic Information: Autonomous Agents and Physiosemiosis

by
Carsten Herrmann-Pillath
Max Weber Centre for Advanced Cultural and Social Studies, Erfurt University, 99089 Erfurt, Germany
Entropy 2021, 23(3), 277; https://doi.org/10.3390/e23030277
Submission received: 27 January 2021 / Revised: 23 February 2021 / Accepted: 23 February 2021 / Published: 25 February 2021

Abstract

Information is a core concept in modern economics, yet its definition and empirical specification remain elusive. One reason is the intellectual grip of the Shannon paradigm, which marginalizes semantic information. However, a precise concept of economic information must be based on a theory of semantics, since what counts economically is the meaning, function and use of information. This paper introduces a new principled approach to information that adopts the paradigm of biosemiotics, rooted in the philosophy of Charles S. Peirce, and builds on recent developments in the thermodynamics of information. Information processing by autonomous agents, defined as autopoietic heat engines, is conceived as physiosemiosis operating according to fundamental thermodynamic principles of information processing, as elucidated in recent work by Kolchinsky and Wolpert (KW). I plug the KW approach into a basic conceptual model of physiosemiosis and present an evolutionary interpretation. This approach has far-reaching implications for economics, such as suggesting an evolutionary view of the economic agent, choice and behavior, which is informed by applications of statistical thermodynamics to the brain.

1. Introduction

This paper explores a semantic theory of information grounded in thermodynamics. The motivation is to clarify the role of information in economics, with a focus on the microfoundations: I will not explore wider dimensions on the macro level, such as the knowledge economy, digitalization or the growing importance of intangible goods in economic structure. The relationship between thermodynamics and economics has been productive but is also fraught with conceptual difficulties. It has so far been developed almost exclusively in ecological economics [1], trailing the breakthrough contributions by Nicholas Georgescu-Roegen [2]. However, Georgescu-Roegen was also deeply influential in blocking explorations of information, as he erroneously battled against the Boltzmann approach to entropy [3]. Today, given recent advances in the thermodynamics of information, we have the opportunity to combine the energetic aspects emphasized by Georgescu-Roegen with the thermodynamics of information [4], and to inquire whether such a synthesis also has consequences for the concept of information in economics. My paper explores generic aspects of information, and not economic information specifically. However, as we will see, the generic aspects include many considerations which have direct implications for economics, of which I explore one aspect in the final section: the theory of economic agency.
In modern economics, the concept of information has obtained a central position, with Hayek’s [5] breakthrough contribution on the “use of knowledge in society” pointing towards an integration of the micro and the macro level. Yet, strangely, it mostly remains vaguely defined as far as the microfoundations are concerned. For example, one of the founders of modern information economics, Joseph Stiglitz, does not provide a definition in his 2017 overview of the field [6]. As amply demonstrated in the intellectual history of the term by Mirowski and Nik-Khah [7], despite its centrality the concept seems elusive and packed with many hidden ontological assumptions which must be subjected to closer scrutiny. Precise definitions which inform general economics can be found in game theory, where information refers to a set of possible states of the world and a function that partitions these states into what constitutes an agent’s knowledge [8], p. 67ff. This view is inspired by the Shannon concept of information: the “world” appears like the sender of messages, and the economic agent is the receiver who decodes the messages. The meaning of the game is covered by the construct of “common knowledge” which frames the agents’ strategies.
However, economic uses of information often go beyond this formal definition. In particular, since the inception of the information economics paradigm, information has been approached as a kind of “thing”, that is, as an economic good with extensive property. This corresponds to the Shannon view, yet there is no clear conception of what information constitutes materially on the micro level. In the wake of the IT revolution, this view has been bolstered by the immense economic significance of “data” in newly emerging technologies and business models, which highlights the quantitative aspects and appears to suggest a clear material interpretation. Interestingly, this development has heightened interest in the energetic aspects of information, for mere practical reasons [9]. However, so far these concerns have not inspired foundational work on energy and information in general, i.e., beyond the specific setting of digital technologies which literally embody Shannon information.
The pull of the Shannon concept of information is so strong that even the most audacious attempts at introducing new perspectives eventually stay within its confines. Interestingly, two contributions by physicists loom large here. In his 1994 book, Robert Ayres [10] clearly diagnosed the need to distinguish Shannon information from another aspect of information which he called “survival relevant information” (SR information), thus directly referring to an evolutionary framework. Shannon information is “distinguishability information” (D information), which refers to the quantity of information that is embodied in a pattern relative to a given state space. SR information is information that is functional for sustaining the existence of that pattern. For example, we can measure the information content of a DNA molecule in two ways: one is to fix a state space for the structures and elements of the molecule, and hence determine its negentropy relative to that state space (D information); the other is to assign information value only to those parts of the molecule that fulfil a function in biological processes (SR information). However, Ayres failed to further sharpen the concept of SR information, and in his later work on energy and economy [11] he blanked out these more fundamental issues, thus finally dissociating his physical approach to economic growth from developing a physical theory of economic information that, after all, is the ultimate driver of growth, such as market intelligence or technological know-how: that is, he focused on the macro level and sidelined microfoundations. Similarly, in his 2015 book, Hidalgo [12] starts out from developing a physical view on information informed by thermodynamics but then falls back on the notion of complexity as determined by the Shannon measures. His measure of economic complexity in international trade [13] captures only the quantitative aspects of a state space of goods and technologies but does not cover the functions of those goods in the economic process.
In staying close to the Shannon concept of information and side-lining a more precise economic definition of information, economics fails to develop a theory of semantic information [14]. To be fair, this is not the fault of Shannon, who explicitly excluded semantic information in formulating his theory. When considering economic processes, the semantic dimension is essential because it refers to the use of information. This aspect is also fundamental for the general concept of information: information is always relative to an observer for whom the information is valuable, in the sense of fulfilling a certain function, and information is about something, i.e., a referent [15]. In this paper, I suggest specifying the concept of semantic information by referring to Charles S. Peirce’s theory of signs [16], as deployed in modern biosemiotics [17,18]. This view allows one to approach the materiality of information in a comprehensive and systematic way, thus providing the basis for a physical view on information in economics: I refer to this as “physiosemiotics” [19].
The paper continues with developing the physiosemiotic view as a necessary complement to the Shannon concept of information. The physiosemiotic view essentially refers to an entity that interprets and uses information. I argue that the most general conception of that entity is that of an “autonomous agent” as introduced by Stuart Kauffman [20]. The autonomous agent is a physical entity which is capable of maintaining thermodynamic work cycles to sustain its existence as defined by internally determined goals. Fundamental economic notions such as “growth” and “efficiency” can already be meaningfully introduced at this step. In the next section, I sketch the important contribution by Kolchinsky and Wolpert [21] on a physical approach to semantic information, which allows one to formulate the basic economics of information grounded in thermodynamics. Going beyond Kolchinsky and Wolpert, I claim that this view can be translated into the conceptual frame of Peircean semiotics. I conclude by sketching an important and exemplary consequence for economics proper, conceptualizing economic agents as autonomous agents.

2. Basic Principles of Physiosemiotics

2.1. Shannon Information versus Semiotics

Peirce’s theory of signs differs fundamentally from Shannon’s approach in being triadic rather than dyadic: Shannon information only distinguishes between sender and receiver, whereas semiotics introduces a “third”, the sign (more precisely, “sign vehicle” or “representamen” in some distinctly Peircean terminology) (the following discussion refers to Figure 1) [22]. On this basis, semiotics makes explicit the role of inference as a mechanism of gaining information from interpreting signs, which is taken for granted in the Shannon approach (deliberately so, as being irrelevant for the question posed, i.e., the transmission of information [23], p. 160ff). Finally, semiotics reassesses the roles of sender and receiver. As is salient in the original use of Shannon information, the sender is the one who creates the information to be transmitted via the channel: the “message”. The message may be coded, and the receiver only needs to know the code used by the sender to retrieve the complete information, unless “noise” or limited channel capacity partly impedes complete transmission. In contrast, semiotics emphasizes the role of the receiver in creating the information content of the message: if the receiver already knows the message, it carries no information at all; if the information is new but useless, its informational value is zero. In semiotics, information mainly resides with the receiver, commonly denoted the “interpretant” (although, as we shall see, the full picture remains triadic).
There is another, deeper distinction on the ontological level. What is the physical nature of information in the two frames of reference? In the Shannon case, physics comes into play when considering the interaction between sender and receiver via the channel, which is a purely physical process, including the message as a physical entity (such as electromagnetic waves). However, the state space in which the information content (“bits”) is defined is non-physical, since it is arbitrarily defined by the code employed by the sender and shared with the receiver. Although that may be physically embodied, it is not physically determined: the code may be embodied in physical structures such as the neurobiology of sender and receiver, but even then there is no physical explanation of that code, i.e., its embodied meaning.
This distinction becomes important when considering the relationship between Shannon information and physical entropy. In the Shannon approach, physical entropy only comes into play when we consider “noise” which is a physical phenomenon that impacts the physical transmission process. In terms of causal notions, there is an analytical distinction between the information as message, which is non-physical, and the transmission process; this appears to be another form of Cartesian dualism of mind and matter, thus suggesting that “information” is disembodied and hence can be conjugated with arbitrarily many forms of physical manifestations. This Cartesian dualism has deeply shaped the concept of information in economics.
In comparison, the triadic view allows one to distinguish between the physical nature of the causal interaction between sender and receiver, the physical nature of the sign, and the physical nature of the consequences that define the value of information, i.e., its function: this is what I refer to as “physiosemiotics”. Regarding the first, we can generalize by treating the “channel” as a physical mechanism that causally connects states of an object (the “sender”) and states of an interpretant (the “receiver”). Consider a snake (object) and a rabbit (receiver), which are causally connected via audio-visual physical media, i.e., a “channel”. However, we cannot distinguish between channel and “message”, since the sign is part and parcel of the mechanism, such as a certain pattern of dimly discerned movement in the rustling grass. Hence, the sign is physical in turn, but only becomes causally operative via the interpretant. The interpretant responds to the sign in fulfilling a function, in the case of the rabbit fleeing the snake to safeguard survival, which is a physical phenomenon in turn. In the physiosemiotic view, all aspects of information turn out to be physical.
The concept of “function” dovetails with the pragmatist approach to meaning and knowledge that Peirce introduced into philosophy [24]: knowledge is what solves problems that an agent is facing. Whereas “meaning” is almost necessarily imbued with mentalist references, “function” does not inherently refer to epistemic subjects. I cannot delve into the philosophical intricacies here, beginning with properly interpreting Peirce’s original views (which tend towards a sort of panpsychism) and the contemporary discussion about function (Searle [25], for example, approaches function as being “observer dependent”). My view follows modern biosemiotic receptions of Peirce’s framework.
One important consequence of the physiosemiotic view is how we conceptualize the “noise” of the Shannon approach. Peirce starts out from the assumption of fundamental randomness of the world (his “tychism”) and asks how regularities arise (to which he often refers as “habits”). Considering the channel, we can conceptualize this as a correlation between states of the sender and receiver. Then, “noise” is an exogenous causal impact on the communication which is not correlated at all with the causal process connecting these two states and results in lowering their correlation. Once the meaning of the message becomes blurred, the resulting response of the receiver becomes randomized (in Peircean terms, exogenous noise reduces the potential for the formation of “habit” in sender-receiver interaction).
Once noise becomes dominant, the two states become statistically independent. Then, referring to the receiver as system X and the sender as system Y, the joint distribution satisfies $p(X,Y) = p(X)\,p(Y)$, such that the mutual information I between the states of the two systems is zero:
$$ I_p(X,Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} = 0 \tag{1} $$
This can be expressed in Shannon entropies [26]:
$$ I_p(X,Y) = H(X) + H(Y) - H(X,Y) \tag{2} $$
Or in terms of conditional entropies:
$$ I_p(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) \tag{3} $$
That means that whereas the Shannon view distinguishes between message and channel, in the Peircean view the channel is the message (paraphrasing McLuhan’s famous dictum, “the medium is the message”), and there are simply different degrees of correlation between the states of X and Y. In other words, there is no separation between “noise” and “message” on the part of the receiver, who, unlike the external observer, does not know what is “noise” and what is “message” (to be able to do so, he would already need to know the message). Further, (3) makes clear that degrees of correlation do not imply any causal direction, which is important when further exploring the physical aspects. These correlations constitute in precise terms what Peirce denotes as “habits”, and there would be no separation between “message” (existing somewhere in a Platonic domain as an undisturbed phenomenon) and “noise”.
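To make Equations (1)–(3) concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument) that computes mutual information for two toy joint distributions: one representing a Peircean “habit” (perfect correlation between sender and receiver) and one representing pure noise (statistical independence). The distributions are invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) in bits; terms with p = 0 are dropped."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_xy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), Eq. (2), for a joint distribution p_xy."""
    p_x = p_xy.sum(axis=1)   # marginal of the receiver X (rows)
    p_y = p_xy.sum(axis=0)   # marginal of the sender Y (columns)
    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# A noiseless "habit": states of X and Y perfectly correlated.
p_correlated = np.array([[0.5, 0.0],
                         [0.0, 0.5]])
# Pure noise: X and Y statistically independent, p(x,y) = p(x)p(y).
p_independent = np.array([[0.25, 0.25],
                          [0.25, 0.25]])

print(mutual_information(p_correlated))   # 1.0 bit
print(mutual_information(p_independent))  # 0.0 bits
```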
However, this raises an intricate issue: if we approach “habit” as mutual information, who is the interpreter of that information? Clearly, as stated, we implicitly posit a third party, that is, an observer who measures the statistical correlation between X and Y. Indeed, mutual information is a concept that is entirely independent of whether system X is capable of reflexively observing its correlation with Y. If we refer to physical entities in general, mutual information can indicate causal interdependence: hence statistical inference, as in econometrics, is a way to identify causes. However, as long as we stay in the Shannon framework, we cannot subsume the causal force of the message under the causal mechanisms that work in the transmission process. If the observer infers a causal mechanism from mutual information, she presents a causal hypothesis that explains the correlation, which is not part of the message. The question is whether there is a corresponding notion that does not imply reference to an external observer. This is the notion of function, if employed reflexively. That means that we do not refer to functions imposed exogenously on the correlated systems X and Y, but assume that the mutual information feeds back on the underlying causal interdependencies in terms of sustaining or strengthening them, without intermediation by an observer.
Functional explanations state that a certain phenomenon exists because of its effects, as in the standard example of the heart, which has the function of pumping blood through the organism [27,28]. Peirce already pointed out that this means referring to a type of outcome as a causal force, and not just to a particular event as efficient cause [16]. This type of outcome could be achieved by various other means and relates to more general and more abstract frames of functions. The heart has the function of pumping blood, the function of blood is to transport oxygen to all parts of the organism, this has the function of sustaining metabolism, and so on. The debate about functionalism in the analytical philosophy of mind has shown that we can approach this in static terms but would need to take the entire reference frame of the specific function as given, as in the example of the heart [29]. If we endogenize this reference frame, the only alternative seems to be evolutionary and selectionist explanations, commonly dubbed “etiological”. The paradigmatic approach is teleosemantics [30]. Teleosemantics comes close to the Peircean view [31] if we consider Peirce’s distinction between immediate, dynamic and final interpretants [16], with the final interpretant being the one which represents what we might refer to as fully revealed information, or “truth”, which in the context of functional explanations is tantamount to optimal or ideal functioning.
The evolutionary approach implies that information is a process concept, and not a “thing”: information grows, or, evolvability is what defines information [32]. The common critique directed at evolutionary explanations is that they refer the value of information only to past selection events. However, this only applies to specific versions of evolutionary theory. If we include exaptation as a major evolutionary force, the notion of function includes the emergence of novelty independent of selection history [33]. That means that information is crucially dependent on context, such that any form of contextual change can change the function, even radically. Most importantly, this also applies to a purely random variation of the original function, which in a different context may prove decisive for further change, even though at the moment at which it occurs it is without any function. Therefore, evolution is both past- and future-oriented: the future unfolding of the incipient function via ongoing actions of the interpretant creates a directedness of evolution, with the final interpretant as the vanishing point [34].

2.2. Autonomous Agents and the Physical Economy of Information

Now, if we approach semiosis in a dynamical form, as Peirce also did in distinguishing stages of evolving interpretants, we recognize the possibility that functions may be fulfilled in diverse ways and with different degrees of proper performance: a rabbit may be too slow in processing the snake signs and running away. Signs may be erroneously interpreted (the snake is taken for a mouse). Why does this matter? This is the point where economics comes into play. We only need to introduce the notion that information processing is costly. Indeed, this is the fundamental assumption in mainstream information economics which should be preserved in the otherwise dissenting view presented here.
There are two aspects that loom large. The first is that transmitting, storing and managing information comes with a cost. The second is that there are costs of inadequate functioning. Both can be combined in the notion of “efficiency”, since this relates inputs and outcomes. Efficiency refers to achieving the best possible functioning at the lowest cost, which is a core economic concept. Interestingly, this concept has been received in evolutionary and behavioural ecology, referring to measures that are not anthropomorphic, such as market prices as a measure of cost, but which are physical in a most general sense, that is, referring to the energetic costs of information processing, such as in the context of foraging, and the energetic gains for maintaining metabolism, survival and, ultimately, reproduction [35].
We can generalize these observations by introducing the concept of the “autonomous agent” (AA) following Kauffman [20], which is more specific than the uses of the term in the AI literature in explicitly referring to metabolism. I define an autonomous agent by the following properties:
  • An AA is an individual demarcated by physical boundaries between environment and internal systemic states. Environmental states are only accessible via “observations” (or: “measurements”).
  • The AA actively reproduces these boundaries and pursues the goal of sustaining its existence through time, involving a distinction between “self” and “non-self” (e.g., the immune system).
  • For achieving this goal, the AA operates a metabolism, i.e., manages flows of material and energetic throughputs to generate work. Observation is a kind of work.
  • These operations require the physical processing of information generated by observations, which is necessary for obtaining and processing the resources needed by the metabolism and, generally, for the proper functioning of the AA.
We can combine the notions of metabolism and information processing in the concept of “work”: we define an AA as a physical entity that generates physical work directed towards the goal of maintaining its existence, and that is therefore “autopoietic”, i.e., does not depend on goal assignments by an external agent. This distinguishes all kinds of devices from AAs. Yet, the general concept of work refers to both. This concept is complex and certainly crucial for economic information, since implicitly most economic uses of information relate to the capacity to work, such as in the production function. However, after the demise of the labour theory of value, economics has never produced a concise approach to “work”.
The relationship between work and information becomes apparent if we consider the distinction between the release of free energy via work and via dissipation as heat [36], pp. 31ff. Heat, by definition, means to “burn information”, in the sense of randomizing the movements of molecules in the environment. Work involves negentropic and hence informational states in two ways [34], p. 326ff, [37]: first, work is enabled via constraining the release of energy (think of positioning a wheel in a waterfall), and second, work thereby generates ordered changes of the environment (such as lifting a weight counteracting gravity). The capacity of an AA to generate work depends on its available free energy, which is determined by the available part of its internal energy and external energy inflows. At the same time, the capacity to work is determined by the information stored in the AA, which is tantamount to the constraints on the release of free energy and can be measured as Shannon information. This relationship has already been recognized by Robert Ayres [10], p. 37ff, in the simplest form:
$$ H = \frac{A}{T_0} \tag{4} $$
with H the Shannon entropy stored in the AA and A its potential to generate work, i.e., its free energy. In recent research on the thermodynamics of information, this relationship has been explored much more deeply and systematically (for the details on the following, see [26]). For non-equilibrium systems such as AAs, we can establish a direct connection to mutual information. However, we introduce an important change: the system X (the AA) does not correlate directly with the environment Y, but with an observation O. Considering the fundamental equality between physical and informational entropy, $S_p = kH(X)$, we can determine the change of physical entropy resulting from an observation as
$$ \Delta S_O = k\,\big(H(X|O) - H(X)\big) = -k\,I(X,O) \tag{5} $$
which allows one to determine the change of non-equilibrium free energy resulting from the observation:
$$ \Delta A = -T\,\Delta S_O = kT\,I(X,O) \tag{6} $$
This establishes the basic relation between energy, work and information: the value of information is the free energy gained, from which work can be generated.
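As a numerical illustration of Equations (5) and (6), the following sketch computes the free energy gained from an imperfect observation. The joint distribution of system state and observation, and the temperature, are hypothetical example values, not taken from the paper.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_nats(p):
    """Shannon entropy in nats; terms with p = 0 are dropped."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information_nats(p_xo):
    """I(X;O) in nats for a joint distribution over system state and observation."""
    p_x = p_xo.sum(axis=1)
    p_o = p_xo.sum(axis=0)
    return entropy_nats(p_x) + entropy_nats(p_o) - entropy_nats(p_xo.ravel())

# Joint distribution of system state X and observation O (illustrative numbers):
# an imperfect measurement that is right 90% of the time.
p_xo = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

T = 300.0                                # heat bath temperature in kelvin
I_xo = mutual_information_nats(p_xo)     # I(X,O)
delta_S = -k_B * I_xo                    # Eq. (5): entropy drop from observing
delta_A = k_B * T * I_xo                 # Eq. (6): free energy gained

print(f"I(X,O) = {I_xo:.3f} nats; dS_O = {delta_S:.2e} J/K; dA = {delta_A:.2e} J")
```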
This view defines the fundamental economics of AAs in purely physical terms, as an “autopoietic heat engine” (Figure 2). This diagram was originally designed by Timothy Garrett [38] with reference to economic systems in toto, but it directly applies to AAs as well.
The diagram describes what Kauffman refers to as a “thermodynamic work cycle”. The first important insight gained here is that we must neatly distinguish between two fundamentally different aspects of the relations between the AA and the world: one is the world as “surroundings”, that is, a thermodynamic sink ($T < T_{AA}$) without which the generation of work is physically impossible, following the laws of thermodynamics; the other is that, for sustaining its existence, the physical world is the “environment”, that is, the set of resources that are required by the AA metabolism. At a certain temperature $T_{AA}$ and pressure p, the boundary between AA and environment determines ΔA, the available energy in the environment that can be harnessed via the transfer of material and energy (apart from photosynthesis, mostly material) necessary to maintain the metabolism. This is exclusively determined by the internal AA operations, and hence depends on its information processing or “observation” in the previously established sense: the environment is physical, but at the same time semiotically determined, or, it is that part of the world that is functional in relation to the AA. In biological ecology, this is the “niche” [39]. The same physical world allows for a potentially unlimited number of “environments”, i.e., niches, relating to various types and even individuals of AAs. The only binding constraint is the general maximum availability of free energy flow per unit of time and space which could be harnessed for physical work, including the autopoietic kind and the activity of observation.
The next important insight is that autopoietic heat engines manifest a physical tendency to grow, depending on the efficiency of energetic transformations and of the work in relation to goal fulfilment. In detail, the inflow with rate a makes available energy accessible depending on an internal parameter of transformational capacity α (which, most generally, can be conceived as embodied states of information). In addition, the physical transformations operate with a thermodynamic efficiency ε to produce work. We conceptualize work as being exclusively devoted to maintaining the AA (meaning that the AA does not fulfil any exogenously imposed function). Work changes the availability of energy in the environment. Hence, I assume, following Garrett, that this changes with the rate of work. This closes the autopoietic causal circuit. We conclude that the system will grow in the sense that the inflow with rate a will grow at a rate determined by the parameters α and ε:
$$ \frac{da}{dt} = \alpha\,\frac{d(\Delta A)}{dt} = \alpha\varepsilon a \tag{7} $$
where we can define the product αε as the rate of return to autopoietic work.
Growth implies that more resources are consumed, such that in the case of limited availability there are also limits to growth: a typical pattern emerging from such constraints is the logistic growth curve, which is a widely applicable general characterization of growth processes in nature and the economy [40].
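The following sketch integrates the growth relation of Equation (7), together with an illustrative logistic damping term for the resource-constrained case. The parameter values (alpha, eps, the carrying capacity) are hypothetical, chosen only to display the two regimes.

```python
import numpy as np

# Minimal sketch of the autopoietic growth circuit of Eq. (7), with a
# logistic correction once environmental resources bind. All parameter
# values below are hypothetical.
alpha, eps = 0.5, 0.4        # transformational capacity and thermodynamic efficiency
r = alpha * eps              # rate of return to autopoietic work
carrying_capacity = 100.0    # maximum sustainable energy inflow rate

def simulate(a0=1.0, dt=0.1, steps=300, logistic=True):
    """Integrate da/dt = r*a, optionally damped by resource limits."""
    a = a0
    path = [a]
    for _ in range(steps):
        growth = r * a
        if logistic:                      # limited availability of resources
            growth *= (1 - a / carrying_capacity)
        a += dt * growth
        path.append(a)
    return np.array(path)

exponential = simulate(logistic=False)    # unconstrained: a(t) = a0 * exp(r t)
logistic_path = simulate(logistic=True)   # saturates at the carrying capacity
print(exponential[-1], logistic_path[-1])
```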
Growth is also essential when considering a population of AAs, since it drives competition over resources and hence creates the conditions for natural selection of AAs: if there is variation across AAs in exploiting resources, there is also variation in efficiency and hence in growth. That means that there will be a change in the composition of the population, with a higher share of the most efficient AAs at the carrying capacity. If we add the possibility of reproduction of AAs, we arrive at the simplest conceptual model of biological evolution. In this model, information accumulating via natural selection would be directly referred to the capacity to harness energetic resources [41]. The question is how we can refer this to the general relationship between mutual information, entropy and energy established in (4) and (5).

3. The Physiosemiotic Approach to Semantic Information

In this section, I will present a more formal approach to physiosemiosis that builds on the important contribution of Kolchinsky and Wolpert [21] (KW). The KW approach directly ties up with the analysis of autonomous agents. However, KW explicitly exclude evolutionary considerations because they aim at constructing a general physical concept of semantic information and because they adopt the common critique of etiological explanations. In the second subsection, I will show how the KW approach can be plugged into the semiotic triad and thereby can be interpreted in the Peircean evolutionary framework. This results in a new physiosemiotic concept of semantic information.

3.1. Semantic Information and Thermodynamics

In the following, I present only the bare bones of the KW approach, referring the reader to the original contribution for more detail. I focus only on what KW call “stored information” and leave out the more complex discussion of “observed information”. This suffices to present the principles and fits the previous discussion of AAs as embodying stored information that results from selection and determines the capacity to harness energy.
The central concept is that of a viability function, which is measured in terms of negentropy over a fixed state space:
$$ V(p_{X_t}) := -S(p_{X_t}) = \sum_{x_t} p(x_t)\,\log p(x_t) \tag{8} $$
In principle, this is a simple complexity measure: the reference point for assessing the complexity of an AA would be the purely random distribution of its constituent molecules in a constrained space. Clearly, that would also imply that there is no distinction between those possible distributions with high negentropy which have no autopoietic functional capacity and those which have. KW present a solution to that quandary. They distinguish between the system X and its environment Y, with the probability distributions of the respective random variables X and Y, $p_X$ and $p_Y$. They define “syntactic information” as the mere correlation of states between Y and X, with the joint distribution $p_{X,Y}$ and the mutual information at time 0:
$$ I_p(X_0,Y_0) = \sum_{x_0,y_0} p(x_0,y_0)\,\log\frac{p(x_0,y_0)}{p(x_0)\,p(y_0)} \tag{9} $$
I note already here that in this formulation there is not yet a distinction between the surroundings and the environment, i.e., between the world as the set of hidden causes and the world as being informationally accessible for the AA, i.e., the world as observed. This is crucial for my later reasoning, as we refer the viability function to either of the two when analysing AA-world relations [42].
Stored information can be measured as mutual information. However, for assigning informational value, this information must relate to the viability function. KW introduce the notion of “intervention”, which simply means scrambling the mutual information, with the extreme of pure randomization, i.e., the complete loss of any information. As we discussed previously, and as must be emphasized again, this suggests an external observer who defines and implements that intervention (by which the distinction between surroundings and environment is neutralized).
KW arrive at a measure of the total value of stored information:
$$ \Delta V^{stored}_{tot} := V(p_{X_t}) - V(\hat{p}^{\,full}_{X_t}) = S(\hat{p}^{\,full}_{X_t}) - S(p_{X_t}) \tag{10} $$
where the hat indicates the scrambling, which is “full” when evaluating total value. However, what is most interesting is partial scrambling, which means deleting only a part of the stored information. Then we can compare the results for various scrambles and thereby assess the value of the scrambled part in terms of the different values of the viability function.
KW specify this intervention in terms of a coarse graining function ϕ(y). This function corresponds to the notion of “observation” introduced in the previous section. The interaction between environment and system is conceived as a communication channel in the sense that certain states of the environment induce certain states of the system, which is measured in terms of the conditional probability p(X|Y), such that the intervened channel would be written $\hat{p}^{\phi}_{X_0|Y_0}$.
We can then conceive the coarse graining function as defining which distinctions the system can make about the environment, resulting in a specific distribution of conditional probability:
$$ \hat{p}^{\phi}(x_0|y_0) \equiv p(x_0|\phi(y_0)) := \frac{\sum_{y'_0:\,\phi(y'_0)=\phi(y_0)} p(x_0,y'_0)}{\sum_{y'_0:\,\phi(y'_0)=\phi(y_0)} p(y'_0)} \tag{11} $$
KW use this formula to assess the effects of different coarse graining functions, with the extreme points of full scrambling (where mutual information is zero) and the identity mapping.
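A small computational sketch of Equation (11): the function below builds the intervened joint distribution for an arbitrary coarse graining ϕ of the environment. The example joint distribution is hypothetical, and the construction is my own reading of the formula.

```python
import numpy as np
from collections import defaultdict

def coarse_grained_channel(p_xy, phi):
    """Return the intervened joint distribution of Eq. (11): the system state x
    is made conditionally dependent only on the partition cell phi(y)."""
    n_x, n_y = p_xy.shape
    p_y = p_xy.sum(axis=0)
    # Pool p(x,y) and p(y) within each cell of the coarse graining.
    cell_xy = defaultdict(lambda: np.zeros(n_x))
    cell_y = defaultdict(float)
    for y in range(n_y):
        cell_xy[phi(y)] += p_xy[:, y]
        cell_y[phi(y)] += p_y[y]
    # Reassemble the joint: p_hat(x,y) = p(x | phi(y)) * p(y).
    p_hat = np.zeros_like(p_xy)
    for y in range(n_y):
        p_hat[:, y] = cell_xy[phi(y)] / cell_y[phi(y)] * p_y[y]
    return p_hat

# Illustrative joint over 2 system states (rows) and 4 environmental states.
p_xy = np.array([[0.20, 0.15, 0.05, 0.02],
                 [0.05, 0.10, 0.20, 0.23]])

identity = lambda y: y          # keeps all distinctions (no intervention)
full_scramble = lambda y: 0     # one cell: all mutual information destroyed
print(coarse_grained_channel(p_xy, full_scramble))   # product distribution
```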
KW now define an optimum criterion. The original viability function is indeterminate regarding the value of the various distributions with the same negentropy. However, if we run a series of interventions with varying coarse graining functions, we can separate those functions which affect the viability of the system from those which do not. This allows one to define the optimum intervention as that coarse graining function which minimizes the amount of syntactic information while maintaining the same value of the viability function:
$$ \hat{p}^{\,opt}_{X_0,Y_0} \in \operatorname*{arg\,min}_{\hat{p}^{\phi}:\,\phi\in\Phi} I_{\hat{p}^{\phi}}(X_0,Y_0) \quad \text{s.t.} \quad V(\hat{p}^{\phi}_{X_t}) = V(p_{X_t}) \tag{12} $$
KW can now distinguish syntactic information and semantic information in terms of the mutual information of the optimal intervention:
$$ S^{stored} := I_{\hat{p}^{\,opt}}(X_0,Y_0) \tag{13} $$
Further, KW define the efficiency of the system as
$$ \eta^{stored} := \frac{S^{stored}}{I_p(X_0,Y_0)} \tag{14} $$
I add for later discussion that this implies that the highest efficiency indicates no slack in the information stored in the AA, i.e., that this information is only about causally relevant mutual information. At the same time, however, KW point to the possibility of redundancy if there is more than one optimal intervention: that means that the AA would have alternatives with the same viability value to choose from.
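To show how Equations (12)–(14) could be operationalized, here is a brute-force sketch for tiny state spaces: it enumerates all coarse grainings of Y, keeps those that preserve the viability value, and selects the one with minimal mutual information. The dynamics (a crude “flee when threat” rule) and all distributions are invented for illustration, and the sketch reuses mutual_information() and coarse_grained_channel() from the earlier snippets.

```python
import numpy as np
from itertools import product as iproduct

def viability(p_xy, dynamics):
    """V = -S(p_{X_t}), Eq. (8), where p_{X_t} follows from p(x_t | x_0, y_0)."""
    p_xt = np.einsum('txy,xy->t', dynamics, p_xy)
    p = p_xt[p_xt > 0]
    return np.sum(p * np.log2(p))          # negentropy in bits

n_x, n_y = 2, 4
# dynamics[x_t, x_0, y_0]: the AA ends in a definite (low-entropy) state iff
# its initial state "matched" the environment; otherwise it randomizes.
dynamics = np.zeros((n_x, n_x, n_y))
for x0, y0 in iproduct(range(n_x), range(n_y)):
    matched = (x0 == (y0 >= 2))            # only the threat/no-threat split matters
    dynamics[:, x0, y0] = [1.0, 0.0] if matched else [0.5, 0.5]

p_xy = np.array([[0.20, 0.15, 0.05, 0.02],
                 [0.05, 0.10, 0.20, 0.23]])
V_target = viability(p_xy, dynamics)

best = None
for labels in iproduct(range(n_y), repeat=n_y):   # all coarse grainings of Y
    phi = lambda y, labels=labels: labels[y]
    p_hat = coarse_grained_channel(p_xy, phi)
    if viability(p_hat, dynamics) >= V_target - 1e-9:   # Eq. (12) constraint
        mi = mutual_information(p_hat)
        if best is None or mi < best[0]:
            best = (mi, labels)

S_stored, phi_opt = best
print(f"semantic (stored) information: {S_stored:.3f} bits, phi = {phi_opt}")
```

In this toy setup, the optimal coarse graining groups the four environmental states into the two cells that matter for viability, discarding the functionally neutral distinctions: a concrete instance of separating semantic from merely syntactic information.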
Based on this, KW add the thermodynamic dimension. This is grounded in the generalized Landauer principle, which states that any process that reduces the entropy of a system by n bits must export at least, and can absorb at most, $n\,k_B T \ln 2$ of energy, coupled to an environment (heat bath) of temperature T. We can refer this to the notion of work as deployed by an AA: the AA expends work to gain information about the environment, which costs at least $k_B T \ln 2$ of energy per bit. If, as assumed previously, this work serves to harness energy, we end up with a simple measure of thermodynamic efficiency. Information that does not result in increasing energy inflows would be wasteful, and only information with a net gain would be functional. KW define the costs of acquiring new mutual information as:
$$ W_{min} = k_B T \ln 2 \; I_p(X_0;Y_0) \tag{15} $$
If we combine this perspective with the concept of viability value as defined by Equation (8), KW arrive at the cost/benefit ratio:
$$ \kappa^{stored} = \frac{\Delta V^{stored}_{tot}}{I_p(X_0;Y_0)} = \frac{S(\hat{p}^{\,full}_{X_t}) - S(p_{X_t})}{I_p(X_0;Y_0)} \tag{16} $$
called the “thermodynamic multiplier”. If this is larger than one, acquiring the information creates more energetic benefits than costs. Finally, KW relate the cost/benefit ratio to semantic efficiency:
$$ \kappa^{stored} = \eta^{stored}\,\frac{\Delta V^{stored}_{tot}}{S^{stored}} \tag{17} $$
such that a lower semantic efficiency reduces the thermodynamic multiplier.
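A quick arithmetic check of Equations (15)–(17) with invented numbers, confirming that the two expressions for the multiplier coincide:

```python
import numpy as np

# Toy numbers illustrating Eqs. (15)-(17); all values are hypothetical.
I_syntactic = 2.0      # I_p(X_0;Y_0), total stored correlation in bits
S_stored = 1.0         # semantic (viability-relevant) part, Eq. (13)
delta_V_tot = 1.5      # total viability value of stored information, Eq. (10)

eta = S_stored / I_syntactic                 # semantic efficiency, Eq. (14)
kappa = delta_V_tot / I_syntactic            # thermodynamic multiplier, Eq. (16)
kappa_check = eta * delta_V_tot / S_stored   # equivalent form, Eq. (17)

k_B, T = 1.380649e-23, 300.0
W_min = k_B * T * np.log(2) * I_syntactic    # minimal work cost, Eq. (15)

print(f"eta = {eta}, kappa = {kappa} (= {kappa_check}), W_min = {W_min:.2e} J")
# kappa > 1 would mean the information yields more viability benefit than it costs.
```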
Now, consider the following problem. KW assume that the coarse graining function is modified exogenously by the observer. How can we envisage that it is generated endogenously by the AA? The only way is to assume an evolutionary process by which alternative ϕ(y) are generated as “mutations” which are then tested against the environment. There are two ways we can envisage this. The first is that the AA continuously tries out alternative ϕ(y), which is thermodynamically costly and involves the risk of failure. Therefore, another option is the “vicarious selection” of ϕ(y) [43], which would mean running alternative versions internally against a model of the world and applying an internally represented value function that relates to the viability function. I will discuss this in the next section.
The other option is to analyse evolutionary processes on the population level: so far, we have only considered single AAs. Here, it is straightforward to relate the various measures proposed by KW to the general model of the autonomous agent. This model did not specify how the parameter α is determined, which reflects the capacity of the AA to exploit the available energy in the environment. This capacity corresponds to stored information in the KW approach. It implies that on the population level we consider variants of AAs with diverse capacities to grow, such that, over time, there will be a change in the composition of the AA population: the most efficient AAs will tend to dominate.
That means, however, that we must refer the concept of information to the population level: even though stored information is a physical property of the individual AA, the information that is embodied in the AA is a population-level phenomenon, causally rooted in the unfolding sequences of variation, selection and retention of variants. This leads us to reassess the notion of efficiency. As said, maximum efficiency would imply that there is no more functionally neutral mutual information embodied: on the population level, that would correspond to the convergence of AA variants to one optimal type (or functionally equivalent forms). That would correspond to Fisher’s Fundamental Theorem on the population level: as a result of evolution, all variants would share the same information. However, this would imply that there is no longer any variation available that could possibly generate a further growth of information.
This observation points to the general argument that maximum efficiency would minimize the capacity to evolve, which requires a potential of random variation [44]. In principle, this can be realized by preserving a gap between $S^{stored}$ and mutual information, which can be activated in new coarse graining functions that would further increase semantic information. This would translate into retaining variance on the population level, thus ensuring evolvability. This important observation can be best elaborated in establishing the physiosemiotic framework.
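As a toy illustration of this population-level reading (my own sketch, with invented parameters), the following replicator-style simulation lets AA variants with higher semantic efficiency grow faster, showing how selection concentrates the population on the most efficient variant and thereby erodes the residual variance needed for evolvability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of population-level selection over coarse-graining "mutations":
# each variant i has a semantic efficiency eta[i] that translates into a
# growth rate; shares evolve by replicator dynamics. Values are illustrative.
eta = rng.uniform(0.2, 1.0, size=8)      # efficiencies of 8 AA variants
shares = np.full(8, 1 / 8)               # initial population composition
r_max = 0.2                              # scale of the rate of return alpha*eps

for t in range(200):
    fitness = r_max * eta                # growth rate rises with efficiency
    shares *= np.exp(fitness)            # discrete-time replicator step
    shares /= shares.sum()               # renormalize population shares

mean_eta = np.average(eta, weights=shares)
print("dominant variant:", eta.argmax(), "share:", round(shares.max(), 3))
print("residual variance of eta:", np.average((eta - mean_eta) ** 2, weights=shares))
```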

3.2. The Physiosemiotic Triad

I will now plug the KW concepts into the physiosemiotic framework and build bridges to the general model of the AA. I claim that the semiotic triad can be interpreted as a “unit” of semantic information, akin to the construct of Shannon information, which it substantially complements and enhances. More specifically, the triad shows the fundamental structures of information embodied in AAs. Figure 3 shows the basic correspondences.
The semiotic object is the environment Y. This environment is semiotically constituted in the biological sense of a niche. That means that we do not directly refer to the “world” here, but to the “surroundings” of the AA. The surroundings matter in two respects. First, as in Figure 1, they enable the physical generation of work by absorbing the entropy export. Second, they represent the “unknown unknowns”, that is, in terms of statistical inference, hidden causes of any kind of correlations between X and Y. X is the interpretant. In combining the general model with KW, I posit that the interpretant realizes the function of maintaining its existence, which, in KW, is captured by the viability function.
Now, the crucial difference between the physiosemiotic approach and KW is that KW approach interventions as exogenous in order to arrive at precise definitions of their various concepts and measures. Specifically, the coarse graining function is varied by the external observer to measure that share of syntactic information which is semantic information, i.e., to distinguish non-functional from functional mutual information. In the physiosemiotic framework, the coarse graining function is the sign or representamen. Signs emerge endogenously in the entire causal structure of semiosis, and their value is determined by the interpretant.
To capture this within the KW framework, we formulate the principle of bimodality: the actions of the AA in fulfilling its functions are determined in two modes. One is the channel in which causal impacts of the environment result in a specific correlation between states of the environment and states of the AA, which, however, is not yet reflected in causal assumptions, that is, not yet informationally compressed, thus still retaining many non-functional correlations. Think of the rabbit and the snake again: in the moment when the rabbit perceives a movement and a sound, there are many other motions and sounds in the environment simultaneously impinging on its state (as formulated earlier, “noise” and “message” in the Shannon sense are merged). The coarse graining function would select certain segments. If, following KW, the function minimizes costs, it would identify exactly those segments which identify a snake and thus trigger functional action, i.e., fleeing the scene. This is a sign of the presence of the snake.
As a result, and as already analysed in von Uexküll’s work on the “Umwelt” of animals [45], the environment is ultimately constituted by semiosis: snakes become objects (and not parts of snakes merged with a stone). The snake is real, but its ontological recognition is driven by semiosis. However, this conclusion only holds if we consider stored information $S^{stored}$. This is the information resulting from a series of interventions that have identified the optimum, i.e., the least costly state of information. In Peirce’s framework, this is the final interpretant. That means, as stated previously, that we must understand the semiotic triad as manifesting a process, not a state. How can we achieve that without exploding the frame?
My suggestion is that we must change our fundamental assumptions about the ontology of causation, following and extending Peirce’s ideas about the interplay of efficient and final causes in constituting the world as an ordered flow of phenomena [46]. This has been extensively elaborated by Terrence Deacon [34], building on earlier contributions in the biosemiotics literature, such as Stan Salthe’s work [47]. That means that we go back to the original Aristotelian distinctions [48]. This step is surprisingly simple if we ask the same question as originally asked by Aristotle: what does it mean to search for causes of observed phenomena, and in our case, more specifically, to ask for the causes of phenomena displayed by AAs? Clearly, considering the rabbit again, it does not suffice to refer to the efficient causal impact of physical media on its behaviour. The first step is that we must refer to the sign that triggers the fleeing response: coarse graining corresponds to the Aristotelian notion of a formal cause, as the sign categorizes the various efficient-causal impacts, and accordingly a type of efficient causes (and not their tokens) causes a type of action, i.e., fleeing. Yet, we must still ask why this specific relationship between sign and response is realized, and recurrently so (a “habit” in Peircean parlance). Here we must refer to function, and hence to an Aristotelian final cause.
As in the previous discussion of function, ultimately we must explain the function, which requires reference to a selection context. However, considering the case of exaptation, the final cause would not necessarily relate to a past selection history, but to the emergence of a new function “in being”, i.e., in the sense of future-directedness [49]. This is much emphasized by Deacon. Accordingly, we can directly apply the concept of final cause to physiosemiosis. This cannot be grasped in the KW formalism but is implicit in the notion of the optimal intervention as the final state of a series, i.e., we can think in terms of approximating interventions. In fact, this is implicit in the KW measure of efficiency, since this refers to that part of the mutual information at $t_0$ which is sufficient for viability at time τ.
In introducing the physiosemiotic approach I emphasized the pivotal role of the interpretant. However, we now realize that final causes and efficient causes relate to each other in the evolutionary emergence of information: the world is not just a “subjective” construct of the interpretant; what is constructed physiosemiotically as the environment relates to hidden causes in the world. As argued by KW, the stored information eventually is information about causes, and more specifically, efficient causes. That means that the series of interventions ultimately results in a picture of the world that transcends mere mutual information and embodies causal forces. This is exactly Peirce’s view, moving from a merely stochastic world to a world of “habits”, i.e., causal regularities. This matches classical positions in evolutionary epistemology, which claim that only evolutionary processes can generate true knowledge.
Finally, we must pick up the thread left loose in the previous section, where I argued that the coarse graining functions evolve on the population level (Figure 4). This idea was adumbrated by Peirce, who refers the notion of habit to “communities” [50]. In the biosemiotic context, this means that signs acquire meaning on the species level, whereas the individual level is that of “meaningless” variation. If we adopt this interpretation, we can conclude that signs and function are determined by final causes on the population level, whereas efficient causes operate on the individual level. By implication, signs and the related species term assume existence in the ontological sense: this reflects Peirce’s vigorous rejection of metaphysical nominalism (the complex discussion of the species concept in biology is highly relevant here [51]).

4. Consequences for Theorizing the Economic Agent: The Case of Neuroeconomics

In conclusion, I want to explore some consequences for economics proper. I concentrate on the question: what does it mean to treat economic agents as autonomous agents? Economics approaches agency as choice based on the processing of information. Is the approach suggested here compatible with the conventional economic model, given the centrality of concepts such as efficiency, or is it an alternative view?
Information plays a crucial role in the standard conception of the economic agent, which vacillates between various versions, with one extreme pole assuming a perfectly and completely informed rational agent, as in classical versions of general equilibrium theory, and the other extreme assuming “zero intelligence agents”, as in some models of artificial markets [7]. The shades in between are mainly defined via two basic versions: one is assuming simply quantitatively constrained information, such as its unequal distribution across agents, and the other is adding cognitive limitations, as in behavioural economics, which jeopardize the processing even of available information.
If we approach economic agents as AAs, we arrive at a different picture. I cannot explore the details here due to limited space, but only sketch the basics [52]. The starting point is to approach the economic agent as a physical entity; for simplicity, I reduce this to the human brain in the first place, although in the full analysis this must be widened to the body and the conjugated entities, such as cultural artefacts. The focus on the brain allows me to connect directly with the recently emerging field of neuroeconomics. Which insights can be gained for neuroeconomics once we approach the brain as the information processor of human AAs? The main insight is that the brain is an evolutionary physiosemiotic system.
In the following discussion, I concentrate on the narrow field of neuroeconomics, and not economics in general, as this would open too many topics at the same time. I look at one of the dominant paradigms of neuroeconomics, the so-called “good-based model” [53]. This is the clearest version of a general approach in neuroeconomics which aims at integrating the neurosciences and economics on the basis of the established neoclassical model of choice, i.e., employing concepts such as the utility function and optimization in building an embodied theory of subjective value. This model assumes that choice is based on a specific status of information, which refers to states of the world and assigned probabilities, but otherwise strictly follows the formal structure of the economic model. One important consequence is the systematic distinction between informational states, choice and action in a sequence that is linear and does not allow for any feedback from current action to previous stages of the decision process.
Recently, an alternative view has been formulated which matches some other versions of the basic neuroeconomic model [54], such as the drift-diffusion model [55]. This view approaches the brain as an evolutionary system, a perspective that we already introduced in the previous section. This view is anticipated in the long-neglected work by Friedrich Hayek, “The Sensory Order” [56], where Hayek approaches the brain as a homeostatic system, directly referring to the emerging general systems theory of his time. In the new approaches, the basic idea is that the brain constantly generates contextualized and competing action plans which are directly revised and evaluated during the flow of action [57,58,59]. This operates according to a selectionist logic, such that eventually one action plan will end up determining the next action.
The important point is that this alternative model is explicitly grounded in phylogenetic considerations, referring to the general model of foraging in an uncertain environment, and thus directly tying up with the general model of the AA. That means that there is a necessary connection between the requirements of harnessing resources and the way the physical structure of information processing in the brain evolved phylogenetically. This view has been most systematically elaborated in Cisek’s “affordance competition hypothesis” [60,61]. The adaptationist argument refers to the general scenario of foraging in which the human scavenger is constantly roaming an environment, facing two challenges. The first is that she never knows which directions end up with better action outcomes, which is only ascertained once action is taken, and hence other options are given up. It is crucial to recognize that this relates to a fundamental problem in the theory of information: if an agent must choose between different options that are uncertain, it does not know how to value the lost alternatives. Even assigning probabilities does not help, because the state space as such is partially unknown. This problem is compounded when considering the multi-dimensionality of uncertainty, most basically, as the second challenge, the uncertain presence of predators and other threats to survival. Therefore, the human AA operates on the basis of “vicarious selection”, i.e., the internal evolutionary selection of action plans while acting. A core determinant is the capacity to interpret environmental cues, i.e., signs. This results in representing the environment as a set of affordances for action, mediated by the signs.
Accordingly, we can employ the physiosemiotic framework of analysing the AA to describe the economic agent. The economic aspect is defined by the energetics of information processing, which in turn determine the universal selective forces that impact the AA. This can be regarded as a radically materialist or physicalist view of the economic agent, which will immediately invite the straightforward critique that many actions, even in the phylogenetically older stages, do not involve energetics as in the foraging scenario, and that these become irrelevant once energetic constraints are no longer binding in most types of actions, as in affluent modern societies. However, this critique suffers from a misunderstanding. The phylogenetic argument shows why fundamental structures of economic agency have evolved as a result of the energetics and economics of information processing [62]. These structures apply to all kinds of behaviour, whether explicitly directed at harnessing energy or not. This has immediate consequences for choosing among alternative models of economic agency: we conclude that the good-based model, vindicating the standard neoclassical model, should be rejected in favour of models that integrate information processing, choice and action.
Recently, Karl Friston and collaborators [63,64] developed a “free energy theory of the brain” which is strongly compatible with the view developed here and offers a formal avenue for combining the evolutionary approach with the thermodynamics of information as sketched in the previous section. However, they use statistical thermodynamics only in the formal sense, i.e., they eschew direct references to energetic considerations. Yet, their approach is relevant here because a core hypothesis is that the brain minimizes variational free energy in the sense of minimizing “surprise”, thus extending and improving on the rich literature on the “Bayesian brain” [65]. We can refer this to the KW notion of efficiency. The theoretical boundary of the most efficient state of embodied information would be one in which the “surplus” of mutual information over stored information is minimal. However, the free energy framework differs from the KW framework in one crucial aspect that I already highlighted previously: the “observer” is endogenous, i.e., it is the AA itself, and there is no direct relationship between the external world and the inner states of the AA.
The AA connects to the world via sensory inputs and via its actions, which relate to the former in two ways: first, actions respond to sensory inputs and generate sensory inputs, and second, actions select the conditions under which sensory inputs are produced. This is called “active inference”—a view long presaged by the developmental psychologist Jean Piaget, a contemporary of Hayek who also took up General Systems theory [66]. As Friston and collaborators have recently elucidated [42], this view is “pragmatist” in a principled and radical way: models of the world are embodied in the brain, and they are enacted via action. Now, if one approaches the information-generating process in statistical terms, this amounts to implementing a form of Bayesian learning which, however, differs from most “Bayesian brain” approaches in being non-representational: distinguishing between two types of statistical models, the recognition density and the generative model, the brain does not aim at producing an accurate representation of the world, but at reproducing an action pattern that strives to minimize surprise. The connection to the KW approach is straightforward here, because minimizing surprise means minimizing the entropy of the state of the organism, which, however, is now seen as embodying observations that are parametrized via the AA’s own actions. This can be formally treated via the free energy principle.
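As a purely illustrative sketch of this loop (not the formalism of [63,64]; all variances, gains and the “preferred” state are hypothetical), consider a minimal agent that both updates a belief and acts on the world by descending the same free energy gradient:

```python
import numpy as np

# Minimal active-inference loop under a Gaussian (Laplace-style) model.
# The agent does not copy the world: it descends the gradient of a free
# energy F(mu; o) both in its belief mu (perception) and in its action
# (which changes how observations o are generated).

prior_mean, prior_var = 25.0, 1.0   # generative model: expected hidden state
obs_var = 1.0                       # expected sensory noise
world = 32.0                        # true external state, unknown to the agent
mu, action = 0.0, 0.0
k_mu, k_a, leak = 0.1, 0.1, 0.9     # gradient gains; leak crudely damps action

rng = np.random.default_rng(1)
for t in range(400):
    o = world + rng.normal(0.0, 0.1)             # sensation
    # F = (o - mu)^2/(2*obs_var) + (mu - prior_mean)^2/(2*prior_var) + const
    dF_dmu = -(o - mu) / obs_var + (mu - prior_mean) / prior_var
    mu -= k_mu * dF_dmu                          # perception: minimize F over beliefs
    dF_da = (o - mu) / obs_var                   # assuming do/da = 1: action moves the world
    action = leak * action - k_a * dF_da         # action: make sensations match beliefs
    world += 0.1 * action                        # acting restructures the niche

print(f"belief mu = {mu:.1f}, world = {world:.1f}")  # both end up near 25
```

Note the asymmetry with standard Bayesian updating: the agent does not converge on an accurate estimate of the initial world state, but changes the world until its sensations conform to its generative model.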
We can relate the enacted generative models of the world to the signs in the physiosemiotic approach. Signs are actively inferred from a “noisy” world, and they motivate actions which result in the structuration of the world: the environment, or niche, is that generative model. The “world” in which humans strive to minimize “surprise” is thus semiotically constructed.
The free energy approach implies a radical reformulation of economic agency, since the concept of “utility” would be discarded as a general medium of optimization and replaced by free energy minimization or, even more generally, by safeguarding the low-entropy states of the human AA in the KW sense [67]. This is close to the view of the so-called “old institutionalists”, such as Thorstein Veblen, that human behaviour must be analysed in terms of “habits” and not in terms of rational choice [68]. This seems to suggest, however, that habit formation and optimization are opposing principles. The free energy approach shows that this is not a contradiction, as habits emerge from a continuous effort to impose order on the world, hence aiming at minimizing surprise in Friston’s sense. The institutionalists were informed by Peirce, and Peirce’s notion of finality is much closer to the notion of optimization than it seems. This opens the vista to a reconciliation of hitherto radically opposed positions in economics.
Further, if we look at the dynamic aspects of behaviour, the free energy approach has the important implication that choice is strongly contextualized. This is also emphasized in the evolutionary approaches to action generation: if the flow of actions aims at minimizing surprise, there is a continuous evaluation of contexts and implied goals, thus allowing for instantaneous revisions of preference orderings [69]. What counts is whether the flow of actions results in a stable and sustainable pattern, especially when it comes to social interaction, not whether it maximizes a de-contextualized, fixed and stable utility function over the space of goods. This pattern is embodied and enacted semiotically, such that we cannot reduce it to internal states of the brain [70].
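To illustrate the contrast with a fixed utility function (a hypothetical toy example of mine, not drawn from the cited studies), suppose option values are recomputed from the current context before every choice, so that a single environmental sign can reverse the preference ordering:

```python
import numpy as np

def contextual_values(context):
    """Values are re-derived from the current context rather than read off
    a fixed utility function over goods (all numbers hypothetical)."""
    food, shelter = 1.0, 0.4
    if context["predator_cue"]:          # a sign reinterprets the situation:
        food, shelter = 0.2, 1.5         # the preference ordering reverses
    return {"forage": food, "hide": shelter}

def softmax_choice(values, beta=3.0, rng=np.random.default_rng(2)):
    opts, v = zip(*values.items())
    p = np.exp(beta * np.array(v))
    p /= p.sum()
    return rng.choice(opts, p=p)

print(softmax_choice(contextual_values({"predator_cue": False})))  # usually "forage"
print(softmax_choice(contextual_values({"predator_cue": True})))   # usually "hide"
```

No fixed ordering over goods is ever consulted here; what is stable is the pattern of context-sensitive responses, not a utility function.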
Hence, there is no neat analytical boundary between the agent and the external world, since both are deeply semiotically integrated. Accordingly, we must radically rethink the concept of information: there is no flow of information from the world to the agent which is then processed and results in choices; rather, information is created by the actions of the agent. This actor-centredness of the concept of information follows from the foundational work on the thermodynamics of information that I presented in this paper.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

I am grateful to Winfried Nöth and Michael Harre for helpful comments.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Costanza, R.; Cumberland, J.H.; Daly, H.; Goodland, R.; Norgaard, R.B.; Kubiszewski, I.; Franco, C. An Introduction to Ecological Economics; CRC Press: Boca Raton, FL, USA; New York, NY, USA; London, UK, 2014.
2. Georgescu-Roegen, N. The Entropy Law and the Economic Process; Harvard University Press: Cambridge, MA, USA, 1971.
3. Jaynes, E.T. Comments on “Natural Resources: The Laws of Their Scarcity” by N. Georgescu-Roegen. Available online: http://bayes.wustl.edu/etj/articles/natural.resources.pdf (accessed on 26 April 2020).
4. Herrmann-Pillath, C. The Evolutionary Approach to Entropy: Reconciling Georgescu-Roegen’s Natural Philosophy with the Maximum Entropy Framework. Ecol. Econ. 2011, 70, 606–616.
5. Von Hayek, F.A. The Use of Knowledge in Society. Am. Econ. Rev. 1945, 35, 519–530.
6. Stiglitz, J. The Revolution of Information Economics: The Past and the Future; NBER Working Paper 23780; 2017. Available online: http://www.nber.org/papers/w23780 (accessed on 12 January 2021).
7. Mirowski, P.; Nik-Khah, E. The Knowledge We Have Lost in Information: The History of Information in Modern Economics; Oxford University Press: New York, NY, USA, 2017.
8. Osborne, M.J.; Rubinstein, A. A Course in Game Theory; MIT Press: Cambridge, MA, USA, 1994.
9. International Energy Agency. Digitalisation: Making Energy Systems More Connected, Efficient, Resilient and Sustainable. 2020. Available online: https://www.iea.org/topics/digitalisation (accessed on 30 April 2020).
10. Ayres, R.U. Information, Entropy, and Progress: A New Evolutionary Paradigm; AIP Press: New York, NY, USA, 1994.
11. Ayres, R.U.; Warr, B. The Economic Growth Engine: How Energy and Work Drive Material Prosperity; Edward Elgar: Cheltenham, UK; Northampton, MA, USA, 2009.
12. Hidalgo, C.A. Why Information Grows: The Evolution of Order, from Atoms to Economies; Penguin: London, UK, 2016.
13. Hausmann, R.; Hidalgo, C.A.; Bustos, S.; Coscia, M.; Simoes, A. The Atlas of Economic Complexity: Mapping Paths to Prosperity; MIT Press: Cambridge, MA, USA, 2013.
14. Floridi, L. Semantic Conceptions of Information. In The Stanford Encyclopedia of Philosophy (Winter 2019 Edition); Zalta, E.N., Ed.; 2019. Available online: https://plato.stanford.edu/archives/win2019/entries/information-semantic/ (accessed on 25 April 2020).
15. Kåhre, J. The Mathematical Theory of Information; Kluwer: Boston, MA, USA; Dordrecht, The Netherlands; London, UK, 2002.
16. Short, T.L. Peirce’s Theory of Signs; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2007.
17. Salthe, S.N. Meaning in Nature: Placing Biosemiotics in Pansemiotics. In Biosemiotics: Information, Codes and Signs in Living Systems; Barbieri, M., Ed.; Nova Publishers: New York, NY, USA, 2007; Chapter 10.
18. El-Hani, C.N.; Queiroz, J.; Emmeche, C. A Semiotic Analysis of the Genetic Information System. Semiotica 2006, 160, 1–68.
19. Deely, J. Physiosemiosis in the Semiotic Spiral: A Play of Musement. Sign Syst. Stud. 2001, 29, 27–47.
20. Kauffman, S.A. Investigations; Oxford University Press: Oxford, UK, 2000.
21. Kolchinsky, A.; Wolpert, D.H. Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics. Interface Focus 2018, 8, 20180041.
22. Queiroz, J.; Emmeche, C.; El-Hani, C.N. A Peircean Approach to ‘Information’ and Its Relationship with Bateson’s and Jablonka’s Ideas. Am. J. Semiot. 2008, 24, 75–94.
23. Volkenstein, M.V. Entropy and Information; Birkhäuser: Boston, MA, USA; Berlin, Germany, 2009.
24. Burch, R. Charles Sanders Peirce. In The Stanford Encyclopedia of Philosophy (Spring 2021 Edition); Zalta, E.N., Ed. Available online: https://plato.stanford.edu/archives/spr2021/entries/peirce/ (accessed on 6 January 2021).
25. Searle, J.R. The Construction of Social Reality; Free Press: New York, NY, USA, 1995.
26. Parrondo, J.M.R.; Horowitz, J.M.; Sagawa, T. Thermodynamics of Information. Nat. Phys. 2015, 11, 131–139.
27. Wright, L. Functions. Philos. Rev. 1973, 82, 139–168.
28. Perlman, M. Changing the Mission of Theories of Teleology: DOs and DON’Ts for Thinking About Function. In Functions in Biological and Artificial Worlds; Krohs, U., Kroes, P., Eds.; MIT Press: Cambridge, MA, USA, 2009; pp. 17–35.
29. Levin, J. Functionalism. In The Stanford Encyclopedia of Philosophy (Fall 2018 Edition); Zalta, E.N., Ed. Available online: https://plato.stanford.edu/archives/fall2018/entries/functionalism/ (accessed on 6 January 2021).
30. Macdonald, G.; Papineau, D. Introduction: Prospects and Problems for Teleosemantics. In Teleosemantics: New Philosophical Essays; Macdonald, G., Papineau, D., Eds.; Oxford University Press: Oxford, UK; New York, NY, USA, 2007; pp. 1–22.
31. Millikan, R.G. Biosemantics. In The Oxford Handbook of Philosophy of Mind; McLaughlin, B.P., Beckerman, A., Walter, S., Eds.; Clarendon Press: Oxford, UK, 2009; pp. 394–406.
32. Nöth, W. Charles S. Peirce’s Theory of Information: A Theory of the Growth of Symbols and of Knowledge. Cybern. Hum. Knowing 2012, 19, 137–161.
33. Gould, S.J. The Structure of Evolutionary Theory; Belknap: Cambridge, MA, USA; London, UK, 2002.
34. Deacon, T.W. Incomplete Nature: How Mind Emerged from Matter; Norton: New York, NY, USA, 2013.
35. Pyke, G.H. Animal Movements—An Optimal Foraging Theory Approach. In Encyclopedia of Animal Behavior, 2nd ed.; Choe, J.C., Ed.; Academic Press: Cambridge, MA, USA, 2019; pp. 149–156.
36. Atkins, P. Four Laws That Drive the Universe; Oxford University Press: Oxford, UK, 2007.
37. Collier, J. Causation Is the Transfer of Information. In Causation, Natural Laws and Explanation; Sankey, H., Ed.; Kluwer: Dordrecht, The Netherlands, 1996; pp. 279–331.
38. Garrett, T.J. Are There Basic Physical Constraints on Future Anthropogenic Emissions of Carbon Dioxide? Clim. Chang. 2011, 3, 437–455.
39. Odling-Smee, F.J.; Laland, K.; Feldman, M.W. Niche Construction: The Neglected Process in Evolution; Princeton University Press: Princeton, NJ, USA, 2003.
40. Smil, V. Growth: From Microorganisms to Megacities; MIT Press: Cambridge, MA, USA, 2019.
41. Annila, A.; Salthe, S.N. Physical Foundations of Evolutionary Theory. J. Non-Equilib. Thermodyn. 2010, 35, 301–321.
42. Ramstead, M.J.; Kirchhoff, M.D.; Friston, K.J. A Tale of Two Densities: Active Inference Is Enactive Inference. Adapt. Behav. 2020, 28, 225–239.
43. Dennett, D.C. Darwin’s Dangerous Idea: Evolution and the Meanings of Life; Simon & Schuster: New York, NY, USA, 1995.
44. Ulanowicz, R.E. Ecology, the Ascendent Perspective; Columbia University Press: New York, NY, USA, 1997.
45. Von Uexkuell, J.; Kriszat, G. Streifzüge durch die Umwelten von Tieren und Menschen. Bedeutungslehre; Rowohlt: Hamburg, Germany, 1956.
46. Hulswit, M. From Cause to Causation: A Peircean Perspective; Kluwer Academic Publishers: Dordrecht, The Netherlands; Boston, MA, USA, 2002.
47. Salthe, S.N. The System of Interpretance: Meaning as Finality. Biosemiotics 2009, 1, 285–294.
48. Falcon, A. Aristotle on Causality. In The Stanford Encyclopedia of Philosophy (Spring 2019 Edition); Zalta, E.N., Ed.; 2019. Available online: https://plato.stanford.edu/archives/spr2019/entries/aristotle-causality/ (accessed on 6 January 2021).
49. Stadler, B.M.R.; Stadler, P.F.; Wagner, G.P.; Fontana, W. The Topology of the Possible: Formal Spaces Underlying Patterns of Evolutionary Change. J. Theor. Biol. 2001, 213, 241–274.
50. Fuhrman, G. Rehabilitating Information. Entropy 2010, 12, 164–196.
51. Ereshefsky, M. Species. In The Stanford Encyclopedia of Philosophy (Fall 2017 Edition); Zalta, E.N., Ed. Available online: https://plato.stanford.edu/archives/fall2017/entries/species/ (accessed on 6 January 2021).
52. Herrmann-Pillath, C. Foundations of Economic Evolution: A Treatise on the Natural Philosophy of Economics; Edward Elgar: Cheltenham, UK, 2013.
53. Padoa-Schioppa, C. Neurobiology of Economic Choice: A Good-Based Model. Annu. Rev. Neurosci. 2011, 34, 333–359.
54. Hunt, L.T.; Hayden, B.Y. A Distributed, Hierarchical and Recurrent Framework for Reward-Based Choice. Nat. Rev. Neurosci. 2017, 18, 172–182.
55. Fehr, E.; Rangel, A. Neuroeconomic Foundations of Economic Choice—Recent Advances. J. Econ. Perspect. 2011, 25, 3–30.
56. Von Hayek, F.A. The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology; University of Chicago Press: Chicago, IL, USA, 1952.
57. Cisek, P. Cortical Mechanisms of Action Selection: The Affordance Competition Hypothesis. Philos. Trans. R. Soc. B Biol. Sci. 2007, 362, 1585–1599.
58. Verschure, P.F.M.J. Synthetic Consciousness: The Distributed Adaptive Control Perspective. Philos. Trans. R. Soc. B Biol. Sci. 2016, 371, 20150448.
59. Wiers, R.W.; Gladwin, T.W. Reflective and Impulsive Processes in Addiction and the Role of Motivation. In Reflective and Impulsive Determinants of Human Behaviour; Deutsch, R., Gawronski, B., Hofmann, W., Eds.; Routledge: New York, NY, USA, 2017; pp. 173–188.
60. Cisek, P. Making Decisions Through a Distributed Consensus. Curr. Opin. Neurobiol. 2012, 22, 927–936.
61. Pezzulo, G.; Cisek, P. Navigating the Affordance Landscape: Feedback Control as a Process Model of Behavior and Cognition. Trends Cogn. Sci. 2016, 20, 414–424.
62. Mobbs, D.; Trimmer, P.T.; Blumstein, D.T.; Dayan, P. Foraging for Foundations in Decision Neuroscience: Insights from Ethology. Nat. Rev. Neurosci. 2018, 19, 419–427.
63. Friston, K.J.; Stephan, K.E. Free-Energy and the Brain. Synthese 2006, 159, 417–458.
64. Friston, K. The Free-Energy Principle: A Unified Brain Theory? Nat. Rev. Neurosci. 2010, 11, 127–138.
65. Clark, A. Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behav. Brain Sci. 2013, 36, 181–204.
66. Piaget, J. L’Équilibration des Structures Cognitives: Problème Central du Développement; Presses Universitaires de France: Paris, France, 1975.
67. Henriksen, M. Variational Free Energy and Economics: Optimizing with Biases and Bounded Rationality. Front. Psychol. 2020, 11, 549187.
68. Hodgson, G.M. Reclaiming Habit for Institutional Economics. J. Econ. Psychol. 2004, 25, 651–660.
69. Herrmann-Pillath, C. Evolutionary Mechanisms of Choice: Hayekian Perspectives on Neurophilosophical Foundations of Neuroeconomics. Econ. Philos. 2020, 1–20.
70. Herrmann-Pillath, C. Mechanistic Integration of Social Sciences and Neurosciences: Context and Causality in Social Neuroeconomics. In Social Neuroeconomics: Mechanistic Integration of the Neurosciences and the Social Sciences; Harbecke, J., Herrmann-Pillath, C., Eds.; Routledge: London, UK, 2020; pp. 47–73.
Figure 1. Comparison between the Shannon information theory and semiotics.
Figure 2. Autonomous agents as autopoietic heat engines (modified after Garrett 2011).
Figure 3. The Kolchinsky and Wolpert (KW) approach plugged into the physiosemiotics triad.
Figure 4. Physiosemiosis and evolution.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
