Article

Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments

by Jordi Grau-Moya 1,2,3, Matthias Krüger 1,4 and Daniel A. Braun 1,2,5,*
1 Max Planck Institute for Intelligent Systems, Stuttgart 70569, Germany
2 Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
3 PROWLER.io, Cambridge CB2 1LA, UK
4 4th Institute for Theoretical Physics, Universität Stuttgart, Stuttgart 70569, Germany
5 Institute of Neural Information Processing, Universität Ulm, Ulm 89081, Germany
* Author to whom correspondence should be addressed.
Entropy 2018, 20(1), 1; https://doi.org/10.3390/e20010001
Submission received: 30 July 2017 / Revised: 17 December 2017 / Accepted: 18 December 2017 / Published: 21 December 2017

Abstract:
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.

1. Introduction

A number of recent studies have pointed out mathematical equivalences between thermodynamic systems described by statistical mechanics and information-processing systems [1,2,3,4]. In particular, it has been suggested that decision-makers with constrained information-processing resources can be described in analogy to closed physical systems in contact with a heat bath that seek to minimize energy [1]. In this analogy, decision-makers can be thought to act in a way that minimizes a cost function or, equivalently, that maximizes a utility function in lieu of an energy function. Classic decision theory [5,6] states that, given a set of actions $\mathcal{X}$ and a set of observations $\mathcal{O}$, the perfectly rational decision-maker should choose the best possible action $x^* \in \mathcal{X}$ that maximizes the expected utility $U(x)$:
$$x^* = \operatorname*{argmax}_x U(x) = \operatorname*{argmax}_x \sum_{o \in \mathcal{O}} p(o|x)\, V(o),$$
where $p(o|x)$ is the probability of the outcome $o$ given action $x$ and $V(o)$ indicates the utility of this outcome. However, maximizing the expected utility is in general a costly computational operation that real decision-makers might not be able to perform.
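For concreteness, a minimal sketch in Python (the action-outcome probabilities and outcome utilities are hypothetical, not from the paper) that evaluates Equation (1) by enumeration:

import numpy as np

# Hypothetical example: 3 actions, 2 outcomes.
# p_o_given_x[x, o] = p(o|x); V[o] is the utility of outcome o.
p_o_given_x = np.array([[0.9, 0.1],
                        [0.5, 0.5],
                        [0.2, 0.8]])
V = np.array([1.0, 2.0])

U = p_o_given_x @ V          # expected utility U(x) of each action
x_star = int(np.argmax(U))   # perfectly rational choice x*
print(U, x_star)             # [1.1 1.5 1.8] -> action 2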
Decision-makers that are unable to choose the best possible action x * due to a lack of computational resources have traditionally been studied in the field of bounded rationality. Originally proposed by Herbert Simon [7,8], bounded rationality comprises a medley of approaches ranging from optimization-based approaches like bounded optimality (searching for the program that achieves the best utility performance on a particular platform) [9,10,11] and meta-reasoning (optimizing the cost of reasoning) [12,13,14] to heuristic approaches that reject the notion of optimization [15,16,17]. Recently, new impulses for the development of bounded rationality theory have come from information-theoretic and thermodynamic perspectives on the general organization of perception-action-systems [1,3,18,19,20,21,22,23,24,25,26,27]. In the economic and game-theoretic literature, these models have precursors that have studied bounded rationality inspired by stochastic choice rules originally proposed by Luce, McFadden and others [2,28,29,30,31,32,33,34,35,36,37,38,39]. In most of these models, decision-makers face a trade-off between the attainment of maximum utility and the required information-processing cost measured as an entropy or relative entropy. The optimal solution to this trade-off usually takes the form of a Boltzmann-like distribution analogous to equilibrium distributions in statistical physics. The decision-making process can then be conceptualized as a change from a prior strategy distribution to a posterior strategy distribution, where the change is triggered by a change in the utility landscape. However, studying changes in equilibrium distributions neglects not only the time required for this change, but also the adaptation process itself.
The main contribution of this paper is to show that the analogy between equilibrium thermodynamics and bounded-rational decision-making [1] can be extended to the non-equilibrium domain under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows for new predictions that can be tested in experimental setups investigating decision-makers that choose between multiple alternatives. When given sufficient time to adjust to the problem, such a decision-maker may achieve a bounded optimal performance given the available precision, which may be described by an equilibrium distribution; for example, a dart thrower who has reached her/his personal best performance after extensive training with prism glasses. However, if given insufficient time, the decision-maker may not achieve bounded optimal performance, but only an inferior performance biased by the specific information-processing mechanisms used by the decision-maker, which may in general be described by a non-equilibrium distribution; for example, a dart thrower who is wearing prism glasses for the first time and plays according to a non-adaptive strategy, thereby “dissipating” utility. The connection between the non-equilibrium and equilibrium domains is tied to the concept of dissipation and its role in fluctuation theorems, which are important recent results in non-equilibrium thermodynamics.
The paper is organized as follows. In Section 2, we recapitulate the relation between bounded rational decision-making and equilibrium thermodynamics. In Section 3, we relate decision-making processes to non-equilibrium thermodynamics. In Section 4, we generalize concepts from non-equilibrium thermodynamics to make them applicable to a wider range of decision-making problems. In particular, we include a derivation of a generalized Jarzynski equality and a generalized Crooks’ theorem for decision-making. We provide simulations to illustrate the new relations in different decision-making scenarios. In Section 5, we discuss our results.

2. Equilibrium Thermodynamics and Decision-Making

In thermodynamics, closed physical systems in thermal equilibrium with their environment are described by equilibrium distributions that do not change over time. For example, a gas in a box distributes its particles evenly over the entire space and will stay this way and not spontaneously concentrate in a corner of the box. When changing constraints of the physical system, equilibrium thermodynamics allows predicting the final state after the change has taken place. For example, when opening a divider between two boxes, the gas will expand further until it fills the entire space evenly. This way, equilibrium thermodynamics allows describing system behaviour as a change from a prior equilibrium distribution to a posterior equilibrium distribution triggered by a change in external constraints.
On an abstract level, one can think about changes in the distribution of a random variable from a prior to a posterior distribution as the basis of information-processing. In Bayesian inference, for example, we update current prior beliefs $p_0(x)$ by means of a likelihood to obtain a posterior belief $p_1(x)$. Similarly, decision-making can be regarded as a process of changing a prior strategy $p_0(x)$ to a posterior strategy $p_1(x)$ through a process of deliberation [1], thereby emphasizing the stochastic nature of choice [40]. According to [1], such transitions from prior to posterior with information constraints can be formalized by optimizing the variational problem:
$$p_1^{\mathrm{eq}}(x) = \operatorname*{argmax}_{p} \Delta F[p]$$
where:
$$\Delta F[p] := \sum_x p(x)\, \Delta U(x) - \frac{1}{\beta}\, D_{\mathrm{KL}}(p \,\|\, p_0),$$
is a free energy functional, $\Delta U(x)$ is a change in utility (analogous to the notion of gains and losses in prospect theory [15]), $D_{\mathrm{KL}}(\cdot\,\|\,\cdot)$ is the Kullback–Leibler divergence or relative entropy and $\beta$ is a real-valued parameter that translates from informational units into utility units. Accordingly, Equation (3) optimizes a trade-off between utility gains and information-processing resources quantified by the “information distance” between prior and posterior. In a physical system (where the energy function corresponds to a negative utility), Equation (3) evaluated at the optimum $p_1^{\mathrm{eq}}$ quantifies the negative free energy difference $\Delta F[p_1^{\mathrm{eq}}]$ between the final state 1 and the initial state 0, assuming an isothermal process with respect to the inverse temperature $\beta$, where the utility difference $\Delta U = U_1 - U_0$ corresponds to a negative energy difference.
For a given information cost parameter β , the bounded rational decision-maker optimally trades off utility gain against informational resources according to Equation (2), thereby following the strategy:
$$p_1^{\mathrm{eq}}(x) = \frac{1}{Z_\beta}\, p_0(x)\, e^{\beta \Delta U(x)}$$
with partition function $Z_\beta = \sum_x p_0(x)\, e^{\beta \Delta U(x)}$. When inserting the optimal strategy $p_1^{\mathrm{eq}}(x)$ into Equation (3), the certainty-equivalent value of strategy $p_1^{\mathrm{eq}}$ is determined by
$$\Delta F^{\mathrm{eq}} := \Delta F[p_1^{\mathrm{eq}}] = \frac{1}{\beta} \log Z_\beta.$$
For $\beta \to 0$, the cost of computation dominates, and the optimal strategy is given by the prior strategy $p_1^{\mathrm{eq}}(x) = p_0(x)$ with the value $\lim_{\beta \to 0} \Delta F[p_1^{\mathrm{eq}}] = \langle \Delta U(x) \rangle_{p_0(x)}$. This models a decision-maker that cannot afford any information-processing. When information costs are low ($\beta \to \infty$), the optimal strategy $p_1^{\mathrm{eq}}(x)$ places all the probability mass on the maximum of $\Delta U(x)$, and the value of the strategy is $\lim_{\beta \to \infty} \Delta F[p_1^{\mathrm{eq}}] = \max_x \Delta U(x)$. This models a perfectly rational decision-maker that can hand-pick the best action. While this model includes maximum (expected) utility decision-making of Equation (1) as a special case, note that conceptually, the formulation of the decision problem as a variational problem in the probability distribution is very different from traditional approaches that define an optimization problem directly in the space of actions.
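The trade-off can be made concrete with a small sketch, assuming a hypothetical four-action problem; it evaluates the equilibrium strategy of Equation (4) and its certainty-equivalent value of Equation (5) in both limits:

import numpy as np

def equilibrium_strategy(p0, dU, beta):
    # Bounded-rational posterior of Eq. (4): p1_eq(x) ∝ p0(x) exp(beta ΔU(x)).
    w = p0 * np.exp(beta * dU)
    return w / w.sum()

def free_energy(p0, dU, beta):
    # Certainty-equivalent value of Eq. (5): (1/beta) log Z_beta.
    return np.log(np.sum(p0 * np.exp(beta * dU))) / beta

p0 = np.ones(4) / 4                  # uniform prior over four actions
dU = np.array([0.0, 0.5, 1.0, 2.0])  # hypothetical utility gains

for beta in [0.01, 1.0, 100.0]:
    print(beta, equilibrium_strategy(p0, dU, beta).round(3),
          round(free_energy(p0, dU, beta), 3))
# beta -> 0 recovers the prior (value -> <ΔU>_p0 = 0.875);
# beta -> inf concentrates on argmax ΔU (value -> max ΔU = 2).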
One possible objection to the strategy (4) is that it requires computing the partition sum $Z_\beta$ over all possible actions, which is in general an intractable operation, even though Equation (4) could still be of descriptive value. It should be noted, however, that the decision-maker is not required to explicitly compute $p_1^{\mathrm{eq}}(x)$; it suffices to produce a sample from $p_1^{\mathrm{eq}}(x)$ to generate a decision. This can be achieved, for example, by Markov Chain Monte Carlo (MCMC) methods that are specifically designed to avoid the explicit computation of partition sums [41]. In the following, we recapitulate two simple MCMC examples in the context of decision-making: a bounded rational decision-maker that uses a rejection sampling scheme and a bounded rational decision-maker that uses a variant of the Metropolis–Hastings scheme [42].

Exemplary Bounded Rational Decision-Makers

The optimal distribution (4) can be implemented, for example, by a decision-maker that follows a probabilistic satisficing strategy with aspiration level $T \geq \max_x \Delta U(x)$. Such a decision-maker optimizes the utility $\Delta U(x)$ by drawing samples from the prior distribution $x_s \sim p_0(x)$ and accepts with certainty the first sample $x_s$ with utility $\Delta U(x_s) \geq T$ reaching the aspiration level $T$, or any sample with utility below the aspiration level with acceptance probability $p_{\mathrm{accept}} = \exp(\beta(\Delta U(x_s) - T))$. The most efficient samplers use $T = \max_x \Delta U(x)$. For samplers with $T > \max_x \Delta U(x)$, the probability distribution (4) is still recovered, but more samples are required, as the acceptance probability $p_{\mathrm{accept}}$ is decreased in this case. This strategy is a particular version of the rejection sampling algorithm and is shown in pseudo-code in Algorithm 1. We can see the direct connection between informational resources (“distance away from the prior”) and the average number of samples required until acceptance, as the expected number of required samples from $p_0$ to obtain one accepted sample from $p_1^{\mathrm{eq}}$ is given by $\bar{n}_\beta = \exp(\beta T)/Z_\beta \geq \exp\!\left(D_{\mathrm{KL}}(p \,\|\, p_0)\right)$ [43]. In the limit of zero information-processing with $D_{\mathrm{KL}}(p \,\|\, p_0) = 0$ in the high-cost regime $\beta \to 0$, the sampling complexity tends to its minimum $\bar{n}_\beta \to 1$.
Algorithm 1 Rejection sampling.
  • repeat
  •     $x \sim p_0(x)$
  •     $u \sim \mathrm{Uniform}[0,1]$
  •     if $u \leq \exp(\beta(\Delta U(x) - T))$ then accept
  • until accept
  • return $x$
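A minimal runnable rendering of Algorithm 1 (a sketch; the discrete prior, utilities and seed are our hypothetical choices):

import numpy as np

def rejection_sample(p0, dU, beta, T, rng):
    # Satisficing sampler with aspiration level T >= max_x ΔU(x);
    # returns a sample from p1_eq(x) ∝ p0(x) exp(beta ΔU(x)) and the
    # number of tries until acceptance.
    n = 0
    while True:
        n += 1
        x = rng.choice(len(p0), p=p0)                    # x ~ p0
        if rng.uniform() <= np.exp(beta * (dU[x] - T)):  # accept
            return x, n

rng = np.random.default_rng(0)
p0 = np.ones(4) / 4
dU = np.array([0.0, 0.5, 1.0, 2.0])
samples = [rejection_sample(p0, dU, beta=1.0, T=dU.max(), rng=rng)[0]
           for _ in range(10000)]
print(np.bincount(samples) / 10000)   # ≈ p0(x) exp(ΔU(x)) / Z_beta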
In case we do not want to set an absolute aspiration level $T$, an incremental version of such a decision-maker can be realized by the Metropolis–Hastings scheme. Given a current action proposal $x$, the decision-maker generates a novel proposal $x'$ from $p_0(x')$. If $\Delta U(x') \geq \Delta U(x)$, then the sample is accepted with certainty. An inferior sample is accepted with probability $p_{\mathrm{accept}} = \exp(\beta(\Delta U(x') - \Delta U(x)))$. The aspiration level in this case is variable and always given by the utility of the previous sample. This corresponds to a Markov chain with transition probability $p(x'|x) = p_0(x') \min\{1, \exp[\beta(\Delta U(x') - \Delta U(x))]\}$ and stationary distribution $p_1^{\mathrm{eq}}(x)$. This Markov chain fulfils detailed balance, i.e., $p_1^{\mathrm{eq}}(x)\, p(x'|x) = p_1^{\mathrm{eq}}(x')\, p(x|x')$, which implies that after infinitely many repetitions, the samples $x$ will follow the stationary distribution. This Markov chain is a particular version of the Metropolis–Hastings algorithm and is shown in pseudo-code in Algorithm 2. The longer the chain runs, the further the distribution of $x$ will move away from the prior, i.e., the higher the informational resources will be. Finally, the chain reaches the equilibrium distribution.
Algorithm 2 Metropolis–Hastings sampling.
  • $x \sim p_0(x)$
  • repeat
  •     $x' \sim p_0(x')$
  •     $u \sim \mathrm{Uniform}[0,1]$
  •     if $u \leq \exp(\beta(\Delta U(x') - \Delta U(x)))$ then accept $x \leftarrow x'$
  • until chain has converged to equilibrium
  • return x
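A corresponding runnable sketch of Algorithm 2, where, for simplicity, the chain is run for a fixed number of steps rather than until convergence (prior, utilities and seed again hypothetical):

import numpy as np

def metropolis_hastings(p0, dU, beta, steps, rng):
    # Chain with independent proposals from p0; its stationary distribution
    # is p1_eq(x) ∝ p0(x) exp(beta ΔU(x)) by detailed balance.
    x = rng.choice(len(p0), p=p0)
    for _ in range(steps):
        x_new = rng.choice(len(p0), p=p0)
        # Accept superior proposals with certainty, inferior ones with
        # probability exp(beta (ΔU(x') - ΔU(x))).
        if rng.uniform() <= np.exp(beta * (dU[x_new] - dU[x])):
            x = x_new
    return x

rng = np.random.default_rng(0)
p0 = np.ones(4) / 4
dU = np.array([0.0, 0.5, 1.0, 2.0])
samples = [metropolis_hastings(p0, dU, beta=1.0, steps=50, rng=rng)
           for _ in range(10000)]
print(np.bincount(samples) / 10000)   # ≈ equilibrium distribution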

3. Non-Equilibrium Thermodynamics and Decision-Making

If decision-making is emulated by a Markov chain that converges to an equilibrium distribution and one wants to be absolutely certain that the chain has reached equilibrium, then one has to wait for an infinitely long time. For finite times, when considering only a limited number of samples from the chain, we are dealing in general with non-equilibrium anytime process models, i.e., computational processes that can be interrupted at any time to deliver an answer; a representative example being the Metropolis–Hastings dynamics when Algorithm 2 is run for $k \in \mathbb{N}$ steps. The same holds true for a rejection sampling decision-maker. Even though Algorithm 1 generates equilibrium samples with a finite expected number of samples $\bar{n}_\beta$, before running the algorithm, it is unknown whether after a particular number of steps $k$, a sample will be accepted or not; to have certainty, we would have to allow for an infinite amount of time ($k \to \infty$). In an anytime version of rejection sampling, the probability of not accepting a sample after $k$ tries is given by $q_k = \left(1 - Z_\beta\, e^{-\beta T}\right)^k$, in which case the sample $x_s$ will be distributed according to the prior distribution $p_0(x)$. The probability of accepting a sample that is distributed according to $p_1^{\mathrm{eq}}(x)$ after $k$ tries is given by $1 - q_k$. Accordingly, the action at time $k$ is a mixture distribution of the form:
$$p_k^{\mathrm{neq}}(x) = (1 - q_k)\, p_1^{\mathrm{eq}}(x) + q_k\, p_0(x).$$
The distribution $p_k^{\mathrm{neq}}(x)$ is a non-equilibrium distribution that reaches equilibrium $p_k^{\mathrm{neq}}(x) \to p_1^{\mathrm{eq}}(x)$ for $k \to \infty$. In the following, we ask how far the tools of non-equilibrium thermodynamics are applicable to such anytime decision-making processes.
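Since all terms of Equation (6) are available in closed form for a discrete action set, the anytime behaviour can be evaluated directly; a sketch with hypothetical numbers:

import numpy as np

p0 = np.ones(4) / 4
dU = np.array([0.0, 0.5, 1.0, 2.0])
beta, T = 1.0, dU.max()

Z = np.sum(p0 * np.exp(beta * dU))
p1_eq = p0 * np.exp(beta * dU) / Z
q1 = 1.0 - Z * np.exp(-beta * T)        # probability of rejecting one try

for k in [1, 5, 20, 100]:
    qk = q1 ** k                         # no acceptance after k tries
    p_neq = (1 - qk) * p1_eq + qk * p0   # mixture of Eq. (6)
    print(k, p_neq.round(4))             # approaches p1_eq as k grows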

3.1. Non-Equilibrium Thermodynamics

In thermodynamics, non-equilibrium processes are often modelled in the presence of an external parameter $\lambda(t) \in [0,1]$ that determines how the energy function $E_\lambda(x)$ changes over time; for example, when switching on a potential in a linear fashion, the energy would be $E_\lambda(x) = E_0(x) + \lambda\,(E_1(x) - E_0(x))$. When the change in the parameter $\lambda$ is done infinitely slowly (quasi-statically), the system’s probability distribution follows exactly the path of equilibrium distributions (for any $\lambda$) $p_\lambda(x) = \frac{1}{Z_\lambda} e^{-\beta E_\lambda(x)}$. Importantly, when the switching of the external parameter $\lambda$ is done in finite time, the trajectory in phase space of the evolving thermodynamic system can potentially be very different from the quasi-static case. In particular, the non-equilibrium path of probability distributions is going to be, in general, different from the equilibrium path. We define the trajectory of an evolving system as a finite sequence of states $\mathbf{x} := (x_0, x_1, \ldots, x_N)$ at times $t_0, t_1, \ldots, t_N$, and the probability of the trajectory as $p(\mathbf{x}) := p(x_0|t_0) \prod_{n=1}^N p(x_n|x_{n-1}, t_n)$ that follows Markovian dynamics. Since $\lambda$ is then a function of time $\lambda(t_n)$, we can effectively consider the energy as a function of state and time $E(x_n, t_n) := E_{\lambda(t_n)}(x_n)$. Accordingly, the internal energy of the system can change in two ways depending on changes in the two variables $t_n$ and $x_n$. Assuming discrete time steps, an energy change due to a change in the external parameter is defined as the work [24,44]:
$$w(x_{n-1}, t_{n-1} \to t_n) = -\left[E(x_{n-1}, t_n) - E(x_{n-1}, t_{n-1})\right]$$
and an energy change due to an internal state change is defined as the heat [24,44]:
$$q(x_{n-1} \to x_n, t_n) = -\left[E(x_n, t_n) - E(x_{n-1}, t_n)\right].$$
For an entire process trajectory $x_0, x_1, \ldots, x_N$ measured at times $t_0, t_1, \ldots, t_N$, the extracted work is $W(\mathbf{x}) = \sum_{n=1}^N w(x_{n-1}, t_{n-1} \to t_n)$, and the heat transferred to the environment by relaxation steps is $Q(\mathbf{x}) = \sum_{n=1}^N q(x_{n-1} \to x_n, t_n)$. The sum of work and heat is the total energy difference $\Delta E(\mathbf{x}) := -\left(E(x_N, t_N) - E(x_0, t_0)\right) = W(\mathbf{x}) + Q(\mathbf{x})$. In expectation with respect to $p(\mathbf{x})$, we define the average work $\langle W \rangle := \langle W(\mathbf{x}) \rangle_{p(\mathbf{x})}$, the average heat $\langle Q \rangle := \langle Q(\mathbf{x}) \rangle_{p(\mathbf{x})}$ and the average energy change $\langle \Delta E \rangle := \langle \Delta E(\mathbf{x}) \rangle_{p(\mathbf{x})}$. With these averaged quantities, we obtain the first law of thermodynamics in its usual form:
$$\langle \Delta E \rangle = \langle W \rangle + \langle Q \rangle = \langle W \rangle + T \Delta S + \langle W_{\mathrm{diss}} \rangle.$$
The heat $\langle Q \rangle$ can be decomposed into a reversible part, given by the entropy difference $\Delta S = -(S(t_N) - S(t_0))$ multiplied by the temperature $T$, and an irreversible part, the average dissipation $\langle W_{\mathrm{diss}} \rangle$. The concept of dissipation will be particularly useful later to quantify inefficiencies in decision-making processes with limited time. By identifying the equilibrium free energy difference with $\Delta F := -(F(t_N) - F(t_0)) = \langle \Delta E \rangle - T \Delta S$, we can then write the first law as:
$$\langle W \rangle = \Delta F - \langle W_{\mathrm{diss}} \rangle.$$
In the case of a quasi-static process, the extracted work $\langle W \rangle$ exactly coincides with the equilibrium free energy difference (thus, $\langle W_{\mathrm{diss}} \rangle = 0$). In the case of a finite time process, we can express the average dissipated work as [45,46,47]:
$$\langle W_{\mathrm{diss}} \rangle := \langle W_{\mathrm{diss}}(\mathbf{x}) \rangle_{p(\mathbf{x})} = \Delta F - \langle W \rangle = \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p(\mathbf{x}) \,\|\, p^\dagger(\mathbf{x})\right)$$
where $D_{\mathrm{KL}}$ is the relative entropy that measures in informational units the distinguishability between the probability of the forward-in-time trajectory $p(\mathbf{x})$ and the probability of the backward-in-time trajectory $p^\dagger(\mathbf{x}) := p(x_N|t_N) \prod_{n=1}^N p(x_{n-1}|x_n, t_{n-1})$. From the positivity of the relative entropy, we can immediately see the non-negativity of entropy production $\langle W_{\mathrm{diss}} \rangle \geq 0$, which allows stating the second law of thermodynamics in the form:
$$\langle W \rangle \leq \Delta F.$$

3.1.1. Crooks’ Fluctuation Theorem

Equation (9) can be given in a more general form without averages. It is possible to relate the reversibility of a process with its dissipation at the trajectory level. Given a protocol $\Lambda = (\lambda_0, \lambda_1, \ldots, \lambda_N)$, i.e., a sequence of external parameters, the probability $p(\mathbf{x})$ of observing a trajectory of the system in phase space compared with its time-reversal conjugate $p^\dagger(\mathbf{x})$ (when using the time-reversal protocol $\Lambda^\dagger = (\lambda_N, \lambda_{N-1}, \ldots, \lambda_0)$) depends on the dissipation of the trajectory in the forward direction according to the following expression:
$$\frac{p(\mathbf{x})}{p^\dagger(\mathbf{x})} = e^{\beta W_{\mathrm{diss}}(\mathbf{x})},$$
where $W_{\mathrm{diss}}(\mathbf{x}) = \Delta F - W(\mathbf{x})$ is the dissipated work of the trajectory. For this relation to be true, both backward and forward processes must start with the system in equilibrium. Intuitively, this means that the more the entropy production (measured by the dissipated work), the more distinguishable are the trajectories of the forward protocol compared to the backward protocol.

3.1.2. Jarzynski Equality

Additionally, another relation of interest in non-equilibrium thermodynamics has recently been found that transforms the inequality of Equation (10) into an equality, the so-called Jarzynski equality [48]:
$$\left\langle e^{\beta W(\mathbf{x})} \right\rangle_{p(\mathbf{x})} = e^{\beta \Delta F}$$
where the angle brackets denote an average over all possible trajectories $\mathbf{x}$ of a process that drives the system from an equilibrium state at $\lambda = 0$ to another state at $\lambda = 1$. Specifically, the above equality says that, no matter how the driving process is implemented, we can determine equilibrium quantities from work fluctuations in the non-equilibrium process; in other words, this equality connects non-equilibrium thermodynamics with equilibrium thermodynamics. In the following, we are interested in the question of whether there exist similar relations such as the Jarzynski equality or Crooks’ fluctuation theorem, and similar underlying concepts such as dissipation and time reversibility, for the case of decision-making.

3.2. Non-Equilibrium Thermodynamics Applied to Bounded Rational Decision-Making

In direct analogy to the previous section, in the following, we consider decision-makers faced with the problem of optimizing a changing utility function. We assume that time is discretized into $N$ steps $t_0, \ldots, t_N$. For each time step $t_n$, the utility is assumed to be constant, but it can change between time steps, such that we have a sequence of decision problems expressed by the changes in utility $\Delta U(x, t_0 \to t_1), \ldots, \Delta U(x, t_{N-1} \to t_N)$. At each time point $t_n$, the decision-maker chooses action $x_n$, such that we can summarize the decision-maker’s choices by a vector $\mathbf{x} := (x_0, \ldots, x_N)$. The behaviour of the decision-maker is characterized by the probability $p(\mathbf{x}) := p(x_0|t_0) \prod_{n=1}^N p(x_n|x_{n-1}, t_n)$ with $p(x_0|t_0) = p_0(x_0)$, assuming that the initial strategy is a bounded rational equilibrium strategy. In this setup, we assume that the changes in the utility function are externally driven, i.e., the decision-maker’s actions cannot change the temporal evolution of the utility function. Furthermore, note that the decision-maker does not know how the utility changes over time. Accordingly, the best the decision-maker can do is to optimize the current utility as much as possible.
At time $t_0$, the decision-maker starts with selecting an action $x_0$ from the distribution $p(x_0|t_0)$ and the utility changes instantly by $\Delta U(x, t_0 \to t_1)$. The decision-maker can then adapt to this utility change with the distribution $p(x_1|x_0, t_1)$ and select the action $x_1$ at time $t_1$, but at this point, the utility is already changing again by $\Delta U(x, t_1 \to t_2)$. The adaptation from $p(x_0|t_0)$ to $p(x_1|x_0, t_1)$ is analogous to a physical relaxation process and implies a strategy change between $x_0$ and $x_1$. In general, at each time point $t_{n-1}$, the decision-maker chooses action $x_{n-1}$ while the current utility changes by:
$$\Delta U(x_{n-1}, t_{n-1} \to t_n) = U(x_{n-1}, t_n) - U(x_{n-1}, t_{n-1}).$$
This way, the decision-maker is always lagging behind the changes in utility, just like a physical system would lag behind the changes in the energy function. The utility $\Delta U(x_{n-1}, t_{n-1} \to t_n)$ gained by the decision-maker at time point $t_{n-1}$ parallels the concept of work in physics. For a whole trajectory, we define the total utility gain due to changes in the environment as $U(\mathbf{x}) = \sum_{n=1}^N \Delta U(x_{n-1}, t_{n-1} \to t_n)$. Note that the last decision $x_N$ can be ignored in this notation, as it does not contribute to the utility.
In Figure 1 (left column), we illustrate the setup for a one-step decision problem $\Delta U(x, t_0 \to t_1)$ with behaviour vector $\mathbf{x} = (x_0, x_1)$. An instantaneous change in the environment occurs at time $t_0$, represented by a vertical jump from $\lambda_0$ to $\lambda_1$ in the upper panels that translates directly into a change in free energy difference represented by $\Delta F$ in the lower panels. The system’s previous state at $t_0$ is given by $p_0^{\mathrm{eq}}(x)$, i.e., the equilibrium distribution for $U_0$. The new equilibrium is given by $p_1^{\mathrm{eq}}(x)$, i.e., the equilibrium distribution for $U_1$. In this case, the behaviour vector is $\mathbf{x} = (x_0, x_1)$ with $x_0 \sim p_0^{\mathrm{eq}}(x)$, and $x_1$ is ignored.
Similarly to Equation (8), we can now formulate the first law for decision-making as:
$$\langle U \rangle = \Delta F - \langle U_{\mathrm{diss}} \rangle$$
stating that the total average utility $\langle U \rangle := \langle U(\mathbf{x}) \rangle_{p(\mathbf{x})}$ is the difference between the bounded optimal utility (following the equilibrium strategy with precision $\beta$) expressed by the equilibrium free energy difference $\Delta F$ and the dissipated utility $\langle U_{\mathrm{diss}} \rangle$. The dissipation for a trajectory $U_{\mathrm{diss}}(\mathbf{x}) := \Delta F - U(\mathbf{x})$ measures the amount of utility loss due to the inability of the decision-maker to act according to the equilibrium distribution. This is because the decision-maker cannot anticipate the changes in the environment. At most, the decision-maker could act according to the equilibrium distributions of the previous environment. Thus, even with full adaptation, the decision-maker will always lag behind one time step and will therefore always dissipate.
Due to an equivalent version of Equation (9), we can also state the second law for decision-making $\langle U_{\mathrm{diss}} \rangle \geq 0$, which implies that a purely adaptive decision-maker can gain a maximum utility that cannot be larger than the free energy difference:
$$\langle U \rangle \leq \Delta F.$$
Similarly, we can obtain equivalent relationships to the Crooks fluctuation theorem:
$$\frac{p(\mathbf{x})}{p^\dagger(\mathbf{x})} = e^{\beta U_{\mathrm{diss}}(\mathbf{x})},$$
and the Jarzynski equality:
$$\left\langle e^{\beta U(\mathbf{x})} \right\rangle_{p(\mathbf{x})} = e^{\beta \Delta F}$$
which both have the same implications as in the physical scenario and can be derived in the same way as in the physical counterpart [44]. In summary, we can say that an adaptive decision-maker, which has to act without knowing that the utility function has changed, follows the same laws as a thermodynamic physical system that is lagging behind the equilibrium.

3.3. Examples

In this section, we illustrate the applicability of thermodynamic non-equilibrium concepts in a series of simulations for different decision-making scenarios. In particular, we study two model classes: the first one contains simple one-step lag models of adaptation where equilibrium is always reached with one time step delay, and the second one contains more complex models of adaptation that do not necessarily equilibrate after one time step. In the first model class, we can easily study the relation between dissipation and the rate of information-processing, whereas in the second class of models, we can study more complex non-equilibrium phenomena such as learning hysteresis.

3.3.1. One-Step Lag Models of Adaptation

Consider a learner that is adapted to their environment such that their behaviour can be described by the equilibrium distribution p 0 ( x ) . For this idealized scenario, we assume that the learner can adapt their behaviour to any environment perfectly after a time lapse of Δ t . This also means that before the lapse of Δ t , the learner continues to follow their old strategy and is inefficient during this time span. We now consider two scenarios: first, where the environment changes suddenly by Δ U ( x ) , and second, where the environment changes slowly in N small steps of Δ U ( x ) / N . In the first case, the learner is going to dissipate the utility:
$$\langle U_{\mathrm{diss}} \rangle = \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p_0(x) \,\|\, p_1^{\mathrm{eq}}(x)\right),$$
in the first time step. In all subsequent time steps, no more utility is wasted, assuming the environment does not change any more. In the second case, the utility function can be written as $U_t(x) = U_0(x) + \frac{t}{N} \Delta U(x)$ for $t \in \mathbb{N},\ 0 \leq t \leq N$. To compute the dissipated utility, we need to compare the learner’s behaviour in time step $t$ to the bounded optimal behaviour, which is:
$$p^{\mathrm{eq}}(x, t) = \frac{1}{Z}\, p^{\mathrm{eq}}(x, t-1)\, e^{\frac{\beta}{N} \Delta U(x)}$$
for $t > 0$. The overall average dissipated utility for the whole process is then
$$\langle U_N^{\mathrm{diss}} \rangle = \frac{1}{\beta} \sum_{t=1}^N D_{\mathrm{KL}}\!\left(p^{\mathrm{eq}}(x, t-1) \,\|\, p^{\mathrm{eq}}(x, t)\right).$$
The net utility gain for the $N$-step scenario is $\langle U_N^{\mathrm{net}} \rangle = \Delta F - \langle U_N^{\mathrm{diss}} \rangle$. Note that:
$$\langle U_N^{\mathrm{diss}} \rangle \geq \langle U_{N+1}^{\mathrm{diss}} \rangle$$
and consequently, in direct analogy to a quasi-static change in a thermodynamic system, we get vanishing dissipation ($\langle U_N^{\mathrm{diss}} \rangle \to 0$) if the utility changes infinitely slowly ($N \to \infty$ and $\Delta U(x)/N \to 0$), such that the net utility equals the free energy difference $\langle U_N^{\mathrm{net}} \rangle = \Delta F$.
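This quasi-static limit can be checked numerically. The following sketch (hypothetical discrete problem, β = 1) accumulates the per-step Kullback–Leibler dissipation of the N-step scenario for increasing N:

import numpy as np

def kl(p, q):
    return np.sum(p * np.log(p / q))

p0 = np.ones(4) / 4                   # prior equilibrium behaviour
dU = np.array([0.0, 0.5, 1.0, 2.0])   # total utility change ΔU(x)
beta = 1.0

def dissipation(N):
    # Sum of KL divergences between consecutive equilibria along
    # the path U_t(x) = U_0(x) + (t/N) ΔU(x).
    p_prev, total = p0, 0.0
    for t in range(1, N + 1):
        w = p_prev * np.exp(beta * dU / N)
        p_next = w / w.sum()
        total += kl(p_prev, p_next) / beta
        p_prev = p_next
    return total

for N in [1, 2, 5, 10, 100]:
    print(N, round(dissipation(N), 4))  # decreases towards 0 as N grows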

3.3.2. Bayesian Inference as a One-Step Lag Process

Bayesian inference mechanisms naturally have step-by-step dynamics that update beliefs with new incoming observations. Again, we can consider two scenarios: first, where the learner updates their belief abruptly by processing a huge chunk of data in one go, and second, where belief updates are incremental with small chunks of data at each time step. Here, we show how the size of the chunks of data affects the overall surprise of the decision-maker and how this relates to dissipation when applying the free energy principle to Bayesian inference.
Traditionally, Bayes’ rule is obtained directly from the product rule of probabilities $p(\theta, D) = p(\theta)\, p(D|\theta) = p(D)\, p(\theta|D)$, where $\theta$ corresponds to the different available hypotheses and $D$ corresponds to the dataset. However, Bayes’ rule can also be considered to be a consequence of the maximization of the free energy difference with the log-likelihood as a utility function [49,50,51]. In this view, the posterior belief $p(\theta|D)$ is a trade-off between maximizing the likelihood $p(D|\theta)$ and minimizing the distance from the prior $p_0(\theta)$ such that:
$$p(\theta|D) = \operatorname*{argmax}_{\tilde p}\, \Delta F[\tilde p] = \operatorname*{argmax}_{\tilde p} \left[ \int \tilde p(\theta|D) \log p(D|\theta)\, d\theta - \frac{1}{\beta} \int \tilde p(\theta|D) \log \frac{\tilde p(\theta|D)}{p_0(\theta)}\, d\theta \right]$$
$$= \frac{1}{Z}\, p_0(\theta)\, e^{\beta \log p(D|\theta)} = \frac{1}{Z}\, p_0(\theta)\, p(D|\theta)^\beta$$
is identical to Bayes’ rule when $\beta = 1$. For $\beta \to \infty$, we recover the maximum likelihood estimation method, as the density update is $p(\theta|D) = \delta(\theta - \theta_{\mathrm{MLE}})$ with $\theta_{\mathrm{MLE}} = \operatorname*{argmax}_\theta \log p(D|\theta)$.
Such a Bayesian learner with prior $p_0(\theta)$ that incorporates all the data $D$ at once is going to experience the expected surprise $S = -\int p_0(\theta) \log p(D|\theta)\, d\theta$. In contrast, a Bayesian learner that incorporates the data slowly in $N$ steps (thus, the dataset $D = (X_1, \ldots, X_N)$ is divided in $N$ parts) experiences an expected surprise of $S = -\sum_{n=1}^N \int p(\theta|X_1, \ldots, X_{n-1}) \log p(X_n|\theta)\, d\theta$. Here, the surprise $S$ corresponds to the thermodynamic concept of work. The first law can then be written as:
$$\Delta F + S = U_{\mathrm{diss}}$$
where the equivalent of dissipation corresponds to:
$$U_{\mathrm{diss}} = \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p_0(\theta) \,\|\, p^{\mathrm{eq}}(\theta|D)\right)$$
when processing all the data at once and to:
$$U_{\mathrm{diss}} = \frac{1}{\beta} \sum_{n=1}^N D_{\mathrm{KL}}\!\left(p(\theta|X_{<n}) \,\|\, p^{\mathrm{eq}}(\theta|X_{\leq n})\right)$$
when processing the data in $N$ steps, where $X_{<n} = (X_1, \ldots, X_{n-1})$ and $X_{\leq n} = (X_1, \ldots, X_n)$. Thus, given that the equilibrium free energy difference $\Delta F$ is a state function independent of the path (that means independent of whether data are processed all in one go or in small chunks), a system acquiring data slowly will have a reduced surprise $S$ and therefore less dissipation $U_{\mathrm{diss}}$.
In Figure 2, we show how the number of data chunks has an effect on the overall surprise and dissipation. In particular, we have a dataset $D = (x_1, \ldots, x_T)$ consisting of $T = 100$ data points that are Gaussian distributed, $x \sim \mathcal{N}(x; \mu_d = 5, \sigma_d^2 = 4)$, and that we divide into batches of different sizes $b \in \{100, 50, 25, 20, 10, 5, 2, 1\}$. The decision-maker has prior belief $p_0(\theta)$ about the mean $\theta = \mu_d$ and incorporates the data of every batch according to Bayes’ rule until all the data are incorporated. In general, the Bayesian learner processes the data in $T/b$ steps; for example, in the case of $b = 100$, all data are processed at once (having thus high surprise), and in the case of $b = 1$, it incorporates the data in $T$ updates with an overall smaller surprise. In Figure 2, we show for different batch sizes the free energy optimum $\Delta F = \log \int p_0(\theta)\, p(D|\theta)\, d\theta$, the surprise $S$ and the dissipation $U_{\mathrm{diss}} = \Delta F + S$. It can be seen that when acquiring the data in small chunks, the surprise of the decision-maker and the dissipation are lower.
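The effect shown in Figure 2 can be sketched with conjugate Gaussian updates (β = 1); the prior belief parameters below are our hypothetical choices, and the dissipation is accumulated as the sum of per-update Kullback–Leibler divergences:

import numpy as np
rng = np.random.default_rng(0)

mu_d, var_d = 5.0, 4.0                        # data distribution N(5, 4)
data = rng.normal(mu_d, np.sqrt(var_d), 100)

def posterior(m0, v0, batch):
    # Conjugate Gaussian update for an unknown mean with known variance.
    v1 = 1.0 / (1.0 / v0 + len(batch) / var_d)
    m1 = v1 * (m0 / v0 + batch.sum() / var_d)
    return m1, v1

def kl_gauss(m1, v1, m2, v2):
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

for b in [100, 25, 5, 1]:                     # batch sizes dividing T = 100
    m, v = 0.0, 10.0                          # hypothetical prior belief on θ
    diss = 0.0
    for batch in data.reshape(-1, b):
        m_new, v_new = posterior(m, v, batch)
        diss += kl_gauss(m, v, m_new, v_new)  # KL dissipation, β = 1
        m, v = m_new, v_new
    print(b, round(diss, 3))                  # smaller batches dissipate less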

3.4. Dissipation and Learning Hysteresis

A common paradigm to study how humans learn is through adaptation tasks where subjects are exposed to changes in an environmental variable that they can counteract by changing an internal variable. Sensorimotor adaptation in humans has been extensively studied in these error-based paradigms, for example where subjects have to adapt their hand position (internal variable) to change a virtual end effector position represented by a dot on a screen (external variable).
Consider a utility function $U_v(x) = -(x - \mu_v)^2$. For $v = 0$, we determine the prior behaviour of a decision-maker with $p_0(x) = \frac{e^{\beta U_0(x)}}{Z}$. Initially, the decision-maker obtains an average utility of $\langle U_0 \rangle_{p_0}$, which corresponds to zero mismatch between the decision-maker and the environmental variable. A change of the environmental variable to $v = 1$ effectively changes the utility function to $U_1(x) = -(x - \mu_1)^2$, making $p_0$ non-optimal. This forces the decision-maker to reduce the error by adapting to the environmental variable, i.e., by changing its probability distribution over actions. When fully adapted to the new environment, the decision-maker again makes no errors (other than the errors due to motor noise). We illustrate this adaptation paradigm with a decision-maker that adapts according to the Metropolis–Hastings algorithm, which follows Markovian dynamics [52].

Crooks Theorem and Hysteresis Effects in Adaptation Tasks

Limited adaptation capabilities not only have an effect on the amount of obtained utility through the second law for decision-making U net Δ F , but also induce a time asymmetry in sequential decision-making processes. Hysteresis loops are a typical example of this asymmetry. Hysteresis is the phenomenon in which the path followed by a system due to an external perturbation, e.g., from state A to B, is not the same as the path followed in the reverse perturbation, e.g., from state B to A. When the system follows the same path for the forward perturbation and for the reverse perturbation, we say that the process is time symmetric (and therefore, it is not subject to hysteresis effects).
In the two left panels of Figure 3, we show a simulated trajectory of actions composed of 80 trials for an adaptation task using the Metropolis–Hastings algorithm with $\beta = 22.5$, a Gaussian proposal $g(x'|x) = \mathcal{N}(x'; \mu = x, \sigma_p = 0.1)$ and acceptance criterion $\alpha(x'|x) = \min\left\{ \frac{e^{\beta U(x')}\, g(x|x')}{e^{\beta U(x)}\, g(x'|x)}, 1 \right\}$, when changing the environmental variable from $\mu_0 = 0.0$ to $\mu_1 = 1.0$. In blue, we show the trajectory for the forward-in-time perturbation, which converges after a few dozen trials to the new equilibrium. In brown, we show the trajectory for the reversed perturbation, where the process starts with the last trial (80) and ends with the initial trial (0). In the left panel, the perturbation is made instantaneously in one step at Trial 40, and in the right panel, in multiple steps ($N = 23$). The hysteresis effect is clearly seen in the instantaneous perturbation, where the path of actions followed by the decision-maker in the forward perturbation is clearly different from a typical trajectory of actions taken when applying the reversed perturbation. When the perturbation is made in multiple steps, typical backward and forward trajectories become more similar, denoting a smaller hysteresis effect. In this way, hysteresis effects are tightly connected to the concept of dissipation.
Dissipation and the ratio between forward and backward probabilities of trajectories of actions correspond exactly to the Crooks theorem for decision-making:
$$\frac{p(\mathbf{x})}{p^\dagger(\mathbf{x})} = e^{\beta U_{\mathrm{diss}}(\mathbf{x})}.$$
The probability of observing a trajectory of accepted actions $\mathbf{x} = (x_0, x_1, \ldots, x_T)$ for the Metropolis–Hastings algorithm is easily computed as $p(\mathbf{x}) = p(x_0) \prod_{t=1}^T g(x_t|x_{t-1})\, \alpha(x_t|x_{t-1})$. Similarly, the probability of observing the same trajectory in the backward protocol is $p^\dagger(\mathbf{x}) = p^{\mathrm{eq}}(x_T) \prod_{t=1}^T g(x_{T-t}|x_{T-t+1})\, \alpha(x_{T-t}|x_{T-t+1})$. The dissipated utility is $U_{\mathrm{diss}} = \Delta F - U_{\mathrm{tot}}$, where the free energy difference is computed between the final equilibrium distribution $p_1(x) = \frac{1}{Z} e^{\beta U_1(x)}$ and the initial equilibrium distribution $p_0(x) = \frac{1}{Z} e^{\beta U_0(x)}$, and the total utility gained $U_{\mathrm{tot}}$ is the sum of the utilities $\Delta U(x, t_n \to t_{n+1})$ at each environmental change at time $t_n$. In the third panel of Figure 3, we show that the protocol with the instantaneous perturbation has higher dissipation (related to higher hysteresis) compared to the protocol with multiple small perturbations.
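A sketch of the forward protocol of this simulation (the random seed and the exact sampling of the initial state are our assumptions):

import numpy as np
rng = np.random.default_rng(1)

beta, sigma_p = 22.5, 0.1

def utility(x, mu):
    return -(x - mu) ** 2

def mh_step(x, mu):
    # One Metropolis-Hastings step with a symmetric Gaussian proposal,
    # for which the proposal terms cancel in the acceptance criterion.
    x_new = rng.normal(x, sigma_p)
    if rng.uniform() <= np.exp(beta * (utility(x_new, mu) - utility(x, mu))):
        return x_new
    return x

# Forward protocol: instantaneous perturbation mu: 0.0 -> 1.0 at trial 40.
x = rng.normal(0.0, np.sqrt(1.0 / (2 * beta)))  # sample from the prior equilibrium
trajectory = []
for trial in range(80):
    mu = 0.0 if trial < 40 else 1.0
    x = mh_step(x, mu)
    trajectory.append(x)
print(np.round(trajectory[38:44], 2))  # the lag behind the jump of mu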

4. Generalized Non-Equilibrium Thermodynamics for Decision-Making with Deliberation

So far, we have studied decision-makers that were forced to select an action with no opportunity to respond to a change in the utility function. This could correspond, for example, to a scenario of trial-and-error learning, where the best available strategy is the prior strategy adapted to the environment before the utility changed. However, this restriction may not always be suitable. Consider for example a chess player that is shown a particular board configuration (corresponding to a change in utility) and now has a certain amount of time to decide on the next move. Similarly, consider the two introductory examples in Section 3, where we allow a sampling algorithm to run for a certain number of steps, and then, we stop and evaluate the action after the algorithm has adapted to the new utility. In general, such deliberation processes are expensive, and we assume in the following that the Kullback–Leibler divergence is an appropriate measure of this computational expense, as outlined in the Introduction.
In the following, we consider again decision-makers facing a sequence of decision problems expressed by the utility changes $\Delta U(x, t_0 \to t_1), \ldots, \Delta U(x, t_{N-1} \to t_N)$. In contrast to the previous section, where decision-makers had to decide before they could adapt to the utility change, decision-makers that deliberate select their action $x_n$ after they have (partially) adapted to the utility change:
$$\Delta U(x_n, t_{n-1} \to t_n) = U(x_n, t_n) - U(x_n, t_{n-1}).$$
Using this notation, we are able to summarize the decision-maker’s choices by a vector $\mathbf{x} := (x_0, \ldots, x_N)$ and characterize its behaviour by the probability $p(\mathbf{x}) := p(x_0|t_0) \prod_{n=1}^N p(x_n|x_{n-1}, t_n)$ with $p(x_0|t_0) = p_0(x_0)$, assuming that the initial strategy is a bounded rational equilibrium strategy. Note that in the deliberation scenario, the initial state $x_0$ does not constitute a decision, but instead, we include the last decision $x_N$.
This setup is illustrated again in Figure 1 (right column) for a one-step decision problem $\Delta U(x, t_0 \to t_1)$ with behaviour vector $\mathbf{x} = (x_0, x_1)$ and with an instantaneous change in the environment occurring at time $t_0$. In the deliberation scenario, the utility is determined after the deliberation time. During deliberation, the decision-maker has changed the strategy distribution from $p_0^{\mathrm{eq}}(x)$ to a non-equilibrium distribution $\tilde p(x)$ (for example, the distribution (6) in the rejection sampling scheme), spending in the process a certain amount of resources and achieving an average net utility of $\langle U^{\mathrm{net}} \rangle = \Delta F[\tilde p(x)]$ according to Equation (3). In this case, the behaviour vector is $\mathbf{x} = (x_0, x_1)$ with $x_0$ ignored and $x_1 \sim \tilde p(x)$. In such a scenario with a single decision problem, we define, in analogy with the previous section, the average dissipated utility as [24,53]:
$$\langle U_{\mathrm{diss}} \rangle := \Delta F - \langle U^{\mathrm{net}} \rangle = \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(\tilde p(x) \,\|\, p_1^{\mathrm{eq}}(x)\right).$$
See the Appendix for a derivation of (16) from (9). It readily follows from the non-negativity of the relative entropy $D_{\mathrm{KL}}(p \,\|\, q) \geq 0$ that:
$$\langle U^{\mathrm{net}} \rangle \leq \Delta F$$
with equality when $\tilde p(x) = p_1^{\mathrm{eq}}(x)$. In the case of the rejection sampling decision-maker of Equation (6), this would correspond to an infinite number of samples $k \to \infty$. The inequality (17) shows that we cannot obtain more utility than the equilibrium free energy difference.
Let us now look at the general case. In contrast to an agent without deliberation capabilities, an agent that deliberates will be able to act according to a different distribution than the prior strategy. This means that when facing the utility change $\Delta U(x, t_{n-1} \to t_n)$ at time $t_n$, the agent chooses the action $x_n$ sampled from the posterior strategy, contrary to an agent without deliberation that chooses $x_{n-1}$ sampled from the prior strategy. The deliberation process incurs a computational cost that is measured (in a similar fashion to stochastic thermodynamics [54] and previous formulations of bounded rationality given in the Introduction) by the difference between the conditional stochastic entropies from prior to posterior:
$$s(x_n|x_{n-1}, t_n) - s(x_n|x_{n-1}, t_{n-1}) := \log \frac{p(x_n|x_{n-1}, t_n)}{p(x_n|x_{n-1}, t_{n-1})}.$$
Note that the prior distribution $p(x_n|x_{n-1}, t_{n-1})$ is the previous posterior distribution evaluated at $x_n$ instead of $x_{n-1}$. Basically, this measures the change in probability from prior behaviour to posterior behaviour of the newly chosen action $x_n$.
Taking into account the computational cost of deliberation, we define the net utility of action x n due to a change in the environment as
$$u(x_n, t_{n-1} \to t_n) = \Delta U(x_n, t_{n-1} \to t_n) - \frac{1}{\beta} \log \frac{p(x_n|x_{n-1}, t_n)}{p(x_n|x_{n-1}, t_{n-1})},$$
which generalizes the concept of work from the previous section. The expected change in net utility is the objective function that the decision-maker optimizes at each time step. The total net utility $U^{\mathrm{net}}(\mathbf{x}) = \sum_{n=1}^N u(x_n, t_{n-1} \to t_n)$ takes, at the trajectory level, the form of a non-equilibrium free energy:
$$U^{\mathrm{net}}(\mathbf{x}) = \sum_{n=1}^N \Delta U(x_n, t_{n-1} \to t_n) - \frac{1}{\beta} \sum_{n=1}^N \log \frac{p(x_n|x_{n-1}, t_n)}{p(x_n|x_{n-1}, t_{n-1})}.$$
at the trajectory level. Similarly to Equation (8), the first law for decision-making with deliberation costs is:
$$\langle U^{\mathrm{net}} \rangle = \Delta F - \langle U_{\mathrm{diss}} \rangle$$
and states that the total net utility $\langle U^{\mathrm{net}} \rangle = \langle U^{\mathrm{net}}(\mathbf{x}) \rangle_{p(\mathbf{x})}$ is the difference between the bounded optimal utility (following the equilibrium strategy with precision $\beta$) expressed by the equilibrium free energy difference $\Delta F$ and the dissipated utility $\langle U_{\mathrm{diss}} \rangle$. The dissipation:
$$U_{\mathrm{diss}}(\mathbf{x}) := \Delta F - U^{\mathrm{net}}(\mathbf{x})$$
measures the amount of utility loss if the decision-maker’s plan does not manage to produce an action from the equilibrium distribution, for example due to the lack of time for deliberation. However, a decision-maker with infinite deliberation time will not have this problem and therefore will not dissipate by wasting utility.
To investigate the counterpart of the second law, we need to determine whether $\langle U_{\mathrm{diss}} \rangle \geq 0$ holds. This can be achieved, for example, by first deriving the counterpart of the Crooks fluctuation theorem or the counterpart of the Jarzynski equality with subsequent application of Jensen’s inequality. In the following two theorems, we assume that the decision-makers satisfy the detailed balance condition. The detailed balance condition ensures two important characteristics: first, that the stochastic process reaches equilibrium, and second, time-reversibility when in equilibrium. In a decision-making scenario, this translates into the following. First, when given enough computation time, the decision-makers manage to sample actions from the correct equilibrium distributions. Second, ideal decision-makers in equilibrium should not produce any entropy, which is exactly what happens if detailed balance is satisfied.
Theorem 1.
Crooks’ fluctuation theorem for decision-making with deliberation costs states that:
$$\frac{p(\mathbf{x})}{p^\dagger(\mathbf{x})} = e^{\beta U_{\mathrm{diss}}(\mathbf{x})}$$
where the dissipated utility of a particular trajectory is $U_{\mathrm{diss}}(\mathbf{x}) = \Delta F - U^{\mathrm{net}}(\mathbf{x})$ with $U^{\mathrm{net}}(\mathbf{x})$ as defined in Equation (18), and the probability of the trajectory using the backward protocol is $p^\dagger(\mathbf{x}) = p(x_0|x_1, t_0)\, p(x_1|x_2, t_1) \cdots p(x_N|t_N)$ for $N$ decision problems starting at time $t_N$ and going backwards up to $t_0$. For the relation to be valid, we must assume that the starting distribution in the backward process is also in equilibrium, $p(x_N|t_N) \propto e^{\beta U(x_N, t_N)}$.
Proof. 
Here, we derive the relationship between reversibility and dissipation.
$$\frac{p(\mathbf{x})}{p^\dagger(\mathbf{x})} = \frac{p(x_0|t_0)\, p(x_1|x_0, t_1) \cdots p(x_N|x_{N-1}, t_N)}{p(x_0|x_1, t_0)\, p(x_1|x_2, t_1) \cdots p(x_N|t_N)}$$
$$= \frac{e^{\beta U(x_0, t_0)}}{Z_0}\, \frac{1}{e^{\beta U(x_0, t_0)}}\, \frac{p(x_1|x_0, t_1)}{p(x_1|x_0, t_0)}\, \frac{e^{\beta U(x_1, t_0)}}{e^{\beta U(x_1, t_1)}} \cdots \frac{p(x_N|x_{N-1}, t_N)}{p(x_N|x_{N-1}, t_{N-1})}\, \frac{e^{\beta U(x_N, t_{N-1})}}{e^{\beta U(x_N, t_N)}}\, Z_N$$
$$= \frac{Z_N}{Z_0}\, e^{\beta \frac{1}{\beta} \log \frac{p(x_1|x_0, t_1)}{p(x_1|x_0, t_0)}}\, e^{-\beta \Delta U(x_1, t_0 \to t_1)} \cdots e^{\beta \frac{1}{\beta} \log \frac{p(x_N|x_{N-1}, t_N)}{p(x_N|x_{N-1}, t_{N-1})}}\, e^{-\beta \Delta U(x_N, t_{N-1} \to t_N)}$$
$$= e^{\beta \Delta F - \beta U^{\mathrm{net}}(\mathbf{x})} = e^{\beta U_{\mathrm{diss}}(\mathbf{x})}$$
where in the second line, we have substituted p ( x n 1 | x n , t n 1 ) using the identity:
$$p(x_{n-1}|x_n, t_{n-1}) = \frac{e^{\beta U(x_{n-1}, t_{n-1})}}{e^{\beta U(x_n, t_{n-1})}}\, p(x_n|x_{n-1}, t_{n-1})$$
from detailed balance, and we assumed the initial distribution to be in equilibrium, $p(x_0|t_0) = \frac{e^{\beta U(x_0, t_0)}}{Z_0}$, and that in the backward process the decision-maker also starts with the equilibrium strategy $p(x_N|t_N) = \frac{1}{Z_N} e^{\beta U(x_N, t_N)}$. In the third line, we cancel out terms and apply the following two equalities: $\frac{p(x_n|x_{n-1}, t_n)}{p(x_n|x_{n-1}, t_{n-1})} = e^{\beta \frac{1}{\beta} \log \frac{p(x_n|x_{n-1}, t_n)}{p(x_n|x_{n-1}, t_{n-1})}}$ and $\Delta U(x_n, t_{n-1} \to t_n) = U(x_n, t_n) - U(x_n, t_{n-1})$. Finally, in the last line, we employ the definition of the net utility in Equation (18) and $\frac{Z_N}{Z_0} = e^{\beta \Delta F}$. ☐
Although at first sight, Equation (20) looks the same as the previous Crooks’ relation for the no-deliberation case (12), it is not the same. Here, the net utility is defined by Equation (18), which takes into account both the gain in utility and the computational costs of deliberating.
Theorem 2.
The Jarzynski equality for decision-making with deliberation costs states that:
$$\left\langle e^{\beta U^{\mathrm{net}}(\mathbf{x})} \right\rangle_{p(\mathbf{x})} = e^{\beta \Delta F}.$$
Proof. 
$$\left\langle \exp\!\left(\beta \sum_{n=1}^N \left[ \Delta U(x_n, t_{n-1} \to t_n) - \frac{1}{\beta} \log \frac{p(x_n|t_n, x_{n-1})}{p(x_n|t_{n-1}, x_{n-1})} \right]\right) \right\rangle_{p(\mathbf{x})}$$
$$\stackrel{(1.)}{=} \sum_{x_0, \ldots, x_N} p(x_0|t_0) \prod_{n=1}^N p(x_n|t_n, x_{n-1}) \prod_{n=1}^N \frac{\exp(\beta U(x_n, t_n))}{\exp(\beta U(x_n, t_{n-1}))} \prod_{n=1}^N \frac{p(x_n|t_{n-1}, x_{n-1})}{p(x_n|t_n, x_{n-1})}$$
$$\stackrel{(2.)}{=} \sum_{x_0, \ldots, x_N} p(x_0|t_0)\, \frac{\exp(\beta U(x_1, t_1))}{\exp(\beta U(x_1, t_0))} \prod_{n=2}^N \frac{\exp(\beta U(x_n, t_n))}{\exp(\beta U(x_n, t_{n-1}))}\, p(x_1|t_0, x_0) \prod_{n=2}^N p(x_n|t_{n-1}, x_{n-1})$$
$$\stackrel{(3.)}{=} \frac{1}{Z_0} \sum_{x_1, \ldots, x_N} \exp(\beta U(x_1, t_1)) \prod_{n=2}^N \frac{\exp(\beta U(x_n, t_n))}{\exp(\beta U(x_n, t_{n-1}))} \prod_{n=2}^N p(x_n|t_{n-1}, x_{n-1})$$
$$\stackrel{(4.)}{=} \frac{1}{Z_0} \sum_{x_2, \ldots, x_N} \prod_{n=2}^N \frac{\exp(\beta U(x_n, t_n))}{\exp(\beta U(x_n, t_{n-1}))} \prod_{n=3}^N p(x_n|t_{n-1}, x_{n-1}) \underbrace{\sum_{x_1} \exp(\beta U(x_1, t_1))\, p(x_2|t_1, x_1)}_{=\, \exp(\beta U(x_2, t_1))\ \text{(detailed balance)}}$$
$$\stackrel{(5.)}{=} \frac{1}{Z_0} \sum_{x_N} \exp(\beta U(x_N, t_N)) = \frac{Z_N}{Z_0} = e^{\beta \Delta F}$$
In (1.), we unfold the expression and exploit the equality $e^{\log p + \log q} = pq$ for the summation inside the exponential. In (2.), we cancel the trajectory probabilities $\prod_{n=1}^N p(x_n|t_n, x_{n-1})$ and then take one term out of each of the two remaining products. In (3.), first, we use the equivalence $\exp(\beta U(x_1, t_0)) = Z_0\, p^{\mathrm{eq}}(x_1|t_0)$ (because at time $t_0$, the decision-maker is acting according to the equilibrium distribution), which allows us to cancel with $p(x_1|t_0, x_0) = p^{\mathrm{eq}}(x_1|t_0)$, and second, we sum over $x_0$, with the only term that depends on it being $p(x_0|t_0)$. In (4.), we take one term of the second product and perform the sum over $x_1$ to obtain, by detailed balance, $\exp(\beta U(x_2, t_1))$, which allows us to cancel with the term in the denominator of the first product. We perform Steps (3.) and (4.) repeatedly until obtaining the last equivalence, which proves the theorem. ☐
Again, we note that the previously-proven Jarzynski relation from Equation (21) is not the same equation as in the no-deliberation case (13). In the deliberation case, the definition of the net utility is different and takes into account both the utility gain and the computational cost of deliberating.
We can now state the second law of decision-making with deliberation costs as:
$$\langle U_{\mathrm{diss}}(\mathbf{x}) \rangle_{p(\mathbf{x})} = \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p(\mathbf{x}) \,\|\, p^\dagger(\mathbf{x})\right) \geq 0$$
which follows from Equation (20) by rearranging and taking expectations. The same inequality can be obtained from Equation (21) by applying Jensen’s inequality $\langle e^x \rangle \geq e^{\langle x \rangle}$ to recover $\langle U^{\mathrm{net}}(\mathbf{x}) \rangle_{p(\mathbf{x})} \leq \Delta F$. Equation (21) connects finite-time with infinite-time decision-making. That is, it relates the equilibrium free energy difference, which is the maximum attainable net utility with unlimited computation time, to the net utility obtained by decision-makers with limited computation time. In the next section, we will provide examples of how to use these relations to extract useful information from decision-making processes.

4.1. Examples

For the deliberation scenario, we illustrate the novel Jarzynski equality and Crooks theorem for decision-making in two decision-making scenarios with clearly defined independent episodes: the first case is a discrete decision-making problem, and the second case is a continuous decision-making problem.

4.1.1. Jarzynski and Crooks Relations for Episodic Decision-Making with Deliberation

Choice-reaction-time experiments aimed at studying information-processing in humans typically consider episodic tasks consisting of many trials; see [55] for a recent example. Here, we take a variation of Hick’s episodic task with a discrete action space, commonly used in the decision-making literature. In our variation of Hick’s task, the decision-maker is shown a set of eight light bulbs. Initially, all light bulbs are turned off. Upon stimulus presentation, all light bulbs are turned on with different light intensities (representing different utilities) for a limited amount of time, in which the decision-maker must choose the brightest light, associated with the highest utility. The choice task is repeated many times, each time with different light intensities. For simplicity, our example contains only two stimuli: compare Utility 1 and Utility 2 in Figure 4A. When given enough time, a decision-maker with prior $p_0(x)$ chooses its actions according to the equilibrium distribution from Equation (4), as illustrated in Figure 4A for the uniform prior $p_0(x) = \frac{1}{8}$ that we assume in our example. In this case, the precision $\beta$ specifies how well the light intensities can be told apart by a bounded optimal decision-maker.
In Figure 4, we model a decision-maker using the rejection sampling algorithm with the most efficient aspiration level, given by the maximum utility $T = \max_x \Delta U(x)$. In particular, we simulate the rejection sampling algorithm with a limited number of samples (parameterized by $k$), where the choice strategy is given by the non-equilibrium probability distribution in Equation (6) from the Introduction, because we assume that a response has to be produced within a fixed amount of time.
In this kind of episodic task, the decision-maker always starts with the same prior $p_0(x)$ over the possible choices $x$. The probability of a trajectory of decisions $\mathbf{x}$ is defined as $p(\mathbf{x}) := \prod_{n=1}^N p(x_n|t_n)$ for each episode $n$, and the net utility for a trajectory is:
$$U_0^{\mathrm{net}}(\mathbf{x}) := \sum_{n=1}^N \left[ \Delta U(x_n, t_{n-1} \to t_n) - \frac{1}{\beta} \log \frac{p(x_n|t_n)}{p_0(x_n)} \right].$$
Consequently, the equilibrium free energy is defined as $\Delta F := \max_{\tilde p(\mathbf{x})} \langle U_0^{\mathrm{net}}(\mathbf{x}) \rangle_{\tilde p(\mathbf{x})}$, which can also be decomposed into the sum of $N$ independent equilibrium free energies, $\Delta F = \sum_{n=1}^N \left\langle \Delta U(x_n, t_{n-1} \to t_n) - \frac{1}{\beta} \log \frac{p^{\mathrm{eq}}(x_n|t_n)}{p_0(x_n)} \right\rangle_{p^{\mathrm{eq}}(x_n|t_n)}$, where:
$$p^{\mathrm{eq}}(x_n|t_n) = \frac{p_0(x_n)\, \exp(\beta\, \Delta U(x_n, t_{n-1} \to t_n))}{Z_n}$$
and the dissipated utility for a trajectory is $U_{\mathrm{diss}}(\mathbf{x}) := \Delta F - U_0^{\mathrm{net}}(\mathbf{x})$.
We simulate trajectories with $N = 2$ by sampling repeatedly from Equation (6). In the first panel of Figure 4B, we show that, as expected, the more samples $k$ a decision-maker can afford, the higher the average net utility $\langle U_0^{\mathrm{net}} \rangle_{p(\mathbf{x})}$. In the second panel, it can be seen that the equilibrium free energy difference is invariant with respect to $k$ and increases with higher precision $\beta$. Lastly, in the third panel, we plot the average dissipated utility $\langle U_{\mathrm{diss}} \rangle_{p(\mathbf{x})}$, which measures how much utility is lost due to the limited number of available samples. The highest dissipation occurs for high $\beta$ and few samples $k$, because such a high-precision decision-maker could potentially obtain high utility, but the limited number of samples restrains it. In the following, we consider both a Jarzynski-like relation and a fluctuation theorem valid for a fixed prior.

Jarzynski Equality for Decision-Making with Fixed Prior p 0

For a fixed prior, it can readily be shown that the following relation is valid:
$$\left\langle e^{\beta U_0^{\mathrm{net}}(\mathbf{x})} \right\rangle_{p(\mathbf{x})} = e^{\beta \Delta F}.$$
To illustrate the validity of Equation (23), we simulated a decision-maker that faces the same two decision problems from Figure 4A a total of $T$ times. We can estimate the left-hand side of Equation (23) with the empirical average $\frac{1}{T} \sum_i \exp(\beta U_0^{\mathrm{net}}(\mathbf{x}_i))$ over the $T$ trajectories of decisions, where $\mathbf{x}_i \sim p(\mathbf{x})$. In the top row of Figure 4C, we show the empirical average converging to $\exp(\beta \Delta F)$ (as expected by the law of large numbers) depending on the number of simulated trajectories $T$ and precision $\beta$, empirically validating Equation (23). In the bottom row, we show how the second law for decision-making is fulfilled, as the average net utility is less than the equilibrium free energy, thus satisfying the inequality (17).
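The following sketch illustrates this empirical average for a single episode with hypothetical light intensities; since the episodes are independent, the multi-episode case factorizes:

import numpy as np
rng = np.random.default_rng(2)

p0 = np.ones(8) / 8                     # uniform prior over 8 light bulbs
dU = rng.uniform(0.0, 1.0, size=8)      # hypothetical light intensities
beta, k = 5.0, 3                        # precision and number of samples

Z = np.sum(p0 * np.exp(beta * dU))
p1_eq = p0 * np.exp(beta * dU) / Z
qk = (1.0 - Z * np.exp(-beta * dU.max())) ** k
p_neq = (1 - qk) * p1_eq + qk * p0      # anytime strategy of Eq. (6)

T = 100000                              # number of simulated trajectories
x = rng.choice(8, size=T, p=p_neq)
u_net = dU[x] - np.log(p_neq[x] / p0[x]) / beta  # episodic net utility
print(np.mean(np.exp(beta * u_net)), Z)          # both ≈ exp(beta ΔF)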

Crooks’ Fluctuation Theorem for Decision-Making with Fixed Prior p 0

For the fixed prior, it can readily be shown that the following fluctuation relation holds:
$$\frac{\tilde p(\mathbf{x})}{p^{\mathrm{eq}}(\mathbf{x})} = e^{\beta (\Delta F - U_0^{\mathrm{net}}(\mathbf{x}))} = e^{\beta U_{\mathrm{diss}}(\mathbf{x})}$$
where $p^{\mathrm{eq}}(\mathbf{x}) := \prod_{n=1}^N p^{\mathrm{eq}}(x_n|t_n)$ is the optimal equilibrium distribution over trajectories $\mathbf{x}$. Note in this case that the probability distribution of the backward process $p^\dagger(\mathbf{x})$ coincides with the optimal equilibrium distribution, $p^\dagger(\mathbf{x}) = p^{\mathrm{eq}}(\mathbf{x})$, because of the independence of the decision problems. More specifically, the original Crooks theorem for decision-making from Equation (20) is valid only when the backward process starts in equilibrium. In our episodic task, all decision problems are independent, which makes the starting equilibrium distributions for all the backward processes coincide with the posterior equilibrium distributions of the forward process.
The fluctuation relation (24) for episodic tasks adopts a different meaning than the conventional relation. Specifically, the ratio between probabilities is now between the probability of observing a trajectory of actions when having finite time to make a decision (a sequence of non-equilibrium probabilities) and the probability of observing the same trajectory when having infinite time (a sequence of equilibrium probabilities). This ratio is governed by the exponential of the dissipated utility U diss ( x ) similarly to the original Crooks equation.
Equation (24) can be rewritten by rearranging the terms and averaging over $\tilde{p}(\boldsymbol{x})$ as
\frac{1}{\beta} D_{\mathrm{KL}}\left( \tilde{p}(\boldsymbol{x}) \,\middle\|\, p_{\mathrm{eq}}(\boldsymbol{x}) \right) = \left\langle U_{\mathrm{diss}}(\boldsymbol{x}) \right\rangle_{\tilde{p}(\boldsymbol{x})}.
Consequently, we see that, purely from the trajectories of actions, we can obtain the average dissipated utility. We can test this relation in human experiments by comparing the trajectories of actions in two different conditions: first, when having finite time, and second, when having as much time as needed. Then, from the probabilities of the action trajectories, we can extract the average dissipated utility.
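The following sketch illustrates this proposed analysis with invented action counts standing in for the two experimental conditions; the precision β is assumed known.

import numpy as np

beta = 2.0                                        # assumed known precision
counts_finite = np.array([520, 310, 170])         # actions observed under finite time
counts_unlimited = np.array([700, 220, 80])       # actions observed with unlimited time

p = counts_finite / counts_finite.sum()           # empirical estimate of p(x)
p_eq = counts_unlimited / counts_unlimited.sum()  # empirical estimate of p_eq(x)

kl = np.sum(p * np.log(p / p_eq))                 # D_KL(p || p_eq)
print(kl / beta)                                  # estimated average dissipated utility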

4.1.2. Jarzynski and Crooks Relations for Deliberating Continuous Decisions

Since many decision tasks take place in the continuous domain (for example, sensorimotor tasks), we now consider continuous state space problems. In particular, we repeat the same analysis as in the previous section by validating our Jarzynski equation, but this time in the continuous domain. Moreover, in this example, we allow for adaptive changes in the prior, such that the prior in one trial is equal to the posterior of the previous trial. In the following, we model decision-making as a diffusion process with Langevin dynamics that stops after a certain time t and emits an action x. The diffusion process uses gradient information to find the optimum of the utility and converges to an equilibrium distribution as $t \to \infty$. In our example, we employ quadratic utility functions that allow for a closed-form solution of the non-equilibrium probability density that changes over time.
Let x ( t ) R be the dynamics of computation that a decision-maker carries out when deliberating. The differential equation that describes the dynamics is:
\frac{\partial x}{\partial t} = \alpha \frac{\partial U(x)}{\partial x} + \alpha\, \xi(t)
where ξ(t) is white Gaussian noise with mean $\langle \xi(t) \rangle = 0$ and correlation $\langle \xi(t) \xi(t') \rangle = 2D\, \delta(t - t')$. Note that Equation (25) is closely related to learning algorithms that use gradient information, such as Stochastic Gradient Descent (SGD). These algorithms find the minimum of a cost function by taking steps in state space in the direction opposite to the gradient. Here, the learning rate corresponds to the parameter α, which, in contrast to plain gradient descent, multiplies not only the gradient, but also the noise term.
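For concreteness, a possible Euler–Maruyama discretization of Equation (25) is sketched below; the step size and all constants are illustrative. The noise increment scales as $\sqrt{2D\,\Delta t}$, as implied by the stated correlation of ξ(t).

import numpy as np

rng = np.random.default_rng(1)
alpha, D, dt = 0.1, 1.0, 0.01
a, b = 0.2, 0.4                       # quadratic utility U(x) = -(a x^2 + b x)

def grad_U(x):
    return -(2.0 * a * x + b)         # dU/dx

x = rng.normal(0.0, 1.0)              # initial state sampled from the prior
for _ in range(1000):
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x += alpha * grad_U(x) * dt + alpha * noise   # discretized Equation (25)
print(x)                              # the emitted action after 1000 steps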
Equation (25) gives the dynamics of the decision-making process in terms of a stochastic differential equation, which can equivalently be expressed by the evolution of the probability p ( x , t ) described by the Fokker–Planck equation [56]:
\frac{\partial p(x,t)}{\partial t} = -\alpha\, p(x,t)\, \frac{\partial^2 U(x)}{\partial x^2} - \alpha\, \frac{\partial U(x)}{\partial x}\, \frac{\partial p(x,t)}{\partial x} + D \alpha^2\, \frac{\partial^2 p(x,t)}{\partial x^2}.
In order to compute the net utility, we need the probability of the non-equilibrium distribution up to a desired time t; thus, we need to solve the Fokker–Planck equation. For quadratic utility functions $U_y(x) = -(a_y x^2 + b_y x)$ with coefficients $a_y$ and $b_y$ for environment y and an initial Gaussian distribution with mean $\mu_0$ and variance $\sigma_0^2$, the solution is (see Appendix B):
p(x,t) = \frac{1}{\sqrt{2\pi\sigma^2(t)}}\, e^{-\frac{(x - \mu(t))^2}{2\sigma^2(t)}}
with:
\sigma^2(t) = \frac{\alpha^2 D}{c}\left(1 - e^{-2ct}\right) + \sigma_0^2\, e^{-2ct}, \qquad \mu(t) = e^{-ct} \mu_0 - \frac{b_1}{2 a_1}\left(1 - e^{-ct}\right)
where $c = 2\alpha a_1$, and we assumed that the prior strategy is Gaussian with mean $\mu_0$ and variance $\sigma_0^2$. The precision parameter is related to the other parameters by $\beta = \frac{1}{\alpha D}$, so that $\sigma^2(t) \to \frac{1}{2\beta a_1}$ for $t \to \infty$: the lower the noise D, the higher the precision β; and since α scales not only the gradient but also the noise term, a smaller α likewise corresponds to a higher β.
Following a similar approach as in the previous section, we expose a decision-maker to two utility functions given by $U_1(x) = -0.2 x^2 - 0.4 x - 0.8$ and $U_2(x) = -0.4 x^2 - 1.8 x + 1.025$, shown in Figure 5A. The prior for the first utility is Gaussian with $\mu_0 = 0$ and $\sigma_0^2 = 1$. In Figure 5B, we show the net utility, the equilibrium free energy differences and the dissipated utility (according to Equations (18) and (19)) for different values of β and number of steps k, corresponding to time $t = k\Delta t$ in Equation (27) for a given reference $\Delta t$. In Figure 5C, we show the convergence of the Jarzynski term towards the true equilibrium free energy difference depending on the number of trajectories used for the estimation. We can see in the bottom row that the second law for decision-making, represented by the inequality (17), is fulfilled.
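To make Equation (27) concrete, the sketch below simulates an ensemble of Langevin trajectories under $U_1$ starting from the prior and compares the empirical mean and variance at time $t = k\Delta t$ with the closed-form $\mu(t)$ and $\sigma^2(t)$; the values of α, D and Δt are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)
alpha, D, dt = 0.1, 1.0, 0.001
a1, b1 = 0.2, 0.4                     # U_1(x) = -(0.2 x^2 + 0.4 x) - 0.8
mu0, s0 = 0.0, 1.0                    # prior N(mu0, s0^2)
c = 2.0 * alpha * a1

k = 2000                              # number of update steps, t = k * dt
x = rng.normal(mu0, s0, size=50000)   # ensemble of deliberating decision-makers
for _ in range(k):
    drift = alpha * (-(2.0 * a1 * x + b1)) * dt
    x = x + drift + alpha * np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)

t = k * dt
mu_t = np.exp(-c * t) * mu0 - b1 / (2.0 * a1) * (1.0 - np.exp(-c * t))
var_t = alpha**2 * D / c * (1.0 - np.exp(-2.0 * c * t)) + s0**2 * np.exp(-2.0 * c * t)
print(x.mean(), mu_t)                 # empirical vs. analytic mean
print(x.var(), var_t)                 # empirical vs. analytic variance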

5. Discussion

In this paper, we highlighted the similarities between non-equilibrium thermodynamics and bounded rational decision-making, both for agents that can deliberate before selecting an action and for agents that cannot. Additionally, we derived a novel Jarzynski equality and a Crooks fluctuation theorem for decision-making scenarios with deliberation. We have shown how to use Jarzynski’s and Crooks’ equations in different scenarios to extract relevant variables of the decision-making process, such as the equilibrium free energy difference, the average dissipated utility and the action-path probabilities for both equilibrium posterior distributions and distributions of the backward-in-time protocol. We have provided a number of examples for the no-deliberation and deliberation scenarios, such as one-step lag dynamics, discrete choice tasks and continuous decision-making tasks, which may be applicable to both cognitive and sensorimotor experiments [57].
In Section 3, we started out by directly translating physical non-equilibrium concepts to the decision-making domain in the case of decision-makers that cannot deliberate before acting and therefore lag behind changes in the utility landscape. In analogy to physical systems, we assumed that such decision-makers adapt to each utility change even though they are lagging behind, i.e., even after they have already chosen their action, when there is no benefit of this adaptation at the current time step, but only an improvement of their prior for the next choice. In physical systems, this does not constitute an issue, because there is a continuous adaptation to the energy gradient at every instant, independent of how time is discretized. However, in the decision-making scenario, we assumed a single distinguished moment where the action is issued and the utility is evaluated. This raises the question: why should such decision-makers adapt at all after the action has been selected? Following the argument of no-free-lunch theorems, there would be no benefit in adapting to arbitrary changes. Having a closer look at our examples in Section 3.3, it becomes evident that we implicitly assumed that the utility changes in each step were small, so that there is a benefit in adapting the prior for the next trial. Such assumptions are typically made in learning scenarios, for example the i.i.d. assumption in inference problems, or the assumption in decision-making problems that utility changes in each time step are limited to a finite interval. It should therefore be noted that, while the discussed non-equilibrium relations hold for arbitrary utility changes, in the context of non-deliberative decision-making, we would have to make the additional assumption that utility changes in each step are small and accumulate, so that adaptation is beneficial. Importantly, the appropriateness of adaptation is not an issue when we assume a deliberation process where adaptation occurs before emitting an action, as there is a direct benefit of adaptation in the current trial. This is the general decision problem discussed in Section 4.
While we have considered mainly non-sequential decision-making problems here for simplicity, the same formalism could also be applied to sequential decision-making problems. In that case, one would replace the notion that an action corresponds to a discrete or continuous state x with the notion that an action might consist of choosing an entire trajectory x 1 : τ . In this case also, the utility U ( x 1 : τ , t ) would be defined over trajectories, and these utilities would change over episodes t. Again, one would have to assume that the utility function does not change while the trajectory x 1 : τ is generated. This corresponds to the fact that we assume that the utility is constant for each single episode t (cf. Figure 1), while the deliberative decision-maker can, as it were, sample the new utility function before emitting an action. An example would be finding a trajectory for a pendulum swing-up or a sequence of actions to navigate a maze. A path integral controller [58] would for example exactly produce such trajectories. A deliberative decision-maker would sample many such trajectories until time is up and one trajectory has to be selected, then the utility changes again, and the path integral controller samples new trajectories that have a different shape in line with the new utility function. Our assumption that the temporal evolution of the utility function does not depend on the decision-maker’s action implies that consecutive episodes are independent and can have different utility functions, but the decision-maker can carry its prior from one episode over to the next.
Recently, there has been a renewed interest in modelling decision-making with computational constraints [59,60] both in the computer science and the neuroscience literature, where there is growing evidence that the human brain might exploit sampling [22,61,62,63,64,65] for approximate inference and decision-making [66,67]. Such sampling models have been used for example to explain anchoring biases in choice tasks, because MCMC has finite mixing times and therefore exhibits a dependence on the prior distribution [68,69]. In particular, the idea of using the (expected) relative entropy or the mutual information as a computational cost has been suggested several times in the literature [2,3,23,33,70,71,72]. In [33] and similarly in [20], the authors derive the relative entropy as a control cost from an information-theoretic point of view, under axioms of monotonicity and invariance under relabelling and decomposition. In other fields such as robotics, the relative entropy has also been used as a control cost [18,21,25,58,73,74] to regularize the behaviour of the controller by penalizing controls that are far from the uncontrolled dynamics of the system or to deal with model uncertainty [75]. Naturally, questions regarding the generality of entropic costs as information-processing costs and their potential relation to algorithmic space-time resource constraints carry over to the non-equilibrium scenario and remain a topic for future investigations.
So far, only very few studies have established connections between non-equilibrium thermodynamics and decision-making in the literature, even though non-equilibrium analysis might provide a promising way to relate mechanistic dynamical models to conceptually simpler utility-based models that are often employed as normative models. Jarzynski-like and Crooks-like relations have been noted in the economics literature in gambling scenarios [76] and when studying the arrow of time for decision-making [77,78]. We reported preliminary results for the one-step delayed decision-making in [79,80]. In the machine learning literature, generalized fluctuation theorems have recently been used in [81] to train artificial neural networks with efficient exploration. In general, fluctuation theorems and Jarzynski equalities allow one to estimate free energy differences, which are very important in decision-making because the free energy directly relates to the value function, which is a central concept in control and reinforcement learning. Fluctuation theorems typically make the assumption that the temperature parameter is constant (isothermal transformations) and that initial states are in equilibrium. In our paper, we also made these assumptions, which may limit the generality of our results. Loosening these restrictions (cf. for example [82,83]) might be an important next step for future investigations of non-equilibrium relations in the decision-making context.
Regarding the connection between predictive power and dissipation, [24] has found that non-predictive systems are also highly dissipative systems. In [24], the authors consider the effects of a stochastic driving signal x, mediated by an energy function E(x, s), on the state s of a Markov system with fixed transition probability p(s′|s, x). They regard the Markov system as a computing device and study how much information the state s carries about the driving signal x. They find a fundamental relationship between dissipation (energy inefficiency) and lack of predictive power. Their results concern non-equilibrium trajectories in which x changes at every time point. The intuition is that when a system naturally moves in the direction of a changing energy landscape, this is not only more efficient energetically, but it can also be interpreted in the sense that the system predicts the changing energy landscape. Once the system equilibrates, the energy landscape (i.e., the external variable x) does not change any more, and the mutual information between the state and the external variable x vanishes, as does the dissipation. Therefore, the equilibrium state is of no particular interest in this analysis. If one were to apply this framework to a decision-maker, the decision-maker would be represented by the system with state s, and the driving signal x would be the input provided to the decision-maker. One important difference between [24] and our formulation is that in [24], the driving signal x is stochastic and sampled from a stationary probability distribution, whereas in our formulation, we assume a fixed deterministic driving signal (the sequence of utility functions) without an underlying probability distribution. Assuming such a fixed input prohibits an analysis in terms of mutual information between s and x. Nevertheless, it would be straightforward to allow for stochastic changes in the utility function in our formulation as well, and the results of [24] would then be applicable and complementary. While in [24] the equilibrium is of no particular interest, in our analysis we are interested in the approach to equilibrium and in the resources spent on the way, that is, the time spent deliberating while the environment is assumed to be roughly constant, i.e., it does not change too much on the short time scale of deliberation; then the environment changes again, and the decision-maker can adapt to this change by deliberation (in contrast, in [24], the decision-maker follows a fixed dynamics and does not adapt).
In conclusion, the results presented here bring the fields of stochastic thermodynamics and decision-making closer together by studying decision-making systems as statistical systems just like in thermodynamics. In this analogy, the energy function in physics corresponds to the utility functions in decision-making. Importantly, the statistical ensembles of both decisions and physical states can be conceptualized as non-equilibrium ensembles that reach equilibrium after a finite time adaptation process.

Acknowledgments

This study was supported by the ERC Starting Grant BRISC 678082, by the DFG Grant BR4164/1-1, the DFG Grant KR 3844/2-1 and the Max Planck Society. We thank our funding sources for supporting this study.

Author Contributions

J.G.-M. and D.A.B. conceived of the research. J.G.-M. did the derivations and performed the simulations. J.G.-M., M.K. and D.A.B. contributed to the main ideas of the paper. J.G.-M., M.K. and D.A.B. wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Dissipation for a One-Step Decision Problem

In the following, we derive Equation (16) from Equation (9) for a one-step decision problem. Let $\boldsymbol{x} = (x_0, x_1)$. The probabilities of the forward trajectory are $p(\boldsymbol{x}) = p_0^{\mathrm{eq}}(x_0)\, p(x_1 | x_0, t_1)$, and the probabilities of the backward trajectory are $p^{\dagger}(\boldsymbol{x}) = p_1^{\mathrm{eq}}(x_1)\, p(x_0 | x_1, t_0)$. The detailed balance condition allows us to rewrite $p(x_0 | x_1, t_0)$ as $p(x_0 | x_1, t_0) = \frac{e^{\beta U(x_0, t_0)}}{e^{\beta U(x_1, t_0)}}\, p(x_1 | x_0, t_0)$ with $e^{\beta U(x_0, t_0)} = Z_0\, p_0^{\mathrm{eq}}(x_0)$ and $e^{\beta U(x_1, t_0)} = Z_0\, p_0^{\mathrm{eq}}(x_1)$. With our notation in the deliberation scenario, $x_1$ is the decision, and $x_0$ is arbitrary and can be ignored. This effectively implies independence between $x_0$ and $x_1$, such that $p(x_1 | x_0, t_1) = p(x_1 | t_1)$ and $p(x_0 | x_1, t_0) = \frac{p_0^{\mathrm{eq}}(x_0)}{p_0^{\mathrm{eq}}(x_1)}\, p(x_1 | t_0) = p_0^{\mathrm{eq}}(x_0)$, where the last equality uses that the backward process starts in equilibrium, i.e., $p(x_1 | t_0) = p_0^{\mathrm{eq}}(x_1)$. Substituting the previous identities into the KL-divergence, we obtain:
\frac{1}{\beta} D_{\mathrm{KL}}\left( p(\boldsymbol{x}) \,\middle\|\, p^{\dagger}(\boldsymbol{x}) \right) = \frac{1}{\beta} \sum_{x_0, x_1} p_0^{\mathrm{eq}}(x_0)\, p(x_1 | t_1) \log \frac{p_0^{\mathrm{eq}}(x_0)\, p(x_1 | t_1)}{p_1^{\mathrm{eq}}(x_1)\, p_0^{\mathrm{eq}}(x_0)} = \frac{1}{\beta} D_{\mathrm{KL}}\left( p(x_1 | t_1) \,\middle\|\, p_1^{\mathrm{eq}}(x_1) \right) = \frac{1}{\beta} D_{\mathrm{KL}}\left( \tilde{p}(x) \,\middle\|\, p_1^{\mathrm{eq}}(x) \right)
where we have made the replacement $p(\cdot | t_1) = \tilde{p}(\cdot)$ to obtain the notation from Figure 1.
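The chain of equalities can be verified numerically: with independent forward factors and a backward process built from $p_1^{\mathrm{eq}}$ and $p_0^{\mathrm{eq}}$, the KL-divergence over joint trajectories collapses to the marginal KL-divergence. All distributions below are invented.

import numpy as np

beta = 2.0
p0_eq = np.array([0.5, 0.3, 0.2])     # equilibrium under U(., t0)
p1_eq = np.array([0.2, 0.3, 0.5])     # equilibrium under U(., t1)
p_tilde = np.array([0.4, 0.4, 0.2])   # non-equilibrium strategy p(x1 | t1)

p_fwd = np.outer(p0_eq, p_tilde)      # forward: p(x) = p0_eq(x0) p(x1 | t1)
p_bwd = np.outer(p0_eq, p1_eq)        # backward: p1_eq(x1) p0_eq(x0)

kl_joint = np.sum(p_fwd * np.log(p_fwd / p_bwd)) / beta
kl_marginal = np.sum(p_tilde * np.log(p_tilde / p1_eq)) / beta
print(kl_joint, kl_marginal)          # the two coincide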

Appendix B. Fokker–Planck Solution of the Continuous Decision-Making Problem

A solution of the Fokker–Planck Equation (26) for a known initial state $x_0$ can be found in [84]. Here, we sketch the solution when the initial state is Gaussian distributed.
Consider the following dynamics:
\frac{dx}{dt} = A(x,t) + B(x,t)\, \xi(t)
where $A(x,t) = \alpha \frac{\partial U_1}{\partial x}$ and $B(x,t) = \alpha$. When imposing a quadratic utility function:
U_y(x) = -(a_y x^2 + b_y x)
for an environment indexed by y = 1 , the associated Fokker–Planck equation is
\frac{\partial P}{\partial t} = 2\alpha a_1 \frac{\partial}{\partial x}\left(x P\right) + \alpha b_1 \frac{\partial P}{\partial x} + \alpha^2 D \frac{\partial^2 P}{\partial x^2}.
We will solve this equation by first taking the Fourier transform in the variable x and then solving by the method of characteristics. The Fourier transform is:
\frac{\partial \hat{P}}{\partial t} = -c s \frac{\partial \hat{P}}{\partial s} - \alpha^2 D s^2 \hat{P} + \alpha b_1 i s \hat{P} = -c s \frac{\partial \hat{P}}{\partial s} + \hat{P} \left( c_2 i s - \alpha^2 D s^2 \right)
where $c = 2\alpha a_1$ and $c_2 = \alpha b_1$. Now, applying the method of characteristics:
\frac{d\hat{P}}{dx} = \frac{\partial \hat{P}}{\partial s} \frac{ds}{dx} + \frac{\partial \hat{P}}{\partial t} \frac{dt}{dx}
we obtain that $dt = dx$ and $s = s_0 e^{ct}$, and applying these relations, we get:
\frac{d\hat{P}}{dx} = \frac{d\hat{P}}{dt} = \hat{P} \left( c_2 i s_0 e^{ct} - \alpha^2 D s_0^2 e^{2ct} \right)
Integrating over t between $t = 0$ and $t = t$, we have that
\int \frac{d\hat{P}}{\hat{P}} = \int dt \left( c_2 i s_0 e^{ct} - \alpha^2 D s_0^2 e^{2ct} \right) \;\Longrightarrow\; \log \hat{P} \Big|_{\hat{P}(s_0, t=0)}^{\hat{P}(s,t)} = \left[ \frac{c_2 i s_0}{c} e^{ct} - \frac{\alpha^2 D}{2c} s_0^2 e^{2ct} \right]_{t=0}^{t=t}.
Assuming a Gaussian distribution as a boundary condition with mean $\mu_0$ and variance $\sigma_0^2$, the Fourier transform of the boundary is:
\hat{P}(s_0, t = 0) = \exp\left( -\frac{\sigma_0^2}{2} s_0^2 - i s_0 \mu_0 \right).
Then, the solution in frequency space is:
\hat{P}(s,t) = \exp\left( -\frac{\alpha^2 D}{2c} s^2 \left(1 - e^{-2ct}\right) - \frac{\sigma_0^2}{2} s^2 e^{-2ct} + i s\, \frac{b_1}{2a_1}\left(1 - e^{-ct}\right) - i s\, \mu_0 e^{-ct} \right) = \exp\left( -s^2 f_1(t) - i s f_2(t) \right)
with $f_1(t) = \frac{\alpha^2 D}{2c}\left(1 - e^{-2ct}\right) + \frac{\sigma_0^2}{2} e^{-2ct}$ and $f_2(t) = e^{-ct}\mu_0 - \frac{b_1}{2a_1}\left(1 - e^{-ct}\right)$. Transforming back to the signal domain, we obtain:
\sigma^2(t) = 2 f_1(t) = \frac{\alpha^2 D}{c}\left(1 - e^{-2ct}\right) + \sigma_0^2 e^{-2ct}, \qquad \mu(t) = f_2(t) = e^{-ct}\mu_0 - \frac{b_1}{2a_1}\left(1 - e^{-ct}\right).
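As a consistency check, the Gaussian with these moments can be verified symbolically to solve the Fokker–Planck equation above; the snippet below uses sympy and may take a few seconds to simplify.

import sympy as sp

x, t = sp.symbols('x t', real=True)
alpha, D, a1, mu0, s0 = sp.symbols('alpha D a_1 mu_0 sigma_0', positive=True)
b1 = sp.Symbol('b_1', real=True)
c = 2 * alpha * a1

mu = sp.exp(-c * t) * mu0 - b1 / (2 * a1) * (1 - sp.exp(-c * t))
var = alpha**2 * D / c * (1 - sp.exp(-2 * c * t)) + s0**2 * sp.exp(-2 * c * t)
P = sp.exp(-(x - mu)**2 / (2 * var)) / sp.sqrt(2 * sp.pi * var)

# Residual of the Fokker-Planck equation; it should simplify to zero.
residual = (sp.diff(P, t)
            - 2 * alpha * a1 * sp.diff(x * P, x)
            - alpha * b1 * sp.diff(P, x)
            - alpha**2 * D * sp.diff(P, x, 2))
print(sp.simplify(residual))          # prints 0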

References

  1. Ortega, P.A.; Braun, D.A. Thermodynamics as a theory of decision-making with information-processing costs. Proc. R. Soc. A Math. Phys. Eng. Sci. 2013, 469. [Google Scholar] [CrossRef]
  2. Wolpert, D.H. Information theory-the bridge connecting bounded rational game theory and statistical physics. In Complex Engineered Systems; Springer: New York, NY, USA, 2006; pp. 262–290. [Google Scholar]
  3. Tishby, N.; Polani, D. Information theory of decisions and actions. In Perception-Action Cycle; Springer: New York, NY, USA, 2011; pp. 601–636. [Google Scholar]
  4. Wolpert, D.H. The free energy requirements of biological organisms; implications for evolution. Entropy 2016, 18, 138. [Google Scholar] [CrossRef]
  5. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  6. Savage, L.J. The Foundations of Statistics; John Wiley and Sons: New York, NY, USA, 1954. [Google Scholar]
  7. Simon, H.A. A behavioural model of rational choice. Q. J. Econ. 1955, 69, 99–118. [Google Scholar] [CrossRef]
  8. Simon, H.A. Rational decision-making in business organizations. Am. Econ. Rev. 1979, 69, 493–513. [Google Scholar]
  9. Russell, S. Rationality and intelligence. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 20–25 August 1995; pp. 950–957. [Google Scholar]
  10. Russell, S.J.; Subramanian, D. Provably bounded-optimal agents. J. Artif. Intell. Res. 1995, 2, 575–609. [Google Scholar]
  11. Howes, A.; Lewis, R.; Vera, A. Rational adaptation under task and processing constraints: Implications for testing theories of cognition and action. Psychol. Rev. 2009, 116, 717–751. [Google Scholar] [CrossRef] [PubMed]
  12. Horvitz, E. Reasoning under Varying and Uncertain Resource Constraints; AAAI: Menlo Park, CA, USA, 1988; Volume 88, pp. 111–116. [Google Scholar]
  13. Dean, T. An Analysis of time-dependent planning. In Proceedings of the Seventh AAAI National Conference on Artificial Intelligence, Saint Paul, Minnesota, 21–26 August 1988. [Google Scholar]
  14. Zilberstein, S. Using anytime algorithms in intelligent systems. AI Mag. 1996, 17, 73. [Google Scholar] [CrossRef]
  15. Kahneman, D. Maps of bounded rationality: Psychology for behavioural economics. Am. Econ. Rev. 2003, 93, 1449–1475. [Google Scholar] [CrossRef]
  16. Gigerenzer, G.; Goldstein, D.G. Reasoning the fast and frugal way: Models of bounded rationality. Psychol. Rev. 1996, 103, 650–669. [Google Scholar] [CrossRef] [PubMed]
  17. Camerer, C. Behavioral Game Theory: Experiments in Strategic Interaction; Princeton University Press: Princeton, NJ, USA, 2003. [Google Scholar]
  18. Todorov, E. Efficient computation of optimal actions. Proc. Natl. Acad. Sci. USA 2009, 106, 11478–11483. [Google Scholar] [CrossRef] [PubMed]
  19. Still, S. An information-theoretic approach to interactive learning. Europhys. Lett. 2009, 85, 28005. [Google Scholar] [CrossRef]
  20. Ortega, P.; Braun, D. Information, utility and bounded rationality. Lect. Notes Artif. Intell. 2011, 6830, 269–274. [Google Scholar]
  21. Braun, D.; Ortega, P.; Theodorou, E.; Schaal, S. Path integral control and bounded rationality. In Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning (ADPRL), Paris, France, 11–15 April 2011; pp. 202–209. [Google Scholar]
  22. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
  23. Rubin, J.; Shamir, O.; Tishby, N. Trading value and information in MDPs. Intell. Syst. Ref. Libr. 2012, 28, 57–74. [Google Scholar]
  24. Still, S.; Sivak, D.A.; Bell, A.J.; Crooks, G.E. Thermodynamics of prediction. Phys. Rev. Lett. 2012, 109, 120604. [Google Scholar] [CrossRef] [PubMed]
  25. Kappen, H.; Gómez, V.; Opper, M. Optimal control as a graphical model inference problem. Mach. Learn. 2012, 1, 1–11. [Google Scholar] [CrossRef]
  26. Rawlik, K.; Toussaint, M.; Vijayakumar, S. On stochastic optimal control and reinforcement learning by approximate inference. In Proceedings of the Robotics: Science and Systems, Sydney, Australia, 9–13 July 2012. [Google Scholar]
  27. Braun, D.A.; Ortega, P.A. Information-theoretic bounded rationality and ε-optimality. Entropy 2014, 16, 4662–4676. [Google Scholar] [CrossRef]
  28. Luce, R. Individual Choice Behavior; Wiley: Oxford, UK, 1959. [Google Scholar]
  29. Meginnis, J. A New Class of Symmetric Utility Rules for Gambles, Subjective Marginal Probability Functions, and a Generalized Bayes Rule; Columbia University, Graduate School of Business: New York, NY, USA, 1976; pp. 471–476. [Google Scholar]
  30. McFadden, D. Econometric models for probabilistic choice among products. J. Bus. 1980, 53, S13–S29. [Google Scholar] [CrossRef]
  31. McKelvey, R.D.; Palfrey, T.R. Quantal response equilibria for normal form games. Games Econ. Behav. 1995, 10, 6–38. [Google Scholar] [CrossRef]
  32. Fudenberg, D.; Levine, D. The Theory of Learning in Games; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  33. Mattsson, L.G.; Weibull, J.W. Probabilistic choice and procedurally bounded rationality. Games Econ. Behav. 2002, 41, 61–78. [Google Scholar] [CrossRef]
  34. Sims, C.A. Implications of rational inattention. J. Monetary Econ. 2003, 50, 665–690. [Google Scholar] [CrossRef]
  35. Polani, D.; Nehaniv, C.; Martinetz, T.; Kim, J. Relevant information in optimized persistence vs. progeny strategies. In Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems, Bloomington, IN, USA, 3–7 June 2006. [Google Scholar]
  36. Stratonovich, R. On value of information. Izv. USSR Acad. Sci. Tech. Cybern. 1965, 5, 3–12. [Google Scholar]
  37. Kanaya, F.; Nakagawa, K. On the practical implication of mutual information for statistical decision-making. IEEE Trans. Inf. Theory 1991, 37, 1151–1156. [Google Scholar] [CrossRef]
  38. Akamatsu, T. Cyclic flows, Markov process and stochastic traffic assignment. Transp. Res. Part B Methodol. 1996, 30, 369–386. [Google Scholar] [CrossRef]
  39. Belavkin, R.V. Information trajectory of optimal learning. In Dynamics of Information Systems; Springer: New York, NY, USA, 2010; pp. 29–44. [Google Scholar]
  40. Rieskamp, J. The probabilistic nature of preferential choice. J. Exp. Psychol. Learn. Mem. Cogn. 2008, 34, 1446–1465. [Google Scholar] [CrossRef] [PubMed]
  41. Andrieu, C.; Freitas, N.; Doucet, A.; Jordan, M.I. An introduction to MCMC for machine learning. Mach. Learn. 2003, 50, 5–43. [Google Scholar] [CrossRef]
  42. Ortega, P.A.; Braun, D.A.; Tishby, N. Monte Carlo methods for exact & efficient solution of the generalized optimality equations. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 4322–4327. [Google Scholar]
  43. Ortega, P.A.; Braun, D.A. Generalized Thompson sampling for sequential decision-making and causal inference. Complex Adapt. Syst. Model. 2014, 2, 1–23. [Google Scholar]
  44. Crooks, G.E. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. J. Stat. Phys. 1998, 90, 1481–1487. [Google Scholar] [CrossRef]
  45. Jarzynski, C. Equalities and inequalities: Irreversibility and the second law of thermodynamics at the nanoscale. Annu. Rev. Condens. Matter Phys. 2011, 2, 329–351. [Google Scholar] [CrossRef]
  46. Gomez-Marin, A.; Parrondo, J.; van den Broeck, C. Lower bounds on dissipation upon coarse graining. Phys. Rev. E 2008, 78, 011107. [Google Scholar] [CrossRef] [PubMed]
  47. Roldán, É. Irreversibility and Dissipation in Microscopic Systems; Springer: New York, NY, USA, 2014. [Google Scholar]
  48. Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 1997, 78, 2690–2693. [Google Scholar] [CrossRef]
  49. Grünwald, P. The safe Bayesian. In Proceedings of the International Conference on Algorithmic Learning Theory, Lyon, France, 29–31 October 2012; pp. 169–183. [Google Scholar]
  50. Caticha, A.; Giffin, A. Updating Probabilities. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering; AIP Publishing: Melville, NY, USA, 2006; Volume 872, pp. 31–42. [Google Scholar]
  51. Giffin, A.; Caticha, A. Updating Probabilities with Data and Moments. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering; AIP Publishing: Melville, NY, USA, 2006; Volume 954, pp. 74–84. [Google Scholar]
  52. Chib, S.; Greenberg, E. Understanding the Metropolis–Hastings algorithm. Am. Stat. 1995, 49, 327–335. [Google Scholar]
  53. Gaveau, B.; Schulman, L. A general framework for non-equilibrium phenomena: The master equation and its formal consequences. Phys. Lett. A 1997, 229, 347–353. [Google Scholar] [CrossRef]
  54. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 2005, 95, 040602. [Google Scholar] [CrossRef] [PubMed]
  55. Ortega, P.A.; Stocker, A.A. Human decision-making under limited time. In Advances in Neural Information Processing Systems 29; Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 100–108. [Google Scholar]
  56. Garcia-Palacios, J. Introduction to the theory of stochastic processes and Brownian motion problems. arXiv, 2007; arXiv:cond-mat/0701242. [Google Scholar]
  57. Jarvstad, A.; Hahn, U.; Rushton, S.K.; Warren, P.A. Perceptuo-motor, cognitive, and description-based decision-making seem equally good. Proc. Natl. Acad. Sci. USA 2013, 110, 16271–16276. [Google Scholar] [CrossRef] [PubMed]
  58. Kappen, H.J. Path integrals and symmetry breaking for optimal control theory. J. Stat. Mech. Theory Exp. 2005, 2005, P11011. [Google Scholar] [CrossRef]
  59. Gershman, S.J.; Horvitz, E.J.; Tenenbaum, J.B. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015, 349, 273–278. [Google Scholar] [CrossRef] [PubMed]
  60. Parkes, D.C.; Wellman, M.P. Economic reasoning and artificial intelligence. Science 2015, 349, 267–272. [Google Scholar] [CrossRef] [PubMed]
  61. Moreno-Bote, R.; Knill, D.C.; Pouget, A. Bayesian sampling in visual perception. Proc. Natl. Acad. Sci. USA 2011, 108, 12491–12496. [Google Scholar] [CrossRef] [PubMed]
  62. Levy, R.P.; Reali, F.; Griffiths, T.L. Modeling the effects of memory on human online sentence processing with particle filters. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems Vancouver, BC, Canada, 7–10 December 2009; pp. 937–944. [Google Scholar]
  63. Griffiths, T.L.; Tenenbaum, J.B. Optimal predictions in everyday cognition. Psychol. Sci. 2006, 17, 767–773. [Google Scholar] [CrossRef] [PubMed]
  64. Sanborn, A.N.; Griffiths, T.L.; Navarro, D.J. Rational approximations to rational models: Alternative algorithms for category learning. Psychol. Rev. 2010, 117, 1144–1167. [Google Scholar] [CrossRef] [PubMed]
  65. Fiser, J.; Berkes, P.; Orbán, G.; Lengyel, M. Statistically optimal perception and learning: From behaviour to neural representations. Trends Cogn. Sci. 2010, 14, 119–130. [Google Scholar] [CrossRef] [PubMed]
  66. Lieder, F.; Griffiths, T.; Goodman, N. Burn-in, bias, and the rationality of anchoring. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 2690–2798. [Google Scholar]
  67. Vul, E.; Goodman, N.; Griffiths, T.L.; Tenenbaum, J.B. One and done? Optimal decisions from very few samples. Cogn. Sci. 2014, 38, 599–637. [Google Scholar] [CrossRef] [PubMed]
  68. Lieder, F.; Griffiths, T.L.; Huys, Q.J.M.; Goodman, N.D. The anchoring bias reflects rational use of cognitive resources. Psychon. Bull. Rev. 2017. [Google Scholar] [CrossRef]
  69. Lieder, F.; Griffiths, T.L.; Huys, Q.J.M.; Goodman, N.D. Empirical evidence for resource-rational anchoring and adjustment. Psychon. Bull. Rev. 2017. [Google Scholar] [CrossRef] [PubMed]
  70. Genewein, T.; Leibfried, F.; Grau-Moya, J.; Braun, D.A. Bounded rationality, abstraction, and hierarchical decision-making: An information-theoretic optimality principle. Front. Robot. AI 2015, 2, 27. [Google Scholar] [CrossRef]
  71. Still, S.; Precup, D. An information-theoretic approach to curiosity-driven reinforcement learning. Theory Biosci. 2012, 131, 139–148. [Google Scholar] [CrossRef] [PubMed]
  72. Ortega, P.A.; Braun, D.A. A minimum relative entropy principle for learning and acting. J. Artif. Intell. Res. 2010, 38, 475–511. [Google Scholar]
  73. Theodorou, E.; Buchli, J.; Schaal, S. A generalized path integral control approach to reinforcement learning. J. Mach. Learn. Res. 2010, 9999, 3137–3181. [Google Scholar]
  74. Peters, J.; Mülling, K.; Altün, Y. Relative Entropy Policy Search. In Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010; pp. 1607–1612. [Google Scholar]
  75. Grau-Moya, J.; Leibfried, F.; Genewein, T.; Braun, D.A. Planning with information-processing constraints and model uncertainty in Markov decision processes. arXiv, 2016; arXiv:1604.02080. [Google Scholar]
  76. Hirono, Y.; Hidaka, Y. Jarzynski-type equalities in gambling: Role of information in capital growth. arXiv, 2015; arXiv:1505.06216. [Google Scholar]
  77. Mlodinow, L.; Brun, T.A. Relation between the psychological and thermodynamic arrows of time. Phys. Rev. E 2014, 89, 052102. [Google Scholar] [CrossRef] [PubMed]
  78. Roldán, É.; Neri, I.; Dörpinghaus, M.; Meyr, H.; Jülicher, F. Decision making in the arrow of time. Phys. Rev. Lett. 2015, 115, 250602. [Google Scholar] [CrossRef] [PubMed]
  79. Grau-Moya, J.; Braun, D.A. Bounded rational decision-making in changing environments. arXiv, 2013; arXiv:1312.6726. [Google Scholar]
  80. Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. The effect of model uncertainty on cooperation in sensorimotor interactions. J. R. Soc. Interface 2013, 10, 20130554. [Google Scholar] [CrossRef] [PubMed]
  81. Hayakawa, T.; Aoyagi, T. Learning in neural networks based on a generalized fluctuation theorem. Phys. Rev. E 2015, 92, 052710. [Google Scholar] [CrossRef] [PubMed]
  82. Chatelain, C. A temperature-extended Jarzynski relation: Application to the numerical calculation of surface tension. J. Stat. Mech. Theory Exp. 2007, 2007, P04011. [Google Scholar] [CrossRef]
  83. Gong, Z.; Quan, H.T. Jarzynski equality, Crooks fluctuation theorem, and the fluctuation theorems of heat for arbitrary initial states. Phys. Rev. E 2015, 92, 012131. [Google Scholar] [CrossRef] [PubMed]
  84. Risken, H. Fokker–Planck equation. In The Fokker–Planck Equation; Springer: New York, NY, USA, 1984; pp. 63–95. [Google Scholar]
Figure 1. Temporal structure of the one-step decision problem. An instantaneous change in the environment occurs at time $t_0$, represented by a vertical jump from $\lambda_0$ to $\lambda_1$ in the upper panels, which translates directly into a change in free energy represented by $\Delta F$ in the lower panels. The system’s previous state at $t_0$ is given by $p_0^{\mathrm{eq}}(x)$, i.e., the equilibrium distribution for $U_{\lambda_0}(x)$. The new posterior equilibrium is given by $p_1^{\mathrm{eq}}(x)$, i.e., the equilibrium distribution for $U_{\lambda_1}(x)$. When given unlimited time, the decision-maker will eventually evolve to $p_1^{\mathrm{eq}}(x)$. Deliberative and non-deliberative decision-makers differ in how much time they get to adapt to the change in utility before they have to choose an action x that provides them with the utility gain $\Delta U(x) = U_{\lambda_1}(x) - U_{\lambda_0}(x)$. Left: In direct analogy to physical thermodynamics, the non-deliberative decision-maker has to emit an action before it can adapt to any changes in utility and therefore acts according to the previous strategy $p_0^{\mathrm{eq}}(x)$ at time $t_0$. On average, with such a strategy, the utility gained is $\langle U_{\mathrm{net}} \rangle = \sum_x p_0^{\mathrm{eq}}(x) \Delta U(x)$ at $t_0$, and the dissipation is $\langle U_{\mathrm{diss}} \rangle = \Delta F - \langle U_{\mathrm{net}} \rangle$. Right: The deliberative decision-maker is allowed to adapt to the change in utility for a certain time $\Delta t^*$ before the action has to be emitted. This deliberation period allows the decision-maker to compute a better strategy $\tilde{p}(x)$. In this case, the net utility is $\langle U_{\mathrm{net}} \rangle = \sum_x \tilde{p}(x) \Delta U(x) - \frac{1}{\beta} D_{\mathrm{KL}}\left( \tilde{p}(x) \,\|\, p_0^{\mathrm{eq}}(x) \right)$.
Figure 2. Surprise, dissipation and free energy optimum as a function of the number of data points per batch in a Bayesian inference task. When the decision-maker processes all the data in one step, it incurs maximum surprise and dissipation. However, when incorporating the data slowly, the surprise and dissipation remain modest. The free energy optimum is a function of the data only, independent of how they are incorporated.
Figure 3. Trajectories of actions from the Metropolis–Hastings algorithm with β = 22.5 and proposal standard deviation $\sigma_p = 0.1$, in a forward (blue) or backward (brown) protocol, for an instant change in the environment (first panel) and for a slow change in the environment (second panel). In both cases, the total change in the environment is from $\mu_0 = 0$ to $\mu_1 = 1$. The last panel shows the dissipation for the forward protocol (blue) for both the instant and the slow change in the environment. The difference in probability densities of forward and backward trajectories relates directly to dissipation and to hysteresis effects.
Figure 4. Episodic decision-making with deliberation. (A) Utility functions and equilibrium distributions for the two decision problems. (B) For different β and k, we show (left) the average net utility, (middle) the free energy difference and (right) the average dissipated utility. (C) Top panels: empirical averages approximating the Jarzynski expression as a function of the number of trajectories T, using different β and different numbers of available samples k. Bottom panels: the associated expected net utility gain, which in the limit $T \to \infty$ is lower than the free energy difference (horizontal light red line).
Figure 5. Langevin dynamics simulations. (A) In blue, the two utility changes $\Delta U_1$ and $\Delta U_2$; in red, the prior $p_0$; and in purple, the posterior for β = 0.5. (B) For different β and time $t = k\Delta t$ depending directly on k, we show (left) the average net utility, (middle) the free energy difference and (right) the average dissipated utility. (C) Top panels: convergence of the empirical Jarzynski estimate as a function of the number of trajectories T, using different β and different numbers of update steps k. Bottom panels: the associated expected net utility gain, which in the limit $T \to \infty$ is lower than the free energy difference (horizontal light red line). With these simulations, we validate Equation (21).
