Article

Microscopically Reversible Pathways with Memory

by
Jose Ricardo Arias-Gonzalez
Centro de Tecnologías Físicas, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
Mathematics 2021, 9(2), 127; https://doi.org/10.3390/math9020127
Submission received: 16 May 2020 / Revised: 4 January 2021 / Accepted: 6 January 2021 / Published: 8 January 2021

Abstract:
Statistical mechanics is a physics theory that deals with ensembles of microstates of a system compatible with environmental constraints and that on average define a thermodynamic state. The evolution of a small system is normally subjected to changing constraints, as set by a protocol, and involves a stochastic dependence on previous events. Here, we generalize the dynamic trajectories described by a realization of a physical system without dissipation to include those in which the history of previous events is necessary to understand its future. This framework is then used to characterize the processes experienced by the stochastic system, as derived from ensemble averages over the available pathways. We find that the pathways that the system traces in the presence of a protocol entail different statistics from those in its absence and prove that both types of pathways are equivalent in the limit of independent events. Such equivalence implies that a thermodynamic system cannot evolve away from equilibrium in the absence of memory. These results are useful to interpret single-molecule experiments in biophysics and other fields in nanoscience, as well as an adequate platform to describe non-equilibrium processes.

1. Introduction

Reversibility refers to quasistatic processes that invert isentropically. Such processes involve sufficiently slow dynamics to prevent heat flows; more precisely, they take place through a timeless succession of states along which there is no energy dissipation. Reversible processes are normally analyzed by equilibrium statistics: a so-called partition function is used to characterize the thermodynamic states described by a system by weighing all possible configurations of microstates compatible with each state [1]. For a system whose fate depends on neither the past nor the present, like many macroscopic systems, both equilibrium and frictionless quasistatic processes can be examined through the same mathematical framework because such a system is able to explore all the possible configurations in an indefinite time. In this paper, we introduce for the first time memory effects of full extent through an ab initio theory that reveals their distinctive statistics.
Phase space is a long-standing concept used to analyze the dynamics of physical systems, transcending the configuration space of position coordinates. From the classical to the quantum realm, the system—a point comprising a set of generalized position and momentum coordinates—traces a pathway (or trajectory) described by the time-ordered union of points in this space [2]. Phase-space trajectories determine the evolution of the thermodynamic microstates of similarly constructed systems under the same protocol through so-called processes, which involve ensemble averages over such trajectories [3]. The principle of microscopic reversibility asserts that the probability of any pathway of a system realization through phase space is equivalent to that of the time-reversed pathway. While individual pathways are reversible, processes may not be, thus entailing energy dissipation [4]. We herein apply the principle of microscopic reversibility to systems that recall not only the present but also the past through an unprecedented mathematical treatment in which phase-space trajectories are analyzed exactly.
Microscopic reversibility and phase space underlie the theory of dynamical systems in physics, but they usually remain at the conceptual level. With the advent of nanoscience, the urge to understand so-called small systems [5]—those whose energy exchanges are smaller than or similar to the thermal level—is pushing these concepts into direct application. A small system typically measures from a few to hundreds of nanometers and contains from one to some thousands of atoms or molecules. Due to its scale, a small system is deemed to progress along only one of the possible trajectories, as set by a certain protocol. Its energy balance in terms of work and heat as a function of the temperature has to be evaluated at the level of the single phase-space pathway. In addition, the protocol by which this system evolves has to be considered when ensemble-average thermodynamics is addressed, especially when memory effects are present.
Biological systems have become central players in this urge to understand small systems. For many processes that take place in the cell, the study of each molecular trajectory individually is crucial for a complete comprehension of the role of fluctuations [6]. Biophysical processes have traditionally been analyzed by bulk (ensemble-average) strategies, but the importance of tackling them at the single-molecule and single-chemical-reaction level has raised much scientific and technological interest in the last twenty years [7,8]. In this regard, replication, transcription, and translation in molecular biology, just to name a few, are processes whose thorough investigation requires single-molecule approaches [9]: nucleotides or amino acids are incorporated sequentially by a protein whose operation determines a certain copying direction and a mechanism, both of them responsible for chain stability and information fidelity. Sustained by physical interactions, these biological processes naturally encompass memory effects because nucleotide and amino acid polymers carry genetic meaning. Protein folding is another in singulo process that showcases the firm link between physical interactions and memory, and how this link brings thermodynamic consequences to the structural and functional fate of a polypeptide [10]. Certainly, both the synthesis protocol and the amino acid sequence stochastically guide protein folding dynamics through preferential phase-space trajectories [11]. Another biological example in which memory is a key ingredient is that of learning, in which organisms collect information to perform complex control tasks [12]. In general, small systems may present mutual and internal correlations due to physical interactions [13,14], which, steered by a protocol, make up their biography, a history of events that influences their future.
To analyze small systems with memory theoretically, thermodynamics uses stochastic variables and statistical mechanics [15,16], often considering evolutions in which only the immediate present is necessary to inspect the future (known as memoryless or Markovian evolutions). The Tsallis entropy generalization enabled non-extensive analyses of physical systems that keep memory of previous events, that is, of systems that recall not only the present but also past events (non-Markovian evolutions) [17]; this generalization, in turn, describes non-equilibrium dynamics compactly and elegantly. Open-system formalisms, including spectral analysis, introduce memory effects through non-Markovian approaches to address the environment surrounding the system under study [18,19,20,21,22]. Complete memory effects have been considered in the study of spatial chains made up of physical subunits, including those with symbolic meaning, to access non-equilibrium dynamics [23] and information management in nanoscale systems [24]. They have also been taken into account to analyze abstract strings of symbols in the paradigm of communication [25]. However, full memory effects in temporal chains of events have never been treated from an exact perspective.
In the following, we develop a general framework for reversible pathways with memory to address the evolution of physical systems. Although not restricted to them, we will discuss this framework in the context of small systems, where strong interest resides in their stochastic behavior and in the wealth of current applications. We analyze microscopic pathways exactly, tracing explicitly the memory that a physical system retains along the pathway it describes, and we derive consistency constraints. Our theory discerns between protocol-driven and equilibrium pathways. We analyze their associated dynamics at the level of the ensemble average, as taken over the pathways through which the system can evolve. We end by illustrating our theory in the context of computing biomolecular systems, where memory effects are inherent to their thermodynamic modeling.

2. Analysis

2.1. Concepts and Terminology

The evolution of a small system is strongly affected by thermal fluctuations. Under the action of external perturbations, the statistical characterization of any set of ensembles of the same system does not determine a unique thermodynamic description because different perturbations drive the system preferentially through different sets of pathways. It is then necessary to introduce the protocol, $\lambda$, a collection of control parameters that describes the mechanism and external constraints acting on the system [5]. Given an initial and a final microstate of the system at time instants $t = t_0$ and $t = t_f$, respectively, every pathway (or trajectory) that connects them in phase space will be specified by a temporal sequence, $\nu = \{x_{t_0}, \ldots, x_t, \ldots, x_{t_f}\}$, of stochastic events $X_t = x_t$ ($x_t \in \mathcal{X}$, with $\mathcal{X}$ the domain of the random variables, of cardinality $|\mathcal{X}|$, and $t = t_0, \ldots, t_f$) under protocol $\lambda$.
We will use the term event for $x_t$ and microstate for the event at time $t$ plus the ordered sequence of previous events that the system recalls. We will use the term state for the ensemble average over the pathways that the system can follow until time $t$ driven by protocol $\lambda$. This is the notion of macrostate in statistical mechanics [1], but since that term is misleading in a nanoscale context, we prefer state. In the limit in which the memory of a system at time $t$ extends to its complete history, $\{x_{t_0}, \ldots, x_{t-1}\}$, microstates and pathways are equivalent. In the limit of independent events, events and microstates are equivalent and the term pathway is not necessary, since microstates do not depend on how they have been reached by the system. A state is solely determined by the protocol by which events have been driven. For the sake of conceptual clarity, we will also distinguish between process and pathway, the former being obtained by averages over ensembles of the latter. The principle of microscopic reversibility, stated in the Introduction, stands at the level of the pathway.
We will tackle sequences $\nu$ as directional, stochastic chains with memory [26] in the time domain. Events experienced by the system at each time instant, $x_t$, involve a set of stochastic variables according to the degrees of freedom of the system:
$$x_t \equiv \big(q_1(t), \ldots, q_D(t);\; p_1(t), \ldots, p_D(t)\big), \tag{1}$$
or, abbreviated, $x_t \equiv (q_h(t), p_h(t))$, $h = 1, \ldots, D$, where $D$ is the number of degrees of freedom and $q_h$ and $p_h$ are generalized position and momentum coordinates, respectively.
The microstate of the system at time $t$ is determined not only by the value of $X_t$ but also by how $x_t$ has been reached. Therefore, the energy of each microstate is a function of the previous events, namely
$$E(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{2}$$
which accounts for the memory, i.e., the relative interactions of the present event, $x_t$, with the previous ones, $\{x_{t-1}, \ldots, x_{t_0}\}$. As noted, we will represent the random variables corresponding to the memory of a microstate in a thermodynamic function by a semicolon followed by the variable values in decreasing subscript order.
In addition to the energy, the entropy is central to the thermodynamic description of a physical system: it characterizes the level of reversibility in the system's evolution by weighing the level of uncertainty in the stochastic events along phase-space pathways. In this regard, it is connected to the entropy defined in information theory, as reflected in the mathematical similarity between the Gibbs and the Shannon entropies (appearing later on) [27].

2.2. Protocol-Driven Pathways

The probability of a microstate, according to a canonical formalism, is:
$$p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0}) = \frac{e^{-\beta E(x_t; x_{t-1}, \ldots, x_{t_0})}}{\sum_{x'_t} e^{-\beta E(x'_t; x_{t-1}, \ldots, x_{t_0})}} = \frac{e^{-\beta E(x_t; x_{t-1}, \ldots, x_{t_0})}}{Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0})}, \tag{3}$$
with $\beta = 1/(kT)$ ($T$ the absolute temperature and $k$ the Boltzmann constant), $\sum_{x'_t} p^{(\lambda)}(x'_t \mid x_{t-1}, \ldots, x_{t_0}) = 1$, and the partition function given by
$$Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0}) \equiv \sum_{x'_t} e^{-\beta E(x'_t; x_{t-1}, \ldots, x_{t_0})}. \tag{4}$$
Note that $\sum_{x'_t} \equiv \sum_{q'_1(t), \ldots, q'_D(t)} \sum_{p'_1(t), \ldots, p'_D(t)}$. We use the prime symbol in $x'_t$ to emphasize that the sum does not affect the previous events, $x_{t_0}, \ldots, x_{t-1}$, since they are fixed at present time $t$. For completeness, consider that for $t = t_0$, $p^{(\lambda)}(x_{t_0}) = \exp\big(-\beta E(x_{t_0})\big) / \sum_{x'_{t_0}} \exp\big(-\beta E(x'_{t_0})\big)$. $p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})$ is the probability that the random variable $X_t$ (which, as explained in the previous subsection, comprises a set of generalized position and momentum coordinates, Equation (1)) takes value $x_t$ at time $t$ provided that the values of this random variable at the previous instants, $t-1, \ldots, t_0$, are $X_{t-1} = x_{t-1}, \ldots, X_{t_0} = x_{t_0}$, respectively.
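The conditional microstate probability of Equations (3) and (4) can be checked numerically. The following sketch is a minimal illustration, assuming a hypothetical two-state event domain and an illustrative toy energy (not taken from the text) in which the cost of the present event is modulated by a geometrically decaying coupling to every remembered event:

```python
import itertools
import math

BETA = 1.0   # beta = 1/(kT), arbitrary units
X = (0, 1)   # event domain (hypothetical two-state system)

def energy(x_t, history, J=0.5):
    # Illustrative microstate energy with full memory: the cost of the
    # present event x_t is modulated by every previous event, with a
    # coupling that decays geometrically with the age of the memory.
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def Z_inst_protocol(history):
    # Instantaneous partition function, Eq. (4): the sum runs over the
    # present event only; the history is held fixed.
    return sum(math.exp(-BETA * energy(x, history)) for x in X)

def p_cond_protocol(x_t, history):
    # Conditional (canonical) probability of a microstate, Eq. (3).
    return math.exp(-BETA * energy(x_t, history)) / Z_inst_protocol(history)

# Normalization holds for every possible history of past events:
for n in range(3):
    for history in itertools.product(X, repeat=n):
        total = sum(p_cond_protocol(x, list(history)) for x in X)
        assert abs(total - 1.0) < 1e-12
```

Because the toy coupling raises the energy of $x_t = 1$ when past events were also 1, the conditional probability of the excited event decreases with an excited history, making the memory effect directly visible.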
The probability of a pathway is:
$$p_\nu^{(\lambda)} \equiv \mathrm{Pr}^{(\lambda)}\{X_{t_0} = x_{t_0}, \ldots, X_{t_f} = x_{t_f}\} = p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}) = p^{(\lambda)}(x_{t_0})\, p^{(\lambda)}(x_{t_0+1} \mid x_{t_0}) \cdots p^{(\lambda)}(x_{t_f} \mid x_{t_f-1}, \ldots, x_{t_0}) = \prod_{t=t_0}^{t_f} p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0}). \tag{5}$$
$p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f})$ is the probability that a particular sequence of microstates takes place, that is, that the random variable $X$ takes the specific sequence of values $X_{t_0} = x_{t_0}, \ldots, X_{t_f} = x_{t_f}$ from $t_0$ to $t_f$. According to probability theory [28], it is obtained as a product of the conditional probabilities $p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})$, for $t = t_0$ to $t = t_f$. This probability can be expressed in a more compact form [26]:
$$p_\nu^{(\lambda)} = \frac{e^{-\beta E_\nu}}{Z_\nu^{(\lambda)}}, \tag{6}$$
such that $\sum_{\nu=1}^{N} p_\nu^{(\lambda)} \equiv \sum_{x_{t_0}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}) = 1$. In this equation,
$$E_\nu \equiv E(x_{t_0}, \ldots, x_{t_f}) = \sum_{t=t_0}^{t_f} E(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{7}$$
and $Z_\nu^{(\lambda)}$ is the sequence-dependent partition function,
$$Z_\nu^{(\lambda)} \equiv \prod_{t=t_0}^{t_f} Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0}) \tag{8}$$
$$= \sum_{x'_{t_0}, \ldots, x'_{t_f}}^{\nu(\lambda)} \exp\Big(-\beta \sum_{t=t_0}^{t_f} E(x'_t; x_{t-1}, \ldots, x_{t_0})\Big) = \sum_{\nu'(\lambda)=1}^{N} \exp\big(-\beta E_{\nu'\nu}\big), \tag{9}$$
with $E_{\nu'\nu}$ the two-sequence energy,
$$E_{\nu'\nu} \equiv \sum_{t=t_0}^{t_f} E(x'_t; x_{t-1}, \ldots, x_{t_0}). \tag{10}$$
$N = |\mathcal{X}|^{\Delta t + 1}$, $\Delta t = t_f - t_0$, is the number of configurations along pathway $\nu$, which results from combining $\Delta t + 1$ events and $|\mathcal{X}| = |\mathcal{Q}|^D |\mathcal{P}|^D$ possibilities for each event ($\mathcal{Q}$ and $\mathcal{P}$ are the discrete domains of $q_h$ and $p_h$, $h = 1, \ldots, D$, respectively, and $|\mathcal{Q}|$ and $|\mathcal{P}|$ indicate the number of elements in the range of $\mathcal{Q}$ and $\mathcal{P}$, respectively). Subscript $\nu(\lambda)$ in the sigma symbol reminds us that the sum over the multiple $x'_t$ variables, which are correlated due to memory effects, has to be evaluated according to the constraints imposed by the protocol.
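The equivalence between the product form of Equation (5) and the compact form of Equation (6), together with the normalization over all $N = |\mathcal{X}|^{\Delta t + 1}$ pathways, can be verified exhaustively on the same kind of toy model (the two-state domain and memory energy below are illustrative assumptions, not the paper's model):

```python
import itertools
import math

BETA, X, T = 1.0, (0, 1), 3   # beta, event domain, number of instants

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def p_path_product(path):
    # Eq. (5): product of conditional probabilities along the pathway.
    p = 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        Z_inst = sum(math.exp(-BETA * energy(x, hist)) for x in X)
        p *= math.exp(-BETA * energy(path[t], hist)) / Z_inst
    return p

def p_path_compact(path):
    # Eq. (6): exp(-beta E_nu) / Z_nu^(lambda), with E_nu the pathway
    # energy, Eq. (7), and Z_nu^(lambda) the product of instantaneous
    # partition functions, Eq. (8).
    E_nu, Z_nu = 0.0, 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        E_nu += energy(path[t], hist)
        Z_nu *= sum(math.exp(-BETA * energy(x, hist)) for x in X)
    return math.exp(-BETA * E_nu) / Z_nu

paths = list(itertools.product(X, repeat=T))
assert len(paths) == len(X) ** T          # N = |X|^(dt + 1) configurations
assert abs(sum(p_path_product(p) for p in paths) - 1.0) < 1e-12
for p in paths:                           # Eq. (5) agrees with Eq. (6)
    assert abs(p_path_product(p) - p_path_compact(p)) < 1e-12
```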
The probability of a pathway, Equation (6), is a function of its energy, $E_\nu$ (see Equation (7)), which is a sum over the energies of the microstates that the system has passed through in its evolution between microstates $x_{t_0}$ and $x_{t_f}$. The pathway energy is in essence the action, $\mathcal{S}$, in the discrete domain; more precisely, it is the abbreviated action, $\mathcal{S}_0$, minus the action itself, as defined in classical mechanics [2], divided by the time elapsed between the initial and final microstates, namely $\big[\mathcal{S}_0(\nu) - \mathcal{S}(\nu)\big]/\Delta t$. The probability of a pathway thus weights the dynamics of a system according to the exponential of the value that this functional difference takes on such a pathway.
The expected value of the instantaneous energy, which is a state function known as the internal energy, is:
$$\langle E_t \rangle_\lambda \equiv \sum_{x_{t_0}, \ldots, x_t} p^{(\lambda)}(x_{t_0}, \ldots, x_t)\, E(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{11}$$
where we have used that
$$p^{(\lambda)}(x_{t_0}, \ldots, x_t) = \sum_{x_{t+1}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}), \tag{12}$$
which follows straightforwardly from the expansion of the probability as a product of conditional probabilities, Equation (5), and from the fact that $\sum_{x'_t} p^{(\lambda)}(x'_t \mid x_{t-1}, \ldots, x_{t_0}) = 1$, $t = t_0, \ldots, t_f$. This equation for the probability of a truncated temporal sequence can be expressed as ($t < t_f$)
$$p^{(\lambda)}(x_{t_0}, \ldots, x_t) = \frac{e^{-\beta E(x_{t_0}, \ldots, x_t)}}{Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1})}, \tag{13}$$
where, in analogy to Equations (7) and (8) for the whole temporal sequence, the energy and the partition function of a truncated sequence are, respectively:
$$E(x_{t_0}, \ldots, x_t) = \sum_{i=t_0}^{t} E(x_i; x_{i-1}, \ldots, x_{t_0}), \tag{14}$$
$$Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1}) = \prod_{i=t_0}^{t} Z^{(\lambda)}(; x_{i-1}, \ldots, x_{t_0}). \tag{15}$$
The expected value of the pathway energy,
$$\langle E_\nu \rangle_\lambda \equiv \sum_{\nu=1}^{N} p_\nu^{(\lambda)} E_\nu = \sum_{x_{t_0}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f})\, E(x_{t_0}, \ldots, x_{t_f}), \tag{16}$$
is equal to the sum over the expected instant energies that make up the pathway, namely
$$\langle E_\nu \rangle_\lambda = \sum_{t=t_0}^{t_f} \langle E_t \rangle_\lambda. \tag{17}$$
The proof of this result can be found in Appendix A. Although the expected value of a pathway function, like the energy in Equation (16), does not depend on a specific pathway, we will keep the subscript "$\nu$" within the angle brackets to denote an ensemble average over a pathway function.
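Equation (17), stating that the expected pathway energy equals the sum of the expected instant energies, can also be verified numerically; the marginals over truncated sequences are computed as in Equation (12). The two-state model below is an illustrative assumption, not the paper's:

```python
import itertools
import math

BETA, X, T = 1.0, (0, 1), 3

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def p_path(path):
    # Protocol-driven pathway probability, Eq. (5).
    p = 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        Z = sum(math.exp(-BETA * energy(x, hist)) for x in X)
        p *= math.exp(-BETA * energy(path[t], hist)) / Z
    return p

paths = list(itertools.product(X, repeat=T))

# Left-hand side of Eq. (17): expected pathway energy, Eq. (16).
E_path = sum(p_path(nu) * sum(energy(nu[t], list(nu[:t])) for t in range(T))
             for nu in paths)

def E_t_avg(t):
    # <E_t>_lambda, Eq. (11): average over truncated sequences, whose
    # marginal probability is the sum over future events, Eq. (12).
    total = 0.0
    for trunc in itertools.product(X, repeat=t + 1):
        p_trunc = sum(p_path(nu) for nu in paths if nu[:t + 1] == trunc)
        total += p_trunc * energy(trunc[t], list(trunc[:t]))
    return total

E_inst = sum(E_t_avg(t) for t in range(T))   # right-hand side of Eq. (17)
assert abs(E_path - E_inst) < 1e-12
```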
We now introduce the entropy at $t$ with reference to both the protocol and the pathway that the system is following as [29]:
$$S^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}) \equiv -k \ln p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0}), \tag{18}$$
with expected value
$$\langle S_t^{(\lambda)} \rangle_\lambda = \sum_{x_{t_0}, \ldots, x_t} p^{(\lambda)}(x_{t_0}, \ldots, x_t)\, S^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{19}$$
This entropy constitutes the state function used in thermodynamics [1,3], with the caveat that it incorporates memory effects. Such an entropy is used in information theory under the name of conditional entropy [27].
We can also introduce the pathway entropy:
$$S_\nu^{(\lambda)} \equiv -k \ln p_\nu^{(\lambda)} = \sum_{t=t_0}^{t_f} S^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{20}$$
The last part of this equation is a direct consequence of Equations (5) and (18). Its expected value,
$$\langle S_\nu^{(\lambda)} \rangle_\lambda = \sum_{\nu=1}^{N} p_\nu^{(\lambda)} S_\nu^{(\lambda)} = \sum_{x_{t_0}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f})\, S^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}), \tag{21}$$
fulfills an expression analogous to that of the energy, Equation (17), namely
$$\langle S_\nu^{(\lambda)} \rangle_\lambda = \sum_{t=t_0}^{t_f} \langle S_t^{(\lambda)} \rangle_\lambda; \tag{22}$$
see Appendix A for the proof.
Finally, we introduce the Helmholtz free energy at $t$ with reference to the protocol and the pathway as:
$$F^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}) \equiv -kT \ln Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0}), \tag{23}$$
which actually does not depend on the present (instant $t$), in contrast to the energy and the entropy, Equations (2) and (18), respectively. Its expected value (a state function) is:
$$\langle F_t^{(\lambda)} \rangle_\lambda = \sum_{x_{t_0}, \ldots, x_t} p^{(\lambda)}(x_{t_0}, \ldots, x_t)\, F^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{24}$$
The corresponding pathway Helmholtz free energy is:
$$F_\nu^{(\lambda)} \equiv -kT \ln Z_\nu^{(\lambda)} = \sum_{t=t_0}^{t_f} F^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{25}$$
The last part of this equation follows from Equations (8) and (23). Its expected value,
$$\langle F_\nu^{(\lambda)} \rangle_\lambda = \sum_{\nu=1}^{N} p_\nu^{(\lambda)} F_\nu^{(\lambda)} = \sum_{x_{t_0}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f})\, F^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}), \tag{26}$$
fulfills
$$\langle F_\nu^{(\lambda)} \rangle_\lambda = \sum_{t=t_0}^{t_f} \langle F_t^{(\lambda)} \rangle_\lambda; \tag{27}$$
see Appendix A. The definitions of the pathway thermodynamic potentials formally resemble those used in information theory for strings of symbols [24], although their physical meaning is different.
In general, for a given thermodynamic potential "$A$", it is possible to introduce its instantaneous and pathway versions under a protocol,
$$A^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{28}$$
$$A_\nu^{(\lambda)} \equiv \sum_{t=t_0}^{t_f} A^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{29}$$
respectively. State functions are constructed by taking expected values:
$$\langle A_t^{(\lambda)} \rangle_\lambda = \sum_{x_{t_0}, \ldots, x_t} p^{(\lambda)}(x_{t_0}, \ldots, x_t)\, A^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{30}$$
$$\langle A_\nu^{(\lambda)} \rangle_\lambda = \sum_{\nu=1}^{N} p_\nu^{(\lambda)} A_\nu^{(\lambda)} = \sum_{x_{t_0}, \ldots, x_{t_f}} p^{(\lambda)}(x_{t_0}, \ldots, x_{t_f})\, A^{(\lambda)}(x_{t_0}, \ldots, x_{t_f}). \tag{31}$$
Instantaneous and pathway expected values fulfill:
$$\langle A_\nu^{(\lambda)} \rangle_\lambda = \sum_{t=t_0}^{t_f} \langle A_t^{(\lambda)} \rangle_\lambda. \tag{32}$$
From Equations (2), (18), and (23), and from Equations (7), (20), and (25), energy conservation follows immediately for instants and pathways, respectively:
$$F^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}) = E(x_t; x_{t-1}, \ldots, x_{t_0}) - T S^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{33}$$
$$F_\nu^{(\lambda)} = E_\nu - T S_\nu^{(\lambda)}. \tag{34}$$
In the same way, the energy is conserved for both instant and pathway state functions:
$$\langle F_t^{(\lambda)} \rangle_\lambda = \langle E_t \rangle_\lambda - T \langle S_t^{(\lambda)} \rangle_\lambda, \tag{35}$$
$$\langle F_\nu^{(\lambda)} \rangle_\lambda = \langle E_\nu \rangle_\lambda - T \langle S_\nu^{(\lambda)} \rangle_\lambda; \tag{36}$$
see Appendix A for the demonstrations of these results.
The internal energy for instants and pathways obeys the following laws:
$$\langle E_t \rangle_\lambda = -\frac{\partial}{\partial \beta} \big\langle \ln Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0}) \big\rangle_\lambda, \qquad \langle E_\nu \rangle_\lambda = -\frac{\partial}{\partial \beta} \big\langle \ln Z_\nu^{(\lambda)} \big\rangle_\lambda, \tag{37}$$
respectively. The corresponding expressions for the expected values of the entropies are
$$\langle S_t^{(\lambda)} \rangle_\lambda = -\frac{\partial}{\partial T} \big\langle F^{(\lambda)}(x_t; x_{t-1}, \ldots, x_{t_0}) \big\rangle_\lambda, \qquad \langle S_\nu^{(\lambda)} \rangle_\lambda = -\frac{\partial}{\partial T} \big\langle F_\nu^{(\lambda)} \big\rangle_\lambda. \tag{38}$$
These four equations are demonstrated in Appendix A.
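As a consistency check of the ensemble-average energy conservation, Equation (36), the following sketch evaluates $\langle F_\nu^{(\lambda)} \rangle_\lambda$, $\langle E_\nu \rangle_\lambda$, and $\langle S_\nu^{(\lambda)} \rangle_\lambda$ on a hypothetical two-state model (units with $k = T = 1$, so $\beta = 1$; the energy is an illustrative assumption):

```python
import itertools
import math

K_B, T_ABS = 1.0, 1.0          # units with k = 1 and T = 1, so beta = 1
BETA = 1.0 / (K_B * T_ABS)
X, T_STEPS = (0, 1), 3

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def path_quantities(path):
    # Pathway energy E_nu, Eq. (7); sequence-dependent partition function
    # Z_nu^(lambda), Eq. (8); pathway probability p_nu^(lambda), Eq. (6).
    E_nu, Z_nu = 0.0, 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        E_nu += energy(path[t], hist)
        Z_nu *= sum(math.exp(-BETA * energy(x, hist)) for x in X)
    return E_nu, Z_nu, math.exp(-BETA * E_nu) / Z_nu

E_avg = S_avg = F_avg = 0.0
for nu in itertools.product(X, repeat=T_STEPS):
    E_nu, Z_nu, p_nu = path_quantities(nu)
    E_avg += p_nu * E_nu                                  # Eq. (16)
    S_avg += p_nu * (-K_B * math.log(p_nu))               # Eq. (21) via (20)
    F_avg += p_nu * (-K_B * T_ABS * math.log(Z_nu))       # Eq. (26) via (25)

# Energy conservation at the ensemble-average level, Eq. (36):
assert abs(F_avg - (E_avg - T_ABS * S_avg)) < 1e-12
```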

2.3. Equilibrium Pathways

The probability of a pathway described by the system under equilibrium conditions is:
$$p_\nu = \frac{\exp(-\beta E_\nu)}{Z}, \tag{39}$$
such that $\sum_{\nu=1}^{N} p_\nu = 1$. In this equation, we use the pathway energy, as given by Equation (7), and the partition function [1,3]
$$Z \equiv \sum_{x_{t_0}, \ldots, x_{t_f}} \exp\Big(-\beta \sum_{t=t_0}^{t_f} E(x_t; x_{t-1}, \ldots, x_{t_0})\Big) = \sum_{\nu=1}^{N} \exp(-\beta E_\nu), \tag{40}$$
where, in contrast to Equation (9), the sums extend over both present and previous events. Certainly, there are no primed variables and the sums are nested in Equation (40), unlike in the protocol-driven Equation (9). The partition function $Z$ is not built on the basis of a particular protocol; therefore, the resulting probability does not limit how the different pathways are accessed.
The equilibrium partition function $Z$ measures the available pathways $\nu$ that connect the microstates at $t = t_0$ and $t = t_f$, constructed as temporal sequences of stochastic events $X_t$ and statistically weighted by the exponential of their energies $\sum_t E(x_t; x_{t-1}, \ldots, x_{t_0})$; these energies, in turn, account for the memory, i.e., the relative interactions of every present event, $x_t$, with its previous ones, $\{x_{t_0}, \ldots, x_{t-1}\}$. The sequence-dependent partition function, $Z_\nu^{(\lambda)}$, in contrast, measures the exponential energy-weighted pathways that connect these states considering that at each step, $x_t$, the previous events, $\{x_{t-1}, \ldots, x_{t_0}\}$, are unchangeable and that the sequence of events is stochastically determined by protocol $\lambda$.
The probabilities expressed in Equation (39) represent pathways along which the external constraints on the system do not change with time. The dynamics so described are, therefore, timeless. As we will see, the expected values of the pathway thermodynamic potentials reduce to explicit functions of $Z$, as shown generally in equilibrium thermodynamics [1,3]. It is demonstrated (Appendix A) that, to recover Equation (39), bearing in mind that [28]
$$p_\nu \equiv \prod_{t=t_0}^{t_f} p(x_t \mid x_{t-1}, \ldots, x_{t_0}), \tag{41}$$
the probability of a microstate under equilibrium pathways must be:
$$p(x_t \mid x_{t-1}, \ldots, x_{t_0}) = \frac{e^{-\beta E(x_t; x_{t-1}, \ldots, x_{t_0})}\, f(x_{t_0}, \ldots, x_t)}{\sum_{x'_t} e^{-\beta E(x'_t; x_{t-1}, \ldots, x_{t_0})}\, f(x_{t_0}, \ldots, x'_t)}, \tag{42}$$
where $\sum_{x'_t} p(x'_t \mid x_{t-1}, \ldots, x_{t_0}) = 1$ and
$$f(x_{t_0}, \ldots, x_t) = \sum_{x_{t+1}, \ldots, x_{t_f}} e^{-\beta E(x_{t+1}; x_t, \ldots, x_{t_0})} \cdots e^{-\beta E(x_{t_f}; x_{t_f-1}, \ldots, x_{t_0})} \tag{43}$$
$$= \sum_{x_{t+1}, \ldots, x_{t_f}} \exp\Big(-\beta \sum_{i=t+1}^{t_f} E(x_i; x_{i-1}, \ldots, x_{t_0})\Big). \tag{44}$$
For completeness, it is important to note that
$$f(x_{t_0}, \ldots, x_{t_f}) = 1, \tag{45}$$
and that
$$\sum_{x_{t_0}} e^{-\beta E(x_{t_0})}\, f(x_{t_0}) = Z. \tag{46}$$
It follows straightforwardly from Equation (44) that
$$f(x_{t_0}, \ldots, x_t) = \sum_{x_{t+1}} e^{-\beta E(x_{t+1}; x_t, \ldots, x_{t_0})}\, f(x_{t_0}, \ldots, x_{t+1}). \tag{47}$$
We will call the functions $f(x_{t_0}, \ldots, x_t)$, $t = t_0, \ldots, t_f$, partition factors [30].
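The partition factors can be computed efficiently by the backward recursion of Equation (47), starting from $f = 1$ for full sequences, Equation (45). The sketch below (an illustrative two-state model, with memoization over the truncated sequences) verifies that Equation (46) recovers the nested-sum partition function of Equation (40):

```python
import itertools
import math
from functools import lru_cache

BETA, X, T = 1.0, (0, 1), 3

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

@lru_cache(maxsize=None)
def f(partial):
    # Partition factor of the truncated sequence `partial`: the sum of the
    # Boltzmann weights of all of its possible futures, Eqs. (43)-(44).
    # Computed by the backward recursion, Eq. (47); a full sequence has no
    # future left, so f = 1, Eq. (45).
    if len(partial) == T:
        return 1.0
    return sum(math.exp(-BETA * energy(x, list(partial))) * f(partial + (x,))
               for x in X)

# Eq. (46): weighting the first event recovers the partition function Z.
Z_from_factors = sum(math.exp(-BETA * energy(x0, [])) * f((x0,)) for x0 in X)

# Direct nested sum over all pathways, Eq. (40).
Z_direct = sum(math.exp(-BETA * sum(energy(nu[t], list(nu[:t]))
                                    for t in range(T)))
               for nu in itertools.product(X, repeat=T))

assert abs(Z_from_factors - Z_direct) < 1e-12
```

The recursion turns an exponential nested sum into a sequence of single-step sums over memoized truncated sequences, which is what makes the partition factors practical for longer chains.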
The probability of a microstate under equilibrium pathways, Equation (42), can be expressed as
$$p(x_t \mid x_{t-1}, \ldots, x_{t_0}) = \frac{e^{-\beta E(x_t; x_{t-1}, \ldots, x_{t_0})}}{Z(x_t; x_{t-1}, \ldots, x_{t_0})}, \tag{48}$$
with
$$Z(x_t; x_{t-1}, \ldots, x_{t_0}) \equiv \frac{f(x_{t_0}, \ldots, x_{t-1})}{f(x_{t_0}, \ldots, x_t)} \tag{49}$$
being the equilibrium instant partition function. Equation (49) reduces to Equation (4) for protocol-driven dynamics. In analogy to Equation (8), the equilibrium pathway partition function builds up as
$$Z = \prod_{t=t_0}^{t_f} Z(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{50}$$
The reduced probability for a truncated equilibrium temporal sequence is:
$$p(x_{t_0}, \ldots, x_t) = \sum_{x_{t+1}, \ldots, x_{t_f}} p(x_{t_0}, \ldots, x_{t_f}) = \frac{e^{-\beta E(x_{t_0}, \ldots, x_t)}}{Z(x_{t_0}, \ldots, x_t)}, \tag{51}$$
where
$$Z(x_{t_0}, \ldots, x_t) = \prod_{i=t_0}^{t} Z(x_i; x_{i-1}, \ldots, x_{t_0}) = \frac{Z}{f(x_{t_0}, \ldots, x_t)} \tag{52}$$
is the partition function for equilibrium truncated temporal sequences; see Appendix A.
For a thermodynamic potential "$A$", the instantaneous and pathway versions under equilibrium conditions are represented by dropping the superscript $\lambda$ in Equations (28) and (29), namely
$$A(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{53}$$
$$A_\nu \equiv \sum_{t=t_0}^{t_f} A(x_t; x_{t-1}, \ldots, x_{t_0}). \tag{54}$$
Their expected values (state functions) are constructed by using the probabilities of Equations (42) and (39), respectively:
$$\langle A_t \rangle = \sum_{x_{t_0}, \ldots, x_t} p(x_{t_0}, \ldots, x_t)\, A(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{55}$$
$$\langle A_\nu \rangle = \sum_{\nu=1}^{N} p_\nu A_\nu = \sum_{x_{t_0}, \ldots, x_{t_f}} p(x_{t_0}, \ldots, x_{t_f})\, A(x_{t_0}, \ldots, x_{t_f}), \tag{56}$$
fulfilling the following relation between instantaneous and pathway equilibrium state functions:
$$\langle A_\nu \rangle = \sum_{t=t_0}^{t_f} \langle A_t \rangle. \tag{57}$$
The proof of Equation (57) is formally equivalent to that of Equation (17).
The equilibrium instant and pathway energies are given by Equations (2) and (7), respectively, because they are independent of the protocol. However, the state functions do depend on the presence or absence of a protocol because a different probability is used in the expressions of their expected values. Then, in analogy to Equations (11) and (16), these state functions read:
$$\langle E_t \rangle = \sum_{x_{t_0}, \ldots, x_t} p(x_{t_0}, \ldots, x_t)\, E(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{58}$$
$$\langle E_\nu \rangle = \sum_{\nu=1}^{N} p_\nu E_\nu = \sum_{x_{t_0}, \ldots, x_{t_f}} p(x_{t_0}, \ldots, x_{t_f})\, E(x_{t_0}, \ldots, x_{t_f}). \tag{59}$$
They obey the relation:
$$\langle E_\nu \rangle = \sum_{t=t_0}^{t_f} \langle E_t \rangle, \tag{60}$$
whose proof is formally equivalent to that of Equation (17).
The equilibrium instant and pathway entropies depend on the presence or absence of a protocol because they operate on a probability. In analogy to Equations (18) and (20), these thermodynamic potentials are built by using the instant and pathway equilibrium probabilities, Equations (42) and (39):
$$S(x_t; x_{t-1}, \ldots, x_{t_0}) \equiv -k \ln p(x_t \mid x_{t-1}, \ldots, x_{t_0}), \tag{61}$$
$$S_\nu \equiv -k \ln p_\nu = \sum_{t=t_0}^{t_f} S(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{62}$$
with expected values:
$$\langle S_t \rangle = \sum_{x_{t_0}, \ldots, x_t} p(x_{t_0}, \ldots, x_t)\, S(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{63}$$
$$\langle S_\nu \rangle = \sum_{\nu=1}^{N} p_\nu S_\nu = \sum_{x_{t_0}, \ldots, x_{t_f}} p(x_{t_0}, \ldots, x_{t_f})\, S(x_{t_0}, \ldots, x_{t_f}). \tag{64}$$
Finally, the equilibrium instant and pathway Helmholtz free energies, which also depend on the presence or absence of a protocol because they operate on partition functions, are given in analogy to Equations (23) and (25) by:
$$F(x_t; x_{t-1}, \ldots, x_{t_0}) \equiv -kT \ln Z(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{65}$$
$$F_\nu \equiv -kT \ln Z = \sum_{t=t_0}^{t_f} F(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{66}$$
respectively, with expected values:
$$\langle F_t \rangle = \sum_{x_{t_0}, \ldots, x_t} p(x_{t_0}, \ldots, x_t)\, F(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{67}$$
$$\langle F_\nu \rangle = \sum_{\nu=1}^{N} p_\nu F_\nu = \sum_{x_{t_0}, \ldots, x_{t_f}} p(x_{t_0}, \ldots, x_{t_f})\, F(x_{t_0}, \ldots, x_{t_f}). \tag{68}$$
The equilibrium entropy and free energy fulfill the following relations between pathway and instant expected values:
$$\langle S_\nu \rangle = \sum_{t=t_0}^{t_f} \langle S_t \rangle, \qquad \langle F_\nu \rangle = \sum_{t=t_0}^{t_f} \langle F_t \rangle, \tag{69}$$
whose proofs are formally equivalent to that of Equation (17).
Energy conservation equations similar to those found for protocol-driven dynamics are obtained at equilibrium. Their expressions (and proofs in Appendix A) follow by dropping the superscript $\lambda$ in Equations (33)–(36), namely:
$$F(x_t; x_{t-1}, \ldots, x_{t_0}) = E(x_t; x_{t-1}, \ldots, x_{t_0}) - T S(x_t; x_{t-1}, \ldots, x_{t_0}), \tag{70}$$
$$F_\nu = E_\nu - T S_\nu, \tag{71}$$
$$\langle F_t \rangle = \langle E_t \rangle - T \langle S_t \rangle, \tag{72}$$
$$\langle F_\nu \rangle = \langle E_\nu \rangle - T \langle S_\nu \rangle. \tag{73}$$
In analogy to Equations (37) and (38), equilibrium processes obey the following relations for state functions:
$$\langle E_t \rangle = -\frac{\partial}{\partial \beta} \big\langle \ln Z(x_t; x_{t-1}, \ldots, x_{t_0}) \big\rangle, \qquad \langle E_\nu \rangle = -\frac{\partial}{\partial \beta} \ln Z, \tag{74}$$
respectively. The corresponding expressions for the entropies are
$$\langle S_t \rangle = -\frac{\partial}{\partial T} \big\langle F(x_t; x_{t-1}, \ldots, x_{t_0}) \big\rangle, \qquad \langle S_\nu \rangle = -\frac{\partial}{\partial T} \langle F_\nu \rangle. \tag{75}$$
The right-hand-side expressions in Equations (74) and (75) are known in the context of equilibrium thermodynamics in the absence of memory as $U \equiv \langle E_\nu \rangle$, $S \equiv \langle S_\nu \rangle = -k \sum_\nu p_\nu \ln p_\nu$, and $F \equiv \langle F_\nu \rangle = F_\nu = -kT \ln Z$ [1,3].

2.4. Relations between Protocol-Driven and Equilibrium Treatments

The following relations for instantaneous partition functions hold:
$$\Big\langle \frac{Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0})}{Z(x_t; x_{t-1}, \ldots, x_{t_0})} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{Z(x_t; x_{t-1}, \ldots, x_{t_0})}{Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0})} \Big\rangle = 1; \tag{76}$$
see Appendix A for their demonstrations. These expressions indicate that the equilibrium and protocol-driven partition functions approach one another in terms of average ratios.
Similar expressions can be found for the partition functions along truncated sequences:
$$\Big\langle \frac{Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1})}{Z(x_{t_0}, \ldots, x_t)} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{Z(x_{t_0}, \ldots, x_t)}{Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1})} \Big\rangle = 1, \tag{77}$$
which are trivially demonstrated from the definitions of $p(x_{t_0}, \ldots, x_t)$ and $p^{(\lambda)}(x_{t_0}, \ldots, x_t)$. Particular cases of the previous equations are those for complete sequences, that is, when $t = t_f$:
$$\Big\langle \frac{Z_\nu^{(\lambda)}}{Z} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{Z}{Z_\nu^{(\lambda)}} \Big\rangle = 1. \tag{78}$$
Straightforward consequences for the expected values of the probability ratios are as follows:
$$\Big\langle \frac{p(x_t \mid x_{t-1}, \ldots, x_{t_0})}{p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})}{p(x_t \mid x_{t-1}, \ldots, x_{t_0})} \Big\rangle = 1, \tag{79}$$
$$\Big\langle \frac{p(x_{t_0}, \ldots, x_t)}{p^{(\lambda)}(x_{t_0}, \ldots, x_t)} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{p^{(\lambda)}(x_{t_0}, \ldots, x_t)}{p(x_{t_0}, \ldots, x_t)} \Big\rangle = 1, \tag{80}$$
$$\Big\langle \frac{p_\nu}{p_\nu^{(\lambda)}} \Big\rangle_\lambda = 1, \qquad \Big\langle \frac{p_\nu^{(\lambda)}}{p_\nu} \Big\rangle = 1. \tag{81}$$
The evolution of a system depends not only on the protocol but also on the physical nature of the system itself. In this regard, partition functions and probability distributions for protocol-driven dynamics are equivalent to those of their associated equilibrium processes under ensemble averages, according to Equations (76)–(81). These relations indicate that, on average, the stochastic nature of any protocol-driven process is equivalent to that of the equilibrium process.
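The average-ratio identities of Equation (78) can be verified exhaustively on a small model; note that the first average is taken with the protocol-driven pathway probability and the second with the equilibrium one (the two-state domain and memory energy are illustrative assumptions):

```python
import itertools
import math

BETA, X, T = 1.0, (0, 1), 3

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def E_and_Zlam(path):
    # Pathway energy, Eq. (7), and sequence-dependent partition
    # function, Eq. (8).
    E_nu, Z_nu = 0.0, 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        E_nu += energy(path[t], hist)
        Z_nu *= sum(math.exp(-BETA * energy(x, hist)) for x in X)
    return E_nu, Z_nu

paths = list(itertools.product(X, repeat=T))
Z_eq = sum(math.exp(-BETA * E_and_Zlam(nu)[0]) for nu in paths)   # Eq. (40)

lhs = rhs = 0.0
for nu in paths:
    E_nu, Z_lam = E_and_Zlam(nu)
    p_lam = math.exp(-BETA * E_nu) / Z_lam   # protocol-driven, Eq. (6)
    p_eq = math.exp(-BETA * E_nu) / Z_eq     # equilibrium, Eq. (39)
    lhs += p_lam * (Z_lam / Z_eq)            # <Z_nu^(lambda) / Z>_lambda
    rhs += p_eq * (Z_eq / Z_lam)             # <Z / Z_nu^(lambda)>

assert abs(lhs - 1.0) < 1e-12 and abs(rhs - 1.0) < 1e-12   # Eq. (78)
```

Both sums collapse to $\sum_\nu e^{-\beta E_\nu}/Z = 1$ and $\sum_\nu p_\nu^{(\lambda)} = 1$, respectively, which is exactly why the identities hold regardless of the chosen energy.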
The Kullback–Leibler distance (or relative entropy) between two conditional probability distributions $p$ and $q$ is used as defined elsewhere [27]:
$$D\big(p(x_t \mid x_{t-1}, \ldots, x_{t_0})\, \big\|\, q(x_t \mid x_{t-1}, \ldots, x_{t_0})\big) \equiv \sum_{x_{t_0}, \ldots, x_t} p(x_{t_0}, \ldots, x_t) \ln \frac{p(x_t \mid x_{t-1}, \ldots, x_{t_0})}{q(x_t \mid x_{t-1}, \ldots, x_{t_0})} = \Big\langle \ln \frac{p(x_t \mid x_{t-1}, \ldots, x_{t_0})}{q(x_t \mid x_{t-1}, \ldots, x_{t_0})} \Big\rangle_p. \tag{82}$$
These distances are non-negative for protocol-driven and equilibrium instantaneous probability distributions, as expected:
$$D\big(p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})\, \big\|\, p(x_t \mid x_{t-1}, \ldots, x_{t_0})\big) = \Big\langle \ln \frac{Z(x_t; x_{t-1}, \ldots, x_{t_0})}{Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0})} \Big\rangle_\lambda \geq 0, \tag{83}$$
$$D\big(p(x_t \mid x_{t-1}, \ldots, x_{t_0})\, \big\|\, p^{(\lambda)}(x_t \mid x_{t-1}, \ldots, x_{t_0})\big) = \Big\langle \ln \frac{Z^{(\lambda)}(; x_{t-1}, \ldots, x_{t_0})}{Z(x_t; x_{t-1}, \ldots, x_{t_0})} \Big\rangle \geq 0, \tag{84}$$
where we have applied Jensen's inequality [27] and Equation (76).
Using the chain rule for relative entropies [27],
$$D\big(p(x_{t_0}, \ldots, x_t)\, \big\|\, q(x_{t_0}, \ldots, x_t)\big) = \sum_{i=t_0}^{t} D\big(p(x_i \mid x_{i-1}, \ldots, x_{t_0})\, \big\|\, q(x_i \mid x_{i-1}, \ldots, x_{t_0})\big), \tag{85}$$
it follows that
$$D\big(p^{(\lambda)}(x_{t_0}, \ldots, x_t)\, \big\|\, p(x_{t_0}, \ldots, x_t)\big) = \Big\langle \ln \frac{Z(x_{t_0}, \ldots, x_t)}{Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1})} \Big\rangle_\lambda = \sum_{i=t_0}^{t} \Big\langle \ln \frac{Z(x_i; x_{i-1}, \ldots, x_{t_0})}{Z^{(\lambda)}(; x_{i-1}, \ldots, x_{t_0})} \Big\rangle_\lambda \geq 0, \tag{86}$$
$$D\big(p(x_{t_0}, \ldots, x_t)\, \big\|\, p^{(\lambda)}(x_{t_0}, \ldots, x_t)\big) = \Big\langle \ln \frac{Z^{(\lambda)}(x_{t_0}, \ldots, x_{t-1})}{Z(x_{t_0}, \ldots, x_t)} \Big\rangle = \sum_{i=t_0}^{t} \Big\langle \ln \frac{Z^{(\lambda)}(; x_{i-1}, \ldots, x_{t_0})}{Z(x_i; x_{i-1}, \ldots, x_{t_0})} \Big\rangle \geq 0. \tag{87}$$
In particular,
$$D\big(p_\nu^{(\lambda)}\, \big\|\, p_\nu\big) = \Big\langle \ln \frac{Z}{Z_\nu^{(\lambda)}} \Big\rangle_\lambda \geq 0, \tag{88}$$
$$D\big(p_\nu\, \big\|\, p_\nu^{(\lambda)}\big) = \Big\langle \ln \frac{Z_\nu^{(\lambda)}}{Z} \Big\rangle \geq 0. \tag{89}$$
These formal results and those in Equation (78) appeared in previous works in the context of information chains [24,26]; we show them again for the sake of completeness in temporal chains.
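The full-pathway relative entropies, Equations (88) and (89), reduce to averaged logarithms of partition-function ratios and are non-negative; both facts can be confirmed numerically (the two-state model below is an illustrative assumption):

```python
import itertools
import math

BETA, X, T = 1.0, (0, 1), 3

def energy(x_t, history, J=0.5):
    # Illustrative memory energy (an assumption for this check).
    return x_t * (1.0 + sum(J ** (k + 1) * h
                            for k, h in enumerate(reversed(history))))

def E_and_Zlam(path):
    # Pathway energy, Eq. (7), and sequence-dependent partition
    # function, Eq. (8).
    E_nu, Z_nu = 0.0, 1.0
    for t in range(len(path)):
        hist = list(path[:t])
        E_nu += energy(path[t], hist)
        Z_nu *= sum(math.exp(-BETA * energy(x, hist)) for x in X)
    return E_nu, Z_nu

paths = list(itertools.product(X, repeat=T))
Z_eq = sum(math.exp(-BETA * E_and_Zlam(nu)[0]) for nu in paths)

D_lam_eq = D_eq_lam = avg_log = 0.0
for nu in paths:
    E_nu, Z_lam = E_and_Zlam(nu)
    p_lam = math.exp(-BETA * E_nu) / Z_lam
    p_eq = math.exp(-BETA * E_nu) / Z_eq
    D_lam_eq += p_lam * math.log(p_lam / p_eq)   # D(p_nu^(lambda) || p_nu)
    D_eq_lam += p_eq * math.log(p_eq / p_lam)    # D(p_nu || p_nu^(lambda))
    avg_log += p_lam * math.log(Z_eq / Z_lam)    # <ln(Z / Z_nu^(lambda))>_lambda

# The relative entropy reduces to the averaged log partition-function
# ratio, Eq. (88), and both divergences are non-negative (Jensen).
assert abs(D_lam_eq - avg_log) < 1e-12
assert D_lam_eq >= 0.0 and D_eq_lam >= 0.0
```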

2.5. Protocol-Driven Dynamics and Equilibrium Approach Each Other for Weak Memory Effects

We observed previously that sequence-dependent and equilibrium statistics are mutually convergent when a chain construction depends sufficiently smoothly on its history [26]. We next formulate and demonstrate the Independence Limit theorem for instants, whose proof is in Appendix B:
Theorem 1
(Instant Independence Limit). Let $\nu = \{x_{t_0},\ldots,x_t,\ldots,x_{t_f}\}$ and $\nu' = \{x'_{t_0},\ldots,x'_t,\ldots,x'_{t_f}\}$ be two reversible pathways with memory along which a certain system may evolve. Let $Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)$ and $Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right)$ be the equilibrium and protocol-driven instantaneous partition functions (Equations (49) and (4), respectively), relative to different histories until time t. If the normalized energy difference for microstate $x_t$ relative to the distinct histories fulfills:
$$ \left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right| / (kT) \rightarrow 0 \quad (kT > 0) \tag{90} $$
and the normalized energy differences for future events within the same pathway relative to the distinct microstates $x_t$ and $x'_t$ at time t fulfill:
$$ \left| E\left(x_i; x_{i-1},\ldots,x_{t+1}, x_t, x_{t-1},\ldots,x_{t_0}\right) - E\left(x_i; x_{i-1},\ldots,x_{t+1}, x'_t, x_{t-1},\ldots,x_{t_0}\right) \right| / (kT) \rightarrow 0 \quad \mathrm{for}\ i = t+1,\ldots,t_f, \tag{91} $$
then $Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right) / Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right) \rightarrow 1$.
This theorem provides the adequate link between protocol-driven and equilibrium pathways in the absence of friction. It states that, when the memory-induced variations in the energetic cost of advancing one temporal step are sufficiently low with respect to the thermal level ($kT$), the evolution of the system by a defined mechanism approaches an equilibrium thermalization of independent events.
It is easy to prove (Appendix B) the following two corollaries for truncated and full partition functions:
Corollary 1
(Truncated Pathway Independence Limit). If, in addition to the condition expressed in Equation (91), the condition of Equation (90) extends to all instants before t, namely
$$ \left| E\left(x_i; x_{i-1},\ldots,x_{t_0}\right) - E\left(x_i; x'_{i-1},\ldots,x'_{t_0}\right) \right| / (kT) \rightarrow 0 \quad \mathrm{for}\ i = t_0,\ldots,t, \tag{92} $$
then $Z\left(x_{t_0},\ldots,x_t\right) / Z^{(\lambda)}\left(x_{t_0},\ldots,x_{t-1}\right) \rightarrow 1$.
Corollary 2
(Full Pathway Independence Limit). If, for all instants, $t = t_0,\ldots,t_f$,
$$ \left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right| / (kT) \rightarrow 0, \tag{93} $$
then $Z / Z_{\nu}^{(\lambda)} \rightarrow 1$.
Since the extension of Equation (90) to all instants comprises Equation (91), a sufficient condition to meet the Independence Limit for instants, truncated pathways, and full pathways is $\left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right| / (kT) \rightarrow 0$ for all $t = t_0,\ldots,t_f$, which is the condition of Equation (93) in the last corollary. Interestingly, this last corollary recovers the Independence Limit theorem found elsewhere in the context of spatial chains with memory [26].
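The content of the last corollary can be probed numerically. The following sketch builds a hypothetical three-instant chain with two microstates per instant (all energies and the history perturbation are invented for illustration) and checks that the full-pathway ratio $Z / Z_{\nu}^{(\lambda)}$ approaches unity as the history-induced energy shifts vanish:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
T, M = 3, 2                                  # instants and microstates per instant
E_base = rng.uniform(0.0, 1.0, size=(T, M))  # memoryless part of the energies (kT units)
h = {}                                       # bounded history perturbations, |h| <= 1

def E(t, x, hist, eps):
    # Energy of microstate x at instant t given the history, shifted by eps*h.
    key = (t, x, hist)
    if key not in h:
        h[key] = rng.uniform(-1.0, 1.0)
    return E_base[t, x] + eps * h[key]

def max_deviation(eps):
    # Largest |Z / Z_nu^(lambda) - 1| over all pathways nu.
    paths = list(product(range(M), repeat=T))
    Z = sum(np.exp(-sum(E(t, p[t], p[:t], eps) for t in range(T))) for p in paths)
    dev = 0.0
    for p in paths:
        Znu = np.prod([sum(np.exp(-E(t, x, p[:t], eps)) for x in range(M))
                       for t in range(T)])
        dev = max(dev, abs(Z / Znu - 1.0))
    return dev

print(max_deviation(0.5), max_deviation(1e-6))  # deviation shrinks with eps
```

At eps = 0 the history dependence disappears, the full partition function factorizes over instants, and the ratio equals unity for every pathway.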

3. Applications

Computing systems, either natural, like those operating in biological cells, or artificial, like tape-based technologies, generate stepwise evolutions through the intervention of Turing machines. They are of model significance to illustrate the above theory because they show how physical interactions within the tape, and between the tape and the Turing-like processing machine, influence both the spatial arrangements on the tape and the history of the whole system. In particular, biological replication, transcription, and translation each comprise a space- and time-dependent directional, stochastic chain with memory. More specifically, their dynamics consist of a single protein (a polymerase or a ribosome) that builds a single template-directed polymer as a function of time by incorporating monomers one at a time.
Thermodynamic potentials for a growing biopolymer are simulated in Figure 1. The system exchanges energies near the thermal level at each time instant. Equilibrium pathways exploit memory effects more strongly than protocol-driven pathways because the system is assumed to explore each monomer alternative indefinitely at each time step, thus improving the chances of correct monomer selection under positive feedback. In particular, the entropy decreases, indicating a lower incidence of errors, and the internal and free energies become more negative, indicating a higher stability of the resulting polymer as time passes.
The protocol-driven results in Figure 1 address only the case in which the polymer grows monotonically during its synthesis. Monomers may actually be removed during polymer synthesis due to proofreading and editing activities. In these situations, the system explores the available pathways over a longer time, thus approaching the equilibrium results (also shown in Figure 1) and thereby reducing the incidence of errors. This application can be modeled within the above theory by selecting a protocol that combines shrinking and growing dynamics.
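In the independent-event (memoryless) limit, the per-monomer potentials underlying Figure 1 can be sketched in a few lines. The energies below follow the Figure 1 caption (releasing 0.5 kT for the correct monomer, absorbing 0.5 kT for each of the three wrong ones); the memory kernel and pathway statistics of the full model are deliberately omitted, so this is only the independent-event baseline, not the paper's computation:

```python
import numpy as np

# Four candidate monomers per incorporation step; beta = 1 (energies in kT).
E = np.array([-0.5, 0.5, 0.5, 0.5])   # correct monomer releases 0.5 kT
w = np.exp(-E)                        # Boltzmann weights
Z = w.sum()                           # single-step partition function
p = w / Z                             # incorporation probabilities

U = np.sum(p * E)                     # internal energy per monomer (kT)
S = -np.sum(p * np.log(p))            # entropy per monomer (units of k)
F = -np.log(Z)                        # Helmholtz free energy per monomer (kT)
print(U, S, F)                        # F = U - TS holds with kT = 1
```

The correct monomer is the most probable choice, yet the finite entropy reflects a residual error incidence; memory effects in the full model lower this entropy over time.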

4. Discussion

We have extended thermodynamics to systems that evolve along specific sequences of events driven by changing constraints in the frictionless limit. To do that, we have introduced pathway- and protocol-dependent functions, including thermodynamic potentials, that characterize the microstates of the system in the presence of memory. Similarly, we have analyzed the equilibrium evolution of systems that keep memory of previous events by introducing pathway-dependent functions. Under ensemble averages, protocol-driven and equilibrium thermodynamic functions become state functions because they lose their dependence on the pathways. Our theory discriminates between microscopic reversibility and equilibrium, which, as presented in the theorem, converge to the same concept when memory effects become negligible. Given that randomness is characterized by probability, we find that, on average, the stochastic natures of protocol-driven and equilibrium processes are equivalent because the expected value of the ratio between any protocol-driven probability and the equilibrium probability is always unity, regardless of the protocol. Finally, we have applied our framework in the context of template-directed biopolymer synthesis, a molecular process that embodies the connection between thermodynamic and information entropies. This nanoscale illustration, in which a protein replicates, transcribes, or translates an information carrier, shows that information requires energy and that information is another manifestation of entropy (as heat is of energy).
When memory is present, the number of different microstates that a system can attain increases with the number of events that it can recall because each microstate involves a configurational history of events. When memory extends to all previous events at each time step, there is a bijection between the available microstates up to a particular time t and the pathways to reach them; under these conditions, the system effectively resets whenever it explores new pathways. If memory effects can be cut off at a finite number of previous events, as, for example, in Markov (memoryless) dynamics or in the case of independent events, microstates can be revisited recurrently within a particular pathway, that is, without starting over. The lower the number of nearest temporal neighbors included in the memory, the shorter the revisiting period. This revisiting period can be identified with the so-called Poincaré recurrence time, which increases with the number of past events that stochastically influence the present. In the limit in which the origin of the system is at $t = -\infty$ and the memory extends to all previous events, the Poincaré recurrence time tends to infinity because the system cannot restart.
The evolution of a system is a consequence of the existence of a protocol, which represents constraints that change with time. If the protocol is sufficiently smooth in its time dependence, the system has time to visit many microstates before the constraints have substantially changed, thus evolving near or at equilibrium. When the environment evolves very rapidly (that is, when the protocol has a sharp time dependence), the system cannot follow its changes due to memory effects. At long times, the fact that the system cannot relax to its original state implies that it presents hysteresis and evolves away from equilibrium.
The existence of a protocol, therefore, biases the pathways through which a system progresses between two microstates. If the memory of the system is very long, the system does not lose correlations with previous events over short periods of time. Under these conditions, ensemble averages and time averages cannot be assumed interchangeable, which makes the evolution no longer ergodic.
Considering that fluctuation theorems assume both microscopic reversibility and Markovianity [5,6], and that Markovianity is a special case of non-Markovianity, we speculate that our theory may describe non-equilibrium thermodynamics within a unified framework.

Funding

Work supported by Ministerio de Ciencia e Innovación, grant number PID2019-107391RB-I00.

Acknowledgments

The author wishes to thank the Universitat Politècnica de València for general support through the Attraction Talent program.

Conflicts of Interest

The author declares no conflict of interest. The funding sponsor had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Proofs

We next demonstrate the expressions of Section 2.2, Section 2.3 and Section 2.4.
Proof of Equation (17).
From Equations (16), (7), and (12), it follows that
$$ \left\langle E_{\nu} \right\rangle_{\lambda} = \sum_{x_{t_0},\ldots,x_{t_f}} p^{(\lambda)}(x_{t_0},\ldots,x_{t_f}) \sum_{t=t_0}^{t_f} E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = \sum_{t=t_0}^{t_f} \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t)\, E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = \sum_{t=t_0}^{t_f} \left\langle E_t \right\rangle_{\lambda}, $$
which proves Equation (17). □
The proof of Equation (22) is similar to that of Equation (17) and can be found in Reference [27] in the context of Information Theory, for what is therein called the conditional entropy. The proofs of Equation (27) and of Equation (32) for a general thermodynamic potential "A" (following Equations (28) and (29)) parallel those for Equations (17) and (22).
Proof of Equation (35).
From Equations (19), (18), and (3), it follows that
$$ \left\langle S_t^{(\lambda)} \right\rangle_{\lambda} = \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t)\, S^{(\lambda)}\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = -k \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t) \ln p^{(\lambda)}(x_t \mid x_{t-1},\ldots,x_{t_0}) $$
$$ = k \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t) \left[ \beta E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) + \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) \right] = \frac{1}{T} \left( \left\langle E_t \right\rangle_{\lambda} - \left\langle F_t^{(\lambda)} \right\rangle_{\lambda} \right), $$
where we have used Equations (11), (23), and (24). □
Proof of Equation (36).
This expression appears by taking sums over subscript t on Equation (35) and using Equations (17), (22), and (27). This result was demonstrated in a previous work through a different strategy [24]. □
Proof of Equation (37).
From Equation (11), Equation (37), left, satisfies
$$ \left\langle E_t \right\rangle_{\lambda} = \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t)\, E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_{t-1})\, p^{(\lambda)}\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right) E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) $$
$$ = \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_{t-1})\, \frac{\exp\left[-\beta E\left(x_t; x_{t-1},\ldots,x_{t_0}\right)\right]}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}\, E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = -\sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_{t-1})\, \frac{1}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}\, \frac{\partial}{\partial\beta} \exp\left[-\beta E\left(x_t; x_{t-1},\ldots,x_{t_0}\right)\right] $$
$$ = -\sum_{x_{t_0},\ldots,x_{t-1}} p^{(\lambda)}(x_{t_0},\ldots,x_{t-1})\, \frac{\partial}{\partial\beta} \sum_{x_t} p^{(\lambda)}\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right) - \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t)\, \frac{\partial}{\partial\beta} \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) = \left\langle -\frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda}, $$
where we have used Equations (3), (5), and (12); the term containing $\frac{\partial}{\partial\beta}\sum_{x_t} p^{(\lambda)}\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right)$ vanishes because the conditional probability is normalized to unity. To abbreviate, we have written $\frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \equiv \frac{\partial}{\partial\beta} \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)$. Equation (37), right, is obtained by taking sums over subscript t in Equation (37), left. More in depth, since
$$ \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}(x_{t_0},\ldots,x_t)\, \frac{\partial}{\partial\beta} \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) = \sum_{x_{t_0},\ldots,x_{t_f}} p^{(\lambda)}(x_{t_0},\ldots,x_{t_f})\, \frac{\partial}{\partial\beta} \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) = \sum_{\nu=1}^{N} p_{\nu}^{(\lambda)}\, \frac{\partial}{\partial\beta} \ln Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right), $$
so that
$$ \sum_{t=t_0}^{t_f} \left\langle -\frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda} = \left\langle -\frac{\partial}{\partial\beta} \sum_{t=t_0}^{t_f} \ln Z_t^{(\lambda)} \right\rangle_{\lambda} = \left\langle -\frac{\partial}{\partial\beta} \ln \prod_{t=t_0}^{t_f} Z_t^{(\lambda)} \right\rangle_{\lambda} = \sum_{\nu=1}^{N} p_{\nu}^{(\lambda)} \left( -\frac{\partial \ln Z_{\nu}^{(\lambda)}}{\partial\beta} \right) = \left\langle -\frac{\partial \ln Z_{\nu}^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda}, $$
then, considering $\sum_{t=t_0}^{t_f} \left\langle E_t \right\rangle_{\lambda} = \left\langle E_{\nu} \right\rangle_{\lambda}$ from above, it follows that
$$ \sum_{t=t_0}^{t_f} \left\langle E_t \right\rangle_{\lambda} = \sum_{t=t_0}^{t_f} \left\langle -\frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda} \;\Longrightarrow\; \left\langle E_{\nu} \right\rangle_{\lambda} = \left\langle -\frac{\partial \ln Z_{\nu}^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda}, $$
which proves Equation (37), right. This equation was also demonstrated in a previous work by an alternative strategy [24]. □
Proof of Equation (38).
From Equations (35), (23), and (24), and Equation (37), left, in this order, Equation (38), left, satisfies
$$ \left\langle S_t^{(\lambda)} \right\rangle_{\lambda} = -\frac{1}{T} \left\langle F_t^{(\lambda)} - E_t^{(\lambda)} \right\rangle_{\lambda} = -\frac{1}{T} \left\langle -kT \ln Z_t^{(\lambda)} + \frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda} = \left\langle -\frac{\partial F_t^{(\lambda)}}{\partial T} \right\rangle_{\lambda}. $$
To abbreviate, we have written $\frac{\partial F_t^{(\lambda)}}{\partial T} \equiv \frac{\partial}{\partial T} F^{(\lambda)}\left(x_t; x_{t-1},\ldots,x_{t_0}\right)$. Equation (38), right, is obtained by taking sums over subscript t in Equation (38), left. Following the above demonstration for $\sum_{t=t_0}^{t_f} \left\langle E_t \right\rangle_{\lambda} = \left\langle E_{\nu} \right\rangle_{\lambda}$, it is easy to see that $\sum_{t=t_0}^{t_f} \left\langle S_t^{(\lambda)} \right\rangle_{\lambda} = \left\langle S_{\nu}^{(\lambda)} \right\rangle_{\lambda}$. Likewise, considering the above demonstration for $\sum_{t=t_0}^{t_f} \left\langle -\frac{\partial \ln Z_t^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda} = \left\langle -\frac{\partial \ln Z_{\nu}^{(\lambda)}}{\partial\beta} \right\rangle_{\lambda}$, it follows that
$$ \sum_{t=t_0}^{t_f} \left\langle -\frac{\partial F_t^{(\lambda)}}{\partial T} \right\rangle_{\lambda} = \left\langle -\frac{\partial}{\partial T} \sum_{t=t_0}^{t_f} F_t^{(\lambda)} \right\rangle_{\lambda} = \sum_{\nu=1}^{N} p_{\nu}^{(\lambda)} \left( -\frac{\partial F_{\nu}^{(\lambda)}}{\partial T} \right) = \left\langle -\frac{\partial F_{\nu}^{(\lambda)}}{\partial T} \right\rangle_{\lambda}. $$
Then,
$$ \sum_{t=t_0}^{t_f} \left\langle S_t^{(\lambda)} \right\rangle_{\lambda} = \sum_{t=t_0}^{t_f} \left\langle -\frac{\partial F_t^{(\lambda)}}{\partial T} \right\rangle_{\lambda} \;\Longrightarrow\; \left\langle S_{\nu}^{(\lambda)} \right\rangle_{\lambda} = \left\langle -\frac{\partial F_{\nu}^{(\lambda)}}{\partial T} \right\rangle_{\lambda}, $$
which proves Equation (38), right. This equation was also demonstrated in a previous work by an alternative strategy [24]. □
Proof of Equation (42).
We want to demonstrate that $p_{\nu} \equiv p(x_{t_0},\ldots,x_{t_f}) = \prod_{t=t_0}^{t_f} p\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right)$, where $p_{\nu}$ is given by Equation (39) and $p\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right)$ by Equation (42).
Certainly, Equation (42) can be expressed as:
$$ p\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right) = e^{-\beta E\left(x_t; x_{t-1},\ldots,x_{t_0}\right)}\, \frac{f\left(x_{t_0},\ldots,x_t\right)}{f\left(x_{t_0},\ldots,x_{t-1}\right)} $$
by using the property of Equation (47). Then,
$$ p_{\nu} = e^{-\beta E_{\nu}} \prod_{t=t_0}^{t_f} \frac{f\left(x_{t_0},\ldots,x_t\right)}{f\left(x_{t_0},\ldots,x_{t-1}\right)} = e^{-\beta E_{\nu}}\, \frac{f_{t_0}\, f_{t_0+1}\, f_{t_0+2} \cdots f_{t_f-2}\, f_{t_f-1}\, f_{t_f}}{Z\, f_{t_0}\, f_{t_0+1}\, f_{t_0+2} \cdots f_{t_f-2}\, f_{t_f-1}} = \frac{e^{-\beta E_{\nu}}}{Z}, $$
where we have used Equations (45) and (46). We have labeled the partition factors $f_t \equiv f\left(x_{t_0},\ldots,x_t\right)$, $t = t_0,\ldots,t_f$, to abbreviate. □
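The telescoping argument above can be checked numerically on a minimal chain. The sketch below uses a hypothetical two-instant system with history-dependent energies (invented values) and partition factors f built as sums over future Boltzmann weights; the product of conditionals reproduces the Boltzmann pathway probability $e^{-\beta E_{\nu}}/Z$:

```python
import numpy as np

rng = np.random.default_rng(1)
E0 = rng.uniform(0.0, 1.0, size=2)        # E(x_0), beta = 1 (energies in kT)
E1 = rng.uniform(0.0, 1.0, size=(2, 2))   # E(x_1; x_0), history-dependent

# Partition factors: f(x_0) sums the future weight given x_0; f(x_0, x_1) = 1.
f0 = np.exp(-E1).sum(axis=1)
Z = np.sum(np.exp(-E0) * f0)              # full partition function

p_x0 = np.exp(-E0) * f0 / Z               # p(x_0) = e^{-E(x_0)} f(x_0) / Z
p_x1_x0 = np.exp(-E1) / f0[:, None]       # p(x_1 | x_0) = e^{-E(x_1; x_0)} / f(x_0)

# The product of conditionals telescopes to e^{-E_nu} / Z for every pathway.
p_path = p_x0[:, None] * p_x1_x0
boltzmann = np.exp(-(E0[:, None] + E1)) / Z
print(np.allclose(p_path, boltzmann))
```

The factors f cancel pairwise in the product, exactly as in the displayed telescoping fraction.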
Proof of Equations (51) and (52).
Equation (51) is obtained by taking sums in the pathway probability product decomposition of Equation (41) from the most recent event variable, $x_{t_f}$, to the furthest event variable, $x_{t+1}$.
With regard to the second part of Equation (51),
$$ p\left(x_{t_0},\ldots,x_t\right) = \prod_{i=t_0}^{t} \frac{e^{-\beta E\left(x_i; x_{i-1},\ldots,x_{t_0}\right)}}{Z\left(x_i; x_{i-1},\ldots,x_{t_0}\right)} = \frac{e^{-\beta E\left(x_{t_0},\ldots,x_t\right)}}{\prod_{i=t_0}^{t} Z\left(x_i; x_{i-1},\ldots,x_{t_0}\right)}. $$
For $Z\left(x_{t_0},\ldots,x_t\right) \equiv \prod_{i=t_0}^{t} Z\left(x_i; x_{i-1},\ldots,x_{t_0}\right)$, we find
$$ Z\left(x_{t_0},\ldots,x_t\right) = \prod_{i=t_0}^{t} \frac{\sum_{x'_i} e^{-\beta E\left(x'_i; x_{i-1},\ldots,x_{t_0}\right)} f\left(x_{t_0},\ldots,x_{i-1},x'_i\right)}{f\left(x_{t_0},\ldots,x_i\right)} = \frac{Z}{f\left(x_{t_0},\ldots,x_t\right)}, $$
where we have used Equation (46). □
Proof of Equation (76).
The left part expands as:
$$ \left\langle \frac{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)} \right\rangle_{\lambda} = \sum_{x_{t_0},\ldots,x_t} p^{(\lambda)}\left(x_{t_0},\ldots,x_t\right) \frac{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)} = \sum_{x_{t_0},\ldots,x_{t-1}} p^{(\lambda)}\left(x_{t_0},\ldots,x_{t-1}\right) \sum_{x_t} p^{(\lambda)}\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right) \frac{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)} = \sum_{x_{t_0},\ldots,x_{t-1}} p^{(\lambda)}\left(x_{t_0},\ldots,x_{t-1}\right) \sum_{x_t} p\left(x_t \mid x_{t-1},\ldots,x_{t_0}\right) = 1. $$
The proof of the right part is formally analogous. □

Appendix B. Proof of the Independence Limit Theorem for Instants, Truncated, and Full Pathways

We begin with the proof of the theorem for instants. We will use the following inequality:
$$ \left| e^z - 1 \right| \leq e^{|z|}\, |z|, \tag{A3} $$
which was proven elsewhere [31]. For mild memory effects, we express the condition of Equation (90) as
$$ \beta \left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right| \leq \varepsilon, \quad \left(\beta \equiv 1/(kT) > 0\right). \tag{A4} $$
Firstly, we demonstrate that
$$ \frac{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right)} \xrightarrow{\varepsilon \to 0} 1. \tag{A5} $$
Proof of Equation (A5).
From Equation (4),
$$ \left| Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) - Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right) \right| = \left| \sum_{x_t} e^{-\beta E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right)} \left[ e^{-\beta \left( E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right)} - 1 \right] \right| $$
$$ \leq \sum_{x_t} e^{-\beta E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right)} \left| e^{-\beta \left( E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right)} - 1 \right| \leq \sum_{x_t} e^{-\beta E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right)}\, \beta \left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right| e^{\beta \left| E\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right) \right|} \leq \sum_{x_t} e^{-\beta E\left(x_t; x'_{t-1},\ldots,x'_{t_0}\right)}\, \varepsilon\, e^{\varepsilon} = Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right) \varepsilon\, e^{\varepsilon}, $$
where we have used Equation (A3) followed by Equation (A4). Then,
$$ \left| \frac{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right)} - 1 \right| \leq \varepsilon\, e^{\varepsilon} \xrightarrow{\varepsilon \to 0} 0, $$
which demonstrates Equation (A5). □
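The bound that closes this proof can be sampled numerically. In this hypothetical sketch (random energies, not values from the text), two instantaneous partition functions whose per-microstate energies differ by at most ε in kT units satisfy the displayed inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
bE = rng.uniform(0.0, 2.0, size=8)         # beta*E(x_t; history), one per microstate
for eps in (0.5, 0.1, 0.01):
    dE = rng.uniform(-eps, eps, size=8)    # history-induced shifts, |dE| <= eps
    Z = np.sum(np.exp(-bE))
    Zp = np.sum(np.exp(-(bE + dE)))        # partition function for the other history
    assert abs(Zp / Z - 1.0) <= eps * np.exp(eps)
print("bound |Z'/Z - 1| <= eps*e^eps verified")
```

The assertion holds deterministically, because each Boltzmann weight is multiplied by a factor within e^{±eps}, which is exactly the mechanism of the proof above.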
Secondly, we demonstrate that
$$ \frac{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)} \xrightarrow{\varepsilon \to 0} 1. \tag{A6} $$
Proof of Equation (A6).
Using Equation (47), Equation (49) can be expressed as:
$$ Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right) = \sum_{x'_t} e^{-\beta E\left(x'_t; x_{t-1},\ldots,x_{t_0}\right)}\, \frac{f\left(x_{t_0},\ldots,x_{t-1},x'_t\right)}{f\left(x_{t_0},\ldots,x_t\right)}. $$
Then,
$$ \left| \frac{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)} - 1 \right| = \left| \sum_{x'_t} \frac{e^{-\beta E\left(x'_t; x_{t-1},\ldots,x_{t_0}\right)}}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)} \left[ \frac{f\left(x_{t_0},\ldots,x_{t-1},x'_t\right)}{f\left(x_{t_0},\ldots,x_t\right)} - 1 \right] \right| \leq \sum_{x'_t} \frac{e^{-\beta E\left(x'_t; x_{t-1},\ldots,x_{t_0}\right)}}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)} \left| \frac{f\left(x_{t_0},\ldots,x_{t-1},x'_t\right)}{f\left(x_{t_0},\ldots,x_t\right)} - 1 \right|. \tag{A7} $$
To continue, we need to prove that
$$ \frac{f\left(x_{t_0},\ldots,x_{t-1},x'_t\right)}{f\left(x_{t_0},\ldots,x_t\right)} \xrightarrow{\varepsilon \to 0} 1. \tag{A8} $$
For mild memory effects, we express the condition of Equation (91) as
$$ \beta \left| \Delta E\left(x_i; x_{i-1},\ldots,x_{t+1}, x_t, x'_t, x_{t-1},\ldots,x_{t_0}\right) \right| \leq \varepsilon, \tag{A9} $$
for $i = t+1,\ldots,t_f$, where
$$ \Delta E\left(x_i; x_{i-1},\ldots,x_{t+1}, x_t, x'_t, x_{t-1},\ldots,x_{t_0}\right) \equiv E\left(x_i; x_{i-1},\ldots,x_{t+1}, x_t, x_{t-1},\ldots,x_{t_0}\right) - E\left(x_i; x_{i-1},\ldots,x_{t+1}, x'_t, x_{t-1},\ldots,x_{t_0}\right). $$
From Equation (44),
$$ \left| f\left(x_{t_0},\ldots,x_{t-1},x'_t\right) - f\left(x_{t_0},\ldots,x_t\right) \right| = \left| \sum_{x_{t+1},\ldots,x_{t_f}} \exp\left[ -\beta \sum_{i=t+1}^{t_f} E\left(x_i; x_{i-1},\ldots,x_{t_0}\right) \right] \left( \exp\left[ \beta \sum_{i=t+1}^{t_f} \Delta E\left(x_i; x_{i-1},\ldots,x_t,x'_t,x_{t-1},\ldots,x_{t_0}\right) \right] - 1 \right) \right| $$
$$ \leq \sum_{x_{t+1},\ldots,x_{t_f}} \exp\left[ -\beta \sum_{i=t+1}^{t_f} E\left(x_i; x_{i-1},\ldots,x_{t_0}\right) \right] \left| \exp\left[ \beta \sum_{i=t+1}^{t_f} \Delta E\left(x_i; x_{i-1},\ldots,x_t,x'_t,x_{t-1},\ldots,x_{t_0}\right) \right] - 1 \right| \leq \sum_{x_{t+1},\ldots,x_{t_f}} \exp\left[ -\beta \sum_{i=t+1}^{t_f} E\left(x_i; x_{i-1},\ldots,x_{t_0}\right) \right] \left(t_f - t\right) \varepsilon\, e^{\left(t_f - t\right)\varepsilon} = f\left(x_{t_0},\ldots,x_t\right) \left(t_f - t\right) \varepsilon\, e^{\left(t_f - t\right)\varepsilon}, $$
where we have used Equation (A3) followed by Equation (A9). Then,
$$ \left| \frac{f\left(x_{t_0},\ldots,x_{t-1},x'_t\right)}{f\left(x_{t_0},\ldots,x_t\right)} - 1 \right| \leq \left(t_f - t\right) \varepsilon\, e^{\left(t_f - t\right)\varepsilon} \xrightarrow{\varepsilon \to 0} 0, $$
which demonstrates Equation (A8). As a consequence, from Equation (A7),
$$ \left| Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right) - Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) \right| \leq Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right) \left(t_f - t\right) \varepsilon\, e^{\left(t_f - t\right)\varepsilon}. $$
Then,
$$ \left| \frac{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x_{t-1},\ldots,x_{t_0}\right)} - 1 \right| \leq \left(t_f - t\right) \varepsilon\, e^{\left(t_f - t\right)\varepsilon} \xrightarrow{\varepsilon \to 0} 0, $$
which demonstrates Equation (A6). □
Finally, multiplying Equation (A5) by Equation (A6), we find:
$$ \frac{Z\left(x_t; x_{t-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x'_{t-1},\ldots,x'_{t_0}\right)} \xrightarrow{\varepsilon \to 0} 1, $$
which completes the proof of the theorem for instants.
The proof of the truncated-pathway version follows straightforwardly from the above results by using Equations (15) and (52):
$$ \frac{Z\left(x_{t_0},\ldots,x_t\right)}{Z^{(\lambda)}\left(x_{t_0},\ldots,x_{t-1}\right)} = \prod_{i=t_0}^{t} \frac{Z\left(x_i; x_{i-1},\ldots,x_{t_0}\right)}{Z^{(\lambda)}\left(\cdot\,; x_{i-1},\ldots,x_{t_0}\right)} \xrightarrow{\varepsilon \to 0} 1. $$
The proof for the full pathways follows from Equations (8) and (50) and from the fact that $Z / Z_{\nu}^{(\lambda)} = Z\left(x_{t_0},\ldots,x_{t_f}\right) / Z^{(\lambda)}\left(x_{t_0},\ldots,x_{t_f-1}\right)$ (see also References [26,31]).

References

  1. Chandler, D. Introduction to Modern Statistical Mechanics; Oxford University Press: New York, NY, USA, 1987. [Google Scholar]
  2. Goldstein, H.; Poole, C.; Safko, J. Classical Mechanics, 3rd ed.; Addison Wesley: San Francisco, CA, USA, 2002. [Google Scholar]
  3. Pathria, R.K.; Beale, P.D. Statistical Mechanics, 3rd ed.; Academic Press: Boston, MA, USA, 2011. [Google Scholar]
  4. Crooks, G.E. On thermodynamic and microscopic reversibility. J. Stat. Mech. 2011. [Google Scholar] [CrossRef]
  5. Bustamante, C.; Liphardt, J.; Ritort, F. The Nonequilibrium Thermodynamics of Small Systems. Phys. Today 2005, 58, 43–48. [Google Scholar] [CrossRef] [Green Version]
  6. Ritort, F. Nonequilibrium fluctuations in small systems: From physics to biology. Adv. Chem. Phys. 2008, 137, 31–123. [Google Scholar]
  7. Bustamante, C. In singulo biochemistry: When Less Is More. Annu. Rev. Biochem. 2008, 77, 44–50. [Google Scholar] [CrossRef]
  8. Arias-Gonzalez, J.R. Single-molecule portrait of DNA and RNA double helices. Integr. Biol. 2014, 6, 904–925. [Google Scholar] [CrossRef] [Green Version]
  9. Bustamante, C.; Cheng, W.; Mejia, Y.X. Revisiting the Central Dogma One Molecule at a Time. Cell 2011, 144, 480–497. [Google Scholar] [CrossRef] [Green Version]
  10. Moret, M.A.; Bisch, P.M.; Mundim, K.C.; Pascutti, P.G. New Stochastic Strategy to Analyze Helix Folding. Biophys. J. 2002, 82, 1123–1132. [Google Scholar] [CrossRef] [Green Version]
  11. Alexander, L.M.; Goldman, D.H.; Wee, L.M.; Bustamante, C. Non-equilibrium dynamics of a nascent polypeptide during translation suppress its misfolding. Nat. Commun. 2019, 10, 2709. [Google Scholar] [CrossRef] [Green Version]
  12. Goldt, S.; Seifert, U. Stochastic Thermodynamics of Learning. Phys. Rev. Lett. 2017, 118, 010601. [Google Scholar] [CrossRef] [Green Version]
  13. Jarzynski, C. Stochastic and Macroscopic Thermodynamics of Strongly Coupled Systems. Phys. Rev. X 2017, 7, 011008. [Google Scholar] [CrossRef] [Green Version]
  14. Strasberg, P.; Esposito, M. Stochastic thermodynamics in the strong coupling regime: An unambiguous approach based on coarse graining. Phys. Rev. E 2017, 95, 062101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Seifert, U. From Stochastic Thermodynamics to Thermodynamic Inference. Annu. Rev. Condens. Matter Phys. 2019, 10, 171–192. [Google Scholar] [CrossRef]
  17. Tsallis, C. Possible Generalization of Boltzmann-Gibbs Statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  18. Breuer, H.-P.; Laine, E.-M.; Piilo, J.; Vacchini, B. Colloquium: Non-Markovian dynamics in open quantum systems. Rev. Mod. Phys. 2016, 88, 021002. [Google Scholar] [CrossRef] [Green Version]
  19. Rivas, A.; Huelga, S.F.; Plenio, M.B. Quantum non-Markovianity: Characterization, quantification and detection. Rep. Prog. Phys. 2014, 77, 094001. [Google Scholar] [CrossRef]
  20. Taranto, P.; Pollock, F.A.; Milz, S.; Tomamichel, M.; Modi, K. Quantum Markov Order. Phys. Rev. Lett. 2019, 122, 140401. [Google Scholar] [CrossRef] [Green Version]
  21. Thomas, G.; Siddharth, N.; Banerjee, S.; Ghosh, S. Thermodynamics of non-Markovian reservoirs and heat engines. Phys. Rev. E 2018, 97, 062108. [Google Scholar] [CrossRef] [Green Version]
  22. Yu, S.; Wang, Y.-T.; Ke, Z.-J.; Liu, W.; Meng, Y.; Li, Z.-P.; Zhang, W.-H.; Chen, G.; Tang, J.-S.; Li, C.-F.; et al. Experimental Investigation of Spectra of Dynamical Maps and their Relation to non-Markovianity. Phys. Rev. Lett. 2018, 120, 060406. [Google Scholar] [CrossRef]
  23. Ventéjou, B.; Sekimoto, K. Progressive quenching: Globally coupled model. Phys. Rev. E 2018, 97, 062150. [Google Scholar] [CrossRef] [Green Version]
  24. Arias-Gonzalez, J.R. Thermodynamic framework for information in nanoscale systems with memory. J. Chem. Phys. 2017, 147, 205101. [Google Scholar] [CrossRef]
  25. Arias-Gonzalez, J.R. Writing, Proofreading and Editing in Information Theory. Entropy 2018, 20, 368. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Arias-Gonzalez, J.R. Information management in DNA replication modeled by directional, stochastic chains with memory. J. Chem. Phys. 2016, 145, 185103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 1st ed.; Wiley-Interscience: Hoboken, NJ, USA, 1991. [Google Scholar]
  28. Fisz, M. Probability Theory and Mathematical Statistics, 3rd ed.; Krieger Publishing Company: Malabar, FL, USA, 1980. [Google Scholar]
  29. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 2005, 95, 040602. [Google Scholar] [CrossRef] [Green Version]
  30. Arias-Gonzalez, J.R. Entropy involved in fidelity of DNA replication. PLoS ONE 2012, 7, e42272. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Arias-Gonzalez, J.R.; Aleja, D. Comment on “Information management in DNA replication modeled by directional, stochastic chains with memory” [J. Chem. Phys. 145, 185103 (2016)]. J. Chem. Phys. 2020, 152, 047101. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Entropy, internal energy, and Helmholtz free energy as a function of time during the template-directed synthesis of a biopolymer. The graphs show expected potentials per monomer considering independent events (solid lines), protocol-driven pathways (dotted lines), and equilibrium pathways (dashed lines). Each incorporated monomer either releases an energy of 0.5 kT if it fits correctly or absorbs an energy of 0.5 kT otherwise. Monomers are selected from a pool of 4 different elements, only one fitting correctly at a time. The memory is modeled by a power law of the form $1/(\Delta t)^{3/2}$, where $\Delta t$ is the number of elapsed time steps [24,26].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.