Article

Transport of Heat and Charge in Electromagnetic Metrology Based on Nonequilibrium Statistical Mechanics

James Baker-Jarvis * and Jack Surek

National Institute of Standards and Technology, 325 Broadway, MS 818.01, Boulder, CO, USA
* Author to whom correspondence should be addressed.
Entropy 2009, 11(4), 748-765; https://doi.org/10.3390/e11040748
Submission received: 11 September 2009 / Accepted: 26 October 2009 / Published: 3 November 2009
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)

Abstract

Current research is probing transport on ever smaller scales. Modeling of the electromagnetic interaction with nanoparticles or small collections of dipoles and its associated energy transport and nonequilibrium characteristics requires a detailed understanding of transport properties. The goal of this paper is to use a nonequilibrium statistical-mechanical method to obtain exact time-correlation functions, fluctuation-dissipation (FD) theorems, heat and charge transport, and associated transport expressions under electromagnetic driving. We extend the time-symmetric Robertson statistical-mechanical theory to study the exact time evolution of relevant variables and entropy rate in the electromagnetic interaction with materials. In this exact statistical-mechanical theory, a generalized canonical density is used to define an entropy in terms of a set of relevant variables and associated Lagrange multipliers. The entropy production rate is then defined through the relevant variables. The influence of the nonrelevant variables enters the equations through the projection-like operator and thereby influences the entropy. We present applications to the response functions for the electrical and thermal conductivity, specific heat, generalized temperature, Boltzmann’s constant, and noise. The analysis can be performed either classically or quantum-mechanically, and there are only a few modifications in transferring between the approaches. As an application we study the energy, generalized temperature, and charge transport equations that are valid in nonequilibrium and relate them to heat flow and temperature relations in equilibrium states.

1. Introduction

Measurable electromagnetic quantities are modeled by transport expressions in terms of time-correlation or frequency-domain functions. These transport quantities include the electric and magnetic polarizations and associated thermal interactions, the electrical conductivity, and equations of motion. As measurements are made at smaller and smaller scales, and on nonequilibrium systems, the details of the interactions of applied fields with molecules become more important. Progress in nanotechnology research requires calculating the combined microscopic fields and heat transfer interactions for a handful of dipoles in biomaterials, nanowires, and in electromagnetic-chemistry reactions. At these scales, bulk properties are insufficient for modeling. In this paper we use a projection-like operator approach to statistical mechanics to develop expressions for the evolution of the polarization, magnetization, electrical and thermal transport coefficients, temperature, and related expressions.
The projection-operator approach was pioneered by Zwanzig, Mori, and many others [1,2]. The theoretical approach used here has its roots in the work of Zwanzig [1] that was later generalized and extended by Robertson [3]. Others have added to its development, for example, Mori, Grabert, and Oppenheim [2,4,5,6,7]. Robertson extended Zwanzig’s projection-operator approach to an exact theory that uses a projection-like operator. The Robertson version of the Zwanzig projection-operator approach uses a generalized canonical density operator that is obtained by maximizing the information entropy subject to theoretical constraints on the expectations of the relevant variables at specific times. These relevant variables comprise a subset of variables that the experimentalist uses to describe the system. A common misconception is the idea that the Robertson theory is a variant of the Jaynesian information-theory approach that uses values of the constraints from experiments to construct the relevant density function; it is not. This approach, as noted by Nettleton [8], is not restricted to using expectations of constraints to obtain a Gibbsian distribution; rather, a different form of the entropy could be used. Unlike the information-theory algorithm of Jaynes, this theory uses theoretical constraints to obtain the least biased form for the relevant distribution function as a function of the relevant variables and Lagrangian multipliers, and it is exactly consistent with the quantum Liouville equation. This theory defines expected values of relevant variables and a nonequilibrium entropy for a dynamically driven system that reduces to the thermodynamic variables and entropy in the appropriate limit. The advantage of this approach in studying the time evolution of relevant variables is that the equations incorporate both relevant and irrelevant information, are exact, are Hamiltonian-based, have a direct relation to thermodynamics, and are based on reversible microscopic equations. The system is described by a set of relevant variables, but in order to maintain an exact solution to Liouville’s equation, irrelevant information is incorporated by the use of a projection-like operator. This correction for the irrelevant variables manifests as, and defines, relaxation and dissipation [9]. A common argument against the projection-operator theories is that it has not yet been discovered how to model them in numerical simulators; however, Nettleton [10] has made significant progress in this area, and Equation (27) of this paper eliminates the projection-like operator in the equations of motion.
In Robertson’s approach, the full density operator ρ ( t ) equals a relevant canonical density operator σ ( t ) that is developed from constraints on relevant variables only, plus a relaxation correction term that accounts for irrelevant information. The statistical-density operator ρ ( t ) satisfies the Liouville equation, whereas the relevant canonical-density operator σ ( t ) does not. By including the effects of the irrelevant information, this approach is exact and time-symmetric. This theory yields an expression that exhibits all the required properties of a nonequilibrium entropy, and yet it is based entirely on time-symmetric equations. Although the method exactly solves the Liouville equation, the evolution operator used in this approach is nonunitary, due to the non-Hermitian character of the projection-like operator, and reduces to a unitary operator only in the very special case where there are no irrelevant variables. In the past, other researchers have developed nonequilibrium statistical mechanical theories by adding a source term to Liouville’s equation [11], but the approach used in this paper does not require a source term. This approach has been used previously to study the microscopic time evolution of electromagnetic properties [12,13,14]. The theory can be formulated either quantum-mechanically or classically.
This paper starts by reviewing the statistical-mechanical background of the approach. Then a novel approach to studying the entropy and its production using this statistical-mechanical theory is developed, new equations of motion are presented, and transport coefficients for electromagnetic and thermal quantities are presented. Finally, applications are made to noise, the determination of Boltzmann’s constant, and generalized temperature.

2. Theoretical Background

Consider a material that is driven by applied electromagnetic fields. We assume that the material contains both permanent electric and magnetic moments. The field power transmitted into the material is partitioned into the internal energy stored in the fields and lattice, the energy dissipated in material losses, the work performed by the fields to polarize dielectric or magnetic material, and the energy to drive currents on conductors in the system. Losses in the materials transform field energy into heat. For example, the dissipative currents on electromagnetic cavity walls are formed from the conversion of some of the electromagnetic energy that enters the cavity into mechanical motion of conduction electrons and heat. The transformation of electromagnetic field energy into the kinetic energy of the currents on the cavity walls and heat results in entropy production. The entropy production rate originates from the work done by an external electromagnetic generator that maintains the fields in the cavity.
In this analysis the fields are treated classically, whereas the polarizations and internal energy are assumed to be operators. The applied electromagnetic fields are denoted by $\mathbf{E}$ and $\mathbf{H}$, and the effective local fields in the material are $\mathbf{E}_p$ and $\mathbf{H}_m$. There may also be heat exchange with a reservoir, modeled by a convective term $-\nabla\cdot\mathbf{Q}_h$ that would be included in the Hamiltonian in the internal energy or as a separate term. The internal-energy density $u(\mathbf{r})$ contains the stored electromagnetic energy in free space, kinetic energy, electrostatic potential energy, dipole-dipole, spin-spin, spin-lattice, anisotropy, exchange, and other interactions. In the material there are microscopic electric, $\mathbf{p}(\mathbf{r})$, and magnetic, $\mathbf{m}(\mathbf{r})$, polarization operators.
As the applied field impinges on a material specimen, depolarization fields may be formed in the material that modify the fields in the material from the applied fields to form ( E p , H m ).
The Hamiltonian is
$$H(t) = \int \left\{ u(\mathbf{r}) - \mathbf{p}(\mathbf{r})\cdot\mathbf{E}(\mathbf{r},t) - \mu_0\,\mathbf{m}(\mathbf{r})\cdot\mathbf{H}(\mathbf{r},t) \right\} d^3r$$
$$\phantom{H(t)} = \int \left\{ u(\mathbf{r}) - \phi(\mathbf{r},t)\,\rho_f(\mathbf{r},t) - \mu_0\,\mathbf{m}(\mathbf{r})\cdot\mathbf{H}(\mathbf{r},t) \right\} d^3r \qquad (1)$$
In the second form of the Hamiltonian, the electric polarization term has been replaced with a potential term and a modified internal-energy density, where $-\nabla\phi = \mathbf{E}_p$. We have assumed there is a microscopic displacement operator $\mathbf{d}(\mathbf{r})$ that satisfies $\nabla\cdot\mathbf{d}(\mathbf{r}) = \rho_f(\mathbf{r})$, where $\rho_f(\mathbf{r})$ is the free charge, and also $\nabla\cdot\mathbf{p} = -\rho_b$.
The dynamical variables we use are a set of operators, or classically, a set of functions of phase $F_1(\mathbf{r}), F_2(\mathbf{r}), \ldots$. For normalization, $F_0 = 1$ is sometimes included in the set. We will assume a normalization of the density function $\sigma$ so that no $\lambda_0$ or $F_0$ is required. The operators $F_n(\mathbf{r})$ are functions of $\mathbf{r}$ and phase variables, but are not explicitly time dependent. The time dependence enters through the driving fields in the Hamiltonian and when the trace operation is performed. These operators are, for example, the microscopic internal-energy density $u(\mathbf{r})$ and the electromagnetic polarizations $\mathbf{m}(\mathbf{r})$ and $\mathbf{p}(\mathbf{r})$. Associated with these operators are a set of thermodynamic fields that are not operators and do not depend on phase, such as the generalized temperature and the local electromagnetic fields $\mathbf{E}_p(\mathbf{r},t)$ and $\mathbf{H}_m(\mathbf{r},t)$. In any complex system, in addition to the set of $F_n(\mathbf{r})$, there are many other uncontrolled or unobserved variables that are categorized as irrelevant variables.
A brief overview of the approach for calculating the equations of motion for the electric and magnetic polarizations and internal energy will be presented. For details please refer to [3,12,15,16]. Later, these results will be used to study the entropy and its evolution.
In the Robertson theory there are two density operators. The first is the full statistical-density operator ρ ( t ) that encompasses all information of the system in relation to the Hamiltonian and that satisfies the Liouville equation,
$$\frac{d\rho}{dt} = -iL(t)\,\rho(t) = \frac{1}{i\hbar}\left[H(t),\rho(t)\right] \qquad (2)$$
$L(t)$ is the time-dependent Liouville operator, and $H(t)$ is the Hamiltonian that, as a consequence of the applied field, is time dependent.
In addition to $\rho(t)$, a relevant canonical-density operator $\sigma(t)$ is constructed. Robertson chose to construct $\sigma(t)$ by maximizing the information entropy subject to a finite set of constraints on the expected values of operators that are in the Hamiltonian at each time t. In the equilibrium limit the expected values of the relevant operators are the thermodynamic potentials. The associated Lagrangian multipliers can be identified as thermodynamic forces. The entropy summarizes our state of uncertainty in the expected values of the relevant variables at time t
$$S(t) = -k_B\,\mathrm{Tr}\,\sigma(t)\ln\sigma(t) \qquad (3)$$
where $\mathrm{Tr}$ denotes the trace and in a classical analysis represents integration over phase variables. We note that forms of the entropy other than Equation (3) could be used to construct $\sigma(t)$ from the relevant variables contained in the Hamiltonian. The constraints are
$$\langle F_n(\mathbf{r})\rangle \equiv \mathrm{Tr}\left(F_n(\mathbf{r})\,\rho(t)\right) = \mathrm{Tr}\left(F_n(\mathbf{r})\,\sigma(t)\right) \qquad (4)$$
Maximization by the common variational procedure leads to the generalized canonical density,
$$\sigma(t) = \exp\left(-\lambda(t)*F\right) \qquad (5)$$
where we require
$$\mathrm{Tr}\left(\exp\left(-\lambda(t)*F\right)\right) = 1 \qquad (6)$$
This Gibbsian form of the entropy appears to be very reasonable since the condition of maximal information entropy is the most unbiased and maximizes the uncertainty in $\sigma(t)$ consistent with the constraints at each point in time. If one chooses not to use the form in Equation (3) to generate $\sigma(t)$ from a set of constraints, then the researcher must possess additional information beyond the set of constraints. This leads to an internal contradiction, since we assume we only have information expressed by the constraints. We want to emphasize that this is not just another Jaynesian inverse problem in that here the constraints are theoretical values $\langle F_n\rangle \equiv \mathrm{Tr}(F_n\rho(t))$ and together with the Lagrangian multipliers satisfy exact equations of motion derived from Liouville's equation (see Equation (16)). Therefore the entropy in Equation (3), unlike in the Jaynesian approach, is fully consistent with Liouville's equation, the Hamiltonian, and Schrödinger's equation. In Equation (6), $\lambda(\mathbf{r},t)$ are Lagrangian multipliers that are not functions of phase and are not operators; they are related to local nonquantized generalized forces, such as temperature and electromagnetic fields, and together with $\langle F_n\rangle$ are determined by the simultaneous solution of Equations (4) and (16). We use the notation $\lambda*F = \int d\mathbf{r}\sum_n \lambda_n(\mathbf{r},t)\,F_n(\mathbf{r})$. Notice that the information that is assumed known is only a subset of the total possible information on the system. The effects of unknown variables are contained in irrelevant variables that will be included in the theory exactly by a projection-like operator, and these effects are also manifest in the Lagrangian multipliers.
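As a concrete numerical illustration of Equations (3)-(6), the short sketch below builds a generalized canonical density matrix for a toy relevant operator, normalizes it, and evaluates the entropy $-k_B\mathrm{Tr}\,\sigma\ln\sigma$. The three-level operator, the value of the Lagrange multiplier, and the use of NumPy/SciPy are illustrative assumptions, not constructions taken from the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

k_B = 1.380649e-23  # J/K

# Toy relevant operator F (energy-like spectrum) and a beta-like Lagrange multiplier
# chosen for T = 300 K; both are assumptions made only for this illustration.
F = np.diag([0.0, 1.0e-21, 2.0e-21])   # J
lam = 1.0 / (k_B * 300.0)              # 1/J

# Generalized canonical density, Equation (5), normalized so Tr(sigma) = 1, Equation (6).
sigma = expm(-lam * F)
sigma /= np.trace(sigma)

# Information entropy of the relevant description, Equation (3).
S = -k_B * np.trace(sigma @ logm(sigma)).real
print(f"Tr(sigma) = {np.trace(sigma).real:.6f},  S = {S:.3e} J/K")
```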
In the case of the set of relevant variables U = T r ( u σ ) , P = T r ( p σ ( t ) ) , M = T r ( m σ ( t ) ) , the constructed generalized relevant density function is
$$\sigma(t) = \frac{1}{Z}\exp\left(-\int \beta(\mathbf{r},t)\left[\, u(\mathbf{r}) - \mathbf{p}(\mathbf{r})\cdot\mathbf{E}_p(\mathbf{r},t) - \mu_0\,\mathbf{m}(\mathbf{r})\cdot\mathbf{H}_m(\mathbf{r},t)\right] d\mathbf{r}\right) \qquad (7)$$
where Z is the partition function, and μ 0 is the permeability of free space. In terms of u and microscopic displacement vector d , this becomes
$$\sigma(t) = \frac{1}{Z}\exp\left(-\int \beta(\mathbf{r},t)\left[\, u(\mathbf{r}) - \mathbf{d}(\mathbf{r})\cdot\mathbf{E}_p(\mathbf{r},t) - \mathbf{b}(\mathbf{r})\cdot\mathbf{H}_m(\mathbf{r},t)\right] d\mathbf{r}\right) \qquad (8)$$
The dynamical evolution of the relevant variables, that is, the reversible evolution through the Hamiltonian, is denoted by
$$\dot{F}_n(\mathbf{r}) = iL\,F_n(\mathbf{r}) = -\frac{1}{i\hbar}\left[H(t),F_n(\mathbf{r})\right] \qquad (9)$$
Note the difference in sign from that of the statistical density operator evolution in Equation (2). In addition to the dynamical evolution of the relevant variables, the time derivative of the expected values of the relevant variables is also influenced by the irrelevant variables, and that influence manifests itself as dissipation.
Robertson’s approach is based on developing an exact integral equation for $\rho(t)$ in terms of $\sigma(t)$. If we use Oppenheim’s extended initial condition [5], this relationship is
$$\rho(t) = \sigma(t) + T(t,0)\,\chi(0) - \int_0^t d\tau\, T(t,\tau)\left\{1-P(\tau)\right\} iL(\tau)\,\sigma(\tau) \qquad (10)$$
where the initial condition is χ ( 0 ) = ρ ( 0 ) - σ ( 0 ) (note that Oppenheim and Levine [5] generalized the analysis of Robertson to include this more generalized initial condition). T is a non-unitary evolution operator with T ( t , t ) = 1 and satisfies
$$\frac{\partial T(t,\tau)}{\partial\tau} = T(t,\tau)\left(1-P(\tau)\right) iL(\tau) \qquad (11)$$
where P ( t ) is a non-Hermitian projection-like operator defined by the functional derivative
$$P(t)A = \sum_{n=1}^{m}\int d^3r\, \frac{\delta\sigma(t)}{\delta\langle F_n(\mathbf{r})\rangle}\,\mathrm{Tr}\left(F_n(\mathbf{r})\,A\right) \qquad (12)$$
for any operator A [3], and $d\sigma/dt = P(t)\,d\rho/dt$. In his pioneering work, Robertson showed that Equation (12) is equivalent to the Kawasaki-Gunton and Grabert projection operators, and is a generalization of the Mori and Zwanzig projection operators [17]. Although $P^2 = P$, it is not necessarily Hermitian and as a consequence is not a true projection operator. We will see that the non-unitary behavior of T models dissipation.
An important identity was proven previously [5,15]
$$iL\,\sigma(t) = -\lambda * \bar{\dot{F}}\,\sigma \qquad (13)$$
and
$$\mathrm{Tr}\left(iL\,\sigma(t)\right) = -\lambda * \langle\bar{\dot{F}}\rangle = 0 \qquad (14)$$
where the bar is defined for any operator A as $\bar{A} = \int_0^1 \sigma^x(t)\,A\,\sigma^{-x}(t)\,dx$ [5,18]. In a classical analysis $\bar{A} = A$. Equation (14) is related to microscopic energy conservation for the dynamical evolution of the relevant variables.
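The Kubo-transformed operator $\bar{A}$ can be evaluated numerically from an eigendecomposition of $\sigma$. The sketch below, which uses small randomly generated Hermitian matrices purely as stand-ins (assumptions, not quantities from the paper), approximates $\bar{A} = \int_0^1\sigma^x A\,\sigma^{-x}dx$ by quadrature and checks the identity $\mathrm{Tr}(\bar{A}\sigma) = \mathrm{Tr}(A\sigma)$.

```python
import numpy as np

def kubo_bar(A, sigma, n=200):
    """Approximate A_bar = int_0^1 sigma^x A sigma^(-x) dx by the midpoint rule."""
    w, V = np.linalg.eigh(sigma)              # sigma is Hermitian and positive definite
    Vi = V.conj().T
    xs = (np.arange(n) + 0.5) / n
    A_bar = np.zeros_like(A, dtype=complex)
    for x in xs:
        Sx = V @ np.diag(w**x) @ Vi           # sigma^x built in the eigenbasis
        Sxm = V @ np.diag(w**(-x)) @ Vi       # sigma^(-x)
        A_bar += Sx @ A @ Sxm
    return A_bar / n

# Toy Hermitian operator and normalized density matrix (illustrative values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)); A = (X + X.T) / 2
Y = rng.normal(size=(4, 4)); sigma = Y @ Y.T + np.eye(4)
sigma /= np.trace(sigma)

A_bar = kubo_bar(A, sigma)
# Since sigma commutes with its own powers, Tr(A_bar sigma) equals Tr(A sigma).
print(np.trace(A_bar @ sigma).real, np.trace(A @ sigma).real)
```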
Due to the invariance of the trace operation under unitary transformations, it is known that the von Neumann entropy $-k_B\mathrm{Tr}(\rho(t)\ln\rho(t))$, formed from the full statistical-density operator $\rho(t)$ that satisfies the Liouville equation (2) in an isolated system, is independent of time and cannot be the nonequilibrium entropy (for example, the case studied by Evans [19] without thermostats is just this case). However, the entropy $-k_B\mathrm{Tr}(\sigma(t)\ln\sigma(t))$ is not constant in time, has all the properties of a nonequilibrium entropy, and reduces to the thermodynamic entropy in the appropriate limit.
In the very special and unrealizable case, when all F ˙ n ( r ) can be written as a linear sum of the set F m ( r ) such that
$$\dot{F}_n(\mathbf{r}) = \sum_m \int d^3r'\, a_{mn}(\mathbf{r},\mathbf{r}')\,F_m(\mathbf{r}') \qquad (15)$$
then the last terms on the RHS of Equation (10) are zero and $\sigma(t) = \rho(t)$. In this special case $T(t,\tau)$ evolves unitarily. This is a consequence of the identity $P F_m\sigma = F_m\sigma$ [5,17]. The function $a_{mn}$ does not depend on the phase variables. When Equation (15) applies for all n (see Robertson [14] Equation (A7), and Oppenheim and Levine [5]), the relaxation/dissipation terms in Equation (10) are absent. Equation (15) does not apply for almost all systems, and in addition, causality, through the Kramers-Kronig condition, requires dissipation. In any real system with decoherence, $\rho(t)\neq\sigma(t)$. As noted by Oppenheim and Levine and Robertson [3,5], it is the failure of Equation (15) for physical systems that produces the relaxation term in Equation (10) and the resultant entropy production. In measurement systems irrelevant variables influence and mix with the dynamical variable evolution and cause decoherence and irreversibility. Note that it is the non-unitary property of $T(t,\tau)$ that produces the relaxation/dissipative component of Equation (10). Since we have selected a finite set of relevant variables to describe our system, the dissipative components relate to the existence of irrelevant variables, and hence the non-unitary behavior of T [9].
In the above, we have considered only dynamically driven systems that are isolated except for the driving fields and where all interactions are modeled by the Hamiltonian. In an open system, ρ ( t ) does not evolve unitarily and ρ ( t ) need not satisfy Equation (2) [6].
It has been shown previously that the exact time evolution of the relevant variables can be expressed for a dynamically driven system as [3,5]
$$\frac{\partial\langle F_n(\mathbf{r})\rangle}{\partial t} = \langle\dot{F}_n(\mathbf{r})\rangle + \mathrm{Tr}\left(\dot{F}_n(\mathbf{r})\,T(t,0)\,\chi(0)\right) - \int_0^t \mathrm{Tr}\left[\, iL\,F_n(\mathbf{r})\, T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma(\tau)\right] d\tau \qquad (16)$$
The first term on the RHS is the reversible contribution, the second term is the initial-condition contribution, and the last term is due to dissipation. Equation (4) and Equation (16) form a closed system of equations, and the procedure for determining the Lagrange multipliers in terms of < F n > is to solve Equation (4) and Equation (16) simultaneously. Note the difference between this approach and the Jaynesian approach that would use only data in Equation (4) to obtain λ and < F n > . For operators that are odd under time reversal, such as the magnetic moment, the first term on the right hand side of Equation (16) is nonzero, whereas for functions even under time reversal, such as dielectric polarization and microscopic entropy, this term is zero. However, the third term in Equation (16) in any dissipative system is nonzero. The relaxation correction term that appears in this formalism is essential and is a source of the time-dependence in the entropy rate. Although these equations are nonlinear, in many cases linear approximations have been successfully made [20]. For open systems Equation (16) is modified only by adding a source term [21]. Transport coefficients and the related fluctuation-dissipation (FD) relations for conductivity, susceptibility, noise, and other quantities follow naturally from Equation (16).
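Equation (16) has the generic structure of a memory equation: the rate of change of a relevant variable at time t is a reversible term plus a convolution of the driving history with a relaxation kernel. The sketch below illustrates that structure for a single scalar polarization with an assumed exponential kernel and a linearized driving term; the kernel shape, relaxation time, and susceptibility are illustrative assumptions, not the exact kernel of Equation (16).

```python
import numpy as np

# Assumed linearized memory equation:  dP/dt = -int_0^t K(t - tau) [P(tau) - chi0 E(tau)] dtau
dt, nsteps = 1e-3, 5000
t = np.arange(nsteps) * dt
K = np.exp(-t / 0.05) / 0.05          # assumed exponential memory kernel (50 ms relaxation)
E = np.where(t > 0.5, 1.0, 0.0)       # field step switched on at t = 0.5
chi0 = 2.0                            # assumed static susceptibility

P = np.zeros(nsteps)
for i in range(1, nsteps):
    # Discretized convolution over the driving history up to step i.
    conv = np.sum(K[:i][::-1] * (P[:i] - chi0 * E[:i])) * dt
    P[i] = P[i-1] - dt * conv
print("long-time polarization (should approach chi0*E = 2):", round(P[-1], 3))
```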
If we use Equation (11), multiply by σ, integrate, and use $T(t,t) = 1$, we see that the non-unitary behavior of T is manifest in the relaxation term in the evolution of $\sigma(t)$:
$$\sigma(t) = T(t,0)\,\sigma(0) + \int_0^t T(t,\theta)\left[\left(1-P(\theta)\right) iL\,\sigma(\theta) + \frac{\partial\sigma(\theta)}{\partial\theta}\right] d\theta \qquad (17)$$

3. Entropy

In this section we study the entropy and its evolution from the perspective of the Robertson approach, although new results are included. There have been many other approaches developed in the literature from other perspectives, and no attempt has been made to review all of these. From Equation (3) and Equation (5) we can study the entropy that accounts for both the reversible and nonreversible behavior for the relevant electromagnetic variables, electric polarization, P = T r ( p σ ) , the magnetic polarization, M = T r ( m σ ) , and the internal-energy density, U = T r ( u σ )
$$S(t) = -k_B\langle\ln\sigma\rangle = k_B\,\lambda(\mathbf{r},t)*\langle F(\mathbf{r})\rangle = k_B\int \beta\left( U - \mathbf{P}\cdot\mathbf{E}_p - \mu_0\,\mathbf{M}\cdot\mathbf{H}_m + \frac{1}{\beta}\ln Z\right) d^3r \qquad (18)$$
$F_0 = 1$ has been included in the set of operators for normalization, and the free energy is $F = -k_B T\ln Z$. The Lagrangian multipliers in this example were $-\beta\mathbf{E}_p$, $-\beta\mu_0\mathbf{H}_m$, and β. The temperature is very general and reduces to the thermodynamic temperature at equilibrium. A more general form of the mean energy of a quantum oscillator defines the temperature Lagrangian multiplier for noise and other applications: $1/\beta = (\hbar\omega/2)\coth(\hbar\omega/2k_BT)$, and in a high-temperature approximation this reduces to $1/\beta \approx k_B T$.
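A quick numerical check of this quantum-oscillator form of the temperature multiplier and of its high-temperature limit is given below; the choice of a 1 GHz oscillator is an arbitrary illustrative assumption.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
k_B = 1.380649e-23       # J/K
omega = 2 * np.pi * 1e9  # angular frequency of an assumed 1 GHz mode

for T in [0.01, 0.1, 1.0, 10.0, 300.0]:   # kelvin
    inv_beta = (hbar * omega / 2) / np.tanh(hbar * omega / (2 * k_B * T))
    print(f"T = {T:7.2f} K   1/beta = {inv_beta:.3e} J   k_B*T = {k_B*T:.3e} J")
# At high temperature 1/beta approaches k_B*T; at low temperature it saturates at the
# zero-point value hbar*omega/2 instead of going to zero.
```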
The Lagrangian multipliers are related to the functional derivatives of the entropy by the conditions
$$\frac{\delta S(t)}{\delta\langle F_n\rangle} = k_B\,\lambda_n \qquad (19)$$
$$\frac{\delta S(t)}{\delta U} = \frac{1}{T} \qquad (20)$$
In equilibrium the temperature Lagrange multiplier is well defined, but the interpretation of temperature for a system out of equilibrium is complex. If we assume that H m and T are independent of P , the Lagrangian multiplier related to the local electrical field can be expressed as
$$-\frac{\delta S(t)/\delta\mathbf{P}}{\delta S(t)/\delta U} = \mathbf{E}_p + \frac{\delta\mathbf{E}_p}{\delta\mathbf{P}}\cdot\mathbf{P} \;\rightarrow\; \left(\frac{\delta U}{\delta\mathbf{P}}\right)_S \rightarrow \left(\frac{\delta F}{\delta\mathbf{P}}\right)_T \qquad (21)$$
where the → denotes a thermodynamic result. If we assume that E p and T are independent of M , the Lagrangian multiplier related to the local magnetic field can be expressed as
$$-\frac{\delta S(t)/\delta\mathbf{M}}{\delta S(t)/\delta U} = \mu_0\mathbf{H}_m + \mu_0\frac{\delta\mathbf{H}_m}{\delta\mathbf{M}}\cdot\mathbf{M} \;\rightarrow\; \left(\frac{\delta U}{\delta\mathbf{M}}\right)_S \rightarrow \left(\frac{\delta F}{\delta\mathbf{M}}\right)_T \qquad (22)$$
In the following sections we will apply the theory we have developed to various problems in electromagnetism. Before doing this, we need to define the entropy density and production rate. The entropy density $s_d(\mathbf{r},t)$ follows an equation of the form
$$\frac{\partial s_d(\mathbf{r},t)}{\partial t} + \nabla\cdot\mathbf{J}_e(\mathbf{r},t) = \sigma_d(\mathbf{r},t) \qquad (23)$$
where J e is an entropy current. In Equation (23), σ d is the entropy density production rate due to irreversible processes and relaxation and has units of entropy per second per unit volume.
The microscopic entropy rate that is integrated over the volume of interest is defined as
$$\dot{s}(t) \equiv k_B\,\lambda*\dot{F} = k_B\,\lambda* iL\,F \qquad (24)$$
The expected value of the microscopic, dynamical contribution to the entropy production rate vanishes due to Equation (14) and invariance of the trace under cyclic permutations [5]:
$$\langle\bar{\dot{s}}(t)\rangle = k_B\,\mathrm{Tr}\left(\lambda*\bar{\dot{F}}\,\sigma\right) = k_B\,\lambda*\mathrm{Tr}\left(\dot{F}\sigma\right) = \langle\dot{s}(t)\rangle = 0 \qquad (25)$$
Equation (25) is a result of the microreversibility of the dynamics of the relevant variables and is what would be expected for reversible microscopic equations of motion in an isolated system with dynamical evolution of the relevant variables. Equation (25) is also seen as a direct consequence of conservation of energy on the microscale since $\lambda\propto 1/T$. As a consequence of Equation (25), if we have conserved quantities the dynamical evolution satisfies $\dot{F}_n = -\nabla\cdot\mathbf{j}_n$. We then have $k_B\,\lambda*\nabla\cdot\mathbf{j} = k_B\sum_i\oint\lambda_i\,\mathbf{j}_i\cdot\mathbf{n}\,dA$; that is, any reversible contribution to the entropy production rate is due to fluxes entering or leaving the system. Beyond that there will also be irreversible contributions to the entropy production in the relaxation terms in Equation (26). In the case of an isolated system with dynamical electromagnetic driving that has microscopic internal energy u, magnetization $\mathbf{m}$, local field $\mathbf{H}_m$, and generalized temperature T with $\beta = 1/k_BT$, Equation (25) requires $k_B\int d^3r\,\beta(\mathbf{r},t)\left(\langle\dot{u}(\mathbf{r})\rangle - \mu_0\langle\dot{\mathbf{m}}(\mathbf{r})\rangle\cdot\mathbf{H}_m(\mathbf{r},t)\right) = 0$. Since β is a common factor in the integral, Equation (25) is equivalent to the conservation of microscopic energy $\langle\dot{u}(\mathbf{r})\rangle - \mu_0\langle\dot{\mathbf{m}}(\mathbf{r})\rangle\cdot\mathbf{H}_m(\mathbf{r},t) = 0$. However, we will see that the total macroscopic entropy rate, which includes the effects of the projection operator, does not vanish. The total entropy evolution equation that contains both relevant and irrelevant effects can be formed from Equation (16) by multiplying by λ and integrating over space. The entropy evolution is
$$\frac{dS}{dt}\equiv\Sigma(t) = k_B\,\lambda*\frac{\partial\langle F\rangle}{\partial t} = \mathrm{Tr}\left(\chi(0)\,\dot{s}(t)\,T(t,0)\right) + \frac{1}{k_B}\int_0^t \left\langle \dot{s}(t)\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{s}}(\tau)\right\rangle d\tau \qquad (26)$$
(note that d S / d t ( t = 0 ) = 0 ). This equation can be re-expressed as
$$\frac{dS}{dt} = \Sigma(t) = \mathrm{Tr}\left(\chi(0)\,\dot{s}(t)\,T(t,0)\right) + \frac{1}{k_B}\int_0^t \left\langle\dot{s}(t)\,T(t,\tau)\,\bar{\dot{s}}(\tau)\right\rangle d\tau - \int_0^t \left\langle\dot{s}(t)\,T(t,\tau)\,\bar{F}\right\rangle * \langle\dot{F}\rangle * \langle F\bar{F}\rangle^{-1}\, d\tau \qquad (27)$$
 
where we have used Equation (A7) of [18] and Equation (25) to eliminate $P(t)$, and therefore any complications for simulations using the projection-like operator P, through the identity
$$P\,\bar{\dot{s}}(\tau)\,\sigma = \bar{F}\sigma(\tau)*\left[\frac{\delta\langle\dot{s}\rangle}{\delta\langle F\rangle} - \frac{\delta\dot{s}}{\delta\langle F\rangle}\right] = -\bar{F}\sigma(\tau)*\frac{\delta\dot{s}}{\delta\lambda}\,\frac{\delta\lambda}{\delta\langle F\rangle} = k_B\,\bar{F}\sigma(\tau)*\langle\dot{F}\rangle*\langle F\bar{F}\rangle^{-1} \qquad (28)$$
Note that in all examined special cases, since $(1-P)$ is positive and $\langle\dot{s}\rangle = 0$, we have $dS/dt \geq 0$. We see that only the variables that change sign under time reversal, such as the magnetic moment, contribute to Equation (28), since for even variables $\langle\dot{F}_n\rangle = 0$. Equation (26) and Equation (27) are exact FD relations where the fluctuations are in terms of the dynamically driven entropy rate. Note that the LHS of Equation (26) and Equation (27) can be written alternatively, from Equation (49) in the Appendix, as $-k_B\,\lambda*\Delta[F\bar{F}]*\partial\lambda/\partial t$. These FD equations are time-reversal invariant and reduce to the traditional FD relations, such as Nyquist's theorem, in equilibrium. Equation (26) indicates that the entropy rate satisfies a FD relationship in terms of the microscopic entropy production rate $\dot{s}(t) = iL\,s(t)$. Equation (26) will form the basis of our applications to various electromagnetic driving and measurement problems. The LHS of Equation (26) represents the dissipation, and the last term on the RHS represents the fluctuations in terms of the microscopic entropy production rate $\dot{s}(t)$. Due to incomplete information there are contributions from the positive semi-definite relaxation terms in Equation (26) for almost all many-body systems. The projection operator, that is, the state of knowledge of the system described by the relevant variables, acts to decrease the entropy rate, and in the limit of no decoherence Equation (26) has $dS/dt \rightarrow 0$. In Equation (27) the entropy rate is decomposed into two terms: the first is the total entropy rate, and the second, with Equation (28), is a correction due to knowledge of the relevant variables and Lagrangian multipliers that decreases the uncertainty and therefore $\Sigma(t)$. For an open system, Equation (26) would be modified by adding an entropy source term.
To summarize, for a dynamically driven and isolated system, the expected value of the microscopic entropy rate ( < s ˙ > ) is zero, due to conservation of energy and the microscopic reversibility of the underlying equations of motion. However in a complex system there are other uncontrolled variables in addition to the relevant ones that act to produce dissipation and irreversibility and a net positive macroscopic entropy evolution. Whereas ρ ( t ) satisfies Liouville’s equation, σ ( t ) does not, and this is a consequence of irrelevant variables. Since Equation (26) is exact, systems away from equilibrium can be modeled.
Boltzmann’s constant enters the entropy in Equation (3) as an arbitrary scalar, whereas Boltzmann’s constant enters Equation (26) in a nontrivial way in terms of fluctuations of microscopic entropy production. From Equation (26), if we neglect the initial condition terms or assume a large time, we have a relation that could be used to measure Boltzmann’s constant in terms of the total entropy-production rate ( Σ ( t ) ) and the fluctuations in the microscopic entropy rate
$$k_B = \frac{\int_0^t \left\langle\dot{s}(t)\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{s}}(\tau)\right\rangle d\tau}{\Sigma(t)} \qquad (29)$$
In this context, we see that the entropy rate or production rate is a natural way to estimate Boltzmann’s constant, which has units of entropy ( J / K ). In fact, since Equation (29) contains only microscopic and macroscopic entropy rates, it could be a definition of Boltzmann’s constant. Note that the entropy production rate could be from mechanical, electrical, or hydrodynamic sources. This expression reverts to the standard Johnson noise relation for stationary processes that has been used previously to measure k B [22]. However, the expression is much more general, in that any equilibrium or nonequilibrium system that has entropy production could be used to obtain Boltzmann’s constant.

4. Equations of Motion and Correlation Functions

We can use Equation (13) and Equation (16) to obtain exact evolution equations for relevant material parameters in electromagnetism. The evolution of the electric polarization is
$$\frac{\partial\mathbf{P}(\mathbf{r},t)}{\partial t} = \mathrm{Tr}\left(\chi(0)\,\dot{\mathbf{p}}(\mathbf{r})\,T(t,0)\right) - \int_0^t \mathrm{Tr}\left[\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma(\tau)\right] d\tau$$
$$= \mathrm{Tr}\left(\chi(0)\,\dot{\mathbf{p}}(\mathbf{r})\,T(t,0)\right) - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{\mathbf{p}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\mu_0\,\bar{\dot{\mathbf{m}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{H}_m(\mathbf{r}',\tau)\, d\tau$$
$$\quad + \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{u}}(\mathbf{r}')\right\rangle\,\beta(\mathbf{r}',\tau)\, d\tau \qquad (30)$$
 
The equation for the displacement field is the same as Equation (30), with p replaced with d . As another illustration, we obtain the equation of motion for the magnetization
$$\frac{\partial\mathbf{M}(\mathbf{r},t)}{\partial t} = \langle\dot{\mathbf{m}}(\mathbf{r})\rangle + \mathrm{Tr}\left(\chi(0)\,\dot{\mathbf{m}}(\mathbf{r})\,T(t,0)\right) - \int_0^t \mathrm{Tr}\left[\dot{\mathbf{m}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma(\tau)\right] d\tau$$
$$= \langle\dot{\mathbf{m}}(\mathbf{r})\rangle + \mathrm{Tr}\left(\chi(0)\,\dot{\mathbf{m}}(\mathbf{r})\,T(t,0)\right) - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{m}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\mu_0\,\bar{\dot{\mathbf{m}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{H}_m(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{m}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{\mathbf{p}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau$$
$$\quad + \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{\mathbf{m}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{u}}(\mathbf{r}')\right\rangle\,\beta(\mathbf{r}',\tau)\, d\tau \qquad (31)$$
We can identify $\langle\dot{\mathbf{m}}\rangle = -\mu_0|\gamma_{\mathrm{eff}}|\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}$. The equation of motion for the internal-energy density is
$$\frac{\partial U(\mathbf{r},t)}{\partial t} = \langle\dot{u}(\mathbf{r})\rangle + \mathrm{Tr}\left(\chi(0)\,\dot{u}(\mathbf{r})\,T(t,0)\right) - \int_0^t \mathrm{Tr}\left[\dot{u}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma(\tau)\right] d\tau$$
$$= \langle\dot{u}(\mathbf{r})\rangle + \mathrm{Tr}\left(\chi(0)\,\dot{u}(\mathbf{r})\,T(t,0)\right) - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{u}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{\mathbf{p}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{u}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\mu_0\,\bar{\dot{\mathbf{m}}}(\mathbf{r}')\right\rangle\cdot\beta(\mathbf{r}',\tau)\,\mathbf{H}_m(\mathbf{r}',\tau)\, d\tau$$
$$\quad + \int_0^t\!\!\int d\mathbf{r}'\,\left\langle\dot{u}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{u}}(\mathbf{r}')\right\rangle\,\beta(\mathbf{r}',\tau)\, d\tau \qquad (32)$$
Equations (30) through (32) are exact, coupled nonlinear equations that must be solved in conjunction with Equations (4) for the unknowns $\lambda_n$ and $\langle F_n\rangle$. In general, this is not a simple task, but in many examples we can make approximations to linearize the kernels. $P(t)$ performs a subtraction of flux at large times to maintain integrability. Robertson showed how Equation (31) reduces to the Landau-Lifshitz equation under appropriate assumptions, when the kernel is approximated by $\mathbf{M}\times(\mathbf{M}\times\mathbf{H})\,\delta(t-\tau) = (\mathbf{M}\mathbf{M} - |\mathbf{M}|^2\mathbf{I})\cdot(\mathbf{H}-\mathbf{H}_m)\,\delta(t-\tau) \equiv \mathbf{K}\cdot(\mathbf{H}-\mathbf{H}_m)$. The electric polarization Equation (30) was linearized and solved in [20]. The LHS of Equations (30) through (32) can be alternatively expressed in terms of the time derivatives of the Lagrangian multipliers by use of Equation (47). This is useful for obtaining generalized heat and polarization evolution equations.
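To illustrate the magnetization dynamics that Equation (31) reduces to, the sketch below integrates the precession term identified above, $d\mathbf{M}/dt = -\mu_0|\gamma|\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}$, with a Landau-Lifshitz-type damping term added by hand; the field strength, damping constant, time step, and initial state are illustrative assumptions, not values from the paper.

```python
import numpy as np

mu0 = 4e-7 * np.pi                    # H/m
gamma = 1.760859630e11                # electron gyromagnetic ratio, rad/(s*T)
alpha = 0.02                          # dimensionless damping constant (assumed)
H_eff = np.array([0.0, 0.0, 8.0e4])   # A/m, static effective field along z (assumed)

def dMdt(M):
    """Precession plus Landau-Lifshitz-type damping toward the effective field."""
    Ms = np.linalg.norm(M)
    prec = -mu0 * gamma * np.cross(M, H_eff)
    damp = -mu0 * gamma * alpha / Ms * np.cross(M, np.cross(M, H_eff))
    return prec + damp

M = np.array([1.0e5, 0.0, 1.0e5])     # A/m, initial magnetization tilted away from z
dt, steps = 1e-13, 200000             # 20 ns of dynamics
for _ in range(steps):                # classical fourth-order Runge-Kutta step
    k1 = dMdt(M); k2 = dMdt(M + 0.5 * dt * k1)
    k3 = dMdt(M + 0.5 * dt * k2); k4 = dMdt(M + dt * k3)
    M = M + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print("M after 20 ns:", M)            # precesses about z and relaxes toward alignment
```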
As an example, the kernel in Equation (30), which is related to the electric polarization response, can be written through the use of Equation (13) as
$$K_p(\mathbf{r},t) = \int_0^t \mathrm{Tr}\left[\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma\right] d\tau$$
$$= \underbrace{\int_0^t\!\!\int d\mathbf{r}'\,\frac{\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{\mathbf{p}}}(\mathbf{r}')\right\rangle}{k_B T}\cdot\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau}_{\text{Electric Response}}$$
$$+ \underbrace{\int_0^t\!\!\int d\mathbf{r}'\,\frac{\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\mu_0\,\bar{\dot{\mathbf{m}}}(\mathbf{r}')\right\rangle}{k_B T}\cdot\mathbf{H}_m(\mathbf{r}',\tau)\, d\tau}_{\text{Magneto-electric Response}}$$
$$- \underbrace{\int_0^t\!\!\int d\mathbf{r}'\,\frac{\left\langle\dot{\mathbf{p}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right)\bar{\dot{u}}(\mathbf{r}')\right\rangle}{k_B T}\, d\tau}_{\text{Thermoelectric Response}} \qquad (33)$$
 
This kernel contains the effects of cross-correlations between p , m , and u. The Fourier transform of the linear version of the first term on the RHS is i ω χ ( ω ) . The kernel related to magnetic polarization response is
$$K_m(\mathbf{r},t) = \int_0^t \mathrm{Tr}\left[\dot{\mathbf{m}}(\mathbf{r})\,T(t,\tau)\left(1-P(\tau)\right) iL\,\sigma(\tau)\right] d\tau \qquad (34)$$

5. Transport Coefficients

Using the second form in Equation (1), transport coefficients can be obtained from Equation (16). When we are dealing with microscopic quantities such as the charge density that are conserved, we can write
$$\dot{\rho}_f(\mathbf{r}) + \nabla\cdot\mathbf{j}_\phi(\mathbf{r},t) = 0 \qquad (35)$$
and if the microscopic internal energy is conserved
$$\dot{u}(\mathbf{r}) + \nabla\cdot\mathbf{j}_u(\mathbf{r},t) = 0 \qquad (36)$$
We assume that the fluxes j u and j ϕ have zero normal components on the outer surfaces of the system. Then using Equations (1), (16), and (32) and integration by parts we obtain
$$\frac{\partial U}{\partial t} = \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\frac{1}{T(\mathbf{r}',\tau)}\,\mathbf{K}_{uu}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\nabla' T(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\mathbf{K}_{u\phi}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\frac{1}{T(\mathbf{r}',\tau)}\,\mathbf{K}_{u\phi}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\nabla' T(\mathbf{r}',\tau)\,\phi(\mathbf{r}',\tau)\, d\tau \qquad (37)$$
where
$$\mathbf{K}_{ab}(\mathbf{r},t,\mathbf{r}',\tau) = \frac{\left\langle \mathbf{j}_a(\mathbf{r},t)\,T(t,\tau)\left(1-P(\tau)\right)\bar{\mathbf{j}}_b(\mathbf{r}',\tau)\right\rangle}{k_B\,T(\mathbf{r}',\tau)} \qquad (38)$$
We neglected magnetic effects and used $\mathbf{E}_p = -\nabla\phi$. For a delta-function dependence in space and time this takes the form $dU/dt = \nabla\cdot(\kappa_{uu}\cdot\nabla T) - \nabla\cdot(\kappa_{u\phi}\cdot\mathbf{E}_p) - \nabla\cdot(\kappa_{u\phi}\cdot\phi\,\nabla T)$, where $\kappa_{uu}$ is the thermal conductivity. An expression for the electrical conductivity tensor can be obtained from an equation similar to Equation (30)
$$\frac{\partial\langle\rho_f\rangle}{\partial t} + \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\mathbf{K}_{\phi\phi}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\mathbf{E}_p(\mathbf{r}',\tau)\, d\tau$$
$$= \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\frac{1}{T(\mathbf{r}',\tau)}\,\mathbf{K}_{\phi u}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\nabla' T(\mathbf{r}',\tau)\, d\tau$$
$$\quad - \nabla\cdot\int_0^t\!\!\int d\mathbf{r}'\,\frac{1}{T(\mathbf{r}',\tau)}\,\mathbf{K}_{\phi\phi}(\mathbf{r},t,\mathbf{r}',\tau)\cdot\phi(\mathbf{r}',\tau)\,\nabla' T(\mathbf{r}',\tau)\, d\tau \qquad (39)$$
For a delta-function dependence in space and time, this reduces to $d\langle\rho_f\rangle/dt + \nabla\cdot(\sigma_\phi\cdot\mathbf{E}_p) = \nabla\cdot(\kappa_{\phi u}\cdot\phi\,\nabla T) - \nabla\cdot(\kappa_{\phi\phi}\cdot\phi\,\nabla T)$, where $\sigma_\phi$ is the response function related to the electrical conductivity. For metals this is $e^2 n\tau/m$.
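As a numeric illustration of the metallic limit $\sigma = e^2 n\tau/m$ quoted above, the following snippet evaluates the Drude DC conductivity for copper-like parameters; the carrier density and relaxation time are rough textbook-scale assumptions.

```python
# Drude estimate sigma = e^2 n tau / m for a copper-like metal (illustrative numbers).
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg
n = 8.5e28               # carriers per m^3 (approximate for Cu)
tau = 2.5e-14            # s, assumed momentum relaxation time

sigma = e**2 * n * tau / m_e
print(f"sigma ~ {sigma:.2e} S/m, resistivity ~ {1/sigma*1e8:.2f} x 1e-8 ohm*m")
```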
The thermal conductivity and heat capacity reduce to the standard definitions given by Green for linear response [23,24]
$$\kappa_u(\mathbf{r},t) \approx \frac{1}{k_B T^2}\int_0^\infty\!\!\int d\mathbf{r}'\,\left\langle \mathbf{j}_u(\mathbf{r},t)\, e^{-iL_0\tau}\,\bar{\mathbf{j}}_u(\mathbf{r}',\tau)\right\rangle d\tau \qquad (40)$$
The heat capacity (J/(kg·K)) times the density (kg/m³) in linear response is expressed in terms of the fluctuations in the internal-energy density u
$$c_{uu}(\mathbf{r}) \approx \frac{1}{k_B T^2}\int d\mathbf{r}'\,\left[\left\langle u(\mathbf{r})\,\bar{u}(\mathbf{r}')\right\rangle - \langle u(\mathbf{r})\rangle\langle\bar{u}(\mathbf{r}')\rangle\right] \qquad (41)$$
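The Green-Kubo-type forms in Equations (40) and (41) translate directly into averages over fluctuating microscopic time series. The sketch below uses synthetic random samples purely as stand-ins for simulation or measurement data (the sampling interval, amplitudes, and temperature are assumptions) to estimate a heat-capacity-like quantity from energy-density fluctuations and a conductivity-like coefficient from the time integral of a current autocorrelation function.

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0
rng = np.random.default_rng(1)

# Synthetic stand-ins for microscopic time series (not real simulation output).
u = 1e-21 * (1.0 + 0.05 * rng.standard_normal(200_000))  # energy-density samples
j = rng.standard_normal(200_000)                          # one heat-current component

# Fluctuation form of Equation (41): c ~ (<u^2> - <u>^2) / (k_B T^2).
c_est = u.var() / (k_B * T**2)

# Green-Kubo form of Equation (40): kappa ~ (1/(k_B T^2)) * integral of <j(0) j(t)> dt.
dt, nlag = 1e-15, 200                                     # assumed sampling interval, s
acf = np.array([np.mean(j[: len(j) - k] * j[k:]) for k in range(nlag)])
kappa_est = acf.sum() * dt / (k_B * T**2)

print(f"c estimate ~ {c_est:.3e}, kappa estimate ~ {kappa_est:.3e} (arbitrary units)")
```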

6. Applications to Boltzmann’s Constant Determination and Nyquist Noise Through Entropy Production

Noise in electromagnetic systems is a result of Johnson-Nyquist thermal noise, magnetic polarization noise, flicker noise, and shot noise. The Nyquist formulation of Johnson noise is valid for equilibrium and stationary processes. Equation (26) applied to noise problems indicates that fluctuations in entropy rate relate to the total contributions of all the noise sources. The Johnson noise equation has been used to estimate Boltzmann’s constant and serve as a temperature relation [22], but the formalism developed here allows this type of approach to be generalized and to be cast in terms of entropy production. Equation (29) forms a consistent and meaningful procedure for measurements of generalized noise, temperature, and Boltzmann’s constant, and arguably is a more fundamental approach. The lack of metrology for the measurement of entropy and its production has limited its use in this area.
The thermal noise process in equilibrium, without a macroscopic driving voltage, is usually modeled by use of the principle of detailed balance, where the electromagnetic power absorbed by a resistor is balanced by the emission of electromagnetic energy by the resistor. In this view, random fluctuations in the velocity of charges in a resistor produce a zero-mean voltage, but nonzero fluctuations in the voltage in a circuit. This produces a net power flow in the circuit, but at the same time there is an equivalent absorption of the electromagnetic power by the resistor that generates heat. The approach in this paper is to understand the noise process in terms of the concept of entropy production rate and the dynamically driven entropy-rate fluctuations. An argument based on the entropy production rate follows the same lines of reasoning as Nyquist followed using power fluctuations. The microscopic entropy production rate due to random currents in the resistor at constant temperature has a zero mean by Equation (25), because the charged-particle motions in forward and time-reversed paths compensate. At equilibrium, the reversible entropy production rate fluxes that result from the transmitted and absorbed electromagnetic waves in the circuit exactly balance. However, the fluctuations in the microscopic entropy rate are nonzero, by Equation (26), just as $\langle v^2\rangle \neq 0$ in Nyquist's approach.
In order to derive the Nyquist relation from Equation (26), we begin with a system that is composed of a waveguide terminated at both ends with a resistance R and carrying a steady bias current $I_0$. The microscopic entropy production rate due to randomly fluctuating current densities $\mathbf{j}_i$ is $\dot{s}_i = k_B\,\dot{F}*\lambda \approx (1/2)\int d^3r\,\mathbf{j}_i\cdot\mathbf{E}_i/T_i \approx (1/2)\,I_0 v_i/T_i$, where $I_0$ is a constant bias current and $\langle\mathbf{j}_i\rangle = 0$. Also, the total entropy production rate for a resistance R is $\Sigma = I_0^2 R/T$. For a stationary process, using the Wiener-Khinchine theorem with $T_i = T$ and the linear approximation, Equation (26) reduces to the Nyquist relation in a frequency band $\Delta f$: $4 k_B T R\,\Delta f = \langle v v^*\rangle$. Note that in this case Nyquist's theorem can be re-expressed in terms of the entropy production rate in the waveguide resistors under a steady-state driving current (which cancels out) as $dS/dt \approx k_B\,\Delta f = \langle v^2\rangle/4RT$. In the special case of Johnson noise for an impedance Z instead of a resistor
$$2 k_B T\,\Delta f = \frac{\langle v(\omega)\, v^*(\omega)\rangle}{Z(\omega) + Z^*(\omega)} \qquad (42)$$
However, Equation (26) is completely general, so it can be used as a basis to model nonequilibrium noise problems where the microstate temperatures of the fluctuating quantities ($T_i$) need not be the same as the macroscopic temperature. Because entropy production is the ratio of power dissipated to temperature, it may also prove to be a very useful quantity for studying temperature and other electromagnetic problems.
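To connect the Nyquist relation quoted above to an actual estimate of Boltzmann's constant, the sketch below generates synthetic band-limited Johnson-noise voltage samples for a resistor at a known temperature and then inverts $\langle v^2\rangle = 4 k_B T R\,\Delta f$ to recover $k_B$; the resistance, temperature, bandwidth, and sample count are assumptions made only for the illustration.

```python
import numpy as np

k_B_true = 1.380649e-23
R, T = 1.0e4, 300.0        # 10 kOhm resistor at 300 K (assumed)
bandwidth = 1.0e5          # 100 kHz measurement bandwidth (assumed)
n_samples = 2_000_000

# Band-limited Johnson noise has variance <v^2> = 4 k_B T R * bandwidth.
v_rms = np.sqrt(4 * k_B_true * T * R * bandwidth)
rng = np.random.default_rng(42)
v = rng.normal(0.0, v_rms, n_samples)      # synthetic noise-voltage record

# Invert Nyquist's relation to estimate Boltzmann's constant from the measured variance.
k_B_est = v.var() / (4 * T * R * bandwidth)
print(f"k_B estimate = {k_B_est:.4e} J/K   (true value {k_B_true:.4e} J/K)")
```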

7. Generalized Temperature

Temperature at equilibrium is well defined. Away from equilibrium the concept of temperature is fuzzy [25]. Equation (20) can be used to model temperature, but it requires an explicit representation for the entropy.
A very general model for calculating the temperature of a material subjected to an electric driving field can be expressed by use of Equations (37), (39), (46), and (48). These equations describe the evolution of the generalized temperature in electric driving, even away from equilibrium [18]. If we simplify by assuming delta-function dependencies in space and time we obtain
$$\left(c_{uu} + c_{up}^{(1)}\right)\frac{\partial T(\mathbf{r},t)}{\partial t} + c_{up}^{(2)}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r},t)}{\partial t} = \nabla\cdot\left(\kappa_{uu}\cdot\nabla T\right) + \nabla\cdot\left(\kappa_{u\phi}\cdot\mathbf{E}_p\right) - \nabla\cdot\left(\kappa_{u\phi}\cdot\phi\,\nabla T\right) \qquad (43)$$
$$\left(\chi_{\rho_f u} + \chi_{\rho_f d}\cdot\mathbf{E}_p\right)\frac{\partial T(\mathbf{r},t)}{\partial t} + \chi_{\rho_f d}^{(1)}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r},t)}{\partial t} = -\nabla\cdot\left(\sigma_\phi\cdot\mathbf{E}_p\right) - \nabla\cdot\left(\kappa_{\phi u}\cdot\nabla T\,\phi\right) - \nabla\cdot\left(\kappa_{\phi\phi}\cdot\phi\,\nabla T\right) \qquad (44)$$
These equations can contain nonequilibrium effects and reduce to Fourier’s law in the special case of equilibrium without applied fields. Solving these equations for T and E p will yield a temperature for an electromagnetically driven system.
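In the equilibrium limit without applied fields these coupled equations collapse to Fourier's law, $c\,\partial T/\partial t = \nabla\cdot(\kappa\nabla T)$. A minimal one-dimensional explicit finite-difference sketch of that limiting case is shown below; the material constants, grid, time step, and boundary temperatures are illustrative assumptions.

```python
import numpy as np

kappa = 400.0        # W/(m*K), thermal conductivity (copper-like, assumed)
rho_c = 3.45e6       # J/(m^3*K), volumetric heat capacity (assumed)
L, nx = 0.1, 101     # 10 cm bar with 101 grid points
dx = L / (nx - 1)
alpha = kappa / rho_c
dt = 0.4 * dx**2 / alpha          # below the explicit stability limit dx^2 / (2*alpha)

T = np.full(nx, 300.0)            # initial temperature, K
T[0], T[-1] = 400.0, 300.0        # fixed boundary temperatures

for _ in range(50000):            # march the discretized heat equation forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
print("midpoint temperature:", round(T[nx // 2], 2), "K")
```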

8. Conclusions

The goal of this paper was to use the Robertson statistical-mechanical method to derive heat and charge transport and entropy rate equations for electromagnetic quantities coupled to thermal variables. The equations of motion can be reduced to the familiar Landau-Lifshitz and Bloch equations for magnetic response and to generalized Debye equations for dielectric response. The advantage of the method developed in this paper is that it is exact and it yields transport and evolution equations for relevant variables with the inclusion of the effects of irrelevant variables. We have found that entropy production and fluctuations form a central, fundamental connection between Boltzmann’s constant, temperature, and noise. Equations with cross-correlations between relevant variables have been developed for the charge current, electrical conductivity, heat-capacity density, and entropy evolution. Although the equations are complicated, this is to be expected for any exact solution. We presented applications in the areas of nonequilibrium heat flow, noise, and Boltzmann’s constant, and suggested other measurements in nonequilibrium situations.

Note

This paper is a U.S. Government work and not protected by U.S. copyright.

Appendix

Alternative Expansion of the Evolution of the Relevant Quantities

We can re-express the LHS of Equation (16) for the various relevant variables in terms of time derivatives of the Lagrangian multipliers. This casts the LHS of Equation (16) in terms of measurable quantities such as temperature and field rates and thereby produces generalized heat and polarization equations.
$$\frac{\partial\langle F(\mathbf{r})\rangle}{\partial t} = -\int d\mathbf{r}'\,\left\{\langle F(\mathbf{r})\bar{F}(\mathbf{r}')\rangle - \langle F(\mathbf{r})\rangle\langle\bar{F}(\mathbf{r}')\rangle\right\}*\frac{\partial\lambda(\mathbf{r}',t)}{\partial t} \equiv -\int d\mathbf{r}'\,\Delta[F(\mathbf{r})\bar{F}(\mathbf{r}')]*\frac{\partial\lambda(\mathbf{r}',t)}{\partial t} \qquad (45)$$
Here we have defined $\Delta[ab] \equiv \langle a\bar{b}\rangle - \langle a\rangle\langle\bar{b}\rangle$.
Using this equation, the internal energy density can be re-expressed in terms of time derivatives of the Lagrangian multipliers
$$\frac{\partial U(\mathbf{r},t)}{\partial t} = \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\,\bar{u}(\mathbf{r}')]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad - \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')\cdot\mathbf{E}_p]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t} + \int d\mathbf{r}'\,\frac{\Delta[u(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r}',t)}{\partial t}$$
$$\quad - \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[u(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')\cdot\mathbf{H}_m]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t} + \int d\mathbf{r}'\,\frac{\Delta[u(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r}',t)}{\partial t}$$
$$\equiv \left(c_{uu} + c_{up}^{(1)} + c_{um}^{(1)}\right)\frac{\partial T(\mathbf{r},t)}{\partial t} + c_{up}^{(2)}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r},t)}{\partial t} + c_{um}^{(2)}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r},t)}{\partial t} \qquad (46)$$
where the last line assumes a $\delta(\mathbf{r}-\mathbf{r}')$ dependence in the kernels. The polarization satisfies
$$\frac{\partial\mathbf{P}(\mathbf{r},t)}{\partial t} = \int d\mathbf{r}'\,\frac{\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r}',t)}{\partial t} - \int d\mathbf{r}'\,\frac{\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')\cdot\mathbf{E}_p(\mathbf{r}',t)]}{k_B T^2}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad + \int d\mathbf{r}'\,\frac{\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r}',t)}{\partial t} - \int d\mathbf{r}'\,\frac{\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')\cdot\mathbf{H}_m(\mathbf{r}',t)]}{k_B T^2}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad + \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[\mathbf{p}(\mathbf{r})\,\bar{u}(\mathbf{r}')]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\equiv \left(\chi_{pu} + \chi_{pp}\cdot\mathbf{E}_p + \chi_{pm}\cdot\mathbf{H}_m\right)\frac{\partial T(\mathbf{r},t)}{\partial t} + \chi_{pp}^{(1)}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r},t)}{\partial t} + \chi_{pm}^{(1)}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r},t)}{\partial t} \qquad (47)$$
Similar relations can be written for $\mathbf{D}$, $\mathbf{B}$, and $\mathbf{M}$. If we use Equation (47), but for $\mathbf{D}$ instead of $\mathbf{P}$, take the divergence, and use $\nabla\cdot\mathbf{D} = \rho_f$, then
$$\frac{\partial\langle\rho_f\rangle(\mathbf{r},t)}{\partial t} = \int d\mathbf{r}'\,\frac{\Delta[\rho_f(\mathbf{r})\,\bar{\mathbf{d}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r}',t)}{\partial t} - \int d\mathbf{r}'\,\frac{\Delta[\rho_f(\mathbf{r})\,\bar{\mathbf{d}}(\mathbf{r}')\cdot\mathbf{E}_p(\mathbf{r}',t)]}{k_B T^2}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad + \int d\mathbf{r}'\,\frac{\Delta[\rho_f(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')]}{k_B T}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r}',t)}{\partial t} - \int d\mathbf{r}'\,\frac{\Delta[\rho_f(\mathbf{r})\,\bar{\mathbf{m}}(\mathbf{r}')\cdot\mathbf{H}_m(\mathbf{r}',t)]}{k_B T^2}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad + \int d\mathbf{r}'\,\frac{1}{T}\frac{\Delta[\rho_f(\mathbf{r})\,\bar{u}(\mathbf{r}')]}{k_B T}\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\equiv \left(\chi_{\rho_f u} + \chi_{\rho_f d}\cdot\mathbf{E}_p + \chi_{\rho_f m}\cdot\mathbf{H}_m\right)\frac{\partial T(\mathbf{r},t)}{\partial t} + \chi_{\rho_f d}^{(1)}\cdot\frac{\partial\mathbf{E}_p(\mathbf{r},t)}{\partial t} + \chi_{\rho_f m}^{(1)}\cdot\frac{\partial\mathbf{H}_m(\mathbf{r},t)}{\partial t} \qquad (48)$$
An alternative equation for the entropy evolution or entropy production on the LHS of Equation (26), without magnetic effects, in terms of time derivatives of the Lagrangian multipliers can also be constructed from Equation (45)
$$\frac{dS(t)}{dt} \approx -k_B\,\lambda*\Delta[F\bar{F}]*\frac{\partial\lambda}{\partial t} \approx \int d\mathbf{r}\,\Bigl\{ \frac{\nabla T\cdot\kappa\cdot\nabla T}{T^2}$$
$$\quad + \int d\mathbf{r}'\Bigl( k_B\beta^2\,\Delta[u(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')]\cdot\frac{\partial\mathbf{E}_p(\mathbf{r}',t)}{\partial t} + k_B T\beta^2\,\mathbf{E}_p(\mathbf{r},t)\cdot\Delta[\mathbf{p}(\mathbf{r})\,\bar{u}(\mathbf{r}')]\,\frac{\partial T(\mathbf{r}',t)}{\partial t}$$
$$\quad + k_B\beta^2\,\mathbf{E}_p(\mathbf{r},t)\cdot\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')]\cdot\frac{\partial\mathbf{E}_p(\mathbf{r}',t)}{\partial t} - k_B T\beta^2\,\mathbf{E}_p(\mathbf{r},t)\cdot\Delta[\mathbf{p}(\mathbf{r})\,\bar{\mathbf{p}}(\mathbf{r}')]\cdot\mathbf{E}_p(\mathbf{r}',t)\,\frac{\partial T(\mathbf{r}',t)}{\partial t}\Bigr)\Bigr\} \qquad (49)$$
where we used the heat equation $c\,\partial T/\partial t = \nabla\cdot\kappa\cdot\nabla T$.

References

  1. Zwanzig, R. Ensemble method in the theory of irreversibility. J. Chem. Phys. 1960, 33, 1338–1341. [Google Scholar]
  2. Mori, H. Transport, collective motion, and brownian motion. Prog. Theor. Phys. 1965, 33, 423–455. [Google Scholar] [CrossRef]
  3. Robertson, B. Equations of motion in nonequilibrium statistical mechanics. Phys. Rev. 1966, 144, 151–161. [Google Scholar] [CrossRef]
  4. Grabert, H. Projection-Operator Techniques in Nonequilibrium Statistical Mechanics; Springer-Verlag: Berlin, Germany, 1982. [Google Scholar]
  5. Oppenheim, I.; Levine, R.D. Nonlinear transport processes: Hydrodynamics. Physica 1979, 99A, 383–402. [Google Scholar] [CrossRef]
  6. Yu, M.B. Influence of environment and entropy production of a nonequilibrium open system. Phys. Lett. A 2008, 372, 2572–2577. [Google Scholar] [CrossRef]
  7. Berne, B.J.; Harp, G.D. On the Calculation of Time Correlation Functions. In Advances in Chemical Physics XVII; Prigogine, I., Rice, S.A., Eds.; Wiley: New York, NY, USA, 1970; p. 63. [Google Scholar]
  8. Nettleton, R.E. Validity of RPA for the initial state in the evolution of fluids. J. Phys. A 2005, 38, 3651–3660. [Google Scholar] [CrossRef]
  9. Weiss, U. Quantum Dissipative Systems; World Science Publishing Company: Singapore, 1999. [Google Scholar]
  10. Nettleton, R.E. Perturbation treatment of non-linear transport via the Robertson statistical formalism. Ann. Phys. 1999, 8, 425–436. [Google Scholar] [CrossRef]
  11. Zubarev, D.N.; Morozov, V.; Ropke, G. Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory; Wiley: New York, NY, USA, 1996. [Google Scholar]
  12. Baker-Jarvis, J.; Kabos, P. Dynamic constitutive relations for polarization and magnetization. Phys. Rev. E 2001, 64, 56127. [Google Scholar] [CrossRef]
  13. Baker-Jarvis, J.; Kabos, P.; Holloway, C.L. Nonequilibrium electromagnetics: Local and macroscopic fields using statistical mechanics. Phys. Rev. E 2004, 70, 036615. [Google Scholar] [CrossRef]
  14. Robertson, B. Equations of motion of nuclear magnetism. Phys. Rev. 1967, 153, 391–403. [Google Scholar] [CrossRef]
  15. Robertson, B. Nonequilibrium Statistical Mechanics. In Physics and Probability: Essays in Honor of Edwin T. Jaynes; Grandy, W.T., Milonni, P.W., Eds.; Cambridge University Press: New York, NY, USA, 1993; p. 251. [Google Scholar]
  16. Baker-Jarvis, J. Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation. Phys. Rev. E 2005, 72, 066613. [Google Scholar] [CrossRef]
  17. Robertson, B. Applications of Maximum Entropy to Nonequilibrium Statistical Mechanics. In The Maximum Entropy Formalism; MIT Press: Cambridge, MA, USA, 1978; p. 289. [Google Scholar]
  18. Robertson, B. Equations of motion in nonequilibrium statistical mechanics II, Energy transport. Phys. Rev. 1967, 160, 175–183. [Google Scholar] [CrossRef]
  19. Evans, D.J. Response theory as a free-energy extremum. Phys. Rev. A 1985, 32, 2923–2925. [Google Scholar] [CrossRef] [PubMed]
  20. Baker-Jarvis, J.; Janezic, M.D.; Riddle, B. Dielectric polarization equations and relaxation times. Phys. Rev. E 2007, 75, 056612. [Google Scholar] [CrossRef]
  21. Robertson, B.; Mitchell, W.C. Equations of motion in nonequilibrium statistical mechanics. III: Open systems. J. Math. Phys. 1971, 12, 563–568. [Google Scholar]
  22. Benz, S.; Rogalla, H.; White, D.R.; Qu, J.; Dresselhaus, P.D.; Tew, W.L.; Nam, S.W. Improvements in the NIST Johnson Noise Thermometry System. IEEE Trans. Instrum. Meas. 2008, 36–43. [Google Scholar]
  23. Green, M.S. Comment on a paper of Mori on time correlation expressions for transport. Phys. Rev. 1960, 119, 829–830. [Google Scholar] [CrossRef]
  24. Kubo, R.; Toda, M.; Hashitsume, N. Nonequilibrium Statistical Mechanics; Springer: Berlin, Germany, 1985; Volume 2. [Google Scholar]
  25. Casas-Vazquez, J.; Jou, D. Temperature in nonequilibrium states: a review of open problems and current proposals. Rep. Prog. Phys. 2003, 66, 1937–2023. [Google Scholar] [CrossRef]
