Editorial

An Informational Theoretical Approach to the Entropy of Liquids and Solutions

Department of Physical Chemistry, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem 9190401, Israel
Entropy 2018, 20(7), 514; https://doi.org/10.3390/e20070514
Submission received: 30 May 2018 / Accepted: 27 June 2018 / Published: 9 July 2018

Abstract

It is well known that the statistical mechanical theory of liquids has been lagging far behind the theory of either gases or solids; see, for example, Ben-Naim (2006), Fisher (1964), Guggenheim (1952), Hansen and McDonald (1976), Hill (1956), Temperley, Rowlinson and Rushbrooke (1968), and O'Connell (1971). Information theory was recently used to derive and interpret the entropy of an ideal gas of simple particles (i.e., non-interacting and structure-less particles). Starting with Shannon's measure of information (SMI), one can derive the entropy function of an ideal gas, the same function as derived by Sackur (1911) and Tetrode (1912). The new derivation of the same entropy function, based on the SMI, has several advantages, as listed in Ben-Naim (2008, 2017). Here we mention two: First, it provides a simple interpretation of the various terms in this entropy function. Second, and more important for our purpose, this derivation may be extended to any system of interacting particles, including liquids and solutions. The main idea is that once one adds intermolecular interactions between the particles, one also adds correlations between the particles. These correlations may be cast in terms of mutual information (MI). Hence, we can start with the informational theoretical interpretation of the entropy of an ideal gas, and then add a correction due to correlations, in the form of MI between the locations of the particles. This process preserves the interpretation of the entropy of liquids and solutions as a measure of information (or as an average uncertainty about the locations of the particles). It is well known that the entropy of a liquid, any liquid for that matter, is lower than the entropy of the gas. Traditionally, this fact is interpreted in terms of order-disorder: the lower entropy of the liquid is interpreted in terms of a higher degree of order compared with that of the gas. However, unlike the transition from a solid to either a liquid or a gaseous phase, where the order-disorder interpretation works well, the same interpretation does not work for the liquid-gas transition. It is hard, if not impossible, to argue that the liquid phase is more "ordered" than the gaseous phase. In this article, we interpret the lower entropy of liquids in terms of the SMI. One outstanding liquid known to be a structured liquid is water; see Ben-Naim (2009, 2011). In addition, heavy water, as well as aqueous solutions of simple solutes such as argon or methane, will be discussed in this article.

1. Introduction

It is well known that the entropy of liquids is lower than the entropy of the corresponding vapor of the same substance at a given temperature and pressure [1,2,3,4,5,6,7,8,9,10]. This fact is manifested in the slope of the liquid (l)–gas (g) coexistence curve in the phase diagram. The slope of the P(T) curve is given by the Clapeyron equation:
\[ \left(\frac{dP}{dT}\right)_{eq} = \frac{\Delta S_v}{\Delta V_v} > 0 \tag{1} \]
where $\Delta S_v$ is the entropy of vaporization, $\Delta V_v = V(g) - V(l)$ is the volume of vaporization, and the derivative is taken along the liquid-gas coexistence curve in the phase diagram, Figure 1.
Since $\Delta V_v$ is always very large and positive, the positive slope of the liquid-gas equilibrium curve means that $\Delta S_v$ is always positive. It should be noted that Equation (1) is an exact equation. An approximate equation, known as the Clausius-Clapeyron equation, may be obtained by assuming that $V_g \gg V_l$, and that the vapor is an ideal gas, i.e., $V_g = RT/P$, where R is the gas constant and P the pressure. Using this approximation, one gets:
\[ \left(\frac{dP}{dT}\right)_{eq} = \frac{P\,\Delta S_v}{R T} = \frac{P\,\Delta H_v}{R T^2} \tag{2} \]
where we used the equality $T\,\Delta S_v = \Delta H_v$.
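As a quick numerical illustration of Equation (2), the following minimal Python sketch estimates the slope of the vapor-pressure curve of water near its normal boiling point; the enthalpy of vaporization used is an approximate handbook value quoted purely for illustration, not a number taken from this article.

```python
# Slope of the liquid-gas coexistence curve of water near 373 K, from Equation (2).
# Delta H_v ~ 40.7 kJ/mol is an approximate handbook value used only for illustration.
R = 8.314          # gas constant, J mol^-1 K^-1
T = 373.15         # normal boiling point of water, K
P = 101325.0       # pressure, Pa
dH_v = 40.7e3      # enthalpy of vaporization, J mol^-1 (approximate)

dP_dT = P * dH_v / (R * T**2)   # Equation (2), using T * dS_v = dH_v
dS_v = dH_v / T                 # entropy of vaporization, J mol^-1 K^-1

print(round(dP_dT), round(dS_v, 1))   # ~3562 Pa/K and ~109.1 J/(mol K)
```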
Another well-known approximate empirical law is Trouton's rule. It states that the entropy of vaporization of many liquids at a pressure of one atmosphere is almost constant:
\[ \Delta S_v \approx 85\text{--}87\ \mathrm{J\ mol^{-1}\ K^{-1}} \tag{3} \]
Table 1 shows a few values of the entropy of vaporization of some liquids. Note that the values of $\Delta S_v$ for water, ethanol, and methanol are much larger than the values for the other liquids.
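The entries in Table 1 follow from $\Delta S_v = \Delta H_v / T_b$ at the normal boiling point $T_b$. The short sketch below reproduces a few of them from approximate handbook values of $\Delta H_v$ and $T_b$ (illustrative numbers, not taken from this article), and makes the deviation of the hydrogen-bonded liquids from Trouton's rule explicit.

```python
# Trouton's rule check, Equation (3): Delta S_v = Delta H_v / T_b at the normal
# boiling point. The (Delta H_v, T_b) pairs are approximate handbook values.
liquids = {
    "benzene":  (30_720.0, 353.2),   # J/mol, K
    "methane":  (8_190.0, 111.7),
    "methanol": (35_200.0, 337.8),
    "water":    (40_660.0, 373.2),
}

for name, (dH_v, Tb) in liquids.items():
    dS_v = dH_v / Tb
    note = "  <- hydrogen-bonded, well above ~85-87" if dS_v > 95.0 else ""
    print(f"{name:9s} {dS_v:6.1f} J/(mol K){note}")
```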
Traditionally, the positive values of the entropy of vaporization and the entropy of sublimation are interpreted in terms of order-disorder. Following the erroneous, but very common, interpretation of entropy as a measure of disorder, one concludes that the liquid state is more "ordered" than the gaseous phase.
Unfortunately, unlike the case of sublimation (the transition from solid to gas), where the order-disorder argument seems to work, in the case of vaporization, the order-disorder argument fails. It is hard, if not impossible, to argue that the liquid phase is more ordered than the gaseous phase.
In this article, we propose a novel interpretation of the positive entropy of vaporization in terms of mutual information, as per Shannon [11] and Ben-Naim [12]. It will be shown that, in general, in the transition from a state in which the particles do not interact to a state in which intermolecular interactions are operative, the Shannon measure of information (SMI) always decreases. This argument also carries over from the SMI to the entropy, the latter being viewed as a special case of the former.
In the next section, we shall briefly outline the derivation of the entropy from the SMI. We shall then show in Section 3 that whenever we "turn on" the interactions among the particles, the entropy of the system will always decrease. The special case of a non-ideal gas will be discussed in Section 4.

2. Entropy of an Ideal Gas of Simple Particles

In this section, we outline the procedure by which we obtain the entropy of an ideal gas from the SMI. We discuss here simple particles, i.e., particles that have no internal degrees of freedom. This means that the particles are spherical and that the full specification of the (classical) microstate of a system of N particles requires 3N location coordinates and 3N velocities (or momenta).
The procedure of obtaining the entropy of an ideal gas from SMI consists of essentially four steps:
  • Calculation of the locational SMI at equilibrium
  • Calculation of the momentum SMI at equilibrium
  • Adding a correction due to the uncertainty principle
  • Adding a correction due to the indistinguishability of the particles
The resulting SMI of both locations and momenta at equilibrium leads to the entropy function $S(E, V, N)$, or $S(T, V, N)$, which was originally derived from the Boltzmann entropy by Sackur [13] and Tetrode [14].
We shall be very brief here; a more detailed derivation is available in Ben-Naim [12,15].

2.1. The Locational SMI of a Particle in a 1D Box of Length L

Suppose we have a particle confined to a one-dimensional (1D) “box” of length L. We can define the continuous SMI by:
\[ H(X) = -\int f(x)\,\log f(x)\, dx \tag{4} \]
where $f(x)dx$ is the probability of finding the particle between $x$ and $x + dx$. Next, we calculate the density distribution which maximizes the locational SMI $H(X)$ in (4). The result is:
\[ f_{eq}(x) = \frac{1}{L} \tag{5} \]
We identify the distribution that maximizes the SMI as the equilibrium (eq) distribution. Substituting (5) in (4), we obtain:
\[ H(\text{locations in 1D}) = \log L \tag{6} \]
We now acknowledge that the location X of the particle cannot be determined with absolute accuracy, i.e., there exists a small interval $h_x$ within which we do not care where the particle is. Therefore, we correct Equation (6) by subtracting $\log h_x$. Thus, instead of (6), we write:
\[ H(X) = \log L - \log h_x \tag{7} \]
In Equation (7) we have effectively defined $H(X)$ for a finite number of intervals $n = L/h_x$; see Figure 2. Note that as $h_x \to 0$, $H(X)$ diverges to infinity. Here, we do not take the mathematical limit, but stop at an $h_x$ small enough, yet not zero. Note also that in writing Equation (7) we do not have to specify the units of length, as long as we use the same units for $L$ and $h_x$.
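A minimal numerical sketch of Equations (5)–(7): discretizing the box into $n = L/h_x$ equal intervals and computing the SMI of the uniform (equilibrium) distribution reproduces $\log L - \log h_x$ exactly. The values of $L$ and $h_x$ below are arbitrary illustrative choices, in the same length units.

```python
import numpy as np

# SMI of a particle uniformly distributed in a 1D box of length L, resolved to h_x.
# L and h_x are arbitrary illustrative values (same units for both).
L, h_x = 10.0, 0.01
n = int(L / h_x)                      # number of distinguishable intervals, cf. Figure 2
p = np.full(n, 1.0 / n)               # equilibrium (uniform) distribution, Equation (5)

H_discrete = -np.sum(p * np.log(p))   # Shannon measure of information (natural log)
H_formula = np.log(L) - np.log(h_x)   # Equation (7)

print(H_discrete, H_formula)          # both ~6.9078; the value grows as h_x -> 0
```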

2.2. The Velocity SMI of a Particle in a 1D “Box” of Length L

Next, we calculate the probability distribution that maximizes the continuous SMI, subject to two conditions:
\[ \int f(x)\, dx = 1 \tag{8} \]
\[ \int x^2 f(x)\, dx = \sigma^2 = \mathrm{constant} \tag{9} \]
The result is the normal distribution:
\[ f_{eq}(x) = \frac{\exp\left[-x^2/2\sigma^2\right]}{\sqrt{2\pi\sigma^2}} \tag{10} \]
Again, we use the subscript eq for equilibrium. Applying this result to a classical particle having average kinetic energy $m\langle v_x^2\rangle/2$, and using the relationship between the variance $\sigma^2$ and the temperature of the system:
\[ \sigma^2 = \frac{k_B T}{m} \tag{11} \]
we get the equilibrium velocity distribution of one particle in a 1D system:
\[ f_{eq}(v_x) = \sqrt{\frac{m}{2\pi k_B T}}\;\exp\left[\frac{-m v_x^2}{2 k_B T}\right] \tag{12} \]
Here, $k_B$ is the Boltzmann constant, m is the mass of the particle, and T the absolute temperature. The value of the continuous SMI for this probability density is:
\[ H_{max}(\text{velocity in 1D}) = \frac{1}{2}\log\left(2\pi e k_B T/m\right) \tag{13} \]
Similarly, we can write the momentum distribution in 1D, by transforming from $v_x \to p_x = m v_x$, to get:
\[ f_{eq}(p_x) = \frac{1}{\sqrt{2\pi m k_B T}}\;\exp\left[\frac{-p_x^2}{2 m k_B T}\right] \tag{14} \]
and the corresponding maximum SMI:
\[ H_{max}(\text{momentum in 1D}) = \frac{1}{2}\log\left(2\pi e m k_B T\right) \tag{15} \]
Again, recognizing the fact that there is a limit to the accuracy within which we can determine the velocity (or the momentum) of the particle, we correct the expression in (15) by subtracting $\log h_p$, where $h_p$ is a small, but finite, interval:
\[ H_{max}(\text{momentum in 1D}) = \frac{1}{2}\log\left(2\pi e m k_B T\right) - \log h_p \tag{16} \]
Note again that if we choose the units of $h_p$ (momentum, i.e., mass × length/time) to be the same as those of $\sqrt{m k_B T}$, then the whole expression under the logarithm will be a pure number.
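The maximum-SMI property behind Equations (10)–(15) can be checked numerically: among densities with a fixed variance, the normal distribution attains the value $\tfrac{1}{2}\log(2\pi e \sigma^2)$, and any other density with the same variance (a uniform one, in the sketch below) gives a smaller SMI. The particular value of $\sigma^2$ is an arbitrary illustration.

```python
import numpy as np

# Continuous SMI of a Gaussian with variance sigma2, compared with a uniform density
# of the same variance. sigma2 is an arbitrary illustrative value.
sigma2 = 2.0

H_gauss_formula = 0.5 * np.log(2 * np.pi * np.e * sigma2)      # value in Equation (13)/(15)

x = np.linspace(-20.0, 20.0, 200_001)                           # quadrature grid
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)  # Equation (10)
H_gauss_numeric = -np.sum(f * np.log(f)) * dx

a = np.sqrt(3 * sigma2)                        # uniform on [-a, a] has variance a^2/3
H_uniform = np.log(2 * a)

print(H_gauss_formula, H_gauss_numeric)        # both ~1.766
print(H_uniform)                               # ~1.589 < 1.766, as expected
```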

2.3. Combining the SMI for the Location and Momentum of a Particle in a 1D System

We now combine the two results. Assuming that the location and the momentum (or velocity) of the particle are independent events, we write:
\[ H_{max}(\text{location and momentum}) = H_{max}(\text{location}) + H_{max}(\text{momentum}) = \log\left[\frac{L\sqrt{2\pi e m k_B T}}{h_x\,h_p}\right] \tag{17} \]
Recall that $h_x$ and $h_p$ were chosen to eliminate the divergence of the SMI when the location and momentum were treated as continuous random variables.
In writing (17), we assume that the location and the momentum of the particle are independent. However, quantum mechanics imposes a restriction on the accuracy with which we can determine both the location x and the corresponding momentum $p_x$. In Equations (7) and (16), $h_x$ and $h_p$ were introduced because we did not care to determine the location and the momentum with an accuracy better than $h_x$ and $h_p$, respectively. Now, we must acknowledge that nature imposes upon us a limit on the accuracy with which we can determine simultaneously the location and the corresponding momentum. Thus, in Equation (17), $h_x$ and $h_p$ cannot both be arbitrarily small; their product must be of the order of the Planck constant $h = 6.626 \times 10^{-34}\ \mathrm{J\ s}$. Thus, we set:
\[ h_x\,h_p \approx h \tag{18} \]
And instead of (17), we write:
\[ H_{max}(\text{location and momentum}) = \log\left[\frac{L\sqrt{2\pi e m k_B T}}{h}\right] \tag{19} \]
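As a numerical illustration of Equation (19), the sketch below evaluates the maximum SMI for an illustrative choice of particle mass, box length, and temperature (roughly an argon atom in a 1 cm box at room temperature); these parameters are assumptions made only for this example.

```python
import numpy as np

# Maximum SMI of location and momentum of one particle in a 1D box, Equation (19),
# with the quantum limit h_x * h_p ~ h. Mass, box length, and temperature are
# illustrative choices (roughly an argon atom in a 1 cm box at 300 K).
kB = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607e-34     # Planck constant, J s
m = 6.63e-26        # particle mass, kg (roughly argon)
T = 300.0           # temperature, K
L = 1.0e-2          # box length, m

H_max = np.log(L * np.sqrt(2 * np.pi * np.e * m * kB * T) / h)
print(H_max)        # ~20.8; the argument of the logarithm is dimensionless
```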

2.4. The SMI of a Particle in a Box of Volume V

We consider again one simple particle in a cubic box of volume V. We assume that the locations of the particle along the three axes x, y, and z are independent. Therefore, we can write the SMI of the location of the particle in a cube of edge L and volume V as:
\[ H(\text{location in 3D}) = 3 H_{max}(\text{location in 1D}) \tag{20} \]
Similarly, for the momentum of the particle, we assume that the momenta (or velocities) along the three axes x, y, and z are independent. Hence, we write:
\[ H_{max}(\text{momentum in 3D}) = 3 H_{max}(\text{momentum in 1D}) \tag{21} \]
We combine the SMI of the locations and momenta of one particle in a box of volume V, taking into account the uncertainty principle. The result is:
\[ H_{max}(\text{location and momentum in 3D}) = 3\log\left[\frac{L\sqrt{2\pi e m k_B T}}{h}\right] \tag{22} \]

2.5. The SMI of Locations and Momenta of N Independent Particles in a Box of Volume V

The next step is to proceed from one particle in a box to N independent particles in a box of volume V. Given the location $(x, y, z)$ and the momentum $(p_x, p_y, p_z)$ of one particle within the box, we say that we know the microstate of the particle. If there are N particles in the box, and if their microstates are independent, we can write the SMI of N such particles simply as N times the SMI of one particle, i.e.,
\[ \mathrm{SMI}(N\ \text{independent particles}) = N \times \mathrm{SMI}(\text{one particle}) \tag{23} \]
This equation would have been correct if the microstates of all the particles were independent. In reality, there are always correlations between the microstates of all the particles; one is due to the indistinguishability of the particles, the second is due to intermolecular interactions among all the particles. We shall discuss these two sources of correlation separately. In this section, we introduce the correlation due to indistinguishability. In the next section, we introduce the correlation due to intermolecular interactions.
Recall that the microstate of a single particle includes the location and the momentum of that particle. Let us focus on the location of one particle in a box of volume V. We have written the locational SMI as:
\[ H_{max}(\text{location}) = \log V \tag{24} \]
If we have N particles and the locations of all the particles are independent, we can write:
\[ H_{max}(\text{locations of}\ N\ \text{particles}) = \sum_{i=1}^{N} H_{max}(\text{one particle}) \tag{25} \]
However, when the particles are indistinguishable, Equation (25) must be corrected.
We can define the mutual information corresponding to the correlation between the particles as:
\[ I(1, 2, \ldots, N) = \ln N! \tag{26} \]
Thus, instead of (25), for the SMI of N indistinguishable particles we have:
\[ H(N\ \text{particles}) = \sum_{i=1}^{N} H(\text{one particle}) - \ln N! \tag{27} \]
Using the SMI for the location and momentum of one particle in (22), we can write the final result for the SMI of N indistinguishable (but non-interacting) particles as:
\[ H(N\ \text{indistinguishable particles}) = N\log\left[V\left(\frac{2\pi m e k_B T}{h^2}\right)^{3/2}\right] - \log N! \tag{28} \]
Using the Stirling approximation for $\log N!$ (note again that we use the natural logarithm) in the form:
\[ \log N! \approx N\log N - N \tag{29} \]
We have the final result for the SMI of N indistinguishable particles in a box of volume V, and temperature T:
  H ( 1 , 2 , N ) = N log   [ V N ( 2 π m k B T h 2 ) 3 2 ] + 5 2 N
By multiplying the SMI of N particles in a box of volume V, at temperature T, by a constant factor ($k_B$ if we use the natural logarithm, or $k_B \ln 2$ if the logarithm is taken to base 2), one gets the thermodynamic entropy of an ideal gas of simple particles. This expression was originally derived by Sackur [13] and by Tetrode [14], in 1911 and 1912, using the Boltzmann definition of entropy.
One can convert this expression into the entropy function $S(E, V, N)$ by using the relationship between the total energy of the system and the total kinetic energy of all the particles:
\[ E = N\,\frac{m\langle v^2\rangle}{2} = \frac{3}{2}N k_B T \tag{31} \]
The explicit entropy function of an ideal gas is:
\[ S(E, V, N) = N k_B \ln\left[\frac{V}{N}\left(\frac{E}{N}\right)^{3/2}\right] + \frac{3}{2}k_B N\left[\frac{5}{3} + \ln\left(\frac{4\pi m}{3 h^2}\right)\right] \tag{32} \]
We can use this equation as a definition of the entropy of a system characterized by constant energy, volume, and number of particles. Note that when we combine all the terms under the logarithm sign, we must get a dimensionless quantity.
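A standard check on Equation (32) (equivalently, Equation (30) multiplied by $k_B$) is the molar entropy of argon at 298.15 K and 1 atm, treated as an ideal gas of simple particles; the sketch below gives about 155 J mol⁻¹ K⁻¹, close to the tabulated experimental value.

```python
import numpy as np

# Sackur-Tetrode molar entropy of argon at 298.15 K and 1 atm (Equation (30) times k_B).
kB = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
NA = 6.02214076e23   # Avogadro number, 1/mol
R = kB * NA          # gas constant, J/(mol K)

m = 39.95e-3 / NA    # mass of one argon atom, kg
T = 298.15           # K
P = 101325.0         # Pa
V_per_N = kB * T / P # V/N for an ideal gas, m^3 per particle

S_molar = R * (np.log(V_per_N * (2 * np.pi * m * kB * T / h**2) ** 1.5) + 2.5)
print(S_molar)       # ~154.8 J/(mol K)
```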

3. The Entropy of a System of Interacting Particles

In this section, we show that whenever we "turn on" the intermolecular interactions at constant T, V, N, the entropy of the system decreases. We will show that whenever there are interactions, there are also correlations, and that these correlations may be cast in the form of mutual information.
We start with the classical canonical partition function (PF) of a system characterized by the variables T, V, N:
\[ Q(T, V, N) = \frac{Z_N}{N!\,\Lambda^{3N}} \tag{33} \]
where $\Lambda^3$ is the momentum partition function, and $Z_N$ is the configurational PF of the system:
\[ Z_N = \int d\mathbf{R}^N \exp\left[-\beta U_N(\mathbf{R}^N)\right] \tag{34} \]
The probability density for finding the particles at a specific configuration $\mathbf{R}^N = \mathbf{R}_1, \ldots, \mathbf{R}_N$ is:
\[ P(\mathbf{R}^N) = \frac{\exp\left[-\beta U_N(\mathbf{R}^N)\right]}{Z_N} \tag{35} \]
where $\beta = 1/k_B T$, $k_B$ is the Boltzmann constant, and T is the absolute temperature. In the following, we put $k_B = 1$ to facilitate the connection between the entropy change and the change in the SMI.
When there are no intermolecular interactions (ideal gas), the configurational PF is $Z_N = V^N$, and the corresponding partition function reduces to:
\[ Q^{ig}(T, V, N) = \frac{V^N}{N!\,\Lambda^{3N}} \tag{36} \]
We define the change in the Helmholtz energy due to the interactions as:
\[ \Delta A = A - A^{ig} = -T\ln\frac{Q(T, V, N)}{Q^{ig}(T, V, N)} = -T\ln\frac{Z_N}{V^N} \tag{37} \]
This change in Helmholtz energy corresponds to the process of "turning on" the interactions among all the particles at constant $(T, V, N)$.
The corresponding change in the entropy is:
\[ \Delta S = -\frac{\partial \Delta A}{\partial T} = \ln\frac{Z_N}{V^N} + T\,\frac{1}{Z_N}\frac{\partial Z_N}{\partial T} = \ln Z_N - N\ln V + \frac{1}{T}\int d\mathbf{R}^N\, P(\mathbf{R}^N)\, U_N(\mathbf{R}^N) \tag{38} \]
We now substitute $U_N(\mathbf{R}^N)$ from (35) into (38) to obtain:
\[ \Delta S = -N\ln V - \int P(\mathbf{R}^N)\,\ln P(\mathbf{R}^N)\, d\mathbf{R}^N \tag{39} \]
Note that the second term on the rhs of (39) has the form of an SMI. We can also write the first term on the rhs of (39) as the SMI of an ideal gas.
For an ideal gas, $U_N(\mathbf{R}^N) = 0$, and:
\[ P^{ig}(\mathbf{R}^N) = (1/V)^N = P(\mathbf{R}_1)\,P(\mathbf{R}_2)\cdots P(\mathbf{R}_N) \tag{40} \]
Hence:
\[ \Delta S = \ln P^{ig}(\mathbf{R}^N) - \int P(\mathbf{R}^N)\,\ln P(\mathbf{R}^N)\, d\mathbf{R}^N \tag{41} \]
Since $P^{ig}(\mathbf{R}^N) = (1/V)^N$ is independent of $\mathbf{R}^N$, we can rewrite $\ln P^{ig}(\mathbf{R}^N)$ as:
\[ \ln P^{ig}(\mathbf{R}^N) = \int P(\mathbf{R}^N)\,\ln P^{ig}(\mathbf{R}^N)\, d\mathbf{R}^N \tag{42} \]
Hence, Equation (41) is rewritten as:
Δ S = P ( R N ) ln [ P ( R N ) P i g ( R N ) ] d R N = P ( R N ) ln [ P ( R N ) i = 1 N P ( R 1 ) ] d R N
The last expression on the rhs of (43) has the form of mutual information. We define the correlation function among the N particles as:
\[ g(1, 2, \ldots, N) = \frac{P(\mathbf{R}^N)}{\prod_{i=1}^{N} P(\mathbf{R}_i)} \tag{44} \]
Using (44) in (43), we get the final form of the entropy change:
\[ \Delta S = -\int P(\mathbf{R}^N)\,\ln g(\mathbf{R}^N)\, d\mathbf{R}^N = -I(1, 2, \ldots, N) \tag{45} \]
Thus, except for the Boltzmann constant and the change in the base of the logarithm, $\Delta S$ is equal to the negative of the mutual information $I(1, 2, \ldots, N)$ between the locations of the N particles. Since the mutual information is always positive (see Ben-Naim [12]), the change in entropy in (45) is always negative. We can conclude that, no matter what the interactions are, whenever we "turn off" the interactions, the entropy of the system will always increase.
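A discrete toy model makes Equation (45) concrete: two particles on a ring of lattice sites with a short-range attraction, with the Boltzmann configurational probabilities playing the role of $P(\mathbf{R}^N)$. The lattice, the pair energy, and $\beta$ below are invented solely for this sketch (indistinguishability is ignored, since it cancels in the difference); the entropy change on "turning on" the interaction equals minus the mutual information, and is negative.

```python
import numpy as np

# Two particles on a ring of M sites with a nearest-neighbor attraction -eps.
# Delta S = S(interacting) - S(ideal) is computed directly and compared with -I(1,2).
M, beta, eps = 20, 1.0, 1.5                      # illustrative toy parameters

i, j = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
sep = np.minimum((i - j) % M, (j - i) % M)       # separation on the ring
U = np.where(sep == 1, -eps, 0.0)                # pair energy U(R_1, R_2)

P = np.exp(-beta * U)
P /= P.sum()                                     # Boltzmann probabilities, cf. Equation (35)
P_ig = np.full((M, M), 1.0 / M**2)               # non-interacting (uniform) reference

S_int = -np.sum(P * np.log(P))                   # SMI with interactions "turned on"
S_ig = -np.sum(P_ig * np.log(P_ig))              # SMI of the ideal reference
I_12 = np.sum(P * np.log(P / P_ig))              # mutual information (marginals are uniform here)

print(S_int - S_ig, -I_12)                       # equal, and negative, as in Equation (45)
```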

4. Entropy of Non-Ideal Gas

We next derive a particular case of Equation (45), where we know an explicit expression for the correlation functions.
In the limit of very low density, where only pair interactions are operative and simultaneous interactions among more than two particles are rare enough to be neglected, we can obtain a more useful expression for $\Delta S$. We write the configurational PF as:
\[ Z_N = \int d\mathbf{R}^N \prod_{i<j} \exp\left[-\beta U_{ij}\right] \tag{46} \]
where $U_{ij}$ is the pair potential between particles i and j.
We define the so-called Mayer f-function by:
\[ f_{ij} = \exp\left(-\beta U_{ij}\right) - 1 \tag{47} \]
and rewrite $Z_N$ as:
\[ Z_N = \int d\mathbf{R}^N \prod_{i<j}\left(f_{ij} + 1\right) = \int d\mathbf{R}^N \left[1 + \sum_{i<j} f_{ij} + \sum f_{ij}\,f_{jk} + \cdots\right] \tag{48} \]
Neglecting all terms beyond the first sum, we obtain:
\[ Z_N = V^N + \frac{N(N-1)}{2}\int f_{12}\, d\mathbf{R}^N = V^N + \frac{N(N-1)}{2}\,V^{N-2} \int f_{12}\, d\mathbf{R}_1\, d\mathbf{R}_2 \tag{49} \]
We now identify the second virial coefficient as:
\[ B_2(T) = -\frac{1}{2V}\int f_{12}\, d\mathbf{R}_1\, d\mathbf{R}_2 \tag{50} \]
and rewrite $Z_N$ as:
\[ Z_N = V^N - N(N-1)\,V^{N-1} B_2(T) = V^N\left[1 - \frac{N(N-1)}{V} B_2(T)\right] \tag{51} \]
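Equation (50) is easy to evaluate numerically for a concrete pair potential. The sketch below uses the Lennard-Jones potential in reduced units ($\sigma = \varepsilon = 1$) purely as an illustration (it is not a model singled out in the text): $B_2$ is negative at low temperature, where the attractive part of $f_{12}$ dominates, and turns positive at high temperature.

```python
import numpy as np

# Second virial coefficient from the Mayer f-function, Equation (50):
#   B_2(T) = -2*pi * Integral[ (exp(-u(r)/kT) - 1) * r^2 dr ]
# Lennard-Jones in reduced units (sigma = epsilon = k_B = 1) is an illustrative choice.
def B2_reduced(T_star, r_max=10.0, n=200_000):
    r = np.linspace(1e-4, r_max, n)
    u = 4.0 * (r**-12 - r**-6)           # pair potential u(r)
    f = np.exp(-u / T_star) - 1.0        # Mayer f-function, Equation (47)
    return -2.0 * np.pi * np.sum(f * r**2) * (r[1] - r[0])

for T_star in (1.0, 2.0, 5.0):
    print(T_star, B2_reduced(T_star))    # negative, negative, positive:
                                         # attraction dominates at low T, repulsion at high T
```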
The corresponding Helmholtz energy change is:
\[ \Delta A = A - A^{ig} = -T\ln\frac{Z_N}{V^N} = -T\ln\left[1 + \frac{N(N-1)}{2V^2}\int f_{12}(\mathbf{R}_1, \mathbf{R}_2)\, d\mathbf{R}_1\, d\mathbf{R}_2\right] \tag{52} \]
Since we have invoked the low-density limit, we can rewrite (52) as:
\[ \Delta A \approx -T\,\frac{N(N-1)}{2V^2}\int f_{12}(\mathbf{R}_1, \mathbf{R}_2)\, d\mathbf{R}_1\, d\mathbf{R}_2 \tag{53} \]
Note that $N(N-1)/V^2 \approx \rho^2$; hence, we can use the approximation $\ln(1 + \rho^2 B) \approx \rho^2 B$, where $\rho^2 B$ denotes the second term under the logarithm in (52).
In this limit, the entropy change for the process of “turning on” the interactions is:
\[ \Delta S \approx \frac{N(N-1)}{2V^2}\left[\int f_{12}(\mathbf{R}_1, \mathbf{R}_2)\, d\mathbf{R}_1\, d\mathbf{R}_2 + T\int \frac{\partial f_{12}}{\partial T}\, d\mathbf{R}_1\, d\mathbf{R}_2\right] \tag{54} \]
We now use the following limiting behavior of the pair distribution and the pair correlation function:
\[ P(\mathbf{R}_1, \mathbf{R}_2) = \frac{g(\mathbf{R}_1, \mathbf{R}_2)}{V^2} \tag{55} \]
\[ g(\mathbf{R}_1, \mathbf{R}_2) = \exp\left[-\beta U_{12}\right] \tag{56} \]
We can rewrite (54) as:
\[ \Delta S = -\frac{N(N-1)}{2}\int P(\mathbf{R}_1, \mathbf{R}_2)\,\ln g(\mathbf{R}_1, \mathbf{R}_2)\, d\mathbf{R}_1\, d\mathbf{R}_2 \tag{57} \]
Thus, except for the base of the logarithm, the integral in Equation (57) is the mutual information for a pair of particles. In a system of N particles, there are altogether $N(N-1)/2$ pairs of particles. Therefore, the entropy change, up to a constant, is determined by the mutual information between all the pairs of particles in the system, i.e.,
\[ \Delta S = -\frac{N(N-1)}{2}\,I(\mathbf{R}_1; \mathbf{R}_2) \tag{58} \]
Since the mutual information is always positive, $\Delta S$ will be negative.
Thus, we see again that “turning on” the interaction will reduce the entropy of the system.
It should be noted that the correlation can be either positive (i.e., $\ln g > 0$) or negative (i.e., $\ln g < 0$), but the average of $\ln g$ in (57) must always be positive. For more details, see Ben-Naim [15].
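This sign statement can be checked numerically for a concrete pair potential. In the low-density limit, the average of $\ln g = -\beta U_{12}$ weighted by $P(\mathbf{R}_1, \mathbf{R}_2) \propto g$ comes out positive even though $\ln g$ is negative at short range; the Lennard-Jones potential and the finite cutoff used below are illustrative assumptions, not choices made in the text.

```python
import numpy as np

# Average of ln g over P(R_1, R_2) ~ g/V^2 for a Lennard-Jones pair in reduced units.
# ln g < 0 in the repulsive core and > 0 in the attractive well, yet the weighted
# average is positive, so Delta S in Equation (58) is negative. Cutoff r_max is arbitrary.
T_star, r_max, n = 1.5, 3.0, 100_000
r = np.linspace(1e-3, r_max, n)

u = 4.0 * (r**-12 - r**-6)             # pair potential
ln_g = -u / T_star                     # Equation (56): ln g = -beta * U_12
g = np.exp(ln_g)
w = g * 4.0 * np.pi * r**2             # relative weight of separation r under P(R_1, R_2)

avg_ln_g = np.sum(w * ln_g) / np.sum(w)
print(avg_ln_g)                        # > 0, hence -I(R_1; R_2) < 0
```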

5. Discussion and Conclusions

We have seen that experimentally the entropy change of vaporization is always positive. Traditionally, this fact is interpreted in terms of relative disorder of the gaseous phase compared with the liquid phase. However, using the informational interpretation of entropy, we can interpret the change in entropy as consisting of two steps.
We define the entropy of vaporization as the difference in entropy for transferring one mole of the substance from the liquid state to the gaseous phase at equilibrium. Thus, we write:
\[ \Delta S_v = S(T, V_g, N) - S(T, V_l, N) \tag{59} \]
where N is the Avogadro number, T is the temperature of the two phases, and $V_g$ and $V_l$ are the molar volumes of the substance in the two phases, with $V_g \gg V_l$.
Our interpretation of the entropy of vaporization consists of two steps, shown schematically in Figure 3. We start with one mole at $(T, V_l, N)$ with entropy value $S_l = S(T, V_l, N)$. We first "turn off" the interactions at constant T and V. The resulting change in entropy is:
\[ \Delta S(\text{turning off the interactions}) = S^{ig}(T, V_l, N) - S_l(T, V_l, N) \geq 0 \tag{60} \]
This change in entropy is due to the correlations among the particles. The second step is to expand the mole of ideal gas from the volume $V_l$ to the volume $V_g$, i.e.,
\[ \Delta S(\text{expansion}) = S^{ig}(T, V_g, N) - S^{ig}(T, V_l, N) \geq 0 \tag{61} \]
Thus, the entropy of vaporization consists of two contributions: one due to the mutual information, and the second due to the expansion of the ideal gas from $V_l$ to $V_g$.
Both of these contributions are positive, and both are interpreted in terms of increasing the value of the locational SMI (the momentum SMI is constant for this process). We note here that water has a larger entropy of vaporization compared with other "normal" liquids. Again, this fact is traditionally interpreted as being due to the structure of water. Our interpretation of the entropy of vaporization of water is based on the strong intermolecular interactions among the water molecules (hydrogen bonds), which lead to stronger correlations among the water molecules. This interpretation applies to any hydrogen-bonded liquid, such as methanol and ethanol, as can be seen from Table 1. For these liquids, one cannot invoke the "structure" to explain the large entropy of vaporization. The informational interpretation, however, is the same; namely, strong hydrogen bonds lead to strong correlations and, hence, to mutual information.
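The two contributions can be estimated for water at its normal boiling point. With $V_g$ taken as the ideal-gas molar volume and a rough handbook value for the liquid molar volume, the expansion step accounts for roughly 61 J mol⁻¹ K⁻¹ of the total 109 J mol⁻¹ K⁻¹ in Table 1, leaving roughly 48 J mol⁻¹ K⁻¹ for the step in which the (hydrogen-bond) correlations are "turned off". The numbers below are approximate and serve only to illustrate the decomposition.

```python
import numpy as np

# Two-step decomposition of Delta S_v for water at 373.15 K, Equations (59)-(61).
# The liquid molar volume is an approximate handbook value; Delta S_v is taken from Table 1.
R = 8.314        # J/(mol K)
T = 373.15       # K
P = 101325.0     # Pa
V_l = 18.8e-6    # molar volume of liquid water near 100 C, m^3/mol (approximate)
V_g = R * T / P  # ideal-gas molar volume at (T, P), m^3/mol
dS_v = 109.1     # entropy of vaporization of water, J/(mol K), from Table 1

dS_expansion = R * np.log(V_g / V_l)       # second step, Equation (61)
dS_interactions = dS_v - dS_expansion      # first step, Equation (60), by difference

print(dS_expansion, dS_interactions)       # ~61.5 and ~47.6 J/(mol K), both positive
```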

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ben-Naim, A. A Molecular Theory of Solutions; Oxford University Press: Oxford, UK, 2006.
  2. Ben-Naim, A. Molecular Theory of Water and Aqueous Solutions, Part I: Understanding Water; World Scientific: Singapore, 2009.
  3. Ben-Naim, A. Molecular Theory of Water and Aqueous Solutions, Part II: The Role of Water in Protein Folding, Self-Assembly and Molecular Recognition; World Scientific: Singapore, 2011.
  4. Fisher, I.Z. Statistical Theory of Liquids; University of Chicago Press: Chicago, IL, USA, 1964.
  5. Gray, C.G.; Gubbins, K.E. Theory of Molecular Fluids, Volume 1: Fundamentals; Clarendon Press: Oxford, UK, 1984.
  6. Guggenheim, E.A. Mixtures: The Theory of the Equilibrium Properties of Some Simple Classes of Mixtures, Solutions and Alloys; Clarendon Press: Oxford, UK, 1952.
  7. Hansen, J.P.; McDonald, I.R. Theory of Simple Liquids; Academic Press: London, UK, 1976.
  8. Hill, T.L. Introduction to Statistical Mechanics; Addison-Wesley: Reading, MA, USA, 1960.
  9. O'Connell, J.P. Thermodynamic properties of solutions based on correlation functions. Mol. Phys. 1971, 20, 27.
  10. Temperley, H.N.V.; Rowlinson, J.S.; Rushbrooke, G.S. Physics of Simple Liquids; North-Holland Publishing Co.: Amsterdam, The Netherlands, 1968.
  11. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  12. Ben-Naim, A. Information Theory, Part I: An Introduction to the Fundamental Concept; World Scientific: Singapore, 2017.
  13. Sackur, O. Die Anwendung der kinetischen Theorie der Gase auf chemische Probleme. Annalen der Physik 1911, 36, 958.
  14. Tetrode, H. Die chemische Konstante der Gase und das elementare Wirkungsquantum. Annalen der Physik 1912, 38, 434.
  15. Ben-Naim, A. A Farewell to Entropy: Statistical Thermodynamics Based on Information; World Scientific: Singapore, 2008.
Figure 1. Schematic phase diagram of a normal liquid (Left) and water (Right). Note the different slopes of the solid-liquid equilibrium lines.
Figure 2. Transition from the continuous to the discrete case. (a) A dart hits a one-dimensional segment of length L. There are infinite possible locations for the dart; (b) Passage from the infinite to the discrete description of the states.
Figure 3. The two-step transition from a liquid to an ideal gas; First, the interactions are “turned off” at constant volume Vl, then there is an expansion to a larger molar volume, Vg.
Table 1. Entropies of vaporization of liquids at their normal boiling point.
Liquid                  ΔS_v / J mol⁻¹ K⁻¹
Benzene                 +87.2
Carbon disulfide        +83.7
Carbon tetrachloride    +85.8
Cyclohexane             +85.1
Dimethyl ether          +86.0
Methane                 +73.2
Methanol                +104.1
Ethanol                 +110.0
Water                   +109.1
