
The “Real” Gibbs Paradox and a Composition-Based Resolution

by
Fabien Paillusson
School of Mathematics and Physics, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK
Entropy 2023, 25(6), 833; https://doi.org/10.3390/e25060833
Submission received: 4 April 2023 / Revised: 12 May 2023 / Accepted: 21 May 2023 / Published: 23 May 2023
(This article belongs to the Section Thermodynamics)

Abstract

There is no documented evidence to suggest that J. W. Gibbs did not recognize the indistinguishable nature of states involving the permutation of identical particles or that he did not know how to justify on a priori grounds that the mixing entropy of two identical substances must be zero. However, there is documented evidence to suggest that Gibbs was puzzled by one of his theoretical findings, namely that the entropy change per particle would amount to $k_B \ln 2$ when equal amounts of any two different substances are mixed, no matter how similar these substances may be, and would drop straight to zero as soon as they become exactly identical. The present paper is concerned with this latter version of the Gibbs paradox and, to this end, develops a theory characterising real finite-size mixtures as realisations sampled from a probability distribution over a measurable attribute of the constituents of the substances. In this view, two substances are identical, relative to this measurable attribute, if they have the same underlying probability distribution. This implies that two identical mixtures do not need to have identical finite-size realisations of their compositions. By averaging over composition realisations, it is found that (1) fixed composition mixtures behave as homogeneous single-component substances and (2) in the limit of a large system size, the entropy of mixing per particle shows a continuous variation from $k_B \ln 2$ to 0, as two different substances are made more similar, thereby resolving the “real” Gibbs paradox.

1. Introduction

In most modern texts, the Gibbs paradox is referred to as the inability to ground the extensivity of thermodynamic potentials, such as the entropy or the Helmholtz free energy, within the framework of classical statistical mechanics developed by J. W. Gibbs in [1]. In his famous 1992 discussion of the Gibbs paradox, E. T. Jaynes summarises the situation well [2]:
“For 60 years, textbooks and teachers (including, regrettably, the present writer) have impressed upon students how remarkable it was that Gibbs, already in 1902, had been able to hit upon this paradox which foretold—and had its resolution only in—quantum theory with its lore about indistinguishable particles, Bose and Fermi statistics, etc.”
Jaynes, however, contends that Gibbs committed a mathematical mistake in his latest writings, which is ultimately responsible for the whole confusion around extensivity in statistical mechanics:
“In particular, Gibbs failed to point out that an “integration constant” was not an arbitrary constant, but an arbitrary function. However, this has, as we shall see, nontrivial physical consequences. What is remarkable is not that Gibbs should have failed to stress a fine mathematical point in almost the last words he wrote; but that for 80 years thereafter all textbook writers (except possibly Pauli) failed to see it.”
In the above passage, the mentioned integration constant would arise from a result derived by Gibbs in Chapter IV of his book on statistical mechanics. This point will be important for what we are going to discuss in a few paragraphs.
Whilst Jaynes’ intention may have principally been to celebrate Gibbs’ work and push against the necessity of a quantum foundation for classical statistical mechanics, these two quotes ironically end up further cementing a narrative that attributes a fault to Gibbs, which would have inadvertently led astray the physics and thermodynamics community for at least 80 years.
It is important to point out that such a narrative stems from a less-than-charitable, or even perhaps dishonest, reading of Gibbs’ writing in his final book on statistical mechanics [1]. Jaynes even ventured to suggest that Gibbs was older and in bad health, which would explain why his latest piece of work was lacking compared to previous ones. Interestingly, even biographies written just a few years after Gibbs’ death, such as [3], do not appear to mention any purported intellectual decline or poor health in the later years of his life. However, one just needs to read the preface of Elementary Principles in Statistical Mechanics [1] to see that it was never Gibbs’ intention to consider systems with varying particle numbers; thus, any analysis of extensivity for the results presented in the preceding chapters is automatically excluded until Chapter XV, which is the final chapter of his book:
“Finally, in Chapter XV, we consider the modification of the preceding results which is necessary when we consider systems composed of a number of entirely similar particles of several kinds, all of those of each kind being entirely similar to each other, and when one of the variations to be considered is that of the numbers of the particles of the various kinds which are contained in the system.”
At this point, the reader may be reminded that the alleged “integration constant” mistake (conflating a function of the particle number with a constant) that Gibbs made, according to Jaynes, is in Chapter IV.
A full discussion of how both the aforementioned extensivity issue and the purported mistakes attributed to Gibbs are actually not supported by any evidence in Gibbs’ writings has been provided elsewhere [4]. In what follows, we shall assume that the few quotes provided above are sufficient to cast suspicion on the historical, historiographical, and conceptual veracity of the main interpretation of the Gibbs paradox found in most textbooks. We shall now safely depart from this view and delve into what we term the “real” Gibbs paradox.
In his 1876 work on the equilibrium of heterogeneous substances [5], Gibbs found a somewhat intriguing result grounded in thermodynamics (but easily reproducible in statistical mechanics by later authors), where the change in entropy would amount to $k_B \ln 2$ per unit of mass/matter upon mixing equal amounts of any two different non-reactive substances. The surprising fact was that this result held no matter how similar these substances would be. Here, again, we may directly quote Gibbs [5]:
“The fact is not less significant that the increase of entropy due to the mixture of gases of different kinds … is independent of the nature of the gases … and of the degree of similarity between them.”
If the substances were considered to be exactly identical, however, one would find the entropy change to be exactly zero.
In [6], it is reported that Pierre Duhem was likely the first to mention the paradoxical nature of this result and highlight the “absurd consequences” that violated the continuity principle. To our knowledge, very few textbooks on statistical mechanics actually refer to the latter when discussing the Gibbs paradox. The only two such texts we are aware of are [7,8].
Van Kampen proposed a discussion of this version of the paradox in [9]. In this discussion, he proposes the following reasoning to dispel the notion that there is any paradox at all:
“But suppose that [substances] A and B are so similar that the experimenter has no physical way of distinguishing between them. Then he does not have the semi-permeable walls needed for the second process, but on the other hand the first one will look perfectly reversible to him.”
This leads to the conclusion that there should not be any change in entropy as long as the experimenter does not have any physical means (or is not interested in using any such means) to separate two substances. Indeed, in the above quote, the second process mentioned refers to obtaining the final entropy by determining the work performed by a semi-permeable membrane letting A pass through and not B, for example.
This statement by van Kampen on the operational interpretation of the entropy of mixing was recently demonstrated experimentally with colloids in [10], but had already been mentioned by Gibbs one hundred years earlier in [5]:
“when we say that two gas-masses of the same kind are mixed under similar circumstances there is no change of energy or entropy, we do not mean that the gases that have been mixed can be separated without change to external bodies. On the contrary, the separation of the gases is entirely impossible … because we do not recognise any difference in the substance of the two masses”,
and was likely known to Duhem as well.
The author remains uncertain that van Kampen’s argument dispels the paradox pointed out by Gibbs and Duhem for all possible imaginable substances. For example, in his explanation, van Kampen assumes that either one can certainly discriminate between substance A and substance B, or one is incapable of discriminating them at all. However, what if, for example, substances A and B actually comprise the same molecules, but not in the same proportions? Surely a gas containing 80% dioxygen and 20% nitrogen is not “identical” to a gas containing 20% dioxygen and 80% nitrogen. How do we separate these two substances then, and what is the corresponding work needed to do so? As proposed in [4], it is possible to conceive of a generalised semi-permeable membrane that only has a given probability (less than or equal to unity) to bring one of the molecules of a given type from one side of the box to the other side, to separate substances A and B. A similar approach was used to experimentally test the Landauer bound in [11].
These new developments allow one to imagine degrees of similarity between substances rather than a binary identical/different view of substance similarity.
A further issue that arises when thinking about the mixing of real substances is well illustrated by a thought experiment discussed in [12]: If two glasses of milk are poured from the same bottle, they will correspond to two different finite realisations of the same underlying substance (i.e., the milk coming from the bottle). Consequently, the two glasses are unlikely to contain the very same amounts of, say, fat globules or casein proteins, despite the fact that they are sampled from the same underlying substance. If one were to mix these two glasses into a larger container, would one expect a non-zero mixing entropy from a statistical mechanics or thermodynamics perspective? More importantly, for the purpose of the present paper, how can the experiment be reproduced in the first place?
Before moving on to what the present article seeks to address, it will be instructive to look first at a common approach that has been extensively used in the past two decades to describe mixtures, discrete or otherwise, either in the context of the Gibbs paradox (e.g., in [4,12,13]) or in the context of multimer assemblies [14] and phase equilibria of polydisperse systems (e.g., [15,16,17,18,19]). One way to conceive this approach consists in considering a mixture comprising a fixed integral number $N_i$ of particles of species $i$ among, say, $m$ species. If the system has a total of $N$ particles, then we must have that $N = \sum_{i=1}^{m} N_i$. For an ideal gas model confined to a volume $V$, the corresponding free energy in the canonical ensemble is easily found to be, in the large $N_i$ limit (see, e.g., [12]),
$$\beta F(N,\{N_i\},\beta) \approx -N\ln\frac{V}{\Lambda^3} - N + \sum_{i=1}^{m} N_i \ln N_i, \qquad (1)$$
where, for illustration purposes, we consider a gas of particles with no relevant internal degrees of freedom and identical masses (the species thus being distinguished by means other than their mass), with $\Lambda$ the corresponding thermal length scale. The approach then goes on to make the following prescription:
$$N_i = N p(i), \qquad (2)$$
where $p(i)$ is interpreted as being the probability of having a particle of species $i$ in the system. Substituting the above prescription into Equation (1) gives
$$F(N,\{N_i\},\beta) \approx N k_B T \ln(n\Lambda^3) - N k_B T - N k_B T\, s(p), \qquad (3)$$
where $n = N/V$ is the particle number density and
$$s(p) = -\sum_{i=1}^{m} p(i)\ln p(i). \qquad (4)$$
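As a concrete illustration of Equations (2)–(4), the following minimal Python sketch (NumPy assumed; the function name is ours) evaluates the composition entropy $s(p)$ for a hypothetical two-species composition:

```python
import numpy as np

def composition_entropy(p):
    """Dimensionless composition entropy s(p) = -sum_i p(i) ln p(i) (Equation (4))."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # use the convention 0 ln 0 = 0
    return -np.sum(p * np.log(p))

# Hypothetical composition: 80% of species 1 and 20% of species 2
print(composition_entropy([0.8, 0.2]))  # ~0.5004 (in units of k_B)
```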
At this stage, it is worth pointing out that some disagreement exists in the naming convention used to qualify the quantity $k_B s(p)$ appearing in Equation (3) for the free energy of a single substance. For example, in [12], $k_B s(p)$ is called the mixing entropy, while in [13], $k_B s(p)$ is considered characteristic of the composition of a substance (no matter how this substance is put together), as described by the probability $p(i)$, and is, therefore, called the composition entropy. Given that the present work is a continuation of the work proposed in [13], we shall adopt the latter naming convention in what follows. In that latter view, refs. [4,13] have defined the mixing entropy as the change in the composition entropy between a system comprising $N_A$ and $N_B$ particles of two initially separated substances, A and B, and that of a substance C resulting from putting A and B together in the same volume. If we denote by $p_A$, $p_B$, and $p_C = \frac{N_A}{N}p_A + \frac{N_B}{N}p_B$ the composition probabilities of substances A, B, and C, respectively, then the (dimensionless) entropy change is found to be [13]
$$\Delta S / k_B = N_A D_{KL}(p_A|p_C) + N_B D_{KL}(p_B|p_C), \qquad (5)$$
where
$$D_{KL}(p|q) \equiv -\sum_{i=1}^{m} p(i)\ln\frac{q(i)}{p(i)} \qquad (6)$$
is the Kullback–Leibler divergence of the probability $p$ from the probability $q$. In the context of strongly asymmetric mixing, e.g., $N_A \gg N_B$, it was shown in [13] that Equation (5) reduces to a prescription by Sollich et al. for determining the phase equilibria of polydisperse systems, for instance in [18,19], where a dominant (parent) composition starts coexisting with a minority (incipient) composition. The traditional Gibbs mixing thought experiment is, however, more in line with the fully symmetric case, where $N_A = N_B = N/2$. In this case, Equation (5) reduces to [4,13]
$$\Delta S / k_B = N D_{JS}(p_A|p_B), \qquad (7)$$
where
$$D_{JS}(p_A|p_B) \equiv \frac{1}{2}\sum_{i=1}^{m}\left[ p_A(i)\ln\frac{2 p_A(i)}{p_A(i)+p_B(i)} + p_B(i)\ln\frac{2 p_B(i)}{p_A(i)+p_B(i)}\right], \qquad (8)$$
is the Jensen–Shannon divergence between $p_A$ and $p_B$, which corresponds to the square of a metric [20] between the probability distributions $p_A$ and $p_B$.
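A short numerical sketch of Equations (5)–(8) follows (Python with NumPy assumed; function names are ours). It computes the Kullback–Leibler and Jensen–Shannon divergences for two hypothetical compositions, such as the 80/20 and 20/80 gases mentioned earlier:

```python
import numpy as np

def d_kl(p, q):
    """Kullback-Leibler divergence D_KL(p|q) = sum_i p(i) ln(p(i)/q(i)) (Equation (6))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def d_js(pA, pB):
    """Jensen-Shannon divergence of Equation (8); bounded from above by ln 2."""
    pC = 0.5 * (np.asarray(pA, float) + np.asarray(pB, float))
    return 0.5 * d_kl(pA, pC) + 0.5 * d_kl(pB, pC)

# Symmetric mixing (N_A = N_B) of the 80/20 and 20/80 compositions:
pA, pB = [0.8, 0.2], [0.2, 0.8]
print(d_js(pA, pB))  # ~0.193, so Delta S / (N k_B) ~ 0.193 by Equation (7)
```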
The problem with the above approach is that it entirely relies on the prescription that $N_i = N p(i)$. However, upon inspection, this prescription does not appear justified, for three reasons:
  • Firstly, from a mathematical standpoint, if a mixture is characterised by a probability distribution $p(i)$, then each time a particle of that substance is added to the system, the species $i$ is selected with probability $p(i)$. In the simpler case where only two species are possible, i.e., $m = 2$, the problem becomes analogous to the flipping of $N$ coins. In that case, the number of particles of a given species follows a binomial distribution, as studied in [21], and $N p(i)$ is the (binomial) expectation value of $N_i$. In practice, however, a single realisation of $N$ coin flips is not expected to give exactly $N_i = N p(i)$. Thus, the proposed prescription, albeit heuristically intuitive, conflates an instantiated value taken by a random variable with its expectation value; this is akin to a sort of ‘mean field’ approximation, especially given that $N_i$ ends up inside various logarithmic functions in statistical thermodynamics.
  • Secondly, from a conceptual standpoint, there is a problem with the substitution of $N_i$ by $N p(i)$: the latter is often not an integer. Gibbs’ statistical mechanics, or its quantum extension, establishes a relationship between the dimensionality of the state spaces to be explored and the number of particles in the system. These spaces possess an integral number of dimensions, not a fractional one.
  • Finally, the prescription is often applied after the Stirling approximation has been used, which requires each $N_i$ to be large; this cannot be guaranteed for all system sizes and all composition probabilities.
Despite the above critique of the $N_i = N p(i)$ prescription, Equation (7) ultimately appears to make sense from a physical standpoint, as discussed in [4,13], and it can be obtained from a substantial body of existing works [12,15,19]. It is just that (a) the prescription is not mathematically consistent with probability theory and (b) it is not conceptually satisfactory. In particular, there is no notion of what happens if $N$ cannot be assumed to be infinite. The aim of the present paper is, therefore, to extend the work on binary mixtures carried out in [21] to generalised mixtures, so as to ground more firmly, both mathematically and conceptually, the validity of Equation (7), and to propose mixing entropy expressions for finite $N$.
In what follows, we propose a new formalism that enables one to address the above questions. More specifically, we consider that substances are ultimately defined by composition probabilities and that one can only ever mix two finite composition realisations of different substances. Given that the entropy change will depend on the specific realisations one is looking at, the reproducibility requirement of thermodynamics compels us to seek realisation-independent mixing entropy expressions. We obtain the latter by averaging over composition realisations in both substances to be mixed. We believe that this approach follows the operational view promoted by Gibbs and van Kampen, but when the two substances cannot be separated with certainty and have some overlap in their composition.

2. Materials and Methods

The method we introduce in this paper expands upon preliminary work by the present author on binary mixtures [21]. The generalisation rests on the following presuppositions:
1-
Discrete mixtures comprising $m$ possible identifiable species, labelled from 1 to $m$, are characterised—in a definitional sense—by an ideal composition corresponding to a set of fixed probabilities $\{p(i)\}_{i=1,\ldots,m}$, satisfying
$$\sum_{i=1}^{m} p(i) = 1. \qquad (9)$$
2-
Any real mixture with a finite number $N$ of particles is but one of many realisations obtained from independently sampling each of the $N$ particle identities from the sample space $\{1,\ldots,m\}$, with the corresponding probabilities $\{p(i)\}_{i=1,\ldots,m}$.
3-
Let $\mathcal{N}_i$ denote the random variable representing the number of particles of species $i$ in a given mixture and let $\mathcal{C} = \{\mathcal{N}_1,\ldots,\mathcal{N}_m\}$ represent the multivariate random variable characterising the empirical composition. If the mixture comprises $N$ particles, we must have that
$$\sum_{i=1}^{m} \mathcal{N}_i = N. \qquad (10)$$
4-
We denote by $C \equiv \{N_1,\ldots,N_m\}$ the composition realisation of a given mixture, such that, for any species index $i$, we have $\mathcal{N}_i = N_i$. Given that the indicator random variables for the $N$ particles are considered independent and identically distributed with $\{p(i)\}_{i=1,\ldots,m}$, it follows that the probability distribution for $\mathcal{C}$ is the multinomial distribution:
$$M_{N,m,p}(C) = \frac{N!}{\prod_{i=1}^{m} N_i!}\,\prod_{j=1}^{m} p(j)^{N_j}, \qquad (11)$$
where it is understood that the $N_i$ must comply with Equation (10) as well.
5-
The Helmholtz free energy $F(N,\beta = \frac{1}{k_B T},\mathcal{C})$ of an $N$-particle system is also a random variable via the composition random variable $\mathcal{C}$. We seek the realisation-independent Helmholtz free energy of the system, $\langle F\rangle(N,\beta)$, by averaging over all possible composition realisations (a minimal numerical sketch of this averaging is given after this list):
$$\langle F\rangle(N,\beta) \equiv \langle F(N,\beta,\mathcal{C})\rangle = \sum_{C} M_{N,m,p}(C)\, F(N,\beta,C). \qquad (12)$$
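To make presupposition 5 concrete, here is a minimal Monte Carlo sketch of the averaging in Equation (12), applied to the ideal gas model considered in the remainder of the paper (cf. Equation (13) below). It assumes NumPy/SciPy, the parameter values are purely illustrative, and all function names are ours:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def beta_F(C, V, lam, z):
    """beta * F(N, beta, C) = -ln Z for one composition realisation C (cf. Equation (13))."""
    C = np.asarray(C, dtype=float)
    N = C.sum()
    # -ln Z = -N ln V - sum_i [N_i ln z_i - ln N_i! - 3 N_i ln Lambda_i]
    return -(N * np.log(V) + np.sum(C * np.log(z) - gammaln(C + 1) - 3 * C * np.log(lam)))

def beta_F_avg(N, p, V, lam, z, samples=20000):
    """Monte Carlo estimate of the realisation-independent <F> of Equation (12)."""
    comps = rng.multinomial(N, p, size=samples)  # composition realisations C ~ M_{N,m,p}
    return np.mean([beta_F(C, V, lam, z) for C in comps])

# Two hypothetical species with equal thermal lengths and trivial internal partition functions
p, lam, z = np.array([0.8, 0.2]), np.array([1.0, 1.0]), np.array([1.0, 1.0])
print(beta_F_avg(N=100, p=p, V=50.0, lam=lam, z=z))
```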
To address Gibbs’ paradox with the above formalism, we also need to specify an ensemble and a model to apply it to. We choose to work in the Gibbs canonical ensemble, i.e., fixed temperature and a fixed amount of matter. Given that the paradox arises already in the case of ideal gases, the very first step is, therefore, to discuss it in the context of N non-interacting, non-relativistic, and independent particles confined in a box of volume V. This is what shall be done in the rest of this paper.
For a system of $N$ particles comprising $N_i$ particles of species $i$ of mass $M_i$, corresponding to a given composition realisation $C$, the Gibbs canonical partition function reads
$$Z(N,\beta,C) = V^N \prod_{i=1}^{m} \frac{z_i^{N_i}}{N_i!\,\Lambda_i^{3N_i}}, \qquad (13)$$
where $z_i$ is the internal dimensionless canonical partition function of a particle of type $i$, $\Lambda_i = \xi/\sqrt{2\pi M_i k_B T}$ is a characteristic length scale associated with species $i$, and $\xi$ is the area—preserved by the Hamiltonian flow—of an elementary two-dimensional face of a polygonal cell of a partition of the phase space. Of course, it is common in modern texts to identify $\Lambda_i$ with the thermal de Broglie wavelength by setting $\xi = h$, the Planck constant, but this is not a necessity.
From Equation (11), we have that
$$\frac{1}{\prod_{i=1}^{m} N_i!} = \frac{M_{N,m,p}(C)}{N!\,\prod_{j=1}^{m} p(j)^{N_j}}, \qquad (14)$$
so that, substituting this identity into Equation (13) and averaging according to Equation (12), we find
$$\beta\langle F\rangle(N,\beta) = -N\ln V + N\ln(\tilde{\Lambda}^3/\tilde{z}) + \ln N! - N s(p) + H(M_{N,m,p}|M_{N,m,p}), \qquad (15)$$
where $\tilde{\Lambda} = e^{\sum_{i=1}^{m} p(i)\ln\Lambda_i}$ is a single effective characteristic length scale for the substance, $\tilde{z} = e^{\sum_{i=1}^{m} p(i)\ln z_i}$ is an effective internal partition function for the substance, $s(p) \equiv -\sum_{i=1}^{m} p(i)\ln p(i)$ is the composition entropy, $H(M_{N,m,p}|M_{N,m,p}) \equiv -\sum_{C} M_{N,m,p}(C)\ln M_{N,m,p}(C)$ is the composition realisation entropy, and we have used the fact that $\langle \mathcal{N}_i\rangle = \sum_{C} M_{N,m,p}(C)\, N_i = N p(i)$ for the multinomial distribution (11).
One of the major difficulties in regard to using Equation (15) for analytical results for finite-sized systems is that the term H ( M N , m , p | M N , m , p ) corresponds to the entropy of the multinomial distribution and there is no known closed-form expression for it. We are, therefore, bound to explore different limiting regimes in the cases of different kinds of composition models, and propose conclusions on a case-by-case basis.

3. Results

Equation (15) is exact for any finite system consisting of N particles and characterised by a composition probability map p. This will serve as our starting point to elucidate the thermodynamic behaviours of general mixtures when they are considered on their own and also upon mixing any two mixtures. In particular, we shall also try to explore cases of so-called continuous polydisperse systems. In what follows, we shall use as a safeguarding strategy the principle that, in the large N limit, the realisation-independent free energy of mixture composition models should become extensive. Composition models that do not satisfy this requirement shall be considered provisionally inappropriate for the study of the thermodynamics of substances and would warrant further inspection outside of the scope of the present paper.

3.1. Case of a Single Mixture

As mentioned before, the potential utility of the proposed formalism is going to depend on one’s ability to evaluate the entropy of the multinomial distribution $H(M_{N,m,p}|M_{N,m,p})$. In what follows, we will use the following convenient rewriting of this entropy:
$$H(M_{N,m,p}|M_{N,m,p}) = -\ln N! + N s(p) + \sum_{i=1}^{m}\sum_{N_i=0}^{N} B_{N,p}(\mathcal{N}_i = N_i)\ln(N_i!), \qquad (16)$$
where $B_{N,p}(\mathcal{N}_i = N_i)$ is the binomial probability distribution for finding $N_i$ particles of type $i$ in the composition realisation, given the specified $N$ and $p(i)$:
$$B_{N,p}(\mathcal{N}_i = N_i) \equiv \frac{N!}{(N-N_i)!\,N_i!}\, p(i)^{N_i}\,(1-p(i))^{N-N_i}. \qquad (17)$$
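Since Equations (16) and (17) reduce the multinomial entropy to one-dimensional binomial sums, it can be evaluated exactly at moderate cost. A minimal Python sketch (NumPy/SciPy assumed; the function name is ours) follows:

```python
import numpy as np
from scipy.stats import binom
from scipy.special import gammaln

def multinomial_entropy(N, p):
    """Exact H(M_{N,m,p}|M_{N,m,p}) via the binomial rewriting of Equation (16)."""
    p = np.asarray(p, dtype=float)
    Ni = np.arange(N + 1)
    ln_fact = gammaln(Ni + 1)  # ln(N_i!)
    s_p = -np.sum(p[p > 0] * np.log(p[p > 0]))  # composition entropy s(p)
    third_term = sum(np.sum(binom.pmf(Ni, N, pi) * ln_fact) for pi in p)
    return -gammaln(N + 1) + N * s_p + third_term

print(multinomial_entropy(60, np.full(20, 1 / 20)))  # uniform model with m = 20
```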

3.1.1. Finite Discrete Mixtures: $m$ and Map $p$ Are Fixed and $N > m$

The case of finite discrete mixtures corresponds to the “typical” situation that most thermodynamicists might have in mind for a mixture, i.e., there is a clearly defined set of $m$ different chemical species with various probabilities assigned to each species, and $N$ is often much larger than $m$. If $N$ is large enough, such that $N p(i) \gtrsim 10$ for any $i = 1,\ldots,m$, then one may use the Stirling approximation for $N_i!$ and take the normal distribution limit of $B_{N,p}(\mathcal{N}_i = N_i)$, giving [22] (see also Appendix A for numerical comparisons)
$$H(M_{N,m,p}|M_{N,m,p}) = \frac{m-1}{2}\ln(2\pi N e) + \frac{1}{2}\sum_{i=1}^{m}\ln p(i) + O\!\left(\frac{1}{N}\right). \qquad (18)$$
One notes that the first term in Equation (18) is of order $\sim\ln N$, the second term does not grow with $N$ (for a uniform composition it is of order $m\ln m$), and the remainder is of order $\sim 1/N$. This means that, in the large $N$ limit, using the Stirling approximation, the realisation-independent Helmholtz free energy for finite discrete mixtures becomes
$$\langle F\rangle(N,T) \approx N k_B T\left(\ln(n_v\tilde{\Lambda}^3/\tilde{z}) - s(p) - 1\right), \qquad (19)$$
where we introduce the particle density $n_v \equiv N/V$. Three important remarks follow from Equations (18) and (19):
  • At fixed composition, i.e., at fixed $m$ and probability map $p$ on $\{1,\ldots,m\}$, $s(p)$ is a constant, and the mixture can essentially be conceived as a homogeneous substance. All thermodynamic properties of the system will be identical to those of a single-component system, with only the total substance particle density $n_v$ (as opposed to partial densities) playing a role.
  • In the large system size limit, the realisation-independent free energy becomes proportional to the total amount of matter N in the system, i.e., becomes extensive. Consequently, given the safeguarding criterion we chose, the finite discrete mixture model described above constitutes a valid composition model of general mixtures.
  • By applying the defining relation for the thermodynamic entropy $S(N,T) \equiv -\frac{\partial \langle F\rangle}{\partial T}$ to Equation (19), we obtain
    $$S(N,T) \approx N k_B \ln V + \frac{3}{2} N k_B \ln T + N k_B \frac{\partial (T\ln\tilde{z})}{\partial T} + K(p), \qquad (20)$$
    where $K(p)$ is a function that depends only on the composition characterised by the map $p$. This result for the entropy expression of a substance was already anticipated by Gibbs in 1876 in [5] in the case where $\tilde{z} = 1$ (and with slightly different notations):
“[Equation (20)] applies to all gases of constant composition for which the matter is entirely determined by a single variable [N]”
We should note that Equation (19) has already been obtained by the author in [4], but within the heuristic theoretical framework described in the introduction, whereby whole particle numbers for each species were directly replaced by their composition averages in the expression of the canonical partition function. This amounts to a sort of “mean-field” approximation, which is somewhat unjustified from a mathematical standpoint and has the conceptual disadvantage of invoking fractional particle numbers, which is hardly satisfactory.
On the contrary, the formalism proposed in Equations (9)–(15) leading to Equation (19) is free from these shortcomings.

3.1.2. Infinite Discrete Mixtures: $m \gg N$ and $N^2 p(i) \ll 1$

It is tempting to imagine that so-called continuous mixtures or continuous polydisperse mixtures correspond to the composition model we term infinite discrete mixtures, where $m \gg N$ no matter the system size (including the thermodynamic limit), and where the values of all the probabilities $p(i)$ are monotonically decreasing functions of $m$, with $N^2 p(i) \to 0$ asymptotically. A trivial example of this model is the uniform probability model $p(i) = 1/m$ when $m \gg N^2 \gg 1$.
For infinite discrete mixture models, obtaining a particle of a given species becomes exceedingly rare, and the number $N$ of particles in the system is too small compared to $m$ for any single realisation of the composition $C = \{N_1,\ldots,N_m\}$ to provide a faithful representation of the composition probabilities $p(i)$. Consequently, the third term of Equation (16) is dominated by the value $N_i = 2$, i.e.,
$$\sum_{i=1}^{m}\sum_{N_i=0}^{N} B_{N,p}(\mathcal{N}_i = N_i)\ln N_i! \approx \frac{N(N-1)\ln 2}{2}\sum_{i=1}^{m} p(i)^2 \sim O\!\left(N^2 p(i)\right) \ll 1. \qquad (21)$$
It follows that for infinite discrete mixtures, including cases where $m$ tends to infinity at finite $N$, as well as thermodynamic limit cases where the $m \to \infty$ limit is taken first and only then the limit of infinite $N$, we obtain for the entropy of the multinomial distribution (see Appendix A for numerical comparisons)
$$H(M_{N,m,p}|M_{N,m,p}) \approx -\ln N! + N s(p), \qquad (22)$$
which, upon being substituted in Equation (15), gives
$$\beta\langle F\rangle(N,\beta) \approx -N\ln V + N\ln(\tilde{\Lambda}^3/\tilde{z}). \qquad (23)$$
It is quite straightforward to see that Equation (23) does not provide an extensive form for the realisation-independent free energy $\langle F\rangle$. Therefore, given our extensivity criterion aimed at ensuring consistency with thermodynamics, we shall consider the infinite discrete mixture model described in this sub-section as inadequate for modelling a typical thermodynamic situation. In particular, it does not appear to capture the thermodynamics displayed by continuous polydisperse systems, such as colloids.

3.1.3. Finite Continuous Mixtures: $p(i) = \rho(i)\Delta(i)$ and $N > m$

The inadequacy of the infinite discrete mixture model for characterising the expected thermodynamic behaviour of a single mixture stems from adopting a value of $m$ much larger than $N$. In that case, composition realisations with 0 or 1 particle per species are overwhelmingly more likely than those with at least one species having two particles. This pathological behaviour need not be rooted in the physics of the problem. Rather, it may arise from an inadequate categorisation of the different species in the system relative to the available number of particles $N$. In this situation, one may need to resort to a coarse-graining procedure, effectively grouping different species into a much smaller number of effective particle species. With this in mind, we introduce finite continuous mixture composition models for which the probabilities $p(i)$ can be expressed as $p(i) = \rho(i)\Delta(i)$ for any $i = 1,\ldots,m$, where $\rho$ is the probability density of a continuous random variable $X$ and where $\Delta(i)$ is an interval size of $X$ associated with the species index value $i$. This is, in fact, a special case of finite discrete mixtures, where we require the intermediate probability density $\rho$ to be a smooth function that does not change wildly as $m$ is increased. In fact, in this model, we expect the experimental $\rho$ to converge to some theoretical one as $m$ is increased.
There are two distinct, but non-mutually exclusive, situations that would give rise to such models:
  • Mathematical limit: The model for the map $p(i)$ may depend on $m$ in such a manner that, as $m$ becomes large “enough”, $p(i)$ is very well approximated by the distribution function of a continuous variable, and is identical to it in the infinite-$m$ limit. Let us investigate in more detail the typical example of a binomial model for the map $p(i)$ with parameters $m$ and $w$, i.e.,
    $$p(i) = \frac{m!}{i!(m-i)!}\, w^i (1-w)^{m-i}, \qquad (24)$$
    where $w$ is a new parameter characteristic of the composition model. Within this model, the most probable type of particle is that with the integer value closest to $mw$. Any particle index $i$ away from $mw$ has a lower probability, which decays rapidly to very small values as $m$ becomes large enough. Indeed, in the large-$m$ limit at fixed $w$, the binomial distribution tends to a normal distribution (see Appendix B for a numerical comparison)
    $$p(i) \approx \frac{e^{-\frac{(i-mw)^2}{2mw(1-w)}}}{\sqrt{2\pi m w(1-w)}}. \qquad (25)$$
    Consider now the map between the species label $i$, taking values in $\{1,\ldots,m\}$, and a new variable $r \equiv i/m$, taking values in the finite set of rationals $\{1/m, 2/m, \ldots, 1\}$. From Equation (25) and the bijective character of the map between $i$ and $r$, it follows that
    $$\mathrm{Prob}(r) \approx \frac{e^{-\frac{(r-w)^2}{2w(1-w)/m}}}{\sqrt{2\pi w(1-w)/m}}\cdot\frac{1}{m}. \qquad (26)$$
    In expression (26), $r$ must be limited to take values in the set $\{1/m,\ldots,1\}$. However, we really do have that $\mathrm{Prob}(r) = \rho(r)\Delta(r)$, where $\rho(x)$ is the probability density function of a normally distributed continuous random variable, and where we can identify $\Delta(r) = 1/m$ (cf. Figure A2 in Appendix B).
  • Experimental considerations: Whether the composition is characterised by the preparation protocol or by a measurement technique, any experimental process is accompanied by a corresponding finite precision. Thus, in practice, even if one were to use a particle attribute that takes continuous values and is associated with a theoretical underlying $\rho$ to characterise the composition of a mixture, it is bound to be expressed in terms of a discrete set $\{\Delta(1),\ldots,\Delta(m)\}$ of a finite number $m$ of continuous intervals, effectively corresponding to the $m$ identifiable ’species’ of the system at the provided resolution and particle number. If $m$ were to be increased, this would just improve the “granularity” of these intervals without making the set actually continuous. If this granularity is fine enough, one may define an empirical probability density $\rho_{\mathrm{emp}}(i) \equiv p(i)/\Delta(i)$ and fit it to a corresponding continuous probability density model. Convergence to a stable continuous model is expected as $m$ (i.e., the resolution) is increased. This experimental resolution aspect can also be justified under the assumption that the excess free energy of the mixture solely depends on a subset of moments of the polydisperse distribution [19], where these moments are related to the precision of the measurement.
Regardless of whether the use of a continuous random variable and its associated probability density is justified by a mathematical limit of the composition model, by inevitable experimental precision considerations, or by both, ultimately, these preconditions allow one to write $p(i) = \rho(i)\Delta(i)$, where $\rho$ is the probability density of a continuous random variable and for which $N > m$.
Given that $N > m$, we can use the approximate expression (18) for the entropy of the multinomial distribution and apply it to finite continuous mixtures. Substituting it into Equation (15), we obtain the following result in the limit of large $N$:
$$\langle F\rangle(N,T) \approx N k_B T\left(\ln(n_v\tilde{\Lambda}^3/\tilde{z}) - s_\Delta(\rho) - 1\right), \qquad (27)$$
where we use the Taylor–Maclaurin formula to first order to replace the discrete sum in the composition entropy $s(p)$ by $s_\Delta(\rho) \equiv -\int \rho(x)\ln\left(\rho(x)\Delta(x)\right)dx$.
We note from Equation (27) that $\langle F\rangle$ is extensive and, thus, finite continuous mixtures serve as valid composition models for probing the thermodynamic behaviours of general mixtures at finite and large system sizes. This is not too surprising given that, as stated above, what we call finite continuous models are special cases of finite discrete models with some additional regularity and smoothness requirements. Therefore, Equation (20) holds for such mixture models as well. A short numerical check of the continuum replacement $s(p) \to s_\Delta(\rho)$ is sketched below.
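The following sketch (Python with NumPy/SciPy assumed; variable names are ours) checks that the discrete composition entropy $s(p)$ of the binomial model of Equation (24) is well reproduced by the continuum form $s_\Delta(\rho)$ built from the normal density of Equation (26) with $\Delta = 1/m$:

```python
import numpy as np
from scipy.stats import binom

# Binomial composition model of Equation (24), approaching a normal density (Equation (25))
m, w = 200, 0.2
i = np.arange(m + 1)
p = binom.pmf(i, m, w)  # p(i)

# Discrete composition entropy s(p)
s_p = -np.sum(p[p > 0] * np.log(p[p > 0]))

# Continuum form s_Delta(rho) = -int rho(r) ln(rho(r) Delta) dr, with r = i/m and Delta = 1/m
r = i / m
rho = np.exp(-((r - w) ** 2) / (2 * w * (1 - w) / m)) / np.sqrt(2 * np.pi * w * (1 - w) / m)
s_delta = -np.sum(rho * np.log(rho / m + 1e-300)) / m  # Riemann sum over r
print(s_p, s_delta)  # the two values nearly coincide
```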

3.2. Mixing of Two Substances

We will now utilize the results obtained in the previous section to calculate the entropy change that occurs when two finite discrete mixtures, substances A and B, are mixed. We denote by $m$ the number of distinct possible chemical species in substances A and B, so that all particle types of substances A and B have a label $i$ taking values in the same sample space $\{1,\ldots,m\}$. We denote by $p_A(i)$ (resp. $p_B(i)$) the probability for substance A (resp. B) to have a particle of type $i$.
As is common in the literature, we consider a mixing scenario comprising two different equilibrium situations:
  • Firstly, let us consider a situation where a box of volume $V$ is separated into two equally sized compartments by a removable wall, with $N/2$ particles of substance A in, say, the left-hand-side compartment and $N/2$ particles of substance B in the right-hand-side compartment.
  • Secondly, the wall separating the substances is removed so as to let them intermix until equilibrium is reached.
This scenario is the usual one within which the Gibbs paradox of mixing tends to be discussed. The quantitative aspects may be affected if one is not mixing an equal amount of substances A and B, or further contributions to the entropy changes can be expected if the densities in the two compartments are not initially the same.

3.2.1. Free Energy before Removing the Wall

Before the wall is removed, each substance is confined to its own compartment, at equilibrium at the same temperature. Introducing the composition multivariate random variables $\mathcal{C}_A$ and $\mathcal{C}_B$, the free energy in each compartment is also a random variable (a function of either $\mathcal{C}_A$ or $\mathcal{C}_B$), which, upon averaging over composition realisations in each compartment, gives
$$\beta\langle F\rangle_{A/B}\!\left(\frac{N}{2},\beta\right) = \frac{N}{2}\left[\ln\frac{\tilde{\Lambda}_{A/B}^3}{\tilde{z}_{A/B}} - \ln\frac{V}{2} - s(p_{A/B})\right] + \ln\!\left(\frac{N}{2}!\right) + H\!\left(M_{\frac{N}{2},m,p_{A/B}}\,\middle|\,M_{\frac{N}{2},m,p_{A/B}}\right), \qquad (28)$$
where we have simply reproduced Equation (15) with indices A or B to specify whether the substance being looked at is characterised by $p_A$ or $p_B$.
In the end, from the additivity of the free energy, we obtain that
$$\langle F\rangle_{\mathrm{unmix}}(N,\beta) \equiv \langle F\rangle_A\!\left(\frac{N}{2},\beta\right) + \langle F\rangle_B\!\left(\frac{N}{2},\beta\right). \qquad (29)$$

3.2.2. Free Energy after Having Removed the Wall

Once the separating wall has been removed, substances A and B will mix and eventually reach equilibrium. After mixing, the number of particles of type $i$ is going to be associated with the random variable $\mathcal{N}_i^{\mathrm{mix}} = \mathcal{N}_i^A + \mathcal{N}_i^B$, where $\mathcal{N}_i^{A/B}$ is the random variable corresponding to the number of particles of type $i$ in substance A (resp. B). Given that each substance composition follows a multinomial distribution $M_{\frac{N}{2},m,p_{A/B}}$, it follows that
$$\langle \mathcal{N}_i^{\mathrm{mix}}\rangle = \frac{N}{2}\left(p_A(i) + p_B(i)\right) \equiv N p_{\mathrm{mix}}(i), \qquad (30)$$
where we identify $p_{\mathrm{mix}}$ as the ideal composition probability distribution of the mixture of A and B. In Equation (30), the averaging over composition realisations is done by using the joint probability measure $\mathrm{Prob}(C_A,C_B) = M_{\frac{N}{2},m,p_A}(C_A)\,M_{\frac{N}{2},m,p_B}(C_B)$, where the substances are considered completely independent.
The repeated mixing of $N/2$ particles from different composition realisations of substances A and B can be conceived as an actual protocol to sample $N$ particles from the effective composition $p_{\mathrm{mix}}$. The probability of obtaining a given composition $C_{\mathrm{mix}} = \{N_1^{\mathrm{mix}},\ldots,N_m^{\mathrm{mix}}\}$ then follows the multinomial distribution
$$M_{N,m,p_{\mathrm{mix}}}(C_{\mathrm{mix}}) = \frac{N!}{\prod_{i=1}^{m} N_i^{\mathrm{mix}}!}\,\prod_{j=1}^{m} p_{\mathrm{mix}}(j)^{N_j^{\mathrm{mix}}} = \frac{N!}{\prod_{i=1}^{m} (N_i^A+N_i^B)!}\,\prod_{j=1}^{m} p_{\mathrm{mix}}(j)^{N_j^A+N_j^B}, \qquad (31)$$
where the last equality makes the connection between $C_{\mathrm{mix}}$ and $C_{A/B}$ stemming from the aforementioned mixing protocol used to sample $p_{\mathrm{mix}}$. A minimal numerical sketch of this protocol follows.
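The sketch below (Python with NumPy assumed; names are ours) draws independent composition realisations of A and B, adds them, and checks the mean counts against Equation (30):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixture(N, pA, pB, samples=100000):
    """C_mix = C_A + C_B from independent multinomial realisations of the two substances."""
    CA = rng.multinomial(N // 2, pA, size=samples)
    CB = rng.multinomial(N // 2, pB, size=samples)
    return CA + CB

N, pA, pB = 100, [0.8, 0.2], [0.2, 0.8]
C_mix = sample_mixture(N, pA, pB)
p_mix = 0.5 * (np.asarray(pA) + np.asarray(pB))
print(C_mix.mean(axis=0), N * p_mix)  # Equation (30): <N_i^mix> = N p_mix(i)
```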
For two given composition realisations, $C_A$ and $C_B$, each comprising $N/2$ particles, we also have that, after mixing, the canonical partition function reads
$$Z_{\mathrm{mix}}(N,\beta,C_A,C_B) = V^N \prod_{i=1}^{m}\frac{1}{(N_i^A+N_i^B)!}\,\prod_{j=1}^{m}\left(\frac{z_j}{\Lambda_j^3}\right)^{N_j^A+N_j^B}. \qquad (32)$$
Substituting Equation (31) into Equation (32), we have
$$Z_{\mathrm{mix}}(N,\beta,C_A,C_B) = V^N\,\frac{M_{N,m,p_{\mathrm{mix}}}(C_A+C_B)}{N!\,\prod_{i=1}^{m} p_{\mathrm{mix}}(i)^{N_i^A+N_i^B}}\,\prod_{j=1}^{m}\left(\frac{z_j}{\Lambda_j^3}\right)^{N_j^A+N_j^B}, \qquad (33)$$
where $C_A + C_B \equiv \{N_1^A+N_1^B,\ldots,N_m^A+N_m^B\}$.
In the end, upon averaging over the composition realisations $C_A$ and $C_B$, we have
$$\beta\langle F\rangle_{\mathrm{mix}}(N,\beta) = -N\ln V + \ln N! - N s(p_{\mathrm{mix}}) + \frac{N}{2}\ln\frac{\tilde{\Lambda}_A^3\tilde{\Lambda}_B^3}{\tilde{z}_A\tilde{z}_B} + H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{N,m,p_{\mathrm{mix}}}\right), \qquad (34)$$
where it is understood that the last term involves sums over the values taken by $N_i^A$ and $N_i^B$.

3.2.3. Entropy Change

Given that the energy in the system does not change upon mixing (because the species are non-reactive and non-interacting), we now define the entropy change upon mixing as $\Delta S_{\mathrm{mix}} \equiv -\left(\langle F\rangle_{\mathrm{mix}} - \langle F\rangle_{\mathrm{unmix}}\right)/T$, which gives:
$$\Delta S_{\mathrm{mix}}(N,\beta) = k_B \ln\frac{2^N\left(\frac{N}{2}!\right)^2}{N!} + N k_B D_{JS}(p_A|p_B) + k_B\,\Delta H, \qquad (35)$$
where $\Delta H \equiv -H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{N,m,p_{\mathrm{mix}}}\right) + \sum_{i=A,B} H\!\left(M_{\frac{N}{2},m,p_i}\,\middle|\,M_{\frac{N}{2},m,p_i}\right)$ and where $D_{JS}(p_A|p_B)$ is the Jensen–Shannon divergence introduced in Equation (8).
Equation (35) constitutes the main result of this paper and gives an exact realisation-independent expression of the classical mixing entropy of equal amounts of any two ideal-gas-like mixtures, A and B, for any system size N. A similar expression has already been found for binary mixtures in [21], but its generalisation to any extensive mixture model is, to our knowledge, new. It is worth going through the meaning of each of the terms in the entropy change expression.
  • $\ln\frac{2^N\left(\frac{N}{2}!\right)^2}{N!}$ corresponds to the partitioning entropy, i.e., the entropy gained by releasing initially confined particles into double the initial volume. Note that this term is oblivious to the particle type and is, therefore, always positive, even if the substances are identical.
  • $D_{JS}(p_A|p_B)$ measures a specific, bounded, squared distance between the substance compositions $p_A$ and $p_B$. We shall propose an additional interpretation for this term a bit later.
  • $\Delta H$ represents the entropy change owing to composition realisations, i.e., to the fact that, upon repeating experiments, one is bound to obtain sampled empirical compositions that differ from the ideal compositions expressed by the probability distributions $p_A$ and $p_B$.
Unfortunately, there is no closed-form expression for $\Delta H$, which involves the entropy and cross-entropy of multinomial distributions. In practice, each of the terms in $\Delta H$ needs to be evaluated numerically for given composition models. For large $N$, however, it can be shown (cf. Appendix C) that $\Delta H \sim O(\ln N)$.
For large $N$, we also have that the first term in Equation (35) behaves, by Stirling’s approximation, as $\ln\frac{2^N\left(\frac{N}{2}!\right)^2}{N!} \approx \frac{1}{2}\ln\frac{\pi N}{2} \sim O(\ln N)$. Consequently, for large $N$, i.e., for the system sizes initially considered by Gibbs, the only contribution left for the entropy of mixing (which happens to be extensive) is
$$\Delta S_{\mathrm{mix}}^{\mathrm{Gibbs}} = N k_B D_{JS}(p_A|p_B). \qquad (36)$$
Note that the Jensen–Shannon divergence $D_{JS}(p_A|p_B)$ is non-negative, vanishing only when $p_A = p_B$, and is bounded from above by $\ln 2$. Figure 1 illustrates the value of the Gibbs entropy of mixing for two substances, A and B, corresponding to a finite discrete model of the kind described in Equation (24). Contrary to the sentence by Gibbs quoted in Section 1, we see that the Gibbs mixing entropy varies continuously from $\ln 2$ to 0 as the substance compositions become more similar; a short numerical sketch of this behaviour is given below.
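The sketch below (Python with NumPy/SciPy assumed; the Jensen–Shannon function is repeated so the snippet is self-contained) qualitatively reproduces the continuous variation of Figure 1 for the binomial composition model of Equation (24):

```python
import numpy as np
from scipy.stats import binom

def d_js(pA, pB):
    """Jensen-Shannon divergence of Equation (8)."""
    pC = 0.5 * (pA + pB)
    def d_kl(p, q):
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return 0.5 * d_kl(pA, pC) + 0.5 * d_kl(pB, pC)

m, wA = 70, 0.1
i = np.arange(m + 1)
pA = binom.pmf(i, m, wA)
for wB in [0.1, 0.15, 0.3, 0.5, 0.7, 0.9]:
    pB = binom.pmf(i, m, wB)
    print(wB, d_js(pA, pB))  # rises continuously from 0 towards ln 2 ~ 0.693
```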

4. Discussion

In 1876, Gibbs derived results that are generally undisputed; he showed that the mixing entropy of two different substances would amount to $k_B \ln 2$ per particle, irrespective of the degree of similarity between the substances, but would discontinuously drop to 0 as soon as they became exactly identical.
In this article, we have developed an approach to address the thermodynamic behaviour of general mixtures. This approach relies on the characterisation of the composition of a substance by an a priori probability distribution p acting on the space of all identifiable species. For a given composition probability p, any real system can only sample from that distribution and provide finite-size realisations of it. Thermodynamic quantities are retrieved by averaging, over composition realisations, the Helmholtz free energy, i.e., the Massieu potential for ensembles at fixed temperature and system size. We found that a restricted class of composition models—that of finite discrete mixtures—was suitable for describing the extensive behaviour of a given substance. These models also encompass the possibility of characterising the particle attribute with a continuous variable. On the contrary, it was found that attempting to characterise a continuous mixture by setting the number of particle types to be much larger than the number of particles N in the system was inconsistent with thermodynamics. One intuitive explanation of this failure is that such infinite discrete mixture models do not allow any finite-size empirical realisation of size N to actually be representative of the underlying distribution it is sampled from, to the extent that all realisations appear to be completely different from, and therefore incommensurate with, one another. It must be noted that using intervals of a continuous random variable to group many species into a single effective species is not the only way of dealing with the thermodynamically invalid behaviour of what we call infinite discrete mixture models. The crucial aspect is mostly to reduce the dimensionality of the space within which one characterises the composition. This can also be done via a dimensional reduction approach that relies on determining a few moments of the underlying probability distribution of a specific attribute. Readers interested in such an approach may look at the work developed in [18,19], for instance.
For the selected class of models, it was found that, at a fixed composition, the thermodynamic behaviour is indistinguishable from that of a homogeneous one-component system, as already anticipated by Gibbs in the 1870s.
Applying this new formalism to a mixing scenario of equal amounts of two different substances, a general exact expression was derived for the classical mixing entropy of ideal substances. It was found that (a) even in cases where the substances are identical (same underlying probability distribution), some contributions to the mixing entropy are non-zero, namely the partitioning entropy and the composition realisation entropy, and (b) in the large system size limit, this exact expression converges to a universal quantity that we call the Gibbs mixing entropy, which is extensive and proportional to the squared distance between the two ideal probability distributions characterising the substances being mixed. This extensive entropy of mixing was shown to vary continuously from 0 to $k_B \ln 2$ per particle as the degree of dissimilarity between the substances was increased. This allowed us to provide a resolution to the paradoxical violation of the continuity principle pointed out by Duhem in the finding of Gibbs.
There are various open questions left to answer with regard to the view developed in the present paper and which will be left for future works: (1) while it is mostly straightforward to extend the proposed formalism to the semi-classical limit of quantum statistical mechanics of independent particles (essentially replacing discrete sums by integrals), it is much less obvious to the author as to how to implement it for quantum statistical situations, where the fermionic or bosonic characters of the particles matter. By this, we mean that, as discussed in [4], it is unclear how to make the fermion/boson character of elementary and composite particles compatible with both the existing body of works on polydisperse systems and the present one. (2) Many real substances are also (at least partially) reactive substances and, therefore, a given particle may not maintain a specific identity at all times. Some stationary compositions may be obtained but, in some systems with reactive processes occurring over time scales larger than thermodynamic experiments, some ergodicity considerations with regard to the present approach may be warranted.
In conclusion, in this paper, we have shown that what we have referred to as the “real” Gibbs paradox can find a resolution within classical statistical mechanics by employing a composition-based description of general substances and examining averages over finite-size composition realisations.

Funding

This research was partially funded by the Leverhulme Trust project grant RPG-2021-039.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

This section focuses on numerical testing to determine the extent to which Equations (18) and (22) can serve as useful approximate expressions for the entropy of the multinomial distribution $H(M_{N,m,p}|M_{N,m,p})$. In Figure A1, we test them on a simple uniform composition model, i.e., $p(i) = 1/m$, and propose various cases that depend on the values of $m$ and $N$; a self-contained script for this comparison is sketched below.
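The comparison of Figure A1 can be reproduced with a short script of the following kind (Python with NumPy/SciPy assumed; function names are ours):

```python
import numpy as np
from scipy.stats import binom
from scipy.special import gammaln

def H_exact(N, p):
    """Entropy of the multinomial via the binomial rewriting (Equations (16) and (17))."""
    Ni = np.arange(N + 1)
    s_p = -np.sum(p * np.log(p))
    third = sum(np.sum(binom.pmf(Ni, N, pi) * gammaln(Ni + 1)) for pi in p)
    return -gammaln(N + 1) + N * s_p + third

def H_eq18(N, p):
    """Large-N approximation of Equation (18)."""
    m = len(p)
    return 0.5 * (m - 1) * np.log(2 * np.pi * N * np.e) + 0.5 * np.sum(np.log(p))

def H_eq22(N, p):
    """m >> N approximation of Equation (22)."""
    return -gammaln(N + 1) - N * np.sum(p * np.log(p))

m = 20
p = np.full(m, 1 / m)  # uniform composition model
for N in [20, 60, 200]:
    print(N, H_exact(N, p), H_eq18(N, p), H_eq22(N, p))
```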
Figure A1. Entropy of the multinomial distribution. Numerical comparison of the exact entropy of the multinomial distribution (blue open circles) with Equation (18) (green open diamonds) and Equation (22) (orange open triangles) for a uniform composition model, where $p(i) = 1/m$, exploring various values of $m$ as a function of $N$. (a) $m = 20$ shows good agreement between Equation (18) and the exact entropy from $N \approx 60 = 3m$ onwards. (b) For the case of $m = 100$ and $N \approx 100$, neither Equation (18) nor Equation (22) provides a good approximation to the exact entropy; however, the two expressions appear to provide reliable upper and lower bounds, respectively. (c,d) When the value of $m$ is larger than $N$, Equation (22) is a good approximation for the entropy of the multinomial distribution. In (c), the deviation from a good fit starts to become apparent when $N \approx 100 = m/5$.

Appendix B

This section illustrates the well-known normal distribution limit of the binomial distribution in the context of a composition model following Equation (24). In particular, it shows the main idea underlying the mathematical justification of the introduced notion of finite continuous mixtures by comparing Equations (24) and (26). A numerical implementation for different values of $m$ and $w$ is given in Figure A2, and a compact script is sketched below.
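A compact version of this comparison (Python with NumPy/SciPy assumed) is:

```python
import numpy as np
from scipy.stats import binom, norm

m, w = 100, 0.2
i = np.arange(m + 1)
p_binom = binom.pmf(i, m, w)  # Equation (24)
p_norm = norm.pdf(i, loc=m * w, scale=np.sqrt(m * w * (1 - w)))  # Equation (25)
print(np.max(np.abs(p_binom - p_norm)))  # already small for m = 100
```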
Figure A2. Normal limit of the binomial distribution. Comparison of Equation (24) (blue dots) and Equation (26) (orange line) for $w = 0.2$ and various values of $m$: (a) $m = 50$, (b) $m = 100$, (c) $m = 500$, and (d) $m = 1000$. Even for $m = 50$, the normal distribution is already a good approximation of the binomial probability distribution, and the approximation improves as $m$ is increased.

Appendix C

Given that there is no closed-form expression for $\Delta H$ in Equation (35), we are bound to find limiting expressions for relatively large values of $N$. This can be done by exploiting a few properties of $\Delta H$:
  • From Jensen’s inequality [23], $\langle f(X)\rangle \geq f(\langle X\rangle)$ for a convex function $f$, it follows that, by choosing $f(x) = -\ln x$ and
    $$X = \frac{M_{N,m,p_{\mathrm{mix}}}(C_A+C_B)}{M_{\frac{N}{2},m,p_A}(C_A)\,M_{\frac{N}{2},m,p_B}(C_B)}, \qquad (A1)$$
    we have
    $$-H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{N,m,p_{\mathrm{mix}}}\right) + H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\right) \geq 0 \;\Longrightarrow\; \Delta H \geq 0. \qquad (A2)$$
  • It is possible to estimate the order of magnitude of $\Delta H$ in the large $N$ limit. Indeed, given that the entropy of a multinomial distribution is positive, it automatically follows that
    $$\Delta H \leq H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{N,m,p_{\mathrm{mix}}}\right) + \sum_{i=A,B} H\!\left(M_{\frac{N}{2},m,p_i}\,\middle|\,M_{\frac{N}{2},m,p_i}\right) \leq 2\sum_{i=A,B} H\!\left(M_{\frac{N}{2},m,p_i}\,\middle|\,M_{\frac{N}{2},m,p_i}\right), \qquad (A3)$$
    where we use Equation (A2) and the fact that $H\!\left(M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\,\middle|\,M_{\frac{N}{2},m,p_A}\otimes M_{\frac{N}{2},m,p_B}\right) = \sum_{i=A,B} H\!\left(M_{\frac{N}{2},m,p_i}\,\middle|\,M_{\frac{N}{2},m,p_i}\right)$ for the latter inequality. From Equation (18), it follows that $\Delta H \sim O(\ln N)$ for large $N$. A numerical check of this scaling is sketched below.
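The sketch below (Python with NumPy/SciPy assumed; the parameter values are illustrative) evaluates $\Delta H$, as defined below Equation (35), exactly by enumeration for a two-species model ($m = 2$), where each multinomial reduces to a binomial:

```python
import numpy as np
from scipy.stats import binom

def delta_H(N, pA, pB):
    """Exact Delta H (defined below Equation (35)) for m = 2 by full enumeration."""
    half = N // 2
    p_mix = 0.5 * (pA + pB)
    k = np.arange(half + 1)
    lPA, lPB = binom.logpmf(k, half, pA), binom.logpmf(k, half, pB)
    PA, PB = np.exp(lPA), np.exp(lPB)
    # entropies of the two substance composition distributions
    H_self = -np.sum(PA * lPA) - np.sum(PB * lPB)
    # cross-entropy of the joint (C_A, C_B) against M_{N,2,p_mix}(C_A + C_B)
    joint = np.outer(PA, PB)
    ln_M_mix = binom.logpmf(k[:, None] + k[None, :], N, p_mix)
    H_cross = -np.sum(joint * ln_M_mix)
    return H_self - H_cross

for N in [10, 50, 200, 1000]:
    print(N, delta_H(N, pA=0.8, pB=0.2))  # grows only logarithmically with N
```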

References

  1. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Ox Bow Press: Woodbridge, CT, USA, 1981. [Google Scholar]
  2. Jaynes, E.T. The Gibbs’ paradox. In Proceedings of the Maximum Entropy and Bayesian Methods; Smith, C., Erickson, G., Neudorfer, P., Eds.; Kluwer Academic: Norwell, MA, USA, 1992; pp. 1–22. [Google Scholar]
  3. Hastings, C.S. Bibliographical Memoir of Josiah Willard Gibbs. In Proceedings of the Bibliographical Memoirs, Part of Volume VI; National Academy of Sciences: Washington, DC, USA, 1909. [Google Scholar]
  4. Paillusson, F. Gibbs’ paradox according to Gibbs and slightly beyond. Mol. Phys. 2018, 116, 3196. [Google Scholar] [CrossRef]
  5. Gibbs, J.W. On the Equilibrium of Heterogeneous Substances; Connecticut Academy of Arts and Sciences: New Haven, CT, USA, 1876; p. 108. [Google Scholar]
  6. Darrigol, O. The Gibbs paradox: Early history and solutions. Entropy 2018, 20, 443. [Google Scholar] [CrossRef] [PubMed]
  7. Kondepudi, D.; Prigogine, I. Modern Thermodynamics, 2nd ed.; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2014. [Google Scholar]
  8. Sator, N.; Pavloff, N. Physique Statistique, 1st ed.; Vuibert: Paris, France, 2016. [Google Scholar]
  9. Van Kampen, N.G. The Gibbs’ paradox. In Proceedings of the Essays in Theoretical Physics: In Honor of Dirk ter Haar; Parry, W., Ed.; Pergamon: Oxford, UK, 1984. [Google Scholar]
  10. Cates, M.E.; Manoharan, V.N. Testing the Foundations of Classical Entropy: Colloid Experiments. Soft Matter 2015, 11, 6538. [Google Scholar] [CrossRef] [PubMed]
  11. Bérut, A.; Arakelyan, A.; Petrosyan, A.; Ciliberto, S.; Dillenschneider, R.; Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 2012, 483, 187. [Google Scholar] [CrossRef] [PubMed]
  12. Frenkel, D. Why Colloidal Systems Can be described by Statistical Mechanics: Some not very original comments on the Gibbs’ paradox. Mol. Phys. 2014, 112, 2325. [Google Scholar] [CrossRef]
  13. Paillusson, F.; Pagonabarraga, I. On the role of compositions entropies in the statistical mechanics of polydisperse systems. J. Stat. Mech. 2014, 2014, P10038. [Google Scholar] [CrossRef]
  14. Flory, P.J. Principles of Polymer Chemistry; Cornell University Press: Ithaca, NY, USA, 1953. [Google Scholar]
  15. Salacuse, J.J. Random systems of particles: An approach to polydisperse systems. J. Chem. Phys. 1984, 81, 2468. [Google Scholar] [CrossRef]
  16. Sollich, P. Projected free energy for polydisperse phase equilibria. Phys. Rev. Lett. 1998, 80, 1365. [Google Scholar] [CrossRef]
  17. Warren, P.B. Combinatorial entropy and the statistical mechanics of polydispersity. Phys. Rev. Lett. 1998, 80, 1369. [Google Scholar] [CrossRef]
  18. Sollich, P. Predicting phase equilibria in polydisperse systems. J. Phys. Condens. Matter 2002, 14, 79–117. [Google Scholar] [CrossRef]
  19. Sollich, P.; Warren, P.B.; Cates, M.E. Moment Free Energies for Polydisperse Systems. In Advances in Chemical Physics; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2001; Volume 116, pp. 265–336. [Google Scholar]
  20. Endres, D.M.; Schindelin, J.E. A new metric for probability distributions. IEEE Trans. Inf. Theory 2003, 49, 1858. [Google Scholar] [CrossRef]
  21. Paillusson, F. On the Logic of a Prior-Based Statistical Mechanics of Polydisperse Systems: The Case of Binary Mixtures. Entropy 2019, 21, 599. [Google Scholar] [CrossRef] [PubMed]
  22. Cichón, J.; Golebiewski, Z. On Bernoulli Sums and Bernstein Polynomials. Discret. Math. Theor. Comput. Sci. 2012, 179–190. [Google Scholar] [CrossRef]
  23. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193. [Google Scholar] [CrossRef]
Figure 1. Dimensionless Gibbs mixing entropy per particle. The main plots represent $D_{JS}(p_A|p_B)$ for two probability distributions $p_A$ (blue) and $p_B$ (orange), which follow the composition model of Equation (24), with $m = 70$ and various values of $w_A$ and $w_B$ indicated in the figure. The similarity between the distributions is characterised by the absolute difference $|w_A - w_B|$ between the $w$ parameters. When it is zero (the probability graphs lie on top of one another), the substances are identical in the sense that they have the same composition probabilities. The figures show that the dimensionless Gibbs mixing entropy per particle varies continuously from 0 to $\ln 2 \approx 0.69$ as the similarity between the compositions $p_A$ and $p_B$ is decreased. The exact details of how it does so depend on how $|w_A - w_B|$ is varied: (a) both distributions move closer to $w = 0.5$ in a symmetric fashion; (b) $w_A = 0.1$ is fixed, with $w_B$ varying from 0.1 to 0.9.
