1. Introduction
The entropy multiparticle correlation expansion (MPCE) is an elegant statistical-mechanical formula that allows the total entropy of a many-particle system to be reconstructed term by term, each step of the summation adding the integrated contribution from spatial correlations between a specified number of particles.
The original derivation of the entropy MPCE is found in a book by H. S. Green (1952) [1]. Green's expansion applies to the canonical ensemble (CE). In 1958, Nettleton and M. S. Green derived an apparently different expansion valid in the grand-canonical ensemble (GCE) [2]. It took the ingenuity of Baranyai and Evans to realize, in 1989, that the CE expansion can indeed be reshuffled in such a way as to become formally equivalent to the GCE expansion [3].
A decisive step forward was eventually taken by Schlijper [4] and An [5], who highlighted the similarity of the entropy formula to a cumulant expansion and its close relationship with the cluster variation method (see, e.g., [6]). Other papers that emphasize, in various ways, the combinatorial content of the entropy MPCE are references [7,8,9,10].
Since the very beginning it has been clear that the successive terms in the entropy expansion for a homogeneous fluid are not all of equal importance. In particular, the contributions from correlations between more than two particles are only sizable at moderate and higher densities. However, while the two-body entropy is easily accessed in a simulation, computing the higher-order entropy terms is a prohibitive task (see, however, reference [11]). Hence, the only viable method to compute the total entropy in a simulation remains thermodynamic integration (see, e.g., [12]). The practical interest in the entropy expansion has thus shifted towards the residual multiparticle entropy (RMPE), defined as the difference between the excess entropy and the two-body entropy. The RMPE is a measure of the impact of non-pair multiparticle correlations on the entropy of the fluid. For hard spheres, Giaquinta and Giunta observed that the RMPE changes sign from negative to positive very close to freezing [13]. At low densities the RMPE is negative, reflecting a global reduction (largely driven by two-body correlations) of the phase space available to the system as compared to the ideal gas. The change of sign of the RMPE close to freezing indicates that fluid particles, constrained at high enough densities by more stringent packing requirements, start exploring, this time in a cooperative way, a different structural condition on a local scale, a prelude to crystallization on a global scale. Since the original observation in [13], a clear correspondence between the RMPE zero and the ultimate threshold for spatial homogeneity in the system has been found in many simple and complex fluids [14,15,16,17,18,19,20,21,22,23,24], thereby leading to the belief that the vanishing of the RMPE is a signature of an impending structural or thermodynamic transition of the system from a less ordered to a more spatially organized condition (freezing is just one example of many). Albeit empirical, this entropic criterion is a valid alternative to the far more demanding exact free-energy methods when a rough estimate of the transition point is deemed sufficient. For a simple discussion of the interplay between entropy and ordering, the reader is referred to reference [25]; see instead references [26,27] for general considerations about the entropy of disordered solids.
A pertinent question to ask is: what happens to the RMPE on the solid side of the phase boundary, considering that an entropy expansion also holds for the crystal? This is precisely the problem addressed in this paper. Can the scope of the entropic criterion be extended in such a way that it also applies to melting? As it turns out, we can offer no definite answer to this question, since theory alone does not go far enough and we ran into a serious computational bottleneck: while the formulae are clear and the numerical procedure is straightforward, it is extremely hard to obtain reliable data for the two-body entropy of a three-dimensional crystal. We have only carried out a limited test on a triangular crystal of hard disks, but our results are affected by finite-size artifacts that make them inconclusive. Nevertheless, a few firm points have been established: (1) the approximate entropy expressions obtained by truncating the MPCE at a given order can all be derived from an explicit functional of the correlation functions up to that order; (2) the one-body entropy for a crystal is an extensive quantity (the same is held to be true for the two-body entropy, but our arguments are not sufficient for a proof); (3) the peaks present in the crystal one-body density have a nearly Gaussian shape; (4) we have also clarified the role of lattice symmetries in dictating the structure of the two-body density, which is explicitly determined at zero temperature.
This paper is organized as follows. In Section 2 we review the formalism of the entropy expansion for homogeneous fluids and provide the basic tools needed for its extension to crystals. Then, in Section 3 we exploit the symmetries of the one- and two-body density functions to predict the scaling of the one- and two-body entropies with the size of the crystal. The final Section 4 is reserved for concluding remarks.
2. Derivation of the Entropy MPCE
In this Section, we collect a number of well-established results on the entropy MPCE, with the sole purpose of setting the language and notation for the rest of the paper. First, we recall the derivation of the entropy formula for a one-component system of classical particles in the canonical ensemble. Such an ensemble choice is by no means restrictive since, as we show next, it is always possible to take advantage of the sum rules obeyed by the canonical correlation functions to arrange the entropy MPCE in an ensemble-invariant form. Then, in the following Section we present an application of the formalism to crystals.
The canonical partition function of a system of N classical particles of mass m at temperature T is
, where the ideal and excess parts are given by
In Equation (1), V is the system volume,
,
is the thermal wavelength, and
is an arbitrary potential energy. As the particles are identical, for each
the cumulative sum of all n-body terms in U is invariant under permutations of particle coordinates (we can also say that U is
-invariant,
being the symmetric group of the permutations on N symbols). The CE average of a function f of coordinates reads
where
is the configurational part of the canonical density function. Finally, the excess entropy
reads
We define a set of marginal density functions (MDFs) by
Owing to
-invariance of
, it makes no difference which vector radii are integrated out in Equation (4); hence,
is
-invariant (for example,
). The following properties are obvious:
Then, the n-body density functions (DFs), for
, can be expressed as
where the sum in (6) is carried out over all n-tuples of distinct particles (for example, the sum for
contains
terms). We note that
and
if no one-body term is present in U, i.e., if no external potential acts on the particles (then U is translationally invariant).
is the probability density of finding a particle in
; hence,
is the number density at
. Similarly,
is the probability density of finding one particle in
and another particle in
; hence,
is the density of the number of particle pairs at
. As
increasingly departs from
, the positions of two particles become less and less correlated, until
at infinite distance. We stress that this cluster property holds in full generality, even for a broken-symmetry phase.
The n-body reduced density functions, for
, read
These functions fulfill the property
which also holds for
if we define
. For a homogeneous fluid,
. From now on, we adopt the shorthand notation
and
. Moreover, any integral of the kind
is hereafter denoted as
. For example, Equations (
3) and (
4) indicate that
.
To build up the CE expansion term by term, our strategy is to consider a progressively larger number of particles. For a one-particle system, the excess entropy in units of the Boltzmann constant is
, leading to a first-order approximation to the excess entropy of an N-particle system in the form
(that is, each particle contributes to the entropy independently of the other particles). For a two-particle system, the excess entropy is
plus a remainder
, given by:
Equation (9) suggests a second-order approximation for
, where each distinct pair of particles contributes the same two-body residual term to the entropy:
Notice that Equation (10) is exact for
, i.e.,
. Similarly, for a three-particle system the excess entropy is
plus a remainder
:
Hence, a third-order approximation follows for
in the form
Again,
. Equation (12) reproduces the first three terms in the rhs of Equation (5.9) of reference [8], and one may legitimately expect that the further terms in the entropy expansion are similarly obtained by arguing for
like we did for
(see the proof in [8]).
The general entropy formula finally reads:
This equation is trivially correct since, for any finite sequence
of numbers,
To prove (14), it is sufficient to observe that, for each fixed
, the coefficient of
in the above sum is
A more compact entropy formula is
which follows from
The entropy expansion, (13) or (16), is only valid in the CE. Eliminating
in favor of
by Equation (7), an overall constant comes out of the integral in Equation (13), namely,
which, by Equation (15), equals
; this term exactly cancels an identical term present in the ideal-gas entropy. In the end, a modified entropy MPCE emerges:
Notice that the first term in the rhs differs by N from the ideal-gas entropy expression in the thermodynamic limit. In order that Equation (19) conforms to the GCE entropy expansion, for each n a suitable fluctuation integral of value
should be summed to (and subtracted from) the n-th term in the expansion. For example, using Equations (7) and (8) the second-order term in (13) can be rewritten as
Overall, the extra constants appearing in each term of the entropy formula (for example, the quantity
in Equation (20)) add to N. By absorbing such an N in the first term of (19) we recover the ideal-gas entropy in the thermodynamic limit, and the CE expansion becomes formally identical to the grand-canonical MPCE [9].
In Appendix A we present another derivation of the entropy formula in the CE, which is closer in spirit to the one given by H. S. Green. In parallel, we show that the approximation obtained by truncating the MPCE at a given order can be derived from a modified
distribution, which is an explicit functional of the spatial correlation functions up to that order.
3. The First Few Terms in the Expansion of Crystal Entropy
The entropy expansion in the CE is formally identical for a fluid system and a crystal, since the origin of (13) is purely combinatorial. However, the DFs of the two phases are radically different: most notably, while
and
for a homogeneous fluid, the one-body density is spatially structured for a crystal—at least once the degeneracy due to translations and point-group operations has been lifted; we stress that
only provided that a specific determination of the crystal is taken, since otherwise
also in the “delocalized” crystalline phase. In practice, in order to fix a crystal in space we should imagine to apply a suitable symmetry-breaking external potential, whose strength is sent to zero after statistical averages have been carried out (in line with Bogoliubov’s advice to interpret statistical averages of broken-symmetry phases as quasiaverages [
28], which amounts to sending the strength of the symmetry-breaking potential to zero only after the thermodynamic limit has been taken). A way to accomplish this task is to constrain the position of just one particle. When periodic boundary conditions are applied, keeping one particle fixed will be enough to break the continuous symmetries of free space. As
N grows, the effect of the external potential becomes weaker and weaker, since it does not scale with the size of the system.
3.1. One-Body Entropy
A reasonable form of one-body density for a three-dimensional Bravais crystal without defects is the Tarazona ansatz [29]:
where
is a temperature-dependent parameter, the R's are direct-lattice vectors, and the G's are reciprocal-lattice vectors (recall that
with
and
Equation (21) is a rather generic form of crystal density, which we have recently applied in a different context as well [30]. More generally, the one-body density appropriate to a perfect crystal must obey
for all
, and is thus necessarily of the form
Since
, it soon follows
. Calling
a primitive cell and
its volume,
as
(by the Riemann–Lebesgue lemma). In real space, a legitimate
function is
with
(integration bounds are left unspecified when the integral is over a macroscopic V). In the zero-temperature/infinite-density limit, particles sit at the lattice sites and the one-body density then becomes
Equation (
23) is also recovered from Equation (
21) in the
limit.
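To make the Gaussian-peak picture concrete, here is a minimal numerical sketch (our own illustration, not the paper's code): a one-body density built, in the spirit of the Tarazona ansatz, as a sum of normalized Gaussians of inverse squared width alpha centered at lattice sites. The simple-cubic lattice, its spacing, and the alpha value are illustrative assumptions.

```python
import numpy as np

def rho1(r, sites, alpha):
    """One-body density as a sum of normalized 3D Gaussians at the lattice sites."""
    d2 = ((sites - r) ** 2).sum(axis=1)
    return (alpha / np.pi) ** 1.5 * np.exp(-alpha * d2).sum()

# Illustrative simple-cubic lattice with spacing a = 1
a = 1.0
sites = a * np.array([[i, j, k] for i in range(-3, 4)
                                for j in range(-3, 4)
                                for k in range(-3, 4)], dtype=float)

alpha = 50.0                                     # strongly localized peaks
peak = rho1(np.zeros(3), sites, alpha)           # density at a lattice site
mid = rho1(np.full(3, 0.5 * a), sites, alpha)    # density at the cell body center
```

For large alpha the density is sharply peaked on the sites: the on-site value is essentially the single-Gaussian prefactor, while the value midway between sites is negligible, consistent with the delta-function limit recovered in Equation (23).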
For a crystalline solid, the one-body entropy, that is, the first term in the expansion of excess entropy, is (in units of
):
One may wonder whether the integral in (24) is
in the infinite-size limit. The answer is affirmative, and a simple argument goes as follows. Let
be
; if
is strongly localized near
, then
in the cell around
and
. Actually, we can provide a rigorous proof that
is negative-semidefinite and its absolute value does not grow faster than N. Using
for
and
for any
, we obtain
To estimate
we employ the one-body density in (21), which is sufficiently generic for our purposes:
(the above result is nothing but Parseval's theorem as applied to (21)). The sum in the rhs of Equation (26) is the three-dimensional analog of a Jacobi theta function (see, e.g., [31]), whose value is
for
. Therefore, it follows from Equations (25) and (26) that the one-body entropy is at most
.
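The building block of the theta-function lattice sum is the overlap integral of two Gaussian peaks separated by a lattice vector. A quick one-dimensional numerical check of this Gaussian-overlap identity (our own illustration; the paper works in three dimensions, where the same factorization applies componentwise):

```python
import numpy as np

# Overlap of two normalized Gaussian peaks g(x) = sqrt(alpha/pi) exp(-alpha x^2)
# separated by R; the closed form sqrt(alpha/(2 pi)) exp(-alpha R^2 / 2) is the
# generic term of the (one-dimensional) theta-function lattice sum.
alpha, R = 30.0, 0.7
g = lambda x: np.sqrt(alpha / np.pi) * np.exp(-alpha * x ** 2)

x = np.linspace(-5.0, 5.0, 200001)
numeric = np.trapz(g(x) * g(x - R), x)                       # quadrature
closed = np.sqrt(alpha / (2.0 * np.pi)) * np.exp(-alpha * R ** 2 / 2.0)
```

The agreement between `numeric` and `closed` confirms that the self-overlap integral of the crystal density reduces to a rapidly converging sum over lattice separations.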
3.2. Two-Body Entropy
We now move to the problem of evaluating the two-body entropy
for a crystal. For a homogeneous fluid,
is an extensive quantity which, in
units, is equal to
For a crystal, we have from Equation (20) that
As
for
,
is usually negative and zero exclusively for
. In terms of density functions,
is written as
We show below that Equation (29) can be expressed as a radial integral, i.e., in a way similar to the two-body entropy for a fluid.
We can assign a radial structure to crystals by appealing to a couple of functions introduced in [32], namely
and
where the inner integrals are over the direction of r. For a homogeneous fluid,
and
. The authors of reference [32] have sketched the profile of
and
for a crystal; both functions show narrow peaks at neighbor positions in the lattice, with an extra peak at zero distance for
, and the oscillations persist up to large distances. The following sum rules hold (cf. Equation (8) for
):
and
When the latter two formulae are rewritten as
it becomes apparent that both
and
decay to 1 at infinity. Similarly, we define:
which obviously vanishes at infinity. While
for a homogeneous fluid, we expect that
in the crystal. Putting Equations (
30)–(
35) together, we arrive at
Even though the integrand vanishes at infinity,
only if the envelope of
decays faster than
(
in two dimensions). A slower decay may be sufficient if
is computed through the first integral in (36). For a spherically symmetric interaction potential, the excess energy (i.e., the canonical average of the total potential energy U) can also be written as a radial integral:
For the one-body density in (21),
can be obtained in closed form. First we have:
Then, multiplying by
and finally integrating over
we arrive at
We see that the large-distance decay of
is usually slow, and the same will occur for
since
for large r. In two dimensions, the one-body density and
functions respectively read:
where
is a Bessel function of the first kind. Since the envelope of
maxima decays as
at infinity, we see that the asymptotic vanishing of
is slower in two dimensions than in three.
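The stated envelope decay of the Bessel function J0 can be checked numerically. A sketch (our own illustration; J0 is evaluated through its integral representation rather than a special-function library, and the asymptotic amplitude sqrt(2/(pi x)) is the standard large-argument form):

```python
import numpy as np

def J0(x):
    """Bessel function of the first kind via J0(x) = (1/pi) ∫_0^pi cos(x sin t) dt."""
    t = np.linspace(0.0, np.pi, 20001)
    return np.trapz(np.cos(x * np.sin(t)), t) / np.pi

# Compare the peak height of |J0| in a window of length pi (one asymptotic
# half-period) with the envelope amplitude sqrt(2/(pi x)).
ratios = []
for x0 in (50.0, 100.0, 200.0):
    window = np.linspace(x0, x0 + np.pi, 400)
    peak = max(abs(J0(x)) for x in window)
    ratios.append(peak / np.sqrt(2.0 / (np.pi * x0)))
```

The ratios stay close to 1 at increasing x0, illustrating the r^{-1/2} decay of the envelope of the J0 maxima quoted above.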
Equation (39) has a definite limit for
, corresponding to zero temperature. Indeed, using the Poisson summation formula and the expression of Dirac's delta in spherical coordinates, we obtain:
Hence,
reduces to a sum of delta functions centered at lattice distances (including the origin). The latter result is actually general. Inserting Equation (23) in (31), we obtain:
q.e.d. At zero temperature,
is given by the same sum of delta-function terms as in (
42), but for the first term,
, which is missing—see Equation (92) below.
We add a final comment on possible alternative formulations of
for a crystal. One choice is to replace (30) with
Apparently, this is a good definition since (see Equation (8))
However, with this
we cannot write
as a radial integral—hence, option B is discarded altogether. Another possibility is
but this option is useless too, since
(observe that the inner integral is different from the one appearing in Equation (8)).
3.3. Symmetries of the Two-Body Density
A general property of the two-body density for a crystal is the CE sum rule
Other constraints follow from the translational symmetry of local crystal properties. As for the one-body density, fulfilling
for every
, we must have that
in turn implying
Now observe [33] that (i) any function of
and
can also be viewed as a function of
and
; (ii) under a
-translation, only the former variable is affected, not the relative separation. Hence, the most general function consistent with (
49) is:
where
and
In order that
it is sufficient that
We may reasonably expect that the most relevant term in the expansion (50) is indeed the
one (also notice that
as
by the Riemann–Lebesgue lemma).
Equation (50) is still insufficient to establish the scaling of two-body entropy with the size of the crystal. Some general results can be obtained under the (strong) assumption that
for any
. If we change the notation from
to
(which, by Equations (51) and (52), is a real and even function), then a necessary condition for
is:
The rationale behind Equation (54) is particularly transparent near
, where the peaks of the one-body density are extremely narrow. As argued below (see Equation (67) ff.),
as a function of
is roughly
in the primitive cell
centered in
,
denoting the only lattice site contained in
and roughly zero outside
. Since the integral of
over
equals 1, Equation (54) will immediately follow.
Now writing
as a Fourier integral,
and using (21) as one-body density, Equation (54) yields
which can only hold for arbitrary
if
Next, from Equation (30) we obtain:
For the one-body density in (21), the inner integral becomes:
with
It is evident that
vanishes at infinity. Upon inserting (59) in (58), we finally obtain:
As r increases, the second term gradually vanishes and the large-distance oscillations of
then exactly match those of
. As a countercheck, let us compute the integral of
over the macroscopic system volume (which, by Equations (32) and (33), should be
):
Under the assumption that
the entropy expansion for a crystal reads
Provided that it vanishes sufficiently rapidly at infinity, the function
can be written as a Fourier integral, and using (21) as one-body density, the two-body entropy becomes
which is clearly
.
3.4. Two-Body Density at
In the zero-temperature limit, particles will be sitting at lattice sites, and the two-body density then becomes (see Equation (23)):
which is of the form (63). In Equation (67),
is the indicator function of a Wigner–Seitz cell
centered at the origin (i.e.,
if
and
otherwise). While the factor
forces particles to be located at lattice sites, the only role of the
in (67) is to prevent the possibility of double site occupancy. However, a
function with this property is not unique; the one provided in (67) has the advantage of exactly complying with condition (57) (see below). Equation (67) indicates that the pair-correlation structure of a low-temperature solid is very different from the structure of a dense fluid close to freezing.
For
the Fourier transform reads:
Now observe that
is trivially periodic, and can thus be expanded in plane waves as
, with
. On the other hand,
Comparing Equations (69) and (70), we conclude that
For
the function
at Equation (60) equals
for
and 0 for
, where
(
) is the radius of the largest (smallest) sphere inscribed in (circumscribed to)
It then follows from Equation (61) that
for
, while
for
(for a triangular crystal with spacing
a we have
and
, both comprised between the first, 0, and the second,
a, lattice distance).
, where
consists of infinitely narrow peaks centered at lattice distances; this implies that
everywhere but at the origin, where
while
is non-zero.
3.5. Scaling of Two-Body Entropy with N
We henceforth discuss in fully general terms how the two-body entropy scales with N for a crystal, without making any simplifying hypothesis on the structure of
. Using an obvious short-hand notation, the two-body entropy reads
As we already know,
. From the inequality
, valid for all
, we derive
for
, and then obtain:
Clearly, estimating the size of the lower bound in Equation (73) is a much simpler problem than working with
itself.
Taking
, it is evident that
shares all symmetries of
. Hence, we can write:
Observe that the
functions are nothing but Fourier coefficients, once the h function has been expressed in terms of
and
:
By the Riemann–Lebesgue lemma,
as
(for arbitrary
). Moreover,
for
(for arbitrary
) since
for
. Similarly, for
we have that
Now observe that, for
,
Using the above equation, and changing the integration variables from
and
to
and
, we obtain:
where
In the special case
, we have
and
. Then, from Equation (77) we derive
An independent computation of the integral leads to the same result:
which should be compared with Equation (66). For
and
, we readily obtain
from both Equations (66) and (78), meaning that in this case the two-body entropy coincides with its lower bound in Equation (73).
The quantity (80) is clearly
, since the summand is rapidly converging to zero; this implies that the two-body entropy of a crystal is, at least for
, bounded from below by a
quantity. In the most general case, where Equations (78) and (79) apply instead, we can only observe the following. As
G grows in size, for any fixed
and
both
and
get smaller, suggesting that
will decrease too. However, this is not enough to conclude that
is
, and the only way to settle the problem is numerical.
3.6. Numerical Evaluation of the Structure Functions
The utility of (36) clearly relies on the possibility of determining the integrand in a simulation with sufficient accuracy. First we see how the one-body entropy, Equation (24), is computed. We start by dividing V into a large number
of identical cubes of volume
, chosen to be small enough that a cube contains the center of at most one particle. Let
(with
) be the occupancy of the
th cube in a given system configuration and
its canonical average as computed in a long Monte Carlo simulation of the weakly constrained crystal (to fix the center of mass of the crystal in space it is sufficient to keep one particle fixed; then, periodic boundary conditions will contribute to keep crystalline axes also fixed in the course of simulation). Given this setup, the local density at
(a point inside the
th cube) can be estimated as
and the integral in (24) becomes
(notice that
; we need
and an infinitely long simulation to make (82) an exact relation). Similarly, if
falls within the
th cube, then
and from Equation (30) we derive
In the above formula
is the number of cubes whose center lies at a distance r from
(to within a certain tolerance
), and the inner sum is carried out over those cubes only. Since
and
, an equivalent formula for
is
denoting
the number of particles found at a distance between
and
from the ith particle in the given configuration. Equation (86) closely reflects the method of computing the radial distribution function in a CE simulation (see, e.g., Equation (11) in reference [34]).
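A minimal sketch of this histogram method (our own illustration, with hypothetical array names): pair distances in a periodic box are binned with resolution `dr` under the minimum-image convention, each pair being counted once for each of its two members.

```python
import numpy as np

def pair_histogram(pos, box, dr, rmax):
    """Histogram of pair distances in shells [r, r + dr), periodic box."""
    nbins = int(rmax / dr)
    hist = np.zeros(nbins)
    n = len(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        idx = (r[r < rmax] / dr).astype(int)
        np.add.at(hist, idx, 2.0)             # count the pair for both members
    return hist
```

Dividing each bin by the number of particles, by the average density, and by the volume of the corresponding shell yields the usual radial-distribution estimator; in a simulation the histogram would be accumulated over many configurations before normalizing.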
The function
admits yet another expression, which further strengthens its resemblance to the
of a liquid (as reported, e.g., in [35]). It follows from Equations (30) and (6) that
Note that, for any sufficiently smooth function
,
and
we are allowed to replace
with
in Equation (87), and thus obtain
which finally leads to
At zero temperature, we can neglect the average and simply write
where in the last step we have followed the same path leading to Equation (41).
We can similarly proceed for the functions in Equations (31) and (35), which can be computed by the following formulae:
and
While
is the statistical average of an estimator whose histogram can be updated in the course of the simulation (see Equation (86)),
can only be estimated at the end of simulation, once
has been evaluated for every
with effort comparable to that made for the one-body entropy. Much more costly is the calculation of
, which should also be performed at the end of simulation after evaluating
for every
and
.
Using translational lattice symmetry, the radial distribution functions and
of a crystal can also be written as:
allowing Equations (85), (93), and (94) to be simplified into
In the above formulae, the index only runs over the cubes contained in a Wigner–Seitz/Voronoi cell of the lattice, while the sum is still carried out over all cubes in the simulation box.
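As an illustration of the cube-occupancy procedure described above, here is a sketch of a one-body estimator (our own code, with hypothetical names; as an assumption, we take an integrand of the form -rho1 ln(rho1/rho_bar) for the first entropy term, with the convention that it vanishes for a homogeneous system; the exact integrand of Equation (24) is not reproduced here).

```python
import numpy as np

def one_body_entropy(mean_occ, v_cube, rho_bar):
    """Cube-occupancy estimate of -∫ rho1 ln(rho1/rho_bar) d^3r (in k_B units).

    mean_occ : array of canonical-average occupancies of the cubes
    v_cube   : volume of each cube
    rho_bar  : mean number density N/V
    """
    rho = np.asarray(mean_occ) / v_cube     # local-density estimate in each cube
    mask = rho > 0.0                        # empty cubes contribute nothing
    return -np.sum(v_cube * rho[mask] * np.log(rho[mask] / rho_bar))
```

With this convention the estimate vanishes when all cubes are equally occupied, and it is negative whenever the density is spatially structured, in line with the negative-semidefiniteness of the one-body entropy argued in Section 3.1.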
3.7. Numerical Tests
We first examine the shape of the structure functions
and
for hard spheres, choosing a
r resolution of
(in units of the particle diameter
). We take a system of
particles arranged in an fcc lattice with packing fraction
(recall that the melting value is approximately 0.545). Periodic conditions are applied at the system boundary. In order to constrain the crystal in space, we keep one particle fixed during the simulation. As for
, we employ the Tarazona ansatz for
(see Equation (39)), a value providing the best fit to the one-body density drawn from simulation.
We use the standard Metropolis Monte Carlo (MC) algorithm, constantly adjusting the maximum shift of a particle during equilibration until the fraction of accepted moves becomes close to
(then, the maximum shift is no longer changed). We produce 50,000 MC cycles in the equilibration run, whereas CE averages are computed over a total of further
cycles. Our results are plotted in Figure 1. While at short distances
and
are rather different, as r increases the oscillations of the two functions become closer and closer in amplitude.
To obtain the one-body density with sufficient accuracy, we use a grid of about 50 points along each space direction in the unit cell. However, this grid resolution is too high to allow the computation of , as the memory requirements for processing the data are huge. On the other hand, a coarser grid is incompatible with the chosen.
To get closer to our goal, i.e., to ascertain the N dependence of the two-body entropy for a crystal, we consider a two-dimensional system: hard disks. For this system, the transformation from fluid to solid occurs in two stages, via an intermediate hexatic fluid phase [36] (the transition from isotropic fluid to hexatic fluid is first-order, whereas the hexatic-solid transition is continuous and occurs at
). We consider a system of
hard disks, arranged in a triangular crystal with packing fraction
, and a mesh consisting of about 80 points along each direction in the unit cell. Even though translational correlations are only quasi-long-ranged in an infinite two-dimensional crystal, when one of the particles is kept artificially fixed this specificity is lost and the (finite) two-dimensional crystal is made fully similar to a three-dimensional crystal. Observe also that an infinite two-dimensional crystal shares at least the same breaking of rotational symmetry typical of an infinite three-dimensional crystal.
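Setting up such a crystal requires choosing the lattice spacing from the packing fraction: for a triangular lattice the area per disk is (sqrt(3)/2) a^2, so eta = pi sigma^2 / (2 sqrt(3) a^2). A sketch (our own helper; the cell counts and the packing fraction below are illustrative, not the paper's values):

```python
import numpy as np

def triangular_lattice(nx, ny, eta, sigma=1.0):
    """Triangular crystal of nx*ny disks of diameter sigma at packing fraction eta.

    Area per disk is (sqrt(3)/2) a^2, hence eta = pi sigma^2 / (2 sqrt(3) a^2).
    Returns positions and the dimensions of the rectangular periodic box.
    """
    a = sigma * np.sqrt(np.pi / (2.0 * np.sqrt(3.0) * eta))
    pos = np.array([[(i + 0.5 * (j % 2)) * a,          # staggered rows
                     j * (np.sqrt(3.0) / 2.0) * a]
                    for j in range(ny) for i in range(nx)])
    box = np.array([nx * a, ny * (np.sqrt(3.0) / 2.0) * a])
    return pos, box

pos, box = triangular_lattice(12, 12, eta=0.7)
```

An even number of rows is needed for the staggered pattern to be compatible with periodic boundary conditions; the measured packing fraction N pi sigma^2 / (4 L_x L_y) then reproduces the requested eta by construction.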
As before, we first look at the structure functions drawn from simulation,
and
. Our results are plotted in Figure 2, together with the
function of Equation (40) for
. For this
the matching between the two
functions is nearly perfect, indicating that the peaks of the one-body density are (to a high level of accuracy) Gaussian in shape. For
we find
.
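A simple way to extract the Gaussian width parameter from simulation data is through the second moment of the particle displacements off their lattice sites: for isotropic Gaussian peaks proportional to exp(-alpha |u|^2), one has <|u|^2> = d/(2 alpha) in d dimensions. A sketch of this moment estimator (our own code, tested on synthetic displacements rather than actual simulation output):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_alpha(u):
    """Estimate alpha from an (M, d) array of displacements off lattice sites."""
    d = u.shape[1]
    return d / (2.0 * np.mean((u ** 2).sum(axis=1)))

# Synthetic check in 2D: Gaussian displacements with a known alpha.
# For exp(-alpha |u|^2), each component has variance 1/(2 alpha).
alpha_true = 40.0
u = rng.normal(scale=np.sqrt(1.0 / (2.0 * alpha_true)), size=(200000, 2))
alpha_est = estimate_alpha(u)
```

With a few hundred thousand samples the estimate recovers the input alpha to within a fraction of a percent; in practice one would also compare the resulting Gaussian profile against the measured one-body density, as done in Figure 2.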
In Figure 3, we show our main result,
, for
and two different crystal sizes,
and 1152. We point out that, in order to obtain these data, we had to run a separate simulation for each r, as the memory usage is rather extreme. As a check, we have also computed the
values in an independent way, i.e., using the same program loop written for
, eventually finding the same results as in Figure 2. Looking at Figure 3, we see that
shows a series of peaks at neighbor positions and in the valleys within, taking preferentially positive values (meaning that its oscillations are not centered around zero). However, the damping of large-distance oscillations is too gradual to allow us to assess the nature of the asymptotic decay of
and then compute
. We can offer a few possible explanations for this behavior of
: On one hand, the decay of
may really be slow (at least in two dimensions), but
would nonetheless be extensive, which implies a large
value. It may also be that constraining the crystal in space by pinning the position of one particle has a strong effect on the speed of
decay, which only a finite-size scaling of data can relieve. Indeed, when going from
to
the values of
are slightly shifted downwards.
In summary, we have not reached any clear demonstration of extensivity in a crystal; this task has proved very hard to settle numerically. Our hope is that, based on our preparatory work, other authors with more powerful computational resources at their disposal can push the numerical analysis forward and eventually come up with a definite solution of the problem.