Article

Maximum Geometric Quantum Entropy

by Fabio Anza 1,2,* and James P. Crutchfield 2
1 Department of Mathematics, Informatics and Geoscience, University of Trieste, Via Alfonso Valerio 2, 34127 Trieste, Italy
2 Complexity Sciences Center and Physics Department, University of California at Davis, One Shields Avenue, Davis, CA 95616, USA
* Author to whom correspondence should be addressed.
Entropy 2024, 26(3), 225; https://doi.org/10.3390/e26030225
Submission received: 1 January 2024 / Revised: 16 February 2024 / Accepted: 26 February 2024 / Published: 1 March 2024

Abstract:
Any given density matrix can be represented as an infinite number of ensembles of pure states. This leads to the natural question of how to uniquely select one out of the many, apparently equally-suitable, possibilities. Following Jaynes’ information-theoretic perspective, this can be framed as an inference problem. We propose the Maximum Geometric Quantum Entropy Principle to exploit the notions of Quantum Information Dimension and Geometric Quantum Entropy. These allow us to quantify the entropy of fully arbitrary ensembles and select the one that maximizes it. After formulating the principle mathematically, we give the analytical solution to the maximization problem in a number of cases and discuss the physical mechanism behind the emergence of such maximum entropy ensembles.
PACS:
05.45.-a; 89.75.Kd; 89.70.+c; 05.45.Tp

1. Introduction

1.1. Background

Quantum mechanics defines a system's state $|\psi\rangle$ as an element of a Hilbert space $\mathcal{H}$. These are the pure states. To account for uncertainties in a system's actual state $|\psi\rangle$, one extends the definition to density operators $\rho$ that act on $\mathcal{H}$. These operators are linear, positive semidefinite ($\rho \geq 0$), self-adjoint ($\rho = \rho^{\dagger}$), and normalized ($\mathrm{Tr}\,\rho = 1$). $\rho$, then, is a pure state when it is also a projector: $\rho^{2} = \rho$.
The spectral theorem guarantees that one can always decompose a density operator as $\rho = \sum_i \lambda_i |\lambda_i\rangle\langle\lambda_i|$, where $\lambda_i \in [0,1]$ are its eigenvalues and $|\lambda_i\rangle$ its eigenvectors. Ensemble theory [1,2] gives the decomposition's statistical meaning: $\lambda_i$ is the probability that the system is in the pure state $|\lambda_i\rangle$. Together, they form $\rho$'s eigenensemble $L(\rho) := \{\lambda_j, |\lambda_j\rangle\}_j$, which, putting degeneracies aside for a moment, is unique. $L(\rho)$, however, is not the only ensemble compatible with the measurement statistics given by $\rho$. Indeed, there is an infinite number of different ensembles that give the same density matrix: $\{p_k, |\psi_k\rangle\}_k$ such that $\sum_k p_k |\psi_k\rangle\langle\psi_k| = \sum_j \lambda_j |\lambda_j\rangle\langle\lambda_j|$. Throughout the following, $E(\rho)$ identifies the set of all ensembles of pure states consistent with a given density matrix.
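The one-to-many nature of this association can be checked in a few lines. The following is a minimal numerical sketch (illustrative, not from the paper): the maximally mixed qubit state is realized both by its eigenensemble and by an ensemble of superposition states.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def density_matrix(ensemble):
    """sum_k p_k |psi_k><psi_k| for an ensemble [(p_k, psi_k), ...]."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

rho_eigen = density_matrix([(0.5, ket0), (0.5, ket1)])   # eigenensemble
rho_mixed = density_matrix([(0.5, plus), (0.5, minus)])  # a different ensemble

print(np.allclose(rho_eigen, rho_mixed))  # True: same rho from distinct ensembles
```

Both ensembles yield $\rho = \mathbb{I}/2$, so no measurement on the system alone can distinguish them.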

1.2. Motivation

Since the association ρ E ( ρ ) is one-to-many, it is natural to ask whether a meaningful criterion to uniquely select an element of E ( ρ ) exists. This is a typical inference problem, and a principled answer is given by the maximum entropy principle (MEP) [3,4,5]. Indeed, when addressing inference given only partial knowledge, maximum entropy methods have enjoyed marked empirical success. They are broadly exploited in science and engineering.
Following this lead, the following answers the question of uniquely selecting an ensemble for a given density matrix by adapting the maximum entropy principle. We also argue in favor of this choice by studying the dynamical emergence of these ensembles in a number of cases.
The development is organized as follows. Section 2 discusses the relevant literature on this problem. It also sets up language and notation. Section 3 gives a brief summary of Geometric Quantum Mechanics: a differential-geometric language to describe the states and dynamics of quantum systems [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Then, Section 4 introduces the technically pertinent version of MEP—the Maximum Geometric Entropy Principle (MaxGEP). Section 5 discusses two mechanisms that can lead to the MaxGEP and identifies different physical situations in which the ensemble can emerge. Finally, Section 6 summarizes what this accomplishes and draws several forward-looking conclusions.

2. Existing Results

The properties and characteristics of pure-state ensembles form a vast and rich research area, one whose results are useful across a large number of fields, from quantum information and quantum optics to quantum thermodynamics and quantum computing, to mention only a few. This section discusses four sets of results relevant to our purposes. This also allows us to introduce language and notation.
First, recall Ref. [24], where Hughston, Jozsa, and Wootters gave a constructive characterization of all possible ensembles behind a given density matrix, assuming an ensemble with a finite number of elements. Second, Wiseman and Vaccaro, in Ref. [25], then argued for a preferred ensemble via the dynamically motivated criterion of a Physically Realizable ensemble. Third, Goldstein, Lebowitz, Tumulka, and Zanghì singled out the Gaussian Adjusted Projected (GAP) measure as a preferred ensemble behind a density matrix in a thermodynamic and statistical mechanics setting [26]. Fourth, Brody and Hughston used one form of maximum entropy within geometric quantum mechanics [27].

2.1. HJW Theorem

At the technical level, one of the most important results for our purposes is the Hughston–Jozsa–Wootters (HJW) theorem, proved in Ref. [24], which we now summarize.
Consider a system with finite-dimensional Hilbert space $\mathcal{H}_S$ described by a density matrix $\rho$ with rank $r$: $\rho = \sum_{j=1}^{r} \lambda_j |\lambda_j\rangle\langle\lambda_j|$. We assume $\dim \mathcal{H}_S := d_S = r$, since the case in which $d_S > r$ is easily handled by restricting $\mathcal{H}_S$ to the $r$-dimensional subspace defined by the image of $\rho$. Then, a generic ensemble $e_\rho \in E(\rho)$ with $d \geq d_S$ elements can be generated from $L(\rho)$ via linear remixing with a $d \times d_S$ matrix $M$ having $d_S$ orthonormal vectors as columns. Then, $e_\rho = \{p_k, |\psi_k\rangle\}$ is given by the following:
$$\sqrt{p_k}\,|\psi_k\rangle = \sum_{j=1}^{d_S} M_{kj}\, \sqrt{\lambda_j}\,|\lambda_j\rangle.$$
Equivalently, one can generate ensembles by applying a generic $d \times d$ unitary matrix $U$ to a list of $d$ non-normalized $d_S$-dimensional states in which the first $d_S$, $\{\sqrt{\lambda_j}\,|\lambda_j\rangle\}_{j=1}^{d_S}$, are proportional to the eigenvectors of $\rho$, while the remaining $d - d_S$ are simply null vectors:
$$\sqrt{p_k}\,|\psi_k\rangle = \sum_{j=1}^{d_S} U_{kj}\, \sqrt{\lambda_j}\,|\lambda_j\rangle.$$
Here, we must remember that $U$ is not an operator acting on $\mathcal{H}_S$ but a unitary matrix mixing weighted eigenvectors into $d$ non-normalized vectors.
The power of the HJW theorem is not only that it introduces a constructive way to build E ( ρ ) ensembles but that this way is complete. Namely, all ensembles can be built in this way. This is a remarkable fact, which the following sections rely heavily on.
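As a sanity check of the HJW construction, the following sketch (illustrative code with a hypothetical random $\rho$, not the authors' implementation) builds a $d = 4$ element ensemble from a rank-2 density matrix via a random unitary and verifies that it reproduces $\rho$.

```python
import numpy as np

rng = np.random.default_rng(0)
d_S, d = 2, 4

# A hypothetical random full-rank density matrix (d_S = r = 2).
A = rng.normal(size=(d_S, d_S)) + 1j * rng.normal(size=(d_S, d_S))
rho = A @ A.conj().T
rho /= np.trace(rho)
lam, vecs = np.linalg.eigh(rho)
lam = np.clip(lam, 0.0, None)            # guard against tiny negative round-off

# Haar-random d x d unitary via QR of a Ginibre matrix.
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(G)

# sqrt(p_k)|psi_k> = sum_j U_kj sqrt(lambda_j) |lambda_j>, j = 1..d_S.
weighted = vecs * np.sqrt(lam)           # column j is sqrt(lambda_j)|lambda_j>
tilde = U[:, :d_S] @ weighted.T          # row k is sqrt(p_k)|psi_k>

p = np.sum(np.abs(tilde) ** 2, axis=1)   # ensemble probabilities, sum to 1
rho_rebuilt = tilde.T @ tilde.conj()     # sum_k p_k |psi_k><psi_k|

print(np.allclose(rho_rebuilt, rho))     # True
```

Only the first $d_S$ columns of $U$ enter, which is why the remaining $d - d_S$ input vectors can be taken null.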

2.2. Physically Realizable Ensembles

For our purposes, a particularly relevant result is that of Wiseman and Vaccaro [25]. (See also subsequent results by Wiseman and collaborators on the same topic [28]). The authors argue for a Physically Realizable ensemble that is implicitly selected by the fact that if a system is in a stationary state ρ s s , one would like to have an ensemble that is stable under the action of the dynamics generated by monitoring the environment. This is clearly desirable in experiments in which one monitors an environment to infer properties about the system. While this is an interesting way to answer the same question we tackle here, their answer is based on dynamics and limited to stationary states. The approach we propose here is very different, being based on an inference principle. This opens interesting questions related to understanding the conditions under which the two approaches provide compatible answers. Work in this direction is ongoing and it will be reported elsewhere.

2.3. Gaussian Adjusted Projected Measure

Reference [26] asks a similar question to that here but in a statistical mechanics and thermodynamics context. Namely, viewing pure states as points on a high-dimensional sphere $\psi \in S^{2d_S-1}$, which probability measure $\mu$ on $S^{2d_S-1}$, interpreted as a smooth ensemble on $S^{2d_S-1}$, leads to a thermal density matrix:
$$\rho_{\mathrm{th}} \overset{?}{=} \int d\psi\ \mu(\psi)\, |\psi\rangle\langle\psi| \, .$$
Here, $\rho_{\mathrm{th}}$ could be the microcanonical or the canonical density matrix. Starting with Schrödinger's [29,30] and Bloch's [31] early work, the authors argue in favor of the Gaussian Adjusted Projected (GAP) measure. This is essentially a Gaussian measure, adjusted and projected to live on the sphere $\psi \in S^{2d_S-1}$:
$$GAP(\sigma) \propto e^{-\langle \psi | \sigma^{-1} | \psi \rangle}.$$
Written explicitly in terms of complex coordinates $\psi_j$, it is clear that this is a Gaussian measure with vanishing average $\mathbb{E}[\psi_j] = 0$ and covariance specified by $\mathbb{E}[\psi_j^* \psi_k] = \sigma_{jk}$. In particular, $\sigma = \rho$ guarantees that $GAP(\rho)$ has $\rho$ as its density matrix.
The GAP measure has some interesting properties [26,32,33] and, as we see in Section 4, it is also closely related to one of our results in a particular case. Our results can therefore be understood as a generalization of the GAP measure. We will not delve deeper into this matter now but comment on it later.
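The GAP construction can be probed numerically. The sketch below (an illustrative Monte Carlo, not the authors' code) samples the Gaussian with covariance $\sigma = \rho$, applies the adjustment weight $\langle\psi|\psi\rangle$ and the projection to the unit sphere, and checks that the resulting ensemble has $\rho$ as its density matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = np.array([[0.7, 0.2], [0.2, 0.3]])   # hypothetical target density matrix
L = np.linalg.cholesky(rho)

# Sample the complex Gaussian with E[psi_j psi_k^*] = rho_jk.
n = 200_000
z = (rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))) / np.sqrt(2)
psi = z @ L.T

w = np.sum(np.abs(psi) ** 2, axis=1)       # adjustment weights <psi|psi>
chi = psi / np.sqrt(w)[:, None]            # projection onto the unit sphere

# Density matrix of the weighted, projected ensemble.
rho_est = (chi.T * w) @ chi.conj() / w.sum()
print(np.round(rho_est.real, 2))           # ~ rho, up to Monte Carlo error
```

The weight and the projection cancel pointwise, which is why the adjusted-projected ensemble inherits the Gaussian's covariance exactly.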

2.4. Geometric Approach

In 2000, Brody and Hughston performed the first maximum entropy analysis for the ensemble behind the density matrix [27], in a language and spirit quite close to those we use here. Their result came before the definition of the GAP measure, but it is essentially identical to it: $\mu(\psi) \propto \exp\big(-\sum_{j,k} L_{jk}\, \psi_j^* \psi_k\big)$. Their perspective, however, is very different from that in Ref. [26], which is focused on thermal equilibrium phenomenology. The work we perform here, and our results, can also be understood as a generalization of Ref. [27]. Indeed, as we argued in Ref. [34] (and will show again in Section 4), the definition of entropy used (see Equation (10) in Ref. [27]) is meaningful only in certain cases, in particular, when the ensemble has support with dimension equal to the dimension of the state space of the system of interest. In general, more care is required.

2.5. Summary

We summarized four relevant sets of results on selecting one ensemble among the infinitely many that are generally compatible with a density matrix. Our work relies heavily on the HJW theorem [24], and it is quite different from the approach by Wiseman and Vaccaro [25]. Moreover, it constitutes a strong generalization with respect to the results on the GAP measure [26] in a thermal equilibrium context and with respect to the analysis by Brody and Hughston in [27].

3. Geometric Quantum States

Our maximum geometric entropy principle relies on a differential-geometric approach to quantum mechanics called Geometric Quantum Mechanics (GQM). The following gives a quick summary of GQM and how its notion of Geometric Quantum State [6,7,34] can be elegantly used to study physical and information-theoretic aspects of ensembles. More complete discussions are found in the relevant literature [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23].

3.1. Quantum State Space

The state space of a finite-dimensional quantum system with Hilbert space $\mathcal{H}_S$ is the projective Hilbert space $\mathcal{P}(\mathcal{H}_S)$, which is isomorphic to a complex projective space $\mathcal{P}(\mathcal{H}_S) \simeq \mathbb{C}P^{d_S-1} := \{ Z \in \mathbb{C}^{d_S} : Z \sim \lambda Z,\ \lambda \in \mathbb{C} \setminus \{0\} \}$. Pure states are thus in one-to-one correspondence with points $Z \in \mathbb{C}P^{d_S-1}$. Using a computational basis $\{|j\rangle\}_{j=1}^{d_S}$ as reference basis, $Z$ has homogeneous coordinates $Z = (Z^1, \ldots, Z^{d_S})$, where $|Z\rangle = \sum_{j=1}^{d_S} Z^j |j\rangle \in \mathcal{H}_S$. One of the advantages of the geometric approach is that one can exploit the symplectic character of the state space. Indeed, this implies that the quantum state space $\mathbb{C}P^{d_S-1}$ can essentially be considered a classical, although curved, phase space. With probabilities and phases being canonically conjugate coordinates, $Z^j = \sqrt{p_j}\, e^{i\phi_j}$, we have $\{p_j, \phi_k\} = \delta_{jk}$. Intuition from classical mechanics can then be used to understand the phenomenology of quantum systems.

3.2. Observables

Within GQM, observables are Hermitian functions from $\mathbb{C}P^{d_S-1}$ to the reals:
$$f_O(Z) := \frac{\sum_{j,k=1}^{d_S} (Z^j)^* Z^k\, O_{jk}}{\sum_{h=1}^{d_S} |Z^h|^2},$$
where $O_{jk} = \langle j | O | k \rangle$ are the matrix elements of the Hilbert-space self-adjoint operator $O$. An analogous relation holds for Positive Operator-Valued Measures (POVMs).
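As a quick illustration (with hypothetical random values), the following sketch verifies that $f_O$ is well defined on the projective space, i.e., independent of the representative $Z$ chosen for a projective point.

```python
import numpy as np

rng = np.random.default_rng(2)
d_S = 3
O = rng.normal(size=(d_S, d_S)) + 1j * rng.normal(size=(d_S, d_S))
O = O + O.conj().T                        # a hypothetical self-adjoint observable

Z = rng.normal(size=d_S) + 1j * rng.normal(size=d_S)  # homogeneous coordinates

def f_O(Z, O):
    """f_O(Z) = sum_jk Z_j^* Z_k O_jk / sum_h |Z_h|^2; real for Hermitian O."""
    return (Z.conj() @ O @ Z).real / np.sum(np.abs(Z) ** 2)

# Invariance under Z -> lambda Z: f_O descends to the projective space.
print(np.isclose(f_O(Z, O), f_O(3.7j * Z, O)))  # True
```

The denominator cancels the $|\lambda|^2$ rescaling of the numerator, so $f_O$ depends only on the ray.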

3.3. Geometric Quantum States

The quantum state space $\mathcal{P}(\mathcal{H}_S)$ has a preferred metric $g_{FS}$ and a related volume element $dV_{FS}$: the Fubini–Study volume element. The details surrounding these go beyond our present purposes. It is sufficient to give the explicit form of $dV_{FS}$ in the coordinate system we use for concrete calculations. This is the "probability + phase" coordinate system, given by $Z \mapsto \{(p_j, \phi_j)\}_{j=1}^{d_S}$:
$$dV_{FS} = \sqrt{\det g_{FS}}\ \prod_{j=1}^{d_S} dZ^j\, dZ^{j*} = \prod_{j=1}^{d_S} \frac{dp_j\, d\phi_j}{2}.$$
This volume element can be used to define integration. Indeed, calling $\mathrm{Vol}[B]$ the volume of a set $B \subseteq \mathcal{P}(\mathcal{H}_S)$, we have $\mathrm{Vol}[B] := \int_B dV_{FS}$. In turn, this provides the fundamental, unitarily invariant notion of a uniform measure on the quantum state space. This is the normalized Haar measure $\mu_{\mathrm{Haar}}$:
$$\mu_{\mathrm{Haar}}[B] = \mathrm{Vol}[B] / \mathrm{Vol}[\mathcal{P}(\mathcal{H}_S)].$$
$\mu_{\mathrm{Haar}}$ is a probability measure that weights all pure states uniformly with respect to the Fubini–Study volume of the quantum state space. Probability measures [35,36] are the appropriate mathematical formalization of the physical notion of ensembles, and they formalize the concept of a Geometric Quantum State (GQS): a probability measure on the (complex projective) quantum state space. For example, a pure state corresponds to a Dirac measure $\delta_\psi$ with support on a single point $\psi \in \mathcal{P}(\mathcal{H}_S)$, with Hilbert space representation $|\psi\rangle$.

3.4. GQS as Conditional Probability Measures

One way to embed the HJW theorem in this geometric context is the following.
Any density matrix can be purified in an infinite number of different ways. A purification $|\psi(\rho)\rangle$ of $\rho$ is a pure state in a larger Hilbert space, $|\psi(\rho)\rangle \in \mathcal{H}_S \otimes \mathcal{H}_E$, such that $\mathrm{Tr}_E\, |\psi(\rho)\rangle\langle\psi(\rho)| = \rho$, where $\mathrm{Tr}_E$ is the partial trace over the additional Hilbert space $\mathcal{H}_E$. It is known that, for the purification to be achieved, $d_E \geq r$. Since we assume $r = d_S$, we have $d_E \geq d_S$. Any purification of $\rho$ will have a Schmidt decomposition of a specific type:
$$|\psi(\rho)\rangle = \sum_{j=1}^{d_S} \sqrt{\lambda_j}\, |\lambda_j\rangle \otimes |SP_j\rangle,$$
where $\{|SP_j\rangle \in \mathcal{H}_E\}_{j=1}^{d_S}$ are the $d_S$ orthonormal "Schmidt partners". These can be extended to a full orthonormal basis on $\mathcal{H}_E$ by adding $d_E - d_S$ orthonormal vectors which are orthogonal to $\mathrm{span}(\{|SP_j\rangle\}_j)$.
$L(\rho)$ is therefore understood as the ensemble resulting from conditioning on the Schmidt partners. Namely, when measuring the environment in the basis $\{|SP_j\rangle\}_{j=1}^{d_E}$, the state of the system after the measurement will be $|\lambda_j\rangle$ with probability $\lambda_j$. Its GQS is $\mu_L = \sum_{j=1}^{d_S} \lambda_j \delta_{\lambda_j}$. If we now measure the environment in a generic basis, instead of using the Schmidt partners, we generate a different ensemble. Calling $\{|v_\alpha\rangle\}_{\alpha=1}^{d_E}$ one such basis, we have:
$$\sum_{j=1}^{d_S} \sqrt{\lambda_j}\, |\lambda_j\rangle \otimes |SP_j\rangle = \sum_{\alpha=1}^{d_E} \sqrt{p_\alpha}\, |\chi_\alpha\rangle \otimes |v_\alpha\rangle,$$
with $\sqrt{p_\alpha}\, |\chi_\alpha\rangle = (\mathbb{I}_S \otimes \langle v_\alpha|)\, |\psi(\rho)\rangle$, $\{p_\alpha, |\chi_\alpha\rangle\}_{\alpha=1}^{d_E} \in E(\rho)$, and GQS $\mu = \sum_{\alpha=1}^{d_E} p_\alpha \delta_{\chi_\alpha}$.
Starting from the Schmidt partners, these bases are in one-to-one correspondence with the $d_E \times d_E$ unitary matrices acting on $\mathcal{H}_E$: $|v_\alpha\rangle := U |SP_\alpha\rangle$. And these, in turn, are in one-to-one correspondence with the unitary matrices in the HJW theorem. Therefore, they provide an analogously complete classification of ensembles. The reason for this slight rearrangement with respect to the HJW theorem is that we now have an interpretation of $|\chi_\alpha\rangle$ as the conditionally pure state of the system, conditioned on the fact that we make a projective measurement $\{|v_\alpha\rangle\}_{\alpha=1}^{d_E}$ on the environment in which the result $\alpha$ occurs with probability $p_\alpha$.
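The conditional-ensemble picture can be checked directly. The sketch below (illustrative, with $d_E = d_S = 2$ and hypothetical eigenvalues) purifies a diagonal $\rho$, measures the environment in a random orthonormal basis, and verifies that the resulting ensemble $\{p_\alpha, |\chi_\alpha\rangle\}$ averages back to $\rho$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([0.8, 0.2])                     # hypothetical eigenvalues of rho
rho = np.diag(lam).astype(complex)

# Purification |psi(rho)> = sum_j sqrt(lam_j)|lam_j> x |SP_j>, with |SP_j> = |j>.
psi = np.zeros((2, 2), dtype=complex)          # psi[j, e]: system x environment index
for j in range(2):
    psi[j, j] = np.sqrt(lam[j])

# A random orthonormal environment basis |v_alpha> (columns of a Haar unitary).
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
V, _ = np.linalg.qr(G)

rho_rebuilt = np.zeros_like(rho)
for alpha in range(2):
    unnorm = psi @ V[:, alpha].conj()          # sqrt(p_a)|chi_a> = (I_S x <v_a|)|psi(rho)>
    rho_rebuilt += np.outer(unnorm, unnorm.conj())  # adds p_a |chi_a><chi_a|

print(np.allclose(rho_rebuilt, rho))           # True
```

Completeness of the environment basis, $\sum_\alpha |v_\alpha\rangle\langle v_\alpha| = \mathbb{I}_E$, is what guarantees the ensemble average returns $\rho$ for any choice of basis.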

3.5. Quantifying Quantum Entropy

To develop entropy, the following uses the setup in Refs. [6,7,34] to study the physics of ensembles using geometric quantum mechanics. Since the focus here is a maximum entropy approach to select an ensemble behind a density matrix, it is important to have a proper understanding of how to quantify the entropy of an ensemble or, equivalently, of the GQS.
First, we look at the statistical interpretation of the pure states which participate in the conditional ensembles $\{p_\alpha, |\chi_\alpha\rangle\}$. The corresponding kets $|\chi_\alpha\rangle$ are not necessarily orthogonal, $\langle \chi_\alpha | \chi_\beta \rangle \neq \delta_{\alpha\beta}$, so the states are not mutually exclusive or distinguishable in the Fubini–Study sense. However, these states come with the classical labels $\alpha \mapsto |\chi_\alpha\rangle$ associated with the outcomes of projective measurements on the environment. In this sense, if $\alpha \neq \beta$, we have a classical way to distinguish them, and thus we can understand how to interpret expressions like $-\sum_\alpha p_\alpha \log p_\alpha$.
Then, we highlight that the correct functional with which to evaluate the entropy of $\mu$ is not always the same. It depends on another feature of the ensemble, the quantum information dimension, which is conceptually related to the dimension of its support in quantum state space. To illustrate the concept, consider the following four GQSs of a qubit:
$$\mu_1 = \delta_\psi, \qquad \mu_2 = \sum_k p_k\, \delta_{\psi_k}, \qquad \mu_3 = \frac{1}{T} \int_0^T dt\ \delta_{\psi(t)}, \qquad \mu_4 = \mu_{\mathrm{Haar}}.$$
Naturally, the entropy of $\mu_1$ vanishes, since there is no uncertainty. The system inhabits only one pure state, $\psi$. The entropy of $\mu_2$ is already nontrivial to evaluate. Indeed, while one obvious way is to use the functional $-\sum_k p_k \log p_k$, it is also very clear that this notion of entropy does not take into account the location of the points $\psi_k \in \mathcal{P}(\mathcal{H}_S)$. Intuitively, if all these points are close to each other, we would like our entropy to be smaller than in the case in which the points are spread uniformly over $\mathcal{P}(\mathcal{H}_S)$.
The entropy of $\mu_3$ is perhaps the most peculiar, but it illustrates the point best. Let us assume that our qubit evolves with a Hamiltonian $H$ such that $E_1 - E_0 = \omega$. Then, $\psi(t) = \big(\sqrt{1 - p_0},\ \sqrt{p_0}\, e^{i(\phi_0 - \omega t)}\big)$. If we accumulate the time average and look at the resulting statistics, it is clear that the variable $p$ is a conserved quantity, $p(t) = p_0$, while $\phi(t) = \phi_0 - \omega t$ is an angular variable that, over a long time, will be uniformly distributed in $[0, 2\pi]$. This means $\lim_{T \to \infty} \mu_3 = \frac{1}{2\pi}\, \delta^p_{p_0}$, where $\delta^p_{p_0}$ is a Dirac measure over the first variable $p$ with support on $p = p_0$. How do we evaluate the entropy of $\mu_3$?
While to evaluate the entropy of $\mu_4 = \mu_{\mathrm{Haar}}$ we simply integrate over the whole state space and obtain $\log \mathrm{Vol}(\mathbb{C}P^1)$, this does not work for $\mu_3$. Indeed, with respect to the full, 2D quantum state space $(p, \phi) \in [0,1] \times [0, 2\pi]$, the distribution clearly lives on a 1D line, which is a measure-zero subset.
To properly address all these different cases, a more general approach is needed. Reference [34] adapted previous work by Rényi to probability measures on a quantum state space. This led to the notions of Quantum Information Dimension $D$ and Geometric Quantum Entropy $h_D$ that address these issues and properly evaluate the entropy in all these cases. We now give a quick summary of the results in Ref. [34].

3.6. Quantum Information Dimension and Geometric Entropy

Thanks to the symplectic nature of $\mathcal{P}(\mathcal{H}_S)$, the quantum state space is essentially a curved, compact, classical phase space. We can therefore apply classical statistical mechanics to it, using $\{(p_j, \phi_j)\}_{j=1}^{d_S}$ as canonical coordinates. Since the Fubini–Study volume is $dV_{FS} \propto \prod_j dp_j\, d\phi_j$, we can coarse-grain $\mathcal{P}(\mathcal{H}_S)$ by partitioning it into phase-space cells $C_{ab}$:
$$C_{ab} = \prod_{j=1}^{d_S-1} \left[ \frac{a_j}{N}, \frac{a_j + 1}{N} \right] \times \left[ \frac{2\pi b_j}{N}, \frac{2\pi (b_j + 1)}{N} \right]$$
of equal Fubini–Study volume $\mathrm{Vol}[C_{ab}] = \mathrm{Vol}[\mathcal{P}(\mathcal{H}_S)]\, N^{-2(d_S-1)} = \epsilon^{2(d_S-1)}$, where $a = (a_1, \ldots, a_{d_S-1})$, $b = (b_1, \ldots, b_{d_S-1})$, and $a_j, b_j = 0, 1, \ldots, N-1$.
The coarse-graining procedure produces a discrete probability distribution $q_{ab} := \mu[C_{ab}]$, for which we can compute the Shannon entropy:
$$H[\epsilon] := -\sum_{a,b} q_{ab} \log q_{ab}.$$
As we change $\epsilon = 1/N \to 0$, the degree of coarse-graining changes accordingly. The scaling behavior of $H[\epsilon]$ provides structural information about the underlying ensemble. Indeed, since one can prove that, for $\epsilon \to 0$, $H[\epsilon]$ has asymptotics
$$H[\epsilon] \underset{\epsilon \to 0}{\sim} -D \log \epsilon + h_D,$$
two quantities define its scaling behavior: $D$ is the quantum information dimension and $h_D$ is the geometric quantum entropy. Their explicit definitions are:
$$D := \lim_{\epsilon \to 0} \frac{H[\epsilon]}{\log (1/\epsilon)},$$
$$h_D := \lim_{\epsilon \to 0} \big( H[\epsilon] + D \log \epsilon \big).$$
Note how this keeps the dependence of the entropy on the information dimension explicit. This clarifies how, only in certain cases, one can use the continuous counterpart of Shannon’s discrete entropy. In general, its exact form depends on the value of D and it cannot be written as an integral on the full quantum state space with the Fubini–Study volume form.
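The scaling behavior that defines $D$ and $h_D$ can be illustrated numerically. The following sketch (an illustrative stand-in: a flat 2D unit square instead of the curved state space, with a measure supported on a line as in $\mu_3$ above) coarse-grains at several resolutions and fits the slope of $H[\epsilon]$ against $\log(1/\epsilon)$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
phi = rng.uniform(0.0, 1.0, size=n)         # uniform along the support line
p = np.full(n, 0.37)                        # Dirac in the transverse direction

Hs, logs = [], []
for N in [8, 16, 32, 64, 128]:
    # Empirical cell probabilities q_ab at resolution eps = 1/N.
    cells = np.floor(p * N).astype(int) * N + np.floor(phi * N).astype(int)
    q = np.bincount(cells) / n
    q = q[q > 0]
    Hs.append(-np.sum(q * np.log(q)))
    logs.append(np.log(N))                  # log(1/eps)

D, h = np.polyfit(logs, Hs, 1)              # H[eps] ~ D log(1/eps) + h_D
print(round(D, 2))                          # close to 1: the support is a line
```

The fitted slope recovers the dimension of the support, while the intercept estimates the geometric entropy along it.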

4. Principle of Maximum Geometric Quantum Entropy

This section presents a fine-grained characterization of selecting an ensemble behind a given density matrix. This leverages both the HJW theorem and previous results by the authors. First, we note that $D$ foliates $E(\rho)$ into non-overlapping subsets $E_D(\rho)$ collecting all ensembles $\mu$ at given density matrix $\rho$ and with information dimension $D$:
$$E_D(\rho) \cap E_{D'}(\rho) = \delta_{D,D'}\, E_D(\rho), \qquad E(\rho) = \bigcup_D E_D(\rho).$$
As argued above, ensembles with different $D$ pertain to different physical situations. These can be wildly different. Therefore, we often want to first select the $D$ of the ensemble we will end up with and then choose the one with the maximum geometric entropy. Thus, here we introduce the principle of maximum geometric entropy at fixed information dimension.
Proposition 1
(Maximum Geometric Entropy Principle). Given a system with density matrix ρ, the ensemble $\mu_{ME}^D$ that makes the fewest assumptions possible about our knowledge of the ensemble, among all elements of $E(\rho)$ with fixed information dimension $D$, is given by:
$$\mu_{ME}^D := \arg\max_{\mu \in E_D(\rho)} h_D.$$
Several general comments are in order. First, we note that $\mu_{ME}^D$ might not be unique. This should not come as a surprise. For example, with degeneracies, even the eigenensemble is not unique. Second, the optimization problem defined above is clearly constrained: the resulting ensemble has to be normalized and the average of $(Z^j)^* Z^k$ must be $\rho_{jk}$. Calling $\mathbb{E}_\mu[A]$ the state-space average of a function $A$ performed with the GQS $\mu$, these two constraints can be written as $C_1 := \mathbb{E}_\mu[1] - 1 = 0$ and $C^\rho_{jk} := \mathbb{E}_\mu[(Z^j)^* Z^k] - \rho_{jk} = 0$. Using Lagrange multipliers, we optimize $\Lambda[\mu, \gamma_1, \gamma_{jk}]$ defined as:
$$\Lambda[\mu, \gamma_1, \gamma_{jk}] := h_D[\mu] + \gamma_1 C_1 + \sum_{j,k} \gamma_{jk}\, C^\rho_{jk}.$$
While the vanishing of $\Lambda$'s derivatives with respect to the Lagrange multipliers $\gamma_1, \gamma_{jk}$ enforces the constraints $C_1 = C^\rho_{jk} = 0$, derivatives with respect to $\mu$ give the equation whose solution is the desired ensemble $\mu_{ME}^D$. We also note that the $\gamma_{jk}$ are not all independent. This is due to the fact that $\rho$ is not an arbitrary matrix: $\mathrm{Tr}\,\rho = 1$, $\rho \geq 0$, and $\rho = \rho^\dagger$. A similar relation holds for the $\gamma_{jk}$.
To illustrate its use, we now solve this optimization problem in a number of relevant cases. In discussing them, it is worth introducing additional notation. Since we often use canonically conjugate coordinates $\{(p_j, \phi_j)\}_{j=1}^{d_S}$, we introduce the vector notation $(\vec{p}, \vec{\phi})$, with $\vec{p} \in \Delta_{d_S-1}$ and $\vec{\phi} \in \mathbb{T}^{d_S-1}$, where $\Delta_{d_S-1}$ is the $(d_S-1)$-dimensional probability simplex and $\mathbb{T}^{d_S-1}$ is the $(d_S-1)$-dimensional torus. Analogously, we introduce the Dirac measures $\delta^p_x$ and $\delta^\phi_\varphi$ with support on $x \in \Delta_{d_S-1}$ and $\varphi \in \mathbb{T}^{d_S-1}$, respectively.

4.1. Finite Environments: D = 0

If $D = 0$, then the support of the ensemble consists of a finite number of points. That is, there exists $N \in \mathbb{N}$ such that $\mu_{ME}^{D=0} = \sum_{\alpha=1}^N p_\alpha \delta_{\chi_\alpha}$, with $h_0 = -\sum_{\alpha=1}^N p_\alpha \log p_\alpha$. Note how this is the HJW theorem's domain of applicability, which allows us to give a constructive solution.
We start by noting that $N$ also foliates $E_{D=0}$ into non-overlapping sets in which the ensemble consists of exactly $N$ elements. We call this set $E_{0,N}(\rho)$; it is such that $E_{0,N}(\rho) \cap E_{0,N'}(\rho) = \delta_{N,N'}\, E_{0,N}(\rho)$, with $E_{D=0}(\rho) = \bigcup_{N \geq d_S} E_{0,N}(\rho)$. Within $E_{0,N}(\rho)$, we can use the HJW theorem with the interpretation in which the ensemble is the conditional ensemble. Here, $p_\alpha$ and $\chi_\alpha$ are generated by creating a purification of dimension $N$, in which the first $d_S$ elements of the basis $\{|SP_j\rangle\}_{j=1}^{d_S}$ are fixed and the remaining $N - d_S$ are free. We denote the entire basis of this type with the same symbol but a different label: $\{|SP_\alpha\rangle\}_{\alpha=1}^{N}$. The ensemble we obtain if we measure in it is the eigenensemble $L(\rho)$.
However, measuring in a different basis yields a general ensemble, with probabilities $p_\alpha = \langle \psi(\rho) | \big( \mathbb{I}_S \otimes |v_\alpha\rangle\langle v_\alpha| \big) | \psi(\rho) \rangle = \sum_{j=1}^{d_S} \lambda_j\, |\langle SP_j | v_\alpha \rangle|^2$ and states $|\chi_\alpha\rangle = \sum_{j=1}^{d_S} \frac{\sqrt{\lambda_j}\, \langle v_\alpha | SP_j \rangle}{\sqrt{p_\alpha}}\, |\lambda_j\rangle$. With $h_0 = -\sum_\alpha p_\alpha \log p_\alpha$, the absolute maximum is attained at $p_\alpha = 1/N$. We now show, constructively, that this is always achievable while still satisfying the constraints $C_1 = C^\rho_{jk} = 0$, thus solving the maximization problem.
This is achieved by measuring the environment in a basis that is unbiased with respect to the Schmidt partner basis:
$$\{|v_\alpha\rangle\}: \quad \langle v_\alpha | SP_\beta \rangle = \frac{e^{i\theta_{\alpha\beta}}}{\sqrt{N}}, \qquad \alpha, \beta = 1, \ldots, N.$$
One such basis can always be built starting from $\{|SP_\alpha\rangle\}_\alpha$ by exploiting the properties of the Weyl–Heisenberg matrices via the clock-and-shift construction [37]. This is true for all $N \in \mathbb{N}$. When $N = \prod_k n_k^{N_k}$ with $n_k$ primes and $N_k$ some integers, the finite-field algorithm [38,39] can be used to build a whole suite of $N$ bases that are unbiased with respect to the Schmidt partner basis. This leads to $|\chi_\alpha\rangle = \sum_{j=1}^{d_S} \sqrt{\lambda_j}\, e^{i\theta_{\alpha j}}\, |\lambda_j\rangle$ and to:
$$\mu_{ME}^{D=0} = \delta^p_\lambda \cdot \frac{1}{N} \sum_{\alpha=1}^N \delta^\phi_{\theta_\alpha}, \qquad h_0 = \log N,$$
with $\lambda = (\lambda_1, \ldots, \lambda_{d_S})$ and $\theta_\alpha = (\theta_{\alpha 1}, \ldots, \theta_{\alpha d_S})$.
To conclude this subsection, we simply have to show that this ensemble satisfies the constraints: $C_1 = 0$ and that the density matrix given by $\mu_{ME}^{D=0}$ is $\rho$, giving $C^\rho_{jk} = 0$:
$$\sigma_{ME}^{jk} := \mathbb{E}_{\mu_{ME}^{D=0}}\big[ (Z^j)^* Z^k \big] = \frac{\sqrt{\lambda_j \lambda_k}}{N} \sum_{\alpha=1}^N e^{i(\theta_{\alpha j} - \theta_{\alpha k})} = \sqrt{\lambda_j \lambda_k}\, \delta_{jk} = \lambda_k\, \delta_{jk} = \rho_{jk}.$$
Here, the key property used is that $\frac{1}{N} \sum_{\alpha=1}^N e^{i(\theta_{\alpha \gamma} - \theta_{\alpha \beta})} = \langle SP_\beta | \Big( \sum_{\alpha=1}^N |v_\alpha\rangle\langle v_\alpha| \Big) | SP_\gamma \rangle = \delta_{\beta\gamma}$, which comes from Equation (2) and the fact that $\{|v_\alpha\rangle\}_\alpha$ is a basis.
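The construction can be verified numerically. The sketch below (with illustrative values $d_S = 3$, $N = 5$) uses Fourier-type phases $\theta_{\alpha j} = 2\pi \alpha j / N$, for which the measured basis is unbiased with respect to the Schmidt partner (computational) basis, and checks that the uniform ensemble reproduces $\rho$.

```python
import numpy as np

d_S, N = 3, 5
lam = np.array([0.5, 0.3, 0.2])            # hypothetical eigenvalues (eigenbasis |j>)
rho = np.diag(lam).astype(complex)

# Phases of a clock/Fourier basis unbiased w.r.t. the computational one.
alpha = np.arange(N)[:, None]
j = np.arange(d_S)[None, :]
theta = 2 * np.pi * alpha * j / N

# Rows: |chi_alpha> = sum_j sqrt(lam_j) e^{i theta_aj} |lam_j>; each has norm 1.
chi = np.sqrt(lam) * np.exp(1j * theta)

# sum_alpha (1/N)|chi_alpha><chi_alpha| should equal rho.
rho_rebuilt = chi.T @ chi.conj() / N

print(np.allclose(rho_rebuilt, rho))       # True; this ensemble attains h_0 = log N
```

The off-diagonal terms cancel because $\frac{1}{N}\sum_\alpha e^{2\pi i \alpha (j-k)/N} = \delta_{jk}$ for $|j - k| < N$, which is the numerical counterpart of the key property above.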

4.2. Full Support: $D = 2(d_S - 1)$

The second case of interest is the one in which the quantum information dimension takes the maximum value possible, namely $D = 2(d_S - 1)$. Then, the GQS's support has the same dimension as the full quantum state space and the optimization problem is also tractable. This is indeed the case solved by Brody and Hughston [27]. We do not reproduce their treatment here, which is almost identical in the language of GQM. Rather, we discuss some of its physical aspects from the perspective of conditional ensembles.
If $D = 2(d_S - 1)$ and there are no other constraints aside from $C_1$ and $C^\rho_{jk}$, the measure $\mu_{ME}^{2(d_S-1)}$ can be expressed as an integral with a density $q_{ME}$ with respect to the uniform, normalized Fubini–Study measure $dV_{FS}$:
$$\mu_{ME}^{2(d_S-1)}[A] = \int_A dV_{FS}\ q_{ME}(Z).$$
And its geometric entropy $h_{2(d_S-1)}$ is the continuous counterpart of Shannon's functional on the quantum state space:
$$h_{2(d_S-1)} = -\int_{\mathcal{P}(\mathcal{H}_S)} dV_{FS}\ q(Z) \log q(Z).$$
This was proven in Ref. [34]. Hereafter, with a slight abuse of language, we refer to both μ M E 2 ( d S 1 ) and the density q M E ( Z ) as an ensemble or the GQS.
The maximization problem leads to:
$$q_{ME}(Z) = \frac{1}{Q_{2(d_S-1)}(\rho)}\, e^{-\sum_{j,k} \gamma_{jk} (Z^j)^* Z^k}, \qquad Q_{2(d_S-1)}(\rho) \equiv \int_{\mathcal{P}(\mathcal{H}_S)} dV_{FS}\ e^{-\sum_{j,k} \gamma_{jk} (Z^j)^* Z^k},$$
and the Lagrange multipliers $\gamma_{jk}$ are the solution of the nonlinear equations $-\partial \log Q / \partial \gamma_{jk} = \rho_{jk}$. We note how using as reference basis the eigenbasis $\{|\lambda_j\rangle\}_{j=1}^{d_S}$ of $\rho$, and $Z \mapsto (\vec{p}, \vec{\phi})$ as coordinate system, reveals that $-\partial \log Q / \partial \gamma_{jk} = 0$ when $j \neq k$ and $-\partial \log Q / \partial \gamma_{jj} = \lambda_j$. Thus, in this coordinate system the dependence of $\mu_{ME}^{2(d_S-1)}$ on the off-diagonal Lagrange multipliers disappears and we retain only the diagonal ones, $\gamma_{jj}$.
Moving to a single label, $\gamma_{jj} \to \gamma_j$, and using vector notation:
$$q_{ME}^{2(d_S-1)}(\vec{p}, \vec{\phi}) = \frac{1}{Q_{2(d_S-1)}(\vec{\tau})}\, e^{\vec{\tau} \cdot \vec{p}}, \qquad Q_{2(d_S-1)}(\vec{\tau}) \equiv \int_{\Delta_{d_S-1}} d\vec{p}\ e^{\vec{\tau} \cdot \vec{p}}, \qquad \vec{\tau} \equiv \big( \gamma_{d_S} - \gamma_1,\ \gamma_{d_S} - \gamma_2,\ \ldots,\ \gamma_{d_S} - \gamma_{d_S-1} \big).$$
Here, $Q(\vec{\tau})$ is the normalization function (a partition function). Its exact expression can be derived analytically and is given in Appendix A.
We can see how $\mu_{ME}^{2(d_S-1)}$ is the product of an exponential measure on the probability simplex $\Delta_{d_S-1}$ and the uniform measure on the high-dimensional torus of the phases $\mathbb{T}^{d_S-1}$. This leads to the following geometric entropy $h_{2(d_S-1)}$:
$$h_{2(d_S-1)}(\vec{\tau}) = \log Q_{2(d_S-1)}(\vec{\tau}) - \vec{\tau} \cdot \vec{\lambda}.$$
In this case the explicit expression of the Lagrange multipliers τ satisfying the constraints, which was previously unknown, can be found analytically. This is reported in Appendix B.
We note that this exponential distribution on the probability simplex was recently proposed within the context of statistics and data analysis in Ref. [40]. Moreover, the exponential form associated with the maximum entropy principle is reminiscent of thermal behavior. Indeed, the shape of this distribution is closely related to the geometric canonical ensemble; see Refs. [7,14,27]. However, the value of the Lagrange multipliers is set by a different constraint, in which we fix the average energy rather than the whole density matrix.
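For a qubit ($d_S = 2$) the simplex reduces to $p \in [0,1]$ and a single multiplier $\tau$ remains, fixed by the constraint $\mathbb{E}[p] = \lambda_1$. The sketch below (illustrative, with a hypothetical $\lambda_1 = 0.7$; the paper's closed forms are in Appendix B) finds $\tau$ by bisection and checks the constraint by inverse-CDF sampling from $q(p) \propto e^{\tau p}$.

```python
import numpy as np

lam1 = 0.7                                     # hypothetical eigenvalue of rho

def mean_p(tau):
    """E[p] under q(p) = e^{tau p}/Q on [0, 1], with Q = (e^tau - 1)/tau."""
    if abs(tau) < 1e-9:
        return 0.5                             # uniform-measure limit
    return 1.0 / (1.0 - np.exp(-tau)) - 1.0 / tau

# Bisection: mean_p is monotonically increasing in tau; lam1 > 1/2 needs tau > 0.
lo, hi = 1e-6, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_p(mid) < lam1 else (lo, mid)
tau = 0.5 * (lo + hi)

# Monte Carlo check of the constraint, via inverse-CDF sampling of q(p).
u = np.random.default_rng(5).uniform(size=200_000)
p = np.log(1.0 + u * (np.exp(tau) - 1.0)) / tau
print(round(p.mean(), 2))                      # close to lam1
```

The mean formula is $\partial_\tau \log Q$ with $Q = (e^\tau - 1)/\tau$, i.e. the standard cumulant relation for the exponential family on the interval.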

4.3. Integer, but Otherwise Arbitrary, D

While one expects $D$ to be an integer, there are GQSs that have fractal support, thus exhibiting a noninteger $D$. This was shown in Ref. [34]. This section discusses the generic case in which $D \in (0, 2(d_S-1))$, but it is still an integer. Within $E_D(\rho)$, our ensemble $\mu_{ME}^D$ has support on a $D$-dimensional submanifold of the full $2(d_S-1)$-dimensional quantum state space, where it has a density. Reference [34] discusses, in detail, the case in which $D = 1$ and $d_S = 2$. Here, we generalize the procedure to arbitrary $D$ and $d_S$.
If the support of $\mu_{ME}^D$ is contained in a submanifold of dimension $D < 2(d_S-1)$, which we call $S_D$, we can pull the Fubini–Study metric $g_{FS}$ back to $S_D$ to get $g^S$. Let us call $X^j: \xi^a \in S_D \mapsto X^j(\xi^a) \in \mathcal{P}(\mathcal{H}_S)$ the functions which embed $S_D$ into the full quantum state space $\mathcal{P}(\mathcal{H}_S)$. Then, the metric induced on $S_D$ is $g^S_{ab} = \sum_{j,k} \partial_a X^j\, \partial_b X^k\, g^{FS}_{jk}$, where $\partial_a := \partial / \partial \xi^a$. Note that here we are using the "real index" notation even for coordinates $X^j$ on $\mathcal{P}(\mathcal{H}_S)$. While $\mathcal{P}(\mathcal{H}_S)$ is a complex manifold, admitting complex homogeneous coordinates, we can always use real coordinates on it. Then, $g^S$ induces a volume form $d\omega^S_\xi = \omega^S(d\xi) = \sqrt{\det g^S}\, d\xi$, where $d\xi$ is the Lebesgue measure on the $\mathbb{R}^D$ which coordinatizes $S_D$. Then, $\mu_{ME}^D$ can be written as:
$$\mu_{ME}^D[A \subseteq S_D] = \int_A d\omega^S_\xi\ f(\xi).$$
Eventually, this leads to:
$$h_D = -\int_{S_D} d\omega^S_\xi\ f(\xi) \log f(\xi).$$
This allows rewriting the constraints explicitly in a form that involves only probability densities on $S_D$:
$$C_1 = \mu_{ME}^D[S_D] - 1 = \int_{S_D} d\omega^S_\xi\ f(\xi) - 1, \qquad C_{jk} = \mathbb{E}_{\mu_{ME}^D}\big[(Z^j)^* Z^k\big] - \rho_{jk} = \int_{S_D} d\omega^S_\xi\ f(\xi)\, (Z^j)^*(\xi)\, Z^k(\xi) - \rho_{jk},$$
where $Z(\xi): \xi \in S_D \mapsto Z(\xi) \in \mathcal{P}(\mathcal{H}_S)$ is the homogeneous-coordinate representation of the embedding functions $X^j$ of $S_D$ into $\mathcal{P}(\mathcal{H}_S)$.
The solution of the optimization problem leads to the Gaussian form, in homogeneous coordinates, with support on S D :
q M E D ( ξ ) = 1 Q D e j , k = 1 d S γ j k ( Z j ) * ( ξ ) Z k ( ξ ) Q D = S D d ω S ξ e j , k = 1 d S γ j k ( Z j ) * ( ξ ) Z k ( ξ ) .
Again, we can move from a homogeneous representation to a symplectic one, Z ( ξ ) → p ( ξ ) , ϕ ( ξ ) , in which the reference basis is the eigenbasis of ρ . This gives ρ j k = λ j δ j k , which, in turn, means we only need the diagonal Lagrange multipliers γ j j . As in the previous case, we move to single-label notation γ j j → γ j :
q M E D ( ξ ) = 1 Q D ( τ ) e τ · p ( ξ ) , Q D ( τ ) = S D d ω S ξ e τ · p ( ξ ) τ γ d S γ 1 , γ d S γ 2 , , γ d S γ d S 1
with an analytical expression for the entropy:
h D ( τ ) = log Q D ( τ ) τ · λ .
While this solution appears to have much in common with the D = 2 ( d S − 1 ) case, there are profound differences. The functions p ( ξ ) can be highly degenerate, since we are embedding a low-dimensional manifold, S D , into a higher-dimensional one, P ( H S ) . Indeed, the coordinates ξ emerge from coordinatizing a submanifold of dimension D within one of dimension 2 ( d S − 1 ) . This means that for S D there are 2 ( d S − 1 ) − D independent equations of the type K n ( Z ) = 0 , with n = 1 , … , 2 ( d S − 1 ) − D . In general, we expect them to be highly nonlinear functions of their arguments. While choosing an appropriate coordinate system can simplify matters, this choice has to be made on a case-by-case basis. In specific cases, discussed in the next section, several exact solutions can be found analytically.
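To make these formulas concrete, consider the simplest case d S = 2 with uniform phases, where the maximum entropy density reduces to f ( p ) = e τ p / Q ( τ ) on p ∈ [ 0 , 1 ] . The following numerical sketch (an illustration of ours, with an arbitrary test value of τ ) checks the entropy identity h D ( τ ) = log Q D ( τ ) − τ · λ against the direct definition − ∫ f log f :

```python
import math

def trapezoid(g, a, b, n=20000):
    # Composite trapezoid rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

tau = 1.7  # arbitrary Lagrange multiplier (hypothetical test value)

Q = trapezoid(lambda p: math.exp(tau * p), 0.0, 1.0)      # partition function Q(tau)
f = lambda p: math.exp(tau * p) / Q                        # maximum entropy density
lam = trapezoid(lambda p: p * f(p), 0.0, 1.0)              # constrained mean E[p] = lambda
h_direct = trapezoid(lambda p: -f(p) * math.log(f(p)), 0.0, 1.0)
h_formula = math.log(Q) - tau * lam                        # h_D(tau) = log Q_D(tau) - tau . lambda

print(h_direct, h_formula)  # agree to quadrature accuracy
```

The agreement is exact up to quadrature error, since − ∫ f log f = log Q − τ E [ p ] holds identically for densities of this exponential form.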

4.4. Noninteger D : Fractal Ensembles

As Ref. [34] showed, even measuring the environment in a local basis can lead to GQSs with noninteger D . For example, if we explicitly break the translational invariance of the spin-1/2 Heisenberg model in 1D by changing the local magnetic field of one spin, the GQS of one of its spins is described, in the thermodynamic limit of an infinite environment, by a fractal resembling the Cantor set. Its quantum information dimension has been estimated numerically to be D ≈ 0.83 ± 0.02 , while its geometric entropy grows linearly with N E , the size of the environment: h 0.83 ≈ 0.66 N E . Their existence gives physical meaning to the question of finding the maximum geometric entropy ensemble with noninteger D .
Providing concrete solutions to this problem is quite complex, as it requires a generic parametrization of an ensemble with arbitrary fractional D . As far as we know, this is currently not available. While we do know that certain ensembles have a noninteger D , there is no guarantee that fixing the value of the information dimension, e.g., D = N / M with N , M ∈ N relatively prime, translates into an explicit way of parametrizing the ensemble. We leave this problem open for future work.

5. How Does μ ME Emerge?

While the previous section gave the technical details regarding ensembles resulting from the proposed maximum geometric quantum entropy principle, the following identifies the mechanisms for their emergence in a number of cases of physical interest.

5.1. Emergence of μ M E 0

As partly discussed in the previous section, μ M E 0 can emerge naturally as a conditional ensemble, when our system of interest interacts with a finite-dimensional environment (dimension N). If the environment is probed with projective measurements in a basis that is unbiased with respect to the Schmidt-partner basis | S P α α = 1 N , we reach the absolute maximum of the geometric entropy, log N . The resulting GQS is μ M E 0 = δ λ p 1 N α = 1 N δ θ α ϕ , with members of the ensemble being | χ α   = j = 1 d S λ j e i θ α j | λ j and p α = 1 / N .
As argued in Ref. [41], the notion of unbiasedness is typical. Physically, this is interpreted as follows. Imagine someone gives us | ψ ( ρ ) ⟩ , a purification of ρ , without telling us anything about the way the purification was performed. This means we know nothing about the way ρ has been encoded into | ψ ( ρ ) ⟩ . Equivalently, we do not know what the | S P j ⟩ , j = 1 , … , d S , are. If we now choose a basis of the environment in which to study the conditional ensemble, | v α ⟩ , α = 1 , … , N , it will carry very little information about the | S P j ⟩ : there is a very high chance that we will end up very close to the unbiasedness condition.
The mathematically rigorous version of “very high chance” and “very close” is given in Ref. [41] and is not needed here. The only thing we need is that this behavior is exponential in the size of the environment: the fraction of bases for which ⟨ v α | S P j ⟩ 2 ≈ 1 / N is 1 − 2 − N . Therefore, statistically speaking, in the absence of meaningful information about what the | S P j ⟩ are, it is extremely likely that the conditional ensemble we see is μ M E 0 .
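This concentration is easy to observe numerically. The sketch below (our own illustration; the Schmidt partner is taken, without loss of generality, to be the first basis vector) draws Haar-like random environment vectors and shows that the overlaps ⟨ v α | S P j ⟩ 2 concentrate around 1 / N :

```python
import math, random

random.seed(0)

def random_state(n):
    # Haar-like random unit vector in C^n: normalized complex Gaussian.
    z = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    norm = math.sqrt(sum(abs(c) ** 2 for c in z))
    return [c / norm for c in z]

# Mean overlap |<v|SP>|^2 of a random environment vector with a fixed
# Schmidt-partner vector (taken as the first basis vector).
results = {}
for n in (8, 64, 512):
    trials = [abs(random_state(n)[0]) ** 2 for _ in range(200)]
    results[n] = sum(trials) / len(trials)
    print(n, results[n])  # concentrates around 1/N as N grows
```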

5.2. Emergence of μ M E 2 ( d S 1 )

For μ M E 2 ( d S 1 ) to emerge as a conditional ensemble, our d S -dimensional quantum system must interact with an environment that is being probed with measurements whose outcomes are parametrized by 2 ( d S 1 ) continuous variables, each with the cardinality of the reals. This is because we have to guarantee that D = 2 ( d S 1 ) . Therefore, conditioning on projective measurements on a finite environment is insufficient. One possibility is to have a finite environment that we measure on an overcomplete basis, like coherent states. A second possibility is to have a genuinely infinite-dimensional environment, on which we perform projective measurements. For example, we could have 2 ( d S 1 ) / 3 quantum particles in 3 D that we measure on the position basis n = 1 2 ( d S 1 ) / 3 | x n , y n , z n . All the needed details were given in Ref. [6], where we studied the properties of a GQS emerging from a finite-dimensional quantum system interacting with one with continuous variables.
We stress here that this is only a necessary condition, not a sufficient one. Indeed, we can have an infinite environment that is probed with projective measurements on variables with the right properties but still obtain an ensemble that is not μ M E 2 ( d S 1 ) . An interesting example of this is given by the continuous generalization of the notion of unbiased basis. We illustrate this in a simple example of a purification obtained with a set of 2 ( d S 1 ) real continuous variables, realized by 2 ( d S 1 ) non-interacting particles in a 1 D box [ 0 , L ] .
In this setting, the unbiasedness condition is satisfied by position and momentum eigenstates: ⟨ x | k ⟩ = e i k · x / √ V . Thus, if our Schmidt partners are momentum eigenstates | S P j ⟩ = | k j ⟩ , j = 1 , … , d S , and we measure the environment in the position basis, we do not obtain a GQS with the required D = 2 ( d S − 1 ) . Indeed, while we do obtain q ( x ) = ⟨ ψ ( ρ ) | I S ⊗ | x ⟩ ⟨ x | | ψ ( ρ ) ⟩ = 1 / V , the members of the ensemble | χ ( x ) ⟩ = ∑ j = 1 d S √ λ j e i k j · x | λ j ⟩ are not distributed in the appropriate way.
This leads to the ensemble δ λ − p ⊗ 1 / ( 2 π ) d S − 1 , which has the wrong information dimension: D = d S − 1 , not D = 2 ( d S − 1 ) . This clarifies why, in order to have D = 2 ( d S − 1 ) , using an environmental basis that is unbiased with respect to the Schmidt partners is not enough. Specifically, the probabilities p j ( x ) = ⟨ λ j | χ ( x ) ⟩ 2 = λ j do not depend on x : they are not redistributed by the unbiasedness condition and always remain equal to the eigenvalues of ρ .
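This freezing of the populations can be seen in a few lines of code (a sketch of ours, with hypothetical eigenvalues and wavenumbers): the conditional amplitudes are √ λ j e i k j · x , so the p j ( x ) never move while the phases sweep the torus.

```python
import cmath, math

lam = [0.5, 0.3, 0.2]  # eigenvalues of rho (hypothetical values)
k = [1.0, 2.0, 5.0]    # wavenumbers of the momentum Schmidt partners (hypothetical)

def ensemble_member(x):
    # Conditional amplitudes sqrt(lam_j) <x|k_j>; for plane waves |<x|k_j>| is constant.
    amps = [math.sqrt(l) * cmath.exp(1j * kj * x) for l, kj in zip(lam, k)]
    q = sum(abs(a) ** 2 for a in amps)    # conditional weight (x-independent here)
    p = [abs(a) ** 2 / q for a in amps]   # populations p_j(x)
    phi = [cmath.phase(a) for a in amps]  # phases phi_j(x)
    return p, phi

p0, phi0 = ensemble_member(0.0)
p1, phi1 = ensemble_member(0.7)
print(p0, p1)      # identical: populations stay frozen at the eigenvalues of rho
print(phi0, phi1)  # different: only the phase torus is explored
```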
If we measure on a different basis | l   : = V d x u l * ( x ) , we obtain a different GQS since l | S P j = V d x u l ( x ) e i k j · x = F l ( k j ) is essentially the Fourier transform of u l :
q ( l ) = j = 1 d S λ j F l ( k j ) 2
| χ ( l )   = j = 1 d S λ j F l ( k j ) q ( l ) | λ j .
Equation (5) gives the functions p j ( l ) , ϕ j ( l ) j = 1 d S 1 :
p j ( l ) = λ j F l ( k j ) 2 n = 1 d S λ n F l ( k n ) 2 ,
ϕ j ( l ) = Arg F l ( k j ) .
This, together with the density q ( l ) specifies the ensemble via μ = d l q ( l ) δ p ( l ) p δ ϕ ( l ) ϕ .
Finding the exact conditions that lead to μ = μ M E 2 ( d S − 1 ) involves solving a complex inverse problem. However, what we have accomplished so far allows us to understand the real mechanism behind its emergence. First, the ϕ j ( l ) must be uniformly distributed: they must be random phases. Second, the distribution of p must be of exponential form. The first condition can always be ensured by choosing some ⟨ l | x ⟩ = u l ( x ) and then multiplying it by pseudo-random phases, generated in a way that is completely independent of p . This can always be achieved without breaking the unitarity of ⟨ l | x ⟩ via u l ( x ) → u l ( x ) e i θ l . This guarantees that the marginal distribution over the phases is uniform and that the density q ( p , ϕ ) becomes a product of its marginals, since the distribution of the ϕ has been built to be independent of everything else: q ( p , ϕ ) = f ( p ) · unif ( ϕ ) . Then, in order for q ( p , ϕ ) to be the maximum entropy density, we need f ( p ) = 1 Q ( τ ) e τ · p .
Given a nondegenerate p ( l ) , this can be ensured by a specific form of q ( l ) since f ( p ) = d l q ( l ) δ p ( l ) p :
q ( l ) = det J ( l ) e τ · p ( l ) Q ( τ ) f ( p ) = e τ · p Q ( τ ) ,
where J is the Jacobian matrix of the coordinate change l p ( l ) . Checking that this form leads to the right distribution is simply a matter of coordinate changes. Alternatively, it can be seen by repeated use of the Laplace transform on the simplex, together with the result 1 Q ( τ ) Δ d S 1 d p e a · p e τ · p = Q ( τ a ) / Q ( τ ) . We now see the mechanism at play in a concrete way and how it leads to the maximum entropy GQS μ M E 2 ( d S 1 ) .
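The pushforward relation can be checked numerically in one dimension (our own sketch; the sigmoid map p ( l ) is an arbitrary monotone choice standing in for the coordinate change l → p ( l ) ):

```python
import math

tau = 2.0  # hypothetical multiplier

def trapz(g, a, b, n=40000):
    # Composite trapezoid rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

p = lambda l: 1.0 / (1.0 + math.exp(-l))  # monotone map l -> p(l) in (0, 1)
dp = lambda l: p(l) * (1.0 - p(l))        # its Jacobian dp/dl

Q = trapz(lambda x: math.exp(tau * x), 0.0, 1.0)
f = lambda x: math.exp(tau * x) / Q       # target maximum entropy density on p
q = lambda l: dp(l) * f(p(l))             # induced density on the label l

# Any observable g(p) has the same expectation computed either way.
g = lambda x: x ** 2
via_l = trapz(lambda l: q(l) * g(p(l)), -30.0, 30.0)
via_p = trapz(lambda x: f(x) * g(x), 0.0, 1.0)
print(via_l, via_p)  # agree: q(l) pushes forward to f(p)
```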
First, let us take the label l = ( l 1 , , l 2 ( d S 1 ) ) and split it in two l = ( a , b ) with a = ( a 1 , , a d S 1 ) and b = ( b 1 , , b d S 1 ) . Then, d l = d a d b . At this stage, the choice of l , the splitting, and | S P j are arbitrary. Then, we make the choice that a , b | S P j   = A j ( a ) e i B j ( b ) . The only property we need to check is that | a , b a , b can be a complete set:
d a A j ( a ) A k ( a ) d b e i ( B j ( b ) B k ( b ) ) = δ j k .
We can choose B j ( b ) such that ∫ d b e i ( B j ( b ) − B k ( b ) ) = M δ j k , for example, by choosing the B j ( b ) to be linear functions. Then, choosing A j ( a ) such that ∫ d a A j ( a ) 2 = 1 / M guarantees completeness. With this choice, we obtain:
q ( a , b ) = ∑ j = 1 d S λ j A j ( a ) 2 , p j ( a , b ) = λ j A j ( a ) 2 ∑ n = 1 d S λ n A n ( a ) 2 ≡ p j ( a ) , ϕ j ( a , b ) = B j ( b ) ≡ ϕ j ( b ) .
The probability density q ( a , b ) can be written as a product of two probability densities: q ( a , b ) = f ( a ) · 1 / M . Here, 1 / M is the uniform density for b and f ( a ) = ∑ j = 1 d S λ j M A j ( a ) 2 is a probability density for a . Then, the GQS becomes a product of two densities: one over the probability simplex (for p ) and another over the phases (for ϕ ):
μ = d a d b q ( a , b ) δ p ( a , b ) p δ ϕ ( a , b ) ϕ , = d a f a ( a ) δ p ( a ) p · 1 M d b δ ϕ ( b ) ϕ , = f p ( p ) f ϕ ( ϕ ) .
These are the formulas for two changes of variable in integrals: a → p ( a ) and b → ϕ ( b ) . Since these are invertible, we can confirm what we understood before: f ϕ ( ϕ ) = unif [ ϕ ] when the phases ϕ ( b ) are uniformly distributed, and f p ( p ) = e τ · p Q ( τ ) when the p ( a ) are exponentially distributed, f a ( a ) = det J a → p ( a ) e τ · p ( a ) Q ( τ ) , with J a → p being the Jacobian matrix of the change of variables a → p ( a ) .
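As a concrete instance of this construction (our own choice, not unique), taking B j ( b ) = j b on b ∈ [ 0 , 2 π ) yields M = 2 π , and the orthogonality ∫ d b e i ( B j ( b ) − B k ( b ) ) = M δ j k can be verified by direct quadrature:

```python
import cmath, math

M = 2.0 * math.pi  # with B_j(b) = j*b on [0, 2*pi), the overlap integral gives M = 2*pi
n = 4096           # quadrature points

def phase_overlap(j, k):
    # Riemann sum of exp(i*(B_j(b) - B_k(b))) over b in [0, 2*pi), with B_j(b) = j*b.
    h = M / n
    return sum(cmath.exp(1j * (j - k) * i * h) for i in range(n)) * h

for j in (1, 2, 3):
    for k in (1, 2, 3):
        val = phase_overlap(j, k)
        target = M if j == k else 0.0
        print(j, k, abs(val - target))  # ~ 0 in all cases
```

For integer-spaced linear B j the uniform Riemann sum is exact, since off-diagonal terms sum roots of unity.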

5.3. Stationary Distribution of Some Dynamics

A second mechanism, which can lead to the emergence of an ensemble with D = 2 ( d S 1 ) , is time averaging. Indeed, if we are in a nonequilibrium dynamical situation in which the system and its environment jointly evolve with a dynamical (possibly unitary) law, its conditional ensembles μ ( t ) depend on time.
To study stationary behavior from dynamics, one looks at time-averaged ensembles μ ( t ) ¯ = lim T → ∞ 1 T ∫ 0 T μ ( t ) d t that, in this case, have a certain stationary density matrix ρ s s = ρ ( t ) ¯ . Unless something peculiar happens, we expect the ensemble to cover large regions of the full state space, leading to a stationary GQS with D = 2 ( d S − 1 ) and a given density matrix ρ s s .
Intuitively, we expect dynamics that are chaotic in quantum state space to lead to ensembles described by μ M E 2 ( d S − 1 ) . This is because the ensemble that emerges must be compatible with a density matrix ρ s s while still exhibiting nontrivial dynamics due to the action of the environment. We now give a simple example of how this happens. Borrowing from Geometric Quantum Thermodynamics [7], where we studied a qubit with a Caldeira–Leggett-like environment, the resulting evolution of the qubit can be described by a stochastic Schrödinger equation which, as shown in Ref. [7], leads to a maximum entropy ensemble (see Equation (3)) of the required type.

5.4. Emergence of μ M E d S 1

Among all possible values of D , a third one which is particularly relevant is D = d S 1 , which is half the maximum value. The reason why this is important comes from the symplectic nature of the quantum state space and, ultimately, from dynamics. One physical situation in which μ M E d S 1 emerges naturally is the study of the dynamics of pure, isolated quantum systems. The phenomenology we discuss here is known, being intimately related to thermalization and equilibration studies. We discuss it here only in connection with the maximum geometric entropy principle introduced in Section 4.
Imagine an isolated quantum system in a pure state | ψ 0 ⟩ evolving unitarily with a dynamics generated by some time-independent Hamiltonian H = ∑ n = 1 D E n | E n ⟩ ⟨ E n | . Assuming a nondegenerate energy spectrum, the dynamics is given by | ψ t ⟩ = ∑ n = 1 D √ p n 0 e i ( ϕ n 0 − E n t ) | E n ⟩ , where √ p n 0 e i ϕ n 0 ≡ ⟨ E n | ψ 0 ⟩ and we have used symplectic coordinates in the energy eigenbasis. Since the p ∈ Δ D − 1 are conserved quantities, p n t = p n 0 , and the ϕ ∈ T D − 1 evolve independently and linearly, ϕ n t = ϕ n 0 − E n t , on a high-dimensional torus, a sufficient condition for the emergence of ergodicity on T D − 1 is the so-called non-resonance condition: the energy gaps have to be non-degenerate, namely E n − E k = E a − E b if and only if ( n = k and a = b ) or ( n = a and k = b ).
This condition is usually true for interacting many-body quantum systems. If that is the case, then the evolution of the phases is ergodic on T D 1 . This was first proven by von Neumann [42] in 1929. Calling ( p ( t ) , ϕ ( t ) ) the instantaneous state and with δ p ( t ) p δ ϕ ( t ) ϕ the corresponding Dirac measure on the quantum state space, we have:
lim T 1 T 0 T d t δ p ( t ) p δ ϕ ( t ) ϕ = δ p ( 0 ) p · unif ϕ T D 1 ,
where unif ϕ T D 1 is the uniform measure on T D 1 in which all ϕ n are uniformly and independently distributed on the circle. It is not too hard to see that this is the maximum geometric entropy ensemble with D = d S 1 , compatible with the fact that the occupations of the energy eigenstates are all conserved quantities: p n t = p n 0 .
Indeed, these d S 1 constraints provide d S 1 independent equations, thus reducing the state-space dimension that the system explores to the high-dimensional torus T D 1 . On this, however, the dynamics is ergodic and the resulting stationary measure is the uniform one. By definition, this is the measure with the highest possible value of geometric entropy since its density is uniform and equal to q M E ϕ ( ϕ ) = 1 Vol [ T D 1 ] , where Vol [ T D 1 ]   = T D 1 d ω F S ϕ = ( 2 π ) D 1 is the volume of T D 1 , computed with the Fubini–Study volume element projected on T D 1 , that is k = 1 D 1 d ϕ k :
h d S 1 = T D 1 d ω F S ϕ 1 Vol [ T D 1 ] log 1 Vol [ T D 1 ] = log Vol [ T D 1 ] .
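The ergodicity underlying this result is easy to see numerically: with non-resonant energies, the time average of e i ϕ n ( t ) decays toward zero, which is the first circular moment of a uniform phase distribution. (A sketch of ours; the energies and initial phases are arbitrary test values.)

```python
import cmath, math

E = [1.0, math.sqrt(2.0), 0.5 * math.pi]  # non-resonant energies (hypothetical values)
phi0 = [0.3, 1.1, 2.0]                    # initial phases

def time_avg_moment(n, T, steps=100000):
    # (1/T) * integral_0^T exp(i * phi_n(t)) dt, with phi_n(t) = phi0_n - E_n * t.
    dt = T / steps
    s = sum(cmath.exp(1j * (phi0[n] - E[n] * i * dt)) for i in range(steps))
    return s * dt / T

for n in range(3):
    print(n, abs(time_avg_moment(n, T=2000.0)))  # -> 0 as T grows: uniform phases
```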

5.5. Comment on the Generic μ M E D

To have a GQS with generic information dimension D result from a conditional measurement on an environment, we must condition on at least D continuous variables with the cardinality of the reals. This can be achieved either via measurements on an overcomplete basis, such as coherent states, or via projective measurements on an infinite dimensional environment with at least D real coordinates. This condition is necessary, but not sufficient, to guarantee the emergence of the corresponding maximum entropy ensemble μ M E D . While we have seen that the notion of an unbiased basis is relevant when D = 0 , we also saw how this falls short in the generic D > 0 case. Understanding this general condition is a nontrivial task that requires a much deeper understanding of how systems encode quantum information in their environment and how this is extracted by means of quantum measurements. Further work in this direction is ongoing and will be reported elsewhere.
For a GQS with arbitrary integer dimension D to emerge as a stationary distribution on a quantum state space of dimension 2 ( d S − 1 ) , it is likely that we need 2 ( d S − 1 ) − D independent equations constraining the dynamics. Indeed, due to the continuity of time and the smoothness of the time evolution in quantum state space, we expect D ∈ N in the vast majority of cases. If the equations constraining the motion on the quantum state space are linear, then having 2 ( d S − 1 ) − D independent equations is both necessary and sufficient for D to be the quantum information dimension. This, however, says virtually nothing about the maximization of the relevant geometric entropy h D . Moreover, constraints on an open quantum system can take very generic forms and the relevant equations will not always be linear.
An explicit example where such an ensemble can be found constructively is the case of an isolated quantum system. The conditions for the emergence of a maximum entropy μ M E d S − 1 are then known, being equivalent to the conditions for ergodicity of linear dynamics on a high-dimensional torus.

6. Conclusions

While a density matrix encodes all the statistics available from performing measurements on a system, it does not reveal how those statistics were created, and infinitely many possibilities are available. A natural way to select a unique ensemble behind a given density matrix is to approach the problem from the perspective of information theory. The issue then becomes a standard inference problem, one to which we can apply the maximum entropy principle. Formulating the problem this way requires a principled way to compute the entropy of an ensemble. While this is trivial for ensembles with a finite number of elements, it is not for continuous ensembles. The correct answer, the notion of Geometric Quantum Entropy h D , was given in Ref. [34]. This, however, depends strongly on another quantity that characterizes the ensemble: the quantum information dimension D . Consequently, we formulated the maximum geometric entropy principle at fixed quantum information dimension. This is a one-parameter class of maximum entropy principles, labeled by D , that can be used to explore the various ways in which ensembles give rise to a specific density matrix.
As often happens with inference principles, the generic optimization problem can be hard to solve. Here, however, we solved a number of cases in which the ensemble can be found analytically. We also explored the physical mechanisms responsible for the emergence of μ M E D . Two different classes of situations were considered: (i) conditional ensembles, resulting from measuring the environment of our system of interest, and (ii) stationary distributions, in which the statistics arise from aggregating data over time. We also identified and discussed various instances where both mechanisms lead to a maximum entropy ensemble.

Author Contributions

Both authors contributed equally to the conceptualization of the work and writing of the manuscript. F.A. did the calculation while J.P.C. provided guidance and contextualization. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported by, or in part by, a Templeton World Charity Foundation Power of Information Fellowship, FQXi Grant FQXi-RFP-IPW-1902, and U.S. Army Research Laboratory and the U.S. Army Research Office under contracts W911NF-21-1-0048 and W911NF-18-1-0028.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

F.A. thanks Pawel Horodecki, Karol Życzkowski, Marina Radulaski, Davide Pastorello, Akram Touil, Sebastian Deffner and Davide Girolami for discussions on geometric quantum mechanics. F.A. and J.P.C. thank David Gier and Ariadna Venegas-Li for helpful discussions and the Telluride Science Research Center for its hospitality during visits.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Calculating the Partition Function

Recall:
Z λ α β = P ( H ) e α , β λ α β Z α Z ¯ β d V F S .
Since the λ α β are the Lagrange multipliers of the constraints C α β , which are not all independent, we can choose them to form a Hermitian matrix. Thus, we can always diagonalize them with a unitary matrix:
α β U γ α λ α β U β ϵ = l γ δ γ ϵ .
This allows us to define auxiliary integration variables X γ = α U γ α Z α . Thanks to these, we express the quadratic form in the exponent of the integrand using that ( U U ) α β = δ α β :
Z λ Z ¯ = α β Z α λ α β Z ¯ β = α β α ˜ β ˜ Z α δ α α ˜ λ α ˜ β ˜ δ β β ˜ Z ¯ β ˜ = α β α ˜ β ˜ a b Z α U α a U a α ˜ λ α ˜ β ˜ U β ˜ b U b β Z ¯ β = a X a 2 l a .
Moreover, recalling that the Fubini–Study volume element is invariant under unitary transformations, we can simply adapt our coordinate system to the X a , writing X a = q a e i ν a . This gives d V F S = ∏ a = 1 D − 1 d q a d ν a 2 . We arrive at the following simpler functional:
Z λ α β = P ( H ) e a = 0 D 1 l a q a k = 1 D 1 d q a d ν a 2 = ( 2 π ) D 1 2 D 1 Δ D 1 e a l a q a a d q a .
Now, we are left with an integral of an exponential function over the ( D − 1 ) -simplex. We can use a Laplace transform trick to solve this kind of integral:
I D 1 ( r ) Δ D 1 k = 0 D 1 e l k q k δ k = 0 D 1 q k r d q 1 d q D 1 I ˜ D 1 ( z ) 0 e z r I D 1 ( r ) d r , I ˜ n ( z ) = k = 0 n ( 1 ) k ( l k + z ) = ( 1 ) n ( n + 1 ) 2 k = 0 n 1 z z k ,
with z k = l k R .
The function I ˜ n ( z ) has n + 1 real and distinct poles: z = z k = l k . Hence, we exploit the partial fraction decomposition of I ˜ n ( z ) , which is:
( 1 ) n ( n + 1 ) 2 k = 0 n 1 z z k = ( 1 ) n ( n + 1 ) 2 k = 0 n R k z z k ,
where:
R k = ( z z k ) I ˜ n ( z ) z = z k = j = 0 , j k n ( 1 ) n ( n + 1 ) 2 z k z j .
Exploiting linearity of the inverse Laplace transform plus the basic result:
L 1 1 s + a ( t ) = e a t Θ ( t ) ,
where:
Θ ( t ) = 1 t 0 0 t < 0 .
we have:
I n ( r ) = L 1 [ I ˜ n ( z ) ] ( r ) = Θ ( r ) k = 0 n R k e z k r .
And so:
Z = I D 1 ( 1 ) = k = 0 D 1 e l k j = 0 , j k D 1 ( l k l j ) .
Now, consider that l a are linear functions of the true matrix elements:
l a = f a ( λ α β ) = α β U a α λ α β U β a .
We arrive at:
Z ( λ α β ) = ∑ k = 0 D − 1 e f k ( λ α β ) ∏ j = 0 , j ≠ k D − 1 ( l k ( λ α β ) − l j ( λ α β ) ) .
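The residue sum can be spot-checked numerically. The sketch below is our own; with its sign conventions, the simplex integral of e − l · q equals ∑ k e − l k / ∏ j ≠ k ( l j − l k ) , any remaining signs being absorbed in the ( − 1 ) prefactors above. It compares the closed form against brute-force quadrature for three levels, with arbitrary test values of the l k :

```python
import math

l = [0.5, 1.3, 2.2]  # hypothetical multipliers, all distinct

def closed_form(l):
    # Residue sum: sum_k exp(-l_k) / prod_{j != k} (l_j - l_k).
    total = 0.0
    for k, lk in enumerate(l):
        denom = math.prod(lj - lk for j, lj in enumerate(l) if j != k)
        total += math.exp(-lk) / denom
    return total

def brute_force(l, n=600):
    # Midpoint-rule integral of exp(-l . q) over the 2-simplex:
    # q1, q2 >= 0, q1 + q2 <= 1, with q0 = 1 - q1 - q2.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n - i):
            q1, q2 = (i + 0.5) * h, (j + 0.5) * h
            q0 = 1.0 - q1 - q2
            total += math.exp(-(l[0] * q0 + l[1] * q1 + l[2] * q2)) * h * h
    return total

cf, bf = closed_form(l), brute_force(l)
print(cf, bf)  # agree to quadrature accuracy
```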

Appendix B. Calculating Lagrange Multipliers

Given the expression for the partition function, we now show that the values of the Lagrange multipliers γ j k can be given analytically by extending the Laplace transform technique exploited in Appendix A.
The nonlinear equation to fix γ j k is:
log Q γ j k = 1 Q P ( H S ) d V F S ( Z j ) * Z k e a , b γ a b ( Z a ) * Z b = ρ j k .
We now use as reference basis the eigenbasis of ρ and as coordinate system ( p , ϕ ) . This means only the diagonals γ j j enter the equation.
log Q γ k k = λ k 1 Q Δ d S 1 n = 1 d S 1 d p n p k e a γ a a p a = 1 Q R + d S n = 1 d S d p n p k e a = 1 d S γ a a p a δ j p j 1 .
To compute this, we use the same Laplace transform technique as before, with a minor adaptation. First, we switch to single-label notation γ j j → l j ; then, we define:
J D 1 ( k ) ( r ) R + D n = 1 D d p n p k j = 1 D e l j p j δ k = 1 D p k r
Its Laplace transform is:
J ˜ D 1 ( k ) ( z ) = 0 e z r J D 1 ( k ) ( r ) = R + D n = 1 D d p n p k j = 1 D e ( l j + z ) p j = n k 0 d p n e ( l n + z ) p n × 0 d p k ( p k ) e ( l k + z ) p k = n = 1 D 0 d p n e ( l n + z ) p n × l k log 0 d p k e ( l k + z ) p k = n = 1 D G n ( z ) log G k ( z ) l k
where:
G k ( z ) 0 d p k e ( l k + z ) p k = 1 l k + z 0 d y e y = 1 l k + z log G k ( z ) l k = G k ( z )
Therefore, we obtain:
J ˜ D 1 ( k ) ( z ) = n k D G n ( z ) × G k ( z ) 2 = n k 1 z z n 1 ( z z k ) 2 where z n = l n .
This can be written again as a sum, using the partial fraction decomposition:
J ˜ D 1 ( k ) ( z ) = n k 1 z z n 1 ( z z k ) 2 = n k R n z z n + R k ( 1 ) ( z z k ) 2 ,
where:
R n = ( z z n ) J ˜ D 1 ( k ) ( z ) z = z n = j n , k 1 z n z j 1 ( z n z k ) 2 = j n 1 z n z j 1 ( z n z k ) R k ( 1 ) = ( z z k ) 2 J ˜ D 1 ( k ) ( z ) z = z k = j k 1 z k z j .
Exploiting the basic Laplace transform result:
L 1 1 s + a n ( t ) = t n 1 e a t Γ ( n ) Θ ( t ) ,
we can then invert the relation to compute J D − 1 ( k ) ( r = 1 ) :
J D 1 ( r ) = L 1 J ˜ D 1 ( k ) ( z ) ( r ) = j k R j e z j r + r R k ( 1 ) e z k r .
Eventually, remembering that z k = l k = γ k k , we obtain:
J D 1 ( k ) ( 1 ) = j k R j e γ j j + R k ( 1 ) e γ k k = ( 1 ) D j k e γ j j a j ( γ j j γ a a ) ( γ j j γ k k ) e γ k k a k ( γ k k γ a a ) .
The Lagrange’s multipliers γ j can then be fixed by solving:
J D 1 ( k ) ( 1 ) = λ k ,
where λ k are the eigenvalues of the density matrix: ρ j k = δ j k λ k .
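For a qubit ( d S = 2 ), the fixed-point condition reduces to a single scalar equation that can be solved by bisection. The sketch below (ours, using the e τ · p convention of Section 4 and a hypothetical target eigenvalue) recovers τ and verifies the constraint:

```python
import math

def trapz(g, a, b, n=4000):
    # Composite trapezoid rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

def mean_p(tau):
    # E[p] under f(p) = exp(tau * p) / Q(tau) on [0, 1]; strictly increasing in tau.
    Q = trapz(lambda p: math.exp(tau * p), 0.0, 1.0)
    return trapz(lambda p: p * math.exp(tau * p), 0.0, 1.0) / Q

def solve_tau(lam, lo=-50.0, hi=50.0, iters=50):
    # Bisection on the scalar constraint E[p] = lambda.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_p(mid) < lam:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = 0.7  # target eigenvalue of rho (hypothetical)
tau = solve_tau(lam)
print(tau, mean_p(tau))  # constraint recovered: mean_p(tau) ~ 0.7
```

In higher dimensions the same idea applies componentwise, but the coupled equations require a multivariate root finder rather than scalar bisection.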

References

  1. Pathria, R.K.; Beale, P.D. Statistical Mechanics; Elsevier B.V.: Amsterdam, The Netherlands, 2011. [Google Scholar]
  2. Greiner, W.; Neise, L.; Stöcker, H. Thermodynamics and Statistical Mechanics; Springer: New York, NY, USA, 1995. [Google Scholar] [CrossRef]
  3. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  4. Jaynes, E.T. Information Theory and Statistical Mechanics. II. Phys. Rev. 1957, 108, 171–190. [Google Scholar] [CrossRef]
  5. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2006. [Google Scholar]
  6. Anza, F.; Crutchfield, J.P. Beyond Density Matrices: Geometric Quantum States. Phys. Rev. A 2020, 103, 062218. [Google Scholar] [CrossRef]
  7. Anza, F.; Crutchfield, J.P. Geometric Quantum Thermodynamics. Phys. Rev. E 2022, 106, 054102. [Google Scholar] [CrossRef]
  8. Strocchi, F. Complex Coordinates and Quantum Mechanics. Rev. Mod. Phys. 1966, 38, 36–40. [Google Scholar] [CrossRef]
  9. Kibble, T.W.B. Geometrization of quantum mechanics. Comm. Math. Phys. 1979, 65, 189–201. [Google Scholar] [CrossRef]
  10. Heslot, A. Quantum mechanics as a classical theory. Phys. Rev. D 1985, 31, 1341–1348. [Google Scholar] [CrossRef] [PubMed]
  11. Gibbons, G.W. Typical states and density matrices. J. Geom. Phys. 1992, 8, 147–162. [Google Scholar] [CrossRef]
  12. Ashtekar, A.; Schilling, T.A. Geometry of quantum mechanics. In AIP Conference Proceedings; AIP: Melville, NY, USA, 1995; Volume 342, pp. 471–478. [Google Scholar] [CrossRef]
  13. Ashtekar, A.; Schilling, T.A. Geometrical formulation of quantum mechanics. In On Einstein’s Path; Springer: New York, NY, USA, 1999; pp. 23–65. [Google Scholar] [CrossRef]
  14. Brody, D.C.; Hughston, L.P. Geometric quantum mechanics. J. Geom. Phys. 2001, 38, 19–53. [Google Scholar] [CrossRef]
  15. Bengtsson, I.; Zyczkowski, K. Geometry of Quantum States; Cambridge University Press: Cambridge, UK, 2017; p. 419. [Google Scholar] [CrossRef]
  16. Cariñena, J.F.; Clemente-Gallardo, J.; Marmo, G. Geometrization of quantum mechanics. Theoret. Math. Phys. 2007, 152, 894–903. [Google Scholar] [CrossRef]
  17. Chruściński, D. Geometric aspects of quantum mechanics and quantum entanglement. J. Phys. Conf. Ser. 2006, 30, 9–16. [Google Scholar] [CrossRef]
  18. Marmo, G.; Volkert, G.F. Geometrical description of quantum mechanics—transformations and dynamics. Phys. Scr. 2010, 82, 038117. [Google Scholar] [CrossRef]
  19. Avron, J.; Kenneth, O. An elementary introduction to the geometry of quantum states with pictures. Rev. Math. Phys. 2020, 32, 2030001. [Google Scholar] [CrossRef]
  20. Pastorello, D. A geometric Hamiltonian description of composite quantum systems and quantum entanglement. Intl. J. Geom. Meth. Mod. Phys. 2015, 12, 1550069. [Google Scholar] [CrossRef]
  21. Pastorello, D. Geometric Hamiltonian formulation of quantum mechanics in complex projective spaces. Intl. J. Geom. Meth. Mod. Phys. 2015, 12, 1560015. [Google Scholar] [CrossRef]
  22. Pastorello, D. Geometric Hamiltonian quantum mechanics and applications. Int. J. Geom. Methods Mod. Phys. 2016, 13, 1630017. [Google Scholar] [CrossRef]
  23. Clemente-Gallardo, J.; Marmo, G. The Ehrenfest picture and the geometry of quantum mechanics. Il Nuovo C. C 2013, 3, 35–52. [Google Scholar] [CrossRef]
  24. Hughston, L.P.; Jozsa, R.; Wootters, W.K. A complete classification of quantum ensembles having a given density matrix. Phys. Lett. A 1993, 183, 14–18. [Google Scholar] [CrossRef]
  25. Wiseman, H.M.; Vaccaro, J.A. Inequivalence of pure state ensembles for Open Quantum Systems: The preferred ensembles are those that are physically realizable. Phys. Rev. Lett. 2001, 87, 240402. [Google Scholar] [CrossRef] [PubMed]
  26. Goldstein, S.; Lebowitz, J.L.; Tumulka, R.; Zanghi, N. On the Distribution of the Wave Function for Systems in Thermal Equilibrium. J. Stat. Phys. 2006, 125, 1193–1221. [Google Scholar] [CrossRef]
  27. Brody, D.C.; Hughston, L.P. Information content for quantum states. J. Math. Phys. 2000, 41, 2586–2592. [Google Scholar] [CrossRef]
  28. Karasik, R.I.; Wiseman, H.M. How Many Bits Does It Take to Track an Open Quantum System? Phys. Rev. Lett. 2011, 106, 020406. [Google Scholar] [CrossRef] [PubMed]
  29. Schrödinger, E. The exchange of energy in wave mechanics. Ann. Der Phys. 1927, 387, 257–264. [Google Scholar]
  30. Schrödinger, E. Statistical Thermodynamics; Dover Publications: New York, NY, USA, 1989; p. 95. [Google Scholar]
  31. Walecka, J.D. Fundamentals of Statistical Mechanics. Manuscript and Notes of Felix Bloch; Stanford University Press: Stanford, CA, USA, 1989. [Google Scholar]
  32. Goldstein, S.; Lebowitz, J.L.; Mastrodonato, C.; Tumulka, R.; Zanghi, N. Universal Probability Distribution for the Wave Function of a Quantum System Entangled with its Environment. Commun. Math. Phys. 2016, 342, 965–988. [Google Scholar] [CrossRef]
  33. Reimann, P. Typicality of Pure States Randomly Sampled According to the Gaussian Adjusted Projected Measure. J. Stat. Phys. 2008, 132, 921–935. [Google Scholar] [CrossRef]
  34. Anza, F.; Crutchfield, J.P. Quantum Information Dimension and Geometric Entropy. PRX Quantum 2022, 3, 020355. [Google Scholar] [CrossRef]
  35. Pollard, D. A User’s Guide to Measure Theoretic Probability; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  36. Kolmogorov, A. Foundations of the Theory of Probability; Chelsea Publishing Company: Chelsea, MI, USA, 1933. [Google Scholar]
  37. Vourdas, A. Quantum systems with finite Hilbert spaces. Rep. Prog. Phys. 2004, 67, 267. [Google Scholar] [CrossRef]
  38. Bengtsson, I. Three ways to look at Mutually Unbiased Bases. AIP Conf. Proc. 2007, 889, 40–51. [Google Scholar]
  39. Lawrence, J.; Brukner, Č.; Zeilinger, A. Mutually unbiased binary observable sets on N qubits. Phys. Rev. A 2002, 65, 032320. [Google Scholar] [CrossRef]
  40. Gordon-Rodriguez, E.; Loaiza-Ganem, G.; Cunningham, J. The continuous categorical: A novel simplex-valued exponential family. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 8 June 2020; pp. 3637–3647. [Google Scholar]
  41. Anza, F.; Gogolin, C.; Huber, M. Eigenstate Thermalization for Degenerate Observables. Phys. Rev. Lett. 2018, 120, 150603. [Google Scholar] [CrossRef]
  42. Neumann, J.v. Beweis des Ergodensatzes und des H-Theorems in der neuen Mechanik. Z. Phys. 1929, 57, 30–70. [Google Scholar] [CrossRef]

Anza, F.; Crutchfield, J.P. Maximum Geometric Quantum Entropy. Entropy 2024, 26, 225. https://doi.org/10.3390/e26030225
