1. Introduction
One of the main tasks of any physical theory is to try to answer the question "what is the theory about?". It is usually said that this interpretative task is rather easy except for quantum mechanics, and that the conceptual problems of the latter theory are not eased by considering its relativistic generalization. In fact, this judgment needs qualification. Two examples will suffice. In the case of Newtonian mechanics, the lack of direct observability of the gravitational force has always caused some conceptual perplexities. It is no coincidence that, at the end of the 19th century, the great physicist Heinrich Hertz [1] tried to formulate classical mechanics by dispensing with the concept of force: all we can observe directly are accelerations. The same question can be raised about classical electromagnetism, our second example: do lines of force exist? In his valuable book [2], Marc Lange has answered this question in the negative. The need for an interpretation of the mathematical formalism in which a physical theory is formulated is important not only from a philosophical viewpoint: an attempt to clarify the ontology of a physical theory has often had heuristic value. At the end of the 19th century, there was widespread disagreement among famous physicists about the ontology of our best physical theories: some considered the atomistic hypothesis to be only a useful fiction (Ostwald, Mach and Poincaré among others), while others (like Boltzmann) firmly believed in the mind-independent existence of atoms. Spurred by this controversy, physicists attempted to settle the dispute experimentally: the convergence of 13 different experimental ways of determining Avogadro's number became a clear piece of evidence in favour of the existence of atoms ([3]). The whole community of physicists quickly converted to the hypothesis that atoms are real, or mind-independent. Consequently, the energetist approach, according to which the primary stuff of nature is energy, was progressively abandoned. In our case, if we do not try to construct mathematically precise models providing a satisfactory, exact answer to the crucial question "why and how does a measurement of the position of an electron that, when going through two slits, was in a superposition of positions turn non-locally into a well-localized point on a fluorescent screen?", we will probably miss important future developments of theoretical physics obtainable by new experiments. Furthermore, there seems to be a sort of sociological change in the physics community. A few authoritative figures of contemporary physics (among them Gell-Mann and Weinberg, both winners of the Nobel Prize in physics) have changed their minds about the importance of a deeper study of the conceptual foundations of contemporary physics. This also implies going back to the founding fathers' philosophical discussions [4]. Following the seminal work of John Bell, Roger Penrose, Hugh Everett and GianCarlo Ghirardi, among others, contemporary physicists are beginning to realize that the measurement problem, besides its intrinsic interest, may even be a stumbling block toward the construction of a quantum theory of gravity. An attentive philosophical analysis is therefore called for.
First of all, as Maudlin has remarked [5], a theory is not realist or instrumentalist per se. To be a realist or an instrumentalist is to have an attitude toward a theory. Instrumentalists in general argue that the aim of science is to construct empirically adequate theories, so that the function of physics is to predict and control the physical world. Realists, on the contrary, claim that the aim of science is to provide a consistent description of a mind-independent physical world and of our place in it. They often regard the instrumentalist attitude toward quantum theory (often unwittingly absorbed by students in physics departments) as unreasonable: quantum theory as standardly taught, they claim, is "not even a theory", precisely because of its lack of ontological clarity, or exactness, as Bell put it. Even the staunchest defender of a realistic stance toward the theory, however, must accept the claim that merely instrumentalist attitudes (of the type 'shut up and calculate') toward the quantum state cannot be viewed as inconsistent, since, as suggested above, they call into play the overarching aim of the scientific enterprise.
In this paper, we will take a realistic attitude without further justification. As is well known, the main candidates for a realistic approach toward quantum mechanics are (1) Everett's many-worlds interpretation, (2) Bohmian mechanics and (3) collapse models. As mentioned above, in our review we will focus on (3) by offering a novel, multiperspectival approach, bringing together not only the theoretical and experimental aspects but also a philosophical discussion of the main conceptual problems presented by the first two aspects. In this sense, we claim that a combination of these three aspects can offer a more complete and therefore deeper understanding of the current, relevant literature. Before beginning, however, an extremely brief comparison with the other two realistic approaches is appropriate. (i) The ontology of Everettian quantum mechanics is about the wave function. By postulating the splitting of the physical world in any interaction, this view changes the metaphysics without changing the physical theory. There is no dualism between the linear evolution described by the Schrödinger equation on the one hand and a non-linear, irreversible dynamics governed by the Born rule on the other. The measurement problem is thereby solved. On the contrary, Bohmian mechanics and dynamical collapse models are different theories and not, strictly speaking, interpretations of quantum mechanics. (ii) Bohmian mechanics adds to the Schrödinger equation a so-called 'guiding equation', which specifies the velocities of the particles in terms of the wave function. In particular, the velocity of each particle depends non-locally on the positions of all the others. In the Bohmian case, the ontology of the theory is essentially one of particles, while the status of the wave function, allegedly evolving in configuration space, is more controversial. In any case, Bohmian mechanics is presented as a completion of the standard theory, which instead presupposes the two evolutions but is regarded as complete. (iii) In the case of dynamical collapse models, the Schrödinger equation is modified, or better generalized, via the addition of a non-linear term. In some sense, the Schrödinger equation is "wrong" and, as we will see, needs to be supplemented in an appropriate way. The ontology of this theory is more pluralistic than that of the other two, consisting of the hypothesis that the wave function denotes either (i) galaxies of events, as Bell put it (known as flashes), resulting from localizations of the wave function, or (ii) a matter field propagating in physical space, or (iii) the 3N-dimensional configuration space itself describing the N particles composing the physical system, or (iv) a dispositional property of an ensemble of particles, which is the hypothesis that we will defend.
Within the collapse theories, the first two ontologies are based on spatiotemporally extended "beables" (a term introduced by Bell in opposition to "observables"). The third ontology is about the 3N-dimensional configuration space, which describes the configuration of the particles of the system. The fourth involves primitive dispositions, that is, quantitative propensities possessed by the particles to localize. In a word, as Bell put it, if we want to formulate quantum mechanics exactly, the wave function must either be incomplete (Bohm) or not always right (GRW). The following review of collapse models consists of a synthesis of three different aspects, namely a theoretical, an experimental and a philosophical one, at a level that is technically more advanced than, say, some among the many books available in the literature ([5,6,7]). In this sense, we claim that its value consists in synthesizing three different but inseparable dimensions of the collapse models that should have always been discussed together. We believe that such a synthesis may provide a deeper understanding of one of the main research programs in the foundations of physics.
In the first part of the paper, we briefly summarize the main theoretical features of the collapse models. In the second part, we present possible experimental tests of the theory. In the last part, we discuss the three above-mentioned ontological assumptions (flashes, matter density, configuration space) by evaluating them in view of the first two parts of the paper.
2. The GRW Model
As is well-known, the key problem of quantum theory is how to reconcile the quantum nature of the microscopic constituents of matter with the classical properties of composite systems such as macroscopic objects. The textbook formulation of the theory ultimately assumes a mysterious division between the microscopic quantum world and the macroscopic classical one, but why there is such a division, and where it precisely lies, is not explained. The theory only says that when performing a measurement that connects the micro and the macro world, the quantum wave function collapses to a definite state. However, again, why it collapses and when it does so is not spelled out in clear terms.
Collapse models [8,9,10,11] aim at solving this problem by combining the linear and deterministic quantum dynamics and the collapse of the wave function, which is nonlinear and stochastic, into a single dynamical equation, capable of accounting for the quantum properties of protons, neutrons and electrons, and the classical properties of a macroscopic object, as well as the (smooth) transition from one domain to the other. As a matter of fact, the title of the original GRW paper [8] is "Unified dynamics for microscopic and macroscopic systems". Collapse models assume that any physical system, be it large or small, is ultimately quantum mechanical, and as such it is described by a wave function. The time evolution of the wave function is guided by a dynamical equation, which departs from Schrödinger dynamics. Precisely, it is assumed that every constituent of the physical system is subject to spontaneous collapses in space. They occur at random times and are governed by a given probability law, characterized by the collapse rate λ, i.e., the number of collapses per unit of time. In mathematical terms, what happens during a collapse is that the wave function ψ_t of the whole system changes instantaneously to

\psi_t \;\longrightarrow\; \frac{\hat{L}_i(x)\,\psi_t}{\lVert \hat{L}_i(x)\,\psi_t \rVert},    (1)

with the localization operator defined as

\hat{L}_i(x) = \left(\pi r_C^2\right)^{-3/4} \exp\!\left[-\frac{(\hat{q}_i - x)^2}{2 r_C^2}\right],    (2)

where q̂_i is the position operator of the i-th constituent of the system suffering the collapse, and x the center of the collapse. We see that a collapse corresponds to multiplying the global wave function by a Gaussian function (and normalizing again the state), which suppresses those parts of the wave function that are far away from the center of the collapse, and keeping only those that are close to the center: the wave function is localized in space, with a precision controlled by the length r_C.
For example, if the wave function before the collapse is in a superposition of states which are separated by more than r_C, a collapse suppresses one of the two terms and amplifies the other, so that after the collapse the wave function is localized in space (again, with respect to the resolution set by r_C). On the other hand, if the wave function is in a superposition of states that are closer than r_C, the collapse will be ineffective, as none of the terms will be suppressed. This is an important feature, because a collapse which is too sharp in space would jeopardize the internal structure of matter, according to which the wave function of electrons can be delocalized over several atoms. For this reason, the suggested [8] numerical value of the precision of the collapse is r_C ∼ 10^-7 m, which is a typical mesoscopic distance, meaning with this that macroscopic superpositions are suppressed, while microscopic ones are not.
The probability for particle i to experience a localization around a point x of space is given by

P_i(x) = \lVert \hat{L}_i(x)\,\psi_t \rVert^2,    (3)

which implies that the collapses are more likely to occur around those points in space where the wave function is appreciably different from zero. This is another way of saying that the collapse occurs following (almost) the Born rule.
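To make Equations (1)–(3) concrete, here is a minimal numerical sketch (not part of the original GRW formulation) of a single localization hit acting on a one-dimensional superposition. The grid size, packet widths and separation are arbitrary illustrative choices; only r_C is set to the value quoted above.

```python
import numpy as np

# Toy 1D illustration of a single GRW localization "hit", cf. Equations (1)-(3).
# All numerical choices below (grid, packet width, separation) are illustrative.
rng = np.random.default_rng(0)

x = np.linspace(-1e-6, 1e-6, 4000)   # spatial grid [m]
dx = x[1] - x[0]
r_C = 1e-7                            # localization length r_C ~ 1e-7 m

def gaussian(z, mu, sigma):
    return np.exp(-(z - mu) ** 2 / (2 * sigma ** 2))

# Superposition of two packets separated by much more than r_C
psi = gaussian(x, -4e-7, 2e-8) + gaussian(x, 4e-7, 2e-8)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def L(center):
    """Localization operator of Eq. (2), restricted to one dimension."""
    return (np.pi * r_C ** 2) ** (-0.25) * np.exp(-(x - center) ** 2 / (2 * r_C ** 2))

# Probability density for the collapse center, Eq. (3)
P = np.array([np.sum(np.abs(L(c) * psi) ** 2) * dx for c in x])
P = P / (np.sum(P) * dx)

# Sample a collapse center according to P and apply Eq. (1)
center = rng.choice(x, p=P * dx / np.sum(P * dx))
psi_after = L(center) * psi
psi_after = psi_after / np.sqrt(np.sum(np.abs(psi_after) ** 2) * dx)

# Almost all of the collapsed wave function lies within a few r_C of the center:
weight = np.sum(np.abs(psi_after[np.abs(x - center) < 3 * r_C]) ** 2) * dx
print(f"collapse centered at {center:+.2e} m, weight within 3 r_C: {weight:.3f}")
```

Running the sketch shows that one of the two branches is suppressed and the surviving wave function is confined within a few r_C of the sampled collapse center.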
In between collapses, the wave function evolves following the Schrödinger equation. As such, the collapse evolution is piece-wise: the wave function spreads out in space as dictated by the usual quantum dynamics, and enjoys being in a superposition, until a collapse occurs, which localizes it in space; then it can spread out again till the next collapse, and so on.
The spontaneous collapses must be rare for microscopic systems, otherwise they would have already been spotted. For this reason GRW [8] suggested that λ ≃ 10^-16 s^-1, meaning that for a single electron or proton, a collapse occurs on average once every ∼10^8 years, which is more or less never. Then for microscopic systems, the new dynamics are almost identical to the usual quantum dynamics, modulo tiny deviations, which in light of potential tests of these models are the most interesting ones. What makes collapse models interesting and viable as a consistent quantum theory is the inbuilt amplification mechanism: when one particle of a composite system suffers a localization, the wave function of the entire system is localized. Therefore, if we take for example a typical macroscopic object with ∼10^24 constituents, each of which is subject to a collapse once every ∼10^16 s, the wave function of the system suffers a localization once every ∼10^-8 s: every second there are about 10^8 collapses occurring somewhere in the system, which keeps its wave function well-localized in space.
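The amplification arithmetic in the previous paragraph can be spelled out explicitly; the following few lines simply multiply the single-particle rate by the number of constituents, using the standard values quoted above.

```python
# Back-of-the-envelope check of the amplification mechanism: the effective
# collapse rate of a composite object is just N times the single-particle rate.
# Values of lambda and N are the standard GRW figures quoted above.
lam = 1e-16          # single-particle collapse rate [1/s]
year = 3.15e7        # seconds in a year (approximate)

for label, N in [("single particle", 1), ("macroscopic object", 1e24)]:
    rate = lam * N                       # collapses per second for the whole system
    print(f"{label}: {rate:.0e} collapses/s, one collapse every {1/rate:.0e} s "
          f"(~{1/(rate * year):.0e} years)")
# single particle    -> 1e-16 collapses/s, one collapse every 1e16 s (~3e8 years)
# macroscopic object -> 1e8 collapses/s, one collapse every 1e-8 s
```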
Then, the picture that emerges is the following. The wave function of microscopic systems is spread out in space: the Schrödinger equation makes it diffuse via the kinetic term, or creates superpositions through the interaction term, while collapses are so rare that they can be safely neglected. When particles interact to form more complex systems, their wave functions become entangled in a unique global wave function, which is subject to the collapses associated with its constituents. In this situation, the amplification mechanism enters into play: the collapse rate associated with the system's wave function is proportional to the number of its constituents, and if this number is large enough, as for macroscopic objects, the collapse rate becomes high enough to guarantee that the system's wave function has no time to spread out in space. As a consequence, the wave function of a macroscopic object is always well-localized in space, so well-localized that it behaves, for all practical purposes, like a point, moving subject to external forces, as in Newtonian mechanics.
According to the previous analysis, the solution to the quantum measurement problem offered by collapse models is the following: a microscopic system enters the measurement process in a quantum state that can be in any superposition allowed by the experimental situation; if no external influence disturbs it, this superposition is stable. During the measurement process (here we follow the ideal scheme proposed by von Neumann), the microscopic system becomes entangled with the macroscopic apparatus, and the superposition starts propagating from the first to the second. However, before a macroscopic superposition can be established, the many collapses occurring in the macroscopic instrument destroy it: the instrument will always be in a localized state, corresponding to one of the possible outcomes, and the state of the microscopic system will also be one of those that previously were in a superposition, in particular the one correlated with the outcome of the measurement. In addition, it can be proven that the states to which the superposition can collapse are distributed according to the Born rule.
This is how the GRW model, and collapse models in general, account for measurement situations in quantum theory and, equivalently, for Schrödinger's cat paradox: the classical world of tables and chairs (and cats) emerges from a wavy quantum world via a nonlinear dynamics, according to which the more constituents are glued to each other, the better localized in space their wave function becomes.
As a note, it should be clear that, within the context of collapse models, it is inappropriate to speak of microscopic 'particles', having in mind localized objects. We did, and will, use this term as is customary among physicists, but here it is misleading. At the microscopic level, the wave function of a 'particle' is spread out over space, and it would be odd to associate it with a point-like object. Only at the macroscopic level are wave functions well-localized, and only there can they be associated with localized objects, although in an approximate sense.
Two comments are in order. The first one is that the collapses as described before do not preserve the symmetry (or anti-symmetry) of the wave function representing identical particles. For example, this implies that the electrons in an atom would slowly all decay to the ground state because of the collapses. The stability of matter is thus jeopardized. This issue can be resolved by replacing the collapse operator defined in Equation (2) with a suitable operator preserving the identity of particles.
The second comment is that the piece-wise dynamics previously outlined, although consistent, might look somewhat artificial. This issue can also be resolved, by introducing a continuous version of the collapse, which acts on the wave function alongside the Schrödinger evolution. The resulting dynamics is a diffusion process for the wave function in Hilbert space, with either the unitary part or the collapse part dominating, depending on the size of the system, i.e., on the number of its constituents.
The model thus outlined, where the collapse is continuous and preserves the identity of particles, is called the continuous spontaneous localization (CSL) model [9].
3. Experimental Tests of Collapse Models
It is clear from the previous discussion that collapse models make predictions different from those of standard quantum mechanics. In this section, we discuss possible experiments proposed to test collapse models against the standard quantum theory. For a more detailed analysis of this topic, we refer to [11,12,13,14,15].
The most intuitive test of collapse models is realized by testing spatial quantum superpositions of massive objects that are much heavier than, say, a Cesium atom. For given parameters, collapse models forbid superpositions that are instead perfectly allowed by standard quantum theory. The simple idea behind the experimental test is to generate such superposition states and tune the parameters so that the collapse effect, if existing, would become apparent. Qualitatively, the parameters that one needs to control are the mass m of the particle, the spatial size l of the superposition state and the time t for which the superposition exists. The direct link of [m, l, t] with the collapse model's parameters has to be defined for each individual experimental configuration and model. For the CSL model with Adler's parameter choice, such a comparison has been performed in [11] at the level of the Lindblad master equation. This helps to identify the appropriate parameters for the test.
We note that this implies that experiments only test collapse models at the level of the density matrix, which is the same level at which decoherence effects appear. However, the decoherence approach, in contrast to collapse models, does not ascribe the collapse of the wave function to an intrinsic property of the dynamics of the quantum system [16,17]. Decoherence is triggered by interactions of the quantum system with its environment. Experimental studies (at the density matrix level) confirm the existence of decoherence [18,19,20,21]. This means that both collapse models and decoherence predict very similar effects to be observed in the density matrix dynamics, and great care has to be taken to distinguish the two effects.
Before we discuss in a bit more detail the experimental tests of collapse models, we want to mention the so-called macroscopicity measures. The quest to test collapse models naturally aims at testing quantum effects in the macroscopic domain. The definition of 'macroscopic' must be handled with care, as the most prominent collapse models scale with the mass of the quantum object. This means that a useful macroscopicity measure must include mass as a parameter. Other quantum systems, such as entangled photon pairs, are known to preserve their quantum features over many kilometers [MICIUS satellite, about 1000 km], but they would not represent a good system to test collapse models, as the rest mass of the photon is notoriously zero. Even if the objective choice of a measure of macroscopicity is still lively debated, we think it appropriate to mention the measure μ introduced by Nimmrichter and Hornberger [22], which fits in well with the test of collapse models by matter-wave-interferometry-like experiments, as it combines the set of parameters [m, l, t]. With μ at hand, it is possible to objectively compare very different experiments, ranging from optomechanics to superconducting flux devices, and how effective they are at testing collapse models.
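As a rough orientation, the following sketch evaluates a macroscopicity of the Nimmrichter–Hornberger type. We assume here the commonly quoted parametrization μ = log10[|ln f|⁻¹ (m/m_e)² t / 1 s], with m the tested mass, t the coherence time and f the fraction of the expected quantum visibility actually observed; this explicit form, and the example parameter values, are assumptions of this sketch rather than numbers taken from the text.

```python
import math

M_E_AMU = 5.485799e-4   # electron mass in atomic mass units

def macroscopicity(mass_amu: float, time_s: float, f: float = 1 / math.e) -> float:
    """Assumed Nimmrichter-Hornberger-type measure:
    mu = log10(|ln f|^-1 * (m / m_e)^2 * t / 1 s)."""
    return math.log10((1.0 / abs(math.log(f))) * (mass_amu / M_E_AMU) ** 2 * time_s)

# Purely illustrative parameter combinations (not experimental values):
print(macroscopicity(1e4, 1e-3))   # heavy-molecule TLI regime   -> ~11.5
print(macroscopicity(1e6, 2e-3))   # 10-nm nanoparticle scheme   -> ~15.8
```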
As mentioned above, the kind of experiment invoked to directly test the spatial superposition of massive particles is based on matter-wave interferometry, and the largest particles that have so far been shown to interfere are organic molecules. Experiments are performed in the Talbot–Lau interferometer (TLI) configuration, which copes well with the low coherence of molecular and nanoparticle beams [23]. The largest particles interfered so far have masses of the order of 10^4 amu, in the Vienna interferometers [24,25]. This experiment is still about two orders of magnitude too small to test the CSL model with the strongest bound according to Adler, given the mass of the record molecule, the separation of the slits on the gratings of the TLIs, which is about 100 nm, and the time for particles to travel through the TLIs, which is of the order of 1 ms.
Interestingly, by assuming the sense of macroscopicity given by μ, the generation of a superposition between the ground and the excited state of a very massive mechanical harmonic oscillator, such as those prepared in quantum optomechanics [26], cannot easily outperform the lighter molecules in the Talbot–Lau interferometers. The reason is that all three parameters [m, l, t] have to be sufficiently large. In the case of optomechanical systems, the mass m is many orders of magnitude larger than for molecules, but the spatial separation between the states in a superposition is very small; more precisely, if estimated with the spatial size of the zero-point motion of the mechanical harmonic oscillator, it is of the order of l = 10^-15 m or less.
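For orientation, the zero-point amplitude follows from x_zpf = sqrt(ħ/(2 m ω)); the mass and frequency used below are merely representative of a nanomechanical resonator and are not taken from the text.

```python
import math

HBAR = 1.055e-34   # reduced Planck constant [J s]

def x_zpf(mass_kg: float, omega_rad_s: float) -> float:
    """Zero-point motion of a harmonic oscillator, sqrt(hbar / (2 m omega))."""
    return math.sqrt(HBAR / (2 * mass_kg * omega_rad_s))

# e.g. a ~1 ng oscillator at ~1 MHz:
print(f"{x_zpf(1e-12, 2 * math.pi * 1e6):.1e} m")   # ~3e-15 m, i.e. femtometer scale
```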
Moreover, atoms are not likely to produce much stronger tests. Atom interferometry of the Mach–Zehnder type can be performed with very large beam separations [27], which makes the parameter l of the order of centimeters and therefore very competitive for a test, or atoms can be held for as long as 20 s in a dipole trap [28], which makes the parameter t quite large. However, a single atom is so much lighter than the molecules in TLIs that the macroscopicity is not larger. If the atoms underwent interference together, a collection of atoms would of course improve the mass factor. However, all interferometry experiments with Bose–Einstein condensates of atoms have revealed the individual superposition of atoms and not of the condensate ensemble, the superposition state being an N-particle product state of single-atom superpositions instead of a NOON state, which is the case for a molecule. The mass of the object in superposition can easily be worked out from the interference fringe pattern, and this turns out to be the individual atom. If (i) ultra-cold atoms could be bound together, or (ii) many of them could be made to interact with each other, say in a dipolar fashion, and (iii) a matter-wave interferometer for such atoms could be realized, we would have a very good test of collapse models. Developments are underway, and cold-atom interferometers hold promise for such a test, as this quantum technology is rich and powerful in the preparation of massive quantum states.
Future plans for matter-wave interferometers which achieve even larger macroscopicity involve the OTIMA and LUMI interferometers in Vienna [25,29]. OTIMA and LUMI are Talbot–Lau interferometers and should be able to interfere particles of masses well beyond the current record. The machines already exist, and now intense beams of slow nanoparticles have to be generated.
Experiments with single levitated nano- and microparticles were first suggested in 2010 [30,31,32]. More details for such an experiment have been worked out for a double-slit type of experiment in free fall [33]. A related scheme for nanoparticles based on a Talbot interferometer has been proposed as well, while interferometry of levitated nanoparticles has yet to be achieved. Experiments with a particle of 10 nm diameter [l = 100 nm, t = 2 ms] would directly test the CSL model with Adler's parameter choice [34]. Investigations of intrinsic noise in optically levitated systems naturally lead to the suggestion of Paul trapping of charged particles or, even better, the magnetic levitation of superconductors, and a rapidly growing experimental community is taking care of levitated nano- and microparticles in vacuum [35]. Ground-state cooling, as a first step toward macroscopic quantum systems based on levitation, has recently been achieved [36,37,38]. Matter-wave interferometry with levitated mechanical systems is coming into experimental reach and many alternatives for its realization exist. For a recent comprehensive review, see [39].
All such matter-wave tests have common limitations. Known decoherence effects due to collisions with other particles, such as residual gas in vacuum chambers, or to the interaction [emission, scattering, absorption] with thermal radiation make these experiments a massive technical challenge. Furthermore, all particles have to propagate freely, which means that they are affected by Earth's gravity. Particles of different mass fall by the same distance if in free motion for the same time, and the interferometry of more massive particles takes more time than the interferometry of lighter particles. This means that more massive particles have to be in free fall for longer and therefore fall over a longer distance. To keep the matter-wave experiment within reasonable dimensions [say 10 m in the vertical direction], the mass of the particle cannot be increased much further, cf. [34] (see the estimate sketched below). Experiments in micro-gravity in space would allow us to overcome this limit. The MAQRO collaboration, formed a decade ago to work towards a space mission, is preparing large-mass matter-wave interferometry in space [40,41]. Alternatively, a test at even larger masses could be performed on Earth if the particle could be prepared in a sufficiently macroscopic superposition while trapped at low noise, and possibly the evolution time of the wave function could be accelerated or boosted by an inflation operation [42,43].
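The scaling behind this mass limit can be illustrated with a crude estimate: the particle must remain in free fall for roughly two Talbot times T = m d²/h. The 10 m drop and the 100 nm grating period are the figures quoted above; the factor of two Talbot times is a simplifying assumption of this sketch.

```python
import math

G_EARTH = 9.81    # gravitational acceleration [m/s^2]
H = 6.626e-34     # Planck constant [J s]
AMU = 1.661e-27   # atomic mass unit [kg]
D = 100e-9        # grating period [m]

t_fall = math.sqrt(2 * 10.0 / G_EARTH)      # free-fall time over a 10 m drop (~1.4 s)
m_max = H * (t_fall / 2) / D ** 2 / AMU     # mass whose Talbot time equals t_fall / 2
print(f"t_fall = {t_fall:.2f} s, m_max ~ {m_max:.1e} amu")   # of the order of 1e7 amu
```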
However, better upper bounds on the collapse rate λ can be obtained by studying the indirect effects of the collapse. For example, collapse models predict an increase in the kinetic energy of any system, leading to consequences at the cosmological scale. More precisely, this increase in energy implies a heating of the intergalactic medium (IGM), which amounts to a change in its spectrum. This effect was studied in [12], setting an upper bound on λ.
Nowadays, the best upper bound on the λ parameter comes from the study of the process of spontaneous radiation emission. In fact, as shown for the first time in [44] and later discussed in more detail in [45,46,47,48,49,50], the interaction with the collapse noise induces, for charged systems, an emission of radiation not predicted by standard quantum mechanics. The radiation emitted by a single particle is very small, but when a macroscopic object containing a number of atoms of the order of the Avogadro number is considered, the effect becomes much more relevant. A comparison of the radiation emission predicted by the CSL model with experimental data on the emission from Ge has been performed in [44], leading to an upper bound on λ. A recent dedicated experiment at Gran Sasso has led to stronger bounds [51,52].
For completeness, in Table 1 we report all the bounds on λ coming from different experiments and cosmological data [11].
The technical complications in the implementation of macroscopic laboratory table-top matter-wave experiments have led to a number of alternative proposals, among which are non-interferometric tests of quantum superposition. Such ideas began with the treatment of the collapse field as a noise field that, if the given experiment were sufficiently sensitive, could in some way be observed as a distortion. The first of such ideas consisted in the proposal to discuss CSL effects for the light–matter interaction in a generic two-level system, as widely studied in quantum optics. The CSL noise is then just another noise that affects the emission properties of an excited two-level system: it triggers the relaxation of the excited electron, and therefore the emission of radiation, sooner than expected from the natural lifetime of the excited state. The result is a shift and broadening of the related spectral emission line, as worked out in detail by Bahrami et al. [53]. Unfortunately, both effects from CSL and other collapse models are very small, and ultrahigh-precision spectroscopy experiments are still at least two orders of magnitude away from resolving the effect on real spectra. Moreover, the two-level effect does not include the N-particle amplification mechanism, unlike the spontaneous photo-emission process explained above. A further advantage of the spontaneous emission effect is that it predicts a new spectral feature (to trigger a forbidden transition and to stimulate its emission) instead of a modification (shift, broadening) of an existing spectral line. Proving or disproving the existence of a new feature is the preferred situation for an experimental test.
A related non-interferometric test, and one which has already been performed with existing experimental quantum technology [32], is a test with optomechanical and magnetomechanical systems. In optomechanics, a mechanical harmonic oscillator is coupled to light. A typical setting is cavity optomechanics, where the mechanical oscillator is a nanofabricated cantilever, which forms one of the mirrors of an optical resonator, setting the scene for an optical light mode. The optical cavity mode then depends on the motion of the mechanical oscillator: light of a given wavelength will be enhanced in the cavity or not, depending on the position of the cantilever. The properties of the mechanical oscillator are thus mapped onto the spectral response of the light field, which makes for an ultra-sensitive mechanical sensor. Such optomechanical systems have been pioneered in the past decade or so, and many experiments already exist [26,54]. CSL noise affects the motion of the cantilever, which consists of many atoms. The light reads out this effect of the CSL noise, which can be described as a heating effect on an initially cooled center-of-mass motion of the cantilever. This results in an increase in the area of the spectral line which corresponds to this motion. The heating effect depends on the shape and size of the mechanical object. These very same effects are also predicted for the aforementioned levitated mechanical systems, and it is interesting to note that the same CSL heating effect is also predicted for the direct observation of the motion/rotation/vibration of suspended or levitated objects. In sum, mechanical systems have a high potential for the ultimate testing of CSL, not only by interferometric but also by non-interferometric means, and they cover a large parameter space for testing CSL models. It should be noted that those mentioned above are just a small selection of possible non-interferometric experiments for testing collapse models. For a comprehensive review, we refer the reader to the recent work by Carlesso et al. [51].
This variety of ideas ought to be considered as a set of future, very promising tests of collapse models, while giving at the same time the opportunity to work with fascinating experiments towards the understanding of nature, and especially of quantum systems in the macroscopic regime.
4. The Physical Origin of the Collapse
The random localization processes in the GRW model and the noise field in the CSL model are postulated, but an explanation of their origin in physical terms is still lacking. The collapse is introduced, so to speak, 'by hand' in order to arrive at a phenomenological description of the wave function's collapse, with the correct quantum probabilities given by the Born rule. The origin of the collapse, however, is still an open question. In this section we discuss two possible answers about the origin of the collapse that have been proposed in the literature.
One option, championed by Diósi and Penrose, is that the spontaneous collapses have a gravitational origin [55,56,57]. Penrose [56] has studied the effects of gravity on a superposed state

|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\psi_1\rangle + |\psi_2\rangle\right),

where |ψ_1⟩ and |ψ_2⟩ are two identical stationary states corresponding to the same energy eigenvalue, one located around a given region "1" and the other one located around another region "2" of spacetime. According to standard quantum mechanics, this superposition state is also stationary and therefore should last forever. On the other hand, the two states are located in different positions and therefore correspond to two different matter densities. According to general relativity, this means that they curve spacetime in different ways. Then, a superposition of different spacetime geometries appears, which amounts to an ill-defined time translation operator. This, according to Penrose, leads to an uncertainty E_Δ in the energy of the superposed state, which, in the Newtonian limit, should be proportional to the gravitational self-attraction of the two superposed states, i.e.,

E_\Delta \;\propto\; G \int d^3r\, d^3r'\; \frac{\big[\mu_1(r) - \mu_2(r)\big]\big[\mu_1(r') - \mu_2(r')\big]}{|r - r'|},

where μ_i(r) corresponds to the mass density distribution of the state |ψ_i⟩. This uncertainty in the energy might suggest that the superposed state collapses in a time of the order of

\tau \;\sim\; \frac{\hbar}{E_\Delta}.
Not only does Penrose’s argument imply that that wave function should collapse, and that the larger the object in the superposition state, the greater the gravitational self-attraction and the faster the collapse; but it also provides a quantitative estimation of the reduction time. Clearly, this argument is heuristic and does not provide a dynamical equation for the evolution of the wave function.
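To get a feeling for the numbers, the following sketch evaluates τ ∼ ħ/E_Δ, crudely approximating E_Δ by the gravitational self-energy scale G m²/R of a homogeneous sphere displaced by more than its own size; the material density, the sizes and this approximation are assumptions of the sketch, not values from the text.

```python
HBAR = 1.055e-34   # reduced Planck constant [J s]
G = 6.674e-11      # Newton constant [m^3 kg^-1 s^-2]
RHO = 2200.0       # assumed material density (silica-like) [kg/m^3]
PI = 3.14159

def tau_collapse(radius_m: float) -> float:
    """tau ~ hbar / E_Delta, with E_Delta approximated by G * m^2 / R."""
    mass = RHO * 4.0 / 3.0 * PI * radius_m ** 3
    e_delta = G * mass ** 2 / radius_m
    return HBAR / e_delta

for radius in (1e-8, 1e-5):   # a 10 nm and a 10 um radius sphere
    print(f"R = {radius:.0e} m: tau ~ {tau_collapse(radius):.1e} s")
# ~2e8 s for the nanoparticle versus ~2e-7 s for the 10 um grain:
# the gravitationally induced collapse switches on only for sufficiently large objects.
```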
The gravity-induced collapse model proposed by Diósi [55,57] instead provides such a dynamics, by postulating that the state vector evolves as in the CSL model mentioned in Section 2, with a different choice for the collapse operator, in such a way that the collapse time coincides with that indicated by Penrose. Hence the name Diósi–Penrose (DP) model. The CSL model and the DP model are thus equivalent at the formal level, including the amplification mechanism; because of the different choice of the collapse operator, they differ at the quantitative level. As discussed before, the virtue of the DP model is to suggest a physical origin for the collapse, although, as also suggested by Penrose, the ultimate answer should come from a proper quantum theory of gravity.
Another possibility for understanding the origin of the noise is to consider collapse models not as fundamental theories, but as phenomenological descriptions of a deeper underlying theory. "Trace dynamics", proposed by Adler [58], offers such a possibility. In trace dynamics, the dynamical variables are non-commutative matrices (operators) whose dynamics is described in terms of a trace Lagrangian and a trace Hamiltonian, which are generalizations of the standard Lagrangian and Hamiltonian formalism. It is assumed that these highly non-trivial dynamics are very fast and rapidly reach statistical equilibrium; the resulting dynamics for the canonical ensemble takes the form of quantum field theory. In other words, a quantum theory emerges as a thermodynamic limit of the underlying trace dynamics.
The fluctuations around this thermodynamic limit take the form of Brownian-motion corrections to the dynamics, which eventually produce stochastic modifications of standard quantum theory. The hypothesis, yet to be proven, is that these Brownian-motion corrections are exactly those described by collapse models. Therefore, in trace dynamics, the collapse of the wave function should arise from the underlying dynamics. The search for such an underlying theory is not yet concluded. In fact, as pointed out by Adler himself in the introduction of his book, "[…] while we have given a general framework in which an emergent quantum theory may appear, we have not identified a specific theory in which all our requirements are realized" [58].
6. Which Ontology for Collapse Models?
The main problem that collapse models try to solve is to clarify the unclear notion of measurement on which the standard view is based: what are the physical conditions necessary and sufficient for a measurement interaction to occur? Using an argument ex auctoritate, we can motivate this question with the following quotation by John Bell, which stresses once more the role of "aims" in the realist/instrumentalist divide:
“experiment is a tool. The aim remains: to understand the world. To restrict quantum mechanics to be exclusively about piddling laboratory operations is to betray the great enterprise. A serious formulation will not exclude the big world outside the laboratory.” (our emphasis).
Recalling the three main ontological hypotheses put forward by collapse models, we will begin by discussing the "wave function" interpretation. First, we may want to remark that the wave function, qua a (complex) function on configuration space, is an abstract entity. For the sake of precision, one could follow Maudlin's suggestion and replace the ambiguous term 'wave function' with 'quantum state', where the latter is purposely to be interpreted as the physical entity denoted by the wave function regarded as an abstract object. Abstract objects, in fact, cannot affect or be affected by anything physical.
Once we are clear about the need to distinguish the wave function regarded as an abstract entity from its denotatum, our main problem of course still remains, which is to figure out what Maudlin’s quantum state is from a physical viewpoint. Given the philosophical stance declared at the beginning, here we are simply assuming that the wave function does not denote the state of information of an agent or her degrees of beliefs (as in epistemic views). In what follows, and keeping in mind Maudlin’s caveat, in accordance with the literature we will keep using the word ‘wave function’, given that it appropriately suggests that it could be identified with an entity evolving in configuration space:
No one can understand this theory until he is willing to think of the wave function as a real objective field rather than just a probability amplitude even though it propagates not in 3-space but in 3N-space.
In this respect, we should evaluate an interesting proposal [66,67], according to which (a) the wave function of a physical system is just the physical system (with an acronym, WISE), so that the wave function of, say, an electron is the electron itself, spread out in space and evolving according to the Schrödinger equation; (b) the collapse is a postulate; (c) the collapse is not only random in space but also in time, given that it is characterized by a temporal interval of finite duration during which the collapse can occur. We will now comment on these three points in turn.
(a) To the extent that the electron evolves in configuration space, the latter becomes the fundamental ontological posit. In the WISE interpretation, in fact, the evolution of N electrons is defined in 3N-dimensional configuration space. On this hypothesis, like all non-standard formulations of the theory, collapse models presuppose configuration space, given that they too describe the world in terms of the wave function [68,69].
If we consider, in particular, collapse models, realism about the wave function may be misleadingly suggested, among other things, by the fact that in such models the collapse is referred to via the mathematical multiplication of the global wave function by a Gaussian. Here, we will argue that if we want to make sense of the ontology of collapse models, this is the wrong way to go. In such models, wave function realism must entail that the multiplication in question represents a field taking values at points in configuration space, not in physical space. The main problem with this position, however, is to explain the emergence of the familiar, macroscopic world of tables, chairs and people (the so-called beables) inhabiting the ordinary three-dimensional space of our experience from a 3N-dimensional space. This is one of the main tasks that need to be accomplished by a theory that aims at "mending" in part the standard formulation. Arguments for and against the viability of this explanatory research program have been thoroughly discussed in [69], to which we refer the reader. Here, we want to make two critical points. The first is that, in our opinion, no convincing explanation has been provided so far, the main difficulty depending on the vague notion of emergence. In philosophy, this concept is used extensively, and not only in the philosophy of physics, to the detriment of clarity. As far as we can tell, the only clear meaning to be associated with 'emergence' is reduction, which, in turn, following Nagel and Hartmann, ought to be regarded as a deduction of the emergent properties from those characterizing the relevant fundamental level. The paradigmatic example here is the logical relation between thermodynamics and statistical mechanics. The properties of the former 'level' (e.g., the hot and cold attributed to a gas) 'emerge' from the statistico-mechanical description in this sense. Of course, in the case of wave function realism, one cannot deny the possibility of a reduction meant in this sense, as this denial might simply be due to our lack of imagination. However, it must be admitted that, so far, no such deduction is available or has been proposed, and talking about the emergence of the familiar macroscopic world from a physically fundamental configuration space is a mere (though worth pursuing) research program. The second critical point is a consequence of the first: lacking this reduction, or at least an exact account of 'emergence', it is not clear how wave function realism can really solve the measurement problem, since it would remain unclear what it means to claim that a three-dimensional pointer in physical space has moved to the right. (For this point, see [70], p. 375.)
(b) As to the assumption that in the WISE interpretation the collapse is a postulate, there are well-known reasons explaining why one needs a nonlinear addition to the linear Schrödinger equation. In other words, unlike what happens for postulates in general, the dynamical properties of the collapse can be explained and motivated, and need not be assumed at the start as something that needs no explanation. As an example, think of the postulate that the speed of light is independent of the motion of the source: qua postulate, it is fundamental and by definition should not be explained. In addition, in collapse theories the mathematical formulation of the collapse is not a primitive, but is defined in terms of other formal concepts.
(c) The third point is fully respected also in GRW, given that, in any of its formulations (see above), the time of the collapse is random. For example, for an isolated particle, the collapse occurs with a well-defined mean frequency (on average once every 100,000,000 years) and happens whenever the particle 'bumps' into a macroscopic object. In this case, the localization process has a duration.
We can then go on to discuss the so-called primitive ontology hypotheses, which call into play local beables (entities) living in Newtonian spacetime. The explanatory task referred to above becomes easier, since macroscopic objects live in 3D space. In the literature one typically finds two interpretations of the 'hits' of the wave function, that is, two localization hypotheses, namely in terms of a galaxy of 'flashes' (GRWf), in Bell's metaphor, and in terms of matter density (GRWm). In the former hypothesis, a table or a chair is constituted by point-like 'flashes', representing the center of the peak of the Gaussian, each of them evolving in time in a discrete, non-continuous way. Since these localization processes happen in three-dimensional space at a certain moment of time, and objects are constituted by microscopic components, we must regard the former as literally constructed out of these flashes. It must be admitted that this hypothesis looks at odds, or maybe just ad hoc, with respect to what we know about the microstructure of these objects. By looking 'at odds', we do not mean that it contradicts well-known, solid facts about chemistry, like 'water is made of H₂O', but that it is still not conceptually connected with them. It just looks like an addition to such facts that is left somewhat unexplained. As Maudlin puts it, "there is literally nothing at all material that is localized in spacetime" [5] (p. 115). To this remark, we add that the main lesson of quantum field theory is that waves and particles are oscillations of quantum fields, so that in our most fundamental physical theory, fields must be regarded as more fundamental than parts-less particles like electrons.
A more reasonable hypothesis is that the 'hits' of the wave function refer to changes in the "density of stuff", an ontology originally proposed by Ghirardi et al. [8]. In this version of the collapse models, the mass density of the fundamental field constituting physical reality suffers contractions in correspondence with the multiplication of the wave function in configuration space, which, as in the flash ontology, must be regarded as a purely mathematical tool used to describe the concrete evolution of the matter field in ordinary space and time. Unlike flashes, which are discrete, point-like happenings in spacetime, a field is both continuous and smeared out in space, as a field must be. Consequently, it occupies a non-infinitesimally extended region of space. In a word, a 'hit' of the wave function describes a non-local change in how much of the physical field (its density) is in a certain region. More precisely, the ontology in this version of the collapse model is given by a scalar field m(r, t) in three-dimensional space, that is, an assignment of values to each of its points r at a certain time t. This assignment clearly depends on the values of the wave function in configuration space. Following the notation of ([70], p. 375), we can write the field m(r, t) as

m(r, t) = \sum_{i=1}^{N} m_i\, \rho_i(r, t), \qquad \rho_i(r, t) = \int d^3r_1 \cdots d^3r_N\; \delta^3(r - r_i)\, \lvert\psi(r_1, \ldots, r_N, t)\rvert^2,

where ρ_i stands for the density of each particle i, and m_i stands for its mass.
For instance, a matter field crossing the two slits will localize at a point on the second screen by concentrating almost all of its density there; the qualification 'almost all' is necessary because of the so-called 'tail problem' [71]. The problem in question refers to the tails of the Gaussian multiplying the wave function, which contain some very small amount of matter, which, crucially, does not make any difference at the empirical level. As Maudlin points out, another advantage of the matter density ontology over the discrete flash ontology is its coherence with the CSL model described above ([5], p. 121).
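To see what the definition of m(r, t) given above amounts to in practice, here is a toy numerical sketch for two 'particles' on a one-dimensional grid; the wave function and the masses are arbitrary illustrative choices.

```python
import numpy as np

# Toy 1D version of m(r, t) = sum_i m_i * rho_i(r, t): each rho_i is obtained by
# integrating |psi|^2 over all configuration-space coordinates except the i-th.
x = np.linspace(-1.0, 1.0, 400)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# A simple (non-entangled) two-particle wave function psi(x1, x2)
psi = np.exp(-(X1 + 0.5) ** 2 / 0.02) * np.exp(-(X2 - 0.5) ** 2 / 0.02)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx * dx)

m1, m2 = 1.0, 2.0                                   # illustrative masses
rho1 = np.sum(np.abs(psi) ** 2, axis=1) * dx        # marginal density of particle 1
rho2 = np.sum(np.abs(psi) ** 2, axis=0) * dx        # marginal density of particle 2
m_field = m1 * rho1 + m2 * rho2                     # the matter-density field on the grid

print(f"integrated matter density: {np.sum(m_field) * dx:.2f}")   # = m1 + m2 = 3.00
```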
There is an important aspect of the collapse models that we have so far neglected: the localizations (flashes or densities of matter) are irreversible, stochastic processes happening in spacetime. As is well known, all the fundamental physical laws are time-symmetric, that is, they describe processes unfolding in what we conventionally call the past-to-future direction as well as the time-reversed processes going in the opposite direction (future to past). Albert [72] and other eminent scholars have proposed to regard the collapse of the wave function (in those views in which it is postulated) as an irreversible process justifying our belief in a time-asymmetric physical world. The localizations featuring essentially in the collapse models could then be regarded as a more precise, law-like explanation of macroscopic, de facto irreversible phenomena like the entropic growth in closed systems or the prevalence of retarded over advanced radiation.
By endowing the matter field in three-dimensional space with irreducibly stochastic, spontaneous and irreversible dispositions to localize [73], the probabilities of these localizations could be assigned numbers by introducing a propensity-based interpretation of probability [74]. In this alternative account, the 'hits' of the wave function would denote a set of irreducible dispositions of the matter field to manifest themselves in changes of its density. It should be remarked that, in all collapse views of quantum mechanics, a dispositionalist ontology must presuppose that there is nothing that 'triggers' the disposition; that is, there is no stimulus that, like the striking of a match or the throwing of a stone, causes the match to burn or the glass to break. The localizations are spontaneous. Whether the presence of stimuli is essential for a disposition to manifest itself in an event is a problem that cannot be discussed here.