**1. Introduction**

Quantum mechanics is often regarded as an essentially probabilistic theory, where the random collapses of the wavepacket, with probabilities governed by the rule conjectured by Max Born (1926) [1], play a central role. Yet, the evolution dictated by the Schrödinger Equation is deterministic. This clash between the quantum determinism of the unitary evolutions of the fundamental theory and the quantum randomness of its phenomenological practice is at the heart of the interpretational controversies.


The aim of this review is to assess the progress made in the wake of the earlier developments (including, in particular, the theory of decoherence and einselection) since the beginning of this millennium. This includes the realization that the selection of preferred states—einselection of the pointer states, usually justified using decoherence—is a consequence of the tension between the linearity of quantum theory and the nonlinearity of the copying processes involved in the acquisition of information. The derivation of Born's rule based on the symmetries of entangled quantum states shores up and simplifies the foundations of quantum theory.

Quantum Darwinism will be discussed especially carefully, but nevertheless with significant omissions that are inevitable in reviewing a rapidly evolving field. In such a case one is faced with a "moving target"—the most recent developments are inevitably either left out or treated only in a superficial manner (since assessing their impact on the future development of the field is difficult).

We will also reconsider the status of the quantum measurement problem [2]. I shall claim that perception of the objective classical reality is accounted for by the developments mentioned briefly above and discussed in more detail below.

We shall start by reviewing the assumptions—the postulates of quantum theory—and by selecting, from their textbook version, the core postulates that are consistent and can be used to address the issues usually dealt with via the measurement axioms, which are also included in textbook presentations but are inconsistent with the quantum core. A more detailed preview of the content of this review can be found at the end of this introductory section.

#### *1.1. Core Quantum Postulates*

The difficulty of reconciling quantum determinism with quantum randomness is reflected in the postulates that provide textbook summary of quantum mechanics (see, e.g., Dirac, 1958) [3]. We list them starting with four uncontroversial core postulates, cornerstones of the *quantum theory of the classical* we shall develop. Two are very familiar:

(i) *The state of a quantum system is represented by a vector in its Hilbert space* $\mathcal{H}_S$.

(ii) *Evolutions are unitary (i.e., generated by the Schrödinger Equation).*

They imply, respectively, the *quantum superposition principle* and the *unitarity of evolutions*, and we shall often refer to them by citing their physical consequences. They provide an almost complete summary of the formal structure of the theory.

One more postulate should be added to (i) and (ii) to complete the mathematics of quantum mechanics:

(o) *Quantum state of a composite system is a vector in a tensor product of the Hilbert spaces of its subsystems.*

Postulate (o) (von Neumann, 1932 [4]; Nielsen and Chuang, 2000 [5]) is often omitted from textbooks as obvious. However, composite systems are essential, as in the absence of subsystems the Schrödinger Equation provides a *deterministic* description of the evolution of an indivisible Universe, and the measurement problem disappears [6,7]. In the absence of at least a measured system and a measuring apparatus, questions about the outcomes cannot even be posed. We shall need at least one more ingredient—an environment—to address them.

The measurement problem arises because a quantum state of a collection of systems can evolve from a Cartesian product (where a definite state of the whole implies definite states of each subsystem) into an entangled state represented by a tensor product: The state of the whole is still definite and pure, but the states of the subsystems are indefinite. By contrast, in classical settings completely known (pure) composite states are always represented by Cartesian products of pure states—the state of each subsystem is also perfectly known.
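
To make this contrast concrete, here is a minimal NumPy sketch (an illustration added here, with toy two-qubit states of my choosing, not a construction from the original text): the Schmidt rank, computed from the singular values of the reshaped state vector, equals 1 exactly when the composite state factorizes into definite subsystem states.

```python
import numpy as np

# Single-qubit basis states.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Postulate (o): composite states live in the tensor product space.
# A product (Cartesian-like) state: each subsystem has a definite pure state.
product = np.kron(ket0, ket1)

# An entangled state: the whole is pure, but neither part has a definite state.
entangled = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

def schmidt_rank(state, d_a=2, d_b=2):
    """Number of nonzero Schmidt coefficients; rank 1 means factorizable."""
    singular_values = np.linalg.svd(state.reshape(d_a, d_b), compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

print(schmidt_rank(product))    # 1 -> Cartesian-like: subsystem states definite
print(schmidt_rank(entangled))  # 2 -> entangled: subsystem states indefinite
```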

Postulates (o)–(ii) provide a complete summary of the *mathematics* of quantum theory. They contain no premonition of either collapse or probabilities. Using them and the obvious additional ingredients (initial states and Hamiltonians) one can justify and carry out every quantum calculation. However, in order to relate quantum theory to experiments one needs to establish a correspondence between abstract state vectors in $\mathcal{H}_S$ and experiments. This task starts with the *repeatability postulate*:

(iii) *Immediate repetition of a measurement yields the same outcome.*

Postulate (iii) is idealized—it is hard to perform such non-demolition measurements, but in principle it can be done. Yet—as a fundamental postulate—it is also indispensable. The very concept of a "state" embodies predictability that requires axiom (iii): The role of states is to allow for predictions, and the most basic prediction is that a state is what it is known to be. Repeatability postulate asserts that confirmation of this prediction is in principle possible.

Postulate (iii) is also uncontroversial: Repeatability is taken for granted in the classical setting where it follows from the assumption that one can find out an unknown state without perturbing it. This classical version is a much stronger assumption than the repeatability postulated above in (iii). It is responsible for the familiar "objective reality" of the classical world: It detaches existence of classical states from what is known about them.

The quantum measurement problem arises because—by contrast—unknown quantum states are re-prepared by the attempts to find out what they are. So, the quantum repeatability postulate (iii) signals a significant weakening of the role states play in our quantum Universe: Repeatability guarantees only that the existence of a *known* quantum state can be confirmed, but it no longer implies objective existence: Unlike a classical state, an unknown quantum state cannot simply be found out independently by many initially ignorant observers through direct measurements.
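
The operational content of the repeatability postulate can be illustrated with a short simulation. The sketch below is a toy model: Born-rule sampling (postulate (v), only introduced later in this section) is assumed purely to drive the random choice of the first outcome; the point illustrated is that immediate repetition confirms that outcome.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def measure(state, basis):
    """Projective measurement in an orthonormal basis (rows of `basis`).
    Born-rule sampling is assumed here only to drive the toy simulation."""
    probs = np.abs(basis.conj() @ state) ** 2
    k = rng.choice(len(probs), p=probs)
    return k, basis[k]            # outcome label and post-measurement state

z_basis = np.eye(2)               # |0>, |1> as rows
psi = np.array([0.6, 0.8])        # a known pure state, not a z eigenstate

k1, post = measure(psi, z_basis)  # the first outcome is unpredictable...
k2, _ = measure(post, z_basis)    # ...but immediate repetition
assert k1 == k2                   # confirms it, as postulate (iii) demands
```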

This quantum intertwining of the epistemic and ontic function of a state is the central quantum feature regarded as a key interpretational problem. One of our goals is to understand how (as a consequence of quantum Darwinism) one can recover objective existence—states that survive discovery by an initially ignorant observer, so others can confirm their identity.

We will show that the essence of the remaining textbook postulates can be deduced from the above quantum core that includes the mathematical postulates (o)–(ii) and the repeatability postulate (iii) that begins to deal with the experimental consequences of quantum theory such as information transfers, including the measurements.

#### *1.2. Quantum States, Information, and Existence*

So far, we have outlined a consistent set of core quantum postulates, (o)–(iii). They will serve as a basis for the derivation of the emergence of classical behavior in a quantum Universe. In this subsection, we consider textbook axioms (iv) and (v) that are at odds with the quantum core. The whole (o)–(v) list is, of course, given by textbooks. The inconsistency is usually "resolved" through some version of Bohr's strategy. That is, textbooks assume that quantum theory can be applied only to a part of the Universe. The rest of the Universe— including observers and measuring devices—must be classical, or at the very least out of quantum jurisdiction. Our aim will be to show that the classical domain need not be postulated, and that the measurement process (the focus of axioms (iv) and (v)) can be accounted for by using the quantum core postulates (o)–(iii).

In contrast to classical physics (where an unknown preexisting state can be found out by an initially ignorant observer) the very next textbook axiom explicitly limits predictive attributes of quantum states:

(iv) *Measurement outcome is an eigenstate of the Hermitian operator corresponding to the measured observable.*

Thus, in general, a measurement will return something other than the preexisting state of the system. The repeatability postulate (iii) is in a sense an exception to this quantum undermining of the predictive role of states. Axiom (iv) can be usefully subdivided into:

(iva) *Allowed measurement outcomes correspond to the eigenstates of a Hermitian operator.*

(ivb) *Only one outcome is seen in each run.*

This splitting may seem pedantic, but it is useful. Textbooks often separate our (iv) into two such axioms.

We emphasize that already (iva) limits predictive attributes of quantum states: When the Hermitian operator representing the measured observable does not have, as one of its eigenstates, the preexisting state of the system, the outcome cannot be predicted with certainty even when the preexisting state is perfectly known (pure).

Nevertheless, repeatability means that when the same measurement is immediately repeated on the very same system, the outcome will be the same. This is, operationally, the essence of the collapse: The preexisting pure state will give an unpredictable result that can be, however, confirmed and reconfirmed by re-measurement of the outcome. What you saw you will get, again and again. Therefore, as soon as (iva) can be accounted for (which we shall do in Section 2), then—in combination with the repeatability of (iii)—the symptoms of the "wavepacket collapse" postulated by (ivb) can be also recovered.

The *collapse axiom* is the first truly controversial item in the textbook list. In its literal form it is inconsistent with the first two postulates: Starting from a general state $|\psi_S\rangle$ in the Hilbert space of the system (postulate (i)), an initial state $|A_0\rangle$ of the apparatus A, and assuming unitary evolution (postulate (ii)) one is led to a superposition of outcomes:

$$|\psi_S\rangle |A_0\rangle = \left( \sum_k a_k |s_k\rangle \right) |A_0\rangle \;\Rightarrow\; \sum_k a_k |s_k\rangle |A_k\rangle \tag{1.1}$$

which is in apparent contradiction with (iv).
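
For a single qubit measured by a one-bit apparatus, Equation (1.1) can be checked explicitly, with the controlled-not of Figure 1 playing the role of the premeasurement unitary. In this sketch the amplitudes $a_k$ are arbitrary illustrative values:

```python
import numpy as np

# Controlled-NOT: the measured system S is the control, apparatus A the target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

a = np.array([0.6, 0.8])              # illustrative amplitudes a_k
psi_S = a[0] * ket0 + a[1] * ket1     # (sum_k a_k |s_k>)
A0 = ket0                             # apparatus ready state |A_0>

before = np.kron(psi_S, A0)           # left-hand side of Eq. (1.1)
after = CNOT @ before                 # unitary "premeasurement"

# after = a_0|0>|A_0> + a_1|1>|A_1>: a superposition of outcomes,
# not a single one -- the clash with axiom (ivb).
expected = a[0] * np.kron(ket0, ket0) + a[1] * np.kron(ket1, ket1)
assert np.allclose(after, expected)
```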

The impossibility of accounting—starting with the core quantum postulates (o)–(iii)—for the literal collapse to a single state postulated by (ivb) has been appreciated since Bohr (1928) [8] and von Neumann (1932) [4]. It was—and often still is—regarded as an indication of the insolubility of the measurement problem. It is straightforward to extend such insolubility demonstrations to various more realistic situations, e.g., by allowing the state of the apparatus to be initially mixed. As long as the superposition and unitarity postulates (i) and (ii) hold, one is forced to admit that the quantum state of AS after they interacted contains a superposition of many alternative outcomes rather than just one of them, as the literal reading of the collapse axiom (and our immediate experience) suggests (see Figure 1).

**Figure 1. Controlled-not, measurement, and Schrödinger's cat**: We expect this figure to be self-explanatory. It is included primarily to establish the nomenclature (i.e., "control" and "target"), to illustrate Equation (1.1), and to emphasize the parallels between the three situations illustrated above.

Given this clash between the mathematical structure of the theory and the expectation of the literal collapse (that captures the subjective impressions of what happens in real-world measurements), one is tempted to accept—following Bohr—the primacy of our immediate experience and blame the inconsistency of (iv) with the core of the quantum formalism (superposition principle and unitarity, (i) and (ii)) on the nature of the apparatus: The Copenhagen Interpretation regards the apparatus, observers, and, generally, macroscopic objects as *ab initio* classical. They do not abide by the quantum principle of superposition—their evolutions need not be unitary. Therefore, according to the Copenhagen Interpretation, the unitarity postulate (ii) does not apply to measurements, and the literal collapse can happen on the border between quantum and classical.

Uneasy coexistence of the quantum and the classical postulated by Bohr is a challenge to the unification instinct of physicists. Yet, it has proven surprisingly durable.

At the heart of many approaches to the measurement problem is the desire to reduce the relation between existence and information about what exists to what could have been taken for granted in a world where the fundamental theory was Newtonian physics. There, classical systems had real states that existed independently of what was known about them. They could be found out by measurements. Many initially ignorant observers could measure the same system without perturbing it. Their records would agree, reflecting reality of the underlying state and confirming its objective existence.

The immunity of classical states to measurements suggested that, in classical settings, information was unphysical. Information was a mere immaterial shadow of real physical states. It was irrelevant for physics.

This dismissive view of information ran into problems already when Newtonian classical physics confronted classical thermodynamics. The clash of these two classical theories led to Maxwell's demon, and is implicated in the origins of the arrow of time.

The specter of information has been haunting classical physics since the nineteenth century: The seemingly unphysical, shadowy record state began to play a role reserved for the "real" state.

Attempts to solve the measurement problem often follow the strategy where the underlying state of the quantum system somehow becomes classical. Even decoherence can be, in a sense, regarded as a completely quantum version of such a strategy, with the effective classicality arising in a world that is fundamentally quantum. Other proposals assert supremacy of existence over information and suggest modifications of quantum evolution equations (e.g., abandoning unitarity), as discussed by Weinberg (2012) [9].

It is conceivable that, one day, we may find discrepancies between quantum theory and experiments. However, the evidence to date supports the view that our Universe is quantum to the core, and we have to reconcile the superposition principle, unitarity, and their consequences—illustrated, e.g., by the violation of Bell's inequality—with our perceptions. Nonlocality of quantum states and other experimental manifestations of quantumness are here to stay.

The strategy adopted by the program discussed in this review is to start with the core quantum postulates (o)–(iii). They have a simplicity that rivals the postulates of special relativity. Given this "let quantum be quantum" starting point we shall show how (and to what extent) both attributes of the familiar classical world—objective existence and information about it—emerge from the epiontic quantum substrate.

#### *1.3. Interpreting the Relative States Interpretation*


The alternative to Bohr's Copenhagen Interpretation and a new approach to the measurement problem was proposed by Hugh Everett III, a student of John Archibald Wheeler, over half a century ago (Everett, 1957 [10,11]; Wheeler, 1957 [12]; DeWitt and Graham, 1973 [13]). The basic idea was to abandon the literal view of collapse and recognize that a measurement (including the appearance of the collapse) is already implicit in Equation (1.1). One just needs to include an observer in the wavefunction, and consistently interpret the consequences of this step.

The obvious problem raised by (ivb)—"Why don't I, the observer, perceive such splitting, but register just one outcome at a time?"—is then answered by asserting that while the right-hand side of Equation (1.1) contains all the possible outcomes, the observer who recorded outcome #17 will (from then on) perceive "branch #17" that is consistent with the outcome reflected in his records. In other words, when the global state of the Universe is $|\Upsilon\rangle$, and my state is $|I_{17}\rangle$, for me the state of the rest of the Universe collapses to $|\gamma_{17}\rangle \sim \langle I_{17}|\Upsilon\rangle$. Since this is the only state I (actually, $|I_{17}\rangle$!) am aware of, following the correlation, I should renormalize the state vector $|\gamma_{17}\rangle$ of the rest of the Universe to reflect my certainty about my branch—this is now my only Universe.
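
As a toy illustration of this relative state construction (with a one-qubit observer memory and a one-qubit "rest of the Universe", and with illustrative amplitudes, none of which come from the original text):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Toy "Universe": observer memory (first factor) correlated with the rest,
# |Upsilon> = a|I_0>|rest_0> + b|I_1>|rest_1> (amplitudes are illustrative).
a, b = 0.6, 0.8
Upsilon = a * np.kron(ket0, ket0) + b * np.kron(ket1, ket1)

# Relative state of the rest, conditioned on the record |I_1>:
# |gamma_1> ~ <I_1|Upsilon>, renormalized to reflect certainty about the branch.
bra_I1 = np.kron(ket1.conj(), np.eye(2))  # <I_1| acting on the memory factor
gamma1 = bra_I1 @ Upsilon
gamma1 /= np.linalg.norm(gamma1)

assert np.allclose(gamma1, ket1)  # given |I_1>, the rest "collapsed" to |rest_1>
```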

Much confusion and a heated ongoing debate have been sparked by the question of what happens to observers $|I_1\rangle \ldots |I_{16}\rangle$ and $|I_{18}\rangle \ldots |I_\infty\rangle$. If the quantum state of the whole Universe were classical—in the sense that we could attribute to it real existence—there would indeed be Many Worlds, each inhabited by a different $|I_n\rangle$ (see, e.g., DeWitt, 1970 [14]; 1971 [15]; DeWitt and Graham, 1973 [13]; Deutsch, 1985 [16]; 1997 [17]; Saunders et al., 2010 [18]; Wallace, 2012 [19]). However, the elusive status of states in quantum theory—they can be confirmed (repeatability), but not found out—suggests a less radical possibility. After all, a patch in classical phase space also represents a state. When this patch collapses into a point upon measurement, it does not mean that there are other observers who from now on live in Universes with different outcomes, and have a memory consistent with these outcomes. The key difference between these two attitudes is in the extent to which a state is thought to be epistemic (as is a patch in phase space, representing ignorance of the observer) or ontic (as is the phase space point, that can be not only confirmed, but found out by others, even when observers are ignorant of its location beforehand). Only a classical—ontic—view of the state would make the Many Worlds view (with all the branches equally real) inevitable. Quantum theory does not impose it, so in this sense the Many Worlds Interpretation (in contrast to the Relative State view) is just "too classical", as it asserts objective existence of a quantum state of the Universe as a whole. I have no stake in this debate, but I shall comment on these matters in due course, after the discussion of quantum Darwinism and the quantum origins of objective existence.

This "let quantum be quantum" view of the collapse is supported by the repeatability postulate (iii); upon immediate re-measurement, the same state will be found. Everett's assertion: "The discontinuous jump into an eigenstate is thus only a relative proposition, dependent on the mode of decomposition of the total wave function into the superposition, and relative to a particularly chosen apparatus-coordinate value...". is consistent with the core quantum postulates: In the superposition of Equation (1.1) record state |*<sup>A</sup>*17- can indeed imply detection of the corresponding state of the system, |*<sup>s</sup>*17-.

Two questions immediately arise. The first one concerns the part (iva) of the collapse postulate: What constrains the set of outcomes—the preferred states of the apparatus or the observer? By the principle of superposition (postulate (i)) the state of the system or of the apparatus after the measurement can be written in infinitely many ways, each corresponding to one of the unitarily equivalent bases in the Hilbert space of the pointer of an apparatus (or a memory cell of an observer):

$$\sum_k a_k |s_k\rangle |A_k\rangle = \sum_k a_k' |s_k'\rangle |A_k'\rangle = \sum_k a_k'' |s_k''\rangle |A_k''\rangle = \dots \tag{1.2}$$

This *basis ambiguity* is not limited to the pointers of measuring devices or to cats, which for Schrödinger (1935) [20] play the role of the apparatus (see Figure 1). One can show that even very large systems (such as satellites of planets) can evolve into very nonclassical superpositions on surprisingly short timescales [21–23]. In reality, this does not seem to happen. So, there is something that (in spite of the egalitarian superposition principle enshrined in (i)) picks out certain preferred quantum states, and makes them effectively classical while banishing their superpositions.
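
Basis ambiguity is easy to verify explicitly. For equal amplitudes, the state on the left-hand side of Equation (1.2) decomposes with the same form in a rotated (Hadamard) basis; a minimal sketch:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)    # Hadamard-rotated basis states
minus = (ket0 - ket1) / np.sqrt(2)

# One entangled system-apparatus state with equal amplitudes...
state = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# ...decomposes equally well in the rotated basis, as in Eq. (1.2):
# (|0>|A_0> + |1>|A_1>)/sqrt(2) = (|+>|A_+> + |->|A_->)/sqrt(2)
rotated = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)

assert np.allclose(state, rotated)   # basis ambiguity: no preferred outcomes yet
```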

Postulate (iva) anticipates this need for preferred states—destinations for quantum jumps: Before there is a collapse (as in (ivb)), a set of preferred states (one of which is selected by the collapse) must be somehow chosen. Indeed, the discontinuity of quantum jumps Everett emphasizes in the quote above would be impossible without some underlying discontinuity in the set of the possible choices. Yet, there is nothing in Everett's writings that would provide a criterion for such preferred outcome states, and nothing to even hint that he was aware of this question. We shall show how such discontinuities arise in the framework defined by the core quantum postulates (o)–(iii).

The second question concerns probabilities: How likely is it that—after I, the observer, measure S—I will become $|I_{17}\rangle$? Everett was very aware of its significance.

*The preferred basis problem* was settled by the *pointer basis* that is singled out by the environment-induced superselection (*einselection*), a consequence of decoherence (Zurek, 1981; 1982 [24,25]).

As emphasized by Dieter Zeh (1970) [26], apparatus, observers, and other macroscopic objects are immersed in their environments. The preferred basis problem was not pointed out at that time, perhaps because this issue is never raised by Everett, whose work motivated Zeh's paper. Indeed, it appears that Everettians (e.g., DeWitt [14,15]) did not fully appreciate its importance until the advent of the pointer basis.

Decoherence results from the monitoring of the system by its environment, described in analogy with Equation (1.1). When this monitoring is focused on a specific observable of the system, its eigenstates form a *pointer basis*: They entangle least with the environment (and, therefore, are least perturbed by it). This resolves the basis ambiguity. Pointer basis and einselection [24,25] were developed and are discussed elsewhere [6,7,24,25,27–33]. However, their original derivation comes at a price that would have been unacceptable to Everett: The theory of decoherence, as it is usually practiced, employs reduced density matrices. Their physical significance derives from averaging (Landau, 1927 [34]; Nielsen and Chuang, 2000 [5]; Zurek, 2003 [35]) and is thus based on probabilities that follow from Born's rule:

(v) *The probability $p_k$ of finding an outcome $|s_k\rangle$ in a measurement of a quantum system that was previously prepared in the state $|\psi\rangle$ is given by* $|\langle s_k|\psi\rangle|^2$.

Born's rule (1926) [1] completes standard textbook discussions of the foundations of quantum theory. In contrast to the wavepacket collapse of axiom (iv), axiom (v) is not in obvious contradiction with the core postulates (o)–(iii), so one can adopt the view that Born's rule is a part of the axiomatics of quantum theory. One can then use core postulates (o)–(iii) plus Born's rule to justify preferred basis and explain the symptoms of collapse through decoherence and einselection. This is the usual practice of decoherence (Zurek, 1991 [27]; 1998 [36]; 2003 [7]; Paz and Zurek, 2001 [28]; Joos et al., 2003 [29]; Schlosshauer, 2005 [31]; 2006 [37]; 2007 [32]; 2019 [33]). It relies, however, on the statistical interpretation of the reduced density matrices that depends on accepting Born's rule.
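
To make the role of reduced density matrices concrete, here is a minimal sketch (a toy model with an illustrative, nearly orthogonal pair of environment states, not a calculation from the text): tracing out the environment suppresses the off-diagonal terms in the pointer basis, while the diagonal terms, read as probabilities, already presuppose Born's rule.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

a = np.array([0.6, 0.8])  # illustrative amplitudes a_k

# System S entangled with its environment E, as in Eq. (1.1) with E for A:
# |Psi> = a_0|s_0>|E_0> + a_1|s_1>|E_1>, with nearly orthogonal records |E_k>.
E0 = ket0
E1 = np.sin(0.1) * ket0 + np.cos(0.1) * ket1   # <E_0|E_1> = sin(0.1) ~ 0.1
Psi = a[0] * np.kron(ket0, E0) + a[1] * np.kron(ket1, E1)

# Reduced density matrix of S: average (trace) over the environment.
rho = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
rho_S = rho.trace(axis1=1, axis2=3)

print(np.round(rho_S.real, 3))
# Diagonal ~ [a_0^2, a_1^2]: Born-rule weights of the einselected pointer states.
# Off-diagonals ~ a_0 a_1 <E_1|E_0>: suppressed as the environmental records
# become orthogonal. Reading the diagonal as probabilities already invokes
# Born's rule -- the circularity noted in the text.
```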

Nevertheless, (as Everett argued) axiom (v) is inconsistent with the spirit of the "let quantum be quantum" approach. Therefore, one might guess, he would not have been satisfied with the usual approach to decoherence and its consequences. Indeed, Everett attempted to derive Born's rule from the other quantum postulates. We shall follow his lead, although not his strategy which—as is now known—was flawed (DeWitt, 1971 [15]; Kent, 1990 [38]; Squires, 1990 [39]).
