**1. Introduction**

Quantum mechanics underlies all of modern physics and is fundamental to our understanding of the world we live in. As a general theory, quantum mechanics should also apply to the measurement process. From the general experience of non-destructive measurements, we draw conclusions about the interaction between the observed system and the measurement apparatus, and about how this interaction can be described within quantum mechanics.

We thus consider a quantum system *μ*, interacting with a measurement device. For simplicity we assume that *μ* is a two-level system that is not destroyed in the process. Then after the measurement, *μ* ends up in one of the eigenstates of the measured observable. If *μ* is prepared in one of these eigenstates, it remains in that state after the measurement. If *μ* is initially in a superposition of the two eigenstates, it still ends up in one of the eigenstates and the measurement result is the corresponding eigenvalue. The probability for a certain outcome is the squared modulus of the corresponding state component in the superposition (Born's rule).
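For a two-level system, Born's rule can be stated compactly. The following is a standard textbook formulation, with notation (eigenstates $|1\rangle$, $|2\rangle$ and amplitudes $c_1$, $c_2$) chosen here for illustration:

```latex
% Initial state: superposition of the two eigenstates of the
% measured observable, with |c_1|^2 + |c_2|^2 = 1.
|\mu\rangle = c_1\,|1\rangle + c_2\,|2\rangle .

% Born's rule: the probability of obtaining the eigenvalue a_k
% is the squared modulus of the corresponding component.
P(a_k) = \bigl|\langle k \,|\, \mu\rangle\bigr|^2 = |c_k|^2 ,
\qquad k = 1, 2 .
```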

An essential question is whether the probabilistic nature of quantum measurement with Born's rule is an inherent feature of quantum mechanics or whether it can be shown to hold as a result of a quantum-mechanical treatment of the measurement process combined with a statistical analysis. In the latter case, a single measurement would be a quantum-mechanical process in which the state of the measurement apparatus (possibly including its surroundings) determines the result. Born's rule would then emerge from the statistics of the ensemble that describes the measurement apparatus in interaction with the system subject to measurement.

The requirement that *μ*, if initially in an eigenstate of the observable, remains in that eigenstate after interacting with the apparatus, is usually considered to lead to a well-known dilemma: if the (linear) quantum mechanics of the 1930s is applied to *μ* in an initial superposition of those eigenstates, the process appears to yield a superposition of the two possible final states of *μ* and the apparatus, with the proportions between the channels unchanged. This has been referred to as von Neumann's dilemma [1], and it has led to paradoxical conclusions such as Schrödinger's cat.
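The dilemma can be made explicit with the standard von Neumann measurement scheme (a textbook sketch, not the formalism developed in this paper; the apparatus ready state $|A_0\rangle$ and pointer states $|A_k\rangle$ are illustrative notation):

```latex
% Faithful recording of each eigenstate of the observable:
|k\rangle \otimes |A_0\rangle
\;\longrightarrow\;
|k\rangle \otimes |A_k\rangle ,
\qquad k = 1, 2 .

% Linearity of the evolution then fixes the fate of a superposition:
\bigl( c_1\,|1\rangle + c_2\,|2\rangle \bigr) \otimes |A_0\rangle
\;\longrightarrow\;
c_1\,|1\rangle \otimes |A_1\rangle \;+\; c_2\,|2\rangle \otimes |A_2\rangle .
```

The final state is an entangled superposition of both measurement channels, with the amplitudes $c_1$, $c_2$ unchanged, so no single outcome is selected.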

Attempts to get around this problem include Everett's relative-state formulation [2] and its continuation in DeWitt's many-worlds interpretation [3], as well as non-linear modifications of quantum mechanics [4–8]. The non-linear modifications do produce the bifurcation of the measurement process, but the non-linear character of the basic theory introduces new conceptual difficulties. Mathematically, our treatment is very close to quantum diffusion [5]; we have chosen to follow the same conventions for handling the statistics of stochastic variables as in Ref. [5]. We share with the de Broglie–Bohm theory [9] the ambition to understand quantum measurement as a deterministic process, with the difference that we look for how details of the measurement device influence the process.

Bell pointed out that the Everett-DeWitt theory does not properly reflect the fact that the presence of inverse processes and interference are inherent features of quantum mechanics [10]:

> Thus, DeWitt seems to share our idea that the fundamental concepts of the theory should be meaningful on a microscopic level and not only on some ill-defined macroscopic level. However, at the microscopic level there is no such asymmetry in time as would be indicated by the existence of branching and the non-existence of debranching. [...] [I]t even seems reasonable to regard the coalescence of previously different branches, and the resulting interference phenomena, as the characteristic feature of quantum mechanics. In this respect an accurate picture, which does not have any tree-like character, is the 'sum over all possible paths' of Feynman.

Therefore, following Bell's suggestion, we turn to the work of Feynman. We choose the scattering theory of quantum field theory, including Feynman diagrams, as the basis for our description of the measurement process. This theory contains inverse processes, which result in a non-linear dependence on the initial state and thereby remove the von Neumann dilemma.

In the study of the measurement process, a strong belief has become established that microscopic details of the measurement interaction cannot lead to the bifurcation determining the result of measurement (see, e.g., [11,12]). This belief is based on von Neumann's way of handling linear quantum mechanics. In this situation, many have abandoned the ambition to understand the mechanism of a single measurement and have concentrated on the full ensemble of measurements. In that setting, attention has focused on the irreversible decoherence process that takes the initial ensemble, with *μ* in a pure state, into a mixed state for the final ensemble after measurement. Since 1970, the year of DeWitt's many-worlds theory [3] and Zeh's paper on decoherence [11], a new tradition has developed that includes different views on how to interpret quantum mechanics, see for instance Refs. [13,14]. Epistemological aspects play an important role in these interpretations.

Our idea is that the microscopic details of the measurement apparatus affect the process so that it takes *μ* into either of the eigenstates of the measured observable and initiates a recording of the corresponding measurement result. This is possible to see in a more developed form of linear deterministic quantum mechanics, namely the scattering theory of quantum field theory.

The development of relativistic quantum mechanics led to quantum field theory. For a situation where a quantum system *μ* meets a part *A* of a measurement device, interacts with it and then leaves it, scattering theory can be an adequate description. As we have pointed out already, the scattering theory of quantum field theory has the reversibility that was requested by Bell. These are our reasons for the choice of studying measurement in the scattering theory of quantum field theory.

In our approach, measurement is part of the physics studied, rather than a subject for epistemological analysis. We show how non-linearities can be generated within quantum theory. Our statistical study of measurement processes then shows that those states of the apparatus that are competitive in leading to a final state also take *μ* into *one* of the eigenstates of the measured observable. Moreover, this bifurcation, leading to one of the two possible final states for *μ*, occurs with the frequencies given by Born's rule.

In the following sections, we first give our scattering-theory description and then subject all possible processes to a statistical analysis.
