2.1. Undecidability and Uncomputability in Theoretical Physics
As is known, Gödel’s Theorems constitute a fundamental stage in the relationship between logic and mathematics. Shattering Hilbert’s formalist dream, the undecidability results contributed to the contemporary conception of mathematics as an open, non-compressible (“non-zippable”) system [28]. Considering theoretical physics as a formal construction [29], it is interesting to investigate the possibility of finding undecidable propositions here too. In this sense, there are some general results that depend on the observer being immersed in the system they observe, as discussed in Refs. [30,31], and others more specifically addressed to the questions posed by quantum physics and cosmology, where the unpredictability of an event is rooted in the nature of things. Classical examples of foundational challenges in physics include the collapse of the wave function—which functions similarly to a ‘fifth postulate’ in quantum mechanics—and the cosmological configurations of the universe described by the Wheeler–DeWitt equation [32]. The Wheeler–DeWitt equation attempts to unify quantum mechanics and general relativity by formulating a wave function of the entire universe, encapsulating all possible cosmological configurations in a single quantum framework. However, a significant limitation of this approach is that it lacks a well-defined Hilbert space with an inner product structure. Without a Hilbert space inner product, it becomes impossible to define quantum expectation values for observables, which are essential for making physical predictions. This limitation suggests that while the Wheeler–DeWitt equation is a pioneering step toward a quantum theory of gravity, it may not fully satisfy the requirements of such a theory, as it cannot prescribe expectation values for all relevant quantum observables.
At the quantum level, the limitations of the Church–Turing thesis become evident due to the manifest incompleteness of quantum theory [6]. The Church–Turing thesis posits that any function computable by an effective procedure can be calculated by a Turing machine. However, quantum phenomena such as entanglement and superposition introduce non-local correlations and probabilistic outcomes that classical computational models cannot fully capture.
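To make the kind of limit expressed by the Church–Turing thesis concrete, the following sketch replays the classical diagonal argument against a halting oracle; the function name halts and the wrapper are illustrative assumptions for this argument, not part of any actual library.

```python
def halts(program_source: str, arg: str) -> bool:
    # Hypothetical halting oracle (assumed for the argument; no such effective procedure exists).
    raise NotImplementedError("a halting oracle is not an effective procedure")

DIAGONAL_SOURCE = '''
def diagonal(src):
    if halts(src, src):   # if the oracle predicts "halts", loop forever instead
        while True:
            pass
    return "halted"       # otherwise, halt immediately
'''

# Feeding `diagonal` its own source code to the oracle is contradictory either way,
# so no Turing machine (and hence no effective procedure) can implement `halts`.
if __name__ == "__main__":
    try:
        halts(DIAGONAL_SOURCE, DIAGONAL_SOURCE)
    except NotImplementedError as exc:
        print("no halting oracle available:", exc)
```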
One way to describe this situation is that Shannon–Turing information, which is local in nature, cannot fully capture the detailed evolution of a quantum system, unlike in classical systems, due to the “hidden information” associated with quantum entanglement. This hidden information manifests through quantum potentials or Feynman path integrals, reflecting non-local correlations that are not easily computable using classical information theory. The problem becomes even more radical at the level of quantum gravity because, in General Relativity (GR), the causal structure of spacetime is dynamic and not fixed. When combined with quantum uncertainty, this leads to an indefinite causal structure [33,34,35,36]. This indefinite causal structure complicates the application of standard computational and physical theories, as they rely on well-defined causal relationships. Therefore, both the incompleteness of our current quantum theories and the limitations of classical computational frameworks suggest the need for new paradigms to fully understand and describe the fundamental nature of the universe.
Without a fixed background spacetime, we cannot define a global time parameter or a well-ordered sequence of events, and we cannot consistently determine the truth or falsity of statements about the sequence of events or the evolution of the system. As a result, certain propositions about the behavior of a quantum gravitational system become undecidable within the theory because there is no consistent way to compute or predict outcomes using existing algorithms or axioms. In other words, questions about the system’s evolution or state cannot be resolved as true or false due to the lack of a definitive causal order, leading to undecidable sentences in the theory of quantum gravity. Standard computational methods and logical frameworks struggle to address these questions, highlighting fundamental limitations in our ability to fully describe the universe at the quantum gravitational level.
Undecidability arises in quantum mechanics independently of the multiverse concept. Within the Everett interpretation, propositions about outcomes in other branches or the experiences of alternate selves are undecidable due to the lack of interaction between branches. For instance, an observer cannot determine the outcome of a measurement in another branch, making such propositions undecidable within their own frame of reference. Additionally, standard quantum mechanics presents undecidable propositions due to quantum indeterminacy and the measurement problem. In quantum gravity, the combination of quantum uncertainty and the dynamic causal structure of general relativity leads to undecidable propositions regarding the sequence and causality of events [36].
Of particular interest for our purposes is to underline that the fundamental reason for the indeterminacy in quantum cosmology is of a mathematical nature and derives directly from the Gödel limits. It is the non-classification theorem for four-dimensional manifolds: there is no algorithm that can classify all compact, unbounded four-dimensional manifolds, nor even one capable of deciding whether two of them are equivalent [37]. This is, as is evident, an issue at the heart of the so-called “peaceful coexistence” between GR and QM, and it makes it difficult to define a cosmological wave function. However, there are alternatives. For example, it is possible to choose, for physical reasons, a selection criterion that singles out a geometry from the totality of the manifolds as a cosmological boundary condition. This is the path chosen by Hartle–Hawking and other physicists, which assigns a special role to de Sitter geometry, recently confirmed by observations of the acceleration of the universe [11,38,39]. A more extreme solution could lie in the advent of cellular-automaton models of the universe at the Planck scale, where the extreme conceptual complexity of manifolds is needed only at the coarse-grained level and is replaced by a discrete algorithmic complexity. This path was taken by ’t Hooft in an attempt to unite the interpretation of QM with particle physics [40,41]. ’t Hooft’s interpretation is often described as an attempt to reintroduce locality at the bottom of QM, but this is not entirely accurate. In fact, the periodic orbits inside the cells, which replace in a very precise sense the harmonic oscillators of QM, cannot be observed directly (fast hidden variables), and the equivalence classes support non-local effects up to the Planck scale. It is, therefore, to all intents and purposes, an emergent version of QM. Even in this version, however, the measurement event seems afflicted by the unpredictable characteristics of collapse. Indeed, if we do not limit the idea of collapse to the traditional observer–observed pair and connect it to the more general, objective concept of interaction (a measurement is an interaction), we obtain an image of a universe that is “actualized” through interaction events unpredictable from the fundamental laws of physics, giving rise to emergent properties, metastructures, and complexity.
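As a purely illustrative aside, the flavor of such discrete algorithmic dynamics can be conveyed by an elementary cellular automaton; the sketch below runs a generic Rule 110 evolution (an arbitrary choice, not ’t Hooft’s actual Planck-scale model) to show rich behavior emerging from a local, deterministic update rule.

```python
import numpy as np

RULE = 110                      # illustrative choice; any elementary rule works
rule_bits = [(RULE >> i) & 1 for i in range(8)]

def step(cells):
    """Apply the elementary CA rule to a 1D array with periodic boundaries."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = 4 * left + 2 * cells + right      # encode (l, c, r) as 0..7
    return np.array([rule_bits[n] for n in neighborhood], dtype=int)

cells = np.zeros(64, dtype=int)
cells[32] = 1                   # single seed cell
for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```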
One of the central issues in theoretical physics is the study of complex behaviors within systems. The notion of complexity is not singular; it spans across disciplines, justifying the multitude of possible approaches based on the peculiarities of the system under consideration. However, there is a deeper epistemological reason for this diverse landscape of the “archipelago of complexity”; it is the pivotal role of the observer in detecting situations of complexity, instances where the collective behaviors of a system lead to structural modifications and hierarchical orderings [34]. This consideration leads directly to the crux of the issue of emergence in physics.
In general, intrinsic emergence occurs when we see a discrepancy between the formal model of a system and its observed behaviors. In other words, the recognition of emergence expresses the necessity, or at least the utility, of creating a new model capable of encompassing the new observational ranges. This raises the problem of the relationship between different levels of description, leading to two possible situations.
The first is known as phenomenological emergence, which concerns the semantic intervention of the observer regarding the new behaviors of the system. The aim is to create a model whose characteristics—selection of state variables and dynamic descriptions—provide a more convenient description of the observed processes. In this case, it is always possible, at least in principle, to connect the two models through appropriate “bridge laws”, whose task is to link the two descriptive levels via a finite amount of syntactic information.
The second is radical emergence, which involves a completely new and different description that cannot be linked to the original model. Here, a breakdown of the causal chain is usually observed, which can be described with appropriate symmetries and irreducible forms of unpredictability. In this case, the connection between the theoretical corpus and the new model may require a different type of theory semantics, such as a new interpretation and a new arrangement of basic propositions and their relationships, as in statistical physics [42].
These two distinctions should be considered purely illustrative, as more varied and subtle intermediate cases can indeed arise. As an example of phenomenological emergence, consider the relationship between Newtonian dynamics and the concept of entropy (following Standish). Classical dynamical laws are time-reversible, whereas entropy defines an “arrow of time”. To bridge these two levels—the microscopic reversible dynamics and the macroscopic irreversible behavior—classical statistical mechanics employs Maxwell–Boltzmann statistics and probabilistic assumptions, which are centered on space-time symmetries (due to the isotropy and homogeneity of space-time, there are no privileged points, directions, or instants in a process of energy level de-correlation). This establishes a “conceptual bridge” between particle descriptions and entropy, thus connecting the microscopic and macroscopic analyses of the system. However, this connection does not cover all aspects of the problem and cannot be seen as a complete “reduction”. In fact, even within the framework of classical physics, entropy may locally decrease due to statistical fluctuations, and while the microscopic description provides fundamental insights, for practical purposes, we often describe a perfect gas using macroscopic parameters like pressure, volume, and temperature rather than tracking individual molecules [43]; a toy version of this coarse-grained bridge is sketched below.
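A minimal numerical illustration of this bridge, under assumed toy dynamics (independent random walkers standing in for molecules, ten spatial bins for the coarse-graining), is the following sketch: the coarse-grained entropy rises toward its maximum while still showing the small downward fluctuations mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PARTICLES, N_BINS, N_STEPS = 500, 10, 2000
x = rng.uniform(0.0, 0.1, size=N_PARTICLES)   # start: all particles in the leftmost 10% of the box

def coarse_entropy(positions):
    """Shannon entropy (in nats) of the occupation of N_BINS equal cells."""
    counts, _ = np.histogram(positions, bins=N_BINS, range=(0.0, 1.0))
    p = counts[counts > 0] / len(positions)
    return float(-(p * np.log(p)).sum())

entropies = []
for _ in range(N_STEPS):
    x = (x + rng.normal(0.0, 0.02, size=N_PARTICLES)) % 1.0   # random walk in a periodic box
    entropies.append(coarse_entropy(x))

print(f"initial S = {entropies[0]:.3f}, final S = {entropies[-1]:.3f}, max possible = {np.log(N_BINS):.3f}")
dips = sum(1 for a, b in zip(entropies, entropies[1:]) if b < a)
print(f"steps where S momentarily decreased (fluctuations): {dips} of {N_STEPS - 1}")
```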
Another example concerns EPR–Bell correlations and the role of non-locality in Quantum Mechanics. In the Copenhagen interpretation, non-local correlations are observed but are not part of the theory’s facts. In Bohm’s interpretation, the introduction of the quantum potential allows non-locality to be incorporated within the theory. It is worth noting that historically, the EPR issue originated as an ‘ideal experiment’ between Einstein and Bohr on the ‘elements of physical reality’ of QM. Only later, with Bohm’s analysis and Bell’s inequality on the limits of local hidden-variable theories, was it possible to turn the issue into an experimental matter. Neither Einstein nor Bohr actually expected to observe ‘spooky actions at a distance’. Importantly, the expression of non-locality in Bohm’s theory does not require additional formal hypotheses beyond the standard framework provided by the Schrödinger equation. However, while this new interpretative perspective provides a different understanding of the theory, it also raises issues regarding what has been termed the ‘peaceful coexistence’ between special relativity and QM.
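For concreteness, the non-classical character of these correlations can be checked numerically; the sketch below (plain NumPy, with conventional angle choices assumed for the measurement settings) evaluates the CHSH combination on a Bell state and recovers the quantum value 2√2, above the classical bound of 2.

```python
import numpy as np

# Pauli operators and the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
phi_plus = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def spin(theta):
    """Spin observable measured along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlation(a, b):
    """E(a, b) = <Phi+| A(a) x A(b) |Phi+>."""
    op = np.kron(spin(a), spin(b))
    return float(phi_plus @ op @ phi_plus)

a0, a1 = 0.0, np.pi / 2          # Alice's settings
b0, b1 = np.pi / 4, -np.pi / 4   # Bob's settings

S = correlation(a0, b0) + correlation(a0, b1) + correlation(a1, b0) - correlation(a1, b1)
print(f"CHSH value S = {S:.3f}  (classical bound 2, quantum bound 2*sqrt(2) = {2*np.sqrt(2):.3f})")
```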
In both of the cases briefly examined here, we can see how phenomenological and radical aspects of emergence are deeply intertwined in the dynamics of the development of physical theories. Moreover, this underscores the fundamental role of the observer in modeling and interpretive choices. It is essential to note that the relationship between observer and observed is not a bipolar relationship and, to prevent epistemological impoverishment, it cannot be resolved in a single direction. Instead, it should be considered an adaptive process where the system’s internal logic meets our ways of acquiring information about it to construct theories and interpretations capable of providing a description of the system.
2.2. Computing the Universe as a Whole
The broadest definition of the universe ever proposed is due to the medieval philosopher Iohannes Scotus Eriugena (810–877), who understood it as everything that is created and that is not created. For a modern mentality, the reference to what is not created, or cannot be created, is interesting in relation to the importance given to constraints. Today we could include in the definition the probability of an event imposed by Quantum Mechanics, and replace the theological accents with the big bang and the conditions that define space-time and physical laws. All this must be distinguished from the observable universe, whose boundaries are those we know from cosmology understood as the history of matter, and which is different from the set of possibilities contemplated by theoretical physics. The drive towards unification pushes the archipelago of physical theories towards a greater number of connections around some central islands (relativity, Quantum Mechanics) and some mathematical keys (gauge theories). These connections imply very strict requirements on the constraints of possibilities, to the point of suggesting the idea of a new approach to physics based on them [44]. The question we ask ourselves is whether any hypothetical theory of everything can be considered as a Gödel system and which aspects of the universe would remain undecidable or incomputable. It should be underlined that in the current state of knowledge, non-computability should not be understood in terms of algorithmic compression (Church–Turing thesis), because manifolds constitute a central part of much of our knowledge of the physical world; furthermore, the very structure of quantum physics poses problems for the universality of a quantum Turing Machine [45,46]. An important ingredient of the universe is, in addition to chaos and randomness (Kolmogorov-type), the presence of organized complexity, which favors the development of structures with logical depth [47]. In the current state of affairs, it is very difficult to say whether this aspect of the universe derives from physical laws or rather from special boundary conditions, as seems more likely [48,49]. We have arrived at the crucial point regarding the question of the physical world as a Gödelian system, understood in a broad sense. On the one hand, we can resolve the issue of the incomputability of the cosmological boundary conditions (WdW equation) by choosing a specific geometry, as in the Hartle–Hawking case; in this case, the collapse of the wave function remains an unpredictable event with characteristics of randomness, an event undecidable on the basis of fundamental laws. As is known, the enigmatic aspects of the collapse dissolve in Everett’s many-worlds interpretation, which has merged today with the cosmology of chaotic inflation [21,50,51]. This new, powerful cosmological interpretation of QM seems capable of solving both undecidability problems, namely the choice of boundary conditions for the universe and the collapse of the wave function, suggesting that the multiverse is a logically closed and consistent system fusing physical laws with boundary conditions [52].
The Church–Turing–Deutsch principle (CTD), formulated by Deutsch [6,53,54], states that a universal computing device can simulate any physical process: “every finitely realizible physical system can be perfectly simulated by a universal model computing machine operating by finite means”. Any Turing Machine can in principle be built to describe any physical phenomenon. Turing Machines, including quantum and classical computers, are, in any case, also physical systems, and anything they can do is dictated by the laws of physics, including our language and the building of mathematical truths. Thus, the physical limits of computation are determined by the fundamental constants of Nature, such as the speed of light c, the quantum of action h and the gravitational constant G, with well-defined quantitative bounds [55].
Paradoxically, following the laws of computation, the universe could ideally be simulated by a quantum computer or by a suitable Turing Machine, with the consequence that we could be the result of a quantum computer simulation and be living inside it.
Unlike mathematical languages, in which true, false, and undecidable propositions can be defined, these concepts take on radically different aspects in the axiomatization of physical theories, where the axiomatic approach has always been little more than an attempt at synthesis, mixing theoretical elements and empirical assumptions [26]. The point is that physics, however formalized, is never a purely syntactic system, because every physical theory has an intrinsic semantics defined with operational procedures. Nonetheless, it is interesting to ask whether physical theories are undecidable and what foundational questions a “theory of everything” can pose.
One of the main points is the problem of quantum measurement, which is at the heart of quantum information processing and is one of the criteria for quantum computation.
These properties also include metaproducts, or structures emerging from sets of physical systems with emergent laws different from the basic laws of Quantum Mechanics. An example is classical systems that deal with the concept of real numbers, which cannot be simulated by a Turing Machine, as a TM can only represent computable reals as the product of a finite calculation.
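A small sketch of what “computable real” means in practice is given below, assuming the usual convention that a computable real is a program mapping a requested precision to a rational approximation; the name sqrt2 and the bisection scheme are illustrative choices.

```python
from fractions import Fraction

def sqrt2(n: int) -> Fraction:
    """Return a rational within 2**-n of sqrt(2), by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

for n in (5, 20, 50):
    print(n, float(sqrt2(n)))
# Any halting computation outputs a finite (rational) object; the real number itself
# is only ever given as the limit of an unending sequence of such approximations.
```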
If the universe is finite, contained within a given finite region of space such as a sphere of radius R, it contains a finite amount of information and energy E and thus of entropy. This is given by the Bekenstein bound, an upper limit on the thermodynamic entropy:

$$S \le \frac{2\pi k R E}{\hbar c}, \qquad (1)$$

where k is Boltzmann’s constant, ℏ is the reduced Planck constant, and c is the speed of light. The entropy can also be described in terms of the Shannon entropy, i.e., as an information content in bits:

$$I \le \frac{S}{k \ln 2} = \frac{2\pi R E}{\hbar c \ln 2}. \qquad (2)$$

In other words, this quantity gives the maximal amount of information required to fully describe any given physical system down to the quantum level.
The information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite, as expressed in Equations (1) and (2).
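As a rough sense of scale, and only as an illustrative evaluation of Equations (1) and (2) with arbitrarily chosen inputs (one kilogram of mass-energy confined in a one-metre sphere), the bound can be computed directly:

```python
import math

k_B  = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s

def bekenstein_entropy(R, E):
    """Upper bound S <= 2*pi*k_B*R*E / (hbar*c), as in Eq. (1)."""
    return 2 * math.pi * k_B * R * E / (hbar * c)

def bekenstein_bits(R, E):
    """Same bound expressed as Shannon information, Eq. (2): I = S / (k_B * ln 2)."""
    return bekenstein_entropy(R, E) / (k_B * math.log(2))

# Illustrative numbers only: ~1 kg of mass-energy inside a sphere of radius 1 m
R, E = 1.0, 1.0 * c**2
print(f"S_max ~ {bekenstein_entropy(R, E):.3e} J/K, I_max ~ {bekenstein_bits(R, E):.3e} bits")
```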
In computer science, this implies that there is a maximal information-processing rate, Bremermann’s limit [56,57,58,59], for a physical system that has a finite size and energy, and that a Turing Machine with finite physical dimensions and unbounded memory is not physically possible. Unless we assume that mathematical truths are emergent metastructures, viz., structures that do not directly depend on the initial physical laws, the actual representation of integer and real numbers would not be possible in a finite universe, including the representation of infinities, unless we assume the existence of local continuous variables that reflect the classical concepts of space and time, or the quantum mechanical continuous variables. Being continuous, in the mathematical language they require an axiomatic definition built on Dedekind cuts, with elements that have to be Dedekind-complete [60], or on Tarski’s axiomatization [61], neither of which shows a direct connection with what we call basic physical principles.
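For the same kind of back-of-the-envelope feel, Bremermann’s limit mentioned above can be evaluated numerically; the sketch below assumes the usual form of the bound, roughly mc²/h bits per second for a computer of mass m, and the universe-mass figure is only an order-of-magnitude assumption.

```python
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bremermann_rate(mass_kg):
    """Bremermann's bound: at most ~ m*c**2/h bits per second for a computer of mass m."""
    return mass_kg * c**2 / h

print(f"1 kg computer: <= {bremermann_rate(1.0):.2e} bits/s")
# Assumed order-of-magnitude mass of ordinary matter in the observable universe
print(f"~1.5e53 kg (observable universe, rough estimate): <= {bremermann_rate(1.5e53):.2e} bits/s")
```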
Following Constructor theory, information is expressed in terms of which transformations of physical systems are possible and which are impossible, using the language of ergodic theory [62]. An input substrate is processed by the constructor, giving an output substrate. Each event or measurement process can be expressed in terms of constructors and substrates, up to a universal constructor whose input and output substrates represent the evolution of the universe. In this way, the universe can be represented in the Constructor theory framework. Being a product of the mathematical language, a constructor that represents the whole universe must have the same set of information needed to build the universe and its evolution. Information is something whose nature and properties are determined by the laws of physics alone. Information is also of the essence in the preparation of a state and in measurement in physics; the input and output of state preparation and measurements are represented by a set of information quantities, with information being a physical process.
Modern physics has recently developed some high-level phenomenological models, setting up a sort of end-game theory that needs no a priori notions, to obtain a way of describing the system universe as a whole, as in the Wheeler–DeWitt wave function of the universe, with the aim of describing the universe in terms of a self-consistent, semantically closed object.
Axiom-based models generate object-based logic and metastructures with their composition meta-rules, and are based upon symbols acting as fictitious objects obeying some meta-rules, requiring an infinite hierarchical regress to higher-level modeling capable of accounting for any emergent phenomenon. This is an aspect derived from Gödel’s theorems for a formal logical system, which can, under certain hypotheses, be classically translated into a Turing Machine by the classical Church–Turing thesis: each computable function can be computed by a universal Turing Machine. When generalizing this universality to quantum computation, we recall that there should be a universal quantum Turing Machine performing any desired unitary transformation on an arbitrary number of qubits, either by including a program as another part of the input state, or with the program effecting a unitary transformation independent of the state of the qubits to be computed. It is shown, however, that due to entanglement neither of these two situations exists in Deutsch’s quantum Turing Machine [6]. In this case, an input state is unitarily evolved to an output one. Such an algorithm, written in classical language, consists mainly of two parts:
1. How to embed the problem in the input state and the result in the output state.
2. How to realize the desired unitary transformation in terms of various quantum gates and wires, i.e., how to construct a quantum computational network.

As shown in [63], the Church–Turing thesis cannot be generalized as it stands to quantum computation, i.e., an arbitrary unitary transformation can be realized by a network composed of repeated applications of local gate operations, and the algorithm for composing the network is classical. We have two types of universality in its quantum generalization:
Type-I universality refers to the ability of performing any desired unitary transformation on an arbitrary number of qubits, by including a program as another part of the input state, similar to the classical one.
Type-II universality means that the same program can be used for different input data.
Linearity and unitarity of quantum evolution conflict with these two types of universality in Deutsch’s QTM. The two types of universality in quantum computation, as possible generalizations of the notion of universality in classical computation stated in the Church–Turing thesis, therefore do not exist: with the dynamics fixed, the linearity and unitarity of quantum evolution make it impossible to synchronize the different quantum paths of every possibility. This difficulty originates in entanglement. For a specific quantum computation, however, there is no such difficulty by definition; the Church–Turing thesis is then interpreted as a physical principle, related to the problem of quantum measurement.
To model without a priori notions and without infinite nesting into metastructures, start-up axioms must be given via a universality property provided by self-organized criticality. This describes the property of many systems to self-organize in such a way that the system itself moves towards states characterized by a fractal-like description, with no fundamental local scale. It was shown in [64], generalizing Gödel’s and Chaitin’s results in mathematics, that self-referential and self-contained systems, such as the universe, must involve intrinsic non-local randomness, namely self-referential noise, as a realization-independent characterization of self-referencing. Recent developments in nonlinear quantum field theory, also following the prescriptions of Nelson’s Quantum Mechanics and stochastic quantization, have shown that a simple model could generate stochastic behavior during its evolution [65], giving universality to the chaotic inflationary scenario described via an interacting many-field model. This “noisy” property is a general feature of the quantum scenario; Wiener processes and Fractional Brownian Motions characterize each model evolution with a specific fractal behavior depending on its global particle statistics in a generalized Haldane scenario [34,66]. Chaitin stated that if the system is sufficiently complex, the self-referential capability of arithmetic results in randomness and unpredictability from a thermodynamical point of view. Local randomness also arises from quantum measurement processes, which are shown to be undecidable propositions inside the structure of the universe, and should require a metastructure, such as an Everett scenario, in which measurement is defined as a universe-choice process following the model of Nelson’s Quantum Mechanics [67,68]. Using a classical formulation produced from nonlinear dynamics, in a finite-time computation model (i.e., locally), quantum and classical nonlinear systems are indistinguishable, and the choice corresponds to an arbitrary halt of a classical Turing Machine.
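Self-organized criticality itself is easy to exhibit in a toy setting; the sketch below runs the standard Bak–Tang–Wiesenfeld sandpile (chosen here purely as a generic illustration, with an arbitrary grid size and drop count), in which avalanches of widely varying sizes appear without tuning any control parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 20

def relax(grid):
    """Topple every site holding 4 or more grains (Bak-Tang-Wiesenfeld rule); return avalanche size."""
    topplings = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return topplings
        for i, j in unstable:
            grid[i, j] -= 4
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    grid[ni, nj] += 1      # grains toppling off the edge are lost

grid = np.zeros((SIZE, SIZE), dtype=int)
avalanches = []
for _ in range(10000):
    i, j = rng.integers(0, SIZE, size=2)
    grid[i, j] += 1                        # drop one grain at a random site
    avalanches.append(relax(grid))

sizes = np.array(avalanches)
print("largest avalanche:", sizes.max(), " mean size:", round(float(sizes.mean()), 2))
# After a transient, the pile hovers near criticality: avalanche sizes span many
# scales (roughly a power law) with no tuning of any control parameter.
```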
Heisenberg’s uncertainty principle in this scenario could act as a geometric self-similarity operator, and the Mach principle is intended as a measurement operator, essentially given by the nonlinearity property of the fields acting as a semiclassical object interacting with a quantum one (as happens during reheating in inflation, described semiclassically because it interacts with other quantum scalar fields). So, the self-reproducing universe produces its existence by its own self-interaction. Showing that the problem of quantum measurement is closely related to Gödel’s thesis, this could imply, as a necessary condition, the existence of a metastructure, or of a self-referential structure given by a self-reproducing, organized, hierarchical interacting baby-universe scenario, geometrically described, via the Heisenberg uncertainty principle and Noether’s theorem, as a scaling characteristic of its fractal property. The (self)generation processes of the universe are such as to guarantee a quantum tessellation (with the Generalized Uncertainty Principle) and then an emergence of what we call isotropic and homogeneous spacetime. There is no clear difference between the universe and the multiverse, especially in QM (the concepts of “inside” and “outside” depend on the degree of logical openness of the model, which cannot be infinite; i.e., what is valid for mathematical metastructures does not necessarily apply to physical events, which, after all, gave rise to the formalisms. Among other things, the development of quantum gravity will lead us to better define these aspects suspended between the continuous and the discrete).
An example can be found in the inflationary scenario; Linde and Vilenkin [50,69,70,71] discussed the possibility, under certain hypotheses, of the existence of a self-reproducing inflationary universe described as a self-reproducing fractal structure. In this context, this scenario could be successfully seen as belonging to the class of what we call a “meta Everett Universe”, without needing a specific initial choice of parameters, interpreted as an emergent, self-semantically closed structure which reproduces itself during its evolution.
Some properties of the language able to discuss this sort of semantically closed self-object are classically well described by second-order cybernetics concepts. As an example, von Foerster [72,73] postulated the existence of solutions for an indefinite classical recursive equation derived from a Piagetian recursive structure of implications, which describes an observer’s account of a causal interaction between observer and observed without any starting point and with an event-ordering property, i.e., a chain of implications within the self-referential structure that defines ordering processes inside the structure, as stated by von Neumann. The solutions to this symbolic equation represent a stability structure in terms of discrete eigenvalues within the chain of infinite implications, and they act in the formal logical structure as a group of axioms for a metamodel, i.e., a model of modeling reality, in which all fundamental properties can emerge from a self-organizing process of the structure universe itself, by means of a process called eigenbehavior. A self-organizing system like this is autonomous iff (if and only if) all the structural processes that define and sustain its dynamics are internally produced and reproduced, with an organizational closure, as seen above. It is shown that, to change its classification ability, the system must change its own structure. Following this prescription, it is possible to define the problem of quantum measurement as a process, i.e., an object inside and belonging to the universe itself, the result of an ensemble of physical processes that concur to create a global one which changes its own structure, defining a cosmic time and keeping the energy constant, in accordance with thermodynamics and the decoherence problem of a pure state (in fact, if we make a distinction between observer and observed, the transformation of a pure state such as the universe into a mixed one by a measurement process is described by a change of the Hamiltonian mean value, in violation of the energy conservation principle).
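The flavor of von Foerster’s eigenbehavior can be conveyed by a toy fixed-point iteration; the ‘observation’ operator below is an arbitrary stand-in (any contracting map would do) and is not the actual Piaget/von Foerster formalism.

```python
import math

def observe(x):
    """A toy 'observation' operator; any contracting map serves the illustration."""
    return math.cos(x)

x = 0.3                        # arbitrary initial description
for _ in range(60):
    x = observe(x)             # x_{n+1} = Obs(x_n): recursive self-application
print("eigenbehavior x* with x* = Obs(x*):", round(x, 6), " check Obs(x*):", round(observe(x), 6))
```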
Thus, each physical process can act as a measurement process for its complement within the universe, giving rise to a surging up of “infinite” state superpositions, concurring to describe the evolution of the global state of the universe as the result of a collapse of all those possible states of existence. The problem with infinities is thus linked to Cantor’s theorem, which denies a bijection between $\mathbb{N}$ and $\mathbb{R}$, or to the halting of a classical Turing Machine on non-computable functions.
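The diagonal construction behind that theorem is short enough to state in code; the sketch below is only an illustration of the argument (the sample enumeration is an arbitrary stand-in for any claimed listing of infinite binary sequences).

```python
def diagonal(enumeration, n_digits=10):
    """Given any enumeration i -> infinite binary sequence (as a callable),
    build a sequence that differs from the i-th one at digit i."""
    return [1 - enumeration(i)(i) for i in range(n_digits)]

def sample(i):
    """A sample (would-be) enumeration: the i-th sequence is the binary expansion of i."""
    return lambda j: (i >> j) & 1

d = diagonal(sample)
print("first digits of the diagonal sequence:", d)
# d differs from the i-th listed sequence in position i, so no enumeration
# (no bijection with the naturals) can exhaust all infinite binary sequences.
```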
In this way, roughly speaking, the collapse and the quantum measurement should be removed from the aura of “mystery” once and for all; in the end, we simply cannot count all the interactions in the universe (which we should have known since Feynman paths and Bohm’s potential [74]). With Everett (and in this part, our meaning is strengthened and clarified), Rovelli and von Foerster claim that we build our “eternal brilliant garland”, also removing the naïve residue of decoherence. Without collapse, there would be no particles! What remains, up to the conclusions, brings to fruition the link between physics and computation. In practice, we have removed Everett and decoherence from the banal readings of the “fundamentals”, bringing them back to the center of concrete physics. A rough synthesis could look like this: every nucleation from the quantum vacuum, i.e., every production of a universe, is so rich in complexity (viz., metastructures) that for every observer-language there will always be physically undecidable propositions.
In principle, this can be described by a Heraclitean Process System (HPS) with self-organizing critical characteristics: randomness, nonlinearity, nonlocality and iterative structure, which give rise to a fractal-like structure. The linear iterative map is given by the results of a local quantum mechanical measurement, generating self-referential noise. It is a manifestation of HPS characteristics via objectification processes, such as nonlocality configurations induced by macroscopic objects, or nonlinearity itself, as the Mach principle states. Nonlinearity behaves as a macroscopic semiclassical apparatus, giving discrete jumps. Some approaches along these lines have been made by defining a quantum measurement as a sequence of binary quantum jumps caused by a macroscopic apparatus [75,76], avoiding the creation of a perpetuum mobile of the third kind. The perpetuum mobile is defined by considering the statistical operators for the composed system before and after a measurement;
the mean value of the energy is expressed in terms of the Hamiltonian H as $\langle E \rangle = \mathrm{Tr}(\rho H)$, and the energy and entropy changes are $\Delta E = \mathrm{Tr}(\rho' H) - \mathrm{Tr}(\rho H)$ and $\Delta S = -k\,\mathrm{Tr}(\rho' \ln \rho') + k\,\mathrm{Tr}(\rho \ln \rho)$, where $\rho$ and $\rho'$ denote the statistical operators before and after the measurement. If an observable A is measured, this means that $\rho' = \sum_i P_i\, \rho\, P_i$, so that in general $\Delta E \neq 0$, violating energy conservation. But by the laws of thermodynamics $\Delta S \geq 0$, which gives for a jump $\Delta S > 0$ but also $\Delta E > 0$, that is, a self-heating system.
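A minimal numerical check of this bookkeeping, under assumed toy inputs (a single qubit in its ground state, a non-selective measurement of the observable σ_x, and k set to 1), is sketched below: a single jump already gives Δ⟨E⟩ > 0 together with ΔS > 0.

```python
import numpy as np

# Qubit Hamiltonian and an observable A = sigma_x that does not commute with H
H = np.diag([0.0, 1.0])                      # energy eigenbasis
P_plus  = 0.5 * np.array([[1, 1], [1, 1]])   # projectors onto the sigma_x eigenstates
P_minus = 0.5 * np.array([[1, -1], [-1, 1]])

def vn_entropy(rho):
    """Von Neumann entropy -Tr(rho ln rho), with k = 1."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rho = np.array([[1.0, 0.0], [0.0, 0.0]])     # ground state, <E> = 0
rho_after = P_plus @ rho @ P_plus + P_minus @ rho @ P_minus   # non-selective measurement of A

dE = np.trace(rho_after @ H).real - np.trace(rho @ H).real
dS = vn_entropy(rho_after) - vn_entropy(rho)
print(f"Delta<E> = {dE:+.3f}, Delta S = {dS:+.3f}  (k = 1)")
# Repeated jumps of this kind pump both energy and entropy into the system:
# the 'self-heating' behaviour referred to above.
```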
In this way, an additional structure of spacetime is added, in terms of a hypersurface with a constant value of what is interpreted as cosmic time. In the Heisenberg picture, the state vector changes at each quantum jump, which corresponds to a spacelike hypersurface. Each quantum jump gives a disjoint hypersurface, causally ordered as in the von Foerster recursive equation. A cosmic macroscopic time for the instantaneous wave function collapse is thus defined, and time receives, in this scenario, a precise meaning like any other quantum result. When applying a measurement to the state universe, we must define an operator capable of describing the collapse of a state inside the Everett scenario. The description of whether a quantum measurement operation has occurred or not is given by the Rovelli operator M [77], which has the crude meaning of “It has happened or not” and logical eigenvalues “1” or “0”.
To give an example, consider a physical system whose observable can usually be expressed in terms of state projectors as
$$A = \sum_i a_i P_i, \qquad (3)$$
where $P_i$ is the general projection operator on a given basis. In our case, we consider the universe split between an observer, itself a part of the universe, and the complementary set of the observer in the universe, from the point of view of the internal observer. Let us consider the system vector under measurement, $|\psi\rangle = \sum_i c_i\,|a_i\rangle$, and $|\phi_0\rangle$, the vector state of the apparatus; then $|\Psi\rangle = \sum_i c_i\,|a_i\rangle \otimes |\phi_i\rangle$ represents the composed system between them after an interaction. Before the measurement process, we have $|\psi\rangle \otimes |\phi_0\rangle$; then the state becomes $|a_i\rangle \otimes |\phi_i\rangle$ with probability $p_i = |c_i|^2$ and $\sum_i p_i = 1$.
From the definition in Equation (3), the statistical operator of the composed system is defined in the following way:
$$\rho = |\Psi\rangle\langle\Psi|, \qquad (4)$$
which corresponds to the vector $|\Psi\rangle$ of the composed system, in agreement with the energy conservation principle and with the laws of thermodynamics. By the concept of decoherence, the pure state universe is seen as the composed system of Equation (4) and is equivalent to the one given after the measurement in terms of state projectors,
$$\rho \simeq \sum_i |c_i|^2\, |a_i\rangle\langle a_i| \otimes |\phi_i\rangle\langle\phi_i|, \qquad (5)$$
with the decoherence condition
$$\langle \phi_i | \phi_j \rangle = \delta_{ij}. \qquad (6)$$
To avoid the decoherence condition of Equation (6) and obtain the pure state universe, following Equation (5), one has to make the pure state indistinguishable from the mixed one, embedding everything into a structure that must evolve after each process has occurred, changing its structure in terms of the interactions between its subsets.
We can state that each event concurs in the evolution of all universes and all universes generate each event, as the Mach principle states. This defines the universe in terms of a von Neumann universal constructor [14]. To give a simple example, the measurement of A is described with two discrete eigenvalues only, e.g., $a_1$ and $a_2$, with eigenstates $|a_1\rangle$ and $|a_2\rangle$ and an interaction system–device Hamiltonian $H_{\mathrm{int}}$.
Following the prescriptions of Quantum Mechanics, we prepare the state of the device, $|\phi_0\rangle$, such that in a finite time the interaction will evolve $|a_1\rangle \otimes |\phi_0\rangle$ into $|a_1\rangle \otimes |\phi_1\rangle$ and $|a_2\rangle \otimes |\phi_0\rangle$ into $|a_2\rangle \otimes |\phi_2\rangle$, i.e., $(c_1|a_1\rangle + c_2|a_2\rangle) \otimes |\phi_0\rangle \rightarrow c_1\,|a_1\rangle \otimes |\phi_1\rangle + c_2\,|a_2\rangle \otimes |\phi_2\rangle$. Then, $c_1\,|a_1\rangle \otimes |\phi_1\rangle + c_2\,|a_2\rangle \otimes |\phi_2\rangle$ is a pure state that is replaced with one of the two substates after the wave function collapse, $|a_1\rangle \otimes |\phi_1\rangle$ or $|a_2\rangle \otimes |\phi_2\rangle$. Computing the probabilities for the collapse, we have a corresponding hypersurface for any given state of the combined observer–observed system by means of the corresponding Rovelli operator M acting in this way: on a state where the measurement has “happened”, with eigenvalue 1, $M\,\bigl(|a_i\rangle \otimes |\phi_i\rangle\bigr) = 1 \cdot |a_i\rangle \otimes |\phi_i\rangle$, and on the state “not happened”, with eigenvalue 0, $M\,\bigl(|\psi\rangle \otimes |\phi_0\rangle\bigr) = 0$.
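A compact numerical sketch of this two-outcome scheme is given below; it assumes illustrative amplitudes (|c1|² = 0.7, |c2|² = 0.3) and a three-level pointer, and it uses branch projectors as a stand-in for the “it has happened” question, so it is an illustration rather than Rovelli’s actual construction.

```python
import numpy as np

# System in a superposition of the two eigenstates of A; device starts in |phi_0>
c1, c2 = np.sqrt(0.7), np.sqrt(0.3)
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])             # eigenstates of A
phi0, phi1, phi2 = np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]     # device pointer states

# The interaction unitarily maps the pre-measurement product state to the entangled state
pre  = np.kron(c1 * a1 + c2 * a2, phi0)
post = c1 * np.kron(a1, phi1) + c2 * np.kron(a2, phi2)
assert np.isclose(np.linalg.norm(pre), 1.0) and np.isclose(np.linalg.norm(post), 1.0)

rho_post = np.outer(post, post)

# Branch projectors play the role of the "it has happened" (eigenvalue 1) question
Pi1 = np.kron(np.outer(a1, a1), np.outer(phi1, phi1))
Pi2 = np.kron(np.outer(a2, a2), np.outer(phi2, phi2))

p1, p2 = np.trace(Pi1 @ rho_post).real, np.trace(Pi2 @ rho_post).real
print("branch probabilities:", round(p1, 3), round(p2, 3))      # |c1|^2 and |c2|^2

# Branch-diagonal (decohered) state: interference terms between branches removed
rho_mixed = Pi1 @ rho_post @ Pi1 + Pi2 @ rho_post @ Pi2
off_diag  = np.linalg.norm(rho_post - rho_mixed)
print("weight of the interference terms suppressed by decoherence:", round(off_diag, 3))
```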
In this way, time, measurement, and event acquire meaning within Everett’s Many-Worlds model, in which every instant of time corresponds to a particular universe, and a measurement with its result is described as a causal process with an ordering operator capable of defining a cosmic time. Only in this meta-universe can the problem of decoherence be translated into a problem of universe choice; phase coherence is lost by physical decoherence into the environment, making pure and mixed states indistinguishable and changing the structure of the global state in terms of the mutual relationships between its substates, which means time evolution. The complement of an observer can in fact be seen as the environment surrounding a quantum system. In this way, the observer can monitor the observables of the system, or part of them. The effect of the observer is to continuously induce decoherence in the eigenstates of these observables, which can then assume behavior similar to that of classical or pseudo-classical states [78,79,80], as in the convergence of a quantum Turing Machine to the final state of a computation.
For this reason, the Everett scenario seems to become a sort of necessary environment for the evolution of the pure state that represents the object universe when described in terms of classical (or pseudo-classical) formal propositions, with the possibility of building a Gödelian symbolic construction (or a Turing Machine) to describe the evolution of the universe and the interactions between subsets, up to the interaction with “classical” (or pseudo-classical) observers; this leads the problem of a quantum measurement to be related to an undecidable proposition inside the linguistic representation of the universe itself. The implication is that the existence of a metastructure, a “linguistic” meta-universe, becomes mandatory. Since this representation of the meta-universe is a class of universes in the evolution of our universe and is at the same time built from physical events in the universe, the class should be a subset of the universe by definition, or coincide with the universe, recalling Russell’s paradox. The mandatory requirement of the existence of a universe of universes shows that our classical language is inadequate for describing the state universe as a whole self-object unless we employ a self-bootstrapped structure, in which the laws of physics self-sustain one another through their mutual consistency [81].