1. Introduction
Some of the theories that try to account for the quantum gravity (QG) domain are considered perfectly consistent from a mathematical point of view. However, their physical meaning is currently debated. Indeed, physics is made up of mathematically formulated theories as well as ingeniously devised experiments and observations, which provide data and ultimately have to verify theoretical predictions.
It is well-known that empirical testability is the epistemic problem of QG. In fact, the whole sphere of its observables seems to be out of reach at the moment. Objects and states at the Planck scale—which quantum gravity theories generally study—are too small in spatial and temporal magnitude and too high in energy to be observed in a terrestrial laboratory. The best currently available possibilities of empirical examination are connected to astrophysical observations; for instance, of black holes or early states of the universe. Thus, from a theoretical stance, given that testing quantum gravity theories is tremendously difficult, there remain at least two options: (1) to devise new possible experimental frameworks, while waiting for instruments that are able to make observations at the Planck scale [1]; (2) to seriously consider the possibility of replacing the standard criteria of scientific verification with less empirically governed ones [2].
We argue that these alternatives are related to two opposed views about the theoretical status of spacetime in physical theories and, in particular, in the QG context. Though neither of them is the standard point of view of the physics community, an analysis of their straightforward opposition may be insightful for understanding the problem of the empirical coherence of QG theories.
According to some, spacetime is not a fundamental physical structure, but emerges at less basic levels. For example, in some high-energy theories, such an emergence would occur at “low energies”, and even the geometrical framework of spacetime would be derivable from the dynamics of the fundamental QG theory [
3]. On the contrary, other interpreters claim that spacetime and the underlying geometrical notions are still fundamental in any physical domain [
4]. The reason grounding this fundamentality would be an epistemological concern regarding the conditions of empirical testability. These conditions in fact require a reference to geometrical magnitudes, which are thought to be the primary measurables. In this sense, if a theory aims to be empirically meaningful (i.e., testable through measurements), it cannot get rid of the categories of space and time. Such a connection is thought to be valid even in the context of QG theories.
Thus, unsurprisingly, the supporters of spacetime fundamentality assume the epistemologically conservative approach expressed by (1). On the contrary, some scholars who argue for the disappearance of spacetime seem to follow the perspective of (2). Indeed, according to them, there is a tension between the consideration of spacetime as empirically essential for any theory and its ontological status of being emergent. Thus, the concept of empirical salience itself should be redefined for physical theories [
5].
We suggest that the debate about the status of spacetime should pass from a philosophical dimension concerning ontology to an epistemic one concerning methodology. In particular, this debate may be relevant when determining the limits of a physical model’s testability. In line with so-called naturalized epistemology, we argue that these limits should be attuned to the boundaries of human cognitive capacities. We focus on the systems of numerical representation, which are tightly related to our capacities to abstract basic notions such as number, space, and time from experience. We review cognitive research that explores these topics. Then, we propose an epistemic argument that may be used as a “conceptual test” to assess whether the mathematical content of a quantum gravity theory refers to some possibly verifiable empirical model.
2. The Debate on Spacetime and Its Methodological Meaning
Physics is certainly the most fundamental science of nature. Its methodology is a guiding model for the other empirical sciences. While older physical theories are replaced by newer ones over time, methods seem to be mostly preserved. Nevertheless, methodology is not independent of the objects studied and of the leading theories. A strong modification of basic physical theories generally modifies the definitions of key notions, and this can produce further effects on the methods, even beyond the invention of new instruments and techniques of observation, just as happened in the passage from the Ptolemaic to the Galilean–Newtonian paradigm, where the default condition of any physical system came to be motion (even in the absence of forces) rather than rest.
Thus, quite unsurprisingly, the thesis of the disappearance of spacetime at high energy leads some interpreters to call for a change in the criteria of scientific verification, in order to adapt the idea of empirical coherence to QG theories without spacetime. In particular, a paradigm shift of this sort is supposed to involve the meaning of “physical observable” and its connection with actual observations and measurements. In all of this, the question of whether or not geometrical categories are fundamental turns out to be crucial.
However, this alleged revolution involves the epistemic foundations of the natural sciences in general, and so it can hardly be resolved within the domain of physics alone. For instance, a naturalistic epistemology—namely, one coherent with the scientific findings regarding human cognitive capabilities, and not based on purely abstract views of knowledge—does not accept that the scientific criteria for the empirical sciences might be designed totally a priori, even if the motivations for the new norms rely on a solid mathematical basis. In the case at stake here, then, any possible solution should consider at least the epistemic role of the cognitive systems pertaining to the representation of quantity-related concepts. Indeed, the position of the observer—the ultimate “data processor” in any experimental setting—must be analyzed, and this analysis is basically absent in physical theories. In the next section, we shall return to questions concerning observability and its conditions from a scientific, non-idealized perspective.
2.1. Physical Theories and Their Philosophical Interpretations
The current theses of the disappearance of spacetime at the Planck scale originate from the “problem(s) of time” in geometrodynamics [
6]. Geometrodynamics was the first attempt to quantize the gravitational interaction starting from the canonical (or Hamiltonian) formulation of the General Theory of Relativity (GTR), interpreted as a background-independent theory [
7]. GTR predictions of changes in spacetime geometry (curvature) in correspondence with different distributions of matter and energy in the universe were indeed interpreted as if the laws governing the dynamics of the degrees of freedom of the gravitational field were independent of any fixed background metric.
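For reference, this correspondence between geometry and the matter–energy content is what the GTR field equations express (recalled here only as the standard textbook formulation, not as a result specific to the works cited):

G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},

where the left-hand side is built solely out of the metric g_{\mu\nu} and its curvature, the right-hand side encodes the distribution of matter and energy, and no fixed background metric appears anywhere in the equation.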
This interpretation, applied to a deterministic view of the theory, led to difficulties in treating the temporal parameter. Some of these difficulties depended on the attempts at quantization [
8], others were natural consequences of the Hamiltonian formalization of GTR itself [
9,
10,
11]. In short, in the Hamiltonian formulation, general relativity is taken to describe the dynamics of the spacetime manifold, which is split into a foliation of three-dimensional space-like hypersurfaces parametrized by an arbitrary coordinate-time. Thus, first, coordinate-time may—but does not need to—have the meaning of a physical time, like clock-time, which, in turn, can become a dynamical variable in the phase space of the theory [
12]. Second, in order to preserve determinism, constraint equations have to be added, and—even in the case of the Hamiltonian formulation of GTR—the equation governing the motion (the null Hamiltonian) is also supposed to be a constraint. This leads to the so-called “frozen dynamics”, in which no physically relevant quantity can change, for the only admissible observables are “constants of the motion”, namely those that commute with the Hamiltonian constraint [
8,
10,
13]. Finally, in the specific quantum context, the equation standing for the GTR Hamiltonian, which in geometrodynamics is the Wheeler–DeWitt equation, is taken to be timeless, lacking any clear time parameter [
14,
15]. This would support the conclusion that there is no physically relevant change in the QG universe.
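Schematically (a standard textbook rendering of the canonical formalism, added here only for orientation), the chain of claims runs from the classical Hamiltonian constraint to the quantum Wheeler–DeWitt equation:

H \approx 0, \qquad \{O, H\} \approx 0 \ \ \text{for every admissible observable } O,

\hat{H}\,\Psi[h_{ij}] = 0 \quad \text{(Wheeler–DeWitt)}, \qquad \text{in contrast with} \quad i\hbar\,\partial_t \psi = \hat{H}\psi \quad \text{(Schrödinger)}.

The first line encodes the “frozen dynamics”: admissible observables commute with the constraint and are therefore constants of the motion. The second line shows why the quantum equation is said to be timeless: unlike the Schrödinger equation, it contains no external time parameter.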
More recently, interpreters of loop quantum gravity (LQG)—a new form of the canonical approach—have claimed that the whole relativistic spacetime disappears in the high-energy limit. Indeed, time elimination would be guaranteed by the fact that the yet-to-come LQG Hamiltonian should have the same features as the Wheeler–DeWitt equation [
7,
16]. Additionally, the LQG dynamics of the basic building blocks (the spin networks, whose states represent the basic magnitudes underlying or constituting the gravitational field) describes a physical framework lacking other basic geometrical features, such as continuity and locality. This could mean that no spatial framework remains [
3,
16,
17].
Moreover, some have found traces of the disappearance of spacetime in other quantum gravity approaches [
5,
7,
18]. For instance, even String Theory, which is generally interpreted as background-dependent and follows a totally different approach from the canonical one, is taken to manifest an emergence of relativistic spacetime. In a development of the theory, known as the anti-de Sitter (AdS)/conformal field theory (CFT) conjecture, a string theory that includes gravitation together with an (asymptotically anti-de Sitter) background spacetime is equivalent to a theory without gravitation (a conformal field theory) defined on a spacetime with fewer dimensions. This correspondence should exemplify not only the so-called “holographic principle” (i.e., the idea of representing the physics within any closed surface by degrees of freedom on the surface itself), but also the idea of the emergence of spacetime and its geometry: the spacetime in which gravity’s degrees of freedom are merely “encoded” differs in the number of dimensions from the spacetime of the theory with gravitation [
7] (p. 675).
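In its best-studied instance, the conjecture can be stated schematically (a textbook-style summary, not a formulation taken from the cited works) as an equality of partition functions between type IIB string theory on AdS_5 × S^5 and N = 4 super-Yang–Mills theory living on the four-dimensional conformal boundary:

Z_{\text{string}}\big[\mathrm{AdS}_5 \times S^5\big] \;=\; Z_{\mathcal{N}=4\ \text{SYM}}\big[\partial\,\mathrm{AdS}_5\big],

so that a gravitational theory in five large dimensions and a non-gravitational theory in four dimensions are taken to describe the same physics.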
2.2. Operational Reaction
There are different examples extracted from other research programs that would support the idea of spacetime disappearance at the QG level and its emergence at low energies [
5,
18]. Thus, the thesis of the disappearance of spacetime—even though historically connected to the “frozen dynamics thesis” of geometrodynamics interpreters and the even older eliminativist claims about time—is a collection of quite different propositions. The milder ones regard the emergence of relativistic spacetime (or a spacetime that can be described by relativistic models) from frameworks that are different but still spatiotemporal. The more extreme ones—and the above-mentioned interpretations of LQG are of this kind—argue for a fundamental QG level of physical reality that has no spatiotemporal (geometrical) framework.
This stronger view may receive the same criticisms as those formulated against the frozen dynamics thesis [
8,
19,
20,
21]. However, in many cases, such attacks consist of denying some of the premises of the frozen dynamics arguments—both “philosophical” (e.g., background independence, strict determinism, and the strong general covariance in Earman’s sense) and physical (e.g., the identification of one of the constraints as the Hamiltonian).
The claim for the epistemic primacy of geometry by Hagar and Hemmo [
4] conceals a quite different series of criticisms. The need for some kind of spatiotemporal framework even at the QG level—and, consequently, the untenability of the disappearance of spacetime thesis in its strong version—would depend on a seemingly trivial, but not so obvious, view of the foundations and constitution of physics as a science.
The basic presupposition of Hagar and Hemmo’s argument is the idea that physics is not made up of dynamical theories alone, but also of experiments through which models must be tested. Additionally, experimental physics requires measurements in order to perform tests. Hagar and Hemmo’s claim is that geometrical measurements—carried out by the use of some kind of rod and clock—are primitive:
…we believe that all measurements, even measurements of temperature, intensity, or what have you, are ultimately position measurements, and involve resolving distances, or wavelengths, using “measuring rods”.
Moreover, Hagar and Hemmo argue that the primacy of geometrical measurements also means that no one can derive a “theory” of measurements from the dynamics alone, for geometrical notions such as length or duration are not reducible to algebraic concepts. So, they conclude that the geometrical side and the dynamical (purely algebraic) side of a theory must be on a par. The attribution of an algebraic character to LQG’s basic reality can be traced to Huggett and Wüthrich [5] (p. 282). They discuss the analogous case of non-commutative geometry and seem to support the idea that an “algebraic ontology” is the ultimate result that QG theories can reach.
According to Hagar and Hemmo, in other words, the methodological rules of verification and the measurement procedures are not derivable from any dynamical theory, even if the notions used in the theorization of measurement procedures are somehow theory-laden. In addition, given that any measurement is reducible to a geometric measurement, there is a matter of fact that cannot be denied—namely, that we need to presuppose some kind of geometrical notion in any act of measurement.
In this sense, a spatiotemporal framework is conceptually, but also practically, needed: it would be a condition of possibility for any measurement, and hence a condition of verification for theories concerning any physical domain. This should mean that, even assuming that only a small part of what is supposed to exist in a physical domain (according to some theory) is actually observable and measurable, something observable must exist (to keep the domain empirical), and this must have geometrical features—or features that can be translated into geometrical terms. Such features are what allow the observables to be effectively measured. So, if something is ever to be proved at the QG level (for instance, the discrete nature of the spin network states predicted by LQG), it will be because some physical entities will be used as a rod or a clock. This is the light in which one should read the fact that even LQG theorists give a special role to “geometrical operators”. Indeed, they need to maintain a linkage to the experimental settings [
4]. As Hagar [
22] (p. 184) states, “also in LQG a primitive notion of length is presupposed and not derived, i.e., a certain theoretical magnitude is picked out as designating it: instead of a small rod
… we have here a very small world–sheet associated with area.” He continues affirming that the association between the dynamical (theoretical) magnitude and the geometrical notion “can be justified only if one can make contact by experiments between the notion of length introduced and the results of experiments given in geometrical terms of ‘segments’, e.g., distance, velocity, etc., as measured by means of rods and clocks” [
22] (pp. 184–185). The conclusion is that even in LQG “there is no question of deriving the primitive geometrical notions from the dynamics, only of making the dynamics consistent with what we can measure, i.e., with primitive geometrical notions” [
22] (p. 185). In a similar way Hagar [
22] (pp. 178–180) also reinterprets the above-mentioned AdS/CFT conjecture.
Thus, the final claim of Hagar and Hemmo is: if one interprets QG theories as theories without spacetime, one would be at odds with the epistemic bases of experimental physics—namely, the primacy of geometrical observations and measurements. Even the assumption that everything can be resolved by the emergence of spacetime at low energies is thought to be merely elusive. Indeed, if the metric of the relativistic spacetime is an emergent phenomenon, how can one verify (in a very far future) that there are different metrics or no metric at all in the QG domain, when one assumes that there is no possibility of making geometrical measurements in that domain?
Moreover, trying to interpret John Bell’s point about local beables (advocated by Maudlin [
23] and Huggett and Wüthrich [
5]) as if everything played out at the macroscopic scale cannot help the emergentist. Indeed, saying that we describe both experimental conditions and the outcomes of experiments in terms of the positions and motions of macroscopic bodies at a macroscopic scale does not mean that observational procedures are neutral about the microscopic scales. The experimental setting and, in particular, the measuring instruments are designed to produce effects at the microscopic level that are then made visible by the measuring apparatus at the macroscopic level, scientists’ eyes and cognitive apparatus being at that level. The point, according to Hagar and Hemmo, is that the basic result of the interaction between measuring apparatus and microscopic system ultimately has to be a measurement of position, distance, area, volume, or duration. An empirically testable theory does not merely predict changes of geometrical magnitudes at the macroscopic level (namely, positions and motions in the instruments that are directly seen by the human observers), but predicts these changes as direct effects of the microscopic phenomena at the macroscopic level. In this sense, empirical testability is not neutral about whether there is a spacetime structure underlying the experimental set-up.
So, in order to rebut Hagar’s and Hemmo’s argument, one has to claim that the nature of the QG level is not and never will be a matter of experience, in the sense that it is something non-empirical in principle. There is indeed an anti-Kantian argument that concludes in this sense in Butterfield and Isham [
15] (pp. 132–133). However, if one accepts this position, one is also supposed to explain why a physicist should admit that there are entire domains of physics (and not simply a set of objects within a single domain) that are not empirical. In other words, one should explain why we are allowed to abandon empirical testability for entire domains of physics and still consider them as physical. Otherwise, the objection against Hagar’s and Hemmo’s argument has to be directed against their claim that measurements are basically geometrical measurements with rods and clocks, or against the other basic claim that one cannot derive a theorization concerning geometric measurement from dynamical (purely numerical or algebraic) considerations alone, without any reference to geometry. Now, denying the latter claim could be even more difficult if one takes the actual position of the human observers into account, as we shall see in the final section.
2.3. Methodological Consequences of Spacetime Interpretations
The two theses about spacetime we have discussed above are clearly opposed only in the case that we take the disappearance thesis as an eliminativist claim, i.e., that there is no geometrical framework at the Planck scale [
17]. We have seen that, according to Huggett and Wüthrich [
5], there are various degrees of disappearance in the different approaches to QG, but that all of them rely on a framework that is quite different from the relativistic spacetime. Nevertheless, only a few can be considered theories that really reject any form of spacetime. Moreover, Earman states that even LQG still postulates a sort of spacetime, even though it is clearly not the GTR spacetime [
11] (p. 21).
The two theses—in their strong version, at least—lead to opposed ontological views concerning not only spacetime in QG, but also other more general issues, such as how to define a physical entity or an observable. The supporters of the disappearance of spacetime follow a Leibnizian approach, to use Earman’s words (Earman deems his position as Leibnizian in the replies to Maudlin [
20] (p. 23)), or more properly, a Pythagorean view on reality: one can directly derive the meaning of physical reality from the mathematical theory—with the help of some further “reasonable”
a priori criteria. More precisely, Earman’s criteria are essentially two: (1) an observable must be deterministically definable [
24]; (2) it must be a gauge-invariant or, almost equivalently, a diffeomorphism-invariant quantity [
10].
The operationalist view assumed by the vindicators of spacetime fundamentality prompts one to define physical reality with regard to its measurability. Indeed, “operationalism” means that any concept is “nothing more than a set of operations; the concept is synonymous with the corresponding set of operations” [
25] (p. 5). In the current case, the relevant concepts are physical, while the operations are essentially measurements. Hagar considers operationalism as “the thesis that sees measurable quantities as primary” [
22] (p. 46). In such a view, what qualifies an object as physical is the possibility of observing and measuring it, not only or not primarily the
a priori mathematical features that are attributed by a theory. Rather, the theoretical considerations and predictions regarding the object are physically relevant only if they match or are compatible with data.
The latter point—the compatibility between a priori (theoretical) considerations and predictions and a posteriori (observational) data—is the key to understanding how the two theses produce an opposition that goes beyond the ontological field and a purely philosophical debate.
The supporters of the disappearance thesis are not willing to give up empirical coherence for theories without spacetime. They then combine the charitable suggestion by Healey [
21], of explaining how the spatiotemporal world in which events and experiences occur is derivable from QG theories without spacetime, with the idea that the relation of empirical salience with spatiotemporality is theory-laden, and consequently can change in the case of a paradigm shift [
5]. As Dawid [
2] suggests, QG physics somehow induces such a paradigm shift and the concomitant change in the scientific verification criteria.
The last ingredient of the proposal is the conception of emergence as compatible with reduction. In particular, Butterfield and Isham [
15] see the connection between GTR and the yet-to-come QG theory as a reduction by approximation or limiting procedures. This idea is directly taken up by Huggett and Wüthrich [
5] and Wüthrich [
16]. Wüthrich, in particular, elaborates a possible explanation of how LQG can be considered empirically coherent. In a nutshell, his proposal consists of two steps: (1) to find models of classical systems that are empirically verified, such as GTR’s; and then (2) to establish, through limiting and approximation procedures, a connection by derivation from LQG to those GTR models, so that the latter are ultimately derived from LQG.
The problem with this proposal, from a more radical empiricist viewpoint (such as Hagar’s and Hemmo’s), is that all of this appears as a mathematical consistency proof: given that x is empirically coherent, we prove that y is mathematically connectible with x; thus, y is empirically coherent. This is certainly a necessary test for a new physical theory (one that is hard to verify directly), but it cannot be sufficient. The first concern about this test is that it assumes that QG theories cannot contradict GTR or Quantum Field Theory (QFT) models, and are not able to say anything new about those domains [
22] (Chapter 8). The second concern is that, in the most charitable interpretation, the test is only provisional, because it limits the verification of QG predictions to what they say about the domains of GTR and QFT. Certainly, this will be our horizon for a very long time, but the aim of new theories is to enlarge this horizon.
In a less charitable (or truly malicious) interpretation, however, the test—linked with the idea of a paradigm shift—seems to do away with the need for new, specific tests of the predictions that QG theories make about their proper domain. Of course, according to the defenders of spacetime fundamentality, the only real proposal regarding QG verifiability is to figure out new experimental settings and, eventually, to search or wait for new measuring instruments. Plausibly, those instruments will be designed to determine primarily geometrical magnitudes.
3. Measurability and Knowledge
The main lesson from the debate about the fate of spacetime in QG is that detecting measurable quantities in that domain is the primary aim of the experimenters. Indeed, it is quite indisputable that if one talks of phenomena, measurability is a crucial feature to identify physically relevant magnitudes. Of course, if we go beyond phenomena, things can change, but the whole question is then a problem of metaphysics.
Focusing on the phenomenological and methodological side of the matter, we have outlined the proposal by Hagar and Hemmo [
4] that measurability involves the presence of geometrical magnitudes, because these are the primary measurables. They justify this statement with the fact that all measuring instruments we know of are (in their basic mechanism) position measurers, or rods/clocks. Moreover, such a primacy of geometrical magnitudes at the empirical level would not be something that can be predicted and then derived from the dynamical theories (not even from future QG theories), but is merely presupposed by them to make contact with experiments.
The point that geometrical magnitudes are the primitive measurables (Primacy Thesis) may be theoretically supported, but basically relies on a consideration related to the history of measuring instruments, and so is an
a posteriori claim. Assuming the Primacy Thesis, the basic element of a conceptual test for the empirical coherence of QG models is that there must be a way to convert some of the dynamical features coming from the theory into geometrical terms, in order to obtain fundamental magnitudes that can be measured. As Hagar supposes, the consistency proofs based on reductionist approaches can be useful when one tries to identify the dynamical features that can be translated into geometrical terms [
22] (p. 238).
Accordingly, the related thesis that geometrical magnitudes are underivable—because they maintain a sort of primacy that goes beyond the scope of a dynamical theory—can also be supported by a posteriori considerations. We explain this theoretical possibility in the following section.
3.1. Empirical Realism and Evolutionism
Epistemology is the branch of philosophy that aspires to answer the question: what can be regarded as scientific knowledge? This is a fundamental question for human existence and, of course, for the aim of grasping empirical reality (or, less ambitiously, of anticipating the future behavior of well-defined parts of the world). In line with a naturalized epistemology [
26], we propose that answering this question requires understanding the cognitive limits of human knowledge systems. In particular, we focus on the systems of numerical representation, which are tightly related to our capacities to abstract basic notions such as number, space, and time from experience. In our view, an inquiry into the state of the art of cognitive research that explores these topics should provide a step toward an
a posteriori approach to the definition of the limits of human knowledge, and more specifically to the evaluation of the empirical adequacy of a scientific theory.
Since Aristotelian psychology, it has been hypothesized that human (and animal) minds possess from birth a perceptive/representational system that underlies any generalization of the events occurring in the environment. Such a system would also be capable of governing individual behavioral dispositions. Another old, though less ancient, idea is that such an innate intuitive system leads to a “metric of similarity” [
27]. Kant famously argued that space, time, and number are the
a priori intuitions that structure a knowing subject’s experience and constitute the basic notions of the laws of mathematics and, consequently, physics [
28].
Some of the current hypotheses on numerical cognition explicitly refer to a “Kantian” research program; namely, to the building of a common theoretical framework for the notions of time, space, and number in cognitive science. This stance must not be confused with “transcendental idealism”. Rather, it consists of a sort of non-metaphysical or “natural” realism, according to which human and non-human animals internalize “basic codes and operations that are isomorphic to the physical and arithmetic laws that govern the interaction of objects in the external world” [
29] (p. 517).
According to this framework, the acquisition of cognitive skills is generally interpreted as an evolutionary process in the Darwinian sense. Here, a common mechanism underlying any magnitude representation is interpreted in its adaptive function of guiding psycho-motor behavior. In fact, motor-related activities need the use of spatiotemporal information about the “external world”; i.e., they need to be based on correct measures of distances, times, etc.
In this context, the original Kantian idea of “a priori intuitions” is translated into a series of cognitive capacities, which rely on underlying genetic and neural mechanisms, governing both spatiotemporal, quantitative (conscious) representations and vital activities—spatial navigation, temporal orienting, and numerical computations included.
4. Cognitive Correlations between Numbers and Space
An important line of research in the field of numerical cognition focuses on the link between numbers and space. Empirical data suggest that spatial representation and visuo-spatial skills are involved in various related cognitive activities, such as magnitude representation, magnitude comparison, simple mental arithmetic, and multidigit calculations.
4.1. Mental Number Line
A very influential explanation of space/number interaction, which focuses on the concept of a mental number line, is based on the fact that numerical and spatial representations are mediated by common cortical circuits [
30,
31].
A set of robust empirical data, mostly connected to the study of the spatial–numerical association of response codes (SNARC) effect, suggests that numerical representations trigger spatial ones, with smaller numbers connected to the left side, and larger numbers to the right side, of the visual field. Verifications of the SNARC effect are based on simple numerical tasks such as parity judgment [
30] and magnitude comparison [
32]. In parity judgment tasks, where subjects are asked to classify a number as odd or even by pressing either a right or a left-hand positioned key, left-hand responses are faster for smaller numbers and right-hand responses are faster for larger numbers. Magnitude comparison consists of judging whether a number is smaller or larger than another number. Here, when the left-hand and right-hand keys stand for, respectively, “smaller than” and “larger than”, responses are faster than with the inverse response–key configuration.
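As an illustration of how such data are typically quantified (a minimal sketch with simulated numbers; the reaction times below are hypothetical and not taken from the cited studies), the SNARC effect is commonly estimated by regressing the right-minus-left reaction-time difference on number magnitude, a negative slope indicating the left-to-right association:

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Regress the right-minus-left reaction-time difference (dRT, in ms)
    on number magnitude; a negative slope indicates the SNARC pattern
    (small numbers favor the left hand, large numbers the right hand)."""
    drt = np.asarray(rt_right) - np.asarray(rt_left)   # dRT per digit
    slope, intercept = np.polyfit(digits, drt, deg=1)  # least-squares fit
    return slope, intercept

# Hypothetical mean reaction times (ms) per digit in a parity-judgment task.
digits   = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left  = np.array([520, 525, 530, 538, 548, 555, 561, 570])  # left-hand key
rt_right = np.array([545, 543, 538, 536, 528, 522, 517, 512])  # right-hand key

slope, _ = snarc_slope(digits, rt_left, rt_right)
print(f"SNARC slope: {slope:.1f} ms per unit of magnitude")  # negative slope
```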
A similar finding, the STEARC effect (spatial-temporal association of response codes), connects the spatial coding of quantity with temporal magnitudes discrimination. In this case, participants in the experiment have to establish—by pressing either a right or a left-hand positioned key—whether the last of a sequence of periodic auditory clicks starts before or after a fixed time interval. Again, the results of this task are influenced by the apparently irrelevant spatial position of the response key: early onset times elicit faster left-hand responses, while late onsets elicit faster responses with the right hand [
33].
These cognitive effects led to the hypothesis of a mental number line, where numbers are ordered from smaller to larger according to a left-to-right or right-to-left orientation, depending on the writing direction [
30,
34,
35]. This hypothesis represents one of the main results in the study of space–number interactions and shows how deeply related different modes of representations are in human cognition.
4.2. The “Number Module”
Above, we have seen a very robust cognitive phenomenon that connects spatial and numerical representations. But how can this phenomenon be causally explained? Some researchers propose that such an explanation may arise from gaining a better knowledge of the neural basis of numerical cognition. A common starting point is that human knowledge does not develop from a tabula rasa. Instead, the human mind possesses a set of hardwired capacities that have evolved in order to support basic processing abilities, among which number-related skills have a distinctive role.
An influential model is Dehaene’s triple-code model of number processing [
36]. According to Dehaene, number representation involves different neural substrates in which numerical information is encoded visually (as strings of Arabic digits), verbally (as number words, sets of number facts, etc.), and analogically (as magnitude). The latter kind of numerical representation is mediated by a neurocognitive system called the “number module”, or approximate number system (ANS), which is shared by different animal species and enables the representation of approximate quantities [
37,
38,
39].
The typical problem in which the ANS comes into play is a comparison between different sets of objects to decide which one is largest. Some researchers suppose that the ANS is the only preverbal representational system needed for the development of basic numerical concepts [
39]. Spelke [
40] combines the work of the ANS with information gained by another “core system of numbers” [
39]; i.e., the object tracking system (OTS). This is a different mechanism that relies not on numerical concepts as such, but more specifically on the representation of objects as distinct individuals. However, this mechanism is strictly related to numerical reasoning, as it provides the capacity to recognize at a glance the number of objects in sets of up to four items (a well-documented cognitive skill named “subitizing”). According to Spelke, basic numerical concepts are the result of the parallel work of both core systems of numbers, mediated by the use of natural language.
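A rough computational sketch may help to fix ideas about the two core systems (this is our own illustration under standard assumptions, with a hypothetical Weber fraction, not code drawn from the cited works): the ANS behaves as a noisy, ratio-dependent comparator, while the OTS yields exact numerosities only within the subitizing range of about four items.

```python
import numpy as np

rng = np.random.default_rng(0)
WEBER_FRACTION = 0.15   # assumed noise level of the ANS (scalar variability)
OTS_LIMIT = 4           # subitizing range covered by the object tracking system

def ans_estimate(n):
    """Approximate number system: a noisy magnitude whose standard deviation
    grows in proportion to n (Weber's law / scalar variability)."""
    return rng.normal(loc=n, scale=WEBER_FRACTION * n)

def ans_compare(n1, n2):
    """Decide which set looks larger; accuracy depends on the ratio n1/n2."""
    return "first" if ans_estimate(n1) > ans_estimate(n2) else "second"

def ots_count(n):
    """Object tracking system: exact at a glance, but only up to ~4 items."""
    return n if n <= OTS_LIMIT else None   # None: beyond the subitizing range

# An easy ratio (8 vs. 16) is discriminated almost perfectly; a hard one (15 vs. 16) is not.
for a, b in [(8, 16), (15, 16)]:
    hits = sum(ans_compare(a, b) == "second" for _ in range(1000)) / 1000
    print(f"{a} vs {b}: larger set chosen {hits:.0%} of the time")
print("OTS:", ots_count(3), ots_count(7))   # 3 is subitized, 7 is not
```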
4.3. The “Accumulator” Model
Leslie et al. [
41] propose that the representation of exact quantities does not involve the OTS, nor is it based on language-related learned abilities. According to their hypothesis, the ANS works as an “accumulator” that is able to represent quantities analogically, as if they were different levels of water in a beaker [
42]. However, the accumulator alone is unable to explain basic number-related cognitive skills such as the use of exact values and the corresponding symbols in arithmetic. To this end, at least two innate notions are needed: the notion of “exact equality” and that of “next number”:
Exact Equality
If a counted set of objects turns out to have a certain cardinal value, then this value has to be the same each time that set is counted. The ANS lacks this feature, because it represents magnitudes as noisy values (reals), and with that kind of representation alone, two exactly equal values can hardly occur.
Next Number
If supported by the notion of “exactness”, the ANS can underlie a mechanism for accumulating continuous magnitudes with the further notion of “ordering”. However, such a mechanism would still be unable to determine the idea of “next number” (i.e., the successor of any given integer) if it were not already present as an innate notion.
The innate notions described above are the basis for the construction of a model that allows the accumulator to represent any kind of magnitude (continuous and discrete) and also to explain those arithmetic skills that need the use of symbols. The basic idea is that numerical symbols used in technical and natural languages are linked to preverbal symbols hardwired in the brain. In particular, there should be an innate mental symbol corresponding to the unit (ONE).
The concepts of ONE and next number provide the basic elements to describe the cognitive procedure that generates all mental symbols standing for exact quantities. This procedure is based on the successor function (S(n) = n + 1), through which one can recursively construct the whole series of naturals. Once we are provided with a generative rule for mental number symbols, their ordered series may be mapped to the corresponding analog magnitudes encoded in the accumulator.
According to the authors, this is equivalent to drawing a sort of “measuring grid” onto the accumulator itself. When provided with such a grid, the cognitive architecture of the accumulator model allows the representation of very different kinds of numbers—approximate and exact ones—through the same neurocognitive system (the ANS), which originally evolved to process only approximate numbers. Another interesting consequence of this cognitive architecture is that it may be used, as seen before, to represent any kind of magnitude (numerical, spatial, and temporal), thus allowing for a thorough explanation of the link between space, numbers, and time that is empirically verified in experiments on the SNARC and STEARC effects.
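The following sketch (again our own illustration, with hypothetical parameters) renders this architecture in a few lines: an innate unit ONE is iterated by the successor step to lay a discrete measuring grid over the noisy analog accumulator, which can then deliver exact readings of any represented magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)
ONE = 1.0               # assumed innate unit symbol, used as the yardstick
WEBER_FRACTION = 0.15   # scalar noise of the analog accumulator (ANS)

def accumulate(magnitude):
    """Analog accumulator: a noisy level, like water poured into a beaker."""
    return rng.normal(loc=magnitude, scale=WEBER_FRACTION * magnitude)

def build_grid(levels):
    """Apply the successor step (+ ONE) repeatedly to lay a measuring grid
    of exact symbolic values 1, 2, 3, ... over the accumulator's range."""
    grid, value = [], 0.0
    for _ in range(levels):
        value += ONE            # next number
        grid.append(value)
    return grid

def read_out(noisy_level, grid):
    """Exact readout: resolve the accumulated level against the grid by
    picking the nearest gridline, i.e., counting how many ONEs fit into it."""
    return min(grid, key=lambda g: abs(g - noisy_level))

grid = build_grid(20)
for true_n in (3, 7, 15):
    level = accumulate(true_n)
    print(f"true {true_n:2d} -> accumulator {level:5.2f} -> symbol {read_out(level, grid):.0f}")
```

In this picture, assigning an exact number symbol and measuring a magnitude are one and the same operation, which is the point developed in the next subsections.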
4.4. A Theory of Magnitude (ATOM)
Space–number correlations such as those revealed by the SNARC effect suggest some kind of interaction between the neural circuitry involved in spatial and numerical processing. In fact, functional magnetic resonance imaging (fMRI) studies reveal that non-symbolic number processing activates neurons of a bilateral parietal cortical region—the intra-parietal sulci (IPS)—which also play a functional role in visuo-spatial and manual tasks such as grasping and pointing [
31,
43,
44,
45].
In 2003, following Gallistel and Gelman [
42], V. Walsh made an influential proposal called A Theory Of Magnitude (ATOM) in order to “bring together
… disparate literatures on time, space and number, and to show similarities between these three domains that are indicative of common processing mechanisms, rooted in our need for information about the spatial and temporal structure of the external world” [
46] (p. 483).
According to Walsh, the parietal cortex represents environmental information about different kinds of magnitudes—number, space, time, size, speed, etc.—whose interactions are supposed to have a specific—and, to date, neglected—meaning for the guidance of action. In this view, such cognitive interactions are due to overlapping sets of parietal neurons that share a common metric for magnitude representation, and this fact lies at the basis of the SNARC and STEARC effects. ATOM predicts that these cognitive effects are instances of a broader SQUARC (spatial quantity association of response codes) effect, “in which any spatially or action-coded magnitude will yield a relationship between magnitude and space” [
46] (p. 487). In some cases, this prediction has indeed been verified [
47,
48].
A different approach to the explanation of the cognitive link between space and numbers is given by the Metaphorical Theory of Concepts [
49]. Here the intense use of spatial metaphors when talking about numerical concepts reflects low-level functional features of our neuro-cognitive system. According to Winter, Marghetis and Matlock [
50], the explanations given by the Metaphorical Theory and ATOM are complementary.
4.4.1. Resolving Distances
The main claim of ATOM—namely, that a common machinery in the brain is responsible for representing and processing any kind of magnitude—is coherent with the model proposed by Leslie et al. [
41].
The cognitive architecture consisting of the accumulator and the grid may thereby be considered as the core of the “generalized magnitude system” [
46]; namely, as the common neural structure at the basis of any kind of magnitude representation.
At this point, the significance of the cognitive hypotheses described above for our issue about geometry and measurements should be clear. The accumulator model, supported by the broader schema provided by ATOM, is a promising explanation of how human minds (or rather, brains) exploit their capacity to perceive analog magnitudes to represent both exact (abstract) quantities and physical (concrete) magnitudes. The activity of this cognitive model is well described as a measuring procedure. Indeed, measuring is ultimately the activity of numerically discriminating a magnitude through a regular parameter or yardstick (measurement unit), and this is exactly what the grid does to the noisy magnitudes represented in the accumulator, by using the mental symbol ONE as a yardstick. In this sense, the mental capacity of representing exact magnitudes is at the same time a measuring activity.
The cognitive explanation of how the accumulator model discriminates magnitudes is analogous to a procedure of geometrical measurement. Indeed, in such a frame, the accumulator is the extensive dimension in which the mental symbol ONE—seen as the fundamental interval—moves as a rod to construct the measuring grid. This grid is then used to associate a precise number with the magnitude representation given by the accumulator. The ONE thus pinpoints the extremes of the perceived magnitude and resolves their distance in the accumulator. As Hagar and Hemmo remind us through the Primacy Thesis (PT), resolving distances is basically what geometrical measurements do.
5. A Conceptual Test of Empirical Adequacy
We will now try to develop an epistemic argument that can serve as a conceptual test for examining the empirical content of physical theories or, conversely, the possibility of designing experimental settings for physical models. We presuppose that any physical domain of interest should have an observable basis, which basically consists of a set of effectively measurable quantities. Following the naturalized approach to epistemology, we connect measurability to the human ability to discriminate magnitudes. In this sense, we begin from the analysis of the human possibility of acquiring information from the physical world—on the basis of what we know about the functioning of our minds. Indeed, we assume that human minds cannot have direct access to reality “as such”. On the contrary, we are forced to deal with our capacity to process and interpret the external signals caught by our senses. For this reason, our theories about the natural world should be tuned to (and limited by) how the human mind interprets this world. Therefore, a physical theory that dispenses with geometry should consider whether this also means the abandonment of any possible cognitive access to the physical world from a human perspective.
If one follows an empiricist stance and looks at the opposition between the disappearance thesis (DT) and the primacy thesis (PT), the question about the existence of spacetime at the fundamental level of physical reality (and the role of geometry in QG physics) is ultimately decided by the answer to the following question: are geometrical measurements the prototypical type of measurements?
According to our epistemic argument, this question must receive a positive answer—at least from the point of view of cognitive science. Our argument proceeds as follows:
General assumptions:
- i)
humans cannot have direct access to the external (i.e., non-mental) reality;
- ii)
the development of mental skills leads, for adaptive purposes, to optimizing the possibility of acquiring useful information, i.e., information that approximately reflects actual features of the external world.
Philosophical premise:
- iii)
we must acknowledge that no natural science can be based only on its mathematical consistency, but that our theories about the natural world should be tuned to (and limited by) the way human minds process signals from the external world.
Empirical premises:
- iv)
the review of recent cognitive science theories about magnitude representation skills strongly suggests a strict epistemic correlation between magnitude discrimination and measurement processes of extensive magnitudes;
- v)
measurement processes, including those carried out in physics, require some kind of geometrical representation of magnitudes.
Conclusion:
- ⇒
physical models need to translate at least some of the dynamical features described by their corresponding theories into geometrical terms.
This argument is only an attempt to highlight the importance of the presence—in physical theories and models—of notions linking formal structures with possible observational and experimental setups. Thus, the only aim of our theoretical argument is to be a conceptual test. This means that we do not intend to determine here how it should be specifically employed in models and theories.
In any case, suggestions about how to pass from the dynamical vocabulary of some QG theories to a geometrical one can be found in Hagar [
22] (Chapters 7–8). A paradigmatic example, according to the author, regards the causal set approach. Here, the (finite) volume of a region of the emerging relativistic spacetime would be measured by counting the number of “point events” (the building blocks of the theory) contained in that spatiotemporal region. The Riemannian metric of spacetime requires that the points be counted uniformly. Thus, Hagar argues that this feature of the dynamics must be interpreted in geometrical terms, and so one should design a model of the theory as if the already macroscopically tested local Lorentz invariance of spacetime established that “the building blocks of the theory are uniformly distributed in spacetime” [
22] (p. 238).
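Schematically (this is the standard way of stating the number–volume correspondence in the causal set literature, not a formula quoted from Hagar), uniform counting can be expressed by a Poisson sprinkling of the building blocks at a fixed density ρ, so that the number of point events in a region tracks its spacetime volume:

P(N \ \text{events in a region of volume } V) = \frac{(\rho V)^{N}}{N!}\, e^{-\rho V}, \qquad \langle N \rangle = \rho V, \qquad \text{hence} \quad V \approx N/\rho.

Counting discrete building blocks thereby becomes, in effect, a geometrical measurement of volume, which is exactly the kind of translation from dynamical to geometrical vocabulary that the conceptual test asks for.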
6. Final Remarks
In this paper, we reviewed different interpretations of QG theories. According to some of them, the presence of a non-empirical physical reality at the Planck scale is the deep meaning of QG physics. We then proposed a conceptual test to assess the possible empirical meaning of a physical theory. Clearly, if one accepts that the basic level of reality could also be non-empirical, then one would consider our argument implausible or irrelevant.
We also analyzed other QG interpretations that seem to accept that physics must still be based on some kind of empirical verification of its theories. Nevertheless, according to some of them, verifications might be independent of any consideration regarding the cognitive limits of human knowledge. Indeed, they propose that the verification of a fundamental theory may correspond to a consistency proof, namely the demonstration—by means of apt models—that QG theories are ultimately consistent with empirically verifiable (alleged) sub-theories; i.e., reduced theories such as GTR and QFT. Such a position entails that fundamental physics should be based on special epistemological criteria, totally different from those adopted by any other natural science, including experimental physics (e.g., current astrophysics, particle physics, etc.).
Clearly, our naturalized approach to epistemology maintains exactly the opposite. Indeed, the deeper meaning of our premises (see (i), (ii), and (iii) above) is that one cannot completely disregard the potential influences that our mental ways of representing the world have on the object of the natural sciences (i.e., the physical world). A denial of these assumptions may be perfectly coherent under a Platonic interpretation of physical theories, and of QG theories in particular, but only at the cost of giving up the empirical foundation of our knowledge of the physical world, which is grounded on observations, measurements, and experiments.