*Review* **Quantum Gravity on the Computer: Impressions of a Workshop**

**Lisa Glaser 1,\* and Sebastian Steinhaus 2,\***


Received: 29 November 2018; Accepted: 10 January 2019; Published: 18 January 2019

**Abstract:** Computer simulations allow us to explore non-perturbative phenomena in physics. This has the potential to help us understand quantum gravity. Finding a theory of quantum gravity is a hard problem, but, in the last several decades, many promising and intriguing approaches that utilize or might benefit from using numerical methods have been developed. These approaches are based on very different ideas and assumptions, yet they face the common challenge to derive predictions and compare them to data. In March 2018, we held a workshop, "Quantum gravity on the computer", at the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm, gathering experts in many different approaches to quantum gravity. In this article, we try to encapsulate some of the discussions held and talks given during this workshop and combine them with our own thoughts on why and how numerical approaches will play an important role in pushing quantum gravity forward. The last section of the article is a road map providing an outlook on the field and some intentions and goalposts that were debated in the closing session of the workshop. We hope that it will help to build a strong numerical community reaching beyond single approaches to combine our efforts in the search for quantum gravity.

**Keywords:** quantum gravity; computer simulations; numerical methods

Quantum Gravity is one of the big open questions in theoretical physics. Despite recent successes in particle physics and cosmology, most notably the discovery of the Higgs boson and the direct detection of gravitational waves, we are still lacking a consistent description of physics from smallest to largest scales that reconciles gravity and the quantum nature of matter. Possible signatures and effects of quantum gravity are numerous, from singularities in the early universe and black holes to the size and origin of the cosmological constant. In addition to these fundamental issues, one might hope that future experiments could reveal other traces of quantum gravity. Hence, it is of utmost importance to push the development of quantum gravity approaches to a point where they make reliable predictions, which will allow us to verify or falsify theories.

In the last several decades, many promising non-perturbative approaches to describe space-time at the smallest scales have been developed: (causal) dynamical triangulations [1,2], causal set theory [3,4], group field theory [5,6]/tensor models [7–9], loop quantum gravity [10,11], noncommutative geometry [12], spin foam models [13,14], and others. All of these postulate discrete structures that serve as a truncation of the number of degrees of freedom and allow for well-defined non-perturbative dynamics, akin to lattice gauge theories. Previous research, in which these models are substantially simplified to be computable, has led to impressive results, e.g., the resolution of the Big Bang singularity in loop quantum cosmology as a Big Bounce [15]. However, in order to make predictions for the full theory beyond simplifications and symmetry-reduced models, we have to explore their deep non-perturbative regime. The bottleneck here is the development of numerical techniques that allow us to efficiently extract results from the models, e.g., expectation values of observables and characteristics of different phases of the theory. Encouraging developments have been made in recent years, and the purpose of our workshop was to compare these across different quantum gravity approaches.

Within the last 30 years, computers have revolutionized our lives and the way science is done. While the very first physics computer simulations were 2d Ising models with 8 × 8 sites, the technology and its applications have evolved rapidly: today's high performance simulations can predict the gravitational waves emitted by two colliding black holes or neutron stars [16] and explain the masses of hadrons using lattice QCD [17]. These developments have slowly percolated into the quantum gravity community, and have given rise to mainly computational approaches to the problem, such as (causal) dynamical triangulations. In these approaches, the path integral for dimensions larger than two is too complicated to be tackled analytically, but numerical methods, adapted from QCD and statistical mechanics, show how a ground state with macroscopic features emerges [2].
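
To give a sense of what such an early study involves, here is a minimal sketch of a Metropolis Monte Carlo simulation of the 2d Ising model on an 8 × 8 lattice, the system size mentioned above; the inverse temperature `beta` and the number of sweeps are illustrative choices, not taken from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 8, 0.44, 5_000   # illustrative values; beta is near the critical coupling
spins = rng.choice([-1, 1], size=(L, L))

def neighbour_sum(s, i, j):
    # Sum of the four nearest neighbours with periodic boundary conditions.
    return s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]

for _ in range(sweeps):
    for _ in range(L * L):          # one sweep = L*L attempted single-spin flips
        i, j = rng.integers(0, L, size=2)
        dE = 2 * spins[i, j] * neighbour_sum(spins, i, j)  # energy change of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance rule
            spins[i, j] = -spins[i, j]

print("magnetization per site:", spins.mean())
```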

Other approaches have followed this example: in causal set theory, Monte Carlo simulations are used to explore the space of all possible partial orders [18], which includes all geometries but also highly non-manifold like structures, and more recently to compare the prediction of a fluctuating cosmological constant to cosmological data [19]. Furthermore, in spin foam models, numerical methods are indispensable to study the dynamics of spin foams with many degrees of freedom, e.g., via the means of coarse graining/renormalization [20,21]. Moreover, calculating the fundamental spin foam amplitudes also requires numerical techniques [22].

In the workshop, we brought together experts on these approaches to discuss recent developments in quantum gravity on the computer. During the discussion, two broad clusters of topics emerged; observables that we can measure and how we can reliably measure them, and numerical methods that are efficient for the different approaches.

In this article, we would like to summarize these discussions and distill their main ideas. We hope this will serve as a record of this workshop and a reference point for the current development of the field.

In the first section of this article, we begin with a brief introduction to the various approaches discussed during the workshop. The rest of the section is split into three subsections, where we discuss subtleties in defining the theories on the computer in Section 1.2, interesting observables in Section 1.3 and numerical methods in quantum gravity in Section 1.4. In Section 2, we summarize the road map discussion of the last day and try to map goalposts and aspirations for the community. A list of participants, slides and posters can be found on the website [23].

### **1. Approaches, Observables and Numerical Methods**

### *1.1. Introduction to Various Approaches to Quantum Gravity*

Throughout this article, we use different theories to exemplify the issues we want to discuss; as a reminder, let us give a quick overview of frequently mentioned theories and their salient aspects. In (causal) dynamical triangulations, the path integral over geometries is regularized by introducing a triangulation; this has been explored analytically for two-dimensional geometries and through simulations in two, three and four dimensions [2]. The sum over geometries is implemented by summing over all possible triangulations, where the size of the simplices is kept fixed. In dynamical triangulations, sometimes also called Euclidean dynamical triangulations to distinguish it from causal dynamical triangulations, these simplices are equilateral, with all edges having the same length. In causal dynamical triangulations (CDT), the time-like edges of the simplices have a different edge length, and a time-foliation of the geometries is enforced. This leads to a very different ensemble of geometries in the path integral, and in particular suppresses changes in topology, which lead to degenerate behavior in Euclidean triangulations. In both approaches, the simulations use a simplicial version of the Regge action [24] to weight the geometries.
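
For concreteness, the Regge action referred to above is a sum over the hinges of the triangulation (the codimension-two simplices); schematically, and suppressing the cosmological constant term,

$$
S_R = \frac{1}{8\pi G} \sum_{h} A_h\, \delta_h\,, \qquad \delta_h = 2\pi - \sum_{\sigma \supset h} \theta_h^{(\sigma)}\,,
$$

where $A_h$ is the volume of the hinge $h$ (an edge length in three dimensions, a triangle area in four) and the deficit angle $\delta_h$ collects the dihedral angles $\theta_h^{(\sigma)}$ of all simplices $\sigma$ meeting at $h$. In dynamical triangulations, where all building blocks are identical, this reduces to a simple function of global counting variables, such as the numbers of simplices and of hinges.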

In two dimensions, dynamical triangulations can be solved using so-called matrix models. These define probability distributions for *N* × *N* matrix-valued random variables—hence also called random matrices—that are invariant under conjugation by the unitary group. The action of these models then consists of matrix invariants, e.g., the trace of a product of three matrices. Such a theory can be expanded in a sum over Feynman (ribbon) diagrams, where each diagram is dual to a discrete two-dimensional surface, e.g., a triangulation if the interaction term is three-valent [25]. Tensor models were developed to extend this method to higher dimensions. Instead of integrating over random matrices and thus obtaining two-dimensional surfaces, here the integrals are over higher-order random tensors with an action consisting of tensor invariants, thus creating discrete geometries in higher dimensions [8,9].
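
As an illustration, one standard convention for a matrix model generating triangulated surfaces is

$$
Z = \int \mathrm{d}M\; e^{-N \,\mathrm{Tr}\left(\frac{1}{2} M^2 - \frac{g}{3} M^3\right)}\,,
$$

with $M$ a Hermitian $N \times N$ matrix. Expanding in the coupling $g$ produces the ribbon diagrams mentioned above, with each cubic vertex dual to a triangle and the powers of $N$ organizing the sum over surfaces by genus. Normalizations differ between references; this form is meant as a sketch.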

In several ways, group field theory (GFT) [5,6] is similar to tensor models. For interaction vertices of the same order, the combinatorics of the Feynman graphs of group field theories and tensor models agree. However, in addition to the combinatorics, the Feynman diagrams carry group theoretic data encoding a discrete geometry. The fields of the theory are defined on several copies of the underlying symmetry group. Crucially, this group manifold is not related to a space-time manifold. Instead, space-time is supposed to emerge from field excitations, e.g., as a condensate [26]. Group field theories are closely related to loop quantum gravity and spin foam models; e.g., group field theories can be constructed whose Feynman diagrams are given by spin foam amplitudes [27]. As for quantum field theories, the consistency of GFTs is investigated through renormalization [28].

Spin foam models [13,14] are a path integral approach to quantum gravity, sometimes also referred to as covariant loop quantum gravity. Similar to the previously described approaches, spin foams regularize the gravitational path integral by introducing a discretisation, a 2-complex, which is frequently chosen to be dual to a triangulation. The discrete geometry is again encoded in group theoretic data. For a given 2-complex, the path integral is implemented by summing over these data weighted by spin foam amplitudes. A priori, there is no rule determining which 2-complex to choose for a particular calculation, and generically the results depend on this choice. One way to address this is by also summing over all possible 2-complexes [29], which is systematically implemented in group field theory as discussed above. Alternatively, the refinement approach [20,30] aims at consistently defining the dynamics across various 2-complexes, e.g., by relating the theories through identifying states on the boundaries of these complexes.

Among the theories discussed here, loop quantum gravity (LQG) [10,11] is the only approach aiming to canonically quantize gravity. To this end, space-time, which is assumed to be globally hyperbolic, is split into space and time. Due to diffeomorphism symmetry, the theory is totally constrained, i.e., the Hamiltonian itself is a sum of constraints, such that the dynamics amount to gauge transformations. Moreover, these constraints form the so-called hypersurface deformation algebra. The goal of LQG is to quantize this algebra of constraints via Dirac quantization. To achieve this, one defines a kinematical Hilbert space, whose states do not satisfy the constraints, and constructs suitable constraint operators and an associated operator algebra. The final goal is then to find the physical Hilbert space, i.e., all states annihilated by the constraints. As an alternative method to tackle this issue, spin foam models have been developed as the "covariant" theory to LQG. While the two frameworks are closely related, e.g., the boundary states of modern spin foam models are kinematical states of LQG, their connection is not completely understood [31,32].
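
For reference, the hypersurface deformation algebra mentioned above reads, with $D[\vec{N}]$ the spatial diffeomorphism constraint smeared against a shift vector and $H[N]$ the Hamiltonian constraint smeared against a lapse function (up to sign conventions),

$$
\begin{aligned}
\{ D[\vec{N}], D[\vec{M}] \} &= D[\mathcal{L}_{\vec{N}} \vec{M}]\,, \\
\{ D[\vec{N}], H[M] \} &= H[\mathcal{L}_{\vec{N}} M]\,, \\
\{ H[N], H[M] \} &= D\big[ q^{ab} (N \partial_b M - M \partial_b N) \big]\,,
\end{aligned}
$$

where $q^{ab}$ is the inverse spatial metric. Since $q^{ab}$ appears on the right-hand side, the structure "constants" are field dependent, which is one source of the difficulty in implementing the constraints as operators.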

In causal set theory (CST), space-time is reduced to a partially ordered set: the discrete events are related to each other only if they are causally connected [3]. This minimal amount of assumed structure is why reconstructing space-time from a causal set is a complicated problem. There are methods to recover manifold properties from a causal set; perhaps the simplest recovers the time-like distance between two events as the length of the longest chain connecting them. Recovering space-like distances is more complicated, but still possible [33], and we can even define a measure allowing us to identify local regions, in the sense of regions that are small compared to the curvature scale of the manifold [34]. A causal set is considered to be manifold-like if it could have, with high likelihood, arisen from a statistical, so-called sprinkling, process on a given manifold (for a good definition and an algorithm to reconstruct the embedding, see [35]).
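
The chain-counting reconstruction of time-like distance is simple to state in code. The following is a hedged sketch, not taken from any causal set code base: a toy sprinkling into a causal diamond of 1 + 1d Minkowski space in light-cone coordinates, followed by a dynamic-programming computation of the longest chain between two elements.

```python
import numpy as np

def longest_chain(C, a, b):
    """Number of links in the longest chain from element a to element b.
    C[i, j] = True iff i causally precedes j; labels are assumed compatible
    with the order (i precedes j implies i < j). Returns -inf if unrelated."""
    n = C.shape[0]
    best = np.full(n, -np.inf)
    best[a] = 0.0
    for j in range(a + 1, n):
        for i in range(a, j):
            if C[i, j] and best[i] >= 0 and best[i] + 1 > best[j]:
                best[j] = best[i] + 1
    return best[b]

# Toy sprinkling into a causal diamond of 1+1d Minkowski space, using
# light-cone coordinates (u, v): i precedes j iff u_i < u_j and v_i < v_j.
rng = np.random.default_rng(1)
n = 200
u, v = rng.random(n), rng.random(n)
order = np.argsort(u + v)          # relabel so that labels respect the order
u, v = u[order], v[order]
C = (u[:, None] < u[None, :]) & (v[:, None] < v[None, :])
print("longest chain between extremal elements:", longest_chain(C, 0, n - 1))
```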

Modern string theory describes open and closed strings in 9 + 1 dimensions [36]. Since higher dimensions often lead to more trouble in computer simulations, this has not been extensively explored numerically. The old, bosonic, string theory, which describes the quantization of the 2D surfaces swept out by strings, can be studied numerically [37]. In fact, this was one of the motivating examples for the dynamical triangulations approach. It is often called non-critical string theory, and is an example of a theory that can be solved analytically but also explored using simulations [38].

One might debate whether noncommutative geometry really offers an approach to quantum gravity, or is purely a mathematical generalization of the concept of manifolds. A compact Riemannian manifold can be expressed as an algebra of functions acting on a Hilbert space together with a Dirac operator, a so-called spectral triple. Generalizing this description to allow for noncommutative function algebras then extends the space of allowed geometries [12]. While the original examples were concerned with infinite-dimensional algebras, it is also possible to construct finite matrix algebras that converge towards continuum geometries in the limit of infinite matrix size. These are the so-called fuzzy spaces, which have recently been proposed as possible states in the path integral for quantum gravity [39,40].
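
Concretely, a spectral triple $(\mathcal{A}, \mathcal{H}, D)$ consists of a $*$-algebra $\mathcal{A}$ represented on a Hilbert space $\mathcal{H}$, together with a self-adjoint operator $D$ with compact resolvent such that $[D, a]$ is bounded for all $a \in \mathcal{A}$. In the commutative case, with $\mathcal{A} = C^\infty(M)$ acting on square-integrable spinors and $D$ the Dirac operator, the geodesic distance can be recovered as

$$
d(x, y) = \sup \big\{ |f(x) - f(y)| \;:\; f \in \mathcal{A},\; \| [D, f] \| \leq 1 \big\}\,,
$$

which is why the Dirac operator, rather than the metric, is taken as the fundamental carrier of geometric information.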

The asymptotic safety approach [41] hinges on Weinberg's idea [42] that quantum gravity, described as a quantum field theory, is non-perturbatively renormalizable, i.e., possesses an interacting fixed point of the renormalization group flow in the ultraviolet described by a finite number of relevant coupling constants. In practice, this hypothesis is investigated via the functional renormalization group [43], where one integrates out short scale degrees of freedom to derive an effective theory at larger scales. Generically, this operation cannot be performed in full generality and requires truncations, i.e., only particular terms in the action, spanning the so-called theory space, are considered. To check whether signs of a fixed point persist once more interactions are allowed, the theory space is consistently enlarged. Work in this approach is mostly done using analytic methods or computer algebra packages, thus not exactly qualifying it as a numerical approach. However, it can play an important role in connecting continuum to discrete theories, and thus in testing predictions.
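
In practice, such functional renormalization group studies are built on the Wetterich equation for the effective average action $\Gamma_k$,

$$
k \, \partial_k \Gamma_k = \frac{1}{2} \, \mathrm{Tr}\left[ \left( \Gamma_k^{(2)} + R_k \right)^{-1} k \, \partial_k R_k \right],
$$

where $\Gamma_k^{(2)}$ is the second functional derivative of $\Gamma_k$ and $R_k$ is a regulator suppressing modes below the scale $k$. Truncations enter by restricting the ansatz for $\Gamma_k$ to a finite set of operators.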

### *1.2. Subtleties in Defining a Theory (on the Computer)*

In the past decades, we have seen tremendous progress in the definition and development of non-perturbative approaches to quantum gravity. While some of these approaches share similarities, e.g., the use of discrete structures to access the non-perturbative regime, they are based on very different assumptions and key ideas about what a theory of quantum gravity should be. This variety is itself an opportunity and should be embraced rather than opposed, yet it arises from one of the great weaknesses of quantum gravity, the lack of experimental data to guide development. However, a diverse set of approaches gives us the chance to uncover universal features across theories and to reveal the consequences of their underlying assumptions. To make the most of this chance, it is indispensable to make an effort to better understand the theories and their connections to one another.

Since we rely on numerical simulations in order to compute results, e.g., expectation values of observables, it would be ideal to know exactly how to choose the parameters of the theory, i.e., coupling constants or the size of the discretisation, to reliably and efficiently get the "right" answer. A prime example is lattice QCD [44], in which numerical methods provide accurate predictions, e.g., for the hadron spectrum [17]. Two features are crucial for its success: its direct contact with experiments and the existence of a renormalizable continuum theory. On the one hand, the renormalizability of the continuum theory, thanks to asymptotic freedom [45], makes it possible to determine the dynamics, i.e., the coupling constants, at different scales. On the other hand, experimental data fix the parameters of the theory and tell us which scale is relevant for a particular process. Naturally, this does not imply that the simulations can be straightforwardly performed, but it allows practitioners of QCD to focus their efforts on specific regions in parameter space. In his talk, Jack Laiho described in detail the challenges one faces in lattice QCD calculations, in particular with respect to fermionic degrees of freedom.

Considering their importance for the success of lattice QCD, it seems crucial to tackle the issues of renormalization, an effective continuum theory and contact with experiments in quantum gravity. Here, we understand renormalization in the Wilsonian sense [46], as a scheme to relate theories defined at different scales. Usually, one orders the degrees of freedom according to scale, then integrates out those at shorter scales to derive an effective theory at larger scales, ultimately relating microscopic dynamics to macroscopic physics. Additionally, the choice of parameters, ambiguities, or the choice of discretisation in the microscopic, allegedly fundamental, theory might give rise to different continuum dynamics, strongly affecting observable quantities. We would summarize these as different phases of the theory. Conversely, by exploring this phase diagram, we can identify regions of universal behaviour of the theory, unravel phase transitions and fixed points, and hence check the consistency of the theory.
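
Schematically, the Wilsonian step just described splits the field into slow and fast modes, $\phi = \phi_< + \phi_>$, and integrates out the latter,

$$
e^{-S_{\Lambda'}[\phi_<]} = \int \mathcal{D}\phi_>\; e^{-S_{\Lambda}[\phi_< + \phi_>]}\,, \qquad \Lambda' < \Lambda\,,
$$

so that the effective action at the coarser scale $\Lambda'$ encodes the influence of the eliminated short-scale degrees of freedom.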

Finding a systematic framework that can relate theories at different scales in a background independent setting is a challenge. In her talk, Bianca Dittrich described a thoroughly studied proposal in spin foam models based on the idea of relating theories by identifying the same physical transitions on different discretisations [47,48], and thus scales, in order to find theories giving consistent answers. In particular, she emphasized that consistency is indispensable for extracting predictions from the theory, e.g., expectation values of observables. To make progress in this direction, it is worthwhile to implement approximations and simplifications in order to cover a larger part of the parameter space with given resources.

### 1.2.1. Relating to the Continuum

Closely related to the issue of renormalization is the question of a continuum limit, or at least of an effective continuum theory compatible with any particular discrete quantum gravity theory. Ideally, such a continuum theory should agree with general relativity in a suitable limit, but it might also reveal crucial deviations that experiments can search for. One possible relation discussed at the workshop was to compare the 3-volume correlations computed in causal dynamical triangulations with an effective continuum theory. Interestingly, this can also be studied in other approaches and explored using functional renormalization group techniques [49]. However, special care is advised when comparing continuum theories and their discretisations, as relating numerical simulations to analytic solutions can give rise to new subtleties.

A particularly interesting example is the bosonic string, as pointed out by Jan Ambjørn. The bosonic string can be solved with analytic as well as numerical methods; however, these two solutions do not necessarily agree. The reason for this conundrum is an incompatibility of the renormalization procedures: the continuum theory used dimensional regularization, and hence did not generate certain terms that arose in the discrete theory. Repeating the continuum calculation using a different regularization scheme made it possible to match the continuum and discrete results [38]. This showcases how much care needs to be taken in mapping analytic and numerical results onto each other, and illustrates that "brute force" applications of known methods may not carry over directly to the context of quantum gravity.

### 1.2.2. Approximations and Simplifications

Another particularly contentious issue is the use of approximations and simplifications in computer simulations. The most obvious of these is that simulated models are necessarily much smaller than the real universe. The space-time volume of our universe is about 10<sup>240</sup> Planck volumes, which does not compare well to, for example, the size of 10<sup>2</sup> Planck volumes currently examined in causal set theory. Some theories do better, but in general the size of the universe in current discrete approaches is of the order of 10<sup>0</sup> to 10<sup>5</sup> discrete building blocks. Of course, simulating the entire universe from quantum gravity might be too ambitious, and it might suffice to simulate a small region of space-time that recovers general relativity semi-classically. The current best tests of general relativity limit corrections to appear on a scale below 47 μm [50]<sup>1</sup>. Assuming we wanted to simulate a cube of space-time of this extent in all four dimensions, we would need to simulate ∼10<sup>122</sup> Planck volumes, which is still out of reach by several orders of magnitude. One might argue that improving this is only a question of time and better code, but, no matter how good our code becomes, the size of our simulations will be limited by the need to build our computer within the universe, and out of atoms. Hence, careful reasoning and planning about how to best use our limited resources is an important part of pushing forward numerical quantum gravity.

<sup>1</sup> This number is estimated by assuming that, if extra dimensions of this size cannot be experimentally excluded, this gives a conservative upper limit on the scale at which quantum gravity would appear.
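
As a rough cross-check of the ∼10<sup>122</sup> figure, here is the back-of-the-envelope arithmetic, using an approximate value for the Planck length:

```python
# Order-of-magnitude check: a (47 micrometre)^4 region measured in Planck volumes.
l_planck = 1.6e-35              # Planck length in metres (approximate)
box = 47e-6                     # 47 micrometres in metres
planck_volumes = (box / l_planck) ** 4
print(f"{planck_volumes:.1e}")  # ~7e121, i.e. of order 10^122
```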

Many current simulations, in particular those using Monte Carlo methods, use Wick rotated geometries and statistical physics methods that allow for faster convergence of the results. However, it is not clear how the theory is affected by these changes, e.g., whether the ensemble with respect to which one samples geometries is significantly altered. Moreover, effects typical for quantum superpositions might be obscured by this choice. Conversely, in some approaches, it is not clear how to define a Wick rotation in the first place. The only way to control for these factors would be to find algorithms and implementations working with oscillating amplitudes. One class of such methods are tensor network renormalization techniques [51], which are, however, limited by a numerical cost that increases with the complexity of the studied system. A promising future direction might be simulating quantum systems on actual quantum computers. This could avoid the problem of complex phases and make it possible to explore superpositions of states. Even disregarding these fundamental points, there are still other simplifications and limitations we need to include in our theories, and it is important to be aware of these and explore their limits.
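
Schematically, the issue is that a Lorentzian expectation value involves oscillating weights, while its Wick rotated counterpart has a positive integrand suitable for importance sampling:

$$
\langle O \rangle = \frac{\int \mathcal{D}g\; O[g]\, e^{i S[g]}}{\int \mathcal{D}g\; e^{i S[g]}}
\quad \longrightarrow \quad
\frac{\int \mathcal{D}g\; O[g]\, e^{-S_E[g]}}{\int \mathcal{D}g\; e^{-S_E[g]}}\,.
$$

Only in the second form can $e^{-S_E}/Z$ be interpreted as a probability density for Monte Carlo sampling; with $e^{iS}$, contributions of nearly equal magnitude and opposite phase must cancel delicately, which is the essence of the sign problem.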

More specifically, theory-dependent examples of simplifications are the foliation in CDT, the restriction to particular geometric intertwiners in current spin foam simulations, and the 2D orders in causal set theory. In CDT, the simulations fix space-time to be foliated into constant time slices and to have a constant topology. This limitation has proven necessary to suppress so-called "baby universes", which have been identified as the reason that dynamical triangulations are so irregular and do not show good continuum behavior in the simplest examinations. However, this limitation has been explored and challenged: a certain rescaling of the matrix model for 2D dynamical triangulations suppresses the baby universes and leads to the same behavior as CDT [52]. In addition, in more recent work, it was shown that simulations without a strict foliation, but still conserving a time-orientability condition, lead to good continuum behavior in two and three dimensions [53,54]. These results are expected to also hold in 4d; however, they have not yet been tested there due to technical challenges. Nevertheless, they lend some credibility to the claim that the foliation in CDT is a simplification that does not overly constrain the phase space of the model. Moreover, this foliation can be used to employ an efficient algorithm, like the transfer matrix algorithm described in Andrzej Görlich's talk—see also Section 1.4. Additional hints come from recent results obtained in Euclidean dynamical triangulations with an additional curvature term: these simulations show a first order phase transition, but it is conjectured that this transition ends at a critical point that could be in the same universality class as CDT [55].

Spin foams also come with a large theory space that is initially hard to explore in full generality. Indeed, calculating the fundamental amplitudes of the theory is not possible analytically and requires substantial computational resources, even for a single building block [22,56]. Studying larger spin foams is systematically tackled in the framework of renormalization [20] described in Bianca Dittrich's talk, where effective degrees of freedom at a coarser level are defined from the full amplitude without ad hoc truncations. A suitable numerical scheme is given by so-called tensor network techniques [51], in which the system is rewritten as a contraction of a network of tensors, i.e., multidimensional arrays. The goal is to approximate said network by a coarser network efficiently by locally manipulating the tensors, e.g., sorting degrees of freedom according to their relevance via a singular value decomposition. These methods are particularly useful for identifying different phases of the model, e.g., in 2D analogue spin foam models [47,57–59] and 3D lattice gauge theories [47,60], where they revealed rich phase structures and phase transitions. Benjamin Bahr presented a closely related, but less holistic ansatz suitable for studying 4D spin foams: the underlying idea is to restrict the theory space to specific geometric shapes, e.g., cuboids [61] or frusta [62], which are coarse grained by requiring agreement of expectation values of observables across discretisations. Instead of a triangulation, the combinatorics of the foam are chosen to be hypercubic such that the coarse graining procedure can be straightforwardly iterated. Integrating over all possible shapes of the polyhedra is computationally prohibitively expensive; using the simpler cuboids allowed for calculating the first 4D RG flow of (restricted) spin foam models and revealed indications of a phase transition and a UV-attractive fixed point [21,63]. Similarly, the spectral dimension in the cuboid case showed signs of a phase transition, where one phase is characterized by a dimension of four [64]. Moreover, a candidate for a similar fixed point was also found in the frusta setting, which extends the space of allowed geometries compared to the simpler cuboids [65].
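
To make the singular value decomposition step concrete, the following is a minimal sketch of the basic truncation move used in tensor network methods; the tensor shape and bond dimension `chi` are arbitrary illustrative choices.

```python
import numpy as np

def truncate_bond(T, chi):
    """Split a four-index tensor T[a, b, c, d] across the (a,b)|(c,d) cut and
    keep only the chi largest singular values -- the basic compression step
    in tensor network renormalization."""
    sh = T.shape
    M = T.reshape(sh[0] * sh[1], sh[2] * sh[3])       # group indices into a matrix
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    chi = min(chi, len(s))
    U, s, Vh = U[:, :chi], s[:chi], Vh[:chi, :]       # discard least relevant modes
    A = (U * np.sqrt(s)).reshape(sh[0], sh[1], chi)   # three-index factor
    B = (np.sqrt(s)[:, None] * Vh).reshape(chi, sh[2], sh[3])
    return A, B

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4, 4, 4))
A, B = truncate_bond(T, chi=8)
approx = np.einsum('abk,kcd->abcd', A, B)
print("relative truncation error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```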

As a last example, in most current explorations of the dynamics in causal set theory, the path integral is restricted to a sum over the so-called 2D orders. These are a subclass of causal sets that can always be embedded into a plane, and that are dominated by causal sets that could arise from sprinkling into 1 + 1d Minkowski space. Sumati Surya told us about these and their limitations, opportunities and possible extensions in some detail. This restriction has two practical reasons. One is that the class of 2D orders is much smaller than that of all causal sets, and hence much easier to explore on the computer: the class of all possible causal sets grows like 2<sup>*N*<sup>2</sup>/4</sup> and is dominated by the very non-manifold-like Kleitman–Rothschild orders [66]; numerically, this dominance sets in for *N* ≳ 90 [18], which makes the full class very hard to explore in computer simulations. The other reason is that the choice of 2D orders immediately answers a number of questions one needs to settle before simulating causal sets, namely how to pick the dimension of space-time, and hence the action to use in the simulations. Furthermore, it also allows us to store the causal set in a 2D array and thus enables faster algorithms.
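
To illustrate why 2D orders are comparatively cheap to handle, here is a hedged sketch, with all names illustrative: a random 2D order can be generated as the intersection of two random linear orders and stored as a single *n* × *n* boolean array.

```python
import numpy as np

def random_2d_order(n, rng):
    """A random 2D order on n elements as the intersection of two random
    linear orders: u[i] and v[i] are the ranks of element i in each order,
    and i precedes j iff both of its ranks are smaller."""
    u, v = rng.permutation(n), rng.permutation(n)
    return (u[:, None] < u[None, :]) & (v[:, None] < v[None, :])

rng = np.random.default_rng(3)
C = random_2d_order(100, rng)
print("related pairs:", C.sum(), "out of", 100 * 99 // 2)
```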

The three simplifications discussed above are illustrated in Figure 1.

**Figure 1.** Examples of different simplifications used in the theories. From left to right, we see a causal triangulation with a foliation, a square frustum for spin foams and a 2D order causal set.

The issue of limited numerical resources and necessary simplifications sheds light on the question of how we can use them efficiently to reveal the properties of space-time and work towards making contact with experiments. Indeed, the latter point is certainly difficult for a theorist. Ideally, we would like to study observables that are well-defined both in discrete and continuum theories, yet connecting these to observable physical effects is usually a harder question. Thus, in order to deepen the connection between abstract quantum gravity theories and phenomenology, it is imperative that different quantum gravity approaches strive towards defining and studying the *same* observables. Then, we can unveil similarities and differences between approaches that might stimulate the development and realization of experiments capable of testing multiple theories at once.

Indeed, there is great potential in studying observables in quantum gravity. In the next section, we present some proposals discussed during the workshop.
