Article

Neural Activity in Quarks Language: Lattice Field Theory for a Network of Real Neurons

1 Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy
2 School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China
3 Laboratoire de Chimie et Biochimie Pharmacologiques et Toxicologiques, UMR 8601, UFR Biomédicale et des Sciences de Base, Université Paris Descartes-CNRS, PRES Paris Sorbonne Cité, 75006 Paris, France
* Authors to whom correspondence should be addressed.
Entropy 2024, 26(6), 495; https://doi.org/10.3390/e26060495
Submission received: 25 March 2024 / Revised: 28 May 2024 / Accepted: 30 May 2024 / Published: 6 June 2024
(This article belongs to the Special Issue Entropy and Information in Biological Systems)

Abstract

Brain–computer interfaces have seen an extraordinary surge of development in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made in achieving a unified theoretical framework. This discrepancy becomes particularly pronounced when examining the collective neural activity at the micro and meso scale, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is obtained by bridging particle physics and neuroscience, paving the way for particle physics-inspired models of the neocortex.

1. Introduction

Integrating observations of neural activity into a coherent theoretical framework is still a challenging task due to the volume and diversity of the experimental data. High-resolution recording techniques allow for simultaneous sampling of hundreds of neurons [1,2,3,4,5]. Although current probes for in vivo experiments usually record only a fraction of the active neurons, the next generation promises to greatly increase this crucial parameter. Concerning the sampling rate, interface performance long ago exceeded the typical timescale of neuronal activity, and a gigantic amount of data has been accumulated and published, along with a variety of methods and theories [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. While some progress has been made in large-scale modeling, micro- and meso-scale models lag behind [3,24,25,26,27,28,29,30,31,32,33,34,35,36,37]. For better or worse, the situation resembles the “zoo of particle physics” prior to the introduction of the Standard Model. In this paper we introduce a lattice field theory (LFT) [21,23,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56] that is tailored to interpret data from multisite brain–computer interfaces (BCIs) in a systematic and physically grounded way, linking the microscopic parameters to the experimental observations through well-known renormalization procedures. In short, LFTs discretize space–time into a lattice grid and are commonly used in theoretical particle physics to facilitate numerical simulations and otherwise intractable calculations. Our starting point will be the novel “kernel” [56] approach to LFTs, in part because of its simplicity and in part because it allows a natural connection with the theory of spin glasses [56,57,58,59,60,61], which has been proposed as a model of neural activity [19,22,62,63,64,65,66] and of pattern storage in memory and learning [15,16,18,67,68]. We will assume that the time evolution can be characterized by a discrete non-relativistic process of interacting binary fields, or Qubits [53,54,55], which from a neuroscience point of view can be interpreted as a field-theoretic version of the Free Energy principle of the Bayesian Brain Theory (see, for example, Friston et al. [49,69,70]). We fully develop the formalism and the basic principles for binary raster diagrams (although the arguments can be readily extended to any Potts-like model with multi-spin interactions). The aim of this paper is to present the theory in full mathematical detail, so that it can serve as a reference in a wide range of settings, from single neuron recordings to multi-layer perceptron networks and quantum Turing machines.
We will describe in detail the whole process of constructing, testing and interpreting actual experimental BCI observables from very basic theoretical principles. Our goal is to describe the microscopic support of cognitive processes, and our fundamental assumption is that it can be exactly encoded into a digital quantum network (qubit network), emergent from the on–off states of the action potentials (digital neurons hypothesis). We base this on the fact that neurons are distinct objects, and that they have a refractory period ensuring regularity in the time domain. Then, we try to take advantage of the presumed information “bottleneck” offered by the digital neuron hypothesis to simplify the mathematical analysis, and eventually bridge neuroscience, physics and data science within the formalism of quantum field theory. The quantum formalism may look redundant at first, as classical evolution is always recovered as a subcase of quantum evolution, but we believe that it will eventually be proven crucial to describe the neural correlates of cognitive processes.

How to Read This Paper

This work is intended to provide a comprehensive exploration of the themes at hand. For a shorter and more readable synopsis please refer to [71]. The paper is organized as follows: the introduction (Section 1) is followed by a preview of the main results (Section 2), a basic theoretical section introducing the observables (Section 3) and a more technical section developing the neural LFT in full detail (Section 4). In the final section (Section 5), we apply the theory to real experimental situations. The length and organization of this paper are probably not optimal for readers who are unfamiliar with statistical mechanics and computational neuroscience. Given the conceptual span, multidisciplinary nature and novelty of the framework, there will unavoidably be parts that are more difficult to read or that may appear trivial, depending on the background of the reader. Nonetheless, we believe that both physicists and neuroscientists can read and understand this paper with a similar level of effort. Achieving balanced “dissatisfaction” between physics and neuroscience readers is an important goal of this paper. Indeed, although previous proposals for neural LFTs exist [21,23,49,51,52,72], in our opinion none has tackled the problem of linking the theory to the actual experimental observables in a way that can also be managed by non-physicists.

2. Main Results

2.1. Neural Activity in Terms of Lattice Field Theory

It is widely accepted that the most basic computational units of the brain are the neurons. Neurons receive electro-chemical inputs from other neurons through dendrites, which are then integrated into the cell body. When the integration reaches a threshold, the neuron generates electrical impulses called action potentials, or spikes. If such a threshold is not reached, no spike is generated. When recording neural activity, e.g., during a neurophysiology experiment, it is usual to collect the timing and occurrence of spikes of an arbitrary number N of individual neurons and align them, in an arbitrary time window T, with events or stimuli specific to the chosen experimental paradigm. The matrix with N rows (neurons) and T columns (time) that encodes this information is called a spike raster. Calling V the space and S the time in which our system “lives”, we note that V is regularized by the intrinsic discretization of its units, i.e., the neurons are discrete objects, and that S is regularized by the physiological existence of an absolute refractory period and fixed by the natural temporal ordering of the observed dynamical evolution. This implies that when studying an ensemble of neurons, we can represent space–time with a set of discrete points or sites of a lattice. Formally, we define a spatial mapping onto the following ordered set of vertices (see Section 3.1),
$$ V := \{\, 1 \le i \le N \,\}, \qquad S := \{\, 1 \le \alpha \le T \,\} $$
where the time window is regularized into sub-intervals $\alpha$ according to a hypothetical “clock time” $\tau$, corresponding to the minimum time between two computational operations of the neuron. Considering that, after a spike, a neuron enters a refractory period during which it is temporarily unable to generate another one, a natural choice for $\tau$ would be charge time + discharge time + absolute refractory period + relative refractory period. However, since $\tau$ may vary significantly depending on the experimental setting, for an accurate digitalization of the signal it is convenient to consider its smallest possible value, i.e., the typical duration of a spike: $\tau \simeq 1$ ms. Within a $\tau$, the neuronal computational unit i can be either silent or active, which can be represented by a binary variable. The raster (or kernel) $\Omega$ can be explicitly written as:
$$ \Omega := \{\, \varphi_i^\alpha \in \{0,1\} : i \in V,\ \alpha \in S \,\} $$
in which φ i α is the binary variable representing the activity of the i-th neuron at time α . After these simple observations, one recognizes that Ω naturally provides all the necessary information to describe the observed neural dynamic. From now on we will refer to Ω as the (neural) activity kernel, or simply, kernel [56]. The neural dynamic is expected to follow some causal evolution influenced by the prior states, i.e., a dynamical process with memory. As already argued by several authors [21,23,49,51,52,72], it is reasonable to assume that such dynamics can be described by a quantum evolution, so that the formalism of quantum field theory [21,23,38,40,41,42,43,44,45,46,51,52,53,54,55,56] can be applied: this is a harmless assumption, since classical evolution can always be retrieved as a sub-case of quantum evolution. Then, let us assume that the evolution of φ i α in α can be characterized by a discrete process of interacting binary fields, or Qubits [53,54,55,56]. We can model the time evolution of φ i α by considering its statistical mechanics counterpart on a lattice [21,23,38,40,41,42,43,44,45,46,51,52,53,54,56]: to do so, we only need to formally define a few quantities, familiar to neuroscientists, that can be obtained from the binary kernel Ω . Just like a system of particles, we can describe how the N neurons, represented by Ω , interact with their surroundings in discrete lattice space–time with a single expression enclosing static and dynamic properties. In short, we postulate the existence of the lattice action
$$ \mathcal{A} : \{0,1\}^{V \times S} \to \mathbb{R}, $$
that allows us to derive the Lagrangian description of the system and, through the principle of least action [47,73], the corresponding statistical theory. Hence, letting O be a test function of $\Omega$, we denote the ensemble average with respect to $\mathcal{A}$ with angle brackets and formally define it as follows (WGL formula/Gibbs average/softmax average):
$$ \langle O(\Omega) \rangle := \frac{\sum_{\Omega \in \{0,1\}^{V \times S}} O(\Omega)\, \exp[-\lambda\, \mathcal{A}(\Omega)]}{\sum_{\Omega \in \{0,1\}^{V \times S}} \exp[-\lambda\, \mathcal{A}(\Omega)]}. $$
The classical (non-quantum) limit of the theory is obtained by taking the limit of infinite $\lambda$, corresponding to a zero Planck constant, or to the zero-temperature limit of canonical statistical mechanics [74]. Let us introduce the space correlation matrix $\Phi$ and the time correlation matrix $\Pi$ (joint-spike matrix, JS),
$$ \Phi := \{\, \phi_{ij} \in [0,1] : i,j \in V \,\}, \qquad \phi_{ij} := \frac{1}{T} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha $$
$$ \Pi := \{\, p_{\alpha\beta} \in [0,1] : \alpha,\beta \in S \,\}, \qquad p_{\alpha\beta} := \frac{1}{N} \sum_{i \in V} \varphi_i^\alpha \varphi_i^\beta $$
Indicating the transpose operation with the symbol †, these are straightforwardly obtained from the kernel through the relations
$$ \Omega\, \Omega^\dagger / T = \Phi, \qquad \Omega^\dagger \Omega / N = \Pi, $$
which are often used automatically (and unconsciously) to calculate spatial and temporal correlations between experimental data. Combining these quantities, we can write a simplified action
$$ \mathcal{A}(\Omega \,|\, A, B, I) := T \sum_{i \in V} \sum_{j \in V} A_{ij}\, \phi_{ij} + N \sum_{\alpha \in S} \sum_{\beta \in S} B_{\alpha\beta}\, p_{\alpha\beta} + \sum_{i \in V} \sum_{\alpha \in S} I_i^\alpha\, \varphi_i^\alpha . $$
where the matrix A of potential interactions and the matrix B of kinetic interactions control the theory, and I is the input kernel that collects the external influences. The full derivation of Equation (7), omitted here for the sake of conciseness, can be found in Section 4. This is our first main result: an explicit expression for the action of a network of real neurons that depends on easily accessible experimental quantities. Although commonly used to derive and comment on empirical results in neuroscience research [75,76], the empirical correlation matrices were, hitherto, not related to each other or ascribable to a precise physical meaning. The action $\mathcal{A}$ provides a recipe for interpreting them in a physically grounded way, setting them within a general theoretical framework that portrays the dynamics of a system in terms of kinetic and potential energies. This entails being able to put a plethora of experimental results under one theoretical hat, using a coherent physical theory. The ingredients of the recipe are the three observables $\Phi$, $\Pi$ and $\Omega$, which encode all the information about the system, the parameters of the theory A and B, which control the fluctuations, and the boundary conditions I. We will call these triplets the hypermatrix and the inverse hypermatrix, respectively, since each group of observables can be arranged into a single matrix as in Figure 1 and Figure 2. Many properties of the inverse matrices can be inferred by simple self-consistency conditions: for example, causality implies that B is upper triangular (see Section 4.1.3). Our method represents a natural generalization of the maximum entropy principle proposed in the works of Schneidman, Tkacik and colleagues [62,63,64,65,66], where an Ising model with variable couplings is used to fit ex vivo recordings of a salamander retina (see Section 2.2, Section 2.3 and Section 4.1.2). Moreover, it is remarkable that the Principal Component Analysis (PCA) and the maximum entropy principle can be linked in a natural way within the proposed LFT context. The PCA is probably the most used numerical method in many scientific fields and, in neuroscience, a large amount of data from decades of research are already available in this form to feed machine learning methods [24,35,77,78,79,80,81,82]. Given the operator relations between the kernel and the correlation matrices, it can be shown (Section 2.4) that both the PCA in the space domain and the maximum entropy principle are special cases of the proposed LFT with zero kinetic terms ($B = 0$), while the action associated with a PCA in the time domain is that of a purely kinetic LFT ($A = 0$), i.e., the dual of the maximum entropy principle in the so-called momentum space. Although in this paper we will stick to a strictly real-space analysis, we notice that there are also many powerful spectral methods that can be used to analyze rectangular arrays, like the singular value decomposition (SVD). For example, using the SVD, one finds that the spectra of $\Phi$ and $\Pi$ are the same apart from a scaling factor. This last fact has its own physical importance and will be discussed elsewhere.
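As a purely illustrative sketch of how these quantities are computed in practice (assuming NumPy; the raster `omega` and the matrices `A_pot`, `B_kin`, `I_ext` below are hypothetical placeholders, not data or fitted parameters from this work), the correlation matrices and the simplified action can be evaluated directly from a binary kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 50                                   # toy numbers of neurons and clock-time bins
omega = rng.integers(0, 2, size=(N, T))        # hypothetical binary spike raster (kernel)

# Space and time correlation matrices: Phi = Omega Omega^T / T, Pi = Omega^T Omega / N
Phi = omega @ omega.T / T
Pi = omega.T @ omega / N

# Hypothetical parameters of the theory (placeholders, not fitted values)
A_pot = rng.normal(0.0, 1.0 / N, size=(N, N))              # potential (space) couplings
B_kin = np.triu(rng.normal(0.0, 1.0 / T, size=(T, T)), 1)  # kinetic couplings, upper triangular (causality)
I_ext = np.zeros((N, T))                                   # external input kernel

# Simplified action: T*sum(A*Phi) + N*sum(B*Pi) + sum(I*Omega)
action = T * np.sum(A_pot * Phi) + N * np.sum(B_kin * Pi) + np.sum(I_ext * omega)
print(Phi.shape, Pi.shape, float(action))
```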

2.2. Generalization of the Maximum Entropy Principle

We show that our LFT is a generalization of the maximum entropy principle as presented in the work of Schneidman, Tkacik and others [62,63,64,65,66]. With some algebra and a global rescaling of the quantities (see Section 4.1.2), we can write the action in the magnetic representation:
$$ \mathcal{A}(M \,|\, A, B, h) = \sum_{i \in V} \sum_{\alpha \in S} h_i^\alpha\, \sigma_i^\alpha + \frac{T}{4} \sum_{i \in V} \sum_{j \in V} A_{ij}\, c_{ij} + \frac{N}{4} \sum_{\alpha \in S} \sum_{\beta \in S} B_{\alpha\beta}\, q_{\alpha\beta} . $$
where h is linearly related to I (see Section 4.1.2). In the spin case the hypermatrix will therefore consist of M, C and Q. By expanding the definition of $c_{ij}$ we immediately note that in the limit $B \to 0$ we have:
$$ \mathcal{A}(M \,|\, A, 0, h) = \sum_{\alpha \in S} \sum_{i \in V} h_i^\alpha\, \sigma_i^\alpha + \frac{1}{4} \sum_{\alpha \in S} \sum_{i \in V} \sum_{j \in V} A_{ij}\, \sigma_i^\alpha \sigma_j^\alpha . $$
In this way, a replicated version [60] of the Hamiltonian (i.e., the total energy of the system) used in the works of Schneidman and Tkacik is obtained. Notice that the limit $T \to 1$, corresponding to a single replica of the system, is exactly equivalent to the max entropy model [62,63,64]
$$ \mathcal{A}(M \,|\, A, 0, h) = \sum_{i \in V} h_i^1\, \sigma_i^1 + \frac{1}{4} \sum_{i \in V} \sum_{j \in V} A_{ij}\, \sigma_i^1 \sigma_j^1 . $$
Thus, the max entropy principle is recovered as a specific case of a field theory with zero kinetic energy. From what we have shown, these authors effectively approximate the activity with an LFT at equilibrium with $B = 0$, i.e., with a vanishing kinetic term.

2.3. Scaling Test for Axonal Connectivity

Let us now show a simple possible application to neural recordings, considering the case of the salamander retina dataset shown in the papers of Schneidman, Tkacik and colleagues [62,63,64] (Figure 3 and Figure 4). Remarkably, they were able to reconstruct the A matrix for small groups of neurons in a salamander retina, thus obtaining both the correlation matrix and its dual in the parameter space. In Figure 1f of their paper [63] they show the distributions of the reconstructed couplings $\tilde{J}_{ij}$ for some values of N. The distributions of $\tilde{J}_{ij}$ are indeed approximately Gaussian and it would be very interesting to verify the scaling of the variance of such distributions at various N. Concerning the space couplings, for mean field models [56,57,58] the pairwise interaction is the sum of two terms (stationary and fluctuating)
$$ A_{ij} = \tilde{J}_{ij} + \delta \tilde{J}_{ij}, $$
and hence, for the thermodynamic limit to exist, it is necessary that the $h_i$ be of order $O(1)$ in the number of neurons and that $\tilde{J}_{ij}$ and $\delta \tilde{J}_{ij}$ scale correctly. This depends on the connectivity of the matrix of axonal adjacencies (axon matrix):
$$ \Lambda = \{\, \Lambda_{ij} \in \{0,1\} : i,j \in V \,\}. $$
Average connectivity is defined as follows:
$$ g(\Lambda) := \frac{1}{N} \sum_{i \in V} \sum_{j \in V} \Lambda_{ij}. $$
Therefore, to normalize correctly, one must take
$$ \tilde{J}_{ij} = \frac{1}{g(\Lambda)}\, J_0\, \Lambda_{ij}, \qquad \delta \tilde{J}_{ij} = \frac{1}{\sqrt{g(\Lambda)}}\, J_{ij}\, \Lambda_{ij} $$
with $J_0$ and $J_{ij}$ Gaussian variables of unit variance and zero mean. For fully connected models we have $g(\Lambda) = N$, and then
$$ \tilde{J}_{ij} = \frac{1}{N}\, J_0, \qquad \delta \tilde{J}_{ij} = \frac{1}{\sqrt{N}}\, J_{ij} $$
For models with large connectivity but that are sub-linear in the number of neurons, one can consider $g(\Lambda) = N^\alpha$ with $0 < \alpha < 1$,
$$ \tilde{J}_{ij} = \frac{1}{N^\alpha}\, J_0\, \Lambda_{ij}, \qquad \delta \tilde{J}_{ij} = \frac{1}{N^{\alpha/2}}\, J_{ij}\, \Lambda_{ij} $$
whereas for finite-dimensional models we have that $g(\Lambda) = O(1)$. From preliminary analysis (we extracted data from Figure 1f of Tkacik et al., 2009 [63] with G3data and performed a Gaussian fit to find the variances, Figure 4), we confirm that the couplings are approximately Gaussian, but the scaling exponent appears to be $\alpha = 1/2$ and not $\alpha = 1$ as for the Sherrington–Kirkpatrick model [56,57,58,60,87]. This would be interesting, since the system would admit a thermodynamic limit and still have sufficient connectivity to manage the data with a mean-field theory [56,57,58,60,87]. Also, it would be very interesting to see the full hypermatrix to which the covariance matrix in Figure 1 of Tkacik et al. [63] belongs, and even more interesting would be to fit such a hypermatrix with the field theory presented in this paper.
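A minimal sketch of this kind of scaling analysis is given below (assuming NumPy; the coupling samples are synthetic placeholders standing in for digitized values, and the convention $\mathrm{Var}(\tilde{J}) \sim N^{-\alpha}$ is an assumption of the sketch):

```python
import numpy as np

# Synthetic stand-in for coupling distributions reconstructed at several network sizes N
rng = np.random.default_rng(1)
alpha_true = 0.5                         # assumed connectivity exponent of the toy data
samples = {n: rng.normal(0.0, n ** (-alpha_true / 2), size=n * (n - 1) // 2)
           for n in (10, 20, 40, 80, 160)}

# "Gaussian fit" via the first two moments of each distribution
sizes = np.array(sorted(samples))
variances = np.array([samples[n].var() for n in sizes])

# Scaling exponent from a log-log linear regression: Var(J) ~ N^(-alpha)
slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
print(f"estimated scaling exponent alpha = {-slope:.2f}")
```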
Figure 3. Example of the columnar organization of the retina and the decimation procedure from the minitube coordinates $x, y$ to the decimated lattice. Diagram elaborated from Figure 2 of Jones 2000 [88]. We re-scaled the vertical dimension z for improved visualization.
Figure 4. We extracted the distributions of the reconstructed couplings $J_{ij}$ from Figure 1f of Tkacik et al., 2009 [63] with G3data (A) and computed the scaling of the parameters, first from Gaussian fits (B,C) and then from the first two moments of the distributions (B–D). Both methods confirm that the couplings are approximately Gaussian with scaling exponent $\alpha = 1/2$. We remark that finding the parameters of the theory using scaling techniques like this is typical of the LFT analysis of elementary particle theory, where they are typically used to estimate particle masses and other observables.

2.4. Relation with the Principal Component Analysis

Here we show that the PCA can also be interpreted as a special case of our LFT. In particular, the PCA can be understood as projecting the data into a subspace such that the projected data have a minimum discrepancy with the original one. In the following, we explicitly consider only the projection into the spatial domain, but the temporal projection is similar. Then, let $V'$ be a subset of V with size $n < N$,
$$ X := \{\, x_{ik} \in \mathbb{R} : i \in V,\ k \in V' \,\}, $$
being a real valued kernel with N rows and n columns, and let us introduce the set of kernels with orthonormal columns (in this paragraph we denote by I the identity matrix)
$$ \mathcal{T} := \{\, X \in \mathbb{R}^{V \times V'} : X^\dagger X = I \,\}. $$
The columns span an n-dimensional subspace of $\mathbb{R}^N$, and the projection of $\Omega$ onto this subspace is $X X^\dagger \Omega$. The PCA aims to identify the subspace such that the discrepancy between the operators $X X^\dagger \Omega$ and $\Omega$ is as small as possible in the Frobenius norm. The objective function is
$$ \mathcal{A}(\Omega \,|\, X) := \big\| X X^\dagger \Omega - \Omega \big\|_F^2 = \mathrm{Tr}\big( \Omega^\dagger \Omega - X^\dagger \Omega\, \Omega^\dagger X \big) $$
where $\| \cdot \|_F$ denotes the Frobenius norm and where we applied $\| D \|_F^2 = \mathrm{Tr}(D^\dagger D)$, with $\mathrm{Tr}(\cdot)$ indicating the trace of the operator. This and the orthonormality constraint constitute a constrained optimization problem: calling Y the solution to this problem, we write
$$ \mathcal{A}(\Omega \,|\, Y) = \min_{X \in \mathcal{T}} \mathcal{A}(\Omega \,|\, X) = \min_{X \in \mathcal{T}} \mathrm{Tr}\big( \Omega^\dagger \Omega - X^\dagger \Omega\, \Omega^\dagger X \big). $$
The minimum is found by choosing Y as the n largest eigenvectors of $\Omega\, \Omega^\dagger$ (or the n largest left-singular vectors of $\Omega$); see Goodfellow et al. [89] for a proof. Up to the constant term $\mathrm{Tr}(\Omega^\dagger \Omega)$ and an overall sign, one finds
$$ \mathcal{A}(\Omega \,|\, Y) = \mathrm{Tr}\big( Y^\dagger \Omega\, \Omega^\dagger Y \big) = T \sum_{i \in V} \sum_{j \in V} \phi_{ij} \sum_{k \in V'} y_{ik}\, y_{jk} . $$
We conclude that the action of the PCA in the space domain is that of an LFT with $B = 0$ and
$$ A_{ij} = \sum_{k \in V'} y_{ik}\, y_{jk}, $$
and is therefore a special form of the maximum entropy principle described above.
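The correspondence can be checked numerically with a short sketch (assuming NumPy; the kernel `omega` and the subspace size `n` are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, n = 8, 50, 3
omega = rng.integers(0, 2, size=(N, T)).astype(float)   # hypothetical binary kernel

# PCA in the space domain: Y = n leading eigenvectors of Omega Omega^T
evals, evecs = np.linalg.eigh(omega @ omega.T)           # eigenvalues in ascending order
Y = evecs[:, -n:]                                        # N x n matrix with orthonormal columns

# Frobenius objective of the projection: ||Y Y^T Omega - Omega||_F^2
residual = np.linalg.norm(Y @ Y.T @ omega - omega, "fro") ** 2

# Effective potential couplings of the equivalent LFT with B = 0: A_ij = sum_k y_ik y_jk
A_eff = Y @ Y.T
print(np.allclose(Y.T @ Y, np.eye(n)), round(residual, 3), A_eff.shape)
```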

2.5. A Model for Cortical Recordings

The LFT formalism makes it possible to compare cortical recordings of neural activity with a renormalized field theory. The most used interface to simultaneously record collective neural activity is the silicon-based multielectrode array Utah 96 (Blackrock Microsystems, Salt Lake City, UT, USA) [3,4,34,90]. To date, there are about 20 years of recordings of neural activity made with the Utah 96 across different species and under hundreds of different experimental conditions, with thousands of kernels already available. The Utah array is a square grid with a 10 × 10 electrode arrangement and a total of 96 channels (the four corner sites are not recorded). Due to its planar geometry, the length of its electrodes (around 1.5 mm penetration into the cortex) and their pitch (400 μm), the Utah 96 is able to record from neurons belonging to horizontally separated cortical assemblies sampled from the same superficial cortical layer z (an example is given in Figure 1 panel A). We will refer to such assemblies as minitubes [83,84,88,91,92,93,94,95,96,97,98,99] (see Figure 3 for a sketch). As a comparison, a multi-electrode array with a linear geometry, like a single shank with multiple contact points arranged in a vertical fashion (e.g., like Neuropixels [2,100] or SiNAPS [1] probes), would sample from neurons across various layers within a single minitube [84,94]. Given that each electrode of the Utah array is designed to approximately record the activity of individual minitubes at a distance sufficient to avoid self-interaction terms, we can model any of its recordings as a decimated minitube lattice, and the dynamics evolving around each electrode tip with an on/off field $\hat{\varphi}_{xy}^\alpha$ that identifies the state of the observed minitube (see Section 4.2.6). Let us denote by $xyz$ the coordinates of a lattice such that z represents the average height from the surface of the cortex at which a given layer is located. $V_{xyz}$ represents the volume occupied by (all) the neurons present there and $xy$ is the position of the minitube section in the horizontal plane. Each cortical layer z has its kernel
$$ \Omega_z := \{\, \Omega_{xyz}^\alpha \in \{0,1\}^{V_{xyz}} : xy \in L^2,\ \alpha \in S \,\}, $$
where $L^2$ is a two-dimensional lattice with an average lattice step around the diameter of the individual minitube. Calling $\mathbb{I}(\cdot)$ the indicator function, we can now define $\hat{\varphi}_{xy}^\alpha$ as:
$$ \hat{\varphi}_{xy}^\alpha := \mathbb{I}\big( \Omega_{xyz}^\alpha \neq 0 \big) $$
which corresponds to assuming that the activation of any one of the neurons in the cell $V_{xyz}$ corresponds to the activation of the entire cell and, with high probability, to the activation of the whole minitube. Next, to model the spacing between the probing contacts of the array we apply renormalization, considering a decimated lattice $\hat{L}^2 \subset L^2$ whose step is much greater than the diameter of the individual minitube. One way of renormalizing is the so-called renormalization by decimation [101,102,103], in which the details of the system at small scales are systematically simplified by integrating out most of the degrees of freedom (spins in magnetic systems, neurons in this context). In our case, it corresponds to keeping only the sites on the decimated lattice at height z. This leads to the decimated activity kernel:
$$ \hat{\Omega} := \{\, \hat{\varphi}_{xy}^\alpha \in \{0,1\} : xy \in \hat{L}^2,\ \alpha \in S \,\}, \qquad \hat{\varphi}_{xy}^\alpha := \mathbb{I}\big( \Omega_{xyz}^\alpha \neq 0 \big) $$
Since $\hat{\Omega}$ describes the on/off activity recorded by each electrode tip, we call it the “electrode” kernel (the full derivation is in Section 4.2.6 and Section 5.1). $\hat{\Omega}$ yields the empirical hypermatrix shown in Figure 1 panel D. The space arrangement of $\hat{\Omega}$ for square multielectrode arrays like the Utah 96 is shown in Figure 1 panel B. Explaining how a cortical recording modeled in these terms is comparable to a renormalized field theory is our second main result. In Section 4.2.6, we show explicitly a very simple example of how to compute corrections for the effective theory in the semi-classical limit (near-zero Planck constant) [56]. It should be noted that more accurate renormalization schemes could be achieved by many other established methods [52,57,101,102,103,104,105,106,107]. In general, the exact renormalized theory for a generic cortical recording will depend on the details of the system, the recording interface, the experimental settings and other features that should be considered case by case. Indeed, modeling and calculating the non-linear corrections for the effective theory would be one of the core aspects on which a transfer of expertise from nuclear physics and statistical mechanics to neuroscience could be crucial. In the case of the Utah 96, the interface appears to be designed to sample individual minitubes with each electrode at a distance sufficient to avoid self-interaction terms, so we can assume that, apart from systematic errors, sensor degradation, etc., the data can be identified with a decimated version of the kernel of columnar activities, i.e., with the $\hat{\Omega}$ defined before. It would be extremely interesting to reconstruct the couplings from an experimental hypermatrix, e.g., of the motor control experiments presented in Pani et al. [35] (see Figure 1 and Figure 2). However, this is a task with a significant computational burden: to reduce the number of computations required, one could bin or decimate the kernel on a larger clock time. In any case, we remark that even by looking at the hypermatrix alone, in particular at the kernel and the overlap matrix, it is already possible to appreciate most of the results of Pani et al. [35] without resorting to numerical methods such as PCA (which can, however, be interpreted in this framework). In previous works [35,82,86,108,109,110] a modulation of the activity is observed between the Go signal and the movement onset (M_on), which is called a “motor plan”. We can also appreciate a stationary rhythmic activity before the Go signal and after the M_on (see Figure 1 and Figure 2), revealed by transverse waves in the joint-spike matrix $\Pi$, which is not detected with standard methods (e.g., PCA) and is suggestive of a time crystal [111,112]. A detailed discussion of the experiment shown in Figure 1 can be found in Section 5.2.
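The construction of the electrode kernel from a layer kernel can be sketched in a few lines (assuming NumPy; the lattice sizes, firing probability and decimation step are hypothetical placeholders, not properties of any actual recording):

```python
import numpy as np

rng = np.random.default_rng(3)
Lx, Ly, n_per_tube, T = 20, 20, 5, 100      # hypothetical minitube lattice and time window
step = 4                                     # hypothetical electrode pitch in lattice units

# Hypothetical layer kernel: binary activity of every neuron in every minitube cell
omega_layer = (rng.random((Lx, Ly, n_per_tube, T)) < 0.02)

# Minitube on/off field: a cell is "on" if any of its neurons fires within the clock interval
phi_hat = omega_layer.any(axis=2).astype(int)            # shape (Lx, Ly, T)

# Renormalization by decimation: keep only the sites on the coarser electrode lattice
omega_electrode = phi_hat[::step, ::step, :]             # "electrode" kernel
print(omega_electrode.shape)
```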

2.6. Conclusions

In conclusion, we showed that applying lattice methods from elementary particle theory to natural neurons should be possible and fruitful, although the understanding of this theoretical framework will still require substantial work. However, given the advanced development of LFTs and their vast range of applicability, knowledge exchange with neuroscience would be beneficial for the theoretical development of the latter in the near future (see Section 5.3.1 and Section 5.3.2) and for both in the long run (see Section 5.3.3). The encounter opens exciting avenues for interdisciplinary research, facilitating connections between computational neuroscience and other fields of physics that utilize LFTs. Indeed, LFTs have become of crucial importance far beyond their traditional realms, encompassing cellular automata [113,114,115], number theory [53] and computational modeling [61,116,117,118]. Thus, it is reasonable to think that they could do the same in neuroscience once the appropriate common theoretical frame is established [21,23,49,52,56]. With our manipulations we showed that determining the effective theory describing the dynamics of assemblies of neurons in the neocortex is possible in terms of LFT, at least in our formalism [57]. Moreover, given its spatial symmetries [83,84,88,91,92,93,94,95,96,97,98,99], it is possible that the topology of such a theory is either mean-field or two-dimensional, with the cortical layers behaving as interacting fields just as in elementary particle theory. Rethinking neural interactions in this manner could significantly simplify the analytical construction of effective theories. This is because, whether in the mean field [52,56,57,58,106,107] or in dimension 2 [101,102,103,104,105,119], established schemes for analyzing, simulating, renormalizing and, in some cases, exactly solving such theories already exist. In truth, it is not possible to predict what a collision between particle physics and neuroscience might lead to, yet the arguments in this work strongly suggest that it would be worth finding out.

3. Theoretical Methods

3.1. Fundamentals

Here we introduce some fundamental notation to describe a patch of cortical tissue as a dynamical system on lattices. In these first sections we will consider variables aimed at modeling the whole part of the nervous system involved in the neural computation, of which the actually observed neurons (e.g., during a neurophysiology experiment) are typically a sparse subset. This part is structured as follows: in the first sections we introduce the notation and the basic quantities, the observables and their physical significance, various notions of ergodicity and the fundamental one of stationarity. A discrete Lagrangian description of the system is then introduced in terms of LFT, and its relations to the observables and to a general statistical theory are established. A simple kernel “renormalization” scheme, based on Franchini 2023 [56], is then introduced to deal with the problem of relating the microscopic theory to marginals on a sparse subset of neurons. Finally, some applications to neuroscience are described, such as the possibility of constructing an effective theory by renormalizing via decimation [102,103,104,105] of a two-dimensional lattice of cortical columns. For the detailed derivation of the Lattice Field Theory (LFT) in kernel formalism [56] and its associated statistical theory, please refer to Section 4 and Section 4.2.

3.1.1. Map on Vertex

Let N be the number of neurons involved in the task; these are arbitrarily mapped onto the ordered set of vertices
$$ V := \{\, 1 \le i \le N \,\}. $$
The indexing $i \in V$ is defined up to a two-way (bijective) map that shuffles the indices
$$ \theta : V \to V, \qquad \theta^{-1} : V \to V $$
This map constitutes a parameter of the representation and ideally should be chosen so as to highlight the function of the various neurons observed during the task, i.e., highlighting any groups of neurons that belong to the same population, or computational structure.

3.1.2. Identifying the θ That Highlights the Space–Time Structures

The neurons involved in the task (all of them, not just those actually observed) could be partitioned into further subgroups based on function, location, average activity and times when this varies. For example, it is possible to sort neurons by average activity or by the earliest time at which they significantly change their initial state. In the case of multi-electrode array interfaces, it is also possible to define a unique sorting of the experimental rasters based on the position of the sensor channels. Given the sampling rate of neural interfaces, it is assumed that the temporal sampling of the data is much finer than any functional level of interest, while the spatial resolution could be severely limited. This issue will be addressed in subsequent sections; for this one we assume that θ is arbitrary unless otherwise stated.

3.1.3. Dynamical System

Let us represent the electric field of the neurons with a semi-compact (i.e., still not discretized in the time variable) real valued kernel
$$ X := \{\, x_V(t) \in \mathbb{R}^N : t \in [0, T] \,\}, \qquad x_V(t) := \{\, x_i(t) \in \mathbb{R} : i \in V \,\} $$
In general, we assume that the vector of action potentials $x_V$ follows a causal evolution
$$ x_V(t) = f\big( \{\, x_V(s) : s < t \,\} \big), $$
determined by the previous activity (dynamic process with memory) according to a hypothetical law f which in principle could be stochastic [69,70]. We further assume that such a law can be described or approximated by a quantum time evolution so that the formalism of statistical field theory [42] can be applied (notice that classical evolution is a subcase of quantum evolution).

3.1.4. Classical Lagrange Mechanics

Let us briefly recall the fundamental assumptions of Lagrangian mechanics in continuous time: we assume the existence of the “Lagrangian” function, canonically interpreted [120] as the difference between the kinetic and the potential energy of the system at a given time. Then, the classical action is defined from the Lagrangian by integrating over time
$$ \mathcal{A}(X) := \int_0^T dt\; L\big( t,\, x_V(t),\, \partial_t x_V(t) \big), $$
where we denoted the derivative with respect to time by $\partial_t := d/dt$. Then, the associated evolution is a stationary point of the action, Y, usually a minimum. The equations of motion are also obtained from the Lagrangian through the celebrated Euler–Lagrange equations
$$ \frac{\partial L}{\partial x_i} = \partial_t\, \frac{\partial L}{\partial (\partial_t x_i)} $$
that provide the stationary points of the action. On the other hand, determining whether a certain set of equations of motion
$$ \Gamma_V\big( t,\, y_V,\, \partial_t y_V,\, \partial_t^2 y_V \big) = 0 $$
admit a Lagrangian description is known as the inverse problem of Lagrangian mechanics. Many authors have contributed to this topic and the necessary conditions, also known as the Helmholtz conditions (after Hermann von Helmholtz),
$$ \frac{\partial \Gamma_i}{\partial (\partial_t^2 y_j)} = \frac{\partial \Gamma_j}{\partial (\partial_t^2 y_i)}, $$
$$ \frac{\partial \Gamma_i}{\partial y_j} - \frac{\partial \Gamma_j}{\partial y_i} = \frac{1}{2}\, \partial_t \left( \frac{\partial \Gamma_i}{\partial (\partial_t y_j)} - \frac{\partial \Gamma_j}{\partial (\partial_t y_i)} \right), $$
$$ \frac{\partial \Gamma_i}{\partial (\partial_t y_j)} + \frac{\partial \Gamma_j}{\partial (\partial_t y_i)} = 2\, \partial_t\, \frac{\partial \Gamma_j}{\partial (\partial_t^2 y_i)}, $$
have been worked out in many general contexts [121,122]. The problem dates back to Jacobi and has been attacked by many authors [122,123]. The lattice case has been studied by Crăciun and Opris [124], Bourdin and Cresson [125] and more recently by Gubbiotti [126,127]. Although this has been indirectly shown already for many important models of neural networks [49], a systematic test of these conditions would be an important test also for the quantum field theoretic description, which includes the free energy principle of Friston et al. [49,69,70].

3.1.5. Time Regularization

Since the space is already regularized by the natural discretization of the computational units (neurons/minitubes), in order to switch to a fully lattice system it remains to regularize the time window. The window is therefore quantized into T sub-intervals of size $\tau$, in turn mapped onto the ordered set of vertices
$$ S := \{\, 1 \le \alpha \le T \,\} $$
preserving the temporal ordering. Notice that, in contrast to V, the map between time intervals and α S is naturally fixed by the time ordering of the process. The preservation of the time ordering will be a key feature of our field theoretic approach and is reminiscent of the time-ordered product of Green functions that is used as the starting point to derive the LSZ formula [128].

3.1.6. Multiscale Analysis

To study the system we will sometimes make use of the “multiscale” representation described at the end of Section 3 of [56] (Definitions 7 and 8, Figures 3.1 and 3.2). A workable definition requires the introduction of some degree of theoretical description, but in general it can be formalized with a joint partition of V and S into the computational cells, if any, of the various structures. Typically, structures at multiple scales, both in space (neurons) and time, will be identified. The most general partition is of the type described in Section 4, in which a sequence L of nested partitions (refinements) of the set S identifies the events and sub-events into which the recorded data can be categorized, e.g., in the setting described in [35] we will have the single task level, which would be of the order of seconds, then the various sub-events (such as Go signal, Movement onset, etc.) on the 100 ms scale and finally the single actual “computations”, which might be isolated around 1–10 ms. Given the sampling rate of the most commonly used sensors, it is assumed that at the temporal level the data are scanned on a much finer scale than any functional level of interest, while the spatial resolution could be severely limited. This issue will be addressed in subsequent sections; for this one we assume that θ is arbitrary unless otherwise stated.

3.1.7. Clock Time

The time interval is quantized according to a hypothetical “clock” time τ , corresponding to the time between two fundamental computational operations by the neuron. As described in the main text, it is convenient to consider τ = 1 ms.

3.1.8. Binary Computation Cell

Within a clock time the computational unit can be either active or silent: this can be encoded by a binary variable (digital neuron hypothesis),
$$ \varphi_i^\alpha \in \{0,1\} $$
which is assumed to be the actual variable supporting the computation. The natural association is obviously with the active/silent state of the neuron during an interval equal to the clock time. This leads to the definition of the (neural) activity kernel Ω , i.e., a binary array with N rows and T columns that encodes the entire activity of the observed neurons within the time window. Formally, the kernel is a function of the type:
$$ \Omega : V \times S \to \{0,1\} $$
We can introduce the symbol $\varphi_i^\alpha$ to represent the activity of the i-th neuron at time $\alpha$; in this way the kernel can be explicitly written as
$$ \Omega := \{\, \varphi_i^\alpha \in \{0,1\} : i \in V,\ \alpha \in S \,\} $$
The kernel is the general order parameter [56] and is assumed to contain the entire information needed to describe the observed process. Given the presence of an absolute refractory period, we expect that the regularized dynamical system
$$ \varphi_V^\alpha = f\big( \{\, \varphi_V^\beta : \beta < \alpha \,\} \big) $$
is the one that naturally describes the evolution of the affected cortex sector, and not the possible continuous theory that would be obtained by taking $\tau \to 0$.

3.1.9. Kernel of the Magnetizations

To transform from the binary representation into the spin system we use the map $\sigma = 2\varphi - 1$ (which is equivalent to replacing the zeros by −1), which gives the kernel of magnetizations:
$$ M : V \times S \to \{-1, 1\} $$
explicitly, the magnetization kernel is
$$ M := \{\, \sigma_i^\alpha \in \{-1, 1\} : i \in V,\ \alpha \in S \,\} $$
The two descriptions are mathematically equivalent, but note that some observables may have different meanings. For example, the correlation between binary spikes is 1 only when neurons fire together and 0 otherwise, while the product between two spins is equal to 1 if the variables are equal, and negative if opposite. In practice, the first variable is sensitive only to the event in which the two neurons fire simultaneously, putting at the same rank (same null value) the events in which only one of the neurons fires and the one in which they are silent together. The second, on the other hand, distinguishes only whether or not they do the same thing, regardless of whether they fire or not.

3.2. Observables

In the following we define the observables of interest that can be obtained from the kernel.

3.2.1. Kernel Offset

The zero-order observable is the global average, or “offset”, of the kernel. This quantity is specific to the representation made with kernels and would be a kind of “ergodic” approximation of the activity in the space–time window that is considered. The offset is defined for both the activity and the magnetization kernels, respectively,
$$ \bar{\Omega} := \frac{1}{T} \sum_{\alpha \in S} \frac{1}{N} \sum_{i \in V} \varphi_i^\alpha, \qquad \bar{M} := \frac{1}{T} \sum_{\alpha \in S} \frac{1}{N} \sum_{i \in V} \sigma_i^\alpha $$
The two quantities are related by a linear relationship
$$ \bar{M} = 2\, \bar{\Omega} - 1 $$
From a mathematical point of view they are proportional to the grand sum (the sum of all the elements of a matrix). Such a kernel corresponds to the thermodynamic limit of a gas on a lattice with a particle density exactly equal to $\bar{\Omega}$. For highly connected magnetic systems, this can occur due to the presence of a constant external field, or from a uniform fully connected interaction of the Curie–Weiss type. We do not know if there are other magnetic or gaseous systems that have this property and are substantially different from magnetization eigenstates.

3.2.2. Row and Column Averages

At this point we can move to the description of the observables of order one. We define the column averages, which give the average activity of the state at time $\alpha$, and the row averages, that is, the average firing rate of the i-th neuron in the time window considered. The average activity is
$$ \omega := \{\, \omega_\alpha \in [0,1] : \alpha \in S \,\}, \qquad \omega_\alpha := \frac{1}{N} \sum_{i \in V} \varphi_i^\alpha, $$
with $\omega_\alpha$ the space average at time $\alpha$, while the averages over the rows are
$$ f := \{\, f_i \in [0,1] : i \in V \,\}, \qquad f_i := \frac{1}{T} \sum_{\alpha \in S} \varphi_i^\alpha . $$
where f i is the time-averaged activity of the i-th neuron in the time interval S. Notice that the average of the averages is in both cases equal to the offset
$$ \frac{1}{N} \sum_{i \in V} f_i = \frac{1}{T} \sum_{\alpha \in S} \omega_\alpha = \bar{\Omega} $$
Similarly, starting from the kernel of magnetizations we obtain the vector of space averages over the columns,
$$ \mu := \{\, \mu_\alpha \in [-1,1] : \alpha \in S \,\}, \qquad \mu_\alpha := \frac{1}{N} \sum_{i \in V} \sigma_i^\alpha, $$
the vector of time averages (on the rows)
$$ m := \{\, m_i \in [-1,1] : i \in V \,\}, \qquad m_i := \frac{1}{T} \sum_{\alpha \in S} \sigma_i^\alpha . $$
where m i is the time average of the i-th spin in the time interval S. Again, the average of the averages is equal to the offset
$$ \frac{1}{N} \sum_{i \in V} m_i = \frac{1}{T} \sum_{\alpha \in S} \mu_\alpha = \bar{M} $$
and between the averages made with spin and with spikes the same linear relationship holds: at this first level of description there are still no particular differences emerging between the two representations.
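These zero- and first-order observables are one-liners on a binary raster; a minimal sketch (assuming NumPy; the raster is a random placeholder) is:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 8, 50
omega = rng.integers(0, 2, size=(N, T))    # hypothetical binary kernel
M = 2 * omega - 1                           # magnetization kernel, sigma = 2*phi - 1

f = omega.mean(axis=1)     # row averages: firing rate of each neuron
w = omega.mean(axis=0)     # column averages: network activity at each clock time
m, mu = M.mean(axis=1), M.mean(axis=0)

offset_binary = omega.mean()                # Omega-bar (grand-sum offset)
offset_spin = M.mean()                      # M-bar
assert np.isclose(offset_spin, 2 * offset_binary - 1)
assert np.isclose(f.mean(), w.mean())       # both averages of averages equal the offset
print(offset_binary, offset_spin)
```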

3.2.3. Spectra of the Averages (Quantiles)

We introduce the distributions (histograms) of the means
$$ p_f(s) := \frac{1}{N} \sum_{i \in V} \delta(s - f_i), \qquad p_\omega(s) := \frac{1}{T} \sum_{\alpha \in S} \delta(s - \omega_\alpha) $$
and these two quantities are independent of the ordering of the indices and allow for a comparison of the statistics of row and column averages. These can be expressed as cumulative distributions
$$ F_f(s) := \int_0^s du\; p_f(u), \qquad F_\omega(s) := \int_0^s du\; p_\omega(u) $$
alternatively, one can consider the “quantiles” $F_f^{-1}$ and $F_\omega^{-1}$ [58,129], which are the inverse functions of the cumulative distributions. In the case of quantiles, it is particularly easy to construct the form of the function; let us relabel the indices i and $\alpha$ so that the averages are sorted in increasing order (order statistics)
$$ f_{i+1} \ge f_i, \qquad \omega_{\alpha+1} \ge \omega_\alpha, $$
the quantiles of f and ω are given by the following expressions:
$$ F_f^{-1}(s) = \sum_{i \in V} f_i\; \mathbb{I}\big( s \in ((i-1)/N,\ i/N] \big), \qquad F_\omega^{-1}(s) = \sum_{\alpha \in S} \omega_\alpha\; \mathbb{I}\big( s \in ((\alpha-1)/T,\ \alpha/T] \big). $$
Magnetic versions of f and ω are defined in the same way. Given the linear relationship between the averages of the two representations, one can use the same index as that used for f and ω and write directly
$$ F_m^{-1}(s) = \sum_{i \in V} m_i\; \mathbb{I}\big( s \in ((i-1)/N,\ i/N] \big), \qquad F_\mu^{-1}(s) = \sum_{\alpha \in S} \mu_\alpha\; \mathbb{I}\big( s \in ((\alpha-1)/T,\ \alpha/T] \big). $$
In what follows we may also refer to the quantiles as “spectra”, since for finite N and T they are step functions whose values are naturally quantized in intervals of height 1/T for f, 1/N for $\omega$, 2/T for m and 2/N for $\mu$. This is not very relevant in the thermodynamic limit, but it is certainly relevant in many experimental situations: the discussion will be taken up later after introducing the Wasserstein distance.
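A sketch of how such quantile “spectra” can be computed from a raster (assuming NumPy; the raster and the evaluation grid are hypothetical placeholders):

```python
import numpy as np

def quantile_spectrum(values):
    """Empirical quantile F^{-1}: the sorted values seen as a step function of s in (0, 1]."""
    v = np.sort(values)
    idx = lambda s: np.minimum((np.ceil(np.asarray(s) * v.size) - 1).astype(int), v.size - 1)
    return lambda s: v[idx(s)]

rng = np.random.default_rng(5)
N, T = 8, 50
omega = rng.integers(0, 2, size=(N, T))
f = omega.mean(axis=1)          # N firing rates
w = omega.mean(axis=0)          # T instantaneous activities

s = np.linspace(0.02, 1.0, 50)  # evaluation grid on (0, 1]
print(quantile_spectrum(f)(s)[:5], quantile_spectrum(w)(s)[:5])
```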

3.2.4. Correlation Matrices

The variables of order two are the correlation matrices. It is possible to define a spatial correlation matrix between neurons,
$$ \Phi := \{\, \phi_{ij} \in [0,1] : i,j \in V \,\}, \qquad \phi_{ij} := \frac{1}{T} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha $$
and a temporal correlation matrix, which in the case of the (neural) activity kernel is the joint-spikes matrix,
$$ \Pi := \{\, p_{\alpha\beta} \in [0,1] : \alpha,\beta \in S \,\}, \qquad p_{\alpha\beta} := \frac{1}{N} \sum_{i \in V} \varphi_i^\alpha \varphi_i^\beta $$
In general, these matrices are obtained from the kernel through the relations
$$ \Omega\, \Omega^\dagger / T = \Phi, \qquad \Omega^\dagger \Omega / N = \Pi $$
Note, however, that these observables are not completely independent from the averages and reduce to the following matrices
$$ \Phi_0 := \{\, f_i\, f_j \in [0,1] : i,j \in V \,\}, \qquad \Pi_0 := \{\, \omega_\alpha\, \omega_\beta \in [0,1] : \alpha,\beta \in S \,\} $$
in the free-field approximation. We therefore define the connected correlation matrices, which describe the correlations between the fluctuations
$$ C^* := \Phi - \Phi_0, \qquad Q^* := \Pi - \Pi_0 $$
these are zero in the free-field limit and nontrivial when the correlations between fluctuations are significant. Similarly to the averages, we can also associate with the correlations their value distributions
$$ p_{C^*}(s) := \frac{1}{N^2} \sum_{i \in V} \sum_{j \in V} \delta\big( s - c^*_{ij} \big), \qquad p_{Q^*}(s) := \frac{1}{T^2} \sum_{\alpha \in S} \sum_{\beta \in S} \delta\big( s - q^*_{\alpha\beta} \big) $$
which in the following we call fluctuation distributions. Again, we can define analogues in the magnetic description. We can associate the spin–spin correlation matrix typical of systems seen in statistical mechanics
$$ C := \{\, c_{ij} \in [-1,1] : i,j \in V \,\}, \qquad c_{ij} := \frac{1}{T} \sum_{\alpha \in S} \sigma_i^\alpha \sigma_j^\alpha $$
and the overlap matrix commonly used in spin glass theory [56,60]
$$ Q := \{\, q_{\alpha\beta} \in [-1,1] : \alpha,\beta \in S \,\}, \qquad q_{\alpha\beta} := \frac{1}{N} \sum_{i \in V} \sigma_i^\alpha \sigma_i^\beta $$
The relationship between kernels, correlations and overlap is still the same
$$ M\, M^\dagger / T = C, \qquad M^\dagger M / N = Q $$
As before, we can define the mean field matrices
$$ C_0 := \{\, m_i\, m_j \in [-1,1] : i,j \in V \,\}, \qquad Q_0 := \{\, \mu_\alpha\, \mu_\beta \in [-1,1] : \alpha,\beta \in S \,\} $$
however, it is important to note that the matrices constructed by the magnetization kernel have a different meaning than those constructed by the binary kernel. To compare them, one should again consider the connected matrices
$$ C^* := C - C_0, \qquad Q^* := Q - Q_0 $$
The latter contain only the correlations between fluctuations and are the same for both spin and binary neurons up to a scaling factor. Note that the distribution of the levels of $Q^*$ is exactly the distribution of the overlaps mentioned in the Replica Symmetry Breaking (RSB) theory [60].
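A sketch of these second-order observables (assuming NumPy; the raster is a random placeholder) also verifies the scaling factor of 4 between the spin and binary connected matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 8, 50
omega = rng.integers(0, 2, size=(N, T)).astype(float)   # hypothetical binary kernel
M = 2 * omega - 1                                        # spin kernel

f, w = omega.mean(axis=1), omega.mean(axis=0)
m, mu = M.mean(axis=1), M.mean(axis=0)

Phi, Pi = omega @ omega.T / T, omega.T @ omega / N       # binary correlation matrices
C, Q = M @ M.T / T, M.T @ M / N                          # spin correlation and overlap matrices

# Connected (fluctuation) matrices: subtract the free-field / mean-field parts
C_star, Q_star = Phi - np.outer(f, f), Pi - np.outer(w, w)
C_conn, Q_conn = C - np.outer(m, m), Q - np.outer(mu, mu)

# Spin and binary connected matrices coincide up to a factor of 4
assert np.allclose(C_conn, 4 * C_star) and np.allclose(Q_conn, 4 * Q_star)

# Distribution of the overlap levels (histogram of the Q* entries), as in RSB theory
hist, edges = np.histogram(Q_star.ravel(), bins=20)
print(hist)
```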

3.2.5. Differences between Spin and Lattice Gas

Although from a mathematical point of view the two descriptions are equivalent, some observables may have different meanings. For example, in the case of activities, the correlation between spikes is one only when the neurons fire together and equal to zero in any other case, while the product between two spins is equal to one if the variables are equal and negative if the variables are opposite. In practice, the first variable is sensitive only to the event when the two neurons fire at the same time; the matrix of values is
$$ \varphi_i^\alpha \varphi_j^\beta = \begin{cases} 1 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 1 \\ 0 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 0 \\ 0 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 1 \\ 0 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 0 \end{cases} $$
putting on the same level (same null value) events in which only one neuron fires and when they are silent together. The second, on the other hand, distinguishes only whether or not they do the same thing, regardless of whether or not they fire
$$ \sigma_i^\alpha \sigma_j^\beta = \begin{cases} 1 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = +1 \\ -1 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = -1 \\ -1 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = +1 \\ 1 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = -1 \end{cases} $$
Notice that the relationship between the two observables is
$$ \sigma_i^\alpha \sigma_j^\beta = 4\, \varphi_i^\alpha \varphi_j^\beta - 2\, \big( \varphi_i^\alpha + \varphi_j^\beta \big) + 1 $$
and that this in turn identifies a new observable
$$ \varphi_i^\alpha + \varphi_j^\beta = \begin{cases} 2 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 1 \\ 1 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 0 \\ 1 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 1 \\ 0 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 0 \end{cases} $$
which is linearly related to its counterpart calculated with the magnetizations
$$ \sigma_i^\alpha + \sigma_j^\beta = 2\, \big( \varphi_i^\alpha + \varphi_j^\beta - 1 \big) $$
explicitly, the sum of the spins is
$$ \sigma_i^\alpha + \sigma_j^\beta = \begin{cases} 2 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = +1 \\ 0 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = -1 \\ 0 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = +1 \\ -2 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = -1 \end{cases} $$
This variable allows us to distinguish three cases (both silent, both active, only one active). Its intrinsic meaning is not yet clear, assuming it has any; however, if we take an exponential transformation of the variables
$$ z_i^\alpha := \exp\big( \lambda\, \sigma_i^\alpha \big), \qquad \gamma_i^\alpha := \exp\big( 2 \lambda\, \varphi_i^\alpha \big) $$
we find that the equivalent relationship is with the products
$$ z_i^\alpha z_j^\beta = \gamma_i^\alpha \gamma_j^\beta\, \exp(-2\lambda). $$
Let us now consider the difference variable, which we will later interpret as a generalized form of impulse to construct the kinetic term of the Lagrangian
$$ \varphi_i^\alpha - \varphi_j^\beta = \begin{cases} 0 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 1 \\ 1 & \varphi_i^\alpha = 1,\ \varphi_j^\beta = 0 \\ -1 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 1 \\ 0 & \varphi_i^\alpha = 0,\ \varphi_j^\beta = 0 \end{cases} $$
the link with the spin counterpart is
$$ \sigma_i^\alpha - \sigma_j^\beta = 2\, \big( \varphi_i^\alpha - \varphi_j^\beta \big), $$
the difference between spins takes the values
$$ \sigma_i^\alpha - \sigma_j^\beta = \begin{cases} 0 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = +1 \\ 2 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = -1 \\ -2 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = +1 \\ 0 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = -1 \end{cases} $$
Notice that the modulus of the sum
$$ | \sigma_i^\alpha + \sigma_j^\beta | = \begin{cases} 2 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = +1 \\ 0 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = -1 \\ 0 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = +1 \\ 2 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = -1 \end{cases} $$
is complementary to the modulus of differences
$$ | \sigma_i^\alpha - \sigma_j^\beta | = \begin{cases} 0 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = +1 \\ 2 & \sigma_i^\alpha = +1,\ \sigma_j^\beta = -1 \\ 2 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = +1 \\ 0 & \sigma_i^\alpha = -1,\ \sigma_j^\beta = -1 \end{cases} $$
it follows that the moduli of sum and difference satisfy the relationship
$$ | \sigma_i^\alpha + \sigma_j^\beta | + | \sigma_i^\alpha - \sigma_j^\beta | = 2 $$
and that the moduli of spin and neuron differences are directly proportional
$$ | \sigma_i^\alpha - \sigma_j^\beta | = 2\, | \varphi_i^\alpha - \varphi_j^\beta | $$
One can therefore express the modulus of the sum of the spins through a quantity proportional to the modulus of the difference of the neurons (or spins), taken with opposite sign. The formula for the product of the spins is therefore
$$ \sigma_i^\alpha \sigma_j^\beta = 1 - | \sigma_i^\alpha - \sigma_j^\beta | = 1 - 2\, | \varphi_i^\alpha - \varphi_j^\beta | $$
while that for the sum of the spin is
$$ \sigma_i^\alpha + \sigma_j^\beta = 2\, \sigma_i^\alpha \left( 1 - \tfrac{1}{2}\, | \sigma_i^\alpha - \sigma_j^\beta | \right) = 2\, \big( 2 \varphi_i^\alpha - 1 \big) \big( 1 - | \varphi_i^\alpha - \varphi_j^\beta | \big) $$
both proportional to the modulus of the difference between the neurons.
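A brute-force check of these identities over the four possible cases is straightforward (a minimal sketch in Python; nothing here is specific to the paper's data):

```python
# Brute-force check of the spin/lattice-gas identities over the four (phi_i, phi_j) cases
for phi_i in (0, 1):
    for phi_j in (0, 1):
        s_i, s_j = 2 * phi_i - 1, 2 * phi_j - 1
        assert s_i * s_j == 4 * phi_i * phi_j - 2 * (phi_i + phi_j) + 1
        assert s_i + s_j == 2 * (phi_i + phi_j - 1)
        assert s_i - s_j == 2 * (phi_i - phi_j)
        assert abs(s_i + s_j) + abs(s_i - s_j) == 2
        assert abs(s_i - s_j) == 2 * abs(phi_i - phi_j)
        assert s_i * s_j == 1 - 2 * abs(phi_i - phi_j)
        assert s_i + s_j == 2 * s_i * (1 - abs(phi_i - phi_j))
print("all identities verified")
```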

3.3. Estimators for Ergodicity Breaking

The idea of ergodicity is related to both space and time. With the term “ergodic” we mean that the same information can be obtained by looking at a small portion of space for a large amount of time, or at a large portion of space for a small amount of time. Quoting Yamamoto's book [130]: the idea of ergodicity arises if we have a single sample function of a stochastic process instead of the whole ensemble. A single sample function often provides little information about the statistics of the process. However, if the process is ergodic, that is, the time averages are equal to the ensemble averages, then all statistical information can be derived from a single sample function. When a process is ergodic, each sample function represents the entire process. A little reflection shows that the process must necessarily be stationary. Ergodicity thus implies stationarity. There are levels of ergodicity, just as there are levels (degrees) of stationarity. We will mostly consider two levels of ergodicity: ergodicity in mean and in correlation.

3.3.1. Ergodicity in Mean and Correlation

From Yamamoto’s book [130], a process is said to be “ergodic on average” if the averages of rows and columns of the kernel are equal
$$ \mu_\alpha = m_i $$
Note that this implies that the averages $\mu_\alpha$ and $m_i$ are constant in $\alpha$ and i, and by the definitions given earlier they must necessarily be equal to the offset $\bar{M}$. Let $\chi_i(k)$ be the autocorrelation with period k of the i-th neuron
$$ \chi_i(k) = \frac{1}{T} \sum_{\alpha \in S} \sigma_i^\alpha\, \sigma_i^{\alpha-k} $$
and let $q_{\alpha, \alpha-k}$ be the overlap between the states at times $\alpha$ and $\alpha - k$
$$ q_{\alpha, \alpha-k} = \frac{1}{N} \sum_{i \in V} \sigma_i^\alpha\, \sigma_i^{\alpha-k} $$
A process is said to be “ergodic in correlation” if
$$ \chi_i(k) = q_{\alpha, \alpha-k} $$
from which it follows that the autocorrelation and overlap functions must be constant in i and α , respectively. Note that for a given k the averages are equal
$$ \frac{1}{N} \sum_{i \in V} \chi_i(k) = \frac{1}{T} \sum_{\alpha \in S} q_{\alpha, \alpha-k} $$
and we can therefore introduce the averaged autocorrelation at period k
$$ \Delta(k) := \frac{1}{T} \sum_{\alpha \in S} q_{\alpha, \alpha-k} = \frac{1}{T} \sum_{\alpha \in S} \frac{1}{N} \sum_{i \in V} \sigma_i^\alpha\, \sigma_i^{\alpha-k} $$
which should highlight synchronous activity at a given time scale k (if any). In a correlation-ergodic process, this function describes the system completely. We should specify whether one should consider simple or connected correlations. If connected correlations are negligible, one has
$$ \Delta_0(k) := \frac{1}{T} \sum_{\alpha \in S} \mu_\alpha\, \mu_{\alpha-k} $$
which can be deduced from the averaged kernel and should highlight the time scales at which first-order variables (i.e., averages) are correlated. Therefore, to study true second-order ergodicity, we need to look at the connected autocorrelation
$$ \Delta^*(k) := \frac{1}{T} \sum_{\alpha \in S} \big( q_{\alpha, \alpha-k} - \mu_\alpha\, \mu_{\alpha-k} \big) $$
which is also independent of the choice of representation for the computational cell (either spin or lattice gas). This quantity must necessarily be calculated by averaging the matrices and should track the correlation scales of the fluctuations in the considered time window. Note that for each of these observables it is also possible to calculate the error with a simple propagation: for a window of size T, the error diverges as k approaches T because of the reduction in the number of values over which the average is taken.
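A sketch of these estimators on a spin kernel (assuming NumPy; the kernel is a random placeholder, and the sums run over the valid range of α only):

```python
import numpy as np

def averaged_autocorrelations(M, k):
    """Period-k estimators from a spin kernel M (N x T): Delta, Delta_0 and the connected Delta*."""
    a, b = M[:, k:], M[:, :M.shape[1] - k]   # states at times alpha and alpha - k (valid range only)
    q = (a * b).mean(axis=0)                 # overlaps q_{alpha, alpha-k}
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    delta = q.mean()                         # Delta(k)
    delta0 = (mu_a * mu_b).mean()            # Delta_0(k), free-field part
    return delta, delta0, delta - delta0     # Delta*(k), connected part

rng = np.random.default_rng(7)
M = 2 * (rng.random((8, 200)) < 0.3).astype(int) - 1     # hypothetical spin kernel
for k in (1, 5, 20):
    print(k, averaged_autocorrelations(M, k))
```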

3.3.2. Stationarity

Again, from Yamamoto's book [130], a kernel is said to be stationary in a weak sense if the mean value of the columns $\mu_\alpha$ is a constant and the overlap $q_{\alpha, \alpha-k}$ between the states at times $\alpha$ and $\alpha - k$ depends only on k (and not on $\alpha$). If a process is stationary in the weak sense, the autocorrelation function and the power spectral density function form a pair of Fourier transforms (Wiener–Khinchine theorem). Therefore, if we know or can measure the autocorrelation function, we can find the power spectral density function, that is, which frequencies contain how much power in the signal. Ergodicity in correlation implies stationarity, but stationarity does not imply ergodicity in correlation.

3.3.3. Ergodicity in Distribution

These notions of ergodicity can be relaxed, for example, by considering the distance between the distributions of the means $p_m$ and $p_\mu$. This form of ergodicity is less restrictive than the previous ones since it does not require stationarity of the process; it is therefore a weaker form of ergodicity than those described by Yamamoto and is probably non-standard, although it is quite natural to consider in the context of kernels [56].

3.3.4. Wasserstein Metric

An explicit definition of ergodicity in distributions requires defining a distance between distributions: a particularly interesting distance (essentially for its relevance in the context of optimal transport) is the Wasserstein metric of order k
$$ W_k(p_m, p_\mu)^k := \int_0^1 ds\; \big| F_m^{-1}(s) - F_\mu^{-1}(s) \big|^k $$
It can be shown that convergence with respect to a distance of order k is equivalent to the usual convergence in the weak sense plus the convergence of the first k moments. For time windows with N = T , the formula becomes particularly simple,
$$ W_k(p_m, p_\mu)^k := \frac{1}{N} \sum_{i \in V} \big| m_i - \mu_i \big|^k $$
for $N \neq T$ one must be careful to first establish an appropriate binning. The version that seems most interesting to us is that of order $k = 1$, which is equivalent to weak convergence and is also known as the “earth mover” distance, in that it establishes the optimal probability mass transport plan for transforming one distribution into the other. The distributions are thought of as two sand piles on the segment and the cost of moving one unit of mass from one value to the other is taken as proportional to the distance. In practice, it minimizes the work of transforming one pile into the other. We also have that the distance between the cumulative distributions is the same as that between the quantiles
$$\int_0^1 ds\, \left| F_m^{-1}(s) - F_\mu^{-1}(s) \right| = \int_{-1}^{1} du\, \left| F_m(u) - F_\mu(u) \right|$$
However, it is important to remember that all these quantities are ideally designed to study limit kernels. For N and T finite (and not even particularly large in our case), the number of support levels accessible for the two distributions could be very different depending on how many clock times are in the window, given that the number of observed neurons remains fixed. One must therefore carefully consider whether there are enough events in the time interval under consideration such that the spectrum of m is reasonably comparable with that of μ .
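A minimal way to compare the spectra of m and μ in the sense above is sketched here: the order-k Wasserstein distance is evaluated through the quantile-function formula on a common grid of quantiles, which also handles the N ≠ T case by acting as a binning step (the number of quantiles, the toy kernel and the function names are illustrative choices, not prescriptions from the text).

```python
import numpy as np

def wasserstein_order_k(a, b, k=1, n_quantiles=512):
    """Order-k Wasserstein distance between the empirical distributions
    of two 1-D samples, via the quantile-function formula.
    For k = 1 this is the earth-mover distance."""
    s = (np.arange(n_quantiles) + 0.5) / n_quantiles
    qa = np.quantile(a, s)
    qb = np.quantile(b, s)
    return np.mean(np.abs(qa - qb) ** k) ** (1.0 / k)

# distributions of row means m_i and column means mu_alpha of a spin kernel
rng = np.random.default_rng(1)
sigma = np.where(rng.random((96, 2000)) < 0.05, 1, -1)
m, mu = sigma.mean(axis=1), sigma.mean(axis=0)
w1 = wasserstein_order_k(m, mu, k=1)
```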

4. Lattice Field Theory

It has been proposed by many authors [41,115,131] that although QFTs are generally defined on a continuous support, it is perfectly possible to formulate physical theories directly in terms of difference equations and still keep all the desirable symmetries and conservation laws of continuous formulations. This is an example of a native lattice approach to quantum field theory and its practical importance is growing with the available computational power and evolution of AI. Let us introduce an index for the “mixed space” (or interval space)
$$\Lambda := \{\, 1 \le l \le L \,\}$$
that is, a vertex set of L = NT vertices labeled by the index l. Then, let M be a spin field on Λ with components supported by {−1, 1}. For now, we will represent the magnetization kernel as a spin vector on the mixed space–time lattice Λ, which collects the value of the field at all intervals.
$$M := \{\, \sigma_l \in \{-1, 1\} : l \in \Lambda \,\}$$
For the moment, M is simply a vector that contains the field value at all the space–time points, or computational cells in our case. We will soon re-map Λ into a multiplex lattice V ⊗ S where the proper time α (special dimension) is treated separately from the other dimensions (ordinary dimensions), that is, the “kernel representation” [56], but already at this point we can highlight another key idea of using an LFT to fit neural data. In contrast with what is usually done in max entropy approaches [62], in an LFT the same neuron at two different times is treated in the same way as two different neurons, and it is the “action” function that ultimately correlates them in such a way that they look like the same neuron evolving in time. Then, let O be a test function of M. Following the stochastic quantization approach of Symanzik, Nelson, Parisi et al. [42,128,132,133,134], we postulate the analytic Euclidean action function [41,43,73]
$$A : \mathbb{R}^\Lambda \rightarrow \mathbb{R}$$
and that the averages can be computed from the Wick-rotated Gell-Mann–Low (WGL) formula [133]. The combined work of several authors showed that this is equivalent to a Gibbs average [135] with A in place of the Hamiltonian, which corresponds to the principle of least action. The WGL formula is
$$\langle O(X) \rangle = \frac{\displaystyle \sum_{X \in \mathbb{R}^\Lambda} O(X)\, \exp\left(-\lambda A(X)\right)}{\displaystyle \sum_{X \in \mathbb{R}^\Lambda} \exp\left(-\lambda A(X)\right)}$$
and is suitable to describe any observable that depends on the field X. The continuum (or thermodynamic) limit of this theory is attained for L → ∞, if it exists, while the zero-temperature limit, λ → ∞, corresponds to the non-quantum limit of the theory.

4.1. Qubit Field Theory

Binary quantum field theories were pioneered by C. F. von Weizsäcker in the 1950s with the “Ur” (Alternatives) theory [53,136]. The Ur theory is the earliest example of a Qubit field theory [53,54,55,56] and is probably the simplest of all lattice quantum field theories (QFTs). Here, we apply Taylor’s theorem and other elementary mathematical methods to the Lee formulation of quantum mechanics [40,41] in order to obtain a path integral formulation of the Ur theory via perturbative methods. We start from the assumption that the action is an analytic function of the field components, so that Taylor’s theorem can be applied to obtain a convergent perturbation theory [134]. Define the auxiliary functions
$$D_l(X) := \frac{\partial A(X)}{\partial x_l}, \qquad D_{l l'}(X) := \frac{\partial^2 A(X)}{\partial x_l\, \partial x_{l'}}\, \mathbb{I}(l \neq l') + \frac{1}{2}\, \frac{\partial^2 A(X)}{\partial x_l^2}\, \mathbb{I}(l = l')$$
then, by Taylor’s theorem, the action can be expanded around the null field, and this corresponds to the one- and two-vertex interactions, etc.
$$A(X) = \sum_{l \in \Lambda} D_l(0)\, x_l + \sum_{l \in \Lambda} \sum_{l' \in \Lambda} D_{l l'}(0)\, x_l\, x_{l'} + \dots$$
here, we stop at the second order to avoid complications, but a fourth-order theory should be considered for an accurate description of physical theories. To obtain a Ur theory we can take |x_l| = g with g < 1 (a form of Ising limit [137]). Let us introduce the tensors
$$F_l := g\, D_l(0), \qquad F_{l l'} := g^2\, D_{l l'}(0),$$
By substituting into the series expansion above, we obtain the first-order perturbation theory of the Ur in magnetic representation
$$A(M) = \sum_{l \in \Lambda} F_l\, \sigma_l + \sum_{l \in \Lambda} \sum_{l' \in \Lambda} F_{l l'}\, \sigma_l\, \sigma_{l'} + \dots$$
The theory is controlled by the tensor sequence F. We can immediately recognize the Ising-like structure of the action, which can be related to the usual formulations of the Standard Model on the lattice through, for example, the Parotto mapping of QCD [46]. In general, the statistical method [42,56,60] allows the problem of finding the quantum (thus, also classical) time evolution of a system of interacting binary fields [53,54,55,56] to be transformed into a problem of classical statistical mechanics on a lattice [41,42,43,44], which can then be studied through canonical theory [74,135], renormalization [38,101,102,103,104,105] and other powerful mathematical methods [56,60].

4.1.1. Neural LFT

In the following section we provide the complete derivation of the expressions of the action A, both in the binary case and in the spin representation. Therefore, for the sake of completeness, some expressions and definitions given in the main text are included again. For any physical (thus limited) region of our analogue space–time, a map Θ exists that connects the kernel with the mixed space and vice versa. Let us introduce a “grand map”
$$\Theta : V \otimes S \rightarrow \Lambda, \qquad \Theta^{-1} : \Lambda \rightarrow V \otimes S$$
that establishes a one-to-one correspondence between the points of the mixed space Λ and those of the observed space–time V ⊗ S. This map always exists for any physical (finite) discrete observable and is another free parameter of the theory that can be tuned to highlight space–time structures. Then, we relabel the points according to a double index as in [56]. Assume that the neuron’s computation is supported by the field
$$\psi_i^\alpha := \varphi_i^\alpha - \bar{\Omega},$$
where Ω ¯ is the global offset for the neuron in vivo, which could possibly be zero. By Taylor’s theorem, the action of a lattice field theory A ( Ω | F ) can be described by an expansion of the kind:
$$A(\Omega|F) = \sum_{i \in V} \sum_{\alpha \in S} F_i^\alpha\, \psi_i^\alpha + \sum_{i \in V} \sum_{j \in V} \sum_{\alpha \in S} \sum_{\beta \in S} F_{ij}^{\alpha\beta}\, \psi_i^\alpha \psi_j^\beta + \sum_{i \in V} \sum_{j \in V} \sum_{h \in V} \sum_{\alpha \in S} \sum_{\beta \in S} \sum_{\gamma \in S} F_{ijh}^{\alpha\beta\gamma}\, \psi_i^\alpha \psi_j^\beta \psi_h^\gamma + \sum_{i \in V} \sum_{j \in V} \sum_{h \in V} \sum_{k \in V} \sum_{\alpha \in S} \sum_{\beta \in S} \sum_{\gamma \in S} \sum_{\delta \in S} F_{ijhk}^{\alpha\beta\gamma\delta}\, \psi_i^\alpha \psi_j^\beta \psi_h^\gamma \psi_k^\delta + \dots$$
Each term represents one-, two-, three- and four-vertex interactions, etc., while the tensor sequence F collects the parameters of the theory. However, if we want to consider the same non-relativistic approximation used by Schneidman and colleagues [62], interactions with more than two vertices can be neglected, as well as two-vertex interactions with four different indices. Therefore, the proposed action reduces to:
$$A(\Omega|A, B) = \sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \psi_i^\alpha \psi_j^\alpha + \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \psi_i^\alpha \psi_i^\beta$$
The action depends explicitly on the correlation and overlap matrices and is controlled by the matrix of potential interactions A and by the matrix of kinetic interactions B. We can switch to the binary representation φ through the transformation
$$\psi_i^\alpha \psi_j^\beta = \varphi_i^\alpha \varphi_j^\beta - \bar{\Omega}\, (\varphi_j^\beta + \varphi_i^\alpha) + \bar{\Omega}^2$$
doing the algebra one finds that the structure of the theory is the same:
$$\sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \psi_i^\alpha \psi_j^\alpha + \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \psi_i^\alpha \psi_i^\beta = \sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha + \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \varphi_i^\alpha \varphi_i^\beta - \bar{\Omega} \sum_{\alpha \in S} \sum_{i \in V} \varphi_i^\alpha \sum_{j \in V} \left( A_{ij} + A_{ji} \right) - \bar{\Omega} \sum_{\alpha \in S} \sum_{i \in V} \varphi_i^\alpha \sum_{\beta \in S} \left( B^{\alpha\beta} + B^{\beta\alpha} \right) + \text{const.}$$
because the connected part is identical. If the global offset Ω ¯ is not zero one must take into account the appearance of additional currents
$$I_i^0 := -\bar{\Omega} \sum_{j \in V} \left( A_{ij} + A_{ji} \right), \qquad I_0^\alpha := -\bar{\Omega} \sum_{\beta \in S} \left( B^{\alpha\beta} + B^{\beta\alpha} \right)$$
which, however, transform linearly. Ultimately, the action in the binary form is:
$$A(\Omega|A, B) = \sum_{i \in V} I_i^0 \sum_{\alpha \in S} \varphi_i^\alpha + \sum_{\alpha \in S} I_0^\alpha \sum_{i \in V} \varphi_i^\alpha + \sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha + \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \varphi_i^\alpha \varphi_i^\beta$$
Notice that by normalizing the sums the action can be rewritten using the first-order observables f, ω , which are obtained from the kernel Ω through linear transformations, and with those of the second order, the matrices Φ and Π :
$$A(\Omega|A, B) = T \sum_{i \in V} I_i^0\, f_i + N \sum_{\alpha \in S} I_0^\alpha\, \omega_\alpha + T \sum_{i \in V} \sum_{j \in V} A_{ij}\, \phi_{ij} + N \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta}\, p_{\alpha\beta},$$
that is Equation (7) of the main text. For a single realization of the process the kernel is binary and the matrices can be obtained from the kernel (in this case the hypermatrix is a redundant representation). However, as we shall see, this does not apply in general to the averaged hypermatrix, where the correlations contain information about the ensemble fluctuations.
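For a single binary realization, the observables entering this form of the action are just normalized scalar products of the kernel with itself; a minimal sketch of their computation is given below, assuming the kernel is stored as an N × T array of zeros and ones and that Φ and Π are the normalized products ΩΩ†/T and Ω†Ω/N used above (variable names are illustrative).

```python
import numpy as np

def hypermatrix_observables(omega):
    """First- and second-order observables of a binary kernel Omega.

    omega : (N, T) array with entries in {0, 1}
            (rows = neurons, columns = clock times).
    Returns f (row averages), w (column averages),
    Phi = Omega Omega^T / T (spatial correlations) and
    Pi  = Omega^T Omega / N (temporal overlaps).
    """
    omega = np.asarray(omega, dtype=float)
    N, T = omega.shape
    f = omega.mean(axis=1)        # firing fraction of each neuron
    w = omega.mean(axis=0)        # population activity at each clock time
    Phi = omega @ omega.T / T
    Pi = omega.T @ omega / N
    return f, w, Phi, Pi
```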

4.1.2. Magnetic Representation

To switch to the magnetic representation we apply the usual spin-bit transformation σ = 2φ − 1:
$$A(M|A, B) = \frac{1}{2} \sum_{i \in V} I_i^0 \sum_{\alpha \in S} \sigma_i^\alpha + \frac{1}{2} \sum_{\alpha \in S} I_0^\alpha \sum_{i \in V} \sigma_i^\alpha + \frac{1}{4} \sum_{i \in V} \sum_{j \in V} \left( A_{ij} + A_{ji} \right) \sum_{\alpha \in S} \sigma_i^\alpha + \frac{1}{4} \sum_{\alpha \in S} \sum_{\beta \in S} \left( B^{\alpha\beta} + B^{\beta\alpha} \right) \sum_{i \in V} \sigma_i^\alpha + \frac{1}{4} \sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \sigma_i^\alpha \sigma_j^\alpha + \frac{1}{4} \sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \sigma_i^\alpha \sigma_i^\beta$$
The structure of the interaction is identical except for a global rescaling
$$\tilde{A}_{ij} := \frac{1}{4} A_{ij}, \qquad \tilde{B}^{\alpha\beta} := \frac{1}{4} B^{\alpha\beta},$$
and an adjustment of currents with an additional term
$$\tilde{I}_i^0 := -2\bar{\Omega} \sum_{j \in V} \left( \tilde{A}_{ij} + \tilde{A}_{ji} \right) + \sum_{j \in V} \left( \tilde{A}_{ij} + \tilde{A}_{ji} \right), \qquad \tilde{I}_0^\alpha := -2\bar{\Omega} \sum_{\beta \in S} \left( \tilde{B}^{\alpha\beta} + \tilde{B}^{\beta\alpha} \right) + \sum_{\beta \in S} \left( \tilde{B}^{\alpha\beta} + \tilde{B}^{\beta\alpha} \right).$$
The action in the magnetic representation is therefore
$$A(M|\tilde{A}, \tilde{B}) = T \sum_{i \in V} \tilde{I}_i^0\, m_i + N \sum_{\alpha \in S} \tilde{I}_0^\alpha\, \mu_\alpha + T \sum_{i \in V} \sum_{j \in V} \tilde{A}_{ij}\, c_{ij} + N \sum_{\alpha \in S} \sum_{\beta \in S} \tilde{B}^{\alpha\beta}\, q_{\alpha\beta}$$
In this case the hypermatrix will consist of M, C and Q. We can recover the Ising Hamiltonian used in Schneidman et al. [62] in the limit T → 1 and B → 0:
$$A(M|\tilde{A}, 0) = \sum_{i \in V} \sum_{j \in V} \tilde{A}_{ij}\, \sigma_i^1 \sigma_j^1$$
Thus, the max entropy principle is recovered as a special case of a field theory with zero kinetic energy. Since the two representations are equivalent, in the coming manipulations we will mainly use the spin representation.

4.1.3. Lagrangian Description of LFT

Assuming that the process evolves causally, it follows that the kinetic matrix B must be triangular, that is, the state of the system at instant α depends only on the states realized at the previous instants β ≤ α − 1. Therefore, we can define the sequence of time windows
$$\mathcal{S} := \{\, S_\alpha \subseteq S : \alpha \in S \,\}, \qquad S_\alpha := \{\, 1 \le \beta \le \alpha \,\}$$
In this way it is possible to define the Lagrangian of the system
$$L(\sigma_V^{S_\alpha}|A, B) := \sum_{i \in V} \sum_{j \in V} A_{ij}\, \sigma_i^\alpha \sigma_j^\alpha - \sum_{\beta \in S_{\alpha-1}} B^{\alpha\beta} \sum_{i \in V} \sigma_i^\alpha \sigma_i^\beta = \sum_{i \in V} \sum_{j \in V} A_{ij}\, \sigma_i^\alpha \sigma_j^\alpha - N \sum_{\beta \in S_{\alpha-1}} B^{\alpha\beta}\, q_{\alpha\beta}$$
where in the second line the definition of the overlap q_{αβ} is applied, so that the action is recovered as the sum over clock times
$$A(M|A, B) = \sum_{\alpha \in S} L(\sigma_V^{S_\alpha}|A, B).$$
We can isolate the potential term from the kinetic term (which depends on the overlap)
$$H(\sigma_V^\alpha|A) := \sum_{i \in V} \sum_{j \in V} A_{ij}\, \sigma_i^\alpha \sigma_j^\alpha, \qquad K(q_\alpha^{S_{\alpha-1}}|B) := -N \sum_{\beta \in S_{\alpha-1}} B^{\alpha\beta}\, q_{\alpha\beta}.$$
Thus, we can rewrite the Lagrangian in the canonical form [120]
$$L(\sigma_V^{S_\alpha}|A, B) = H(\sigma_V^\alpha|A) + K(q_\alpha^{S_{\alpha-1}}|B),$$
where q_α^{S_{α−1}} is the α-th row of the matrix of overlaps up to time α − 1
$$q_\alpha^{S_{\alpha-1}} := \{\, q_{\alpha\beta} \in Q : \beta \in S_{\alpha-1} \,\}$$
and this is enough to set the dynamics of the system. That the overlap-dependent term can be truly interpreted as a kinetic term is deduced by comparing with a simple Lagrangian system (see the work from Lee [41] for an overview). We introduce the pulse (or “momentum”) kernel
$$\partial M := \{\, \partial\sigma_i^\alpha \in \{-2, 0, 2\} : i \in V,\ \alpha \in S \,\}, \qquad \partial\sigma_i^\alpha := \sigma_i^\alpha - \sigma_i^{\alpha-1}$$
The Lagrangian of the scalar field is
$$L(\sigma_V^\alpha, \partial\sigma_V^\alpha|A, B_0) := H(\sigma_V^\alpha|A) + \frac{1}{2} B_0\, \left\| \partial\sigma_V^\alpha \right\|_2^2$$
with a few algebraic steps (e.g., Babylonian trick [138]) it can be shown that
$$\left\| \partial\sigma_V^\alpha \right\|_2^2 = 2N\,(1 - q_{\alpha\,\alpha-1})$$
therefore, the Lagrangian can be rewritten as
$$L(\sigma_V^\alpha, \partial\sigma_V^\alpha|A, B_0) = H(\sigma_V^\alpha|A) + B_0 N\,(1 - q_{\alpha\,\alpha-1}).$$
In the case of our action, taking
$$B^{\alpha\beta} = B_0\, \mathbb{I}(\alpha - 1 = \beta)$$
where I ( · ) is the indicator function. The associated Lagrangian becomes
$$L(\sigma_V^{S_\alpha}|A, B) = H(\sigma_V^\alpha|A) - B_0 N\, q_{\alpha\,\alpha-1}$$
and one can see immediately that the difference between the two Lagrangians is
$$L(\sigma_V^{S_\alpha}|A, B) - L(\sigma_V^\alpha, \partial\sigma_V^\alpha|A, B_0) = -B_0 N$$
i.e., a constant that is irrelevant to the determination of dynamics. Moreover, the sign of the kinetic term is reversed with respect to that of the overlap term. It follows that the free field system is a sub-case of the general action described at the beginning, whose overlap term can be reduced to the kinetic term of the free Lagrangian.
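The identity used above for the pulse kernel is easy to verify numerically; the snippet below checks, on a random pair of spin configurations, that the squared norm of the increments equals 2N(1 − q_{α,α−1}) (the random configurations and the seed are purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
sigma_prev = rng.choice([-1, 1], size=N)   # sigma_V^{alpha-1}
sigma_curr = rng.choice([-1, 1], size=N)   # sigma_V^{alpha}

d = sigma_curr - sigma_prev                # pulse kernel entries, in {-2, 0, 2}
lhs = np.sum(d ** 2)                       # squared norm of the increments
q = np.mean(sigma_curr * sigma_prev)       # overlap q_{alpha, alpha-1}
rhs = 2 * N * (1 - q)
assert np.isclose(lhs, rhs)                # the identity holds exactly
```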

4.2. Statistical Field Theory

So far, our theory is equivalent to a binary quantum field theory on the lattice, i.e., the Qubit field theory [53,54,55,56], since the same results can be deduced by applying the Wick rotation (i.e., a rotation of the imaginary time axis onto the real axis) [47,139,140] to a system of non-relativistic quantum oscillators [21,23,38,40,41,42,43,44,46,49,51,141]. In general, the evolution of a Lagrangian system is determined by the principle of stationary action, which means that the kernel that satisfies it is not necessarily a minimum of the action: it can also be a maximum, or a saddle point. Following Symanzik, Nelson, Parisi et al. [42,128,133,134], for the quantum evolution we consider a Gibbs principle [74] applied to the action, which is equivalent to the principle of least action [47,73]. We define the action’s partition function:
$$G(A, B) = \sum_{M \in \{-1,1\}^{V \otimes S}} \exp\left(-\lambda\, A(M|A, B)\right)$$
where we interpret the action as a Hamiltonian and look for its minimum. Here, λ is the inverse Planck constant and plays the role of an inverse temperature. The classical limit is recovered for λ → ∞. We also define the free action
$$\Psi(A, B) := -\frac{1}{\lambda} \log G(A, B)$$
which would be the analogue of the free energy. We then apply the steps to obtain the Gibbs principle [56]: first we manipulate the partition function, multiplying and dividing by a test measure to obtain the flat functional
$$\sum_{M \in \{-1,1\}^{V \otimes S}} \exp\left(-\lambda\, A(M|A, B)\right) = \left\langle \exp\left(-\lambda\, A(M|A, B) - \log \zeta(M)\right) \right\rangle_\zeta.$$
Then, we apply Jensen’s inequality to the average versus the test measure
$$\left\langle \exp\left(-\lambda\, A(M|A, B) - \log \zeta(M)\right) \right\rangle_\zeta \;\geq\; \exp\left( \left\langle -\lambda\, A(M|A, B) - \log \zeta(M) \right\rangle_\zeta \right) = \exp\left(-\lambda\, F(\zeta|A, B)\right)$$
so as to obtain the free action functional
$$F(\zeta|A, B) := \left\langle A(M|A, B) \right\rangle_\zeta + \frac{1}{\lambda} \left\langle \log \zeta(M) \right\rangle_\zeta$$
This functional is greater or equal to the free action for any test measure
$$\Psi(A, B) \leq F(\zeta|A, B), \qquad \forall\, \zeta \in \mathcal{P}\left( \{-1,1\}^{V \otimes S} \right)$$
and one can see that the minimum is actually reached by the Gibbs measure
$$\eta(M|A, B) := \frac{1}{G(A, B)} \exp\left(-\lambda\, A(M|A, B)\right)$$
It can be verified that the measure satisfies the relationship with the free action
$$F(\eta|A, B) := \inf_{\zeta \in \mathcal{P}\left( \{-1,1\}^{V \otimes S} \right)} F(\zeta|A, B) = \Psi(A, B)$$
If the system is assumed to be classical (i.e., non-quantum) the dynamics is obtained in the zero-temperature limit. However, it could also have an intrinsic minimum temperature (equivalent to a non-zero Planck constant).
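For small systems the ground state η can be explored directly by Monte Carlo; the sketch below samples exp(−λA) with a single-spin-flip Metropolis rule for the two-body action written above (the sign conventions, the naive full recomputation of the action at every step, and the toy couplings are simplifying assumptions made for clarity rather than efficiency, not a prescription from the text).

```python
import numpy as np

def action(M, A, B):
    """Two-body action of a spin kernel M (N x T): potential term on
    equal-time pairs plus kinetic term on equal-neuron pairs."""
    potential = np.einsum('ij,ia,ja->', A, M, M)
    kinetic = np.einsum('ab,ia,ib->', B, M, M)
    return potential + kinetic

def metropolis_sample(A, B, lam=1.0, n_sweeps=20, rng=None):
    """Single-spin-flip Metropolis sampling of exp(-lam * action)."""
    rng = np.random.default_rng(rng)
    N, T = A.shape[0], B.shape[0]
    M = rng.choice([-1, 1], size=(N, T))
    E = action(M, A, B)
    for _ in range(n_sweeps * N * T):
        i, a = rng.integers(N), rng.integers(T)
        M[i, a] *= -1
        E_new = action(M, A, B)            # naive recomputation, for clarity
        if rng.random() < np.exp(-lam * (E_new - E)):
            E = E_new                      # accept the flip
        else:
            M[i, a] *= -1                  # reject and restore
    return M

# usage on a tiny system: N = 8 neurons, T = 12 clock times
rng = np.random.default_rng(4)
A = rng.normal(0, 0.1, (8, 8)); A = (A + A.T) / 2
B = np.triu(rng.normal(0, 0.1, (12, 12)), k=1)
sample = metropolis_sample(A, B, lam=1.0, n_sweeps=20, rng=0)
```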

4.2.1. Connection with Replica Theory

The theory in the Lagrangian form allows us to establish a connection with the replica theory [60,87]
$$G(A, B) = \sum_{M \in \{-1,1\}^{V \otimes S}} \prod_{\alpha \in S} \exp\left[ -\lambda\, L(\sigma_V^{S_\alpha}|A, B) \right]$$
and since the sum over the kernels is equivalent to a sum over T replicas of the system σ V
$$\sum_{M \in \{-1,1\}^{V \otimes S}} = \sum_{\sigma_V^1 \in \{-1,1\}^V}\; \sum_{\sigma_V^2 \in \{-1,1\}^V} \dots \sum_{\sigma_V^T \in \{-1,1\}^V}$$
the replicated system is obtained in the limit B → 0 (no kinetic term)
$$A(M|A, 0) = \sum_{\alpha \in S} H(\sigma_V^\alpha|A)$$
with simple steps we arrive at the following relations
$$G(A, 0) := \sum_{M \in \{-1,1\}^{V \otimes S}} \prod_{\alpha \in S} \exp\left[ -\lambda\, H(\sigma_V^\alpha|A) \right] = \sum_{\sigma_V^1 \in \{-1,1\}^V} \exp\left[ -\lambda\, H(\sigma_V^1|A) \right] \dots \sum_{\sigma_V^T \in \{-1,1\}^V} \exp\left[ -\lambda\, H(\sigma_V^T|A) \right] = \prod_{\alpha \in S} Z(A) = Z(A)^T$$
where Z is the partition function associated with the Hamiltonian H and
$$Z(A) := \sum_{\sigma_V^1 \in \{-1,1\}^V} \exp\left( -\lambda\, H(\sigma_V^1|A) \right)$$
As one can see, the partition function of the action converges to the partition function of the Hamiltonian replicated T times. The interpretation of the replica trick [60]
$$\log Z(A) = \lim_{T \to 0} \frac{1}{T}\left[ Z(A)^T - 1 \right] = \lim_{T \to 0} \frac{1}{T}\left[ G(A, 0) - 1 \right]$$
is natural enough in this context: the formal limit T → 0 describes a situation in which the continuum limit of the theory (τ → 0) is observed for an infinitesimal time.

4.2.2. External Input

So far, we have only considered the evolution of an isolated system, but obviously in our case the input is crucial, so we must include it in the model. This can be done in a relatively simple way by introducing the input kernel, which describes the input signal in the network
$$I(M|I) := \sum_{\alpha \in S} I_V^\alpha \cdot \sigma_V^\alpha$$
which should be added to the action to obtain the description of the full system
$$A(M|A, B, I) := A(M|A, B) - I(M|I)$$
By introducing the input partition function (see the Interface Model of [56])
$$R(I) := \sum_{M \in \{-1,1\}^{V \otimes S}} \exp\left( \lambda\, I(M|I) \right) = \prod_{\alpha \in S} \prod_{i \in V} 2 \cosh\left( \lambda\, I_i^\alpha \right)$$
and applying the Gibbs principle we find the distribution of the input
$$\rho(M|I) := \frac{1}{R(I)} \exp\left( \lambda\, I(M|I) \right) = \frac{1}{R(I)} \prod_{\alpha \in S} \prod_{i \in V} \exp\left( \lambda\, I_i^\alpha \sigma_i^\alpha \right)$$
The partition function of the general action can be expressed in terms of the average of the isolated state with respect to ρ
$$G(A, B, I) = R(I)\, \left\langle \exp\left( -\lambda\, A(M|A, B) \right) \right\rangle_\rho$$
Note that the partition can also be expressed as the average of the input over the measure of the isolated system
$$G(A, B, I) = G(A, B)\, \left\langle \exp\left( \lambda\, I(M|I) \right) \right\rangle_\eta$$
from which a relationship between partition functions and averages over states
$$R(I)\, \left\langle \exp\left( -\lambda\, A(M|A, B) \right) \right\rangle_\rho = G(A, B)\, \left\langle \exp\left( \lambda\, I(M|I) \right) \right\rangle_\eta$$
For example, the input kernel could model the signal arriving to the observed cortical area after a stimulus. In the case of a motor task [35,86], this could be the thalamic input arriving at the boundary neurons (in a topological sense) of the recorded cortical region following the Go stimulus; it is expected to be a steady state almost everywhere except around the time interval at which the motor plan is realized. If axonal and synaptic connections are reasonably stable, then most of the observed variability could come from the input noise from the rest of the network, or slightly different initial conditions, etc. To include all possible effects one can introduce a “quenched” space–time noise term, i.e., a random field δ to be added to the input term I
$$I(M|I, \delta) := \sum_{\alpha \in S} I_V^\alpha \cdot \sigma_V^\alpha + \sum_{\alpha \in S} \delta_V^\alpha \cdot \sigma_V^\alpha$$
which statistically mimics the input noise on the time scale of the entire session. In the case of the recordings described in the main text [35,86], we expect that quenched noise terms can be ignored.

4.2.3. Ground State of the Action and Order Parameter

The variational principle identifies a distribution η called the ground state of the action (GS), which would be the one from which the multielectrode interface draws the states we observe at the single-trial level. Note that the GS of the action is a distribution in the space of kernels {−1, 1}^{V⊗S}, and hence a natural order parameter would be the kernel of the GS
$$M_\eta := \{\, \langle \sigma_i^\alpha \rangle_\eta \in \mathbb{R} : i \in V,\ \alpha \in S \,\}.$$
The kernel of the GS is of particular interest since one can directly obtain from it the average momentum kernel ⟨∂M⟩_η and all kernels derived from linear operations. It also allows us to determine and subtract the steady state of the “hold” phase before the motor plan is observed (see main text). Notice that averages of first-order observables, such as the offset, or row and column averages, can be computed directly from the average kernel because they are related to it by a linear relationship. Additionally, if the connected correlation matrices are negligible, then the correlation matrices can be deduced from the average kernel, since C_0 and Q_0 depend only on the averages. The amplitude of the kernel cell fluctuations satisfies the relation
$$\left\langle \left[ \sigma_i^\alpha - \langle \sigma_i^\alpha \rangle_\eta \right]^2 \right\rangle_\eta = 1 - \langle \sigma_i^\alpha \rangle_\eta^2$$
However, an analogous simplification does not apply to the cross-fluctuations in general,
$$\left\langle \left[ \sigma_i^\alpha - \langle \sigma_i^\alpha \rangle_\eta \right] \left[ \sigma_j^\beta - \langle \sigma_j^\beta \rangle_\eta \right] \right\rangle_\eta = \langle \sigma_i^\alpha \sigma_j^\beta \rangle_\eta - \langle \sigma_i^\alpha \rangle_\eta \langle \sigma_j^\beta \rangle_\eta - \langle \sigma_i^\alpha \rangle_\eta \langle \sigma_j^\beta \rangle_\eta + \langle \sigma_i^\alpha \rangle_\eta \langle \sigma_j^\beta \rangle_\eta = \langle \sigma_i^\alpha \sigma_j^\beta \rangle_\eta - \langle \sigma_i^\alpha \rangle_\eta \langle \sigma_j^\beta \rangle_\eta$$
Thus, the average kernel may not be a sufficient order parameter to fully describe the GS η , and we should also look at second-order variables. To verify this we can compute the ensemble covariance matrices
$$\delta C_\eta := \langle C \rangle_\eta - M_\eta M_\eta^\dagger / T, \qquad \delta Q_\eta := \langle Q \rangle_\eta - M_\eta^\dagger M_\eta / N.$$
If these matrices are non-trivial it means that the correlations cannot be reconstructed from the average kernel. Thus, the most general order parameter in this approximation (two-body non-relativistic) is the hypermatrix, composed by the average kernel M η and the average correlation matrices Q η and C η .
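Given a set of aligned single-trial kernels, the ensemble covariance matrices can be estimated as the trial-averaged correlation matrices minus the correlations computed from the average kernel, following the definitions above; a minimal sketch (array layout and names are our own assumptions) is given below.

```python
import numpy as np

def ensemble_covariances(kernels):
    """Ensemble covariance matrices from a list of aligned spin kernels.

    kernels : list of (N, T) arrays in {-1, +1}, one per trial, assumed
              already synchronized (the time shifts nu_k already applied).
    Returns (dC, dQ): trial-averaged correlation matrices minus the
    correlations of the trial-averaged kernel.
    """
    stack = np.asarray(kernels, dtype=float)        # (n_trials, N, T)
    n, N, T = stack.shape
    M_eta = stack.mean(axis=0)                      # average kernel
    C_eta = np.einsum('kit,kjt->ij', stack, stack) / (n * T)
    Q_eta = np.einsum('kia,kib->ab', stack, stack) / (n * N)
    dC = C_eta - M_eta @ M_eta.T / T
    dQ = Q_eta - M_eta.T @ M_eta / N
    return dC, dQ
```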

4.2.4. Repeated Experiments and Ensemble Average

Let us see how we might model an actual experimental observation of these matrices. A possible method is to replicate an experiment n times and find a way to average the results in such a way as to estimate the ensemble average. First, we need to index the trials with the label k and the span of the index is denoted by
$$W := \{\, 1 \le k \le n \,\},$$
the empirical ensemble collecting the actually observed kernels, called a recording session in [35], can be represented as follows
$$\mathcal{W} := \{\, M_k \in \{-1, 1\}^{V \otimes S} : k \in W \,\},$$
where M k is the k-th trial of the session
$$M_k := \{\, \varphi_{i\,k}^\alpha \in \{0, 1\} : i \in V,\ \alpha \in S \,\}.$$
We added the new trial index k in the lower part of the φ symbol, though in principle it should have been placed on top, as the idea of repeated experiments would suggest a kind of time variable. Since the trials are usually designed to be independent, the order in which they are performed should not matter for the k index. This is an ideal situation that should be verified for each experimental set, but it is also a reasonable approximation of the experimenter’s intentions. To compare the experiments and precisely define the empirical averages, we need one last ingredient: to choose the proper synchronization of the experimental kernels. Then, we introduce the integer vector
$$\nu_W := \{\, \nu_k \in \mathbb{Z} : k \in W \,\}$$
that collects the relative time shifts of the trials, i.e., for the k-th trial the α index is shifted by ν_k units of clock time, which is equivalent to applying the substitution
$$\alpha \rightarrow \alpha + \nu_k$$
We formally indicate the application of the timeshifts to the empirical ensemble with
$$\mathcal{W} \rightarrow \mathcal{W}(\nu_W).$$
We argue that for some ν W , the average on the empirical set converges to the ensemble average in the ideal limit of infinite repetitions of the same experiments
$$\langle O(M) \rangle_\eta = \lim_{n \to \infty} \frac{1}{n} \sum_{M \in \mathcal{W}(\nu_W)} O(M).$$
For example, the hypermatrix shown in Figure 1 is obtained by choosing ν W in such a way as to synchronize the replicas of the experiment with respect to the movement onset (see Section 5.2.2), while in [35], the alignment is by Go signal. In principle, one should also consider more complex kinds of synchronizations, like alignments that are based on maximizing the correlations between the trials. Let us introduce the overlap matrix for the session
$$Q(\nu_W) := \{\, Q_{k k'}(\nu_W) \in [0, 1] : k, k' \in W \,\},$$
where the entries are the overlaps between the trials k and k′ with timeshifts ν_W,
$$Q_{k k'}(\nu_W) := \frac{1}{T} \sum_{\alpha \in S} \frac{1}{N} \sum_{i \in V} \varphi_{i\,k}^{\alpha + \nu_k}\, \varphi_{i\,k'}^{\alpha + \nu_{k'}}.$$
For example, we may want to align the samples with respect to some vector ν_W^* such that ν_k^{go} ≤ ν_k^* ≤ ν_k^{mov} and that maximizes the norm of the overlap matrix
$$\left\| Q(\nu_W^*) \right\|_2^2 = \sup_{\nu_W} \left\| Q(\nu_W) \right\|_2^2,$$
in any case, notice that the optimization of such functionals could soon become impractical for large datasets. We will show an example of the rank reduction method (by renormalization) in Figures 11–13.
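A direct, if naive, implementation of the session overlap matrix and of a brute-force alignment search over a small list of candidate shift vectors is sketched below; it also illustrates why the optimization quickly becomes impractical for large datasets, since every candidate requires recomputing all pairwise trial overlaps (the window handling and the function names are illustrative assumptions).

```python
import numpy as np

def session_overlap(kernels, shifts, window):
    """Overlap matrix between trials of a session for given time shifts.

    kernels : list of (N, T_k) binary arrays, one per trial.
    shifts  : integer shift per trial (start of the analysis window).
    window  : number of clock times T kept after shifting.
    """
    cut = [np.asarray(K, float)[:, s:s + window] for K, s in zip(kernels, shifts)]
    n = len(cut)
    Q = np.empty((n, n))
    for k in range(n):
        for kp in range(n):
            # (1/T) sum_alpha (1/N) sum_i phi_k phi_k'  ==  mean of the product
            Q[k, kp] = np.mean(cut[k] * cut[kp])
    return Q

def best_shifts(kernels, candidate_shifts, window):
    """Brute-force search of the shift vector maximizing ||Q(nu_W)||_2^2
    over a (small) list of candidate shift vectors."""
    scores = [np.sum(session_overlap(kernels, s, window) ** 2)
              for s in candidate_shifts]
    return candidate_shifts[int(np.argmax(scores))]
```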

4.2.5. Inference Methods

Since the work of Schneidman et al. [62], the possibility of reconstructing the couplings has become a major goal in computational neuroscience, and powerful inference methods are now available [142,143,144,145]. Let us consider a relativistic theory in the mixed space l ∈ Λ with an input kernel I. If we assume the action is that of a free (non-interacting) theory [58], then the parameters can be obtained easily from the average magnetizations by inverting the Callen equations [146,147]
$$\lambda I_l = \tanh^{-1}(m_l), \qquad m_l := \langle \sigma_l \rangle,$$
Clearly there is an error that depends on the number of samples of the empirical ensemble average. Calling n the number of experiments, or replicas, on which the ensemble average is taken (e.g., the “session trials” of [35]), the error is as follows
$$\lambda\, \delta I_l = \frac{\cosh\left( \lambda I_l \right)}{\sqrt{n}}.$$
We can go further and add two body interactions, in this case the problem becomes less trivial, and we have to deal with the so-called inverse Ising problem, a classic inference problem [142,146,147,148]. Let us switch on the two body interactions F l l and introduce the “grand covariance” of the mixed space
$$C_{l l'} := \langle \sigma_l \sigma_{l'} \rangle_\eta - \langle \sigma_l \rangle_\eta \langle \sigma_{l'} \rangle_\eta.$$
The values of the coupling parameters are ultimately recovered by inverting the following system of (possibly non-linear) equations [142]:
$$C_{l l'} = \frac{\partial m_l}{\partial I_{l'}} = -\frac{\partial^2 \Psi}{\partial I_l\, \partial I_{l'}},$$
that must be solved and then inverted to find the F matrix. There are various methods to do so [146,147,148], and an excellent survey is that of Nguyen et al. [142]. There are also several approximate formulas that only require us to invert the grand covariance, the simplest is
$$F_{l l'}^{\mathrm{LR}} = -\left( C^{-1} \right)_{l l'}$$
that corresponds to the so-called “naive” mean-field theory [142]. More advanced formulas depending on the inverse covariance are, for example, the TAP formula [142]
$$F_{l l'}^{\mathrm{TAP}} = \frac{-2 \left( C^{-1} \right)_{l l'}}{1 + \sqrt{1 - 8\, m_l m_{l'} \left( C^{-1} \right)_{l l'}}},$$
the “independent-pair” approximation formula [142]
$$F_{l l'}^{\mathrm{IP}} = \frac{1}{4} \ln \frac{\left[ (1 + m_l)(1 + m_{l'}) + C_{l l'} \right]\left[ (1 - m_l)(1 - m_{l'}) + C_{l l'} \right]}{\left[ (1 + m_l)(1 - m_{l'}) - C_{l l'} \right]\left[ (1 - m_l)(1 + m_{l'}) - C_{l l'} \right]},$$
and the Sessak–Monasson formula [142,149]
$$F_{l l'}^{\mathrm{SM}} = F_{l l'}^{\mathrm{IP}} - \left( C^{-1} \right)_{l l'} - \frac{C_{l l'}}{\left( 1 - m_l^2 \right)\left( 1 - m_{l'}^2 \right) - C_{l l'}^2},$$
especially suited in the limit of small covariances. Notice that the presence of the pair interactions also modifies the expression of the external fields. Introducing the Legendre transform of the free energy with respect to the magnetizations,
$$\Gamma := \max_{I_\Lambda \in \mathbb{R}^\Lambda} \left[\, I_\Lambda \cdot m_\Lambda + \Psi \,\right],$$
the equations for both parameters are obtained from [142]
$$I_l = \frac{\partial \Gamma}{\partial m_l}, \qquad \left( C^{-1} \right)_{l l'} = \frac{\partial^2 \Gamma}{\partial m_l\, \partial m_{l'}}.$$
Using the double index, the equations for the F parameters are
$$C_{ij}^{\alpha\beta} = -\frac{\partial^2 \Psi}{\partial I_i^\alpha\, \partial I_j^\beta}.$$
since in the non-relativistic approximation we ignored correlations where both the upper and lower pairs of indices are different, these can be rewritten as
$$C_{ij}^{\alpha\beta} = C_{ij}^{\alpha\alpha}\, \mathbb{I}(\alpha = \beta) + C_{ii}^{\alpha\beta}\, \mathbb{I}(i = j),$$
the relations with the elements of the covariance matrices are
$$\left( \delta c_{ij} \right)_\eta = \frac{1}{T} \sum_{\alpha \in S} C_{ij}^{\alpha\alpha}, \qquad \left( \delta q_{\alpha\beta} \right)_\eta = \frac{1}{N} \sum_{i \in V} C_{ii}^{\alpha\beta}.$$
This reduces the number of parameters that must actually be computed to reconstruct the action from T²N² to NT(N + T), greatly enhancing computational tractability in the non-relativistic case.
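For reference, the naive mean-field and TAP reconstructions of the couplings from the magnetizations and the (grand) covariance can be written in a few lines; the sketch below follows the formulas quoted above from the review of Nguyen et al. [142], with the caveat that sign and normalization conventions may differ by constant factors from the action used in this paper, and that the matrix inversion is only meaningful when the covariance is well conditioned.

```python
import numpy as np

def inverse_ising_couplings(m, C, method="nMF"):
    """Approximate inverse-Ising reconstruction of the couplings from the
    magnetizations m and the (grand) covariance matrix C.

    Implements the 'naive' mean-field formula F = -C^{-1} (off-diagonal)
    and the TAP correction; conventions follow standard Ising inference
    and are only illustrative of the formulas quoted in the text.
    """
    Cinv = np.linalg.inv(C)
    if method == "nMF":
        F = -Cinv
    elif method == "TAP":
        F = -2.0 * Cinv / (1.0 + np.sqrt(1.0 - 8.0 * np.outer(m, m) * Cinv))
    else:
        raise ValueError(method)
    np.fill_diagonal(F, 0.0)           # self-couplings are not defined
    return F

def external_fields_nMF(m, F):
    """Mean-field estimate of the fields: I_l = atanh(m_l) - sum_l' F_ll' m_l'."""
    return np.arctanh(m) - F @ m
```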

4.2.6. Renormalization

We conclude the theoretical sections with a simple renormalization [101,141] scheme, based on [56], that will be useful to link the theory with experimental observations. From this subsection we switch again to the lattice gas representation. Consider a joint kernel partition as in Section 3 of Franchini 2023 [56], with two levels (equivalent to one-step Replica Symmetry Breaking: RSB1). Let N 1 , N 2 , T 1 and T 2 be numbers such that N = N 1 N 2 and T = T 1 T 2 , and let
$$V_0 = \{\, 1 \le i_1 \le N_1 \,\}, \quad V_{i_1} = \{\, 1 \le i_2 \le N_2 \,\}, \quad S_0 = \{\, 1 \le \alpha_1 \le T_1 \,\}, \quad S_{\alpha_1} = \{\, 1 \le \alpha_2 \le T_2 \,\}.$$
The kernel can be rewritten according to the new multiscale index
Ω = Ω i 1 α 1 0 , 1 : i 1 V 0 , α 1 S 0
where we introduced the sub-kernels
$$\Omega_{i_1}^{\alpha_1} := \{\, \varphi_{i_1 i_2}^{\alpha_1 \alpha_2} \in \{0, 1\} : i_2 \in V_{i_1},\ \alpha_2 \in S_{\alpha_1} \,\}$$
the field is renormalized according to a map such that
$$\hat{\varphi}_{i_1}^{\alpha_1} := R\left( \Omega_{i_1}^{\alpha_1} \right) \in \{0, 1\}$$
to regain some binary variables, i.e., φ ^ i 1 α 1 will be one if within the cell V i 1 S α 1 the condition set by the renormalization map is verified, and zero otherwise. By construction, the relationship between the two variables is such that
Ω i 1 α 1 = φ ^ i 1 α 1 Ω i 1 α 1
We can define the renormalized kernel as follows:
$$\hat{\Omega} := \{\, \hat{\varphi}_{i_1}^{\alpha_1} \in \{0, 1\} : i_1 \in V_0,\ \alpha_1 \in S_0 \,\}$$
Since the action structure is symmetrical between space and time, we can also perform the calculations on the potential term alone. We apply the multiscale index
i V j V A i j α S φ i α φ j α = i 1 V 0 j 1 V 0 i 2 V i 1 j 2 V j 1 A i 1 i 2 j 1 j 2 α 1 S 0 α 2 S α 1 φ i 1 i 2 α 1 α 2 φ j 1 j 2 α 1 α 2
and then the renormalization map
$$\sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha = \sum_{i_1 \in V_0} \sum_{j_1 \in V_0} \sum_{\alpha_1 \in S_0} \hat{A}_{i_1 j_1}^{\alpha_1}(\Omega)\, \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{j_1}^{\alpha_1}$$
For example, for a bin renormalization
$$\hat{\varphi}_{i_1}^{\alpha_1} = \mathbb{I}\left( \Omega_{i_1}^{\alpha_1} \neq 0 \right)$$
the effective interaction will be given by
$$\hat{A}_{i_1 j_1}^{\alpha_1}(\Omega) := \sum_{i_2 \in V_{i_1}} \sum_{j_2 \in V_{j_1}} A_{i_1 i_2\, j_1 j_2} \sum_{\alpha_2 \in S_{\alpha_1}} \varphi_{i_1 i_2}^{\alpha_1 \alpha_2} \varphi_{j_1 j_2}^{\alpha_1 \alpha_2}$$
while for a renormalization by decimation (Kadanoff renormalization) [101,102,103],
$$\hat{\varphi}_{i_1}^{\alpha_1} = \varphi_{i_1 1}^{\alpha_1 1}$$
we will have that
A ^ i 1 j 1 α 1 ( Ω ) : = i 2 V i 1 { 1 } j 2 V j 1 { 1 } A i 1 i 2 j 1 j 2 α 2 S α 1 { 1 } φ i 1 i 2 α 1 α 2 φ j 1 j 2 α 1 α 2
We separate the stationary term (if any)
$$\hat{A}_{i_1 j_1}^{\alpha_1}(\Omega) := \hat{A}_{i_1 j_1} + \delta \hat{A}_{i_1 j_1}^{\alpha_1}(\Omega)$$
The stationary term corresponds to the renormalized coupling matrix; we can thus rewrite the action potential term by separating the renormalized part from the fluctuation
$$\sum_{i \in V} \sum_{j \in V} A_{ij} \sum_{\alpha \in S} \varphi_i^\alpha \varphi_j^\alpha = \sum_{i_1 \in V_0} \sum_{j_1 \in V_0} \hat{A}_{i_1 j_1} \sum_{\alpha_1 \in S_0} \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{j_1}^{\alpha_1} + \sum_{i_1 \in V_0} \sum_{j_1 \in V_0} \sum_{\alpha_1 \in S_0} \delta \hat{A}_{i_1 j_1}^{\alpha_1}(\Omega)\, \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{j_1}^{\alpha_1}$$
Doing the same with the kinetic term
B ^ i 1 α 1 β 1 ( Ω ) : = α 2 S α 1 β 2 S α 1 B α 1 α 2 β 1 β 2 i 2 V i 1 φ i 1 i 2 α 1 α 2 φ i 1 i 2 β 1 β 2
and separating the uniform term
$$\hat{B}_{i_1}^{\alpha_1 \beta_1}(\Omega) := \hat{B}^{\alpha_1 \beta_1} + \delta \hat{B}_{i_1}^{\alpha_1 \beta_1}(\Omega)$$
the treatment is completely symmetrical, leading to
$$\sum_{\alpha \in S} \sum_{\beta \in S} B^{\alpha\beta} \sum_{i \in V} \varphi_i^\alpha \varphi_i^\beta = \sum_{\alpha_1 \in S_0} \sum_{\beta_1 \in S_0} \hat{B}^{\alpha_1 \beta_1} \sum_{i_1 \in V_0} \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{i_1}^{\beta_1} + \sum_{\alpha_1 \in S_0} \sum_{\beta_1 \in S_0} \sum_{i_1 \in V_0} \delta \hat{B}_{i_1}^{\alpha_1 \beta_1}(\Omega)\, \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{i_1}^{\beta_1}$$
The action in the renormalized variables will therefore have a perturbation
$$G(A, B) = \sum_{\Omega \in \{0,1\}^{V \otimes S}} \exp\left( -\lambda\, A(\Omega|A, B) \right) = \sum_{\hat{\Omega} \in \{0,1\}^{V_0 \otimes S_0}} \exp\left[ -\lambda\, A(\hat{\Omega}|\hat{A}, \hat{B}) - \lambda\, \delta A(\hat{\Omega}|A, B) \right] = G(\hat{A}, \hat{B})\, \left\langle \exp\left[ -\lambda\, \delta A(\hat{\Omega}|A, B) \right] \right\rangle_{\hat{\eta}}$$
where η ^ is the GS of the renormalized action. In general, this expression depends on the details of the couplings within the renormalized cell. The perturbation of the action is formally defined as
δ A ( Ω ^ | A , B ) : = 1 λ log Ω K Ω ^ exp [ λ Γ ( Ω , Ω ^ | A , B ) ]
where the sum is on those Ω that if renormalized are equal to Ω ^ , i.e.,
$$K(\hat{\Omega}) := \{\, \Omega \in \{0, 1\}^{V \otimes S} : R(\Omega) = \hat{\Omega} \,\}$$
and the function Γ is defined as follows:
$$\Gamma(\Omega, \hat{\Omega}|A, B) := \sum_{i_1 \in V_0} \sum_{j_1 \in V_0} \sum_{\alpha_1 \in S_0} \delta \hat{A}_{i_1 j_1}^{\alpha_1}(\Omega)\, \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{j_1}^{\alpha_1} + \sum_{\alpha_1 \in S_0} \sum_{\beta_1 \in S_0} \sum_{i_1 \in V_0} \delta \hat{B}_{i_1}^{\alpha_1 \beta_1}(\Omega)\, \hat{\varphi}_{i_1}^{\alpha_1} \hat{\varphi}_{i_1}^{\beta_1}$$
Thus, renormalization operations can also change the structure of the action. For example, consider the potential part: we can approximate the renormalized coupling fluctuations with a stationary Random Energy Model (REM universality, see Arous and Kuptsov [150] or Section 6 of Franchini 2023 [56,58] for a practical example in kernel language).
δ A ^ i 1 j 1 α 1 ( Ω ) J i 1 j 1 ( Ω ) Δ i 1 j 1 .
The partition function can be approximated as follows:
Ω K Ω ^ exp λ i 1 V 0 j 1 V 0 α 1 S 0 δ A ^ i 1 j 1 α 1 ( Ω ) φ ^ i 1 α 1 φ ^ j 1 α 1 Ω K Ω ^ exp λ i 1 V 0 j 1 V 0 J i 1 j 1 ( Ω ) Δ i 1 j 1 α 1 S 0 φ ^ i 1 α 1 φ ^ j 1 α 1 = = exp λ ^ 2 T 1 2 i 1 V 0 j 1 V 0 Δ i 1 j 1 ϕ ^ i 1 j 1 2
where in the second row we applied the PPP-REM [56] average and λ ^ is the renormalized temperature. In essence, this type of mean field approximation introduces a linear term in the renormalization map
A i j A ^ i 1 j 1 λ ^ T 1 Δ i 1 j 1 ϕ ^ i 1 j 1 +
which results in a quadratic term added to the action
i V j V A i j ϕ i j i 1 V 0 j 1 V 0 A ^ i 1 j 1 ϕ ^ i 1 j 1 λ ^ T 1 i 1 V 0 j 1 V 0 Δ i 1 j 1 ϕ ^ i 1 j 1 2 +
Then, in a first approximation we could ignore the correction terms in the PMd experiments with Utah96, due to the small magnitude of the correlations. Notice that this could also explain other deviations from the max entropy principle, like those shown in Figure 2 of Meshulam et al. [65]. More accurate renormalization schemes based on multi-scale analysis can be computed following the methods of Franchini 2023 [56,57,58] and many other methods as well [52,101,102,103,107], although in general the exact shape of the perturbations depends on the details of the system and on the instrumental limits and systematics; to push further it is therefore necessary to introduce more specific information about the couplings and the kinetic properties of the system, both of the neocortex and of the sensor.

5. Experimental Methods

5.1. Cortical Minitubes

So far, the most accepted theory for the anatomical and functional organization of the retina is the columnar model (see Figure 3) [88,91,92], and similar assemblies of neurons are observable through the whole neocortex, at least at the anatomical level. However, since in the retina there is also a well-established corresponding functional organization that has still not been shown for the whole neocortex, in the following we will use the name “minitubes” to indicate only the anatomical structures that are seen from a histological inspection (Figure 5). Then, let L³ be a cubic lattice and let (x, y, z) ∈ L³ be such that z represents, for example, the average height from the surface of the cortex at which a given cortical layer is located.

Decimated Kernel

Let xy be the position of the center of gravity of the cortical minitube section in the horizontal plane. To model the minitube layers we will define a partition of the space ℝ³ into volumes of equal size according to the lattice cells; for simplicity, we will approximate the cortical minitubes with square-based minitubes. Notice that the present charting of the neocortex does not account for the neural connections, which may have any topology and are encoded in the interaction matrix A. The reason for using a Euclidean reference frame is to allow for comparisons with existing histologies and fMRIs, as well as other data [151]. Also, it may highlight effects due to possible extracellular fields and currents [152], whose correlations may follow a Euclidean topology. The layers of the minitubes are thus represented by the lattice cells
$$U_{xyz} := U_x \otimes U_y \otimes U_z \subset \mathbb{R}^3$$
Now, calling v_i ∈ ℝ³ the position of the nucleus of the i-th neuron, we can group the neurons by the volume in which they are located
$$V_{xyz} := \{\, i \in V : v_i \in U_{xyz} \,\}$$
each of these groups of neurons will have its own associated kernel
$$\Omega_{xyz} := \{\, \varphi_i^\alpha \in \{0, 1\} : i \in V_{xyz},\ \alpha \in S \,\}.$$
At this point, one could further group the neurons, first by index z, so as to form the cortical minitubes. The vertices belonging to the minitube are
$$V_{xy} := \bigcup_{z \in \mathbb{L}} V_{xyz}$$
that is the set of neurons that constitutes the minitube at position x y . The kernel is
$$\Omega_{xy} := \{\, \Omega_{xyz} \in \{0, 1\}^{V_{xyz} \otimes S} : z \in \mathbb{L} \,\}$$
and describes the activity of the single cortical minitube in x y . Some interfaces, such as Neuropixel or deep multielectrode shanks, allow direct observations of this activity. The minitubes are in the end grouped again to form the cortex structures and areas,
$$V := \bigcup_{xy \in \mathbb{L}^2} V_{xy}$$
and the original kernel can thus be expressed in terms of the minitubes:
$$\Omega = \{\, \Omega_{xy}^\alpha \in \{0, 1\}^{V_{xy}} : xy \in \mathbb{L}^2,\ \alpha \in S \,\},$$
so that it represents a two-dimensional lattice of cortical minitubes [88,91,92,153,154,155], a system in 2 + 1 + 1 dimensions. For the above, we can consider the experimental kernel for a specific tubular layer
$$\Omega_z := \{\, \Omega_{xyz}^\alpha \in \{0, 1\}^{V_{xyz}} : xy \in \mathbb{L}^2,\ \alpha \in S \,\},$$
where, again, (x, y, z) ∈ L³ are the spatial coordinates in a cubic lattice such that z represents the average height from the surface of the cortex at which a given layer is located, and xy is the position of the minitube section in the horizontal plane. The points are organized in a planar sub-lattice xy ∈ L² (of the observed cortical layer z) whose step is much greater than the diameter of the individual minitube, so that the activities recorded at the various points belong with high probability to different and well-spaced minitubes. At this point, to model the spacing between the probing points, we apply a renormalization by decimation on Ω and obtain the decimated activity kernel
$$\hat{\Omega} := \{\, \hat{\varphi}_{xy}^\alpha \in \{0, 1\} : xy \in \mathbb{L}^2,\ \alpha \in S \,\}, \qquad \hat{\varphi}_{xy}^\alpha := \mathbb{I}\left( \Omega_{xyz}^\alpha \neq 0 \right).$$
This is the electrode kernel of Equation (25) shown in the main text. This kernel is intended to model approximately the sensor recording, net of systematic errors and approximations. According to our arguments it should be comparable with a renormalized theory. Notice that this renormalization happens only in space, and hence the information coming from the digitalization of neuronal signals is largely preserved (as long as the signals inside a channel, or multi-units, do not overlap too much in time). Ω̂ leads to the experimental hypermatrix of Figure 1. Clearly, determining the exact effective theory that can describe the dynamics of the columns and their excitations will require careful analysis of the body of knowledge about the structure of the neocortex and the interface itself, but these manipulations demonstrate that a treatment in terms of field theory is possible, at least in this formalism. Moreover, given the particular architecture of the cortex, it is possible that the topology of such a theory is essentially either mean-field or two-dimensional, with layers of cortex behaving as interacting fields, just as in elementary particle theory. This could greatly facilitate the analytical construction of effective theories.

5.2. Neural Recordings with Utah 96

5.2.1. Subjects

Two male rhesus macaque monkeys (Macaca mulatta, Monkeys P and C), weighing 9 and 9.5 kg, respectively, were employed for the task shown as the case study. Animal care, housing, surgical procedures and experiments conformed to European (Directive 86/609/ECC and 2010/63/UE) and Italian (D.L. 116/92 and D.L. 26/2014) laws and were approved by the Italian Ministry of Health. Monkeys were pair-housed with cage enrichment. They were fed daily with standard primate chow that was supplemented with nuts and fresh fruits if necessary. During recording days, the monkeys received their daily water supply during the experiments.

5.2.2. Apparatus and Task

The monkeys were seated in front of a black isoluminant background (<0.1 cd/m2) of a 17-inch touchscreen monitor (LCD, 800 × 600 resolution), inside a darkened, acoustic-insulated room. A non-commercial software package, CORTEX (http://www.nimh.gov.it, accessed on 1 January 2010), was used to control the presentation of the stimuli and the behavioural responses. Figure 1 and Figure 6 panel C show the scheme of the task: a Go-signal reaching task. Each trial started with the appearance of a central target (CT) (red circle, diameter 1.9 cm). The monkeys had to reach and hold the CT. After a variable holding time (400–900 ms, 100 ms increments) a peripheral target (PT) (red circle, diameter 1.9 cm) appeared randomly in one of two possible locations (right/left, D1/D2) and the CT disappeared (Go signal). After the Go signal the subjects had to reach and hold the PT for a variable time (400–800 ms, 100 ms increments) to receive juice. The time between the presentation of the Go signal and the onset of the hand movement (M_on) is the reaction time (RT). White circles around the central target were used as feedback for the animals to indicate the touch.

5.2.3. Extraction and Processing of Neuronal Data

A multielectrode array (Blackrock Microsystems, Salt Lake City) with 96 electrodes (Utah 96, spacing 0.4 mm) was surgically implanted in the left dorsal premotor cortex (PMd; the references used after opening the dura were the arcuate sulcus and pre-central dimple) to acquire unfiltered electric field potentials (UFP; i.e., the raw signal) sampled at 24.4 kHz (Tucker Davis Technologies, Alachua, FL, USA). As described in previous work from our group [35], we extracted single neurons activities from the raw signal by employing the spike sorting toolbox KiloSort3 [156] with the following parameters. Thresholds: [9 9] (thresholds for template-matching on spike detection); Lambda: 10 (bias factor of the individual spike amplitude towards the cluster mean); Area Under the Curve split: 0.9 (threshold for cluster splitting); and number of blocks: 5 (amount of blocks channels are divided into for estimating probe drift). The output was manually curated in Phy (v2.0; 17) to merge clusters that were mistakenly separated by the automated sorter. From this procedure we obtained a binary spike raster with a time resolution of 1ms (1 for a spike, 0 for no spikes) for each single trial of the experiment. Each single-trial raster was then put into the form of the kernel Ω ^ of Equation (212).

5.2.4. Neural Dynamics Underlying Movement Generation in PMd

We chose this task as a use-case for its simplicity as it involves only two experimental conditions. In this way, the results obtained in our LFT context are directly comparable with those obtained previously using common approaches that rely on covariance analysis [35,99,157,158,159,160]. We extracted the kernel Ω ^ in relation to the movement onset (M_on), considering an epoch of 1s before and after the event. By doing so, the distributions of the behavioral events of the task (the Go signal and M_on) are included (see Figure 1). It has been demonstrated that, during the time preceding the movement, PMd neurons express strong modulations associated with movement control [35,86,108,158,161,162]. The hypermatrices computed for the two experimental conditions are shown in Figure 7, Figure 8, Figure 9 and Figure 10. The JS matrices exhibit striking features, and by comparing them across movement directions, one can retrieve most of the hallmarks of PMd neural dynamics. The first is the strong increase in synchronous activity peaking within the 200 ms interval preceding the M_on (black markers in Figure 1) that correspond to the functional state of the system linked to the incoming movement generation. Indeed, the motor planning of actions in PMd is recognized to be encoded at the population level in the form of synchronization patterns that exhibit a strong modulation around 200 ms before the onset of movement [35,77,78,82,86,108,109,110,157,158,159,161,162,163,164,165,166,167,168,169,170,171,172,173,173]. The second is the specificity of PMd neurons for the direction of movement, which in the reported task could happen towards the left or right (D1/D2). In Figure 1 and Figure 8 (ED), this is evidenced by the more intense motifs of synchrony for one direction (D2) with respect to the other (D1). They emerge at the end of the motor plan maturation (∼ within 200 ms before M_on), continuing for at least 200 ms afterwards. In the ED section we report examples from a second subject and, separately, the components of the hypermatrix with additional details (e.g., the difference |D1-D2| for both Π ^ and Φ ^ .) Significantly, the dynamic contributions detectable from the JS matrix can be easily mapped in the spatial domain thanks to the hypermatrix arrangement, which emphasize the correspondences between the JS matrix, the kernels Ω ^ and the spatial and temporal averages. For example, from the kernels in Figure 1 and the zoom of Figure 8, the firing patterns that elicit a specific configuration of dynamical synchrony can be identified. This reveals that the temporal correlations during the motor plan maturation are caused by a specific firing sequence in the kernel Ω ^ (for both D1 and D2). Hence, we can infer that the maturation of the motor plan corresponds to different populations of neurons discharging with variable timings and intensities for D1 compared to D2. The JS matrix also demonstrates that the direction-specific correlations coincide with more intense firing for D2 compared to D1. In addition to direction-specific differences, relevant similarities are also appreciable. The cross-emerging at the center of the JS matrix represents synchronization among neural ensembles that extends throughout the duration of the trial for both D1 and D2. Again, the neural assemblies responsible can be easily identified from the kernels Ω ^ . Future work will be needed to clarify more details. The spatial correlations are instead recoverable from the matrix Φ ^ . 
In our example, it can be noted how the combinations underlying the motor plan are preserved for both directions (same correlation values in Φ ^ for both directions), while the direction-specific ones change.

5.2.5. Comparison with Other Methods

Thus, with the hypermatrix representation, neural dynamics can be efficiently decomposed into their spatial and temporal contributions, and their roles in the studied task are easily mapped. From these remarks, we understand the striking traits of the hypermatrix: its completeness despite its simplicity. It conveys fundamental information about the system in a compact representation without the need for complex numerical artifice. This is a substantial difference with other approaches frequently used to analyze neural activity (e.g., PCA or machine learning methods among the most popular [24,35,77,78,79,80,81,82]). Although these methods have provided valuable insights, none of them offer a picture encompassing the temporal and the spatial attributes of the system at the same time. Moreover, the connection these methods make between recorded activity and circuit mechanisms is elusive and hardly generalizable. In the case of PCA, for example, the temporal and spatial properties can be linked together only after a non-trivial, and most of the time arbitrary, sequence of numerical steps. Among others, these include a dimensionality reduction, i.e., choosing a number of PCs and the subsequent projections onto the reduced space; this requires computing the eigenvectors of the covariance matrix. In contrast, our theory only requires simple scalar products of the experimental rasters, eliminating the need for dimensionality reduction. In addition, the interpretations that conventional methods offer about the intrinsic nature of neural processes are strongly dependent on the chosen analysis pipeline and are far from being derived from the universal principles of a physical theory. This significantly impacts, for example, the definition that these methods can provide for the energy of the system, which remains vague and unformalized (such as in the case of the manifold hypothesis [174] and the widespread PCA-based energy landscapes [79,174,175,176]. We have instead shown that the kernel, its transpose and the corresponding scalar products give an accurate and physical-based description of the energy functional of the system. Most importantly, our approach entails a formal communication between physics and neuroscience using as a language the governing equations of elementary particles. This allows for the measurement of neural interactions through physically grounded observables and their interpretation in terms of well-known laws. In our LFT framework, temporal and spatial correlations have a precise meaning, representing, respectively, the kinetic and potential energy terms of the recorded neurons. As detailed in Section 4, our energy functional is obtained through the parameters of the theories A, B and I. See Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17.

5.2.6. Test of the (Renormalized) Neural LFT

Generally speaking, the first requirement of a theory is that it should be possible to estimate the variables that describe it from experimental data (more formally, inverting the model). In our case, the set A, B and I may be recovered by inverting the hypermatrix. To do so, it is necessary to resort to a class of well-defined methods that go by the name of inverse Ising problems [142,177]. The same class of methods have been used by Tkacik et al. [63] to estimate the couplings of the Ising Hamiltonian with which they modeled the salamander retina recordings. This could also apply to the use-case here discussed, but at the price of a remarkable computational burden, mostly due to the very high rank of the JS matrix. To this respect, a viable way to lighten it could be to properly bin (renormalize) the process according to a larger clock time τ . This would yield a JS matrix of a smaller rank without losing too much information. Following these considerations, we applied a bin renormalization to the kernel on a time step of 10 ms (at the level of individual trial), reducing by a factor of 100 the number of kernel cells to deal with. From Figure 18, Figure 19, Figure 20 and Figure 21 it is evident that the kernel and the patterns in the covariance matrices are almost unaffected by the chosen renormalization, at least for this type of behavioral task. We were able to compute the grand covariance of the renormalized kernel: the distributions of the matrix entries are shown in Figure 21. We see that the red distribution follows the expected normal product peak centered on zero due to the product of independent Gaussian fluctuations, which is also in the blue and green distributions. But, notice that the most correlated pairs deviating from the normal product distribution are only in blue and green. This shows that if we ignore correlation below a certain threshold (which in this case is around 5%) then we can approximate the activity with the simplified “non-relativistic” action proposed in this paper. Notice that the deviations contributing to the overlap matrix are still much smaller than those contributing to the correlation matrix and should produce only small deviations from the max entropy model of Schneidman et al. [62].
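The bin renormalization used here is simply an OR over non-overlapping blocks of clock times; a sketch for a 1 ms single-trial raster coarse-grained to 10 ms bins is given below (the handling of the incomplete final bin, the toy raster and the function name are illustrative assumptions).

```python
import numpy as np

def bin_renormalize(omega, t2):
    """Bin renormalization of a binary kernel in time only:
    the renormalized cell is 1 if at least one spike falls inside a
    block of t2 clock steps (e.g., t2 = 10 for 10 ms bins on a 1 ms raster).
    """
    omega = np.asarray(omega)
    N, T = omega.shape
    T1 = T // t2                                    # number of renormalized bins
    blocks = omega[:, :T1 * t2].reshape(N, T1, t2)  # drop the incomplete tail bin
    return (blocks.sum(axis=2) > 0).astype(np.int8)

# example: a 1 ms single-trial raster renormalized to 10 ms bins
rng = np.random.default_rng(3)
raster = (rng.random((96, 2000)) < 0.02).astype(np.int8)
raster_10ms = bin_renormalize(raster, 10)           # shape (96, 200)
```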

5.3. Perspectives

5.3.1. Microscopic Models

It has been proposed that a movement may be carried out by the suppression of some steady signal that ends the holding or “non-movement” state and triggers the movement [35]. This idea is in line with the shared view whereby a command initiated in other regions is executed locally in the PMd, which is part of a larger network subserving motor control based on frontal, parietal, subcortical, cerebellar and spinal structures. According to our formalism, we can state that the part of the brain deciding the movement sends the command to the PMd in the form of a spatially structured external field that is stationary throughout the execution of the computation. In an analogy with magnetic systems, such an external field configures the phase toward which the population of neurons will try to balance. It can be hypothesized that the neural computation underlying the so-called motor plan is performed in convergence to the system’s equilibrium: at the time α at which the external input changes, the system converges to the phase (valley) selected by the new input. This can be modeled with the magnetization profile of a one-dimensional Ising chain subject to some external field. If the field is suddenly switched on at time α_0, the Lagrangian contains a one-dimensional Ising kinetic term in α_0: this is to force the stationary dynamics with an average interspike period τ that is deduced from a time covariance matrix δQ_η (see the Figures in Section 5). This simple interface model in one dimension was introduced and solved by Robert and Widom in [178], adapting methods from Percus, Tejero and others [179,180,181]. One can compare the shape of the transient field with that predicted by [178]. This mechanism also sets the typical relaxation timescale of the process. In this scenario it would be possible to construct analytically solvable models with a locally stationary external input, like the aforementioned model, which could faithfully represent local circuitry. For example, one could formally model the circuit sketched in Pani et al. [35] and check it against experimental data. It would also be possible to directly apply the “layer representation” (a repeated application of the Bayes rule) introduced in Franchini 2021 [57,58] and Franchini 2023 [56] to compute the partition function associated with the action of various deep (layered/hierarchical) models, like the synfire chains of Moshe Abeles and colleagues.

5.3.2. Movement and the Glassy Phase

Like the salamander retina, the PMd (or other cortices) might also be structurally capable of exhibiting glassy phases; however, it is not necessarily the case that these are physiologically within the “computation” of movement, nor that they play a central role in sending the system off balance (at least until consciousness is in play). For example, unlike the retina, which is a structure strictly devoted to “inputs” to be passed to the central nervous system (and which in the case of [63] is also detached from it), we recorded from a system that should mainly process and produce an “output” to the muscles or other areas. If the neural system responsible for movement were in a glassy phase (not going to equilibrium quickly), it might be unable to consistently convey motor commands. As a result, the executed movements may deviate from the intended actions of the animal, leading to inaccuracies such as missing the targets or unintentional actions. Moreover, the time covariance matrix (see Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21) supports the idea that the movement is not glassy: the overlap covariance matrix does not contain the movement (except in the refractory profile) and the firing patterns of the neurons are consistent with a noise model of the kind considered in [58]. Notice that the refractory period induces a structure in the overlap covariance matrix that is approximately stationary, and that the approximate symmetry of the overlap between trials shows that replica symmetry is only slightly broken. Following the ideas of [18], which see learning as a selection of possible states of the system, we would expect more “glassy” behavior during the initial stages of training, when the monkey has not yet entirely learned the task requirements. This could be studied by calculating, for example, the overlap between kernels of sessions separated by large time intervals, but the known degradation problems of Utah interfaces could mask fine-grained differences. Also, it is possible that glassy activity may arise in conditions similar to those considered for the ex vivo salamander retina. For example, it would be of extreme interest to study the exceptionally rare recording of a dying brain published in [182], which is from the same monkey studied in [35].

5.3.3. Computing Physical LFTs with Brain Organoids

In addition to the orthodox purpose of reading and interpreting activity of natural neural networks in vivo, even more interesting applications have been made possible from recent advances in growing, shaping and interfacing biological neural tissue. The most striking example is perhaps the digital interfacing of brain organoids [183,184], a method that has already reached a fairly good technical level as demonstrated in T. Sharf et al., 2022 [184]. In short, brain organoid modeling is an advanced technique for studying brain development, physiology, function and disease occurrence (see Zheng et al., 2022 [183] review for an interesting overview). The experimental possibilities in this regard would certainly be of far reach, less expensive on both ethical and material sides and would also provide a safer guide for studying animal and human brains in vivo. There are now concrete possibilities of building hybrid circuits by connecting artificial neural networks and brain organoids [183] through currently available interfaces that could then be trained in the binary LFT language. Also, natural neural networks have been shown to work on a more efficient energetic basis and to learn from fewer examples. For example, shaping natural neural networks into useful neural circuitry [183] may allow us to realize in practice the ideas described in [51] and use natural neurons to run physical LFT simulations.

Author Contributions

Conceptualization, G.B. and S.F. (Simone Franchini); Methodology, G.B., S.F. (Simone Franchini) and L.P.; Validation, G.B., S.F. (Simone Franchini) and L.P.; Investigation, G.B., S.F. (Simone Franchini), L.P., R.B., S.R., E.B., P.P. and S.F. (Stefano Ferraina); Data acquisition and resources, E.B., P.P. and S.F. (Stefano Ferraina); Data curation, G.B., S.R., P.P. and S.F. (Stefano Ferraina); Writing, (original draft), G.B., S.F. (Simone Franchini); Review and editing, G.B., S.F. (Simone Franchini), L.P., S.R., E.B., P.P. and S.F. (Stefano Ferraina); Data Visualization, G.B. and S.R.; Funding acquisition, P.P. and S.F. (Stefano Ferraina). All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Sapienza University of Rome grant number PH11715C823A9528 (to Stefano Ferraina) and RM12117A8AD27DB1 (to Pierpaolo Pani). We acknowledge a contribution from the Italian National Recovery and Resilience Plan (NRRP), M4C2, funded by the European Union–NextGenerationEU (Project IR0000011, CUP B51E22000150006, “EBRAINS–Italy” (to Stefano Ferraina)).

Institutional Review Board Statement

Animal care, housing, surgical procedures and experiments conformed to European (Directive 86/609/ECC and 2010/63/UE) and Italian (D.L. 116/92 and D.L. 26/2014) laws and were approved by the Italian Ministry of Health.

Data Availability Statement

Data are available from the corresponding author(s) upon reasonable request.

Acknowledgments

We thank Cheng Shi (University of Basel), Alessandro Treves (SISSA Trieste), Giorgio Parisi (Accademia dei Lincei) and Karl Friston (University College London) for interesting discussions.

Conflicts of Interest

The authors report no conflicts of interest.

References

  1. Angotzi, G.N.; Boi, F.; Lecomte, A.; Miele, E.; Malerba, M.; Zucca, S.; Casile, A.; Berdondini, L. SiNAPS: An implantable active pixel sensor CMOS-probe for simultaneous large-scale neural recordings. Biosens. Bioelectron. 2019, 126, 355–364. [Google Scholar] [CrossRef] [PubMed]
  2. Steinmetz, N.A.; Aydin, C.; Lebedeva, A.; Okun, M.; Pachitariu, M.; Bauza, M.; Beau, M.; Bhagat, J.; Böhm, C.; Broux, M.; et al. Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. Science 2021, 372, eabf4588. [Google Scholar] [CrossRef] [PubMed]
  3. Bullard, A. Feasibility of Using the Utah Array for Long-Term Fully Implantable Neuroprosthesis Systems; Technical Report; University of Michigan: Ann Arbor, MI, USA, 2019. [Google Scholar]
  4. Leber, M.; Bhandari, R.; Mize, J.; Warren, D.J.; Shandhi, M.M.; Solzbacher, F.; Negi, S. Long term performance of porous platinum coated neural electrodes. Biomed. Microdevices 2017, 19, 62. [Google Scholar] [CrossRef] [PubMed]
  5. Ye, Z.; Shelton, A.M.; Shaker, J.R.; Boussard, J.; Colonell, J.; Birman, D.; Manavi, S.; Chen, S.; Windolf, C.; Hurwitz, C.; et al. Ultra-high density electrodes improve detection, yield, and cell type identification in neuronal recordings. bioRxiv 2024, 2023.08.23.554527. [Google Scholar] [CrossRef]
  6. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef]
  7. Beurle, R.L. Properties of a mass of cells capable of regenerating pulses. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 1956, 240, 55–94. [Google Scholar] [CrossRef]
  8. Freeman, W.J. Waves, Pulses, and the Theory of Neural Masses. Prog. Theor. Biol. 1972, 2, 1–10. [Google Scholar]
  9. Amari, S.I. Characteristics of Random Nets of Analog Neuron-Like Elements. IEEE Trans. Syst. Man Cybern. 1972, 2, 643–657. [Google Scholar] [CrossRef]
  10. Wilson, H.R.; Cowan, J.D. Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophys. J. 1972, 12, 1–24. [Google Scholar] [CrossRef]
  11. Wilson, H.R.; Cowan, J.D. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13, 55–80. [Google Scholar] [CrossRef]
  12. Fischer, B. A neuron field theory: Mathematical approaches to the problem of large numbers of interacting nerve cells. Bull. Math. Biol. 1973, 35, 345–357. [Google Scholar] [CrossRef]
  13. Lopes da Silva, F.H.; Hoeks, A.; Smits, H.; Zetterberg, L.H. Model of brain rhythmic activity. The alpha-rhythm of the thalamus. Kybernetik 1974, 15, 27–37. [Google Scholar] [CrossRef] [PubMed]
  14. Amari, S.I. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27, 77–87. [Google Scholar] [CrossRef] [PubMed]
  15. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554. [Google Scholar] [CrossRef]
  16. Amit, D.J.; Gutfreund, H.; Sompolinsky, H. Storing Infinite Numbers of Patterns in a Spin-Glass Model of Neural Networks. Phys. Rev. Lett. 1985, 55, 1530. [Google Scholar] [CrossRef] [PubMed]
  17. Amit, D.J.; Brunel, N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb. Cortex 1997, 7, 237–252. [Google Scholar] [CrossRef] [PubMed]
  18. Toulouse, G.; Dehaene, S.; Changeux, J.P. Spin Glass Model of Learning by Selection (Darwinism/Categorization/Hebb Synapse/Ultrametricity/Frustration). Proc. Natl. Acad. Sci. USA 1986, 83, 1695–1698. [Google Scholar] [CrossRef] [PubMed]
  19. Treves, A. Are spin-glass effects relevant to understanding realistic auto-associative networks? J. Phys. A Math. Gen. 1991, 24, 2645. [Google Scholar] [CrossRef]
  20. Abeles, M.; Bergman, H.; Gat, I.; Meilijson, I.; Seidemann, E.; Tishby, N.; Vaadia, E. Cortical activity flips among quasi-stationary states. Proc. Natl. Acad. Sci. USA 1995, 92, 8616–8620. [Google Scholar] [CrossRef]
  21. Buice, M.A.; Cowan, J.D. Field-theoretic approach to fluctuation effects in neural networks. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2007, 75, 051919. [Google Scholar] [CrossRef]
  22. Hermann, G.; Touboul, J. Heterogeneous connections induce oscillations in large-scale networks. Phys. Rev. Lett. 2012, 109, 018702. [Google Scholar] [CrossRef] [PubMed]
  23. Buice, M.A.; Chow, C.C. Beyond mean field theory: Statistical field theory for neural networks. J. Stat. Mech. Theory Exp. 2013, 2013, P03003. [Google Scholar] [CrossRef] [PubMed]
  24. Pandarinath, C.; O’Shea, D.J.; Collins, J.; Jozefowicz, R.; Stavisky, S.D.; Kao, J.C.; Trautmann, E.M.; Kaufman, M.T.; Ryu, S.I.; Hochberg, L.R.; et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nat. Methods 2018, 15, 805–815. [Google Scholar] [CrossRef]
  25. Barack, D.L.; Krakauer, J.W. Two views on the cognitive brain. Nat. Rev. Neurosci. 2021, 22, 359–371. [Google Scholar] [CrossRef] [PubMed]
  26. Ahmadi, N.; Constandinou, T.G.; Bouganis, C.S. Robust and accurate decoding of hand kinematics from entire spiking activity using deep learning. J. Neural Eng. 2021, 18, 026011. [Google Scholar] [CrossRef] [PubMed]
  27. Pang, J.C.; Aquino, K.M.; Oldehinkel, M.; Robinson, P.A.; Fulcher, B.D.; Breakspear, M.; Fornito, A. Geometric constraints on human brain function. Nature 2023, 618, 566–574. [Google Scholar] [CrossRef] [PubMed]
  28. Faskowitz, J.; Moyer, D.; Handwerker, D.A.; Gonzalez-Castillo, J.; Bandettini, P.A.; Jbabdi, S.; Betzel, R. Commentary on Pang et al. (2023) Nature. bioRxiv 2023, 2023.07.20.549785. [Google Scholar] [CrossRef]
  29. Gardner, R.J.; Hermansen, E.; Pachitariu, M.; Burak, Y.; Baas, N.A.; Dunn, B.A.; Moser, M.B.; Moser, E.I. Toroidal topology of population activity in grid cells. Nature 2022, 602, 123–128. [Google Scholar] [CrossRef] [PubMed]
  30. Shi, Y.L.; Zeraati, R.; Levina, A.; Engel, T.A. Spatial and temporal correlations in neural networks with structured connectivity. Phys. Rev. Res. 2023, 5, 013005. [Google Scholar] [CrossRef]
  31. Genkin, M.; Hughes, O.; Engel, T.A. Learning non-stationary Langevin dynamics from stochastic observations of latent trajectories. Nat. Commun. 2021, 12, 5986. [Google Scholar] [CrossRef]
  32. Pinotsis, D.A.; Miller, E.K. In vivo ephaptic coupling allows memory network formation. Cereb. Cortex 2023. [Google Scholar] [CrossRef] [PubMed]
  33. Wei, Z.; Lin, B.J.; Chen, T.W.; Daie, K.; Svoboda, K.; Druckmann, S. A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology. PLoS Comput. Biol. 2020, 16, e1008198. [Google Scholar] [CrossRef] [PubMed]
  34. Chandrasekaran, S.; Fifer, M.; Bickel, S.; Osborn, L.; Herrero, J.; Christie, B.; Xu, J.; Murphy, R.K.J.; Singh, S.; Glasser, M.F.; et al. Historical perspectives, challenges, and future directions of implantable brain-computer interfaces for sensorimotor applications. Bioelectron. Med. 2021, 7, 14. [Google Scholar] [CrossRef] [PubMed]
  35. Pani, P.; Giamundo, M.; Giarrocco, F.; Mione, V.; Fontana, R.; Brunamonti, E.; Mattia, M.; Ferraina, S. Neuronal population dynamics during motor plan cancellation in nonhuman primates. Proc. Natl. Acad. Sci. USA 2022, 119, e2122395119. [Google Scholar] [CrossRef] [PubMed]
  36. Stringer, C.; Pachitariu, M.; Steinmetz, N.; Reddy, C.B.; Carandini, M.; Harris, K.D. Spontaneous behaviors drive multidimensional, brainwide activity. Science 2019, 364, eaav7893. [Google Scholar] [CrossRef] [PubMed]
  37. Pachitariu, M.; Stringer, C.; Dipoppa, M.; Schröder, S.; Rossi, L.F.; Dalgleish, H.; Carandini, M.; Harris, K.D. Suite2p: Beyond 10,000 neurons with standard two-photon microscopy. bioRxiv 2017, 061507. [Google Scholar] [CrossRef]
  38. Wilson, K.G. Confinement of quarks. Phys. Rev. D 1974, 10, 2445. [Google Scholar] [CrossRef]
  39. Balian, R.; Drouffe, J.M.; Itzykson, C. Gauge fields on a lattice. I. General outlook. Phys. Rev. D 1974, 10, 3376. [Google Scholar] [CrossRef]
  40. Lee, T.D. Can time be a discrete dynamical variable? Phys. Lett. B 1983, 122, 217–220. [Google Scholar] [CrossRef]
  41. Lee, T.D. Difference equations and conservation laws. J. Stat. Phys. 1987, 46, 843–860. [Google Scholar] [CrossRef]
  42. Parisi, G. Statistical Field Theory; Addison-Wesley: Redwood City, CA, USA, 1989. [Google Scholar]
  43. Wiese, U.J. An Introduction to Lattice Field Theory; Technical Report 11; 2009; Available online: https://saalburg.aei.mpg.de/wp-content/uploads/sites/25/2017/03/wiese.pdf (accessed on 24 March 2024).
  44. Gupta, S. Introduction to Lattice Field Theory; Technical report; Asian School on Lattice Field Theory, TIFR: Mumbai, India, 2011. [Google Scholar]
  45. Zohar, E.; Burrello, M. Formulation of lattice gauge theories for quantum simulations. Phys. Rev. D—Part Fields Gravit. Cosmol. 2015, 91, 054506. [Google Scholar] [CrossRef]
  46. Parotto, P. Parametrized Equation of State for QCD from 3D Ising Model. Proc. Sci. 2018, 311, 036. [Google Scholar] [CrossRef]
  47. Faccioli, P. Lecture Course: Statistical Field Theory—YouTube. 2020. Available online: https://www.youtube.com/watch?v=fGkmCXcGpjA (accessed on 29 May 2024).
  48. Magnifico, G.; Felser, T.; Silvi, P.; Montangero, S. Lattice quantum electrodynamics in (3+1)-dimensions at finite density with tensor networks. Nat. Commun. 2021, 12, 3600. [Google Scholar] [CrossRef] [PubMed]
  49. Fagerholm, E.D.; Foulkes, W.M.; Friston, K.J.; Moran, R.J.; Leech, R. Rendering neuronal state equations compatible with the principle of stationary action. J. Math. Neurosci. 2021, 11, 1–15. [Google Scholar] [CrossRef]
  50. Gosselin, P.; Lotz, A.; Wambst, M. Statistical Field Theory and Networks of Spiking Neurons. arXiv 2020, arXiv:2009.14744. [Google Scholar]
  51. Halverson, J. Building Quantum Field Theories Out of Neurons. arXiv 2021, arXiv:2112.04527. [Google Scholar]
  52. Tiberi, L.; Stapmanns, J.; Kühn, T.; Luu, T.; Dahmen, D.; Helias, M. Gell-Mann-Low Criticality in Neural Networks. Phys. Rev. Lett. 2022, 128, 168301. [Google Scholar] [CrossRef] [PubMed]
  53. Görnitz, T.; Graudenz, D.; von Weizsäcker, C.F. Quantum field theory of binary alternatives. Int. J. Theor. Phys. 1992, 31, 1929–1959. [Google Scholar] [CrossRef]
  54. Deutsch, D. Qubit Field Theory. arXiv 2004, arXiv:quant-ph/0401024. [Google Scholar]
  55. Singh, H. Exploring Quantum Field Theories with Qubit Lattice Models. Ph.D. Thesis, Duke University, Durham, NC, USA, 2020. [Google Scholar]
  56. Franchini, S. Replica Symmetry Breaking without replicas. Ann. Phys. 2023, 450, 169220. [Google Scholar] [CrossRef]
  57. Franchini, S. A simplified Parisi ansatz. Commun. Theor. Phys. 2021, 73, 055601. [Google Scholar] [CrossRef]
  58. Franchini, S. A simplified Parisi Ansatz II: REM Universality. arXiv 2023, arXiv:2312.07808. [Google Scholar]
  59. Concetti, F. The Full Replica Symmetry Breaking in the Ising Spin Glass on Random Regular Graph. J. Stat. Phys. 2018, 173, 1459–1483. [Google Scholar] [CrossRef]
  60. Mezard, M.; Parisi, G.; Virasoro, M.A. Spin Glass Theory and Beyond; World Scientific Publishing Company: Singapore, 1987. [Google Scholar]
  61. Mezard, M.; Montanari, A. Information, Physics, and Computation; Oxford University Press: Oxford, UK, 2009; p. 569. [Google Scholar]
  62. Schneidman, E.; Berry, M.J.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef] [PubMed]
  63. Tkacik, G.; Schneidman, E.; Berry, M.J., II; Bialek, W. Spin glass models for a network of real neurons. arXiv 2009, arXiv:0912.5409. [Google Scholar]
  64. Tkačik, G.; Marre, O.; Mora, T.; Amodei, D.; Berry, M.J.; Bialek, W. The simplest maximum entropy model for collective behavior in a neural network. J. Stat. Mech. Theory Exp. 2013, 2013, P03011. [Google Scholar] [CrossRef]
  65. Meshulam, L.; Gauthier, J.L.; Brody, C.D.; Tank, D.W.; Bialek, W. Successes and failures of simple statistical physics models for a network of real neurons. arXiv 2021, arXiv:2112.14735. [Google Scholar]
  66. Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J.L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M.I.; Sher, A.; et al. A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci. 2008, 28, 505–518. [Google Scholar] [CrossRef] [PubMed]
  67. Treves, A.; Amit, D.J. Metastable states in asymmetrically diluted Hopfield networks. J. Phys. A Math. Gen. 1988, 21, 3155. [Google Scholar] [CrossRef]
  68. Ryom, K.I.; Treves, A. Speed Inversion in a Potts Glass Model of Cortical Dynamics. PRX Life 2023, 1, 013005. [Google Scholar] [CrossRef]
  69. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
  70. Friston, K. A free energy principle for a particular physics. arXiv 2019, arXiv:1906.10184. [Google Scholar]
  71. Bardella, G.; Franchini, S.; Pani, P.; Ferraina, S. Lattice physics approaches for neural networks. arXiv 2024, arXiv:2405.12022. [Google Scholar] [CrossRef]
  72. Qiu, S.; Chow, C. Field theory for biophysical neural networks. Proc. Sci. 2014, Part F130500, 23–28. [Google Scholar]
  73. Brown, L.M. Feynman’s Thesis: A New Approach to Quantum Theory; World Scientific Publishing Co.: Singapore, 2005; pp. 1–119. Available online: http://files.untiredwithloving.org/thesis.pdf (accessed on 29 May 2024).
  74. Huang, K. Statistical Mechanics; Wiley: New York, NY, USA, 2003; p. 493. [Google Scholar]
  75. Cohen, M.R.; Kohn, A. Measuring and interpreting neuronal correlations. Nat. Neurosci. 2011, 14, 811–819. [Google Scholar] [CrossRef] [PubMed]
  76. Aertsen, A.M.; Gerstein, G.L.; Habib, M.K.; Palm, G. Dynamics of neuronal firing correlation: Modulation of “effective connectivity”. J. Neurophysiol. 1989, 61, 900–917. [Google Scholar] [CrossRef] [PubMed]
  77. Kaufman, M.T.; Churchland, M.M.; Ryu, S.I.; Shenoy, K.V. Cortical activity in the null space: Permitting preparation without movement. Nat. Neurosci. 2014, 17, 440. [Google Scholar] [CrossRef]
  78. Elsayed, G.F.; Lara, A.H.; Kaufman, M.T.; Churchland, M.M.; Cunningham, J.P. Reorganization between preparatory and movement population responses in motor cortex. Nat. Commun. 2016, 7, 1–15. [Google Scholar] [CrossRef]
  79. Gallego, J.A.; Perich, M.G.; Miller, L.E.; Solla, S.A. Neural Manifolds for the Control of Movement. Neuron 2017, 94, 978–984. [Google Scholar] [CrossRef]
  80. Yu, B.M.; Cunningham, J.P.; Santhanam, G.; Ryu, S.I.; Shenoy, K.V.; Sahani, M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 2009, 102, 614–635. [Google Scholar] [CrossRef]
  81. Le, T.; Shlizerman, E. STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers. Adv. Neural Inf. Process. Syst. 2022, 35, 17926–17939. [Google Scholar]
  82. Candelori, B.; Bardella, G.; Spinelli, I.; Pani, P.; Ferraina, S.; Scardapane, S. Spatio-temporal transformers for decoding neural movement control. bioRxiv 2024. [Google Scholar] [CrossRef]
  83. Hill, S. Cortical Columns, Models of; Springer: New York, NY, USA, 2014; pp. 1–4. [Google Scholar] [CrossRef]
  84. Opris, I.; Hampson, R.E.; Stanford, T.R.; Gerhardt, G.A.; Deadwyler, S.A. Neural Activity in Frontal Cortical Cell Layers: Evidence for Columnar Sensorimotor Processing. J. Cogn. Neurosci. 2011, 23, 1507. [Google Scholar] [CrossRef] [PubMed]
  85. Rapan, L.; Froudist-Walsh, S.; Niu, M.; Xu, T.; Funck, T.; Zilles, K.; Palomero-Gallagher, N. Multimodal 3D atlas of the macaque monkey motor and premotor cortex. NeuroImage 2021, 226, 117574. [Google Scholar] [CrossRef] [PubMed]
  86. Bardella, G.; Pani, P.; Brunamonti, E.; Giarrocco, F.; Ferraina, S. The small scale functional topology of movement control: Hierarchical organization of local activity anticipates movement generation in the premotor cortex of primates. NeuroImage 2020, 207, 116354. [Google Scholar] [CrossRef] [PubMed]
  87. Charbonneau, P.; Marinari, E.; Mézard, M.; Parisi, G.; Ricci-Tersenghi, F.; Sicuro, G.; Zamponi, F. Spin Glass Theory and Far Beyond: Replica Symmetry Breaking after 40 Years; World Scientific: Singapore, 2023; pp. 1–740. [Google Scholar] [CrossRef]
  88. Jones, E.G. Microcolumns in the cerebral cortex. Proc. Natl. Acad. Sci. USA 2000, 97, 5019–5021. [Google Scholar] [CrossRef] [PubMed]
  89. Goodfellow, I.; Bengio, Y.O.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  90. Normann, R.A.; Maynard, E.M.; Rousche, P.J.; Warren, D.J. A neural interface for a cortical vision prosthesis. Vis. Res. 1999, 39, 2577–2587. [Google Scholar] [CrossRef] [PubMed]
  91. Mountcastle, V.B. The columnar organization of the neocortex. Brain 1997, 120, 701–722. [Google Scholar] [CrossRef] [PubMed]
  92. Buxhoeveden, D.P.; Casanova, M.F. The minicolumn hypothesis in neuroscience. Brain 2002, 125, 935–951. [Google Scholar] [CrossRef]
  93. Hatsopoulos, N.G. Columnar organization in the motor cortex. Cortex 2010, 46, 270–271. [Google Scholar] [CrossRef]
  94. Georgopoulos, A.P.; Merchant, H.; Naselaris, T.; Amirikian, B. Mapping of the preferred direction in the motor cortex. Proc. Natl. Acad. Sci. USA 2007, 104, 11068–11072. [Google Scholar] [CrossRef] [PubMed]
  95. Potjans, T.C.; Diesmann, M. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb. Cortex 2014, 24, 785–806. [Google Scholar] [CrossRef] [PubMed]
  96. Markowitz, D.A.; Curtis, C.E.; Pesaran, B. Multiple component networks support working memory in prefrontal cortex. Proc. Natl. Acad. Sci. USA 2015, 112, 11084–11089. [Google Scholar] [CrossRef] [PubMed]
  97. Cain, N.; Iyer, R.; Koch, C.; Mihalas, S. The Computational Properties of a Simplified Cortical Column Model. PLoS Comput. Biol. 2016, 12, e1005045. [Google Scholar] [CrossRef] [PubMed]
  98. Hawkins, J.; Ahmad, S.; Cui, Y. A theory of how columns in the neocortex enable learning the structure of the world. Front. Neural Circuits 2017, 11, 295079. [Google Scholar] [CrossRef]
  99. Chandrasekaran, C.; Peixoto, D.; Newsome, W.T.; Shenoy, K.V. Laminar differences in decision-related neural activity in dorsal premotor cortex. Nat. Commun. 2017, 8, 614. [Google Scholar] [CrossRef] [PubMed]
  100. Paulk, A.C.; Kfir, Y.; Khanna, A.R.; Mustroph, M.L.; Trautmann, E.M.; Soper, D.J.; Stavisky, S.D.; Welkenhuysen, M.; Dutta, B.; Shenoy, K.V.; et al. Large-scale neural recordings with single neuron resolution using Neuropixels probes in human cortex. Nat. Neurosci. 2022, 25, 252–263. [Google Scholar] [CrossRef] [PubMed]
  101. Wilson, K.G. The renormalization group and critical phenomena. Rev. Mod. Phys. 1983, 55, 583. [Google Scholar] [CrossRef]
  102. Kadanoff, L.P. Relating theories via renormalization. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys. 2013, 44, 22–39. [Google Scholar] [CrossRef]
  103. Efrati, E.; Wang, Z.; Kolan, A.; Kadanoff, L.P. Real-space renormalization in statistical mechanics. Rev. Mod. Phys. 2014, 86, 647–667. [Google Scholar] [CrossRef]
  104. Niemeijer, T.; Van Leeuwen, J.M. Wilson Theory for Spin Systems on a Triangular Lattice. Phys. Rev. Lett. 1973, 31, 1411. [Google Scholar] [CrossRef]
  105. Niemeyer, T.; Van Leeuwen, J.M. Wilson theory for 2-dimensional Ising spin systems. Physica 1974, 71, 17–40. [Google Scholar] [CrossRef]
  106. Parisi, G.; Petronzio, R.; Rosati, F. Renormalization group approach to spin glass systems. Eur. Phys. J. B 2001, 21, 605–609. [Google Scholar] [CrossRef]
  107. Angelini, M.C. Real-Space Renormalization group for spin glasses. arXiv 2023, arXiv:2302.05292. [Google Scholar]
  108. Bardella, G.; Giuffrida, V.; Giarrocco, F.; Brunamonti, E.; Pani, P.; Ferraina, S. Response inhibition in premotor cortex corresponds to a complex reshuffle of the mesoscopic information network. Netw. Neurosci. 2024, 1–26. [Google Scholar] [CrossRef]
  109. Ramawat, S.; Mione, V.; Di Bello, F.; Bardella, G.; Genovesio, A.; Pani, P.; Ferraina, S.; Brunamonti, E. Different Contribution of the Monkey Prefrontal and Premotor Dorsal Cortex in Decision Making During a Transitive Inference Task. Neuroscience 2022, 485, 147–162. [Google Scholar] [CrossRef] [PubMed]
  110. Giarrocco, F.; Bardella, G.; Giamundo, M.; Fabbrini, F.; Brunamonti, E.; Pani, P.; Ferraina, S. Neuronal dynamics of signal selective motor plan cancellation in the macaque dorsal premotor cortex. Cortex 2021, 135, 326–340. [Google Scholar] [CrossRef] [PubMed]
  111. Wilczek, F. Quantum Time Crystals. Phys. Rev. Lett. 2012, 109, 160401. [Google Scholar] [CrossRef] [PubMed]
  112. Zhang, J.; Hess, P.W.; Kyprianidis, A.; Becker, P.; Lee, A.; Smith, J.; Pagano, G.; Potirniche, I.D.; Potter, A.C.; Vishwanath, A.; et al. Observation of a discrete time crystal. Nature 2017, 543, 217–220. [Google Scholar] [CrossRef]
  113. Grinstein, G.; Jayaprakash, C.; He, Y. Statistical Mechanics of Probabilistic Cellular Automata. Phys. Rev. Lett. 1985, 55, 2527. [Google Scholar] [CrossRef]
  114. Elze, H.T. Action principle for cellular automata and the linearity of quantum mechanics. Phys. Rev. A 2014, 89, 012111. [Google Scholar] [CrossRef]
  115. Hooft, G.T. The Cellular Automaton Interpretation of Quantum Mechanics; Springer Nature: New York, NY, USA, 2014. [Google Scholar]
  116. Fredkin, E.; Toffoli, T. Conservative Logic; Kluwer Academic Publishers-Plenum Publishers: Dordrecht, The Netherlands, 1982; Volume 21, pp. 219–253. [Google Scholar] [CrossRef]
  117. Capobianco, S.; Toffoli, T. Can anything from Noether’s Theorem be salvaged for discrete dynamical systems? Lect. Notes Comput. Sci. 2011, 6714, 77–88. [Google Scholar] [CrossRef]
  118. Cranmer, K.; Kanwar, G.; Racanière, S.; Rezende, D.J.; Shanahan, P.E. Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics. Nat. Rev. Phys. 2023, 5, 526–535. [Google Scholar] [CrossRef]
  119. Kogut, J.; Susskind, L. Hamiltonian Formulation of Wilson’s Lattice Gauge Theories. Phys. Rev. D 1975, 11, 395. [Google Scholar] [CrossRef]
  120. Goldstein, H.; Poole, C.P.; Safko, J.L. Classical Mechanics; Pearson Education: Upper Saddle River, NJ, USA, 2002; p. 638. [Google Scholar]
  121. Nigam, K.; Banerjee, K. A Brief Review of Helmholtz Conditions. arXiv 2016, arXiv:1602.01563. [Google Scholar]
  122. Sarlet, W. The Helmholtz conditions revisited. A new approach to the inverse problem of Lagrangian dynamics. J. Phys. A Math. Gen. 1982, 15, 1503. [Google Scholar] [CrossRef]
  123. Douglas, J. Solution of the Inverse Problem of the Calculus of Variations. Proc. Natl. Acad. Sci. USA 1939, 25, 631–637. [Google Scholar] [CrossRef] [PubMed]
  124. Craciun, D.; Opris, D. The Helmholtz conditions for the difference equations systems. Balk. J. Geom. Its Appl. (BJGA) 1996, 1, 21–30. [Google Scholar]
  125. Bourdin, L.; Cresson, J. Helmholtz’s inverse problem of the discrete calculus of variations. J. Differ. Equ. Appl. 2013, 19, 1417–1436. [Google Scholar] [CrossRef]
  126. Gubbiotti, G. On the inverse problem of the discrete calculus of variations. J. Phys. A Math. Theory 2019, 52, 305203. [Google Scholar] [CrossRef]
  127. Gubbiotti, G. Lagrangians and integrability for additive fourth-order difference equations. Eur. Phys. J. Plus 2020, 135, 853. [Google Scholar] [CrossRef]
  128. Lehmann, H.; Symanzik, K.; Zimmermann, W. Zur Formulierung quantisierter Feldtheorien. Il Nuovo Cimento 1955, 1, 205–225. [Google Scholar] [CrossRef]
  129. Steinbrecher, G.; Shaw, W.T. Quantile mechanics. Eur. J. Appl. Math. 2008, 19, 87–112. [Google Scholar] [CrossRef]
  130. Yamamoto, Y. Fundamentals of Noise Processes; Cambridge University Press: Cambridge, UK, 2004; pp. 1–7. [Google Scholar]
  131. Rovelli, C.; Smolin, L. Discreteness of area and volume in quantum gravity. Nucl. Phys. B 1995, 442, 593–619. [Google Scholar] [CrossRef]
  132. Nelson, E. Review of stochastic mechanics. J. Phys. Conf. Ser. 2012, 361, 012011. [Google Scholar] [CrossRef]
  133. Guerra, F.; Rosen, L.; Simon, B. The P(ϕ)₂ Euclidean Quantum Field Theory as Classical Statistical Mechanics. Ann. Math. 1975, 101, 111. [Google Scholar] [CrossRef]
  134. Parisi, G.; Wu, Y. Perturbation theory without gauge fixing. Sci. Sin. 1981, 24, 483. [Google Scholar] [CrossRef]
  135. Gibbs, J.W. Elementary Principles in Statistical Mechanics: Developed with Especial Reference to the Rational Foundation of Thermodynamics; Cambridge University Press: Cambridge, UK, 2010; pp. 1–207. [Google Scholar] [CrossRef]
  136. Finkelstein, D.R. Ur Theory and Space-Time Structure. Time, Quantum and Information; Springer: Berlin/Heidelberg, Germany, 2003; pp. 397–407. [Google Scholar] [CrossRef]
  137. Caginalp, G. Thermodynamic properties of the ϕ⁴ lattice field theory near the Ising limit. Ann. Phys. 1980, 126, 500–511. [Google Scholar] [CrossRef]
  138. Kistler, N. Solving spin systems: The Babylonian way. arXiv 2021, arXiv:2111.04375. [Google Scholar]
  139. Wick, G.C. Properties of Bethe-Salpeter Wave Functions. Phys. Rev. 1954, 96, 1124. [Google Scholar] [CrossRef]
  140. O’Brien, D. The Wick rotation. Aust. J. Phys. 1975, 28, 7–13. [Google Scholar] [CrossRef]
  141. Di Castro, C.; Jona-Lasinio, G. On the microscopic foundation of scaling laws. Phys. Lett. A 1969, 29, 322–323. [Google Scholar] [CrossRef]
  142. Nguyen, H.C.; Zecchina, R.; Berg, J. Inverse statistical problems: From the inverse Ising problem to data science. Adv. Phys. 2017, 66, 197–261. [Google Scholar] [CrossRef]
  143. Carleo, G.; Cirac, I.; Cranmer, K.; Daudet, L.; Schuld, M.; Tishby, N.; Vogt-Maranto, L.; Zdeborová, L. Machine learning and the physical sciences. Rev. Mod. Phys. 2019, 91, 045002. [Google Scholar] [CrossRef]
  144. Zdeborová, L.; Krzakala, F. Statistical physics of inference: Thresholds and algorithms. Adv. Phys. 2016, 65, 453–552. [Google Scholar] [CrossRef]
  145. Merger, C.; René, A.; Fischer, K.; Bouss, P.; Nestler, S.; Dahmen, D.; Honerkamp, C.; Helias, M. Learning Interacting Theories from Data. Phys. Rev. X 2023, 13, 041033. [Google Scholar] [CrossRef]
  146. Albert, J.; Swendsen, R.H. The Inverse Ising Problem. Phys. Procedia 2014, 57, 99–103. [Google Scholar] [CrossRef]
  147. Swendsen, R.H. Monte Carlo Calculation of Renormalized Coupling Parameters. Phys. Rev. Lett. 1984, 52, 1165. [Google Scholar] [CrossRef]
  148. Aurell, E.; Ekeberg, M. Inverse ising inference using all the data. Phys. Rev. Lett. 2012, 108, 090201. [Google Scholar] [CrossRef]
  149. Sessak, V.; Monasson, R. Small-correlation expansions for the inverse Ising problem. J. Phys. A: Math. Theor. 2009, 42, 055001. [Google Scholar] [CrossRef]
  150. Arous, G.B.; Kuptsov, A. REM Universality for Random Hamiltonians. Prog. Probab. 2009, 62, 45–84. [Google Scholar] [CrossRef]
  151. van Hemmen, J.L.; Schüz, A.; Aertsen, A. Structural aspects of biological cybernetics: Valentino Braitenberg, neuroanatomy, and brain function. Biol. Cybern. 2014, 108, 517–525. [Google Scholar] [CrossRef] [PubMed]
  152. Buzsáki, G.; Anastassiou, C.A.; Koch, C. The origin of extracellular fields and currents–EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 2012, 13, 407–420. [Google Scholar] [CrossRef] [PubMed]
  153. Segev, R.; Puchalla, J.; Berry, M.J. Functional organization of ganglion cells in the salamander retina. J. Neurophysiol. 2006, 95, 2277–2292. [Google Scholar] [CrossRef] [PubMed]
  154. Segev, C. Chapter 1 Kinetic Models of Synaptic Transmission; MIT Press: Cambridge, MA, USA, 1998; pp. 1–25. [Google Scholar]
  155. Lübke, J.; Feldmeyer, D. Excitatory signal flow and connectivity in a cortical column: Focus on barrel cortex. Brain Struct. Funct. 2007, 212, 3–17. [Google Scholar] [CrossRef] [PubMed]
  156. Pachitariu, M.; Steinmetz, N.A.; Kadir, S.N.; Carandini, M.; Harris, K.D. Fast and accurate spike sorting of high-channel count probes with KiloSort. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar]
  157. Churchland, M.M.; Cunningham, J.P.; Kaufman, M.T.; Foster, J.D.; Nuyujukian, P.; Ryu, S.I.; Shenoy, K.V.; Shenoy, K.V. Neural population dynamics during reaching. Nature 2012, 487, 51–56. [Google Scholar] [CrossRef] [PubMed]
  158. Mattia, M.; Pani, P.; Mirabella, G.; Costa, S.; Del Giudice, P.; Ferraina, S. Heterogeneous attractor cell assemblies for motor planning in premotor cortex. J. Neurosci. 2013, 33, 11155–11168. [Google Scholar] [CrossRef]
  159. Kaufman, M.T.; Seely, J.S.; Sussillo, D.; Ryu, S.I.; Shenoy, K.V.; Churchland, M.M. The largest response component in the motor cortex reflects movement timing but not movement type. eNeuro 2016, 3, 85–101. [Google Scholar] [CrossRef]
  160. Clawson, W.; Vicente, A.F.; Ferraris, M.; Bernard, C.; Battaglia, D.; Quilichini, P.P. Computing hubs in the hippocampus and cortex. Sci. Adv. 2019, 5, eaax4843. [Google Scholar] [CrossRef]
  161. Weinrich, M.; Wise, S.P. The premotor cortex of the monkey. J. Neurosci. Off. J. Soc. Neurosci. 1982, 2, 1329–1345. [Google Scholar] [CrossRef] [PubMed]
  162. Churchland, M.M.; Cunningham, J.P.; Kaufman, M.T.; Ryu, S.I.; Shenoy, K.V. Cortical preparatory activity: Representation of movement or first cog in a dynamical machine? Neuron 2010, 68, 387–400. [Google Scholar] [CrossRef] [PubMed]
  163. Shenoy, K.V.; Sahani, M.; Churchland, M.M. Cortical Control of Arm Movements: A Dynamical Systems Perspective. Annu. Rev. Neurosci. 2013, 36, 337–359. [Google Scholar] [CrossRef] [PubMed]
  164. Churchland, M.M.; Yu, B.M.; Ryu, S.I.; Santhanam, G.; Shenoy, K.V. Neural variability in premotor cortex provides a signature of motor preparation. J. Neurosci. Off. J. Soc. Neurosci. 2006, 26, 3697–3712. [Google Scholar] [CrossRef] [PubMed]
  165. Ames, K.C.; Ryu, S.I.; Shenoy, K.V. Neural Dynamics of Reaching Following Incorrect or Absent Motor Preparation. Neuron 2014, 81, 438. [Google Scholar] [CrossRef] [PubMed]
  166. Mirabella, G.; Pani, P.; Ferraina, S. Neural correlates of cognitive control of reaching movements in the dorsal premotor cortex of rhesus monkeys. J. Neurophysiol. 2011, 106, 1454–1466. [Google Scholar] [CrossRef] [PubMed]
  167. Battaglia-Mayer, A.; Buiatti, T.; Caminiti, R.; Ferraina, S.; Lacquaniti, F.; Shallice, T. Correction and suppression of reaching movements in the cerebral cortex: Physiological and neuropsychological aspects. Neurosci. Biobehav. Rev. 2014, 42, 232–251. [Google Scholar] [CrossRef]
  168. Caminiti, R.; Johnson, P.B.; Galli, C.; Ferraina, S.; Burnod, Y. Making arm movements within different parts of space: The premotor and motor cortical representation of a coordinate system for reaching to visual targets. J. Neurosci. 1991, 11, 1182–1197. [Google Scholar] [CrossRef]
  169. Caminiti, R.; Borra, E.; Visco-Comandini, F.; Battaglia-Mayer, A.; Averbeck, B.B.; Luppino, G. Computational architecture of the parieto-frontal network underlying cognitive-motor control in monkeys. eNeuro 2017, 4, 306–322. [Google Scholar] [CrossRef]
  170. Nambu, A.; Tokuno, H.; Takada, M. Functional significance of the cortico-subthalamo-pallidal ‘hyperdirect’ pathway. Neurosci. Res. 2002, 43, 111–117. [Google Scholar] [CrossRef]
  171. Middleton, F.A.; Strick, P.L. Cerebellar Projections to the Prefrontal Cortex of the Primate. J. Neurosci. 2001, 21, 700. [Google Scholar] [CrossRef] [PubMed]
  172. Marconi, B.; Genovesio, A.; Battaglia-Mayer, A.; Ferraina, S.; Squatrito, S.; Molinari, M.; Lacquaniti, F.; Caminiti, R. Eye–Hand Coordination during Reaching. I. Anatomical Relationships between Parietal and Frontal Cortex. Cereb. Cortex 2001, 11, 513–527. [Google Scholar] [CrossRef] [PubMed]
  173. Johnson, P.B.; Ferraina, S.; Bianchi, L.; Caminiti, R. Cortical Networks for Visual Reaching: Physiological and Anatomical Organization of Frontal and Parietal Lobe Arm Regions. Cereb. Cortex 1996, 6, 102–119. [Google Scholar] [CrossRef] [PubMed]
  174. Langdon, C.; Genkin, M.; Engel, T.A. A unifying perspective on neural manifolds and circuits for cognition. Nat. Rev. Neurosci. 2023, 24, 363–377. [Google Scholar] [CrossRef] [PubMed]
  175. Wang, S.; Falcone, R.; Richmond, B.; Averbeck, B.B.; Davidoff, L.M. Attractor dynamics reflect decision confidence in macaque prefrontal cortex. Nat. Neurosci. 2023, 26, 1970–1980. [Google Scholar] [CrossRef] [PubMed]
  176. Genkin, M.; Shenoy, K.V.; Chandrasekaran, C.; Engel, T.A. The dynamics and geometry of choice in premotor cortex. bioRxiv 2023, 2023.07.22.550183. [Google Scholar] [CrossRef]
  177. Decelle, A.; Ricci-Tersenghi, F. Solving the inverse Ising problem by mean-field methods in a clustered phase space with many states. Phys. Rev. E 2016, 94, 012112. [Google Scholar] [CrossRef]
  178. Robert, M.; Widom, B. Field-induced phase separation in one dimension. J. Stat. Phys. 1984, 37, 419–437. [Google Scholar] [CrossRef]
  179. Percus, J.K. One-dimensional Ising model in arbitrary external field. J. Stat. Phys. 1977, 16, 299–309. [Google Scholar] [CrossRef]
  180. Tejero, C.F. One-dimensional inhomogeneous Ising model: A new approach. J. Stat. Phys. 1987, 48, 531–538. [Google Scholar] [CrossRef]
  181. Derrida, B.; Mendès France, M.; Peyrière, J. Exactly solvable one-dimensional inhomogeneous models. J. Stat. Phys. 1986, 45, 439–449. [Google Scholar] [CrossRef]
  182. Pani, P.; Giarrocco, F.; Giamundo, M.; Brunamonti, E.; Mattia, M.; Ferraina, S. Persistence of cortical neuronal activity in the dying brain. Resuscitation 2018, 130, e5–e7. [Google Scholar] [CrossRef] [PubMed]
  183. Zheng, H.; Feng, Y.; Tang, J.; Ma, S. Interfacing brain organoids with precision medicine and machine learning. Cell Rep. Phys. Sci. 2022, 3, 100974. [Google Scholar] [CrossRef]
  184. Sharf, T.; van der Molen, T.; Glasauer, S.M.; Guzman, E.; Buccino, A.P.; Luna, G.; Cheng, Z.; Audouard, M.; Ranasinghe, K.G.; Kudo, K.; et al. Functional neuronal circuitry and oscillatory dynamics in human brain organoids. Nat. Commun. 2022, 13, 4403. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Case study: in vivo recordings from the dorsal premotor cortex (PMd) of non-human primates during a behavioral task. (A) Cortical minitube sampling of the Utah array: the listening volume of each electrode can reasonably be assumed to be of the order of the distance between the electrodes (∼400 μm) [83]. In the case of PMd, the Utah 96 samples activity from around the inner Baillarger band [84,85] at a penetration depth of about 1.5 mm (see also Figure 5). (B) Decimated lattice of the electrode kernel Ω ^ for Utah 96 interfaces. (C) Behavioral task that required visually guided arm movements toward a peripheral target (Go trials) that could appear in two opposite directions (D1 or D2). Monkeys had to reach and hold the peripheral target to get the reward. RT: reaction time; CT: central target; Go: Go signal appearance; M_on: Movement onset. (D) Experimental hypermatrix from the electrode kernel Ω ^ of Equation (25) for D1 and D2. Neural activity is aligned [−1, +1] s around the M_on to include the distributions of the stimuli (the Go signal, orange distribution, and M_on, magenta). Here, the I of Equation (7) shows the time markers for the stimuli presented during the task. Purple traces are the observables computed during a baseline period, the first 250 ms of the selected epoch. The neurons are sorted according to the activity in the first 250 ms of D1. Ticks are every 500 ms.
Figure 2. (A) Trial-averaged correlation matrix C ^ for both directions of movement (D1 + D2) computed on the full 2 s window. Notice the eight most correlated pairs, highlighted in red. (B) Space arrangement of the "channel" kernel Ω ^ for the Utah 96. The lattice is charted by x y (decimated lattice); the black corners are silent by default. In red we show the eight most correlated pairs. (C) Absolute value of the space covariance δ C ^ for D1 + D2 vs. Euclidean distance; the most correlated pairs are highlighted in red. The Euclidean distance between the recording channels appears to have little significance at the mm scale in this cortical area. (D) Placement of the interface during surgery for one monkey (Monkey P) [35,86].
Figure 5. Utah 96 compared with an example of PMd histology taken from [85]. The Utah 96 BCI [4,90] is a silicon-based microelectrode array in the form of a rectangular or square grid in a 10 × 10 pattern (the total number of channels is 96, as the corners of the square do not record; see Figure 3). Each pin is 1.5 mm long, with a diameter of 80 μm at the base tapering to around 40–50 μm at the tip. The electrodes are electrically insulated from neighboring electrodes by a glass moat surrounding the base. The electrode tips are coated with platinum to facilitate charge transfer into the nerve tissue, and the electrode stems are insulated with silicon nitride. In the recordings used in [35,86,108] the grid is square and measures 4.2 mm, with 96 silicon microelectrodes and a spacing of 0.4 mm. In the case of non-human primate PMd, it should record neural activity from the inner Baillarger band [84,85].
Figure 6. Experimental recording of neural activity and behavioral task. (A) Cortical minitube sampling of the Utah 96 array: the listening volume of each electrode can reasonably be assumed to be of the order of the distance between the electrodes (∼400 μm) [83]. In the case of PMd, the Utah 96 samples activity from around the inner Baillarger band [84,85] at a penetration depth of about 1.5 mm. (B) Decimated lattice x y of the electrode kernel Ω ^ for Utah 96. Each lattice cell can be either silent or active, as described in Section 5.1. (C) Behavioral task that required the monkeys to move the arm toward a peripheral target (Go trials) that could appear in one of two directions of movement (D1 or D2). Monkeys had to reach and hold the peripheral target to get the reward. RT: reaction time; CT: central target; Go: Go signal appearance.
Figure 7. Experimental hypermatrices of Monkey P. The figure shows the first-order observables of the theory and the hypermatrix from the electrode kernel Ω ^ of Equation (25) for PMd data (here for D1 and D2). Neural activity is aligned [−1, +1] s to the M_On to include the distributions of the stimuli (the Go signal, orange distribution, and M_On, magenta; T = 2 s). The uppermost panels represent the I of Equation (7) in the form of time markers for the stimuli presented during the task. The green traces above the Π matrix are the time evolution of the spatially averaged activity. The green traces above the transposed Ω ^ are instead the time-averaged activity for each i. Purple traces are the "baseline" observables computed in the first 250 ms, which, as expected, are indistinguishable between the two conditions. The kernels and Φ are sorted according to the activity in the first 250 ms of D1, before the appearance of any Go signal. Black ticks are every 500 ms. Black segments are 250 ms wide.
Figure 8. Hypermatrix detail. The figure reports details of the hypermatrices of Figure 1 (Monkey P) for two epochs of the task: the 200 ms before (Pre Mov_on panel) and after (Post Mov_on panel) the Mov_on, for D1 and D2. Axis scales and color labels are the same as in Figure 1. The changes in the dynamical synchronization patterns between D1 and D2, and the corresponding kernel configurations, are evident.
Figure 9. Comparison with background. We can compare the experimental hypermatrix of D1 + D2 (A) with the same observable computed in the first 250 ms only (B), which is the region highlighted in purple in Figure 1. The most correlated channel pairs in the spatial correlation matrix are still visible.
Figure 10. Overlap distributions: comparison of the distributions of the overlap between various sets of experimental trials (Equation (163)). (A) Comparison between D1 + D2, D1 and D2, and the intersection of the two distributions. (B) D1. (C) D2. (D) Inter-overlap.
Figure 11. Experimental first-order observables: upper panels I ^ , lower panels ω ^ and f ^ . The time window is centered [−1, +1] s around the onset of Movement (M_on). Alignment includes the distributions of the stimuli: Go signal (orange distribution) and M_on (magenta). Total number of trials: n_tr^P = 800, n_tr^C = 404. Number of neurons recorded: N_P = 166, N_C = 71. T = 2 s. Number of recording electrodes of the Utah array: N ^ = 96 for both monkeys. Ticks are every 500 ms.
Figure 12. Electrode kernels for Monkey P. The time window is centered [−1, +1] s around the M_on. n_tr^P = 800; N_P = 166; T = 2 s; N ^ = 96.
Figure 13. Experimental matrices for Monkey P: upper panel Π ^ ; lower panel Φ ^ . n_tr^P = 800; N_P = 166; T = 2 s; N ^ = 96.
Figure 14. Experimental ensemble covariance matrices for Monkey P: upper panel δ Q ^ ; lower panel δ C ^ ; from Equation (155). n_tr^P = 800; N_P = 166; T = 2 s; N ^ = 96.
Figure 15. Electrode kernels for Monkey C. The time window is centered [−1, +1] s around the M_on. n_tr^C = 404; N_C = 71; T = 2 s; N ^ = 96.
Figure 16. Experimental matrices for Monkey C: upper panel Π ^ ; lower panel Φ ^ . n_tr^C = 404; N_C = 71; T = 2 s; N ^ = 96.
Figure 17. Experimental ensemble covariance matrices for Monkey C: upper panel δ Q ^ ; lower panel δ C ^ ; from Equation (165) in the SI. n_tr^C = 404; N_C = 71; T = 2 s; N ^ = 96.
Figure 18. Renormalization: interspike interval (ISI) distribution for Monkey P considering both directions of movement together (D1 + D2); the renormalization time is set at 10 ms. There is a relation between the interspike interval (ISI), the average activity and the patterns that appear in the time covariance matrix. Consider, for example, the time covariance matrix in Figure 20C: the shape of the darker region around the diagonal is visible by eye. This is a direct consequence of the existence of a refractory period for the recorded neurons (spiking above a certain frequency is forbidden), which ensures the so-called 'ultraviolet cutoff' that ultimately makes time discretization possible.
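As a concrete illustration of the time-renormalization step mentioned in the caption of Figure 18, the sketch below coarse-grains a binary raster into 10 ms blocks, marking a renormalized bin as active if at least one spike falls inside it. The 1 ms native resolution, the OR rule and the array shapes are assumptions made for illustration, not a specification of the pipeline used in the paper.

```python
import numpy as np

def renormalize_time(raster: np.ndarray, block: int = 10) -> np.ndarray:
    """Coarse-grain a binary raster (channels x fine time bins) into blocks of
    `block` bins; a renormalized bin is active if any spike falls inside it
    (the OR rule is an illustrative assumption)."""
    n_channels, n_bins = raster.shape
    n_bins -= n_bins % block                      # drop an incomplete trailing block
    blocks = raster[:, :n_bins].reshape(n_channels, -1, block)
    return (blocks.max(axis=2) > 0).astype(raster.dtype)

rng = np.random.default_rng(1)
raster_1ms = (rng.random((96, 2000)) < 0.01).astype(int)   # ~10 Hz channels over 2 s (assumed)
raster_10ms = renormalize_time(raster_1ms, block=10)
print(raster_1ms.shape, "->", raster_10ms.shape)            # (96, 2000) -> (96, 200)
print("mean activity per bin:", raster_1ms.mean(), "->", raster_10ms.mean())
```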
Figure 19. (A,B) Comparison between the D1 + D2 kernel (A) and its renormalized version (B). The two kernels are similar. Notice the amplification of the signal in the renormalized kernel due to the increase in spike density per bin.
Figure 20. Renormalization: comparison between the D1 + D2 covariance matrices (A,C) and their renormalized versions (B,D). The synchronization patterns are preserved in both cases. Notice that in the time covariance matrices (C,D), the shape of the dark band around the diagonal is not stationary in time; therefore, the neural computations analyzed here are not a stationary process. In fact, following the shape, we can also appreciate an inverse relation between the width of the band and the average activity of the channels, which we could interpret as the effect of the varying input activity on the relative contribution of the refractory period of the neurons. We will conduct this analysis in future work, as we believe it would be better demonstrated on different datasets. In any case, it is reasonable to expect that studying the patterns of the time covariance matrix and their relation with the ISI would be of interest on both the physiological and the physical side.
Figure 21. Grand covariance test. We analyze the entries of the renormalized grand covariance, which is the connected two-body correlation between the points of the renormalized kernel. The full grand covariance distribution is shown in panel (A). Panel (B), in green, and panel (D), in blue, show only the elements that contribute to the correlation and overlap matrices defined in Equation (165). All other entries are shown in panel (C), in red. The red distribution follows the normal-product peak centered on zero expected from the product of independent Gaussian fluctuations; this peak is also present in the blue and green distributions. Notice, however, that only the blue and green distributions contain the most correlated pairs, which deviate from the normal-product distribution.
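The grand covariance test of Figure 21 can be sketched as follows: estimate the connected correlation between every pair of kernel entries across trials, then split the entries into those that feed the space/time correlation and overlap matrices and the remaining pairs. The sizes, the trial ensemble, and the splitting rule below are illustrative assumptions, not the authors' actual computation.

```python
import numpy as np

# Illustrative ensemble of binarized kernels (trials x channels x time bins);
# small sizes keep the (N*T) x (N*T) matrix manageable.
rng = np.random.default_rng(2)
n_trials, n_channels, n_bins = 200, 16, 40
rasters = (rng.random((n_trials, n_channels, n_bins)) < 0.1).astype(float)

# Connected "grand covariance" between all pairs of kernel entries (i, t) and (j, s),
# estimated across the trial ensemble.
flat = rasters.reshape(n_trials, -1)       # each row: one kernel flattened to (N*T,)
grand_cov = np.cov(flat, rowvar=False)     # (N*T, N*T) connected two-point function

# Split entries into same-time pairs (feeding the space covariance), same-channel
# pairs (feeding the time covariance/overlap) and all remaining pairs.
idx_channel, idx_time = np.divmod(np.arange(n_channels * n_bins), n_bins)
same_time = idx_time[:, None] == idx_time[None, :]
same_channel = idx_channel[:, None] == idx_channel[None, :]
rest = ~(same_time | same_channel)

print("same-time entries std:   ", grand_cov[same_time].std())
print("same-channel entries std:", grand_cov[same_channel].std())
print("remaining entries std:   ", grand_cov[rest].std())
```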