Article

The Mechanics Underpinning Non-Deterministic Computation in Cortical Neural Networks

by
Elizabeth A. Stoll
Western Institute for Advanced Study, Denver, CO 80202, USA
AppliedMath 2024, 4(3), 806-827; https://doi.org/10.3390/appliedmath4030043
Submission received: 25 November 2023 / Revised: 25 May 2024 / Accepted: 30 May 2024 / Published: 26 June 2024

Abstract
Cortical neurons integrate upstream signals and random electrical noise to gate signaling outcomes, leading to statistically random patterns of activity. Yet classically, the neuron is modeled as a binary computational unit, encoding Shannon entropy. Here, the neuronal membrane potential is modeled as a function of inherently probabilistic ion behavior. In this new model, each neuron computes the probability of transitioning from an off-state to an on-state, thereby encoding von Neumann entropy. Component pure states are integrated into a physical quantity of information, and the derivative of this high-dimensional probability distribution yields eigenvalues across the multi-scale quantum system. In accordance with the Hellmann–Feynman theorem, the resolution of the system state is paired with a spontaneous shift in charge distribution, so this defined system state instantly becomes the past as a new probability distribution emerges. This mechanistic model produces testable predictions regarding the wavelength of free energy released upon information compression and the temporal relationship of these events to physiological outcomes. Overall, this model demonstrates how cortical neurons might achieve non-deterministic signaling outcomes through a computational process of noisy coincidence detection.

1. Introduction

Cortical neurons have highly unpredictable signaling outcomes [1], in contrast with spinal neurons, whose signaling outcomes are easily predicted by analyzing upstream inputs [2]. Cortical neurons allow random electrical noise to gate signaling outcomes, with spontaneous subthreshold fluctuations in membrane potential significantly contributing to the likelihood of an action potential [3,4]. Indeed, these cells actively maintain a ‘cortical up-state’, hovering near the action potential threshold and allowing stochastic events to drive signaling outcomes [5]. This process of noisy coincidence detection results in probabilistic firing patterns, which can be modeled with Bayesian statistics [6], random connectivity models [7], Fano factor analysis of inter-spike variability [8], or by modifying the Hodgkin–Huxley equations to account for electrical noise [9,10,11]. Notably, the most reliable models of cortical neurons are composed of stochastic differential equations, which provide an analog membrane potential based on ion channel opening probability and lead to spiking behavior upon threshold activation [12]. This method models ‘noise’ at the synapse as an Ornstein–Uhlenbeck process [13] or by applying the Fokker–Planck equations [14].
Given the success of these inherently probabilistic computational models, it is worth revisiting whether state transitions in cortical neurons should be modeled under classical assumptions. Indeed, the random movement of just a few ions can push a cortical neuron over action potential threshold [15,16], interspike intervals at the single-unit level are statistically random [3,4], and population coding at the neural network level is statistically random [6,7,8]. Quantum-scale events materially affect macro-scale outcomes at the level of the computational unit [9,10,11], and probabilistic models effectively replicate cortical neural network activity [12,13,14]. Furthermore, classical approaches cannot explain the extraordinary energy efficiency of the brain, while a model of quantum computation can [17]. To understand the mechanics underlying a computational process where quantum uncertainty is preserved in situ, it will be useful to develop a mathematical framework for a multi-scale quantum system.
Classically, a spiking neuron is viewed as a binary unit, always in an on-state or an off-state. Here, the neuron is modeled as a two-state quantum system, with some probability of switching from an off-state to an on-state. This neuronal state is dependent on the inherently probabilistic position and momentum of each ion in the vicinity. In this multi-scale quantum model, the voltage of a cortical neuron is a function of all component pure states, and an optimal system state in the present context is selected from a probability distribution, during a process of quantum information generation and compression. This computational process yields eigenvalues for all state vectors and immediately restores uncertainty across the system, prompting the next computational cycle. This theoretical model offers a mechanism by which cortical neurons might achieve non-deterministic signaling outcomes through a process of ambient-temperature quantum computation and yields specific predictions that can be tested in the laboratory.

2. Methods

2.1. Modeling the Cortical Neuron as a Two-State Quantum System

During up-state, cortical neurons linger at their action potential threshold, allowing both upstream signals and random electrical noise to prompt a signaling outcome. So, while a neuron is classically interpreted as a binary logic gate in an ‘on’ or ‘off’ state, coded as 1 or 0, it could also be described as having some probability of converting to an ‘on’ state or remaining in an ‘off’ state.
In this new approach, a cortical neuron integrates upstream signals with random electrical noise, defining its voltage state as a function of time, as the system is perturbed. The neuron starts in off-state $\phi$, not firing an action potential, and over time $t$, it reaches another state $\chi$. And so, over some period of time, from $t_0$ to $t$, the state of the neuron evolves from $\phi$ to $\chi$. The path taken from one state to another is given by:
$\langle \chi | U(t, t_0) | \phi \rangle .$
The probability of a state change can be represented in some basis:
$\sum_{k} \sum_{j} \langle \chi | k \rangle \langle k | U(t, t_0) | j \rangle \langle j | \phi \rangle ,$
such that U is completely described by base states k and j, which represent the initial off-state of the neuron and the state of the neuron after the time step, respectively:
$\langle k | U(t, t_0) | j \rangle .$
The time interval can be understood as being $t = t_0 + \Delta t$, so identifying the state of the neuron $\chi$ at time $t$ can be understood as taking a path from one state to another:
$\langle \chi | \psi(t_0 + \Delta t) \rangle = \langle \chi | U(t_0 + \Delta t, t_0) | \psi(t_0) \rangle .$
If $\Delta t = 0$, there can be no state change. In this case:
$| \psi(t_0 + \Delta t) \rangle = | \psi(t_0) \rangle .$
In any other case, the state of the neuron at time $t$ is given by the orthonormal base states $k$ and $j$, with probability amplitudes:
$C_k(t_0 + \Delta t) = \langle k | \psi(t_0 + \Delta t) \rangle$
And:
$C_j(t_0 + \Delta t) = \langle j | \psi(t_0 + \Delta t) \rangle .$
The neuronal state $\psi$ at time $t$ can therefore be described as a normed state vector $\psi$, in a superposition of two orthonormal base states $k$ and $j$, with probability amplitudes $C_k$ and $C_j$. The sum of the squared moduli of all probability amplitudes is equal to 1:
$|C_k|^2 + |C_j|^2 = 1 .$
Since the neuron starts the time evolution in state $\psi(t_0) = j$, its probable state at time $t$ is given by:
$\langle k | \psi(t_0 + \Delta t) \rangle = \langle k | U(t_0 + \Delta t, t_0) | \psi(t_0) \rangle .$
This equation can also be written in expanded form as the sum of all transition probabilities:
$\langle k | \psi(t) \rangle = \sum_j \langle k | U(t_0 + \Delta t, t_0) | j \rangle \langle j | \psi(t_0) \rangle .$
For the state vector $\psi(t)$, the probability of a state change at time $t$ is described by the U-matrix, $U_{kj}(t)$:
$U_{kj}(t) = \langle k | U(t_0 + \Delta t, t_0) | j \rangle .$
And so, all probability amplitudes are dependent on the amount of time that has passed, $\Delta t$:
$C_k(t_0 + \Delta t) = \sum_j U_{kj}(t)\, C_j(t_0) .$
The neuron undergoes perturbations over time $\Delta t$. If $\Delta t = 0$, there can be no state change and $k = j$. If $\Delta t > 0$, there is some probability of a state change, where $k \neq j$. As such, the two-state quantum system is described by the Kronecker delta $\delta_{kj}$:
$\delta_{kj} = \langle k | j \rangle = \begin{cases} 1, & \text{if } k = j \\ 0, & \text{if } k \neq j \end{cases}$
Each of the coefficients of the U-matrix $U_{kj}$ differs from $\delta_{kj}$ by some amount proportional to $\Delta t$, such that:
$U_{kj}(t) = \delta_{kj} + W_{kj}\,\Delta t ,$
where $W_{kj} = -(i/\hbar)\,\hat{H}_{kj}(t)$, with the Hamiltonian operator term $\hat{H}_{kj}(t)$ representing the time derivatives of each of the coefficients of $U_{kj}$:
$U_{kj}(t) = \delta_{kj} - \frac{i}{\hbar}\,\hat{H}_{kj}(t)\,\Delta t .$
The probability amplitude $C_k(t)$ at time $t$ is therefore given by:
$C_k(t_0 + \Delta t) = \sum_j \left[ \delta_{kj} - \frac{i}{\hbar}\,\hat{H}_{kj}(t)\,\Delta t \right] C_j(t_0) .$
Since $\sum_j \delta_{kj}\, C_j(t_0) = C_k(t_0)$, the latter equation simplifies to:
$C_k(t_0 + \Delta t) - C_k(t_0) = -\frac{i}{\hbar} \sum_j \hat{H}_{kj}(t)\,\Delta t\; C_j(t_0) .$
Dividing both sides of the equation by $\Delta t$ makes it apparent that any state change is a sum of all possible perturbations that affect the system under investigation, from its starting condition at time $t_0$, including random events. Again, $C_k(t)$ is the probability amplitude $\langle k | \psi(t) \rangle$ of finding the state vector $\psi$ in one of the base states $k = j$ or $k \neq j$ at time $t$. And so, the time derivative of this probability function yields the path taken:
$\frac{C_k(t_0 + \Delta t) - C_k(t_0)}{\Delta t} = -\frac{i}{\hbar} \sum_j \hat{H}_{kj}(t)\, C_j(t_0) .$
The system is therefore described by the time-dependent Schrödinger equation, with any state change related to a change in energy distribution:
$i\hbar\,\frac{d\psi}{dt} = \hat{H}\,\psi ,$
where $i$ is the imaginary unit, $\hbar = h/2\pi$ is the reduced Planck constant, $\hat{H}$ is the Hamiltonian operator, which corresponds to the total energy of the system, and $\psi$ is the eigenvector describing the quantum system after time $t$. This equation describes the eigenstate of a system as a function of the amount of time $t$ that has passed and the amount of energy available for redistribution across the system, given by the Hamiltonian $\hat{H}$. This equation allows a system state to be defined as a distribution of probabilities representing the possible paths taken since the system state was last defined. By combining Equations (18) and (19), we find the probability of a state change is equal to the sum of all time derivatives of the normed state vector:
$\frac{C_k(t_0 + \Delta t) - C_k(t_0)}{\Delta t} = \frac{d\psi}{dt} .$
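To make the two-state formalism concrete, the following minimal sketch (with an illustrative toy Hamiltonian in natural units; none of these values come from the paper) builds the time-shift operator $U = e^{-i\hat{H}\Delta t/\hbar}$, evolves an off-state, and confirms that $|C_k|^2 + |C_j|^2 = 1$ and that $\Delta t = 0$ yields no state change:

```python
# A minimal sketch (illustrative toy Hamiltonian, natural units): the neuron as
# a two-state system evolving under the time-shift operator U = exp(-i*H*dt/hbar).
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[0.0, 0.3],      # H_kj couples the off-state |j> and on-state |k>
              [0.3, 1.0]])
psi0 = np.array([1.0, 0.0])    # start in the off-state |j>

for dt in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * dt / hbar)    # U(t0 + dt, t0); dt = 0 gives the identity
    psi = U @ psi0
    Cj, Ck = psi[0], psi[1]          # probability amplitudes
    print(dt, abs(Ck)**2, abs(Cj)**2 + abs(Ck)**2)   # unitarity: sum equals 1
```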

2.2. Modeling the Cortical Neuron Membrane Potential as a Function of Component Pure States

The dynamic membrane potential of a cortical neuron has been previously modeled with the Fokker–Planck equation [14], with eigenfunctions $\phi_i$ and coefficients $\alpha_i$ forming a complete set on which the membrane potential distribution $P(V,t)$ can be expanded:
$P(V,t) = \sum_i \alpha_i\, \phi_i(V)\, e^{-\lambda_i t / \tau_m} .$
Critically, all eigenvalues are real and strictly negative, with eigenfunctions obeying $\mathcal{L}(\phi_i) = -\lambda_i \phi_i$ and $\lambda_i > 0$. A key component of this model is $\tau_m$, the membrane time constant of the neuron, given by:
$\tau_m = \frac{c_m}{g_L + v_e a_e + v_i a_i} ,$
where $v_e$ and $v_i$ are the excitatory and inhibitory synaptic inputs, respectively; $a_e$ and $a_i$ are the excitatory and inhibitory synaptic strengths, respectively; $c_m$ is the membrane capacitance; and $g_L$ is the membrane conductance in the context of a leak current, given by:
$g_L = \frac{I_L}{V - V_L} .$
The total current is then classically provided by the sum:
$I_n = g_L(t)\,(V - V_L) + g_e(t)\,(V - E_e) + g_i(t)\,(V - E_i) ,$
where $E_e$ and $E_i$ are the reversal potentials of excitatory and inhibitory synapses, respectively. In spinal neurons, excitatory and inhibitory synaptic inputs dominate any changes to the membrane potential, and the leak current does not contribute materially to signaling outcomes. Meanwhile, in cortical neurons, excitatory and inhibitory inputs are balanced, such that leak currents dominate. To model the contribution of individual ions to the leak current, we can model the current density probabilistically:
$I_L = \frac{q}{2m}\left(\psi^*\, s\, \psi - \psi\, s\, \psi^*\right) ,$
where $m$ is the mass of each point charge $q$. In this model, the neuronal state $\psi_n$ evolves over time, with the signaling outcome at time $t$ a function of all ion states. Notably, the state of each ion in the system $\psi_i$ also evolves with time. The time-dependent Schrödinger equation for the position of each ion is:
$i\hbar\,\frac{d}{dt}\,\psi(r,t) = \hat{H}\,\psi(r,t) .$
And the time-dependent Schrödinger equation for the momentum of each ion is:
$i\hbar\,\frac{d}{dt}\,\psi(s,t) = \hat{H}\,\psi(s,t) .$
The position of each ion is given by a distribution of probability amplitudes along the x, y, and z axes. And so, the momentum of each ion $s$ is provided by the derivative of the wavefunction with respect to its position $r$:
$s = -i\hbar\,\frac{d}{dr} = -i\hbar\left(\hat{x}\,\frac{\partial}{\partial x} + \hat{y}\,\frac{\partial}{\partial y} + \hat{z}\,\frac{\partial}{\partial z}\right) = -i\hbar\,\nabla .$
Since mass, energy, and electrical charge are conserved, we can employ the continuity equation:
$\frac{\partial \rho}{\partial t} = -\nabla_r \cdot I_L ,$
where $\rho = |\psi(r,t)|^2$. Therefore:
$\frac{\partial\, |\psi(r,t)|^2}{\partial t} = -\nabla_r \cdot \left[ \frac{q}{2m}\left(\psi^*\, s\, \psi - \psi\, s\, \psi^*\right) \right] .$
Since $\nabla_r = s / i\hbar$:
$\frac{\partial\, |\psi(r,t)|^2}{\partial t} = \psi^*\, \frac{q\, s^2}{2 m\, i\hbar}\, \psi - \psi\, \frac{q\, s^2}{2 m\, i\hbar}\, \psi^* .$
The time-dependent Schrödinger equation to describe the Hamiltonian of the system, with respect to momentum, is:
$i\hbar\,\frac{\partial \psi}{\partial t} = \frac{s^2}{2m}\,\psi .$
And its complex conjugate is:
$-i\hbar\,\frac{\partial \psi^*}{\partial t} = \frac{s^2}{2m}\,\psi^* .$
Therefore:
$\frac{\partial \rho}{\partial t} = \frac{\partial\, |\psi(r,t)|^2}{\partial t} = \psi^*\, q\, \frac{\partial \psi}{\partial t} + \psi\, q\, \frac{\partial \psi^*}{\partial t} .$
And so, the sum of all synaptic leak currents for a given neuron at time $t$ is:
$I_L = \sum q\, \frac{\partial \rho / \partial t}{\partial \psi / \partial r} .$
When excitatory and inhibitory inputs are balanced, during a cortical up-state, the current density of a neuron is effectively described by the sum of all leak currents:
$I_n = \sum q\, \frac{\partial \rho / \partial t}{\partial \psi / \partial r} .$
Therefore, the probability of the neuron crossing action potential threshold ($V_{th}$) is given by the relationship between current and voltage, with any negative eigenvalues for $\psi(r,t)$ prompting the neuron to return to its resting potential:
$V - V_{th} = \frac{1}{g_L} \sum q\, \frac{\partial \rho / \partial t}{\partial \psi / \partial r} .$
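For the classical quantities in this section, a minimal numerical sketch (using assumed, textbook-scale parameter values that do not come from the paper) illustrates the membrane time constant and the conductance-based total current; with balanced excitation and inhibition the net current vanishes near threshold, consistent with the up-state picture above:

```python
# A minimal sketch of the classical quantities above (tau_m, g_L, I_n), using
# illustrative textbook-scale values; none of these numbers come from the paper.
c_m = 200e-12      # membrane capacitance (F), assumed
g_L = 10e-9        # leak conductance (S), assumed
g_e = 5e-9         # effective excitatory conductance v_e * a_e (S), assumed
g_i = 5e-9         # effective inhibitory conductance v_i * a_i (S), assumed

tau_m = c_m / (g_L + g_e + g_i)    # membrane time constant: 10 ms here
print(tau_m)

V, V_L, E_e, E_i = -55e-3, -70e-3, 0.0, -80e-3   # potentials (V), typical values
I_n = g_L*(V - V_L) + g_e*(V - E_e) + g_i*(V - E_i)
print(I_n)   # ~0 A: with balanced inputs, the net current vanishes near threshold
```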

2.3. Modeling the Information Encoded by the Multi-State System, in Terms of von Neumann Entropy

During cortical up-states, neurons actively maintain a resting potential near the voltage threshold for firing an action potential [5], permitting stochastic ion leak to gate a state change in the computational unit [3,4]. At time $t$, the neuron either has reached voltage threshold, or it has not; there is some probability of either outcome. The neuron is a two-state quantum system, dependent on the inherently probabilistic position and momentum of each ion in the vicinity. To model the system at either the neuron level or the ion level, $\rho_x$ is defined as the probability of an object transitioning from one pure state to another:
$\rho_x := |\psi_x\rangle \langle \psi_x| .$
The density matrix $\rho$ is composed of an ensemble of mutually orthogonal pure states $\rho_x$, each having some probability of occurring, $p_x$:
$\rho = \sum_x p_x\, \rho_x = \sum_x p_x\, |\psi_x\rangle \langle \psi_x| .$
Because the cortical neuron is sensitive to quantum-level events, it remains suspended in a state of uncertainty—physically encoding ‘information’ (the von Neumann entropy of the system, or the mixed sum of all component pure states). The quantity of information encoded by the neuron is calculated by tracing the volume of probability amplitudes across a high-dimensional density matrix:
$S(\rho) = -\mathrm{Tr}(\rho \ln \rho) .$
Here, the system macro-state is the mixed sum of all component pure micro-states, or the mixed sum of all outer products multiplied by their transition probabilities. The inner product $\langle a_y | \psi \rangle$ provides the probability amplitude that the state vector $\psi_x$ assigns to the eigenvector $a_y$. As such, the probability of measuring a certain eigenvalue $a_y$ equals:
$p(a_y) = \langle a_y |\, \rho\, | a_y \rangle .$
The object is in some state at time $t_0$, and it has some probability of being found in another state at time $t$, after being transiently defined as a distribution of possible states. The probability of finding a system in any one eigenstate can be calculated by applying the Born rule, which equates the squared modulus of the inner product between the state vector and an eigenvector to the probability of transition to that particular actual state [18]. This rule states that any measurement of the observable has a probability $p(\psi \to a)$ of being equal to an exact value $a$, at time $t$ for the state vector $\psi$:
$p(\psi \to a(t)) = |\langle a | \psi(t) \rangle|^2 .$
The square of the absolute value of this wavefunction is a real number, but the wavefunction itself is a complex-valued probability amplitude, which exists along an axis orthogonal to all real eigenvalues. Every eigenvalue a defined along this axis is dependent on the amount of time that has passed. If sufficient constraints are present, each component pure state can be defined by resolving the Hamiltonian at time t. If sufficient constraints are not present, some component pure states will not be resolved, and some net amount of entropy will be produced by the system during that computational cycle.
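A minimal sketch of the quantities defined in this section, using assumed transition probabilities for a two-state mixture: the von Neumann entropy $S(\rho) = -\mathrm{Tr}(\rho\ln\rho)$ and a Born-rule probability $\langle a_y|\rho|a_y\rangle$ for an arbitrary measurement eigenvector:

```python
# A minimal sketch (assumed probabilities): von Neumann entropy of a two-state
# mixture, S(rho) = -Tr(rho ln rho), and a Born-rule probability <a_y|rho|a_y>.
import numpy as np

p_off, p_on = 0.7, 0.3                       # assumed transition probabilities
off = np.array([1.0, 0.0])
on  = np.array([0.0, 1.0])
rho = p_off * np.outer(off, off) + p_on * np.outer(on, on)

evals = np.linalg.eigvalsh(rho)
S = -sum(v * np.log(v) for v in evals if v > 0)   # -Tr(rho ln rho)
print(S)                                          # ~0.611 nats for (0.7, 0.3)

a_y = np.array([1.0, 1.0]) / np.sqrt(2)      # an arbitrary measurement eigenvector
print(a_y @ rho @ a_y)                       # Born rule: <a_y|rho|a_y> = 0.5 here
```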

2.4. Generating a Distribution of Possible System States from Quantum Uncertainty

To understand the mechanics underlying this process, a calculation of possible system states across the neural network can be conceptualized by modeling the Poisson distribution of current density for each ion in the system, in relation to each region of neural membrane, thereby generating a distribution of possible paths. But because these events do not occur with equal probability or independently of the previous system state, it is more appropriate to model the probability distribution of position and momentum vectors as $r(x)$ and $s(x)$, respectively, each defined across three spatial axes, with a weight function $w(x) > 0$:
$(r, s) = \int r(x)\, s(x)\, w(x)\, dx .$
In this Sturm–Liouville system, the weight function $w(x)$ mathematically represents a quantum harmonic oscillator. $w(x)$ embodies the quantum uncertainty of the ion state and naturally generates a Hilbert space for each position and momentum eigenvector. If this operator is symmetric, or Hermitian, such that:
$(r, Ls) = \int r(x)\, L(s(x))\, w(x)\, dx = \int L(r(x))\, s(x)\, w(x)\, dx = (Lr, s) ,$
then all polynomials will form complete sets in Hilbert space, with real eigenvalues and orthonormal eigenfunctions, providing solutions to the second-order linear differential equation given by:
$L u = \lambda u ,$
in which $\lambda$ is a constant and $L$ is a Hermitian operator defined by the real functions $\alpha$, $\beta$, and $\gamma$ of $x$:
$L = \alpha(x)\, \frac{d^2}{dx^2} + \beta(x)\, \frac{d}{dx} + \gamma(x) .$
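As an illustration of the claim that the weight function generates an orthonormal basis, the sketch below checks numerically that the physicists' Hermite polynomials, the polynomial eigenfunctions associated with the quantum harmonic oscillator, are mutually orthogonal under the Gaussian weight $w(x) = e^{-x^2}$; the specific weight and polynomials are standard results, not taken from the paper:

```python
# A minimal sketch: the Hermite polynomials (harmonic-oscillator eigenfunctions)
# are orthogonal under the Sturm-Liouville weight w(x) = exp(-x^2).
import numpy as np
from numpy.polynomial.hermite import Hermite
from scipy.integrate import quad

def inner(m, n):
    Hm, Hn = Hermite.basis(m), Hermite.basis(n)   # physicists' H_m, H_n
    val, _ = quad(lambda x: Hm(x) * Hn(x) * np.exp(-x**2), -np.inf, np.inf)
    return val

print(inner(2, 3))   # ~0: orthogonal under the weight
print(inner(3, 3))   # sqrt(pi) * 2^3 * 3! ~ 85.1, the normalization constant
```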
The distribution of outcomes for $r(x)$ and $s(x)$ is given by $\rho$, the mixed sum of mutually orthogonal states $\rho_x$, each occurring with some probability $p_x$:
$\rho(r, s) = \left( \sum_x p_{r(x)}\, \rho_{r(x)} \right) \left( \sum_x p_{s(x)}\, \rho_{s(x)} \right) .$
The evolution of the system state $\rho$ over time $t$ is given by the Liouville–von Neumann equation:
$i\hbar\,\frac{d}{dt}\,\rho = i\hbar \sum_x p_x \left( |\dot{\psi}_x\rangle \langle \psi_x| + |\psi_x\rangle \langle \dot{\psi}_x| \right) ,$
with eigenstates provided by the Hamiltonian operator:
$i\hbar\,\frac{d}{dt}\,|\psi_x\rangle = \hat{H}\, |\psi_x\rangle ,$
and its Hermitian conjugate:
$-i\hbar\,\frac{d}{dt}\,\langle \psi_x| = \langle \psi_x|\, \hat{H} .$
Substituting the operator and the conjugate into the Liouville–von Neumann equation yields:
$i\hbar\,\frac{d}{dt}\,\rho = \hat{H} \rho_x - \rho_x \hat{H} = [\hat{H}, \rho_x] .$
Because time itself is intrinsically uncertain, the time evolution of the system state is provided by a time-shift operator $U$:
$U = e^{-i \hat{H} t / \hbar} .$
A unitary transformation, driven by this time-shift operator, allows the density matrix to evolve from $\rho_x(t_0)$ to $\rho_x(t)$:
$\rho_x(t_0) \rightarrow \rho_x(t) = U\, \rho_x(t_0)\, U^{\dagger} .$
Undergoing this unitary transformation yields eigenvalues for each component pure state, resolving the position and momentum of each ion, and the voltage state of each neuron. Those eigenvalues are only resolved as the Hamiltonian is resolved, as the dimensionality of the density matrix is reduced, as the wavefunction collapses, as linear correlations are extracted, or as a single system state is selected from a distribution of possible system states. These are all equivalent processes, representing the compression of ‘information’ or von Neumann entropy. And importantly, this multi-scale system has a single timepoint t, at which both ion states and neuron states are resolved (although some uncertainty or entropy may remain at the completion of each computational cycle).
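A minimal sketch of the unitary evolution just described, using a toy Hamiltonian not drawn from the paper: the density matrix evolves as $\rho(t) = U\rho(t_0)U^{\dagger}$, and because the transformation is unitary, the eigenvalue spectrum (and hence the von Neumann entropy) is preserved:

```python
# A minimal sketch (toy Hamiltonian): rho(t) = U rho(t0) U^dagger under the
# time-shift operator U = exp(-i*H*t/hbar); the spectrum of rho is preserved,
# so the von Neumann entropy is unchanged by the unitary transformation itself.
import numpy as np
from scipy.linalg import expm

hbar, t = 1.0, 1.5
H = np.array([[0.0, 0.4],
              [0.4, 1.0]])                 # illustrative 2x2 Hamiltonian
rho0 = np.diag([0.7, 0.3])                 # mixed initial state

U = expm(-1j * H * t / hbar)
rho_t = U @ rho0 @ U.conj().T              # unitary evolution of the density matrix

print(np.linalg.eigvalsh(rho0))            # [0.3, 0.7]
print(np.linalg.eigvalsh(rho_t).round(6))  # the same spectrum after evolution
```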

2.5. Reducing the Probability Distribution into a Single Observable System State

Because the position and momentum of any ion at time t depends on whether it has interacted with other ions or electrical fields during the time evolution, the interdependency of eigenvectors must be taken into account. By calculating the positions and momenta of all ions within a temperature-defined system in relation to each other, this approach ensures that ions do not turn out to occupy the same position, spin, and energy state, which would render them identical. This outcome would cause the universe to lose mass, and is therefore forbidden by the Pauli exclusion principle [19]. This conservation principle can be applied to any thermodynamic system, so the total amount of energy available to the system is no more than the amount of energy stored in the system plus the net amount of energy that has entered the system over time t. In the central nervous system, which traps thermal energy to drive computational work, the distribution of system states is represented by the Hamiltonian operator:
$i\hbar\,\frac{d}{dt}\,\rho(t) = \hat{H}\,\rho(t) .$
In a far-from-equilibrium system that is thermodynamically coupled to its surrounding environment, one cannot assume the Brownian motion of ions occurs at the classical limit, and so one must apply the quantum mechanical formulation of the Fokker–Planck equations, known as the Caldeira–Leggett model [20,21,22]. Here, the Hamiltonian operator accounts for the position $r$ and momentum $s$ of each ion with mass $m$ and charge $q$, moving in a potential $V(r)$, where $\omega$ is the frequency of a quantum harmonic oscillator and $c$ is a coupling constant:
$\hat{H}(r, s) = \frac{s^2}{2m} + V(r) + \frac{1}{2} \sum_t \left[ \frac{(s_t - c\, r_t)^2}{m} + m\, \omega^2\, r_t^2 \right] .$
Since the state of each ion depends (in part) on the state of all other ions, the whole multi-scale system must be considered together, with ion positions dependent on membrane voltages and vice-versa. This combined wavefunction relates each state vector $\psi$ to the passage of time $t$ and the Hamiltonian operator $\hat{H}$. The spectrum of the Hamiltonian operator is the set of possible outcomes at time $t$. The Hamiltonian operator $\hat{H}$ is related to the Lagrangian function of position $r$, its time derivative $\dot{r}$, and time $t$. It is calculated by taking the Legendre transform, in order to minimize the action necessary to effect change:
$\hat{H}(r_i, s_i, t) = \sum_{i=1}^{n} s_i\, \dot{r}_i - L(r_i, \dot{r}_i, t) .$
The vector spaces represented by $\dot{r}$ and $\dot{s}$ are generated by the sum of all perturbances and quantum oscillations, which are mathematically represented by the weight function within the Sturm–Liouville equation. As a result, the derivative of the Hamiltonian operator is related to any changes in position, momentum, and time, and therefore can be calculated by taking the partial derivatives of these eigenvectors:
$d\hat{H} = \frac{\partial \hat{H}}{\partial r_i}\, dr_i + \frac{\partial \hat{H}}{\partial s_i}\, ds_i + \frac{\partial \hat{H}}{\partial t}\, dt .$
The phase space distribution $\rho(r, s)$ describes the probability of a particular system state being selected from the total phase space volume $d^n r\, d^n s$. To describe this phase space volume, the Liouville equation yields the evolution of $\rho(r, s)$ over time $t$:
$\frac{\partial \rho}{\partial t} + \sum_{i=1}^{n} \left( \frac{\partial \rho}{\partial r_i}\, \dot{r}_i + \frac{\partial \rho}{\partial s_i}\, \dot{s}_i \right) = \frac{d\rho}{dt} = 0 .$
As a result of this geometrical constraint, the state vectors $(\rho, \rho \dot{r}, \rho \dot{s})$ are conserved across the system, and the vector field $(\dot{r}, \dot{s})$ has zero divergence. This permits temperature and energy to be conserved across the system as well. The continuity equation is given by:
$\frac{\partial \rho}{\partial t} + \sum_{i=1}^{n} \left( \frac{\partial (\rho\, \dot{r}_i)}{\partial r_i} + \frac{\partial (\rho\, \dot{s}_i)}{\partial s_i} \right) = 0 .$
Because $d\rho/dt$ equals zero, the continuity of the probability density can be written as:
$\rho \sum_{i=1}^{n} \left( \frac{\partial \dot{r}_i}{\partial r_i} + \frac{\partial \dot{s}_i}{\partial s_i} \right) = 0 .$
And because the vector spaces $\dot{r}$ and $\dot{s}$ are defined by Hamilton's equations:
$\dot{r}_i = \frac{\partial \hat{H}}{\partial s_i} ,$
and:
$\dot{s}_i = -\frac{\partial \hat{H}}{\partial r_i} ,$
the continuity laws persist while taking the second derivative of the Hamiltonian operator. This process identifies the observable boundary of the total volume of possible system states:
$\rho \sum_{i=1}^{n} \left( \frac{\partial^2 \hat{H}}{\partial r_i\, \partial s_i} - \frac{\partial^2 \hat{H}}{\partial s_i\, \partial r_i} \right) = 0 .$
The relationship between the density matrix $\rho$ and the Hamiltonian operator describing the distribution of energy across the system becomes apparent in this equation, as does the relationship between position and momentum vectors and the Hamiltonian operator. Once a trace is taken across the density matrices representing $r(x)$ and $s(x)$, the weight function $w(x)$ underpinning the probability distribution can be solved in relation to the other parameters. And so, during a system-wide computation, eigenvalues are selected and eigenstates $\psi_{r(x)}$ and $\psi_{s(x)}$ are transiently resolved. It should be noted that, in this formulation of temporally proceeding events, the operators drive the computational cycle, rather than the vector states. That is, the Hamiltonian operator may evolve over time (permitting changes in position and momentum) while the vector states themselves remain time-independent. Any events occurring within a time evolution may therefore contribute to the outcome of a computation. This includes any quantum oscillations and any classical input currents occurring during that time evolution.
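The zero-divergence property derived above can be checked numerically. The sketch below (a harmonic oscillator with illustrative parameters, not a model of any neural quantity) integrates Hamilton's equations with a symplectic step and tracks the area of a small phase-space patch, which stays constant in accordance with Liouville's theorem:

```python
# A minimal sketch (harmonic oscillator, illustrative parameters): Hamilton's
# equations r_dot = dH/ds, s_dot = -dH/dr give a divergence-free flow, so the
# area of a small phase-space patch is conserved (Liouville's theorem).
m, omega, dt = 1.0, 2.0, 1e-4

def step(r, s):
    s = s - m * omega**2 * r * dt    # s_dot = -dH/dr (symplectic Euler)
    r = r + (s / m) * dt             # r_dot =  dH/ds
    return r, s

corners = [(1.0, 0.0), (1.01, 0.0), (1.0, 0.01)]   # a small patch in (r, s)
for _ in range(100_000):
    corners = [step(r, s) for r, s in corners]

(r0, s0), (r1, s1), (r2, s2) = corners
area = abs((r1 - r0) * (s2 - s0) - (r2 - r0) * (s1 - s0))
print(area)   # ~1e-4, the initial area: d(rho)/dt = 0 along the flow
```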

2.6. Restoring Uncertainty after the System State Is Transiently Defined

By relating any probabilistic changes in the position and momentum of each ion to the distribution of energy across the system, the Hamiltonian operator effectively portrays all observable outcomes. The unitary change of basis guiding this computational process is defined by the phase factor $\exp(-i \hat{H} t / \hbar)$. Therefore, any reversal of the direction of time causes positive energies to become negative—a result that is disallowed by the first law of thermodynamics, unless the spin is also reversed with the introduction of an electric dipole moment [23]. An electric dipole moment is a spontaneous shift in the energy state of an electron in the presence of an external electric field; this event is expected to occur here, as the probability distribution is reduced and eigenvalues are calculated.
Initially, as the system was perturbed by its external environment, all component pure states (and the Hamiltonian operator) were integrated to create a volume of probability. Now, we can take the derivative of that volume of probability (and the Hamiltonian operator) to define eigenvalues on the boundary region of that high-dimensional probability distribution. Doing so reduces the distribution of system states into a single actualized system state, with a non-deterministic outcome:
$\frac{dE}{d\lambda} = \left\langle \psi_\lambda \left| \frac{d\hat{H}}{d\lambda} \right| \psi_\lambda \right\rangle .$
This equation is the Hellmann–Feynman theorem, and it lies at the core of quantum electrodynamics [24,25]. Here, any change in the position or momentum of an ion is proportional to the change in the Hamiltonian, because the Hamiltonian corresponds to all potential and kinetic energies in the system. It is useful to note that the total amount of energy in the system is not what is uncertain, but rather how this energy is distributed. The measure of possible system states, or energetic configurations, is directly related to the total entropy encoded by the system at the beginning of the computational cycle [26]. The system state, upon completion of the computational cycle, is related to any parameter $\lambda$ that contributes to the Hamiltonian operator, such as any shift in atomic position, momentum, orbital, or electrical field strength. And so, the derivative of the total energy in the system $E$ is related to the inner product of the state vectors $\psi$ and the derivative of the Hamiltonian $\hat{H}$, both with respect to the parameter $\lambda$.
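The theorem is easy to verify numerically. The sketch below (a toy parameterized 2x2 Hamiltonian, unrelated to any neural quantity) compares $\langle\psi_\lambda|d\hat{H}/d\lambda|\psi_\lambda\rangle$ against a finite-difference derivative of the ground-state energy:

```python
# A minimal sketch (toy 2x2 Hamiltonian H(lambda)): the Hellmann-Feynman
# derivative matches a finite-difference derivative of the eigenvalue.
import numpy as np

def H(lam):
    return np.array([[lam, 0.5],
                     [0.5, -lam]])

def ground(lam):
    E, V = np.linalg.eigh(H(lam))
    return E[0], V[:, 0]               # ground-state energy and eigenvector

lam, eps = 0.8, 1e-6
E0, psi = ground(lam)

dH = (H(lam + eps) - H(lam - eps)) / (2 * eps)        # dH/dlambda
hf = psi @ dH @ psi                                   # <psi|dH/dlambda|psi>
fd = (ground(lam + eps)[0] - ground(lam - eps)[0]) / (2 * eps)
print(hf, fd)    # both ~ -0.848: dE/dlambda = <psi|dH/dlambda|psi>
```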
In developing this solution to the Schrödinger equation, Richard Feynman discovered that any wavefunction collapse has a discrete effect on the component atoms. This event is accompanied by a shift in the charge distribution across each atom in the system, in relation to their newly-defined distance from each other [24,25]. For two atoms interacting at a separation $D$, the induced dipole moment for each atom varies as $1/D^7$. The smaller the distance between atoms, the larger the dipole moment, and the larger the boost to angular momentum. As a result, any perturbation to the system—for example, an ion channel opening or a shift in the local electrical field—changes the state of the system, in a manner related to the original unperturbed state and the derivative of the Hamiltonian. The assignment of atomic locations, relative to other atoms in the system, is paired with an alteration in the organization of electrons around the nucleus. This event prompts van der Waals forces between neighboring atoms [27,28].
In summary: By integrating complex-valued probability amplitudes, the system generates a quantity of quantum information. By taking the derivative of this volume, the system identifies eigenvalues on the observable boundary region of that high-dimensional vector space. As eigenvalues are selected, all other eigenstates are eliminated. At that point, the wavefunction collapses and the information held by the system is abruptly compressed. In accordance with the Landauer principle, any compression of information is paired with a release of free energy [29,30,31,32]. This free energy can be used to perform work in the system, e.g., restoring the resting membrane potential as uncertainty within the neuron’s receptive field is reduced. Critically, this bidirectional exchange of free energy and entropy ensures that the total energy of the system is conserved over the course of the computation, while respecting the first and second laws of thermodynamics [26].
By moving from a prior state to a posterior state, the system actualizes a solution to a computational problem: What is the optimal system state to encode the surrounding environment? However, this defined system state is transient, because it is paired with an immediate restoration of uncertainty, through an alteration in the charge distribution around each atom. The dipole moment induced by wavefunction collapse then prompts new atomic interactions—and for successive measurements with discrete results, which do not destroy the entanglement of the particle system, each measurement with value a establishes the basis for a new state, which then undergoes subsequent time evolution, in accordance with the von Neumann projection postulate [33]. And so, immediately after a wavefunction is resolved, the system again begins to evolve over time, forming a new probabilistic system state, and undergoing another computational cycle.

2.7. Converting Probabilistic System States to Temporally Irreversible Signaling Outcomes

Taking the derivative of the Hamiltonian involves taking the derivative of all component pure states, with respect to all perturbations since the last detection event. This process compresses information, yielding eigenvalues on the boundary region of the high-dimensional probability distribution. During this computational process, the complex-valued distribution of possible system states is reduced, and component pure states throughout the system (e.g., ion position and momentum) are transiently defined. But this defined multi-scale system state immediately becomes the past, as the charge distribution across each atom is altered and a new probabilistic system state emerges.
Since the expectation value for the energy of a given atom, $\langle a_i | \rho | a_i \rangle$, is proportional to the expectation value for its spin, $\langle \psi | k | \psi \rangle$, the energy shift due to the induced dipole moment causes a sign shift in both values, and time symmetry is broken. Essentially, as ground states lose degeneracy, the resulting dipole moments should alter the attraction between sodium ions outside the neuron and atoms comprising the lipid bilayer of the neural membrane. The resulting van der Waals forces are expected to cause a permitted violation of time-reversal symmetry, thereby effecting causation within the system as information is compressed, eigenvalues are identified, and leak currents are observed.
Yet a system will only demonstrate violations of time-reversal symmetry if quantum uncertainty contributes to thermal fluctuation–dissipation dynamics. Only if coincident upstream signaling events and random oscillations trigger a change in membrane resistance, within the temporal parameters of ion dissipation and ion pump rectification kinetics, will inherently probabilistic events contribute to gating a neuronal state change.

2.8. Conditions under Which Quantum Fluctuations Contribute to Dissipation Dynamics

A physical structure that actively generates an electrochemical resting potential will generate entropy and heat. The neural membrane provides resistance, and therefore some quantity of free energy will be dissipated into entropy as electrical interactions occur. However, in a heat-trapping system, this energy is not necessarily irreversibly lost; it can be used to do work [34]. The Callen–Welton fluctuation–dissipation theorem asserts that thermal fluctuations drive a response function affecting the impedance of the structure, given by χ ( t ) :
$\chi(t) = -\beta\, \frac{dA(t)}{dt}\, \theta(t) ,$
where $\beta = 1/k_B T$, $\theta(t)$ is the Heaviside step function, and $A(t)$ is the expectation value of $O_k(t)$, an observable that is subject to thermal fluctuations in a dynamical system. If decoherence timescales within the system are longer than the timescales of ion dissipation and ionization dynamics, quantum uncertainty can materially contribute to ion behavior at the neuronal membrane [35]. That is, if a time-dependent change in the voltage of a neuron relies on any quantum fluctuations in the position or momentum of nearby ions, then the ensemble average of each observable (a measure of fluctuation, given by the commutator $[O_k(t), O_j(t_0)]$) will be related to the response function (a measure of dissipation, given by $\chi(t - t_0)$), as a function of time. This relationship is given by the Kubo fluctuation–dissipation formula [36,37]:
$\chi(t - t_0) = \frac{i}{\hbar}\, \theta(t - t_0)\, \langle [O_k(t), O_j(t_0)] \rangle .$
This response function, $\chi(t)$, can be written as a function of oscillatory events:
$\chi(t) = \int \frac{d\omega}{2\pi}\, e^{-i\omega t}\, \chi(\omega) .$
If $t < t_0$, the Heaviside step function $\theta(t - t_0)$ vanishes, so the entire response function $\chi(t)$ is zero. As such, the quantum states underpinning this spectral function can only causally contribute to dissipation dynamics as time moves in a forward direction. The Fourier transform of the response function provides for dissipation and fluctuation dynamics in the frequency domain, given by $\chi(\omega)$:
$\chi(\omega) = \frac{i}{\hbar} \int_0^{\infty} \mathrm{Tr}\left( e^{-\beta \hat{H}}\, [O_k(t), O_j(t_0)] \right) e^{i\omega t}\, dt .$
This time-dependent function provides the density of possible states for a particle system, which emerge in the presence of a perturbation or changing electrical field. And so, $[O_k(t), O_j(t_0)]$ is simply a description of how the density matrix, the wavefunction, or the Hamiltonian operator changes over some time evolution, compared with the probability that all energy states would rearrange in that same way under time reversal. As such, the response function $\chi(\omega)$ can be written in expanded form:
$\chi(\omega) = \frac{i}{\hbar} \int_0^{\infty} e^{i\omega t}\, dt \sum_{m,n} e^{-E_m \beta} \left[ \langle m | O_k | n \rangle \langle n | O_j | m \rangle\, e^{i(E_m - E_n)t/\hbar} - \langle m | O_j | n \rangle \langle n | O_k | m \rangle\, e^{i(E_n - E_m)t/\hbar} \right] .$
If the uncertainty in the position and momentum of an ion is sustained in the presence of a constantly changing electrical field, then quantum fluctuations may contribute to ion behavior, affecting the state of ions interacting with the electrochemical potential of the neural membrane. In this case, quantum fluctuations may contribute to the probability of a state change in the macro-scale computational unit, from an off-state to an on-state. However, the state of each neuron at the moment the Hamiltonian operator is resolved will govern its response. Only neurons in a cortical up-state, which allow stochastic events to gate a signaling outcome, may be nudged toward action potential threshold or away from it as uncertainty is abruptly reduced. Meanwhile, neurons receiving suprathreshold stimulation from a resting state will exhibit deterministic firing patterns.
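A minimal sketch of the Kubo formula above, evaluated for a toy two-level system with illustrative parameters (none drawn from the paper): the thermal average of the commutator gives a real-valued response for $t \geq 0$, and the Heaviside factor enforces zero response before the perturbation:

```python
# A minimal sketch (toy two-level system, illustrative beta and observable):
# the Kubo response chi(t) = (i/hbar) * theta(t) * <[O(t), O(0)]>, with the
# thermal average taken over exp(-beta*H).
import numpy as np
from scipy.linalg import expm

hbar, beta = 1.0, 2.0
H = np.diag([0.0, 1.0])                  # energy eigenvalues E_m = 0, 1
O = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # an observable subject to fluctuations

rho_th = expm(-beta * H)
rho_th /= np.trace(rho_th)               # thermal density matrix

def chi(t):
    if t < 0:
        return 0.0                       # theta(t): no response before the perturbation
    Ut = expm(-1j * H * t / hbar)
    O_t = Ut.conj().T @ O @ Ut           # Heisenberg-picture O(t)
    return (1j / hbar) * np.trace(rho_th @ (O_t @ O - O @ O_t))

for t in (-1.0, 0.5, 1.0, 2.0):
    print(t, complex(chi(t)))            # real-valued response for t >= 0
```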

2.9. Assumptions of the Model

2.9.1. Neurons Are Functionally Isolated but Remain Sensitive to External Perturbations

This model assumes that neurons are functionally isolated, but remain sensitive to external perturbations. The architecture of a deep layered neural network permits perturbations from outside the system itself, while the physiology of individual computational units within the deeper layers permits isolated computations to occur.
With regard to the network architecture: Neurons in the thalamus, which receive signals from the periphery, exhibit deterministic firing patterns that faithfully represent mappings within their receptive fields [38,39]. These neurons synapse onto deeper layers of the neural network, with signals cascading onto primary sensory neurons and higher-order cortical neurons, which increasingly integrate multiple inputs with noise to produce probabilistic outcomes [40,41]. Neurons in the motor cortex and basal ganglia integrate various inputs to initiate voluntary movement and inhibit other movement [42,43]. These neurons synapse onto downstream motor regions in the thalamus, reticular formation, and the spinal cord, with these signals converging onto peripheral neurons with deterministic outcomes that instruct the extension and flexion of specific muscle groups [44,45]. Essentially, higher-order neurons within the deepest layers of cortex receive messages from within the system itself, with these perturbations representing the outside world.
With regard to the neuronal physiology: In cortical neurons, excitatory and inhibitory inputs are actively balanced, so that thermal fluctuations dominate any exit from equilibrium potential [5]. While a suprathreshold stimulus can certainly lead to spiking behavior in neurons of primary sensory cortex [46,47], random electrical noise drives spiking behavior in higher-order cortical neurons [3,4]. In these areas, signaling outcomes are more probabilistic [48,49]. For this reason, modeling the contribution of noise to signaling outcomes is a valid approach. Notably, in peripheral neurons and superficial layers of the cortical circuitry, random noise is insufficient to affect signaling outcomes, and the computational units themselves are highly robust to noise, so thermal fluctuations do not materially contribute to neuronal outcomes, and these terms drop from the equation. However, a neuron in cortical up-state, hovering near action potential threshold and allowing random electrical noise to gate signaling outcomes, is highly sensitive to noise. In this case, the noise must be mechanistically taken into account to effectively model the probability of the neuron transitioning from off-state to on-state.

2.9.2. Uncertainty in the State of Individual Ions Affects the Voltage State of the Neuron

This model assumes that a shift in the state of very few ions can affect the likelihood of a neuron firing an action potential. When neurons are in a cortical up-state, a 1 mV increase in membrane potential can easily push a neuron over threshold [15]. This change in voltage corresponds to 1 pA of current, or only 10 sodium ions moving through a channel over the course of a microsecond [16]. Since the random motion of just a few sodium ions can push a cortical neuron over action potential threshold, and these cells have significant leak currents [50], quantum-scale events appear to affect macro-scale outcomes at the level of the computational unit.
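A quick arithmetic check of these figures, using only the values stated in the text:

```python
# A quick arithmetic check of the figures above, using the values in the text:
# ~1 pA flowing for ~1 microsecond moves only a handful of monovalent ions.
e = 1.602e-19      # elementary charge (C)
I = 1e-12          # current (A)
dt = 1e-6          # duration (s)

print(I * dt / e)  # ~6 ions, i.e., order 10 sodium ions through one channel
```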
Neurons in cortical up-state are expected to encode maximal amounts of information, with high probability amplitudes for each outcome $k = j$ and $k \neq j$. Yet for ions, the probability amplitudes across both position and momentum space will be zero almost everywhere. For ions located outside the brain, the probability that an ion will be located within a cortical neuron in the next instant is negligible. And for ions already located within the brain, the probability amplitudes are largely centered around the most recent position and momentum values, except in the context of nearby ion channel opening or a shift in the local electrical field. An unlikely outcome for an ion might be a positional change corresponding to leak across a neural membrane, with this event affecting the voltage state of the neuron.
Here, each ion is described by a wavefunction, or a distribution of eigenstates. During the unitary transformation, any non-distinguishable states (e.g., linear correlations between pure states) are eliminated, thereby compressing the von Neumann entropy of the system. As eigenvalues for each ion are assigned, the voltage of the neuronal membrane is defined, and the cell either reaches action potential threshold or not. If uncertainty is reduced during the computation, and eigenvalues are assigned, then information is compressed, free energy is released, and the neuron restores its resting potential. If information cannot be compressed, and no eigenvalues can be assigned, then free energy is not released, and the neuron fires an action potential, thereby encoding ‘uncertainty’ in its receptive field.

2.9.3. The Estimated Decoherence Timescales Meet the Criteria for a Quantum System

The model for decoherence in neural systems, focusing on interactions between sodium ions and the neuronal membrane potential, was originally provided by Max Tegmark [35]. This model was recalculated using the newest Coulomb scattering data, which are presented in a sister paper, along with the matrix algebra formulation of quantum information generation and compression [51]. These estimates demonstrate that decoherence timescales at the neuronal membrane are indeed longer than ion dissipation rates, and therefore meet the criteria for a quantum system—but only for cortical neurons in up-state.
A cortical neuron in up-state is effectively isolated, because excitatory and inhibitory inputs are balanced, so random thermal fluctuations drive signaling outcomes. While there are upstream inputs, these upstream inputs do not drive signaling outcomes in cortical neurons—thermal fluctuations do. For this reason, each computational unit is modeled as a two-state quantum system, with some probability of transitioning from off-state to on-state. Of course, the neuron is not truly isolated; it exists within a circuit, and so a suprathreshold stimulus can still kick it over action potential threshold and cause it to fire. The concept of modeling the contribution of quantum-level noise to the state of the computational unit is always valid; however, in many circuits, the decoherence timescales are too short, or the random noise is insufficient to affect signaling outcomes, or the computational units themselves are robust to noise, so thermal fluctuations do not materially contribute to neuronal outcomes, and these terms naturally fall out as neurons obey classical dynamics.
Only in a cortical up-state is the uncertainty in both neuronal membrane potential and extracellular ion position expected to be sustained for sufficiently long timescales for linear correlations to be reduced—for the wavefunction to collapse, for information to be compressed, for free energy to be released, for atomic momentum to be altered—and for voltage changes in the computational unit to be observed, at the completion of the computation.

3. Results

3.1. The Expected Wavelength of Spontaneous Free Energy Release during Information Compression

If cortical neural networks engage in ambient-temperature quantum computation, then these far-from-equilibrium thermodynamic systems must bidirectionally exchange free energy for information, with any free energy expended on information generation during the initial stage of the thermodynamic computing cycle being partially recovered during the information compression stage. As such, discrete quantities of free energy should be released, local to any reduction in uncertainty, as an optimal system state is selected from some probability distribution. Any thermal fluctuation should correspond to a shift in the atomic dipole moment, boosting the angular momentum of individual ions. These events should therefore be observable. To evaluate this hypothesis, we can calculate the expected effects of quantum fluctuation–dissipation dynamics in the mammalian central nervous system.
If the perturbation to the state of an ion during some time evolution relies on any random changes to the spin or energy state of a component electron, then the ensemble average of each observable (a measure of fluctuation, given by $[\hat{x}(t), \hat{x}(0)]$) will be related to the response function (a measure of dissipation, given by $\chi(t)$) in the frequency domain [36]. This relationship essentially models how the oscillatory behavior of a quantum system, along an imaginary axis, affects the thermal dynamics of the observable system:
$\chi(t) = \frac{i}{\hbar}\, \theta(t - t_0)\, \langle [\hat{x}(t), \hat{x}(0)] \rangle ,$
where $\theta(t)$ is the Heaviside step function and the response $\chi(t)$ provides an expectation value for $x(t)$, which is a time-dependent ‘observable’ subject to thermal fluctuation in a dynamical system. The time-dependent equation is given by:
$f(t) = \int \frac{d\omega}{2\pi}\, e^{-i\omega t}\, f(\omega) ,$
and its Fourier transform is given by:
$f(\omega) = \int f(t)\, e^{i\omega t}\, dt .$
If the eigenvalues are to be real, the sum of the real part and the imaginary part of the response function over some time evolution $\chi(t - t_0)$ must also be real. The full response function is given by:
$\chi(\omega) = \mathrm{Re}\, \chi(\omega) + i\, \mathrm{Im}\, \chi(\omega) .$
The real part of the response function is given by:
$\mathrm{Re}\, \chi(\omega) = \frac{1}{2} \left[ \chi(\omega) + \chi^*(\omega) \right] = \frac{1}{2} \int dt\, e^{i\omega t} \left[ \chi(t) + \chi(-t) \right] .$
And the imaginary part of the response function is given by:
$\mathrm{Im}\, \chi(\omega) = -\frac{i}{2} \left[ \chi(\omega) - \chi^*(\omega) \right] = -\frac{i}{2} \int dt\, e^{i\omega t} \left[ \chi(t) - \chi(-t) \right] .$
The energy of quantum fluctuations $E(\omega)$ is related to the frequency $\omega$:
$E(\omega) = \frac{1}{2} \hbar \omega \coth\!\left( \frac{1}{2} \beta \hbar \omega \right) ,$
where $\beta = 1/k_B T$. The classical power spectrum is related to the complex-valued quantum spectral density in such a way that quantum noise can contribute to the local thermal density under certain conditions:
$\int \langle \hat{x}(t)\, \hat{x}(t_0) \rangle\, e^{i\omega t}\, dt = \beta E(\omega) \int \langle \hat{x}(t)\, \hat{x}(t_0) \rangle_{\mathrm{cl}}\, e^{i\omega t}\, dt .$
If E > k B T , with a high temperature, a broad distribution of electrons across energy states, and low occupancy of energy states, then the quantum contribution to ion behavior is negligible, and the behavior of ions will reduce to Boltzmann–Maxwell statistics. If E < k B T , with electrons at low frequencies obeying the Rayleigh–Jeans law, then the quantum contribution to ion behavior is also negligible, and again the behavior of ions will reduce to classical Boltzmann–Maxwell statistics. Only in cases of high particle density, when the energy held by an ion is greater than its chemical potential, and E > k B T , will quantum fluctuations contribute to ion behavior. This is predicted to be the case in the mammalian brain.
If quantum fluctuations do indeed contribute to ion behavior in biological systems, then $E$ must be greater than $k_B T$. Since:
$k_B T = (1.38 \times 10^{-23}\ \mathrm{J/K})(310\ \mathrm{K}) = 4.28 \times 10^{-21}\ \mathrm{J} ,$
and:
$f = \frac{E}{h} = \frac{4.28 \times 10^{-21}\ \mathrm{kg\, m^2/s^2}}{6.63 \times 10^{-34}\ \mathrm{kg\, m^2/s}} = 6.46 \times 10^{12}\ \mathrm{Hz} ,$
it is expected that high-energy particles of $E > k_B T$ will be observed in the central nervous system at 37 °C (310 K). Specifically, these high-energy particles should have a frequency of $f > 6.46 \times 10^{12}$ Hz, a wavelength of $\lambda < 46$ microns, or an energy of $E > 0.0267$ eV, within the infrared light spectrum. Spontaneous emissions of photons in this range have indeed been observed in mammalian brain tissue [52,53,54,55] and infrared stimulation of the brain has been shown to have a functional effect on neural activity [56,57,58]. Further studies are needed to measure the exact wavelengths of these photon emissions and temporally correlate these events with neuronal signaling outcomes.
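The threshold estimate can be reproduced directly from physical constants:

```python
# Reproducing the estimate above from physical constants: thermal energy at
# 310 K, the matching photon frequency f = E/h, and wavelength lambda = c/f.
kB = 1.381e-23     # Boltzmann constant (J/K)
h  = 6.626e-34     # Planck constant (J s)
c  = 2.998e8       # speed of light (m/s)
T  = 310.0         # body temperature (K)

E = kB * T                     # 4.28e-21 J
f = E / h                      # 6.46e12 Hz
lam = c / f                    # 4.64e-5 m, i.e., ~46 microns
print(E, f, lam * 1e6, E / 1.602e-19)   # last value: ~0.0267 eV (infrared)
```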
In summary, it is predicted that photons, specifically in the infrared range of the electromagnetic spectrum, should be released upon information compression in the mammalian brain. Since the first law of thermodynamics states that energy cannot be created nor destroyed, any system capable of reducing entropy to achieve a non-deterministic computation must release free energy upon information compression. If quantum computing does occur in cortical neural networks, then spontaneous thermal fluctuations should be observed, locally to any reduction in uncertainty. These thermal fluctuations are expected to drive synchronous firing of a statistically random ensemble of neurons across the network.
Therefore, in this approach, the reduction in thermodynamic entropy is paired with both the selection of an optimal system state from a large probability distribution (one that is thermodynamically favored to correlate with the surrounding environment) and the release of thermal free energy (which is used to physically instantiate the solution to that computational problem).

3.2. Specific Predictions of This Model

If cortical neural networks are indeed quantum computing systems rather than classical computing systems, then evidence of quantum information generation and compression should be observed in the neocortex. As such, this theory makes specific predictions with regard to the energy efficiency of the brain [17]; with regard to coulomb scattering and decoherence timescales at the neuronal membrane [51]; and with regard to the expected effects of electromagnetic stimulation and pharmacological intervention in cortical neural networks [59]. Some additional specific predictions of the theoretical framework, prompted by the present model, include the following.

3.2.1. Thermal Free Energy Is Spontaneously Released during Computation as Information Is Compressed

Infrared particles with wavelengths of $\lambda < 46$ microns or $f > 6.46 \times 10^{12}$ Hz should spontaneously appear at the neural membrane during cortical information processing. This prediction must be tested with sensitive infrared detection devices rather than classical electrodes or imaging systems; the spontaneous release of infrared-wavelength particles should be observed in the brain as uncertainty is resolved into signaling outcomes. A quantitative increase in these particles should be observed, for example, upon perceptual recognition of a highly uncertain visual or auditory stimulus, with a strong temporal correspondence to P300 event-related potentials in the cerebral cortex. By contrast, this spontaneous thermal free energy release should not occur in the case of an epileptic seizure—when constitutive ion channel activation, rather than information processing, leads to highly synchronized neural activity across the cerebral cortex. Of course, spontaneous emissions of photons in this range have been observed in mammalian brain tissue [52,53,54,55], and infrared stimulation of the brain has been shown to have a functional effect on neural activity [56,57,58], but further studies are needed to measure the exact wavelengths of these photon emissions and evaluate whether these events are temporally correlated with neuronal signaling outcomes. If the brain does cyclically generate and compress entropy, the system should demonstrate much higher energy efficiency than expected under classical conditions (Table 1).

3.2.2. The Spontaneous Release of Thermal Free Energy during Information Compression Prompts Synchronized Firing across the Neural Network

This model describes both a computational process and a thermodynamic process, since information compression is both the selection of an optimal system state from a large probability distribution and the reduction of entropy. This system-wide thermocomputational event, resolving the uncertainty in component pure states, is predicted to lead to a spontaneous release of free energy. Neurons that reduce uncertainty during the computation should therefore recover free energy and restore their resting potential; neurons that retain uncertainty in their receptive field should distribute free energy to entropy and fire a signal. This system-wide computation should therefore result in the synchronous firing of a statistically random ensemble of neurons across the network. Synchronous neural activity is indeed observed at a range of frequencies in cortical neural networks, and is considered a correlate of higher-order cognitive processes [61,62]. Critically, the high frequency oscillations which are observed during perceptual tasks cannot be modeled by coupling and recruitment under classical assumptions and timescales [63,64]. Here, information compression events are predicted to prompt spontaneous synchronized activity across sparsely distributed neurons. This coordinated activity is predicted to occur in cells that allow random noise to gate signaling outcomes (e.g., cortical neural circuits) but not in cells that act entirely deterministically (e.g., spinal reflex circuits). While this oscillatory activity, occurring at a range of frequencies, has been observed in the mammalian brain, additional studies could explore the potential correlation between probabilistic coding and network-level activity in avian and cephalopod species. If synchronous activity is caused by classical methods of signal propagation, rather than being the result of a system-wide non-deterministic computation, then both classical simulations of cortical neural networks and spinal reflex circuits should readily demonstrate fast and slow oscillations. If instead, synchronous activity across the network is caused by information compression paired with free energy release, then synchronous firing should be eliminated by absorption of the predicted wavelengths and should be prompted by introduction of these wavelengths. In short, if a classical model is correct, then classical mechanisms should readily generate oscillations at a range of nested frequencies, and if the present model is correct, then cyclical information generation and compression should readily generate these oscillations (Table 2).

4. Discussion

The neuron is classically viewed as a transistor, always in either an on-state or an off-state. Here, the cortical neuron is modeled as a qubit, with some probability of transitioning from an off-state to an on-state over some time evolution. In this new approach, a state change in the computational unit relies on inherently probabilistic events at the ion level.
Noise plays a critical role in cortical neural networks and other complex dynamical systems, facilitating information transmission through a mechanism of stochastic resonance [67,68]. Random electrical noise prompts phase transitions in individual computational units, leading to the highly variable interspike intervals that are observed in cortical neurons [69]. Random electrical noise also leads to synchronized activity and large-scale oscillations at the systems level [70]. Noise-induced resonant activation and noise-enhanced stability are observed in both neural networks [71] and memristive architectures [72].
The result of a noisy computational process, at the network level, is ‘sparse populational coding’—the synchronous firing of a statistically random ensemble of sparsely distributed neurons across the network [73,74]. This neuronal activity corresponds with the realization of a multi-sensory percept [62,63,75,76]. Notably, sparse populational coding and percept realization are observed in neural networks that retain sensitivity to random electrical noise (such as the mammalian neocortex), but not in neural networks that are robust to random electrical noise (such as spinal reflex circuits).
Many previous models have used classical methods to describe the interspike interval or the synchronous firing of statistically random ensembles of neurons at the network level [9,10,11,12,13,14,67,68,69,70,71,72], yet none of these methods provide mechanistic insight into how individual cortical neurons allow thermoelectric noise to gate signaling outcomes. The present approach demonstrates how noisy coding at the synapse is mechanistically linked to probabilistic signaling outcomes. This method makes specific predictions that can be tested in the laboratory, to evaluate whether the brain is a classical system (as typically assumed) or a multi-scale quantum system (as presented here).
Hameroff and Penrose have also proposed the brain to be a quantum system, with orchestrated objective reduction of the system state corresponding to conscious experience [77]. Indeed, the collapse of alternative eigenstates is a key property of cortical neural networks in both OrchOR theory and the present model. However, OrchOR theory focuses on the role of quantum decoherence on microtubule dynamics, while the present report models the contribution of random electronic noise to the cortical neuron membrane potential. The former approach does not connect with established mechanisms of neuronal information processing, while the latter approach is highly compatible with the known contributions of Brownian motion to cortical neuron signaling outcomes.
While the idea of the brain as a quantum system has often been considered highly controversial, researchers in computational neuroscience have recently adopted the concept to drive significant advances in neural network modeling and machine learning [78,79,80], building on previous efforts to incorporate noise into the Hodgkin–Huxley model [9,10,11,81]. However, these approaches, when implemented on classical computing architecture, do not permit physical information compression and free energy release. To achieve this computational process in engineered neural network architecture, a network of computational units with probabilistic gating behavior and decoherence timescales similar to those of the mammalian brain would have to be built.
In this model, inherently probabilistic component states are integrated to populate a Hamiltonian operator, which is then differentiated with respect to all perturbations to the system. The redistribution of energy that results from this computational process assigns eigenvalues for the position and momentum of each ion in the system at a single point in time. This process defines the voltage state of each neuron, as quantum-level events affect outcomes at the level of the computational unit. This study demonstrates how neurons in a cortical up-state retain a state of uncertainty, physically generating and compressing information to achieve non-deterministic signaling outcomes. This model is explicitly a form of ambient-temperature quantum computation. Three additional descriptions of this computational process are provided in sister reports and summarized below.
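Before turning to those descriptions, the sketch below numerically checks the Hellmann–Feynman relation dE/dλ = ⟨ψ|∂H/∂λ|ψ⟩ on a small toy Hermitian matrix (a generic stand-in, not the neural Hamiltonian itself), comparing the expectation value of the perturbation against a finite-difference derivative of the ground-state energy.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    """A generic n x n Hermitian matrix (illustrative toy system)."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

n, lam, eps = 4, 0.7, 1e-6
H0, V = random_hermitian(n), random_hermitian(n)   # H(lam) = H0 + lam * V

def ground(lam):
    """Ground-state energy and eigenvector of H(lam)."""
    evals, evecs = np.linalg.eigh(H0 + lam * V)
    return evals[0], evecs[:, 0]

E, psi = ground(lam)
hf = np.real(psi.conj() @ V @ psi)                 # <psi| dH/dlam |psi>
fd = (ground(lam + eps)[0] - ground(lam - eps)[0]) / (2 * eps)  # dE/dlam
print(hf, fd)   # the two derivatives agree to numerical precision
```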
In accordance with the laws of thermodynamics, free energy must be expended to create information, and this free energy is partially recovered upon information compression. In this model, probabilistic component pure states can be represented algebraically by a density matrix [51]. The density matrix undergoes a unitary change of basis as the system state is perturbed by its surrounding environment over some time evolution. The diagonalization of the density matrix yields a zero determinant, leading to observables on the boundary region of that high-dimensional probability distribution.
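A toy two-state illustration of this algebraic description (assumed numbers, numpy only): a pure-state density matrix has a vanishing determinant, the determinant and trace are preserved under any unitary change of basis, and diagonalization recovers the single unit eigenvalue.

```python
import numpy as np

# A pure state |psi> with on-probability 0.3, written as a density matrix.
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)
rho = np.outer(psi, psi.conj())

# Unitary change of basis U = exp(-i * H * theta) for H = sigma_x; since
# sigma_x squared is the identity, the exponential reduces to cos/sin terms.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.5
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sx
rho_t = U @ rho @ U.conj().T

print(np.linalg.det(rho_t).real)   # ~0: the determinant vanishes (rank one)
print(np.linalg.eigvalsh(rho_t))   # eigenvalues ~ [0, 1] after diagonalization
print(np.trace(rho_t).real)        # trace preserved at 1
```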
In accordance with the laws of holography, these probabilistic component pure states can also be represented geometrically as complex-valued waves or wavefunctions [59]. These complex-valued probability amplitudes constructively and destructively interfere on the sensitive charge-detecting polymer surface of the neural membrane. As a result of this physical interference between probability amplitudes, the wavefunction collapses and eigenvalues are actualized on the boundary region of the high-dimensional probability distribution. This process of physically encoding information on the polymer neural membrane surface generates a holographic projection of the encoded content.
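As a bare-bones illustration of the interference step (a two-amplitude toy, not a model of the membrane surface): the detection intensity of two equal complex amplitudes sweeps from fully constructive to fully destructive as their relative phase varies.

```python
import numpy as np

# Two complex probability amplitudes arriving at a detecting surface; the
# summed intensity depends on their relative phase. Purely illustrative.
a1 = np.sqrt(0.5)                             # amplitude 1, phase fixed at 0
for phi in np.linspace(0, 2 * np.pi, 9):
    a2 = np.sqrt(0.5) * np.exp(1j * phi)      # amplitude 2, relative phase phi
    intensity = np.abs(a1 + a2) ** 2          # detection probability density
    print(f"phase {phi:4.2f} rad -> intensity {intensity:.3f}")
# Intensity ranges from 2.0 (constructive) to 0.0 (destructive), while each
# amplitude alone would contribute only 0.5: the signature of interference.
```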
In accordance with the laws of thermodynamics, entropy must always increase and the total energy of the system must be conserved. Under classical assumptions, most free energy in the cerebral cortex should be dissipated to entropy, as an optimal system state is selected from a large probability distribution and all other possible system states are lost. Yet the observed energy efficiency of the brain suggests this is not the case. By modeling the system as cyclically generating and compressing quantum information, the extraordinary energy efficiency of the brain can be explained without violating the first or second law of thermodynamics [17]. This novel approach also forges a deep connection between predictive processing and the thermodynamic limits of quantum computation [26]. Achieving sparse populational coding, through a process of inherently probabilistic computation, may lead to more generally intelligent and energy-efficient neural networks.
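The thermodynamic stakes can be put in rough numbers with the Landauer bound; the spike-energy figure in the sketch below is an assumed order of magnitude, used purely for scale.

```python
import numpy as np

# Landauer bound at body temperature: the minimum free energy dissipated
# per bit of information erased (or, here, compressed away).
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # approximate brain temperature, K
E_bit = k_B * T * np.log(2)
print(E_bit)                # ~2.97e-21 J per bit

# Assumed order-of-magnitude energy cost of one cortical spike (~1e-10 J,
# from rough ATP-turnover estimates; an illustrative figure, not a result).
E_spike = 1e-10
print(E_spike / E_bit)      # ~3e10 Landauer erasures' worth of energy per spike
```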
In summary, the present model of Hamiltonian mechanics is complemented by these models of matrix mechanics, wave mechanics, and wetware-instantiated thermodynamic computation, which demonstrate the same process of information generation and compression. Indeed, cortical neurons may be better described as qubits, encoding von Neumann entropy, rather than classical bits, encoding Shannon entropy. This theoretical framework for ambient-temperature quantum computation may not only provide useful insight into the operation of biological systems, but also drive advances in engineered intelligence.

Funding

The author received support for this work from the Western Institute for Advanced Study, with generous donations from Jason Palmer, Bala Parthasarathy, and Dave Parker.

Data Availability Statement

All methods, materials, and data needed to replicate this study are included in the manuscript.

Conflicts of Interest

The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Softky, W.R.; Koch, C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. 1993, 13, 334–350.
  2. Powers, R.K.; Binder, M.D. Effective synaptic current and motoneuron firing rate modulation. J. Neurophysiol. 1995, 74, 793–801.
  3. Stern, E.A.; Kincaid, A.E.; Wilson, C.J. Spontaneous subthreshold membrane potential fluctuations and action potential variability of rat corticostriatal and striatal neurons in vivo. J. Neurophysiol. 1997, 77, 1697–1715.
  4. Dorval, A.D.; White, J.A. Channel noise is essential for perithreshold oscillations in entorhinal stellate neurons. J. Neurosci. 2005, 25, 10025–10028.
  5. Haider, B.; Duque, A.; Hasenstaub, A.R.; McCormick, D.A. Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. J. Neurosci. 2006, 26, 4535–4545.
  6. Beck, J.M.; Ma, W.J.; Kiani, R.; Hanks, T.; Churchland, A.K.; Roitman, J.; Shadlen, M.N.; Latham, P.E.; Pouget, A. Probabilistic population codes for Bayesian decision making. Neuron 2008, 60, 1142–1152.
  7. Maoz, O.; Tkačik, G.; Esteki, M.S.; Kiani, R.; Schneidman, E. Learning probabilistic neural representations with randomly connected circuits. Proc. Natl. Acad. Sci. USA 2020, 117, 25066–25073.
  8. Fayaz, S.; Fakharian, M.A.; Ghazizadeh, A. Stimulus presentation can enhance spiking irregularity across subcortical and cortical regions. PLoS Comput. Biol. 2022, 18, e1010256.
  9. Rinzel, J.; Miller, R.N. Numerical calculation of stable and unstable periodic solutions to the Hodgkin-Huxley equations. Math. Biosci. 1980, 49, 27–59.
  10. Rowat, P. Interspike interval statistics in the stochastic Hodgkin-Huxley model: Coexistence of gamma frequency bursts and highly irregular firing. Neural Comput. 2007, 19, 1215–1250.
  11. Austin, T.D. The emergence of the deterministic Hodgkin-Huxley equations as a limit from the underlying stochastic ion channel mechanism. Ann. Appl. Prob. 2008, 18, 1279–1325.
  12. Montbrió, E.; Pazó, D. Exact mean-field theory explains the dual role of electrical synapses in collective synchronization. Phys. Rev. Lett. 2020, 125, 248101.
  13. Goldman, J.S.; Kusch, L.; Aquilue, D.; Yalcinkaya, B.H.; Depannemaecker, D.; Ancourt, K.; Nghiem, T.-A.E.; Jirsa, V.; Destexhe, A. A comprehensive neural simulation of slow-wave sleep and highly responsive wakefulness dynamics. Front. Comput. Neurosci. 2021, 16, 1058957.
  14. Ostojic, S. Interspike interval distributions of spiking neurons driven by fluctuating inputs. J. Neurophysiol. 2011, 106, 361–373.
  15. Armstrong, C.M.; Hille, B. Voltage-gated ion channels and electrical excitability. Neuron 1998, 20, 371–380.
  16. Chung, S.-H.; Hoyles, M.; Allen, T.; Kuyucak, S. Study of ionic currents across a model membrane channel using Brownian dynamics. Biophys. J. 1998, 75, 793–809.
  17. Stoll, E.A. A thermodynamical model of non-deterministic computation in cortical neural networks. Phys. Biol. 2024, 21, 016003.
  18. Born, M. Statistical Interpretation of Quantum Mechanics. Science 1955, 122, 675–679.
  19. Pauli, W. Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren. Z. Phys. 1925, 31, 765–783.
  20. Caldeira, A.O.; Leggett, A.J. Path integral approach to quantum Brownian motion. Phys. A 1983, 121, 587–616.
  21. Caldeira, A.O.; Leggett, A.J. Quantum tunnelling in a dissipative system. Ann. Phys. 1983, 149, 374–456.
  22. Chang, L.-D.; Waxman, D. Quantum Fokker-Planck equation. J. Phys. C 1985, 18, 5873–5879.
  23. Khriplovich, I.B.; Lamoreaux, S.K. CP Violation Without Strangeness: Electric Dipole Moments of Particles, Atoms, and Molecules; Springer: Berlin, Germany, 2012.
  24. Feynman, R.P. Forces in Molecules. Phys. Rev. 1939, 56, 340.
  25. Esteve, J.G.; Falceto, F.; Garcia Canal, C. Generalization of the Hellmann-Feynman theorem. Phys. Lett. A 2010, 374, 819–822.
  26. Stoll, E.A. An energy-efficient process of non-deterministic computation drives the emergence of predictive models and exploratory behavior. Front. Cognit. 2024, 2, 1171273.
  27. Bethe, H.A. The electromagnetic shift of energy levels. Phys. Rev. 1947, 72, 339.
  28. Holstein, B.R. The van der Waals interaction. Am. J. Phys. 2001, 69, 441–449.
  29. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 1961, 5, 183–191.
  30. Bérut, A.; Arakelyan, A.; Petrosyan, A.; Ciliberto, S.; Dillenschneider, R.; Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 2012, 483, 187–189.
  31. Jun, Y.; Gavrilov, M.; Bechhoefer, J. High-precision test of Landauer’s principle in a feedback trap. Phys. Rev. Lett. 2014, 113, 190601.
  32. Yan, L.L.; Xiong, T.P.; Rehan, K.; Zhou, F.; Liang, D.F.; Chen, L.; Zhang, J.Q.; Yang, W.L.; Ma, Z.H.; Feng, M. Single-atom demonstration of the quantum Landauer principle. Phys. Rev. Lett. 2018, 120, 210601.
  33. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Springer: Berlin, Germany, 1932.
  34. Callen, H.B.; Welton, T.A. Irreversibility and generalized noise. Phys. Rev. 1951, 83, 34.
  35. Tegmark, M. Why the brain is probably not a quantum computer. Inf. Sci. 2000, 128, 155–179.
  36. Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 1966, 29, 255–284.
  37. Martyushev, L.M.; Nazarova, A.S.; Seleznev, V.D. On the problem of minimum entropy production in the nonequilibrium stationary state. J. Phys. A 2007, 40, 371–380.
  38. Taube, J.S. Interspike interval analyses reveal irregular firing patterns at short, but not long, intervals in rat head direction cells. J. Neurophysiol. 2010, 104, 1635–1648.
  39. Naito, T.; Sadakane, O.; Okamoto, M.; Sato, H. Orientation tuning of surround suppression in lateral geniculate nucleus and primary visual cortex of cat. Neuroscience 2007, 149, 962–975.
  40. Averbeck, B.B.; Latham, P.E.; Pouget, A. Neural correlations, population coding and computation. Nat. Rev. Neurosci. 2006, 7, 358–366.
  41. Shadlen, M.N.; Newsome, W.T. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J. Neurosci. 1998, 18, 3870–3896.
  42. Lee, D.; Port, N.L.; Kruse, W.; Georgopoulos, A.P. Variability and correlated noise in the discharge of neurons in motor and parietal areas of the primate cortex. J. Neurosci. 1998, 18, 1161–1170.
  43. Cui, H.; Andersen, R.A. Different representations of potential and selected motor plans by distinct parietal areas. J. Neurosci. 2011, 31, 18130–18136.
  44. Jackson, A.; Mavoori, J.; Fetz, E.E. Correlations between the same motor cortex cells and arm muscles during a trained task, free behavior, and natural sleep in the macaque monkey. J. Neurophysiol. 2007, 97, 360–374.
  45. Russo, A.A.; Bittner, S.R.; Perkins, S.M.; Seely, J.S.; London, B.M.; Lara, A.H.; Miri, A.; Marshall, N.J.; Kohn, A.; Jessell, T.M.; et al. Motor cortex embeds muscle-like commands in an untangled population response. Neuron 2018, 97, 953–966.
  46. Hubel, D.H.; Wiesel, T.N. Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 1959, 148, 574–591.
  47. Mendonça, P.R.; Vargas-Caballero, M.; Erdélyi, F.; Szabó, G.; Paulsen, O.; Robinson, H.P. Stochastic and deterministic dynamics of intrinsically irregular firing in cortical inhibitory interneurons. eLife 2016, 5, e16475.
  48. Wimmer, K.; Compte, A.; Roxin, A.; Peixoto, D.; Renart, A.; de la Rocha, J. Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT. Nat. Commun. 2015, 6, 6177.
  49. Melloni, L.; Schwiedrzik, C.M.; Müller, N.; Rodriguez, E.; Singer, W. Expectations change the signatures and timing of electrophysiological correlates of perceptual awareness. J. Neurosci. 2011, 31, 1386–1396.
  50. Huang, S.; Hong, S.; De Schutter, E. Non-linear leak currents affect mammalian neuron physiology. Front. Cell. Neurosci. 2015, 9, 432.
  51. Stoll, E.A. Random electrical noise drives non-deterministic computation in cortical neural networks. bioRxiv 2022.
  52. Isojima, Y.; Isoshima, T.; Nagai, K.; Kikuchi, K.; Nakagawa, H. Ultraweak biochemiluminescence detected from rat hippocampal slices. Neuroreport 1995, 6, 658–660.
  53. Kobayashi, M.; Takeda, M.; Sato, T.; Yamazaki, Y.; Kaneko, K.; Ito, K.; Kato, H.; Inaba, H. In vivo imaging of spontaneous ultraweak photon emission from a rat’s brain correlated with cerebral energy metabolism. Neurosci. Res. 1999, 34, 103–113.
  54. Kataoka, Y.; Cui, Y.; Yamagata, A.; Niigaki, M.; Hirohata, T.; Oishi, N.; Watanabe, Y. Activity-dependent neural tissue oxidation emits intrinsic ultraweak photons. Biochem. Biophys. Res. Commun. 2001, 285, 1007–1011.
  55. Tang, R.; Dai, J. Spatiotemporal imaging of glutamate-induced biophotonic activities and transmission in neural circuits. PLoS ONE 2014, 9, e85643.
  56. Amaroli, A.; Marcoli, M.; Venturini, A.; Passalacqua, M.; Agnati, L.F.; Signore, A.; Raffetto, M.; Maura, G.; Benedicenti, S.; Cervetto, C. Near-infrared laser photons induce glutamate release from cerebrocortical nerve terminals. J. Biophotonics 2018, 11, e201800102.
  57. Naeser, M.A.; Ho, M.D.; Martin, P.I.; Hamblin, M.R.; Koo, B.B. Increased functional connectivity within intrinsic neural networks in chronic stroke following treatment with red/near-infrared transcranial photobiomodulation. Photobiomodul. Photomed. Laser Surg. 2020, 38, 115–131.
  58. Tan, X.; Rajguru, S.; Young, H.; Xia, N.; Stock, S.R.; Xiao, X.; Richter, C.P. Radiant energy required for infrared neural stimulation. Sci. Rep. 2015, 5, 13273.
  59. Stoll, E.A. Modeling electron interference at the neuronal membrane yields a holographic projection of representative information content. bioRxiv 2022.
  60. Levy, W.B.; Calvert, V.G. Communication consumes 35 times more energy than computation in human cortex, but both costs are needed to predict synapse number. Proc. Natl. Acad. Sci. USA 2021, 118, e2008173118.
  61. Buzsáki, G.; Draguhn, A. Neuronal oscillations in cortical networks. Science 2004, 304, 1926–1929.
  62. Engel, A.K.; Singer, W. Temporal binding and the neural correlates of sensory awareness. Trends Cogn. Sci. 2001, 5, 16–25.
  63. Stacey, W.C.; Krieger, A.; Litt, B. Network recruitment to coherent oscillations in a hippocampal computer model. J. Neurophysiol. 2011, 105, 1464–1481.
  64. Whittington, M.A.; Cunningham, M.O.; LeBeau, F.E.N.; Racca, C.; Traub, R.D. Multiple origins of the cortical gamma rhythm. Dev. Neurobiol. 2010, 71, 92–106.
  65. Timofeev, I.; Bazhenov, M.; Seigneur, J.; Sejnowski, T. Neuronal synchronization and thalamocortical rhythms in sleep, wake, and epilepsy. In Jasper’s Basic Mechanisms of the Epilepsies, 4th ed.; Oxford University Press: Oxford, UK, 2012.
  66. Gansel, K.S. Neural synchrony in cortical networks: Mechanisms and implications for neural information processing and coding. Front. Integr. Neurosci. 2022, 16, 900715.
  67. Astumian, R.D.; Moss, F. The constructive role of noise in fluctuation driven transport and stochastic resonance. Chaos 1998, 8, 533–538.
  68. Lucarini, V. Stochastic resonance in non-equilibrium systems. Phys. Rev. E 2019, 100, 062124.
  69. Lindner, B.; Schimansky-Geier, L. Analytical approach to the stochastic FitzHugh-Nagumo system and coherence resonance. Phys. Rev. E 1999, 60, 7270–7276.
  70. Lindner, B.; Garcia-Ojalvo, J.; Neiman, A.; Schimansky-Geier, L. Effects of noise in excitable systems. Phys. Rep. 2004, 392, 321–424.
  71. Valenti, D.; Augello, G.; Spagnolo, B. Dynamics of a FitzHugh-Nagumo system subjected to autocorrelated noise. Eur. Phys. J. B 2008, 65, 443–451.
  72. Surazhevsky, I.A.; Demin, V.A.; Ilyasov, A.I.; Emelyanov, A.V.; Nikiruy, K.E.; Rylkov, V.V.; Shchanikov, S.A.; Bordanov, I.A.; Gerasimova, S.A.; Guseinov, D.V.; et al. Noise-assisted persistence and recovery of memory state in a memristive spiking neuromorphic network. Chaos Solitons Fractals 2021, 146, 110890.
  73. Geisler, C.; Brunel, N.; Wang, X.J. Contributions of intrinsic membrane dynamics to fast network oscillations with irregular neuronal discharges. J. Neurophysiol. 2005, 94, 4344–4361.
  74. Brunel, N. Dynamics of sparsely-connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 2000, 8, 183–208.
  75. Csibra, G.; Davis, G.; Spratling, M.W.; Johnson, M.H. Gamma oscillations and object processing in the infant brain. Science 2000, 290, 1582–1585.
  76. Herrmann, C.S.; Knight, R.T. Mechanisms of human attention: Event-related potentials and oscillations. Neurosci. Biobehav. Rev. 2001, 25, 465–476.
  77. Hameroff, S.R.; Penrose, R. Consciousness in the universe: A review of the ‘OrchOR’ theory. Phys. Life Rev. 2014, 11, 39–78.
  78. Khalid, M.; Wu, J.; Ali, T.M.; Ameen, T.; Altaher, A.S.; Moustafa, A.A.; Zhu, Q.; Xiong, R. Cortico-hippocampal computational modeling using quantum-inspired neural networks. Front. Comput. Neurosci. 2020, 14, 80.
  79. Wang, Z.; Xu, M.; Zhang, Y. Quantum pulse coupled neural network. Neural Netw. 2022, 152, 105–117.
  80. Jeswal, S.K.; Chakraverty, S. Recent developments and applications in quantum neural network: A review. Arch. Comput. Methods Eng. 2019, 26, 793–807.
  81. Adair, R.K. Noise and stochastic resonance in voltage-gated ion channels. Proc. Natl. Acad. Sci. USA 2003, 100, 12099–12104.
Table 1. A comparison of explanatory power: the energy efficiency of the neural network.

Classical models
Proposed mechanism: The energy efficiency of the system is the result of optimal synaptic weighting, optimal ion channel distribution, and other molecular mechanisms.
Predicted observation: Net production of physical entropy in the human brain is compatible with classical assumptions, with ATP turnover producing some amount of entropy.
Evidence for/against: The computational cost for each spike is an astounding 0.1 W; in the context of known caloric intake, this energy requirement is “off by a factor of 10^8.” [60]

Present model
Proposed mechanism: The energy efficiency of the system is the result of information compression, with neural outcomes prompted by the extraction of correlations, consistencies, or ‘predictive value’.
Predicted observation: Net production of physical entropy in the human brain is far too low to retain the assumptions of a classical system, with the amount of work done per calorie showing near-perfect use.
Evidence for/against: “The energy efficiency of the human brain is consistent with this model of non-deterministic computation.” [17] “This computational process maximizes free energy availability.” [26]
Table 2. A comparison of explanatory power: the synchronous firing of sparse neuronal ensembles.

Classical models
Proposed mechanism: The observed synchronous firing at a range of nested frequencies is the result of information encoding, with neural signaling outcomes prompted by a common stimulus.
Predicted observation: A combination of gap junctions, chemical synapses, ephaptic coupling, changes in ion concentration, and optimization of neural connectivity over time prompts synchronous firing.
Evidence for/against: These events are not readily simulated: “It is difficult, however, to identify the exact contribution of each mechanism to a specific type of oscillation.” [65] The problem is “non-trivial.” [66]

Present model
Proposed mechanism: The observed synchronous firing at a range of nested frequencies is the result of information compression, with neural signaling outcomes prompted by the extraction of correlations.
Predicted observation: An infrared photon pulse drops the membrane potential of some neurons, while other neurons in cortical up-state fire, resulting in synchronous firing but not ictal activity across the network.
Evidence for/against: The predictions of this model should be tested: specifically, spontaneous infrared photon release is expected to be temporally correlated with neural oscillations, but not with ictal activity.