Article

Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder

Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto, Italy
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 630; https://doi.org/10.3390/e21070630
Submission received: 20 May 2019 / Revised: 17 June 2019 / Accepted: 23 June 2019 / Published: 26 June 2019
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Abstract
In this paper, we study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derive semi-analytical expressions of the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, which we calculate in terms of the permanent of special matrices using extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we focus our study on networks made of several populations, which we term “statistically homogeneous” to indicate that the probability distribution of their connections depends only on the pre- and post-synaptic population indexes, and not on the individual synaptic pair indexes. In this specific case, we calculate the permanent analytically, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derive asymptotic expressions of the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code.

1. Introduction

Biological networks are typically endowed with synaptic connections that have highly heterogeneous strength [1]. The magnitude of the synaptic strength is determined by several variable factors. In the case of chemical synapses, these factors include the size of the available vesicle pool, the release probability of neurotransmitters into the synaptic cleft, the amount of neurotransmitter that can be absorbed in the postsynaptic neuron, and the efficiency of neurotransmitter replenishment [1,2,3,4,5]. These factors differ from synapse to synapse, causing heterogeneity in the distribution of synaptic strengths. An additional source of heterogeneity in biological networks is represented by the number of connections made by the axon terminals to the dendrites of post-synaptic neurons, which affects both the magnitude of the synaptic strengths and the local topology of neural circuits [5]. Heterogeneity of synapses is thought to be an intrinsic network property that cannot be explained by differences across individuals or experiments [4]. It has also been hypothesized that this heterogeneity has an important functional role in making circuits more robust against possible dysfunctions [6].
Synaptic heterogeneity can be included in network models by assuming that the strengths of the synapses within the network are random variables with some specific distribution, for example a Gaussian distribution. The standard deviation of the distribution can be interpreted as a measure of how heterogeneous the synapses are.
A powerful way to study the effect of heterogeneous interactions within a physical system is the theory of quenched disorder. In condensed matter physics, quenched disorder describes mathematically the highly irregular atomic bond structure of amorphous solids known as spin glasses [7,8,9,10,11,12]. Models with quenched disorder allow the computation of statistically representative, and thus physically relevant, results by averaging physical observables over the variability of the connections between the network units [12]. This is done by first generating several copies or repetitions of the network, where the strength and/or the topology of the connections are generated randomly from given probability distributions. Each copy has its own frozen set of connections, which affects the value of physical observables. The physical observables therefore present a probability distribution over the set of connections. Since measurements of macroscopic observables are dominated by their mean values [12], the final, and often difficult, step is to calculate averages over the distribution of the connections.
So far, only a few studies have investigated the dynamics of neural networks with heterogeneous, randomly distributed synaptic connections. In [13,14,15], the authors discussed the statistical and dynamical consequences of excitation-inhibition balance, clustering and spatially dependent recurrent connections in networks of integrate-and-fire neurons with random sparse connectivity. Analytical results for graded-rate neurons have been derived in [16,17,18,19,20], with special focus on the thermodynamic limit of fully connected networks. The neurons in these models are all-to-all connected with unit probability, namely the network topology is deterministic, while the strength of the synaptic connections is normally distributed. Other papers have focused on networks of binary neurons in the thermodynamic limit, and taken direct advantage of the powerful statistical approaches developed for studying spin glasses. These approaches can be adapted for the investigation of networks of binary-rate neurons in the thermodynamic limit, to study neural networks endowed with Hebbian plasticity rules for their synaptic connections, which behave as attractor neural networks storing and retrieving desired patterns of neural activity (see [21] and references therein).
Here we aim to extend the study of binary network models with quenched disorder to the case of networks of arbitrary size, paying special attention to small sizes. Studying networks of arbitrary size, including small ones, is important because sizes of brains, and of specialized neural networks within brains, change considerably across animal species, ranging from a few tens of neurons in invertebrates such as rotifers and nematodes, to billions of neurons in cetaceans and primates [22]. Network size also changes across levels of spatial organization, ranging from microscopic and mesoscopic levels of organization in cortical micro-columns and columns (including from a few tens [23] to a few tens of thousands of neurons [24,25]), to several orders of magnitude more in large macroscopic networks [26,27]. The study of how finite-size systems behave with quenched disorder is particularly challenging because each network realization may be very different from another when considering systems of small size. As a result, little is known about the effect of synaptic heterogeneities for finite-size networks.
To perform this study, we build on our recent work [28], in which we introduced optimized algorithms for investigating changes in the long-time dynamics of arbitrary-size recurrent networks, composed of binary-rate neurons that evolve in discrete time steps. These changes of dynamics are known in mathematical terms as bifurcations [29]. In particular, in [28] we studied changes in the number of stationary network states and the formation of neural oscillations, elicited by variations in the external stimuli to the network. The synaptic strengths and the topology of the network were arbitrary (possibly random and asymmetric), and we studied the bifurcation structure of single network realizations without averaging it over several copies of the model.
In the present paper, we extend the mathematical formalism and the algorithms introduced in [28], by deriving a complete semi-analytical description of the statistical properties of the long-time network states and of the corresponding bifurcations, across network realizations. For simplicity, we focus on the mathematical characterization of the stationary states of random networks. Going beyond classic spin-glass theory [7,8,9,10,11,12], we do not stop at the calculation of expected values, but derive complete probability distributions. Moreover, unlike previous work on random networks of graded neurons [16,17,18,19,20], which focused on fully connected models with normally distributed synaptic strengths in the thermodynamic limit, we investigate the more complicated case of arbitrary-size networks, with arbitrary distribution of the synaptic strengths and of the topology of the connections. Moreover, rather than considering the problem of storing/retrieving some desired sequences of neural activity patterns [21], we determine the fixed-point attractors, namely the stationary solutions of the network equations, that are generated by some given arbitrary distribution of the synaptic connections (i.e., by connections that are not necessarily designed to store and retrieve some desired patterns).

2. Materials and Methods

2.1. The Network Model

We study a recurrent neural network model composed of N binary neurons, whose state evolves in discrete time steps. The firing rate of the ith neuron at time t is represented by the binary variable $\nu_i(t) \in \{0, 1\}$, so that $\nu_i(t) = 0$ if the neuron is not firing at time t, and $\nu_i(t) = 1$ if it is firing at the maximum rate, see e.g., [30]. We also define $\boldsymbol{\nu}(t) \overset{\text{def}}{=} [\nu_0(t), \nu_1(t), \ldots, \nu_{N-1}(t)]^T \in \{0, 1\}^N$, namely the vector containing the firing rates of all N neurons at time t.
If the neurons respond synchronously to the local fields $h_i(\boldsymbol{\nu}(t))$, the firing rates at the time instant $t + 1$ are updated according to the following activity-based equation (see e.g., [28,31]):
$$\nu_i(t+1) = H\big(h_i(\boldsymbol{\nu}(t)) - \theta_i\big), \qquad h_i(\boldsymbol{\nu}(t)) \overset{\text{def}}{=} \sum_{j=0}^{N-1} J_{i,j}\, \nu_j(t) + I_i, \qquad i = 0, \ldots, N-1. \quad (1)$$
As we anticipated, N is the number of neurons in the network, which in this work is supposed to be finite. Moreover, $I_i$ is a deterministic external input (i.e., an afferent stimulus) to the ith neuron, while $H(\cdot)$ is the Heaviside step function:
$$H(h - \theta_i) = \begin{cases} 0, & \text{if } h < \theta_i \\ 1, & \text{if } h \geq \theta_i, \end{cases}$$
with deterministic firing threshold $\theta_i$.
$J_{i,j}$ is the (generally asymmetric) entry of the synaptic connectivity matrix J, and represents the strength or weight of the random and time-independent synaptic connection from the jth (presynaptic) neuron to the ith (postsynaptic) neuron. The randomness of the synaptic weights is quenched, therefore a “frozen” connectivity matrix J is randomly generated at every realization of the network, according to the following formula:
$$J_{i,j} = \begin{cases} 0, & \text{if } T_{i,j} = 0 \\ W_{i,j}, & \text{if } T_{i,j} = 1. \end{cases} \quad (2)$$
In Equation (2), $T_{i,j}$ is the $(i,j)$th entry of the topology matrix T, so that $T_{i,j} = 0$ if there is no synaptic connection from the jth neuron to the ith neuron, and $T_{i,j} = 1$ if the connection is present. The topology of the network is generally random and asymmetric, and it depends on the (arbitrary) entries $P_{i,j} \in [0, 1]$ of the connection probability matrix $\mathcal{P}$. In particular, we suppose that $T_{i,j} = 1$ with probability $P_{i,j}$, while $T_{i,j} = 0$ with probability $1 - P_{i,j}$. Moreover, in Equation (2) the terms $W_{i,j}$ are also (generally asymmetric and non-identically distributed) random variables, distributed according to marginal probability distributions $p_{W_{i,j}}$ (for simplicity, here we focus on continuous distributions, however our calculations can be extended to discrete random variables, if desired). To ensure the mathematical tractability of the model, we suppose that the terms $W_{i,j}$ are statistically independent from each other and from the variables $T_{i,j}$, and that the variables $T_{i,j}$ are independent too.
On the other hand, if the neurons respond asynchronously to the local fields, at each time instant a single, randomly drawn neuron k undergoes an update (see [28,31]):
$$\nu_i(t+1) = \begin{cases} \nu_i(t), & i \neq k \\ H\big(h_i(\boldsymbol{\nu}(t)) - \theta_i\big), & i = k, \end{cases} \quad (3)$$
where the local field $h_i(\boldsymbol{\nu}(t))$ is defined as in Equation (1) (this is known as Glauber dynamics [32] in the zero-noise limit). The results that we derive in this paper are valid for both kinds of network updates, synchronous and asynchronous, since they generate identical bifurcation diagrams of the stationary states for a given set of network parameters, as we proved in [28].
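To make the quenched-disorder construction concrete, the following minimal sketch (our own, not taken from the Supplemental scripts) draws one realization of J according to Equation (2); the Gaussian weight distribution and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_connectivity(P, weight_sampler):
    """Draw one quenched realization of J (Equation (2)):
    T[i, j] ~ Bernoulli(P[i, j]) and J[i, j] = W[i, j] wherever T[i, j] = 1."""
    T = rng.random(P.shape) < P        # random topology matrix T
    W = weight_sampler(P.shape)        # random synaptic strengths W
    return np.where(T, W, 0.0)

# Illustrative example: uniform connection probability and Gaussian weights.
P = np.full((4, 4), 0.5)
J = sample_connectivity(P, lambda shape: rng.normal(1.0, 0.3, size=shape))
```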
For simplicity, from now on we will represent the vector $\boldsymbol{\nu}(t)$ by the binary string $\nu_0(t)\,\nu_1(t) \cdots \nu_{N-1}(t)$, obtained by concatenating the firing rates at time t. For example, in a network composed of $N = 6$ neurons, the vector $\boldsymbol{\nu}(t) = [1, 1, 0, 0, 1, 0]^T$ will be represented by the string 110010 (in this notation, no multiplication is intended between the bits).
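As an illustration of the update rules, here is a minimal sketch of one synchronous step of Equation (1) and of the fixed-point (stationarity) test; the function names are our own.

```python
import numpy as np

def synchronous_update(nu, J, I, theta):
    """One synchronous step of Equation (1): nu(t+1) = H(J @ nu(t) + I - theta),
    with the convention H(0) = 1 of the Heaviside step function."""
    h = J @ nu + I                      # local fields h_i(nu(t))
    return (h >= theta).astype(int)

def is_stationary(nu, J, I, theta):
    """nu is a stationary state iff it is a fixed point of the update map."""
    return np.array_equal(synchronous_update(nu, J, I, theta), nu)

# The string notation of the text: the state 110010 of a network with N = 6.
nu = np.array([int(b) for b in "110010"])
```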

2.2. Statistical Properties of the Network Model

In this paper, we focus on the calculation of the statistical properties of the stationary solutions of Equations (1) and (3), provided the probability distribution of the entries $J_{i,j}$ of the connectivity matrix is known. Therefore we do not consider the problem of storing/retrieving some desired sequences of neural activity patterns.
Our formalism is based on a branch of statistics known as extreme value theory [33], which deals with the extreme deviations from the median of probability distributions. Formally, extreme deviations are described by the minimum and maximum of a set of random variables, which correspond, respectively, to the smallest and largest order statistics of that set [34,35,36,37].
In this section, we derive semi-analytical formulas of the probability distribution of the bifurcation points of the stationary states (Section 2.2.1), of their mean multistability diagram (Section 2.2.2), of the probability that a state is stationary for a given combination of stimuli (Section 2.2.3), and of the probability that a state is stationary regardless of the stimuli (Section 2.2.4). We implemented these formulas in the “Semi-Analytical Calculations” section of the Supplemental Python script “Statistics.py”. Note that our formulas are semi-analytical, in that they are expressed in terms of 1D definite integrals containing the arbitrary probability distributions $p_{W_{i,j}}$. In the Python script, these integrals are calculated through numerical integration schemes. However, note that generally the integrals may be calculated through analytical approximations, while for some distributions exact formulas may also exist, providing a fully analytical description of the statistical properties of the stationary states.

2.2.1. Probability Distribution of the Bifurcation Points

The multistability diagram of the network provides a complete picture of the relationship between the stationary solutions of the network and a set of network parameters. In particular, in [28] we studied how the degree of multistability (namely the number of stationary solutions) of the firing-rate states and their symmetry depend on the stimuli $I_i$, which represent our bifurcation parameters. Given a network with P distinct input currents (i.e., $I_i \in \{I_0, \ldots, I_{P-1}\}\ \forall i$), we define $\Gamma(I_\alpha)$ to be the set of neurons that share the same external current $I_\alpha$ (namely $\Gamma(I_\alpha) \overset{\text{def}}{=} \{i \in \{0,\ldots,N-1\} : I_i = I_\alpha\}$), while $\Gamma(I_\alpha, u)^{(\boldsymbol{\nu})} \overset{\text{def}}{=} \{i \in \Gamma(I_\alpha) : \nu_i = u\}$, for $u \in \{0, 1\}$. Then in [28] we proved that a given firing-rate state $\boldsymbol{\nu}$ is a stationary solution of the network Equations (1) and (3) for every combination of stimuli $\boldsymbol{I} \overset{\text{def}}{=} [I_0, \ldots, I_{P-1}] \in \mathcal{V}^{(\boldsymbol{\nu})} = \mathcal{V}_0^{(\boldsymbol{\nu})} \times \cdots \times \mathcal{V}_{P-1}^{(\boldsymbol{\nu})}$, where:
$$\mathcal{V}_\alpha^{(\boldsymbol{\nu})} \overset{\text{def}}{=} \begin{cases} \left(-\infty,\ \Xi_\alpha^{(\boldsymbol{\nu})}\right), & \text{if } \Gamma(I_\alpha, 1)^{(\boldsymbol{\nu})} = \emptyset \\ \left[\Lambda_\alpha^{(\boldsymbol{\nu})},\ +\infty\right), & \text{if } \Gamma(I_\alpha, 0)^{(\boldsymbol{\nu})} = \emptyset \\ \left[\Lambda_\alpha^{(\boldsymbol{\nu})},\ \Xi_\alpha^{(\boldsymbol{\nu})}\right), & \text{otherwise} \end{cases}$$
$$\Lambda_\alpha^{(\boldsymbol{\nu})} \overset{\text{def}}{=} \max_{i \in \Gamma(I_\alpha, 1)^{(\boldsymbol{\nu})}} \mathcal{I}_i^{(\boldsymbol{\nu})}, \qquad \Xi_\alpha^{(\boldsymbol{\nu})} \overset{\text{def}}{=} \min_{i \in \Gamma(I_\alpha, 0)^{(\boldsymbol{\nu})}} \mathcal{I}_i^{(\boldsymbol{\nu})}, \qquad \mathcal{I}_i^{(\boldsymbol{\nu})} \overset{\text{def}}{=} \theta_i - \sum_{j=0}^{N-1} J_{i,j}\, \nu_j. \quad (4)$$
By calculating the hyperrectangles $\mathcal{V}^{(\boldsymbol{\nu})}$ for every $\boldsymbol{\nu}$, we obtain a complete picture of the relationship between the stationary states and the set of stimuli. If the hyperrectangles corresponding to M different states $\boldsymbol{\nu}$ overlap, the overlapping region has multistability degree M (i.e., for combinations of stimuli lying in that region, the network has M distinct stationary states). A stationary state loses its stability at the boundaries $\Lambda_\alpha^{(\boldsymbol{\nu})}$ and $\Xi_\alpha^{(\boldsymbol{\nu})}$, turning into another stationary state or an oscillation. Therefore $\Lambda_\alpha^{(\boldsymbol{\nu})}$ and $\Xi_\alpha^{(\boldsymbol{\nu})}$ represent the bifurcation points of the stationary solution $\boldsymbol{\nu}$ of Equations (1) and (3).
In what follows, we derive the probability density functions of the bifurcation points given the firing-rate state $\boldsymbol{\nu}$ (note that, for simplicity, from now on we will omit the superscript $(\boldsymbol{\nu})$ in all the formulas). We define the permanent of an $n \times n$ matrix A as follows [38]:
$$\mathrm{per}(A) = \sum_\sigma \prod_{i=0}^{n-1} A_{i,\sigma(i)},$$
where the summation is over the $n!$ permutations $\sigma$ of the set $\{0, 1, \ldots, n-1\}$, while $\sigma(i)$ represents the ith position of this set after the reordering (e.g., for $n = 4$ and $\sigma = (1, 3, 2, 0)$, we get $\sigma(1) = 3$). Since the random variables $\mathcal{I}_i$ are conditionally independent given the firing-rate state $\boldsymbol{\nu}$ (as a consequence of the independence of the synaptic weights $J_{i,j}$), if we call $F_{\mathcal{I}_i}(x) \overset{\text{def}}{=} \int_{-\infty}^{x} p_{\mathcal{I}_i}(y)\, dy$ the cumulative distribution function of $\mathcal{I}_i$, then from Equation (4) we obtain that $\Lambda_\alpha$ and $\Xi_\alpha$ given $\boldsymbol{\nu}$ are distributed according to the following order statistics [34,35,36,37]:
$$p_{\Lambda_\alpha}(x) = \frac{1}{(\gamma_{\alpha,1} - 1)!}\, \mathrm{per}\!\left[\, \boldsymbol{p}_{\alpha,1}(x),\ \boldsymbol{F}_{\alpha,1}^{\,\gamma_{\alpha,1}-1}(x) \,\right]$$
$$p_{\Xi_\alpha}(x) = \frac{1}{(\gamma_{\alpha,0} - 1)!}\, \mathrm{per}\!\left[\, \boldsymbol{p}_{\alpha,0}(x),\ \mathbb{I}_{\gamma_{\alpha,0},\,\gamma_{\alpha,0}-1} - \boldsymbol{F}_{\alpha,0}^{\,\gamma_{\alpha,0}-1}(x) \,\right], \quad (5)$$
where $\gamma_{\alpha,u} \overset{\text{def}}{=} |\Gamma(I_\alpha, u)|$, while $\left[\boldsymbol{p}_{\alpha,1}(x),\ \boldsymbol{F}_{\alpha,1}^{\,\gamma_{\alpha,1}-1}(x)\right]$ and $\left[\boldsymbol{p}_{\alpha,0}(x),\ \mathbb{I}_{\gamma_{\alpha,0},\,\gamma_{\alpha,0}-1} - \boldsymbol{F}_{\alpha,0}^{\,\gamma_{\alpha,0}-1}(x)\right]$ are $\gamma_{\alpha,1} \times \gamma_{\alpha,1}$ and $\gamma_{\alpha,0} \times \gamma_{\alpha,0}$ matrices respectively, $\boldsymbol{p}_{\alpha,u}(x) \overset{\text{def}}{=} \left[p_{\mathcal{I}_i}(x)\right]_{i \in \Gamma(I_\alpha, u)}$ and $\boldsymbol{F}_{\alpha,u}(x) \overset{\text{def}}{=} \left[F_{\mathcal{I}_i}(x)\right]_{i \in \Gamma(I_\alpha, u)}$ are $\gamma_{\alpha,u} \times 1$ column vectors, $\boldsymbol{F}_{\alpha,u}^{\,v}(x) \overset{\text{def}}{=} \left[\boldsymbol{F}_{\alpha,u}(x), \ldots, \boldsymbol{F}_{\alpha,u}(x)\right]$ ($v$ times) is a $\gamma_{\alpha,u} \times v$ matrix, and $\mathbb{I}_{\gamma_{\alpha,0},\,\gamma_{\alpha,0}-1}$ is the $\gamma_{\alpha,0} \times (\gamma_{\alpha,0} - 1)$ all-ones matrix. Please note that in the Supplemental Python script “Statistics.py” the permanent is calculated by means of the Balasubramanian-Bax-Franklin-Glynn (BBFG) formula, which is the fastest known algorithm for the numerical calculation of the permanent of arbitrary matrices [39,40,41,42].
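For concreteness, below is a minimal sketch of a permanent computation in the BBFG family, based on Glynn's formula; it omits the Gray-code ordering of the sign vectors that production implementations use, so it is meant only to make the algorithm explicit, not to match the performance of the Supplemental script.

```python
import itertools
import numpy as np

def per_glynn(A):
    """Permanent of an n x n matrix via Glynn's formula (BBFG family):
    per(A) = 2^(1-n) * sum over sign vectors delta in {+1, -1}^n, delta_0 = +1,
    of (prod_k delta_k) * prod_j (sum_i delta_i * A[i, j])."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n - 1):
        delta = np.array((1.0,) + signs)
        total += np.prod(delta) * np.prod(delta @ A)
    return total / 2 ** (n - 1)

# Sanity check on a 2 x 2 matrix: per([[a, b], [c, d]]) = a*d + b*c.
assert np.isclose(per_glynn([[1.0, 2.0], [3.0, 4.0]]), 1 * 4 + 2 * 3)
```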
According to Equation (5), in order to complete the derivation of the probability densities $p_{\Lambda_\alpha}$ and $p_{\Xi_\alpha}$, we need to evaluate the probability density and the cumulative distribution function of $\mathcal{I}_i$. By defining $S_i \overset{\text{def}}{=} \sum_{j=0}^{N-1} J_{i,j}\, \nu_j = \sum_{j \in R} J_{i,j}$ and $R \overset{\text{def}}{=} \{j \in \{0,\ldots,N-1\} : \nu_j = 1\}$, from the definition of $\mathcal{I}_i$ in Equation (4) it follows that:
$$p_{\mathcal{I}_i}(x) = p_{S_i}(\theta_i - x). \quad (6)$$
Since the synaptic weights $J_{i,j}$ are independent by hypothesis, the probability distribution of $S_i$ can be calculated through the convolution formula:
$$p_{S_i}(\theta_i - x) = \left[\underset{j \in R}{\ast}\, p_{J_{i,j}}\right](\theta_i - x), \quad (7)$$
where $\ast$ denotes the convolution product.
According to Equation (2):
$$p_{J_{i,j}}(x) = P_{i,j}\, p_{W_{i,j}}(x) + \left(1 - P_{i,j}\right) \delta(x), \quad (8)$$
where $\delta(\cdot)$ is the Dirac delta function. Then, from Equations (7) and (8), it follows that:
$$p_{S_i}(\theta_i - x) = a_i(\theta_i - x) + b_i\, \delta(\theta_i - x)$$
$$a_i(x) = \sum_{S \in \mathcal{P}(R) \setminus \{\emptyset\}} \left[\prod_{j \in S} P_{i,j}\right] \left[\prod_{j \in R \setminus S} \left(1 - P_{i,j}\right)\right] \left[\underset{j \in S}{\ast}\, p_{W_{i,j}}\right](x), \qquad b_i = \prod_{j \in R} \left(1 - P_{i,j}\right), \quad (9)$$
where $\mathcal{P}(R)$ represents the power set of R. Finally, we obtain $p_{\mathcal{I}_i}(x)$ from Equations (6) and (9), while the corresponding cumulative distribution function is:
$$F_{\mathcal{I}_i}(x) = \int_{-\infty}^{x} a_i(\theta_i - y)\, dy + b_i\, H(x - \theta_i). \quad (10)$$
Please note that the definite integral in Equation (10) depends on the probability distribution $p_{W_{i,j}}$, which is arbitrary. For this reason, we do not provide any analytical expression of this integral, though exact formulas exist for specific distributions $p_{W_{i,j}}$, e.g., when the $W_{i,j}$ are normally distributed (in the Supplemental Python script “Statistics.py”, the distribution $p_{W_{i,j}}$ is defined by the user, and the integrals are calculated by means of numerical integration schemes).
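As an independent cross-check of Equations (9) and (10), one can also sample $\mathcal{I}_i$ directly across quenched realizations and compare the empirical distribution with the semi-analytical one; the following Monte Carlo sketch is illustrative and is not part of the Supplemental script.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_I_i(theta_i, P_row, R, weight_sampler, n_samples=100_000):
    """Samples of the variable of Equation (4), I_i = theta_i - sum_{j in R} J_{i,j},
    across quenched realizations; P_row is the ith row of the connection
    probability matrix, R the list of indexes of the active neurons (nu_j = 1)."""
    T = rng.random((n_samples, len(R))) < np.asarray(P_row)[R]
    W = weight_sampler((n_samples, len(R)))
    return theta_i - (T * W).sum(axis=1)

def empirical_cdf(samples, x):
    """Empirical counterpart of F_{I_i}(x) in Equation (10)."""
    return np.mean(samples <= x)
```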
Now we have all the ingredients for calculating the probability distributions of the bifurcation points from Equation (5). Note, however, that this formula cannot be used in its current form, because it involves the ill-defined product between the Dirac delta function (which is contained in the probability density $p_{\mathcal{I}_i}$) and a discontinuous function (i.e., $F_{\mathcal{I}_i}$, which jumps at $x = \theta_i$). To see that this product is generally ill-defined, consider for example the probability density of $\max_j X_j$, in the case when the $X_j$ are n independent and identically distributed random variables:
$$p_{\max_j X_j}(x) = \frac{1}{(n-1)!}\, \mathrm{per}\!\left[\, \boldsymbol{p}_X(x),\ \underbrace{\boldsymbol{F}_X(x), \ldots, \boldsymbol{F}_X(x)}_{n-1 \text{ times}} \,\right] = n\, p_X(x)\, F_X^{\,n-1}(x), \quad (11)$$
where $\boldsymbol{p}_X(x)$ and $\boldsymbol{F}_X(x)$ denote $n \times 1$ columns with identical entries.
If $F_X(x)$ is absolutely continuous, we can prove that $p_{\max_j X_j}(x)$, as given by Equation (11), is correctly normalized:
$$\int_{-\infty}^{+\infty} n\, p_X(x)\, F_X^{\,n-1}(x)\, dx = \int_{-\infty}^{+\infty} n\, \frac{dF_X(x)}{dx}\, F_X^{\,n-1}(x)\, dx = \int_0^1 n\, F_X^{\,n-1}\, dF_X = n \left[ \frac{1}{n} F_X^{\,n} \right]_0^1 = 1.$$
However, in the limit $p_X(x) \to \delta(x)$, the cumulative distribution function $F_X(x)$ is discontinuous at $x = 0$, and we get:
$$\int_{-\infty}^{+\infty} n\, p_X(x)\, F_X^{\,n-1}(x)\, dx \to \int_{-\infty}^{+\infty} n\, \delta(x)\, H^{\,n-1}(x)\, dx.$$
If now we attempt to apply the well-known formula:
$$\int_{-\infty}^{+\infty} \delta(x)\, f(x)\, dx = f(0),$$
we get that the integral equals $n\, H^{\,n-1}(0) = n > 1$, therefore $p_{\max_j X_j}(x)$ is not properly normalized. The same problem occurs for $\min_j X_j$.
To fix it, consider the general case when X is a mixture of continuous and discrete random variables. Its probability density can be decomposed as follows:
$$p_X(x) = p_X^{\,c}(x) + \sum_{q \in D} \left[ F_X(x_q) - \lim_{x \to x_q^-} F_X(x) \right] \delta(x - x_q). \quad (12)$$
In Equation (12), $p_X^{\,c}$ is the component of $p_X$ that describes the statistical behavior of the continuous values of X. Moreover, $\{x_q\}_{q \in D}$ represents the set of the discrete values of X, at which the cumulative distribution function $F_X$ is (possibly) discontinuous. In the specific case when $X = \Lambda_\alpha$ or $X = \Xi_\alpha$, by comparing Equation (12) with Equations (5), (6) and (9), we get:
$$p_{\Lambda_\alpha}^{\,c}(x) = \frac{1}{(\gamma_{\alpha,1} - 1)!}\, \mathrm{per}\!\left[\, \boldsymbol{a}_{\alpha,1}(\boldsymbol{\theta} - x),\ \boldsymbol{F}_{\alpha,1}^{\,\gamma_{\alpha,1}-1}(x) \,\right]$$
$$p_{\Xi_\alpha}^{\,c}(x) = \frac{1}{(\gamma_{\alpha,0} - 1)!}\, \mathrm{per}\!\left[\, \boldsymbol{a}_{\alpha,0}(\boldsymbol{\theta} - x),\ \mathbb{I}_{\gamma_{\alpha,0},\,\gamma_{\alpha,0}-1} - \boldsymbol{F}_{\alpha,0}^{\,\gamma_{\alpha,0}-1}(x) \,\right], \quad (13)$$
where $\boldsymbol{a}_{\alpha,u}(\boldsymbol{\theta} - x) \overset{\text{def}}{=} \left[a_i(\theta_i - x)\right]_{i \in \Gamma(I_\alpha, u)}$ is a $\gamma_{\alpha,u} \times 1$ column vector. Moreover, $D = \Gamma(I_\alpha, 1)$ and $D = \Gamma(I_\alpha, 0)$ for $\Lambda_\alpha$ and $\Xi_\alpha$ respectively, while $x_q = \theta_q$. According to Equation (12), we also need to evaluate the cumulative distribution functions of $\Lambda_\alpha$ and $\Xi_\alpha$. By following [36], we get:
$$F_{\Lambda_\alpha}(x) = \frac{1}{\gamma_{\alpha,1}!}\, \mathrm{per}\!\left[\, \boldsymbol{F}_{\alpha,1}^{\,\gamma_{\alpha,1}}(x) \,\right]$$
$$F_{\Xi_\alpha}(x) = \sum_{n=1}^{\gamma_{\alpha,0}} \frac{1}{n!\, (\gamma_{\alpha,0} - n)!}\, \mathrm{per}\!\left[\, \boldsymbol{F}_{\alpha,0}^{\,n}(x),\ \mathbb{I}_{\gamma_{\alpha,0},\,\gamma_{\alpha,0}-n} - \boldsymbol{F}_{\alpha,0}^{\,\gamma_{\alpha,0}-n}(x) \,\right]. \quad (14)$$
We observe that Equations (12), (13) and (14) no longer depend on the ill-defined product between the Dirac delta distribution and the Heaviside step function. These formulas will be used in the next subsections to calculate the mean multistability diagram of the network (Section 2.2.2), the probability that a firing-rate state is stationary for a given combination of stimuli (Section 2.2.3), and the probability that a state is stationary regardless of the stimuli (Section 2.2.4).

2.2.2. Mean Multistability Diagram

The mean multistability diagram is the plot of the bifurcation points $\Lambda_\alpha$ and $\Xi_\alpha$, averaged over the network realizations. The mean bifurcation points $\langle \Lambda_\alpha \rangle$ and $\langle \Xi_\alpha \rangle$ (where the brackets $\langle \cdot \rangle$ represent the statistical mean over the network realizations) correspond to the values of the stimulus $I_\alpha$ at which a given firing-rate state $\boldsymbol{\nu}$ loses its stability on average, turning into a different stationary state or an oscillatory solution. We propose two different approaches for evaluating the mean bifurcation points, which we implemented in the Supplemental Python script “Statistics.py”.
The first method is based on Equation (12), from which we obtain:
$$\langle X \rangle = \int_{-\infty}^{+\infty} x\, p_X(x)\, dx = \int_{-\infty}^{+\infty} x\, p_X^{\,c}(x)\, dx + \sum_{q \in D} x_q \left[ F_X(x_q) - \lim_{x \to x_q^-} F_X(x) \right], \quad (15)$$
for $X = \Lambda_\alpha$ and $X = \Xi_\alpha$. The cumulative distribution function $F_X$ in Equation (15) is calculated by means of Equation (14), while the function $p_X^{\,c}$ is given by Equation (13).
The second method takes advantage of the following formula:
$$\langle X^z \rangle = \int_{-\infty}^{+\infty} x^z\, dF_X(x) = z \int_0^{+\infty} x^{z-1} \left[ 1 - F_X(x) + (-1)^z F_X(-x) \right] dx,$$
where the second equality is obtained by integrating the Lebesgue-Stieltjes integral by parts. After some algebra, in the special case $z = 1$ we get:
$$\langle X \rangle = \int_{-\infty}^{+\infty} \left[ H(x) - F_X(x) \right] dx, \quad (16)$$
where $F_X$ is given again by Equation (14). By running both methods in the Supplemental Python script, the reader can easily check that Equations (15) and (16) provide identical results for $\langle X \rangle$, apart from rounding errors.
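As an illustration of the second method, the sketch below evaluates Equation (16) by simple quadrature (using SciPy) for a user-supplied CDF; the integration bounds are an assumption and must contain essentially all the probability mass.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def mean_from_cdf(F, x_min=-50.0, x_max=50.0, n_points=200_001):
    """<X> = integral of [H(x) - F_X(x)] dx (Equation (16)), computed with
    the trapezoidal rule on a uniform grid covering [x_min, x_max]."""
    x = np.linspace(x_min, x_max, n_points)
    integrand = np.heaviside(x, 1.0) - F(x)
    return trapezoid(integrand, x)

# Sanity check: a normal CDF with mean 1.5 should give <X> close to 1.5.
print(mean_from_cdf(lambda x: norm.cdf(x, loc=1.5, scale=0.7)))
```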
It is important to observe that the multistability diagram shows only those stability regions for which $\langle \Lambda_\alpha \rangle < \langle \Xi_\alpha \rangle$ for every $\alpha$, because if this condition is not satisfied, the state $\boldsymbol{\nu}$ is not stationary on average for any combination of stimuli. Moreover, beyond multistability, the diagram also provides a complete picture of spontaneous symmetry-breaking of the stationary solutions of the firing rates. Spontaneous symmetry-breaking occurs whenever neurons in homogeneous populations fire at different rates, despite the symmetry of the underlying neural equations. We define the population function $p(\cdot)$ that maps the neuron index $i \in \{0,\ldots,N-1\}$ to the index $\alpha$ of the population the neuron i belongs to, so that $p(i) = \alpha$. Then, in a single network realization, a population $\alpha$ is said to be homogeneous if the sum $S_i^{(\beta)} \overset{\text{def}}{=} \sum_{k : p(k) = \beta} J_{i,k}$, the firing threshold $\theta_i$ and the external stimulus $I_i$ do not depend on the index i, for every index i such that $p(i) = \alpha$ (see [28]). However, in the present article we study network statistics across realizations. For this reason, the homogeneity of a neural population should be defined in a statistical sense, namely by checking whether the probability distribution of $S_i^{(\beta)}$ does not depend on the index i, for every neuron i in the population $\alpha$. Whenever the neurons in a population show heterogeneous firing rates despite the homogeneity condition being satisfied, we say that the symmetry of that population is spontaneously broken. In order to check whether the probability distribution of $S_i^{(\beta)}$ is population-dependent, it is possible to calculate numerically the Kullback-Leibler divergence $D_{\mathrm{KL}}\big(S_i^{(\beta)} \,\big\|\, S_j^{(\beta)}\big)$ between all the pairs of neurons $(i, j)$ that belong to the same population $\alpha$. However, in the Supplemental script “Statistics.py”, we checked the statistical homogeneity of the neural populations in a simpler and computationally more efficient way, though our approach is less general than that based on the Kullback-Leibler divergence. Our method relies on the assumption that a small number of moments of $S_i^{(\beta)}$, $S_j^{(\beta)}$, for example just the mean and the variance:
$$\big\langle S_i^{(\beta)} \big\rangle = \sum_{k : p(k) = \beta} P_{i,k}\, \langle W_{i,k} \rangle, \qquad \mathrm{Var}\big(S_i^{(\beta)}\big) = \sum_{k : p(k) = \beta} \left[ P_{i,k}\, \big\langle W_{i,k}^2 \big\rangle - P_{i,k}^2\, \langle W_{i,k} \rangle^2 \right],$$
are sufficient for discriminating between the probability distributions of the two random variables. In other words, we assumed that if $\big\langle S_i^{(\beta)} \big\rangle \neq \big\langle S_j^{(\beta)} \big\rangle$ and/or $\mathrm{Var}\big(S_i^{(\beta)}\big) \neq \mathrm{Var}\big(S_j^{(\beta)}\big)$, then $S_i^{(\beta)}$, $S_j^{(\beta)}$ are differently distributed, and therefore the neural population $\alpha$ is statistically heterogeneous. Note, however, that the probability distribution of a scalar random variable with finite moments at all orders generally is not uniquely determined by the sequence of moments. It follows that there exist (rare) cases of differently distributed random variables that share the same sequence of moments. For this reason, the moments are not always sufficient for discriminating between two probability distributions. Note also that a sufficient condition for the sequence of moments to uniquely determine the random variable is that the moment generating function has positive radius of convergence (see Theorem (30.1) in [43]).

2.2.3. Occurrence Probability of the Stationary States for a Given Combination of Stimuli

In this subsection, we calculate the probability that a given firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli. According to Equation (4), $\boldsymbol{\nu}$ is stationary for every $\boldsymbol{I} \in \mathcal{V}$. Since the boundaries of $\mathcal{V}$ (namely the functions $\Lambda_\alpha$ and $\Xi_\alpha$) are random variables, it follows that the probability that the firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli $\hat{\boldsymbol{I}} \overset{\text{def}}{=} [\hat{I}_0, \ldots, \hat{I}_{P-1}]$, is $P\big(\hat{\boldsymbol{I}} \in \mathcal{V} = \mathcal{V}_0 \times \cdots \times \mathcal{V}_{P-1}\big)$. Since the variables $\Lambda_\alpha$, given the firing rate $\boldsymbol{\nu}$ and for $\alpha \in \{0,\ldots,P-1\}$, are conditionally independent (and the same holds for the variables $\Xi_\alpha$), it follows that $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$ can be factored into the product of the probabilities $P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big)$. In particular, whenever $\Gamma(I_\alpha, 1) = \emptyset$, from Equation (4) we see that $\boldsymbol{\nu}$ is stationary for every $I_\alpha < \Xi_\alpha$. It follows that, in this case, $P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big) = P\big(\hat{I}_\alpha < \Xi_\alpha\big) = \int_{\hat{I}_\alpha}^{+\infty} p_{\Xi_\alpha}(x)\, dx = 1 - F_{\Xi_\alpha}\big(\hat{I}_\alpha\big)$. On the other hand, whenever $\Gamma(I_\alpha, 0) = \emptyset$, the state $\boldsymbol{\nu}$ is stationary for every $I_\alpha \geq \Lambda_\alpha$, so that $P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big) = P\big(\hat{I}_\alpha \geq \Lambda_\alpha\big) = \int_{-\infty}^{\hat{I}_\alpha} p_{\Lambda_\alpha}(x)\, dx = F_{\Lambda_\alpha}\big(\hat{I}_\alpha\big)$. In all the other cases, $\boldsymbol{\nu}$ is stationary for every $\Lambda_\alpha \leq I_\alpha < \Xi_\alpha$. This condition can be decomposed as $I_\alpha \geq \Lambda_\alpha \wedge I_\alpha < \Xi_\alpha$, and since $\Gamma(I_\alpha, 0) \cap \Gamma(I_\alpha, 1) = \emptyset$, the random variables $\Lambda_\alpha$ and $\Xi_\alpha$ are conditionally independent given $\boldsymbol{\nu}$, so that $P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big) = P\big(\hat{I}_\alpha \geq \Lambda_\alpha\big)\, P\big(\hat{I}_\alpha < \Xi_\alpha\big) = F_{\Lambda_\alpha}\big(\hat{I}_\alpha\big) \left[1 - F_{\Xi_\alpha}\big(\hat{I}_\alpha\big)\right]$. Therefore, to summarize, the probability that the firing-rate state $\boldsymbol{\nu}$ is stationary, for a fixed combination of stimuli $\hat{\boldsymbol{I}}$, is:
$$P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big) = \prod_{\alpha=0}^{P-1} P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big), \qquad P\big(\hat{I}_\alpha \in \mathcal{V}_\alpha\big) = \begin{cases} 1 - F_{\Xi_\alpha}\big(\hat{I}_\alpha\big), & \text{if } \Gamma(I_\alpha, 1) = \emptyset \\ F_{\Lambda_\alpha}\big(\hat{I}_\alpha\big), & \text{if } \Gamma(I_\alpha, 0) = \emptyset \\ F_{\Lambda_\alpha}\big(\hat{I}_\alpha\big) \left[1 - F_{\Xi_\alpha}\big(\hat{I}_\alpha\big)\right], & \text{otherwise}. \end{cases} \quad (17)$$
Please note that $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$ can be equivalently interpreted as the conditional probability $P\big(\boldsymbol{\nu} \mid \boldsymbol{\nu}, \hat{\boldsymbol{I}}\big)$ (therefore generally $\sum_{\boldsymbol{\nu} \in \{0,1\}^N} P\big(\boldsymbol{\nu} \mid \boldsymbol{\nu}, \hat{\boldsymbol{I}}\big) \neq 1$, namely $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$ is not normalized over the set of the possible $2^N$ firing-rate states). In other terms, $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$ represents the probability (across the realizations of the synaptic connectivity matrix) that, given that the specific firing-rate vector $\boldsymbol{\nu}$ is the initial condition of Equations (1) or (3), the network state will remain $\boldsymbol{\nu}$ at the next time instants (i.e., $\boldsymbol{\nu}$ is stationary). $P\big(\boldsymbol{\nu} \mid \boldsymbol{\nu}, \hat{\boldsymbol{I}}\big)$ should not be confused with the probability to observe the state $\boldsymbol{\nu}$ in the long-time regime, given that the initial condition of the network is distributed randomly according to some arbitrary law, across the realizations of the matrix J. In the latter case, similarly to spin glasses [44,45], the long-time firing rates become correlated as a consequence of the dynamics described by Equations (1) or (3), therefore the calculation of the exact probability distribution of the stationary states is expected to become intractable for an arbitrary network size.
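For concreteness, a direct transcription of Equation (17) might look as follows; the representation of the CDFs as Python callables and the container names are our own assumptions.

```python
def stationarity_probability(I_hat, F_Lambda, F_Xi, gamma1, gamma0):
    """Equation (17): probability that a given state nu is stationary at the
    stimulus combination I_hat. F_Lambda[a] and F_Xi[a] are the CDFs of
    Lambda_alpha and Xi_alpha (callables), while gamma1[a] = |Gamma(I_a, 1)|
    and gamma0[a] = |Gamma(I_a, 0)| for the state nu."""
    prob = 1.0
    for a, I in enumerate(I_hat):
        if gamma1[a] == 0:              # Gamma(I_a, 1) is empty
            prob *= 1.0 - F_Xi[a](I)
        elif gamma0[a] == 0:            # Gamma(I_a, 0) is empty
            prob *= F_Lambda[a](I)
        else:
            prob *= F_Lambda[a](I) * (1.0 - F_Xi[a](I))
    return prob
```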

2.2.4. Occurrence Probability of the Stationary States Regardless of the Stimuli

In this subsection, we calculate the probability to observe a given firing-rate state $\boldsymbol{\nu}$ in the whole multistability diagram of a single network realization, namely the probability that the state $\boldsymbol{\nu}$ is stationary regardless of the specific combination of stimuli to the network. In other words, this represents the probability that $\boldsymbol{\nu}$ is stationary for at least one combination of stimuli. The firing-rate state $\boldsymbol{\nu}$ is observed in the multistability diagram only if its corresponding hyperrectangle $\mathcal{V}$ has positive hypervolume $\mathrm{vol}(\mathcal{V})$. Since $\mathcal{V} = \mathcal{V}_0 \times \cdots \times \mathcal{V}_{P-1}$, it follows that $\mathrm{vol}(\mathcal{V}) = \prod_{\alpha=0}^{P-1} \mathrm{len}(\mathcal{V}_\alpha) > 0$ only if $\mathrm{len}(\mathcal{V}_\alpha) > 0\ \forall \alpha$, where $\mathrm{len}(\mathcal{V}_\alpha)$ represents the length of the interval $\mathcal{V}_\alpha$. Please note that in Section 3 we consider an example of neural network model with $P = 2$; therefore in that case $\prod_{\alpha=0}^{P-1} \mathrm{len}(\mathcal{V}_\alpha)$ represents the area of the rectangles in the stimuli space. Nevertheless, to avoid confusion, we will continue to use the general notation $\mathrm{vol}(\mathcal{V})$.
In particular, when $\Gamma(I_\alpha, 0) = \emptyset$ (respectively $\Gamma(I_\alpha, 1) = \emptyset$), from Equation (4) we get $\mathrm{len}(\mathcal{V}_\alpha) = +\infty$ for every $\Lambda_\alpha$ (respectively $\Xi_\alpha$), or in other words $P\big(\mathrm{len}(\mathcal{V}_\alpha) > 0\big) = 1$. On the other hand, when $\Gamma(I_\alpha, 0) \neq \emptyset$ and $\Gamma(I_\alpha, 1) \neq \emptyset$, according to Equation (4) we obtain $\mathrm{len}(\mathcal{V}_\alpha) = \Xi_\alpha - \Lambda_\alpha$. Since $\Lambda_\alpha$ and $\Xi_\alpha$ are conditionally independent given the firing rate $\boldsymbol{\nu}$, we can write:
$$p_{\mathrm{len}(\mathcal{V}_\alpha)}(x) = \int_{-\infty}^{+\infty} p_{\Xi_\alpha}(z)\, p_{\Lambda_\alpha}(z - x)\, dz,$$
and therefore, by using Equation (12):
$$P\big(\mathrm{len}(\mathcal{V}_\alpha) > 0\big) = \int_0^{+\infty} p_{\mathrm{len}(\mathcal{V}_\alpha)}(x)\, dx = \int_{-\infty}^{+\infty} p_{\Xi_\alpha}(x)\, F_{\Lambda_\alpha}(x)\, dx = \int_{-\infty}^{+\infty} p_{\Xi_\alpha}^{\,c}(x)\, F_{\Lambda_\alpha}(x)\, dx + \sum_{q \in \Gamma(I_\alpha, 0)} \left[ F_{\Xi_\alpha}(\theta_q) - \lim_{x \to \theta_q^-} F_{\Xi_\alpha}(x) \right] F_{\Lambda_\alpha}(\theta_q).$$
To conclude, since the quantities $\mathrm{len}(\mathcal{V}_\alpha)$ are conditionally independent, we obtain that the probability to observe a given firing-rate state $\boldsymbol{\nu}$ in the whole multistability diagram of a single network realization is:
$$P\big(\mathrm{vol}(\mathcal{V}) > 0\big) = \prod_{\alpha=0}^{P-1} P\big(\mathrm{len}(\mathcal{V}_\alpha) > 0\big). \quad (18)$$
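A simple way to make this result concrete is a Monte Carlo estimate of $P\big(\mathrm{len}(\mathcal{V}_\alpha) > 0\big) = P(\Xi_\alpha > \Lambda_\alpha)$ from samples of the two bifurcation points; this illustrative sketch sidesteps the quadrature of the integral above.

```python
import numpy as np

def prob_positive_length(lambda_samples, xi_samples):
    """Monte Carlo estimate of P(len(V_alpha) > 0) = P(Xi_alpha > Lambda_alpha).
    The two sample arrays may be drawn independently, which is legitimate
    because Lambda_alpha and Xi_alpha are conditionally independent given nu."""
    return np.mean(np.asarray(xi_samples) > np.asarray(lambda_samples))

def prob_positive_volume(per_alpha_probs):
    """Equation (18): product over the stimulus indexes alpha."""
    return float(np.prod(per_alpha_probs))
```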

2.3. The Special Case of Multi-Population Networks Composed of Statistically Homogeneous Populations

In biological networks, heterogeneity is experimentally observed between different types of synapses (e.g., excitatory vs inhibitory ones), as well as within a given synaptic type [1]. For this reason, in this subsection we focus our attention on the study of random networks composed of P statistically homogeneous populations. As we explained in Section 2.2.2, by statistical homogeneity we mean that the synaptic weights are random and therefore heterogeneous, but the probability distribution of $S_i^{(\beta)} \overset{\text{def}}{=} \sum_{k : p(k) = \beta} J_{i,k}$, as well as the firing threshold $\theta_i$ and the external stimulus $I_i$, are population-dependent. In other words, in statistically homogeneous networks the probability distribution of the connections depends only on the pre- and post-synaptic population indexes and not on the individual synaptic pair indexes, and similarly the firing thresholds and the external stimuli depend only on the population index and not on the individual neuron index. This model has been used previously in neuroscience to study the dynamical consequences of heterogeneous synaptic connections in multi-population networks (see e.g., [16,19]). However, while previous studies focused on the thermodynamic limit of the network model, here we consider the case of arbitrary-size networks.
We call $N_\alpha$ the size of population $\alpha$, so that $\sum_{\alpha=0}^{P-1} N_\alpha = N$. Moreover, we rearrange the neurons so that the connection probabilities can be written in the following block-matrix form:
$$\mathcal{P} = \begin{bmatrix} \mathcal{P}_{0,0} & \mathcal{P}_{0,1} & \cdots & \mathcal{P}_{0,P-1} \\ \mathcal{P}_{1,0} & \mathcal{P}_{1,1} & \cdots & \mathcal{P}_{1,P-1} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{P}_{P-1,0} & \mathcal{P}_{P-1,1} & \cdots & \mathcal{P}_{P-1,P-1} \end{bmatrix}, \qquad \mathcal{P}_{\alpha,\beta} = \begin{cases} P_\alpha^{\mathrm{aut}}\, \mathrm{Id}_{N_\alpha} + P_{\alpha,\alpha} \left( \mathbb{I}_{N_\alpha} - \mathrm{Id}_{N_\alpha} \right), & \text{if } \alpha = \beta \\ P_{\alpha,\beta}\, \mathbb{I}_{N_\alpha, N_\beta}, & \text{if } \alpha \neq \beta. \end{cases} \quad (19)$$
In Equation (19), $\mathcal{P}_{\alpha,\beta}$ is a $N_\alpha \times N_\beta$ matrix, while $P_\alpha^{\mathrm{aut}}$ represents the magnitude of the diagonal entries of the matrix $\mathcal{P}_{\alpha,\alpha}$, namely the probability to observe a self-connection or autapse [46]. $P_{\alpha,\beta}$ represents the probability to observe a synaptic connection from a neuron in population $\beta$ to a (distinct) neuron in population $\alpha$. Moreover, $\mathbb{I}_{N_\alpha, N_\beta}$ is the $N_\alpha \times N_\beta$ all-ones matrix (here we use the simplified notation $\mathbb{I}_{N_\alpha} \overset{\text{def}}{=} \mathbb{I}_{N_\alpha, N_\alpha}$), while $\mathrm{Id}_{N_\alpha}$ is the $N_\alpha \times N_\alpha$ identity matrix. According to the homogeneity assumption, we also suppose that the strength of the non-zero synaptic connections from population $\beta$ to population $\alpha$ is generated from a population-dependent probability distribution:
$$p_{W_{i,j}} = \begin{cases} p_\alpha^{\mathrm{aut}}, & \text{for } i = j \\ p_{\alpha,\beta}, & \text{otherwise}, \end{cases}$$
$\forall i, j$ belonging to populations $\alpha, \beta$, respectively. For every excitatory population the support of the distribution $p_{W_{i,j}}$ must be non-negative, while for every inhibitory population it must be non-positive. We also suppose that all the neurons in population $\alpha$ have the same firing threshold $\theta_i = \vartheta_\alpha$ and share the same stimulus $I_i = I_\alpha$. For example, if each population receives a distinct external stimulus, then $\Gamma(I_\alpha) = \{n_{\alpha-1}, n_{\alpha-1} + 1, \ldots, n_\alpha - 1\}$, where $n_{\alpha-1} \overset{\text{def}}{=} \sum_{\beta=0}^{\alpha-1} N_\beta$ and $n_{-1} \overset{\text{def}}{=} 0$. However, generally, there may exist distinct populations that share the same stimulus.
Now consider the following $N \times N$ block matrix:
$$B \overset{\text{def}}{=} \begin{bmatrix} \mathcal{B}_{0,0} & \mathcal{B}_{0,1} & \cdots & \mathcal{B}_{0,Y-1} \\ \mathcal{B}_{1,0} & \mathcal{B}_{1,1} & \cdots & \mathcal{B}_{1,Y-1} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{B}_{X-1,0} & \mathcal{B}_{X-1,1} & \cdots & \mathcal{B}_{X-1,Y-1} \end{bmatrix},$$
with homogeneous $X_\lambda \times Y_\mu$ blocks $\mathcal{B}_{\lambda,\mu} = B_{\lambda,\mu}\, \mathbb{I}_{X_\lambda, Y_\mu}$ (where $\sum_{\lambda=0}^{X-1} X_\lambda = \sum_{\mu=0}^{Y-1} Y_\mu = N$, while the $B_{\lambda,\mu}$ are free parameters). We found that:
$$\mathrm{per}(B) = C \sum_{\boldsymbol{s} \in S} T(\boldsymbol{s})\, U(\boldsymbol{s})$$
$$S \overset{\text{def}}{=} \left\{ \boldsymbol{s} = \left[s_{0,0}, \ldots, s_{X-1,Y-1}\right] \in \mathbb{N}^{XY} : \sum_{\mu=0}^{Y-1} s_{\lambda,\mu} = X_\lambda\ \forall \lambda,\ \sum_{\lambda=0}^{X-1} s_{\lambda,\mu} = Y_\mu\ \forall \mu \right\}, \qquad \mathbb{N} \overset{\text{def}}{=} \{0, 1, 2, \ldots\}$$
$$C \overset{\text{def}}{=} \prod_{\lambda=0}^{X-1} X_\lambda!, \qquad T(\boldsymbol{s}) \overset{\text{def}}{=} \prod_{\mu=0}^{Y-1} \binom{Y_\mu}{s_{0,\mu}, \ldots, s_{X-1,\mu}}, \qquad U(\boldsymbol{s}) \overset{\text{def}}{=} \prod_{\lambda=0}^{X-1} \prod_{\mu=0}^{Y-1} B_{\lambda,\mu}^{\,s_{\lambda,\mu}}, \quad (20)$$
with multinomial coefficients:
$$\binom{Y_\mu}{s_{0,\mu}, \ldots, s_{X-1,\mu}} \overset{\text{def}}{=} \frac{Y_\mu!}{s_{0,\mu}!\, \cdots\, s_{X-1,\mu}!}.$$
As a consequence of the statistical homogeneity of the multi-population network considered in this subsection, the matrices in Equations (13) and (14) are composed of homogeneous block submatrices. For this reason, in the specific case of this multi-population network, the permanents in Equations (13) and (14) can be calculated by means of Equation (20). Note that, for a given $\alpha$, the parameter X represents the number of distinct populations that share the current $I_\alpha$ (for example, $X = 1$ if each population receives a distinct external stimulus), while $Y \in \{1, 2\}$ is the number of block columns (for example, $Y = 1$ when calculating $F_{\Lambda_\alpha}(x)$, see Equation (14), and $Y = 2$ when calculating $p_{\Lambda_\alpha}^{\,c}(x)$, see Equation (13)). $X_\lambda$ corresponds to the number of neurons with index $i \in \Gamma(I_\alpha, u)$ that belong to the population $\lambda$, while $Y_\mu$ represents the number of columns of the $\mu$th block column (for example, when calculating $p_{\Lambda_\alpha}^{\,c}(x)$, we set $Y_0 = 1$ and $Y_1 = \gamma_{\alpha,1} - 1$, which correspond to the number of columns of the submatrices $\boldsymbol{a}_{\alpha,1}(\boldsymbol{\theta} - x)$ and $\boldsymbol{F}_{\alpha,1}^{\,\gamma_{\alpha,1}-1}(x)$ respectively, see Equation (13)). Moreover, N corresponds to $\gamma_{\alpha,u}$, while the parameters $B_{\lambda,\mu}$ represent the entries of the matrices in Equations (13) and (14) (for example, $N = \gamma_{\alpha,1}$, $B_{\lambda,0} = a_i(\theta_i - x)$ and $B_{\lambda,1} = F_{\mathcal{I}_i}(x)$ for i in the population $\lambda$, when calculating $p_{\Lambda_\alpha}^{\,c}(x)$).
For the sake of clarity, we implemented Equation (20) in the Supplemental Python script “Permanent.py”. Since the permanents in Equations (13) and (14) can be obtained from Equation (20) for $Y = 1$ and $Y = 2$, in the script we specifically implemented these two cases. The computation of $\mathrm{per}(B)$ by means of Equation (20) generally proves much faster than the BBFG algorithm, see Section 3. However, it is important to note that while the BBFG algorithm can be applied to neural networks with any topology, Equation (20) is specific for multi-population networks composed of statistically homogeneous populations.
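The following sketch is our own direct, unoptimized transcription of Equation (20), independent of the Supplemental script “Permanent.py”: it enumerates the set S by generating the compositions of each $X_\lambda$ into Y non-negative parts and filtering on the column-sum constraints.

```python
import itertools
import math

def compositions(total, parts):
    """All ordered ways to write `total` as a sum of `parts` non-negative integers."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def per_homogeneous_blocks(B, X_sizes, Y_sizes):
    """Equation (20): permanent of a block matrix with homogeneous blocks.
    B[lam][mu] is the constant entry of block (lam, mu), X_sizes[lam] = X_lam
    (block-row heights) and Y_sizes[mu] = Y_mu (block-column widths)."""
    X, Y = len(X_sizes), len(Y_sizes)
    C = math.prod(math.factorial(x) for x in X_sizes)
    total = 0.0
    # s runs over the non-negative integer matrices with row sums X_lam
    # and column sums Y_mu (the set S of Equation (20)).
    for s in itertools.product(*(compositions(x, Y) for x in X_sizes)):
        if any(sum(s[lam][mu] for lam in range(X)) != Y_sizes[mu]
               for mu in range(Y)):
            continue
        T = math.prod(math.factorial(Y_sizes[mu])
                      // math.prod(math.factorial(s[lam][mu]) for lam in range(X))
                      for mu in range(Y))
        U = math.prod(B[lam][mu] ** s[lam][mu]
                      for lam in range(X) for mu in range(Y))
        total += T * U
    return C * total

# Sanity check: a constant n x n matrix with entries b has permanent n! * b^n.
assert per_homogeneous_blocks([[2.0]], [3], [3]) == math.factorial(3) * 2.0 ** 3
```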

2.4. Large-Network Limit

In computational neuroscience, statistically homogeneous multi-population networks represent an important class of network models, since their large-size limit is typically well-defined and serves as a basis for understanding the asymptotic behavior of neural systems [16,17,18,19,20]. In this subsection, we derive the large-size limit of the class of statistically homogeneous multi-population networks with quenched disorder that we introduced in Section 2.3. In particular, we focus on the case when each neural population receives a distinct external stimulus current, and we also suppose that the contribution of self-connections to the statistics of the firing rates is negligible in the large-network limit. The consequences of the relaxation of these two assumptions will be discussed at the end of this subsection.
The derivation of the asymptotic form of the stationary-state statistics requires the introduction of a proper normalization of the sum $S_i = \sum_{j=0}^{N-1} J_{i,j}\, \nu_j$ in Equation (1), in order to prevent the divergence of the mean and the variance of $S_i$ in the thermodynamic limit. For this purpose, we choose the mean and the variance of the random variables $W_{i,j}$ as follows:
$$m_{\alpha,\beta} \overset{\text{def}}{=} \langle W_{i,j} \rangle = \frac{\mu_{\alpha,\beta}}{N_\beta}, \qquad s_{\alpha,\beta}^2 \overset{\text{def}}{=} \mathrm{Var}(W_{i,j}) = \frac{\sigma_{\alpha,\beta}^2}{N_\beta} + \left( \frac{\mu_{\alpha,\beta}}{N_\beta} \right)^2 \left( P_{\alpha,\beta} - 1 \right), \quad (21)$$
given parameters $\mu_{\alpha,\beta}$, $\sigma_{\alpha,\beta}$, $N_\beta$ and $P_{\alpha,\beta}$ such that $\mu_{\alpha,\beta} \in \mathbb{R}$ and $\mathrm{Var}(W_{i,j}) \in \mathbb{R}_{\geq 0}$, for every $i, j$ (with $i \neq j$) in the populations $\alpha, \beta$, respectively. Equation (21) implies that:
$$\langle J_{i,j} \rangle = \langle T_{i,j} \rangle \langle W_{i,j} \rangle = \frac{P_{\alpha,\beta}}{N_\beta}\, \mu_{\alpha,\beta}, \qquad \mathrm{Var}(J_{i,j}) = \big\langle T_{i,j}^2 \big\rangle \big\langle W_{i,j}^2 \big\rangle - \langle T_{i,j} \rangle^2 \langle W_{i,j} \rangle^2 = \frac{P_{\alpha,\beta}}{N_\beta}\, \sigma_{\alpha,\beta}^2,$$
and therefore:
$$\mu_\alpha \overset{\text{def}}{=} \left\langle \sum_{j=0}^{N-1} J_{i,j}\, \nu_j \right\rangle = \sum_{\beta=0}^{P-1} \gamma_{\beta,1}\, \frac{P_{\alpha,\beta}}{N_\beta}\, \mu_{\alpha,\beta}, \qquad \sigma_\alpha^2 \overset{\text{def}}{=} \mathrm{Var}\!\left( \sum_{j=0}^{N-1} J_{i,j}\, \nu_j \right) = \sum_{\beta=0}^{P-1} \gamma_{\beta,1}\, \frac{P_{\alpha,\beta}}{N_\beta}\, \sigma_{\alpha,\beta}^2,$$
having neglected the contribution of the autapses. Therefore the mean and the variance of $S_i$ are finite for every state $\boldsymbol{\nu}$ in the thermodynamic limit, as desired. Now, consider any of the firing-rate states $\boldsymbol{\nu}$ composed of $\gamma_{\alpha,1}$ active neurons in the population $\alpha$ ($\forall \alpha \in \{0, \ldots, P-1\}$). By the central limit theorem, given any distribution (not necessarily normal) of $W_{i,j}$ that satisfies Equation (21), we get:
$$\sqrt{\gamma_{\beta,1}} \left[ \frac{1}{\gamma_{\beta,1}} \sum_{j=n_{\beta-1}}^{n_\beta - 1} J_{i,j}\, \nu_j - \frac{P_{\alpha,\beta}}{N_\beta}\, \mu_{\alpha,\beta} \right] \xrightarrow{d} \mathcal{N}\!\left( 0,\ \frac{P_{\alpha,\beta}}{N_\beta}\, \sigma_{\alpha,\beta}^2 \right) \quad \forall \beta,$$
in the limit $\gamma_{\beta,1} \to \infty$ (see Section 2.3 for the definition of the parameter $n_\beta$). In turn, this implies that:
$$\mathcal{I}_i = \theta_i - \sum_{\beta=0}^{P-1} \sum_{j=n_{\beta-1}}^{n_\beta - 1} J_{i,j}\, \nu_j \xrightarrow{d} \mathcal{N}\!\left( \vartheta_\alpha - \mu_\alpha,\ \sigma_\alpha^2 \right).$$
Since the random variables $\mathcal{I}_i$ are conditionally independent given $\boldsymbol{\nu}$ and identically distributed $\forall i \in \Gamma(I_\alpha)$ and for fixed $\alpha$, according to the Fisher-Tippett-Gnedenko theorem [47,48,49], the distribution of the variables $\Lambda_\alpha$ and $\Xi_\alpha$ converges to the Gumbel distribution in the limit $\gamma_{\alpha,u} \to \infty$. In other words, given $X \in \{\Lambda, \Xi\}$, and by defining, according to [50]:
$$z_{X_\alpha}(x) \overset{\text{def}}{=} \frac{x - a_{X_\alpha}}{b_{X_\alpha}}$$
$$a_{\Lambda_\alpha} \overset{\text{def}}{=} \vartheta_\alpha - \mu_\alpha + \sigma_\alpha\, \Phi^{-1}\!\left(1 - \frac{1}{n_{\Lambda_\alpha}}\right), \qquad a_{\Xi_\alpha} \overset{\text{def}}{=} \vartheta_\alpha - \mu_\alpha - \sigma_\alpha\, \Phi^{-1}\!\left(1 - \frac{1}{n_{\Xi_\alpha}}\right)$$
$$b_{X_\alpha} \overset{\text{def}}{=} \sigma_\alpha \left[ \Phi^{-1}\!\left(1 - \frac{1}{n_{X_\alpha}\, e}\right) - \Phi^{-1}\!\left(1 - \frac{1}{n_{X_\alpha}}\right) \right], \qquad n_{\Lambda_\alpha} \overset{\text{def}}{=} \gamma_{\alpha,1}, \quad n_{\Xi_\alpha} \overset{\text{def}}{=} \gamma_{\alpha,0}, \quad (22)$$
then in the limit $n_{X_\alpha} \to \infty$ we get:
$$p_{\Lambda_\alpha}(x) \to g\big(z_{\Lambda_\alpha}(x)\big), \qquad F_{\Lambda_\alpha}(x) \to G\big(z_{\Lambda_\alpha}(x)\big)$$
$$p_{\Xi_\alpha}(x) \to g\big(-z_{\Xi_\alpha}(x)\big), \qquad F_{\Xi_\alpha}(x) \to 1 - G\big(-z_{\Xi_\alpha}(x)\big), \quad (23)$$
where $g(\cdot)$ and $G(\cdot)$ are the Gumbel probability density and its cumulative distribution function, respectively:
$$g(x) = \frac{1}{b_{X_\alpha}}\, e^{-\left(x + e^{-x}\right)}, \qquad G(x) = e^{-e^{-x}}.$$
In Equation (22), $\Phi(\cdot)$ represents the cumulative distribution function of the standard normal probability density. $\Phi^{-1}(\cdot)$ is the probit function, which can be expressed in terms of the inverse of the error function $\mathrm{erf}(\cdot)$ as $\Phi^{-1}(x) = \sqrt{2}\, \mathrm{erf}^{-1}(2x - 1)$. By using an asymptotic expansion of $\mathrm{erf}^{-1}(\cdot)$, we get:
$$\Phi^{-1}\!\left(1 - \frac{1}{n_{X_\alpha}}\right) \approx \sqrt{\ln\!\left(\frac{n_{X_\alpha}^2}{2\pi}\right) - \ln\ln\!\left(\frac{n_{X_\alpha}^2}{2\pi}\right)}, \qquad \Phi^{-1}\!\left(1 - \frac{1}{n_{X_\alpha}\, e}\right) \approx \sqrt{2 + \ln\!\left(\frac{n_{X_\alpha}^2}{2\pi}\right) - \ln\!\left(2 + \ln\!\left(\frac{n_{X_\alpha}^2}{2\pi}\right)\right)}.$$
Moreover, it is possible to prove that:
$$\langle \Lambda_\alpha \rangle = a_{\Lambda_\alpha} + b_{\Lambda_\alpha}\, \gamma, \qquad \langle \Xi_\alpha \rangle = a_{\Xi_\alpha} - b_{\Xi_\alpha}\, \gamma, \quad (24)$$
where $\gamma$ is the Euler-Mascheroni constant. Please note that Equation (24) can be used for plotting the mean multistability diagram of the network, while Equations (17), (22) and (23) provide an analytical expression of the occurrence probability of the stationary states for a given combination of stimuli. Unfortunately, we are not aware of any exact formula of the occurrence probability of the stationary states regardless of the stimuli (see Section 2.2.4). For this reason, the latter should be calculated numerically or through analytical approximations, from Equation (18).
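To make the asymptotic formulas concrete, the sketch below evaluates the norming constants of Equation (22) and the mean bifurcation points of Equation (24) with the probit function of the Python standard library; the numerical parameter values are purely illustrative assumptions.

```python
from math import e
from statistics import NormalDist

EULER_GAMMA = 0.5772156649015329
probit = NormalDist().inv_cdf          # the function Phi^{-1}

def gumbel_norming(theta, mu, sigma, n, is_max):
    """Norming constants (a, b) of Equation (22): is_max=True for Lambda_alpha
    (n = gamma_{alpha,1}), is_max=False for Xi_alpha (n = gamma_{alpha,0})."""
    q = probit(1.0 - 1.0 / n)
    b = sigma * (probit(1.0 - 1.0 / (n * e)) - q)
    a = theta - mu + sigma * q if is_max else theta - mu - sigma * q
    return a, b

# Mean bifurcation points in the large-network limit (Equation (24)).
a_lam, b_lam = gumbel_norming(theta=1.0, mu=0.5, sigma=0.2, n=240, is_max=True)
a_xi, b_xi = gumbel_norming(theta=1.0, mu=0.5, sigma=0.2, n=80, is_max=False)
mean_Lambda = a_lam + b_lam * EULER_GAMMA
mean_Xi = a_xi - b_xi * EULER_GAMMA
```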
Now we discuss the two assumptions that we made in the derivation of our results, namely distinct external stimuli to each neural population, and a negligible contribution of the autapses to the statistics of the firing rates. The relaxation of the first assumption requires the calculation of the minimum and maximum of non-identically distributed random variables. For example, in the case when an external stimulus is shared by two distinct populations, the variable $\mathcal{I}_i$ has two distinct probability distributions, depending on the population the neuron i belongs to. However, the Fisher-Tippett-Gnedenko theorem is valid only for identically distributed variables, and a straightforward generalization of the theorem to the case of non-identically distributed variables is not available (see Section 4.2). Note, however, that this limitation applies only to the asymptotic expansion discussed in the present subsection. The exact (i.e., non-asymptotic) theory discussed in Section 2.2 is not affected by this limitation, and is also valid when a stimulus is shared by several populations.
The second assumption in our derivation was the negligible contribution of the autapses to the statistics of the firing rates in the large-network limit. This assumption can be relaxed, for example, by supposing that the autapses are not scaled, so that the random variable $S_i$ is strongly affected by the autaptic weight $J_{i,i}$ when $\nu_i = 1$. In this case, the central limit theorem no longer applies to the whole sum $S_i = \sum_{j=0}^{N-1} J_{i,j}\, \nu_j$, but only to $S_i - J_{i,i}$. In other words, in the case when the autapses are not normally distributed, the sum $S_i$ is not normally distributed either, therefore the distribution of the variables $\Lambda_\alpha$ and $\Xi_\alpha$ may not necessarily be the Gumbel law. This case can be studied analytically, if desired, but we omitted it for the sake of brevity.

2.5. Numerical Simulations

To further validate our results, in Section 3 we compare our semi-analytical formulas with numerical Monte Carlo simulations, which we implemented in the “Numerical Simulations” section of the Supplemental Python script “Statistics.py”. During these numerical simulations, we run 5000 realizations of the network described in Figure 1 and Table 1, obtaining the numerical results shown in Figure 2, Figure 3 and Figure 4. At each network realization, we generate a new (quenched) connectivity matrix J, according to Equation (8). Then, for each J, we derive the corresponding bifurcation points $\Lambda_\alpha$ and $\Xi_\alpha$ and the hyperrectangles $\mathcal{V}$, by applying the algorithm “Multistability_Diagram.py” that we introduced in [28].
The cumulative distribution function of the bifurcation points is then computed by means of a cumulative sum of the probability histograms of $\Lambda_\alpha$ and $\Xi_\alpha$. This provides a numerical approximation of the functions $F_{\Lambda_\alpha}(x)$ and $F_{\Xi_\alpha}(x)$, which we derived semi-analytically in Equation (14).
The mean multistability diagram is calculated by averaging the bifurcation points $\Lambda_\alpha$ and $\Xi_\alpha$ over the network realizations. This provides a numerical approximation of the quantities $\langle \Lambda_\alpha \rangle$ and $\langle \Xi_\alpha \rangle$, which we derived semi-analytically in Equations (15) and (16).
The probability that a given firing-rate state $\boldsymbol{\nu}$ is stationary for a fixed combination of stimuli $\hat{\boldsymbol{I}}$ is calculated by counting, during the Monte Carlo simulations, the number of times $\hat{\boldsymbol{I}} \in \mathcal{V}$. By dividing this number by the total number of realizations, we obtain a numerical estimation of the probability $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$ (see Equation (17) for its semi-analytical expression). This calculation is then repeated for each of the $2^N$ firing-rate states $\boldsymbol{\nu}$. Alternatively, this probability can be calculated by counting the relative number of times $\boldsymbol{\nu}(t_0 + 1) = \boldsymbol{\nu}(t_0)$ (stationarity condition), where the firing-rate state $\boldsymbol{\nu}(t_0)$ is each of the $2^N$ initial conditions of the network model, while the state $\boldsymbol{\nu}(t_0 + 1)$ is calculated iteratively from it by means of Equation (1) or Equation (3). We implemented both methods in the Python script “Statistics.py”, and the reader can easily check that they provide identical numerical estimations of $P\big(\hat{\boldsymbol{I}} \in \mathcal{V}\big)$.
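A minimal sketch of the second (stationarity-condition) estimator is reported below; the names and the weight sampler are our own placeholders for the user-defined quantities of the script.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def mc_stationarity_probability(nu, I_vec, theta, P, weight_sampler, n_reps=5000):
    """Monte Carlo estimate of P(I_hat in V): the fraction of quenched
    realizations of J for which nu(t0 + 1) = nu(t0) under Equation (1).
    I_vec is the per-neuron stimulus vector induced by I_hat."""
    hits = 0
    for _ in range(n_reps):
        T = rng.random(P.shape) < P
        J = np.where(T, weight_sampler(P.shape), 0.0)    # Equation (2)
        h = J @ nu + I_vec                               # local fields
        if np.array_equal((h >= theta).astype(int), nu):
            hits += 1
    return hits / n_reps
```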
The probability that the state $\boldsymbol{\nu}$ is stationary, regardless of the specific combination of stimuli to the network, is derived numerically by counting the relative number of times $\mathrm{vol}(\mathcal{V}) = \prod_{\alpha=0}^{P-1} \mathrm{len}(\mathcal{V}_\alpha) > 0$, for each of the $2^N$ firing-rate states $\boldsymbol{\nu}$. This provides a numerical estimation of the probability $P\big(\mathrm{vol}(\mathcal{V}) > 0\big)$, which we derived semi-analytically in Equation (18).
Then, in Figure 5 we show a speed test of Equation (20), for the calculation of the permanent of a block matrix with homogeneous blocks described in Table 2. The computational time required by Equation (20) is compared with the time required by the Balasubramanian-Bax-Franklin-Glynn algorithm [39,40,41,42].
To conclude, in Section 3 we run 100,000 realizations of the two-population statistically homogeneous network described in Table 3, obtaining the numerical results reported in Figure 6. The probability distribution of the bifurcation points in the large-size limit is calculated numerically through a kernel density estimation. The density estimator is applied to the samples of the random variables $\Lambda_\alpha$ and $\Xi_\alpha$, which are generated during the Monte Carlo simulations according to Equation (4), given the firing-rate state $\boldsymbol{\nu}$.

3. Results

In this section, we report the comparison between, on the one hand, the semi-analytical formulas of the mean multistability diagram, of the occurrence probability of the stationary firing-rate states, and of the probability distribution of the bifurcation points in both the small and large-size limits (see Section 2.2, Section 2.3 and Section 2.4), and, on the other hand, the corresponding numerical counterparts (Section 2.5).
For illustrative purposes, we consider the case $P = 2$, so that the multistability diagram can be visualized on a plane. In particular, we suppose that the network is composed of excitatory (E) and inhibitory (I) neurons. For this reason, it is convenient to change the notation slightly, and to consider $\alpha \in \{E, I\}$ rather than $\alpha \in \{0, 1\}$ (so that the multistability diagram will be plotted on the $I_E$–$I_I$ plane). Since the total number of firing-rate states of the network increases as $2^N$ with the network size, in this section we apply the Python script “Statistics.py” to a small-sized network ($N = 4$), in order to ensure the clarity of the figures. It is important to note that, in the derivation of the results in Section 2.2 and Section 2.3, we did not resort to any mathematical approximation, and that our semi-analytical formulas are exact for every size N. For this reason, if desired, our script can be applied to networks with size $N > 4$, depending on the computational power available.
In the network that we test, we suppose that the neurons with indexes $i \in \{0, 1\}$ are excitatory and receive an external stimulus $I_E$, while the neurons with indexes $i \in \{2, 3\}$ are inhibitory and receive a stimulus $I_I$. Moreover, we suppose that the independent random variables $W_{i,j}$ are distributed according to the following Wigner semicircle distribution:
$$p_{W_{i,j}}(x) = \begin{cases} \dfrac{2}{\pi R_{i,j}^2} \sqrt{R_{i,j}^2 - \left(x - C_{i,j}\right)^2}, & \text{if } \left|x - C_{i,j}\right| \leq R_{i,j} \\ 0, & \text{otherwise}, \end{cases}$$
centered at $x = C_{i,j}$ and with radius $R_{i,j}$. In Table 1 we report the values of the parameters $\mathcal{P}$, $\boldsymbol{\theta} = [\theta_0, \theta_1, \theta_2, \theta_3]^T$, C and R that we chose for this network.
In panel A of Figure 1 we show the graph of the connection probability matrix $\mathcal{P}$, while in panel B we plot some examples of the Wigner probability distributions $p_{W_{i,j}}$.
Please note that for our choice of the parameters, the support of the Wigner distribution, namely the range $\left[C_{i,j} - R_{i,j},\ C_{i,j} + R_{i,j}\right]$, is a subset of $\mathbb{R}_{\geq 0}$ (respectively $\mathbb{R}_{\leq 0}$) for excitatory (respectively inhibitory) neurons. It follows that the connectivity matrix J of the model satisfies Dale’s principle [51], as required for biologically realistic networks (note, however, that our algorithm can also be applied to networks that do not satisfy the principle, if desired).
In Figure 2 we plot the cumulative distribution functions of the bifurcation points $\Lambda_E$ and $\Lambda_I$ of, e.g., the firing-rate state 1110.
The figure shows a very good agreement between the semi-analytical functions (red curves), calculated through Equation (14), and the numerical functions (blue dots), computed over 5000 network realizations as described in Section 2.5. Please note that, generally, the cumulative distribution functions are not continuous (see also Section 2.2.1), and that jump discontinuities may occur at the firing thresholds ($\theta = 1$, in this example).
In Figure 3 we plot the comparison between the mean multistability diagram of the network, evaluated numerically through 5, 50 and 5000 Monte-Carlo repetitions (panels A–C), and the same diagram, evaluated semi-analytically through Equation (15) or Equation (16) (panel D).
The figure shows that, by increasing the number of network realizations, the numerical multistability diagram converges to the semi-analytical one (compare panels C and D), apart from small numerical errors, which depend on the integration step in the semi-analytical formulas, and on the finite number of repetitions in the Monte-Carlo simulations. The diagrams show a complex pattern of multistability areas in the $I_E$–$I_I$ plane, characterized by multistability degrees $M = 1$ (monostability), 2 (bistability), and 3 (tristability). A similar result was already observed in small binary networks with deterministic synaptic weights, see [28,52]. Moreover, note that our algorithm detected the presence of white areas, characterized by multistability degree $M = 0$. In these areas, we do not observe the formation of stationary firing-rate states, so that the only possible long-time dynamics for those combinations of stimuli is represented by neural oscillations. However, generally, oscillations in the firing-rate states may also co-occur with stationary states in areas of the $I_E$–$I_I$ plane where $M > 0$. The reader is referred to Section 4.3 for a discussion about the possibility to extend our work to the study of neural oscillations.
In Figure 4 we plot the comparison between the semi-analytical and numerical occurrence probability of the stationary firing-rate states (red and blue bars, respectively).
In panels A and B we plot, respectively, the probability that the state $\boldsymbol{\nu}$ is stationary for a fixed combination of stimuli ($\hat{I}_E = 0$ and $\hat{I}_I = 4$, red bars calculated through Equation (17)), and the probability that $\boldsymbol{\nu}$ is stationary regardless of the stimuli (red bars calculated through Equation (18)). The figure shows again a very good agreement between semi-analytical and numerical results.
In particular, panel A shows that, for the network parameters that we chose (see Table 1), the states 0000, 0100, 1000, 1010, 1011 and 1100 are never stationary for $\hat{I}_E = 0$ and $\hat{I}_I = 4$. In other words, in every network realization the rectangles $\mathcal{V}$ corresponding to these states never contain the point of coordinates $\hat{\boldsymbol{I}} = \left[\hat{I}_E, \hat{I}_I\right]$. However, panel B shows that, at least for other combinations of the stimuli, these firing-rate states can also be stationary.
Moreover, panel B shows that the firing-rate states 0000, 0011, 1100 and 1111 have unit probability to be observed in the whole multistability diagram of a single network realization, namely $P\big(\mathrm{vol}(\mathcal{V}) > 0\big) = 1$ for these states. This is a consequence of the fact that, for these states, $\Gamma(I_\alpha, u) = \emptyset$ for both $\alpha = E$ and $\alpha = I$ and for some u (for example, $\Gamma(I_E, 1) = \Gamma(I_I, 0) = \emptyset$ for the state 0011). For this reason, we get $\mathrm{len}(\mathcal{V}_E) = \mathrm{len}(\mathcal{V}_I) = +\infty$, namely $P\big(\mathrm{len}(\mathcal{V}_E) > 0\big) = P\big(\mathrm{len}(\mathcal{V}_I) > 0\big) = 1$, so that $P\big(\mathrm{vol}(\mathcal{V}) > 0\big) = 1$ (see Section 2.2.4). On the other hand, for all the other firing-rate states, we obtain $\Gamma(I_\alpha, u) = \emptyset$ only for one value of the index $\alpha$ (for example, $\Gamma(I_E, 1) = \emptyset$, $\Gamma(I_E, 0) = \{0, 1\}$, $\Gamma(I_I, 1) = \{3\}$ and $\Gamma(I_I, 0) = \{2\}$ for the state 0001), or for none of them (for example, $\Gamma(I_E, 0) = \{0\}$, $\Gamma(I_E, 1) = \{1\}$, $\Gamma(I_I, 0) = \{2\}$ and $\Gamma(I_I, 1) = \{3\}$ for the state 0101). Therefore, for these states, typically $P\big(\mathrm{vol}(\mathcal{V}) > 0\big) < 1$.
In Figure 5 we show the speed gain of Equation (20) over the BBFG algorithm, achieved during the calculation of the permanent of homogeneous block matrices that we implemented in the Python script “Permanent.py”.
The matrices were generated randomly, according to the parameters in Table 2.
In particular, panel A shows that the mean computational time, which we called $T_{\mathrm{BBFG}}$, required for calculating the permanent by means of the BBFG algorithm over several realizations of the matrix, increases exponentially with the matrix size N. On the contrary, the mean time required by Equation (20), which we called $T_{\mathrm{HBM}}$, increases very slowly with N, resulting in a progressive and considerable improvement of performance over the BBFG algorithm (mean speed gain $T_{\mathrm{BBFG}} / T_{\mathrm{HBM}} \gg 1$).
Panel B of Figure 5 shows the limitations of Equation (20). While $T_{\mathrm{BBFG}}$ does not depend on the parameter X (namely the number of neural populations that share the same external stimulus), $T_{\mathrm{HBM}}$ strongly increases with X, resulting in a progressive loss of performance of Equation (20) over the BBFG algorithm. This is a consequence of the increasing number of multinomial coefficients that, according to Equation (20), must be calculated in order to evaluate the matrix permanent when X is incremented. In more detail, the total number of multinomial coefficients is:
$$\prod_{\mu=0}^{Y-1} \binom{Y_\mu + X - 1}{X - 1} \approx \prod_{\mu=0}^{Y-1} \frac{X^{Y_\mu}}{Y_\mu!} \left[ 1 + \frac{Y_\mu \left(Y_\mu - 1\right)}{2X} + O\!\left(X^{-2}\right) \right], \qquad \sum_{\mu=0}^{Y-1} Y_\mu = N,$$
where the asymptotic expansion holds in the limit $X \to \infty$. Our analysis shows that generally, in the study of statistically homogeneous multi-population networks, Equation (20) should be preferred to the BBFG algorithm when $X \ll N$, namely when each stimulus is shared by a relatively small number of populations.
In Figure 6 we show examples of the probability distributions of the bifurcation points in a large random network composed of two statistically homogeneous populations (one excitatory and one inhibitory).
The network parameters that we used are reported in Table 3.
In particular, we suppose that the weights are distributed according to the following Laplace probability density:
$$p_{W_{i,j}}(x) = \frac{1}{\sqrt{2}\, s_{\alpha,\beta}}\, e^{-\frac{\sqrt{2}}{s_{\alpha,\beta}}\left|x - m_{\alpha,\beta}\right|}, \tag{26}$$
for $i,j$ belonging to populations $\alpha,\beta$, respectively ($m_{\alpha,\beta}$ and $s_{\alpha,\beta}$ are defined as in Equation (21)). Figure 6 shows a good agreement between the analytical formula of the Gumbel probability density function and numerical simulations, although slight differences between the analytical and numerical densities can be observed, as a consequence of the finite size of the network. These differences disappear in the limit $N \to \infty$. In particular, Figure 6 shows that the firing-rate states with 240 active neurons in the excitatory population and 80 active neurons in the inhibitory one are very unlikely to be observed in the whole multistability diagram of the network. For these states, and for our choice of the network parameters (see Table 3), the probability to get $\Xi_E > \Lambda_E$ and $\Xi_I > \Lambda_I$ is very small, as a consequence of the large distance between the peaks of the distributions of $\Lambda_\alpha$ and $\Xi_\alpha$. More generally, for a network composed of an arbitrary number $P$ of statistically homogeneous populations, we found that the stationary states most likely to occur in the large-size limit are those characterized by homogeneous intra-population firing rates, namely the stationary states of the form:
$$\nu = \underbrace{\overline{\nu}_0 \cdots \overline{\nu}_0}_{N_0\ \mathrm{times}}\ \underbrace{\overline{\nu}_1 \cdots \overline{\nu}_1}_{N_1\ \mathrm{times}}\ \cdots\ \underbrace{\overline{\nu}_{P-1} \cdots \overline{\nu}_{P-1}}_{N_{P-1}\ \mathrm{times}}, \qquad \overline{\nu}_\alpha \in \{0,1\}, \quad \forall\, \alpha \in \{0,\ldots,P-1\}, \tag{27}$$
This result proves that, in the large-size limit, the stationary states of this network can be studied through a dimensional reduction of the model. In other words, in order to completely characterize the statistical properties of this network, it suffices to consider the firing-rate states of the form (27), since the states that present intra-population symmetry breaking are very unlikely to be observed; a numerical illustration of the underlying Gumbel limit is sketched below. The main consequence of this phenomenon is a tremendous simplification in the mathematical analysis of the network model, since it reduces the analysis of the $2^N$ states of the network to only the $2^P$ states of the form (27). In turn, this simplification implies a strong reduction of the computational time of the algorithms, since typically $P \ll N$.
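The Gumbel asymptotics behind Figure 6, which underlie this dimensional reduction, can be previewed with a self-contained numerical experiment: for i.i.d. Laplace variables with mean $m$ and standard deviation $s$ (scale $b = s/\sqrt{2}$), the maximum $M_N$ of $N$ samples, shifted by $a_N = m + b\ln(N/2)$ and divided by $b$, converges in distribution to the standard Gumbel law. This is only a toy illustration of the limit theorem, not the exact bifurcation-point computation of Section 2.4.

```python
import numpy as np

rng = np.random.default_rng(1)

m, s = 0.0, 1.0        # mean and standard deviation of the weights
b = s / np.sqrt(2)     # Laplace scale parameter giving standard deviation s
N, trials = 1_000, 10_000

# one maximum of N i.i.d. Laplace samples per trial
M = rng.laplace(loc=m, scale=b, size=(trials, N)).max(axis=1)

# the upper tail P(X > x) = exp(-(x - m) / b) / 2 gives a_N = m + b * ln(N / 2)
z = (M - (m + b * np.log(N / 2))) / b

# compare the empirical CDF of z with the Gumbel CDF exp(-exp(-z))
for q in (-1.0, 0.0, 1.0, 2.0, 3.0):
    print(f"z = {q:+.1f}: empirical {np.mean(z <= q):.4f}, "
          f"Gumbel {np.exp(-np.exp(-q)):.4f}")
```

Because the Laplace tail is exactly exponential, the empirical frequencies should match the Gumbel values to within sampling error already at moderate $N$.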

4. Discussion

We studied how the statistical properties of the stationary firing-rate states of a binary neural network model with quenched disorder depend on the probability distribution of the synaptic weights and on the external stimuli. The size of the network is arbitrary and finite, while the synaptic connections between neurons are assumed to be independent (not necessarily identically distributed) random variables, with arbitrary marginal probability distributions. Under these assumptions, by applying the results derived in [34,35,36,37] for the order statistics of sets of independent random variables, we calculated semi-analytically the statistical properties of the stationary states and of their bifurcation points, in terms of the permanent of special matrices.
In particular, in Section 2.2.1 we derived the probability density and the cumulative distribution functions of the bifurcation points of the model in the stimuli space. From these distributions, in Section 2.2.2 we derived the mean multistability diagram of the network, namely the plot of the bifurcation points averaged over network realizations. Then, in Section 2.2.3 and Section 2.2.4, we derived the probability that a given firing-rate state is stationary for a fixed combination of stimuli, and the probability that a state is stationary regardless of the stimuli. These results provide a detailed description of the statistical properties of arbitrary-size networks with arbitrary connectivity matrix in the stationary regime, and describe how these properties are affected by variations in the external stimuli.
In Section 2.3 we specialized to the case of statistically homogeneous multi-population networks of arbitrary finite size. For these networks, we found a compact analytical formula for the permanent, which outperforms by several orders of magnitude the fastest known algorithm for the calculation of the permanent, i.e., the Balasubramanian-Bax-Franklin-Glynn algorithm [39,40,41,42]. Then, in Section 2.4 we derived asymptotic expressions for the statistical behavior of these multi-population networks in the large-size limit. In particular, if the contribution of the autapses to the statistics of the firing rates can be neglected, we proved that the probability distributions of the bifurcation points tend to the Gumbel law, and that the statistical properties of large-size multi-population networks can be studied through a powerful dimensional reduction.
To facilitate the use of these results, we implemented our semi-analytical formulas for arbitrary-size networks with arbitrary connectivity matrix in the Supplemental Python script “Statistics.py”. The script also performs numerical calculations of the probability distributions of the bifurcation points, of the occurrence probability of the stationary states and of the mean multistability diagram, through which we validated our semi-analytical results. To conclude, in the Supplemental Python script “Permanent.py”, we implemented a comparison between our analytical formula of the permanent for statistically homogeneous multi-population networks and the Balasubramanian-Bax-Franklin-Glynn algorithm. This comparison confirmed the superior performance of our formula in the specific case of multi-population networks, provided each external stimulus is shared by a relatively small number of populations.

4.1. Progress with Respect to Previous Work on Bifurcation Analysis

In the study of neural circuits, bifurcation theory has been applied mostly to networks composed of graded-output units with analog (rather than discrete) firing rates, see e.g., [53,54,55,56]. On the other hand, bifurcation theory of non-smooth dynamical systems of finite size, including those with discontinuous functions such as the discrete network studied in this paper, has recently received increased attention in the literature. However, the theory has been developed mostly for continuous-time models [57,58,59,60,61] and for piecewise-smooth continuous maps [62], while discontinuous maps have received much less attention, see e.g., [63]. In [28] we tackled this problem for finite-size networks composed of binary neurons with discontinuous activation function evolving in discrete time, and we introduced a brute-force algorithm that performs a semi-analytical bifurcation analysis of the model with respect to the external stimuli. Specifically, in [28] we focused on the study of bifurcations in the case of single network realizations. In the present paper we extended those results to networks with quenched disorder, and we introduced methods for performing the bifurcation analysis of the model over network realizations. While in [28] we studied the bifurcations of both the stationary and oscillatory solutions of the network equations, here we focused specifically on the bifurcations of the stationary states; the study of neural oscillations is discussed in Section 4.3.
Our work is closely related to the study of spin glasses in the zero-temperature limit, since a single realization of our network model has deterministic dynamics. In spin glasses, the physical observables are averaged over the randomness of the couplings in the large-size limit, by means of mathematical techniques such as the replica trick and the cavity method [10,11,64]. In our work, we followed a different approach, based on extreme value theory and order statistics. This allowed us to reduce the mathematical derivation of the averages and, more generally, of the probability distributions of the stationary states of arbitrary-size networks, to the calculation of 1D definite integrals on the real axis.
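As a flavor of this reduction, once the density of a bifurcation point is available, its average over realizations is a single definite integral. The sketch below integrates a Gumbel density numerically (the location $a$ and scale $b$ are illustrative) and checks the result against the closed-form mean $a + \gamma b$, where $\gamma$ is the Euler-Mascheroni constant.

```python
import numpy as np

a, b = 2.0, 0.5                      # illustrative Gumbel location and scale
x = np.linspace(a - 10 * b, a + 30 * b, 200_001)
u = (x - a) / b
pdf = np.exp(-u - np.exp(-u)) / b    # Gumbel probability density

dx = x[1] - x[0]
mean_numeric = np.sum(x * pdf) * dx  # 1D definite integral on the real axis
mean_exact = a + np.euler_gamma * b
print(mean_numeric, mean_exact)      # both are approximately 2.2886
```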
To our knowledge, bifurcations of neural networks with quenched disorder have been investigated only for fully connected network models with normally distributed weights and graded activation functions [16,17,18,19,20]. These studies focused on the thermodynamic limit of the models, which prevents progress in understanding the dynamics of small networks, such as microcolumns in the primate cortex [23] or the nervous systems of some invertebrates [22]. The neural activity of small networks containing only tens or hundreds of neurons may show unexpected complexity [28]. For this reason, the study of small networks typically requires more advanced mathematical techniques, because the powerful statistical methods used to study large networks do not apply to small ones. Contrary to previous research, in this paper we focused first on networks of arbitrary size, including small ones. Moreover, unlike previous work, we considered networks with an arbitrary synaptic connectivity matrix, which need not be fully connected or normally distributed. In particular, our work advances the tools available for understanding small-size neural circuits, by providing a complete (generally semi-analytical) description of the stationary behavior of Equations (1) and (3). Then, for completeness, and similarly to [16,17,18,19,20], we studied the large-size limit of multi-population networks composed of statistically homogeneous populations. Unlike previous work, which focused on networks composed of graded-output neurons, our binary-rate assumption allowed us to derive asymptotic analytical formulas for the statistics of the stationary states and of the corresponding bifurcation points, advancing our comprehension of neural networks at macroscopic spatial scales.

4.2. Limitations of Our Approach

A first limitation of the algorithms that we introduced in the Supplemental file “Statistics.py” is represented by the network size. Note that, during the derivation of our semi-analytical formulas in Section 2.2, we did not make any assumption about the number of neurons in the network. As a consequence, our results are exact for networks of arbitrary size. However, the number of possible firing-rate states in a binary network grows exponentially with the number of neurons, therefore in practice our algorithms can be applied only to small-size networks. The maximum network size that can be studied through our approach depends on the computational power available.
To study the asymptotic statistical properties of large networks, in this paper we focused on the special case of statistically homogeneous networks with arbitrary sparseness and distinct external stimuli to each neural population. The bifurcation points of these networks obey the Fisher-Tippett-Gnedenko theorem, in that they correspond to the extreme values of some conditionally independent and identically distributed random variables. While the extreme value statistics of a finite number of (independent and) non-identically distributed random variables are known (see [36]), a straightforward generalization of the Fisher-Tippett-Gnedenko theorem to statistically heterogeneous variables in the large-size limit is not available [65]. For this reason, a second limitation of our approach is represented by the study of the asymptotic properties of neural networks whose external stimuli are shared by two or more populations. In these networks, the extreme value statistics must be calculated for a set of non-identically distributed random variables, see our discussion at the end of Section 2.4. Therefore the complete characterization of these networks still represents an open problem.
As in [16,17,18,19,20], a third limitation of our work is represented by the assumption of statistical independence of the synaptic connections. The calculation of order statistics for dependent random variables represents another open problem in the literature, which prevents the extension of our results to neural networks with correlated synaptic connections. In computational neuroscience, the dynamical and statistical properties of this special class of neural networks are still poorly understood. A notable exception is represented by [66], which provides a theoretical study of a graded-rate network with correlated normally distributed weights in the thermodynamic limit.

4.3. Directions for Future Work on the Statistical Mechanics of Networks with Quenched Disorder

We expect that our approach for studying the stationary-state statistics of networks of binary neurons can be extended to more biologically realistic network models. The simplest extensions of the binary model are the study of neuronal models with more complex step functions, which allow one quiescent state and several firing modes (e.g., networks of ternary neurons, see [67]), and the study of networks of neurons with piecewise-linear activation functions [68]. Because of the piecewise linearity of these models, we expect that analytical solutions of their stationary-state statistics can be derived through a generalization of the approach described in this paper.
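To make the two candidate extensions concrete, here is a minimal sketch of both activation types; the thresholds and firing levels are illustrative choices, not taken from [67] or [68].

```python
import numpy as np

def ternary_step(x, theta1=0.0, theta2=1.0):
    # Three-level step: one quiescent state (0) and two firing modes (0.5, 1).
    return np.where(x < theta1, 0.0, np.where(x < theta2, 0.5, 1.0))

def piecewise_linear(x, gain=1.0):
    # Piecewise-linear activation, linear between the saturation levels 0 and 1.
    return np.clip(gain * x, 0.0, 1.0)

x = np.linspace(-1.0, 2.0, 7)
print(ternary_step(x))       # [0.  0.  0.5 0.5 0.5 1.  1. ]
print(piecewise_linear(x))   # [0.  0.  0.  0.5 1.  1.  1. ]
```

A network of such units can then be analyzed state by state, exactly as in the binary case, at the price of a larger (but still finite) state set.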
However, stationary states represent only a subset of the dynamic repertoire of a network model. Another important dynamical regime in networks with random connections is neural oscillations, which have been studied previously, for example, for fully connected graded-rate neurons in the thermodynamic limit [19] and for leaky integrate-and-fire neurons [69]. In future work, we will investigate the possibility of extending our results to the study of neural oscillations in binary network models. Note that, according to [28], the bifurcation points at which an existing neural oscillation disappears, or a new oscillation forms, correspond to the minima of minima or to the maxima of maxima of sets of random variables. Provided the arguments of the functions $\min(\cdot)$ and $\max(\cdot)$ are conditionally independent given the firing rate $\nu$, the statistics of neural oscillations could therefore, in principle, be studied (semi-)analytically by applying extreme value theory twice.

4.4. Possible Implications of This Work for Circuit Neuroscience

The mathematical techniques developed here may provide useful tools for better understanding the specific functional advantages of the diversity and heterogeneity of synaptic connections, and their possible role in stabilizing networks or preventing dysfunctions of brain networks. A deeper mathematical understanding of how such heterogeneities shape circuit dynamics at different spatial scales, including scales that involve finite-size neural networks, could thus be useful for understanding the origin and consequences of aberrant circuit functions.
While much more work is needed in this direction, our results suggest strong qualitative and quantitative differences between the dynamics of the networks with quenched synaptic variability presented here and networks with similar size and individual neuron models but homogeneous synaptic weights, which we studied in [28]. The dynamical properties of the networks with quenched disorder change considerably across realizations, as a consequence of their small size. In particular, the variability of the synaptic weights allows the network to explore a larger set of operational modes. For this reason, we typically observe a reorganization of the (mean) bifurcation diagram (in the degree of fragmentation of the diagram in the stimuli space, and in the maximum multistability degree of the stationary solutions), as well as a wider set of accessible stationary states across network realizations, both for fixed external stimuli and regardless of the stimuli. A systematic study of the enrichment of network dynamics offered by quenched variations in synaptic strength, coupled with more systematic measurements of the variability of synaptic strengths in healthy and dysfunctional biological neural circuits, could help clarify the possible functional benefits of heterogeneities in neural circuits.

Supplementary Materials

The following are available online at https://www.mdpi.com/1099-4300/21/7/630/s1, File “Statistics.py”: this Python script calculates semi-analytically the cumulative distribution function of the bifurcation points, the mean multistability diagram, and the occurrence probability of the stationary states (for fixed stimuli and regardless of them) for a network with quenched disorder, according to the methods described in Section 2.2. Moreover, the script compares these results with the numerical evaluation of the corresponding quantities, obtained through the Monte Carlo approach described in Section 2.5. File “Permanent.py”: this Python script calculates the permanent of block matrices with homogeneous blocks by means of Equation (20), and compares the computational time of this formula with that of the Balasubramanian-Bax-Franklin-Glynn algorithm.

Author Contributions

Conceptualization, D.F. and S.P.; Methodology, D.F. and S.P.; Software, D.F.; Validation, D.F.; Formal Analysis, D.F.; Writing—Original Draft Preparation, D.F. and S.P.

Funding

This research was in part supported by the Flag-Era JTC Human Brain Project (SLOW-DYN) and by the Simons Foundation (SFARI, Explorer Grant No. 602849).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Marder, E.; Goaillard, J.M. Variability, compensation and homeostasis in neuron and network function. Nat. Rev. Neurosci. 2006, 7, 563–574.
2. Dobrunz, L.E.; Stevens, C.F. Heterogeneity of release probability, facilitation, and depletion at central synapses. Neuron 1997, 18, 995–1008.
3. Waters, J.; Smith, S.J. Vesicle pool partitioning influences presynaptic diversity and weighting in rat hippocampal synapses. J. Physiol. 2002, 541, 811–823.
4. Parker, D. Variable properties in a single class of excitatory spinal synapse. J. Neurosci. 2003, 23, 3154–3163.
5. Branco, T.; Staras, K. The probability of neurotransmitter release: Variability and feedback control at single synapses. Nat. Rev. Neurosci. 2009, 10, 373–383.
6. Santhakumar, V.; Soltesz, I. Plasticity of interneuronal species diversity and parameter variance in neurological diseases. Trends Neurosci. 2004, 27, 504–510.
7. Sherrington, D.; Kirkpatrick, S. Solvable model of a spin-glass. Phys. Rev. Lett. 1975, 35, 1792–1796.
8. Kirkpatrick, S.; Sherrington, D. Infinite-ranged models of spin-glasses. Phys. Rev. B 1978, 17, 4384–4403.
9. De Almeida, J.R.L.; Thouless, D.J. Stability of the Sherrington-Kirkpatrick solution of a spin glass model. J. Phys. A Math. Gen. 1978, 11, 983–990.
10. Mézard, M.; Sourlas, N.; Toulouse, G.; Virasoro, M. Replica symmetry breaking and the nature of the spin glass phase. J. Phys. 1984, 45, 843–854.
11. Mézard, M.; Parisi, G.; Virasoro, M. Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications; World Scientific: Singapore, 1986.
12. Sherrington, D. Spin glasses. In Physics of Novel Materials; Das, M.P., Ed.; World Scientific: Singapore, 1999; Chapter 4; pp. 146–204.
13. Litwin-Kumar, A.; Doiron, B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 2012, 15, 1498–1505.
14. Doiron, B.; Litwin-Kumar, A. Balanced neural architecture and the idling brain. Front. Comput. Neurosci. 2014, 8, 56.
15. Rosenbaum, R.; Doiron, B. Balanced networks of spiking neurons with spatially dependent recurrent connections. Phys. Rev. X 2014, 4, 021039.
16. Sompolinsky, H.; Crisanti, A.; Sommers, H.J. Chaos in random neural networks. Phys. Rev. Lett. 1988, 61, 259–262.
17. Cessac, B. Increase in complexity in random neural networks. J. Phys. I France 1995, 5, 409–432.
18. Faugeras, O.; Touboul, J.; Cessac, B. A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front. Comput. Neurosci. 2009, 3, 1.
19. Hermann, G.; Touboul, J. Heterogeneous connections induce oscillations in large-scale networks. Phys. Rev. Lett. 2012, 109, 018702.
20. Cabana, T.; Touboul, J. Large deviations, dynamics and phase transitions in large stochastic and disordered neural networks. J. Stat. Phys. 2013, 153, 211–269.
21. Coolen, A.C.C.; Sherrington, D. Dynamics of fully connected attractor neural networks near saturation. Phys. Rev. Lett. 1993, 71, 3886–3889.
22. Williams, R.W.; Herrup, K. The control of neuron number. Ann. Rev. Neurosci. 1988, 11, 423–453.
23. Mountcastle, V.B. The columnar organization of the neocortex. Brain 1997, 120, 701–722.
24. Helmstaedter, M.; de Kock, C.P.; Feldmeyer, D.; Bruno, R.M.; Sakmann, B. Reconstruction of an average cortical column in silico. Brain Res. Rev. 2007, 55, 193–203.
25. Meyer, H.S.; Wimmer, V.C.; Oberlaender, M.; de Kock, C.P.; Sakmann, B.; Helmstaedter, M. Number and laminar distribution of neurons in a thalamocortical projection column of rat vibrissal cortex. Cereb. Cortex 2010, 20, 2277–2286.
26. De Luca, M.; Beckmann, C.F.; de Stefano, N.; Matthews, P.M.; Smith, S.M. fMRI resting state networks define distinct modes of long-distance interactions in the human brain. NeuroImage 2006, 29, 1359–1367.
27. Mantini, D.; Perrucci, M.G.; Del Gratta, C.; Romani, G.L.; Corbetta, M. Electrophysiological signatures of resting state networks in the human brain. Proc. Natl. Acad. Sci. USA 2007, 104, 13170–13175.
28. Fasoli, D.; Panzeri, S. Optimized brute-force algorithms for the bifurcation analysis of a binary neural network model. Phys. Rev. E 2019, 99, 012316.
29. Kuznetsov, Y.A. Elements of Applied Bifurcation Theory; Springer: New York, NY, USA, 1998; Volume 112.
30. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
31. Coolen, A.C.C. Statistical mechanics of recurrent neural networks I—Statics. In Handbook of Biological Physics; Elsevier: Amsterdam, The Netherlands, 2001; Volume 4, pp. 553–618.
32. Glauber, R.J. Time dependent statistics of the Ising model. J. Math. Phys. 1963, 4, 294–307.
33. De Haan, L.; Ferreira, A. Extreme Value Theory: An Introduction; Springer: New York, NY, USA, 2006.
34. Vaughan, R.J.; Venables, W.N. Permanent expressions for order statistic densities. J. R. Stat. Soc. Ser. B 1972, 34, 308–310.
35. Bapat, R.B.; Beg, M.I. Order statistics for nonidentically distributed variables and permanents. Sankhyā Ser. A 1989, 51, 79–93.
36. Bapat, R.B. Permanents in probability and statistics. Linear Algebra Appl. 1990, 127, 3–25.
37. Hande, S. A note on order statistics for nonidentically distributed variables. Sankhyā Ser. A 1994, 56, 365–368.
38. Valiant, L.G. The complexity of computing the permanent. Theor. Comput. Sci. 1979, 8, 189–201.
39. Balasubramanian, K. Combinatorics and Diagonals of Matrices. Ph.D. Thesis, Indian Statistical Institute, Calcutta, India, 1980.
40. Bax, E.; Franklin, J. A Finite-Difference Sieve to Compute the Permanent; Technical Report CalTech-CS-TR-96-04; CalTech: Pasadena, CA, USA, 1996.
41. Bax, E. Finite-Difference Algorithms for Counting Problems. Ph.D. Thesis, California Institute of Technology, Pasadena, CA, USA, 1998.
42. Glynn, D.G. The permanent of a square matrix. Eur. J. Combin. 2010, 31, 1887–1891.
43. Billingsley, P. Probability and Measure; John Wiley & Sons: New York, NY, USA, 1995.
44. Sompolinsky, H.; Zippelius, A. Dynamic theory of the spin-glass phase. Phys. Rev. Lett. 1981, 47, 359–362.
45. Ben-Arous, G.; Dembo, A.; Guionnet, A. Aging of spherical spin glasses. Probab. Theory Relat. Fields 2001, 120, 1–67.
46. Yilmaz, E.; Ozer, M.; Baysal, V.; Perc, M. Autapse-induced multiple coherence resonance in single neurons and neuronal networks. Sci. Rep. 2016, 6.
47. Fisher, R.A.; Tippett, L.H.C. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Math. Proc. Camb. Philos. Soc. 1928, 24, 180–190.
48. Gnedenko, B. Sur la distribution limite du terme maximum d’une série aléatoire. Ann. Math. 1943, 44, 423–453.
49. Gumbel, E.J. Statistics of Extremes; Columbia University Press: New York, NY, USA, 1958.
50. Vivo, P. Large deviations of the maximum of independent and identically distributed random variables. Eur. J. Phys. 2015, 36, 055037.
51. Strata, P.; Harvey, R. Dale’s principle. Brain Res. Bull. 1999, 50, 349–350.
52. Fasoli, D.; Cattani, A.; Panzeri, S. Pattern storage, bifurcations and groupwise correlation structure of an exactly solvable asymmetric neural network model. Neural Comput. 2018, 30, 1258–1295.
53. Borisyuk, R.M.; Kirillov, A.B. Bifurcation analysis of a neural network model. Biol. Cybern. 1992, 66, 319–325.
54. Beer, R.D. On the dynamics of small continuous-time recurrent neural networks. Adapt. Behav. 1995, 3, 469–509.
55. Pasemann, F. Complex dynamics and the structure of small neural networks. Netw.-Comp. Neural 2002, 13, 195–216.
56. Haschke, R.; Steil, J.J. Input space bifurcation manifolds of recurrent neural networks. Neurocomputing 2005, 64, 25–38.
57. Leine, R.I.; van Campen, D.H.; van de Vrande, B.L. Bifurcations in nonlinear discontinuous systems. Nonlinear Dyn. 2000, 23, 105–164.
58. Awrejcewicz, J.; Lamarque, C.H. Bifurcation and Chaos in Nonsmooth Mechanical Systems; World Scientific: Singapore, 2003.
59. Leine, R.I.; van Campen, D.H. Bifurcation phenomena in non-smooth dynamical systems. Eur. J. Mech. A-Solid 2006, 25, 595–616.
60. Makarenkov, O.; Lamb, J.S.W. Dynamics and bifurcations of nonsmooth systems: A survey. Phys. D 2012, 241, 1826–1844.
61. Harris, J.; Ermentrout, B. Bifurcations in the Wilson–Cowan equations with nonsmooth firing rate. SIAM J. Appl. Dyn. Syst. 2015, 14, 43–72.
62. Parui, S.; Banerjee, S. Border collision bifurcations at the change of state-space dimension. Chaos 2002, 12, 1054–1069.
63. Avrutin, V.; Schanz, M.; Banerjee, S. Multi-parametric bifurcations in a piecewise-linear discontinuous map. Nonlinearity 2006, 19, 1875–1906.
64. Mézard, M.; Parisi, G. The cavity method at zero temperature. J. Stat. Phys. 2003, 111, 1–34.
65. Kreinovich, V.; Nguyen, H.T.; Sriboonchitta, S.; Kosheleva, O. Modeling extremal events is not easy: Why the extreme value theorem cannot be as general as the central limit theorem. In Uncertainty Modeling; Kreinovich, V., Ed.; Springer International Publishing: New York, NY, USA, 2017; Chapter 8; pp. 123–133.
66. Faugeras, O.; MacLaurin, J. Asymptotic description of neural networks with correlated synaptic weights. Entropy 2015, 17, 4701–4743.
67. Bao, G.; Zeng, Z. Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions. Neurocomputing 2012, 77, 101–107.
68. Treves, A.; Rolls, E.T. What determines the capacity of autoassociative memories in the brain? Netw. Comput. Neural Syst. 1991, 2, 371–397.
69. Brunel, N.; Wang, X.J. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J. Neurophysiol. 2003, 90, 415–430.
Figure 1. Probability distribution of the synaptic weights. This figure shows the probability distribution of the synaptic weights $J_{i,j}$ (see Equation (2)), in the specific case of the small-size network model described in Section 3. (A) Graph of the connection probability matrix $P$ reported in Table 1. An arrow from vertex $j$ to vertex $i$ represents a connection probability $P_{i,j} > 0$. (B) Wigner semicircle distribution of some variables $W_{i,j}$, according to Equation (25) and the values of the parameters $C$ and $R$ reported in Table 1.
Figure 2. Examples of cumulative distribution functions of the bifurcation points. This figure reports the cumulative distribution functions of the bifurcation points $\Lambda_E$ and $\Lambda_I$ of the firing-rate state 1110 (panels (A) and (B), respectively), in the case of the small-size network described in Section 3. The red curves represent the semi-analytical functions, calculated through Equation (14), while the blue dots represent the numerical functions, computed over 5000 network realizations as described in Section 2.5. Similar results can be derived for all the other bifurcation points of the network, if desired.
Figure 3. Mean multistability diagram. This figure reports the mean multistability diagram of the small-size network described in Section 3, see Equation (25) and Table 1. The diagram shows how the degree of multistability of the network, namely the number of stationary solutions, depends on average on the external currents $I_E$ and $I_I$. Each color represents a different degree of multistability $M$ (e.g., blue = tristability). (A–C) Numerical multistability diagrams, obtained through Monte Carlo simulations as described in Section 2.5, for an increasing number of network realizations (5, 50 and 5000). (D) Semi-analytical multistability diagram, obtained through the techniques described in Section 2.2.2. Please note that by increasing the number of network realizations, the numerical multistability diagram converges to the semi-analytical one.
Figure 4. Occurrence probability of the firing-rate states. This figure reports the occurrence probability of the $2^N = 16$ firing-rate states of the network described in Section 3 (see Equation (25) and Table 1), from the state 0000 to the state 1111. (A) Occurrence probability of the firing-rate states for fixed stimuli, obtained for $\hat{I}_E = 0$ and $\hat{I}_I = 4$. The red bars represent the occurrence probability calculated semi-analytically through the method described in Section 2.2.3. The blue bars represent the same probability, evaluated numerically by a Monte Carlo simulation, as explained in Section 2.5. (B) Occurrence probability of the firing-rate states, regardless of the stimuli (red bars calculated according to the approach of Section 2.2.4, blue bars computed again through a Monte Carlo simulation). In both panels, we computed the blue bars over 5000 network realizations. Please note that the occurrence probabilities are not normalized over the set of $2^N$ firing-rate states, see text.
Figure 5. Speed test for the analytical formula of the permanent. This figure shows the mean computational times $T_{\mathrm{BBFG}}$ and $T_{\mathrm{HBM}}$ (see text) required for calculating the permanent of homogeneous block matrices $B$ by means of an Intel® Core™ i5-5300U CPU clocked at 2.30 GHz with 16 GB RAM. We chose the entries of the matrices $B$ to be independent random numbers $0 \leq B_{\lambda,\mu} < 0.3$ with two decimal digits, generated from a uniform probability distribution (see the Python script “Permanent.py”), while the remaining parameters of $B$ are shown in Table 2. The average times $T_{\mathrm{BBFG}}$ and $T_{\mathrm{HBM}}$ are calculated over 100 repetitions of the matrices. (A) Mean computational times $T_{\mathrm{BBFG}}$ (blue line) and $T_{\mathrm{HBM}}$ (red line) in seconds, as a function of $N$. The green line represents the mean speed gain $T_{\mathrm{BBFG}}/T_{\mathrm{HBM}}$ of Equation (20) over the BBFG algorithm. (B) Mean computational times and speed gain as a function of $X$.
Figure 6. Large-size limit of a statistically homogeneous two-population network. This figure shows the probability distribution of the bifurcation points in a random network composed of two statistically homogeneous populations (one excitatory and one inhibitory) with Laplace-distributed weights (see Equation (26)), in the large-size limit. The parameters of the network are reported in Table 3. In particular, note that in this figure we compute the probability density of the bifurcation points for every firing-rate state $\nu$ that is composed of $\gamma_{E,1} = 240$ (respectively $\gamma_{I,1} = 80$) active neurons in the excitatory (respectively inhibitory) population. The analytical curves (the green and red solid lines) are obtained from Equations (22) and (23), while the numerical probability densities (magenta and blue dots) are calculated over 100,000 network realizations, as described in Section 2.5. (A) Analytical and numerical probability distributions of the bifurcation points $\Lambda_E$ and $\Xi_E$. (B) Probability distributions of $\Lambda_I$ and $\Xi_I$.
Table 1. An example of network parameters. This table contains the values of the parameters of the small-size network that we study in Section 3 (see Equation (25) and Figure 1). The symbol × in the matrices $C$ and $R$ means that the statistics of the stationary states and of the bifurcation points are not affected by those parameters, since the corresponding synaptic connections are absent ($P_{i,j} = 0$).
$$P = \begin{pmatrix} 0 & 0.5 & 1 & 0.6 \\ 0.4 & 0.5 & 0.1 & 1 \\ 0.5 & 0.7 & 0.3 & 0.8 \\ 0 & 1 & 0.9 & 0 \end{pmatrix}, \qquad \theta = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 2 \end{pmatrix}$$
$$C = \begin{pmatrix} \times & 4 & 3 & 10 \\ 6 & 5 & 2 & 4 \\ 3 & 4 & 6 & 7 \\ \times & 2 & 5 & \times \end{pmatrix}, \qquad R = \begin{pmatrix} \times & 4 & 2 & 3 \\ 5 & 3 & 2 & 3 \\ 3 & 4 & 5 & 6 \\ \times & 2 & 4 & \times \end{pmatrix}$$
Table 2. Set of parameters used for generating Figure 5.
Panel A: $N = 10, 12, \ldots, 22$; $Y = 2$; $X = 3$; $X_0 = 3$, $X_1 = 5$, $X_2 = N - 8$; $Y_0 = 8$, $Y_1 = N - 8$.
Panel B: $N = 16$; $Y = 2$; $X = 1, 2, 4, 8, 16$; $X_\lambda = N/X$ for $\lambda \in \{0, \ldots, X - 1\}$; $Y_0 = 7$, $Y_1 = 9$.
Table 3. Set of parameters used for generating Figure 6.
$N_E = 640$; $\gamma_{E,1} = 240$; $\vartheta_E = 3$; $\mu_{EE} = 11$, $\sigma_{EE} = 0.8$, $P_{EE} = 0.7$; $\mu_{EI} = -8$, $\sigma_{EI} = 0.6$, $P_{EI} = 0.9$.
$N_I = 160$; $\gamma_{I,1} = 80$; $\vartheta_I = 0$; $\mu_{IE} = 5$, $\sigma_{IE} = 0.7$, $P_{IE} = 1.0$; $\mu_{II} = -10$, $\sigma_{II} = 0.9$, $P_{II} = 0.8$.
