Article

Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

Ariadne A. Costa, Ludmila Brochini and Osame Kinouchi

1 Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
2 Instituto de Computação, Universidade de Campinas, Campinas-SP 13083-852, Brazil
3 Departamento de Estatística, Instituto de Matemática e Estatística (IME), Universidade de São Paulo, São Paulo-SP 05508-090, Brazil
4 Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto (FFCLRP), Universidade de São Paulo, Ribeirão Preto-SP 14040-901, Brazil
* Author to whom correspondence should be addressed.
Entropy 2017, 19(8), 399; https://doi.org/10.3390/e19080399
Submission received: 23 May 2017 / Revised: 27 July 2017 / Accepted: 31 July 2017 / Published: 2 August 2017
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)

Abstract

Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here, we study fully-connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary, slightly supercritical state (self-organized supercriticality (SOSC)) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and Dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches.

1. Introduction

Neuronal network models are extended dynamical systems that may present different collective behaviors or phases characterized by order parameters. The separation regions between phases can be described as bifurcations in the order parameters or phase transitions. In several models of neuronal activity, the relevant phase change is a continuous transition from an absorbing silent state to an active state [1,2,3]. In such a continuous transition, we have a critical point (in general, a critical surface) where concepts of universality classes and critical exponents (among others) are valid. At criticality, we observe avalanches of activity described by power laws for their size and duration. Furthermore, the avalanche profile shows fractal scaling. Since the landmark findings of Beggs and Plenz in 2003 [2], these behaviors have also been reported in biological networks; see the reviews [4,5,6,7].
The motivation for the idea that criticality is important to understand neuronal activity is not only empirical. Several works have shown that there are advantages for a network to operate at the critical state [3,8,9,10]. However, it is not clear how biological networks tune themselves to the critical region.
An important idea discussed in several papers is that, since criticality depends on the strength of the synapses (links) between the neurons, a homeostatic mechanism for dynamic synapses tunes the network toward the critical region. There are two main paradigms: self-organization of Hebbian synapses [11,12,13,14,15,16] and self-organization of dynamic synapses [17,18,19,20,21] following Tsodyks and Markram [22,23]. With these synaptic mechanisms, it is possible to achieve, or at least to approximate, a self-organized critical (SOC) state.
With a different approach, we have shown recently that dynamic neuronal gains, biophysically linked to firing-dependent excitability of the axonal initial segment (AIS) [24,25,26,27], can also lead to self-organized criticality [28]. This new mechanism is simpler than dynamic synapses because, for biological networks with N neurons, we have of the order of 10^4 N synapses [29], and thus 10^4 N dynamic equations, but only N equations for the neuronal gains.
It was also observed in Brochini et al. [28] that all papers in the literature that achieve exact SOC use a time scale τ for synaptic recovery proportional to N, with N → ∞. The use of this non-local information (N), and of a diverging recovery time τ, is not biologically plausible. A similar time scale τ is present in the neuronal gain recovery. When we use a biological range for τ that does not scale with N, the network turns out (slightly) supercritical, a phenomenon that we called self-organized supercriticality (SOSC). That is, both dynamic synapses and dynamic gains with fixed τ, which seems to be the reasonable biological assumption, present SOSC instead of SOC.
Here, we report for the first time an extensive study of the neuronal gain mechanism and SOSC. First, we present new mean-field results for phase transitions in a fully-connected model of integrate-and-fire stochastic neurons with fixed gains. We find both continuous and discontinuous phase transitions. Then, we introduce a simplified gain dynamics depending only on the parameter τ, which also has a simple mean-field solution in the case of the continuous transition and presents SOSC. We compare this solution with extensive simulations for different system sizes N and values of τ. Surprisingly, we find collective oscillations produced by the gain dynamics that coexist with neuronal avalanches.

2. The Model

We consider a fully-connected network composed of i = 1, …, N discrete-time stochastic neurons [28,30,31,32,33]. The synapses transmit signals from some presynaptic neuron j to a postsynaptic neuron i with synaptic strength W_{ij}. The Boolean variable X_i[t] ∈ {0, 1} denotes whether neuron i fired between t and t+1, and V_i[t] corresponds to its membrane potential at time t. Firing (X_i[t+1] = 1) occurs with probability Φ(V_i[t]), which is called the firing function [32,33,34,35,36,37].
If a presynaptic neuron j fires at discrete time t, then X_j[t] = 1. This event increments by W_{ij} the potential of every postsynaptic neuron i that has not fired at time t. The potential of a non-firing neuron may also integrate an external stimulus I_i[t]. Apart from these increments, the potential of a non-firing neuron decays at each time step towards zero by a factor μ ∈ [0, 1], which models the effect of a leakage current.
The neuron membrane potentials evolve as:

V_i[t+1] = 0,  if X_i[t] = 1,
V_i[t+1] = μ V_i[t] + I_i[t] + (1/N) Σ_{j=1}^{N} W_{ij} X_j[t],  if X_i[t] = 0.      (1)
This is a special case of the general model from [32] with the filter function g(t − t_s) = μ^{t−t_s}, where t_s is the time of the last firing of neuron i [28]. In contrast to standard integrate-and-fire (IF) neurons, the firing is not deterministic above a threshold, but stochastic. We also have X_i[t+1] = 0 if X_i[t] = 1 (a refractory period of one time step).
The firing function 0 ≤ Φ(V) ≤ 1 is sigmoidal, that is, monotonically increasing. We also assume that Φ(V) is zero up to some threshold potential V_T. If Φ is the shifted Heaviside step function, Φ(V) = Θ(V − V_T), we have a deterministic discrete-time leaky integrate-and-fire (LIF) neuron. Any other choice for Φ(V) gives a stochastic neuron.
In Brochini et al. [28], we have studied a linear saturating function with neuronal gain Γ similar to that used in [33]. Here, we study the so-called rational function, which does not have a saturating potential; see Figure 1a:

Φ(V) = [Γ(V − V_T) / (1 + Γ(V − V_T))] Θ(V − V_T).      (2)
Notice that we recover the deterministic LIF model Φ(V) = Θ(V − V_T) when Γ → ∞. The use of the rational function instead of the linear saturating one is convenient and offers some theoretical advantages, for example, avoiding the anomalous two-cycles observed in [28].
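To make the update rule concrete, here is a minimal Python sketch of one time step of Equations (1) and (2), assuming uniform synapses W_{ij} = W (the authors' actual codes were written in MATLAB and Fortran90, see Section 6; all parameter values here are illustrative):

```python
import numpy as np

def step(V, X, rng, W=1.0, mu=0.0, I=0.0, Gamma=1.2, V_T=0.0):
    """One discrete time step of the fully-connected model, Eqs. (1)-(2)."""
    N = V.size
    # Eq. (1): fired neurons reset to 0; the others leak and integrate inputs.
    field = W * X.sum() / N            # (1/N) sum_j W_ij X_j for uniform W_ij = W
    V = np.where(X == 1, 0.0, mu * V + I + field)
    # Eq. (2): rational Phi; Phi(0) = 0 enforces the one-step refractory period (for V_T >= 0).
    u = np.maximum(Gamma * (V - V_T), 0.0)
    phi = u / (1.0 + u)
    X = (rng.random(N) < phi).astype(np.int8)
    return V, X

rng = np.random.default_rng(0)
N = 10_000
V, X = rng.random(N), np.zeros(N, dtype=np.int8)
X[rng.integers(N)] = 1                 # seed a single spike
for t in range(1000):
    V, X = step(V, X, rng)
```

Note that, with uniform synapses, non-firing neurons contribute nothing and firing neurons are reset regardless of input, so the scalar field above reproduces the sum in Equation (1) exactly.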

3. Mean-Field Calculations

The network’s activity is measured by the fraction ρ[t] of firing neurons (or density of active sites):

ρ[t] = (1/N) Σ_{j=1}^{N} X_j[t].      (3)
The density of active neurons ρ[t] can be computed from the probability density p[t](V) of potentials at time t:

ρ[t] = ∫ Φ(V) p[t](V) dV,      (4)

where p[t](V) dV is the fraction of neurons with potential in the range [V, V + dV] at time t.
Neurons that fire between t and t+1 have their potential reset to zero. They contribute to p[t+1](V) a Dirac impulse at potential V = 0, with amplitude ρ[t] given by Equation (4). The potentials of all neurons also evolve according to Equation (1). This process modifies p[t](V) also for V ≠ 0.
In the mean-field limit, we assume that the synaptic weights W_{ij} follow a distribution with average W = ⟨W_{ij}⟩ and finite variance. By disregarding correlations, the term in Equation (1) corresponding to the sum of all presynaptic inputs simplifies to Wρ[t].
If the external input is constant, I_i[t] = I, a stationary state is achieved, which depends only on the average synaptic weight W, the leakage parameter μ and the parameters that define the function Φ(V), that is, Γ and V_T. In Brochini et al. [28], it is shown that the stationary p(V) is composed of delta peaks of height η_k situated at voltages U_k given by:

U_0 = 0,      (5)
U_k = μ U_{k−1} + I + Wρ,      (6)
η_k = (1 − Φ(U_{k−1})) η_{k−1},      (7)
ρ = η_0 = Σ_{k=0}^{∞} Φ(U_k) η_k,      (8)
for all k ≥ 1. Here, U_k corresponds to the potential value of the population of neurons with firing age k, the number of time steps since the neuron last fired. The normalization condition Σ_{k=0}^{∞} η_k = 1 must be imposed explicitly. Equations (6)–(8) can be solved numerically for any firing function Φ, so this result is very general.
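As an illustration, one possible numerical scheme for Equations (5)–(8) is a damped fixed-point iteration on ρ, truncating at the first K = 100 peaks as described in Section 4.1.1. The paper does not specify the authors' exact solver, so the following Python sketch is only one reasonable choice:

```python
import numpy as np

def stationary_rho(Gamma, W, mu=0.5, I=0.0, V_T=0.0, K=100, iters=5000):
    """Self-consistent solution of Eqs. (5)-(8) for the rational Phi."""
    def phi(U):
        u = np.maximum(Gamma * (U - V_T), 0.0)
        return u / (1.0 + u)
    rho = 0.5                                  # start on the active branch
    for _ in range(iters):
        U = np.zeros(K)
        for k in range(1, K):
            U[k] = mu * U[k-1] + I + W * rho   # Eq. (6); U_0 = 0 by Eq. (5)
        eta = np.zeros(K)
        eta[0] = rho
        for k in range(1, K):
            eta[k] = (1.0 - phi(U[k-1])) * eta[k-1]   # Eq. (7)
        eta /= eta.sum()                       # explicit normalization
        rho = 0.5 * rho + 0.5 * float((phi(U) * eta).sum())   # damped Eq. (8)
    return rho

print(stationary_rho(Gamma=1.5, W=1.0))  # > 0 above the critical line Gamma_C = (1 - mu)/W
```

At the true stationary state, the normalization step is a no-op (η_0 = ρ and Σ η_k = 1 hold simultaneously), so the iteration leaves the solution invariant; the damping merely helps convergence.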

4. Results

4.1. Phase Transitions for the Rational Φ ( V )

In terms of non-equilibrium statistical physics, ρ is the order parameter, I is a uniform external field, and Γ and W are the main control parameters. The activity ρ also depends on V_T and μ.

4.1.1. The Case with μ > 0, I = 0, V_T = 0

By using Equations (5)–(8), we obtain numerically ρ(W, Γ) for several values of μ > 0, in the case I = 0, V_T = 0 (Figure 1b). Only the first 100 peaks (U_k, η_k) were considered, since, for the given μ and Φ, there was no significant probability density beyond that point. The same numerical method can be used to study the cases I ≠ 0, V_T ≠ 0.
We also obtained an analytic approximation (see Appendix A) for small ρ:

ρ ≈ [1 / (2 + μ + μ²/(1 − μ))] (Γ − Γ_C)/Γ ∝ ΔΓ^β,      (9)

where Γ_C = (1 − μ)/W defines the critical line and ΔΓ = (Γ − Γ_C)/Γ is the reduced control parameter. Therefore, the critical exponent for the order parameter near criticality is β = 1, characteristic of the mean-field directed percolation (DP) universality class [38]. We also compare Equation (9) with the numerical results for ρ(Γ, μ) in Figure 1c.

4.1.2. Analytic Results for μ = 0

In the case μ = 0, it is possible to do a simple mean-field analysis valid for N → ∞. This case is illustrative because it presents all the phase transitions that occur for μ > 0.
When μ = 0 and I_i[t] = I (uniform constant input), the stationary density p(V) consists of only two Dirac peaks, at potentials U_0 = 0 and U_1 = I + Wρ. Equation (8) simplifies to:

ρ = ρ Φ(0) + (1 − ρ) Φ(I + Wρ),      (10)

since η_0 = ρ and η_1 = 1 − ρ.
By inserting the function of Equation (2) into Equation (10) and remembering that Φ(0) = 0, we get:

2ΓW ρ² − (ΓW + 2Γ(V_T − I) − 1) ρ + Γ(V_T − I) = 0,      (11)

with solutions:

ρ± = [Γ(W + 2V_T − 2I) − 1 ± √Δ] / (4ΓW),      (12)

Δ = (Γ(W + 2V_T − 2I) − 1)² − 8Γ²W(V_T − I).      (13)

4.1.3. The Case with I = 0, V_T = 0: Continuous Transition

For V_T = I = 0, we have:

ρ(W) = (1/2) [(W − W_C)/W]^β = (1/2) [(Γ − Γ_C)/Γ]^β,      (14)

where the phase transition line is:

Γ_C = 1/W_C,      (15)

and the critical exponent is β = 1. This corresponds to a standard mean-field continuous (second order) absorbing state phase transition (Figure 2a,b with V_T = I = 0).

4.1.4. The Case with I < V_T: Discontinuous Transition

For I < V_T, we have discontinuous (first order) phase transitions when Δ = 0 (see Equation (13)):

(Γ_C W_C + 2Γ_C(V_T − I) − 1)² = 8Γ_C² W_C (V_T − I),      (16)

which, after some algebra, leads to the phase transition lines:

Γ_C W_C = (1 + √(2Γ_C(V_T − I)))²,      (17)

Γ_C = 1 / (√W_C − √(2(V_T − I)))²,      (18)

which have the correct limit, Equation (15), when V_T → 0, I → 0. The transition discontinuity is:

ρ_C = √((V_T − I)/(2W_C)) = (1/2) √(2Γ_C(V_T − I)) / (1 + √(2Γ_C(V_T − I))).      (19)
In Figure 2a, we show examples of the phase transitions, which occur when the unstable point ρ− collapses with the stable point ρ+. It is important to notice that, for any V_T > 0, the unstable point never touches the absorbing point ρ_0 = 0, so the zero solution is always stable. Only in the case V_T = 0 does the solution ρ_0 lose stability, and ρ+ is then the unique solution above the critical line Γ_C = 1/W_C. Figure 2b gives the phase diagram Γ × W for some values of V_T with I = 0; see Equation (18). Finally, we give the phase diagram in the variables ΓW versus Γ(V_T − I); see Figure 3 and Equation (17).
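For concreteness, the quadratic of Equation (11) can be evaluated directly; the following sketch (with illustrative parameter values) computes ρ± and checks the transition-line condition of Equations (17) and (19):

```python
import numpy as np

def rho_branches(Gamma, W, V_T, I=0.0):
    """Roots rho_-, rho_+ of Eq. (11) for mu = 0; NaN if only rho = 0 exists."""
    b = Gamma * (W + 2.0 * (V_T - I)) - 1.0
    Delta = b**2 - 8.0 * Gamma**2 * W * (V_T - I)      # Eq. (13)
    if Delta < 0.0:
        return np.nan, np.nan
    s = np.sqrt(Delta)
    return (b - s) / (4.0 * Gamma * W), (b + s) / (4.0 * Gamma * W)  # Eq. (12)

Gamma, V_T = 1.0, 0.2
W_C = (1.0 + np.sqrt(2.0 * Gamma * V_T))**2 / Gamma    # transition line, Eq. (17)
print(rho_branches(Gamma, 1.001 * W_C, V_T))           # just above the line: roots
print(np.sqrt(V_T / (2.0 * W_C)))                      # straddle rho_C ~ 0.194, Eq. (19)
```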

4.2. Self-Organized Supercriticality through Dynamic Gains with μ = 0, I = 0, V_T = 0

If we fine-tune the model to some point on the critical line Γ_C = 1/W, we can observe perfect neuronal avalanches with size distribution P_S(s) ∝ s^{−3/2} and duration distribution P_D(d) ∝ d^{−2} [28]. As expected, these are mean-field exponents, fully compatible with the experimental results [2,6].
This fine-tuning, however, is not biologically plausible. What we need is some homeostatic mechanism that makes the critical region an attractor of some self-organization dynamics. In the literature, a well-studied mechanism is dynamic synapses W_{ij}[t] [17,18,19]. For example, in discrete time [20,21]:

W_{ij}[t+1] = W_{ij}[t] + (1/τ)(A − W_{ij}[t]) − u W_{ij}[t] X_j[t],      (20)

where τ is a synaptic recovery time, A is an asymptotic value and u ∈ [0, 1] is the fraction of neurotransmitter vesicles depleted when the presynaptic neuron fires.
In Brochini et al. [28], we proposed a new self-organization mechanism based on dynamic neuronal gains Γ_i[t], while keeping the synapses W_{ij} fixed. The idea is to create a feedback loop based only on the local activity X_i[t] of the neuron, reducing the gain when the neuron fires and letting it recover slowly after that. The biological motivation for dynamic gains is spike frequency adaptation, a well-known phenomenon that depends on the decrease (and recovery) of the density of sodium ion channels at the axon initial segment (AIS) when the neuron fires [25,26].
The dynamics for the neuronal gains studied in [28] has a form similar to that used in [17,19,20,21] for synapses:

Γ_i[t+1] = Γ_i[t] + (1/τ)(A − Γ_i[t]) − u Γ_i[t] X_i[t].      (21)
The advantage of neuronal gains is that now we have only N dynamical equations (notice the term X_i[t], which refers to the activity of the postsynaptic neuron, not of the presynaptic one as in Equation (20)). For dynamic synapses, we need to simulate N(N − 1) equations for the fully-connected graph model and 10^4 N for a biologically-realistic network, which is computationally very costly for large N.
A problem with this dynamics, however, also present for dynamic synapses, is that we have a three-dimensional parameter space (τ ∈ [1, ∞], A ∈ [1/W, ∞], u ∈ [0, 1]) that must be fully explored to characterize the stationary value Γ*(τ, A, u, N). Here, we propose a new simplified dynamics with a single free parameter, the gain recovery time τ:

Γ_i[t+1] = Γ_i[t] + (1/τ) Γ_i[t] − Γ_i[t] X_i[t] = (1 + 1/τ − X_i[t]) Γ_i[t].      (22)
The self-organization mechanism can be seen in Figure 4. We therefore reduce our parametric study to determining the curves Γ*(1/τ, 1/N); see Figure 5a,b. The fluctuations of the Γ[t] time series after the transient, measured by the standard deviation (SD), diminish with increasing τ (Figure 5c) and probably go to zero for τ → ∞, in accord with Campos et al. [21]. However, in contrast to this idealized τ → ∞ limit, as discussed in [19,21], the fluctuations do not converge to zero for finite τ in the thermodynamic limit N → ∞ (see Figure 5d). This occurs because, for low τ, the adaptation mechanism produces oscillations of Γ[t] around the value Γ*(τ).
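The full self-organization loop is short enough to sketch. The following illustrative Python re-implementation (the original simulations were in Fortran90, and the exact update ordering here is our assumption) combines Equation (1) with μ = I = 0, the rational Φ with V_T = 0, and the gain dynamics of Equation (22):

```python
import numpy as np

def simulate_gains(N=10_000, tau=100.0, T=100_000, W=1.0, seed=1):
    """Self-organization of the mean gain toward Gamma* = Gamma_C/(1 - 2/tau), Eq. (25)."""
    rng = np.random.default_rng(seed)
    X = np.zeros(N, dtype=np.int8)
    X[rng.integers(N)] = 1
    Gamma = rng.uniform(0.0, 1.0, N)        # Gamma_i[0] ~ U[0, 1], as in Figure 4
    mean_gain = np.empty(T)
    for t in range(T):
        V = np.where(X == 1, 0.0, W * X.sum() / N)   # Eq. (1), mu = I = 0
        Gamma = (1.0 + 1.0 / tau - X) * Gamma        # Eq. (22)
        u = Gamma * V                                # rational Phi with V_T = 0
        X = (rng.random(N) < u / (1.0 + u)).astype(np.int8)
        if X.sum() == 0:                             # absorbing state: restart (Section 6)
            X[rng.integers(N)] = 1
        mean_gain[t] = Gamma.mean()
    return mean_gain

# With W = 1 (Gamma_C = 1) and tau = 100, the mean gain should settle near 1/(1 - 0.02) ~ 1.02.
print(simulate_gains()[-1000:].mean())
```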
We can do a mean-field analysis of Equation (22) to find the value Γ*(τ). Denote the average gain by Γ[t] = ⟨Γ_i[t]⟩. Averaging over the sites, we have:

Γ[t+1] = Γ[t] + (1/τ) Γ[t] − ρ[t] Γ[t],      (23)
since ρ[t] = ⟨X_i[t]⟩. In the stationary state, we have Γ[t+1] = Γ[t] = Γ* and ρ[t] = ρ*, so:

(1/τ) Γ* = ρ* Γ*.      (24)
One solution is Γ* = 0, but it is unstable; see Equation (22). Another solution is obtained by inserting Equation (14), ρ* = (Γ* − Γ_C)/(2Γ*), into Equation (24):

Γ* = Γ_C / (1 − 2/τ).      (25)
Notice that this holds only where Equation (14) is valid, that is, for Γ* ≥ 1/W. Furthermore, Equation (25) presumes that ρ* is a stable fixed point, which can fail for some interval of values of τ; see below.
A first order approximation leads to:

Γ* ≈ Γ_C (1 + 2/τ).      (26)
This mean-field calculation shows that, if τ → ∞, we obtain an exact SOC state, Γ* → Γ_C; for finite networks, a scaling τ = O(N^a) with an exponent a > 0 would be required, as done previously for dynamic synapses [17,19,20,21]. However, this scaling for τ cannot be justified biologically.
Therefore, biology requires a finite recovery time τ, which always leads to supercriticality; see Equation (25) or (26). This supercriticality is self-organized in the sense that it is achieved and maintained by the gain dynamics of Equation (22). We call this phenomenon self-organized supercriticality (SOSC).
The deviation from criticality can be small. For example, if τ = 1000 ms (assuming that one time step equals 1 ms in the model):

Γ* ≈ 1.002 Γ_C.      (27)
Even a more conservative value, τ = 100 ms, gives Γ* ≈ 1.02 Γ_C. Although not perfect SOC [5], this result is sufficient to explain a power law with exponent −3/2 for small (s < 1000) neuronal avalanches plus a supercritical bump (Figure 6).
By using Equation (25) in Equation (14), we also obtain:

ρ* = 1/τ,      (28)

showing that the network presents supercritical activity for any finite τ. This result, however, is valid only in the infinite size limit. For finite networks, fluctuations interrupt this constant ρ* = 1/τ activity, leading the system to the absorbing state and defining the end of the avalanches.
Simulations reveal that these fixed points (Γ*, ρ*) correspond only to mean values around which both Γ[t] and ρ[t] oscillate; see Figure 7a–c. These global oscillations are unexpected, since the model was devised to produce avalanches, not oscillations. The finite-size fluctuations and oscillations drive the network to the absorbing zero state, generating the avalanches. What we see in the histogram of Figure 6 is a combination of power law avalanches in some range plus large events (superavalanches, or Dragon-kings, due to the Γ[t] oscillations).

5. Discussion

We examined a network of stochastic spiking neurons with a rational firing function Φ that had not been studied previously. We obtained numeric and analytic results that show the presence of continuous and discontinuous absorbing phase transitions. Classic SOC is possible only at the continuous transition, which means that we need to use a zero firing threshold (V_T = 0). In some sense, this is a kind of fine-tuning of the Φ function, but not the usual one, where the synaptic strength W and the neuronal gain Γ are the main control parameters.
The presence of a well-behaved absorbing phase transition in the directed percolation class enables the use of a homeostatic mechanism for the neuronal gains that tunes the network to the critical region. The dynamics of the gains is biologically plausible and can be related to a decrease and recovery, due to the neuron's activity, of the firing probability at the axon initial segment (AIS) [24]. Our dynamic Γ_i[t] mimics the well-known phenomenon of spike frequency adaptation [25,26] and is a one-parameter simplification of the three-parameter dynamics studied by us in [28].
We observe that this gain dynamics is equivalent to approaching the critical line with fixed W and variable Γ[t], that is, performing vertical movements in Figure 2b. The previous literature approaches the critical point W_C through dynamic [17,18,19,20,21] or Hebbian [11,12,13,14,15,16] synapses. This corresponds to fixing Γ and allowing changes in W_{ij}[t] along the horizontal axis; see Figure 2b.
The two homeostatic strategies are similar, but we stress that we have only N equations for the gains Γ_i[t] instead of N(N − 1) equations for the synapses W_{ij}[t], so our approach implies a huge computational advantage. Indeed, previous works such as [14,17] reported system sizes in the range N = 1000–4000, to be compared with our maximal size of N = 160,000.
We found that the fixed point Γ* predicted by the mean-field calculation is not exactly critical, but instead supercritical, and that the distance from criticality depends on the gain recovery time τ. Previous claims of achieving exact SOC by using dynamic synapses are based on the erroneous assumption that we can use a synaptic recovery time τ ∝ N^a [17,19,20,21]. If we instead use a finite τ, which is not only plausible but biologically necessary, we obtain SOSC, not SOC [21,28]. Nevertheless, we found that for large but plausible values of τ, the system is only slightly supercritical and presents power law avalanches (plus small supercritical bumps) compatible with the biological data.
SOSC enables us to explore supercritical networks that are robust, that is, the stationary state, with or without oscillations, is reached from any initial condition and recovers from perturbations. Therefore, the question now is: are there self-organized supercritical (SOSC) oscillating neuronal networks in the brain?
The first evidence would be a supercritical bump in the P_S(s) distributions. Indeed, we found several papers where such bumps seem to be present; see, for example, the first plot in Figure 2 of Friedman et al. [39] and Figure 4 of Scott et al. [40]. It seems to us that, since the main paradigm for neuronal avalanches is exact SOC, with pure power laws, it is possible that researchers report what is expected and do not comment on or emphasize small supercritical bumps, even if they are present in their published data. Therefore, we suggest that experimental researchers reevaluate their data in search of small supercritical bumps. The presence of supercritical bumps can also be masked by the phenomenon of subsampling [41,42,43], so the analysis must be done with some care.
Supercriticality, in the form of the so-called Dragon-king avalanches [14,44,45], has been conjectured to be at the basis of hyperexcitability in epilepsy [6,46,47]. Furthermore, networks can be put artificially in the hyperexcitable state and show bimodal distributions P_S(s) with large supercritical bumps [48]. The SOSC phenomenon seems to be a natural explanation for such hyperexcitability. In [48], the supercritical bumps are fitted by a supercritical branching process, but are not explained in mechanistic terms as, in our case, due to different values of the biophysical recovery time τ.
The unexpected oscillations of Γ[t] around Γ* have amplitudes that depend on τ and vanish for large τ (Figure 5d and Figure 7a). These oscillations in Γ[t] induce oscillations in the activity ρ[t] (Figure 7b,c). In our model, the discrete time interval Δt is postulated to describe the width of a spike, that is, Δt ≈ 1–2 ms. From our simulation data, with these values of Δt, we obtain frequencies f ≈ 0.5–16 Hz, depending on the value of τ.
Interestingly, this frequency range includes the Delta, Theta and Alpha rhythms. The coexistence of Theta waves and neuronal avalanches has been observed experimentally [49]. Furthermore, some theoretical work has recently discussed the coexistence of oscillations and avalanches [50].
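As a cross-check of such frequency estimates, the dominant rhythm can be read off an activity time series with a discrete Fourier transform. The sketch below is illustrative only: it assumes a series rho[t] sampled at Δt = 1 ms (e.g., produced by a simulation like the one sketched in Section 4.2), and the 8 Hz test signal is synthetic:

```python
import numpy as np

def dominant_frequency(rho, dt_ms=1.0):
    """Peak of the power spectrum of rho[t], returned in Hz (dt in milliseconds)."""
    rho = rho - rho.mean()                    # remove the DC component
    power = np.abs(np.fft.rfft(rho))**2
    freqs = np.fft.rfftfreq(rho.size, d=dt_ms * 1e-3)   # frequency axis in Hz
    return freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin

# Example: a 10-second series sampled at 1 ms should recover an 8 Hz oscillation.
t = np.arange(10_000)
rng = np.random.default_rng(0)
rho = 0.01 + 0.005 * np.sin(2 * np.pi * 8e-3 * t) + 0.001 * rng.standard_normal(t.size)
print(dominant_frequency(rho))               # ~ 8.0
```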
The presence of oscillations can mean that the fixed point (Γ*, ρ*) is unstable below some bifurcation point τ_b (even for N → ∞) or that it is stable but has a very small negative Lyapunov exponent, such that finite-size fluctuations drive Γ[t], ρ[t] away from equilibrium, producing excursions (oscillations) in the (Γ, ρ) plane. At this point, without further study, we cannot decide which is the correct scenario. Notice that similar oscillations of W[t] were also observed for dynamic synapses [17,19,20], although those authors did not study this phenomenon in detail.
Finally, from a conceptual point of view, the subcriticality observed in some of our simulations (see Figure 5b,d) is less important than the supercriticality (SOSC), because it is a finite-size effect for small N. Our largest networks have N = 160,000, which is still small compared to real biological networks, which have at least one or two orders of magnitude more neurons.
Nevertheless, there are claims in the literature that subcritical states are present in certain experimental conditions [51,52,53]. How can we reconcile these findings? Here, we offer an answer based on the findings of Priesemann et al. [53]. These authors found that, in order to explain in vivo experiments with awake animals, they need three ingredients: subsampling [41], increased input (violating the standard separation of scales of SOC models) and slight subcriticality of the networks. If we increase the input to our network, for example by a Poisson process on the variable I[t], the overall result is that the homeostatic mechanism turns our network subcritical. This occurs because increased forced firing implies an overall depression of the gains Γ_i[t] in Equation (22), so that a new equilibrium is reached with Γ* < Γ_C.
Then, under external input, as in awake animals, our adaptive networks turn out to be subcritical, returning to criticality or supercriticality for spontaneous activity without external input. We have obtained preliminary simulation results confirming this scenario, and a comprehensive study of the effect of external input will be presented in a future paper.

6. Materials and Methods

All numerical calculations were done using MATLAB. Simulation codes were written in Fortran90.
In the study of neuronal avalanches, we simulate the evolution of finite networks of N neurons, with uniform synaptic strengths W_{ij} = W (W_{ii} = 0) and the rational Φ(V) with V_T = 0. The avalanche statistics were obtained after the transient of the neuronal gains' self-organization. A silent instant, when X_i[t] = 0 for all i, defines the end of an avalanche. We start a new avalanche by forcing the firing of a single random neuron i, setting V_i[t+1] to a value high enough that the neuron spikes.
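In this spirit, here is a minimal avalanche-bookkeeping sketch (again an illustrative Python version, not the authors' Fortran90 code; for simplicity, we force the restart spike directly in X rather than through a high V_i, which is equivalent for this Φ):

```python
import numpy as np

def avalanche_sizes(N=10_000, tau=100.0, T=1_000_000, W=1.0, seed=2, transient=100_000):
    """Avalanche sizes: total number of spikes between consecutive silent instants."""
    rng = np.random.default_rng(seed)
    X = np.zeros(N, dtype=np.int8)
    X[0] = 1
    Gamma = rng.uniform(0.0, 1.0, N)
    sizes, current = [], 1                           # count the seed spike
    for t in range(T):
        V = np.where(X == 1, 0.0, W * X.sum() / N)   # Eq. (1), mu = I = 0, V_T = 0
        Gamma = (1.0 + 1.0 / tau - X) * Gamma        # Eq. (22)
        u = Gamma * V
        X = (rng.random(N) < u / (1.0 + u)).astype(np.int8)
        n = int(X.sum())
        if n == 0:                                   # silent instant: avalanche ends
            if t > transient and current > 0:
                sizes.append(current)
            X[rng.integers(N)] = 1                   # start a new avalanche
            current = 1
        else:
            current += n
    return np.array(sizes)                           # histogram with log bins, as in Figure 6
```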

7. Conclusions

We have shown in this paper that dynamic neuronal gains lead naturally to self-organized supercriticality (SOSC) and not SOC. The same occurs with dynamic synapses [21]. Therefore, we propose that neuronal avalanches are related to SOSC instead of exact SOC. This opens an opportunity for the reevaluation of the accumulated experimental data.
SOSC suggests that neuronal tissues could be more prone to Dragon-king avalanches [44] and hyperexcitability than one would expect from simple power laws. This prediction of larger avalanches and increased instability due to supercriticality may be important for studies of epilepsy [14].
Finally, the emergence of oscillations coexisting with neuronal avalanches seems to unify in a single formalism two theoretical approaches and two different research communities: those that emphasize critical behavior and avalanches and those that emphasize oscillations and synchronized activity.
In future work, we intend to study in more detail the mechanism that generates these oscillations and how to relate them to EEG data. In order to simulate more biological networks, we also intend to study the cases V_T > 0, I > 0 and μ > 0.

Acknowledgments

This article was produced as part of the activities of São Paulo Research Foundation (FAPESP) Research, Innovation and Dissemination Center (RIDC) for Neuromathematics (Grant #2013/07699-0, S.Paulo Research Foundation). The RIDC for Neuromathematics covered publication costs. Ariadne A. Costa also thanks Grants #2016/00430-3 and #2016/20945-8, São Paulo Research Foundation (FAPESP). Ludmila Brochini also acknowledges Grant #2016/24676-1, São Paulo Research Foundation (FAPESP). Osame Kinouchi also received support from Center for Natural and Artificial Information Processing Systems at the University of São Paulo (CNAIPS-USP).

Author Contributions

Ariadne A. Costa performed the network simulations and prepared all of the figures. Osame Kinouchi and Ludmila Brochini performed analytic and numerical calculations. All authors analyzed the results and wrote the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Phase Transition for μ > 0, V_T = 0

We want to derive the critical point for the leakage case μ > 0 and also to obtain approximate curves for the activity ρ near the critical region. We start from the exact formulas (supposing Φ(0) = 0):
ρ = Σ_{k=1}^{∞} η_k Φ(U_k),      (A1)
η_k = η_{k−1} (1 − Φ(U_{k−1})),      (A2)
U_0 = 0,      (A3)
U_k = μ U_{k−1} + Wρ      (A4)
    = Wρ Σ_{j=0}^{k−1} μ^j = Wρ (1 − μ^k)/(1 − μ).      (A5)
Then, we use the recurrence relations of Equations (A2) and (A4) in Equation (A1):

ρ = Σ_{k=1}^{∞} η_{k−1} (1 − Φ(U_{k−1})) Φ(μ U_{k−1} + Wρ).      (A6)
We notice that, due to Equation (A4), all the terms U_k are small in the critical region, where ρ → 0. So, we approximate the rational function Φ(U) for small U:

Φ(U_k) = Γ U_k / (1 + Γ U_k) ≈ Γ U_k − Γ² U_k².      (A7)
Inserting this in Equation (A6) and using Equation (A4), we get:

ρ = Σ_{k=1}^{∞} η_{k−1} (1 − Γ U_{k−1} + Γ² U_{k−1}²) (Γμ U_{k−1} + ΓWρ − Γ²(μ U_{k−1} + Wρ)²).      (A8)
Since each U_k is proportional to ρ, from now on we keep only terms proportional to ρ and ρ². After recombining the terms up to order U_{k−1}² according to Equation (A7), we obtain:
ρ ≈ ΓWρ (1 − ΓWρ) Σ_{k=1}^{∞} η_{k−1} + (μ − 2ΓμWρ − ΓWρ + Γ²W²ρ²) Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) − μ²Γ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) U_{k−1}.      (A9)
Notice that Σ_{k=1}^{∞} η_{k−1} = Σ_{k=0}^{∞} η_k = 1 by normalization. Using this fact and also Equation (A1) (with Φ(U_0) = 0), after some rearrangement we obtain:
ρ ≈ ρ (ΓW + μ) − ρ² (Γ²W² + 2ΓWμ + ΓW) − μ²Γ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) U_{k−1}.      (A10)
With respect to the last term, we use Equations (A1) and (A5) to obtain:

μ²Γ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) U_{k−1} = μ²Γ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) Wρ (1 − μ^{k−1})/(1 − μ)
= [μ²ΓW/(1 − μ)] (ρ² − ρ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) μ^{k−1}) ≈ μ²ΓWρ²/(1 − μ),      (A11)
which is valid for μ < 1. Here, the sum ρ Σ_{k=1}^{∞} η_{k−1} Φ(U_{k−1}) μ^{k−1} is composed of terms of order ρ³ and can be dismissed. Using this approximation in Equation (A10), we obtain two solutions. One is the absorbing state ρ = 0. The other solution is:
ρ ≈ [1 / (1 + ΓW + 2μ + μ²/(1 − μ))] (Γ − Γ_C)/Γ,      (A12)
where we considered Γ_C = (1 − μ)/W. Moreover, in the critical region, we can approximate ΓW ≈ 1 − μ, leading to:
ρ ≈ [1 / (2 + μ + μ²/(1 − μ))] (Γ − Γ_C)/Γ.      (A13)
We compare this analytic approximation with the numerical solutions for ρ(ΓW, μ) near the critical point in Figure 1c.
A similar calculation for the monomial function Φ(V) = ΓV Θ(V) Θ(1 − ΓV) + Θ(ΓV − 1) gives:

ρ ≈ (1 − μ) (Γ − Γ_C)/Γ,      (A14)
with the same critical line Γ_C = (1 − μ)/W. The monomial function with μ > 0 was studied numerically in Brochini et al. [28], but this analytic derivation of Γ_C is new.

References

1. Herz, A.V.; Hopfield, J.J. Earthquake cycles and neural reverberations: Collective oscillations in systems with pulse-coupled threshold elements. Phys. Rev. Lett. 1995, 75, 1222.
2. Beggs, J.M.; Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 2003, 23, 11167–11177.
3. Kinouchi, O.; Copelli, M. Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2006, 2, 348–351.
4. Chialvo, D.R. Emergent complex neural dynamics. Nat. Phys. 2010, 6, 744–750.
5. Marković, D.; Gros, C. Power laws and self-organized criticality in theory and nature. Phys. Rep. 2014, 536, 41–74.
6. Hesse, J.; Gross, T. Self-organized criticality as a fundamental property of neural systems. Front. Syst. Neurosci. 2014, 8.
7. Cocchi, L.; Gollo, L.L.; Zalesky, A.; Breakspear, M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Prog. Neurobiol. 2017, in press.
8. Beggs, J.M. The criticality hypothesis: How local cortical networks might optimize information processing. Philos. Trans. R. Soc. A 2008, 366, 329–343.
9. Shew, W.L.; Yang, H.; Petermann, T.; Roy, R.; Plenz, D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J. Neurosci. 2009, 29, 15595–15600.
10. Massobrio, P.; de Arcangelis, L.; Pasquale, V.; Jensen, H.J.; Plenz, D. Criticality as a signature of healthy neural systems. Front. Syst. Neurosci. 2015, 9.
11. De Arcangelis, L.; Perrone-Capano, C.; Herrmann, H.J. Self-organized criticality model for brain plasticity. Phys. Rev. Lett. 2006, 96, 028107.
12. Pellegrini, G.L.; de Arcangelis, L.; Herrmann, H.J.; Perrone-Capano, C. Activity-dependent neural network model on scale-free networks. Phys. Rev. E 2007, 76, 016107.
13. De Arcangelis, L.; Herrmann, H.J. Learning as a phenomenon occurring in a critical state. Proc. Natl. Acad. Sci. USA 2010, 107, 3977–3981.
14. De Arcangelis, L. Are dragon-king neuronal avalanches dungeons for self-organized brain activity? Eur. Phys. J. Spec. Top. 2012, 205, 243–257.
15. De Arcangelis, L.; Herrmann, H. Activity-Dependent Neuronal Model on Complex Networks. Front. Physiol. 2012, 3.
16. Van Kessenich, L.M.; de Arcangelis, L.; Herrmann, H. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches. Sci. Rep. 2016, 6, 32071.
17. Levina, A.; Herrmann, J.M.; Geisel, T. Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 2007, 3, 857–860.
18. Levina, A.; Herrmann, J.M.; Geisel, T. Phase transitions towards criticality in a neural system with adaptive interactions. Phys. Rev. Lett. 2009, 102, 118110.
19. Bonachela, J.A.; De Franciscis, S.; Torres, J.J.; Muñoz, M.A. Self-organization without conservation: Are neuronal avalanches generically critical? J. Stat. Mech. Theory Exp. 2010, 2010, P02015.
20. Costa, A.A.; Copelli, M.; Kinouchi, O. Can dynamical synapses produce true self-organized criticality? J. Stat. Mech. Theory Exp. 2015, 2015, P06004.
21. Campos, J.G.F.; Costa, A.A.; Copelli, M.; Kinouchi, O. Correlations induced by depressing synapses in critically self-organized networks with quenched dynamics. Phys. Rev. E 2017, 95, 042303.
22. Tsodyks, M.; Markram, H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA 1997, 94, 719–723.
23. Tsodyks, M.; Pawelzik, K.; Markram, H. Neural networks with dynamic synapses. Neural Comput. 1998, 10, 821–835.
24. Kole, M.H.; Stuart, G.J. Signal processing in the axon initial segment. Neuron 2012, 73, 235–247.
25. Ermentrout, B.; Pascal, M.; Gutkin, B. The effects of spike frequency adaptation and negative feedback on the synchronization of neural oscillators. Neural Comput. 2001, 13, 1285–1310.
26. Benda, J.; Herz, A.V. A universal model for spike-frequency adaptation. Neural Comput. 2003, 15, 2523–2564.
27. Buonocore, A.; Caputo, L.; Pirozzi, E.; Carfora, M.F. A leaky integrate-and-fire model with adaptation for the generation of a spike train. Math. Biosci. Eng. 2016, 13, 483–493.
28. Brochini, L.; Costa, A.A.; Abadi, M.; Roque, A.C.; Stolfi, J.; Kinouchi, O. Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Sci. Rep. 2016, 6, 35831.
29. Tang, Y.; Nyengaard, J.R.; De Groot, D.M.; Gundersen, H.J.G. Total regional and global number of synapses in the human brain neocortex. Synapse 2001, 41, 258–273.
30. Gerstner, W.; van Hemmen, J.L. Associative memory in a network of ’spiking’ neurons. Netw. Comput. Neural Syst. 1992, 3, 139–164.
31. Gerstner, W.; Kistler, W.M. Spiking Neuron Models: Single Neurons, Populations, Plasticity; Cambridge University Press: Cambridge, UK, 2002.
32. Galves, A.; Löcherbach, E. Infinite Systems of Interacting Chains with Memory of Variable Length—A Stochastic Model for Biological Neural Nets. J. Stat. Phys. 2013, 151, 896–921.
33. Larremore, D.B.; Shew, W.L.; Ott, E.; Sorrentino, F.; Restrepo, J.G. Inhibition causes ceaseless dynamics in networks of excitable nodes. Phys. Rev. Lett. 2014, 112, 138103.
34. De Masi, A.; Galves, A.; Löcherbach, E.; Presutti, E. Hydrodynamic limit for interacting neurons. J. Stat. Phys. 2015, 158, 866–902.
35. Duarte, A.; Ost, G. A model for neural activity in the absence of external stimuli. Markov Process. Relat. Fields 2016, 22, 37–52.
36. Duarte, A.; Ost, G.; Rodríguez, A.A. Hydrodynamic Limit for Spatially Structured Interacting Neurons. J. Stat. Phys. 2015, 161, 1163–1202.
37. Galves, A.; Löcherbach, E. Modeling networks of spiking neurons as interacting processes with memory of variable length. J. Soc. Fr. Stat. 2016, 157, 17–32.
38. Hinrichsen, H. Non-equilibrium critical phenomena and phase transitions into absorbing states. Adv. Phys. 2000, 49, 815–958.
39. Friedman, N.; Ito, S.; Brinkman, B.A.; Shimono, M.; DeVille, R.L.; Dahmen, K.A.; Beggs, J.M.; Butler, T.C. Universal critical dynamics in high resolution neuronal avalanche data. Phys. Rev. Lett. 2012, 108, 208102.
40. Scott, G.; Fagerholm, E.D.; Mutoh, H.; Leech, R.; Sharp, D.J.; Shew, W.L.; Knöpfel, T. Voltage imaging of waking mouse cortex reveals emergence of critical neuronal dynamics. J. Neurosci. 2014, 34, 16611–16620.
41. Priesemann, V.; Munk, M.H.; Wibral, M. Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neurosci. 2009, 10.
42. Girardi-Schappo, M.; Tragtenberg, M.; Kinouchi, O. A brief history of excitable map-based neurons and neural networks. J. Neurosci. Methods 2013, 220, 116–130.
43. Levina, A.; Priesemann, V. Subsampling scaling. Nat. Commun. 2017, 8, 15140.
44. Sornette, D.; Ouillon, G. Dragon-kings: Mechanisms, statistical methods and empirical evidence. Eur. Phys. J. Spec. Top. 2012, 205, 1–26.
45. Lin, Y.; Burghardt, K.; Rohden, M.; Noël, P.A.; D’Souza, R.M. The Self-Organization of Dragon Kings. arXiv 2017, arXiv:1705.10831.
46. Hobbs, J.P.; Smith, J.L.; Beggs, J.M. Aberrant neuronal avalanches in cortical tissue removed from juvenile epilepsy patients. J. Clin. Neurophysiol. 2010, 27, 380–386.
47. Meisel, C.; Storch, A.; Hallmeyer-Elgner, S.; Bullmore, E.; Gross, T. Failure of adaptive self-organized criticality during epileptic seizure attacks. PLoS Comput. Biol. 2012, 8, e1002312.
48. Haldeman, C.; Beggs, J.M. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys. Rev. Lett. 2005, 94, 058101.
49. Gireesh, E.D.; Plenz, D. Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3. Proc. Natl. Acad. Sci. USA 2008, 105, 7576–7581.
50. Poil, S.S.; Hardstone, R.; Mansvelder, H.D.; Linkenkaer-Hansen, K. Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J. Neurosci. 2012, 32, 9817–9823.
51. Bedard, C.; Kroeger, H.; Destexhe, A. Does the 1/f frequency scaling of brain signals reflect self-organized critical states? Phys. Rev. Lett. 2006, 97, 118102.
52. Tetzlaff, C.; Okujeni, S.; Egert, U.; Wörgötter, F.; Butz, M. Self-organized criticality in developing neuronal networks. PLoS Comput. Biol. 2010, 6, e1001013.
53. Priesemann, V.; Wibral, M.; Valderrama, M.; Pröpper, R.; Le Van Quyen, M.; Geisel, T.; Triesch, J.; Nikolić, D.; Munk, M.H. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Front. Syst. Neurosci. 2014, 8.
Figure 1. Firing densities and phase diagram for V_T = 0, I = 0. (a) Examples of the rational firing function Φ(V) for Γ = 1, V_T = 0.0 and Γ = 1, V_T = 0.5. (b) Firing density ρ(ΓW) for μ = 0.0, 0.5, 0.9. The absorbing state ρ_0 = 0 loses stability for ΓW > Γ_C W_C = 1 − μ. (c) Comparison, near the critical region, between the order parameter ρ obtained numerically from Equation (8) (points) and from the analytic approximation of Equation (9) (lines).
Figure 2. Phase transitions for the μ = 0 case as a function of Γ, W and V_T with I = 0: (a) The solid lines represent the stable fixed points ρ+(W), and the dashed lines represent the unstable fixed points ρ−(W), for thresholds V_T = 0.0, 0.1, 0.2 and 0.5. The discontinuity ρ_C given by Equation (19) goes to zero for V_T → 0. (b) Phase diagram Γ × W defined by Equation (17). From top to bottom, V_T = 0.0, 0.1, 0.2 and 0.5. We have ρ+ > 0 above the phase transition lines. For V_T > 0, all of the transitions are discontinuous.
Figure 3. Phase diagram for the μ = 0 case as a function of ΓW and Γ(V_T − I): The transition line, Equation (17), is Γ_C W_C = (1 + √(2Γ_C(V_T − I)))². This line is a first order phase transition, which terminates at the second order critical point Γ_C W_C = 1 when V_T − I = 0.
Figure 4. Self-organization with dynamic neuronal gains: Simulations of a network of N = 160,000 neurons with fixed W_{ij} = W = 1 and V_T = 0. The dynamic gains Γ_i[t] start with Γ_i[0] uniformly distributed in [0, Γ_max = 1.0], which defines the initial condition Γ[0] ≡ (1/N) Σ_i Γ_i[0] ≈ Γ_max/2 = 0.5. The plot shows the self-organization of the average gain Γ[t] over time, for different τ. The horizontal dashed line marks the value Γ_C = 1/W = 1.
Figure 5. Self-organized value Γ*(τ, N) obtained with dynamic gains (W_{ij} = W = 1): (a) Curves Γ*(1/τ) for several values of N. (b) Curves Γ*(1/N) for several values of τ. (c) Standard deviation of the Γ[t] time series after the transient, as a function of 1/τ. (d) Standard deviation of the Γ[t] time series after the transient, as a function of 1/N.
Figure 6. Avalanche statistics for the model with dynamic neuronal gains: Probability histogram of avalanche sizes (P_S(s)), with logarithmic bins, for several τ with N = 160,000. Notice the self-organized supercriticality (SOSC) phenomenon and the Dragon-king avalanches for small τ.
Figure 7. Dynamic (a) gain Γ[t] and (b) activity ρ[t] for several values of τ. (c) Γ[t] and ρ[t] for τ = 320. In these figures, we consider only the last 500 time steps of the simulation (from a time series of five million time steps) in a system with N = 160,000. The large events (oscillations) correspond to Dragon-king avalanches.
