Article

Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains

INRIA, 2004 route de lucioles, 06560, Sophia-Antipolis, France
* Authors to whom correspondence should be addressed.
Entropy 2014, 16(4), 2244-2277; https://doi.org/10.3390/e16042244
Submission received: 19 February 2014 / Revised: 28 March 2014 / Accepted: 8 April 2014 / Published: 22 April 2014
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Abstract

We propose a numerical method to learn maximum entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows one to properly handle memory effects in spike statistics, for large-sized neural networks.

1. Introduction

With the evolution of multi-electrode array (MEA) acquisition techniques, it is currently possible to simultaneously record the activity of a few hundred up to a few thousand neurons [1]. Stevenson et al. [2] reported that the number of recorded neurons doubles approximately every eight years. However, beyond the mere recording of an increasing number of neurons, there is a need to extract relevant information from the data in order to understand the underlying dynamics of the studied network, how it responds to stimuli and how the spike train response encodes these stimuli. In the realm of spike train analysis, this means having efficient spike sorting techniques [3–6], but also efficient methods to analyze spike statistics. The second aspect requires using canonical statistical models, whose parameters have to be tuned (“learned”) from data.

The maximum entropy method (MaxEnt) offers a way of selecting canonical statistical models from first principles. Having its roots in statistical physics, MaxEnt consists of fixing a set of constraints, determined as the empirical averages of features measured from the spiking activity. Maximizing the statistical entropy given those constraints provides a unique probability, called a Gibbs distribution, which best approximates the data statistics in the following sense: among all probability distributions that match the constraints, it is the one with the smallest Kullback-Leibler divergence with the data ([7]). Equivalently, it satisfies the constraints without adding additional assumptions on the statistics [8].

Most studies have focused on properly describing the statistics of spatially-synchronized patterns of neuronal activity without considering time-dependent patterns and memory effects. In this setting, pairwise models [9,10] or extensions with triplet and quadruplet interactions [11–13] were claimed to correctly fit ≈ 90 to 99% of the information. However, considering the capacity of these models to correctly reproduce spatio-temporal spike patterns, the performance drops off dramatically, especially in the cortex [14,15] or in the retina [16].

Taking into account spatio-temporal patterns requires introducing memory in the statistics, described as a Markov process. MaxEnt extends easily to this case (see Section 2.2 and the references therein for a short description), producing Gibbs distributions in the spatio-temporal domain. Moreover, rigorous mathematical methods are available to fit the parameters of the Gibbs distribution [16]. However, the main drawback of these methods is the huge computer memory they require, preventing their application to large-scale neural networks. Considering a model with memory depth D (namely, the probability of a spike pattern at time t depends on the spike activity in the interval [t − D, t − 1]), there are $2^{N(D+1)}$ possible patterns. The method developed in [16] requires one to handle a matrix of size $2^{N(D+1)} \times 2^{N(D+1)}$. Therefore, it becomes intractable for N(D + 1) > 20.

In this paper, we propose an alternative method to fit the parameters of a spatio-temporal Gibbs distribution with larger values of the product N(D + 1). We have been able to go up to N(D + 1) ∼ 120 on a small cluster (64 AMD Opteron™ processors, 2,300 MHz). The method is based on [17] and [18], which proposed the estimation of parameters in spatial Gibbs distributions. The extension to the spatio-temporal domain is not straightforward, as we show, but it carries over at the price of some modifications. Combined with the parallel Monte Carlo computing developed in [19], this provides a numerical method allowing one to handle Markovian spike statistics with spatio-temporal constraints.

The paper is organized as follows. In Section 2, we recall the theoretical background for spike trains with a Gibbs distribution. We discuss both the spatial and spatio-temporal case. In Section 3, we explain the method to fit the parameters of MaxEnt distributions. As we show mathematically, the convex criterion used by [17] still applies for spatio-temporal constraints. However, the method used by [18] to avoid recomputing the Gibbs distribution at each parameter change cannot be directly used and has to be adapted using a linear response scheme. In Section 4, we show benchmarks evaluating the performance of this method and discuss the computational obstacles that we encountered. We made tests with both synthetic and real data. Synthetic data were generated from known probability distributions using a Monte Carlo method. Real data correspond to spike trains obtained from retinal ganglion cell activity (courtesy of M.J. Berry and O. Marre). The method shows a satisfying performance in the case of synthetic data. Real data analysis is not systematic, but is instead used as an illustration and comparison with the paper of Schneidman et al. 2006 ([9]). As can be seen in the examples, the performance on real data, although satisfying, is affected by the large number of parameters in the distribution, a consequence of the choice to work with canonical models (Ising, pairwise with memory). This effect is presumably not related to our method, but to a standard problem in statistics.

Some of our notations might be unfamiliar to some readers. Therefore, we have added a list of symbols at the end of the paper.

2. Gibbs Distributions in the Spatio-Temporal Domain

2.1. Spike Trains and Observables

2.1.1. Spike Trains

We consider the joint activity of N neurons, characterized by the emission of action potentials (“spikes”). We assume that there is a minimal time scale, δ, set to one without loss of generality, such that a neuron can fire at most one spike within a time window of size δ. This provides a time discretization labeled with an integer time, n. Each neuron's activity is then characterized by a binary variable. We use the notation ω to differentiate our binary variables ∈ {0, 1} from the notation σ or S used for “spin” variables ∈ {−1, 1}. We set $\omega_k(n) = 1$ if neuron k fires at time n, and $\omega_k(n) = 0$ otherwise.

The state of the entire network in time bin n is thus described by a vector $\omega(n) \stackrel{\rm def}{=} [\omega_k(n)]_{k=1}^{N}$, called a spiking pattern. A spike block is a consecutive sequence of spike patterns, $\omega_{n_1}^{n_2}$, representing the activity of the whole network between two instants, $n_1$ and $n_2$:

$$\omega_{n_1}^{n_2} = \{\omega(n)\}_{n_1 \le n \le n_2}.$$

The time-range (or “range”) of a block, $\omega_{n_1}^{n_2}$, is $n_2 - n_1 + 1$, the number of time steps from $n_1$ to $n_2$. Here is an example of a spike block with N = 4 neurons and range R = 3:

$$\begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$

A spike train or raster is a spike block, $\omega_0^T$, from some initial time, zero, to some final time, T. To alleviate the notations, we simply write ω for a spike train. We denote by Ω the set of spike trains.

2.1.2. Observables

An observable is a function, $\mathcal{O}$, which associates a real number, $\mathcal{O}(\omega)$, to a spike train. In the realm of statistical physics, common examples of observables are the energy or the number of particles (where ω would correspond, e.g., to a spin configuration). In the context of neural networks, examples are the number of neurons firing at a given time, n, $\sum_{k=1}^{N} \omega_k(n)$, or the function $\omega_{k_1}(n_1)\,\omega_{k_2}(n_2)$, which is one if neuron $k_1$ fires at time $n_1$ and neuron $k_2$ fires at time $n_2$, and is zero otherwise.

Typically, an observable does not depend on the full raster, but only on a sub-block of it. The time-range (or “range”) of an observable is the minimal integer R > 0 such that, for any raster ω, $\mathcal{O}(\omega) = \mathcal{O}(\omega_0^{R-1})$. The range of the observable $\sum_{k=1}^{N}\omega_k(n)$ is one; the range of $\omega_{k_1}(n_1)\,\omega_{k_2}(n_2)$ is $n_2 - n_1 + 1$. From now on, we restrict to observables of range R, fixed and finite. We set D = R − 1.

An observable is time-translation invariant if, for any time n > 0, we have $\mathcal{O}(\omega_n^{n+D}) = \mathcal{O}(\omega_0^{D})$ whenever $\omega_n^{n+D} = \omega_0^{D}$. The two examples above are time-translation invariant. The observable $\lambda(n_1)\,\omega_{k_1}(n_1)\,\omega_{k_2}(n_2)$, where λ is a real function of time, is not time-translation invariant. Basically, time-translation invariance means that $\mathcal{O}$ does not depend explicitly on time. We focus on such observables from now on.

2.1.3. Monomials

Prominent examples of time-translation invariant observables with range R are products of the form:

$$m_{p_1,\dots,p_r}(\omega) \stackrel{\rm def}{=} \prod_{u=1}^{r} \omega_{k_u}(n_u), \qquad (1)$$

where the $p_u$, $u = 1 \dots r$, are pairs of spike-time events $(k_u, n_u)$, $k_u = 1 \dots N$ being the neuron index and $n_u = 0 \dots D$ being the time index. Such an observable, called a monomial, therefore takes values in {0, 1} and is one if and only if $\omega_{k_u}(n_u) = 1$, $u = 1 \dots r$ (neuron $k_1$ fires at time $n_1$, . . .; neuron $k_r$ fires at time $n_r$). A monomial is therefore a binary observable that represents the logical AND operator applied to a prescribed set of neuron spike events.

We allow the extension of the definition (1) to the case where the set of pairs $p_1, \dots, p_r$ is empty, and we set $m_\emptyset = 1$. For a number N of neurons and a time range R, there are thus $2^{NR}$ such possible products. Any observable of range R can be represented as a linear combination of products (1). Monomials therefore constitute a canonical basis for the representation of observables. To alleviate notations, instead of labeling monomials by a list of pairs, as in (1), we shall label them by an integer index, l.
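To make this concrete, the following minimal sketch (Python; the block and the list of pairs are illustrative assumptions, not taken from the paper) evaluates a monomial on a spike block stored as an N × R binary array, using 0-based indices for neurons and times.

```python
import numpy as np

def monomial(block, pairs):
    """Value of the monomial defined by `pairs` on a spike block.

    block : (N, R) binary array, block[k, n] = 1 if neuron k spikes at time n.
    pairs : list of (k, n) spike-time events; the monomial is the logical AND
            of the corresponding entries (and equals 1 for the empty list).
    """
    return int(all(block[k, n] == 1 for k, n in pairs))

# Example: N = 4 neurons, range R = 3 (the block shown above), 0-based indices.
block = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
print(monomial(block, [(2, 0), (3, 1)]))  # omega_2(0) * omega_3(1) = 1
print(monomial(block, [(0, 0), (3, 1)]))  # omega_0(0) * omega_3(1) = 0
```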

2.1.4. Potential

Another prominent example of an observable is the function called “energy” or potential in the realm of the MaxEnt. Any potential of range R can be written as a linear combination of the $2^{NR}$ possible monomials (1):

$$H_\lambda = \sum_{l=1}^{2^{NR}} \lambda_l\, m_l, \qquad (2)$$

where some coefficients, $\lambda_l$, in the expansion may be zero. Therefore, by analogy with spin systems, monomials somewhat constitute spatio-temporal interactions between neurons: the monomial $\prod_{u=1}^{r}\omega_{k_u}(n_u)$ contributes to the total energy, $H_\lambda(\omega)$, of the raster ω if and only if neuron $k_1$ fires at time $n_1$, . . ., and neuron $k_r$ fires at time $n_r$ in the raster ω. The number of pairs in a monomial (1) defines the degree of an interaction: Degree 1 corresponds to “self-interactions”, Degree 2 to pairwise, and so on. Typical examples of such potentials are the Ising model [9,10,20]:

$$H_{\rm Ising}(\omega(0)) = \sum_{i} \lambda_i\, \omega_i(0) + \sum_{ij} \lambda_{ij}\, \omega_i(0)\, \omega_j(0), \qquad (3)$$

where considered events are individual spikes and pairs of simultaneous spikes. Another example is the Ganmor–Schneidman–Segev (GSS) model [11,12]:

$$H_{\rm GSS}(\omega(0)) = \sum_{i} \lambda_i\, \omega_i(0) + \sum_{ij} \lambda_{ij}\, \omega_i(0)\, \omega_j(0) + \sum_{ijk} \lambda_{ijk}\, \omega_i(0)\, \omega_j(0)\, \omega_k(0), \qquad (4)$$

where, in addition to (3), simultaneous triplets of spikes are considered (we restrict the form (4) to triplets, although Ganmor et al. were also considering quadruplets). In these two examples, the potential is a function of the spike pattern at a given time. Here, we choose this time equal to zero, without loss of generality, since we are considering time-translation invariant potentials. More generally, the form (2) affords the consideration of spatio-temporal neuron interactions: this allows us to introduce delays, memory and causality in spike statistics estimation. A simple example is a pairwise model with delays, such as:

$$H_{\rm PR}(\omega_0^D) = \sum_{i} \lambda_i\, \omega_i(D) + \sum_{s=0}^{D} \sum_{ij} \lambda_{ijs}\, \omega_i(0)\, \omega_j(s), \qquad (5)$$

where ‘PR’ stands for ‘pairwise with range R’, which takes into account the events where neuron i fires s time steps after a neuron, j, with s = 0. . . D.
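As an illustration of how a potential of the form (2) is evaluated, here is a hedged sketch (Python) for a pairwise model with delays of the form (5); the coefficient dictionaries and their values are hypothetical.

```python
import numpy as np

def pairwise_potential(block, lam_single, lam_pair):
    """Energy H_PR(omega_0^D) of a spike block under a pairwise model with delays.

    block      : (N, D+1) binary array.
    lam_single : dict {i: lambda_i} for the rate terms omega_i(D).
    lam_pair   : dict {(i, j, s): lambda_ijs} for the terms omega_i(0) * omega_j(s).
    """
    D = block.shape[1] - 1
    H = sum(l * block[i, D] for i, l in lam_single.items())
    H += sum(l * block[i, 0] * block[j, s] for (i, j, s), l in lam_pair.items())
    return H

# Hypothetical coefficients for N = 3 neurons, D = 1 (range R = 2).
lam_single = {0: -1.0, 1: -0.5, 2: -0.8}
lam_pair = {(0, 1, 1): 0.7, (1, 2, 0): 0.3}
block = np.array([[1, 0], [1, 1], [0, 1]])
print(pairwise_potential(block, lam_single, lam_pair))  # -0.6
```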

2.2. The Maximum Entropy Principle

Assigning equal probabilities (uniform probability distribution) to possible outcomes goes back to Laplace and Bernoulli ([21]) (“principle of insufficient reason”). Maximizing the statistical entropy without constraints is equivalent to this principle. In general, however, one has some knowledge about data, typically characterized by the empirical average of the prescribed observables (e.g., for spike trains, firing rates, the probability that a fixed group of neurons fire at the same time, the probability that K neurons fire at the same time [22]); this constitutes a set of constraints. The maximum entropy principle (MaxEnt) is a method to obtain, from the observation of a statistical sample, a probability distribution that approaches, at best, the statistics of the sample, taking into account these constraints without additional assumptions [8]. Maximizing the statistical entropy given those constraints provides a distribution as far as possible from the uniform one and as close as possible to the empirical distribution. For instance, considering the empirical mean and variance of the sample of a random variable as constraints results in a Gaussian distribution.

Although some attempts have been made to extend MaxEnt to non-stationary data [23–26], it is mostly applied in the context of stationary statistics: the average of an observable does not depend explicitly on time. We shall work with this hypothesis. In its simplest form, the MaxEnt also assumes that the sample has no memory: the probability of an outcome at time t does not depend on the past. We first discuss the MaxEnt in this context in the next section, before considering the case of processes with memory in Section 2.2.2.

2.2.1. Spatial Constraints

In our case, the natural constraints are represented by the empirical probability of occurrence of characteristic spike events in the spike train or, equivalently, by the average of specific monomials. Classical examples of constraints are the probability that a neuron fires at a given time (firing rate) or the probability that two neurons fire at the same time. For a raster ω of length T, we denote by $\pi_\omega^{(T)}$ the empirical distribution and by $\pi_\omega^{(T)}[\mathcal{O}]$ the empirical average of the observable $\mathcal{O}$ in the raster ω. For example, the empirical firing rate of neuron i is $\pi_\omega^{(T)}[\omega_i] = \frac{1}{T}\sum_{n=0}^{T-1}\omega_i(n)$; the empirical probability that two neurons, i, j, fire at the same time is $\pi_\omega^{(T)}[\omega_i\omega_j] = \frac{1}{T}\sum_{n=0}^{T-1}\omega_i(n)\,\omega_j(n)$; and so on. Given a set of L monomials, $m_l$, their empirical averages, $\pi_\omega^{(T)}[m_l]$, measured in the raster ω, constitute a set of constraints shaping the sought-for probability distribution. We consider here monomials corresponding to events occurring at the same time, i.e., $m_l(\omega) \equiv m_l(\omega(0))$, postponing to Section 2.2.2 the general case of events occurring at distinct times.
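As a simple illustration, the empirical averages entering these constraints can be computed from a binned raster as in the following sketch (Python); the representation of a monomial as a list of (neuron, time-lag) pairs and the toy raster are assumptions for illustration.

```python
import numpy as np

def empirical_average(raster, pairs):
    """Empirical average pi_omega^(T)[m] of the monomial defined by `pairs`.

    raster : (N, T) binary array of the spike train.
    pairs  : list of (k, s) events; the monomial is prod_u omega_{k_u}(n + s_u),
             averaged over the time windows n = 0 .. T - R.
    """
    T = raster.shape[1]
    R = max(s for _, s in pairs) + 1
    values = [all(raster[k, n + s] for k, s in pairs) for n in range(T - R + 1)]
    return float(np.mean(values))

rng = np.random.default_rng(0)
raster = (rng.random((5, 10000)) < 0.1).astype(int)    # toy raster, 5 neurons
print(empirical_average(raster, [(0, 0)]))             # firing rate of neuron 0, ~0.1
print(empirical_average(raster, [(0, 0), (1, 0)]))     # coincident firing of 0 and 1, ~0.01
```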

In this context, the MaxEnt problem is stated as follows. Find a probability distribution, μ, that maximizes the entropy:

$$S[\mu] = -\sum_{\omega(0)} \mu[\omega(0)]\, \log \mu[\omega(0)], \qquad (6)$$

(where the sum runs over the $2^N$ possible spike patterns, ω(0)), given the constraints:

$$\mu[m_l] = \pi_\omega^{(T)}[m_l], \quad l = 1 \dots L. \qquad (7)$$

The average of the monomials predicted by the statistical model μ (noted here as $\mu[m_l]$) must be equal to the average, $\pi_\omega^{(T)}[m_l]$, measured in the sample. There is, additionally, the probability normalization constraint:

$$\sum_{\omega(0)} \mu[\omega(0)] = 1. \qquad (8)$$

This provides a variational problem:

$$\mu = \arg\max_{\nu\in\mathcal{M}} \Big[\, S[\nu] + \lambda_0 \Big( \sum_{\omega(0)} \nu[\omega(0)] - 1 \Big) + \sum_{l=1}^{L} \lambda_l \big( \nu[m_l] - \pi_\omega^{(T)}[m_l] \big) \Big], \qquad (9)$$

where $\mathcal{M}$ is the set of (stationary) probabilities on spike trains. One searches, among all stationary probabilities ν, for the one which maximizes the right-hand side of (9). There is a unique such probability, $\mu = \mu_\lambda$, provided N is finite and $\lambda_l > -\infty$. This probability depends on the parameters, λ.

Stated in this form, the MaxEnt is a Lagrange multipliers problem. The sought probability distribution is the classical Gibbs distribution:

$$\mu_\lambda[\omega(0)] = \frac{1}{Z_\lambda}\, e^{H_\lambda[\omega(0)]}, \qquad (10)$$

where $Z_\lambda = \sum_{\omega(0)} e^{H_\lambda[\omega(0)]}$ is the partition function, whereas $H_\lambda[\omega(0)] = \sum_{l=1}^{L} \lambda_l\, m_l[\omega(0)]$. Note that the time index (here, zero) does not play a role, since we have assumed $\mu_\lambda$ to be stationary (time-translation invariant).

The values of the $\lambda_l$s are fixed by the relation:

$$\mu_\lambda(m_l) = \frac{\partial \log Z_\lambda}{\partial \lambda_l} = \pi_\omega^{(T)}[m_l], \quad l = 1 \dots L. \qquad (11)$$

Additionally, note that the matrix $\frac{\partial^2 \log Z_\lambda}{\partial \lambda_l\, \partial \lambda_{l'}}$ is positive. This ensures the convexity of the problem and the uniqueness of the solution of the variational problem.
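For small N, the partition function in (10) and the model averages in (11) can be computed by exhaustive enumeration of the $2^N$ spike patterns; the following is a minimal sketch (Python) with hypothetical Ising-type coefficients, not the method used in the paper for large networks.

```python
import itertools
import numpy as np

def spatial_gibbs_averages(N, H):
    """Exhaustive computation of Z_lambda and mu_lambda[omega_i] for a range-1 potential.

    N : number of neurons; H : callable giving H_lambda(omega(0)) for a binary pattern.
    Only feasible for small N (2^N patterns are enumerated).
    """
    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    weights = np.exp([H(p) for p in patterns])
    Z = weights.sum()
    probs = weights / Z                   # mu_lambda[omega(0)], Equation (10)
    rates = probs @ patterns              # mu_lambda[omega_i], i = 1..N
    return Z, rates

# Hypothetical Ising-type potential for N = 3.
h = np.array([-1.0, -0.5, -2.0])
J = np.array([[0.0, 0.8, 0.0], [0.8, 0.0, -0.4], [0.0, -0.4, 0.0]])
H = lambda p: h @ p + 0.5 * p @ J @ p
Z, rates = spatial_gibbs_averages(3, H)
print(Z, rates)
```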

Note that we do not expect, in general, μλ to be equal to the (hidden) probability shaping the observed sample. It is only the closest one satisfying the constraints (7) [7]. The notion of closeness is related to the Kullback-Leibler divergence, defined in the next section.

It is easy to check that the Gibbs distribution (10) obeys:

$$\mu_\lambda[\omega_{n_1}^{n_2}] = \prod_{n=n_1}^{n_2} \mu_\lambda[\omega(n)], \qquad (12)$$

for any spike block, $\omega_{n_1}^{n_2}$. Indeed, the potential of the spike block $\omega_{n_1}^{n_2}$ is $H_\lambda(\omega_{n_1}^{n_2}) = \sum_{n=n_1}^{n_2} H_\lambda(\omega(n))$, whereas the partition function on spike blocks $\omega_{n_1}^{n_2}$ is $\sum_{\omega_{n_1}^{n_2}} e^{H_\lambda[\omega_{n_1}^{n_2}]} = Z_\lambda^{\,n_2-n_1+1}$. Equation (12) expresses that spiking patterns occurring at different times are independent under the Gibbs distribution (10). This is expected: since the constraints shaping $\mu_\lambda$ only take into account spiking events occurring at the same time, we have no information on the causality between spike generation or on memory effects. The Gibbs distributions obtained when constructing constraints only with spatial events lead to statistical models where spike patterns are renewed at each time step, without reference to the past activity.

2.2.2. Spatio-Temporal Constraints

On the contrary, one expects spike train generation to involve causal interactions between neurons and memory effects. We would therefore like to construct Gibbs distributions taking into account information on the spatio-temporal interactions between neurons and leading to a statistical model that no longer assumes that successive spike patterns are independent. Although the notion of the Gibbs distribution extends to processes with infinite memory [27], we shall concentrate here on Gibbs distributions associated with Markov processes with finite memory depth D; that is, the probability of having a spike pattern, ω(n), at time n, given the past history of spikes, reads $P[\omega(n) \mid \omega_{n-D}^{n-1}]$. Note that those transition probabilities are assumed not to depend explicitly on time (stationarity assumption).

Such a family of transition probabilities, $P[\omega(n) \mid \omega_{n-D}^{n-1}]$, defines a homogeneous Markov chain. Provided $P[\omega(n) \mid \omega_{n-D}^{n-1}] > 0$ for all $\omega_{n-D}^{n}$ (this is a sufficient, but not necessary, condition; in the remainder of the paper, we shall work with this assumption), there is a unique probability, μ, called the invariant probability of the Markov chain, such that:

$$\mu[\omega_1^D] = \sum_{\omega(0)} P[\omega(D) \mid \omega_0^{D-1}]\, \mu[\omega_0^{D-1}]. \qquad (13)$$

In a Markov process, the probability of a block, $\omega_{n_1}^{n_2}$, for $n_2 - n_1 + 1 > D$, is:

$$\mu[\omega_{n_1}^{n_2}] = \prod_{n=n_1+D}^{n_2} P[\omega(n) \mid \omega_{n-D}^{n-1}]\; \mu[\omega_{n_1}^{n_1+D-1}], \qquad (14)$$

the Chapman–Kolmogorov relation [28]. To determine the probability of $\omega_{n_1}^{n_2}$, one has to know the transition probabilities and the probability $\mu[\omega_{n_1}^{n_1+D-1}]$. When attempting to construct a Gibbs distribution obeying (14) from a set of spatio-temporal constraints, one therefore has to determine simultaneously the family of transition probabilities and the invariant probability. Remark that, setting:

$$\phi(\omega_0^D) = \log P[\omega(D) \mid \omega_0^{D-1}], \qquad (15)$$

we may write (14) in the form:

$$\mu[\omega_{n_1}^{n_2} \mid \omega_{n_1}^{n_1+D-1}] = e^{\sum_{n=n_1+D}^{n_2} \phi(\omega_{n-D}^{n})}. \qquad (16)$$

The probability of observing the spike block $\omega_{n_1}^{n_2}$, given the past $\omega_{n_1}^{n_1+D-1}$ of depth D, has an exponential form, similar to (10). Actually, the invariant probability of a Markov chain is a Gibbs distribution in the following sense.

In view of (14), probabilities must be defined for blocks $\omega_{n_1}^{n_2}$ of arbitrary length, even when $n_2 - n_1$ is arbitrarily large. In this setting, the right objects are probabilities on infinite rasters [28]. Then, the entropy rate (or Kolmogorov–Sinai entropy) of μ is:

$$S[\mu] = -\limsup_{n\to\infty} \frac{1}{n+1} \sum_{\omega_0^{n}} \mu[\omega_0^{n}]\, \log \mu[\omega_0^{n}], \qquad (17)$$

where the sum runs over all possible blocks, $\omega_0^{n}$. This reduces to (6) when μ obeys (12).

The MaxEnt now takes the following form. We consider a set of L spatio-temporal spike events (monomials), whose empirical average values, $\pi_\omega^{(T)}[m_l]$, have been computed. We restrict to monomials with a range at most equal to R = D + 1, for some D > 0. This provides us with a set of constraints of the form (7). To maximize the entropy rate (17) under the constraints (7), we construct a range-R potential $H_\lambda = \sum_{l=1}^{L} \lambda_l m_l$. The generalized form of the MaxEnt states that there is a unique probability measure $\mu_\lambda$, such that [29]:

$$\wp[\lambda] = \sup_{\nu\in\mathcal{M}} \big( S[\nu] + \nu[H_\lambda] \big) = S[\mu_\lambda] + \mu_\lambda[H_\lambda]. \qquad (18)$$

This is the extension of the variational principle (9) to Markov chains. It selects, among all possible probabilities ν, a unique probability, $\mu_\lambda$, which realizes the supremum. $\mu_\lambda$ is called the Gibbs distribution with potential $H_\lambda$.

The quantity, ℘ [ λ ], is called topological pressure or free energy density. For a potential of the form (2) [30,31]:

$$\frac{\partial \wp[\lambda]}{\partial \lambda_l} = \mu_\lambda[m_l]. \qquad (19)$$

This is the analog of (11), which allows one to tune the parameters, $\lambda_l$. Thus, ℘[λ] plays the role of $\log Z_\lambda$ in (10). Actually, it is equal to $\log Z_\lambda$ when restricting to the memoryless case (in statistical physics, the free energy is $-kT \log Z$; the minus sign comes from the minus sign in the Hamiltonian). ℘[λ] is strictly convex thanks to the assumption $P[\omega(n) \mid \omega_{n-D}^{n-1}] > 0$, which guarantees the uniqueness of $\mu_\lambda$.

Note that $\mu_\lambda$ does not have the form (10) for D > 0. Indeed, a probability distribution, e.g., of the form $\mu_\lambda(\omega_0^{n-1}) = \frac{1}{Z_n} e^{H_\lambda(\omega_0^{n-1})}$, with:

$$H_\lambda(\omega_0^{n-1}) \equiv \sum_{r=0}^{n-D-1} H_\lambda(\omega_r^{r+D}) = \sum_{l} \lambda_l \sum_{r=0}^{n-D-1} m_l(\omega_r^{r+D}), \qquad (20)$$

the potential of the block $\omega_0^{n-1}$, and:

$$Z_n[\lambda] = \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}, \qquad (21)$$

the “n-time steps” partition function, does not obey the Chapman–Kolmogorov relation (14).

However, the following holds [29,32–34].

  • There exist A, B > 0, such that, for any block, $\omega_0^{n-1}$:

    $$A \le \frac{\mu_\lambda[\omega_0^{n-1}]}{e^{-(n-D)\,\wp[\lambda]}\; e^{H_\lambda(\omega_0^{n-1})}} \le B. \qquad (22)$$

  • We have:

    $$\wp[\lambda] = \lim_{n\to\infty} \frac{1}{n} \log Z_n[\lambda]. \qquad (23)$$

    In the spatial case, $Z_n[\lambda] = Z^n[\lambda]$ and $\wp[\lambda] = \log Z[\lambda]$, whereas A = B = 1 in (22). Although (23) is defined by a limit, it is possible to compute ℘[λ] as the log of the largest eigenvalue of a transition matrix constructed from $H_\lambda$ (Perron–Frobenius matrix) [35]. Unfortunately, this method does not apply numerically as soon as NR > 20.
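As an illustration of the transfer-matrix remark above, the sketch below (Python) computes ℘[λ] as the log of the largest eigenvalue of a transfer matrix indexed by memory blocks of depth D; the construction is a standard one written for this illustration (the potential and sizes are hypothetical), and it is only feasible for small N(D + 1).

```python
import itertools
import numpy as np

def topological_pressure(N, D, H):
    """Pressure of a range-(D+1) potential via a transfer (Perron-Frobenius) matrix.

    States are spike blocks of depth D; the entry between a block w = omega_0^{D-1}
    and its shift-compatible successor w' = omega_1^D is exp(H(omega_0^D)).
    H : callable on an (N, D+1) binary array. Feasible only for small N(D+1).
    """
    states = list(itertools.product([0, 1], repeat=N * D))
    index = {s: i for i, s in enumerate(states)}
    M = np.zeros((len(states), len(states)))
    for s in states:
        w = np.array(s).reshape(N, D)
        for new in itertools.product([0, 1], repeat=N):
            block = np.column_stack([w, np.array(new)])     # omega_0^D
            s_next = tuple(block[:, 1:].flatten())          # omega_1^D
            M[index[s], index[s_next]] = np.exp(H(block))
    return np.log(np.max(np.abs(np.linalg.eigvals(M))))

# Hypothetical pairwise potential with one delay, N = 2, D = 1.
H = lambda b: -1.0 * b[0, 1] - 1.2 * b[1, 1] + 0.5 * b[0, 0] * b[1, 1]
print(topological_pressure(2, 1, H))
```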

These relations are crucial for the developments made in the next section.

To recap, a Gibbs distribution in the sense of (18) is the invariant probability distribution of a Markov chain. The link between the potential $H_\lambda$ and the transition probabilities $P[\omega(D) \mid \omega_0^{D-1}]$ (respectively, the potential (15)) is given by $\phi(\omega_0^D) = H_\lambda(\omega_0^D) - \mathcal{G}(\omega_0^D)$, where $\mathcal{G}$, called a normalization function, is a function of the right eigenvector of a transition matrix built from $H_\lambda$ and a function of ℘[λ]. $\mathcal{G}$ reduces to $\log Z_\lambda = \wp[\lambda]$ when D = 0.

To finish this section, let us introduce the Kullback-Leibler divergence, $d_{KL}(\nu, \mu)$, which provides a notion of similarity between two probabilities, ν, μ. We have $d_{KL}(\nu, \mu) \ge 0$, with equality if and only if μ = ν. The Kullback-Leibler divergence between an invariant probability ν and the Gibbs distribution $\mu_\lambda$ with potential $H_\lambda$ is given by $d_{KL}(\nu, \mu_\lambda) = \wp[\lambda] - \nu[H_\lambda] - S[\nu]$ [29]. When $\nu = \pi_\omega^{(T)}$, we obtain the divergence between the “model ($\mu_\lambda$)” and the “empirical probability ($\pi_\omega^{(T)}$)”:

$$d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) = \wp[\lambda] - \pi_\omega^{(T)}[H_\lambda] - S[\pi_\omega^{(T)}]. \qquad (24)$$

3. Inferring the Coefficients of a Potential from Data

Equations (11) and (19) provide an analytical way to compute the coefficients of the Gibbs distribution from data. However, they require the computation of the partition function or of the topological pressure, which rapidly becomes intractable as the number of neurons increases. Thus, researchers have attempted to find alternative methods to compute the $\lambda_l$s reliably and efficiently. An efficient method has been introduced in [17] and applied to spike trains in [18]. Although these papers are restricted to Gibbs distributions of the form (10) (models without memory), we show in this section how their method can be extended to general Gibbs distributions.

3.1. Bounding the Kullback-Leibler Divergence Variation

3.1.1. The Spatial Case

The method developed in [17] by Dudik et al. is based on the so-called convex duality principle, used in mathematical optimization theory. Due to the difficulty of maximizing the entropy (which is a concave function), one looks for a convex function that is easier to investigate. Dudik et al. showed that, for spatially constrained MaxEnt distributions, finding the Gibbs distribution amounts to finding the minimum of the negative log-likelihood (we have adapted [17] to our notations; moreover, in our case, $\pi_\omega^{(T)}$ corresponds to the empirical average on a raster ω, whereas π in [17] corresponds to an average over independent samples):

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = -\pi_\omega^{(T)}[\log \mu_\lambda].$$

Indeed, in the spatial case, the Kullback-Leibler divergence between the empirical measure, $\pi_\omega^{(T)}$, and the Gibbs distribution $\mu_\lambda$ is:

$$d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) = \pi_\omega^{(T)}\big[\log \pi_\omega^{(T)} - \log \mu_\lambda\big] = \pi_\omega^{(T)}[\log \pi_\omega^{(T)}] - \pi_\omega^{(T)}[\log \mu_\lambda],$$

so that, from (24):

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = \wp[\lambda] - \pi_\omega^{(T)}[H_\lambda],$$

where we used $S[\pi_\omega^{(T)}] = -\pi_\omega^{(T)}[\log \pi_\omega^{(T)}]$.

Since ℘ is convex and $\pi_\omega^{(T)}[H_\lambda]$ is linear in λ, $\mathcal{L}_{\pi_\omega^{(T)}}(\lambda)$ is convex. Its unique minimum is given by (11).

Moreover, we have:

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = \wp[\lambda'] - \wp[\lambda] - \pi_\omega^{(T)}[\Delta H_\lambda],$$

with $\Delta H_\lambda = H_{\lambda'} - H_\lambda$. From (10):

$$\frac{Z[\lambda']}{Z[\lambda]} = \frac{1}{Z[\lambda]} \sum_{\omega(0)} e^{H_{\lambda'}(\omega(0))} = \sum_{\omega(0)} e^{\Delta H_\lambda(\omega(0))}\, \mu_\lambda[\omega(0)] = \mu_\lambda\big[e^{\Delta H_\lambda}\big],$$

and since $\wp[\lambda] = \log Z[\lambda]$ in the spatial case:

$$\wp[\lambda'] - \wp[\lambda] = \log \mu_\lambda\big[e^{\Delta H_\lambda}\big].$$

Therefore:

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = \log \mu_\lambda\big[e^{\Delta H_\lambda}\big] - \pi_\omega^{(T)}[\Delta H_\lambda].$$

The idea proposed by Dudik et al. is then to bound this difference by an easier-to-compute convex quantity with the same minimum as $\mathcal{L}_{\pi_\omega^{(T)}}(\lambda)$, and to reach this minimum by iterations on λ. They proposed a sequential and a parallel method. Let us first summarize the sequential method. The goal here is not to rewrite their paper [17], but to explain some crucial elements that are not directly applicable to the spatio-temporal case.

In the sequential case, one updates λ as $\lambda' = \lambda + \delta e_l$, for some l, where $e_l$ is the canonical basis vector in direction l, so that $\Delta H_\lambda = \delta\, m_l$, and:

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = \log \mu_\lambda\big[e^{\delta m_l}\big] - \delta\, \pi_\omega^{(T)}[m_l].$$

Using the following property:

$$e^{\delta x} \le 1 + (e^{\delta} - 1)\, x,$$

for x ∈ [0, 1], and since ml ∈ {0, 1}, we have:

$$\log \mu_\lambda\big[e^{\delta m_l}\big] \le \log\big(1 + (e^{\delta} - 1)\, \mu_\lambda[m_l]\big).$$

This bound, proposed by Dudik et al., is remarkably clever. Indeed, it replaces the computation of the average $\mu_\lambda[e^{\delta m_l}]$, which is computationally hard, by the computation of $\mu_\lambda[m_l]$, which is computationally easy. Finally,

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) \le -\delta\, \pi_\omega^{(T)}[m_l] + \log\big(1 + (e^{\delta} - 1)\, \mu_\lambda[m_l]\big).$$

In the parallel case, the computation and results differ. One now updates λ as $\lambda' = \lambda + \sum_{l=1}^{L} \delta_l e_l$. Moreover, one has to renormalize the $m_l$s into $m_l' = \frac{m_l}{L}$ in order that Equation (34) below holds. We have, therefore, $\Delta H_\lambda = \sum_{l=1}^{L} \delta_l\, m_l'$.

Thus,

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) = \log \mu_\lambda\Big[e^{\sum_{l=1}^{L} \delta_l m_l'}\Big] - \sum_{l=1}^{L} \delta_l\, \pi_\omega^{(T)}[m_l'].$$

Using the following property [36]:

$$e^{\sum_{l=1}^{L} \delta_l m_l'} \le 1 + \sum_{l=1}^{L} m_l'\, (e^{\delta_l} - 1),$$

for $\delta_l \in \mathbb{R}$ and $m_l' \ge 0$, $\sum_{l=1}^{L} m_l' \le 1$, we have:

$$\log \mu_\lambda\Big[e^{\sum_{l=1}^{L} \delta_l m_l'}\Big] \le \log\Big(1 + \sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l']\Big).$$

Since log(1 + x) ≤ x for x > −1, Dudik et al. obtain:

$$\log \mu_\lambda\Big[e^{\sum_{l=1}^{L} \delta_l m_l'}\Big] \le \sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l'],$$

provided $\sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l'] > -1$ (this constraint has to be checked during the iterations). Finally, using the definition of $m_l'$:

$$\mathcal{L}_{\pi_\omega^{(T)}}(\lambda') - \mathcal{L}_{\pi_\omega^{(T)}}(\lambda) \le \frac{1}{L}\Big[ -\sum_{l=1}^{L} \delta_l\, \pi_\omega^{(T)}[m_l] + \sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l] \Big].$$

To be complete, let us mention that Dudik et al. consider the case where some error, εl, is allowed in the estimation of the coefficient, λl. This relaxation on the parameters alleviates the overfitting.

In this case, the bound on the right-hand side in (33) (sequential case) becomes:

$$F_l(\lambda, \delta) = -\delta\, \pi_\omega^{(T)}[m_l] + \log\big(1 + (e^{\delta} - 1)\, \mu_\lambda[m_l]\big) + \varepsilon_l\, \big( |\lambda_l + \delta| - |\lambda_l| \big),$$

whereas the right-hand side in (35) becomes $\sum_{l=1}^{L} G_l(\lambda, \delta)$, with:

$$G_l(\lambda, \delta) = \frac{1}{L}\Big[ -\delta_l\, \pi_\omega^{(T)}[m_l] + (e^{\delta_l} - 1)\, \mu_\lambda[m_l] \Big] + \varepsilon_l\, \big( |\lambda_l + \delta_l| - |\lambda_l| \big),$$

The minimum of these functions is easy to find, and one obtains, for a given λ, the variation, δ, required to lower the bound on the log-likelihood variation. The authors have shown that both the sequential and the parallel method produce a sequence, $\lambda^{(k)}$, which converges to the minimum of $\mathcal{L}_{\pi_\omega^{(T)}}$ as k → +∞. Note, however, that one strong condition in their convergence theorem is $\varepsilon_l > 0$. This requires a sharp estimate of the error, $\varepsilon_l$, which cannot be solely based on the central limit theorem or on the Hoeffding inequality in our case, because when the empirical average, $\pi_\omega^{(T)}(m_l)$, is too small, the minima of $F_l$, computed in [18], may not be defined.
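For the unregularized case ($\varepsilon_l = 0$), the minimizer of the sequential bound has a simple closed form, which we use below as an illustration (Python); the formula is obtained by differentiating the right-hand side of (33) with respect to δ and assumes that both averages lie strictly between 0 and 1.

```python
import numpy as np

def sequential_delta(p_emp, mu_model):
    """Step delta minimizing -delta*p + log(1 + (exp(delta) - 1)*q),
    i.e. the right-hand side of the sequential bound with epsilon_l = 0.

    p_emp    : empirical average pi_omega^(T)[m_l], assumed in (0, 1).
    mu_model : current model average mu_lambda[m_l], assumed in (0, 1).
    """
    return np.log(p_emp * (1.0 - mu_model) / (mu_model * (1.0 - p_emp)))

# Toy check: the bound is negative for the returned step, i.e. the criterion decreases.
p, q = 0.30, 0.20
d = sequential_delta(p, q)
print(d, -d * p + np.log(1.0 + (np.exp(d) - 1.0) * q))
```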

3.1.2. Extension to the Spatio-Temporal Case

We now show how to extend these computations to the spatio-temporal case, provided one replaces the log-likelihood, $\mathcal{L}_{\pi_\omega^{(T)}}$, by the Kullback-Leibler divergence (24). The main obstacle is that the Gibbs distribution does not have the form $\frac{e^{H}}{Z}$. We thus obtain a convex criterion to minimize the Kullback-Leibler divergence variation, hence reaching the minimum of $d_{KL}(\pi_\omega^{(T)}, \mu_\lambda)$.

Replacing ν in Equation (24) by the empirical measure, $\pi_\omega^{(T)}$, one has:

$$d_{KL}(\pi_\omega^{(T)}, \mu_{\lambda'}) - d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) = \wp[\lambda'] - \wp[\lambda] - \pi_\omega^{(T)}[\Delta H_\lambda],$$

because the entropy, $S[\pi_\omega^{(T)}]$, cancels. This is the analog of (27). The main problem now is to compute $\wp[\lambda'] - \wp[\lambda]$.

From (22), we have:

$$A\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}\, e^{\Delta H_\lambda(\omega_0^{n-1})} \;\le\; \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\Delta H_\lambda(\omega_0^{n-1})} \;\le\; B\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}\, e^{\Delta H_\lambda(\omega_0^{n-1})},$$

so that:

$$\lim_{n\to\infty} \frac{1}{n}\Big[ \log A - (n-D)\,\wp[\lambda] + \log\Big( \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}\, e^{\Delta H_\lambda(\omega_0^{n-1})} \Big) \Big] \;\le\; \lim_{n\to\infty} \frac{1}{n} \log\Big( \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\Delta H_\lambda(\omega_0^{n-1})} \Big) \;\le\; \lim_{n\to\infty} \frac{1}{n}\Big[ \log B - (n-D)\,\wp[\lambda] + \log\Big( \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}\, e^{\Delta H_\lambda(\omega_0^{n-1})} \Big) \Big].$$

Since $H_{\lambda'}(\omega_0^{n-1}) = H_\lambda(\omega_0^{n-1}) + \Delta H_\lambda(\omega_0^{n-1})$, from (23):

$$\lim_{n\to\infty} \frac{1}{n} \log \sum_{\omega_0^{n-1}} e^{H_\lambda(\omega_0^{n-1})}\, e^{\Delta H_\lambda(\omega_0^{n-1})} = \wp[\lambda'].$$

Therefore:

$$\wp[\lambda'] - \wp[\lambda] = \lim_{n\to\infty} \frac{1}{n} \log \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\Delta H_\lambda(\omega_0^{n-1})}.$$

This is the extension of (29) to the spatio-temporal case. In the spatial case, it reduces to (29) from (12). This equation is obviously numerically intractable, but it has two advantages: on the one hand, it allows one to extend the bounds, (33) (sequential case) and (35) (parallel case), and on the other hand, it can be used to get a δ-power expansion of ℘ [ λ′]–℘ [ λ ]. This last point is used in Section 3.2.3.

To get the analog of (33) in the sequential case, where $\Delta H_\lambda(\omega_0^{n-1}) = \delta \sum_{r=0}^{n-D-1} m_l(\omega_r^{r+D})$, one may still apply (31), which holds, provided:

$$\frac{m_l(\omega_0^{n-1})}{n-D} \stackrel{\rm def}{=} \frac{\sum_{r=0}^{n-1-D} m_l(\omega_r^{r+D})}{n-D} \le 1.$$

Therefore, compared to the spatial case, we have to replace $m_l$ by $\frac{m_l}{n-D}$ in $\Delta H_\lambda(\omega_0^{n-1})$. We have, therefore:

$$\sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\Delta H_\lambda(\omega_0^{n-1})} = \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\frac{\delta}{n-D} m_l(\omega_0^{n-1})} \le 1 + (e^{\delta} - 1)\, \frac{1}{n-D} \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, m_l(\omega_0^{n-1}).$$

From the time translation invariance of μλ, we have:

$$\frac{1}{n-D} \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, m_l(\omega_0^{n-1}) = \frac{1}{n-D} \sum_{r=0}^{n-D-1} \sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, m_l(\omega_r^{r+D}) = \frac{1}{n-D} \sum_{r=0}^{n-D-1} \mu_\lambda[m_l] = \mu_\lambda[m_l],$$

so that:

$$\sum_{\omega_0^{n-1}} \mu_\lambda[\omega_0^{n-1}]\, e^{\frac{\delta}{n-D} m_l(\omega_0^{n-1})} \le 1 + (e^{\delta} - 1)\, \mu_\lambda[m_l].$$

At first glance this bound is not really useful. Indeed, from (40), we obtain:

$$\wp[\lambda'] - \wp[\lambda] \le \lim_{n\to\infty} \frac{1}{n} \log\big(1 + (e^{\delta} - 1)\, \mu_\lambda[m_l]\big) = 0.$$

Since this holds for any δ, this implies ℘[λ′] = ℘[λ]. The reason for this is evident. Renormalizing $m_l$, as we did to match the condition imposed by the bound (31), is equivalent to renormalizing δ into $\frac{\delta}{n-D}$. As n → +∞, this perturbation tends to zero and λ′ = λ. Therefore, the clever bound (31) would here be of no interest if we were seeking exact results. However, the goal here is to propose a numerical scheme, where, obviously, n is finite. We replace, therefore, the limit n → +∞ by a fixed n in the computation of ℘[λ′] − ℘[λ]. Keeping in mind that $m_l$ must also be renormalized in $\pi_\omega^{(T)}[\Delta H_\lambda]$ and using $\frac{1}{n} < \frac{1}{n-D}$, the Kullback-Leibler divergence (38) obeys:

$$d_{KL}(\pi_\omega^{(T)}, \mu_{\lambda'}) - d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) \le \frac{1}{n-D}\Big[ -\delta\, \pi_\omega^{(T)}[m_l] + \log\big(1 + (e^{\delta} - 1)\, \mu_\lambda[m_l]\big) \Big],$$

the analog of (33).

In the parallel case, similar remarks hold. In order to apply the bound (34), we have to renormalize the $m_l$s into $m_l' = \frac{m_l}{L(n-D)}$. As in the spatial case, we also need to check that $\sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l'] > -1$ (this constraint is not guaranteed and has to be checked during the iterations). One finally obtains:

$$d_{KL}(\pi_\omega^{(T)}, \mu_{\lambda'}) - d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) \le \frac{1}{L(n-D)}\Big[ -\sum_{l=1}^{L} \delta_l\, \pi_\omega^{(T)}[m_l] + \sum_{l=1}^{L} (e^{\delta_l} - 1)\, \mu_\lambda[m_l] \Big],$$

the analog of (35).

Compared with the spatial case, we see, therefore, that n must not be too large to have a reasonable Kullback-Leibler divergence variation. It must not be too small, however, to get a good approximation of the empirical averages.

3.2. Updating the Target Distribution when the Parameters Change

When updating the parameters, λ, one has to recompute the average values, $\mu_\lambda[m_l]$, since the probability, $\mu_\lambda$, has changed. This has a huge computational cost. The exact computation (e.g., from (11) or (19)) is not tractable for large N, so approximate methods have to be used, such as Monte Carlo [19]. Again, this is also CPU-time consuming, especially if one recomputes it at each iteration, but at least it is tractable.

In this spirit, Broderick et al. [18] propose generating a Monte Carlo raster distributed according to $\mu_\lambda$ and using it to compute $\mu_{\lambda'}$ when ‖λ′ − λ‖ is sufficiently small. We explain their method, limited to the spatial case, in the next section, and we explain why it is not applicable in the spatio-temporal case. We then propose an alternative method.

3.2.1. The Spatial Case

The average of ml is obtained by the derivative of the topological pressure, ℘ [λ ]. In the spatial case, where ℘(λ) = log Zλ, we have:

$$\mu_{\lambda'}[m_l] = \frac{\partial \wp(\lambda')}{\partial \lambda'_l} = \frac{1}{Z[\lambda']} \sum_{\omega(0)} m_l(\omega(0))\, e^{H_{\lambda'}(\omega(0))} = \frac{Z[\lambda]}{Z[\lambda']} \sum_{\omega(0)} m_l(\omega(0))\, e^{\Delta H_\lambda(\omega(0))}\, \mu_\lambda[\omega(0)].$$

Using (28), one finally obtains:

$$\mu_{\lambda'}[m_l] = \frac{\mu_\lambda\big[m_l(\omega(0))\, e^{\Delta H_\lambda(\omega(0))}\big]}{\mu_\lambda\big[e^{\Delta H_\lambda(\omega(0))}\big]},$$

which is Equation (18) in [18]. Using this formula, one is able to compute the average of ml with respect to the new probability, μλ′, only using the old one, μλ.
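In practice, the right-hand side of this formula is estimated on a Monte Carlo sample drawn under $\mu_\lambda$; the following is a minimal sketch (Python, spatial case) with hypothetical sample arrays, using a max-shift for numerical stability (which leaves the ratio unchanged).

```python
import numpy as np

def reweighted_average(m_values, delta_H_values):
    """Estimate mu_{lambda'}[m_l] from samples drawn under mu_lambda (spatial case).

    m_values       : (S,) values of m_l on S sampled patterns omega(0) ~ mu_lambda.
    delta_H_values : (S,) values of Delta H_lambda = H_{lambda'} - H_lambda on the same samples.
    Estimates mu_lambda[m_l * exp(Delta H)] / mu_lambda[exp(Delta H)] by sample averages.
    """
    w = np.exp(delta_H_values - delta_H_values.max())   # stabilized weights (ratio unchanged)
    return float(np.sum(m_values * w) / np.sum(w))

# Toy example with hypothetical samples: only lambda_l changed, by 0.3.
rng = np.random.default_rng(1)
m_values = (rng.random(10000) < 0.2).astype(float)
delta_H_values = 0.3 * m_values
print(reweighted_average(m_values, delta_H_values))     # slightly above 0.2
```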

3.2.2. Extension to the Spatio-Temporal Case

We now explain why the Broderick et al. method does not extend to the spatio-temporal case. The main problem is that if one tries to obtain the analog of the equality (45), one obtains, in fact, an inequality:

$$\frac{A}{B}\, \mu_{\lambda'}[m_l] \;\le\; \lim_{n\to\infty} \frac{1}{n}\, \frac{\mu_\lambda\big[m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\big]}{\mu_\lambda\big[e^{\Delta H_\lambda(\omega_0^{n-1})}\big]} \;\le\; \frac{B}{A}\, \mu_{\lambda'}[m_l],$$

where A, B are the constants in (22). They are not known in general (they depend on the potential), and they are different. However, in the spatial case, A = B = 1, whereas $\mu_\lambda\big[m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\big] = \mu_\lambda\big[m_l(\omega(0))\, e^{\Delta H_\lambda(\omega(0))}\big]$, because the potential has range one. Then, one recovers (45). Let us now explain how we obtain (46).

The averages of quantities are obtained by the derivative of the topological pressure (Equation (19)). We have:

$$\mu_\lambda[m_l] = \frac{\partial \wp}{\partial \lambda_l} = \lim_{n\to\infty} \frac{1}{n}\, \frac{\partial \log Z_n[\lambda]}{\partial \lambda_l}.$$

Assuming that the limit and the derivative commute (see, e.g., [37]), gives:

$$\mu_{\lambda'}[m_l] = \lim_{n\to\infty} \frac{1}{n}\, \frac{1}{Z_n[\lambda']} \sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{H_{\lambda'}(\omega_0^{n-1})} = \lim_{n\to\infty} \frac{1}{n}\, \frac{1}{Z_n[\lambda']} \sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})} = \lim_{n\to\infty} \frac{1}{n}\, \frac{\sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}}{\sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}}.$$

From (22):

$$A\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})} \;\le\; \sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, \mu_\lambda[\omega_0^{n-1}] \;\le\; B\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})},$$

and:

$$A\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})} \;\le\; \sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, \mu_\lambda[\omega_0^{n-1}] \;\le\; B\, e^{-(n-D)\wp[\lambda]} \sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}.$$

Therefore:

$$\frac{A}{B}\, \frac{\sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}}{\sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}} \;\le\; \frac{\sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, \mu_\lambda[\omega_0^{n-1}]}{\sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, \mu_\lambda[\omega_0^{n-1}]} \;\le\; \frac{B}{A}\, \frac{\sum_{\omega_0^{n-1}} m_l(\omega_0^{n-1})\, e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}}{\sum_{\omega_0^{n-1}} e^{\Delta H_\lambda(\omega_0^{n-1})}\, e^{H_\lambda(\omega_0^{n-1})}}.$$

Now, from [29,31], (49) gives (46).

3.2.3. Taylor Expansion of the Pressure

The idea here is to use a Taylor expansion of the topological pressure. This approach is very much in the spirit of [38], but extended here to the spatio-temporal case. Since λ′ = λ + δ, we have:

$$\mu_{\lambda'}[m_l] = \mu_\lambda[m_l] + \sum_{j=1}^{L} \frac{\partial \mu_\lambda[m_l]}{\partial \lambda_j}\, \delta_j + \frac{1}{2} \sum_{j,k=1}^{L} \frac{\partial^2 \mu_\lambda[m_l]}{\partial \lambda_j\, \partial \lambda_k}\, \delta_j \delta_k + \dots = \mu_\lambda[m_l] + \sum_{j=1}^{L} \frac{\partial^2 \wp[\lambda]}{\partial \lambda_j\, \partial \lambda_l}\, \delta_j + \frac{1}{2} \sum_{j,k=1}^{L} \frac{\partial^3 \wp[\lambda]}{\partial \lambda_j\, \partial \lambda_k\, \partial \lambda_l}\, \delta_j \delta_k + \dots$$

The second derivative of the pressure is given by [29,32–34]:

$$\frac{\partial^2 \wp[\lambda]}{\partial \lambda_j\, \partial \lambda_l} = \sum_{n=-\infty}^{+\infty} C_{jl}(n) \equiv \chi_{jl}[\lambda],$$

where:

$$C_{jl}(n) = \mu_\lambda\big[m_j\, (m_l \circ \sigma^n)\big] - \mu_\lambda[m_j]\, \mu_\lambda[m_l],$$

is the correlation function between $m_j$ and $m_l$ at time n, computed with respect to $\mu_\lambda$. (51) is a version of the fluctuation-dissipation theorem in the spatio-temporal case. $\sigma^n$ is the time shift applied n times. The third derivatives can be computed as well, by taking the derivative of (51) and using (47). This generates terms with third-order correlations, and so on [37]. Up to second order, we have:

$$\mu_{\lambda'}[m_l] = \mu_\lambda[m_l] + \sum_{j=1}^{L} \chi_{jl}[\lambda]\, \delta_j + \dots$$

Since the observables are monomials, they only take the values zero or one, and the computation of $\chi_{jl}$ is straightforward, reducing to counting the occurrences of time pairs, t, t + n, such that $m_j(t) = 1$ and $m_l(t + n) = 1$.
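A hedged sketch (Python) of this counting estimate of $\chi_{jl}$ from a finite raster of monomial values; the truncation of the lag sum to |n| ≤ n_max is an assumption for illustration (the theoretical sum runs over all n).

```python
import numpy as np

def estimate_chi(mj_series, ml_series, n_max):
    """Estimate chi_jl = sum_n C_jl(n) from binary time series of two monomials.

    mj_series, ml_series : (T,) arrays of m_j and m_l evaluated at each time window.
    n_max                : truncation of the lag sum (theoretically n runs over all integers).
    """
    T = len(mj_series)
    mj_mean, ml_mean = mj_series.mean(), ml_series.mean()
    chi = 0.0
    for n in range(-n_max, n_max + 1):
        if n >= 0:
            c = np.mean(mj_series[:T - n] * ml_series[n:]) - mj_mean * ml_mean
        else:
            c = np.mean(mj_series[-n:] * ml_series[:T + n]) - mj_mean * ml_mean
        chi += c
    return chi

rng = np.random.default_rng(2)
x = (rng.random(50000) < 0.1).astype(float)
print(estimate_chi(x, x, n_max=20))   # for m_j = m_l and no correlations, close to 0.1 * 0.9
```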

On practical grounds, we introduce a parameter Δ = ‖λ′ − λ‖, which measures the variation in the parameters after updating. If Δ is small enough (smaller than some $\Delta_c$), the terms of order three in the Taylor expansion are negligible; then, we can use (53). Otherwise, if Δ is big, we compute a new Monte Carlo estimation of $\mu_{\lambda'}$ (as described in [19]). We explain in Section 4.2 how $\Delta_c$ was chosen in our data. Then, we use the following trick. If ‖δ‖ > $\Delta_c$, we compute the new values $\mu_{\lambda'}[m_l]$. If $\Delta_c$ > ‖δ‖ > $\frac{\Delta_c}{10}$, we use the linear response approximation (53) of $\mu_{\lambda'}$. Finally, if ‖δ‖ < $\frac{\Delta_c}{10}$, we use $\mu_\lambda[m_l]$ instead of $\mu_{\lambda'}[m_l]$ in the next iteration of the method. Thus, in the case ‖δ‖ < $\Delta_c$, we use the Gibbs distribution computed at some time step, say n, to infer the values at the next iteration. If we do that for several successive time steps, the distance to the original value, $\lambda_n$, of the parameters increases. Therefore, we compute the norm ‖$\lambda_n$ − $\lambda_{n+k}$‖ at each time step, k, and we do not compute a new raster until this norm is larger than $\Delta_c$.

3.3. The Algorithms

We have two algorithms, sequential and parallel, which are very similar to those of Dudik et al. In particular, the convergence of their algorithms, proven in their paper, extends to our case, since it only depends on the shape of the cost functions (36), (37). We describe here the algorithms coming out of the presented mathematical framework, in a sequential and a parallel version. We iterate the algorithms until the distance $\eta = d(\mu_\lambda, \pi_\omega^{(T)})$ is smaller than some $\eta_c$. We use the Hellinger distance:

$$d(\mu_\lambda, \pi_\omega^{(T)}) = \frac{1}{\sqrt{2}} \sqrt{ \sum_{l=1}^{L} \Big( \sqrt{\pi_\omega^{(T)}(m_l)} - \sqrt{\mu_\lambda(m_l)} \Big)^2 }.$$
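A direct transcription of this convergence criterion, as reconstructed above (Python):

```python
import numpy as np

def hellinger(emp_avg, model_avg):
    """Hellinger-type distance between empirical and model monomial averages.

    emp_avg, model_avg : (L,) arrays of pi_omega^(T)(m_l) and mu_lambda(m_l).
    """
    return float(np.sqrt(0.5 * np.sum((np.sqrt(emp_avg) - np.sqrt(model_avg)) ** 2)))

print(hellinger(np.array([0.10, 0.02]), np.array([0.12, 0.02])))
```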

3.3.1. Sequential Algorithm

Algorithm 1. The sequential algorithm.

δ is the learning rate by which we change the value of a parameter, $\lambda_l$; η is the convergence criterion (54); Δ is the parameter allowing us to decide whether the model averages are updated by computing a new Gibbs sample or by the Taylor expansion. $F_l$ is given by Equation (36).
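Since the algorithm box itself is not reproduced here, the following is only a schematic sketch (Python) of the sequential loop described in the text: pick a monomial, take the step minimizing $F_l$ (here with $\varepsilon_l = 0$), and refresh the model averages either by a new Monte Carlo sample or by the linear response (53), depending on how far the parameters have drifted. The helper functions `monte_carlo_averages` and `linear_response` are hypothetical placeholders, and the choice of the worst-fitted monomial is one simple option, not necessarily the authors' choice.

```python
import numpy as np

def fit_sequential(lam, emp, monte_carlo_averages, linear_response,
                   delta_c=0.1, eta_c=1e-3, max_iter=100000):
    """Schematic sequential fit following Section 3.3 (a sketch, not the authors' code).

    lam  : (L,) initial parameters lambda (modified in place).
    emp  : (L,) empirical averages pi_omega^(T)[m_l].
    monte_carlo_averages(lam) -> (L,) model averages mu_lambda[m_l]  (hypothetical helper).
    linear_response(lam)      -> (L, L) matrix chi_jl                (hypothetical helper).
    """
    mu_ref = monte_carlo_averages(lam)          # averages at the last Monte Carlo sample
    lam_ref, mu = lam.copy(), mu_ref.copy()
    for _ in range(max_iter):
        eta = np.sqrt(0.5 * np.sum((np.sqrt(emp) - np.sqrt(mu)) ** 2))   # criterion (54)
        if eta < eta_c:
            break
        l = int(np.argmax(np.abs(emp - mu)))    # pick the worst-fitted monomial
        lam[l] += np.log(emp[l] * (1 - mu[l]) / (mu[l] * (1 - emp[l])))  # argmin of F_l, eps_l = 0
        drift = np.linalg.norm(lam - lam_ref)
        if drift > delta_c:                     # large drift: draw a new Gibbs sample
            mu_ref, lam_ref = monte_carlo_averages(lam), lam.copy()
            mu = mu_ref.copy()
        elif drift > delta_c / 10:              # moderate drift: linear response around lam_ref
            mu = mu_ref + linear_response(lam_ref).T @ (lam - lam_ref)
        # small drift: keep mu unchanged for the next iteration
    return lam
```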

3.4. Parallel Algorithm

Algorithm 2. The parallel algorithm. Gl is given by (37).

The implementation of these algorithms constitutes an important part of the software developed at INRIA (Institut National de Recherche en Informatique et en Automatique) called EnaS (Event Neural Assembly Simulation). The executable is freely available at [39].

4. Results

In this section, we perform several tests of our method. We first consider synthetic data generated with a known Gibbs potential and recover its parameters. This step also allows us to tune the parameter $\Delta_c$ in the algorithms. Then, we consider real data analysis, where the Gibbs potential form is unknown. This last step is not a systematic study, which would be out of the scope of this paper, but is simply provided as an illustration and comparison with the paper of Schneidman et al. 2006 [9].

4.1. Synthetic Data

Synthetic data are obtained by generating a raster distributed according to a Gibbs distribution whose potential (2) is known. We consider two families of Gibbs potentials. For each family, there are L > N monomials, whose range belongs to {1, . . . , R}. Among them, there are N “rate monomials” $\omega_i(D)$, i = 1 . . . N, whose average gives the firing rate of neuron i, denoted $r_i$; the L − N other monomials, with degree k > 1, are chosen at random with a probability law ∼ $e^{-k}$, which therefore favors pairwise interactions. The difference between the two families comes from the distribution of the coefficients, $\lambda_l$.

  • “Dense” raster family. The coefficients are drawn with a Gaussian distribution with mean zero and variance $\frac{1}{L}$ to ensure a correct scaling of the coefficient dispersion as L increases (Figure 1(a)). This typically produces a dense raster (Figure 1(b)) with strong multiple correlations.

  • “Sparse” raster family. The rate coefficients in the potential are very negative: the coefficient, $h_i$, of the rate monomial $\omega_i(D)$ is $h_i = \log\big(\frac{r_i}{1-r_i}\big)$, where $r_i \in [0, 0.01]$ with a uniform probability distribution. The other coefficients are drawn with a Gaussian distribution with mean 0.8 and variance one (Figure 2(a)). This produces a sparse raster (Figure 2(b)) with strong multiple correlations (see the sketch after this list).
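A sketch of how such random potentials can be drawn (Python). The degree distribution is truncated to degrees 2–4 and the lower bound on the rates is kept strictly positive; both are simplifying assumptions for illustration.

```python
import numpy as np

def draw_potential(N, R, L, family, rng):
    """Draw a random potential of the 'dense' or 'sparse' family (simplified sketch).

    Returns (monomials, coefficients): the first N monomials are the rate terms
    omega_i(D); the L - N others are random interactions of degree k >= 2,
    with k drawn with probability proportional to exp(-k) (truncated to 2..4 here).
    """
    D = R - 1
    monomials = [[(i, D)] for i in range(N)]                 # rate monomials
    degrees = np.arange(2, 5)
    p_deg = np.exp(-degrees) / np.exp(-degrees).sum()
    for _ in range(L - N):
        k = rng.choice(degrees, p=p_deg)
        neurons = rng.choice(N, size=k, replace=False)
        times = rng.integers(0, R, size=k)
        monomials.append(list(zip(neurons, times)))
    if family == "dense":
        coeffs = rng.normal(0.0, np.sqrt(1.0 / L), size=L)   # mean 0, variance 1/L
    else:                                                    # sparse
        rates = rng.uniform(0.001, 0.01, size=N)             # kept > 0 to avoid log(0)
        coeffs = np.concatenate([np.log(rates / (1.0 - rates)),
                                 rng.normal(0.8, 1.0, size=L - N)])
    return monomials, coeffs

mono, lam = draw_potential(N=10, R=2, L=25, family="sparse", rng=np.random.default_rng(3))
print(len(mono), lam[:3])
```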

4.2. Tuning Δc

For small N, R (NR ≤ 20), it is possible to exactly compute the topological pressure using the transfer matrix technique [16]. We have therefore a way to compare the Taylor expansion (51) and the exact value.

If we perturb λ by an amount, δ, in the direction l, this induces a variation on $\mu_\lambda[m_l]$, l = 1 . . . L, given by the Taylor expansion (53). To the lowest order, $\mu_{\lambda'}[m_l] = \mu_\lambda[m_l] + O(1)$, so that:

$$\varepsilon^{(1)} = \frac{1}{L} \sum_{l=1}^{L} \frac{\big| \mu_{\lambda'}[m_l] - \mu_\lambda[m_l] \big|}{\mu_\lambda[m_l]}$$

is a measure of the relative error when considering the lowest order expansion.

In the same way, to the second order:

$$\mu_{\lambda'}[m_l] = \mu_\lambda[m_l] + \sum_{j=1}^{L} \chi_{jl}[\lambda]\, \delta_j + O(2),$$

so that:

$$\varepsilon^{(2)} = \frac{1}{L} \sum_{l=1}^{L} \frac{\Big| \mu_{\lambda'}[m_l] - \mu_\lambda[m_l] - \sum_{j=1}^{L} \chi_{jl}[\lambda]\, \delta_j \Big|}{\mu_\lambda[m_l]},$$

is a measure of the relative error when considering the next order expansion.

In Figure 3, we show the relative errors, $\varepsilon^{(1)}$, $\varepsilon^{(2)}$ (in percent), as a function of ‖δ‖. For each point, we generate 25 potentials, with N = 5, R = 3, L = 12. For each of these potentials, we randomly perturb the $\lambda_j$s, with a random sign, so that the norm of the perturbation, ‖δ‖, is fixed. The linear response, χ, is computed from a raster of length T = 100,000.

These curves show a big difference between the dense and the sparse case. In the dense case, the second-order error is about 5% for $\Delta_c$ = 1, whereas we need $\Delta_c$ ∼ 0.03 to get the same 5% in the sparse case. We choose to align on the sparse case, and in typical experiments, we take $\Delta_c$ = 0.1, corresponding to an error of about 10% on the second order.

4.3. Computation of the Kullback-Leibler Divergence

To compute the Kullback-Leibler divergence between the empirical distribution, $\pi_\omega^{(T)}$, and the fitted distribution, $\mu_\lambda$, we need to know the value of the pressure, ℘[λ], the empirical average of the potential, $\pi_\omega^{(T)}[H_\lambda]$, and the entropy, $S[\pi_\omega^{(T)}]$. For small networks, we can compute the pressure using the Perron–Frobenius theorem ([16]). However, for large scales, since we cannot compute the pressure, computing the Kullback-Leibler divergence is neither direct nor exact. We compute an approximation using the following technique. From Equations (18) and (24), we can write:

$$d_{KL}(\pi_\omega^{(T)}, \mu_\lambda) = \mu_\lambda[H_\lambda] + S[\mu_\lambda] - \pi_\omega^{(T)}[H_\lambda] - S[\pi_\omega^{(T)}] = \sum_{l} \lambda_l \big( \mu_\lambda[m_l] - \pi_\omega^{(T)}[m_l] \big) + S[\mu_\lambda] - S[\pi_\omega^{(T)}].$$

From the parameters, λ, we compute a spike train distributed as $\mu_\lambda$ using the Monte Carlo method ([19]). From this spike train, we compute the monomial averages, $\mu_\lambda[m_l]$, and the entropy, $S[\mu_\lambda]$, using the method of Strong et al. ([40]). $\pi_\omega^{(T)}[m_l]$ and $S[\pi_\omega^{(T)}]$ are computed directly on the empirical data set.
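A sketch of this approximation (Python); the entropy values are assumed to come from separate estimators (e.g., a Strong et al.-style direct estimator), and the numbers below are purely illustrative.

```python
import numpy as np

def kl_estimate(lam, mu_avgs, emp_avgs, S_model, S_emp):
    """Approximate d_KL(pi_omega^(T), mu_lambda) as described above.

    lam      : (L,) fitted parameters.
    mu_avgs  : (L,) monomial averages under mu_lambda (from a Monte Carlo raster).
    emp_avgs : (L,) empirical monomial averages.
    S_model, S_emp : entropy estimates of the model raster and of the data.
    """
    return float(lam @ (mu_avgs - emp_avgs) + S_model - S_emp)

# Purely illustrative numbers.
lam = np.array([-1.2, 0.4])
print(kl_estimate(lam, np.array([0.11, 0.03]), np.array([0.10, 0.02]), 0.52, 0.50))
```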

4.4. Performances on Synthetic Data

Here, we test the method on synthetic data, where the shape of the sought potential is known: only the λls have to be estimated. Experiments were designed according to the following steps:

  • We start from a potential $H_{\lambda^*} = \sum_{l\in\mathcal{L}} \lambda^*_l\, m_l$. The goal is to estimate the coefficient values, $\lambda^*_l$, knowing the set, $\mathcal{L}$, of monomials spanning the potential.

  • We generate a synthetic spike train, $\omega_s$, distributed according to the Gibbs distribution of $H_{\lambda^*}$.

  • We take a potential $H_\lambda = \sum_{l\in\mathcal{L}} \lambda_l\, m_l$ with random initial coefficients, $\lambda_l$. Then, we fit the parameters, $\lambda_l$, to the synthetic spike train, $\omega_s$.

  • We evaluate the goodness of fit.

For the last step (goodness of fit), we have used three criteria. The first one simply consists of computing the $L^1$ error $d_1 = \frac{1}{L} \sum_{l=1}^{L} |\lambda_l^* - \lambda_l^{(est)}|$, where $\lambda_l^{(est)}$ is the final estimated value. $d_1$ is then averaged over 10 random potentials. Figure 4 shows the committed error in the case of sparse and dense potentials. The method showed a good performance, both in the dense and the sparse case, for large N × R ∼ 60.

The main advantage of this criterion is that it provides an exact estimation of the error made on the coefficient estimation. Its drawback is that we have to know the shape of the potential that generated the raster: this is not the case anymore for real neural network data. We therefore used a second criterion: confidence plots. For each spike block, $\omega_0^D$, appearing in the raster $\omega_s$, we draw a point in a two-dimensional diagram with, on the abscissa, the observed empirical probability, $\pi_{\omega_s}^{(T)}[\omega_0^D]$, and, on the ordinate, the predicted probability, $\mu_\lambda[\omega_0^D]$. Ideally, all points should align on the diagonal y = x (equality line). However, since the raster is finite, there are finite-size fluctuations ruled by the central limit theorem. For a block, $\omega_0^D$, generated by a Gibbs distribution, $\mu_\lambda$, and having an exact probability, $\mu_\lambda[\omega_0^D]$, the empirical probability, $\pi_{\omega_s}^{(T)}[\omega_0^D]$, is a Gaussian random variable with mean $\mu_\lambda[\omega_0^D]$ and mean-square deviation $\sigma = \sqrt{\frac{\mu_\lambda[\omega_0^D]\,(1 - \mu_\lambda[\omega_0^D])}{T}}$. The probability that $\pi_{\omega_s}^{(T)}[\omega_0^D] \in \big[\mu_\lambda[\omega_0^D] - 3\sigma,\ \mu_\lambda[\omega_0^D] + 3\sigma\big]$ is therefore about 99.6%. This interval is represented by confidence lines spreading around the diagonal. As a third criterion, we have used the Kullback-Leibler divergence (55).
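A sketch of how the points of a confidence plot and the ±3σ band can be computed (Python); block probabilities are estimated here by simple counting of range-R blocks in the data raster and in a model raster, which is an illustrative choice.

```python
import numpy as np
from collections import Counter

def block_probabilities(raster, R):
    """Empirical probabilities of the spike blocks of range R occurring in a raster."""
    N, T = raster.shape
    blocks = [raster[:, n:n + R].tobytes() for n in range(T - R + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return {b: c / total for b, c in counts.items()}

def confidence_points(data_raster, model_raster, R, T):
    """(observed, predicted, 3*sigma) triples for each block seen in the data raster."""
    p_data = block_probabilities(data_raster, R)
    p_model = block_probabilities(model_raster, R)
    points = []
    for b, p_obs in p_data.items():
        p_pred = p_model.get(b, 0.0)
        sigma = np.sqrt(p_pred * (1.0 - p_pred) / T)
        points.append((p_obs, p_pred, 3.0 * sigma))
    return points

rng = np.random.default_rng(4)
data = (rng.random((5, 20000)) < 0.1).astype(np.uint8)
model = (rng.random((5, 20000)) < 0.1).astype(np.uint8)
print(confidence_points(data, model, R=2, T=20000)[:3])
```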

We have plotted two examples in Figures 5 and 6 for sparse data types:

  • Spatial case, 40 neurons (NR = 40): Ising model (3). Figure 5.

  • Spatio-temporal case, 40 neurons, R = 2 (NR = 80): pairwise model with delays (5). Figure 6.

4.5. The Performance on Real Data

Here, we show the inference of the MaxEnt distribution on real spike trains. We analyzed a data set of 20 and 40 neurons with spatial and spatio-temporal constraints (data courtesy of M.J. Berry and O. Marre; 40 is the maximal number of neurons in this data set). Data are binned at 20 ms. We show the confidence plots and an example of convergence curves using the Hellinger distance. The goal here is to check the goodness of fit, not only for spatial patterns (as done in [9–12]), but also for spatio-temporal patterns.

Figure 7 shows the evolution of the Hellinger distance during the parameter update both in the parallel and sequential update process.

After estimating the parameters of an Ising model and of a pairwise model of range R = 2 on a set of 20 neurons, we evaluate the confidence plots. Figures 8 and 9 show, respectively, the confidence plots for patterns of Ranges 1, 2 and 3 after fitting with an Ising model and with the pairwise model of range R = 2. Our results on 20 neurons confirm the observations made in [16] for N = 5, R = 2: a pairwise model with memory performs considerably better than an Ising model at explaining spatio-temporal patterns.

We then made the same analysis for 40 neurons. Figures 10 and 11 show, respectively, the confidence plots for patterns of Ranges 1, 2 and 3 after fitting with an Ising model and with the pairwise model of range R = 2. In this case, we were not able to obtain a good convergence for N = 40, R = 2. This is presumably due to the insufficient length of the data set, which does not allow us to accurately estimate the probabilities of some monomials. This aspect is discussed in the next section.

5. Discussion and Conclusion

The method shows better performance for synthetic data than for real data, although we did not make extensive studies of real data. The main reason, we believe, is that in the second case, we do not know the form of the potential. As a consequence, we stick to existing canonical forms of potentials, e.g., Ising and pairwise. The main problem with this approach is that the number of parameters to estimate grows dramatically with N and R. The increase is moderate for the Ising model (N rates + $\frac{N(N-1)}{2}$ symmetric pairwise couplings), but it becomes prohibitively large even for pairwise models of range R. In contrast, our analysis of synthetic data used a relatively small number of parameters to fit.

The large number of parameters has two drawbacks: the increase in computation time and errors in the estimation. Let us comment on the second problem. It is not intrinsic to our method, nor is it intrinsic to MaxEnt; this is a well-known problem, which already arises when doing linear regression analysis. Increasing the number of parameters may eventually lead to catastrophic estimations, where the addition of a degree of freedom can seriously hinder the resolution.

In the case of MaxEnt, the situation can be described as follows. We generate a finite raster, $\omega_0^T$, from a known distribution, $\mu_{\lambda^*}$, with a potential of the form (2). Denote by $\mu_{\lambda^*}[m]$ the vector with entries $\mu_{\lambda^*}[m_l]$ and by $\pi_\omega^{(T)}[m]$ the vector with entries $\pi_\omega^{(T)}[m_l]$. From (19), we have $\mu_{\lambda^*}[m] = \nabla_{\lambda^*}\wp$. This exact solution is obtained when the Gibbs distribution, $\mu_{\lambda^*}$, can be exactly sampled, namely, for an infinite raster. For a finite raster, if T is large enough to apply the central limit theorem, the empirical distribution, $\pi_\omega^{(T)}[m]$, is Gaussian with mean $\mu_{\lambda^*}[m]$ and covariance $\frac{1}{T}\chi$ given by (51). We have, therefore, $\pi_\omega^{(T)}[m] = \mu_{\lambda^*}[m] + \beta$, where β is a centered Gaussian with covariance $\frac{1}{T}\chi$. Solving (19), where the exact probability, $\mu_{\lambda^*}$, is replaced by the empirical one, $\pi_\omega^{(T)}$, one obtains an approximate solution, λ, of $\lambda^*$, with $\lambda = \lambda^* + \varepsilon$, where $\nabla_\lambda \wp = \pi_\omega^{(T)}[m]$. Therefore, $\nabla_\lambda \wp = \mu_{\lambda^*}[m] + \beta = \nabla_{\lambda^*+\varepsilon}\wp = \nabla_{\lambda^*}\wp + \varepsilon\,\chi + O(\|\varepsilon\|^2)$. Hence, $\varepsilon = \chi^{-1}\beta$. χ is invertible, since ℘ is convex.

The fluctuations of the estimated solution, λ, around the exact solution, $\lambda^*$, are therefore Gaussian and centered, with covariance $E[\varepsilon\,\tilde\varepsilon] = E[\chi^{-1}\,\beta\,\tilde\beta\,\tilde\chi^{-1}]$. Since χ is symmetric, we have $E[\varepsilon\,\tilde\varepsilon] = \chi^{-1}\, E[\beta\,\tilde\beta]\, \chi^{-1} = \frac{1}{T}\chi^{-1}$. We therefore arrive at the conclusion that the fluctuations on the estimated coefficients, λ, are highly constrained by the convexity of the pressure, as expected. Mathematically, everything goes nicely, since ℘ is convex. However, it may happen that ℘ is quite flat in some directions/monomials. Then, small errors will be largely amplified. Therefore, when considering potentials of the form (2), it is expected that some terms (monomials) not only are irrelevant, but also dramatically deteriorate the estimation problem, introducing almost-zero eigenvalues in χ. This is presumably what happened in Figure 11, where we were not able to obtain a good convergence for the monomial averages.

At this stage, the main question is therefore: can we have an idea of the potential shape from data before fitting the parameters? This question is not only related to the goodness of fit, but it is also a question of concept. Is it useful to represent a pairwise distribution for 40 neurons with nearly 2,000 parameters? The idea would then be to filter out irrelevant monomials. For that, a feature selection method is useful and should complement this work. There are many directions we can take in favor of feature selection: for instance, selecting features via a threshold ([41,42]), using a χ2 method ([43]), as well as the incremental feature selection algorithm ([44,45]). Other methods based on periodic orbit sampling ([46]) and information geometry ([47,48]) are under current investigation.

We have presented a method to fit the parameters of the MaxEnt distribution with spatio-temporal constraints. In the process of exploring the dynamics of neural data, we hypothesize the model, fit it and, finally, judge the quality of the suggested model. Hence, this work is positioned as an important intermediate step in neural coding using the MaxEnt framework, opening the door to analyzing the dynamics of large networks, without being limited to spatial and/or traditional MaxEnt models.

Finally, we would like to highlight two points that should be investigated in further studies:

  • The effect of binning. In many experimental studies, data are binned. Basically, binning is used in order to account for spike timing sensitivity, which is not the same for all biological neural networks. For instance, [9] used 20 ms binning for retinal spike trains. In the present paper, we have used the same binning as these authors, but we have not considered the effect of binning on our statistical estimations. This is certainly a matter for further investigation, especially because, to the best of our knowledge, no systematic study of binning effects on statistics has been done. In particular, three distinct dimensions should be considered:

    The statistical dimension: how does binning bias the statistics? Could binning introduce spurious effects, such as fallacious long-range correlations?

    The computational dimension: how does the performance of the algorithm change with the bin size?

    The biological dimension: cross-correlograms are not the same in all brain areas. Therefore, the optimal bin size is expected to depend on the investigated area.

  • Maximum entropy versus other approaches: There are several methods now in use to model spatio-temporal correlations in ensembles of neurons. The generalized linear model (GLM) approach uses maximum likelihood and point processes to assess connectivity (e.g., [10]). Reverse correlation methods can also work well (e.g., [49]). Finally, there are causality metrics, such as Granger causality or transfer entropy ([50]). Some of these methods have been compared in [51], but further investigations would be helpful, starting from synthetic data, where the statistics are under good control. In particular, how does maximum entropy perform compared to these other methods?

Our method allows one to investigate these two questions on numerical grounds, although such an investigation should be complemented by mathematical insights, using the properties of spatio-temporal Gibbs distributions.
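
Regarding the first point, here is a minimal binning sketch (our own illustration; the spike times and duration are made up, while the 20 ms bin width matches the binning used in this paper): a bin is set to 1 if the neuron emits at least one spike inside it. Redoing the fits for several bin widths with such a preprocessing step would be a direct way to probe the statistical and computational dimensions listed above.

```python
import numpy as np

def bin_spike_train(spike_times_s, duration_s, bin_ms=20.0):
    """Turn a list of spike times (in seconds) into a binary binned raster.

    A bin takes the value 1 if at least one spike falls inside it, which is
    the usual convention before fitting MaxEnt models to spike trains.
    """
    n_bins = int(np.ceil(duration_s * 1000.0 / bin_ms))
    counts, _ = np.histogram(spike_times_s,
                             bins=n_bins,
                             range=(0.0, n_bins * bin_ms / 1000.0))
    return (counts > 0).astype(int)

# Example: one neuron, 10 s of activity, 200 random spike times.
rng = np.random.default_rng(2)
spikes = np.sort(rng.uniform(0.0, 10.0, size=200))
raster_20ms = bin_spike_train(spikes, duration_s=10.0, bin_ms=20.0)
print(raster_20ms.size, "bins,", raster_20ms.sum(), "of them active")
```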

6. List of symbols

$\omega_i(n)$: Spike event
$\omega(n)$: Spike pattern
$\omega_{n_1}^{n_2}$: Spike block
$\omega$: Spike train
$T$: Length (in time) of the spike train
$N$: Number of neurons
$R$: Model range
$D$: Model memory ($D = R - 1$)
$m_l(\omega)$: Monomial number $l$
$\mathbf{m}$: Vector of monomials
$L$: Total number of parameters (monomials) in the model
$\lambda_l$: Parameter number $l$
$\lambda$: Parameter vector
Gibbs potential
$Z_\lambda$: Partition function
Entropy
$\wp$: Topological pressure
$\pi_\omega^{(T)}$: Empirical probability measured on the spike train, $\omega$, of length $T$
$\mu_\lambda$: Gibbs density with parameters $\lambda$
Set of invariant probabilities
$\delta_l = \lambda_l' - \lambda_l$: Learning rate, or the value by which we update the parameter $\lambda_l$
$\delta$: Vector of learning rates
$d_{KL}$: Kullback-Leibler divergence
$C_{jk}$: Correlation between two monomials, $j$ and $k$
$\chi$: Hessian matrix (second derivative of the pressure)
$\Delta$: Root sum square of the learning rates
$\beta$: Fluctuations of the monomial averages
$\varepsilon$: Fluctuations of the parameters (relaxation)

Acknowledgments

We thank the reviewers for helpful remarks and constructive criticism. We also warmly acknowledge M.J. Berry and O. Marre for providing us with MEA recordings from the retina, and G. Tkacik, who pointed us to references [17,18] and helped us with the algorithm design. This work was partially supported by the ERC grant NERVI (number 227747), the KEOPS ANR-CONICYT project and the European FP7 projects RENVISION (FP7-600847) and BRAINSCALES (FP7-269921).

Conflicts of Interest

The authors declare no conflicts of interest.

Author Contributions

Real data was provided by M.J. Berry and O. Marre from Princeton University. The authors contributed equally to the presented mathematical and computational framework and to the writing of the paper.

References

  1. Ferrea, E.; Maccione, A.; Medrihan, L.; Nieus, T.; Ghezzi, D.; Baldelli, P.; Benfenati, F.; Berdondini, L. Large-scale, high-resolution electrophysiological imaging of field potentials in brain slices with microelectronic multielectrode arrays. Front. Neural. Circ 2012, 6. [Google Scholar]
  2. Stevenson, I.H.; Kording, K.P. How advances in neural recording affect data analysis. Nat. Neurosci 2011, 14, 139–142. [Google Scholar]
  3. Marre, O.; Amodei, D.; Deshmukh, N.; Sadeghi, K.; Soo, F.; Holy, T.; Berry, M., II. Mapping a Complete Neural Population in the Retina. J. Neurosci 2012, 43, 14859–14873. [Google Scholar]
  4. Hill, D.N.; Mehta, S.B.; Kleinfeld, D. Quality Metrics to Accompany Spike Sorting of Extracellular Signals. J. Neurosci 2011, 31, 8699–8705. [Google Scholar]
  5. Litke, A.M.; Bezayiff, N.; Chichilnisky, E.J.; Cunningham, W.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Grybos, P.; Hottowy, P.; Kachiguine, S.; Kalmar, R.S.; Mathieson, K.; Petrusca, D.; Rahman, M.; Sher, A. What does the eye tell the brain?: Development of a system for the large scale recording of retinal output activity. IEEE Trans. Nucl. Sci 2004, 51, 1434–1440. [Google Scholar]
  6. Quiroga, R.Q.; Nadasdy, Z.; Ben-Shaul, Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput 2004, 16, 1661–1687. [Google Scholar]
  7. Csiszár, I. On the computation of rate-distortion functions (Corresp.). IEEE Trans. Inform. Theory 1974, 20, 122–124. [Google Scholar]
  8. Jaynes, E. Information theory and statistical mechanics. Phys. Rev 1957, 106, 620. [Google Scholar]
  9. Schneidman, E.; Berry, M.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar]
  10. Pillow, J.W.; Shlens, J.; Paninski, L.; Sher, A.; Litke, A.M.; Chichilnisky, E.J.; Simoncelli, E.P. Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature 2008, 454, 995–999. [Google Scholar]
  11. Ganmor, E.; Segev, R.; Schneidman, E. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc. Natl. Acad. Sci. USA 2011, 108, 9679–9684. [Google Scholar]
  12. Ganmor, E.; Segev, R.; Schneidman, E. The architecture of functional interaction networks in the retina. J. Neurosci 2011, 31, 3044–3054. [Google Scholar]
  13. Tkačik, G.; Schneidman, E.; Berry, M.J., II; Bialek, W. Spin glass models for a network of real neurons. arXiv preprint arXiv:0912.5409. 2009. [Google Scholar]
  14. Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J.L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M.I.; Sher, A.; Hottowy, P.; Dabrowski, W.; Litke, A.M.; Beggs, J.M. A Maximum Entropy Model Applied to Spatial and Temporal Correlations from Cortical Networks. In Vitro. J. Neurosci 2008, 28, 505–518. [Google Scholar]
  15. Marre, O.; El Boustani, S.; Frégnac, Y.; Destexhe, A. Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett 2009, 102. [Google Scholar]
  16. Vasquez, J.C.; Marre, O.; Palacios, A.G.; Berry, M.J.; Cessac, B. Gibbs distribution analysis of temporal correlation structure on multicell spike trains from retina ganglion cells. J. Physiol. Paris 2012, 106, 120–127. [Google Scholar]
  17. Dudík, M.; Phillips, S.; Schapire, R. Performance Guarantees for Regularized Maximum Entropy Density Estimation. Proceedings of the 17th Annual Conf. on Comp. Learn Theory, 2004.
  18. Broderick, T.; Dudik, M.; Tkacik, G.; Schapire, R.E.; Bialek, W. Faster solutions of the inverse pairwise Ising problem. arXiv:0712.2437. 2007. [Google Scholar]
  19. Nasser, H.; Marre, O.; Cessac, B. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method. J. Stat. Mech 2013, 2013, P03006. [Google Scholar]
  20. Schaub, M.T.; Schultz, S.R. The Ising decoder: reading out the activity of large neural ensembles. arXiv:1009.1828. 2010. [Google Scholar]
  21. Garibaldi, U.; Penco, M.A. Probability Theory and Physics Between Bernoulli and Laplace: The Contribution of J H. Lambert (1728–1777). Proceeding of the Fifth National Congress on the History of Physics, Rome, 1985; 9, pp. 341–346.
  22. Tkačik, G.; Marre, O.; Mora, T.; Amodei, D.; Berry, M.J., II; Bialek, W. The simplest maximum entropy model for collective behavior in a neural network. J. Stat. Mech 2013, P03011. [Google Scholar]
  23. Jaynes, E.T. Where do we stand on maximum entropy. In The Maximum Entropy Formalism; Levine, D., Tribus, M., Eds.; MIT Press: Cambridge, MA, USA, 1978; pp. 15–118. [Google Scholar]
  24. Jaynes, E.T. The minimum entropy production principle. Ann. Rev. Phys. Chem 1980, 31, 579–601. [Google Scholar]
  25. Jaynes, E. Macroscopic prediction. In Complex Systems - Operational Approaches in Neurobiology, Physics, and Computers; Springer: Berlin, Germany, 1985; pp. 254–269. [Google Scholar]
  26. Otten, M.; Stock, G. Maximum caliber inference of nonequilibrium processes. J. Chem. Phys 2010, 133, 034119. [Google Scholar]
  27. Fernandez, R.; Maillard, G. Chains with complete connections : General theory, uniqueness, loss of memory and mixing properties. J. Stat. Phys 2005, 118, 555–588. [Google Scholar]
  28. Gikhman, I.; Skorokhod, A. The Theory of Stochastic Processes; Springer: Berlin, Germany, 1979. [Google Scholar]
  29. Chazottes, J.; Keller, G. Pressure and Equilibrium States in Ergodic Theory. Isr. J. Math 2008, 131. [Google Scholar]
  30. Ruelle, D. Statistical Mechanics: Rigorous Results; Benjamin: New York, NY, USA, 1969. [Google Scholar]
  31. Keller, G. Equilibrium States in Ergodic Theory; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  32. Ruelle, D. Thermodynamic Formalism; Addison-Wesley: Reading, MA, USA, 1978. [Google Scholar]
  33. Bowen, R. Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms; Lecture Notes in Mathematics, Volume 470; Springer-Verlag: 1975. [Google Scholar]
  34. Georgii, H.O. Gibbs Measures and Phase Transitions (De Gruyter Studies in Mathematics); Springer: Berlin, Germany, 1988. [Google Scholar]
  35. Vasquez, J.C.; Palacios, A.; Marre, O.; Berry, M.J., II; Cessac, B. Gibbs distribution analysis of temporal correlation structure on multicell spike trains from retina ganglion cells. J. Physiol. Paris 2012, 106, 120–127. [Google Scholar]
  36. Collins, M.; Schapire, R.E.; Singer, Y. Logistic Regression, AdaBoost and Bregman Distances. Mach. Lear 2002, 48, 253–285. [Google Scholar]
  37. Mayer, V.; Urbański, M. Thermodynamical formalism and multifractal analysis for meromorphic functions of finite order. Memoir. Am. Math. Soc 2010, 203. [Google Scholar]
  38. Kappen, H.; Rodriguez, F. Boltzmann Machine learning using mean field theory and linear response correction. In NIPS; Kearns, M., Ed.; MIT Press: Cambridge, MA, USA, 1998; Volume 12, pp. 280–286. [Google Scholar]
  39. Event neural assembly Simulation: v3 version, Available online: http://enas.gforge.inria.fr/v3/download.html accessed on 21 April 2014.
  40. Strong, S.; Koberle, R.; de Ruyter van Steveninck, R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Let 1998, 80, 197–200. [Google Scholar]
  41. Rosenfeld, R.; Carbonell, J.; Rudnicky, A. Adaptive Statistical Language Modeling: A Maximum Entropy Approach. In Technical report; School of Computer Science, Carnegie Mellon University, 1994. [Google Scholar]
  42. Koeling, R. Chunking with maximum entropy models. In Proceeding ConLL ’00 Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th conference on Computational Natural Language Learning; Volume 7, pp. 139–141. Association for Computational Linguistics: Stroudsburg, PA, USA, 2000. [Google Scholar]
  43. Chen, S.F.; Rosenfeld, R. Efficient Sampling and Feature Selection in Whole Sentence Maximum Entropy Language Models, 1999. [Google Scholar]
  44. Berger, A.L.; Pietra, S.A.D.; Pietra, V.J.D. A Maximum Entropy approach to Natural Language Processing. Comp. Lang 1996, 22, 39–71. [Google Scholar]
  45. Zhou, Y.; Wu, L. A fast algorithm for feature selection in conditional maximum entropy modeling. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2003), Sapporo, Japan, July 2003; pp. 153–159.
  46. Cessac, B.; Cofre, R. Estimating maximum entropy distributions from periodic orbits in spike trains. research report RR-8329; INRIA, 2013. [Google Scholar]
  47. Nakahara, H.; Amari, S. Information-Geometric Decomposition in Spike Analysis. Adv. Neural Inform. Process. Syst 2001, 253–260. [Google Scholar]
  48. Amari, S. Information geometry on hierarchy of probability distributions. IEEE T. Inf. Theory 2001, 47, 1701–1711. [Google Scholar]
  49. Chichilnisky, E.J. A simple white noise analysis of neuronal light responses. Network Comput. Neural Syst 2001, 12, 199–213. [Google Scholar]
  50. Li, Z.; Li, X. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy. PloS One 2013, 8, e70894. [Google Scholar]
  51. Truccolo, W.; Hochberg, L.R.; Donoghue, J.P. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nature Neurosci 2009, 13, 105–111. [Google Scholar]
Figure 1. The dense family.
Figure 2. The sparse family.
Figure 3. Error on the average $\mu_\lambda[m_l]$ as a function of the perturbation amplitude, $\delta$. First order corresponds to $\varepsilon^{(1)}$ and second order to $\varepsilon^{(2)}$ (see the text). The curves correspond to $N = 5$, $R = 3$, $L = 12$. (Left) The dense case; (right) the sparse case.
Figure 4. The distance between the exact value of the coefficients and the estimated value, averaged over the set of 10 random potentials for $NR = 60$. (a) Dense spike trains; (b) sparse spike trains.
Figure 5. Data were generated with an Ising distribution. After fitting with an Ising model, we show the comparison between observed and predicted probabilities of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities of patterns of Depths 1, 2 and 3, respectively. In the four plots, the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The estimated Kullback-Leibler divergence is 0.0107.
Figure 6. Data were generated with a pairwise distribution of range $R = 2$. After fitting with a pairwise model of range $R = 2$, we show the comparison between observed and predicted probabilities of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities of patterns of Depths 1, 2 and 3, respectively. In the four plots, the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The estimated Kullback-Leibler divergence is 0.0174.
Figure 7. Evolution of the Hellinger distance during the parallel (a) and the sequential (b) update in the case of modeling a real data set with a pairwise model of range $R = 2$. The parallel update provides fast convergence; however, it becomes steady after a hundred iterations. Then, we iterate the sequential algorithm.
Figure 8. A 20-neuron data set binned at 20 ms with an Ising model. After fitting, we show the comparison between observed (in the real spike train) and predicted average values of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities for patterns of Ranges 1, 2 and 3, respectively. In (a), (b), (c) and (d), the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The computation time is equal to 18 hours on a small cluster of 64 processors (around 5 min per iteration). The estimated Kullback-Leibler divergence is 0.307.
Figure 9. A 20-neuron data set binned at 20 ms with a pairwise model of Range 2. After fitting, we show the comparison between observed (in the real spike train) and predicted average values of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities for patterns of Ranges 1, 2 and 3, respectively. In (a), (b), (c) and (d), the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The computation time is equal to 40 hours on a small cluster of 64 processors (around 12 min per iteration). The estimated Kullback-Leibler divergence is 0.281.
Figure 10. A 40-neuron data set binned at 20 ms with an Ising model. After fitting, we show the comparison between observed (in the real spike train) and predicted average values of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities for patterns of Ranges 1, 2 and 3, respectively. In (a), (b), (c) and (d), the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The computation time is equal to three days on a small cluster of 64 processors (around 21 min per iteration). The estimated Kullback-Leibler divergence is 0.930.
Figure 11. A 40-neuron data set binned at 20 ms with a pairwise model of Range 2. After fitting, we show the comparison between observed (in the real spike train) and predicted average values of monomials in (a). (b,c,d) The comparison of predicted and observed probabilities for patterns of Ranges 1, 2 and 3, respectively. In (a), (b), (c) and (d), the x-axis represents the observed probabilities and the y-axis the predicted probabilities. The computation time is equal to seven days on a small cluster of 64 processors (around 47 min per iteration). The estimated Kullback-Leibler divergence is 0.983.
