Article

Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing

1 School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85287, USA
2 Department of Physics, Arizona State University, Tempe, AZ 85287, USA
3 School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
* Author to whom correspondence should be addressed.
Entropy 2014, 16(7), 3889-3902; https://doi.org/10.3390/e16073889
Submission received: 4 March 2014 / Revised: 23 June 2014 / Accepted: 7 July 2014 / Published: 15 July 2014
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)

Abstract

We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information only about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, the network connections and all of the coupling functions, as well as their weights. The operation and efficiency of the method are illustrated using networks of non-identical FitzHugh–Nagumo neurons with randomly distributed coupling weights.

1. Introduction

In this paper, we address the problem of the data-based identification of a subset of chaotic elements embedded in a network of nonlinear oscillators. In particular, given such a network, we assume that time series can be measured from each oscillator. The oscillators, when isolated, are not identical in that their parameters are different, so dynamically, they can be in distinct regimes. For example, all oscillators can be described by differential equations of the same mathematical form, but with different parameters. Consider the situation where only a small subset of the oscillators are chaotic and the remaining oscillators are in dynamical regimes of regular oscillations. Due to mutual couplings among the oscillators, the measured time series from most oscillators would appear random. The challenge is to identify the small subset of originally (“truly”) chaotic oscillators.
The problem of identifying chaotic elements from a network of coupled oscillators arises in biological systems and biomedical applications. For example, consider a network of coupled neurons that exhibit regular oscillations in a normal state. In such a state, the parameters of each isolated neuron are in the regular regime. Under external perturbations or slow environmental influences, the parameters of some neurons can drift into the chaotic regime. When this occurs, the whole network would appear to behave chaotically, which may correspond to a certain disease. The very nature of nonlinearity is such that irregular oscillations at the network level can emerge even if only a few oscillators have gone “bad”. It is thus desirable to be able to pin down the origin of the ill-behaved oscillators—the few chaotic neurons among a large number of healthy ones.
One might attempt to use the traditional approach of time-delayed coordinate embedding to reconstruct the phase space of the underlying dynamical system [1–3] and then to compute the Lyapunov exponents [4,5]. However, since we are dealing with a network of nonlinear oscillators, the phase-space dimension is high, and an estimate of the largest Lyapunov exponent would only indicate whether the whole coupled system is chaotic or nonchaotic, depending on the sign of the estimated exponent. In principle, using time series from any specific oscillator(s) would give qualitatively the same result. Thus, the traditional approach cannot give an answer as to which oscillators are chaotic when isolated.
There have been previous efforts in nonlinear systems identification and parameter estimation for coupled oscillators and spatiotemporal systems, such as the auto-synchronization method [6]. There has also been work on revealing the connection patterns of networks. For example, a methodology was proposed to estimate the network topology controlled by feedback or delayed feedback [7–9]. Network connectivity can be reconstructed from the collective dynamical trajectories using response dynamics as well [10,11]. In addition, the approach of random phase resetting was introduced to reconstruct the details of the network structure [12]. For neuronal systems, there is a statistical method to track structural changes [13,14]. While many of these previous methods require complete or partial information about the dynamical equations of the isolated nodes and their coupling functions, completely data-driven and model-free methods exist. For example, the network structure can be obtained by calculating the causal influences among the time series based on, for example, the Granger causality method [15,16], the transfer-entropy method [17] or the method of inner composition alignment [18]. However, such causality-based methods are unable to reveal information about the dynamical equations of the isolated nodes. There are also regression-based methods [19] for systems identification based on, for example, the least-squares approximation through the Kronecker-product representation [20], which require large amounts of data. (Due to the ℓ1 nature of compressive sensing [21–25], the data requirement in our method can be significantly relaxed.) The unique features of our method are: (1) it is completely data driven; (2) it can give an accurate estimate of all system parameters; (3) it can lead to faithful reconstruction of the full network structure, even for large networks; and (4) it requires a minimal amount of data. While some of these features are shared by previous methods, no single previous method possesses all of them.
Here, we develop a method to address the problem of identifying a subset of ill-behaved chaotic elements from a network of nonlinear oscillators, the majority of them being regular. The basic mathematical framework underlying our method is compressive sensing (CS), a paradigm for high-fidelity signal reconstruction using only sparse data [21–25]. The CS paradigm was originally developed to solve the problem of transmitting extremely large data sets, such as those collected from large-scale sensor arrays. Because of the extremely high dimensionality, direct transmission of such data sets would require a very broad bandwidth. However, there are many applications in which the data sets are sparse. To be concrete, say a data set of N points is represented by an N × 1 vector, X, where N is a very large integer. Then, X being sparse means that most of its entries are zero and only a small number k of entries are non-zero, where k ≪ N. One can use a random matrix Ψ of dimension M × N to obtain an M × 1 vector Y: Y = Ψ · X, where k < M ≪ N. Because the dimension of Y is much lower than that of the original vector X, transmitting Y would require a much smaller bandwidth, provided that X can be reconstructed at the other end of the communication channel. Under the constraint that the vector to be reconstructed is sparse, the feasibility of faithful reconstruction is guaranteed mathematically by the CS paradigm [21–25]. In the past decade, CS has been exploited in a large variety of applications, ranging from optical image processing [26] and reconstruction of nonlinear dynamical and complex systems [27,28] to quantum measurements [29].
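As a concrete illustration of the reconstruction Y = Ψ · X, the ℓ1 minimization at the heart of CS can be posed as a linear program (basis pursuit). The sketch below is illustrative only—it is not the solver used in this paper—and recovers a small sparse vector with scipy's `linprog`; all sizes and the random seed are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(Psi, Y):
    """Basis pursuit: min ||X||_1 subject to Psi @ X = Y, posed as a linear
    program over z = [x; t] with -t <= x <= t and objective sum(t)."""
    M, N = Psi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])       # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Psi, np.zeros((M, N))])  # enforces Psi @ x = Y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=Y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(1)
N, M, k = 50, 20, 3                            # k nonzero entries, k < M << N
X = np.zeros(N)
X[rng.choice(N, k, replace=False)] = rng.normal(size=k)
Psi = rng.normal(size=(M, N))                  # random projection matrix
X_hat = l1_reconstruct(Psi, Psi @ X)
print("max reconstruction error:", np.max(np.abs(X_hat - X)))
```

With a Gaussian random Ψ and a sparsity level this low, the ℓ1 solution coincides with the true sparse vector up to solver tolerance, even though the linear system is underdetermined.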
It has been shown in a series of recent papers [27,28,30–33] that the detailed equations and parameters of nonlinear dynamical systems and complex networks can be accurately reconstructed from short time series using the CS paradigm. Here, we extend this approach to a network of coupled, mixed nonchaotic and chaotic neurons. We demonstrate that, by formulating the reconstruction task as a CS problem, the system equations and coupling functions, as well as all of the parameters, can be obtained accurately from sparse time series. Using the reconstructed system equations and parameters for each and every neuron in the network and setting all of the coupling parameters to zero, a routine calculation of the largest Lyapunov exponent can unequivocally distinguish the chaotic neurons from the nonchaotic ones.
We remark on the generality of our compressive sensing-based method. Insofar as time series from all dynamical variables of the system are available and a suitable mathematical base can be found in which the nodal and coupling functions can be expanded in terms of a sparse number of terms, the whole system, including all individual nodal dynamics, can be accurately reconstructed. With the reconstructed individual nodal equations, chaotic neurons can be identified through routine calculation of the largest Lyapunov exponent.

2. Methods

Figure 1a shows schematically a representative coupled neuronal network. Consider a pair of neurons, one chaotic and another nonchaotic when isolated (say 1 and 10, respectively). When they are placed in a network, due to coupling, the time series collected from both will appear random and qualitatively similar, as shown in Figure 1b,c. It is visually quite difficult to distinguish the time series and to ascertain which node is originally chaotic and which is regular. The difficulty is compounded by the fact that the detailed coupling scheme is not known a priori. Say that the chaotic behavior leads to the undesirable function of the network and is to be suppressed. A viable and efficient method is to apply small pinning controls [34–37] to the relatively few chaotic neurons to drive them into some regular regime. (Here, we assume the network is such that, when all neurons are regular, the collective dynamics is regular. That is, we exclude the uncommon, but not impossible, situation in which a network of coupled regular oscillators exhibits chaotic behavior.) Accurate identification of the chaotic neurons is thus key to implementing the pinning control strategy.
Given a neuronal network, our aim is thus to locate all neurons that are originally chaotic and neurons that are potentially likely to enter into a chaotic regime when they are isolated from the other neurons or when the couplings among the neurons are weakened. Our approach consists of two steps. Firstly, we employ the CS framework to estimate, from measured time series only, the parameters in the FHN equation for each neuron, as well as the network topology and various coupling functions and weights. As will be shown below, this can be done by expanding the nodal dynamical equations and the coupling functions into some suitable mathematical base, as determined by the specific knowledge of the actual neuronal dynamical system, and then casting the problem into that of determining the sparse coefficients associated with various terms in the expansion. The nonlinear system identification problem can then be solved using some standard CS algorithm. Secondly, we set all coupling parameters to zero and analyze the dynamical behaviors of each and every individual neuron by calculating the Lyapunov exponents. Those with positive largest exponent are identified as chaotic.
A typical time series from a neuronal network consists of a sequence of spikes in the time evolution of the cell membrane potential. We demonstrate that our CS-based reconstruction method works well even for such spiky time series. We also analyze the dependence of the reconstruction accuracy on the data amount and show that only limited data are required to achieve high accuracy in reconstruction.

2.1. The FitzHugh–Nagumo (FHN) Model of Neuronal Dynamics

The FHN model, a simplified version of the biophysically detailed Hodgkin–Huxley model [38], is a mathematical paradigm for gaining significant insights into a variety of dynamical behaviors in real neuronal systems [39,40]. For a single, isolated neuron, the corresponding dynamical system is described by the following two-dimensional, nonlinear ordinary differential equations:
$$\frac{dV}{dt} = \frac{1}{\delta}\left[V(V-a)(1-V) - W\right], \qquad \frac{dW}{dt} = V - W - b + S(t),$$
where V is the membrane potential, W is the recovery variable, S(t) is the driving signal (e.g., a periodic signal) and a, b and δ are parameters. The parameter δ is chosen to be small, so that V(t) and W(t) are “fast” and “slow” variables, respectively. Because of the explicitly time-dependent driving signal S(t), Equation (1) is effectively a three-dimensional dynamical system, in which chaos can arise [41]. For a network of FHN neurons, the equations are:
$$\frac{dV_i}{dt} = \frac{1}{\delta}\left[V_i(V_i-a)(1-V_i) - W_i\right] + \sum_{j=1, j\neq i}^{N} c_{ij}(V_j - V_i), \qquad \frac{dW_i}{dt} = V_i - W_i - b + S(t),$$
where c_ij is the coupling strength (weight) between the i-th and the j-th neurons (nodes). For c_ij = c_ji, the interactions between any pair of neurons are symmetric, leading to a symmetric adjacency matrix for the network. For c_ij ≠ c_ji, the network is asymmetrically weighted.
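To make the model concrete, the following sketch integrates the coupled FHN equations with a fixed-step fourth-order Runge–Kutta scheme (the integrator used in Section 3). The network here is a small, randomly generated example rather than the one studied in the paper; the parameter values r, ω0, δ and b follow the Results section, while the "regular" value a = 0.25 is an illustrative assumption.

```python
import numpy as np

def fhn_network_rhs(t, y, a, C, r=0.32, omega0=15.0, delta=0.005, b=0.15):
    """Right-hand side of the coupled FHN network; y = [V_1..V_N, W_1..W_N],
    a holds the per-neuron parameter a, and C is the coupling-weight matrix."""
    N = len(a)
    V, W = y[:N], y[N:]
    coupling = C @ V - C.sum(axis=1) * V      # sum_j c_ij (V_j - V_i)
    dV = (V * (V - a) * (1.0 - V) - W) / delta + coupling
    dW = V - W - b + r * np.sin(omega0 * t)
    return np.concatenate([dV, dW])

def rk4(f, y0, t0, nsteps, h, **kw):
    """Fixed-step fourth-order Runge-Kutta integration; returns all states."""
    y, out = y0.copy(), [y0.copy()]
    for n in range(nsteps):
        t = t0 + n * h
        k1 = f(t, y, **kw)
        k2 = f(t + h / 2, y + h / 2 * k1, **kw)
        k3 = f(t + h / 2, y + h / 2 * k2, **kw)
        k4 = f(t + h, y + h * k3, **kw)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

rng = np.random.default_rng(0)
N = 5
a = np.full(N, 0.25)              # assumed regular regime (illustrative value)
a[0] = 0.42                       # neuron 0 is chaotic when isolated (Section 3)
C = rng.uniform(0.3, 0.4, (N, N)) * (rng.random((N, N)) < 0.3)
np.fill_diagonal(C, 0.0)          # no self-coupling
y0 = rng.uniform(0.0, 0.1, 2 * N)
traj = rk4(fhn_network_rhs, y0, 0.0, 10000, 1e-4, a=a, C=C)
print(traj.shape)                 # (10001, 10): one state per step plus y0
```

Because of the small δ, the V equations are stiff; the step size h = 10⁻⁴ used in Section 3 keeps the explicit RK4 scheme stable here.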

2.2. Formulation of Reconstruction Problem in the CS Framework

The problem of CS can be stated as follows. Given a low-dimensional measurement vector Y ∈ ℝ^M, one seeks to reconstruct the much higher-dimensional vector X ∈ ℝ^N according to:
$$Y = \Psi \cdot X,$$
where N ≫ M and Ψ is an M × N projection matrix. A sufficiently sparse vector X can be reconstructed by solving the following convex optimization problem [21–25]:
$$\min \|X\|_1 \quad \text{s.t.} \quad Y = \Psi \cdot X,$$
where ‖X‖₁ = ∑_{i=1}^{N} |X_i| is the ℓ1 norm of the vector X.
Equation (2) describes a non-autonomous network of N coupled oscillators driven by the external signal S(t). In general, for an isolated oscillator, the dynamics can be written as:
$$\dot{x}_i(t) = F_i[x_i(t)] + S_i(t),$$
where x_i ∈ ℝ^m is an m-dimensional dynamical variable and S_i(t) denotes the external driving. The network equations can then be written as:
$$\dot{x}_i = F_i(x_i) + \sum_{j=1, j\neq i}^{N} W_{ij}\left[H(x_j) - H(x_i)\right] + S_i(t),$$
where W_ij ∈ ℝ^{m×m} is the weighted coupling matrix between node i and node j and H is the coupling function. Our goal is to reconstruct the nodal velocity fields F_i and all of the coupling matrices W_ij using the time series x_i(t) and the given driving signal S(t). First, we group all of the terms directly associated with node i into F′_i(x_i) by defining:
$$F'_i(x_i) \equiv F_i(x_i) - \left(\sum_{j=1, j\neq i}^{N} W_{ij}\right) H(x_i).$$
We have:
$$\dot{x}_i = F'_i(x_i) + \sum_{j=1, j\neq i}^{N} W_{ij} H(x_j) + S_i(t).$$
Then, we choose a suitable base and expand F′_i(x_i) into the following form:
$$F'_i(x_i) = \sum_{\gamma} \tilde{a}_i^{(\gamma)} \tilde{g}_i^{(\gamma)}(x_i),$$
where the g̃_i^(γ)(x_i) are a set of orthogonal and complete base functions, chosen such that the coefficients ã_i^(γ) are sparse. While the coupling function H(x_i), if it is nonlinear, can be expanded in a similar manner, for notational convenience we assume that it is linear: H(x_i) = x_i. We then have:
$$\dot{x}_i = \sum_{\gamma} \tilde{a}_i^{(\gamma)} \tilde{g}_i^{(\gamma)}(x_i) + \sum_{j=1, j\neq i}^{N} W_{ij} x_j + S_i(t),$$
where all of the coefficients ã_i^(γ) and W_ij are to be determined from the time series of x_i via CS. Specifically, the coefficients ã_i^(γ) determine the nodal dynamics, and the weighted matrices W_ij give the full topology and coupling strengths of the entire network.
Suppose we have measurements of all of the state variables x_i(t) at M different values of t, and assume further that, for each value of t, the values of the state variables at a slightly later time t + δt are also available, where δt ≪ Δt (the interval between successive measurements), so that the derivative vector ẋ_i can be estimated at each time instant. Equation (9) for all M time instants can then be written in the following matrix form:
$$G_i = \begin{pmatrix} \tilde{g}_i(t_1) & x_1(t_1) & \cdots & x_k(t_1) & \cdots & x_N(t_1) \\ \tilde{g}_i(t_2) & x_1(t_2) & \cdots & x_k(t_2) & \cdots & x_N(t_2) \\ \vdots & \vdots & & \vdots & & \vdots \\ \tilde{g}_i(t_M) & x_1(t_M) & \cdots & x_k(t_M) & \cdots & x_N(t_M) \end{pmatrix},$$
where the index k in x_k(t) runs from 1 to N with k ≠ i, and each row of the matrix is determined by the available time series at one instant of time. The derivatives at different times can be written in vector form as Ẋ_i = [ẋ_i(t_1), ⋯, ẋ_i(t_M)]^T. The coefficients from the functional expansion and the weights associated with all links in the network, which are to be determined, can be combined concisely into a vector a_i, as follows:
$$a_i = \left[\tilde{a}_i, W_{1,i}, \cdots, W_{i-1,i}, W_{i+1,i}, \cdots, W_{N,i}\right]^T,$$
where [·]^T denotes the transpose. For a properly chosen expansion base and a general complex network whose connections are typically sparse, the vector a_i to be determined is sparse as well. Finally, Equation (9) can be written in the standard CS form as:
$$\dot{X}_i = G_i \cdot a_i + S_i(t),$$
a linear equation in which the dimension of the unknown coefficient vector a_i can be much larger than that of Ẋ_i, and the measurement matrix G_i has many more columns than rows. In the conventional sense, this equation is ill defined; but since a_i is sparse, insofar as its number of non-zero coefficients is smaller than the dimension of Ẋ_i, the vector a_i can be uniquely and efficiently determined by CS [21–25].
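As an illustration of how the measurement matrix G_i is assembled from data, the sketch below builds its rows from a power-series basis in (V_i, W_i) together with the membrane potentials of the other nodes, assuming the linear coupling H(x) = x. The basis order, array shapes and random data are illustrative choices, not those of the paper.

```python
import numpy as np

def basis_row(Vi, Wi, order=2):
    """Monomials V^p W^q with p + q <= order: the expansion terms g~_i."""
    return np.array([Vi ** p * Wi ** q
                     for p in range(order + 1)
                     for q in range(order + 1 - p)])

def build_G(V, W, i, order=2):
    """V, W: arrays of shape (M, N) holding the two variables of all N nodes
    at M time instants. Row t of G_i is [g~_i(t), V_j(t) for j != i]."""
    M, N = V.shape
    others = [j for j in range(N) if j != i]
    rows = [np.concatenate([basis_row(V[t, i], W[t, i], order), V[t, others]])
            for t in range(M)]
    return np.array(rows)

rng = np.random.default_rng(2)
M, N = 8, 4
V, W = rng.normal(size=(M, N)), rng.normal(size=(M, N))
G = build_G(V, W, i=0)
print(G.shape)   # (8, 9): 6 monomials of order <= 2 plus N - 1 = 3 couplings
```

The unknown vector a_i then stacks the expansion coefficients followed by the incoming coupling weights, in the same column order as the rows built here.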

3. Results

We consider the FHN model with sinusoidal driving: S(t) = r sin(ω0t). The model parameters are r = 0.32, ω0 = 15.0, δ = 0.005 and b = 0.15. For a = 0.42, an individual neuron exhibits chaos. The time series are generated by the fourth-order Runge–Kutta method with step size h = 10⁻⁴. We sample three consecutive measurements at time intervals τ = 0.05 apart and then use a standard two-point formula to estimate the derivatives. A representative chaotic time series and the corresponding dynamical trajectory are shown in Figure 2.
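The two-point derivative estimate mentioned above can be sketched as follows; the test signal sin(t) is an arbitrary stand-in for a measured variable, and the error of the forward difference is of order δt.

```python
import numpy as np

# Two samples separated by a small dt, with dt much smaller than the sampling
# interval tau, give a forward-difference estimate of the derivative.
dt = 1e-4
t = np.arange(0.0, 1.0, 0.05)                 # sampling interval tau = 0.05
x, x_later = np.sin(t), np.sin(t + dt)        # stand-in for a measured variable
xdot = (x_later - x) / dt                     # two-point derivative estimate
print("max abs error vs cos(t):", np.max(np.abs(xdot - np.cos(t))))
```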
We first present the reconstruction result for an isolated neuron, obtained by setting all coupling terms in Equation (10) to zero. The vector a_i to be determined then contains only the unknown parameters associated with a single neuron. We choose a power series of order four as the expansion base, so that there are 17 unknown coefficients to be determined. We use 12 data points generated from a random starting point. The results of the reconstruction are shown in Figure 3a,b for the variables V and W, respectively. The last two coefficients associated with each variable represent the strength of the driving signal. Since only the variable W receives the sinusoidal input, the last coefficient is nonzero only for W. By comparing the positions of the nonzero terms with our previously assumed vector form, g_i(t), we can fully reconstruct the dynamical equations of any isolated neuron. In particular, we see from Figure 3a,b that all of the estimated coefficients agree with their respective true values. Figure 3c shows how the estimated coefficients converge to the true values as the number of data points is increased. We see that, with slightly more than 10 data points, we can already faithfully reconstruct all of the parameters.
Next, we consider the network of coupled FHN neurons as schematically shown in Figure 1a, where the coupling weights among various pairs of nodes are uniformly distributed in the interval [0.3, 0.4]. The network is random with connection probability p = 0.04. From time series, we construct the CS matrix for each variable of all nodes. Since the couplings occur among the variables V of different neurons, the strengths of all incoming links can be found in the unknown coefficients associated with different V variables. Extracting all coupling terms from the estimated coefficients, we obtain all off-diagonal terms in the weighted adjacency matrix.
To assess the reconstruction accuracy, we define Enz as the average normalized difference between the non-zero terms in the estimated coefficients and the real values:
$$E_{nz} = \frac{1}{M_{nz}} \sum_{k=1}^{M_{nz}} \left| \frac{c'_k - c_k}{c_k} \right|,$$
where M_nz is the number of non-zero terms in the actual coefficient vector, and c′_k and c_k are the k-th nonzero terms of the estimated and true coefficients, respectively. For convenience, we define R_m as the number of data points normalized by the total number of unknown coefficients. Figure 4 shows the reconstructed adjacency matrix as compared with the real one for R_m = 0.7. We see that our method predicts all links correctly, in spite of small errors in the predicted weight values. The errors are mainly due to the fact that the system equations contain large coefficients while the coupling weights are small.
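The error measure E_nz can be computed directly from the estimated and true coefficient vectors; the numbers below are made-up illustrations, not values from the paper.

```python
import numpy as np

def e_nz(c_est, c_true):
    """Average relative deviation over the nonzero true coefficients."""
    nz = c_true != 0
    return np.mean(np.abs(c_est[nz] - c_true[nz]) / np.abs(c_true[nz]))

c_true = np.array([2.0, 0.0, -1.0, 0.5])      # made-up true coefficients
c_est = np.array([2.1, 0.0, -0.9, 0.5])       # made-up estimates
print(e_nz(c_est, c_true))                    # (0.05 + 0.1 + 0.0)/3 ≈ 0.05
```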
Using the weighted adjacency matrix, we can identify the coupling terms in the vector function F′_i(x_i), so as to extract the terms associated with the isolated nodal velocity field F_i. We can then determine the value of the parameter a and calculate the largest Lyapunov exponent for each individual neuron. The results are shown in Figure 5a,b. We see that, for this example, Neuron 1 has a positive largest exponent, while the largest exponents of all of the others are negative, so Neuron 1 is identified as the only chaotic neuron in the network.
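The final identification step—computing the largest Lyapunov exponent of each neuron with the couplings set to zero—can be sketched with the standard two-trajectory renormalization (Benettin-type) method. The paper does not specify which algorithm it uses, so this is one common choice; the parameter values follow Section 3, and the transient length, renormalization interval and initial separation are illustrative.

```python
import numpy as np

def fhn_rhs(t, y, a, r=0.32, omega0=15.0, delta=0.005, b=0.15):
    """Isolated driven FHN neuron (all couplings set to zero)."""
    V, W = y
    return np.array([(V * (V - a) * (1.0 - V) - W) / delta,
                     V - W - b + r * np.sin(omega0 * t)])

def rk4_step(f, t, y, h, **kw):
    k1 = f(t, y, **kw)
    k2 = f(t + h / 2, y + h / 2 * k1, **kw)
    k3 = f(t + h / 2, y + h / 2 * k2, **kw)
    k4 = f(t + h, y + h * k3, **kw)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(a, h=1e-4, n_renorm=1000, steps_per=50, d0=1e-8):
    """Average exponential growth rate of a small separation between a
    reference trajectory and a perturbed companion, with periodic rescaling."""
    y = np.array([0.1, 0.1])
    t = 0.0
    for _ in range(5000):                  # discard the transient
        y = rk4_step(fhn_rhs, t, y, h, a=a)
        t += h
    z = y + np.array([d0, 0.0])            # perturbed companion trajectory
    log_sum = 0.0
    for _ in range(n_renorm):
        for _ in range(steps_per):
            y = rk4_step(fhn_rhs, t, y, h, a=a)
            z = rk4_step(fhn_rhs, t, z, h, a=a)
            t += h
        d = np.linalg.norm(z - y)
        log_sum += np.log(d / d0)
        z = y + (z - y) * (d0 / d)         # rescale the separation back to d0
    return log_sum / (n_renorm * steps_per * h)

lam = largest_lyapunov(a=0.42)             # a = 0.42: chaotic per Section 3
print(f"estimated largest Lyapunov exponent: {lam:.3f}")
```

A positive estimate flags the neuron as chaotic; in practice the averaging time should be taken much longer than in this short sketch for a converged value.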
Next, we discuss the relationship between the reconstruction error and the data requirement. As shown in Figure 6, for different network sizes N, the reconstruction error decreases with R_m. For R_m larger than a threshold, the normalized error E_nz is small: the threshold is about 0.6 for N = 40 and about 0.5 for N = 60. This is because, for a fixed connection probability, a larger network has sparser connections relative to the number of unknowns, requiring a smaller value of R_m for accurate reconstruction.
Finally, we study the performance of our method with respect to systematic variation of the network size and edge density. As with any method, larger networks require more computation. We study networks of sizes up to N = 100 nodes. Figure 7a shows the normalized error associated with the nonzero terms, E_nz, for different network sizes N and normalized data amounts R_m. For a given network size, similar to Figure 6, E_nz gradually decreases to a certain low level as the relative data amount R_m is increased. We further observe that smaller values of R_m are required to reconstruct larger networks with the same connection probability P. Note that R_m is the relative data amount defined with respect to the number of unknown coefficients, so for larger networks, the absolute data amount required actually increases. In Figure 7b, we show the contour plot of the values of E_nz in the parameter plane (R_m, P) for a fixed network size (N = 60). We see that, for a fixed value of R_m, as P is increased, the error E_nz also increases, which is anticipated, as denser networks lead to a denser projection matrix in compressive sensing.

4. Conclusions

We develop a completely data-driven method to detect chaotic elements embedded in a network of nonlinear oscillators, where such elements are assumed to be relatively few. From a biomedical perspective, the chaotic elements can be the source of certain diseases, and their accurate identification is desirable. Despite being few in number, the chaotic oscillators can cause the time series from the other, originally regular oscillators to appear random, due to the network interactions among the oscillators. The standard method of nonlinear time-series analysis, delay-coordinate embedding, cannot be used to identify the local chaotic elements, because it yields information only about the global dynamics. For example, one can attempt to estimate the largest Lyapunov exponent using time series either from a chaotic oscillator or from an originally regular oscillator, and the embedding method would yield qualitatively or even quantitatively similar results. Our compressive sensing-based method, however, overcomes this difficulty by generating an accurate estimate of all of the system equations, which include the local dynamical equations of each individual node and all of the coupling functions. Separating the coupling functions from the local velocity fields, we obtain the original dynamical equations for each individual oscillator, enabling efficient calculation of the Lyapunov exponents for all oscillators and, consequently, accurate identification of the chaotic ones. We illustrate this methodology using model networks of FHN neurons. One key virtue of compressive sensing, namely its low data requirement, enables us to accomplish the task of identifying chaos with short time series. Our method is generally applicable to any nonlinear dynamical network, insofar as time series from the oscillators are available.
Compared with our previous works on compressive sensing-based nonlinear system identification and reverse engineering of complex networks [27,28,30–33], the new technical features of the present work are the following. Firstly, we demonstrate that compressive sensing-based system identification is effective for the spiky time series that are typical of neuronal networks. Secondly, local velocity fields and non-uniform weights of node-to-node interactions can be reconstructed accurately for neuronal networks with both fast and slow variables in the presence of external driving. Thirdly, the method works regardless of the ratio between the numbers of originally chaotic and nonchaotic oscillators. The great flexibility, the extremely low data requirement and the high accuracy make our method appealing for various problems arising in nonlinear system identification, especially in biology and biomedicine.
There are a number of limitations to our method. For example, for any accessible node in the network, time series of all dynamical variables are required. If information from one node or some of the nodes in the network is inaccessible, or “hidden” from the outside world, it is not feasible to recover the nodal dynamical system of such nodes and their neighbors [31,33]. The “hidden dimensions” problem, in which some dynamical variables are not given, is another obstacle to realistic applications. Our compressive sensing-based method also requires reasonable knowledge about the underlying complex system, so that a suitable mathematical base can be identified for expansions of the various nodal and coupling functions. Further efforts are certainly needed.

Acknowledgments

This work was supported by AFOSR (Air Force Office of Scientific Research) under Grant No. FA9550-10-1-0083 and by NSF (National Science Foundation) under Grant No. CDI-1026710.

Author Contributions

Ri-Qi Su, Ying-Cheng Lai and Xiao Wang designed the research project. Ri-Qi Su performed the computations. Ri-Qi Su and Ying-Cheng Lai analyzed the results. Ying-Cheng Lai, Ri-Qi Su and Xiao Wang wrote the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Takens, F. Detecting strange attractors in fluid turbulence. In Dynamical Systems and Turbulence; Rand, D., Young, L.S., Eds.; Springer-Verlag: Berlin, Germany, 1981; Volume 898, pp. 366–381.
  2. Packard, N.H.; Crutchfield, J.P.; Farmer, J.D.; Shaw, R.S. Geometry from a time series. Phys. Rev. Lett. 1980, 45, 712–716.
  3. Kantz, H.; Schreiber, T. Nonlinear Time Series Analysis, 1st ed.; Cambridge University Press: Cambridge, UK, 1997.
  4. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov exponents from a time series. Physica D 1985, 16, 285–317.
  5. Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D.; Ciliberto, S. Liapunov exponents from time series. Phys. Rev. A 1986, 34, 4971–4979.
  6. Parlitz, U. Estimating model parameters from time series by autosynchronization. Phys. Rev. Lett. 1996, 76, 1232.
  7. Yu, D.; Righero, M.; Kocarev, L. Estimating topology of networks. Phys. Rev. Lett. 2006, 97, 188701.
  8. Tang, W.K.S.; Yu, M.; Kocarev, L. Identification and monitoring of biological neural network. In Proceedings of the IEEE International Symposium on Circuits and Systems (IEEE ISCAS 2007), New Orleans, LA, USA, 27–30 May 2007; pp. 2646–2649.
  9. Yu, D.; Parlitz, U. Inferring network connectivity by delayed feedback control. PLoS One 2011, 6, e24333.
  10. Timme, M. Revealing network connectivity from response dynamics. Phys. Rev. Lett. 2007, 98, 224101.
  11. Shandilya, S.G.; Timme, M. Inferring network topology from complex dynamics. New J. Phys. 2011, 13, 013004.
  12. Levnajić, Z.; Pikovsky, A. Network reconstruction from random phase resetting. Phys. Rev. Lett. 2011, 107, 034101.
  13. Berry, T.; Hamilton, F.; Peixoto, N.; Sauer, T. Detecting connectivity changes in neuronal networks. J. Neurosci. Methods 2012, 209, 388–397.
  14. Hamilton, F.; Berry, T.; Peixoto, N.; Sauer, T. Real-time tracking of neuronal network structure using data assimilation. Phys. Rev. E 2013, 88, 052715.
  15. Brovelli, A.; Ding, M.; Ledberg, A.; Chen, Y.; Nakamura, R.; Bressler, S.L. Beta oscillations in a large-scale sensorimotor cortical network: Directional influences revealed by Granger causality. Proc. Natl. Acad. Sci. USA 2004, 101, 9849–9854.
  16. Zhou, D.; Xiao, Y.; Zhang, Y.; Xu, Z.; Cai, D. Causal and structural connectivity of pulse-coupled nonlinear networks. Phys. Rev. Lett. 2013, 111, 054102.
  17. Stetter, O.; Battaglia, D.; Soriano, J.; Geisel, T. Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Comput. Biol. 2012, 8, e1002653.
  18. Hempel, S.; Koseska, A.; Kurths, J.; Nikoloski, Z. Inner composition alignment for inferring directed networks from short time series. Phys. Rev. Lett. 2011, 107, 054101.
  19. Makarov, V.A.; Panetsos, F.; Feo, O. A method for determining neural connectivity and inferring the underlying network dynamics using extracellular spike recordings. J. Neurosci. Methods 2005, 144, 265–279.
  20. Yao, C.; Bollt, E.M. Modeling and nonlinear parameter estimation with Kronecker product representation for coupled oscillators and spatiotemporal systems. Physica D 2007, 227, 78–99.
  21. Candès, E.; Romberg, J.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2005, 59, 1207–1223.
  22. Candès, E.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  23. Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  24. Baraniuk, R.G. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–121.
  25. Candès, E.; Wakin, M. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
  26. Willett, R.; Marcia, R.; Nichols, J. Compressed sensing for practical optical imaging systems: A tutorial. Opt. Eng. 2011, 50, 072601.
  27. Wang, W.X.; Yang, R.; Lai, Y.C.; Kovanis, V.; Grebogi, C. Predicting catastrophes in nonlinear dynamical systems by compressive sensing. Phys. Rev. Lett. 2011, 106, 154101.
  28. Wang, W.X.; Yang, R.; Lai, Y.C.; Kovanis, V.; Harrison, M.A.F. Time-series based prediction of complex oscillator networks via compressive sensing. Europhys. Lett. 2011, 94, 48006.
  29. Shabani, A.; Kosut, R.L.; Mohseni, M.; Rabitz, H.; Broome, M.A.; Almeida, M.P.; Fedrizzi, A.; White, A.G. Efficient measurement of quantum dynamics via compressive sensing. Phys. Rev. Lett. 2011, 106, 100401.
  30. Wang, W.X.; Lai, Y.C.; Grebogi, C.; Ye, J.P. Network reconstruction based on evolutionary-game data. Phys. Rev. X 2011, 1, 021021.
  31. Su, R.Q.; Wang, W.X.; Lai, Y.C. Detecting hidden nodes in complex networks from time series. Phys. Rev. E 2012, 85, 065201.
  32. Su, R.Q.; Ni, X.; Wang, W.X.; Lai, Y.C. Forecasting synchronizability of complex networks from data. Phys. Rev. E 2012, 85, 056220.
  33. Su, R.Q.; Lai, Y.C.; Wang, X.; Do, Y.H. Uncovering hidden nodes in complex networks in the presence of noise. Sci. Rep. 2014, 4, 3944.
  34. Wang, X.F.; Chen, G. Pinning control of scale-free dynamical networks. Physica A 2002, 310, 521–531.
  35. Li, X.; Wang, X.F.; Chen, G. Pinning a complex dynamical network to its equilibrium. IEEE Trans. Circuits Syst. I 2004, 51, 2074–2087.
  36. Sorrentino, F.; di Bernardo, M.; Garofalo, F.; Chen, G. Controllability of complex networks via pinning. Phys. Rev. E 2007, 75, 046103.
  37. Yu, W.; Chen, G.; Lü, J. On pinning synchronization of complex dynamical networks. Automatica 2009, 45, 429–435.
  38. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544.
  39. FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1961, 1, 445–466.
  40. Nagumo, J.; Arimoto, S.; Yoshizawa, S. An active pulse transmission line simulating nerve axon. Proc. IRE 1962, 50, 2061–2070.
  41. Ott, E. Chaos in Dynamical Systems, 2nd ed.; Cambridge University Press: Cambridge, UK, 2002.
Figure 1. (a) Schematic illustration of a small neuronal network, where the dynamics of each neuron is mathematically described by the FitzHugh–Nagumo (FHN) equations. (b,c) Dynamical trajectories of two neurons from the coupled system, one being chaotic when isolated and another regular, respectively. The trajectories give little hint as to which one is originally chaotic and which one is regular, due to the coupling. Specifically, Neuron 1 is originally chaotic (by setting parameter a = 0.42 in the FHN equation), while all other neurons are regular (their values of the corresponding parameter in the FHN equation are chosen uniformly from the interval [0.43, 0.45]).
Figure 2. (a) Chaotic time series of the membrane potential V and recovery variable W from a single neuron for a = 0.42; and (b) the corresponding dynamical trajectory.
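The chaotic regime in this figure can be reproduced with a periodically driven FitzHugh–Nagumo model. Only a = 0.42 comes from the caption; the cubic form V(V − a)(1 − V), the driving amplitude, frequency, recovery time scale and step size below are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a periodically driven FitzHugh-Nagumo (FHN) neuron.
# Only a = 0.42 appears in the caption; eps, A, omega and dt are guesses.
import math

def fhn_rhs(t, v, w, a=0.42, eps=0.08, A=0.1, omega=1.0):
    """Right-hand side of the driven FHN equations (assumed form)."""
    dv = v * (v - a) * (1.0 - v) - w + A * math.cos(omega * t)
    dw = eps * (v - 0.5 * w)
    return dv, dw

def integrate(T=200.0, dt=0.01, v0=0.1, w0=0.0):
    """Fixed-step fourth-order Runge-Kutta; returns the (V, W) trajectory."""
    n = int(T / dt)
    v, w, t = v0, w0, 0.0
    traj = []
    for _ in range(n):
        k1v, k1w = fhn_rhs(t, v, w)
        k2v, k2w = fhn_rhs(t + dt / 2, v + dt / 2 * k1v, w + dt / 2 * k1w)
        k3v, k3w = fhn_rhs(t + dt / 2, v + dt / 2 * k2v, w + dt / 2 * k2w)
        k4v, k4w = fhn_rhs(t + dt, v + dt * k3v, w + dt * k3w)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        w += dt / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
        t += dt
        traj.append((v, w))
    return traj
```

The cubic term keeps the trajectory bounded, so a plain fixed-step integrator suffices for a qualitative picture like panel (b).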
Figure 3. (a,b) Predicted coefficients from compressive sensing (CS) and a comparison with the actual parameter values in the dynamical equations of variables V and W. The number of data points used is 12. (c) Predicted parameters for a single neuron as the number of data points is increased. The sampling interval is Δt = 0.05. All results are averaged over 10 independent time series.
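The reconstruction behind this figure amounts to expanding the vector field in a basis of monomials and solving the resulting underdetermined linear system for a sparse coefficient vector. The paper uses ℓ1 minimization; the sketch below substitutes orthogonal matching pursuit (a greedy sparse solver) so that it stays self-contained, and all numbers other than a = 0.42 and the 12 measurements are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: a greedy stand-in for the l1 solver."""
    norms = np.linalg.norm(Phi, axis=0)
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(Phi.T @ residual) / norms   # normalized correlations
        corr[support] = 0.0                       # do not re-pick chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    c = np.zeros(Phi.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(0)
# Toy vector field dV/dt = V(V - a)(1 - V) - W with a = 0.42: a 4-sparse
# combination (V, V^2, V^3, W) within a larger monomial dictionary.
a = 0.42
V = rng.uniform(-1, 1, 12)            # 12 measurements, as in the caption
W = rng.uniform(-1, 1, 12)
y = V * (V - a) * (1 - V) - W         # "measured" derivative values
# Dictionary: monomials V^i W^j with i + j <= 4 (15 terms > 12 measurements)
powers = [(i, j) for i in range(5) for j in range(5) if i + j <= 4]
Phi = np.column_stack([V**i * W**j for i, j in powers])
c = omp(Phi, y, k=4)                  # sparse coefficient estimate
```

The point is structural: with fewer measurements than basis terms, sparsity of the true coefficient vector is what makes the recovery problem well posed.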
Figure 4. For the network in Figure 1a, (a) the actual and (b) estimated weighted adjacency matrix. The normalized data amount used in the reconstruction is Rm = 0.7.
Figure 5. (a) Estimated values of parameter a for different neurons (red circles), compared with the actual values (black crosses). The random network has size N = 20 and connection probability p = 0.04. The normalized data amount used in the reconstruction is Rm = 0.7. (b) The largest Lyapunov exponents calculated from the reconstructed system equations. The horizontal reference line marks zero.
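The exponents in panel (b) are computed from the reconstructed equations; a positive largest Lyapunov exponent flags a chaotic neuron. As a self-contained illustration of the estimator, the sketch below applies the standard average-log-derivative formula to a 1D map (the logistic map, whose exponent at r = 4 is known to be ln 2); the paper's computation is on the reconstructed FHN flow, not on this toy map.

```python
import math

def largest_lyapunov(f, df, x0, n_transient=1000, n_iter=100_000):
    """Largest Lyapunov exponent of a 1D map: orbit average of ln|f'(x)|."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n_iter

# Logistic map at r = 4: chaotic, with exact exponent ln 2.
lam = largest_lyapunov(lambda x: 4.0 * x * (1.0 - x),
                       lambda x: 4.0 - 8.0 * x,
                       x0=0.123)
```

For flows such as the FHN system, the analogous computation evolves a pair of nearby trajectories (or the variational equation) and averages the logarithmic growth rate of their separation.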
Figure 6. For neural networks of size N = 40 and N = 60, the normalized error associated with non-zero terms, Enz, as a function of normalized data amount Rm. The results are averaged over 10 independent measurement realizations.
Figure 7. (a) For a random network of fixed connection probability p = 0.04, a contour plot of the normalized error associated with nonzero terms, Enz, in the parameter plane (Rm, N). (b) For a random network of fixed size N = 60, a contour plot of Enz in the parameter plane (Rm, p). All results are obtained from 10 independent network realizations. See the text for explanations.
