Article
Peer-Review Record

Data-Driven Predictive Modeling of Neuronal Dynamics Using Long Short-Term Memory

Algorithms 2019, 12(10), 203; https://doi.org/10.3390/a12100203
by Benjamin Plaster and Gautam Kumar *
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 12 August 2019 / Revised: 17 September 2019 / Accepted: 23 September 2019 / Published: 24 September 2019

Round 1

Reviewer 1 Report

The article presents a state-of-the-art machine learning algorithm to capture single-neuron dynamics. This is a notoriously hard problem in the field and, although the article uses some assumptions that are uncommon in the field, the algorithm is shown to perform very well. The focus on the prediction horizon is also well treated. The article is well written and shows attention to detail. I recommend publication of the article as is, although I do have some very minor suggestions.

 

1. Some references on deep learning with neuro-dynamical systems are germane to this introduction: recent papers by Huh and Sejnowski as well as Zenke and Ganguli have developed methods for network training that correctly capture spike timing. The paper by Pandarinath … Sussillo also represents an appropriate recent development.

 

2. Note that the intrinsic bursting regime is a chaotic one (Naud … Gerstner, Biol. Cybern. 2008). Thus I would argue that the reason the LSTM has trouble fitting it is the high sensitivity to initial conditions.

 

3. Note that a typical neuroscience application would have no knowledge of the state-space topology, nor would it be able to train on the whole state vector. The fact that the present method trains on the whole state vector should be addressed in the Discussion and also highlighted in the Introduction. Doing this would avoid frustration among the neuroscience audience and thus make the article more accessible.

 

4. There are some typos and capitalization inconsistencies in the references. Journal titles are capitalized inconsistently, and the last reference seems to have the article title cut off.

 

5. I think the specification of some voltage-dependent time constants is missing.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

There has been a recent surge of interest in bridging traditional machine learning techniques with scientific computing to extract and/or forecast the behavior of dynamical systems from large, complex data sets. These advances will be particularly advantageous for researchers interested in designing closed-loop devices that stimulate the brain according to the current (estimated) state of the system. This requires algorithms that can extract and/or forecast neural dynamics at minimal computational cost, allowing them to operate in real time with sufficiently short deadlines. Although multilayer LSTMs with reverse-order sequence mapping have already been designed for use in language processing (Sutskever et al., 2014), they have not previously been applied to the prediction of dynamical systems behavior from time-series data. As such, this paper is a timely contribution to the burgeoning field of SciML. The authors demonstrate their approach with a nonlinear system of ODEs governing the Hodgkin-Huxley single-neuron model, while highlighting that these techniques may also prove quite valuable in forecasting models of neuron populations for control via implants and external devices. I enjoyed the paper and look forward to exploring this technique on some of our own data.
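
For concreteness, here is a minimal, hypothetical sketch of the kind of architecture described above: a stacked (multilayer) LSTM that maps a reversed window of past Hodgkin-Huxley state samples to a multi-step forecast of the full state vector. The layer sizes, window and horizon lengths, and the use of Keras are my own assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a stacked LSTM mapping a
# reversed window of past state samples (V, m, h, n) to a multi-step forecast,
# in the spirit of the reverse-order sequence mapping of Sutskever et al. (2014).
import numpy as np
import tensorflow as tf

N_PAST, N_FUTURE, N_STATE = 100, 50, 4   # assumed window, horizon, and state sizes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_PAST, N_STATE)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # stacked LSTM layers
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_FUTURE * N_STATE),
    tf.keras.layers.Reshape((N_FUTURE, N_STATE)),
])
model.compile(optimizer="adam", loss="mse")

# Reverse the input sequence along the time axis before training and prediction.
past = np.random.randn(32, N_PAST, N_STATE).astype("float32")     # placeholder data
future = np.random.randn(32, N_FUTURE, N_STATE).astype("float32")  # placeholder targets
model.fit(past[:, ::-1, :], future, epochs=1, verbose=0)
forecast = model.predict(past[:, ::-1, :], verbose=0)
```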

I have no major concerns with this work, just some minor issues and clarifications, listed below. If suitably addressed then I feel this manuscript is appropriate for publication in Algorithms.

Sutskever et al. (2014) employed reverse sequence mapping in a multilayered, deep LSTM network for a language translation task. I think they should be referenced around lines 48-50, along with a brief description, so that the distinction between their work and the work presented here is more transparent. Specifically, before describing the technical components, I would do more to emphasize that you are stacking their architecture in order to generalize this technique to dynamical systems time-series forecasting. Or were the individual deep networks themselves similar? By line 92, lines 109-120, and in the Discussion we see a bit of this, but I think it is important to frame the exact advances of this paper in the context of the existing literature right away (otherwise, I found lines 36-50 did a good job with the background summary).

I have a concern about the non-physiological firing rates observed in the paper's simulations. They are certainly high for natural, in vivo hippocampal CA1 spike rates, as well as for in vitro studies (0-40 Hz). Bursting in this system is typically understood to be around 40 Hz (I have seen doublets as high as 70 Hz in the literature). This can be observed directly in reference [28], Figure 1E therein, where an input current as high as 0.3 nA (300 pA) resulted in a typical high firing rate of approximately 40-60 Hz. In the regular tonic section of this submitted manuscript, I visually estimate 200 Hz from Figure 5C (20 spikes over 100 ms), whereas typical values would be ~0-10 Hz for the low tonic spiking regime. What was the mean firing rate associated with your lower-bound current of 2.3 nA? Figure 5B suggests a rate of 15 spikes over 100 ms for irregular bursting, although the input currents supplied for the bursting sections were somewhat more reasonable. In general it might be nice to report burst and tonic rates within each section. The range of input currents and the firing rates both seem off by an order of magnitude compared to typical in vitro experiments. Of course, the choice of model parameters such as the membrane time constant changes the scale of the bias current; is something along these lines the reason for this choice of current value, and, if so, what alternative explanation is there for these unnatural rates? Although, according to Golomb et al. (2006), one can drive plateau potentials and upwards of 100 Hz regular tonic spiking, I think this is the less interesting of the two tonic dynamical regimes in natural processing. I am being picky about this because I am curious how it would affect the results: I would assume prediction error becomes smaller as spikes become denser in time. And how does temporal sparsity of events affect the choice of a suitable horizon time? Does the network's performance scale to much lower, physiological activity levels? I expect that it does, but biologists will appreciate the confirmation and a more detailed description.
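
To make the back-of-envelope rate estimates above explicit (these are my readings of the figures, not values reported by the authors), the conversion from visually counted spikes to a mean firing rate is simply:

```python
# Spikes counted by eye over a 100 ms window, converted to a mean rate in Hz.
def mean_rate_hz(n_spikes, window_ms):
    return n_spikes / (window_ms / 1000.0)

print(mean_rate_hz(20, 100))  # Figure 5C, regular tonic estimate: 200.0 Hz
print(mean_rate_hz(15, 100))  # Figure 5B, irregular bursting estimate: 150.0 Hz
```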

An important claim of this paper is that dynamical systems forecasting can be implemented in a more computationally cost-effective manner using the described techniques. I think the reader could benefit from a more explicit description of the trade-off between a longer horizon time with more accurate predictions, versus a shorter horizon with better real-time performance. What would be an appropriate choice, practically speaking, for this system to generate predictions within a useful time interval for real-world applications? Once initially trained offline, how long does it take for the stack to generate a prediction for a given horizon length? Is it reasonable to achieve forecasting of the dynamical state in less time than the predicted horizon, while still maintaining sufficient accuracy? For example, what about online correction of a motor plan using neuro-stimulation? The dynamics would be changing over the timescale of a few hundred milliseconds. Is that too demanding? Perhaps this approach is currently better suited to slower-timescale modelling, which might be enough for therapeutic stimulation in neurological disorders? I like the way the discussion touches on limitations, and I think readers would find some further description of this topic helpful, provided the authors and reviewers feel this question is valid.
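
As a hypothetical illustration of the trade-off raised above, one could time how long a trained stack takes to produce a forecast as the horizon length grows, and compare that latency against the real-time deadline of the intended closed-loop application. The model shape, horizon lengths, and use of Keras below are assumptions for illustration only, not measurements of the authors' implementation.

```python
# Sketch: measure single-window inference latency as a function of horizon length,
# to be compared against the deadline of a closed-loop stimulation application.
import time
import numpy as np
import tensorflow as tf

N_PAST, N_STATE = 100, 4   # assumed window and state sizes, as in the sketch above

def build_model(n_future):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_PAST, N_STATE)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_future * N_STATE),
    ])

window = np.random.randn(1, N_PAST, N_STATE).astype("float32")
for n_future in (10, 50, 200):             # horizon lengths in samples
    model = build_model(n_future)
    model(window)                          # warm-up call, excludes setup cost
    t0 = time.perf_counter()
    model(window)
    latency_ms = (time.perf_counter() - t0) * 1000.0
    print(f"horizon={n_future:4d} samples  latency={latency_ms:.1f} ms")
```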

Some of the figures would benefit from larger axis labels and tick value font size. For example, Figures 5 and 6. Others, such as Figure 8, are fine. Perhaps this is just a sizing / pre-print issue.

Reference [28]: is there a specific reason this review was chosen? There are many prominent, high impact reviews on this subject matter. Very minor, just curious really. Content was fine.

As indicated on the reviewer checklist, there are some very minor and sporadic language issues, not at all detracting from the clarity of the writing, but occasionally a bit disruptive to an otherwise good manuscript flow. The authors should also double-check some of the subscripts used (e.g., gNa and VNa, line 453).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I am happy with the changes the authors made to the manuscript.
