Article
Peer-Review Record

Spike-Weighted Spiking Neural Network with Spiking Long Short-Term Memory: A Biomimetic Approach to Decoding Brain Signals

Algorithms 2024, 17(4), 156; https://doi.org/10.3390/a17040156
by Kyle McMillan 1, Rosa Qiyue So 2,3, Camilo Libedinsky 4,5, Kai Keng Ang 2,6,* and Brian Premchand 2
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 7 February 2024 / Revised: 9 April 2024 / Accepted: 10 April 2024 / Published: 12 April 2024
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

In this paper, the authors propose a novel spike-weighted SNN with spiking long short-term memory (swSNN-SLSTM) for brain-machine interfaces (BMIs). Compared with several existing ML models, the swSNN-SLSTM outperforms both the unscented Kalman filter and the LSTM-based ANN. Adding LSTM to spiking neural networks to capture the temporal information hidden in brain signals is an interesting attempt.

However, the experimental results are somewhat insufficient, and more details are needed in the description of the proposed algorithm. For instance, the changes in data dimensionality need to be specified.

 

My specific comments are listed below:

 

1. Lines 184-196: please organize the choice of hyper-parameters for each layer into a table for better comprehension. Moreover, the hyper-parameters of the mem-SNN and the sw-SNN are the same; please make sure there is no error here.

2. It seems that the weighting mechanism in the output layer is the most significant difference between the sw-SNN and the mem-SNN. Please explain in more detail why ±0.01 was chosen as the output weight.

3. Figure 2 illustrates that, compared with the x-position predictions, the y-position predictions are less than satisfactory. Adjusting the weight of the y-position output in the sw-SNN should be considered to improve the proposed method.

4. From Fig. 4(B), the highest correlation coefficient is achieved when the number of nodes is 400. Why, then, did the authors choose 600 nodes for the sw-SNN-SLSTM?

5. Some conclusions in the discussion section are inconsistent with the experimental results. Please check and revise the following inconsistencies.

(1) Lines 241-242: according to the experimental results on dataset 1, there is little difference between the performance of the sw-SNN/mem-SNN/LSTM models and that of the proposed fusion models.

(2) Line 328: in this sentence, the result descriptions for datasets 1 and 2 appear to be mixed up.

(3) Paragraphs 2 and 3, Section 4: according to the network description, the SNN is the backbone structure for spike information processing, and the SLSTM is one of the modules in the proposed method for temporal pattern learning. It is not necessary to explore which module contributes the most, since they play different roles in the algorithm.

(4) Lines 368-375 and 385-395: the descriptions of the computational operations are confusing. Does the SNN reduce the number of computational operations or not?

 

6. The network names used in this paper should be unified for better comprehension (rate-coded SNN or spike-weighted SNN).
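For clarity on the naming point: "rate coding" conventionally means summarizing a binary spike train by its mean firing rate over a time window. A minimal sketch of that convention follows; the 1 ms bin width and the example window length are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of rate coding: a binary spike train is summarized by
# its mean firing rate over the window. The 1 ms bin width (dt) is an
# illustrative assumption, not a value from the paper.

def firing_rate(spike_train, dt=0.001):
    """Mean firing rate in Hz of a binary spike train sampled every dt seconds."""
    spike_train = np.asarray(spike_train, dtype=float)
    window_s = len(spike_train) * dt  # total window length in seconds
    return spike_train.sum() / window_s

# Example: 10 spikes in a 100 ms window -> 100 Hz
rate = firing_rate([1] * 10 + [0] * 90)
```

If the paper's "spike-weighted" model is in fact this kind of rate-based summary, stating that equivalence once would resolve the ambiguity the reviewer raises.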

Comments on the Quality of English Language

see suggestions

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

The authors propose a machine learning architecture based on a spiking neural network to decode intents from EEG measurements. In my opinion, the paper requires improved clarity of presentation before it can be recommended for publication.

- Title: there is a typo.

- Abstract: the objectives of the paper need to be better explained. The background, methods, and results should each take roughly one third of the abstract.

- For any major statement, e.g. that the proposed algorithm reduces power (energy?) consumption, or that the proposed methods outperform Kalman filter methods, there should be evidence, an explanation, or a reference to the existing literature.

- The overall setup and objectives are not clear from reading the Introduction. The Kalman filter is used for regression when the measurement noise is Gaussian and white. Why is the Kalman filter considered here for a decision problem? The Kalman filter does not make any decisions.
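To ground the regression point: a Kalman filter produces a continuous state estimate, not a decision. The scalar sketch below shows the standard predict/update recursion for estimating a latent state (e.g., cursor velocity) under Gaussian white measurement noise; all matrices are reduced to scalars, and the noise parameters A, H, Q, R are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Minimal scalar Kalman filter sketch: estimates a latent state (e.g.,
# cursor velocity) from noisy observations, assuming Gaussian white
# measurement noise. A, H, Q, R are illustrative scalars, not the
# paper's fitted parameters.

def kalman_filter(zs, A=1.0, H=1.0, Q=1e-3, R=1e-1, x0=0.0, P0=1.0):
    x, P = x0, P0
    estimates = []
    for z in zs:
        # Predict: propagate state and uncertainty through the dynamics
        x = A * x
        P = A * P * A + Q
        # Update: the Kalman gain K weighs prediction against measurement
        K = P * H / (H * P * H + R)
        x = x + K * (z - H * x)
        P = (1.0 - K * H) * P
        estimates.append(x)
    return np.array(estimates)
```

The output is a continuous trajectory estimate, i.e., regression; mapping it to a discrete intent would require a separate decision stage, which is precisely the gap the reviewer asks the authors to clarify.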

- It is better to avoid subsections in the Introduction. Instead, please elaborate and clearly explain what is done in this paper and what the new contributions are, i.e., the last two paragraphs of the Introduction can be expanded and improved. If the spike-weighted model is referred to as a rate-based model, why not adopt this in the paper title as well?

- Not all symbols used are defined, e.g. X(t) in (1). Is 't' a continuous or a discrete variable? When and how are these variables updated, and is the update continuous?

- More importantly, the overall architecture is unclear: what are the inputs, what are the outputs, and what do these outputs represent? Why is it sometimes referred to as a decoding architecture and sometimes as a rate-coded model?

- Note that there are bold-faced symbols in the text, but the corresponding symbols in the equations are not bold.

- Several statements are confusing and hard to understand: the reset scheme is set to 'none'? Two different types vs. four different models? Spikes were not recorded?

- A surrogate gradient was used, but it is unclear why and how.
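For context on this point: the usual motivation for a surrogate gradient is that the spiking nonlinearity is a Heaviside step whose derivative is zero almost everywhere, so backpropagation substitutes a smooth surrogate derivative on the backward pass. The sketch below uses the fast-sigmoid surrogate, one common choice; the paper's actual surrogate function and slope are assumptions here, not taken from the text.

```python
import numpy as np

# Sketch of the surrogate-gradient idea: the forward pass uses the
# non-differentiable Heaviside spike function, while the backward pass
# substitutes a smooth surrogate derivative. The fast-sigmoid surrogate
# and the slope value are illustrative choices, not the paper's.

def spike_forward(v, threshold=1.0):
    # Forward: emit a spike (1.0) where membrane potential crosses threshold
    return (np.asarray(v) >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, slope=10.0):
    # Backward: derivative of a fast sigmoid centered at the threshold,
    # peaking where the neuron is closest to firing
    x = slope * (np.asarray(v) - threshold)
    return slope / (1.0 + np.abs(x)) ** 2
```

Explaining which surrogate the authors chose, and why, would directly address this comment.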

- Figure 1 appears before it is first mentioned in the main text.

- If the resolution is 0.01 and there are 600 values between -1 and +1, what is the actual resolution?
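The arithmetic behind this question can be made explicit; the computation below is only the reviewer's consistency check, not a value taken from the paper.

```python
# Consistency check behind the question: a 0.01 step over [-1, 1] gives
# 201 grid points, while 600 evenly spaced values over [-1, 1] imply a
# much finer step, so the two stated figures cannot both be right.
span = 1.0 - (-1.0)                     # width of the interval [-1, 1]
points_at_001 = round(span / 0.01) + 1  # grid points for a 0.01 step
step_for_600 = span / (600 - 1)         # step implied by 600 values
print(points_at_001)                    # 201
print(round(step_for_600, 5))           # 0.00334
```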

- The statement about using the Adam optimizer is repeated.

- The notation for 1e-2 and similar values is not unified.

- What are the variations of the hyperparameters? Since there are many parameters to set, it may be better to summarize them in a table.

- What does velocity represent in the Kalman filter? What does it mean to train a Kalman filter?

- In the Results, please describe what the ground truth is.

- Lines 259 and 260: what are these 'p' values, and how were they obtained?

- It would be helpful to see a one-paragraph conclusion, to make sure that the paper was understood properly.

 

Comments on the Quality of English Language

There are a few typos, and spaces are missing before and after section titles.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The clarity of the paper has improved; however, proofreading the writing is still recommended. For instance, 'this shows THAT incorporating ...', 'In SUMMARY, ...', etc. In addition, some sentences appear unfinished, e.g. line 350. In the text, 'figure 1' -> 'Figure 1'; the figure captions are not capitalized, and the period at the end of captions should be used consistently. In general, every section should begin with at least one introductory paragraph before any new subsection starts.

Comments on the Quality of English Language

The paper needs further proofreading and formatting. 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
