Peer-Review Record

Extraction of Significant Features by Fixed-Weight Layer of Processing Elements for the Development of an Efficient Spiking Neural Network Classifier

Big Data Cogn. Comput. 2023, 7(4), 184; https://doi.org/10.3390/bdcc7040184
by Alexander Sboev 1,2,*, Roman Rybka 1,3, Dmitry Kunitsyn 1,2, Alexey Serenko 1, Vyacheslav Ilyin 1,4,5 and Vadim Putrolaynen 6
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 24 November 2023 / Revised: 4 December 2023 / Accepted: 14 December 2023 / Published: 18 December 2023
(This article belongs to the Special Issue Computational Intelligence: Spiking Neural Networks)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The main questions and observations are the following:

- To demonstrate the results, three types of datasets are not needed. If more than one dataset is used, it should be highlighted why each was needed and what particularities of each cannot be found in a single dataset. The data corresponding to these particularities should be shown in the Results chapter and discussed.

- The spiking neural network is presented only in general terms, by a diagram (line 86), and the formulas are stated equally generally, without the correlations and important intermediate steps between them.

- A spiking neuron model is mentioned and some parameter values are given. What is the scheme on which it is based, and how was the relationship derived? What is its simulated model? (lines 109-120)

- Based on the topology of the network presented, all the particularities of the network and its modifications must be mentioned in the Results chapter in order to substantiate the results obtained. It is sufficient to formulate only the criteria/dependencies that led to the results obtained.

- The comparison is mentioned only in general terms; please highlight, in comparison with the nearest published material in the subject area, what the added value is.

Author Response

We thank the reviewer for the scrupulous attention they have given to the manuscript and for their constructive comments and suggestions.

The paper has been revised to address the reviewer's feedback:

  • Regarding the remark about the motivation for using several datasets, we now explain at the beginning of Section 2 that we aimed to assess the feasibility of our proposed spiking layer on data from different domains, of various sizes and classification complexity. We therefore chose several datasets for which published accuracies achieved by other spiking networks exist to compare against.
  • In order to address the remark about unclear presentation of the spiking network model, more explanation has been added linking the description in the text to the diagram depicting the neural network model. (138-139)
  • In the neuron model description, we have added a link to the literature describing the Leaky Integrate-and-Fire neuron, and specified in more detail how numerical simulation was performed. (115, 127-128)
    Explanation has also been added regarding the selection of values for neuron parameters. (122-123)
  • We have revised the Results section to make sure that all the particularities of the network and its modifications are outlined where appropriate, and that the best-performing variants of the network, with which the reported performance was obtained, have been specified. We have also re-checked that the dependencies of the results on the model parameters are outlined in the Discussion section.
  • Regarding a comparison to other published material, we have checked that our results contain a comparison to accuracies of other methods based on spiking neural networks. We have also emphasized in the conclusion that our proposed spiking layer is the first one to have fixed weights, and all such layers existing to date have been of conventional, non-spiking neurons.
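As context for the neuron-model discussion in the exchange above, a minimal sketch of Leaky Integrate-and-Fire dynamics under Euler integration follows. All parameter values and the integration scheme here are illustrative assumptions, not the ones used in the manuscript:

```python
# Illustrative Leaky Integrate-and-Fire (LIF) neuron: the membrane
# potential leaks toward rest, integrates input current, and emits a
# spike (then resets) when it crosses a threshold.
# All constants below are placeholder values, not the manuscript's.

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Euler integration of dV/dt = (-(V - v_rest) + r_m * I) / tau_m."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # reset after spiking
    return spikes

# A constant supra-threshold input produces a regular spike train.
spike_times = simulate_lif([1.5] * 1000)
```

With a constant drive above threshold, the spike count grows with the input strength, which is the basic mechanism behind using spike counts as features.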

Reviewer 2 Report

Comments and Suggestions for Authors

The main contribution of this work is demonstrating how fixed-weight layers, generated either from a random distribution or from logistic functions, can effectively extract significant features from input data such as Fisher's Iris, the Wisconsin Breast Cancer dataset, and MNIST. The suggested approach simplifies potential hardware implementation for neuromorphic computing devices. The paper explores the capabilities of layers with fixed weights in solving benchmark tasks of real-valued vector and image classification.

The paper includes detailed experiments and results demonstrating the effectiveness of the proposed method. The results showed that layers with random or logistic-function-generated weights can efficiently extract meaningful features from input data. It was found that logistic functions, in particular, yield high accuracy with less dispersion in the results. The study concludes that the proposed non-trainable layer efficiently transforms numeric input values into a lower-dimensional space of spike counts, allowing subsequent decoding of classes from these spike counts. The approach demonstrated competitive accuracy on the datasets used and is seen as a foundation for creating efficient and economical SNN topologies for deployment on biomorphic devices.

The manuscript offers significant contributions to developing efficient SNN classifiers, especially in reducing energy consumption in neuromorphic computing systems. The innovative approach of using fixed-weight layers in SNNs demonstrates promising results in accuracy and efficiency, paving the way for further research and practical applications. However, some points have to be addressed for the manuscript to be considered for publication:

 

[a] As logistic regression [1] and logistic map [2] are distinct objects, the term logistic function leaves the text ambiguous. For example, equation (3) refers to a logistic map. However, the authors address logistic regression in Subsection 3.5 without clearly defining the regression model.

 

[b] As r is an essential parameter in describing complex behavior (see [2]), the authors should explain why they chose r = 1.885, A = 0.3, and B = 5.9. Moreover, if other values were chosen, would it be possible to improve the model's performance (as in a model-tuning procedure in machine learning)?
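For context on the object discussed in this point: the classical logistic map [2] iterates x_{n+1} = r·x_n·(1 − x_n), and its behavior (fixed point, periodic, or chaotic) depends on r. The manuscript's equation (3) and its parameters (r = 1.885, A = 0.3, B = 5.9) follow the authors' own parameterization, which is not reproduced here; the sketch below uses the classical form with an illustrative r in its chaotic regime:

```python
# Classical logistic map x_{n+1} = r * x_n * (1 - x_n), iterated to
# produce a deterministic, pseudo-random sequence in (0, 1).
# r = 3.9 is chaotic for THIS classical form only; the manuscript's
# equation (3) uses a different parameterization (r = 1.885, A, B).

def logistic_map_sequence(n, x0=0.5, r=3.9):
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# Such a sequence could, for instance, fill one row of a fixed
# (non-trainable) weight matrix.
weights = logistic_map_sequence(100)
```

This illustrates the reviewer's point: the qualitative character of the generated weights, and hence potentially the classifier's performance, hinges on the map parameters, which motivates treating them as tunable hyperparameters.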

 

[c] There are other regression models related to growth curves, such as the probit model. I suggest mentioning this possibility, even if hypothetical, as a suggestion for future investigations.

 

References

[1] Hosmer, D. W., Lemeshow, S. (2000). Applied logistic regression. John Wiley and Sons. ISBN: 0471356328, 9780471356325.

[2] Sprott, J. C. (2003). Chaos and Time-Series Analysis. Oxford University Press.

Author Response

We appreciate the reviewer for their careful scrutiny of the manuscript and for offering constructive comments and suggestions.

The paper has undergone revisions to incorporate the feedback provided by the reviewer:

  • In order to eliminate the ambiguity between logistic functions and logistic regression, we have clarified that logistic regression is used as the decoder, while logistic functions are used to generate the weights.
  • In response to the comment regarding the selected values of the logistic function parameters, we have incorporated clarifying remarks and referenced our previous works.
  • Regarding the comment about other regression models, we have included statements about the goals of our future work in Subsection 3.5.
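The division of roles addressed in this response (fixed functions generate the weights, logistic regression decodes the classes) can be sketched in miniature. Everything below is illustrative: the toy data, the tanh nonlinearity, and the uniform random fixed weights are stand-ins for the manuscript's actual spiking layer and spike-count features:

```python
import numpy as np

# Sketch: a fixed (non-trainable) weight layer transforms inputs into
# features, and a logistic-regression decoder is trained on top of them.
# The fixed weights are drawn from a uniform distribution purely for
# illustration; the manuscript's spiking layer is not reproduced here.

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs in 4 dimensions.
x = np.vstack([rng.normal(-1.0, 0.5, (50, 4)),
               rng.normal(+1.0, 0.5, (50, 4))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Fixed-weight layer: a frozen random projection (never trained).
w_fixed = rng.uniform(-1.0, 1.0, (4, 8))
features = np.tanh(x @ w_fixed)

# Logistic-regression decoder trained by plain gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # sigmoid
    w -= 0.5 * features.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean(((features @ w + b) > 0) == (y == 1))
```

The key property mirrored here is that only the decoder is trained; the feature-extracting layer stays frozen, which is what simplifies hardware implementation.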

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

I do not have any other observations.
