Review
Peer-Review Record

A Survey on Neuromorphic Architectures for Running Artificial Intelligence Algorithms

Electronics 2024, 13(15), 2963; https://doi.org/10.3390/electronics13152963
by Seham Al Abdul Wahid *,†, Arghavan Asad † and Farah Mohammadi †
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 30 June 2024 / Revised: 13 July 2024 / Accepted: 24 July 2024 / Published: 26 July 2024
(This article belongs to the Special Issue Neuromorphic Device, Circuits, and Systems)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents an efficient method for implementing spiking neural networks (SNNs) on FPGAs. This method effectively lowers energy consumption and improves computation speed. The implementation is efficient and leads to improved performance. However, there are some points that must be addressed before publication.

1.    This paper lacks detailed comparisons with, and reviews of, the state of the art. For example, there are different methods for implementing SNNs, and many papers review these different implementation methods. The authors should compare their methods with baseline methods across these different approaches to implementing SNNs. Optionally, the authors could also compare their methods with commercial platforms such as Loihi.

 

In addition, the authors could also compare parallel methods in neuromorphic computing, such as memristive computing [1][2]. This would help readers build an overall impression of this field.

 

2.     Resolution of figures:

Many figures appear to be low resolution. It is suggested to use high-resolution vector images to make the paper clearer.

3.     In Figure 3: STDP does not belong to the unsupervised learning methods only. STDP takes in reference signals, so it can also be considered a kind of supervised learning method.

 

4.     Gradient descent is treated as a kind of backpropagation method; however, the two should be placed in a parallel relationship.

 

5.     For training algorithms, including robust training algorithms, the authors are encouraged to discuss the following papers from machine learning, whose methods can be seamlessly transferred to the area of neuromorphic computing to make the survey more complete.


Overall, this paper presents a novel method for implementing SNNs on FPGAs. I recommend minor revision for this paper.

 

[1] Improving the robustness of analog deep neural networks through a Bayes-optimized noise injection approach

[2] BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture

 

Comments on the Quality of English Language

The English is good.

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This review is useful mostly for those already versed enough to read and follow it, who perhaps do not even need such a review. The status quo the text presents is reflective enough for those who can appreciate it. However, such a review is not very helpful for newcomers who would like to join the field. If the authors have the chance, I would recommend that they define terminology such as "CPU", "GPU" (and explain why the review discriminates between them), "brain-inspired computing", "neuromorphic computers", "non-von Neumann", "neurons", "synapses", etc. In parallel, they could minimize the use of stock phrases like "increasingly enhance" (instead of just "enhance"). Along the same lines, they could provide brief introductions to their chosen models, explaining how these are representative and non-trivial, and perhaps mention some other models in passing.

 

In terms of style and clarity, some passages are genuinely challenging to understand, for example lines 216-220: "Challenging to implement on classical von Neumann architecture (CPUs/GPUs) due to the large demands of power and time. Hence, FPGA or ASICs which can offer a high-speed and low-power hardware implementations of SNNs are a good alternative for large-scale implementations [6]. Other implementations are completed using memristors combined with STDP [7]."

 

By the end of the review, the many accumulated abbreviations (such as ALU, SRAM, DRAM, STDP, VDSP, LIF SNN, etc.) take on a life of their own that is quite difficult to follow, and the reader must often backtrack through the text trying to figure out what these abbreviations mean, which is not possible in all cases.

In summary, this may be a useful review for people with significant previous exposure, as it gives them a degree of integral overview. As a physicist, it is hard for me to judge how this review fits the criteria for publication in this particular journal; from my point of view, it can be improved.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This review paper summarizes the potential specifications of neuromorphic chip architecture and seeks to highlight which applications they are suitable for.

The topic of this work is interesting.

However, I fail to see its contribution to advancing the state of the art. Previous literature needs to be better investigated and discussed in depth.

Author Response

Please see the attachment. 

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have sufficiently addressed my comments and suggestions.
