Article
Peer-Review Record

Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization

Photonics 2022, 9(4), 217; https://doi.org/10.3390/photonics9040217
by Chentao Fu 1, Shuiying Xiang 1,2,*, Yanan Han 1, Ziwei Song 1 and Yue Hao 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 20 January 2022 / Revised: 18 March 2022 / Accepted: 23 March 2022 / Published: 25 March 2022
(This article belongs to the Special Issue Microwave Photonics Applications)

Round 1

Reviewer 1 Report

In the manuscript, the authors proposed a generalized supervised learning algorithm for multilayer photonic spiking neural networks. A weight update rule based on the STDP rule and the gradient descent mechanism was proposed. The multilayer photonic SNN successfully solved the XOR problem and the classification of the Iris and Wisconsin breast cancer datasets. In my opinion, this work is novel and interesting for the field of photonic neuromorphic computing. I think this work can be accepted by Photonics. Some minor suggestions are as follows.
1. Line 51, ‘recently attention’ should be replaced by ‘recent attention’
2. Line 117, in ‘na’, the ‘a’ should be a subscript.
3. Line 136, ‘example label’?
4. Line 155, in Eq. (8), the symbol h is used twice. Please check it.

Author Response

Thank you for your careful review. Your enlightening suggestions are helpful for improving the quality of this manuscript. All revisions are highlighted and underlined in the revised manuscript. For details please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This work introduced a generalized supervised learning algorithm combining the STDP rule and the gradient descent mechanism for multilayer photonic SNNs. The work is practical and timely. Before publication, several questions need to be addressed by the authors.
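
For concreteness, a minimal sketch of how an STDP window can gate a gradient-descent-style timing-error update in such a hybrid scheme; the exponential window form, all constants, and the function names are illustrative assumptions, not the authors' actual rule:

```python
import numpy as np

def stdp_window(delta_t, a_plus=1.0, a_minus=1.0, tau=1e-9):
    """Exponential STDP window W(dt), with dt = t_post - t_pre in seconds.
    Constants here are illustrative, not taken from the manuscript."""
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

def weight_update(w, t_pre, t_post, t_target, eta=0.01):
    """One plausible hybrid rule: the output spike-timing error
    (the gradient-descent term) is gated by the STDP window evaluated
    at the observed pre/post spike-time difference."""
    timing_error = t_target - t_post  # sets the sign and scale of the update
    return w + eta * timing_error * stdp_window(t_post - t_pre)
```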

1. How did the authors match the neuron model to hardware neurons? The reviewer suggests showing a schematic of the VCSEL neurons relating to the models.

2. The authors demonstrated the proposed algorithm on small datasets, such as the Iris and Wisconsin breast cancer datasets. To support the claim of a generalized algorithm, can the authors present the training performance on the standard MNIST dataset, or even CIFAR-10?

3. As the authors mentioned that energy efficiency is a crucial benchmark of system performance, can the authors evaluate the system’s energy efficiency when performing the Iris or Wisconsin breast cancer tasks?

4. Given that the algorithm in this work is proposed for photonic SNNs, a hardware architecture based on photonics should be presented in the manuscript.

Author Response

Thank you for your careful review. Your enlightening suggestions are helpful for improving the quality of this manuscript. All revisions are highlighted and underlined in the revised manuscript. For details please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper presents a supervised learning rule for multilayer photonic SNNs, combining spike-timing-dependent plasticity with gradient-based computation. The resulting algorithm is demonstrated on three supervised learning tasks with shallow and relatively small spiking neural networks based on the VCSEL-SA rate equations.
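
For context, the spiking dynamics of a VCSEL with an intracavity saturable absorber are often reduced to Yamada-type gain/absorber/intensity rate equations. A minimal Euler-integration sketch under that simplification follows; all parameter values are illustrative assumptions, not the manuscript's:

```python
def yamada_step(G, Q, I, dt, A=6.5, B=5.9, a=1.8,
                g_G=0.05, g_Q=0.04, g_I=1.0):
    """One Euler step of a Yamada-type model for a laser with a
    saturable absorber: G = gain, Q = absorption, I = intensity.
    Parameter values are illustrative only."""
    dG = g_G * (A - G - G * I)       # gain pumping and saturation
    dQ = g_Q * (B - Q - a * Q * I)   # absorber recovery and bleaching
    dI = g_I * (G - Q - 1.0) * I     # net gain drives the intensity
    return G + dt * dG, Q + dt * dQ, I + dt * dI
```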

Overall, I found the presentation easy to follow and the research well motivated. A significant shortcoming is the choice of relatively easy benchmarks, which don't allow for a good judgement of the overall merits of the presented approach.

Regarding the novelty of the presented method, several advances from recent years are not mentioned by the authors in the introduction:

  • The surrogate/pseudo-derivative approach pioneered by [1,2,3] (among others) is a straightforward way to scale SNN training to many layers [4]; a minimal sketch is given after this list.
  • The statement "Note that the spike trains, denoted by the sum of the Dirac function, are not differentiable in a SNN." is not precise: differentiability depends on the variable with respect to which one differentiates. In many instances it is possible to compute, for example, derivatives of the spike times w.r.t. the weight parameters using the implicit function theorem, and thereby to derive an analogue of the backpropagation algorithm [5]. In particular, this approach works with spike times, covers the case of a loss that depends quadratically on the spike times of a given layer, and generalises the SpikeProp algorithm.
  • There are a number of other approaches that combine gradient computation with STDP-like update rules (e.g. [6,7], among many others).
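
As a concrete illustration of the surrogate/pseudo-derivative idea from [1,2,3]: the forward pass keeps the non-differentiable Heaviside spike, while the backward pass substitutes a smooth pseudo-derivative in place of the Dirac delta. The fast-sigmoid form and the sharpness constant below are assumptions chosen in the spirit of [3]:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: Heaviside step at the firing threshold
    (non-differentiable; its true derivative is a Dirac delta)."""
    return (v >= v_th).astype(float)

def spike_pseudo_derivative(v, v_th=1.0, beta=10.0):
    """Backward pass: a smooth surrogate used instead of the delta,
    here the derivative of a fast sigmoid; beta is an illustrative
    sharpness constant."""
    return 1.0 / (beta * np.abs(v - v_th) + 1.0) ** 2
```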

I'm less familiar with applications to photonic SNN models, so I assume that a discussion in this context is still novel. Since the overall presentation is sound, I can recommend this article for publication, provided some effort is invested in further elaborating on the current state of the art.

Some further questions:

  • Does this approach generalise beyond 2 layers?
  • Does it work with recurrent connections?
  • How well do you expect the algorithm to work on a larger dataset like CIFAR-10 or MNIST?

Minor Comments: I found a number of smaller mistakes in grammar and some unusual formulations. Some examples:

> on the benchmarks of Iris dataset and Wisconsin breast cancer dataset.

on the Iris and Wisconsin breast cancer benchmark datasets.

> system, spiking neural network (SNN) was firstly proposed and demonstrated powerful computational capabilities

... systems, spiking neural networks (SNNs) were first proposed and demonstrated ...

Unfortunately, I didn't have time to list all the smaller mistakes of that nature.

[1] Esser, S. K., Merolla, P. A., Arthur, J. V., Cassidy, A. S., Appuswamy, R., Andreopoulos, A., Berg, D. J., McKinstry, J. L., Melano, T., Barch, D. R., di Nolfo, C., Datta, P., Amir, A., Taba, B., Flickner, M. D., & Modha, D. S. (2016). Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41), 11441-11446.
[2] Bellec, G., Salaj, D., Subramoney, A., Legenstein, R., & Maass, W. (2018). Long short-term memory and learning-to-learn in networks of spiking neurons. Advances in Neural Information Processing Systems, 31.
[3] Neftci, E. O., Mostafa, H., & Zenke, F. (2019). Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6), 51-63.
[4] Fang, W., Yu, Z., Chen, Y., Huang, T., Masquelier, T., & Tian, Y. (2021). Deep residual learning in spiking neural networks. Advances in Neural Information Processing Systems, 34.
[5] Wunderlich, T. C., & Pehle, C. (2021). Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports, 11(1), 1-17.
[6] Kaiser, J., Mostafa, H., & Neftci, E. (2020). Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Frontiers in Neuroscience, 14, 424.
[7] Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 1-15.

Author Response

Thank you for your careful review. Your enlightening suggestions are helpful for improving the quality of this manuscript. All revisions are highlighted and underlined in the revised manuscript. For details please see the attachment.

Author Response File: Author Response.docx
