Computational Intelligence: Spiking Neural Networks

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (20 October 2023)

Special Issue Editors


Guest Editor
School of Computer and Information Sciences, Auckland University of Technology, Auckland 1010, New Zealand
Interests: adaptive machine learning and intelligent information processing, including the theories of evolving connectionist systems; neuro-fuzzy systems; brain-inspired spiking neural networks

Guest Editor
Department of Computer Science, Loughborough University, Loughborough LE11 3TU, UK
Interests: spiking neural networks; computational intelligence; deep learning; neuro-robotics

Special Issue Information

Dear Colleagues,

It is our pleasure to announce a new Special Issue, “Computational Intelligence: Spiking Neural Networks”, in the journal Big Data and Cognitive Computing. Spiking neural networks have recently emerged as an energy-efficient alternative to traditional deep neural networks. Because their information processing mechanisms closely resemble those of biological neurons, they handle temporal and spatiotemporal data naturally, and they can exploit recently proposed theories in neuroscience to develop novel approaches to real-world problems. Further, recent advances in the development of neuromorphic chips have made it possible to realize software implementations of these approaches in hardware. Together, these theoretical and hardware-level advances have made it easier for artificial intelligence techniques based on spiking neural networks to transition from research labs to real-world applications.

This Special Issue encourages researchers to present their recent theoretical and application-oriented advances involving spiking neural networks (SNNs). Topics include novel theoretical approaches for learning with SNNs, bio-inspired learning, edge computing, interpretability in SNNs, deep spiking neural networks, research involving neuromorphic hardware, and applications of SNNs in areas such as robotics and autonomous systems.

This Special Issue invites contributions, including, but not limited to, the following detailed topics:

  • Bio-inspired learning;
  • Learning algorithms for SNNs to handle temporal, spatiotemporal data, etc.;
  • Edge computing;
  • Interpretability in SNNs;
  • Deep spiking neural networks;
  • Robotics;
  • Research involving neuromorphic and other hardware platforms.

Prof. Dr. Nik Kasabov
Dr. Shirin Dora
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • bio-inspired learning
  • deep spiking neural networks
  • edge computing
  • interpretability
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research


17 pages, 1200 KiB  
Article
Comparison of Bagging and Sparsity Methods for Connectivity Reduction in Spiking Neural Networks with Memristive Plasticity
by Roman Rybka, Yury Davydov, Danila Vlasov, Alexey Serenko, Alexander Sboev and Vyacheslav Ilyin
Big Data Cogn. Comput. 2024, 8(3), 22; https://doi.org/10.3390/bdcc8030022 - 23 Feb 2024
Abstract
Developing a spiking neural network architecture that could prospectively be trained on energy-efficient neuromorphic hardware to solve various data analysis tasks requires satisfying the limitations of prospective analog or digital hardware, i.e., local learning and a limited number of connections, respectively. In this work, we compare two methods of connectivity reduction that are applicable to spiking networks with local plasticity: instead of a large fully connected network (used as the baseline for comparison), we employ either an ensemble of independent small networks or a network with probabilistic sparse connectivity. We evaluate both methods with a three-layer spiking neural network applied to handwritten and spoken digit classification tasks, using two memristive plasticity models and the classical spike-timing-dependent plasticity (STDP) rule. Both methods achieve an F1-score of 0.93–0.95 on the handwritten digit recognition task and 0.85–0.93 on the spoken digit recognition task. Combining the two methods made it possible to obtain highly accurate models while reducing the number of connections by more than three times compared to the baseline model.
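Setting the plasticity rules aside, the two connectivity-reduction strategies compared in the abstract above can be illustrated with a small NumPy sketch. The layer sizes, ensemble count, and sparsity level below are hypothetical choices for illustration, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 784, 100  # hypothetical input/output layer sizes

# Baseline: one large fully connected weight matrix
w_full = rng.random((n_in, n_out))

# Method 1: an ensemble of independent small networks
# (here, 4 networks, each owning a quarter of the output neurons)
ensemble = [rng.random((n_in, n_out // 4)) for _ in range(4)]

# Method 2: probabilistic sparse connectivity --
# each potential synapse is kept with probability p
p = 0.3
mask = rng.random((n_in, n_out)) < p
w_sparse = np.where(mask, w_full, 0.0)

print(w_full.size)                    # 78400 connections in the baseline
print(sum(w.size for w in ensemble))  # same neuron count, split into small nets
print(int(mask.sum()))                # roughly 30% of the baseline connections
```

The ensemble keeps every network small and independently trainable, while the sparse mask keeps one network but prunes most synapses up front; the abstract reports that combining both cuts connections by more than three times relative to the baseline.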
(This article belongs to the Special Issue Computational Intelligence: Spiking Neural Networks)

18 pages, 919 KiB  
Article
Extraction of Significant Features by Fixed-Weight Layer of Processing Elements for the Development of an Efficient Spiking Neural Network Classifier
by Alexander Sboev, Roman Rybka, Dmitry Kunitsyn, Alexey Serenko, Vyacheslav Ilyin and Vadim Putrolaynen
Big Data Cogn. Comput. 2023, 7(4), 184; https://doi.org/10.3390/bdcc7040184 - 18 Dec 2023
Abstract
In this paper, we demonstrate that fixed-weight layers generated from random distributions or logistic functions can effectively extract significant features from input data, resulting in high accuracy on a variety of tasks, including the Fisher's Iris, Wisconsin Breast Cancer, and MNIST datasets. We have observed that logistic functions yield high accuracy with less dispersion in results. We have also assessed the accuracy of our approach when minimizing the number of spikes generated in the network, which is practically useful for reducing energy consumption in spiking neural networks. Our findings reveal that the proposed method achieves the highest accuracy on the Fisher's Iris and MNIST datasets when decoding with logistic regression, and surpasses the accuracy of the conventional (non-spiking) approach using only logistic regression in the case of Wisconsin Breast Cancer. We have also investigated the impact of non-stochastic spike generation on accuracy.
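Spiking details aside, the core idea in the abstract above — an untrained, fixed-weight projection layer followed by logistic-regression decoding — can be sketched with a rate-based (non-spiking) stand-in. The hidden-layer size and Gaussian weight distribution here are illustrative assumptions, not the paper's configuration (the paper also explores weights generated by logistic functions):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X, y = load_iris(return_X_y=True)

# Fixed-weight layer: weights are drawn once and never trained
W = rng.normal(size=(X.shape[1], 32))  # 32 hidden units, hypothetical
H = np.tanh(X @ W)  # non-linear fixed features, a stand-in for firing rates

# Decode the fixed features with logistic regression
Xtr, Xte, ytr, yte = train_test_split(H, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 2))  # high accuracy despite untrained features
```

Only the readout is trained; the random projection does the feature extraction, which is what makes the scheme attractive for hardware where the first layer's weights can be fixed at fabrication time.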

Review


12 pages, 857 KiB  
Review
Spiking Neural Networks for Computational Intelligence: An Overview
by Shirin Dora and Nikola Kasabov
Big Data Cogn. Comput. 2021, 5(4), 67; https://doi.org/10.3390/bdcc5040067 - 15 Nov 2021
Cited by 29
Abstract
Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization on suitable hardware, which imposes additional time and resource constraints upon ongoing projects. This review provides an overview of the current real-world applications of SNNs and identifies steps to accelerate future research involving SNNs.
