Search Results (10)

Search Parameters:
Keywords = ANN-to-SNN conversion

38 pages, 4759 KB  
Review
Event-Based Vision at the Edge: A Review
by Michael Middleton, Teymoor Ali, Epifanios Baikas, Hakan Kayan, Basabdatta Sen Bhattacharya, Elena Gheorghiu, Mark Vousden, Charith Perera, Oliver Rhodes and Martin A. Trefzer
Brain Sci. 2026, 16(4), 422; https://doi.org/10.3390/brainsci16040422 - 17 Apr 2026
Viewed by 229
Abstract
Spiking Neural Networks (SNNs) executed on neuromorphic hardware promise energy-efficient, low-latency inference well-suited to edge deployment in size-, weight-, and power-constrained environments such as autonomous vehicles, wearable devices, and unmanned aerial platforms. However, a coherent research pathway to deployment of neuromorphic devices remains elusive. This paper presents a structured review and position on the state of SNN-based vision across four interconnected dimensions: network architectures, training methodologies, event-based datasets and simulation techniques, and neuromorphic computing hardware. We survey the evolution from shallow convolutional SNNs to spiking Transformers and hybrid designs that leverage the advantages of SNNs and conventional artificial neural networks. We also examine surrogate gradient training and ANN-to-SNN conversion approaches, catalogue real-world and simulated event-based datasets, and assess the landscape of neuromorphic platforms ranging from rigid mixed-signal architectures to fully configurable digital systems. Our analysis reveals that while each area has matured considerably in isolation, critical integration challenges persist. In particular, event-based datasets remain scarce and lack standardisation, training methodologies introduce systematic gaps relative to deployment hardware, and access to neuromorphic platforms is restricted by proprietary toolchains and limited development kit availability. We conclude that bridging these integration gaps, rather than advancing individual components alone, represents the most important and least addressed work required to realise the potential of SNN-based vision at the edge.
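The ANN-to-SNN conversion approaches this review surveys typically begin with data-based weight normalisation: each layer is rescaled so its peak activation maps onto the spiking threshold. A minimal sketch of that classic step follows; it illustrates the general technique, not code from the review, and the 99.9th-percentile peak estimate and the plain feed-forward ReLU stack are assumptions.

```python
import torch

@torch.no_grad()
def normalise_weights(layers, calib_x):
    """Rescale a feed-forward ReLU stack so each layer's activations
    peak near the firing threshold (taken as 1.0)."""
    # Pass 1: record each layer's peak activation on calibration data.
    peaks, x = [], calib_x
    for layer in layers:
        x = torch.relu(layer(x))
        # The 99.9th percentile is a robust stand-in for the true maximum.
        peaks.append(torch.quantile(x.flatten(), 0.999).item())
    # Pass 2: W_l <- W_l * p_{l-1} / p_l and b_l <- b_l / p_l.
    prev = 1.0
    for layer, peak in zip(layers, peaks):
        layer.weight.mul_(prev / peak)
        if layer.bias is not None:
            layer.bias.div_(peak)
        prev = peak
    return layers
```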

31 pages, 5331 KB  
Review
Spiking Neural Networks in Imaging: A Review and Case Study
by Michael Voudaskas, Jack Iain MacLean, Neale A. W. Dutton, Brian D. Stewart and Istvan Gyongy
Sensors 2025, 25(21), 6747; https://doi.org/10.3390/s25216747 - 4 Nov 2025
Cited by 2 | Viewed by 5726
Abstract
This review examines the state of spiking neural networks (SNNs) for imaging, combining a structured literature survey; a comparative meta-analysis of reported datasets, training strategies, hardware platforms, and applications; and a case study on LMU-based depth estimation in direct Time-of-Flight (dToF) imaging. While SNNs demonstrate promise for energy-efficient, event-driven computation, current progress is constrained by reliance on small or custom datasets, ANN-SNN conversion inefficiencies, simulation-based hardware evaluation, and a narrow focus on classification tasks. The analysis highlights scaling trade-offs between accuracy and efficiency, persistent latency bottlenecks, and limited sensor–hardware integration. These findings were synthesised into key challenges and future directions, emphasising benchmarks, hardware-aware training, ecosystem development, and broader application domains.
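Rate coding is the encoding most often assumed when conventional image intensities are fed to SNNs like those surveyed here. A minimal Poisson-encoding sketch, illustrative only and not the encoding used in the paper's dToF case study:

```python
import torch

def poisson_encode(image, num_steps=32):
    """image: tensor of intensities in [0, 1] -> (num_steps, *image.shape) spikes."""
    # At each time step a pixel fires with probability equal to its intensity,
    # so the spike rate over the window approximates the pixel value.
    return (torch.rand(num_steps, *image.shape) < image).float()
```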

19 pages, 5808 KB  
Article
From Convolution to Spikes for Mental Health: A CNN-to-SNN Approach Using the DAIC-WOZ Dataset
by Victor Triohin, Monica Leba and Andreea Cristina Ionica
Appl. Sci. 2025, 15(16), 9032; https://doi.org/10.3390/app15169032 - 15 Aug 2025
Cited by 1 | Viewed by 4671
Abstract
Depression remains a leading cause of global disability, yet scalable and objective diagnostic tools are still lacking. Speech has emerged as a promising non-invasive modality for automated depression detection, due to its strong correlation with emotional state and ease of acquisition. While convolutional neural networks (CNNs) have achieved state-of-the-art performance in this domain, their high computational demands limit deployment in low-resource or real-time settings. Spiking neural networks (SNNs), by contrast, offer energy-efficient, event-driven computation inspired by biological neurons, but they are difficult to train directly and often exhibit degraded performance on complex tasks. This study investigates whether CNNs trained on audio data from the clinically annotated DAIC-WOZ dataset can be effectively converted into SNNs while preserving diagnostic accuracy. We evaluate multiple conversion thresholds using the SpikingJelly framework and find that the 99.9% mode yields an SNN that matches the original CNN in both accuracy (82.5%) and macro F1 score (0.8254). Lower threshold settings offer increased sensitivity to depressive speech at the cost of overall accuracy, while naïve conversion strategies result in significant performance loss. These findings support the feasibility of CNN-to-SNN conversion for real-world mental health applications and underscore the importance of precise calibration in achieving clinically meaningful results.
(This article belongs to the Special Issue eHealth Innovative Approaches and Applications: 2nd Edition)
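For context, a conversion along the lines the paper describes might look as follows with SpikingJelly's ann2snn module. The checkpoint name and the stand-in calibration data are placeholders, and the Converter API may differ across SpikingJelly versions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from spikingjelly.activation_based import ann2snn

cnn = torch.load('depression_cnn.pt')                # hypothetical trained CNN
calib = TensorDataset(torch.randn(64, 1, 128, 128))  # stand-in calibration inputs
calib_loader = DataLoader(calib, batch_size=16)

# mode='99.9%' corresponds to the threshold mode the paper found best.
converter = ann2snn.Converter(dataloader=calib_loader, mode='99.9%')
snn = converter(cnn)   # returns a spiking copy, run for T time steps at inference
```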

26 pages, 3960 KB  
Article
ECA-ATCNet: Efficient EEG Decoding with Spike Integrated Transformer Conversion for BCI Applications
by Xuhang Li, Qianzi Shen, Haitao Wang and Zijian Wang
Appl. Sci. 2025, 15(4), 1894; https://doi.org/10.3390/app15041894 - 12 Feb 2025
Cited by 1 | Viewed by 3647
Abstract
The Brain–Computer Interface (BCI) has applications in smart homes and healthcare by converting EEG signals into control commands. However, traditional EEG signal decoding methods are affected by individual differences, and although deep learning techniques have made significant breakthroughs, challenges such as high energy consumption and the processing of raw EEG data remain. This paper introduces the Efficient Channel Attention Temporal Convolutional Network (ECA-ATCNet) to enhance feature learning by applying Efficient Channel Attention Convolution (ECA-conv) across spatial and spectral dimensions. The model outperforms state-of-the-art methods in both within-subject and between-subject classification tasks on MI-EEG datasets (BCI-2a and PhysioNet), achieving accuracies of 87.89% and 71.88%, respectively. Additionally, the proposed Spike Integrated Transformer Conversion (SIT-conversion) method, based on Spiking–Softmax, converts the Transformer’s self-attention mechanism into Spiking Neural Networks (SNNs) in just 12 time steps. The accuracy loss of the converted ECA-ATCNet model is only 0.6% to 0.73%, while its energy consumption is reduced by 52.84% to 53.52%. SIT-conversion enables ultra-low-latency, near-lossless ANN-to-SNN conversion, with SNNs achieving similar accuracy to their ANN counterparts on image datasets. Inference energy consumption is reduced by 18.18% to 45.13%. This method offers a novel approach for low-power, portable BCI applications and contributes to the advancement of energy-efficient SNN algorithms.
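The paper's Spiking–Softmax is not reproduced here, but the general rate-coding idea behind converting attention can be sketched: run integrate-and-fire neurons for a few time steps and let normalised spike counts stand in for softmax weights. Everything below (the exponential input current, the soft reset, T = 12) is an assumption for illustration, not the paper's method:

```python
import torch

def rate_softmax(scores, T=12, threshold=1.0):
    """scores: (..., n) attention logits -> (..., n) rate-coded weights."""
    v = torch.zeros_like(scores)
    counts = torch.zeros_like(scores)
    drive = torch.exp(scores)            # positive input current per step
    for _ in range(T):
        v = v + drive
        spikes = (v >= threshold).float()
        v = v - spikes * threshold       # soft reset keeps the residual charge
        counts = counts + spikes
    # Counts saturate at T for strongly driven neurons, the usual low-T artefact.
    return counts / counts.sum(dim=-1, keepdim=True).clamp(min=1)
```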

15 pages, 3271 KB  
Article
Spiking PointCNN: An Efficient Converted Spiking Neural Network under a Flexible Framework
by Yingzhi Tao and Qiaoyun Wu
Electronics 2024, 13(18), 3626; https://doi.org/10.3390/electronics13183626 - 12 Sep 2024
Cited by 3 | Viewed by 2909
Abstract
Spiking neural networks (SNNs) are attracting wide attention due to their brain-like simulation capabilities and low energy consumption. Converting artificial neural networks (ANNs) to SNNs offers great advantages, combining the high accuracy of ANNs with the robustness and energy efficiency of SNNs. Existing point-cloud-processing SNNs have two unresolved issues: first, they lack a specialized surrogate gradient function; second, they are not robust enough to process real-world datasets. In this work, we present a high-accuracy converted SNN for 3D point cloud processing. Specifically, we first revise and redesign the Spiking X-Convolution module based on the X-transformation. To address the non-differentiability of the activation function arising from the binary signals of spiking neurons, we propose an effective adjustable surrogate gradient function, which can fit various models well by tuning its parameters. Additionally, we introduce a versatile ANN-to-SNN conversion framework enabling modular transformations. Based on this framework and the Spiking X-Convolution module, we design Spiking PointCNN, a highly efficient converted SNN for processing 3D point clouds. We conduct experiments on the public 3D point cloud datasets ModelNet40 and ScanObjectNN, on which our proposed model achieves excellent accuracy. Code will be available on GitHub.
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
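An adjustable surrogate gradient of the kind the abstract describes can be sketched as a PyTorch autograd function whose backward pass uses a sigmoid derivative with a tunable slope. The parameter alpha plays the role of the paper's tunable parameters, but the exact function used there may differ:

```python
import torch

class AdjustableSpike(torch.autograd.Function):
    """Heaviside forward, tunable sigmoid-derivative surrogate backward."""

    @staticmethod
    def forward(ctx, v, alpha):
        ctx.save_for_backward(v)
        ctx.alpha = alpha
        return (v >= 0.0).float()        # hard threshold in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        a = ctx.alpha
        sig = torch.sigmoid(a * v)
        # Larger alpha hugs the true step function; smaller alpha eases
        # gradient flow for neurons far from threshold.
        return grad_out * a * sig * (1.0 - sig), None  # no gradient w.r.t. alpha

# Typical use inside a neuron update (threshold and alpha are illustrative):
# spikes = AdjustableSpike.apply(v_membrane - 1.0, 4.0)
```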

10 pages, 2327 KB  
Communication
Electrocardiography Classification with Leaky Integrate-and-Fire Neurons in an Artificial Neural Network-Inspired Spiking Neural Network Framework
by Amrita Rana and Kyung Ki Kim
Sensors 2024, 24(11), 3426; https://doi.org/10.3390/s24113426 - 26 May 2024
Cited by 15 | Viewed by 2923
Abstract
Monitoring heart conditions through electrocardiography (ECG) has been the cornerstone of identifying cardiac irregularities. Cardiologists often rely on a detailed analysis of ECG recordings to pinpoint deviations that are indicative of heart anomalies. This traditional method, while effective, demands significant expertise and is susceptible to inaccuracies due to its manual nature. In the realm of computational analysis, Artificial Neural Networks (ANNs) have gained prominence across various domains, which can be attributed to their superior analytical capabilities. Conversely, Spiking Neural Networks (SNNs), which mimic the neural activity of the brain more closely through impulse-based processing, have not seen widespread adoption. The challenge lies primarily in the complexity of their training methodologies. Despite this, SNNs offer a promising avenue for energy-efficient computational models capable of delivering high-level performance. This paper introduces an innovative approach employing SNNs augmented with an attention mechanism to enhance feature recognition in ECG signals. By leveraging the inherent efficiency of SNNs, coupled with the precision of attention modules, this model aims to refine the analysis of cardiac signals. The novel aspect of our methodology involves adapting the learned parameters from ANNs to SNNs using leaky integrate-and-fire (LIF) neurons. This transfer learning strategy not only capitalizes on the strengths of both neural network models but also addresses the training challenges associated with SNNs. The proposed method is evaluated through extensive experiments on two publicly available benchmark ECG datasets. The results show that our model achieves an overall accuracy of 93.8% on the MIT-BIH Arrhythmia dataset and 85.8% on the 2017 PhysioNet Challenge dataset. This advancement underscores the potential of SNNs in the field of medical diagnostics, offering a path towards more accurate, efficient, and less resource-intensive analyses of heart diseases.
(This article belongs to the Section Biomedical Sensors)
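The leaky integrate-and-fire (LIF) neuron onto which the ANN parameters are transferred follows a simple discrete-time update: leak, integrate, fire, reset. A minimal sketch, with illustrative decay and threshold values rather than the paper's:

```python
import torch

def lif_step(v, x, decay=0.9, threshold=1.0):
    """One discrete LIF time step over a tensor of membrane potentials v."""
    v = decay * v + x                     # leaky integration of synaptic input x
    spikes = (v >= threshold).float()     # fire where the membrane crosses threshold
    v = v * (1.0 - spikes)                # hard reset for neurons that fired
    return v, spikes
```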

14 pages, 928 KB  
Article
Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks
by Riadul Islam, Patrick Majurski, Jun Kwon, Anurag Sharma and Sri Ranga Sai Krishna Tummala
Sensors 2024, 24(4), 1329; https://doi.org/10.3390/s24041329 - 19 Feb 2024
Cited by 8 | Viewed by 4884
Abstract
Organizations managing high-performance computing systems face a multitude of challenges, including overarching concerns such as overall energy consumption, microprocessor clock frequency limitations, and the escalating costs associated with chip production. Evidently, processor speeds have plateaued over the last decade, persisting within the range of 2 GHz to 5 GHz. Scholars assert that brain-inspired computing holds substantial promise for mitigating these challenges. The spiking neural network (SNN) particularly stands out for its commendable power efficiency when juxtaposed with conventional design paradigms. Nevertheless, our scrutiny has brought to light several pivotal challenges impeding the seamless implementation of large-scale neural networks (NNs) on silicon. These challenges encompass the absence of automated tools, the need for multifaceted domain expertise, and the inadequacy of existing algorithms to efficiently partition and place extensive SNN computations onto hardware infrastructure. In this paper, we posit the development of an automated tool flow capable of transmuting any NN into an SNN. This undertaking involves the creation of a novel graph-partitioning algorithm designed to strategically place SNNs on a network-on-chip (NoC), thereby paving the way for future energy-efficient and high-performance computing paradigms. The presented methodology showcases its effectiveness by successfully transforming ANN architectures into SNNs with a marginal average error penalty of merely 2.65%. The proposed graph-partitioning algorithm enables a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, on average, underscoring the effectiveness of the proposed algorithm in optimizing NN communication pathways. Compared to a baseline graph-partitioning algorithm, the proposed approach exhibits an average decrease of 79.74% in latency and a 14.67% reduction in energy consumption. Using existing NoC tools, the energy-latency product of SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
(This article belongs to the Section Intelligent Sensors)
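The paper's own graph-partitioning algorithm is not reproduced here; the sketch below uses a stock Kernighan–Lin bisection from networkx merely to make concrete what partitioning an SNN across network-on-chip cores to cut inter-core spike traffic means. The toy graph and edge weights (synaptic event counts) are invented for illustration:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy synapse graph: nodes are neurons, edge weights are spike-traffic counts.
g = nx.Graph()
g.add_weighted_edges_from([
    (0, 1, 50), (1, 2, 40), (2, 3, 45),   # heavy local traffic
    (0, 2, 5), (1, 3, 4),                 # light cross traffic
])
core_a, core_b = kernighan_lin_bisection(g, weight='weight')
# The cut weight is the inter-core traffic a good placement minimises.
cut = sum(d['weight'] for u, v, d in g.edges(data=True)
          if (u in core_a) != (v in core_a))
print(f'core A: {core_a}, core B: {core_b}, inter-core traffic: {cut}')
```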

13 pages, 617 KB  
Article
IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation
by Xiongfei Fan, Hong Zhang and Yu Zhang
Biomimetics 2023, 8(4), 375; https://doi.org/10.3390/biomimetics8040375 - 18 Aug 2023
Cited by 7 | Viewed by 2805
Abstract
Spiking neural networks (SNNs) are widely recognized for their biomimetic and efficient computing features. They utilize spikes to encode and transmit information. Despite the many advantages of SNNs, they suffer from low accuracy and large inference latency, caused, respectively, by direct training and by conversion from artificial neural network (ANN) training methods. To address these limitations, we propose a novel training pipeline (called IDSNN) based on parameter initialization and knowledge distillation, using an ANN as both parameter source and teacher. IDSNN maximizes the knowledge extracted from ANNs and achieves competitive top-1 accuracy on CIFAR10 (94.22%) and CIFAR100 (75.41%) with low latency. More importantly, it can achieve 14× faster convergence than directly training SNNs under limited training resources, which demonstrates its practical value in applications.
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot)
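The distillation half of the recipe can be sketched as a standard knowledge-distillation loss in which the SNN's time-averaged logits are pulled toward a pre-trained ANN teacher while also fitting the labels. The temperature and loss weighting below are illustrative, not the paper's values:

```python
import torch
import torch.nn.functional as F

def distill_loss(snn_logits_per_step, teacher_logits, labels, T=4.0, alpha=0.5):
    """snn_logits_per_step: (steps, batch, classes); teacher_logits: (batch, classes)."""
    student = snn_logits_per_step.mean(dim=0)          # average over time steps
    hard = F.cross_entropy(student, labels)            # fit the true labels
    soft = F.kl_div(F.log_softmax(student / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction='batchmean') * T * T     # match the ANN teacher
    return alpha * hard + (1.0 - alpha) * soft
```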

19 pages, 4905 KB  
Article
IC-SNN: Optimal ANN2SNN Conversion at Low Latency
by Cuixia Li, Zhiquan Shang, Li Shi, Wenlong Gao and Shuyan Zhang
Mathematics 2023, 11(1), 58; https://doi.org/10.3390/math11010058 - 23 Dec 2022
Cited by 9 | Viewed by 3870
Abstract
The spiking neural network (SNN) has attracted the attention of many researchers because of its low energy consumption and strong bionics. However, while network conversion sidesteps the training difficulty caused by the SNN's discrete dynamics, the resulting overly long inference time can hinder its practical application. This paper proposes a novel model named the SNN with Initialized Membrane Potential and Coding Compensation (IC-SNN) to solve this problem. The model focuses on the effect of residual membrane potential and rate encoding on the target SNN. After analyzing the conversion error and the information loss caused by the encoding method at low time steps, we propose a new initial membrane potential setting method and a coding compensation scheme. The model enables the network to achieve high accuracy at a low number of time steps by eliminating residual membrane potential and encoding errors in the SNN. Finally, experimental results on the public datasets CIFAR10 and CIFAR100 demonstrate that the model can achieve competitive classification accuracy in just 32 time steps.
(This article belongs to the Special Issue Mathematical Method and Application of Machine Learning)
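A well-known instance of the membrane-initialisation idea is starting each integrate-and-fire neuron at half its threshold, which centres the rounding error of rate coding around zero at low time steps. IC-SNN derives its own per-layer setting; the threshold/2 choice below is only the textbook illustration:

```python
import torch

def init_membrane(shape, threshold=1.0):
    # v0 = threshold / 2 centres the spike-count quantisation error around zero,
    # the standard fix for conversion error at low time steps.
    return torch.full(shape, threshold / 2)

v0 = init_membrane((128,))   # e.g. one layer of 128 integrate-and-fire neurons
```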

25 pages, 1400 KB  
Article
Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms
by Tehreem Syed, Vijay Kakani, Xuenan Cui and Hakil Kim
Sensors 2021, 21(9), 3240; https://doi.org/10.3390/s21093240 - 7 May 2021
Cited by 21 | Viewed by 6478
Abstract
In recent times, the usage of modern neuromorphic hardware for brain-inspired SNNs has grown exponentially. With sparse input data, event-based neuromorphic hardware achieves low power consumption, particularly in the deeper layers. However, training spiking models via deep ANNs is still considered a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models. Nevertheless, these methods require hundreds to thousands of time steps for training and still cannot attain good SNN performance. This work proposes customized model (VGG, ResNet) architectures to train deep convolutional spiking neural networks. In this study, training is carried out using deep convolutional spiking neural networks with surrogate gradient descent backpropagation in a customized layer architecture similar to deep artificial neural networks. Moreover, this work also proposes fewer time steps for training SNNs with surrogate gradient descent. During training with surrogate gradient descent backpropagation, overfitting problems were encountered. To overcome them, this work refines the SNN-based dropout technique for use with surrogate gradient descent. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where the deployment of the customized SNN models was extensively conducted. Performance validations in terms of processing time and inference accuracy between PC and embedded platforms show that the proposed customized models and training techniques are feasible for achieving better performance on various datasets such as CIFAR-10, MNIST, SVHN, and the private KITTI and Korean license plate datasets.
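The refined SNN dropout the abstract mentions is commonly realised by sampling one Bernoulli mask per forward pass and reusing it at every time step, so the sub-network seen by the surrogate-gradient backward pass stays consistent over the simulation window. A hedged sketch under that assumption; the step function and shapes are placeholders:

```python
import torch

def run_with_snn_dropout(step_fn, x_steps, p=0.2):
    """x_steps: (T, batch, features). step_fn runs one SNN time step."""
    # One mask for the whole window keeps the dropped sub-network identical
    # at every time step, unlike naive per-step dropout.
    keep = (torch.rand_like(x_steps[0]) > p).float() / (1.0 - p)
    outputs = [step_fn(x * keep) for x in x_steps]
    return torch.stack(outputs)
```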
