Article

A Particle Swarm Optimization-Based Interpretable Spiking Neural Classifier with Time-Varying Weights

by Mohammed Thousif 1,*, Shirin Dora 2 and Suresh Sundaram 1

1 Department of Aerospace Engineering, Indian Institute of Science, Bangalore 560012, India
2 Department of Computer Science, Loughborough University, Loughborough LE11 3TU, UK
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2846; https://doi.org/10.3390/math12182846
Submission received: 17 August 2024 / Revised: 7 September 2024 / Accepted: 10 September 2024 / Published: 13 September 2024
(This article belongs to the Special Issue Heuristic Optimization and Machine Learning)

Abstract:
This paper presents an interpretable spiking neural classifier (IpT-SNC) with time-varying weights. IpT-SNC uses a two-layered spiking neural network (SNN) architecture in which the weights of synapses are modeled using amplitude-modulated, time-varying Gaussian functions. Self-regulated particle swarm optimization (SRPSO) is used to update the amplitudes, widths, and centers of the Gaussian functions and the thresholds of the neurons in the output layer. IpT-SNC has been developed to improve the interpretability of spiking neural networks. The time-varying weights in IpT-SNC allow us to describe the rationale behind predictions in terms of specific input spikes. The performance of IpT-SNC is evaluated on ten benchmark datasets from the UCI machine learning repository and compared with the performance of other learning algorithms. According to the performance results, IpT-SNC improves classification performance on testing datasets by between 0.5% and 7.7%. The statistical significance of IpT-SNC's improvements over other learning algorithms is evaluated using the Friedman test and the paired t-test. Furthermore, on the challenging real-world BCI (Brain-Computer Interface) competition IV dataset, IpT-SNC outperforms current classifiers by about 8% in terms of classification accuracy. The results indicate that IpT-SNC has better generalization performance than other algorithms.

1. Introduction

Biological neurons process and transmit information through discrete temporal events termed spikes. Artificial neurons that use spikes for signal processing are known as spiking neurons, and neural networks composed of these neurons are termed spiking neural networks (SNNs). In [1], it is shown that SNNs are computationally more powerful than neural networks made up of sigmoidal neurons. This has enabled the development of SNNs for various tasks, such as classification [2], recognition [3], and object detection [4]. The gradient-based backpropagation learning rule cannot be directly implemented in SNNs because of the discontinuous nature of spikes. Hence, SpikeProp [5] evaluates the gradient by assuming that the potential developed by a spiking neuron is linear around the time of the spike. Similarly, many studies in the literature, such as [6,7], evaluate surrogate gradients to update weights in SNNs.
Other than gradient-based approaches, several bio-inspired learning algorithms for SNNs are also available in the literature. Spike timing-dependent plasticity (STDP) [8] is one such biological phenomenon that served as the foundation for the SNN learning algorithms in [2,3]. In STDP, if the neuron connected at the input side of the synapse (presynaptic neuron) fires earlier than the neuron connected at the output side (postsynaptic neuron), then the weight of the synapse is increased, and vice versa. Other biologically inspired phenomena, like rank-order learning [9] and astrocyte-inspired learning [10], have also been utilized as a basis for developing learning techniques for SNNs. All of the learning techniques stated previously assume that, once updated, the weight of a synaptic connection between two neurons in an SNN does not change over time. This long-lasting phenomenon in the weights of the synapses is termed long-term plasticity (LTP) [11] in the brain.
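As a concrete illustration of the STDP rule, the following minimal Python sketch updates a single synaptic weight from one pre/post spike pair; the exponential window and the constants a_plus, a_minus, and tau_stdp are illustrative assumptions, not values specified in [8]:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise. Constants are illustrative."""
    dt = t_post - t_pre                              # > 0: pre fired before post
    if dt > 0:
        return w + a_plus * np.exp(-dt / tau_stdp)   # long-term potentiation
    return w - a_minus * np.exp(dt / tau_stdp)       # long-term depression

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))     # pre before post: weight grows
```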
Contrary to this, short-term plasticity (STP) refers to changes exhibited by biological synapses that last only for a short duration [12,13]. Synapses that exhibit STP are also termed dynamic synapses, and their update functions depend on the input spikes. This ability of dynamic synapses increases the computational power of an SNN, making them suitable for the development of compact neural network architectures [14]. SNNs with dynamic synapses have been utilized for speech recognition [14,15,16], pattern classification [17], neuromorphic computing [18], etc. Even though dynamic synapses are updated using a dynamic function, their strengths are represented using a static function. In [19], a synaptic efficacy function-based leaky integrate-and-fire neuRON (SEFRON) is proposed, in which both the strength of a synapse and its update are formulated using time-varying functions. Each synapse in SEFRON is modeled using amplitude-modulated, time-varying Gaussian functions. The update for this time-varying weight of a synapse is also distributed over time using Gaussian functions. The time-varying weight synapse model proposed in SEFRON has shown better classification performance than the dynamic synapse model. This motivates the development of SNNs with synapses that are more dynamic in nature to obtain better performance. The classifier developed in SEFRON is suitable only for binary classification problems. In [20], SEFRON is extended to multi-class classification problems with the addition of a meta-neuron to the network; hence, it is termed a meta-neuron-based spiking classifier with a time-varying weight model (MeST). MeST utilizes the concept of a meta-neuron for online learning in SNNs [10]. Both SEFRON and MeST employ dynamic synapses, but it is not possible to interpret the reasoning underlying their predictions. In [21], a method is developed to process the time-varying weights in SEFRON for better interpretability. For each input feature, a shape function is estimated from the time-varying weights in [21] to understand the significance of that feature for classification performance. However, the approximation used to estimate the shape functions may result in a loss of information and lead to inaccurate interpretations.
In [22], an evolving SNN architecture is proposed that updates the weights based on the normalization of the potentials of neurons. The weights in [22] are updated until the difference between the spike times of neurons reaches a certain threshold; hence, it is termed a threshold margin maximization learning algorithm for an evolving spiking neural network (TMM-SNN). Researchers have also trained SNNs by incorporating fuzzy logic techniques, as in the contribution degree-based SNN (CD-SNN) [23] and the triangular spike response function SNN (T-SNN) [24]. In CD-SNN, three membership functions are evaluated using the potentials developed by the spiking neurons. These membership functions are further used to train the SNN. In T-SNN, a fuzzy C-means clustering algorithm is used to encode real-valued data into spikes. However, fuzzy techniques have limitations related to dimensionality, scalability, etc. In addition to fuzzy techniques and conventional learning, the search-based particle swarm optimization (PSO) algorithm has been used to train artificial neural networks in recommendation systems [25], to protect wireless sensor networks [26], etc. Optimization methods based on evolutionary computation provide an effective alternative approach to developing SNNs without relying on approximations for learning. Quantum-inspired particle swarm optimization (QiPSO) for feature selection and for optimizing the parameters of an evolving SNN was proposed in [27,28]. QiPSO utilizes the probabilistic properties of quantum bits to optimize the parameters of SNNs with stochastic responses. In [29], a cooperative PSO (CPSO) was developed to optimize the parameters of SNNs. It was demonstrated in [29] that, by improving the weights and delays of the SNN, CPSO outperforms SpikeProp. It can be observed that existing optimization methods for SNNs do not focus on the interpretability of solutions.
In this paper, we develop an interpretable time-varying weight spiking neural classifier (IpT-SNC) in which weights are modeled using multiple amplitude-modulated Gaussian functions. The centers of the Gaussian functions are located at different time instants over the simulation interval. Thus, the weight seen by a given presynaptic spike at a particular synapse is determined by the temporal separation between the time of that spike and the centers of all Gaussian functions associated with that synapse. A search-based algorithm based on self-regulated particle swarm optimization (SRPSO) [30] is implemented to update the amplitudes, widths, and centers of all Gaussian functions, along with the potential thresholds of the neurons in the output layer of IpT-SNC. SRPSO is a variant of PSO [31] that employs self-regulation to overcome premature convergence. Further works have been carried out to improve the performance of SRPSO, such as in [30,32]. As a preliminary step in this work, SRPSO is used to update the parameters of IpT-SNC in order to avoid the computational complexities of gradient-based learning algorithms. We show that the output of IpT-SNC can be interpreted in terms of the times of the spikes generated by the input neurons. The interpretability of IpT-SNC is demonstrated using a synthetic classification problem by identifying a region of the simulation interval (with respect to input spikes) that influences the predictions.
Ten benchmark datasets from the UCI machine learning repository [33] are used to evaluate the performance of IpT-SNC. The performance of IpT-SNC is then compared with that of SpikeProp [5], TMM-SNN [22], DC-SNN [2], SEFRON [19], MeST [20], CD-SNN [23], and T-SNN [24]. Hypothesis testing is used to statistically validate the significance of IpT-SNC's improvements over the other learning algorithms. These statistical findings demonstrate that IpT-SNC outperforms the other learning algorithms used for comparison in terms of generalization performance. Further, the performance of IpT-SNC is evaluated on the Brain-Computer Interface (BCI) competition IV-2a dataset to classify left-hand vs. right-hand movements.
The main contributions of this paper are as follows:
  • To the best of our knowledge, this is the first time in the literature that a heuristic-based SRPSO algorithm has been used to update the parameters of a spiking neural classifier.
  • Inherent interpretations of the proposed IpT-SNC’s performance are drawn in relation to the input information using trained weights.
  • Performance results show that, in classifying complex EEG-based BCI datasets, the proposed IpT-SNC achieves 8% higher accuracy and a 3.5% lower standard deviation than existing classifiers, leading to better generalization.
The paper is organized as follows. The architecture of the IpT-SNC and the SRPSO-based parameter update are described in Section 2. The performance evaluation and comparison with different learning algorithms for SNNs results are presented in Section 3. The conclusions are summarized in Section 4.

2. Interpretable Spiking Neural Classifier IpT-SNC

This section describes the network architecture of IpT-SNC, the modeling of its time-varying weights, and the self-regulated particle swarm optimization (SRPSO) procedure used to update its parameters.

2.1. IpT-SNC Architecture

A two-layered SNN architecture is used in IpT-SNC. The input layer consists of $m$ neurons, which are used to feed the network with input samples $\{(\mathbf{x}_1, c_1), \ldots, (\mathbf{x}_n, c_n), \ldots\}$. The spike pattern of the $n$th input sample is represented as $\mathbf{x}_n = [x_{n1}, \ldots, x_{ni}, \ldots, x_{nm}]$, which is $m$-dimensional. If $C$ represents the total number of classes in the dataset, then the class label of input sample $\mathbf{x}_n$ is represented as $c_n \in \{1, \ldots, C\}$. The input spike pattern with $F_i$ spikes is represented as $x_{ni} = [t_i^1, \ldots, t_i^f, \ldots, t_i^{F_i}]$, where $t_i^f$ denotes the $f$th spike produced by the $i$th input neuron. The output layer consists of $C$ leaky integrate-and-fire spiking neuron models, as in [5]. Each neuron in the output layer is associated with a class in the dataset, and the neurons are designated as $\{y_1, \ldots, y_j, \ldots, y_C\}$. The spike patterns of these output neurons are used to predict the class for a given input sample. Figure 1a shows the network architecture of IpT-SNC.
$w_{ij}(t)$ represents the time-varying weight of the synapse between the $i$th input neuron and the $j$th output neuron. Hence, the input neurons are termed presynaptic neurons, and the output neurons are termed postsynaptic neurons. The potential developed in the $j$th postsynaptic neuron due to the incoming spikes is given as

$$v_j(t) = \sum_i \sum_f w_{ij}(t_i^f) \cdot \epsilon(t - t_i^f) \tag{1}$$
where $\epsilon$ is the unweighted potential, known as the spike response function [5], given by Equation (2):

$$\epsilon(s) = \frac{s}{\tau} \exp\left(1 - \frac{s}{\tau}\right) \tag{2}$$
where $\tau$ denotes the neuron's time constant. A spike is produced when the potential of the $j$th postsynaptic neuron rises above its potential threshold ($\theta_j$). The class predicted by the network for a given input spike pattern, $\mathbf{x}_n$, is identified using the spikes produced by the postsynaptic neurons, as shown below:

$$\hat{c}_n = \arg\min_{j \in \{1, \ldots, C\}} \hat{t}_j \tag{3}$$
where $\hat{t}_j$ represents the time of the first spike generated by the $j$th output neuron. From Equation (1), note that the potential contributed by a particular presynaptic spike depends on the weight, $w_{ij}(t)$, of that connection at the time of the spike. The weight of each synapse, $w_{ij}(t)$, in IpT-SNC is modeled as a mixture of multiple amplitude-modulated Gaussian functions. If $K_{ij}$ Gaussian functions are present in the synapse connecting the $i$th presynaptic neuron and the $j$th postsynaptic neuron, the weight $w_{ij}(t)$ of the synapse is evaluated as

$$w_{ij}(t) = \sum_{k=1}^{K_{ij}} g_{ij}^k(t) \tag{4}$$
where $g_{ij}^k(t)$ represents the output of the $k$th Gaussian function at time $t$, given by

$$g_{ij}^k(t) = \alpha_{ij}^k \exp\left(-\frac{(t - \mu_{ij}^k)^2}{2 (\sigma_{ij}^k)^2}\right) \tag{5}$$
where, for the connection between the $i$th input neuron and the $j$th output neuron, $\alpha_{ij}^k$, $\mu_{ij}^k$, and $\sigma_{ij}^k$ represent the amplitude, center, and width of the $k$th Gaussian function present in that connection, respectively. It is assumed that, in the IpT-SNC architecture, the number of Gaussians is equal across all synapses. Therefore, $K_{ij}$ can be replaced with $K$, which represents the number of Gaussians per synapse. Figure 1b shows the time-varying weight for a synapse between a presynaptic neuron and a postsynaptic neuron. Below the synapse, a presynaptic spike, $t_{pre}$, is shown between 0 and $T$, where $T$ represents the duration of the simulation; $g_1$, $g_2$, and $g_3$ represent the responses of the Gaussians on this synapse at each time step during the simulation, and $w(t_{pre})$ represents the weight at the presynaptic spike, $t_{pre}$.
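To make the weight model concrete, the following minimal Python sketch evaluates Equations (1), (2), (4), and (5) for a single postsynaptic neuron; the array layout and all numerical values are illustrative assumptions, not settings used in the experiments:

```python
import numpy as np

def weight(t, alpha, mu, sigma):
    """Time-varying weight w(t) of one synapse, Equations (4) and (5).
    alpha, mu, sigma are length-K arrays for the K Gaussians on the synapse."""
    return np.sum(alpha * np.exp(-(t - mu) ** 2 / (2.0 * sigma ** 2)))

def spike_response(s, tau):
    """Unweighted spike response epsilon(s), Equation (2); zero before the spike."""
    return (s / tau) * np.exp(1.0 - s / tau) if s > 0 else 0.0

def potential(t, pre_spikes, alpha, mu, sigma, tau):
    """Potential of one postsynaptic neuron at time t, Equation (1).
    pre_spikes[i] lists the spike times of presynaptic neuron i; alpha[i],
    mu[i], sigma[i] parameterize the Gaussians on synapse i."""
    v = 0.0
    for i, spikes in enumerate(pre_spikes):
        for t_f in spikes:
            # Each spike is weighted by w(t) evaluated at the spike time itself.
            v += weight(t_f, alpha[i], mu[i], sigma[i]) * spike_response(t - t_f, tau)
    return v

# Illustrative usage: one synapse with K = 3 Gaussians over a 4 ms interval.
alpha = [np.array([0.8, -0.3, 0.5])]
mu    = [np.array([0.5, 2.0, 3.2])]   # centers in [0, T] ms
sigma = [np.array([0.3, 0.4, 0.3])]
print(potential(t=2.5, pre_spikes=[[1.0, 1.8]], alpha=alpha, mu=mu, sigma=sigma, tau=3.0))
```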
The amplitudes, centers, and widths of all synapses and the potential thresholds of all postsynaptic neurons in the IpT-SNC network are updated using SRPSO. This update is carried out in such a way that the postsynaptic neuron associated with the same class as that of the input sample always fires first. The SRPSO-based parameter-update strategy is presented in the next section.

2.2. IpT-SNC Parameter Update Strategy Using SRPSO

SRPSO is a variant of particle swarm optimization (PSO) that adds features such as self-cognition and social cognition to PSO for better optimization. In this work, SRPSO is used to search for optimal parameter settings of IpT-SNC in order to obtain better classification performance. At first, SRPSO initializes a population of solutions (of size termed the swarm size, $P$), each of which represents a single possible setting for the parameters being optimized. The individual solutions in the population are also termed particles, and the associated parameter values represent the position of a given particle. The positions of the particles are then adjusted in each iteration to estimate the parameter values that yield the best value of the considered fitness function. In this paper, SRPSO is used to estimate the values of the amplitude, center, and width of all the Gaussian functions and the threshold of all output neurons in IpT-SNC. The fitness function is evaluated using classification accuracy.
Without loss of generality, let us assume that $h$ iterations have been completed and that the $l$th particle's current position is denoted by $\rho_h^l$. $\rho_h^l$ consists of the updated values of the parameters of all Gaussian functions in the IpT-SNC architecture, along with the potential thresholds of the postsynaptic neurons after $h$ iterations:

$$\rho_h^l = \left\{\alpha_{ij}^k, \mu_{ij}^k, \sigma_{ij}^k \;\middle|\; i \in [1, m],\ j \in [1, C],\ k \in [1, K]\right\}_h \cup \left\{\theta_j \;\middle|\; j \in [1, C]\right\}_h \tag{6}$$
For ease of understanding, the particle $\rho_h^l$ can be expressed as below:

$$\rho_h^l = \{\rho_h^{l1}, \rho_h^{l2}, \rho_h^{l3}, \ldots, \rho_h^{ld}, \ldots, \rho_h^{lD}\} \tag{7}$$
where $\rho_h^{ld}$ denotes the position of $\rho_h^l$ in the $d$th dimension. $D$ denotes the number of dimensions of the particle $\rho_h^l$. It is equal to the total number of parameters being updated using SRPSO, and it is evaluated as

$$D = (m \cdot C \cdot K \cdot 3) + C \tag{8}$$
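For example, for the IRIS network in Table 4 ($m = 24$ input neurons, $C = 3$ classes) with $K = 6$ Gaussians per synapse, Equation (8) gives $D = (24 \cdot 3 \cdot 6 \cdot 3) + 3 = 1299$ parameters per particle.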
The position of $\rho_h^{ld}$ in the $(h+1)$th iteration is obtained using the velocity $v_{h+1}^{ld}$, as shown below:

$$\begin{aligned} \rho_{h+1}^{ld} &= \rho_h^{ld} + v_{h+1}^{ld} \\ v_{h+1}^{ld} &= \omega\, v_h^{ld} + z_1 r_1 \gamma_1 \left(Pbest_h^{ld} - \rho_h^{ld}\right) + z_2 r_2 \gamma_2 \left(Gbest_h^{d} - \rho_h^{ld}\right) \end{aligned} \tag{9}$$
where $Pbest_h^{ld}$ denotes the personal best position of particle $l$ in the $d$th dimension, and $Gbest_h^{d}$ denotes the global best position in the $d$th dimension up to $h$ iterations. These best positions are the ones that give the lowest values of the fitness function $F$. The fitness function $F$ is defined such that decreasing it improves the overall accuracy, $\eta_o$, given in Equation (12). Mathematically, the fitness function $F$ is given as

$$F = 100 - \eta_o \tag{10}$$
The acceleration coefficients $z_1$ and $z_2$ in Equation (9) are selected such that the particle reaches its best position as quickly as possible. According to [30], these coefficients are set to 1.49. $r_1$ and $r_2$ in Equation (9) are set so that the solution obtained using SRPSO does not get stuck in a local optimum; their values are drawn from a uniform distribution over the range [0, 1] [30].
The inertia weight, $\omega$, in Equation (9) is used to balance exploration and exploitation during the SRPSO search. For the best particle, $\omega$ is linearly incremented to ensure that it maintains its current rate of acceleration in the search space. For all other particles, $\omega$ decreases linearly, as the direction in which these particles are heading is not the best direction [30].
$\gamma_1$ and $\gamma_2$ in Equation (9) are factors for self-cognition and social cognition, respectively. The self-cognition factor $\gamma_1$ enables particles to accelerate toward their personal best-known positions. The social cognition factor $\gamma_2$ makes particles advance more quickly toward the global best-known position. For the best particle, $\gamma_1$ and $\gamma_2$ are set to zero: the best particle disregards both self- and social cognition, as it considers its own direction to be the best. For the other particles, $\gamma_1$ is set to one, enabling them to search in the direction of their personal best-known positions, while $\gamma_2$ is set to either zero or one. If $\gamma_2$ were always one, the solution would be biased towards the global best-known position of the current iteration, which may not be the overall global optimum; if $\gamma_2$ were always zero, the solution might never converge to the global optimum. If $u$ is a uniform random number, then, to achieve balanced social cognition, the value of $\gamma_2$ is chosen as described in [30]:
$$\gamma_2 = \begin{cases} 1, & \text{if } u > 0.5 \\ 0, & \text{otherwise} \end{cases} \tag{11}$$
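A minimal sketch of one SRPSO update step (Equations (9)-(11)) for a single particle is given below; the inertia weight schedule is passed in rather than implemented, and the function signature is an illustrative assumption rather than the implementation used in this paper:

```python
import numpy as np

def srpso_step(pos, vel, pbest, gbest, is_best, omega, z1=1.49, z2=1.49):
    """One SRPSO velocity/position update for a single particle (Equation (9)).
    pos, vel, pbest are length-D arrays; gbest is the global best position."""
    D = pos.shape[0]
    r1, r2 = np.random.rand(D), np.random.rand(D)    # uniform in [0, 1]
    if is_best:
        gamma1 = gamma2 = np.zeros(D)                # best particle ignores both cognitions
    else:
        gamma1 = np.ones(D)                          # always move toward personal best
        u = np.random.rand(D)
        gamma2 = (u > 0.5).astype(float)             # binary social cognition, Equation (11)
    vel = (omega * vel
           + z1 * r1 * gamma1 * (pbest - pos)
           + z2 * r2 * gamma2 * (gbest - pos))
    return pos + vel, vel
```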
The SRPSO search process is performed for all particles for 500 iterations. The pseudocode for searching the Gaussian parameters and the thresholds of the output neurons of IpT-SNC is provided in Algorithm 1. The final performance accuracies are then evaluated for a given dataset. The demonstration of interpretability and a performance comparison of IpT-SNC are described in the next section.
Algorithm 1 Pseudocode for updating the IpT-SNC parameters using SRPSO
 1: for each spike pattern in training do
 2:     Compute output spike pattern
 3:     Compute accuracy, η_o, using Equation (12)
 4:     Compute fitness function, F, using Equation (10)
 5:     while F ≠ 0 and h ≤ max iterations do
 6:         Search for Gaussian parameters (α, μ, σ) and potential thresholds using SRPSO
 7:         Compute accuracy, η_o, using Equation (12)
 8:         Compute updated fitness function, F, using Equation (10)
 9:         h = h + 1
10:     end while
11: end for

3. Interpretability Demonstration and Performance Evaluation of IpT-SNC

The performance of IpT-SNC is evaluated in this section using ten benchmark datasets from the UCI machine learning repository [33]. The problems were chosen to include binary and multi-class problems with different levels of difficulty. The results obtained are compared with those of existing SNN classifiers, namely SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN. When performance is compared on multi-class problems, SEFRON is not considered, as it was developed only for binary classification problems. The overall accuracy of IpT-SNC on a given dataset is evaluated using Equation (12):
$$\eta_o = \frac{100}{\sum_{a,b} q_{ab}} \sum_a q_{aa} \tag{12}$$

where $Q$ represents the confusion matrix, and $q_{ab}$ stands for the element in the $a$th row and $b$th column of $Q$. MATLAB 2021a and a Windows computer with 12 logical cores, 16 GB of memory, and a clock speed of 3.2 GHz are used for the performance evaluation of IpT-SNC.
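As a minimal sketch, Equation (12) can be computed directly from the confusion matrix:

```python
import numpy as np

def overall_accuracy(Q):
    """Overall accuracy (Equation (12)): the trace of the confusion matrix Q
    (correctly classified samples) as a percentage of all samples."""
    return 100.0 * np.trace(Q) / np.sum(Q)

# Illustrative two-class confusion matrix: 90 + 85 correct out of 200 samples.
print(overall_accuracy(np.array([[90, 10], [15, 85]])))  # -> 87.5
```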
Population coding: Real-valued input features are encoded into spike patterns using the population-coding approach [5]. In this approach, each real-valued feature ($v$) is encoded into spikes using $R$ Gaussian receptive field neurons. Each receptive field neuron emits a single spike in the interval $[0, T_e]$. If $t_r$ denotes the spike emitted by the receptive field $\phi_r$ for $v$, then

$$t_r = T_e (1 - \phi_r) \tag{13}$$

where $\phi_r$ represents the output of the $r$th Gaussian receptive field neuron. The upper limit of the encoding interval, $T_e$, used for population coding is set to 3 ms. If $\mu_{pop}^r$ and $\sigma_{pop}$ are the center and spread of the receptive field $\phi_r$, then

$$\phi_r = \exp\left(-\frac{(v - \mu_{pop}^r)^2}{2 \sigma_{pop}^2}\right) \tag{14}$$

$$\mu_{pop}^r = \frac{2r - 3}{2(R - 2)} \tag{15}$$

$$\sigma_{pop} = \frac{1}{0.7 (R - 2)} \tag{16}$$

The values of $\mu_{pop}^r$ and $\sigma_{pop}$ are set based on the method presented in [5].
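A minimal sketch of this encoding scheme (Equations (13)-(16)) for one real-valued feature is given below; it assumes the feature has been normalized to [0, 1], which follows common practice but is not stated explicitly above:

```python
import numpy as np

def population_encode(v, R=6, T_e=3.0):
    """Encode a real-valued feature v into R spike times (Equations (13)-(16)).
    Each of the R Gaussian receptive field neurons emits one spike in [0, T_e]."""
    r = np.arange(1, R + 1)
    mu_pop = (2.0 * r - 3.0) / (2.0 * (R - 2.0))               # Equation (15)
    sigma_pop = 1.0 / (0.7 * (R - 2.0))                        # Equation (16)
    phi = np.exp(-(v - mu_pop) ** 2 / (2.0 * sigma_pop ** 2))  # Equation (14)
    return T_e * (1.0 - phi)                                   # Equation (13)

print(population_encode(0.35))  # strongly responding neurons fire early
```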

3.1. Discussion on IpT-SNC Parameters

To obtain the best performance of IpT-SNC, a suitable search range must be selected for each parameter optimized using SRPSO. The range $[0, T]$ ms is used as the search interval for the centers of the Gaussian functions, which is the same as the simulation interval (where $T$ is the upper limit of the simulation interval). $T$ is set to 4 ms. The search interval for the widths of the Gaussian functions is set to [0.01, 0.6] ms; this interval is chosen to ensure that the Gaussians have a non-zero output over the entire encoding interval even when a center is set at its maximum permitted value in the search range (i.e., 4 ms). The threshold ($\theta$) of each neuron is another parameter optimized using SRPSO, with a search range of [0.1, 2]. From Equation (3), as only the first spikes of the postsynaptic neurons are used for decision-making, the neuron time constant $\tau$ is set to $T_e$.
Table 1 shows the search ranges for the parameters that have a significant effect on the performance of IpT-SNC: the magnitude of the Gaussian amplitude ($A_R$) (where $[-A_R, +A_R]$ is the search range for the amplitudes of the Gaussians in each synapse), the number of Gaussians per synapse ($K$), and the swarm size ($P$). Figure 2 shows the impact of variations in these parameters on the performance of IpT-SNC in the IRIS classification problem (a description of IRIS is presented in Table 2); the training and testing accuracies of IpT-SNC on IRIS are used for this study. Next, the impact of variation in each parameter on IpT-SNC performance is discussed.
  • Number of Gaussians per synapse ($K$): A high value of $K$ results in a large number of parameters that need to be updated using SRPSO, which in turn increases the complexity of the model. From Figure 2a, it can be observed that the testing performance of IpT-SNC increases from $K = 3$ and peaks at $K = 6$. For values of $K$ greater than 8, the training performance of IpT-SNC starts decreasing. Given the trade-off between performance and computational complexity, suitable values for $K$ lie in the interval [5, 8], as shown in Table 1.
  • Magnitude of Gaussian amplitude ($A_R$): From Figure 2b, it can be observed that variation in the magnitude of the Gaussian amplitude ($A_R$) has a strong impact on the performance of IpT-SNC. The testing performance of IpT-SNC peaks multiple times when $A_R$ is in the range (1–3). Therefore, satisfactory values for $A_R$ lie in the interval [1, 3].
  • Swarm size ($P$): The testing performance of IpT-SNC in Figure 2c is higher when the swarm size ($P$) increases beyond 90. Hence, for the remaining classification problems, the initialization interval for the swarm size ($P$) is set to [90, 100].

3.2. Computational Complexity

The computational complexity of IpT-SNC is estimated separately for feedforward propagation through the network and for optimization using SRPSO. In the feedforward stage, computations are carried out until IpT-SNC predicts a class for a given input sample (Equation (3)). The computational complexity of the feedforward stage in O-notation is given by

$$\kappa_1 = O(m \cdot C^2 \cdot K) \tag{17}$$
If $P$ and $D$ denote the swarm size and the dimension of each particle, then the computational complexity of SRPSO, computed as in [30], is given by

$$\kappa_2 = O(P \cdot D) \tag{18}$$

3.3. Interpretability Demonstration of IpT-SNC Using a Synthetic Classification Problem

To study interpretability, IpT-SNC is used to classify a two-class synthetic dataset. Each sample in the synthetic dataset consists of two features, X and Y, as shown below
$$\text{class 1} = \begin{cases} 0 \le X \le 0.4 \\ 0 \le Y \le 0.4 \end{cases} \qquad \text{class 2} = \begin{cases} 0.6 \le X \le 1 \\ 0.6 \le Y \le 1 \end{cases}$$
If the values of both features are lower than 0.4, the sample belongs to class 1; in class 2 samples, both features are greater than 0.6. Each class contains 50 randomly generated samples, which are divided into training and testing sets. The two classes in the dataset are represented by two neurons in the output layer. Each synapse is modeled using three Gaussian functions. SRPSO is used to update the parameters associated with all Gaussian functions in the network, as described in Section 2.2.
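For reference, the synthetic dataset can be generated as in the following sketch; sampling uniformly within each class region is an assumption consistent with the "randomly generated samples" described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_class = 50
class1 = rng.uniform(0.0, 0.4, size=(n_per_class, 2))  # 0 <= X, Y <= 0.4
class2 = rng.uniform(0.6, 1.0, size=(n_per_class, 2))  # 0.6 <= X, Y <= 1
X = np.vstack([class1, class2])                        # feature matrix (X, Y)
y = np.array([1] * n_per_class + [2] * n_per_class)    # class labels
```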
The relation between the presynaptic spike times for a randomly chosen sample from each class and the Gaussian functions present in the synapses of the IpT-SNC architecture is depicted in Figure 3. All synapses connected to the output neuron associated with class 1 are shown in the top two rows of Figure 3 (i.e., Figure 3a–h). Similarly, synapses connected to the output neuron associated with class 2 are shown in the bottom two rows of Figure 3 (i.e., Figure 3i–p). A random class 1 sample is selected, and the responses of all Gaussian functions in the network for the spikes generated by this sample are represented as black dots in Figure 3. Similarly, black diamonds represent the same for spikes generated by a random class 2 input sample. Three lines (dashed red, solid blue, and dotted orange) in each panel represent the Gaussian functions associated with that synapse over the duration of the simulation. The parameters of these Gaussian functions were estimated using SRPSO. From Figure 3a, it can be observed that the sum of the outputs of the Gaussian functions is higher at the times of class 1 presynaptic spikes than at those of class 2 presynaptic spikes. Similarly, it can be observed from Figure 3i that the sum of the outputs of the Gaussian functions is higher for class 2 presynaptic spikes than for class 1 presynaptic spikes. This property enables the correct class output neuron in IpT-SNC to fire first.
Figure 4 shows the weights of the synapses whose Gaussians are represented in Figure 3. These synapses transmit spikes generated by the first presynaptic neuron to the class 1 and class 2 postsynaptic neurons, respectively. It can be observed that the amplitudes of the weights connected to the class 1 output neuron are larger before 1.1 ms. This indicates that input spikes before 1.1 ms strongly determine the response of the output neuron associated with class 1. Similarly, input spikes after 1.4 ms strongly determine the response of the output neuron associated with class 2. Similar interpretations can be drawn from the weights of the other synapses in the IpT-SNC network.
The performance of IpT-SNC is evaluated using ten benchmark datasets from the UCI machine learning repository [33]. Table 2 lists the details of these datasets: the number of features per sample, the number of classes, and the numbers of training and testing samples. Table 3 compares the performance of IpT-SNC with those of the SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN spiking neural classifiers on binary-class datasets, whereas Table 4 shows the performance comparison on multi-class datasets. The accuracies reported in Table 3 and Table 4 are mean values obtained across 10 random trials, as in [21]; in each trial, the whole dataset is randomly split into training and testing sets. Among the learning algorithms in Table 3, SpikeProp, TMM-SNN, and DC-SNN utilize constant weights, whereas SEFRON and MeST employ time-varying weights; CD-SNN and T-SNN incorporate fuzzy logic into SNNs. Table 3 and Table 4 display $N_i{:}N_h{:}N_j$ as the architecture of the SNN employed with each learning algorithm, where $N_i$, $N_h$, and $N_j$ denote the total numbers of input, hidden, and output neurons, respectively. For two-layer networks, the architecture is presented as $N_i{:}N_j$. IpT-SNC, SEFRON, and MeST utilize single-layer networks, while the other algorithms employ multi-layer networks.

3.4. Performance Comparison of IpT-SNC on UCI Repository Datasets

Liver and PIMA are low-dimensional binary classification tasks with high interclass overlap. The training performance of IpT-SNC on the liver dataset is better than that of the other learning algorithms except SEFRON, and its testing performance on the liver dataset is 7.7–19% higher than that of all the other algorithms. For the PIMA dataset, SEFRON has the highest training accuracy, which is 2% higher than that of IpT-SNC, but IpT-SNC exhibits 3.5–10% higher testing performance than all the other algorithms used for comparison. Mammogram (MMG) is another problem similar to liver and PIMA. The testing performance of IpT-SNC on MMG is 1–14% better than that of the other learning algorithms except for MeST; in this case, MeST outperforms IpT-SNC by 2.5%.
Echocardiogram (ECG) and breast cancer (BC) are simple binary classification problems mentioned in Table 3. All learning algorithms perform almost similarly on the training dataset of ECG, while, on the testing dataset of ECG, IpT-SNC has the highest performance accuracy compared to that of the other algorithms. An increment of approximately 1% can be seen in the performance of IpT-SNC on the testing dataset of BC. The multi-class classification problems mentioned in Table 4 are IRIS, wine, and acoustic emission (AE). IRIS and wine are three-class problems, while AE is a four-class problem. From Table 4, it is observed that almost all algorithms perform similarly on these simple multi-class classification datasets.
For high-dimensional problems like ionosphere (ION) and hepatitis, IpT-SNC achieves the best performance on both the training and testing datasets of ION, exhibiting an improvement of 0.5–13% on the training dataset and 0.9–9% on the testing dataset when compared with the other learning algorithms. An increment of 1–8% and 6–11% is observed in the performance of IpT-SNC on the training and testing datasets of hepatitis, respectively, when compared with the other learning algorithms except CD-SNN; CD-SNN performs 0.9% better than IpT-SNC on the testing dataset of hepatitis. These findings indicate that IpT-SNC exhibits a significant performance increment on complex problems with high interclass overlap and on high-dimensional problems, while the increment on simple classification problems is moderate.
Statistical analysis of the performance comparison: Statistical tests, namely the Friedman test and paired t-tests, are used to evaluate the significance of the performance of IpT-SNC when compared with that of SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN. The testing accuracies of the algorithms are used for this statistical analysis. At first, with the help of the Friedman test, the significance of all the algorithms as a group is evaluated. The null hypothesis for the Friedman test is that the testing accuracies of all the algorithms used for comparison do not differ significantly from one another. The Friedman test yields a p-value of 0.000009, which allows the null hypothesis to be rejected at the 95% confidence level. This suggests that at least one algorithm performs differently from the others. The pair-wise significance between IpT-SNC and each learning algorithm in Table 3 is evaluated using a paired t-test, with the null hypothesis that the performance of the two algorithms does not differ significantly. p-values of 0.0017, 0.0018, 0.007, 0.0009, 0.0613, 0.0165, and 0.0153 are obtained when IpT-SNC is compared against SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN, respectively. From these p-values, it can be inferred that the null hypothesis can be rejected with 95% confidence when IpT-SNC is compared with SpikeProp, TMM-SNN, DC-SNN, SEFRON, CD-SNN, and T-SNN. The p-value of 0.0613 obtained when IpT-SNC is compared with MeST indicates that the null hypothesis can be rejected with 90% confidence.

3.5. Performance Comparison of IpT-SNC on BCI Competition IV Dataset

In this section, the performance of IpT-SNC in classifying right-hand versus left-hand movements from the Brain-Computer Interface (BCI) competition IV dataset is discussed [34]. The BCI competition IV dataset is a real-world dataset obtained from recordings of electroencephalograph (EEG) signals from the brain. These EEG recordings are obtained from various participants while they imagine movements of body parts without actually moving them; such recordings are termed motor-imagery tasks. The 2a dataset in BCI-IV consists of data recorded while participants imagine the movement of the right hand, the left hand, both feet, and the tongue. These recordings are obtained using 22 electrodes placed on different regions of each participant's scalp. A total of nine participants are considered, and recordings are carried out in two different sessions, each with 144 trials. In this paper, the problem of right-hand versus left-hand movement classification is considered.
The hand movement classification problem is evaluated separately for each participant using IpT-SNC. Out of the total of 144 trials, 72 trials from each session are considered for training, and the remaining 72 for testing. For each trial, six features are extracted using the Robust Common Spatial Pattern technique [35]. The testing performance of IpT-SNC on this motor imagery task is compared with that of the Common Spatial Pattern-Linear Discriminant Analysis (CSP-LDA) classifier [36], the Self-Regulated Interval Type-2 Neuro-Fuzzy Inference System (SRIT2NFIS) [37], and DC-SNN. Table 5 presents the comparison results; the nine participants are indicated as subjects $A_1$–$A_9$. The mean and standard deviation in the comparison clearly show that IpT-SNC outperforms the other learning algorithms by a margin of 8–13%, and its lower standard deviation indicates that IpT-SNC has better generalization performance than the other algorithms.

4. Conclusions

This paper has presented IpT-SNC, an interpretable spiking neural classifier, together with a search-based method for updating its parameters. The weight of each synapse in IpT-SNC was modeled using multiple amplitude-modulated Gaussian functions with centers located at various points throughout the simulation interval. This time-varying weight model of a synapse is used in the IpT-SNC architecture to exploit the advantages of its dynamic nature. A search-based self-regulated particle swarm optimization (SRPSO) technique is used to update the values of the parameters in IpT-SNC in order to achieve better performance. In the interpretability study, it was shown that the rationale behind predictions can be described with the help of the time-varying weights in IpT-SNC. The performance of IpT-SNC was compared with that of other learning algorithms on ten benchmark datasets from the UCI machine learning repository. IpT-SNC outperformed the other existing classifiers by 1–7.7% when classifying highly overlapped datasets, whereas, for datasets with higher dimensions, IpT-SNC outperformed the other classifiers by 0.9–6%. The statistical analysis of the performance comparison showed that IpT-SNC performed better in terms of generalization while still providing the benefit of interpretable outcomes. Further, the performance of IpT-SNC was evaluated on a real-world motor imagery classification task, where an average improvement of 8% was observed for right-hand versus left-hand movement classification when compared with the other algorithms.

Author Contributions

Conceptualization, M.T.; Methodology, M.T.; Software, M.T.; Validation, M.T.; Writing—Original draft, M.T.; Writing—review & editing, S.D. and S.S.; Supervision, S.D. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used in this study are openly available in the UCI repository at https://archive.ics.uci.edu/ (accessed on 1 August 2024) and in the BCI competition IV dataset at https://www.bbci.de/competition/iv/ (accessed on 1 August 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Maass, W. Noisy Spiking Neurons with Temporal Coding have more Computational Power than Sigmoidal Neurons. Adv. Neural Inf. Process. Syst. 1996, 9, 211–217. [Google Scholar]
  2. Machingal, P.; Thousif, M.; Dora, S.; Sundaram, S. Self-regulated Learning Algorithm for Distributed Coding Based Spiking Neural Classifier. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
  3. Kheradpisheh, S.R.; Ganjtabesh, M.; Thorpe, S.J.; Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 2018, 99, 56–67. [Google Scholar] [CrossRef] [PubMed]
  4. Kim, S.; Park, S.; Na, B.; Yoon, S. Spiking-yolo: Spiking neural network for energy-efficient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 4, pp. 11270–11277. [Google Scholar]
  5. Bohte, S.M.; Kok, J.N.; La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 2002, 48, 17–37. [Google Scholar] [CrossRef]
  6. Perez-Nieves, N.; Goodman, D. Sparse spiking gradient descent. Adv. Neural Inf. Process. Syst. 2021, 34, 11795–11808. [Google Scholar]
  7. Neftci, E.O.; Mostafa, H.; Zenke, F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 2019, 36, 51–63. [Google Scholar] [CrossRef]
  8. Markram, H.; Lübke, J.; Frotscher, M.; Sakmann, B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 1997, 275, 213–215. [Google Scholar] [CrossRef]
  9. Kasabov, N.; Capecci, E. Spiking neural network methodology for modelling, classification and understanding of EEG spatio-temporal data measuring cognitive processes. Inf. Sci. 2015, 294, 565–575. [Google Scholar] [CrossRef]
  10. Dora, S.; Suresh, S.; Sundararajan, N. Online Meta-neuron based Learning Algorithm for a spiking neural classifier. Inf. Sci. 2017, 414, 19–32. [Google Scholar] [CrossRef]
  11. Yang, Y.; Calakos, N. Presynaptic long-term plasticity. Front. Synaptic Neurosci. 2013, 5, 8. [Google Scholar] [CrossRef]
  12. Stevens, C.F.; Wang, Y. Facilitation and depression at single central synapses. Neuron 1995, 14, 795–802. [Google Scholar] [CrossRef]
  13. Abbott, L.F.; Regehr, W.G. Synaptic computation. Nature 2004, 431, 796–803. [Google Scholar] [CrossRef] [PubMed]
  14. Liaw, J.S.; Berger, T.W. Dynamic synapse: A new concept of neural representation and computation. Hippocampus 1996, 6, 591–600. [Google Scholar] [CrossRef]
  15. Liaw, J.S.; Berger, T. Robust speech recognition with dynamic synapses. In Proceedings of the 1998 IEEE International Joint Conference on Neural Networks Proceedings, Anchorage, AK, USA, 4–9 May 1998; IEEE World Congress on Computational Intelligence (Cat. No. 98CH36227). Volume 3, pp. 2175–2179. [Google Scholar] [CrossRef]
  16. Namarvar, H.; Liaw, J.S.; Berger, T. A new dynamic synapse neural network for speech recognition. In Proceedings of the IJCNN’01, International Joint Conference on Neural Networks, Washington, DC, USA, 15–19 July 2001; Proceedings (Cat. No. 01CH37222). Volume 4, pp. 2985–2990. [Google Scholar] [CrossRef]
  17. Wade, J.J.; McDaid, L.J.; Santos, J.A.; Sayers, H.M. SWAT: A Spiking Neural Network Training Algorithm for Classification Problems. IEEE Trans. Neural Netw. 2010, 21, 1817–1830. [Google Scholar] [CrossRef] [PubMed]
  18. Hu, M.; Chen, Y.; Yang, J.J.; Wang, Y.; Li, H.H. A Compact Memristor-Based Dynamic Synapse for Spiking Neural Networks. IEEE Trans.-Comput.-Aided Des. Integr. Circuits Syst. 2017, 36, 1353–1366. [Google Scholar] [CrossRef]
  19. Jeyasothy, A.; Sundaram, S.; Sundararajan, N. SEFRON: A new spiking neuron model with time-varying synaptic efficacy function for pattern classification. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1231–1240. [Google Scholar] [CrossRef]
  20. Jeyasothy, A.; Ramasamy, S.; Sundaram, S. Meta-neuron learning based spiking neural classifier with time-varying weight model for credit scoring problem. Expert Syst. Appl. 2021, 178, 114985. [Google Scholar] [CrossRef]
  21. Jeyasothy, A.; Suresh, S.; Ramasamy, S.; Sundararajan, N. Development of a Novel Transformation of Spiking Neural Classifier to an Interpretable Classifier. IEEE Trans. Cybern. 2024, 54, 3–12. [Google Scholar] [CrossRef]
  22. Dora, S.; Sundaram, S.; Sundararajan, N. An Interclass Margin Maximization Learning Algorithm for Evolving Spiking Neural Network. IEEE Trans. Cybern. 2019, 49, 989–999. [Google Scholar] [CrossRef]
  23. Liu, F.; Yang, J.; Pedrycz, W.; Wu, W. A new fuzzy spiking neural network based on neuronal contribution degree. IEEE Trans. Fuzzy Syst. 2021, 30, 2665–2677. [Google Scholar] [CrossRef]
  24. Liu, F.; Pedrycz, W.; Zhang, C.; Yang, J.; Wu, W. Coding method based on fuzzy C-Means clustering for spiking neural network with triangular spike response function. IEEE Trans. Fuzzy Syst. 2023, 31, 1–14. [Google Scholar] [CrossRef]
  25. Hamada, M.; Hassan, M. Artificial neural networks and particle swarm optimization algorithms for preference prediction in multi-criteria recommender systems. Informatics 2018, 5, 25. [Google Scholar] [CrossRef]
  26. Narayanan, S.L.; Kasiselvanathan, M.; Gurumoorthy, K.; Kiruthika, V. Particle swarm optimization based artificial neural network (PSO-ANN) model for effective k-barrier count intrusion detection system in WSN. Meas. Sen. 2023, 29, 100875. [Google Scholar] [CrossRef]
  27. Hamed, H.N.A.; Kasabov, N.; Shamsuddin, S.M. Integrated feature selection and parameter optimization for evolving spiking neural networks using quantum inspired particle swarm optimization. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009; pp. 695–698. [Google Scholar]
  28. Hamed, H.N.A.; Kasabov, N.; Shamsuddin, S. Probabilistic evolving spiking neural network optimization using dynamic quantum-inspired particle swarm optimization. Aust. J. Intell. Inf. Process. Syst. 2010, 11, 23–28. [Google Scholar]
  29. Hong, S.; Ning, L.; Xiaoping, L.; Qian, W. A cooperative method for supervised learning in spiking neural networks. In Proceedings of the 14th International Conference on Computer Supported Cooperative Work in Design, Shanghai, China, 14–16 April 2010; pp. 22–26. [Google Scholar]
  30. Tanweer, M.; Suresh, S.; Sundararajan, N. Self Regulating Particle Swarm Optimization Algorithm. Inf. Sci. 2015, 294, 182–202. [Google Scholar] [CrossRef]
  31. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95 - International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  32. Tanweer, M.R.; Auditya, R.; Suresh, S.; Sundararajan, N.; Srikanth, N. Directionally driven self-regulating particle swarm optimization algorithm. Swarm Evol. Comput. 2016, 28, 98–116. [Google Scholar] [CrossRef]
  33. Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu (accessed on 1 August 2024).
  34. Naeem, M.; Brunner, C.; Leeb, R.; Graimann, B.; Pfurtscheller, G. Seperability of four-class motor imagery data using independent components analysis. J. Neural Eng. 2006, 3, 208. [Google Scholar] [CrossRef]
  35. Wang, J.; Feng, Z.; Lu, N. Feature extraction by common spatial pattern in frequency domain for motor imagery tasks classification. In Proceedings of the 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 5883–5888. [Google Scholar] [CrossRef]
  36. Wu, S.L.; Wu, C.W.; Pal, N.R.; Chen, C.Y.; Chen, S.A.; Lin, C.T. Common spatial pattern and linear discriminant analysis for motor imagery classification. In Proceedings of the 2013 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), Singapore, 16–19 April 2013; pp. 146–151. [Google Scholar] [CrossRef]
  37. Das, A.K.; Sundaram, S.; Sundararajan, N. A self-regulated interval type-2 neuro-fuzzy inference system for handling nonstationarities in EEG signals for BCI. IEEE Trans. Fuzzy Syst. 2016, 24, 1565–1577. [Google Scholar] [CrossRef]
Figure 1. IpT-SNC architecture and time-varying weight model of a synapse. (a) The architecture of the time-varying weight spiking neural network. (b) Time-varying weight $w(t)$ of a synapse connecting a presynaptic neuron and a postsynaptic neuron. $g_1(t)$, $g_2(t)$, and $g_3(t)$ represent the Gaussian functions constituting $w(t)$.
Figure 2. The variation in the training and testing classification accuracies of IpT-SNC with respect to variations in (a) the number of Gaussians per synapse, $K$; (b) the magnitude of the Gaussian amplitude ($A_R$) (where $[-A_R, +A_R]$ is the search range for the amplitudes of the Gaussians in each synapse); and (c) the swarm size, $P$.
Figure 3. Gaussian functions present in synapses connecting input and output neurons. The first two rows of the figure show the Gaussians present in synapses connected to output neuron 1, and the next two rows show the Gaussians present in synapses connected to output neuron 2. The dots represent the responses of the Gaussian functions to spikes generated by a class 1 input sample, and the diamonds represent the same for a class 2 input sample. Gaussian functions in synapses connecting (a) input neuron 1 to output neuron 1, (b) input neuron 2 to output neuron 1, (c) input neuron 3 to output neuron 1, (d) input neuron 4 to output neuron 1, (e) input neuron 5 to output neuron 1, (f) input neuron 6 to output neuron 1, (g) input neuron 7 to output neuron 1, (h) input neuron 8 to output neuron 1, (i) input neuron 1 to output neuron 2, (j) input neuron 2 to output neuron 2, (k) input neuron 3 to output neuron 2, (l) input neuron 4 to output neuron 2, (m) input neuron 5 to output neuron 2, (n) input neuron 6 to output neuron 2, (o) input neuron 7 to output neuron 2, and (p) input neuron 8 to output neuron 2.
Figure 4. The blue line denotes the weight of the synapse connecting input neuron 1 and output neuron 1. The red line denotes the weight of the synapse connecting input neuron 1 and output neuron 2.
Table 1. Hyperparameters.

Parameter | Value
$K$ | [5, 8]
$A_R$ | [1, 3]
$P$ | [90, 100]
$R$ | 6
$T_e$ | 3 ms
$T$ | 4 ms
Table 2. UCI repository datasets used for performance evaluation.

Dataset | # Features | # Classes | # Training Samples | # Testing Samples
Liver | 6 | 2 | 170 | 175
ECG | 10 | 2 | 66 | 65
MMG | 9 | 2 | 80 | 11
HEPATITIS | 19 | 2 | 78 | 77
PIMA | 9 | 2 | 384 | 384
BC | 9 | 2 | 350 | 333
ION | 34 | 2 | 175 | 176
Iris | 4 | 3 | 75 | 75
WINE | 13 | 3 | 60 | 118
AE | 5 | 4 | 62 | 137
Table 3. Performance comparison of IpT-SNC on binary-class classification datasets with SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN.

Dataset | Learning Algorithm | Architecture | Training Accuracy (%) | Testing Accuracy (%) | # Epochs
Liver | SpikeProp | 37-15-2 | 71.5 (5.2) | 65.1 (4.7) | 3000
Liver | TMM-SNN | 36-(5-8)-2 | 74.2 (3.5) | 70.4 (2.0) | 442
Liver | DC-SNN | 36-8-2 | 68.8 (4.4) | 70.0 (2.0) | 1000
Liver | SEFRON | 37-1 | 91.5 (5.4) | 67.7 (1.3) | 100
Liver | MeST | 36-2 | 74.8 (3.9) | 71.1 (3.2) | 100
Liver | CD-SNN | 37-10-2 | 73.9 (1.4) | 72.9 (1.7) | 74
Liver | T-SNN | 36-12-2 | 79.1 (2.1) | 73.0 (1.2) | 35
Liver | IpT-SNC | 36-2 | 85.4 (0.6) | 80.7 (2.8) | 500
ECG | SpikeProp | 61-10-2 | 86.6 (2.5) | 84.5 (3.0) | 1000
ECG | TMM-SNN | 60-(2-3)-2 | 86.5 (2.1) | 85.4 (1.7) | 177
ECG | DC-SNN | 60-8-2 | 88.2 (4.1) | 86.5 (2.1) | 1000
ECG | SEFRON | 61-1 | 88.6 (3.9) | 86.5 (3.3) | 100
ECG | MeST | 60-2 | 86.5 (3.2) | 85.5 (1.5) | 100
ECG | CD-SNN | 61-8-2 | 87.0 (2.0) | 87.0 (2.2) | 45
ECG | T-SNN | 60-8-2 | 81.5 (2.8) | 78.5 (7.2) | 81
ECG | IpT-SNC | 60-2 | 92.8 (0.7) | 92.7 (2.3) | 500
MMG | SpikeProp | 55-10-2 | 82.8 (4.7) | 81.8 (6.1) | 1000
MMG | TMM-SNN | 54-(5-7)-2 | 87.2 (4.4) | 84.9 (8.6) | 176
MMG | DC-SNN | 54-8-2 | 80.4 (6.2) | 94.5 (8.3) | 1000
MMG | SEFRON | 55-1 | 92.8 (5.0) | 82.7 (10.0) | 100
MMG | MeST | 54-2 | 98.2 (0.8) | 98.0 (0.4) | 100
MMG | CD-SNN | 55-6-2 | 85.3 (7.2) | 84.4 (9.1) | 53
MMG | T-SNN | 54-8-2 | 85.7 (2.1) | 85.6 (6.7) | 57
MMG | IpT-SNC | 54-2 | 93.7 (0.7) | 95.6 (4.2) | 500
HEPATITIS | SpikeProp | 115-15-2 | 87.8 (5.0) | 83.5 (2.5) | 1000
HEPATITIS | TMM-SNN | 114-(3-9)-2 | 91.2 (2.5) | 86.6 (2.2) | 192
HEPATITIS | DC-SNN | 114-8-2 | 87.7 (3.0) | 88.3 (1.4) | 1000
HEPATITIS | SEFRON | 115-1 | 94.6 (3.5) | 82.7 (3.3) | 100
HEPATITIS | MeST | 114-2 | 90.9 (5.7) | 88.7 (2.0) | 100
HEPATITIS | CD-SNN | 115-8-2 | 95.0 (3.2) | 95.0 (2.6) | 69
HEPATITIS | T-SNN | 114-8-2 | 88.5 (1.7) | 86.4 (4.2) | 75
HEPATITIS | IpT-SNC | 114-2 | 95.3 (0.6) | 94.1 (1.6) | 500
PIMA | SpikeProp | 55-20-2 | 78.6 (2.5) | 76.2 (1.8) | 3000
PIMA | TMM-SNN | 54-(5-14)-2 | 79.7 (2.3) | 78.1 (1.7) | 160
PIMA | DC-SNN | 49-12-2 | 78.6 (1.9) | 77.8 (1.2) | 1000
PIMA | SEFRON | 49-1 | 84.1 (1.5) | 74.0 (1.2) | 100
PIMA | MeST | 48-2 | 78.4 (2.5) | 77.3 (1.3) | 100
PIMA | CD-SNN | 49-13-2 | 79.4 (1.2) | 78.4 (2.4) | 48
PIMA | T-SNN | 24-18-2 | 79.6 (1.0) | 79.0 (0.8) | 48
PIMA | IpT-SNC | 48-2 | 82.5 (0.6) | 82.5 (1.2) | 500
BC | SpikeProp | 55-15-2 | 97.3 (0.6) | 97.2 (0.6) | 1000
BC | TMM-SNN | 54-(2-8)-2 | 97.4 (0.3) | 97.2 (0.5) | 70
BC | DC-SNN | 54-8-2 | 97.4 (0.7) | 97.8 (0.5) | 1000
BC | SEFRON | 55-1 | 98.3 (0.8) | 96.4 (0.7) | 100
BC | MeST | 54-2 | 98.2 (0.8) | 98.0 (0.4) | 100
BC | CD-SNN | 55-8-2 | 97.8 (1.0) | 97.3 (1.1) | 55
BC | T-SNN | 24-15-2 | 97.8 (0.3) | 97.4 (0.2) | 36
BC | IpT-SNC | 54-2 | 98.4 (0.1) | 98.8 (1.5) | 500
ION | SpikeProp | 205-25-2 | 89.0 (7.9) | 86.5 (7.2) | 3000
ION | TMM-SNN | 204-(23-34)-2 | 98.7 (0.4) | 92.4 (1.8) | 246
ION | DC-SNN | 199-20-2 | 97.1 (1.2) | 92.7 (1.5) | 1000
ION | SEFRON | 199-1 | 97.0 (2.5) | 88.9 (1.7) | 100
ION | MeST | 198-2 | 98.3 (2.6) | 93.2 (1.8) | 100
ION | CD-SNN | 205-12-2 | 93.0 (2.6) | 91.2 (2.1) | 38
ION | T-SNN | 204-15-2 | 97.8 (1.2) | 94.5 (0.7) | 35
ION | IpT-SNC | 198-2 | 99.2 (0.4) | 95.4 (1.1) | 500
Table 4. Performance comparison of IpT-SNC on multi-class classification datasets with SpikeProp, TMM-SNN, DC-SNN, SEFRON, MeST, CD-SNN, and T-SNN.

Dataset | Learning Algorithm | Architecture | Training Accuracy (%) | Testing Accuracy (%) | # Epochs
IRIS | SpikeProp | 25-10-3 | 97.2 (1.9) | 96.7 (1.6) | 1000
IRIS | TMM-SNN | 24-(4-7)-3 | 97.5 (0.8) | 97.2 (1) | 246
IRIS | DC-SNN | 24-11-3 | 96.1 (2.9) | 97.7 (1.4) | 1000
IRIS | MeST | 24-3 | 98.1 (1.5) | 98.0 (1.4) | 100
IRIS | CD-SNN | 25-8-3 | 98.3 (0.3) | 97.9 (1.7) | 51
IRIS | T-SNN | 24-10-3 | 97.5 (0.5) | 97.3 (0.4) | 48
IRIS | IpT-SNC | 24-3 | 98.7 (0) | 98.9 (0.8) | 500
WINE | SpikeProp | 79-10-3 | 99.2 (1.2) | 96.8 (1.6) | 1000
WINE | TMM-SNN | 78-3-3 | 100 (0) | 97.5 (0.8) | 80
WINE | DC-SNN | 78-10-3 | 98.2 (2.0) | 96.3 (1.0) | 1000
WINE | MeST | 78-3 | 100 (0) | 98.0 (1.2) | 100
WINE | CD-SNN | 79-8-3 | 100 (0) | 98.8 (1.1) | 33
WINE | T-SNN | 52-8-3 | 99.8 (0.2) | 98.9 (0.4) | 38
WINE | IpT-SNC | 78-3 | 100 (0) | 98.8 (0.8) | 500
AE | SpikeProp | 31-10-4 | 98.5 (1.7) | 97.2 (3.5) | 1000
AE | TMM-SNN | 30-(4-7)-4 | 97.6 (1.3) | 97.5 (0.7) | 12
AE | DC-SNN | 30-20-4 | 96.6 (2.2) | 96.5 (1.5) | 1000
AE | MeST | 30-4 | 99.2 (0.9) | 99.1 (0.3) | 100
AE | CD-SNN | 31-8-4 | 99.4 (0.2) | 99.0 (1.0) | 58
AE | T-SNN | 25-10-4 | 99.4 (0.1) | 99.1 (0.2) | 35
AE | IpT-SNC | 30-4 | 98.7 (0.6) | 98.9 (0.4) | 500
Table 5. Performance comparison of IpT-SNC with CSP-LDA, SRIT2NFIS, and DC-SNN on the BCI competition IV 2a dataset.

Subject | CSP-LDA | SRIT2NFIS | DC-SNN | IpT-SNC
A1 | 88.89 | 93.06 | 92.36 | 97.22
A2 | 51.39 | 68.75 | 66.67 | 79.86
A3 | 96.53 | 97.22 | 96.53 | 100
A4 | 70.14 | 75.00 | 77.78 | 86.81
A5 | 54.86 | 65.97 | 63.89 | 78.47
A6 | 71.53 | 72.22 | 73.61 | 86.80
A7 | 81.25 | 86.11 | 82.64 | 93.75
A8 | 93.75 | 97.22 | 97.92 | 100
A9 | 93.75 | 93.75 | 93.75 | 97.92
Mean | 78.01 | 83.26 | 82.79 | 91.2
Standard deviation | 17.01 | 12.76 | 12.27 | 8.45
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
