1. Introduction
In the field of neuromorphic computing, spiking neural networks have garnered significant attention as a biologically inspired computational model and are regarded as the third generation of neural networks [1]. This model is based on the spiking behavior of biomimetic neurons, allowing for the processing of spatiotemporal information with high energy efficiency. The representation of information by spiking neurons, known as neural coding [2], is closely associated with the input encoding, information transmission, learning algorithms, and output readout of spiking neural networks.
Currently, one of the predominant approaches for encoding the activity of spiking neurons is rate coding [3], which represents information by the firing rate of neurons. This method is simple and robust. However, rate coding suffers from slow information transmission and low processing efficiency, because firing rates must be estimated statistically over long time windows. Moreover, by considering only the firing rate, it disregards the crucial impact of precise spike timing on network activity, which contradicts relevant physiological findings [4].
Another widely adopted approach is temporal coding, which uses precise spike timing to represent information; examples include time-to-first-spike (TTFS) coding [5] and relative spike timing (RST) coding [6]. These methods speed up information transmission and are energy efficient, since each neuron needs to emit only a single spike. However, they are less robust against interference, and their sparse firing makes it difficult to extend such networks directly to multiple layers. Other encoding methods, such as phase coding [7] and burst coding [8], have also been proposed in the search for simple, efficient, and robust encoding strategies for spiking neurons.
This paper proposes a novel spike-based encoding method called activeness coding, which integrates the historical activity information of spiking neurons. Activeness quantifies a neuron's firing activity with a single scalar value, integrating the neuron's historical firing rate and precise spike-timing information at a limited resource cost. This encoding method can be applied directly to input encoding, learning algorithms, and output decoding in spiking neural networks, enabling resource savings, reduced complexity, and improved learning efficiency.
In the following sections, we begin with an exposition and analysis of the definition and computation of activeness. We then explore the representation capability and robustness of activeness through experimental investigations, as well as its performance on classical classification tasks. Finally, we summarize and discuss potential applications of activeness.
2. Proposed Methods
In the biological brain, neurons process and transmit information through seemingly instantaneous, stochastic, and disordered firing activities. Understanding the mechanisms of neuronal firing in the biological brain is of great significance for cognitive neuroscience and deciphering brain function. In spiking neural networks inspired by the brain, one of the primary tasks is to represent information at the level of neurons, which is known as neural coding.
A good neural coding scheme should accurately represent the input information of neurons (accuracy), yield consistent coding outcomes for different signal frequencies or intensities (robustness), maximize the inclusion of information to enhance coding efficiency (efficiency), and provide an explanation for the encoding mechanisms that align with the knowledge of neuroscience (interpretability).
Neuroscientific research has revealed that neuronal firing is a complex series of electrochemical reactions triggered by excitation. Sustained neuronal firing leads to the accumulation of calcium ions within the cell [9,10], and calcium ions participate in various processes of signal transmission and activity modulation [11]. Neuronal firing activity also continuously regulates the synthesis and degradation of proteins within the cell [12], and these proteins are involved in nearly all aspects of neuronal activity [13]. The changes in calcium ions and proteins within the neuron have a significant impact on synaptic plasticity and neuronal function. However, these changes are not instantaneous: they occur gradually through accumulation or degradation, as the result of long-term neuronal activity. This is different from the rapid processes occurring at synapses, such as the release of neurotransmitters controlled by the activity states of pre- and postsynaptic neurons, and the switching of ion channels; these rapid processes are the transient activities of neurons. Nevertheless, the two processes are closely related. Transient activities cause changes in regulatory substances (for example, continuous stimulation leads to the synthesis of transcription factors and new proteins, which may result in the formation of new synapses [14]), and these changes in substances in turn affect the intensity of each transient activity (for example, the concentration of calcium ions affects the movement of synaptic vesicles and the release of neurotransmitters [15], which directly influences the action of postsynaptic neurons).
Previous encoding schemes, including rate coding, temporal coding, phase coding, and burst coding, mainly reflect the transient activity of neurons and place little emphasis on historical context. To capture both transient changes and historical activity, this paper proposes a novel neural encoding scheme called activeness coding, defined as follows.
Activeness is a metric used to quantify the activity level of a neuron. When an action potential arrives, the activeness, denoted as $A$, increases by $R$ and subsequently decays to zero with a time constant $\tau_A$. The step size of activeness, $R$, is determined by the time interval since the last discharge. When a neuron fires, $R$ is updated to 1 and then decays to zero with a time constant $\tau_R$. The computation method is

$$\frac{dA(t)}{dt} = -\frac{A(t)}{\tau_A}, \qquad A(t^f) \leftarrow A(t^f) + R(t^f), \tag{1}$$

$$\frac{dR(t)}{dt} = -\frac{R(t)}{\tau_R}, \qquad R(t^f + \epsilon) \leftarrow 1, \tag{2}$$

where $t^f$ represents the moment of neuronal firing. The small positive constant $\epsilon$ in Equation (2) ensures that the activeness $A$ of the neuron is updated before $R$ is set to 1. This guarantees that the increase in $A$ is correlated with the time interval between neuronal firings. In other words, as the time interval between two firings becomes shorter, $R$ approaches 1, whereas for longer intervals, $R$ approaches 0. The relationships among the membrane potential $V$, the step size $R$, and the activeness $A$ of the neuron are illustrated in Figure 1.
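As a concrete illustration, the update scheme above can be sketched as a simple event-driven simulation. The time constants, spike times, and time step below are illustrative choices, not values taken from the paper:

```python
import math

def simulate_activeness(spike_times, tau_a=0.2, tau_r=0.05, t_end=1.0, dt=0.001):
    """Sketch of activeness coding: at each spike, A jumps by the current
    step size R and then decays with time constant tau_a; R is reset to 1
    only after A has been updated, and decays with time constant tau_r.
    All parameter values are illustrative assumptions."""
    spike_times = sorted(spike_times)
    times, a_trace, r_trace = [], [], []
    A, R = 0.0, 0.0
    next_spike = 0
    t = 0.0
    while t < t_end:
        # Exponential decay of both variables over one time step.
        A *= math.exp(-dt / tau_a)
        R *= math.exp(-dt / tau_r)
        if next_spike < len(spike_times) and t >= spike_times[next_spike]:
            A += R      # Eq. (1): activeness grows by the pre-reset step size,
            R = 1.0     # Eq. (2): R is set to 1 only after A has been updated.
            next_spike += 1
        times.append(t); a_trace.append(A); r_trace.append(R)
        t += dt
    return times, a_trace, r_trace

# Closely spaced spikes (0.10-0.14 s) push A up quickly, because R is still
# near 1 when the next spike arrives; isolated spikes contribute little.
times, a, r = simulate_activeness([0.10, 0.12, 0.14, 0.50, 0.80])
```

Note that the first spike in a train contributes almost nothing to $A$, since $R$ starts near zero; only subsequent, closely spaced spikes produce large increments, which is exactly the interval dependence described above.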
Activeness is a dimensionless scalar value. It is influenced not only by the most recent discharge of the neuron but also by all previous discharges. The step size $R$ reflects the precise timing of the most recent spike. When the input remains constant, the cumulative sum of step sizes $R$ reflects the average number of spikes within a given time period, which is consistent with the spike rate. Therefore, we believe that activeness encompasses both spike-rate information and precise spike-timing information. Detailed experimental validation and analysis are presented in Section 3.
In the activeness formula (see Equation (1)), $\tau_R$ determines the influence of the most recent spike on the activeness, while $\tau_A$ determines the influence of the historical spike activity on the activeness. Typically, $\tau_R$ is not greater than $\tau_A$; otherwise, the activeness encoding results would exhibit significant fluctuations.
It is worth noting that the activeness calculation model proposed in this paper exhibits similarities with the output voltage waveform of an LIF (leaky integrate-and-fire) neuron’s RC integrator in some intervals. However, their fundamental natures are entirely different and they are not interchangeable. First and foremost, they address different problems. LIF neurons (with synaptic models included) deal with the dynamic relationship between input spikes and the membrane potential of neurons. In contrast, activeness coding quantifies the degree of neural activity. Furthermore, they correspond to different neurophysiological processes. LIF neurons primarily simulate changes in the neuronal membrane potential, whereas, as mentioned earlier, activeness coding abstracts the processes related to internal substances like calcium ions and proteins within neurons. Moreover, they operate on different time scales. The membrane time constant of LIF neurons typically falls in the range of tens of milliseconds, while activeness coding integrates information over longer time scales, typically in the hundreds of milliseconds, to more accurately represent the activity characteristics of neurons. Finally, there are differences in computational details. Although both LIF neurons and activeness involve nonlinear integrators, unlike the fixed time constant of LIF’s RC circuit, the integration time constant in activeness is not constant but varies depending on the input spike pattern. Activeness coding also does not require a comparator similar to the membrane potential threshold used in LIF neurons.
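For contrast, a minimal LIF membrane update can be sketched as follows (parameter values are illustrative). Note the two features absent from activeness coding: a fixed membrane time constant and an explicit threshold comparator with reset:

```python
import math

def lif_step(v, input_current, dt=0.001, tau_m=0.02, v_rest=0.0,
             v_thresh=1.0, v_reset=0.0, resistance=1.0):
    """One Euler step of a leaky integrate-and-fire neuron (illustrative
    parameters). Unlike activeness, the time constant tau_m is fixed and
    a threshold comparator decides whether an output spike is emitted."""
    v += (dt / tau_m) * (-(v - v_rest) + resistance * input_current)
    spiked = v >= v_thresh
    if spiked:
        v = v_reset  # hard reset after the spike
    return v, spiked

# Constant suprathreshold drive makes the neuron fire periodically.
v, spikes = 0.0, []
for step in range(1000):
    v, s = lif_step(v, input_current=1.5)
    if s:
        spikes.append(step)
```

Running this yields strictly periodic firing, since the comparator and reset erase all history at every spike; activeness, in contrast, carries the firing history forward in its slowly decaying scalar.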
4. Discussion
4.1. Comparison
The activeness coding method is inspired by synaptic traces. Synaptic traces are a classic mathematical model used in spiking neural networks to characterize the impact of synaptic history on synaptic plasticity [21]. However, the two are fundamentally different, primarily in the following aspects:
Structural differences. Synaptic traces record the impact of neural activity on synapses and are typically defined as variables within the synapse. They include both pre-synaptic and post-synaptic traces. In contrast, activeness coding mainly considers the potential influence of a neuron’s own historical activity on itself and is only relevant to the neuron (soma). It is simpler and more straightforward.
Computational differences. Synaptic traces can be either all-to-all or only consider the nearest spiking event. Typically, constant values are accumulated in their calculations. On the other hand, the dynamic changes in the accumulation of activeness amplify the impact of precise discharge timing, thereby improving the efficiency of network operation.
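A minimal sketch of an all-to-all synaptic trace update illustrates the computational contrast: the trace accumulates a constant increment at every spike, whereas activeness accumulates the interval-dependent step $R$. The values below are illustrative:

```python
import math

def update_trace(trace, dt, tau, spiked, increment=1.0):
    """All-to-all synaptic trace: decay continuously and add a constant
    increment at every spike, regardless of the spike-timing history.
    Activeness instead adds a variable step R that depends on the
    previous inter-spike interval. Parameter values are illustrative."""
    trace *= math.exp(-dt / tau)
    if spiked:
        trace += increment
    return trace

# The trace rises by the same fixed amount at each spike.
x_pre = 0.0
for spiked in [False, True, False, True, True, False]:
    x_pre = update_trace(x_pre, dt=0.001, tau=0.02, spiked=spiked)
```

Because the increment here is constant, the trace alone cannot distinguish a burst of closely spaced spikes from evenly spread ones of the same count over a short window, which is the information the variable step in activeness preserves.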
Resource requirements. Learning based on synaptic traces requires maintaining two to three trace variables per synapse, whereas activeness encoding requires only two variables per neuron. Taking the classification performance test network in Section 3 as an example, a network trained with triplet STDP would need roughly 400 times as many trace variables as the activeness-based network needs. As the network size increases, the resource savings from this improvement become even more significant.
4.2. Potential Future Research
As a concise, macroscopic, efficient, and robust encoding method, activeness coding holds promise beyond input encoding and can be applied to the learning algorithm and network output inference.
Taking learning algorithms as an example, the macroscopic and robust expressive power of activeness coding makes it possible to improve weight-update strategies based on the precise discharge timing of pre- and postsynaptic neurons in unsupervised, supervised, and reward-based reinforcement learning. Adopting rules based on the activeness difference can lead to more stable network activity and faster learning convergence.
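One hypothetical form of such an activeness-difference rule is sketched below. Both the rule and its parameters are our illustrative assumption, not an algorithm published in this paper:

```python
def activeness_difference_update(w, a_pre, a_post, a_target, lr=0.01):
    """Hypothetical weight update driven by the activeness difference:
    push the postsynaptic neuron's activeness toward a target level,
    scaled by the presynaptic neuron's activeness. Illustrative only."""
    return w + lr * (a_target - a_post) * a_pre

# If the postsynaptic neuron is less active than desired, the weight grows;
# if it is more active than desired, the weight shrinks.
w_stronger = activeness_difference_update(w=0.5, a_pre=0.8, a_post=0.2, a_target=1.0)
w_weaker = activeness_difference_update(w=0.5, a_pre=0.8, a_post=1.5, a_target=1.0)
```

Because both activeness values are slowly varying scalars, such a rule avoids per-spike statistics and coincidence detection, which is where the stability and convergence benefits suggested above would come from.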
In terms of network output inference, many networks require additional classifiers or statistical computations to obtain human-understandable results. Activeness coding, by contrast, provides a single quantized value that directly reflects the activity of output-layer neurons, making the network's results readily interpretable.
In addition to using spiking neural networks with activeness coding for image classification tasks, we are also exploring its potential in cross-modal information perception and integration. In the future, the proposed method could be further investigated to tackle various cognitive tasks.