Article

The Influence of the Number of Spiking Neurons on Synaptic Plasticity

Department of Computer Engineering, Gheorghe Asachi Technical University of Iași, Dimitrie Mangeron 27, 700050 Iași, Romania
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(1), 28; https://doi.org/10.3390/biomimetics8010028
Submission received: 3 November 2022 / Revised: 4 January 2023 / Accepted: 6 January 2023 / Published: 11 January 2023

Abstract

The main advantages of spiking neural networks (SNNs) are their high biological plausibility and their fast response due to spiking behaviour. The response time decreases significantly in hardware implementations of SNNs because the neurons operate in parallel. Compared with traditional computational neural networks, SNNs use fewer neurons, which also reduces their cost. Another critical characteristic of SNNs is their ability to learn by event association, which is determined mainly by postsynaptic mechanisms such as long-term potentiation (LTP). However, in some conditions, presynaptic plasticity determined by post-tetanic potentiation (PTP) occurs due to the fast activation of presynaptic neurons. This violates the Hebbian learning rules, which are specific to postsynaptic plasticity. Hebbian learning improves the ability of SNNs to discriminate the neural paths trained by the temporal association of events, which is the key element of learning in the brain. This paper quantifies the efficiency of Hebbian learning as the ratio between the LTP and PTP effects on the synaptic weights. On the basis of this new idea, this work evaluates for the first time the influence of the number of neurons on the LTP/PTP ratio and consequently on the Hebbian learning efficiency. The evaluation was performed by simulating a neuron model that was successfully tested in control applications. The results show that the firing rate of the postsynaptic neuron (post) depends on the number of presynaptic neurons (pres), which increases the effect of LTP on the synaptic potentiation. When post activates at a requested rate, the learning efficiency varies in the opposite direction to the number of pres, reaching its maximum when no more than two pres are used. In addition, Hebbian learning is more efficient at lower presynaptic firing rates that are divisors of the target frequency of post. This study concluded that, when the electronic neurons model presynaptic plasticity in addition to LTP, the efficiency of Hebbian learning is higher when fewer neurons are used. This result strengthens the observations of our previous research, where an SNN with a reduced number of neurons could successfully learn to control the motion of robotic fingers.

1. Introduction

Spiking neural networks (SNNs) benefit from biological plausibility, fast response, high reliability, and low power consumption when implemented in hardware. An SNN operates using spikes, which are the effect of neuronal activation when a given threshold is exceeded and which provide the SNN with sensitivity to the occurrence of events [1,2]. Thus, one of the critical advantages of SNNs over traditional convolutional neural networks is the introduction of time into information processing. Another characteristic of SNNs is their ability to learn, which is based on the relative timing of events.

1.1. Long-Term Plasticity

The main mechanism that determines learning is long-term potentiation (LTP), which strengthens a synapse when the presynaptic neuron (pre) activates before the stimulated postsynaptic neuron (post). The reversed order of post and pre activation reduces the synaptic weights via long-term depression (LTD) [3]. The amplitude of the synaptic change due to pre–post and post–pre pairs depends strongly on the temporal difference between the activation of pre and post, respectively [4]. A detailed study of biological neurons in vitro showed that the LTP and LTD windows are asymmetric, making LTP dominate LTD [5], so the resultant effect is potentiation. Indeed, the essential mechanism of learning in the hippocampus is LTP, as stated in neuroscience [6]. LTP is also the basic element of Hebbian learning, which determines the potentiation of weak synapses when they are paired with strong synapses that activate the postsynaptic neuron [7]. This implies that Hebb's rules are critical for learning in the human brain, and they are the foundation of the most biologically plausible supervised SNN learning algorithms [8,9].
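To make the timing dependence described above concrete, the following sketch implements a generic pair-based spike-timing-dependent update with an asymmetric LTP/LTD window. It is a minimal illustration only; the amplitudes and time constants are assumed values, not parameters taken from the cited studies or from the electronic neuron presented later.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.02, a_minus=0.015, tau_plus=0.017, tau_minus=0.034):
    """Pair-based STDP weight change for a spike-time difference dt = t_post - t_pre (s).

    dt >= 0 (pre before post) gives potentiation; dt < 0 (post before pre) gives
    depression. The asymmetric window (a_plus > a_minus here) makes LTP dominate
    LTD on average, as reported for biological neurons.
    """
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)      # LTP branch
    return -a_minus * np.exp(dt / tau_minus)        # LTD branch

# Pre firing 5 ms before post strengthens the synapse; the reverse order weakens it.
print(stdp_dw(0.005), stdp_dw(-0.005))
```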

1.2. Hebbian Learning in Artificial Systems

Besides their biological importance, Hebbian rules are used in artificial systems, mainly for training competitive networks [8] and for storing memories in Hopfield neural networks [10,11]. Hebb's rule strengthens neural paths that show temporal correlations between pre and post activation. This implies that each neuron tends to pick out its own cluster of neurons, whose activation is correlated in time, by potentiating the synapses that contribute to the activation of the postsynaptic neuron [12,13]. In this case, each neuron competes to respond to a subset of inputs, matching the principles of competitive learning [8]. In addition, recent research showed that Hebbian learning is suitable for training SNNs of high biological plausibility to control robotic fingers using external forces, mimicking the principles of physical guidance [14]. Here, the effect of the strong synapses that are driven by sensors was associated with the effect of weak synapses driven by a command signal [15].
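As a minimal illustration of the rule itself, the snippet below applies a plain Hebbian update in which a weak synapse is strengthened only when its presynaptic activity coincides with the activation of the postsynaptic neuron; the learning rate and the upper bound are arbitrary demonstration values.

```python
def hebbian_update(w, pre_active, post_active, eta=0.05, w_max=1.0):
    """Plain Hebbian rule: strengthen the synapse only when pre and post
    activity coincide, with a simple upper bound on the weight."""
    if pre_active and post_active:
        w = min(w_max, w + eta)
    return w

# A weak synapse repeatedly paired with a path that activates post becomes potentiated.
w_weak = 0.1
for _ in range(10):
    w_weak = hebbian_update(w_weak, pre_active=True, post_active=True)
print(w_weak)  # 0.6
```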
Supervised learning based on gradient descent is more powerful than Hebbian learning [8] in computational applications. However, these error-correcting learning rules are not suitable for bioinspired SNNs because the explicit adjustment of the synaptic weights is typically not feasible. Therefore, for adaptive systems of high biological plausibility, Hebb's rules are more suitable for training the synapses in an unsupervised manner, as they are trained in the brain.
The repetitive activation of pres, independent of post activity, increases synaptic efficacy through presynaptic elements of learning such as post-tetanic potentiation (PTP) [16], which can last from a few seconds to minutes [17]. This synaptic potentiation represents an increase in the quantity of mediator released from the presynaptic membrane during pre activation [16,18]. PTP influences the motor learning specific to Purkinje cells, which play a fundamental role in motor control [19]. Taking into account that this type of presynaptic long-term plasticity occurs in the absence of postsynaptic activity, the Hebbian learning mechanisms are altered by PTP [18].

1.3. The Number of Neurons in SNNs

Each neuron is a complex cell comprising multiple interacting parts and small chambers containing molecules, ions, and proteins. The human brain is composed of on the order of 10^11 neurons connected by about 10^15 synapses. Creating mathematical and computational models would be an efficient route towards understanding the functions of the brain, but even with an exponential increase in computational power, this does not seem achievable in the near future. Even if it could be achieved, the resulting simulation may be as complex as the brain itself. Hence, there is a need for tractable methods that reduce the complexity while preserving the functionality of the neural system. The size effect in SNNs has been approached in various ways. Statistical physics formalism based on the many-body problem was used to derive the fluctuation and correlation effects in finite networks of N neurons as a perturbation expansion in 1/N around the mean-field limit of N → ∞ [20]. Another method used to optimise the size and resilience of SNNs is empirical analysis using evolutionary algorithms. Thus, smaller networks may be generated by using a multiobjective fitness function that incorporates a penalty for the number of neurons when evaluating every network in a population [21].
In addition, research on computational neural networks showed that, for classification problems, SNNs use fewer neurons than second-generation artificial neural networks (ANNs) do [22]. The hardware implementation of SNNs has also demonstrated its efficacy in modelling conditional reflex formation [23] and in controlling the contraction of artificial muscles composed of shape memory alloy (SMA). In the latter applications, SNNs with only a few excitatory and inhibitory neurons have been able to control the force [24,25] and learn the motion [14,15] of anthropomorphic fingers. Moreover, using fewer neurons is important for reducing the cost and increasing the reliability of the hardware implementation of SNNs.
Analysing the Hebbian learning efficiency of adaptive SNNs provides a useful tool for reducing the size of experimental networks and minimising the simulation time while preserving bioinspired features.

1.4. The Goal and Motivation of the Current Research

The presynaptic long-term plasticity determined by PTP reduces the efficiency of Hebbian learning, which is determined by LTP. The latter mechanism is critical for making the neural network respond to concurrent events, because it potentiates the untrained synapses only when they activate together with the trained neural paths. In contrast, PTP potentiates synapses in the absence of a postsynaptic response, meaning that causality is broken.
Considering these aspects, the goal of this paper is to determine the conditions under which the effect of LTP over PTP is maximised, increasing the efficacy of Hebbian learning. Typically, fewer neurons must fire at a higher rate, or have more strongly potentiated synapses, to activate post above the preset rates. Reducing the number of neurons can therefore increase both the firing rate of the pres and the synaptic weights that are necessary to reach the requested frequency of post.
At certain firing rates and synaptic weights, the ratio between the LTP and PTP rates can be higher, implying that associative learning is more efficient. Considering that dW_LTP represents the maximal contribution of LTP and dW_PTP the contribution of PTP to the synaptic weight during the training period t_L, there is a maximal ratio r_WMAX between the corresponding dW_LTP^rMAX and dW_PTP^rMAX. In this work, we consider that the maximal efficiency of Hebbian learning corresponds to r_WMAX and, consequently, to dW_LTP^rMAX. If a target frequency f_POST is requested for the postsynaptic neuron, then a minimal number of untrained presynaptic neurons n_UNT with weight dW_LTP^rMAX can be activated to reach f_POST. Therefore, in the ideal case when LTP is maximal, n_UNT depends on the functions that describe the weight variation by PTP and LTP, and on f_POST.
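The reasoning above can be illustrated numerically. The sketch below assumes two hypothetical weight-gain curves, a saturating LTP-like one and a steadily accumulating PTP-like one, and locates the training duration at which their ratio r_W is maximal; the curve shapes and constants are placeholders and do not represent the measured behaviour of the circuit.

```python
import numpy as np

# Hypothetical weight-gain curves over the training duration t_L (illustrative only):
# the LTP-like gain rises slowly and saturates, the PTP-like gain keeps accumulating.
t_L = np.linspace(0.01, 5.0, 500)                        # training duration (s)
dW_LTP = 1.0 - np.exp(-(t_L / 0.8) ** 2)                 # saturating LTP-like gain
dW_PTP = 0.05 * t_L + 0.1 * (1.0 - np.exp(-t_L / 0.2))   # accumulating PTP-like gain

r_W = dW_LTP / dW_PTP                                    # Hebbian learning efficiency
i = int(np.argmax(r_W))
print(f"r_W is maximal ({r_W[i]:.2f}) at t_L = {t_L[i]:.2f} s, "
      f"where dW_LTP = {dW_LTP[i]:.2f}")
```

For these placeholder curves, the maximum of r_W lies in the interior of the training interval, which is the situation analysed in the rest of the paper.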
Starting from these ideas, the contribution of this work is twofold: (i) the quantification of Hebbian learning efficiency as the ratio between LTP and PTP; (ii) the evaluation of the influence of the number of neurons on the efficiency of Hebbian learning, focusing on SNNs with a reduced number of neurons (fewer than 20 per area).
As presented in Section 1.2 and Section 1.3, there are several comprehensive studies related to Hebbian learning or focused on the influence of variation in the number of neurons on the performance of adaptive SNNs. However, there are no studies that overlap these two research directions in systems of high biological plausibility.
The rest of the paper is organised as follows: Section 2 presents the general structure of the neural network and the experimental phases focusing on the proposed neuron model, and on the implementation of PTP and LTP mechanisms. The experimental results along with the details for each measured item are presented in Section 3. The paper ends with Section 4, which discusses the results, focusing on the biological plausibility of the used model, and presents some considerations for future research.

2. Materials and Methods

The SNN is based on a neuronal model of high biological plausibility [14]. Although this electronic neuron was implemented and tested in PCB hardware, the analysis presented in this work is based on SPICE simulations of the electronic circuit.

2.1. The Model of the Artificial Neuron

An artificial neuron includes a SOMA and one or more synapses. The electronic SOMA models elements related to information processing, such as the temporal integration of incoming stimuli, the detection of the activation threshold, and a refractory period. Electronic synapses model presynaptic elements of learning, such as PTP, and the postsynaptic plasticity that determines Hebbian learning via long-term potentiation (LTP). In addition, a synapse stores the synaptic weight using a capacitor that can be charged or discharged in real time using cheap circuits [26,27]. Electronic synapses generate excitatory or inhibitory spikes that feed the corresponding posts.
Figure 1 shows the main elements related to learning that are included in the neuronal schematic detailed in Appendix A [14]. The neuron detects the activation threshold using the transistor T_M and, during activation, T_S generates a spike of variable energy E_SPK that depends on the synaptic weight w_s. In this work, we refer to w_s as the voltage V_W read across the capacitor C_L shown in Figure 1. The synapse is potentiated by PTP, which is modelled by the discharge of C_L when the neuron activates, and by LTP, which alters the charge in the capacitor during the activation of post.
Potential V_W determines the duration of the spike generated at OUT, modelling the effect of the synaptic weight on postsynaptic activity. The spike duration t_SPK is determined by V_W because, during SOMA activation, transistor T_S is open as long as V_U (which is proportional to V_W) is below the emitter–base voltage of T_S. The variation in V_U is given by:
$V_U = V_{U0} + \left(V_{DD} - V_{U0}\right) e^{-\frac{t}{R_D C_U}}$ (1)
The initial potential V_U0 is calculated using Equation (2) for the cut-off regime and Equation (3) for the saturated regime of transistor T_S, as follows:
$V_{U0} = \frac{V_W R_U}{R_U + R_D}$ (2)
$V_{U0} = \frac{\frac{V_W}{R_U} + \frac{V_B}{R_B}}{\frac{1}{R_D} + \frac{1}{R_U} + \frac{1}{R_B}}$ (3)
In these expressions, $V_W = V_{DD} + V_A - V_F$ and $V_B = V_E - V_{EB}$, where V_F and V_EB are the forward and emitter–base voltages, respectively. Similarly, after SOMA inactivation, V_U is restored to V_DD as follows:
$V_U = \left(V_{DD} - V_{UI}\right)\left(1 - e^{-\frac{t}{(R_C + R_D) C_U}}\right)$ (4)
where V_UI is the initial value of V_U when the SOMA inactivates and C_U starts charging.
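A short sketch of the relations above is given below: it evaluates the two expressions for V_U0 (cut-off and saturated regimes of T_S) over the V_W range used in this design and reports the RC time constants that govern V_U during activation and restoration. All component values are assumed for illustration; the actual values are those of the circuit in Appendix A.

```python
# Illustrative component values only; the real parameters are given in Appendix A.
V_DD, V_B = 3.3, 0.5                           # supply and base potentials (V), assumed
R_U, R_D, R_B, R_C = 10e3, 22e3, 47e3, 4.7e3   # resistances (ohm), assumed
C_U = 10e-9                                    # capacitance (F), assumed

def v_u0_cutoff(V_W):
    """Equation (2): divider potential when T_S is in the cut-off regime."""
    return V_W * R_U / (R_U + R_D)

def v_u0_saturated(V_W):
    """Equation (3): node potential (Millman form) when T_S is saturated."""
    return (V_W / R_U + V_B / R_B) / (1 / R_D + 1 / R_U + 1 / R_B)

tau_activation = R_D * C_U        # time constant of V_U during activation, Equation (1)
tau_restore = (R_C + R_D) * C_U   # time constant of the restoration to V_DD, Equation (4)

for V_W in (0.2, 0.9, 1.6):       # the weight-voltage range used in this design
    print(f"V_W = {V_W:.1f} V -> V_U0 = {v_u0_cutoff(V_W):.3f} V (cut-off), "
          f"{v_u0_saturated(V_W):.3f} V (saturated)")
print(f"tau_activation = {tau_activation * 1e6:.0f} us, "
      f"tau_restore = {tau_restore * 1e6:.0f} us")
```

Since V_U0 scales with V_W, the time at which V_U crosses the emitter–base voltage of T_S, and therefore the spike duration t_SPK, follows the stored weight.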

2.2. Model for PTP and LTP

The activation of the neuron, which lasts t_SPK = 44 µs, reverses the polarity of C_L, which is discharged by an amount given by:
$\Delta V_{PTP} = V_{DD}\, e^{-\frac{t_{SPK}}{(R_D + R_U) C_L}}$ (5)
Equation (5) models the potentiation of the synapse by PTP. For expressing LTP, we should consider the charge variation in C_L when it is discharged through C_A, followed by a reset of the charge in C_A during post activation.
During neuronal activation, the potential in the capacitor C_L varies as follows:
$\Delta V = V_{DD}\left(1 - e^{-\frac{t_{SPK}}{R_A C_E}}\right)$ (6)
where the equivalent capacitance is:
$C_E = \frac{C_L C_A}{C_L + C_A}$
Considering that ΔV_A and ΔV_C represent the variations in potential in C_A and C_L, respectively, we denote the ratio:
$k = \frac{\Delta V_A}{\Delta V_C} = \frac{C_A}{C_L}$
Thus, the variation in the potential in C_L that represents the temporary potentiation of the synapse is:
$\Delta V_C = \frac{C_E}{C_L}\, \Delta V$ (7)
During the neuronal idle state after its activation, the resultant variation of the potential in C_L and C_A is:
$\Delta V_S = \Delta V\, e^{-\frac{t_W}{(R_A + R_M) C_E}}$ (8)
where t_W is the time window t_post − t_pre between the moments of neuronal activation. C_A discharges into C_L until the potentials in both capacitors reach equilibrium. This variation restores the synaptic weight to the value it had before the activation of the presynaptic neuron. If the postsynaptic neuron fires during the restoration of the synaptic weight, capacitor C_A is discharged at a significantly higher rate until equilibrium is reached. Taking into account that R_L << R_A + R_M, the potential variation in C_L during post activation is negligible. This implies that the variation in the potential in C_L that models the weight variation by LTP is:
$\Delta V_{LTP} = \frac{C_E}{C_L}\, \Delta V_S$ (9)
ΔV_S decreases according to Equation (8), implying that ΔV_LTP depends on the time window t_W.
For this neuronal design, w_s varies in the opposite direction to V_W, which ranges over [0.2 V, 1.6 V]. This implies that a lower V_W models a higher synaptic potentiation. To simplify the presentation, in this work we refer to the variation dW in the voltage across C_L that occurs during potentiation.
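The sketch below transcribes the LTP-related relations above and computes how the weight variation ΔV_LTP decays with the pre–post time window t_W. The component values are assumed for illustration only; they are not the values of the circuit in Appendix A.

```python
import numpy as np

V_DD = 3.3                     # supply voltage (V), assumed
t_SPK = 44e-6                  # spike duration (s), from Section 2.2
R_A, R_M = 100e3, 220e3        # resistances (ohm), assumed
C_L, C_A = 100e-9, 10e-9       # weight and auxiliary capacitors (F), assumed
C_E = C_L * C_A / (C_L + C_A)  # equivalent series capacitance

def dV_LTP(t_W):
    """Weight variation by LTP for a pre->post time window t_W,
    following Equations (6), (8) and (9)."""
    dV = V_DD * (1.0 - np.exp(-t_SPK / (R_A * C_E)))   # variation during pre activation (6)
    dV_S = dV * np.exp(-t_W / ((R_A + R_M) * C_E))     # remaining variation after t_W (8)
    return (C_E / C_L) * dV_S                          # share reflected onto C_L (9)

for t_W in (0.5e-3, 2e-3, 10e-3):
    print(f"t_W = {t_W * 1e3:4.1f} ms -> dV_LTP = {dV_LTP(t_W) * 1e3:.2f} mV")
```

As expected from Equation (8), the gain falls off exponentially as the pre–post window widens, which is the timing selectivity that Hebbian learning relies on.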
Therefore, the experiments presented in the following sections focus on evaluating how the number of neurons affects the learning efficiency.

2.3. The Structure of the SNN

The synaptic configuration includes two presynaptic neural areas, preNA_T and preNA_UNT, which include n_T and n_UNT neurons, respectively. The pres included in these neural areas connect to a single post, as shown in Figure 2a. To allow weight variation by LTP, at the beginning of each experiment the synapses S_T between preNA_T and post were fully potentiated so that they could activate post, while the weights of the synapses S_UNT driven by preNA_UNT were minimal. The SNN included additional neurons PreAUX and PostAUX for evaluating the potentiation by PTP of S_AUX, whose initial weight had the same value as that of S_UNT. This allowed us to compare the PTP and LTP effects under similar conditions.
As shown in Figure 2b, the neurons in each presynaptic area were activated by constant potentials V_1, ..., V_N or by pulses, as detailed in the sequel. To model the variability of the electronic components, the input resistors R_1, R_2, ..., R_N shown in Figure 2b were set within a 10% interval, which varied the firing rate of the pres slightly.
The firing rate of the postsynaptic neuron f_POST and the variations in the synaptic weights dW_LTP and dW_PTP due to LTP and PTP, respectively, were determined via measurements on the simulated electronic signals [28,29]. The input voltages for the activation of the pres were set to several values so that the neurons activated in the range used during our previous experiments [14]. In order to highlight the influence of the number of neurons on the Hebbian learning efficiency, the initial synaptic weights were minimal, which extended their variation range.
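For reference, the test configuration of Figure 2a can be summarised by a small data structure such as the one below. The area and neuron names follow the text (preNA_T, preNA_UNT, PreAUX, PostAUX), while the nominal input resistance, the spread, and the random seed are arbitrary illustrative choices.

```python
import random

def build_test_snn(n_T, n_UNT, R_nominal=100e3, spread=0.10, seed=1):
    """Sketch of the configuration in Figure 2a: two presynaptic areas driving one post,
    plus an auxiliary pre/post pair used to measure pure PTP on S_AUX.
    Input resistors are spread within a 10% interval to model component variability."""
    rng = random.Random(seed)
    jitter = lambda: R_nominal * (1.0 + rng.uniform(-spread, spread))
    return {
        "preNA_T":   [{"id": f"PreT{i}",   "R_in": jitter()} for i in range(n_T)],
        "preNA_UNT": [{"id": f"PreUNT{i}", "R_in": jitter()} for i in range(n_UNT)],
        "post":      {"S_T": "fully potentiated", "S_UNT": "minimal"},
        "aux":       {"PreAUX": {"R_in": jitter()}, "PostAUX": {}, "S_AUX": "minimal"},
    }

snn = build_test_snn(n_T=1, n_UNT=3)
print([round(neuron["R_in"]) for neuron in snn["preNA_UNT"]])
```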

2.4. Experimental Phases

The experiments started with a preliminary phase in which we determined dW_LTP and dW_PTP for a single spike when f_UNT took several values, as well as the variation in f_POST with the number of neurons n_UNT. Following these preliminary measurements, we evaluated the efficiency of Hebbian learning by calculating the ratio r_W = dW_LTP/dW_PTP during several phases, as follows:
Phase 1. The value of dW_LTP was determined when n_T and n_UNT in the neural areas preNA_T and preNA_UNT, respectively, varied independently or simultaneously. These results were compared with the effect of PTP when only the neurons in the untrained area preNA_UNT were activated. The variation dW_LTP in the synaptic weight included the potentiation due to LTP, determined by pre–post pair activation, and the potentiation due to PTP, which occurred with each pre action potential.
Typically, the frequency of post can be controlled within certain limits by adjusting the firing rate of the pres, independent of the number of neurons. In order to simplify the SNN structure during Phases 2 and 3, preNA_T included one neuron.
Phase 2. Next, we determined the variation in dW_LTP and dW_PTP when the synapses driven by preNA_UNT were trained until they were able to activate post in the absence of preNA_T activity.
Phase 3. For the last phase of the experiments, we considered a fixed frequency f_M of the output neuron that matched the firing rate of the output neurons that actuated the robotic joints in our previous experiments [14]. Thus, the SNN was trained until the output firing rate reached f_M = 100 Hz when post was stimulated only by preNA_UNT, independent of preNA_T. In order to isolate the contribution of PTP from dW_LTP, the neuron PreAUX was activated at the same rate as preNA_UNT, and dW_PTP was measured.
For the untrained pres, we set frequencies that were not divisors of the firing rate of post, mimicking a less favourable scenario of neuronal activation. In this case, the time interval between pre and post activation varied randomly, increasing the diversity of the weight gain per action potential of post. In a favourable scenario, the firing rate of the pres is a divisor of the frequency of post, which improves the weight gain via the synchronisation of neuronal activation.
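The effect of choosing presynaptic frequencies that are, or are not, divisors of the post frequency can be illustrated with the short sketch below. It assumes strictly periodic spike trains with a fixed offset (an idealisation, not the simulated circuit): divisor frequencies keep the pre-to-post interval constant, whereas non-divisor frequencies make it drift from spike to spike, which diversifies the weight gain per post action potential.

```python
import numpy as np

def pre_post_intervals(f_pre, f_post, duration=0.2, offset=0.002):
    """Time from each pre spike to the next post spike, assuming strictly
    periodic trains with post shifted by a small fixed offset (illustrative)."""
    t_pre = np.arange(0.0, duration, 1.0 / f_pre)
    t_post = np.arange(offset, duration + offset, 1.0 / f_post)
    gaps = [float(t_post[t_post > t].min() - t) for t in t_pre if (t_post > t).any()]
    return np.round(gaps, 5)

# f_pre a divisor of f_post = 100 Hz: every pre spike sees the same pre->post window.
print(pre_post_intervals(25.0, 100.0))
# f_pre not a divisor: the window changes from spike to spike.
print(pre_post_intervals(75.0, 100.0))
```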

3. Results

The results obtained during the experimental phases described above are presented here.

3.1. Preliminary Phase

In order to assess the influence of the electronics on synaptic potentiation during a single spike, we determined the weight variation by PTP for several values of V_IN when a pre activated once. As presented in Figure 3a, PTP decreased as f_UNT increased. However, over the long term, this variation was compensated by the number of spikes per time unit, which increased at a higher rate with f_UNT. A similar evaluation was performed for LTP when post was activated by the trained neurons at t = 0.002 s after the activation of the untrained pre. In this case, the influence of LTP presented in Figure 3b was extracted from the measured dW by subtracting the PTP effect shown in Figure 3a.
Typically, the output frequency of an SNN depends on the number of pres that stimulate post, as shown in Figure 4a. Starting from this observation, we determined dW_LTP after 2 s of training for different numbers of pres in preNA_UNT and preNA_T when f_UNT = 75 Hz and f_T = 100 Hz. Figure 4b–d show that dW_LTP depended on the number of pres, following different patterns for the trained and untrained neurons. In addition, the learning rate by LTP increased with the number of pres, mainly due to the higher values of f_POST determined by the activation of more pres.

3.2. The Efficiency of Hebbian Learning

The variation in the ratio r_W with the synaptic weight voltage V_W is presented in Figure 5a. This represents the ideal case when LTP is maximal, which was obtained by activating the postsynaptic neuron shortly after the untrained pre. r_W was maximal for a specific weight that was far from the limits of the variation interval.
The weight variation dW for different firing rates of the pres was determined for both PTP and LTP when the neurons activated for a fixed period of time t = 2 s. The data plotted in Figure 5b show that the ratio r_W decreased significantly when the frequency of the untrained pres was above 50 Hz. Thus, taking into account that r_W was almost stable for a single neuron in preNA_UNT, for the next experimental phase we evaluated the influence of the number of neurons on r_W for the fixed activation frequency f_UNT = 50 Hz.
In this setup, the SNN was trained until the first activation of post by the synapses that were potentiated by LTP. The PTP level for the synapse S_AUX was determined via the activation of the auxiliary neuron PreAUX (see the SNN structure in Figure 2a) at the same frequency as that of the untrained pres. In order to determine whether r_W had a similar variation for another frequency of the neurons in preNA_UNT, we performed similar measurements for f_UNT = 75 Hz. As presented in Figure 6, the variation in r_W showed that the best learning efficiency was obtained for n_UNT = 1 neuron when f_UNT = 50 Hz and for n_UNT = 3 neurons when f_UNT = 75 Hz. The different numbers of pres indicated that f_UNT influences the optimal number of neurons for the best learning efficiency when the neural paths are trained until the first activation of post by preNA_UNT, independent of preNA_T.
The next experimental phase evaluated r_W and the duration of the training process t_L when the firing rate of the output neuron reached f_M = 100 Hz, while the untrained pres in the area preNA_UNT activated at several firing rates from a set that included divisors of f_M.
The plots in Figure 7a,b show that the weight variations dW_PTP and dW_LTP decreased when the number of untrained pres n_UNT increased. Typically, dW_PTP is proportional to t_L, implying that the SNN learns faster when more neurons activate post. The improved value of t_L was determined by the lower weights that were necessary to activate post at the requested firing rate. In order to compare dW_PTP and dW_LTP, we determined the w_s potentiated by PTP when the neuron PreAUX was activated at the same rate f_UNT as the neurons in the area preNA_UNT. Figure 8a shows the variation in r_W for several firing rates f_UNT that were not divisors of f_M. In this case, the local maximum r_W^local occurred for a single neuron per area (n_UNT = 1). Taking into account that LTP may be more efficient when f_UNT is a divisor of f_M, due to the synchronisation of the pre–post neurons, we evaluated the weight variation for f_UNT = 25, 33.3, and 50 Hz. In order to eliminate the variation in PTP with the continuous input voltage of the neurons, the pres were activated with digital pulses of the same amplitude generated at rate f_UNT. The results presented in Figure 8b show that the maximal r_W^local was obtained for n_UNT = 2 neurons. The best learning efficiency r_WMAX was obtained when f_UNT = 25 Hz and the untrained presynaptic area included two neurons.
Typically, the weight variations with the training duration t_L by LTP and PTP follow different patterns. The difference between the two functions implies that there is a value t_L^max for which the ratio between LTP and PTP is maximal. This value corresponds to a synaptic weight w_LTP obtained by LTP and, consequently, to a potential V_W. Typically, there is a minimal number of pres n_UNT firing at a fixed frequency that are able to activate a post when the weight is w_LTP. In our work, the best r_W = 4.28, obtained for n_UNT = 2 neurons, corresponded to the potential V_W = 0.7 V in the weight capacitor.

4. Discussion and Conclusions

At the synaptic level, the neural paths in the brain are trained by associative or Hebbian learning, which is based on long-term potentiation, the postsynaptic element of learning. From a biological point of view, presynaptic long-term plasticity violates the Hebbian learning rules, which depend on postsynaptic activity. Previous research on SNNs showed that control systems can use a reduced number of electronic neurons, while in classification tasks SNNs use fewer neurons than traditional CNNs do. Starting from these ideas, this work focused on evaluating the influence of the number of neurons on the efficiency of Hebbian learning, characterised as the ratio of the LTP and PTP effects on the synaptic weights. A higher ratio indicates a stronger effect of LTP and, consequently, a greater ability of the SNN to discriminate the neural paths trained by associative learning from the paths where only presynaptic plasticity occurs.
The simulation results showed that, although LTP depends mainly on the frequency of the postsynaptic neuron, the number of neurons affects the Hebbian learning efficiency when the posts must reach a predefined frequency. In this case, the best LTP/PTP ratio was obtained when the frequency of the untrained pres was the lowest divisor of the target frequency of post. The efficiency of Hebbian learning reached a maximum for two pres and decreased as the number of pres increased further. Taking into account that the LTP/PTP ratio was better for a certain number of neurons, we can deduce that certain synaptic weights result in a better Hebbian learning efficiency. Indeed, the position of the maximal r_W inside the variation interval (Figure 8b) matched the variation in r_W in the ideal case presented in Figure 5a. This implies that the minimal number of neurons necessary to activate post at the requested firing rate was related to the synaptic weight.
In conclusion, previous research showed that electronic SNNs with a reduced number of neurons can be trained efficiently by Hebbian learning, while the current research strengthens this idea by showing that fewer neurons improve associative learning. This could reduce the cost and improve the reliability of the hardware implementation of SNNs.

Author Contributions

Conceptualisation, M.H.; methodology, G.-I.U. and M.H.; simulation, G.-I.U.; validation, M.H. and A.B.; formal analysis, G.-I.U.; investigation, G.-I.U. and M.H.; data curation, G.-I.U.; writing—original draft preparation, G.-I.U. and M.H.; writing—review and editing, M.H. and A.B.; supervision, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Doctoral School of TUIASI.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN  Artificial neural network
LTD  Long-term depression
LTP  Long-term potentiation
PCB  Printed circuit board
PTP  Post-tetanic potentiation
SMA  Shape memory alloy
SNN  Spiking neural network

Appendix A

Figure A1 presents the schematic circuit, including the parameter values, of the electronic neuron that was implemented in PCB hardware [28]. The neuron includes one electronic soma (SOMA) and one or more electronic synapses (SYN). The SOMA detects the neuronal activation threshold using transistor T_1 and activates the SYNs. The SOMA of the postsynaptic neurons that are stimulated by excitatory or inhibitory synapses includes an integrator of the input activity. When the SOMA activates the connected SYNs, S_OUT generates pulses at the SYN output N_OUT, whose energy depends on the charge stored in the weight capacitor C_L.
Figure A1. The schematic of (a) the electronic SOMA and (b) the electronic synapse (SYN).

References

  1. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Netw. 1997, 10, 1659–1671. [Google Scholar] [CrossRef]
  2. Blachowicz, T.; Grzybowski, J.; Steblinski, P.; Ehrmann, A. Neuro-Inspired Signal Processing in Ferromagnetic Nanofibers. Biomimetics 2021, 6, 32. [Google Scholar] [CrossRef] [PubMed]
  3. Bi, G.Q.; Poo, M.M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 1998, 18, 10464–10472. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Karmarkar, R.; Buonomano, D. A Model of Spike-Timing Dependent Plasticity: One or Two Coincidence Detectors? J. Neurophysiol. 2002, 88, 507–513. [Google Scholar] [CrossRef] [Green Version]
  5. Sjostrom, P.J.; Turrigiano, G.; Nelson, S. Rate, Timing, and Cooperativity Jointly Determine Cortical Synaptic Plasticity. Neuron 2001, 32, 1149–1164. [Google Scholar] [CrossRef] [Green Version]
  6. Whitlock, J.R.; Heynen, A.J.; Shuler, M.G.; Bear, M.F. Learning Induces Long-Term Potentiation in the Hippocampus. Science 2006, 313, 1093. [Google Scholar] [CrossRef] [Green Version]
  7. Barrionuevo, G.; Brown, T.H. Associative long-term potentiation in hippocampal slices. Proc. Natl. Acad. Sci. USA 1983, 80, 7347–7351. [Google Scholar] [CrossRef] [Green Version]
  8. McClelland, J. How far can you go with Hebbian learning, and when does it lead you astray? Atten. Perform. 2006, 21, 33–69. [Google Scholar]
  9. Gavrilov, A.; Panchenko, K. Methods of Learning for Spiking Neural Networks. A Survey. In Proceedings of the 13th International Scientific-Technical Conference APEIE, Novosibirsk, Russia, 3–6 October 2016. [Google Scholar]
  10. Alemanno, F.; Aquaro, M.; Kanter, I.; Barra, A.; Agliari, E. Supervised Hebbian Learning. Europhys. Lett. 2022, 141, 11001. [Google Scholar] [CrossRef]
  11. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [Green Version]
  12. Kempter, R.; Gerstner, W.; van Hemmen, L. Hebbian learning and spiking neurons. Phys. Rev. E 1999, 59, 4498. [Google Scholar] [CrossRef]
  13. Song, S.; Miller, K.D.; Abbott, L. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919–926. [Google Scholar] [CrossRef]
 14. Hulea, M.; Uleru, G.I.; Caruntu, C.F. Adaptive SNN for anthropomorphic finger control. Sensors 2021, 21, 2730. [Google Scholar] [CrossRef] [PubMed]
  15. Uleru, G.I.; Hulea, M.; Manta, V.I. Using hebbian learning for training spiking neural networks to control fingers of robotic hands. Int. J. Humanoid Robot. 2022, 2250024. [Google Scholar] [CrossRef]
  16. Powell, C.M.; Castillo, P.E. 4.36—Presynaptic Mechanisms in Plasticity and Memory. In Learning and Memory: A Comprehensive Reference; Byrne, J.H., Ed.; Academic Press: Cambridge, MA, USA, 2008; pp. 741–769. [Google Scholar]
  17. Fioravante, D.; Regehr, W.G. Short-term forms of presynaptic plasticity. Curr. Opin. Neurobiol. 2011, 21, 269–274. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Yang, Y.; Calakos, N. Presynaptic long-term plasticity. Front. Synaptic Neurosci. 2013, 17, 8. [Google Scholar] [CrossRef] [Green Version]
  19. Kano, M.; Watanabe, M. 4—Cerebellar circuits. In Neural Circuit and Cognitive Development (Second Edition); Rubenstein, J., Rakic, P., Chen, B., Kwan, K.Y., Eds.; Academic Press: Cambridge, MA, USA, 2020; pp. 79–102. [Google Scholar]
  20. Buice, M.A.; Chow, C.C. Dynamic Finite Size Effects in Spiking Neural Networks. PLoS Comput. Biol. 2013, 9, 1659–1671. [Google Scholar] [CrossRef] [Green Version]
  21. Dimovska, M.; Johnston, T.; Schuman, C.; Mitchell, J.; Potok, T. Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks. In Proceedings of the IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 10–12 October 2019; pp. 433–439. [Google Scholar]
  22. Hussain, I.; Thounaojam, D.M. SpiFoG: An efficient supervised learning algorithm for the network of spiking neurons. Sci. Rep. 2020, 4, 13122. [Google Scholar] [CrossRef]
  23. Hulea, M.; Barleanu, A. Electronic Neural Network For Modelling The Pavlovian Conditioning. In Proceedings of the International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 19–21 October 2017. [Google Scholar]
  24. Uleru, G.I.; Hulea, M.; Burlacu, A. Bio-Inspired Control System for Fingers Actuated by Multiple SMA Actuators. Biomimetics 2022, 7, 62. [Google Scholar] [CrossRef]
  25. Hulea, M.; Uleru, G.I.; Burlacu, A.; Caruntu, C.F. Bioinspired SNN For Robotic Joint Control. In Proceedings of the International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 21–23 May 2020; pp. 1–5. [Google Scholar]
 26. Hulea, M.; Burlacu, A.; Caruntu, C.F. Intelligent motion planning and control for robotic joints using bio-inspired spiking neural networks. Int. J. Humanoid Robot. 2019, 16, 1950012. [Google Scholar] [CrossRef]
  27. Hulea, M.; Barleanu, A. Refresh Method For The Weights of The Analogue Synapses. In Proceedings of the International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 8–10 October 2020; pp. 102–105. [Google Scholar]
  28. Hulea, M. Electronic Circuit for Modelling an Artificial Neuron. Patent RO126249 (A2), 29 November 2018. [Google Scholar]
  29. Hulea, M.; Ghassemlooy, Z.; Rajbhandari, S.; Younus, O.I.; Barleanu, A. Optical Axons for Electro-Optical Neural Networks. Sensors 2020, 20, 6119. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The main elements of the neuronal schematic that are used to model the mechanisms of learning.
Figure 2. (a) The structure of the SNN that was used to perform the tests; (b) R varied among the presynaptic neurons Pre_1, Pre_2, ..., Pre_N.
Figure 3. Weight variation with f_pre during a single spike by (a) PTP and (b) LTP for a time window.
Figure 4. (a) The frequency of the output neuron. The synaptic weight variation by LTP after 2 s of training for variation in the number of (b) untrained neurons with f_UNT = 75 Hz; (c) trained neurons with f_T = 100 Hz; (d) both trained and untrained neurons.
Figure 5. (a) Variation in the ratio r_W with the synaptic weight; (b) the ratio of synaptic weight variation between PTP and LTP after 2 s of training.
Figure 6. The ratio of the potentiation levels between LTP and PTP when the SNN was trained until the first activation of post, for (a) f_UNT = 50 Hz and (b) f_UNT = 75 Hz.
Figure 7. Learning until f_POST = 100 Hz when the number of untrained neurons n_UNT varied: (a) dW_LTP and (b) dW_PTP for f_UNT = 50 Hz.
Figure 8. Variation in the ratio r_W = dW_LTP/dW_PTP with n_UNT when (a) f_UNT is not a divisor of f_M and (b) f_UNT is a divisor of f_M.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
