Review

Precise Spiking Motifs in Neurobiological and Neuromorphic Data

by Antoine Grimaldi 1, Amélie Gruel 2, Camille Besnainou 1, Jean-Nicolas Jérémie 1, Jean Martinet 2 and Laurent U. Perrinet 1,*

1 INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
2 SPARKS, Côte d’Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
* Author to whom correspondence should be addressed.
Brain Sci. 2023, 13(1), 68; https://doi.org/10.3390/brainsci13010068
Submission received: 16 November 2022 / Revised: 20 December 2022 / Accepted: 23 December 2022 / Published: 29 December 2022
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)

Abstract: Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on the one hand, binary, existing or not without further detail, and, on the other hand, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology in order to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm to enable the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could yield significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.

1. Introduction: Importance of Precise Spike Timings in the Brain

1.1. Is There a Neural Code?

Neural activity is directly influenced by our immediate environment and by internal states, and it is structured so as to generate motor actions. The efficiency of these actions is key for survival, which is the sole determinant in the light of natural selection. A central question of modern neuroscience is to better understand the essence of neural activity, as exemplified by the recordings observed in neurobiological experiments. One sometimes uses the expression “decoding the neural code”, although this implies the existence of a code, i.e., an explicit representation of cognitive processes within the neural activity. Nevertheless, we will use this terminology in all generality to denote the existence of a structure in neural activity. In this respect, it is reasonable to state that neural activity may be related to specific measurable variables. Since Galvani’s experiments, we know that electrical activity applied to muscular nerves can cause the contraction of a frog’s limb (for a review, see [1]). A central and well-studied means of communication between neurons consists of specific electrochemical events called action potentials, or spikes, which were first discovered at the beginning of the 20th century [2]. In that study, the frequency at which these spikes are emitted was shown to be roughly commensurate with the stretch of the frog’s limb. In the scope of this article, we focus mainly on these spikes in vertebrate systems. They can be described as brief (about one millisecond) and prototypical, i.e., “all-or-none”, impulses that propagate along the axons of neurons. Typically, a postsynaptic neuron receives incoming spikes from other afferent neurons on the arborized “input” dendrite. The integration of these spikes by the dendritic tree and the soma of the postsynaptic neuron results in the modification of its membrane potential, possibly leading to the emission of an action potential along its “output” axon to reach efferent neurons. Except notably in the retina, where some neurons communicate with graded potentials [3], it is assumed that spike trains are the main component of the neural code. Until recently, most neurophysiologists used the temporal evolution of the firing rate (for instance, computed as the average occurrence of spikes in small temporal windows of about 100 ms) in order to characterize the dynamical activity of neurons. This may be extended by computing different statistics on each neuron’s sequence of spikes [4], but also the dependence across neurons [5].
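To make this rate-based baseline concrete, here is a minimal sketch in Python (using only NumPy; all names and numbers are illustrative, not taken from the cited studies). It simulates a spike train whose underlying rate ramps up, then estimates the firing rate by averaging spike counts in sliding 100 ms windows, exactly the kind of description discussed above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One second of spiking simulated as a Poisson process whose rate
# ramps from 5 Hz to 50 Hz (a hypothetical, stretch-like signal).
dt = 1e-3                                  # 1 ms resolution
t = np.arange(0.0, 1.0, dt)                # time axis in seconds
rate = 5.0 + 45.0 * t                      # instantaneous rate in Hz
spikes = rng.random(t.size) < rate * dt    # binary spike train

# Rate coding: average spike count in sliding 100 ms windows.
win = int(0.1 / dt)
kernel = np.ones(win) / (win * dt)         # box filter, units of Hz
rate_estimate = np.convolve(spikes, kernel, mode="same")

print(f"{spikes.sum()} spikes; mean estimated rate: "
      f"{rate_estimate.mean():.1f} Hz")
```

Note how such a description deliberately discards the precise position of each spike within the window; the rest of this review examines what is lost in doing so.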
However, computational neuroscience models have suggested that the precise timing within a sequence of spikes may play a crucial role and that neurons may act as synchrony detectors as well as integrators [6]. In particular, it is possible that the minute arrangement of temporal delays between neurons provides a computational advantage. We will investigate this very hypothesis in this review. In comparison to a classical analog vector of inputs, the event-based representation observed in the neural code is essential for understanding information processing [7]. For instance, it expands the representational capabilities of the rate coding hypothesis, which relies only on the firing rate, by considering representations based on the precise timing of single spikes. Additionally, numerous studies have demonstrated the importance of precise timing in neural population activity [8], of efficient encoding thanks to the use of spike latencies [9,10] and of precise timing in the auditory system [11,12]. All these findings, and more [13,14], highlight the importance of the temporal aspect of the neural code and further suggest the existence of precise spatio-temporal spiking motifs in the inputs which excite neurons. A mathematical formalization would be particularly well-suited to neuromorphic computing [15], and would allow for the supervised or self-supervised learning of such motifs in any event-driven data. Validating this hypothesis would also be a crucial step in our understanding of neural processes.

1.2. Dynamics of Vision and Consequences on the Neural Code

Let us start with a focus on the state-of-the-art of the role of dynamics in vision. Broadly speaking, vision is the set of processes that allow us to make sense of the world through luminous signals, and it is an intensively studied field in neuroscience, particularly with respect to deciphering the neural code. In most mammals, light enters the eye and induces neural activity in the retina, which preserves a certain similarity between the topology of the external visual space and its representation, a property called retinotopy. The origins of this neuroscientific question can be found in the first experiments of Pierre Flourens who, using lesions in animals, demonstrated the relationship between visual sensations and activity in the cerebral cortex [16,17]. This was also observed when recording the activity of the frog’s visual system [2]. In a series of seminal studies, Hubel and Wiesel [18] showed that this activity could be selective to different features, such as the visual orientation or motion of elementary contours. For a large proportion of neurons, there is a remarkably monotonic relationship between the contrast of visual features and the firing frequency of neurons. However, there is no consensus to explain the multiple nonlinear mechanisms that transform the visual scene into retinotopic neural activity maps, even though these processes seem to constitute essential pieces of this puzzle [19].
In particular, there have been some remarkable findings when studying the dynamics of vision. For instance, Simon Thorpe’s group has, over the last decades, provided numerous examples demonstrating that humans can categorize briefly presented images in a fraction of a second. Their experiments consisted of asking subjects to categorize images that do or do not contain animals [20]. The results showed that humans were able to perform this task very well (with a success rate of more than 95%) but, above all, that a differential activity for the two categories of images could be observed by electroencephalography, showing that this differentiation emerges with a very short latency in neural activity. These results have been extended to several species, including primates. Different experimental protocols have shown, for example, that the motor response could be extremely fast (of the order of 120 ms) when the task was to perform a saccade [21]. This fast processing correlates with the surprising experiments on rapid serial detection, which consist of presenting a fast succession of different images and decoding via the EEG whether the observer can detect, for example, the presence of an animal [22]. As expected, the performances decrease progressively as the frequency of presentation of the images increases. However, it has been shown in the macaque that a significant performance could be maintained with a presentation time of only 14 ms per image.
Although surprising, this speed of the visual cortex in primates is compatible with the latencies that are recorded at the neurophysiological level. The rapid propagation of visual information through the thalamus and then to the primary visual cortex takes about 45 ms in the macaque [23] and about 60 ms in humans [24]. This functioning of visual processing as a forward pass is most prominent in fast processing (see Figure 1, which is inspired by similar schematics for monkeys in [29]), and can be complemented by feedback loops from the higher areas to the sensory areas [25]. An important consequence of this speed of processing is that it implies that processing is carried out using only very few spikes per layer. As a comparison, the latencies in macaque monkeys are approximately as follows: retina, 20–40 ms; V1, 40–60 ms; IT, 80–100 ms; MC, 140–190 ms; and to finger muscles, 180–260 ms. Note that, since maximal conduction speeds are roughly constant, these latencies are comparable to those found in humans, with a ratio given by the physical size of the whole system. It follows that, if we consider that a behavioral response occurs in only 200 ms, it would involve about ten processing stages along the “forward” pathways of the visual system. Such processes were indeed efficiently reproduced in feed-forward models trained with back-propagation [26,27]. At the same time, it was demonstrated that one spike requires a significant amount of time (about 10 ms) to be conducted from one layer to the next [28,29]. As a consequence, these results suggest that “like other senses, vision relies heavily on temporal strategies and temporal neural codes to extract and represent spatial information” [30].

1.3. How Precise Spike Timing May Encode Vectors of Real Values

Let us now focus on one processing step along a cortical pathway. Sensory data are most often represented by continuous values, such as the energy produced by a flow of photons hitting the different photoreceptors of the retina. How may such information be encoded in neuronal activity? The analysis of generic raster plots reveals particular traits that hint at the role of precise timing. For instance, the firing rate of cortical cells in awake monkeys is highly irregular [31], which makes it, at first sight, inconsistent with the temporal integration of a firing rate. Remarkably, it was observed that the response of a neuron in a cortical slice to a current step could be highly non-reproducible: while the first spike is aligned to the stimulus onset, the subsequent spike times tend to diffuse across independent repetitions of the stimulation [32,33] (see Figure 2). However, if that same neuron is driven by a frozen noise, that is, a highly dynamic signal which is repeatedly replayed from trial to trial, then the output spikes are highly reproducible (for a review, see [34]). This is consistent with the differential role of different stimulus frequencies (for instance, the gamma range around 80 Hz) on the reliability of spike timing reported in [35]: “we found that, as expected given the resistive and capacitive properties of cortical neurons, low frequencies have a larger effect on the membrane potential of cortical neurons than do higher frequencies. However, increasing the amount of gamma range fluctuations in a stimulus leads to more precise timing of action potentials”.
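The following sketch illustrates this reliability effect with a simple leaky integrate-and-fire (LIF) simulation in NumPy. It is a toy version of the protocol of [32,33], not the original one: all parameters are illustrative, and intrinsic variability is modeled as additive membrane noise redrawn on each trial, while the input (a constant step or a frozen noise) is identical across trials.

```python
import numpy as np

def lif_spike_times(current, rng, noise=0.3, dt=1e-4,
                    tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron and return its spike times (in s)."""
    v, times = 0.0, []
    for i, i_t in enumerate(current):
        v += ((dt / tau) * (i_t - v)
              + noise * np.sqrt(dt / tau) * rng.standard_normal())
        if v >= v_thresh:
            times.append(i * dt)
            v = v_reset
    return np.array(times)

rng = np.random.default_rng(1)
n_steps = 10_000                        # 1 s at dt = 0.1 ms
step = 1.5 * np.ones(n_steps)           # constant suprathreshold step
frozen = 1.2 + 4.0 * np.convolve(       # one fixed noise realization,
    rng.standard_normal(n_steps),       # smoothed to mimic fast input
    np.ones(50) / 50, mode="same")      # fluctuations

for label, drive in [("constant step", step), ("frozen noise", frozen)]:
    # Several trials: same drive, independent intrinsic noise.
    trials = [lif_spike_times(drive, np.random.default_rng(k))
              for k in range(10)]
    fourth = [tr[3] for tr in trials if len(tr) > 3]
    print(f"{label}: jitter of the 4th spike across trials = "
          f"{np.std(fourth) * 1e3:.1f} ms")
```

With these illustrative settings, spike times drift apart across trials for the constant step, while the large, fast fluctuations of the frozen noise tend to pin the spikes to reproducible times.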
At the level of the retina, it has been shown that a coding of luminance values in the image using the timing of spikes may be at work [10]. In particular, these results show that the response of ganglion cells to visual gratings projected on the retina could be encoded in the latency of the response, and not only in the frequency of the discharge, as is often assumed. These results have been extended to natural images and show a qualitatively similar behavior. The authors’ conclusion is that the precise spiking latency of the neurons encodes the spatial features of the image. Interestingly, such a precise latency mechanism may underlie some visual illusions, e.g., the false color illusion in the Benham top based on center–surround interactions in the parvocellular pathway [36]. This evidence found in the retina can be extended to other areas, such as the visual cortex.
In fact, similar results have been demonstrated through neurophysiological recordings in the primary visual cortex, showing that different levels of visual activity induce different levels of neuronal discharge latency in the primary visual area [37]. First-spike latency codes are a feasible mechanism for information transfer, even when biologically plausible estimates of the stimulus onset are considered, for instance, for sound localization [38]. Note also that timing is not entirely sensory or internal, but can be used as a general neural coding principle. In [39], for instance, the authors found that the “timing accuracy was improved when the environment afforded cues that rats can incorporate into motor routines. Timing, at least in animals, may thus be fundamentally embodied and situated.” Many models have used these properties of temporal coding to build fast image categorization networks [9,40,41]. These models take the form of artificial spiking neural networks (SNNs) and have demonstrated their practical applications for image categorization [42]. One of these is the SpikeNet algorithm, which uses a purely temporal approach, encoding information with one spike per neuron by using the rank of the neurons’ activation [41,43] (see the sketch below). Another class of artificial SNNs uses precise spike timing as a metric to determine the structure of the network so as to minimize a cost function. This was implemented in the SpikeProp algorithm [44] and has been extended in novel gradient-based methods. The subsequent surrogate gradient method is now widely used in methods that attempt to transfer performance from analog (CNNs) to spike-based (SNNs) architectures [45]. This type of modeling often uses the classical image categorization tasks developed in deep learning, while adapting them to the specificity of the event-based representation [46]. For instance, [47] implements an STDP-based spiking deep convolutional neural network for object recognition, and [48] develops a form of spike-based competitive learning applied to unsupervised learning. However, the performance of SNNs still lags behind that of analog networks, and the question of the advantage of using spikes in machine learning and computer vision remains open. Improvements in this new generation of artificial neural networks (ANNs) would bring major advances in terms of efficient computation in machine learning. They would, in particular, benefit a new generation of cameras called silicon retinas [49] (see Section 6).
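As a minimal illustration of such a purely temporal scheme, the sketch below implements rank-order coding in the spirit of SpikeNet [41,43]: each presynaptic neuron emits at most one spike, earlier for stronger activation, and a downstream unit weights each input by a geometrically decreasing factor of its rank. The modulation factor and all variable names are our own assumptions, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical analog input: activations of 8 presynaptic neurons.
activation = rng.random(8)

# Rank-order encoding: one spike per neuron, the strongest
# activation firing first (latency decreases with the value).
order = np.argsort(-activation)     # neuron indices, first spike first

# Rank-order decoding: each afferent's contribution is modulated
# by a geometric factor of its rank, so early spikes count more.
mod = 0.8                           # modulation per rank (assumption)
weights = rng.random(8)             # a stored prototype motif
response = sum(weights[n] * mod ** rank
               for rank, n in enumerate(order))
print(f"spike order: {order}, response: {response:.3f}")
```

A bank of such units, each with its own weight vector, can then categorize an input from the very first spikes, consistent with the speed constraints discussed in Section 1.2.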
Even if technology lags far behind biology, this introduction demonstrates the importance of timing in neural processes, and we will further review the role of precise spike timing in neural assemblies. We start by reviewing the different hypotheses that aim at deciphering the neural code with spatio-temporal spiking motifs. After listing some biological evidence for the use of precise spike timing, we review some computational models and neuromorphic techniques that add this temporal dimension to their computations.

2. Role of Precise Spike Timing in Neural Assemblies

In the previous section, we introduced the notion of rate coding and showed that spike timing can also carry information. Scientists have found experimental evidence for various hypotheses of neural representations, such as population coding [50], time-to-first-spike coding [10], phase-of-firing coding [51], correlation coding [52] or sparse coding [53]. In the scope of this review, we hypothesize that spike trains are composed of repeating spiking motifs, and we focus on precise spatio-temporal representations composed of motifs of spikes defined precisely in time and in the presynaptic address space. In all generality, this representation can encompass all the previous ones, except for rate coding, which is not defined locally in time. In this section, we describe different hypotheses making use of spatio-temporal patterns of spikes that can be propagated among neural assemblies.

2.1. One First Hypothesis: Synchronous Firing in Cell Assemblies

In his book Corticonics, Abeles [54] queried whether the role of cortical neurons is to integrate synaptic inputs or rather to detect coincidences in temporal spiking patterns. The book gradually leads the reader from the macroscopic cortical anatomy and standard electro-physiological properties of single neurons to neural network models. While the first hypothesis favors the rate coding theory, the second possibility highlights the need for temporal precision in the neural code [6,55]. The book then demonstrates that neural assemblies could form so-called “synfire chains”, that is, subsets of neurons on which synchronous activity emerges and can be propagated in a stable fashion. More broadly, the idea of using the synchrony of co-activation in a cell assembly is reminiscent of the hypothesis that was formalized by Hebb [56]: “cells that fire together wire together”. Since then, multiple experimental observations have suggested the existence of this precise zero-phase-lag spike synchronization in defined subsets of neurons [57]. One possible function of this synchronization may be the binding of information distributed across the brain [58,59].
Some experimental results show the emergence of synchrony, for instance in motor cortical function [60]. Interestingly, these authors showed that “accurate spike synchronization occurred in relation to external events (stimuli, movements) and was commonly accompanied by discharge rate modulations but without precise time locking of the spikes to these external events. Spike synchronization also occurred in relation to purely internal events (stimulus expectancy), where firing rate modulations were distinctly absent. These findings indicate that internally generated synchronization of individual spike discharges may subserve the cortical organization of cognitive motor processes.” Moreover, such synchrony could evolve over the course of learning a task [61] and showed some tuning to movement direction and reaction time [62]. It is important to note that synchronous events tend to lock to spatio-temporal patterns of neural activity called LFP beta waves [63]; these analyses were also extended to larger assemblies using statistical methods [64] (see Section 3 for further details). Synchrony is also an interesting proposition to explain how the relatively weak thalamo-cortical synapses are able to drive cortical neurons. Among different explanations, including travelling waves [65], synchronous activity at the synaptic level may be sufficient to elicit activity in the cortex [66].
Theoretically, even if the vertebrate’s neural system is not likely to be modeled only by such deterministic connectivity [67,68], it was shown that a simple model allows the propagation of such synfire chains [69] by considering the dynamics of leaky integrate-and-fire (LIF) neurons in different groups of a similar size. Each neuron of one group is connected by an excitatory synapse to the next group. When a pulse is elicited in the first group, this may generate a spike in the next group. Depending on the synaptic weight values, this new activity may become more or less synchronized with respect to that of the previous pulse (as measured by the standard deviation of spike times within the pulse). Recursively applying this to a sequence of groups within a chain either generates a synfire propagation or not. A simple simulation of synfire propagation is shown in Figure 3. A crucial aspect of this emergence is explained by the dynamics of the spiking neuron model [70] and, in particular, the balance between excitation and inhibition [71]. This balance was, for instance, modeled by feed-forward inhibition, a fine-scaled latency mechanism that is an essential ingredient in modelling so-called push–pull effects in the primary visual cortex [72]. Further models have shown that such synfire chains could be embedded in topographies [73], while others used conductance-based neurons with feed-forward inhibition to improve the robustness of the propagation [74]. In particular, this was implemented as a computational neuroscience benchmark model using the pyNN language [75] on both CPU-based and neuromorphic hardware [76].
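The following sketch reproduces the spirit of such a simulation (compare with Figure 3): a chain of groups of LIF neurons with all-to-all feed-forward excitation, an initial jittered pulse packet, and independent membrane noise. All parameters are illustrative; depending on the synaptic weight, the pulse packet either synchronizes or dies out along the chain.

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, size = 10, 100                # 10 groups of 100 LIF neurons
dt, tau, v_th = 1e-4, 0.01, 1.0
w = 0.02                                # synaptic weight (assumption)

v = np.zeros((n_groups, size))
fired = np.zeros((n_groups, size), bool)    # each neuron spikes once
spike_times = [[] for _ in range(n_groups)]
prev_counts = np.zeros(n_groups)            # spikes at the last step

# Pulse packet: every neuron of group 0 fires once, jittered by 1 ms.
t0 = 0.01 + 0.001 * rng.standard_normal(size)

for t in np.arange(0.0, 0.1, dt):
    counts = np.zeros(n_groups)
    go = (~fired[0]) & (t >= t0)            # group 0: external packet
    counts[0], fired[0] = go.sum(), fired[0] | go
    spike_times[0] += [t] * int(go.sum())
    for g in range(1, n_groups):            # feed-forward excitation
        v[g] += ((dt / tau) * -v[g] + w * prev_counts[g - 1]
                 + 0.2 * np.sqrt(dt / tau) * rng.standard_normal(size))
        spk = (~fired[g]) & (v[g] >= v_th)
        counts[g], fired[g] = spk.sum(), fired[g] | spk
        spike_times[g] += [t] * int(spk.sum())
    prev_counts = counts

for g, st in enumerate(spike_times):        # propagation statistics
    if st:
        print(f"group {g}: {len(st)} spikes, "
              f"sigma = {np.std(st) * 1e3:.2f} ms")
```

Lowering w below the value needed for a full volley makes the packet fade after a few groups, which is the qualitative bifurcation studied in [69].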
Attempts have been made to detect such synfire chains in neurobiological data. Schrader et al. [77] envisioned that “sensitivity is high enough to detect synfire chain activity in simultaneous single-unit recordings of 100 to 200 neurons from such data, enabling application to experimental data in the near future”. Indeed, simultaneously recorded activities of neurons in the primary motor cortex of monkeys exhibited context-dependent, rapid changes in the patterns of coincident action potentials [60,78]. It is now commonly accepted that the planning and execution of movements are based on distributed processing by neural populations in motor cortical areas, yet it is less clear how these populations organize dynamically to cope with the momentary computational demands. In [79], the author proposed a simple spike-based computational framework, based on the idea that stimulus-induced synchrony can be used to extract sensory invariants (for example, the location of a sound source), which is a difficult task for classical ANNs. It relies on the simple remark that a series of repeated coincidences is in itself an invariant. Many aspects of perception rely on extracting invariant features, such as the spatial location of a time-varying sound, the identity of an odor with fluctuating intensity, or the pitch of a musical note.
This is also expressed in the idea that different cortical areas could achieve binding by synchrony [80]. This synchrony generates rhythms at different ranges of frequencies, with spikes arriving at peak susceptibility (the top of a cycle) or at its trough. Such a theory has, surprisingly, been validated in EEG recordings to explain, for example, the continuous wagon-wheel illusion, i.e., the perceived reversal of the rotation of the spokes of a rotating wheel [81]. More generally, it can be shown that the phase of alpha oscillations (about 10 Hz) is causally linked with modulations of cortical excitability and with visual perception [82]. The question remains open as to whether this is an epiphenomenon or a working mechanism of the neural code.

2.2. A Further Hypothesis: Travelling Waves

To further investigate the role of precise timing, let us also focus on the role of differential timings in an assembly of neurons. As we have seen, a visual feature will induce the firing of different cells at different latencies [37]. Using intracellular recordings, it was shown that the response to a focal visual activation elicits a latency basin, that is, a graded onset of the neural response from the most activated neurons to neighboring ones [83]. In particular, it was shown that the network of so-called horizontal connections within a cortical area is typically not myelinated, and that this latency basin is determined by the propagation speed within that area. For generic visual scenes, these processes generate a complex interplay between the dynamics of the sensory signal and the spatio-temporal determinants of these interactions. Such interactions may underlie anticipatory mechanisms in the primary visual cortex [84,85]. The underlying process could be the emergence of propagating waves on the surface of the cortex.
Propagating waves in the neuronal response occur in many excitable media and were found in neural systems, for instance, in the retina [86] or the neocortex [87]. While propagating waves are clearly present under anesthesia, whether they also appear during awake and conscious states remained unclear until recent discoveries. One possibility is that these waves were systematically missed in trial-averaged data, due to variability. A recent work [88] presents a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, the authors show that the stimulus-evoked population response in the primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fiber network of superficial cortical layers. Propagating waves with similar properties occur independently in the secondary visual cortex, but maintain precise phase relations with the waves in the primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatio-temporal frame for further neural interactions.
More recently, novel multi-unit recording techniques have enabled the identification of travelling waves of neural activity in different areas of the cortex [89]. The authors reviewed these findings by considering the mechanisms by which travelling waves are generated, and evaluated their possible roles in cortical function. In particular, spontaneous travelling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states [8]. Studies of sensory-evoked neuronal responses often focus on mean spike rates, with fluctuations treated as internally generated noise. However, fluctuations of spontaneous activity, often organized as travelling waves, shape stimulus-evoked responses and perceptual sensitivity. The mechanisms underlying these waves are unknown, and it is unclear whether such waves are consistent with the low-rate and weakly correlated “asynchronous-irregular” dynamics observed in cortical recordings. In [8], the authors describe a large-scale computational model with topographically organized connectivity and conduction delays relevant to biological scales, and find that spontaneous travelling waves are a general property of these networks. The travelling waves that occur in the model are sparse, with only a small fraction of neurons participating in any individual wave. Consequently, they do not induce measurable spike correlations and remain consistent with locally asynchronous-irregular states. Further, by modulating the state of the local network, they can shape responses to incoming inputs, as observed in biology. Such waves also occur in motor areas: Lindén et al. [90] have recently presented ensemble recordings of neurons in the lumbar spinal cord indicating that, rather than alternating, the population performs a low-dimensional “rotation” in neural space, in which the neural activity cycles through all phases continuously during the rhythmic behavior.
Interestingly, it can be shown that these travelling waves have a measurable impact on the activity of the visual cortex. This was illustrated in a recent study investigating the long-range apparent motion effect (lrAM) [91]. The lrAM is the simple phenomenon of perceiving a smooth motion when two dots are shown in a temporal sequence and in relatively close visual proximity. The lrAM is the core building block underlying the use of sequences of images to induce the perception of smooth, realistic visual scenes, which is at the base of the movies seen in cinema theaters. In this study, the authors used voltage-sensitive dye imaging to record the activity of the primary visual cortex of macaque monkeys during the presentation of the pair of dots, shown independently or in conjunction. Probabilistic modelling showed that the joint presentation induced a suppressive wave in the direction opposite to the perceived direction, shaping a propagating wave travelling at a speed compatible with the perceived motion. A computational model validated the hypothesis that this process could be mediated by diffusion in the horizontal layers connecting the different locations within this cortical area. In summary, the study by Chemla et al. [91] gave a multi-disciplinary account demonstrating the effect of travelling waves in the visual cortex.

2.3. A Rediscovered Hypothesis: Precise Spiking Motifs in Cell Assemblies

Travelling waves indicate that spatio-temporal correlations could play an important role in shaping neural activity. For instance, statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. In particular, this was evidenced by analyzing the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses [92], which captures the precise spatio-temporal differences in this recurrently connected assembly. The different aspects of information in the data are evaluated by a decoding strategy, highlighting the role of correlations. Note that a dataset similar to that used in [93] is available from Michael Berry’s lab [94] and allows testing in order to validate or falsify these hypotheses. However, in theory, a cortical travelling wave would be a stationary process, which is incompatible with the limits in space and time of a neural system. Recent observations suggest that neural groups or ensembles, rather than individual neurons, are emergent functional units of cortical activity. Miller et al. [95] showed that, whereas intrinsic ensembles recur at random time intervals, visually evoked ensembles are time-locked to stimuli. These experiments were performed using two-photon calcium imaging of populations of neurons from the primary visual cortex of awake mice during visual stimulation and spontaneous activity. The study proposes that visual stimuli recruit endogenously generated ensembles to represent visual attributes. Note that evoked ensembles in response to a natural movie played in a loop were precisely timed across repetitions.
From another viewpoint, there is a substantial literature in neurobiology indicating that brain dynamics often organize into stereotyped sequences, such as synfire chains [96], packets [97] or hippocampal sequences [98]. Going further, researchers have found precise repetitions of spontaneous patterns of synaptic inputs in neocortical neurons, in vivo and in vitro. These patterns repeat after minutes, maintaining millisecond accuracy. Indeed, Ikegaya et al. [96] demonstrated that, in cortical activity, one can find a repetition of several motifs of spiking activity (with durations of around 1 ± 0.5 s; the events within a motif are of similar size but sometimes absent). These sequences can be specific to a particular layer or column, can be synchronized with oscillations of network activity, and can involve several cells. They also demonstrated that these sequences can form super-sequences, so-called “cortical songs”: assemblies of several sequences which repeat in a specific order with a compressed timing. This spontaneous activity also reveals repeating sequences: about 3000 sequences, each involving 3–10 cells (out of about 900), and lasting up to 3 s. Sequences have specific topographic structures, in some cases involving only a particular layer, a vertical column of cells or cells located in a cluster, and are associated with a structured spatial organization of the neurons that form them.
Additional studies detail the role of such precise spike timing in downstream information transfer and coding [99,100,101]. This is, for instance, relevant in sensory pathways in vision [102], audition [52], olfaction [103,104,105] or touch [106]. In particular, stereotyped sequences of neural activation have been described in the adult hippocampus and related to its function in mental travel in time and space [107]. These sequences can be internally generated [98,99] and may be formed by the chained activation of orthogonal assemblies, themselves organized as sequence packets [108]. In that protocol, hippocampal sequences are formed by the ordered activation of smaller sequence motifs. They are stereotyped and robust since neurons can be activated in the same order across days (see Figure 4 from [109]). As a consequence, hippocampal sequences may rely on an internally hardwired structure and form the functional building blocks for encoding, storing and retrieving experience.
It is interesting to draw a parallel with the “rapid formation of robust auditory memories” reported in [110], which used noise patterns to observe whether listeners could learn to detect repeated occurrences of some frozen noise patterns. In particular, the authors used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. The authors showed that the “repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds”, as the sensitivity increased in that case. Note also that “acoustical analyses failed to reveal any obvious differences between good and bad noises” and that “time reversal had no significant effect on the detection accuracy”, which is quite surprising. The learning is unsupervised (statistical, automatic), fast-acting (phase transition, “insight”) and long-lasting (memorization).
These results suggest that precise spiking motifs are not necessarily grouped on a topography and can form over apparently randomly arranged connections. Interestingly, one theoretical viewpoint considers synfire braids [87], where a precise sequential motif of spikes will synchronize as it reaches the soma of a neuron for which synaptic delays are adequately tuned. Furthermore, computational modeling shows that, at the scale of neurons, an efficient neural code can emerge where spike times are organized in prototypical, precise temporal motifs [111], which this author defined as polychronous groups. The rest of this review will be devoted to presenting evidence for the use of such precise spiking motifs in computational neuroscience, neurobiology and neuromorphic engineering. In summary, such precise temporal structure seems essential to the neural code, and it appears imperative to include this information in decoding algorithms for a better understanding of neural activity.
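A minimal sketch helps make the polychrony argument concrete: if the synaptic (or axonal) delays of a downstream neuron mirror a precise temporal motif, the presynaptic spikes all reach the soma simultaneously and can trigger a coincidence detector, while any shuffled version of the motif is dispersed. The numbers below are arbitrary.

```python
import numpy as np

# A precise spiking motif: presynaptic neuron i fires at time t_i (ms).
motif_times = np.array([0.0, 3.0, 7.0, 12.0])

# A detector whose delays mirror the motif: the earliest spike takes
# the longest path, so all spikes reach the soma at the same instant.
delays = motif_times.max() - motif_times

def arrival_spread(presyn_times, delays):
    """Standard deviation (ms) of the arrival times at the soma."""
    return np.std(presyn_times + delays)

rng = np.random.default_rng(0)
shuffled = rng.permutation(motif_times)    # same spikes, wrong order

print(f"matched motif : spread = {arrival_spread(motif_times, delays):.1f} ms")
print(f"shuffled motif: spread = {arrival_spread(shuffled, delays):.1f} ms")
```

A neuron with a short membrane time constant then fires only when this spread is near zero, i.e., only for the motif its delays are tuned to; this is the core computation revisited in Section 5.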

3. Understanding Precise Spiking Motifs in Neurobiology

3.1. Decoding Neural Activity from Firing Rates

In this section, we review current evidence on how one may take advantage of spiking motifs in neurobiology, that is, in an effort to understand actual recordings from biological neural tissues. In most generic computational models, the neural activity is assumed to be encoded in the firing rate. For instance, the output of the so-called linear non-linear (L-NL) models is assumed to model the response of a biological neuron as the sequence of a linear integration followed by a non-linear spiking response generating spikes according to a Poisson point process [112]. As such, a simple decoding strategy is to infer the input knowing the neuron’s tuning curves, that is, its selectivity to a range of features [113], or simply by regression [114]. This latter model assumes a Bernoulli model for the generation of spikes, such that the decoding amounts to a single-layer logistic regression (see the sketch below). An important perspective of these methods used to decipher the recorded activity is that they could ultimately be used to fit neural network models to the recorded activity [115]. In this particular paper, the authors fit the summary statistics of neural data with a (differentiable) spiking network model. The loss function is the cross entropy (following the Bernoulli hypothesis with a GLM, where each unit is modelled as an SRM neuron [70]) and is embedded with recurrent dynamics. In particular, it comes with code and uses the publicly available V1 dataset [116], which allows supervising the model, with the input being the movie and the output the recorded spikes. This type of model may infer sparse activity in a set of binary latent variables, each describing the activity of a cell assembly [117]. Carefully picking the most appropriate metric, as implemented in that paper by the corresponding neural models, is essential for a better understanding of neural data. Importantly, these models depend on a core definition of spike measures, and we review here how precise spiking motifs are taken into account by such spike distances.
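As an illustration of this Bernoulli decoding scheme, the sketch below fits a single-layer logistic regression by gradient ascent on the log-likelihood (i.e., minimizing the cross entropy) to decode a binary stimulus from binned spikes. The data are synthetic and all parameters are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_neurons = 500, 20

# Synthetic data: a binary stimulus; each neuron spikes in a bin with
# a probability that depends (weakly) on the stimulus.
stimulus = rng.integers(0, 2, n_trials)
p_spike = 0.1 + 0.15 * np.outer(stimulus, rng.random(n_neurons))
spikes = (rng.random((n_trials, n_neurons)) < p_spike).astype(float)

# Single-layer logistic regression (Bernoulli GLM), trained by
# gradient ascent on the log-likelihood (= cross-entropy descent).
w, b, lr = np.zeros(n_neurons), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(spikes @ w + b)))  # decoded probability
    err = stimulus - p                           # log-likelihood gradient
    w += lr * spikes.T @ err / n_trials
    b += lr * err.mean()

p = 1.0 / (1.0 + np.exp(-(spikes @ w + b)))
accuracy = ((p > 0.5) == stimulus).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Note that such a decoder only sees spike counts per bin; narrowing the bins progressively re-introduces spike timing, which motivates the spike distances discussed next.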

3.2. Decoding Neural Activity Using Spike Distances

There are different ways to define a distance between two given spike trains. A well-known measure is the Victor–Purpura distance, which overcomes the inconsistencies experienced with a firing-rate (Poisson) description of spike trains [118]. A subsequent study addressed this problem by including a time constant as a parameter [119]. This parameter interpolates the behavior of the distance between that of a coincidence detector and that of a rate-difference counter (see the sketch below). Such distances were extended to non-Euclidean metrics which use morphological manipulations to compute spike train dissimilarity [120]. Mathematically, the stability of distance measures induced by level-crossing sampling can be evaluated [121], notably in light of the so-called Weyl’s discrepancy measure [122], which may lead to the definition of a cross-correlation measure, an interesting conclusion since cross-correlation is well adapted to the event-based nature of spiking signals. These observations lead to the intuition that each distance is only as good as the generative model it implicitly assumes for these measures, possibly through non-linear relations [123].
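Since the Victor–Purpura distance is central to this discussion, here is a self-contained sketch of it as a dynamic program: moving a spike by Δt costs q·|Δt|, and deleting or inserting a spike costs 1. The example values are arbitrary; only the algorithmic structure follows [118].

```python
import numpy as np

def victor_purpura(t1, t2, q):
    """Victor-Purpura distance between two spike trains (times in s).

    q (in 1/s) is the cost per second of moving a spike; deleting or
    inserting one costs 1. For q -> 0, the distance reduces to the
    difference in spike counts (a rate-difference counter); for large
    q, it acts as a coincidence detector.
    """
    n, m = len(t1), len(t2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)      # delete all spikes of t1
    G[0, :] = np.arange(m + 1)      # insert all spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(G[i - 1, j] + 1,            # delete
                          G[i, j - 1] + 1,            # insert
                          G[i - 1, j - 1]             # shift
                          + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n, m]

a = [0.010, 0.150, 0.320]           # spike times in seconds
b = [0.012, 0.160, 0.450]
for q in (0.0, 10.0, 1000.0):
    print(f"q = {q:6.0f}/s -> distance {victor_purpura(a, b, q):.2f}")
```

Sweeping q thus moves the measure continuously between the two regimes mentioned above, which is precisely why the time constant is treated as a free parameter.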
Concerning spike timings, Levakova et al. [124] reviewed existing methods for estimating the latency of neural responses, including Bayesian binning. Alternatively, unitary event analysis can be performed with a statistical model of coincidence detection [125]. This was extensively used to detect significant, above-chance synchronous patterns [126], particularly in recordings of pairs of neurons (see [60], for instance), and has been extended to non-stationary data [127]. A method to detect significant patterns of synchronous spiking in a subset of massively parallel spike trains in the presence of background activity can be defined using the statistical evaluation of synchronous spike patterns extracted by frequent item set mining [128]. From the same group, the SPADE, CAD and ASSET algorithms are methods for the identification of spike patterns in massively parallel spike trains (the spiking activity of tens to hundreds of neurons recorded in parallel) by identifying fine temporal correlations in the millisecond precision range [129]. This was recently extended in [130] in order to find re-occurring patterns in parallel spike train data and to determine their statistical significance. The extension improves the performance in the presence of patterns with different durations, as demonstrated by application to various synthetic data, such as synthetic synfire chains (see Figure 5) or surrogates generated to evaluate precisely timed higher-order spike correlations [131].
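The logic of these surrogate-based tests can be summarized in a few lines: count precise coincidences in the data, then compare this count to a null distribution obtained by dithering spike times, which destroys fine temporal structure while preserving firing rates. The sketch below is a deliberately simplified, pairwise version of this idea, not any of the published algorithms.

```python
import numpy as np

rng = np.random.default_rng(11)

def coincidences(a, b, window=0.005):
    """Spikes of train a that have a spike of b within +/- window (s)."""
    return sum(np.any(np.abs(b - t) <= window) for t in a)

# Synthetic data: two 10 s trains sharing 20 near-synchronous spikes.
shared = np.sort(rng.uniform(0, 10, 20))
a = np.sort(np.concatenate([shared, rng.uniform(0, 10, 80)]))
b = np.sort(np.concatenate([shared + 0.001, rng.uniform(0, 10, 80)]))

observed = coincidences(a, b)
# Surrogates: dither each spike of b by up to 50 ms, destroying the
# precise synchrony while keeping the overall rate profile.
surrogates = [coincidences(a, b + rng.uniform(-0.05, 0.05, b.size))
              for _ in range(200)]
p_value = np.mean([s >= observed for s in surrogates])
print(f"observed {observed} coincidences, p = {p_value:.3f}")
```

Methods such as SPADE add the machinery needed to make this scale to patterns of higher order and to correct for the multiple comparisons this entails.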
Another important algorithm, called SPOTDisClust, is based on the detection of structured temporal patterns [132]. Its authors introduced an unsupervised method for detecting such patterns from high-dimensional neural ensembles. The algorithm measures the similarity between two ensemble spike patterns by determining the minimum transport cost of transforming their corresponding normalized cross-correlation matrices into each other. Many approaches to this problem are supervised. In other words, they take patterns occurring concurrently with a known event, such as the delivery of a stimulus for sensory neurons or the traversal of a running track for determining hippocampal place fields, as a “template” and then search for repetitions of the same template in spiking activity [133,134]. In SPOTDisClust, the learning is unsupervised, and it uses the prior that there is only one spike per pattern. A so-called t-SNE projection (which projects this high-dimensional representation onto a lower-dimensional map) validated that this clustering method can retrieve all patterns from the data. The limits of this method are that it is computationally complex, block-based and strictly specialized for the task at hand. To overcome these difficulties, a novel method was recently developed [135]. Whether for the distance between two given spike trains or for the comparison of spike timings, the complexity and diversity of the methods used to measure them testify to the growing interest in integrating these measures into our understanding of the neural code. One of the steps to test their potential usefulness is then to scale these methods to larger amounts of data.

3.3. Scaling up to Very Large Scale Data

Over the past decade, tremendous technological advances across several disciplines have dramatically expanded the frontiers of experimentally accessible neuroscientific facts. Bridging different spatial and temporal scales, combinations of in vivo two-photon imaging, large-scale population recording arrays, optogenetic circuit-control tools, transgenic manipulations and large-volume circuit reconstructions are now used to examine the function, structure and dynamics of neural networks at an unprecedented level of detail and precision. The daunting complexity of the biological reality revealed by these technologies highlights the importance of neurobiological knowledge in providing a conceptual bridge between abstract principles of brain function and their biological implementations within neural circuits. As a consequence, there is a growing need to scale these methods to larger amounts of data.
There are multiple approaches which aim at tackling this problem. One algorithm capable of achieving such a daunting task is the Rastermap algorithm [136]. Basically, it rearranges neurons in the raster plot based on the similarity of their activity and applies a deconvolution strategy based on a linear model. Yet, this method was mainly tested on calcium imaging data, which are known to add some imprecision to the timing of the original spiking activity. The model is openly accessible [137] and has led to important discoveries. In [138], for instance, it was shown that a neuronal population encodes information most efficiently when its stimulus responses are high-dimensional and uncorrelated, and most robustly when they are lower-dimensional and correlated. Then, in [139], the authors analyzed spontaneous neural firing, finding that neurons in the primary visual cortex encoded both visual information and motor activity related to facial movements. In [140], the authors developed novel machine learning tools and statistical tests for unsupervised spatio-temporal pattern detection in non-stationary environments, which were applied to simultaneous electro-physiological recordings of tens to hundreds of neurons in order to decode cognitive processes from neural activity. Altogether, this provides evidence for the importance of such machine-learning-based tools in enabling breakthroughs in neuroscience.
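To convey the underlying idea of such a reordering (not the actual Rastermap algorithm), the sketch below sorts the neurons of a synthetic raster along the leading eigenvector of their correlation matrix, a one-dimensional embedding by activity similarity that already groups co-active assemblies together.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic raster: 60 neurons x 1000 time bins; the neurons belong
# to two assemblies driven at distinct times.
n, t = 60, 1000
assembly = rng.integers(0, 2, n)                  # hidden labels
drive = np.stack([np.sin(np.linspace(0, 20, t)),
                  np.cos(np.linspace(0, 20, t))]).clip(0)
raster = rng.random((n, t)) < 0.02 + 0.08 * drive[assembly]

# Reorder neurons along the leading eigenvector of the correlation
# matrix of their activity.
z = (raster - raster.mean(1, keepdims=True)) / raster.std(1, keepdims=True)
corr = z @ z.T / t
eigvals, eigvecs = np.linalg.eigh(corr)           # ascending eigenvalues
order = np.argsort(eigvecs[:, -1])                # leading component
print("sorted assembly labels:", assembly[order])
```

After sorting, the hidden labels should come out mostly grouped, so that repeating motifs become visible as block structure in the reordered raster; Rastermap refines this idea with a non-linear embedding.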
In the paper by Russo and Durstewitz [140], the authors present a unifying methodological and conceptual framework which detects assembly structure at many different time scales, levels of precision and with arbitrary internal organization. It uses a sliding window, as in [127], and the reliable and efficient analysis of an excess or deficiency of joint-spike events [141], extending the measure to multiple lags [142]. The core measure is based on a non-stationarity-corrected parametric statistical test for assessing the independence of pairs, and an agglomerative, heuristic clustering algorithm for fusing significant pairs into higher-order assemblies. To overcome the limits of models which require spike times to be discretized, utilize a suboptimal least-squares criterion or do not provide uncertainty estimates for model predictions or estimated parameters, the authors of [143] addressed each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a few marked events in continuous time. As originally introduced in [144], they used learnable time-warping parameters to model sequences of varying duration, as experimentally observed in neural circuits, and demonstrated these advantages on experimental recordings from the songbird higher vocal center and the rodent hippocampus. At a larger scale, it was shown in [145] that attentional information from V4 or arousal can change the timing of groups of events in V1. The authors developed a hidden Markov model to quantify these transitions. In particular, they showed that fluctuations in neural excitability are coordinated between visual areas with retinotopic precision, and that top-down attention drives inter-areal coordination along the reverse cortical hierarchy, predicting better behavioral performance with increased coordination. Building such models for predicting changes of timings based on context, such as using a so-called change-point model for block-based experimental protocols [146], could therefore provide useful prior information to enhance decoding from neural activity.

4. What Biological Mechanism Could Allow Learning Spiking Motifs?

Despite the evidence for the effectiveness of precise spiking motif detection presented above, doubts may remain as to the reliability of this learning mechanism and whether there is a real need for further research on this subject. The discovery of an equivalent biological mechanism in vertebrates, as well as the demonstration of the importance of its role in various behaviors, allows us to put these doubts to rest. In the following paragraphs, we will successively describe the first biological observations of delay learning, identify myelination as an important actor, and finally study this phenomenon at the molecular level.

4.1. Biological Observations of Delay Adaptation

One of the first significant pieces of evidence for a neuronal delay in information propagation within the animal nervous system came from Hermann von Helmholtz’s study of a frog’s sciatic nerve in 1850 [147,148], and it was later confirmed by Young’s study of the squid giant axon [149]. Dendritic propagation delays vary from below a millisecond to a few milliseconds, while axonal propagation delays range from a few milliseconds to tens of milliseconds, depending on the neuronal population studied [150]. Extensive measures of nerve conduction velocities were performed in different animals, including humans, and significant variations related to age, sex and other factors were identified [151].
However, it was not until the study of the interaural time difference (ITD) by Carr and Konishi [12] that it was discovered that this delay is not homogeneous for all neurons of the same type and species, but adapts according to their function. The ITD computation is a biological mechanism which allows for the azimuthal localization of sound by barn owls, by organically computing the difference in the arrival time of a sound between their two ears. It was first theorized in the Jeffress “ITD-versus-place model” [152]. As hypothesized by this model, the nucleus laminaris of the avian brain contains coincidence detectors and, together with the nucleus magnocellularis axons, forms circuits for processing the ITD [153]. According to [154], there is a true paradox in auditory neural systems, since “neural networks encode behaviorally relevant signals in the range of a few μs with neurons that are at least one order of magnitude slower”, and they therefore necessarily need to play on the response time to do so. This assertion conflates the mean interspike interval, i.e., how often a neuron can fire, with the specific spike time, i.e., how precisely a spike can be generated. It has nevertheless contributed to the recognition of the importance of time in various biological mechanisms. A first hypothesis suggested that the sound coincidence was detected using stereausis, i.e., the temporal disparity between the left and right cochlear loci in the owl’s brain. However, it was quickly set aside, as the predictions did not match the measured disparities in the loci, and no variation was perceived in the nucleus laminaris for a similar sound intensity in both owl ears. The authors supported a second hypothesis, that of different axonal delays in the ipsi- and contra-lateral cochlear nucleus magnocellularis. Seidl et al. [155] experimentally supported this hypothesis of a “coarse” regulation of delay, as the authors concluded that regulations at different sites within individual axons of at least two parameters, namely, the axon diameter and internode distances, might be responsible for precise adjustments of physiological delays, thus allowing ITD detection. The authors also noted that the barn owl’s axons seem to change in length, thus implementing a “pure delay line”.
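The Jeffress model lends itself to a compact sketch: an array of coincidence detectors, each compensating a different candidate ITD with an internal delay line, so that the detector whose delay matches the true ITD receives simultaneous spikes from both ears. All numbers below are illustrative.

```python
import numpy as np

itd_true = 200e-6                          # sound reaches the left ear 200 us earlier
left = np.array([0.001, 0.004, 0.009])     # left-ear spike volley (s)
right = left + itd_true                    # same volley, delayed

# Internal delay lines spanning -400 to +400 us in 20 us steps.
delay_lines = np.arange(-400e-6, 401e-6, 20e-6)

def coincidence_count(left, right, delay, window=10e-6):
    """Spikes from both ears arriving within `window` at the detector."""
    return sum(np.any(np.abs((right - delay) - t) <= window) for t in left)

counts = [coincidence_count(left, right, d) for d in delay_lines]
best = delay_lines[int(np.argmax(counts))]
print(f"estimated ITD: {best * 1e6:.0f} us (true: {itd_true * 1e6:.0f} us)")
```

The place of the maximally active detector in the array then encodes azimuth, which is the “ITD-versus-place” mapping of [152]; the biological question discussed next is how axon diameter and internode distances implement such delays.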
The experiments described above thus support the important role of physiological delays in the avian sound localization behaviour. However, the relevance of precise spike timing is not limited to birds; for example, in the mouse somatosensory cortex, the authors of [156] found a strong correlation between the delay of the mouse behavioral response and the timing of multi-unit activity evoked by a trained whisker. These experiments also confirmed previous studies stating that the conduction velocity of a spike in a neuron (in other words, its delay) depends strongly on the axon diameter [157] and the internode distance between Ranvier nodes [158]. These mechanisms add to the axonal length delay, which was previously thought to be the sole influence on the conduction delay due to its anatomical soundness; as Seidl et al. [155] experimentally demonstrated, axonal length by itself is not sufficient to explain the biological functionality and should be combined with these other delay mechanisms.

4.2. The Importance of Myelination

Gasser and Grundfest [157] experimentally confirmed, with homogeneously selected neurons, that the conduction velocity increases with the axon diameter, i.e., with the amount of myelin wrapped around the axon. Indeed, the oligodendrocytes, one of the many glial cell types present in the vertebrates’ nervous system, identified in 1924 by Pío del Río Hortega [159], produce thin protein sheets interspersed with lipid layers wrapped concentrically around the axon, called myelin [160]. Myelination consists of “two motions: the wrapping of the leading edge of the inner tongue around the axon underneath the previously deposited membrane and the lateral extension of myelin membrane layers toward the nodal regions” [161]. Multiple myelin regions can appear on one axon and form the following subdomains: “the internode (corresponding to the compacted region of myelin), the paranodes (where the outer loops of the myelin contact the axon), the juxtaparanode (the interface between the paranode and compact myelin, rich in potassium channels) and the node of Ranvier (the approx. 1 μm gap between adjacent myelin internodes [allowing for] the saltatory conduction)” [162]. On average, each oligodendrocyte produces 20 to 60 myelination processes, and each myelin sheath is 20 to 200 μm long [161]. This demonstrates an additional impact of myelination on the conduction velocity, as the number of segments is positively correlated with the axonal delay [158]. Thus, on several occasions, myelin has been identified as an important actor in the regulation of conduction velocity in neurons, i.e., in axonal delay regulation.
Fields [163] and Fields and Bukalo [164] state that myelin facilitates both neural circuit function and behavioral performance: experiments on mammals show that myelination is activity-dependent and directly related to learning and memory consolidation, especially during sensory or motor training and in enriched environments. This biological phenomenon takes place both at an early age, when the amount of oligodendrocytes is particularly high in the central nervous system [165], and in older animals, due to its involvement in coupling the activity of distant neuron populations. Myelination helps memory consolidation by coupling the activity of distant neuron populations and generating nearly synchronous responses in postsynaptic neurons involved, among others, in path integration [150], as was experimentally demonstrated in mice using a Morris water maze [166], contextual fear conditioning [167] or oligodendrocyte precursor cell (OPC) knock-outs [168]. Myelin also inhibits axon sprouting and synapse formation, especially in pyramidal neurons [163], and is involved in axonal energy saving through a reduced axonal capacitance and a shift of the metabolic load from axons onto oligodendrocytes [162].
It is worth highlighting that myelination becomes increasingly important in larger brains, where conduction delays are substantial and brainwave rhythms are critical; synchrony errors can lead to neuropsychiatric and neurological dysfunctions [163], such as Parkinson’s disease, epilepsy or multiple sclerosis [150]. Additionally, a recent study suggests that demyelination of the optic nerve could be an underlying factor in glaucoma [169]. Duncan et al. [162] state that “the loss of myelin and oligodendrocytes fundamentally alters the neuron, [which is then] susceptible to energetic failure [and] subsequent degeneration”.

4.3. Interplay of Delay Adaptation and Neural Activity

However, one question remains: how do oligodendrocytes detect neuronal activity and regulate myelination accordingly? To answer this, we must first study the myelination process. The OPCs first proliferate in the white matter via a self-repulsive process, thus allowing for an evenly spaced network, and identify target axons. Most OPCs then differentiate into oligodendrocytes and immediately initiate myelination, with no further migration [161]. Not much is known about how oligodendrocytes select the axons to myelinate, but it seems that myelination only takes place on large enough axons and is strongly regulated by several factors [170], such as Ca²⁺ activity [171] or the neuregulin 1 growth factor [172]. The important role of myelination in delay learning and biological behaviors, as highlighted in the previous paragraphs, suggests that the identification of target neurons, as well as the myelin production, is also regulated by neuronal activity. Indeed, Cullen et al. [173] experimentally demonstrated that learning and the associated neuronal activity modify the length of the nodes of Ranvier and the width of the periaxonal space in the adult mouse brain. They also confirmed that the delay correlates with the level of skill acquisition. Gibson et al. [174] suggest that neuronal activity does not solely promote adaptive myelination in the mammalian brain, but also OPC differentiation and oligodendrogenesis. Further studies show that oligodendrocytes may detect neuronal activity thanks to growth factors or neurotransmitters released through ion channels or via exocytosis, without requiring any axo-glial synaptic communication [163].
A potential scenario for selective myelination of electrically active axons using non-synaptic junctions between an axon and an oligodendrocyte is as follows: the axon releases glutamate into the extracellular environment by vesicle fusion, which activates the oligodendrocyte’s NMDA and metabotropic glutamate receptors. This triggers the axo-glial signaling complex, involving the phosphorylation of the SRC family kinase FYN followed by the translation of heterogeneous nuclear ribonucleoprotein A2 into local myelin basic proteins [163].
The previous paragraphs presented the biological mechanisms behind the axonal delay, as regulated by myelination. However, in addition to the axonal delay (see [155]), the dendritic delay seems to play an equally important role in the precise timing within a sequence of spikes. Dendritic delay is involved in the performance, structure and function of the nervous system, the modulation of the spatio-temporal properties of pre- and post-synaptic activity patterns and the functional limitations of sensory feedback control efficiency [150]. Its role has been specifically identified in the compensation of input asynchrony in the mammalian auditory brain stem [175]. Mel et al. [176] highlighted the impact of dendrites on neuronal plasticity, which stems from the wide variation of numerous parameters: plasticity rules applied to different dendritic subtrees or dendritic subregions, local passive cable properties, the distance travelled by remote dendritic inputs, branching structures, dendritic diameters, the relative timing of back-propagating somatic action potentials, etc. The involvement of dendritic spiking in synaptic potentiation following active back-propagation into the dendrites was experimentally uncovered using calcium imaging, which revealed the dendritic calcium entry allowing for long-term potentiation [177]. Branco et al. [100] demonstrated the dendritic sensitivity to a sequence of synaptic activation in cortical pyramidal neurons, encoded by “both local dendritic calcium signals and somatic depolarization, leading to sequence-selective spike output”. The dendritic mechanism described can identify patterns delivered to a single dendrite or randomly distributed across the dendritic tree, and relies on the regulation of the dendritic calcium influx by NMDA receptors.
All in all, learning spike motifs requires remarkably complex pathways and biological mechanisms, whether in the dendrites or the axon of the neuron. Biological delay learning thus remains a non-trivial research topic about which more and more is becoming known, and this growing body of experimental data helps in the development of artificial neural networks with more neuromorphic learning rules.

5. Modeling Precise Spiking Motifs in Theoretical and Computational Neuroscience

Now that we have reviewed the biological foundations for the role of delays in neural computations, we review, in the following section, theoretical models which directly take advantage of precise spiking motifs. Spiking neural networks (SNNs) [178] are natural candidates to use these precise temporal patterns in the brain. The approach currently most prominent in the SNN community is to take existing algorithms from machine learning and to adapt them to the specificity of spiking architectures [46]. One such example is to build on the successes of deep learning and to transfer the back-propagation algorithm to SNNs, as it is the most widely used algorithm to tune the weights of a classical (non-spiking) neural network. In a nutshell, it considers the system as implementing an input/output function and iteratively updates the weights according to the direction and magnitude of the error’s gradient. In deep learning, the gradient is computed on the activation function and, since spikes are not differentiable, a recent popular approach consists in using a surrogate gradient [179] to “cross-compile” a classical neural network into a spiking architecture [180]. SNNs reach in some cases a performance similar to that of their non-spiking equivalents, for instance on the MNIST dataset for categorizing digits in a stream of events [181]. So far, this approach does not outperform classical architectures in terms of either training efficiency or performance [182]. However, SNNs remain the best candidates to reproduce biological neural systems and their capacities in terms of accuracy, speed and energy consumption. There remains, therefore, an immense gap between our understanding of biology and its translation into efficient SNNs. To go beyond the state-of-the-art, we will focus here on one core computation of a spiking neuron, that is, its ability to switch from the classical integrator mode (summing analog currents on its dendrites) to a detector of precise spiking motifs [55]. In particular, we will explore different existing architectures which are able to overcome the diversity of input presynaptic patterns and learn to detect stable spiking motifs, that is, volleys of spikes which are stable up to a certain onset time (see Figure 7). These models will be compared in light of neuroscientific and computational perspectives.
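To give a concrete flavor of this approach, consider the following minimal sketch in Python (assuming the PyTorch library is available; the fast-sigmoid surrogate and all constants are illustrative choices, not the exact scheme of [179,180]):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # all-or-none event when the membrane crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # derivative of a fast sigmoid, standing in for the ill-defined derivative of the step
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

spike_fn = SurrogateSpike.apply

def lif_step(v, input_current, tau=20.0, threshold=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire layer, differentiable end-to-end."""
    v = v + dt / tau * (-v + input_current)
    s = spike_fn(v - threshold)
    return v * (1.0 - s), s  # reset the membrane wherever a spike occurred
```

Since the backward pass is smooth, such a layer can be stacked and trained with standard back-propagation through time.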

5.1. Izhikevich’s Polychronization Model

As we saw, most SNNs, and in particular those adapted from deep-learning-like architectures, rely on an encoding of information based on a continuously varying firing rate. Notable exceptions of SNNs using precise spike timings are the time-encoding machine by Lazar [183] and the polychronization model of Izhikevich [111]. In this section, we focus on the polychronization model, a random recurrent network of spiking neurons which includes synaptic delays chosen from a range of biologically realistic values (from 0 to 20 ms) and whose weights evolve with a spike-time-dependent plasticity (STDP) learning rule [184]. It was shown that the relative timing of spikes has an impact on the development of synaptic efficacy for many kinds of neurons [185]. Delays are defined as the total time taken for a spike to be conducted from one presynaptic neuron’s soma to the efferent postsynaptic neuron’s soma. It is worth mentioning that only the weights are changed by the STDP rule: the set of delays is drawn randomly at initialization and then “frozen” for the rest of the simulation. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity, i.e., exhibit reproducible time-locked firing patterns which the author defined as “polychronous groups” (PGs). One core ingredient of this model is the fact that the neurons composing a group fire at different times, but due to the heterogeneous delays, the spikes reach the postsynaptic neuron at the same time. This synchrony of arrival at the soma of the neuron leads to the summation of the excitatory post-synaptic potentials evoked by each spike, and thus to the crossing of the voltage threshold and the discharge of a spike (see Figure 6). According to the STDP rule, the synapses of the neurons involved in this polychronous activity will see their weights increase, which may, thus, consolidate the formation of a polychronous group.
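The coincidence-of-arrival principle at the heart of this model can be illustrated in a few lines of Python (a deliberately minimal sketch with arbitrary constants, not a reproduction of the full network of [111]):

```python
import numpy as np

spike_times = np.array([0.0, 3.0, 6.0])  # presynaptic somata fire asynchronously (ms)
delays = np.array([6.0, 3.0, 0.0])       # heterogeneous axonal delays...
arrivals = spike_times + delays          # ...make all spikes arrive at t = 6 ms

def first_spike(arrival_times, weight=0.5, tau=5.0, threshold=1.2, dt=0.1, t_max=20.0):
    """First spike time of a LIF neuron receiving unitary EPSPs, or None."""
    v = 0.0
    for t in np.arange(0.0, t_max, dt):
        v -= v * dt / tau  # membrane leak
        v += weight * np.sum(np.isclose(arrival_times, t, atol=dt / 2))
        if v >= threshold:
            return t
    return None

print(first_spike(arrivals))     # coincident arrivals sum up: the neuron fires
print(first_spike(spike_times))  # without delays, EPSPs decay in isolation: no spike
```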
Interestingly, the paper by [111] stirred a lively debate in the field of computational neuroscience, with a generally positive reception, yet relatively few works extended this seminal paper. Indeed, models of synaptic delay learning in spiking neural networks already existed, see for instance [186] or [187], yet they had not shown potential applications to the detection of spiking motifs. A popular model for the detection of latency patterns is the tempotron [188], reviewed in particular in [189]. The tempotron is a supervised synaptic learning algorithm which discriminates a target motif from distractors; it extends the perceptron, which does not incorporate a spike-timing framework. The tempotron learning rule is derived by an optimization process and takes the form of a supervised STDP rule. The limits of this model are that its output is only binary and that its storage capacity is limited. An extension of [111] was made in a very detailed work aiming at reproducing the polychronization model [190]. Indeed, while the original paper contained material within the text to reproduce the whole model (using MATLAB), it was not complete enough to allow for the reproduction of all results presented in that manuscript. This more recent work details how the code could be slightly corrected; it comes with Python code and a version control system detailing the whole process, thus giving provenance to the different steps of this scientific endeavor. Another recent work gives a Bayesian account of a similar model [191]. Starting from the observation that previous methods for studying the activation of polychronous groups in response to stimuli have been limited by the template-based methods used to identify PG activation, the authors outline a new method that overcomes these difficulties by establishing a probabilistic interpretation of PG activation. They demonstrate the use of this method by investigating the claim that PGs might provide the foundation of a representational system. Stimulation of the trained network produces the activation of a PG, i.e., the propagation of firing activity through multiple layers due to convergent patterns of firing. While extending the original method, these works also reveal shortcomings that we will try to analyze in the rest of this section.
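To make the tempotron rule more concrete, here is a simplified sketch in Python (it ignores, for brevity, the shunting of inputs after the output spike used in the original model [188], and all constants are illustrative):

```python
import numpy as np

def kernel(t, tau=15.0, tau_s=3.75):
    """Unnormalized difference-of-exponentials PSP kernel (zero for t <= 0)."""
    return np.where(t > 0, np.exp(-t / tau) - np.exp(-t / tau_s), 0.0)

def tempotron_update(weights, spike_times, is_target, threshold=1.0, lr=0.1,
                     t_grid=np.arange(0.0, 100.0, 0.5)):
    """One learning step on a single pattern (one array of spike times per synapse)."""
    v = np.zeros_like(t_grid)  # membrane potential as a weighted sum of PSP kernels
    for w, times in zip(weights, spike_times):
        for t_i in times:
            v += w * kernel(t_grid - t_i)
    fired = v.max() >= threshold
    if fired == is_target:
        return weights  # correctly classified, no update
    t_max = t_grid[np.argmax(v)]       # time of maximal postsynaptic potential
    sign = 1.0 if is_target else -1.0  # potentiate misses, depress false alarms
    grad = np.array([kernel(t_max - np.asarray(times)).sum() for times in spike_times])
    return weights + lr * sign * grad

# toy usage: 10 synapses, 3 randomly timed spikes each, labeled as a target motif
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=10)
pattern = [rng.uniform(0.0, 80.0, size=3) for _ in range(10)]
weights = tempotron_update(weights, pattern, is_target=True)
```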
Strikingly, thanks to the fact that a neuron can be involved in different polychronous groups, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system (see Figure 7). In other neuronal models, an efficient use or detection of these spatio-temporal patterns embedded in the spike train comes with the integration of heterogeneous delays [191,192]. The recent “multi-neuronal spike sequence detector” architecture integrates the weight- and delay-adjustment methods by combining plasticity with the modulation of the spike emission latency [181]. Additional models for the detection of latency patterns are presented in the extensive (graph-centric) reviews on synchronization in time-varying networks [193,194]. This representation potentially has a much greater information capacity than other neural coding approaches, through the network’s connectivity and the possible coexistence of numerous superposed PGs [195]. Recently, a model with heterogeneous delays, coupling a logistic regression with a temporal convolution, was implemented to test the detection of spiking motifs embedded in an event stream [196]. This allowed the detection of a high number of superposed polychronous motifs in synthetic data, illustrating the computational benefit of such representations compared to those with a unique delay (see Figure 7). As such, these models use the neural dynamics to handle input signals with different delays but do not explicitly take full advantage of the representation capacity offered by heterogeneous delays.
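As an illustration of such a detection by temporal convolution, the following sketch embeds a known motif in background activity and recovers its onset by cross-correlation; this is a caricature of [196], which instead learns the kernel weights with a logistic regression:

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, n_time, n_delays = 8, 200, 12

# a spatio-temporal motif: a few (neuron, delay) pairs set to one
motif = np.zeros((n_neurons, n_delays))
motif[rng.integers(0, n_neurons, 5), rng.integers(0, n_delays, 5)] = 1.0

# a binary raster of background activity, with the motif embedded at t = 100
raster = (rng.random((n_neurons, n_time)) < 0.02).astype(float)
raster[:, 100:100 + n_delays] = np.clip(raster[:, 100:100 + n_delays] + motif, 0, 1)

# temporal convolution: evidence for the motif at each candidate onset time
evidence = [np.sum(raster[:, t:t + n_delays] * motif) for t in range(n_time - n_delays)]
print("detected onset:", int(np.argmax(evidence)))  # recovers t = 100
```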

5.2. Learning Synaptic Delays

First, the original model by Izhikevich uses a simple STDP rule, while a whole range of STDP-based learning rules may implement the detection of precise spiking motifs. For instance, to address how transmission delays and STDP jointly determine emergent pairwise activity–connectivity patterns, a recent study analyzed the phase synchronization properties and coupling symmetry between two bidirectionally coupled neurons using both phase-oscillator and conductance-based neuron models [197]. Moreover, modified STDP rules have been used for synchronous coherence detection [198] and for the learning of specific receptive fields [199]. They were also extended to recurrent neuronal networks [200] and to delay selection [201]. In particular, this has been applied to recurrent networks of spiking neurons receiving oscillatory inputs [202], allowing for the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. More generally, our ability to track and respond to rapidly changing visual stimuli, such as a fast-moving tennis ball, indicates that the brain is capable of extrapolating the trajectory of a moving object to predict its current position, despite the delays that result from neural transmission. Specifically, the neural circuits underlying this ability can be learned through spike-timing-dependent synaptic plasticity, and these circuits emerge spontaneously and without supervision, demonstrating how neural transmission delays can, in part, be compensated to implement the extrapolation mechanisms required to predict where a moving object is at the present moment [203].
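For reference, the canonical pair-based STDP window underlying most of these rules can be written in a few lines (the amplitudes and time constants are typical textbook values, not those of a specific study):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre (in ms)."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))  # post before pre: depression

print(stdp_dw(np.array([10.0, -10.0])))  # LTP for causal pairs, LTD for acausal ones
```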
At the implementation level, a recent work proposed a bio-plausible unsupervised delay-learning rule for extracting temporal features in spiking neural networks [204]. The authors provided mathematical proofs showing that their learning rule gives the ability to learn repeating spatio-temporal patterns. Applying this STDP-based rule on the delays of a spiking neural network, the experimental results were validated on a simple motion detection task, but were prone to convergence issues. Another model of synaptic delay-weight plasticity integrates synaptic delay plasticity into supervised learning and proposes a novel learning method that adjusts both the synaptic delays and the weights of the learning neurons to make them fire precisely timed spikes [192]. A similar approach was presented by [205], who proposed a supervised delay-learning algorithm for spiking neurons with temporal encoding, in which both the weight and the delay of a synaptic connection can be adjusted to enhance the learning performance. Other models, such as that of [206], propose weightless spiking neural networks that can perform a simple classification task applied to MNIST. In a recent paper [207], the authors proposed a gradient-descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. In this algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to the presynaptic spike times. In yet another computational model, Sun et al. [208] showed that frequently activated polychronous neural groups can be learned efficiently by readout neurons with a joint weight-delay spike-timing-dependent plasticity.
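The common intuition behind these delay-learning rules can be captured in a toy sketch: since the arrival time of a spike is its emission time plus the synaptic delay, the derivative of the arrival time with respect to the delay is one, and a simple delta rule suffices to align all arrivals on a target time. This is a deliberate simplification of the gradient-based schemes of [192,205,207]:

```python
import numpy as np

def learn_delays(pre_times, delays, t_target, lr=0.2, n_epochs=50):
    """Nudge each synaptic delay so that every spike arrives at t_target."""
    for _ in range(n_epochs):
        arrivals = pre_times + delays
        delays = np.clip(delays - lr * (arrivals - t_target), 0.0, None)  # stay causal
    return delays

pre_times = np.array([0.0, 3.0, 6.0])
delays = learn_delays(pre_times, np.zeros(3), t_target=8.0)
print(pre_times + delays)  # all arrivals converge to ~8 ms: a coincidence detector
```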

5.3. Real-World Applications

A second shortcoming of models derived from the polychronization model is their lack of applications to real-world scenarios. Indeed, most of these theoretical models aim at reproducing neurobiological observations, while applications to machine learning tasks, such as image processing, would further demonstrate their practical relevance. For instance, in a recent work, Ghosh et al. [209] proposed a two-stage unsupervised–supervised system for the categorization of spatio-temporal actions from an event-based stream. The first stage learns spatio-temporal convolutional filters targeted to minimize the event-removal-related changes to a local spatio-temporal spike-event pattern. The second stage takes the output of the spatio-temporal filters as an input example containing multiple feature channels, and proceeds to train a classifier for the recognition of spatio-temporal activity. For testing the system, two datasets were considered: DVS gesture and a new action recognition dataset recorded for this work. The results demonstrate the ability of the system to outperform the state-of-the-art in event-based gesture recognition, along with a performance superior to alternative ways of obtaining the first-stage filters, thus showing the potential of such a representation.
There are further applications of spiking neural networks to image processing. For instance, a set of models is based on the design of micro-circuits with specific lateral interactions embedded as spatially anisotropic connections. Using this core computational unit, and extending it to computations on a topographic representation similar to that observed in the primary visual cortex of mammals, the anisotropic rules implemented a form of delayed activation. This result was based on a predictive model defined in the Bayesian framework (the so-called free-energy principle), which was able to account for temporal delays in the system, both at the sensory and motor levels [210], noting in particular that “the application of delay operators just means changing synaptic connection strengths to take different mixtures of generalized sensations and their prediction errors.” Such a model was implemented at the network level and applied to various motion detection tasks. In essence, two neurons which were selective to specific motions were connected if the delay was coherent with the change in the position of their respective receptive fields [211]. This was also implemented in a neural mass model, showing that such anisotropic connectivity may explain the emergence of tracking [212], and further explored in a spiking neural network which reproduced the observation that neural activity is maintained along the trajectory of a smoothly moving dot, even if it is momentarily blanked [213]. This led to the proposal that such delay-based computations could explain diverse perceptual mechanisms, such as the so-called flash–lag illusion [214]. However, these latter models used parametric rules for defining the weights. Extending such mechanisms with the ability to learn delays in an SNN would provide a breakthrough in the efficiency of these networks, and we will explore some exemplary results from neuromorphic engineering to gain better insights into that aspect.
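The connectivity rule of [211] can be summarized in a one-line computation: two motion-selective neurons are linked with a delay equal to the time the preferred stimulus needs to travel between their receptive fields (a minimal sketch with illustrative units):

```python
import numpy as np

positions = np.linspace(0.0, 10.0, 11)  # receptive field centers (deg) along a line
v_pref = 0.5                            # preferred stimulus speed (deg/ms), illustrative

# delay from neuron i to neuron j, keeping only causal links in the preferred direction
travel = (positions[None, :] - positions[:, None]) / v_pref  # ms
delays = np.where(travel > 0, travel, np.nan)
print(delays[0, 1:4])  # a dot moving at 0.5 deg/ms reaches the next fields 2, 4, 6 ms later
```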

6. Applications of Precise Spiking Motifs in Neuromorphic Engineering

Artificial intelligence has made huge advances in the past decades. Deep learning algorithms nowadays outperform humans at complex tasks such as natural image recognition or abstract strategy board games. Yet, machine learning algorithms suffer from adversarial attacks and a lack of generalization capacity. However, their main weakness, compared to biological neural networks, is their poor energy efficiency. Neuromorphic engineering intends to mimic the neural bases of communication with a wide variety of techniques, from strictly analog circuits to software-based neuromorphic systems, and to develop tools improving the capacities of current artificial intelligence [15,215]. Because the reduced energy consumption of biological networks can be explained in part by the use of spikes and asynchronous responses to exchange information [178], neuromorphic chips use this parallel and event-based representation to perform energy-efficient computations. Another important distinction from classical von Neumann architectures is the localized memory of this new type of chip. It can be materialized by the capacity of the physical connections between the processing units to store information [216]. An example of such a connection, directly inspired by synaptic plasticity, is the memristor [217], for which the resistance value can be dynamically adjusted. Using these event-based computations as a building block, neuromorphic engineering proposes new hardware designs well suited to simulating SNNs and to using the full power of the asynchronous computations observed in biological systems. Even if some useful SNN simulators run on GPUs [218,219,220], such event-based computing techniques show their advantages in terms of frugality and rapidity only on neuromorphic chips.
This field of research draws on neuroscientific advances and computational formalisms to design innovative architectures; conversely, artificially reproducing such mechanisms offers an interesting way to study neural circuitry. Many connections can be drawn between neuromorphic engineering and computational neuroscience, aiming at solving both research and technology challenges [221]. In this section, we give a description of the different neuromorphic hardware platforms that have been developed and see how they can be used to deal with precise temporal motifs.

6.1. The Emergence of Novel Computational Architectures

To our knowledge, the first neuromorphic circuit is the pulsed current-source synapse proposed by Carver Mead in 1989 [222]. It was implemented with transistors operating in the sub-threshold domain and responded to asynchronous events, but was not capable of discriminating two different spiking sequences with the same firing rate. Indeed, the postsynaptic membrane potential was increased by a step proportional to the input current but did not decrease in time, as can be observed in biological neurons. Electronic circuits then became more and more bio-realistic and, two decades later, [223] released the diff-pair integrator (DPI) synapse, which could reproduce the global dynamics of biological neurons. The DPI circuit could multiplex in time spikes from different sources and became a potential “silicon coincidence detector”. Today, many devices are good candidates for implementing event-based algorithms and use the address event representation (AER). They can be divided into three major categories: digital, analog and mixed analog/digital platforms. For a more complete review, the reader can refer to [224]; here, we cite the most popular ones. The SpiNNaker [225,226], Loihi [182] and TrueNorth [227] chips are widely used examples of digital hardware implementations. Compared to TrueNorth, which exclusively implements a LIF neuron, SpiNNaker and Loihi offer some flexibility in terms of neuron model and allow for on-chip learning. This flexibility in the implementation comes at the cost of an increased energy consumption. Mixed analog–digital systems were developed at Stanford University: Neurogrid and Braindrop [228,229]. They are mostly used by computational neuroscientists to model brain activity at different levels of abstraction. BrainScaleS [230] is another mixed analog–digital system developed, just like SpiNNaker, for the Human Brain Project [231]. It is a wafer-scale neuromorphic hardware with analog components. Analog arrays (i.e., field-programmable analog arrays (FPAAs)) refer to the initial idea of neuromorphic hardware aiming at building strictly analog devices. The pulsed current-source synapse and the DPI are examples of such devices; we can also mention the field-programmable neural array [232] and the NeuroFPAA [233], specifically designed for neuromorphic systems. Due to their lack of generality and to some issues specific to analog circuits, these fully analog devices are not yet widely used for neuromorphic computing.
Neuromorphic sensors have also been developed with the idea of capturing external stimuli more efficiently and in a way closer to biological systems. A widely used example is the dynamic vision sensor (DVS), which provides a stream of binary asynchronous events signaling detectable changes in luminance. These devices, also named “silicon retinas” (see Figure 8), show great improvements in terms of memory allocation or power consumption for the recording of a visual scene. We also report other event-based sensing devices for sound [234] and touch [235], but will focus on DVSs in the next subsection, which is dedicated to the use of the dynamics embedded in event-based signals.

6.2. On the Importance of Spatio-Temporal Information in Silicon Retinas

With the AER specification and their sub-millisecond temporal precision, DVSs bring a new approach to the storage and processing of visual information. By construction of their generative model, these sparse events are markers of the dynamics of the visual scene captured by the sensor. The dynamics of the event streams have to be used to make sense of the recorded information, and new algorithms are needed to solve classical computer vision tasks efficiently.
In [236], time surfaces were introduced as an event-driven 2D image of the delay between the last event recorded at the address of a pixel and the current time. An exponential decay is applied to this delay to obtain the analog values of the time surface. This gives more weight to recent events and offers an analogy with the LIF spiking neuron. It is a way to represent the local dynamics embedded in event-based recordings and, with unsupervised learning on the event stream based on the cosine similarity, time surfaces can capture repeating motifs within the input signals. Learned time surfaces can be used for object recognition [236,237,238,239,240], and these works show that the method can be efficiently applied to state-of-the-art benchmarks.
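The core of the time-surface computation can be summarized in a few lines (a minimal sketch of the representation of [236], ignoring event polarity and the local neighborhood, with an arbitrary decay constant):

```python
import numpy as np

def time_surface(last_event_time, t_now, tau=50e-3):
    """Exponentially decayed age of the last event at each pixel."""
    ts = np.exp(-(t_now - last_event_time) / tau)
    ts[last_event_time < 0] = 0.0  # pixels that have not fired yet
    return ts

last_event_time = -np.ones((4, 4))  # toy 4x4 sensor; -1 marks silent pixels
for t, x, y in [(0.010, 1, 1), (0.020, 2, 1), (0.030, 2, 2)]:  # events as (t, x, y)
    last_event_time[y, x] = t

print(np.round(time_surface(last_event_time, t_now=0.035), 2))  # recent events are brighter
```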
More generally, three-dimensional convolutions in both space and time are another representation of the spiking motifs embedded in the event stream [196,209,241]. With their additional temporal dimension, their kernels can capture multiple events at the same pixel address, as long as they belong to the local temporal window (see Figure 9). This representation is only limited by the time step used for the discretization of the signal; this factor defines the temporal precision of the representation. Other methods make direct use of the precise timing of the events captured by the DVS to solve optical flow and time-to-contact challenges [242,243,244], depth inference [245], feature detection and tracking [246], motion segmentation [247] or the simultaneous localization and mapping problem [248]. This non-exhaustive list of complex tasks is not directly linked to biological processes, but it shows the potential of the precise temporal resolution of neuromorphic retina-like cameras. In essence, these sensors offer a novel view of visual information processing due to the asynchronous responses of the different pixels. With this type of signal, the use of the spatio-temporal motifs embedded in the event streams is essential to solve high-level visual tasks.

6.3. Computations with Delays in Neuromorphic Hardware

For the rest of this section, we report examples of implementations of event-based algorithms using precise spatio-temporal motifs on neuromorphic hardware. [249] implemented a delay-learning algorithm on an analog chip. Online learning on neuromorphic chips is still a challenge today and, in this work, only the detection of spiking motifs was performed on the analog architecture; training was performed digitally, based on the tempotron learning rule [188]. In addition to delay learning, a group at the University of West Sydney developed a neuromorphic implementation of multiple synaptic plasticity learning rules [250]. They showed that STDP and spike-timing-dependent delay plasticity rules could be implemented in both a digital and an analog chip. From the network parameters and the physical limitations of on-chip storage, they showed that the digital implementation is much easier to scale up and that an external memory would be needed for larger networks. The same group presented an FPGA hardware implementation of polychronous networks in which the propagation delays are learned in a supervised manner, based on the expected firing time of the post-synaptic neuron [251]. Pfeil et al. [252] implemented STDP on a mixed analog–digital chip to simulate the sound localization processes observed in the barn owl auditory system [154]. Coherence detection on input spikes coming from two sources was obtained with a 50 ns precision. They claimed that this unsupervised learning denoises the input and compensates for variations between neural components. The variability of the responses of the analog components can be compensated by population coding to obtain responses robust to noise, a phenomenon also observed in biological neural networks [253]. A recent work implemented the binding operation of a sparse vector-symbolic architecture on the Loihi chip, using delay lines and coincidence detection to compute this operation [254]. The authors highlighted the fact that using delays can be expensive, notably in memory bandwidth, because incoming spikes have to be stored in blocks with a temporal dimension equivalent to the maximal delay. Note that this problem is partly due to the algorithm used in that paper and that analog chips should not suffer from this type of limitation.
Online on-chip learning and computations with delays are still emerging in neuromorphic engineering. The technical challenges linked to the development of this type of implementation and the growing interest in delay learning make advances in this field important for the future of computations with precise spatio-temporal motifs. While improvements are still to be made, neuromorphic chips seem to be good candidates to make efficient use of these particular features.

7. Discussion

7.1. Summary

In this review paper, we presented recent evidence for the role of precise spiking motifs in neuroscience. In particular, we showed that such motifs may play a crucial role in neurobiology, that they may be understood at the theoretical and computational levels, and that they may have numerous applications in neuromorphic engineering. Specifically, we showed the following:
  • The efficiency of neural systems, and in particular the visual system, imposes strong constraints on the structure of neural activity, which highlights the importance of precise spike times;
  • Growing evidence from neurobiology shows that neural systems are more than integrators and may use synchrony detection in different forms: synfire chains, travelling waves or arbitrary spiking motifs; notably, an encoding based on precise spiking motifs may provide huge computational benefits;
  • Many theoretical models already exist, taking into account the specificity of spiking motifs, notably by using heterogeneous delays;
  • Using precise spiking motifs could ultimately be a key ingredient in neuromorphic systems to reach similar efficiencies as biological neural systems.
Overall, our reviewing effort has shown that a growing community is focusing on this aspect. This community builds on solid and validated evidence and is breaking new ground thanks to current technical advances. Moreover, we also showed that this community is highly diverse, operating in biology, computational neuroscience and neuromorphic engineering. As a consequence, the global effort is still largely scattered, which limits its wider acceptance in neuroscience.

7.2. Limits

Additionally, the different models of spike motif detection and learning that we have presented at these different levels (neurobiological, theoretical, and neuromorphic) individually present limitations that prevent their widespread application in neuroscience.
First, many models are based on a discretization of time. This assumption is important to allow for a convenient representation of neuronal information that can be processed on computers. This treatment amounts to transforming spike trains into a matrix form for processing with classical machine learning algorithms. This assumption therefore implies an inefficient use of memory, as it transforms the sparse representation of a spike sequence into dense matrices. In addition, this representation can induce errors due to the discretization and to the scale of the temporal sampling. Finally, this representation encourages the use of classical methods, which are not adapted to disruptive approaches such as event-driven representations.
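A back-of-the-envelope computation illustrates this cost; the population size, firing rate and byte counts below are arbitrary but plausible choices:

```python
n_neurons, duration_s, rate_hz, dt_s = 1000, 10.0, 5.0, 1e-3

n_spikes = int(n_neurons * duration_s * rate_hz)
sparse_bytes = n_spikes * (4 + 8)  # one (int32 address, float64 time) pair per event
dense_bytes = n_neurons * int(duration_s / dt_s)  # one byte per (neuron, time bin)

print(f"sparse: {sparse_bytes / 1e6:.1f} MB, dense: {dense_bytes / 1e6:.1f} MB")
# sparse: 0.6 MB, dense: 10.0 MB -- and the dense grid also quantizes spike times to dt
```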
Moreover, the learning of patterns is often done in a supervised way. Indeed, the problem of detecting polychronous groups implies inferring both the address and the precise time of occurrence of these motifs. Most of the models we have presented are based on the assumption that at least one of these variables is known: either the pattern, its identity, or its time of occurrence. This constraint should be contrasted with the way a biological nervous system works, in which learning is performed autonomously, i.e., without supervision. We can note, however, that some models can perform such learning, but only in the case of data for which the motifs are easily separable. More generally, to reproduce the efficiency of biological systems, one should account for the different temporal scales of adaptation, from seconds to years. For instance, the scaffolding of neural assemblies seems to follow critical periods during development [255].
A final limitation of the models we have presented is that they consist of a single processing layer that links an input to an output. However, we saw that neurobiological systems use processing loops within hierarchical graphs. In general, these systems are bidirectionally connected across different layers (for instance, cortical areas), but also within a layer, as was for instance used by Izhikevich [111]. The whole system forms a dynamical model which may be considered globally during the learning phase, while taking into account the constraints of the system, for instance, the lack of a global clock or the cost of fully connected topologies. Moreover, these processes have to be distinguished from judgements on timing, such as temporal order processing (judging when one event happens relative to another) or duration estimation (measuring how long an event lasts) [256].

7.3. Perspectives

The limits that we have presented can be treated individually in each model, as evidenced by the individual efforts which try to overcome them. However, to propose a real breakthrough, we believe that future research should follow a unified, interdisciplinary approach, with applications to real-world, ecological scenarios and with open and reproducible methods.
First, as we already noted, the effort is still largely scattered. This is in part due to the fact that interactions between neurobiology, theoretical and computational neuroscience and neuromorphic engineering are still scarce as of today. It has been largely demonstrated that close, bidirectional interactions are essential to foster breakthroughs. For instance, the design of model-driven protocols has proven to be essential in modern neuroscience. Additionally, just as neural networks were essential in shaping modern-day machine learning, e.g., computer vision using deep learning, spiking neural networks should prove essential in future emerging technologies.
From that perspective, it is essential that such models are tested on ecologically relevant, real-world scenarios. Indeed, classic convolutional neural networks have emerged as optimal solutions, for example, to classify static images into categories, yet they are not well adapted to processing dynamic, multimodal sensory flows. The emerging necessity to process more complex flows, such as the multiple streams of information in a car designed for autonomous driving, requires modifying such modelling paradigms and, in particular, taking into account that the generated actions may modify the sensory inputs. Notably, the protocols used in neurobiology, theory and engineering alike should take into account these novel levels of complexity.
Ultimately, the community should encourage the adoption of open, reproducible science. Indeed, the different models that we have presented often come with the tools necessary to reproduce the results obtained. This is true in neurobiology [115], in theoretical neuroscience [111] and in engineering [240]. This aspect is essential to foster the emergence of interdisciplinary projects, such as model-driven neurobiological experiments or biologically inspired neuromorphic engineering. Solutions exist to optimize these collaborations [257] and suggest the emergence of a novel paradigm for scientific advances in neuroscience [258], i.e., using data exploration in which the scientific models are fit to the data by learning algorithms. As such, this review aims at paving the way to openly sharing this variety of resources and at offering a unified view on the role of precise spiking motifs in neuroscience.

Funding

This research was funded by the European Union ERA-NET CHIST-ERA 2018 research and innovation program under grant number ANR-19-CHR3-0008-03 “APROVIS3D”. LP, CB and AG received funding from the ANR grant number ANR-20-CE23-0021 “AgileNeuroBot”. LP received support from the french government under the France 2030 investment plan, as part of the Initiative d’Excellence d’Aix-Marseille Université–A*MIDEX number AMX-21-RID-025 “Polychronies”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This work is made reproducible using the following tools. First, the manuscript’s source code and the code reproducing all figures are available on GitHub; this repository also links to preprint versions of this review as well as to previous versions. The associated Zotero group, which was used to collect the relevant literature on the subject, is also available.

Acknowledgments

We would like to thank Hugo Ladret and Frédéric Chavane for valuable feedback while writing this review. Many thanks to Simon Thorpe for the useful discussions and references on the latency of the visual system in human and non-human primates.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Piccolino, M. Luigi Galvani and animal electricity: Two centuries after the foundation of electrophysiology. Trends Neurosci. 1997, 20, 443–448.
  2. Adrian, E.D.; Zotterman, Y. The impulses produced by sensory nerve endings. J. Physiol. 1926, 61, 465–483.
  3. Gouras, P. Graded potentials of bream retina. J. Physiol. 1960, 152, 487–505.
  4. Perkel, D.H.; Gerstein, G.L.; Moore, G.P. Neuronal Spike Trains and Stochastic Point Processes: I. The Single Spike Train. Biophys. J. 1967, 7, 391–418.
  5. Perkel, D.H.; Gerstein, G.L.; Moore, G.P. Neuronal Spike Trains and Stochastic Point Processes: II. Simultaneous Spike Trains. Biophys. J. 1967, 7, 419–440.
  6. Abeles, M. Role of the cortical neuron: Integrator or coincidence detector? Isr. J. Med. Sci. 1982, 18, 83–92.
  7. Carr, C.E. Processing of Temporal Information in the Brain. Annu. Rev. Neurosci. 1993, 16, 223–243.
  8. Davis, Z.W.; Benigno, G.B.; Fletterman, C.; Desbordes, T.; Steward, C.; Sejnowski, T.J.; Reynolds, J.H.; Muller, L. Spontaneous traveling waves naturally emerge from horizontal fiber time delays and travel through locally asynchronous-irregular states. Nat. Commun. 2021, 12, 6057.
  9. Perrinet, L.; Samuelides, M.; Thorpe, S. Coding static natural images using spiking event times: Do neurons cooperate? IEEE Trans. Neural Netw. 2004, 15, 1164–1175.
  10. Gollisch, T.; Meister, M. Rapid Neural Coding in the Retina with Relative Spike Latencies. Science 2008, 319, 1108–1111.
  11. DeWeese, M.R.; Zador, A.M. Binary Coding in Auditory Cortex; Neural Information Processing Systems Foundation: Cambridge, MA, USA; MIT Press: Cambridge, MA, USA, 2003.
  12. Carr, C.E.; Konishi, M. A circuit for detection of interaural time differences in the brain stem of the barn owl. J. Neurosci. 1990, 10, 3227–3246.
  13. Bohte, S.M. The evidence for neural information processing with precise spike-times: A survey. Nat. Comput. 2004, 3, 195–206.
  14. DiLorenzo, P.M.; Victor, J.D. Spike Timing: Mechanisms and Function; CRC Press: Boca Raton, FL, USA, 2013.
  15. Roy, K.; Jaiswal, A.; Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 2019, 575, 607–617.
  16. Flourens, M.J.P. Recherches Expérimentales sur les Propriétés et les Fonctions du Système Nerveux, Dans les Animaux Vertébrés; Hachette BNF: Paris, France, 1842.
  17. Pearce, J. Marie-Jean-Pierre Flourens (1794–1867) and Cortical Localization. Eur. Neurol. 2009, 61, 311–314.
  18. Hubel, D.H.; Wiesel, T.N. Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 1968, 195, 215–243.
  19. Carandini, M.; Heeger, D.J. Normalization as a canonical neural computation. Nat. Rev. Neurosci. 2012, 13, 51–62.
  20. Thorpe, S.; Fize, D.; Marlot, C. Speed of processing in the human visual system. Nature 1996, 381, 520–522.
  21. Kirchner, H.; Thorpe, S. Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vis. Res. 2006, 46, 1762–1776.
  22. Keysers, C.; Xiao, D.K.; Földiák, P.; Perrett, D.I. The Speed of Sight. J. Cogn. Neurosci. 2001, 13, 90–101.
  23. Schmolesky, M.T.; Wang, Y.; Hanes, D.P.; Thompson, K.G.; Leutgeb, S.; Schall, J.D.; Leventhal, A.G. Signal timing across the macaque visual system. J. Neurophysiol. 1998, 79, 3272–3278.
  24. Vanni, S.; Tanskanen, T.; Seppä, M.; Uutela, K.; Hari, R. Coinciding early activation of the human primary visual cortex and anteromedial cuneus. Proc. Natl. Acad. Sci. USA 2001, 98, 2776–2780.
  25. Lamme, V.A.; Roelfsema, P.R. The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci. 2000, 23, 571–579.
  26. Serre, T.; Oliva, A.; Poggio, T. A feedforward architecture accounts for rapid categorization. Proc. Natl. Acad. Sci. USA 2007, 104, 6424–6429.
  27. Jérémie, J.N.; Perrinet, L.U. Ultrafast image categorization in vivo and in silico. arXiv 2022, arXiv:2205.03635.
  28. Nowak, L.G.; Bullier, J. The Timing of Information Transfer in the Visual System. In Extrastriate Cortex in Primates; Springer: Boston, MA, USA, 1997; pp. 205–241.
  29. Thorpe, S.J.; Fabre-Thorpe, M. Seeking Categories in the Brain. Science 2001, 291, 260–263.
  30. Rucci, M.; Ahissar, E.; Burr, D. Temporal Coding of Visual Space. Trends Cogn. Sci. 2018, 22, 883–895.
  31. Softky, W.; Koch, C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. 1993, 13, 334–350.
  32. Bryant, H.L.; Segundo, J.P. Spike initiation by transmembrane current: A white-noise analysis. J. Physiol. 1976, 260, 279–314.
  33. Mainen, Z.F.; Sejnowski, T.J. Reliability of Spike Timing in Neocortical Neurons. Science 1995, 268, 1503–1506.
  34. Ermentrout, G.B.; Galán, R.F.; Urban, N.N. Reliability, synchrony and noise. Trends Neurosci. 2008, 31, 428–434.
  35. Nowak, L. Influence of low and high frequency inputs on spike timing in visual cortical neurons. Cereb. Cortex 1997, 7, 487–501.
  36. Kenyon, G.T.; Hill, D.; Theiler, J.; George, J.S.; Marshak, D.W. A theory of the Benham Top based on center–surround interactions in the parvocellular pathway. Neural Netw. 2004, 17, 773–786.
  37. Celebrini, S.; Thorpe, S.; Trotter, Y.; Imbert, M. Dynamics of orientation coding in area V1 of the awake primate. Vis. Neurosci. 1993, 10, 811–825.
  38. Chase, S.M.; Young, E.D. First-spike latency information in single neurons increases when referenced to population onset. Proc. Natl. Acad. Sci. USA 2007, 104, 5175–5180.
  39. Safaie, M.; Jurado-Parras, M.T.; Sarno, S.; Louis, J.; Karoutchi, C.; Petit, L.F.; Pasquet, M.O.; Eloy, C.; Robbe, D. Turning the body into a clock: Accurate timing is facilitated by simple stereotyped interactions with the environment. Proc. Natl. Acad. Sci. USA 2020, 117, 13084–13093.
  40. Gautrais, J.; Thorpe, S. Rate coding versus temporal order coding: A theoretical approach. Biosystems 1998, 48, 57–65.
  41. Delorme, A.; Gautrais, J.; Van Rullen, R.; Thorpe, S. SpikeNET: A simulator for modeling large networks of integrate and fire neurons. Neurocomputing 1999, 26, 989–996.
  42. Delorme, A.; Richard, G.; Fabre-Thorpe, M. Ultra-rapid categorisation of natural scenes does not rely on colour cues: A study in monkeys and humans. Vis. Res. 2000, 40, 2187–2200.
  43. Bonilla, L.; Gautrais, J.; Thorpe, S.; Masquelier, T. Analyzing time-to-first-spike coding schemes. Front. Neurosci. 2022, 16, 971937.
  44. Bohte, S.M.; Kok, J.N.; La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 2002, 48, 17–37.
  45. Zenke, F.; Vogels, T.P. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Comput. 2021, 33, 899–925.
  46. Göltz, J.; Kriener, L.; Baumbach, A.; Billaudelle, S.; Breitwieser, O.; Cramer, B.; Dold, D.; Kungl, A.F.; Senn, W.; Schemmel, J.; et al. Fast and energy-efficient neuromorphic deep learning with first-spike times. arXiv 2021, arXiv:1912.11443.
  47. Kheradpisheh, S.R.; Ganjtabesh, M.; Thorpe, S.J.; Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 2018, 99, 56–67.
  48. Tavanaei, A.; Masquelier, T.; Maida, A. Representation learning using event-based STDP. Neural Netw. 2018, 105, 294–303.
  49. Gallego, G.; Delbruck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-Based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 154–180.
  50. Maunsell, J.H.; Van Essen, D.C. Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. J. Neurophysiol. 1983, 49, 1127–1147.
  51. Montemurro, M.A.; Rasch, M.J.; Murayama, Y.; Logothetis, N.K.; Panzeri, S. Phase-of-Firing Coding of Natural Visual Stimuli in Primary Visual Cortex. Curr. Biol. 2008, 18, 375–380.
  52. deCharms, R.C.; Merzenich, M.M. Primary cortical representation of sounds by the coordination of action-potential timing. Nature 1996, 381, 610–613.
  53. Vinje, W.E.; Gallant, J.L. Sparse Coding and Decorrelation in Primary Visual Cortex During Natural Vision. Science 2000, 287, 1273–1276.
  54. Abeles, M. Corticonics: Neural Circuits of the Cerebral Cortex; Cambridge University Press: Cambridge, MA, USA; New York, NY, USA, 1991.
  55. Paugam-Moisy, H.; Bohte, S.M. Computing with spiking neuron networks. In Handbook of Natural Computing; Springer: Berlin/Heidelberg, Germany, 2012.
  56. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Wiley: New York, NY, USA, 1949.
  57. Harris, K.D.; Csicsvari, J.; Hirase, H.; Dragoi, G.; Buzsáki, G. Organization of cell assemblies in the hippocampus. Nature 2003, 424, 552–556.
  58. Singer, W.; Gray, C.M. Visual Feature Integration and the Temporal Correlation Hypothesis. Annu. Rev. Neurosci. 1995, 18, 555–586.
  59. Roelfsema, P.R.; Engel, A.K.; König, P.; Singer, W. Visuomotor integration is associated with zero time-lag synchronization among cortical areas. Nature 1997, 385, 157–161.
  60. Riehle, A.; Grun, S.; Diesmann, M.; Aertsen, A. Spike synchronization and rate modulation differentially involved in motor cortical function. Science 1997, 278, 1950–1953.
  61. Kilavik, B.E.; Roux, S.; Ponce-Alvarez, A.; Confais, J.; Grun, S.; Riehle, A. Long-Term Modifications in Motor Cortical Dynamics Induced by Intensive Practice. J. Neurosci. 2009, 29, 12653–12663.
  62. Grammont, F.; Riehle, A. Spike synchronization and firing rate in a population of motor cortical neurons in relation to movement direction and reaction time. Biol. Cybern. 2003, 88, 360–373.
  63. Denker, M.; Zehl, L.; Kilavik, B.E.; Diesmann, M.; Brochier, T.; Riehle, A.; Grün, S. LFP beta amplitude is linked to mesoscopic spatio-temporal phase patterns. Sci. Rep. 2018, 8, 5200.
  64. Torre, E.; Canova, C.; Denker, M.; Gerstein, G.; Helias, M.; Grün, S. ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains. PLoS Comput. Biol. 2016, 12, e1004939.
  65. Ben-yishai, R.; Hansel, D. Traveling Waves and the Processing of Weakly Tuned Inputs in a Cortical Network Module. J. Comput. Neurosci. 1997, 77, 57–77.
  66. Bruno, R.M.; Sakmann, B. Cortex Is Driven by Weak but Synchronously Active Thalamocortical Synapses. Science 2006, 312, 1622–1627.
  67. Deneve, S. Bayesian inference in spiking neurons. In Proceedings of the Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004; Volume 17.
  68. Ballard, D.; Jehee, J. Dual Roles for Spike Signaling in Cortical Neural Populations. Front. Comput. Neurosci. 2011, 5, 22.
  69. Gewaltig, M.O.; Diesmann, M.; Aertsen, A. Propagation of cortical synfire activity: Survival probability in single trials and stability in the mean. Neural Netw. 2001, 14, 657–673.
  70. Gerstner, W. Time structure of the activity in neural network models. Phys. Rev. E 1995, 51, 738–758.
  71. Azouz, R.; Gray, C.M. Stimulus-selective spiking is driven by the relative timing of synchronous excitation and disinhibition in cat striate neurons in vivo. Eur. J. Neurosci. 2008, 28, 1286–1300.
  72. Kremkow, J.; Perrinet, L.U.; Monier, C.; Alonso, J.M.; Aertsen, A.; Frégnac, Y.; Masson, G.S. Push-Pull Receptive Field Organization and Synaptic Depression: Mechanisms for Reliably Encoding Naturalistic Stimuli in V1. Front. Neural Circuits 2016, 10, 37.
  73. Aviel, Y.; Mehring, C.; Abeles, M.; Horn, D. On Embedding Synfire Chains in a Balanced Network. Neural Comput. 2003, 15, 1321–1340.
  74. Kremkow, J.; Perrinet, L.U.; Masson, G.S.; Aertsen, A. Functional consequences of correlated excitatory and inhibitory conductances in cortical networks. J. Comput. Neurosci. 2010, 28, 579–594.
  75. Davison, A.P. PyNN: A common interface for neuronal network simulators. Front. Neuroinform. 2008, 2, 11.
  76. Pfeil, T.; Grübl, A.; Jeltsch, S.; Müller, E.; Müller, P.; Petrovici, M.A.; Schmuker, M.; Brüderle, D.; Schemmel, J.; Meier, K. Six Networks on a Universal Neuromorphic Computing Substrate. Front. Neurosci. 2013, 7, 11.
  77. Schrader, S.; Grün, S.; Diesmann, M.; Gerstein, G.L. Detecting Synfire Chain Activity Using Massively Parallel Spike Train Recording. J. Neurophysiol. 2008, 100, 2165–2176.
  78. Grammont, F.; Riehle, A. Precise spike synchronization in monkey motor cortex involved in preparation for movement. Exp. Brain Res. 1999, 128, 118–122.
  79. Brette, R. Computing with Neural Synchrony. PLoS Comput. Biol. 2012, 8, e1002561.
  80. Fries, P. A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends Cogn. Sci. 2005, 9, 474–480.
  81. VanRullen, R.; Reddy, L.; Koch, C. The Continuous Wagon Wheel Illusion Is Associated with Changes in Electroencephalogram Power at 13 Hz. J. Neurosci. 2006, 26, 502–507.
  82. Dugue, L.; Marque, P.; VanRullen, R. The Phase of Ongoing Oscillations Mediates the Causal Relation between Brain Excitation and Visual Perception. J. Neurosci. 2011, 31, 11889–11893.
  83. Bringuier, V.; Chavane, F.; Glaeser, L.; Frégnac, Y. Horizontal Propagation of Visual Activity in the Synaptic Integration Field of Area 17 Neurons. Science 1999, 283, 695–699.
  84. Benvenuti, G.; Chemla, S.; Boonman, A.; Perrinet, L.U.; Masson, G.S.; Chavane, F. Anticipatory responses along motion trajectories in awake monkey area V1. bioRxiv 2020.
  85. Le Bec, B.; Troncoso, X.G.; Desbois, C.; Passarelli, Y.; Baudot, P.; Monier, C.; Pananceau, M.; Frégnac, Y. Horizontal connectivity in V1: Prediction of coherence in contour and motion integration. PLoS ONE 2022, 17, e0268351.
  86. Feller, M.B.; Butts, D.A.; Aaron, H.L.; Rokhsar, D.S.; Shatz, C.J. Dynamic Processes Shape Spatiotemporal Properties of Retinal Waves. Neuron 1997, 19, 293–306.
  87. Bienenstock, E. A model of neocortex. Netw. Comput. Neural Syst. 1995, 6, 179–224.
  88. Muller, L.; Reynaud, A.; Chavane, F.; Destexhe, A. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nat. Commun. 2014, 5, 3675.
  89. Muller, L.; Chavane, F.; Reynolds, J.; Sejnowski, T.J. Cortical travelling waves: Mechanisms and computational principles. Nat. Rev. Neurosci. 2018, 19, 255–268.
  90. Lindén, H.; Petersen, P.C.; Vestergaard, M.; Berg, R.W. Movement is governed by rotational neural dynamics in spinal motor networks. Nature 2022, 610, 526–531.
  91. Chemla, S.; Reynaud, A.; diVolo, M.; Zerlaut, Y.; Perrinet, L.U.; Destexhe, A.; Chavane, F.Y. Suppressive waves disambiguate the representation of long-range apparent motion in awake monkey V1. J. Neurosci. 2019, 2792, 18.
  92. Pillow, J.W.; Shlens, J.; Paninski, L.; Sher, A.; Litke, A.M.; Chichilnisky, E.J.; Simoncelli, E.P. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 2008, 454, 995–999.
  93. Schneidman, E.; Berry, M.J.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012.
  94. Puchalla, J.; Berry, M.J. Spike Trains of Retinal Ganglion Cells Viewing a Repeated Natural Movie; Princeton University: Princeton, NJ, USA, 2022.
  95. Miller, J.e.K.; Ayzenshtat, I.; Carrillo-Reid, L.; Yuste, R. Visual stimuli recruit intrinsically generated cortical ensembles. Proc. Natl. Acad. Sci. USA 2014, 111, E4053–E4061.
  96. Ikegaya, Y.; Aaron, G.; Cossart, R.; Aronov, D.; Lampl, I.; Ferster, D.; Yuste, R. Synfire Chains and Cortical Songs: Temporal Modules of Cortical Activity. Science 2004, 304, 559–564.
  97. Luczak, A.; Barthó, P.; Marguet, S.L.; Buzsáki, G.; Harris, K.D. Sequential structure of neocortical spontaneous activity in vivo. Proc. Natl. Acad. Sci. USA 2007, 104, 347–352.
  98. Pastalkova, E.; Itskov, V.; Amarasingham, A.; Buzsáki, G. Internally Generated Cell Assembly Sequences in the Rat Hippocampus. Science 2008, 321, 1322–1327.
  99. Villette, V.; Malvache, A.; Tressard, T.; Dupuy, N.; Cossart, R. Internally Recurring Hippocampal Sequences as a Population Template of Spatiotemporal Information. Neuron 2015, 88, 357–366.
  100. Branco, T.; Clark, B.A.; Häusser, M. Dendritic Discrimination of Temporal Input Sequences in Cortical Neurons. Science 2010, 329, 1671–1675.
  101. Luczak, A.; McNaughton, B.L.; Harris, K.D. Packet-based communication in the cortex. Nat. Rev. Neurosci. 2015, 16, 745–755.
  102. Meister, M.; Lagnado, L.; Baylor, D.A. Concerted Signaling by Retinal Ganglion Cells. Science 1995, 270, 1207–1210.
  103. Cleland, T.A. Construction of Odor Representations by Olfactory Bulb Microcircuits. In Progress in Brain Research; Elsevier: Amsterdam, The Netherlands, 2014; Volume 208, pp. 177–203.
  104. Kashiwadani, H.; Sasaki, Y.F.; Uchida, N.; Mori, K. Synchronized Oscillatory Discharges of Mitral/Tufted Cells With Different Molecular Receptive Ranges in the Rabbit Olfactory Bulb. J. Neurophysiol. 1999, 82, 1786–1792.
  105. Rinberg, D.; Koulakov, A.; Gelperin, A. Speed-Accuracy Tradeoff in Olfaction. Neuron 2006, 51, 351–358.
  106. Johansson, R.S.; Birznieks, I. First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nat. Neurosci. 2004, 7, 170–177.
  107. Buzsáki, G.; Tingley, D. Space and Time: The Hippocampus as a Sequence Generator. Trends Cogn. Sci. 2018, 22, 853–869.
  108. Malvache, A.; Reichinnek, S.; Villette, V.; Haimerl, C.; Cossart, R. Awake hippocampal reactivations project onto orthogonal neuronal assemblies. Science 2016, 353, 1280–1283.
  109. Haimerl, C.; Angulo-Garcia, D.; Villette, V.; Reichinnek, S.; Torcini, A.; Cossart, R.; Malvache, A. Internal representation of hippocampal neuronal population spans a time-distance continuum. Proc. Natl. Acad. Sci. USA 2019, 116, 7477–7482.
  110. Agus, T.R.; Thorpe, S.J.; Pressnitzer, D. Rapid Formation of Robust Auditory Memories: Insights from Noise. Neuron 2010, 66, 610–618.
  111. Izhikevich, E.M. Polychronization: Computation with Spikes. Neural Comput. 2006, 18, 245–282.
  112. Simoncelli, E.P.; Paninski, L.; Pillow, J.; Schwartz, O. Characterization of Neural Responses with Stochastic Stimuli. In The New Cognitive Neurosciences, 3rd ed.; Gazzaniga, M., Ed.; MIT Press: Cambridge, MA, USA, 2003.
  113. Jazayeri, M.; Movshon, J.A. Optimal representation of sensory information by neural populations. Nat. Neurosci. 2006, 9, 690–696.
  114. Berens, P.; Ecker, A.S.; Cotton, R.J.; Ma, W.J.; Bethge, M.; Tolias, A.S. A Fast and Simple Population Code for Orientation in Primate V1. J. Neurosci. 2012, 32, 10618–10626.
  115. Bellec, G.; Wang, S.; Modirshanechi, A.; Brea, J.; Gerstner, W. Fitting summary statistics of neural data with a differentiable spiking network simulator. arXiv 2021, arXiv:2106.10064.
  116. Kohn, A.; Smith, M. Utah Array Extracellular Recordings of Spontaneous and Visually Evoked Activity from Anesthetized Macaque Primary Visual Cortex (V1). 2016. Available online: https://doi.org/10.6080/K0NC5Z4X (accessed on 25 December 2022).
  117. Warner, C.; Ruda, K.; Sommer, F.T. A probabilistic latent variable model for detecting structure in binary data. arXiv 2022, arXiv:2201.11108. [Google Scholar]
  118. Victor, J.D.; Purpura, K.P. Nature and precision of temporal coding in visual cortex: A metric-space analysis. J. Neurophysiol. 1996, 76, 1310–1326. [Google Scholar] [CrossRef]
  119. van Rossum, M.C. A novel spike distance. Neural Comput. 2001, 13, 751–763. [Google Scholar] [CrossRef] [Green Version]
  120. Kreuz, T.; Haas, J.S.; Morelli, A.; Abarbanel, H.D.I.; Politi, A. Measuring spike train synchrony. J. Neurosci. Methods 2007, 165, 151–161. [Google Scholar] [CrossRef] [Green Version]
  121. Moser, B.A.; Natschlager, T. On Stability of Distance Measures for Event Sequences Induced by Level-Crossing Sampling. IEEE Trans. Signal Process. 2014, 62, 1987–1999. [Google Scholar] [CrossRef]
  122. Weyl, H. Ueber die Gleichverteilung von Zahlen mod. Eins. Math. Ann. 1916, 77, 313–352. [Google Scholar] [CrossRef] [Green Version]
  123. Aronov, D.; Victor, J.D. Non-Euclidean properties of spike train metric spaces. Phys. Rev. E 2004, 69, 061905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Levakova, M.; Tamborrino, M.; Ditlevsen, S.; Lansky, P. A review of the methods for neuronal response latency estimation. Biosystems 2015, 136, 23–34. [Google Scholar] [CrossRef] [PubMed]
  125. Grün, S.; Diesmann, M.; Aertsen, A. Unitary Events in Multiple Single-Neuron Spiking Activity: II. Nonstationary Data. Neural Comput. 2002, 14, 81–119. [Google Scholar] [CrossRef] [PubMed]
  126. Grün, S.; Diesmann, M.; Aertsen, A. Unitary Event Analysis. In Analysis of Parallel Spike Trains; Springer: Boston, MA, USA, 2010; pp. 191–220. [Google Scholar]
  127. Grün, S.; Diesmann, M.; Aertsen, A. Unitary Events in Multiple Single-Neuron Spiking Activity: I. Detection and Significance. Neural Comput. 2002, 14, 43–80. [Google Scholar] [CrossRef]
  128. Torre, E.; Picado-Muiño, D.; Denker, M.; Borgelt, C.; Grün, S. Statistical evaluation of synchronous spike patterns extracted by frequent item set mining. Front. Comput. Neurosci. 2013, 7, 132. [Google Scholar] [CrossRef] [Green Version]
  129. Quaglio, P.; Rostami, V.; Torre, E.; Grün, S. Methods for identification of spike patterns in massively parallel spike trains. Biol. Cybern. 2018, 112, 57–80. [Google Scholar] [CrossRef] [Green Version]
  130. Stella, A.; Quaglio, P.; Torre, E.; Grün, S. 3d-SPADE: Significance evaluation of spatio-temporal patterns of various temporal extents. Biosystems 2019, 185, 104022. [Google Scholar] [CrossRef]
  131. Stella, A.; Bouss, P.; Palm, G.; Grün, S. Comparing Surrogates to Evaluate Precisely Timed Higher-Order Spike Correlations. eNeuro 2022, 9, ENEURO.0505–21.2022. [Google Scholar] [CrossRef]
  132. Grossberger, L.; Battaglia, F.P.; Vinck, M. Unsupervised clustering of temporal patterns in high-dimensional neuronal ensembles using a novel dissimilarity measure. PLoS Comput. Biol. 2018, 14, e1006283. [Google Scholar] [CrossRef]
  133. Nádasdy, Z.; Hirase, H.; Czurkó, A.; Csicsvari, J.; Buzsáki, G. Replay and Time Compression of Recurring Spike Sequences in the Hippocampus. J. Neurosci. 1999, 19, 9497–9507. [Google Scholar] [CrossRef] [Green Version]
  134. Lee, A.K.; Wilson, M.A. A Combinatorial Method for Analyzing Sequential Firing Patterns Involving an Arbitrary Number of Neurons Based on Relative Time Order. J. Neurophysiol. 2004, 92, 2555–2573. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Sotomayor-Gómez, B.; Battaglia, F.P.; Vinck, M. SpikeShip: A method for fast, unsupervised discovery of high-dimensional neural spiking patterns. bioRxiv Prepr. Serv. Biol. 2021. [Google Scholar] [CrossRef]
  136. Pachitariu, M.; Stringer, C.; Harris, K.D. Robustness of Spike Deconvolution for Neuronal Calcium Imaging. J. Neurosci. 2018, 38, 7976–7985. [Google Scholar] [CrossRef] [Green Version]
  137. Stringer, C. MouseLand/Rastermap: A Multi-Dimensional Embedding Algorithm. 2020. Available online: https://github.com/MouseLand/rastermap (accessed on 25 December 2022).
  138. Stringer, C.; Pachitariu, M.; Steinmetz, N.; Reddy, C.B.; Carandini, M.; Harris, K.D. Spontaneous behaviors drive multidimensional, brainwide activity. Science 2019, 364, 255. [Google Scholar] [CrossRef] [PubMed]
  139. Stringer, C.; Michaelos, M.; Tsyboulski, D.; Lindo, S.E.; Pachitariu, M. High-precision coding in visual cortex. Cell 2021, 184, 2767–2778.e15. [Google Scholar] [CrossRef] [PubMed]
  140. Russo, E.; Durstewitz, D. Cell assemblies at multiple time scales with arbitrary lag constellations. eLife 2017, 6, e19428. [Google Scholar] [CrossRef]
  141. Pipa, G.; Wheeler, D.W.; Singer, W.; Nikolić, D. NeuroXidence: Reliable and efficient analysis of an excess or deficiency of joint-spike events. J. Comput. Neurosci. 2008, 25, 64–88. [Google Scholar] [CrossRef] [Green Version]
  142. Torre, E.; Quaglio, P.; Denker, M.; Brochier, T.; Riehle, A.; Grun, S. Synchronous Spike Patterns in Macaque Motor Cortex during an Instructed-Delay Reach-to-Grasp Task. J. Neurosci. 2016, 36, 8329–8340. [Google Scholar] [CrossRef] [Green Version]
  143. Williams, A.H.; Degleris, A.; Wang, Y.; Linderman, S.W. Point process models for sequence detection in high-dimensional neural spike trains. arXiv 2020, arXiv:2010.04875. [Google Scholar]
  144. Kass, R.E.; Ventura, V.; Brown, E.N. Statistical issues in the analysis of neuronal data. J. Neurophysiol. 2005, 94, 8–25. [Google Scholar] [CrossRef]
  145. van Kempen, J.; Gieselmann, M.A.; Boyd, M.; Steinmetz, N.A.; Moore, T.; Engel, T.A.; Thiele, A. Top-down coordination of local cortical state during selective attention. Neuron 2021, 109, 894–904.e8. [Google Scholar] [CrossRef] [PubMed]
  146. Pasturel, C.; Montagnini, A.; Perrinet, L.U. Humans adapt their anticipatory eye movements to the volatility of visual motion properties. PLoS Comput. Biol. 2020, 16, e1007438. [Google Scholar] [CrossRef] [PubMed] [Green Version]
147. von Helmholtz, H. Messungen über den zeitlichen Verlauf der Zuckung animalischer Muskeln und die Fortpflanzungsgeschwindigkeit der Reizung in den Nerven. Arch. Anat. Physiol. Wiss. Med. 1850, 17, 176–364.
148. Peyrard, M. How is information transmitted in a nerve? J. Biol. Phys. 2020, 46, 327–341.
149. Young, J.Z. The Functioning of the Giant Nerve Fibres of the Squid. J. Exp. Biol. 1938, 15, 170–185.
150. Madadi Asl, M.; Valizadeh, A.; Tass, P.A. Dendritic and Axonal Propagation Delays May Shape Neuronal Networks With Plastic Synapses. Front. Physiol. 2018, 9, 1849.
151. Stetson, D.S.; Albers, J.W.; Silverstein, B.A.; Wolfe, R.A. Effects of age, sex, and anthropometric factors on nerve conduction measures. Muscle Nerve 1992, 15, 1095–1104.
152. Jeffress, L.A. A place theory of sound localization. J. Comp. Physiol. Psychol. 1948, 41, 35–39.
153. Konishi, M. Coding of auditory space. Annu. Rev. Neurosci. 2003, 26, 31–55.
154. Gerstner, W.; Kempter, R.; van Hemmen, J.L.; Wagner, H. A neuronal learning rule for sub-millisecond temporal coding. Nature 1996, 383, 76–78.
155. Seidl, A.H.; Rubel, E.W.; Harris, D.M. Mechanisms for adjusting interaural time differences to achieve binaural coincidence detection. J. Neurosci. 2010, 30, 70–80.
156. Camon, J.; Hugues, S.; Erlandson, M.A.; Robbe, D.; Lagoun, S.; Marouane, E.; Bureau, I. The Timing of Sensory-Guided Behavioral Response is Represented in the Mouse Primary Somatosensory Cortex. Cereb. Cortex 2019, 29, 3034–3047.
157. Gasser, H.S.; Grundfest, H. Axon Diameters in Relation to the Spike Dimensions and the Conduction Velocity in Mammalian A Fibers. Am. J. Physiol.-Leg. Content 1939.
158. Brill, M.H.; Waxman, S.G.; Moore, J.W.; Joyner, R.W. Conduction velocity and spike configuration in myelinated fibres: Computed dependence on internode distance. J. Neurol. Neurosurg. Psychiatry 1977, 40, 769–774.
159. Pérez-Cerdá, F.; Sánchez-Gómez, M.V.; Matute, C. Pío del Río Hortega and the discovery of the oligodendrocytes. Front. Neuroanat. 2015, 9, 92.
160. Schmitt, F.O.; Bear, R.S. The Ultrastructure of the Nerve Axon Sheath. Biol. Rev. 1939, 14, 27–50.
161. Simons, M.; Nave, K.A. Oligodendrocytes: Myelination and Axonal Support. Cold Spring Harb. Perspect. Biol. 2016, 8, a020479.
162. Duncan, G.J.; Simkins, T.J.; Emery, B. Neuron-Oligodendrocyte Interactions in the Structure and Integrity of Axons. Front. Cell Dev. Biol. 2021, 9, 653101.
163. Fields, R.D. A new mechanism of nervous system plasticity: Activity-dependent myelination. Nat. Rev. Neurosci. 2015, 16, 756–767.
164. Fields, R.D.; Bukalo, O. Myelin makes memories. Nat. Neurosci. 2020, 23, 469–470.
165. Reynolds, F.E.; Slater, J.K. A Study of the Structure and Function of the Interstitial Tissue of the Central Nervous System. Edinb. Med. J. 1928, 35, 49–57.
166. Steadman, P.E.; Xia, F.; Ahmed, M.; Mocle, A.J.; Penning, A.R.; Geraghty, A.C.; Steenland, H.W.; Monje, M.; Josselyn, S.A.; Frankland, P.W. Disruption of Oligodendrogenesis Impairs Memory Consolidation in Adult Mice. Neuron 2020, 105, 150–164.e6.
167. Pan, S.; Mayoral, S.R.; Choi, H.S.; Chan, J.R.; Kheirbek, M.A. Preservation of a remote fear memory requires new myelin formation. Nat. Neurosci. 2020, 23, 487–499.
168. Wan, R.; Cheli, V.T.; Santiago-González, D.A.; Rosenblum, S.L.; Wan, Q.; Paez, P.M. Impaired Postnatal Myelination in a Conditional Knockout Mouse for the Ferritin Heavy Chain in Oligodendroglial Cells. J. Neurosci. 2020, 40, 7609–7624.
169. Xue, J.; Zhu, Y.; Liu, Z.; Lin, J.; Li, Y.; Li, Y.; Zhuo, Y. Demyelination of the Optic Nerve: An Underlying Factor in Glaucoma? Front. Aging Neurosci. 2021, 13, 701322.
170. Kuhn, S.; Gritti, L.; Crooks, D.; Dombrowski, Y. Oligodendrocytes in Development, Myelin Generation and Beyond. Cells 2019, 8, 1424.
171. Baraban, M.; Koudelka, S.; Lyons, D.A. Ca2+ activity signatures of myelin sheath formation and growth in vivo. Nat. Neurosci. 2018, 21, 19–23.
172. Nave, K.A.; Salzer, J.L. Axonal regulation of myelination by neuregulin 1. Curr. Opin. Neurobiol. 2006, 16, 492–500.
173. Cullen, C.L.; Pepper, R.E.; Clutterbuck, M.T.; Pitman, K.A.; Oorschot, V.; Auderset, L.; Tang, A.D.; Ramm, G.; Emery, B.; Rodger, J.; et al. Periaxonal and nodal plasticities modulate action potential conduction in the adult mouse brain. Cell Rep. 2021, 34, 108641.
174. Gibson, E.M.; Purger, D.; Mount, C.W.; Goldstein, A.K.; Lin, G.L.; Wood, L.S.; Inema, I.; Miller, S.E.; Bieri, G.; Zuchero, J.B.; et al. Neuronal Activity Promotes Oligodendrogenesis and Adaptive Myelination in the Mammalian Brain. Science 2014, 344, 1252304.
175. Spencer, M.J.; Meffin, H.; Burkitt, A.N.; Grayden, D.B. Compensation for Traveling Wave Delay Through Selection of Dendritic Delays Using Spike-Timing-Dependent Plasticity in a Model of the Auditory Brainstem. Front. Comput. Neurosci. 2018, 12, 36.
176. Mel, B.W.; Schiller, J.; Poirazi, P. Synaptic plasticity in dendrites: Complications and coping strategies. Curr. Opin. Neurobiol. 2017, 43, 177–186.
177. Golding, N.L.; Staff, N.P.; Spruston, N. Dendritic spikes as a mechanism for cooperative long-term potentiation. Nature 2002, 418, 326–331.
178. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Netw. 1997, 10, 1659–1671.
179. Neftci, E.O.; Mostafa, H.; Zenke, F. Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks. IEEE Signal Process. Mag. 2019, 36, 51–63.
180. Rueckauer, B.; Lungu, I.A.; Hu, Y.; Pfeiffer, M.; Liu, S.C. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification. Front. Neurosci. 2017, 11, 682.
181. Susi, G.; Antón-Toro, L.F.; Maestú, F.; Pereda, E.; Mirasso, C. nMNSD-A Spiking Neuron-Based Classifier That Combines Weight-Adjustment and Delay-Shift. Front. Neurosci. 2021, 15, 582608.
182. Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99.
183. Lazar, A.A. Time encoding with an integrate-and-fire neuron with a refractory period. Neurocomputing 2004, 58–60, 53–58.
184. Markram, H.; Lübke, J.; Frotscher, M.; Sakmann, B. Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs. Science 1997, 275, 213–215.
185. Caporale, N.; Dan, Y. Spike Timing–Dependent Plasticity: A Hebbian Learning Rule. Annu. Rev. Neurosci. 2008, 31, 25–46.
186. Hüning, H.; Glünder, H.; Palm, G. Synaptic Delay Learning in Pulse-Coupled Neurons. Neural Comput. 1998, 10, 555–565.
187. Eurich, C.W.; Pawelzik, K.; Ernst, U.; Cowan, J.D.; Milton, J.G. Dynamics of Self-Organized Delay Adaptation. Phys. Rev. Lett. 1999, 82, 1594–1597.
188. Gütig, R.; Sompolinsky, H. The tempotron: A neuron that learns spike timing–based decisions. Nat. Neurosci. 2006, 9, 420–428.
189. Gütig, R. To spike, or when to spike? Curr. Opin. Neurobiol. 2014, 25, 134–139.
190. Pauli, R.; Weidel, P.; Kunkel, S.; Morrison, A. Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models. Front. Neuroinform. 2018, 12, 46.
191. Guise, M.; Knott, A.; Benuskova, L. A Bayesian Model of Polychronicity. Neural Comput. 2014, 26, 2052–2073.
192. Zhang, M.; Wu, J.; Belatreche, A.; Pan, Z.; Xie, X.; Chua, Y.; Li, G.; Qu, H.; Li, H. Supervised learning in spiking neural networks with synaptic delay-weight plasticity. Neurocomputing 2020, 409, 103–118.
193. Ghosh, D.; Frasca, M.; Rizzo, A.; Majhi, S.; Rakshit, S.; Alfaro-Bittner, K.; Boccaletti, S. Synchronization in time-varying networks. arXiv 2021, arXiv:2109.07618.
194. Ghosh, D.; Frasca, M.; Rizzo, A.; Majhi, S.; Rakshit, S.; Alfaro-Bittner, K.; Boccaletti, S. The synchronized dynamics of time-varying networks. Phys. Rep. 2022, 949, 1–63.
195. Izhikevich, E.M.; Hoppensteadt, F.C. Polychronous Wavefront Computations. Int. J. Bifurc. Chaos 2009, 19, 1733–1739.
196. Grimaldi, A.; Perrinet, L.U. Learning hetero-synaptic delays for motion detection in a single layer of spiking neurons. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3591–3595.
197. Madadi Asl, M.; Ramezani Akbarabadi, S. Delay-dependent transitions of phase synchronization and coupling symmetry between neurons shaped by spike-timing-dependent plasticity. Cogn. Neurodyn. 2022, 1–14.
198. Perrinet, L.; Samuelides, M. Coherence detection in a spiking neuron via Hebbian learning. Neurocomputing 2002, 44–46, 133–139.
199. Perrinet, L.; Delorme, A.; Samuelides, M.; Thorpe, S. Networks of integrate-and-fire neurons using rank order coding A: How to implement spike time dependent Hebbian plasticity. Neurocomputing 2001, 38–40, 817–822.
200. Gilson, M. STDP in recurrent neuronal networks. Front. Comput. Neurosci. 2010, 4, 23.
201. Datadien, A.; Haselager, P.; Sprinkhuizen-Kuyper, I. The Right Delay—Detecting Specific Spike Patterns with STDP and Axonal Conduction Delays; Springer: Berlin/Heidelberg, Germany, 2011; pp. 90–99.
202. Kerr, R.R.; Burkitt, A.N.; Thomas, D.A.; Gilson, M.; Grayden, D.B. Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs. PLoS Comput. Biol. 2013, 9, e1002897.
203. Burkitt, A.N.; Hogendoorn, H. Predictive Visual Motion Extrapolation Emerges Spontaneously and without Supervision at Each Layer of a Hierarchical Neural Network with Spike-Timing-Dependent Plasticity. J. Neurosci. 2021, 41, 4428–4438.
204. Nadafian, A.; Ganjtabesh, M. Bio-plausible Unsupervised Delay Learning for Extracting Temporal Features in Spiking Neural Networks. arXiv 2020, arXiv:2011.09380.
205. Wang, X.; Lin, X.; Dang, X. A Delay Learning Algorithm Based on Spike Train Kernels for Spiking Neurons. Front. Neurosci. 2019, 13, 252.
206. Hazan, H.; Caby, S.; Earl, C.; Siegelmann, H.; Levin, M. Memory via Temporal Delays in weightless Spiking Neural Network. arXiv 2022, arXiv:2202.07132.
207. Luo, X.; Qu, H.; Wang, Y.; Yi, Z.; Zhang, J.; Zhang, M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–13.
208. Sun, H.; Sourina, O.; Huang, G.B. Learning polychronous neuronal groups using joint weight-delay spike-timing-dependent plasticity. Neural Comput. 2016, 28, 2181–2212.
209. Ghosh, R.; Gupta, A.; Silva, A.N.; Soares, A.; Thakor, N.V. Spatiotemporal filtering for event-based action recognition. arXiv 2019, arXiv:1903.07067.
210. Perrinet, L.U.; Adams, R.A.; Friston, K.J. Active inference, eye movements and oculomotor delays. Biol. Cybern. 2014, 108, 777–801.
211. Hogendoorn, H.; Burkitt, A.N. Predictive Coding with Neural Transmission Delays: A Real-Time Temporal Alignment Hypothesis. eNeuro 2019, 6, ENEURO.0412-18.2019.
212. Khoei, M.A.; Masson, G.S.; Perrinet, L.U. Motion-based prediction explains the role of tracking in motion extrapolation. J. Physiol.-Paris 2013, 107, 409–420.
213. Kaplan, B.A.; Lansner, A.; Masson, G.S.; Perrinet, L.U. Anisotropic connectivity implements motion-based prediction in a spiking neural network. Front. Comput. Neurosci. 2013, 7, 112.
214. Khoei, M.A.; Masson, G.S.; Perrinet, L.U. The Flash-Lag Effect as a Motion-Based Predictive Shift. PLoS Comput. Biol. 2017, 13, e1005068.
215. Javanshir, A.; Nguyen, T.T.; Mahmud, M.A.P.; Kouzani, A.Z. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput. 2022, 34, 1289–1328.
216. Marković, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for neuromorphic computing. Nat. Rev. Phys. 2020, 2, 499–510.
217. Rasetto, M.; Wan, Q.; Akolkar, H.; Shi, B.; Xiong, F.; Benosman, R. The Challenges Ahead for Bio-inspired Neuromorphic Event Processors: How Memristors Dynamic Properties Could Revolutionize Machine Learning. arXiv 2022, arXiv:2201.12673.
218. Diesmann, M.; Gewaltig, M.O. NEST: An Environment for Neural Systems Simulations. In GWDG-Bericht Nr. 58; Plesser, T., Macho, V., Eds.; 2003; p. 29. Available online: https://paper.idea.edu.cn/paper/85561255 (accessed on 25 December 2022).
219. Hazan, H.; Saunders, D.J.; Khan, H.; Patel, D.; Sanghavi, D.T.; Siegelmann, H.T.; Kozma, R. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python. Front. Neuroinform. 2018, 12, 89.
220. Stimberg, M.; Brette, R.; Goodman, D.F. Brian 2, an intuitive and efficient neural simulator. eLife 2019, 8, e47314.
221. Zenke, F.; Bohté, S.M.; Clopath, C.; Comşa, I.M.; Göltz, J.; Maass, W.; Masquelier, T.; Naud, R.; Neftci, E.O.; Petrovici, M.A.; et al. Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 2021, 109, 571–575.
222. Mead, C.; Ismail, M. Analog VLSI Implementation of Neural Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1989.
223. Bartolozzi, C.; Indiveri, G. Synaptic Dynamics in Analog VLSI. Neural Comput. 2007, 19, 2581–2603.
224. Schuman, C.D.; Potok, T.E.; Patton, R.M.; Birdwell, J.D.; Dean, M.E.; Rose, G.S.; Plank, J.S. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv 2017, arXiv:1705.06963.
225. Furber, S.B.; Lester, D.R.; Plana, L.A.; Garside, J.D.; Painkras, E.; Temple, S.; Brown, A.D. Overview of the SpiNNaker System Architecture. IEEE Trans. Comput. 2013, 62, 2454–2467.
226. Furber, S.; Bogdan, P. (Eds.) SpiNNaker: A Spiking Neural Network Architecture; Now Publishers: Norwell, MA, USA, 2020.
227. Merolla, P.A.; Arthur, J.V.; Alvarez-Icaza, R.; Cassidy, A.S.; Sawada, J.; Akopyan, F.; Jackson, B.L.; Imam, N.; Guo, C.; Nakamura, Y.; et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 2014, 345, 668–673.
228. Benjamin, B.V.; Gao, P.; McQuinn, E.; Choudhary, S.; Chandrasekaran, A.R.; Bussat, J.M.; Alvarez-Icaza, R.; Arthur, J.V.; Merolla, P.A.; Boahen, K. Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. Proc. IEEE 2014, 102, 699–716.
229. Neckar, A.; Fok, S.; Benjamin, B.V.; Stewart, T.C.; Oza, N.N.; Voelker, A.R.; Eliasmith, C.; Manohar, R.; Boahen, K. Braindrop: A mixed-signal neuromorphic architecture with a dynamical systems-based programming model. Proc. IEEE 2019, 107, 144–164.
230. Schemmel, J.; Brüderle, D.; Grübl, A.; Hock, M.; Meier, K.; Millner, S. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), Paris, France, 30 May–2 June 2010; pp. 1947–1950.
231. Markram, H.; Meier, K.; Lippert, T.; Grillner, S.; Frackowiak, R.; Dehaene, S.; Knoll, A.; Sompolinsky, H.; Verstreken, K.; DeFelipe, J.; et al. Introducing the Human Brain Project. Procedia Comput. Sci. 2011, 7, 39–42.
232. Farquhar, E.; Gordon, C.; Hasler, P. A Field Programmable Neural Array. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, Kos, Greece, 21–24 May 2006; pp. 4114–4117.
233. Liu, M.; Yu, H.; Wang, W. FPAA Based on Integration of CMOS and Nanojunction Devices for Neuromorphic Applications. In Nano-Net; Cheng, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 3, pp. 44–48.
234. Chan, V.; Liu, S.C.; van Schaik, A. AER EAR: A Matched Silicon Cochlea Pair With Address Event Representation Interface. IEEE Trans. Circuits Syst. I Regul. Pap. 2007, 54, 48–59.
235. Haessig, G.; Milde, M.B.; Aceituno, P.V.; Oubari, O.; Knight, J.C.; van Schaik, A.; Benosman, R.B.; Indiveri, G. Event-Based Computation for Touch Localization Based on Precise Spike Timing. Front. Neurosci. 2020, 14, 420.
236. Lagorce, X.; Orchard, G.; Galluppi, F.; Shi, B.E.; Benosman, R.B. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1346–1359.
237. Sironi, A.; Brambilla, M.; Bourdis, N.; Lagorce, X.; Benosman, R. HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1731–1740.
238. Maro, J.M.; Ieng, S.H.; Benosman, R. Event-Based Gesture Recognition With Dynamic Background Suppression Using Smartphone Computational Capabilities. Front. Neurosci. 2020, 14, 275.
239. Grimaldi, A.; Boutin, V.; Perrinet, L.; Ieng, S.H.; Benosman, R. A homeostatic gain control mechanism to improve event-driven object recognition. In Proceedings of the 2021 International Conference on Content-Based Multimedia Indexing (CBMI), Lille, France, 28–30 June 2021.
240. Grimaldi, A.; Boutin, V.; Ieng, S.H.; Benosman, R.; Perrinet, L.U. A robust event-driven approach to always-on object recognition. TechRxiv 2022.
241. Yu, C.; Gu, Z.; Li, D.; Wang, G.; Wang, A.; Li, E. STSC-SNN: Spatio-Temporal Synaptic Connection with Temporal Convolution and Attention for Spiking Neural Networks. arXiv 2022, arXiv:2210.05241.
242. Benosman, R.; Clercq, C.; Lagorce, X.; Ieng, S.-H.; Bartolozzi, C. Event-Based Visual Flow. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 407–417.
243. Clady, X.; Clercq, C.; Ieng, S.H.; Houseini, F.; Randazzo, M.; Natale, L.; Bartolozzi, C.; Benosman, R.B. Asynchronous visual event-based time-to-contact. Front. Neurosci. 2014, 8, 9.
244. Tschechne, S.; Sailer, R.; Neumann, H. Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input. In Proceedings of the Artificial Neural Networks in Pattern Recognition; El Gayar, N., Schwenker, F., Suen, C., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 171–182.
245. Hidalgo-Carrió, J.; Gehrig, D.; Scaramuzza, D. Learning Monocular Dense Depth from Events. arXiv 2020, arXiv:2010.08350.
246. Dardelet, L.; Benosman, R.; Ieng, S.H. An Event-by-Event Feature Detection and Tracking Invariant to Motion Direction and Velocity. TechRxiv 2021.
247. Stoffregen, T.; Gallego, G.; Drummond, T.; Kleeman, L.; Scaramuzza, D. Event-Based Motion Segmentation by Motion Compensation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7243–7252.
248. Kim, H.; Leutenegger, S.; Davison, A.J. Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera. In Proceedings of the Computer Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 349–364.
249. Hussain, S.; Basu, A.; Wang, M.; Hamilton, T.J. DELTRON: Neuromorphic architectures for delay based learning. In Proceedings of the 2012 IEEE Asia Pacific Conference on Circuits and Systems, Kaohsiung, Taiwan, 2–5 December 2012; pp. 304–307.
250. Wang, R.M.; Hamilton, T.J.; Tapson, J.C.; van Schaik, A. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front. Neurosci. 2015, 9, 180.
251. Wang, R.; Hamilton, T.J.; Tapson, J.; van Schaik, A. An FPGA design framework for large-scale spiking neural networks. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, VIC, Australia, 1–5 June 2014; pp. 457–460.
252. Pfeil, T.; Scherzer, A.C.; Schemmel, J.; Meier, K. Neuromorphic learning towards nano second precision. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–5.
253. Boerlin, M.; Denève, S. Spike-Based Population Coding and Working Memory. PLoS Comput. Biol. 2011, 7, e1001080.
254. Renner, A.; Sandamirskaya, Y.; Sommer, F.T.; Frady, E.P. Sparse Vector Binding on Spiking Neuromorphic Hardware Using Synaptic Delays. In Proceedings of the International Conference on Neuromorphic Systems; ACM Digital Library: Knoxville, TN, USA, 2022.
255. Dard, R.F.; Leprince, E.; Denis, J.; Rao Balappa, S.; Suchkov, D.; Boyce, R.; Lopez, C.; Giorgi-Kurz, M.; Szwagier, T.; Dumont, T.; et al. The rapid developmental rise of somatic inhibition disengages hippocampal dynamics from self-motion. eLife 2022, 11, e78116.
256. Coull, J.T.; Giersch, A. The distinction between temporal order and duration processing, and implications for schizophrenia. Nat. Rev. Psychol. 2022, 1, 257–271.
257. Panahi, M.R.; Abrevaya, G.; Gagnon-Audet, J.C.; Voleti, V.; Rish, I.; Dumas, G. Generative Models of Brain Dynamics—A review. arXiv 2021, arXiv:2112.12147.
258. Tolle, K.M.; Tansley, D.S.W.; Hey, A.J.G. The Fourth Paradigm: Data-Intensive Scientific Discovery [Point of View]. Proc. IEEE 2011, 99, 1334–1337.
Figure 1. Latency of the different processing steps along the human visual pathway. Though the visual system is highly inter-connected, one can follow the sequence of activations whenever an image (here, a yellow star) is flashed in front of the eyes. Different areas are schematically represented by ellipses, and arrows denote the fastest feed-forward activation, ordered with respect to their activation latency in ms. In order, the retina is activated first (20–40 ms), then the thalamus and the primary visual cortex (V1, 60–90 ms). This visual information projects to the temporal lobe to reach the infero-temporal area (IT, 150 ms) for object recognition. It then reaches the prefrontal cortex (PFC, 180 ms), which modulates decision making and engages the motor cortex (MC, 220 ms), which may mediate an action. This is eventually relayed through the spinal cord to trigger the finger muscles, with latencies of about 280–400 ms.
Figure 2. Reproducibility of the spiking response of a neuron. The timing of the spikes produced in response to the repetition of a step stimulus is less reproducible than that produced in response to a noisy stimulus. The stimulus current is shown over time for a step stimulus (top left) and for a noisy one (top right). Trial repetitions of a leaky integrate-and-fire (LIF) neuron driven by the stimulus in the upper row are shown as rasters (middle row); the membrane potential is represented by color (dark blue when hyperpolarized, light yellow when depolarized), and the response is quantified by the average firing rate across trials (lower row). While this seems paradoxical at first sight, it is a consequence of using the same frozen noise at each repetition, and it highlights the highly reproducible pattern of spikes emitted when the neuron is driven by a highly dynamic input. See this notebook for a replication of the results from [33] using a simple LIF model.
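To make the mechanism behind Figure 2 concrete, the following minimal sketch (our own illustration, not the authors' notebook; all parameter values are arbitrary) simulates a LIF neuron across trials with trial-specific intrinsic noise: a constant step input yields jittered spike times across trials, while a frozen-noise input pins them.

```python
import numpy as np

def lif_spike_times(current, dt=1e-4, tau=20e-3, v_thresh=1.0, noise=0.1, seed=None):
    """One trial of a leaky integrate-and-fire neuron; returns spike times in seconds."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for i, I in enumerate(current):
        # Euler step of tau * dv/dt = -v + I, plus trial-specific intrinsic noise
        v += dt / tau * (-v + I) + noise * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_thresh:
            spikes.append(i * dt)
            v = 0.0  # reset after each spike
    return np.array(spikes)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
step = 1.5 * (t > 0.1)  # constant supra-threshold step
# the same ("frozen") noisy input is replayed identically on every trial
frozen = step * (1.0 + np.random.default_rng(42).standard_normal(t.size))

# spike times drift apart across trials for the step, but align for the frozen noise
step_raster = [lif_spike_times(step, seed=k) for k in range(10)]
frozen_raster = [lif_spike_times(frozen, seed=k) for k in range(10)]
```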
Figure 3. Simulation of a synfire propagation using Brian. The model consists of 10 groups (arranged with the first group represented in the lowest row) of 100 neurons each. Five pulses with decreasing jitter are generated in the first group around times 10, 30, 50, 70 and 90 ms (with jitters given by a standard deviation which decreases linearly from 5 to 1 ms). Each pulse generates, after a certain processing delay, a pulse in the second group with a different jitter. While the first two pulses progressively vanish in the subsequent groups, from the third input pulse onward (with a jitter of 3 ms), the pulse is propagated to the following groups. This allows the propagation of synchronous activity along the chain of neural groups.
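For readers who want to reproduce a comparable chain, here is a condensed Brian 2 sketch in the spirit of Figure 3 (not the exact script behind the figure; connection weights, time constants and delays are indicative only).

```python
import numpy as np
from brian2 import NeuronGroup, Synapses, SpikeGeneratorGroup, SpikeMonitor, run, ms

n_groups, n_per = 10, 100
G = NeuronGroup(n_groups * n_per, 'dv/dt = -v / (10*ms) : 1',
                threshold='v > 1', reset='v = 0', method='exact')

# feed-forward, all-to-all connections from each group to the next
pre = np.repeat(np.arange((n_groups - 1) * n_per), n_per)
post = (pre // n_per + 1) * n_per + np.tile(np.arange(n_per), (n_groups - 1) * n_per)
S = Synapses(G, G, on_pre='v += 0.03')
S.connect(i=pre, j=post)
S.delay = 2 * ms  # "processing delay" between successive groups

# five input volleys with decreasing jitter injected into the first group
pulse_times, jitters = [10, 30, 50, 70, 90], [5, 4, 3, 2, 1]
times = np.concatenate([t0 + s * np.random.randn(n_per)
                        for t0, s in zip(pulse_times, jitters)])
indices = np.tile(np.arange(n_per), len(pulse_times))
inp = SpikeGeneratorGroup(n_per, indices, np.clip(times, 0, None) * ms)
S_in = Synapses(inp, G[:n_per], on_pre='v += 0.03')
S_in.connect()

M = SpikeMonitor(G)  # plot M.t vs. M.i to obtain a raster similar to Figure 3
run(150 * ms)
```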
Figure 4. An example of a precise temporal motif observed on two subsequent days. In this study by [109], an analysis of the calcium fluorescence (heatmap) of hippocampal CA1 neurons participating in run sequences in consecutive imaging sessions shows repetitions of precise spiking motifs on a time scale of the order of seconds. Cells were selected and ordered with respect to their activity in the first imaging session. The black line on top represents the speed of the mouse. Further analysis showed that more than half of the cells participating in run sequences on the first day were recruited again in run sequences on the next day. Modified from Figure 1-A of [109] under the CC-BY PNAS License.
Figure 5. Detecting motifs using SPADE. We used the SPADE algorithm [130], adapting their tutorial to the data generated in Figure 3. This allowed us to label the different precise spiking motifs, which are denoted by different colors; spikes belonging to the same motif share the same color.
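As an indication of how such an analysis is run in practice, the sketch below calls SPADE as implemented in the Elephant toolbox on a toy set of spike trains. Note that argument names (e.g., bin_size vs. binsize) and the exact keys of the returned patterns have changed across Elephant releases, so this should be checked against the installed version.

```python
import numpy as np
import quantities as pq
from neo.core import SpikeTrain
from elephant.spade import spade

# toy data: 20 neurons firing at random over one second of recording
rng = np.random.default_rng(0)
spiketrains = [SpikeTrain(np.sort(rng.uniform(0, 1, 30)) * pq.s, t_stop=1 * pq.s)
               for _ in range(20)]

results = spade(spiketrains,
                bin_size=5 * pq.ms,  # temporal resolution of the mining
                winlen=10,           # maximal pattern duration, in bins
                min_spikes=3,        # minimal number of spikes per pattern
                n_surr=100)          # surrogates for significance testing

# each detected pattern lists the participating neurons and their relative lags
for pattern in results['patterns']:
    print(pattern['neurons'], pattern['lags'])
```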
Figure 6. Core mechanism of polychrony detection [111]. (Left) In this example, three presynaptic neurons, denoted b, c, and d, are fully connected to two post-synaptic neurons, a and e, with delays of 1, 5, and 9 ms for a and 8, 5, and 1 ms for e, respectively. (Middle) If three synchronous pulses are emitted by the presynaptic neurons, the post-synaptic potentials reach a and e asynchronously because of the heterogeneous delays; their sum is not sufficient to reach the membrane threshold of either post-synaptic neuron, and no output spike is emitted. (Right) If the pulses are emitted from the presynaptic neurons such that, taking the delays into account, they reach the post-synaptic neuron a at the same time (here, at t = 10 ms), the post-synaptic potentials evoked by the three pre-synaptic neurons sum up, causing the voltage threshold to be crossed and an output spike to be emitted (red color), while none is emitted by post-synaptic neuron e.
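The arithmetic of Figure 6 can be checked in a few lines. The toy script below (our illustration, with an abstract coincidence-detection rule standing in for the full membrane dynamics) tests whether the delayed inputs converge within a narrow temporal window.

```python
import numpy as np

delays_to_a = {'b': 1.0, 'c': 5.0, 'd': 9.0}  # transmission delays, in ms
delays_to_e = {'b': 8.0, 'c': 5.0, 'd': 1.0}

def emits_spike(spike_times, delays, window=1.0, n_required=3):
    """Fire iff n_required post-synaptic potentials arrive within `window` ms."""
    arrivals = np.sort([spike_times[k] + delays[k] for k in delays])
    gaps = arrivals[n_required - 1:] - arrivals[:arrivals.size - n_required + 1]
    return bool(np.any(gaps <= window))

synchronous = {'b': 0.0, 'c': 0.0, 'd': 0.0}   # middle panel: one synchronous volley
polychronous = {'b': 9.0, 'c': 5.0, 'd': 1.0}  # right panel: converges on a at t = 10 ms

print(emits_spike(synchronous, delays_to_a), emits_spike(synchronous, delays_to_e))    # False False
print(emits_spike(polychronous, delays_to_a), emits_spike(polychronous, delays_to_e))  # True False
```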
Figure 7. Detecting event-based motifs using spiking neurons with heterogeneous delays. (a) Given a generic raster plot defined by a set of spikes occurring at specific neuron addresses and at specific times, one may consider that this information consists of the repeated occurrence of groups of precise spiking motifs. (b) The generative model is defined by this set of motifs (here, four of them), each defined by different weights at heterogeneous delays (red for excitatory, blue for inhibitory). (c) Generalizing the core polychrony detection model (see Figure 6), one can define a layer of neurons that detects the identity and timing of these spiking motifs [196]. (d) Knowing the results of this detection, one may, for illustration purposes, highlight them with different colors in the raster plot, showing that in this synthetic example, all spikes are now associated with a motif.
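In its simplest non-spiking reading, the detection step of panel (c) amounts to correlating the raster with each motif's spatio-temporal kernel and thresholding the result. The sketch below (a simplification of the model in [196], with hypothetical array shapes) makes this explicit.

```python
import numpy as np

def detect_motifs(raster, kernels, threshold):
    """raster: (n_neurons, n_bins) binary array;
    kernels: (n_motifs, n_neurons, n_delays) signed weights
    (positive for excitatory, negative for inhibitory).
    Returns a boolean (n_motifs, n_times) array of detections."""
    n_motifs, n_neurons, n_delays = kernels.shape
    n_times = raster.shape[1] - n_delays + 1
    evidence = np.zeros((n_motifs, n_times))
    for m in range(n_motifs):
        for t in range(n_times):
            # the weight at delay d multiplies the spikes occurring d bins after t
            evidence[m, t] = np.sum(kernels[m] * raster[:, t:t + n_delays])
    # a motif is reported, with its timing, wherever the evidence crosses threshold
    return evidence > threshold
```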
Figure 8. A miniature, event-based ATIS sensor. In contrast to a classical frame-based camera, for which a full, dense image representation is given at discrete, regularly spaced timings, the event-based camera provides events at microsecond resolution. These are sparse, as they represent luminance increments or decrements (ON and OFF events, respectively). Figure courtesy of Sio-Hoi Ieng (Sorbonne Université/UPMC, Institut de la Vision).
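Concretely, the output of such a sensor is commonly handled as an address-event stream, i.e., a list of (x, y, polarity, timestamp) tuples rather than dense frames. The snippet below shows one conventional NumPy representation (the field names are a common convention, not a fixed standard).

```python
import numpy as np

# one event = (pixel address, polarity, micro-second timestamp)
event_dtype = np.dtype([('x', np.uint16), ('y', np.uint16),
                        ('p', np.bool_),    # True for ON (increment), False for OFF (decrement)
                        ('t', np.uint64)])  # timestamp in microseconds

events = np.array([(120, 64, True, 1002),
                   (121, 64, False, 1010),
                   (119, 65, True, 1033)], dtype=event_dtype)

on_events = events[events['p']]  # the sparse, asynchronous ON-polarity stream
event_rate = events.size / (events['t'][-1] - events['t'][0]) * 1e6  # events per second
```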
Figure 9. Detecting visual motion in an event stream with heterogeneous delays. Extending the polychrony detection model to the spatial domain, Grimaldi and Perrinet [196] applied a supervised learning scheme to the detection of motion. The model's parameters are represented by different spatio-temporal kernels; we show three examples as pairs of rows, one targeting ON spikes and the other OFF spikes, with the first column representing the corresponding detected motion. When trained on a set of natural images, the model shows the emergence of localized, oriented kernels organized in a so-called push–pull organization, for which the weights to an ON spike are negatively proportional to those to an OFF cell [72]. The global weight decreases from the shortest delay (right, most recent information) to longer delays (left, less recent information).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
