Catalogue Search | MBRL
235 result(s) for "spike coding"
Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems
by Fouda, Mohammed E.; Eltawil, Ahmed M.; Guo, Wenzhe
in burst coding; Classification; Compression
2021
Various hypotheses of information representation in the brain, referred to as neural codes, have been proposed to explain the information transmission between neurons. Neural coding plays an essential role in enabling brain-inspired spiking neural networks (SNNs) to perform different tasks. To search for the best coding scheme, we performed an extensive comparative study of the impact and performance of four important neural coding schemes, namely rate coding, time-to-first-spike (TTFS) coding, phase coding, and burst coding. The comparative study was carried out using a biological 2-layer SNN trained with an unsupervised spike-timing-dependent plasticity (STDP) algorithm. Various aspects of network performance were considered, including classification accuracy, processing latency, synaptic operations (SOPs), hardware implementation, network compression efficacy, input and synaptic noise resilience, and synaptic fault tolerance. The classification tasks on the Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets were applied in our study. For hardware implementation, area and power consumption were estimated for these coding schemes, and the network compression efficacy was analyzed using pruning and quantization techniques. Different types of input noise and noise variations in the datasets were considered and applied. Furthermore, the robustness of each coding scheme to the non-ideality-induced synaptic noise and faults in analog neuromorphic systems was studied and compared. Our results show that TTFS coding is the best choice for achieving the highest computational performance with very low hardware implementation overhead. TTFS coding requires 4x/7.5x lower processing latency and 3.5x/6.5x fewer SOPs than rate coding during the training/inference process. Phase coding is the most resilient scheme to input noise. Burst coding offers the highest network compression efficacy and the best overall robustness to hardware non-idealities for both training and inference processes. The study presented in this paper reveals the design space created by the choice of each coding scheme, allowing designers to frame each scheme in terms of its strengths and weaknesses given a design's constraints and considerations in neuromorphic systems.
Journal Article
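The schemes compared in this abstract differ mainly in how a scalar input intensity is mapped onto spikes. A minimal sketch of two of them, assuming a normalized intensity in [0, 1] and a fixed time window (function names and windowing are illustrative, not taken from the paper):

```python
def rate_encode(intensity, window=10):
    """Rate coding: the intensity sets how many of the `window`
    time steps carry a spike (spread roughly evenly)."""
    n_spikes = round(intensity * window)
    if n_spikes == 0:
        return [0] * window
    step = window / n_spikes
    train = [0] * window
    for i in range(n_spikes):
        train[int(i * step)] = 1
    return train

def ttfs_encode(intensity, window=10):
    """Time-to-first-spike coding: a stronger input fires earlier,
    and each neuron emits at most one spike."""
    if intensity <= 0:
        return [0] * window  # silent neuron
    train = [0] * window
    train[int((1.0 - intensity) * (window - 1))] = 1
    return train
```

The latency and SOP advantages reported for TTFS follow directly from this picture: a strong input is fully described by its single early spike, while rate coding must integrate over the whole window.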
SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron
by Mozafari, Milad; Ganjtabesh, Mohammad; Nowzari-Dalini, Abbas
in Artificial intelligence; Cognitive science; Computer science
2019
Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest, since SNNs are hardware-friendly and energy-efficient. Unlike their non-spiking counterparts, most of the existing SNN simulation frameworks are not efficient enough in practice for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source high-speed simulation framework based on PyTorch. This framework simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, but other rules could be implemented easily. Apart from the aforementioned properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and done entirely by PyTorch functions, which in turn enables just-in-time optimization for running on CPU, GPU, or multi-GPU platforms.
Journal Article
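SpykeTorch's two core constraints, at most one spike per neuron and rank-order encoding, can be sketched independently of PyTorch: each neuron fires exactly once, and only the order of firing (strongest input first) carries information. A hedged illustration (the function name is ours, not the library's API):

```python
def rank_order_encode(intensities):
    """Rank-order coding sketch: neuron i's single spike time is the
    rank of its input intensity, so the strongest input fires first."""
    order = sorted(range(len(intensities)), key=lambda i: -intensities[i])
    times = [0] * len(intensities)
    for rank, idx in enumerate(order):
        times[idx] = rank  # spike time = intensity rank
    return times
```

In the framework itself, such spike times index the time dimension of binary tensors, which is what lets convolution and pooling reduce to standard PyTorch tensor operations.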
Analyzing time-to-first-spike coding schemes: A theoretical approach
by Gautrais, Jacques; Thorpe, Simon; Bonilla, Lina
in Embedded systems; Firing pattern; Neural coding
2022
Spiking neural networks (SNNs) using time-to-first-spike (TTFS) codes, in which neurons fire at most once, are appealing for rapid and low-power processing. In this theoretical paper, we focus on information coding and decoding in those networks, and introduce a new unifying mathematical framework that allows the comparison of various coding schemes. In an early proposal, called rank-order coding (ROC), neurons are maximally activated when inputs arrive in the order of their synaptic weights, thanks to a shunting inhibition mechanism that progressively desensitizes the neurons as spikes arrive. In another proposal, called NoM coding, only the first N spikes of M input neurons are propagated, and these "first spike patterns" can be read out by downstream neurons with homogeneous weights and no desensitization: as a result, the exact order between the first spikes does not matter. This paper also introduces a third option, "Ranked-NoM" (R-NoM), which combines features from both the ROC and NoM coding schemes: only the first N input spikes are propagated, but their order is read out by downstream neurons thanks to inhomogeneous weights and linear desensitization. The unifying mathematical framework allows the three codes to be compared in terms of discriminability, which measures to what extent a neuron responds more strongly to its preferred input spike pattern than to random patterns. This discriminability turns out to be much higher for R-NoM than for the other codes, especially in the early phase of the responses. We also argue that R-NoM is much more hardware-friendly than the original ROC proposal, although NoM remains the easiest to implement in hardware because it only requires binary synapses.
Journal Article
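The NoM scheme described above is simple enough to state in a few lines: only the first N input spikes are propagated, their order is discarded, and a downstream neuron with homogeneous weights just counts how many of its preferred inputs appear in that set. A sketch under these assumptions (names are illustrative):

```python
def nom_first_n(spike_times, n):
    """NoM coding: keep only the indices of the n earliest input
    spikes; the relative order within this set is discarded."""
    return set(sorted(range(len(spike_times)),
                      key=lambda i: spike_times[i])[:n])

def nom_match(spike_times, preferred, n, threshold):
    """Homogeneous-weight readout: respond iff enough of the neuron's
    preferred inputs fall within the first-n spike set."""
    return len(nom_first_n(spike_times, n) & set(preferred)) >= threshold
```

R-NoM then restores order sensitivity by replacing the homogeneous weights with rank-dependent ones and linearly desensitizing the readout as spikes arrive.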
First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures
by Liu, Siying; Leung, Vincent C. H.; Dragotti, Pier Luigi
in Back propagation; Decision making; event-based data
2023
Spiking neural networks (SNNs) are well suited to processing asynchronous event-based data. Most existing SNNs use rate-coding schemes that focus on firing rate (FR) and thus generally ignore the spike timing in events. In contrast, methods based on temporal coding, particularly time-to-first-spike (TTFS) coding, can be accurate and efficient, but they are difficult to train. Currently, there is limited research on applying TTFS coding to real events, since traditional TTFS-based methods impose a one-spike constraint that is not realistic for event-based data. In this study, we present a novel decision-making strategy based on first-spike (FS) coding that encodes the FS timings of the output neurons to investigate the role of first-spike timing in classifying real-world event sequences with complex temporal structures. To achieve FS coding, we propose a novel surrogate gradient learning method for discrete spike trains. In the forward pass, output spikes are encoded into discrete times to generate FS times. During backpropagation, we develop an error assignment method that propagates error from FS times to spikes through a Gaussian window, and supervised learning for spikes is then implemented through a surrogate gradient approach. Additional strategies are introduced to facilitate the training of FS timings, such as adding empty sequences and employing different parameters for different layers. We make a comprehensive comparison between FS and FR coding in our experiments. Our results show that FS coding achieves accuracy comparable to FR coding while yielding superior energy efficiency and distinct neuronal dynamics on data sequences with very rich temporal structures. Additionally, a longer time delay before the first spike leads to higher accuracy, indicating that important information is encoded in the timing of the first spike.
Journal Article
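The decision rule at the heart of FS coding is that classification is settled by whichever output neuron fires first. A minimal sketch of that readout (the tie-breaking and the None return for silent layers are our assumptions, not details from the paper):

```python
def first_spike_decision(output_trains):
    """FS-coding readout sketch: the predicted class is the index of
    the output neuron whose first spike comes earliest.  A fully
    silent output layer yields None."""
    best_class, best_time = None, None
    for cls, train in enumerate(output_trains):
        t = next((i for i, s in enumerate(train) if s), None)
        if t is not None and (best_time is None or t < best_time):
            best_class, best_time = cls, t
    return best_class
```

Training then amounts to pushing the correct neuron's first spike earlier than the others', which is what the paper's Gaussian-window error assignment makes differentiable.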
Simplified spiking neural network architecture and STDP learning algorithm applied to image classification
by Rosado-Muñoz, Alfredo; Bataller-Mompeán, Manuel; Guerrero-Martínez, Juan F
in Biometrics; Complexity; Computation
2015
Spiking neural networks (SNN) have gained popularity in embedded applications such as robotics and computer vision. The main advantages of SNN are temporal plasticity, ease of use in neural interface circuits, and reduced computational complexity. SNN have been successfully used for image classification: they provide a model of the mammalian visual cortex and have been applied to image segmentation and pattern recognition. Different spiking neuron mathematical models exist, but their computational complexity makes them ill-suited for hardware implementation. In this paper, a novel, simplified and computationally efficient spike response model (SRM) neuron with spike-time-dependent plasticity (STDP) learning is presented. Frequency spike coding based on receptive fields is used for data representation; images are encoded by the network and processed in a manner similar to the primary layers of the visual cortex. The network output can be used as a primary feature extractor for further refined recognition or as a simple object classifier. Results show that the model can successfully learn and classify black-and-white images with added noise or partially obscured samples, with up to 20× computing speed-up at an equivalent classification ratio when compared to classic SRM neuron membrane models. The proposed solution combines spike encoding, network topology, neuron membrane model, and STDP learning.
Journal Article
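Frequency spike coding over receptive fields, as used in this paper, maps each input value onto a small population of neurons whose firing rates depend on how close the value lies to each neuron's preferred value. A sketch with Gaussian fields (the centers, width, and maximum rate are illustrative parameters, not the paper's):

```python
import math

def receptive_field_rates(x, centers, width=0.2, max_rate=100.0):
    """Population rate coding sketch: each encoding neuron fires at a
    rate given by a Gaussian tuning curve centered on its preferred
    value, so one input drives several coarsely tuned neurons."""
    return [max_rate * math.exp(-((x - c) ** 2) / (2 * width ** 2))
            for c in centers]
```

Each rate is then realized as a periodic spike train, so a single pixel is represented by the joint activity of a small population, mirroring the early visual pathway.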
Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain
2021
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess whether neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101, and DvsGesture along these lines, but focus our study on N-MNIST. First, we evaluate whether additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23% and 78.01% accuracy, respectively. These results are comparable to the state of the art, showing that an algorithm that works purely on spatial data can classify these datasets. Second, we compare N-MNIST and DvsGesture on two STDP algorithms: RD-STDP, which can classify only spatial data, and STDP-tempotron, which classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST; N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question of which dataset can evaluate the ability of SNNs to classify temporal data.
Journal Article
Online Detection of Multiple Stimulus Changes Based on Single Neuron Interspike Intervals
by Hildebrandt, K. Jannis; Koepcke, Lena; Kretzberg, Jutta
in Adaptation; Algorithms; burst detection
2019
Nervous systems need to detect stimulus changes based on their neuronal responses without using any additional information on the number, times, and types of stimulus changes. Here, two relatively simple, biologically realistic change point detection methods are compared with two common analysis methods. The four methods are applied to intra- and extracellularly recorded responses of a single cricket interneuron (AN2) to acoustic stimulation. Solely based on these recorded responses, the methods should detect an unknown number of different types of sound intensity in- and decreases shortly after their occurrences. For this task, the methods rely on calculating an adjusting interspike interval (ISI). Both simple methods try to separate responses to intensity in- or decreases from activity during constant stimulation. The Pure-ISI method performs this task based on the distribution of the ISI, while the ISI-Ratio method uses the ratio of the actual and previous ISI. These methods are compared to the frequently used Moving-Average method, which calculates the mean and standard deviation of the instantaneous spike rate in a moving interval. Additionally, a classification method provides the upper limit of the change point detection performance that can be expected for the cricket interneuron responses. The classification learns the statistical properties of the actual and previous ISI during stimulus changes and constant stimulation from a training data set. The main results are: (1) The Moving-Average method requires stable activity in a long interval to estimate the previous activity, which was not always given in our data set. (2) The Pure-ISI method can reliably detect stimulus intensity increases when the neuron bursts, but it fails to identify intensity decreases. (3) The ISI-Ratio method detects stimulus in- and decreases well if the spike train is not too noisy. (4) The classification method shows good performance for the detection of stimulus in- and decreases, but due to the statistical learning, this method tends to confuse responses to constant stimulation with responses triggered by a stimulus change. Our results suggest that stimulus change detection does not require computationally costly mechanisms. Simple nervous systems like the cricket's could effectively apply ISI-Ratios to solve this fundamental task.
Journal Article
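The ISI-Ratio method that the abstract singles out is computationally trivial: compare each interspike interval to the one before it. A sketch under assumed ratio thresholds (the values 2.0 and 0.5 are illustrative, not the paper's):

```python
def isi_ratio_changes(spike_times, up=2.0, down=0.5):
    """ISI-Ratio change detection sketch: a ratio of consecutive
    interspike intervals far above 1 flags an intensity decrease
    (firing slowed down); far below 1 flags an increase."""
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    changes = []
    for i in range(1, len(isis)):
        ratio = isis[i] / isis[i - 1]
        if ratio >= up:
            changes.append((spike_times[i + 1], "decrease"))
        elif ratio <= down:
            changes.append((spike_times[i + 1], "increase"))
    return changes
```

This locality is exactly why the method needs no long stable baseline, unlike the Moving-Average method criticized in result (1).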
A Spike Time-Dependent Online Learning Algorithm Derived From Biological Olfaction
2019
We have developed a spiking neural network (SNN) algorithm for signal restoration and identification based on principles extracted from the mammalian olfactory system and broadly applicable to input from arbitrary sensor arrays. For interpretability and development purposes, we here examine the properties of its initial feedforward projection. Like the full algorithm, this feedforward component is fully spike timing-based, and utilizes online learning based on local synaptic rules such as spike timing-dependent plasticity (STDP). Using an intermediate metric to assess the properties of this initial projection, the feedforward network exhibits high classification performance after few-shot learning without catastrophic forgetting, and includes an outcome to reflect classifier confidence. We demonstrate online learning performance using a publicly available machine olfaction dataset with challenges including relatively small training sets, variable stimulus concentrations, and 3 years of sensor drift.
Journal Article
Disruption of Early or Late Epochs of Auditory Cortical Activity Impairs Speech Discrimination in Mice
by Weible, Aldis P.; Wehr, Michael; O’Sullivan, Conor
in auditory cortex; Auditory discrimination; Behavior
2020
Speech evokes robust activity in auditory cortex, which contains information over a wide range of spatial and temporal scales. It remains unclear which components of these neural representations are causally involved in the perception and processing of speech sounds. Here we compared the relative importance of early and late speech-evoked activity for consonant discrimination. We trained mice to discriminate the initial consonants in spoken words, and then tested the effect of optogenetically suppressing different temporal windows of speech-evoked activity in auditory cortex. We found that both early and late suppression disrupted performance equivalently. These results suggest that mice are impaired at recognizing either type of disrupted representation because it differs from those learned in training.
Journal Article
Optimization of Spiking Neural Networks Based on Binary Streamed Rate Coding
2020
Spiking neural networks (SNN) increasingly attract attention for their similarity to the biological neural system. Hardware implementation of spiking neural networks, however, remains a great challenge due to their excessive complexity and circuit size. This work introduces a novel optimization method for a hardware-friendly SNN architecture based on a modified rate coding scheme called Binary Streamed Rate Coding (BSRC). BSRC combines the features of both rate and temporal coding. In addition, by employing a built-in randomizer, the BSRC SNN model provides higher accuracy and faster training. We also present SNN optimization methods, including structure optimization and weight quantization. Extensive evaluations with MNIST SNNs demonstrate that the structure optimization of SNN (81-30-20-10) provides a 183.19× reduction in hardware compared with SNN (784-800-10), while providing an accuracy of 95.25%, a small loss compared with the 98.89% and 98.93% reported in previous works. Our weight quantization reduces 32-bit weights to 4-bit integers, leading to a further hardware reduction of 4× with only 0.56% accuracy loss. Overall, the SNN model (81-30-20-10) optimized by our method shrinks the SNN's circuit area from 3089.49 mm² for SNN (784-800-10) to 4.04 mm², a reduction of 765×.
Journal Article
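The 32-bit to 4-bit weight quantization step mentioned above can be sketched as uniform symmetric quantization with one shared scale factor (the rounding scheme and shared-scale granularity are our assumptions, not details from the paper):

```python
def quantize_weights(weights, bits=4):
    """Map float weights to signed `bits`-bit integers plus a shared
    scale, so each stored weight costs only `bits` bits."""
    qmax = 2 ** (bits - 1) - 1  # 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [max(-qmax, min(qmax, round(w / scale))) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights for accuracy checks."""
    return [q * scale for q in quantized]
```

Storing 4-bit integers instead of 32-bit floats is where the reported 4× hardware reduction comes from; the 0.56% accuracy loss is the price of the rounding error visible after `dequantize`.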