Catalogue Search | MBRL
4,146 result(s) for "Spiking"
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification
by Rueckauer, Bodo; Lungu, Iulia-Alexandra; Pfeiffer, Michael
in artificial neural network, Biochips, Classification
2017
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. Using the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that, at the cost of an error-rate increase of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, particularly when deployed on power-efficient neuromorphic spiking neuron chips for use in embedded applications.
Journal Article
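The conversion idea summarized above rests on a simple correspondence: the firing rate of a non-leaky integrate-and-fire (IF) neuron approximates a ReLU activation. A minimal sketch of that correspondence, using the reset-by-subtraction scheme common in this conversion literature (the constants and loop are illustrative, not the authors' released code):

```python
def if_neuron_rate(weighted_input, t_steps=100, v_thresh=1.0):
    """Simulate a non-leaky integrate-and-fire neuron driven by a constant
    input and return its empirical firing rate, which approximates
    relu(weighted_input) for inputs in [0, 1)."""
    v = 0.0
    spikes = 0
    for _ in range(t_steps):
        v += weighted_input      # integrate the constant input current
        if v >= v_thresh:        # fire, then reset by subtraction
            v -= v_thresh
            spikes += 1
    return spikes / t_steps

# The spike rate tracks the analog ReLU output:
for x in [-0.5, 0.0, 0.25, 0.5, 0.9]:
    print(x, if_neuron_rate(x), max(x, 0.0))
```

The rate matches max(x, 0) up to a discretization error of 1/t_steps, which is also the intuition behind trading operations (simulation time) against classification error.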
A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks
by Auge, Daniel; Mueller, Etienne; Knoll, Alois
in Artificial Intelligence, Codes, Complex Systems
2021
Biologically inspired spiking neural networks are increasingly popular in the field of artificial intelligence due to their ability to solve complex problems while being power efficient. They do so by leveraging the timing of discrete spikes as the main information carrier. However, industrial applications are still lacking, partially because the question of how to encode incoming data into discrete spike events cannot be uniformly answered. In this paper, we summarise the signal encoding schemes presented in the literature and propose a uniform nomenclature to prevent the vague usage of ambiguous definitions. To that end, we survey both the theoretical foundations and the applications of the encoding schemes. This work provides a foundation in spiking signal encoding and gives an overview of different application-oriented implementations which utilise the schemes.
Journal Article
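Two of the encoding families such a survey covers, rate coding and time-to-first-spike (TTFS) coding, are easy to sketch. The linear TTFS mapping below is one common convention, chosen here for illustration; the survey's own nomenclature may slice these schemes differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, t_steps=50):
    """Rate coding: each timestep emits a spike with probability x."""
    return (rng.random(t_steps) < x).astype(int)

def ttfs_encode(x, t_steps=50):
    """Time-to-first-spike coding: larger x spikes earlier; a single
    spike at time round((1 - x) * (t_steps - 1))."""
    train = np.zeros(t_steps, dtype=int)
    train[int(round((1.0 - x) * (t_steps - 1)))] = 1
    return train

x = 0.8  # a normalized input intensity in [0, 1]
print(rate_encode(x).sum(), "spikes (rate code)")
print(np.argmax(ttfs_encode(x)), "= first-spike time (TTFS code)")
```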
Surrogate gradients for analog neuromorphic computing
by Cramer, Benjamin; Grübl, Andreas; Schemmel, Johannes
in Action Potentials - physiology, Algorithms, Benchmarks
2022
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
Journal Article
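Surrogate gradient learning replaces the non-differentiable spike with a smooth derivative in the backward pass only. A minimal PyTorch sketch, assuming the SuperSpike-style surrogate 1/(1 + |v|)^2 (one standard choice; the paper's in-the-loop hardware training is not reproduced here):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    derivative, 1 / (1 + |v|)^2, in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + v.abs()) ** 2

v = torch.randn(5, requires_grad=True)  # membrane potential minus threshold
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(spikes, v.grad)                   # nonzero gradients despite the step
```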
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition
by Chen, Yang; Khosla, Deepak; Cao, Yongqiang
in Analysis, Artificial Intelligence, Back propagation
2015
Deep-learning neural networks such as convolutional neural networks (CNNs) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural network (SNN)-based architectures have shown great potential for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into an SNN that enables mapping CNNs to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of an SNN, then trains the tailored CNN in the same way as one would train a conventional CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on the publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show object recognition accuracy similar to that of the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the chips being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that an SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.
Journal Article
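A hedged sketch of the "tailoring" step described above, under the assumption, common in this conversion lineage, that tailoring means average pooling instead of max pooling and bias-free convolutions; the function name and the exact set of constraints are illustrative, not the paper's:

```python
import torch.nn as nn

def tailor_for_snn(cnn: nn.Sequential) -> nn.Sequential:
    """Tailor a CNN for SNN conversion: swap max pooling for average
    pooling and drop convolution biases, so each layer maps onto
    integrate-and-fire units driven purely by weighted spike counts."""
    layers = []
    for layer in cnn:
        if isinstance(layer, nn.MaxPool2d):
            layers.append(nn.AvgPool2d(layer.kernel_size, layer.stride,
                                       layer.padding))
        elif isinstance(layer, nn.Conv2d):
            conv = nn.Conv2d(layer.in_channels, layer.out_channels,
                             layer.kernel_size, layer.stride,
                             layer.padding, bias=False)
            conv.weight.data.copy_(layer.weight.data)
            layers.append(conv)
        else:
            layers.append(layer)
    return nn.Sequential(*layers)

cnn = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2))
print(tailor_for_snn(cnn))  # AvgPool2d and bias-free Conv2d
```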
BS4NN: Binarized Spiking Neural Networks with Temporal Coding and Learning
by Kheradpisheh, Saeed Reza; Masquelier, Timothée; Mirsadeghi, Maryam
in Accuracy, Algorithms, Artificial Intelligence
2022
We recently proposed the S4NN algorithm, essentially an adaptation of backpropagation to multilayer spiking neural networks that use simple non-leaky integrate-and-fire neurons and a form of temporal coding known as time-to-first-spike coding. With this coding scheme, neurons fire at most once per stimulus, but the firing order carries information. Here, we introduce BS4NN, a modification of S4NN in which the synaptic weights are constrained to be binary (+1 or −1), in order to decrease memory (ideally, one bit per synapse) and computation footprints. This was done using two sets of weights: firstly, real-valued weights, updated by gradient descent, and used in the backward pass of backpropagation, and secondly, their signs, used in the forward pass. Similar strategies have been used to train (non-spiking) binarized neural networks. The main difference is that BS4NN operates in the time domain: spikes are propagated sequentially, and different neurons may reach their threshold at different times, which increases computational power. We validated BS4NN on two popular benchmarks, MNIST and Fashion-MNIST, and obtained reasonable accuracies for this sort of network (97.0% and 87.3% respectively) with a negligible accuracy drop with respect to real-valued weights (0.4% and 0.7%, respectively). We also demonstrated that BS4NN outperforms a simple BNN with the same architectures on those two datasets (by 0.2% and 0.9% respectively), presumably because it leverages the temporal dimension.
Journal Article
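The two-sets-of-weights trick described above is essentially the straight-through estimator used for binarized (non-spiking) networks: signs in the forward pass, real-valued weights in the backward pass. A minimal PyTorch sketch (the temporal-coding part of BS4NN is not shown):

```python
import torch

def binarize(w_real: torch.Tensor) -> torch.Tensor:
    """Forward pass uses sign(w); gradients flow through unchanged to
    the real-valued weights (straight-through estimator)."""
    w_bin = torch.sign(w_real)  # note: sign(0) = 0 in this sketch
    return w_real + (w_bin - w_real).detach()

w_real = torch.randn(4, 3, requires_grad=True)  # real weights, kept for SGD
x = torch.randn(3)
y = binarize(w_real) @ x                        # forward uses only +/-1 weights
y.sum().backward()
print(torch.sign(w_real), w_real.grad)          # grads update the real copy
```

Only the binary signs would need to be stored for inference, which is where the one-bit-per-synapse memory footprint comes from.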
Adaptive spiking neuron with population coding for a residual spiking neural network
by Sun, Changhao; Meng, Lin; Li, Hengyi
in Biological models (mathematics), Coding, Data processing
2025
Spiking neural networks (SNNs) have attracted significant research attention due to their inherent sparsity and event-driven processing capabilities. Recent studies indicate that the incorporation of convolutional and residual structures into SNNs can substantially enhance performance. However, these converted spiking residual structures are associated with increased complexity and stacked parameterized spiking neurons. To address this challenge, this paper proposes a meticulously refined two-layer decision structure for residual-based SNNs, consisting solely of fully connected and spiking neuron layers. Specifically, the spiking neuron layers incorporate an innovative dynamic leaky integrate-and-fire (DLIF) neuron model with a nonlinear self-feedback mechanism, characterized by dynamic threshold adjustment and a self-regulating firing rate. Furthermore, diverging from traditional direct encoding, which focuses solely on individual neuronal frequency, we introduce a novel mixed coding mechanism that combines direct encoding with multineuronal population decoding. The proposed architecture improves the adaptability and responsiveness of spiking neurons in various computational contexts. Experimental results demonstrate the superior efficacy of our approach. Although it uses a highly simplified structure with only 6 timesteps, our proposal achieves enhanced performance in the experimental trials compared to multiple state-of-the-art methods. Specifically, it achieves accuracy improvements of 0.01-1.99% on three static datasets and of 0.14-7.50% on three N-datasets. The DLIF model excels in information processing, showing twice the mutual information of other neurons. On the sequential MNIST dataset, it balances biological realism and practicality, enhancing memory and the dynamic range. Our proposed method not only offers improved computational efficacy and a simplified network structure but also enhances the biological plausibility of SNN models and can be easily adapted to other deep SNNs.
Journal Article
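The abstract does not give the DLIF equations, so the sketch below is only a generic adaptive leaky integrate-and-fire neuron illustrating the two mechanisms it names: dynamic threshold adjustment and a self-regulating firing rate. All constants and the update form are assumptions, not the paper's model.

```python
def dlif_step(v, theta, x, tau_v=0.9, tau_theta=0.95, theta_rest=1.0, beta=0.2):
    """One step of a leaky integrate-and-fire neuron with an adaptive
    threshold: each spike raises the threshold, which then decays back
    toward theta_rest, self-regulating the firing rate. This is a generic
    adaptive LIF, not the paper's exact DLIF equations."""
    v = tau_v * v + x                  # leaky membrane integration
    spike = 1.0 if v >= theta else 0.0
    v = v * (1.0 - spike)              # reset to zero on a spike
    theta = tau_theta * theta + (1.0 - tau_theta) * theta_rest + beta * spike
    return v, theta, spike

v, theta = 0.0, 1.0
for t in range(20):
    v, theta, s = dlif_step(v, theta, x=0.5)
    if s:
        print(f"t={t:2d}: spike, threshold now {theta:.2f}")
```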
Memristor-induced mode transitions and extreme multistability in a map-based neuron model
2023
With the advent of the discrete memristor, memristor effects in discrete maps have become an important subject of discussion. To this end, this paper constructs a memristor-based neuron model considering magnetic induction by combining an existing map-based neuron model and a discrete memristor with absolute-value memductance. Taking the coupling strength and initial state of the memristor as variables, complex behaviors induced by the introduced memristor are disclosed using numerical methods, including spiking-bursting, mode-transition, and hyperchaotic spiking behaviors. In particular, all of these behaviors depend strongly on the memristor's initial state, resulting in extreme multistability in the memristive neuron model. Therefore, this memristive neuron model can effectively imitate magnetic induction effects when complex mode-transition behaviors appear in the neuronal action potential. In addition, an FPGA-based hardware platform is developed to implement the memristive neuron model, and various spiking-bursting sequences are experimentally captured on it. The results show that when the biophysical memory effect is present, the memristive neuron model represents the firing activities of biological neurons better than the original map-based neuron model.
Journal Article
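A hedged sketch of the ingredients named in the abstract: a map-based neuron (a Rulkov-style map is used here as a stand-in) coupled to a discrete memristor with absolute-value memductance W(q) = |q|. The coupling form, all constants, and the slight leak on q (added to keep the iteration numerically tame) are illustrative assumptions, not the paper's model.

```python
import numpy as np

def memristive_neuron_map(n_steps=2000, k=0.1, q0=0.3,
                          alpha=4.1, mu=0.001, sigma=0.3):
    """Iterate a Rulkov-style map neuron with a memristive coupling term
    k * |q| * x standing in for magnetic induction. The memductance is
    W(q) = |q|; q integrates the membrane variable. Trajectories depend
    on the initial state q0, the handle behind the multistability the
    abstract reports."""
    x, y, q = 0.0, -2.8, q0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        i_mem = k * abs(q) * x                 # memristor-induced current
        x = alpha / (1.0 + x * x) + y + i_mem  # fast (membrane) variable
        y = y - mu * (x - sigma)               # slow (recovery) variable
        q = 0.9 * q + 0.1 * x                  # leaky memristor state
        xs[n] = x
    return xs

print(memristive_neuron_map()[-5:])  # tail of a spiking-bursting sequence
```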
Memristors—From In‐Memory Computing, Deep Learning Acceleration, and Spiking Neural Networks to the Future of Neuromorphic and Bio‐Inspired Computing
by Vasilaki, Eleni; Simeone, Osvaldo; Rajendran, Bipin
in Algorithms, Architecture, Artificial intelligence
2020
Machine learning, particularly in the form of deep learning (DL), has driven most of the recent fundamental developments in artificial intelligence (AI). DL is based on computational models that are, to a certain extent, bio‐inspired, as they rely on networks of connected simple computing units operating in parallel. The success of DL is supported by three factors: the availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequently modest improvements in computing power expected from scaling, raise the question of whether progress will slow or halt due to hardware limitations. This article reviews the case for a novel beyond‐complementary metal–oxide–semiconductor (CMOS) technology—memristors—as a potential solution for the implementation of power‐efficient in‐memory computing, DL accelerators, and spiking neural networks. Central themes are the reliance on non‐von‐Neumann computing architectures and the need for developing tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in AI, an example based on reservoir computing is briefly discussed. Finally, speculation is offered on the "big picture" view of future neuromorphic and brain‐inspired computing systems. Memristor technologies, with their remarkable diversity and richness of functional properties, can prove to be fundamental building blocks for the next generation of extraordinarily power‐efficient computing systems. Herein, it is discussed how memristors fit within the ever‐expanding field of hardware for artificial intelligence applications—from in‐memory computing, deep learning accelerators, and spiking neural networks to more bio‐inspired computing models.
Journal Article
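The in-memory computing argument reduces to Ohm's and Kirchhoff's laws: a memristor crossbar performs a matrix-vector multiply in one analog step. An idealized sketch that ignores wire resistance, device variability, and read noise:

```python
import numpy as np

def crossbar_mvm(G, v_in):
    """In-memory matrix-vector multiply on a memristor crossbar:
    conductances G (siemens) encode the weights, input voltages drive
    the rows, and Kirchhoff's current law sums I = G^T @ v on each
    column in a single analog step, with no weight movement between
    memory and compute."""
    return G.T @ v_in  # column currents, one per output

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # device conductances
v = rng.uniform(0.0, 0.2, size=128)          # read voltages per row
print(crossbar_mvm(G, v).shape)              # 64 output currents
```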
Simple framework for constructing functional spiking recurrent neural networks
by Kim, Robert; Sejnowski, Terrence J.; Li, Yinghao
in Action Potentials - physiology, Animals, Biological Sciences
2019
Cortical microcircuits exhibit complex recurrent architectures that possess dynamically rich properties. The neurons that make up these microcircuits communicate mainly via discrete spikes, and it is not clear how spikes give rise to dynamics that can be used to perform computationally challenging tasks. In contrast, continuous models of rate-coding neurons can be trained to perform complex tasks. Here, we present a simple framework to construct biologically realistic spiking recurrent neural networks (RNNs) capable of learning a wide range of tasks. Our framework involves training a continuous-variable rate RNN with important biophysical constraints and transferring the learned dynamics and constraints to a spiking RNN in a one-to-one manner. The proposed framework introduces only one additional parameter to establish the equivalence between rate and spiking RNN models. We also study other model parameters related to the rate and spiking networks to optimize the one-to-one mapping. By establishing a close relationship between rate and spiking models, we demonstrate that spiking RNNs can be constructed to achieve performance similar to that of their continuous-rate counterparts.
Journal Article
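A minimal sketch of the transfer idea above: a rate RNN and a LIF spiking RNN share the same recurrent weights, bridged by a single scaling parameter (called lam below). The cell constants and the random stand-in for trained weights are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, tau = 100, 1e-3, 20e-3
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # stand-in for trained weights
lam = 25.0  # the single parameter bridging the rate and spiking models

def rate_step(r, x):
    """Continuous rate RNN: tau * dr/dt = -r + tanh(W @ r + x)."""
    return r + (dt / tau) * (-r + np.tanh(W @ r + x))

def spiking_step(v, s, x, tau_s=5e-3, v_th=1.0):
    """LIF RNN driven by the same W scaled by lam; the synaptically
    filtered spike train s plays the role of the rate r."""
    v = v + (dt / tau) * (-v + lam * (W @ s) + x)
    spikes = (v >= v_th).astype(float)
    v = np.where(spikes > 0, 0.0, v)   # reset to 0 after a spike
    s = s + dt * (-s / tau_s) + spikes # exponential synaptic filter
    return v, s

r, v, s = np.zeros(N), np.zeros(N), np.zeros(N)
x = 0.5 * rng.normal(size=N)           # a fixed input pattern
for _ in range(200):
    r = rate_step(r, x)
    v, s = spiking_step(v, s, x)
print(r[:3], s[:3])                    # compare rates and filtered spikes
```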
SGLFormer: Spiking Global-Local-Fusion Transformer with high performance
2024
Spiking Neural Networks (SNNs), inspired by brain science, offer low energy consumption and high biological plausibility with their event-driven nature. However, current SNNs still suffer from insufficient performance.
Recognizing the brain's adeptness at processing information across varied scenarios, with complex neuronal connections within and across regions as well as specialized neuronal architectures for specific functions, we propose a Spiking Global-Local-Fusion Transformer (SGLFormer) that significantly improves the performance of SNNs. This novel architecture enables efficient information processing on both global and local scales by integrating transformer and convolution structures in SNNs. In addition, we uncover the problem of inaccurate gradient backpropagation caused by max-pooling in SNNs and address it by developing a new Maxpooling module. Furthermore, we adopt a spatio-temporal block (STB) in the classification head instead of global average pooling, facilitating the aggregation of spatial and temporal features.
SGLFormer demonstrates superior performance on static datasets such as CIFAR10/CIFAR100 and ImageNet, as well as on dynamic vision sensor (DVS) datasets including CIFAR10-DVS and DVS128-Gesture. Notably, on ImageNet, SGLFormer achieves a top-1 accuracy of 83.73% with 64 M parameters, outperforming the current SOTA directly trained SNNs by a margin of 6.66%.
With its high performance, SGLFormer can support more computer vision tasks in the future. The code for this study can be found at https://github.com/ZhangHanN1/SGLFormer.
Journal Article
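A hedged sketch of the role the abstract assigns to the spatio-temporal block (STB): aggregating over both the SNN's time dimension and the spatial tokens before the classifier, in place of plain global average pooling. The real SGLFormer module is more elaborate; mean pooling is used here only as a stand-in.

```python
import torch
import torch.nn as nn

class STBHead(nn.Module):
    """Classification head that pools over both the SNN time dimension
    and the spatial tokens before a linear classifier. A mean-pooling
    stand-in for the spatio-temporal block, not the paper's module."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, B, N, D) = timesteps, batch, spatial tokens, channels
        return self.fc(x.mean(dim=(0, 2)))  # pool time and space -> (B, classes)

logits = STBHead(dim=384, num_classes=1000)(torch.rand(4, 2, 196, 384))
print(logits.shape)  # torch.Size([2, 1000])
```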