Asset Details
High-performance deep spiking neural networks with 0.3 spikes per neuron
by Bellec, Guillaume; Pantazi, Angeliki; Cherubini, Giovanni; Woźniak, Stanisław; Gerstner, Wulfram; Stanojevic, Ana
in 631/378/116 / 639/705/117 / Algorithms / Artificial neural networks / Biological effects / Classification / Coding / Datasets / Energy efficiency / Firing pattern / Hardware / Humanities and Social Sciences / Image classification / Latency / Membrane potential / multidisciplinary / Network latency / Neural coding / Neural networks / Performance enhancement / Science / Science (multidisciplinary) / Spiking / Training
2024
Journal Article
Overview
Communication by rare, binary spikes is a key factor in the energy efficiency of biological brains. However, biologically inspired spiking neural networks are harder to train than artificial neural networks. This is puzzling given that theoretical results provide exact mapping algorithms from artificial to spiking neural networks with time-to-first-spike coding. In this paper, we analyze in theory and simulation the learning dynamics of time-to-first-spike networks and identify a specific instance of the vanishing-or-exploding gradient problem. While two choices of spiking neural network mappings solve this problem at initialization, only the one with a constant slope of the neuron membrane potential at threshold guarantees the equivalence of the training trajectory between spiking and artificial neural networks with rectified linear units. For specific image classification architectures comprising feed-forward dense or convolutional layers, we demonstrate that deep spiking neural network models can be effectively trained from scratch on the MNIST and Fashion-MNIST datasets, or fine-tuned on large-scale datasets such as CIFAR10, CIFAR100 and PLACES365, to achieve the same performance as artificial neural networks, surpassing previous spiking neural networks. Our approach accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation. We also show that fine-tuning spiking neural networks with our robust gradient descent algorithm enables their optimization for hardware implementations with low latency and resilience to noise and quantization.
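The time-to-first-spike coding mentioned in the abstract can be illustrated with a minimal encoder. This is a hypothetical sketch, not the paper's exact mapping: the linear relation and the `t_max` window are illustrative assumptions, chosen only to show why the scheme is so sparse.

```python
def ttfs_encode(values, t_max=1.0):
    """Time-to-first-spike (TTFS) coding: larger inputs fire earlier.

    Hypothetical linear encoding for illustration only: an input of
    1.0 spikes at t = 0, an input of 0.0 spikes at t_max. Each neuron
    emits at most one spike, which is the source of the sparsity the
    paper exploits (fewer than 0.3 spikes per neuron on average).
    """
    clipped = [min(max(v, 0.0), 1.0) for v in values]
    return [t_max * (1.0 - v) for v in clipped]

# Stronger inputs spike sooner:
print(ttfs_encode([1.0, 0.5, 0.0]))  # -> [0.0, 0.5, 1.0]
```

Because every value is carried by a single spike time rather than a firing rate, information is transmitted with at most one spike per neuron per input.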
To address the challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using time-to-first-spike coding. They demonstrate enhanced performance and energy efficiency for neuromorphic hardware.