Asset Details
Bio-inspired, task-free continual learning through activity regularization
by Sorbaro, Martino; Lässig, Francesco; Grewe, Benjamin F; Aceituno, Pau Vilimelis
in Algorithms / Back propagation / Boundaries / Brain / Computational neuroscience / Computer vision / Deep learning / Feedback control / Machine learning / Regularization / Representations / Sparsity / Take-all disease
2023
Journal Article
Overview
The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feedforward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
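The winner-take-all sparsity mechanism mentioned in the abstract can be illustrated with a minimal NumPy sketch: for each sample, keep only the k largest hidden activations and zero the rest. This is an assumption-laden illustration of the general technique, not the paper's implementation; the function name and the choice of a fixed per-layer k are hypothetical, and the DFC credit-assignment and lateral recurrent connections it is combined with are not reproduced here.

```python
import numpy as np

def k_winner_take_all(activations, k):
    """Keep the k largest activations in each row; zero out the rest.

    A generic k-winner-take-all sparsifier: a sketch of the kind of
    selective sparsity the abstract describes, not the paper's code.
    """
    activations = np.asarray(activations, dtype=float)
    # Indices of the k largest entries in each row (unordered among themselves).
    top_k = np.argpartition(activations, -k, axis=-1)[..., -k:]
    # Build a 0/1 mask that is 1 only at the winning positions.
    mask = np.zeros_like(activations)
    np.put_along_axis(mask, top_k, 1.0, axis=-1)
    return activations * mask

# Example: a batch of two hidden-layer activation vectors, k = 2.
h = np.array([[0.1, 0.9, 0.3, 0.7],
              [0.5, 0.2, 0.8, 0.4]])
sparse_h = k_winner_take_all(h, k=2)
# Each row now retains only its two largest activations;
# non-overlapping winners across tasks are what reduce interference.
```

Because the mask depends on the activations themselves, different inputs (or different top-down contexts, in the paper's setting) recruit different subsets of units, which is what makes the representations sparse and largely non-overlapping.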
Publisher
Springer Nature B.V.