Asset Details
Incremental Task Learning with Incremental Rank Updates
by Shao, Ken; Asif, M Salman; Hyder, Rakib; Markopoulos, Panos; Hou, Boyu; Prater-Bennette, Ashley
in Learning / Mathematical analysis / Matrices (mathematics) / Neural networks / Regularization / Training
2022
Paper
Overview
Incremental Task Learning (ITL) is a category of continual learning that seeks to train a single network for multiple tasks (one after another), where training data for each task is only available during the training of that task. Neural networks tend to forget older tasks when they are trained for newer tasks; this property is often known as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights for each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for the previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of accuracy and forgetting. Our method also offers better memory efficiency than episodic-memory- and mask-based approaches. Our code will be available at https://github.com/CSIPlab/task-increment-rank-update.git
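To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of the idea for a single fully connected layer: the layer's weight is a selector-weighted sum of rank-1 matrices, and each new task adds one trainable rank-1 factor plus one selector vector while earlier parameters stay frozen. This is not the authors' implementation (their code is linked above); the class and method names (LowRankIncrementalLinear, add_task, weight_for_task) and the initialization scale are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LowRankIncrementalLinear(nn.Module):
    """Linear layer whose weight is a selector-weighted sum of rank-1
    matrices, with one new factor and one new selector added per task.
    A sketch of the idea described in the abstract, not the paper's code."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.us = nn.ParameterList()         # left factors, shape (out, 1)
        self.vs = nn.ParameterList()         # right factors, shape (1, in)
        self.selectors = nn.ParameterList()  # one selector vector per task

    def add_task(self):
        # Freeze everything learned so far; only the new rank-1 factor and
        # the new selector vector are trainable for the incoming task.
        for p in self.parameters():
            p.requires_grad_(False)
        self.us.append(nn.Parameter(0.01 * torch.randn(self.out_features, 1)))
        self.vs.append(nn.Parameter(0.01 * torch.randn(1, self.in_features)))
        # The selector re-weights all rank-1 factors available at this task.
        self.selectors.append(nn.Parameter(torch.ones(len(self.us))))

    def weight_for_task(self, t):
        # W_t = sum_i s_t[i] * u_i v_i^T over the factors known at task t.
        s = self.selectors[t]
        return sum(s[i] * (self.us[i] @ self.vs[i]) for i in range(len(s)))

    def forward(self, x, task_id):
        return x @ self.weight_for_task(task_id).T
```

A short usage example under the same assumptions:

```python
layer = LowRankIncrementalLinear(784, 256)
layer.add_task()                         # task 0: first rank-1 weight
y0 = layer(torch.randn(8, 784), task_id=0)
layer.add_task()                         # task 1: adds one factor + selector
y1 = layer(torch.randn(8, 784), task_id=1)
```

Note that per task the stored state grows by only one rank-1 factor and one small selector vector per layer, which is the source of the memory-efficiency claim relative to episodic-memory (replay-buffer) and mask-based approaches.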
Publisher
Cornell University Library, arXiv.org
Subject
Learning / Mathematical analysis / Matrices (mathematics) / Neural networks / Regularization / Training