Asset Details
From MNIST to ImageNet and back: benchmarking continual curriculum learning
Journal Article
by Japkowicz, Nathalie; Corizzo, Roberto; Vergari, Antonio; Pietron, Marcin; Faber, Kamil; Zurek, Dominik
in Artificial Intelligence / Benchmarks / Cognitive tasks / Computer Science / Control / Curricula / Datasets / Image quality / Machine Learning / Mechatronics / Natural Language Processing (NLP) / Performance evaluation / Robotics / Simulation and Modeling / Special Issue on Discovery Science 2022 / Task complexity
2024
Overview
Continual learning (CL) is one of the most promising trends in recent machine learning research. Its goal is to go beyond classical assumptions in machine learning and develop models and learning strategies that remain highly robust in dynamic environments. This goal is realized by designing strategies that simultaneously foster the incorporation of new knowledge while avoiding the forgetting of past knowledge. The landscape of CL research is fragmented into several evaluation protocols, comprising different learning tasks, datasets, and evaluation metrics. Additionally, the benchmarks adopted so far are still far from the complexity of real-world scenarios and are usually tailored to highlight capabilities specific to certain strategies. In such a landscape, it is hard to assess models and strategies clearly and objectively. In this work, we fill this gap for CL on image data by introducing two novel CL benchmarks that involve multiple heterogeneous tasks from six image datasets, with varying levels of complexity and quality. Our aim is to fairly evaluate current state-of-the-art CL strategies on a common ground that is closer to complex real-world scenarios. We additionally structure our benchmarks so that tasks are presented in increasing and decreasing order of complexity, according to a curriculum, in order to evaluate whether current CL models can exploit structure across tasks. We place particular emphasis on providing the CL community with a rigorous and reproducible evaluation protocol for measuring a model's ability to generalize and not to forget while learning. Furthermore, we provide an extensive experimental evaluation showing that popular CL strategies, when challenged with our proposed benchmarks, yield sub-par performance, exhibit high levels of forgetting, and show a limited ability to effectively leverage curriculum task ordering. We believe these results highlight the need for rigorous comparisons in future CL work and pave the way for designing new CL strategies able to deal with more complex scenarios.
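The evaluation protocol the abstract refers to hinges on tracking accuracy and forgetting across a sequence of tasks. As a minimal illustration, the sketch below computes the standard accuracy-matrix metrics from the CL literature (average final accuracy, forgetting, and backward transfer); it is an assumption-laden sketch, not necessarily the exact protocol used in the paper, and the names are illustrative only.

import numpy as np

def cl_metrics(R: np.ndarray) -> dict:
    # R[i, j] = test accuracy on task j after training on tasks 0..i.
    # Standard summary metrics from the CL literature; `cl_metrics`
    # is a hypothetical helper, not code from the paper.
    avg_acc = R[-1].mean()  # average accuracy after the final task
    # Forgetting: best accuracy ever reached on each earlier task
    # minus its accuracy at the end of training, averaged.
    forgetting = (R[:-1, :-1].max(axis=0) - R[-1, :-1]).mean()
    # Backward transfer: how training on later tasks changed
    # earlier-task accuracy (negative values indicate forgetting).
    bwt = (R[-1, :-1] - np.diag(R)[:-1]).mean()
    return {"avg_accuracy": avg_acc, "forgetting": forgetting, "bwt": bwt}

# Toy run with 3 tasks: earlier-task accuracy degrades over time.
R = np.array([[0.90, 0.10, 0.10],
              [0.70, 0.85, 0.15],
              [0.60, 0.75, 0.88]])
print(cl_metrics(R))  # avg_accuracy ~ 0.743, forgetting = 0.20, bwt = -0.20

In this toy matrix, forgetting of 0.20 and backward transfer of -0.20 quantify the accuracy lost on earlier tasks as training proceeds, which is the kind of degradation the proposed benchmarks are designed to expose.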
Publisher
Springer US; Springer Nature B.V.