Asset Details
Offline Reinforcement Learning as Anti-Exploration
by Rezaeifar, Shideh; Geist, Matthieu; Dadashi, Robert; Hussenot, Léonard; Bachem, Olivier; Pietquin, Olivier; Vieillard, Nino
in Datasets / Exploration / Learning / Locomotion / Optimal control / Regularization
2021
Overview
Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset. We connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
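The core idea in the overview, subtracting a prediction-based bonus from the reward so the policy stays on the dataset support, can be sketched in a few lines. This is a hedged illustration only: the paper uses a variational autoencoder's reconstruction error as the bonus, while the sketch below stands in a hypothetical proxy (distance of a state-action pair to the nearest dataset sample), and `anti_exploration_reward` and `alpha` are illustrative names, not the authors' implementation.

```python
import numpy as np

def anti_exploration_reward(reward, state_action, dataset, alpha=1.0):
    """Penalized reward r'(s, a) = r(s, a) - alpha * bonus(s, a).

    `bonus` should grow for (s, a) pairs far from the dataset support;
    the paper uses a VAE's prediction error, here approximated by the
    distance to the nearest dataset sample (illustrative proxy only).
    """
    dists = np.linalg.norm(dataset - state_action, axis=1)
    bonus = dists.min()
    return reward - alpha * bonus

# Toy example: a dataset of two (s, a) vectors.
data = np.array([[0.0, 0.0], [1.0, 1.0]])
in_support = anti_exploration_reward(1.0, np.array([0.0, 0.0]), data)
out_of_support = anti_exploration_reward(1.0, np.array([5.0, 5.0]), data)
assert in_support > out_of_support  # out-of-distribution actions are penalized
```

Note the sign flip relative to bonus-based exploration: an exploration agent would *add* the bonus to seek novel actions, whereas here it is subtracted so novel (unpredictable) actions are avoided.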
Publisher
Cornell University Library, arXiv.org