Asset Details
Offline Reinforcement Learning with On-Policy Q-Function Regularization
by Geist, Matthieu; Dadashi, Robert; Chi, Yuejie; Castro, Pablo Samuel; Shi, Laixi
in Algorithms / Extrapolation / Regularization
2023
Paper
Overview
The core challenge of offline reinforcement learning (RL) is dealing with the (potentially catastrophic) extrapolation error induced by the distribution shift between the history dataset and the desired policy. A large portion of prior work tackles this challenge by implicitly or explicitly regularizing the learning policy towards the behavior policy, which is hard to estimate reliably in practice. In this work, we propose to regularize towards the Q-function of the behavior policy instead of the behavior policy itself, under the premise that the Q-function can be estimated more reliably and easily by a SARSA-style estimate and handles the extrapolation error more straightforwardly. We propose two algorithms that take advantage of the estimated Q-function through regularization, and demonstrate that they exhibit strong performance on the D4RL benchmarks.
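
The SARSA-style estimate mentioned in the abstract is, in its simplest form, on-policy temporal-difference evaluation of the behavior policy from logged transitions. The sketch below is a minimal tabular illustration under stated assumptions, not the paper's implementation: the function name, hyperparameters, and toy dataset are invented here, and the paper itself works with function approximation on D4RL. Its key property is that the bootstrap target only queries state-action pairs that appear in the dataset, which is why it sidesteps extrapolation error.

import numpy as np

def sarsa_q_estimate(transitions, n_states, n_actions,
                     alpha=0.1, gamma=0.99, n_epochs=50):
    # Tabular SARSA-style evaluation of the behavior policy's
    # Q-function from logged (s, a, r, s', a', done) transitions.
    # Because both s' and a' come from the dataset, the bootstrap
    # target never evaluates out-of-distribution actions.
    q = np.zeros((n_states, n_actions))
    for _ in range(n_epochs):
        for s, a, r, s_next, a_next, done in transitions:
            target = r + (0.0 if done else gamma * q[s_next, a_next])
            q[s, a] += alpha * (target - q[s, a])
    return q

# Toy usage on a made-up 2-state, 2-action dataset logged by
# some behavior policy (hypothetical data, for illustration only).
data = [(0, 1, 1.0, 1, 0, False), (1, 0, 0.0, 0, 1, True)]
q_beta = sarsa_q_estimate(data, n_states=2, n_actions=2)

The paper's two proposed algorithms then use such an estimate as a regularization target during learning; the abstract does not specify the exact regularizers, so none is sketched here.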
Publisher
Cornell University Library, arXiv.org