Asset Details
Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits
by Athey, Susan; Zhan, Ruohan; Hadad, Vitor; Hirshberg, David A.
in Confidence intervals / Data collection / Estimators / Observational weighting / Statistical analysis / Variance / Weighting
2021
Paper
Overview
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments. However, policy evaluation is challenging if the target policy differs from the one used to collect data, and popular estimators, including doubly robust (DR) estimators, can be plagued by bias, excessive variance, or both. In particular, when the pattern of treatment assignment in the collected data looks little like the pattern generated by the policy to be evaluated, the importance weights used in DR estimators explode, leading to excessive variance. In this paper, we improve the DR estimator by adaptively weighting observations to control its variance. We show that a t-statistic based on our improved estimator is asymptotically normal under certain conditions, allowing us to form confidence intervals and test hypotheses. Using synthetic data and public benchmarks, we provide empirical evidence for our estimator's improved accuracy and inferential properties relative to existing alternatives.
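The variance problem the abstract describes can be made concrete with a minimal sketch of the doubly robust (DR) estimator. Everything below is illustrative and not from the paper: a synthetic two-arm bandit with no contexts, a logging policy that rarely plays the arm the target policy favors (so importance ratios are large), and an empirical per-arm mean standing in for the fitted outcome model. The paper's contribution, adaptive per-observation weights, is not implemented here; this shows only the baseline DR score whose variance those weights are designed to control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: two arms, contexts omitted for brevity.
T = 10_000
true_means = np.array([0.3, 0.6])   # E[Y | arm]

# Logging policy plays arm 1 rarely, while the target policy favors it,
# so the importance ratio pi/e for arm 1 is large (0.9 / 0.05 = 18).
e = np.array([0.95, 0.05])          # behavior (logging) propensities
pi = np.array([0.10, 0.90])         # target policy to evaluate

arms = rng.choice(2, size=T, p=e)
rewards = rng.normal(true_means[arms], 1.0)

# Outcome model q_hat: empirical per-arm mean (a stand-in assumption;
# in practice a model fitted on the adaptively collected data).
q_hat = np.array([rewards[arms == a].mean() for a in range(2)])

# Doubly robust score for observation t:
#   Gamma_t = sum_a pi(a) q_hat(a) + (pi(A_t) / e(A_t)) * (Y_t - q_hat(A_t))
direct = pi @ q_hat                 # direct-method term
ratio = pi[arms] / e[arms]          # importance weights (can explode)
scores = direct + ratio * (rewards - q_hat[arms])

dr_estimate = scores.mean()         # plain DR: uniform 1/T weights
true_value = pi @ true_means        # 0.57 in this synthetic setup
```

Because arm-1 observations carry a ratio of 18, the score variance is dominated by those few draws; the paper's estimator replaces the uniform 1/T averaging with adaptive weights chosen to damp exactly these high-variance terms while keeping the resulting t-statistic asymptotically normal.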
Publisher
Cornell University Library, arXiv.org