Asset Details
Paper
Benchmarking Estimators for Natural Experiments: A Novel Dataset and a Doubly Robust Algorithm
by Musco, Christopher; Witter, R Teal
in Algorithms / Benchmarks / Datasets / Estimators / Performance evaluation / Robustness
2024
Overview
Estimating the effect of treatments from natural experiments, where treatments are pre-assigned, is an important and well-studied problem. We introduce a novel natural experiment dataset obtained from an early childhood literacy nonprofit. Surprisingly, applying over 20 established estimators to the dataset produces inconsistent results in evaluating the nonprofit's efficacy. To address this, we create a benchmark to evaluate estimator accuracy using synthetic outcomes, whose design was guided by domain experts. The benchmark extensively explores performance as real-world conditions like sample size, treatment correlation, and propensity score accuracy vary. Based on our benchmark, we observe that the class of doubly robust treatment effect estimators, which are based on simple and intuitive regression adjustment, generally outperforms other, more complicated estimators by orders of magnitude. To better support our theoretical understanding of doubly robust estimators, we derive a closed-form expression for the variance of any such estimator that uses dataset splitting to obtain an unbiased estimate. This expression motivates the design of a new doubly robust estimator that uses a novel loss function when fitting functions for regression adjustment. We release the dataset and benchmark in a Python package; the package is built in a modular way to facilitate new datasets and estimators.
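To make the abstract's central idea concrete, below is a minimal sketch of a cross-fitted doubly robust (AIPW) estimator of the average treatment effect: regression adjustment plus an inverse-propensity correction, with dataset splitting so the nuisance models are never evaluated on their own training data. This is an illustration of the general technique under assumed generic scikit-learn nuisance models; it is not the authors' estimator, their novel loss function, or their package's API, and the function name aipw_ate is hypothetical.

```python
# Hedged sketch of a cross-fitted doubly robust (AIPW) ATE estimator.
# Illustrates the technique named in the abstract, not the paper's method.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, t, y, n_splits=2, seed=0):
    """Cross-fitted augmented inverse-propensity-weighted ATE estimate.

    X: covariates (n, d); t: binary treatment (n,); y: outcomes (n,).
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_splits)
    psi = np.empty(len(y))
    for k, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        # Nuisance models are fit on the other folds only (dataset
        # splitting, which the abstract ties to unbiased estimates).
        e = LogisticRegression(max_iter=1000).fit(X[train], t[train])
        m1 = LinearRegression().fit(X[train][t[train] == 1], y[train][t[train] == 1])
        m0 = LinearRegression().fit(X[train][t[train] == 0], y[train][t[train] == 0])
        e_hat = np.clip(e.predict_proba(X[test])[:, 1], 1e-3, 1 - 1e-3)
        m1_hat, m0_hat = m1.predict(X[test]), m0.predict(X[test])
        # Regression adjustment plus inverse-propensity correction terms.
        psi[test] = (m1_hat - m0_hat
                     + t[test] * (y[test] - m1_hat) / e_hat
                     - (1 - t[test]) * (y[test] - m0_hat) / (1 - e_hat))
    return psi.mean()
```

The "doubly robust" property is visible in the structure: the estimate remains consistent if either the propensity model or the outcome-regression models is well specified, because each correction term vanishes in expectation when its counterpart is accurate.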
Publisher
Cornell University Library, arXiv.org
Subject
Algorithms / Benchmarks / Datasets / Estimators / Performance evaluation / Robustness