Search Results

45 results for "Kasy, Maximilian"
Identification of and Correction for Publication Bias
Some empirical results are more likely to be published than others. Selective publication leads to biased estimates and distorted inference. We propose two approaches for identifying the conditional probability of publication as a function of a study’s results, the first based on systematic replication studies and the second on meta-studies. For known conditional publication probabilities, we propose bias-corrected estimators and confidence sets. We apply our methods to recent replication studies in experimental economics and psychology, and to a meta-study on the effect of the minimum wage. When replication and meta-study data are available, we find similar results from both.
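As a rough illustration of the correction step, the sketch below treats a published z-statistic as N(theta, 1) reweighted by a known publication rule p(z) and maximizes the resulting selection-corrected likelihood. The rule pub_prob and all numbers are hypothetical, and this is not necessarily the authors' exact estimator.

```python
# A rough sketch of selection-corrected estimation with a KNOWN
# publication rule (hypothetical numbers; not necessarily the authors'
# exact estimator). A published z-statistic is modeled as N(theta, 1),
# reweighted by the probability that a result with that value is published.
import numpy as np
from scipy import integrate, optimize, stats

def pub_prob(z):
    # Hypothetical selection rule: insignificant results (|z| < 1.96)
    # are published only 10% as often as significant ones.
    return np.where(np.abs(z) >= 1.96, 1.0, 0.1)

def neg_log_lik(theta, z_obs):
    # Density of Z conditional on publication:
    # f(z | theta) * p(z) / E_theta[p(Z)]
    denom, _ = integrate.quad(
        lambda t: stats.norm.pdf(t, loc=theta) * pub_prob(t),
        theta - 10, theta + 10,
    )
    dens = stats.norm.pdf(z_obs, loc=theta) * pub_prob(z_obs) / denom
    return -np.sum(np.log(dens))

def corrected_estimate(z_obs):
    res = optimize.minimize_scalar(
        neg_log_lik, args=(np.asarray(z_obs),), bounds=(-10, 10), method="bounded"
    )
    return res.x

# A marginally significant published estimate is corrected toward zero.
print(corrected_estimate([2.1]))
```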
Adaptive Treatment Assignment in Experiments for Policy Choice
Standard experimental designs are geared toward point estimation and hypothesis testing, while bandit algorithms are geared toward in-sample outcomes. Here, we instead consider treatment assignment in an experiment with several waves for choosing the best among a set of possible policies (treatments) at the end of the experiment. We propose a computationally tractable assignment algorithm that we call “exploration sampling,” where assignment probabilities in each wave are an increasing concave function of the posterior probabilities that each treatment is optimal. We prove an asymptotic optimality result for this algorithm and demonstrate improvements in welfare in calibrated simulations over both non-adaptive designs and bandit algorithms. An application to selecting between six different recruitment strategies for an agricultural extension service in India demonstrates practical feasibility.
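A minimal sketch of such an adaptive rule, with binary outcomes and Beta(1, 1) priors: next-wave shares reweight Thompson probabilities as p_k(1 - p_k), normalized to sum to one. Whether this matches the paper's exact functional form is an assumption of the sketch, and the interim data are made up.

```python
# A minimal sketch of adaptive assignment for policy choice with binary
# outcomes and Beta(1, 1) priors. Next-wave shares reweight Thompson
# probabilities as p_k * (1 - p_k); whether this matches the paper's
# exact functional form is an assumption of this sketch.
import numpy as np

def next_wave_shares(successes, trials, n_draws=10_000, seed=None):
    rng = np.random.default_rng(seed)
    k = len(trials)
    # Posterior draws of each arm's mean outcome.
    draws = rng.beta(1 + successes, 1 + trials - successes, size=(n_draws, k))
    # p_k: posterior probability that arm k is the best policy.
    p = np.bincount(draws.argmax(axis=1), minlength=k) / n_draws
    shares = p * (1 - p)
    return shares / shares.sum()

# Made-up interim data for three candidate policies.
successes = np.array([12, 18, 15])
trials = np.array([40, 40, 40])
print(next_wave_shares(successes, trials, seed=0))
```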
Of Forking Paths and Tied Hands
A key challenge for interpreting published empirical research is the fact that published findings might be selected by researchers or by journals. Selection might be based on criteria such as significance, consistency with theory, or the surprisingness of findings or their plausibility. Selection leads to biased estimates, reduced coverage of confidence intervals, and distorted posterior beliefs. I review methods for detecting and quantifying selection based on the distribution of p-values, systematic replication studies, and meta-studies. I then discuss the conflicting recommendations regarding selection resulting from alternative objectives, in particular, the validity of inference versus the relevance of findings for decision-makers. Based on this discussion, I consider various reform proposals, such as deemphasizing significance, pre-analysis plans, journals for null results and replication studies, and a functionally differentiated publication system. In conclusion, I argue that we need alternative foundations of statistics that go beyond the single-agent model of decision theory.
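As a toy illustration of the first detection idea (the distribution of p-values), the sketch below runs a simple "caliper" comparison of z-statistics just below versus just above the 1.96 cutoff. This is a generic diagnostic from this literature, not the article's specific procedure, and the window width is an arbitrary choice.

```python
# A generic "caliper" diagnostic for selection on significance: compare
# how many reported z-statistics fall just below vs. just above the 1.96
# cutoff. Window width is arbitrary; this is an illustration, not the
# article's specific procedure.
import numpy as np
from scipy import stats

def caliper_test(z_stats, cutoff=1.96, width=0.25):
    z = np.abs(np.asarray(z_stats))
    below = int(np.sum((z > cutoff - width) & (z <= cutoff)))
    above = int(np.sum((z > cutoff) & (z <= cutoff + width)))
    # Absent selection (and with a locally smooth distribution of
    # z-statistics), above ~ Binomial(above + below, 0.5); an excess
    # just above the cutoff suggests selective publication.
    p_value = stats.binomtest(above, above + below, p=0.5).pvalue
    return below, above, p_value

print(caliper_test([1.80, 1.99, 2.02, 2.05, 2.10, 1.75, 2.00]))
```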
Non-parametric inference on the number of equilibria
This paper proposes an estimator and develops an inference procedure for the number of roots of functions that are non-parametrically identified by conditional moment restrictions. It is shown that a smoothed plug-in estimator of the number of roots is superconsistent under i.i.d. asymptotics, but asymptotically normal under non-standard asymptotics. The smoothed estimator is furthermore asymptotically efficient relative to a simple plug-in estimator. The procedure proposed is used to construct confidence sets for the number of equilibria of static games of incomplete information and of stochastic difference equations. In an application to panel data on neighbourhood composition in the United States, no evidence of multiple equilibria is found.
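A toy version of the plug-in idea, leaving out the smoothing and the inference theory that are the paper's actual contribution: estimate the conditional mean by kernel (Nadaraya-Watson) regression and count sign changes of the fit on a grid. Bandwidth and data are illustrative.

```python
# A toy plug-in estimator of the number of roots of a conditional mean
# function g (not the paper's smoothed estimator): fit g by kernel
# regression, then count sign changes of the fit on a fine grid.
import numpy as np

def kernel_regression(x, y, grid, bandwidth):
    # Nadaraya-Watson estimator with a Gaussian kernel.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def estimated_num_roots(x, y, bandwidth=0.2, n_grid=500):
    grid = np.linspace(x.min(), x.max(), n_grid)
    g_hat = kernel_regression(x, y, grid, bandwidth)
    return int(np.sum(np.sign(g_hat[:-1]) != np.sign(g_hat[1:])))

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)
y = x**3 - x + rng.normal(scale=0.3, size=x.size)  # true g has 3 roots
print(estimated_num_roots(x, y))
```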
Partial identification, distributional preferences, and the welfare ranking of policies
We discuss the tension between "what we can get" (identification) and "what we want" (parameters of interest) in models of policy choice (treatment assignment). Our nonstandard empirical object of interest is the ranking of counterfactual policies. Partial identification of treatment effects maps into a partial welfare ranking of treatment assignment policies. We characterize the identified ranking and show how the identifiability of the ranking depends on identifying assumptions, the feasible policy set, and distributional preferences. An application to the Project STAR experiment illustrates this dependence. This paper connects the literatures on partial identification, robust statistics, and choice under Knightian uncertainty.
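To see how interval-identified effects induce only a partial ranking, consider a stylized example (all numbers hypothetical): each covariate group's effect lies in a known interval, a policy treats a subset of groups, and two policies are ranked only when the bounds on their welfare difference exclude zero.

```python
# A stylized example (hypothetical numbers) of how partially identified
# treatment effects yield a partial welfare ranking of policies. Group g
# has an effect in [lo[g], hi[g]]; welfare of a policy is the weighted
# sum of effects over the groups it treats.
import numpy as np

lo = np.array([-0.1, 0.2, 0.05])   # lower bounds on group effects
hi = np.array([0.3, 0.5, 0.10])    # upper bounds on group effects
weights = np.array([0.5, 0.3, 0.2])  # population shares

def welfare_diff_bounds(policy_a, policy_b):
    # Bounds on W(a) - W(b): groups treated under a but not b contribute
    # their effect bounds; groups treated under b but not a contribute
    # the negated bounds; shared groups cancel.
    d = weights * (policy_a.astype(int) - policy_b.astype(int))
    lower = np.sum(np.where(d > 0, d * lo, d * hi))
    upper = np.sum(np.where(d > 0, d * hi, d * lo))
    return lower, upper

a = np.array([True, True, False])
b = np.array([False, True, True])
lower, upper = welfare_diff_bounds(a, b)
print("a beats b" if lower > 0 else "b beats a" if upper < 0 else "not ranked")
```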
Choosing Among Regularized Estimators in Empirical Economics
Many settings in empirical economics involve estimation of a large number of parameters. In such settings, methods that combine regularized estimation and data-driven choices of regularization parameters are useful. We provide guidance to applied researchers on the choice between regularized estimators and data-driven selection of regularization parameters. We characterize the risk and relative performance of regularized estimators as a function of the data-generating process and show that data-driven choices of regularization parameters yield estimators with risk uniformly close to the risk attained under the optimal (unfeasible) choice of regularization parameters. We illustrate using examples from empirical economics.
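A minimal sketch of the data-driven idea in its simplest instance, the Gaussian means problem: choose a ridge-type shrinkage factor by minimizing Stein's unbiased risk estimate (SURE). The paper covers a much wider class of estimators and selection rules; this is only an illustration.

```python
# A minimal sketch of data-driven regularization in the Gaussian means
# problem (illustrative; the paper treats a wider class of estimators).
# The estimator theta_hat_i = x_i / (1 + lam) is tuned by minimizing
# SURE(lam) = -n*sigma2 + ||theta_hat - x||^2 + 2*sigma2*divergence.
import numpy as np

def sure_ridge(x, sigma2, lambdas):
    n = x.size
    risks = []
    for lam in lambdas:
        resid = (lam / (1 + lam)) ** 2 * np.sum(x**2)  # ||theta_hat - x||^2
        div = n / (1 + lam)                             # sum of d theta_hat_i / d x_i
        risks.append(-n * sigma2 + resid + 2 * sigma2 * div)
    best = lambdas[int(np.argmin(risks))]
    return best, x / (1 + best)

rng = np.random.default_rng(1)
theta = rng.normal(scale=0.5, size=1000)    # many small true effects
x = theta + rng.normal(size=theta.size)     # noisy estimates, sigma2 = 1
lam, theta_hat = sure_ridge(x, sigma2=1.0, lambdas=np.linspace(0.01, 10, 200))
print(lam)  # should land near the oracle shrinkage for this design
```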
Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead
Suppose that an experimenter has collected a sample as well as baseline information about the units in the sample. How should she allocate treatments to the units in this sample? We argue that the answer does not involve randomization if we think of experimental design as a statistical decision problem. If, for instance, the experimenter is interested in estimating the average treatment effect and evaluates an estimate in terms of the squared error, then she should minimize the expected mean squared error (MSE) through choice of a treatment assignment. We provide explicit expressions for the expected MSE that lead to easily implementable procedures for experimental design.
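One way to act on this logic deterministically (a crude proxy for the expected-MSE criterion, not the paper's exact procedure) is to search over candidate assignments and keep the one with the best covariate balance:

```python
# A crude deterministic-design sketch: among many candidate assignment
# vectors, keep the one minimizing covariate imbalance between treatment
# and control, used here as a proxy for the expected MSE the paper
# derives (not the paper's exact criterion).
import numpy as np

def best_assignment(X, n_candidates=10_000, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    best_d, best_imbalance = None, np.inf
    for _ in range(n_candidates):
        d = np.zeros(n, dtype=bool)
        d[rng.choice(n, size=n // 2, replace=False)] = True
        imbalance = np.sum((X[d].mean(axis=0) - X[~d].mean(axis=0)) ** 2)
        if imbalance < best_imbalance:
            best_d, best_imbalance = d, imbalance
    return best_d

X = np.random.default_rng(2).normal(size=(40, 3))  # baseline covariates
d_star = best_assignment(X, seed=3)                # deterministic given the data
```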
Adaptive targeted infectious disease testing
We show how to efficiently use costly testing resources in an epidemic, when testing outcomes can be used to make quarantine decisions. If the costs of false quarantine and false release exceed the cost of testing, the optimal myopic testing policy targets individuals with an intermediate likelihood of being infected. A high cost of false release means that testing is optimal for individuals with a low probability of infection, and a high cost of false quarantine means that testing is optimal for individuals with a high probability of infection. If individuals arrive over time, the policy-maker faces a dynamic trade-off: using tests for individuals for whom testing yields the maximum immediate benefit vs. spreading out testing capacity across the population to learn prevalence rates, thereby benefiting later individuals. We describe a simple policy that is nearly optimal from a dynamic perspective. We briefly discuss practical aspects of implementing our proposed policy, including imperfect testing technology, appropriate choice of prior, and non-stationarity of the prevalence rate.
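The myopic trade-off fits in a few lines. Assuming a perfect test, a known infection probability q, and hypothetical costs, testing is worthwhile exactly when it beats the cheaper of the two untested actions (quarantine everyone or release everyone), which happens at intermediate q:

```python
# A minimal sketch of the myopic logic (perfect test, hypothetical
# costs): quarantining an untested individual costs (1 - q) * c_quarantine
# in expectation (false quarantine), releasing costs q * c_release
# (false release). Test only when the test fee beats the cheaper action.
def should_test(q, c_quarantine, c_release, c_test):
    cost_without_test = min(q * c_release, (1 - q) * c_quarantine)
    return cost_without_test > c_test

# With c_release >> c_quarantine the testing region extends to low q;
# reversing the costs extends it to high q, as the abstract describes.
for q in (0.02, 0.3, 0.95):
    print(q, should_test(q, c_quarantine=1.0, c_release=5.0, c_test=0.2))
```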
Uniformity and the Delta Method
When are asymptotic approximations using the delta method uniformly valid? We provide sufficient conditions, as well as closely related necessary conditions, for uniform negligibility of the remainder of such approximations. These conditions are easily verified by empirical practitioners and make it possible to identify settings and parameter regions where pointwise asymptotic approximations perform poorly. Our framework allows for a unified and transparent discussion of uniformity issues in various sub-fields of statistics and econometrics. Our conditions involve uniform bounds on the remainder of a first-order approximation for the function of interest.
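For concreteness, one way to write the object these conditions control (notation illustrative, not necessarily the paper's):

```latex
% The first-order (delta-method) remainder for a function \phi,
% evaluated at an estimator \hat\theta_n of \theta:
\[
  R_n(\theta) = \sqrt{n}\,\bigl(\phi(\hat\theta_n) - \phi(\theta)\bigr)
              - \phi'(\theta)\,\sqrt{n}\,\bigl(\hat\theta_n - \theta\bigr).
\]
% Uniform validity requires this remainder to be negligible uniformly
% over the parameter space \Theta, not just pointwise:
\[
  \lim_{n \to \infty} \sup_{\theta \in \Theta}
  P_\theta\bigl(\lvert R_n(\theta) \rvert > \varepsilon\bigr) = 0
  \qquad \text{for every } \varepsilon > 0.
\]
```

Points where the derivative degenerates or fails to exist are the canonical failure cases; for phi(theta) = |theta|, for instance, the remainder is not uniformly negligible near theta = 0.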
How to Use Economic Theory to Improve Estimators
We propose to use economic theories to construct shrinkage estimators that perform well when the theories’ empirical implications are approximately correct but perform no worse than unrestricted estimators when the theories’ implications do not hold. We implement this construction in various settings, including labor demand and wage inequality, and estimation of consumer demand. We provide asymptotic and finite sample characterizations of the behavior of the proposed estimators. Our approach is an alternative to the use of theory as something to be tested or to be imposed on estimates. Our approach complements uses of theory for identification and extrapolation.