34 result(s) for "Menzel, Konrad"
BOOTSTRAP WITH CLUSTER-DEPENDENCE IN TWO OR MORE DIMENSIONS
We propose a bootstrap procedure for data that may exhibit cluster-dependence in two or more dimensions. The asymptotic distribution of the sample mean or other statistics may be non-Gaussian if observations are dependent but uncorrelated within clusters. We show that there exists no procedure for estimating the limiting distribution of the sample mean under two-way clustering that achieves uniform consistency. However, we propose bootstrap procedures that achieve adaptivity with respect to different uniformity criteria. Important cases and extensions discussed in the paper include regression inference, U- and V-statistics, subgraph counts for network data, and non-exhaustive samples of matched data.
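A minimal sketch of the kind of resampling scheme the abstract describes: a pigeonhole-style two-way cluster bootstrap that resamples row and column clusters independently and recomputes the sample mean. The data-generating process, cluster counts, and the simple percentile interval are illustrative assumptions, not the paper's adaptive procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y[i, j] indexed by two clustering dimensions
# (e.g., i = market, j = product), dependent within rows and within columns.
R, C = 30, 25
row_fx, col_fx = rng.normal(size=R), rng.normal(size=C)
y = row_fx[:, None] + col_fx[None, :] + rng.normal(size=(R, C))

def twoway_bootstrap_means(y, draws=999, rng=rng):
    """Resample row clusters and column clusters independently with
    replacement and recompute the sample mean on the resampled array."""
    R, C = y.shape
    means = np.empty(draws)
    for b in range(draws):
        rows = rng.integers(0, R, size=R)
        cols = rng.integers(0, C, size=C)
        means[b] = y[np.ix_(rows, cols)].mean()
    return means

boot = twoway_bootstrap_means(y)
ci = np.quantile(boot, [0.025, 0.975])   # percentile interval for E[y]
print(f"mean = {y.mean():.3f}, 95% bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```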
LARGE MATCHING MARKETS AS TWO-SIDED DEMAND SYSTEMS
This paper studies two-sided matching markets with non-transferable utility when the number of market participants grows large. We consider a model in which each agent has a random preference ordering over individual potential matching partners, and agents' types are only partially observed by the econometrician. We show that in a large market, the inclusive value is a sufficient statistic for an agent's endogenous choice set with respect to the probability of being matched to a spouse of a given observable type. Furthermore, while the number of pairwise stable matchings for a typical realization of random utilities grows at a fast rate as the number of market participants increases, the inclusive values resulting from any stable matching converge to a unique deterministic limit. We can therefore characterize the limiting distribution of the matching market as the unique solution to a fixed-point condition on the inclusive values. Finally, we analyze identification and estimation of payoff parameters from the asymptotic distribution of observable characteristics at the level of pairs resulting from a stable matching.
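To make the fixed-point characterization concrete, here is a stylized discrete-type sketch in which each side's inclusive values aggregate the other side's payoff-weighted availability. The payoff matrices `u` and `v`, the type shares, and the exact form of the update map are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

# Stylized discrete-type setup (assumption for illustration): men of type x
# and women of type z, systematic payoffs u[x, z] (man's) and v[z, x] (woman's).
rng = np.random.default_rng(1)
nx, nz = 4, 5
u = rng.normal(size=(nx, nz))
v = rng.normal(size=(nz, nx))
wx = np.full(nx, 1.0 / nx)   # type shares on each side of the market
wz = np.full(nz, 1.0 / nz)

def solve_inclusive_values(u, v, wx, wz, tol=1e-12, max_iter=10_000):
    """Iterate a fixed-point map on inclusive values: each side's inclusive
    value aggregates exp(payoffs) of the other side, discounted by the other
    side's own inclusive value (a stylized stand-in for the paper's map)."""
    Hx = np.ones(u.shape[0])
    Hz = np.ones(v.shape[0])
    for _ in range(max_iter):
        Hx_new = (np.exp(u + v.T) / (1.0 + Hz)[None, :]) @ wz
        Hz_new = (np.exp(v + u.T) / (1.0 + Hx)[None, :]) @ wx
        if max(np.abs(Hx_new - Hx).max(), np.abs(Hz_new - Hz).max()) < tol:
            break
        Hx, Hz = Hx_new, Hz_new
    return Hx, Hz

Hx, Hz = solve_inclusive_values(u, v, wx, wz)
# Logit-style probability of remaining single implied by the inclusive values
print("P(man of type x stays single):", 1.0 / (1.0 + Hx))
```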
Inference for Games with Many Players
We develop an asymptotic theory for static discrete-action games with a large number of players, and propose a novel inference approach based on stochastic expansions around the limit of the finite-player game. Our analysis focuses on anonymous games in which payoffs are a function of the agent's own action and the empirical distribution of her opponents' play. We establish a law of large numbers and a central limit theorem, which can be used to show consistency of point or set estimators and asymptotic validity of inference on structural parameters as the number of players increases. The proposed methods, as well as the limit theory, are conditional on the realized equilibrium in the observed sample and therefore do not require any assumptions regarding selection among multiple equilibria.
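A toy simulation illustrating the law-of-large-numbers effect the abstract relies on: in an anonymous binary-action game, the equilibrium share of players choosing action 1 concentrates as the number of players grows. The logit payoff specification and the parameters `alpha` and `beta` are made-up assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def equilibrium_share(n, beta=1.5, alpha=-0.5, rng=rng):
    """Simulate one equilibrium of a stylized anonymous binary-action game:
    player i chooses a_i = 1 iff alpha + beta * s + eps_i >= 0, where s is
    the population share playing 1 (the own/opponent distinction is an
    O(1/n) simplification). The equilibrium share is found by iterating the
    aggregate best response for a fixed draw of the taste shocks eps."""
    eps = rng.logistic(size=n)
    s = 0.5
    for _ in range(200):
        a = (alpha + beta * s + eps >= 0).astype(float)
        s_new = a.mean()
        if abs(s_new - s) < 1e-10:
            break
        s = s_new
    return s

# LLN effect: the equilibrium share concentrates as the number of players grows.
for n in (100, 1_000, 10_000):
    draws = np.array([equilibrium_share(n) for _ in range(200)])
    print(f"n={n:>6}: mean share {draws.mean():.3f}, std {draws.std():.4f}")
```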
A CAUSAL BOOTSTRAP
The bootstrap, introduced by Efron (The Jackknife, the Bootstrap and Other Resampling Plans, SIAM, 1982), has become a very popular method for estimating variances and constructing confidence intervals. A key insight is that one can approximate the properties of estimators by using the empirical distribution function of the sample as an approximation for the true distribution function. This approach views the uncertainty in the estimator as coming exclusively from sampling uncertainty. We argue that for causal estimands the uncertainty arises entirely, or partially, from a different source, corresponding to the stochastic nature of the treatment received. We develop a bootstrap procedure for inference regarding the average treatment effect that accounts for this uncertainty, and compare its properties to those of the classical bootstrap. We consider completely randomized and observational designs as well as designs with imperfect compliance.
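A deliberately simplified sketch of a design-based ("causal") bootstrap for a completely randomized experiment: impute the missing potential outcomes under a constant-effect assumption and re-draw the randomization, rather than resampling units as the classical bootstrap does. The imputation rule is a crude stand-in for the paper's construction, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated completely randomized experiment (hypothetical data).
n, n1 = 200, 100
y0 = rng.normal(size=n)
y1 = y0 + 0.5 + 0.3 * rng.normal(size=n)         # true potential outcomes
d = np.zeros(n, dtype=bool)
d[rng.choice(n, n1, replace=False)] = True        # completely randomized treatment
y = np.where(d, y1, y0)                           # observed outcomes
ate_hat = y[d].mean() - y[~d].mean()

def causal_bootstrap_ate(y, d, draws=2000, rng=rng):
    """Design-based bootstrap sketch: impute missing potential outcomes under
    a constant-effect assumption (a crude stand-in for the paper's coupling),
    then re-draw the complete-randomization assignment and recompute the ATE."""
    n, n1 = len(y), int(d.sum())
    tau = y[d].mean() - y[~d].mean()
    y1_imp = np.where(d, y, y + tau)              # imputed treated outcomes
    y0_imp = np.where(d, y - tau, y)              # imputed control outcomes
    stats = np.empty(draws)
    for b in range(draws):
        db = np.zeros(n, dtype=bool)
        db[rng.choice(n, n1, replace=False)] = True
        stats[b] = y1_imp[db].mean() - y0_imp[~db].mean()
    return stats

boot = causal_bootstrap_ate(y, d)
print(f"ATE estimate {ate_hat:.3f}, design-based bootstrap s.e. {boot.std():.3f}")
```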
Large sample properties for estimators based on the order statistics approach in auctions
For symmetric auctions, there is a close relationship between distributions of order statistics of bidders' valuations and observable bids that is often used to estimate or bound the valuation distribution, optimal reserve price, and other quantities of interest nonparametrically. However, we show that the functional mapping from distributions of order statistics to their parent distribution is, in general, not Lipschitz continuous and, therefore, introduces an irregularity into the estimation problem. More specifically, we derive the optimal rate for nonparametric point estimation of, and bounds for, the private value distribution, which is typically substantially slower than the regular root‐n rate. We propose trimming rules for the nonparametric estimator that achieve that rate and derive the asymptotic distribution for a regularized estimator. We then demonstrate that policy parameters that depend on the valuation distribution, including optimal reserve price and expected revenue, are irregularly identified when bidding data are incomplete. We also give rates for nonparametric estimation in descending-bid auctions and their strategic equivalents.
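A small numerical illustration of the order-statistics mapping in its simplest case, assuming the observed statistic is the maximum of n iid valuations: then G = F^n and F = G^(1/n), a map that is not Lipschitz near G = 0, which is why the lower tail is trimmed here. The uniform-value design and the trimming grid are illustrative choices, not the paper's estimator or its optimal trimming rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical symmetric IPV auctions: n bidders per auction, T auctions,
# and suppose only the highest valuation per auction is recorded.
n, T = 4, 2_000
true_F = lambda v: v            # valuations ~ Uniform(0, 1)
vals = rng.uniform(size=(T, n))
v_max = vals.max(axis=1)        # observed top order statistic per auction

def parent_cdf_from_max(v_max, n, grid):
    """If the observed statistic is the maximum of n iid draws, its CDF is
    G(v) = F(v)^n, so the parent CDF is recovered as F(v) = G(v)^(1/n).
    The map G -> G^(1/n) is not Lipschitz near G = 0, the source of the
    irregularity (and slow rates) discussed in the abstract."""
    G_hat = np.array([(v_max <= v).mean() for v in grid])
    return G_hat ** (1.0 / n)

grid = np.linspace(0.05, 0.95, 19)      # trim the extreme tails
F_hat = parent_cdf_from_max(v_max, n, grid)
err = np.abs(F_hat - true_F(grid))
print("max abs error on trimmed grid:", err.max().round(3))
```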
Inference on sets in finance
We consider the problem of inference on a class of sets describing a collection of admissible models as solutions to a single smooth inequality. Classical and recent examples include the Hansen–Jagannathan sets of admissible stochastic discount factors, Markowitz–Fama mean–variance sets for asset portfolio returns, and the set of structural elasticities in Chetty's (2012) analysis of demand with optimization frictions. The econometric structure of the problem allows us to construct convenient and powerful confidence regions based on the weighted likelihood ratio and weighted Wald statistics. Our statistics differ from existing ones in that they enforce either exact or first-order equivariance to transformations of parameters, making them especially appealing in the target applications. We show that the resulting inference procedures are more powerful than the structured projection methods. Finally, our framework is also useful for analyzing intersection bounds, namely sets defined as solutions to multiple smooth inequalities, since multiple inequalities can be conservatively approximated by a single smooth inequality. We present two empirical examples showing how the new econometric methods are able to generate sharp economic conclusions.
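As a concrete instance of a set defined by a single smooth inequality, here is a sketch that computes the sample Hansen–Jagannathan bound from simulated asset returns: the HJ set consists of the SDF mean/standard-deviation pairs satisfying the inequality below. The return process is made up, and the paper's weighted likelihood-ratio and Wald confidence regions are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated gross returns on N assets over T periods (hypothetical data).
T, N = 600, 3
mu_true = np.array([1.01, 1.03, 1.06])
R = mu_true + rng.normal(scale=[0.02, 0.08, 0.15], size=(T, N))

def hj_bound(R, sdf_means):
    """Sample Hansen-Jagannathan bound: for each candidate SDF mean v, the
    smallest admissible SDF standard deviation is
        sigma(v) = sqrt( (1 - v*mu)' Sigma^{-1} (1 - v*mu) ),
    with mu, Sigma the sample mean and covariance of gross returns. The HJ
    set is {(v, sigma): sigma >= sigma(v)}: a single smooth inequality."""
    mu = R.mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(R, rowvar=False))
    bounds = []
    for v in sdf_means:
        gap = 1.0 - v * mu
        bounds.append(np.sqrt(gap @ Sigma_inv @ gap))
    return np.array(bounds)

v_grid = np.linspace(0.90, 1.02, 7)
for v, s in zip(v_grid, hj_bound(R, v_grid)):
    print(f"E[m] = {v:.3f}: admissible SDF needs sd(m) >= {s:.3f}")
```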
Transfer Estimates for Causal Effects across Heterogeneous Sites
We consider the problem of extrapolating treatment effects across heterogeneous populations ("sites"/"contexts"). We consider an idealized scenario in which the researcher observes cross-sectional data for a large number of units across several "experimental" sites in which an intervention has already been implemented, as well as a baseline survey of unit-specific, pre-treatment outcomes and relevant attributes for a new "target" site. Our approach treats the baseline as functional data, a choice motivated by the observation that unobserved site-specific confounders manifest themselves not only in average levels of outcomes, but also in how these interact with observed unit-specific attributes. We consider the problem of determining the optimal finite-dimensional feature space in which to solve that prediction problem. Our approach is design-based in the sense that the performance of the predictor is evaluated given the specific, finite selection of experimental and target sites. Our approach is nonparametric, and our formal results concern the construction of an optimal basis of predictors as well as convergence rates for the estimated conditional average treatment effect relative to the constrained-optimal population predictor for the target site. We quantify the potential gains from adapting experimental estimates to a target location in an application to conditional cash transfer (CCT) programs using a combined data set from five multi-site randomized controlled trials.
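A schematic sketch of the transfer idea on made-up data: summarize each site's baseline outcome curve by coefficients on a small basis, fit the relation between those features and site-level effects using the experimental sites, and predict the target site's effect from its baseline survey alone. The quadratic basis, the confounding structure, and the least-squares transfer step are illustrative assumptions, not the paper's estimator or its optimal basis construction.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical setup: S experimental sites plus one target site. For each site
# we observe baseline outcomes y0 as a function of a unit attribute x; the
# experimental sites additionally reveal their site-level treatment effect.
S, n = 12, 500
x = np.linspace(0, 1, n)
basis = np.column_stack([np.ones(n), x, x**2])        # assumed feature space

site_shock = rng.normal(size=(S + 1, 3))              # unobserved site confounders
def baseline(s):                                      # baseline outcome curve of site s
    return basis @ (np.array([1.0, 0.5, -0.3]) + 0.3 * site_shock[s])
def true_cate(s):                                     # effect tied to the same confounder
    return 0.4 + 0.6 * site_shock[s][1]

# Step 1: summarize each site's baseline curve by its basis coefficients.
coefs = np.array([
    np.linalg.lstsq(basis, baseline(s) + 0.05 * rng.normal(size=n), rcond=None)[0]
    for s in range(S + 1)
])
# Step 2: fit effects on baseline features using the experimental sites only.
cates = np.array([true_cate(s) + 0.02 * rng.normal() for s in range(S)])
X = np.column_stack([np.ones(S), coefs[:S]])
w = np.linalg.lstsq(X, cates, rcond=None)[0]
# Step 3: transfer -- predict the target site's effect from its baseline alone.
pred = np.concatenate([[1.0], coefs[S]]) @ w
print(f"predicted target effect {pred:.3f} vs truth {true_cate(S):.3f}")
```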
Structural Sieves
This paper explores the use of deep neural networks for semiparametric estimation of economic models of maximizing behavior in production or discrete choice. We argue that certain deep networks are particularly well suited as a nonparametric sieve to approximate regression functions that result from nonlinear latent variable models of continuous or discrete optimization. Multi-stage models of this type will typically generate rich interaction effects between regressors ("inputs") in the regression function, so that there may be no plausible separability restrictions on the "reduced-form" mapping from inputs to outputs to alleviate the curse of dimensionality. Rather, economic shape, sparsity, or separability restrictions, either at a global level or at intermediate stages, are usually stated in terms of the latent variable model. We show that restrictions of this kind are imposed in a more straightforward manner if a sufficiently flexible version of the latent variable model is in fact used to approximate the unknown regression function.
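A minimal sketch of the kind of restriction the abstract describes, under assumed data: instead of approximating the reduced-form conditional probability directly, the latent utility g(x) of a binary-choice model is approximated by a small one-hidden-layer network (the sieve) and the known logit link is applied on top. The network size, training loop, and data-generating process are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stylized binary choice: the "structural" restriction is the single-index
# logit form P(y=1|x) = L(g(x)), with only the latent utility g(.) left
# nonparametric and approximated by a small neural-network sieve.
n, d, h = 5_000, 3, 16
X = rng.normal(size=(n, d))
g_true = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # nonseparable latent utility
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-g_true))).astype(float)

W1 = rng.normal(scale=0.5, size=(d, h))
b1 = np.zeros(h)
w2 = rng.normal(scale=0.5, size=h)
b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)           # hidden layer of the sieve for g(x)
    g = H @ w2 + b2                    # latent utility
    return H, g, 1 / (1 + np.exp(-g))  # known logit link applied on top

lr = 0.1
for _ in range(2_000):                 # full-batch gradient descent on logistic loss
    H, g, p = forward(X)
    err = (p - y) / n                  # d(loss)/d(g)
    gw2, gb2 = H.T @ err, err.sum()
    dH = np.outer(err, w2) * (1 - H**2)
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

_, g_hat, _ = forward(X)
print("corr(g_hat, g_true) =", np.corrcoef(g_hat, g_true)[0, 1].round(3))
```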