466,945 result(s) for "Averages"
Matching Methods for Causal Inference with Time-Series Cross-Sectional Data
Matching methods improve the validity of causal inference by reducing model dependence and offering intuitive diagnostics. Although they have become a part of the standard tool kit across disciplines, matching methods are rarely used when analysing time-series cross-sectional data. We fill this methodological gap. In the proposed approach, we first match each treated observation with control observations from other units in the same time period that have an identical treatment history up to the prespecified number of lags. We use standard matching and weighting methods to further refine this matched set so that the treated and matched control observations have similar covariate values. Assessing the quality of matches is done by examining covariate balance. Finally, we estimate both short-term and long-term average treatment effects using the difference-in-differences estimator, accounting for a time trend. We illustrate the proposed methodology through simulation and empirical studies. An open-source software package is available for implementing the proposed methods.
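The matching procedure described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' package: it assumes a toy panel layout (`panel[unit][t]` holding a treatment flag `D` and outcome `Y`), matches each newly treated observation to same-period controls with an identical treatment history over the last `lags` periods, and averages the simple difference-in-differences contrast.

```python
def match_and_did(panel, lags=2):
    """Sketch of panel matching with a difference-in-differences estimate.

    panel[unit][t] is a dict with keys 'D' (treatment) and 'Y' (outcome).
    For each newly treated (unit, t), match control units in the same
    period whose treatment history over the last `lags` periods is
    identical, then average the DiD contrast against the matched set.
    """
    effects = []
    units = list(panel)
    T = len(panel[units[0]])
    for i in units:
        for t in range(lags, T):
            # "newly treated": D switches from 0 to 1 at time t
            if panel[i][t]['D'] == 1 and panel[i][t - 1]['D'] == 0:
                hist = [panel[i][s]['D'] for s in range(t - lags, t)]
                matched = [
                    j for j in units
                    if j != i
                    and panel[j][t]['D'] == 0
                    and [panel[j][s]['D'] for s in range(t - lags, t)] == hist
                ]
                if matched:
                    control_diff = sum(
                        panel[j][t]['Y'] - panel[j][t - 1]['Y'] for j in matched
                    ) / len(matched)
                    did = (panel[i][t]['Y'] - panel[i][t - 1]['Y']) - control_diff
                    effects.append(did)
    return sum(effects) / len(effects) if effects else None
```

The covariate-refinement and balance-checking steps the abstract mentions are omitted here; the sketch only shows the history-matching and DiD core.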
Metalearners for estimating heterogeneous treatment effects using machine learning
There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
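The X-learner's two-stage logic can be illustrated with any base learner. Below is a minimal sketch using a deliberately trivial 1-nearest-neighbour regressor in place of the RF or BART base learners the paper uses, and a fixed weight `g` where the paper typically uses an estimated propensity score:

```python
def nn_fit(X, y):
    """Trivial base learner: 1-nearest-neighbour regression on a scalar x.
    Stand-in for the RF/BART base learners used in the paper."""
    pairs = sorted(zip(X, y))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

def x_learner(X, y, d, g=0.5):
    """Minimal X-learner sketch. X: scalar covariates, y: outcomes,
    d: 0/1 treatment indicators, g: fixed combination weight
    (often the propensity score). Returns a CATE function."""
    X1 = [x for x, t in zip(X, d) if t == 1]
    y1 = [v for v, t in zip(y, d) if t == 1]
    X0 = [x for x, t in zip(X, d) if t == 0]
    y0 = [v for v, t in zip(y, d) if t == 0]
    mu1, mu0 = nn_fit(X1, y1), nn_fit(X0, y0)        # stage 1: response surfaces
    d1 = [v - mu0(x) for x, v in zip(X1, y1)]        # imputed effects, treated
    d0 = [mu1(x) - v for x, v in zip(X0, y0)]        # imputed effects, control
    tau1, tau0 = nn_fit(X1, d1), nn_fit(X0, d0)      # stage 2: effect models
    return lambda x: g * tau0(x) + (1 - g) * tau1(x) # weighted combination
```

The efficiency gain the abstract describes comes from stage 2 borrowing strength across groups when one treatment arm is much smaller than the other; this toy version only shows the data flow.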
CHANNELING FISHER
I follow R. A. Fisher’s The Design of Experiments (1935), using randomization statistical inference to test the null hypothesis of no treatment effects in a comprehensive sample of 53 experimental papers drawn from the journals of the American Economic Association. In the average paper, randomization tests of the significance of individual treatment effects find 13% to 22% fewer significant results than are found using authors’ methods. In joint tests of multiple treatment effects appearing together in tables, randomization tests yield 33% to 49% fewer statistically significant results than conventional tests. Bootstrap and jackknife methods support and confirm the randomization results.
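The randomization test this abstract applies is simple to state: under the sharp null of no treatment effect, every reassignment of treatment labels is equally likely, so the p-value is the share of reassignments producing a statistic at least as extreme as the observed one. A minimal exact (full-enumeration) version:

```python
import itertools
import statistics

def randomization_p_value(y_treat, y_control):
    """Exact Fisher randomization test of the sharp null of no effect.
    Enumerates every reassignment of treatment labels and returns the
    share of difference-in-means statistics at least as extreme
    (two-sided) as the observed one."""
    y = list(y_treat) + list(y_control)
    n1 = len(y_treat)
    obs = statistics.mean(y_treat) - statistics.mean(y_control)
    count = total = 0
    for idx in itertools.combinations(range(len(y)), n1):
        t = [y[i] for i in idx]
        c = [y[i] for i in range(len(y)) if i not in idx]
        stat = statistics.mean(t) - statistics.mean(c)
        total += 1
        if abs(stat) >= abs(obs) - 1e-12:
            count += 1
    return count / total
```

Full enumeration is only feasible for small samples; in practice (and presumably in the paper's comprehensive reanalysis) one samples reassignments instead of enumerating them.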
MATCHING ON THE ESTIMATED PROPENSITY SCORE
Propensity score matching estimators (Rosenbaum and Rubin (1983)) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators, and derive adjustments to the large sample variances of propensity score matching estimators of the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The adjustment for the ATE estimator is negative (or zero in some special cases), implying that matching on the estimated propensity score is more efficient than matching on the true propensity score in large samples. However, for the ATET estimator, the sign of the adjustment term depends on the data generating process, and ignoring the estimation error in the propensity score may lead to confidence intervals that are either too large or too small.
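The two-step structure the abstract analyses — estimate the propensity score first, then match on it — can be sketched as follows. This toy version estimates the score as the treated share within each discrete covariate cell (a stand-in for the parametric first step the paper studies) and uses single nearest-neighbour matching:

```python
def ate_psm(data):
    """Sketch of ATE estimation by nearest-neighbour matching on an
    *estimated* propensity score. `data` is a list of (x, d, y) tuples
    with a discrete covariate x, treatment d in {0, 1}, and outcome y."""
    # step 1: estimate e(x) = P(D = 1 | X = x) by cell frequencies
    cells = {}
    for x, d, y in data:
        cells.setdefault(x, []).append(d)
    e = {x: sum(ds) / len(ds) for x, ds in cells.items()}
    treated = [(e[x], y) for x, d, y in data if d == 1]
    control = [(e[x], y) for x, d, y in data if d == 0]
    # step 2: match each unit to the nearest opposite-group score
    imputed = []
    for x, d, y in data:
        pool = control if d == 1 else treated
        y_match = min(pool, key=lambda p: abs(p[0] - e[x]))[1]
        imputed.append(y - y_match if d == 1 else y_match - y)
    return sum(imputed) / len(imputed)
```

The paper's contribution is the large-sample variance adjustment for the fact that step 1 is itself estimated; that correction is beyond this sketch, which only shows the point estimator's structure.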
Is the US Public Corporation in Trouble?
We examine the current state of the US public corporation and how it has evolved over the last 40 years. After falling by 50 percent since its peak in 1997, the number of public corporations is now smaller than 40 years ago. These corporations are now much larger and over the last twenty years have become much older; they invest differently, as the average firm invests more in R&D than it spends on capital expenditures; and compared to the 1990s, the ratio of investment to assets is lower, especially for large firms. Public firms have record high cash holdings and, in most recent years, the average firm has more cash than long-term debt. Measuring profitability by the ratio of earnings to assets, the average firm is less profitable, but that is driven by smaller firms. Earnings of public firms have become more concentrated—the top 200 firms in profits earn as much as all public firms combined. Firms' total payouts to shareholders as a percent of earnings are at record levels. Possible explanations for the current state of the public corporation include a decrease in the net benefits of being a public company, changes in financial intermediation, technological change, globalization, and consolidation through mergers.
Doubly robust estimation of the local average treatment effect curve
We consider estimation of the causal effect of a binary treatment on an outcome, conditionally on covariates, from observational studies or natural experiments in which there is a binary instrument for treatment. We describe a doubly robust, locally efficient estimator of the parameters indexing a model for the local average treatment effect conditionally on covariates V when randomization of the instrument is only true conditionally on a high dimensional vector of covariates X, possibly bigger than V. We discuss the surprising result that inference is identical to inference for the parameters of a model for an additive treatment effect on the treated conditionally on V that assumes no treatment–instrument interaction. We illustrate our methods with the estimation of the local average effect of participating in 401(k) retirement programmes on savings by using data from the US Census Bureau's 1991 Survey of Income and Program Participation.
Measuring Subgroup Preferences in Conjoint Experiments
Conjoint analysis is a common tool for studying political preferences. The method disentangles patterns in respondents’ favorability toward complex, multidimensional objects, such as candidates or policies. Most conjoints rely upon a fully randomized design to generate average marginal component effects (AMCEs). They measure the degree to which a given value of a conjoint profile feature increases, or decreases, respondents’ support for the overall profile relative to a baseline, averaging across all respondents and other features. While the AMCE has a clear causal interpretation (about the effect of features), most published conjoint analyses also use AMCEs to describe levels of favorability. This often means comparing AMCEs among respondent subgroups. We show that using conditional AMCEs to describe the degree of subgroup agreement can be misleading as regression interactions are sensitive to the reference category used in the analysis. This leads to inferences about subgroup differences in preferences that have arbitrary sign, size, and significance. We demonstrate the problem using examples drawn from published articles and provide suggestions for improved reporting and interpretation using marginal means and an omnibus F-test. Given the accelerating use of these designs in political science, we offer advice for best practice in analysis and presentation of results.
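The marginal means the article recommends are straightforward to compute: average support over every profile containing a given feature level, with no reference category involved. A minimal sketch, assuming profiles are tuples of feature levels and support is a 0/1 choice or rating:

```python
def marginal_means(profiles, support):
    """Marginal mean of support for each feature level, pooling over all
    profiles containing that level. Unlike a conditional AMCE, this
    descriptive quantity does not depend on a reference category."""
    totals = {}
    for levels, s in zip(profiles, support):
        for lvl in levels:
            n, tot = totals.get(lvl, (0, 0.0))
            totals[lvl] = (n + 1, tot + s)
    return {lvl: tot / n for lvl, (n, tot) in totals.items()}
```

Because each level's mean is computed on its own pool of profiles, comparing two respondent subgroups reduces to comparing their marginal-means dictionaries directly.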
Predicting key educational outcomes in academic trajectories: a machine-learning approach
Predicting and understanding different key outcomes in a student’s academic trajectory such as grade point average, academic retention, and degree completion would allow targeted intervention programs in higher education. Most of the predictive models developed for those key outcomes have been based on traditional methodological approaches. However, these models assume linear relationships between variables and do not always yield accurate predictive classifications. On the other hand, the use of machine-learning approaches such as artificial neural networks has been very effective in the classification of various educational outcomes, overcoming the limitations of traditional methodological approaches. In this study, multilayer perceptron artificial neural network models, with a backpropagation algorithm, were developed to classify levels of grade point average, academic retention, and degree completion outcomes in a sample of 655 students from a private university. Findings showed a high level of accuracy for all the classifications. Among the predictors, learning strategies had the greatest contribution for the prediction of grade point average. Coping strategies were the best predictors for degree completion, and background information had the largest predictive weight for the identification of students who will drop out or not from the university programs.
The Sad Truth about Happiness Scales
Happiness is reported in ordered intervals (e.g., very, pretty, not too happy). We review and apply standard statistical results to determine when such data permit identification of two groups’ relative average happiness. The necessary conditions for nonparametric identification are strong and unlikely to ever be satisfied. Standard parametric approaches cannot identify this ranking unless the variances are exactly equal. If not, ordered probit findings can be reversed by lognormal transformations. For nine prominent happiness research areas, conditions for nonparametric identification are rejected and standard parametric results are reversed using plausible transformations. Tests for a common reporting function consistently reject.
USING INSTRUMENTAL VARIABLES FOR INFERENCE ABOUT POLICY RELEVANT TREATMENT PARAMETERS
We propose a method for using instrumental variables (IV) to draw inference about causal effects for individuals other than those affected by the instrument at hand. Policy relevance and external validity turn on the ability to do this reliably. Our method exploits the insight that both the IV estimand and many treatment parameters can be expressed as weighted averages of the same underlying marginal treatment effects. Since the weights are identified, knowledge of the IV estimand generally places some restrictions on the unknown marginal treatment effects, and hence on the values of the treatment parameters of interest. We show how to extract information about the treatment parameter of interest from the IV estimand and, more generally, from a class of IV-like estimands that includes the two stage least squares and ordinary least squares estimands, among others. Our method has several applications. First, it can be used to construct nonparametric bounds on the average causal effect of a hypothetical policy change. Second, our method allows the researcher to flexibly incorporate shape restrictions and parametric assumptions, thereby enabling extrapolation of the average effects for compliers to the average effects for different or larger populations. Third, our method can be used to test model specification and hypotheses about behavior, such as no selection bias and/or no selection on gain.