127 results for "Robust standard errors"
Meta-analysis with Robust Variance Estimation: Expanding the Range of Working Models
In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the exact form of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multilevel and multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer potential benefits in terms of better capturing the types of data structures that occur in practice and, under some circumstances, improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the “metafor” and “clubSandwich” packages for R), illustrate the proposed approach in a meta-analysis of randomized trials on the effects of brief alcohol interventions for adolescents and young adults, and report findings from a simulation study evaluating the performance of the new methods.
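The core RVE idea the abstract describes can be sketched numerically. This is a minimal illustration in NumPy (not the paper's expanded working models, and not the "metafor"/"clubSandwich" R packages it uses): the robust standard error of a weighted mean effect squares the *study-level* sums of weighted residuals, so within-study dependence is absorbed without modeling its exact form. The data and equal weights below are made up for illustration.

```python
import numpy as np

# hypothetical data: 6 studies, each contributing dependent effect sizes
study = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 5])
rng = np.random.default_rng(0)
T = 0.3 + rng.normal(scale=0.2, size=study.size)  # effect size estimates
w = np.ones_like(T)                               # weights (equal, for simplicity)

b = np.sum(w * T) / np.sum(w)   # weighted mean effect
e = T - b                       # residuals

# RVE: aggregate weighted residuals within each study before squaring,
# so any dependence among a study's effect sizes is handled implicitly
num = sum((w[study == j] @ e[study == j]) ** 2 for j in np.unique(study))
se_rve = np.sqrt(num) / np.sum(w)
```

With inverse-variance weights in place of the equal weights, this is the intercept-only case of an RVE meta-regression.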
Small-Sample Methods for Cluster-Robust Variance Estimation and Hypothesis Testing in Fixed Effects Models
In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and un-modeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
The Costs of Simplicity: Why Multilevel Models May Benefit from Accounting for Cross-Cluster Differences in the Effects of Controls
Context effects, where a characteristic of an upper-level unit or cluster (e.g., a country) affects outcomes and relationships at a lower level (e.g., that of the individual), are a primary object of sociological inquiry. In recent years, sociologists have increasingly analyzed such effects using quantitative multilevel modeling. Our review of multilevel studies in leading sociology journals shows that most assume the effects of lower-level control variables to be invariant across clusters, an assumption that is often implausible. Comparing mixed-effects (random-intercept and slope) models, cluster-robust pooled OLS, and two-step approaches, we find that erroneously assuming invariant coefficients reduces the precision of estimated context effects. Semi-formal reasoning and Monte Carlo simulations indicate that loss of precision is largest when there is pronounced cross-cluster heterogeneity in the magnitude of coefficients, when there are marked compositional differences among clusters, and when the number of clusters is small. Although these findings suggest that practitioners should fit more flexible models, illustrative analyses of European Social Survey data indicate that maximally flexible mixed-effects models do not perform well in real-life settings. We discuss the need to balance parsimony and flexibility, and we demonstrate the encouraging performance of one prominent approach for reducing model complexity.
Agnostic Notes on Regression Adjustments to Experimental Data: Reexamining Freedman's Critique
Freedman [Adv. in Appl. Math. 40 (2008) 180–193; Ann. Appl. Stat. 2 (2008) 176–196] critiqued ordinary least squares regression adjustment of estimated treatment effects in randomized experiments, using Neyman's model for randomization inference. Contrary to conventional wisdom, he argued that adjustment can lead to worsened asymptotic precision, invalid measures of precision, and small-sample bias. This paper shows that in sufficiently large samples, those problems are either minor or easily fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment–covariate interactions is included. Asymptotically valid confidence intervals can be constructed with the Huber–White sandwich standard error estimator. Checks on the asymptotic approximations are illustrated with data from Angrist, Lang, and Oreopoulos's [Am. Econ. J.: Appl. Econ. 1:1 (2009) 136–163] evaluation of strategies to improve college students' achievement. The strongest reasons to support Freedman's preference for unadjusted estimates are transparency and the dangers of specification search.
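The two fixes this abstract describes — interacting treatment with (centered) covariates, and using a sandwich variance estimator — can be sketched in a few lines. The simulated data-generating process and the choice of the HC2 leverage correction below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
T = rng.integers(0, 2, n)     # randomized binary treatment
x = rng.normal(size=n)        # pre-treatment covariate
y = 1.0 + 0.5 * T + 0.8 * x + rng.normal(size=n)

xc = x - x.mean()             # center so the T coefficient is the ATE
D = np.column_stack([np.ones(n), T, xc, T * xc])   # full interaction design
b = np.linalg.lstsq(D, y, rcond=None)[0]
e = y - D @ b

# Huber-White sandwich variance with the HC2 leverage correction
DtD_inv = np.linalg.inv(D.T @ D)
h = np.einsum('ij,jk,ik->i', D, DtD_inv, D)        # leverages (hat-matrix diagonal)
meat = (D * (e**2 / (1 - h))[:, None]).T @ D
V = DtD_inv @ meat @ DtD_inv
ate, ate_se = b[1], np.sqrt(V[1, 1])
```

Because the covariate is centered and fully interacted with treatment, the coefficient on `T` is the regression-adjusted average treatment effect.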
Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression
Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance estimation (RVE) provides a method for pooling dependent effects, even when information on the exact dependence structure is not available. When the number of studies is small or moderate, however, test statistics and confidence intervals based on RVE can have inflated Type I error. This article describes and investigates several small-sample adjustments to F-statistics based on RVE. Simulation results demonstrate that one such test, which approximates the test statistic using Hotelling's T² distribution, is level-α and uniformly more powerful than the others. An empirical application demonstrates how results based on this test compare to the large-sample F-test.
Robust Inference With Multiway Clustering
In this article we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit, and GMM. This variance estimator enables cluster-robust inference when there is two-way or multiway clustering that is nonnested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g., Liang and Zeger 1986; Arellano 1987) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand, Duflo, and Mullainathan (2004) to two dimensions; and by application to studies in the empirical literature where two-way clustering is present.
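The two-way formula this article proposes can be sketched directly for the OLS case (the article's own implementations target packages such as Stata and SAS): the two-way variance adds the two one-way cluster-robust matrices and subtracts the matrix clustered on their intersection. The toy firm-by-year panel and all variable names below are illustrative assumptions.

```python
import numpy as np

def one_way_vcov(X, e, g):
    """CR0 one-way cluster-robust (sandwich) covariance for OLS."""
    bread = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for c in np.unique(g):
        s = X[g == c].T @ e[g == c]
        meat += np.outer(s, s)
    return bread @ meat @ bread

def two_way_vcov(X, e, g1, g2):
    """Two-way clustering: V(g1) + V(g2) - V(g1-by-g2 intersection)."""
    g12 = g1 * (g2.max() + 1) + g2   # one id per (g1, g2) cell
    return (one_way_vcov(X, e, g1) + one_way_vcov(X, e, g2)
            - one_way_vcov(X, e, g12))

# toy firm-by-year panel
rng = np.random.default_rng(2)
firms, years = 10, 6
firm = np.repeat(np.arange(firms), years)
year = np.tile(np.arange(years), firms)
n = firm.size
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.3]) + rng.normal(size=n)
b = np.linalg.lstsq(X, y, rcond=None)[0]
V = two_way_vcov(X, y - X @ b, firm, year)
```

The result is symmetric by construction but, as with multiway clustering generally, it is not guaranteed to be positive semidefinite in small samples.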
Handling Complex Meta-analytic Data Structures Using Robust Variance Estimates: a Tutorial in R
Purpose: Identifying and understanding causal risk factors for crime over the life-course is a key area of inquiry in developmental criminology. Prospective longitudinal studies provide valuable information about the relationships between risk factors and later criminal offending. Meta-analyses that synthesize findings from these studies can summarize the predictive strength of different risk factors for crime, and offer unique opportunities for examining the developmental variability of risk factors. Complex data structures are common in such meta-analyses, whereby primary studies provide multiple (dependent) effect sizes.
Methods: This paper describes a recent innovative method for handling complex meta-analytic data structures arising due to dependent effect sizes: robust variance estimation (RVE). We first present a brief overview of the RVE method, describing the underlying models and estimation procedures and their applicability to meta-analyses of research in developmental criminology. We then present a tutorial on implementing these methods in the R statistical environment, using an example meta-analysis on risk factors for adolescent delinquency.
Results: The tutorial demonstrates how to estimate mean effect sizes and meta-regression models using the RVE method in R, with particular emphasis on exploring developmental variation in risk factors for crime and delinquency. The tutorial also illustrates hypothesis testing for meta-regression coefficients, including tests for overall model fit and incremental hypothesis tests.
Conclusions: The paper concludes by summarizing the benefits of using the RVE method with complex meta-analytic data structures, highlighting how this method can advance research syntheses in the field of developmental criminology.
Correcting for Cross-Sectional and Time-Series Dependence in Accounting Research
We review and evaluate the methods commonly used in the accounting literature to correct for cross-sectional and time-series dependence. While much of the accounting literature studies settings in which variables are cross-sectionally and serially correlated, we find that the extant methods are not robust to both forms of dependence. Contrary to claims in the literature, we find that the Z2 statistic and Newey-West corrected Fama-MacBeth standard errors do not correct for both cross-sectional and time-series dependence. We show that extant methods produce misspecified test statistics in common accounting research settings, and that correcting for both forms of dependence substantially alters inferences reported in the literature. Specifically, several findings in the implied cost of equity capital literature, the cost of debt literature, and the conservatism literature appear not to be robust to the use of well-specified test statistics.
Dyadic Clustering in International Relations
Quantitative empirical inquiry in international relations often relies on dyadic data. Standard analytic techniques do not account for the fact that dyads are not generally independent of one another. That is, when dyads share a constituent member (e.g., a common country), they may be statistically dependent, or “clustered.” Recent work has developed dyadic clustering robust standard errors (DCRSEs) that account for this dependence. Using these DCRSEs, we reanalyzed all empirical articles published in International Organization between January 2014 and January 2020 that feature dyadic data. We find that published standard errors for key explanatory variables are, on average, approximately half as large as DCRSEs, suggesting that dyadic clustering is leading researchers to severely underestimate uncertainty. However, most statistically significant findings (67%) remain statistically significant when using DCRSEs. We conclude that accounting for dyadic clustering is both important and feasible, and offer software in R and Stata to facilitate use of DCRSEs in future research.
Understanding, Choosing, and Unifying Multilevel and Fixed Effect Approaches
When working with grouped data, investigators may choose between “fixed effects” models (FE) with specialized (e.g., cluster-robust) standard errors, or “multilevel models” (MLMs) employing “random effects.” We review the claims given in published works regarding this choice, then clarify how these approaches work and compare by showing that: (i) random effects employed in MLMs are simply “regularized” fixed effects; (ii) unmodified MLMs are consequently susceptible to bias—but there is a longstanding remedy; and (iii) the “default” MLM standard errors rely on narrow assumptions that can lead to undercoverage in many settings. Our review of over 100 papers using MLM in political science, education, and sociology shows that these “known” concerns have been widely ignored in practice. We describe how to debias MLMs’ coefficient estimates, and provide an option to more flexibly estimate their standard errors. Most illuminating, once MLMs are adjusted in these two ways, the point estimate and standard error for the target coefficient are exactly equal to those of the analogous FE model with cluster-robust standard errors. For investigators working with observational data who are interested only in inference on the target coefficient, either approach is equally appropriate and preferable to uncorrected MLM.