Catalogue Search | MBRL
Explore the vast range of titles available.
25,864 result(s) for "standard errors"
Meta-analysis with Robust Variance Estimation: Expanding the Range of Working Models
2022
In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the exact form of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multilevel and multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer potential benefits in terms of better capturing the types of data structures that occur in practice and, under some circumstances, improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the “metafor” and “clubSandwich” packages for R), illustrate the proposed approach in a meta-analysis of randomized trials on the effects of brief alcohol interventions for adolescents and young adults, and report findings from a simulation study evaluating the performance of the new methods.
Journal Article
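The workflow outlined in this abstract can be sketched in R with the packages it names; the snippet below is a minimal illustration of a multilevel working model with robust variance estimation, not the authors' exact analysis. The data frame dat and its columns yi, vi, study, and esid are hypothetical.

  library(metafor)
  library(clubSandwich)

  # Working covariance matrix for effect sizes nested within studies,
  # assuming a within-study correlation of r = 0.6 (hypothetical choice)
  V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)

  # Multilevel working model: random effects for studies and for effect sizes within studies
  fit <- rma.mv(yi, V, random = ~ 1 | study / esid, data = dat)

  # Cluster-robust (CR2) standard errors with small-sample corrections
  coef_test(fit, vcov = "CR2", cluster = dat$study)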
Small-Sample Methods for Cluster-Robust Variance Estimation and Hypothesis Testing in Fixed Effects Models
2018
In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and un-modeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
Journal Article
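As a rough illustration of the approach described above, the sketch below pairs a dummy-variable fixed effects regression with clubSandwich's bias-reduced (CR2) variance estimator, Satterthwaite t-tests, and the small-sample Wald test; the data frame pdat and its columns are hypothetical, and this is not the article's own replication code.

  library(clubSandwich)

  # Fixed effects absorbed as dummies; clustering at the firm level (names hypothetical)
  fit <- lm(y ~ x1 + x2 + factor(id), data = pdat)

  # Bias-reduced linearization (CR2) with Satterthwaite degrees of freedom
  coef_test(fit, vcov = "CR2", cluster = pdat$firm)

  # Small-sample test of a multiple-parameter hypothesis (x1 = x2 = 0)
  Wald_test(fit, constraints = constrain_zero(c("x1", "x2")),
            vcov = "CR2", cluster = pdat$firm, test = "HTZ")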
Inference in Linear Regression Models with Many Covariates and Heteroscedasticity
by Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
in Economic models, economics, equations
2018
The linear regression model is widely used in empirical work in economics, statistics, and many other disciplines. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroscedasticity. Our results are obtained using high-dimensional approximations, where the number of included covariates is allowed to grow as fast as the sample size. We find that all of the usual versions of Eicker-White heteroscedasticity consistent standard error estimators for linear models are inconsistent under this asymptotics. We then propose a new heteroscedasticity consistent standard error formula that is fully automatic and robust to both (conditional) heteroscedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: parametric linear models with many covariates, linear panel models with many fixed effects, and semiparametric semi-linear models with many technical regressors. Simulation evidence consistent with our theoretical results is provided, and the proposed methods are also illustrated with an empirical application. Supplementary materials for this article are available online.
Journal Article
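For context, the conventional Eicker-White estimators that this abstract refers to are available in R through the sandwich package; the sketch below shows only those standard estimators, not the paper's proposed many-covariate correction, and the data frame dat with outcome y is hypothetical.

  library(sandwich)
  library(lmtest)

  # Regression with many covariates (hypothetical data)
  fit <- lm(y ~ ., data = dat)

  # Usual heteroscedasticity-consistent standard errors (HC0-HC3 variants also available)
  coeftest(fit, vcov = vcovHC(fit, type = "HC1"))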
The connection between urbanization and carbon emissions: a panel evidence from West Africa
by Mensah, Isaac Adjei; Musah, Mohammed; Kong, Yusheng
in Bias, Carbon dioxide, Carbon dioxide emissions
2021
This study examined the nexus between urbanization and carbon emissions in West Africa. Second-generation econometric techniques that are robust to cross-sectional dependence and slope heterogeneity were used for the study. The Pesaran–Yamagata homogeneity test showed the slope coefficients to be heterogeneous. Also, the Breusch–Pagan LM test, the Pesaran scaled LM test, the bias-corrected LM test, the Pesaran CD test, and Friedman's test confirmed the studied panels to be cross-sectionally dependent. Further, the CADF and CIPS unit root tests established the variables to be first-differenced stationary. Additionally, the Westerlund–Edgerton bootstrap cointegration test and the Pedroni residual cointegration test affirmed the series to be cointegrated in the long run. The Driscoll–Kraay standard errors regression estimator was employed to examine the long-run equilibrium relationship among the series, and from the results, urbanization had a significantly positive influence on CO2 emissions in all three panels. Also, economic growth had a materially positive effect on CO2 emissions, while renewable energy consumption had a substantially negative impact on CO2 emissions in all the panels. The causal connections among the series were finally explored through the Dumitrescu–Hurlin panel causality test, and the findings varied somewhat across the panels. Policy recommendations are further discussed.
Journal Article
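A minimal sketch of a Driscoll-Kraay standard-error regression of the kind used in this study is shown below, assuming a long-format panel wa with hypothetical columns country, year, co2, urban, gdp, and ren; it illustrates the estimator, not the authors' specification.

  library(plm)
  library(lmtest)

  pdat <- pdata.frame(wa, index = c("country", "year"))
  fit  <- plm(log(co2) ~ log(urban) + log(gdp) + log(ren), data = pdat, model = "within")

  # Driscoll-Kraay standard errors, robust to cross-sectional and serial dependence
  coeftest(fit, vcov = vcovSCC(fit))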
Agnostic Notes on Regression Adjustments to Experimental Data: Reexamining Freedman's Critique
Freedman [Adv. in Appl. Math. 40 (2008) 180–193; Ann. Appl. Stat. 2 (2008) 176–196] critiqued ordinary least squares regression adjustment of estimated treatment effects in randomized experiments, using Neyman's model for randomization inference. Contrary to conventional wisdom, he argued that adjustment can lead to worsened asymptotic precision, invalid measures of precision, and small-sample bias. This paper shows that in sufficiently large samples, those problems are either minor or easily fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment-covariate interactions is included. Asymptotically valid confidence intervals can be constructed with the Huber-White sandwich standard error estimator. Checks on the asymptotic approximations are illustrated with data from Angrist, Lang, and Oreopoulos's [Am. Econ. J.: Appl. Econ. 1:1 (2009) 136–163] evaluation of strategies to improve college students' achievement. The strongest reasons to support Freedman's preference for unadjusted estimates are transparency and the dangers of specification search.
Journal Article
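The remedy this abstract describes, OLS adjustment with a full set of treatment-covariate interactions and Huber-White sandwich standard errors, can be sketched in R as follows; the variables y, treat, and x are hypothetical, and the covariate is mean-centered so that the treatment coefficient estimates the average effect.

  library(sandwich)
  library(lmtest)

  dat$xc <- scale(dat$x, scale = FALSE)   # center the covariate around its mean
  fit <- lm(y ~ treat * xc, data = dat)   # full treatment-covariate interaction

  # Sandwich (HC2) standard error for the treatment coefficient
  coeftest(fit, vcov = vcovHC(fit, type = "HC2"))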
Heteroskedasticity-Robust Standard Errors for Fixed Effects Panel Data Regression
by Stock, James H.; Watson, Mark W.
in Applications, clustered standard errors, Consistent estimators
2008
The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees-of-freedom adjustment), applied to the fixed-effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than 2) as the number of entities n increases. We provide a bias-adjusted HR estimator that is $\sqrt{nT}$-consistent under any sequences (n, T) in which n and/or T increase to ∞. This estimator can be extended to handle serial correlation of fixed order.
Journal Article
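The bias-adjusted estimator proposed in this article is not shown here; as a point of comparison, the sketch below applies the common alternative of clustering the variance by entity (Arellano) in a within (fixed-effects) panel regression, with a hypothetical data frame mydata indexed by firm and year.

  library(plm)
  library(lmtest)

  pdat <- pdata.frame(mydata, index = c("firm", "year"))
  fe   <- plm(y ~ x, data = pdat, model = "within")

  # Entity-clustered (Arellano) robust variance, a standard remedy when T is small
  coeftest(fe, vcov = vcovHC(fe, method = "arellano", type = "HC1"))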
Handling Complex Meta-analytic Data Structures Using Robust Variance Estimates: a Tutorial in R
by Polanin, Joshua R.; Tipton, Elizabeth; Tanner-Smith, Emily E.
in Complexity, Crime, Criminology
2016
Purpose
Identifying and understanding causal risk factors for crime over the life-course is a key area of inquiry in developmental criminology. Prospective longitudinal studies provide valuable information about the relationships between risk factors and later criminal offending. Meta-analyses that synthesize findings from these studies can summarize the predictive strength of different risk factors for crime, and offer unique opportunities for examining the developmental variability of risk factors. Complex data structures are common in such meta-analyses, whereby primary studies provide multiple (dependent) effect sizes.
Methods
This paper describes a recent innovative method for handling complex meta-analytic data structures arising due to dependent effect sizes: robust variance estimation (RVE). We first present a brief overview of the RVE method, describing the underlying models and estimation procedures and their applicability to meta-analyses of research in developmental criminology. We then present a tutorial on implementing these methods in the R statistical environment, using an example meta-analysis on risk factors for adolescent delinquency.
Results
The tutorial demonstrates how to estimate mean effect sizes and meta-regression models using the RVE method in R, with particular emphasis on exploring developmental variation in risk factors for crime and delinquency. The tutorial also illustrates hypothesis testing for meta-regression coefficients, including tests for overall model fit and incremental hypothesis tests.
Conclusions
The paper concludes by summarizing the benefits of using the RVE method with complex meta-analytic data structures, highlighting how this method can advance research syntheses in the field of developmental criminology.
Journal Article
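A minimal sketch of the RVE workflow this tutorial covers, using the robumeta package, is shown below; the data frame es_dat and its columns d (effect size), var_d (sampling variance), and study_id are hypothetical, and the correlated-effects weights with rho = 0.8 are one common default rather than a recommendation.

  library(robumeta)

  rve_fit <- robu(formula = d ~ 1,          # intercept-only model: overall mean effect size
                  data = es_dat,
                  studynum = study_id,      # clustering variable (study)
                  var.eff.size = var_d,     # effect-size sampling variances
                  modelweights = "CORR",    # correlated-effects working model
                  rho = 0.8,                # assumed within-study correlation
                  small = TRUE)             # small-sample corrections
  print(rve_fit)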
Modeling of Experimental Adsorption Isotherm Data
2015
Adsorption is considered one of the most effective and widely used technologies in environmental protection. Modeling experimental adsorption isotherm data is an essential way to investigate the mechanisms of adsorption and to advance adsorption science. In this paper, we employed three isotherm models, namely Langmuir, Freundlich, and Dubinin-Radushkevich, to correlate four sets of experimental adsorption isotherm data obtained from batch tests in the lab. The linearized and non-linearized forms of the isotherm models were compared and discussed. To determine the best-fitting isotherm model, the correlation coefficient (r2) and the standard errors (S.E.) of each parameter were used to evaluate the fits. The results showed that the non-linear Langmuir model fit the data better than the others, with relatively higher r2 values and smaller S.E. The linear Langmuir model had the highest r2 value; however, the maximum adsorption capacities estimated from the linear Langmuir model deviated from the experimental data.
Journal Article
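The non-linear fitting step described above can be sketched in base R with nls(); the data frame iso with columns Ce (equilibrium concentration) and qe (amount adsorbed) is hypothetical, and the starting values are rough heuristics rather than the study's choices.

  # Non-linear Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
  fit <- nls(qe ~ qmax * KL * Ce / (1 + KL * Ce),
             data = iso,
             start = list(qmax = max(iso$qe), KL = 0.01))

  # Parameter estimates with their standard errors (S.E.)
  summary(fit)$coefficients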
Repeatability and minimal detectable change including clothing effects for smartphone-based 3D markerless motion capture
by Kainz, Hans; Dumphart, Bernhard; Horsak, Brian
in Error analysis, Error detection, Gait analysis
2024
OpenCap, a smartphone- and web-based markerless system, has shown acceptable accuracy compared to marker-based systems, but lacks information on repeatability. This study fills this gap by evaluating the intersession repeatability of OpenCap and investigating the effects of clothing on gait kinematics. Twenty healthy volunteers participated in a test–retest study, performing walking and sit-to-stand tasks with minimal clothing and regular street wear. Segment lengths and lower-limb kinematics were compared between both sessions and for both clothing conditions using the root-mean-square-deviation (RMSD) for entire waveforms and the standard error of measurement (SEM) and minimal detectable change (MDC) for discrete kinematic parameters. In general, the RMSD test–retest values were 2.8 degrees (SD: 1.0) for walking and 3.3 degrees (SD: 1.2) for sit-to-stand. The highest intersession variability was observed in the trunk, pelvis, and hip kinematics of the sagittal plane. SEM and MDC values were on average 2.2 and 6.0 degrees, respectively, for walking, and 2.4 and 6.5 degrees for sit-to-stand. Clothing had minimal effects on kinematics by adding on average less than one degree to the RMSD values for most variables. The segment lengths showed moderate to excellent agreement between both sessions and poor to moderate agreement between clothing conditions. The study highlights the reliability of OpenCap for markerless motion capture, emphasizing its potential for large-scale field studies. However, some variables showed high MDC values above 5 degrees and thus warrant further enhancement of the technology. Although clothing had minimal effects, it is still recommended to maintain consistent clothing to minimize overall variability.
Journal Article
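The SEM and MDC figures reported above follow standard test-retest formulas; a minimal R sketch under one common convention is shown below, assuming session1 and session2 are hypothetical vectors of the same discrete kinematic parameter measured in the two sessions.

  d   <- session2 - session1
  sem <- sd(d) / sqrt(2)          # standard error of measurement from difference scores
  mdc <- 1.96 * sqrt(2) * sem     # minimal detectable change at the 95% level (MDC95)
  c(SEM = sem, MDC95 = mdc)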
The Costs of Simplicity: Why Multilevel Models May Benefit from Accounting for Cross-Cluster Differences in the Effects of Controls
by Heisig, Jan Paul; Giesecke, Johannes; Schaeffer, Merlin
in Analytical estimating, cluster-robust standard errors, Comparative analysis
2017
Context effects, where a characteristic of an upper-level unit or cluster (e.g., a country) affects outcomes and relationships at a lower level (e.g., that of the individual), are a primary object of sociological inquiry. In recent years, sociologists have increasingly analyzed such effects using quantitative multilevel modeling. Our review of multilevel studies in leading sociology journals shows that most assume the effects of lower-level control variables to be invariant across clusters, an assumption that is often implausible. Comparing mixed-effects (random-intercept and slope) models, cluster-robust pooled OLS, and two-step approaches, we find that erroneously assuming invariant coefficients reduces the precision of estimated context effects. Semi-formal reasoning and Monte Carlo simulations indicate that loss of precision is largest when there is pronounced cross-cluster heterogeneity in the magnitude of coefficients, when there are marked compositional differences among clusters, and when the number of clusters is small. Although these findings suggest that practitioners should fit more flexible models, illustrative analyses of European Social Survey data indicate that maximally flexible mixed-effects models do not perform well in real-life settings. We discuss the need to balance parsimony and flexibility, and we demonstrate the encouraging performance of one prominent approach for reducing model complexity.
Journal Article
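The modeling contrast at the heart of this article, a random-intercept model that assumes invariant control effects versus a random-slope model that lets a control's effect vary across clusters, can be sketched with lme4; the data frame ess and its variables are hypothetical and the comparison is illustrative only.

  library(lme4)

  m_invariant <- lmer(y ~ context + control + (1 | country), data = ess)
  m_flexible  <- lmer(y ~ context + control + (1 + control | country), data = ess)

  # Likelihood-ratio comparison (anova refits both models with ML)
  anova(m_invariant, m_flexible)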