Catalogue Search | MBRL
106 result(s) for "Covariate Adjustments"
Principles of confounder selection
2019
Selecting an appropriate set of confounders for which to control is critical for reliable causal inference. Recent theoretical and methodological developments have helped clarify a number of principles of confounder selection. When complete knowledge of a causal diagram relating all covariates to each other is available, graphical rules can be used to make decisions about covariate control. Unfortunately, such complete knowledge is often unavailable. This paper puts forward a practical approach to confounder selection decisions when the somewhat less stringent assumption is made that knowledge is available for each covariate whether it is a cause of the exposure, and whether it is a cause of the outcome. Based on recent theoretically justified developments in the causal inference literature, the following proposal is made for covariate control decisions: control for each covariate that is a cause of the exposure, or of the outcome, or of both; exclude from this set any variable known to be an instrumental variable; and include as a covariate any proxy for an unmeasured variable that is a common cause of both the exposure and the outcome. Various principles of confounder selection are then further related to statistical covariate selection methods.
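The proposal in this abstract amounts to a compact selection rule over covariates with investigator-supplied causal flags. The sketch below is a minimal illustration of that rule; all covariate names and flags are invented, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Covariate:
    name: str
    causes_exposure: bool = False
    causes_outcome: bool = False
    is_instrument: bool = False          # known instrumental variable
    proxies_common_cause: bool = False   # proxy for an unmeasured common cause

def select_confounders(covariates):
    """Control for each cause of exposure or outcome, exclude known
    instruments, include proxies for unmeasured common causes."""
    selected = []
    for c in covariates:
        if c.is_instrument:
            continue  # instruments are excluded by the rule
        if c.causes_exposure or c.causes_outcome or c.proxies_common_cause:
            selected.append(c.name)
    return selected

candidates = [
    Covariate("age", causes_exposure=True, causes_outcome=True),
    Covariate("clinic_distance", causes_exposure=True, is_instrument=True),
    Covariate("baseline_severity", causes_outcome=True),
    Covariate("income_proxy", proxies_common_cause=True),
    Covariate("eye_color"),  # causes neither exposure nor outcome: excluded
]
print(select_confounders(candidates))  # ['age', 'baseline_severity', 'income_proxy']
```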
Journal Article
Meaningful associations in the adolescent brain cognitive development study
by
Fan, Chun Chieh
,
Palmer, Clare
,
Stuart, Elizabeth A.
in
Adolescent
,
Adolescent brain cognitive development study
,
Adolescent Development
2021
• Describes the ABCD study aims and design.
• Covers issues surrounding estimation of meaningful associations, including population inferences, effect sizes, and control of covariates.
• Outlines best practices for reproducible research and reporting of results.
• Provides worked examples that illustrate the main points of the paper.
The Adolescent Brain Cognitive Development (ABCD) Study is the largest single-cohort prospective longitudinal study of neurodevelopment and children's health in the United States. A cohort of n = 11,880 children aged 9–10 years (and their parents/guardians) were recruited across 22 sites and are being followed with in-person visits on an annual basis for at least 10 years. The study approximates the US population on several key sociodemographic variables, including sex, race, ethnicity, household income, and parental education. Data collected include assessments of health, mental health, substance use, culture and environment and neurocognition, as well as geocoded exposures, structural and functional magnetic resonance imaging (MRI), and whole-genome genotyping. Here, we describe the ABCD Study aims and design, as well as issues surrounding estimation of meaningful associations using its data, including population inferences, hypothesis testing, power and precision, control of covariates, interpretation of associations, and recommended best practices for reproducible research, analytical procedures and reporting of results.
Journal Article
A scoping review described diversity in methods of randomization and reporting of baseline balance in stepped-wedge cluster randomized trials
by
Wang, Xueqi
,
Li, Fan
,
Pereira Macedo, Jules Antoine
in
Baseline balance
,
Clinical trials
,
Cluster Analysis
2023
In stepped-wedge cluster randomized trials (SW-CRTs), clusters are randomized not to treatment and control arms but to sequences dictating the times of crossing from control to intervention conditions. Randomization is an essential feature of this design but application of standard methods to promote and report on balance at baseline is not straightforward. We aimed to describe current methods of randomization and reporting of balance at baseline in SW-CRTs.
We used electronic searches to identify primary reports of SW-CRTs published between 2016 and 2022.
Across 160 identified trials, the median number of clusters randomized was 11 (Q1-Q3: 8-18). Sixty-three (39%) used restricted randomization, most often stratification based on a single cluster-level covariate; 12 (19%) of these adjusted for the covariate(s) in the primary analysis. Overall, 50 (31%) and 134 (84%) reported on balance at baseline on cluster- and individual-level characteristics, respectively. Balance on individual-level characteristics was most often reported by condition in cross-sectional designs and by sequence in cohort designs. Authors reported baseline imbalances in 72 (45%) trials.
SW-CRTs often randomize a small number of clusters using unrestricted allocation. Investigators need guidance on appropriate methods of randomization and assessment and reporting of balance at baseline.
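The restricted randomization most often seen in the review — stratification on a single cluster-level covariate — can be sketched in a few lines. Everything below is hypothetical (cluster names, sizes, and the median split are invented for illustration):

```python
import random

def stratified_sequence_assignment(clusters, sizes, n_sequences, seed=2023):
    """Assign clusters to stepped-wedge sequences, stratifying on one
    cluster-level covariate (size) split at its median: shuffle within
    each stratum, then cycle through the sequences."""
    rng = random.Random(seed)
    median = sorted(sizes.values())[len(sizes) // 2]
    strata = [[c for c in clusters if sizes[c] < median],
              [c for c in clusters if sizes[c] >= median]]
    assignment = {}
    for members in strata:
        rng.shuffle(members)
        for i, cluster in enumerate(members):
            assignment[cluster] = i % n_sequences  # balanced within stratum
    return assignment

clusters = [f"clinic_{i}" for i in range(1, 12)]   # 11 clusters: the review's median
sizes = {c: 40 + 10 * i for i, c in enumerate(clusters)}
assignment = stratified_sequence_assignment(clusters, sizes, n_sequences=4)
print(assignment)
```

Cycling within strata guarantees near-equal sequence sizes within each stratum, which is the baseline-balance property the review found was often not promoted or reported.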
Journal Article
Propensity score matching without conditional independence assumption-with an application to the gender wage gap in the United Kingdom
2007
Propensity score matching is frequently used for estimating average treatment effects. Its applicability, however, is not confined to treatment evaluation. In this paper, it is shown that propensity score matching does not hinge on a selection on observables assumption and can be used to estimate not only adjusted means but also their distributions, even with non-i.i.d. sampling. Propensity score matching is used to analyze the gender wage gap among graduates in the UK. It is found that subject of degree contributes substantially to explaining the gender wage gap, particularly at higher quantiles of the wage distribution.
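The matching estimator of an adjusted gap can be sketched on simulated data. Everything below is invented for illustration: a single binary "quantitative degree" covariate stands in for subject of degree, female is coded as the "treatment", and the propensity score is fit by a hand-rolled logistic regression:

```python
import math, random

def fit_logit(x, t, steps=2000, lr=0.1):
    """One-covariate logistic regression by gradient ascent: P(T=1 | x)."""
    a = b = 0.0
    n = len(x)
    for _ in range(steps):
        ga = gb = 0.0
        for xi, ti in zip(x, t):
            p = 1 / (1 + math.exp(-(a + b * xi)))
            ga += (ti - p) / n
            gb += (ti - p) * xi / n
        a += lr * ga
        b += lr * gb
    return lambda xi: 1 / (1 + math.exp(-(a + b * xi)))

def matched_gap(x, t, y):
    """Match each treated unit to the nearest-propensity control
    (with replacement) and average the outcome differences."""
    score = fit_logit(x, t)
    controls = [(score(xi), yi) for xi, ti, yi in zip(x, t, y) if ti == 0]
    diffs = [yi - min(controls, key=lambda c: abs(c[0] - score(xi)))[1]
             for xi, ti, yi in zip(x, t, y) if ti == 1]
    return sum(diffs) / len(diffs)

rng = random.Random(0)
female = [rng.random() < 0.5 for _ in range(600)]
quant = [rng.random() < (0.3 if f else 0.7) for f in female]   # degree subject
wage = [20 + 8 * q - 2 * f + rng.gauss(0, 1) for f, q in zip(female, quant)]
t = [int(f) for f in female]
x = [float(q) for q in quant]

raw = (sum(w for w, ti in zip(wage, t) if ti) / sum(t)
       - sum(w for w, ti in zip(wage, t) if not ti) / (len(t) - sum(t)))
adjusted = matched_gap(x, t, wage)
print(round(raw, 2), round(adjusted, 2))  # matching shrinks the gap toward -2
```

In this toy setup the raw gap mixes the direct wage penalty (-2) with the subject-of-degree composition effect, and matching on the propensity score recovers something close to the direct component — the same decomposition the paper performs for UK graduates.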
Journal Article
AGNOSTIC NOTES ON REGRESSION ADJUSTMENTS TO EXPERIMENTAL DATA: REEXAMINING FREEDMAN'S CRITIQUE
Freedman [Adv. in Appl. Math. 40 (2008) 180–193; Ann. Appl. Stat. 2 (2008) 176–196] critiqued ordinary least squares regression adjustment of estimated treatment effects in randomized experiments, using Neyman's model for randomization inference. Contrary to conventional wisdom, he argued that adjustment can lead to worsened asymptotic precision, invalid measures of precision, and small-sample bias. This paper shows that in sufficiently large samples, those problems are either minor or easily fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment–covariate interactions is included. Asymptotically valid confidence intervals can be constructed with the Huber–White sandwich standard error estimator. Checks on the asymptotic approximations are illustrated with data from Angrist, Lang, and Oreopoulos's [Am. Econ. J.: Appl. Econ. 1:1 (2009) 136–163] evaluation of strategies to improve college students' achievement. The strongest reasons to support Freedman's preference for unadjusted estimates are transparency and the dangers of specification search.
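With a single covariate, OLS adjustment with a full set of treatment–covariate interactions is equivalent to fitting a separate regression line in each arm and averaging the predicted contrasts over all units. A minimal sketch on simulated data (all numbers invented; this illustrates the interacted estimator, not the paper's empirical analysis):

```python
import random

def ols_line(x, y):
    """Simple-regression intercept and slope via closed-form OLS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def interacted_adjustment(x, t, y):
    """OLS with full treatment-covariate interactions: fit a line per
    arm, predict every unit under both arms, average the contrast."""
    a1, b1 = ols_line([xi for xi, ti in zip(x, t) if ti],
                      [yi for yi, ti in zip(y, t) if ti])
    a0, b0 = ols_line([xi for xi, ti in zip(x, t) if not ti],
                      [yi for yi, ti in zip(y, t) if not ti])
    return sum((a1 - a0) + (b1 - b0) * xi for xi in x) / len(x)

rng = random.Random(1)
n = 2000
t = [rng.random() < 0.5 for _ in range(n)]
x = [rng.gauss(0, 1) for _ in range(n)]
y = [2 + xi + ti * (1 + 0.5 * xi) + rng.gauss(0, 1) for xi, ti in zip(x, t)]
print(round(interacted_adjustment(x, t, y), 2))  # near the true ATE of 1
```

Because the interaction lets each arm have its own slope, misspecification of a common slope cannot degrade asymptotic precision — the mechanism behind the paper's headline result.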
Journal Article
Planning a method for covariate adjustment in individually randomised trials: a practical guide
by
Walker, A. Sarah
,
Williamson, Elizabeth J.
,
White, Ian R.
in
Analysis
,
Biomedicine
,
Clinical trials
2022
Background
It has long been advised to account for baseline covariates in the analysis of confirmatory randomised trials, the main statistical justifications being that this increases power and, when the randomisation scheme balances covariates, permits a valid estimate of experimental error. Various methods are available to account for covariates, but it is not clear how to choose among them.
Methods
Taking the perspective of writing a statistical analysis plan, we consider how to choose between the three most promising broad approaches: direct adjustment, standardisation and inverse-probability-of-treatment weighting.
Results
The three approaches are similar in being asymptotically efficient, in losing efficiency with mis-specified covariate functions and in handling designed balance. If a marginal estimand is targeted (for example, a risk difference or survival difference), then direct adjustment should be avoided because it involves fitting non-standard models that are subject to convergence issues. Convergence is most likely with IPTW. Robust standard errors used by IPTW are anti-conservative at small sample sizes. All approaches can use similar methods to handle missing covariate data. With missing outcome data, each method has its own way to estimate a treatment effect in the all-randomised population. We illustrate some issues in a reanalysis of GetTested, a randomised trial designed to assess the effectiveness of an electronic sexually transmitted infection testing and results service.
Conclusions
No single approach is always best: the choice will depend on the trial context. We encourage trialists to consider all three methods more routinely.
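For a marginal risk difference with a single binary covariate, standardisation and IPTW coincide exactly when both working models are saturated, which makes for a compact numerical check (the data below are made up for illustration):

```python
def standardised_rd(data):
    """Standardisation (G-computation): average the stratum-specific
    risk contrast over the observed covariate distribution."""
    def risk(t, x):
        rows = [r for r in data if r["t"] == t and r["x"] == x]
        return sum(r["y"] for r in rows) / len(rows)
    return sum(risk(1, r["x"]) - risk(0, r["x"]) for r in data) / len(data)

def iptw_rd(data):
    """IPTW with a saturated propensity model P(T=1 | x)."""
    def ps(x):
        rows = [r for r in data if r["x"] == x]
        return sum(r["t"] for r in rows) / len(rows)
    n = len(data)
    mean1 = sum(r["y"] * r["t"] / ps(r["x"]) for r in data) / n
    mean0 = sum(r["y"] * (1 - r["t"]) / (1 - ps(r["x"])) for r in data) / n
    return mean1 - mean0

data = ([{"x": 0, "t": 1, "y": y} for y in (1, 1, 0, 0)]
        + [{"x": 0, "t": 0, "y": y} for y in (1, 0, 0, 0)]
        + [{"x": 1, "t": 1, "y": y} for y in (1, 1)]
        + [{"x": 1, "t": 0, "y": y} for y in (1, 0)])
print(standardised_rd(data), iptw_rd(data))  # both equal 1/3
```

With continuous covariates the working models are no longer saturated, so the two estimators can diverge under misspecification — the trade-off the paper examines.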
Journal Article
Covariate-specific ROC curve analysis can accommodate differences between covariate subgroups in the evaluation of diagnostic accuracy
by
van Es, Nick
,
Takada, Toshihiko
,
Bossuyt, Patrick M.
in
Accuracy
,
Algorithms
,
Bayesian analysis
2023
We present an illustrative application of methods that account for covariates in receiver operating characteristic (ROC) curve analysis, using individual patient data on D-dimer testing for excluding pulmonary embolism.
Bayesian nonparametric covariate-specific ROC curves were constructed to examine the performance/positivity thresholds in covariate subgroups. Standard ROC curves were constructed. Three scenarios were outlined based on comparison between subgroups and standard ROC curve conclusion: (1) identical distribution/identical performance, (2) different distribution/identical performance, and (3) different distribution/different performance. Scenarios were illustrated using clinical covariates. Covariate-adjusted ROC curves were also constructed.
Age groups had prominent differences in D-dimer concentration, paired with differences in performance (Scenario 3). Different positivity thresholds were required to achieve the same level of sensitivity. D-dimer had identical performance, but different distributions for YEARS algorithm items (Scenario 2), and similar distributions for sex (Scenario 1). For the latter covariates, comparable positivity thresholds achieved the same sensitivity. All covariate-adjusted models had AUCs comparable to the standard approach.
Subgroup differences in performance and distribution of results can indicate that the conventional ROC curve is not a fair representation of test performance. Estimating conditional ROC curves can improve the ability to select thresholds with greater applicability.
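The core computation behind covariate-specific ROC analysis is simply an ROC curve (or AUC) per covariate subgroup. A minimal empirical-AUC sketch, with invented D-dimer-like values for two age subgroups (not the study's data):

```python
def auc(pos, neg):
    """Empirical AUC: the probability that a random positive outscores
    a random negative, ties counted 1/2 (Mann-Whitney U / (m * n))."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical D-dimer values (ng/mL) by age subgroup: PE vs no PE
young_pe, young_no_pe = [600, 900], [200, 300, 700]
old_pe, old_no_pe = [1500, 2500], [400, 800, 900]
print(auc(young_pe, young_no_pe), auc(old_pe, old_no_pe))
```

A pooled ROC curve would average over the higher baseline D-dimer in older patients; computing the curve per subgroup is what reveals Scenario 3-style differences and supports subgroup-specific positivity thresholds.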
Journal Article
A GENERALIZED BACK-DOOR CRITERION
2015
We generalize Pearl's back-door criterion for directed acyclic graphs (DAGs) to more general types of graphs that describe Markov equivalence classes of DAGs and/or allow for arbitrarily many hidden variables. We also give easily checkable necessary and sufficient graphical criteria for the existence of a set of variables that satisfies our generalized back-door criterion, when considering a single intervention and a single outcome variable. Moreover, if such a set exists, we provide an explicit set that fulfills the criterion. We illustrate the results in several examples. R-code is available in the R-package pcalg.
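Pearl's original (non-generalized) back-door criterion on a plain DAG can be checked by brute force on small graphs, which helps fix intuition for the generalization. The sketch below is an illustrative brute-force check, not the paper's algorithm and unrelated to pcalg's implementation:

```python
def descendants(dag, node):
    """All nodes reachable from `node` along directed edges."""
    seen, stack = set(), [node]
    while stack:
        for child in dag.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def all_paths(dag, x, y):
    """All simple paths between x and y, ignoring edge direction."""
    adj = {}
    for u, children in dag.items():
        for v in children:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    paths, stack = [], [[x]]
    while stack:
        path = stack.pop()
        if path[-1] == y:
            paths.append(path)
            continue
        for nxt in adj.get(path[-1], ()):
            if nxt not in path:
                stack.append(path + [nxt])
    return paths

def blocked(dag, path, z):
    """d-separation blocking rule applied to one path."""
    into = lambda u, v: v in dag.get(u, [])        # True if edge u -> v
    for i in range(1, len(path) - 1):
        a, v, b = path[i - 1], path[i], path[i + 1]
        if into(a, v) and into(b, v):              # v is a collider
            if v not in z and not (descendants(dag, v) & z):
                return True
        elif v in z:                               # non-collider in z
            return True
    return False

def backdoor(dag, x, y, z):
    """Pearl's back-door criterion: z contains no descendant of x, and
    z blocks every x-y path that begins with an arrow into x."""
    z = set(z)
    if descendants(dag, x) & z:
        return False
    return all(blocked(dag, p, z)
               for p in all_paths(dag, x, y)
               if len(p) > 1 and x in dag.get(p[1], []))

confounded = {"C": ["X", "Y"], "X": ["Y"]}      # C -> X, C -> Y, X -> Y
m_bias = {"A": ["X", "M"], "B": ["M", "Y"], "X": ["Y"]}
print(backdoor(confounded, "X", "Y", {"C"}))    # True
print(backdoor(confounded, "X", "Y", set()))    # False
print(backdoor(m_bias, "X", "Y", set()))        # True: collider M blocks the path
print(backdoor(m_bias, "X", "Y", {"M"}))        # False: conditioning opens it
```

The M-bias graph shows why the criterion is about blocking rather than "adjust for everything pre-treatment": conditioning on the collider M opens a back-door path that was closed.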
Journal Article
The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies
by
Jairath, Vipul
,
Morris, Tim P
,
Kahan, Brennan C
in
Analysis of Variance
,
Biomedicine
,
Clinical trials
2014
Background
Adjustment for prognostic covariates can lead to increased power in the analysis of randomized trials. However, adjusted analyses are not often performed in practice.
Methods
We used simulation to examine the impact of covariate adjustment on 12 outcomes from 8 studies across a range of therapeutic areas. We assessed (1) how large an increase in power can be expected in practice; and (2) the impact of adjustment for covariates that are not prognostic.
Results
Adjustment for known prognostic covariates led to large increases in power for most outcomes. When power was set to 80% based on an unadjusted analysis, covariate adjustment led to a median increase in power to 92.6% across the 12 outcomes (range 80.6 to 99.4%). Power was increased to over 85% for 8 of 12 outcomes, and to over 95% for 5 of 12 outcomes. Conversely, the largest decrease in power from adjustment for covariates that were not prognostic was from 80% to 78.5%.
Conclusions
Adjustment for known prognostic covariates can lead to substantial increases in power, and should be routinely incorporated into the analysis of randomized trials. The potential benefits of adjusting for a small number of possibly prognostic covariates in trials with moderate or large sample sizes far outweigh the risks of doing so, and so should also be considered.
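The size of such power gains follows directly from variance reduction: adjusting for a covariate with correlation rho to the outcome scales the residual SD by sqrt(1 - rho^2). A back-of-envelope normal-approximation sketch — the effect size and sample size below are invented, chosen to give roughly 80% unadjusted power so the gains echo the paper's setup:

```python
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power(delta, sd, n_per_arm, alpha_z=1.959963984540054):
    """Approximate two-arm power for a difference in means, two-sided
    alpha = 0.05, using the normal approximation."""
    se = sd * math.sqrt(2 / n_per_arm)
    return normal_cdf(delta / se - alpha_z)

delta, sd, n = 1.0, 2.0, 63          # ~80% power for the unadjusted analysis
for rho in (0.0, 0.5, 0.7):
    adjusted_sd = sd * math.sqrt(1 - rho ** 2)
    print(rho, round(power(delta, adjusted_sd, n), 3))
```

A covariate with rho = 0.7 lifts this trial's power from about 80% to well above 95%, while rho = 0 (a non-prognostic covariate) leaves power essentially unchanged — the same asymmetry of risks and rewards the abstract reports.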
Journal Article
A comparison of covariate adjustment approaches under model misspecification in individually randomized trials
by
Morris, Tim
,
Williamson, Elizabeth
,
Tackney, Mia S.
in
Analysis
,
Analysis of covariance
,
ANCOVA
2023
Adjustment for baseline covariates in randomized trials has been shown to lead to gains in power and can protect against chance imbalances in covariates. For continuous covariates, there is a risk that the form of the relationship between the covariate and outcome is misspecified when taking an adjusted approach. Using a simulation study focusing on individually randomized trials with small sample sizes, we explore whether a range of adjustment methods are robust to misspecification, either in the covariate–outcome relationship or through an omitted covariate–treatment interaction. Specifically, we aim to identify potential settings where G-computation, inverse probability of treatment weighting (IPTW), augmented inverse probability of treatment weighting (AIPTW) and targeted maximum likelihood estimation (TMLE) offer improvement over the commonly used analysis of covariance (ANCOVA). Our simulations show that all adjustment methods are generally robust to model misspecification if adjusting for a few covariates, sample size is 100 or larger, and there are no covariate–treatment interactions. When there is a non-linear interaction of treatment with a skewed covariate and sample size is small, all adjustment methods can suffer from bias; however, methods that allow for interactions (such as G-computation with interaction and IPTW) show improved results compared to ANCOVA. When there is a large number of covariates to adjust for, ANCOVA retains good properties while other methods suffer from under- or over-coverage. An outstanding issue for G-computation, IPTW and AIPTW in small samples is that standard errors are underestimated; they should be used with caution without the availability of small-sample corrections, development of which is needed. These findings are relevant for covariate adjustment in interim analyses of larger trials.
Journal Article