Catalogue Search | MBRL
127 result(s) for "Haneuse, Sebastien"
Distinguishing Selection Bias and Confounding Bias in Comparative Effectiveness Research
2016
Comparative effectiveness research (CER) aims to provide patients and physicians with evidence-based guidance on treatment decisions. As researchers conduct CER they face myriad challenges. Although inadequate control of confounding is the most-often cited source of potential bias, selection bias that arises when patients are differentially excluded from analyses is a distinct phenomenon with distinct consequences: confounding bias compromises internal validity, whereas selection bias compromises external validity. Despite this distinction, however, the label “treatment-selection bias” is being used in the CER literature to denote the phenomenon of confounding bias. Motivated by an ongoing study of the effect of treatment choice for depression on weight change over time, this paper formally distinguishes selection and confounding bias in CER, clarifying important scientific, design, and analysis issues relevant to ensuring validity. First, the 2 types of bias may arise simultaneously in any given study; even if confounding bias is completely controlled, a study may nevertheless suffer from selection bias, so that the results are not generalizable to the patient population of interest. Second, the statistical methods used to mitigate the 2 biases are themselves distinct; methods developed to control one type of bias should not be expected to address the other. Finally, the control of selection and confounding bias will often require distinct covariate information. Consequently, as researchers plan future studies of comparative effectiveness, care must be taken to ensure that all data elements relevant to both confounding and selection bias are collected.
Journal Article
On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses
by Koehler, Elizabeth; Haneuse, Sebastien J.-P. A.; Brown, Elizabeth
in Accuracy; Bootstrap; Bootstrap method
2009
Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
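The core quantity the abstract describes can be sketched in a few lines: the Monte Carlo standard error of a simulated estimate is the standard deviation across replications divided by the square root of the number of replications, and inverting that relation gives a rough replication count for a target accuracy. This is a generic illustration of the idea, not the authors' specific methods; all names here are illustrative.

```python
import math
import random

def mc_error(replicates):
    """Monte Carlo standard error: SD across replications divided by sqrt(R)."""
    r = len(replicates)
    mean = sum(replicates) / r
    var = sum((x - mean) ** 2 for x in replicates) / (r - 1)
    return math.sqrt(var / r)

def replications_needed(sd_estimate, target_mce):
    """Smallest R such that sd / sqrt(R) <= target_mce."""
    return math.ceil((sd_estimate / target_mce) ** 2)

# Toy simulation: each replication yields one N(0, 1) draw as its "estimate".
random.seed(1)
reps = [random.gauss(0.0, 1.0) for _ in range(1000)]
print(mc_error(reps))                  # roughly 1 / sqrt(1000), i.e. about 0.03
print(replications_needed(1.0, 0.01))  # replications for MCE of at most 0.01
```

In practice the pilot standard deviation fed to `replications_needed` would itself come from an initial batch of replications, which is exactly the kind of two-stage reasoning the paper formalizes.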
Journal Article
Predicting the outcomes of preterm neonates beyond the neonatal intensive care unit: What are we missing?
by Haneuse, Sebastien; Litt, Jonathan S.; Crilly, Colin J.
in Humans; Infant, Newborn; Infant, Premature
2021
Preterm infants are a population at high risk for mortality and adverse health outcomes. With recent improvements in survival to childhood, increasing attention is being paid to risk of long-term morbidity, specifically during childhood and young-adulthood. Although numerous tools for predicting the functional outcomes of preterm neonates have been developed in the past three decades, no studies have provided a comprehensive overview of these tools, along with their strengths and weaknesses. The purpose of this article is to provide an in-depth, narrative review of the current risk models available for predicting the functional outcomes of preterm neonates. A total of 32 studies describing 43 separate models were considered. We found that most studies used similar physiologic variables and standard regression techniques to develop models that primarily predict the risk of poor neurodevelopmental outcomes. With a recently expanded knowledge regarding the many factors that affect neurodevelopment and other important outcomes, as well as a better understanding of the limitations of traditional analytic methods, we argue that there is great room for improvement in creating risk prediction tools for preterm neonates. We also consider the ethical implications of utilizing these tools for clinical decision-making.
Impact
Based on a literature review of risk prediction models for preterm neonates predicting functional outcomes, future models should aim for more consistent outcomes definitions, standardized assessment schedules and measurement tools, and consideration of risk beyond physiologic antecedents.
Our review provides a comprehensive analysis and critique of risk prediction models developed for preterm neonates, specifically predicting functional outcomes instead of mortality, to reveal areas of improvement for future studies aiming to develop risk prediction tools for this population.
To our knowledge, this is the first literature review and narrative analysis of risk prediction models for preterm neonates regarding their functional outcomes.
Journal Article
Social distancing to slow the US COVID-19 epidemic: Longitudinal pretest–posttest comparison group study
by Harling, Guy; Venkataramani, Atheendar S.; Gilbert, Rebecca F.
in Betacoronavirus - isolation & purification; Biology and Life Sciences; Communicable Disease Control - methods
2020
Social distancing measures to address the US coronavirus disease 2019 (COVID-19) epidemic may have notable health and social impacts.
We conducted a longitudinal pretest-posttest comparison group study to estimate the change in COVID-19 case growth before versus after implementation of statewide social distancing measures in the US. The primary exposure was time before (14 days prior to, and through 3 days after) versus after (beginning 4 days after, to up to 21 days after) implementation of the first statewide social distancing measures. Statewide restrictions on internal movement were examined as a secondary exposure. The primary outcome was the COVID-19 case growth rate. The secondary outcome was the COVID-19-attributed mortality growth rate. All states initiated social distancing measures between March 10 and March 25, 2020. The mean daily COVID-19 case growth rate decreased beginning 4 days after implementation of the first statewide social distancing measures, by 0.9% per day (95% CI -1.4% to -0.4%; P < 0.001). We did not observe a statistically significant difference in the mean daily case growth rate before versus after implementation of statewide restrictions on internal movement (0.1% per day; 95% CI -0.04% to 0.3%; P = 0.14), but there is substantial difficulty in disentangling the unique associations with statewide restrictions on internal movement from the unique associations with the first social distancing measures. Beginning 7 days after social distancing, the COVID-19-attributed mortality growth rate decreased by 2.0% per day (95% CI -3.0% to -0.9%; P < 0.001). Our analysis is susceptible to potential bias resulting from the aggregate nature of the ecological data, potential confounding by contemporaneous changes (e.g., increases in testing), and potential underestimation of social distancing due to spillover effects from neighboring states.
Statewide social distancing measures were associated with a statistically significant decrease in the COVID-19 case growth rate. They were also associated with a decrease in the COVID-19-attributed mortality growth rate beginning 7 days after implementation, although this decrease was no longer statistically significant by 10 days.
Journal Article
Glucose Levels and Risk of Dementia
by Montine, Thomas J; Li, Ge; Bowen, James D
in Aged; Apolipoproteins E - genetics; Bayes Theorem
2013
Diabetes increases the risk of dementia. In this study, higher levels of glucose, even in persons without clinical diabetes, also increased the risk of dementia.
With the aging of the population, dementia has become a major threat to public health worldwide.[1] The rate of obesity is also increasing, with a parallel increase in the rate of diabetes.[2] The results of studies assessing the association between obesity or diabetes and the risk of dementia have been mixed.[3,4] It is imperative to understand the potential consequences of the obesity and diabetes epidemics for the incidence of dementia.[5] Any effects that obesity has on the risk of dementia are likely to include effects on metabolism. We evaluated extensive longitudinal clinical data from a prospective cohort with research-quality . . .
Journal Article
Characterization of Dementia and Alzheimer’s Disease in an Older Population: Updated Incidence and Life Expectancy With and Without Dementia
by Haneuse, Sebastien J.; Larson, Eric B.; Hubbard, Rebecca A.
in Adults; Age differences; Age Factors
2015
Objectives. We estimated dementia incidence rates, life expectancies with and without dementia, and percentage of total life expectancy without dementia.
Methods. We studied 3605 members of Group Health (Seattle, WA) aged 65 years or older who did not have dementia at enrollment in the Adult Changes in Thought study between 1994 and 2008. We estimated incidence rates of Alzheimer’s disease and dementia, as well as life expectancies with and without dementia, defined as the average number of years one is expected to live with and without dementia, and percentage of total life expectancy without dementia.
Results. Dementia incidence increased through ages 85 to 89 years (74.2 cases per 1000 person-years) and 90 years or older (105 cases per 1000 person-years). Life expectancy without dementia and percentage of total life expectancy without dementia decreased with age. Life expectancy with dementia was longer in women and people with at least a college degree. Percentage of total life expectancy without dementia was greater in younger age groups, men, and those with more education.
Conclusions. Efforts to delay onset of dementia, if successful, would likely benefit older adults of all ages.
Journal Article
Fitting a shared frailty illness-death model to left-truncated semi-competing risks data to examine the impact of education level on incident dementia
by Gilsanz, Paola; Haneuse, Sebastien; Lee, Catherine
in B-splines; Competing risks; Data analysis
2021
Background
Semi-competing risks arise when interest lies in the time-to-event for some non-terminal event, the observation of which is subject to some terminal event. One approach to assessing the impact of covariates on semi-competing risks data is through the illness-death model with shared frailty, where hazard regression models are used to model the effect of covariates on the endpoints. The shared frailty term, which can be viewed as an individual-specific random effect, acknowledges dependence between the events that is not accounted for by covariates. Although methods exist for fitting such a model to right-censored semi-competing risks data, there is currently a gap in the literature for fitting such models when a flexible baseline hazard specification is desired and the data are left-truncated, for example when time is on the age scale. We provide a modeling framework and openly available code for implementation.
Methods
We specified the model and the likelihood function that accounts for left-truncated data, and provided an approach to estimation and inference via maximum likelihood. Our model was fully parametric, specifying baseline hazards via Weibull or B-splines. Using simulated data, we examined the operating characteristics of the implementation in terms of bias and coverage. We applied our methods to a dataset of 33,117 Kaiser Permanente Northern California members aged 65 or older, examining the relationship between educational level (categorized as: high school or less; trade school, some college, or college graduate; post-graduate) and incident dementia and death.
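For context, the shared-frailty illness-death model described above is conventionally written as three transition-specific hazards; the notation below follows the standard form of this model in the semi-competing risks literature, not necessarily the paper's own symbols.

```latex
% Transition hazards: 1 = healthy -> dementia (non-terminal),
% 2 = healthy -> death (terminal), 3 = dementia -> death,
% with a subject-specific frailty \gamma_i acting multiplicatively:
h_1(t \mid \gamma_i, x_i) = \gamma_i \, h_{01}(t) \, \exp(x_i^\top \beta_1), \quad t > 0
h_2(t \mid \gamma_i, x_i) = \gamma_i \, h_{02}(t) \, \exp(x_i^\top \beta_2), \quad t > 0
h_3(t \mid t_1, \gamma_i, x_i) = \gamma_i \, h_{03}(t) \, \exp(x_i^\top \beta_3), \quad t > t_1
```

Here the baseline hazards $h_{01}, h_{02}, h_{03}$ are what the authors specify via Weibull or B-spline forms, and $\gamma_i$ is typically taken to be Gamma-distributed with mean 1, capturing dependence between dementia and death not explained by the covariates $x_i$.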
Results
A simulation study showed that our implementation provided regression parameter estimates with negligible bias and good coverage. In our data application, we found higher levels of education are associated with a lower risk of incident dementia, after adjusting for sex and race/ethnicity.
Conclusions
As illustrated by our analysis of Kaiser data, our proposed modeling framework allows the analyst to assess the impact of covariates on semi-competing risks data, such as incident dementia and death, while accounting for dependence between the outcomes when data are left-truncated, as is common in studies of aging and dementia.
Journal Article
Comparing causal inference methods for point exposures with missing confounders: a simulation study
by Haneuse, Sebastien; Levis, Alexander W.; Benz, Luke
in Causal inference; Causality; Computer Simulation
2025
Causal inference methods based on electronic health record (EHR) databases must simultaneously handle confounding and missing data. In practice, when faced with partially missing confounders, analysts may proceed by first imputing missing data and subsequently using outcome regression or inverse-probability weighting (IPW) to address confounding. However, little is known about the theoretical performance of such reasonable but ad hoc methods. Though a vast literature exists on each of these two challenges separately, relatively few works attempt to address missing data and confounding simultaneously in a formal manner. In a recent paper, Levis et al. (Can J Stat e11832, 2024) outlined a robust framework for tackling these problems together under certain identifying conditions and introduced a pair of estimators for the average treatment effect (ATE), one of which is non-parametric efficient. In this work we present a series of simulations, motivated by a published EHR-based study (Arterburn et al., Ann Surg 274:e1269-e1276, 2020) of the long-term effects of bariatric surgery on weight outcomes, to investigate these new estimators and compare them to existing ad hoc methods. While methods based on ad hoc combinations of imputation and confounding adjustment perform well in certain scenarios, no single estimator is uniformly best. We conclude with recommendations for good practice in the face of partially missing confounders.
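The IPW step mentioned in the abstract can be sketched with a bare-bones Horvitz-Thompson style estimator of the ATE: each subject's outcome is weighted by the inverse of their estimated probability of the treatment they actually received. This is a generic illustration with toy data and known propensities, not the Levis et al. estimators; all names are illustrative.

```python
def ipw_ate(treatment, outcome, propensity):
    """Inverse-probability-weighted estimate of the average treatment effect.

    treatment:  list of 0/1 treatment indicators
    outcome:    observed outcomes
    propensity: estimated P(treated | confounders) for each subject
    """
    n = len(treatment)
    # Weight treated outcomes by 1/e and control outcomes by 1/(1 - e).
    treated = sum(a * y / e for a, y, e in zip(treatment, outcome, propensity)) / n
    control = sum((1 - a) * y / (1 - e) for a, y, e in zip(treatment, outcome, propensity)) / n
    return treated - control

# Toy data with a constant, known propensity of 0.5.
a = [1, 0, 1, 0]
y = [3.0, 1.0, 2.0, 0.0]
e = [0.5, 0.5, 0.5, 0.5]
print(ipw_ate(a, y, e))  # (6 + 4)/4 - (2 + 0)/4 = 2.5 - 0.5 = 2.0
```

In the setting the paper studies, the propensities would be estimated from confounders that are themselves partially missing, which is precisely why the ad hoc impute-then-weight pipeline needs the kind of scrutiny the simulations provide.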
Journal Article
Structural Racism and JAMA Network Open
by Catenacci, Daniel V.; Desai, Angel N.; Powell, Elizabeth
in Humans; Publishing - standards; Publishing - trends
2021
Journal Article