Catalogue Search | MBRL
23 result(s) for "Pustejovsky, James E."
Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression
2015
Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance estimation (RVE) provides a method for pooling dependent effects, even when information on the exact dependence structure is not available. When the number of studies is small or moderate, however, test statistics and confidence intervals based on RVE can have inflated Type I error. This article describes and investigates several small-sample adjustments to F-statistics based on RVE. Simulation results demonstrate that one such test, which approximates the test statistic using Hotelling's T² distribution, is level-α and uniformly more powerful than the others. An empirical application demonstrates how results based on this test compare to the large-sample F-test.
Journal Article
Meta-analysis with Robust Variance Estimation: Expanding the Range of Working Models
2022
In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the exact form of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multilevel and multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer potential benefits in terms of better capturing the types of data structures that occur in practice and, under some circumstances, improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the “metafor” and “clubSandwich” packages for R), illustrate the proposed approach in a meta-analysis of randomized trials on the effects of brief alcohol interventions for adolescents and young adults, and report findings from a simulation study evaluating the performance of the new methods.
Journal Article
Small-Sample Methods for Cluster-Robust Variance Estimation and Hypothesis Testing in Fixed Effects Models
2018
In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and un-modeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
Journal Article
A meta-analysis of the effects of mindfulness meditation training on self-reported interoception
by Chen, Ya-Yun; Pustejovsky, James E.; Goldberg, Simon B.
in 631/477/2811; 692/308/575; Accuracy
2025
Mindfulness meditation training may cultivate interoceptive awareness and provide therapeutic benefit when implemented within mental and physical health interventions. This pre-registered meta-analysis evaluated the impact of mindfulness interventions on self-reported interoception measures and associated relationships with psychological outcomes. Twenty-nine randomized controlled trials with 2,191 participants (77.8% female, mean age 32.8 years) were meta-analyzed using correlated and hierarchical effects models. Interventions included mindfulness-based programs (k = 15), body-based approaches (incorporating elements like massage, k = 8), and other variations (k = 6). Five SIMs were tested; the Multidimensional Assessment of Interoceptive Awareness was the most common (22 studies). Results showed a small-to-medium positive effect on interoception measures across all studies (g = 0.31, p < 0.001, 95% CI [0.21, 0.42]) with low-to-moderate heterogeneity (τ = 0.16). Mindfulness-based programs demonstrated the largest effects (g = 0.41). No evidence of publication bias was found. No other moderators, such as practice dosage or clinical sample, were significant. Improvements in self-reported interoception were similar in size to improvements in self-reported mindfulness and were related to improvements in psychological distress. These meta-analytic findings provide evidence that mindfulness-based interventions lead to adaptive changes in the subjective experience of interoception, perhaps contributing to improved mental wellbeing.
Journal Article
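For readers unfamiliar with the heterogeneity statistic τ reported in the abstract above: the between-study variance τ² in a random-effects meta-analysis is often obtained with a moment estimator such as DerSimonian-Laird. The following Python sketch is purely illustrative of that generic estimator; it is not the correlated-and-hierarchical-effects model actually used in the study.

```python
def dersimonian_laird_tau2(effects, variances):
    """Moment-based (DerSimonian-Laird) estimate of between-study variance tau^2."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # weighted mean effect
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - df) / c)                    # truncated at zero
```

For example, two effect estimates of 0.1 and 0.5 with sampling variance 0.04 each yield τ² = 0.04 (τ = 0.2), while identical estimates yield τ² = 0.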
Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework
by Shadish, William R.; Pustejovsky, James E.; Hedges, Larry V.
in Comparative Analysis; Computation; Covariance matrices
2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general framework for defining effect sizes in multiple baseline designs that are directly comparable to the standardized mean difference from a between-subjects randomized experiment. The target, design-comparable effect size parameter can be estimated using restricted maximum likelihood together with a small sample correction analogous to Hedges's g. The approach is demonstrated using hierarchical linear models that include baseline time trends and treatment-by-time interactions. A simulation compares the performance of the proposed estimator to that of an alternative, and an application illustrates the model-fitting process.
Journal Article
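For context on the "small sample correction analogous to Hedges's g" mentioned in the abstract above, the standard bias correction for a between-groups standardized mean difference can be sketched as follows. This is the generic textbook formula, not the authors' design-comparable estimator for multiple baseline designs.

```python
def hedges_g(mean_t, mean_c, sd_pooled, n_t, n_c):
    """Standardized mean difference with Hedges's small-sample bias correction."""
    d = (mean_t - mean_c) / sd_pooled    # Cohen's d
    df = n_t + n_c - 2                   # degrees of freedom for pooled SD
    j = 1 - 3 / (4 * df - 1)             # approximate bias-correction factor
    return j * d
```

With 10 subjects per group, the correction factor is 1 − 3/71 ≈ 0.958, shrinking d slightly toward zero; the correction matters most when samples are small.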
High replicability of newly discovered social-behavioural findings is achievable
by MacInnis, Bo; Krosnick, Jon; Nosek, Brian A.
in Humans; Media and Communications; Media, Communication and Information Sciences
2024
Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.
Journal Article
Psychosocial interventions for cancer survivors: A meta-analysis of effects on positive affect
by Berendsen, Mark; Moskowitz, Judith T.; Pustejovsky, James E.
in Affect (Psychology); Cancer; Clinical trials
2019
Purpose: Positive affect has demonstrated unique benefits in the context of health-related stress and is emerging as an important target for psychosocial interventions. The primary objective of this meta-analysis was to determine whether psychosocial interventions increase positive affect in cancer survivors. Methods: We coded 28 randomized controlled trials of psychosocial interventions assessing 2082 cancer survivors from six electronic databases. We calculated 76 effect sizes for positive affect and conducted synthesis using random effects models with robust variance estimation. Tests for moderation included demographic, clinical, and intervention characteristics. Results: Interventions had a modest effect on positive affect (g = 0.35, 95% CI [0.16, 0.54]) with substantial heterogeneity of effects across studies (τ̂ = 0.40; I² = 78%). Three significant moderators were identified: in-person interventions outperformed remote interventions (P = .046), effects were larger when evaluated against standard of care or wait list control conditions versus attentional, educational, or component controls (P = .009), and trials with survivors of early-stage cancer diagnoses yielded larger effects than those with advanced-stage diagnoses (P = .046). We did not detect differential benefits of psychosocial interventions across samples varying in sex, age, on-treatment versus off-treatment status, or cancer type. Although no conclusive evidence suggested outcome reporting biases (P = .370), effects were smaller in studies with lower risk of bias. Conclusions: In-person interventions with survivors of early-stage cancers hold promise for enhancing positive affect, but more methodological rigor is needed. Implications for Cancer Survivors: Positive affect strategies can be an explicit target in evidence-based medicine and have a role in patient-centered survivorship care, providing tools to uniquely mobilize human strengths.
Journal Article
Between‐case standardized mean difference effect sizes for single‐case designs: a primer and tutorial using the scdhlm web application
by Valentine, Jeffrey C.; Pustejovsky, James E.; Tanner‐Smith, Emily E.
in Autism; Computation; Councils
2016
Executive summary: Single‐case research designs are critically important for understanding the effectiveness of interventions that target individuals with low incidence disabilities (e.g., physical disabilities, autism spectrum disorders). These designs comprise an important part of the evidence base in fields such as special education and school psychology, and can provide credible and persuasive evidence for guiding practice and policy decisions. In this paper we discuss the development and use of between‐case standardized mean difference effect sizes for two popular single‐case research designs (the treatment reversal design and the multiple baseline design), and discuss how they might be used in meta‐analyses either with other single‐case research designs or in conjunction with between‐group research designs. Effect size computation is carried out using a user‐friendly web application, scdhlm, powered by the free statistical program R; no knowledge of R programming is needed to use this web application.
Journal Article