76 result(s) for "Meta-epidemiology"
A tutorial on methodological studies: the what, when, how and why
Background: Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste. Main body: We provide an overview of some of the key aspects of methodological studies, such as what they are, and when, how and why they are done. We adopt a "frequently asked questions" format to facilitate reading and provide multiple examples to guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered? What are potential threats to validity, and is there a way to appraise the quality of methodological studies? Conclusion: Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
Treatment effect sizes vary in randomized trials depending on type of outcome measure
Objective: To compare estimated treatment effects of physical therapy (PT) between Patient-Reported Outcome Measures (PROMs) and outcomes measured in other ways. Study Design and Setting: We selected randomized trials of PT with both a PROM and a non-PROM included in Cochrane Systematic Reviews (CSRs). Two reviewers independently extracted data and risk-of-bias assessments. Our primary outcome was the ratio of odds ratios (ROR), used to quantify how effects vary between PROMs and non-PROMs; an ROR > 1 indicates a larger effect when assessed by PROMs. We used REML methods to estimate associations of trial characteristics with effects and between-trial heterogeneity. Results: From 90 relevant CSRs, 205 PT trials were included. The summary ROR across all comparisons was not statistically significant (ROR 0.88 [95% CI: 0.70–1.12]; P = 0.30); however, heterogeneity was substantial (I² = 88.1%). When stratifying non-PROMs further into clearly objective non-PROMs (e.g., biomarkers) and other non-PROMs (e.g., aerobic capacity), PROMs appeared more favorable than clearly objective non-PROMs (ROR 1.92 [95% CI: 0.99–3.72]; P = 0.05). Conclusion: Estimated treatment effects based on PROMs are generally comparable to treatment effects measured in other ways. However, in our study, PROMs indicated a more favorable treatment effect compared with treatment effects based on clearly objective outcomes.
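The ratio-of-odds-ratios metric that recurs throughout these abstracts can be illustrated with a short, self-contained Python sketch. This is not the authors' code: it recovers standard errors from reported 95% CIs and assumes the two estimates are independent, whereas the study accounts for within-trial correlation with REML.

```python
import math

def ratio_of_odds_ratios(or_a, ci_a, or_b, ci_b):
    """ROR = OR_a / OR_b with an approximate 95% CI.

    Standard errors are recovered from each reported 95% CI on the log
    scale (SE = CI width / (2 * 1.96)) and combined assuming the two
    estimates are independent.
    """
    log_ror = math.log(or_a) - math.log(or_b)
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * 1.96)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * 1.96)
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    return (math.exp(log_ror),
            (math.exp(log_ror - 1.96 * se), math.exp(log_ror + 1.96 * se)))

# Hypothetical numbers: an OR of 0.70 on a PROM vs 0.80 on a non-PROM
# gives ROR = 0.875 (for a beneficial OR < 1, the PROM effect is larger).
ror, ci = ratio_of_odds_ratios(0.70, (0.55, 0.89), 0.80, (0.60, 1.07))
```

Note that the direction convention differs between papers: here an ROR > 1 was defined as a larger effect when assessed by PROMs, while the older-trials study below uses ROR < 1 for larger effects in older trials.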
Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015
To assess the characteristics and core statistical methodology specific to network meta-analyses (NMAs) in clinical research articles. We searched MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews from inception until April 14, 2015, for NMAs of randomized controlled trials including at least four different interventions. Two reviewers independently screened potential studies, whereas data abstraction was performed by a single reviewer and verified by a second. A total of 456 NMAs, which included a median (interquartile range) of 21 (13–40) studies and 7 (5–9) treatment nodes, were assessed. A total of 125 NMAs (27%) were star networks; this proportion declined from 100% in 2005 to 19% in 2015 (P = 0.01 by test of trend). An increasing number of NMAs discussed transitivity or inconsistency (0% in 2005, 86% in 2015, P < 0.01) and 150 (45%) used appropriate methods to test for inconsistency (14% in 2006, 74% in 2015, P < 0.01). Heterogeneity was explored in 256 NMAs (56%), with no change over time (P = 0.10). All pairwise effects were reported in 234 NMAs (51%), with some increase over time (P = 0.02). The hierarchy of treatments was presented in 195 NMAs (43%), the probability of being best was most commonly reported (137 NMAs, 70%), but use of surface under the cumulative ranking curves increased steeply (0% in 2005, 33% in 2015, P < 0.01). Many NMAs published in the medical literature have significant limitations in both the conduct and reporting of the statistical analysis and numerical results. The situation has, however, improved in recent years, in particular with respect to the evaluation of the underlying assumptions, but considerable room for further improvements remains.
Meta-analyses frequently pooled different study types together: a meta-epidemiological study
To evaluate the characteristics of therapeutic meta-analyses including both observational studies and randomized controlled trials (RCTs), how these studies were combined, and whether there were differences in treatment effects. Meta-epidemiological study of meta-analyses including both observational studies and RCTs. We searched MEDLINE for the five leading journals of each medical category (according to Journal Citation Reports) and the Cochrane Database of Systematic Reviews, from 2014 to 2018, for eligible meta-analyses, and extracted how observational studies and RCTs were combined along with the results of each study. Of the 102 included meta-analyses, observational studies and RCTs were combined without a subgroup analysis in 39 (38%) and with a subgroup analysis in 15 (15%); they were pooled separately for the same outcome in 11 (11%) and not for the same outcome in 9 (9%). In 28 (27%) meta-analyses, only RCTs were combined, with a qualitative description of observational studies. Treatment effect estimates did not differ between observational studies and RCTs (ratio of estimates = 0.98 [95% confidence interval 0.80–1.21]), with substantial heterogeneity (I² = 59%). Many meta-analyses including both observational studies and RCTs pool results from both study types. Although treatment effects did not differ between them on average, we identified situations in which estimates differed.
• Many meta-analyses combined observational studies with RCTs in meta-analysis. Overall, there was no difference in effects between RCTs and observational studies.
• Heterogeneity across topics was substantial.
• There were a few situations with significant differences between the two study types.
Evidence certainty in neonatology—a meta-epidemiological analysis of Cochrane reviews
We hypothesized that the certainty of the available evidence is relatively low in neonatology. We therefore designed a meta-epidemiological review to examine the certainty of evidence in the latest Cochrane neonatal reviews and to investigate whether the number of trials and enrolled patients is associated with the certainty of evidence. We searched Cochrane neonatal reviews published between January 2022 and May 2024. We included all reviews on interventions concerning neonates that had at least one meta-analysis performed with GRADE-rated evidence certainty. From those reviews, we extracted the reported certainty of evidence and analyzed its association with the number of trials and participants by ANOVA. We screened 55 Cochrane reviews and included 49 of them. In these 49 reviews, there were 443 reported outcomes with graded certainty of evidence. The certainty was reported to be high in 8 (1.8%), moderate in 89 (20.2%), low in 195 (44.0%), and very low in 151 (34.0%) of the outcomes. Reviews reporting outcomes with higher certainty of evidence had significantly more trials and patients (approximately 3 and 1.5 times more, respectively) than those with only low certainty of evidence. Conclusion: In the past 2 years, Cochrane neonatal reviews have generally had low or very low certainty of evidence for most outcomes. Only 2% of the reviewed outcomes had high certainty. The number of included patients and trials significantly affected the certainty. These findings highlight the continuing need for better-quality and larger trials.
What Is Known:
• Neonatology is among the largest specialties, and the certainty of evidence for its interventions has varied.
• Neonatal studies need to account for the uniqueness of the patients and the acute situations in their study designs.
What Is New:
• The 49 included reviews comprised 443 outcomes, of which only 1.8% were classified as high certainty of evidence.
• Higher evidence certainty was associated with a higher number of included trials and participants.
Meta-analyses frequently include old trials that are associated with a larger intervention effect: a meta-epidemiological study
• Trials published before 2000 represented one fourth of all trials included in meta-analyses, and trials published before 1990 almost 10%. The oldest trial was published in 1951.
• Intervention effects were, on average, significantly larger for older than for recent trials.
• Results were consistent in sensitivity analyses adjusted for risk of bias and sample size.
• It is generally recommended to include all trials within a meta-analysis whatever their publication date, but this may influence external validity and the intervention effect.
• With a biomedical literature that is increasing exponentially, we wonder whether it is reasonable to consider the results of old trials, sometimes conducted more than 50 years ago.
To assess whether meta-analyses include older randomized controlled trials (RCTs) and whether intervention effects differ between older and recent RCTs. In this meta-epidemiological study of 295 meta-analyses (2,940 RCTs) published in 2017–2018, we evaluated the difference in intervention effects between older (i.e., published before 2000) and recent RCTs. We also compared effects by quarters of publication year within each meta-analysis (from quarter 1, including the 25% oldest trials, to quarter 4, including the 25% most recent trials). A ratio of odds ratios (ROR) < 1 indicates larger effects in older than in recent RCTs. Trials published before 2000 and before 1990 represented 25% and 10% of all trials, respectively. Intervention effects were significantly larger for old than for recent RCTs (ROR = 0.92, 95% confidence interval [CI] 0.85–1.00, I² = 22%). Compared with the most recent trials (quarter 4), intervention effects were significantly larger for the oldest trials (quarter 1) (ROR = 0.85, 95% CI 0.79–0.92) and for trials in quarter 2 (ROR = 0.89, 95% CI 0.83–0.96), but not for trials in quarter 3 (ROR = 0.98, 95% CI 0.91–1.05). Intervention effects were larger for older than for recent RCTs. Meta-analyses including older trials should be interpreted cautiously.
Per-Protocol analyses produced larger treatment effect sizes than intention to treat: a meta-epidemiological study
To undertake a meta-analysis comparing treatment effects estimated by the intention-to-treat (ITT) method and the per-protocol (PP) method in randomized controlled trials (RCTs). PP analysis excludes trial participants who are non-adherent to the trial protocol in terms of eligibility, interventions, or outcome assessment. Five high-impact journals were searched for all RCTs published between July 2017 and June 2019. The primary outcome was a pooled estimate quantifying the difference between the treatment effects estimated by the two methods. Results are presented as a ratio of odds ratios (ROR). Meta-regression was used to explore the association between the level of trial protocol non-adherence and treatment effect. Sensitivity analyses compared results with varying within-study correlations and across various study characteristics. Random-effects meta-analysis (N = 156) showed that PP estimates were on average 2% greater than the ITT estimates (ROR: 1.02, 95% CI: 1.00–1.04, P = 0.03). The divergence further increased with a higher degree of protocol non-adherence. Sensitivity analyses confirmed consistent results with various within-study correlations and across various study characteristics. There was evidence of a larger treatment effect with PP than with ITT analysis. PP analysis should not be used to assess the impact of protocol non-adherence in RCTs. Instead, in addition to ITT, investigators should consider randomization-based causal methods such as the Complier Average Causal Effect (CACE).
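The pooled RORs reported in this and the preceding abstracts typically come from inverse-variance random-effects meta-analysis on the log scale. Below is a minimal DerSimonian–Laird sketch as a generic illustration of that standard method; it is not the code used in any of these studies, and the example inputs are hypothetical.

```python
import math

def pool_log_ratios(log_rors, variances):
    """DerSimonian-Laird random-effects pooling of per-trial log-RORs.

    Returns the pooled ROR, its 95% CI, and the I^2 heterogeneity
    statistic (percent of variability beyond chance).
    """
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))
    df = len(log_rors) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                    # between-trial variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rors)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)),
            i2)

# Three hypothetical trials with homogeneous log-RORs around zero:
ror, ci, i2 = pool_log_ratios([0.0, 0.1, -0.1], [0.01, 0.01, 0.01])
```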
Immortal time bias tends to be more pronounced in methodological studies than in empirical studies: a meta-epidemiological study
Immortal time bias (ITB) is a critical challenge in observational studies estimating treatment effects, often addressed using the Mantel–Byar (MB) and landmark (LM) methods. However, the impact of ITB appears to differ between methodological and empirical studies. This study investigates whether and how the impact of ITB differs by study type. We systematically searched PubMed from January 1, 2010, to May 31, 2023, to identify empirical and methodological studies explicitly using LM or MB to address ITB. Eligible studies reported hazard ratios comparing: (i) unadjusted vs MB/LM-adjusted or (ii) MB- vs LM-adjusted. We first examined estimate discrepancies across ITB-handling strategies within empirical or methodological studies, and then evaluated concordance across study types. We included 67 studies (46 empirical, 21 methodological). For unadjusted vs adjusted comparisons (58 empirical, 42 methodological), methodological studies exhibited higher rates of conclusion discordance (64.3% vs 32.8%, P = .004) and of opposite effect directions (40.5% vs 15.5%, P = .010). For MB vs LM comparisons (20 empirical, 12 methodological), more frequent conclusion discordance was observed in methodological studies (41.7% vs 0%, P = .004), and other discrepancy metrics showed no significant differences between study types. Our findings suggest that ITB tends to have a more pronounced impact in methodological studies, indicating that its influence may vary across study settings. Methodological studies should clarify the critical ITB settings and the corresponding handling approaches. For empirical studies suspected of ITB, rigorous handling strategies can enhance the robustness of treatment effect estimates.
• ITB impact is stronger in methodological than in empirical studies.
• First systematic comparison across methodological and empirical studies.
• These inconsistencies highlight the need to specify and report ITB settings.
• Methodological studies should clarify ITB settings and justify chosen strategies.
• Empirical studies should adjust for ITB and align methods to target estimands.
High certainty evidence is stable and trustworthy, whereas evidence of moderate or lower certainty may be equally prone to being unstable
To assess to what extent the overall quality of evidence indicates changes to observed intervention effect estimates when new data become available. We conducted a meta-epidemiological study. We obtained evidence from meta-analyses of randomized trials in Cochrane reviews addressing the same health-care question that were updated with the inclusion of additional data between January 2016 and May 2021. We extracted the reported effect estimates with 95% confidence intervals (CIs) from meta-analyses and the corresponding GRADE (Grading of Recommendations Assessment, Development, and Evaluation) assessments of any intervention comparison for the primary outcome in the first and the last updated review version. We considered the reported overall quality (certainty) of evidence (CoE) and specific evidence limitations (no, serious, or very serious for risk of bias, imprecision, inconsistency, and/or indirectness). We assessed the change in pooled effect estimates between the original and updated evidence using the ratio of odds ratios (ROR), absolute ratio of odds ratios (aROR), ratio of standard errors (RoSE), direction of effects, and level of statistical significance. High CoE without limitations characterized 19.3% (n = 29) of the 150 included original Cochrane reviews. The update with additional data did not systematically change the effect estimates (mean ROR 1.00; 95% CI 0.99–1.02), which deviated 1.06-fold from the older estimates (median aROR; interquartile range [IQR]: 1.01–1.15), gained precision (median RoSE 0.87; IQR 0.76–1.00), and maintained the same direction with the same level of statistical significance in 93% (27 of 29) of cases. Lower CoE with limitations characterized the remaining 121 original reviews, graded as moderate CoE in 30.0% (45 of 150), low CoE in 32.0% (48 of 150), and very low CoE in 18.7% (28 of 150) of reviews. Their updates showed larger absolute deviations (median aROR 1.12–1.33) and larger gains in precision (median RoSE 0.78–0.86), without clear and consistent differences between these categories of CoE. Changes in effect direction or statistical significance were also more common in the lower-quality evidence, again to a similar extent across categories (no change in 75.6%, 64.6%, and 75.0% for moderate, low, and very low CoE, respectively). As limitations increased, effect estimates deviated more (aROR 1.05 with zero, 1.11 with one, 1.25 with two, and 1.24 with three limitations), and changes in direction or significance became more frequent (93.2% stable with no limitations, 74.5% with one, 68.2% with two, and 61.5% with three limitations). High-quality evidence without methodological deficiencies is trustworthy and stable, providing reliable intervention effect estimates when updated with new data. Evidence of moderate and lower quality may be equally prone to being unstable and cannot indicate whether available effect estimates are true, exaggerated, or underestimated.
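The stability metrics in this abstract can be sketched in a few lines of Python. The definitions below (in particular aROR as the exponential of the absolute log-ROR) are inferred from the abstract rather than taken from the paper, so treat them as assumptions, and the inputs are hypothetical.

```python
import math

def update_stability(or_old, se_old, or_new, se_new):
    """Compare an original and an updated pooled odds ratio.

    Assumed definitions (inferred, not from the paper):
      ROR  = OR_new / OR_old             systematic shift of the estimate
      aROR = exp(|ln(OR_new / OR_old)|)  magnitude of shift, always >= 1
      RoSE = SE_new / SE_old             < 1 means the update gained precision
    """
    log_ror = math.log(or_new) - math.log(or_old)
    return {
        "ROR": math.exp(log_ror),
        "aROR": math.exp(abs(log_ror)),
        "RoSE": se_new / se_old,
    }

# A hypothetical update: the OR drifts from 0.80 to 0.88 and the SE shrinks.
m = update_stability(or_old=0.80, se_old=0.20, or_new=0.88, se_new=0.15)
```

Because aROR discards the direction of the shift, updates that drift up and updates that drift down by the same factor yield the same aROR, which is why the paper summarizes deviations with medians of aROR rather than of ROR.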
Around ten percent of most recent Cochrane reviews included outcomes in their literature search strategy and were associated with potentially exaggerated results: A research-on-research study
To assess the proportion of recent Cochrane reviews that included outcomes in their literature search strategy, how often they acknowledged this limitation, and how qualitatively different the results of outcomes included versus not included in the search strategy were. We identified all Cochrane reviews of interventions published in 2020 that used a search strategy connecting outcome terms with "AND." Reviews were defined as acknowledging the limitations of searching for outcomes if they mentioned them in the discussion. We compared the characteristics of outcomes included and not included in the search strategy. Of the 523 Cochrane reviews published in 2020, 51 (9.8%) included outcomes in their search strategy. Only one review acknowledged it as a limitation. Forty-seven (92%) assessed outcomes not included in the search strategy. Outcomes included in the search strategies tended to include a larger number of studies and to show effects in favor of the intervention. Around ten percent of recent Cochrane reviews included outcomes in their search, which may have resulted in more outcomes significantly in favor of the intervention. Reviewers should be more explicit in acknowledging the potential implications of searching for outcomes.