79,738 result(s) for "Randomised"
Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane Reviews
This study developed, calibrated, and evaluated a machine learning classifier designed to reduce study identification workload in Cochrane for producing systematic reviews. A machine learning classifier for retrieving randomized controlled trials (RCTs) was developed (the “Cochrane RCT Classifier”), with the algorithm trained using a data set of title–abstract records from Embase, manually labeled by the Cochrane Crowd. The classifier was then calibrated using a further data set of similar records manually labeled by the Clinical Hedges team, aiming for 99% recall. Finally, the recall of the calibrated classifier was evaluated using records of RCTs included in Cochrane Reviews that had abstracts of sufficient length to allow machine classification. The Cochrane RCT Classifier was trained using 280,620 records (20,454 of which reported RCTs). A classification threshold was set using 49,025 calibration records (1,587 of which reported RCTs), and our bootstrap validation found the classifier had recall of 0.99 (95% confidence interval 0.98–0.99) and precision of 0.08 (95% confidence interval 0.06–0.12) in this data set. The final, calibrated RCT classifier correctly retrieved 43,783 (99.5%) of 44,007 RCTs included in Cochrane Reviews but missed 224 (0.5%). Older records were more likely to be missed than those more recently published. The Cochrane RCT Classifier can reduce manual study identification workload for Cochrane Reviews, with a very low and acceptable risk of missing eligible RCTs. This classifier now forms part of the Evidence Pipeline, an integrated workflow deployed within Cochrane to help improve the efficiency of the study identification processes that support systematic review production. 
• Systematic review processes need to become more efficient.
• Machine learning is sufficiently mature for real-world use.
• A machine learning classifier was built using data from Cochrane Crowd.
• It was calibrated to achieve very high recall.
• It is now live and in use in Cochrane review production systems.
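The calibration step described above, choosing a classification threshold on a labeled set so that the classifier reaches a target recall (here 99%), can be sketched as follows. This is a minimal illustration on synthetic scores, not Cochrane's actual pipeline; the helper name and score distributions are assumptions.

```python
import numpy as np

def calibrate_threshold(scores, labels, target_recall=0.99):
    """Pick the highest threshold whose recall on the calibration
    set still meets the target (hypothetical helper)."""
    pos_scores = np.sort(scores[labels == 1])[::-1]  # descending
    # Keeping the k highest-scoring positives above the threshold
    # guarantees recall of at least k / n_positives.
    k = int(np.ceil(target_recall * len(pos_scores)))
    return pos_scores[k - 1]  # classify score >= threshold as RCT

rng = np.random.default_rng(0)
# Synthetic calibration set mirroring the sizes in the abstract:
# 1,587 RCT records (higher scores on average) among 49,025 total.
labels = np.r_[np.ones(1587), np.zeros(47438)]
scores = np.r_[rng.normal(2.0, 1.0, 1587), rng.normal(0.0, 1.0, 47438)]

t = calibrate_threshold(scores, labels, 0.99)
pred = scores >= t
recall = (pred & (labels == 1)).sum() / 1587
precision = (pred & (labels == 1)).sum() / pred.sum()
```

Tuning for very high recall necessarily sacrifices precision when the score distributions overlap, which is consistent with the low precision (0.08) reported above: the classifier is a safe first-pass filter, not a final arbiter.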
Appropriate statistical methods for analysing partially nested randomised controlled trials with continuous outcomes: a simulation study
Background In individually randomised trials we might expect interventions delivered in groups or by care providers to result in clustering of outcomes for participants treated in the same group or by the same care provider. In partially nested randomised controlled trials (pnRCTs) this clustering occurs in only one trial arm, commonly the intervention arm. It is important to measure and account for between-cluster variability in trial design and analysis. We compare analysis approaches for pnRCTs with continuous outcomes, investigating the impact on statistical inference of cluster sizes, coding of the non-clustered arm, intracluster correlation coefficients (ICCs), and differential variance between the intervention and control arms, and provide recommendations for analysis. Methods We performed a simulation study assessing the performance of six analysis approaches for a two-arm pnRCT with a continuous outcome: a linear regression model; a fully clustered mixed-effects model with singleton clusters in the control arm; a fully clustered mixed-effects model with one large cluster in the control arm; a fully clustered mixed-effects model with pseudo clusters in the control arm; a partially nested homoscedastic mixed-effects model; and a partially nested heteroscedastic mixed-effects model. We varied the cluster size, number of clusters, ICC, and individual variance between the two trial arms. Results All models provided unbiased intervention effect estimates. In the partially nested mixed-effects models, the method for classifying the non-clustered control arm had negligible impact. Failure to account for even small ICCs resulted in inflated Type I error rates and over-coverage of confidence intervals. Fully clustered mixed-effects models provided poor control of Type I error rates and biased ICC estimates.
The heteroscedastic partially nested mixed-effects model maintained relatively good control of Type I error rates and unbiased ICC estimation, and did not noticeably reduce power even with homoscedastic individual variances across arms. Conclusions In general, we recommend the use of a heteroscedastic partially nested mixed-effects model, which models the clustering in only one arm, for continuous outcomes similar to those generated under the scenarios of our simulation study. However, with few clusters (3–6), small cluster sizes (5–10), and a small ICC (≤0.05) this model underestimates Type I error rates and there is no optimal model.
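The central warning above, that ignoring even a small ICC inflates Type I error, can be illustrated with a small Monte Carlo sketch. This is greatly simplified relative to the paper's six-model comparison (a naive t-test stands in for the linear regression model, and all parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
icc = 0.1                                   # ICC in the clustered arm
s_u, s_e = np.sqrt(icc), np.sqrt(1 - icc)   # total variance = 1
n_sims, rejections = 2000, 0

for _ in range(n_sims):
    # Intervention arm: 10 clusters of 10, sharing a cluster effect.
    u = rng.normal(0, s_u, 10).repeat(10)
    y_int = u + rng.normal(0, s_e, 100)
    # Control arm: 100 independent individuals; no true effect (null).
    y_ctl = rng.normal(0, 1, 100)
    # Naive analysis that ignores the clustering entirely.
    _, p = stats.ttest_ind(y_int, y_ctl)
    rejections += p < 0.05

type1 = rejections / n_sims  # well above the nominal 0.05
```

Because the cluster effect makes observations within the intervention arm correlated, the t-test's variance estimate is too small, and the empirical rejection rate under the null lands around twice the nominal 5% level in this configuration.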
A meta-epidemiological analysis of post-hoc comparisons and primary endpoint interpretability among randomized noncomparative trials in clinical medicine
Randomized noncomparative trials (RNCTs) promise reduced accrual requirements vs randomized controlled comparative trials because RNCTs do not enroll a control group and instead compare outcomes to historical controls or prespecified estimates. We hypothesized that RNCTs often suffer from two methodological concerns: (1) lack of interpretability due to group-specific inferences in nonrandomly selected samples and (2) misinterpretation due to unlicensed between-group comparisons lacking prespecification. The purpose of this study was to characterize RNCTs and the incidence of these two methodological concerns. We queried PubMed and Web of Science on September 14, 2023, to conduct a meta-epidemiological analysis of published RNCTs in any field of medicine. Trial characteristics and the incidence of methodological concerns were manually recorded. We identified 70 RNCTs published from 2002 to 2023. RNCTs have been increasingly published over time (slope = 0.28, 95% CI 0.17–0.39, P < .001). Sixty trials (60/70, 86%) had a lack of interpretability for the primary endpoint due to group-specific inferences. Unlicensed between-group comparisons were present in 36 trials (36/70, 51%), including in the primary conclusion of 31 trials (31/70, 44%), and were accompanied by significance testing in 20 trials (20/70, 29%). Only five (5/70, 7%) trials were found to have neither of these flaws. Although RNCTs are increasingly published over time, the primary analysis of nearly all published RNCTs in the medical literature was unsupported by their fundamental underlying methodological assumptions. RNCTs promise group-specific inference, which they are unable to deliver, and undermine the primary advantage of randomization, which is comparative inference. The ongoing use of the RNCT design in lieu of a traditional randomized controlled comparative trial should therefore be reconsidered.
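The prevalence figures above are reported as raw proportions (e.g. 36/70 trials with unlicensed between-group comparisons). As a side note, uncertainty in such proportions can be summarized with a Wilson score interval; the sketch below is a generic illustration and is not part of the paper's own analysis:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. 36 of 70 trials with unlicensed between-group comparisons
lo, hi = wilson_ci(36, 70)  # roughly 40% to 63%
```

The Wilson interval behaves better than the naive Wald interval for small samples and proportions near 0 or 1, which is why it is a common default for meta-epidemiological counts of this size.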
Individual participant data informed risk of bias assessments for randomized controlled trials in systematic reviews and meta-analyses
In evidence synthesis, assessing risk of bias (ROB) of eligible studies is crucial to inform interpretation of findings. Standardized tools like Cochrane's ROB-1 or ROB-2 traditionally rely on published information to inform assessments, but this is often incomplete or unclear. Availability of raw individual participant data (IPD) enables more in-depth assessments; however, guidance on how to use IPD in ROB assessments is lacking. We aim to develop preliminary guidance on how to use IPD to inform ROB assessments of randomized controlled trials (RCTs) for three case studies. In stage 1, we reviewed relevant literature, consulted our networks, and drew on previous experience to compile items on how IPD may inform ROB assessment for each domain. We discussed feasibility and potential usefulness of each item with an international, interdisciplinary expert advisory group and developed preliminary guidance, which was piloted in two IPD meta-analyses (MAs) (65 RCTs) using ROB-1. In stage 2, the guide was adapted for ROB-2 and applied to another IPD-MA (34 RCTs). All assessments were conducted in duplicate by two independent reviewers. In stage 3, we conducted an evaluation workshop to further refine each item, and capture important lessons. To assess the impact of IPD-informed assessments, we compared them to existing ROB-1 assessments performed with published information alone for 33 trials. We identified 12 items across the ROB domains. IPD provided opportunities to enhance ROB assessments by enabling additional checks for selection bias (ie, testing randomization) and attrition bias (ie, more granular assessment of incomplete data at various time points). We also identified domains for which availability of IPD enabled reduction of ROB, for instance, by mitigating selective outcome reporting bias or by reincluding excluded participants in intention-to-treat analyses. 
Applying IPD-informed assessments led to changes in ROB judgment in 25 of 33 studies, most commonly the resolution of domains previously marked as “unclear”. Our preliminary guidance for IPD-informed ROB assessments may be applied in IPD-MAs to increase the accuracy of ROB assessments and in some cases reduce ROB, creating a more reliable evidence base to inform policy and practice. When making decisions about how to treat a patient in clinical practice, it is important to consider the results of all relevant studies. Usually, combined analyses of multiple clinical trials rely on published reports, in which researchers summarize their findings. However, looking at the original data from these studies, instead of just the published reports, can improve the quality of analyses. Access to these underlying data also allows for more thorough assessment of the studies' quality and any potential for bias. This is important for understanding the results properly and for making the most appropriate treatment decisions for patients. Here, we present guidance on how to assess risk of bias of trials using these original datasets.
Key findings
• Using individual participant data (IPD) to inform risk of bias (ROB) assessments may reduce uncertainty, and in some cases reduce ROB.
What this adds to what is known?
• Standard methods of assessing ROB in a systematic review and meta-analysis (MA) rely on published information alone. IPD allow additional checks to be performed across several domains during ROB assessment.
What is the implication, what should change now?
• Our preliminary guidance for IPD-informed ROB assessments may be applied in IPD-MAs to create a more reliable evidence base informing policy and practice.
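One of the IPD-enabled checks mentioned above, "testing randomization" for selection bias, can be sketched as a baseline-balance check on the raw data. This is a simplified illustration only (the guidance's 12 items are richer, and imbalance in a single baseline variable is expected at the nominal rate even under genuine randomization); the data below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical IPD: arm assignment and one binary baseline covariate
# (e.g. sex) for 400 participants in a properly randomized trial.
arm = rng.integers(0, 2, 400)
sex = rng.integers(0, 2, 400)  # independent of arm under randomization

# Cross-tabulate arm against the baseline covariate.
table = np.array([[np.sum((arm == a) & (sex == s)) for s in (0, 1)]
                  for a in (0, 1)])
chi2, p, dof, _ = stats.chi2_contingency(table)
# A pattern of extreme imbalance across many baseline variables
# (not one small p-value in isolation) would flag potential
# problems with the randomization process.
```

With the raw data in hand, a reviewer can run such checks across all baseline variables and time points rather than relying on whatever baseline table the published report chose to present.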
Randomization procedures in parallel-arm cluster randomized trials in low- and middle-income countries: a review of 300 trials published between 2017 and 2022
Cluster randomized trials (CRTs) are frequently used to evaluate interventions in low- and middle-income countries (LMICs). Robust execution and transparent reporting of randomization procedures are essential for successful implementation and accurate interpretation of CRTs. Our objectives were to review the quality of reporting and implementation of randomization procedures in a sample of parallel-arm CRTs conducted in LMICs. We selected a random sample of 300 primary reports of parallel-arm CRTs from a database of 800 CRTs conducted in LMICs between 2017 and 2022. Data were extracted by two reviewers per trial and summarized using descriptive statistics. Among 300 trials, 192 (64%) reported the method of sequence generation, 213 (71%) reported the type of randomization procedure used, 146 (49%) reported who generated the sequence, 136 (45%) reported whether randomization was implemented by an independent person, and 75 (25%) reported a method of allocation concealment. Among those reporting the methods used, suboptimal randomization procedures were common: 28% did not use a computer, 21% did not use restricted randomization, 58% did not use a statistician to generate the sequence, in 53% the person was not independent from the trial, and 80% did not use central randomization. Public randomization ceremonies were used in 10% of trials as an alternative method of allocation concealment and to reassure participants of fair allocation procedures. The conduct and reporting of randomization procedures of CRTs in LMICs is suboptimal. Dissemination of guidance to promote robust implementation of randomization in LMICs is required, and future research on the implementation of public randomization ceremonies is warranted. Cluster randomized trials (CRTs) are trials where entire groups, rather than individuals, are randomly assigned to different treatments (eg, intervention or usual care). 
This randomization process can be challenging in CRTs; clear reporting and proper execution are important to ensure fairness and accurate results. In this study, we reviewed how well randomization procedures were reported and carried out in 300 CRTs, selected from a larger database of 800 CRTs, conducted in low- and middle-income countries (LMICs) and published between 2017 and 2022. We found that reporting on key aspects of randomization was often incomplete: 64% reported how they created the random allocation sequence, 71% reported the type of randomization method used, 49% reported who generated the sequence, 45% reported whether a person independent from the trial handled the randomization, and 25% reported how they kept group assignments hidden until the intervention was ready to begin. Even when trials did report these methods, many did not follow best practices: 28% did not use a computer, 21% did not apply techniques to ensure balanced treatment arms, 58% did not involve a statistician to generate the sequence, 53% had someone involved in the trial handle randomization (as opposed to an independent person), and 80% did not use central randomization to assign groups, where a third party reveals treatment assignment to groups. Interestingly, 10% of trials used public randomization ceremonies (events where group assignments are revealed in a public setting) to keep group assignments hidden until they were revealed and to reassure participants that the process was fair. Overall, we found that randomization procedures in CRTs were often not well reported or carried out optimally. It is important for researchers to follow established guidelines to ensure randomization is done properly in CRTs in LMICs. More research is also needed to understand how public randomization ceremonies are used in practice.
• Robust randomization methods are essential for cluster randomized trials (CRTs).
• Improved adherence to reporting and best practices for randomization in CRTs is needed.
• Public randomization ceremonies may help with implementation challenges.
• Further research on the conduct of public randomization ceremonies is warranted.
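Two of the best practices discussed above (computer-generated sequences and restricted randomization) can be sketched with a permuted-block allocation generator. This is a minimal illustration, not a production randomization system; in practice the seed and sequence would be held by an independent statistician to preserve allocation concealment:

```python
import random

def permuted_blocks(n_clusters, block_size=4, seed=2024):
    """Computer-generated restricted randomization: shuffle balanced
    blocks so the two arms stay balanced throughout recruitment."""
    assert block_size % 2 == 0, "blocks must split evenly between arms"
    rng = random.Random(seed)  # seed held by an independent party
    seq = []
    while len(seq) < n_clusters:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)  # random order within each balanced block
        seq.extend(block)
    return seq[:n_clusters]

# Allocate 20 clusters; blocks of 4 guarantee an exact 10/10 split.
alloc = permuted_blocks(20)
```

Central randomization then means the trial team requests each cluster's assignment from the holder of this sequence one at a time, so nobody recruiting clusters can foresee upcoming allocations.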
Heterogeneity in pragmatic randomised trials: sources and management
Background Pragmatic trials aim to generate evidence to directly inform patient, caregiver and health-system manager policies and decisions. Heterogeneity in patient characteristics contributes to heterogeneity in their response to the intervention. However, there are many other sources of heterogeneity in outcomes. Based on the expertise and judgements of the authors, we identify different sources of clinical and methodological heterogeneity, which translate into heterogeneity in patient responses—some we consider as desirable and some as undesirable. For each of them, we discuss and, using real-world trial examples, illustrate how heterogeneity should be managed over the whole course of the trial. Main text Heterogeneity in centres and patients should be welcomed rather than limited. Interventions can be flexible or tailored and control interventions are expected to reflect usual care, avoiding use of a placebo. Co-interventions should be allowed; adherence should not be enforced. All these elements introduce heterogeneity in interventions (experimental or control), which has to be welcomed because it mimics reality. Outcomes should be objective and possibly routinely collected; standardised assessment, blinding and adjudication should be avoided as much as possible because this is not how assessment would be done outside a trial setting. The statistical analysis strategy must be guided by the objective to inform decision-making, thus favouring the intention-to-treat principle. Pragmatic trials should consider including process analyses to inform an understanding of the trial results. Needed data to conduct these analyses should be collected unobtrusively. Finally, ethical principles must be respected, even though this may seem to conflict with goals of pragmatism; consent procedures could be incorporated in the flow of care.
Partially randomised patient preference trials as an alternative design to randomised controlled trials: systematic review and meta-analyses
Objective: Randomised controlled trials (RCTs) are the gold standard for providing unbiased data. However, when patients have a treatment preference, randomisation may influence participation and outcomes (eg, external and internal validity). The aim of this study was to assess the influence of patients’ preference in RCTs by analysing partially randomised patient preference trials (RPPTs): an RCT and a preference cohort combined.
Design: Systematic review and meta-analyses.
Data sources: MEDLINE, Embase, PsycINFO and the Cochrane Library.
Eligibility criteria for selecting studies: RPPTs published between January 2005 and October 2018 reporting on allocation of patients to randomised and preference cohorts were included.
Data extraction and synthesis: Two independent reviewers extracted data. The main outcomes were the difference in external validity (participation and baseline characteristics) and internal validity (loss to follow-up, crossover and the primary outcome) between the randomised and the preference cohort within each RPPT, compared in a meta-regression using a Wald test. Risk of bias was not assessed, as no quality assessment for RPPTs has yet been developed.
Results: In total, 117 of 3734 identified articles met screening criteria and 44 were eligible (24 873 patients). The participation rate in RPPTs was >95% in 14 trials (range: 48%–100%) and the randomisation refusal rate was >50% in 26 trials (range: 19%–99%). Higher education, female sex, older age, race and prior experience with one treatment arm were characteristics of patients declining randomisation. Loss to follow-up and crossover rates were significantly higher in the randomised cohort than in the preference cohort.
In the meta-analysis, the reported primary outcomes were comparable between the two cohorts of the RPPTs: mean difference 0.093 (95% CI −0.178 to 0.364, p=0.502).
Conclusions: Patients’ preference led to a substantial proportion of a specific patient group refusing randomisation, while it did not influence the primary outcome within an RPPT. Therefore, RPPTs could increase external validity without compromising internal validity compared with RCTs.
PROSPERO registration number: CRD42019094438.
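The pooled mean difference quoted above comes from a meta-analysis across trials. A generic inverse-variance fixed-effect pooling step can be sketched as follows; the per-trial numbers are invented for illustration and this is a simpler model than the paper's meta-regression with a Wald test:

```python
import math

def pool_fixed_effect(mds, ses):
    """Inverse-variance fixed-effect pooling of mean differences."""
    w = [1 / se**2 for se in ses]              # precision weights
    pooled = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    se = math.sqrt(1 / sum(w))                 # SE of pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-trial mean differences (randomised cohort minus
# preference cohort) and their standard errors:
mds = [0.20, -0.10, 0.05, 0.15]
ses = [0.20, 0.15, 0.25, 0.30]

pooled, (lo, hi) = pool_fixed_effect(mds, ses)
# A confidence interval spanning zero, as in the review's result,
# indicates no detectable outcome difference between cohorts.
```

Precision weighting means trials with smaller standard errors dominate the pooled estimate, which is why the review's conclusion rests on the combined evidence rather than any single RPPT.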
The effectiveness of workplace nutrition and physical activity interventions in improving productivity, work performance and workability: a systematic review
Background Healthy lifestyles play an important role in the prevention of premature death, chronic diseases, productivity loss and other social and economic concerns. However, workplace interventions to address issues of fitness and nutrition which include work-related outcomes are complex, and thus challenging both to implement and to measure the effectiveness of appropriately. This systematic review investigated the impact of workplace nutrition and physical activity interventions, including components aimed at the workplace’s physical environment and organizational structure, on employees’ productivity, work performance and workability. Methods A systematic review that included randomized controlled trials and/or non-randomized controlled studies was conducted. Medline, EMBASE.com, the Cochrane Library and Scopus were searched until September 2016. Productivity, absenteeism, presenteeism, work performance and workability were the primary outcomes of interest, while sedentary behavior and changes in other health-related behaviors were considered secondary outcomes. Two reviewers independently screened abstracts and full texts for study eligibility, extracted the data and performed a quality assessment using the Cochrane Collaboration Risk-of-Bias Tool for randomized trials and the Risk-of-Bias tool for non-randomized studies of interventions. Findings were narratively synthesized. Results Thirty-nine randomized controlled trials and non-randomized controlled studies were included. Nearly 28% of the included studies were of high quality, while 56% were of medium quality. The studies covered a broad range of multi-level and environmental-level interventions. Fourteen workplace nutrition and physical activity intervention studies yielded statistically significant changes in absenteeism (n = 7), work performance (n = 2), workability (n = 3), productivity (n = 1) and both workability and productivity (n = 1).
Two studies showed effects on absenteeism only between subgroups. Conclusions The scientific evidence shows that it is possible to influence work-related outcomes, especially absenteeism, positively through health promotion efforts that include components aimed at the workplace’s physical work environment and organizational structure. In order to draw further conclusions regarding work-related outcomes in controlled high-quality studies, long-term follow-up using objective outcomes and/or quality-assured questionnaires is required. Trial registration: PROSPERO CRD42017081837.