46 results for "Laura Flight"
Adaptive designs in clinical trials: why use them, and how to run and report them
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
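To make "pre-specified rules" concrete, here is a minimal Python sketch of a toy two-arm, two-stage design with one interim look; the sample sizes, effect size and stopping boundaries are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-arm trial with one pre-specified interim analysis.
# Boundaries are illustrative (roughly O'Brien-Fleming for two
# equally spaced looks), not taken from the paper.
n_per_stage = 50                     # patients per arm per stage
effect = 0.4                         # assumed true standardised effect
z_efficacy, z_futility = 2.80, 0.0   # interim stopping boundaries

def z_stat(a, b):
    """Two-sample z statistic for the difference in means."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (b.mean() - a.mean()) / se

ctl = rng.normal(0.0, 1.0, n_per_stage)
trt = rng.normal(effect, 1.0, n_per_stage)
z1 = z_stat(ctl, trt)

if z1 > z_efficacy:
    print(f"interim z = {z1:.2f}: stop early for efficacy")
elif z1 < z_futility:
    print(f"interim z = {z1:.2f}: stop early for futility")
else:
    # Continue to the second stage and test at the final boundary.
    ctl = np.r_[ctl, rng.normal(0.0, 1.0, n_per_stage)]
    trt = np.r_[trt, rng.normal(effect, 1.0, n_per_stage)]
    z2 = z_stat(ctl, trt)
    verdict = "reject H0" if z2 > 1.98 else "do not reject H0"
    print(f"final z = {z2:.2f}: {verdict}")
```

Under a rule like this the trial can stop at half its maximum sample size, which is where the efficiency gains described above come from.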
Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme
Background: Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives: To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection: HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction: Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Main outcome measures: Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month); and retention rates (randomised participants retained and assessed with valid primary outcome data). Results: This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs, and more than 80% of the final target sample size was achieved in 79% (119/151) of the RCTs. The median recruitment rate (participants per centre per month) was 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was 89% (IQR 79–97%). Conclusions: There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections.
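As a worked illustration of the review's two headline metrics (the figures below are hypothetical, chosen only to land near the reported medians), recruitment rate divides recruits by centre-months and retention rate divides participants with valid primary outcome data by those randomised:

```python
# Hypothetical trial figures, chosen to land near the review's medians.
recruited, centres, months = 322, 14, 25

# Recruitment rate: participants recruited per centre per month.
recruitment_rate = recruited / (centres * months)        # = 0.92

# Retention rate: randomised participants with valid primary outcome data.
with_primary_outcome = 287
retention_rate = with_primary_outcome / recruited        # ~ 0.89

print(f"recruitment: {recruitment_rate:.2f} per centre per month")
print(f"retention: {retention_rate:.0%}")
```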
Online randomised trials with children: A scoping review
Paediatric trials must contend with many of the challenges that adult trials face and often bring additional obstacles. Decentralised trials, where some or all trial methods occur away from a centralised location, are a promising strategy to help meet these challenges. This scoping review aims to (a) identify what methods and tools have been used to create and conduct entirely online, decentralised trials with children and (b) determine the gaps in knowledge in this field. The review describes the methods used in these trials to identify their facilitators and the remaining gaps in knowledge. The methods were informed by guidance from the Joanna Briggs Institute and the PRISMA extension for scoping reviews. We systematically searched the MEDLINE, CENTRAL, CINAHL, and Embase databases, trial registries, pre-print servers, and the internet. We included randomised and quasi-randomised trials conducted entirely online with participants under 18, published in English. A risk of bias assessment was completed for all included studies. Twenty-one trials met our inclusion criteria. The average age of participants was 14.6 years. Social media was the most common method of online recruitment. Most trials employed an external host website to store and protect their data. Duration of trials ranged from single-session interventions up to ten weeks. Fourteen trials compensated participants. Eight trials involved children in their trial design process; none reported compensation for this. Most trials had a low risk of bias in "random sequence generation", "selective reporting", and "other". Most trials had a high risk of bias in "blinding participants and personnel", "blinding of outcome assessment", and "incomplete outcome data". "Allocation concealment" was unclear in most studies. There was a lack of transparent reporting of the recruitment, randomisation, and retention methods used in many of the trials included in this review. Patient and public involvement (PPI) was not common, and compensation of PPI partners was not reported in any study. Consent methods and protection against fraudulent entries were creative and thoroughly discussed by some trials and not addressed by others. More work, and more thorough reporting of how these trials are conducted, is needed to increase their reproducibility and quality. Ethical approval was not necessary since all data sources used are publicly available.
Comparison of statistical methods for the analysis of patient-reported outcomes (PROs), particularly the Short-Form 36 (SF-36), in randomised controlled trials (RCTs) using standardised effect size (SES): an empirical analysis
Background: The Short-Form 36 (SF-36), a widely used patient-reported outcome (PRO), is a questionnaire completed by patients that measures health outcomes in clinical trials. PRO scores can be discrete, bounded, and skewed. Various statistical methods have been suggested for analysing PRO data, but their results may not be presented on the same scale as the original score, making it difficult to interpret and compare approaches. This study aims to unify and compare the estimates from different statistical methods for analysing PROs, particularly the SF-36, in randomised controlled trials (RCTs), using the standardised effect size (SES) summary measure. Methods: SF-36 outcomes were analysed using ten statistical methods: multiple linear regression (MLR), median regression (Median), Tobit regression (Tobit), censored least absolute deviation regression (CLAD), beta-binomial regression (BB), binomial-logit-normal regression (BLN), the ordered logit model (OL), the ordered probit model (OP), fractional logistic regression (Frac), and beta regression (BR). Each SF-36 domain score at a specific follow-up in three clinical trials was analysed. The estimated treatment coefficients and SESs were generated, compared, and interpreted. Model fit was evaluated using the Akaike information criterion. Results: Estimated treatment coefficients from the untransformed scale-based methods (Tobit, Median, and CLAD) deviated from MLR, whereas the SESs from Tobit produced almost identical values. The transformed scale-based methods (OL, OP, BB, BLN, Frac, and BR) shared a similar pattern, except that OL generated higher absolute coefficients and BLN produced higher SESs than the other methods. The SESs from Tobit, BB, OP, and Frac agreed better with MLR than those from the other included methods. Conclusions: The SES is a simple way to unify and compare estimates produced on different scales by various statistical methods. As these methods did not produce identical SES values, it is crucial to understand and carefully select appropriate statistical methods, especially for analysing PROs like the SF-36, to avoid drawing incorrect estimates and conclusions from clinical trial data. Future research will focus on simulation analyses comparing the estimation accuracy and robustness of these methods.
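A minimal sketch of the standardisation idea, assuming Python with statsmodels and simulated data (the paper's trials and full set of ten models are not reproduced here): the MLR coefficient is divided by the outcome SD directly, while the fractional-logit estimate is first converted to an average marginal effect on the 0–100 scale so the two SESs are comparable.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated bounded, skewed SF-36-style domain score (0-100).
n = 400
treat = rng.integers(0, 2, n)
score = np.clip(rng.beta(5, 2, n) + 0.05 * treat, 0, 1) * 100
df = pd.DataFrame({"score": score, "treat": treat})
sd = df["score"].std(ddof=1)

# Multiple linear regression: SES = treatment coefficient / outcome SD.
mlr = smf.ols("score ~ treat", df).fit()
ses_mlr = mlr.params["treat"] / sd

# Fractional logistic regression on the 0-1 scale (GLM, binomial
# family); convert to an average marginal effect before standardising.
frac = sm.GLM(df["score"] / 100, sm.add_constant(df["treat"]),
              family=sm.families.Binomial()).fit()
p1 = frac.predict([[1.0, 1.0]])[0]   # predicted mean when treat = 1
p0 = frac.predict([[1.0, 0.0]])[0]   # predicted mean when treat = 0
ses_frac = (p1 - p0) * 100 / sd

print(f"SES (MLR):              {ses_mlr:.3f}")
print(f"SES (fractional logit): {ses_frac:.3f}")
```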
Appropriate statistical methods for analysing partially nested randomised controlled trials with continuous outcomes: a simulation study
Background: In individually randomised trials we might expect interventions delivered in groups or by care providers to result in clustering of outcomes for participants treated in the same group or by the same care provider. In partially nested randomised controlled trials (pnRCTs) this clustering occurs in only one trial arm, commonly the intervention arm. It is important to measure and account for between-cluster variability in trial design and analysis. We compare analysis approaches for pnRCTs with continuous outcomes, investigating the impact on statistical inference of cluster sizes, coding of the non-clustered arm, intracluster correlation coefficients (ICCs), and differential variance between the intervention and control arms, and provide recommendations for analysis. Methods: We performed a simulation study assessing the performance of six analysis approaches for a two-arm pnRCT with a continuous outcome: a linear regression model; a fully clustered mixed-effects model with singleton clusters in the control arm; a fully clustered mixed-effects model with one large cluster in the control arm; a fully clustered mixed-effects model with pseudo-clusters in the control arm; a partially nested homoscedastic mixed-effects model; and a partially nested heteroscedastic mixed-effects model. We varied the cluster size, number of clusters, ICC, and individual variance between the two trial arms. Results: All models provided unbiased intervention effect estimates. In the partially nested mixed-effects models, the method of classifying the non-clustered control arm had negligible impact. Failure to account for even small ICCs resulted in inflated Type I error rates and over-coverage of confidence intervals. Fully clustered mixed-effects models provided poor control of Type I error rates and biased ICC estimates. The heteroscedastic partially nested mixed-effects model maintained relatively good control of Type I error rates and unbiased ICC estimation, and did not noticeably reduce power even with homoscedastic individual variances across arms. Conclusions: In general, we recommend the heteroscedastic partially nested mixed-effects model, which models the clustering in only one arm, for continuous outcomes similar to those generated under the scenarios of our simulation study. However, with few clusters (3–6), small cluster sizes (5–10), and small ICCs (≤0.05) this model underestimates Type I error rates and no model is optimal.
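For reference, one standard formulation of the partially nested heteroscedastic model the authors recommend (the notation here is an assumption, not copied from the paper) places a cluster random effect in the intervention arm only and allows arm-specific residual variances:

```latex
% Partially nested heteroscedastic mixed-effects model (assumed notation)
\begin{aligned}
\text{Intervention arm:}\quad y_{ij} &= \beta_0 + \beta_1 + u_j + e_{ij},
  & u_j \sim N(0,\sigma_u^2),\ e_{ij} \sim N(0,\sigma_I^2),\\
\text{Control arm:}\quad y_{i} &= \beta_0 + \varepsilon_i,
  & \varepsilon_i \sim N(0,\sigma_C^2).
\end{aligned}
```

The intervention-arm ICC is then \sigma_u^2 / (\sigma_u^2 + \sigma_I^2); the homoscedastic variant constrains \sigma_I^2 = \sigma_C^2.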
Value-adaptive clinical trial designs for efficient delivery of publicly funded trials - a discussion of methods, case studies, opportunities and challenges
Background: Value-adaptive designs for clinical trials are a novel, emerging set of methods for delivering greater value from clinical research, and there is increasing interest in using them within publicly funded health systems. A value-adaptive design permits in-progress changes to a trial according to criteria that reflect its overall value to the healthcare system, including the cost-effectiveness of the technologies under investigation, the cost of running the trial, and the total health benefit delivered to patients. These designs offer the potential to explicitly balance the costs and benefits of adaptive clinical trials against the health economic benefits expected for populations affected by any subsequent health technology adoption decisions. They may also improve the expected value of learning from the budget spent within a trial. Main body: This paper introduces value-adaptive designs for publicly funded clinical trials. It discusses the idea of delivering ‘value for money’ in health technology assessment, what is meant by being ‘value-adaptive’, and the key features that characterise these designs. The methodology behind one kind of value-adaptive design, the value-based sequential model of a two-armed clinical trial proposed by Chick et al. (2017), is described and illustrated using three retrospective case studies from the United Kingdom. The paper concludes by reviewing a range of perspectives provided by stakeholders, together with our own thoughts, on the practical opportunities and changes required to implement a value-adaptive approach. Conclusions: Value-adaptive clinical trial designs offer the potential to align health research funding allocations with population health economic goals. Many of the systems required to deploy value-adaptive designs within a publicly funded health system already exist and, with increased application, experience, and refinement, they have the potential to deliver improved value for money.
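As a toy illustration of the value-adaptive logic, and emphatically not the Chick et al. (2017) sequential model itself, the sketch below applies a crude one-look value-of-information screen with invented figures: continue the trial only while the population value of eliminating the remaining uncertainty exceeds the cost of the next stage.

```python
from scipy.stats import norm

# Invented interim figures: posterior mean and SD of the incremental
# net monetary benefit (INB) per patient, in GBP.
mu, sigma = 150.0, 400.0
population = 50_000          # patients affected by the adoption decision
next_stage_cost = 2_000_000  # cost of continuing the trial, in GBP

# Per-patient expected value of perfect information for a normal
# posterior on INB: EVPI = sigma*phi(mu/sigma) - |mu|*Phi(-|mu|/sigma).
evpi = sigma * norm.pdf(mu / sigma) - abs(mu) * norm.cdf(-abs(mu) / sigma)
population_evpi = evpi * population

print(f"population EVPI: GBP {population_evpi:,.0f}")
if population_evpi > next_stage_cost:
    print("continue: resolving the remaining uncertainty may be worth the cost")
else:
    print("stop: the next stage costs more than perfect information is worth")
```

Because EVPI is an upper bound on the value of any further study, this screen can only justify stopping, not continuing; the full sequential model weighs the expected value of the specific information a next stage would generate.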
Recommendations for the analysis of individually randomised controlled trials with clustering in one arm – a case of continuous outcomes
Background: In an individually randomised controlled trial where the treatment is delivered by a health professional, it seems likely that the effectiveness of the treatment, independent of any treatment effect, could depend on the skill, training or even enthusiasm of the health professional delivering it. This may lead to clustering of the outcomes for patients treated by the same health professional, while similar clustering may not occur in the control arm. Using four case studies, we aim to provide practical guidance and recommendations for the analysis of trials with some element of clustering in one arm. Methods: Five approaches to the analysis of outcomes from an individually randomised controlled trial with clustering in one arm are identified in the literature. Some of these methods were applied to four case studies of completed randomised controlled trials with clustering in one arm, with sample sizes ranging from 56 to 539. Results were obtained using the statistical packages R and Stata and summarised using a forest plot. Results: The intra-cluster correlation coefficient (ICC) for each of the case studies was small (<0.05), indicating little dependence of the outcomes on cluster allocation. All models fitted produced similar results for the case studies considered, including the simplest approach of ignoring clustering. Conclusions: A partially clustered approach, modelling the clustering in just one arm, most accurately represents the trial design and provides valid results. Modelling homogeneous variances between the clustered and unclustered arms is adequate in scenarios similar to the case studies considered. We recommend treating each participant in the unclustered arm as a single cluster. This approach is simple to implement in R and Stata and is recommended for the analysis of trials with clustering in one arm only. However, the case studies considered had small ICC values, limiting the generalisability of these results.
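The recommendation is straightforward to implement. The paper works in R and Stata, so the following Python/statsmodels translation with simulated data is our own sketch, not the authors' code; giving each control participant a singleton cluster and placing the random effect on the treatment indicator models clustering in the intervention arm only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated pnRCT: 10 therapists with 10 patients each in the
# intervention arm; 100 unclustered patients in the control arm.
n_clusters, cluster_size, n_control = 10, 10, 100
therapist = np.repeat(np.arange(n_clusters), cluster_size)
u = rng.normal(0.0, 0.5, n_clusters)   # therapist (cluster) effects
y_trt = 0.5 + u[therapist] + rng.normal(0.0, 1.0, n_clusters * cluster_size)
y_ctl = rng.normal(0.0, 1.0, n_control)

df = pd.DataFrame({
    "y": np.concatenate([y_trt, y_ctl]),
    "treat": np.r_[np.ones_like(y_trt), np.zeros_like(y_ctl)],
    # Each control participant is treated as its own singleton cluster.
    "cluster": np.r_[therapist, n_clusters + np.arange(n_control)],
})

# Random effect on 'treat' only: the cluster term drops out wherever
# treat == 0, so clustering is modelled in the intervention arm alone.
model = smf.mixedlm("y ~ treat", df, groups="cluster", re_formula="0 + treat")
print(model.fit(reml=True).summary())
```

With re_formula="0 + treat" the cluster effect multiplies the treatment indicator, so it vanishes for the singleton control clusters and the cluster variance is estimated from the intervention arm alone.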
Comprehensive review of statistical methods for analysing patient-reported outcomes (PROs) used as primary outcomes in randomised controlled trials (RCTs) published by the UK’s Health Technology Assessment (HTA) journal (1997–2020)
Objectives: To identify how frequently patient-reported outcomes (PROs) are used as primary and/or secondary outcomes in randomised controlled trials (RCTs) and to summarise the statistical methods used for the analysis of PROs. Design: Comprehensive review. Setting: RCTs funded and published by the United Kingdom’s (UK) National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and eligibility: HTA reports of RCTs published between January 1997 and December 2020 were reviewed. Data extraction: Information relating to PRO use and analysis methods was extracted. Primary and secondary outcome measures: The frequency of using PROs as primary and/or secondary outcomes; the statistical methods used for the analysis of PROs as primary outcomes. Results: Across 303 NIHR HTA reports of RCTs, 37.6% (114/303) of trials used PROs as primary outcomes and 82.8% (251/303) used PROs as secondary outcomes. In the 114 RCTs where a PRO was the primary outcome, the most used PRO was the Short-Form 36 (8/114); the most popular methods for multivariable analysis were the linear mixed model (45/114), linear regression (29/114) and analysis of covariance (13/114); logistic regression was applied to binary and ordinal outcomes in 14/114 trials; and repeated measures analyses were used in 39/114 trials. Conclusion: The majority of trials used PROs as primary and/or secondary outcomes. Conventional methods such as linear regression are widely used, despite the potential violation of their assumptions. In recent years there has been an increasing trend towards complex models (eg, with mixed effects). Statistical methods developed to address these violations when analysing PROs, such as beta-binomial regression, are not routinely used in practice. Future research will focus on evaluating available statistical methods for the analysis of PROs.
How can health economics be used in the design and analysis of adaptive clinical trials? A qualitative analysis
Introduction: Adaptive designs offer a flexible approach, allowing changes to a trial based on examinations of the data as it progresses. Adaptive clinical trials are becoming a popular choice, as the prudent use of finite research budgets and accurate decision-making are priorities for healthcare providers around the world. The methods of health economics, which aim to maximise the health gained for the money spent, could be incorporated into the design and analysis of adaptive clinical trials to make them more efficient. We aimed to understand the perspectives of stakeholders in health technology assessments to inform recommendations for the use of health economics in adaptive clinical trials. Methods: A qualitative study explored the attitudes of key stakeholders, including researchers, decision-makers and members of the public, towards the use of health economics in the design and analysis of adaptive clinical trials. Data were collected using interviews and focus groups (29 participants). A framework analysis was used to identify themes in the transcripts. Results: It was considered that answering the clinical research question should be the priority in a clinical trial, notwithstanding the importance of cost-effectiveness for decision-making. Concerns raised by participants included handling the volatile nature of cost data at interim analyses; implementing this approach in global trials; resourcing adaptive trials that are designed and adapted based on health economic outcomes; and training stakeholders in these methods so that they can be implemented and appropriately interpreted. Conclusion: The use of health economics in the design and analysis of adaptive clinical trials has the potential to increase the efficiency of health technology assessments worldwide. Recommendations are made concerning the development of methods allowing the use of health economics in adaptive clinical trials, and suggestions are given to facilitate their implementation in practice.
Lessons learned from building The Kid’s Trial with an online children’s and parents’ research advisory group: a descriptive, qualitative study
Health research increasingly incorporates public and patient involvement (PPI) to enhance trial inclusivity and relevance, and it is often mandated by funding and regulatory bodies. PPI boosts public engagement with trials and aligns trial objectives more closely with the priorities of the groups they aim to benefit. The Kid’s Trial, an online randomised trial co-created with children, aims to help them better understand what randomised trials are and why they matter, and to improve their critical thinking skills. To ensure inclusivity and relevance, we established two PPI groups: the Children’s Research Advisory Group (CRAG) and the Parents’ Research Advisory Group (PRAG). We recruited a representative sample of children and parents from diverse ethnic, geographic, and socioeconomic backgrounds to reflect the trial’s target demographic. We engaged PPI group members through social media and email campaigns aimed at parents of children aged 7 to 12. PPI meetings were conducted online, followed set agendas, and included real-time trial updates, post-meeting feedback surveys, and polls. A PPI compensation plan was established in advance. Online interviews later captured members’ insights and experiences as PPI partners. Seven family units, comprising eight children and seven parents, were recruited over 15 weeks from six countries. PPI partners shaped the trial design by contributing to website animations, aesthetic changes, and language adaptations. Interviews were analysed using reflexive thematic analysis to explore the facilitators, challenges, and outcomes of participating in our online research advisory groups. Reflections from researchers and PPI partners demonstrated that participation in the advisory groups enhanced children’s learning and confidence. Many members, including children and adults, experienced unexpected positive outcomes, such as increased scientific literacy, science communication skills and confidence. Their involvement meaningfully shaped the trial’s development and processes. This study also provides guidance for researchers engaging similar demographics in future PPI activities. Plain English summary: Health research now often includes input from the public and patients (patient and public involvement, or PPI) to make studies more inclusive and useful. Many funding and regulatory organisations require this. When the public is involved, research studies become more relevant to the people they aim to help. The Kid’s Trial is an online study designed with children to teach them how health research works and to help them think critically about health information they encounter. To make sure The Kid’s Trial was inclusive and meaningful, we created two PPI groups made up of children and their parents to help us design it. We used social media and email to recruit a diverse group of children and parents from different backgrounds. These groups met online to discuss the trial, make improvements, and give feedback. They worked on the website, website animations, trial design, and the language we used. The PPI group members were compensated for their time. Seven family units, consisting of eight children and seven parents from six countries, joined the PPI groups. We interviewed group members to understand what worked well, what was challenging, and what they gained from participating in the PPI groups. Children felt that their confidence and learning had improved. Many PPI group members experienced unexpected benefits. Their input significantly influenced the design of The Kid’s Trial. This study also offers valuable advice for researchers seeking to include children and parents as PPI partners in future studies.