1,741 results for "Interim analyses"
Trial Sequential Analysis in systematic reviews with meta-analysis
Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than the traditional naïve meta-analysis. Conclusions Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
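The diversity-adjusted required information size described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' TSA software: it uses the standard two-group fixed-sample formula for proportions and inflates it by 1/(1 − D²); all parameter values in the example are assumptions.

```python
from statistics import NormalDist

def required_information_size(p_control, rrr, diversity, alpha=0.05, beta=0.10):
    """Diversity-adjusted required information size (total participants)
    for a meta-analysis of a binary outcome.

    p_control : assumed event proportion in the control group
    rrr       : relative risk reduction the meta-analysis should detect
    diversity : D^2, the diversity measure of heterogeneity (0 <= D^2 < 1)
    """
    p_exp = p_control * (1 - rrr)      # anticipated intervention-group risk
    p_bar = (p_control + p_exp) / 2    # average risk across the two groups
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(1 - beta)
    # Fixed-sample total size for a two-group comparison of proportions
    fixed = (4 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar)
             / (p_control - p_exp) ** 2)
    # Heterogeneity correction: inflate by 1 / (1 - D^2)
    return fixed / (1 - diversity)
```

For example, with an assumed 10% control-group risk, a 20% relative risk reduction, and D² = 0.25, the adjusted information size is one third larger than the fixed-sample size, so a meta-analysis short of that total remains an interim analysis.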
Adaptive designs in clinical trials: why use them, and how to run and report them
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label to refer to certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design, for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants. Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
Anti-GD2 Antibody with GM-CSF, Interleukin-2, and Isotretinoin for Neuroblastoma
This study evaluated whether the addition of a monoclonal antibody against the tumor-associated disialoganglioside GD2, in combination with GM-CSF and interleukin-2, to standard therapy consisting of isotretinoin alone improved outcomes in children with high-risk neuroblastoma. Neuroblastoma, a cancer of the sympathetic nervous system responsible for 12% of deaths associated with cancer in children under 15 years of age,[1] is a heterogeneous disease, with nearly 50% of patients having a high-risk phenotype characterized by widespread dissemination of the cancer and poor long-term survival, even if intensive multimodal treatments are used.[2] The initial results of the last randomized, controlled trial showing a significant improvement in outcomes were published over a decade ago[3,4] and established the standard therapy for high-risk neuroblastoma: myeloablative therapy with stem-cell rescue, followed by the treatment of minimal residual disease with isotretinoin. However, . . .
Bayesian sequential designs in studies with multilevel data
In many studies in the social and behavioral sciences, the data have a multilevel structure, with subjects nested within clusters. In the design phase of such a study, the number of clusters to achieve a desired power level has to be calculated. This requires a priori estimates of the effect size and intraclass correlation coefficient. If these estimates are incorrect, the study may be under- or overpowered. This may be overcome by using a group-sequential design, where interim tests are done at various points in time of the study. Based on interim test results, a decision is made to either include additional clusters or to reject the null hypothesis and conclude the study. This contribution introduces Bayesian sequential designs as an alternative to group-sequential designs. This approach compares various hypotheses based on the support in the data for each of them. If neither hypothesis receives a sufficient degree of support, additional clusters are included in the study and the Bayes factor is recalculated. This procedure continues until one of the hypotheses receives sufficient support. This paper explains how the Bayes factor is used as a measure of support for a hypothesis and how a Bayesian sequential design is conducted. A simulation study in the setting of a two-group comparison was conducted to study the effects of the minimum and maximum number of clusters per group and the desired degree of support. It is concluded that Bayesian sequential designs are a flexible alternative to the group-sequential design.
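The stopping rule described in this abstract, keep sampling until the Bayes factor favours one hypothesis strongly enough, can be sketched in a deliberately simplified setting: a two-group mean comparison with known outcome variance and no clustering, which is much simpler than the multilevel models the paper treats. The prior scale, threshold, and batch sizes below are illustrative assumptions.

```python
from statistics import NormalDist

def bayes_factor_10(diff, n_per_group, sigma=1.0, tau=1.0):
    """Bayes factor for H1 (delta ~ Normal(0, tau^2)) versus H0 (delta = 0),
    given an observed mean difference and known outcome SD sigma. With a
    normal prior, both marginal likelihoods have closed forms."""
    se2 = 2 * sigma ** 2 / n_per_group  # variance of the mean difference
    like_h0 = NormalDist(0, se2 ** 0.5).pdf(diff)
    like_h1 = NormalDist(0, (tau ** 2 + se2) ** 0.5).pdf(diff)
    return like_h1 / like_h0

def sequential_decision(diffs_by_look, n_per_look, threshold=10.0):
    """Recompute BF10 on the accumulating data at each interim look; stop as
    soon as the evidence for either hypothesis exceeds the threshold."""
    for look, diff in enumerate(diffs_by_look, start=1):
        bf = bayes_factor_10(diff, n_per_look * look)
        if bf >= threshold:
            return look, "support H1"
        if bf <= 1 / threshold:
            return look, "support H0"
    return len(diffs_by_look), "inconclusive"
```

A mean difference near zero drives BF10 below 1/threshold as data accumulate (support for H0), while a large difference crosses the upper threshold at the first look.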
The role of combining interim and final analysis by using endoscopic and radiologic methods in total neoadjuvant treatment
We aim to compare the relative performance of flexible sigmoidoscopy (FS), rectal magnetic resonance imaging (MRI), and their combinations during interim (i) and final (f) analysis to evaluate concordance with complete response (CR) following total neoadjuvant treatment (TNT) in rectal cancer. Patients who opted for TNT and underwent restaging with FS and MRI between 2015 and 2022 were evaluated. Concordance between the assessment methods and CR was analyzed using the weighted-κ test. A cohort comprising 208 patients revealed a CR rate of 42.3%. When evaluating individual methods, fFS alone demonstrated the highest sensitivity (68.2%) for CR detection, with a moderate level of concordance (κ = 0.46). Only the combinations of iFS-fFS and fFS-fMRI reached a comparable level of concordance to that achievable by fFS alone. Among the available diagnostic tools, the combination of final MRI and FS still appears to offer the highest concordance with CR, with relatively higher sensitivity. Additionally, interim MRI may not add significant clinical value and could be omitted.
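The concordance statistic used in this abstract can be illustrated with a small sketch. For a binary rating (CR vs. no CR), the weighted and unweighted kappa coincide, so plain Cohen's κ suffices; the 2×2 counts in the example are invented for illustration and are not the study's data.

```python
def cohens_kappa(both_pos, test_pos_only, ref_pos_only, both_neg):
    """Cohen's kappa from a 2x2 agreement table between a restaging method
    (e.g. fFS calling complete response) and the reference standard.
    For binary ratings, weighted and unweighted kappa coincide."""
    n = both_pos + test_pos_only + ref_pos_only + both_neg
    p_observed = (both_pos + both_neg) / n
    # Chance agreement computed from the marginal totals
    p_chance = ((both_pos + test_pos_only) * (both_pos + ref_pos_only)
                + (ref_pos_only + both_neg) * (test_pos_only + both_neg)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

Perfect agreement gives κ = 1, chance-level agreement gives κ = 0, and values around 0.4-0.6 correspond to the "moderate" concordance reported for fFS.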
Comparison of Bayesian and frequentist group-sequential clinical trial designs
Background There is a growing interest in the use of Bayesian adaptive designs in late-phase clinical trials. This includes the use of stopping rules based on Bayesian analyses in which the frequentist type I error rate is controlled as in frequentist group-sequential designs. Methods This paper presents a practical comparison of Bayesian and frequentist group-sequential tests. Focussing on the setting in which data can be summarised by normally distributed test statistics, we evaluate and compare boundary values and operating characteristics. Results Although Bayesian and frequentist group-sequential approaches are based on fundamentally different paradigms, in a single arm trial or two-arm comparative trial with a prior distribution specified for the treatment difference, Bayesian and frequentist group-sequential tests can have identical stopping rules if particular critical values with which the posterior probability is compared or particular spending function values are chosen. If the Bayesian critical values at different looks are restricted to be equal, O’Brien and Fleming’s design corresponds to a Bayesian design with an exceptionally informative negative prior, Pocock’s design to a Bayesian design with a non-informative prior, and frequentist designs with a linear alpha spending function are very similar to Bayesian designs with slightly informative priors. This contrasts with the setting of a comparative trial with independent prior distributions specified for treatment effects in different groups. In this case Bayesian and frequentist group-sequential tests cannot have the same stopping rule as the Bayesian stopping rule depends on the observed means in the two groups and not just on their difference. In this setting the Bayesian test can only be guaranteed to control the type I error for a specified range of values of the control group treatment effect. Conclusions Comparison of frequentist and Bayesian designs can encourage careful thought about design parameters and help to ensure appropriate design choices are made.
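The correspondence this abstract describes can be made concrete: under a non-informative prior, the posterior probability that the treatment effect is positive equals Φ(z), so stopping when z exceeds a frequentist boundary c is the same as stopping when the posterior probability exceeds Φ(c). The sketch below uses approximate textbook boundary constants for five equally spaced looks at two-sided α = 0.05 (assumed values, not taken from the paper).

```python
from statistics import NormalDist

norm = NormalDist()

def obrien_fleming_boundaries(k_looks, c_final):
    """O'Brien-Fleming z-boundaries: very strict early, ~c_final at the end."""
    return [c_final * (k_looks / k) ** 0.5 for k in range(1, k_looks + 1)]

def pocock_boundaries(k_looks, c):
    """Pocock z-boundaries: the same critical value at every look."""
    return [c] * k_looks

def posterior_thresholds(z_boundaries):
    """Under a non-informative prior, P(effect > 0 | data) = Phi(z), so each
    frequentist z-boundary maps to a posterior-probability threshold."""
    return [norm.cdf(z) for z in z_boundaries]

# Approximate constants for 5 equally spaced looks, two-sided alpha = 0.05
obf = obrien_fleming_boundaries(5, 2.04)
poc = pocock_boundaries(5, 2.41)
```

Mapping both designs to posterior thresholds shows the contrast the paper draws: O'Brien-Fleming demands near-certainty at early looks (a boundary of about 4.56 at the first of five looks) while Pocock asks for the same posterior probability, roughly 0.992, at every look.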
Adaptive design clinical trials: a review of the literature and ClinicalTrials.gov
Objectives This review investigates characteristics of implemented adaptive design clinical trials and provides examples of regulatory experience with such trials. Design Review of adaptive design clinical trials in EMBASE, PubMed, Cochrane Registry of Controlled Clinical Trials, Web of Science and ClinicalTrials.gov. Phase I and seamless Phase I/II trials were excluded. Variables extracted from trials included basic study characteristics, adaptive design features, size and use of independent data monitoring committees (DMCs) and blinded interim analyses. We also examined use of the adaptive trials in new drug submissions to the Food and Drug Administration (FDA) and European Medicines Agency (EMA) and recorded regulators’ experiences with adaptive designs. Results 142 studies met inclusion criteria. There has been a recent growth in publicly reported use of adaptive designs among researchers around the world. The most frequently appearing types of adaptations were seamless Phase II/III (57%), group sequential (21%), biomarker adaptive (20%), and adaptive dose-finding designs (16%). About one-third (32%) of trials reported an independent DMC, while 6% reported blinded interim analysis. We found that 9% of adaptive trials were used for FDA product approval consideration, and 12% were used for EMA product approval consideration. International regulators had mixed experiences with adaptive trials. Many product applications with adaptive trials had extensive correspondence between drug sponsors and regulators regarding the adaptive designs, in some cases with regulators requiring revisions or alterations to research designs. Conclusions Wider use of adaptive designs will require sponsors of new drug applications to engage with regulatory scientists during planning and conduct of the trials. Investigators need to more consistently report protections intended to preserve confidentiality and minimise potential operational bias during interim analysis.
Reporting quality was suboptimal in a systematic review of randomized controlled trials with adaptive designs
The study was conducted to evaluate the reporting quality of randomized controlled trials (RCTs) that use an adaptive design (AD) based on the 2020 AD Consolidated Standards for Reporting Trials 2010 extension (ACE) guidelines, and to identify factors associated with better reporting quality. PubMed, Embase, Cochrane, Web of Science, and Google Scholar were searched until November 1, 2022. Multivariable linear regression analysis was performed to investigate potential predictors. In total, 109 RCTs were included in our study. The mean compliance rate for the ACE checklist items was 69.75% ± 16.02. Key methodological items, including allocation concealment and its implementation, were poorly reported. There was also suboptimal reporting of checklist items related to the conduct of interim analyses. Multivariable regression analysis showed better reporting quality with trial registration, nonindustry affiliation (first author), a sample size of >100, general medical journal type, publication date (≥2020), funding, and disclosure of the number of interim analyses. Our study showed that RCTs with AD had suboptimal reporting of 2020 ACE checklist items, particularly AD-specific items. Following the development of the ACE guidelines, stricter adherence is necessary to improve reporting quality. Pre-ACE and post-ACE adherence comparisons can be conducted in the future.
Using Bayesian adaptive designs to improve phase III trials: a respiratory care example
Background Bayesian adaptive designs can improve the efficiency of trials, and lead to trials that can produce high quality evidence more quickly, with fewer patients and lower costs than traditional methods. The aim of this work was to determine how Bayesian adaptive designs can be constructed for phase III clinical trials in critical care, and to assess the influence that Bayesian designs would have on trial efficiency and study results. Methods We re-designed the High Frequency OSCillation in Acute Respiratory distress syndrome (OSCAR) trial using Bayesian adaptive design methods, to allow for the possibility of early stopping for success or futility. We constructed several alternative designs and studied their operating characteristics via simulation. We then performed virtual re-executions by applying the Bayesian adaptive designs using the OSCAR data to demonstrate the practical applicability of the designs. Results We constructed five alternative Bayesian adaptive designs and identified a preferred design based on the simulated operating characteristics, which had similar power to the original design but recruited fewer patients on average. The virtual re-executions showed the Bayesian sequential approach and original OSCAR trial yielded similar trial conclusions. However, using a Bayesian sequential design could have led to a reduced sample size and earlier completion of the trial. Conclusions Using the OSCAR trial as an example, this case study found that Bayesian adaptive designs can be constructed for phase III critical care trials. If the OSCAR trial had been run using one of the proposed Bayesian adaptive designs, it would have terminated at a smaller sample size with fewer deaths in the trial, whilst reaching the same conclusions. We recommend the wider use of Bayesian adaptive approaches in phase III clinical trials. Trial registration: ISRCTN10416500. Retrospectively registered 13 June 2007.
Do we need to adjust for interim analyses in a Bayesian adaptive trial design?
Background Bayesian adaptive methods are increasingly being used to design clinical trials and offer several advantages over traditional approaches. Decisions at analysis points are usually based on the posterior distribution of the treatment effect. However, there is some confusion as to whether control of type I error is required for Bayesian designs as this is a frequentist concept. Methods We discuss the arguments for and against adjusting for multiplicities in Bayesian trials with interim analyses. With two case studies we illustrate the effect of including interim analyses on type I/II error rates in Bayesian clinical trials where no adjustments for multiplicities are made. We propose several approaches to control type I error, and also alternative methods for decision-making in Bayesian clinical trials. Results In both case studies we demonstrated that the type I error was inflated in the Bayesian adaptive designs through incorporation of interim analyses that allowed early stopping for efficacy and without adjustments to account for multiplicity. Incorporation of early stopping for efficacy also increased the power in some instances. An increase in the number of interim analyses that only allowed early stopping for futility decreased the type I error, but also decreased power. An increase in the number of interim analyses that allowed for either early stopping for efficacy or futility generally increased type I error and decreased power. Conclusions Currently, regulators require demonstration of control of type I error for both frequentist and Bayesian adaptive designs, particularly for late-phase trials. To demonstrate control of type I error in Bayesian adaptive designs, adjustments to the stopping boundaries are usually required for designs that allow for early stopping for efficacy as the number of analyses increases. If the designs only allow for early stopping for futility then adjustments to the stopping boundaries are not needed to control type I error. If one instead uses a strict Bayesian approach, which is currently more accepted in the design and analysis of exploratory trials, then type I errors could be ignored and the designs could instead focus on the posterior probabilities of treatment effects of clinically relevant values.
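The inflation this abstract demonstrates is easy to reproduce with a minimal Monte Carlo sketch (not the paper's case studies): under the null, test at every interim look with the same unadjusted two-sided criterion |z| > 1.96, which under a flat prior matches an unadjusted posterior-probability rule, and count how often any look rejects. The numbers of looks, sizes, and simulation settings are illustrative assumptions.

```python
import random

def simulate_type_1_error(n_sims=2000, looks=5, n_per_look=20,
                          z_crit=1.96, seed=1):
    """Under H0 (true mean 0, known SD 1), test at each interim look with the
    same unadjusted critical value and count how often any look rejects."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        total, count = 0.0, 0
        for _ in range(looks):
            for _ in range(n_per_look):
                total += rng.gauss(0.0, 1.0)
                count += 1
            z = total / count ** 0.5  # z-statistic for testing mean = 0
            if abs(z) > z_crit:
                rejections += 1
                break  # early stop for "efficacy" under the null
        # trials that never cross the boundary are non-rejections
    return rejections / n_sims
```

With five unadjusted looks the rejection rate climbs well above the nominal 5%, which is exactly why boundary adjustments (e.g. raising the per-look critical value) are required when early stopping for efficacy is allowed.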