Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
3,424 result(s) for "Statistical analysis plan"
When and How to Deviate From a Preregistration
2024
As the practice of preregistration becomes more common, researchers need guidance in how to report deviations from their preregistered statistical analysis plan. A principled approach to the use of preregistration should not treat all deviations as problematic. Deviations from a preregistered analysis plan can both reduce and increase the severity of a test, as well as increase the validity of inferences. I provide examples of how researchers can present deviations from preregistrations and evaluate the consequences of the deviation when encountering 1) unforeseen events, 2) errors in the preregistration, 3) missing information, 4) violations of untested assumptions, and 5) falsification of auxiliary hypotheses. The current manuscript aims to provide a principled approach to deciding when to deviate from a preregistration and how to report deviations from an error-statistical philosophy grounded in methodological falsificationism. The goal is to help researchers reflect on the consequence of deviations from preregistrations by evaluating the test’s severity and the validity of the inference.
Journal Article
Decentralized Clinical Trials in the Era of Real‐World Evidence: A Statistical Perspective
by
Rockhold, Frank W.
,
Di, Junrui
,
Kirk, Jennifer
in
Artificial intelligence
,
Clinical medicine
,
Clinical trials
2025
There has been a growing trend for activities relating to clinical trials to take place at locations other than traditional trial sites (hence decentralized clinical trials, or DCTs), some of which are settings of real‐world clinical practice. Although DCTs have numerous benefits, they also carry implications for a number of issues relating to their design, conduct, and analysis. The Real‐World Evidence Scientific Working Group of the American Statistical Association Biopharmaceutical Section has been reviewing the field of DCTs and provides in this paper considerations for decentralized trials from a statistical perspective. The paper first discusses selected critical decentralized elements that may have statistical implications for the trial and then summarizes regulatory guidance, frameworks, and initiatives on DCTs. Further discussion focuses on the design (including construction of the estimand), implementation, statistical analysis plan (including missing data handling), and reporting of safety events. Some additional considerations (e.g., ethical considerations, technology infrastructure, study oversight, data security and privacy, and regulatory compliance) are also briefly discussed. This paper is intended to provide statistical considerations for decentralized trials of medical products to support regulatory decision‐making.
Journal Article
Posting of clinical trial results and other critical information from completed medicines trials on ClinicalTrials.gov
by
Dal-Ré, Rafael
,
Mahillo-Fernández, Ignacio
in
Clinical trials
,
Cross-sectional studies
,
Government agencies
2023
Purpose
Clinical trials transparency requires trial registration and the posting of results on a public register. US regulations also require the posting of protocols and statistical analysis plans (SAPs). For US Federal agency funded trials started on or after 21 January 2019, informed consent forms (ICFs) must also be posted. Posting these documents is not mandatory in other countries. We aimed to assess compliance with US regulations of trials conducted in the US or in other countries with respect to ICFs, protocols, SAPs, and results.
Methods
This cross-sectional analysis (27 April 2023) comprised completed medicines trials started on or after 21 January 2019 and registered on ClinicalTrials.gov. Trial data were registered by funder type (i.e., ‘US federal agencies’, industry, and ‘all others’) and development phase.
Results
Of 5,584 trials, 40% were conducted solely in the US. Results had been posted for 47% of US trials and 12% of non-US trials. Some 40% of US trials had posted protocols and SAPs, as did 9% of trials conducted in other countries. Only 10% (US) and 2% (other countries) of trials had posted ICFs. When margins of 2 and 12 months after the primary completion date were considered in the analysis, the ICF posting rate did not change, but the rate of posting results increased to 64% for US trials. ‘US Federal agencies’ funded trials were significantly more likely to post ICFs than industry [OR 23.9 (12.5-45.7; <.001)] or ‘all others’ [OR 3.16 (1.79-5.56; <.001)].
Conclusion
Future interventions should be considered to encourage timely posting of trial results and information.
Journal Article
Treatment of Middle East respiratory syndrome with a combination of lopinavir/ritonavir and interferon-β1b (MIRACLE trial): statistical analysis plan for a recursive two-stage group sequential randomized controlled trial
by
AlJohani, Sameera
,
Aziz Jokhdar, Hani A.
,
Assiri, Abdullah M.
in
Antiretroviral drugs
,
Antiviral
,
Antiviral Agents - adverse effects
2020
The MIRACLE trial (MERS-CoV Infection tReated with A Combination of Lopinavir/ritonavir and intErferon-β1b) investigates the efficacy of a combination therapy of lopinavir/ritonavir and recombinant interferon-β1b provided with standard supportive care, compared to placebo provided with standard supportive care, in hospitalized patients with laboratory-confirmed MERS. The MIRACLE trial is designed as a recursive, two-stage, group sequential, multicenter, placebo-controlled, double-blind randomized controlled trial. The aim of this article is to describe the statistical analysis plan for the MIRACLE trial. The primary outcome is 90-day mortality. The primary analysis will follow the intention-to-treat principle. The MIRACLE trial is the first randomized controlled trial for MERS treatment.
Trial registration: ClinicalTrials.gov, NCT02845843. Registered on 27 July 2016.
Journal Article
Public availability and adherence to prespecified statistical analysis approaches was low in published randomized trials
by
Kahan, Brennan C.
,
Cro, Suzie
,
Ahmad, Tahania
in
Bias
,
Clinical trials
,
Data Interpretation, Statistical
2020
Prespecification of statistical methods in clinical trial protocols and statistical analysis plans can help to deter bias from p-hacking but is only effective if the prespecified approach is made available.
For 100 randomized trials published in 2018 and indexed in PubMed, we evaluated how often a prespecified statistical analysis approach for the trial's primary outcome was publicly available. For each trial with an available prespecified analysis, we compared this with the trial publication to identify whether there were unexplained discrepancies.
Only 12 of 100 trials (12%) had a publicly available prespecified analysis approach for their primary outcome; this document was dated before recruitment began for only two trials. Of the 12 trials with an available prespecified analysis approach, 11 (92%) had one or more unexplained discrepancies. Only 4 of 100 trials (4%) stated that the statistician was blinded until the SAP was signed off, and only 10 of 100 (10%) stated the statistician was blinded until the database was locked.
For most published trials, there is insufficient information available to determine whether the results may be subject to p-hacking. Where information was available, there were often unexplained discrepancies between the prespecified and final analysis methods.
Journal Article
Evidence of unexplained discrepancies between planned and conducted statistical analyses: a review of randomised trials
by
Johnson, Nicholas A.
,
Kahan, Brennan C.
,
Cro, Suzie
in
Biomedicine
,
Clinical trials
,
Data Interpretation, Statistical
2020
Background
Choosing or altering the planned statistical analysis approach after examination of trial data (often referred to as ‘p-hacking’) can bias the results of randomised trials. However, the extent of this issue in practice is currently unclear. We conducted a review of published randomised trials to evaluate how often a pre-specified analysis approach is publicly available, and how often the planned analysis is changed.
Methods
A review of randomised trials published between January and April 2018 in six leading general medical journals. For each trial, we established whether a pre-specified analysis approach was publicly available in a protocol or statistical analysis plan and compared this to the trial publication.
Results
Overall, 89 of 101 eligible trials (88%) had a publicly available pre-specified analysis approach. Only 22/89 trials (25%) had no unexplained discrepancies between the pre-specified and conducted analysis. Fifty-four trials (61%) had one or more unexplained discrepancies, and in 13 trials (15%), it was impossible to ascertain whether any unexplained discrepancies occurred due to incomplete reporting of the statistical methods. Unexplained discrepancies were most common for the analysis model (n = 31, 35%) and analysis population (n = 28, 31%), followed by the use of covariates (n = 23, 26%) and the approach for handling missing data (n = 16, 18%). Many protocols or statistical analysis plans were dated after the trial had begun, so earlier discrepancies may have been missed.
Conclusions
Unexplained discrepancies in the statistical methods of randomised trials are common. Increased transparency is required for proper evaluation of results.
Journal Article
Heterogeneity in multicentre trial participating centers: lessons from the TOPCAT trial on interpreting trial data for clinical practice
2023
Randomized controlled trials (RCTs) are considered a “gold standard” of evidence, provided they meet rigorous standards in design and execution. Recently, some investigators of the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist (TOPCAT) trial advocate reanalysis of results, deviating from the statistical analysis plan. We briefly review the rationale by the TOPCAT investigators and implications for interpreting trial data.
Critical examination of existing literature.
The TOPCAT trial showed variation in patient characteristics and outcomes among different geographic regions. The investigators suggest that the observed variation indicated unreliable data, warranting deviation from the protocol. This led to claims of therapeutic effectiveness for populations in select regions. We suggest that some variation is expected in multicentre RCTs and argue that discriminating between natural variation and unreliable data can be challenging. Thus, the warrant for deviation from the protocol is not clear.
The TOPCAT investigators highlight important concerns about heterogeneity in RCT samples and how that may impact our interpretation of the results. If we are to maintain rigor in the RCT methodology and preserve its status as a reliable form of evidence for clinical practice, we must carefully consider when it is appropriate to deviate from a protocol when analyzing and interpreting trial data.
Journal Article
Review of the quality of reporting of statistical analysis plans for cluster randomized trials
by
Wang, Yixin
,
Taljaard, Monica
,
Shaw, Julia
in
Cluster Analysis
,
cluster randomized trials
,
Clustering
2025
The guideline for the content of Statistical Analysis Plans (SAPs) outlines recommendations for items to be included in SAPs. As yet there is no specific tailoring of this guideline for Cluster Randomized Trials (CRTs), and there has been no assessment of the reporting quality of SAPs against this guideline. Our intention is to identify how well a sample of SAPs for CRTs adheres to the reporting of key items in the current guideline, as well as additional analysis aspects considered important in CRTs.
We include (i) fully published standalone SAPs identified via Ovid-MEDLINE and (ii) SAPs published as supplementary material or appendices to the final published report identified by searching an existing database of nearly 800 CRTs.
The search identified 85 unique SAPs: 26 were published in standalone format and 59 were supplementary material to the full trial report. There was mixed clarity in reporting of items related to the current guideline (e.g., most (61/85, 72%) reported what covariates will be included in any adjustment, but fewer (26/85, 31%) reported what method will be used to estimate the absolute measure of effect). Considering additional aspects important for CRTs, the majority (79/85, 93%) included a plan to allow for clustering in the analysis, but fewer (10/40, 25%) reported how a small number of clusters would be accommodated (this was only considered relevant for the subset of CRTs with fewer than 40 clusters). Few (5/85, 6%) reported how the intracluster correlation would be estimated. Few clearly reported statistical targets of inference: it was clear in only two SAPs (2/85, 2%) whether the objectives were related to the individual or cluster-level average; in trials where relevant, only three (3/70, 4%) clearly reported whether the objectives were related to the marginal or cluster-specific effect.
This review has identified specific areas of poor quality of reporting that might need additional consideration when developing the guidance for the reporting of SAPs for CRTs.
•Most statistical analysis plans (SAPs) for cluster randomized trials (CRTs) include a plan for clustering.
•Fewer plan for how a small number of clusters will be accommodated.
•Only a minority consider how they will estimate the intracluster correlation.
•Few prespecify the target of inference with regard to marginal or conditional effects.
•Few prespecify the target of inference with regard to individual vs cluster-level averages.
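One of the under-reported items above is how the intracluster correlation (ICC) will be estimated. The reviewed SAPs do not specify an estimator, so purely as an illustrative sketch (not taken from the paper), the classical one-way ANOVA estimator for balanced clusters could look like this:

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the intracluster correlation.

    Assumes balanced clusters (equal cluster sizes); `clusters` is a
    list of lists of outcome values, one inner list per cluster.
    """
    k = len(clusters)        # number of clusters
    m = len(clusters[0])     # cluster size (assumed equal across clusters)
    grand = sum(sum(c) for c in clusters) / (k * m)
    means = [sum(c) / m for c in clusters]
    # mean square between clusters and mean square within clusters
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((x - mu) ** 2
              for c, mu in zip(clusters, means)
              for x in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

# hypothetical data: members of each cluster are nearly identical,
# so the estimated ICC is close to 1
print(anova_icc([[0.0, 1.0], [10.0, 11.0]]))  # close to 1
```

With no within-cluster variation the estimator returns exactly 1; with identical cluster means it can go negative, down to -1/(m-1), which is a known property of the ANOVA estimator.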
Journal Article
Proportional Assist Ventilation for Minimizing the Duration of Mechanical Ventilation (the PROMIZING study): update to the statistical analysis plan for a randomized controlled trial
by
Hu, Pingzhao
,
Bosma, Karen J.
,
Wade, Kaitlyn
in
Acute respiratory distress syndrome
,
Analysis
,
Bayes Theorem
2024
Background
We previously published the protocol and statistical analysis plan for a randomized controlled trial of Proportional Assist Ventilation for Minimizing the Duration of Mechanical Ventilation (the PROMIZING study) in Trials (https://doi.org/10.1186/s13063-023-07163-w). This update summarizes changes made to the statistical analysis plan for the trial since the publication of the original protocol and statistical analysis plan.
Methods/design
The Proportional Assist Ventilation for Minimizing the Duration of Mechanical Ventilation (PROMIZING) study is a multi-center, open-label, randomized controlled trial designed to determine if ventilation with proportional assist ventilation with load-adjustable gain factors will result in a shorter duration of time spent on mechanical ventilation compared to ventilation with pressure support ventilation for patients with acute respiratory failure. The statistical analysis plan for the trial was incorporated into the original publication of the protocol in Trials (https://doi.org/10.1186/s13063-023-07163-w) and was based on version 5.0 of the study protocol and version 1.0 of the statistical analysis plan (SAP), which included plans for both frequentist and Bayesian analyses. We have since updated the SAP to refine the Bayesian analysis plan, update the multistate model diagram, and include plans for a cluster analysis to determine if there is heterogeneity of treatment effect. This update summarizes the changes made and their rationale and provides a refined SAP for the PROMIZING trial with additional background information, in adherence with guidelines for the prospective reporting of SAPs for randomized controlled trials.
Trial registration: ClinicalTrials.gov Identifier: NCT02447692, prospectively registered May 19, 2015.
Journal Article
Stockholm Score of Lesion Detection on Computed Tomography following Mild Traumatic Brain Injury (SELECT-TBI) Study: Pilot Analysis and Statistical Analysis Plan
2025
Background
Mild traumatic brain injury (mTBI) is a common cause of emergency department visits. Only a small percentage of mTBI patients develop an intracranial lesion (ICL) and even fewer will require neurosurgical intervention due to their injury. The Stockholm Score of Lesion Detection on Computed Tomography following Mild Traumatic Brain Injury (SELECT-TBI) study aims to provide a data-driven approach to estimate individualized risk for traumatic ICL and clinically significant lesions in mTBI patients.
Objective
To provide a statistical analysis plan and pilot data analysis before completion of data collection, as pre-planned in the published study protocol.
Methods
Retrospective study of patients ≥ 15 years old who underwent a computed tomography (CT) scan for mTBI in Stockholm, Sweden, between 2015 and 2020. Up to 73 variables were collected for each patient. Data analysis of the first 5,000 patients in the cohort was conducted to develop preliminary prediction models using Lasso regression, a general linear model, and random forest, and to perform an optimal population analysis to determine whether the final sample size would be sufficient.
Results
Six data selection strategies were tested, and receiver operating characteristic (ROC) curves with the area under the curve (AUC) were generated using a 4:1 training/validation data split. The best-performing model was the Lasso regression model, which achieved an AUC of 0.807 for any ICL and 0.903 for clinically significant ICL (accuracies of 70% and 97.7%, and Brier scores of 0.3 and 0.023, respectively). Clinical variables identified as key features across all models were Glasgow Coma Scale, signs of basilar skull fracture, trauma mechanism, and vomiting, each with an importance score greater than 0.1 (explaining more than 10% of model variance). Finally, the highest prediction of the necessary population size was 29,667 patients.
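The pilot evaluation described here rests on a 4:1 training/validation split and ROC AUC. As a minimal, hypothetical sketch (the study's actual pipeline is not shown in this record, and these helper names are invented for illustration), both pieces can be expressed in plain Python:

```python
import random

def train_validation_split(rows, ratio=0.8, seed=42):
    """Shuffle and split rows into training and validation sets (4:1 by default)."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * ratio)
    return rows[:cut], rows[cut:]

def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation, with tie correction."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    i = 0
    while i < n:
        # find the block of tied scores starting at i
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        i = j
    n_pos = sum(1 for _, y in pairs if y == 1)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# hypothetical example: a perfectly separating score gives AUC = 1.0
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

A model that scores all patients identically lands at AUC = 0.5 under this tie-aware formulation, which is the usual sanity check for an uninformative classifier.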
Conclusion
Our preliminary results demonstrate the potential of a data-driven approach to generate personalized risk stratification tools. With a final cohort size expected to exceed 40,000 patients, we anticipate being able to create more granular models optimized for integration into clinical decision-making.
Study registration: ClinicalTrials.gov NCT04995068.
Journal Article