Catalogue Search | MBRL
Explore the vast range of titles available.
147 result(s) for "Julious, Steven A"
An audit of sample sizes for pilot and feasibility trials being undertaken in the United Kingdom registered in the United Kingdom Clinical Research Network database
by Whitehead, Amy L, Julious, Steven A, Billingham, Sophie AM
in Analysis, Biomedical Research, Clinical trials
2013
Background
There is little published guidance as to the sample size required for a pilot or feasibility trial, despite the fact that a sample size justification is a key element in the design of a trial. A sample size justification should give the minimum number of participants needed to meet the objectives of the trial. This paper seeks to describe the target sample sizes set for pilot and feasibility randomised controlled trials currently running within the United Kingdom.
Methods
Data were gathered from the United Kingdom Clinical Research Network (UKCRN) database using the search terms ‘pilot’ and ‘feasibility’. From this search 513 studies were assessed for eligibility of which 79 met the inclusion criteria. Where the data summary on the UKCRN Database was incomplete, data were also gathered from: the International Standardised Randomised Controlled Trial Number (ISRCTN) register; the clinicaltrials.gov website and the website of the funders. For 62 of the trials, it was necessary to contact members of the research team by email to ensure completeness.
Results
Of the 79 trials analysed, 50 (63.3%) were labelled as pilot trials, 25 (31.6%) as feasibility trials and 4 (5.1%) were described as both pilot and feasibility trials. The majority had two arms (n = 68, 86.1%) and the two most common endpoints were continuous (n = 45, 57.0%) and dichotomous (n = 31, 39.2%). Pilot trials were found to have a smaller sample size per arm (median = 30, range = 8 to 114 participants) than feasibility trials (median = 36, range = 10 to 300 participants). By type of endpoint, across feasibility and pilot trials, the median sample size per arm was 36 (range = 10 to 300 participants) for trials with a dichotomous endpoint and 30 (range = 8 to 114 participants) for trials with a continuous endpoint. Publicly funded pilot trials appear to be larger than industry funded pilot trials: median sample sizes of 33 (range = 15 to 114 participants) and 25 (range = 8 to 100 participants) respectively.
Conclusion
All studies should have a sample size justification. Not all studies, however, need a sample size calculation. For pilot and feasibility trials, while a sample size justification is important, a formal sample size calculation may not be appropriate. The results in this paper describe the observed sample sizes in feasibility and pilot randomised controlled trials on the UKCRN Database.
Journal Article
Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes
2018
A pilot study can be an important step in the assessment of an intervention by providing information to design the future definitive trial. Pilot studies can be used to estimate the recruitment and retention rates and population variance and to provide preliminary evidence of efficacy potential. However, estimation is poor because pilot studies are small, so sensitivity analyses for the main trial's sample size calculations should be undertaken.
We demonstrate how to carry out easy-to-perform sensitivity analysis for designing trials based on pilot data using an example. Furthermore, we introduce rules of thumb for the size of the pilot study so that the overall sample size, for both pilot and main trials, is minimized.
The example illustrates how sample size estimates for the main trial can alter dramatically by plausibly varying assumptions. Required sample size for 90% power varied from 392 to 692 depending on assumptions. Some scenarios were not feasible based on the pilot study recruitment and retention rates.
Pilot studies can be used to help design the main trial, but caution should be exercised. We recommend the use of sensitivity analyses to assess the robustness of the design assumptions for a main trial.
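As a rough sketch of the kind of sensitivity analysis described above (not the authors' own method; the target difference and the range of standard deviations below are invented for illustration), the standard normal-approximation sample size formula for a two-arm parallel-group trial with a continuous endpoint shows how strongly the required sample size depends on the assumed variance:

```python
from math import ceil
from statistics import NormalDist


def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-arm
    parallel-group superiority trial with a continuous endpoint:
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)


# Hypothetical target difference of 5 units; varying the assumed SD
# over a plausible range from a small pilot moves the answer a lot.
for sigma in (10, 11, 12):
    print(sigma, n_per_arm(5, sigma))
```

Running the loop shows the per-arm requirement climbing from 85 to 122 as the assumed SD rises from 10 to 12, which is the sort of swing the sensitivity analyses above are designed to expose.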
Journal Article
Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme
by Nadin, Ben, Flight, Laura, Hind, Daniel
in Cardiovascular disease, Catheters, Clinical trials
2017
Background
Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope.
Objectives
To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme.
Data sources and study selection
HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed.
Data extraction
Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers.
Main outcome measures
Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month); and retention rates (randomised participants retained and assessed with valid primary outcome data).
Results
This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs, and more than 80% of the final target sample size was achieved for 79% (119/151) of the RCTs. The median recruitment rate (participants per centre per month) was found to be 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%).
Conclusions
There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections.
Journal Article
Progression criteria in trials with an internal pilot: an audit of publicly funded randomised controlled trials
2019
Background
With millions of pounds spent annually on medical research in the UK, it is important that studies are spending funds wisely. Internal pilots offer the chance to stop a trial early if it becomes apparent that the study will not be able to recruit enough patients to show whether an intervention is clinically effective. This study aims to assess the use of internal pilots in individually randomised controlled trials funded by the Health Technology Assessment (HTA) programme and to summarise the progression criteria chosen in these trials.
Methods
Studies were identified from reports of the HTA committees’ funding decisions from 2012 to 2016. In total, 242 trials were identified of which 134 were eligible to be included in the audit. Protocols for the eligible studies were located on the NIHR Journals website, and if protocols were not available online then study managers were contacted to provide information.
Results
Over two-thirds (72.4%) of studies said in their protocol that they would include an internal pilot phase for their study and 37.8% of studies without an internal pilot had done an external pilot study to assess the feasibility of the full study. A typical study with an internal pilot has a target sample size of 510 over 24 months and aims to recruit one-fifth of their total target sample size within the first one-third of their recruitment time.
There has been an increase in studies adopting a three-tiered structure for their progression rules in recent years, with 61.5% (16/26) of studies using the system in 2016 compared to just 11.8% (2/17) in 2015. There was also a rise in the number of studies giving a target recruitment rate in their progression criteria: 42.3% (11/26) in 2016 compared to 35.3% (6/17) in 2015.
Conclusions
Progression criteria for an internal pilot are usually well specified but targets vary widely. For the actual criteria, red/amber/green systems have increased in popularity in recent years. Trials should justify the targets they have set, especially where targets are low.
Journal Article
Reporting and communication of sample size calculations in adaptive clinical trials: a review of trial protocols and grant applications
by Dimairo, Munyaradzi, Julious, Steven A., Zhang, Qiang
in Adaptation, Adaptive Clinical Trials as Topic - methods, Adaptive Clinical Trials as Topic - statistics & numerical data
2024
Background
An adaptive design allows modifying the design based on accumulated data while maintaining trial validity and integrity. The final sample size may be unknown when designing an adaptive trial. It is therefore important to consider what sample size is used in the planning of the study and how that is communicated to add transparency to the understanding of the trial design and facilitate robust planning. In this paper, we reviewed trial protocols and grant applications on the sample size reporting for randomised adaptive trials.
Method
We searched protocols of randomised trials with comparative objectives on ClinicalTrials.gov (01/01/2010 to 31/12/2022). Contemporary eligible grant applications accessed from UK publicly funded researchers were also included. Suitable records of adaptive designs were reviewed, and key information was extracted and descriptively analysed.
Results
We identified 439 records, and 265 trials were eligible. Of these, 164 (61.9%) and 101 (38.1%) were sponsored by industry and public sectors, respectively, with 169 (63.8%) of all trials using a group sequential design although trial adaptations used were diverse.
The maximum and minimum sample sizes were the most commonly reported or directly inferred (n = 199, 75.1%). The sample size assuming no adaptation would be triggered was usually set as the estimated target sample size in the protocol. However, of the 152 completed trials, 15 (9.9%) had their sample size increased and 33 (21.7%) had it reduced, triggered by trial adaptations.
The sample size calculation process was generally well reported (n = 216, 81.5%); however, the justification for the sample size calculation parameters was missing in 116 (43.8%) trials. Less than half gave sufficient information on the study design operating characteristics (n = 119, 44.9%).
Conclusion
Although the reporting of sample sizes varied, the maximum and minimum sample sizes were usually reported. Most of the trials were planned for estimated enrolment assuming no adaptation would be triggered, despite the fact that around a third of completed trials changed their sample size. The sample size calculation was generally well reported, but the justification of sample size calculation parameters and the reporting of the statistical behaviour of the adaptive design could still be improved.
Journal Article
A study of target effect sizes in randomised controlled trials published in the Health Technology Assessment journal
by Cooper, Cindy L., Rothwell, Joanne C., Julious, Steven A.
in Biomedicine, Design, Effect size
2018
Background
When designing a randomised controlled trial (RCT), an important consideration is the sample size required. This is calculated from several components; one of which is the target difference. This study aims to review the currently reported methods of elicitation of the target difference as well as to quantify the target differences used in Health Technology Assessment (HTA)-funded trials.
Methods
Trials were identified from the National Institute for Health Research Health Technology Assessment journal. A total of 177 RCTs published between 2006 and 2016 were assessed for eligibility. Eligibility was established by the design of the trial and the quality of data available. The trial designs were parallel-group, superiority RCTs with a continuous primary endpoint. Data were extracted and the standardised anticipated and observed effect size estimates were calculated. Exclusion criteria were based on trials not providing enough detail in the sample size calculation and results, and trials not being of a parallel-group, superiority design.
Results
A total of 107 RCTs were included in the study from 102 reports. The most commonly reported method for effect size derivation was a review of evidence and use of previous research (52.3%). This was common across all clinical areas. The median standardised target effect size was 0.30 (interquartile range: 0.20–0.38), with the median standardised observed effect size 0.11 (IQR 0.05–0.29). The maximum anticipated and observed effect sizes were 0.76 and 1.18, respectively. Only two trials had anticipated target values above 0.60.
Conclusion
The most commonly reported method of elicitation of the target effect size is previous published research. The average target effect size was 0.3.
A clear distinction between the target difference and the minimum clinically important difference is recommended when designing a trial. Transparent explanation of target difference elicitation is advised, with multiple methods, including a review of evidence and opinion-seeking, recommended as the preferred approaches to effect size quantification.
Journal Article
An investigation of the constancy of effect in Cochrane systematic reviews in context with the assumptions for noninferiority trials
by Duro, Enass M., Julious, Steven A., Ren, Shijie
in Clinical trials, Cochrane reviews, Comparative analysis
2022
When designing a noninferiority (NI) study, one of the most important steps is to set the NI limit. The NI limit is an acceptable loss of efficacy for a new investigative treatment compared to an active control treatment, often standard care. The limit should be a value so small that the loss of efficacy is clinically negligible. One approach is to set the NI limit such that an effect over placebo can be shown through an indirect comparison with historical placebo-controlled trials in which the active control treatment was compared to placebo. In this context, the setting of the NI limit depends on three assumptions: assay sensitivity, bias minimisation, and the constancy assumption. The last of these assumes that the effect of the active control over placebo is constant over time. This paper aims to assess the constancy assumption in placebo-controlled trials.
Methods
236 Cochrane reviews of placebo-controlled trials published in 2015–2016 were collected and used to assess the relationship between the placebo response, the active treatment response, and the standardised mean difference (SMD) over time (year of publication).
Results
The analysis showed that both the size of the study and the treatment effect were associated with year of publication. The three main variables that affect the estimate of any future trial are the estimate from the meta-analysis of previous trials, the year difference in the meta-analysis, and the year the trial is conducted. The regression analysis showed that an increase of one unit in the point estimate of the historical meta-analysis would lead to an increase in the predicted estimate of a future trial on the SMD scale of 0.88. This result suggests that final trial results are 12% smaller than those from the meta-analysis of trials up to that point.
Conclusion
The results of this study indicate that the assumption of constancy of the treatment difference between the active control and placebo can be questioned. It is therefore important to consider the effect of time when estimating the treatment response if indirect comparisons are being used as the basis of an NI limit.
Journal Article
A retrospective analysis of conditional power assumptions in clinical trials with continuous or binary endpoints
by Walters, Stephen J., Edwards, Julia M., Julious, Steven A.
in Adaptive Clinical Trials as Topic, Adaptive designs, Biomedicine
2023
Background
Adaptive clinical trials may use conditional power (CP) to make decisions at interim analyses, requiring assumptions about the treatment effect for the remaining patients. It is critical that these assumptions, and the timings of these decisions, are understood by those using CP in decision-making.
Methods
Data for 21 outcomes from 14 published clinical trials were made available for re-analysis. CP curves for accruing outcome information were calculated and compared against pre-specified objective criteria for original and transformed versions of the trial data, using four future treatment effect assumptions: (i) observed current trend, (ii) hypothesised effect, (iii) 80% optimistic confidence limit, and (iv) 90% optimistic confidence limit.
Results
The hypothesised effect assumption met the objective criteria when the true effect was close to that planned, but not when it was smaller than planned. The opposite was seen with the current trend assumption. Optimistic confidence limit assumptions appeared to offer a compromise between the two, performing well against the objective criteria when the observed effect at the end of the trial was as planned or smaller.
Conclusion
The current trend assumption could be the preferable assumption when there is a wish to stop early for futility. Interim analyses could be undertaken as early as when 30% of patients have data available. Optimistic confidence limit assumptions should be considered when using CP to make trial decisions, although later interim timings should be considered where logistically feasible.
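The future treatment effect assumptions above can be made concrete with the standard B-value formulation of conditional power (a generic sketch under stated assumptions, not the paper's own re-analysis; the interim z-statistic and information fraction below are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist


def conditional_power(z_interim, t, theta, alpha=0.05):
    """Conditional power of crossing the final two-sided boundary,
    given the interim z-statistic at information fraction t and an
    assumed drift theta (the expected z-value at full information).
    Uses the B-value decomposition: B(t) = Z(t) * sqrt(t), with the
    remaining increment distributed N(theta * (1 - t), 1 - t)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    b = z_interim * sqrt(t)                # B-value at time t
    mean_rem = theta * (1 - t)             # expected remaining increment
    return 1 - nd.cdf((z_crit - b - mean_rem) / sqrt(1 - t))


# Hypothetical interim look: z = 1.0 at half the information.
z_obs, t = 1.0, 0.5
cp_trend = conditional_power(z_obs, t, theta=z_obs * sqrt(t) / t)  # (i) current trend
cp_hyp = conditional_power(z_obs, t, theta=3.24)                   # (ii) hypothesised effect (~90% power drift)
print(round(cp_trend, 3), round(cp_hyp, 3))
```

With a lukewarm interim result, the current trend assumption gives a much lower conditional power than the hypothesised effect assumption, which is exactly the divergence in futility behaviour the abstract describes.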
Journal Article
A systematic review of the “promising zone” design
by Walters, Stephen J., Kunz, Cornelia, Edwards, Julia M.
in Biomedicine, Clinical trials, Experimental design
2020
Introduction
Sample size calculations require assumptions regarding treatment response and variability. Incorrect assumptions can result in under- or overpowered trials, posing ethical concerns. Sample size re-estimation (SSR) methods investigate the validity of these assumptions and increase the sample size if necessary. The “promising zone” (Mehta and Pocock, Stat Med 30:3267–3284, 2011) concept is appealing to researchers for its design simplicity. However, it is still relatively new in application and has been a source of controversy.
Objectives
This research aims to synthesise current approaches and practical implementation of the promising zone design.
Methods
This systematic review comprehensively identifies the reporting of methodological research and of clinical trials using the promising zone design. Databases were searched according to a pre-specified search strategy, and pearl-growing techniques were implemented.
Results
The combined search methods identified 270 unique records; 171 were included in the review, of which 30 were trials. The median timing of the interim analysis was at 60% of the original target sample size (IQR 41–73%). Of the 15 completed trials, 7 increased their sample size. Only 21 studies reported the maximum sample size that would be considered, for which the median increase was 50% (IQR 35–100%).
Conclusions
The promising zone design is being implemented in a range of trials worldwide, albeit in low numbers. Identifying trials using the promising zone was difficult due to the lack of reporting of SSR methodology. Even when SSR methodology was reported, some trials had key interim analysis details missing, and only eight papers provided promising zone ranges.
Journal Article
An Investigation of the Shortcomings of the CONSORT 2010 Statement for the Reporting of Group Sequential Randomised Controlled Trials: A Methodological Systematic Review
by Cooper, Cindy L., Stevely, Abigail, Todd, Susan
in Adaptive search techniques, Analysis, Bias
2015
It can be argued that adaptive designs are underused in clinical research. We have explored concerns related to inadequate reporting of such trials, which may influence their uptake. Through a careful examination of the literature, we evaluated the standards of reporting of group sequential (GS) randomised controlled trials, one form of a confirmatory adaptive design.
We undertook a systematic review, by searching Ovid MEDLINE from the 1st January 2001 to 23rd September 2014, supplemented with trials from an audit study. We included parallel group, confirmatory, GS trials that were prospectively designed using a Frequentist approach. Eligible trials were examined for compliance in their reporting against the CONSORT 2010 checklist. In addition, as part of our evaluation, we developed a supplementary checklist to explicitly capture group sequential specific reporting aspects, and investigated how these are currently being reported.
Of the 284 screened trials, 68 (24%) were eligible. Most trials were published in “high impact” peer-reviewed journals. Examination of trials established that 46 (68%) were stopped early, predominantly either for futility or efficacy. Suboptimal reporting compliance was found in general items relating to: access to full trial protocols; methods to generate randomisation list(s); and details of randomisation concealment and its implementation. Benchmarking against the supplementary checklist, GS aspects were largely inadequately reported. Only 3 (7%) trials which stopped early reported use of statistical bias correction. Moreover, 52 (76%) trials failed to disclose methods used to minimise the risk of operational bias due to the knowledge or leakage of interim results. The occurrence of changes to trial methods and outcomes could not be determined in most trials, due to inaccessible protocols and amendments.
There are issues with the reporting of GS trials, particularly those specific to the conduct of interim analyses. Suboptimal reporting of bias correction methods could potentially imply most GS trials stopping early are giving biased results of treatment effects. As a result, research consumers may question credibility of findings to change practice when trials are stopped early. These issues could be alleviated through a CONSORT extension. Assurance of scientific rigour through transparent adequate reporting is paramount to the credibility of findings from adaptive trials. Our systematic literature search was restricted to one database due to resource constraints.
Journal Article