Catalogue Search | MBRL
Explore the vast range of titles available.
3,419 result(s) for "Periodicals as Topic - standards"
CONSORT 2010 statement: extension to randomised pilot and feasibility trials
by Hopewell, Sally; Campbell, Michael J; Bond, Christine M
in 90 Operations research, mathematical programming; 90B Operations research and management science; 92 Biology and other natural sciences
2016
The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply.

The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist.

The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to that in RCTs assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added.

The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number.

This article includes the 26 item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials.

Editor's note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.
Journal Article
The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed
2010
Objectives: To examine the reporting characteristics and methodological details of randomised trials indexed in PubMed in 2000 and 2006 and to assess whether the quality of reporting improved after publication of the Consolidated Standards of Reporting Trials (CONSORT) Statement in 2001.

Design: Comparison of two cross sectional investigations.

Study sample: All primary reports of randomised trials indexed in PubMed in December 2000 (n=519) and December 2006 (n=616), including parallel group, crossover, cluster, factorial, and split body study designs.

Main outcome measures: The proportion of general and methodological items reported, stratified by year and study design. Risk ratios with 95% confidence intervals were calculated to represent changes in reporting between 2000 and 2006.

Results: The majority of trials were two arm (379/519 (73%) in 2000 v 468/616 (76%) in 2006) parallel group studies (383/519 (74%) v 477/616 (78%)) published in specialty journals (482/519 (93%) v 555/616 (90%)). In both 2000 and 2006, a median of 80 participants were recruited per parallel group trial. The proportion of articles that reported drug trials decreased between 2000 and 2006 (from 393/519 (76%) to 356/616 (58%)), whereas the proportion of surgery trials increased (51/519 (10%) v 128/616 (21%)). There was an increase between 2000 and 2006 in the proportion of trial reports that included details of the primary outcome (risk ratio (RR) 1.18, 95% CI 1.04 to 1.33), sample size calculation (RR 1.66, 95% CI 1.40 to 1.95), and the methods of random sequence generation (RR 1.62, 95% CI 1.32 to 1.97) and allocation concealment (RR 1.40, 95% CI 1.11 to 1.76). There was no difference in the proportion of trials that provided specific details on who was blinded (RR 0.91, 95% CI 0.75 to 1.10).

Conclusions: Reporting of several important aspects of trial methods improved between 2000 and 2006; however, the quality of reporting remains well below an acceptable level. Without complete and transparent reporting of how a trial was designed and conducted, it is difficult for readers to assess its conduct and validity.
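The changes above are expressed as risk ratios with 95% confidence intervals. As a rough illustration of how such an interval can be obtained from two proportions, here is a minimal Python sketch using the standard log-scale Wald interval (an assumption; the authors' exact method may differ slightly), applied to the surgery-trial counts quoted in the abstract:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of proportions a/n1 vs b/n2 with a log-scale
    Wald 95% confidence interval."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for independent binomial counts
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Surgery trials: 128/616 indexed in 2006 vs 51/519 in 2000.
rr, lo, hi = risk_ratio_ci(128, 616, 51, 519)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# → RR 2.11 (95% CI 1.56 to 2.86)
```

An interval excluding 1, as here, indicates a change unlikely to be explained by sampling variation alone.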
Journal Article
There is no reliable evidence that providing authors with customized article templates including items from reporting guidelines improves completeness of reporting: the GoodReports randomized trial (GRReaT)
by Harwood, James; Collins, Gary S; de Beyer, Jennifer Anne
in Authors; Authorship; Authorship - standards
2025
Background
Although medical journals endorse reporting guidelines, authors often struggle to find and use the right one for their study type and topic. The UK EQUATOR Centre developed the GoodReports website to direct authors to appropriate guidance. Pilot data suggested that authors did not improve their manuscripts when advised to use a particular reporting guideline by GoodReports.org at journal submission stage. User feedback suggested the checklist format of most reporting guidelines does not encourage use during manuscript writing. We tested whether providing customized reporting guidance within writing templates for use throughout the writing process resulted in clearer and more complete reporting than only giving advice on which reporting guideline to use.
Design and methods
GRReaT was a two-group parallel 1:1 randomized trial with a target sample size of 206. Participants were lead authors at an early stage of writing up a health-related study. Eligible study designs were cohort, cross-sectional, or case-control study, randomized trial, and systematic review. After randomization, the intervention group received an article template including items from the appropriate reporting guideline and links to explanations and examples. The control group received a reporting guideline recommendation and general advice on reporting. Participants sent their completed manuscripts to the GRReaT team before submitting for publication, to be assessed for completeness of each item in the title, methods, and results sections of the corresponding reporting guideline. The primary outcome was reporting completeness against the corresponding reporting guideline. Participants were not blinded to allocation; assessors were blind to group allocation. As a recruitment incentive, all participants received a feedback report identifying missing or inadequately reported items in these three sections.
Results
Between 9 June 2021 and 30 June 2023, we randomized 130 participants, 65 to the intervention and 65 to the control group. We present findings from the assessment of reporting completeness for the 37 completed manuscripts we received, 18 in the intervention group and 19 in the control group. The mean (standard deviation) proportion of completely reported items from the title, methods, and results sections of the manuscripts (primary outcome) was 0.57 (0.18) in the intervention group and 0.50 (0.17) in the control group. The mean difference between the two groups was 0.069 (95% CI -0.046 to 0.184; p = 0.231). In the sensitivity analysis, when partially reported items were counted as completely reported, the mean (standard deviation) proportion of completely reported items was 0.75 (0.15) in the intervention group and 0.71 (0.11) in the control group. The mean difference between the two groups was 0.036 (95% CI -0.127 to 0.055; p = 0.423).
Conclusion
As the dropout rate was higher than expected, we did not reach the recruitment target, and the difference between groups was not statistically significant. We therefore found no evidence that providing authors with customized article templates including items from reporting guidelines increases reporting completeness. We discuss the challenges faced when conducting the trial and suggest how future research testing innovative ways of improving reporting could be designed to improve recruitment and reduce dropouts.
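The interval reported for the primary outcome can be approximately reconstructed from the rounded summary statistics. Here is a minimal Python sketch using a Welch-style interval for a difference in means; note the t critical value is hardcoded for roughly 34 degrees of freedom, and the published endpoints use the unrounded means, so the result differs slightly:

```python
import math

def welch_ci_95(n1, m1, s1, n2, m2, s2):
    """Approximate 95% CI for a difference in means from summary
    statistics, assuming unequal variances (Welch)."""
    diff = m1 - m2
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = math.sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    t_crit = 2.03  # two-sided t critical value for ~34 df (from tables)
    return diff, diff - t_crit * se, diff + t_crit * se, df

# GRReaT primary outcome: 18 intervention manuscripts (mean 0.57, SD 0.18)
# vs 19 control manuscripts (mean 0.50, SD 0.17).
diff, lo, hi, df = welch_ci_95(18, 0.57, 0.18, 19, 0.50, 0.17)
print(f"diff {diff:.2f} (95% CI {lo:.2f} to {hi:.2f}), df ~ {df:.0f}")
```

Because the interval comfortably spans zero, the data are consistent with no effect of the templates, matching the trial's conclusion.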
Journal Article
Impact of an Online Writing Aid Tool for Writing a Randomized Trial Report: The COBWEB (Consort-Based WEB Tool) Randomized Controlled Trial
2015
BACKGROUND: Incomplete reporting is a frequent waste in research. Our aim was to evaluate the impact of a writing aid tool (WAT) based on the CONSORT statement and its extension for non-pharmacologic treatments on the completeness of reporting of randomized controlled trials (RCTs). METHODS: We performed a 'split-manuscript' RCT with blinded outcome assessment. Participants were masters and doctoral students in public health. They were asked to write, over a 4-hour period, the methods section of a manuscript based on a real RCT protocol, with a different protocol provided to each participant. Methods sections were divided into six domains: 'trial design', 'randomization', 'blinding', 'participants', 'interventions', and 'outcomes'. Participants had to draft all six domains, with access to the WAT for a random three of the six. The random sequence was computer-generated and concealed. For each domain, the WAT comprised reminders of the corresponding CONSORT item(s), bullet points detailing all the key elements to be reported, and examples of good reporting. The control intervention consisted of no reminders. The primary outcome was the mean global score for completeness of reporting (scale 0-10) for all domains written with or without the WAT. RESULTS: Forty-one participants wrote 41 different manuscripts of RCT methods sections, corresponding to 246 domains (six for each of the 41 protocols). All domains were analyzed. For the primary outcome, the mean (SD) global score for completeness of reporting was higher with than without use of the WAT: 7.1 (1.2) versus 5.0 (1.6), a mean difference of 2.1 (95% CI 1.5 to 2.7; P < 0.01). Completeness of reporting was significantly higher with the WAT for all domains except blinding and outcomes. CONCLUSION: Use of the WAT could improve the completeness of manuscripts reporting the results of RCTs.
TRIAL REGISTRATION: ClinicalTrials.gov ( http://clinicaltrials.gov ) NCT02127567, first received April 29, 2014.
Journal Article
Improving the Reporting Quality of Nonrandomized Evaluations of Behavioral and Public Health Interventions: The TREND Statement
by Crepaz, Nicole; Lyles, Cynthia; TREND Group
in Acquired immune deficiency syndrome; AIDS; Behavior modification
2004
Developing an evidence base for making public health decisions will require using data from evaluation studies with randomized and nonrandomized designs. Assessing individual studies and using studies in quantitative research syntheses require transparent reporting of the study, with sufficient detail and clarity to readily see differences and similarities among studies in the same area. The Consolidated Standards of Reporting Trials (CONSORT) statement provides guidelines for transparent reporting of randomized clinical trials. We present the initial version of the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement. These guidelines emphasize the reporting of theories used and descriptions of intervention and comparison conditions, research design, and methods of adjusting for possible biases in evaluation studies that use nonrandomized designs.
Journal Article
A study of target effect sizes in randomised controlled trials published in the Health Technology Assessment journal
by Cooper, Cindy L.; Rothwell, Joanne C.; Julious, Steven A.
in Biomedicine; Design; Effect size
2018
Background
When designing a randomised controlled trial (RCT), an important consideration is the sample size required. This is calculated from several components, one of which is the target difference. This study aims to review the currently reported methods of elicitation of the target difference and to quantify the target differences used in Health Technology Assessment (HTA)-funded trials.
Methods
Trials were identified from the National Institute for Health Research Health Technology Assessment journal. A total of 177 RCTs published between 2006 and 2016 were assessed for eligibility. Eligibility was established by the design of the trial and the quality of data available. The trial designs were parallel-group, superiority RCTs with a continuous primary endpoint. Data were extracted and the standardised anticipated and observed effect size estimates were calculated. Exclusion criteria were trials not providing enough detail in the sample size calculation and results, and trials not of parallel-group, superiority design.
Results
A total of 107 RCTs were included in the study from 102 reports. The most commonly reported method for effect size derivation was a review of evidence and use of previous research (52.3%). This was common across all clinical areas. The median standardised target effect size was 0.30 (interquartile range: 0.20–0.38), with the median standardised observed effect size 0.11 (IQR 0.05–0.29). The maximum anticipated and observed effect sizes were 0.76 and 1.18, respectively. Only two trials had anticipated target values above 0.60.
Conclusion
The most commonly reported method of elicitation of the target effect size is previous published research. The median target effect size was 0.30.

A clear distinction between the target difference and the minimum clinically important difference is recommended when designing a trial. Transparent explanation of target difference elicitation is advised, with multiple methods, including a review of evidence and opinion-seeking, recommended for effect size quantification.
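The gap between the median target (0.30) and observed (0.11) standardised effect sizes matters because the target drives the sample size calculation. A minimal sketch using the standard normal-approximation formula for a two-arm trial with a continuous endpoint (an illustration, not the calculation any specific HTA trial used):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.90):
    """Approximate sample size per arm for a two-arm parallel-group
    superiority trial, given a standardised effect size d
    (normal approximation: n = 2 * (z_a + z_b)^2 / d^2)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance level
    z_b = z(power)           # desired power
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

# Median standardised target vs observed effect sizes from the review:
print(n_per_arm(0.30))  # → 234 per arm at the target d = 0.30
print(n_per_arm(0.11))  # → 1737 per arm at the observed d = 0.11
```

Designing to an optimistic target of 0.30 when the realistic effect is nearer 0.11 thus yields a trial several times smaller than one powered for the effect actually observed.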
Journal Article
Reporting quality of randomized controlled trial abstracts in the seven highest-ranking anesthesiology journals
2018
Background
The aim of this study was to assess adherence to the Consolidated Standards of Reporting Trials (CONSORT) extension for Abstracts (CONSORT-A) in the highest-impact anesthesiology journals.
Methods
This was a descriptive, cross-sectional, methodological study. We analyzed whether abstracts of randomized controlled trials (RCTs) published in the highest-impact anesthesiology journals between 2014 and 2016 adhered to CONSORT-A. RCT abstracts published in the seven first-quartile journals in the Journal Citation Reports (JCR) category Anesthesiology were analyzed. The primary outcome was adherence to the 17-item CONSORT-A checklist. Secondary outcomes were adherence to individual checklist items and adherence to the checklist across the individual journals.
Results
Search results yielded 688 records, of which 622 abstracts were analyzed. Analysis of the total score of the CONSORT-A checklist indicated a per-article median of 41% (interquartile range 35-53%). The European Journal of Anaesthesiology had the highest overall adherence rate (53%), whereas Anaesthesia had the lowest (32%). The lowest adherence was observed for the following items: Trial design (18%), Contact of the authors as an e-mail address of the corresponding author (16%), Recruitment status (9%), Number of participants analyzed (8%), Randomization (3%), and Funding (0.2%).
Conclusions
RCT abstracts published in top anesthesiology journals are poorly reported, providing insufficient information to readers. Interventions are needed to increase adherence to relevant reporting checklists for writing RCT abstracts.
Journal Article
General medical publications during COVID-19 show increased dissemination despite lower validation
2021
The COVID-19 pandemic has yielded an unprecedented quantity of new publications, contributing to an overwhelming volume of information and to the rapid dissemination of less stringently validated findings. Yet a formal analysis of how the medical literature changed during the pandemic is lacking. In this analysis, we aimed to quantify how scientific publications changed at the outset of the COVID-19 pandemic.
We performed a cross-sectional bibliometric study of published studies in four high-impact medical journals to identify differences in the characteristics of COVID-19 related publications compared to non-pandemic studies. Original investigations related to SARS-CoV-2 and COVID-19 published in March and April 2020 were identified and compared to non-COVID-19 research publications over the same two-month period in 2019 and 2020. Extracted data included publication characteristics, study characteristics, author characteristics, and impact metrics. Our primary measure was principal component analysis (PCA) of publication characteristics and impact metrics across groups.
We identified 402 publications that met inclusion criteria: 76 were related to COVID-19; 154 and 172 were non-COVID publications over the same period in 2020 and 2019, respectively. PCA utilizing the collected bibliometric data revealed segregation of the COVID-19 literature subset from both groups of non-COVID literature (2019 and 2020). COVID-19 publications were more likely to describe prospective observational (31.6%) or case series (41.8%) studies without industry funding as compared with non-COVID articles, which were represented primarily by randomized controlled trials (32.5% and 36.6% in the non-COVID literature from 2020 and 2019, respectively).
In this cross-sectional study of publications in four general medical journals, COVID-related articles were significantly different from non-COVID articles based on article characteristics and impact metrics. COVID-related studies were generally shorter articles reporting observational studies with less literature cited and fewer study sites, suggestive of more limited scientific support. They nevertheless had much higher dissemination.
Journal Article
Reporting quality of randomised controlled trial abstracts among high-impact general medical journals: a review and analysis
by Douglas, Kevin; Callender, David; Andrews, Mary
in Abstracting and Indexing - standards; Agreements; Citation management software
2016
Objective: The aim of this study was to assess adherence to the Consolidated Standards of Reporting Trials (CONSORT) for Abstracts by five high-impact general medical journals and to assess whether the quality of reporting was homogeneous across these journals.

Design: This is a descriptive, cross-sectional study.

Setting: Randomised controlled trial (RCT) abstracts in five high-impact general medical journals.

Participants: We used up to 100 RCT abstracts published between 2011 and 2014 from each of the following journals: The New England Journal of Medicine (NEJM), the Annals of Internal Medicine (Annals IM), The Lancet, the British Medical Journal (The BMJ) and the Journal of the American Medical Association (JAMA).

Main outcome: The primary outcome was per cent overall adherence to the 19-item CONSORT for Abstracts checklist. Secondary outcomes included per cent adherence in checklist subcategories and assessing homogeneity of reporting quality across the individual journals.

Results: Search results yielded 466 abstracts, 3 of which were later excluded as they were not RCTs. Analysis was performed on 463 abstracts (97 from NEJM, 66 from Annals IM, 100 from The Lancet, 100 from The BMJ, 100 from JAMA). Analysis of all scored items showed an overall adherence of 67% (95% CI 66% to 68%) to the CONSORT for Abstracts checklist. The Lancet had the highest overall adherence rate (78%; 95% CI 76% to 80%), whereas NEJM had the lowest (55%; 95% CI 53% to 57%). Adherence rates to 8 of the checklist items differed by >25% between journals.

Conclusions: Among the five highest impact general medical journals, there is variable and incomplete adherence to the CONSORT for Abstracts reporting checklist of randomised trials, with substantial differences between individual journals. Lack of adherence to the CONSORT for Abstracts reporting checklist by high-impact medical journals impedes critical appraisal of important studies. We recommend diligent assessment of adherence to reporting guidelines by authors, reviewers and editors to promote transparency and unbiased reporting of abstracts.
Journal Article
Impact of a web-based tool (WebCONSORT) to improve the reporting of randomised trials: results of a randomised controlled trial
by Moher, David; Barbour, Ginny; Boutron, Isabelle
in Biomedicine; Checklist - standards; Child & adolescent psychiatry
2016
Background
The CONSORT Statement is an evidence-informed guideline for reporting randomised controlled trials. A number of extensions have been developed that specify additional information to report for more complex trials. The aim of this study was to evaluate the impact of using a simple web-based tool (WebCONSORT, which incorporates a number of different CONSORT extensions) on the completeness of reporting of randomised trials published in biomedical publications.
Methods
We conducted a parallel group randomised trial. Journals which endorsed the CONSORT Statement (i.e. referred to it in their Instructions to Authors) but did not actively implement it (i.e. did not require authors to submit a completed CONSORT checklist) were invited to participate. Authors of randomised trials were requested by the editor to use the web-based tool at the manuscript revision stage. Authors registering to use the tool were randomised (centralised, computer generated) to WebCONSORT or control. In the WebCONSORT group, authors had access to a tool allowing them to combine the different CONSORT extensions relevant to their trial and generate a customised checklist and flow diagram to submit to the editor. In the control group, authors had access only to a CONSORT flow diagram generator. Authors, journal editors, and outcome assessors were blinded to the allocation. The primary outcome was the proportion of CONSORT items (main and extensions) reported in each article post revision.
Results
A total of 46 journals actively recruited authors into the trial (25 March 2013 to 22 September 2015); 324 author manuscripts were randomised (WebCONSORT n = 166; control n = 158), of which 197 were reports of randomised trials (WebCONSORT n = 94; control n = 103). Over a third (39%; n = 127) of registered manuscripts were excluded from the analysis, mainly because the reported study was not a randomised trial. Of those included in the analysis, the most common CONSORT extensions selected were non-pharmacologic (WebCONSORT n = 43; control n = 50), pragmatic (n = 20; n = 16) and cluster (n = 10; n = 9). In a quarter of manuscripts, authors either wrongly selected an extension or failed to select the right extension when registering their manuscript on the WebCONSORT study site. Overall, there was no important difference between WebCONSORT (mean score 0.51) and control (0.47) in the proportion of CONSORT and CONSORT extension items reported pertaining to a given study (mean difference 0.04; 95% CI −0.02 to 0.10).
Conclusions
This study failed to show a beneficial effect of a customised web-based CONSORT checklist to help authors prepare more complete trial reports. However, the exclusion of a large number of inappropriately registered manuscripts meant we had less precision than anticipated to detect a difference. Better education is needed, earlier in the publication process, for both authors and journal editorial staff on when and how to implement CONSORT and, in particular, CONSORT-related extensions.
Trial registration: ClinicalTrials.gov NCT01891448 [registered 24 May 2013].
Journal Article