Catalogue Search | MBRL
12,219 result(s) for "Publishing - standards"
The Adaptive designs CONSORT Extension (ACE) statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design
2020
Abstract
Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.
This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process.
The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.
The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.
Journal Article
Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison
2017
Background
The Internet has transformed scholarly publishing, most notably, by the introduction of open access publishing. Recently, there has been a rise of online journals characterized as ‘predatory’, which actively solicit manuscripts and charge publications fees without providing robust peer review and editorial services. We carried out a cross-sectional comparison of characteristics of potential predatory, legitimate open access, and legitimate subscription-based biomedical journals.
Methods
On July 10, 2014, scholarly journals from each of the following groups were identified – potential predatory journals (source: Beall’s List), presumed legitimate, fully open access journals (source: PubMed Central), and presumed legitimate subscription-based (including hybrid) journals (source: Abridged Index Medicus). MEDLINE journal inclusion criteria were used to screen and identify biomedical journals from within the potential predatory journals group. One hundred journals from each group were randomly selected. Journal characteristics (e.g., website integrity, look and feel, editors and staff, editorial/peer review process, instructions to authors, publication model, copyright and licensing, journal location, and contact) were collected by one assessor and verified by a second. Summary statistics were calculated.
Results
Ninety-three predatory journals, 99 open access, and 100 subscription-based journals were analyzed; exclusions were due to website unavailability. Many more predatory journals' homepages contained spelling errors (61/93, 66%) and distorted or potentially unauthorized images (59/93, 63%) compared to open access journals (6/99, 6% and 5/99, 5%, respectively) and subscription-based journals (3/100, 3% and 1/100, 1%, respectively). Thirty-one (33%) predatory journals promoted a bogus impact metric – the Index Copernicus Value – versus three (3%) open access journals and no subscription-based journals. Nearly three quarters (n = 66, 73%) of predatory journals had editors or editorial board members whose affiliation with the journal was unverified versus two (2%) open access journals and one (1%) subscription-based journal in which this was the case. Predatory journals charge a considerably smaller publication fee (median $100 USD, IQR $63–$150) than open access journals ($1865 USD, IQR $800–$2205) and subscription-based hybrid journals ($3000 USD, IQR $2500–$3000).
Conclusions
We identified 13 evidence-based characteristics by which predatory journals may potentially be distinguished from presumed legitimate journals. These may be useful for authors who are assessing journals for possible submission or for others, such as universities evaluating candidates’ publications as part of the hiring process.
Journal Article
CONSORT 2025 statement: Updated guideline for reporting randomised trials
by
Aggarwal, Rakesh
,
Schulz, Kenneth
,
Siegfried, Nandi
in
Check lists
,
Checklist - standards
,
Clinical trials
2025
Background
Well designed and properly executed randomised trials are considered the most reliable evidence on the benefits of healthcare interventions. However, there is overwhelming evidence that the quality of reporting is not optimal. The CONSORT (Consolidated Standards of Reporting Trials) statement was designed to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996, then updated in 2001 and 2010. Here, we present the updated CONSORT 2025 statement, which aims to account for recent methodological advancements and feedback from end users.
Methods
We conducted a scoping review of the literature and developed a project-specific database of empirical and theoretical evidence related to CONSORT, to generate a list of potential changes to the checklist. The list was enriched with recommendations provided by the lead authors of existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment), other related reporting guidelines (TIDieR) and recommendations from other sources (e.g., personal communications). The list of potential changes to the checklist was assessed in a large, international, online, three-round Delphi survey involving 317 participants and discussed at a two-day online expert consensus meeting of 30 invited international experts.
Results
We have made substantive changes to the CONSORT checklist. We added seven new checklist items, revised three items, deleted one item, and integrated several items from key CONSORT extensions. We also restructured the CONSORT checklist, with a new section on open science. The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomised trial and a diagram for documenting the flow of participants through the trial. To facilitate implementation of CONSORT 2025, we have also developed an expanded version of the CONSORT 2025 checklist, with bullet points eliciting critical elements of each item.
Conclusions
Authors, editors, reviewers, and other potential users should use CONSORT 2025 when writing and evaluating manuscripts of randomised trials to ensure that trial reports are clear and transparent.
Journal Article
Prospective registration and reporting of trial number in randomised clinical trials: global cross sectional study of the adoption of ICMJE and Declaration of Helsinki recommendations
by
Al-Durra, Mustafa
,
Seto, Emily
,
Cafazzo, Joseph A
in
Citation management software
,
Clinical trials
,
Cross-Sectional Studies
2020
Abstract
Objectives
To evaluate compliance with prospective registration and inclusion of the trial registration number (TRN) in published randomised controlled trials (RCTs), and to analyse the rationale behind, and detect selective registration bias in, retrospective trial registration.
Design
Cross sectional analysis.
Data sources
PubMed, the World Health Organization's 17 trial registries, the University of Toronto library, the International Committee of Medical Journal Editors (ICMJE) list of member journals, and the InCites Journal Citation Reports.
Study selection criteria
RCTs registered in any WHO trial registry and published in any PubMed indexed journal in 2018.
Results
This study included 10 500 manuscripts published in 2105 journals. Overall, 71.2% (7473/10500) reported the TRN and 41.7% (3013/7218) complied with prospective trial registration. The univariable and multivariable analyses reported significant relations (P<0.05) between reporting the TRN and the impact factor and ICMJE membership of the publishing journal. A significant relation (P<0.05) was also observed between prospective trial registration and the registry, region, condition, funding, trial size, interval between paper registration and submission dates, impact factor, and ICMJE membership of the publishing journal. A manuscript published in an ICMJE member journal was 5.8 times more likely to include the TRN (odds ratio 5.8, 95% confidence interval 4.0 to 8.2), and a published trial was 1.8 times more likely to be registered prospectively (1.8, 1.5 to 2.2) when published in an ICMJE member journal compared with other journals. This study detected a new form of bias, selective registration bias, with a higher proportion (85.2% (616/723)) of trials registered retrospectively within a year of submission for publication. Higher rates of retrospective registration were observed within the first three to eight weeks after enrolment of study participants. Of the 286 RCTs registered retrospectively and published in an ICMJE member journal, only 2.8% (8/286) of the authors included a statement justifying the delayed registration. Reasons included lack of awareness, error of omission, and the registration process taking longer than anticipated.
Conclusions
This study found high compliance in reporting of the TRN for trial papers published in ICMJE member journals, but prospective trial registration was low.
Journal Article
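The odds ratios quoted in the abstract above summarise how much more likely TRN reporting is in ICMJE member journals than in other journals. As a rough illustration of where such a figure comes from, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts are hypothetical placeholders, not the study's data, and the study's reported estimates come from its own (multivariable) models.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: TRN reported / not reported in ICMJE member journals;
    c, d: TRN reported / not reported in other journals (hypothetical counts)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustrative, made-up counts only
print(odds_ratio_ci(900, 100, 6000, 3500))  # roughly (5.25, 4.25, 6.48)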
Practical Guidance for Knowledge Synthesis: Scoping Review Methods
by
Lockwood, Craig
,
Pap, Robin
,
dos Santos, Kelli Borgess
in
evidence-based practice
,
Knowledge
,
methods
2019
Scoping reviews are a useful approach to synthesizing research evidence, although their objectives and methods differ from those of systematic reviews. Some confusion nevertheless persists around how to plan and prepare a scoping review so that the completed review complies with best practice in methods and meets international standards for reporting. This paper describes how to use available guidance to ensure a scoping review project meets global standards, has transparent methods, and promotes readability through the use of innovative approaches to data analysis and presentation. We address common issues such as which projects are better suited to systematic reviews, how to avoid an inadequate search and/or poorly reported search strategy, poorly described methods and lack of transparency, and how to plan and present results that are clear, visually compelling, and accessible to readers. Effective pre-planning, adherence to the protocol, and detailed consideration of how the results will be communicated to the readership are critical. The aim of this article is to provide clarity about what is meant by conceptual clarity and how pre-planning enables review authors to produce scoping reviews that are of high quality, reliable, and readily publishable.
Journal Article
Emerging Standards for Enhanced Publications and Repository Technology
by
Vanderfeesten, Maurice
,
Hochstenbach, Patrick
,
Bijsterbosch, Magchiel
in
Digital libraries
,
Electronic publications
,
Electronic publishing
2009,2025
Emerging Standards for Enhanced Publications and Repository Technology serves as a technology watch on the rapidly evolving world of digital publication. It provides an up-to-date overview of the technical issues underlying the development of universally accessible publications, their elemental components and linked information. More specifically, it deals with questions such as how to bring together the communities of Current Research Information Systems (CRIS) and the Common European Research Information Format (CERIF). Case studies like EGEE, DILIGENT and DRIVER are analyzed, as well as implementations in projects in Ireland, Denmark and The Netherlands. Interoperability is the keyword in this context, and this book introduces new standards and concepts used in the design of envelopes and packages, overlays and feeds, embedding, publishing formats, Web services and service-oriented architecture. It is a must-read for quick and comprehensive orientation.
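The "overlays and feeds" mentioned in the blurb above are typically syndication documents, such as Atom feeds or OAI-ORE resource maps, that tie the components of an enhanced publication together. As a minimal sketch, assuming a repository that exposes a plain Atom feed for a publication, the code below fetches such a feed and lists its component resources; the URL is hypothetical, and real repositories may expose different packaging formats.

from urllib.request import urlopen
import xml.etree.ElementTree as ET

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}
FEED_URL = "https://repository.example.org/publications/1234/atom"  # hypothetical endpoint

def list_components(feed_url):
    """Return (title, link) pairs for every entry in an Atom feed."""
    with urlopen(feed_url) as resp:
        root = ET.parse(resp).getroot()
    components = []
    for entry in root.findall("atom:entry", ATOM_NS):
        title = entry.findtext("atom:title", default="", namespaces=ATOM_NS)
        link = entry.find("atom:link", ATOM_NS)
        components.append((title, link.get("href") if link is not None else ""))
    return components

if __name__ == "__main__":
    for title, href in list_components(FEED_URL):
        print(f"{title}: {href}")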
Improving the reporting of pragmatic trials: an extension of the CONSORT statement
2008
Background
The CONSORT statement is intended to improve reporting of randomised controlled trials and focuses on minimising the risk of bias (internal validity). The applicability of a trial's results (generalisability or external validity) is also important, particularly for pragmatic trials. A pragmatic trial (a term first used in 1967 by Schwartz and Lellouch) can be broadly defined as a randomised controlled trial whose purpose is to inform decisions about practice. This extension of the CONSORT statement is intended to improve the reporting of such trials and focuses on applicability.
Methods
At two, two-day meetings held in Toronto in 2005 and 2008, we reviewed the CONSORT statement and its extensions, the literature on pragmatic trials and applicability, and our experiences in conducting pragmatic trials.
Recommendations
We recommend extending eight CONSORT checklist items for reporting of pragmatic trials: the background, participants, interventions, outcomes, sample size, blinding, participant flow, and generalisability of the findings. These extensions are presented, along with illustrative examples of reporting, and an explanation of each extension. Adherence to these reporting criteria will make it easier for decision makers to judge how applicable the results of randomised controlled trials are to their own conditions. Empirical studies are needed to ascertain the usefulness and comprehensiveness of these CONSORT checklist item extensions. In the meantime we recommend that those who support, conduct, and report pragmatic trials should use this extension of the CONSORT statement to facilitate the use of trial results in decisions about health care.
Journal Article
There is no reliable evidence that providing authors with customized article templates including items from reporting guidelines improves completeness of reporting: the GoodReports randomized trial (GRReaT)
by
Harwood, James
,
Collins, Gary S
,
de Beyer, Jennifer Anne
in
Authors
,
Authorship
,
Authorship - standards
2025
Background
Although medical journals endorse reporting guidelines, authors often struggle to find and use the right one for their study type and topic. The UK EQUATOR Centre developed the GoodReports website to direct authors to appropriate guidance. Pilot data suggested that authors did not improve their manuscripts when advised to use a particular reporting guideline by GoodReports.org at journal submission stage. User feedback suggested the checklist format of most reporting guidelines does not encourage use during manuscript writing. We tested whether providing customized reporting guidance within writing templates for use throughout the writing process resulted in clearer and more complete reporting than only giving advice on which reporting guideline to use.
Design and methods
GRReaT was a two-group parallel 1:1 randomized trial with a target sample size of 206. Participants were lead authors at an early stage of writing up a health-related study. Eligible study designs were cohort, cross-sectional, or case-control studies, randomized trials, and systematic reviews. After randomization, the intervention group received an article template including items from the appropriate reporting guideline and links to explanations and examples. The control group received a reporting guideline recommendation and general advice on reporting. Participants sent their completed manuscripts to the GRReaT team before submitting them for publication, and these were assessed for completeness of each item in the title, methods, and results sections of the corresponding reporting guideline. The primary outcome was reporting completeness against the corresponding reporting guideline. Participants were not blinded to allocation; assessors were blinded to group allocation. As a recruitment incentive, all participants received a feedback report identifying missing or inadequately reported items in these three sections.
Results
Between 9 June 2021 and 30 June 2023, we randomized 130 participants, 65 to the intervention and 65 to the control group. We present findings from the assessment of reporting completeness for the 37 completed manuscripts we received, 18 in the intervention group and 19 in the control group. The mean (standard deviation) proportion of completely reported items from the title, methods, and results sections of the manuscripts (primary outcome) was 0.57 (0.18) in the intervention group and 0.50 (0.17) in the control group. The mean difference between the two groups was 0.069 (95% CI -0.046 to 0.184; p = 0.231). In the sensitivity analysis, when partially reported items were counted as completely reported, the mean (standard deviation) proportion of completely reported items was 0.75 (0.15) in the intervention group and 0.71 (0.11) in the control group. The mean difference between the two groups was 0.036 (95% CI -0.127 to 0.055; p = 0.423).
Conclusion
As the dropout rate was higher than expected, we did not reach the recruitment target, and the difference between groups was not statistically significant. We therefore found no evidence that providing authors with customized article templates including items from reporting guidelines increases reporting completeness. We discuss the challenges faced when conducting the trial and suggest how future research testing innovative ways of improving reporting could be designed to improve recruitment and reduce dropouts.
Journal Article
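For readers who want to see how the headline difference above relates to the group summaries, here is a small sketch that recomputes the between-group difference and an approximate 95% confidence interval from the reported means, standard deviations, and group sizes (0.57, SD 0.18, n = 18 versus 0.50, SD 0.17, n = 19). It assumes a Welch-type standard error with a normal critical value, which only approximately reproduces the interval reported in the trial; the authors' own analysis may have used a different model.

import math
from statistics import NormalDist

def diff_ci(m1, sd1, n1, m2, sd2, n2, level=0.95):
    """Difference in means with an approximate two-sided confidence interval."""
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)    # Welch standard error
    z = NormalDist().inv_cdf(0.5 + level / 2)    # normal stand-in for the t quantile
    return diff, diff - z * se, diff + z * se

# Intervention: mean 0.57, SD 0.18, n = 18; control: mean 0.50, SD 0.17, n = 19
print(diff_ci(0.57, 0.18, 18, 0.50, 0.17, 19))   # approximately (0.07, -0.04, 0.18)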
The rise of predatory publishing and journals
by
O'Rorke, Rachael
,
Bhujel, Nabina
,
White, Christopher
in
Career advancement
,
Citation indexes
,
Dentistry
2024
Predatory publishing is a practice in which businesses offer illegitimate and unethical publishing services, generally using an open access model with very little or no peer review. These publishers also charge high article processing fees without providing the editorial and publishing standards expected in scientific disciplines. Predatory journals are an increasing phenomenon in dentistry, as in other areas of healthcare academic publishing. This can mean poor-quality or false research is given false legitimacy and becomes available for dissemination and public consumption. They can be seen as an easy route to publish any work, particularly by junior colleagues who are trying to advance their academic careers. This article discusses the features and issues of predatory publishing while also highlighting the importance of ensuring that healthcare literature remains credible, reputable and trustworthy.
Key points
Maintaining integrity in healthcare literature is essential.
Predatory journals and publishing are on the rise and it can be increasingly difficult to distinguish these from legitimate publications.
Junior colleagues may be most susceptible to predatory publishing due to pressures to publish for career progression.
Journal Article