Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
338,467 result(s) for "Research review"
Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension
by Moher, David; Denniston, Alastair K.; Cruz Rivera, Samantha
in 692/308/2779; 706/703/559; Artificial Intelligence
2020
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
The CONSORT-AI and SPIRIT-AI extensions improve the transparency of clinical trial design and trial protocol reporting for artificial intelligence interventions.
Journal Article
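The guideline described above is, at heart, a checklist of items a trial report must cover, which lends itself to a simple programmatic completeness check. Below is a minimal Python sketch of that idea; the item names are paraphrased from the abstract for illustration and are not the official CONSORT-AI checklist wording.

# Hypothetical, paraphrased CONSORT-AI-style items (not the official text).
AI_EXTENSION_ITEMS = [
    "intervention_description",  # clear description of the AI intervention
    "usage_instructions",        # instructions and skills required for use
    "integration_setting",       # setting in which the AI is integrated
    "input_output_handling",     # handling of the AI's inputs and outputs
    "human_ai_interaction",      # description of the human-AI interaction
    "error_case_analysis",       # provision of an analysis of error cases
]

def missing_items(report: dict) -> list:
    # Return checklist items that the report omits or leaves empty.
    return [item for item in AI_EXTENSION_ITEMS if not report.get(item)]

report = {"intervention_description": "Deep-learning triage tool, v2.1"}
print(missing_items(report))  # five of the six illustrative items unreported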
Peer reviews of peer reviews: A randomized controlled trial and other experiments
by Goldberg, Alexander; Cho, Kyunghyun; Belgrave, Danielle
in Bias; Clinical trials; Computer and Information Sciences
2025
Is it possible to reliably evaluate the quality of peer reviews? We study this question driven by two primary motivations: incentivizing high-quality reviewing based on assessed review quality, and measuring changes to review quality in experiments. We conduct a large-scale study at the NeurIPS 2022 conference, a top-tier conference in machine learning, in which we invited (meta-)reviewers and authors to voluntarily evaluate reviews given to submitted papers. First, we conduct a randomized controlled trial to examine bias due to the length of reviews. We generate elongated versions of reviews by adding substantial amounts of non-informative content. Participants in the control group evaluate the original reviews, whereas participants in the experimental group evaluate the artificially lengthened versions. We find that lengthened reviews are scored (statistically significantly) higher in quality than the original reviews. Additionally, in an analysis of observational data we find that authors are positively biased towards reviews recommending acceptance of their own papers, even after controlling for confounders of review length, quality, and different numbers of papers per author. We also measure disagreement rates of 28%–32% between multiple evaluations of the same review, comparable to the disagreement rates of paper reviewers at NeurIPS. Further, we assess the amount of miscalibration of evaluators of reviews using a linear model of quality scores and find that it is similar to estimates of miscalibration of paper reviewers at NeurIPS. Finally, we estimate the amount of variability in subjective opinions around how to map individual criteria to overall scores of review quality and find that it is roughly the same as that in the review of papers. Our results suggest that the various problems that exist in reviews of papers (inconsistency, bias towards irrelevant factors, miscalibration, subjectivity) also arise in reviewing of reviews.
Journal Article
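The length-bias experiment above has a simple statistical core: a two-sample comparison of quality scores between the control arm (original reviews) and the treatment arm (lengthened reviews). Here is a minimal Python sketch on simulated data; the sample sizes and effect size are invented for illustration, not taken from the paper.

import math
import random
from statistics import mean, stdev

random.seed(0)
control   = [random.gauss(3.0, 0.8) for _ in range(500)]  # original reviews
treatment = [random.gauss(3.2, 0.8) for _ in range(500)]  # lengthened reviews

# Welch's t statistic for the difference in mean quality scores.
se = math.sqrt(stdev(control) ** 2 / len(control)
               + stdev(treatment) ** 2 / len(treatment))
t = (mean(treatment) - mean(control)) / se
print(f"mean difference = {mean(treatment) - mean(control):+.2f}, t = {t:.2f}")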
Effect of revealing authors’ conflicts of interest in peer review: randomized controlled trial
by John, Leslie K; Callaham, Michael L; Loewenstein, George
in Adult; Bibliometrics; Biomedical research
2019
Abstract
Objective: To assess the effect of disclosing authors’ conflict of interest declarations to peer reviewers at a medical journal.
Design: Randomized controlled trial.
Setting: Manuscript review process at the Annals of Emergency Medicine.
Participants: Reviewers (n=838) who reviewed manuscripts submitted between 2 June 2014 and 23 January 2018 inclusive (n=1480 manuscripts).
Intervention: Reviewers were randomized to either receive (treatment) or not receive (control) authors’ full International Committee of Medical Journal Editors format conflict of interest disclosures before reviewing manuscripts. Reviewers rated the manuscripts as usual on eight quality ratings and were then surveyed to obtain “counterfactual scores” (the scores they believed they would have given had they been assigned to the opposite arm) as well as attitudes toward conflicts of interest.
Main outcome measure: Overall quality score that reviewers assigned to the manuscript on submitting their review (1 to 5 scale). Secondary outcomes were the scores the reviewers submitted for the seven more specific quality ratings and the counterfactual scores elicited in the follow-up survey.
Results: Providing authors’ conflict of interest disclosures did not affect reviewers’ mean ratings of manuscript quality (M(control)=2.70 (SD 1.11) out of 5; M(treatment)=2.74 (SD 1.13) out of 5; mean difference 0.04, 95% confidence interval −0.05 to 0.14), even for manuscripts with disclosed conflicts (M(control)=2.85 (SD 1.12); M(treatment)=2.96 (SD 1.16); mean difference 0.11, 95% CI −0.05 to 0.26). Similarly, no effect of the treatment was seen on any of the other seven quality ratings that the reviewers assigned. Reviewers acknowledged conflicts of interest as an important matter and believed that they could correct for them when they were disclosed. However, their counterfactual scores did not differ from their actual scores (M(actual)=2.69; M(counterfactual)=2.67; difference in means 0.02, 0.01 to 0.02). When conflicts were reported, a comparison of different source types (for example, government, for-profit corporation) found no difference in effect.
Conclusions: Current ethical standards require disclosure of conflicts of interest for all scientific reports. As currently implemented, this practice had no effect on any quality ratings of real manuscripts being evaluated for publication by real peer reviewers.
Journal Article
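The headline numbers in the abstract above (mean difference 0.04, 95% CI −0.05 to 0.14) can be approximated from the reported summary statistics with a normal-approximation interval for a difference in means. The Python sketch below does that; the per-arm counts are placeholder assumptions, since the abstract reports reviewer and manuscript totals rather than reviews per arm.

import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    # 95% CI for (m2 - m1) via the normal approximation.
    diff = m2 - m1
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, diff - z * se, diff + z * se

# Means and SDs from the abstract; n=1000 per arm is an assumed placeholder.
diff, lo, hi = mean_diff_ci(2.70, 1.11, 1000, 2.74, 1.13, 1000)
print(f"difference = {diff:.2f}, 95% CI {lo:.2f} to {hi:.2f}")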
What’s next for Registered Reports?
2019
Reviewing and accepting study plans before results are known can counter perverse incentives. Chris Chambers sets out three ways to improve the approach.
Journal Article
Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial
2025
Objective: To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.
Design: Pragmatic two-group parallel randomised controlled trial.
Setting: Three biomedical journals.
Participants: Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio 1:1) was stratified by journal and used permuted blocks (block size=4). Of 2670 manuscripts sent to peer review during study enrolment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). By 2 January 2024, 76 manuscripts had been revised and resubmitted in the intervention group and 90 in the control group.
Interventions: All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional LIS peer reviewer invited.
Main outcome measures: The primary outcomes were the differences between intervention and control groups in the quality of reporting and the risk of bias in first revision manuscripts. Quality of reporting was measured using four prespecified PRISMA-S items; risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate, and assessors were blinded to group allocation. Secondary outcomes included differences between groups on each individual PRISMA-S and ROBIS Domain 2 item. The difference between groups in the proportion of manuscripts rejected as the first decision after peer review was an additional outcome.
Results: Differences in the proportion of adequately reported searches (4.4% difference, 95% CI −2.0% to 10.7%) and in the risk of bias in searches (0.5% difference, 95% CI −13.7% to 14.6%) were not statistically significant between groups. By four months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (13.8% difference, 95% CI 3.9% to 23.8%).
Conclusions: Inviting LIS peer reviewers did not affect the adequate reporting or risk of bias of searches in first revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.
Trial registration: Open Science Framework, https://doi.org/10.17605/OSF.IO/W4CK2.
Journal Article
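The rejection-rate comparison in the abstract above (98 vs 70 rejections; 13.8% difference, 95% CI 3.9% to 23.8%) is a standard two-proportion contrast. The Python sketch below reproduces it approximately with a Wald interval; the denominators assume roughly 200 manuscripts per arm from the 1:1 randomisation of 400, an approximation rather than the paper's exact counts.

import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    # Wald 95% CI for the difference between two proportions.
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = prop_diff_ci(98, 200, 70, 200)  # intervention vs control
print(f"difference = {diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")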
Tracking changes between preprint posting and journal publication during a pandemic
by Pálfy, Máté; Coates, Jonathon Alexis; Brierley, Liam
in Annotations; Authorship; Computer and Information Sciences
2022
Amid the Coronavirus Disease 2019 (COVID-19) pandemic, preprints in the biomedical sciences are being posted and accessed at unprecedented rates, drawing widespread attention from the general public, press, and policymakers for the first time. This phenomenon has sharpened long-standing questions about the reliability of information shared prior to journal peer review. Does the information shared in preprints typically withstand the scrutiny of peer review, or are conclusions likely to change in the version of record? We assessed preprints from bioRxiv and medRxiv that had been posted and subsequently published in a journal through April 30, 2020, representing the initial phase of the pandemic response. We utilised a combination of automatic and manual annotations to quantify how an article changed between the preprinted and published version. We found that the total number of figure panels and tables changed little between preprint and published articles. Moreover, the conclusions of 7.2% of non-COVID-19–related and 17.2% of COVID-19–related abstracts undergo a discrete change by the time of publication, but the majority of these changes do not qualitatively change the conclusions of the paper.
Journal Article
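The article above combines automatic and manual annotation to measure how preprints change before publication. One crude automatic proxy is a character-level similarity ratio between the two versions of a text, sketched below in Python; this illustrates the general idea and is not the authors' annotation pipeline.

from difflib import SequenceMatcher

preprint  = "We find that treatment X improves outcome Y in mice."
published = "We find that treatment X modestly improves outcome Y in mice."

# A ratio near 1.0 means the text barely changed between versions.
ratio = SequenceMatcher(None, preprint, published).ratio()
print(f"abstract similarity: {ratio:.2f}")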
How swamped preprint servers are blocking bad coronavirus research
2020
Repositories are rapidly disseminating crucial pandemic science — and they’re screening more closely to guard against poor-quality work.
Journal Article