Catalogue Search | MBRL
Explore the vast range of titles available.
57,053 result(s) for "Peer Review"
Peer reviews of peer reviews: A randomized controlled trial and other experiments
by Goldberg, Alexander; Cho, Kyunghyun; Belgrave, Danielle
in Bias; Clinical trials; Computer and Information Sciences
2025
Is it possible to reliably evaluate the quality of peer reviews? We study this question driven by two primary motivations: incentivizing high-quality reviewing using assessed quality of reviews, and measuring changes to review quality in experiments. We conduct a large-scale study at the NeurIPS 2022 conference, a top-tier conference in machine learning, in which we invited (meta-)reviewers and authors to voluntarily evaluate reviews given to submitted papers. First, we conduct a randomized controlled trial to examine bias due to the length of reviews. We generate elongated versions of reviews by adding substantial amounts of non-informative content. Participants in the control group evaluate the original reviews, whereas participants in the experimental group evaluate the artificially lengthened versions. We find that lengthened reviews are scored (statistically significantly) higher in quality than the original reviews. Additionally, in an analysis of observational data we find that authors are positively biased towards reviews recommending acceptance of their own papers, even after controlling for the confounders of review length, quality, and different numbers of papers per author. We also measure disagreement rates of 28%–32% between multiple evaluations of the same review, which is comparable to the disagreement rate among paper reviewers at NeurIPS. Further, we assess the amount of miscalibration of evaluators of reviews using a linear model of quality scores and find that it is similar to estimates of the miscalibration of paper reviewers at NeurIPS. Finally, we estimate the amount of variability in subjective opinions around how to map individual criteria to overall scores of review quality and find that it is roughly the same as that in the review of papers. Our results suggest that the various problems that exist in reviews of papers (inconsistency, bias towards irrelevant factors, miscalibration, subjectivity) also arise in reviewing of reviews.
Journal Article
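The core of the length-bias experiment in the abstract above is a two-group comparison of quality scores between evaluators shown original reviews and evaluators shown artificially padded ones. A minimal sketch of that comparison in Python, using simulated scores (the rating scale, group sizes, and effect size here are hypothetical, not taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical quality scores: the control group rated original reviews,
# the experimental group rated the same reviews padded with
# non-informative content.
control_scores = rng.normal(loc=4.2, scale=1.0, size=200)
treatment_scores = rng.normal(loc=4.6, scale=1.0, size=200)

# Welch's t-test: does artificial lengthening shift perceived quality?
t, p = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)
print(f"t={t:.2f}, p={p:.4f}")
```

The paper's actual analysis and test statistics may differ; this only illustrates the shape of the randomized comparison.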
Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial
2025
Objective: To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.
Design: Pragmatic two-group parallel randomised controlled trial.
Setting: Three biomedical journals.
Participants: Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio 1:1) was stratified by journal and used permuted blocks (block size=4). Of 2670 manuscripts sent to peer review during study enrolment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). 76 manuscripts were revised and resubmitted in the intervention group and 90 in the control group by 2 January 2024.
Interventions: All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional LIS peer reviewer invited.
Main outcome measures: The primary outcomes are the differences in first-revision manuscripts between the intervention and control groups in quality of reporting and risk of bias. Quality of reporting was measured using four prespecified PRISMA-S items. Risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate, and assessors were blinded to group allocation. Secondary outcomes included differences between groups for each individual PRISMA-S and ROBIS Domain 2 item. The difference in the proportion of manuscripts rejected as the first decision post-peer review between the intervention and control groups was an additional outcome.
Results: There were no statistically significant differences between groups in the proportion of adequately reported searches (4.4% difference, 95% CI: −2.0% to 10.7%) or in risk of bias in searches (0.5% difference, 95% CI: −13.7% to 14.6%). By 4 months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (13.8% difference, 95% CI: 3.9% to 23.8%).
Conclusions: Inviting LIS peer reviewers did not impact adequate reporting or risk of bias of searches in first-revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.
Trial registration number: Open Science Framework: https://doi.org/10.17605/OSF.IO/W4CK2.
Journal Article
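The trial above allocates manuscripts with permuted blocks stratified by journal (block size 4, 1:1 ratio). A minimal sketch of that allocation scheme; the per-journal counts come from the abstract, while the function name and seeding are illustrative:

```python
import random

def permuted_block_randomize(n_manuscripts, block_size=4, seed=None):
    """Allocate manuscripts 1:1 to intervention/control in permuted blocks.

    Each block holds equal numbers of both arms, shuffled independently,
    so group sizes stay balanced as manuscripts accrue over time.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_manuscripts:
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_manuscripts]

# Stratified by journal, as in the trial: one independent sequence per stratum.
strata = {"The BMJ": 62, "BMJ Open": 334, "BMJ Medicine": 4}
allocation = {journal: permuted_block_randomize(n, seed=i)
              for i, (journal, n) in enumerate(strata.items())}
print({journal: seq[:8] for journal, seq in allocation.items()})
```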
Medical Journal Peer Review: Process and Bias
2015
Scientific peer review is pivotal in health care research in that it facilitates the evaluation of findings for competence, significance, and originality by qualified experts. While the origins of peer review can be traced to the societies of the eighteenth century, it became an institutionalized part of the scholarly process in the latter half of the twentieth century, in response to the growth of research and greater subject specialization. With the current increase in the number of specialty journals, the peer review process continues to evolve to meet the needs of patients, clinicians, and policy makers. The peer review process itself faces challenges. Unblinded peer review might suffer from positive or negative bias towards certain authors, specialties, and institutions. Peer review can also suffer when editors and/or reviewers are unable to understand the contents of the submitted manuscript. This can result in a failure to detect major flaws, or in major flaws coming to light only after the editors have accepted the manuscript for publication. Other concerns include potentially long delays in publication and challenges in uncovering plagiarism, duplication, corruption and scientific misconduct. Collectively, these challenges have led to claims of scientific misconduct, an erosion of faith, and criticism of the peer review process itself. However, despite its imperfections, the peer review process enjoys widespread support in the scientific community. Peer review bias is one of the major focuses of today’s scientific assessment of the literature. Various types of peer review bias include content-based bias, confirmation bias, bias due to conservatism, bias against interdisciplinary research, publication bias, and the bias of conflicts of interest. Consequently, peer review would benefit from changes and improvements, including appropriate training of reviewers to provide quality reviews that maintain the quality and integrity of research without bias. Thus, an appropriate, transparent peer review process is not only ideal but necessary to facilitate future scientific progress. Key words: Scientific research, peer review process, scientific publications, peer review bias, blinded peer review, scientific misconduct.
Journal Article
Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension
by Moher, David; Denniston, Alastair K.; Cruz Rivera, Samantha
in Artificial Intelligence
2020
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
The CONSORT-AI and SPIRIT-AI extensions improve the transparency of clinical trial design and trial protocol reporting for artificial intelligence interventions.
Journal Article
How predictive is peer review for gauging impact? The association between reviewer rating scores, publication status, and article impact measured by citations in a pain subspecialty journal
2025
Background: Peer review represents a cornerstone of the scientific process, yet few studies have evaluated its association with scientific impact. The objective of this study is to assess the association of peer review scores with measures of impact for manuscripts submitted and ultimately published.
Methods: 3173 manuscripts submitted to Regional Anesthesia & Pain Medicine (RAPM) between August 2018 and October 2021 were analyzed, with those containing an abstract included. Articles were categorized by topic, type, acceptance status, author demographics and open-access status. Articles were scored based on means for the initial peer review, where each reviewer’s recommendation was assigned a number: 5 for ‘accept’, 3 for ‘minor revision’, 2 for ‘major revision’ and 0 for ‘reject’. Articles were further classified by whether any reviewers recommended ‘reject’. Rejected articles were analyzed to determine whether they were subsequently published in an indexed journal, and their citations were compared with those of accepted articles when the impact factor was <1.4 points lower than RAPM’s 5.1 impact factor. The main outcome measure was the number of Clarivate citations within 2 years of publication. Secondary outcome measures were Google Scholar citations within 2 years and Altmetric score.
Results: 422 articles met inclusion criteria for analysis. There was no significant correlation between reviewer rating score and Clarivate 2-year citations (r=0.038, p=0.47), Google Scholar citations (r=0.053, p=0.31) or Altmetric score (p=0.38). There was no significant difference in 2-year Clarivate citations between accepted manuscripts (median (IQR) 5 (2–10)) and rejected manuscripts published in journals with impact factors >3.7 (median 5 (2–7); p=0.39). Altmetric score was significantly higher for RAPM-published papers compared with RAPM-rejected ones (median 10 (5–17) vs 1 (0–2); p<0.001).
Conclusions: Peer review rating scores were not associated with citations, though the impact of peer review on quality and its association with other metrics remains unclear.
Journal Article
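The scoring scheme in the abstract above maps each reviewer recommendation to a number, averages per manuscript, and then correlates the mean score with citation counts. A minimal sketch of that pipeline; the recommendation-to-score mapping comes from the abstract, while the sample data and names are hypothetical:

```python
import numpy as np
from scipy import stats

# Recommendation-to-score mapping described in the abstract.
RECOMMENDATION_SCORE = {"accept": 5, "minor revision": 3,
                        "major revision": 2, "reject": 0}

def review_score(recommendations):
    """Mean reviewer score for one manuscript's initial peer review."""
    return np.mean([RECOMMENDATION_SCORE[r] for r in recommendations])

# Hypothetical data: (reviewer recommendations, 2-year citation count).
articles = [
    (["accept", "minor revision"], 7),
    (["major revision", "reject"], 4),
    (["minor revision", "minor revision", "accept"], 12),
    (["reject", "major revision"], 2),
]
scores = [review_score(recs) for recs, _ in articles]
citations = [c for _, c in articles]

# Pearson correlation between mean review score and citations, analogous
# to the r and p values reported in the abstract.
r, p = stats.pearsonr(scores, citations)
print(f"r={r:.3f}, p={p:.3f}")
```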
On improving the sustainability of peer review
by Pariente, Nonia; Routledge, Daniel
in Artificial intelligence; Biology; Biology and Life Sciences
2025
The term \"reviewer fatigue\" has become only too familiar in scientific publishing. How can we ease the burden on reviewers to make the peer review system more sustainable, while streamlining the publication process for authors?
Journal Article
Measuring the effectiveness of scientific gatekeeping
by Lee, Kirby; Siler, Kyle; Bero, Lisa
in Databases, Bibliographic; Decision Making; Editorial Policies
2015
Peer review is the main institution responsible for the evaluation and gestation of scientific research. Although peer review is widely seen as vital to scientific evaluation, anecdotal evidence abounds of gatekeeping mistakes in leading journals, such as rejecting seminal contributions or accepting mediocre submissions. Systematic evidence regarding the effectiveness, or lack thereof, of scientific gatekeeping is scant, largely because access to rejected manuscripts from journals is rarely available. Using a dataset of 1,008 manuscripts submitted to three elite medical journals, we show differences in citation outcomes for articles that received different appraisals from editors and peer reviewers. Among rejected articles, desk-rejected manuscripts, deemed by editors as unworthy of peer review, received fewer citations than those sent for peer review. Among both rejected and accepted articles, manuscripts with lower scores from peer reviewers received relatively fewer citations when they were eventually published. However, hindsight reveals numerous questionable gatekeeping decisions. Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular (roughly the top 2 percent). Of those 14 articles, 12 were desk-rejected. This finding raises the concern that peer review may be ill-suited to recognizing and gestating the most impactful ideas and research. Despite this, our case studies show that, on the whole, peer review added value. Editors and peer reviewers generally, but not always, made good decisions regarding the identification and promotion of quality in scientific manuscripts.
Significance: Peer review is an institution of enormous importance for the careers of scientists and the content of published science. The decisions of gatekeepers, editors and peer reviewers, legitimize scientific findings, distribute professional rewards, and influence future research. However, appropriate data to gauge the quality of gatekeeper decision-making in science have rarely been made publicly available. Our research tracks the popularity of rejected and accepted manuscripts at three elite medical journals. We found that editors and reviewers generally made good decisions regarding which manuscripts to promote and reject. However, many highly cited articles were surprisingly rejected. Our research suggests that evaluative strategies that increase the mean quality of published science may also increase the risk of rejecting unconventional or outstanding work.
Journal Article
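The gatekeeping analysis above compares citation outcomes across editorial decisions (desk-rejected vs. sent for review vs. accepted). A minimal sketch of one such comparison, on entirely hypothetical citation counts; the study's actual data and tests may differ:

```python
from scipy import stats

# Hypothetical 2-year citation counts grouped by gatekeeping decision.
desk_rejected = [1, 3, 0, 5, 2, 8, 4]
reviewed_then_rejected = [6, 2, 9, 4, 12, 7, 5]

# Citation counts are heavily skewed, so a rank-based test is a natural
# choice; alternative="less" asks whether desk-rejected papers tend to
# earn fewer citations than those sent out for review.
u, p = stats.mannwhitneyu(desk_rejected, reviewed_then_rejected,
                          alternative="less")
print(f"U={u:.1f}, p={p:.3f}")
```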