19,762 results for "Risk of bias"
The Cochrane risk of bias assessment tool 2 (RoB 2) versus the original RoB: A perspective on the pros and cons
Background and Aims Critical appraisal or risk of bias assessment is a fundamental part of systematic reviews that clarifies the degree to which included research articles are qualified and reliable. Version 2 of the Cochrane tool for assessing the risk of bias in randomized trials (RoB 2), the updated version of the first tool, was released in 2019. Here, we compare these two versions of the Cochrane risk of bias assessment tools and highlight the pros and cons of RoB 2. Methods Statistical analysis and methodology are not applicable to this article, as no new data were created or analyzed in this study. Results The overall approach in RoB 2 is that, after specifying the results, effects of interest, and sources of information, the assessor answers a set of signaling questions to reach an overall judgment of the quality of each study. In the original version of the Cochrane RoB tool, the judgment falls into one of three categories: low, unclear, or high risk of bias. The most prominent difference in the bias domains is the removal of the "other bias" domain, which has been replaced by an "overall bias" judgment. The most common ways of presenting Cochrane risk of bias assessments are the "summary" and "graph" formats, which are generated by Review Manager, web-based applications, or packages in R software. Conclusion The RoB 2 tool has improved on the original RoB and is the version recommended by the Cochrane Collaboration for quality assessment of randomized controlled trials. We recommend considering the funding source, duration of follow-up, declaration of data availability, comparability of baseline characteristics between groups, and sample size calculation methods in further revisions of the Cochrane risk of bias assessment tools.
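The abstract above notes that risk of bias assessments are usually presented as "summary" or "graph" (traffic-light) figures produced by Review Manager, web applications, or R packages. As a rough illustration, the following Python sketch draws a traffic-light style grid from per-study RoB 2 domain judgments; the trial names, judgments, domain labels, and colors are invented for demonstration and are not taken from the article.

```python
# Minimal sketch of a RoB 2 "traffic light" style presentation using matplotlib.
# Trial names and judgments below are hypothetical; in practice they would come
# from completed RoB 2 assessments.
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

domains = ["Randomization", "Deviations from intervention", "Missing outcome data",
           "Outcome measurement", "Selective reporting", "Overall"]
judgments = {
    "Trial A": ["low", "low", "some concerns", "low", "low", "some concerns"],
    "Trial B": ["low", "high", "low", "some concerns", "low", "high"],
    "Trial C": ["some concerns", "low", "low", "low", "low", "some concerns"],
}
colors = {"low": "#4caf50", "some concerns": "#ffc107", "high": "#f44336"}

fig, ax = plt.subplots(figsize=(9, 1.5 + 0.6 * len(judgments)))
for row, (study, row_judgments) in enumerate(judgments.items()):
    for col, judgment in enumerate(row_judgments):
        # One colored dot per study-by-domain judgment.
        ax.scatter(col, row, s=600, color=colors[judgment], edgecolors="black")
ax.set_xticks(range(len(domains)))
ax.set_xticklabels(domains, rotation=30, ha="right")
ax.set_yticks(range(len(judgments)))
ax.set_yticklabels(list(judgments.keys()))
ax.invert_yaxis()
ax.set_title("RoB 2 judgments per domain (illustrative data)")
ax.legend(handles=[mpatches.Patch(color=c, label=l) for l, c in colors.items()],
          loc="upper left", bbox_to_anchor=(1.02, 1))
plt.tight_layout()
plt.show()
```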
GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence
To provide guidance on how systematic review authors, guideline developers, and health technology assessment practitioners should approach the use of the risk of bias in nonrandomized studies of interventions (ROBINS-I) tool as part of GRADE's certainty rating process. The study design and setting comprised iterative discussions, testing in systematic reviews, and presentation at GRADE working group meetings with feedback from the GRADE working group. We describe where to start the initial assessment of a body of evidence when using ROBINS-I and where one would anticipate the final rating to end up. GRADE accounts for issues that mitigate concerns about confounding and selection bias through its upgrading domains: large effects, dose-effect relations, and situations in which plausible residual confounders or other biases would increase certainty. These domains need to be considered in an assessment of a body of evidence when using ROBINS-I. The use of ROBINS-I in GRADE assessments may allow for a better comparison of evidence from randomized controlled trials (RCTs) and nonrandomized studies (NRSs) because they are placed on a common metric for risk of bias. Challenges remain, including the appropriate presentation of evidence from RCTs and NRSs for decision-making and how to optimally integrate RCTs and NRSs in an evidence assessment.
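To make the rating logic concrete, here is a deliberately simplified Python sketch of how a ROBINS-I overall judgment and the upgrading domains named in the abstract (large effects, dose-effect relations, plausible residual confounding) might be combined into a certainty level. The starting level, downgrade steps, and upgrade rules are illustrative assumptions, not the official GRADE guidance.

```python
# Illustrative (not official) sketch of combining a ROBINS-I overall risk-of-bias
# judgment with GRADE-style rating decisions. Starting level, downgrade steps, and
# upgrade rules are simplifying assumptions for demonstration only.
LEVELS = ["very low", "low", "moderate", "high"]

# Assumed number of levels to rate down for each ROBINS-I overall judgment.
ROB_DOWNGRADE = {"low": 0, "moderate": 1, "serious": 2, "critical": 3}

def grade_certainty(robins_i_overall: str,
                    large_effect: bool = False,
                    dose_response: bool = False,
                    plausible_confounding_increases_certainty: bool = False) -> str:
    """Return a rough certainty level for a body of nonrandomized evidence."""
    level = len(LEVELS) - 1                     # start at "high"
    level -= ROB_DOWNGRADE[robins_i_overall]    # rate down for risk of bias
    # Upgrading domains from the abstract.
    level += sum([large_effect, dose_response,
                  plausible_confounding_increases_certainty])
    return LEVELS[max(0, min(level, len(LEVELS) - 1))]

print(grade_certainty("serious", large_effect=True))  # -> "moderate" under these rules
```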
ROBIS: A new tool to assess risk of bias in systematic reviews was developed
To develop ROBIS, a new tool for assessing the risk of bias in systematic reviews (rather than in primary studies). We used a four-stage approach to develop ROBIS: define the scope, review the evidence base, hold a face-to-face meeting, and refine the tool through piloting. ROBIS is currently aimed at four broad categories of reviews, mainly within health care settings: interventions, diagnosis, prognosis, and etiology. The target audience of ROBIS is primarily guideline developers, authors of overviews of systematic reviews ("reviews of reviews"), and review authors who might want to assess or avoid risk of bias in their reviews. The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias. Phase 2 covers four domains through which bias may be introduced into a systematic review: study eligibility criteria; identification and selection of studies; data collection and study appraisal; and synthesis and findings. Phase 3 assesses the overall risk of bias in the interpretation of review findings and whether this interpretation considered the limitations identified in any of the phase 2 domains. Signaling questions are included to help judge concerns with the review process (phase 2) and the overall risk of bias in the review (phase 3); these questions flag aspects of review design related to the potential for bias and aim to help assessors judge risk of bias in the review process, results, and conclusions. ROBIS is the first rigorously developed tool designed specifically to assess the risk of bias in systematic reviews.
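The phase structure described above maps naturally onto a simple data structure. The Python sketch below encodes the three ROBIS phases and the four phase-2 domains named in the abstract; the question texts are paraphrased placeholders, not the tool's verbatim signaling questions.

```python
# Sketch of the ROBIS structure described above: three phases, with phase 2
# split into four bias domains. Question texts are paraphrased placeholders.
ROBIS = {
    "phase_1": "Assess relevance of the review to the research question (optional)",
    "phase_2_domains": {
        "study_eligibility_criteria": ["Were objectives and eligibility criteria pre-specified?"],
        "identification_and_selection_of_studies": ["Was the search likely to find all eligible studies?"],
        "data_collection_and_study_appraisal": ["Were data collection and appraisal done to minimize error?"],
        "synthesis_and_findings": ["Was the synthesis appropriate and were all results included?"],
    },
    "phase_3": "Judge overall risk of bias in the interpretation of review findings",
}
# Assumed per-domain concern levels used when recording judgments.
JUDGMENT_OPTIONS = ["low", "high", "unclear"]

for domain, questions in ROBIS["phase_2_domains"].items():
    print(domain, "->", questions[0])
```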
COSMIN Risk of Bias checklist for systematic reviews of Patient-Reported Outcome Measures
Purpose The original COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist was developed to assess the methodological quality of single studies on measurement properties of Patient-Reported Outcome Measures (PROMs). Our aim was to adapt the COSMIN checklist and its four-point rating system into a version exclusively for use in systematic reviews of PROMs, in order to assess the risk of bias of studies on measurement properties. Methods For each standard (i.e., a design requirement or preferred statistical method), the COSMIN steering committee discussed whether and how it should be adapted. The adapted checklist was pilot-tested to strengthen content validity in a systematic review on the quality of PROMs for patients with hand osteoarthritis. Results The most important changes were the reordering of the measurement properties to be assessed in a systematic review of PROMs; the deletion of standards that concerned reporting issues and standards that do not necessarily lead to biased results; the integration of standards on general requirements for studies on item response theory with standards for specific measurement properties; the recommendation that the review team specify hypotheses for construct validity and responsiveness in advance, with the consequent removal of the standards about formulating hypotheses; and the change in the labels of the four-point rating system. Conclusions The COSMIN Risk of Bias checklist was developed exclusively for use in systematic reviews of PROMs, to distinguish this application from other purposes of assessing the methodological quality of studies on measurement properties, such as guidance for designing or reporting a study on measurement properties.
The revised Cochrane risk of bias tool for randomized trials (RoB 2) showed low interrater reliability and challenges in its application
The objective of the study is to assess the interrater reliability (IRR) and usability of the revised Cochrane risk of bias tool for randomized trials (RoB 2). This is a cross-sectional study. Four raters independently applied RoB 2 to the primary outcome of a random sample of individually randomized parallel-group trials (randomized controlled trials, RCTs). We calculated Fleiss' kappa for multiple raters and the time needed to complete the tool, and discussed the application of RoB 2 to identify difficulties and reasons for disagreement. A total of 70 outcomes from 70 RCTs were included. IRR was slight for the overall judgment (IRR 0.16, 95% confidence interval (CI) 0.08–0.24); by individual domain, IRR was moderate for "randomization process" (IRR 0.45, 95% CI 0.37–0.53), slight for "deviations from intended intervention" for RCTs assessing the effect of assignment to an intervention (IRR 0.04, 95% CI −0.06 to 0.14), fair for those assessing the effect of adhering to an intervention (IRR 0.21, 95% CI 0.11–0.31), and fair for the other domains, ranging from 0.22 (95% CI 0.14–0.30) for "missing outcome data" to 0.30 (95% CI 0.22–0.38) for "selection of reported results". The mean time to apply the tool was 28 minutes (standard deviation 13.4) per study outcome. The main difficulties were due to poor knowledge of the subject matter of the primary studies, new terminology, different approaches for some domains compared with the previous tool, and the way signaling questions are formulated. RoB 2 is a detailed and comprehensive tool but is difficult and demanding to apply, even for raters with substantial expertise in systematic reviews. Calibration exercises and intensive training are needed before its application to improve reliability.
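The study above reports Fleiss' kappa computed across four raters. As a minimal sketch of how such an agreement statistic can be obtained, the Python snippet below uses statsmodels on an invented subject-by-rater matrix of overall RoB 2 judgments (coded 0 = low, 1 = some concerns, 2 = high); the numbers are for illustration only.

```python
# Minimal sketch: Fleiss' kappa for four raters, as used in the study above.
# Each row is one trial outcome, each column one rater; values are coded
# 0 = low, 1 = some concerns, 2 = high. Data are invented.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [0, 0, 1, 0],
    [2, 2, 2, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 2],
    [2, 2, 1, 2],
])

# aggregate_raters converts subject-by-rater codes into subject-by-category counts.
table, categories = aggregate_raters(ratings)
print("Fleiss' kappa:", round(fleiss_kappa(table, method="fleiss"), 3))
```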
Quality versus Risk-of-Bias assessment in clinical research
Assessment of internal validity safeguards implemented by researchers has been used to examine the potential reliability of evidence generated within a study. These safeguards protect against systematic error, and such an assessment has traditionally been called a quality assessment. When the results of a quality assessment are translated through some empirical construct to the potential risk of bias, this has been termed a risk of bias assessment. The latter has gained popularity and is commonly used interchangeably with the term quality assessment. This key concept paper clarifies the differences between these assessments and how they may be used and interpreted when assessing clinical evidence for internal validity.
COSMIN Risk of Bias tool to assess the quality of studies on reliability or measurement error of outcome measurement instruments: a Delphi study
Background Scores on an outcome measurement instrument depend on the type and settings of the instrument used, how instructions are given to patients, how professionals administer and score the instrument, and so on. The impact of all these sources of variation on scores can be assessed in studies of reliability and measurement error, if properly designed and analyzed. The aim of this study was to develop standards to assess the quality of studies on reliability and measurement error of clinician-reported outcome measurement instruments, performance-based outcome measurement instruments, and laboratory values. Methods We conducted a 3-round Delphi study involving 52 panelists. Results Consensus was reached on how a comprehensive research question can be deduced from the design of a reliability study to determine how the results of a study inform us about the quality of the outcome measurement instrument at issue. Consensus was reached on the components of outcome measurement instruments, i.e., the potential sources of variation. Next, we reached consensus on standards on design requirements (n = 5), standards on preferred statistical methods for reliability (n = 3) and measurement error (n = 2), and their ratings on a four-point scale. There was one term for a component and one rating of one standard on which no consensus was reached, which therefore required a decision by the steering committee. Conclusion We developed a tool that enables researchers with and without thorough knowledge of measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments.
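The abstract refers to preferred statistical methods for reliability and measurement error without naming them; two parameters commonly used for continuous scores are an intraclass correlation coefficient (ICC) and the standard error of measurement (SEM). The Python sketch below computes a two-way random-effects ICC for agreement (ICC(2,1)) and an agreement-type SEM from an invented subjects-by-raters matrix, using the standard two-way ANOVA decomposition; treat it as a sketch under those assumptions rather than the study's prescribed method.

```python
# Sketch: ICC(2,1) for agreement and an agreement-type SEM from a
# subjects-by-raters matrix of continuous scores. Data are invented.
import numpy as np

scores = np.array([
    [10.0, 11.0, 10.5],
    [14.0, 13.5, 14.5],
    [ 8.0,  9.0,  8.5],
    [12.0, 12.5, 11.5],
    [15.0, 16.0, 15.5],
])
n, k = scores.shape
grand = scores.mean()

# Two-way ANOVA sums of squares and mean squares.
ss_total = ((scores - grand) ** 2).sum()
ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # subjects
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # raters
ss_error = ss_total - ss_rows - ss_cols
ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, absolute agreement, single measurement.
icc_agreement = (ms_rows - ms_error) / (
    ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)
# SEM for agreement: square root of rater plus residual variance components.
sem_agreement = np.sqrt((ms_cols - ms_error) / n + ms_error)

print(f"ICC(2,1) agreement: {icc_agreement:.3f}")
print(f"SEM (agreement):    {sem_agreement:.3f}")
```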
PO:38:280 | Accuracy of generative artificial intelligence in risk of bias assessment using RoB 2.0 tool and extracting data in exercise therapy for chronic low back pain randomised controlled trials
Background. Systematic reviews are essential tools for evidence-based practice but are often time-consuming. Recent developments in artificial intelligence, including large language models such as ChatGPT-4o, offer the potential to support and partially automate some processes. This study aimed to evaluate the performance of ChatGPT-4o in assessing Risk of Bias (RoB) using the RoB 2.0 tool and in extracting data from randomised controlled trials (RCTs) on exercise therapy for chronic low back pain. Materials and Methods. This cross-sectional comparative study included 150 RCTs previously assessed by human reviewers. ChatGPT-4o was tested on two tasks: (1) RoB assessment using a single structured prompt, compared with expert ratings across five domains and the overall judgement; (2) data extraction across 34 predefined variables using both simplified and detailed prompts. Human reviewers served as the reference standard. Statistical analysis focused on Cohen's kappa and overall accuracy, alongside sensitivity, specificity, PPV, NPV, and F1-score. Results. Agreement between ChatGPT-4o and human reviewers for RoB assessment was low (Cohen's κ = 0.14), with a tendency to underestimate bias. Sensitivity was 40%, while specificity and PPV were higher for low-risk classifications. For data extraction, ChatGPT-4o showed strong performance, with accuracy above 84% and F1-scores over 90% using both prompt types. Hallucinations were rare (<0.02%). Task duration was reduced from 30-50 minutes (human reviewers) to under 5 minutes with ChatGPT-4o. Conclusions. While ChatGPT-4o showed limited reliability for RoB assessment, likely due to the interpretative complexity of the task, it performed robustly in data extraction, particularly when guided by well-structured prompts. Its high precision, speed, and low hallucination rate suggest strong potential as a secondary reviewer in systematic review workflows. Nonetheless, its application to evaluative tasks requiring critical judgement remains premature. With further development and proper human oversight, LLMs like ChatGPT-4o may help streamline systematic review processes, especially for labour-intensive, structured data extraction.
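The agreement metrics named in this abstract (Cohen's kappa, overall accuracy) can be computed directly from paired label lists. A minimal Python sketch with scikit-learn is shown below; the human and model judgment lists are invented stand-ins for the 150 real trial assessments compared in the study.

```python
# Minimal sketch of the agreement metrics named above (Cohen's kappa, accuracy)
# for comparing model-generated RoB judgments with human reference judgments.
# Label lists are invented for illustration.
from sklearn.metrics import accuracy_score, cohen_kappa_score

human = ["high", "low", "some concerns", "high", "low", "low", "some concerns", "high"]
model = ["low",  "low", "some concerns", "high", "low", "low", "low",           "some concerns"]

print("Cohen's kappa:", round(cohen_kappa_score(human, model), 2))
print("Accuracy:     ", round(accuracy_score(human, model), 2))
```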
ROB-MEN: a tool to assess risk of bias due to missing evidence in network meta-analysis
Background Selective outcome reporting and publication bias threaten the validity of systematic reviews and meta-analyses and can affect clinical decision-making. A rigorous method to evaluate the impact of this bias on the results of network meta-analyses of interventions has been lacking. We present a tool to assess the Risk Of Bias due to Missing Evidence in Network meta-analysis (ROB-MEN). Methods ROB-MEN first evaluates the risk of bias due to missing evidence for each of the possible pairwise comparisons that can be made between the interventions in the network. This step considers possible bias due to the presence of studies with unavailable results (within-study assessment of bias) and the potential for unpublished studies (across-study assessment of bias). The second step combines the judgements about the risk of bias due to missing evidence in pairwise comparisons with (i) the contribution of direct comparisons to the network meta-analysis estimates, (ii) possible small-study effects evaluated by network meta-regression, and (iii) any bias from unobserved comparisons. A level of "low risk", "some concerns", or "high risk" for the bias due to missing evidence is then assigned to each estimate, which is our tool's final output. Results We describe the methodology of ROB-MEN step by step using an illustrative example from a published NMA of noninvasive diagnostic modalities for the detection of coronary artery disease in patients with low-risk acute coronary syndrome. We also report a full application of the tool on a larger and more complex published network of 18 drugs from head-to-head studies for the acute treatment of adults with major depressive disorder. Conclusions ROB-MEN is the first tool for evaluating the risk of bias due to missing evidence in network meta-analysis and applies to networks of all sizes and geometries. The use of ROB-MEN is facilitated by an R Shiny web application that produces the Pairwise Comparisons and ROB-MEN tables, and it is incorporated in the reporting bias domain of the CINeMA framework and software.
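To illustrate the two-step structure described in this abstract, the Python sketch below records per-comparison within- and across-study judgements and combines them with the direct-evidence contribution and small-study-effect flag into a final per-estimate level. The actual combination rules belong to the published tool and its R Shiny application; the field names, thresholds, and decision logic here are hypothetical and for illustration only.

```python
# Hypothetical sketch of the ROB-MEN workflow structure described above.
# The combination rules are defined by the published tool; the thresholds and
# logic below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PairwiseAssessment:
    comparison: str             # e.g. "drug A vs drug B"
    within_study_bias: str      # judgement about studies with unavailable results
    across_study_bias: str      # judgement about potentially unpublished studies
    direct_contribution: float  # share of the NMA estimate coming from direct evidence
    small_study_effects: bool   # flagged by network meta-regression

def rob_men_level(a: PairwiseAssessment) -> str:
    """Assign an illustrative 'low risk' / 'some concerns' / 'high risk' level."""
    concerning = "suspected" in (a.within_study_bias, a.across_study_bias)
    if concerning and (a.direct_contribution > 0.5 or a.small_study_effects):
        return "high risk"
    if concerning or a.small_study_effects:
        return "some concerns"
    return "low risk"

example = PairwiseAssessment("A vs B", "undetected", "suspected", 0.7, False)
print(rob_men_level(example))  # -> "high risk" under these made-up rules
```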