51,134 results for "Research bias"
Unreliable: bias, fraud, and the reproducibility crisis in biomedical research
"Scientists specializing in the in-depth analysis of the published scientific literature have come to the conclusion that a large part of the scientific literature covers results that cannot be replicated in other independent laboratories. Scientists take this to mean that the results are unreliable or untrue. In this book, biomedical researcher Csaba Szabo summarizes the causes and consequences of this so-called "reproducibility crisis" in biomedical research. The range of causes is wide, from the specificities of the methods used, through various pitfalls in the design of experiments and analysis of experimental data (e.g., confirmation bias), plagiarism and deliberate data falsification, to the systematic publication of fictitious experiments that have never been performed. Through a few blatant examples - e.g. Anil Potti (Duke University); Piero Anversa (Harvard University) - Szabo describes the damaging impact that blatant fraud can have on the development of an entire field of science, and introduces some of the maverick "science investigators" - often working in anonymity - who devote their lives to tracking down and exposing scientific fraudsters. The book also answers the questions (a) what individual and systemic factors are involved in allowing these phenomena to occur, (b) why the appropriate steps have not been taken to control them, and (c) what the implications of the crisis are for the future of medicine and, within it, for the development of new drugs" -- Provided by publisher.
Rein in the four horsemen of irreproducibility
Dorothy Bishop describes how threats to reproducibility, recognized but unaddressed for decades, might finally be brought under control. “Many researchers persist in working in a way guaranteed not to deliver meaningful results.”
Peer Review Bias: A Critical Review
Various types of bias and confounding have been described in the biomedical literature that can affect a study before, during, or after the intervention has been delivered. The peer review process can also introduce bias. A compelling ethical and moral rationale necessitates improving the peer review process. A double-blind peer review system is supported by equipoise and fair-play principles. Triple- and quadruple-blind systems have also been described but are not commonly used. The open peer review system introduces “Skin in the Game” heuristic principles for both authors and reviewers and has a small favorable effect on the quality of published reports. In this exposition, we present, on the basis of a comprehensive literature search of PubMed from its inception until October 20, 2017, various possible mechanisms by which the peer review process can distort research results, and we discuss the evidence supporting different strategies that may mitigate this bias. It is time to improve the quality, transparency, and accountability of the peer review system.
Risk of bias tools in systematic reviews of health interventions: an analysis of PROSPERO-registered protocols
Background Systematic reviews of health interventions are increasingly incorporating evidence outside of randomized controlled trials (RCT). While non-randomized study (NRS) types may be more prone to bias compared to RCT, the tools used to evaluate risk of bias (RoB) in NRS are less straightforward and no gold standard tool exists. The objective of this study was to evaluate the planned use of RoB tools in systematic reviews of health interventions, specifically for reviews that planned to incorporate evidence from RCT and/or NRS. Methods We evaluated a random sample of non-Cochrane protocols for systematic reviews of interventions registered in PROSPERO between January 1 and October 12, 2018. For each protocol, we extracted data on the types of studies to be included (RCT and/or NRS) as well as the name and number of RoB tools planned to be used according to study design. We then conducted a longitudinal analysis of the most commonly reported tools in the random sample. Using keywords and name variants for each tool, we searched PROSPERO records by year since the inception of the database (2011 to December 7, 2018), restricting the keyword search to the “Risk of bias (quality) assessment” field. Results In total, 471 randomly sampled PROSPERO protocols from 2018 were included in the analysis. About two-thirds (63%) of these planned to include NRS, while 37% restricted study design to RCT or quasi-RCT. Over half of the protocols that planned to include NRS listed only a single RoB tool, most frequently the Cochrane RoB Tool. The Newcastle-Ottawa Scale and ROBINS-I were the most commonly reported tools for NRS (39% and 33% respectively) for systematic reviews that planned to use multiple RoB tools. Looking at trends over time, the planned use of the Cochrane RoB Tool and ROBINS-I seems to be increasing. 
Conclusions While RoB tool selection for RCT was consistent, with the Cochrane RoB Tool being the most frequently reported in PROSPERO protocols, RoB tools for NRS varied widely. Results suggest a need for more education and awareness on the appropriate use of RoB tools for NRS. Given the heterogeneity of study designs comprising NRS, multiple RoB tools tailored to specific designs may be required.
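The longitudinal analysis described above (matching keywords and name variants for each tool against the “Risk of bias (quality) assessment” field, tallied by year) can be sketched roughly as follows. The variant patterns and records here are illustrative assumptions, not the study's actual search strings:

```python
import re

# Hypothetical name variants for two risk-of-bias tools. The real
# search presumably used each tool's published names and abbreviations.
VARIANTS = {
    "Cochrane RoB Tool": [r"cochrane risk of bias", r"cochrane rob"],
    "ROBINS-I": [r"robins[- ]?i\b"],
}

def count_tool_mentions(records):
    """Count, per year, how many records mention each tool.

    `records` is a list of (year, text) pairs standing in for the
    'Risk of bias (quality) assessment' field of PROSPERO records.
    """
    counts = {}
    for year, text in records:
        low = text.lower()
        for tool, patterns in VARIANTS.items():
            # A record counts once per tool, however many variants match.
            if any(re.search(p, low) for p in patterns):
                by_year = counts.setdefault(tool, {})
                by_year[year] = by_year.get(year, 0) + 1
    return counts

records = [
    (2017, "Quality will be assessed with the Cochrane RoB tool."),
    (2018, "ROBINS-I for non-randomized studies; Cochrane risk of bias for RCTs."),
]
print(count_tool_mentions(records))
# → {'Cochrane RoB Tool': {2017: 1, 2018: 1}, 'ROBINS-I': {2018: 1}}
```

Restricting the match to one metadata field, as the study did, avoids counting tools that are merely cited in a protocol's background text.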
The risk of bias in observational studies of exposures (ROBINS-E) tool: concerns arising from application to observational studies of exposures
Background Systematic reviews, which assess the risk of bias in included studies, are increasingly used to develop environmental hazard assessments and public health guidelines. These research areas typically rely on evidence from human observational studies of exposures, yet there are currently no universally accepted standards for assessing risk of bias in such studies. The risk of bias in non-randomised studies of exposures (ROBINS-E) tool has been developed by building upon tools for risk of bias assessment of randomised trials, diagnostic test accuracy studies and observational studies of interventions. This paper reports our experience with the application of the ROBINS-E tool. Methods We applied ROBINS-E to 74 exposure studies (60 cohort studies, 14 case-control studies) in 3 areas: environmental risk, dietary exposure and drug harm. All investigators provided written feedback, and we documented verbal discussion of the tool. We inductively and iteratively classified the feedback into 7 themes based on commonalities and differences until all the feedback was accounted for in the themes. We present a description of each theme. Results We identified practical concerns with the premise that ROBINS-E is a structured comparison of the observational study being rated to the ‘ideal’ randomised controlled trial. ROBINS-E assesses 7 domains of bias, but relevant questions related to some critical sources of bias, such as exposure and funding source, are not assessed. ROBINS-E fails to discriminate between studies with a single risk of bias or multiple risks of bias. ROBINS-E is severely limited at determining whether confounders will bias study outcomes. The construct of co-exposures was difficult to distinguish from confounders. Applying ROBINS-E was time-consuming and confusing. 
Conclusions Our experience suggests that the ROBINS-E tool does not meet the need for an international standard for evaluating human observational studies for questions of harm relevant to public and environmental health. We propose that a simpler tool, based on empirical evidence of bias, would provide accurate measures of risk of bias and is more likely to meet the needs of the environmental and public health community.
Benchmarking of T cell receptor repertoire profiling methods reveals large systematic biases
Monitoring the T cell receptor (TCR) repertoire in health and disease can provide key insights into adaptive immune responses, but the accuracy of current TCR sequencing (TCRseq) methods is unclear. In this study, we systematically compared the results of nine commercial and academic TCRseq methods, including six rapid amplification of complementary DNA ends (RACE)-polymerase chain reaction (PCR) and three multiplex-PCR approaches, when applied to the same T cell sample. We found marked differences in accuracy and intra- and inter-method reproducibility for T cell receptor α (TRA) and T cell receptor β (TRB) TCR chains. Most methods showed a lower ability to capture TRA than TRB diversity. Low RNA input generated non-representative repertoires. Results from the 5′ RACE-PCR methods were consistent among themselves but differed from the RNA-based multiplex-PCR results. Using an in silico meta-repertoire generated from 108 replicates, we found that one genomic DNA-based method and two non-unique molecular identifier (UMI) RNA-based methods were more sensitive than UMI methods in detecting rare clonotypes, despite the better clonotype quantification accuracy of the latter. A comparison of T cell receptor repertoire profiling methods shows substantial differences in their outputs.
The Evidence Project risk of bias tool: assessing study rigor for both randomized and non-randomized intervention studies
Background Different tools exist for assessing risk of bias of intervention studies for systematic reviews. We present a tool for assessing risk of bias across both randomized and non-randomized study designs. The tool was developed by the Evidence Project, which conducts systematic reviews and meta-analyses of behavioral interventions for HIV in low- and middle-income countries. Methods We present the eight items of the tool and describe considerations for each and for the tool as a whole. We then evaluate reliability of the tool by presenting inter-rater reliability for 125 selected studies from seven published reviews, calculating a kappa for each individual item and a weighted kappa for the total count of items. Results The tool includes eight items, each of which is rated as being present (yes) or not present (no) and, for some items, not applicable or not reported. The items include (1) cohort, (2) control or comparison group, (3) pre-post intervention data, (4) random assignment of participants to the intervention, (5) random selection of participants for assessment, (6) follow-up rate of 80% or more, (7) comparison groups equivalent on sociodemographics, and (8) comparison groups equivalent at baseline on outcome measures. Together, items (1)–(3) summarize the study design, while the remaining items consider other common elements of study rigor. Inter-rater reliability was moderate to substantial for all items, ranging from 0.41 to 0.80 (median κ = 0.66). Agreement between raters on the total count of items endorsed was also substantial (κw = 0.66). Conclusions Strengths of the tool include its applicability to a range of study designs, from randomized trials to various types of observational and quasi-experimental studies. It is relatively easy to use and interpret and can be applied to a range of review topics without adaptation, facilitating comparability across reviews.
Limitations include the lack of potentially relevant items measured in other tools and potential threats to validity of some items. To date, the tool has been applied in over 30 reviews. We believe it is a practical option for assessing risk of bias in systematic reviews of interventions that include a range of study designs.
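The item-level reliabilities reported above are Cohen's kappa values, which discount the agreement two raters would reach by chance alone. A minimal sketch of the computation; the two reviewers' yes/no ratings below are hypothetical, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence, from marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of one tool item ("control group present?")
# across ten studies by two reviewers:
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

For the total count of items the study reports a weighted kappa, which additionally credits near-agreement on ordinal totals; libraries such as scikit-learn expose both through `cohen_kappa_score` with its `weights` parameter.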