328 result(s) for "Bossuyt, Patrick M"
False-negative results of initial RT-PCR assays for COVID-19: A systematic review
A false-negative case of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is defined as a person with suspected infection and an initial negative result by reverse transcription-polymerase chain reaction (RT-PCR) test, with a positive result on a subsequent test. False-negative cases have important implications for isolation and risk of transmission of infected people and for the management of coronavirus disease 2019 (COVID-19). We aimed to review and critically appraise evidence about the rate of RT-PCR false-negatives at initial testing for COVID-19. We searched MEDLINE, EMBASE, LILACS, as well as COVID-19 repositories, including the EPPI-Centre living systematic map of evidence about COVID-19 and the Coronavirus Open Access Project living evidence database. Two authors independently screened and selected studies according to the eligibility criteria and collected data from the included studies. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. We calculated the proportion of false-negative test results using a multilevel mixed-effect logistic regression model. The certainty of the evidence about false-negative cases was rated using the GRADE approach for tests and strategies. All information in this article is current up to July 17, 2020. We included 34 studies enrolling 12,057 COVID-19 confirmed cases. All studies were affected by several risks of bias and applicability concerns. The pooled estimate of false-negative proportion was highly affected by unexplained heterogeneity (tau-squared = 1.39; 90% prediction interval from 0.02 to 0.54). The certainty of the evidence was judged as very low due to the risk of bias, indirectness, and inconsistency issues. There is substantial and largely unexplained heterogeneity in the proportion of false-negative RT-PCR results. 
The collected evidence has several limitations, including risk of bias issues, high heterogeneity, and concerns about its applicability. Nonetheless, our findings reinforce the need for repeated testing in patients with suspicion of SARS-CoV-2 infection given that up to 54% of COVID-19 patients may have an initial false-negative RT-PCR (very low certainty of evidence). Protocol available on the OSF website: https://tinyurl.com/vvbgqya.
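The review pools false-negative proportions with a multilevel mixed-effects logistic regression. As an illustrative stand-in for that model, the sketch below pools logit-transformed proportions with the simpler DerSimonian-Laird random-effects method; the study counts are hypothetical, not data from the review.

```python
import math

def pool_logit_proportions(fn, n):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit
    scale; a simplified stand-in for a multilevel logistic model."""
    # Logit-transform each study's false-negative proportion (0.5 continuity correction).
    theta = [math.log((f + 0.5) / (ni - f + 0.5)) for f, ni in zip(fn, n)]
    # Approximate within-study variances of the logits.
    v = [1.0 / (f + 0.5) + 1.0 / (ni - f + 0.5) for f, ni in zip(fn, n)]
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * ti for wi, ti in zip(w, theta)) / sum(w)
    q = sum(wi * (ti - fixed) ** 2 for wi, ti in zip(w, theta))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(fn) - 1)) / c)  # between-study variance estimate
    ws = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * ti for wi, ti in zip(ws, theta)) / sum(ws)
    return 1.0 / (1.0 + math.exp(-pooled)), tau2  # back-transform to a proportion

# Hypothetical study data: false negatives / total confirmed cases per study.
p, tau2 = pool_logit_proportions([5, 30, 2], [50, 120, 80])
```

A large tau2 on the logit scale, as the review reports (1.39), signals that a single pooled proportion is a poor summary and motivates the prediction interval the authors give instead.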
Beyond Diagnostic Accuracy: The Clinical Utility of Diagnostic Tests
Like any other medical technology or intervention, diagnostic tests should be thoroughly evaluated before their introduction into daily practice. Increasingly, decision makers, physicians, and other users of diagnostic tests request more than simple measures of a test's analytical or technical performance and diagnostic accuracy; they would also like to see testing lead to health benefits. In this last article of our series, we introduce the notion of clinical utility, which expresses—preferably in a quantitative form—to what extent diagnostic testing improves health outcomes relative to the current best alternative, which could be some other form of testing or no testing at all. In most cases, diagnostic tests improve patient outcomes by providing information that can be used to identify patients who will benefit from helpful downstream management actions, such as effective treatment in individuals with positive test results and no treatment for those with negative results. We describe how comparative randomized clinical trials can be used to estimate clinical utility. We contrast the definition of clinical utility with that of the personal utility of tests and markers. We show how diagnostic accuracy can be linked to clinical utility through an appropriate definition of the target condition in diagnostic-accuracy studies.
Recommendations for liver transplantation for hepatocellular carcinoma: an international consensus conference report
Although liver transplantation is a widely accepted treatment for hepatocellular carcinoma (HCC), much controversy remains and there is no generally accepted set of guidelines. An international consensus conference was held on Dec 2–4, 2010, in Zurich, Switzerland, with the aim of reviewing current practice regarding liver transplantation in patients with HCC and to develop internationally accepted statements and guidelines. The format of the conference was based on the Danish model. 19 working groups of experts prepared evidence-based reviews according to the Oxford classification, and drafted recommendations answering 19 specific questions. An independent jury of nine members was appointed to review these submissions and make final recommendations, after debates with the experts and audience at the conference. This report presents the final 37 statements and recommendations, covering assessment of candidates for liver transplantation, criteria for listing in cirrhotic and non-cirrhotic patients, role of tumour downstaging, management of patients on the waiting list, role of living donation, and post-transplant management.
Increasing value and reducing waste in biomedical research: who's listening?
The biomedical research complex has been estimated to consume almost a quarter of a trillion US dollars every year. Unfortunately, evidence suggests that a high proportion of this sum is avoidably wasted. In 2014, The Lancet published a series of five reviews showing how dividends from the investment in research might be increased from the relevance and priorities of the questions being asked, to how the research is designed, conducted, and reported. 17 recommendations were addressed to five main stakeholders—funders, regulators, journals, academic institutions, and researchers. This Review provides some initial observations on the possible effects of the Series, which seems to have provoked several important discussions and is on the agendas of several key players. Some examples of individual initiatives show ways to reduce waste and increase value in biomedical research. This momentum will probably move strongly across stakeholder groups, if collaborative relationships evolve between key players; further important work is needed to increase research value. A forthcoming meeting in Edinburgh, UK, will provide an initial forum within which to foster the collaboration needed.
Accuracy of cytokeratin 18 (M30 and M65) in detecting non-alcoholic steatohepatitis and fibrosis: A systematic review and meta-analysis
Introduction: Association between elevated cytokeratin 18 (CK-18) levels and hepatocyte death has made circulating CK-18 a candidate biomarker to differentiate non-alcoholic fatty liver from non-alcoholic steatohepatitis (NASH). Yet studies produced variable diagnostic performance. We aimed to provide summary estimates with increased precision for the accuracy of CK-18 (M30, M65) in detecting NASH and fibrosis among non-alcoholic fatty liver disease (NAFLD) adults.
Methods: We searched five databases to retrieve studies evaluating CK-18 against a liver biopsy in NAFLD adults. Reference screening, data extraction and quality assessment (QUADAS-2) were independently conducted by two authors. Meta-analyses were performed for five groups based on the CK-18 antigens and target conditions, using one of two methods: linear mixed-effects multiple thresholds model or bivariate logit-normal random-effects model.
Results: We included 41 studies, with data on 5,815 participants. A wide range of disease prevalence was observed. No study reported a pre-defined cut-off. Thirty of 41 studies provided sufficient data for inclusion in any of the meta-analyses. Summary AUC [95% CI] were: 0.75 [0.69-0.82] (M30) and 0.82 [0.69-0.91] (M65) for NASH; 0.73 [0.57-0.85] (M30) for fibrotic NASH; 0.68 (M30) for significant (F2-4) fibrosis; and 0.75 (M30) for advanced (F3-4) fibrosis. Thirteen studies used CK-18 as a component of a multimarker model.
Conclusions: For M30 we found lower diagnostic accuracy to detect NASH compared to previous meta-analyses, indicating a limited ability to act as a stand-alone test, with better performance for M65. Additional external validation studies are needed to obtain credible estimates of the diagnostic accuracy of multimarker models.
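The summary AUCs above come from formal meta-analytic models. At the single-study level, the AUC of a continuous marker such as CK-18 against a biopsy reference can be computed empirically as the Mann-Whitney probability that a randomly chosen case outranks a randomly chosen control; the marker values below are hypothetical, not study data.

```python
def empirical_auc(cases, controls):
    """Empirical AUC: P(case marker > control marker), ties counted as 0.5.
    Equivalent to the Mann-Whitney U statistic divided by n_cases * n_controls."""
    concordant = 0.0
    for x in cases:
        for y in controls:
            concordant += 1.0 if x > y else 0.5 if x == y else 0.0
    return concordant / (len(cases) * len(controls))

# Hypothetical CK-18 values (U/L) for biopsy-confirmed NASH cases vs non-NASH controls.
auc = empirical_auc([410, 650, 520, 890], [220, 300, 410, 180, 260])  # -> 0.975
```

An AUC of 0.5 means the marker is uninformative; the pooled values of 0.68-0.82 reported above correspond to only moderate separation between cases and controls.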
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews
The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). [...]technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence [22,23,24], methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate [25,26,27], and new methods have been developed to assess the risk of bias in results of included studies [28, 29]. [...]the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols [33, 34], disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. [...]extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses [49], meta-analyses of individual participant data [50], systematic reviews of harms [51], systematic reviews of diagnostic test accuracy studies [52], and scoping reviews [53]; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.
Preferred reporting items for systematic review and meta-analysis of diagnostic test accuracy studies (PRISMA-DTA): explanation, elaboration, and checklist
Systematic reviews of diagnostic test accuracy (DTA) studies are fundamental to the decision making process in evidence based medicine. Although such studies are regarded as high level evidence, these reviews are not always reported completely and transparently. Suboptimal reporting of DTA systematic reviews compromises their validity and generalisability, and subsequently their value to key stakeholders. An extension of the PRISMA (preferred reporting items for systematic review and meta-analysis) statement was recently developed to improve the reporting quality of DTA systematic reviews. The PRISMA-DTA statement has 27 items, of which eight are unmodified from the original PRISMA statement. This article provides an explanation for the 19 new and modified items, along with their meaning and rationale. Examples of complete reporting are used for each item to illustrate best practices.
Cochran's Q test was useful to assess heterogeneity in likelihood ratios in studies of diagnostic accuracy
Empirical evaluations have demonstrated that diagnostic accuracy frequently shows significant heterogeneity between subgroups of patients within a study. We propose to use Cochran's Q test to assess heterogeneity in diagnostic likelihood ratios (LRs). We reanalyzed published data of six articles that showed within-study heterogeneity in diagnostic accuracy. We used the Q test to assess heterogeneity in LRs and compared the results of the Q test with those obtained using another method for stratified analysis of LRs, based on subgroup confidence intervals. We also studied the behavior of the Q test using hypothetical data. The Q test detected significant heterogeneity in LRs in all six example data sets. The Q test detected significant heterogeneity in LRs more frequently than the confidence interval approach (38% vs. 20%). When applied to hypothetical data, the Q test would be able to detect relatively small variations in LRs, of about a twofold increase, in a study including 300 participants. Reanalysis of published data using the Q test can be easily performed to assess heterogeneity in diagnostic LRs between subgroups of patients, potentially providing important information to clinicians who base their decisions on published LRs.
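The Q test described above can be sketched directly: on the log scale, Q is the weighted sum of squared deviations of subgroup log-LRs from their inverse-variance-weighted mean, referred to a chi-square distribution with k - 1 degrees of freedom. The subgroup LRs and standard errors below are hypothetical; with three subgroups (df = 2) the chi-square upper-tail probability has the closed form exp(-Q/2).

```python
import math

def cochran_q(log_lrs, ses):
    """Cochran's Q for heterogeneity of log likelihood ratios across subgroups."""
    w = [1.0 / se ** 2 for se in ses]  # inverse-variance weights
    pooled = sum(wi * t for wi, t in zip(w, log_lrs)) / sum(w)
    q = sum(wi * (t - pooled) ** 2 for wi, t in zip(w, log_lrs))
    return q, len(log_lrs) - 1         # Q statistic and degrees of freedom

# Hypothetical positive LRs in three patient subgroups, analysed on the log scale;
# the SEs of ln(LR) would normally be derived from the 2x2 cell counts.
lrs, ses = [4.0, 7.5, 2.1], [0.20, 0.25, 0.30]
q, df = cochran_q([math.log(x) for x in lrs], ses)
p = math.exp(-q / 2)  # chi-square upper tail; this closed form holds only for df == 2
```

A significant Q (here Q is roughly 10.8, p below 0.01) would tell a clinician that a single published LR is not transferable across these subgroups, which is exactly the situation the article's reanalyses detected.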