65 result(s) for "Altman, Doug"
Choosing Important Health Outcomes for Comparative Effectiveness Research: A Systematic Review
A core outcome set (COS) is a standardised set of outcomes which should be measured and reported, as a minimum, in all effectiveness trials for a specific health area. This will allow results of studies to be compared, contrasted and combined as appropriate, as well as ensuring that all trials contribute usable information. The COMET (Core Outcome Measures for Effectiveness Trials) Initiative aims to support the development, reporting and adoption of COS. Central to this is a publicly accessible online resource, populated with all available COS. The aim of the review we report here was to identify studies that sought to determine which outcomes or domains to measure in all clinical trials in a specific condition and to describe the methodological techniques used in these studies. We developed a multi-faceted search strategy for electronic databases (MEDLINE, SCOPUS, and Cochrane Methodology Register). We included studies that sought to determine which outcomes/domains to measure in all clinical trials in a specific condition. A total of 250 reports relating to 198 studies were judged eligible for inclusion in the review. Studies covered various areas of health, most commonly cancer, rheumatology, neurology, heart and circulation, and dentistry and oral health. A variety of methods have been used to develop COS, including semi-structured discussion, unstructured group discussion, the Delphi Technique, Consensus Development Conference, surveys and Nominal Group Technique. The most common groups involved were clinical experts and non-clinical research experts. Thirty-one (16%) studies reported that the public had been involved in the process. The geographic locations of participants were predominantly North America (n = 164; 83%) and Europe (n = 150; 76%). This systematic review identified many health areas where a COS has been developed, but also highlights important gaps. It is a further step towards a comprehensive, up-to-date database of COS.
In addition, it shows the need for methodological guidance, including how to engage key stakeholder groups, particularly members of the public.
External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges
Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is signalling a new advent of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
Bias Due to Changes in Specified Outcomes during the Systematic Review Process
Adding, omitting or changing outcomes after a systematic review protocol is published can result in bias because it increases the potential for unacknowledged or post hoc revisions of the planned analyses. The main objective of this study was to look for discrepancies between primary outcomes listed in protocols and in the subsequent completed reviews published on the Cochrane Library. A secondary objective was to quantify the risk of bias in a set of meta-analyses where discrepancies between outcome specifications in protocols and reviews were found. New reviews from three consecutive issues of the Cochrane Library were assessed. For each review, the primary outcome(s) listed in the review protocol and the review itself were identified and review authors were contacted to provide reasons for any discrepancies. Over a fifth (64/288, 22%) of protocol/review pairings were found to contain a discrepancy in at least one outcome measure, of which 48 (75%) were attributable to changes in the primary outcome measure. Where lead authors could recall a reason for the discrepancy in the primary outcome, there was found to be potential bias in nearly a third (8/28, 29%) of these reviews, with changes being made after knowledge of the results from individual trials. Only 4 (6%) of the 64 reviews with an outcome discrepancy described the reason for the change in the review, with no acknowledgment of the change in any of the eight reviews containing potentially biased discrepancies. Outcomes that were promoted in the review were more likely to be significant than if there was no discrepancy (relative risk 1.66, 95% CI 1.10 to 2.49, p = 0.02). In a review, making changes after seeing the results for included studies can lead to biased and misleading interpretation if the importance of the outcome (primary or secondary) is changed on the basis of those results.
Our assessment showed that reasons for discrepancies with the protocol are not reported in the review, demonstrating an under-recognition of the problem. Complete transparency in the reporting of changes in outcome specification is vital; systematic reviewers should ensure that any legitimate changes to outcome specification are reported with reason in the review.
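The relative risk and confidence interval reported above come from a standard 2×2-table calculation: the ratio of event proportions, with a Wald interval on the log scale. A minimal sketch follows; the counts used are hypothetical, chosen only to yield a relative risk of similar magnitude to the one quoted, and are not the study's data.

```python
import math

def relative_risk(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Relative risk with a Wald 95% confidence interval on the log scale."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/40 promoted outcomes significant vs 45/100 when
# there was no discrepancy
rr, lo, hi = relative_risk(30, 40, 45, 100)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```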
Scoping review on interventions to improve adherence to reporting guidelines in health research
Objectives: The goal of this study is to identify, analyse and classify interventions to improve adherence to reporting guidelines in order to obtain a wide picture of how the problem of enhancing the completeness of reporting of biomedical literature has been tackled so far. Design: Scoping review. Search strategy: We searched the MEDLINE, EMBASE and Cochrane Library databases and conducted a grey literature search for (1) studies evaluating interventions to improve adherence to reporting guidelines in health research and (2) other types of references describing interventions that have been performed or suggested but never evaluated. The characteristics and effect of the evaluated interventions were analysed. Moreover, we explored the rationale of the interventions identified and determined the existing gaps in research on the evaluation of interventions to improve adherence to reporting guidelines. Results: 109 references containing 31 interventions (11 evaluated) were included. These were grouped into five categories: (1) training on the use of reporting guidelines, (2) improving understanding, (3) encouraging adherence, (4) checking adherence and providing feedback, and (5) involvement of experts. Additionally, we identified a lack of evaluated interventions (1) on training on the use of reporting guidelines and improving their understanding, (2) at early stages of research and (3) after the final acceptance of the manuscript. Conclusions: This scoping review identified a wide range of strategies to improve adherence to reporting guidelines that can be taken by different stakeholders. Additional research is needed to assess the effectiveness of many of these interventions.
Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death. A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%-20%, 21%-50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%-20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08-7.13, p<0.0001, and OR 2.31, 95% CI 1.96-2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47-0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05-3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. 
In addition, we could not consider haemoglobin in our analysis. Sensitivity analyses (excluding patients who died early; propensity score analysis adjusting for use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country) produced similar results. The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial. www.ClinicalTrials.gov NCT01746953.
Consensus-based recommendations for investigating clinical heterogeneity in systematic reviews
Background: Critics of systematic reviews have argued that these studies often fail to inform clinical decision making because their results are far too general or the data are too sparse, such that findings cannot be applied to individual patients or to other decision making. While there is some consensus on methods for investigating statistical and methodological heterogeneity, little attention has been paid to clinical aspects of heterogeneity. Clinical heterogeneity (true effect heterogeneity) can be defined as variability among studies in the participants, the types or timing of outcome measurements, and the intervention characteristics. The objective of this project was to develop recommendations for investigating clinical heterogeneity in systematic reviews. Methods: We used a modified Delphi technique with three phases: (1) pre-meeting item generation; (2) a face-to-face consensus meeting in the form of a modified Delphi process; and (3) post-meeting feedback. We identified and invited potential participants with expertise in systematic review methodology, systematic review reporting, or statistical aspects of meta-analyses, or those who had published papers on clinical heterogeneity. Results: Between April and June of 2011, we conducted phone calls with participants. In June 2011 we held the face-to-face focus group meeting in Ann Arbor, Michigan. First, we agreed upon a definition of clinical heterogeneity: variations in the treatment effect that are due to differences in clinically related characteristics.
Next, we discussed and generated recommendations in the following 12 categories related to investigating clinical heterogeneity: the systematic review team, planning investigations, rationale for choice of variables, types of clinical variables, the role of statistical heterogeneity, the use of plotting and visual aids, dealing with outlier studies, the number of investigations or variables, the role of the best evidence synthesis, types of statistical methods, the interpretation of findings, and reporting. Conclusions: Clinical heterogeneity is common in systematic reviews. Our recommendations can help guide systematic reviewers in conducting valid and reliable investigations of clinical heterogeneity. Findings of these investigations may allow for increased applicability of findings of systematic reviews to the management of individual patients.
Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study
Objective: To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design: Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources: Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria: Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure: Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results: The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions: Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
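The core calculation behind this design is the adjusted indirect comparison via a common comparator (often attributed to Bucher and colleagues), and the inconsistency statistic is the difference between the direct and indirect log odds ratios, tested against its combined standard error. A minimal sketch, with entirely hypothetical effect estimates:

```python
import math

def indirect_or(log_or_ac, se_ac, log_or_bc, se_bc):
    """Adjusted indirect comparison of A vs B via common comparator C:
    log OR(AB) = log OR(AC) - log OR(BC), with variances summed."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    return log_or_ab, se_ab

def inconsistency(log_or_direct, se_direct, log_or_indirect, se_indirect):
    """Difference between direct and indirect log odds ratios and its z statistic."""
    diff = log_or_direct - log_or_indirect
    se = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff, diff / se

# Hypothetical estimates on the log odds ratio scale
ind, se_ind = indirect_or(math.log(0.6), 0.15, math.log(0.9), 0.20)
diff, z = inconsistency(math.log(0.5), 0.18, ind, se_ind)
print(f"indirect OR = {math.exp(ind):.2f}, inconsistency z = {z:.2f}")
```

A |z| well below 1.96, as in this made-up example, would not indicate statistically significant inconsistency.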
Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography
Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic Curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, owing to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity were a more reliable and clinically appropriate method to compare diagnostic tests.
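The two contrasting measures can be computed directly: sensitivity and specificity from the binary radiological calls, and ROC AUC from confidence scores via its rank-based (Mann-Whitney) interpretation, which avoids any curve fitting. A minimal sketch on invented data; the clustering of zero scores mimics the abstract's point that readers assigned zero when no polyp was reported:

```python
def sens_spec(calls, truth):
    """Sensitivity and specificity of binary test calls against a reference standard."""
    tp = sum(c and t for c, t in zip(calls, truth))
    tn = sum(not c and not t for c, t in zip(calls, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))

def auc(scores, truth):
    """ROC AUC as the Mann-Whitney probability that a positive case's score
    outranks a negative case's score (ties count one half)."""
    pos = [s for s, t in zip(scores, truth) if t]
    neg = [s for s, t in zip(scores, truth) if not t]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical reader data: zero confidence scores wherever no polyp was called,
# giving the bimodal, non-normal score distribution the abstract describes
truth  = [1, 1, 1, 1, 0, 0, 0, 0]
calls  = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [90, 80, 70, 0, 0, 0, 60, 0]
print(sens_spec(calls, truth), auc(scores, truth))
```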
Explicit inclusion of treatment in prognostic modeling was recommended in observational and randomized settings
To compare different methods to handle treatment when developing a prognostic model that aims to produce accurate probabilities of the outcome of individuals if left untreated. Simulations were performed based on two normally distributed predictors, a binary outcome, and a binary treatment, mimicking a randomized trial or an observational study. Comparison was made between simply ignoring treatment (SIT), restricting the analytical data set to untreated individuals (AUT), inverse probability weighting (IPW), and explicit modeling of treatment (MT). Methods were compared in terms of predictive performance of the model and the proportion of incorrect treatment decisions. Omitting a genuine predictor of the outcome from the prognostic model decreased model performance, in both an observational study and a randomized trial. In randomized trials, the proportion of incorrect treatment decisions was smaller when applying AUT or MT, compared to SIT and IPW. In observational studies, MT was superior to all other methods regarding the proportion of incorrect treatment decisions. If a prognostic model aims to produce correct probabilities of the outcome in the absence of treatment, ignoring treatments that affect that outcome can lead to suboptimal model performance and incorrect treatment decisions. Explicitly modeling treatment is recommended.
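Why simply ignoring treatment (SIT) is problematic can be seen with a small piece of arithmetic: a model that ignores treatment learns the marginal risk in the mixed treated/untreated population, not the untreated risk it is meant to predict. A sketch with hypothetical numbers (not the paper's simulation settings):

```python
import math

# Hypothetical scenario: half of trial patients are treated, and treatment
# halves the odds of the outcome (OR = 0.5). A SIT model estimates the
# marginal risk, which underestimates the true untreated risk.
def odds(p):
    return p / (1 - p)

def inv_odds(o):
    return o / (1 + o)

untreated_risk = 0.30
treated_risk = inv_odds(odds(untreated_risk) * 0.5)   # apply OR = 0.5
p_treated = 0.5                                       # 1:1 randomisation

# What a model ignoring treatment would learn: the population-average risk
sit_risk = p_treated * treated_risk + (1 - p_treated) * untreated_risk
print(f"true untreated risk {untreated_risk:.3f}, SIT estimate {sit_risk:.3f}")
```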
Assessing and reporting heterogeneity in treatment effects in clinical trials: a proposal
Mounting evidence suggests that there is frequently considerable variation in the risk of the outcome of interest in clinical trial populations. These differences in risk will often cause clinically important heterogeneity in treatment effects (HTE) across the trial population, such that the balance between treatment risks and benefits may differ substantially between large identifiable patient subgroups; the "average" benefit observed in the summary result may even be non-representative of the treatment effect for a typical patient in the trial. Conventional subgroup analyses, which examine whether specific patient characteristics modify the effects of treatment, are usually unable to detect even large variations in treatment benefit (and harm) across risk groups because they do not account for the fact that patients have multiple characteristics simultaneously that affect the likelihood of treatment benefit. Based upon recent evidence on optimal statistical approaches to assessing HTE, we propose a framework that prioritizes the analysis and reporting of multivariate risk-based HTE and suggests that other subgroup analyses should be explicitly labeled either as primary subgroup analyses (well-motivated by prior evidence and intended to produce clinically actionable results) or secondary (exploratory) subgroup analyses (performed to inform future research). A standardized and transparent approach to HTE assessment and reporting could substantially improve clinical trial utility and interpretability.
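The arithmetic behind risk-based HTE is simple but easy to overlook: even with a constant relative effect, absolute benefit scales with baseline risk, so a fixed treatment harm can flip net benefit from negative to positive across risk strata. A sketch with hypothetical numbers (the relative risk reduction, harm, and risk strata below are all invented for illustration):

```python
# Hypothetical: a constant relative risk reduction (rrr) minus a fixed
# absolute treatment-related harm; net benefit varies with baseline risk.
def net_benefit(baseline_risk, rrr=0.25, harm=0.01):
    """Absolute risk reduction (rrr * baseline risk) minus a fixed harm."""
    return rrr * baseline_risk - harm

# Hypothetical baseline-risk subgroups, e.g. quartiles of a multivariate risk score
for baseline_risk in (0.02, 0.05, 0.10, 0.40):
    print(f"baseline {baseline_risk:.2f}: net benefit {net_benefit(baseline_risk):+.3f}")
```

The lowest-risk group here is harmed on net while the highest-risk group benefits substantially, which is exactly the pattern an overall "average" trial result can conceal.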