459 results for "Riley, Richard D"
Evaluation of clinical prediction models (part 1): from development to external validation
Evaluating the performance of a clinical prediction model is crucial to establish its predictive accuracy in the populations and settings intended for use. In this article, the first in a three-part series, Collins and colleagues describe the importance of a meaningful evaluation using internal, internal-external, and external validation, as well as exploring heterogeneity, fairness, and generalisability in model performance.
Evaluation of clinical prediction models (part 3): calculating the sample size required for an external validation study
An external validation study evaluates the performance of a prediction model in new data, but many of these studies are too small to provide reliable answers. In the third article of their series on model evaluation, Riley and colleagues describe how to calculate the sample size required for external validation studies, and propose to avoid rules of thumb by tailoring calculations to the model and setting at hand.
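To make the idea concrete, here is a minimal sketch (not the authors' full method, which covers several criteria) of one typical precision criterion: choosing the validation sample size so that the observed/expected (O/E) ratio for a binary outcome is estimated within a target 95% confidence interval. The anticipated outcome proportion and the CI bounds below are illustrative assumptions.

```python
# A minimal, illustrative sketch: sample size for an external validation study
# so that the observed/expected (O/E) ratio of a binary outcome is estimated
# precisely. The inputs (outcome proportion, target CI) are made-up examples.
import math

def n_for_oe_precision(outcome_prop: float, ci_lower: float, ci_upper: float) -> int:
    """Return n so the 95% CI for O/E spans roughly (ci_lower, ci_upper).

    Uses the approximation var(ln O/E) ~= (1 - phi) / (n * phi), where phi is
    the anticipated outcome proportion in the validation population.
    """
    target_se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * 1.96)
    n = (1 - outcome_prop) / (outcome_prop * target_se ** 2)
    return math.ceil(n)

# Example: anticipated outcome proportion 10%, and we want the 95% CI for O/E
# to lie within about 0.9 to 1.1 if calibration-in-the-large is good.
print(n_for_oe_precision(outcome_prop=0.10, ci_lower=0.9, ci_upper=1.1))
```

The point of tailoring the calculation, rather than using a rule of thumb, is that the required n depends directly on the anticipated outcome proportion and the precision demanded for each performance measure.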
Prognosis Research Strategy (PROGRESS) 3: Prognostic Model Research
Prognostic models are abundant in the medical literature yet their use in practice seems limited. In this article, the third in the PROGRESS series, the authors review how such models are developed and validated, and then address how prognostic models are assessed for their impact on practice and patient outcomes, illustrating these ideas with examples.
Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence
Introduction: The Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction model Risk Of Bias ASsessment Tool (PROBAST) were both published to improve the reporting and critical appraisal of prediction model studies for diagnosis and prognosis. This paper describes the processes and methods that will be used to develop an extension to the TRIPOD statement (TRIPOD-artificial intelligence, AI) and the PROBAST (PROBAST-AI) tool for prediction model studies that applied machine learning techniques.
Methods and analysis: TRIPOD-AI and PROBAST-AI will be developed following published guidance from the EQUATOR Network, and will comprise five stages. Stage 1 will comprise two systematic reviews (across all medical fields and specifically in oncology) to examine the quality of reporting in published machine-learning-based prediction model studies. In stage 2, we will consult a diverse group of key stakeholders using a Delphi process to identify items to be considered for inclusion in TRIPOD-AI and PROBAST-AI. Stage 3 will be virtual consensus meetings to consolidate and prioritise key items to be included in TRIPOD-AI and PROBAST-AI. Stage 4 will involve developing the TRIPOD-AI checklist and the PROBAST-AI tool, and writing the accompanying explanation and elaboration papers. In the final stage, stage 5, we will disseminate TRIPOD-AI and PROBAST-AI via journals, conferences, blogs, websites (including TRIPOD, PROBAST and EQUATOR Network) and social media. TRIPOD-AI will provide researchers working on prediction model studies based on machine learning with a reporting guideline that can help them report key details that readers need to evaluate the study quality and interpret its findings, potentially reducing research waste. We anticipate PROBAST-AI will help researchers, clinicians, systematic reviewers and policymakers critically appraise the design, conduct and analysis of machine learning based prediction model studies, with a robust standardised tool for bias evaluation.
Ethics and dissemination: Ethical approval has been granted by the Central University Research Ethics Committee, University of Oxford on 10-December-2020 (R73034/RE001). Findings from this study will be disseminated through peer-review publications.
PROSPERO registration number: CRD42019140361 and CRD42019161764.
Prognosis Research Strategy (PROGRESS) 2: Prognostic Factor Research
Prognostic factor research aims to identify factors associated with subsequent clinical outcome in people with a particular disease or health condition. In this article, the second in the PROGRESS series, the authors discuss the role of prognostic factors in current clinical practice, randomised trials, and developing new interventions, and explain why and how prognostic factor research should be improved.
Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples
Organisations such as the National Institute for Health and Care Excellence require the synthesis of evidence from existing studies to inform their decisions—for example, about the best available treatments with respect to multiple efficacy and safety outcomes. However, relevant studies may not provide direct evidence about all the treatments or outcomes of interest. Multivariate and network meta-analysis methods provide a framework to address this, using correlated or indirect evidence from such studies alongside any direct evidence. In this article, the authors describe the key concepts and assumptions of these methods, outline how correlated and indirect evidence arises, and illustrate the contribution of such evidence in real clinical examples involving multiple outcomes and multiple treatments.
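As a concrete illustration of how indirect evidence arises, the sketch below computes a Bucher-style adjusted indirect comparison of treatments B and C through a common comparator A. The log odds ratios and standard errors are invented for illustration and are not taken from the article.

```python
# A minimal sketch of indirect evidence in a treatment network: estimate
# B vs C from independent B vs A and C vs A summaries (adjusted indirect
# comparison). The numbers below are illustrative only.
import math

# Direct summaries (log odds ratio, standard error) from two sets of trials.
log_or_BA, se_BA = -0.30, 0.12   # B vs A
log_or_CA, se_CA = -0.10, 0.15   # C vs A

# Indirect B vs C effect: difference of the two direct effects; variances add
# because the two evidence sources are independent.
log_or_BC = log_or_BA - log_or_CA
se_BC = math.sqrt(se_BA ** 2 + se_CA ** 2)

ci = (math.exp(log_or_BC - 1.96 * se_BC), math.exp(log_or_BC + 1.96 * se_BC))
print(f"Indirect OR (B vs C): {math.exp(log_or_BC):.2f}, "
      f"95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Note that the indirect estimate is less precise than either direct comparison, which is why network meta-analysis combines indirect and direct evidence where both exist.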
Association between antihypertensive treatment and adverse events: systematic review and meta-analysis
Objective: To examine the association between antihypertensive treatment and specific adverse events.
Design: Systematic review and meta-analysis.
Eligibility criteria: Randomised controlled trials of adults receiving antihypertensives compared with placebo or no treatment, more antihypertensive drugs compared with fewer antihypertensive drugs, or higher blood pressure targets compared with lower targets. To avoid small early phase trials, studies were required to have at least 650 patient years of follow-up.
Information sources: Searches were conducted in Embase, Medline, CENTRAL, and the Science Citation Index databases from inception until 14 April 2020.
Main outcome measures: The primary outcome was falls during trial follow-up. Secondary outcomes were acute kidney injury, fractures, gout, hyperkalaemia, hypokalaemia, hypotension, and syncope. Additional outcomes related to death and major cardiovascular events were extracted. Risk of bias was assessed using the Cochrane risk of bias tool, and random effects meta-analysis was used to pool rate ratios, odds ratios, and hazard ratios across studies, allowing for between-study heterogeneity (τ²).
Results: Of 15 023 articles screened for inclusion, 58 randomised controlled trials were identified, including 280 638 participants followed up for a median of 3 (interquartile range 2-4) years. Most of the trials (n=40, 69%) had a low risk of bias. Among seven trials reporting data for falls, no evidence was found of an association with antihypertensive treatment (summary risk ratio 1.05, 95% confidence interval 0.89 to 1.24, τ²=0.009). Antihypertensives were associated with an increased risk of acute kidney injury (1.18, 95% confidence interval 1.01 to 1.39, τ²=0.037, n=15), hyperkalaemia (1.89, 1.56 to 2.30, τ²=0.122, n=26), hypotension (1.97, 1.67 to 2.32, τ²=0.132, n=35), and syncope (1.28, 1.03 to 1.59, τ²=0.050, n=16). The heterogeneity between studies assessing acute kidney injury and hyperkalaemia events was reduced when focusing on drugs that affect the renin-angiotensin-aldosterone system. Results were robust to sensitivity analyses focusing on adverse events leading to withdrawal from each trial. Antihypertensive treatment was associated with a reduced risk of all cause mortality, cardiovascular death, and stroke, but not of myocardial infarction.
Conclusions: This meta-analysis found no evidence to suggest that antihypertensive treatment is associated with falls but found evidence of an association with mild (hyperkalaemia, hypotension) and severe adverse events (acute kidney injury, syncope). These data could be used to inform shared decision making between doctors and patients about initiation and continuation of antihypertensive treatment, especially in patients at high risk of harm because of previous adverse events or poor renal function.
Registration: PROSPERO CRD42018116860.
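The abstract specifies random-effects meta-analysis allowing for between-study heterogeneity (τ²). The sketch below shows one common way to do that pooling, using the DerSimonian-Laird estimator on invented study-level log risk ratios; the article does not state which τ² estimator was used, so treat this as a generic illustration rather than the authors' exact analysis.

```python
# A hedged sketch of random-effects pooling of log risk ratios with
# between-study heterogeneity (tau^2) estimated by DerSimonian-Laird.
# The study-level estimates below are made up for illustration.
import math

log_rr = [0.10, 0.25, -0.05, 0.18, 0.30]   # study log risk ratios (made up)
se     = [0.12, 0.20, 0.15, 0.10, 0.25]    # their standard errors (made up)

w_fixed = [1 / s ** 2 for s in se]          # inverse-variance weights
mu_fixed = sum(w * y for w, y in zip(w_fixed, log_rr)) / sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird tau^2 estimate.
q = sum(w * (y - mu_fixed) ** 2 for w, y in zip(w_fixed, log_rr))
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)

# Random-effects weights, pooled estimate, and its standard error.
w_re = [1 / (s ** 2 + tau2) for s in se]
mu_re = sum(w * y for w, y in zip(w_re, log_rr)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"tau^2 = {tau2:.3f}")
print(f"Pooled RR = {math.exp(mu_re):.2f} "
      f"(95% CI {math.exp(mu_re - 1.96 * se_re):.2f} "
      f"to {math.exp(mu_re + 1.96 * se_re):.2f})")
```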
External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges
Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is signalling a new advent of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
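One concrete way to exploit such combined datasets is internal-external cross-validation: develop the model on all studies but one, validate it on the held-out study, and rotate through the studies. The sketch below assumes individual participant data in a pandas DataFrame with a study identifier and uses a scikit-learn logistic regression; the column names and the simulated data are illustrative assumptions, not part of the article.

```python
# A minimal sketch of internal-external cross-validation across the studies in
# an IPD meta-analysis. Column names, logistic regression, and scikit-learn are
# assumptions made for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def internal_external_cv(ipd: pd.DataFrame, predictors: list[str],
                         outcome: str, study_col: str) -> pd.DataFrame:
    """Leave-one-study-out validation; returns the c-statistic per held-out study."""
    rows = []
    for study in ipd[study_col].unique():
        train = ipd[ipd[study_col] != study]
        test = ipd[ipd[study_col] == study]
        model = LogisticRegression(max_iter=1000).fit(train[predictors], train[outcome])
        pred = model.predict_proba(test[predictors])[:, 1]
        rows.append({"held_out_study": study,
                     "n": len(test),
                     "c_statistic": roc_auc_score(test[outcome], pred)})
    return pd.DataFrame(rows)

# Example with simulated IPD from 5 studies.
rng = np.random.default_rng(0)
n = 2000
ipd = pd.DataFrame({
    "study": rng.integers(1, 6, n),
    "age": rng.normal(60, 10, n),
    "biomarker": rng.normal(0, 1, n),
})
lp = -4 + 0.04 * ipd["age"] + 0.8 * ipd["biomarker"]
ipd["event"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))
print(internal_external_cv(ipd, ["age", "biomarker"], "event", "study"))
```

Examining how the held-out performance varies across studies speaks directly to the heterogeneity and generalisability issues the article raises.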
Penalization and shrinkage methods produced unreliable clinical prediction models especially when sample size was small
When developing a clinical prediction model, penalization techniques are recommended to address overfitting, as they shrink predictor effect estimates toward the null and reduce mean-square prediction error in new individuals. However, shrinkage and penalty terms (‘tuning parameters’) are estimated with uncertainty from the development data set. We examined the magnitude of this uncertainty and the subsequent impact on prediction model performance. This study comprises applied examples and a simulation study of the following methods: uniform shrinkage (estimated via a closed-form solution or bootstrapping), ridge regression, the lasso, and elastic net. In a particular model development data set, penalization methods can be unreliable because tuning parameters are estimated with large uncertainty. This is of most concern when development data sets have a small effective sample size and the model's Cox-Snell R² is low. The problem can lead to considerable miscalibration of model predictions in new individuals. Penalization methods are not a ‘carte blanche’; they do not guarantee a reliable prediction model is developed. They are more unreliable when needed most (i.e., when overfitting may be large). We recommend they are best applied with large effective sample sizes, as identified from recent sample size calculations that aim to minimize the potential for model overfitting and precisely estimate key parameters.
• When developing a clinical prediction model, penalization and shrinkage techniques are recommended to address overfitting.
• Some methodology articles suggest penalization methods are a ‘carte blanche’ and resolve any issues to do with overfitting.
• We show that penalization methods can be unreliable, as their unknown shrinkage and tuning parameter estimates are often estimated with large uncertainty.
• Although penalization methods will, on average, improve on standard estimation methods, in a particular data set they are often unreliable.
• The most problematic data sets are those with small effective sample sizes and where the developed model has a Cox-Snell R² far from 1, which is common for prediction models of binary and time-to-event outcomes.
• Penalization methods are best used in situations when a sufficiently large development data set is available, as identified from sample size calculations to minimize the potential for model overfitting and precisely estimate key parameters.
• When the sample size is adequately large, any of the studied penalization or shrinkage methods can be used, as they should perform similarly and better than unpenalized regression unless the sample size is extremely large and the apparent R² (R²app) is large.
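A small simulation along the lines the abstract describes (though not the authors' actual design) can make the instability visible: on bootstrap resamples of one small development dataset, the cross-validation-chosen lasso penalty varies widely, so the amount of shrinkage applied is itself unreliable. The scikit-learn workflow and simulated data below are assumed for illustration.

```python
# A hedged sketch of tuning-parameter instability in a small development data
# set: the CV-chosen lasso penalty differs markedly across bootstrap resamples
# of the same data. Simulated data and settings are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(1)
n, p = 150, 10                                 # small effective sample size
X = rng.normal(size=(n, p))
lp = X @ np.r_[0.5, 0.5, 0.3, np.zeros(p - 3)]
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))

chosen_C = []
for _ in range(20):                            # bootstrap resamples of the data
    idx = rng.integers(0, n, n)
    model = LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear", cv=5)
    model.fit(X[idx], y[idx])
    chosen_C.append(model.C_[0])               # tuned inverse penalty strength

print("Chosen inverse penalty (C) across resamples:")
print(np.round(np.sort(chosen_C), 3))          # a wide spread => unstable tuning
```

With a much larger n the spread of the chosen penalty narrows, which is consistent with the recommendation to rely on penalization only when the development sample size is already adequate.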
A guide to systematic review and meta-analysis of prediction model performance
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.
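For discrimination, one common summarisation step is to pool logit-transformed c-statistics with a random-effects model and back-transform the summary. The sketch below reuses the same DerSimonian-Laird pooling shown earlier, applied on the logit scale; the c-statistics and standard errors are invented examples, not EuroSCORE results from the review.

```python
# A minimal sketch of meta-analysing a model's discrimination across external
# validation studies: random-effects pooling of logit c-statistics, then
# back-transformation. Inputs are made up for illustration.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# (c-statistic, standard error of c) per validation study (made up).
studies = [(0.78, 0.020), (0.74, 0.030), (0.81, 0.025), (0.70, 0.040)]

# Move to the logit scale; SE obtained via the delta method.
y = [logit(c) for c, _ in studies]
se = [s / (c * (1 - c)) for c, s in studies]

w = [1 / s ** 2 for s in se]
mu_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, y))
c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)      # between-study heterogeneity

w_re = [1 / (s ** 2 + tau2) for s in se]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_mu = math.sqrt(1 / sum(w_re))

print(f"Summary c-statistic: {inv_logit(mu):.3f} "
      f"(95% CI {inv_logit(mu - 1.96 * se_mu):.3f} "
      f"to {inv_logit(mu + 1.96 * se_mu):.3f})")
```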