51,027 result(s) for "predictive value of tests"
Rationale and design of the precise percutaneous coronary intervention plan (P3) study: Prospective evaluation of a virtual computed tomography‐based percutaneous intervention planner
Introduction: Fractional flow reserve (FFR) measured after percutaneous coronary intervention (PCI) has been identified as a surrogate marker for vessel-related adverse events. FFR can be derived from standard coronary computed tomography angiography (CTA). Moreover, the FFR derived from coronary CTA (FFRCT) Planner is a tool that simulates PCI, providing modeled FFRCT values after stenosis opening.
Aim: To validate the accuracy of the FFRCT Planner in predicting FFR after PCI, with invasive FFR as the reference standard.
Methods: Prospective, international, multicenter study of patients with chronic coronary syndromes undergoing PCI. Patients will undergo coronary CTA with FFRCT prior to PCI. Combined morphological and functional evaluations with motorized FFR hyperemic pullbacks and optical coherence tomography (OCT) will be performed before and after PCI. The FFRCT Planner will be applied by an independent core laboratory blinded to invasive data, replicating the invasive procedure. The primary objective is to assess the agreement between the predicted post-PCI FFRCT derived from the Planner and invasive FFR. A total of 127 patients will be included in the study.
Results: Patient enrollment started in February 2019. By December 2020, 100 patients had been included. Mean age was 64.1 ± 9.03 years, 76% were male, and 24% had diabetes. The target vessels for PCI were LAD 83%, LCX 6%, and RCA 11%. The final results are expected in 2021.
Conclusion: This study will determine the accuracy and precision of the FFRCT Planner in predicting post-PCI FFR in patients with chronic coronary syndromes undergoing percutaneous revascularization.
Added Value of Shaking Chills for Predicting Bacteremia in Patients with Suspected Infection
Detailed grading of chills is more useful for diagnosing bacteremia than simply classifying the presence or absence of chills. However, its value added to other clinical information has not been evaluated. To evaluate the value of adding chills grading to other clinical information compared to simply noting the presence or absence of chills for predicting bacteremia in patients with suspected infection. Prospective observational study. Adult patients admitted to two acute-care hospitals with suspected infection from April 2018 to March 2019. Two types of categorization for chills were applied: "presence" or "absence" (dichotomized chills); and "no chills", "mild/moderate chills", and "shaking chills" (trichotomized chills). Three multivariable logistic regression models incorporating each of dichotomized chills, trichotomized chills, and C-reactive protein (CRP) with other clinical information were developed and compared. To assess the potential consequences of using each model to identify patients with high risk of bacteremia (i.e., requiring prompt intervention), we applied a cut-off point of an estimated probability of 60%. The number of patients with bacteremia correctly identified by each model was compared. Among the 2,013 patients, 327 (16.2%) were diagnosed with bacteremia. The three models showed comparable discrimination and calibration performance. At the 60% cut-off, the dichotomized chills model correctly identified 11 patients (3.4% [95% confidence interval (CI) 1.9-3.4] of patients with bacteremia). The trichotomized chills model and CRP model correctly identified an additional 15 patients (4.6% [95% CI 2.8-7.4]) and 2 patients (0.6% [95% CI 0.1-2.3]) with bacteremia, respectively. Differentiating shaking chills in comparison with dichotomized chills for predicting bacteremia allowed the correct identification of an additional 4.6% of patients with bacteremia.
Detailed grading of chills can be assessed without additional time, cost, or burden on patients and can be recommended in the routine history taking.
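The decision step described above, flagging patients whose model-estimated probability of bacteremia exceeds a 60% cut-off, can be sketched as follows. This is a minimal illustration only: the logistic model, its predictors, and its coefficients are invented for this sketch and are not those fitted in the study.

```python
import math

# Hypothetical logistic model: estimated probability of bacteremia from a
# trichotomized chills grading (shaking vs mild/moderate vs none) and body
# temperature. All coefficients are made up for illustration.
def bacteremia_probability(shaking_chills, mild_chills, temperature_c):
    """Return an estimated probability from a (made-up) logistic model.

    shaking_chills, mild_chills: 0/1 indicators (mutually exclusive).
    temperature_c: body temperature in degrees Celsius.
    """
    logit = -3.0 + 2.5 * shaking_chills + 0.8 * mild_chills + 0.6 * (temperature_c - 37.0)
    return 1 / (1 + math.exp(-logit))

def high_risk(prob, cutoff=0.60):
    """Apply the 60% estimated-probability cut-off used in the study design."""
    return prob >= cutoff
```

Under these invented coefficients, a febrile patient with shaking chills crosses the 60% threshold, while an afebrile patient without chills does not; the study's point is that the trichotomized grading lets such a model identify more true bacteremia cases at the same cut-off.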
Practice Recommendations for Diagnosis and Treatment of the Most Common Forms of Secondary Hypertension
The vast majority of hypertensive patients are never evaluated for a cause of their high blood pressure, i.e. for a ‘secondary’ form of arterial hypertension. This underdetection explains why only a tiny percentage of hypertensive patients are ultimately diagnosed with a secondary form of arterial hypertension. The prevalence of these forms is therefore markedly underestimated, although they can account for as many as one third of cases among referred patients and up to half of those with difficult-to-treat hypertension. Early detection of a secondary form is crucial: if diagnosed in a timely manner, these forms can be cured in the long term, and even when cure cannot be achieved, their diagnosis enables better control of high blood pressure and allows prevention of hypertension-mediated organ damage and related cardiovascular complications. Enormous progress has been made in the understanding, diagnostic work-up, and management of secondary hypertension in the last decades. The aim of this minireview is therefore to provide updated, concise information on the screening, diagnosis, and management of the most common forms, including primary aldosteronism, renovascular hypertension, pheochromocytoma and paraganglioma, Cushing’s syndrome, and obstructive sleep apnea.
Is the 1-minute sit-to-stand test a good tool for the evaluation of the impact of pulmonary rehabilitation? Determination of the minimal important difference in COPD
The 1-minute sit-to-stand (STS) test could be valuable to assess the level of exercise tolerance in chronic obstructive pulmonary disease (COPD). There is a need to provide the minimal important difference (MID) of this test in pulmonary rehabilitation (PR). COPD patients undergoing the 1-minute STS test before PR were included. The test was performed at baseline and at the end of PR, as were the 6-minute walk test and the quadriceps maximum voluntary contraction (QMVC). Home and community-based programs were conducted as recommended. Responsiveness to PR was determined by the difference in the 1-minute STS test between baseline and the end of PR. The MID was evaluated using distribution- and anchor-based methods. Forty-eight COPD patients were included. At baseline, the significant predictors of the number of 1-minute STS repetitions were the 6-minute walk distance (6MWD) (r=0.574; p<10 ), age (r=-0.453; p=0.001), being on long-term oxygen treatment (r=-0.454; p=0.017), and the QMVC (r=0.424; p=0.031). The multivariate analysis explained 75.8% of the variance of 1-minute STS repetitions. The improvement in 1-minute STS repetitions at the end of PR was 3.8±4.2 (p<10 ). It was mainly correlated with the change in QMVC (r=0.572; p=0.004) and 6MWD (r=0.428; p=0.006). Using the distribution-based analysis, an MID of 1.9 (standard error of measurement method) or 3.1 (standard deviation method) was found. With the 6MWD as anchor, the receiver operating characteristic curve identified the MID for the change in 1-minute STS repetitions as 2.5 (sensitivity: 80%, specificity: 60%), with an area under the curve of 0.716. The 1-minute STS test is simple and sensitive for measuring the benefit of PR. An improvement of at least three repetitions is consistent with physical benefits after PR.
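The anchor-based step in this abstract, picking the change in STS repetitions that best separates anchor-defined responders from non-responders on a receiver operating characteristic curve, is commonly done by maximizing Youden's J (sensitivity + specificity − 1). A minimal sketch, with invented data rather than the study's patients:

```python
# Sketch of ROC-based cutoff selection via Youden's J. "responder_flags"
# would come from the anchor (e.g. a meaningful 6MWD improvement); the
# numbers used in the test below are invented for illustration.
def best_cutoff(changes, responder_flags):
    """Return (cutoff, J): the candidate cutoff on `changes` (classify as
    responder when change >= cutoff) that maximizes Youden's J."""
    pos = sum(responder_flags)
    neg = len(responder_flags) - pos
    best = (None, -1.0)
    for c in sorted(set(changes)):
        tp = sum(1 for x, r in zip(changes, responder_flags) if x >= c and r)
        fp = sum(1 for x, r in zip(changes, responder_flags) if x >= c and not r)
        sensitivity = tp / pos
        specificity = 1 - fp / neg
        j = sensitivity + specificity - 1
        if j > best[1]:
            best = (c, j)
    return best
```

With perfectly separated toy data (responders all improved by 3 or more), the procedure recovers the separating cutoff; with real, overlapping data it returns the best trade-off point, as in the abstract's cutoff of 2.5 repetitions.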
Large-Scale Assessment of a Smartwatch to Identify Atrial Fibrillation
Using a smartphone app, the investigators recruited 419,297 participants to be monitored for irregular pulses. Patterns suggesting atrial fibrillation were detected in 2161 participants who then received ECG monitoring devices to be worn for 7 days to confirm the presence or absence of atrial fibrillation.
Calibration: the Achilles heel of predictive analytics
Background: The assessment of calibration performance of risk prediction models based on regression or more flexible machine learning algorithms receives little attention.
Main text: Herein, we argue that this needs to change immediately because poorly calibrated algorithms can be misleading and potentially harmful for clinical decision-making. We summarize how to avoid poor calibration at algorithm development and how to assess calibration at algorithm validation, emphasizing balance between model complexity and the available sample size. At external validation, calibration curves require sufficiently large samples. Algorithm updating should be considered for appropriate support of clinical practice.
Conclusion: Efforts are required to avoid poor calibration when developing prediction models, to evaluate calibration when validating models, and to update models when indicated. The ultimate aim is to optimize the utility of predictive analytics for shared decision-making and patient counseling.
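A basic form of the calibration check this abstract advocates is to bin patients by predicted risk and compare the mean predicted risk with the observed event rate in each bin. A minimal sketch (equal-count bins, synthetic inputs; real validation studies would also estimate a calibration intercept and slope):

```python
# Binned calibration check: for a well-calibrated model, mean predicted
# risk and observed event rate agree within each bin (points lie on the
# diagonal of a calibration plot).
def calibration_bins(probs, outcomes, n_bins=4):
    """Sort (prob, outcome) pairs by predicted probability, split into
    n_bins equal-count bins, and return (mean_predicted, observed_rate)
    per bin. outcomes are 0/1 event indicators."""
    pairs = sorted(zip(probs, outcomes))
    size = len(pairs) // n_bins
    bins = []
    for i in range(n_bins):
        # Last bin absorbs any remainder so every pair is used.
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        bins.append((mean_pred, obs_rate))
    return bins
```

Large gaps between the two numbers in a bin indicate systematic over- or under-estimation of risk in that range, which is exactly the kind of miscalibration the authors warn can mislead clinical decisions.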
Scaling up COVID-19 rapid antigen tests: promises and challenges
WHO recommends a minimum of 80% sensitivity and 97% specificity for antigen-detection rapid diagnostic tests (Ag-RDTs), which can be used for patients with symptoms consistent with COVID-19. However, after the acute phase when viral load decreases, use of Ag-RDTs might lead to high rates of false negatives, suggesting that the tests should be replaced by a combination of molecular and serological tests. When the likelihood of having COVID-19 is low, such as for asymptomatic individuals in low prevalence settings, for travel, return to schools, workplaces, and mass gatherings, Ag-RDTs with high negative predictive values can be used with confidence to rule out infection. For those who test positive in low prevalence settings, the high false positive rate means that mitigation strategies, such as molecular testing to confirm positive results, are needed. Ag-RDTs, when used appropriately, are promising tools for scaling up testing and ensuring that patient management and public health measures can be implemented without delay.
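The prevalence effect described above follows directly from Bayes' rule. A short sketch using the WHO minimum performance figures quoted in the abstract (80% sensitivity, 97% specificity); the prevalence values are illustrative assumptions, not from the article:

```python
# Predictive values of a test from sensitivity, specificity, and prevalence.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at the given prevalence."""
    tp = sensitivity * prevalence              # true positives (per person tested)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Low prevalence (0.5%, e.g. asymptomatic screening): NPV is excellent,
# but most positive results are false positives, hence the need for
# molecular confirmation of positives.
ppv_low, npv_low = predictive_values(0.80, 0.97, 0.005)

# High prevalence (20%, e.g. symptomatic patients in an outbreak): PPV
# improves substantially at the same sensitivity and specificity.
ppv_high, npv_high = predictive_values(0.80, 0.97, 0.20)
```

This is why the abstract recommends Ag-RDTs to rule out infection in low-prevalence settings (high NPV) while requiring confirmatory molecular testing of positives there (low PPV).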
Digital pathology and artificial intelligence
In modern clinical practice, digital pathology has a crucial role and is increasingly a technological requirement in the scientific laboratory environment. The advent of whole-slide imaging, availability of faster networks, and cheaper storage solutions has made it easier for pathologists to manage digital slide images and share them for clinical use. In parallel, unprecedented advances in machine learning have enabled the synergy of artificial intelligence and digital pathology, which offers image-based diagnosis possibilities that were once limited only to radiology and cardiology. Integration of digital slides into the pathology workflow, advanced algorithms, and computer-aided diagnostic techniques extend the frontiers of the pathologist's view beyond a microscopic slide and enable true utilisation and integration of knowledge that is beyond human limits and boundaries, and we believe there is clear potential for artificial intelligence breakthroughs in the pathology setting. In this Review, we discuss advancements in digital slide-based image diagnosis for cancer along with some challenges and opportunities for artificial intelligence in digital pathology.
The preregistration revolution
Progress in science relies in part on generating hypotheses with existing observations and testing hypotheses with new observations. This distinction between postdiction and prediction is appreciated conceptually but is not respected in practice. Mistaking generation of postdictions with testing of predictions reduces the credibility of research findings. However, ordinary biases in human reasoning, such as hindsight bias, make it hard to avoid this mistake. An effective solution is to define the research questions and analysis plan before observing the research outcomes—a process called preregistration. Preregistration distinguishes analyses and outcomes that result from predictions from those that result from postdictions. A variety of practical strategies are available to make the best possible use of preregistration in circumstances that fall short of the ideal application, such as when the data are preexisting. Services are now available for preregistration across all disciplines, facilitating a rapid increase in the practice. Widespread adoption of preregistration will increase distinctiveness between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
Risk stratification of patients admitted to hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score
Abstract
Objective: To develop and validate a pragmatic risk score to predict mortality in patients admitted to hospital with coronavirus disease 2019 (covid-19).
Design: Prospective observational cohort study.
Setting: International Severe Acute Respiratory and emerging Infections Consortium (ISARIC) World Health Organization (WHO) Clinical Characterisation Protocol UK (CCP-UK) study (performed by the ISARIC Coronavirus Clinical Characterisation Consortium, ISARIC-4C) in 260 hospitals across England, Scotland, and Wales. Model training was performed on a cohort of patients recruited between 6 February and 20 May 2020, with validation conducted on a second cohort of patients recruited after model development between 21 May and 29 June 2020.
Participants: Adults (age ≥18 years) admitted to hospital with covid-19 at least four weeks before final data extraction.
Main outcome measure: In-hospital mortality.
Results: 35 463 patients were included in the derivation dataset (mortality rate 32.2%) and 22 361 in the validation dataset (mortality rate 30.1%). The final 4C Mortality Score included eight variables readily available at initial hospital assessment: age, sex, number of comorbidities, respiratory rate, peripheral oxygen saturation, level of consciousness, urea level, and C reactive protein (score range 0-21 points). The 4C Score showed high discrimination for mortality (derivation cohort: area under the receiver operating characteristic curve 0.79, 95% confidence interval 0.78 to 0.79; validation cohort: 0.77, 0.76 to 0.77) with excellent calibration (validation: calibration-in-the-large=0, slope=1.0). Patients with a score of at least 15 (n=4158, 19%) had a 62% mortality (positive predictive value 62%) compared with 1% mortality for those with a score of 3 or less (n=1650, 7%; negative predictive value 99%). Discriminatory performance was higher than 15 pre-existing risk stratification scores (area under the receiver operating characteristic curve range 0.61-0.76), with scores developed in other covid-19 cohorts often performing poorly (range 0.63-0.73).
Conclusions: An easy-to-use risk stratification score has been developed and validated based on commonly available parameters at hospital presentation. The 4C Mortality Score outperformed existing scores, showed utility to directly inform clinical decision making, and can be used to stratify patients admitted to hospital with covid-19 into different management groups. The score should be further validated to determine its applicability in other populations.
Study registration: ISRCTN66726260
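The discrimination statistic reported throughout this abstract, the area under the receiver operating characteristic curve, has a direct probabilistic reading: the chance that a randomly chosen patient who died was assigned a higher score than a randomly chosen survivor (ties counted as half). A small rank-based sketch with invented scores, not the 4C data:

```python
# Rank-based (Mann-Whitney) computation of the area under the ROC curve.
def auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg),
    estimated over all (positive, negative) pairs. scores_pos are scores of
    patients with the outcome (e.g. death), scores_neg of those without."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.77, as in the validation cohort here, therefore means that in 77% of death/survivor pairs the patient who died had the higher 4C score. The quadratic pairwise loop is fine for a sketch; a sorting-based implementation would be used at the scale of this study.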