36,905 result(s) for "Risk Assessment - statistics"
Comparison of risk scoring systems for patients presenting with upper gastrointestinal bleeding: international multicentre prospective study
Objective: To compare the predictive accuracy and clinical utility of five risk scoring systems in the assessment of patients with upper gastrointestinal bleeding. Design: International multicentre prospective study. Setting: Six large hospitals in Europe, North America, Asia, and Oceania. Participants: 3012 consecutive patients presenting over 12 months with upper gastrointestinal bleeding. Main outcome measures: Comparison of pre-endoscopy scores (admission Rockall, AIMS65, and Glasgow Blatchford) and post-endoscopy scores (full Rockall and PNED) for their ability to predict predefined clinical endpoints: a composite endpoint (transfusion, endoscopic treatment, interventional radiology, surgery, or 30 day mortality), endoscopic treatment, 30 day mortality, rebleeding, and length of hospital stay. Optimum score thresholds to identify low risk and high risk patients were determined. Results: The Glasgow Blatchford score was best (area under the receiver operating characteristic curve (AUROC) 0.86) at predicting intervention or death compared with the full Rockall score (0.70), PNED score (0.69), admission Rockall score (0.66), and AIMS65 score (0.68) (all P<0.001). A Glasgow Blatchford score of ≤1 was the optimum threshold to predict survival without intervention (sensitivity 98.6%, specificity 34.6%). The Glasgow Blatchford score was better at predicting endoscopic treatment (AUROC 0.75) than the AIMS65 (0.62) and admission Rockall scores (0.61) (both P<0.001). A Glasgow Blatchford score of ≥7 was the optimum threshold to predict endoscopic treatment (sensitivity 80%, specificity 57%). The PNED (AUROC 0.77) and AIMS65 scores (0.77) were best at predicting mortality, with both superior to the admission Rockall score (0.72) and Glasgow Blatchford score (0.64; P<0.001). Score thresholds of ≥4 for PNED, ≥2 for AIMS65, ≥4 for admission Rockall, and ≥5 for full Rockall were optimal at predicting death, with sensitivities of 65.8-78.6% and specificities of 65.0-65.3%. No score was helpful at predicting rebleeding or length of stay. Conclusions: The Glasgow Blatchford score has high accuracy at predicting the need for hospital based intervention or death. Scores of ≤1 appear to be the optimum threshold for directing patients to outpatient management. AUROCs of scores for the other endpoints are less than 0.80; therefore, their clinical utility for these outcomes seems limited. Trial registration: Current Controlled Trials ISRCTN16235737.
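As a concrete illustration of this kind of score evaluation, the sketch below computes an AUROC for a risk score against a binary composite endpoint and tabulates sensitivity and specificity for candidate low-risk cutoffs (the analogue of "Glasgow Blatchford ≤1"). The data, the 0-23 score range, and the risk gradient are all synthetic assumptions, not study data.

```python
# Minimal sketch: AUROC plus sensitivity/specificity at low-risk cutoffs.
# All inputs are synthetic; the score range and risk gradient are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
score = rng.integers(0, 24, size=3000)               # hypothetical integer risk score
event = rng.random(3000) < 0.02 + 0.5 * score / 23   # composite endpoint; risk rises with score

print("AUROC:", round(roc_auc_score(event, score), 3))

# Evaluate the low-risk rule "score <= t" for small t.
for t in range(0, 4):
    high_risk = score > t
    sens = high_risk[event].mean()        # share of events flagged as high risk
    spec = (~high_risk)[~event].mean()    # share of non-events cleared as low risk
    print(f"low-risk rule score <= {t}: sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```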
Prenatal Household Air Pollution Is Associated with Impaired Infant Lung Function with Sex-Specific Effects. Evidence from GRAPHS, a Cluster Randomized Cookstove Intervention Trial
Rationale: Approximately 2.8 billion people are exposed daily to household air pollution from polluting cookstoves. The effects of prenatal household air pollution on lung development are unknown. Objectives: To prospectively examine associations between prenatal household air pollution and infant lung function and pneumonia in rural Ghana. Methods: Prenatal household air pollution exposure was indexed by serial maternal carbon monoxide personal exposure measurements. Using linear regression, we examined associations between average prenatal carbon monoxide and infant lung function at age 30 days, first in the entire cohort (n = 384) and then stratified by sex. Quasi-Poisson generalized additive models explored associations between infant lung function and pneumonia. Measurements and Main Results: Multivariable linear regression models showed that average prenatal carbon monoxide exposure was associated with reduced time to peak tidal expiratory flow to expiratory time (β = −0.004; P = 0.01), increased respiratory rate (β = 0.28; P = 0.01), and increased minute ventilation (β = 7.21; P = 0.05), considered separately, per 1 ppm increase in average prenatal carbon monoxide. Sex-stratified analyses suggested that girls were particularly vulnerable (time to peak tidal expiratory flow to expiratory time: β = −0.003, P = 0.05; respiratory rate: β = 0.36, P = 0.01; minute ventilation: β = 11.25, P = 0.01; passive respiratory compliance normalized for body weight: β = 0.005, P = 0.01). Increased respiratory rate at age 30 days was associated with increased risk for physician-assessed pneumonia (relative risk, 1.02; 95% confidence interval, 1.00–1.04) and severe pneumonia (relative risk, 1.04; 95% confidence interval, 1.00–1.08) in the first year of life. Conclusions: Increased prenatal household air pollution exposure is associated with impaired infant lung function. Altered infant lung function may increase risk for pneumonia in the first year of life. These findings have implications for future respiratory health. Clinical trial registered with www.clinicaltrials.gov (NCT 01335490).
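To make the coefficient interpretation concrete ("β = 0.28 per 1 ppm" means the adjusted outcome changes by 0.28 units for each additional 1 ppm of exposure), here is a toy multivariable regression of the same shape. Every variable name and number is fabricated, not GRAPHS data.

```python
# Toy sketch of a multivariable linear exposure model; all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 384
co = rng.gamma(2.0, 1.0, n)                    # assumed average prenatal CO, ppm
sex = rng.integers(0, 2, n)                    # assumed covariate (1 = female)
rr = 45 + 0.28 * co + 1.5 * sex + rng.normal(0, 4, n)  # respiratory rate, breaths/min

X = sm.add_constant(np.column_stack([co, sex]))
fit = sm.OLS(rr, X).fit()
# The CO coefficient is the adjusted change in respiratory rate per 1 ppm CO.
print(f"beta for CO per 1 ppm: {fit.params[1]:.3f} (p = {fit.pvalues[1]:.3f})")
```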
Deep Learning to Assess Long-term Mortality From Chest Radiographs
Importance: Chest radiography is the most common diagnostic imaging test in medicine and may also provide information about longevity and prognosis. Objective: To develop and test a convolutional neural network (CNN) (named CXR-risk) to predict long-term mortality, including noncancer death, from chest radiographs. Design, Setting, and Participants: In this prognostic study, CXR-risk CNN development (n = 41 856) and testing (n = 10 464) used data from the screening radiography arm of the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) (n = 52 320), a community cohort of asymptomatic nonsmokers and smokers (aged 55-74 years) enrolled at 10 US sites from November 8, 1993, through July 2, 2001. External testing used data from the screening radiography arm of the National Lung Screening Trial (NLST) (n = 5493), a community cohort of heavy smokers (aged 55-74 years) enrolled at 21 US sites from August 2002 through April 2004. Data analysis was performed from January 1, 2018, to May 23, 2019. Exposures: Deep learning CXR-risk score (very low, low, moderate, high, and very high) based on CNN analysis of the enrollment radiograph. Main Outcomes and Measures: All-cause mortality. Prognostic value was assessed in the context of radiologists' diagnostic findings (eg, lung nodule) and standard risk factors (eg, age, sex, and diabetes) and for cause-specific mortality. Results: Among 10 464 PLCO participants (mean [SD] age, 62.4 [5.4] years; 5405 men [51.6%]; median follow-up, 12.2 years [interquartile range, 10.5-12.9 years]) and 5493 NLST test participants (mean [SD] age, 61.7 [5.0] years; 3037 men [55.3%]; median follow-up, 6.3 years [interquartile range, 6.0-6.7 years]), there was a graded association between CXR-risk score and mortality. The very high-risk group had mortality of 53.0% (PLCO) and 33.9% (NLST), which was higher compared with the very low-risk group (PLCO: unadjusted hazard ratio [HR], 18.3 [95% CI, 14.5-23.2]; NLST: unadjusted HR, 15.2 [95% CI, 9.2-25.3]; both P < .001). This association was robust to adjustment for radiologists' findings and risk factors (PLCO: adjusted HR [aHR], 4.8 [95% CI, 3.6-6.4]; NLST: aHR, 7.0 [95% CI, 4.0-12.1]; both P < .001). Comparable results were seen for lung cancer death (PLCO: aHR, 11.1 [95% CI, 4.4-27.8]; NLST: aHR, 8.4 [95% CI, 2.5-28.0]; both P ≤ .001) and for noncancer cardiovascular death (PLCO: aHR, 3.6 [95% CI, 2.1-6.2]; NLST: aHR, 47.8 [95% CI, 6.1-374.9]; both P < .001) and respiratory death (PLCO: aHR, 27.5 [95% CI, 7.7-97.8]; NLST: aHR, 31.9 [95% CI, 3.9-263.5]; both P ≤ .001). Conclusions and Relevance: In this study, the deep learning CXR-risk score stratified the risk of long-term mortality based on a single chest radiograph. Individuals at high risk of mortality may benefit from prevention, screening, and lifestyle interventions.
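The unadjusted hazard ratios reported above are the output of a standard time-to-event model. A hedged sketch of that step follows, using simulated survival data (not PLCO or NLST) and the lifelines library; the baseline hazard, true HR, and censoring horizon are assumptions.

```python
# Sketch: unadjusted hazard ratio between two risk groups via a Cox model.
# Survival data are simulated; none of the numbers come from the study.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
very_high = rng.integers(0, 2, n)                  # 1 = very-high-risk group (assumed split)
hazard = 0.01 * np.exp(np.log(18.0) * very_high)   # assumed true HR of 18 vs very-low
time = rng.exponential(1 / hazard)
observed = time < 12.0                             # administrative censoring at 12 years

df = pd.DataFrame({"T": np.minimum(time, 12.0),
                   "E": observed.astype(int),
                   "very_high": very_high})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")       # very_high enters as the covariate
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```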
Outcome risk model development for heterogeneity of treatment effect analyses: a comparison of non-parametric machine learning methods and semi-parametric statistical methods
Background: In randomized clinical trials, treatment effects may vary, and this possibility is referred to as heterogeneity of treatment effect (HTE). One way to quantify HTE is to partition participants into subgroups based on an individual's risk of experiencing an outcome, then measure the treatment effect by subgroup. Given the limited availability of externally validated outcome risk prediction models, internal models (created using the same dataset in which the heterogeneity of treatment effect analyses will also be performed) are commonly developed for subgroup identification. We aim to compare different methods for generating internally developed outcome risk prediction models for subject partitioning in HTE analysis. Methods: Three approaches were selected for generating subgroups for the 2,441 participants from the United States enrolled in the ASPirin in Reducing Events in the Elderly (ASPREE) randomized controlled trial. An extant proportional hazards-based outcome risk prediction model, developed on the overall ASPREE cohort of 19,114 participants, was identified and used to partition the United States participants by risk of experiencing a composite outcome of death, dementia, or persistent physical disability. Next, two supervised non-parametric machine learning outcome classifiers, decision trees and random forests, were used to develop multivariable risk prediction models and partition participants into subgroups with varied risks of experiencing the composite outcome. We then assessed how the partitioning from the proportional hazards model compared to those generated by the machine learning models in an HTE analysis of the 5-year absolute risk reduction (ARR) and hazard ratio for aspirin vs. placebo in each subgroup. Cochran's Q test was used to detect whether the ARR varied significantly by subgroup. Results: The proportional hazards model was used to generate 5 subgroups using the quintiles of the estimated risk scores; the decision tree model was used to generate 6 subgroups (6 automatically determined tree leaves); and the random forest model was used to generate 5 subgroups using the quintiles of the prediction probability as risk scores. Using the semi-parametric proportional hazards model, the ARR at 5 years was 15.1% (95% CI 4.0-26.3%) for participants with the highest 20% of predicted risk. Using the random forest model, the ARR at 5 years was 13.7% (95% CI 3.1-24.4%) for participants with the highest 20% of predicted risk. The highest outcome risk group in the decision tree model also exhibited a risk reduction, but the confidence interval was wider (5-year ARR = 17.0%, 95% CI -5.4 to 39.4%). Cochran's Q test indicated that the ARR varied significantly only across the subgroups created using the proportional hazards model. The hazard ratio for aspirin vs. placebo therapy did not vary significantly by subgroup in any of the models. The highest risk groups for the proportional hazards model and random forest model contained 230 participants each, while the highest risk group in the decision tree model contained 41 participants. Conclusions: The choice of technique for internally developed outcome risk subgroup models influences HTE analyses. The rationale for using a particular subgroup determination model in HTE analyses needs to be explicitly defined based on the desired level of explainability (with feature importance), uncertainty of prediction, chance of overfitting, and assumptions regarding the underlying data structure. Replication of these analyses using data from other mid-size clinical trials may help establish guidance for selecting an outcome risk prediction modelling technique for HTE analyses.
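To make the subgroup machinery concrete, the sketch below partitions simulated participants into predicted-risk quintiles, computes each quintile's ARR, and applies Cochran's Q for heterogeneity (Q = sum_i w_i (ARR_i - ARR_bar)^2 with weights w_i = 1/SE_i^2, referred to a chi-square distribution with k-1 degrees of freedom). All inputs are simulated, not ASPREE data.

```python
# Sketch: risk-quintile subgroups, per-subgroup ARR, and Cochran's Q.
# Data are simulated; the risk model and treatment effect are assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 2441
risk = rng.random(n)                                   # predicted risk from any internal model
treated = rng.integers(0, 2, n)
event = rng.random(n) < risk * (0.8 - 0.1 * treated)   # treatment lowers event probability

quintile = np.searchsorted(np.quantile(risk, [0.2, 0.4, 0.6, 0.8]), risk)
arr, var = [], []
for q in range(5):
    m = quintile == q
    ctrl, trt = m & (treated == 0), m & (treated == 1)
    p0, p1 = event[ctrl].mean(), event[trt].mean()
    arr.append(p0 - p1)                                # absolute risk reduction
    var.append(p0 * (1 - p0) / ctrl.sum() + p1 * (1 - p1) / trt.sum())

arr, w = np.array(arr), 1 / np.array(var)
arr_bar = np.sum(w * arr) / w.sum()                    # inverse-variance weighted mean
q_stat = np.sum(w * (arr - arr_bar) ** 2)              # Cochran's Q
print(f"Q = {q_stat:.2f}, p = {chi2.sf(q_stat, df=len(arr) - 1):.4f}")
```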
Impact of early in-hospital medication review by clinical pharmacists on health services utilization
Adverse drug events are a leading cause of emergency department visits and unplanned admissions, and they prolong hospital stays. Medication review interventions aim to identify adverse drug events and optimize medication use. Previous evaluations of in-hospital medication reviews have focused on interventions at discharge, with an unclear effect on health outcomes. We assessed the effect of early in-hospital pharmacist-led medication review on the health outcomes of high-risk patients. We used a quasi-randomized design to evaluate a quality improvement project in three hospitals in British Columbia, Canada. We incorporated a clinical decision rule into emergency department triage pathways, allowing nurses to identify patients at high risk for adverse drug events. After randomly selecting the first eligible patient for participation, clinical pharmacists systematically allocated subsequent high-risk patients to medication review or usual care. Medication review included obtaining a best possible medication history and reviewing the patient's medications for appropriateness and adverse drug events. The primary outcome was the number of days spent in hospital over 30 days, ascertained using administrative data. We used median regression and inverse propensity score-weighted logistic regression modeling to determine the effect of pharmacist-led medication review on downstream health services use. Of 10,807 high-risk patients, 6,416 received early pharmacist-led medication review and 4,391 received usual care. Their baseline characteristics were balanced. The median number of hospital days was reduced by 0.48 days (95% confidence interval [CI] = 0.00 to 0.96; p = 0.058) in the medication review group compared to usual care, representing an 8% reduction in the median length of stay. Among patients under 80 years of age, the median number of hospital days was reduced by 0.60 days (95% CI = 0.06 to 1.17; p = 0.03), representing an 11% reduction in the median length of stay. There was no significant effect on emergency department revisits, admissions, readmissions, or mortality. We were limited by our inability to conduct a randomized controlled trial, but we used quasi-random patient allocation methods and propensity score modeling to ensure balance between treatment groups, and administrative data to ensure blinded outcome ascertainment. We were unable to account for alternate-level-of-care days and therefore may have underestimated the treatment effect in frail elderly patients, who are likely to remain in hospital while awaiting long-term care. Early pharmacist-led medication review was associated with reduced hospital-bed utilization compared to usual care among high-risk patients under 80 years of age, but not among those who were older. The results of our evaluation suggest that medication review by pharmacists in the emergency department may impact the length of hospital stay in select patient populations.
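A rough sketch of the inverse-propensity-weighting idea used to balance the quasi-randomized groups: fit a propensity model for receiving the intervention, weight each patient by the inverse probability of the treatment actually received, and compare weighted outcomes. The covariates, prevalences, and effect size below are illustrative assumptions, not study data, and the weighted mean difference here is a simplification of the study's weighted regression approach.

```python
# Sketch: inverse propensity weighting (IPW) for a non-randomized comparison.
# All variables and numbers are simulated assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10807
age = rng.normal(72, 10, n)
n_meds = rng.poisson(6, n)
X = np.column_stack([age, n_meds])
# Assume older patients are slightly more likely to get a medication review.
review = (rng.random(n) < 1 / (1 + np.exp(-0.02 * (age - 72)))).astype(int)
hosp_days = rng.poisson(5, n).astype(float) - 0.5 * review   # assumed small benefit

ps = LogisticRegression().fit(X, review).predict_proba(X)[:, 1]
w = np.where(review == 1, 1 / ps, 1 / (1 - ps))   # IPW weights (ATE-style)

diff = (np.average(hosp_days[review == 1], weights=w[review == 1])
        - np.average(hosp_days[review == 0], weights=w[review == 0]))
print(f"IPW-weighted difference in hospital days: {diff:.2f}")
```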
Development and Validation of a Multivariable Lung Cancer Risk Prediction Model That Includes Low-Dose Computed Tomography Screening Results
Importance: Low-dose computed tomography lung cancer screening is most effective when applied to high-risk individuals. Objective: To develop and validate a risk prediction model that incorporates low-dose computed tomography screening results. Design, Setting, and Participants: A logistic regression risk model was developed in National Lung Screening Trial (NLST) Lung Screening Study (LSS) data and was validated in NLST American College of Radiology Imaging Network (ACRIN) data. The NLST was a randomized clinical trial that recruited participants between August 2002 and April 2004, with follow-up to December 31, 2009. This secondary analysis of data from the NLST took place between August 10, 2013, and November 1, 2018. Included were LSS (n = 14 576) and ACRIN (n = 7653) participants who had 3 screens, adequate follow-up, and complete predictor information. Main Outcomes and Measures: Incident lung cancers occurring 1 to 4 years after the third screen (202 LSS and 96 ACRIN). Predictors included scores from the validated PLCOm2012 risk model and Lung CT Screening Reporting & Data System (Lung-RADS) screening results. Results: Overall, the mean (SD) age of 22 229 participants was 61.3 (5.0) years, 59.3% were male, and 90.9% were of non-Hispanic white race/ethnicity. During follow-up, 298 lung cancers were diagnosed in 22 229 individuals (1.3%). Eight result combinations were pooled into 4 groups based on similar associations. Adjusted for PLCOm2012 risks, compared with participants with 3 negative screens, participants with 1 positive screen and last negative had an odds ratio (OR) of 1.93 (95% CI, 1.34-2.76), and participants with 2 positive screens with last negative or 2 negative screens with last positive had an OR of 2.66 (95% CI, 1.60-4.43); when 2 or more screens were positive with last positive, the OR was 8.97 (95% CI, 5.76-13.97). In ACRIN validation data, the model that included PLCOm2012 scores and screening results (PLCO2012results) demonstrated significantly greater discrimination (area under the curve, 0.761; 95% CI, 0.716-0.799) than when screening results were excluded (PLCOm2012) (area under the curve, 0.687; 95% CI, 0.645-0.728) (P < .001). In ACRIN validation data, PLCO2012results demonstrated good calibration. Individuals who had initial negative scans but elevated PLCOm2012 six-year risks of at least 2.6% did not have risks decline below the 1.5% screening eligibility criterion when subsequent screens were negative. Conclusions and Relevance: According to this analysis, some individuals with elevated risk scores who have negative initial screens remain at elevated risks, warranting annual screening. Positive screens seem to increase baseline risk scores and may identify high-risk individuals for continued screening and enrollment into clinical trials. ClinicalTrials.gov Identifier: NCT00047385.
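Schematically, the extended model adds pooled categorical screening-result terms to the baseline risk score on the logit scale, so exponentiated coefficients are odds ratios versus the all-negative reference group, and discrimination can be compared with and without the screening terms. The sketch below uses simulated inputs; the group effects and every number are illustrative assumptions.

```python
# Sketch: baseline risk score + pooled categorical screening results in a logit
# model, then AUC with and without the screening terms. Simulated data only.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 7653
base_logit = rng.normal(-4.5, 1.0, n)        # logit of a baseline risk score (assumed)
screen_grp = rng.integers(0, 4, n)           # 4 pooled screening-result groups (assumed)
true_logit = base_logit + np.log([1.0, 1.9, 2.7, 9.0])[screen_grp]
cancer = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

dummies = np.eye(4)[screen_grp][:, 1:]       # group 0 (all negative screens) = reference
X = sm.add_constant(np.column_stack([base_logit, dummies]))
fit = sm.Logit(cancer, X).fit(disp=0)
print("ORs vs all-negative screens:", np.round(np.exp(fit.params[2:]), 2))

print("AUC, baseline score only:  ", round(roc_auc_score(cancer, base_logit), 3))
print("AUC, with screening results:", round(roc_auc_score(cancer, fit.predict(X)), 3))
```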
Long-Term Recurrence of Small Papillary Thyroid Cancer and Its Risk Factors in a Korean Multicenter Study
Context: Small papillary thyroid cancer (PTC) generally has an excellent prognosis. However, long-term recurrence is not uncommon and sometimes leads to morbidity or mortality. Objective: To identify high-risk factors for long-term recurrence in patients with small PTC by stratifying their pathologic characteristics. Design, Setting, and Patients: We conducted a nationwide, retrospective, multicenter study of 3282 patients with PTC sized ≤2 cm from 9 high-volume hospitals in Korea. Main Outcome Measures: The maximally selected χ² method was used to find the best cutoff points of tumor size, the number of metastatic lymph nodes (LNs), and the ratio of metastatic to examined LNs (LNR) to predict recurrence. Kaplan-Meier analysis and the Cox proportional hazards regression model were used to analyze recurrence and risk factors. Results: The optimal tumor size cutoff was 1.8 cm (10-year recurrence rates for tumors sized 0.1 to 1.7 cm vs 1.8 to 2.0 cm: 7.7% vs 17.2%, respectively). Metastatic LNs ≤1 and ≥2 provided optimal estimates of recurrence (10-year recurrence rates: 4.0% vs 16.8%, respectively). An LNR of 0.19 was the optimal cutoff point for predicting the risk of recurrence (10-year recurrence rates for LNRs of 0 to 0.18 vs 0.19 to 1: 2.7% vs 16.2%, respectively). LN metastasis, lobectomy, tumor size ≥1.8 cm, and bilateral tumors were independent risk factors for recurrence. Conclusions: Long-term recurrence was increased in patients who underwent lobectomy or who had a tumor sized ≥1.8 cm, 2 or more metastatic LNs, or bilateral tumors. For patients with these high-risk features, total thyroidectomy could be considered to avoid reoperation. In summary, lobectomy, tumor size ≥1.8 cm, 2 or more metastatic lymph nodes, and bilateral tumors were independent risk factors for recurrence in patients with papillary thyroid cancer sized ≤2.0 cm.
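The "maximally selected χ²" idea is a scan over candidate cutoffs, keeping the one that maximizes the chi-square statistic for the outcome; a minimal sketch on simulated data follows. Note that a real analysis must adjust p-values for this data-driven selection, which this sketch does not do.

```python
# Sketch: maximally selected chi-square cutoff search for tumor size.
# Data are simulated; the true cutoff and recurrence rates are assumptions.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(6)
n = 3282
size = rng.uniform(0.1, 2.0, n)                          # tumor size, cm
recur = rng.random(n) < np.where(size >= 1.8, 0.17, 0.08)  # assumed true cutoff at 1.8 cm

best_stat, best_cut = -np.inf, None
for c in np.arange(0.5, 2.0, 0.1):                       # candidate cutoffs
    big = size >= c
    table = [[(recur & big).sum(), (~recur & big).sum()],
             [(recur & ~big).sum(), (~recur & ~big).sum()]]
    stat = chi2_contingency(table)[0]                    # Pearson chi-square
    if stat > best_stat:
        best_stat, best_cut = stat, c

print(f"best cutoff ~ {best_cut:.1f} cm (chi-square = {best_stat:.1f})")
```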
Patient Preference and Risk Assessment in Opioid Prescribing Disparities
Although racial disparities in acute pain control are well established, the role of patient analgesic preference and the factors associated with these disparities remain unclear. The objective was to characterize racial disparities in opioid prescribing for acute pain after accounting for patient preference and to test the hypothesis that racial disparities may be mitigated by giving clinicians additional information about their patients' treatment preferences and risk of opioid misuse. This study is a secondary analysis of data collected from Life STORRIED (Life Stories for Opioid Risk Reduction in the ED), a multicenter randomized clinical trial conducted between June 2017 and August 2019 in the emergency departments (EDs) of 4 academic medical centers. Participants included 1302 patients aged 18 to 70 years who presented to the ED with ureteral colic or musculoskeletal back and/or neck pain. The treatment arm was randomized to receive a patient-facing intervention (not examined in this secondary analysis) and a clinician-facing intervention that consisted of a form containing information about each patient's analgesic treatment preference and risk of opioid misuse. The primary outcome was concordance between patient preference for opioid-containing treatment (assessed before ED discharge) and receipt of an opioid prescription at ED discharge. Among 1302 participants in the Life STORRIED clinical trial, 1012 patients had complete demographic and treatment preference data available and were included in this secondary analysis. Of those, 563 patients (55.6%) self-identified as female, with a mean (SD) age of 40.8 (14.1) years. A total of 455 patients (45.0%) identified as White, 384 patients (37.9%) identified as Black, and 173 patients (17.1%) identified as other races. After controlling for demographic characteristics and clinical features, Black patients had lower odds than White patients of receiving a prescription for opioid medication at ED discharge (odds ratio [OR], 0.42; 95% CI, 0.27-0.65). When patients who did and did not prefer opioids were considered separately, Black patients continued to have lower odds of being discharged with a prescription for opioids compared with White patients (among those who preferred opioids: OR, 0.43 [95% CI, 0.24-0.77]; among those who did not prefer opioids: OR, 0.45 [95% CI, 0.23-0.89]). These disparities were not eliminated in the treatment arm, in which clinicians were given additional data about their patients' treatment preferences and risk of opioid misuse. In this secondary analysis of data from a randomized clinical trial, Black patients received different acute pain management than White patients after patient preference was accounted for. These disparities remained after clinicians were given additional patient-level data, suggesting that a lack of patient information may not be associated with opioid prescribing disparities. ClinicalTrials.gov Identifier: NCT03134092.
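The preference-stratified comparison amounts to fitting the adjusted model separately in patients who did and did not prefer opioids and reading off the race odds ratio within each stratum. The sketch below shows that structure on simulated data; every variable, coefficient, and prevalence is an illustrative assumption chosen only to mimic the reported magnitudes.

```python
# Sketch: odds ratios estimated within preference strata. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1012
black = rng.integers(0, 2, n)
prefers_opioid = rng.integers(0, 2, n)
age = rng.normal(41, 14, n)
# Assumed data-generating model: log-odds of receiving an opioid prescription.
logit = -0.5 - 0.85 * black + 0.8 * prefers_opioid + 0.01 * (age - 41)
rx_opioid = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

for pref in (1, 0):
    m = prefers_opioid == pref
    X = sm.add_constant(np.column_stack([black[m], age[m]]))
    fit = sm.Logit(rx_opioid[m], X).fit(disp=0)
    print(f"preferred opioids = {pref}: OR(Black vs White) = {np.exp(fit.params[1]):.2f}")
```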
Salivary concentrations of Streptococcus mutans and Lactobacilli during an orthodontic treatment. An observational study comparing fixed and removable orthodontic appliances
Aim: This study aimed to investigate salivary concentrations of Streptococcus mutans (S. mutans) and some Lactobacilli, and the plaque index (PI), in patients wearing fixed versus removable orthodontic appliances. Methods: A sample of 90 orthodontic patients (56 males and 34 females) was included in the study: 30 subjects (aged 21.5±1.5 years) were treated with removable clear aligners (CA), 30 (aged 23.3±1.6 years) with a fixed multibracket appliance (MB), and 30 (aged 18.2±1.5 years) with a removable positioner (RP). Salivary concentrations of S. mutans and Lactobacilli and the PI were evaluated prior to the start of orthodontic treatment and after 3 and 6 months. Results: After 6 months, 40% of MB patients (12 of 30 subjects) showed a concentration of S. mutans associated with a high risk of developing tooth decay (CFU/ml > 10⁵), unlike participants wearing removable appliances (odds ratio = 5.05; 95% C.I. = 1.72-14.78; chi-square = 9.64; p = 0.0019). The same trend was observed for the concentration of Lactobacilli (odds ratio = 4.33; 95% C.I. = 1.53-12.3; chi-square = 8.229; p = 0.004). In addition, over the duration of the study, CA patients maintained a PI of 0, while MB patients showed a statistically significant increasing trend in PI over time, which became clinically and statistically relevant after 6 months relative to CA and RP patients. Conclusions: After 6 months, only about 10% of CA patients and 13.3% of RP patients had reached a level of microbial colonization associated with a high risk of caries, whereas about 40% of MB patients had (20% after 3 months), indicating that MB patients require additional strategies for plaque control and for limiting microbial colonization.
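The 2×2 comparison behind the first odds ratio can be checked by hand. The MB cell (12 of 30) is stated; the removable-appliance counts are inferred from the reported percentages (about 10% of 30 CA and 13.3% of 30 RP patients, i.e. 3 + 4 = 7 of 60), so treat that cell as an assumption. With those counts the reported OR, confidence interval, and chi-square are reproduced:

```python
# Worked check of the reported 2x2 comparison (MB vs removable appliances).
# The 7/60 removable-arm count is inferred from reported percentages (assumption).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[12, 18],    # MB: high-risk colonization yes / no
                  [7, 53]])    # CA + RP combined (inferred counts)

odds_ratio = (12 * 53) / (18 * 7)                 # cross-product ratio, ~5.05
se_log_or = np.sqrt(1/12 + 1/18 + 1/7 + 1/53)     # Woolf SE of log(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
chi2_stat, p, _, _ = chi2_contingency(table, correction=False)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, "
      f"chi-square = {chi2_stat:.2f}, p = {p:.4f}")
# Matches the abstract: OR 5.05, 95% CI 1.72-14.78, chi-square 9.64, p 0.0019.
```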
Risk perceptions and their relation to risk behavior
Because risk perceptions can affect protective behavior and protective behavior can affect risk perceptions, the relations between these 2 constructs are complex, and incorrect tests often lead to invalid conclusions. The aim was to discuss and carry out appropriate tests of 3 easily confused hypotheses: (a) the behavior motivation hypothesis (perceptions of personal risk cause people to take protective action), (b) the risk reappraisal hypothesis (when people take actions thought to be effective, they lower their risk perceptions), and (c) the accuracy hypothesis (risk perceptions accurately reflect risk behavior). The design was a longitudinal study with an initial interview just after the Lyme disease vaccine was made publicly available and a follow-up interview 18 months later, in a random sample of adult homeowners (N = 745) in 3 northeastern U.S. counties with high Lyme disease incidence. Lyme disease vaccination behavior and risk perception were assessed. All 3 hypotheses were supported. Participants with higher initial risk perceptions were much more likely than those with lower risk perceptions to get vaccinated against Lyme disease (OR = 5.81, 95% CI 2.63-12.82, p < .001). Being vaccinated led to a reduction in risk perceptions, χ²(1, N = 745) = 30.90, p < .001, and people who were vaccinated correctly believed that their risk of future infection was lower than that of people who were not vaccinated (OR = 0.44, 95% CI 0.21-0.91, p < .05). The behavior motivation hypothesis was supported in this longitudinal study, but the opposite conclusion (i.e., that higher risk perception led to less protective behavior) would have been drawn from an incorrect test based only on cross-sectional data. Health researchers should take care in formulating and testing risk-perception-behavior hypotheses.
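The behavior motivation test reduces to a logistic regression of later vaccination on baseline risk perception; because the predictor is measured before the behavior, it avoids the cross-sectional trap described above. A compact sketch on simulated data follows; the variable names and rates are assumptions.

```python
# Sketch: longitudinal test of the behavior motivation hypothesis.
# Baseline perception predicts later vaccination. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 745
risk_t0 = rng.integers(0, 2, n).astype(float)   # 1 = high baseline perceived risk
vax = (rng.random(n) < np.where(risk_t0 == 1, 0.30, 0.07)).astype(float)

fit = sm.Logit(vax, sm.add_constant(risk_t0)).fit(disp=0)
or_, ci = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```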