11 results for "Sutherland, Tori"
Association of the 2016 US Centers for Disease Control and Prevention Opioid Prescribing Guideline With Changes in Opioid Dispensing After Surgery
Importance While the 2016 US Centers for Disease Control and Prevention (CDC) guideline for prescribing opioids for chronic pain was not intended to address postoperative pain management, observers have noted the potential for the guideline to have affected postoperative opioid prescribing. Objective To assess changes in postoperative opioid dispensing after vs before the CDC guideline release in March 2016. Design, Setting, and Participants This cross-sectional study included 361 556 opioid-naive patients who received 1 of 8 common surgical procedures between March 16, 2014, and March 15, 2018. Data were retrieved from a private insurance database, and a retrospective interrupted time series analysis was conducted. Data analysis was conducted from March 2014 to April 2018. Main Outcomes and Measures Outcomes were measured before and after release of the 2016 CDC guideline. The primary outcome was the total amount of opioid dispensed in the first prescription filled within 7 days after surgery, in morphine milligram equivalents (MMEs); secondary outcomes included the total amount of opioids prescribed and the incidence of any opioid refill within 30 days after surgery. To characterize absolute opioid dispensing levels, the amount dispensed in initial prescriptions was compared with available procedure-specific recommendations. Results The sample included 361 556 opioid-naive patients undergoing 8 general and orthopedic surgical procedures; 164 009 (45.4%) were male, and the median (interquartile range) age was 58 (45 to 69) years. The total amount of opioids dispensed in the first prescription after surgery decreased in the 2 years following the CDC guideline release, compared with an increasing trend in the 2 years prior (prerelease trend: 1.43 MME/month; 95% CI, 0.62 to 2.24 MME/month; P = .001; postrelease trend: -2.18 MME/month; 95% CI, -3.01 to -1.35 MME/month; P < .001; trend change: -3.61 MME/month; 95% CI, -4.87 to -2.35 MME/month; P < .001). Changes in initial dispensing trends were greatest for patients undergoing hip or knee replacement (-8.64 MME/month; 95% CI, -11.68 to -5.60 MME/month; P < .001). Minimal changes were observed in refill rates over time (net change: 0.14% per month; 95% CI, 0.06% to 0.23% per month; P = .001). Absolute amounts prescribed remained high throughout the period, with nearly half of patients (47.7%; 95% CI, 47.4%-47.9%) treated in the postguideline period receiving at least twice the initial opioid dose anticipated to treat postoperative pain based on available procedure-specific recommendations. Conclusions and Relevance In this study, opioid dispensing after surgery decreased substantially after the 2016 CDC guideline release, compared with an increasing trend during the 2 years prior. Absolute amounts prescribed for surgery remained high during the study period, supporting the need for further efforts to improve postoperative pain management.
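The trend-change estimates above come from a segmented (interrupted time series) regression. As a rough illustration of that technique, here is a minimal sketch in Python using statsmodels on synthetic monthly data; the variable names, covariates, and numbers are hypothetical and do not reproduce the study's actual model.

```python
# Minimal interrupted time series (segmented regression) sketch.
# Synthetic data only; the study's real model and covariates differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
month = np.arange(48)                    # 24 months pre, 24 months post
post = (month >= 24).astype(int)         # guideline release at month 24
since = post * (month - 24)              # months elapsed since release
# Simulate a rising pre-release trend that reverses after release
mme = 220 + 1.4 * month - 3.6 * since + rng.normal(0, 5, size=48)

df = pd.DataFrame({"month": month, "post": post, "since": since, "mme": mme})
fit = smf.ols("mme ~ month + post + since", data=df).fit()
# 'month' estimates the pre-release slope; 'since' estimates the
# change in slope after the release (the "trend change")
print(fit.params)
```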
Widespread antimicrobial resistance among bacterial infections in a Rwandan referral hospital
Resistance among bacterial infections is increasingly well-documented in high-income countries; however, relatively little is known about bacterial antimicrobial resistance in low-income countries, where the burden of infections is high. We prospectively screened all adult inpatients at a referral hospital in Rwanda for suspected infection for seven months. Blood, urine, wound and sputum samples were cultured and tested for antibiotic susceptibility. We examined factors associated with resistance and compared hospital outcomes for participants with and without resistant isolates. We screened 19,178 patient-days, and enrolled 647 unique participants with suspected infection. We obtained 942 culture specimens, of which 357 were culture-positive. Of these positive specimens, 155 (43.4%) were wound, 83 (23.2%) urine, 64 (17.9%) blood, and 55 (15.4%) sputum. Gram-negative bacteria comprised 323 (88.7%) of all isolates. Of 241 Gram-negative isolates tested for ceftriaxone susceptibility, 183 (75.9%) were resistant. Of 92 Gram-negative isolates tested for the extended-spectrum beta-lactamase (ESBL) phenotype, 66 (71.7%) were ESBL-positive. Transfer from another facility, recent surgery or antibiotic exposure, and hospital-acquired infection were each associated with resistance. Mortality was 19.6% for all enrolled participants. This is the first published prospective hospital-wide antibiogram of multiple specimen types from East Africa with ESBL testing. Our study suggests that low-resource settings with limited and inconsistent access to the full range of antibiotic classes may bear the highest burden of resistant infections. Hospital-acquired infections and recent antibiotic exposure are associated with a high proportion of resistant infections. Efforts to slow the development of resistance and supply effective antibiotics are urgently needed.
Predicting mortality in adults with suspected infection in a Rwandan hospital: an evaluation of the adapted MEWS, qSOFA and UVA scores
Rationale Mortality prediction scores are increasingly being evaluated in low- and middle-income countries (LMICs) for research comparisons, quality improvement and clinical decision-making. The modified early warning score (MEWS), quick Sequential (Sepsis-Related) Organ Failure Assessment (qSOFA), and Universal Vital Assessment (UVA) score use variables that are feasible to obtain, and have demonstrated potential to predict mortality in LMIC cohorts. Objective To determine the predictive capacity of adapted MEWS, qSOFA and UVA in a Rwandan hospital. Design, setting, participants and outcome measures We prospectively collected data on all adult patients admitted to a tertiary hospital in Rwanda with suspected infection over 7 months. We calculated an adapted MEWS, qSOFA and UVA score for each participant. The predictive capacity of each score was assessed, including sensitivity, specificity, positive and negative predictive value, OR, area under the receiver operating curve (AUROC) and performance by underlying risk quartile. Results We screened 19 178 patient days, and enrolled 647 unique patients. Median age was 35 years, and in-hospital mortality was 18.1%. The proportion of data missing for each variable ranged from 0% to 11.7%. The sensitivities and specificities of the scores were: adapted MEWS >4, 50.4% and 74.9%, respectively; qSOFA >2, 24.8% and 90.4%, respectively; and UVA >4, 28.2% and 91.1%, respectively. The scores as continuous variables demonstrated the following AUROCs: adapted MEWS 0.69 (95% CI 0.64 to 0.74), qSOFA 0.65 (95% CI 0.60 to 0.70), and UVA 0.71 (95% CI 0.66 to 0.76); there was no statistically significant difference between the discriminative capacities of the scores. Conclusion Three scores demonstrated a modest ability to predict mortality in a prospective study of inpatients with suspected infection at a Rwandan tertiary hospital. Careful consideration must be given to their adequacy before using them in research comparisons, quality improvement or clinical decision-making.
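For readers less familiar with the metrics reported above, the following is a small illustrative computation of sensitivity, specificity, and AUROC for a severity score dichotomized at a threshold; the scores and outcomes below are invented, not the study's data.

```python
# Illustrative only: evaluating a severity score against in-hospital
# mortality. The score values and outcomes here are invented.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

score = np.array([1, 5, 2, 3, 0, 4, 6, 7, 2, 5])   # e.g. an adapted MEWS
died  = np.array([0, 1, 0, 1, 0, 0, 0, 1, 0, 1])

pred = (score > 4).astype(int)          # dichotomize, as in adapted MEWS > 4
tn, fp, fn, tp = confusion_matrix(died, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auroc = roc_auc_score(died, score)      # AUROC keeps the score continuous
print(sensitivity, specificity, auroc)
```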
Preoperative vs Postoperative Opioid Prescriptions and Prolonged Opioid Refills Among US Youths
Importance High-risk practices, including dispensing an opioid prescription before surgery when not recommended, remain poorly characterized among US youths and may contribute to new persistent opioid use. Objective To characterize changes in preoperative, postoperative, and refill opioid prescriptions up to 180 days after surgery. Design, Setting, and Participants This retrospective cohort study used national claims data to determine opioid prescribing practices in a cohort of opioid-naive youths aged 11 to 20 years undergoing 22 inpatient and outpatient surgical procedures between 2015 and 2020. Statistical analysis was performed from June 2023 to April 2024. Main Outcomes and Measures The primary outcome was the percentage of initial opioid prescriptions filled up to 14 days before vs 7 days after a procedure. Secondary outcomes included the likelihood of a refill up to 180 days after surgery, including refills at 91 to 180 days as a proxy for new persistent opioid use, and the opioid quantity dispensed in the initial and refill prescriptions in morphine milligram equivalents (MME). Exposures included patient and prescriber characteristics. Multivariable logistic regression models were used to estimate the association between prescription timing and prolonged refills. Results Among 100 026 opioid-naive youths (median [IQR] age, 16.0 [14.0-18.0] years) undergoing a surgical procedure, 46 951 (46.9%) filled an initial prescription, of which 7587 (16.2%) were dispensed 1 to 14 days before surgery. The mean quantity dispensed was 227 (95% CI, 225-229) MME; 6467 youths (13.8%) filled a second prescription (mean MME, 239 [95% CI, 231-246]) up to 30 days after surgery, and 1216 (3.0%) refilled a prescription 91 to 180 days after surgery. Preoperative prescriptions, increasing age, and procedures not typically associated with severe pain were most strongly associated with new persistent opioid use. Conclusions and Relevance In this retrospective study of youths undergoing surgical procedures, many of which are not typically painful enough to require opioids, opioid dispensing declined, but approximately 1 in 6 prescriptions was filled before surgery, and 1 in 33 adolescents filled a prescription 91 to 180 days after surgery, consistent with new persistent opioid use. These findings should be addressed by policymakers and communicated by professional societies to clinicians who prescribe opioids.
Predicting pediatric emergence delirium using data-driven machine learning applied to electronic health record dataset at a quaternary care pediatric hospital
Objectives Pediatric emergence delirium is an undesirable outcome that is understudied. Development of a predictive model is an initial step toward reducing its occurrence. This study aimed to apply machine learning (ML) methods to a large clinical dataset to develop a predictive model for pediatric emergence delirium. Materials and Methods We performed a single-center retrospective cohort study using electronic health record data from February 2015 to December 2019. We built and evaluated 4 commonly used ML models for predicting emergence delirium: least absolute shrinkage and selection operator (LASSO), ridge regression, random forest, and extreme gradient boosting. The primary outcome was the occurrence of emergence delirium, defined as a Watcha score of 3 or 4 recorded at any time during recovery. Results The dataset included 54 776 encounters across 43 830 patients. The 4 ML models performed similarly, with areas under the receiver operating characteristic curve (AUROC) ranging from 0.74 to 0.75. Notable variables associated with increased risk included adenoidectomy with or without tonsillectomy, decreasing age, midazolam premedication, and ondansetron administration, while intravenous induction and ketorolac were associated with reduced risk of emergence delirium. Conclusions Four different ML models demonstrated similar performance in predicting postoperative emergence delirium using a large pediatric dataset. The models' modest predictive performance draws attention to our incomplete understanding of this phenomenon based on the studied variables. The results from our modeling could serve as a first step in designing a predictive clinical decision support system, but further optimization and validation are needed. Clinical trial number and registry URL Not applicable. Lay Summary Pediatric emergence delirium is a transient phenomenon in which children, as they wake up (emerge) from anesthesia, may have disturbances in awareness of and attention to their environment, disorientation, hypersensitivity to stimuli, and hyperactive motor behaviors. It is an undesirable outcome whose accurate prediction could allow clinicians to administer targeted preventive therapy. This study applied machine learning methods to a large clinical dataset to develop a predictive model for pediatric emergence delirium. The dataset included 54 776 encounters across 43 830 patients. The models tested had moderate predictive performance, drawing attention to our incomplete understanding of this phenomenon. Several variables were associated with an increased risk of emergence delirium, while others were associated with a reduced risk. The results from our modeling could serve as a first step in designing a predictive clinical decision support system, but further optimization and validation are needed.
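As a sketch of the model families the study compared, the snippet below fits L1- and L2-penalized logistic regression (the usual classification implementations of LASSO and ridge), a random forest, and a gradient-boosted ensemble standing in for extreme gradient boosting, on synthetic data; all features and settings are placeholders, not the study's pipeline.

```python
# Sketch of the four model families compared in the study, on synthetic
# data. Hyperparameters, features, and preprocessing are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic outcome, loosely mimicking a rare complication
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "lasso": LogisticRegression(penalty="l1", solver="liblinear"),
    "ridge": LogisticRegression(penalty="l2", max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # stand-in for XGBoost; the study used extreme gradient boosting
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auroc:.2f}")
```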
Use of the Non-Pneumatic Anti-Shock Garment (NASG) for Life-Threatening Obstetric Hemorrhage: A Cost-Effectiveness Analysis in Egypt and Nigeria
Objective To assess the cost-effectiveness of a non-pneumatic anti-shock garment (NASG) for obstetric hemorrhage in tertiary hospitals in Egypt and Nigeria. Methods We combined published data from pre-intervention/NASG-intervention clinical trials with costs from study sites. For each country, we used observed proportions of initial shock level (mild: mean arterial pressure [MAP] >60 mmHg; severe: MAP ≤60 mmHg) to define a standard population of 1,000 women presenting in shock. We examined three intervention scenarios: no women in shock receive the NASG, only women in severe shock receive the NASG, and all women in shock receive the NASG. Clinical data included frequencies of adverse health outcomes (mortality, severe morbidity, severe anemia) and of interventions to manage bleeding (uterotonics, blood transfusions, hysterectomies). Costs (in 2010 international dollars) included the NASG, training, and clinical interventions. We compared costs and disability-adjusted life years (DALYs) across the intervention scenarios. Results For 1,000 women presenting in shock, providing the NASG to those in severe shock decreases mortality and morbidity, averting 357 DALYs in Egypt and 2,063 DALYs in Nigeria. Differences in use of interventions result in net savings of $9,489 in Egypt (primarily due to reduced transfusions) and net costs of $6,460 in Nigeria, with a cost per DALY averted of $3.13. Providing the NASG to women in mild shock has smaller and more uncertain effects due to few clinical events in this data set. Conclusion Using the NASG for women in severe shock resulted in markedly improved health outcomes (2-2.9 DALYs averted per woman, primarily due to reduced mortality), with net savings or extremely low cost per DALY averted. This suggests that in resource-limited settings, the NASG is a very cost-effective intervention for women in severe hypovolemic shock. The effects of the NASG for mild shock are less certain.
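The reported $3.13 per DALY averted in Nigeria follows directly from the abstract's own figures (net cost divided by DALYs averted); a one-line check:

```python
# Cost per DALY averted = net cost / DALYs averted (Nigeria, from the abstract)
print(6460 / 2063)  # ~3.13, i.e. $3.13 per DALY averted
```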
The “Just Right” Amount of Oxygen. Improving Oxygen Use in a Rwandan Emergency Department
Despite oxygen's classification as an essential medication by the World Health Organization, it is inconsistently available in many resource-constrained settings. Hypoxemia is associated with increased mortality, and mounting evidence suggests that hyperoxia may also be associated with adverse outcomes. To determine if overuse of oxygen for some patients in a Rwandan tertiary care hospital emergency department might coexist with oxygen shortages and underuse of oxygen for other patients, and whether an educational intervention coupled with provision of pulse oximeters could improve the distribution of limited oxygen resources. We screened all patients in the adult emergency department (ED) of the University Teaching Hospital of Kigali for hypoxemia and receipt of oxygen therapy for 5 weeks. After completing baseline data collection, we provided pulse oximeters and conducted a didactic training with pre- and posttests on oxygen titration, with a chosen target oxygen saturation (SpO2) of 90% to 95%. Four and 12 weeks after the intervention, we evaluated all patients in the ED again for SpO2 and receipt of oxygen therapy, for 4 weeks in each period. We also recorded ED oxygen use and availability of reserve oxygen for the hospital during the three study periods. During all data collection periods, 214 of 1,765 (12.1%) unique patients screened were hypoxemic. The proportion of patient-days with appropriately titrated oxygen therapy (SpO2, 90-95%) increased from 18.7% at baseline to 38.5% and 42.0% at 4 and 12 weeks postintervention (P < 0.001). On a multiple-choice examination testing knowledge of appropriate oxygen titration, clinicians' scores improved from an average of 60% (interquartile range [IQR], 40-80%) correct to 80% (IQR, 60-80%) correct immediately after the educational intervention (P < 0.001). Oxygen use in the ED decreased from a median of 32.0 (IQR, 28.0-35.0) tanks per day to 25.5 (IQR, 24.0-29.0) and 16.0 (IQR, 12.5-21.0) tanks per day at Weeks 4 and 12, respectively (P < 0.001), and the median daily number of tanks in reserve for the hospital appeared to increase, although this did not reach statistical significance (30.0 [IQR, 9.0-46.0], 86.5 [IQR, 74.0-92.0], and 75.5 [IQR, 8.5-88.5], respectively; P = 0.07). Among patients in a Rwandan adult ED, 12.1% of patients were hypoxemic, and 81.3% of patient-days were either under- or overtreated with oxygen during baseline data collection on the basis of our defined target of SpO2 90% to 95%. Follow-up results at 4 and 12 weeks postintervention demonstrated sustained improvement in oxygen titration and likely increased availability of oxygen resources.
Breastfeeding Practices Among First-Time Mothers and Across Multiple Pregnancies
To investigate maternal characteristics associated with breastfeeding initiation and success. Women enrolled in the Mothers Outcomes After Delivery study reported breastfeeding practices 5–10 years after a first delivery. Women were classified as successful breastfeeding initiators, unsuccessful initiators, or non-initiators. For the first birth, demographic and obstetrical characteristics were compared across these three breastfeeding groups. For multiparous women, agreement in breastfeeding status between births was evaluated. Multivariate regression analysis was used to identify characteristics associated with non-initiation and unsuccessful breastfeeding across all births. Of 812 participants, 740 (91%) tried to breastfeed their first child and 593 (73%) reported breastfeeding successfully. In multivariate analysis, less-educated women were less likely to initiate breastfeeding (odds ratio (OR) for non-initiation 1.97; 95% confidence interval (CI) 1.23, 3.14). Breastfeeding initiation decreased notably with increasing birth order: compared with the first birth, the odds of non-initiation after a second delivery almost doubled (OR 1.83, 95% CI 1.42, 2.35) and the odds of non-initiation after a third delivery were further increased (OR 2.44, 95% CI 1.56, 3.82). Successful breastfeeding in a first pregnancy was a predictor of subsequent breastfeeding initiation and success. Specifically, women who did not attempt breastfeeding or who reported unsuccessful attempts to breastfeed at the first birth were unlikely to initiate breastfeeding at later births. Cesarean delivery was not associated with breastfeeding initiation (OR 1.01; 95% CI 0.68, 1.48) or success (OR 1.33; 95% CI 0.92, 1.94). Breastfeeding practices after a first birth are a significant predictor of breastfeeding in subsequent births.
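For the odds ratios reported above, a minimal worked example from a hypothetical 2x2 table (counts invented for illustration; the study's ORs come from multivariate regression, not a raw table):

```python
# Hypothetical counts illustrating how an unadjusted odds ratio is formed.
a, b = 30, 70   # less-educated: non-initiators, initiators
c, d = 18, 82   # more-educated: non-initiators, initiators
odds_ratio = (a / b) / (c / d)
print(round(odds_ratio, 2))  # ~1.95: roughly doubled odds of non-initiation
```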
Description of a multidisciplinary initiative to improve SCIP measures related to pre-operative antibiotic prophylaxis compliance: a single-center success story
Background The Surgical Care Improvement Project (SCIP) was launched in 2005. The core prophylactic perioperative antibiotic guidelines were created in recognition that preventable surgical site infections (SSIs) account for an estimated one million excess inpatient days and $1.6 billion in excess health care costs annually. An internal study was conducted to create low-cost, standardized processes at an institutional level to improve compliance with prophylactic antibiotic administration. Methods We assessed the impact of auditing and notifying providers of SCIP errors on overall compliance with inpatient antibiotic guidelines, and on net financial gain or loss to a large tertiary center, between March 1, 2010 and September 30, 2013. We hypothesized that direct physician-to-physician feedback would result in significant compliance improvements. Results Through physician notification, our hospital was able to significantly improve SCIP compliance and emphasis on patient safety within a year of intervention implementation. The hospital earned an additional $290,612 in 2011 and $209,096 in 2012 for re-investment in patient care initiatives. Conclusions Provider education and direct notification of SCIP prophylactic antibiotic dosing errors resulted in improved compliance with national patient improvement guidelines. There were differences in feedback responses between the anesthesiology and surgery departments; the latter's response was likely affected by its diverse sub-divisions and frequent changes in resident trainees and supervising attending staff. Provider notification of guideline non-compliance should be encouraged as standard practice to improve patient safety. Also, the hospital experienced increased revenue for re-investment in patient care as a secondary result of provider notification.