2,030 result(s) for "Drug Monitoring - standards"
Effect of an Intensive Glucose Management Protocol on the Mortality of Critically Ill Adult Patients
To assess the effect of an intensive glucose management protocol in a heterogeneous population of critically ill adult patients. This study consisted of 800 consecutive patients admitted after institution of the protocol (treatment group, between February 1, 2003, and January 10, 2004) and 800 patients admitted immediately preceding institution of the protocol (baseline group, between February 23, 2002, and January 31, 2003). The setting was a 14-bed medical-surgical intensive care unit (ICU) in a university-affiliated community teaching hospital. The protocol involved intensive monitoring and treatment to maintain plasma glucose values lower than 140 mg/dL. Continuous intravenous insulin was used if glucose values exceeded 200 mg/dL on 2 successive occasions. The 2 groups of patients were well matched, with similar age, sex, race, prevalence of diabetes mellitus, Acute Physiology and Chronic Health Evaluation II scores, and distribution of diagnoses. After institution of the protocol, the mean glucose value decreased from 152.3 to 130.7 mg/dL (P<.001), marked by a 56.3% reduction in the percentage of glucose values of 200 mg/dL or higher, without a significant change in hypoglycemia. The development of new renal insufficiency decreased 75% (P=.03), and the number of patients undergoing transfusion of packed red blood cells decreased 18.7% (P=.04). Hospital mortality decreased 29.3% (P=.002), and length of stay in the ICU decreased 10.8% (P=.01). The protocol resulted in significantly improved glycemic control and was associated with decreased mortality, organ dysfunction, and length of stay in the ICU in a heterogeneous population of critically ill adult patients. These results support the adoption of this low-cost intervention as a standard of care for critically ill patients.
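The protocol's trigger logic described above (keep plasma glucose below 140 mg/dL; start continuous IV insulin once two successive readings exceed 200 mg/dL) reduces to a simple check over the reading history. A minimal sketch of those published thresholds only; the function names are hypothetical and this is not the hospital's actual algorithm:

```python
def insulin_drip_indicated(readings_mg_dl, threshold=200, successive=2):
    """True if the last `successive` glucose readings all exceed `threshold`
    mg/dL -- the abstract's trigger for continuous IV insulin."""
    if len(readings_mg_dl) < successive:
        return False
    return all(g > threshold for g in readings_mg_dl[-successive:])

def above_target(reading_mg_dl, target=140):
    """Protocol target: maintain plasma glucose below 140 mg/dL."""
    return reading_mg_dl >= target

# Two successive readings above 200 mg/dL trigger the infusion:
print(insulin_drip_indicated([150, 210, 225]))  # True
# A single high reading followed by an in-range value does not:
print(insulin_drip_indicated([210, 150, 225]))  # False
```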
Management of older or unfit patients with acute myeloid leukemia
Acute myeloid leukemia (AML) is primarily a disease of older adults, for whom optimal treatment strategies remain controversial. Because of concerns about therapeutic resistance and, in particular, excessive toxicity or even treatment-related mortality, many older or medically unfit patients do not receive AML-directed therapy. Yet, evidence suggests that outcomes are improved if essentially all of these patients are offered AML therapy, ideally at a specialized cancer center. Medical fitness for tolerating intensive chemotherapy can be estimated relatively accurately with multiparameter assessment tools; this information should serve as the basis for assignment to intensive or non-intensive therapy. Until our accuracy in predicting the success of individual therapies improves, all patients should be considered for participation in a randomized controlled trial. Comparisons between individual trials will be facilitated once standardized, improved response criteria are developed and standard treatment approaches have been defined against which novel therapies can be tested.
At-Home Versus In-Clinic INR Monitoring: A Cost–Utility Analysis from The Home INR Study (THINRS)
Background: Effective management of patients using warfarin is resource-intensive, requiring frequent in-clinic testing of the international normalized ratio (INR). Patient self-testing (PST) using portable at-home INR monitoring devices has emerged as a convenient alternative. As revealed by The Home INR Study (THINRS), event rates for PST were not significantly different from those for in-clinic high-quality anticoagulation management (HQACM), and a cumulative gain in quality of life was observed for patients undergoing PST. Objective: To perform a cost–utility analysis of weekly PST versus monthly HQACM and to examine the sensitivity of these results to testing frequency. Patients/Interventions: In this study, 2922 patients taking warfarin for atrial fibrillation or mechanical heart valve, and who demonstrated PST competence, were randomized to either weekly PST (n = 1465) or monthly in-clinic testing (n = 1457). In a sub-study, 234 additional patients were randomized to PST once every 4 weeks (n = 116) or PST twice weekly (n = 118). The endpoints were quality of life (measured by the Health Utilities Index), health care utilization, and costs over 2 years of follow-up. Results: PST and HQACM participants were similar with regard to gender, age, and CHADS2 score. The total cost per patient over 2 years of follow-up was $32,484 for HQACM and $33,460 for weekly PST, representing a difference of $976. The incremental cost per quality-adjusted life year gained with PST once weekly was $5566 (95% CI, −$11,490 to $25,142). The incremental cost-effectiveness ratio (ICER) was sensitive to testing frequency: weekly PST dominated PST twice weekly and once every 4 weeks. Compared to HQACM, weekly PST was associated with statistically significant and clinically meaningful improvements in quality of life. The ICER for weekly PST versus HQACM was well within accepted standards for cost-effectiveness, and was preferred over more or less frequent PST. These results were robust to sensitivity analyses of key assumptions. Conclusion: Weekly PST is a cost-effective alternative to monthly HQACM and a preferred testing frequency compared to twice-weekly or monthly PST.
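The headline economics above follow from the standard ICER formula: incremental cost divided by incremental QALYs. A rough worked sketch; the per-arm QALY totals are not reported in the abstract, so the ~0.175 QALY gain below is back-calculated from the reported $976 cost difference and $5,566/QALY ratio, not taken from the study:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# The abstract reports a $976 cost difference ($33,460 - $32,484) and an
# ICER of $5,566 per QALY; together these imply a gain of roughly 0.175
# QALYs per patient over the 2-year follow-up.
qaly_gain = 976 / 5566
print(round(qaly_gain, 3))                        # 0.175
print(round(icer(33460, 32484, qaly_gain, 0.0)))  # 5566
```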
The Impact of Prescription Drug Monitoring Programs and Prescribing Guidelines on Emergency Department Opioid Prescribing: A Multi-Center Survey
Objective. Emergency department (ED) providers are high volume but low quantity prescribers of opioid analgesics (OA). Few studies have examined differences in opioid prescribing decisions specifically among ED providers. The aim of this study was to describe OA prescribing decisions of ED providers at geographically diverse centers, including utilization of prescribing guidelines and prescription drug monitoring programs (PDMP). Methods. This was a multi-center cross-sectional Web-based survey of ED providers who prescribe OA. Respondents were asked about their OA prescribing decisions, their use of PDMPs, and their use of prescribing guidelines. Data were analyzed using descriptive statistics, and chi-square tests of association were used to assess the relationship between providers' opioid prescribing decisions and independent covariates. Results. The total survey population was 957 individuals, of whom 515 responded, for an overall response rate of 54%. The frequency of respondents who prescribed different types of pain medication varied between centers. Fifty-nine percent (219/369) of respondents were registered to access a PDMP, and 5% (18/369) were not aware whether their state had a PDMP. Forty percent (172/426) of respondents used OA prescribing guidelines, while 24% (103/426) did not, and 35% (151/426) were unaware of prescribing guidelines. Sixteen percent (68/439) of respondents indicated they had prescribed OA to expedite patient discharge, and 12% (54/439) to improve patient satisfaction. No significant differences in OA prescribing decisions were found between groups either by use of PDMP or by guideline adherence. Conclusions. In this multi-center survey of ED clinicians, OA prescribing decisions varied between centers, and some providers reported occasionally prescribing OA for non-medical reasons, including expediting ED discharge and increasing patient satisfaction. The utilization of prescribing guidelines and PDMPs was not associated with differences in OA prescribing decisions.
Standardized MRD flow and ASO IGH RQ-PCR for MRD quantification in CLL patients after rituximab-containing immunochemotherapy: a comparative analysis
Rituximab-containing regimens are becoming a therapeutic standard in chronic lymphocytic leukemia (CLL), so that a validation of flow cytometric minimal residual disease (MRD) quantification (MRD flow) in the presence of this antibody is necessary. We therefore compared results obtained by real-time quantitative (RQ)-PCR to MRD flow in 530 samples from 69 patients randomized to receive chemotherapy or chemotherapy plus rituximab. Quantitative MRD levels assessed by both techniques were closely correlated irrespective of therapy (r=0.95). The sensitivity and specificity of MRD flow was not influenced by the presence of rituximab. With 58.9% positive and 26.4% negative samples by both techniques, 85.3% of assessments (452/530) were qualitatively concordant between MRD flow and RQ-PCR. Discordant samples were typically negative by MRD flow and simultaneously positive close to the detection limit of the PCR assays, indicating a higher sensitivity of PCR for very low MRD levels. However, 93.8% of all samples were concordantly classified by both methods using a threshold of 10^-4 to determine MRD positivity. MRD flow and PCR are equally effective for MRD quantification in rituximab-treated CLL patients within a sensitivity range of up to 10^-4, whereas PCR is more sensitive for detecting MRD below that level.
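The qualitative comparison above reduces to classifying each paired measurement as MRD-positive or MRD-negative at the 10^-4 threshold and counting agreements. A minimal sketch; the sample values below are invented for illustration and are not the study's data:

```python
def mrd_positive(level, threshold=1e-4):
    """Classify an MRD level as positive at the 10^-4 threshold the
    abstract uses to compare MRD flow with RQ-PCR."""
    return level >= threshold

def qualitative_concordance(flow_levels, pcr_levels, threshold=1e-4):
    """Fraction of paired samples where both assays agree on
    positive/negative status at the given threshold."""
    agree = sum(
        mrd_positive(f, threshold) == mrd_positive(p, threshold)
        for f, p in zip(flow_levels, pcr_levels)
    )
    return agree / len(flow_levels)

# Hypothetical paired measurements (MRD flow vs. RQ-PCR):
flow = [5e-3, 2e-5, 3e-4, 0.0]
pcr  = [4e-3, 8e-5, 2e-4, 5e-6]
print(qualitative_concordance(flow, pcr))  # 1.0: all four pairs agree
```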
Opportunities for improving medication use and monitoring in gout
Purpose: To study patterns and predictors of medication use and laboratory monitoring in gout. Methods: In a cohort of veterans with a diagnosis of gout prescribed allopurinol, colchicine or probenecid, quality of care was assessed by examining adherence to the following evidence-based recommendations: (1) whether patients starting a new allopurinol prescription (a) received continuous allopurinol, (b) received colchicine prophylaxis, (c) achieved the target uric acid level of ⩽6 mg/dl; and (2) whether doses were adjusted for renal insufficiency. The association of sociodemographic characteristics, healthcare utilisation and comorbidity with the recommendations was examined by logistic/Poisson regression. Results: Of the 643 patients with gout receiving a new allopurinol prescription, 297 (46%) received continuous allopurinol, 66 (10%) received colchicine prophylaxis and 126 (20%) reached the target uric acid level of ⩽6 mg/dl. During episodes of renal insufficiency, appropriate dose reduction/discontinuation of probenecid was done in 24/31 episodes (77%) and of colchicine in 36/52 episodes (69%). Multivariable regression showed that higher outpatient utilisation, more rheumatology care and lower comorbidity were associated with better quality of care: more rheumatology clinic or primary care visits were associated with less frequent allopurinol discontinuation; more total outpatient visit days or more frequent visits to a rheumatology clinic were associated with a higher likelihood of receiving colchicine prophylaxis; and a lower Charlson Comorbidity Index or more outpatient visit days were associated with higher odds of reaching the target uric acid level of ⩽6 mg/dl. Conclusions: Important variations were found in patterns of medication use and monitoring in patients with gout, with suboptimal care overall. A concerted effort is needed to improve the overall care of gout.
The impact of frequency of patient self-testing of prothrombin time on time in target range within VA Cooperative Study #481: The Home INR Study (THINRS), a randomized, controlled trial
Anticoagulation (AC) is effective in reducing thromboembolic events for individuals with atrial fibrillation (AF) or mechanical heart valve (MHV), but maintaining patients in target range for international normalized ratio (INR) can be difficult. Evidence suggests increasing INR testing frequency can improve time in target range (TTR), but this can be impractical with in-clinic testing. The objective of this study was to test the hypothesis that more frequent patient self-testing (PST) via home monitoring increases TTR. This planned substudy was conducted as part of The Home INR Study, a randomized controlled trial of in-clinic INR testing every 4 weeks versus PST at three different intervals. The setting for this study was 6 VA centers across the United States. 1,029 candidates with AF or MHV were trained and tested for competency using ProTime INR meters; 787 patients were deemed competent and, after second consent, randomized across four arms: high-quality AC management (HQACM) in a dedicated clinic, with venous INR testing once every 4 weeks; and telephone-monitored PST once every 4 weeks, weekly, or twice weekly. The primary endpoint was TTR at 1-year follow-up. The secondary endpoints were: major bleed, stroke and death, and quality of life. Results showed that TTR increased as testing frequency increased (59.9 ± 16.7%, 63.3 ± 14.3%, and 66.8 ± 13.2% [mean ± SD] for the groups that underwent PST every 4 weeks, weekly, and twice weekly, respectively). The proportion of poorly managed patients (i.e., TTR <50%) was significantly lower for groups that underwent PST versus HQACM, and the proportion decreased as testing frequency increased. Patients and their care providers were unblinded given the nature of PST and HQACM. In conclusion, more frequent PST improved TTR and reduced the proportion of poorly managed patients.
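TTR, the primary endpoint above, is conventionally computed by Rosendaal linear interpolation: the INR is assumed to change linearly between consecutive tests, and the fraction of interpolated days falling in the target range is counted. A sketch under that assumption; the trial's exact computation and target range are not given in the abstract, so the [2.0, 3.0] range here is an assumption:

```python
def ttr_rosendaal(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range by linear interpolation between
    consecutive INR tests (Rosendaal-style; illustrative sketch)."""
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:  # flat segment: in range entirely or not at all
            in_range += span if low <= i0 <= high else 0.0
            continue
        # fraction of the segment where the interpolated INR is in range
        lo_t = (low - i0) / (i1 - i0)
        hi_t = (high - i0) / (i1 - i0)
        t0, t1 = sorted((lo_t, hi_t))
        overlap = max(0.0, min(t1, 1.0) - max(t0, 0.0))
        in_range += overlap * span
    return in_range / total

# INR rises linearly from 1.5 to 3.5 over 20 days: in [2, 3] half the time
print(ttr_rosendaal([0, 20], [1.5, 3.5]))  # 0.5
```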
Efficacy and tolerability of novel triple combination therapy in drug-naïve patients with type 2 diabetes from the TRIPLE-AXEL trial: protocol for an open-label randomised controlled trial
Introduction: Patients with type 2 diabetes are at risk of microvascular and macrovascular complications. Intensive glycaemic control, especially in patients with a short duration of diabetes, is the mainstay of management of type 2 diabetes to lower the risk of complications. However, despite the improvement in the understanding of the pathophysiology of type 2 diabetes and the development of novel glucose-lowering agents, long-term durable glycaemic control remains a difficult goal to achieve. Several challenging clinical trials proved that early combination therapy with a variety of glucose-lowering agents had a more favourable effect than conventional stepwise therapy in terms of glycaemic control. We aim to evaluate the efficacy and tolerability of a novel, initial triple combination therapy with metformin, a sodium glucose cotransporter 2 inhibitor (dapagliflozin) and a dipeptidyl peptidase-4 inhibitor (saxagliptin) compared with conventional stepwise add-on therapy in drug-naïve patients with recent-onset type 2 diabetes. Methods and analysis: This study is a multicentre, prospective, randomised, open-label, parallel group, comparator-controlled trial. A total of 104 eligible participants will be randomised to either the initial combination therapy group or the conventional stepwise add-on therapy group for 104 weeks. The primary endpoint is the proportion of patients who achieve a haemoglobin A1c level <6.5% without hypoglycaemia, weight gain or discontinuation due to adverse events at 104 weeks. This trial will determine whether a novel triple combination therapy with metformin, dapagliflozin and saxagliptin has a beneficial effect on durable glycaemic control compared with conventional therapy in drug-naïve patients with type 2 diabetes. Ethics and dissemination: This study protocol was approved by the local institutional review boards and independent ethics committees of the recruitment sites. Results of this study will be disseminated in scientific journals and at scientific conferences. Trial registration number: NCT02946632; Pre-results.
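The composite primary endpoint above is a conjunction of criteria: HbA1c below 6.5% with no hypoglycaemia, no weight gain, and no discontinuation due to adverse events at 104 weeks. A hypothetical sketch; in particular, treating any positive weight change as "weight gain" is an assumption, since the protocol's operational definition is not stated in the abstract:

```python
def met_primary_endpoint(hba1c_pct, hypoglycaemia, weight_change_kg, discontinued):
    """Composite endpoint: HbA1c < 6.5% AND no hypoglycaemia AND no weight
    gain AND no discontinuation due to adverse events (illustrative)."""
    return (hba1c_pct < 6.5
            and not hypoglycaemia
            and weight_change_kg <= 0
            and not discontinued)

print(met_primary_endpoint(6.2, False, -1.5, False))  # True
print(met_primary_endpoint(6.2, True, -1.5, False))   # False: hypoglycaemia
```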
Sensitivity to changes during antidepressant treatment: a comparison of unidimensional subscales of the Inventory of Depressive Symptomatology (IDS-C) and the Hamilton Depression Rating Scale (HAMD) in patients with mild major, minor or subsyndromal depression
In the efficacy evaluation of antidepressant treatments, the total score of the Hamilton Depression Rating Scale (HAMD) is still regarded as the ‘gold standard’. We previously showed that the Inventory of Depressive Symptomatology (IDS) was more sensitive in detecting depressive symptom changes than the HAMD17 (Helmreich et al. 2011). Furthermore, studies suggest that the unidimensional subscales of the HAMD, which capture the core depressive symptoms, outperform the full HAMD in detecting antidepressant treatment effects. The aim of the present study was to compare several unidimensional subscales of the HAMD and the IDS regarding their sensitivity to changes in depression symptoms in a sample of patients with mild major, minor or subsyndromal depression (MIND). Biweekly IDS-C28 and HAMD17 data from 287 patients in a 10-week randomised, placebo-controlled trial comparing the effectiveness of sertraline and cognitive–behavioural group therapy in patients with MIND were converted to subscale scores and analysed over the course of antidepressant treatment. We investigated sensitivity to depressive change for all scales from assessment to assessment, in relation to depression severity level and placebo–verum differences. The subscales performed similarly during the treatment course, with slight advantages for some subscales in detecting treatment effects depending on the treatment modality and the items included. Most changes in depressive symptomatology were detected by the IDS short scale, but in terms of effect sizes it performed worse than most subscales. Unidimensional subscales are a time- and cost-saving option for judging drug therapy outcomes, especially in antidepressant treatment efficacy studies. However, subscales do not cover all facets of depression (e.g. atypical symptoms, sleep disturbances), which might be important for a comprehensive understanding of the nature of depression.
Therefore, the cost-to-benefit ratio must be carefully assessed in the decision for using unidimensional subscales.
Measuring quality of care for rheumatic diseases using an electronic medical record
Objectives: The objective of this study was twofold: (1) to determine how best to measure adherence with time-dependent quality indicators (QIs) related to laboratory monitoring, and (2) to assess the accuracy and efficiency of gathering QI adherence information from an electronic medical record (EMR). Methods: A random sample of 100 patients was selected who had at least three visits with the diagnosis of rheumatoid arthritis (RA) at Brigham and Women’s Hospital Arthritis Center in 2005. Using the EMR, it was determined whether patients had been prescribed a disease-modifying antirheumatic drug (DMARD) (QI #1) and whether patients starting therapy received appropriate baseline laboratory testing (QI #2). For patients consistently prescribed a DMARD, adherence with follow-up testing (QI #3) was calculated using three different methods: the Calendar, Interval and Rolling Interval Methods. Results: It was found that 97% of patients were prescribed a DMARD (QI #1) and baseline tests were completed in 50% of patients (QI #2). For follow-up testing (QI #3), mean adherence was 60% for the Calendar Method, 35% for the Interval Method, and 48% for the Rolling Interval Method. Using the Rolling Interval Method, adherence rates were similar across drug and laboratory testing type. Conclusions: Results for adherence with laboratory testing QIs for DMARD use differed depending on how the QIs were measured, suggesting that care must be taken in clearly defining methods. While EMRs will provide important opportunities for measuring adherence with QIs, they also present challenges that must be examined before widespread adoption of these data collection methods.
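The Calendar, Interval and Rolling Interval methods compared above are defined in the paper itself; as a generic illustration of the underlying idea, follow-up monitoring adherence can be computed as the fraction of consecutive monitoring windows during continuous drug use that contain at least one lab test. Everything below (the 90-day window length, the dates) is an assumption for illustration, not any of the paper's three methods:

```python
from datetime import date, timedelta

def interval_adherence(test_dates, start, end, interval_days=90):
    """Fraction of consecutive fixed-length monitoring windows between
    `start` and `end` that contain at least one lab test (generic
    interval-style adherence; illustrative only)."""
    windows = met = 0
    w_start = start
    while w_start + timedelta(days=interval_days) <= end:
        w_end = w_start + timedelta(days=interval_days)
        windows += 1
        if any(w_start <= t < w_end for t in test_dates):
            met += 1
        w_start = w_end
    return met / windows if windows else None

# Two tests during a year of continuous therapy, checked in 90-day windows:
tests = [date(2005, 2, 1), date(2005, 8, 15)]
print(interval_adherence(tests, date(2005, 1, 1), date(2005, 12, 31)))  # 0.5
```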