5,806 result(s) for "Evidence-Based Practice - standards"
A randomized matched-pairs study of feasibility, acceptability, and effectiveness of systems consultation: a novel implementation strategy for adopting clinical guidelines for opioid prescribing in primary care
Background This paper reports on the feasibility, acceptability, and effectiveness of an innovative implementation strategy named “systems consultation” aimed at improving adherence to clinical guidelines for opioid prescribing in primary care. While clinical guidelines for opioid prescribing have been developed, they have not been widely implemented, even as opioid abuse reaches epidemic levels. Methods We tested a blended implementation strategy consisting of several discrete implementation strategies, including audit and feedback, academic detailing, and external facilitation. The study compares four intervention clinics to four control clinics in a randomized matched-pairs design. Each systems consultant aided clinics on implementing the guidelines during a 6-month intervention consisting of monthly site visits and teleconferences/videoconferences. The mixed-methods evaluation employs the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework. Quantitative outcomes are compared using time series analysis. Qualitative methods included focus groups, structured interviews, and ethnographic field techniques. Results Seven clinics were randomly approached to recruit four intervention clinics. Each clinic designated a project team consisting of six to eight staff members, each with at least one prescriber. Attendance at intervention meetings was 83%. More than 80% of staff respondents agreed or strongly agreed with the statements: “I am more familiar with guidelines for safe opioid prescribing” and “My clinic’s workflow for opioid prescribing is easier.” At 6 months, statistically significant improvements were noted in intervention clinics in the percentage of patients with mental health screens, treatment agreements, urine drug tests, and opioid-benzodiazepine co-prescribing. At 12 months, morphine-equivalent daily dose was significantly reduced in intervention clinics compared to controls. 
The cost to deliver the strategy was $7345 per clinic. Adaptations were required to make the strategy more acceptable for primary care. Qualitatively, intervention clinics reported that chronic pain was now treated using approaches similar to those employed for other chronic conditions, such as hypertension and diabetes. Conclusions The systems consultation implementation strategy demonstrated feasibility, acceptability, and effectiveness in a study involving eight primary care clinics. This multi-disciplinary strategy holds potential to mitigate the prevalence of opioid addiction and ultimately may help to improve implementation of clinical guidelines across healthcare. Trial registration ClinicalTrials.gov (NCT02433496). https://clinicaltrials.gov/ct2/show/NCT02433496 Registered May 5, 2015
Timing of randomization after an acute coronary syndrome in patients with type 2 diabetes mellitus
The timing of enrolment following an acute coronary syndrome (ACS) may influence cardiovascular (CV) outcomes and potentially treatment effect in clinical trials. Understanding the timing and type of clinical events after an ACS will allow clinicians to better tailor evidence-based treatments to optimize therapeutic effect. Using a large contemporary trial in patients with type 2 diabetes mellitus (T2DM) post-ACS, we examined the impact of timing of enrolment on subsequent CV outcomes. EXAMINE was a randomized trial of alogliptin versus placebo in 5,380 patients with T2DM and a recent ACS from October 2009 to March 2013. The primary outcome was a composite of CV death, nonfatal myocardial infarction (MI), or nonfatal stroke. The median follow-up was 18 months. In this post hoc analysis, we examined the occurrence of subsequent CV events by timing of enrollment, divided by tertiles of time from ACS to randomization: 8-34, 35-56, and 57-141 days. Patients randomized early (compared to the latest tertile) had fewer comorbidities at baseline, including a history of heart failure (HF; 24.7% vs 33.0%), prior coronary artery bypass graft (9.6% vs 15.9%), or atrial fibrillation (5.9% vs 9.4%). Despite the reduced comorbidity burden, the risk of the primary outcome was highest in patients randomized early compared to the latest tertile (adjusted hazard ratio 1.47; 95% CI 1.21-1.74). Similarly, patients randomized early had an increased risk of recurrent MI (adjusted hazard ratio 1.51; 95% CI 1.17-1.96) and HF hospitalization (1.49; 95% CI 1.05-2.10). In a contemporary cohort of T2DM with a recent ACS, the risk for recurrent CV events, including MI and HF hospitalization, is elevated early after an ACS. 
Given the emergence of antihyperglycemic therapies that reduce the risk of MI and HF among patients with T2DM at high CV risk, future studies evaluating the initiation of these therapies in the early period following an ACS are warranted given the large burden of potentially modifiable CV events.
Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis
Objective To systematically quantify the prevalence, severity, and nature of preventable patient harm across a range of medical settings globally. Design Systematic review and meta-analysis. Data sources Medline, PubMed, PsycINFO, Cinahl and Embase, WHOLIS, Google Scholar, and SIGLE from January 2000 to January 2019. The reference lists of eligible studies and other relevant systematic reviews were also searched. Review methods Observational studies reporting preventable patient harm in medical care. The core outcomes were the prevalence, severity, and types of preventable patient harm reported as percentages and their 95% confidence intervals. Data extraction and critical appraisal were undertaken by two reviewers working independently. Random effects meta-analysis was employed followed by univariable and multivariable meta regression. Heterogeneity was quantified by using the I2 statistic, and publication bias was evaluated. Results Of the 7313 records identified, 70 studies involving 337 025 patients were included in the meta-analysis. The pooled prevalence for preventable patient harm was 6% (95% confidence interval 5% to 7%). A pooled proportion of 12% (9% to 15%) of preventable patient harm was severe or led to death. Incidents related to drugs (25%, 95% confidence interval 16% to 34%) and other treatments (24%, 21% to 30%) accounted for the largest proportion of preventable patient harm. Compared with general hospitals (where most evidence originated), preventable patient harm was more prevalent in advanced specialties (intensive care or surgery; regression coefficient b=0.07, 95% confidence interval 0.04 to 0.10). Conclusions Around one in 20 patients are exposed to preventable harm in medical care. 
Although a focus on preventable patient harm has been encouraged by the international patient safety policy agenda, there are limited quality improvement practices specifically targeting incidents of preventable patient harm rather than overall patient harm (preventable and non-preventable). Developing and implementing evidence-based mitigation strategies specifically targeting preventable patient harm could lead to major service quality improvements in medical care which could also be more cost effective.
Measuring evidence-based practice knowledge and skills in occupational therapy—a brief instrument
Background Valid and reliable instruments are required to measure the effect of educational interventions to improve evidence-based practice (EBP) knowledge and skills in occupational therapy. The aims of this paper are to: 1) describe amendments to the Adapted Fresno Test of Competence in EBP (AFT), and 2) report the psychometric properties of the modified instrument when used with South African occupational therapists. Methods The clinical utility of the AFT was evaluated for use with South African occupational therapists and modifications made. The modified AFT was used in two studies to assess its reliability and validity. In Study 1 a convenience sample of 26 occupational therapists in private practice or government-funded health facilities in a South African province were recruited to complete the modified AFT on two occasions 1 week apart. Completed questionnaires were scored independently by two raters. Inter-rater, test-retest reliability and internal consistency were determined. Study 2 was a pragmatic randomised controlled trial involving occupational therapists in four Western Cape Department of Health district municipalities (n = 58). Therapists were randomised in matched pairs to one of two educational interventions (interactive or didactic), and completed the modified AFT at baseline and 12 weeks after the intervention. An intention-to-treat analysis was performed. Data were not normally distributed, thus non-parametric statistics were used. Results In Study 1, 21 of 26 participants completed the questionnaire twice. Test-retest (ICC = 0.95, 95% CI = 0.88–0.98) and inter-rater reliability (Time 1: ICC = 0.995, 95% CI = 0.99–0.998; Time 2: ICC = 0.99, 95% CI = 0.97–0.995) were excellent for total scores. Internal consistency based on time 1 scores was satisfactory (α = 0.70). In Study 2, 28 participants received an interactive educational intervention and completed the modified AFT at baseline and 12 weeks later. 
Median total SAFT scores increased significantly from baseline to 12 weeks (Z = −4.078, p < 0.001) with a moderate effect size (r = 0.55). Conclusion The modified AFT has demonstrated validity for detecting differences in EBP knowledge between two groups. It also has excellent test-retest and inter-rater reliability. The instrument is recommended for contexts where EBP is an emerging approach and time is at a premium. Trial registration Pan African Controlled Trials Register PACTR201201000346141. Registered 31 January 2012. Clinical Trials NCT01512823. Registered 1 February 2012. South African National Clinical Trial Register DOH2710093067. Registered 27 October 2009.
A systematic review of adaptations of evidence-based public health interventions globally
Background Adaptations of evidence-based interventions (EBIs) often occur. However, little is known about the reasons for adaptation, the adaptation process, and outcomes of adapted EBIs. To address this gap, we conducted a systematic review to answer the following questions: (1) What are the reasons for and common types of adaptations being made to EBIs in community settings as reported in the published literature? (2) What steps are described in making adaptations to EBIs? and (3) What outcomes are assessed in evaluations of adapted EBIs? Methods We conducted a systematic review of English language publications that described adaptations of public health EBIs. We searched Ovid PubMed, PsycINFO, PsycNET, and CINAHL and citations of included studies for adapted public health EBIs. We abstracted characteristics of the original and adapted populations and settings, reasons for adaptation, types of modifications, use of an adaptation framework, adaptation steps, and evaluation outcomes. Results Forty-two distinct EBIs were found focusing on HIV/AIDS, mental health, substance abuse, and chronic illnesses. More than half (62%) reported on adaptations in the USA. Frequent reasons for adaptation included the need for cultural appropriateness (64.3%), focusing on a new target population (59.5%), and implementing in a new setting (57.1%). Common adaptations were content (100%), context (95.2%), cultural modifications (73.8%), and delivery (61.9%). Most study authors conducted a community assessment, prepared new materials, implemented the adapted intervention, evaluated or planned to evaluate the intervention, determined needed changes, trained staff members, and consulted experts/stakeholders. Most studies that reported an evaluation (k = 36) included behavioral outcomes (71.4%), acceptability (66.7%), fidelity (52.4%), and feasibility (52.4%). Fewer measured adoption (47.6%) and changes in practice (21.4%). 
Conclusions These findings advance our understanding of the patterns and effects of modifications of EBIs that are reported in published studies and suggest areas of further research to understand and guide the adaptation process. Furthermore, findings can inform better reporting of adapted EBIs and inform capacity building efforts to assist health professionals in adapting EBIs.
The need for a complex systems model of evidence for public health
Despite major investment in both research and policy, many pressing contemporary public health challenges remain. To date, the evidence underpinning responses to these challenges has largely been generated by tools and methods that were developed to answer questions about the effectiveness of clinical interventions, and as such are grounded in linear models of cause and effect. Identification, implementation, and evaluation of effective responses to major public health challenges require a wider set of approaches [1,2] and a focus on complex systems [3,4].
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
Selective reporting of complete studies (e.g., publication bias) [28] as well as the more recently empirically demonstrated "outcome reporting bias" within individual studies [40],[41] should be considered by authors when conducting a systematic review and reporting its results. Terminology The terminology used to describe a systematic review and meta-analysis has evolved over time. Deeks, PhD, University of Birmingham (Birmingham, UK); P. J. Devereaux, MD, PhD, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Canada); Kay Dickersin, PhD, Johns Hopkins Bloomberg School of Public Health (Baltimore, Maryland, US); Matthias Egger, MD, Department of Social and Preventive Medicine, University of Bern (Bern, Switzerland); Edzard Ernst, MD, PhD, FRCP, FRCP(Edin), Peninsula Medical School (Exeter, UK); Peter C. Gøtzsche, MD, MSc, The Nordic Cochrane Centre (Copenhagen, Denmark); Jeremy Grimshaw, MBChB, PhD, FRCFP, Ottawa Hospital Research Institute (Ottawa, Canada); Gordon Guyatt, MD, Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University (Hamilton, Canada); Julian Higgins, PhD, MRC Biostatistics Unit (Cambridge, UK); John P. A. 
Ioannidis, MD, University of Ioannina Campus (Ioannina, Greece); Jos Kleijnen, MD, PhD, Kleijnen Systematic Reviews Ltd (York, UK) and School for Public Health and Primary Care (CAPHRI), University of Maastricht (Maastricht, Netherlands); Tom Lang, MA, Tom Lang Communications and Training (Davis, California, US); Alessandro Liberati, MD, Università di Modena e Reggio Emilia (Modena, Italy) and Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); Nicola Magrini, MD, NHS Centre for the Evaluation of the Effectiveness of Health Care - CeVEAS (Modena, Italy); David McNamee, PhD, The Lancet (London, UK); Lorenzo Moja, MD, MSc, Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri (Milan, Italy); David Moher, PhD, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Canada); Cynthia Mulrow, MD, MSc, Annals of Internal Medicine (Philadelphia, Pennsylvania, US); Maryann Napoli, Center for Medical Consumers (New York, New York, US); Andy Oxman, MD, Norwegian Health Services Research Centre (Oslo, Norway); Ba' Pham, MMath, Toronto Health Economics and Technology Assessment Collaborative (Toronto, Canada) (at the time of the first meeting of the group, GlaxoSmithKline Canada, Mississauga, Canada); Drummond Rennie, MD, FRCP, FACP, University of California San Francisco (San Francisco, California, US); Margaret Sampson, MLIS, Children's Hospital of Eastern Ontario (Ottawa, Canada); Kenneth F. Schulz, PhD, MBA, Family Health International (Durham, North Carolina, US); Paul G. Shekelle, MD, PhD, Southern California Evidence Based Practice Center (Santa Monica, California, US); Jennifer Tetzlaff, BSc, Ottawa Methods Centre, Ottawa Hospital Research Institute (Ottawa, Canada); David Tovey, FRCGP, The Cochrane Library, Cochrane Collaboration (Oxford, UK) (at the time of the first meeting of the group, BMJ, London, UK); Peter Tugwell, MD, MSc, FRCPC, Institute of Population Health, University of Ottawa (Ottawa, Canada).
Core Competencies in Evidence-Based Practice for Health Professionals
Evidence-based practice (EBP) is necessary for improving the quality of health care as well as patient outcomes. Evidence-based practice is commonly integrated into the curricula of undergraduate, postgraduate, and continuing professional development health programs. There is, however, inconsistency in the curriculum content of EBP teaching and learning programs. A standardized set of minimum core competencies in EBP that health professionals should meet has the potential to standardize and improve education in EBP. The objective was to develop a consensus set of core competencies for health professionals in EBP. For this modified Delphi survey study, a set of EBP core competencies that should be covered in EBP teaching and learning programs was developed in 4 stages: (1) generation of an initial set of relevant EBP competencies derived from a systematic review of EBP education studies for health professionals; (2) a 2-round, web-based Delphi survey of health professionals, selected using purposive sampling, to prioritize and gain consensus on the most essential EBP core competencies; (3) consensus meetings, both face-to-face and via video conference, to finalize the consensus on the most essential core competencies; and (4) feedback and endorsement from EBP experts. From an earlier systematic review of 83 EBP educational intervention studies, 86 unique EBP competencies were identified. In a Delphi survey of 234 participants representing a range of health professionals (physicians, nurses, and allied health professionals) who registered interest (88 [61.1%] women; mean [SD] age, 45.2 [10.2] years), 184 (78.6%) participated in round 1 and 144 (61.5%) in round 2. Consensus was reached on 68 EBP core competencies. The final set of EBP core competencies were grouped into the main EBP domains. For each key competency, a description of the level of detail or delivery was identified. 
A consensus-based, contemporary set of EBP core competencies has been identified that may inform curriculum development of entry-level EBP teaching and learning programs for health professionals and benchmark standards for EBP teaching.
A process evaluation accompanying an attempted randomized controlled trial of an evidence service for health system policymakers
Background We developed an evidence service that draws inputs from Health Systems Evidence (HSE), which is a comprehensive database of research evidence about governance, financial and delivery arrangements within health systems and about implementation strategies relevant to health systems. Our goal was to evaluate whether, how and why a ‘full-serve’ evidence service increases the use of synthesized research evidence by policy analysts and advisors in the Ontario Ministry of Health and Long-Term Care as compared to a ‘self-serve’ evidence service. Methods We attempted to conduct a two-arm, 10-month randomized controlled trial (RCT), along with a follow-up qualitative process evaluation, but we terminated the RCT when we failed to reach our recruitment target. For the qualitative process evaluation we modified the original interview guide to allow us to explore the (1) factors influencing participation in the trial; (2) usage of HSE, factors explaining usage patterns, and strategies to increase usage; (3) participation in training workshops and use of other supports; and (4) views about and experiences with key HSE features. Results We terminated the RCT given our 15% recruitment rate. Six factors were identified by those who had agreed to participate in the trial as encouraging their participation: relevance of the study to participants’ own work; familiarity with the researchers; personal view of the importance of using research evidence in policymaking; academic background; support from supervisors; and participation of colleagues. Most reported that they never, infrequently or inconsistently used HSE and suggested strategies to increase its use, including regular email reminders and employee training. However, only two participants indicated that employee training, in the form of a workshop about finding and using research evidence, had influenced their use of HSE. 
Most participants found HSE features to be intuitive and helpful, although registration/sign-in and some page formats (particularly the advanced search page and detailed search results page) discouraged their use or did not optimize the user experience. Conclusions The qualitative findings informed a re-design of HSE, which allows users to more efficiently find and use research evidence about how to strengthen or reform health systems or how to get cost-effective programs, services and drugs to those who need them. Our experience with RCT recruitment suggests the need to consider changing the unit of allocation to divisions instead of individuals within divisions, among other lessons. Trial registration The protocol for this study is published in Implementation Science and registered with ClinicalTrials.gov (HHS/FHS REB 10–267).