42,513 result(s) for "Implementation science"
Policy, geophilosophy and education
Education policy is premised on an instrumentalist approach. This instrumentalism is based on narrow assumptions concerning people (the subject), decision-making (power), problem-solving (science and methodology), and knowledge (epistemology). Policy, Geophilosophy and Education reconceptualises the object, and hence, the objectives, of education policy. Specifically, the book illustrates how education policy positions and constitutes objects and subjects through emergent policy arrangements that simultaneously influence how policy is sensed, embodied, and enacted. The book examines the disciplinary and multi-disciplinary approaches to education policy analysis over the last sixty years, and reveals how policy analysis constitutes the ontologies and epistemologies of policy. In order to reconceptualise policy, Policy, Geophilosophy and Education uses ideas of spatiality, affect and problematization from the disciplines of geography and philosophy. The book problematizes case-vignettes to illustrate the complex and often paradoxical relations between neo-liberal education policy, equity, and educational inequalities produced in the representational registers of race and ethnicity.
Using an implementation science approach to implement and evaluate patient-reported outcome measures (PROM) initiatives in routine care settings
Purpose Patient-reported outcome and experience measures (PROMs/PREMs) are well established in research for many health conditions, but barriers persist for implementing them in routine care. Implementation science (IS) offers a potential way forward, but its application has been limited for PROMs/PREMs. Methods We compare similarities and differences for widely used IS frameworks and their applicability for implementing PROMs/PREMs through case studies. Three case studies implemented PROMs: (1) pain clinics in Canada; (2) oncology clinics in Australia; and (3) pediatric/adult clinics for chronic conditions in the Netherlands. The fourth case study is planning PREMs implementation in Canadian primary care clinics. We compare case studies on barriers, enablers, implementation strategies, and evaluation. Results Case studies used IS frameworks to systematize barriers, to develop implementation strategies for clinics, and to evaluate implementation effectiveness. Across case studies, consistent PROM/PREM implementation barriers were technology, uncertainty about how or why to use PROMs/PREMs, and competing demands from established clinical workflows. Enabling factors in clinics were context specific. Implementation support strategies changed during pre-implementation, implementation, and post-implementation stages. Evaluation approaches were inconsistent across case studies, and thus, we present example evaluation metrics specific to PROMs/PREMs. Conclusion Multilevel IS frameworks are necessary for PROM/PREM implementation given the complexity. In cross-study comparisons, barriers to PROM/PREM implementation were consistent across patient populations and care settings, but enablers were context specific, suggesting the need for tailored implementation strategies based on clinic resources. Theoretically guided studies are needed to clarify how, why, and in what circumstances IS principles lead to successful PROM/PREM integration and sustainability.
Commentary
This commentary explores the ways in which robust research focused on policy implementation will increase our ability to understand how to – and how not to – address social determinants of health. We make three key points in this commentary. First, policies that affect our lives and health are developed and implemented every single day, like it or not. These include “small p” policies, such as those at our workplaces that influence whether we have affordable access to healthy food at work, as well as “large P” policies that, for example, determine at a larger level whether our children’s schools are required to provide physical education. However, policies interact with context and are likely to have differential effects across different groups based on demographics, socioeconomic status, geography, and culture. We are unlikely to improve health equity if we do not begin to systematically evaluate the ways in which policies can incorporate evidence-based approaches to reducing inequities and to provide structural supports needed for such interventions to have maximal impact. A policy mandating physical education in schools will do little to address disparities in fitness and weight-related outcomes if all schools cannot provide the resources for physical education teachers and safe activity spaces. Second, as we argue for an increased emphasis on policy implementation science, we acknowledge its nascent status. Although the field of implementation science has become increasingly robust in the past decade, there has been only limited application to policy. However, if we are strategic and systematic in application of implementation science approaches and methods to health-related policy, there is great opportunity to discover its impact on social determinants. This will entail fundamental work to develop common measures of policy-relevant implementation processes and outcomes, to develop the capacity to track policy proposal outcomes, and to maximize our capacity to study natural experiments of policy implementation. Third, development of an explicit policy implementation science agenda focused on health equity is critical. This will include efforts to bridge scientific evidence and policy adoption and implementation, to evaluate policy impact on a range of health equity outcomes, and to examine differential effects of varied policy implementation processes across population groups. We cannot escape the reality that policy influences health and health equity. Policy implementation science can have an important bearing in understanding how
Guidance for conducting feasibility and pilot studies for implementation trials
Background Implementation trials aim to test the effects of implementation strategies on the adoption, integration or uptake of an evidence-based intervention within organisations or settings. Feasibility and pilot studies can assist with building and testing effective implementation strategies by helping to address uncertainties around design and methods, assessing potential implementation strategy effects and identifying potential causal mechanisms. This paper aims to provide broad guidance for the conduct of feasibility and pilot studies for implementation trials. Methods We convened a group with a mutual interest in the use of feasibility and pilot trials in implementation science, including implementation and behavioural science experts and public health researchers. We conducted a literature review to identify existing recommendations for feasibility and pilot studies, as well as publications describing formative processes for implementation trials. In the absence of previous explicit guidance for the conduct of feasibility or pilot implementation trials specifically, we used the effectiveness-implementation hybrid trial design typology proposed by Curran and colleagues as a framework for conceptualising the application of feasibility and pilot testing of implementation interventions. We discuss and offer guidance regarding the aims, methods, design, measures, progression criteria and reporting for implementation feasibility and pilot studies. Conclusions This paper provides a resource for those undertaking preliminary work to enrich and inform larger scale implementation trials.
Participatory implementation science to increase the impact of evidence-based cancer prevention and control
It is critical to accelerate the integration of evidence-based programs, practices, and strategies for cancer prevention and control into clinical, community, and public health settings. While it is clear that effective translation of existing knowledge into practice can reduce cancer burden, it is less clear how best to achieve this. This gap is addressed by the rapidly growing field of implementation science. Given that context influences and is influenced by implementation efforts, engaging stakeholders in the co-production of knowledge and solutions offers an opportunity to increase the likelihood that implementation efforts are useful, scalable, and sustainable in real-world settings. We argue that a participatory implementation science approach is critical, as it supports iterative, ongoing engagement between stakeholders and researchers to improve the pathway between research and practice, create system change, and address health disparities and health equity. This article highlights the utility of participatory implementation science for cancer prevention and control research and addresses (a) the spectrum of participatory research approaches that may be of use, (b) benefits of participatory implementation science, and (c) key considerations for researchers embarking on such projects.
Cost data in implementation science: categories and approaches to costing
A lack of cost information has been cited as a barrier to implementation and a limitation of implementation research. This paper explains how implementation researchers might optimize their measurement and inclusion of costs, building on traditional economic evaluations comparing costs and effectiveness of health interventions. The objective of all economic evaluation is to inform decision-making for resource allocation and to measure costs that reflect opportunity costs—the value of resource inputs in their next best alternative use, which generally vary by decision-maker perspective(s) and time horizon(s). Analyses that examine different perspectives or time horizons must consider cost estimation accuracy, because over longer time horizons, all costs are variable; however, with shorter time horizons and narrower perspectives, one must differentiate the fixed and variable costs, with fixed costs generally excluded from the evaluation. This paper defines relevant costs, identifies sources of cost data, and discusses cost relevance to potential decision-makers contemplating or implementing evidence-based interventions. Costs may come from the healthcare sector, informal healthcare sector, patient, participant or caregiver, and other sectors such as housing, criminal justice, social services, and education. Finally, we define and consider the relevance of costs by phase of implementation and time horizon, including pre-implementation and planning, implementation, intervention, downstream, and adaptation, and through replication, sustainment, de-implementation, or spread.
Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives
Background The field of dissemination and implementation (D&I) science has grown significantly over recent years. Alongside this, an increased demand for training in D&I from researchers and implementers has been seen. Research describing and evaluating D&I training opportunities, referred to here as ‘capacity building initiatives’ (CBIs), can help provide an understanding of different methods of training as well as training successes and challenges. However, to gain a more detailed understanding of the evidence-base and how D&I CBIs are being reported in publications, a field-wide examination of the academic literature is required. Methods Systematic review to identify the type and range of D&I CBIs discussed and/or appraised in the academic literature. EMBASE, Medline and PsycINFO were searched between January 2006 and November 2019. Articles were included if they reported on a D&I CBI that was developed by the authors (of each of the included articles) or the author’s host institution. Two reviewers independently screened the articles and extracted data using a standardised form. Results Thirty-one articles (from a total of 4181) were included. From these, 41 distinct D&I CBIs were identified, which focussed on different contexts and professions, from 8 countries across the world. CBIs ranged from short courses to training institutes to being part of academic programmes. Nearly half were delivered face-to-face, with the remainder delivered remotely or using a blended format. CBIs often stipulated specific eligibility criteria, strict application processes and/or were oversubscribed. Variabilities in the way in which the D&I CBIs were reported and/or evaluated were evident. Conclusions Increasing the number of training opportunities, as well as broadening their reach (to a wider range of learners), would help address the recognised deficit in D&I training. Standardisation in the reporting of D&I CBIs would enable the D&I community to better understand the findings across different contexts and scientific professions so that training gaps can be identified and overcome. More detailed examination of publications on D&I CBIs as well as the wider literature on capacity building would be of significant merit to the field.
Implementation Science Research in Communication Sciences and Disorders: A Scoping Review
The purpose of this study was to complete a scoping review of implementation science (IS) research in communication sciences and disorders (CSD) over time and to determine characteristics of IS research in CSD. A scoping review was conducted of PubMed and Education Resources Information Center for sources published in English that (a) included CSD practitioners, (b) addressed IS research, and (c) identified a specific evidence-based practice. Resulting sources were systematically examined for study aim, patient populations, implementation framework utilized, setting of the study, implementation strategy examined, and implementation outcome measured. The majority of the 82 studies that underwent a full-text review (80.5%) were published in 2014 or later. One fourth of the studies were concept papers and another one fourth focused on context assessment (25.6% of studies each), 11% focused on designing implementation strategies, and 36.6% focused on testing implementation strategies. The patient population most frequently represented was aphasia (21.3%), and most studies (34.4%) were conducted in inpatient medical settings. Nearly half (42.6%) of the nonconcept studies lacked an IS framework. Among implementation strategies identified, approximately one third of studies focused on education and/or training plus another strategy, and one fourth focused on education and/or training alone. Implementation outcomes measured typically represented early stages of implementation. This scoping review described the landscape of IS studies in CSD. IS is intersecting with CSD at a rapid rate, especially since 2014. Future IS research in CSD should adopt an implementation framework a priori and consider the broad range of implementation strategies and outcomes to support the uptake of research into typical practice settings.
Implementation science issues in understanding, collecting, and using cost estimates: a multi-stakeholder perspective
Understanding the resources needed to achieve desired implementation and effectiveness outcomes is essential to implementing and sustaining evidence-based practices (EBPs). Despite this frequent observation, cost and economic measurement and reporting remain rare, though increasingly common, in implementation science; when present, they are seldom reported from the perspective of multiple stakeholders (e.g., the organization, supervisory team), including those who will ultimately implement and sustain EBPs. Incorporating a multi-level framework is useful for understanding and integrating the perspectives and priorities of the diverse set of stakeholders involved in implementation. Stakeholders across levels, from patients to delivery staff to health systems, experience different economic impacts (costs, benefit, and value) related to EBP implementation and have different perspectives on these issues. Economic theory can aid in understanding multi-level perspectives and approaches to addressing potential conflict across perspectives. This paper provides examples of key cost components especially important to different types of stakeholders. It provides specific guidance and recommendations for cost assessment activities that address the concerns of various stakeholder groups, identifies areas of agreement and conflict in priorities, and outlines theoretically informed approaches to understanding conflicts among stakeholder groups and processes to address them. Involving stakeholders throughout the implementation process and presenting economic information in ways that are clear and meaningful to different stakeholder groups can aid in maximizing benefits within the context of limited resources. We posit that such approaches are vital to advancing economic evaluation in implementation science. Finally, we identify directions for future research and application. Considering a range of stakeholders is critical to informing economic evaluation that will support appropriate resource-allocation decisions across contexts and, in turn, successful adoption, implementation, and sustainment. Not all perspectives need to be addressed in a given project, but implementation research needs to identify and understand the perspectives of multiple groups of key stakeholders, including patients and direct implementation staff, who are often not explicitly considered in traditional economic evaluation.
The utility of the implementation science framework “Integrated Promoting Action on Research Implementation in Health Services” (i-PARIHS) and the facilitator role for introducing patient-reported outcome measures (PROMs) in a medical oncology outpatient department
Purpose We evaluated the utility of the implementation science framework “Integrated Promoting Action on Research Implementation in Health Services” (i-PARIHS) for introducing patient-reported outcome measures (PROMs) into a medical oncology outpatient department. The i-PARIHS framework identifies four core constructs for implementation: Facilitation, Innovation, Context and Recipients. Methods A pilot study used the i-PARIHS framework to identify PROM implementation barriers and enablers to inform facilitation support strategies, such as training clinicians and staff, workflow support, technical support, and audit and feedback. Pre- and post-implementation surveys were completed by 83 and 72 staff, respectively (nurses, doctors and allied health), to assess perceived knowledge, enablers, barriers and utility of PROMs; acceptability of the PROM intervention was also assessed post-implementation. Results Important barriers included time constraints and previous experiences with technology. Enablers included good leadership support and a culture of learning. Facilitation strategies were used to overcome barriers identified in the i-PARIHS core domains. Compared to before the intervention, staff surveys showed improvement in perceived usefulness, perceived understanding and interpretation skills for PROMs. Staff perceptions about lack of time to use PROMs during visits remained a major perceived barrier post-implementation. Conclusion The i-PARIHS framework was useful for guiding the implementation of PROMs in routine oncology care. The four core i-PARIHS constructs (Facilitation, Innovation, Context and Recipients) identified factors that directly impacted implementation, with Facilitation playing a particularly important role in overcoming these barriers. Oncology clinics and health systems planning to implement PROMs should consider having a dedicated Facilitator available during implementation.