1,324 result(s) for "Implementation fidelity"
The relative value of Pre-Implementation stages for successful implementation of evidence-informed programs
Background Most implementations fail before the corresponding services are ever delivered. Measuring implementation process fidelity may reveal when and why these attempts fail. This knowledge is necessary to support the achievement of positive implementation milestones, such as delivering services to clients (program start-up) and competency in treatment delivery. The present study evaluates the extent to which implementation process fidelity at different implementation stages predicts achievement of those milestones. Methods Implementation process fidelity data—as measured by the Stages of Implementation Completion (SIC)—from 1287 implementing sites across 27 evidence-informed programs were examined in mixed effects regression models with sites nested within programs. Implementation process fidelity, as measured by the proportion of implementation activities completed during the three stages of the SIC Pre-Implementation phase and overall Pre-Implementation (Phase 1) and Implementation (Phase 2) proportion scores, was assessed as a predictor of sites achieving program start-up (i.e., delivering services) and competency in program delivery. Results The predicted probability of start-up across all sites was low at 35% (95% CI [33%, 38%]). When considering the evidence-informed program being implemented, that probability was nearly twice as high (64%; 95% CI [42%, 82%]), and 57% of the total variance in program start-up was attributable to the program. Implementation process fidelity was positively and significantly associated with achievement of program start-up and competency. The magnitude of this relationship varied significantly across programs for Pre-Implementation Stage 1 (i.e., Engagement) only. Compared to other stages, completing more Pre-Implementation Stage 3 (Readiness Planning) activities resulted in the most rapid gains in probability of achieving program start-up. 
The predicted probability of achieving competency was very low unless sites had high scores in both Pre-Implementation and Implementation phases. Conclusions Strong implementation process fidelity—as measured by SIC Pre-Implementation and Implementation phase proportion scores—was associated with sites’ achievement of program start-up and competency in program delivery, with early implementation process fidelity being especially potent. These findings highlight the importance of a rigorous Pre-Implementation process.
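The variance partition reported above (57% of the variance in program start-up attributable to the program) follows from the standard latent-variable intraclass correlation formula for logistic mixed models. A minimal sketch, with a hypothetical between-program variance chosen to reproduce roughly that share:

```python
import math

def logistic_icc(program_variance: float) -> float:
    """Intraclass correlation for a logistic mixed model: the share of
    latent-scale variance attributable to the grouping level. The residual
    variance on the logit scale is fixed at pi^2 / 3."""
    residual = math.pi ** 2 / 3
    return program_variance / (program_variance + residual)

# Hypothetical between-program variance chosen so that roughly 57%
# of the variance sits at the program level, as in the abstract.
var_between = 4.36
print(round(logistic_icc(var_between), 2))  # -> 0.57
```

The fixed residual term is what makes this an ICC on the latent (logit) scale rather than the probability scale; the variance value itself is illustrative, not taken from the study.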
The effects of care bundles on patient outcomes: a systematic review and meta-analysis
Background Care bundles are a set of three to five evidence-informed practices performed collectively and reliably to improve the quality of care. Care bundles are used widely across healthcare settings with the aim of preventing and managing different health conditions. This is the first systematic review designed to determine the effects of care bundles on patient outcomes and the behaviour of healthcare workers in relation to fidelity with care bundles. Methods This systematic review is reported in line with the PRISMA statement for reporting systematic reviews and meta-analyses. A total of 5796 abstracts were retrieved through a systematic search for articles published between January 1, 2001, and February 4, 2017, in the Cochrane Central Register for Controlled Trials, MEDLINE, EMBASE, British Nursing Index, CINAHL, PsychInfo, British Library, Conference Proceeding Citation Index, and OpenGrey. Randomised trials (including cluster-randomised trials) and non-randomised studies (comprising controlled before-after studies, interrupted time series, and cohort studies) of care bundles for any health condition and any healthcare setting were considered. Following the removal of duplicate studies, two reviewers independently screened 3134 records. Three authors performed data extraction independently. We compared the care bundles with usual care to evaluate the effects of care bundles on the risk of negative patient outcomes. Random-effects models were used to further explore subgroup effects. Results In total, 37 studies (6 randomised trials, 31 controlled before-after studies) were eligible for inclusion. The effect of care bundles on patient outcomes is uncertain. For randomised trial data, the pooled relative risk of negative effects between care bundle and control groups was 0.97 [95% CI 0.71 to 1.34; 2049 participants].
The relative risk of negative patient outcomes from controlled before-after studies favoured the care bundle treated groups (0.66 [95% CI 0.59 to 0.75; 119,178 participants]). However, using GRADE, we assessed the certainty of all of the evidence to be very low (downgraded for risk of bias, inconsistency, indirectness). Conclusions Very low quality evidence from controlled before-after studies suggests that care bundles may reduce the risk of negative outcomes when compared with usual care. By contrast, the better quality evidence from six randomised trials is more uncertain. Trial registration PROSPERO, CRD42016033175
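The pooled relative risks above come from inverse-variance random-effects meta-analysis. A minimal sketch of DerSimonian-Laird pooling on hypothetical study data (the log relative risks and variances are illustrative, not taken from the review):

```python
import math

def pool_random_effects(log_rrs, variances):
    """Inverse-variance random-effects pooling (DerSimonian-Laird).
    Returns the pooled relative risk and its 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Three hypothetical studies: per-study log relative risks and variances.
rr, lo, hi = pool_random_effects([-0.4, -0.1, 0.1], [0.04, 0.09, 0.05])
```

Pooling on the log scale keeps the ratio symmetric around 1; exponentiating at the end returns the familiar relative-risk scale used in the abstract.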
From protocol to practice: Enhancing fidelity, supporting adaptation, and advancing science
Background Effective translation of evidence-based interventions (EBIs) into real-world settings hinges on the careful balance between fidelity and adaptation. Fidelity, the extent to which an intervention is delivered as intended, is essential for maintaining theoretical integrity and intervention effectiveness, yet rigid adherence often undermines contextual relevance. Purpose This paper explores the importance of and barriers to measuring and reporting intervention fidelity and adaptation, emphasizing their implications for implementation success and scalability. Methods Drawing on implementation science frameworks and the Transitional Care Model (TCM) as a case study, we highlight the interplay between fidelity, adaptation, and context. Emphasis is placed on creating practical tools, fostering bidirectional learning, and establishing fidelity standards in intervention protocols. Conclusions We conclude with recommendations to improve training, reporting, and collaboration that will support equitable and effective implementation of EBIs in meaningful ways across diverse settings. Lay Summary This article explores how evidence-based interventions are often changed when they are used in real-world environments. These changes, known as adaptations, may be necessary to meet the needs of different healthcare settings, patients, or resources, but they can also reduce how well the program works if not done carefully. The article focuses on the concept of fidelity, which means delivering a program as it was originally designed, and discusses the importance of balancing this with adaptation. Using the Transitional Care Model, a program to support older adults moving from hospital to home, the author highlights how and why such programs are frequently changed in practice. They also explain that some changes help while others may harm the program’s effectiveness.
The article calls for clearer guidance on what parts of a program should remain the same and better tools to help both researchers and clinicians track and report what changes are made. This work is critical to ensuring that evidence-based interventions are both effective and adaptable, thereby maintaining their relevance across diverse settings and enhancing the quality of care for all populations.
Capitalizing on natural language processing (NLP) to automate the evaluation of coach implementation fidelity in guided digital cognitive-behavioral therapy (GdCBT)
As the use of guided digitally-delivered cognitive-behavioral therapy (GdCBT) grows, pragmatic analytic tools are needed to evaluate coaches' implementation fidelity. We evaluated how natural language processing (NLP) and machine learning (ML) methods might automate the monitoring of coaches' implementation fidelity to GdCBT delivered as part of a randomized controlled trial. Coaches served as guides to 6-month GdCBT with 3,381 assigned users with or at risk for anxiety, depression, or eating disorders. CBT-trained and supervised human coders used a rubric to rate the implementation fidelity of 13,529 coach-to-user messages. NLP methods abstracted data from text-based coach-to-user messages, and 11 ML models predicting coach implementation fidelity were evaluated. Inter-rater agreement by human coders was excellent (intra-class correlation coefficient = .980-.992). Coaches achieved behavioral targets at the start of the GdCBT and maintained strong fidelity throughout most subsequent messages. Coaches also avoided prohibited actions (e.g. reinforcing users' avoidance). Sentiment analyses generally indicated a higher frequency of coach-delivered positive than negative sentiment words and predicted coach implementation fidelity with acceptable performance metrics (e.g. area under the receiver operating characteristic curve [AUC] = 74.48%). The final best-performing ML algorithms that included a more comprehensive set of NLP features performed well (e.g. AUC = 76.06%). NLP and ML tools could help clinical supervisors automate monitoring of coaches' implementation fidelity to GdCBT. These tools could maximize allocation of scarce resources by reducing the personnel time needed to measure fidelity, potentially freeing up more time for high-quality clinical care.
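The sentiment analyses described above count positive versus negative sentiment words in coach-to-user messages. A minimal sketch with a tiny hypothetical lexicon (a real pipeline would use a validated sentiment dictionary and the fuller NLP feature set the study describes):

```python
# Hypothetical lexicons for illustration only; the study's actual
# dictionaries and ML features are not reproduced here.
POSITIVE = {"great", "progress", "proud", "well", "glad"}
NEGATIVE = {"avoid", "worse", "fail", "stuck"}

def sentiment_counts(message: str) -> tuple[int, int]:
    """Count positive and negative lexicon hits in one coach message."""
    tokens = message.lower().split()
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return pos, neg

pos, neg = sentiment_counts("Great progress this week, you handled it well!")
# -> (3, 0)
```

Per-message counts like these become features for the downstream classifiers that predict coder-rated fidelity.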
Teachers’ Perceptions of the Impact of the COVID-19 Pandemic and Their Implementation of an Evidence-based HIV Prevention Program in the Bahamas
Information on how school-based programs are implemented and sustained during crises is limited. In this study, we assessed the impact of the COVID-19 pandemic on the implementation of an HIV prevention intervention in The Bahamas. Data were collected from 139 Grade 6 teachers in 2021–2022. Teachers attended virtual training and received implementation monitoring from coordinators. On average, teachers taught 26.4 (SD = 9.2) of the 35 core activities, and 7.4 (SD = 2.4) out of 9 sessions. More than half (58.3%) of teachers completed 28 or more core activities; 69.1% covered eight or all nine sessions, which is equivalent to 80% of the HIV intervention curriculum. Almost half of the teachers (43%) reported that the pandemic negatively impacted their ability to teach the program; 72% of teachers maintained that the program remained “very important” during times of crisis. Greater self-efficacy and supports increased implementation fidelity.
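The 80% completion cut-off above is a simple proportion of core activities taught; a minimal sketch (the helper name and threshold default are illustrative):

```python
def fidelity(completed: int, total: int, threshold: float = 0.8) -> bool:
    """True when the completed share of core activities meets the threshold."""
    return completed / total >= threshold

# 28 of the 35 core activities corresponds to the 80% cut-off used above.
print(fidelity(28, 35))  # -> True
print(fidelity(26, 35))  # -> False
```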
Methods for capturing and analyzing adaptations: implications for implementation research
Background Interventions are often adapted; some adaptations may provoke more favorable outcomes, whereas some may not. A better understanding of the adaptations and their intended goals may elucidate which adaptations produce better outcomes. Improved methods are needed to better capture and characterize the impact of intervention adaptations. Methods We used multiple data collection and analytic methods to characterize adaptations made by practices participating in a hybrid effectiveness-implementation study of a complex, multicomponent diabetes intervention. Data collection methods to identify adaptations included interviews, observations, and facilitator sessions resulting in transcripts, templated notes, and field notes. Adaptations gleaned from these sources were reduced and combined; then, their components were cataloged according to the framework for reporting adaptations and modifications to evidence-based interventions (FRAME). Analytic methods to characterize adaptations included a co-occurrence table, statistically based k-means clustering, and a taxonomic analysis. Results We found that (1) using multiple data collection methods elicited more adaptations overall, (2) multiple data collection methods provided understanding of the components of and reasons for adaptation, and (3) analytic methods revealed ways that adaptation components cluster together in unique patterns producing adaptation “types.” These types may be useful for understanding how the “who, what, how, and why” of adaptations may fit together, and for analysis alongside outcome data to determine whether adaptation types, rather than individual adaptation components, produce more favorable outcomes. Conclusion Adaptations were prevalent and discoverable through different methods. Enhancing methods to describe adaptations may better illuminate what works in providing improved intervention fit within context.
Trial registration This trial is registered on clinicaltrials.gov under Trial number NCT03590041, posted July 18, 2018.
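A co-occurrence table like the one used in the analysis above can be built by counting which coded components appear together within each adaptation. A minimal sketch over hypothetical FRAME-style records (the component names and values are invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical adaptations, each coded on FRAME-style components
# ("who", "what", "why"); real coding uses the full FRAME taxonomy.
adaptations = [
    {"who": "nurse", "what": "content", "why": "fit"},
    {"who": "nurse", "what": "delivery", "why": "fit"},
    {"who": "physician", "what": "content", "why": "resources"},
]

# Count how often pairs of coded values appear in the same adaptation;
# sorting makes each pair order-independent before counting.
co_occurrence = Counter()
for record in adaptations:
    values = sorted(record.values())
    for pair in combinations(values, 2):
        co_occurrence[pair] += 1
```

Tables like this are the input that clustering methods (e.g., k-means over component indicators) then group into adaptation "types."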
Electronically Delivered Support to Promote Intervention Implementation Fidelity: A Research Synthesis
The coronavirus (COVID-19) pandemic has led to many students receiving remote instruction. As educators support students' learning and behavior from a distance, school psychologists can utilize their consultation skills to offer educators implementation support to ensure interventions implemented are maximally successful. Although the effectiveness of intervention implementation supports (such as direct training and performance feedback) has been well documented, most existing studies involved providing implementation support in person. The purpose of this study was to synthesize single-case research design studies that examined electronically delivered implementation supports (EDIS) to school personnel (e.g., teachers, paraeducators) to promote implementation fidelity. One hundred and seventy-four articles were screened, and 12 studies that met the research design quality standards outlined by the What Works Clearinghouse were coded for characteristics of EDIS. Results of the review indicated that, overall, the studies identified were conducted with sufficient methodological rigor, and effect size estimates indicated EDIS promoted high levels of intervention implementation. We translate these research findings to guide practitioners' actions. Implications include EDIS being considered for interventions provided to students involved in distance learning, including those engaged in schooling at home due to COVID-19. IMPACT STATEMENT This systematic review examined the effect of electronically delivered implementation support (EDIS) on educators' implementation fidelity. Following rigorous screening and coding procedures, 12 studies containing 46 individual cases met research design standards outlined by the What Works Clearinghouse and were included in the synthesis.
As a result of this review, the authors concluded that EDIS may be useful to school psychologists supporting educators from a distance to implement interventions with youth who may be engaged in schooling remotely.
A Systematic Review of the Facilitators and Barriers to the Sustained Implementation of School-Wide Positive Behavioral Interventions and Supports
School-wide positive behavioral interventions and supports (SWPBIS) is an effective and widely adopted school-wide framework for promoting positive behavioral, social, and academic outcomes for students. However, over the past decade, the degree to which schools were able to implement specific SWPBIS practices with fidelity has emerged as a critical issue, with varying levels of SWPBIS abandonment reported in the literature. Authors have used the term facilitators to describe specific variables that appear to support the sustained implementation of SWPBIS, and the term barriers to describe specific variables that appear to hinder the sustained implementation of SWPBIS (Kincaid et al., 2007). The purpose of the present review was to identify, summarize, and appraise the extant literature on variables reported to function as facilitators and barriers to the sustained implementation of SWPBIS. We identified 22 unique variables that may function as facilitators or barriers. The provision of resources, high fidelity of implementation, and effective SWPBIS team function were the most commonly described facilitators, whereas a conflict in personal beliefs, an ineffective SWPBIS team, and a lack of resources were the most commonly identified barriers. The implications of these findings, along with areas for future research, are discussed.
Understanding the implementation of Direct Health Facility Financing and its effect on health system performance in Tanzania: a non-controlled before and after mixed method study protocol
Background Globally, good health system performance has resulted from continuous reform, including adaptation of Decentralisation by Devolution policies, for example, the Direct Health Facility Financing (DHFF). Generally, the role of decentralisation in the health sector is to improve efficiency, to foster innovations and to improve quality, patient experience and accountability. However, such improvements have not been well realised in most low- and middle-income countries, with the main reason cited being the poor mechanism for disbursement of funds, which remain largely centralised. The introduction of the DHFF programme in Tanzania is expected to help improve the quality of health service delivery and increase service utilisation resulting in improved health system performance. This paper describes the protocol, which aims to evaluate the effects of DHFF on health system performance in Tanzania. Methods An evaluation of the effect of the DHFF programme will be carried out as part of a nationwide programme rollout. A before and after non-controlled concurrent mixed methods design study will be employed to examine the effect of the DHFF programme implementation on the structural quality of maternal health, health facility governing committee governance and accountability, and health system responsiveness as perceived by the patients’ experiences. Data will be collected from a nationally representative sample involving 42 health facilities, 422 patient consultations, 54 health workers, and 42 health facility governing committees in seven regions from the seven zones of the Tanzanian mainland. The study is grounded in a conceptual framework centered on the Theory of Change and the Implementation Fidelity Framework. The study will utilise a mixture of quantitative and qualitative data collection tools (questionnaires, focus group discussions, in-depth interviews and documentary review). 
The study will collect information related to knowledge, acceptability and practice of the programme, fidelity of implementation, structural qualities of maternal and child health services, accountability, governance, and patient perception of health system responsiveness. Discussion This evaluation study will generate evidence on both the process and impact of the DHFF programme implementation, and help to inform policy improvement. The study is expected to inform policy on the implementation of DHFF within decentralised health system government machinery, with particular regard to health system strengthening through quality healthcare delivery. Assessment of health system responsiveness, together with the accountability and governance of health facility governing committees, should bring autonomy to lower levels and improve patient experiences. A major strength of the proposed study is the use of a mixed methods approach to obtain a more in-depth understanding of factors that may influence the implementation of the DHFF programme. This evaluation has the potential to generate robust data for evidence-based policy decisions in a low-income setting.
The Impact of the Contextual Fit Enhancement Protocol on Behavior Support Plan Fidelity and Student Behavior
The contextual fit of a behavior support plan refers to the extent that the procedures of the plan are consistent with the knowledge, values, skills, resources, and administrative support of those who are expected to implement the plan. This study used a concurrent multiple baseline design across four participants to assess the presence of a functional relation between introduction of the Contextual Fit Enhancement Protocol, an intervention designed to improve contextual fit, and (a) an increase in fidelity of support plan implementation and (b) improved student behavior. Results indicate that following implementation of the Contextual Fit Enhancement Protocol, support plan implementation fidelity increased and student problem behavior decreased. In addition, teachers participating in the study rated the contextual fit intervention process as effective and efficient. Limitations and implications for future research, practice, and training are discussed.