110 results for "Damschroder, Laura"
The updated Consolidated Framework for Implementation Research based on user feedback
Background Many implementation efforts fail, even with highly developed plans for execution, because contextual factors can be powerful forces working against implementation in the real world. The Consolidated Framework for Implementation Research (CFIR) is one of the most commonly used determinant frameworks to assess these contextual factors; however, it has been over 10 years since publication and there is a need for updates. The purpose of this project was to elicit feedback from experienced CFIR users to inform updates to the framework. Methods User feedback was obtained from two sources: (1) a literature review with a systematic search; and (2) a survey of authors who used the CFIR in a published study. Data were combined across both sources and reviewed to identify themes; a consensus approach was used to finalize all CFIR updates. The VA Ann Arbor Healthcare System IRB declared this study exempt from the requirements of 38 CFR 16 based on category 2. Results The systematic search yielded 376 articles that contained the CFIR in the title and/or abstract and 334 unique authors with contact information; 59 articles included feedback on the CFIR. Forty percent (n = 134/334) of authors completed the survey. The CFIR received positive ratings on most framework sensibility items (e.g., applicability, usability), but respondents also provided recommendations for changes. Overall, updates to the CFIR include revisions to existing domains and constructs as well as the addition, removal, or relocation of constructs. These changes address important critiques of the CFIR, including better centering innovation recipients and adding determinants to equity in implementation. Conclusion The updates in the CFIR reflect feedback from a growing community of CFIR users. Although there are many updates, constructs can be mapped back to the original CFIR to ensure longitudinal consistency. We encourage users to continue critiquing the CFIR, facilitating the evolution of the framework as implementation science advances.
Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum
Background The challenges of implementing evidence-based innovations (EBIs) are widely recognized among practitioners and researchers. Context, broadly defined as everything outside the EBI, includes the dynamic and diverse array of forces working for or against implementation efforts. The Consolidated Framework for Implementation Research (CFIR) is one of the most widely used frameworks to guide assessment of contextual determinants of implementation. The original 2009 article invited critique in recognition of the need for the framework to evolve. As implementation science has matured, gaps in the CFIR have been identified and updates are needed. Our team is developing the CFIR 2.0 based on a literature review and follow-up survey with authors. We propose an Outcomes Addendum to the CFIR to address recommendations from these sources to include outcomes in the framework. Main text We conducted a literature review and surveyed corresponding authors of included articles to identify recommendations for the CFIR. There were recommendations to add both implementation and innovation outcomes from these sources. Based on these recommendations, we make conceptual distinctions between (1) anticipated implementation outcomes and actual implementation outcomes, (2) implementation outcomes and innovation outcomes, and (3) CFIR-based implementation determinants and innovation determinants. Conclusion An Outcomes Addendum to the CFIR is proposed. Our goal is to offer clear conceptual distinctions between types of outcomes for use with the CFIR, and perhaps other determinant implementation frameworks as well. These distinctions can help bring clarity as researchers consider which outcomes are most appropriate to evaluate in their research. We hope that sharing this in advance will generate feedback and debate about the merits of our proposed addendum.
Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR)
Background In the United States, as in many other parts of the world, the prevalence of overweight/obesity is at epidemic proportions in the adult population and even higher among Veterans. To address the high prevalence of overweight/obesity among Veterans, the MOVE!® weight management program was disseminated nationally to Veterans Affairs (VA) medical centers. The objective of this paper is two-fold: to describe factors that explain the wide variation in implementation of MOVE!; and to illustrate, step-by-step, how to apply a theory-based framework using qualitative data. Methods Five VA facilities were selected to maximize variation in implementation effectiveness and geographic location. Twenty-four key stakeholders were interviewed about their experiences in implementing MOVE!. The Consolidated Framework for Implementation Research (CFIR) was used to guide collection and analysis of qualitative data. Constructs that most strongly influence implementation effectiveness were identified through a cross-case comparison of ratings. Results Of the 31 CFIR constructs assessed, ten constructs strongly distinguished between facilities with low versus high program implementation effectiveness. The majority (six) were related to the inner setting: networks and communications; tension for change; relative priority; goals and feedback; learning climate; and leadership engagement. One construct each, from intervention characteristics (relative advantage) and outer setting (patient needs and resources), plus two from process (executing and reflecting) also strongly distinguished between high and low implementation. Two additional constructs weakly distinguished, 16 were mixed, three constructs had insufficient data to assess, and one was not applicable. Detailed descriptions of how each distinguishing construct manifested in study facilities and a table of recommendations are provided. Conclusions This paper presents an approach for using the CFIR to code and rate qualitative data in a way that will facilitate comparisons across studies. An online Wiki resource (http://www.wiki.cfirwiki.net) is available, in addition to the information presented here, that contains much of the published information about the CFIR and its constructs and sub-constructs. We hope that the described approach and open access to the CFIR will generate wide use and encourage dialogue and continued refinement of both the framework and approaches for applying it.
Rapid versus traditional qualitative analysis using the Consolidated Framework for Implementation Research (CFIR)
Background Qualitative approaches, alone or in mixed methods, are prominent within implementation science. However, traditional qualitative approaches are resource intensive, which has led to the development of rapid qualitative approaches. Published rapid approaches are often inductive in nature and rely on transcripts of interviews. We describe a deductive rapid analysis approach using the Consolidated Framework for Implementation Research (CFIR) that uses notes and audio recordings. This paper compares our rapid versus traditional deductive CFIR approach. Methods Semi-structured interviews were conducted for two cohorts of the Veterans Health Administration (VHA) Diffusion of Excellence (DoE). The CFIR guided data collection and analysis. In cohort A, we used our traditional CFIR-based deductive analysis approach (directed content analysis), where two analysts completed independent in-depth manual coding of interview transcripts using qualitative software. In cohort B, we used our new rapid CFIR-based deductive analysis approach (directed content analysis), where the primary analyst wrote detailed notes during interviews and immediately “coded” notes into an MS Excel CFIR construct-by-facility matrix; a secondary analyst then listened to audio recordings and edited the matrix. We tracked time for our traditional and rapid deductive CFIR approaches using a spreadsheet and captured transcription costs from invoices. We retrospectively compared our approaches in terms of effectiveness and rigor. Results Cohorts A and B were similar in terms of the amount of data collected. However, our rapid deductive CFIR approach required 409.5 analyst hours compared to 683 hours for the traditional deductive CFIR approach. The rapid deductive approach eliminated $7250 in transcription costs. The facility-level analysis phase provided the greatest savings: 14 hours per facility for the traditional analysis versus 3.92 hours per facility for the rapid analysis. Data interpretation required the same number of hours for both approaches. Conclusion Our rapid deductive CFIR approach was less time intensive and eliminated transcription costs, yet effective in meeting evaluation objectives and establishing rigor. Researchers should consider the following when employing our approach: (1) team expertise in the CFIR and qualitative methods, (2) level of detail needed to meet project aims, (3) mode of data to analyze, and (4) advantages and disadvantages of using the CFIR.
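To make the construct-by-facility matrix described in that abstract concrete, here is a minimal, hypothetical sketch in Python. The construct names, facility labels, and note text are illustrative placeholders, and the CSV export is an assumption about workflow rather than the study's actual tooling.

```python
import csv

# Hypothetical CFIR construct-by-facility matrix: rows are CFIR constructs,
# columns are facilities, and cells hold the analyst's summarized interview
# notes. All names and note text below are illustrative placeholders.
constructs = ["Relative Advantage", "Leadership Engagement", "Available Resources"]
facilities = ["Facility A", "Facility B"]

matrix = {c: {f: "" for f in facilities} for c in constructs}
matrix["Leadership Engagement"]["Facility A"] = "Chief of staff engaged early; attended kickoff."
matrix["Available Resources"]["Facility B"] = "No protected staff time reported."

# Export the matrix so a secondary analyst can review and edit it while
# listening to the audio recordings.
with open("cfir_matrix.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["CFIR construct"] + facilities)
    for c in constructs:
        writer.writerow([c] + [matrix[c][f] for f in facilities])
```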
Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions
Background A fundamental challenge of implementation is identifying contextual determinants (i.e., barriers and facilitators) and determining which implementation strategies will address them. Numerous conceptual frameworks (e.g., the Consolidated Framework for Implementation Research; CFIR) have been developed to guide the identification of contextual determinants, and compilations of implementation strategies (e.g., the Expert Recommendations for Implementing Change compilation; ERIC) have been developed which can support selection and reporting of implementation strategies. The aim of this study was to identify which ERIC implementation strategies would best address specific CFIR-based contextual barriers. Methods Implementation researchers and practitioners were recruited to participate in an online series of tasks involving matching specific ERIC implementation strategies to specific implementation barriers. Participants were presented with brief descriptions of barriers based on CFIR construct definitions. They were asked to rank up to seven implementation strategies that would best address each barrier. Barriers were presented in a random order, and participants had the option to respond to the barrier or skip to another barrier. Participants were also asked about considerations that most influenced their choices. Results Four hundred thirty-five invitations were emailed and 169 (39%) individuals participated. Respondents had considerable heterogeneity in opinions regarding which ERIC strategies best addressed each CFIR barrier. Across the 39 CFIR barriers, an average of 47 different ERIC strategies (SD = 4.8, range 35 to 55) was endorsed at least once for each, as being one of seven strategies that would best address the barrier. A tool was developed that allows users to specify high-priority CFIR-based barriers and receive a prioritized list of strategies based on endorsements provided by participants. Conclusions The wide heterogeneity of endorsements obtained in this study’s task suggests that there are relatively few consistent relationships between CFIR-based barriers and ERIC implementation strategies. Despite this heterogeneity, a tool aggregating endorsements across multiple barriers can support taking a structured approach to consider a broad range of strategies given those barriers. This study’s results point to the need for a more detailed evaluation of the underlying determinants of barriers and how these determinants are addressed by strategies as part of the implementation planning process.
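As a rough illustration of the aggregation logic the barrier-to-strategy tool describes (users specify high-priority CFIR barriers and receive a prioritized list of ERIC strategies), the sketch below tallies endorsement counts across selected barriers. The barrier names, strategy names, and counts are hypothetical and this is not the study's data or its actual tool.

```python
from collections import Counter

# Hypothetical endorsement counts: for each CFIR barrier, how many respondents
# ranked each ERIC strategy among the seven that would best address it.
endorsements = {
    "Leadership Engagement": {
        "Identify and prepare champions": 58,
        "Inform local opinion leaders": 40,
    },
    "Available Resources": {
        "Access new funding": 61,
        "Conduct local needs assessment": 34,
    },
    "Networks and Communications": {
        "Organize clinician implementation team meetings": 45,
        "Identify and prepare champions": 29,
    },
}

def prioritize_strategies(selected_barriers, endorsements):
    """Sum endorsements across the user's high-priority barriers and return
    strategies ordered from most to least endorsed."""
    totals = Counter()
    for barrier in selected_barriers:
        totals.update(endorsements.get(barrier, {}))
    return totals.most_common()

# Example: a user flags two barriers and gets a prioritized strategy list.
for strategy, count in prioritize_strategies(
        ["Leadership Engagement", "Networks and Communications"], endorsements):
    print(f"{count:3d}  {strategy}")
```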
Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration
Background It is challenging to conduct and quickly disseminate findings from in-depth qualitative analyses, which can impede timely implementation of interventions because of their time-consuming methods. To better understand tradeoffs between the need for actionable results and scientific rigor, we present our method for conducting a framework-guided rapid analysis (RA) and a comparison of these findings to an in-depth analysis of interview transcripts. Methods Set within the context of an evaluation of a successful academic detailing (AD) program for opioid prescribing in the Veterans Health Administration, we developed interview guides informed by the Consolidated Framework for Implementation Research (CFIR) and interviewed 10 academic detailers (clinical pharmacists) and 20 primary care providers to elicit detail about successful features of the program. For the RA, verbatim transcripts were summarized using a structured template (based on CFIR); summaries were subsequently consolidated into matrices by participant type to identify aspects of the program that worked well and ways to facilitate implementation elsewhere. For comparison purposes, we later conducted an in-depth analysis of the transcripts. We described our RA approach and qualitatively compared the RA and deductive in-depth analysis with respect to consistency of themes and resource intensity. Results Integrating the CFIR throughout the RA and in-depth analysis was helpful for providing structure and consistency across both analyses. Findings from the two analyses were consistent. The most frequently coded constructs from the in-depth analysis aligned well with themes from the RA, and the latter methods were sufficient and appropriate for addressing the primary evaluation goals. Our approach to RA was less resource-intensive than the in-depth analysis, allowing for timely dissemination of findings to our operations partner that could be integrated into ongoing implementation. Conclusions In-depth analyses can be resource-intensive. If consistent with project needs (e.g., to quickly produce information to inform ongoing implementation or to comply with a policy mandate), it is reasonable to consider using RA, especially when faced with resource constraints. Our RA provided valid findings in a short timeframe, enabling identification of actionable suggestions for our operations partner.
A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project
Background Identifying, developing, and testing implementation strategies are important goals of implementation science. However, these efforts have been complicated by the use of inconsistent language and inadequate descriptions of implementation strategies in the literature. The Expert Recommendations for Implementing Change (ERIC) study aimed to refine a published compilation of implementation strategy terms and definitions by systematically gathering input from a wide range of stakeholders with expertise in implementation science and clinical practice. Methods Purposive sampling was used to recruit a panel of experts in implementation and clinical practice who engaged in three rounds of a modified Delphi process to generate consensus on implementation strategies and definitions. The first and second rounds involved Web-based surveys soliciting comments on implementation strategy terms and definitions. After each round, iterative refinements were made based upon participant feedback. The third round involved a live polling and consensus process via a Web-based platform and conference call. Results Participants identified substantial concerns with 31% of the terms and/or definitions and suggested five additional strategies. Seventy-five percent of definitions from the originally published compilation of strategies were retained after voting. Ultimately, the expert panel reached consensus on a final compilation of 73 implementation strategies. Conclusions This research advances the field by improving the conceptual clarity, relevance, and comprehensiveness of implementation strategies that can be used in isolation or combination in implementation research and practice. Future phases of ERIC will focus on developing conceptually distinct categories of strategies as well as ratings for each strategy’s importance and feasibility. Next, the expert panel will recommend multifaceted strategies for hypothetical yet real-world scenarios that vary by sites’ endorsement of evidence-based programs and practices and the strength of contextual supports that surround the effort.
Champions in context: which attributes matter for change efforts in healthcare?
Background Research to date has focused on strategies and resources used by effective champions of healthcare change efforts, rather than personal characteristics that contribute to their success. We sought to identify and describe champion attributes influencing outcomes of healthcare change efforts. To examine attributes of champions, we used postpartum contraceptive care as a case study, because recommended services are largely unavailable, and implementation requires significant effort. Methods We conducted a comparative case study of the implementation of inpatient postpartum contraceptive care at 11 U.S. maternity hospitals in 2017–18. We conducted site visits that included semi-structured key informant interviews informed by the Consolidated Framework for Implementation Research (CFIR). Phase one analysis (qualitative content analysis using a priori CFIR codes and cross-case synthesis) showed that implementation leaders (“champions”) strongly influenced outcomes across sites. To understand champion effects, phase two inductive analysis included (1) identifying and elaborating key attributes of champions; (2) rating the presence or absence of each attribute in champions; and (3) cross-case synthesis to identify patterns among attributes, context, and implementation outcomes. Results We completed semi-structured interviews with 78 clinicians, nurses, residents, pharmacy and revenue cycle staff, and hospital administrators. All identified champions were obstetrician-gynecologists. Six key attributes of champions emerged: influence, ownership, physical presence at the point of change, persuasiveness, grit, and participative leadership style. These attributes promoted success by enabling champions to overcome institutional siloing, build and leverage professional networks, create tension for change, cultivate a positive learning climate, optimize compatibility with existing workflow, and engage key stakeholders. Not all champion attributes were required for success, and having all attributes did not guarantee success. Conclusions Effective champions appear to leverage six key attributes to facilitate healthcare change efforts. Prospective evaluations of the interactions among champion attributes, context, and outcomes may further elucidate how champions exert their effects.
Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory
Background The use of implementation strategies is an active and purposive approach to translate research findings into routine clinical care. The Expert Recommendations for Implementing Change (ERIC) identified and defined discrete implementation strategies, and Proctor and colleagues have made recommendations for specifying operationalization of each strategy. We use empirical data to test how the ERIC taxonomy applies to a large dissemination and implementation initiative aimed at taking cardiac prevention to scale in primary care practice. Methods EvidenceNOW is an Agency for Healthcare Research and Quality initiative that funded seven cooperatives across seven regions in the USA. Cooperatives implemented multi-component interventions to improve heart health and build quality improvement capacity, and used a range of implementation strategies to foster practice change. We used ERIC to identify cooperatives’ implementation strategies and specified the actor, action, target, dose, temporality, justification, and expected outcome for each. We mapped and compiled a matrix of the specified ERIC strategies across the cooperatives, and used consensus to resolve mapping differences. We then grouped implementation strategies by outcomes and justifications, which led to insights regarding the use of and linkages between ERIC strategies in real-world scale-up efforts. Results Thirty-three ERIC strategies were used by cooperatives. We identified a range of revisions to the ERIC taxonomy to improve the practical application of these strategies. These proposed changes include revisions to four strategy names and 12 definitions. We suggest adding three new strategies because they encapsulate distinct actions that were not described in the existing ERIC taxonomy. In addition, we organized ERIC implementation strategies into four functional groupings based on the way we observed them being applied in practice. These groupings show how ERIC strategies are, out of necessity, interconnected, to achieve the work involved in rapidly taking evidence to scale. Conclusions Findings of our work suggest revisions to the ERIC implementation strategies to reflect their utilization in real-world dissemination and implementation efforts. The functional groupings of the ERIC implementation strategies that emerged from on-the-ground implementers will help guide others in choosing among and linking multiple implementation strategies when planning small- and large-scale implementation efforts. Trial registration Registered as Observational Study at www.clinicaltrials.gov (NCT02560428).
A systematic review of the use of the Consolidated Framework for Implementation Research
Background In 2009, Damschroder et al. developed the Consolidated Framework for Implementation Research (CFIR), which provides a comprehensive listing of constructs thought to influence implementation. This systematic review assesses the extent to which the CFIR’s use in implementation research fulfills goals set forth by Damschroder et al. in terms of breadth of use, depth of application, and contribution to implementation research. Methods We searched Scopus and Web of Science for publications that cited the original CFIR publication by Damschroder et al. (Implement Sci 4:50, 2009) and downloaded each unique result for review. After applying exclusion criteria, the final articles were empirical studies published in peer-reviewed journals that used the CFIR in a meaningful way (i.e., used the CFIR to guide data collection, measurement, coding, analysis, and/or reporting). A framework analysis approach was used to guide abstraction and synthesis of the included articles. Results Twenty-six of 429 unique articles (6%) met inclusion criteria. We found great breadth in CFIR application; the CFIR was applied across a wide variety of study objectives, settings, and units of analysis. There was also variation in the method of included studies (mixed methods (n = 13); qualitative (n = 10); quantitative (n = 3)). Depth of CFIR application revealed some areas for improvement. Few studies (n = 3) reported justification for selection of CFIR constructs used; the majority of studies (n = 14) used the CFIR to guide data analysis only; and few studies investigated any outcomes (n = 11). Finally, reflections on the contribution of the CFIR to implementation research were scarce. Conclusions Our results indicate that the CFIR has been used across a wide range of studies, though more in-depth use of the CFIR may help advance implementation science. To harness its potential, researchers should consider how to most meaningfully use the CFIR. Specific recommendations for applying the CFIR include explicitly justifying selection of CFIR constructs; integrating the CFIR throughout the research process (in study design, data collection, and analysis); and appropriately using the CFIR given the phase of implementation of the research (e.g., if the research is post-implementation, using the CFIR to link determinants of implementation to outcomes).