75,616 result(s) for "Implementation"
The updated Consolidated Framework for Implementation Research based on user feedback
Background Many implementation efforts fail, even with highly developed plans for execution, because contextual factors can be powerful forces working against implementation in the real world. The Consolidated Framework for Implementation Research (CFIR) is one of the most commonly used determinant frameworks to assess these contextual factors; however, it has been over 10 years since publication and there is a need for updates. The purpose of this project was to elicit feedback from experienced CFIR users to inform updates to the framework. Methods User feedback was obtained from two sources: (1) a literature review with a systematic search; and (2) a survey of authors who used the CFIR in a published study. Data were combined across both sources and reviewed to identify themes; a consensus approach was used to finalize all CFIR updates. The VA Ann Arbor Healthcare System IRB declared this study exempt from the requirements of 38 CFR 16 based on category 2. Results The systematic search yielded 376 articles that contained the CFIR in the title and/or abstract and 334 unique authors with contact information; 59 articles included feedback on the CFIR. Forty percent (n = 134/334) of authors completed the survey. The CFIR received positive ratings on most framework sensibility items (e.g., applicability, usability), but respondents also provided recommendations for changes. Overall, updates to the CFIR include revisions to existing domains and constructs as well as the addition, removal, or relocation of constructs. These changes address important critiques of the CFIR, including better centering innovation recipients and adding determinants to equity in implementation. Conclusion The updates in the CFIR reflect feedback from a growing community of CFIR users. Although there are many updates, constructs can be mapped back to the original CFIR to ensure longitudinal consistency.
We encourage users to continue critiquing the CFIR, facilitating the evolution of the framework as implementation science advances.
Policy, geophilosophy and education
Education policy is premised on its instrumentalist approach. This instrumentalism is based on narrow assumptions concerning people (the subject), decision-making (power), problem-solving (science and methodology), and knowledge (epistemology). Policy, Geophilosophy and Education reconceptualises the object, and hence, the objectives, of education policy. Specifically, the book illustrates how education policy positions and constitutes objects and subjects through emergent policy arrangements that simultaneously influence how policy is sensed, embodied, and enacted. The book examines the disciplinary and multi-disciplinary approaches to education policy analysis over the last sixty years, and reveals how policy analysis constitutes the ontologies and epistemologies of policy. In order to reconceptualise policy, Policy, Geophilosophy and Education uses ideas of spatiality, affect and problematization from the disciplines of geography and philosophy. The book problematizes case-vignettes to illustrate the complex and often paradoxical relations between neo-liberal education policy, equity, and educational inequalities produced in the representational registers of race and ethnicity.
Multifactorial falls prevention programme compared with usual care in UK care homes for older people: multicentre cluster randomised controlled trial with economic evaluation
Objectives To determine the clinical and cost effectiveness of a multifactorial fall prevention programme compared with usual care in long term care homes. Design Multicentre, parallel, cluster randomised controlled trial. Setting Long term care homes in the UK, registered to care for older people or those with dementia. Participants 1657 consenting residents and 84 care homes; 39 care homes were randomised to the intervention group and 45 to usual care. Interventions Guide to Action for Care Homes (GtACH), a multifactorial fall prevention programme, or usual care. Main outcome measures The primary outcome measure was fall rate at 91-180 days after randomisation. The economic evaluation measured health related quality of life using quality adjusted life years (QALYs) derived from the five domain five level version of the EuroQoL index (EQ-5D-5L) or proxy version (EQ-5D-5L-P) and the Dementia Quality of Life utility measure (DEMQOL-U), which were self-completed by competent residents and by a care home staff member proxy (DEMQOL-P-U) for all residents (in case the ability to complete changed during the study) until 12 months after randomisation. Secondary outcome measures were falls at 1-90, 181-270, and 271-360 days after randomisation, Barthel index score, and the Physical Activity Measure-Residential Care Homes (PAM-RC) score at 91, 180, 270, and 360 days after randomisation. Results Mean age of residents was 85 years; 32% were men. GtACH training was delivered to 1051/1480 staff (71%). Primary outcome data were available for 630 participants in the GtACH group and 712 in the usual care group. The unadjusted incidence rate ratio for falls between 91 and 180 days was 0.57 (95% confidence interval 0.45 to 0.71, P<0.001) in favour of the GtACH programme (GtACH: six falls/1000 residents v usual care: 10 falls/1000). Barthel activities of daily living indices and PAM-RC scores were similar between groups at all time points.
The incremental cost was £108 (95% confidence interval −£271.06 to £487.58); incremental QALYs gained were 0.024 (95% confidence interval 0.004 to 0.044) for EQ-5D-5L-P and 0.005 (−0.019 to 0.03) for DEMQOL-P-U. The incremental costs per EQ-5D-5L-P and DEMQOL-P-U based QALY were £4544 and £20 889, respectively. Conclusions The GtACH programme was associated with a reduced fall rate and was cost effective, without a decrease in activity or increase in dependency. Trial registration ISRCTN34353836.
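The two cost-per-QALY figures above follow the standard incremental cost-effectiveness ratio (ICER) arithmetic: incremental cost divided by incremental QALYs. A minimal sketch using the rounded figures from the abstract (the reported £4544 and £20 889 were presumably computed from unrounded estimates, hence the small discrepancy):

```python
def icer(incremental_cost: float, incremental_qalys: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per QALY gained (GBP/QALY)."""
    return incremental_cost / incremental_qalys

# Rounded figures from the abstract; the paper's reported ICERs
# (£4544 and £20 889 per QALY) come from unrounded estimates.
eq5d_icer = icer(108, 0.024)    # EQ-5D-5L-P based, roughly £4500/QALY
demqol_icer = icer(108, 0.005)  # DEMQOL-P-U based, roughly £21 600/QALY
```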
Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions
Background A fundamental challenge of implementation is identifying contextual determinants (i.e., barriers and facilitators) and determining which implementation strategies will address them. Numerous conceptual frameworks (e.g., the Consolidated Framework for Implementation Research; CFIR) have been developed to guide the identification of contextual determinants, and compilations of implementation strategies (e.g., the Expert Recommendations for Implementing Change compilation; ERIC) have been developed which can support selection and reporting of implementation strategies. The aim of this study was to identify which ERIC implementation strategies would best address specific CFIR-based contextual barriers. Methods Implementation researchers and practitioners were recruited to participate in an online series of tasks involving matching specific ERIC implementation strategies to specific implementation barriers. Participants were presented with brief descriptions of barriers based on CFIR construct definitions. They were asked to rank up to seven implementation strategies that would best address each barrier. Barriers were presented in a random order, and participants had the option to respond to the barrier or skip to another barrier. Participants were also asked about considerations that most influenced their choices. Results Four hundred thirty-five invitations were emailed and 169 (39%) individuals participated. Respondents had considerable heterogeneity in opinions regarding which ERIC strategies best addressed each CFIR barrier. Across the 39 CFIR barriers, an average of 47 different ERIC strategies (SD = 4.8, range 35 to 55) was endorsed at least once for each, as being one of seven strategies that would best address the barrier. A tool was developed that allows users to specify high-priority CFIR-based barriers and receive a prioritized list of strategies based on endorsements provided by participants. 
Conclusions The wide heterogeneity of endorsements obtained in this study’s task suggests that there are relatively few consistent relationships between CFIR-based barriers and ERIC implementation strategies. Despite this heterogeneity, a tool aggregating endorsements across multiple barriers can support taking a structured approach to consider a broad range of strategies given those barriers. This study’s results point to the need for a more detailed evaluation of the underlying determinants of barriers and how these determinants are addressed by strategies as part of the implementation planning process.
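The prioritization tool described above can be sketched as a simple roll-up of expert endorsements: for each strategy, sum how often it was endorsed across the user's high-priority barriers, then rank. The barrier and strategy labels below are hypothetical placeholders, not actual CFIR constructs or ERIC strategy names:

```python
from collections import Counter

# Hypothetical endorsement counts: barrier -> {strategy: number of experts endorsing}
endorsements = {
    "barrier_A": {"identify_champions": 9, "audit_and_feedback": 12, "local_consensus": 4},
    "barrier_B": {"identify_champions": 11, "audit_and_feedback": 3, "educational_meetings": 7},
}

def prioritize(selected_barriers, data):
    """Sum endorsements for each strategy across the selected barriers, highest first."""
    totals = Counter()
    for barrier in selected_barriers:
        totals.update(data[barrier])
    return totals.most_common()

ranked = prioritize(["barrier_A", "barrier_B"], endorsements)
# identify_champions (9 + 11 = 20) outranks audit_and_feedback (12 + 3 = 15)
```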
Conceptualizing outcomes for use with the Consolidated Framework for Implementation Research (CFIR): the CFIR Outcomes Addendum
Background The challenges of implementing evidence-based innovations (EBIs) are widely recognized among practitioners and researchers. Context, broadly defined as everything outside the EBI, includes the dynamic and diverse array of forces working for or against implementation efforts. The Consolidated Framework for Implementation Research (CFIR) is one of the most widely used frameworks to guide assessment of contextual determinants of implementation. The original 2009 article invited critique in recognition for the need for the framework to evolve. As implementation science has matured, gaps in the CFIR have been identified and updates are needed. Our team is developing the CFIR 2.0 based on a literature review and follow-up survey with authors. We propose an Outcomes Addendum to the CFIR to address recommendations from these sources to include outcomes in the framework. Main text We conducted a literature review and surveyed corresponding authors of included articles to identify recommendations for the CFIR. There were recommendations to add both implementation and innovation outcomes from these sources. Based on these recommendations, we make conceptual distinctions between (1) anticipated implementation outcomes and actual implementation outcomes, (2) implementation outcomes and innovation outcomes, and (3) CFIR-based implementation determinants and innovation determinants. Conclusion An Outcomes Addendum to the CFIR is proposed. Our goal is to offer clear conceptual distinctions between types of outcomes for use with the CFIR, and perhaps other determinant implementation frameworks as well. These distinctions can help bring clarity as researchers consider which outcomes are most appropriate to evaluate in their research. We hope that sharing this in advance will generate feedback and debate about the merits of our proposed addendum.
iCHECK-DH: Guidelines and Checklist for the Reporting on Digital Health Implementations
Implementation of digital health technologies has grown rapidly, but many remain limited to pilot studies due to challenges, such as a lack of evidence or barriers to implementation. Overcoming these challenges requires learning from previous implementations and systematically documenting implementation processes to better understand the real-world impact of a technology and identify effective strategies for future implementation. A group of global experts, facilitated by the Geneva Digital Health Hub, developed the Guidelines and Checklist for the Reporting on Digital Health Implementations (iCHECK-DH, pronounced "I checked") to improve the completeness of reporting on digital health implementations. A guideline development group was convened to define key considerations and criteria for reporting on digital health implementations. To ensure the practicality and effectiveness of the checklist, it was pilot-tested by applying it to several real-world digital health implementations, and adjustments were made based on the feedback received. The guiding principle for the development of iCHECK-DH was to identify the minimum set of information needed to comprehensively define a digital health implementation, to support the identification of key factors for success and failure, and to enable others to replicate it in different settings. The result was a 20-item checklist with detailed explanations and examples in this paper. The authors anticipate that widespread adoption will standardize the quality of reporting and, indirectly, improve implementation standards and best practices. Guidelines for reporting on digital health implementations are important to ensure the accuracy, completeness, and consistency of reported information. This allows for meaningful comparison and evaluation of results, transparency, and accountability and informs stakeholder decision-making.
iCHECK-DH facilitates standardization of the way information is collected and reported, improving systematic documentation and knowledge transfer that can lead to the development of more effective digital health interventions and better health outcomes.
A Complex Digital Health Intervention to Support People With HIV: Organizational Readiness Survey Study and Preimplementation Planning for a Hybrid Effectiveness-Implementation Study
Evaluating implementation of digital health interventions (DHIs) in practice settings is complex, involving diverse users and multistep processes. Proactive planning can ensure implementation determinants and outcomes are captured for hybrid studies, but operational guidance for designing or planning hybrid DHI studies is limited. This study aimed to proactively define, prioritize, and operationalize measurement of implementation outcomes and determinants for a DHI hybrid effectiveness-implementation trial. We describe unique advantages and limitations of planning the trial implementation evaluation among a large-scale cohort study population and share results of a pretrial organizational readiness assessment. We planned a cluster-randomized, type II hybrid effectiveness-implementation trial testing PositiveLinks, a smartphone app for HIV care, compared to usual care (n=6 sites per arm), among HIV outpatient sites in the DC Cohort Longitudinal HIV Study in Washington, DC. We (1) defined components of the DHI and associated implementation strategy; (2) selected implementation science frameworks to accomplish evaluation aims; (3) mapped framework dimensions, domains, and constructs to implementation strategy steps; (4) modified or created instruments to collect data for implementation outcome measures and determinants; and (5) developed a compatible implementation science data collection and management plan. Provider baseline surveys administered at intervention sites probed usage of digital tools and assessed provider readiness for implementation with the Organizational Readiness to Implement Change tool. We specified DHI and implementation strategy toward planning measurement of DHI and broader program reach and adoption. 
Mapping of implementation strategy steps to the Reach Effectiveness Adoption Implementation Maintenance framework prompted considerations for how to capture understudied aspects of each dimension: denominators and demographic representativeness within reach or adoption, and provider or organization-level adaptations, dose, and fidelity within the implementation dimension. Our process also prompted the creation of tools to obtain detailed determinants across domains and constructs of the Consolidated Framework for Implementation Research within a large sample at multiple time points. Some aspects of real-world PositiveLinks implementation were not reflected within the planned hybrid trial (eg, research assistants selected as de facto site implementation leads) or were modified to preserve internal validity of effectiveness measurement (eg, "Community of Practice"). Providers and research assistants (n=17) at intervention sites self-reported high baseline use of digital tools to communicate with patients. Readiness assessment revealed high median (48, IQR 45-54) total Organizational Readiness to Implement Change scores, with research assistants scoring higher than physicians (52.5, IQR 44-55 vs 48.0, IQR 46-49). Key takeaways, challenges, and opportunities arose in planning the implementation evaluation within a hybrid DHI trial among a cohort population. Prospective trial planning must balance generalizability of implementation processes to "real world" conditions with rigorous procedures to measure intervention effectiveness. Rapid, scalable tools require further study to enable evaluations within large multisite hybrid studies.
Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research
Background Scientists and practitioners alike need reliable, valid measures of contextual factors that influence implementation. Yet, few existing measures demonstrate reliability or validity. To meet this need, we developed and assessed the psychometric properties of measures of several constructs within the Inner Setting domain of the Consolidated Framework for Implementation Research (CFIR). Methods We searched the literature for existing measures for the 7 Inner Setting domain constructs (Culture Overall, Culture Stress, Culture Effort, Implementation Climate, Learning Climate, Leadership Engagement, and Available Resources). We adapted items for the healthcare context, pilot-tested the adapted measures in 4 Federally Qualified Health Centers (FQHCs), and implemented the revised measures in 78 FQHCs in 7 states (N = 327 respondents) with a focus on colorectal cancer (CRC) screening practices. To psychometrically assess our measures, we conducted confirmatory factor analyses (CFA; structural validity), assessed inter-item consistency (reliability), computed scale correlations (discriminant validity), and calculated inter-rater reliability and agreement (organization-level construct reliability and validity). Results CFAs for most constructs exhibited good model fit (CFI > 0.90, TLI > 0.90, SRMR < 0.08, RMSEA < 0.08), with almost all factor loadings exceeding 0.40. Scale reliabilities ranged from good (0.7 ≤ α < 0.9) to excellent (α ≥ 0.9). Scale correlations fell below 0.90, indicating discriminant validity. Inter-rater reliability and agreement were sufficiently high to justify measuring constructs at the clinic-level. Conclusions Our findings provide psychometric evidence in support of the CFIR Inner Setting measures. Our findings also suggest the Inner Setting measures from individuals can be aggregated to represent the clinic-level.
Measurement of the Inner Setting constructs can be useful in better understanding and predicting implementation in FQHCs and can be used to identify targets of strategies to accelerate and enhance implementation efforts in FQHCs.
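The reliability thresholds cited above (good: 0.7 ≤ α < 0.9; excellent: α ≥ 0.9) refer to Cronbach's alpha, an internal-consistency coefficient computed from item-level scores. A minimal sketch; the toy data are invented for illustration and are not from the study:

```python
import statistics

def cronbach_alpha(item_scores):
    """Internal-consistency reliability of a multi-item scale.
    item_scores: one list of respondent scores per item (equal lengths).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores)
    sum_item_vars = sum(statistics.variance(scores) for scores in item_scores)
    totals = [sum(responses) for responses in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_vars / statistics.variance(totals))

# Invented 3-item, 4-respondent toy data (strongly inter-correlated items):
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8], [1, 2, 3, 4]])
```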
Psychometric assessment of three newly developed implementation outcome measures
Background Implementation outcome measures are essential for monitoring and evaluating the success of implementation efforts. Yet, currently available measures lack conceptual clarity and have largely unknown reliability and validity. This study developed and psychometrically assessed three new measures: the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM). Methods Thirty-six implementation scientists and 27 mental health professionals assigned 31 items to the constructs and rated their confidence in their assignments. The Wilcoxon one-sample signed rank test was used to assess substantive and discriminant content validity. Exploratory and confirmatory factor analysis (EFA and CFA) and Cronbach alphas were used to assess the validity of the conceptual model. Three hundred twenty-six mental health counselors read one of six randomly assigned vignettes depicting a therapist contemplating adopting an evidence-based practice (EBP). Participants used 15 items to rate the therapist’s perceptions of the acceptability, appropriateness, and feasibility of adopting the EBP. CFA and Cronbach alphas were used to refine the scales, assess structural validity, and assess reliability. Analysis of variance (ANOVA) was used to assess known-groups validity. Finally, half of the counselors were randomly assigned to receive the same vignette and the other half the opposite vignette, and all were asked to re-rate acceptability, appropriateness, and feasibility. Pearson correlation coefficients were used to assess test-retest reliability and linear regression to assess sensitivity to change. Results All but five items exhibited substantive and discriminant content validity. A trimmed CFA with five items per construct exhibited acceptable model fit (CFI = 0.98, RMSEA = 0.08) and high factor loadings (0.79 to 0.94). The alphas for 5-item scales were between 0.87 and 0.89.
Scale refinement based on measure-specific CFAs and Cronbach alphas using vignette data produced 4-item scales (α’s from 0.85 to 0.91). A three-factor CFA exhibited acceptable fit (CFI = 0.96, RMSEA = 0.08) and high factor loadings (0.75 to 0.89), indicating structural validity. ANOVA showed significant main effects, indicating known-groups validity. Test-retest reliability coefficients ranged from 0.73 to 0.88. Regression analysis indicated each measure was sensitive to change in both directions. Conclusions The AIM, IAM, and FIM demonstrate promising psychometric properties. Predictive validity assessment is planned.
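The test-retest coefficients reported above (0.73 to 0.88) are Pearson correlations between scale scores from the two administrations. A minimal sketch, with invented ratings standing in for the study's data:

```python
import statistics

def pearson_r(time1, time2):
    """Test-retest reliability: Pearson correlation of time-1 vs time-2 scores."""
    mx, my = statistics.fmean(time1), statistics.fmean(time2)
    cov = sum((a - mx) * (b - my) for a, b in zip(time1, time2))
    sx = sum((a - mx) ** 2 for a in time1) ** 0.5
    sy = sum((b - my) ** 2 for b in time2) ** 0.5
    return cov / (sx * sy)

# Invented 5-respondent scale scores at two administrations:
r = pearson_r([3.0, 4.2, 2.5, 4.8, 3.6], [3.2, 4.0, 2.8, 4.6, 3.5])
```

Stable ratings across administrations push r toward 1; the study's 0.73-0.88 range indicates good but imperfect stability.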