48 results for "Haines, Emily R."
Criteria for selecting implementation science theories and frameworks: results from an international survey
Background: Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. Methods: We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Results: Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Conclusions: Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. Variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.
Advancing understanding and identifying strategies for sustaining evidence-based practices: a review of reviews
Background: Implementation science has focused mainly on the initial uptake and use of evidence-based practices (EBPs), with less attention to sustainment—i.e., continuous use of these practices, as intended, over time in ongoing operations, often involving adaptation to dynamic contexts. Declining EBP use following implementation is well-documented yet poorly understood. Using theories, models, and frameworks (TMFs) to conceptualize sustainment could advance understanding. We consolidated knowledge from published reviews of sustainment studies to identify TMFs with the potential to conceptualize sustainment, evaluate past uses of TMFs in sustainment studies, and assess the TMFs' potential contribution to developing sustainment strategies. Methods: We drew upon reviews of sustainment studies published within the past 10 years, evaluated the frequency with which included articles used a TMF for conceptualizing sustainment, and evaluated the relevance of TMFs to sustainment research using the Theory, Model, and Framework Comparison and Selection Tool (T-CaST). Specifically, we examined whether the TMFs were familiar to researchers, hypothesized relationships among constructs, provided a face-valid explanation of relationships, and included sustainment as an outcome. Findings: Nine sustainment reviews referenced 648 studies; these studies cited 76 unique TMFs. Only 28 TMFs were used in more than one study. Of the 19 TMFs that met the criteria for T-CaST analysis, six TMFs explicitly included sustainment as the outcome of interest, 12 offered face-valid explanations of proposed conceptual relationships, and six identified mechanisms underlying relationships between included constructs and sustainment. Only 11 TMFs performed adequately with respect to all these criteria. Conclusions: We identified 76 TMFs that have been used in sustainment studies. Of these, most were only used once, contributing to a fractured understanding of sustainment. Improved reporting and use of TMFs may improve understanding of this critical topic. Of the more consistently used TMFs, few proposed face-valid relationships between included constructs and sustainment, limiting their ability to advance our understanding and identify potential sustainment strategies. Future research is needed to explore the TMFs that we identified as potentially relevant, as well as TMFs not identified in our study that nonetheless have the potential to advance our understanding of sustainment and identification of strategies for sustaining EBP use.
T-CaST: an implementation theory comparison and selection tool
Background: Theories, models, and frameworks (TMF) are foundational for generalizing implementation efforts and research findings. However, TMF and the criteria used to select them are not often described in published articles, perhaps due in part to the challenge of selecting from among the many TMF that exist in the field. The objective of this international study was to develop a user-friendly tool to help scientists and practitioners select appropriate TMF to guide their implementation projects. Methods: Implementation scientists across the USA, the UK, and Canada identified and rated conceptually distinct categories of criteria in a concept mapping exercise. We then used the concept mapping results to develop a tool to help users select appropriate TMF for their projects. We assessed the tool's usefulness through expert consensus and cognitive and semi-structured interviews with implementation scientists. Results: Thirty-seven implementation scientists (19 researchers and 18 practitioners) identified four criteria domains: usability, testability, applicability, and familiarity. We then developed a prototype of the tool that included a list of 25 criteria organized by domain, definitions of the criteria, and a case example illustrating an application of the tool. Results of cognitive and semi-structured interviews highlighted the need for the tool to (1) be as succinct as possible; (2) have separate versions to meet the unique needs of researchers versus practitioners; (3) include easily understood terms; (4) include an introduction that clearly describes the tool's purpose and benefits; (5) provide space for noting project information, comparing and scoring TMF, and accommodating contributions from multiple team members; and (6) include more case examples illustrating its application. Interview participants agreed that the tool (1) offered them a way to select from among candidate TMF, (2) helped them be explicit about the criteria that they used to select a TMF, and (3) enabled them to compare, select from among, and/or consider the usefulness of combining multiple TMF. These revisions resulted in the Theory Comparison and Selection Tool (T-CaST), a paper and web-enabled tool that includes 16 specific criteria that can be used to consider and justify the selection of TMF for a given project. Criteria are organized within four categories: applicability, usability, testability, and acceptability. Conclusions: T-CaST is a user-friendly tool to help scientists and practitioners select appropriate TMF to guide implementation projects. Additionally, T-CaST has the potential to promote transparent reporting of criteria used to select TMF within and beyond the field of implementation science.
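As a rough sketch of the kind of criterion-by-criterion comparison T-CaST supports, the Python snippet below tabulates hypothetical scores for two candidate TMF against placeholder criteria grouped under the four categories named above (applicability, usability, testability, acceptability). The criteria wording, the 0-2 scale, and the TMF names are illustrative assumptions, not the actual 16 T-CaST items.

```python
# Illustrative sketch only: tabulating scores for candidate TMF against selection
# criteria grouped by the four T-CaST categories. The criteria below and the 0-2
# scoring scale are placeholders, not the actual T-CaST items.
from collections import defaultdict

criteria_by_category = {
    "applicability": ["addresses our outcome of interest", "matches our analytic level"],
    "usability": ["understandable to the team", "guidance exists for applying it"],
    "testability": ["proposes testable relationships", "has empirical support"],
    "acceptability": ["familiar to collaborators", "used previously in our field"],
}

# Hypothetical scores: 0 = no, 1 = partially, 2 = yes.
scores = {
    "TMF A": {c: 2 for cs in criteria_by_category.values() for c in cs},
    "TMF B": {
        "addresses our outcome of interest": 1, "matches our analytic level": 2,
        "understandable to the team": 1, "guidance exists for applying it": 0,
        "proposes testable relationships": 2, "has empirical support": 1,
        "familiar to collaborators": 0, "used previously in our field": 1,
    },
}

def category_totals(tmf: str) -> dict:
    """Sum one TMF's scores within each category to support side-by-side comparison."""
    totals = defaultdict(int)
    for category, criteria in criteria_by_category.items():
        for criterion in criteria:
            totals[category] += scores[tmf].get(criterion, 0)
    return dict(totals)

for tmf in scores:
    print(tmf, category_totals(tmf))
```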
Longitudinal Monitoring of Clinician-Patient Video Visits During the Peak of the COVID-19 Pandemic: Adoption and Sustained Challenges in an Integrated Health Care Delivery System
Numerous prior opinion papers, administrative electronic health record data studies, and cross-sectional surveys of telehealth during the pandemic have been published, but none have combined assessments of video visit success monitoring with longitudinal assessments of perceived challenges to the rapid adoption of video visits during the pandemic. This study aims to quantify (1) the use of video visits (compared with in-person and telephone visits) over time during the pandemic, (2) video visit successful connection rates, and (3) changes in perceived video visit challenges. A web-based survey was developed for the dual purpose of monitoring and improving video visit implementation in our health care system during the COVID-19 pandemic. The survey included questions regarding rates of in-person, telephone, and video visits for clinician-patient encounters; the rate of successful connection for video visits; and perceived challenges to video visits (eg, software, hardware, bandwidth, and technology literacy). The survey was distributed via email to physicians, advanced practice professionals, and clinicians in May 2020. The survey was repeated in March 2021. Differences between the 2020 and 2021 responses were adjusted for within-respondent correlation across surveys and tested using generalized estimating equations. A total of 1126 surveys were completed (511 surveys in 2020 and 615 surveys in 2021). In 2020, only 21.7% (73/336) of clinicians reported no difficulty connecting with patients during video visits, and 28.6% (93/325) of clinicians reported no difficulty in 2021. The distribution of the percentage of successfully connected video visits ("Over the past two weeks of scheduled visits, what percentage did you successfully connect with patients by video?") was not significantly different between 2020 and 2021 (P=.74). Challenges in conducting video visits persisted over time. Poor connectivity was the most common challenge reported by clinicians. This response increased over time, with 30.5% (156/511) selecting it as a challenge in 2020 and 37.1% (228/615) in 2021 (P=.01). Patients not having access to their electronic health record portals was also a commonly reported challenge (109/511, 21.3% in 2020 and 137/615, 22.3% in 2021, P=.73). During the pandemic, our health care delivery system rapidly adopted synchronous patient-clinician communication using video visits. As experience with video visits increased, the reported failure rate did not significantly decline, and clinicians continued to report challenges related to general network connectivity and patient access to technology.
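The 2020 versus 2021 comparisons described above were tested with generalized estimating equations adjusted for within-respondent correlation. As a rough, hypothetical sketch of that kind of analysis (not the study's code), the snippet below fits a binomial GEE to toy long-format survey data; the variable names, the toy values, and the exchangeable working correlation are assumptions.

```python
# Hypothetical sketch of a binomial GEE comparing a binary survey response across
# two waves while accounting for repeated responses from the same clinician.
# Toy data and variable names are illustrative; this is not the study's analysis code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format toy data: one row per respondent per survey wave.
df = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "year": ["2020", "2021"] * 6,
    # 1 = reported poor connectivity as a challenge, 0 = did not
    "poor_connectivity": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1],
})

# Exchangeable working correlation groups the two responses from each clinician.
model = smf.gee(
    "poor_connectivity ~ year",
    groups="respondent_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
# The year[T.2021] coefficient (a log odds ratio) tests the 2020 vs. 2021 difference.
print(result.summary())
```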
Harmonizing evidence-based practice, implementation context, and implementation strategies with user-centered design: a case example in young adult cancer care
Background: Attempting to implement evidence-based practices in contexts for which they are not well suited may compromise their fidelity and effectiveness or burden users (e.g., patients, providers, healthcare organizations) with elaborate strategies intended to force implementation. To improve the fit between evidence-based practices and contexts, implementation science experts have called for methods for adapting evidence-based practices and contexts and tailoring implementation strategies; yet, methods for considering the dynamic interplay among evidence-based practices, contexts, and implementation strategies remain lacking. We argue that harmonizing the three can be facilitated by user-centered design, an iterative and highly stakeholder-engaged set of principles and methods. Methods: This paper presents a case example in which we used a three-phase user-centered design process to design and plan to implement a care coordination intervention for young adults with cancer. Specifically, we used usability testing to redesign and augment an existing patient-reported outcome measure that served as the basis for our intervention to optimize its usability and usefulness, ethnographic contextual inquiry to prepare the context (i.e., a comprehensive cancer center) to promote receptivity to implementation, and iterative prototyping workshops with a multidisciplinary design team to design the care coordination intervention and anticipate implementation strategies needed to enhance contextual fit. Results: Our user-centered design process resulted in the Young Adult Needs Assessment and Service Bridge (NA-SB), including a patient-reported outcome measure and a collection of referral pathways that are triggered by the needs young adults report, as well as implementation guidance. By ensuring NA-SB directly responded to features of users and context, we designed NA-SB for implementation, potentially minimizing the strategies needed to address misalignment that may have otherwise existed. Furthermore, we designed NA-SB for scale-up; by engaging users from other cancer programs across the country to identify points of contextual variation which would require flexibility in delivery, we created a tool intended to accommodate diverse contexts. Conclusions: User-centered design can help maximize usability and usefulness when designing evidence-based practices, preparing contexts, and informing implementation strategies—in effect, harmonizing evidence-based practices, contexts, and implementation strategies to promote implementation and effectiveness.
From Mandate to Meaning: A Health Equity Implementation Framework and Knowledge-to-Action-Informed Qualitative Study of Health-Related Social Needs Implementation
Background: Health-related social needs (HRSNs), including food insecurity, housing instability, transportation barriers, and financial strain, are increasingly recognized as critical to patient-centered care. Despite growing mandates and incentives to integrate HRSN screening and referral into routine clinical workflows, healthcare systems face significant challenges in implementing HRSN screening and referral processes at scale. Objectives: This study explores the early implementation of HRSN screening and referral across a multistate healthcare system, using the Health Equity Implementation Framework (HEIF) and Knowledge-to-Action (KTA) Framework to examine multilevel barriers and facilitators. Design: Qualitative descriptive design. Methods: Semi-structured interviews (n = 23) were conducted with healthcare leaders, navigators, clinicians, and community health workers (CHWs), eliciting their experiences with leading and implementing HRSN screening. Results: Findings reveal that many frontline staff (including clinicians, navigators, and CHWs) reported disjointed workflows, unclear referral roles, and limited communication related to HRSN implementation. They also reported distress when screening occurred without available resources to address identified needs. CHWs described their pivotal but under-integrated roles, serving as relational and cultural bridges between health systems and communities. All participant cohorts identified organizational and interpersonal misalignments between implementation mandates and on-the-ground realities. Suggested strategies for improvement included role-specific training, participatory design, improved integration of CHWs into care teams, feedback loops, and locally adapted referral protocols. Conclusion: These findings reinforce the value of frontline staff knowledge and experience in ensuring robust implementation of HRSN screening and referral. Aligning system-level priorities with the complex realities of care delivery is essential for realizing the promise of HRSN screening as a tool for health equity.
Applying cognitive walkthrough methodology to improve the usability of an equity-focused implementation strategy
Background: Our research team partnered with primary care and quality improvement staff in Federally Qualified Community Health Centers (CHCs) to develop Partnered and Equity Data-Driven Implementation (PEDDI) to promote equitable implementation of evidence-based interventions. The current study used a human-centered design methodology to evaluate the usability of PEDDI and generate redesign solutions to address usability issues in the context of a cancer screening intervention. Methods: We applied the Cognitive Walkthrough for Implementation Strategies (CWIS), a pragmatic assessment method with steps that include group testing with end users to identify and prioritize usability problems. We conducted three facilitated 60-min CWIS sessions with end users (N = 7) from four CHCs that included scenarios and related tasks for implementing a colorectal cancer (CRC) screening intervention. Participants rated the likelihood of completing each task, identified usability issues, and generated ideas for redesign solutions during audio-recorded CWIS sessions. Participants completed a pre-post survey of PEDDI usability. Our research team used consensus coding to synthesize usability problems and redesign solutions from transcribed CWIS sessions. Results: Usability ratings (scale 0–100, with higher scores indicating higher usability) of PEDDI averaged 66.3 (SD = 12.4) prior to the CWIS sessions. Scores averaged 77.8 (SD = 9.1) following the three CWIS sessions, improving usability ratings from "marginal acceptability" to "acceptable". Ten usability problems were identified across four PEDDI tasks, comprising 2–3 types of usability problems per task. CWIS participants suggested redesign solutions that included making data fields for social determinants of health and key background variables for identifying health equity targets mandatory in the electronic health record, and using asynchronous communication tools to elicit ideas from staff for adaptations. Conclusions: Usability ratings indicated PEDDI was in the acceptable range following CWIS sessions. Staff identified usability problems and redesign solutions that provide direction for future improvements in PEDDI. In addition, this study highlights opportunities to use the CWIS methodology to address inequities in the implementation of cancer screening and other clinical innovations in resource-constrained healthcare settings.
A case study of a theory-based method for identifying and reporting core functions and forms of evidence-based interventions
Adaptation of existing evidence-based interventions (EBIs) to improve their fit in new contexts is common. A critical first step in adaptation is to identify core functions (purposes) and forms (activities) of EBIs. Core functions should not be adapted, as they are what account for the efficacy of EBIs. Despite their importance, core functions are rarely identified by EBI developers; methods for identifying them post hoc are lacking. We present a case study of theory-based methods for identifying core functions and forms post hoc. We developed these methods as the first step in a larger effort to adapt an existing EBI for improving the timeliness of hospice referrals to a new patient population and care setting. Our methods were rooted in the Planned Adaptation Model (PAM). Through our case study, we developed six steps for identifying core functions and forms, as well as accompanying tools and methods. Our case study further operationalized PAM in several ways. Where PAM offered guiding tenets for identifying core functions and forms (review existing EBI materials, conduct primary data collection, and identify the theory of change), we produced specific tools (interview guides and codebooks) and methods (sampling approaches and analytic methods). Our case study extended PAM with the addition of two steps in the process of identifying core functions and forms: (a) identifying the usual care pathway, including barriers to the outcome of interest encountered in usual care, and (b) mapping EBI core functions onto an extant theory. Identifying core functions and forms is a critical first step in the adaptation process to ensure adaptations do not inadvertently undermine the efficacy or effectiveness of the EBI by compromising core functions. Our case study presents step-by-step methods that could be used by researchers or practitioners to identify core functions and forms post hoc.
An actionable needs assessment for adolescents and young adults with cancer: the AYA Needs Assessment & Service Bridge (NA-SB)
Purpose: In the USA, many of the nearly 90,000 adolescents and young adults (AYAs) diagnosed with cancer each year do not receive services to address the full scope of needs they experience during and after cancer treatment. To facilitate a systematic and patient-centered approach to delivering services to address the unmet needs of AYAs with cancer, we developed the AYA Needs Assessment & Service Bridge (NA-SB). Methods: To develop NA-SB, we leveraged user-centered design, an iterative process for intervention development based on prospective user (i.e., provider and AYA) engagement. Specifically, we conducted usability testing and concept mapping to refine an existing tool—the Cancer Needs Questionnaire-Young People—to promote its usability and usefulness in routine cancer practice. Results: Our user-centered design process yielded a needs assessment which assesses AYAs' physical, psychosocial, and practical needs. Importantly, needs in the assessment are grouped by the services expected to address them, creating an intuitive and actionable link between needs and services. Conclusion: NA-SB has the potential to improve care coordination at the individual level by allowing cancer care programs to tailor service delivery and resource provision to the individual needs of the AYAs they serve.
Ethnography and user-centered design to inform context-driven implementation
Despite pervasive findings pointing to its inextricable role in intervention implementation, context remains poorly understood in implementation science. Existing approaches for describing context (e.g., surveys, interviews) may be narrow in scope or superficial in their elicitation of contextual data. Thus, in-depth and multilevel approaches are needed to meaningfully describe the contexts into which interventions will be implemented. Moreover, many studies assess context without subsequently using contextual information to enhance implementation. To be useful for improving implementation, though, methods are needed to apply contextual information during implementation. In the case example presented in this paper, we embedded an ethnographic assessment of context within a user-centered design approach to describe implementation context and apply that information to promote implementation. We developed a patient-reported outcome measure-based clinical intervention to assess and address the pervasive unmet needs of young adults with cancer: the Needs Assessment & Service Bridge (NA-SB). In this paper, we describe the user-centered design process that we used to anticipate context modifications needed to deliver NA-SB and implementation strategies needed to facilitate its implementation. Our ethnographic contextual inquiry yielded a rich understanding of local implementation context and contextual variation across potential scale-up contexts. Other methods from user-centered design (i.e., translation tables and a design team prototyping workshop) allowed us to translate that information into specifications for NA-SB delivery and a plan for implementation. Embedding ethnographic methods within a user-centered design approach can help us to tailor interventions and implementation strategies to their contexts of use to promote implementation. Lay Summary: The field of implementation science studies how to better integrate research evidence into practice. To accomplish this integration, it is important to understand the contexts into which interventions are being implemented. For example, implementation may be influenced by contextual factors such as patient/provider beliefs about an intervention, budget constraints, leadership buy-in, an organization's readiness to change, and many others. Understanding these factors upfront can allow us to adapt interventions to better suit context (e.g., tailoring intervention content to patients' needs), change context to make it more ready for implementation (e.g., changing provider workflow to accommodate an intervention), and anticipate strategies that may be needed to implement an intervention (e.g., delivering training on the intervention to providers). To do this, the field of implementation science is in need of methods for assessing context and using that information to improve implementation. In this paper, we present several methods, including ethnography and methods from user-centered design, for using context to inform implementation efforts. Using a case example in young adult cancer care, we describe novel methods for assessing the context in which an intervention will be used and leveraging that information to improve implementation of the intervention.