11,403 results for "Program Development - standards"
Process Evaluation of the BOOST-A™ Transition Planning Program for Adolescents on the Autism Spectrum: A Strengths-Based Approach
A process evaluation was conducted to determine the effectiveness, usability, and barriers and facilitators related to the Better OutcOmes & Successful Transitions for Autism (BOOST-A™), an online transition planning program. Adolescents on the autism spectrum (n = 33) and their parents (n = 39) provided feedback via an online questionnaire. Of these, 13 participants were interviewed to gain in-depth information about their experiences. Data were analyzed using descriptive statistics and thematic analysis. Four themes were identified: (i) taking action to overcome inertia, (ii) new insights that led to clear plans for the future, (iii) adolescent empowerment through a strengths focus, and (iv) having a champion to guide the way. The process evaluation revealed why BOOST-A™ was more beneficial to some participants than others. Trial registration: ACTRN12615000119594.
Performance criteria for verbal autopsy-based systems to estimate national causes of death: development and application to the Indian Million Death Study
Background Verbal autopsy (VA) has been proposed to determine the cause of death (COD) distributions in settings where most deaths occur without medical attention or certification. We develop performance criteria for VA-based COD systems and apply these to the Registrar General of India’s ongoing, nationally-representative Indian Million Death Study (MDS). Methods Performance criteria include a low ill-defined proportion of deaths before old age; reproducibility, including consistency of COD distributions with independent resampling; differences in COD distribution of hospital, home, urban or rural deaths; age-, sex- and time-specific plausibility of specific diseases; stability and repeatability of dual physician coding; and the ability of the mortality classification system to capture a wide range of conditions. Results The introduction of the MDS in India reduced the proportion of ill-defined deaths before age 70 years from 13% to 4%. The cause-specific mortality fractions (CSMFs) at ages 5 to 69 years for independently resampled deaths and the MDS were very similar across 19 disease categories. By contrast, CSMFs at these ages differed between hospital and home deaths and between urban and rural deaths. Thus, reliance mostly on urban or hospital data can distort national estimates of CODs. Age-, sex- and time-specific patterns for various diseases were plausible. Initial physician agreement on COD occurred about two-thirds of the time. The MDS COD classification system was able to capture more eligible records than alternative classification systems. By these metrics, the Indian MDS performs well for deaths prior to age 70 years. The key implication for low- and middle-income countries where medical certification of death remains uncommon is to implement COD surveys that randomly sample all deaths, use simple but high-quality field work with built-in resampling, and use electronic rather than paper systems to expedite field work and coding. 
Conclusions Simple criteria can evaluate the performance of VA-based COD systems. Despite the misclassification of VA, the MDS demonstrates that national surveys of CODs using VA are an order of magnitude better than the limited COD data previously available.
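The cause-specific mortality fractions (CSMFs) compared throughout this study reduce to a simple proportion: each cause's share of all classified deaths. A minimal sketch (function name and example causes are hypothetical, not MDS data):

```python
from collections import Counter

def csmf(causes):
    """Cause-specific mortality fractions: each cause's share of all deaths."""
    counts = Counter(causes)
    total = sum(counts.values())
    return {cause: n / total for cause, n in counts.items()}

# toy cause-of-death assignments, for illustration only
deaths = ["cardiovascular", "cancer", "cardiovascular", "injury"]
fractions = csmf(deaths)  # cardiovascular -> 0.5
```

Comparing two such dictionaries (e.g., resampled versus original deaths, or urban versus rural deaths) is what the resampling and plausibility checks above operate on.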
What Constitutes High-Quality Implementation of SEL Programs? A Latent Class Analysis of Second Step® Implementation
With the increased number of schools adopting social-emotional learning (SEL) programming, there is increased emphasis on the role of implementation in obtaining desired outcomes. Despite this, the current knowledge of the active ingredients of SEL programming is lacking, and there is a need to move from a focus on “whether” implementation matters to “what” aspects of implementation matter. To address this gap, the current study utilizes a latent class approach with data from year 1 of a randomized controlled trial of Second Step® (61 schools, 321 teachers, over 7300 students). Latent classes of implementation were identified, then used to predict student outcomes. Teachers reported on multiple dimensions of implementation (adherence, dosage, competency), as well as student outcomes. Observational data were also used to assess classroom behavior (academic engagement and disruptive behavior). Results suggest that a three-class model fits the data best, labeled as high-quality, low-engagement, and low-adherence classes. Only the low-engagement class showed significant associations with poorer outcomes, when compared to the high-quality class (not the low-adherence class). Findings are discussed in terms of implications for program development and implementation science more broadly.
Development, implementation and evaluation of a disaster training programme for nurses: A Switching Replications randomized controlled trial
Training efforts in disaster education need to provide updated knowledge, skills and expertise to nurses through evidence-based interventions. The purpose of the study was the development, implementation and evaluation of an educational programme for nurses regarding the provision of health care during disasters. A randomized controlled trial using a Switching Replications design was conducted to evaluate the programme. A total of 207 hospital-based nurses were randomly assigned to intervention (n = 112) and original control (n = 95) groups. Changes between groups and over time were measured by questionnaire and used as the outcome measure to demonstrate the effectiveness of the training intervention. The intervention improved nurses' knowledge and self-confidence levels, while no significant changes were detected in behavioral intentions. A significant increase in the mean knowledge score was observed in both groups at times 2 and 3 compared with time 1 [pre-test: 6.43 (2.8); post-test: 16.49 (1.7); follow-up test: 13.5 (2.8)] (P < 0.002). Changes in knowledge between the intervention and control groups were significantly different (P < 0.001), with a large effect size (eta-squared = 0.8). The training programme was feasible and effective in improving nurses' knowledge concerning disaster response.
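The eta-squared effect size reported above is the between-group sum of squares as a share of the total sum of squares. An illustrative computation with made-up scores (not the study's data):

```python
def eta_squared(groups):
    """Eta-squared for a one-way design: SS_between / SS_total."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    return ss_between / ss_total

# two well-separated toy groups give a large effect
effect = eta_squared([[1, 2, 3], [7, 8, 9]])  # ≈ 0.93
```

Values near 0.8, as reported here, indicate that most of the variance in knowledge scores is attributable to group membership.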
Validity and usefulness of members' reports of implementation progress in a quality improvement initiative: findings from the Team Check-up Tool (TCT)
Background Team-based interventions are effective for improving safety and quality of healthcare. However, contextual factors, such as team functioning, leadership, and organizational support, can vary significantly across teams and affect the level of implementation success. Yet, the science for measuring context is immature. The goal of this study is to validate measures from a short instrument tailored to track dynamic context and progress for a team-based quality improvement (QI) intervention. Methods Design: Secondary cross-sectional and longitudinal analysis of data from a clustered randomized controlled trial (RCT) of a team-based quality improvement intervention to reduce central line-associated bloodstream infection (CLABSI) rates in intensive care units (ICUs). Setting: Forty-six ICUs located within 35 faith-based, not-for-profit community hospitals across 12 states in the U.S. Population: Team members participating in an ICU-based QI intervention. Measures: The primary measure is the Team Check-up Tool (TCT), an original instrument that assesses context and progress of a team-based QI intervention. The TCT is administered monthly. Validation measures include CLABSI rate, Team Functioning Survey (TFS) and Practice Environment Scale (PES) from the Nursing Work Index. Analysis: Temporal stability, responsiveness and validity of the TCT. Results We found evidence supporting the temporal stability, construct validity, and responsiveness of TCT measures of intervention activities, perceived group-level behaviors, and barriers to team progress. Conclusions The TCT demonstrates good measurement reliability, validity, and responsiveness. By having more validated measures on implementation context, researchers can more readily conduct rigorous studies to identify contextual variables linked to key intervention and patient outcomes and strengthen the evidence base on successful spread of efficacious team-based interventions. 
QI teams participating in an intervention should also find data from a validated tool useful for identifying opportunities to improve their own implementation.
Meaningful patient and public involvement in digital health innovation, implementation and evaluation: A systematic review
Introduction The importance of meaningfully involving patients and the public in digital health innovation is widely acknowledged, but often poorly understood. This review, therefore, sought to explore how patients and the public are involved in digital health innovation and to identify factors that support and inhibit meaningful patient and public involvement (PPI) in digital health innovation, implementation and evaluation. Methods Searches were undertaken from 2010 to July 2020 in the electronic databases MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus and ACM Digital Library. Grey literature searches were also undertaken using the Patient Experience Library database and Google Scholar. Results Of the 10,540 articles identified, 433 were included. The majority of included articles were published in the United States, United Kingdom, Canada and Australia, with representation from 42 countries highlighting the international relevance of PPI in digital health. 112 topic areas where PPI had reportedly taken place were identified. Areas most often described included cancer (n = 50), mental health (n = 43), diabetes (n = 26) and long‐term conditions (n = 19). Interestingly, over 133 terms were used to describe PPI; few were explicitly defined. Patients were often most involved in the final, passive stages of an innovation journey, for example, usability testing, where the ability to proactively influence change was severely limited. Common barriers to achieving meaningful PPI included data privacy and security concerns, not involving patients early enough and lack of trust. Suggested enablers were often designed to counteract such challenges. Conclusions PPI is largely viewed as valuable and essential in digital health innovation, but rarely practised. Several barriers exist for both innovators and patients, which currently limits the quality, frequency and duration of PPI in digital health innovation, although improvements have been made in the past decade. 
Some reported barriers and enablers such as the importance of data privacy and security appear to be unique to PPI in digital innovation. Greater efforts should be made to support innovators and patients to become meaningfully involved in digital health innovations from the outset, given its reported benefits and impacts. Stakeholder consensus on the principles that underpin meaningful PPI in digital health innovation would be helpful in providing evidence‐based guidance on how to achieve this. Patient or Public Contribution This review has received extensive patient and public contributions with a representative from the Patient Experience Library involved throughout the review's conception, from design (including suggested revisions to the search strategy) through to article production and dissemination. Other areas of patient and public contributor involvement include contributing to the inductive thematic analysis process, refining the thematic framework and finalizing theme wording, helping to ensure relevance, value and meaning from a patient perspective. Findings from this review have also been presented to a variety of stakeholders including patients, patient advocates and clinicians through a series of focus groups and webinars. Given their extensive involvement, the representative from the Patient Experience Library is rightly included as an author of this review.
What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review
Background Implementation fidelity is critical to the internal and external validity of implementation research. Much of what is written about implementation fidelity addresses fidelity of evidence-informed interventions rather than fidelity of implementation strategies. The documentation and reporting of fidelity to implementation strategies requires attention. Therefore, in this scoping review, we identify the extent and quality of documentation and reporting of fidelity of implementation strategies that were used to implement evidence-informed interventions. Methods A six-stage methodological framework for scoping studies guided our work. Studies were identified from the outputs of the Effective Practice and Organisation of Care (EPOC) review group within the Cochrane Database of Systematic Reviews. EPOC’s primary focus, implementation strategies influencing provider behavior change, optimized our ability to identify articles for inclusion. We organized the retrieved articles from the systematic reviews by journal and selected the three journals with the largest number of retrieved articles. Using a data extraction tool, we organized retrieved article data from these three journals. In addition, we summarized implementation strategies using the EPOC categories. Data extraction pertaining to the quality of reporting the fidelity of implementation strategies was facilitated with an “Implementation Strategy Fidelity Checklist” based on definitions adapted from Dusenbury et al. We conducted inter-rater reliability checks for all of the independently scored articles. Using linear regression, we assessed fidelity scores in relation to publication year. Results Seventy-two implementation articles were included in the final analysis. No articles reported fidelity definitions or conceptual frameworks for fidelity.
The most frequently employed implementation strategies included distribution of educational materials (n = 35), audit and feedback (n = 32), and educational meetings (n = 25). Fidelity of implementation strategies was documented in 51 (71%) articles. Inter-rater reliability coefficients of the independent reviews for each component of fidelity were as follows: adherence = 0.85, dose = 0.89, and participant responsiveness = 0.96. The mean fidelity score was 2.6 (SD = 2.25). We noted a statistically significant decline in fidelity scores over time. Conclusions In addition to identifying the under-reporting of fidelity of implementation strategies in the health literature, we developed and tested a simple checklist to assess the reporting of fidelity of implementation strategies. More research is indicated to assess the definitions and scoring schema of this checklist. Careful reporting of details about fidelity of implementation strategies will make an important contribution to implementation science.
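The trend analysis described (fidelity score regressed on publication year) amounts to an ordinary least-squares slope, with a negative slope indicating decline. A self-contained sketch with invented data, not the review's scores:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# hypothetical fidelity checklist scores by publication year
years = [2005, 2007, 2009, 2011, 2013]
scores = [4.0, 3.5, 3.0, 2.5, 2.0]
slope = ols_slope(years, scores)  # -0.25: scores fall over time
```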
Observational measure of implementation progress in community based settings: The Stages of implementation completion (SIC)
Background An increasingly large body of research is focused on designing and testing strategies to improve knowledge about how to embed evidence-based programs (EBPs) into community settings. Development of strategies for overcoming barriers and increasing the effectiveness and pace of implementation is a high priority. Yet, there are few research tools that measure the implementation process itself. The Stages of Implementation Completion (SIC) is an observation-based measure used to track the time to achievement of key implementation milestones for an EBP being implemented in 53 sites across 51 counties (two counties have two sites) in two states in the United States. Methods The SIC was developed in the context of a randomized trial comparing the effectiveness of two implementation strategies: community development teams (experimental condition) and individualized implementation (control condition). Fifty-one counties were randomized to experimental or control conditions for implementation of Multidimensional Treatment Foster Care (MTFC), an alternative to group/residential care placement for children and adolescents. Progress through eight implementation stages was tracked by noting dates of completion of specific activities in each stage. Activities were tailored to the strategies for implementing the specific EBP. Results Preliminary data showed that several counties ceased progress during pre-implementation and that there was a high degree of variability among sites in duration scores per stage and in the proportion of activities completed in each stage. Progress through activities and stages for three example counties is shown. Conclusions By assessing the attainment time of each stage and the proportion of activities completed, the SIC measure can be used to track and compare the effectiveness of various implementation strategies.
Data from the SIC will provide sites with relevant information on the time and resources needed to implement MTFC during various phases of implementation. With some modifications, the SIC could be appropriate for use in evaluating implementation strategies in head-to-head randomized implementation trials and as a monitoring tool for rolling out other EBPs.
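The two SIC quantities described, time spent in a stage and the proportion of its activities completed, can be derived directly from activity completion dates. A sketch under an assumed data shape (all names are hypothetical, not the SIC's actual schema):

```python
from datetime import date

def stage_metrics(activities):
    """Return (duration in days, proportion completed) for one stage.

    `activities` maps activity name -> completion date, or None if not done.
    """
    done = [d for d in activities.values() if d is not None]
    proportion = len(done) / len(activities)
    duration = (max(done) - min(done)).days if done else None
    return duration, proportion

# toy stage: two of three activities completed
stage = {
    "readiness call": date(2024, 1, 5),
    "stakeholder meeting": date(2024, 2, 9),
    "funding plan": None,
}
duration_days, prop = stage_metrics(stage)  # 35 days, 2/3 completed
```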
Protocol for an intervention development and pilot implementation evaluation study of an e-health solution to improve newborn care quality and survival in two low-resource settings, Malawi and Zimbabwe: Neotree
Introduction Every year 2.4 million deaths occur worldwide in babies younger than 28 days. Approximately 70% of these deaths occur in low-resource settings because of failure to implement evidence-based interventions. Digital health technologies may offer an implementation solution. Since 2014, we have worked in Bangladesh, Malawi, Zimbabwe and the UK to develop and pilot Neotree: an Android app with accompanying data visualisation, linkage and export. Its low-cost hardware and state-of-the-art software are used to improve bedside postnatal care and to provide insights into population health trends, to impact wider policy and practice. Methods and analysis This is a mixed methods (1) intervention codevelopment and optimisation and (2) pilot implementation evaluation (including economic evaluation) study. Neotree will be implemented in two hospitals in Zimbabwe, and one in Malawi. Over the 2-year study period clinical and demographic newborn data will be collected via Neotree, in addition to behavioural science informed qualitative and quantitative implementation evaluation and measures of cost, newborn care quality and usability. Neotree clinical decision support algorithms will be optimised according to best available evidence and clinical validation studies. Ethics and dissemination This is a Wellcome Trust funded project (215742_Z_19_Z). Research ethics approvals have been obtained: Malawi College of Medicine Research and Ethics Committee (P.01/20/2909; P.02/19/2613); UCL (17123/001, 6681/001, 5019/004); Medical Research Council Zimbabwe (MRCZ/A/2570), BRTI and JREC institutional review boards (AP155/2020; JREC/327/19), Sally Mugabe Hospital Ethics Committee (071119/64; 250418/48). Results will be disseminated via academic publications and public and policy engagement activities. In this study, the care for an estimated 15 000 babies across three sites will be impacted. Trial registration number NCT0512707; Pre-results
Advancing implementation science through measure development and evaluation: a study protocol
Background Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: Identify Consolidated Framework for Implementation Research and Implementation Outcome Framework-linked measures that demonstrate both psychometric and pragmatic strength. Methods/design For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and complete a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data.
Discussion The study outputs of each aim are expected to have a positive impact as they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.
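Test-retest reliability of the kind planned in Aim 1 is commonly summarized as the correlation between two administrations of the same rating system. A minimal Pearson-correlation sketch with made-up ratings (not the study's instrument):

```python
def pearson_r(time1, time2):
    """Pearson correlation between two administrations of a rating."""
    n = len(time1)
    m1, m2 = sum(time1) / n, sum(time2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(time1, time2))
    s1 = sum((a - m1) ** 2 for a in time1) ** 0.5
    s2 = sum((b - m2) ** 2 for b in time2) ** 0.5
    return cov / (s1 * s2)

# hypothetical ratings at two time points; identical ordering -> r = 1.0
r = pearson_r([3, 4, 5, 2, 4], [4, 5, 6, 3, 5])
```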