324,173 results for "improvements"
SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process
Since the publication of Standards for QUality Improvement Reporting Excellence (SQUIRE 1.0) guidelines in 2008, the science of the field has advanced considerably. In this manuscript, we describe the development of SQUIRE 2.0 and its key components. We undertook the revision between 2012 and 2015 using (1) semistructured interviews and focus groups to evaluate SQUIRE 1.0 plus feedback from an international steering group, (2) two face-to-face consensus meetings to develop interim drafts and (3) pilot testing with authors and a public comment period. SQUIRE 2.0 emphasises the reporting of three key components of systematic efforts to improve the quality, value and safety of healthcare: the use of formal and informal theory in planning, implementing and evaluating improvement work; the context in which the work is done; and the study of the intervention(s). SQUIRE 2.0 is intended for reporting the range of methods used to improve healthcare, recognising that they can be complex and multidimensional. It provides common ground to share these discoveries in the scholarly literature (http://www.squire-statement.org).
Explanation and elaboration of the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines, V.2.0: examples of SQUIRE elements in the healthcare improvement literature
Since its publication in 2008, SQUIRE (Standards for Quality Improvement Reporting Excellence) has contributed to the completeness and transparency of reporting of quality improvement work, providing guidance to authors and reviewers of reports on healthcare improvement work. In the interim, enormous growth has occurred in understanding factors that influence the success, and failure, of healthcare improvement efforts. Progress has been particularly strong in three areas: the understanding of the theoretical basis for improvement work; the impact of contextual factors on outcomes; and the development of methodologies for studying improvement work. Consequently, there is now a need to revise the original publication guidelines. To reflect the breadth of knowledge and experience in the field, we solicited input from a wide variety of authors, editors and improvement professionals during the guideline revision process. This Explanation and Elaboration document (E&E) is a companion to the revised SQUIRE guidelines, SQUIRE 2.0. The product of collaboration by an international and interprofessional group of authors, this document provides examples from the published literature, and an explanation of how each reflects the intent of a specific item in SQUIRE. The purpose of the guidelines is to assist authors in writing clearly, precisely and completely about systematic efforts to improve the quality, safety and value of healthcare services. Authors can explore the SQUIRE statement, this E&E and related documents in detail at http://www.squire-statement.org.
The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement
Background: Quality improvement (QI) efforts have become widespread in healthcare; however, there is significant variability in their success. Differences in context are thought to be responsible for some of the variability seen. Objective: To develop a conceptual model that can be used by organisations and QI researchers to understand and optimise contextual factors affecting the success of a QI project. Methods: 10 QI experts were provided with the results of a systematic literature review and then participated in two rounds of opinion gathering to identify and define important contextual factors. The experts subsequently met in person to identify relationships among factors and to begin to build the model. Results: The Model for Understanding Success in Quality (MUSIQ) is organised based on the level of the healthcare system and identifies 25 contextual factors likely to influence QI success. Contextual factors within microsystems and those related to the QI team are hypothesised to directly shape QI success, whereas factors within the organisation and external environment are believed to influence success indirectly. Conclusions: The MUSIQ framework has the potential to guide the application of QI methods in healthcare and focus research. The specificity of MUSIQ and the explicit delineation of relationships among factors allows a deeper understanding of the mechanism of action by which context influences QI success. MUSIQ also provides a foundation to support further studies to test and refine the theory and advance the field of QI science.
Exploring the sustainability of quality improvement interventions in healthcare organisations: a multiple methods study of the 10-year impact of the ‘Productive Ward: Releasing Time to Care’ programme in English acute hospitals
Background: The ‘Productive Ward: Releasing Time to Care’ programme is a quality improvement (QI) intervention introduced in English acute hospitals a decade ago to: (1) increase time nurses spend in direct patient care; (2) improve safety and reliability of care; (3) improve experience for staff and patients; (4) make changes to physical environments to improve efficiency. Objective: To explore how timing of adoption, local implementation strategies and processes of assimilation into day-to-day practice relate to one another and shape any sustained impact and wider legacies of a large-scale QI intervention. Design: Multiple methods within six hospitals, including 88 interviews (with Productive Ward leads, ward staff, Patient and Public Involvement representatives and senior managers), 10 ward manager questionnaires and structured observations on 12 randomly selected wards. Results: Resource constraints and a managerial desire for standardisation meant that, over time, there was a shift away from the original vision of empowering ward staff to take ownership of Productive Ward towards a range of implementation ‘short cuts’. Nonetheless, material legacies (eg, displaying metrics data; storage systems) have remained in place for up to a decade after initial implementation, as have some specific practices (eg, protected mealtimes). Variations in timing of adoption, local implementation strategies and contextual changes influenced assimilation into routine practice and subsequent legacies. Productive Ward has informed wider organisational QI strategies that remain in place today and developed lasting QI capabilities among those meaningfully involved in its implementation. Conclusions: As an ongoing QI approach, Productive Ward has not been sustained, but it has informed contemporary organisational QI practices and strategies. Judgements about the long-term sustainability of QI interventions should consider the evolutionary and adaptive nature of change processes.
Understanding the conditions for improvement: research to discover which context influences affect improvement success
Context can be defined as all factors that are not part of a quality improvement intervention itself. A growing body of research indicates which aspects act as ‘conditions for improvement’ that influence improvement success. However, little is known about which conditions are most important, whether these differ for different quality interventions, or whether some become more or less important at different points in carrying out an improvement. Knowing more about these conditions could help speed up and spread improvements and develop the science. This paper proposes ways to build knowledge about the conditions needed for different changes, and to create conditional-attribution explanations that provide qualified generalisations. It describes theory-based, non-experimental research designs. It also suggests that ‘practical improvers’ can make their changes more effective by reflecting on and revising their own ‘assumption-theories’ about the conditions that will help and hinder the improvements they aim to implement.
How Does Professional Development Improve Teaching?
Professional development programs are based on different theories of how students learn and different theories of how teachers learn. Reviewers often sort programs according to design features such as program duration, intensity, or the use of specific techniques such as coaches or online lessons, but these categories do not illuminate the programs' underlying purpose or premises about teaching and teacher learning. This review sorts programs according to their underlying theories of action, which include (a) a main idea that teachers should learn and (b) a strategy for helping teachers enact that idea within their own ongoing systems of practice. Using rigorous research design standards, the review identifies 28 studies. Because studies differ in multiple ways, the review presents program effects graphically rather than statistically. Visual patterns suggest that many popular design features are not associated with program effectiveness. Furthermore, different main ideas are not differentially effective. However, the pedagogies used to facilitate enactment differ in their effectiveness. Finally, the review addresses the question of research design for studies of professional development and suggests that some widely favored research designs might adversely affect study outcomes.
3-006 Optimising chest pain referrals: a cross-sector and multidisciplinary QIP to reduce inappropriate referrals to a chest pain clinic
Background and Local Problem: A six-month audit at a Rapid Access Chest Pain Clinic (RACPC) in Southeast England found that 20% of referrals were inappropriate, contributing to inefficiencies in service delivery. Given the burden of coronary artery disease in the UK, a Quality Improvement Project (QIP) aimed to reduce inappropriate referrals below 10%, decrease total referrals by 10%, and maintain service capacity to review patients within two weeks. Methods and Intervention: Stakeholder engagement played a central role, facilitating collaboration across a multidisciplinary action group integrating primary and secondary care. This alignment supported consistency in referral pathways and clinical decision-making. A Root Cause Analysis identified key issues, while a Pareto Analysis prioritised the most significant factors, including referrals for non-anginal chest pain, incomplete information, alternative diagnoses, and patients already under cardiology care. A driver diagram mapped cause-and-effect relationships, and a QIP Decision Matrix determined that updating the referral form was the most feasible and impactful solution. A structured family of measures guided data collection to monitor progress. The Plan-Do-Study-Act (PDSA) framework was used to iteratively test and refine interventions. PDSA-1 applied qualitative data analysis to develop a new referral form, which was subsequently approved. PDSA-2 introduced the form in the Emergency Department over eight weeks, assessing its impact on inappropriate referrals. PDSA-3 expanded implementation across all referral sources, including primary care, over ten weeks. Collaboration between hospital specialists, GPs, and referral coordinators facilitated clear communication and the integration of updated referral criteria into routine practice. Results: PDSA-1 led to referral form approval. PDSA-2 in the Emergency Department achieved a 91% reduction in inappropriate referrals but did not lower overall referral volume. [Abstract 3-006 Figure 1: the new RACPC form. Figure omitted; see PDF.] PDSA-3 reduced inappropriate referrals by 66.5%, lowering the baseline rate from 20.6% to 6.9%. Total referrals decreased by 10%. The reduction applied across all targeted causes, most notably eliminating non-anginal pain referrals. Coordination between primary and secondary care enhanced triage processes, contributing to success. Measurement of waiting times was discontinued due to external service changes affecting accuracy. [Abstract 3-006 Figure 2: PDSA-3 outcomes. Figure omitted; see PDF.] Conclusion: The new referral form significantly reduced inappropriate referrals and overall referral volume to the RACPC. The intervention was assessed as sustainable using the five measures of the Sustainability in Quality Improvement (2023) framework. This QIP highlights the value of effective stakeholder engagement, multidisciplinary collaboration, and cross-organisational integration, demonstrating how structured quality improvement methodologies can optimise referral processes and improve service efficiency.
All systems go
Changing whole education systems for the better as measured by student achievement requires coordinated leadership at school, community, district, and government levels. This book lays out a comprehensive action plan for achieving whole-system reform.
CONSORT 2010 statement: extension to randomised pilot and feasibility trials
The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to that in RCTs assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added. The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26-item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor’s note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.