17 result(s) for "The future of pragmatic trials"
Defining key design elements of registry-based randomised controlled trials: a scoping review
Background Traditional randomised controlled trials remain the gold standard for improving clinical care, but they have limitations, including high costs, a high failure rate and limited external validity. An alternative methodology is the newly defined, prospective, registry-based randomised controlled trial (RRCT), in which treatment and outcome data are collected in an existing registry. This scoping review explores the current literature on RRCTs to help identify their key design elements and the characteristics of the clinical registries on which they rely. Methods A scoping review was conducted in accordance with the Joanna Briggs Institute guidelines. Four databases were searched for articles published from inception to June 2018: Medline, Embase, the Cumulative Index to Nursing and Allied Health Literature and Scopus. The search strategy included MeSH terms and text words related to RRCTs. Results We identified 2369 articles, of which 75 were selected for full-text screening. Of these, only 17 articles satisfied our inclusion criteria. All studies were published between 1996 and 2017 and all were investigator-initiated. Study designs were mainly multi-site comparative/effectiveness studies incorporating disease registries (n = 8), procedure registries (n = 8) and a health services registry (n = 1). The low cost, reduced administrative burden and enhanced external validity of RRCTs make them an attractive research methodology for addressing questions of public health importance. We identified variable definitions of what constitutes an RRCT, and found that issues related to ethical conduct and to data integrity, completeness, timeliness, validation and endpoint adjudication need to be carefully addressed. Conclusion RRCTs potentially have an important role to play in informing best clinical practice and health policy.
A number of issues need to be addressed to optimise the utility of this approach, including establishing universally accepted criteria for the definition of an RRCT.
The ethical challenges raised in the design and conduct of pragmatic trials: an interview study with key stakeholders
Background There is a concern that the apparent effectiveness of interventions tested in clinical trials may not be an accurate reflection of their actual effectiveness in usual practice. Pragmatic randomized controlled trials (RCTs) are designed with the intent of addressing this discrepancy. While pragmatic RCTs may increase the relevance of research findings to practice, they may also raise new ethical concerns (even while reducing others). To explore this question, we interviewed key stakeholders with the aim of identifying potential ethical challenges in the design and conduct of pragmatic RCTs, with a view to developing future guidance on these issues. Methods Interviews were conducted with clinical investigators, methodologists, patient partners, ethicists, and other knowledge users (e.g., regulators). Interviews covered experiences with pragmatic RCTs, ethical issues relevant to pragmatic RCTs, and perspectives on the appropriate oversight of pragmatic RCTs. Interviews were coded inductively by two coders. Interim and final analyses were presented to the broader team for comment and discussion before the analytic framework was finalized. Results We conducted 45 interviews between April and September 2018. Interviewees represented a range of disciplines and jurisdictions as well as varying content expertise. Issues of importance in pragmatic RCTs were (1) identification of relevant risks from trial participation and determination of what constitutes minimal risk; (2) determining when alterations to traditional informed consent approaches are appropriate; (3) the distinction between research, quality improvement, and practice; (4) the potential for broader populations to be affected by the trial and what protections they might be owed; (5) the broader range of trial stakeholders in pragmatic RCTs, and determining their roles and responsibilities; and (6) determining what constitutes “usual care” and implications for trial reporting.
Conclusions Our findings suggest both the need to discuss familiar ethical topics in new ways and that there are new ethical issues in pragmatic RCTs that need greater attention. Addressing the highlighted issues and developing guidance will require multidisciplinary input, including patient and community members, within a broader and more comprehensive analysis that extends beyond consent and attends to the identified considerations relating to risk and stakeholder roles and responsibilities.
Process evaluation within pragmatic randomised controlled trials: what is it, why is it done, and can we find it?—a systematic review
Background Process evaluations are increasingly conducted within pragmatic randomised controlled trials (RCTs) of health services interventions and provide vital information to enhance understanding of RCT findings. However, issues pertaining to process evaluation in this specific context have been little discussed. We aimed to describe the frequency, characteristics, labelling, value, practical conduct issues, and accessibility of published process evaluations within pragmatic RCTs in health services research. Methods We used a 2-phase systematic search process to (1) identify an index sample of journal articles reporting primary outcome results of pragmatic RCTs published in 2015 and then (2) identify all associated publications. We used an operational definition of process evaluation based on the Medical Research Council’s process evaluation framework to identify both process evaluations reported separately and process data reported in the trial results papers. We extracted and analysed quantitative and qualitative data to answer review objectives. Results From an index sample of 31 pragmatic RCTs, we identified 17 separate process evaluation studies. These had varied characteristics and only three were labelled ‘process evaluation’. Each of the 31 trial results papers also reported process data, with a median of five different process evaluation components per trial. Reported barriers and facilitators related to real-world collection of process data, recruitment of participants to process evaluations, and health services research regulations. We synthesised a wide range of reported benefits of process evaluations to interventions, trials, and wider knowledge. Visibility was often poor, with 13/17 process evaluations not mentioned in the trial results paper and 12/16 process evaluation journal articles not appearing in the trial registry. 
Conclusions In our sample of reviewed pragmatic RCTs, the meaning of the label ‘process evaluation’ appears uncertain, and the scope and significance of the term warrant further research and clarification. Although there were many ways in which the process evaluations added value, they often had poor visibility. Our findings suggest approaches that could enhance the planning and utility of process evaluations in the context of pragmatic RCTs. Trial registration Not applicable for PROSPERO registration
Streamlining the institutional review board process in pragmatic randomized clinical trials: challenges and lessons learned from the Aspirin Dosing: A Patient-centric Trial Assessing Benefits and Long-Term Effectiveness (ADAPTABLE) trial
Background New considerations during the ethical review process may emerge from innovative, yet unfamiliar operational methods enabled in pragmatic randomized controlled trials (RCTs), potentially making institutional review board (IRB) evaluation more complex. In this manuscript, key components of the pragmatic “Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-term Effectiveness (ADAPTABLE)” randomized trial that required a reappraisal of the IRB submission, review, and approval processes are discussed. Main text ADAPTABLE is a pragmatic, multicenter, open-label RCT evaluating the comparative effectiveness of two doses of aspirin widely used for secondary prevention (81 mg and 325 mg) in 15,000 patients with an established history of atherosclerotic cardiovascular disease. The electronic informed consent form is completed online by the participants at the time of enrollment, and endpoint ascertainment is conducted through queries of electronic health records. IRB challenges encountered regarding centralized IRB evaluation, electronic informed consent, patient engagement, and risk determination in ADAPTABLE are described in this manuscript. The experience of ADAPTABLE encapsulates how pragmatic protocol components intended to facilitate the study conduct have been tempered by unexpected, yet justified concerns raised by local IRBs. How the lessons learned can be applied to future similar pragmatic trials is delineated. Conclusion Developing engaging communication channels between IRBs and study personnel in pragmatic randomized trials, as early as the protocol design stage, helps reduce issues with IRB approval. The lessons learned in ADAPTABLE regarding the IRB process for centralized IRBs, informed consent, patient engagement, and risk determination can be emulated and will be instrumental in future pragmatic studies.
Understanding implementation fidelity in a pragmatic randomized clinical trial in the nursing home setting: a mixed-methods examination
Background The Pragmatic Trial of Video Education in Nursing Homes (PROVEN) is one of the first large pragmatic randomized clinical trials (pRCTs) to be conducted in U.S. nursing homes (N = 119 intervention and N = 241 control across two health-care systems). The trial aims to evaluate the effectiveness of a suite of videos to improve advance care planning (ACP) for nursing home patients. This report uses mixed methods to explore the optimal and suboptimal conditions necessary for implementation fidelity within pRCTs in nursing homes. Methods PROVEN’s protocol required designated facility champions to offer an ACP video to long-stay patients every 6 months during the 18-month implementation period. Champions completed a video status report, stored within electronic medical records, each time a video was offered. Data from the report were used to derive each facility’s adherence rate (i.e., cumulative video offers). Qualitative interviews held after 15 months with champions were purposively sampled from facilities with the highest and lowest adherence rates (i.e., those in the top and bottom quintiles). Two researchers analyzed interview data thematically using a deductive approach based upon six domains of the revised Conceptual Framework for Implementation Fidelity (CFIF). Matrices were developed to compare coded narratives by domain across facility adherence status. Results In total, 28 interviews involving 33 champions were analyzed. Different patterns were observed across high- versus low-adherence facilities for five CFIF domains. In low-adherence nursing homes, (1) there were limited implementation resources (Context), (2) there was often a perceived negative patient or family responsiveness to the program (Participant Responsiveness), and (3) champions were reluctant to offer the videos (Recruitment).
In high-adherence nursing homes, (1) there was more perceived patient and family willingness to engage in the program (Participant Responsiveness), (2) champions supplemented the video with ACP conversations (Quality of Delivery), (3) there were strategic approaches to recruitment (Recruitment), and (4) champions appreciated external facilitation (Strategies to Facilitate Implementation). Conclusions Critical lessons for implementing pRCTs in nursing homes emerged from this report: (1) flexible fidelity is important (i.e., delivering core elements of an intervention while permitting the adaptation of non-core elements), (2) reciprocal facilitation is vital (i.e., early and ongoing stakeholder engagement in research design and, reciprocally, researchers’ and organizational leaders’ ongoing support of the implementation), and (3) organizational and champion readiness should be formally assessed early and throughout implementation to facilitate remediation. Trial registration ClinicalTrials.gov, NCT02612688 . Registered on 19 November 2015.
Ethical care requires pragmatic care research to guide medical practice under uncertainty
Background The current research-care separation was introduced to protect patients from explanatory studies designed to gain knowledge for future patients. Care trials are all-inclusive pragmatic trials integrated into medical practice, with no extra tests, risks, or costs, and have been designed to guide practice under uncertainty in the best medical interest of the patient. Proposed revision Patients need a distinction between validated care, previously verified to provide better outcomes, and promising but unvalidated care, which may include unnecessary or even harmful interventions. While validated care can be practiced normally, unvalidated care should only be offered within declared pragmatic care research, designed to protect patients from harm. The validated/unvalidated care distinction is normative, necessary to the ethics of medical practice. Care trials, which mark the distinction and allow the tentative use of promising interventions, necessarily involve patients, and thus the design and conduct of pragmatic care research must respect the overarching rule of care ethics “to always act in the best medical interest of the patient.” Yet, unvalidated interventions offered in contexts of medical uncertainty cannot be prescribed or practiced as if they were validated care. The medical interests of current patients are best protected when unvalidated practices are restricted to a care trial protocol, with 1:1 random allocation (or “hemi-prescription”) versus previously validated care, to optimize potential benefits and minimize risks for each patient. Conclusion Pragmatic trials can regulate medical practice by providing (i) a transparent demarcation between unvalidated and validated care; (ii) norms of medical conduct when using tests and interventions of yet unknown benefits in practice; and eventually (iii) a verdict regarding optimal care.
The protocol for the prehabilitation for thoracic surgery study: a randomized pragmatic trial comparing a short home-based multimodal program to aerobic training in patients undergoing video-assisted thoracoscopic surgery lobectomy
Background Prehabilitation has been shown to have a positive effect on the postoperative recovery of functional capacity in patients undergoing video-assisted thoracoscopic surgery (VATS) lobectomy. The optimal way to implement prehabilitation programs, including the optimal forms of prehabilitation, duration, intensity, and methods to improve compliance, remains to be studied. This Prehabilitation for Thoracic Surgery Study will compare the effectiveness of multimodal and aerobic training-only programs in patients undergoing thoracoscopic lobectomy. Methods This randomized pragmatic trial will be conducted in Peking Union Medical College Hospital (PUMCH) and include 100 patients who are eligible to undergo VATS lobectomy. Patients will be randomized to a multimodal or aerobic training group. Prehabilitation training guidance will be provided by a multidisciplinary care team. The patients in the multimodal group will perform aerobic exercises, resistance exercises, breathing exercises, psychological improvement strategies, and nutritional supplementation. Meanwhile, the patients in the aerobic group will perform only aerobic exercises. The interventions will be home-based and supervised by medical providers. The patients will be followed up until 30 days after surgery to investigate whether the multimodal prehabilitation program differs from the aerobic training program in terms of the magnitude of improvement in functional capability pre- to postoperatively. The primary outcome will be the perioperative 6-min walk distance (6MWD). The secondary outcomes will include the postoperative pulmonary functional recovery status, health-related quality of life score, incidence of postoperative complications, and clinical outcomes. Discussion Prehabilitation remains a relatively new approach that is not widely performed by thoracic surgery patients. The existing studies mainly focus on unimodal interventions.
While multimodal prehabilitation strategies have been shown to be preferable to unimodal strategies in a few studies, the evidence remains scarce for thoracic surgery patients. The results of this study will contribute to the understanding of prehabilitation methods for thoracoscopic lobectomy patients. Trial registration ClinicalTrials.gov NCT04049942. Registered on August 8, 2019.
Implementation of a stepped wedge cluster randomized trial to evaluate a hospital mobility program
Background Stepped wedge cluster randomized trials (SW-CRT) are increasingly used to evaluate new clinical programs, yet there is limited guidance on practical aspects of applying this design. We report our early experiences conducting a SW-CRT to examine an inpatient mobility program (STRIDE) in the Veterans Health Administration (VHA). We provide recommendations for future research using this design to evaluate clinical programs. Methods Based on data from study records and reflections from the investigator team, we describe and assess the design and initial stages of a SW-CRT, from site recruitment to program launch in 8 VHA hospitals. Results Site recruitment consisted of thirty 1-h conference calls with representatives from 22 individual VAs who expressed interest in implementing STRIDE. Of these, 8 hospitals were enrolled and randomly assigned in two stratified blocks (4 hospitals per block) to a STRIDE launch date. Block 1 randomization occurred in July 2017 with first STRIDE launch in December 2017; block 2 randomization occurred in April 2018 with first STRIDE launch in January 2019. The primary study outcome of discharge destination will be assessed using routinely collected data in the electronic health record (EHR). Within randomized blocks, two hospitals per sequence launched STRIDE approximately every 3 months with primary outcome assessment paused during the 3-month time period of program launch. All sites received 6–8 implementation support calls, according to a pre-specified schedule, from the time of recruitment to program launch, and all 8 sites successfully launched within their assigned 3-month window. Seven of the eight sites initially started with a limited roll out (for example on one ward) or modified version of STRIDE (for example, using existing staff to conduct walks until new positions were filled). 
Conclusions Future studies should incorporate sufficient time for site recruitment and carefully consider the following to inform the design of SW-CRTs evaluating the rollout of a new clinical program: (1) whether a blocked randomization fits study needs, (2) the amount of time and implementation support sites will need to start their programs, and (3) whether clinical programs are likely to include a “ramp-up” period. Successful execution of SW-CRT designs requires both adherence to rigorous design principles and careful consideration of the logistical requirements for the timing of program rollout. Trial registration ClinicalTrials.gov NCT03300336. Prospectively registered on 3 October 2017.
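The block-and-step allocation described in this abstract (two stratified blocks of four hospitals, with two hospitals launching per step roughly every 3 months) can be sketched as a toy schedule generator. This is an illustrative assumption of how such an allocation could be produced, not the STRIDE trial's actual randomization procedure; the function name and pairing logic are invented.

```python
import random

def stepped_wedge_schedule(sites, n_blocks=2, sites_per_step=2, step_months=3, seed=7):
    """Toy stepped-wedge allocation: split sites into stratified blocks,
    shuffle within each block, and assign launch steps in pairs.
    Returns {site: (block_number, months_after_block_start)}."""
    rng = random.Random(seed)
    block_size = len(sites) // n_blocks
    schedule = {}
    for b in range(n_blocks):
        block = sites[b * block_size:(b + 1) * block_size]  # slice copies the list
        rng.shuffle(block)                                  # randomize launch order
        for i, site in enumerate(block):
            step = i // sites_per_step                      # pairs share a launch step
            schedule[site] = (b + 1, step * step_months)
    return schedule

# Example: 8 hospitals in 2 blocks -> within each block, two launch at month 0
# and two at month 3 (relative to that block's randomization date).
launches = stepped_wedge_schedule([f"Hospital{i}" for i in range(1, 9)])
```

In a real SW-CRT all clusters eventually receive the intervention; only the crossover time is randomized, which is what the schedule above encodes.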
Addressing identification bias in the design and analysis of cluster-randomized pragmatic trials: a case study
Background Pragmatic trials provide the opportunity to study the effectiveness of health interventions to improve care in real-world settings. However, the use of open-cohort designs, with patients becoming eligible after randomization, and reliance on electronic health records (EHRs) to identify participants may lead to a form of selection bias referred to as identification bias. This bias can occur when individuals identified as a result of the treatment group assignment are included in analyses. Methods To demonstrate the importance of identification bias and how it can be addressed, we consider a motivating case study, the PRimary care Opioid Use Disorders treatment (PROUD) Trial. PROUD is an ongoing pragmatic, cluster-randomized implementation trial in six health systems to evaluate a program for increasing medication treatment of opioid use disorders (OUDs). A main study objective is to evaluate whether the PROUD intervention decreases acute care utilization among patients with OUD (effectiveness aim). Identification bias is a particular concern, because OUD is underdiagnosed in the EHR at baseline, and because the intervention is expected to increase OUD diagnosis among current patients and attract new patients with OUD to the intervention site. We propose a framework for addressing this source of bias in the statistical design and analysis. Results The statistical design sought to balance the competing goals of fully capturing intervention effects and mitigating identification bias, while maximizing power. For the primary analysis of the effectiveness aim, identification bias was avoided by defining the study sample using pre-randomization data (pre-trial modeling demonstrated that the optimal approach was to use individuals with a prior OUD diagnosis). To expand the generalizability of study findings, secondary analyses were planned that also included patients newly diagnosed post-randomization, with analytic methods to account for identification bias.
Conclusion As more studies seek to leverage existing data sources, such as EHRs, to make clinical trials more affordable and generalizable and to apply novel open-cohort study designs, identification bias is likely to become an increasingly common concern. This case study highlights how this bias can be addressed in the statistical study design and analysis. Trial registration ClinicalTrials.gov, NCT03407638. Registered on 23 January 2018.
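The design point above, defining the primary analysis sample from pre-randomization data only, can be illustrated with a toy NumPy simulation. The severity/diagnosis model and all numbers here are invented for illustration (nothing is taken from PROUD): under a null effect, an intervention that merely diagnoses additional, milder patients makes the naive "everyone diagnosed by end of trial" comparison look beneficial, while the pre-randomization-defined sample does not.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # patients per arm (purely illustrative)

def simulate_arm(extra_dx_rate):
    """One arm under the null: the intervention changes *who gets an OUD
    diagnosis*, not the outcome itself. Severe cases are more likely to be
    diagnosed at baseline; the intervention also finds milder cases."""
    severity = rng.uniform(size=n)
    outcome = rng.poisson(1 + 2 * severity)           # acute-care visits, same model in both arms
    dx_pre = rng.uniform(size=n) < 0.5 * severity     # baseline diagnosis favors severe cases
    dx_post = dx_pre | (rng.uniform(size=n) < extra_dx_rate)  # extra post-randomization diagnoses
    return outcome, dx_pre, dx_post

y_c, pre_c, post_c = simulate_arm(extra_dx_rate=0.0)  # control: no new case-finding
y_i, pre_i, post_i = simulate_arm(extra_dx_rate=0.3)  # intervention: diagnoses milder patients too

# Naive analysis (everyone diagnosed by end of trial): identification bias,
# because the intervention arm's diagnosed sample includes milder patients.
naive_diff = y_i[post_i].mean() - y_c[post_c].mean()  # spuriously negative under the null

# Primary-analysis approach from the abstract: sample defined pre-randomization.
pre_diff = y_i[pre_i].mean() - y_c[pre_c].mean()      # close to zero, as it should be
```

The comparison of `naive_diff` (a spurious apparent benefit) with `pre_diff` (near zero) is the whole point: restricting to patients identifiable before randomization keeps the two arms' samples comparable.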
Using the half normal distribution to quantify covariate balance in cluster-randomized pragmatic trials
Background Pragmatic trials are often conducted as cluster-randomized controlled trials (C-RCTs), where staff of existing clinics or sites deliver interventions and randomization occurs at the site level. Covariate-constrained randomization (CCR) methods are often recommended to minimize imbalance on important site characteristics across intervention and control arms, because sizable imbalances can occur by chance in simple randomizations when the number of units to be randomized is relatively small. CCR methods involve multiple random assignments initially, an assessment of the balance achieved on site-level covariates from each randomization, and the final selection of an allocation that produces acceptable balance. However, no clear consensus exists on how to assess imbalance or identify allocations with sufficient balance. In this article, we describe an overall imbalance index (I) that is based on the mean of the absolute values of the standardized differences in means on the site characteristics. Methods We derive the theoretical distribution of I, then conduct simulation studies to examine its empirical properties under varying covariate distributions and inter-correlations. Results I has an expected value of 0.798 and, assuming independent site characteristics, a variance of 0.363/k, where k is the number of site characteristics being balanced. Simulations indicated that the properties of I are robust under varying covariate circumstances as long as k is greater than 3 and the covariates are not too highly inter-correlated. Conclusions We recommend that values of I below the 10th percentile indicate sufficient overall site balance in CCRs. Definitions of acceptable randomizations might also include individual covariate criteria specified in advance, in addition to overall balance criteria.
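The moments quoted above are those of a standard half-normal distribution: if each standardized difference is approximately N(0, 1), its absolute value has mean sqrt(2/pi) ≈ 0.798 and variance 1 − 2/pi ≈ 0.363, so the mean of k independent such terms has variance 0.363/k. A minimal NumPy sketch of a CCR-style selection rule follows; it assumes each raw standardized difference is rescaled by sqrt(n)/2 so that it is approximately standard normal under simple 1:1 randomization (the article's exact standardization may differ).

```python
import numpy as np

def imbalance_index(x, assign):
    """Mean absolute standardized difference in site-level covariate means
    between arms. x: (n_sites, k) covariate matrix; assign: 0/1 vector.
    The sqrt(n)/2 rescaling (an assumption made here) puts each term on an
    approximately standard-normal scale, so E[I] ~= sqrt(2/pi) ~= 0.798."""
    a, b = x[assign == 1], x[assign == 0]
    pooled_sd = np.sqrt((a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / 2)
    d = np.abs(a.mean(axis=0) - b.mean(axis=0)) / pooled_sd
    return float(np.mean(d) * np.sqrt(x.shape[0]) / 2)

def covariate_constrained_randomization(x, n_candidates=2000, pct=10, seed=0):
    """Score many candidate 1:1 site allocations with I and pick one at
    random from those at or below the pct-th percentile of I values,
    following the 10th-percentile recommendation in the abstract."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    candidates, scores = [], []
    for _ in range(n_candidates):
        assign = np.zeros(n, dtype=int)
        assign[rng.choice(n, n // 2, replace=False)] = 1  # equal-arm allocation
        candidates.append(assign)
        scores.append(imbalance_index(x, assign))
    cutoff = np.percentile(scores, pct)
    acceptable = [c for c, s in zip(candidates, scores) if s <= cutoff]
    return acceptable[rng.integers(len(acceptable))], cutoff
```

Choosing uniformly among the acceptable allocations, rather than taking the single best-balanced one, preserves more of the randomness that randomization-based inference relies on.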