Catalogue Search | MBRL
545 results for "Comparative Effectiveness Research - standards"
Finding What Works in Health Care
by Berg, Alfred O.; Morton, Sally; Institute of Medicine (U.S.). Committee on Standards for Systematic Reviews of Comparative Effectiveness Research
in Comparative Effectiveness Research; Comparative Effectiveness Research -- standards -- United States; Health care delivery
2011
Healthcare decision makers in search of reliable information that compares health interventions increasingly turn to systematic reviews for the best summary of the evidence. Systematic reviews identify, select, assess, and synthesize the findings of similar but separate studies, and can help clarify what is known and not known about the potential benefits and harms of drugs, devices, and other healthcare services. Systematic reviews can be helpful for clinicians who want to integrate research findings into their daily practices, for patients seeking to make well-informed choices about their own care, and for professional medical societies and other organizations that develop clinical practice guidelines.
Too often, systematic reviews are of uncertain or poor quality. There are no universally accepted standards for developing systematic reviews, leading to variability in how conflicts of interest and biases are handled, how evidence is appraised, and the overall scientific rigor of the process.
In Finding What Works in Health Care, the Institute of Medicine (IOM) recommends 21 standards for developing high-quality systematic reviews of comparative effectiveness research. The standards address the entire systematic review process, from the initial steps of formulating the topic and building the review team to producing a detailed final report that synthesizes what the evidence shows and where knowledge gaps remain.
Finding What Works in Health Care also proposes a framework for improving the quality of the science underpinning systematic reviews. This book will serve as a vital resource for both sponsors and producers of systematic reviews of comparative effectiveness research.
Caveats for the Use of Operational Electronic Health Record Data in Comparative Effectiveness Research
by Hartzog, Timothy H.; Lehmann, Harold P.; Saltz, Joel H.
in Clinical coding; Clinical Informatics; Clinical research
2013
The growing amount of data in operational electronic health record systems provides unprecedented opportunity for its reuse for many tasks, including comparative effectiveness research. However, there are many caveats to the use of such data. Electronic health record data from clinical settings may be inaccurate, incomplete, transformed in ways that undermine their meaning, unrecoverable for research, of unknown provenance, of insufficient granularity, and incompatible with research protocols. Nevertheless, the quantity and real-world nature of these data provide impetus for their use. We develop a list of caveats to inform would-be users of such data, and provide an informatics roadmap that aims to ensure this opportunity to augment comparative effectiveness research can be fully leveraged.
Journal Article
ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research
2013
Purpose: An essential aspect of patient-centered outcomes research (PCOR) and comparative effectiveness research (CER) is the integration of patient perspectives and experiences with clinical data to evaluate interventions. Thus, PCOR and CER require capturing patient-reported outcome (PRO) data appropriately to inform research, healthcare delivery, and policy. This initiative's goal was to identify minimum standards for the design and selection of a PRO measure for use in PCOR and CER. Methods: We performed a literature review to find existing guidelines for the selection of PRO measures. We also conducted an online survey of the International Society for Quality of Life Research (ISOQOL) membership to solicit input on PRO standards. A standard was designated as "recommended" when more than 50% of respondents endorsed it as "required as a minimum standard." Results: The literature review identified 387 articles. The survey response rate was 120 of 506 ISOQOL members. The respondents had an average of 15 years' experience in PRO research, and 89% felt competent or very competent providing feedback. Final recommendations for PRO measure standards included: documentation of the conceptual and measurement model; evidence for reliability and validity (content validity, construct validity, responsiveness); interpretability of scores; quality translation; and acceptable patient and investigator burden. Conclusion: The development of these minimum measurement standards is intended to promote the appropriate use of PRO measures to inform PCOR and CER, which in turn can improve the effectiveness and efficiency of healthcare delivery. A next step is to expand these minimum standards to identify best practices for selecting decision-relevant PRO measures.
Journal Article
Choosing important health outcomes for comparative effectiveness research: 4th annual update to a systematic review of core outcome sets for research
by Gorst, Sarah L.; Harman, Nicola L.; Matvienko-Sikar, Karen
in Animals; Cardiovascular disease; Citations
2018
The Core Outcome Measures in Effectiveness Trials (COMET) database is a publicly available, searchable repository of published and ongoing core outcome set (COS) studies. An annual systematic review update is carried out to maintain the currency of database content.
The methods used in the fourth update of the systematic review followed the same approach used in the original review and previous updates. Studies were eligible for inclusion if they reported the development of a COS, regardless of any restrictions by age, health condition or setting. Searches were carried out in March 2018 to identify studies that had been published or indexed between January 2017 and the end of December 2017.
Forty-eight new studies, describing the development of 56 COS, were included. There has been an increase in the number of studies clearly specifying the scope of the COS in terms of the population (n = 43, 90%) and intervention (n = 48, 100%) characteristics. Public participation has continued to rise, with over half (n = 27, 56%) of studies in the current review including input from members of the public. The rate of inclusion of all stakeholder groups has increased; in particular, participation from non-clinical research experts has risen from 32% (mean average in previous reviews) to 62% (n = 29). Input from participants located in Australasia (n = 17; 41%), Asia (n = 18; 44%), South America (n = 13; 32%) and Africa (n = 7; 17%) has increased since the previous reviews.
This update included a pronounced increase in the number of new COS identified compared to the previous three updates. There was an improvement in the reporting of the scope, stakeholder participants and methods used. Furthermore, there has been an increase in participation from Australasia, Asia, South America and Africa. These advancements are reflective of the efforts made in recent years to raise awareness about the need for COS development and uptake, as well as developments in COS methodology.
Journal Article
Re-Orientation of Clinical Research in Traumatic Brain Injury: Report of an International Workshop on Comparative Effectiveness Research
by Lingsma, Hester F.; Manley, Geoffrey T.; Menon, David K.
in Brain Injuries - therapy; Clinical trials; Comparative analysis
2012
During the National Neurotrauma Symposium 2010, the DG Research of the European Commission and the National Institutes of Health/National Institute of Neurological Disorders and Stroke (NIH/NINDS) organized a workshop on comparative effectiveness research (CER) in traumatic brain injury (TBI). This workshop reviewed existing approaches to improve outcomes of TBI patients. It had two main outcomes: First, it initiated a process of re-orientation of clinical research in TBI. Second, it provided ideas for a potential collaboration between the European Commission and the NIH/NINDS to stimulate research in TBI. Advances in provision of care for TBI patients have resulted from observational studies, guideline development, and meta-analyses of individual patient data. In contrast, randomized controlled trials have not led to any identifiable major advances. Rigorous protocols and tightly selected populations constrain generalizability. The workshop addressed additional research approaches, summarized the greatest unmet needs, and highlighted priorities for future research. The collection of high-quality clinical databases, associated with systems biology and CER, offers substantial opportunities. Systems biology aims to identify multiple factors contributing to a disease and addresses complex interactions. Effectiveness research aims to measure benefits and risks of systems of care and interventions in ordinary settings and broader populations. These approaches have great potential for TBI research. Although not new, they still need to be introduced to and accepted by TBI researchers as instruments for clinical research. As with therapeutic targets in individual patient management, so it is with research tools: one size does not fit all.
Journal Article
Real-world data: towards achieving the achievable in cancer care
by Karim, Safiya; Booth, Christopher M; Mackillop, William J
in Cancer; Clinical trials; Electronic health records
2019
The use of data from the real world to address clinical and policy-relevant questions that cannot be answered using data from clinical trials is garnering increased interest. Indeed, data from cancer registries and linked treatment records can provide unique insights into patients, treatments and outcomes in routine oncology practice. In this Review, we explore the quality of real-world data (RWD), provide a framework for the use of RWD and draw attention to the methodological pitfalls inherent to using RWD in studies of comparative effectiveness. Randomized controlled trials and RWD remain complementary forms of medical evidence; studies using RWD should not be used as substitutes for clinical trials. The comparison of outcomes between nonrandomized groups of patients who have received different treatments in routine practice remains problematic. Accordingly, comparative effectiveness studies need to be designed and interpreted very carefully. With due diligence, RWD can be used to identify and close gaps in health care, offering the potential for short-term improvement in health-care systems by enabling them to achieve the achievable.
Journal Article
Distinguishing Selection Bias and Confounding Bias in Comparative Effectiveness Research
2016
Comparative effectiveness research (CER) aims to provide patients and physicians with evidence-based guidance on treatment decisions. As researchers conduct CER they face myriad challenges. Although inadequate control of confounding is the most-often cited source of potential bias, selection bias that arises when patients are differentially excluded from analyses is a distinct phenomenon with distinct consequences: confounding bias compromises internal validity, whereas selection bias compromises external validity. Despite this distinction, however, the label "treatment-selection bias" is being used in the CER literature to denote the phenomenon of confounding bias. Motivated by an ongoing study of the effect of treatment choice for depression on weight change over time, this paper formally distinguishes selection and confounding bias in CER, clarifying important scientific, design, and analysis issues relevant to ensuring validity. First, the 2 types of biases may arise simultaneously in any given study; even if confounding bias is completely controlled, a study may nevertheless suffer from selection bias, so that the results are not generalizable to the patient population of interest. Second, the statistical methods used to mitigate the 2 biases are themselves distinct; methods developed to control one type of bias should not be expected to address the other. Finally, the control of selection and confounding bias will often require distinct covariate information. Consequently, as researchers plan future studies of comparative effectiveness, care must be taken to ensure that all data elements relevant to both confounding and selection bias are collected.
Journal Article
Integrating Randomized Comparative Effectiveness Research with Patient Care
by Fiore, Louis D; Lavori, Philip W
in Anti-Bacterial Agents - therapeutic use; Chlorhexidine - administration & dosage; Clinical medicine
2016
Clinical Trials Series: Comparative Effectiveness Studies and Patient Care
Clinical trials of interventions in common practice can be built into the workflow of an electronic medical record. The authors review four such trials and highlight the strengths and weaknesses of this approach to gathering information.
Clinical trials that are embedded into usual care have the potential to yield outcomes of great relevance to the institutions where they are performed and at the same time to yield information that may be generalizable to the health care system at large. In this article, we review four clinical trials that were conducted in three health care systems using their extant electronic health record (EHR) systems. We find that EHR-based clinical trials are feasible but pose limitations on the questions that can be addressed, the processes that can be implemented, and the outcomes that can be assessed. We think . . .
Journal Article
Data Quality Assessment for Comparative Effectiveness Research in Distributed Data Networks
by Toh, Sengwee; Kahn, Michael; Brown, Jeffrey S.
in Analytic Methods; Biomedical data; Comparative Effectiveness Research - organization & administration
2013
Background: Electronic health information routinely collected during health care delivery and reimbursement can help address the need for evidence about the real-world effectiveness, safety, and quality of medical care. Often, distributed networks that combine information from multiple sources are needed to generate this real-world evidence. Objective: We provide a set of field-tested best practices and recommendations for data quality checking for comparative effectiveness research (CER) in distributed data networks. Methods: We explore the requirements for data quality checking and describe the data quality approaches undertaken by several existing multi-site networks. Results: There are no established standards regarding how to evaluate the quality of electronic health data for CER within distributed networks. Data checks of increasing complexity are often used, ranging from consistency with syntactic rules to evaluation of semantics and consistency within and across sites. Temporal trends within and across sites are widely used, as are checks of each data refresh or update. Rates of specific events and exposures by age group, sex, and month are also common. Discussion: Secondary use of electronic health data for CER holds promise but is complex, especially in distributed data networks that incorporate periodic data refreshes. The viability of a learning health system is dependent on a robust understanding of the quality, validity, and optimal secondary uses of routinely collected electronic health data within distributed health data networks. Robust data quality checking can strengthen confidence in findings based on distributed data networks.
Journal Article
Challenges in Using Electronic Health Record Data for CER: Experience of 4 Learning Organizations and Solutions Applied
by Savitz, Lucy; Bayley, K. Bruce; Shah, Nilay
in Blood pressure; Clinical Coding; Comparative Effectiveness Research - organization & administration
2013
Objective: To document the strengths and challenges of using electronic health records (EHRs) for comparative effectiveness research (CER). Methods: A replicated case study of comparative effectiveness in hypertension treatment was conducted across 4 health systems, with instructions to extract data and document problems encountered using a specified list of required data elements. Researchers at each health system documented successes and challenges, and suggested solutions for addressing challenges. Results: Data challenges fell into 5 categories: missing data, erroneous data, uninterpretable data, inconsistencies among providers and over time, and data stored in noncoded text notes. Suggested strategies to address these issues include data validation steps, use of surrogate markers, natural language processing, and statistical techniques. Discussion: A number of EHR issues can hamper the extraction of valid data for cross-health system comparative effectiveness studies. Our case example cautions against a blind reliance on EHR data as a single definitive data source. Nevertheless, EHR data are superior to administrative or claims data alone, and are cheaper and timelier than clinical trials or manual chart reviews. All 4 participating health systems are pursuing pathways to more effectively use EHR data for CER. A partnership between clinicians, researchers, and information technology specialists is encouraged as a way to capitalize on the wealth of information contained in the EHR. Future developments in both technology and care delivery hold promise for improvement in the ability to use EHR data for CER.
Journal Article