900 result(s) for "Ryan, Patrick B"
Comprehensive comparative effectiveness and safety of first-line antihypertensive drug classes: a systematic, multinational, large-scale analysis
Uncertainty remains about the optimal monotherapy for hypertension, with current guidelines recommending any primary agent among the first-line drug classes thiazide or thiazide-like diuretics, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, dihydropyridine calcium channel blockers, and non-dihydropyridine calcium channel blockers, in the absence of comorbid indications. Randomised trials have not further refined this choice. We developed a comprehensive framework for real-world evidence that enables comparative effectiveness and safety evaluation across many drugs and outcomes from observational data encompassing millions of patients, while minimising inherent bias. Using this framework, we did a systematic, large-scale study under a new-user cohort design to estimate the relative risks of three primary (acute myocardial infarction, hospitalisation for heart failure, and stroke) and six secondary effectiveness and 46 safety outcomes comparing all first-line classes across a global network of six administrative claims and three electronic health record databases. The framework addressed residual confounding, publication bias, and p-hacking using large-scale propensity adjustment, a large set of control outcomes, and full disclosure of hypotheses tested. Using 4·9 million patients, we generated 22 000 calibrated, propensity-score-adjusted hazard ratios (HRs) comparing all classes and outcomes across databases. Most estimates revealed no effectiveness differences between classes; however, thiazide or thiazide-like diuretics showed better primary effectiveness than angiotensin-converting enzyme inhibitors, with lower risk of acute myocardial infarction (HR 0·84, 95% CI 0·75–0·95), hospitalisation for heart failure (0·83, 0·74–0·95), and stroke (0·83, 0·74–0·95) while on initial treatment. Safety profiles also favoured thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors. The non-dihydropyridine calcium channel blockers were significantly inferior to the other four classes. This comprehensive framework introduces a new way of doing observational health-care science at scale. The approach supports equivalence between drug classes for initiating monotherapy for hypertension, in keeping with current guidelines, with the exception of the superiority of thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors and the inferiority of non-dihydropyridine calcium channel blockers. Funding: US National Science Foundation, US National Institutes of Health, Janssen Research & Development, IQVIA, South Korean Ministry of Health & Welfare, Australian National Health and Medical Research Council.
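A minimal sketch of the kind of estimation step this abstract describes: fit a propensity model for the target drug class, stratify on the score, and obtain a stratified hazard ratio. This is an illustration only, not the authors' framework; the column names (`treatment`, `time`, `event`) and the quintile stratification are assumptions, and the real study uses large-scale propensity adjustment over thousands of covariates plus empirical calibration.

```python
# Illustrative sketch of propensity-score-stratified hazard-ratio estimation.
# Column names and the 5-stratum choice are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ps_adjusted_hr(df: pd.DataFrame, covariates: list) -> float:
    """Estimate a propensity-score-stratified hazard ratio for 'treatment'."""
    # 1. Propensity model: probability of receiving the target drug class.
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[covariates], df["treatment"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    # 2. Stratify on the propensity score (quintiles here, for illustration).
    df["ps_stratum"] = pd.qcut(df["ps"], q=5, labels=False)

    # 3. Cox model for the outcome, stratified by propensity stratum.
    cph = CoxPHFitter()
    cph.fit(df[["treatment", "time", "event", "ps_stratum"]],
            duration_col="time", event_col="event", strata=["ps_stratum"])
    return float(cph.summary.loc["treatment", "exp(coef)"])  # hazard ratio
```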
Empirical confidence interval calibration for population-level effect estimation studies in observational healthcare data
Observational healthcare data, such as electronic health records and administrative claims, offer potential to estimate effects of medical products at scale. Observational studies have often been found to be nonreproducible, however, generating conflicting results even when using the same database to answer the same question. One source of discrepancies is error, both random (caused by sampling variability) and systematic (for example, because of confounding, selection bias, and measurement error). Only random error is typically quantified, but it converges to zero as databases become larger, whereas systematic error persists independent of sample size and therefore increases in relative importance. Negative controls are exposure–outcome pairs where one believes no causal effect exists; they can be used to detect multiple sources of systematic error, but interpreting their results is not always straightforward. Previously, we have shown that an empirical null distribution can be derived from a sample of negative controls and used to calibrate P values, accounting for both random and systematic error. Here, we extend this work to calibration of confidence intervals (CIs). CIs require positive controls, which we synthesize by modifying negative controls. We show that our CI calibration restores nominal characteristics, such as 95% coverage of the true effect size by the 95% CI. We furthermore show that CI calibration reduces disagreement in replications of two pairs of conflicting observational studies: one related to dabigatran, warfarin, and gastrointestinal bleeding and one related to selective serotonin reuptake inhibitors and upper gastrointestinal bleeding. We recommend CI calibration to improve reproducibility of observational studies.
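A simplified illustration of the calibration idea: estimate the systematic-error distribution from negative-control estimates (true effect assumed null), then shift and widen the interval of interest. The published method fits this distribution by maximum likelihood while accounting for each control's standard error; the moment-based shortcut below, and all variable names, are simplifying assumptions.

```python
# Simplified sketch of empirical confidence-interval calibration using
# negative controls; not the full maximum-likelihood procedure in the paper.
import numpy as np
from scipy import stats

def calibrate_ci(log_rr, se_log_rr, nc_log_rr, nc_se, alpha=0.05):
    """Calibrate one estimate (log_rr, se_log_rr) using negative-control
    estimates nc_log_rr with standard errors nc_se."""
    nc_log_rr = np.asarray(nc_log_rr)
    nc_se = np.asarray(nc_se)

    # Empirical null: mean bias and excess spread among negative controls,
    # with average sampling variance subtracted off.
    mu = nc_log_rr.mean()
    tau2 = max(nc_log_rr.var(ddof=1) - np.mean(nc_se ** 2), 0.0)

    # Calibrated CI: shift by the estimated bias, widen by systematic error.
    total_se = np.sqrt(se_log_rr ** 2 + tau2)
    z = stats.norm.ppf(1 - alpha / 2)
    lower = (log_rr - mu) - z * total_se
    upper = (log_rr - mu) + z * total_se
    return np.exp(lower), np.exp(upper)  # back on the rate-ratio scale
```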
Characterizing treatment pathways at scale using the OHDSI network
Observational research promises to complement experimental research by providing large, diverse populations that would be infeasible for an experiment. Observational research can test its own clinical hypotheses, and observational studies also can contribute to the design of experiments and inform the generalizability of experimental research. Understanding the diversity of populations and the variance in care is one component. In this study, the Observational Health Data Sciences and Informatics (OHDSI) collaboration created an international data network with 11 data sources from four countries, including electronic health records and administrative claims data on 250 million patients. All data were mapped to common data standards, patient privacy was maintained by using a distributed model, and results were aggregated centrally. Treatment pathways were elucidated for type 2 diabetes mellitus, hypertension, and depression. The pathways revealed that the world is moving toward more consistent therapy over time across diseases and across locations, but significant heterogeneity remains among sources, pointing to challenges in generalizing clinical trial results. Diabetes favored a single first-line medication, metformin, to a much greater extent than hypertension or depression. About 10% of diabetes and depression patients and almost 25% of hypertension patients followed a treatment pathway that was unique within the cohort. Aside from factors such as sample size and underlying population (academic medical center versus general population), electronic health records data and administrative claims data revealed similar results. Large-scale international observational research is feasible.
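The core tabulation behind a treatment-pathway study can be sketched in a few lines: order each patient's drug exposures by start date and count how often each ordered sequence occurs. The table layout and column names below are hypothetical stand-ins, not the OHDSI implementation.

```python
# Minimal sketch of treatment-pathway tabulation from a drug exposure table.
# Assumed columns: person_id, drug_class, start_date.
import pandas as pd

def treatment_pathways(drug_era: pd.DataFrame, top_n: int = 20) -> pd.Series:
    """Return the most frequent ordered sequences of drug classes."""
    ordered = drug_era.sort_values(["person_id", "start_date"])
    # Collapse consecutive repeats so "metformin -> metformin" counts once.
    ordered = ordered[ordered["drug_class"]
                      != ordered.groupby("person_id")["drug_class"].shift()]
    pathways = (ordered.groupby("person_id")["drug_class"]
                .apply(lambda s: " -> ".join(s)))
    return pathways.value_counts().head(top_n)
```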
Inferring disease severity in rheumatoid arthritis using predictive modeling in administrative claims databases
Confounding by disease severity is an issue in pharmacoepidemiology studies of rheumatoid arthritis (RA), due to channeling of sicker patients to certain therapies. To address the issue of limited clinical data for confounder adjustment, a patient-level prediction model to differentiate between patients prescribed and not prescribed advanced therapies was developed as a surrogate for disease severity, using all available data from a US claims database. Data from adult RA patients were used to build regularized logistic regression models to predict current and future disease severity using a biologic or tofacitinib prescription claim as a surrogate for moderate-to-severe disease. Model discrimination was assessed using the area under the receiver operating characteristic curve (AUC); models were trained and tested in Optum Clinformatics® Extended DataMart (Optum) and additionally validated in three external IBM MarketScan® databases. The model was further validated in the Optum database across a range of patient cohorts. In the Optum database (n = 68,608), the AUC for discriminating RA patients with a prescription claim for a biologic or tofacitinib versus those without in the 90 days following index diagnosis was 0.80. Model AUCs were 0.77 in IBM CCAE (n = 75,579) and IBM MDCD (n = 7,537) and 0.75 in IBM MDCR (n = 36,090). There was little change in discrimination when the prediction window was extended to 730 days following index diagnosis (prediction model AUC in Optum was 0.79). A prediction model demonstrated good discrimination across multiple claims databases to identify RA patients with a prescription claim for advanced therapies during different time-at-risk periods as a proxy for current and future moderate-to-severe disease. This work provides a robust model-derived risk score that can be used as a potential covariate and proxy measure to adjust for confounding by severity in multivariable models in the RA population. An R package to develop the prediction model and risk score is available on an open-source platform for researchers.
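A compact sketch of the modelling step this abstract describes: an L1-regularized logistic regression trained on baseline claims features, evaluated by AUC. The feature matrix, label, and hyperparameters below are assumptions for illustration, not the published model.

```python
# Sketch: regularized logistic regression as a disease-severity proxy,
# evaluated by AUC.  Inputs are hypothetical claims-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def fit_severity_proxy(X: np.ndarray, y: np.ndarray) -> float:
    """y = 1 if the patient has a biologic/tofacitinib claim in the window."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```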
Wisdom of the CROUD: Development and validation of a patient-level prediction model for opioid use disorder using population-level claims data
Some patients who are given opioids for pain could develop opioid use disorder. If it were possible to identify patients who are at a higher risk of opioid use disorder, then clinicians could spend more time educating these patients about the risks. We develop and validate a model to predict a person's future risk of opioid use disorder at the point just before their first opioid is dispensed. This was a patient-level prediction cohort study using four US claims databases, with target populations ranging between 343,552 and 384,424 patients. The outcome was a recorded diagnosis of opioid abuse, dependency, or unspecified drug abuse, as a proxy for opioid use disorder, from 1 day until 365 days after the first opioid was dispensed. We trained a regularized logistic regression using candidate predictors consisting of demographics and any conditions, drugs, procedures or visits prior to the first opioid. We then selected the top predictors and created a simple 8-variable score model. We estimated the percentage of new users of opioids with reported opioid use disorder within a year to range between 0.04% and 0.26% across US claims data. We developed an 8-variable Calculator of Risk for Opioid Use Disorder (CROUD) score, derived from the prediction models, to stratify patients into higher- and lower-risk groups. The 8 baseline variables were age 15-29, medical history of substance abuse, mood disorder, anxiety disorder, low back pain, renal impairment, painful neuropathy and recent ER visit. 1.8% of people were in the high-risk group for opioid use disorder (score >= 23), with the model obtaining a sensitivity of 13%, specificity of 98% and PPV of 1.14% for predicting opioid use disorder. CROUD could be used by clinicians to obtain personalized risk scores, to further educate those at higher risk, and to personalize new opioid dispensing guidelines such as urine testing. Due to the high false positive rate, it should not be used for contraindication or to restrict utilization.
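The threshold evaluation reported above (sensitivity 13%, specificity 98%, PPV 1.14% at a cut-off of 23) reduces to simple confusion-matrix arithmetic. The sketch below shows that calculation for any score and cut-off; it deliberately does not reproduce the published CROUD point weights, and all inputs are hypothetical.

```python
# Sketch: sensitivity, specificity and PPV of a risk score at a cut-off.
import numpy as np

def threshold_metrics(scores, outcomes, cutoff=23):
    """scores: per-patient risk score; outcomes: 1 if opioid use disorder."""
    scores = np.asarray(scores)
    outcomes = np.asarray(outcomes).astype(bool)
    flagged = scores >= cutoff           # high-risk group
    tp = np.sum(flagged & outcomes)
    fp = np.sum(flagged & ~outcomes)
    fn = np.sum(~flagged & outcomes)
    tn = np.sum(~flagged & ~outcomes)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv
```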
Defining a Reference Set to Support Methodological Research in Drug Safety
Background Methodological research to evaluate the performance of methods requires a benchmark to serve as a referent comparison. In drug safety, the performance of analyses of spontaneous adverse event reporting databases and observational healthcare data, such as administrative claims and electronic health records, has been limited by the lack of such standards. Objectives To establish a reference set of test cases that contain both positive and negative controls, which can serve as the basis for methodological research in evaluating method performance in identifying drug safety issues. Research Design Systematic literature review and natural language processing of structured product labeling were performed to identify evidence to support the classification of drugs as either positive controls or negative controls for four outcomes: acute liver injury, acute kidney injury, acute myocardial infarction, and upper gastrointestinal bleeding. Results Three hundred and ninety-nine test cases, comprising 165 positive controls and 234 negative controls, were identified across the four outcomes. The majority of positive controls for acute kidney injury and upper gastrointestinal bleeding were supported by randomized clinical trial evidence, while the majority of positive controls for acute liver injury and acute myocardial infarction were supported only by published case reports. Literature estimates for the positive controls show substantial variability, which limits the ability to establish a reference set with known effect sizes. Conclusions A reference set of test cases can be established to facilitate methodological research in drug safety. Creating a sufficient sample of drug-outcome pairs with binary classification of having no effect (negative controls) or having an increased effect (positive controls) is possible and can enable estimation of predictive accuracy through discrimination. Since the magnitude of the positive effects cannot be reliably obtained and the quality of evidence may vary across outcomes, assumptions are required to use the test cases in real data for purposes of measuring bias, mean squared error, or coverage probability.
CohortDiagnostics: Phenotype evaluation across a network of observational data sources using population-level characterization
This paper introduces a novel framework for evaluating phenotype algorithms (PAs) using the open-source tool CohortDiagnostics. The method is based on several diagnostic criteria to evaluate a patient cohort returned by a PA. Diagnostics include estimates of incidence rate, index date entry code breakdown, and prevalence of all observed clinical events prior to, on, and after index date. We test our framework by evaluating one PA for systemic lupus erythematosus (SLE) and two PAs for Alzheimer's disease (AD) across 10 different observational data sources. By utilizing CohortDiagnostics, we found that the population-level characteristics of individuals in the SLE cohort closely matched the disease's anticipated clinical profile. Specifically, the incidence rate of SLE was consistently higher among females. Moreover, expected clinical events like laboratory tests, treatments, and repeated diagnoses were also observed. For AD, although one PA identified considerably fewer patients, the absence of notable differences in clinical characteristics between the two cohorts suggested similar specificity. We provide a practical and data-driven approach to evaluate PAs, using two clinical diseases as examples, across a network of OMOP data sources. CohortDiagnostics can ensure the subjects identified by a specific PA align with those intended for inclusion in a research study. Diagnostics based on large-scale population-level characterization can offer insights into the misclassification errors of PAs.
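One of the diagnostics described above, prevalence of clinical events before, on, and after the index date, can be sketched as a window-based tabulation. The table layouts, column names, and 365-day windows below are assumptions for illustration and do not mirror the actual CohortDiagnostics schema.

```python
# Rough sketch of a characterization diagnostic: event prevalence in windows
# around the cohort index date.  Dates are assumed to be datetime64 columns.
import pandas as pd

def event_prevalence(cohort: pd.DataFrame, events: pd.DataFrame) -> pd.DataFrame:
    """cohort: person_id, index_date.  events: person_id, concept_name, event_date."""
    merged = events.merge(cohort, on="person_id")
    offset = (merged["event_date"] - merged["index_date"]).dt.days
    merged["window"] = pd.cut(offset, bins=[-365, -1, 0, 365],
                              labels=["pre-index", "index", "post-index"])
    n = cohort["person_id"].nunique()
    counts = (merged.dropna(subset=["window"])
              .groupby(["window", "concept_name"], observed=True)["person_id"]
              .nunique())
    return (counts / n).rename("prevalence").reset_index()
```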
Advancing Real-World Evidence Through a Federated Health Data Network (EHDEN): Descriptive Study
Real-world data (RWD) are increasingly used in health research and regulatory decision-making to assess the effectiveness, safety, and value of interventions in routine care. However, the heterogeneity of European health care systems, data capture methods, coding standards, and governance structures poses challenges for generating robust and reproducible real-world evidence. The European Health Data & Evidence Network (EHDEN) was established to address these challenges by building a large-scale federated data infrastructure that harmonizes RWD across Europe. This study aims to describe the composition and characteristics of the databases harmonized within EHDEN as of September 2024. We seek to provide transparency regarding the types of RWD available and their potential to support collaborative research and regulatory use. EHDEN recruited data partners through structured open calls. Selected data partners received funding and technical support to harmonize their data to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM), with assistance from certified small-to-medium enterprises trained through the EHDEN Academy. Each data source underwent an extract-transform-load process and data quality assessment using the data quality dashboard. Metadata, including country, care setting, capture method, and population criteria, were compiled in the publicly accessible EHDEN Portal. As of September 1, 2024, the EHDEN Portal includes 210 harmonized data sources from 30 countries. The highest representation comes from Italy (13%), Great Britain (12.5%), and Spain (11.5%). The mean number of persons per data source is 2,147,161, with a median of 457,664 individuals. Regarding care setting, 46.7% (n=98) of data sources reflect data exclusively from secondary care, 42.4% (n=89) from mixed care settings (both primary and secondary), and 11% (n=23) from primary care only. In terms of population inclusion criteria, 55.7% (n=117) of data sources include individuals based on health care encounters, 32.9% (n=69) through disease-specific data collection, and 11.4% (n=24) via population-based sources. Data capture methods also vary, with electronic health records (EHRs) being the most common. A total of 74.7% (n=157) of data sources use EHRs, and more than half of those (n=85) rely on EHRs as their sole method of data collection. Laboratory data are used in 29.5% (n=62) of data sources, although only one relies exclusively on laboratory data. Most laboratory-based data sources combine this method with other forms of data capture. EHDEN is the largest federated health data network in Europe, enabling standardized, General Data Protection Regulation-compliant analysis of RWD across diverse care settings and populations. This descriptive summary of the network's data sources enhances transparency and supports broader efforts to scale federated research. These findings demonstrate EHDEN's potential to enable collaborative studies and generate trusted evidence for public health and regulatory purposes.
A curated and standardized adverse drug event resource to accelerate drug safety research
Identification of adverse drug reactions (ADRs) during the post-marketing phase is one of the most important goals of drug safety surveillance. Spontaneous reporting systems (SRS) data, which are the mainstay of traditional drug safety surveillance, are used for hypothesis generation and to validate the newer approaches. The publicly available US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) data require substantial curation before they can be used appropriately, and applying different strategies for data cleaning and normalization can have material impact on analysis results. We provide a curated and standardized version of FAERS that removes duplicate case records, applies standardized vocabularies with drug names mapped to RxNorm concepts and outcomes mapped to SNOMED-CT concepts, and includes pre-computed summary statistics about drug-outcome relationships for general consumption. This publicly available resource, along with the source code, will accelerate drug safety research by reducing the amount of time spent performing data management on the source FAERS reports, improving the quality of the underlying data, and enabling standardized analyses using common vocabularies.
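A toy version of the curation steps described above: de-duplicate case reports, map raw drug names to a standard vocabulary via a lookup table, and pre-compute report counts per drug-outcome pair. The column names and the mapping table are hypothetical; the published resource performs these steps at full FAERS scale with standard vocabularies.

```python
# Sketch of FAERS-style curation: de-duplication, vocabulary mapping, and
# summary counts.  Assumed columns: case_id, version, raw_drug_name,
# outcome_code; rxnorm_map: raw_drug_name -> rxnorm_concept.
import pandas as pd

def curate_reports(reports: pd.DataFrame, rxnorm_map: pd.DataFrame) -> pd.DataFrame:
    # Keep only the latest version of each case to remove duplicate reports.
    latest = (reports.sort_values("version")
              .drop_duplicates(subset="case_id", keep="last"))
    # Standardize drug names to RxNorm concepts via the lookup table.
    mapped = latest.merge(rxnorm_map, on="raw_drug_name", how="inner")
    # Pre-compute simple summary statistics: report counts per drug-outcome pair.
    return (mapped.groupby(["rxnorm_concept", "outcome_code"])
            .size().rename("n_reports").reset_index())
```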
Feasibility and evaluation of a large-scale external validation approach for patient-level prediction in an international data network: validation of models predicting stroke in female patients newly diagnosed with atrial fibrillation
Background To demonstrate how the Observational Health Data Sciences and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. Methods Five previously published prognostic models (ATRIA, CHADS2, CHA2DS2-VASc, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. Results The five existing models were able to be integrated into the OHDSI framework for patient-level prediction, and they obtained mean c-statistics ranging between 0.57 and 0.63 across the six databases with sufficient data when predicting stroke within 1 year of initial atrial fibrillation diagnosis for females with atrial fibrillation. This was comparable with existing validation studies. The validation network study was run across nine datasets within 60 days once the models were replicated. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. Conclusion This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability or reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by one independent researcher. In this paper we show it is possible to both scale up and speed up external validation by showing how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
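External validation of the simplest of these models reduces to scoring patients in a new database and measuring discrimination. The sketch below uses the standard CHADS2 point assignments (CHF 1, hypertension 1, age >= 75 1, diabetes 1, prior stroke/TIA 2) and a c-statistic via AUC; the data-frame columns are hypothetical stand-ins for features extracted from an OMOP-formatted source, not the study's actual code.

```python
# Sketch: score CHADS2 from baseline flags and compute the c-statistic
# against observed one-year stroke in an external database.
import pandas as pd
from sklearn.metrics import roc_auc_score

def chads2(df: pd.DataFrame) -> pd.Series:
    """Standard CHADS2 points: CHF 1, hypertension 1, age >= 75 1,
    diabetes 1, prior stroke/TIA 2."""
    return (df["chf"] + df["hypertension"] + (df["age"] >= 75).astype(int)
            + df["diabetes"] + 2 * df["prior_stroke_tia"])

def external_validation_auc(df: pd.DataFrame) -> float:
    """df['stroke_1y'] = 1 if a stroke occurred within a year of AF diagnosis."""
    return roc_auc_score(df["stroke_1y"], chads2(df))
```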