125 result(s) for "Schuemie, Martijn"
Comprehensive comparative effectiveness and safety of first-line antihypertensive drug classes: a systematic, multinational, large-scale analysis
Uncertainty remains about the optimal monotherapy for hypertension, with current guidelines recommending any primary agent among the first-line drug classes thiazide or thiazide-like diuretics, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, dihydropyridine calcium channel blockers, and non-dihydropyridine calcium channel blockers, in the absence of comorbid indications. Randomised trials have not further refined this choice. We developed a comprehensive framework for real-world evidence that enables comparative effectiveness and safety evaluation across many drugs and outcomes from observational data encompassing millions of patients, while minimising inherent bias. Using this framework, we did a systematic, large-scale study under a new-user cohort design to estimate the relative risks of three primary (acute myocardial infarction, hospitalisation for heart failure, and stroke) and six secondary effectiveness and 46 safety outcomes, comparing all first-line classes across a global network of six administrative claims and three electronic health record databases. The framework addressed residual confounding, publication bias, and p-hacking using large-scale propensity adjustment, a large set of control outcomes, and full disclosure of hypotheses tested. Using 4·9 million patients, we generated 22 000 calibrated, propensity-score-adjusted hazard ratios (HRs) comparing all classes and outcomes across databases. Most estimates revealed no effectiveness differences between classes; however, thiazide or thiazide-like diuretics showed better primary effectiveness than angiotensin-converting enzyme inhibitors, with lower risk while on initial treatment of acute myocardial infarction (HR 0·84, 95% CI 0·75–0·95), hospitalisation for heart failure (0·83, 0·74–0·95), and stroke (0·83, 0·74–0·95). Safety profiles also favoured thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors. The non-dihydropyridine calcium channel blockers were significantly inferior to the other four classes. This comprehensive framework introduces a new way of doing observational health-care science at scale. The approach supports equivalence between drug classes for initiating monotherapy for hypertension, in keeping with current guidelines, with the exception of the superiority of thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors and the inferiority of non-dihydropyridine calcium channel blockers. Funding: US National Science Foundation, US National Institutes of Health, Janssen Research & Development, IQVIA, South Korean Ministry of Health & Welfare, Australian National Health and Medical Research Council.
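The framework described above ultimately reduces each class-versus-class comparison to an on-treatment hazard ratio estimated in a propensity-score-adjusted new-user cohort. As a rough illustration only, not the authors' LEGEND/OHDSI pipeline, the sketch below fits a Cox model to a hypothetical matched cohort; the column names (time_to_event, outcome, treatment) are placeholders.

```python
# Illustrative only: hazard ratio for one outcome in a propensity-score-matched
# new-user cohort. Column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def on_treatment_hazard_ratio(matched: pd.DataFrame):
    """HR and 95% CI for `treatment` (e.g. 1 = thiazide, 0 = ACE inhibitor)."""
    cph = CoxPHFitter()
    cph.fit(matched[["time_to_event", "outcome", "treatment"]],
            duration_col="time_to_event", event_col="outcome")
    log_hr = cph.params_["treatment"]
    ci_low, ci_high = cph.confidence_intervals_.loc["treatment"]
    return np.exp(log_hr), np.exp(ci_low), np.exp(ci_high)
```

In the study itself, each such estimate was then calibrated against negative-control outcomes before interpretation.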
Empirical confidence interval calibration for population-level effect estimation studies in observational healthcare data
Observational healthcare data, such as electronic health records and administrative claims, offer potential to estimate effects of medical products at scale. Observational studies have often been found to be nonreproducible, however, generating conflicting results even when using the same database to answer the same question. One source of discrepancies is error, both random (caused by sampling variability) and systematic (for example, because of confounding, selection bias, and measurement error). Only random error is typically quantified, but it converges to zero as databases become larger, whereas systematic error persists independent of sample size and therefore increases in relative importance. Negative controls are exposure–outcome pairs where one believes no causal effect exists; they can be used to detect multiple sources of systematic error, but interpreting their results is not always straightforward. Previously, we have shown that an empirical null distribution can be derived from a sample of negative controls and used to calibrate P values, accounting for both random and systematic error. Here, we extend this work to calibration of confidence intervals (CIs). CIs require positive controls, which we synthesize by modifying negative controls. We show that our CI calibration restores nominal characteristics, such as 95% coverage of the true effect size by the 95% CI. We furthermore show that CI calibration reduces disagreement in replications of two pairs of conflicting observational studies: one related to dabigatran, warfarin, and gastrointestinal bleeding and one related to selective serotonin reuptake inhibitors and upper gastrointestinal bleeding. We recommend CI calibration to improve reproducibility of observational studies.
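As a rough numerical illustration of the calibration idea: the published method (implemented in OHDSI's EmpiricalCalibration R package) uses maximum likelihood and synthesized positive controls so that systematic error can depend on the true effect size. The simplified sketch below fits a normal systematic-error distribution to negative-control log estimates only, then uses it to recentre and widen a confidence interval; all inputs are assumed example data.

```python
# Simplified sketch of empirical calibration: negative controls have true
# log effect 0, so each observed log estimate ~ N(mu, tau^2 + se_i^2).
import numpy as np
from scipy import optimize, stats

def fit_systematic_error(log_rr: np.ndarray, se: np.ndarray):
    """Fit mean/SD of systematic error from negative-control estimates."""
    def neg_log_lik(params):
        mu, log_tau = params
        var = np.exp(log_tau) ** 2 + se ** 2
        return 0.5 * np.sum(np.log(2 * np.pi * var) + (log_rr - mu) ** 2 / var)
    res = optimize.minimize(neg_log_lik, x0=[0.0, np.log(0.1)])
    return res.x[0], np.exp(res.x[1])          # (mu, tau)

def calibrated_ci(log_rr: float, se: float, mu: float, tau: float, alpha=0.05):
    """Shift by mu and widen by tau to account for systematic error."""
    total_se = np.sqrt(tau ** 2 + se ** 2)
    z = stats.norm.ppf(1 - alpha / 2)
    centre = log_rr - mu
    return np.exp(centre - z * total_se), np.exp(centre + z * total_se)
```

The calibrated interval is wider (and possibly shifted) than the nominal one, which is exactly the behaviour that restores coverage in the presence of residual bias.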
Risk of ischemic stroke and the use of individual non-steroidal anti-inflammatory drugs: A multi-country European database study within the SOS Project
A multi-country European study using data from six healthcare databases from four countries was performed to evaluate, in a large study population (>32 million), the risk of ischemic stroke (IS) associated with individual NSAIDs and to assess the impact of risk factors of IS and co-medication. The design was a case-control study nested in a cohort of new NSAID users. For each case, up to 100 sex- and age-matched controls were selected, and confounder-adjusted odds ratios for current use of individual NSAIDs compared to past use were calculated. 49,170 cases of IS were observed among 4,593,778 new NSAID users. Use of coxibs (odds ratio 1.08, 95%-confidence interval 1.02-1.15) and use of traditional NSAIDs (1.16, 1.12-1.19) were associated with an increased risk of IS. Among 32 individual NSAIDs evaluated, the highest significant risk of IS was observed for ketorolac (1.46, 1.19-1.78), but significantly increased risks (in decreasing order) were also found for diclofenac, indomethacin, rofecoxib, ibuprofen, nimesulide, diclofenac with misoprostol, and piroxicam. IS risk associated with NSAID use was generally higher in persons of younger age, males, and those with a prior history of IS. Risk of IS differs between individual NSAIDs and appears to be higher in patients with a prior history of IS or transient ischemic attack (TIA) and in younger or male patients. Co-medication with aspirin, other antiplatelets, or anticoagulants might mitigate this risk. The small to moderate observed risk increase (by 13-46%) associated with NSAID use represents a public health concern due to widespread NSAID usage.
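For orientation only: the adjusted odds ratios above come from regression on matched case-control sets, but the basic current-versus-past-use contrast reduces to an odds ratio. The sketch below computes a crude OR with a Woolf (log-scale) 95% CI from a 2x2 table; it deliberately ignores the matching and confounder adjustment used in the study, and the counts are made up.

```python
# Crude odds ratio with a Woolf (log-scale) 95% CI; counts are illustrative.
import numpy as np
from scipy import stats

def crude_or(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se_log = np.sqrt(1 / exposed_cases + 1 / exposed_controls
                     + 1 / unexposed_cases + 1 / unexposed_controls)
    z = stats.norm.ppf(0.975)
    return or_, np.exp(np.log(or_) - z * se_log), np.exp(np.log(or_) + z * se_log)

# e.g. current vs past use of one NSAID among cases and matched controls:
print(crude_or(exposed_cases=120, exposed_controls=9000,
               unexposed_cases=400, unexposed_controls=40000))
```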
Risk of acute myocardial infarction during use of individual NSAIDs: A nested case-control study from the SOS project
Use of selective COX-2 non-steroidal anti-inflammatory drugs (NSAIDs; coxibs) has been associated with an increased risk of acute myocardial infarction (AMI). However, the risk of AMI has been studied for only a few frequently used NSAIDs. To estimate the risk of AMI for individual NSAIDs, a nested case-control study was performed from a cohort of new NSAID users ≥18 years (1999-2011), matching cases to a maximum of 100 controls on database, sex, age, and calendar time. Data were retrieved from six healthcare databases. Adjusted odds ratios (ORs) of current use of individual NSAIDs compared to past use were estimated per database. Pooling was done by two-stage pooling using a random effects model (ORmeta) and by one-stage pooling (ORpool). Among 8.5 million new NSAID users, 79,553 AMI cases were identified. The risk was elevated for current use of ketorolac (ORmeta 2.06; 95% CI 1.83-2.32, ORpool 1.80; 1.49-2.18) followed, in descending order of point estimate, by indometacin, etoricoxib, rofecoxib, diclofenac, fixed combination of diclofenac with misoprostol, piroxicam, ibuprofen, naproxen, celecoxib, meloxicam, nimesulide, and ketoprofen (ORmeta 1.12; 1.03-1.22, ORpool 1.00; 0.86-1.16). Higher doses showed higher risk estimates than lower doses. The relative risk estimates of AMI differed slightly between 28 individual NSAIDs. The relative risk was highest for ketorolac and was correlated with COX-2 potency, but not restricted to coxibs.
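The two-stage pooling mentioned above combines per-database odds ratios under a random-effects model. A minimal sketch of one common choice, the DerSimonian-Laird estimator, is shown below; the inputs are hypothetical per-database log-ORs and standard errors, not the SOS project's actual code.

```python
# DerSimonian-Laird random-effects pooling of per-database log odds ratios.
import numpy as np
from scipy import stats

def pool_random_effects(log_or: np.ndarray, se: np.ndarray):
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)            # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-database variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    z = stats.norm.ppf(0.975)
    return (np.exp(pooled),
            np.exp(pooled - z * se_pooled),
            np.exp(pooled + z * se_pooled))
```

One-stage pooling, by contrast, fits a single model to the combined patient-level data rather than combining database-level estimates.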
CohortDiagnostics: Phenotype evaluation across a network of observational data sources using population-level characterization
This paper introduces a novel framework for evaluating phenotype algorithms (PAs) using the open-source tool CohortDiagnostics. The method is based on several diagnostic criteria to evaluate a patient cohort returned by a PA. Diagnostics include estimates of incidence rate, index date entry code breakdown, and prevalence of all observed clinical events prior to, on, and after index date. We test our framework by evaluating one PA for systemic lupus erythematosus (SLE) and two PAs for Alzheimer's disease (AD) across 10 different observational data sources. By utilizing CohortDiagnostics, we found that the population-level characteristics of individuals in the SLE cohort closely matched the disease's anticipated clinical profile. Specifically, the incidence rate of SLE was consistently higher among females. Moreover, expected clinical events like laboratory tests, treatments, and repeated diagnoses were also observed. For AD, although one PA identified considerably fewer patients, the absence of notable differences in clinical characteristics between the two cohorts suggested similar specificity. We provide a practical and data-driven approach to evaluate PAs, using two clinical diseases as examples, across a network of OMOP data sources. CohortDiagnostics can ensure the subjects identified by a specific PA align with those intended for inclusion in a research study. Diagnostics based on large-scale population-level characterization can offer insights into the misclassification errors of PAs.
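Two of the diagnostics named above, incidence rate and prevalence of clinical events around the index date, are simple population-level summaries. The sketch below shows one plausible pandas formulation; it is not the CohortDiagnostics implementation, and the table and column names are assumptions.

```python
# Illustrative cohort diagnostics. Assumed inputs: a cohort table with
# person_id and index_date (datetime), and an events table with person_id,
# event_date (datetime), and concept_name.
import pandas as pd

def incidence_rate_per_1000py(cohort: pd.DataFrame, background_person_years: float) -> float:
    """New cohort entries per 1,000 person-years of background observation."""
    return 1000.0 * len(cohort) / background_person_years

def prior_event_prevalence(cohort: pd.DataFrame, events: pd.DataFrame, days: int = 365) -> pd.Series:
    """Share of cohort members with each event in the window before index."""
    merged = events.merge(cohort[["person_id", "index_date"]], on="person_id")
    delta = (merged["index_date"] - merged["event_date"]).dt.days
    prior = merged[(delta > 0) & (delta <= days)]
    return prior.groupby("concept_name")["person_id"].nunique() / len(cohort)
```

A phenotype whose prior-event profile lacks the expected laboratory tests or treatments would be flagged for review under this kind of check.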
Characterizing treatment pathways at scale using the OHDSI network
Observational research promises to complement experimental research by providing large, diverse populations that would be infeasible for an experiment. Observational research can test its own clinical hypotheses, and observational studies can also contribute to the design of experiments and inform the generalizability of experimental research. Understanding the diversity of populations and the variance in care is one component of that contribution. In this study, the Observational Health Data Sciences and Informatics (OHDSI) collaboration created an international data network with 11 data sources from four countries, including electronic health records and administrative claims data on 250 million patients. All data were mapped to common data standards, patient privacy was maintained by using a distributed model, and results were aggregated centrally. Treatment pathways were elucidated for type 2 diabetes mellitus, hypertension, and depression. The pathways revealed that the world is moving toward more consistent therapy over time across diseases and across locations, but significant heterogeneity remains among sources, pointing to challenges in generalizing clinical trial results. Diabetes favored a single first-line medication, metformin, to a much greater extent than hypertension or depression. About 10% of diabetes and depression patients and almost 25% of hypertension patients followed a treatment pathway that was unique within the cohort. Aside from factors such as sample size and underlying population (academic medical center versus general population), electronic health records data and administrative claims data revealed similar results. Large-scale international observational research is feasible.
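A treatment pathway here is the ordered sequence of distinct drug classes a patient receives after diagnosis. As a rough sketch, not OHDSI's actual pathway study code, the snippet below derives such sequences from a hypothetical drug-exposure table and tabulates the most common ones.

```python
# Derive ordered, de-duplicated treatment sequences per patient and count them.
# Assumed columns: person_id, drug_class, start_date (datetime).
import pandas as pd

def pathway_counts(drug_exposures: pd.DataFrame) -> pd.Series:
    ordered = drug_exposures.sort_values(["person_id", "start_date"])

    def sequence(classes: pd.Series) -> str:
        seen, path = set(), []
        for c in classes:
            if c not in seen:          # keep only the first occurrence of each class
                seen.add(c)
                path.append(c)
        return " -> ".join(path)

    pathways = ordered.groupby("person_id")["drug_class"].apply(sequence)
    return pathways.value_counts()     # e.g. "metformin" alone dominating in diabetes
```

In a federated network, each site would run the same derivation locally and share only the aggregated counts, consistent with the distributed privacy model described above.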
Applied comparison of large‐scale propensity score matching and cardinality matching for causal inference in observational research
Background: Cardinality matching (CM), a novel matching technique, finds the largest matched sample meeting prespecified balance criteria, thereby overcoming limitations of propensity score matching (PSM) associated with limited covariate overlap, which are especially pronounced in studies with small sample sizes. The current study proposes a framework for large-scale CM (LS-CM) and compares large-scale PSM (LS-PSM) and LS-CM in terms of post-match sample size, covariate balance, and residual confounding at progressively smaller sample sizes. Methods: Evaluation of LS-PSM and LS-CM within a comparative cohort study of new users of angiotensin-converting enzyme inhibitor (ACEI) and thiazide or thiazide-like diuretic monotherapy identified from a U.S. insurance claims database. Candidate covariates included patient demographics and all observed prior conditions, drug exposures, and procedures. Propensity scores were calculated using LASSO regression, and candidate covariates with non-zero beta coefficients in the propensity model were defined as matching covariates for use in LS-CM. One-to-one matching was performed using progressively tighter parameter settings. Covariate balance was assessed using standardized mean differences. Hazard ratios for negative control outcomes presumed to be unassociated with treatment (i.e., true hazard ratio of 1) were estimated using unconditional Cox models. Residual confounding was assessed using the expected systematic error of the empirical null distribution of negative control effect estimates compared to the ground truth. To simulate diverse research conditions, analyses were repeated within 10%, 1%, and 0.5% subsample groups with increasingly limited covariate overlap. Results: A total of 172,117 patients (ACEI: 129,078; thiazide: 43,039) met the study criteria. Compared to LS-PSM, LS-CM was associated with increased sample retention. Although LS-PSM achieved balance across all matching covariates within the full study population, substantial matching covariate imbalance was observed within the 1% and 0.5% subsample groups. Meanwhile, LS-CM achieved matching covariate balance across all analyses. LS-PSM was associated with better candidate covariate balance within the full study population. Otherwise, both matching techniques achieved comparable candidate covariate balance and expected systematic error. Conclusions: LS-CM found the largest matched sample meeting prespecified balance criteria while achieving comparable candidate covariate balance and residual confounding. We recommend LS-CM as an alternative to LS-PSM in studies with small sample sizes or limited covariate overlap.
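The propensity-score arm of the comparison above can be sketched in a few lines: fit an L1-penalised (LASSO) logistic regression over a large candidate covariate set, match 1:1 on the resulting score, and check balance with standardized mean differences. This is a simplified illustration with assumed inputs and a fixed regularisation strength, not the study's LS-PSM implementation.

```python
# Simplified LS-PSM sketch: X is an (n_patients x n_covariates) matrix,
# treated is a 0/1 array. Greedy 1:1 nearest-neighbour matching on the
# propensity score, then standardized mean differences to check balance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def lasso_propensity_scores(X, treated):
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
    model.fit(X, treated)
    return model.predict_proba(X)[:, 1]

def match_one_to_one(ps, treated):
    t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, pos = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    return t_idx, c_idx[pos.ravel()]   # without-replacement handling omitted

def standardized_mean_difference(x_treated, x_control):
    pooled_sd = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return (np.mean(x_treated) - np.mean(x_control)) / pooled_sd
```

Cardinality matching replaces the score-based nearest-neighbour step with an optimization that selects the largest subset satisfying balance constraints (for example, SMD below 0.1 on every matching covariate) directly.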
Improving reproducibility by using high-throughput observational studies with empirical calibration
Concerns over reproducibility in science extend to research using existing healthcare data; many observational studies investigating the same topic produce conflicting results, even when using the same data. To address this problem, we propose a paradigm shift. The current paradigm centres on generating one estimate at a time using a unique study design with unknown reliability and publishing (or not) one estimate at a time. The new paradigm advocates for high-throughput observational studies using consistent and standardized methods, allowing evaluation, calibration and unbiased dissemination to generate a more reliable and complete evidence base. We demonstrate this new paradigm by comparing all depression treatments for a set of outcomes, producing 17 718 hazard ratios, each using methodology on par with current best practice. We furthermore include control hypotheses to evaluate and calibrate our evidence generation process. Results show good transitivity and consistency between databases, and agree with four out of the five findings from clinical trials. The distribution of effect size estimates reported in the literature reveals an absence of small or null effects, with a sharp cut-off at p = 0.05. No such phenomena were observed in our results, suggesting more complete and more reliable evidence. This article is part of a discussion meeting issue 'The growing ubiquity of algorithms in society: implications, impacts and innovations'.
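Complementing the CI calibration sketched earlier, the control hypotheses mentioned above also support calibration of p values: fit an empirical null distribution to the negative-control estimates and ask how extreme a new estimate is under that null rather than under the theoretical null. The sketch below assumes the systematic error parameters (mu, tau) have already been fitted, for example as in the earlier snippet; it is an approximation of the published method, with made-up numbers in the example call.

```python
# Calibrated two-sided p value under an empirical null N(mu, tau^2 + se^2),
# where (mu, tau) summarize systematic error fitted from negative controls.
import numpy as np
from scipy import stats

def calibrated_p_value(log_hr: float, se: float, mu: float, tau: float) -> float:
    total_sd = np.sqrt(tau**2 + se**2)
    z = abs(log_hr - mu) / total_sd
    return 2 * stats.norm.sf(z)

# Example: a nominally "significant" estimate can become null after calibration.
print(calibrated_p_value(log_hr=np.log(1.15), se=0.05, mu=0.10, tau=0.10))
```

Applied uniformly across thousands of estimates, this kind of calibration removes the artificial excess of just-significant results that the abstract notes in the published literature.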
Prenatal antidepressant use and risk of attention-deficit/hyperactivity disorder in offspring: population based cohort study
Objective: To assess the potential association between prenatal use of antidepressants and the risk of attention-deficit/hyperactivity disorder (ADHD) in offspring. Design: Population based cohort study. Setting: Data from the Hong Kong population based electronic medical records on the Clinical Data Analysis and Reporting System. Participants: 190 618 children born in Hong Kong public hospitals between January 2001 and December 2009 and followed up to December 2015. Main outcome measure: Hazard ratio of ADHD in children aged 6 to 14 years associated with maternal antidepressant use during pregnancy, with an average follow-up time of 9.3 years (range 7.4-11.0 years). Results: Among 190 618 children, 1252 had a mother who used prenatal antidepressants. 5659 children (3.0%) were given a diagnosis of ADHD or received treatment for ADHD. The crude hazard ratio of maternal antidepressant use during pregnancy was 2.26 (P<0.01) compared with non-use. After adjustment for potential confounding factors, including maternal psychiatric disorders and use of other psychiatric drugs, the adjusted hazard ratio was reduced to 1.39 (95% confidence interval 1.07 to 1.82, P=0.01). Likewise, similar results were observed when comparing children of mothers who had used antidepressants before pregnancy with those who were never users (1.76, 1.36 to 2.30, P<0.01). The risk of ADHD in the children of mothers with psychiatric disorders was higher compared with the children of mothers without psychiatric disorders, even if the mothers had never used antidepressants (1.84, 1.54 to 2.18, P<0.01). All sensitivity analyses yielded similar results. Sibling matched analysis identified no significant difference in risk of ADHD between siblings exposed to antidepressants during gestation and those not exposed (0.54, 0.17 to 1.74, P=0.30). Conclusions: The findings suggest that the association between prenatal use of antidepressants and risk of ADHD in offspring can be partially explained by confounding by indication of antidepressants. If there is a causal association, the size of the effect is probably smaller than that reported previously.
Advancing Real-World Evidence Through a Federated Health Data Network (EHDEN): Descriptive Study
Real-world data (RWD) are increasingly used in health research and regulatory decision-making to assess the effectiveness, safety, and value of interventions in routine care. However, the heterogeneity of European health care systems, data capture methods, coding standards, and governance structures poses challenges for generating robust and reproducible real-world evidence. The European Health Data & Evidence Network (EHDEN) was established to address these challenges by building a large-scale federated data infrastructure that harmonizes RWD across Europe. This study aims to describe the composition and characteristics of the databases harmonized within EHDEN as of September 2024. We seek to provide transparency regarding the types of RWD available and their potential to support collaborative research and regulatory use. EHDEN recruited data partners through structured open calls. Selected data partners received funding and technical support to harmonize their data to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM), with assistance from certified small-to-medium enterprises trained through the EHDEN Academy. Each data source underwent an extract-transform-load process and data quality assessment using the Data Quality Dashboard. Metadata (including country, care setting, capture method, and population criteria) were compiled in the publicly accessible EHDEN Portal. As of September 1, 2024, the EHDEN Portal includes 210 harmonized data sources from 30 countries. The highest representation comes from Italy (13%), Great Britain (12.5%), and Spain (11.5%). The mean number of persons per data source is 2,147,161, with a median of 457,664 individuals. Regarding care setting, 46.7% (n=98) of data sources reflect data exclusively from secondary care, 42.4% (n=89) from mixed care settings (both primary and secondary), and 11% (n=23) from primary care only. In terms of population inclusion criteria, 55.7% (n=117) of data sources include individuals based on health care encounters, 32.9% (n=69) through disease-specific data collection, and 11.4% (n=24) via population-based sources. Data capture methods also vary, with electronic health records (EHRs) being the most common. A total of 74.7% (n=157) of data sources use EHRs, and more than half of those (n=85) rely on EHRs as their sole method of data collection. Laboratory data are used in 29.5% (n=62) of data sources, although only one relies exclusively on laboratory data. Most laboratory-based data sources combine this method with other forms of data capture. EHDEN is the largest federated health data network in Europe, enabling standardized, General Data Protection Regulation-compliant analysis of RWD across diverse care settings and populations. This descriptive summary of the network's data sources enhances transparency and supports broader efforts to scale federated research. These findings demonstrate EHDEN's potential to enable collaborative studies and generate trusted evidence for public health and regulatory purposes.
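The harmonization step described above ends with automated checks against the OMOP CDM tables. As one illustrative and much simplified example of the kind of plausibility check a data quality assessment might run, the query below counts persons with a missing or implausible year of birth in the CDM person table; it is not taken from the Data Quality Dashboard itself, and sqlite3 stands in for whatever database driver a site actually uses.

```python
# One plausibility check of the kind a data-quality assessment might run
# against an OMOP CDM source (illustrative, not a Data Quality Dashboard rule).
import sqlite3  # stand-in for the site's actual database driver

CHECK_SQL = """
SELECT COUNT(*) AS n_implausible
FROM person
WHERE year_of_birth IS NULL
   OR year_of_birth < 1900
   OR year_of_birth > CAST(strftime('%Y', 'now') AS INTEGER);
"""

def implausible_birth_years(db_path: str) -> int:
    with sqlite3.connect(db_path) as conn:
        return conn.execute(CHECK_SQL).fetchone()[0]
```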