Catalogue Search | MBRL
27 result(s) for "Sammon, Cormac"
Can real-world data really replace randomised clinical trials?
by Ramagopalan, Sreeram V.; Sammon, Cormac; Simpson, Alex
in Analytical methods; Bias; Biomarkers
2020
Background
Classically, randomised controlled trials (RCTs) are considered the gold standard for demonstrating product efficacy for the regulatory approval of medicines. However, as personalised medicine becomes increasingly common, patient recruitment into RCTs is affected and, sometimes, it is not possible to include a control arm [1]. Real-world data (RWD) are data collected outside of RCTs [2]; they are gaining increasing attention for their use in regulatory decision-making. The United States 21st Century Cures Act mandated that the US Food and Drug Administration (FDA) provide guidance about the circumstances under which manufacturers can use RWD to support the approval of a medicine. More recently, investigators from the European Medicines Agency (EMA) detailed their views on this topic [3].
RWD for regulatory approval: opportunities and challenges
Eichler et al., from the EMA, state that "the RCT will, in our view, remain the best available standard and be required in many circumstances, but will need to be complemented by other methodologies to address research questions where a traditional RCT may be unfeasible or unethical." Thus, the gauntlet has been laid down for RWD to be used to support European regulatory approval. Indeed, RWD has been used by the EMA to approve several medicines for rare/orphan indications [4]. Eichler and colleagues, however, highlight that RWD methods must be critically appraised before they can be more widely accepted. They suggest that this appraisal can be undertaken via prospective validation of any proposed method with a pre-defined protocol.
Why the need for validation?
Studies of the concordance between the results of RCTs and RWD studies investigating the same research question have given mixed results [5, 6]. It has been suggested that this discordance can be attributed to differences in the populations being investigated, or to bias in RWD studies resulting from the lack of randomisation.
Using the example of cancer risk in statin users, Dickerman and co-workers attempted to understand why RWD studies have shown a protective effect while RCTs showed no effect on neoplasm incidence [7]. One of the key principles of an RCT is to assess patient characteristics at baseline to check study eligibility against inclusion/exclusion criteria. If eligibility is met, subjects are randomised into groups and subsequently treated as assigned. Dickerman et al. operationalised a similar 'target trial' approach using RWD, following up trial-eligible new users and non-users of statins to compare rates of cancer between these groups. Performing the analysis in this way enabled the researchers to show that results from RWD were in agreement with those from RCTs. Furthermore, previously reported differences were largely a result of two avoidable issues, immortal time bias and selection bias caused by the inclusion of prevalent statin users (prevalent users had to have survived without cancer up to baseline, leading to artificially lower rates of cancer in the statin group), rather than being attributable to the lack of randomisation per se. As Dickerman et al. acknowledge, a limitation is that, for the outcome they studied, confounding by indication (whereby the reason for prescribing a patient medication is also associated with the outcome of interest) is unlikely to play a major role. Where the outcome is more likely to be affected by confounding by indication, RWD studies must carefully adjust for all baseline confounders in order to mimic the randomisation element of an RCT and appropriately compare treatment groups. In this regard, Carrigan et al. recently reported results exploring a research question more likely to be affected by confounding by indication [8]: whether control groups generated from RWD could approximate the control arms used in published RCTs in non-small cell lung cancer.
In 10 of the 11 analyses conducted, hazard ratio estimates for overall survival derived from comparing RWD control arms with the intervention arm from the RCT were similar to those seen in the original RCT comparison. However, the analyses showed that a simple 'target trial' alignment of the RWD arm with the trial inclusion/exclusion criteria could not fully replicate the RCT effect estimate; additional adjustment for confounding using propensity scores was required. The single non-concordant analysis was thought to involve a biomarker that was likely enriched in the RCT but was not recorded in the RWD and therefore could not be adjusted for. This exception to the otherwise consistent RWD and RCT findings highlights the need for RWD with information available on all plausible confounders to avoid generating inaccurate results. These two recent studies show that analytical methods and approaches are in place to enable consistency between RCT and RWD results. Further evidence will arise from the FDA-funded RCT DUPLICATE project, which will investigate RCT–RWD concordance on a larger scale [9]. In light of this, the question arises: how many examples are required before regulators can begin to accept RWD for regulatory decision-making? Eichler et al. state that the answer is unlikely to be simple: decision-makers should perhaps first accept RWD analyses in situations with relatively small impact (e.g. label expansion) and then gradually expand acceptability as confidence in the methods grows.
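The confounding problem discussed above can be illustrated with a small worked example (entirely hypothetical counts, not data from the cited studies): a crude comparison looks protective simply because treated patients are concentrated in a low-risk stratum, while standardising over the confounder recovers the true null effect.

```python
# Hypothetical sketch: why a crude real-world comparison can look protective
# when treatment assignment depends on a confounder, and how standardisation
# over the confounder removes the bias. All counts are invented.

def risk(events, n):
    return events / n

# Stratum Z=0 (low-risk, mostly treated): outcome risk 0.10 regardless of treatment.
# Stratum Z=1 (high-risk, mostly untreated): outcome risk 0.40 regardless of treatment.
strata = {
    # z: (treated_n, treated_events, untreated_n, untreated_events)
    0: (80, 8, 20, 2),
    1: (20, 8, 80, 32),
}

# Crude (unadjusted) risk ratio pools the strata.
treated_events = sum(s[1] for s in strata.values())
treated_n = sum(s[0] for s in strata.values())
untreated_events = sum(s[3] for s in strata.values())
untreated_n = sum(s[2] for s in strata.values())
crude_rr = risk(treated_events, treated_n) / risk(untreated_events, untreated_n)

# Standardised risk ratio: apply each stratum-specific risk to the whole
# population's confounder distribution (here 100 patients per stratum).
weights = {z: (s[0] + s[2]) for z, s in strata.items()}
total = sum(weights.values())
std_treated = sum(weights[z] / total * risk(s[1], s[0]) for z, s in strata.items())
std_untreated = sum(weights[z] / total * risk(s[3], s[2]) for z, s in strata.items())
adjusted_rr = std_treated / std_untreated

print(round(crude_rr, 2))     # 0.47: spurious "protective" effect
print(round(adjusted_rr, 2))  # 1.0: no effect once the confounder is balanced
```

Propensity-score adjustment, as used by Carrigan et al., generalises this idea to many confounders at once by balancing the treated and untreated groups on their estimated probability of treatment.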
Journal Article
Evaluating the Hazard of Foetal Death following H1N1 Influenza Vaccination; A Population Based Cohort Study in the UK GPRD
by Sammon, Cormac J.; Snowball, Julia; McGrogan, Anita
in Cohort analysis; Cohort Studies; Comparative analysis
2012
To evaluate the risk of foetal loss associated with pandemic influenza vaccination in pregnancy. Retrospective cohort study in the UK General Practice Research Database. Pregnancies ending in delivery or spontaneous foetal death after 21 October 2009 and starting before 01 January 2010 were included.
Hazard ratios of foetal death for vaccinated compared to unvaccinated pregnancies were estimated for gestational weeks 9 to 12, 13 to 24 and 25 to 43 using discrete-time survival analysis. Separate models were specified to evaluate whether the potential effect of vaccination on foetal loss might be transient (for ~4 weeks post vaccination only) or more permanent (for the duration of the pregnancy). 39,863 pregnancies meeting our inclusion criteria contributed a total of 969,322 gestational weeks during the study period. 9,445 of the women were vaccinated before or during pregnancy. When the potential effect of vaccination was assumed to be transient, the hazard of foetal death during gestational weeks 9 through 12 (unadjusted HR 0.56; 95% CI 0.43 to 0.73) and 13 through 24 (unadjusted HR 0.45; 95% CI 0.28 to 0.73) was lower in the 4 weeks after vaccination than in other weeks. Where the more permanent exposure definition was specified, vaccinated pregnancies also had a lower hazard of foetal loss than unvaccinated pregnancies in gestational weeks 9 through 12 (unadjusted HR 0.74; 95% CI 0.62 to 0.88) and 13 through 24 (unadjusted HR 0.59; 95% CI 0.45 to 0.77). There was no difference in the hazard of foetal loss during weeks 25 to 43 in either model. Sensitivity analyses suggest the strong protective associations observed may be due in part to unmeasured confounding.
Influenza vaccination during pregnancy does not appear to increase the risk of foetal death. This study therefore supports the continued recommendation of influenza vaccination of pregnant women.
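For readers unfamiliar with the discrete-time survival approach used in this study: it amounts to expanding each pregnancy into one record per gestational week and fitting a logistic model to the weekly event indicator. A minimal sketch of the expansion step, with hypothetical field names and data (not the study's):

```python
# Hypothetical sketch of the person-period expansion behind a discrete-time
# survival analysis: each pregnancy becomes one row per gestational week,
# with a time-varying "recently vaccinated" indicator covering the 4 weeks
# after vaccination (mirroring the transient-effect model described above).

def expand(pregnancy_id, last_week, vacc_week=None, event_week=None):
    """Return one record per gestational week up to delivery or loss."""
    rows = []
    for week in range(1, last_week + 1):
        recently_vaccinated = (
            vacc_week is not None and vacc_week <= week < vacc_week + 4
        )
        rows.append({
            "id": pregnancy_id,
            "week": week,
            "recently_vaccinated": recently_vaccinated,
            "foetal_death": event_week == week,
        })
    return rows

# A pregnancy vaccinated in week 10, followed to week 14 with no event:
records = expand("preg-1", last_week=14, vacc_week=10)
exposed_weeks = [r["week"] for r in records if r["recently_vaccinated"]]
print(exposed_weeks)  # [10, 11, 12, 13]: the transient exposure window
```

A logistic regression of the weekly event indicator on the exposure indicator (plus gestational week) over these person-period rows then yields discrete-time hazard ratios of the kind reported above.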
Journal Article
Initiation and duration of selective serotonin reuptake inhibitor prescribing over time: UK cohort study
2016
Recent media reports have focused on the large increase in antidepressants dispensed in England. We investigated this, focusing on selective serotonin reuptake inhibitors (SSRIs).
To examine the rate of initiation of SSRIs over time and changes over time in the duration of prescribing episodes.
We estimated initiation and duration of SSRI prescribing from 7 025 802 individuals aged over 18 years and registered with a general practice that contributed data to The Health Improvement Network.
Rates of SSRI initiation increased from 1.03 per 100 person-years in 1995 to 2.15 in 2001, but remained stable from then until 2012. The median duration of prescribing episodes increased from 112 to 169 days for episodes starting between 1995 and 2010.
Despite media reports describing an increasing rate of antidepressant prescribing, SSRI initiation rates have stabilised since 2001. However, our results suggest that individuals who take SSRIs are receiving treatment for longer.
Journal Article
The value of innovation: association between improvements in survival of advanced and metastatic non-small cell lung cancer and targeted and immunotherapy
2021
Background
Significant improvements in mortality among patients with non-small cell lung cancer (NSCLC) in the USA over the past two decades have been reported based on Surveillance, Epidemiology, and End Results (SEER) data. The timing of these improvements led to suggestions that they result from the introduction of new treatments; however, few studies have directly investigated this. The aim of this study was to investigate the extent to which population level improvements in survival of advanced and/or metastatic NSCLC (admNSCLC) patients were associated with changes in treatment patterns.
Methods
We utilized a de-identified database to select three cohorts of patients with admNSCLC: (1) patients with non-oncogene (EGFR/ALK/ROS1/BRAF) positive tumors, (2) patients with ALK-positive (ALK+) tumors, and (3) patients with EGFR-positive (EGFR+) tumors. All patients were diagnosed with admNSCLC between 2012 and 2019. Multivariable Cox models adjusting for baseline characteristics and receipt of targeted and immunotherapy were utilized to explore the relationship between these variables and changes in the hazard of death by calendar year in each cohort.
Results
We included 28,154 admNSCLC patients with non-oncogene positive tumors, 598 with ALK+ tumors, and 2464 with EGFR+ tumors eligible for analysis. After adjustment for differences in baseline characteristics, the hazard of death in patients with non-oncogene positive tumors diagnosed in 2015, 2016, 2017, 2018, and 2019 was 12%, 11%, 17%, 20%, and 21% lower, respectively, than for those diagnosed in 2012. Upon additionally adjusting for receipt of first-line or second-line immunotherapy, the decrease in the hazard of death by calendar year was no longer observed, suggesting that the improvements in survival observed over time may be explained by the introduction of these treatments. Similarly, decreases in the hazard of death were observed only in patients with ALK+ tumors diagnosed between 2017 and 2019 relative to 2012, and were no longer observed following adjustment for the use of first- and later-generation ALK inhibitors. Among patients with EGFR+ tumors, the hazard of death did not improve significantly over time.
Conclusion
Our findings expand on the SEER data and provide additional evidence suggesting improvements in survival of patients with advanced and metastatic NSCLC over the past decade could be explained by the change in treatment patterns over this period.
Journal Article
Estimating the prevalence of diagnosed Alzheimer disease in England across deprivation groups using electronic health records: a clinical practice research datalink study
by Sammon, Cormac; Ballard, Clive; Gsteiger, Sandro
in Alzheimer Disease - diagnosis; Alzheimer Disease - epidemiology; Alzheimer's disease
2023
Objective
To estimate the prevalence of diagnosed Alzheimer's disease (AD) and early Alzheimer's disease (eAD), overall and stratified by age, sex and deprivation and combinations thereof, in England on 1 January 2020.
Design
Cross-sectional study.
Setting
Primary care electronic health record data from the Clinical Practice Research Datalink (CPRD), linked with secondary care data (Hospital Episode Statistics, HES) and patient-level deprivation data (Index of Multiple Deprivation, IMD).
Outcome measures
The prevalence per 100,000 of the population and corresponding 95% CIs for both diagnosed AD and eAD, overall and stratified by covariates. Sensitivity analyses were conducted to assess the sensitivity of the results to the population definition and look-back period.
Results
There were 448,797 patients identified in the CPRD who satisfied the study inclusion criteria and were eligible for HES and IMD linkage. For the main analysis of AD and eAD, 379,763 patients were eligible for inclusion in the denominator. This resulted in an estimated prevalence of diagnosed AD of 378.39 (95% CI 359.36 to 398.44) per 100,000 and of eAD of 292.81 (95% CI 276.12 to 310.52) per 100,000. Prevalence estimates across the main and sensitivity analyses for the entire AD study population varied widely, ranging from 137.48 (95% CI 127.05 to 148.76) to 796.55 (95% CI 768.77 to 825.33). There was significant variation in the prevalence of diagnosed eAD when assessing sensitivity to the look-back period, from as low as 120.54 (95% CI 110.80 to 131.14) per 100,000 to as high as 519.01 (95% CI 496.64 to 542.37) per 100,000.
Conclusions
The study found relatively consistent patterns of prevalence across both the AD and eAD populations. Generally, the prevalence of diagnosed AD increased with age and, within each age category, increased with deprivation. Women had a higher prevalence than men. More granular levels of stratification reduced patient numbers and increased the uncertainty of point prevalence estimates. Despite this, the study found a relationship between deprivation and the prevalence of AD.
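Point prevalence figures of the form reported above (X per 100,000 with a 95% CI) derive from a case count and a denominator. A minimal sketch with hypothetical counts (the simple Wald interval is used for clarity; the study's exact interval method is not stated):

```python
import math

def prevalence_per_100k(cases, population, z=1.96):
    """Point prevalence per 100,000 with a normal-approximation 95% CI.
    A Wilson or exact binomial interval would be preferable for rare
    outcomes; this sketch uses the simple Wald form for clarity."""
    p = cases / population
    se = math.sqrt(p * (1 - p) / population)
    scale = 100_000
    return p * scale, (p - z * se) * scale, (p + z * se) * scale

# Hypothetical counts, not taken from the study:
point, lo, hi = prevalence_per_100k(cases=1500, population=400_000)
print(f"{point:.2f} (95% CI {lo:.2f} to {hi:.2f}) per 100,000")
```

The widening of CIs under "more granular levels of stratification" noted above falls out of the same formula: smaller stratum denominators inflate the standard error.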
Journal Article
The use of UK primary care databases in health technology assessments carried out by the National Institute for Health and Care Excellence (NICE)
by Ramagopalan, Sreeram; Leahy, Thomas P.; Sammon, Cormac
in Appraisals; Clinical medicine; CPRD
2020
Background
Real world evidence (RWE) is becoming more frequently used in technology appraisals (TAs). This study sought to explore the use and acceptance of evidence from primary care databases, a key source of RWE in the UK, in National Institute for Health and Care Excellence (NICE) technology assessments and to provide recommendations regarding their use in future submissions.
Methods
A keyword search for the main UK primary care databases was conducted on the NICE website. All NICE TAs identified through this search were screened and assessed for duplication, and information on the data source and the way the data were used in each submission was extracted. Comments by the evidence review group (ERG) and the appraisal committee were also extracted and reviewed. All data extraction was performed by two independent reviewers, and all decisions were reached by consensus with an additional third reviewer.
Results
A total of 52 NICE TAs were identified: 47 used the General Practice Research Database/Clinical Practice Research Datalink (GPRD/CPRD), 10 used The Health Improvement Network (THIN) database and 3 used the QResearch database. Data from primary care databases were used to support arguments regarding clinical need and current treatment in 33 NICE TAs, while in 36 they were used to inform input parameters for economic models; the databases were sometimes used for more than one purpose. The data from the three sources were generally well received by the ERGs/committees. Criticisms of the data typically occurred where results had been repurposed from a published study or had not been applied appropriately.
Conclusions
The potential of UK primary care databases in NICE submissions is increasingly being realised, particularly in informing the parameters of economic models. Purpose-conducted studies are less likely to receive criticism from ERGs/committees, particularly when providing clinical input into cost-effectiveness models.
Journal Article
Recording and treatment of premenstrual syndrome in UK general practice: a retrospective cohort study
2016
Objectives
To investigate the rate of recording of premenstrual syndrome (PMS) diagnoses in UK primary care and describe pharmacological treatments initiated following a PMS diagnosis.
Design
Retrospective cohort study.
Setting
UK primary care.
Participants
Women registered with a practice contributing to The Health Improvement Network primary care database between 1995 and 2013.
Primary and secondary outcome measures
The primary outcome was the rate of first PMS records per 1000 person years, stratified by calendar year and age. The secondary outcome was the proportion of women with a PMS record prescribed a selective serotonin reuptake inhibitor, progestogen, oestrogen, combined oral contraceptive, progestin-only contraceptive, gonadotrophin-releasing hormone, danazol or vitamin B6.
Results
The rate of recording of PMS diagnoses decreased over calendar time, from 8.43 per 1000 person years in 1995 to 1.72 in 2013. Of the 38,614 women without treatment in the 6 months prior to diagnosis, 54% received a potentially PMS-related prescription on the day of their first PMS record, while 77% received a prescription in the 24 months after. Between 1995 and 1999, the majority of women were prescribed progestogens (23%) or vitamin B6 (20%) on the day of their first PMS record; after 1999, these figures fell to 3% for both progestogen and vitamin B6, with the majority of women instead being prescribed a selective serotonin reuptake inhibitor (28%) or combined oral contraceptive (17%).
Conclusions
Recording of PMS diagnoses in UK primary care has declined substantially over time, and the preferred prescription treatment has changed from progestogens to selective serotonin reuptake inhibitors and combined oral contraceptives.
Journal Article
Real-world evidence and nonrandomized data in health technology assessment: using existing methods to address unmeasured confounding?
2020
[...] when faced with these data, HTA bodies commonly provide qualitative descriptions of their concerns regarding the uncertainty unmeasured confounding introduces into the decision-making process and highlight concerns about the extent to which this complicates the interpretation of quantitative assessments of clinical and cost-effectiveness. [...] in their very useful guidelines regarding 'the use of observational data to inform estimates of treatment effectiveness in technology appraisal' and 'methods for population-adjusted indirect comparisons in submissions to NICE', the NICE Decision Support Unit gives very limited advice about how to address unmeasured confounding quantitatively, highlighting this as an area for future research (1,2). In Germany, IQWiG's methods guidance allows for the consideration of treatment effects from nonrandomized studies where 'dramatic effects' are observed, citing a relative risk of greater than 10 and statistical significance at the 1% level as an effect broadly in a range dramatic enough to be unlikely to be due to unmeasured confounding. In the discussion, the authors note that unmeasured confounding may be an issue, for example, due to the absence of complete information on a key prognostic score in the electronic health record database. Since these sensitivity analysis methods are typically applied on the relative risk scale, the first step in applying them is to approximate the adjusted risk ratio (ARR) using the square-root transformation (17).
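One widely used quantitative approach of the kind alluded to here is the E-value of VanderWeele and Ding, which expresses how strong an unmeasured confounder would need to be to fully explain away an observed association. The sketch below is an illustration of that general approach, not necessarily the exact method the authors apply; the odds-ratio helper reflects one common reading of the "square-root transformation" (approximating a risk ratio as the square root of an odds ratio when the outcome is common), which is an assumption on our part.

```python
import math

def e_value(rr):
    """E-value for a point-estimate risk ratio (VanderWeele & Ding):
    the minimum strength of association, on the risk-ratio scale, that
    an unmeasured confounder would need with both treatment and outcome
    to fully explain away the observed association."""
    if rr < 1:            # protective estimates are inverted first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

def rr_from_or(odds_ratio):
    """Assumed reading of the 'square-root transformation': approximate
    a risk ratio from an odds ratio when the outcome is common."""
    return math.sqrt(odds_ratio)

print(round(e_value(2.0), 2))  # 3.41
print(round(e_value(0.5), 2))  # 3.41 (same after inversion)
print(round(e_value(rr_from_or(4.0)), 2))
```

An E-value of 3.41 for an observed RR of 2.0 says a confounder would need to be associated with both treatment and outcome by risk ratios of at least 3.41 to reduce the estimate to the null, which gives HTA committees a quantitative handle on the concern rather than a purely qualitative caveat.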
Journal Article
Real‐world evaluation of ocrelizumab in multiple sclerosis: A systematic review
by Ramagopalan, Sreeram; Matthews, Paul M.; Simpson, Alex
in Clinical trials; Humans; Immunologic Factors - pharmacology
2023
Across its clinical development program, ocrelizumab demonstrated efficacy in improving clinical outcomes in multiple sclerosis, including annualized relapse rates and confirmed disability progression. However, as with any new treatment, it was unclear how this efficacy would translate into real-world clinical practice. The objective of this study was to systematically collate the published real-world clinical effectiveness data for ocrelizumab in relapsing remitting multiple sclerosis and primary progressive multiple sclerosis. A search strategy was developed in MEDLINE and Embase to identify articles reporting real-world evidence in people with relapsing remitting multiple sclerosis or primary progressive multiple sclerosis receiving treatment with ocrelizumab. The search was restricted to English language articles but was not limited by the country in which the study was conducted or the time frame of the study. Additional manual searches of relevant websites were also performed. Fifty-two studies reporting relevant evidence were identified. Real-world effectiveness data for ocrelizumab were consistently favorable, with reductions in relapse rate and disease progression rates similar to those reported in the OPERA I/OPERA II and ORATORIO clinical trials, including in studies with more diverse patient populations not well represented in the pivotal trials. Although direct comparisons are confounded by the lack of randomization of treatments, the reported outcomes suggest that ocrelizumab has efficacy similar to or greater than that of other therapy options. Initial real-world effectiveness data for ocrelizumab appear favorable and consistent with results reported in clinical trials, providing clinicians with an efficacious option to treat patients with multiple sclerosis.
Journal Article
The incidence of childhood and adolescent seizures in the UK from 1999 to 2011: A retrospective cohort study using the Clinical Practice Research Datalink
by Sammon, Cormac J.; Snowball, Julia; Weil, John G.
in Adolescent; adolescents; Allergy and Immunology
2015
In postmarketing vaccine surveillance, adverse events observed in a vaccinated population are compared to the number expected based on a background incidence rate. The background rate should be accurate and obtained from a population comparable to the one vaccinated. Such rates are often not available.
The incidence rate of generalised convulsive, febrile and afebrile seizures was estimated in individuals born after 01 January 1998 and aged between 2 months and 15 years, using the UK Clinical Practice Research Datalink (1999–2011).
The study population consisted of 1,532,992 individuals (4,917,369 person years (PY) of follow-up). A total of 28,917 generalised convulsive seizure (GCS) events were identified during follow-up; the overall incidence rate was 5.88 per 1000 PY. Age-specific rates increased sharply from 4/1000 PY at 2 months of age, peaked at 19/1000 PY at 16 months and decreased until approximately 6 years of age, at which point they became relatively stable at 2/1000 PY. 67% of GCSs were categorised as febrile: 56% using Read codes, 11% using free text. Febrile seizures accounted for the age trend in GCS, with rates peaking at 16.1/1000 PY at 16 months of age, while afebrile seizure rates remained relatively stable across all ages (2–4 seizures per 1000 PY). Analysis by first occurrence of febrile seizure showed a similar pattern, comparable to published studies on the incidence of seizures in childhood.
The rates reported in this study could be used in the postmarketing surveillance of infant vaccines. However, given the variation across strata, and the potential underascertainment of seizure events presenting to A&E, care must be taken when interpreting and using these rates.
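The overall rate quoted above follows directly from the reported totals (events divided by person-years). A minimal sketch, with a log-scale confidence interval added as one common choice (the study's interval method is not stated):

```python
import math

def rate_per_1000py(events, person_years, z=1.96):
    """Crude incidence rate per 1,000 person-years, with a
    normal-approximation CI computed on the log scale (a common
    choice for rates; other interval methods exist)."""
    rate = events / person_years
    log_se = 1 / math.sqrt(events)           # SE of log(rate) for Poisson counts
    lo = rate * math.exp(-z * log_se)
    hi = rate * math.exp(z * log_se)
    return rate * 1000, lo * 1000, hi * 1000

# Using the totals reported in the abstract above:
rate, lo, hi = rate_per_1000py(events=28_917, person_years=4_917_369)
print(f"{rate:.2f} per 1000 PY")  # 5.88, matching the reported overall rate
```

In post-marketing surveillance, such background rates are multiplied by the person-time observed in a vaccinated cohort to give the expected number of events against which observed events are compared.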
Journal Article