Catalogue Search | MBRL
24 result(s) for "Pasea, Laura"
Assessing the efficacy of oral immunotherapy for the desensitisation of peanut allergy in children (STOP II): a phase 2 randomised controlled trial
by Palmer, Chris; Anagnostou, Katherine; King, Yvonne
in Active control; Administration, Oral; Adolescent
2014
Small studies suggest peanut oral immunotherapy (OIT) might be effective in the treatment of peanut allergy. We aimed to establish the efficacy of OIT for the desensitisation of children with allergy to peanuts.
We did a randomised controlled crossover trial to compare the efficacy of active OIT (using characterised peanut flour; protein doses of 2–800 mg/day) with control (peanut avoidance, the present standard of care) at the NIHR/Wellcome Trust Cambridge Clinical Research Facility (Cambridge, UK). Randomisation (1:1) was by use of an audited online system; group allocation was not masked. Eligible participants were aged 7–16 years with an immediate hypersensitivity reaction after peanut ingestion, positive skin prick test to peanuts, and positive by double-blind placebo-controlled food challenge (DBPCFC). We excluded participants if they had a major chronic illness, if the care provider or a present household member had suspected or diagnosed allergy to peanuts, or if there was an unwillingness or inability to comply with study procedures. Our primary outcome was desensitisation, defined as negative peanut challenge (1400 mg protein in DBPCFC) at 6 months (first phase). Control participants underwent OIT during the second phase, with subsequent DBPCFC. Immunological parameters and disease-specific quality-of-life scores were measured. Analysis was by intention to treat. Fisher's exact test was used to compare the proportion of those with desensitisation to peanut after 6 months between the active and control group at the end of the first phase. This trial is registered with Current Controlled Trials, number ISRCTN62416244.
The primary outcome, desensitisation, was recorded for 62% (24 of 39 participants; 95% CI 45–78) in the active group and none of the control group after the first phase (0 of 46; 95% CI 0–9; p<0·001). 84% (95% CI 70–93) of the active group tolerated daily ingestion of 800 mg protein (equivalent to roughly five peanuts). Median increase in peanut threshold after OIT was 1345 mg (range 45–1400; p<0·001) or 25·5 times (range 1·82–280; p<0·001). After the second phase, 54% (95% CI 35–72) tolerated 1400 mg challenge (equivalent to roughly ten peanuts) and 91% (79–98) tolerated daily ingestion of 800 mg protein. Quality-of-life scores improved (decreased) after OIT (median change −1·61; p<0·001). Side-effects were mild in most participants. Gastrointestinal symptoms were, collectively, most common (31 participants with nausea, 31 with vomiting, and one with diarrhoea), then oral pruritus after 6·3% of doses (76 participants) and wheeze after 0·41% of doses (21 participants). Intramuscular adrenaline was used after 0·01% of doses (one participant).
OIT successfully induced desensitisation in most children within the study population with peanut allergy of any severity, with a clinically meaningful increase in peanut threshold. Quality of life improved after intervention and there was a good safety profile. Immunological changes corresponded with clinical desensitisation. Further studies in wider populations are recommended; peanut OIT should not be done in non-specialist settings, but it is effective and well tolerated in the studied age group.
Funding: MRC-NIHR partnership.
Journal Article
Estimated impact of the COVID-19 pandemic on cancer services and excess 1-year mortality in people with cancer and multimorbidity: near real-time data on cancer care, cancer deaths and a population-based cohort study
2020
Objectives: To estimate the impact of the COVID-19 pandemic on cancer care services and overall (direct and indirect) excess deaths in people with cancer.
Methods: We employed near real-time weekly data on cancer care to determine the adverse effect of the pandemic on cancer services. We also used these data, together with national death registrations until June 2020, to model deaths, in excess of background (pre-COVID-19) mortality, in people with cancer. Background mortality risks for 24 cancers with and without COVID-19-relevant comorbidities were obtained from a population-based primary care cohort (Clinical Practice Research Datalink) of 3 862 012 adults in England.
Results: Declines in urgent referrals (median=−70.4%) and chemotherapy attendances (median=−41.5%) to a nadir (lowest point) in the pandemic were observed. By 31 May, these declines had only partially recovered: urgent referrals (median=−44.5%) and chemotherapy attendances (median=−31.2%). There were short-term excess death registrations for cancer (without COVID-19), with a peak relative risk (RR) of 1.17 at the week ending 3 April. The peak RR for all-cause deaths was 2.1 at the week ending 17 April. Based on these findings and recent literature, we modelled 40% and 80% of cancer patients being affected by the pandemic in the long term. At 40% affected, we estimated 1-year total (direct and indirect) excess deaths in people with cancer as between 7165 and 17 910, using RRs of 1.2 and 1.5, respectively, where 78% of excess deaths occurred in patients with ≥1 comorbidity.
Conclusions: Dramatic reductions were detected in the demand for, and supply of, cancer services, which have not fully recovered with lockdown easing. These may contribute, over a 1-year time horizon, to substantial excess mortality among people with cancer and multimorbidity. It is urgent to understand how the recovery of general practitioner, oncology and other hospital services might best mitigate these long-term excess mortality risks.
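The excess-mortality modelling described in this abstract multiplies the number of pandemic-affected patients by their background 1-year mortality risk and the excess relative risk. A minimal sketch of that arithmetic (an illustration of the general method, not the authors' model; all numbers below are hypothetical):

```python
def excess_deaths(n_patients, affected_fraction, background_risk, rr):
    """Approximate 1-year excess deaths as:
    affected patients x background 1-year mortality risk x (RR - 1),
    where RR is the relative risk under pandemic conditions.
    A generic sketch, not the study's model."""
    return n_patients * affected_fraction * background_risk * (rr - 1.0)

# Hypothetical inputs: 1,000,000 patients with cancer, 40% affected,
# 10% background 1-year mortality, RR = 1.2
print(excess_deaths(1_000_000, 0.40, 0.10, 1.2))  # approximately 8000
```

The same formula, evaluated per cancer type and comorbidity stratum with stratum-specific background risks, is the standard way such population-level estimates are aggregated.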
Journal Article
Missed opportunities to manage complex comorbidity of heart failure, type 2 diabetes mellitus and chronic kidney disease: a retrospective cohort study
by Dashtban, Ashkan; Mizani, Mehrdad A; Bhuva, Anish
in Aged; Cancer; Chronic obstructive pulmonary disease
2025
Background: Effective management of coexisting heart failure (HF), chronic kidney disease (CKD) and type 2 diabetes mellitus (T2D) is critical, yet evidence of adherence to guideline-recommended standards in routine care remains unclear. We aimed to assess primary care adherence to guideline-recommended standards for patients with overlapping HF, CKD and T2D in England.
Methods: Using the UK Clinical Practice Research Datalink (1998–2020), we evaluated care adherence across 161 529 individuals with HF, CKD or T2D before and after developing a second of these conditions. We analysed disease investigation rates, medication use and predictors of guideline adherence.
Results: We identified 161 529 patients with CKD followed by HF (CKD+HF, 40%), CKD+T2D (51.3%) and HF+T2D (8.6%), with a median of 3.1 years of follow-up after the second diagnosis. In the CKD+HF, CKD+T2D and HF+T2D groups, prescription rates of renin-angiotensin system inhibitors (71%, 64.1% and 74.4%), beta-blockers (53.1%, 36.2% and 55.1%), antiplatelets (56.2%, 45.2% and 54.4%) and statins (56.7%, 68.5% and 72%) were suboptimal. Advanced age, female sex, peripheral arterial disease and cancer were associated with a lower likelihood of checking blood pressure, creatinine and glycated haemoglobin (HbA1c) after HF, CKD and T2D diagnoses, respectively. A first diagnosis of HF was associated with reduced odds of having HbA1c measured after T2D diagnosis (OR 0.79, 95% CI 0.72 to 0.86), compared with CKD as the first diagnosis.
Conclusions: In overlapping HF, CKD and T2D, guideline-recommended care is suboptimal, with inequalities by age, sex, disease on first presentation and comorbidities. Quality improvement requires linked data collection, monitoring and action across diseases.
Journal Article
Time spent at blood pressure target and the risk of death and cardiovascular diseases
2018
The time a patient spends with blood pressure (BP) at target level is an intuitive measure of successful BP management, but population studies of its effectiveness are as yet unavailable.
We identified a population-based cohort of 169,082 individuals with newly identified high blood pressure who were free of cardiovascular disease from January 1997 to March 2010. We used 1.64 million clinical blood pressure readings to calculate the TIme at TaRgEt (TITRE) based on current target blood pressure levels.
The median (interquartile range) TITRE among all patients was 2.8 (0.3, 5.6) months per year; only 1077 (0.6%) patients had a TITRE ≥11 months. Compared with people with a 0% TITRE, patients with a TITRE of 3–5.9 months and 6–8.9 months had 75% and 78% lower odds of the composite of cardiovascular death, myocardial infarction and stroke (adjusted odds ratios 0.25 (95% confidence interval: 0.21, 0.31) and 0.22 (0.17, 0.27), respectively). These associations were consistent for heart failure and for any cardiovascular disease or death (comparing a 3–5.9 month with a 0% TITRE, 63% and 60% lower odds, respectively), among people who did or did not have blood pressure 'controlled' on a single occasion during the first year of follow-up, and across groups defined by the number of follow-up BP measurements.
Based on the current frequency of blood pressure measurement, this study suggests that few newly hypertensive patients sustained complete, year-round on-target blood pressure over time. The inverse associations between a higher TITRE and lower risk of incident cardiovascular disease were independent of widely used blood pressure 'control' indicators. Randomised trials are required to evaluate interventions to increase a person's time spent at blood pressure target.
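For a concrete sense of the TITRE measure described above, here is a minimal sketch of one way to compute months-per-year at target from dated clinic readings (a hypothetical carry-forward scheme; the study's actual algorithm may differ):

```python
from datetime import date

def titre_months_per_year(readings, target=140):
    """TIme at TaRgEt (TITRE) sketch: months per year spent with
    systolic BP below target. A hypothetical carry-forward scheme,
    not the study's algorithm. `readings` is a chronologically
    sorted list of (date, systolic_bp) pairs."""
    total_days = (readings[-1][0] - readings[0][0]).days
    if total_days == 0:
        return 0.0
    # credit each between-reading interval as 'at target' when the
    # reading that opens the interval is below target (carry-forward)
    at_target_days = sum(
        (nxt_date - cur_date).days
        for (cur_date, cur_bp), (nxt_date, _) in zip(readings, readings[1:])
        if cur_bp < target
    )
    return 12.0 * at_target_days / total_days

readings = [(date(2020, 1, 1), 150),   # above target for 182 days
            (date(2020, 7, 1), 130),   # at target for 184 days
            (date(2021, 1, 1), 135)]
print(round(titre_months_per_year(readings), 1))  # → 6.0
```

A production implementation would also need rules for censoring, interpolation across target crossings, and time-varying targets, none of which are specified here.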
Journal Article
Elevated plasma triglyceride concentration and risk of adverse clinical outcomes in 1.5 million people: a CALIBER linked electronic health record study
2022
Background
An assessment of the spectrum of disease risk associated with hypertriglyceridemia is needed to inform the potential benefits of emerging triglyceride-lowering treatments. We sought to examine the associations between the full range of plasma triglyceride concentrations and five clinical outcomes.
Methods
We used linked data from primary and secondary care for 1.5 million people to explore the association between triglyceride concentration and risk of acute pancreatitis, chronic pancreatitis, new-onset diabetes, myocardial infarction and all-cause mortality, over a median of 6–7 years of follow-up.
Results
Triglyceride concentration was available for 1,530,411 individuals (mean age 56·6 ± 15·6 years, 51·4% female), with a median of 1·3 mmol/L (IQR: 0·9 to 1·9). Severe hypertriglyceridemia, defined as > 10 mmol/L, was identified in 3289 (0·21%) individuals, including 620 with > 20 mmol/L. In multivariable analyses, a triglyceride concentration > 20 mmol/L was associated with very high risk for acute pancreatitis (hazard ratio (HR) 13·55 (95% CI 9·15–20·06)) and chronic pancreatitis (HR 25·19 (14·91–42·55)), and high risk for diabetes (HR 5·28 (4·51–6·18)) and all-cause mortality (HR 3·62 (2·82–4·65)), when compared to the reference category of ≤ 1·7 mmol/L. An association with myocardial infarction, however, was only observed for more moderate hypertriglyceridaemia, between 1·7 and 10 mmol/L. We found a risk interaction with age, with higher risks for all outcomes, including mortality, among those ≤ 40 years compared to > 40 years.
Conclusions
We highlight an exponential association between severe hypertriglyceridaemia and risk of incident acute and chronic pancreatitis, new diabetes, and mortality, especially at younger ages, but not for myocardial infarction for which only moderate hypertriglyceridemia conferred risk.
Journal Article
Risk factors, outcomes and healthcare utilisation in individuals with multimorbidity including heart failure, chronic kidney disease and type 2 diabetes mellitus: a national electronic health record study
2023
Background: Heart failure (HF), type 2 diabetes (T2D) and chronic kidney disease (CKD) commonly coexist. We studied the characteristics, prognosis and healthcare utilisation of individuals with two of these conditions.
Methods: We performed a retrospective, population-based linked electronic health records study from 1998 to 2020 in England to identify individuals diagnosed with two of HF, T2D or CKD. We described cohort characteristics at the time of second diagnosis and estimated the risk of developing the third condition and mortality using Kaplan-Meier and Cox regression models. We also estimated rates of healthcare utilisation in primary care and hospital settings in follow-up.
Findings: We identified cohorts of 64 226 with CKD and HF, 82 431 with CKD and T2D, and 13 872 with HF and T2D. Compared with CKD and T2D, those with CKD and HF and with HF and T2D had a more severe risk factor profile. At 5 years, incidence of the third condition and all-cause mortality were 37% (95% CI: 35.9%, 38.1%) and 31.3% (30.4%, 32.3%) in HF+T2D, 8.7% (8.4%, 9.0%) and 51.6% (51.1%, 52.1%) in HF+CKD, and 6.8% (6.6%, 7.0%) and 17.9% (17.6%, 18.2%) in CKD+T2D, respectively. In each of the three multimorbid groups, the order of the first two diagnoses was also associated with prognosis. In multivariable analyses, we identified risk factors for developing the third condition and mortality, such as age, sex, medical history and the order of disease diagnosis. Inpatient and outpatient healthcare utilisation rates were highest in CKD and HF, and lowest in CKD and T2D.
Interpretation: HF, CKD and T2D carry significant mortality and healthcare burden in combination. Compared with other disease pairs, individuals with CKD and HF had the most severe risk factor profile, prognosis and healthcare utilisation. Service planning, policy and prevention must take into account and monitor data across conditions.
Journal Article
Pre-Dialysis Systolic Blood Pressure-Variability Is Independently Associated with All-Cause Mortality in Incident Haemodialysis Patients
by Pasea, Laura; Wilkinson, Ian B.; Tomlinson, Laurie A.
in Aged; Angina pectoris; Antihypertensives
2014
Systolic blood pressure variability is an independent risk factor for mortality and cardiovascular events. Standard measures of blood pressure predict outcome poorly in haemodialysis patients. We investigated whether systolic blood pressure variability was associated with mortality in incident haemodialysis patients. We performed a longitudinal observational study of patients commencing haemodialysis between 2005 and 2011 in East Anglia, UK, excluding patients with cardiovascular events within 6 months of starting haemodialysis. The main exposure was variability independent of the mean (VIM) of systolic blood pressure from short-gap, pre-dialysis blood pressure readings between 3 and 6 months after commencing haemodialysis, and the outcome was all-cause mortality. Of 203 patients, 37 (18.2%) died during a mean follow-up of 2.0 (SD 1.3) years. The age- and sex-adjusted hazard ratio (HR) for mortality was 1.09 (95% confidence interval (CI) 1.02-1.17) for a one-unit increase in VIM. This was not altered by adjustment for diabetes, prior cardiovascular disease and mean systolic blood pressure (HR 1.09, 95% CI 1.02-1.16). Patients with VIM of systolic blood pressure above the median were 2.4 (95% CI 1.17-4.74) times more likely to die during follow-up than those below the median. Results were similar for all measures of blood pressure variability, and further adjustment for type of dialysis access, use of antihypertensives, and absolute fluid intake or its variability did not alter these findings. Diastolic blood pressure variability showed no association with all-cause mortality. Our study shows that variability of systolic blood pressure is a strong and independent predictor of all-cause mortality in incident haemodialysis patients. Further research is needed to understand the mechanism, as this may form a therapeutic target or focus for management.
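The VIM exposure used in this abstract has a standard closed form. A minimal sketch, assuming the usual definition VIM = k·SD/mean^x with x fitted by log-log regression across patients (not the authors' code):

```python
import numpy as np

def vim(means, sds):
    """Variability independent of the mean (VIM) -- a sketch of the
    usual definition, not the study's code. Fits SD = a * mean**x
    across patients by log-log least squares, then rescales so the
    result is uncorrelated with each patient's mean level:
    VIM_i = k * SD_i / mean_i**x, with k = (population mean)**x."""
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    x, _ = np.polyfit(np.log(means), np.log(sds), 1)  # slope = x
    k = means.mean() ** x
    return k * sds / means ** x

# When SD is exactly proportional to the mean, VIM is constant across
# patients, i.e. it carries no information about the mean BP level.
means = np.array([100.0, 120.0, 140.0, 160.0])
print(vim(means, 2 * means))  # each entry equals 2 * means.mean()
```

Here `means` and `sds` are each patient's mean and standard deviation of systolic readings; the rescaling by `k` keeps VIM in SD-like units.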
Journal Article
Clinical academic research in the time of Corona: A simulation study in England and a call for action
by Hemingway, Harry; Katsoulis, Michail; Lai, Alvina G.
in Betacoronavirus; Biomedical Research - methods; Case studies
2020
Objectives: We aimed to model the impact of coronavirus (COVID-19) on the clinical academic response in England, and to provide recommendations for COVID-related research.
Design: A stochastic model to determine clinical academic capacity in England, incorporating the following key factors which affect the ability to conduct research in the COVID-19 climate: (i) infection growth rate and population infection rate (from UK COVID-19 statistics and WHO); (ii) strain on the healthcare system (from a published model); and (iii) availability of clinical academic staff with appropriate skillsets, affected by frontline clinical activity and sickness (from UK statistics).
Setting: Clinical academics in primary and secondary care in England.
Participants: Equivalent of 3200 full-time clinical academics in England.
Interventions: Four policy approaches to COVID-19 with differing population infection rates: "Italy model" (6%), "mitigation" (10%), "relaxed mitigation" (40%) and "do-nothing" (80%) scenarios; low and high strain on the health system (no clinical academics able to do research at 10% and 5% infection rates, respectively).
Main outcome measures: Number of full-time clinical academics available to conduct clinical research during the pandemic in England.
Results: In the "Italy model", "mitigation", "relaxed mitigation" and "do-nothing" scenarios, from 5 March 2020 the duration (days) and peak infection rates (%) are 95 (2.4%), 115 (2.5%), 240 (5.3%) and 240 (16.7%), respectively. Near-complete attrition of academia (87% reduction, <400 clinical academics) occurs 35 days after the pandemic start, for 11, 34, 62 and 76 days respectively, with no clinical academics at all for 37 days in the "do-nothing" scenario. Restoration of the normal academic workforce (80% of normal capacity) takes 11, 12, 30 and 26 weeks respectively.
Conclusions: Pandemic COVID-19 crushes the science needed at system level. National policies mitigate, but the academic community needs to adapt. We highlight six key strategies: radical prioritisation (eg, 3-4 research ideas per institution), deep resourcing, non-standard leadership (repurposing of key non-frontline teams), rationalisation (profoundly simple approaches), careful site selection (eg, protected sites with large academic backup) and complete suspension of academic competition in favour of collaborative approaches.
Journal Article
Identifying subtypes of type 2 diabetes mellitus with machine learning: development, internal validation, prognostic validation and medication burden in linked electronic health records in 420 448 individuals
2024
Introduction: None of the studies of type 2 diabetes (T2D) subtyping to date have used linked population-level data for incident and prevalent T2D, incorporating a diverse set of variables, explainable methods for cluster characterization, or adhered to an established framework. We aimed to develop and validate machine learning (ML)-informed subtypes for type 2 diabetes mellitus (T2D) using nationally representative data.
Research design and methods: In population-based electronic health records (2006–2020; Clinical Practice Research Datalink) in individuals ≥18 years with incident T2D (n=420 448), we included factors (n=3787) including demography, history, examination, biomarkers and medications. Using a published framework, we identified subtypes through nine unsupervised ML methods (K-means, K-means++, K-mode, K-prototype, mini-batch, agglomerative hierarchical clustering, Birch, Gaussian mixture models, and consensus clustering). We characterized clusters using intracluster distributions and explainable artificial intelligence (AI) techniques. We evaluated subtypes for (1) internal validity (within dataset; across methods); (2) prognostic validity (prediction for 5-year all-cause mortality, hospitalization and new chronic diseases); and (3) medication burden.
Results: Development: we identified four T2D subtypes: metabolic, early onset, late onset and cardiometabolic. Internal validity: subtypes were predicted with high accuracy (F1 score >0.98). Prognostic validity: 5-year all-cause mortality, hospitalization, new chronic disease incidence and medication burden differed across T2D subtypes. Compared with the metabolic subtype, 5-year risks of mortality and hospitalization in incident T2D were highest in the late-onset subtype (HR 1.95, 1.85–2.05 and 1.66, 1.58–1.75) and lowest in the early-onset subtype (1.18, 1.11–1.27 and 0.85, 0.80–0.90). Incidence of chronic diseases was highest in the late-onset subtype and lowest in the early-onset subtype. Medications: compared with the metabolic subtype, after adjusting for age, sex, and pre-T2D medications, the late-onset subtype (1.31, 1.28–1.35) and early-onset subtype (0.83, 0.81–0.85) were most and least likely, respectively, to be prescribed medications within 5 years following T2D onset.
Conclusions: In the largest study using ML to date in incident T2D, we identified four distinct subtypes, with potential future implications for etiology, therapeutics, and risk prediction.
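K-means, the first of the nine clustering methods listed in this abstract, can be sketched in a few lines (a generic illustration with a naive first-k-rows initialisation, not the study's implementation; K-means++ seeding, also named in the abstract, would be more robust):

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Minimal K-means: alternate nearest-centroid assignment and
    centroid recomputation until the centroids stop moving.
    Naive initialisation from the first k rows (illustration only)."""
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared Euclidean distance from every row to every centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated toy 'subtypes' in 2-D feature space
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.1, 0.0],
              [9.9, 10.0], [0.0, 0.2], [10.0, 9.8]])
labels, _ = kmeans(X, 2)
print(labels)  # → [0 1 0 1 0 1]
```

On the study's scale (420 448 individuals, 3787 factors), mixed-type variants such as K-prototype and careful feature encoding matter far more than this toy sketch suggests.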
Journal Article
Bleeding in cardiac patients prescribed antithrombotic drugs: electronic health record phenotyping algorithms, incidence, trends and prognosis
2019
Background
Clinical guidelines and public health authorities lack recommendations on scalable approaches to defining and monitoring the occurrence and severity of bleeding in populations prescribed antithrombotic therapy.
Methods
We examined linked primary care, hospital admission and death registry electronic health records (CALIBER 1998–2010, England) of patients with newly diagnosed atrial fibrillation, acute myocardial infarction, unstable angina or stable angina, with the aim of developing phenotyping algorithms for bleeding events. Using the developed bleeding phenotypes, we estimated the incidence of bleeding events with Kaplan-Meier plots and used Cox regression models to assess the prognosis for all-cause mortality, atherothrombotic events and further bleeding.
Results
We present electronic health record phenotyping algorithms for bleeding based on bleeding diagnosis in primary or hospital care, symptoms, transfusion, surgical procedures and haemoglobin values. In validation of the phenotype, we estimated a positive predictive value of 0.88 (95% CI 0.64, 0.99) for hospitalised bleeding. Amongst 128,815 patients, 27,259 (21.2%) had at least 1 bleeding event, with 5-year risks of bleeding of 29.1%, 21.9%, 25.3% and 23.4% following diagnoses of atrial fibrillation, acute myocardial infarction, unstable angina and stable angina, respectively. Rates of hospitalised bleeding per 1000 patients more than doubled from 1.02 (95% CI 0.83, 1.22) in January 1998 to 2.68 (95% CI 2.49, 2.88) in December 2009 coinciding with the increased rates of antiplatelet and vitamin K antagonist prescribing. Patients with hospitalised bleeding and primary care bleeding, with or without markers of severity, were at increased risk of all-cause mortality and atherothrombotic events compared to those with no bleeding. For example, the hazard ratio for all-cause mortality was 1.98 (95% CI 1.86, 2.11) for primary care bleeding with markers of severity and 1.99 (95% CI 1.92, 2.05) for hospitalised bleeding without markers of severity, compared to patients with no bleeding.
Conclusions
Electronic health record bleeding phenotyping algorithms offer a scalable approach to monitoring bleeding in the population. The incidence of bleeding has doubled since 1998, affects one in four cardiovascular disease patients, and is associated with poor prognosis. Efforts are required to tackle this iatrogenic epidemic.
Journal Article