146 results for "Nadkarni, Girish N."
Distinct subtypes of polycystic ovary syndrome with novel genetic associations: An unsupervised, phenotypic clustering analysis
Polycystic ovary syndrome (PCOS) is a common, complex genetic disorder affecting up to 15% of reproductive-age women worldwide, depending on the diagnostic criteria applied. These diagnostic criteria are based on expert opinion and have been the subject of considerable controversy. The phenotypic variation observed in PCOS is suggestive of an underlying genetic heterogeneity, but a recent meta-analysis of European ancestry PCOS cases found that the genetic architecture of PCOS defined by different diagnostic criteria was generally similar, suggesting that the criteria do not identify biologically distinct disease subtypes. We performed this study to test the hypothesis that there are biologically relevant subtypes of PCOS. Using biochemical and genotype data from a previously published PCOS genome-wide association study (GWAS), we investigated whether there were reproducible phenotypic subtypes of PCOS with subtype-specific genetic associations. Unsupervised hierarchical cluster analysis was performed on quantitative anthropometric, reproductive, and metabolic traits in a genotyped cohort of 893 PCOS cases (median and interquartile range [IQR]: age = 28 [25-32], body mass index [BMI] = 35.4 [28.2-41.5]). The clusters were replicated in an independent, ungenotyped cohort of 263 PCOS cases (median and IQR: age = 28 [24-33], BMI = 35.7 [28.4-42.3]). The clustering revealed 2 distinct PCOS subtypes: a "reproductive" group (21%-23%), characterized by higher luteinizing hormone (LH) and sex hormone binding globulin (SHBG) levels with relatively low BMI and insulin levels, and a "metabolic" group (37%-39%), characterized by higher BMI, glucose, and insulin levels with lower SHBG and LH levels. We performed a GWAS on the genotyped cohort, limiting the cases to either the reproductive or metabolic subtypes.
We identified alleles in 4 loci that were associated with the reproductive subtype at genome-wide significance (PRDM2/KAZN, P = 2.2 × 10⁻¹⁰; IQCA1, P = 2.8 × 10⁻⁹; BMPR1B/UNC5C, P = 9.7 × 10⁻⁹; CDH10, P = 1.2 × 10⁻⁸) and one locus that was significantly associated with the metabolic subtype (KCNH7/FIGN, P = 1.0 × 10⁻⁸). We developed a predictive model to classify a separate, family-based cohort of 73 women with PCOS (median and IQR: age = 28 [25-33], BMI = 34.3 [27.8-42.3]) and found that the subtypes tended to cluster in families and that carriers of previously reported rare variants in DENND1A, a gene that regulates androgen biosynthesis, were significantly more likely to have the reproductive subtype of PCOS. Limitations of our study were that only PCOS cases of European ancestry diagnosed by National Institutes of Health (NIH) criteria were included, the sample sizes for the subtype GWAS were small, and the GWAS findings were not replicated. In conclusion, we have found reproducible reproductive and metabolic subtypes of PCOS. Furthermore, these subtypes were associated with novel, to our knowledge, susceptibility loci. Our results suggest that these subtypes are biologically relevant because they appear to have distinct genetic architecture. This study demonstrates how phenotypic subtyping can be used to gain additional insights from GWAS data.
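The subtyping approach in the abstract above, standardizing quantitative traits and then applying unsupervised hierarchical clustering, can be sketched as follows. This is a minimal illustration on synthetic data: the trait names, values, and two-cluster structure are invented stand-ins, not data from the study.

```python
# Hedged sketch: Ward-linkage hierarchical clustering on standardized
# quantitative traits, in the spirit of the PCOS subtyping above.
# All trait values below are synthetic and purely illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)

# Two synthetic "subtypes": one with higher LH/SHBG, one with higher
# BMI/insulin (columns: LH, SHBG, BMI, insulin; units arbitrary)
reproductive = rng.normal(loc=[12.0, 60.0, 24.0, 8.0], scale=1.0, size=(50, 4))
metabolic = rng.normal(loc=[5.0, 25.0, 38.0, 25.0], scale=1.0, size=(50, 4))
traits = np.vstack([reproductive, metabolic])

# Standardize each trait so no single scale dominates, then cluster
Z = linkage(zscore(traits, axis=0), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")

print(sorted(np.bincount(labels)[1:].tolist()))  # the two cluster sizes
```

With traits this well separated, the two-cluster cut recovers the synthetic groups exactly; on real phenotype data, the number of clusters and their stability would need to be assessed (e.g., by replication in an independent cohort, as the study did).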
Machine learning-based marker for coronary artery disease: derivation and validation in two longitudinal cohorts
Binary diagnosis of coronary artery disease does not preserve the complexity of disease or quantify its severity or its associated risk with death; hence, a quantitative marker of coronary artery disease is warranted. We evaluated a quantitative marker of coronary artery disease derived from probabilities of a machine learning model. In this cohort study, we developed and validated a coronary artery disease-predictive machine learning model using 95 935 electronic health records and assessed its probabilities as in-silico scores for coronary artery disease (ISCAD; range 0 [lowest probability] to 1 [highest probability]) in participants in two longitudinal biobank cohorts. We measured the association of ISCAD with clinical outcomes—namely, coronary artery stenosis, obstructive coronary artery disease, multivessel coronary artery disease, all-cause death, and coronary artery disease sequelae. Among 95 935 participants, 35 749 were from the BioMe Biobank (median age 61 years [IQR 18]; 14 599 [41%] were male and 21 150 [59%] were female; 5130 [14%] with diagnosed coronary artery disease) and 60 186 were from the UK Biobank (median age 62 [15] years; 25 031 [42%] male and 35 155 [58%] female; 8128 [14%] with diagnosed coronary artery disease). The model predicted coronary artery disease with an area under the receiver operating characteristic curve of 0·95 (95% CI 0·94–0·95; sensitivity of 0·94 [0·94–0·95] and specificity of 0·82 [0·81–0·83]) and 0·93 (0·92–0·93; sensitivity of 0·90 [0·89–0·90] and specificity of 0·88 [0·87–0·88]) in the BioMe validation and holdout sets, respectively, and 0·91 (0·91–0·91; sensitivity of 0·84 [0·83–0·84] and specificity of 0·83 [0·82–0·83]) in the UK Biobank external test set. ISCAD captured coronary artery disease risk from known risk factors, pooled cohort equations, and polygenic risk scores.
Coronary artery stenosis increased quantitatively with ascending ISCAD quartiles (increase per quartile of 12 percentage points), including risk of obstructive coronary artery disease, multivessel coronary artery disease, and stenosis of major coronary arteries. Hazard ratios (HRs) and prevalence of all-cause death increased stepwise over ISCAD deciles (decile 1: HR 1·0 [95% CI 1·0–1·0], 0·2% prevalence; decile 6: 11 [3·9–31], 3·1% prevalence; and decile 10: 56 [20–158], 11% prevalence). A similar trend was observed for recurrent myocardial infarction. 12 (46%) undiagnosed individuals with high ISCAD (≥0·9) had clinical evidence of coronary artery disease according to the 2014 American College of Cardiology/American Heart Association Task Force guidelines. Electronic health record-based machine learning was used to generate an in-silico marker for coronary artery disease that can non-invasively quantify atherosclerosis and risk of death on a continuous spectrum, and identify underdiagnosed individuals. National Institutes of Health.
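The core ISCAD idea, taking a classifier's predicted probability as a continuous disease score and examining how an outcome tracks across score quantiles, can be sketched on synthetic data. The gradient-boosting model, features, and dataset below are illustrative assumptions, not the paper's EHR-based model.

```python
# Hedged sketch: a classifier's predicted probability used as a
# continuous "in-silico" score, with outcome prevalence by score decile.
# Synthetic data; not the model or features from the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The predicted probability itself is the continuous score (range 0-1)
score = model.predict_proba(X_te)[:, 1]

# Outcome prevalence by score decile; a score that captures graded risk
# should show prevalence rising across deciles
edges = np.quantile(score, np.arange(0.1, 1.0, 0.1))
deciles = np.digitize(score, edges)
prevalence = [float(y_te[deciles == d].mean()) for d in range(10)]
print([round(p, 2) for p in prevalence])
```

The stepwise rise of death hazard over ISCAD deciles reported in the abstract is the clinical analogue of this prevalence-by-decile pattern.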
Advanced glycation end products dietary restriction effects on bacterial gut microbiota in peritoneal dialysis patients: a randomized open label controlled trial
The modern Western diet is rich in advanced glycation end products (AGEs). We have previously shown an association between dietary AGEs and markers of inflammation and oxidative stress in a population of end stage renal disease (ESRD) patients undergoing peritoneal dialysis (PD). In the current pilot study we explored the effects of dietary AGEs on the gut bacterial microbiota composition in similar patients. AGEs play an important role in the development and progression of cardiovascular disease (CVD). Plasma concentrations of different bacterial products have been shown to predict the risk of incident major adverse CVD events independently of traditional CVD risk factors, and experimental animal models indicate a possible role for AGEs in shaping the gut microbiota. In this pilot randomized open label controlled trial, twenty PD patients habitually consuming a high AGE diet were recruited and randomized into either continuing the same diet (HAGE, n = 10) or a one-month dietary AGE restriction (LAGE, n = 10). Blood and stool samples were collected at baseline and after the intervention. Variable regions V3-V4 of 16S rDNA were sequenced and taxa were identified at the phylum, genus, and species levels. Dietary AGE restriction resulted in a significant decrease in serum Nε-(carboxymethyl) lysine (CML) and methylglyoxal derivatives (MG). At baseline, our total cohort exhibited a lower relative abundance of the Bacteroides and Alistipes genera and a higher abundance of the Prevotella genus when compared to published data from a healthy population. Dietary AGE restriction altered the bacterial gut microbiota, with a significant reduction in the relative abundance of Prevotella copri and Bifidobacterium animalis and an increase in the relative abundance of Alistipes indistinctus, Clostridium citroniae, Clostridium hathewayi, and Ruminococcus gauvreauii.
In this pilot study we show significant gut microbiota differences in the peritoneal dialysis population, as well as effects of dietary AGE restriction on the gut microbiota, which might play a role in the increased cardiovascular events in this population and warrant further study.
Derivation and validation of a machine learning risk score using biomarker and electronic patient data to predict progression of diabetic kidney disease
Aim: Predicting progression in diabetic kidney disease (DKD) is critical to improving outcomes. We sought to develop/validate a machine-learned, prognostic risk score (KidneyIntelX™) combining electronic health records (EHR) and biomarkers.
Methods: This is an observational cohort study of patients with prevalent DKD/banked plasma from two EHR-linked biobanks. A random forest model was trained, and performance (AUC, positive and negative predictive values [PPV/NPV], and net reclassification index [NRI]) was compared with that of a clinical model and Kidney Disease: Improving Global Outcomes (KDIGO) categories for predicting a composite outcome of eGFR decline of ≥5 ml/min per year, ≥40% sustained decline, or kidney failure within 5 years.
Results: In 1146 patients, the median age was 63 years, 51% were female, the baseline eGFR was 54 ml/min per 1.73 m², the urine albumin to creatinine ratio (uACR) was 6.9 mg/mmol, follow-up was 4.3 years and 21% had the composite endpoint. On cross-validation in derivation (n = 686), KidneyIntelX had an AUC of 0.77 (95% CI 0.74, 0.79). In validation (n = 460), the AUC was 0.77 (95% CI 0.76, 0.79). By comparison, the AUC for the clinical model was 0.62 (95% CI 0.61, 0.63) in derivation and 0.61 (95% CI 0.60, 0.63) in validation. Using derivation cut-offs, KidneyIntelX stratified 46%, 37% and 17% of the validation cohort into low-, intermediate- and high-risk groups for the composite kidney endpoint, respectively. The PPV for progressive decline in kidney function in the high-risk group was 61% for KidneyIntelX vs 40% for the highest risk strata by KDIGO categorisation (p < 0.001). Only 10% of those scored as low risk by KidneyIntelX experienced progression (i.e., NPV of 90%). The NRI for events in the high-risk group was 41% (p < 0.05).
Conclusions: KidneyIntelX improved prediction of kidney outcomes over KDIGO and clinical models in individuals with early stages of DKD.
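The general recipe this abstract describes, training a random forest on a richer biomarker-plus-EHR feature set and benchmarking its discrimination against a simpler clinical model, might look like the sketch below on synthetic data. Nothing here uses KidneyIntelX's actual inputs, cut-offs, or cohorts.

```python
# Hedged sketch: random forest on many features vs. a "clinical" logistic
# model restricted to a few features, compared by AUC. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 20 stand-in "biomarker + EHR" features; the clinical model sees only 3
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
clinical = LogisticRegression(max_iter=1000).fit(X_tr[:, :3], y_tr)

auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
auc_clin = roc_auc_score(y_te, clinical.predict_proba(X_te[:, :3])[:, 1])
print(round(auc_rf, 2), round(auc_clin, 2))
```

In the study, score cut-offs derived on the training cohort were then used to bin patients into low-, intermediate- and high-risk groups; the same thresholding could be applied to `rf.predict_proba` outputs here.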
A Primer on Reinforcement Learning in Medicine for Clinicians
Reinforcement Learning (RL) is a machine learning paradigm that enhances clinical decision-making for healthcare professionals by addressing uncertainties and optimizing sequential treatment strategies. RL leverages patient data to create personalized treatment plans, improving outcomes and resource efficiency. This review introduces RL to a clinical audience, exploring core concepts, potential applications, and challenges in integrating RL into clinical practice, offering insights into efficient, personalized, and effective patient care.
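A minimal sense of the sequential decision-making that RL addresses can be given by tabular Q-learning in a toy two-state "treatment" environment. This is purely didactic, with invented dynamics and no clinical meaning.

```python
# Hedged sketch: tabular Q-learning in a toy environment where one of two
# "treatments" in state 0 reliably moves the patient to a rewarded state.
# Entirely synthetic; illustrates the RL loop (act, observe, update) only.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))   # action-value table
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def step(state, action):
    # Toy dynamics: action 1 in state 0 moves to the "healthy" state 1,
    # which pays reward 1; everything else pays 0 and leads to state 0
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    nxt, reward = step(state, action)
    # temporal-difference update toward reward + discounted future value
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(int(np.argmax(Q[0])))  # learned best action in state 0
```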
Kidney-Failure Risk Projection for the Living Kidney-Donor Candidate
This study examined risk associations calibrated to the U.S. population-level incidence of end-stage renal disease and death and projected long-term incidences of ESRD. Risk projections among nondonors were lower than 15-year observed risks after donation. Nearly 30,000 people worldwide become living kidney donors each year [1-3]. Traditionally, living donors have been selected on the basis of an absence of risk factors for poor outcomes after donation and without a comprehensive assessment of individualized long-term risk. Although kidney donation is considered to be safe in healthy, low-risk persons, donation has lifelong implications, and the most direct effect may be an increased long-term risk of end-stage renal disease (ESRD) [4-7]. A tool to predict a donor candidate’s long-term risk of ESRD that incorporates the combined effect of multiple demographic and health characteristics before donation could help make . . .
Use of Physiological Data From a Wearable Device to Identify SARS-CoV-2 Infection and Symptoms and Predict COVID-19 Diagnosis: Observational Study
Changes in autonomic nervous system function, characterized by heart rate variability (HRV), have been associated with infection and observed prior to its clinical identification. We performed an evaluation of HRV collected by a wearable device to identify and predict COVID-19 and its related symptoms. Health care workers in the Mount Sinai Health System were prospectively followed in an ongoing observational study using the custom Warrior Watch Study app, which was downloaded to their smartphones. Participants wore an Apple Watch for the duration of the study, measuring HRV throughout the follow-up period. Surveys assessing infection- and symptom-related questions were obtained daily. Using a mixed-effect cosinor model, the mean amplitude of the circadian pattern of the standard deviation of the interbeat interval of normal sinus beats (SDNN), an HRV metric, differed between subjects with and without COVID-19 (P=.006). The mean amplitude of this circadian pattern differed between individuals during the 7 days before and the 7 days after a COVID-19 diagnosis compared to this metric during uninfected time periods (P=.01). Significant changes in the mean and amplitude of the circadian pattern of the SDNN were observed on the first day of reporting a COVID-19-related symptom compared with all other symptom-free days (P=.01). Longitudinally collected HRV metrics from a commonly worn commercial wearable device (Apple Watch) can predict the diagnosis of COVID-19 and identify COVID-19-related symptoms. Prior to the diagnosis of COVID-19 by nasal swab polymerase chain reaction testing, significant changes in HRV were observed, demonstrating the predictive ability of this metric to identify COVID-19 infection.
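A cosinor model fits a 24-hour cosine to longitudinal measurements, estimating a rhythm's midline (MESOR) and amplitude. The study's mixed-effect, multi-subject version is more involved, but the single-subject core reduces to linear least squares, sketched here on synthetic SDNN values (all numbers hypothetical).

```python
# Hedged sketch: single-subject cosinor fit of a 24 h rhythm to hourly
# SDNN values via linear least squares. The trick: A*cos(w(t - phase))
# = b1*cos(wt) + b2*sin(wt), so the fit is linear in (M, b1, b2) and
# the amplitude is recovered as hypot(b1, b2). Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 24 * 7, 1.0)        # one week of hourly samples
mesor, amp, phase = 50.0, 10.0, 3.0  # hypothetical SDNN rhythm (ms)
sdnn = mesor + amp * np.cos(2 * np.pi * (t - phase) / 24)
sdnn += rng.normal(scale=2.0, size=t.size)  # measurement noise

w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
beta, *_ = np.linalg.lstsq(X, sdnn, rcond=None)

est_mesor = beta[0]
est_amp = float(np.hypot(beta[1], beta[2]))
print(round(est_mesor, 1), round(est_amp, 1))
```

The study's comparisons (e.g., amplitude differing between infected and uninfected periods) amount to testing whether such fitted parameters differ across conditions, with subject-level random effects added.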
A foundational vision transformer improves diagnostic performance for electrocardiograms
The electrocardiogram (ECG) is a ubiquitous diagnostic modality. Convolutional neural networks (CNNs) applied to ECG analysis require large sample sizes, and transfer learning approaches for biomedical problems may result in suboptimal performance when pre-training is done on natural images. We leveraged masked image modeling to create a vision-based transformer model, HeartBEiT, for electrocardiogram waveform analysis. We pre-trained this model on 8.5 million ECGs and then compared performance vs. standard CNN architectures for diagnosis of hypertrophic cardiomyopathy, low left ventricular ejection fraction and ST elevation myocardial infarction using differing training sample sizes and independent validation datasets. We find that HeartBEiT has significantly higher performance at lower sample sizes compared to other models. We also find that HeartBEiT improves explainability of diagnosis by highlighting biologically relevant regions of the ECG vs. standard CNNs. Domain-specific pre-trained transformer models may exceed the classification performance of models trained on natural images, especially in very low data regimes. The combination of the architecture and such pre-training allows for more accurate, granular explainability of model predictions.
Evaluating prompt engineering on GPT-3.5’s performance in USMLE-style medical calculations and clinical scenarios generated by GPT-4
This study was designed to assess how different prompt engineering techniques, specifically direct prompts, Chain of Thought (CoT), and a modified CoT approach, influence the ability of GPT-3.5 to answer clinical and calculation-based medical questions, particularly those styled like the USMLE Step 1 exams. To achieve this, we analyzed the responses of GPT-3.5 to two distinct sets of questions: a batch of 1000 questions generated by GPT-4, and another set comprising 95 real USMLE Step 1 questions. These questions spanned a range of medical calculations and clinical scenarios across various fields and difficulty levels. Our analysis revealed that there were no significant differences in the accuracy of GPT-3.5's responses when using direct prompts, CoT, or modified CoT methods. For instance, in the USMLE sample, the success rates were 61.7% for direct prompts, 62.8% for CoT, and 57.4% for modified CoT, with a p-value of 0.734. Similar trends were observed in the responses to GPT-4 generated questions, both clinical and calculation-based, with p-values above 0.05 indicating no significant difference between the prompt types. The conclusion drawn from this study is that the use of CoT prompt engineering does not significantly alter GPT-3.5's effectiveness in handling medical calculations or clinical scenario questions styled like those in USMLE exams. This finding is crucial as it suggests that the performance of ChatGPT remains consistent regardless of whether a CoT technique is used instead of direct prompts. This consistency could be instrumental in simplifying the integration of AI tools like ChatGPT into medical education, enabling healthcare professionals to utilize these tools with ease, without the need for complex prompt engineering.
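The reported comparison of success rates across prompt styles can be illustrated with a chi-square test of independence on a correct/incorrect contingency table. The counts below are illustrative, chosen only to approximate the quoted 61.7%/62.8%/57.4% rates; they are not the study's raw data.

```python
# Hedged sketch: chi-square test of independence across three prompt
# styles. Counts are illustrative approximations of the quoted rates,
# not the study's actual per-question data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: prompt style; columns: [correct, incorrect]
table = np.array([
    [58, 36],  # direct prompt  (~61.7% correct)
    [59, 35],  # chain of thought (~62.8%)
    [54, 40],  # modified CoT  (~57.4%)
])
chi2, p, dof, _ = chi2_contingency(table)
print(round(p, 3))
```

A large p-value here, as in the study, means the observed spread in accuracy is consistent with chance variation rather than a real effect of prompt style.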
A machine learning model identifies patients in need of autoimmune disease testing using electronic health records
Systemic autoimmune rheumatic diseases (SARDs) can lead to irreversible damage if left untreated, yet these patients often endure long diagnostic journeys before being diagnosed and treated. Machine learning may help overcome the challenges of diagnosing SARDs and inform clinical decision-making. Here, we developed and tested a machine learning model to identify patients who should receive rheumatological evaluation for SARDs using longitudinal electronic health records of 161,584 individuals from two institutions. The model demonstrated high performance for predicting cases of autoantibody-tested individuals in a validation set, an external test set, and an independent cohort with a broader case definition. This approach identified more individuals for autoantibody testing compared with current clinical standards and a greater proportion of autoantibody carriers among those tested. Diagnoses of SARDs and other autoimmune conditions increased with higher model probabilities. The model detected a need for autoantibody testing and rheumatology encounters up to five years before the test date and assessment date, respectively. Altogether, these findings illustrate that the clinical manifestations of a diverse array of autoimmune conditions are detectable in electronic health records using machine learning, which may help systematize and accelerate autoimmune testing. Early diagnosis can significantly improve treatment options and prevent severe organ damage in individuals with autoimmune diseases. Here, the authors develop a machine learning model that uses electronic health records to identify patients with clinical suspicion of autoimmune diseases.