28 results for "Rajeev, Nithya"
Large language models and bariatric surgery patient education: a comparative readability analysis of GPT-3.5, GPT-4, Bard, and online institutional resources
Background: The readability of online bariatric surgery patient education materials (PEMs) often surpasses the recommended 6th grade level. Large language models (LLMs), like ChatGPT and Bard, have the potential to revolutionize PEM delivery. We aimed to evaluate the readability of PEMs produced by U.S. medical institutions compared to LLMs, as well as the ability of LLMs to simplify their responses.

Methods: Responses to frequently asked questions (FAQs) related to bariatric surgery were gathered from top-ranked health institutions. FAQ responses were also generated from GPT-3.5, GPT-4, and Bard. The LLMs were then prompted to improve the readability of their initial responses. The readability of institutional responses, initial LLM responses, and simplified LLM responses was graded using validated readability formulas. The accuracy and comprehensiveness of initial and simplified LLM responses were also compared.

Results: Responses to 66 FAQs were included. All institutional and initial LLM responses had poor readability, with average reading levels ranging from 9th grade to college graduate. Simplified responses from LLMs had significantly improved readability, with reading levels ranging from 6th grade to college freshman. Among simplified LLM responses, GPT-4 demonstrated the highest readability, with reading levels ranging from 6th to 9th grade. Accuracy was similar between initial and simplified responses from all LLMs. Comprehensiveness was similar between initial and simplified responses from GPT-3.5 and GPT-4; however, 34.8% of Bard's simplified responses were graded as less comprehensive than its initial responses.

Conclusion: Our study highlights the efficacy of LLMs in enhancing the readability of bariatric surgery PEMs. GPT-4 outperformed the other models, generating simplified PEMs at 6th to 9th grade reading levels. Unlike GPT-3.5 and GPT-4, Bard's simplified responses were graded as less comprehensive. We advocate for future studies examining the potential role of LLMs as dynamic and personalized sources of PEMs for diverse patient populations of all literacy levels.
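The abstract does not name which validated readability formulas were used. As a minimal sketch, the snippet below computes the Flesch-Kincaid Grade Level, one commonly used validated formula that reports results as U.S. school grade levels like those quoted above. The syllable counter is a rough vowel-group heuristic, not the formula's official syllable rules, and the sample text is hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of consecutive vowels, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Example: grade a hypothetical simplified FAQ response.
response = ("Bariatric surgery helps you lose weight. "
            "Your doctor will explain the risks and benefits.")
print(f"Reading grade level: {flesch_kincaid_grade(response):.1f}")
```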
Providers’ Knowledge and Perceptions of Bariatric Surgery: a Systematic Review
Bariatric surgery remains underutilized despite its proven efficacy in the management of obesity. Provider perceptions of bariatric surgery are important to consider when discussing utilization rates. PubMed, SCOPUS, and OVID databases were searched in April 2023, and 40 published studies discussing providers' knowledge and perceptions of bariatric surgery were included. Perceptions of the efficacy of bariatric surgery were generally positive, although overestimations of surgical risks and postoperative complications were common. Across studies, providers' previous training was associated with their knowledge and perceptions of bariatric surgery and with their familiarity with perioperative management. These perceptions were also associated with referral rates, suggesting that inadequate provider knowledge may contribute to the underutilization of bariatric surgery. We advocate for increased bariatric surgery-related education throughout all stages of medical training and across specialties.
Evaluation of hemolysis in patients supported with Impella 5.5: a single center experience
Background: Hemolysis, variably defined in mechanical circulatory support (MCS), is understudied in percutaneous left ventricular assist devices. We characterize the hemolytic sequelae of Impella 5.5-supported patients in the largest series to date.

Methods: All Impella 5.5 patients at our center from 2020 to 2023 were identified (n = 169) and retrospectively reviewed. Patients with a plasma free hemoglobin (PfHb) recorded (and not previously elevated) were included (n = 123). The top (high hemolysis [HH], n = 26) and bottom (low hemolysis [LH], n = 25) quintiles were categorized based on PfHb levels. Analysis between groups identified factors associated with hemolysis.

Results: HH patients had higher admission SCAI stages (p = 0.008), more days on Impella 5.5 (23.5 vs. 10.0, p = 0.001), more additional MCS (16/26 [61.5%] vs. 6/25 [24.0%], p = 0.015), and more transfusions of packed red blood cells (12.5 vs. 4.0, p = 0.001), fresh frozen plasma (2.5 vs. 0.0, p = 0.033), and platelets (3.0 vs. 0.0, p = 0.002). Logistic regression identified additional MCS (OR 10.82, p = 0.004) and more Impella days (OR 1.13, p = 0.006) as hemolysis risk factors. 11/25 (44%) LH and 19/26 (73%) HH patients died, with no significant differences in postoperative complications. Compared with those who died, HH survivors had fewer platelet transfusions (2.0 vs. 5.0, p = 0.01) and fewer days of PfHb elevation (3.0 vs. 6.0, p = 0.007).

Conclusions: Hemolysis in this high-risk cohort carries a poor prognosis. HH patients spent more days on Impella 5.5, needed more MCS, and required more blood product transfusions.
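The risk factors above are reported as odds ratios from a logistic regression. As a hedged sketch of how such odds ratios are obtained (each OR is the exponential of a fitted coefficient), the snippet below fits a logistic model on synthetic data with hypothetical predictor names echoing the abstract's variables; it is not the authors' analysis or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 123  # cohort size from the abstract; the data below are synthetic

# Hypothetical predictors: any additional MCS (binary) and days on Impella.
additional_mcs = rng.integers(0, 2, n)
impella_days = rng.integers(1, 40, n)
X = sm.add_constant(np.column_stack([additional_mcs, impella_days]))

# Synthetic binary outcome loosely tied to the predictors.
logits = -2.0 + 1.5 * additional_mcs + 0.08 * impella_days
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

result = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(result.params)  # OR per unit increase in each predictor
for name, or_, p in zip(["intercept", "additional_MCS", "impella_days"],
                        odds_ratios, result.pvalues):
    print(f"{name}: OR={or_:.2f}, p={p:.3f}")
```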
Assessing the Accuracy of Responses by the Language Model ChatGPT to Questions Regarding Bariatric Surgery
Purpose: ChatGPT is a large language model trained on a large dataset covering a broad range of topics, including the medical literature. We aim to examine its accuracy and reproducibility in answering patient questions regarding bariatric surgery.

Materials and methods: Questions were gathered from nationally regarded professional societies and health institutions as well as Facebook support groups. Board-certified bariatric surgeons graded the accuracy and reproducibility of responses. The grading scale included the following: (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, and (4) completely incorrect. Reproducibility was determined by asking the model each question twice and examining the difference in grading category between the two responses.

Results: In total, 151 questions related to bariatric surgery were included. The model provided "comprehensive" responses to 131/151 (86.8%) of questions. When examined by category, the model provided "comprehensive" responses to 93.8% of questions related to "efficacy, eligibility and procedure options"; 93.3% related to "preoperative preparation"; 85.3% related to "recovery, risks, and complications"; 88.2% related to "lifestyle changes"; and 66.7% related to "other". The model provided reproducible answers to 137 (90.7%) of questions.

Conclusion: The large language model ChatGPT often provided accurate and reproducible responses to common questions related to bariatric surgery. ChatGPT may serve as a helpful adjunct information resource for patients regarding bariatric surgery, in addition to the standard of care provided by licensed healthcare professionals. We encourage future studies to examine how to leverage this disruptive technology to improve patient outcomes and quality of life.
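The reproducibility criterion here (ask each question twice, compare grading categories) is straightforward to operationalize. Below is a minimal sketch with made-up grades; the 1-4 scale mirrors the abstract's grading categories, and the question IDs are placeholders.

```python
# Grades on the abstract's 1-4 scale for two runs of each question (toy data).
run1 = {"q1": 1, "q2": 1, "q3": 3, "q4": 2}
run2 = {"q1": 1, "q2": 2, "q3": 3, "q4": 2}

# A response counts as reproducible if both runs land in the same category.
reproducible = [q for q in run1 if run1[q] == run2[q]]
rate = len(reproducible) / len(run1)
print(f"Reproducible: {len(reproducible)}/{len(run1)} ({rate:.1%})")
```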
Examining the Accuracy and Reproducibility of Responses to Nutrition Questions Related to Inflammatory Bowel Disease by Generative Pre-trained Transformer-4
Generative pre-trained transformer-4 (GPT-4) is a large language model (LLM) trained on a vast corpus of data, including the medical literature. Nutrition plays an important role in managing inflammatory bowel disease (IBD), with an unmet need for nutrition-related patient education resources. This study examines the accuracy, comprehensiveness, and reproducibility of responses by GPT-4 to patient nutrition questions related to IBD. Questions were obtained from adult IBD clinic visits, Facebook, and Reddit. Two IBD-focused registered dietitians independently graded the accuracy and reproducibility of GPT-4's responses, while a third senior IBD-focused registered dietitian arbitrated. Each question was input into the model twice. In total, 88 questions were selected. The model correctly responded to 73/88 questions (83.0%), with 61 (69.0%) graded as comprehensive. 15/88 (17%) responses were graded as mixed, containing both correct and incorrect/outdated information. The model comprehensively responded to 10 (62.5%) questions related to "Nutrition and diet needs for surgery," 12 (92.3%) related to "Tube feeding and parenteral nutrition," 11 (64.7%) related to "General diet questions," 10 (50%) related to "Diet for reducing symptoms/inflammation," and 18 (81.8%) related to "Micronutrients/supplementation needs." The model provided reproducible responses to 81/88 (92.0%) questions. GPT-4 comprehensively answered most questions, demonstrating the promising potential of LLMs as supplementary tools for IBD patients seeking nutrition-related information. However, 17% of responses contained incorrect information, highlighting the need for continuous refinement prior to incorporation into clinical practice. Future studies should emphasize leveraging LLMs to enhance patient outcomes and promote patient and healthcare professional proficiency in using LLMs to maximize their efficacy.