Search Results

3 results for "Dollerup, Ole Lindgård"
ChatGPT- versus human-generated answers to frequently asked questions about diabetes: A Turing test-inspired survey among employees of a Danish diabetes center
Large language models have received enormous attention recently, with some studies demonstrating their potential clinical value despite not being trained specifically for this domain. We aimed to investigate whether ChatGPT, a language model optimized for dialogue, can answer frequently asked questions about diabetes. We conducted a closed e-survey among employees of a large Danish diabetes center. The study design was inspired by the Turing test and non-inferiority trials. Our survey included ten questions with two answers each; one was written by a human expert, while the other was generated by ChatGPT. Participants were tasked with identifying the ChatGPT-generated answer. Data were analyzed at the question level using logistic regression with robust variance estimation, clustered at the participant level. In secondary analyses, we investigated the effect of participant characteristics on the outcome. A 55% non-inferiority margin was pre-defined based on precision simulations and published as part of the study protocol before data collection began. Among 311 invited individuals, 183 participated in the survey (59% response rate). Of these, 64% had heard of ChatGPT before, and 19% had tried it. Overall, participants identified ChatGPT-generated answers 59.5% (95% CI: 57.0, 62.0) of the time, which was outside the non-inferiority zone. Among participant characteristics, previous ChatGPT use had the strongest association with the outcome (odds ratio: 1.52 (1.16, 2.00), p = 0.003). Previous users answered 67.4% (61.7, 72.7) of the questions correctly, versus 57.6% (54.9, 60.3) for non-users. Contrary to our initial hypothesis, participants could distinguish ChatGPT-generated from human-written answers somewhat better than by flipping a fair coin. Rigorously planned studies are needed to elucidate the risks and benefits of integrating such technologies into routine clinical practice.
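
As a rough illustration of the analysis the abstract describes, here is a minimal Python sketch of an intercept-only logistic regression with participant-clustered robust standard errors, whose confidence interval is then checked against the pre-defined 55% non-inferiority margin. The data are simulated, and the column names and statsmodels workflow are assumptions for illustration, not the authors' actual code.

```python
# Hedged sketch: intercept-only logistic regression with cluster-robust
# (participant-level) standard errors, mirroring the analysis described
# in the abstract. Data are simulated (independent Bernoulli draws, so
# there is no real within-participant clustering here).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_questions = 183, 10
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_questions),
    # 1 = correctly identified the ChatGPT-generated answer
    "correct": rng.binomial(1, 0.595, n_participants * n_questions),
})

# Intercept-only logit; clustering by participant yields robust SEs
# that account for repeated questions per respondent.
fit = smf.logit("correct ~ 1", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant"]}, disp=False
)

# Transform the log-odds estimate and CI back to the probability scale.
expit = lambda x: 1 / (1 + np.exp(-x))
lo, hi = fit.conf_int().loc["Intercept"]
p_hat, p_lo, p_hi = expit(fit.params["Intercept"]), expit(lo), expit(hi)

# Non-inferiority check against the pre-defined 55% margin: a lower CI
# bound above 0.55 means identification is reliably better than the
# margin, i.e., outside the non-inferiority zone (as in the study,
# where 57.0% > 55%).
margin = 0.55
print(f"identification rate {p_hat:.3f} (95% CI {p_lo:.3f}, {p_hi:.3f})")
print("outside non-inferiority zone:", p_lo > margin)
```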
Metformin Lowers Body Weight But Fails to Increase Insulin Sensitivity in Chronic Heart Failure Patients without Diabetes: a Randomized, Double-Blind, Placebo-Controlled Study
Purpose: The glucose-lowering drug metformin has recently been shown to reduce myocardial oxygen consumption and increase myocardial efficiency in chronic heart failure (HF) patients without diabetes. However, it remains to be established whether these beneficial myocardial effects are associated with metformin-induced alterations in whole-body insulin sensitivity and substrate metabolism. Methods: Eighteen HF patients with reduced ejection fraction and without diabetes (median age 65, interquartile range 55–68; ejection fraction 39 ± 6%; HbA1c 5.5 to 6.4%) were randomized to receive metformin (n = 10) or placebo (n = 8) for 3 months. We studied the effects of metformin on whole-body insulin sensitivity using a two-step hyperinsulinemic euglycemic clamp incorporating isotope-labeled tracers of glucose, palmitate, and urea. Substrate metabolism and skeletal muscle mitochondrial respiratory capacity were determined by indirect calorimetry and high-resolution respirometry, and body composition was assessed by bioelectrical impedance analysis. The primary outcome measure was change in insulin sensitivity. Results: Compared with placebo, metformin treatment lowered mean glycated hemoglobin levels (absolute mean difference, −0.2%; 95% CI −0.3 to 0.0; p = 0.03), reduced body weight (−2.8 kg; 95% CI −5.0 to −0.6; p = 0.02), and increased fasting glucagon levels (3.2 pmol L⁻¹; 95% CI 0.4 to 6.0; p = 0.03). No changes were observed with metformin in whole-body insulin sensitivity, endogenous glucose production, or peripheral glucose disposal and oxidation. Equally, resting energy expenditure, lipid and urea turnover, and skeletal muscle mitochondrial respiratory capacity remained unaltered. Conclusion: Increased myocardial efficiency during metformin treatment is not mediated through improvements in insulin action in HF patients without diabetes. Clinical Trial Registration: https://clinicaltrials.gov, unique identifier NCT02810132; registered June 22, 2016.
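
To show where figures like "−2.8 kg; 95% CI −5.0 to −0.6; p = 0.02" can come from, here is a minimal Python sketch of a Welch two-sample comparison on simulated 3-month change scores. The effect sizes and data are invented for illustration, and the study's actual model may differ (for example, ANCOVA adjusting for baseline), so treat this purely as a hedged example.

```python
# Hedged sketch: Welch two-sample comparison producing a mean difference
# with a 95% CI and p-value, in the style of the figures reported above.
# All data are simulated; this is not the trial's analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 3-month body-weight changes in kg (metformin n = 10, placebo n = 8).
metformin = rng.normal(-2.5, 2.0, size=10)
placebo = rng.normal(0.3, 2.0, size=8)

diff = metformin.mean() - placebo.mean()
v1, n1 = metformin.var(ddof=1), metformin.size
v2, n2 = placebo.var(ddof=1), placebo.size
se = np.sqrt(v1 / n1 + v2 / n2)

# Welch-Satterthwaite degrees of freedom for unequal variances/sample sizes.
dof = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)
t_crit = stats.t.ppf(0.975, dof)
p = 2 * stats.t.sf(abs(diff / se), dof)

print(f"mean difference {diff:.1f} kg "
      f"(95% CI {diff - t_crit * se:.1f} to {diff + t_crit * se:.1f}), "
      f"p = {p:.2f}")
```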