Asset Details
ChatGPT provides inconsistent risk-stratification of patients with atraumatic chest pain
by
Heston, Thomas F.
, Lewis, Lawrence M.
in
Artificial intelligence
/ Biology and Life Sciences
/ Care and treatment
/ Case studies
/ Causes of
/ Chest pain
/ Chest Pain - diagnosis
/ Comparative analysis
/ Health aspects
/ Health risk assessment
/ Humans
/ Medicine and Health Sciences
/ Methods
/ Prospective Studies
/ Reproducibility of Results
/ Risk Assessment - methods
/ Risk Factors
/ Social Sciences
2024
Journal Article
Overview
ChatGPT-4 is a large language model with promising healthcare applications, but its ability to analyze complex clinical data and produce consistent results is not well characterized. This study evaluated ChatGPT-4's risk stratification of simulated patients with acute nontraumatic chest pain against validated tools.
Three datasets of simulated case studies were created: one based on the TIMI score variables, another on the HEART score variables, and a third comprising 44 randomized variables related to nontraumatic chest pain presentations. ChatGPT-4 independently scored each dataset five times. Its risk scores were compared to calculated TIMI and HEART scores, and scores generated from the 44-variable dataset were evaluated for consistency across runs.
ChatGPT-4 showed a high correlation with TIMI and HEART scores (r = 0.898 and 0.928, respectively), but the distribution of individual risk assessments was broad: for a fixed TIMI or HEART score, ChatGPT-4 assigned a different risk category 45-48% of the time. On the 44-variable dataset, a majority of the five ChatGPT-4 runs agreed on a diagnosis category only 56% of the time, and risk scores across runs were poorly correlated (r = 0.605).
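The two statistics reported above can be reproduced on any scored dataset. A minimal sketch, using hypothetical score and category data (the `timi`, `gpt_mean`, and `runs` values below are illustrative, not from the study):

```python
# Sketch of the two consistency statistics: Pearson correlation between
# model-assigned and calculated scores, and the fraction of cases where a
# majority of five independent runs agree on a risk category.
from collections import Counter
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def majority_agreement(runs):
    """Fraction of cases where more than half of the runs give the same
    category. `runs` is a list of per-run category lists, one entry per case."""
    n_runs = len(runs)
    agree = 0
    for case_labels in zip(*runs):  # transpose: iterate case by case
        top_count = Counter(case_labels).most_common(1)[0][1]
        if top_count > n_runs / 2:
            agree += 1
    return agree / len(runs[0])

# Hypothetical calculated TIMI scores and per-case mean ChatGPT-4 scores.
timi = [1, 2, 3, 4, 5, 6]
gpt_mean = [1.2, 1.8, 3.4, 3.9, 5.2, 5.8]
print(round(pearson_r(timi, gpt_mean), 3))

# Hypothetical risk categories from five independent runs over three cases.
runs = [
    ["low", "mod", "high"],
    ["low", "high", "mod"],
    ["mod", "mod", "high"],
    ["low", "low", "high"],
    ["mod", "high", "high"],
]
print(round(majority_agreement(runs), 3))  # -> 0.667 (2 of 3 cases)
```

A high `pearson_r` with a low `majority_agreement`, as in the study's results, indicates a model whose scores track the reference tool on average while remaining unreliable on individual cases.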
While ChatGPT-4's mean scores correlate closely with established risk-stratification tools, its inconsistency when presented with identical patient data on separate occasions raises concerns about its reliability. The findings suggest that while large language models like ChatGPT-4 hold promise for healthcare applications, further refinement and customization are necessary, particularly for the clinical risk assessment of atraumatic chest pain patients.
Publisher
Public Library of Science (PLoS)