Asset Details
Evaluating the accuracy and reliability of AI chatbots in patient education on cardiovascular imaging: a comparative study of ChatGPT, Gemini, and Copilot
by
Ghorab, Hossam; Marey, Ahmed; Backer, Hazif; Niemierko, Julia; Saad, Abdelrahman M; Umair, Muhammad; Tanas, Yousef
in
Accuracy / Artificial intelligence / Chatbots / Computational linguistics / Disease / Evidence-based medicine / Language / Language processing / Magnetic resonance imaging / Natural language interfaces / Patient education / Pilots and pilotage / Support groups / Terminology / Tomography
2025
Journal Article
Overview
The integration of artificial intelligence (AI) chatbots in medicine is expanding rapidly, with notable models like ChatGPT by OpenAI, Gemini by Google, and Copilot by Microsoft. These chatbots are increasingly used to provide medical information, yet their reliability in specific areas such as cardiovascular imaging remains underexplored. This study aims to evaluate the accuracy and reliability of ChatGPT (versions 3.5 and 4), Gemini, and Copilot in responding to patient inquiries about cardiovascular imaging. We sourced 30 patient-oriented questions on cardiovascular imaging. The questions were submitted to ChatGPT-4, ChatGPT-3.5, Copilot Balanced Mode, Copilot Precise Mode, and Gemini. Responses were evaluated by two cardiovascular radiologists based on accuracy, clarity, completeness, neutrality, and appropriateness using a structured rubric. Inter-rater reliability was assessed using Cohen's Kappa. ChatGPT-4 achieved the highest performance with 78.3% accuracy, 86.87% clarity and appropriateness, 81.7% completeness, and 100% neutrality. Gemini showed balanced performance, while Copilot Balanced Mode excelled in clarity and accuracy but lagged in completeness. Copilot Precise Mode had the lowest scores in completeness and accuracy. Penalty assessments revealed that ChatGPT-4 had the lowest incidence of missing or misleading information. ChatGPT-4 emerged as the most reliable AI model for providing accurate, clear, and comprehensive patient information on cardiovascular imaging. While other models showed potential, they require further refinement. This study underscores the value of integrating AI chatbots into clinical practice to enhance patient education and engagement.
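The abstract states that agreement between the two reviewing cardiovascular radiologists was assessed with Cohen's Kappa. As an illustration only, and not the study's actual data or analysis code, the following minimal Python sketch shows how Cohen's Kappa can be computed from two raters' categorical scores; the ratings below are hypothetical.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)

    # Kappa = (observed - expected) / (1 - expected).
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example (not the study's data): two radiologists scoring
# 10 chatbot responses as "adequate" or "inadequate".
rater_1 = ["adequate", "adequate", "inadequate", "adequate", "adequate",
           "inadequate", "adequate", "adequate", "inadequate", "adequate"]
rater_2 = ["adequate", "inadequate", "inadequate", "adequate", "adequate",
           "inadequate", "adequate", "adequate", "adequate", "adequate"]

print(round(cohens_kappa(rater_1, rater_2), 3))  # 0.474 for these sample ratings

A value of 1 would indicate perfect agreement and 0 no agreement beyond chance; the study reports its own Kappa-based reliability assessment, not reproduced here.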
Publisher
Springer, Springer Nature B.V.