Asset Details
Accuracy, comprehensiveness and understandability of AI-generated answers to questions from people with COPD: the AIR-COPD Study
by Powell, Pippa; Aliverti, Andrea; Pinnock, Hilary; Calverley, Peter Martin; Behring, Greta E.; Angelucci, Alessandra; Amati, Francesco; Nigro, Mattia; Bossios, Apostolos; Simonds, Anita Kay; Stainer, Anna; Aliberti, Stefano; Boyd, Jeanette; Anzueto, Antonio
in Medicine / Medicine & Public Health / Pneumology/Respiratory System
2025
Journal Article
Overview
Background
Chronic obstructive pulmonary disease (COPD) remains an underestimated and underdiagnosed condition due to low disease awareness. Generative Artificial Intelligence (AI) chatbots are convenient and accessible sources of medical information, but the quality of the answers they provide to patient-generated questions about COPD has not been evaluated to date.
Objective
To assess and compare accuracy, comprehensiveness, understandability and reliability of different AI chatbots in response to patient-generated questions on the clinical management of COPD.
Methods
A cross-sectional study was conducted in collaboration with the European Respiratory Society (ERS), the European Lung Foundation (ELF), and the ERS CONNECT Clinical Research Collaboration (CRC). Fifteen real questions formulated by ELF COPD patient representatives were divided into three difficulty tiers (easy, medium, difficult) and submitted to ChatGPT (version 3.5), Bard, and Copilot. Experts assessed accuracy and comprehensiveness on a 0–10 scale; patients assessed understandability using the same scale. Reliability was assessed by two investigators. Reviewers were blinded to which AI system generated the answers, and only those who completed all evaluations were included in the analysis.
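The blinding step described above can be pictured with a short sketch. The Python snippet below is a hypothetical illustration, not the study's actual pipeline: the `answers` dictionary, its contents, and the neutral label scheme are all assumptions made for the example.

```python
import random

# Minimal sketch of the blinding step: answers from the three chatbots are
# relabelled at random before being sent to reviewers, so raters cannot tell
# which system produced which answer. All names and structures here are
# illustrative assumptions, not the study's actual code.
answers = {
    "ChatGPT": "Answer text from ChatGPT...",
    "Bard": "Answer text from Bard...",
    "Copilot": "Answer text from Copilot...",
}

sources = list(answers)
random.shuffle(sources)

# Reviewers see only neutral labels; the key is kept aside for unblinding.
blinded = {f"Answer {i + 1}": answers[src] for i, src in enumerate(sources)}
key = {f"Answer {i + 1}": src for i, src in enumerate(sources)}

print(blinded)  # what reviewers receive
# `key` stays with the investigators until scoring is complete.
```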
Results
ChatGPT responses were the most reliable (14/15), followed by Copilot (12/15) and Bard (11/15). ChatGPT scored higher for accuracy (8.0 [7.0–9.0]) and comprehensiveness (8.0 [6.8–9.0]) than Bard (6.0 [5.0–8.0] and 6.0 [5.0–7.0]) and Copilot (6.0 [5.0–7.3] and 6.0 [5.0–8.0]) (both P < 0.001). Understandability was similar across all software (ChatGPT: 8.0 [8.0–10.0]; Bard: 9.0 [8.0–10.0]; Copilot: 9.0 [8.0–10.0]) (P = 0.53). No significant effect was detected according to the difficulty of the question.
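The figures above follow a median [interquartile range] format. As a rough illustration of how such summaries could be computed, the Python sketch below uses made-up scores; the Kruskal-Wallis test is one conventional choice for comparing three groups of ordinal ratings, though the abstract does not name the test the authors actually used.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical per-answer accuracy scores (0-10) for each chatbot; the real
# study data are not reproduced here, so these arrays are illustrative only.
chatgpt = np.array([8, 7, 9, 8, 9, 7, 8, 6, 9, 8, 7, 9, 8, 8, 7])
bard    = np.array([6, 5, 8, 6, 7, 5, 6, 4, 8, 6, 5, 7, 6, 6, 5])
copilot = np.array([6, 5, 7, 6, 8, 5, 6, 5, 7, 6, 5, 8, 6, 7, 5])

def median_iqr(x):
    """Return 'median [25th-75th percentile]' in the abstract's format."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.1f} [{q1:.1f}-{q3:.1f}]"

for name, scores in [("ChatGPT", chatgpt), ("Bard", bard), ("Copilot", copilot)]:
    print(name, median_iqr(scores))

# Nonparametric comparison of the three groups (an assumed choice of test).
stat, p = kruskal(chatgpt, bard, copilot)
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.4f}")
```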
Conclusion
Our findings suggest that AI chatbots, particularly ChatGPT, can provide accurate, comprehensive and understandable answers to patients’ questions.
Publisher
BioMed Central