Asset Details
Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis
by Jin, Hye Kyung; Lee, Ha Eun; Kim, EunYoung
in
Accuracy
/ Analysis
/ Artificial Intelligence
/ Chatbots
/ ChatGPT-3.5
/ Check Lists
/ Computation
/ Computational linguistics
/ Dentistry
/ Drugstores
/ Education
/ Education, Dental - standards
/ Education, Medical - standards
/ Education, Nursing - standards
/ Education, Pharmacy - standards
/ Educational Measurement - methods
/ Educational Measurement - standards
/ Educational research
/ Effect Size
/ Efficiency
/ English
/ GPT-4
/ Healthcare professionals
/ Humans
/ Information Seeking
/ International economic relations
/ Language
/ Language Processing
/ Large language models
/ Licenses
/ Licensing examinations
/ Licensing Examinations (Professions)
/ Licensing, certification and accreditation
/ Licensure - standards
/ Medical Education
/ Medical Evaluation
/ Medical students
/ Medical Subject Headings-MeSH
/ Medicine
/ Meta Analysis
/ Multiple choice
/ National licensing examination
/ Natural language interfaces
/ Natural language processing
/ Nursing
/ Patient Education
/ Pharmacy
/ Professional Education
/ Professional examinations
/ Professionals
/ Reference Materials
/ Search Strategies
/ Statistical Analysis
/ Systematic review
/ Theory of Medicine/Bioethics
2024
Journal Article
Overview
Background
ChatGPT, a recently developed artificial intelligence (AI) chatbot, has demonstrated improved performance on examinations in the medical field. However, an overall evaluation of the potential of the ChatGPT models (ChatGPT-3.5 and GPT-4) across a variety of national health licensing examinations is still lacking. This study aimed to provide, through a meta-analysis, a comprehensive assessment of the ChatGPT models' performance in national licensing examinations for medicine, pharmacy, dentistry, and nursing.
Methods
Following the PRISMA protocol, full-text articles from MEDLINE/PubMed, EMBASE, ERIC, the Cochrane Library, Web of Science, and key journals were reviewed from the time of ChatGPT's introduction to February 27, 2024. Studies were eligible if they evaluated the performance of a ChatGPT model (ChatGPT-3.5 or GPT-4); related to national licensing examinations in the fields of medicine, pharmacy, dentistry, or nursing; involved multiple-choice questions; and provided data that enabled the calculation of effect size. Two reviewers independently completed data extraction, coding, and quality assessment. The JBI Critical Appraisal Tools were used to assess the quality of the selected articles. The overall effect size and 95% confidence intervals (CIs) were calculated using a random-effects model.
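The random-effects pooling described above can be sketched as follows. This is an illustrative DerSimonian–Laird implementation of pooling study-level accuracy proportions on the logit scale, not the authors' actual analysis code; the function name and the example counts below are hypothetical.

```python
import math

def random_effects_pool(successes, totals):
    """Pool accuracy proportions with a DerSimonian-Laird random-effects model.

    Works on the logit scale, as is common for meta-analyses of percentages.
    Returns the pooled proportion, its 95% CI, and the I^2 heterogeneity statistic.
    """
    # Per-study logit proportions and within-study variances
    # (0.5 continuity correction guards against 0% or 100% accuracy).
    ys, vs = [], []
    for k, n in zip(successes, totals):
        a, b = k + 0.5, n - k + 0.5
        ys.append(math.log(a / b))
        vs.append(1 / a + 1 / b)

    # Fixed-effect weights and Cochran's Q heterogeneity statistic
    w = [1 / v for v in vs]
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))

    # DerSimonian-Laird estimate of between-study variance tau^2
    df = len(ys) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled logit, and 95% CI
    w_re = [1 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    lo, hi = y_re - 1.96 * se, y_re + 1.96 * se

    inv_logit = lambda x: 1 / (1 + math.exp(-x))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return inv_logit(y_re), (inv_logit(lo), inv_logit(hi)), i2

# Hypothetical per-study counts (correct answers, total questions)
pooled, ci, i2 = random_effects_pool([180, 120, 90], [250, 200, 100])
```

Subgroup comparisons such as ChatGPT-3.5 versus GPT-4 would then be run by pooling each subgroup separately with the same estimator.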
Results
A total of 23 studies were included in this review, which evaluated the accuracy of four types of national licensing examinations. The selected articles were in the fields of medicine (n = 17), pharmacy (n = 3), nursing (n = 2), and dentistry (n = 1). They reported varying accuracy levels, ranging from 36% to 77% for ChatGPT-3.5 and from 64.4% to 100% for GPT-4. The overall effect size for the percentage of accuracy was 70.1% (95% CI, 65–74.8%), which was statistically significant (p < 0.001). Subgroup analyses revealed that GPT-4 demonstrated significantly higher accuracy in providing correct responses than its earlier version, ChatGPT-3.5. Additionally, in the context of health licensing examinations, the ChatGPT models exhibited greater proficiency in the following order: pharmacy, medicine, dentistry, and nursing. However, the lack of a broader set of questions, including open-ended and scenario-based questions, and significant heterogeneity were limitations of this meta-analysis.
Conclusions
This study sheds light on the accuracy of ChatGPT models in four national health licensing examinations across various countries and provides a practical basis and theoretical support for future research. Further studies are needed to explore their utilization in medical and health education by including a broader and more diverse range of questions, along with more advanced versions of AI chatbots.
Publisher
BioMed Central Ltd. (BMC), part of Springer Nature B.V.
Subject
/ Analysis
/ Chatbots
/ Education, Dental - standards
/ Education, Medical - standards
/ Education, Nursing - standards
/ Education, Pharmacy - standards
/ Educational Measurement - methods
/ Educational Measurement - standards
/ English
/ GPT-4
/ Humans
/ International economic relations
/ Language
/ Licenses
/ Licensing Examinations (Professions)
/ Licensing, certification and accreditation
/ Medical Subject Headings-MeSH
/ Medicine
/ National licensing examination
/ Nursing
/ Pharmacy