ChatGPT’s accuracy in the diagnosis of oral lesions
by Samami, Mohammad; Azadpeyma, Kiana; Hajibagheri, Pedram; Sani, Mohammad Khosousi; Sani, Sahba Khosousi; Tabari-Khomeiran, Rasoul
in
Accuracy
/ Accuracy and precision
/ Artificial intelligence
/ Chatbots
/ ChatGPT
/ Chi-square test
/ Computer-aided medical diagnosis
/ Data collection
/ Decision making
/ Dentistry
/ Diagnosis
/ Evaluation
/ Generative Artificial Intelligence
/ Humans
/ Kruskal-Wallis test
/ Large Language model
/ Large language models
/ Lesions
/ Medical diagnosis
/ Medicine
/ Mouth disease
/ Mouth diseases
/ Mouth Diseases - diagnosis
/ Multiple choice
/ Oral and Maxillofacial Surgery
/ Oral diagnosis
/ Questionnaires
/ Response rates
/ Surveys and Questionnaires
2025
Journal Article
Overview
Aim
ChatGPT, a large language model (LLM) developed by OpenAI, is designed to generate human-like responses through the analysis of textual data. This study aimed to assess the accuracy and diagnostic capability of ChatGPT-4 in answering clinical scenario-based questions regarding oral lesions.
Methods
The study included 133 multiple-choice questions (MCQs), each with five possible answers, randomly selected from the Clinical Guide to Oral Disease. Two oral medicine specialists reviewed the answers in the book to ensure accuracy. A general dentist categorized the questions into three levels of difficulty, and two oral medicine specialists validated these categorizations. From each difficulty level, 37 questions were randomly selected, yielding a final questionnaire of 111 questions categorized by difficulty level. Each question was submitted using the ‘new message’ command, and to minimize potential bias (influence of prior answers), the researchers manually cleared the chat history before presenting each new question.
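The stratified selection described above (37 questions drawn at random from each of three difficulty levels, 111 in total) can be sketched as follows; the pool layout, seed, and function name are illustrative assumptions, not details from the study:

```python
import random

# Illustrative sketch of the study's stratified sampling: from a pool of
# MCQs tagged by difficulty, draw 37 questions per level.
QUESTIONS_PER_LEVEL = 37
LEVELS = ("easy", "medium", "difficult")

def build_questionnaire(pool, seed=0):
    """pool: list of (question_id, difficulty) tuples; returns selected ids."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    selected = []
    for level in LEVELS:
        candidates = [q for q, d in pool if d == level]
        selected.extend(rng.sample(candidates, QUESTIONS_PER_LEVEL))
    return selected

# A made-up pool mimicking the 133 reviewed MCQs (difficulty assigned round-robin)
pool = [(f"q{i}", LEVELS[i % 3]) for i in range(133)]
questionnaire = build_questionnaire(pool)
print(len(questionnaire))  # → 111 (37 per difficulty level)
```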
Results
ChatGPT-4.0 demonstrated an accuracy rate of 97% for easy questions, 86.5% ± 34.6% for medium-level questions, and 78.4% ± 41.7% for difficult questions, with an overall accuracy rate of 87.4% ± 33.3%.
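The reported figures read as mean ± standard deviation of binary (correct/incorrect) per-question scores. A minimal sketch, using made-up scores rather than the study's raw data: 32 correct answers out of 37 medium-level questions reproduce values close to those reported.

```python
import statistics

def accuracy_summary(scores):
    """scores: list of 1 (correct) / 0 (incorrect) answers.
    Returns (accuracy %, sample standard deviation %)."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample SD of the 0/1 scores
    return mean * 100, sd * 100

# Hypothetical: 32 of 37 medium-level questions answered correctly
acc, sd = accuracy_summary([1] * 32 + [0] * 5)
print(f"{acc:.1f}% ± {sd:.1f}%")  # → 86.5% ± 34.7%
```

The large standard deviations in the abstract are expected: for a 0/1 variable the SD is roughly √(p(1−p)), which is sizeable even at high accuracy.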
Conclusion
Although ChatGPT-4.0 demonstrated satisfactory accuracy in answering clinical questions, its responses should not be exclusively relied upon for diagnostic purposes. Instead, the model should be utilized as a complementary tool under the supervision of clinicians in the diagnosis of oral lesions.
Publisher
BioMed Central, BioMed Central Ltd, Springer Nature B.V., BMC