Asset Details
Evaluation of Large Language Model Performance and Reliability for Citations and References in Scholarly Writing: Cross-Disciplinary Study
by Lu, Caide; Mugaanyi, Joseph; Cheng, Sumei; Huang, Jing; Cai, Liuying
in Accuracy / Analysis / Artificial Intelligence / Chatbots / Citations / Computational linguistics / Data mining / Human-computer interaction / Humanities / Humans / Language / Language modeling / Language processing / Large language models / Machine learning / Natural language / Natural language interfaces / Natural sciences / Reliability / Reproducibility of Results / Research Personnel / Researchers / Scholarly communication / Training / Writing
2024
Journal Article
Overview
Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022.
The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and humanities.
Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and Digital Object Identifiers (DOIs). Results were compared between the two disciplines.
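As a minimal sketch of what such a check can look like (the study's exact verification workflow, manual or automated, is not described in this overview), a generated DOI can be tested against the public doi.org resolver; the function name and example DOI below are illustrative assumptions, not the authors' method.

```python
# Illustrative only: check whether a DOI resolves via the public doi.org resolver.
import urllib.request


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if https://doi.org/<doi> resolves (final HTTP status < 400)."""
    request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        # Unregistered DOIs return 404; network failures are treated as unresolved here.
        return False


if __name__ == "__main__":
    # Hypothetical DOI, used only to show the call:
    print(doi_resolves("10.1000/example.doi"))
```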
Ten topics were included: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, 55 in the natural sciences and 47 in the humanities. Of these, 40 citations (72.7%) in the natural sciences and 36 citations (76.6%) in the humanities were confirmed to exist (P=.42). DOI presence differed significantly between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), as did accuracy between the two disciplines (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%). The Levenshtein distance was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy.
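For context on the metric reported above: Levenshtein distance is the minimum number of single-character insertions, deletions, or substitutions needed to turn one string into another, so a small distance between a generated DOI and the corresponding real DOI indicates a near-miss, while a large distance indicates a fabricated identifier. The sketch below uses hypothetical DOIs purely for illustration; they are not drawn from the study.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert/delete/substitute) turning a into b."""
    if len(a) < len(b):
        a, b = b, a  # iterate the inner loop over the shorter string
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert_cost = current[j - 1] + 1
            delete_cost = previous[j] + 1
            substitute_cost = previous[j - 1] + (ca != cb)
            current.append(min(insert_cost, delete_cost, substitute_cost))
        previous = current
    return previous[-1]


# Hypothetical DOIs, not taken from the study:
print(levenshtein("10.1000/journal.2023.0042", "10.1000/journal.2023.0142"))   # 1 edit: a near-miss
print(levenshtein("10.1000/journal.2023.0042", "10.9999/made.up.identifier"))  # many edits: a hallucination
```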
ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to performance variations. Researchers should consider the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy. The use of domain-specific models may enhance accuracy.