Asset Details
Factuality challenges in the era of large language models and opportunities for fact-checking
Journal Article
by Ciampaglia, Giovanni Luca; Chakraborty, Tanmoy; DiResta, Renee; Miguez, Ruben; Scheufele, Dietram; Cha, Meeyoung; Augenstein, Isabelle; Menczer, Filippo; Zagni, Giovanni; Hale, Scott; Baldwin, Timothy; Ji, Heng; Hovy, Eduard; Sharma, Shivam; Corney, David; Nakov, Preslav; Halevy, Alon; Ferrara, Emilio
Subject
Access to information / Artificial intelligence / Chatbots / COVID-19 / Engineering / False information / Generative artificial intelligence / Impact analysis / Information retrieval / Language / Large language models / Natural language / Nuclear power plants / Perspective / Search engines / Speech recognition
2024
Overview
The emergence of tools based on large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Gemini, has garnered immense public attention owing to their advanced natural language generation capabilities. These remarkably natural-sounding tools have the potential to be highly useful for various tasks. However, they also tend to produce false, erroneous or misleading content—commonly referred to as hallucinations. Moreover, LLMs can be misused to generate convincing, yet false, content and profiles on a large scale, posing a substantial societal challenge by potentially deceiving users and spreading inaccurate information. This makes fact-checking increasingly important. Despite their issues with factual accuracy, LLMs have shown proficiency in various subtasks that support fact-checking, which is essential to ensure factually accurate responses. In light of these concerns, we explore issues related to factuality in LLMs and their impact on fact-checking. We identify key challenges, imminent threats and possible solutions to these factuality issues. We also thoroughly examine these challenges, existing solutions and potential prospects for fact-checking. By analysing the factuality constraints within LLMs and their impact on fact-checking, we aim to contribute to a path towards maintaining accuracy at a time of confluence of generative artificial intelligence and misinformation.
Large language models (LLMs) present challenges, including a tendency to produce false or misleading content and the potential to create misinformation or disinformation. Augenstein and colleagues explore issues related to factuality in LLMs and their impact on fact-checking.
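The overview notes that LLMs are proficient at subtasks that support fact-checking. As a minimal illustrative sketch of one such subtask (judging a claim against retrieved evidence), the Python stub below shows the general shape of the approach; it is not the authors' method, and query_llm is a hypothetical placeholder for whatever chat-completion client one actually uses.

# Minimal sketch of an LLM-assisted fact-checking subtask: judging a claim
# strictly against retrieved evidence. Illustrative only; query_llm is a
# hypothetical stand-in, not a real library call.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    label: str       # "supported", "refuted", or "not enough info"
    rationale: str

PROMPT = (
    "Claim: {claim}\n"
    "Evidence: {evidence}\n"
    "Using only the evidence above, answer on the first line with one of: "
    "supported, refuted, not enough info. On the second line, give a "
    "one-sentence rationale."
)

def query_llm(prompt: str) -> str:
    # Stand-in response so the sketch runs end to end; swap in a real
    # LLM client here.
    return "not enough info\nThe evidence does not mention the claim."

def verify_claim(claim: str, evidence: str) -> Verdict:
    # Constraining the model to the retrieved evidence (rather than its
    # parametric memory) is one common mitigation for hallucination.
    reply = query_llm(PROMPT.format(claim=claim, evidence=evidence))
    label, _, rationale = reply.partition("\n")
    return Verdict(claim, label.strip().lower(), rationale.strip())

if __name__ == "__main__":
    v = verify_claim(
        claim="The journal article was published in 2024.",
        evidence="The record lists the publication year as 2024.",
    )
    print(v.label, "-", v.rationale)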
Publisher
Nature Publishing Group UK