Asset Details
Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened
by Černý, Martin; Májovský, Martin; Netuka, David; Kasal, Matěj; Komarc, Martin
in Algorithms / Artificial Intelligence / Chatbots / Citations / Clinical trials / Coherence / Credibility / Data Analysis / Data quality / Deep learning / Editing / Errors / Ethics / Fraud / Humans / Internet / Language / Language modeling / Language usage / Languages / Limited partnerships / Medical research / Medicine / Morality / Multimedia / Neurosurgery / Open access publishing / Psychiatry / Quality of care / Scholarship / Semantics / Sentence structure / Statistics / Vigilance
2023
Journal Article
Overview
Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers.
The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers.
This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles.
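The iterative workflow described above — prompting the model section by section, reviewing each response, and refining the prompt until the output is acceptable — can be sketched in code. This is a minimal illustrative sketch, not the authors' actual code: the `model` callable stands in for a language-model API such as OpenAI's chat completion endpoint, and the `accept` callback stands in for the human reviewer; all function names and prompt wording here are assumptions.

```python
# Hypothetical sketch of the iterative prompt-refinement loop described in
# the study. `model` is any prompt -> text callable (e.g. a wrapper around a
# real LLM API); `accept` is a reviewer callback deciding whether a section
# draft is good enough. Neither reflects the authors' actual implementation.
from typing import Callable

SECTIONS = ["abstract", "introduction", "materials and methods",
            "results", "discussion", "references"]

def generate_article(model: Callable[[str], str],
                     accept: Callable[[str, str], bool],
                     max_rounds: int = 3) -> dict:
    """Build an article section by section, re-prompting with a refined
    prompt until the reviewer accepts the text or rounds run out."""
    article = {}
    for section in SECTIONS:
        prompt = f"Write the {section} of a neurosurgery research article."
        text = model(prompt)
        for _ in range(max_rounds - 1):
            if accept(section, text):
                break
            # Refine the prompt using the previous attempt as context.
            prompt = (f"Improve this {section}; make it more specific "
                      f"and formally worded:\n{text}")
            text = model(prompt)
        article[section] = text
    return article
```

Replacing the stub `model` with real API calls and `accept` with human review reproduces the loop structure the study describes: roughly one refinement cycle per section until the whole fabricated article is assembled.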
The study found that the AI language model could create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, materials and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately one hour without any special training of the human user. However, some concerns and specific mistakes were identified in the generated article, particularly in the references.
The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although such papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.