Asset Details
Adversarial prompt and fine-tuning attacks threaten medical large language models
by Huang, Furong; Lu, Zhiyong; Yang, Yifan; Jin, Qiao
Journal Article, 2025
Subjects: Benchmarks / Computer Security / Delivery of Health Care / Disease prevention / Health care / Health services / Humanities and Social Sciences / Humans / Immunization / Language / Large Language Models / Medical imaging / multidisciplinary / Patients / Science / Science (multidisciplinary) / Ultrasonic imaging / Vaccines / X-rays
Overview
The integration of Large Language Models (LLMs) into healthcare applications offers promising advancements in medical diagnostics, treatment recommendations, and patient care. However, the susceptibility of LLMs to adversarial attacks poses a significant threat, potentially leading to harmful outcomes in delicate medical contexts. This study investigates the vulnerability of LLMs to two types of adversarial attacks (prompt injections with malicious instructions, and fine-tuning with poisoned samples) across three medical tasks: disease prevention, diagnosis, and treatment. Utilizing real-world patient data, we demonstrate that both open-source and proprietary LLMs are vulnerable to malicious manipulation across multiple tasks. We discover that while integrating poisoned data does not markedly degrade overall model performance on medical benchmarks, it can lead to noticeable shifts in fine-tuned model weights, suggesting a potential pathway for detecting and countering model attacks. This research highlights the urgent need for robust security measures and defensive mechanisms to safeguard LLMs in medical applications and to ensure their safe and effective deployment in healthcare settings.
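To make the weight-shift detection idea concrete, here is a minimal sketch, not taken from the paper, of one plausible way to operationalize it: compare a fine-tuned checkpoint against its base model and report the parameter tensors with the largest relative shifts. The base-model name and checkpoint path are placeholders, and the paper does not specify this exact procedure.

```python
# Hypothetical sketch: screen for fine-tuning attacks by measuring per-tensor
# weight shifts between a base model and a fine-tuned checkpoint.
# The model name and checkpoint path below are placeholders, not from the paper.
import torch
from transformers import AutoModelForCausalLM

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # placeholder base model
TUNED_PATH = "./suspect-finetuned-ckpt"   # placeholder fine-tuned checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float32)
tuned = AutoModelForCausalLM.from_pretrained(TUNED_PATH, torch_dtype=torch.float32)

tuned_params = dict(tuned.named_parameters())
shifts = {}
for name, p_base in base.named_parameters():
    p_tuned = tuned_params[name]
    # Relative L2 norm of the weight delta for this parameter tensor.
    delta = (p_tuned.detach() - p_base.detach()).norm().item()
    shifts[name] = delta / (p_base.detach().norm().item() + 1e-12)

# Unusually large, concentrated shifts (relative to checkpoints fine-tuned on
# known-clean data) may indicate targeted poisoning rather than benign adaptation.
for name, rel_shift in sorted(shifts.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{rel_shift:.4e}  {name}")
```

In practice the relative shifts would need to be calibrated against benignly fine-tuned references, since ordinary fine-tuning also moves weights; the signal described in the abstract is a noticeable difference in the pattern of shifts, not the mere presence of change.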
Large language models hold significant potential in healthcare settings. This study exposes their vulnerability in medical applications and demonstrates the inadequacy of existing safeguards, highlighting the need for future studies to develop reliable methods for detecting and mitigating these risks.