Asset Details
The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis
by Sedlakova, Jana; Trachsel, Manuel; Ferrario, Andrea
in
Artificial Intelligence
/ Bioethics
/ Chatbots and Conversational Agents
/ Communication
/ Depression - psychology
/ Depression - therapy
/ Depression and Mood Disorders; Suicide Prevention
/ Development and Evaluation of Research Methods, Instruments and Tools
/ e-Mental Health and Cyberpsychology
/ Ethics
/ Generative Language Models Including ChatGPT
/ Humanism
/ Humans
/ Language
/ Large language models
/ Mental depression
/ Psychotherapy
/ Theme Issue 2023: Responsible Design, Integration, and Use of Generative AI in Mental Health
/ Viewpoint
/ Web-based and Mobile Health Interventions
2024
Journal Article
Overview
Large language model (LLM)–powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on the reflection of what it means to simulate “human-like” features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualization of the robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.
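To make the abstract's notion of contextualized robustness concrete, the sketch below probes whether a sentiment classifier assigns consistent labels across semantically equivalent phrasings of the kind individuals with depression might use. It is a minimal illustration under stated assumptions, not the authors' method: classify_sentiment is a hypothetical keyword-based stand-in for a real LLM call, and the example sentences are invented.

from collections import Counter
from typing import Callable, List

def classify_sentiment(text: str) -> str:
    """Hypothetical stand-in for an LLM sentiment call; returns
    "negative" or "neutral". Replace with a real model behind the
    same signature."""
    cues = ("empty", "pointless", "numb", "tired of")
    return "negative" if any(c in text.lower() for c in cues) else "neutral"

def label_agreement(paraphrases: List[str],
                    classify: Callable[[str], str]) -> float:
    """Fraction of paraphrases that receive the majority label:
    1.0 means the classifier is stable on this cluster; lower
    values flag brittleness to rephrasing."""
    labels = [classify(p) for p in paraphrases]
    return Counter(labels).most_common(1)[0][1] / len(labels)

if __name__ == "__main__":
    # Invented paraphrases of one depressive statement, varying the
    # indirect, flattened style the paper argues robustness checks
    # must account for.
    cluster = [
        "I feel empty most days.",
        "Most days it's like there's nothing inside.",
        "Honestly? Fine, I guess. Just tired of everything.",
        "Everything feels kind of pointless lately.",
    ]
    score = label_agreement(cluster, classify_sentiment)
    print(f"agreement across paraphrases: {score:.2f}")

An agreement score below 1.0 flags a phrasing cluster on which the classifier is brittle; a fuller evaluation would also track how the same user's language shifts over time, as the paper emphasizes.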
Publisher
JMIR Publications, JMIR Mental Health
Subject
/ Chatbots and Conversational Agents
/ Depression and Mood Disorders; Suicide Prevention
/ Development and Evaluation of Research Methods, Instruments and Tools
/ e-Mental Health and Cyberpsychology
/ Ethics
/ Generative Language Models Including ChatGPT
/ Humanism
/ Humans
/ Language
/ Theme Issue 2023: Responsible Design, Integration, and Use of Generative AI in Mental Health