Asset Details
Artificial Intelligence in mental health and the biases of language based models
by Straw, Isabel; Callison-Burch, Chris
in Algorithms / Artificial intelligence / Bias / Biology and Life Sciences / Collaboration / Computer and Information Sciences / Core curriculum / Data Science - methods / Data Science - statistics & numerical data / Evaluation / Finite element method / Health aspects / Health disparities / Health risks / Health Status Disparities / Humans / Inequalities / Intersectoral Collaboration / Language / Language and languages / Linguistics / Literature reviews / Medical personnel / Medicine / Medicine and Health Sciences / Mental health / Mental Health - statistics & numerical data / Minority & ethnic groups / Natural Language Processing / Patients / Physicians / Psychiatry / Psychiatry - methods / Psychiatry - statistics & numerical data / Psychological aspects / Public health / Scientists / Sentiment analysis / Sex discrimination / Sexuality / Social networks / Social Sciences / Technology application / Terminology / Women's health
2020
Journal Article
Overview
The rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from a clinical, data science and linguistic perspective.
A literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases with defined MeSH terms and keywords. Our primary analysis evaluated biases within 'GloVe' and 'Word2Vec' word embeddings. Euclidean distances were measured to assess relationships between psychiatric terms and demographic labels, and vector similarity functions were used to solve analogy questions relating to mental health.
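The two embedding probes described above can be sketched as follows. This is a minimal illustration using hypothetical toy vectors and a hypothetical six-word vocabulary; the study itself queried pretrained GloVe and Word2Vec embeddings, which have hundreds of dimensions and large vocabularies.

```python
import numpy as np

# Toy 3-dimensional embeddings, purely illustrative. A real analysis
# would load pretrained GloVe or Word2Vec vectors instead.
embeddings = {
    "depression": np.array([0.9, 0.1, 0.2]),
    "anxiety":    np.array([0.8, 0.2, 0.3]),
    "man":        np.array([0.1, 0.9, 0.1]),
    "woman":      np.array([0.1, 0.8, 0.4]),
    "doctor":     np.array([0.5, 0.9, 0.1]),
    "nurse":      np.array([0.5, 0.8, 0.4]),
}

def euclidean_distance(a, b):
    """Distance between two term vectors; a smaller value indicates
    a closer association between the terms in the embedding space."""
    return float(np.linalg.norm(embeddings[a] - embeddings[b]))

def solve_analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by ranking the remaining
    vocabulary by cosine similarity to the offset vector b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue  # exclude the query terms themselves
        sim = float(target @ vec /
                    (np.linalg.norm(target) * np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Distance probe: how close is a psychiatric term to a demographic label?
print(euclidean_distance("depression", "woman"))
# Analogy probe: 'man is to doctor as woman is to ?'
print(solve_analogy("man", "doctor", "woman"))
```

With these toy vectors the analogy probe returns "nurse", illustrating the kind of stereotyped association the study measured; in the real embeddings, systematic differences in such distances and analogy completions across demographic groups are what constitute the reported bias.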
Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age. Our literature review returned 52 papers, of which none addressed all the areas of possible bias that we identify in model development. In addition, only one article appeared on more than one research database, demonstrating the isolation of research within disciplinary silos, which inhibits cross-disciplinary collaboration and communication.
Our findings are relevant to professionals who wish to minimize the health inequalities that may arise as a result of AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.
Publisher
Public Library of Science (PLoS)