Asset Details
Directionality and representativeness are differentiable components of stereotypes in large language models
by Caliskan, Aylin; Nicolas, Gandalf
in Analysis / Artificial intelligence / Chatbots / Cognition / Large language models / Machine learning / Methods / Social and Political Sciences / Social psychology / Stereotype (Psychology) / Stereotypes
2024
Journal Article
Overview
Abstract
Representativeness is a relevant but unexamined property of stereotypes in language models. Existing auditing and debiasing approaches address the direction of stereotypes, such as whether a social category (e.g. men, women) is associated more with incompetence vs. competence content. On the other hand, representativeness is the extent to which a social category's stereotypes are about a specific content dimension, such as Competence, regardless of direction (e.g. as indicated by how often dimension-related words appear in stereotypes about the social category). As such, two social categories may be associated with competence (vs. incompetence), yet one category's stereotypes are mostly about competence, whereas the other's are mostly about alternative content (e.g. Warmth). Such differentiability would suggest that direction-based auditing may fail to identify biases in content representativeness. Here, we use a large sample of social categories that are salient in American society (based on gender, race, occupation, and others) to examine whether representativeness is an independent feature of stereotypes in the ChatGPT chatbot and SBERT language model. We focus on the Warmth and Competence stereotype dimensions, given their well-established centrality in human stereotype content. Our results provide evidence for the construct differentiability of direction and representativeness for Warmth and Competence stereotypes across models and target stimuli (social category terms, racialized name exemplars). Additionally, both direction and representativeness uniquely predicted the models' internal general valence (positivity vs. negativity) and human stereotypes. We discuss implications for the use of AI in the study of human cognition and the field of fairness in AI.
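The abstract's distinction can be made concrete with a toy calculation. The sketch below is not the authors' method; it is a minimal illustration, assuming hypothetical word lists for the Competence dimension and a simple count-based scoring, of how two social categories can share the same stereotype direction while differing in representativeness.

```python
# Hypothetical illustration (not the paper's code): direction vs.
# representativeness of the Competence dimension, from word counts.

# Assumed toy lexicons for the two poles of Competence.
HIGH_COMPETENCE = {"smart", "skilled", "capable", "competent"}
LOW_COMPETENCE = {"incompetent", "clumsy", "unskilled", "dumb"}

def direction_and_representativeness(stereotype_words):
    """Score a list of stereotype words on the Competence dimension.

    direction: (high - low) / (high + low), in [-1, 1]; the sign says
        which pole (competence vs. incompetence) dominates.
    representativeness: fraction of words that are Competence-related
        at all, regardless of pole, in [0, 1].
    """
    high = sum(w in HIGH_COMPETENCE for w in stereotype_words)
    low = sum(w in LOW_COMPETENCE for w in stereotype_words)
    total_dim = high + low
    direction = (high - low) / total_dim if total_dim else 0.0
    representativeness = (
        total_dim / len(stereotype_words) if stereotype_words else 0.0
    )
    return direction, representativeness

# Same positive direction, very different representativeness:
cat_a = ["smart", "skilled", "capable", "competent"]  # mostly Competence
cat_b = ["smart", "friendly", "warm", "kind"]         # mostly Warmth
```

Here both categories lean toward the competent pole (direction 1.0), but only a quarter of category B's stereotype content is about Competence at all, which is exactly the kind of difference a direction-only audit would miss.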
Publisher
Oxford University Press