Asset Details
AI generates covertly racist decisions about people based on their dialect
by Jurafsky, Dan; Kalluri, Pratyusha Ria; Hofmann, Valentin; King, Sharese
Subjects: 639/705/117 / 706/689/522 / African American English / African Americans / African cultural groups / American English / Artificial intelligence / Artificial Intelligence - ethics / Black or African American - ethnology / Capital punishment / Civil rights / Covert / Decision Making - ethics / Dialects / Hiring / Humanities and Social Sciences / Language / Language modeling / Language usage / Minority & ethnic groups / multidisciplinary / Natural Language Processing / Offenses / Prejudice / Race / Racial bias / Racial discrimination / Racial identity / Racial stereotypes / Racism / Racism - ethnology / Racism - prevention & control / Science / Science (multidisciplinary) / Social sciences / Social scientists / Sociolinguistics / Stereotypes / Stereotyping
2024
Journal Article
Overview
Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements about groups such as African Americans biased in problematic ways [4-7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.
Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.
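The overview describes comparing a model's covert associations for matched text in African American English versus Standard American English. A minimal sketch of that comparison logic follows; it is NOT the authors' method or data. The language model is replaced by a stub, and every name, score, and trait word here is a fabricated assumption purely for illustration.

```python
# Illustrative matched-guise-style comparison: score the same trait word
# against an AAE guise and an SAE guise, then take the difference.
# A real probe would query a language model; this stub returns made-up numbers.

def stub_association_score(variety: str, trait: str) -> float:
    """Stand-in for a real LM score (e.g. probability assigned to `trait`
    in a prompt about the speaker). All values below are fabricated."""
    fake_scores = {
        ("aae", "intelligent"): 0.12, ("sae", "intelligent"): 0.31,
        ("aae", "lazy"): 0.27, ("sae", "lazy"): 0.09,
    }
    return fake_scores[(variety, trait)]

def dialect_bias(trait: str) -> float:
    """Positive = trait associated more with the AAE guise than the SAE guise."""
    return stub_association_score("aae", trait) - stub_association_score("sae", trait)

for trait in ("intelligent", "lazy"):
    print(f"{trait}: {dialect_bias(trait):+.2f}")
```

With these invented scores, the negative trait skews toward the AAE guise and the positive trait toward the SAE guise, which is the shape of the disparity the abstract reports.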
Publisher
Nature Publishing Group UK