Asset Details
Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms
by Hagendorff, Thilo; Meding, Kristof
Subject: Algorithms / Attributes / Classification / Discrimination / Fairness / Hacking / Infancy / Machine learning / Measurement
2024
Journal Article
Overview
Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call "fairness hacking" for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI practices. We introduce two different categories of fairness hacking in reference to the established concept of p-hacking. The first category, intra-metric fairness hacking, describes the misuse of a particular metric by adding or removing sensitive attributes from the analysis. In this context, countermeasures that have been developed to prevent or reduce p-hacking can be applied to similarly prevent or reduce fairness hacking. The second category of fairness hacking is inter-metric fairness hacking. Inter-metric fairness hacking is the search for a specific fair metric with given attributes. We argue that countermeasures to prevent or reduce inter-metric fairness hacking are still in their infancy. Finally, we demonstrate both types of fairness hacking using real datasets. Our paper intends to serve as guidance for discussions within the fair ML community to prevent or reduce the misuse of fairness metrics, and thus reduce overall harm from ML applications.
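The two categories described in the abstract can be sketched with toy data. This is a minimal illustration, not the paper's method: the predictions, sensitive attributes, and metric implementations below are invented for demonstration, whereas the paper's experiments use real datasets.

```python
# Illustrative sketch of intra-metric and inter-metric fairness hacking.
# All data here is invented; group labels "x"/"y" and "p"/"q" stand in
# for values of two hypothetical sensitive attributes.

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across groups."""
    rates = {}
    for yp, g in zip(y_pred, groups):
        rates.setdefault(g, []).append(yp)
    means = [sum(v) / len(v) for v in rates.values()]
    return max(means) - min(means)

def equal_opportunity_gap(y_pred, y_true, groups):
    """Largest difference in true-positive rate across groups."""
    rates = {}
    for yp, yt, g in zip(y_pred, y_true, groups):
        if yt == 1:
            rates.setdefault(g, []).append(yp)
    means = [sum(v) / len(v) for v in rates.values()]
    return max(means) - min(means)

y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
attr_a = ["x", "x", "x", "x", "y", "y", "y", "y"]
attr_b = ["p", "q", "p", "q", "p", "q", "q", "p"]

# Intra-metric hacking: keep the metric fixed but swap which sensitive
# attribute is analysed until the reported gap looks small.
gap_a = demographic_parity_gap(y_pred, attr_a)  # large gap (0.5)
gap_b = demographic_parity_gap(y_pred, attr_b)  # gap vanishes (0.0)

# Inter-metric hacking: keep the attribute fixed and shop among metrics,
# reporting whichever one happens to look fair.
candidates = {
    "demographic_parity": demographic_parity_gap(y_pred, attr_a),
    "equal_opportunity": equal_opportunity_gap(y_pred, y_true, attr_a),
}
reported = min(candidates, key=candidates.get)  # "equal_opportunity"
```

The same predictions thus look unfair or fair depending on which attribute is analysed (intra-metric) or which metric is reported (inter-metric), which is precisely why the abstract calls for countermeasures analogous to those against p-hacking.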
Publisher
Springer Nature B.V