Back to the Drawing Board for Fair Representation Learning
May 29, 2024, 4:43 a.m. | Angéline Pouget, Nikola Jovanović, Mark Vero, Robin Staab, Martin Vechev
cs.LG updates on arXiv.org
Abstract: The goal of Fair Representation Learning (FRL) is to mitigate biases in machine learning models by learning data representations that enable high accuracy on downstream tasks while minimizing discrimination based on sensitive attributes. The evaluation of FRL methods in many recent works primarily focuses on the tradeoff between downstream fairness and accuracy with respect to a single task that was used to approximate the utility of representations during training (proxy task). This incentivizes retaining only …
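To make the fairness-accuracy tradeoff described in the abstract concrete, here is a minimal, hypothetical sketch of how one might evaluate a learned representation on a downstream proxy task: fit a simple classifier on the representation, then report both task accuracy and a discrimination measure (here, the demographic parity gap) with respect to a sensitive attribute. All data and names are illustrative assumptions, not the paper's actual method or benchmarks.

```python
import random

random.seed(0)

# Hypothetical toy setup: a 1-D learned representation z, a sensitive
# attribute s, and a binary label y for a single downstream proxy task.
n = 1000
data = []
for _ in range(n):
    s = random.randint(0, 1)                        # sensitive attribute
    z = random.gauss(0.5 + 0.3 * s, 1.0)            # representation (correlated with s)
    y = 1 if z + random.gauss(0, 0.5) > 0.8 else 0  # proxy-task label
    data.append((z, s, y))

# Downstream classifier: a simple threshold on the representation.
preds = [(1 if z > 0.8 else 0, s, y) for z, s, y in data]

# Utility: accuracy on the proxy task.
accuracy = sum(p == y for p, _, y in preds) / n

# Fairness: demographic parity gap, |P(yhat=1 | s=0) - P(yhat=1 | s=1)|.
def pos_rate(group):
    rows = [p for p, s, _ in preds if s == group]
    return sum(rows) / len(rows)

dp_gap = abs(pos_rate(0) - pos_rate(1))
print(f"accuracy={accuracy:.3f}  demographic_parity_gap={dp_gap:.3f}")
```

The abstract's critique is that reporting this tradeoff for only the single proxy task used during training can reward representations that discard everything except that task's signal; evaluating on additional, unseen downstream tasks would expose that failure mode.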