Web: http://arxiv.org/abs/2107.04423

Jan. 27, 2022, 2:11 a.m. | Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi

cs.LG updates on arXiv.org

We study the problem of training a model that must obey demographic fairness conditions when the sensitive features are not available at training time -- in other words, how can we train a model to be fair by race when we don't have data about race? We adopt a fairness pipeline perspective, in which an "upstream" learner that does have access to the sensitive features will learn a proxy model for these features from the other attributes. The goal of …

