Web: http://arxiv.org/abs/2206.10923

June 23, 2022, 1:10 a.m. | Gaurav Maheshwari, Michaël Perrot

cs.LG updates on arXiv.org arxiv.org

We tackle the problem of group fairness in classification, where the
objective is to learn models that do not unjustly discriminate against
subgroups of the population. Most existing approaches are limited to simple
binary tasks or involve difficult-to-implement training mechanisms, which
reduces their practical applicability. In this paper, we propose FairGrad, a
method to enforce fairness based on a reweighting scheme that iteratively
learns group-specific weights based on whether the groups are advantaged or
not. FairGrad is easy …
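To illustrate the general idea of such a reweighting scheme (not the authors' actual algorithm, whose details are not given in this excerpt), here is a minimal sketch in Python. It assumes a hypothetical setup where a logistic-regression model is trained with per-sample weights, and each group's weight is nudged up when the group is disadvantaged (its accuracy falls below the overall accuracy) and down when it is advantaged; the function name `reweighted_training` and the accuracy-gap criterion are illustrative assumptions.

```python
import numpy as np

def reweighted_training(X, y, groups, epochs=100, lr=0.1, fair_lr=0.05):
    """Hypothetical sketch of iterative group reweighting.

    Trains logistic regression with per-sample weights derived from
    group-specific weights, which are adjusted each epoch depending on
    whether the group is advantaged or disadvantaged.
    """
    n, d = X.shape
    w = np.zeros(d)
    unique_groups = np.unique(groups)
    group_weights = {g: 1.0 for g in unique_groups}

    for _ in range(epochs):
        # Map current group weights onto individual samples.
        sample_w = np.array([group_weights[g] for g in groups])

        # Weighted logistic-regression gradient step.
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (sample_w * (p - y)) / n
        w -= lr * grad

        # Re-estimate each group's advantage from its accuracy gap
        # (illustrative criterion, not necessarily FairGrad's).
        preds = (p >= 0.5).astype(int)
        overall_acc = (preds == y).mean()
        for g in unique_groups:
            mask = groups == g
            gap = overall_acc - (preds[mask] == y[mask]).mean()
            # Disadvantaged group (gap > 0): increase weight;
            # advantaged group (gap < 0): decrease it. Keep non-negative.
            group_weights[g] = max(group_weights[g] + fair_lr * gap, 0.0)

    return w, group_weights
```

The appeal of this family of methods is that the fairness adjustment lives entirely in the sample weights, so any weighted-loss learner can be plugged in without a custom training mechanism.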

