Web: http://arxiv.org/abs/2206.11423

June 24, 2022, 1:10 a.m. | Jiayin Jin, Zeru Zhang, Yang Zhou, Lingfei Wu

cs.LG updates on arXiv.org arxiv.org

Only recently have researchers attempted to provide classification algorithms with provable group fairness guarantees. Most of these algorithms are hampered by the requirement that the training and deployment data follow the same distribution. This paper proposes an input-agnostic certified group fairness algorithm, FairSmooth, which improves the fairness of classification models while maintaining their remarkable prediction accuracy. A Gaussian parameter smoothing method is developed to transform base classifiers into their smooth versions. An optimal individual smooth classifier is learnt for …
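The abstract does not give the construction in detail, but Gaussian parameter smoothing of the kind described is typically defined as g(x) = E_{ε~N(0, σ²I)}[f_{θ+ε}(x)]: the base classifier's parameters are perturbed with Gaussian noise and the predictions are averaged. A minimal Monte Carlo sketch, assuming a toy linear scoring function (not the paper's actual model or code):

```python
import numpy as np

def smooth_score(theta, x, sigma=0.5, n_samples=1000, seed=None):
    """Monte Carlo estimate of the Gaussian-parameter-smoothed score
    g(x) = E_{eps ~ N(0, sigma^2 I)}[ f_{theta + eps}(x) ],
    illustrated with a linear base classifier f_theta(x) = theta . x.
    (Hypothetical sketch; the paper's smoothing operates on general
    classifiers, not just linear scores.)"""
    rng = np.random.default_rng(seed)
    scores = np.empty(n_samples)
    for i in range(n_samples):
        # Perturb the *parameters* (not the input) with Gaussian noise.
        eps = rng.normal(0.0, sigma, size=theta.shape)
        scores[i] = float(x @ (theta + eps))
    # Average over noise draws approximates the smoothed classifier.
    return scores.mean()
```

For a linear score the smoothing is unbiased, so the estimate stays close to the clean score `theta @ x`; the smoothing becomes input-agnostic in the sense that the noise is added to parameters once, independent of any particular input.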

Tags: arxiv, fairness, group, cs.LG
