Achieving Fairness at No Utility Cost via Data Reweighing with Influence. (arXiv:2202.00787v2 [cs.LG] UPDATED)
June 20, 2022, 1:11 a.m. | Peizhao Li, Hongfu Liu
cs.LG updates on arXiv.org
With the rapid development of algorithmic governance, fairness has become a
compulsory property for machine learning models to suppress unintentional
discrimination. In this paper, we focus on the pre-processing aspect of
achieving fairness and propose a data reweighing approach that only adjusts
the weights of samples during the training phase. Unlike most previous
reweighing methods, which usually assign a uniform weight to each (sub)group,
we granularly model the influence of each training sample with respect to a
fairness-related quantity and …
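The idea of per-sample influence on a fairness quantity can be sketched with classic first-order influence functions: the effect of upweighting a training sample on a function of the model parameters is approximately the negative inner product of that function's gradient with the inverse-Hessian-weighted per-sample loss gradient. The snippet below is a minimal illustration on synthetic data with logistic regression, not the paper's actual algorithm; the fairness quantity (a demographic-parity gap in mean predicted score), the data, and the reweighing rule are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data with a binary sensitive attribute s (hypothetical) ---
n, d = 400, 5
s = rng.integers(0, 2, n)                       # sensitive group 0/1
X = rng.normal(size=(n, d)) + s[:, None] * 0.8  # group-correlated features
w_true = rng.normal(size=d)
y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, sample_weight, lam=1e-2, iters=500, lr=0.5):
    """L2-regularized logistic regression via plain gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (sample_weight * (p - y)) / sample_weight.sum() + lam * theta
        theta -= lr * grad
    return theta

w0 = np.ones(n)                                 # start from uniform weights
theta = fit_logreg(X, y, w0)
p = sigmoid(X @ theta)

# Fairness-related quantity: demographic-parity gap in mean predicted score
gap = p[s == 0].mean() - p[s == 1].mean()

# Gradient of the gap w.r.t. theta (d p_i / d theta = p_i (1 - p_i) x_i)
dp = (p * (1 - p))[:, None] * X
grad_F = dp[s == 0].mean(axis=0) - dp[s == 1].mean(axis=0)

# Hessian of the (regularized) training loss at theta
lam = 1e-2
H = (X * (p * (1 - p))[:, None]).T @ X / n + lam * np.eye(d)

# First-order influence of upweighting sample i on the gap:
#   I_i ~= -grad_F^T H^{-1} grad_loss_i
grad_loss = (p - y)[:, None] * X                # per-sample loss gradients
influence = -(grad_loss @ np.linalg.solve(H, grad_F))

# Toy reweighing rule: damp samples whose upweighting would widen |gap|
step = 0.5 / (np.abs(influence).max() + 1e-12)
weights = np.clip(1.0 - step * np.sign(gap) * influence, 0.0, 2.0)
```

Retraining with `weights` in place of `w0` and comparing the resulting gap against the original is the natural follow-up experiment; the paper's contribution is a principled version of this reweighing that provably avoids sacrificing utility.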