June 20, 2022, 1:11 a.m. | Peizhao Li, Hongfu Liu

cs.LG updates on arXiv.org

With the rapid development of algorithmic governance, fairness has become a
compulsory property for machine learning models to suppress unintentional
discrimination. In this paper, we focus on the pre-processing aspect of
achieving fairness and propose a data reweighing approach that only adjusts
the weights of samples in the training phase. Unlike most previous
reweighing methods, which usually assign a uniform weight to each (sub)group,
we granularly model the influence of each training sample with regard to
fairness-related quantity and …
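As a rough illustration of the per-sample reweighing idea described above, here is a minimal sketch of a training step that scales each sample's loss by an individual weight. This is not the paper's method: the function `weighted_training_step` and the placeholder `sample_weights` are assumptions for illustration; the paper computes its weights from influence on fairness-related quantities, which is omitted here.

```python
# Minimal sketch (not the paper's algorithm): per-sample reweighing in a
# PyTorch training step. `sample_weights` is a placeholder for the
# influence-based weights the paper derives.
import torch
import torch.nn as nn

def weighted_training_step(model, optimizer, x, y, sample_weights):
    """One gradient step with a per-sample weighted cross-entropy loss."""
    optimizer.zero_grad()
    logits = model(x)
    # reduction='none' keeps one loss value per sample so each can be scaled
    per_sample_loss = nn.functional.cross_entropy(logits, y, reduction="none")
    loss = (sample_weights * per_sample_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a linear classifier with uniform initial weights.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))
weights = torch.ones(32)  # would be replaced by influence-based weights
weighted_training_step(model, optimizer, x, y, weights)
```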

Tags: arxiv, cost, data, fairness, influence, lg
