April 10, 2024, 4:43 a.m. | Wei Yao, Zhanke Zhou, Zhicong Li, Bo Han, Yong Liu

cs.LG updates on arXiv.org

arXiv:2310.11211v4 Announce Type: replace
Abstract: It has been observed that machine learning algorithms exhibit biased predictions against certain population groups. To mitigate such bias while achieving comparable accuracy, a promising approach is to introduce surrogate functions of the fairness definition of concern and solve a constrained optimization problem. However, prior work has made the intriguing observation that such fairness surrogate functions may yield unfair results and high instability. In this work, in order to deeply understand them, taking a widely used fairness …
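To make the recipe the abstract describes concrete, below is a minimal sketch of fairness-constrained training: a non-differentiable demographic parity gap (which uses the indicator 1[f(x) > 0]) is replaced by a smooth sigmoid surrogate and added as a penalty to the logistic loss. All names, the synthetic data, and the squared-penalty form are illustrative assumptions, not the paper's actual method or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a binary protected group g.
# The group shifts both the features and the label, so an unconstrained
# model will typically show a demographic parity gap. (Toy assumption.)
n = 2000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3)) + g[:, None] * 0.8
true_w = np.array([1.0, -1.0, 0.5])
y = (X @ true_w + 0.3 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def loss_and_grad(w, lam):
    """Logistic loss + lam * (surrogate demographic parity gap)^2.

    The exact parity gap compares rates of 1[x @ w > 0] across groups;
    the surrogate replaces the indicator with sigmoid(x @ w), which is
    differentiable, so standard gradient descent applies.
    """
    p = sigmoid(X @ w)
    eps = 1e-12
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad_nll = X.T @ (p - y) / n

    m1, m0 = g == 1, g == 0
    gap = p[m1].mean() - p[m0].mean()          # smooth surrogate of the gap
    sp = p * (1 - p)                           # sigmoid derivative
    grad_gap = (X[m1].T @ sp[m1]) / m1.sum() - (X[m0].T @ sp[m0]) / m0.sum()

    return nll + lam * gap**2, grad_nll + lam * 2.0 * gap * grad_gap

def train(lam, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        _, grad = loss_and_grad(w, lam)
        w -= lr * grad
    return w

def parity_gap(w):
    """Exact (non-surrogate) demographic parity gap of hard predictions."""
    yhat = (X @ w > 0).astype(float)
    return abs(yhat[g == 1].mean() - yhat[g == 0].mean())

w_plain = train(lam=0.0)    # unconstrained baseline
w_fair = train(lam=20.0)    # surrogate-penalized model
print(f"parity gap (unconstrained):       {parity_gap(w_plain):.3f}")
print(f"parity gap (surrogate-penalized): {parity_gap(w_fair):.3f}")
```

Note that the penalty controls the *surrogate* gap, not the exact one; the mismatch between the two is exactly the kind of issue (unfair results despite the surrogate being satisfied) that the paper investigates.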

