April 10, 2024, 4:43 a.m. | Wei Yao, Zhanke Zhou, Zhicong Li, Bo Han, Yong Liu

cs.LG updates on arXiv.org arxiv.org

arXiv:2310.11211v4 Announce Type: replace
Abstract: It has been observed that machine learning algorithms exhibit biased predictions against certain population groups. To mitigate such bias while achieving comparable accuracy, a promising approach is to introduce surrogate functions of the fairness definition of concern and solve a constrained optimization problem. Intriguingly, however, previous work has found that such fairness surrogate functions may yield unfair results and high instability. In this work, to understand them in depth, taking a widely used fairness …

