Fairness Risks for Group-conditionally Missing Demographics
Feb. 22, 2024, 5:41 a.m. | Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang
cs.LG updates on arXiv.org
Abstract: Fairness-aware classification models have gained increasing attention in recent years as concerns grow over discrimination against some demographic groups. Most existing models require full knowledge of the sensitive features, which can be impractical due to privacy, legal issues, and individuals' fear of discrimination. The key challenge we address is the group dependency of this unavailability: for example, people in some age range may be more reluctant to reveal their age. Our solution augments general …
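The abstract's central point, that missingness of a sensitive feature can itself depend on the group, can be illustrated with a small simulation. This is a hedged sketch, not the paper's method: the age groups, missingness rates, and variable names below are all hypothetical, chosen only to show how group-conditional non-response skews the observed demographic distribution away from the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute: age group (0 = under 40, 1 = 40+).
age_group = rng.integers(0, 2, size=n)

# Group-conditional missingness: for illustration, assume the 40+ group
# withholds its age far more often (60% vs. 10% non-response).
p_missing = np.where(age_group == 1, 0.6, 0.1)
observed = rng.random(n) >= p_missing

# The observed subsample is no longer representative of the population.
pop_rate = age_group.mean()            # true share of the 40+ group
obs_rate = age_group[observed].mean()  # share among those who revealed age
print(f"population share of 40+: {pop_rate:.2f}")
print(f"observed share of 40+:   {obs_rate:.2f}")
```

Any fairness audit run only on the rows where the demographic is observed would therefore under-weight exactly the group most reluctant to disclose it, which is the setting the paper targets.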