Fairness-Accuracy Trade-Offs: A Causal Perspective
May 27, 2024, 4:42 a.m. | Drago Plecko, Elias Bareinboim
cs.LG updates on arXiv.org
Abstract: Systems based on machine learning may exhibit discriminatory behavior based on sensitive characteristics such as gender, sex, religion, or race. In light of this, various notions of fairness and methods to quantify discrimination have been proposed, leading to the development of numerous approaches for constructing fair predictors. At the same time, imposing fairness constraints may decrease the utility of the decision-maker, highlighting a tension between fairness and utility. This tension is also recognized in legal frameworks, …
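The tension the abstract describes can be made concrete with a small experiment. The sketch below is not from the paper (which takes a causal approach); it is a minimal illustration under assumed synthetic data: a score-based predictor where one group has systematically lower scores, compared once with a single accuracy-optimal threshold and once with per-group thresholds constrained to near-equal positive rates (demographic parity). All names and the 0.02 gap tolerance are hypothetical choices for the demo.

```python
import random

random.seed(0)

# Hypothetical synthetic data (not from the paper): a risk score correlated
# with the label; group g=1 has systematically lower scores.
n = 2000
data = []
for _ in range(n):
    g = random.randint(0, 1)                     # sensitive attribute
    s = random.gauss(0.5, 0.2) - 0.1 * g         # predictor's score
    y = 1 if random.random() < max(0.0, min(1.0, s)) else 0
    data.append((g, s, y))

by_group = {0: [d for d in data if d[0] == 0],
            1: [d for d in data if d[0] == 1]}

def accuracy(th):
    """Accuracy when predicting positive iff score >= th[group]."""
    return sum((s >= th[g]) == (y == 1) for g, s, y in data) / n

def positive_rate(th, g):
    """Fraction of group g predicted positive."""
    rows = by_group[g]
    return sum(s >= th[gg] for gg, s, _ in rows) / len(rows)

grid = [t / 20 for t in range(21)]               # thresholds 0.00 .. 1.00

# Unconstrained predictor: one shared threshold, chosen for accuracy alone.
shared = max(grid, key=lambda t: accuracy({0: t, 1: t}))
acc_u = accuracy({0: shared, 1: shared})
gap_u = abs(positive_rate({0: shared, 1: shared}, 0)
            - positive_rate({0: shared, 1: shared}, 1))

# Fairness-constrained predictor: per-group thresholds, best accuracy
# subject to a demographic-parity gap below 0.02.
acc_f, th_f = max(
    ((accuracy({0: t0, 1: t1}), {0: t0, 1: t1})
     for t0 in grid for t1 in grid
     if abs(positive_rate({0: t0, 1: t1}, 0)
            - positive_rate({0: t0, 1: t1}, 1)) < 0.02),
    key=lambda p: p[0])

print(f"unconstrained: accuracy={acc_u:.3f}, parity gap={gap_u:.3f}")
print(f"constrained:   accuracy={acc_f:.3f}, parity gap < 0.02")
```

On data like this the constrained predictor typically pays some accuracy to close the positive-rate gap, which is exactly the fairness-utility tension the abstract points to; the paper's contribution is to analyze that trade-off causally rather than via threshold search.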