How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis. (arXiv:2206.00667v1 [cs.LG])
June 3, 2022, 1:10 a.m. | Bishwamittra Ghosh, Debabrota Basu, Kuldeep S. Meel
cs.LG updates on arXiv.org arxiv.org
Fairness in machine learning has attracted significant attention due to the
widespread use of machine learning in high-stakes decision-making tasks.
Unless regulated with a fairness objective, machine learning classifiers can
exhibit unfairness/bias towards certain demographic populations in the
data. Thus, quantifying and mitigating the bias induced by classifiers
has become a central concern. In this paper, we aim to quantify
the influence of different features on the bias of a classifier. To this end,
we propose a framework …
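The abstract refers to quantifying the bias a classifier induces, though the truncated text does not show which fairness metric the authors use. As an illustrative assumption only (not the paper's method), one widely used measure is the statistical parity difference between two demographic groups:

```python
# Illustrative sketch of one common bias metric: statistical parity
# difference. The paper's own fairness measure and framework are not
# shown in the truncated abstract; this is an assumed stand-in.

def statistical_parity_difference(preds, groups):
    """P(yhat = 1 | group 0) - P(yhat = 1 | group 1) for binary predictions
    and a binary sensitive attribute."""
    preds_a = [p for p, g in zip(preds, groups) if g == 0]
    preds_b = [p for p, g in zip(preds, groups) if g == 1]
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a - rate_b

# Toy example: binary predictions and a binary sensitive attribute.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 indicates equal positive-prediction rates across groups; the paper's framework would then attribute such a disparity to individual features.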