June 3, 2022, 1:10 a.m. | Bishwamittra Ghosh, Debabrota Basu, Kuldeep S. Meel

cs.LG updates on arXiv.org

Fairness in machine learning has attracted significant attention due to the
widespread use of machine learning in high-stakes decision-making tasks.
Unless regulated with a fairness objective, machine learning classifiers may
exhibit unfairness or bias against certain demographic groups in the data.
Quantifying and mitigating the bias induced by classifiers has therefore
become a central concern. In this paper, we aim to quantify the influence of
different features on the bias of a classifier. To this end, we propose a
framework …
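To make the problem concrete, here is a minimal sketch (not the paper's actual framework) of the kind of quantity involved: it measures a toy classifier's statistical parity difference, a common group-fairness metric, and attributes bias to each feature via a crude ablation. The data, the `predict` rule, and the mean-ablation scheme are all hypothetical illustrations.

```python
# Hedged illustration: statistical parity difference (SPD) of a toy
# classifier, plus a naive per-feature bias attribution. This is NOT
# the framework proposed in the paper; everything here is synthetic.
import random

random.seed(0)

def spd(preds, groups):
    """Statistical parity difference: P(y_hat=1 | g=1) - P(y_hat=1 | g=0)."""
    p1 = [p for p, g in zip(preds, groups) if g == 1]
    p0 = [p for p, g in zip(preds, groups) if g == 0]
    return sum(p1) / len(p1) - sum(p0) / len(p0)

# Toy data: a sensitive group attribute and two features.
n = 1000
groups = [random.randint(0, 1) for _ in range(n)]
x1 = [g + random.gauss(0, 1) for g in groups]   # correlated with the group
x2 = [random.gauss(0, 1) for _ in range(n)]     # independent of the group

def predict(a, b):
    """Toy classifier that indirectly uses the group-correlated feature."""
    return [1 if ai + bi > 0 else 0 for ai, bi in zip(a, b)]

base_bias = spd(predict(x1, x2), groups)

# Influence of a feature = change in bias when that feature is
# neutralised (here: replaced by its mean), an ablation-style attribution.
mean1 = sum(x1) / n
mean2 = sum(x2) / n
infl_x1 = base_bias - spd(predict([mean1] * n, x2), groups)
infl_x2 = base_bias - spd(predict(x1, [mean2] * n), groups)

print(f"bias={base_bias:.2f}  infl(x1)={infl_x1:.2f}  infl(x2)={infl_x2:.2f}")
```

In this setup the group-correlated feature `x1` accounts for essentially all of the classifier's demographic disparity, while the independent feature `x2` contributes little, which is the sort of per-feature decomposition of bias the abstract describes.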

analysis arxiv computing fairness feature global influence
