Web: http://arxiv.org/abs/2206.08454

June 20, 2022, 1:10 a.m. | Sanghamitra Dutta, Praveen Venkatesh, Pulkit Grover

cs.LG updates on arXiv.org

When a machine-learning algorithm makes biased decisions, it can be helpful
to understand the sources of disparity in order to explain why the bias exists.
To this end, we examine the problem of quantifying the contribution of each
individual feature to the observed disparity. If we have access to the
decision-making model, one potential approach (inspired by intervention-based
approaches in the explainability literature) is to vary each individual feature
(while keeping the others fixed) and use the resulting change in disparity to
quantify its …
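The intervention-based idea sketched in the abstract can be illustrated with a small, self-contained example. This is not the paper's method, just a minimal sketch under assumed choices: disparity is measured as the statistical parity gap (difference in mean prediction between two groups), and "varying" a feature is done by permuting that column so its association with the group is broken while the other features stay fixed. All function and variable names here are illustrative.

```python
import numpy as np

def disparity(model, X, group):
    # Statistical parity gap: difference in mean model output between groups.
    preds = model(X)
    return preds[group == 1].mean() - preds[group == 0].mean()

def feature_contributions(model, X, group, seed=None):
    # For each feature j, intervene on column j (here: permute it, breaking
    # its link to the group) while keeping the other columns fixed, and
    # record how much the disparity drops as a result.
    rng = np.random.default_rng(seed)
    base = disparity(model, X, group)
    contribs = {}
    for j in range(X.shape[1]):
        X_int = X.copy()
        X_int[:, j] = rng.permutation(X_int[:, j])
        contribs[j] = base - disparity(model, X_int, group)
    return base, contribs

# Toy data: feature 0 carries the group signal, feature 2 is ignored by the model.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)
X = rng.normal(size=(2000, 3))
X[:, 0] += 2.0 * group                      # group-correlated feature
model = lambda X: X @ np.array([1.0, 0.5, 0.0])  # simple linear scorer
base, contribs = feature_contributions(model, X, group, seed=1)
```

In this toy setup, nearly all of the disparity is attributed to feature 0, while feature 2 (which the model ignores) gets a contribution of essentially zero; the abstract's point is precisely about making such per-feature attributions rigorous.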

