Web: http://arxiv.org/abs/2205.04610

May 11, 2022, 1:11 a.m. | Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky

cs.LG updates on arXiv.org

Research in machine learning fairness has historically considered a single
binary demographic attribute; however, the reality is of course far more
complicated. In this work, we grapple with questions that arise along three
stages of the machine learning pipeline when incorporating intersectionality as
multiple demographic attributes: (1) which demographic attributes to include as
dataset labels, (2) how to handle the progressively smaller size of subgroups
during model training, and (3) how to move beyond existing evaluation metrics
when benchmarking model …

Tags: arxiv, evaluation, learning, machine, machine learning, underrepresentation
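The second and third questions in the abstract, the progressively smaller subgroups produced by intersecting demographic attributes and the need to evaluate beyond aggregate metrics, can be illustrated with a small sketch. This is not code from the paper; the column names, synthetic data, and the worst-group-accuracy metric are assumptions used purely for illustration.

```python
# Illustrative sketch (not from the paper): intersectional subgroup sizes
# and per-subgroup evaluation on a hypothetical dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical dataset with two binary demographic attributes,
# ground-truth labels, and noisy model predictions.
df = pd.DataFrame({
    "attr_gender": rng.integers(0, 2, n),
    "attr_age_over_40": rng.integers(0, 2, n),
    "label": rng.integers(0, 2, n),
})
df["pred"] = np.where(rng.random(n) < 0.85, df["label"], 1 - df["label"])

# (2) Subgroup counts shrink as more attributes are intersected:
# one binary attribute gives 2 groups, two give 4, and so on.
print(df.groupby("attr_gender").size())
print(df.groupby(["attr_gender", "attr_age_over_40"]).size())

# (3) Evaluate per intersectional subgroup instead of only overall:
# report accuracy for every subgroup and the worst-group accuracy.
per_group_acc = (
    df.assign(correct=df["pred"] == df["label"])
      .groupby(["attr_gender", "attr_age_over_40"])["correct"]
      .mean()
)
print(per_group_acc)
print("worst-group accuracy:", per_group_acc.min())
```

With k binary attributes the number of intersectional subgroups grows as 2^k, so the smallest groups can quickly become too sparse for reliable training or evaluation, which is the tension the paper examines.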
