Oct. 21, 2022, 1:12 a.m. | Changjian Shui, Gezheng Xu, Qi Chen, Jiaqi Li, Charles Ling, Tal Arbel, Boyu Wang, Christian Gagné

cs.LG updates on arXiv.org

We propose an analysis in fair learning that preserves the utility of the
data while reducing prediction disparities under the criterion of group
sufficiency. We focus on the scenario where the data contains multiple or even
many subgroups, each with a limited number of samples. To this end, we present
a principled method for learning a fair predictor for all subgroups by
formulating it as a bilevel objective. Specifically, the subgroup-specific
predictors are learned at the lower level through a small …
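The bilevel idea described above can be illustrated with a toy sketch: a shared upper-level parameter acts as a prior, each subgroup fine-tunes its own predictor from that prior with a few gradient steps (the lower level), and the upper level then moves the shared parameter toward the subgroup solutions. This is a minimal first-order illustration under assumed scalar linear models and squared loss, not the authors' actual algorithm; all names and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 subgroups, each with only a few samples, sharing a common trend.
true_w = 2.0
groups = []
for g in range(3):
    x = rng.normal(size=8)
    y = true_w * x + 0.1 * rng.normal(size=8)
    groups.append((x, y))

def grad(theta, x, y):
    # Gradient of the mean squared error for a scalar linear model y ~ theta * x.
    return 2 * np.mean((theta * x - y) * x)

w = 0.0                      # upper-level shared parameter (the "prior")
inner_lr, outer_lr = 0.1, 0.5

for _ in range(200):
    thetas = []
    for x, y in groups:
        theta = w
        for _ in range(5):   # lower level: a few subgroup-specific steps
            theta -= inner_lr * grad(theta, x, y)
        thetas.append(theta)
    # Upper level: pull the shared parameter toward the subgroup solutions
    # (a first-order approximation of the bilevel gradient).
    w += outer_lr * (np.mean(thetas) - w)

print(w)
```

With only a handful of samples per subgroup, the shared parameter recovers the common trend while each subgroup predictor stays close to it, which is the intuition behind learning all subgroup predictors through one bilevel objective.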
