July 21, 2022, 1:10 a.m. | Guanchu Wang, Mengnan Du, Ninghao Liu, Na Zou, Xia Hu

cs.LG updates on arXiv.org arxiv.org

Existing work on fairness modeling commonly assumes that sensitive attributes
for all instances are fully available, which may not hold in many real-world
applications due to the high cost of acquiring sensitive information. When
sensitive attributes are not disclosed or available, a small portion of the
training data must be manually annotated before bias can be mitigated. However,
the skewed distribution across sensitive groups in the original dataset is
preserved in the annotated subset, which leads to …
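To see why uniformly sampling the annotation subset does not remove the imbalance, consider a minimal sketch (the group sizes and sampling rate below are illustrative assumptions, not figures from the paper): a random sample of a skewed dataset reproduces roughly the same group proportions, so the annotated subset inherits the skew of the full data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skewed dataset: 90% of instances in group A, 10% in group B.
groups = np.array(["A"] * 9000 + ["B"] * 1000)

# Annotate a small random subset (e.g. 5% of the data) with sensitive attributes.
annotated_idx = rng.choice(len(groups), size=500, replace=False)
annotated = groups[annotated_idx]

for g in ("A", "B"):
    full_frac = np.mean(groups == g)
    sub_frac = np.mean(annotated == g)
    print(f"group {g}: {full_frac:.1%} of full data, {sub_frac:.1%} of annotated subset")

# The subset's proportions track the original skew, so minority-group
# information remains scarce in the annotated data used for bias mitigation.
```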

Tags: algorithmic bias, annotations, arxiv, bias, lg
