July 21, 2022, 1:10 a.m. | Guanchu Wang, Mengnan Du, Ninghao Liu, Na Zou, Xia Hu

cs.LG updates on arXiv.org

Existing work on fairness modeling commonly assumes that sensitive attributes
for all instances are fully available, which may not be true in many real-world
applications due to the high cost of acquiring sensitive information. When
sensitive attributes are not disclosed or available, it is necessary to manually
annotate a small part of the training data to mitigate bias. However, the
skewed distribution across different sensitive groups in the original dataset
is preserved in the annotated subset, which leads to …
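The skew-preservation point is easy to see with a quick simulation. The sketch below is not from the paper; the group proportions and the annotation budget are assumed purely for illustration. It draws a small random annotation set from a dataset with a skewed binary sensitive attribute and shows that the minority share in the subset mirrors the full data.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's method): simulate a dataset
# where the sensitive attribute is skewed, e.g. ~90% group A and ~10% group B,
# then "annotate" a small random subset and check that the skew carries over.
rng = np.random.default_rng(0)

n_total = 10_000
p_minority = 0.10                               # assumed minority-group share
sensitive = rng.random(n_total) < p_minority    # True = minority group

annotation_budget = 200                         # assumed small labeling budget
annotated_idx = rng.choice(n_total, size=annotation_budget, replace=False)
annotated = sensitive[annotated_idx]

print(f"minority share in full data:        {sensitive.mean():.3f}")
print(f"minority share in annotated subset: {annotated.mean():.3f}")
# Both shares come out near 0.10: uniform random annotation reproduces the
# original skew, so only ~20 of the 200 labels cover the minority group.
```

Under these assumptions, uniform sampling gives the minority group very few annotated examples, which is the imbalance the abstract points to as a problem for bias mitigation.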

Tags: algorithmic bias, annotations, arxiv, bias, lg
