April 2, 2024, 7:43 p.m. | Hannah Chen, Yangfeng Ji, David Evans

cs.LG updates on arXiv.org

arXiv:2404.00463v1 Announce Type: cross
Abstract: Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics. Counterfactual data augmentation (CDA) is effective for reducing bias in NLP models, yet models trained with CDA are often evaluated only on metrics that are closely tied to the causal fairness notion; similarly, sampling-based methods designed to promote statistical fairness are rarely evaluated for causal fairness. In …
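As an illustration of the CDA technique the abstract refers to, below is a minimal sketch of counterfactual data augmentation via gendered word-pair swapping. The swap list, tokenization, and handling of ambiguous terms like "her" are illustrative assumptions, not the paper's exact method:

```python
# Minimal sketch of counterfactual data augmentation (CDA) for gender bias.
# Assumption: a simple word-pair substitution scheme; real CDA pipelines use
# curated swap lists and proper tokenization.

GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her",
    "his": "her",
    # "her" is ambiguous (object "him" vs. possessive "his");
    # this sketch arbitrarily picks the possessive reading.
    "her": "his",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def counterfactual(text: str) -> str:
    """Return a copy of `text` with gendered terms swapped."""
    swapped = []
    for tok in text.split():
        core = tok.strip(".,!?").lower()
        if core in GENDER_PAIRS:
            repl = GENDER_PAIRS[core]
            if tok[0].isupper():
                repl = repl.capitalize()
            # Re-attach any trailing punctuation stripped above.
            swapped.append(repl + tok[len(core):])
        else:
            swapped.append(tok)
    return " ".join(swapped)

def augment(dataset: list[str]) -> list[str]:
    """CDA: train on the union of originals and their counterfactuals."""
    return dataset + [counterfactual(x) for x in dataset]

if __name__ == "__main__":
    print(augment(["She is a doctor.", "He likes his job."]))
    # ['She is a doctor.', 'He likes his job.',
    #  'He is a doctor.', 'She likes her job.']
```

Training on the augmented set pushes the model toward making the same prediction for an example and its counterfactual, which is why evaluations of CDA-trained models tend to track causal fairness metrics rather than statistical ones.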

