Feb. 8, 2024, 5:42 a.m. | Sanjari Srivastava, Piotr Mardziel, Zhikun Zhang, Archana Ahlawat, Anupam Datta, John C. Mitchell

cs.LG updates on arXiv.org

Fairness and privacy are two important values that machine learning (ML) practitioners often seek to operationalize in models. Fairness aims to reduce model bias across social/demographic sub-groups. Privacy, via differential privacy (DP) mechanisms, limits the impact of any individual's training data on the resulting model. The trade-off between these two goals of trustworthy ML poses a challenge to practitioners who wish to address both. We show that DP amplifies gender, racial, and religious bias when fine-tuning large …
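The abstract does not name the mechanism, but DP fine-tuning is standardly done with DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that bound. Below is a minimal NumPy sketch, assuming DP-SGD and using illustrative names (dp_sgd_update, clip_norm, noise_multiplier are not from the paper), showing how this limits any individual's influence on an update:

    import numpy as np

    def dp_sgd_update(params, per_example_grads, clip_norm=1.0,
                      noise_multiplier=1.1, lr=0.01, rng=None):
        """One DP-SGD step: clip each example's gradient, average, add noise."""
        rng = np.random.default_rng() if rng is None else rng
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            # Rescale any gradient whose L2 norm exceeds clip_norm, so no
            # single example can move the parameters by more than lr * clip_norm.
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        mean_grad = np.mean(clipped, axis=0)
        # Gaussian noise calibrated to the clipping bound masks the presence
        # or absence of any one example; the std on the mean gradient is
        # noise_multiplier * clip_norm / batch_size.
        sigma = noise_multiplier * clip_norm / len(per_example_grads)
        return params - lr * (mean_grad + rng.normal(0.0, sigma, mean_grad.shape))

    # Toy usage: one update over four per-example gradients.
    params = np.zeros(3)
    grads = [np.array([3.0, 0.0, 0.0]), np.array([0.1, 0.2, 0.1]),
             np.array([0.0, 1.5, 0.0]), np.array([0.2, 0.1, 0.3])]
    params = dp_sgd_update(params, grads)

The clipping step ties the two values together: it bounds each individual's contribution (privacy), but examples from under-represented sub-groups tend to have larger gradients and therefore lose more signal to clipping and noise, a commonly cited explanation for the disparate impact of DP that the abstract's bias-amplification finding echoes.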
