Web: http://arxiv.org/abs/2205.06135

May 13, 2022, 1:11 a.m. | Gaurav Maheshwari, Pascal Denis, Mikaela Keller, Aurélien Bellet

cs.LG updates on arXiv.org arxiv.org

Encoded text representations often capture sensitive attributes about
individuals (e.g., race or gender), which raises privacy concerns and can make
downstream models unfair to certain groups. In this work, we propose FEDERATE,
an approach that combines ideas from differential privacy and adversarial
training to learn private text representations that also induce fairer
models. We empirically evaluate the trade-off between the privacy of the
representations and the fairness and accuracy of the downstream model on four
NLP datasets. Our results show …
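The combination the abstract describes — adversarial training to scrub a sensitive attribute from a learned representation, plus differential-privacy-style noise on that representation — can be sketched roughly as follows. This is a minimal toy illustration of the general idea (a gradient-reversal game between an encoder and an adversary that tries to predict the sensitive attribute, with Gaussian noise added to the representation), not the paper's actual FEDERATE method; all data, names, and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs where one dimension correlates with a binary
# sensitive attribute s (a made-up setup, not the paper's datasets).
n = 200
s = rng.integers(0, 2, n)
x = rng.normal(size=(n, 2))
x[:, 0] += 2.0 * s  # dimension 0 leaks the sensitive attribute

W = rng.normal(scale=0.1, size=(2, 2))  # linear "encoder" weights
a = rng.normal(scale=0.1, size=2)       # logistic adversary weights

sigma = 0.5  # scale of the Gaussian noise added to the representation
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

for _ in range(200):
    # Noisy representation: the noise plays the role of the
    # differential-privacy mechanism in this sketch.
    z = x @ W + rng.normal(scale=sigma, size=(n, 2))
    p = sigmoid(z @ a)  # adversary's predicted probability of s = 1

    # Adversary descends its own cross-entropy loss on predicting s.
    grad_a = z.T @ (p - s) / n
    # Encoder gets the *reversed* gradient: it ascends the adversary's
    # loss, pushing information about s out of the representation.
    grad_W = -(x.T @ ((p - s)[:, None] * a[None, :])) / n

    a -= lr * grad_a
    W -= lr * grad_W

# Measure how well the adversary can still recover s from the
# (noise-free) encoded representation after training.
z = x @ W
adv_acc = float(((sigmoid(z @ a) > 0.5) == s).mean())
```

In a realistic version the encoder would also receive a task-loss gradient (so the representation stays useful for the downstream prediction), and the noise scale would be calibrated to a formal privacy budget rather than chosen by hand.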

