Web: http://arxiv.org/abs/2206.07737

June 17, 2022, 1:10 a.m. | Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, Jesse C. Cresswell

cs.LG updates on arXiv.org

As machine learning becomes more widespread throughout society, aspects
including data privacy and fairness must be carefully considered, and are
crucial for deployment in highly regulated industries. Unfortunately, the
application of privacy enhancing technologies can worsen unfair tendencies in
models. In particular, one of the most widely used techniques for private model
training, differentially private stochastic gradient descent (DPSGD),
frequently intensifies disparate impact on groups within data. In this work we
study the fine-grained causes of unfairness in DPSGD and …
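For context, the core DPSGD mechanism the abstract refers to is per-example gradient clipping followed by Gaussian noise addition. The sketch below is illustrative only (function and parameter names are my own, not from the paper); it shows why examples with large gradients, often those from underrepresented groups, lose the most signal to clipping.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DPSGD gradient aggregation step (illustrative sketch):
    clip each per-example gradient to L2 norm `clip_norm`, sum,
    add Gaussian noise with std `noise_multiplier * clip_norm`,
    then average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Gradients larger than clip_norm are scaled down; this is the
        # step that disproportionately shrinks large (often minority-group)
        # gradients and can worsen disparate impact.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: a gradient of norm 5 clipped to norm 1 (noise disabled
# to make the clipping effect visible).
rng = np.random.default_rng(0)
g = dpsgd_step([np.array([3.0, 4.0])], clip_norm=1.0,
               noise_multiplier=0.0, rng=rng)
```

With noise disabled, the single gradient `[3, 4]` (norm 5) is rescaled to `[0.6, 0.8]` (norm 1), illustrating how clipping attenuates large per-example gradients before noise is even added.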

