In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning

Web: http://arxiv.org/abs/2209.10732

Sept. 23, 2022, 1:11 a.m. | Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot

cs.LG updates on arXiv.org

When learning from sensitive data, care must be taken to ensure that training
algorithms address privacy concerns. The canonical Private Aggregation of
Teacher Ensembles, or PATE, computes output labels by aggregating the
predictions of a (possibly distributed) collection of teacher models via a
voting mechanism. The mechanism adds noise to attain a differential privacy
guarantee with respect to the teachers' training data. In this work, we observe
that this use of noise, which makes PATE predictions stochastic, enables new
forms …
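To make the mechanism the abstract describes concrete, here is a minimal sketch of PATE's noisy vote aggregation in Python, assuming the Laplace-noise variant from the original PATE work. The function name, the `gamma` parameter, and the example setup are illustrative choices, not taken from this paper.

```python
import numpy as np

def noisy_aggregate(teacher_labels, num_classes, gamma=0.05, rng=None):
    """Sketch of PATE's Laplace-noise vote aggregation (illustrative names).

    teacher_labels: int array of shape (n_teachers,), each teacher's
        predicted class for a single query.
    gamma: inverse noise scale; Laplace noise with scale 1/gamma is added
        to each vote count, with smaller gamma giving more noise and a
        stronger differential privacy guarantee at the cost of accuracy.
    """
    rng = rng if rng is not None else np.random.default_rng()
    votes = np.bincount(teacher_labels, minlength=num_classes)
    noisy_votes = votes + rng.laplace(scale=1.0 / gamma, size=num_classes)
    # The argmax of the noisy counts is the released label. Re-querying the
    # same input can return different labels; this is the stochasticity the
    # abstract refers to.
    return int(np.argmax(noisy_votes))

# Example: 250 teachers voting over 10 classes for a single query.
rng = np.random.default_rng(0)
teacher_labels = rng.integers(0, 10, size=250)
print(noisy_aggregate(teacher_labels, num_classes=10, gamma=0.05, rng=rng))
```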

Tags: arxiv, differential privacy, ensemble, privacy
