May 26, 2022, 1:11 a.m. | Yunhao Yang, Parham Gohari, Ufuk Topcu

stat.ML updates on arXiv.org arxiv.org

We study the privacy risks associated with training a neural network's weights
with self-supervised learning algorithms. Through empirical evidence, we show
that the fine-tuning stage, in which the network weights are updated with an
informative and often private dataset, is vulnerable to privacy
attacks. To address the vulnerabilities, we design a post-training
privacy-protection algorithm that adds noise to the fine-tuned weights and
propose a novel differential privacy mechanism that samples noise from the
logistic distribution. Compared to the …
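The mechanism described above, adding noise drawn from the logistic distribution to the fine-tuned weights after training, can be sketched roughly as follows. This is a minimal illustration only: the paper's actual noise calibration, privacy accounting, and `scale` parameter are not given in the excerpt, so the function name and scale value here are hypothetical.

```python
import numpy as np

def privatize_weights(weights, scale, rng=None):
    # Hypothetical post-training step: perturb fine-tuned weights with
    # zero-mean logistic noise (the distribution named in the abstract).
    # The correct `scale` would come from the paper's privacy analysis.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.logistic(loc=0.0, scale=scale, size=weights.shape)
    return weights + noise

# Example: perturb a small weight matrix with an arbitrary scale.
w = np.zeros((2, 3))
w_private = privatize_weights(w, scale=0.1, rng=np.random.default_rng(0))
```

Because the noise is added once, after fine-tuning, the training procedure itself is unchanged; only the released weights are randomized.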
