Aug. 16, 2022, 1:12 a.m. | Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Yin Tat Lee, Abhradeep Guha Thakurta

stat.ML updates on arXiv.org

Large pretrained models can be privately fine-tuned to achieve performance
approaching that of non-private models. A common theme in these results is the
surprising observation that high-dimensional models can achieve favorable
privacy-utility trade-offs. This seemingly contradicts known results on the
model-size dependence of differentially private convex learning and raises the
following research question: When does the performance of differentially
private learning not degrade with increasing model size? We identify that the
magnitude of gradients projected onto subspaces is a key …
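The truncated final sentence points at the quantity the abstract highlights: how large gradients are once projected onto a low-dimensional subspace. As a rough illustration of that measurement (not the paper's actual method), the NumPy sketch below builds a hypothetical batch of per-example gradients, estimates a top-k gradient subspace via SVD, and compares the norm of the mean gradient to the norm of its projection; the dimensions, variable names, and random "gradients" are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-example gradients of a d-dimensional model
# over a batch of n examples (real gradients would come from a training step).
d, n, k = 1000, 64, 10
grads = rng.normal(size=(n, d))

# Estimate a k-dimensional subspace from the gradients themselves: the top-k
# right singular vectors of the centered gradient matrix.
_, _, vt = np.linalg.svd(grads - grads.mean(axis=0), full_matrices=False)
basis = vt[:k]  # (k, d): orthonormal rows spanning the subspace

# Project the mean gradient onto that subspace and compare magnitudes.
mean_grad = grads.mean(axis=0)
projected = basis.T @ (basis @ mean_grad)

print("full gradient norm:       ", np.linalg.norm(mean_grad))
print("projected gradient norm:  ", np.linalg.norm(projected))
print("fraction of norm captured:",
      np.linalg.norm(projected) / np.linalg.norm(mean_grad))
```

On isotropic random "gradients" like these, the captured fraction is small; the abstract's claim concerns how this fraction behaves for real fine-tuning gradients as model size grows.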
