On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift
June 13, 2024, 4:49 a.m. | Pratiksha Thaker, Amrith Setlur, Zhiwei Steven Wu, Virginia Smith
stat.ML updates on arXiv.org
Abstract: Public pretraining is a promising approach to improving differentially private model training. However, recent work has noted that many positive research results studying this paradigm only consider in-distribution tasks and may not apply to settings where there is distribution shift between the pretraining and finetuning data -- a scenario that is likely when finetuning on private tasks, due to the sensitive nature of the data. In this work, we show empirically across three tasks that even …
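The paradigm the abstract studies is typically realized as private linear probing: features are extracted from a publicly pretrained encoder at no privacy cost, and only a small head is trained on the private data with DP-SGD (per-example gradient clipping plus Gaussian noise). Below is a minimal sketch of that recipe, not the authors' code; the encoder choice, head size, and all hyperparameters are illustrative assumptions.

```python
# Sketch: private transfer learning on public representations.
# Freeze a publicly pretrained encoder, train only a linear head on
# private data with DP-SGD. Model and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet18(weights="IMAGENET1K_V1")  # public representation
encoder.fc = torch.nn.Identity()                    # expose 512-d features
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)                         # no private gradients here

head = torch.nn.Linear(512, 10)   # private task: 10 classes (assumed)
clip_norm, noise_mult, lr = 1.0, 1.1, 0.5

def dp_sgd_step(x_priv, y_priv):
    """One DP-SGD step on the linear head over frozen public features."""
    with torch.no_grad():
        feats = encoder(x_priv)                     # public features, free of privacy cost
    grads = []
    for xi, yi in zip(feats, y_priv):               # per-example gradients
        head.zero_grad()
        loss = F.cross_entropy(head(xi[None]), yi[None])
        loss.backward()
        g = torch.cat([p.grad.flatten() for p in head.parameters()])
        scale = clip_norm / max(g.norm().item(), clip_norm)
        grads.append(g * scale)                     # clip to norm <= clip_norm
    g_sum = torch.stack(grads).sum(0)
    g_sum += noise_mult * clip_norm * torch.randn_like(g_sum)  # Gaussian mechanism
    g_avg = g_sum / len(grads)
    with torch.no_grad():                           # apply averaged noisy update
        offset = 0
        for p in head.parameters():
            n = p.numel()
            p -= lr * g_avg[offset:offset + n].view_as(p)
            offset += n

# Usage with a stand-in private batch:
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 10, (8,))
dp_sgd_step(x, y)
```

A real implementation would track the cumulative privacy loss with an accountant (e.g., via a library such as Opacus) rather than hand-rolling the mechanism; the point of the sketch is the structure the paper analyzes: the public representation is computed without privacy cost, and only the small head consumes the private-gradient budget.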