June 13, 2024, 4:49 a.m. | Pratiksha Thaker, Amrith Setlur, Zhiwei Steven Wu, Virginia Smith

stat.ML updates on arXiv.org

arXiv:2312.15551v3 Announce Type: replace-cross
Abstract: Public pretraining is a promising approach to improve differentially private model training. However, recent work has noted that many positive research results studying this paradigm only consider in-distribution tasks, and may not apply to settings where there is distribution shift between the pretraining and finetuning data -- a scenario that is likely when finetuning private tasks due to the sensitive nature of the data. In this work, we show empirically across three tasks that even …
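The paradigm the abstract describes, public pretraining followed by differentially private finetuning on sensitive data, can be illustrated with a minimal sketch. The sketch below assumes a frozen, publicly pretrained encoder and a linear probe trained with DP-SGD (per-example gradient clipping plus Gaussian noise); the encoder, data, and hyperparameters are stand-ins, not the paper's actual setup, and privacy accounting is omitted.

```python
# Minimal DP-SGD linear-probe sketch (illustrative only, not the paper's code).
# A "public" pretrained encoder is frozen; only a linear head is trained on the
# private data, with per-example gradient clipping and Gaussian noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dp_sgd_step(head, features, labels, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step on a batch of precomputed public features."""
    grads = [torch.zeros_like(p) for p in head.parameters()]
    for x, y in zip(features, labels):                  # per-example gradients
        head.zero_grad()
        loss = F.cross_entropy(head(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        per_ex = [p.grad.detach().clone() for p in head.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in per_ex))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # clip to clip_norm
        for acc, g in zip(grads, per_ex):
            acc += g * scale
    n = len(features)
    with torch.no_grad():
        for p, g in zip(head.parameters(), grads):
            noise = torch.randn_like(g) * noise_multiplier * clip_norm
            p -= lr * (g + noise) / n                   # noisy averaged gradient

# Hypothetical usage: `encoder` stands in for any frozen, publicly pretrained model.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64)).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

head = nn.Linear(64, 10)                                # private linear probe
x_priv = torch.randn(32, 1, 28, 28)                     # stand-in private batch
y_priv = torch.randint(0, 10, (32,))
with torch.no_grad():
    feats = encoder(x_priv)                             # public representations
dp_sgd_step(head, feats, y_priv)
```

Under distribution shift, the question the paper studies is whether such public representations still help the private finetuning stage; the sketch only shows the mechanics of the training setup.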

