On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?
March 1, 2024, 5:43 a.m. | Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Differentially private (DP) machine learning pipelines typically involve a two-phase process: non-private pre-training on a public dataset, followed by fine-tuning on private data using DP optimization techniques. In the DP setting, it has been observed that full fine-tuning may not always yield the best test accuracy, even for in-distribution data. This paper (1) analyzes the training dynamics of DP linear probing (LP) and full fine-tuning (FT), and (2) explores the phenomenon of sequential fine-tuning, starting …
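To make the two strategies concrete, here is a minimal sketch (not the authors' code) of the pipeline the abstract describes: a pre-trained backbone is fine-tuned on private data with DP-SGD, either by linear probing (LP), which trains only a new classification head on a frozen backbone, or by full fine-tuning (FT), which updates all weights. It uses PyTorch with Opacus for DP-SGD; the toy MLP backbone, synthetic data, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine


def make_model(full_finetune: bool) -> nn.Module:
    # Toy stand-in for a non-privately pre-trained encoder plus a new head.
    backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    head = nn.Linear(64, 10)
    if not full_finetune:
        # Linear probing: freeze the backbone, train only the head.
        for p in backbone.parameters():
            p.requires_grad = False
    return nn.Sequential(backbone, head)


def dp_train(model: nn.Module, loader: DataLoader, epochs: int = 1) -> nn.Module:
    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=0.1
    )
    # Opacus wraps the model, optimizer, and loader so each step performs
    # DP-SGD: per-sample gradient clipping followed by Gaussian noise.
    model, optimizer, loader = PrivacyEngine().make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,  # controls the privacy/utility trade-off
        max_grad_norm=1.0,     # per-sample gradient clipping bound
    )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()  # clipped, noised DP-SGD update
    return model


# Synthetic private dataset; in practice this is the sensitive fine-tuning data.
data = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=32)
lp_model = dp_train(make_model(full_finetune=False), loader)  # linear probing
ft_model = dp_train(make_model(full_finetune=True), loader)   # full fine-tuning
```

Running LP first and then FT from the LP solution yields the sequential (LP-then-FT) strategy the abstract begins to describe; under this sketch that is just a second `dp_train` call after unfreezing the backbone.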