March 12, 2024, 4:42 a.m. | Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi

cs.LG updates on arXiv.org

arXiv:2403.06871v1 Announce Type: new
Abstract: Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization. However, a rigorous understanding of how the representation function learned on an unlabeled dataset affects the generalization of the fine-tuned model is lacking. Existing theoretical research does not adequately account for the heterogeneity of the distributions and tasks across the pre-training and fine-tuning stages. To bridge this gap, this paper introduces a novel theoretical framework that illuminates the critical …
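To make the two-stage paradigm studied in the abstract concrete, here is a minimal PyTorch sketch of unsupervised pre-training followed by fine-tuning. The autoencoder objective, toy data, architecture, and hyperparameters are illustrative assumptions, not the paper's setting; the point is only the division into a representation function learned on unlabeled data and a task head fine-tuned on labeled data.

```python
# Sketch of the pre-train-then-fine-tune pipeline. All choices here
# (autoencoder pre-training, synthetic data, sizes) are assumptions
# for illustration, not the paper's framework.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Unlabeled data for pre-training and labeled data for fine-tuning;
# the paper emphasizes these may come from heterogeneous distributions/tasks.
x_unlabeled = torch.randn(512, 32)
x_labeled = torch.randn(128, 32)
y_labeled = torch.randint(0, 2, (128,))

# The representation function learned during pre-training.
encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
decoder = nn.Linear(8, 32)  # used only for the unsupervised objective

# Stage 1: unsupervised pre-training via reconstruction loss.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_unlabeled)), x_unlabeled)
    loss.backward()
    opt.step()

# Stage 2: fine-tune the encoder together with a task head on labels.
head = nn.Linear(8, 2)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x_labeled)), y_labeled)
    loss.backward()
    opt.step()
```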

