Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis
April 22, 2024, 4:42 a.m. | Yufan Li, Subhabrata Sen, Ben Adlam
cs.LG updates on arXiv.org (arxiv.org)
Abstract: In the transfer learning paradigm, models learn useful representations (or features) during a data-rich pretraining stage and then use the pretrained representation to improve model performance on data-scarce downstream tasks. In this work, we explore transfer learning with the goal of optimizing downstream performance. We introduce a simple linear model that takes an arbitrary pretrained feature transform as input. We derive exact asymptotics of the downstream risk and its fine-grained bias-variance decomposition. Our finding suggests …
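The excerpt does not spell out the paper's exact asymptotic setup, but the setting it describes (a linear downstream model fit on top of a frozen pretrained feature transform, with the test risk split into bias and variance) can be illustrated with a small simulation. The sketch below is an illustrative assumption, not the authors' construction: the feature map (a random linear projection), the ridge penalty, the Gaussian data, and the teacher weights are all stand-ins, and the bias-variance split is taken empirically over resampled training sets rather than derived in closed form.

```python
# Minimal sketch (assumed setup, not the paper's exact model):
# ridge regression on frozen "pretrained" features, with an empirical
# bias-variance decomposition of the downstream test risk.
import numpy as np

rng = np.random.default_rng(0)

d, k, n_train, n_test, n_trials = 50, 30, 100, 2000, 200
W_pre = rng.normal(size=(d, k)) / np.sqrt(d)   # assumed pretrained feature transform
beta_star = rng.normal(size=d) / np.sqrt(d)    # assumed ground-truth downstream weights
noise, ridge = 0.1, 1e-2

def phi(X):
    """Fixed (frozen) pretrained feature map: here a random linear projection."""
    return X @ W_pre

X_test = rng.normal(size=(n_test, d))
y_test_clean = X_test @ beta_star
F_test = phi(X_test)

preds = np.zeros((n_trials, n_test))
for t in range(n_trials):
    X = rng.normal(size=(n_train, d))
    y = X @ beta_star + noise * rng.normal(size=n_train)
    F = phi(X)
    # Downstream linear model: ridge regression on the pretrained features.
    w = np.linalg.solve(F.T @ F + ridge * np.eye(k), F.T @ y)
    preds[t] = F_test @ w

mean_pred = preds.mean(axis=0)
bias2 = np.mean((mean_pred - y_test_clean) ** 2)   # squared bias of the average predictor
variance = np.mean(preds.var(axis=0))              # variance over resampled training sets
risk = np.mean((preds - y_test_clean) ** 2)        # total excess test risk
print(f"bias^2={bias2:.4f}  var={variance:.4f}  bias^2+var={bias2 + variance:.4f}  risk={risk:.4f}")
```

In this toy decomposition the printed risk equals the sum of the squared bias and variance terms by construction; the paper instead derives the corresponding quantities exactly in an asymptotic regime for an arbitrary pretrained feature transform.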