May 23, 2022, 1:12 a.m. | Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson

cs.CV updates on arXiv.org

Deep learning is increasingly moving towards a transfer learning paradigm
whereby large foundation models are fine-tuned on downstream tasks, starting
from an initialization learned on the source task. But an initialization
contains relatively little information about the source task. Instead, we show
that we can learn highly informative posteriors from the source task, through
supervised or self-supervised approaches, which then serve as the basis for
priors that modify the whole loss surface on the downstream task. This simple
modular approach …
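
To make the idea concrete, here is a minimal sketch of the general recipe described above: instead of using the source-task solution only as an initialization, fit an (approximate) posterior over the source-task weights and reuse it as a prior term that reshapes the downstream loss surface. Everything below is illustrative, not the authors' exact method: the diagonal Gaussian fit from source-training checkpoints, the helper names, and the `prior_scale` weighting are all assumptions for the sketch.

```python
import torch
import torch.nn as nn

def fit_gaussian_posterior(checkpoints):
    """Fit a diagonal Gaussian over flattened source-task weight vectors
    (e.g. checkpoints collected along the source training trajectory).
    This is a simplified stand-in for a learned posterior."""
    stacked = torch.stack(checkpoints)          # (num_checkpoints, num_params)
    mean = stacked.mean(dim=0)
    var = stacked.var(dim=0) + 1e-6             # variance floor for stability
    return mean, var

def log_prior(params, mean, var):
    """Log-density of the diagonal Gaussian prior, up to an additive constant."""
    return -0.5 * ((params - mean) ** 2 / var).sum()

def downstream_loss(model, inputs, targets, mean, var, prior_scale=1.0):
    """Task loss minus the learned log-prior: the source-task posterior now
    modifies the whole downstream loss surface, not just the starting point."""
    params = torch.cat([p.view(-1) for p in model.parameters()])
    task_loss = nn.functional.cross_entropy(model(inputs), targets)
    return task_loss - prior_scale * log_prior(params, mean, var)

# Toy usage with hypothetical shapes:
model = nn.Linear(10, 3)
num_params = sum(p.numel() for p in model.parameters())
ckpts = [torch.randn(num_params) for _ in range(5)]    # stand-in source checkpoints
mean, var = fit_gaussian_posterior(ckpts)
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
loss = downstream_loss(model, x, y, mean, var, prior_scale=0.1)
loss.backward()
```

In this sketch `prior_scale` controls how strongly the source-task prior pulls the downstream solution toward regions that were plausible under the source posterior; setting it to zero recovers ordinary fine-tuning from an initialization alone.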

Tags: arxiv, bayesian, easy, learning, loss, transfer, transfer learning
