Nov. 1, 2022, 1:11 a.m. | Andrius Ovsianas, Jason Ramapuram, Dan Busbridge, Eeshan Gunesh Dhekane, Russ Webb

cs.LG updates on arXiv.org

Self-supervised representation learning (SSL) methods provide an effective
label-free initial condition for fine-tuning on downstream tasks. However, in
numerous realistic scenarios, the downstream task might be biased with respect
to the target label distribution. This in turn moves the learned fine-tuned
model posterior away from the initial (label) bias-free self-supervised model
posterior. In this work, we re-interpret SSL fine-tuning under the lens of
Bayesian continual learning and consider regularization through the Elastic
Weight Consolidation (EWC) framework. We demonstrate that self-regularization
against …
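The abstract describes regularizing fine-tuning toward the self-supervised initialization through the EWC framework. A minimal PyTorch-style sketch of such an EWC-style penalty is shown below; the diagonal Fisher estimate, the weighting coefficient `lam`, and the model/loader names are illustrative assumptions, not the authors' actual formulation or code.

```python
import torch
import torch.nn.functional as F

def estimate_diag_fisher(model, loader, device="cpu", n_batches=10):
    """Diagonal Fisher estimate from squared gradients at the
    (pre-trained) parameters, averaged over a few batches."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    used = 0
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        logits = model(x.to(device))
        loss = F.cross_entropy(logits, y.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        used += 1
    return {n: f / max(used, 1) for n, f in fisher.items()}

def ewc_penalty(model, anchor_params, fisher):
    """Quadratic penalty pulling parameters toward the SSL initialization,
    weighted by the Fisher estimate (the core EWC regularizer)."""
    penalty = torch.zeros((), dtype=torch.float32)
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return penalty

# Hypothetical fine-tuning step: `ssl_model`, `train_loader`, and `lam` are assumed names.
# anchor = {n: p.detach().clone() for n, p in ssl_model.named_parameters()}
# fisher = estimate_diag_fisher(ssl_model, train_loader)
# for x, y in train_loader:
#     loss = F.cross_entropy(ssl_model(x), y) + 0.5 * lam * ewc_penalty(ssl_model, anchor, fisher)
```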

arxiv, consolidation, robustness, self-supervised learning, supervised learning, transfer
