July 14, 2022, 1:10 a.m. | Prashant Bhat, Bahram Zonooz, Elahe Arani

cs.LG updates on arXiv.org

Continual learning (CL) over non-stationary data streams remains one of the
long-standing challenges in deep neural networks (DNNs) as they are prone to
catastrophic forgetting. CL models can benefit from self-supervised
pre-training as it enables learning more generalizable task-agnostic features.
However, the effect of self-supervised pre-training diminishes as the length of
task sequences increases. Furthermore, the domain shift between the
pre-training data distribution and the task distribution reduces the
generalizability of the learned representations. To address these limitations,
we propose Task …

Tags: arxiv, consolidation, continual learning, cs.LG, representation
