Web: http://arxiv.org/abs/2206.08101

June 17, 2022, 1:10 a.m. | Sungmin Cha, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

cs.LG updates on arXiv.org

Continual learning (CL) aims to learn from sequentially arriving tasks
without forgetting previous ones. While CL algorithms have typically pursued
higher average test accuracy across all tasks learned so far, continuously
learning useful representations is critical for successful generalization
and downstream transfer. To measure representational quality, we re-train only
the output layers using a small balanced dataset covering all tasks, and
evaluate the average accuracy without predictions biased toward the current
task. We also test on several …
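The evaluation protocol described above, freezing the learned backbone and re-training only the output layer on a small class-balanced dataset, is a form of linear probing. A rough sketch of that idea is below; the function names, the plain softmax head trained by full-batch gradient descent, and the use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def balanced_subset(feats, labels, per_class, rng):
    """Sample an equal number of examples per class, mirroring the
    'small balanced dataset for all the tasks' used for re-training."""
    idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], per_class, replace=False)
        for c in np.unique(labels)
    ])
    return feats[idx], labels[idx]

def train_linear_probe(feats, labels, n_classes, lr=0.5, epochs=200):
    """Fit only a softmax output layer on frozen backbone features
    (hypothetical minimal trainer: full-batch gradient descent)."""
    W = np.zeros((feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                    # one-hot targets
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(feats)                  # softmax cross-entropy grad
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def probe_accuracy(feats, labels, W, b):
    """Average accuracy of the re-trained head over all examples."""
    return float((np.argmax(feats @ W + b, axis=1) == labels).mean())
```

Because the head is re-fit on a balanced set, its accuracy reflects the quality of the frozen representation itself rather than any bias the continually trained classifier has toward the most recent task.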

