Web: http://arxiv.org/abs/2206.08492

June 20, 2022, 1:12 a.m. | Jinlin Xiang, Eli Shlizerman

stat.ML updates on arXiv.org

When learning new tasks in a sequential manner, deep neural networks tend to
forget tasks that they previously learned, a phenomenon called catastrophic
forgetting. Class incremental learning methods aim to address this problem by
keeping a memory of a few exemplars from previously learned tasks, and
distilling knowledge from them. However, existing methods struggle to balance
performance across classes, since they typically overfit the model to the
latest task. In our work, we propose to address these challenges with …
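The distillation step the abstract refers to can be illustrated with a minimal sketch. This is not the authors' method (the abstract is truncated before their approach is described); it is only the generic knowledge-distillation loss commonly used in class-incremental learning, where the frozen model from previous tasks acts as a teacher and the current model as a student. The function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis; higher T softens
    # the distribution so that "dark knowledge" in small logits survives.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the softened teacher distribution (soft
    # targets from the old model) and the student's softened predictions,
    # scaled by T^2 so gradients keep a comparable magnitude across T.
    p = softmax(teacher_logits, T)  # old-task model on stored exemplars
    q = softmax(student_logits, T)  # current model on the same exemplars
    return float(-(p * np.log(q + 1e-12)).sum(axis=-1).mean() * T**2)
```

In a typical incremental-learning loop, this term is computed on the stored exemplars and added to the ordinary classification loss on the new task's data, pulling the updated model back toward its old predictions and thereby mitigating catastrophic forgetting.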

Tags: arxiv, incremental, kernel, learning, lg
