April 9, 2024, 4:42 a.m. | Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee

cs.LG updates on arXiv.org

arXiv:2404.05555v1 Announce Type: new
Abstract: One objective of continual learning is to prevent catastrophic forgetting when learning multiple tasks sequentially, and existing solutions have been driven by the conceptualization of the plasticity-stability dilemma. However, the convergence of continual learning on each sequential task has been less studied so far. In this paper, we provide a convergence analysis of memory-based continual learning with stochastic gradient descent and empirical evidence that training current tasks causes the cumulative degradation of previous …
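To ground the setting the abstract refers to, below is a minimal illustrative sketch of memory-based (replay) continual learning trained with SGD: each gradient step on the current task is mixed with a small batch drawn from a bounded memory of past-task examples. This is a generic pattern, not the paper's analyzed algorithm; `train_with_replay`, the buffer size, and the batch sizes are all assumptions for illustration.

```python
# Minimal sketch of memory-based continual learning with SGD.
# Not the authors' algorithm; hyperparameters are illustrative assumptions.
import random
import torch
import torch.nn.functional as F


def train_with_replay(model, tasks, memory_size=200, replay_batch=32,
                      lr=0.01, steps_per_task=500):
    """tasks: iterable of (inputs, labels) tensor pairs, one pair per task."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    memory = []  # bounded buffer of (x, y) examples from earlier tasks
    seen = 0     # total examples offered to the buffer (for reservoir sampling)

    for task_x, task_y in tasks:
        n = task_x.size(0)
        for _ in range(steps_per_task):
            idx = torch.randint(0, n, (32,))
            x, y = task_x[idx], task_y[idx]

            # Mix in replayed examples so each SGD step's gradient
            # also reflects previous tasks, countering forgetting.
            if memory:
                mem = random.sample(memory, min(replay_batch, len(memory)))
                mx = torch.stack([m[0] for m in mem])
                my = torch.stack([m[1] for m in mem])
                x, y = torch.cat([x, mx]), torch.cat([y, my])

            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()

        # Reservoir sampling keeps a uniform sample over everything seen.
        for i in range(n):
            seen += 1
            if len(memory) < memory_size:
                memory.append((task_x[i], task_y[i]))
            else:
                j = random.randrange(seen)
                if j < memory_size:
                    memory[j] = (task_x[i], task_y[i])
    return model
```

The key design choice in this family of methods is that the replayed batch biases each SGD update toward solutions that remain accurate on earlier tasks, which is exactly the per-task training dynamic whose convergence the paper sets out to analyze.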

