Web: http://arxiv.org/abs/2206.07842

June 17, 2022, 1:10 a.m. | Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang

cs.LG updates on arXiv.org arxiv.org

Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge. Storing historical data for replay can mitigate this catastrophic forgetting, but it incurs memory overhead and imbalanced prediction updates. To address this dilemma, we propose to leverage "free" external unlabeled data queried in continual learning. We first present a CIL with Queried Unlabeled Data (CIL-QUD) scheme, in which we store only a handful of past training …
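To make the replay idea concrete: the abstract describes keeping only "a handful" of past training examples per class for rehearsal. The snippet below is a hypothetical minimal sketch of such a class-balanced exemplar buffer (the class name, `per_class` budget, and reservoir-sampling policy are illustrative assumptions, not the paper's actual method):

```python
import random

class ClassBalancedReplayBuffer:
    """Hypothetical sketch of storing a handful of past examples per class.
    Not the paper's actual CIL-QUD algorithm; the sampling policy is an
    assumption for illustration."""

    def __init__(self, per_class=5, seed=0):
        self.per_class = per_class          # exemplar budget per class
        self.rng = random.Random(seed)
        self.exemplars = {}                 # label -> stored examples
        self.seen = {}                      # label -> total examples observed

    def add(self, example, label):
        """Observe one (example, label) pair from the training stream."""
        self.seen[label] = self.seen.get(label, 0) + 1
        bucket = self.exemplars.setdefault(label, [])
        if len(bucket) < self.per_class:
            bucket.append(example)
        else:
            # Reservoir sampling keeps a uniform random subset of the stream
            # while never exceeding the per-class memory budget.
            j = self.rng.randrange(self.seen[label])
            if j < self.per_class:
                bucket[j] = example

    def sample(self, k):
        """Draw up to k stored exemplars for rehearsal during a new task."""
        pool = [(x, y) for y, xs in self.exemplars.items() for x in xs]
        return self.rng.sample(pool, min(k, len(pool)))
```

Because the buffer is capped per class rather than globally, rehearsal batches stay roughly balanced across old classes, which speaks to the "imbalanced prediction updates" concern the abstract raises.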

