Nov. 15, 2022, 2:15 a.m. | Quentin Jodelet, Xin Liu, Tsuyoshi Murata

cs.CV updates on arXiv.org

When incrementally trained on new classes, deep neural networks are subject
to catastrophic forgetting, which leads to an extreme deterioration of their
performance on the old classes while learning the new ones. Keeping a small
memory containing a few samples from past classes has been shown to be an
effective way to mitigate catastrophic forgetting. However, due to the limited
size of the replay memory, there is a large imbalance between the number of
samples for the new and the old classes …
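The rehearsal setup the abstract describes can be sketched as a class-balanced replay buffer. This is a minimal illustration, not the authors' method: the `ReplayMemory` class, its `budget` parameter, and the random per-class selection are assumptions made for the example (published approaches often use herding or other selection rules instead). It also makes the imbalance concrete: with a budget of, say, 2,000 exemplars spread over 100 old classes, only 20 stored samples per old class compete with the full training set of each new class.

```python
import random
from collections import defaultdict

class ReplayMemory:
    """Fixed-budget replay memory keeping a class-balanced set of
    past exemplars (random selection here; herding is also common)."""

    def __init__(self, budget):
        self.budget = budget            # total number of stored exemplars
        self.store = defaultdict(list)  # class label -> stored samples

    def update(self, samples, labels):
        # Add the new class data, then shrink every class to an equal
        # share of the budget so old and new classes keep equal footprints.
        for x, y in zip(samples, labels):
            self.store[y].append(x)
        if not self.store:
            return
        per_class = max(1, self.budget // len(self.store))
        for y in self.store:
            if len(self.store[y]) > per_class:
                self.store[y] = random.sample(self.store[y], per_class)

    def replay_batch(self, k):
        # Draw a mini-batch of stored (sample, label) pairs to mix into
        # each training step alongside the abundant new-class data.
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```

In a typical rehearsal loop, each optimization step would concatenate a `replay_batch` with the current new-class mini-batch before computing the loss, which is exactly where the imbalance between old and new samples shows up.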

Tags: arxiv, cross-entropy, entropy, incremental, memory, softmax
