March 25, 2024, 4:42 a.m. | Khanh Doan, Quyen Tran, Tung Lam Tran, Tuan Nguyen, Dinh Phung, Trung Le

cs.LG updates on arXiv.org

arXiv:2312.06710v3 Announce Type: replace
Abstract: Mitigating catastrophic forgetting is a key hurdle in continual learning. Deep Generative Replay (GR) addresses it by generating samples from prior tasks to reinforce the model's memory, using generative models ranging from Generative Adversarial Networks (GANs) to the more recent Diffusion Models (DMs). A major issue is that the quality of the generated data deteriorates relative to the original, because the generator continually self-learns from its own outputs. This degradation can lead to …
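
For readers unfamiliar with GR, below is a minimal PyTorch-style sketch of one generative-replay training step: real data from the current task is mixed with synthetic samples drawn from a frozen generator of previous tasks, pseudo-labelled by the frozen previous classifier. The old_generator.sample interface, the pseudo-labelling scheme, and the replay_ratio knob are illustrative assumptions for the sketch, not the paper's exact method.

import torch
import torch.nn.functional as F

def train_task_with_replay(classifier, optimizer, real_loader,
                           old_generator=None, old_classifier=None,
                           replay_ratio=1.0):
    """One pass over the current task, rehearsing prior tasks via generative replay.

    old_generator / old_classifier are frozen copies trained on previous tasks;
    their names and the .sample() interface are hypothetical, for illustration only.
    """
    classifier.train()
    for x_real, y_real in real_loader:
        optimizer.zero_grad()

        # Loss on real data from the current task.
        loss = F.cross_entropy(classifier(x_real), y_real)

        if old_generator is not None:
            # Replay: draw synthetic samples of past tasks from the frozen
            # generator and label them with the frozen previous classifier.
            with torch.no_grad():
                n_replay = max(1, int(replay_ratio * x_real.size(0)))
                x_replay = old_generator.sample(n_replay)          # assumed interface
                y_replay = old_classifier(x_replay).argmax(dim=1)  # pseudo-labels
            loss = loss + F.cross_entropy(classifier(x_replay), y_replay)

        loss.backward()
        optimizer.step()

The quality issue the abstract raises arises in this loop: each new generator is trained partly on the previous generator's outputs, so generation errors compound across tasks.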
