March 28, 2024, 4:41 a.m. | Wenzhuo Liu, Fei Zhu, Cheng-Lin Liu

cs.LG updates on arXiv.org

arXiv:2403.18266v1 Announce Type: new
Abstract: Self-supervised learning (SSL) has emerged as an effective paradigm for deriving general representations from vast amounts of unlabeled data. However, as real-world applications continually integrate new content, the high computational and resource demands of SSL necessitate continual learning rather than complete retraining. This poses a challenge in striking a balance between stability and plasticity when adapting to new information. In this paper, we employ Centered Kernel Alignment for quantitatively analyzing model stability and plasticity, revealing …
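The abstract's quantitative tool, Centered Kernel Alignment (CKA), measures how similar two sets of representations are for the same inputs, which is what makes it usable as a stability/plasticity probe when comparing a model before and after continual updates. Below is a minimal sketch of linear CKA (the standard Kornblith et al. formulation), not the authors' implementation; the feature arrays and shapes are illustrative assumptions.

```python
import numpy as np


def center_gram(gram: np.ndarray) -> np.ndarray:
    """Double-center a Gram matrix: H @ K @ H with H = I - (1/n) * ones."""
    n = gram.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ gram @ h


def linear_cka(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Returns a value in [0, 1]; higher means the two representations are
    more similar on these samples.
    """
    gram_a = center_gram(feats_a @ feats_a.T)
    gram_b = center_gram(feats_b @ feats_b.T)
    hsic = np.sum(gram_a * gram_b)            # unnormalized HSIC estimate
    norm_a = np.sqrt(np.sum(gram_a * gram_a))
    norm_b = np.sqrt(np.sum(gram_b * gram_b))
    return float(hsic / (norm_a * norm_b))


if __name__ == "__main__":
    # Hypothetical example: features for the same 256 inputs from a model
    # before continual updates (old) and after them (new).
    rng = np.random.default_rng(0)
    old_feats = rng.normal(size=(256, 128))
    new_feats = old_feats + 0.1 * rng.normal(size=(256, 128))
    print(f"CKA(old, new) = {linear_cka(old_feats, new_feats):.3f}")
```

In a continual-SSL setting, a high CKA between old and updated representations indicates stability (little forgetting), while a low CKA indicates that the representation has shifted, i.e. plasticity.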

