March 14, 2024, 4:42 a.m. | Zhanxin Gao, Jun Cen, Xiaobin Chang

cs.LG updates on arXiv.org

arXiv:2403.08568v1 Announce Type: cross
Abstract: Continual learning empowers models to adapt autonomously to ever-changing environments or data streams without forgetting old knowledge. Prompt-based approaches build on frozen pre-trained models to learn task-specific prompts and classifiers efficiently. Existing prompt-based methods, however, are inconsistent between training and testing, which limits their effectiveness. Two types of inconsistency are revealed. Test predictions are made from all classifiers, while training focuses only on the current task's classifier without holistic alignment, leading to Classifier inconsistency. …
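To make the classifier inconsistency concrete, here is a minimal PyTorch sketch of the setup the abstract describes: a frozen backbone with one learnable prompt and one classifier head per task, where training uses only the current task's head but testing concatenates logits from all heads. All names (`PromptedModel`, `train_logits`, `test_logits`, the additive prompt) are hypothetical simplifications for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PromptedModel(nn.Module):
    def __init__(self, feat_dim=128, classes_per_task=10, num_tasks=5):
        super().__init__()
        # Frozen pre-trained backbone (stand-in: a fixed linear projection).
        self.backbone = nn.Linear(32, feat_dim)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # One learnable prompt and one classifier head per task.
        self.prompts = nn.ParameterList(
            nn.Parameter(torch.zeros(feat_dim)) for _ in range(num_tasks)
        )
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, classes_per_task) for _ in range(num_tasks)
        )

    def features(self, x, task_id):
        # Adding the prompt to frozen features is a simplification of
        # prepending prompt tokens to a transformer.
        return self.backbone(x) + self.prompts[task_id]

    def train_logits(self, x, task_id):
        # Training: only the current task's head produces the logits used
        # for the loss, so heads are never aligned against each other.
        return self.heads[task_id](self.features(x, task_id))

    def test_logits(self, x, task_id):
        # Testing: predictions come from *all* heads concatenated, even
        # though the heads were never trained jointly -- the mismatch the
        # abstract calls Classifier inconsistency. (task_id is passed here
        # only to keep the sketch simple.)
        f = self.features(x, task_id)
        return torch.cat([head(f) for head in self.heads], dim=-1)

model = PromptedModel()
x = torch.randn(4, 32)
print(model.train_logits(x, task_id=0).shape)  # (4, 10): current head only
print(model.test_logits(x, task_id=0).shape)   # (4, 50): all heads at once
```

The gap between `train_logits` and `test_logits` is the point: logit scales across independently trained heads are not comparable, so an argmax over the concatenated output can favor the wrong task's head.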
