March 19, 2024, 4:43 a.m. | Jisu Han, Jaemin Na, Wonjun Hwang

cs.LG updates on arXiv.org

arXiv:2403.11537v1 Announce Type: cross
Abstract: Continual learning aims to refine model parameters for new tasks while retaining knowledge from previous tasks. Recently, prompt-based learning has emerged as a way to leverage pre-trained models, prompting them to learn subsequent tasks without relying on a rehearsal buffer. Although this approach has demonstrated outstanding results, existing methods depend on a preceding task-selection process to choose appropriate prompts. However, imperfect task selection can negatively impact performance, particularly in scenarios where the …
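
For context, the task-selection step the abstract refers to is commonly implemented as key-query matching over a pool of learnable prompts, as in L2P-style methods: a frozen pre-trained encoder produces a query feature, which is matched against learnable keys to pick which prompts to prepend. The sketch below illustrates only that generic selection step; the class name, dimensions, and top-k matching scheme are illustrative assumptions, not the method proposed in this paper.

```python
import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    """Minimal sketch of a prompt pool with key-query selection (L2P-style).

    Hypothetical illustration only; names and sizes are assumptions.
    """

    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=5):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query):
        # query: [B, dim] -- e.g. the [CLS] feature from a frozen pre-trained encoder.
        # Cosine similarity between each query and every prompt key: [B, pool_size].
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        # The "task selection" step: pick the top-k best-matching prompts per sample.
        idx = sim.topk(self.top_k, dim=1).indices   # [B, top_k]
        selected = self.prompts[idx]                # [B, top_k, prompt_len, dim]
        # Flatten into a token sequence to prepend to the encoder's input tokens.
        return selected.reshape(query.shape[0], -1, query.shape[-1])

# Usage sketch: tokens = torch.cat([pool(query), patch_embeddings], dim=1)
```

If this matching is imperfect, the wrong prompts are prepended at inference time, which is the failure mode the abstract highlights.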
