Rehearsal-Free Modular and Compositional Continual Learning for Language Models
April 2, 2024, 7:42 p.m. | Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze
cs.LG updates on arXiv.org (arxiv.org)
Abstract: Continual learning aims to incrementally acquire new knowledge without forgetting existing knowledge. To overcome catastrophic forgetting, methods are either rehearsal-based, i.e., they store data examples from previous tasks for replay, or they isolate the parameters dedicated to each task. However, rehearsal-based methods raise privacy and memory issues, and parameter-isolation continual learning does not consider interaction between tasks, thus hindering knowledge transfer. In this work, we propose MoCL, a rehearsal-free Modular and Compositional Continual Learning framework which …
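The abstract is cut off, but it names the two standard families of continual-learning methods and positions MoCL as modular and compositional. The sketch below illustrates the generic pattern those terms describe, not the paper's actual method: one small trainable module per task, frozen once its task is done, plus trainable composition weights that reuse earlier modules for knowledge transfer. All names (`TaskModule`, `ModularComposer`, the adapter shape) are hypothetical.

```python
# Illustrative sketch only, not the authors' MoCL code. It shows the generic
# idea of modular, compositional, rehearsal-free continual learning:
# one frozen module per past task, composed with a new trainable module.
import torch
import torch.nn as nn

class TaskModule(nn.Module):
    """A small task-specific module (e.g., an adapter-style bottleneck)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.ReLU(),
            nn.Linear(hidden_dim // 4, hidden_dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.proj(h)

class ModularComposer(nn.Module):
    """Composes frozen modules from previous tasks with a new trainable one,
    so knowledge can transfer across tasks without replaying stored data."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.task_modules = nn.ModuleList()
        self.mix_logits = nn.ParameterList()  # one weight vector per task

    def start_new_task(self):
        # Freeze everything trained so far (parameter isolation, no rehearsal).
        for m in self.task_modules:
            for p in m.parameters():
                p.requires_grad_(False)
        for p in self.mix_logits:
            p.requires_grad_(False)
        self.task_modules.append(TaskModule(self.hidden_dim))
        # Trainable composition weights over all modules seen so far.
        self.mix_logits.append(
            nn.Parameter(torch.zeros(len(self.task_modules)))
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Softmax-weighted combination of all task modules, old and new.
        weights = torch.softmax(self.mix_logits[-1], dim=0)
        out = torch.zeros_like(h)
        for w, m in zip(weights, self.task_modules):
            out = out + w * m(h)
        return h + out  # residual connection around the composed modules

composer = ModularComposer(hidden_dim=768)
for task_id in range(3):          # a sequence of tasks
    composer.start_new_task()     # old modules frozen, new one added
    h = torch.randn(2, 768)       # stand-in for LM hidden states
    _ = composer(h)               # train only the new module + mix weights
```

Under this reading, training only the newest module and its mixture weights leaves old-task parameters intact (no forgetting), the softmax composition lets related tasks reuse what earlier modules learned (knowledge transfer), and no examples from previous tasks are ever stored (rehearsal-free).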