InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning
April 2, 2024, 7:41 p.m. | Yan-Shuo Liang, Wu-Jun Li
cs.LG updates on arXiv.org
Abstract: Continual learning requires a model to learn multiple tasks sequentially. In continual learning, the model should maintain its performance on old tasks (stability) while continuously adapting to new tasks (plasticity). Recently, parameter-efficient fine-tuning (PEFT), which freezes a pre-trained model and injects a small number of learnable parameters to adapt to downstream tasks, has gained increasing popularity in continual learning. Although existing continual learning methods based on PEFT …
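The PEFT idea the abstract describes can be illustrated with a minimal low-rank adaptation (LoRA-style) sketch. This is a generic illustration of freezing a weight and training only a low-rank update, not the paper's interference-free InfLoRA construction; all variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative, NOT the paper's InfLoRA method):
# a frozen pre-trained weight W is adapted via a trainable low-rank update
# A @ B, so only r * (d_in + d_out) parameters are learned per task
# instead of the full d_in * d_out.
rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4                  # rank r << d_in, d_out
W = rng.standard_normal((d_in, d_out))      # frozen pre-trained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                    # trainable up-projection (zero init)

def forward(x):
    """Adapted layer: frozen path plus low-rank correction."""
    return x @ W + x @ A @ B

x = rng.standard_normal((8, d_in))
# With B initialized to zero, the adapted layer equals the frozen layer,
# so training starts from the pre-trained model's behavior.
assert np.allclose(forward(x), x @ W)

full_params = d_in * d_out        # parameters in full fine-tuning
lora_params = r * (d_in + d_out)  # trainable parameters in the adapter
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

In a continual-learning setting, a separate low-rank pair (A, B) can be kept per task while W stays frozen; the paper's contribution concerns how to design these updates so new tasks do not interfere with old ones.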