March 1, 2024, 5:44 a.m. | Junhao Zheng, Qianli Ma, Zhen Liu, Binquan Wu, Huawen Feng

cs.LG updates on arXiv.org

arXiv:2401.09181v2 Announce Type: replace
Abstract: Multimodal Continual Instruction Tuning (MCIT) enables Multimodal Large Language Models (MLLMs) to meet continuously emerging requirements without expensive retraining. MCIT faces two major obstacles: catastrophic forgetting (where old knowledge is forgotten) and negative forward transfer (where the performance of future tasks is degraded). Although existing methods have greatly alleviated catastrophic forgetting, they still suffer from negative forward transfer. By performing singular value decomposition (SVD) on input embeddings, we discover a large discrepancy in different input …
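As a rough illustration of the kind of analysis the abstract describes, the sketch below compares the singular-value spectra of input embeddings from two tasks. It is not the paper's code: the variable names (emb_task_a, emb_task_b), shapes, centering step, and the L1 spectrum-distance measure are all assumptions made for this example.

```python
# Illustrative sketch only: SVD on input embeddings from two tasks to
# quantify a discrepancy in their spectra. Names and shapes are assumed.
import numpy as np

def singular_spectrum(embeddings: np.ndarray) -> np.ndarray:
    """Return the singular values of a (num_tokens, hidden_dim) embedding matrix."""
    # Center the embeddings before decomposition (a common preprocessing choice,
    # not necessarily what the paper does).
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    return np.linalg.svd(centered, compute_uv=False)

# Toy stand-ins for the input embeddings of two instruction-tuning tasks.
rng = np.random.default_rng(0)
emb_task_a = rng.normal(size=(512, 768))
emb_task_b = rng.normal(size=(512, 768)) @ np.diag(rng.uniform(0.1, 2.0, 768))

spec_a = singular_spectrum(emb_task_a)
spec_b = singular_spectrum(emb_task_b)

# A large gap between the normalized spectra points to the kind of
# input-distribution discrepancy the abstract refers to.
discrepancy = np.linalg.norm(spec_a / spec_a.sum() - spec_b / spec_b.sum(), ord=1)
print(f"L1 distance between normalized spectra: {discrepancy:.4f}")
```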

