MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning
Feb. 20, 2024, 5:50 a.m. | Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang
cs.CL updates on arXiv.org arxiv.org
Abstract: Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge. In this paper, we propose MoRAL, i.e., Mixture-of-Experts augmented Low-Rank Adaptation for Lifelong Learning. MoRAL combines the multi-tasking abilities of MoE with the fine-tuning abilities of LoRA for effective lifelong learning of LLMs. In contrast to conventional approaches that use factual triplets as inputs, MoRAL relies on simple question-answer pairs, which is a more …
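The abstract describes combining an MoE router with per-expert LoRA adapters on top of a frozen base layer. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: all shapes, the router design, and the gating scheme here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, n_experts = 8, 8, 2, 4

# Frozen pretrained weight (stands in for one LLM linear layer).
W = rng.normal(size=(d_out, d_in))

# Per-expert LoRA factors: B @ A is a rank-`rank` update. B starts at
# zero, so the layer initially reproduces the frozen base output.
A = rng.normal(size=(n_experts, rank, d_in)) * 0.01
B = np.zeros((n_experts, d_out, rank))

# Hypothetical linear router producing soft gates over the experts.
W_router = rng.normal(size=(n_experts, d_in)) * 0.01

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_lora_forward(x):
    """x: (batch, d_in) -> (batch, d_out)."""
    base = x @ W.T                                # frozen path
    gates = softmax(x @ W_router.T)               # (batch, n_experts)
    # Each expert's low-rank update, mixed by the router gates.
    low = np.einsum('erd,bd->ber', A, x)          # (batch, n_experts, rank)
    up = np.einsum('eor,ber->beo', B, low)        # (batch, n_experts, d_out)
    return base + np.einsum('be,beo->bo', gates, up)

x = rng.normal(size=(3, d_in))
out = moe_lora_forward(x)
print(out.shape)                       # (3, 8)
print(np.allclose(out, x @ W.T))       # True: zero-initialized B adds nothing yet
```

For lifelong learning, only `A`, `B`, and `W_router` would be trained while `W` stays frozen; new tasks can then be absorbed by the experts without rewriting the base weights.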