Feb. 20, 2024, 5:42 a.m. | Ziyu Zhao, Leilei Gan, Guoyin Wang, Wangchunshu Zhou, Hongxia Yang, Kun Kuang, Fei Wu

cs.LG updates on arXiv.org

arXiv:2402.09997v1 Announce Type: cross
Abstract: Low-Rank Adaptation (LoRA) provides an effective yet efficient solution for fine-tuning large language models (LLMs). The modular and plug-and-play nature of LoRA enables the integration of diverse domain-specific LoRAs to enhance the capabilities of LLMs. Previous research on exploiting multiple LoRAs either focuses on specific isolated downstream tasks or fixes the selection of LoRAs during training. However, in real-world scenarios, LLMs receive diverse prompts covering different tasks, and the pool of candidate LoRAs is often …
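
The abstract is truncated above. As a rough illustration of what the "modular and plug-and-play" nature of LoRA means, the sketch below shows the standard low-rank update, y = x W^T + (alpha/r) · x (B A)^T, applied on top of a frozen base weight, and how different domain-specific adapters could be swapped in per prompt. This is a generic LoRA sketch, not the paper's retrieval method; the adapter names ("math", "code"), dimensions, and initialization are illustrative assumptions.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Standard LoRA forward pass: y = x W^T + (alpha / r) * x (B A)^T.

    W is the frozen pretrained weight (out_dim, in_dim); A (r, in_dim) and
    B (out_dim, r) are the trainable low-rank factors of the adapter.
    """
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
in_dim, out_dim, r, alpha = 16, 8, 4, 8

# Frozen base weight shared by all adapters.
W = rng.normal(size=(out_dim, in_dim))

# Hypothetical pool of domain-specific LoRA modules (names are illustrative).
# B is initialized to zero, as in standard LoRA, so an untrained adapter
# leaves the base model's output unchanged.
lora_pool = {
    "math": (rng.normal(scale=0.01, size=(r, in_dim)), np.zeros((out_dim, r))),
    "code": (rng.normal(scale=0.01, size=(r, in_dim)), np.zeros((out_dim, r))),
}

x = rng.normal(size=(2, in_dim))   # a batch of two input vectors
A, B = lora_pool["code"]           # plug in the adapter chosen for this prompt
y = lora_forward(x, W, A, B, alpha, r)
print(y.shape)                     # (2, 8)
```

Because the base weight stays frozen, switching adapters only swaps the small (A, B) factors; the merged weight W + (alpha/r) · B A can also be precomputed per adapter when no further switching is needed.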

