Unlocking the Global Synergies in Low-Rank Adapters
June 24, 2024, 4:42 a.m. | Zixi Zhang, Cheng Zhang, Xitong Gao, Robert D. Mullins, George A. Constantinides, Yiren Zhao
cs.CL updates on arXiv.org arxiv.org
Abstract: Low-Rank Adaptation (LoRA) has been the de facto parameter-efficient fine-tuning technique for large language models. We present HeteroLoRA, a lightweight search algorithm that leverages zero-cost proxies to allocate the limited LoRA trainable parameters across the model for better fine-tuned performance. In addition to the allocation for standard LoRA-adapted models, we also demonstrate the efficacy of HeteroLoRA by performing the allocation in a more challenging search space that includes LoRA modules and LoRA-adapted shortcut connections. Experiments …
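The two ingredients the abstract mentions can be sketched in code. Below is a minimal, hypothetical illustration (not the paper's implementation): a frozen linear layer with a LoRA update W + (alpha/r)·BA, plus a toy greedy allocator that spends a trainable-parameter budget on the modules with the highest proxy scores, with a diminishing-returns discount so the budget spreads across modules. The function names, the score discount, and the cost model are all assumptions for illustration.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A.

    Only A and B are trainable; B starts at zero so the adapted layer
    initially matches the frozen base layer exactly.
    """
    def __init__(self, w, rank, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w                                        # frozen base weight, (d_out, d_in)
        d_out, d_in = w.shape
        self.a = rng.normal(0.0, 0.02, size=(rank, d_in))  # trainable down-projection
        self.b = np.zeros((d_out, rank))                   # trainable up-projection, zero init
        self.scale = alpha / rank

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        return x @ (self.w + self.scale * self.b @ self.a).T

    def trainable_params(self):
        return self.a.size + self.b.size


def allocate_ranks(scores, cost_per_rank, budget):
    """Toy heterogeneous rank allocation (illustrative, not HeteroLoRA itself).

    Repeatedly grants one rank unit to the affordable module with the best
    proxy score, discounted by the rank it already holds, until the
    parameter budget is exhausted.
    """
    ranks = {name: 0 for name in scores}
    while True:
        affordable = [n for n in scores if cost_per_rank[n] <= budget]
        if not affordable:
            break
        best = max(affordable, key=lambda n: scores[n] / (ranks[n] + 1))
        ranks[best] += 1
        budget -= cost_per_rank[best]
    return ranks
```

At initialization the LoRA layer is a no-op on top of the frozen weight, and the allocator simply biases rank toward high-scoring modules while respecting the budget; HeteroLoRA's actual search space additionally includes LoRA-adapted shortcut connections.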