SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
March 19, 2024, 4:43 a.m. | Xiangyu Chen, Jing Liu, Ye Wang, Pu (Perry) Wang, Matthew Brand, Guanghui Wang, Toshiaki Koike-Akino
cs.LG updates on arXiv.org
Abstract: Low-rank adaptation (LoRA) and its variants are widely employed in fine-tuning large models, including large language models for natural language processing and diffusion models for computer vision. This paper proposes a generalized framework called SuperLoRA that unifies and extends different LoRA variants, which can be realized under different hyper-parameter settings. Introducing grouping, folding, shuffling, projecting, and tensor factoring, SuperLoRA offers high flexibility compared with other LoRA variants and demonstrates superior performance for transfer learning tasks …
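The core idea SuperLoRA generalizes can be illustrated with a minimal sketch of plain LoRA: a frozen weight matrix W is adapted by a trainable low-rank product B @ A, so only r * (d_out + d_in) parameters are tuned instead of d_out * d_in. The dimensions and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 128, 4  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Effective weight is W + B @ A; with B = 0 the adapter starts
    # as an exact identity on top of the base model.
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
assert np.allclose(adapted_forward(x), W @ x)  # no change at initialization

# Parameter count of the low-rank update vs. full fine-tuning:
full_params = d_out * d_in          # 8192
lora_params = rank * (d_out + d_in)  # 768
```

SuperLoRA's grouping, folding, shuffling, projecting, and tensor-factoring operations then generalize how such low-rank updates are shared and reshaped across multiple attention layers, rather than factorizing each weight independently as above.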