March 25, 2024, 4:42 a.m. | Hwichan Kim, Shota Sasaki, Sho Hoshino, Ukyo Honda

cs.LG updates on arXiv.org arxiv.org

arXiv:2403.14946v1 Announce Type: cross
Abstract: Low-Rank Adaptation (LoRA) is a widely used Parameter-Efficient Fine-Tuning (PEFT) method that updates an initial weight matrix $W_0$ with a delta matrix $\Delta W$ composed of two low-rank matrices $A$ and $B$. A previous study suggested that there is a correlation between $W_0$ and $\Delta W$. In this study, we aim to delve deeper into the relationships between $W_0$ and the low-rank matrices $A$ and $B$ to further comprehend the behavior of LoRA. In particular, we analyze a …
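To make the setup in the abstract concrete, below is a minimal sketch of a LoRA-style linear layer in PyTorch: the frozen pretrained weight $W_0$ is updated by a low-rank delta $\Delta W = BA$, and only $A$ and $B$ are trained. The class name, dimensions, scaling factor, and initialization choices here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen W_0 plus a trainable low-rank delta B @ A."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # W_0: the pretrained weight matrix, kept frozen during fine-tuning.
        # (Random init here only as a stand-in for real pretrained weights.)
        self.W0 = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        # A (r x d_in) and B (d_out x r): the trainable low-rank factors.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        # B starts at zero so that delta_W = B @ A is zero before training.
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W_0 + delta_W, with delta_W = B @ A.
        delta_W = self.B @ self.A
        return x @ (self.W0 + self.scaling * delta_W).T


# Usage: only A and B receive gradients; W_0 stays fixed.
layer = LoRALinear(d_in=768, d_out=768, r=8)
x = torch.randn(4, 768)
y = layer(x)  # shape (4, 768)
```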

