April 18, 2024, 4:47 a.m. | J. Pablo Muñoz, Jinjie Yuan, Nilesh Jain

cs.CL updates on arXiv.org

arXiv:2404.10934v1 Announce Type: cross
Abstract: Recently, several approaches have successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), enabling parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce a novel approach called Shears, demonstrating how the integration of cost-effective sparsity and a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT approaches. Results demonstrate the benefits of Shears compared to …
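To make the "elastic low-rank adapter" idea concrete, here is a minimal PyTorch sketch of a LoRA layer whose rank can be sub-sampled at runtime, so that a weight-sharing NAS controller can evaluate different ranks without retraining. All names (ElasticLoRALinear, set_active_rank, the rank choices) are illustrative assumptions for this post, not the authors' actual Shears/NLS implementation.

```python
# Hypothetical sketch of an "elastic" LoRA adapter; not the Shears code.
import torch
import torch.nn as nn


class ElasticLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank adapter whose rank can be
    sub-sampled at runtime (weights shared across all rank choices)."""

    def __init__(self, in_features, out_features, max_rank=32, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # base stays frozen (PEFT)

        # Adapter factors sized for the largest rank; smaller ranks
        # reuse the leading slices of A and B (weight sharing).
        self.lora_A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.alpha = alpha
        self.active_rank = max_rank

    def set_active_rank(self, r):
        # A NAS controller would call this to activate one sub-network.
        self.active_rank = r

    def forward(self, x):
        r = self.active_rank
        # Only the first r rows/columns of the shared factors are used.
        delta = (x @ self.lora_A[:r].T) @ self.lora_B[:, :r].T
        return self.base(x) + (self.alpha / r) * delta


# Toy search loop: sample ranks as a weight-sharing NAS might.
layer = ElasticLoRALinear(768, 768, max_rank=32)
for rank in (8, 16, 32):  # hypothetical rank search space
    layer.set_active_rank(rank)
    y = layer(torch.randn(4, 768))
    print(rank, y.shape)
```

Per the abstract, Shears additionally applies cost-effective sparsity to the model; in a sketch like this, one could, for example, magnitude-prune the frozen `self.base.weight` before running the adapter-rank search.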
