Feb. 13, 2024, 5:42 a.m. | Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao

cs.LG updates on arXiv.org

Low-Rank Adaptation (LoRA) introduces auxiliary low-rank parameters for each layer so that a pre-trained model can be fine-tuned under limited computing resources. However, it still faces resource-consumption challenges when scaled up to larger models. Previous studies address this problem with pruning techniques, evaluating the importance of the LoRA parameters at different layers. However, these efforts analyzed only the parameter features themselves to evaluate importance. In fact, the output of LoRA, which depends on both the parameters and the data, is the factor that directly impacts the …
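As background for the contrast the abstract draws, here is a minimal sketch, assuming PyTorch, of a LoRA layer together with two importance scores: one computed from parameter features alone, and one from the LoRA branch's actual output on data. The names `LoRALinear`, `parameter_importance`, and `output_importance` are illustrative assumptions, not the paper's actual code.

```python
# Minimal LoRA sketch plus two layer-importance scores (illustrative,
# not the paper's implementation; the abstract is truncated).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, zero-init
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

def parameter_importance(layer: LoRALinear) -> float:
    """Parameter-feature score in the spirit of prior pruning work:
    the norm of the low-rank update B @ A, independent of any data."""
    return (layer.B @ layer.A).norm().item()

def output_importance(layer: LoRALinear, x: torch.Tensor) -> float:
    """Data-dependent score: the magnitude of the LoRA branch's output,
    which reflects both the parameters and the inputs passing through."""
    with torch.no_grad():
        delta = (x @ layer.A.T @ layer.B.T) * layer.scaling
    return delta.norm().item()
```

The output score varies with the data fed through the layer, which is exactly the property the abstract argues parameter-only scores miss.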
