March 19, 2024, 4:41 a.m. | Anique Tahir, Lu Cheng, Huan Liu

cs.LG updates on arXiv.org

arXiv:2403.11366v1 Announce Type: new
Abstract: The scaling of Large Language Models (LLMs) for retrieval-based tasks, particularly in Retrieval Augmented Generation (RAG), faces significant memory constraints, especially when fine-tuning extensive prompt sequences. Current open-source libraries support full-model inference and fine-tuning across multiple GPUs but fall short of accommodating the efficient parameter distribution required for retrieved context. Addressing this gap, we introduce a novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging distributed training. Our framework uniquely utilizes JAX's just-in-time (JIT) compilation …
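The abstract points to JAX's just-in-time (JIT) compilation and distributed parameter placement for PEFT-style (LoRA) fine-tuning. The sketch below is not the authors' implementation; it is a minimal illustration, under assumed names and sizes (d_model, rank, lora_forward), of how a LoRA adapter forward pass can be jitted in JAX and how a large frozen weight could be sharded across available devices with jax.sharding.

# Hypothetical sketch only: jitted LoRA forward pass with a sharded frozen weight.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

d_model, rank = 4096, 8  # assumed hidden size and LoRA rank

key = jax.random.PRNGKey(0)
k_w, k_a, k_x = jax.random.split(key, 3)

# Frozen base weight; in the paper's setting this would come from Llama-2.
W = jax.random.normal(k_w, (d_model, d_model)) * 0.02
# Trainable low-rank LoRA factors: A is (d_model, rank), B is (rank, d_model).
A = jax.random.normal(k_a, (d_model, rank)) * 0.01
B = jnp.zeros((rank, d_model))  # standard LoRA init: B starts at zero

# Shard the large frozen weight column-wise over all available devices; the
# small LoRA factors stay replicated. With a single device this is a no-op.
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(),)), ("model",))
W = jax.device_put(W, NamedSharding(mesh, P(None, "model")))

@jax.jit  # JIT compilation fuses the base and adapter matmuls
def lora_forward(x, W, A, B, scale=1.0):
    # Equivalent to x @ (W + scale * A @ B) without materializing W + A @ B.
    return x @ W + scale * ((x @ A) @ B)

x = jax.random.normal(k_x, (2, 16, d_model))  # (batch, seq_len, d_model)
print(lora_forward(x, W, A, B).shape)         # (2, 16, 4096)

In this kind of setup, only A and B would receive gradient updates, which is what keeps the per-device memory footprint small relative to full fine-tuning.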
