March 5, 2024, 2:51 p.m. | Linhai Zhang, Jialong Wu, Deyu Zhou, Guoqiang Xu

cs.CL updates on arXiv.org

arXiv:2403.01165v1 Announce Type: new
Abstract: Though Large Language Models (LLMs) have demonstrated powerful few-shot learning capabilities through prompting methods, supervised training is still necessary for complex reasoning tasks. Because of their extensive parameter counts and memory consumption, both Parameter-Efficient Fine-Tuning (PEFT) methods and Memory-Efficient Fine-Tuning methods have been proposed for LLMs. Nevertheless, the issue of large annotated data consumption, the aim of Data-Efficient Fine-Tuning, remains unexplored. One obvious way is to combine a PEFT method with active learning. However, …
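The abstract breaks off just as it raises the PEFT-plus-active-learning combination, so the sketch below only illustrates that generic idea, not the paper's actual method: a pool-based active-learning loop that wraps a small classifier with LoRA adapters (via the Hugging Face peft library) and queries the highest-entropy unlabeled examples for annotation. The model choice (distilbert-base-uncased), the entropy acquisition function, and the training hyperparameters are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's method): LoRA-based PEFT combined with
# pool-based active learning using predictive entropy as the acquisition score.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "distilbert-base-uncased"  # illustrative stand-in for a larger LLM
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
base = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Attach LoRA adapters so only a small set of parameters is trained (PEFT).
lora_cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                      lora_dropout=0.05, target_modules=["q_lin", "v_lin"])
model = get_peft_model(base, lora_cfg)

def entropy_scores(texts, batch_size=16):
    """Predictive entropy of the current model on unlabeled texts."""
    model.eval()
    scores = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tok(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
            probs = model(**batch).logits.softmax(-1)
            scores.append(-(probs * probs.clamp_min(1e-12).log()).sum(-1))
    return torch.cat(scores)

def finetune(labeled_texts, labels, epochs=1, lr=1e-4):
    """Update only the trainable LoRA parameters on the labeled pool."""
    model.train()
    optim = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    batch = tok(labeled_texts, padding=True, truncation=True, return_tensors="pt")
    target = torch.tensor(labels)
    for _ in range(epochs):
        loss = model(**batch, labels=target).loss
        loss.backward()
        optim.step()
        optim.zero_grad()

# Active-learning loop: query the k most uncertain examples each round,
# obtain labels from an annotation oracle, and fine-tune the adapters.
def active_learning(unlabeled, oracle, rounds=3, k=8):
    labeled, labels = [], []
    for _ in range(rounds):
        idx = entropy_scores(unlabeled).topk(min(k, len(unlabeled))).indices.tolist()
        queried = [unlabeled[i] for i in idx]
        labeled += queried
        labels += [oracle(t) for t in queried]          # human annotation step
        unlabeled = [t for i, t in enumerate(unlabeled) if i not in set(idx)]
        finetune(labeled, labels)
    return model
```

This skeleton only shows how parameter efficiency (LoRA) and data efficiency (uncertainty-based querying) can be combined in one loop; the paper's contribution presumably addresses the complications hinted at after "However, …".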

