Towards Efficient Active Learning in NLP via Pretrained Representations
Feb. 27, 2024, 5:41 a.m. | Artem Vysogorets, Achintya Gopal
cs.LG updates on arXiv.org arxiv.org
Abstract: Fine-tuning Large Language Models (LLMs) is now a common approach for text classification in a wide range of applications. When labeled documents are scarce, active learning helps save annotation efforts but requires retraining of massive models on each acquisition iteration. We drastically expedite this process by using pretrained representations of LLMs within the active learning loop and, once the desired amount of labeled data is acquired, fine-tuning that or even a different pretrained LLM on …
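The idea in the abstract — run the active learning loop on frozen pretrained embeddings with a cheap classifier head, instead of retraining the full LLM at every acquisition step — can be sketched roughly as follows. Everything here is illustrative: the random vectors stand in for precomputed LLM document embeddings, the head is a plain logistic regression, and the acquisition rule is simple least-confidence sampling (the paper may use different choices).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for frozen pretrained LLM embeddings of 200 documents
# (in practice, computed once with the encoder and cached).
X = rng.normal(size=(200, 32))
w_true = rng.normal(size=32)
y = (X @ w_true > 0).astype(int)  # synthetic binary labels

# Small labeled seed set; the rest is the unlabeled pool.
labeled = [int(i) for i in rng.choice(200, size=10, replace=False)]
pool = [i for i in range(200) if i not in labeled]

for _ in range(5):  # acquisition iterations
    # Only the lightweight head is (re)trained each round; the
    # embeddings never change, so no LLM retraining is needed.
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Least-confidence acquisition: query the pool point whose
    # predicted probability is closest to 0.5.
    probs = clf.predict_proba(X[pool])[:, 1]
    idx = pool[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(idx)
    pool.remove(idx)

print(len(labeled))  # 10 seed + 5 acquired = 15 labeled examples
```

Once the labeled set is large enough, the final step in the abstract — fine-tuning the same or a different pretrained LLM on the acquired labels — happens only once, outside this loop.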