Retrieval-Augmented Data Augmentation for Low-Resource Domain Tasks
Feb. 22, 2024, 5:42 a.m. | Minju Seo, Jinheon Baek, James Thorne, Sung Ju Hwang
Source: cs.LG updates on arXiv.org
Abstract: Despite the large successes of recent language models on diverse tasks, they suffer severe performance degradation in low-resource settings with limited training data. Many existing works tackle this problem by generating synthetic data from the training data and then training models on it, recently using Large Language Models (LLMs). However, in low-resource settings, the amount of seed data available for augmentation is very small, which makes the generated samples suboptimal and less …
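The core idea the abstract describes, augmenting a small seed set by first retrieving related in-domain text and then prompting an LLM with both, can be sketched roughly as below. This is a minimal illustration, not the paper's method: the token-overlap retriever and the `build_prompt` template are hypothetical stand-ins for whatever retriever and generation prompt the authors actually use, and the LLM call itself is omitted.

```python
# Sketch of retrieval-augmented data augmentation for a low-resource task.
# Assumptions: a toy token-overlap retriever stands in for a real (e.g. dense)
# retriever, and the assembled prompt would be sent to an LLM (call omitted).

def tokenize(text):
    """Lowercase whitespace tokenization (toy; real systems use proper tokenizers)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank corpus passages by token overlap with the query and return the top k."""
    scored = sorted(
        corpus,
        key=lambda passage: len(tokenize(query) & tokenize(passage)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(seed_example, retrieved):
    """Assemble a few-shot augmentation prompt from a seed example plus
    retrieved in-domain passages (the retrieval-augmented part)."""
    context = "\n".join(f"- {p}" for p in retrieved)
    return (
        "Relevant domain passages:\n"
        f"{context}\n\n"
        f"Seed example: {seed_example}\n"
        "Generate a new, similar training example:"
    )

if __name__ == "__main__":
    # Tiny illustrative domain corpus and a single seed training example.
    corpus = [
        "The patient presented with acute chest pain.",
        "Quarterly revenue grew by twelve percent.",
        "Chest pain may indicate cardiac problems.",
    ]
    seed = "Patient reports chest pain after exercise."
    neighbors = retrieve(seed, corpus, k=2)
    print(build_prompt(seed, neighbors))
```

The point of the retrieval step is exactly the problem the abstract names: with only a handful of seed samples, the LLM has too little signal, so retrieved domain passages supply additional grounding context before generation.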