Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models
Feb. 19, 2024, 5:47 a.m. | Dheeraj Mekala, Alex Nguyen, Jingbo Shang
cs.CL updates on arXiv.org
Abstract: Instruction-tuning language models has become a crucial step in aligning them for general use. Typically, this process involves extensive training on large datasets, incurring high training costs. In this paper, we introduce a novel training data selection method based on the learning percentage of the samples. We assert that current language models are capable of autonomously selecting high-quality training data, such that training on the selected subset yields performance comparable to or better than training on the entire dataset. Our experiments …
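
The truncated abstract does not define "learning percentage". A minimal sketch of the selection idea, assuming the metric measures how much of a sample's total loss reduction a small model achieves in its first fine-tuning epoch, might look like the following; the metric definition, the ranking direction (keeping slow-to-learn samples), and keep_fraction are all illustrative assumptions, not the paper's settings.

import numpy as np

def learning_percentage(epoch_losses: np.ndarray) -> np.ndarray:
    """epoch_losses: (num_checkpoints, num_samples) per-sample losses from
    the SMALL model, row 0 recorded before fine-tuning, later rows after
    each epoch. Returns, per sample, the fraction of its total loss
    reduction achieved by the first epoch (hypothetical metric)."""
    first_drop = epoch_losses[0] - epoch_losses[1]
    total_drop = np.maximum(epoch_losses[0] - epoch_losses[-1], 1e-8)
    return first_drop / total_drop

def select_indices(epoch_losses: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    """Rank samples by learning percentage and keep the slowest-learned
    keep_fraction of them for fine-tuning the larger model (which end of
    the ranking to keep is an assumption)."""
    scores = learning_percentage(epoch_losses)
    k = int(len(scores) * keep_fraction)
    return np.argsort(scores)[:k]  # ascending: lowest learning percentage first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake losses for 3 checkpoints x 10 samples, decreasing over epochs.
    losses = np.sort(rng.uniform(0.5, 3.0, size=(3, 10)), axis=0)[::-1]
    print("Selected sample indices for the larger model:",
          select_indices(losses, keep_fraction=0.5))

In this reading, the expensive large-model training only ever sees the selected subset; the small model pays the cost of scoring every sample, which is what would keep overall training costs down.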