Feb. 20, 2024, 5:50 a.m. | Xuan Ren, Biao Wu, Lingqiao Liu

cs.CL updates on arXiv.org

arXiv:2402.11192v1 Announce Type: new
Abstract: Fine-tuning large language models (LLMs) with a small data set for particular tasks is a widely encountered yet complex challenge. The potential for overfitting on a limited number of examples can negatively impact the model's ability to generalize and retain its original skills. Our research explores the impact of the style of ground-truth responses during the fine-tuning process. We found that matching the ground-truth response style with the LLM's inherent style results in better learning …
