Preserving In-Context Learning ability in Large Language Model Fine-tuning. (arXiv:2211.00635v1 [cs.CL])
Nov. 2, 2022, 1:12 a.m. | Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar
cs.LG updates on arXiv.org arxiv.org
Pretrained large language models (LLMs) are strong in-context learners that
are able to perform few-shot learning without changing model parameters.
However, as we show, fine-tuning an LLM on any specific task generally destroys
its in-context ability. We discover an important cause of this loss, format
specialization, where the model overfits to the format of the fine-tuned task
and is unable to output anything beyond this format. We further show that
format specialization happens at the beginning of fine-tuning. To solve …
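The in-context learning the abstract contrasts with fine-tuning can be sketched as pure prompt construction: the task is conveyed entirely through labeled demonstrations in the input, and no model parameters change. A minimal illustration, with a hypothetical prompt template not taken from the paper:

```python
# Minimal sketch of few-shot in-context learning as prompt construction.
# The demonstrations and the "Input:/Output:" template are illustrative
# assumptions, not the format used in the paper.

def build_few_shot_prompt(demos, query):
    """Concatenate labeled demonstrations followed by the unlabeled query.

    A pretrained LLM completing this prompt performs the task "in context",
    without any gradient update to its weights.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise.")
print(prompt)
```

Fine-tuning on one task, by contrast, bakes a single output format into the weights; the paper's "format specialization" is the model then refusing to produce anything outside that format, even for prompts like the one above.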