Parallel Structures in Pre-training Data Yield In-Context Learning
Feb. 21, 2024, 5:42 a.m. | Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He
cs.LG updates on arXiv.org
Abstract: Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update. However, it is unclear where this capability comes from, as there is a stark distribution shift between pre-training text and ICL prompts. In this work, we study what patterns of the pre-training data contribute to ICL. We find that LMs' ICL ability depends on parallel structures …
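To make the ICL setting described in the abstract concrete, here is a minimal sketch of a few-shot prompt: the task is specified entirely by examples inside the prompt, and no model parameters are updated. The model choice ("gpt2") and the toy sentiment task are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of in-context learning (ICL): the task is conveyed only by
# a few input/output examples in the prompt; no gradient updates are made.
# The model ("gpt2") and the toy sentiment task are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Review: The movie was fantastic. Sentiment: positive\n"
    "Review: I hated every minute. Sentiment: negative\n"
    "Review: A delightful surprise. Sentiment:"
)

# The model is expected to continue the pattern (e.g. " positive") purely
# from the repeated example structure present in the prompt.
output = generator(prompt, max_new_tokens=2, do_sample=False)
print(output[0]["generated_text"])
```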