March 18, 2024, 4:47 a.m. | Michal Štefánik, Marek Kadlčík, Petr Sojka

cs.CL updates on arXiv.org

arXiv:2403.09703v1 Announce Type: new
Abstract: Many recent language models (LMs) are capable of in-context learning (ICL), manifested in the LMs' ability to perform a new task solely from a natural-language instruction. Previous work curating in-context learners assumes that ICL emerges from vast over-parametrization or from the scale of multi-task training. However, recent theoretical work attributes the ICL ability to concept-dependent training data and creates functional in-context learners even in small-scale, synthetic settings.
In this work, we practically explore this newly identified …
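The ICL behaviour the abstract describes can be illustrated with a short, hedged sketch: the model receives an instruction plus a few demonstrations in the prompt and must continue the pattern for a new input, with no weight updates. This is a minimal illustration only, assuming the Hugging Face transformers library; the model name is an arbitrary placeholder and is not the model studied in the paper.

```python
# Minimal in-context learning sketch: the task is specified entirely in the
# prompt (instruction + demonstrations); the model's continuation shows
# whether it picked up the task in context, without any fine-tuning.
from transformers import pipeline

# Any causal LM could be substituted here; "gpt2" is only an illustrative choice.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> fromage\n"
    "bread ->"
)

# Greedy decoding of a few tokens after "bread ->" reveals whether the model
# inferred the translation task purely from the in-context examples.
output = generator(prompt, max_new_tokens=5, do_sample=False)
print(output[0]["generated_text"])
```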
