Nov. 5, 2023, 6:42 a.m. | Steve Yadlowsky, Lyric Doshi, Nilesh Tripuraneni

cs.LG updates on arXiv.org

Transformer models, notably large language models (LLMs), have the remarkable
ability to perform in-context learning (ICL): carrying out new tasks when
prompted with unseen input-output examples, without any explicit model training.
In this work, we study how effectively transformers can bridge between the
multiple distinct task families that make up their pretraining data mixture to
identify and learn new tasks in-context, both inside and outside the
pretraining distribution. Building on previous work, we investigate this
question in a controlled …
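As a rough illustration of the kind of controlled ICL setup this line of work studies (a minimal sketch, not the authors' code: the task families, `sample_task`, and `build_icl_prompt` are illustrative assumptions), a model is shown a prompt of (x, f(x)) pairs where f is drawn from a mixture of task families, and must predict f on a held-out query input:

```python
# Sketch of a controlled in-context-learning setup: tasks are drawn from a
# mixture of function families, and a prompt is built from input-output
# examples of one sampled task plus a query input. Names and families here
# are hypothetical, chosen only to illustrate the setting.
import numpy as np

rng = np.random.default_rng(0)

def sample_task(family: str, dim: int = 8):
    """Draw one task function from a named (hypothetical) task family."""
    if family == "dense_linear":
        w = rng.normal(size=dim)
        return lambda x: x @ w
    if family == "sparse_linear":
        w = rng.normal(size=dim)
        mask = rng.random(dim) < 0.25          # keep roughly 25% of coordinates
        return lambda x: x @ (w * mask)
    raise ValueError(f"unknown task family: {family}")

def build_icl_prompt(f, n_examples: int = 16, dim: int = 8):
    """Build an ICL prompt: n input-output demonstrations plus one query input."""
    xs = rng.normal(size=(n_examples + 1, dim))
    ys = np.array([f(x) for x in xs])
    context = list(zip(xs[:-1], ys[:-1]))       # (x_i, f(x_i)) demonstrations
    query_x, query_y = xs[-1], ys[-1]           # query the model must answer in-context
    return context, query_x, query_y

# Pretraining mixture: tasks drawn from several distinct families.
mixture = ["dense_linear", "sparse_linear"]
family = rng.choice(mixture)
context, query_x, query_y = build_icl_prompt(sample_task(family))
print(f"family={family}, context length={len(context)}, target={query_y:.3f}")
```

A transformer pretrained on sequences generated this way can then be probed with tasks drawn from inside the mixture, or from a family it never saw, which is the in-distribution versus out-of-distribution contrast the abstract describes.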
