Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models. (arXiv:2311.00871v1 [cs.LG])
cs.LG updates on arXiv.org
Transformer models, notably large language models (LLMs), have the remarkable
ability to perform in-context learning (ICL) -- to learn new tasks when
prompted with unseen input-output examples, without any explicit model training.
In this work, we study how effectively transformers can bridge between their
pretraining data mixture, comprising multiple distinct task families, to
identify and learn new tasks in-context that are both inside and outside the
pretraining distribution. Building on previous work, we investigate this
question in a controlled …
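
As context for the "controlled setting" the abstract alludes to, here is a minimal sketch of how a pretraining mixture of simple task families and in-context (x, f(x)) sequences could be generated. The specific function families (`linear`, `sinusoid`), mixture weights, and helper names (`sample_task`, `make_icl_sequence`, `make_pretraining_mixture`) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical sketch: pretraining sequences are drawn from a mixture of
# simple function families, and each sequence is a list of (x, f(x)) pairs
# for one sampled task -- the kind of data a transformer could be trained on
# to study in-context learning. Families and weights here are assumptions.

def sample_task(rng, family):
    """Draw one task (a function f) from the named family."""
    if family == "linear":
        w = rng.normal()
        return lambda x: w * x
    if family == "sinusoid":
        freq = rng.uniform(0.5, 2.0)
        return lambda x: np.sin(freq * x)
    raise ValueError(f"unknown task family: {family}")

def make_icl_sequence(rng, family, n_examples=16):
    """Build one in-context sequence of (x, f(x)) pairs for a sampled task."""
    f = sample_task(rng, family)
    xs = rng.uniform(-1.0, 1.0, size=n_examples)
    ys = f(xs)
    # Pair inputs with outputs the way an in-context prompt would present them.
    return np.stack([xs, ys], axis=1)  # shape: (n_examples, 2)

def make_pretraining_mixture(rng, n_sequences, mixture):
    """Sample sequences from a weighted mixture of task families."""
    families = list(mixture)
    probs = np.array([mixture[f] for f in families])
    choices = rng.choice(len(families), size=n_sequences, p=probs)
    return [make_icl_sequence(rng, families[c]) for c in choices]

rng = np.random.default_rng(0)
batch = make_pretraining_mixture(rng, n_sequences=4,
                                 mixture={"linear": 0.5, "sinusoid": 0.5})
print(batch[0].shape)  # (16, 2): sixteen (x, f(x)) pairs from one task
```

In this kind of setup, an "in-distribution" evaluation task comes from one of the pretraining families, while an "out-of-distribution" task uses a family never seen during pretraining, which is the distinction the abstract's question about model selection turns on.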