Feb. 6, 2024, 5:42 a.m. | Jesse Hoogland, George Wang, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, Daniel Murfet

cs.LG updates on arXiv.org

We show that in-context learning emerges in transformers in discrete developmental stages when they are trained on either language modeling or linear regression tasks. We introduce two methods for detecting the milestones that separate these stages by probing the geometry of the population loss in both parameter space and function space. We study the stages revealed by these new methods using a range of behavioral and structural metrics to establish their validity.
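The abstract does not spell out the probes themselves. One standard parameter-space probe consistent with "probing the geometry of the population loss" is the local learning coefficient from singular learning theory, estimated by SGLD sampling localized around a training checkpoint. The sketch below assumes that estimator; the function name `estimate_llc`, the hyperparameters (`steps`, `lr`, `gamma`, `batch`), and the toy data are all illustrative choices, not taken from the paper.

```python
import copy
import math

import torch
import torch.nn as nn


def estimate_llc(model, X, Y, loss_fn, steps=500, lr=1e-5, gamma=100.0, batch=64):
    """SGLD estimate of the local learning coefficient at checkpoint `model`.

    lambda_hat = n * beta * (E_w[L(w)] - L(w*)), with beta = 1 / log(n),
    where the expectation is over a tempered posterior localized at the
    checkpoint w* and sampled by SGLD initialized at w*.
    """
    n = X.shape[0]
    beta = 1.0 / math.log(n)
    # Freeze the checkpoint w* and record its loss.
    w_star = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        loss_star = loss_fn(model(X), Y).item()
    sampler = copy.deepcopy(model)  # SGLD chain starts at the checkpoint
    draws = []
    for _ in range(steps):
        idx = torch.randint(0, n, (batch,))
        loss = loss_fn(sampler(X[idx]), Y[idx])
        sampler.zero_grad()
        (n * beta * loss).backward()  # gradient of the tempered log-posterior
        with torch.no_grad():
            for p, p0 in zip(sampler.parameters(), w_star):
                # SGLD step: drift (gradient + localization toward w*) plus
                # isotropic Gaussian noise.
                p.add_(p.grad + gamma * (p - p0), alpha=-0.5 * lr)
                p.add_(torch.randn_like(p), alpha=math.sqrt(lr))
        draws.append(loss.item())
    return n * beta * (sum(draws) / len(draws) - loss_star)


# Illustrative usage on synthetic linear-regression data. In the setting the
# abstract describes, one would sweep this estimate over saved training
# checkpoints and read off stages as plateaus separated by sharp transitions.
torch.manual_seed(0)
X = torch.randn(4096, 8)
Y = X @ torch.randn(8, 1)
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
print(estimate_llc(model, X, Y, nn.MSELoss()))
```

A function-space analogue would track how the model's outputs on a fixed evaluation set move over training (e.g., via PCA of the output trajectory), but the concrete probe is again an assumption here rather than a detail given in the abstract.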
