Nov. 5, 2023, 6:44 a.m. | Michael Hanna, Ollie Liu, Alexandre Variengien

cs.LG updates on arXiv.org

Pre-trained language models can be surprisingly adept at tasks they were not
explicitly trained on, but how they implement these capabilities is poorly
understood. In this paper, we investigate the basic mathematical abilities
often acquired by pre-trained language models. Concretely, we use mechanistic
interpretability techniques to explain the (limited) mathematical abilities of
GPT-2 small. As a case study, we examine its ability to take in sentences such
as "The war lasted from the year 1732 to the year 17", and …
