How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. (arXiv:2305.00586v5 [cs.CL] UPDATED)
cs.LG updates on arXiv.org
Pre-trained language models can be surprisingly adept at tasks they were not
explicitly trained on, but how they implement these capabilities is poorly
understood. In this paper, we investigate the basic mathematical abilities
often acquired by pre-trained language models. Concretely, we use mechanistic
interpretability techniques to explain the (limited) mathematical abilities of
GPT-2 small. As a case study, we examine its ability to take in sentences such
as "The war lasted from the year 1732 to the year 17", and …
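The task above has a natural quantitative score: compare the probability mass the model places on valid year endings (greater than the start year's last two digits, e.g. "33"–"99" after "1732 … 17") against the mass on invalid ones. As a minimal sketch of that kind of metric (an illustrative reconstruction, not the paper's actual code; the mock distribution is invented for demonstration):

```python
def prob_diff(probs: dict[int, float], start_yy: int) -> float:
    """Probability mass on two-digit endings > start_yy minus mass on endings <= start_yy.

    `probs` maps each candidate ending 0..99 to the model's predicted probability.
    A score near +1 means the model almost always completes with a later year.
    """
    good = sum(p for yy, p in probs.items() if yy > start_yy)
    bad = sum(p for yy, p in probs.items() if yy <= start_yy)
    return good - bad

# Mock next-token distribution standing in for a real model's output:
# it mostly favors endings after 32, as a competent model should.
mock = {yy: (0.018 if yy > 32 else 0.002) for yy in range(100)}
total = sum(mock.values())
mock = {yy: p / total for yy, p in mock.items()}  # normalize to sum to 1

print(round(prob_diff(mock, 32), 3))
```

In practice the distribution would come from running GPT-2 small on the prompt and restricting its next-token logits to the two-digit year tokens; the scoring step itself is as simple as the function above.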