April 26, 2024, 4:47 a.m. | Ulme Wennberg, Gustav Eje Henter

cs.CL updates on arXiv.org

arXiv:2404.16574v1 Announce Type: new
Abstract: Transformer-based language models have been found capable of basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and apply it to analyze the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens corresponding to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). PCA results reveal that …
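The extraction-plus-PCA pipeline the abstract describes can be sketched in a few lines of Python. The following is a minimal illustration, assuming the Hugging Face transformers and scikit-learn libraries and the public albert-base-v2 checkpoint; the choice of number words is a hypothetical token set, not necessarily the paper's exact selection.

from transformers import AlbertTokenizer, AlbertModel
from sklearn.decomposition import PCA

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

# Hypothetical token set: the number words zero through nine, each of
# which maps to a single SentencePiece token in the ALBERT vocabulary.
number_words = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]
token_ids = tokenizer.convert_tokens_to_ids(
    [tokenizer.tokenize(w)[0] for w in number_words])

# Pull the static (pre-contextual) input embeddings for those tokens;
# for albert-base-v2 these are 128-dimensional.
embeddings = model.get_input_embeddings().weight.detach().numpy()[token_ids]

# Project onto the leading principal components, as in the abstract.
pca = PCA(n_components=2)
coords = pca.fit_transform(embeddings)
print(pca.explained_variance_ratio_)
print(coords)

The same projection can be repeated for ordinal tokens ("first", "second", and so on) to inspect the kind of structure the paper analyzes.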
