Exploring Internal Numeracy in Language Models: A Case Study on ALBERT
April 26, 2024, 4:47 a.m. | Ulme Wennberg, Gustav Eje Henter
cs.CL updates on arXiv.org
Abstract: It has been found that Transformer-based language models have the ability to perform basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and use our proposal to analyze the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens that correspond to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). PCA results reveal that …
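The method the abstract describes, extracting the embedding rows for number and ordinal tokens and projecting them with PCA, is straightforward to reproduce in outline. Below is a minimal sketch assuming the Hugging Face transformers library and scikit-learn; the model checkpoint ("albert-base-v2") and the word list are illustrative choices, not details taken from the paper.

```python
# Sketch: pull ALBERT's learned input embeddings for number/ordinal
# tokens and project them onto principal components with PCA.
import numpy as np
from sklearn.decomposition import PCA
from transformers import AlbertModel, AlbertTokenizer

model_name = "albert-base-v2"  # one member of the ALBERT family (assumed)
tokenizer = AlbertTokenizer.from_pretrained(model_name)
model = AlbertModel.from_pretrained(model_name)

# Illustrative subset of tokens corresponding to numbers and ordinals.
number_words = ["one", "two", "three", "four", "five",
                "first", "second", "third", "fourth", "fifth"]

# Each word's row in the input embedding matrix is its learned embedding.
embedding_matrix = model.get_input_embeddings().weight.detach().numpy()
token_ids = [tokenizer(w, add_special_tokens=False)["input_ids"][0]
             for w in number_words]
vectors = embedding_matrix[token_ids]

# Fit PCA and project the embeddings onto the top two components.
pca = PCA(n_components=2)
projected = pca.fit_transform(vectors)
for word, (pc1, pc2) in zip(number_words, projected):
    print(f"{word:>8s}: PC1={pc1:+.3f}  PC2={pc2:+.3f}")
```

Inspecting how the projected points order themselves along the leading components is one simple way to probe whether numeric structure is linearly encoded in the embedding space.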