Feb. 28, 2024, 5:43 a.m. | Mingjie Sun, Xinlei Chen, J. Zico Kolter, Zhuang Liu

cs.LG updates on arXiv.org

arXiv:2402.17762v1 Announce Type: cross
Abstract: We observe an empirical phenomenon in Large Language Models (LLMs) -- very few activations exhibit significantly larger values than others (e.g., 100,000 times larger). We call them massive activations. First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find their values largely stay constant regardless of the input, and they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of …
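To make the phenomenon concrete, here is a minimal sketch (not the authors' code) of how one might probe for massive activations: run a prompt through an open causal LM with Hugging Face transformers and compare, per layer, the largest hidden-state magnitude against the median magnitude. The model name, prompt, and the 1000x ratio threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: flag layers whose hidden states contain a single activation that
# dwarfs the typical activation magnitude (a loose proxy for the paper's
# "massive activations"). Model, prompt, and threshold are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM that returns hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

for layer_idx, h in enumerate(out.hidden_states):  # h: (1, seq_len, hidden_dim)
    abs_h = h[0].abs()
    top = abs_h.max().item()
    median = abs_h.median().item()
    flag = "  <-- outlier layer" if median > 0 and top / median > 1000 else ""
    print(f"layer {layer_idx:2d}: max |h| = {top:10.2f}, median |h| = {median:8.4f}{flag}")
```

Printing the ratio per layer makes it easy to see where, along the depth of the network, a handful of coordinates grow orders of magnitude larger than the rest, which is the empirical observation the abstract describes.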

