Feb. 14, 2024, 5:43 a.m. | Binghui Peng Srini Narayanan Christos Papadimitriou

cs.LG updates on arXiv.org

What are the root causes of hallucinations in large language models (LLMs)? We use Communication Complexity to prove that the Transformer layer is incapable of composing functions (e.g., identifying a grandparent of a person in a genealogy) if the domains of the functions are large enough; we show through examples that this inability is already empirically present even when the domains are quite small. We also point out that several mathematical tasks that are at the core of the so-called compositional …
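To make the compositional task concrete, here is a minimal sketch of what "composing functions" means in the genealogy example; the toy genealogy and names are illustrative assumptions, not data from the paper. Answering a grandparent query requires applying the parent function twice and chaining the intermediate result — the two-hop composition that the abstract claims a single Transformer layer cannot perform over large domains.

```python
# Toy genealogy: a finite function parent : person -> person,
# represented as a dictionary (names are made up for illustration).
parent = {
    "alice": "bob",    # parent(alice) = bob
    "bob": "carol",    # parent(bob)   = carol
    "dave": "alice",   # parent(dave)  = alice
}

def grandparent(person):
    """Compose parent with itself: grandparent = parent ∘ parent.

    The query cannot be answered from either lookup alone; the
    intermediate value must be fed back into the same function.
    """
    p = parent.get(person)
    return parent.get(p) if p is not None else None

print(grandparent("alice"))  # carol, via parent(parent(alice))
print(grandparent("dave"))   # bob, via parent(parent(dave))
```

For a program this composition is trivial; the paper's point is that computing it in one shot, inside a single attention layer, requires communicating enough information about the intermediate value, which Communication Complexity lower bounds rule out when the domain of `parent` is large.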
