Let's Think Dot by Dot: Hidden Computation in Transformer Language Models
April 25, 2024, 5:44 p.m. | Jacob Pfau, William Merrill, Samuel R. Bowman
cs.CL updates on arXiv.org
Abstract: Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task decomposition or simply the greater computation that additional tokens allow. We show that transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens. However, we find empirically that learning …
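The contrast the abstract draws is between three decoding regimes: answering immediately, emitting a human-readable chain of thought, and emitting meaningless filler tokens that merely occupy the positions a chain of thought would. A minimal sketch of these prompt variants is below; the task, prompt wording, and filler string are illustrative assumptions, not the authors' exact experimental setup.

```python
# Sketch of the three prompting regimes contrasted in the paper.
# The question, prompt format, and filler string are illustrative
# assumptions chosen for this example, not the paper's exact setup.

def make_prompt(question: str, mode: str, n_filler: int = 30) -> str:
    """Build a prompt for one of three decoding regimes."""
    if mode == "immediate":
        # No intermediate tokens: the model must answer right away.
        return f"{question}\nAnswer:"
    if mode == "chain_of_thought":
        # Human-readable intermediate reasoning precedes the answer.
        return f"{question}\nLet's think step by step:"
    if mode == "filler":
        # Meaningless filler tokens (e.g. '.') occupy the positions a
        # chain of thought would, giving the transformer extra forward
        # passes without conveying any task-relevant information.
        return f"{question}\n{'.' * n_filler}\nAnswer:"
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical algorithmic question, loosely in the spirit of the
# hard tasks the abstract alludes to.
question = "Is there a pair in [3, 7, 2, 9] that sums to 11?"
for mode in ("immediate", "chain_of_thought", "filler"):
    print(f"--- {mode} ---")
    print(make_prompt(question, mode))
```

If filler prompts recover the accuracy of chain-of-thought prompts on such tasks, the gain plausibly comes from the added computation of the extra token positions rather than from the semantic content of the intermediate tokens.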