April 25, 2024, 5:44 p.m. | Jacob Pfau, William Merrill, Samuel R. Bowman

cs.CL updates on arXiv.org

arXiv:2404.15758v1 Announce Type: new
Abstract: Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task decomposition or simply the greater computation that additional tokens allow. We show that transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens. However, we find empirically that learning …

