Web: http://arxiv.org/abs/2205.05718

May 13, 2022, 1:10 a.m. | Katherine M. Collins, Catherine Wong, Jiahai Feng, Megan Wei, Joshua B. Tenenbaum

cs.CL updates on arXiv.org

Human language offers a powerful window into our thoughts -- we tell stories,
give explanations, and express our beliefs and goals through words. Abundant
evidence also suggests that language plays a developmental role in structuring
our learning. Here, we ask: how much of human-like thinking can be captured by
learning statistical patterns in language alone? We first contribute a new
challenge benchmark for comparing humans and distributional large language
models (LLMs). Our benchmark contains two problem-solving domains (planning and
explanation …

