May 13, 2022, 1:11 a.m. | Katherine M. Collins, Catherine Wong, Jiahai Feng, Megan Wei, Joshua B. Tenenbaum

cs.LG updates on arXiv.org

Human language offers a powerful window into our thoughts -- we tell stories,
give explanations, and express our beliefs and goals through words. Abundant
evidence also suggests that language plays a developmental role in structuring
our learning. Here, we ask: how much of human-like thinking can be captured by
learning statistical patterns in language alone? We first contribute a new
challenge benchmark for comparing humans and distributional large language
models (LLMs). Our benchmark contains two problem-solving domains (planning and
explanation …

