March 27, 2024, 4:48 a.m. | Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

cs.CL updates on arXiv.org

arXiv:2310.13257v2 Announce Type: replace
Abstract: Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension, and their internal representations are remarkably well-aligned with representations of language in the human brain. But to achieve these results, LMs must be trained in distinctly un-human-like ways - requiring orders of magnitude more language data than children receive during development, and without perceptual or social context. Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike …
