Visual Grounding Helps Learn Word Meanings in Low-Data Regimes
March 27, 2024, 4:48 a.m. | Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
cs.CL updates on arXiv.org
Abstract: Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension, and their internal representations are remarkably well aligned with representations of language in the human brain. But to achieve these results, LMs must be trained in distinctly un-human-like ways: they require orders of magnitude more language data than children receive during development, and they learn without perceptual or social context. Do models trained more naturalistically, with grounded supervision, exhibit more humanlike …