Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling
March 22, 2024, 4:43 a.m. | Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas
cs.LG updates on arXiv.org
Abstract: Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive, but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs' representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines …
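The abstract describes a training objective that augments a standard language-modeling loss with a contrastive visual-grounding term at the lexical level. The paper's exact formulation is not given here, so the following is only a minimal sketch of a generic contrastive (InfoNCE-style) grounding objective combined with an LM loss; the function names, the temperature value, and the weighting scheme `alpha` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(word_vecs, img_vecs, temperature=0.07):
    """Contrastive loss pulling each word embedding toward its paired
    image embedding and away from the other images in the batch."""
    # Normalize so dot products become cosine similarities.
    w = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    v = img_vecs / np.linalg.norm(img_vecs, axis=1, keepdims=True)
    logits = w @ v.T / temperature  # (N, N) similarity matrix
    # Numerically stable log-softmax over each row.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matched word/image pairs lie on the diagonal.
    return -np.mean(np.diag(log_probs))

def grounded_lm_loss(lm_loss, word_vecs, img_vecs, alpha=0.5):
    """Hypothetical combined objective: text-only LM loss plus a
    weighted lexicon-level grounding term."""
    return lm_loss + alpha * info_nce(word_vecs, img_vecs)

# Toy usage with random embeddings standing in for a batch of
# word tokens and their paired images.
rng = np.random.default_rng(0)
words = rng.normal(size=(8, 16))
images = rng.normal(size=(8, 16))
total = grounded_lm_loss(lm_loss=2.3, word_vecs=words, img_vecs=images)
```

For random, unaligned pairs the contrastive term sits near log N (here log 8); as word embeddings align with their paired images it approaches zero, leaving only the LM loss.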