Learning the meanings of function words from grounded language using a visual question answering model
April 24, 2024, 4:47 a.m. | Eva Portelance, Michael C. Frank, Dan Jurafsky
cs.CL updates on arXiv.org
Abstract: Interpreting a seemingly simple function word like "or", "behind", or "more" can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural-network-based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the …