March 1, 2024, 5:49 a.m. | Gabriel Grand, Valerio Pepe, Jacob Andreas, Joshua B. Tenenbaum

cs.CL updates on arXiv.org arxiv.org

arXiv:2402.19471v1 Announce Type: new
Abstract: Questions combine our mastery of language with our remarkable facility for reasoning about uncertainty. How do people navigate vast hypothesis spaces to pose informative questions given limited cognitive resources? We study these tradeoffs in a classic grounded question-asking task based on the board game Battleship. Our language-informed program sampling (LIPS) model uses large language models (LLMs) to generate natural language questions, translate them into symbolic programs, and evaluate their expected information gain. We find that …
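To make the "evaluate their expected information gain" step concrete, here is a minimal sketch of how expected information gain (EIG) over a hypothesis space of candidate Battleship boards could be computed. This is not the paper's implementation; the function names (`expected_information_gain`, `answer_fn`) and the representation of hypotheses are assumptions for illustration only.

```python
import math
from collections import defaultdict

def expected_information_gain(hypotheses, priors, answer_fn, question):
    """Hypothetical sketch: EIG of a question over a hypothesis space.

    hypotheses: list of candidate world states (e.g., Battleship boards)
    priors:     matching list of prior probabilities that sums to 1
    answer_fn:  answer_fn(question, hypothesis) -> the answer that world
                would give (in LIPS terms, a symbolic program executed
                against the board)
    """
    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    prior_entropy = entropy(priors)

    # Group hypotheses by the answer they would produce to this question.
    answer_prob = defaultdict(float)
    answer_hypo_probs = defaultdict(list)
    for h, p in zip(hypotheses, priors):
        a = answer_fn(question, h)
        answer_prob[a] += p
        answer_hypo_probs[a].append(p)

    # Expected posterior entropy, weighted by how likely each answer is.
    expected_posterior_entropy = sum(
        p_a * entropy([p / p_a for p in answer_hypo_probs[a]])
        for a, p_a in answer_prob.items()
    )

    # EIG = reduction in entropy the asker expects from hearing the answer.
    return prior_entropy - expected_posterior_entropy
```

Under this reading, a question is more informative the more evenly it splits the prior probability mass across its possible answers, which is the tradeoff the abstract describes people navigating under limited cognitive resources.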

