April 10, 2024, 4:42 a.m. | Juhong Min, Shyamal Buch, Arsha Nagrani, Minsu Cho, Cordelia Schmid

cs.LG updates on arXiv.org

arXiv:2404.06511v1 Announce Type: cross
Abstract: This paper addresses the task of video question answering (videoQA) via a decomposed multi-stage, modular reasoning framework. Previous modular methods have shown promise with a single planning stage ungrounded in visual content. However, through a simple and effective baseline, we find that such systems can lead to brittle behavior in practice for challenging videoQA settings. Thus, unlike traditional single-stage planning methods, we propose a multi-stage system consisting of an event parser, a grounding stage, and …

