March 7, 2024, 5:45 a.m. | Deepanway Ghosal, Vernon Toh Yan Han, Chia Yew Ken, Soujanya Poria

cs.CV updates on arXiv.org arxiv.org

arXiv:2403.03864v1 Announce Type: new
Abstract: This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question-answering. We present a new dataset, AlgoPuzzleVQA, designed to challenge and evaluate the capabilities of multimodal language models in solving algorithmic puzzles that necessitate visual understanding, language understanding, and complex algorithmic reasoning. We create the puzzles to encompass a diverse array of mathematical and algorithmic topics such as boolean logic, combinatorics, graph theory, optimization, and search, aiming to …

