Feb. 23, 2024, 5:46 a.m. | Yu-Chung Hsiao, Fedir Zubach, Maria Wang, Jindong Chen

cs.CV updates on arXiv.org

arXiv:2209.08199v2 Announce Type: replace-cross
Abstract: We present a new task and dataset, ScreenQA, for screen content understanding via question answering. Existing screen datasets focus either on structure and component-level understanding, or on much higher-level composite tasks such as navigation and task completion. We attempt to bridge the gap between these two by annotating 86K question-answer pairs over the RICO dataset, in the hope of benchmarking screen reading comprehension capacity.
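To make the task concrete, here is a minimal sketch of what one ScreenQA-style annotation might look like: a question grounded in a RICO screenshot, answered by text drawn from the screen's UI. The field names and values below are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record layout for one ScreenQA-style annotation.
# Field names are assumptions for illustration, not the released schema.
@dataclass
class ScreenQAExample:
    screen_id: str                  # RICO screenshot identifier (assumed field)
    question: str                   # natural-language question about the screen
    answers: List[str] = field(default_factory=list)  # answer text from on-screen UI elements

example = ScreenQAExample(
    screen_id="rico_12345",         # hypothetical ID
    question="What is the name of the signed-in user?",
    answers=["Jane Doe"],
)
print(example.question, "->", example.answers[0])
```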
