Feb. 14, 2024, 5:45 a.m. | Weizhe Lin, Jingbiao Mei, Jinghong Chen, Bill Byrne

cs.CL updates on arXiv.org

Large Multimodal Models (LMMs) excel at natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA), which involves retrieving relevant information from document collections and using it to shape answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We …
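KB-VQA as described here follows a retrieve-then-answer pattern: passages relevant to the image and question are pulled from a document collection and supplied as context to the answering model. Below is a minimal, hedged sketch of that pattern only; the encoder and generator callables (encode_query, encode_document, answer_with_context) are hypothetical placeholders and do not correspond to the paper's models or any specific library API.

```python
# Hedged sketch of a generic retrieve-then-answer KB-VQA pipeline.
# The encoder/generator callables are hypothetical placeholders.
import numpy as np

def retrieve_top_k(query_vec, doc_vecs, k=5):
    # Score documents by dot-product similarity; return indices of the k best.
    scores = doc_vecs @ query_vec
    return np.argsort(scores)[::-1][:k]

def kb_vqa_answer(image, question, documents,
                  encode_query, encode_document, answer_with_context, k=5):
    # 1) Encode the (image, question) pair into a single query vector.
    query_vec = encode_query(image, question)
    # 2) Encode the document collection and retrieve the k most relevant passages.
    doc_vecs = np.stack([encode_document(d) for d in documents])
    context = [documents[i] for i in retrieve_top_k(query_vec, doc_vecs, k)]
    # 3) Condition the answer generator on the retrieved passages.
    return answer_with_context(image, question, context)
```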
