March 26, 2024, 4:49 a.m. | Yuhan Chen, Lumei Su, Lihua Chen, Zhiwei Lin

cs.CV updates on arXiv.org

arXiv:2401.15842v2 Announce Type: replace
Abstract: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. This approach relies on a frozen large language model (LLM) as an intermediate mediator between an off-the-shelf VQA model and an off-the-shelf visual grounding (VG) model, where the LLM transforms and conveys textual information between the two modules based on a designed prompt. LCV2 establishes an integrated plug-and-play framework without the need for any pre-training …
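The abstract describes a three-stage pipeline: the VQA module answers the question, the frozen LLM rephrases the answer into a referring expression, and the VG module localizes it. Below is a minimal Python sketch of that flow, assuming hypothetical wrapper objects (vqa_model, llm, vg_model) and an illustrative prompt; the paper's actual interfaces and prompt wording are not reproduced here.

```python
# Hypothetical sketch of the LCV2 plug-and-play pipeline. The model wrappers
# and the prompt text are illustrative assumptions based on the abstract,
# not the paper's actual code.

from dataclasses import dataclass


@dataclass
class BoundingBox:
    x: float
    y: float
    w: float
    h: float


def grounded_vqa(image, question: str, vqa_model, llm, vg_model):
    """Answer a question about an image and ground the answer to a region."""
    # Step 1: the off-the-shelf VQA module produces a textual answer.
    answer = vqa_model.answer(image, question)

    # Step 2: the frozen LLM acts as a mediator, converting the Q/A pair
    # into a referring expression the VG module can consume (the prompt
    # below is an assumption, not the paper's designed prompt).
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        "Rewrite the answer as a short noun phrase describing the "
        "object to locate in the image."
    )
    referring_expression = llm.generate(prompt)

    # Step 3: the off-the-shelf visual grounding module localizes the
    # referred object, yielding the grounded answer.
    box: BoundingBox = vg_model.locate(image, referring_expression)
    return answer, box
```

Because every component is frozen and communicates only through text, any VQA, LLM, or VG model exposing these interfaces could be swapped in without retraining, which is the plug-and-play property the abstract claims.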

