LCV2: An Efficient Pretraining-Free Framework for Grounded Visual Question Answering
March 26, 2024, 4:49 a.m. | Yuhan Chen, Lumei Su, Lihua Chen, Zhiwei Lin
cs.CV updates on arXiv.org (arxiv.org)
Abstract: In this paper, the LCV2 modular method is proposed for the Grounded Visual Question Answering task in the vision-language multimodal domain. The approach relies on a frozen large language model (LLM) as an intermediary between an off-the-shelf VQA model and an off-the-shelf visual grounding (VG) model: the LLM transforms and conveys textual information between the two modules based on a designed prompt. LCV2 establishes an integrated plug-and-play framework without the need for any pre-training …
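The modular pipeline the abstract describes is simple enough to sketch. The following is a minimal illustration, not the authors' code: the interfaces (vqa_model.answer, llm.generate, vg_model.ground) and the prompt wording are hypothetical placeholders, standing in for whatever off-the-shelf VQA model, frozen LLM, and VG model are plugged into the framework.

```python
# Minimal sketch of an LCV2-style modular pipeline, per the abstract.
# All method names and the prompt below are hypothetical placeholders;
# the framework is described as plug-and-play, so any off-the-shelf
# VQA, frozen LLM, and visual grounding (VG) models could be swapped in.

def grounded_vqa(image, question, vqa_model, llm, vg_model):
    """Answer a question about an image and localize the answer,
    with a frozen LLM mediating between the VQA and VG modules."""
    # Step 1: the off-the-shelf VQA model produces a textual answer.
    answer = vqa_model.answer(image, question)

    # Step 2: the frozen LLM rewrites the question/answer pair into a
    # referring expression the VG model can consume (the "designed
    # prompt" step described in the abstract).
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        "Rewrite the answer as a short phrase describing the object to locate:"
    )
    referring_expression = llm.generate(prompt)

    # Step 3: the off-the-shelf VG model grounds that phrase in the image.
    bounding_box = vg_model.ground(image, referring_expression)

    return answer, bounding_box
```

Because the LLM only passes transformed text between the two frozen modules, no component needs to be retrained, which is the sense in which the framework is pretraining-free.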
More from arxiv.org / cs.CV updates on arXiv.org
Compact 3D Scene Representation via Self-Organizing Gaussian Grids (1 day, 17 hours ago, arxiv.org)
Fingerprint Matching with Localized Deep Representation (1 day, 17 hours ago, arxiv.org)
Jobs in AI, ML, Big Data
Founding AI Engineer, Agents @ Occam AI | New York
AI Engineer Intern, Agents @ Occam AI | US
AI Research Scientist @ Vara | Berlin, Germany and Remote
Data Architect @ University of Texas at Austin | Austin, TX
Data ETL Engineer @ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist @ Lurra Systems | Melbourne