April 22, 2024, 4:45 a.m. | Yang Luo, Zangwei Zheng, Zirui Zhu, Yang You

cs.CV updates on arXiv.org

arXiv:2404.12866v1 Announce Type: cross
Abstract: The increase in parameter size of multimodal large language models (MLLMs) brings significant capabilities, particularly in-context learning, in which MLLMs improve task performance without updating their pre-trained parameters. This effectiveness, however, hinges on the appropriate selection of in-context examples, a process that is currently biased towards visual data and overlooks textual information. Furthermore, supervised retrievers for MLLMs, crucial for optimal in-context example selection, remain largely uninvestigated. Our study offers an in-depth evaluation of the …
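To make the selection problem concrete, here is a minimal sketch (not the paper's method) of ranking candidate in-context examples by a weighted combination of visual and textual similarity. The encoders, the embedding size, and the alpha weighting are illustrative assumptions; alpha = 1.0 corresponds to the vision-only selection the abstract describes as the current bias.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon for numerical safety.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_examples(query_img_emb, query_txt_emb, candidates, k=4, alpha=0.5):
    """Rank candidate (image_emb, text_emb) pairs by a weighted sum of
    visual and textual cosine similarity and return the top-k indices."""
    scores = []
    for img_emb, txt_emb in candidates:
        s = alpha * cosine(query_img_emb, img_emb) \
            + (1 - alpha) * cosine(query_txt_emb, txt_emb)
        scores.append(s)
    return np.argsort(scores)[::-1][:k]

# Toy usage: random vectors stand in for real image/text encoder outputs.
rng = np.random.default_rng(0)
q_img, q_txt = rng.normal(size=512), rng.normal(size=512)
cands = [(rng.normal(size=512), rng.normal(size=512)) for _ in range(20)]
print(select_examples(q_img, q_txt, cands, k=4, alpha=0.5))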

