April 22, 2024, 4:45 a.m. | Yang Luo, Zangwei Zheng, Zirui Zhu, Yang You

cs.CV updates on arXiv.org

arXiv:2404.12866v1 Announce Type: cross
Abstract: The increase in parameter size of multimodal large language models (MLLMs) introduces significant capabilities, particularly in-context learning, where MLLMs enhance task performance without updating pre-trained parameters. This effectiveness, however, hinges on the appropriate selection of in-context examples, a process currently biased towards visual data and overlooking textual information. Furthermore, supervised retrievers for MLLMs, crucial for optimal in-context example selection, remain largely unexplored. Our study offers an in-depth evaluation of the …
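The abstract frames in-context example selection as a retrieval problem spanning both modalities. As a rough illustration of that idea only (a minimal sketch, not the paper's retriever; the function names, weighting scheme, and embeddings below are hypothetical placeholders), a candidate demonstration could be scored on a weighted mix of visual and textual similarity rather than visual similarity alone:

```python
# Minimal sketch (illustrative, not the paper's method): rank candidate
# in-context examples for an MLLM by a weighted combination of image and
# text similarity. Embeddings here are random placeholders; in practice
# they would come from pretrained encoders.
import numpy as np

def cosine(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of candidates."""
    query = query / np.linalg.norm(query)
    candidates = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return candidates @ query

def select_examples(img_q, txt_q, img_cands, txt_cands, k=4, alpha=0.5):
    """Return indices of the top-k candidate demonstrations.

    alpha=1.0 reproduces the vision-only selection the abstract criticizes;
    alpha<1.0 mixes in textual similarity as well.
    """
    scores = alpha * cosine(img_q, img_cands) + (1 - alpha) * cosine(txt_q, txt_cands)
    return np.argsort(-scores)[:k]

# Toy usage: 100 candidates with random 512-d image/text embeddings.
rng = np.random.default_rng(0)
img_q, txt_q = rng.normal(size=512), rng.normal(size=512)
img_c, txt_c = rng.normal(size=(100, 512)), rng.normal(size=(100, 512))
print(select_examples(img_q, txt_q, img_c, txt_c))
```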

