Feb. 28, 2024, 5:46 a.m. | Maurits Bleeker, Mariya Hendriksen, Andrew Yates, Maarten de Rijke

cs.CV updates on arXiv.org

arXiv:2402.17510v1 Announce Type: new
Abstract: Vision-language models (VLMs) mainly rely on contrastive training to learn general-purpose representations of images and captions. We focus on the situation when one image is associated with several captions, each caption containing both information shared among all captions and unique information per caption about the scene depicted in the image. In such cases, it is unclear whether contrastive losses are sufficient for learning task-optimal representations that contain all the information provided by the captions or …
