Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning
Feb. 28, 2024, 5:46 a.m. | Maurits Bleeker, Mariya Hendriksen, Andrew Yates, Maarten de Rijke
cs.CV updates on arXiv.org
Abstract: Vision-language models (VLMs) mainly rely on contrastive training to learn general-purpose representations of images and captions. We focus on the situation where one image is associated with several captions, each caption containing both information shared among all captions and information unique to that caption about the scene depicted in the image. In such cases, it is unclear whether contrastive losses are sufficient for learning task-optimal representations that contain all the information provided by the captions or …
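To make the training setup the abstract refers to concrete, below is a minimal sketch of the symmetric InfoNCE-style contrastive objective commonly used to train image-caption VLMs (e.g., CLIP). The function name, temperature value, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     caption_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched image-caption pairs.

    image_emb, caption_emb: (batch, dim) embeddings; row i of each tensor
    forms a positive pair, and all other rows in the batch act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    caption_emb = F.normalize(caption_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ caption_emb.t() / temperature

    # The matching caption for each image sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-caption and caption-to-image cross-entropy terms.
    loss_i2c = F.cross_entropy(logits, targets)
    loss_c2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2c + loss_c2i) / 2
```

Note that this objective treats every caption of the same image as an independent positive pair; it has no term that forces a representation to retain the caption-specific information the abstract highlights, which is the gap the paper investigates.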