Feb. 28, 2024, 5:46 a.m. | Maurits Bleeker, Mariya Hendriksen, Andrew Yates, Maarten de Rijke

cs.CV updates on arXiv.org

arXiv:2402.17510v1 Announce Type: new
Abstract: Vision-language models (VLMs) mainly rely on contrastive training to learn general-purpose representations of images and captions. We focus on the situation where one image is associated with several captions, each caption containing both information shared among all captions and unique, caption-specific information about the scene depicted in the image. In such cases, it is unclear whether contrastive losses are sufficient for learning task-optimal representations that contain all the information provided by the captions or …
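
The abstract refers to contrastive training of image and caption representations. As a minimal sketch, not the paper's formulation, the snippet below implements a symmetric InfoNCE-style contrastive loss of the kind used by CLIP-style VLMs; the function name `contrastive_loss`, the temperature value, and the toy embeddings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     caption_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    image_emb, caption_emb: (batch, dim) tensors; row i of each forms
    the positive pair, and all other rows act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    caption_emb = F.normalize(caption_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ caption_emb.t() / temperature

    # The matching pair for row i is column i.
    targets = torch.arange(logits.size(0), device=logits.device)

    loss_i2t = F.cross_entropy(logits, targets)      # image -> caption
    loss_t2i = F.cross_entropy(logits.t(), targets)  # caption -> image
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings (8 pairs, 512-dim).
imgs = torch.randn(8, 512)
caps = torch.randn(8, 512)
print(contrastive_loss(imgs, caps).item())
```

Note that this standard formulation assumes one caption per image: with several captions per image, the other captions of the same image land in the in-batch negatives, which is one reason it is unclear whether such losses capture both the shared and the caption-specific information the abstract describes.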
