Exploiting Pseudo Image Captions for Multimodal Summarization
Feb. 27, 2024, 5:50 a.m. | Chaoya Jiang, Rui Xie, Wei Ye, Jinan Sun, Shikun Zhang
cs.CL updates on arXiv.org
Abstract: Cross-modal contrastive learning in vision language pretraining (VLP) faces the challenge of (partial) false negatives. In this paper, we study this problem from the perspective of Mutual Information (MI) optimization. It is well known that the InfoNCE loss used in contrastive learning maximizes a lower bound on the MI between anchors and their positives, while we theoretically prove that MI involving negatives also matters when noise is commonly present. Guided by a more general lower bound form …
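The abstract is truncated, but the InfoNCE objective it builds on is standard. Below is a minimal, generic sketch of InfoNCE for cross-modal (image-text) contrastive learning, not the paper's modified objective; the function name, temperature value, and embedding shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE: the i-th row of `positives` is the positive for the
    i-th anchor; all other rows serve as in-batch negatives. Minimizing this
    loss maximizes a lower bound on the mutual information between anchor and
    positive, roughly I(a; p) >= log(N) - loss for batch size N."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Cosine-similarity logits between every anchor and every candidate.
    logits = anchors @ positives.t() / temperature
    # Each anchor's true positive sits on the diagonal.
    targets = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, targets)

# Toy usage with hypothetical image/text embeddings from a VLP encoder.
img_emb = torch.randn(8, 256)
txt_emb = torch.randn(8, 256)
loss = info_nce_loss(img_emb, txt_emb)
```

Note that in-batch negatives are treated as true negatives here; the paper's point is that when some of those "negatives" are actually (partially) matching pairs, this assumption breaks, motivating their MI analysis of the negative terms.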