Feb. 27, 2024, 5:50 a.m. | Chaoya Jiang, Rui Xie, Wei Ye, Jinan Sun, Shikun Zhang

cs.CL updates on arXiv.org

arXiv:2305.05496v2 Announce Type: replace
Abstract: Cross-modal contrastive learning in vision-language pretraining (VLP) faces the challenge of (partial) false negatives. In this paper, we study this problem from the perspective of Mutual Information (MI) optimization. It is well understood that the InfoNCE loss used in contrastive learning maximizes a lower bound on the MI between anchors and their positives, while we theoretically prove that MI involving negatives also matters in the presence of noise, which is common in practice. Guided by a more general lower bound form …
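For context, this is a minimal sketch of the standard InfoNCE bound the abstract refers to (following van den Oord et al., 2018), not the paper's more general form; the critic f, anchor x, positive y+, and batch size N are generic placeholders rather than the paper's notation:

\[
\mathcal{L}_{\mathrm{InfoNCE}} \;=\; -\,\mathbb{E}\!\left[\log \frac{\exp f(x, y^{+})}{\sum_{i=1}^{N} \exp f(x, y_{i})}\right],
\qquad
I(X;Y) \;\ge\; \log N - \mathcal{L}_{\mathrm{InfoNCE}},
\]

where the sum in the denominator runs over one positive and N−1 negatives. False negatives break the assumption that the contrast samples are unrelated to the anchor, which is presumably why the authors argue that a bound accounting for MI involving negatives is needed.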

