March 6, 2024, 5:45 a.m. | Philipp J. Rösch, Norbert Oswald, Michaela Geierhos, Jindřich Libovický

cs.CV updates on arXiv.org

arXiv:2403.02875v1 Announce Type: new
Abstract: Current multimodal models leveraging contrastive learning often face limitations in developing fine-grained conceptual understanding. This is due to the use of random negative samples during pretraining, which causes the loss function to compare almost exclusively very dissimilar concepts. Consequently, such models struggle with fine-grained semantic differences. To address this problem, we introduce a novel pretraining method incorporating synthetic hard negative text examples. The hard negatives permute terms corresponding to visual concepts, leading to a more fine-grained visual …
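To make the idea concrete, here is a minimal sketch (not the authors' code) of image-text contrastive learning with one synthetic hard negative caption per image. The hard negative is built by permuting visual-concept terms in the original caption, so the model must distinguish captions that differ only in fine-grained details. The helper name `swap_visual_concepts`, the toy concept list, and the loss layout are illustrative assumptions, not the paper's implementation.

```python
# Sketch: contrastive loss with permuted-caption hard negatives (assumed setup).
import random
import torch
import torch.nn.functional as F

def swap_visual_concepts(caption: str, concepts=("dog", "cat", "car", "bike")) -> str:
    """Create a hard negative by swapping two visual-concept terms in the caption."""
    words = caption.split()
    idx = [i for i, w in enumerate(words) if w in concepts]
    if len(idx) >= 2:
        i, j = random.sample(idx, 2)
        words[i], words[j] = words[j], words[i]  # permute the two concept words
    return " ".join(words)

def contrastive_loss_with_hard_negs(img_emb, txt_emb, hard_txt_emb, temperature=0.07):
    """InfoNCE over in-batch negatives plus one permuted hard negative per image."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    hard_txt_emb = F.normalize(hard_txt_emb, dim=-1)

    # Similarities to all in-batch captions (B x B), plus each image's own
    # hard-negative caption appended as an extra negative column (B x 1).
    logits = img_emb @ txt_emb.t() / temperature
    hard_logits = (img_emb * hard_txt_emb).sum(-1, keepdim=True) / temperature
    logits = torch.cat([logits, hard_logits], dim=1)

    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
B, D = 4, 32
loss = contrastive_loss_with_hard_negs(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
print(swap_visual_concepts("a dog chasing a cat in the park"))
```

Because the hard negative shares almost all of its words with the positive caption, it contributes a much harder comparison than the random in-batch negatives, which is the intuition the abstract describes.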

Tags: cs.CV, cs.CL, cs.IR
