March 6, 2024, 5:45 a.m. | Philipp J. Rösch, Norbert Oswald, Michaela Geierhos, Jindřich Libovický

cs.CV updates on arXiv.org

arXiv:2403.02875v1 Announce Type: new
Abstract: Current multimodal models leveraging contrastive learning often face limitations in developing fine-grained conceptual understanding. This is because negative samples are drawn at random during pretraining, so the loss function almost exclusively compares very dissimilar concepts. Consequently, the models struggle with fine-grained semantic differences. To address this problem, we introduce a novel pretraining method incorporating synthetic hard negative text examples. The hard negatives permute terms corresponding to visual concepts, leading to a more fine-grained visual …
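The abstract describes the idea only at a high level. As a minimal sketch, and not the paper's actual implementation, the snippet below illustrates how synthetic hard negatives could be produced by permuting visual-concept terms in a caption and then folded into a CLIP-style InfoNCE loss as one extra candidate per image. The `swap_visual_terms` helper, the term-swap dictionary, and the random embeddings standing in for encoder outputs are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def swap_visual_terms(caption, swaps):
    """Build a hard-negative caption by permuting terms that denote visual
    concepts, e.g. swaps = {"dog": "cat", "red": "blue"} (hypothetical lookup)."""
    return " ".join(swaps.get(tok, tok) for tok in caption.split())


def contrastive_loss_with_hard_negatives(img_emb, txt_emb, hard_txt_emb, temperature=0.07):
    """Image-to-text InfoNCE loss where each image is scored against its matching
    caption, all other captions in the batch (random negatives), and one
    synthetic hard-negative caption."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    hard_txt_emb = F.normalize(hard_txt_emb, dim=-1)

    # (B, B): similarities to all captions in the batch (random negatives off-diagonal)
    logits = img_emb @ txt_emb.t() / temperature
    # (B, 1): similarity of each image to its own hard-negative caption
    hard_logits = (img_emb * hard_txt_emb).sum(-1, keepdim=True) / temperature
    logits = torch.cat([logits, hard_logits], dim=1)  # (B, B+1)

    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return F.cross_entropy(logits, targets)


# Toy usage: random tensors stand in for image/text encoder outputs.
B, D = 4, 512
print(swap_visual_terms("a red dog on the grass", {"dog": "cat", "red": "blue"}))
loss = contrastive_loss_with_hard_negatives(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```

Appending the hard-negative similarity as an additional column in the logit matrix is one simple way to force the loss to discriminate between semantically close captions; the paper's exact formulation may differ.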

