Jan. 31, 2024, 4:41 p.m. | Zhuowan Li, Cihang Xie, Benjamin Van Durme, Alan Yuille

cs.CL updates on arXiv.org

Despite the impressive advancements achieved through vision-and-language
pretraining, it remains unclear whether this joint learning paradigm can help
understand each individual modality. In this work, we conduct a comparative
analysis of the visual representations in existing vision-and-language models
and vision-only models by probing a broad range of tasks, aiming to assess the
quality of the learned representations in a nuanced manner. Interestingly, our
empirical observations suggest that vision-and-language models are better at
label prediction tasks like object and attribute prediction, …
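The abstract does not spell out the probing protocol, but a common setup for this kind of comparison is a linear probe: the pretrained encoder is frozen and a single linear classifier is trained on its features, so probe accuracy reflects how decodable the target labels are from the representation. The sketch below is a minimal illustration under that assumption; the backbone, data loaders, and dimensions are placeholders, not the paper's actual code.

```python
# Minimal linear-probe sketch (assumed setup, not the paper's implementation):
# freeze a pretrained vision backbone and fit one linear layer on its features.
import torch
import torch.nn as nn


def linear_probe(backbone: nn.Module, train_loader, val_loader,
                 feature_dim: int, num_classes: int, epochs: int = 10) -> float:
    """Train a linear classifier on frozen features and return val accuracy."""
    backbone.eval()                          # freeze the pretrained encoder
    for p in backbone.parameters():
        p.requires_grad_(False)

    probe = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = backbone(images)     # features stay fixed
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Probe accuracy serves as a proxy for representation quality on the task.
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = probe(backbone(images)).argmax(dim=-1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```

Running the same probe over a vision-and-language encoder and a vision-only encoder, on tasks such as object or attribute prediction, gives the kind of per-task comparison the abstract describes.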

