Aug. 22, 2022, 1:14 a.m. | Sunan He, Taian Guo, Tao Dai, Ruizhi Qiao, Chen Wu, Xiujun Shu, Bo Ren

cs.CV updates on arXiv.org

Image and language modeling is of crucial importance for vision-language
pre-training (VLP), which aims to learn multi-modal representations from
large-scale paired image-text data. However, we observe that most existing VLP
methods focus on modeling the interactions between image and text features
while neglecting the information disparity between image and text, and thus
suffer from focal bias. To address this problem, we propose a vision-language
masked autoencoder framework (VLMAE). VLMAE employs visual generative
learning, enabling the model to acquire fine-grained and unbiased …
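The masked-autoencoder ingredient the abstract refers to starts from patchifying an image and hiding a random subset of patches, so the encoder sees only the visible ones and a decoder must reconstruct the rest. Below is a minimal NumPy sketch of that masking step in the style of MAE; the function names, patch size, and mask ratio are illustrative assumptions, not details taken from the VLMAE paper.

```python
import numpy as np

def patchify(img, patch):
    # Split an (H, W, C) image into a sequence of flattened square patches.
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    x = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch * patch * C)

def random_masking(patches, mask_ratio, rng):
    # Keep a random subset of patches; the rest are "masked" and would be
    # reconstructed by the decoder during generative pre-training.
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    ids = rng.permutation(n)
    keep = np.sort(ids[:n_keep])
    mask = np.ones(n, dtype=bool)   # True = masked (hidden from the encoder)
    mask[keep] = False              # False = visible
    return patches[keep], keep, mask

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))
patches = patchify(img, 8)                        # 16 patches of 8*8*3 = 192 values
visible, keep, mask = random_masking(patches, 0.75, rng)
```

With a 75% mask ratio on a 32x32 image and 8x8 patches, the encoder processes only 4 of the 16 patch tokens, which is what makes MAE-style pre-training cheap relative to full-image encoding.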

