Web: http://arxiv.org/abs/2206.08657

June 20, 2022, 1:10 a.m. | Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Nan Duan

cs.LG updates on arXiv.org

Vision-Language (VL) models with the Two-Tower architecture have dominated
vision-language representation learning in recent years. Current VL models
either use lightweight uni-modal encoders and learn to extract, align, and fuse
both modalities simultaneously in a cross-modal encoder, or feed the last-layer
uni-modal features directly into the top cross-modal encoder, ignoring the
semantic information at different levels in the deep uni-modal encoders.
Both approaches potentially restrict vision-language representation learning and
limit model performance. In this paper, we introduce multiple bridge …

Tags: arxiv, bridge building, cv, language, learning, representation, representation learning, vision
