BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning. (arXiv:2206.08657v1 [cs.CV])
cs.LG updates on arXiv.org
Vision-Language (VL) models with the Two-Tower architecture have dominated
vision-language representation learning in recent years. Current VL models
either use lightweight uni-modal encoders and learn to extract, align, and fuse
both modalities simultaneously in a cross-modal encoder, or feed the last-layer
uni-modal features directly into the top cross-modal encoder, ignoring the
semantic information at different levels of the deep uni-modal encoders.
Both approaches possibly restrict vision-language representation learning and
limit model performance. In this paper, we introduce multiple bridge …
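The idea of bridging intermediate uni-modal layers into the cross-modal encoder, rather than feeding only the last-layer features, can be sketched roughly as follows. This is a toy illustration of the general technique described in the abstract, not the authors' actual architecture: the class names (`BridgeLayer`, `BridgeTowerSketch`), the additive fusion, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BridgeLayer(nn.Module):
    """Hypothetical bridge: fuses an intermediate uni-modal representation
    into the cross-modal stream via additive fusion plus LayerNorm.
    A sketch of the general idea, not the paper's exact design."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, cross_modal_hidden, uni_modal_hidden):
        return self.norm(cross_modal_hidden + uni_modal_hidden)

class BridgeTowerSketch(nn.Module):
    """Toy two-tower model: the bottom uni-modal layers run unbridged;
    each of the top layers' outputs is bridged, layer by layer, into a
    matching cross-modal fusion layer instead of only the last layer."""
    def __init__(self, dim=64, num_uni_layers=6, num_bridge_layers=3, nhead=4):
        super().__init__()
        def tower():
            return nn.ModuleList(
                [nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
                 for _ in range(num_uni_layers)])
        self.text_layers = tower()
        self.vision_layers = tower()
        self.text_bridges = nn.ModuleList(
            [BridgeLayer(dim) for _ in range(num_bridge_layers)])
        self.vision_bridges = nn.ModuleList(
            [BridgeLayer(dim) for _ in range(num_bridge_layers)])
        self.cross_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
             for _ in range(num_bridge_layers)])
        # index of the first uni-modal layer whose output gets bridged
        self.k = num_uni_layers - num_bridge_layers

    def forward(self, text, image):
        # bottom uni-modal layers: no interaction between modalities
        for layer in self.text_layers[:self.k]:
            text = layer(text)
        for layer in self.vision_layers[:self.k]:
            image = layer(image)
        # top layers: bridge each intermediate representation into fusion
        fused = torch.cat([text, image], dim=1)
        for i in range(len(self.cross_layers)):
            text = self.text_layers[self.k + i](text)
            image = self.vision_layers[self.k + i](image)
            t_len = text.size(1)
            fused_t = self.text_bridges[i](fused[:, :t_len], text)
            fused_v = self.vision_bridges[i](fused[:, t_len:], image)
            fused = self.cross_layers[i](torch.cat([fused_t, fused_v], dim=1))
        return fused

model = BridgeTowerSketch()
out = model(torch.randn(2, 10, 64), torch.randn(2, 16, 64))
print(out.shape)  # concatenated text+image tokens: (2, 26, 64)
```

The key contrast with the two approaches the abstract criticizes: each of the top uni-modal layers contributes its own representation to a corresponding cross-modal layer, so multi-level semantic information reaches the fusion encoder instead of only the final layer's output.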