SMAUG: Sparse Masked Autoencoder for Efficient Video-Language Pre-training. (arXiv:2211.11446v2 [cs.CV] UPDATED)
cs.CL updates on arXiv.org
Video-language pre-training is crucial for learning powerful multi-modal
representations. However, it typically requires a massive amount of computation.
In this paper, we develop SMAUG, an efficient pre-training framework for
video-language models. The foundational component of SMAUG is the masked
autoencoder. Unlike prior works, which mask only textual inputs, our masking
strategy considers both visual and textual modalities, providing better
cross-modal alignment and further reducing pre-training costs. On top of
that, we introduce a space-time token sparsification module, which leverages …
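To make the dual-modality masking concrete, here is a minimal PyTorch sketch of MAE-style random token dropping applied to both video patch tokens and text tokens before a joint encoder. The mask_tokens helper, the tensor shapes, and the mask ratios (0.75 for video, 0.15 for text) are illustrative assumptions, not the paper's exact procedure, and the sketch omits SMAUG's space-time token sparsification module.

```python
import torch

def mask_tokens(tokens: torch.Tensor, mask_ratio: float):
    """Randomly drop a fraction of tokens, MAE-style (illustrative sketch).

    tokens: (batch, num_tokens, dim)
    Returns the visible tokens and the indices of the tokens that were kept.
    """
    B, N, D = tokens.shape
    num_keep = max(1, int(N * (1.0 - mask_ratio)))  # always keep at least one token
    # Random scores per token; argsort yields a random permutation per example.
    scores = torch.rand(B, N, device=tokens.device)
    keep_idx = scores.argsort(dim=1)[:, :num_keep]            # (B, num_keep)
    visible = tokens.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return visible, keep_idx

# Hypothetical usage: mask both modalities, then feed the visible tokens
# to the encoder. Shapes and ratios below are assumptions for illustration.
video_tokens = torch.randn(2, 196, 768)  # e.g. flattened space-time patches
text_tokens = torch.randn(2, 32, 768)    # caption token embeddings
vis_video, _ = mask_tokens(video_tokens, mask_ratio=0.75)
vis_text, _ = mask_tokens(text_tokens, mask_ratio=0.15)
```

Because the encoder only processes the visible subset, a high video mask ratio cuts the quadratic attention cost substantially, which is the usual source of MAE-style pre-training savings.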