Web: http://arxiv.org/abs/2206.08356

June 17, 2022, 1:13 a.m. | Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

cs.CV updates on arXiv.org

Transformer-based architectures have become competitive across a variety of
visual domains, most notably images and videos. While prior work has studied
these modalities in isolation, having a common architecture suggests that one
can train a single unified model for multiple visual modalities. Prior attempts
at unified modeling typically use architectures tailored for vision tasks, or
obtain worse performance compared to single modality models. In this work, we
show that masked autoencoding can be used to train a simple Vision Transformer …
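The core idea — masked autoencoding — drops a large random subset of patch tokens and trains the model to reconstruct them, and the same masking logic applies whether the tokens come from an image or a video clip. Below is a minimal, self-contained sketch of the token-masking step; the mask ratio, patch-grid sizes, and function name are illustrative, not taken from the paper's implementation.

```python
import random

def random_mask(num_tokens, mask_ratio=0.9, seed=0):
    """Split token indices into visible and masked sets,
    as in masked-autoencoder pretraining (illustrative sketch)."""
    rng = random.Random(seed)
    idx = list(range(num_tokens))
    rng.shuffle(idx)
    num_masked = int(num_tokens * mask_ratio)
    masked, visible = idx[:num_masked], idx[num_masked:]
    return sorted(visible), sorted(masked)

# The same code handles, e.g., an image tokenized into a 14x14 patch grid
# (196 tokens) and a video clip tokenized into an 8x14x14 spatio-temporal
# grid (1568 tokens) -- only the token count differs:
for num_tokens in (196, 1568):
    visible, masked = random_mask(num_tokens)
    print(num_tokens, len(visible), len(masked))
```

Because the encoder only processes the visible tokens, a high mask ratio makes pretraining cheap; the decoder then reconstructs the pixels of the masked patches.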

