OmniMAE: Single Model Masked Pretraining on Images and Videos. (arXiv:2206.08356v1 [cs.CV])
Web: http://arxiv.org/abs/2206.08356
June 17, 2022, 1:12 a.m. | Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
stat.ML updates on arXiv.org
Transformer-based architectures have become competitive across a variety of
visual domains, most notably images and videos. While prior work has studied
these modalities in isolation, having a common architecture suggests that one
can train a single unified model for multiple visual modalities. Prior attempts
at unified modeling typically use architectures tailored for vision tasks, or
obtain worse performance compared to single modality models. In this work, we
show that masked autoencoding can be used to train a simple Vision Transformer …
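To make the pretraining idea concrete, below is a minimal sketch of masked-autoencoder pretraining on patchified inputs, assuming PyTorch. The tiny transformer, its sizes, and the patch shapes are illustrative placeholders, not the OmniMAE architecture or configuration from the paper: the point is only the generic recipe of dropping most patches, encoding the visible ones, and reconstructing the masked ones.

```python
# Hypothetical toy masked autoencoder (not the OmniMAE model): encode visible
# patches, reconstruct masked patches from learned mask tokens.
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    def __init__(self, patch_dim=768, dim=128, depth=2, heads=4, num_patches=196):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True), 1
        )
        self.head = nn.Linear(dim, patch_dim)  # predict raw patch values

    def forward(self, patches, mask_ratio=0.75):
        B, N, _ = patches.shape
        x = self.embed(patches) + self.pos[:, :N]

        # Randomly keep a subset of patches; the rest are dropped before encoding.
        n_keep = int(N * (1 - mask_ratio))
        perm = torch.rand(B, N, device=x.device).argsort(dim=1)
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        x_vis = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        z = self.encoder(x_vis)

        # Decoder sees encoded visible tokens plus learned mask tokens (+ positions).
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, z.size(-1)), z)
        pred = self.head(self.decoder(full + self.pos[:, :N]))

        # Reconstruction loss is computed only on the masked patches.
        idx = masked.unsqueeze(-1).expand(-1, -1, patches.size(-1))
        target = torch.gather(patches, 1, idx)
        pred_masked = torch.gather(pred, 1, idx)
        return nn.functional.mse_loss(pred_masked, target)


# Usage: a batch of 196 flattened patches, e.g. 16x16x3 crops of a 224x224 image
# or spatio-temporal tubelets from a short video clip (shapes are illustrative).
patches = torch.randn(2, 196, 768)
loss = TinyMAE()(patches)
loss.backward()
```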