Nov. 17, 2022, 2:15 a.m. | Tianhong Li, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, Dilip Krishnan

cs.CV updates on arXiv.org

Generative modeling and representation learning are two key tasks in computer
vision. However, these models are typically trained independently, which
ignores the potential for each task to help the other, and leads to training
and model maintenance overheads. In this work, we propose MAsked Generative
Encoder (MAGE), the first framework to unify SOTA image generation and
self-supervised representation learning. Our key insight is that using variable
masking ratios in masked image modeling pre-training can allow generative
training (very high masking …
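To illustrate the idea of a variable masking ratio in masked image modeling, here is a minimal sketch. The function names, the uniform sampling range, and the `mask_id` sentinel are all assumptions for illustration, not MAGE's actual implementation or hyperparameters: a fresh ratio is drawn per batch, and that fraction of token positions is replaced with a mask token.

```python
import random

def sample_mask_ratio(lo=0.5, hi=1.0, rng=None):
    """Draw a masking ratio anew for each training batch.

    The [lo, hi] range here is a hypothetical choice; a high ratio
    approximates generative pre-training, a lower one approximates
    representation-learning-style masked modeling.
    """
    rng = rng or random.Random()
    return rng.uniform(lo, hi)

def mask_tokens(tokens, mask_ratio, mask_id=-1, rng=None):
    """Replace a random subset of token positions with mask_id.

    Returns the masked sequence and the sorted masked indices.
    """
    rng = rng or random.Random()
    n = len(tokens)
    n_mask = max(1, int(n * mask_ratio))  # mask at least one position
    idx = rng.sample(range(n), n_mask)
    masked = list(tokens)
    for i in idx:
        masked[i] = mask_id
    return masked, sorted(idx)
```

In use, the training loop would call `sample_mask_ratio()` once per batch and feed the masked sequence to the encoder, so a single model sees both near-complete masking (generation) and light masking (representation learning).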
