MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis. (arXiv:2211.09117v1 [cs.CV])
Nov. 17, 2022, 2:15 a.m. | Tianhong Li, Huiwen Chang, Shlok Kumar Mishra, Han Zhang, Dina Katabi, Dilip Krishnan
cs.CV updates on arXiv.org
Generative modeling and representation learning are two key tasks in computer
vision. However, these models are typically trained independently, which
ignores the potential for each task to help the other and incurs training
and model-maintenance overhead. In this work, we propose MAsked Generative
Encoder (MAGE), the first framework to unify state-of-the-art (SOTA) image
generation and self-supervised representation learning. Our key insight is
that using variable masking ratios in masked image modeling pre-training
allows generative training (very high masking …
Tags: arxiv, encoder, image, mage, representation, representation learning
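The variable-masking-ratio idea from the abstract can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the uniform sampling range `[0.5, 1.0]` and the function name `sample_mask` are assumptions for demonstration (the paper specifies its own masking-ratio distribution).

```python
import numpy as np

def sample_mask(num_tokens, lo=0.5, hi=1.0, rng=None):
    """Mask a *variable* fraction of image tokens per training step.

    Drawing the ratio anew each step means one model sees both
    representation-learning-like batches (lower ratios) and
    generation-like batches (ratios near 1.0). The uniform
    distribution here is illustrative only.
    """
    rng = rng or np.random.default_rng()
    ratio = rng.uniform(lo, hi)               # variable masking ratio
    num_masked = int(round(ratio * num_tokens))
    mask = np.zeros(num_tokens, dtype=bool)
    # Choose which token positions to mask, without replacement.
    mask[rng.choice(num_tokens, size=num_masked, replace=False)] = True
    return mask

mask = sample_mask(256)                       # e.g. a 16x16 token grid
print(mask.sum())                             # between 128 and 256 tokens masked
```

In a full pipeline the masked positions would be replaced by a learnable mask token before being fed to the encoder-decoder; this sketch only shows the ratio sampling that the abstract highlights.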