Aug. 2, 2022, 2:10 a.m. | Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon

cs.LG updates on arXiv.org

Masked autoencoders are scalable vision learners, as the title of MAE
\cite{he2022masked} states, suggesting that self-supervised learning (SSL) in
vision may follow a trajectory similar to that in NLP. Specifically, generative
pretext tasks based on masked prediction (e.g., BERT) have become the de facto
standard SSL practice in NLP. By contrast, early attempts at generative methods
in vision were overshadowed by their discriminative counterparts (such as
contrastive learning); however, the success of masked image modeling has revived
the masked autoencoder (often …
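The masked-prediction pretext task the abstract describes can be illustrated with a minimal sketch: split an image into patches, hide a large fraction of them, and score a reconstruction only on the hidden patches. The stand-in "predictor" below (mean of the visible patches) is an assumption for illustration only; MAE trains an encoder-decoder network instead, and `patchify`, `masked_pretext_task`, the patch size, and the 75% mask ratio are hypothetical choices, though 75% matches the ratio reported in the MAE paper.

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W) image into non-overlapping p x p patches, flattened."""
    H, W = img.shape
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p * p)

def masked_pretext_task(img, p=4, mask_ratio=0.75, seed=0):
    """MAE-style pretext task: hide most patches, reconstruct the hidden ones.

    The 'prediction' here is just the mean visible patch -- a toy stand-in,
    not MAE's trained network -- so only the task structure is real.
    """
    rng = np.random.default_rng(seed)
    patches = patchify(img, p)
    n = patches.shape[0]
    n_masked = int(n * mask_ratio)
    masked_idx = rng.permutation(n)[:n_masked]
    visible = np.delete(patches, masked_idx, axis=0)
    # Toy baseline: predict every masked patch as the mean visible patch.
    pred = np.tile(visible.mean(axis=0), (n_masked, 1))
    # As in MAE, the reconstruction loss is computed only on masked patches.
    loss = ((pred - patches[masked_idx]) ** 2).mean()
    return loss, n_masked, n

img = np.arange(64, dtype=float).reshape(8, 8)
loss, n_masked, n_total = masked_pretext_task(img)
print(f"masked {n_masked}/{n_total} patches, reconstruction MSE = {loss:.2f}")
```

Because the loss is taken only over masked patches, the model cannot solve the task by copying visible pixels; it must infer the hidden content, which is what makes the pretext task useful for representation learning.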

arxiv, autoencoder, cv, learning, masked autoencoder, self-supervised learning, supervised learning, survey, vision
