SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners. (arXiv:2205.14540v2 [cs.CV] UPDATED)
Aug. 17, 2022, 1:12 a.m. | Feng Liang, Yangguang Li, Diana Marculescu
cs.CV updates on arXiv.org arxiv.org
Recently, self-supervised Masked Autoencoders (MAE) have attracted
unprecedented attention for their impressive representation learning ability.
However, their pretext task, Masked Image Modeling (MIM), reconstructs only the
missing local patches and therefore lacks a global understanding of the image.
This paper extends MAE to a fully supervised setting by adding a supervised
classification branch, thereby enabling MAE to effectively learn global
features from gold labels. The proposed Supervised MAE (SupMAE) exploits only
a visible subset of image patches for classification, unlike standard
supervised pre-training …
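The core idea in the abstract can be sketched in a few lines: mask most patches, encode only the visible subset, and train a classification head on the pooled visible features jointly with the MIM reconstruction loss. The following is a minimal NumPy sketch under stated assumptions; all dimensions, weight matrices, and the single-linear-layer "encoder"/"decoder" are illustrative stand-ins, not the paper's ViT-based architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper uses ViT-scale models)
num_patches, patch_dim, embed_dim, num_classes = 16, 12, 8, 10
mask_ratio = 0.75  # MAE-style: keep only 25% of patches visible

patches = rng.normal(size=(num_patches, patch_dim))
label = 3  # dummy class label for the supervised branch

# Randomly select the visible subset of patches
num_visible = int(num_patches * (1 - mask_ratio))
perm = rng.permutation(num_patches)
visible_idx, masked_idx = perm[:num_visible], perm[num_visible:]

# Stand-in "encoder": a single linear projection of visible patches only
W_enc = rng.normal(size=(patch_dim, embed_dim))
z_visible = patches[visible_idx] @ W_enc          # (num_visible, embed_dim)
pooled = z_visible.mean(axis=0)                   # (embed_dim,)

# Supervised branch: classify from pooled visible-token features
W_cls = rng.normal(size=(embed_dim, num_classes))
logits = pooled @ W_cls                           # (num_classes,)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
cls_loss = -np.log(probs[label])                  # cross-entropy

# Self-supervised branch: reconstruct the masked patches (MIM objective)
W_dec = rng.normal(size=(embed_dim, patch_dim))
recon = pooled @ W_dec                            # crude decoder stand-in
mim_loss = np.mean((patches[masked_idx] - recon) ** 2)

# SupMAE trains both objectives jointly (loss weighting is a paper detail)
total_loss = mim_loss + cls_loss
```

The point of the sketch is the supervised branch's input: it sees only the visible 25% of patches, so classification must rely on a partial view, which is what distinguishes SupMAE from standard supervised pre-training on full images.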