RePre: Improving Self-Supervised Vision Transformer with Reconstructive Pre-training. (arXiv:2201.06857v2 [cs.CV] UPDATED)
Jan. 20, 2022, 2:10 a.m. | Luya Wang, Feng Liang, Yangguang Li, Honggang Zhang, Wanli Ouyang, Jing Shao
cs.CV updates on arXiv.org arxiv.org
Recently, self-supervised vision transformers have attracted unprecedented
attention for their impressive representation learning ability. However, the
dominant method, contrastive learning, mainly relies on an instance
discrimination pretext task, which learns a global understanding of the image.
This paper incorporates local feature learning into self-supervised vision
transformers via Reconstructive Pre-training (RePre). Our RePre extends
contrastive frameworks by adding a branch for reconstructing raw image pixels
in parallel with the existing contrastive objective. RePre is equipped with a
lightweight convolution-based decoder that …
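The abstract describes a two-branch objective: the usual contrastive (instance-discrimination) loss plus a parallel pixel-reconstruction loss from a lightweight decoder. A minimal NumPy sketch of such a combined objective is below; the function names, the MSE reconstruction term, and the `weight` balancing coefficient are illustrative assumptions, not details taken from the paper, and the transformer encoder and convolutional decoder themselves are omitted.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss between two views' embeddings.

    z1, z2: (batch, dim) arrays; row i of z1 and row i of z2 are a positive pair.
    """
    # L2-normalize embeddings, then compute the cross-view similarity matrix.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    # Positive pairs sit on the diagonal; apply a row-wise log-softmax.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def combined_loss(z1, z2, reconstructed, pixels, weight=1.0):
    """Contrastive term plus a pixel-reconstruction term, run in parallel.

    `reconstructed` and `pixels` are same-shape image arrays; `weight` is a
    hypothetical balancing coefficient (not a value from the paper).
    """
    recon_term = np.mean((reconstructed - pixels) ** 2)
    return info_nce_loss(z1, z2) + weight * recon_term
```

In this sketch the reconstruction branch simply adds a weighted MSE term to the existing contrastive objective, which is the high-level structure the abstract describes; the actual decoder design and loss details are in the paper.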