Jan. 20, 2022, 2:10 a.m. | Luya Wang, Feng Liang, Yangguang Li, Honggang Zhang, Wanli Ouyang, Jing Shao

cs.CV updates on arXiv.org

Recently, self-supervised vision transformers have attracted unprecedented
attention for their impressive representation learning ability. However, the
dominant method, contrastive learning, mainly relies on an instance
discrimination pretext task, which yields only a global understanding of the image.
This paper incorporates local feature learning into self-supervised vision
transformers via Reconstructive Pre-training (RePre). Our RePre extends
contrastive frameworks by adding a branch for reconstructing raw image pixels
in parallel with the existing contrastive objective. RePre is equipped with a
lightweight convolution-based decoder that …
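
The combined objective described above can be sketched in plain Python. This is a minimal, hedged illustration (not the paper's actual implementation): the names `repre_total_loss`, `mse`, and the `weight` balancing hyperparameter are assumptions for exposition, showing only how a pixel-reconstruction term could be added alongside an existing contrastive loss.

```python
def mse(pred, target):
    """Mean squared error between reconstructed and raw pixel values
    (a common choice for a pixel-reconstruction loss; the excerpt does
    not specify the exact loss RePre uses)."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)


def repre_total_loss(contrastive_loss, recon_pixels, raw_pixels, weight=1.0):
    """Total loss = contrastive objective + weighted reconstruction term.

    `weight` is a hypothetical balancing factor; the actual weighting
    scheme is not given in this excerpt.
    """
    return contrastive_loss + weight * mse(recon_pixels, raw_pixels)


# A perfect reconstruction leaves only the contrastive term.
total = repre_total_loss(0.5, [0.1, 0.2], [0.1, 0.2], weight=1.0)
```

In a real training loop, `recon_pixels` would come from the lightweight convolution-based decoder applied to the transformer's features, and `raw_pixels` from the input image.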
