June 17, 2022, 1:10 a.m. | Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

cs.LG updates on arXiv.org

Recent self-supervised learning (SSL) methods have shown impressive results
in learning visual representations from unlabeled images. This paper aims to
improve their performance further by utilizing the architectural advantages of
the underlying neural network, as the current state-of-the-art visual pretext
tasks for SSL do not exploit this benefit, i.e., they are architecture-agnostic.
In particular, we focus on Vision Transformers (ViTs), which have gained much
attention recently as a better architectural choice, often outperforming
convolutional networks for various visual tasks. The …
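The abstract is truncated before the method is described, but the "architectural advantage" it alludes to is that a ViT naturally produces one token per image patch in addition to the global [CLS] token, whereas architecture-agnostic SSL objectives typically only use a single image-level representation. The sketch below is a minimal, hypothetical toy ViT (not the authors' model or code) written in PyTorch to make that distinction concrete: the forward pass returns both the [CLS] token and the per-patch tokens that an architecture-aware pretext task could additionally exploit.

```python
import torch
import torch.nn as nn


class TinyViT(nn.Module):
    """Minimal ViT-style encoder (illustrative only): patch embedding + Transformer blocks."""

    def __init__(self, img_size=32, patch_size=8, dim=64, depth=2, heads=4):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Non-overlapping patch embedding implemented as a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x)                    # (B, dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)           # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        # Global [CLS] representation and per-patch representations.
        return x[:, 0], x[:, 1:]


model = TinyViT()
imgs = torch.randn(4, 3, 32, 32)
cls_repr, patch_repr = model(imgs)
print(cls_repr.shape)    # torch.Size([4, 64])     -- what image-level SSL objectives use
print(patch_repr.shape)  # torch.Size([4, 16, 64]) -- patch tokens an architecture-aware objective could also use
```

How a patch-level objective would actually use these tokens is specific to the paper and is not reproduced here; the sketch only shows where the extra per-patch signal comes from in the ViT architecture.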

