May 25, 2022, 1:12 a.m. | Van-Nhiem Tran, Chi-En Huang, Shen-Hsuan Liu, Kai-Lin Yang, Timothy Ko, Yung-Hui Li

cs.CV updates on arXiv.org

In recent years, self-supervised learning has been studied as a way to deal with the
limited availability of labeled datasets. Among the major components of
self-supervised learning, the data augmentation pipeline is a key factor in
the resulting performance. However, most researchers design the augmentation
pipeline manually, and the limited collection of transformations may cause the
learned feature representations to lack robustness. In this work, we propose
Multi-Augmentations for Self-Supervised Representation Learning (MA-SSRL),
which fully searches for various augmentation policies to …

arxiv augmentation cv learning pre-training representation representation learning training
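
As context for the abstract, the sketch below shows the kind of manually designed, two-view augmentation pipeline that self-supervised methods commonly use and that MA-SSRL's searched policies are meant to replace. This is an illustrative assumption, not the paper's code; the transform choices and parameters are generic torchvision defaults, not the searched MA-SSRL policies.

```python
# Minimal sketch (assumption: not the MA-SSRL implementation) of a hand-designed
# self-supervised augmentation pipeline producing two views of one image.
from PIL import Image
import torchvision.transforms as T

ssl_augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.GaussianBlur(kernel_size=23),
    T.ToTensor(),
])

def two_views(img):
    # Self-supervised objectives typically compare two independently
    # augmented "views" of the same image.
    return ssl_augment(img), ssl_augment(img)

if __name__ == "__main__":
    img = Image.new("RGB", (256, 256))   # placeholder image
    v1, v2 = two_views(img)
    print(v1.shape, v2.shape)            # torch.Size([3, 224, 224]) each
```

The abstract's point is that a fixed list like the one above is limited; MA-SSRL instead searches over augmentation policies rather than relying on such a hand-picked set.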
