Multi-Augmentation for Efficient Visual Representation Learning for Self-supervised Pre-training. (arXiv:2205.11772v1 [cs.CV])
cs.CV updates on arXiv.org
In recent years, self-supervised learning has been studied as a way to cope with the
limited availability of labeled datasets. Among the major components of
self-supervised learning, the data augmentation pipeline is a key factor in
the resulting performance. However, most researchers design the augmentation
pipeline manually, and a limited collection of transformations can leave the
learned feature representations lacking in robustness. In this work, we propose
Multi-Augmentations for Self-Supervised Representation Learning (MA-SSRL),
which fully searches for various augmentation policies to …
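To make the idea concrete, here is a minimal, hypothetical sketch of a multi-augmentation pipeline: a pool of transformations, a randomly sampled policy (an ordered subset of the pool), and several differently augmented views of one input, as a self-supervised pipeline would produce before the encoder. All names and transforms below are illustrative stand-ins, not the paper's actual method; real pipelines would operate on images with crops, flips, color jitter, and so on.

```python
import random

# Hypothetical pool of simple transformations standing in for image
# augmentations. Each one acts on a list of numbers for illustration.
TRANSFORMS = {
    "crop":  lambda x: x[1:-1] if len(x) > 2 else x,  # drop the borders
    "flip":  lambda x: x[::-1],                        # reverse the order
    "scale": lambda x: [2 * v for v in x],             # rescale values
    "shift": lambda x: [v + 1 for v in x],             # offset values
}

def sample_policy(n_ops, rng):
    """Sample a random augmentation policy: an ordered list of transform names."""
    return [rng.choice(list(TRANSFORMS)) for _ in range(n_ops)]

def apply_policy(x, policy):
    """Apply each transform in the policy in sequence."""
    for name in policy:
        x = TRANSFORMS[name](x)
    return x

def multi_augment(x, n_views, n_ops=2, seed=0):
    """Generate several differently augmented views of one input,
    each under its own randomly sampled policy."""
    rng = random.Random(seed)
    views = []
    for _ in range(n_views):
        policy = sample_policy(n_ops, rng)
        views.append((policy, apply_policy(list(x), policy)))
    return views

if __name__ == "__main__":
    for policy, view in multi_augment([1, 2, 3, 4], n_views=3):
        print(policy, view)
```

A policy-search method in this spirit would then score candidate policies by the quality of the representations they induce, rather than fixing one hand-designed pipeline.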