Web: http://arxiv.org/abs/2105.04906

Jan. 31, 2022, 2:11 a.m. | Adrien Bardes, Jean Ponce, Yann LeCun

cs.LG updates on arXiv.org

Recent self-supervised methods for image representation learning are based on
maximizing the agreement between embedding vectors from different views of the
same image. A trivial solution is obtained when the encoder outputs constant
vectors. This collapse problem is often avoided through implicit biases in the
learning architecture that often lack a clear justification or interpretation.
In this paper, we introduce VICReg (Variance-Invariance-Covariance
Regularization), a method that explicitly avoids the collapse problem with a
simple regularization term on the variance of …
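The three regularizers named in the title can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' reference implementation: the weights, the variance target `gamma`, and the epsilon are assumptions chosen for clarity, and the function names are hypothetical.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0,
                gamma=1.0, eps=1e-4):
    """Sketch of a VICReg-style loss on two batches of embeddings
    z_a, z_b of shape (batch, dim), one per augmented view."""
    n, d = z_a.shape

    # Invariance: mean squared distance between the two views' embeddings.
    inv = np.mean((z_a - z_b) ** 2)

    # Variance: hinge keeping each embedding dimension's std above gamma,
    # which penalizes the collapse to constant vectors described above.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))
    var = var_term(z_a) + var_term(z_b)

    # Covariance: push off-diagonal covariance entries toward zero so
    # dimensions carry decorrelated information.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d
    cov = cov_term(z_a) + cov_term(z_b)

    return sim_w * inv + var_w * var + cov_w * cov
```

A collapsed encoder (constant output) zeroes the invariance term but is heavily penalized by the variance term, which is the explicit anti-collapse mechanism the abstract refers to.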

