Aug. 17, 2022, 1:12 a.m. | Tongzhou Wang, Phillip Isola

cs.CV updates on arXiv.org

Contrastive representation learning has been outstandingly successful in
practice. In this work, we identify two key properties related to the
contrastive loss: (1) alignment (closeness) of features from positive pairs,
and (2) uniformity of the induced distribution of the (normalized) features on
the hypersphere. We prove that, asymptotically, the contrastive loss optimizes
these properties, and analyze their positive effects on downstream tasks.
Empirically, we introduce an optimizable metric to quantify each property.
Extensive experiments on standard vision and language datasets …
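The two properties can be made concrete as optimizable metrics. The sketch below is a minimal NumPy rendering of the commonly used definitions from this line of work: alignment as the mean α-powered distance between positive-pair features, and uniformity as the log of the mean pairwise Gaussian potential on the hypersphere. The defaults α=2 and t=2 are assumptions here, and the function names are illustrative.

```python
import numpy as np

def align_loss(x, y, alpha=2):
    # x, y: (N, d) arrays of L2-normalized features of positive pairs.
    # Alignment: mean alpha-powered distance between paired features
    # (lower means positives map closer together).
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniform_loss(x, t=2):
    # x: (N, d) array of L2-normalized features.
    # Uniformity: log of the mean pairwise Gaussian potential
    # exp(-t * ||xi - xj||^2) over all distinct pairs
    # (lower means features spread more evenly on the hypersphere).
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)  # indices of distinct pairs i < j
    return np.log(np.mean(np.exp(-t * sq_dists[iu])))
```

For perfectly aligned pairs the alignment loss is zero; for two antipodal points on the unit circle the squared distance is 4, so the uniformity loss evaluates to -2t·4 = -8 at t=2.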
