Web: http://arxiv.org/abs/2205.01789

June 24, 2022, 1:11 a.m. | Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath

stat.ML updates on arXiv.org

Recent investigations in noise contrastive estimation suggest, both
empirically and theoretically, that while having more "negative samples"
in the contrastive loss improves downstream classification performance
initially, beyond a threshold it hurts downstream performance due to a
"collision-coverage" trade-off. But is such a phenomenon inherent in
contrastive learning? We show in a simple theoretical setting, where positive
pairs are generated by sampling from the underlying latent class (introduced by
Saunshi et al. (ICML 2019)), that the downstream performance of …
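The "negative samples" in question enter through the standard noise-contrastive (InfoNCE-style) objective, where each positive pair is scored against k negatives in the softmax denominator. The snippet below is a minimal NumPy sketch of that loss for a single anchor; the embedding dimension, temperature, and k = 8 are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor, positive: 1-D unit-norm embedding vectors of dimension d.
    negatives: array of shape (k, d) holding k negative embeddings.
    Increasing k adds more competing terms to the denominator below.
    """
    pos_sim = anchor @ positive / temperature          # similarity to the positive
    neg_sims = negatives @ anchor / temperature        # similarities to the k negatives
    logits = np.concatenate([[pos_sim], neg_sims])
    # Cross-entropy with the positive treated as the correct "class":
    # -log( exp(pos) / (exp(pos) + sum_j exp(neg_j)) )
    return -pos_sim + np.log(np.exp(logits).sum())

# Toy usage with random unit vectors and k = 8 negatives.
rng = np.random.default_rng(0)
d, k = 32, 8
normalize = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
anchor = normalize(rng.normal(size=d))
positive = normalize(anchor + 0.1 * rng.normal(size=d))
negatives = normalize(rng.normal(size=(k, d)))
print(contrastive_loss(anchor, positive, negatives))
```

The collision-coverage trade-off the abstract refers to concerns how this k should be chosen: more negatives cover the data distribution better, but also raise the chance that a negative "collides" with the anchor's latent class.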

