Do More Negative Samples Necessarily Hurt in Contrastive Learning? (arXiv:2205.01789v2 [cs.LG] UPDATED)
Web: http://arxiv.org/abs/2205.01789
June 24, 2022, 1:11 a.m. | Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath
cs.LG updates on arXiv.org
Recent investigations in noise contrastive estimation suggest, both
empirically and theoretically, that while having more "negative samples"
in the contrastive loss improves downstream classification performance
initially, beyond a threshold, it hurts downstream performance due to a
"collision-coverage" trade-off. But is such a phenomenon inherent in
contrastive learning? We show in a simple theoretical setting, where positive
pairs are generated by sampling from the underlying latent class (introduced by
Saunshi et al. (ICML 2019)), that the downstream performance of …
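For readers unfamiliar with the loss being discussed, below is a minimal NumPy sketch of an InfoNCE-style contrastive loss in which the number of negative samples k is an explicit knob. It is purely illustrative: the embedding dimension, the random "encoder" outputs, and the perturbed positive view are all assumptions, this is not the paper's latent-class construction, and the printed loss values say nothing about the downstream classification performance that the collision-coverage trade-off actually concerns.

# Minimal, assumption-laden sketch of an InfoNCE-style contrastive loss
# with a configurable number of negatives k (illustrative only; not the
# construction analyzed in the paper).
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: -log( exp(s+) / (exp(s+) + sum_i exp(s_i)) ).

    anchor, positive: (d,) unit-normalised embeddings
    negatives: (k, d) unit-normalised embeddings of k negative samples
    """
    pos_sim = anchor @ positive / temperature      # similarity to the positive
    neg_sim = negatives @ anchor / temperature     # (k,) similarities to negatives
    logits = np.concatenate([[pos_sim], neg_sim])  # positive sits in slot 0
    # cross-entropy with the positive as the target class, via log-sum-exp
    return float(np.logaddexp.reduce(logits) - pos_sim)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 32                                         # assumed embedding dimension
    normalize = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    anchor = normalize(rng.normal(size=d))
    positive = normalize(anchor + 0.1 * rng.normal(size=d))  # a perturbed "view"
    # Vary the number of negatives k; the downstream effect of this knob is
    # what the collision-coverage trade-off is about.
    for k in (1, 8, 64, 512):
        negatives = normalize(rng.normal(size=(k, d)))
        print(f"k={k:4d}  loss={info_nce_loss(anchor, positive, negatives):.3f}")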
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY