Do More Negative Samples Necessarily Hurt in Contrastive Learning? (arXiv:2205.01789v1 [cs.LG])
Web: http://arxiv.org/abs/2205.01789
May 5, 2022, 1:10 a.m. | Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath
stat.ML updates on arXiv.org arxiv.org
Recent investigations in noise contrastive estimation suggest, both
empirically and theoretically, that while having more "negative samples" in
the contrastive loss initially improves downstream classification
performance, beyond a threshold it hurts downstream performance due to a
"collision-coverage" trade-off. But is such a phenomenon inherent in
contrastive learning? We show, in a simple theoretical setting where positive
pairs are generated by sampling from the underlying latent class (introduced
by Saunshi et al. (ICML 2019)), that the downstream performance of …
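The "negative samples" in question are the extra examples that the contrastive loss pushes away from each anchor-positive pair. As a minimal sketch of where the number of negatives k enters, the snippet below implements an InfoNCE-style cross-entropy contrastive loss with inner-product similarity; the function name, temperature, and similarity choice are illustrative assumptions and may differ from the exact theoretical setting studied in the paper.

import torch
import torch.nn.functional as F


def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss with k negatives per anchor.

    anchor:    (batch, dim) embeddings of anchor views
    positive:  (batch, dim) embeddings of positive views (same latent class)
    negatives: (batch, k, dim) embeddings of k negative samples
    """
    # Similarity of each anchor with its positive: (batch, 1)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)
    # Similarity of each anchor with its k negatives: (batch, k)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)
    # Logits over [positive, negative_1, ..., negative_k]; the positive is class 0
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)


# Increasing k adds more negatives to the softmax denominator; the effect of
# this k on downstream accuracy is the quantity the paper analyzes.
batch, dim, k = 8, 32, 16
x = F.normalize(torch.randn(batch, dim), dim=-1)
x_pos = F.normalize(torch.randn(batch, dim), dim=-1)
x_neg = F.normalize(torch.randn(batch, k, dim), dim=-1)
print(contrastive_loss(x, x_pos, x_neg).item())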