Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. (arXiv:2106.04156v7 [cs.LG] UPDATED)
June 27, 2022, 1:10 a.m. | Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, Tengyu Ma
cs.LG updates on arXiv.org arxiv.org
Recent works in self-supervised learning have advanced the state-of-the-art
by relying on the contrastive learning paradigm, which learns representations
by pushing positive pairs, or similar examples from the same class, closer
together while keeping negative pairs far apart. Despite the empirical
successes, theoretical foundations are limited -- prior analyses assume
conditional independence of the positive pairs given the same class label, but
recent empirical applications use heavily correlated positive pairs (i.e., data
augmentations of the same image). Our work analyzes …
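The spectral contrastive loss the paper proposes takes the form −2·E[f(x)ᵀf(x⁺)] + E[(f(x)ᵀf(x′))²], attracting positive pairs while penalizing squared similarity between independent examples. A minimal NumPy sketch of a batch estimate of this loss (an illustrative reimplementation, not the authors' code; the function name and the off-diagonal estimator for the repulsion term are my choices):

```python
import numpy as np

def spectral_contrastive_loss(z1, z2):
    """Batch estimate of the spectral contrastive loss.

    z1, z2: (n, d) arrays of representations of two augmented views;
    rows with the same index form a positive pair.
    """
    n = z1.shape[0]
    # Attraction term: -2 * E[f(x)^T f(x+)] over positive pairs.
    pos = -2.0 * np.mean(np.sum(z1 * z2, axis=1))
    # Repulsion term: E[(f(x)^T f(x'))^2] over independent pairs,
    # estimated here from the off-diagonal inner products of the batch.
    sim = z1 @ z2.T
    off_diag = sim[~np.eye(n, dtype=bool)]
    neg = np.mean(off_diag ** 2)
    return pos + neg
```

For orthonormal representations (e.g. `z1 = z2 = np.eye(3)`), the repulsion term vanishes and the loss is −2, its minimum for unit-norm features in this toy case.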