Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. (arXiv:2106.04156v7 [cs.LG] UPDATED)
June 27, 2022, 1:11 a.m. | Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, Tengyu Ma
stat.ML updates on arXiv.org arxiv.org
Recent works in self-supervised learning have advanced the state-of-the-art
by relying on the contrastive learning paradigm, which learns representations
by pushing positive pairs, or similar examples from the same class, closer
together while keeping negative pairs far apart. Despite the empirical
successes, theoretical foundations are limited -- prior analyses assume
conditional independence of the positive pairs given the same class label, but
recent empirical applications use heavily correlated positive pairs (i.e., data
augmentations of the same image). Our work analyzes …
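The loss the title refers to can be written, per the paper, as an attractive term on positive pairs plus a squared-similarity repulsive term over random pairs: roughly L(f) = -2·E[f(x)ᵀf(x⁺)] + E[(f(x)ᵀf(x'))²]. Below is a minimal NumPy sketch of a batch version of that form; the function name, the batch-averaging choices, and the use of all cross-view pairs as the repulsive term are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def spectral_contrastive_loss(z1, z2):
    """Batch estimate of a spectral-contrastive-style loss (sketch).

    z1, z2 : (n, d) arrays holding representations of two augmented
    views of n images. Rows with the same index form positive pairs;
    all cross-view pairs contribute to the repulsive term.
    """
    n = z1.shape[0]
    # Attractive term: inner product f(x)^T f(x+) for each positive pair.
    pos = np.sum(z1 * z2, axis=1)        # shape (n,)
    # Repulsive term: squared similarities over all n*n cross-view pairs.
    sim = z1 @ z2.T                      # (n, n) similarity matrix
    neg = np.sum(sim ** 2) / (n * n)
    # -2 * E[pos] + E[neg^2], mirroring the population-level form above.
    return -2.0 * pos.mean() + neg

# Example: orthonormal representations where each positive pair matches exactly.
z = np.eye(2)
loss = spectral_contrastive_loss(z, z)   # -2*1 + (2/4) = -1.5
```

With exactly aligned, orthogonal positive pairs the attractive term saturates while the repulsive term only penalizes the diagonal similarities, so the loss is low; collapsing all rows to one vector would drive the repulsive term up, which is the intuition behind the negative-pair term.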