Boosting Contrastive Self-Supervised Learning with False Negative Cancellation. (arXiv:2011.11765v2 [cs.CV] UPDATED)
cs.CV updates on arXiv.org
Self-supervised representation learning has made significant leaps fueled by
progress in contrastive learning, which seeks to learn transformations that
embed positive input pairs nearby, while pushing negative pairs far apart.
While positive pairs can be generated reliably (e.g., as different views of the
same image), it is difficult to accurately establish negative pairs, defined as
samples from different images regardless of their semantic content or visual
features. A fundamental problem in contrastive learning is mitigating the
effects of false negatives. …
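The contrastive objective described above can be sketched as an InfoNCE-style loss in which suspected false negatives are simply dropped from the denominator. This is a minimal illustration, not the paper's method: how false negatives are actually identified is the paper's contribution, so here the mask `fn_mask` is assumed to be given, and the function name and signature are hypothetical.

```python
import numpy as np

def info_nce_with_fn_cancellation(z_anchor, z_pos, z_neg, fn_mask, tau=0.1):
    """InfoNCE-style contrastive loss for a single anchor.

    z_anchor: (d,)   anchor embedding
    z_pos:    (d,)   positive embedding (e.g., another view of the same image)
    z_neg:    (n, d) candidate negative embeddings
    fn_mask:  (n,)   boolean; True marks a suspected false negative
              (hypothetical input -- detecting these is the paper's topic)
    tau:      temperature
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(z_anchor, z_pos) / tau)
    negs = np.array([np.exp(cos(z_anchor, n) / tau) for n in z_neg])
    # Cancellation: exclude suspected false negatives from the denominator,
    # so the anchor is not pushed away from semantically similar samples.
    negs = negs[~fn_mask]
    return -np.log(pos / (pos + negs.sum()))

# Toy usage: masking out a negative shrinks the denominator, lowering the loss.
rng = np.random.default_rng(0)
z_a = rng.normal(size=8)
z_p = z_a + 0.1 * rng.normal(size=8)          # positive: perturbed view
z_n = rng.normal(size=(4, 8))                 # four candidate negatives
no_mask = np.zeros(4, dtype=bool)
one_masked = np.array([True, False, False, False])
loss_plain = info_nce_with_fn_cancellation(z_a, z_p, z_n, no_mask)
loss_cancel = info_nce_with_fn_cancellation(z_a, z_p, z_n, one_masked)
```

Removing any candidate from the denominator can only reduce the loss, which is the intuition behind cancellation: false negatives inflate the denominator and produce a gradient that wrongly separates semantically similar samples.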