Jan. 4, 2022, 9:10 p.m. | Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, Maryam Khademi

cs.CV updates on arXiv.org

Self-supervised representation learning has made significant leaps fueled by
progress in contrastive learning, which seeks to learn transformations that
embed positive input pairs nearby, while pushing negative pairs far apart.
While positive pairs can be generated reliably (e.g., as different views of the
same image), it is difficult to accurately establish negative pairs, defined as
samples from different images regardless of their semantic content or visual
features. A fundamental problem in contrastive learning is mitigating the
effects of false negatives. …
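To make the positive/negative mechanics concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss (in the spirit of NT-Xent from SimCLR). The function name `info_nce_loss`, the use of cross-view negatives only, and the temperature value are illustrative assumptions, not the paper's method.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.1):
    """InfoNCE-style contrastive loss for one batch (illustrative sketch).

    z_i, z_j: (N, D) embeddings of two augmented views of the same N images.
    Row k of z_i and row k of z_j form the positive pair; every other row is
    treated as a negative purely because it comes from a different image.
    """
    # L2-normalize so dot products become cosine similarities.
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)

    logits = z_i @ z_j.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability; a
                                                 # per-row shift cancels below
    # Row-wise log-softmax: log p(positive | all candidates in the batch).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; maximize their log-probability.
    return -np.mean(np.diag(log_prob))

# Toy usage: 8 images, 2 views each, 32-dim embeddings.
rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 32))
z_b = z_a + 0.05 * rng.normal(size=(8, 32))      # second views of the same images
print(info_nce_loss(z_a, z_b))
```

Note how the false-negative problem shows up here: if two different images in the batch happen to share semantic content, their embeddings still appear in each other's softmax denominators as negatives, so the loss actively pushes them apart. Mitigating that effect is precisely what the paper targets.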
