Aug. 15, 2022, 1:10 a.m. | Kyungmin Lee, Jinwoo Shin

cs.LG updates on arXiv.org

Contrastive representation learning seeks to acquire useful representations
by estimating the shared information between multiple views of data. Here, the
quality of the learned representations is sensitive to the choice of data
augmentation: as harder data augmentations are applied, the views share more
task-relevant information, but also more task-irrelevant information that can
hinder the generalization capability of the representations. Motivated by this,
we present a new robust contrastive learning scheme, coined RényiCL, which can
effectively manage harder augmentations by utilizing Rényi divergence. …
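The abstract does not spell out the RényiCL objective itself, but the standard InfoNCE-style contrastive loss it builds on can be sketched as follows. This is a minimal NumPy sketch under my own assumptions (function name, temperature value, and batch shapes are illustrative); RényiCL's Rényi-divergence variant of this objective is not reproduced here.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Generic InfoNCE contrastive loss between two batches of view
    embeddings; row i of z1 and row i of z2 are the positive pair."""
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: maximize their log-probability.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Two nearly identical views (mild "augmentation") vs. unrelated views.
loss_aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = info_nce(z, rng.normal(size=(8, 16)))
```

Intuitively, harder augmentations push the positive pairs further apart, which is where the abstract's trade-off between task-relevant and task-irrelevant shared information arises.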

