Web: http://arxiv.org/abs/2206.10137

June 23, 2022, 1:13 a.m. | Ali Lotfi Rezaabad, Sidharth Kumar, Sriram Vishwanath, Jonathan I. Tamir

cs.CV updates on arXiv.org

Contrastive self-supervised learning methods learn to map data points such as images into a non-parametric representation space without requiring labels. While highly successful, current methods require a large amount of data during training. In settings where the target training set is limited in size, generalization is known to be poor. Pretraining on a large source dataset and fine-tuning on the target samples is prone to overfitting in the few-shot regime, where only a small number of target samples …
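For context, contrastive methods of this family typically pull together two augmented views of the same image and push apart views of different images, most commonly via an InfoNCE-style (NT-Xent) objective. The sketch below is illustrative only and is not this paper's loss; the function name `info_nce_loss` and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent / InfoNCE loss over two views of a batch of images.

    z1, z2: (batch, dim) embeddings of two augmented views of the
    same batch. Illustrative sketch; not this paper's exact objective.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # (2B, dim)
    sim = z @ z.t() / temperature         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))     # exclude self-similarity
    b = z1.shape[0]
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# Example: random embeddings standing in for an encoder's outputs.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```

Note that no label enters the loss: the only supervision is the pairing induced by data augmentation, which is what lets these methods learn a representation space without annotations.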

arxiv, cv, domain adaptation, representation learning, unsupervised
