Web: http://arxiv.org/abs/2111.07991

June 23, 2022, 1:11 a.m. | Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer

cs.LG updates on arXiv.org

This paper presents contrastive-tuning, a simple method employing contrastive
training to align image and text models while still taking advantage of their
pre-training. In our empirical study we find that locked pre-trained image
models with unlocked text models work best. We call this instance of
contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model
to read out good representations from a pre-trained image model for new tasks.
A LiT model gains the capability of zero-shot transfer to new vision …
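As a rough illustration of the recipe described above, here is a minimal sketch of one LiT-style contrastive-tuning step in PyTorch. The function and variable names (lit_contrastive_step, image_tower, text_tower) and the use of a symmetric InfoNCE loss are assumptions made for illustration; the paper's own implementation details may differ. The key point it shows is that the pre-trained image tower is locked (no gradients) while the text tower is trained to align with it.

# Minimal LiT-style contrastive-tuning sketch (assumed names, not the paper's code).
# `image_tower` is a pre-trained vision model, `text_tower` a text encoder;
# both are assumed to return fixed-size embedding vectors per example.
import torch
import torch.nn.functional as F

def lit_contrastive_step(image_tower, text_tower, images, texts, temperature=0.07):
    # Lock the image tower: no gradients flow into its parameters.
    with torch.no_grad():
        img_emb = image_tower(images)
    txt_emb = text_tower(texts)

    # L2-normalize so the dot product is a cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # Pairwise similarity logits, scaled by a temperature.
    logits = img_emb @ txt_emb.t() / temperature

    # Symmetric contrastive (InfoNCE) loss: matching pairs lie on the diagonal.
    targets = torch.arange(images.size(0), device=logits.device)
    loss = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
    return loss

In practice one would also set requires_grad_(False) on the image tower's parameters and pass only the text tower (and optionally the temperature) to the optimizer, which is what "locked-image" refers to.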

arxiv cv image text transfer
