LiT: Zero-Shot Transfer with Locked-image text Tuning. (arXiv:2111.07991v3 [cs.CV] UPDATED)
Web: http://arxiv.org/abs/2111.07991
June 23, 2022, 1:12 a.m. | Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer
cs.CL updates on arXiv.org
This paper presents contrastive-tuning, a simple method employing contrastive
training to align image and text models while still taking advantage of their
pre-training. In our empirical study we find that locked pre-trained image
models with unlocked text models work best. We call this instance of
contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model
to read out good representations from a pre-trained image model for new tasks.
A LiT model gains the capability of zero-shot transfer to new vision …
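The recipe is simple enough to sketch. Below is a minimal, hypothetical PyTorch rendering of one LiT-style training step, assuming generic image_encoder and text_encoder modules that map a batch to embeddings of the same dimension; the function name, temperature value, and loss arrangement are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def lit_step(image_encoder, text_encoder, images, texts, temperature=0.07):
    """One locked-image contrastive-tuning step (illustrative sketch).

    `image_encoder` is the locked pre-trained tower; `text_encoder` is the
    trainable tower being taught to "read out" its representations.
    In practice you would also call image_encoder.requires_grad_(False)
    and image_encoder.eval() once, before training.
    """
    # Lock the image tower: no gradients flow into its weights.
    with torch.no_grad():
        img_emb = image_encoder(images)            # (B, D)
    txt_emb = text_encoder(texts)                  # (B, D), trainable

    # L2-normalize so dot products are cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # Pairwise similarities; matching image-text pairs sit on the diagonal.
    logits = txt_emb @ img_emb.t() / temperature   # (B, B)
    labels = torch.arange(logits.size(0), device=logits.device)

    # Symmetric InfoNCE: classify the right image per text and vice versa.
    loss = (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
    return loss
```

Because only the text tower receives gradients, the optimizer is given just text_encoder.parameters(), which is what makes the method cheap relative to training both towers from scratch.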