Language Models Can See: Plugging Visual Controls in Text Generation. (arXiv:2205.02655v1 [cs.CV])
Web: http://arxiv.org/abs/2205.02655
May 6, 2022, 1:11 a.m. | Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, Nigel Collier
Source: cs.CL updates on arXiv.org
Generative language models (LMs) such as GPT-2/3 can be prompted to generate
text with remarkable quality. While they are designed for text-prompted
generation, it remains an open question how the generation process could be
guided by modalities beyond text, such as images. In this work, we propose a
training-free framework, called MAGIC (iMAge-Guided text generatIon with CLIP),
for plugging visual controls into the generation process, enabling LMs to
perform multimodal tasks (e.g., image captioning) in a zero-shot manner. …
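The core idea described above can be sketched as a decoding-time interpolation: at each step, candidate next tokens are scored by combining the LM's own confidence with an image-relevance term (in MAGIC, a CLIP similarity between the image and the extended text). The sketch below is a simplified illustration, not the paper's exact scoring rule (which also includes a degeneration penalty from contrastive search); `magic_decode_step`, the toy probabilities, and the stand-in image scores are all hypothetical.

```python
import math

def magic_decode_step(candidates, lm_probs, image_scores, alpha=0.5):
    """Pick the next token by combining the LM probability with an
    image-relevance score (a stand-in for CLIP image-text similarity).

    candidates   : iterable of candidate next tokens
    lm_probs     : dict mapping token -> LM probability
    image_scores : dict mapping token -> image similarity (stand-in for CLIP)
    alpha        : weight of the visual control term (0 = pure LM decoding)
    """
    best, best_score = None, -math.inf
    for tok in candidates:
        # MAGIC-style interpolation: language-model confidence plus
        # a visually grounded control signal.
        score = (1 - alpha) * lm_probs[tok] + alpha * image_scores[tok]
        if score > best_score:
            best, best_score = tok, score
    return best

# Toy example: the LM slightly prefers "cat", but a (hypothetical)
# CLIP similarity to a dog photo flips the choice toward "dog".
candidates = ["cat", "dog"]
lm_probs = {"cat": 0.55, "dog": 0.45}
image_scores = {"cat": 0.2, "dog": 0.9}
print(magic_decode_step(candidates, lm_probs, image_scores, alpha=0.5))
```

Because the visual term only re-ranks candidates at decoding time, no gradient updates are needed, which is what makes this kind of framework training-free.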