Feb. 6, 2024, 5:49 a.m. | Fabian Paischer, Markus Hofmarcher, Sepp Hochreiter, Thomas Adler

cs.LG updates on arXiv.org

Recently, vision-language models like CLIP have advanced the state of the art in a variety of multi-modal tasks, including image captioning and caption evaluation. Many approaches adapt CLIP-style models to a downstream task by training a mapping network between CLIP and a language model. This is costly, as it usually involves computing gradients through large models. We propose a more efficient training protocol that fits a linear mapping between the image and text embeddings of CLIP via a closed-form solution. This …
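To make the idea concrete, here is a minimal sketch of fitting such a linear mapping in closed form, assuming a ridge-regularized least-squares objective over paired CLIP image and caption embeddings. The function name `fit_linear_mapping`, the regularization strength `lam`, and the toy data are illustrative assumptions, not details from the paper; the point is that the solution comes from the normal equations, so no gradient computation through the large models is needed.

```python
import numpy as np

def fit_linear_mapping(image_emb: np.ndarray, text_emb: np.ndarray,
                       lam: float = 1e-3) -> np.ndarray:
    """Closed-form W minimizing ||image_emb @ W - text_emb||^2 + lam * ||W||^2.

    image_emb: (n, d_img) CLIP image embeddings of paired data
    text_emb:  (n, d_txt) CLIP text embeddings of the matching captions
    Returns W with shape (d_img, d_txt).
    """
    d = image_emb.shape[1]
    # Ridge-regularized normal equations: W = (X^T X + lam * I)^{-1} X^T Y
    gram = image_emb.T @ image_emb + lam * np.eye(d)
    return np.linalg.solve(gram, image_emb.T @ text_emb)

# Toy usage with random stand-ins for CLIP embeddings (512-d, as in ViT-B/32).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))  # image embeddings
Y = rng.normal(size=(1000, 512))  # paired caption embeddings
W = fit_linear_mapping(X, Y)
mapped = X @ W                    # image embeddings projected into text space
```

Because the solve is over a d_img-by-d_img system rather than a gradient-descent loop over model weights, fitting is cheap once the embeddings are precomputed.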

