June 21, 2024, 4:50 a.m. | Soumya Suvra Ghosal, Samyadeep Basu, Soheil Feizi, Dinesh Manocha

cs.CV updates on arXiv.org

arXiv:2406.13683v1 Announce Type: new
Abstract: Image-text contrastive models such as CLIP learn transferable and robust representations for zero-shot transfer to a variety of downstream tasks. However, obtaining strong downstream performance requires carefully curated prompts, which can be a tedious engineering task. To address the issue of manual prompt engineering, prompt-tuning is used, where a set of contextual vectors is learned by leveraging information from the training data. Despite their effectiveness, existing prompt-tuning frameworks often lack interpretability, thus …
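For context on the prompt-tuning setup the abstract refers to, here is a minimal CoOp-style sketch (not the paper's method): a small set of learnable context vectors is prepended to frozen class-name token embeddings and optimized with a CLIP-style classification loss while the image and text encoders stay frozen. All names, shapes, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of prompt tuning with learnable context vectors (assumption:
# a CoOp-like setup; the frozen CLIP encoders are omitted for brevity).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnablePrompts(nn.Module):
    def __init__(self, n_ctx: int, ctx_dim: int, class_token_embeds: torch.Tensor):
        super().__init__()
        # n_ctx learnable context vectors, shared across all classes
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # frozen class-name token embeddings: (n_classes, n_name_tokens, ctx_dim)
        self.register_buffer("class_token_embeds", class_token_embeds)

    def forward(self) -> torch.Tensor:
        n_classes = self.class_token_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # per-class prompt = [learned context vectors] + [class-name tokens]
        return torch.cat([ctx, self.class_token_embeds], dim=1)

def prompt_tuning_loss(image_feats, text_feats, labels, logit_scale=100.0):
    # CLIP-style classification loss: cosine similarity between image features
    # and the text features of each class prompt, followed by cross-entropy.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = logit_scale * image_feats @ text_feats.t()  # (batch, n_classes)
    return F.cross_entropy(logits, labels)
```

Only the context vectors (and in this sketch nothing else) receive gradients, so adapting to a new downstream task requires training a few thousand parameters rather than handcrafting prompt text.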

