AAPL: Adding Attributes to Prompt Learning for Vision-Language Models
April 26, 2024, 4:42 a.m. | Gahyeon Kim, Sohee Kim, Seokju Lee
cs.LG updates on arXiv.org
Abstract: Recent advances in large pre-trained vision-language models have demonstrated remarkable performance on zero-shot downstream tasks. Building upon this, recent studies, such as CoOp and CoCoOp, have proposed the use of prompt learning, where context within a prompt is replaced with learnable vectors, leading to significant improvements over manually crafted prompts. However, the performance improvement for unseen classes is still marginal, and to tackle this problem, data augmentation has been frequently used in traditional zero-shot learning …
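The prompt-learning setup the abstract describes (CoOp-style) replaces a hand-crafted template like "a photo of a {class}" with learnable context vectors prepended to the class-name embedding. A minimal sketch of that idea follows; note that the dimensions, the mean-pool "text encoder", and the random features are illustrative stand-ins, not the paper's or CLIP's actual implementation, where the context vectors are optimized through a frozen text encoder.

```python
import numpy as np

# Hypothetical toy dimensions; in CoOp these match CLIP's token embedding size.
rng = np.random.default_rng(0)
n_ctx, dim, n_classes = 4, 8, 3

ctx = rng.normal(size=(n_ctx, dim))            # learnable context vectors, shared across classes
class_emb = rng.normal(size=(n_classes, dim))  # fixed class-name token embeddings

# Each class prompt is [v_1, ..., v_M, class_token] rather than a manual template.
prompts = np.stack(
    [np.concatenate([ctx, class_emb[i : i + 1]], axis=0) for i in range(n_classes)]
)
print(prompts.shape)  # (3, 5, 8): n_classes x (n_ctx + 1) x dim

# Toy scoring: mean-pool each prompt as a stand-in text feature, then
# compare against an image feature by cosine similarity.
text_feat = prompts.mean(axis=1)
text_feat /= np.linalg.norm(text_feat, axis=1, keepdims=True)
img_feat = rng.normal(size=dim)
img_feat /= np.linalg.norm(img_feat)
logits = text_feat @ img_feat   # one similarity score per class
print(logits.shape)  # (3,)
```

Training would backpropagate a classification loss into `ctx` only, which is what lets the learned context outperform manual prompts on seen classes while, as the abstract notes, gains on unseen classes remain marginal.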