AAPL: Adding Attributes to Prompt Learning for Vision-Language Models
April 26, 2024, 4:42 a.m. | Gahyeon Kim, Sohee Kim, Seokju Lee
cs.LG updates on arXiv.org
Abstract: Recent advances in large pre-trained vision-language models have demonstrated remarkable performance on zero-shot downstream tasks. Building upon this, recent studies, such as CoOp and CoCoOp, have proposed the use of prompt learning, where context within a prompt is replaced with learnable vectors, leading to significant improvements over manually crafted prompts. However, the performance improvement for unseen classes is still marginal, and to tackle this problem, data augmentation has been frequently used in traditional zero-shot learning …
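The prompt-learning idea the abstract refers to (as in CoOp) replaces the hand-written context of a prompt such as "a photo of a [class]" with a small set of learnable vectors that are optimized end-to-end while the vision-language model stays frozen. A minimal sketch of that mechanism in PyTorch is below; the class names, dimensions, and random stand-in embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Sketch of CoOp-style prompt learning: n_ctx learnable context
    vectors are prepended to each (frozen) class-name embedding."""

    def __init__(self, n_classes: int, n_ctx: int = 4, dim: int = 512):
        super().__init__()
        # Learnable context vectors, shared across all classes
        # (CoOp's "unified context" variant).
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Stand-in for the frozen class-name token embeddings; in a real
        # setup these come from the text encoder's token embedding table.
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, dim))

    def forward(self) -> torch.Tensor:
        # Broadcast the shared context in front of every class embedding:
        # result has shape (n_classes, n_ctx + 1, dim) and would then be
        # fed through the frozen text encoder.
        n_classes = self.cls_emb.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)

prompts = PromptLearner(n_classes=10)()
print(prompts.shape)  # torch.Size([10, 5, 512])
```

Only `self.ctx` carries gradients, so training updates just `n_ctx * dim` parameters per task, which is what makes prompt learning cheap relative to fine-tuning the full model.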