April 26, 2024, 4:42 a.m. | Gahyeon Kim, Sohee Kim, Seokju Lee

cs.LG updates on arXiv.org

arXiv:2404.16804v1 Announce Type: cross
Abstract: Recent advances in large pre-trained vision-language models have demonstrated remarkable performance on zero-shot downstream tasks. Building upon this, recent studies, such as CoOp and CoCoOp, have proposed the use of prompt learning, where context within a prompt is replaced with learnable vectors, leading to significant improvements over manually crafted prompts. However, the performance improvement for unseen classes is still marginal, and to tackle this problem, data augmentation has been frequently used in traditional zero-shot learning …
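For readers unfamiliar with the prompt-learning idea the abstract describes, here is a minimal PyTorch sketch of CoOp-style learnable context vectors. It assumes a CLIP-like frozen text encoder; the class name `PromptLearner` and parameters `n_ctx` and `class_embeddings` are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Sketch of CoOp-style prompt learning: the manual context
    ("a photo of a ...") is replaced by n_ctx learnable vectors
    that are prepended to each class-name token embedding."""

    def __init__(self, n_ctx: int, ctx_dim: int, class_embeddings: torch.Tensor):
        super().__init__()
        # Learnable context vectors, shared across all classes.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Frozen token embeddings of the class names:
        # shape (n_classes, n_name_tokens, ctx_dim).
        self.register_buffer("cls_emb", class_embeddings)

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_emb.shape[0]
        # Broadcast the shared context to every class and prepend it.
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # Result: (n_classes, n_ctx + n_name_tokens, ctx_dim),
        # ready to be fed through the frozen text encoder.
        return torch.cat([ctx, self.cls_emb], dim=1)
```

In this setup only `self.ctx` receives gradients; the vision-language backbone stays frozen, and the concatenated sequences pass through the frozen text encoder to produce the class embeddings used for zero-shot classification.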
