In-context Prompt Learning for Test-time Vision Recognition with Frozen Vision-language Model
March 12, 2024, 4:47 a.m. | Junhui Yin, Xinyu Zhang, Lin Wu, Xianghua Xie, Xiaojie Wang
cs.CV updates on arXiv.org
Abstract: Existing pre-trained vision-language models, e.g., CLIP, have demonstrated impressive zero-shot generalization across various downstream tasks. However, their performance degrades significantly when test inputs come from a shifted distribution. To this end, we explore the concept of test-time prompt tuning (TTPT), which adapts the CLIP model to a novel downstream task through a single optimization step on an unsupervised objective involving the test sample. Motivated by in-context learning within …
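The TTPT idea described in the abstract — one unsupervised optimization step on a learnable prompt while the vision-language model stays frozen — can be sketched with a toy model. This is an illustrative assumption-laden sketch, not the paper's method: the frozen CLIP encoders are replaced by fixed random weights, the unsupervised objective is taken to be prediction entropy (a common choice in test-time tuning), and the gradient is computed numerically for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for frozen CLIP components (assumption: the real method uses
# CLIP's text/image towers; fixed random weights here only illustrate mechanics).
num_classes, tok_dim, feat_dim = 5, 8, 16
class_tokens = rng.normal(size=(num_classes, tok_dim))  # frozen class embeddings
W = rng.normal(size=(tok_dim, feat_dim))                # frozen "text encoder"
image_feat = rng.normal(size=feat_dim)                  # frozen test-image feature

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(prompt):
    # A learnable prompt vector is added to each class token before the
    # frozen (nonlinear) text encoder, in the spirit of prompt tuning.
    text_feats = np.tanh(class_tokens + prompt) @ W
    p = softmax(text_feats @ image_feat)
    return -(p * np.log(p + 1e-12)).sum()

# A single optimization step on the unsupervised entropy objective for one
# test sample; only the prompt is updated, all model weights stay frozen.
prompt = np.zeros(tok_dim)
eps, lr = 1e-5, 0.01
grad = np.array([
    (entropy(prompt + eps * np.eye(tok_dim)[i])
     - entropy(prompt - eps * np.eye(tok_dim)[i])) / (2 * eps)
    for i in range(tok_dim)
])
before = entropy(prompt)
prompt -= lr * grad
after = entropy(prompt)
print(after <= before)  # the one-step update reduces prediction entropy
```

One small gradient step on the entropy of the frozen model's prediction sharpens the class distribution for that test sample without touching any encoder weights, which is the core mechanic the abstract describes.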