March 12, 2024, 4:47 a.m. | Junhui Yin, Xinyu Zhang, Lin Wu, Xianghua Xie, Xiaojie Wang

cs.CV updates on arXiv.org

arXiv:2403.06126v1 Announce Type: new
Abstract: Existing pre-trained vision-language models, e.g., CLIP, have demonstrated impressive zero-shot generalization capabilities in various downstream tasks. However, the performance of these models will degrade significantly when test inputs present different distributions. To this end, we explore the concept of test-time prompt tuning (TTPT), which enables the adaptation of the CLIP model to novel downstream tasks through only one step of optimization on an unsupervised objective that involves the test sample. Motivated by in-context learning within …
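To make the TTPT idea concrete, here is a minimal sketch of a single test-time optimization step. The paper's exact objective is not shown in the truncated abstract, so this assumes an entropy-minimization loss over augmented views of the test image, as used in prior TTPT work such as TPT; the encoder wrappers `encode_image` and `encode_text_with_prompt` are hypothetical placeholders, not the paper's actual API.

```python
# Sketch: one-step test-time prompt tuning (TTPT) for a single test sample.
# Assumption: an entropy-minimization objective over augmented views, as in
# prior TTPT work (e.g., TPT). Encoder wrappers below are hypothetical.
import torch
import torch.nn.functional as F

def ttpt_one_step(image_views, encode_image, encode_text_with_prompt,
                  prompt_ctx, lr=5e-3):
    """image_views: (V, C, H, W) augmented views of one test image.
    encode_image / encode_text_with_prompt: frozen CLIP encoder wrappers.
    prompt_ctx: learnable prompt context vectors, shape (n_ctx, d).
    """
    # Only the prompt context is updated; CLIP's weights stay frozen.
    prompt_ctx = prompt_ctx.clone().requires_grad_(True)
    optimizer = torch.optim.AdamW([prompt_ctx], lr=lr)

    # Image features need no gradient: the image encoder is frozen.
    with torch.no_grad():
        img_feats = F.normalize(encode_image(image_views), dim=-1)   # (V, d)

    # Text features depend on the learnable prompt, so gradients flow here.
    txt_feats = F.normalize(encode_text_with_prompt(prompt_ctx), dim=-1)  # (K, d)
    logits = 100.0 * img_feats @ txt_feats.t()                       # (V, K)

    # Unsupervised objective: entropy of the prediction averaged over views.
    probs = logits.softmax(dim=-1).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()

    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()   # a single optimization step, per the abstract
    return prompt_ctx.detach()
```

Minimizing prediction entropy pushes the prompt toward class assignments that are confident and consistent across augmented views, which is the standard rationale for unsupervised test-time objectives of this kind.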
