March 6, 2024, 5:45 a.m. | Zheng Li, Xiang Li, Xinyi Fu, Xing Zhang, Weiqiang Wang, Jian Yang

cs.CV updates on arXiv.org

arXiv:2403.02781v1 Announce Type: new
Abstract: Prompt learning has emerged as a valuable technique for enhancing vision-language models (VLMs) such as CLIP on downstream tasks in specific domains. Existing work mainly focuses on designing various forms of prompt learning, neglecting the potential of prompts as effective distillers for learning from larger teacher models. In this paper, we introduce an unsupervised domain prompt distillation framework, which aims to transfer the knowledge of a larger teacher model to a lightweight target model through …
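The truncated abstract does not spell out the training objective, but the core idea of distilling a large teacher into learnable prompt parameters can be sketched. The snippet below is a minimal illustration under assumed CLIP-style mechanics: student logits as image-text cosine similarities against prompt-derived text features, and a softened KL distillation loss. All names here (distillation_loss, tau, the toy tensors) are hypothetical and do not reflect the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      tau: float = 4.0) -> torch.Tensor:
    """Match the student's softened class distribution to the teacher's.

    No ground-truth labels are involved, so this objective can run on
    unlabeled target-domain data (hence "unsupervised" distillation).
    """
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    # Scale by tau^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2

# Toy usage: 8 images, 10 classes. The teacher's logits would come from a
# frozen large VLM; the student's logits are cosine similarities between
# image features and learnable prompt-derived text features (CoOp-style).
teacher_logits = torch.randn(8, 10)                           # frozen teacher output
prompt_text_feats = torch.randn(10, 512, requires_grad=True)  # learnable prompts
image_feats = F.normalize(torch.randn(8, 512), dim=-1)        # student image features
student_logits = image_feats @ F.normalize(prompt_text_feats, dim=-1).t() / 0.07
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the learnable prompt features
```

Because only the prompt parameters receive gradients, the lightweight target model stays cheap to adapt while inheriting the teacher's domain knowledge.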
