March 9, 2024, 1:32 p.m. | /u/zhengli_nku


**Paper:** [https://arxiv.org/abs/2403.02781](https://arxiv.org/abs/2403.02781)

**Project Page:** [https://zhengli97.github.io/PromptKD/](https://zhengli97.github.io/PromptKD/)

**Github:** [https://github.com/zhengli97/PromptKD](https://github.com/zhengli97/PromptKD)




**Highlights:**

(1) A novel two-stage unsupervised prompt distillation framework for Vision-Language Models.

(2) The student reuses the teacher's high-quality pre-stored text features instead of training its own text encoder.

(3) Distillation is performed on large amounts of unlabeled domain images using soft labels provided by the teacher (a rough sketch of this step follows below).

(4) PromptKD outperforms all existing prompt learning methods on 11 diverse recognition datasets.
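To make highlights (2) and (3) concrete, here is a minimal PyTorch sketch of one distillation step, assuming hypothetical `teacher_clip` and `student_image_encoder` modules and pre-extracted, L2-normalized teacher text features. Names, shapes, and the plain KL objective are illustrative assumptions, not the authors' actual API; see the paper and repo for the real implementation.

```python
import torch
import torch.nn.functional as F

def distill_step(images, teacher_clip, student_image_encoder,
                 teacher_text_features, optimizer, temperature=4.0):
    """One unsupervised distillation step on a batch of unlabeled images.

    `teacher_clip` and `student_image_encoder` are assumed modules;
    `teacher_text_features` is an (num_classes, dim) tensor of the frozen
    teacher's L2-normalized class text features, extracted once and reused.
    """
    # Teacher soft labels: the frozen teacher scores images against its
    # own text features, with no gradients.
    with torch.no_grad():
        t_img = F.normalize(teacher_clip.encode_image(images), dim=-1)
        teacher_logits = t_img @ teacher_text_features.t() / temperature

    # The student reuses the *teacher's* text features rather than
    # training its own text encoder (highlight 2).
    s_img = F.normalize(student_image_encoder(images), dim=-1)
    student_logits = s_img @ teacher_text_features.t() / temperature

    # Match the student's distribution to the teacher's soft labels
    # over unlabeled images (highlight 3).
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean") * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher's text features are fixed, only the student's image side (e.g., its learnable prompts) receives gradients, which is what makes the scheme cheap to run on large unlabeled domain sets.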

**Abstract:**

In this paper, we introduce an unsupervised domain **prompt distillation framework**, which aims to **transfer** …

