On Transferability of Prompt Tuning for Natural Language Processing. (arXiv:2111.06719v2 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
Prompt tuning (PT) is a promising parameter-efficient method for utilizing extremely large pre-trained language models (PLMs): by tuning only a few soft prompts, it can achieve performance comparable to full-parameter fine-tuning. However, PT requires much more training time than fine-tuning. Intuitively, knowledge transfer can help improve this efficiency. To explore whether PT can be improved via prompt transfer, in this work we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs. We find that (1) …
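The core mechanics of prompt tuning, training a small set of soft prompt embeddings while the PLM's weights stay frozen, can be sketched as follows. This is a minimal toy illustration, not the paper's actual setup: the tiny embedding layer and linear head stand in for a real pre-trained model, and all sizes and names are hypothetical.

```python
# Sketch of prompt tuning on a toy stand-in for a PLM:
# freeze all "pre-trained" parameters and train only the soft prompts.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED_DIM, VOCAB, N_PROMPTS = 16, 32, 4

# Stand-in for a frozen pre-trained model: embedding layer + linear head.
embed = nn.Embedding(VOCAB, EMBED_DIM)
head = nn.Linear(EMBED_DIM, VOCAB)
for p in list(embed.parameters()) + list(head.parameters()):
    p.requires_grad_(False)  # PLM weights stay fixed

# The only trainable parameters: a few soft prompt vectors.
soft_prompts = nn.Parameter(torch.randn(N_PROMPTS, EMBED_DIM) * 0.02)
opt = torch.optim.Adam([soft_prompts], lr=1e-2)

tokens = torch.tensor([[3, 7, 11]])        # a toy input sequence
before = soft_prompts.detach().clone()

# Prepend the soft prompts to the input embeddings, then predict.
x = torch.cat([soft_prompts.unsqueeze(0), embed(tokens)], dim=1)
logits = head(x.mean(dim=1))               # toy "pooled" prediction
loss = nn.functional.cross_entropy(logits, torch.tensor([5]))
loss.backward()
opt.step()

prompts_updated = not torch.allclose(before, soft_prompts.detach())
plm_untouched = embed.weight.grad is None and head.weight.grad is None
print(prompts_updated, plm_untouched)
```

After one optimization step, only the soft prompts change while the frozen model receives no gradients; prompt transfer, as studied in the paper, would amount to initializing `soft_prompts` from prompts trained on a related task or model rather than from random noise.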