April 28, 2022, 1:11 a.m. | Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zh

cs.CL updates on arXiv.org

Prompt tuning (PT) is a promising parameter-efficient method for utilizing
extremely large pre-trained language models (PLMs): by tuning only a few soft
prompts, it can achieve performance comparable to full-parameter fine-tuning.
However, PT requires much more training time than fine-tuning. Intuitively,
knowledge transfer could improve its efficiency. To explore whether PT can be
improved via prompt transfer, we empirically investigate the transferability
of soft prompts across different downstream tasks and PLMs in this work. We
find that (1) …

Tags: arxiv, language, natural language processing
