Making Pre-trained Language Models Good Long-tailed Learners. (arXiv:2205.05461v1 [cs.CL])
Web: http://arxiv.org/abs/2205.05461
May 12, 2022, 1:11 a.m. | Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, Dawei Song
Source: cs.CL updates on arXiv.org
Prompt-tuning has shown appealing performance in few-shot classification by
virtue of its capability to effectively exploit pre-trained knowledge. This
motivates us to test the hypothesis that prompt-tuning is also a promising
choice for long-tailed classification, since the tail classes are intuitively
few-shot ones. We conduct empirical studies to examine this hypothesis. The
results demonstrate that prompt-tuning indeed makes pre-trained language models
at least good long-tailed learners. For intuitions on why prompt-tuning can
achieve good performance in …
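As a rough illustration of the kind of prompt-tuning the abstract refers to, the sketch below casts a classification input as a cloze question for a masked language model and reads the prediction off a small verbalizer. The template, label words, and backbone are assumptions chosen for illustration, not the authors' released setup.

```python
# Minimal sketch of prompt-based classification with a masked LM.
# Illustrative only: template, verbalizer words, and model are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Cloze-style template: the model fills [MASK] with a label word.
text = "The plot was predictable and the acting was flat."
prompt = f"{text} It was [MASK]."

# Verbalizer: map label words to class names (hypothetical choice).
verbalizer = {"great": "positive", "terrible": "negative"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    # Vocabulary logits at the [MASK] position.
    logits = model(**inputs).logits[0, mask_pos[0]]

# Score only the label words; the pre-trained MLM head already "knows" these
# words, which is why the approach is attractive for few-shot / tail classes.
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for word, label in verbalizer.items()}
print(max(scores, key=scores.get))
```

In a prompt-tuned variant, the template and verbalizer stay in place while the model (or a small set of prompt parameters) is fine-tuned on the labeled examples, so head and tail classes share the same pre-trained prediction head.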