Web: http://arxiv.org/abs/2205.05461

May 12, 2022, 1:11 a.m. | Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, Dawei Song

cs.LG updates on arXiv.org

Prompt-tuning has shown appealing performance in few-shot classification by
virtue of its capability to effectively exploit pre-trained knowledge. This
motivates us to examine the hypothesis that prompt-tuning is also a promising
choice for long-tailed classification, since the tail classes are intuitively
few-shot ones. We conduct empirical studies to test this hypothesis. The
results demonstrate that prompt-tuning does make pre-trained language models
at least good long-tailed learners. For intuitions
on why prompt-tuning can achieve good performance in …
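
The abstract above is only a summary, but for readers unfamiliar with the technique, here is a minimal sketch of what prompt-tuning-style classification with a masked language model looks like. The template, verbalizer words, and model name are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of prompt-based classification with a masked LM.
# The template, verbalizer words, and model name are illustrative
# assumptions, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Map each class to a single label word (the "verbalizer").
verbalizer = {"positive": "great", "negative": "terrible"}
label_word_ids = {c: tokenizer.convert_tokens_to_ids(w) for c, w in verbalizer.items()}

def classify(text: str) -> str:
    # Wrap the input in a cloze-style template with a [MASK] slot.
    prompt = f"{text} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Score each class by the masked-position logit of its label word.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    mask_logits = logits[0, mask_pos.item()]
    return max(label_word_ids, key=lambda c: mask_logits[label_word_ids[c]].item())

print(classify("The movie was a waste of two hours."))  # expected: "negative"
```

In actual prompt-tuning, the template and/or verbalizer (or continuous prompt embeddings) would be tuned on the training data rather than hand-written as above; the sketch only shows how pre-trained knowledge is queried through a cloze-style prompt.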

