Exploring Universal Intrinsic Task Subspace via Prompt Tuning. (arXiv:2110.07867v2 [cs.CL] UPDATED)
Web: http://arxiv.org/abs/2110.07867
May 13, 2022, 1:11 a.m. | Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
Source: cs.CL updates on arXiv.org
Why can pre-trained language models (PLMs) learn universal representations
and adapt effectively to a broad range of NLP tasks that differ greatly on the
surface? In this work, we find empirical evidence that the adaptations of PLMs
to various few-shot tasks can be reparameterized as optimizing only a few free
parameters in a unified low-dimensional intrinsic task subspace, which may help
explain why PLMs can adapt to diverse NLP tasks with only small-scale data.
To find such a subspace and examine its universality, …
Latest AI/ML/Big Data Jobs
Director, Applied Mathematics & Computational Research Division
@ Lawrence Berkeley National Lab | Berkeley, CA
Business Data Analyst
@ MainStreet Family Care | Birmingham, AL
Assistant/Associate Professor of the Practice in Business Analytics
@ Georgetown University McDonough School of Business | Washington, DC
Senior Data Science Writer
@ NannyML | Remote
Director of AI/ML Engineering
@ Armis Industries | Remote (US only), St. Louis, California
Digital Analytics Manager
@ Patagonia | Ventura, California