Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer. (arXiv:2201.05411v1 [cs.CL])
Jan. 17, 2022, 2:10 a.m. | Yinyi Wei, Tong Mo, Yongtao Jiang, Weiping Li, Wen Zhao
cs.CL updates on arXiv.org (arxiv.org)
Recent advances in prompt-tuning cast few-shot classification tasks as a
masked language modeling problem. By wrapping the input in a template and using
a verbalizer, which constructs a mapping between the label space and the label
word space, prompt-tuning can achieve excellent results in zero-shot and
few-shot scenarios. However, typical prompt-tuning needs a manually designed
verbalizer, which requires domain expertise and human effort. Moreover, an
insufficient label word space may introduce considerable bias into the results.
In this paper, we focus on eliciting knowledge …
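The template-plus-verbalizer setup the abstract describes can be illustrated with a minimal sketch. All names, the template, the label words, and the mock mask-slot scores below are illustrative assumptions, not the paper's implementation; a real setup would obtain the scores from a masked language model.

```python
# Minimal sketch of prompt-tuning with a manually designed verbalizer.
# Hypothetical names throughout; scores are mocked, not from a real MLM.

# Template: wrap the input so classification becomes a masked LM problem.
def wrap(text: str) -> str:
    return f"{text} It was [MASK]."

# Verbalizer: a manual mapping from the label space to the label word space.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def classify(mask_word_scores: dict) -> str:
    """mask_word_scores: word -> MLM probability at the [MASK] position."""
    # Score each label by its label word's probability and take the argmax.
    return max(VERBALIZER, key=lambda lbl: mask_word_scores.get(VERBALIZER[lbl], 0.0))

# Usage with mocked mask-slot probabilities:
prompt = wrap("The movie was a delight.")
scores = {"great": 0.62, "terrible": 0.11}
print(prompt)             # -> The movie was a delight. It was [MASK].
print(classify(scores))   # -> positive
```

This also makes the abstract's criticism concrete: the quality of `VERBALIZER` is hand-chosen, and a small label word space (one word per label here) can bias which labels the model favors.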