MetaPrompting: Learning to Learn Better Prompts. (arXiv:2209.11486v2 [cs.CL] UPDATED)
Sept. 28, 2022, 1:16 a.m. | Yutai Hou, Hongyuan Dong, Xinghao Wang, Bohan Li, Wanxiang Che
cs.CL updates on arXiv.org
Prompting is regarded as one of the most crucial advances in few-shot
natural language processing. Recent research on prompting has moved from
discrete-token-based ``hard prompts'' to continuous ``soft prompts'', which employ
learnable vectors as pseudo prompt tokens and achieve better performance.
Though showing promising prospects, these soft-prompting methods are observed
to rely heavily on good initialization to take effect. Unfortunately, obtaining
a perfect initialization for soft prompts requires an understanding of language
models' inner workings and elaborate design, which is …
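The soft-prompting setup the abstract refers to can be illustrated with a minimal NumPy sketch: a small matrix of learnable vectors ("pseudo prompt tokens") is prepended to the embedded input before it would enter a frozen language model. All dimensions and names here are hypothetical, chosen only for illustration; the initialization of `soft_prompt` is exactly the part the paper argues is hard to get right.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
vocab_size, embed_dim, prompt_len, seq_len = 100, 16, 5, 8

# Frozen token-embedding table of a (pretrained) language model.
token_embeddings = rng.normal(size=(vocab_size, embed_dim))

# Soft prompt: learnable vectors acting as pseudo prompt tokens.
# Its initialization is what soft-prompting methods depend on.
soft_prompt = rng.normal(scale=0.02, size=(prompt_len, embed_dim))

def embed_with_soft_prompt(token_ids):
    """Prepend the soft prompt to the embedded input tokens."""
    embedded = token_embeddings[token_ids]          # (seq_len, embed_dim)
    return np.concatenate([soft_prompt, embedded])  # (prompt_len + seq_len, embed_dim)

ids = rng.integers(0, vocab_size, size=seq_len)
out = embed_with_soft_prompt(ids)
print(out.shape)  # (13, 16)
```

During training, only `soft_prompt` would receive gradient updates while `token_embeddings` (and the rest of the model) stay frozen, which is why a good starting point for those few vectors matters so much.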