May 26, 2022, 1:12 a.m. | Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu

cs.CL updates on arXiv.org

Prompting has shown impressive success in enabling large pretrained language
models (LMs) to perform diverse NLP tasks, especially when only a small amount
of downstream data is available. Automatically finding the optimal prompt for
each task, however, is challenging. Most existing work resorts to tuning soft
prompts (e.g., embeddings), which fall short in interpretability, reusability
across LMs, and applicability when gradients are not accessible. Discrete
prompts, on the other hand, are difficult to optimize and are often created by
"enumeration (e.g., paraphrasing)-then-selection" heuristics …
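The abstract is truncated before the method is described, but the tags below point to a reinforcement-learning approach to discrete prompt optimization. As a rough illustration of that general idea (not the authors' actual method), the sketch below runs REINFORCE over a toy vocabulary: a simple per-position categorical policy samples discrete prompt tokens, receives a reward, and updates its logits with a policy gradient. The vocabulary, reward function, prompt length, and learning rate are all illustrative placeholders; a real setup would score each sampled prompt by the downstream accuracy of a frozen LM.

```python
# A minimal REINFORCE sketch for discrete prompt search (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["classify", "sentiment", "review", "answer", "the", "is"]  # toy vocabulary
PROMPT_LEN = 3

# Policy: an independent categorical distribution per prompt position,
# parameterized directly by logits (a stand-in for a policy network).
logits = np.zeros((PROMPT_LEN, len(VOCAB)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reward(tokens):
    # Placeholder reward: pretend prompts containing "classify sentiment"
    # do well on the task. In practice, replace with the few-shot accuracy
    # of a frozen LM prompted with these tokens.
    return 1.0 if "classify sentiment" in " ".join(tokens) else 0.0

baseline = 0.0
for step in range(500):
    probs = softmax(logits)
    idx = [rng.choice(len(VOCAB), p=probs[i]) for i in range(PROMPT_LEN)]
    r = reward([VOCAB[i] for i in idx])
    baseline = 0.9 * baseline + 0.1 * r      # moving-average reward baseline
    advantage = r - baseline
    for pos, tok in enumerate(idx):
        # grad of log p(tok) w.r.t. logits at this position: onehot - probs
        grad = -probs[pos]
        grad[tok] += 1.0
        logits[pos] += 0.5 * advantage * grad  # gradient ascent on reward

best = [VOCAB[i] for i in softmax(logits).argmax(axis=-1)]
print("Learned prompt:", " ".join(best))
```

Because the policy outputs actual tokens rather than continuous embeddings, the learned prompt stays human-readable and can, in principle, be reused across LMs, which is exactly the gap the abstract identifies in soft-prompt tuning.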

Tags: arxiv, learning, reinforcement learning, text
