Web: http://arxiv.org/abs/2201.11576

Jan. 28, 2022, 2:10 a.m. | Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno

cs.CL updates on arXiv.org arxiv.org

Large pretrained language models (LMs) like BERT have improved performance in
many disparate natural language processing (NLP) tasks. However, fine-tuning
such models requires a large number of training examples for each target task.
At the same time, many realistic NLP problems are "few-shot", without a
sufficiently large training set. In this work, we propose a novel conditional
neural process-based approach for few-shot text classification that learns to
transfer from other diverse tasks with rich annotation. Our key idea is to
represent …
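The abstract is truncated before the method details, but the general conditional-neural-process recipe it names can be sketched for few-shot text classification. The sketch below is a minimal illustration under stated assumptions, not the authors' method: a generic encoder stands in for BERT, mean pooling stands in for whatever aggregation the paper uses, and all class and variable names are hypothetical.

    import torch
    import torch.nn as nn

    class CNPFewShotClassifier(nn.Module):
        """CNP-style sketch: aggregate encoded support examples into
        per-class representations (the CNP "context"), then score
        queries against those representations."""

        def __init__(self, encoder: nn.Module, hidden_dim: int):
            super().__init__()
            self.encoder = encoder            # the paper conditions on a BERT encoder
            self.project = nn.Linear(hidden_dim, hidden_dim)

        def forward(self, support_x, support_y, query_x, num_classes):
            s = self.encoder(support_x)       # [n_support, hidden_dim]
            q = self.encoder(query_x)         # [n_query, hidden_dim]
            # Permutation-invariant aggregation: mean-pool supports per class.
            class_reps = torch.stack([
                self.project(s[support_y == c].mean(dim=0))
                for c in range(num_classes)
            ])                                # [num_classes, hidden_dim]
            return q @ class_reps.t()         # [n_query, num_classes] logits

    # Toy usage with a random stand-in encoder.
    encoder = nn.Linear(32, 64)
    model = CNPFewShotClassifier(encoder, hidden_dim=64)
    support_x = torch.randn(10, 32)
    support_y = torch.arange(5).repeat(2)     # two support examples per class
    query_x = torch.randn(4, 32)
    logits = model(support_x, support_y, query_x, num_classes=5)  # [4, 5]

The mean-pooled context makes the prediction invariant to the ordering of the support set, which is the defining property of CNP-style conditioning and what lets the model transfer across tasks with varying numbers of labeled examples.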

Tags: arxiv, classification, representation, text, text classification
