Web: http://arxiv.org/abs/2205.09229

Sept. 15, 2022, 1:14 a.m. | Canyu Chen, Kai Shu

cs.CL updates on arXiv.org (arxiv.org)

Recent advances in large pre-trained language models (PLMs) have led to
impressive gains on natural language understanding (NLU) tasks with
task-specific fine-tuning. However, directly fine-tuning PLMs relies heavily on
a large amount of labeled instances, which are usually hard to obtain.
Prompt-based tuning of PLMs has proven valuable for various few-shot tasks.
Existing work studying prompt-based tuning for few-shot NLU tasks mainly focuses
on deriving proper label words with a verbalizer or generating prompt templates
for eliciting semantics from PLMs. In …
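To make the two ingredients mentioned above concrete, the following is a minimal sketch of prompt-based classification: a prompt template wraps the input around a [MASK] slot, and a verbalizer maps each class label to a label word whose score at the mask position decides the prediction. The template, verbalizer, and the `mask_word_scores` function are all hypothetical stand-ins (a real system would use a masked language model's logits), not the paper's actual method.

```python
# Minimal sketch of prompt-based classification with a verbalizer.
# The scoring function is a toy stand-in for a masked language model's
# logits at the [MASK] position; all names here are illustrative.

TEMPLATE = "{text} It was [MASK]."                          # prompt template
VERBALIZER = {"positive": "great", "negative": "terrible"}  # label -> label word

def mask_word_scores(prompt):
    """Stand-in for an MLM head: score candidate words for the [MASK] slot.
    Here we simply count sentiment cue words in the prompt (toy heuristic)."""
    pos_cues = sum(w in prompt.lower() for w in ("good", "love", "excellent"))
    neg_cues = sum(w in prompt.lower() for w in ("bad", "hate", "boring"))
    return {"great": pos_cues, "terrible": neg_cues}

def classify(text):
    # Fill the template, then read off each label word's score.
    prompt = TEMPLATE.format(text=text)
    scores = mask_word_scores(prompt)
    # The verbalizer turns label-word scores into class predictions.
    return max(VERBALIZER, key=lambda label: scores[VERBALIZER[label]])

print(classify("I love this movie, excellent acting."))  # -> positive
print(classify("Boring plot, bad pacing."))              # -> negative
```

The key point of the setup is that no new classification head is trained: the PLM's existing masked-word prediction is reused, which is why the approach helps when labeled instances are scarce.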

Tags: arxiv, augmentation, data
