May 25, 2022, 1:12 a.m. | Zihan Wang, Kewen Zhao, Zilong Wang, Jingbo Shang

cs.CL updates on arXiv.org

Fine-tuning pre-trained language models has recently become common practice
in building NLP models for various tasks, especially few-shot tasks. We argue
that under the few-shot setting, formulating fine-tuning to more closely
resemble the pre-training objective can unlock more of the benefits of the
pre-trained language models. In this work, we take few-shot named entity
recognition (NER) as a pilot study, since existing fine-tuning strategies for
NER differ substantially from pre-training. We propose FFF-NER, a novel
few-shot fine-tuning framework for NER. Specifically, …
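The truncated abstract stops before the method details, but the core idea, aligning the fine-tuning objective with the pre-training objective, can be illustrated with a cloze-style formulation. The sketch below is a minimal illustration, not the paper's actual FFF-NER framework; the template and the label-word mapping are assumptions made for the example. It casts entity typing as masked-token prediction with a BERT masked language model, so the fine-tuning signal takes the same form as the pre-training objective.

```python
# Minimal sketch (NOT the paper's FFF-NER method): entity typing as
# masked-token prediction, so fine-tuning mirrors MLM pre-training.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

# Hypothetical verbalizer: each entity type is mapped to a single vocab word.
label_words = {"person": "person", "location": "location", "organization": "organization"}

def score_span(sentence: str, span: str) -> dict:
    """Score entity-type label words for a candidate span via a cloze template."""
    # Hypothetical cloze template; the model fills the type at [MASK].
    text = f"{sentence} {span} is a {tokenizer.mask_token} ."
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the single [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]
    # Read off the logit of each label word at the mask position.
    return {
        label: logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in label_words.items()
    }

print(score_span("Barack Obama visited Paris.", "Paris"))
```

Fine-tuning this formulation on a handful of labeled spans updates the same masked-prediction head the model was pre-trained with, which is the alignment the abstract argues for; the standard alternative of training a fresh token-classification head from scratch has no pre-training counterpart.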

