Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition. (arXiv:2205.11799v1 [cs.CL])
cs.CL updates on arXiv.org
Fine-tuning pre-trained language models has recently become common practice
in building NLP models for various tasks, especially few-shot tasks. We argue
that under the few-shot setting, formulating fine-tuning to more closely
resemble the pre-training objectives should unleash more of the benefits of
pre-trained language models. In this work, we take few-shot named entity
recognition (NER) as a pilot study, since existing fine-tuning strategies for
NER differ substantially from pre-training. We propose FFF-NER, a novel
few-shot fine-tuning framework for NER. Specifically, …