AcTune: Uncertainty-aware Active Self-Training for Semi-Supervised Active Learning with Pretrained Language Models. (arXiv:2112.08787v2 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
While pre-trained language model (PLM) fine-tuning has achieved strong
performance on many NLP tasks, the fine-tuning stage can still demand
substantial labeled data. Recent works have resorted to active fine-tuning to
improve the label efficiency of PLM fine-tuning, but none of them investigates
the potential of unlabeled data. We propose AcTune, a new framework that
leverages unlabeled data to improve the label efficiency of active PLM
fine-tuning. AcTune switches between data annotation and model self-training
based on uncertainty: it …
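The uncertainty-based switching the abstract describes can be illustrated with a minimal sketch: high-uncertainty unlabeled examples are routed to human annotation, while confidently predicted ones receive pseudo-labels for self-training. The entropy measure, the fixed budget, and the confidence threshold below are illustrative assumptions, not the paper's exact procedure.

```python
import math

def entropy(probs):
    """Predictive entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_pool(pred_probs, annotate_k=2, selftrain_threshold=0.3):
    """Rank unlabeled examples by predictive entropy (an assumed
    uncertainty measure): the top-k most uncertain go to annotation;
    the rest are pseudo-labeled for self-training if their entropy
    falls below the (assumed) confidence threshold."""
    scored = sorted(enumerate(pred_probs),
                    key=lambda x: entropy(x[1]), reverse=True)
    to_annotate = [i for i, _ in scored[:annotate_k]]
    to_selftrain = [(i, max(range(len(p)), key=p.__getitem__))  # argmax label
                    for i, p in scored[annotate_k:]
                    if entropy(p) < selftrain_threshold]
    return to_annotate, to_selftrain

# Toy pool of model predictions over two classes.
probs = [[0.5, 0.5], [0.9, 0.1], [0.99, 0.01], [0.6, 0.4]]
annotate, selftrain = split_pool(probs)
# The two near-uniform predictions are sent for annotation; only the
# highly confident example is pseudo-labeled for self-training.
```

A real loop would alternate these two steps over training rounds, re-scoring the pool after each model update.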