all AI news
Low-Resource Multi-Granularity Academic Function Recognition Based on Multiple Prompt Knowledge. (arXiv:2305.03287v1 [cs.CL])
cs.CL updates on arXiv.org
Fine-tuning pre-trained language models (PLMs) such as SciBERT generally requires large amounts of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining fine-tuning data for scientific NLP tasks remains challenging and expensive. Inspired by recent advances in prompt learning, in this paper we propose Mix Prompt Tuning (MPT), a semi-supervised method that alleviates the dependence on annotated data and improves the performance of multi-granularity academic function recognition …
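The truncated abstract does not spell out MPT's internals, but for readers unfamiliar with prompt learning, the sketch below shows the general idea of prompt-based classification with a masked LM such as SciBERT: an input sentence is wrapped in a template with a [MASK] slot, and a verbalizer maps candidate label words to the model's vocabulary. The template, verbalizer, and label set here are illustrative assumptions for academic function recognition, not the paper's actual design.

```python
# Minimal sketch of hard-prompt classification with a masked LM.
# Assumptions: allenai/scibert_scivocab_uncased checkpoint, a hypothetical
# template, and a hypothetical label set {background, method, result}.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

# Verbalizer: each academic-function label maps to one vocabulary token.
VERBALIZER = {"background": "background", "method": "method", "result": "result"}
label_token_ids = {
    lab: tokenizer.convert_tokens_to_ids(tok) for lab, tok in VERBALIZER.items()
}

def classify(sentence: str) -> str:
    # Wrap the input in a prompt template; the [MASK] position is scored
    # against the verbalizer tokens to pick the most likely label.
    prompt = f"{sentence} This sentence describes the [MASK] of the paper."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return max(label_token_ids, key=lambda lab: logits[label_token_ids[lab]].item())

print(classify("We fine-tune SciBERT on 500 labeled sentences."))
```

In prompt tuning proper (as opposed to this frozen-model sketch), the template tokens or continuous prompt embeddings are trained on the small labeled set, which is what lets methods like MPT work with far less annotated data than full fine-tuning.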