June 25, 2024, 4:43 a.m. | Jiawei Liu, Zi Xiong, Yi Jiang, Yongqiang Ma, Wei Lu, Yong Huang, Qikai Cheng

cs.CL updates on arXiv.org

arXiv:2305.03287v2 Announce Type: replace
Abstract: Fine-tuning pre-trained language models (PLMs) such as SciBERT generally requires large amounts of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining fine-tuning data for scientific NLP tasks remains challenging and expensive. Inspired by recent advances in prompt learning, in this paper we propose Mix Prompt Tuning (MPT), a semi-supervised method to alleviate the dependence on annotated data and improve the performance …
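The abstract describes prompt learning, where a cloze-style template lets a masked language model perform classification with little or no task-specific head. The following is a minimal sketch of that general idea, not the authors' MPT implementation: it assumes the Hugging Face transformers library, the public allenai/scibert_scivocab_uncased checkpoint, and a hypothetical two-class verbalizer and template chosen purely for illustration.

```python
# Minimal sketch of prompt-based classification with a masked LM.
# NOT the paper's Mix Prompt Tuning (MPT); template, classes, and
# verbalizer words below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: map each (hypothetical) class to a single label word.
verbalizer = {"method": "method", "result": "result"}
label_token_ids = {
    label: tokenizer.convert_tokens_to_ids(word)
    for label, word in verbalizer.items()
}

def classify(sentence: str) -> str:
    # Wrap the input in a cloze-style template whose [MASK] the PLM fills in.
    prompt = f"{sentence} This sentence describes a {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over the vocabulary
    # Score each class by the logit of its label word at the mask position.
    scores = {label: logits[0, tid].item() for label, tid in label_token_ids.items()}
    return max(scores, key=scores.get)

print(classify("We fine-tune SciBERT with a cloze-style template."))
```

In a semi-supervised setting such as the one the abstract describes, predictions like these over unlabeled scientific text could serve as pseudo-labels alongside a small annotated set; the paper's actual mixing strategy is not reproduced here.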

