May 8, 2023, 12:44 a.m. | Jiawei Liu, Zi Xiong, Yi Jiang, Yongqiang Ma, Wei Lu, Yong Huang, Qikai Cheng

cs.CL updates on arXiv.org

Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally
requires large amounts of annotated data to achieve state-of-the-art
performance on a range of NLP tasks in the scientific domain. However,
obtaining fine-tuning data for scientific NLP tasks remains challenging and
expensive. Inspired by recent advances in prompt learning, in this paper we
propose Mix Prompt Tuning (MPT), a semi-supervised method that alleviates
the dependence on annotated data and improves the performance of
multi-granularity academic function recognition …
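The truncated abstract does not spell out how MPT works, but as a rough illustration of the prompt-learning setup it builds on, below is a minimal sketch of cloze-style prompt classification with a SciBERT masked language model via Hugging Face Transformers. The template, verbalizer words, and label set here are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of prompt-based classification with a masked LM.
# NOTE: this is NOT the authors' MPT implementation; template and
# verbalizer below are assumptions made for demonstration only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "allenai/scibert_scivocab_uncased"  # public SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical verbalizer: label word -> academic-function class.
VERBALIZER = {"background": "background", "method": "method", "result": "result"}

def classify(sentence: str) -> str:
    # Wrap the input in a cloze-style template; the PLM fills the [MASK] slot.
    prompt = f"{sentence} The function of this sentence is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] position and score each label word at that position.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    mask_logits = logits[0, mask_pos, :].squeeze(0)
    scores = {
        label: mask_logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify("We fine-tune SciBERT on 500 labeled abstracts."))
```

In a semi-supervised setting such as the one the abstract describes, a classifier of this kind would typically be tuned on the small labeled set and then used to exploit unlabeled scientific text, but the specifics of how MPT mixes prompts and unlabeled data are not given in the snippet above.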
