Sept. 16, 2022, 1:11 a.m. | Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang

cs.LG updates on arXiv.org

We propose PATRON, a new method that uses prompt-based uncertainty estimation
to select data for fine-tuning pre-trained language models in cold-start
scenarios, i.e., when no initial labeled data are available. In PATRON, we design
(1) a prompt-based uncertainty propagation approach to estimate the importance
of data points and (2) a partition-then-rewrite (PTR) strategy to promote
sample diversity when querying for annotations. Experiments on six text
classification datasets show that PATRON outperforms the strongest cold-start
data selection baselines by up to 6.9%. …
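The two components above can be sketched in miniature: score each unlabeled example by the entropy of a prompt-derived label distribution, then partition the pool and query the most uncertain point per partition. This is a minimal illustrative sketch with synthetic embeddings and probabilities, not PATRON's actual implementation; its uncertainty propagation and the rewrite step of PTR are omitted, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an unlabeled pool: sentence embeddings and
# the label-word distributions a prompted LM would assign to each example.
n_pool, dim, n_classes, budget = 200, 16, 4, 8
embeddings = rng.normal(size=(n_pool, dim))
probs = rng.dirichlet(np.ones(n_classes), size=n_pool)

# (1) Prompt-based uncertainty: entropy of the label-word distribution.
uncertainty = -np.sum(probs * np.log(probs + 1e-12), axis=1)

# (2) Partition for diversity: cluster the pool into `budget` regions
# (plain k-means here), then query the most uncertain point per region.
def kmeans(x, k, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(embeddings, budget)
selected = [
    int(np.flatnonzero(labels == j)[uncertainty[labels == j].argmax()])
    for j in range(budget)
    if np.any(labels == j)
]
print(sorted(selected))  # indices of examples to send for annotation
```

Selecting one point per partition is what keeps the queried batch diverse; picking the top-`budget` uncertain points globally would tend to cluster in one ambiguous region of the embedding space.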

