Black-box Prompt Learning for Pre-trained Language Models. (arXiv:2201.08531v1 [cs.CL])
Web: http://arxiv.org/abs/2201.08531
Jan. 24, 2022, 2:10 a.m. | Shizhe Diao, Xuechun Li, Yong Lin, Zhichao Huang, Tong Zhang
cs.CL updates on arXiv.org
Domain-specific fine-tuning strategies for large pre-trained models have received
considerable attention in recent years. In previously studied settings, the model
architectures and parameters are tunable or at least visible, which we refer to
as white-box settings. This work considers a new scenario in which we have no
access to the pre-trained model except for its outputs given inputs; we call
this problem black-box fine-tuning. To illustrate our approach, we first
formally introduce the black-box setting on text classification, where the …
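The setting described above, in which a model can only be queried for outputs given inputs, suggests derivative-free optimization over discrete prompts. The sketch below is not the paper's algorithm; it is a minimal, hypothetical illustration in which a toy stand-in for an API-only model scores prompts, and simple hill climbing improves the prompt using scores alone, with no access to weights or gradients. All names (`black_box_score`, `optimize_prompt`, the vocabulary, and the trigger words) are invented for this example.

```python
import random

random.seed(0)

# Hypothetical stand-in for an API-only model: we can query an output score
# for (prompt, input) pairs, but we cannot see weights or gradients.
# This toy "model" simply rewards prompts containing task-related words.
def black_box_score(prompt_tokens, text, label):
    trigger_bonus = sum(t in {"sentiment", "review", "opinion"} for t in prompt_tokens)
    base = 0.5 if label in text else 0.3  # toy "probability" of the correct label
    return min(1.0, base + 0.1 * trigger_bonus)

VOCAB = ["the", "sentiment", "movie", "review", "is", "opinion", "classify"]

def optimize_prompt(train_set, length=3, iters=200):
    """Derivative-free hill climbing over discrete prompt tokens."""
    prompt = [random.choice(VOCAB) for _ in range(length)]

    def objective(p):
        # Only black-box queries are used to evaluate a candidate prompt.
        return sum(black_box_score(p, x, y) for x, y in train_set)

    best = objective(prompt)
    for _ in range(iters):
        cand = list(prompt)
        cand[random.randrange(length)] = random.choice(VOCAB)  # mutate one token
        score = objective(cand)
        if score > best:  # keep the mutation only if the queried score improves
            prompt, best = cand, score
    return prompt, best

train = [("great film positive", "positive"), ("bad plot negative", "negative")]
prompt, score = optimize_prompt(train)
print(prompt, round(score, 2))
```

Because the objective here increases monotonically with the number of trigger tokens, hill climbing converges to a prompt made entirely of them; real black-box prompt learning replaces this toy search with more sample-efficient gradient-free estimators.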