Reducing Retraining by Recycling Parameter-Efficient Prompts. (arXiv:2208.05577v1 [cs.CL])
Aug. 12, 2022, 1:11 a.m. | Brian Lester, Joshua Yurtsever, Siamak Shakeri, Noah Constant
cs.CL updates on arXiv.org arxiv.org
Parameter-efficient methods are able to use a single frozen pre-trained large
language model (LLM) to perform many tasks by learning task-specific soft
prompts that modulate model behavior when concatenated to the input text.
However, these learned prompts are tightly coupled to a given frozen model --
if the model is updated, corresponding new prompts need to be obtained. In this
work, we propose and investigate several approaches to "Prompt Recycling"
where a prompt trained on a source model is transformed …
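The abstract describes two mechanics: a learned soft prompt is concatenated ahead of the input embeddings fed to a frozen model, and recycling transforms a source-model prompt for use with a different target model. A minimal sketch of both, using plain Python lists as stand-ins for embedding tensors; all function names, shapes, and the choice of a simple linear recycling transform are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of soft-prompt concatenation and prompt recycling.
# Vectors are plain Python lists; a real implementation would use tensors.

def prepend_soft_prompt(prompt, input_embeds):
    """Concatenate learned prompt vectors ahead of the input embeddings.

    prompt:       list of P vectors (each length d), trained per task.
    input_embeds: list of T vectors (each length d) looked up from the
                  frozen model's embedding table for the input text.
    Returns the (P + T)-length sequence the frozen model would consume.
    """
    return prompt + input_embeds

def recycle_prompt(prompt, transform):
    """Map a source-model prompt into a target model's embedding space.

    One conceivable recycling scheme: apply a linear map, given here as a
    d_target x d_source matrix (nested lists), to each prompt vector.
    """
    return [
        [sum(row[j] * vec[j] for j in range(len(vec))) for row in transform]
        for vec in prompt
    ]
```

Under this sketch, only the small prompt (and, for recycling, the transform) is task- or model-specific; the large model itself stays frozen, which is the parameter-efficiency argument the abstract makes.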