Feb. 16, 2024, 5:43 a.m. | Chengshuai Shi, Kun Yang, Jing Yang, Cong Shen

cs.LG updates on arXiv.org

arXiv:2402.09723v1 Announce Type: cross
Abstract: The remarkable instruction-following capability of large language models (LLMs) has sparked a growing interest in automatically learning suitable prompts. However, while many effective methods have been proposed, the cost incurred during the learning process (e.g., accessing the LLM and evaluating its responses) has not been considered. To overcome this limitation, this work explicitly incorporates a finite budget constraint into prompt learning. Towards developing principled solutions, a novel connection is established between prompt learning and fixed-budget best …
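To make the framing concrete, below is a minimal sketch of how prompt selection can be cast as fixed-budget best arm identification: each candidate prompt is an arm, each LLM query plus response evaluation consumes one unit of budget, and a sequential-halving allocation spends the budget to return a single prompt. This is an illustrative example only, not the method proposed in the paper; `evaluate_prompt` is a hypothetical placeholder for an actual LLM call and scoring step.

```python
import math
import random


def evaluate_prompt(prompt: str) -> float:
    """Hypothetical scorer: query the LLM once with `prompt` and
    return a reward in [0, 1]. Placeholder for a real LLM access
    plus response evaluation, each of which costs budget."""
    return random.random()  # stand-in reward


def sequential_halving(prompts: list[str], budget: int) -> str:
    """Fixed-budget best arm identification via sequential halving:
    split the budget across log2(K) rounds, evaluate the surviving
    prompts equally often, and keep the empirically better half."""
    survivors = list(prompts)
    rounds = max(1, math.ceil(math.log2(len(prompts))))
    per_round = budget // rounds
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        pulls = max(1, per_round // len(survivors))
        scores = {
            p: sum(evaluate_prompt(p) for _ in range(pulls)) / pulls
            for p in survivors
        }
        survivors = sorted(survivors, key=scores.get, reverse=True)[
            : max(1, len(survivors) // 2)
        ]
    return survivors[0]


if __name__ == "__main__":
    candidates = [f"Instruction variant {i}" for i in range(8)]
    best = sequential_halving(candidates, budget=160)
    print("Selected prompt:", best)
```

The key point of the reduction is that the learner is judged only on the prompt it finally recommends once the evaluation budget is exhausted, which is exactly the fixed-budget (rather than fixed-confidence) pure-exploration setting.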

