April 24, 2024, 4:45 a.m. | Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang

cs.CV updates on arXiv.org

arXiv:2404.14716v1 Announce Type: cross
Abstract: Large language models (LLMs) can adapt to new tasks through in-context learning (ICL), using a few examples presented in the dialogue history without any update to model parameters. Despite this convenience, ICL performance depends heavily on the quality of the in-context examples presented, which makes the example selection approach a critical choice. This paper proposes a novel Bayesian in-Context example Selection method (ByCS) for ICL. Extending the inference probability conditioned on in-context examples …
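
The abstract is cut off before the selection criterion is fully stated, so the following is only a minimal sketch of Bayesian-style example scoring for ICL, not the authors' released ByCS implementation. It assumes a hypothetical log_prob(prompt, continuation) oracle that a real system would back with an LLM's token log-likelihoods; select_examples, the prompt format, and the toy scorer in the usage stub are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): rank candidate in-context
# examples by how probable each example's output is when the test input
# is placed in the conditioning context, in the spirit of the Bayesian
# "inference probability conditioned on in-context examples" the
# abstract mentions.
from typing import Callable, List, Tuple

def select_examples(
    test_input: str,
    candidates: List[Tuple[str, str]],       # (example input, example output) pairs
    log_prob: Callable[[str, str], float],   # assumed LLM scoring oracle
    k: int = 4,
) -> List[Tuple[str, str]]:
    """Return the k candidates whose outputs score highest when
    conditioned on the test input. Higher scores suggest the example
    is consistent with the test instance under the model."""
    scored = []
    for ex_in, ex_out in candidates:
        # Condition on the test input, then score the candidate's output.
        prompt = f"{test_input}\n{ex_in} ->"
        scored.append((log_prob(prompt, f" {ex_out}"), (ex_in, ex_out)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [ex for _, ex in scored[:k]]

if __name__ == "__main__":
    # Toy stand-in for an LLM log-likelihood; scores nothing meaningful,
    # it only makes the sketch runnable end to end.
    def toy_log_prob(prompt: str, continuation: str) -> float:
        return -abs(len(prompt) - 10 * len(continuation))

    cands = [("2+2", "4"), ("3+5", "8"), ("capital of France", "Paris")]
    print(select_examples("7+1", cands, toy_log_prob, k=2))
```

In a real implementation the oracle would come from the LLM used for ICL itself, so the selection criterion and the downstream inference share the same model distribution.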
