April 24, 2024, 4:45 a.m. | Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang

cs.CV updates on arXiv.org

arXiv:2404.14716v1 Announce Type: cross
Abstract: Large language models (LLMs) can adapt to new tasks through in-context learning (ICL) based on a few examples presented in the dialogue history, without any update to the model parameters. Despite this convenience, ICL performance depends heavily on the quality of the in-context examples presented, making the choice of example selection method critical. This paper proposes a novel Bayesian in-Context example Selection method (ByCS) for ICL. Extending the inference probability conditioned on in-context examples …
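
The abstract cuts off before the method details, so the sketch below only illustrates the general shape of Bayesian-flavored in-context example selection: score each candidate example with a model-based likelihood and keep the top-scoring ones for the prompt. The `log_prob` interface, the "inverse" prompt format, and the top-k rule are all illustrative assumptions, not the paper's actual criterion.

```python
from typing import Callable, List, Tuple

# Hypothetical scoring interface (an assumption, not from the paper):
# log_prob(prompt, continuation) returns the LM's log-probability of
# `continuation` given `prompt`; any scoring-capable LLM could back it.
LogProbFn = Callable[[str, str], float]

def select_in_context_examples(
    candidates: List[Tuple[str, str]],  # candidate (input, output) pairs
    test_input: str,
    log_prob: LogProbFn,
    k: int = 4,
) -> List[Tuple[str, str]]:
    """Rank candidates with a Bayesian-style score and keep the top k.

    Placeholder criterion: condition on the test input first (an "inverse"
    direction), then measure how confidently the model reproduces each
    candidate's output. Candidates the model infers well are assumed to be
    the most informative examples to place in the prompt.
    """
    def score(example: Tuple[str, str]) -> float:
        ex_input, ex_output = example
        prompt = f"Input: {test_input}\nInput: {ex_input}\nOutput: "
        return log_prob(prompt, ex_output)

    return sorted(candidates, key=score, reverse=True)[:k]

if __name__ == "__main__":
    # Toy stand-in for a real LM score, just to make the sketch runnable:
    # counts character overlap between the prompt and the continuation.
    def toy_log_prob(prompt: str, continuation: str) -> float:
        p = prompt.lower()
        return float(sum(p.count(ch) for ch in set(continuation.lower())))

    pool = [("2+2", "4"), ("capital of France", "Paris"), ("3+5", "8")]
    # The capital-city example ranks first for a capital-city test input.
    print(select_in_context_examples(pool, "capital of Japan", toy_log_prob, k=2))
```

In practice `log_prob` would wrap a real model's token log-likelihoods, and the paper may aggregate scores differently; the top-k rule here is simply the most basic plug-in selection.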
