April 12, 2024, 4:47 a.m. | Quanyu Long, Yin Wu, Wenya Wang, Sinno Jialin Pan

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.07546v1 Announce Type: new
Abstract: In-context Learning (ICL) has emerged as a powerful capability alongside the development of scaled-up large language models (LLMs). By instructing LLMs using few-shot demonstrative examples, ICL enables them to perform a wide range of tasks without updating millions of parameters. However, the precise contributions of demonstrations towards improving end-task performance have not been thoroughly investigated in recent analytical studies. In this paper, we empirically decompose the overall performance of ICL into three dimensions, label space, …

