Feb. 9, 2024, 5:47 a.m. | Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zheng, Swaroop Mishra, Yiming Yang, Niket Tandon, Uri Al

cs.CL updates on arXiv.org

In-context learning (ICL, also known as few-shot prompting) has been the standard method of adapting LLMs to downstream tasks by learning from a few input-output examples. Nonetheless, all ICL-based approaches only learn from correct input-output pairs. In this paper, we revisit this paradigm and learn more from the few given input-output examples. We introduce Learning Principles (LEAP): first, we intentionally induce the model to make mistakes on these few examples; then we reflect on these mistakes and learn explicit task-specific …
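The abstract only sketches the LEAP loop at a high level. Below is a minimal Python sketch of how such a mistake-then-reflect prompting pipeline might look, assuming a generic `complete(prompt)` wrapper around an arbitrary chat LLM; the helper names and prompt wording are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a LEAP-style loop: induce mistakes on the few-shot
# examples, reflect on them to extract principles, then prompt with those
# principles at test time. `complete` is an assumed placeholder for any LLM API.

def complete(prompt: str) -> str:
    """Placeholder for a chat-LLM call (OpenAI, Anthropic, local model, ...)."""
    raise NotImplementedError

def learn_principles(few_shot_examples):
    """Deliberately elicit mistakes, then distill explicit task principles."""
    principles = []
    for question, gold_answer in few_shot_examples:
        # Step 1: sample a zero-shot attempt that may well be wrong.
        attempt = complete(f"Answer the question.\nQ: {question}\nA:")
        # Step 2: contrast the attempt with the gold answer and ask for a
        # general rule that would have avoided the mistake.
        principle = complete(
            "Compare the attempted answer with the correct answer and state a "
            "general, task-specific principle that avoids this mistake.\n"
            f"Q: {question}\nAttempt: {attempt}\nCorrect: {gold_answer}\nPrinciple:"
        )
        principles.append(principle.strip())
    return principles

def answer_with_principles(question, few_shot_examples, principles):
    """Apply the learned principles alongside the original examples at test time."""
    examples_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot_examples)
    principles_block = "\n".join(f"- {p}" for p in principles)
    prompt = (
        f"Principles to follow:\n{principles_block}\n\n"
        f"Examples:\n{examples_block}\n\n"
        f"Q: {question}\nA:"
    )
    return complete(prompt)
```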

Tags: context, cs.ai, cs.cl, examples, few-shot, in-context learning, input-output, learn, learning from mistakes, llms, mistakes, paper, paradigm, prompting, standard, tasks
