Feb. 9, 2024, 5:47 a.m. | Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zheng, Swaroop Mishra, Yiming Yang, Niket Tandon, Uri Al

cs.CL updates on arXiv.org

In-context learning (ICL, also known as few-shot prompting) has been the standard method of adapting LLMs to downstream tasks: the model learns from a few input-output examples. However, all ICL-based approaches learn only from correct input-output pairs. In this paper, we revisit this paradigm by learning more from the few given input-output examples. We introduce Learning Principles (LEAP): first, we intentionally induce the model to make mistakes on these few examples; then we reflect on these mistakes and learn explicit, task-specific …
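The procedure the abstract describes can be sketched roughly as a prompting pipeline. The snippet below is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for a real LLM API, and the exact prompt wording is an assumption.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (stubbed for illustration)."""
    return "PRINCIPLE: re-check each intermediate step before answering."

def leap_prompt(examples: list[tuple[str, str]], question: str) -> str:
    """Build a LEAP-style prompt from a few (input, gold_output) examples."""
    principles = []
    for x, y in examples:
        # 1) Deliberately induce a zero-shot (possibly mistaken) attempt.
        attempt = call_llm(f"Solve step by step:\n{x}")
        # 2) Reflect: contrast the attempt with the gold answer and ask the
        #    model to articulate an explicit, task-specific principle.
        principle = call_llm(
            f"Problem: {x}\nYour answer: {attempt}\nCorrect answer: {y}\n"
            "State one general principle that would avoid this mistake."
        )
        principles.append(principle)
    # 3) Prepend the learned principles to the usual few-shot prompt.
    shots = "\n".join(f"Q: {x}\nA: {y}" for x, y in examples)
    return (
        "Principles:\n" + "\n".join(principles) + "\n\n"
        + shots + f"\nQ: {question}\nA:"
    )
```

In this sketch the few-shot examples are used twice: once to elicit and correct mistakes, and again as ordinary demonstrations in the final prompt.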

