A Study on the Calibration of In-context Learning
March 29, 2024, 4:43 a.m. | Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Himabindu Lakkaraju, Sham Kakade
cs.LG updates on arXiv.org arxiv.org
Abstract: Accurate uncertainty quantification is crucial for the safe deployment of machine learning models, and prior research has demonstrated improvements in the calibration of modern language models (LMs). We study in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examine the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, with an increasing number of ICL examples, models …
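The calibration the abstract refers to is usually measured with expected calibration error (ECE): predictions are binned by model confidence, and the gap between each bin's average confidence and its empirical accuracy is averaged, weighted by bin size. A minimal sketch of that metric (an illustration only, not the paper's evaluation code; the function name and toy inputs are assumptions):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average |accuracy - confidence|,
    weighted by the fraction of samples in each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Map a confidence in [0, 1] to a bin index; clamp 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece


# Toy example: four predictions with stated confidences and correctness.
confs = [0.95, 0.90, 0.60, 0.55]
hits = [True, True, False, True]
ece = expected_calibration_error(confs, hits)
print(round(ece, 4))  # 0.3
```

A perfectly calibrated model (e.g. 90% confidence on items it gets right 90% of the time) would score an ECE near zero; the paper's finding concerns how this gap behaves as the number of in-context examples grows.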