A Study on the Calibration of In-context Learning
March 29, 2024, 4:43 a.m. | Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Himabindu Lakkaraju, Sham Kakade
cs.LG updates on arXiv.org
Abstract: Accurate uncertainty quantification is crucial for the safe deployment of machine learning models, and prior research has demonstrated improvements in the calibration of modern language models (LMs). We study in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examine the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, with an increasing number of ICL examples, models …
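The abstract centers on calibration, i.e. how well a model's stated confidence matches its actual accuracy. A standard way to quantify this (not taken from the paper itself, just a common metric in the calibration literature) is the Expected Calibration Error (ECE), which bins predictions by confidence and averages the gap between confidence and accuracy per bin. A minimal sketch:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: weighted average gap between mean
    confidence and accuracy over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; predictions with confidence 0 are
        # ignored, which is harmless for softmax-style confidences > 0
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# A model that says "80% confident" and is right 80% of the time
# in that bin is perfectly calibrated (ECE = 0):
ece = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)
```

Under this metric, the trade-off the paper studies would show up as ECE rising even while task accuracy improves with more in-context examples.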