Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
April 25, 2024, 5:44 p.m. | Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
cs.CL updates on arXiv.org arxiv.org
Abstract: Large language models (LLMs) are highly capable at many tasks, but they can sometimes generate unreliable or inaccurate outputs. To tackle this issue, this paper studies the problem of uncertainty estimation and calibration for LLMs. We begin by formulating the uncertainty estimation problem for LLMs and then propose a supervised approach that takes advantage of labeled datasets to estimate the uncertainty in the LLMs' responses. Based on the formulation, we illustrate the difference between …
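The abstract does not specify the estimator, but the core idea of a supervised approach can be sketched: featurize each LLM response with labeled (features, correct/incorrect) pairs and fit a simple classifier whose predicted error probability serves as the uncertainty score. The features below (a mean log-probability-like signal) and the logistic-regression choice are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

# Hypothetical sketch of a supervised uncertainty estimator for LLM responses.
# Assumption: each response is summarized by white-box features (e.g. mean token
# log-probability, entropy), and correctness labels come from a labeled dataset.

rng = np.random.default_rng(0)

# Synthetic labeled data: 2 features per response, binary correctness label.
# Feature 0 plays the role of a confidence-like signal (higher -> more likely correct).
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic regression by plain gradient descent on the labeled pairs.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)

def uncertainty(features):
    """Estimated probability that the LLM's response is wrong."""
    return 1.0 - sigmoid(features @ w + b)

# In-sample accuracy of the correctness classifier.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

At query time, `uncertainty(features)` gives a calibrated-style error score for a new response; a low score for high-confidence features and a high score for low-confidence ones is the behavior a supervised estimator of this kind is trained to produce.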