SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models
March 6, 2024, 5:47 a.m. | Xiang Gao, Jiaxin Zhang, Lalla Mouatadid, Kamalika Das
Source: cs.CL updates on arXiv.org
Abstract: In recent years, large language models (LLMs) have become increasingly prevalent, offering remarkable text generation capabilities. However, a pressing challenge is their tendency to make confidently wrong predictions, highlighting the critical need for uncertainty quantification (UQ) in LLMs. While previous works have mainly focused on addressing aleatoric uncertainty, the full spectrum of uncertainties, including epistemic, remains inadequately explored. Motivated by this gap, we introduce a novel UQ method, sampling with perturbation for UQ (SPUQ), designed …
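The excerpt cuts off before the method details, but the name "sampling with perturbation for UQ" suggests the general recipe: perturb the input, sample the model on each perturbed variant, and treat agreement across samples as a confidence signal. A minimal sketch of that idea, with a hypothetical `perturb` template scheme and a toy stand-in model (neither is from the paper):

```python
def perturb(prompt, n=5):
    """Generate n paraphrase-style variants of a prompt.

    These prefix templates are purely illustrative; the paper's
    actual perturbation scheme is not shown in this excerpt.
    """
    templates = [
        "{}",
        "Please answer: {}",
        "Question: {}",
        "Could you tell me: {}",
        "Answer briefly: {}",
    ]
    return [templates[i % len(templates)].format(prompt) for i in range(n)]

def agreement_score(model, prompt, n=5):
    """Confidence proxy: fraction of perturbed prompts whose answer
    matches the answer to the original prompt (higher = more stable,
    hence more confident). A real setup would also sample the decoder."""
    base = model(prompt)
    answers = [model(p) for p in perturb(prompt, n)]
    return sum(a == base for a in answers) / len(answers)

# Toy deterministic "model" for illustration only.
def toy_model(prompt):
    return "4" if "2+2" in prompt else "unsure"

conf = agreement_score(toy_model, "What is 2+2?", n=5)
print(conf)  # 1.0: every perturbed prompt yields the same answer
```

In practice the stand-in `toy_model` would be replaced by calls to an LLM sampled at nonzero temperature, and exact-match agreement by a softer similarity measure for free-form text.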