April 16, 2024, 4:51 a.m. | Ruixin Yang, Dheeraj Rajagopal, Shirley Anugrah Hayati, Bin Hu, Dongyeop Kang

cs.CL updates on arXiv.org

arXiv:2404.09127v1 Announce Type: new
Abstract: Uncertainty estimation is a significant issue for current large language models (LLMs), which are generally poorly calibrated and over-confident, especially after reinforcement learning from human feedback (RLHF). Unlike humans, whose decisions and confidence not only stem from intrinsic beliefs but can also be adjusted through daily observations, existing calibration methods for LLMs focus on estimating or eliciting individual confidence without taking full advantage of the "Collective Wisdom": the interaction among multiple LLMs that can collectively …
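To make the "Collective Wisdom" idea concrete, here is a minimal sketch of combining confidence estimates from several LLM "agents" rather than trusting a single model's self-reported confidence. This is not the paper's method; the `AgentResponse` structure, the hard-coded agent outputs, and the confidence-weighted voting scheme are all illustrative assumptions standing in for real model calls and for whatever deliberation protocol the paper actually proposes.

```python
# Illustrative sketch only: combining self-reported confidences from multiple
# hypothetical agents via a confidence-weighted vote. Not the paper's method.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class AgentResponse:
    answer: str        # the agent's proposed answer
    confidence: float  # the agent's self-reported confidence in [0, 1]


def aggregate(responses: list[AgentResponse]) -> tuple[str, float]:
    """Confidence-weighted vote over agent answers (hypothetical scheme).

    Each candidate answer accumulates the confidence of the agents that
    proposed it; the winning answer's group-level confidence is its share
    of the total accumulated weight.
    """
    weights: dict[str, float] = defaultdict(float)
    for r in responses:
        weights[r.answer] += r.confidence
    total = sum(weights.values()) or 1.0
    best = max(weights, key=weights.get)
    return best, weights[best] / total


if __name__ == "__main__":
    # Stand-in outputs from three hypothetical agents answering the same question.
    responses = [
        AgentResponse("Paris", 0.9),
        AgentResponse("Paris", 0.7),
        AgentResponse("Lyon", 0.8),
    ]
    answer, conf = aggregate(responses)
    print(f"group answer: {answer}, group confidence: {conf:.2f}")
```

The point of the sketch is only that group-level confidence reflects agreement among agents: unanimous, high-confidence answers score near 1.0, while split votes are discounted, which is the intuition behind calibrating via interaction rather than a single model's own estimate.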
