April 15, 2024, 4:43 a.m. | Jiayi Huang, Sangwoo Park, Osvaldo Simeone

cs.LG updates on arXiv.org

arXiv:2305.07504v2 Announce Type: replace
Abstract: Deep learning models, including modern systems such as large language models, are well known to provide unreliable estimates of the uncertainty of their decisions. To improve the quality of a model's confidence levels, also known as its calibration, common approaches add either data-dependent or data-independent regularization terms to the training loss. Data-dependent regularizers have recently been introduced in the context of conventional frequentist learning to penalize deviations between confidence and …
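
The abstract describes the general recipe of augmenting the training loss with a data-dependent calibration regularizer that penalizes gaps between confidence and accuracy. The sketch below is only a rough illustration of that recipe, not the paper's actual formulation: the function name `calibration_regularized_loss`, the weight `lam`, and the particular squared-gap penalty are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def calibration_regularized_loss(logits, targets, lam=0.1):
    """Cross-entropy plus a simple data-dependent calibration penalty.

    The penalty is a crude differentiable proxy that pushes the mean
    predicted confidence toward the batch accuracy. It is an illustrative
    stand-in for the data-dependent regularizers mentioned in the abstract,
    not the method proposed in the paper.
    """
    ce = F.cross_entropy(logits, targets)

    probs = logits.softmax(dim=-1)
    confidence, predictions = probs.max(dim=-1)   # per-example top confidence
    accuracy = (predictions == targets).float()   # per-example correctness

    # Squared gap between average confidence and average accuracy on the batch;
    # gradients flow through the confidence term only.
    calib_penalty = (confidence.mean() - accuracy.mean()).pow(2)

    return ce + lam * calib_penalty

# Example usage with a hypothetical model and batch (x, y):
# logits = model(x)
# loss = calibration_regularized_loss(logits, y, lam=0.1)
# loss.backward()
```

Larger values of `lam` trade classification accuracy for better-matched confidence; in practice the weight would be tuned on a validation set.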

