May 6, 2024, 4:43 a.m. | Margarida M. Campos, António Farinhas, Chrysoula Zerva, Mário A. T. Figueiredo, André F. T. Martins

cs.LG updates on arXiv.org

arXiv:2405.01976v1 Announce Type: cross
Abstract: The rapid proliferation of large language models and natural language processing (NLP) applications creates a crucial need for uncertainty quantification to mitigate risks such as hallucinations and to enhance decision-making reliability in critical applications. Conformal prediction is emerging as a theoretically sound and practically useful framework, combining flexibility with strong statistical guarantees. Its model-agnostic and distribution-free nature makes it particularly promising to address the current shortcomings of NLP systems that stem from the absence of …
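For readers unfamiliar with the framework, below is a minimal sketch of split conformal prediction for a classification setting, which illustrates the model-agnostic, distribution-free guarantee the abstract refers to. The function and variable names are illustrative, and the nonconformity score (one minus the probability assigned to the true label) is just one common choice, not necessarily the one used in the surveyed methods.

```python
import numpy as np

def conformal_prediction_sets(calib_probs, calib_labels, test_probs, alpha=0.1):
    """Return prediction sets with roughly (1 - alpha) marginal coverage.

    calib_probs: (n, K) softmax probabilities from any underlying model
    calib_labels: (n,) true labels for the calibration set
    test_probs: (m, K) softmax probabilities for test inputs
    """
    n = len(calib_labels)
    # Nonconformity score: 1 minus the probability assigned to the true label.
    scores = 1.0 - calib_probs[np.arange(n), calib_labels]
    # Finite-sample corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Include every label whose score falls at or below the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Toy usage with a hypothetical 3-class model.
rng = np.random.default_rng(0)
calib_probs = rng.dirichlet(np.ones(3), size=200)
calib_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_prediction_sets(calib_probs, calib_labels, test_probs))
```

The coverage guarantee holds regardless of the model that produced the probabilities, which is what makes the approach attractive for black-box NLP systems.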

