Reliable Hallucination Detection in Large Language Models // Jiaxin Zhang // AI in Production Talk
April 25, 2024, 1 p.m. | MLOps.community
Hallucination detection is a critical step toward understanding the trustworthiness of modern language models (LMs). To achieve this goal, we re-examine existing detection approaches based on the self-consistency of LMs and uncover two types of hallucinations, arising at 1) the question level and 2) the model level, which cannot be effectively identified through self-consistency checks alone. Building on this finding, we propose a novel sampling-based method, semantic-aware cross-check consistency (SAC3), which extends the principle of self-consistency checking. Our SAC3 approach …
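The core idea behind consistency-based detection can be illustrated with a minimal sketch: sample several answers to the same question (self-consistency) and to semantically equivalent rephrasings (the cross-check that SAC3 adds), then flag the response when agreement is low on either axis. The scoring function below is a simplified stand-in (majority-vote agreement over normalized strings); the actual SAC3 method uses semantic equivalence checks rather than exact matching, and the `threshold` value is an illustrative assumption.

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of sampled answers that agree with the majority answer.
    A crude proxy for semantic agreement: exact match after normalization."""
    if not answers:
        return 0.0
    normalized = [a.strip().lower() for a in answers]
    majority_count = Counter(normalized).most_common(1)[0][1]
    return majority_count / len(normalized)

def flag_hallucination(same_q_answers, rephrased_q_answers, threshold=0.5):
    """Flag a likely hallucination when consistency is low either across
    samples of the same question (self-consistency) or across answers to
    semantically equivalent rephrasings (cross-check, in the spirit of SAC3)."""
    self_c = consistency_score(same_q_answers)
    cross_c = consistency_score(same_q_answers + rephrased_q_answers)
    return min(self_c, cross_c) < threshold
```

For example, three samples of "Paris" plus a rephrased-question answer of "Lyon" still pass (agreement 0.8), whereas three mutually inconsistent answers to the same question are flagged regardless of the cross-check.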