HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild
March 8, 2024, 5:47 a.m. | Zhiying Zhu, Zhiqing Sun, Yiming Yang
cs.CL updates on arXiv.org (arxiv.org)
Abstract: Hallucinations pose a significant challenge to the reliability of large language models (LLMs) in critical domains. Recent benchmarks designed to assess LLM hallucinations within conventional NLP tasks, such as knowledge-intensive question answering (QA) and summarization, are insufficient for capturing the complexities of user-LLM interactions in dynamic, real-world settings. To address this gap, we introduce HaluEval-Wild, the first benchmark specifically designed to evaluate LLM hallucinations in the wild. We meticulously collect challenging (adversarially filtered by Alpaca) …
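The excerpt does not spell out what "adversarially filtered by Alpaca" involves, so here is a minimal sketch of how such a filtering step is commonly structured: run each candidate user query through a weaker reference model (Alpaca) and keep only the queries whose responses are judged hallucinated. The `generate` and `judge` callables are hypothetical stand-ins for illustration, not part of any published HaluEval-Wild code.

```python
# Hypothetical sketch of adversarial filtering (an assumption, not the
# authors' released pipeline): keep only the queries that a reference
# model answers badly, so the surviving set skews toward inputs that
# empirically induce hallucinations.

from typing import Callable, Iterable

def adversarial_filter(
    queries: Iterable[str],
    generate: Callable[[str], str],     # e.g. wraps an Alpaca checkpoint
    judge: Callable[[str, str], bool],  # True if (query, response) is hallucinated
) -> list[str]:
    """Return the subset of queries whose model responses are judged hallucinated."""
    kept = []
    for query in queries:
        response = generate(query)
        if judge(query, response):      # model failed, so the query is "hard"
            kept.append(query)
    return kept
```

Under this framing, a query survives the filter only if the reference model's answer is flagged as a hallucination, which concentrates the resulting benchmark on queries that are demonstrably challenging rather than ones that any instruction-tuned model handles easily.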
More from arxiv.org / cs.CL updates on arXiv.org:
Benchmarking LLMs via Uncertainty Quantification (arxiv.org, 1 day, 18 hours ago)
CARE: Extracting Experimental Findings From Clinical Literature (arxiv.org, 1 day, 18 hours ago)