March 8, 2024, 5:47 a.m. | Zhiying Zhu, Zhiqing Sun, Yiming Yang

cs.CL updates on arXiv.org

arXiv:2403.04307v1 Announce Type: new
Abstract: Hallucinations pose a significant challenge to the reliability of large language models (LLMs) in critical domains. Recent benchmarks designed to assess LLM hallucinations within conventional NLP tasks, such as knowledge-intensive question answering (QA) and summarization, are insufficient for capturing the complexities of user-LLM interactions in dynamic, real-world settings. To address this gap, we introduce HaluEval-Wild, the first benchmark specifically designed to evaluate LLM hallucinations in the wild. We meticulously collect challenging (adversarially filtered by Alpaca) …
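The abstract mentions collecting challenging user queries that are "adversarially filtered by Alpaca." As a rough illustration only, the sketch below shows one common way such filtering is done: keep the queries on which a reference model (here standing in for Alpaca) produces a response judged as hallucinated. The `generate` and `is_hallucinated` callables are hypothetical placeholders; the paper's actual filtering criterion is not given in this excerpt.

```python
from typing import Callable, Iterable, List

def adversarially_filter(
    queries: Iterable[str],
    generate: Callable[[str], str],               # assumed reference-model inference call (e.g., Alpaca)
    is_hallucinated: Callable[[str, str], bool],  # hypothetical judge of (query, response)
) -> List[str]:
    """Return the subset of queries on which the reference model hallucinates."""
    hard_queries: List[str] = []
    for q in queries:
        response = generate(q)
        if is_hallucinated(q, response):
            # The reference model fails here, so the query is kept as "challenging".
            hard_queries.append(q)
    return hard_queries
```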

