In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
March 5, 2024, 2:43 p.m. | Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
cs.LG updates on arXiv.org
Abstract: Large language models (LLMs) frequently hallucinate and produce factual errors, yet our understanding of why they make these errors remains limited. In this study, we delve into the underlying mechanisms of LLM hallucinations from the perspective of inner representations, and discover a salient pattern associated with hallucinations: correct generations tend to have sharper context activations in the hidden states of the in-context tokens, compared to the incorrect ones. Leveraging this insight, we propose an entropy-based …
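The truncated abstract points to an entropy-based measure of how sharply a generation step's activation concentrates on the in-context tokens. The snippet below is a minimal sketch of one such metric, assuming a softmax distribution over dot-product scores between hidden states; the function name, tensor shapes, and scoring rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def in_context_entropy(context_hidden: torch.Tensor, query_hidden: torch.Tensor) -> torch.Tensor:
    """Entropy of a softmax distribution over in-context tokens (hypothetical sketch).

    context_hidden: (num_context_tokens, hidden_dim) hidden states of the
        in-context tokens at some layer.
    query_hidden: (hidden_dim,) hidden state at the current generation step.

    Lower entropy means the activation is concentrated ("sharper") on a few
    in-context tokens; higher entropy means it is spread out diffusely.
    """
    # Dot-product scores between the generation step and each in-context token,
    # scaled by sqrt(hidden_dim) as in standard attention.
    scores = context_hidden @ query_hidden / context_hidden.shape[-1] ** 0.5
    probs = F.softmax(scores, dim=-1)
    # Shannon entropy of the resulting distribution over in-context tokens.
    return -(probs * probs.clamp_min(1e-12).log()).sum()

# Toy usage with random tensors standing in for a real model's activations.
torch.manual_seed(0)
ctx = torch.randn(16, 64)   # 16 in-context tokens, hidden size 64
qry = torch.randn(64)
print(float(in_context_entropy(ctx, qry)))
```

Under this reading, a low entropy value would serve as the "sharpness" signal that flags a generation step as likely correct, while a high value would act as a hallucination alert.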