March 28, 2024, 4:48 a.m. | Lei Yu, Meng Cao, Jackie Chi Kit Cheung, Yue Dong

cs.CL updates on arXiv

arXiv:2403.18167v1 Announce Type: new
Abstract: State-of-the-art language models (LMs) sometimes generate non-factual hallucinations that misalign with world knowledge. Despite extensive efforts to detect and mitigate hallucinations, understanding their internal mechanisms remains elusive. Our study investigates the mechanistic causes of hallucination, specifically non-factual ones where the LM incorrectly predicts object attributes in response to subject-relation queries. With causal mediation analysis and embedding space projection, we identify two general mechanistic causes of hallucinations shared across LMs of various scales and designs: 1) …
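The causal mediation analysis mentioned in the abstract can be illustrated with activation patching: run the model on a corrupted query, restore one intermediate activation from the clean run, and measure how much of the correct-attribute logit comes back. A minimal sketch on a toy two-layer model follows; the weights, inputs, and the scalar "logit" are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # toy hidden dimension

# Toy two-layer "model" standing in for an LM (illustrative weights only).
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d,))

def forward(x, patch_idx=None, patch_val=None):
    """Run the toy model; optionally overwrite one hidden unit (activation patching)."""
    h = np.tanh(x @ W1)
    if patch_idx is not None:
        h = h.copy()
        h[patch_idx] = patch_val
    return float(h @ W2)  # scalar stand-in for the correct attribute's logit

x_clean = rng.normal(size=d)    # e.g. the intact subject-relation query
x_corrupt = rng.normal(size=d)  # the same query with the subject corrupted

h_clean = np.tanh(x_clean @ W1)
base = forward(x_corrupt)  # corrupted run, nothing restored

# Indirect effect of each hidden unit: restore it alone from the clean run
# and see how much the output moves back toward the clean prediction.
effects = [forward(x_corrupt, i, h_clean[i]) - base for i in range(d)]
print([round(e, 3) for e in effects])
```

Units with large indirect effects are the candidate mediators; in a real LM the same loop runs over layers and token positions rather than scalar hidden units.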

