Language Models Hallucinate, but May Excel at Fact Verification
March 22, 2024, 4:48 a.m. | Jian Guan, Jesse Dodge, David Wadden, Minlie Huang, Hao Peng
cs.CL updates on arXiv.org
Abstract: Recent progress in natural language processing (NLP) owes much to remarkable advances in large language models (LLMs). Nevertheless, LLMs frequently "hallucinate," producing non-factual outputs. Our carefully designed human evaluation substantiates the severity of the hallucination problem, revealing that even GPT-3.5 produces factual outputs less than 25% of the time. This underscores the importance of fact verifiers to measure and incentivize progress. Our systematic investigation affirms that LLMs can be repurposed as effective fact verifiers with …
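The abstract's closing claim, that LLMs can be repurposed as fact verifiers, can be pictured with a minimal sketch: give a model a claim plus a piece of evidence and ask for a binary verdict. The sketch below is not the paper's method; the use of the OpenAI chat API, the model name, the prompt wording, and the SUPPORTED/NOT_SUPPORTED label set are all illustrative assumptions.

```python
# Minimal sketch of an LLM repurposed as a fact verifier (illustrative,
# not the authors' implementation). Assumes the `openai` package and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = """You are a fact verifier.
Evidence: {evidence}
Claim: {claim}
Answer with exactly one word: SUPPORTED or NOT_SUPPORTED."""

def verify(claim: str, evidence: str, model: str = "gpt-3.5-turbo") -> bool:
    """Return True if the model judges the claim supported by the evidence."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT.format(evidence=evidence, claim=claim)}],
        temperature=0,  # deterministic verdicts for verification
    )
    return response.choices[0].message.content.strip() == "SUPPORTED"

if __name__ == "__main__":
    print(verify("The Eiffel Tower is in Berlin.",
                 "The Eiffel Tower is a landmark in Paris, France."))
```

Framing verification as a constrained classification prompt like this keeps the output machine-checkable; the paper's actual verifier setup may differ.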