HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models
April 3, 2024, 4:47 a.m. | Navapat Nananukul, Mayank Kejriwal
cs.CL updates on arXiv.org arxiv.org
Abstract: Recent progress in generative AI, including large language models (LLMs) like ChatGPT, has opened up significant opportunities in fields ranging from natural language processing to knowledge discovery and data mining. However, there is also growing awareness that these models are prone to problems such as fabricating information ('hallucinations') and reasoning incorrectly on seemingly simple problems. Because of the popularity of models like ChatGPT, both academic scholars and citizen scientists have documented …
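The abstract describes an ontology for formally representing and categorizing hallucination cases. The paper's actual class names and structure are not given in this excerpt, so the taxonomy below is an illustrative assumption: a minimal sketch of how documented LLM outputs might be annotated with hallucination categories arranged in a subsumption hierarchy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HallucinationClass:
    """A node in a small ontology-like class hierarchy.

    Names here (FactualFabrication, FaultyReasoning) are hypothetical,
    not taken from the HALO paper.
    """
    name: str
    parent: "HallucinationClass | None" = None

    def is_subclass_of(self, other: "HallucinationClass") -> bool:
        # Walk up the parent chain to test subsumption.
        node = self
        while node is not None:
            if node == other:
                return True
            node = node.parent
        return False

# Illustrative taxonomy (assumed)
HALLUCINATION = HallucinationClass("Hallucination")
FABRICATION = HallucinationClass("FactualFabrication", HALLUCINATION)
FAULTY_REASONING = HallucinationClass("FaultyReasoning", HALLUCINATION)

@dataclass
class DocumentedCase:
    """A documented LLM output annotated with a hallucination class."""
    prompt: str
    output: str
    category: HallucinationClass

case = DocumentedCase(
    prompt="Who wrote 'The Wind-Up Bird Chronicle'?",
    output="It was written by Kazuo Ishiguro.",  # fabricated attribution
    category=FABRICATION,
)
print(case.category.is_subclass_of(HALLUCINATION))  # True
```

A real ontology such as HALO would likely be expressed in OWL/RDF rather than Python classes; this sketch only illustrates the idea of categorizing documented cases under a class hierarchy.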