March 19, 2024, 4:45 a.m. | Garima Agrawal, Tharindu Kumarage, Zeyad Alghamdi, Huan Liu

cs.LG updates on arXiv.org

arXiv:2311.07914v2 Announce Type: replace-cross
Abstract: Contemporary LLMs are prone to producing hallucinations, stemming mainly from knowledge gaps within the models. To address this critical limitation, researchers employ diverse strategies to augment LLMs with external knowledge, aiming to reduce hallucinations and enhance reasoning accuracy. Among these strategies, leveraging knowledge graphs as a source of external information has demonstrated promising results. In this survey, we comprehensively review these knowledge-graph-based augmentation techniques in LLMs, focusing on their efficacy in …
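The core strategy the abstract describes, retrieving facts from a knowledge graph and supplying them to the LLM to ground its answer, can be sketched minimally as below. The toy triple store, the `retrieve_facts`/`augment_prompt` helpers, and the example facts are illustrative assumptions for this sketch, not details from the paper.

```python
# Minimal sketch of knowledge-graph-augmented prompting, assuming a
# toy knowledge graph stored as (subject, relation, object) triples.
# Helper names and example facts are hypothetical, not from the survey.

from typing import List, Tuple

KG: List[Tuple[str, str, str]] = [
    ("Marie Curie", "worked in", "physics"),
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Nobel Prize in Physics", "first awarded in", "1901"),
]

def retrieve_facts(question: str, kg: List[Tuple[str, str, str]] = KG) -> List[str]:
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [f"{s} {r} {o}." for s, r, o in kg
            if s.lower() in q or o.lower() in q]

def augment_prompt(question: str) -> str:
    """Prepend retrieved facts so the model can ground its answer in them."""
    facts = retrieve_facts(question)
    context = "\n".join(facts) if facts else "(no relevant facts found)"
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

print(augment_prompt("What prize did Marie Curie win?"))
```

In practice the dictionary lookup would be replaced by entity linking plus a graph query (e.g. SPARQL over Wikidata), but the pattern, retrieve relevant triples, then condition generation on them, is the same.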

