Feb. 20, 2024, 5:51 a.m. | Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng

cs.CL updates on arXiv.org

arXiv:2402.11457v1 Announce Type: new
Abstract: Large Language Models (LLMs) have been found to have difficulty knowing they do not possess certain knowledge and tend to provide specious answers in such cases. Retrieval Augmentation (RA) has been extensively studied to mitigate LLMs' hallucinations. However, due to the extra overhead and unassured quality of retrieval, it may not be optimal to conduct RA all the time. A straightforward idea is to only conduct retrieval when LLMs are uncertain about a question. This …
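The core idea, retrieving only when the model is uncertain rather than on every query, can be pictured as a simple confidence gate in front of the retriever. Below is a minimal sketch of that gating loop; it is not the paper's released code, and the names (`query_llm`, `retrieve_passages`, `CONFIDENCE_THRESHOLD`) and the confidence signal are illustrative placeholders under assumed APIs.

```python
# Minimal sketch of uncertainty-gated retrieval augmentation.
# Assumes a generic LLM client that returns an answer plus a
# confidence score in [0, 1]; both helpers below are stubs.

from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff; in practice tuned on held-out data


def query_llm(prompt: str) -> Tuple[str, float]:
    """Placeholder LLM call returning (answer, confidence).

    A real implementation might derive confidence from mean token
    log-probabilities or a verbalized self-assessment.
    """
    return "stub answer", 0.5


def retrieve_passages(question: str, k: int = 3) -> List[str]:
    """Placeholder retriever returning the top-k passages for the question."""
    return [f"passage {i} about: {question}" for i in range(k)]


def answer(question: str) -> str:
    # First let the model try to answer on its own.
    direct_answer, confidence = query_llm(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        # The model appears certain, so skip retrieval and its overhead.
        return direct_answer
    # Otherwise fall back to retrieval augmentation: ground the model
    # in retrieved evidence and answer again.
    context = "\n".join(retrieve_passages(question))
    augmented_answer, _ = query_llm(f"Context:\n{context}\n\nQuestion: {question}")
    return augmented_answer


if __name__ == "__main__":
    print(answer("Who discovered the element rhenium?"))
```

The design choice the abstract motivates sits in the `if` branch: when the confidence estimate is reliable, the gate saves retrieval cost on easy questions and avoids injecting low-quality retrieved context where the model already knows the answer.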
