How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior
April 17, 2024, 4:46 a.m. | Kevin Wu, Eric Wu, James Zou
Source: cs.CL updates on arXiv.org (arxiv.org)
Abstract: Retrieval augmented generation (RAG) is often used to fix hallucinations and provide up-to-date knowledge for large language models (LLMs). However, in cases where the LLM alone answers a question incorrectly, does providing the correct retrieved content always fix the error? Conversely, in cases where the retrieved content is incorrect, does the LLM know to ignore the wrong information, or does it recapitulate the error? To answer these questions, we systematically analyze the tug-of-war between a …
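The analysis the abstract describes can be pictured as a simple evaluation harness: answer each question once from the model's prior alone and once with the retrieved passage prepended, then tally how the context changed the outcome. The sketch below is not the authors' code; `query_llm` is a hypothetical stand-in for whatever LLM API is used, and exact-match grading is a simplifying assumption.

```python
def evaluate_tug_of_war(examples, query_llm):
    """Tally how retrieved context changes an LLM's answers.

    examples:  list of dicts with keys "question", "context", "gold".
    query_llm: callable(prompt) -> answer string; hypothetical
               stand-in for any chat/completions API.
    """
    counts = {
        "context_fixed_error": 0,       # prior wrong, RAG answer right
        "context_introduced_error": 0,  # prior right, RAG answer wrong
        "unchanged_correct": 0,
        "unchanged_incorrect": 0,
    }
    for ex in examples:
        # Answer from the model's internal prior alone.
        prior = query_llm(ex["question"])
        # Answer with the retrieved passage prepended.
        rag = query_llm(ex["context"] + "\n\n" + ex["question"])
        prior_ok = prior.strip().lower() == ex["gold"].strip().lower()
        rag_ok = rag.strip().lower() == ex["gold"].strip().lower()
        if not prior_ok and rag_ok:
            counts["context_fixed_error"] += 1
        elif prior_ok and not rag_ok:
            counts["context_introduced_error"] += 1
        elif prior_ok:
            counts["unchanged_correct"] += 1
        else:
            counts["unchanged_incorrect"] += 1
    return counts
```

With a dataset where the prior is wrong but the retrieved passage is correct, the `context_fixed_error` bucket measures exactly the first question the abstract poses; feeding deliberately corrupted passages instead would populate `context_introduced_error`, probing the second.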