ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
April 2, 2024, 7:52 p.m. | Jon Saad-Falcon, Omar Khattab, Christopher Potts, Matei Zaharia
cs.CL updates on arXiv.org arxiv.org
Abstract: Evaluating retrieval-augmented generation (RAG) systems traditionally relies on hand annotations for input queries, passages to retrieve, and responses to generate. We introduce ARES, an Automated RAG Evaluation System, for evaluating RAG systems along the dimensions of context relevance, answer faithfulness, and answer relevance. By creating its own synthetic training data, ARES finetunes lightweight LM judges to assess the quality of individual RAG components. To mitigate potential prediction errors, ARES utilizes a small set of human-annotated …
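To make the three evaluation dimensions concrete, below is a minimal, hypothetical sketch (not the ARES API or its fine-tuned judges) of scoring a query/passage/answer triple for context relevance, answer faithfulness, and answer relevance. In ARES these judges are lightweight LMs fine-tuned on synthetic data; here they are stand-in heuristics, and all names (RAGExample, evaluate, the judge functions) are illustrative assumptions.

```python
# Hypothetical sketch of ARES-style per-component judging (placeholder heuristics,
# not the ARES implementation). Each judge scores one dimension of a RAG triple.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RAGExample:
    query: str     # input query
    passage: str   # retrieved context
    answer: str    # generated response

# A judge maps one example to a score in [0, 1]; ARES uses fine-tuned LM judges instead.
Judge = Callable[[RAGExample], float]

def context_relevance(ex: RAGExample) -> float:
    # Placeholder check: does the retrieved passage mention any query term?
    return 1.0 if any(w in ex.passage.lower() for w in ex.query.lower().split()) else 0.0

def answer_faithfulness(ex: RAGExample) -> float:
    # Placeholder check: does the answer appear verbatim in the passage?
    return 1.0 if ex.answer.lower() in ex.passage.lower() else 0.0

def answer_relevance(ex: RAGExample) -> float:
    # Placeholder check: does the answer mention any query term?
    return 1.0 if any(w in ex.answer.lower() for w in ex.query.lower().split()) else 0.0

def evaluate(examples: list[RAGExample]) -> dict[str, float]:
    # Average each judge's score over the evaluation set.
    judges: dict[str, Judge] = {
        "context_relevance": context_relevance,
        "answer_faithfulness": answer_faithfulness,
        "answer_relevance": answer_relevance,
    }
    n = len(examples)
    return {name: sum(judge(ex) for ex in examples) / n for name, judge in judges.items()}

if __name__ == "__main__":
    data = [RAGExample("capital of France", "Paris is the capital of France.", "Paris")]
    print(evaluate(data))
```

In the paper's framework, the heuristic judges above would be replaced by lightweight LM classifiers fine-tuned on ARES's synthetic training data, and the aggregate scores would be calibrated against the small human-annotated set mentioned in the abstract.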