April 17, 2024, 4:46 a.m. | Liyan Tang, Philippe Laban, Greg Durrett

cs.CL updates on arXiv.org

arXiv:2404.10774v1 Announce Type: new
Abstract: Recognizing whether LLM output can be grounded in evidence is central to many tasks in NLP: retrieval-augmented generation, summarization, document-grounded dialogue, and more. Current approaches to this kind of "fact-checking" verify each piece of a model generation against potential evidence using an LLM. However, this process is computationally expensive, requiring many LLM calls to check a single response. In this work, we show how to build small models that …
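The setup the abstract describes, splitting a response into pieces and verifying each piece against evidence, can be approximated with a single small entailment model in place of repeated LLM calls. The sketch below is a minimal illustration under stated assumptions, not the authors' system: the off-the-shelf NLI model (roberta-large-mnli), the entailment threshold, and the naive sentence-level split are all illustrative choices.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: an off-the-shelf NLI model stands in for the paper's
# purpose-built small fact-checker. For roberta-large-mnli the label
# order is [contradiction, neutral, entailment].
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def is_grounded(evidence: str, claim: str, threshold: float = 0.5) -> bool:
    """Return True if `claim` is entailed by `evidence` above `threshold`."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    entail_prob = logits.softmax(dim=-1)[0, 2].item()
    return entail_prob >= threshold

def check_response(evidence: str, response: str) -> list[tuple[str, bool]]:
    # Naive sentence split; a real pipeline would decompose the response
    # into atomic claims before verification.
    claims = [s.strip() for s in response.split(".") if s.strip()]
    return [(claim, is_grounded(evidence, claim)) for claim in claims]

doc = "The Eiffel Tower, completed in 1889, is 330 metres tall."
answer = "The Eiffel Tower is 330 metres tall. It was completed in 1921."
for claim, ok in check_response(doc, answer):
    print("GROUNDED" if ok else "UNSUPPORTED", "-", claim)
```

In this setup, each claim costs one forward pass through a roughly 350M-parameter classifier rather than a call to a large generative model, which is the efficiency trade the abstract motivates.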

