March 28, 2024, 6:55 p.m. | /u/dxtros

Machine Learning www.reddit.com

Abstract:
We demonstrate a technique that dynamically adapts the number of documents in a top-k retriever RAG prompt using feedback from the LLM. This yields a 4x cost reduction for RAG LLM question answering while maintaining the same level of accuracy. We also show that the method helps explain the lineage of LLM outputs.
The reference implementation works with most models (GPT-4, many local models, older GPT-3.5 Turbo) and can be used with most vector databases exposing a …
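The abstract's core loop can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: `retrieve` and `ask_llm` are hypothetical stubs standing in for a vector-database query and an LLM call, and the `"INSUFFICIENT"` sentinel is an assumed convention for the model's feedback signal.

```python
def retrieve(query, k):
    """Stub retriever: return the top-k documents for a query.
    A real version would query a vector database."""
    corpus = [f"doc-{i} about {query}" for i in range(10)]
    return corpus[:k]

def ask_llm(query, docs):
    """Stub LLM: answer from docs, or return 'INSUFFICIENT' when the
    context is too thin. A real prompt would instruct the model to emit
    such a sentinel when it cannot answer from the provided documents."""
    if len(docs) < 4:  # pretend the model needs at least 4 documents
        return "INSUFFICIENT"
    return f"answer to '{query}' using {len(docs)} docs"

def adaptive_rag(query, k_start=1, k_max=8):
    """Grow k until the LLM stops asking for more context.
    Starting small is where the cost saving comes from: most queries
    never pay for a large prompt."""
    k = k_start
    while k <= k_max:
        docs = retrieve(query, k)
        answer = ask_llm(query, docs)
        if answer != "INSUFFICIENT":
            # Returning k alongside the answer also records which
            # documents fed the response, supporting output lineage.
            return answer, k
        k *= 2
    return None, k_max

answer, k_used = adaptive_rag("vector indexes")
```

Because k only grows when the model explicitly signals insufficiency, easy questions are answered with a short, cheap prompt, while hard ones escalate toward the fixed-k baseline.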

