Dec. 5, 2023, 6:38 p.m. | /u/kindly_formation71

r/MachineLearning · www.reddit.com

Ran an experiment comparing information retrieval performance between OpenAI's Assistants API RAG, GPT-4 Turbo (with context-window stuffing), and LlamaIndex with GPT-4.
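For anyone who wants a concrete picture of the three setups, here's a minimal sketch of what each one roughly looks like (Python, using the OpenAI SDK v1 Assistants beta and the LlamaIndex API as they were in late 2023). The file names, query, and polling loop are illustrative, not the actual benchmark code:

```python
# Rough sketch of the three retrieval setups being compared.
# File names, the question, and the polling loop are illustrative only.
import time
from openai import OpenAI

client = OpenAI()
question = "What does the document say about X?"  # hypothetical query

# 1) Assistants API with the built-in retrieval tool (RAG managed by OpenAI)
file = client.files.create(file=open("document.pdf", "rb"), purpose="assistants")
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user", content=question)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
assistants_answer = (
    client.beta.threads.messages.list(thread_id=thread.id).data[0].content[0].text.value
)

# 2) GPT-4 Turbo with context-window stuffing: the whole document goes in the prompt
full_text = open("document.txt").read()
stuffing_answer = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": f"Answer questions using this document:\n\n{full_text}"},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

# 3) LlamaIndex vector index, queried with GPT-4 (0.9-era API)
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import OpenAI as LlamaOpenAI

documents = SimpleDirectoryReader(input_files=["document.pdf"]).load_data()
service_context = ServiceContext.from_defaults(llm=LlamaOpenAI(model="gpt-4"))
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
llamaindex_answer = index.as_query_engine().query(question).response
```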

I recently added a new **document-oriented** React hook to [CopilotKit](https://github.com/RecursivelyAI/CopilotKit), made specifically to accommodate (potentially long-form) documents, and I wanted to get the best retrieval performance.

**Got pretty striking results:** The Assistants API beats LlamaIndex by a wide margin on retrieval accuracy, and is roughly 25x cheaper than context-window stuffing with GPT-4 Turbo.
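The cost gap isn't surprising once you do the math: with context stuffing you pay for the entire document as input tokens on every single question, while retrieval only sends the handful of chunks that matter. A back-of-envelope with purely illustrative numbers (GPT-4 Turbo input was about $0.01 per 1K tokens at the time):

```python
# Purely illustrative back-of-envelope, not the experiment's measured costs.
PRICE_PER_INPUT_TOKEN = 0.01 / 1000   # GPT-4 Turbo, late-2023 list price (USD)

doc_tokens = 100_000       # hypothetical long document stuffed into every prompt
retrieved_tokens = 4_000   # hypothetical retrieved chunks sent instead

stuffing_cost = doc_tokens * PRICE_PER_INPUT_TOKEN         # $1.00 per question
retrieval_cost = retrieved_tokens * PRICE_PER_INPUT_TOKEN  # $0.04 per question
print(f"stuffing is ~{stuffing_cost / retrieval_cost:.0f}x more expensive per query")
```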



[accuracy performance](https://preview.redd.it/to7d2oqiti4c1.jpg?width=1456&format=pjpg&auto=webp&s=2481940f6e07bb027a1bb744e8e48ecc7d374b81)



[costs](https://preview.redd.it/b7c7xltjti4c1.jpg?width=1456&format=pjpg&auto=webp&s=0f33be91986569f58e197d769a8b9b9d483b01f7) …

