Nov. 6, 2023, 12:48 p.m. | Andrej Baranovskij


I explain how to get structured JSON output from an LLM RAG pipeline running with the Haystack API on top of Llama.cpp. Vector embeddings are stored in a Weaviate database, the same as in my previous video. When extracting data, a structured JSON response is preferred because we are not interested in additional descriptions.
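As a rough sketch of the idea, the prompt instructs the model to reply with a JSON object only, and a small helper parses the JSON out of the raw reply. The names here (`INVOICE_PROMPT`, `extract_json`) and the field list are illustrative assumptions, not taken from the repo; in a real run the reply would come from Llama.cpp through a Haystack prompt node rather than a hard-coded string.

```python
import json
import re

# Illustrative prompt: ask for a single JSON object, no extra description.
# The field names are assumptions for the sketch, not the repo's actual schema.
INVOICE_PROMPT = """Extract the following fields from the invoice text and
respond with a single JSON object only, no additional description:
invoice_number, invoice_date, total_amount.

Invoice text:
{invoice_text}
"""

def extract_json(reply: str) -> dict:
    """Pull the first {...} block out of a raw LLM reply and parse it.

    LLMs often wrap the JSON in filler text, so we locate the braces
    instead of parsing the whole reply directly.
    """
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))

# Simulated model reply (a real one would come from the Llama.cpp pipeline):
reply = ('Sure, here is the data: {"invoice_number": "INV-001", '
         '"invoice_date": "2023-11-06", "total_amount": 1250.0}')
print(extract_json(reply))
```

Keeping the parsing step separate from the prompt makes it easy to retry or tighten the prompt when the model drifts from pure JSON.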

Invoice Data Processing with Llama2 13B LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]:
https://www.youtube.com/watch?v=XuvdgCuydsM

GitHub repo:
https://github.com/katanaml/llm-rag-invoice-cpu

0:00 Intro
0:55 Prompts
5:18 Summary

CONNECT:
- Subscribe to this YouTube …

