July 29, 2023, 1:52 p.m. | James Briggs


Retrieval Augmented Generation (RAG) lets us keep our Large Language Models (LLMs) up to date with the latest information, reduces hallucinations, and allows us to cite the original source of the information the LLM uses.

We build the RAG pipeline using a Pinecone vector database and a Llama 2 13B chat model, wrapping everything with Hugging Face and LangChain code.
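The core retrieve-then-generate pattern behind that pipeline can be sketched without any external services. The snippet below is a minimal, self-contained illustration, not the video's actual code: the toy bag-of-words `embed` stands in for a real embedding model, the in-memory `VectorStore` stands in for Pinecone, and the returned prompt is what you would pass to the Llama 2 chat model.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts (stand-in for a real embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    # Stand-in for Pinecone: stores (embedding, text) pairs, returns top-k by similarity.
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def query(self, question, k=2):
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_prompt(question, store):
    # Retrieval step: fetch relevant context, then build the augmented prompt
    # that a chat LLM (e.g. Llama 2 13B) would receive.
    context = "\n".join(store.query(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = VectorStore()
store.add("Llama 2 was released by Meta in July 2023.")
store.add("Pinecone is a managed vector database.")
store.add("Bananas are yellow.")
print(rag_prompt("When was Llama 2 released?", store))
```

In the real pipeline, `embed` becomes a sentence-embedding model, `VectorStore` becomes a Pinecone index, and the prompt is sent to the Llama 2 chat model via Hugging Face and LangChain; citing sources falls out naturally because you know which stored documents were retrieved.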

📌 Code:
https://github.com/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/llama-2-13b-retrievalqa.ipynb

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-signup/

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
https://discord.gg/c5QtDB9RAP …
