Sept. 6, 2023, 2:57 p.m. | LangChain


Most complex and knowledge-intensive LLM applications require runtime data retrieval for Retrieval Augmented Generation (RAG). A core component of the typical RAG stack is a vector store, which is used to power document retrieval.
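For illustration, here is a minimal sketch of a vector store powering retrieval, assuming the `langchain` and `faiss-cpu` packages and an OpenAI API key; the sample texts and query are invented for the example:

```python
# Minimal sketch: a vector store powering document retrieval for RAG.
# Assumes `langchain`, `faiss-cpu`, and an OPENAI_API_KEY are available.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Index a couple of documents (embeddings are computed at index time).
store = FAISS.from_texts(
    [
        "LangChain helps build LLM applications.",
        "A vector store retrieves documents by embedding similarity.",
    ],
    embedding=OpenAIEmbeddings(),
)

# At runtime, retrieve the documents most similar to the user's question
# and pass them to the LLM as context.
docs = store.similarity_search("How do I retrieve documents?", k=1)
print(docs[0].page_content)
```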

Using a vector store requires setting up an indexing pipeline to load data from sources (a website, a file, a database), transform it into documents, embed those documents, and write the embeddings into the store.
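A minimal sketch of such an indexing pipeline is below; the source URL, chunk sizes, and the FAISS/OpenAI choices are assumptions for illustration, not prescriptions from the post:

```python
# Minimal indexing pipeline sketch: load -> split -> embed -> store.
# Assumes `langchain`, `beautifulsoup4` (used by WebBaseLoader),
# `faiss-cpu`, and an OPENAI_API_KEY are available.
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# 1. Load data from a source (here, a web page).
docs = WebBaseLoader("https://blog.langchain.dev").load()

# 2. Transform: split the documents into chunks sized for embedding.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 3. Embed the chunks and write them into the vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
vectorstore.save_local("index")  # persist the index for later retrieval
```

Re-running this pipeline naively re-embeds and re-inserts everything, which is why indexing is typically treated as its own component of the RAG stack rather than an ad hoc script.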
