Sept. 6, 2023, 2:57 p.m. | LangChain

Source: blog.langchain.dev

Most complex and knowledge-intensive LLM applications require runtime data retrieval for Retrieval Augmented Generation (RAG). A core component of the typical RAG stack is a vector store, which is used to power document retrieval.

Using a vector store requires setting up an indexing pipeline to load data from sources (a
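The flow described here (load documents, embed them, index them in a vector store, then retrieve by similarity) can be sketched with a small in-memory example. This is a toy illustration only: the bag-of-words "embedding" and the `ToyVectorStore` class are hypothetical stand-ins for a real embedding model and vector store, not LangChain's API.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy sparse "embedding": L2-normalized bag-of-words counts.
    # A real RAG stack would call an embedding model here instead.
    counts = Counter(re.findall(r"\w+", text.lower()))
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return Counter({tok: c / norm for tok, c in counts.items()})

def cosine(a: Counter, b: Counter) -> float:
    # Dot product of two normalized sparse vectors; missing keys count as 0.
    return sum(weight * b[tok] for tok, weight in a.items())

class ToyVectorStore:
    """Minimal in-memory vector store: index documents, retrieve by similarity."""

    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vectors: list[Counter] = []

    def add(self, doc: str) -> None:
        # The "indexing pipeline" step: embed each loaded document and store it.
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def search(self, query: str, k: int = 1) -> list[str]:
        # Retrieval step: rank stored documents by similarity to the query.
        q = embed(query)
        ranked = sorted(zip(self.docs, self.vectors),
                        key=lambda pair: cosine(q, pair[1]),
                        reverse=True)
        return [doc for doc, _ in ranked[:k]]

store = ToyVectorStore()
for doc in [
    "Vector stores power document retrieval in RAG applications.",
    "Indexing pipelines load data from source systems.",
    "LLMs generate answers grounded in retrieved context.",
]:
    store.add(doc)

print(store.search("How does the pipeline load data from sources?"))
# -> ['Indexing pipelines load data from source systems.']
```

In a production setup, the same load/embed/index/retrieve shape holds, but each stage is swapped for real components: document loaders for the sources, an embedding model for `embed`, and a persistent vector database for the store.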

