Dec. 14, 2023, 1:34 a.m. | LlamaIndex

LlamaIndex www.youtube.com

In this talk we cover:
* What LlamaIndex is
* What LlamaHub and create-llama are
* The stages of Retrieval-Augmented Generation (RAG)
* LlamaIndex's ingestion pipeline with caching (sketched below)
* The set of vector stores, LLMs and embedding models available in LlamaIndex
* Inspecting and customizing your prompts (sketched below)
* And then seven advanced querying strategies, including:
  * SubQuestionQueryEngine for complex questions (sketched below)
  * Small-to-big retrieval for improved precision (sketched below)
  * Metadata filtering, also for improved precision (sketched below)
  * Hybrid search, including traditional search engine …
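
For the ingestion pipeline with caching, a minimal sketch might look like the following. It assumes the `llama_index.core` package layout (v0.10+) and an OpenAI API key for the embedding step; the document text, chunk sizes, and model are placeholders, not necessarily what the talk uses.

```python
# Minimal ingestion pipeline with a cache. Assumes llama-index >= 0.10 imports
# and OPENAI_API_KEY set for the embedding transformation; adjust for your version.
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline, IngestionCache
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

documents = [Document(text="LlamaIndex is a data framework for LLM applications.")]

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=64),  # chunking step
        OpenAIEmbedding(),                                   # embedding step
    ],
    cache=IngestionCache(),  # transformation outputs are cached and reused
)

nodes = pipeline.run(documents=documents)  # first run executes every transformation
nodes = pipeline.run(documents=documents)  # identical second run is served from the cache
```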
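Inspecting and customizing prompts can be sketched roughly as below. The index contents and the replacement template are illustrative, and the exact prompt keys depend on the query engine you build; an OpenAI key is assumed for the default LLM and embeddings.

```python
# Sketch of inspecting and overriding a query engine's prompts (illustrative data).
from llama_index.core import Document, VectorStoreIndex, PromptTemplate

index = VectorStoreIndex.from_documents([Document(text="Some source text.")])
query_engine = index.as_query_engine()

# Inspect: get_prompts() returns a dict mapping prompt keys to their templates.
for key in query_engine.get_prompts():
    print(key)

# Customize: swap in your own text-QA template for the response synthesizer.
qa_template = PromptTemplate(
    "Answer using only the context below.\n"
    "---------------------\n{context_str}\n---------------------\n"
    "Question: {query_str}\nAnswer: "
)
query_engine.update_prompts(
    {"response_synthesizer:text_qa_template": qa_template}
)
```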
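A sketch of SubQuestionQueryEngine for complex questions follows, using the familiar two-document comparison pattern; the document texts, tool names, and descriptions are made up for illustration, and the default question generator assumes an OpenAI key.

```python
# SubQuestionQueryEngine decomposes a complex question into sub-questions,
# routes each to a tool, and synthesizes the answers. Illustrative data only.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata

uber_index = VectorStoreIndex.from_documents([Document(text="Uber 2022 financials ...")])
lyft_index = VectorStoreIndex.from_documents([Document(text="Lyft 2022 financials ...")])

tools = [
    QueryEngineTool(
        query_engine=uber_index.as_query_engine(),
        metadata=ToolMetadata(name="uber_10k", description="Uber 2022 annual report"),
    ),
    QueryEngineTool(
        query_engine=lyft_index.as_query_engine(),
        metadata=ToolMetadata(name="lyft_10k", description="Lyft 2022 annual report"),
    ),
]

engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=tools)
response = engine.query("Compare Uber's and Lyft's revenue growth in 2022.")
```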
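One common way to implement small-to-big retrieval is sentence-window retrieval: embed small sentence nodes for precise matching, then swap each retrieved sentence for its surrounding window at synthesis time. The window size and metadata keys below are illustrative defaults, not necessarily the talk's exact setup.

```python
# Small-to-big via sentence-window retrieval (illustrative parameters).
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.node_parser import SentenceWindowNodeParser
from llama_index.core.postprocessor import MetadataReplacementPostProcessor

parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,                           # sentences of context kept around each node
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)
nodes = parser.get_nodes_from_documents([Document(text="Your source text here.")])

index = VectorStoreIndex(nodes)              # retrieval runs over the small sentence nodes
query_engine = index.as_query_engine(
    similarity_top_k=2,
    node_postprocessors=[MetadataReplacementPostProcessor(target_metadata_key="window")],
)
response = query_engine.query("What does the text say about the topic?")
```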
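Metadata filtering can be sketched as follows: attach metadata at ingestion time, then constrain retrieval to nodes whose metadata matches. The field name (`year`) and values are hypothetical, and filter support varies by vector store; the default in-memory store handles exact-match filters.

```python
# Restrict retrieval with an exact-match metadata filter (hypothetical field/values).
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

documents = [
    Document(text="2022 results ...", metadata={"year": "2022"}),
    Document(text="2023 results ...", metadata={"year": "2023"}),
]
index = VectorStoreIndex.from_documents(documents)

filters = MetadataFilters(filters=[ExactMatchFilter(key="year", value="2023")])
query_engine = index.as_query_engine(filters=filters)  # only year=2023 nodes are retrieved
response = query_engine.query("Summarize the results.")
```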
