Aug. 22, 2023, 5:13 p.m. | James Briggs

In this video, we'll learn how to build a Large Language Model (LLM) + Retrieval Augmented Generation (RAG) pipeline using open-source models from Hugging Face deployed on AWS SageMaker. We use the MiniLM sentence transformer, paired with Pinecone as the vector database, to power the semantic search component.

📌 Code:
https://github.com/pinecone-io/examples/blob/master/learn/generation/aws/sagemaker/sagemaker-huggingface-rag.ipynb
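The notebook above is the full walkthrough. As a taste, here is a minimal sketch of the query-time RAG flow, assuming an open-source LLM has already been deployed to a SageMaker endpoint and a Pinecone index has already been populated with MiniLM embeddings; the endpoint name, index name, credentials, and the "text" metadata field are placeholders, not the exact names used in the video.

```python
# Minimal RAG query sketch (placeholder names, see the notebook for the real pipeline)
import pinecone
from sentence_transformers import SentenceTransformer
from sagemaker.huggingface import HuggingFacePredictor

# MiniLM sentence transformer used to embed the user's question
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# connect to the Pinecone index that holds the document embeddings
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")  # placeholder credentials
index = pinecone.Index("rag-demo")  # placeholder index name

# wrap the already-deployed open-source LLM endpoint on SageMaker
llm = HuggingFacePredictor(endpoint_name="open-source-llm")  # placeholder endpoint name

def rag_answer(question: str, top_k: int = 3) -> str:
    # 1. embed the question with MiniLM
    query_vec = embedder.encode(question).tolist()
    # 2. retrieve the most relevant chunks from Pinecone
    results = index.query(vector=query_vec, top_k=top_k, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in results.matches)  # assumes text stored under "text"
    # 3. feed retrieved context + question to the LLM endpoint
    prompt = (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = llm.predict({"inputs": prompt})
    # response format depends on the deployed model/task; text-generation returns a list of dicts
    return response[0]["generated_text"]

print(rag_answer("What is Retrieval Augmented Generation?"))
```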

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-signup/

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
https://discord.gg/c5QtDB9RAP

Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/

00:00 Open Source LLMs on AWS SageMaker
00:27 Open Source RAG Pipeline
04:25 Deploying Hugging Face …
