Sept. 30, 2023, 1 p.m. | James Briggs


In chapter 10 of the LangChain series we'll work from LangChain streaming 101 through to developing streaming for LangChain Agents and serving it through FastAPI.

With what we cover here, you'll be able to go from never having used streaming to deploying it in production in no time.

We'll focus on using OpenAI's GPT-3.5-turbo model via LangChain's ChatOpenAI object, learning how to do simple terminal (stdout) streaming with LLMs and working up to parsing stream outputs with async iterator streaming.
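
To make that concrete, here is a minimal sketch of what those two approaches can look like with the 2023-era LangChain API: tokens printed straight to the terminal via StreamingStdOutCallbackHandler, then the same stream consumed as an async iterator via AsyncIteratorCallbackHandler. The prompt and model settings are illustrative assumptions, not necessarily the article's exact code.

```python
import asyncio

from langchain.callbacks import (
    AsyncIteratorCallbackHandler,
    StreamingStdOutCallbackHandler,
)
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# 1) Simple terminal (stdout) streaming: tokens are printed as they arrive.
chat = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
chat([HumanMessage(content="Explain streaming in one sentence.")])  # illustrative prompt


# 2) Async iterator streaming: tokens are parsed from an async generator instead
#    of being written directly to stdout, so you can post-process them yourself.
async def stream_tokens(question: str) -> None:
    handler = AsyncIteratorCallbackHandler()
    streaming_chat = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        streaming=True,
        callbacks=[handler],
        temperature=0,
    )
    # Start generation in the background, then consume tokens as they arrive.
    task = asyncio.create_task(
        streaming_chat.agenerate([[HumanMessage(content=question)]])
    )
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task


asyncio.run(stream_tokens("Explain streaming in one sentence."))
```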

📌 Code …
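
The chapter's end point is serving that token stream over HTTP; the article wires this up for a full LangChain agent. As a rough sketch of the serving side only, here is how an async token generator can be exposed through FastAPI's StreamingResponse. The /chat route, the query parameter, and the use of a plain ChatOpenAI call in place of an agent are assumptions for illustration, not the article's exact code.

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

app = FastAPI()


async def token_stream(query: str):
    """Yield tokens as the model produces them (hypothetical helper)."""
    handler = AsyncIteratorCallbackHandler()
    chat = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        streaming=True,
        callbacks=[handler],
        temperature=0,
    )
    # Run generation in the background while the endpoint streams tokens out.
    task = asyncio.create_task(chat.agenerate([[HumanMessage(content=query)]]))
    async for token in handler.aiter():
        yield token  # each chunk is flushed to the client as soon as it arrives
    await task


@app.get("/chat")  # hypothetical route; the article serves an agent here
async def chat_endpoint(query: str):
    return StreamingResponse(token_stream(query), media_type="text/plain")
```

With this running under uvicorn, a request to /chat?query=... shows the response arriving token by token rather than as a single block.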
