July 14, 2023, 12:26 p.m. | Abhishek Thakur


In this video, I'll show you how to deploy and run large language model (LLM) chatbots locally. The steps are also valid for a production environment, so the tutorial is production ready! By the end of the tutorial, you will be running an LLM such as Falcon-7B (or Falcon-40B, or any other LLM) locally, and you will also have deployed a chat interface for talking to the local LLM.
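One common way to serve a model like Falcon-7B locally is Hugging Face's text-generation-inference Docker image. A minimal launch sketch follows; the port mapping, volume path, and instruct-tuned model ID are assumptions, not prescriptions from the video:

```shell
# Serve Falcon-7B-Instruct with text-generation-inference.
# Assumes Docker with NVIDIA GPU support is available.
model=tiiuae/falcon-7b-instruct
volume=$PWD/data  # cache model weights here between restarts

docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $volume:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id $model
```

Once the container is up, the model is reachable over HTTP on port 8080 of the host.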

For the video, we will be using text-generation-inference: …
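text-generation-inference exposes a simple HTTP API; a chat interface just POSTs prompts to its `/generate` endpoint. Here is a hedged sketch of what such a request looks like — the port (8080), prompt, and generation parameters are illustrative assumptions:

```python
import json

def build_generate_request(prompt: str, max_new_tokens: int = 200) -> dict:
    """Build the JSON body that TGI's POST /generate endpoint expects."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.7,  # example sampling setting
        },
    }

payload = build_generate_request("What is Falcon-7B?")
print(json.dumps(payload))

# To actually query a running server (requires the Docker container above):
# import requests
# resp = requests.post("http://127.0.0.1:8080/generate", json=payload)
# print(resp.json()["generated_text"])
```

The same payload shape works for the streaming `/generate_stream` endpoint, which chat UIs typically use to show tokens as they arrive.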

