Nov. 27, 2023, 9 p.m. | Venelin Valkov

Discover vLLM, UC Berkeley's open-source library for fast LLM inference, featuring the PagedAttention algorithm that delivers up to 24x higher throughput than Hugging Face Transformers. We'll compare vLLM and Hugging Face Transformers using the Llama 2 7B model and learn how to easily integrate vLLM into your projects.

vLLM page: https://blog.vllm.ai/2023/06/20/vllm.html

Discord: https://discord.gg/UaNPxVD6tv
Prepare for the Machine Learning interview: https://mlexpert.io
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain

Join this channel to get access to the perks and support my work:
https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join

00:00 - What is vLLM? …
