May 6, 2023, 2:30 p.m. | Venelin Valkov


In this video, we'll explore OpenLLaMA, an open-source reproduction of Meta AI's LLaMA large language model. We'll load the 7B model in a Google Colab notebook using the HuggingFace Transformers weights and run a couple of prompts. We'll also discuss a sampling strategy that improves the quality of the generated responses.
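A minimal sketch of the kind of loading-and-prompting workflow described above, using the HuggingFace Transformers API. The model id `openlm-research/open_llama_7b` and the specific generation settings are assumptions, not taken from the video:

```python
# Hedged sketch: load OpenLLaMA 7B via HuggingFace Transformers and run a prompt.
# MODEL_ID and the values in GENERATION_KWARGS are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "openlm-research/open_llama_7b"  # assumed HF Hub repo id

# Sampling settings of the kind discussed in the video (assumed values)
GENERATION_KWARGS = dict(
    max_new_tokens=128,
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.2,
)


def generate(model, tokenizer, prompt: str) -> str:
    """Tokenize a prompt, generate a continuation, and decode only the new tokens."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, **GENERATION_KWARGS)
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    # Downloading the 7B weights needs substantial RAM/VRAM (hence the Colab GPU).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )
    print(generate(model, tokenizer, "Q: What is a large language model?\nA:"))
```

The heavy download is kept under the `__main__` guard so the helper can be reused in a notebook cell; on Colab, `device_map="auto"` places the weights on the available GPU.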

The creators of OpenLLaMA have released a public preview of the 7B model, trained on 200 billion tokens, and provide PyTorch and JAX weights for the pre-trained OpenLLaMA models. They also compare …
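The video does not spell out the sampling strategy in this excerpt, but a common choice that improves generated responses is nucleus (top-p) sampling: sample only from the smallest set of tokens whose cumulative probability reaches `p`. A self-contained NumPy sketch (the function name and defaults are illustrative, not from the video):

```python
import numpy as np


def top_p_sample(logits, p=0.9, temperature=0.7, rng=None):
    """Nucleus (top-p) sampling: draw a token id from the smallest set of
    tokens whose cumulative probability mass reaches p."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # Numerically stable softmax
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]        # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # keep enough tokens to reach p
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()       # renormalize over the nucleus
    return int(rng.choice(keep, p=kept))
```

With a sharply peaked distribution and a small `p`, the nucleus collapses to the single most likely token, so the sample is deterministic; with a larger `p`, low-probability tail tokens are still excluded, which is what curbs incoherent continuations.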

