Jan. 30, 2024 | Venelin Valkov


Getting bad predictions from your tiny LLM? Learn how to fine-tune a small LLM (e.g. Phi-2 or TinyLlama) to (possibly) improve its performance. You'll see how to set up the dataset, model, tokenizer, and LoRA adapter. We'll train the model (TinyLlama) on a single GPU with custom data and evaluate its predictions.
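The LoRA adapter mentioned above is what makes single-GPU fine-tuning feasible: instead of updating a full weight matrix, it trains two small low-rank matrices whose product is added to the frozen weight. A minimal NumPy sketch of that idea (the dimensions, rank, and scaling factor below are illustrative assumptions, not values from the video):

```python
import numpy as np

# Hypothetical dimensions for one projection layer in a tiny LLM (assumptions).
d_out, d_in = 2048, 2048   # shape of the frozen pretrained weight
r = 8                      # LoRA rank
alpha = 16                 # LoRA scaling factor

rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    """Forward pass: frozen weight plus the low-rank update, scaled by alpha/r."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# Because B starts at zero, the adapter is initially a no-op:
# the adapted model matches the base model before any training.
assert np.allclose(y, W @ x)

# Trainable parameters drop from d_out * d_in to r * (d_in + d_out).
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable: {lora_params:,} vs full: {full_params:,}")
```

During fine-tuning only `A` and `B` receive gradients, so the optimizer state and updates fit comfortably on one GPU; in practice a library such as Hugging Face PEFT wires this into the model's attention layers for you.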

Full text tutorial (in progress, requires MLExpert Pro): https://www.mlexpert.io/bootcamp/fine-tuning-tiny-llm-on-custom-dataset

AI Bootcamp (in preview): https://www.mlexpert.io/membership
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain

00:00 - Intro
00:36 - Text …

Tags: dataset, fine-tuning, GPU, LLaMA, LLM, LoRA, performance, Phi-2, predictions, sentiment analysis, TinyLlama, training
