Jan. 30, 2024, midnight | Venelin Valkov


Getting bad predictions from your Tiny LLM? Learn how to fine-tune a small LLM (e.g. Phi-2 or TinyLlama) and potentially improve its performance. You'll learn how to set up the dataset, model, tokenizer, and LoRA adapter. We'll train the model (TinyLlama) on a single GPU with custom data and evaluate the predictions.
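As a rough preview of the setup step, here is a minimal sketch of attaching a LoRA adapter to TinyLlama with the Hugging Face transformers and peft libraries. The checkpoint name and LoRA hyperparameters below are illustrative assumptions, not necessarily the values used in the full tutorial.

```python
# Sketch: load TinyLlama and wrap it with a LoRA adapter for fine-tuning.
# Model name and hyperparameters are illustrative, not the tutorial's exact values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # TinyLlama has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # fits on a single consumer GPU
    device_map="auto",
)

# LoRA: train small low-rank adapter matrices instead of the full weights
lora_config = LoraConfig(
    r=16,                      # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

From here, the wrapped model can be passed to a standard training loop or trainer over the custom dataset before evaluating its predictions.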

Full text tutorial (in progress, requires MLExpert Pro): https://www.mlexpert.io/bootcamp/fine-tuning-tiny-llm-on-custom-dataset

AI Bootcamp (in preview): https://www.mlexpert.io/membership
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain

00:00 - Intro
00:36 - Text …

Tags: fine-tuning, LLM, TinyLlama, Phi-2, LoRA, GPU, dataset, sentiment analysis, training, predictions, performance
