Sept. 4, 2023, 3:30 p.m. | Venelin Valkov


Full text tutorial (requires MLExpert Pro): https://www.mlexpert.io/prompt-engineering/fine-tuning-llama-2-on-custom-dataset

Learn how to fine-tune the Llama 2 7B base model on a custom dataset using a single T4 GPU. We'll use the QLoRA technique to train an LLM to summarize conversations between support agents and customers on Twitter.
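Below is a minimal sketch of the kind of QLoRA setup this tutorial covers, using the Hugging Face transformers, bitsandbytes, and peft libraries. The model name, LoRA hyperparameters, and target modules are illustrative assumptions, not necessarily the exact values used in the video.

```python
# Illustrative QLoRA setup: 4-bit quantized Llama 2 7B plus LoRA adapters,
# small enough to fine-tune on a single T4 (16 GB) GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

# Load the base model weights in 4-bit NF4 precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters to the attention projections;
# only these adapter weights are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                     # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here, the quantized model with adapters can be passed to a standard Trainer (or trl's SFTTrainer) together with the tokenized conversation-summarization dataset; see the full tutorial and GitHub repository below for the complete training loop.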

Discord: https://discord.gg/UaNPxVD6tv
Prepare for the Machine Learning interview: https://mlexpert.io
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain

Join this channel to get access to the perks and support my work:
https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join

00:00 - When to Fine-tune …

