June 22, 2023, 8:36 a.m. | 1littlecoder

Source: 1littlecoder (www.youtube.com)

This tutorial shows how to fine-tune the recent Falcon-7B model in a single Google Colab notebook and turn it into a chatbot.

We will leverage the PEFT library from the Hugging Face ecosystem, as well as QLoRA, for more memory-efficient fine-tuning.
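As a rough illustration of that setup, here is a minimal sketch of loading Falcon-7B in 4-bit and attaching LoRA adapters with PEFT. It assumes transformers, peft, bitsandbytes, and accelerate are installed; the LoRA hyperparameters (r, alpha, dropout) are illustrative defaults, not values taken from the video.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-7b"

# QLoRA: quantize the frozen base model to 4-bit NF4 so it fits in Colab GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # float16: free-tier Colab T4s lack bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped with custom modeling code at release
)
model = prepare_model_for_kbit_training(model)

# LoRA: only these small adapter matrices are trained; the 4-bit base stays frozen
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters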

We will use the Guanaco dataset, a clean subset of the OpenAssistant dataset adapted for training general-purpose chatbots.
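A minimal sketch of pulling that dataset with the datasets library, assuming the Hub id timdettmers/openassistant-guanaco (the subset popularized by the QLoRA work):

```python
from datasets import load_dataset

# Guanaco: cleaned OpenAssistant conversations stored in a single "text" column
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Each entry is one full conversation, formatted as
# "### Human: ... ### Assistant: ..." turns in plain text
print(dataset[0]["text"][:200])
```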

Here we will use the SFTTrainer from the TRL library, which provides a wrapper around the transformers Trainer to easily fine-tune models …
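Putting the pieces together, a hedged sketch of the training step with SFTTrainer. The argument names match the mid-2023 TRL API (newer TRL releases move several of these into SFTConfig), and the hyperparameters are illustrative, not prescribed by the tutorial:

```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="falcon-7b-guanaco",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # effective batch size of 4 on a single GPU
    learning_rate=2e-4,
    max_steps=500,
    logging_steps=10,
    optim="paged_adamw_8bit",  # paged optimizer introduced alongside QLoRA
    fp16=True,
)

trainer = SFTTrainer(
    model=model,                # the 4-bit Falcon with LoRA adapters from above
    train_dataset=dataset,
    dataset_text_field="text",  # Guanaco stores each conversation as one string
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
```

Because only the LoRA adapters are trained, trainer.save_model() writes a small adapter checkpoint rather than a full 7B-parameter model.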
