July 30, 2023, noon | Prompt Engineering

Prompt Engineering (www.youtube.com)

In this video, I will show you the easiest way to fine-tune the Llama-2 model on your own data using the autotrain-advanced package from Hugging Face.

Steps to follow:
--- installation of packages:
!pip install autotrain-advanced
!pip install huggingface_hub

!autotrain setup --update-torch (optional; only needed on Google Colab)
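
Before training, the folder passed to --data_path in the training command below needs a training file. Here is a minimal sketch of one way to build it, assuming the SFT trainer picks up a train.csv with a single "text" column; the file name, column name, and instruction/response prompt template here are illustrative assumptions, not taken from the video:

import os
import pandas as pd

# Illustrative instruction/response pairs; replace with your own data.
examples = [
    {"instruction": "What does autotrain-advanced do?",
     "response": "It fine-tunes language models with a single CLI command."},
]

# Flatten each pair into one prompt-formatted string per row.
rows = [
    f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"
    for ex in examples
]

# "your_data_set" matches the --data_path placeholder in the command below.
os.makedirs("your_data_set", exist_ok=True)
pd.DataFrame({"text": rows}).to_csv("your_data_set/train.csv", index=False)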

--- Hugging Face credentials:
from huggingface_hub import notebook_login
notebook_login()

--- single-line training command!
!autotrain llm --train --project_name your_project_name --model TinyPixel/Llama-2-7B-bf16-sharded --data_path your_data_set --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 …
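
Once training finishes, the saved LoRA adapter can be attached to the base model for a quick test. A minimal inference sketch, assuming the run wrote its adapter into the project directory; the output path, prompt format, and generation settings are assumptions, not taken from the video:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the sharded base model, then attach the fine-tuned adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")
model = PeftModel.from_pretrained(base, "your_project_name")

# Prompt in the same format used for the training rows.
prompt = "### Instruction:\nWhat does autotrain-advanced do?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))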
