Llama 2: Fine-tuning Notebooks - QLoRA, DeepSpeed
July 20, 2023, 2 p.m. | code_your_own_AI
We will leverage the PEFT library from the Hugging Face ecosystem, together with QLoRA for more memory-efficient fine-tuning.
https://huggingface.co/meta-llama/Llama-2-7b
https://huggingface.co/meta-llama/Llama-2-13b
https://huggingface.co/meta-llama/Llama-2-13b-chat
https://huggingface.co/meta-llama/Llama-2-70b
https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
#ai
#finetune
#llama2
#shorts
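The title also mentions DeepSpeed; for multi-GPU or offloaded training, a ZeRO stage 2 configuration is a common companion to this setup. A minimal illustrative config (values are assumptions, not taken from the video's notebooks) might look like:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" },
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
```

With the Hugging Face `Trainer` integration, such a file is passed via the `deepspeed` training argument, and the `"auto"` values are filled in from the `TrainingArguments`.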