Nov. 4, 2023, 1:35 a.m. | /u/IXMachina

Machine Learning www.reddit.com

I've been trying to fine-tune the Llama 2 13B model (not quantized) on an AWS g5.12xlarge instance, which has 4× 24 GB A10G GPUs and 192 GB of RAM. I'm using PEFT LoRA for the fine-tuning. I've been trying to train it with the Hugging Face Trainer along with DeepSpeed stage 3, since it can offload parameters to the CPU, but I run into out-of-memory errors regardless of the batch size or sequence length. In the DeepSpeed configuration file I have …
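For reference (the OP's actual config file is truncated above, so this is only an illustrative sketch), a typical DeepSpeed ZeRO stage 3 configuration that offloads both parameters and optimizer state to the CPU, written for use with the Hugging Face Trainer's `"auto"` placeholders, looks roughly like this:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "bf16": { "enabled": "auto" },
  "gradient_accumulation_steps": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_clipping": "auto"
}
```

If a config like this still OOMs, the usual suspects are activation memory rather than weights: enabling gradient checkpointing on the model and reducing `train_micro_batch_size_per_gpu` while raising `gradient_accumulation_steps` are the common first steps, though the right fix depends on the config the OP actually has.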

