Nov. 30, 2023, 3:27 a.m. | /u/lightSpeedBrick

Machine Learning www.reddit.com

TL;DR: Why does GPU memory usage spike during the gradient update step (I can't account for ~10 GB) and then drop back down?

I've been working on fine-tuning some of the larger LMs available on HuggingFace (e.g. Falcon-40B and Llama-2-70B), and so far my estimates for memory requirements don't add up. I have access to 4 A100-80GB GPUs and was fairly confident that I should have enough GPU memory to fine-tune Falcon-40B with LoRA, but I keep getting CUDA OOM errors. I have figured …
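For rough scale, here is a minimal back-of-envelope sketch of steady-state memory for LoRA fine-tuning, assuming bf16 base weights and AdamW; the Falcon-40B and 200M-adapter numbers are illustrative assumptions, not figures from the post, and the sketch ignores activations, the CUDA context, fragmentation, and temporary buffers allocated during the optimizer step.

```python
# Back-of-envelope GPU memory estimate for LoRA fine-tuning (illustrative
# assumptions only; ignores activations, CUDA context, fragmentation, and
# temporary buffers created during optimizer.step()).

GB = 1024 ** 3

def estimate_finetune_memory_gb(
    base_params: float,       # total parameters of the frozen base model
    trainable_params: float,  # LoRA adapter parameters (the only ones updated)
    weight_bytes: int = 2,    # bf16/fp16 base weights
    grad_bytes: int = 2,      # gradients kept in bf16/fp16
    optim_bytes: int = 8,     # AdamW: fp32 exp_avg + exp_avg_sq per trainable param
) -> float:
    weights = base_params * weight_bytes
    grads = trainable_params * grad_bytes
    optim_states = trainable_params * optim_bytes
    return (weights + grads + optim_states) / GB

# Hypothetical numbers: Falcon-40B base with ~200M LoRA parameters.
print(f"{estimate_finetune_memory_gb(40e9, 200e6):.1f} GB")  # ~76 GB, dominated by base weights
```

To locate where the actual peak occurs, torch.cuda.max_memory_allocated() and torch.cuda.memory_summary() can be checked right before and after optimizer.step(); one common source of a first-step jump is that PyTorch optimizers such as AdamW allocate their state tensors lazily on the first step.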

Tags: falcon40b, fine-tuning, gpu, gradient, huggingface, large models, llama, machinelearning, memory requirements, training, understanding, usage
