Nov. 30, 2023, 3:27 a.m. | /u/lightSpeedBrick

r/MachineLearning (www.reddit.com)

TL;DR: Why does GPU memory usage spike during the gradient update step (I can't account for ~10 GB of it) and then drop back down?

I've been working on fine-tuning some of the larger LMs available on Hugging Face (e.g. Falcon-40B and Llama-2-70B), and so far my memory-requirement estimates don't add up. I have access to 4 A100-80GB GPUs and was fairly confident I'd have enough memory to fine-tune Falcon-40B with LoRA, but I keep getting CUDA OOM errors. I have figured …
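One way to narrow down where the spike happens is to log CUDA memory around each phase of a training step. Below is a minimal sketch assuming a standard PyTorch loop; the `model`, `optimizer`, and `batch` names are hypothetical placeholders, not from the original post. A transient peak at `optimizer.step()` is often optimizer state (e.g. AdamW's moment buffers, which are allocated lazily on the first update) plus temporary kernel workspace.

```python
import torch

def log_mem(tag: str) -> None:
    # Current vs. peak allocated memory on the default CUDA device, in GB.
    alloc = torch.cuda.memory_allocated() / 1e9
    peak = torch.cuda.max_memory_allocated() / 1e9
    print(f"{tag:>16}: allocated={alloc:6.2f} GB, peak={peak:6.2f} GB")

def train_step(model, optimizer, batch):
    # `model`, `optimizer`, `batch` are stand-ins for the actual setup.
    torch.cuda.reset_peak_memory_stats()
    log_mem("start")

    loss = model(**batch).loss              # forward: activations accumulate
    log_mem("after forward")

    loss.backward()                         # backward: gradients materialize
    log_mem("after backward")

    optimizer.step()                        # AdamW allocates its moment
    log_mem("after step")                   # buffers lazily on the first step

    optimizer.zero_grad(set_to_none=True)   # set_to_none frees gradient memory
    log_mem("after zero_grad")
```

If the peak appears only on the first optimizer step and memory then settles, lazily allocated optimizer state is the usual explanation; comparing against `torch.cuda.memory_reserved()` can also reveal a few GB held by the caching allocator rather than by live tensors.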

