Jan. 5, 2024, 6:51 a.m. | /u/Electronic_Hawk524

Machine Learning www.reddit.com

I have to make a choice between a single A100 (80 GB) and 4x RTX 4090 (96 GB total).
I am looking to train a 7B model. It looks like a 7B model will take about 55 GB (using Adam as the optimizer).
So if I go with the 4x RTX 4090 setup, is that even enough? If I train with DPO or RLHF, which keeps two models in memory, will that roughly triple the GPU memory requirement?

Which one should I use: the A100 or the 4x RTX 4090?
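For a rough sanity check, the memory arithmetic can be sketched as below. The 55 GB figure depends heavily on precision and optimizer-state layout; this sketch assumes fp16 weights and gradients, fp32 Adam moment states, and a flat 20% overhead for activations and buffers. Those assumptions are mine, not the poster's, and the numbers are illustrative only.

```python
# Back-of-envelope GPU memory estimate for full fine-tuning a 7B model.
# Assumed layout (not from the post): fp16 weights (2 B/param),
# fp16 gradients (2 B/param), fp32 Adam m and v states (8 B/param),
# plus ~20% overhead for activations and framework buffers.

def training_memory_gb(n_params_billion: float,
                       bytes_weights: int = 2,    # fp16 weights
                       bytes_grads: int = 2,      # fp16 gradients
                       bytes_optimizer: int = 8,  # fp32 Adam m + v
                       overhead: float = 0.2) -> float:
    """Return an approximate training-memory footprint in GB."""
    bytes_per_param = bytes_weights + bytes_grads + bytes_optimizer
    base_gb = n_params_billion * 1e9 * bytes_per_param / 1e9
    return base_gb * (1 + overhead)

if __name__ == "__main__":
    single = training_memory_gb(7)            # one trainable policy model
    print(f"7B, Adam, single model : ~{single:.0f} GB")

    # DPO (and RLHF) also holds a frozen reference model, which only
    # needs its weights (no gradients or optimizer states).
    reference_gb = 7 * 1e9 * 2 / 1e9          # fp16 weights only
    print(f"7B DPO (policy + ref)  : ~{single + reference_gb:.0f} GB")
```

Under these assumptions even a single 80 GB A100 would need memory-saving techniques for full fine-tuning (e.g. ZeRO/FSDP sharding, gradient checkpointing, an 8-bit optimizer, or LoRA), so the exact training setup matters as much as the raw card count.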

