Jan. 5, 2024, 6:51 a.m. | /u/Electronic_Hawk524

Machine Learning www.reddit.com

I have to make a choice between a single A100 (80 GB) and 4x4096 (92 GB total).
I am looking to train a 7B model. It looks like a 7B model will take about 55 GB (using Adam as the optimizer).
So are the 4x4096 GPUs even enough? If I train with DPO or RLHF, which keep two models in memory, will that make the GPU memory requirement 3x?

Which one should I use, the A100 or the 4x4096?
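(For readers estimating this themselves: the post's 55 GB figure can be sanity-checked with back-of-envelope arithmetic. The sketch below is not from the post; the 16-bytes-per-parameter breakdown is the common mixed-precision AdamW accounting, and it ignores activations, gradient checkpointing, and sharding, all of which change the real number.)

```python
def training_mem_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough memory estimate for weights + grads + optimizer states,
    ignoring activations and framework overhead."""
    return n_params * bytes_per_param / 1e9

# Mixed-precision AdamW: fp16 weights (2) + fp16 grads (2)
# + fp32 master weights (4) + fp32 Adam moments m and v (4 + 4)
# = 16 bytes per parameter.
full = training_mem_gb(7e9, 16)   # 112.0 GB for a 7B model, before activations

# A frozen reference model (as in DPO) needs only fp16 weights,
# no grads or optimizer states: +2 bytes per parameter.
ref = training_mem_gb(7e9, 2)     # 14.0 GB extra

print(full, ref)
```

Under this accounting a 7B model does not fit on a single 80 GB A100 with plain AdamW, which is why people reach for ZeRO/FSDP sharding across multiple GPUs, 8-bit optimizers, or LoRA-style fine-tuning; the 55 GB figure in the post likely already assumes one of those reductions.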

