Aug. 7, 2023, 5:52 p.m. | /u/Hugejiji

r/MachineLearning | www.reddit.com

Hey,

I've been trying to build an ML workstation and was considering using two RTX 3090s to get the extra VRAM instead of a single 4090. However, I've run into some confusion about whether the two cards can actually share their VRAM. Do I need to connect them via NVLink to achieve this? I believe PyTorch's data parallelism just splits each batch across both GPUs (minimal sketch below), but that wouldn't effectively combine their VRAM, right?
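
Here's a minimal sketch of the data-parallel setup I mean (the model and sizes are made up, just for illustration):

```python
import torch
import torch.nn as nn

# Toy model, purely illustrative
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)

# nn.DataParallel replicates the *whole* model onto each GPU,
# so each 3090 still has to fit the full set of weights;
# the two 24 GB cards don't act as one 48 GB pool.
model = nn.DataParallel(model, device_ids=[0, 1]).to("cuda:0")

x = torch.randn(64, 1024, device="cuda:0")  # batch of 64, split 32/32 across the GPUs
out = model(x)                              # per-GPU outputs are gathered back on cuda:0
```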

Any advice or insights you can …
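
To clarify what I mean by "combining" VRAM: as far as I understand, actually using both cards' memory for a single model means splitting the model itself across devices (model parallelism), roughly like this toy sketch:

```python
import torch
import torch.nn as nn

class TwoGPUNet(nn.Module):
    """Hypothetical split: the first half lives on cuda:0, the second on
    cuda:1, so the weights are spread over both cards' VRAM."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(1024, 4096).to("cuda:0")
        self.part2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.part1(x.to("cuda:0")))
        # Activations cross between the cards here, over PCIe
        # (or NVLink, if the cards are bridged)
        return self.part2(x.to("cuda:1"))

net = TwoGPUNet()
out = net(torch.randn(64, 1024))
```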
