April 16, 2023, 10:41 a.m. | /u/lolman2215

Deep Learning | www.reddit.com

Hello guys. With the new RTX 6000, are there any general guidelines for building a "small" deep learning workstation?

How do the latest A100 80GB GPUs compare with the new RTX 6000 Ada 48GB when

a) Training LLMs?

b) Performing inference with LLMs?

The 2x A100 setup provides 160 GB of VRAM, while the 3x RTX 6000 setup provides 144 GB. But data transfer between GPUs is probably more of a bottleneck on the RTX 6000 build, since the A100 supports NVLink and the RTX 6000 Ada does not. A quick way to check both points on an actual box is sketched below.
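Here is a minimal sketch (assuming PyTorch with CUDA is installed) that prints a back-of-envelope memory estimate, the pooled VRAM, and whether each GPU pair supports CUDA peer-to-peer access, which is a rough proxy for inter-GPU transfer speed. The 13B parameter count and the bytes-per-parameter figures are illustrative assumptions, not measurements of any particular model:

```python
import torch

# Back-of-envelope LLM memory needs (13B params is an illustrative
# assumption, not a specific model):
# - training with Adam in mixed precision: roughly 16-20 bytes/param
#   (fp16 weights + grads + fp32 master weights + optimizer states)
# - fp16 inference: roughly 2 bytes/param, plus activations/KV cache
params = 13e9
print(f"~training: {params * 18 / 2**30:.0f} GiB")
print(f"~inference: {params * 2 / 2**30:.0f} GiB")

# Report each GPU's VRAM and the pooled total.
n = torch.cuda.device_count()
total_gib = 0.0
for i in range(n):
    props = torch.cuda.get_device_properties(i)
    gib = props.total_memory / 2**30
    total_gib += gib
    print(f"GPU {i}: {props.name}, {gib:.1f} GiB")
print(f"pooled VRAM across {n} GPUs: {total_gib:.1f} GiB")

# Check CUDA peer-to-peer access between every GPU pair. P2P over
# NVLink (A100) is much faster than PCIe; without P2P, transfers
# bounce through host memory.
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"P2P {i} -> {j}: {'yes' if ok else 'no'}")
```

Without NVLink, frameworks fall back to PCIe (or host staging) for all-reduce traffic during multi-GPU training, so the 3x setup's extra card may not buy as much as the raw VRAM numbers suggest.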

