Jan. 16, 2022, 7:19 p.m. | /u/xenomarz-84

Deep Learning www.reddit.com

Hi Guys,

My research group is interested in buying a deep-learning GPU cluster. After eliminating over-budget options, we are left with the following two options to choose from:

  1. 8x A40 48GB PCIe GPUs (with 2-way NVLink bridge between each pair of cards)
  2. 4x A100 80GB SXM4 GPUs with 4-way NVLink

Which option do you think will yield better performance in general?

After reading this blog, I understood that the real bottleneck in GPUs for deep-learning is the memory bandwidth, which …
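If memory bandwidth really is the deciding factor, a quick back-of-envelope comparison is possible. The sketch below uses the per-card bandwidth figures from NVIDIA's public spec sheets (assumption: A40 at roughly 696 GB/s GDDR6, A100 80GB SXM4 at roughly 2,039 GB/s HBM2e); it is a rough aggregate, not a real benchmark, and ignores interconnect and scaling effects:

```python
# Back-of-envelope aggregate memory bandwidth for the two cluster options.
# Per-card figures are assumptions taken from NVIDIA spec sheets:
#   A40 (GDDR6):          ~696 GB/s
#   A100 80GB SXM4 (HBM2e): ~2039 GB/s
a40_bw_gbs = 696
a100_bw_gbs = 2039

option1 = 8 * a40_bw_gbs   # 8x A40
option2 = 4 * a100_bw_gbs  # 4x A100 80GB SXM4

print(f"Option 1 (8x A40):  {option1} GB/s aggregate")
print(f"Option 2 (4x A100): {option2} GB/s aggregate")
```

By this crude measure the 4x A100 option comes out ahead despite having half the card count, though real training throughput also depends on NVLink topology, per-GPU memory capacity, and how well the workload scales across devices.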

deeplearning nvidia
