April 29, 2023, 10:04 a.m. | /u/dpeckett

Machine Learning www.reddit.com

So it looks like NVIDIA has the ML space in a complete vice grip. Credit where it's due, I guess, but while training costs remain prohibitively high, innovation is going to be stifled.

From what I can see, a lot of that cost is due to the requirement to operate what basically amounts to a supercomputer (e.g. datacenter-class cards with GPUDirect, NVLink, InfiniBand RDMA, NVIDIA InfiniBand Clos fabrics). Everything here is right at home in HPC but completely …
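To make the interconnect dependence concrete, here is a minimal sketch (not from the original post) of multi-node data-parallel training with PyTorch: the gradient all-reduce that NCCL performs on every step is exactly where NVLink and GPUDirect RDMA over InfiniBand pay off, and where a commodity PCIe/Ethernet cluster falls behind. The model and shapes are placeholders; the launch assumes the usual torchrun environment variables.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL automatically uses NVLink / InfiniBand transports when they are
    # available, and falls back to PCIe or TCP sockets when they are not.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; any nn.Module works the same way under DDP.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)
    x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
    loss = ddp_model(x).sum()
    loss.backward()   # bucketed NCCL all-reduce of gradients happens here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=2 --nproc_per_node=8 train.py`, that backward-pass all-reduce crosses nodes every step, so its latency and bandwidth are set by the fabric. You can even see the gap on the same hardware by setting `NCCL_IB_DISABLE=1` or `NCCL_P2P_DISABLE=1`, which forces NCCL onto the slower fallback paths.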
