Intel and Nvidia Square Off in GPT-3 Time Trials
IEEE Spectrum (spectrum.ieee.org)
For the first time, a large language model—a key driver of recent AI hype and hope—has been added to MLPerf, a set of neural-network training benchmarks that has previously been called the Olympics of machine learning. Computers built around Nvidia's H100 GPU and Intel's Habana Gaudi2 chips were the first to be tested on how quickly they could perform a modified training run of GPT-3, the large language model behind ChatGPT.
A 3,584-GPU computer run as …