June 28, 2023, 6:39 p.m. | Samuel K. Moore

IEEE Spectrum spectrum.ieee.org
For the first time, a large language model—a key driver of recent AI hype and hope—has been added to MLPerf, the set of neural network training benchmarks that has previously been called the Olympics of machine learning. Computers built around Nvidia's H100 GPU and Intel's Habana Gaudi2 chips were the first to be tested on how quickly they could perform a modified version of the training of GPT-3, the large language model behind ChatGPT.


A 3,584-GPU computer run as …

