Sept. 18, 2023, 2:13 p.m. | Samuel K. Moore

IEEE Spectrum (spectrum.ieee.org)



Large language models like Llama 2 and ChatGPT are where much of the action is in AI. But how well do today’s datacenter-class computers execute them? Pretty well, according to the latest set of machine-learning benchmark results, with the best able to summarize more than 100 articles in a second. MLPerf’s twice-yearly batch of results, released on 11 September, included for the first time a test of a large language model (LLM), GPT-J. Fifteen computer companies submitted …
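For readers curious what the GPT-J summarization task looks like in practice, here is a minimal sketch using the Hugging Face transformers library. It is not the MLPerf inference harness; the checkpoint name, prompt wording, and generation settings are illustrative assumptions.

```python
# Illustrative sketch only: prompting GPT-J to summarize an article with the
# Hugging Face `transformers` library. The prompt format and generation
# settings below are assumptions, not the MLPerf benchmark configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint (~6B parameters)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

article = "..."  # article text to summarize goes here
prompt = f"Summarize the following article:\n\n{article}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the summary), not the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

The benchmarked systems run this kind of workload through vendor-optimized inference stacks; the sketch above is only meant to show the shape of the task being measured.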

