March 27, 2024, 6:45 p.m. | Samuel K. Moore

IEEE Spectrum spectrum.ieee.org
Times change, and so must benchmarks. Now that we’re firmly in the age of massive generative AI, it’s time to add two such behemoths, Llama 2 70B and Stable Diffusion XL, to MLPerf’s inferencing tests. Version 4.0 of the benchmark includes more than 8,500 results from 23 submitting organizations. As has been the case from the beginning, computers with Nvidia GPUs came out on top, particularly those with its H200 processor. But AI accelerators from Intel and Qualcomm were in …

