April 9, 2024, 6:28 p.m. |

Latest stories for ZDNET in Artificial-Intelligence www.zdnet.com

The chip is almost twice as fast at training large language models as Nvidia's H100, Intel says, and 50 percent faster at inference.


Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Risk Management - Machine Learning and Model Delivery Services, Product Associate - Senior Associate

@ JPMorgan Chase & Co. | Wilmington, DE, United States

Senior ML Engineer (Speech/ASR)

@ ObserveAI | Bengaluru