April 9, 2024, 6:28 p.m. |

Latest stories for ZDNET in Artificial-Intelligence www.zdnet.com

Intel says the chip trains large language models almost twice as fast as Nvidia's H100 and runs inference fifty percent faster.


Software Engineer for AI Training Data (School Specific)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Python)

@ G2i Inc | Remote

Software Engineer for AI Training Data (Tier 2)

@ G2i Inc | Remote

Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US