March 13, 2024, 2 p.m. | Stephanie Palazzolo

The Information www.theinformation.com

If you tuned into Nvidia’s and Microsoft’s recent quarterly earnings calls, you probably heard their leaders discuss how inference—a fancy word for operating an artificial intelligence model—is becoming a bigger part of the computing workloads in their data centers. In other words, chatbots like ChatGPT Enterprise are serving real customers. That contrasts with what was going on for much of last year, when major tech firms were developing, or training, the models, which also uses a lot of computing capacity. …
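The training/inference split the article describes can be illustrated with a toy sketch (all names here are hypothetical, not from the article): training repeatedly runs a model forward *and* updates its parameters, which is compute-heavy, while inference only runs the finished model forward on new inputs.

```python
# Toy illustration (hypothetical, not from the article) of training vs. inference.

def predict(weight, x):
    """Forward pass: this is all inference does."""
    return weight * x

def train(data, steps=100, lr=0.01):
    """Training: run the forward pass AND adjust the weight on every step."""
    weight = 0.0
    for _ in range(steps):
        for x, target in data:
            error = predict(weight, x) - target
            weight -= lr * error * x  # gradient update; absent during inference
    return weight

# Fit y = 2x once (training), then serve predictions with the frozen weight
# (inference) -- the workload data centers are increasingly running.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(predict(w, 5.0), 2))
```

The point of the sketch: the inner gradient-update loop is what made last year's training runs so compute-hungry, while serving customers only exercises the cheap `predict` path, over and over, at scale.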


Data Engineer

@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania

Artificial Intelligence – Bioinformatic Expert

@ University of Texas Medical Branch | Galveston, TX

Lead Developer (AI)

@ Cere Network | San Francisco, US

Research Engineer

@ Allora Labs | Remote

Ecosystem Manager

@ Allora Labs | Remote

Founding AI Engineer, Agents

@ Occam AI | New York