March 13, 2024, 2 p.m. | Stephanie Palazzolo

The Information www.theinformation.com

If you tuned into Nvidia’s and Microsoft’s recent quarterly earnings calls, you probably heard their leaders discuss how inference—a fancy word for operating an artificial intelligence model—is becoming a bigger part of the computing workloads in their data centers. In other words, chatbots like ChatGPT Enterprise are serving real customers. That contrasts with what was going on for much of last year, when major tech firms were developing, or training, the models, which also uses a lot of computing capacity. …
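
As a rough illustration of the difference between the two workloads, here is a minimal PyTorch sketch that contrasts inference (a single forward pass with gradients disabled) against one training step (a forward pass plus a gradient update). The tiny nn.Linear model is a hypothetical stand-in for a production-scale model, not anything from the article.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for a large AI model.
model = nn.Linear(8, 2)
x = torch.randn(1, 8)

# Inference: run the already-trained model on new input.
# No gradients are tracked, so it is comparatively cheap per request.
model.eval()
with torch.no_grad():
    prediction = model(x)

# Training: repeatedly update the model's weights, which requires
# gradient computation and an optimizer step on top of the forward pass.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
target = torch.tensor([1])
model.train()
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()
optimizer.step()
```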


Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Director, Clinical Data Science

@ Aura | Remote USA

Research Scientist, AI (PhD)

@ Meta | Menlo Park, CA | New York City