Jan. 13, 2022, 4:59 p.m. | Ernest Chan

Towards Data Science - Medium towardsdatascience.com

Serve hundreds to thousands of ML models — architectures from industry

When you only have one or two models to deploy, you can simply wrap them in a serving framework and deploy on a couple of instances or containers. However, if your ML use cases grow, or you build separate models on many segments of your data (like per-customer models), you may eventually need to serve a large number of models. This post will explore the question: How do …
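One common building block for architectures like this is lazy loading with bounded memory: rather than keeping every per-segment model resident, a server loads a model on first request and evicts the least recently used one when a cap is reached. The sketch below is illustrative only (the `ModelCache` class, `load_model` loader, and capacity are assumptions, not from the article):

```python
from collections import OrderedDict

class ModelCache:
    """Lazy-load per-segment models on demand, keeping at most
    `capacity` models in memory with LRU eviction (a sketch)."""

    def __init__(self, loader, capacity=2):
        self.loader = loader          # callable: model_id -> model object
        self.capacity = capacity
        self._cache = OrderedDict()   # model_id -> model, in recency order

    def get(self, model_id):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)   # mark as recently used
            return self._cache[model_id]
        model = self.loader(model_id)           # e.g. fetch from a model registry
        self._cache[model_id] = model
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)     # evict least recently used
        return model

# Stand-in "model" for illustration: a per-customer offset added to the input.
def load_model(customer_id):
    bias = hash(customer_id) % 10
    return lambda x: x + bias

cache = ModelCache(load_model, capacity=2)
prediction = cache.get("customer-a")(5)
```

In a real deployment the loader would pull serialized artifacts from a model registry or object store, and the capacity would be tuned to instance memory; the cache itself is what keeps thousands of registered models from requiring thousands of always-warm replicas.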

