July 21, 2022, 1 p.m. | TensorFlow

Source: TensorFlow on www.youtube.com

Wei Wei, Developer Advocate at Google, gives an overview of deploying ML models into production with TensorFlow Serving, a framework that makes it easy to serve production ML models with low latency and high throughput. Wei covers what TF Serving is, its architecture, and its general workflow, and shows how to start a TF Serving model server and send it POST requests from the command line, as sketched below.
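A minimal Python sketch of that request pattern, assuming a TF Serving instance is already running locally on the default REST port (8501) and serving a model registered as my_model; the model name, port, and input shape here are illustrative assumptions, not details from the video:

```python
import requests  # third-party HTTP client (pip install requests)

# Assumption: TF Serving is already running locally, e.g. via the official
# Docker image, with a model registered under the name "my_model":
#   docker run -p 8501:8501 \
#     --mount type=bind,source=/path/to/saved_model,target=/models/my_model \
#     -e MODEL_NAME=my_model -t tensorflow/serving
SERVER_URL = "http://localhost:8501/v1/models/my_model:predict"

# TF Serving's REST API expects a JSON body whose "instances" key holds a
# batch of inputs, one entry per example; the shape is purely illustrative.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(SERVER_URL, json=payload)
response.raise_for_status()

# The response JSON carries a "predictions" key with one output per instance.
print(response.json()["predictions"])
```

The same request can be issued with a command-line HTTP tool by POSTing the JSON body to the model's :predict endpoint, which is what the video demonstrates.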

Stay tuned for upcoming episodes on deploying production ML models with TensorFlow Serving. Wei …
