Serverless NLP Inference via HTTP API on AWS
Jan. 18, 2022, 8:29 p.m. | Heiko Hotz
Towards Data Science - Medium towardsdatascience.com
How to set up an API for your serverless endpoint using Boto3
Photo by Tianshu Liu on Unsplash

What is this about?
In a previous blog post I described how to deploy NLP models on Amazon SageMaker for serverless inference. As a next step, we want to measure the performance of this serverless endpoint, which requires invoking it with some test data. This can easily be done using the Boto3 sagemaker-runtime client: