Feb. 22, 2024, 1:51 p.m. | /u/uberdev

Machine Learning | www.reddit.com

I'm launching a photo processing app that uses GPUs for inference on a large-ish image processing model. Right now, there are only a few users, so my GPU instance is idle 99.99% of the time, yet I'm paying $400+ per month for it to sit idle. I could spin up the GPU instance only when a user triggers an inference job, but then the user would wait several minutes for the instance to come up, and I can't afford that latency. …
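
For context, here is a minimal sketch of the "spin up on demand" pattern the post describes, assuming a stopped AWS EC2 GPU instance and boto3 (the region, instance ID, and run_inference helper are hypothetical placeholders, not anything from the original post). The waiter call is exactly where the multi-minute cold start the poster can't afford shows up:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical GPU instance ID


def ensure_instance_running(instance_id: str) -> None:
    """Start the stopped GPU instance and block until it is running."""
    state = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]["State"]["Name"]
    if state != "running":
        ec2.start_instances(InstanceIds=[instance_id])
        # This wait is the multi-minute cold start the poster wants to
        # avoid; starting a stopped instance is faster than provisioning
        # a fresh one, but it is nowhere near instant.
        ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])


def run_inference(image_bytes: bytes):
    # Hypothetical placeholder: forward the request to the model server
    # on the instance (e.g., over HTTP) once it is reachable.
    raise NotImplementedError


def handle_job(image_bytes: bytes):
    ensure_instance_running(INSTANCE_ID)
    return run_inference(image_bytes)
```

Stopping the instance again after some idle timeout would cap the monthly bill at actual usage, at the cost of this cold-start wait on the first request after a quiet period, which is the trade-off the post is asking how to escape.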

