Aug. 29, 2023, 6:57 p.m. | /u/GinjaTurtles

Machine Learning www.reddit.com

### Context:
---
I have an app that needs GPUs for DL inference (I don't need GPUs for training; I own a 3070 Ti). My DL model's inference is pretty slow (the framework I'm using is known to be slow), so I'll need either one machine with multiple beefy GPUs or several GPUs spread across separate machines. My machines will be running custom Docker containers.

### Slow inference:
---
I was planning on putting a few GPU instances behind …
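Distributing inference requests across several GPU workers can be sketched with plain Python multiprocessing: each worker process pins itself to one GPU (via `CUDA_VISIBLE_DEVICES`) and pulls requests from its own queue, while a dispatcher round-robins incoming requests across the workers. Everything here is a hypothetical illustration, not the poster's actual setup: the model load and inference call are placeholders (a dummy `payload * 2` stands in so the sketch runs without a GPU), and in production you'd more likely put a load balancer or a serving framework in front instead of hand-rolling queues.

```python
import itertools
import multiprocessing as mp
import os


def worker(gpu_id, task_q, result_q):
    # Pin this process to one GPU *before* importing the DL framework
    # (hypothetical setup; frameworks read this variable at import time).
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # model = load_model()        # placeholder: load your DL model here
    for req_id, payload in iter(task_q.get, None):  # None is the shutdown sentinel
        # result = model(payload)  # placeholder inference call
        result = payload * 2       # dummy stand-in so this runs anywhere
        result_q.put((req_id, result))


def run_pool(num_gpus, requests):
    """Dispatch requests round-robin across num_gpus worker processes."""
    task_qs = [mp.Queue() for _ in range(num_gpus)]
    result_q = mp.Queue()
    procs = [mp.Process(target=worker, args=(i, task_qs[i], result_q))
             for i in range(num_gpus)]
    for p in procs:
        p.start()

    # Round-robin dispatch: request i goes to worker i % num_gpus.
    rr = itertools.cycle(task_qs)
    for req_id, payload in enumerate(requests):
        next(rr).put((req_id, payload))

    # Collect results (they may arrive out of order) and re-sort by id.
    results = {}
    for _ in requests:
        req_id, result = result_q.get()
        results[req_id] = result

    for q in task_qs:
        q.put(None)  # tell each worker to exit
    for p in procs:
        p.join()
    return [results[i] for i in range(len(requests))]


if __name__ == "__main__":
    print(run_pool(2, [1, 2, 3, 4]))
```

The same pattern extends to multiple machines by replacing the in-process queues with a network queue (e.g. a message broker) and running one container per GPU, which matches the custom-Docker-container setup described above.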

