Sept. 27, 2023, 2:02 p.m. | /u/velobro

Deep Learning www.reddit.com

Hi r/deeplearning,

**TL;DR:** Train and deploy custom models on pay-per-use GPUs that turn off when you're not using them.

**Documentation:** [https://docs.beam.cloud](https://docs.beam.cloud/examples/stable-diffusion-gpu)

I’m Eli, and my co-founder and I built [Beam](https://beam.cloud/) to run workloads on serverless cloud GPUs with hot reloading, autoscaling, and (of course) fast cold start. You don’t need Docker or AWS to use it, and everyone who signs up gets 10 hours of free GPU credit to try it out.

Here are a few examples of things you can …
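To give a concrete flavor of the workflow, here is a minimal sketch of what deploying a model as a serverless GPU endpoint on Beam might look like. The `App`, `Runtime`, `Image`, and `rest_api` names and the GPU/memory parameters below are assumptions for illustration rather than a verbatim copy of the SDK, so treat this as a sketch and check the [documentation](https://docs.beam.cloud) for the actual API.

```python
# Illustrative sketch only -- the names used here (App, Runtime, Image,
# rest_api) are assumed for demonstration; see docs.beam.cloud for the real SDK.
from beam import App, Image, Runtime

app = App(
    name="sentiment-api",
    runtime=Runtime(
        cpu=4,
        memory="16Gi",
        gpu="T4",  # pay-per-use GPU; scales to zero when the endpoint is idle
        image=Image(python_packages=["torch", "transformers"]),
    ),
)

@app.rest_api()
def predict(**inputs):
    # Model loading and inference happen inside the remote GPU container
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis", device=0)
    return classifier(inputs["text"])
```

Deploying would then be a single CLI call along the lines of `beam deploy app.py` (the exact command may differ), after which the GPU only spins up when the endpoint receives a request.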

