Sept. 14, 2023, 12:22 p.m. | /u/devolvedai

Machine Learning | www.reddit.com

Hello everyone,

I'm currently at a crossroads with a decision that I believe many in this community have faced or will face at some point: should I rent cloud GPU instances, such as an AWS EC2 p3.2xlarge (a single Tesla V100), or invest in building a high-performance rig at home with multiple RTX 4090s for training a large language model?

**Context:** I run a startup, and we're currently fine-tuning an open-source LLM; the computational demands are, unsurprisingly, high. We …

