July 6, 2023, 12:28 p.m. | /u/paulo_zip

Machine Learning www.reddit.com

Hi everyone,
I'm trying out some open-source LLMs to assess their viability at my company. One of the best GPUs for that purpose is the NVIDIA A100, since it supports bfloat16. However, my company uses AWS, and SageMaker offers only one instance type with that GPU, `p4d.24xlarge`, which is costly.
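For context, here is a minimal sketch of the kind of bfloat16 loading I have in mind, assuming PyTorch and Hugging Face Transformers; the model id is just a placeholder for whichever open-source LLM is being evaluated:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# bfloat16 needs Ampere-class hardware or newer (e.g. A100, A10G)
print(torch.cuda.get_device_name(0))
print("bf16 supported:", torch.cuda.is_bf16_supported())

# Placeholder model id -- substitute the model actually being evaluated
model_id = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load weights in bfloat16
    device_map="auto",           # requires `accelerate`; spreads layers across available GPUs
)

prompt = "Summarize the benefits of bfloat16 on the A100:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```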

Do you have alternatives for using this GPU in other clouds? Do GCP and Azure offer it at a cheaper price?

a100 aws azure cloud gcp gpu gpus instances llm machinelearning nvidia sagemaker
