Dec. 6, 2023, 4:13 p.m. | /u/thefreemanever

Deep Learning www.reddit.com

Considering we have an LLM sized 48 GB, can we use 2x 24 GB or 3x 16 GB GPUs (with no NVLink) to run the model? (By "run" I mean inference.)

deeplearning gpus inference llm multiple nvlink
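In case a sketch helps frame the question: multi-GPU inference of this kind is commonly done by sharding the model's layers across devices, so only activations cross the PCIe bus between GPUs; NVLink is not required, it would only speed up those transfers. Below is a minimal sketch using Hugging Face Transformers with Accelerate, where `device_map="auto"` places layers across all visible GPUs. The checkpoint name is a placeholder, not a real model.

```python
# Minimal sketch: shard one large model across several GPUs for inference.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-48gb-model"  # placeholder; substitute a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # shard layers across all available GPUs
    torch_dtype="auto",  # keep the checkpoint's native precision
)

# Inputs go to the device holding the first layers; Accelerate moves
# activations between GPUs automatically during the forward pass.
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that weights are loaded per-GPU and only layer outputs are exchanged, which is why the lack of NVLink mostly costs latency rather than making it impossible.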
