[D] Results from Deploying Quantized version of SOLAR 10.7B-Instruct
Jan. 4, 2024, 1:11 p.m. | /u/Tiny_Cut_8440
Machine Learning www.reddit.com
Been working on optimizing Upstage's SOLAR-10.7B-Instruct-v1.0 model and wanted to share our insights:
🚀 **Our Approach:** Quantized the model using Auto-GPTQ, then deployed with vLLM.
Results: In a serverless setup on an Nvidia A100 GPU, we saw 1.37 s inference latency, 111.54 tokens/sec throughput, and an 11.69 s cold start.
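A quick back-of-envelope check of what those numbers imply, assuming the latency and throughput figures describe the same single request (the post does not say so explicitly):

```python
# Rough sanity check: tokens implied per request at the reported
# 1.37 s latency and 111.54 tokens/sec throughput.
latency_s = 1.37
throughput_tps = 111.54

tokens_per_request = throughput_tps * latency_s
print(round(tokens_per_request))  # -> 153 tokens
```

So each request generated on the order of 150 tokens under this benchmark.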
Benchmark screenshot: https://preview.redd.it/kel8cn5dafac1.png?width=1600&format=png&auto=webp&s=5bca8b5e4a48f5f7a709f44bc431844746c61a77
Other Methods Tested: We also tried serving the quantized model directly through Auto-GPTQ's own inference, but in our experience vLLM is the superior choice for deployment.
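For anyone wanting to reproduce the pipeline, here is a minimal sketch of the quantize-with-AutoGPTQ, serve-with-vLLM flow. It needs a CUDA GPU plus `pip install auto-gptq vllm transformers`; the GPTQ settings, output directory, calibration text, and sampling parameters are illustrative assumptions, not values from the post:

```python
# Hedged sketch: quantize SOLAR-10.7B-Instruct with AutoGPTQ, then load
# the result in vLLM. GPU-only imports are deferred into the functions.

def gptq_settings():
    """Common 4-bit GPTQ settings (assumed; the post does not state them)."""
    return {"bits": 4, "group_size": 128, "desc_act": False}

def quantize(model_id="upstage/SOLAR-10.7B-Instruct-v1.0",
             out_dir="solar-10.7b-gptq"):
    """Quantize the FP16 model to 4-bit GPTQ and save it to out_dir."""
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    cfg = BaseQuantizeConfig(**gptq_settings())
    model = AutoGPTQForCausalLM.from_pretrained(model_id, cfg)

    # A real run needs a few hundred calibration samples; one is shown.
    enc = tokenizer("Calibration text for GPTQ.", return_tensors="pt")
    model.quantize([{"input_ids": enc["input_ids"],
                     "attention_mask": enc["attention_mask"]}])
    model.save_quantized(out_dir)
    return out_dir

def serve(model_dir="solar-10.7b-gptq", prompt="Hello, SOLAR!"):
    """Load the GPTQ checkpoint in vLLM and generate a completion."""
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_dir, quantization="gptq", dtype="float16")
    return llm.generate([prompt],
                        SamplingParams(temperature=0.7, max_tokens=128))
```

On a CUDA machine you would run `serve(quantize())`; for production, vLLM's OpenAI-compatible API server can point at the same quantized directory.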
Looking forward to hearing about your experiences with similar projects!