Nov. 30, 2023, 12:07 p.m. | /u/Tiny_Cut_8440

r/machinelearningnews (www.reddit.com)

Hi everyone,

We've recently experimented with deploying the CodeLlama-34B model and wanted to share our key findings for those interested:

* **Best Performance:** the 4-bit GPTQ-quantized CodeLlama-Python-34B model served with vLLM (a minimal setup sketch follows this list).
* **Results:** lowest average latency of 3.51 sec, average token generation rate of 58.40 tokens/sec, and a cold start time of 21.8 sec on our platform, using an Nvidia A100 GPU.

Benchmark chart: https://preview.redd.it/0shrxpa67h3c1.png?width=1600&format=png&auto=webp&s=dc8baf512a79784ec4b39f5f1ca0c268f93ecac0

* **Other Libraries Tested:** Hugging Face Transformers pipeline, AutoGPTQ, and Text Generation Inference.
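For anyone who wants to try something similar, here's a rough sketch of the winning setup: loading a 4-bit GPTQ CodeLlama-Python-34B checkpoint with vLLM and timing a single generation. The checkpoint name, sampling settings, and timing harness are illustrative, not our exact production config:

```python
import time

from vllm import LLM, SamplingParams

# Assumed checkpoint: any GPTQ-quantized CodeLlama-Python-34B works here;
# TheBloke's upload is a commonly used one.
llm = LLM(
    model="TheBloke/CodeLlama-34B-Python-GPTQ",
    quantization="gptq",
    dtype="float16",  # GPTQ kernels run with fp16 activations
)

# Illustrative sampling settings, not the ones used in our benchmark.
params = SamplingParams(temperature=0.2, max_tokens=256)

start = time.perf_counter()
outputs = llm.generate(["def quicksort(arr):"], params)
elapsed = time.perf_counter() - start

completion = outputs[0].outputs[0].text
n_tokens = len(outputs[0].outputs[0].token_ids)
print(completion)
print(f"{n_tokens} tokens in {elapsed:.2f}s ({n_tokens / elapsed:.1f} tok/s)")
```

A crude timer like this measures end-to-end latency and throughput for one request; a proper benchmark would average over many prompts and measure cold start separately.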

Keen to hear your experiences and learnings in …

