Nov. 30, 2023, 12:07 p.m. | /u/Tiny_Cut_8440


Hi everyone,

We've recently experimented with deploying the CodeLlama 34B model and wanted to share our key findings for those interested:

* **Best Performance:** A 4-bit GPTQ-quantized CodeLlama-Python-34B model served with vLLM.
* **Results:** Lowest average latency of 3.51 sec, token generation averaging 58.40 tokens/sec, and a cold start time of 21.8 sec on our platform, using an Nvidia A100 GPU.

* **Other Libraries Tested:** HuggingFace Transformer Pipeline, AutoGPTQ, Text Generation Inference.
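For reference, a setup like the one above can be sketched with vLLM's offline inference API. This is a minimal sketch, not the exact configuration we benchmarked: the checkpoint name (`TheBloke/CodeLlama-34B-Python-GPTQ`, a community GPTQ quantization), the sampling settings, and the whitespace-based token count are all assumptions.

```python
import time


def measure(generate, prompt):
    """Time a single generation and return (latency_s, tokens_per_s).

    `generate` is any callable mapping a prompt string to generated text.
    Token count is approximated here by whitespace splitting; a real
    benchmark would count tokenizer tokens instead.
    """
    start = time.perf_counter()
    text = generate(prompt)
    latency = time.perf_counter() - start
    n_tokens = len(text.split())
    return latency, (n_tokens / latency if latency > 0 else 0.0)


def build_codellama_generator():
    # Requires a CUDA GPU and `pip install vllm`. The model name below is
    # an assumption (a community 4-bit GPTQ quantization), not necessarily
    # the exact checkpoint used in the numbers above.
    from vllm import LLM, SamplingParams

    llm = LLM(model="TheBloke/CodeLlama-34B-Python-GPTQ", quantization="gptq")
    params = SamplingParams(temperature=0.2, max_tokens=256)

    def generate(prompt: str) -> str:
        # vLLM returns one RequestOutput per prompt; take the first completion.
        return llm.generate([prompt], params)[0].outputs[0].text

    return generate
```

On a GPU machine, `measure(build_codellama_generator(), "def fibonacci(n):")` reports per-request latency and throughput in the same units as the figures quoted above.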

Keen to hear your experiences and learnings in …

