Nov. 30, 2023, 6:13 a.m. | /u/Tiny_Cut_8440


Hi everyone,

We've recently experimented with deploying the CodeLlama 34B model and wanted to share our key findings for those interested:

* **Best Performance:** 4-bit GPTQ-quantized CodeLlama-Python-34B served with vLLM (a minimal sketch of this setup follows the list).
* **Results:** lowest average latency of 3.51 sec, average generation speed of 58.40 tokens/sec, and a cold-start time of 21.8 sec on our platform, using an Nvidia A100 GPU.

[CodeLlama 34B](https://preview.redd.it/wn5u6sczff3c1.png?width=1600&format=png&auto=webp&s=44e6b15250c38066f05a2c4bbf8d3474f1db141c)

* **Other Libraries Tested:** Hugging Face Transformers pipeline, AutoGPTQ, Text Generation Inference.
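For anyone wanting to try a similar setup, here's a minimal sketch of serving a 4-bit GPTQ checkpoint with vLLM and timing a single generation. The model ID (`TheBloke/CodeLlama-34B-Python-GPTQ`) and sampling parameters are illustrative assumptions, not necessarily what we benchmarked:

```python
import time

from vllm import LLM, SamplingParams

# Any 4-bit GPTQ CodeLlama checkpoint should work here; this HF model ID
# is an assumption for illustration, not necessarily the one tested above.
llm = LLM(
    model="TheBloke/CodeLlama-34B-Python-GPTQ",
    quantization="gptq",   # tell vLLM the weights are GPTQ-quantized
    dtype="float16",
)

sampling = SamplingParams(temperature=0.2, max_tokens=256)

prompt = "def fibonacci(n: int) -> int:"
start = time.perf_counter()
outputs = llm.generate([prompt], sampling)
elapsed = time.perf_counter() - start

completion = outputs[0].outputs[0]
n_tokens = len(completion.token_ids)
print(completion.text)
print(f"latency: {elapsed:.2f} s, throughput: {n_tokens / elapsed:.2f} tokens/s")
```

Note this only measures end-to-end latency for one request; cold-start time (model load plus first request) and sustained throughput under concurrent load need to be measured separately.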

Keen to hear your experiences and learnings …

