Insights from Deploying the CodeLlama 34B Model with Multiple Libraries
Nov. 30, 2023, 12:07 p.m. | /u/Tiny_Cut_8440
Source: r/machinelearningnews (www.reddit.com)
We recently experimented with deploying the CodeLlama 34B model and wanted to share our key findings for those interested:
* **Best performance:** Quantized 4-bit GPTQ CodeLlama-Python-34B served with vLLM (a rough sketch of this setup follows the list).
* **Results:** Lowest average latency of 3.51 s, average generation throughput of 58.40 tokens/sec, and a cold-start time of 21.8 s on our platform, using an NVIDIA A100 GPU.
Benchmark chart: https://preview.redd.it/0shrxpa67h3c1.png?width=1600&format=png&auto=webp&s=dc8baf512a79784ec4b39f5f1ca0c268f93ecac0
* **Other libraries tested:** Hugging Face Transformers pipeline, AutoGPTQ, and Text Generation Inference.
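For anyone wanting to reproduce this kind of setup, here is a minimal sketch using vLLM's offline API with a GPTQ checkpoint, plus a crude latency/throughput measurement. The model ID (TheBloke/CodeLlama-34B-Python-GPTQ), prompt, and sampling settings are assumptions for illustration, not the exact configuration from the post:

    import time
    from vllm import LLM, SamplingParams

    # Assumed 4-bit GPTQ checkpoint of CodeLlama-Python-34B; the post does not
    # name the exact repo, so this model ID is illustrative.
    llm = LLM(
        model="TheBloke/CodeLlama-34B-Python-GPTQ",
        quantization="gptq",   # tell vLLM the weights are GPTQ-quantized
        dtype="float16",
    )

    params = SamplingParams(temperature=0.2, max_tokens=256)
    prompt = "# Check whether a number is prime.\ndef is_prime(n):"

    # Time a single request to get rough latency and tokens/sec numbers.
    start = time.perf_counter()
    outputs = llm.generate([prompt], params)
    elapsed = time.perf_counter() - start

    completion = outputs[0].outputs[0]
    n_tokens = len(completion.token_ids)
    print(completion.text)
    print(f"latency: {elapsed:.2f} s, throughput: {n_tokens / elapsed:.1f} tokens/s")

Note this measures a warm, single-request path only; cold-start time (the 21.8 s figure above) depends on model download, weight loading, and platform scheduling, which this snippet does not capture.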
Keen to hear your experiences and learnings in …