KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Feb. 1, 2024, 12:45 p.m. | Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, Amir Gholami
cs.LG updates on arXiv.org
Tags: analysis, applications, cache, consumption, context, context windows, contributor, cs.LG, document, inference, LLM, LLMs, memory, quantization, summarization, surface, windows
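The page carries only the title and tags, so for readers unfamiliar with the topic named in the title: KV cache quantization stores the attention key/value activations at low precision to shrink memory during long-context inference. The sketch below is a generic asymmetric per-channel uint8 scheme, assumed purely for illustration; it is not the KVQuant algorithm from the paper, and the function names are hypothetical.

```python
import numpy as np

def quantize_per_channel(x: np.ndarray, n_bits: int = 8):
    """Asymmetric per-channel quantization along the channel (last) axis.

    Illustrative only; KVQuant itself uses a more sophisticated scheme.
    """
    qmax = 2 ** n_bits - 1
    x_min = x.min(axis=0, keepdims=True)                    # per-channel zero point
    scale = (x.max(axis=0, keepdims=True) - x_min) / qmax   # per-channel step size
    scale = np.where(scale == 0.0, 1.0, scale)              # guard flat channels
    q = np.clip(np.round((x - x_min) / scale), 0, qmax).astype(np.uint8)
    return q, scale, x_min

def dequantize(q: np.ndarray, scale: np.ndarray, x_min: np.ndarray) -> np.ndarray:
    """Reconstruct approximate fp32 values from the quantized cache."""
    return q.astype(np.float32) * scale + x_min

# Toy "key cache" for one attention head: (seq_len, head_dim) fp32 activations.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4096, 128)).astype(np.float32)

q, scale, zero = quantize_per_channel(keys)
recon = dequantize(q, scale, zero)

print("fp32 bytes:", keys.nbytes)   # 4096 * 128 * 4
print("int8 bytes:", q.nbytes)      # 4x smaller, plus small scale/zero overhead
print("mean abs error:", np.abs(keys - recon).mean())
```

At 8 bits the cache shrinks to a quarter of its fp32 size, and per-channel scales keep the reconstruction error small relative to a single per-tensor scale; the paper's title suggests pushing well beyond this simple baseline to reach very long context lengths.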