Feb. 1, 2024, 12:45 p.m. | Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, Amir Gholami

cs.LG updates on arXiv.org

LLMs are seeing growing use for applications such as document analysis and summarization, which require large context windows; with these large context windows, KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in ultra-low precisions, such as sub-4-bit. In this work, we present KVQuant, which addresses this problem by incorporating novel methods for quantizing cached KV activations, …
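To make the problem setting concrete, below is a minimal sketch of plain uniform low-bit quantization applied to a cached key tensor. This is a generic baseline illustrating the memory-vs-accuracy trade-off the abstract describes, not KVQuant's actual method (the paper's techniques are elided above); the tensor shapes, the 3-bit setting, and the choice of per-channel scales are assumptions for illustration.

```python
# A minimal sketch of uniform low-bit KV cache quantization.
# NOT the KVQuant method; a generic baseline under assumed shapes.

import torch

def quantize_uniform(x: torch.Tensor, n_bits: int = 3, dim: int = -2):
    """Symmetric uniform quantization of x, with one scale per slice
    along every dimension except `dim` (reduced over `dim`)."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 3 for 3-bit
    scale = x.abs().amax(dim=dim, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)                     # avoid divide-by-zero
    codes = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return codes.to(torch.int8), scale

def dequantize(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return codes.float() * scale

# Toy cached key slice: (batch, heads, seq_len, head_dim).
k = torch.randn(1, 8, 1024, 128)
# Reducing over seq_len gives one scale per channel (head_dim index).
codes, scale = quantize_uniform(k, n_bits=3, dim=-2)
k_hat = dequantize(codes, scale)
print(f"3-bit relative reconstruction error: {(k - k_hat).norm() / k.norm():.3f}")
```

Running this shows the sizeable reconstruction error that naive uniform quantization incurs at sub-4-bit precision, which is the gap the paper's methods aim to close.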
