KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation
May 10, 2024, 4:46 a.m. | Minsik Cho, Mohammad Rastegari, Devang Naik
cs.CL updates on arXiv.org
Abstract: Large Language Model (LLM) inference has two phases: the prompt (or prefill) phase, which outputs the first token, and the extension (or decoding) phase, which generates subsequent tokens. In this work, we propose an efficient parallelization scheme, KV-Runahead, to accelerate the prompt phase. The key observation is that the extension phase generates tokens faster than the prompt phase because of the key-value cache (KV-cache). Hence, KV-Runahead parallelizes the prompt phase by orchestrating multiple processes …
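The abstract hinges on the KV-cache: during decoding, each new token attends only to keys and values already computed for earlier positions, so per-step work is small, whereas prefill must process the entire prompt. Below is a minimal NumPy sketch of chunked, cache-reusing prefill. It runs the chunks sequentially in a single process purely to illustrate the dependency structure that KV-Runahead pipelines across processes; all names, shapes, and the single-head toy attention are assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # toy head dimension (assumption)
prompt = rng.normal(size=(16, d))      # 16 toy prompt-token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prefill_chunk(chunk, K_cache, V_cache):
    # Process one prompt chunk, appending its keys/values to the running cache.
    # Each query attends to all cached (earlier) tokens plus the causal prefix
    # of its own chunk -- the dependency KV-Runahead pipelines across processes.
    Q, K, V = chunk @ Wq, chunk @ Wk, chunk @ Wv
    K_all, V_all = np.vstack([K_cache, K]), np.vstack([V_cache, V])
    n_prev, n_new = len(K_cache), len(K)
    scores = Q @ K_all.T / np.sqrt(d)
    mask = np.triu(np.ones((n_new, n_new), dtype=bool), k=1)
    scores[:, n_prev:][mask] = -np.inf   # causal mask within the new chunk
    return softmax(scores) @ V_all, K_all, V_all

# Split the prompt into chunks; conceptually, KV-Runahead assigns these to
# separate processes, each handing its K/V slice onward instead of waiting
# for a full sequential prefill.
K_cache, V_cache = np.empty((0, d)), np.empty((0, d))
for chunk in np.array_split(prompt, 4):
    out, K_cache, V_cache = prefill_chunk(chunk, K_cache, V_cache)

print(K_cache.shape)   # (16, 8): full KV-cache, ready for the decoding phase

Note the design point this toy makes visible: a chunk's attention needs only the K/V of earlier chunks, not their attention outputs, which is what makes populating the cache amenable to parallel, pipelined generation.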