[R] Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
March 16, 2024, 4:40 p.m. | /u/alancucki
Machine Learning | www.reddit.com
Paper: [https://arxiv.org/abs/2403.09636](https://arxiv.org/abs/2403.09636)
X: [https://x.com/p_nawrot/status/1768645461689168365](https://x.com/p_nawrot/status/1768645461689168365)
Abstract:
>Transformers have emerged as the backbone of large language models (LLMs). However, generation remains inefficient due to the need to store in memory a cache of key-value representations for past tokens, whose size scales linearly with the input sequence length and batch size. As a solution, we propose Dynamic Memory Compression (DMC), a method for on-line key-value cache compression at inference time. Most importantly, the model learns …
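The abstract only hints at the mechanism, so here is a minimal sketch of the core idea in NumPy: at each decoding step, a learned binary decision chooses between appending the new key/value pair to the cache (as a vanilla Transformer always does) and merging it into the last cache slot via a weighted average, so the cache grows sub-linearly with sequence length. The function name `dmc_cache_step` and the random stubs standing in for the learned decision and weight predictors are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a DMC-style KV-cache update: per step, either APPEND the new
# key/value pair or ACCUMULATE it into the last cache slot, so the cache
# ends up shorter than the token sequence. Decision/weight predictors are
# stubbed with randomness here; in DMC they are learned during retrofitting.

import numpy as np

def dmc_cache_step(keys, values, k_new, v_new, accumulate, weight):
    """One decoding step of a DMC-style compressed KV cache.

    keys, values : lists of np.ndarray, the compressed cache so far
    k_new, v_new : np.ndarray, key/value for the current token
    accumulate   : bool, the (learned) merge-vs-append decision
    weight       : float in (0, 1), the (learned) weight of the new token
    """
    if accumulate and keys:
        # Weighted average into the last slot: cache length stays fixed.
        keys[-1] = (1 - weight) * keys[-1] + weight * k_new
        values[-1] = (1 - weight) * values[-1] + weight * v_new
    else:
        # Plain append, as in an uncompressed KV cache.
        keys.append(k_new)
        values.append(v_new)
    return keys, values

# Toy usage: 16 decoding steps with a stub that merges ~50% of tokens,
# so the final cache holds roughly half as many entries as tokens seen.
rng = np.random.default_rng(0)
keys, values = [], []
for t in range(16):
    k_t, v_t = rng.normal(size=64), rng.normal(size=64)
    keys, values = dmc_cache_step(
        keys, values, k_t, v_t,
        accumulate=bool(rng.random() < 0.5),  # stub for the learned decision
        weight=0.5,                           # stub for the learned weight
    )
print(f"cache length: {len(keys)} entries for 16 tokens")
```

In the paper itself the decision and weight are predicted by the model and trained end-to-end while retrofitting a pretrained LLM, rather than sampled as above; the sketch only shows why the resulting cache footprint can be a fraction of the sequence length.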