[R] Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
March 16, 2024, 4:40 p.m. | /u/alancucki
Machine Learning | www.reddit.com
Paper: [https://arxiv.org/abs/2403.09636](https://arxiv.org/abs/2403.09636)
X: [https://x.com/p_nawrot/status/1768645461689168365](https://x.com/p_nawrot/status/1768645461689168365)
Abstract:
>Transformers have emerged as the backbone of large language models (LLMs). However, generation remains inefficient due to the need to store in memory a cache of key-value representations for past tokens, whose size scales linearly with the input sequence length and batch size. As a solution, we propose Dynamic Memory Compression (DMC), a method for on-line key-value cache compression at inference time. Most importantly, the model learns …
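The linear scaling the abstract describes is easy to make concrete with a back-of-the-envelope calculation. The sketch below is not from the paper; the model dimensions are illustrative (roughly 7B-class: 32 layers, 32 KV heads, head dimension 128), and the 4x ratio is just an example compression factor:

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, bytes_per_elem=2):
    """Memory held by a transformer KV cache: one key and one value
    vector per layer, per head, per past token (fp16 = 2 bytes/elem)."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Illustrative 7B-class model (not the paper's configuration).
full = kv_cache_bytes(32, 32, 128, seq_len=4096, batch_size=8)
print(f"uncompressed: {full / 2**30:.1f} GiB")   # → uncompressed: 16.0 GiB

# A DMC-style 4x compression ratio keeps roughly 1/4 of the cache entries.
print(f"4x compressed: {full / 4 / 2**30:.1f} GiB")  # → 4x compressed: 4.0 GiB
```

Because the cache grows with both `seq_len` and `batch_size`, shrinking it directly raises the batch size (and thus throughput) that fits in a fixed GPU memory budget, which is the efficiency argument the abstract makes.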