Retentive Network: A Successor to Transformer for Large Language Models. (arXiv:2307.08621v4 [cs.CL] UPDATED)
cs.LG updates on arXiv.org
In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput and
latency and reduces GPU memory usage without sacrificing performance. …
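The abstract's key claim is that the parallel and recurrent representations of retention compute the same outputs, so a model can train like a Transformer but decode with a constant-size state. Below is a minimal sketch of that equivalence for a single head with a scalar decay factor gamma; it is an illustrative simplification, not the paper's official implementation, and the function names and shapes are assumptions (the full method also uses multi-scale decay, gating, and a chunkwise recurrent form that mixes the two paradigms).

```python
# Illustrative sketch: two equivalent retention forms (single head, no gating).
import numpy as np

def retention_parallel(Q, K, V, gamma):
    """Parallel form: whole-sequence computation, suitable for training."""
    n = Q.shape[0]
    idx = np.arange(n)
    # Decay matrix D[i, j] = gamma**(i - j) for j <= i, 0 otherwise (causal mask).
    D = np.tril(gamma ** (idx[:, None] - idx[None, :]))
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    """Recurrent form: O(1) state update per token, suitable for inference."""
    d_k, d_v = K.shape[1], V.shape[1]
    S = np.zeros((d_k, d_v))          # constant-size recurrent state
    outputs = []
    for q, k, v in zip(Q, K, V):
        S = gamma * S + np.outer(k, v)
        outputs.append(q @ S)
    return np.stack(outputs)

# Both paradigms produce the same outputs (up to floating-point error).
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
assert np.allclose(retention_parallel(Q, K, V, 0.9),
                   retention_recurrent(Q, K, V, 0.9))
```

The recurrent loop touches only a fixed (d_k, d_v) state per step, which is the source of the constant-memory, low-latency decoding the abstract describes; the parallel form exposes the same computation as one masked matrix product, which is what enables training parallelism.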