Latent Attention for Linear Time Transformers
Feb. 28, 2024, 5:44 a.m. | Rares Dolga, Marius Cobzarenco, David Barber
stat.ML updates on arXiv.org
Abstract: The time complexity of the standard attention mechanism in a transformer scales quadratically with the length of the sequence. We introduce a method to reduce this to linear scaling with time, based on defining attention via latent vectors. The method is readily usable as a drop-in replacement for the standard attention mechanism. Our "Latte Transformer" model can be implemented for both bidirectional and unidirectional tasks, with the causal version allowing a recurrent implementation which is …
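The abstract doesn't spell out the mechanism, so below is a minimal sketch of how attention through latent vectors can yield linear scaling in sequence length: each of the T positions attends to L latent vectors (L fixed, L << T), and each latent summarizes the values, so the cost is O(T·L·d) rather than O(T²·d). The names W_q, W_k, W_v, the softmax placement, and the shapes are illustrative assumptions, not the paper's exact Latte parameterization.

```python
import torch
import torch.nn.functional as F

def latent_attention_bidirectional(x, W_q, W_k, W_v):
    """Sketch of non-causal attention via L latent vectors, linear in T."""
    # x: (T, d) token representations
    q = F.softmax(x @ W_q, dim=-1)   # (T, L): p(latent | query position)
    k = F.softmax(x @ W_k, dim=0)    # (T, L): p(key position | latent), normalized over tokens
    v = x @ W_v                      # (T, d_v): values
    summary = k.T @ v                # (L, d_v): per-latent summary of values, O(T*L*d_v)
    return q @ summary               # (T, d_v): mix latent summaries per query, O(T*L*d_v)

# Usage (hypothetical sizes): doubling T doubles the cost, unlike quadratic attention.
T, d, L, d_v = 1024, 64, 16, 64
x = torch.randn(T, d)
y = latent_attention_bidirectional(
    x, torch.randn(d, L), torch.randn(d, L), torch.randn(d, d_v)
)
assert y.shape == (T, d_v)
```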
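The causal variant's recurrent implementation mentioned in the abstract can be sketched the same way: because the latent summaries are running sums over the prefix, generation can carry O(L·d) state and pay O(L·d) per new token instead of re-attending over the whole history. Again an assumption-laden sketch, not the paper's implementation; a real version would stabilize the exp with a running max.

```python
import torch
import torch.nn.functional as F

def latent_attention_causal(x, W_q, W_k, W_v):
    """Recurrent sketch of the causal variant: constant state per step."""
    T, _ = x.shape
    L = W_q.shape[1]
    q = F.softmax(x @ W_q, dim=-1)   # (T, L): p(latent | position)
    k = torch.exp(x @ W_k)           # (T, L): unnormalized key scores (unstabilized)
    v = x @ W_v                      # (T, d_v)
    d_v = v.shape[1]
    num = torch.zeros(L, d_v)        # running sum of score-weighted values per latent
    den = torch.zeros(L, 1)          # running normalizer per latent
    ys = []
    for t in range(T):               # each step touches only the new token: O(L*d_v)
        num = num + k[t].unsqueeze(1) * v[t].unsqueeze(0)  # (L, d_v)
        den = den + k[t].unsqueeze(1)                      # (L, 1)
        ys.append(q[t] @ (num / den))                      # (d_v,)
    return torch.stack(ys)           # (T, d_v)
```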