Latent Attention for Linear Time Transformers
Feb. 28, 2024, 5:44 a.m. | Rares Dolga, Marius Cobzarenco, David Barber
stat.ML updates on arXiv.org
Abstract: The time complexity of the standard attention mechanism in a transformer scales quadratically with the length of the sequence. We introduce a method to reduce this to linear scaling with time, based on defining attention via latent vectors. The method is readily usable as a drop-in replacement for the standard attention mechanism. Our "Latte Transformer" model can be implemented for both bidirectional and unidirectional tasks, with the causal version allowing a recurrent implementation which is …
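To make the idea concrete, below is a minimal illustrative sketch of attention routed through a small set of learned latent vectors, which is the general mechanism the abstract describes. The class name, parameter names, and the exact softmax factorization are assumptions for illustration, not the paper's published equations: tokens attend over L latents and latents attend over the T positions, so the cost is O(T·L·D) rather than the O(T²·D) of standard attention.

```python
import torch
import torch.nn as nn


class LatentAttention(nn.Module):
    """Illustrative sketch (not the paper's exact formulation): attention
    routed through L learned latent vectors, giving cost linear in the
    sequence length T instead of quadratic."""

    def __init__(self, d_model: int, n_latents: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        # L learned latent vectors, shared across all positions.
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch B, length T, dim D)
        q = self.query(x)   # (B, T, D)
        k = self.key(x)     # (B, T, D)
        v = self.value(x)   # (B, T, D)

        # Token-to-latent weights: softmax over the L latents.  (B, T, L)
        tok_to_lat = torch.softmax(q @ self.latents.T, dim=-1)

        # Latent-to-token weights: softmax over the T positions. (B, L, T)
        lat_to_tok = torch.softmax(self.latents @ k.transpose(1, 2), dim=-1)

        # Compress the sequence into L latent summaries, then route each
        # token's output through them: O(T*L*D) total work.
        summaries = lat_to_tok @ v      # (B, L, D)
        return tok_to_lat @ summaries   # (B, T, D)


# Example: batch of 2, 8 tokens, 16-dim model, 4 latents.
attn = LatentAttention(d_model=16, n_latents=4)
out = attn(torch.randn(2, 8, 16))  # shape (2, 8, 16), same as the input
```

This sketch is the bidirectional case only; per the abstract, the causal (unidirectional) variant restricts attention to past positions and admits a memory- and time-efficient recurrent implementation, which is omitted here.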