FAST: Factorizable Attention for Speeding up Transformers
Feb. 13, 2024, 5:43 a.m. | Armin Gerami, Monte Hoover, Pranav S. Dulepet, Ramani Duraiswami
cs.LG updates on arXiv.org (arxiv.org)
Tags: attention, comparison, complexity, computational, cs.AI, cs.LG, cs.NA, dimensions, factorization, form, Gauss, math.NA, memory, transformers
Jobs in AI, ML, Big Data
Data Architect
@ University of Texas at Austin | Austin, TX
Data ETL Engineer
@ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist
@ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps)
@ Promaton | Remote, Europe
Data Scientist
@ Publicis Groupe | New York City, United States
Big Data Cloud Developer - Spark - Assistant Manager
@ State Street | Hyderabad, India