SEA: Sparse Linear Attention with Estimated Attention Mask
March 26, 2024, 4:45 a.m. | Heejun Lee, Jina Kim, Jeffrey Willette, Sung Ju Hwang
cs.LG updates on arXiv.org
Abstract: The transformer architecture has driven breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, long sequences pose a problem due to the quadratic complexity of the attention operation. Previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher's attention matrix and often require complete retraining …
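For intuition only, the following minimal PyTorch sketch contrasts dense attention, whose n-by-n score matrix is the quadratic bottleneck the abstract describes, with a sparse variant that attends only where an estimated mask allows. The mask estimator here (a low-rank proxy score followed by a per-query top-k selection) and the parameters proj_dim and top_k are illustrative assumptions; the truncated abstract does not specify SEA's actual estimator, so this should not be read as the authors' method.

```python
import torch
import torch.nn.functional as F

def dense_attention(q, k, v):
    # Full attention: the (n x n) score matrix is the quadratic bottleneck.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def sparse_attention_with_estimated_mask(q, k, v, proj_dim=8, top_k=16):
    """Attend only over keys selected by a cheap, estimated attention mask.

    Placeholder estimator for illustration; not the SEA algorithm itself.
    """
    b, n, d = q.shape
    top_k = min(top_k, n)
    # 1) Estimate promising keys per query with a low-dimensional proxy score
    #    (still n x n here for simplicity, but cheaper than the full scores).
    proxy = q[..., :proj_dim] @ k[..., :proj_dim].transpose(-2, -1)
    idx = proxy.topk(top_k, dim=-1).indices                      # (b, n, top_k)
    # 2) Gather only the selected keys/values, so the exact scores are
    #    computed over top_k entries per query instead of all n.
    idx_exp = idx.unsqueeze(-1).expand(-1, -1, -1, d)            # (b, n, top_k, d)
    k_sel = k.unsqueeze(1).expand(-1, n, -1, -1).gather(2, idx_exp)
    v_sel = v.unsqueeze(1).expand(-1, n, -1, -1).gather(2, idx_exp)
    # 3) Softmax attention restricted to the estimated sparse mask.
    scores = (q.unsqueeze(2) @ k_sel.transpose(-2, -1)).squeeze(2) / d ** 0.5
    return (F.softmax(scores, dim=-1).unsqueeze(2) @ v_sel).squeeze(2)

if __name__ == "__main__":
    q, k, v = (torch.randn(2, 128, 64) for _ in range(3))        # (batch, seq, dim)
    print(dense_attention(q, k, v).shape)                        # torch.Size([2, 128, 64])
    print(sparse_attention_with_estimated_mask(q, k, v).shape)   # torch.Size([2, 128, 64])
```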