PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels
Feb. 8, 2024, 5:43 a.m. | Praneeth Kacham, Vahab Mirrokni, Peilin Zhong
cs.LG updates on arXiv.org
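
This feed item carries only the title and byline, without the abstract. As a rough illustration of the idea the title names (attention built on a polynomial kernel instead of softmax), here is a minimal NumPy sketch, under the assumption that degree-2 polynomial attention weights (q_i . k_j)^2 are intended. It uses the exact feature-map identity (q . k)^2 = <q(x)q, k(x)k> to compute attention in time linear in sequence length. It is not PolySketchFormer's algorithm: the paper's contribution is replacing the exact d^2-dimensional feature map below with a much smaller randomized sketch, which is not reproduced here. All function names are illustrative.

import numpy as np

def poly_feature_map(x: np.ndarray) -> np.ndarray:
    # Exact degree-2 feature map: row i is the flattened outer product x_i (x) x_i,
    # so <phi(q), phi(k)> == (q . k)^2. Dimension grows as d^2 (d^p for degree p);
    # PolySketchFormer would sketch this map instead of materializing it.
    n, d = x.shape
    return np.einsum("nd,ne->nde", x, x).reshape(n, d * d)

def poly_attention_linear(Q, K, V):
    # Full (non-causal) polynomial attention in O(n * d^2) time:
    # aggregate keys/values once, then read out per query.
    phi_q = poly_feature_map(Q)          # (n, d^2)
    phi_k = poly_feature_map(K)          # (n, d^2)
    kv = phi_k.T @ V                     # (d^2, d_v)
    z = phi_k.sum(axis=0)                # (d^2,) normalizer accumulator
    out = phi_q @ kv                     # (n, d_v)
    denom = phi_q @ z                    # (n,) row normalizers
    return out / denom[:, None]

def poly_attention_naive(Q, K, V):
    # Reference O(n^2) computation with weights (q_i . k_j)^2,
    # row-normalized like softmax attention would be.
    A = (Q @ K.T) ** 2
    A = A / A.sum(axis=1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
n, d, dv = 64, 8, 16
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, dv))
assert np.allclose(poly_attention_linear(Q, K, V), poly_attention_naive(Q, K, V))

For degree p the exact map costs d^p per token, which is the blow-up that sketching the polynomial kernel is presumably meant to avoid; a causal (decoder-style) variant would replace the single key/value aggregation with running prefix sums over positions j <= i.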