Treeformer: Dense Gradient Trees for Efficient Attention Computation. (arXiv:2208.09015v1 [cs.CL])
Aug. 22, 2022, 1:13 a.m. | Lovish Madaan, Srinadh Bhojanapalli, Himanshu Jain, Prateek Jain
cs.CL updates on arXiv.org
Standard inference and training with transformer-based architectures scale quadratically with input sequence length. This is prohibitively expensive for a variety of applications, especially web-page translation, query answering, etc. Consequently, several approaches have been developed recently to speed up attention computation by enforcing different attention structures, such as sparsity, low-rank structure, and kernel approximations of attention. In this work, we view attention computation as a nearest-neighbor retrieval problem and use decision-tree-based hierarchical navigation to reduce the retrieval cost per query …
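To make the attention-as-retrieval idea concrete, below is a minimal NumPy sketch: a tree is built over the keys with random hyperplane splits, each query is routed down the tree to a small bucket of keys, and softmax attention is computed over that bucket only. The helper names (`build_tree`, `route`, `tree_attention`) and the random splits are illustrative assumptions; the paper's dense gradient trees learn their splits rather than sampling them.

```python
import numpy as np

def build_tree(keys, indices, depth, max_depth, leaf_size):
    # Stop splitting at max depth or when the bucket is small enough.
    if depth >= max_depth or len(indices) <= leaf_size:
        return {"leaf": indices}
    # Random split direction (hypothetical; the paper learns its splits).
    w = np.random.randn(keys.shape[1])
    scores = keys[indices] @ w
    thresh = np.median(scores)
    left, right = indices[scores <= thresh], indices[scores > thresh]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
        return {"leaf": indices}
    return {"w": w, "t": thresh,
            "left": build_tree(keys, left, depth + 1, max_depth, leaf_size),
            "right": build_tree(keys, right, depth + 1, max_depth, leaf_size)}

def route(node, q):
    # Navigate the query down the tree to a single leaf bucket of key indices.
    while "leaf" not in node:
        node = node["left"] if q @ node["w"] <= node["t"] else node["right"]
    return node["leaf"]

def tree_attention(Q, K, V, max_depth=4, leaf_size=16):
    # Each query attends only over the ~leaf_size keys it retrieves,
    # instead of all n keys as in standard softmax attention.
    n, d = K.shape
    tree = build_tree(K, np.arange(n), 0, max_depth, leaf_size)
    out = np.zeros((Q.shape[0], V.shape[1]))
    for i, q in enumerate(Q):
        idx = route(tree, q)              # nearest-neighbor-style retrieval
        logits = K[idx] @ q / np.sqrt(d)  # scores over the retrieved bucket only
        w = np.exp(logits - logits.max())
        out[i] = (w / w.sum()) @ V[idx]
    return out

# Usage: 128 queries/keys of dimension 32.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 128, 32))
print(tree_attention(Q, K, V).shape)  # (128, 32)
```

Under these assumptions, routing costs O(tree depth) per query and attention costs O(leaf_size), versus O(n) per query for dense attention, which is the source of the claimed speedup.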