Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization. (arXiv:2208.00579v1 [cs.LG])
cs.LG updates on arXiv.org
Transformers have achieved remarkable success in sequence modeling and beyond,
but suffer from quadratic computational and memory complexity with respect to
the length of the input sequence. Efficient transformers, leveraging techniques
such as sparse and linear attention and hashing tricks, have been proposed to
reduce this quadratic complexity, but they significantly degrade accuracy. In
response, we first interpret the linear attention and residual connections in
computing the attention map as gradient descent steps. We then
introduce momentum into these …
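The abstract is truncated, but the core idea it describes can be sketched: linear attention replaces the softmax attention map with a kernel feature map so the key-value product can be precomputed, and a residual update can then be augmented with a heavy-ball momentum term. The sketch below is an illustrative assumption, not the paper's actual architecture; `feature_map`, `momentum_residual_block`, and the `gamma` coefficient are hypothetical names chosen for clarity.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1, a common positive feature map used in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    # O(n * d^2) instead of O(n^2 * d): associate phi(K)^T V first
    Qf, Kf = feature_map(Q), feature_map(K)
    kv = Kf.T @ V                  # (d, d_v) summary of keys and values
    z = Qf @ Kf.sum(axis=0)        # (n,) per-query normalizer
    return (Qf @ kv) / z[:, None]

def momentum_residual_block(x, Q, K, V, v_prev, gamma=0.9):
    # Hypothetical sketch: treat the attention output as a gradient-descent
    # step on the residual stream and accumulate it with momentum,
    # mirroring the abstract's "residual connections as gradient descent
    # steps" interpretation with a heavy-ball term added.
    step = linear_attention(Q, K, V)
    v = gamma * v_prev + step      # momentum accumulator
    return x + v, v                # residual update plus new momentum state
```

Because the feature map is strictly positive, the normalizer `z` never vanishes, and the `kv` summary lets attention cost grow linearly in sequence length.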