Ripple Attention for Visual Perception with Sub-quadratic Complexity. (arXiv:2110.02453v2 [cs.CV] UPDATED)
Web: http://arxiv.org/abs/2110.02453
June 16, 2022, 1:11 a.m. | Lin Zheng, Huijie Pan, Lingpeng Kong
cs.LG updates on arXiv.org
Transformer architectures are now central to sequence modeling tasks. At their
heart is the attention mechanism, which enables effective modeling of long-term
dependencies in a sequence. Recently, transformers have been successfully
applied in the computer vision domain, where 2D images are first segmented into
patches and then treated as 1D sequences. Such linearization, however, impairs
the notion of spatial locality in images, which carries important visual cues.
To bridge the gap, we propose ripple attention, a sub-quadratic attention
mechanism for …
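
The 2D-to-1D patch linearization the abstract refers to can be made concrete
with a short sketch. This is a minimal illustration of the standard
patchification step, not the paper's ripple attention mechanism; patchify is a
hypothetical helper name, and the shapes follow the usual (H, W, C) NumPy
image convention.

    import numpy as np

    def patchify(image: np.ndarray, patch: int) -> np.ndarray:
        """Split an (H, W, C) image into a 1D sequence of flattened patches.

        Returns an (N, patch*patch*C) array with N = (H//patch) * (W//patch).
        """
        H, W, C = image.shape
        assert H % patch == 0 and W % patch == 0, "image must tile evenly"
        # Split each spatial axis into (grid, patch) pairs:
        # (H//p, p, W//p, p, C) -> (H//p, W//p, p, p, C)
        grid = image.reshape(H // patch, patch, W // patch, patch, C)
        grid = grid.transpose(0, 2, 1, 3, 4)
        # Row-major flattening of the patch grid yields the 1D token sequence.
        return grid.reshape(-1, patch * patch * C)

    # Example: a 32x32 RGB image with 8x8 patches gives 16 tokens of size 192.
    tokens = patchify(np.zeros((32, 32, 3)), patch=8)
    print(tokens.shape)  # (16, 192)

Note that under this row-major ordering, horizontally adjacent patches remain
neighbors in the sequence, while vertically adjacent patches land W // patch
positions apart: one concrete way the linearization discards the spatial
locality the abstract says ripple attention aims to restore.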
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY