June 16, 2022, 1:13 a.m. | Lin Zheng, Huijie Pan, Lingpeng Kong

cs.CV updates on arXiv.org

Transformer architectures are now central to sequence modeling tasks. At their
heart is the attention mechanism, which enables effective modeling of long-term
dependencies in a sequence. Recently, transformers have been successfully
applied in the computer vision domain, where 2D images are first segmented into
patches and then treated as 1D sequences. Such linearization, however, impairs
the notion of spatial locality in images, which carries important visual cues.
To bridge the gap, we propose ripple attention, a sub-quadratic attention
mechanism for …
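
The excerpt truncates before the mechanism itself, but the quadratic baseline it improves on is standard. Below is a minimal sketch, in plain NumPy, of the pipeline the abstract describes: a 2D image is segmented into patches, the patches are linearized into a 1D token sequence (discarding the 2D neighborhood structure), and vanilla softmax attention is applied over the resulting tokens. The function names, patch size, and dimensions here are illustrative assumptions, not code from the paper.

# Sketch (not the paper's code) of patch linearization plus standard
# softmax attention; the (n, n) score matrix is the quadratic cost that
# a sub-quadratic mechanism such as ripple attention aims to avoid.
import numpy as np

def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    """Split an (H, W, C) image into a 1D sequence of flattened patches.

    Returns an (n, patch*patch*C) array with n = (H//patch) * (W//patch);
    the 2D neighborhood structure is lost in this linear ordering.
    """
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    x = image[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)          # (gh, gw, patch, patch, C)
    return x.reshape(gh * gw, patch * patch * C)

def softmax_attention(Q, K, V):
    """Vanilla attention over the patch sequence: materializing the
    (n, n) score matrix makes the cost quadratic in sequence length."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n, n)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ V                        # (n, d)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))        # toy image, shapes assumed
tokens = patchify(img, patch=8)               # n = 16 patch tokens
d = 64
Wq, Wk, Wv = (rng.standard_normal((tokens.shape[1], d)) for _ in range(3))
out = softmax_attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)
print(out.shape)                              # (16, 64)

For a sense of scale: a 224x224 image with 16x16 patches yields n = 196 tokens, and the n^2 score matrix dominates as resolution grows, which is what motivates sub-quadratic alternatives.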

Tags: arxiv, attention, complexity, cv, perception, ripple
