Feb. 20, 2024, 5:47 a.m. | Yulong Shi, Mingwei Sun, Yongshuai Wang, Rui Wang, Hui Sun, Zengqiang Chen

cs.CV updates on arXiv.org

arXiv:2402.11303v1 Announce Type: new
Abstract: Vision transformers have achieved encouraging progress in various computer vision tasks. A common belief is that this progress is attributable to the competence of self-attention in modeling the global dependencies among feature tokens. Unfortunately, self-attention still faces challenges in dense prediction tasks, such as high computational complexity and the absence of a desirable inductive bias. To address these issues, we revisit the potential benefits of integrating vision transformers with Gabor filters, and propose a Learnable …
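The abstract is truncated before the proposed module's full name, so the exact method is not shown here. As a rough illustration of the underlying idea (a convolution whose kernels are generated from the classical Gabor function with learnable parameters, supplying the locality-style inductive bias the abstract says self-attention lacks), below is a minimal PyTorch sketch. The class name LearnableGaborConv2d, its parameterization, and the channel-sharing choice are assumptions for illustration, not the paper's implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGaborConv2d(nn.Module):
    """Hypothetical sketch: a conv layer whose kernels are Gabor filters
    with learnable orientation, scale, wavelength, phase, and aspect ratio."""

    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.in_channels = in_channels
        self.kernel_size = kernel_size
        n = out_channels
        # One learnable parameter set per output filter (assumed parameterization).
        self.theta = nn.Parameter(torch.rand(n) * math.pi)               # orientation
        self.sigma = nn.Parameter(torch.full((n,), kernel_size / 4.0))   # envelope width
        self.lambd = nn.Parameter(torch.full((n,), kernel_size / 2.0))   # wavelength
        self.psi   = nn.Parameter(torch.zeros(n))                        # phase offset
        self.gamma = nn.Parameter(torch.ones(n))                         # aspect ratio

    def gabor_bank(self):
        k = self.kernel_size
        half = (k - 1) / 2.0
        ax = torch.linspace(-half, half, k, device=self.theta.device)
        y, x = torch.meshgrid(ax, ax, indexing="ij")   # each (k, k)
        x = x.unsqueeze(0)                             # (1, k, k) for broadcasting
        y = y.unsqueeze(0)
        theta = self.theta.view(-1, 1, 1)
        sigma = self.sigma.view(-1, 1, 1)
        lambd = self.lambd.view(-1, 1, 1)
        psi   = self.psi.view(-1, 1, 1)
        gamma = self.gamma.view(-1, 1, 1)
        # Rotate coordinates, then apply the standard Gabor response:
        # exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x' / lambda + psi)
        x_r =  x * torch.cos(theta) + y * torch.sin(theta)
        y_r = -x * torch.sin(theta) + y * torch.cos(theta)
        envelope = torch.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2 * sigma ** 2))
        carrier = torch.cos(2 * math.pi * x_r / lambd + psi)
        g = envelope * carrier                         # (out_channels, k, k)
        # Share each filter across input channels (assumed design choice).
        return g.unsqueeze(1).expand(-1, self.in_channels, -1, -1) / self.in_channels

    def forward(self, x):
        weight = self.gabor_bank()                     # kernels rebuilt each step
        return F.conv2d(x, weight, padding=self.kernel_size // 2)

# Quick check: spatial size is preserved for odd kernel sizes.
x = torch.randn(2, 3, 32, 32)
layer = LearnableGaborConv2d(3, 16, kernel_size=7)
print(layer(x).shape)  # torch.Size([2, 16, 32, 32])
```

Note the trade-off this sketch tries to capture: each k×k kernel is generated from only five interpretable scalars rather than k² free weights, which is one plausible way a Gabor-based module could inject a structured inductive bias at low parameter cost.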
