April 23, 2024, 4:48 a.m. | Yulong Shi, Mingwei Sun, Yongshuai Wang, Jiahao Ma, Zengqiang Chen

cs.CV updates on arXiv.org

arXiv:2310.06629v3 Announce Type: replace
Abstract: Thanks to advances in deep learning, vision transformers have demonstrated competitive performance on various computer vision tasks. Unfortunately, vision transformers still face challenges such as high computational complexity and the absence of desirable inductive biases. To alleviate these issues, we propose a novel Bi-Fovea Self-Attention (BFSA) inspired by the physiological structure and visual properties of eagle eyes. BFSA is used to simulate the shallow and deep fovea of eagle vision, prompting the …
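For intuition only, below is a minimal, hypothetical two-branch self-attention sketch in PyTorch. It is not the paper's BFSA (whose details are cut off in this truncated abstract); the class name, the pooling ratio, and the fusion-by-summation choice are all assumptions. A pooled, coarse pathway stands in for the "shallow fovea", and full-resolution attention stands in for the "deep fovea".

```python
# Illustrative sketch of a generic bi-branch ("bi-fovea") self-attention,
# NOT the authors' BFSA. All design choices here are assumptions.
import torch
import torch.nn as nn

class BiFoveaAttentionSketch(nn.Module):
    def __init__(self, dim, num_heads=8, pool_ratio=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv_fine = nn.Linear(dim, dim * 2)
        self.kv_coarse = nn.Linear(dim, dim * 2)
        # Spatial pooling shrinks the key/value grid, reducing attention
        # cost: a stand-in for a coarse "shallow fovea" pathway.
        self.pool = nn.AvgPool2d(pool_ratio, pool_ratio)
        self.proj = nn.Linear(dim, dim)

    def _attend(self, q, kv):
        # Standard scaled dot-product attention over multi-head splits.
        B, N, _ = q.shape
        k, v = kv.chunk(2, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v
        return out.transpose(1, 2).reshape(B, N, -1)

    def forward(self, x, h, w):
        # x: (B, N, C) token sequence laid out on an h*w grid.
        B, N, C = x.shape
        q = self.q(x)
        # Coarse branch: queries attend to pooled (downsampled) tokens.
        x_2d = x.transpose(1, 2).reshape(B, C, h, w)
        x_coarse = self.pool(x_2d).flatten(2).transpose(1, 2)
        coarse = self._attend(q, self.kv_coarse(x_coarse))
        # Fine branch: standard full-resolution self-attention.
        fine = self._attend(q, self.kv_fine(x))
        # Fusion by summation is an arbitrary choice for this sketch.
        return self.proj(coarse + fine)
```

Usage: `BiFoveaAttentionSketch(dim=64)(torch.randn(2, 16 * 16, 64), h=16, w=16)` returns a `(2, 256, 64)` tensor; `h` and `w` must be divisible by `pool_ratio`.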

arxiv attention cs.cv self-attention transformer vision
