Web: http://arxiv.org/abs/2201.10801

Jan. 27, 2022, 2:10 a.m. | Guangting Wang, Yucheng Zhao, Chuanxin Tang, Chong Luo, Wenjun Zeng

cs.CV updates on arXiv.org

The attention mechanism has been widely believed to be the key to the success of vision
transformers (ViTs), since it provides a flexible and powerful way to model
spatial relationships. However, is the attention mechanism truly an
indispensable part of ViT? Can it be replaced by some other alternative? To
demystify the role of the attention mechanism, we simplify it into an extremely
simple case: ZERO FLOPs and ZERO parameters. Concretely, we revisit the shift
operation. It does not contain any parameter or arithmetic …
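The abstract is truncated here, but the shift operation it refers to is easy to illustrate. Below is a minimal PyTorch sketch of a zero-parameter, zero-FLOP spatial shift: a small fraction of channels is moved by one pixel in each of four directions, while the remaining channels pass through unchanged. The function name `spatial_shift` and the shifted-channel ratio are illustrative assumptions, not details taken from the paper.

```python
import torch

def spatial_shift(x: torch.Tensor, shift_frac: float = 1 / 12) -> torch.Tensor:
    """Shift a fraction of channels by one pixel in four directions.

    x: feature map of shape (B, C, H, W). This is pure memory movement:
    no learnable parameters and no arithmetic (zero FLOPs).
    The `shift_frac` ratio is an assumed value for illustration.
    """
    B, C, H, W = x.shape
    g = int(C * shift_frac)  # channels per shifted group

    out = torch.zeros_like(x)
    out[:, 0 * g:1 * g, :, :-1] = x[:, 0 * g:1 * g, :, 1:]   # shift left
    out[:, 1 * g:2 * g, :, 1:]  = x[:, 1 * g:2 * g, :, :-1]  # shift right
    out[:, 2 * g:3 * g, :-1, :] = x[:, 2 * g:3 * g, 1:, :]   # shift up
    out[:, 3 * g:4 * g, 1:, :]  = x[:, 3 * g:4 * g, :-1, :]  # shift down
    out[:, 4 * g:, :, :] = x[:, 4 * g:, :, :]                # rest unchanged
    return out
```

Because the shifted groups borrow features from neighboring pixels, stacking such layers lets information propagate spatially without any attention computation, which is the role the paper probes.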

