July 18, 2022, 1:11 a.m. | Zhicai Wang, Yanbin Hao, Xingyu Gao, Hao Zhang, Shuo Wang, Tingting Mu, Xiangnan He

cs.CV updates on arXiv.org

Vision multi-layer perceptrons (MLPs) have shown promising performance in
computer vision tasks and have become a strong competitor to CNNs and vision
Transformers. They use token-mixing layers to capture cross-token interactions,
as opposed to the multi-head self-attention mechanism used by Transformers.
However, the heavily parameterized token-mixing layers inherently lack
mechanisms to capture local information and multi-granular non-local relations,
which restrains their discriminative power. To tackle this issue, we propose
a new positional spatial gating unit (PoSGU). It exploits the attention …
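The abstract is truncated before the PoSGU details, but the token-mixing idea it critiques can be illustrated with a generic gMLP-style spatial gating unit. The sketch below is an assumption-laden illustration, not the paper's PoSGU: half of the channels are modulated by a learned projection of the other half along the token axis, so information mixes across tokens. The names `spatial_gating_unit`, `token_mix_weight`, and `token_mix_bias` are hypothetical.

```python
import numpy as np

def spatial_gating_unit(x, token_mix_weight, token_mix_bias):
    """Generic spatial gating unit (gMLP-style sketch, not the paper's PoSGU).

    x: (num_tokens, channels) token embeddings.
    Channels are split in half; one half gates a token-mixing projection
    of the other half, so each output token mixes information from all tokens.
    """
    u, v = np.split(x, 2, axis=-1)             # split along the channel axis
    v = token_mix_weight @ v + token_mix_bias  # dense mixing across tokens
    return u * v                               # element-wise gating

rng = np.random.default_rng(0)
num_tokens, channels = 16, 8
x = rng.standard_normal((num_tokens, channels))
# The token-mixing matrix has num_tokens^2 parameters -- the "heavily
# parameterized" layer the abstract criticizes; it is tied to a fixed
# token count and has no built-in notion of locality.
W = rng.standard_normal((num_tokens, num_tokens)) * 0.02
b = np.ones((num_tokens, channels // 2))       # bias ~1 keeps the gate near identity at init
y = spatial_gating_unit(x, W, b)
print(y.shape)  # (16, 4)
```

Note how `W` is a free `num_tokens × num_tokens` matrix with no positional structure; replacing it with a relative-positional-encoding parameterization is, per the abstract, the direction PoSGU takes.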

arxiv cv encoding mlp positional encoding relations vision
