March 27, 2024, 4:46 a.m. | Deepak Sridhar, Yunsheng Li, Nuno Vasconcelos

cs.CV updates on arXiv.org

arXiv:2312.00412v2 Announce Type: replace
Abstract: Vision Transformers have received significant attention due to their impressive performance in many vision tasks. While the token mixer or attention block has been studied in great detail, the channel mixer or feature mixing block (FFN or MLP) has not been explored in depth, even though it accounts for the bulk of the parameters and computation in a model. In this work, we study whether sparse feature mixing can replace the dense connections and confirm this …
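The abstract's core idea is sparsifying the channel mixer (the FFN/MLP) rather than the attention block. The truncated abstract does not specify the sparsity pattern, so the snippet below is only a minimal illustrative sketch, not the paper's implementation: it assumes a block-diagonal structure, and the class name `BlockDiagonalMLP`, the `groups` parameter, and the use of grouped 1x1 convolutions are all illustrative choices.

```python
import torch
import torch.nn as nn


class BlockDiagonalMLP(nn.Module):
    """Channel mixer (FFN) with block-diagonal instead of dense weights.

    Hypothetical sketch: the embedding dimension is split into `groups`
    independent chunks, and features are mixed only within each chunk.
    A grouped 1x1 convolution is mathematically a block-diagonal linear
    layer, so parameters and FLOPs drop by roughly a factor of `groups`
    relative to a dense MLP.
    """

    def __init__(self, dim: int, expansion: int = 4, groups: int = 4):
        super().__init__()
        hidden = dim * expansion
        assert dim % groups == 0 and hidden % groups == 0
        self.fc1 = nn.Conv1d(dim, hidden, kernel_size=1, groups=groups)
        self.act = nn.GELU()
        self.fc2 = nn.Conv1d(hidden, dim, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -> put channels on dim 1 for Conv1d
        x = x.transpose(1, 2)
        x = self.fc2(self.act(self.fc1(x)))
        return x.transpose(1, 2)


if __name__ == "__main__":
    mixer = BlockDiagonalMLP(dim=64, expansion=4, groups=4)
    tokens = torch.randn(2, 196, 64)  # (batch, tokens, channels)
    print(mixer(tokens).shape)        # torch.Size([2, 196, 64])
```

With `groups=4`, each weight matrix holds about a quarter of the parameters of its dense counterpart, which is what makes sparse feature mixing attractive as a drop-in replacement for the dense FFN.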
