ConvFormer: Closing the Gap Between CNN and Vision Transformers. (arXiv:2209.07738v2 [cs.CV] UPDATED)
Sept. 23, 2022, 1:12 a.m. | Zimian Wei, Hengyue Pan, Xin Niu, Dongsheng Li
cs.LG updates on arXiv.org
Vision transformers have shown excellent performance on computer vision
tasks. However, their (local) self-attention mechanism is computationally
expensive. By comparison, CNNs are more efficient thanks to their built-in
inductive biases. Recent work shows that CNNs can compete with vision
transformers by adopting their architecture designs and training protocols.
Nevertheless, existing methods either ignore multi-level features or lack
dynamic properties, leading to sub-optimal performance. In this paper, we
propose a novel attention mechanism named MCA, which captures different
patterns …
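
The cost gap the abstract alludes to is the quadratic scaling of global self-attention in the number of tokens, versus the linear scaling of convolution. A minimal back-of-the-envelope sketch (illustrative only, not from the paper; all function names and the example sizes are ours):

def attention_flops(n: int, d: int) -> int:
    """Approximate FLOPs for single-head global self-attention over n
    tokens of dimension d: QK^T (n^2 * d) plus the weighted sum AV
    (n^2 * d), ignoring the linear projections and softmax."""
    return 2 * n * n * d

def conv_flops(n: int, d: int, k: int = 3) -> int:
    """Approximate FLOPs for a k x k convolution with d input and
    d output channels applied at n spatial positions."""
    return n * k * k * d * d

if __name__ == "__main__":
    # A 56x56 feature map (n = 3136 tokens) with d = 96 channels,
    # typical of an early stage in a hierarchical vision backbone.
    n, d = 56 * 56, 96
    print(f"self-attention: {attention_flops(n, d):,} FLOPs")  # grows as n^2
    print(f"3x3 convolution: {conv_flops(n, d):,} FLOPs")      # grows as n

At this resolution self-attention already costs several times more than a 3x3 convolution, and the gap widens quadratically as the token count grows, which is why attention is often restricted to local windows or lower-resolution stages.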