Sept. 19, 2022, 1:14 a.m. | Zimian Wei, Hengyue Pan, Xin Niu, Dongsheng Li

cs.CV updates on arXiv.org arxiv.org

Vision transformers have shown excellent performance in computer vision
tasks. However, the computational cost of their (local) self-attention
mechanism is high. By comparison, CNNs are more efficient thanks to their
built-in inductive biases. Recent works show that CNNs can compete with
vision transformers by adopting their architecture designs and training
protocols. Nevertheless, existing methods either ignore multi-level features
or lack dynamic properties, leading to sub-optimal performance. In this
paper, we propose a novel attention mechanism named MCA, which captures
different patterns …
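
The abstract is truncated before the details of MCA are given, so the following is only a minimal, illustrative sketch of a generic multi-branch attention block for a CNN, combining a global (squeeze-and-excitation style) channel branch with a local depthwise-convolution branch. The module name, layer choices, and fusion by addition are assumptions for illustration, not the design described in the paper.

```python
# Illustrative sketch only: a generic multi-branch attention block,
# NOT the MCA module defined in the paper (whose details are truncated above).
import torch
import torch.nn as nn


class MultiBranchAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Global branch: channel attention from globally pooled features.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        # Local branch: spatially varying attention from a depthwise conv.
        self.local_branch = nn.Conv2d(
            channels, channels, kernel_size=3, padding=1, groups=channels
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the two branches (broadcast add) and gate the input features.
        attn = self.sigmoid(self.global_branch(x) + self.local_branch(x))
        return x * attn


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    out = MultiBranchAttention(64)(x)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The gating-by-sigmoid pattern keeps the block residual-friendly: it rescales features rather than replacing them, so it can be dropped into an existing CNN stage without changing tensor shapes.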
