Pruning Self-attentions into Convolutional Layers in Single Path. (arXiv:2111.11802v3 [cs.CV] UPDATED)
Aug. 16, 2022, 1:13 a.m. | Haoyu He, Jing Liu, Zizheng Pan, Jianfei Cai, Jing Zhang, Dacheng Tao, Bohan Zhuang
cs.CV updates on arXiv.org
Vision Transformers (ViTs) have achieved impressive performance on various computer vision tasks. However, modelling global correlations with multi-head self-attention (MSA) layers leads to two widely recognized issues: massive computational resource consumption and a lack of intrinsic inductive bias for modelling local visual patterns. To solve both issues, we devise a simple yet effective method, named Single-Path Vision Transformer pruning (SPViT), to efficiently and automatically compress pre-trained ViTs into compact models with proper locality added. Specifically, we first propose …
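The listing truncates the abstract, but the mechanism the title describes, pruning MSA layers into convolutional layers through a single search path, can be sketched at a high level. Below is a minimal, illustrative PyTorch sketch of single-path operator selection between a pre-trained MSA layer and a convolutional alternative. The module name SinglePathBlock, the depth-wise convolution, and the softmax gating are assumptions for illustration, not the paper's actual implementation (notably, SPViT shares weights between the candidate operations rather than keeping two parallel branches as done here).

import torch
import torch.nn as nn

class SinglePathBlock(nn.Module):
    # Illustrative single-path block: a learnable gate chooses between a
    # pre-trained multi-head self-attention (MSA) layer and a depth-wise
    # convolution that adds locality. All names here are hypothetical.
    def __init__(self, dim, num_heads=8, kernel_size=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.conv = nn.Conv2d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)
        # One logit per candidate operation: [MSA, convolution].
        self.gate_logits = nn.Parameter(torch.zeros(2))

    def forward(self, x, h, w):
        # x: (batch, h * w tokens, dim)
        attn_out, _ = self.attn(x, x, x)
        # Reshape the token sequence into a 2D grid for the convolution.
        grid = x.transpose(1, 2).reshape(x.size(0), -1, h, w)
        conv_out = self.conv(grid).flatten(2).transpose(1, 2)
        # Soft selection during search; afterwards, only the argmax
        # operation is kept and the other is pruned away.
        g = torch.softmax(self.gate_logits, dim=0)
        return g[0] * attn_out + g[1] * conv_out

# Example: a 14 x 14 patch grid with embedding dimension 192.
block = SinglePathBlock(dim=192)
tokens = torch.randn(2, 14 * 14, 192)
out = block(tokens, h=14, w=14)  # shape: (2, 196, 192)

During the search, gate_logits would be trained jointly with the network weights (typically alongside an efficiency penalty in single-path NAS methods); blocks whose gate favours the convolution are converted, compressing the model while adding the locality the abstract mentions.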